problem_id (stringlengths 18-21) | source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 13-54) | prompt (stringlengths 1.28k-64.2k) | golden_diff (stringlengths 166-811) | verification_info (stringlengths 604-118k) |
---|---|---|---|---|---|---|
gh_patches_debug_1000 | rasdani/github-patches | git_diff | getmoto__moto-1859 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cognito-idp UserPool id format does not match AWS format
The cognito-idp mock does not produce Cognito UserPool Ids in the expected format - the ids produced are plain uuids. The format for the Id field of a UserPool is documented as
> Id
>
> The ID of the user pool.
>
> Type: String
>
> Length Constraints: Minimum length of 1. Maximum length of 55.
>
> Pattern: [\w-]+_[0-9a-zA-Z]+
>
> Required: No
https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_UserPoolType.html
So a uuid isn't a valid representation of an Id
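For a quick illustration (not part of the original report), both ids can be checked against the documented pattern using only the standard library; a bare uuid contains no underscore, so it can never match:
```
import re
import uuid

# Pattern taken verbatim from the UserPoolType documentation quoted above.
POOL_ID_PATTERN = re.compile(r"^[\w-]+_[0-9a-zA-Z]+$")

print(bool(POOL_ID_PATTERN.match(str(uuid.uuid4()))))      # False: no "<region>_" prefix
print(bool(POOL_ID_PATTERN.match("eu-west-1_qtdBQSSL4")))  # True: AWS-style id
```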
This can be reproduced by
```
import moto
import boto3
create_pool_kwargs = {
"PoolName": "test_pool",
"Schema": [
{
"Name": "email",
"AttributeDataType": "String",
"Required": True,
"Mutable": True,
},
{
"Name": "tenant_id",
"AttributeDataType": "String",
"Mutable": False,
},
],
"AdminCreateUserConfig": {
"AllowAdminCreateUserOnly": True,
"UnusedAccountValidityDays": 1,
},
}
def set_up_tear_down_user_pool():
cognito_idp = boto3.client('cognito-idp')
pool = cognito_idp.create_user_pool(**create_pool_kwargs)
pool_id = pool['UserPool']['Id']
print(pool_id)
cognito_idp.delete_user_pool(UserPoolId=pool_id)
# with moto
with moto.mock_cognitoidp() as mock_cognito:
set_up_tear_down_user_pool()
# without
set_up_tear_down_user_pool()
```
Produces:
```
eb9ef17e-acea-4a95-8440-7ee79dd1f172
eu-west-1_qtdBQSSL4
```
The general expectation is that the pool_id is in the format "{region}_{id}". I usually use the region part when attempting to determine, from a pool id, the region that pool is available in.
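As an illustrative sketch only (it mirrors the shape of the fix shown in the diff further down, not code that exists in moto 1.3.4), an id of that shape can be built and split back apart with nothing but the standard library:
```
import uuid

region = "eu-west-1"
pool_id = "{}_{}".format(region, uuid.uuid4().hex)  # e.g. "eu-west-1_9f2c84..."
print(pool_id)
print(pool_id.split("_", 1)[0])  # recovers the region part: "eu-west-1"
```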
I'm using the package installed via pip and python mocks.
```
moto==1.3.4
botocore==1.10.52
boto3==1.7.3
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `moto/cognitoidp/models.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import datetime
4 import json
5 import os
6 import time
7 import uuid
8
9 import boto.cognito.identity
10 from jose import jws
11
12 from moto.compat import OrderedDict
13 from moto.core import BaseBackend, BaseModel
14 from .exceptions import NotAuthorizedError, ResourceNotFoundError, UserNotFoundError
15
16
17 UserStatus = {
18 "FORCE_CHANGE_PASSWORD": "FORCE_CHANGE_PASSWORD",
19 "CONFIRMED": "CONFIRMED",
20 }
21
22
23 class CognitoIdpUserPool(BaseModel):
24
25 def __init__(self, region, name, extended_config):
26 self.region = region
27 self.id = str(uuid.uuid4())
28 self.name = name
29 self.status = None
30 self.extended_config = extended_config or {}
31 self.creation_date = datetime.datetime.utcnow()
32 self.last_modified_date = datetime.datetime.utcnow()
33
34 self.clients = OrderedDict()
35 self.identity_providers = OrderedDict()
36 self.users = OrderedDict()
37 self.refresh_tokens = {}
38 self.access_tokens = {}
39 self.id_tokens = {}
40
41 with open(os.path.join(os.path.dirname(__file__), "resources/jwks-private.json")) as f:
42 self.json_web_key = json.loads(f.read())
43
44 def _base_json(self):
45 return {
46 "Id": self.id,
47 "Name": self.name,
48 "Status": self.status,
49 "CreationDate": time.mktime(self.creation_date.timetuple()),
50 "LastModifiedDate": time.mktime(self.last_modified_date.timetuple()),
51 }
52
53 def to_json(self, extended=False):
54 user_pool_json = self._base_json()
55 if extended:
56 user_pool_json.update(self.extended_config)
57 else:
58 user_pool_json["LambdaConfig"] = self.extended_config.get("LambdaConfig") or {}
59
60 return user_pool_json
61
62 def create_jwt(self, client_id, username, expires_in=60 * 60, extra_data={}):
63 now = int(time.time())
64 payload = {
65 "iss": "https://cognito-idp.{}.amazonaws.com/{}".format(self.region, self.id),
66 "sub": self.users[username].id,
67 "aud": client_id,
68 "token_use": "id",
69 "auth_time": now,
70 "exp": now + expires_in,
71 }
72 payload.update(extra_data)
73
74 return jws.sign(payload, self.json_web_key, algorithm='RS256'), expires_in
75
76 def create_id_token(self, client_id, username):
77 id_token, expires_in = self.create_jwt(client_id, username)
78 self.id_tokens[id_token] = (client_id, username)
79 return id_token, expires_in
80
81 def create_refresh_token(self, client_id, username):
82 refresh_token = str(uuid.uuid4())
83 self.refresh_tokens[refresh_token] = (client_id, username)
84 return refresh_token
85
86 def create_access_token(self, client_id, username):
87 access_token, expires_in = self.create_jwt(client_id, username)
88 self.access_tokens[access_token] = (client_id, username)
89 return access_token, expires_in
90
91 def create_tokens_from_refresh_token(self, refresh_token):
92 client_id, username = self.refresh_tokens.get(refresh_token)
93 if not username:
94 raise NotAuthorizedError(refresh_token)
95
96 access_token, expires_in = self.create_access_token(client_id, username)
97 id_token, _ = self.create_id_token(client_id, username)
98 return access_token, id_token, expires_in
99
100
101 class CognitoIdpUserPoolDomain(BaseModel):
102
103 def __init__(self, user_pool_id, domain):
104 self.user_pool_id = user_pool_id
105 self.domain = domain
106
107 def to_json(self):
108 return {
109 "UserPoolId": self.user_pool_id,
110 "AWSAccountId": str(uuid.uuid4()),
111 "CloudFrontDistribution": None,
112 "Domain": self.domain,
113 "S3Bucket": None,
114 "Status": "ACTIVE",
115 "Version": None,
116 }
117
118
119 class CognitoIdpUserPoolClient(BaseModel):
120
121 def __init__(self, user_pool_id, extended_config):
122 self.user_pool_id = user_pool_id
123 self.id = str(uuid.uuid4())
124 self.secret = str(uuid.uuid4())
125 self.extended_config = extended_config or {}
126
127 def _base_json(self):
128 return {
129 "ClientId": self.id,
130 "ClientName": self.extended_config.get("ClientName"),
131 "UserPoolId": self.user_pool_id,
132 }
133
134 def to_json(self, extended=False):
135 user_pool_client_json = self._base_json()
136 if extended:
137 user_pool_client_json.update(self.extended_config)
138
139 return user_pool_client_json
140
141
142 class CognitoIdpIdentityProvider(BaseModel):
143
144 def __init__(self, name, extended_config):
145 self.name = name
146 self.extended_config = extended_config or {}
147 self.creation_date = datetime.datetime.utcnow()
148 self.last_modified_date = datetime.datetime.utcnow()
149
150 def _base_json(self):
151 return {
152 "ProviderName": self.name,
153 "ProviderType": self.extended_config.get("ProviderType"),
154 "CreationDate": time.mktime(self.creation_date.timetuple()),
155 "LastModifiedDate": time.mktime(self.last_modified_date.timetuple()),
156 }
157
158 def to_json(self, extended=False):
159 identity_provider_json = self._base_json()
160 if extended:
161 identity_provider_json.update(self.extended_config)
162
163 return identity_provider_json
164
165
166 class CognitoIdpUser(BaseModel):
167
168 def __init__(self, user_pool_id, username, password, status, attributes):
169 self.id = str(uuid.uuid4())
170 self.user_pool_id = user_pool_id
171 self.username = username
172 self.password = password
173 self.status = status
174 self.enabled = True
175 self.attributes = attributes
176 self.create_date = datetime.datetime.utcnow()
177 self.last_modified_date = datetime.datetime.utcnow()
178
179 def _base_json(self):
180 return {
181 "UserPoolId": self.user_pool_id,
182 "Username": self.username,
183 "UserStatus": self.status,
184 "UserCreateDate": time.mktime(self.create_date.timetuple()),
185 "UserLastModifiedDate": time.mktime(self.last_modified_date.timetuple()),
186 }
187
188 # list_users brings back "Attributes" while admin_get_user brings back "UserAttributes".
189 def to_json(self, extended=False, attributes_key="Attributes"):
190 user_json = self._base_json()
191 if extended:
192 user_json.update(
193 {
194 "Enabled": self.enabled,
195 attributes_key: self.attributes,
196 "MFAOptions": []
197 }
198 )
199
200 return user_json
201
202
203 class CognitoIdpBackend(BaseBackend):
204
205 def __init__(self, region):
206 super(CognitoIdpBackend, self).__init__()
207 self.region = region
208 self.user_pools = OrderedDict()
209 self.user_pool_domains = OrderedDict()
210 self.sessions = {}
211
212 def reset(self):
213 region = self.region
214 self.__dict__ = {}
215 self.__init__(region)
216
217 # User pool
218 def create_user_pool(self, name, extended_config):
219 user_pool = CognitoIdpUserPool(self.region, name, extended_config)
220 self.user_pools[user_pool.id] = user_pool
221 return user_pool
222
223 def list_user_pools(self):
224 return self.user_pools.values()
225
226 def describe_user_pool(self, user_pool_id):
227 user_pool = self.user_pools.get(user_pool_id)
228 if not user_pool:
229 raise ResourceNotFoundError(user_pool_id)
230
231 return user_pool
232
233 def delete_user_pool(self, user_pool_id):
234 if user_pool_id not in self.user_pools:
235 raise ResourceNotFoundError(user_pool_id)
236
237 del self.user_pools[user_pool_id]
238
239 # User pool domain
240 def create_user_pool_domain(self, user_pool_id, domain):
241 if user_pool_id not in self.user_pools:
242 raise ResourceNotFoundError(user_pool_id)
243
244 user_pool_domain = CognitoIdpUserPoolDomain(user_pool_id, domain)
245 self.user_pool_domains[domain] = user_pool_domain
246 return user_pool_domain
247
248 def describe_user_pool_domain(self, domain):
249 if domain not in self.user_pool_domains:
250 return None
251
252 return self.user_pool_domains[domain]
253
254 def delete_user_pool_domain(self, domain):
255 if domain not in self.user_pool_domains:
256 raise ResourceNotFoundError(domain)
257
258 del self.user_pool_domains[domain]
259
260 # User pool client
261 def create_user_pool_client(self, user_pool_id, extended_config):
262 user_pool = self.user_pools.get(user_pool_id)
263 if not user_pool:
264 raise ResourceNotFoundError(user_pool_id)
265
266 user_pool_client = CognitoIdpUserPoolClient(user_pool_id, extended_config)
267 user_pool.clients[user_pool_client.id] = user_pool_client
268 return user_pool_client
269
270 def list_user_pool_clients(self, user_pool_id):
271 user_pool = self.user_pools.get(user_pool_id)
272 if not user_pool:
273 raise ResourceNotFoundError(user_pool_id)
274
275 return user_pool.clients.values()
276
277 def describe_user_pool_client(self, user_pool_id, client_id):
278 user_pool = self.user_pools.get(user_pool_id)
279 if not user_pool:
280 raise ResourceNotFoundError(user_pool_id)
281
282 client = user_pool.clients.get(client_id)
283 if not client:
284 raise ResourceNotFoundError(client_id)
285
286 return client
287
288 def update_user_pool_client(self, user_pool_id, client_id, extended_config):
289 user_pool = self.user_pools.get(user_pool_id)
290 if not user_pool:
291 raise ResourceNotFoundError(user_pool_id)
292
293 client = user_pool.clients.get(client_id)
294 if not client:
295 raise ResourceNotFoundError(client_id)
296
297 client.extended_config.update(extended_config)
298 return client
299
300 def delete_user_pool_client(self, user_pool_id, client_id):
301 user_pool = self.user_pools.get(user_pool_id)
302 if not user_pool:
303 raise ResourceNotFoundError(user_pool_id)
304
305 if client_id not in user_pool.clients:
306 raise ResourceNotFoundError(client_id)
307
308 del user_pool.clients[client_id]
309
310 # Identity provider
311 def create_identity_provider(self, user_pool_id, name, extended_config):
312 user_pool = self.user_pools.get(user_pool_id)
313 if not user_pool:
314 raise ResourceNotFoundError(user_pool_id)
315
316 identity_provider = CognitoIdpIdentityProvider(name, extended_config)
317 user_pool.identity_providers[name] = identity_provider
318 return identity_provider
319
320 def list_identity_providers(self, user_pool_id):
321 user_pool = self.user_pools.get(user_pool_id)
322 if not user_pool:
323 raise ResourceNotFoundError(user_pool_id)
324
325 return user_pool.identity_providers.values()
326
327 def describe_identity_provider(self, user_pool_id, name):
328 user_pool = self.user_pools.get(user_pool_id)
329 if not user_pool:
330 raise ResourceNotFoundError(user_pool_id)
331
332 identity_provider = user_pool.identity_providers.get(name)
333 if not identity_provider:
334 raise ResourceNotFoundError(name)
335
336 return identity_provider
337
338 def delete_identity_provider(self, user_pool_id, name):
339 user_pool = self.user_pools.get(user_pool_id)
340 if not user_pool:
341 raise ResourceNotFoundError(user_pool_id)
342
343 if name not in user_pool.identity_providers:
344 raise ResourceNotFoundError(name)
345
346 del user_pool.identity_providers[name]
347
348 # User
349 def admin_create_user(self, user_pool_id, username, temporary_password, attributes):
350 user_pool = self.user_pools.get(user_pool_id)
351 if not user_pool:
352 raise ResourceNotFoundError(user_pool_id)
353
354 user = CognitoIdpUser(user_pool_id, username, temporary_password, UserStatus["FORCE_CHANGE_PASSWORD"], attributes)
355 user_pool.users[user.username] = user
356 return user
357
358 def admin_get_user(self, user_pool_id, username):
359 user_pool = self.user_pools.get(user_pool_id)
360 if not user_pool:
361 raise ResourceNotFoundError(user_pool_id)
362
363 if username not in user_pool.users:
364 raise ResourceNotFoundError(username)
365
366 return user_pool.users[username]
367
368 def list_users(self, user_pool_id):
369 user_pool = self.user_pools.get(user_pool_id)
370 if not user_pool:
371 raise ResourceNotFoundError(user_pool_id)
372
373 return user_pool.users.values()
374
375 def admin_delete_user(self, user_pool_id, username):
376 user_pool = self.user_pools.get(user_pool_id)
377 if not user_pool:
378 raise ResourceNotFoundError(user_pool_id)
379
380 if username not in user_pool.users:
381 raise ResourceNotFoundError(username)
382
383 del user_pool.users[username]
384
385 def _log_user_in(self, user_pool, client, username):
386 refresh_token = user_pool.create_refresh_token(client.id, username)
387 access_token, id_token, expires_in = user_pool.create_tokens_from_refresh_token(refresh_token)
388
389 return {
390 "AuthenticationResult": {
391 "IdToken": id_token,
392 "AccessToken": access_token,
393 "RefreshToken": refresh_token,
394 "ExpiresIn": expires_in,
395 }
396 }
397
398 def admin_initiate_auth(self, user_pool_id, client_id, auth_flow, auth_parameters):
399 user_pool = self.user_pools.get(user_pool_id)
400 if not user_pool:
401 raise ResourceNotFoundError(user_pool_id)
402
403 client = user_pool.clients.get(client_id)
404 if not client:
405 raise ResourceNotFoundError(client_id)
406
407 if auth_flow == "ADMIN_NO_SRP_AUTH":
408 username = auth_parameters.get("USERNAME")
409 password = auth_parameters.get("PASSWORD")
410 user = user_pool.users.get(username)
411 if not user:
412 raise UserNotFoundError(username)
413
414 if user.password != password:
415 raise NotAuthorizedError(username)
416
417 if user.status == UserStatus["FORCE_CHANGE_PASSWORD"]:
418 session = str(uuid.uuid4())
419 self.sessions[session] = user_pool
420
421 return {
422 "ChallengeName": "NEW_PASSWORD_REQUIRED",
423 "ChallengeParameters": {},
424 "Session": session,
425 }
426
427 return self._log_user_in(user_pool, client, username)
428 elif auth_flow == "REFRESH_TOKEN":
429 refresh_token = auth_parameters.get("REFRESH_TOKEN")
430 id_token, access_token, expires_in = user_pool.create_tokens_from_refresh_token(refresh_token)
431
432 return {
433 "AuthenticationResult": {
434 "IdToken": id_token,
435 "AccessToken": access_token,
436 "ExpiresIn": expires_in,
437 }
438 }
439 else:
440 return {}
441
442 def respond_to_auth_challenge(self, session, client_id, challenge_name, challenge_responses):
443 user_pool = self.sessions.get(session)
444 if not user_pool:
445 raise ResourceNotFoundError(session)
446
447 client = user_pool.clients.get(client_id)
448 if not client:
449 raise ResourceNotFoundError(client_id)
450
451 if challenge_name == "NEW_PASSWORD_REQUIRED":
452 username = challenge_responses.get("USERNAME")
453 new_password = challenge_responses.get("NEW_PASSWORD")
454 user = user_pool.users.get(username)
455 if not user:
456 raise UserNotFoundError(username)
457
458 user.password = new_password
459 user.status = UserStatus["CONFIRMED"]
460 del self.sessions[session]
461
462 return self._log_user_in(user_pool, client, username)
463 else:
464 return {}
465
466 def confirm_forgot_password(self, client_id, username, password):
467 for user_pool in self.user_pools.values():
468 if client_id in user_pool.clients and username in user_pool.users:
469 user_pool.users[username].password = password
470 break
471 else:
472 raise ResourceNotFoundError(client_id)
473
474 def change_password(self, access_token, previous_password, proposed_password):
475 for user_pool in self.user_pools.values():
476 if access_token in user_pool.access_tokens:
477 _, username = user_pool.access_tokens[access_token]
478 user = user_pool.users.get(username)
479 if not user:
480 raise UserNotFoundError(username)
481
482 if user.password != previous_password:
483 raise NotAuthorizedError(username)
484
485 user.password = proposed_password
486 if user.status == UserStatus["FORCE_CHANGE_PASSWORD"]:
487 user.status = UserStatus["CONFIRMED"]
488
489 break
490 else:
491 raise NotAuthorizedError(access_token)
492
493
494 cognitoidp_backends = {}
495 for region in boto.cognito.identity.regions():
496 cognitoidp_backends[region.name] = CognitoIdpBackend(region.name)
497
498
499 # Hack to help moto-server process requests on localhost, where the region isn't
500 # specified in the host header. Some endpoints (change password, confirm forgot
501 # password) have no authorization header from which to extract the region.
502 def find_region_by_value(key, value):
503 for region in cognitoidp_backends:
504 backend = cognitoidp_backends[region]
505 for user_pool in backend.user_pools.values():
506 if key == "client_id" and value in user_pool.clients:
507 return region
508
509 if key == "access_token" and value in user_pool.access_tokens:
510 return region
511
512 return cognitoidp_backends.keys()[0]
513
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/moto/cognitoidp/models.py b/moto/cognitoidp/models.py
--- a/moto/cognitoidp/models.py
+++ b/moto/cognitoidp/models.py
@@ -24,7 +24,7 @@
def __init__(self, region, name, extended_config):
self.region = region
- self.id = str(uuid.uuid4())
+ self.id = "{}_{}".format(self.region, str(uuid.uuid4().hex))
self.name = name
self.status = None
self.extended_config = extended_config or {}
| {"golden_diff": "diff --git a/moto/cognitoidp/models.py b/moto/cognitoidp/models.py\n--- a/moto/cognitoidp/models.py\n+++ b/moto/cognitoidp/models.py\n@@ -24,7 +24,7 @@\n \n def __init__(self, region, name, extended_config):\n self.region = region\n- self.id = str(uuid.uuid4())\n+ self.id = \"{}_{}\".format(self.region, str(uuid.uuid4().hex))\n self.name = name\n self.status = None\n self.extended_config = extended_config or {}\n", "issue": "cognito-idp UserPool id format does not match AWS format\nThe format for a Cognito UserPool Id produced by the the cognito-idp mock does not produce ids in the expected format for a Cognito UserPool - The ids produced are uuids. The format for the Id field of a UserPool is documented as \r\n\r\n> Id\r\n> \r\n> The ID of the user pool.\r\n> \r\n> Type: String\r\n> \r\n> Length Constraints: Minimum length of 1. Maximum length of 55.\r\n> \r\n> Pattern: [\\w-]+_[0-9a-zA-Z]+\r\n> \r\n> Required: No\r\n\r\n\r\nhttps://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_UserPoolType.html\r\n\r\nSo a uuid isn't a valid representation of an Id\r\n\r\nThis can be reproduced by \r\n\r\n```\r\nimport moto\r\nimport boto3\r\n\r\ncreate_pool_kwargs = {\r\n \"PoolName\": \"test_pool\",\r\n \"Schema\": [\r\n {\r\n \"Name\": \"email\",\r\n \"AttributeDataType\": \"String\",\r\n \"Required\": True,\r\n \"Mutable\": True,\r\n },\r\n {\r\n \"Name\": \"tenant_id\",\r\n \"AttributeDataType\": \"String\",\r\n \"Mutable\": False,\r\n },\r\n ],\r\n \"AdminCreateUserConfig\": {\r\n \"AllowAdminCreateUserOnly\": True,\r\n \"UnusedAccountValidityDays\": 1,\r\n },\r\n }\r\n\r\n\r\ndef set_up_tear_down_user_pool():\r\n cognito_idp = boto3.client('cognito-idp')\r\n pool = cognito_idp.create_user_pool(**create_pool_kwargs)\r\n pool_id = pool['UserPool']['Id']\r\n print(pool_id)\r\n cognito_idp.delete_user_pool(UserPoolId=pool_id)\r\n\r\n\r\n# with moto\r\nwith moto.mock_cognitoidp() as mock_cognito:\r\n set_up_tear_down_user_pool()\r\n\r\n# without\r\nset_up_tear_down_user_pool()\r\n```\r\n\r\nProduces:\r\n\r\n```\r\neb9ef17e-acea-4a95-8440-7ee79dd1f172\r\neu-west-1_qtdBQSSL4\r\n```\r\n\r\nThe general expectation is that the pool_id is in the format \"{region}_{id}\". I usually use the region part when attempting to determine, from a pool id, the region that pool is available in. \r\n\r\nI'm using the package installed via pip and python mocks.\r\n\r\n```\r\nmoto==1.3.4\r\nbotocore==1.10.52\r\nboto3==1.7.3\r\n```\ncognito-idp UserPool id format does not match AWS format\nThe format for a Cognito UserPool Id produced by the the cognito-idp mock does not produce ids in the expected format for a Cognito UserPool - The ids produced are uuids. The format for the Id field of a UserPool is documented as \r\n\r\n> Id\r\n> \r\n> The ID of the user pool.\r\n> \r\n> Type: String\r\n> \r\n> Length Constraints: Minimum length of 1. 
Maximum length of 55.\r\n> \r\n> Pattern: [\\w-]+_[0-9a-zA-Z]+\r\n> \r\n> Required: No\r\n\r\n\r\nhttps://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_UserPoolType.html\r\n\r\nSo a uuid isn't a valid representation of an Id\r\n\r\nThis can be reproduced by \r\n\r\n```\r\nimport moto\r\nimport boto3\r\n\r\ncreate_pool_kwargs = {\r\n \"PoolName\": \"test_pool\",\r\n \"Schema\": [\r\n {\r\n \"Name\": \"email\",\r\n \"AttributeDataType\": \"String\",\r\n \"Required\": True,\r\n \"Mutable\": True,\r\n },\r\n {\r\n \"Name\": \"tenant_id\",\r\n \"AttributeDataType\": \"String\",\r\n \"Mutable\": False,\r\n },\r\n ],\r\n \"AdminCreateUserConfig\": {\r\n \"AllowAdminCreateUserOnly\": True,\r\n \"UnusedAccountValidityDays\": 1,\r\n },\r\n }\r\n\r\n\r\ndef set_up_tear_down_user_pool():\r\n cognito_idp = boto3.client('cognito-idp')\r\n pool = cognito_idp.create_user_pool(**create_pool_kwargs)\r\n pool_id = pool['UserPool']['Id']\r\n print(pool_id)\r\n cognito_idp.delete_user_pool(UserPoolId=pool_id)\r\n\r\n\r\n# with moto\r\nwith moto.mock_cognitoidp() as mock_cognito:\r\n set_up_tear_down_user_pool()\r\n\r\n# without\r\nset_up_tear_down_user_pool()\r\n```\r\n\r\nProduces:\r\n\r\n```\r\neb9ef17e-acea-4a95-8440-7ee79dd1f172\r\neu-west-1_qtdBQSSL4\r\n```\r\n\r\nThe general expectation is that the pool_id is in the format \"{region}_{id}\". I usually use the region part when attempting to determine, from a pool id, the region that pool is available in. \r\n\r\nI'm using the package installed via pip and python mocks.\r\n\r\n```\r\nmoto==1.3.4\r\nbotocore==1.10.52\r\nboto3==1.7.3\r\n```\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport datetime\nimport json\nimport os\nimport time\nimport uuid\n\nimport boto.cognito.identity\nfrom jose import jws\n\nfrom moto.compat import OrderedDict\nfrom moto.core import BaseBackend, BaseModel\nfrom .exceptions import NotAuthorizedError, ResourceNotFoundError, UserNotFoundError\n\n\nUserStatus = {\n \"FORCE_CHANGE_PASSWORD\": \"FORCE_CHANGE_PASSWORD\",\n \"CONFIRMED\": \"CONFIRMED\",\n}\n\n\nclass CognitoIdpUserPool(BaseModel):\n\n def __init__(self, region, name, extended_config):\n self.region = region\n self.id = str(uuid.uuid4())\n self.name = name\n self.status = None\n self.extended_config = extended_config or {}\n self.creation_date = datetime.datetime.utcnow()\n self.last_modified_date = datetime.datetime.utcnow()\n\n self.clients = OrderedDict()\n self.identity_providers = OrderedDict()\n self.users = OrderedDict()\n self.refresh_tokens = {}\n self.access_tokens = {}\n self.id_tokens = {}\n\n with open(os.path.join(os.path.dirname(__file__), \"resources/jwks-private.json\")) as f:\n self.json_web_key = json.loads(f.read())\n\n def _base_json(self):\n return {\n \"Id\": self.id,\n \"Name\": self.name,\n \"Status\": self.status,\n \"CreationDate\": time.mktime(self.creation_date.timetuple()),\n \"LastModifiedDate\": time.mktime(self.last_modified_date.timetuple()),\n }\n\n def to_json(self, extended=False):\n user_pool_json = self._base_json()\n if extended:\n user_pool_json.update(self.extended_config)\n else:\n user_pool_json[\"LambdaConfig\"] = self.extended_config.get(\"LambdaConfig\") or {}\n\n return user_pool_json\n\n def create_jwt(self, client_id, username, expires_in=60 * 60, extra_data={}):\n now = int(time.time())\n payload = {\n \"iss\": \"https://cognito-idp.{}.amazonaws.com/{}\".format(self.region, self.id),\n \"sub\": self.users[username].id,\n \"aud\": client_id,\n \"token_use\": 
\"id\",\n \"auth_time\": now,\n \"exp\": now + expires_in,\n }\n payload.update(extra_data)\n\n return jws.sign(payload, self.json_web_key, algorithm='RS256'), expires_in\n\n def create_id_token(self, client_id, username):\n id_token, expires_in = self.create_jwt(client_id, username)\n self.id_tokens[id_token] = (client_id, username)\n return id_token, expires_in\n\n def create_refresh_token(self, client_id, username):\n refresh_token = str(uuid.uuid4())\n self.refresh_tokens[refresh_token] = (client_id, username)\n return refresh_token\n\n def create_access_token(self, client_id, username):\n access_token, expires_in = self.create_jwt(client_id, username)\n self.access_tokens[access_token] = (client_id, username)\n return access_token, expires_in\n\n def create_tokens_from_refresh_token(self, refresh_token):\n client_id, username = self.refresh_tokens.get(refresh_token)\n if not username:\n raise NotAuthorizedError(refresh_token)\n\n access_token, expires_in = self.create_access_token(client_id, username)\n id_token, _ = self.create_id_token(client_id, username)\n return access_token, id_token, expires_in\n\n\nclass CognitoIdpUserPoolDomain(BaseModel):\n\n def __init__(self, user_pool_id, domain):\n self.user_pool_id = user_pool_id\n self.domain = domain\n\n def to_json(self):\n return {\n \"UserPoolId\": self.user_pool_id,\n \"AWSAccountId\": str(uuid.uuid4()),\n \"CloudFrontDistribution\": None,\n \"Domain\": self.domain,\n \"S3Bucket\": None,\n \"Status\": \"ACTIVE\",\n \"Version\": None,\n }\n\n\nclass CognitoIdpUserPoolClient(BaseModel):\n\n def __init__(self, user_pool_id, extended_config):\n self.user_pool_id = user_pool_id\n self.id = str(uuid.uuid4())\n self.secret = str(uuid.uuid4())\n self.extended_config = extended_config or {}\n\n def _base_json(self):\n return {\n \"ClientId\": self.id,\n \"ClientName\": self.extended_config.get(\"ClientName\"),\n \"UserPoolId\": self.user_pool_id,\n }\n\n def to_json(self, extended=False):\n user_pool_client_json = self._base_json()\n if extended:\n user_pool_client_json.update(self.extended_config)\n\n return user_pool_client_json\n\n\nclass CognitoIdpIdentityProvider(BaseModel):\n\n def __init__(self, name, extended_config):\n self.name = name\n self.extended_config = extended_config or {}\n self.creation_date = datetime.datetime.utcnow()\n self.last_modified_date = datetime.datetime.utcnow()\n\n def _base_json(self):\n return {\n \"ProviderName\": self.name,\n \"ProviderType\": self.extended_config.get(\"ProviderType\"),\n \"CreationDate\": time.mktime(self.creation_date.timetuple()),\n \"LastModifiedDate\": time.mktime(self.last_modified_date.timetuple()),\n }\n\n def to_json(self, extended=False):\n identity_provider_json = self._base_json()\n if extended:\n identity_provider_json.update(self.extended_config)\n\n return identity_provider_json\n\n\nclass CognitoIdpUser(BaseModel):\n\n def __init__(self, user_pool_id, username, password, status, attributes):\n self.id = str(uuid.uuid4())\n self.user_pool_id = user_pool_id\n self.username = username\n self.password = password\n self.status = status\n self.enabled = True\n self.attributes = attributes\n self.create_date = datetime.datetime.utcnow()\n self.last_modified_date = datetime.datetime.utcnow()\n\n def _base_json(self):\n return {\n \"UserPoolId\": self.user_pool_id,\n \"Username\": self.username,\n \"UserStatus\": self.status,\n \"UserCreateDate\": time.mktime(self.create_date.timetuple()),\n \"UserLastModifiedDate\": time.mktime(self.last_modified_date.timetuple()),\n }\n\n # 
list_users brings back \"Attributes\" while admin_get_user brings back \"UserAttributes\".\n def to_json(self, extended=False, attributes_key=\"Attributes\"):\n user_json = self._base_json()\n if extended:\n user_json.update(\n {\n \"Enabled\": self.enabled,\n attributes_key: self.attributes,\n \"MFAOptions\": []\n }\n )\n\n return user_json\n\n\nclass CognitoIdpBackend(BaseBackend):\n\n def __init__(self, region):\n super(CognitoIdpBackend, self).__init__()\n self.region = region\n self.user_pools = OrderedDict()\n self.user_pool_domains = OrderedDict()\n self.sessions = {}\n\n def reset(self):\n region = self.region\n self.__dict__ = {}\n self.__init__(region)\n\n # User pool\n def create_user_pool(self, name, extended_config):\n user_pool = CognitoIdpUserPool(self.region, name, extended_config)\n self.user_pools[user_pool.id] = user_pool\n return user_pool\n\n def list_user_pools(self):\n return self.user_pools.values()\n\n def describe_user_pool(self, user_pool_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n return user_pool\n\n def delete_user_pool(self, user_pool_id):\n if user_pool_id not in self.user_pools:\n raise ResourceNotFoundError(user_pool_id)\n\n del self.user_pools[user_pool_id]\n\n # User pool domain\n def create_user_pool_domain(self, user_pool_id, domain):\n if user_pool_id not in self.user_pools:\n raise ResourceNotFoundError(user_pool_id)\n\n user_pool_domain = CognitoIdpUserPoolDomain(user_pool_id, domain)\n self.user_pool_domains[domain] = user_pool_domain\n return user_pool_domain\n\n def describe_user_pool_domain(self, domain):\n if domain not in self.user_pool_domains:\n return None\n\n return self.user_pool_domains[domain]\n\n def delete_user_pool_domain(self, domain):\n if domain not in self.user_pool_domains:\n raise ResourceNotFoundError(domain)\n\n del self.user_pool_domains[domain]\n\n # User pool client\n def create_user_pool_client(self, user_pool_id, extended_config):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n user_pool_client = CognitoIdpUserPoolClient(user_pool_id, extended_config)\n user_pool.clients[user_pool_client.id] = user_pool_client\n return user_pool_client\n\n def list_user_pool_clients(self, user_pool_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n return user_pool.clients.values()\n\n def describe_user_pool_client(self, user_pool_id, client_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n client = user_pool.clients.get(client_id)\n if not client:\n raise ResourceNotFoundError(client_id)\n\n return client\n\n def update_user_pool_client(self, user_pool_id, client_id, extended_config):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n client = user_pool.clients.get(client_id)\n if not client:\n raise ResourceNotFoundError(client_id)\n\n client.extended_config.update(extended_config)\n return client\n\n def delete_user_pool_client(self, user_pool_id, client_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n if client_id not in user_pool.clients:\n raise ResourceNotFoundError(client_id)\n\n del user_pool.clients[client_id]\n\n # Identity provider\n def create_identity_provider(self, user_pool_id, name, extended_config):\n 
user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n identity_provider = CognitoIdpIdentityProvider(name, extended_config)\n user_pool.identity_providers[name] = identity_provider\n return identity_provider\n\n def list_identity_providers(self, user_pool_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n return user_pool.identity_providers.values()\n\n def describe_identity_provider(self, user_pool_id, name):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n identity_provider = user_pool.identity_providers.get(name)\n if not identity_provider:\n raise ResourceNotFoundError(name)\n\n return identity_provider\n\n def delete_identity_provider(self, user_pool_id, name):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n if name not in user_pool.identity_providers:\n raise ResourceNotFoundError(name)\n\n del user_pool.identity_providers[name]\n\n # User\n def admin_create_user(self, user_pool_id, username, temporary_password, attributes):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n user = CognitoIdpUser(user_pool_id, username, temporary_password, UserStatus[\"FORCE_CHANGE_PASSWORD\"], attributes)\n user_pool.users[user.username] = user\n return user\n\n def admin_get_user(self, user_pool_id, username):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n if username not in user_pool.users:\n raise ResourceNotFoundError(username)\n\n return user_pool.users[username]\n\n def list_users(self, user_pool_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n return user_pool.users.values()\n\n def admin_delete_user(self, user_pool_id, username):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n if username not in user_pool.users:\n raise ResourceNotFoundError(username)\n\n del user_pool.users[username]\n\n def _log_user_in(self, user_pool, client, username):\n refresh_token = user_pool.create_refresh_token(client.id, username)\n access_token, id_token, expires_in = user_pool.create_tokens_from_refresh_token(refresh_token)\n\n return {\n \"AuthenticationResult\": {\n \"IdToken\": id_token,\n \"AccessToken\": access_token,\n \"RefreshToken\": refresh_token,\n \"ExpiresIn\": expires_in,\n }\n }\n\n def admin_initiate_auth(self, user_pool_id, client_id, auth_flow, auth_parameters):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n client = user_pool.clients.get(client_id)\n if not client:\n raise ResourceNotFoundError(client_id)\n\n if auth_flow == \"ADMIN_NO_SRP_AUTH\":\n username = auth_parameters.get(\"USERNAME\")\n password = auth_parameters.get(\"PASSWORD\")\n user = user_pool.users.get(username)\n if not user:\n raise UserNotFoundError(username)\n\n if user.password != password:\n raise NotAuthorizedError(username)\n\n if user.status == UserStatus[\"FORCE_CHANGE_PASSWORD\"]:\n session = str(uuid.uuid4())\n self.sessions[session] = user_pool\n\n return {\n \"ChallengeName\": \"NEW_PASSWORD_REQUIRED\",\n \"ChallengeParameters\": {},\n \"Session\": session,\n }\n\n return self._log_user_in(user_pool, client, 
username)\n elif auth_flow == \"REFRESH_TOKEN\":\n refresh_token = auth_parameters.get(\"REFRESH_TOKEN\")\n id_token, access_token, expires_in = user_pool.create_tokens_from_refresh_token(refresh_token)\n\n return {\n \"AuthenticationResult\": {\n \"IdToken\": id_token,\n \"AccessToken\": access_token,\n \"ExpiresIn\": expires_in,\n }\n }\n else:\n return {}\n\n def respond_to_auth_challenge(self, session, client_id, challenge_name, challenge_responses):\n user_pool = self.sessions.get(session)\n if not user_pool:\n raise ResourceNotFoundError(session)\n\n client = user_pool.clients.get(client_id)\n if not client:\n raise ResourceNotFoundError(client_id)\n\n if challenge_name == \"NEW_PASSWORD_REQUIRED\":\n username = challenge_responses.get(\"USERNAME\")\n new_password = challenge_responses.get(\"NEW_PASSWORD\")\n user = user_pool.users.get(username)\n if not user:\n raise UserNotFoundError(username)\n\n user.password = new_password\n user.status = UserStatus[\"CONFIRMED\"]\n del self.sessions[session]\n\n return self._log_user_in(user_pool, client, username)\n else:\n return {}\n\n def confirm_forgot_password(self, client_id, username, password):\n for user_pool in self.user_pools.values():\n if client_id in user_pool.clients and username in user_pool.users:\n user_pool.users[username].password = password\n break\n else:\n raise ResourceNotFoundError(client_id)\n\n def change_password(self, access_token, previous_password, proposed_password):\n for user_pool in self.user_pools.values():\n if access_token in user_pool.access_tokens:\n _, username = user_pool.access_tokens[access_token]\n user = user_pool.users.get(username)\n if not user:\n raise UserNotFoundError(username)\n\n if user.password != previous_password:\n raise NotAuthorizedError(username)\n\n user.password = proposed_password\n if user.status == UserStatus[\"FORCE_CHANGE_PASSWORD\"]:\n user.status = UserStatus[\"CONFIRMED\"]\n\n break\n else:\n raise NotAuthorizedError(access_token)\n\n\ncognitoidp_backends = {}\nfor region in boto.cognito.identity.regions():\n cognitoidp_backends[region.name] = CognitoIdpBackend(region.name)\n\n\n# Hack to help moto-server process requests on localhost, where the region isn't\n# specified in the host header. 
Some endpoints (change password, confirm forgot\n# password) have no authorization header from which to extract the region.\ndef find_region_by_value(key, value):\n for region in cognitoidp_backends:\n backend = cognitoidp_backends[region]\n for user_pool in backend.user_pools.values():\n if key == \"client_id\" and value in user_pool.clients:\n return region\n\n if key == \"access_token\" and value in user_pool.access_tokens:\n return region\n\n return cognitoidp_backends.keys()[0]\n", "path": "moto/cognitoidp/models.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport datetime\nimport json\nimport os\nimport time\nimport uuid\n\nimport boto.cognito.identity\nfrom jose import jws\n\nfrom moto.compat import OrderedDict\nfrom moto.core import BaseBackend, BaseModel\nfrom .exceptions import NotAuthorizedError, ResourceNotFoundError, UserNotFoundError\n\n\nUserStatus = {\n \"FORCE_CHANGE_PASSWORD\": \"FORCE_CHANGE_PASSWORD\",\n \"CONFIRMED\": \"CONFIRMED\",\n}\n\n\nclass CognitoIdpUserPool(BaseModel):\n\n def __init__(self, region, name, extended_config):\n self.region = region\n self.id = \"{}_{}\".format(self.region, str(uuid.uuid4().hex))\n self.name = name\n self.status = None\n self.extended_config = extended_config or {}\n self.creation_date = datetime.datetime.utcnow()\n self.last_modified_date = datetime.datetime.utcnow()\n\n self.clients = OrderedDict()\n self.identity_providers = OrderedDict()\n self.users = OrderedDict()\n self.refresh_tokens = {}\n self.access_tokens = {}\n self.id_tokens = {}\n\n with open(os.path.join(os.path.dirname(__file__), \"resources/jwks-private.json\")) as f:\n self.json_web_key = json.loads(f.read())\n\n def _base_json(self):\n return {\n \"Id\": self.id,\n \"Name\": self.name,\n \"Status\": self.status,\n \"CreationDate\": time.mktime(self.creation_date.timetuple()),\n \"LastModifiedDate\": time.mktime(self.last_modified_date.timetuple()),\n }\n\n def to_json(self, extended=False):\n user_pool_json = self._base_json()\n if extended:\n user_pool_json.update(self.extended_config)\n else:\n user_pool_json[\"LambdaConfig\"] = self.extended_config.get(\"LambdaConfig\") or {}\n\n return user_pool_json\n\n def create_jwt(self, client_id, username, expires_in=60 * 60, extra_data={}):\n now = int(time.time())\n payload = {\n \"iss\": \"https://cognito-idp.{}.amazonaws.com/{}\".format(self.region, self.id),\n \"sub\": self.users[username].id,\n \"aud\": client_id,\n \"token_use\": \"id\",\n \"auth_time\": now,\n \"exp\": now + expires_in,\n }\n payload.update(extra_data)\n\n return jws.sign(payload, self.json_web_key, algorithm='RS256'), expires_in\n\n def create_id_token(self, client_id, username):\n id_token, expires_in = self.create_jwt(client_id, username)\n self.id_tokens[id_token] = (client_id, username)\n return id_token, expires_in\n\n def create_refresh_token(self, client_id, username):\n refresh_token = str(uuid.uuid4())\n self.refresh_tokens[refresh_token] = (client_id, username)\n return refresh_token\n\n def create_access_token(self, client_id, username):\n access_token, expires_in = self.create_jwt(client_id, username)\n self.access_tokens[access_token] = (client_id, username)\n return access_token, expires_in\n\n def create_tokens_from_refresh_token(self, refresh_token):\n client_id, username = self.refresh_tokens.get(refresh_token)\n if not username:\n raise NotAuthorizedError(refresh_token)\n\n access_token, expires_in = self.create_access_token(client_id, username)\n id_token, _ = self.create_id_token(client_id, 
username)\n return access_token, id_token, expires_in\n\n\nclass CognitoIdpUserPoolDomain(BaseModel):\n\n def __init__(self, user_pool_id, domain):\n self.user_pool_id = user_pool_id\n self.domain = domain\n\n def to_json(self):\n return {\n \"UserPoolId\": self.user_pool_id,\n \"AWSAccountId\": str(uuid.uuid4()),\n \"CloudFrontDistribution\": None,\n \"Domain\": self.domain,\n \"S3Bucket\": None,\n \"Status\": \"ACTIVE\",\n \"Version\": None,\n }\n\n\nclass CognitoIdpUserPoolClient(BaseModel):\n\n def __init__(self, user_pool_id, extended_config):\n self.user_pool_id = user_pool_id\n self.id = str(uuid.uuid4())\n self.secret = str(uuid.uuid4())\n self.extended_config = extended_config or {}\n\n def _base_json(self):\n return {\n \"ClientId\": self.id,\n \"ClientName\": self.extended_config.get(\"ClientName\"),\n \"UserPoolId\": self.user_pool_id,\n }\n\n def to_json(self, extended=False):\n user_pool_client_json = self._base_json()\n if extended:\n user_pool_client_json.update(self.extended_config)\n\n return user_pool_client_json\n\n\nclass CognitoIdpIdentityProvider(BaseModel):\n\n def __init__(self, name, extended_config):\n self.name = name\n self.extended_config = extended_config or {}\n self.creation_date = datetime.datetime.utcnow()\n self.last_modified_date = datetime.datetime.utcnow()\n\n def _base_json(self):\n return {\n \"ProviderName\": self.name,\n \"ProviderType\": self.extended_config.get(\"ProviderType\"),\n \"CreationDate\": time.mktime(self.creation_date.timetuple()),\n \"LastModifiedDate\": time.mktime(self.last_modified_date.timetuple()),\n }\n\n def to_json(self, extended=False):\n identity_provider_json = self._base_json()\n if extended:\n identity_provider_json.update(self.extended_config)\n\n return identity_provider_json\n\n\nclass CognitoIdpUser(BaseModel):\n\n def __init__(self, user_pool_id, username, password, status, attributes):\n self.id = str(uuid.uuid4())\n self.user_pool_id = user_pool_id\n self.username = username\n self.password = password\n self.status = status\n self.enabled = True\n self.attributes = attributes\n self.create_date = datetime.datetime.utcnow()\n self.last_modified_date = datetime.datetime.utcnow()\n\n def _base_json(self):\n return {\n \"UserPoolId\": self.user_pool_id,\n \"Username\": self.username,\n \"UserStatus\": self.status,\n \"UserCreateDate\": time.mktime(self.create_date.timetuple()),\n \"UserLastModifiedDate\": time.mktime(self.last_modified_date.timetuple()),\n }\n\n # list_users brings back \"Attributes\" while admin_get_user brings back \"UserAttributes\".\n def to_json(self, extended=False, attributes_key=\"Attributes\"):\n user_json = self._base_json()\n if extended:\n user_json.update(\n {\n \"Enabled\": self.enabled,\n attributes_key: self.attributes,\n \"MFAOptions\": []\n }\n )\n\n return user_json\n\n\nclass CognitoIdpBackend(BaseBackend):\n\n def __init__(self, region):\n super(CognitoIdpBackend, self).__init__()\n self.region = region\n self.user_pools = OrderedDict()\n self.user_pool_domains = OrderedDict()\n self.sessions = {}\n\n def reset(self):\n region = self.region\n self.__dict__ = {}\n self.__init__(region)\n\n # User pool\n def create_user_pool(self, name, extended_config):\n user_pool = CognitoIdpUserPool(self.region, name, extended_config)\n self.user_pools[user_pool.id] = user_pool\n return user_pool\n\n def list_user_pools(self):\n return self.user_pools.values()\n\n def describe_user_pool(self, user_pool_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise 
ResourceNotFoundError(user_pool_id)\n\n return user_pool\n\n def delete_user_pool(self, user_pool_id):\n if user_pool_id not in self.user_pools:\n raise ResourceNotFoundError(user_pool_id)\n\n del self.user_pools[user_pool_id]\n\n # User pool domain\n def create_user_pool_domain(self, user_pool_id, domain):\n if user_pool_id not in self.user_pools:\n raise ResourceNotFoundError(user_pool_id)\n\n user_pool_domain = CognitoIdpUserPoolDomain(user_pool_id, domain)\n self.user_pool_domains[domain] = user_pool_domain\n return user_pool_domain\n\n def describe_user_pool_domain(self, domain):\n if domain not in self.user_pool_domains:\n return None\n\n return self.user_pool_domains[domain]\n\n def delete_user_pool_domain(self, domain):\n if domain not in self.user_pool_domains:\n raise ResourceNotFoundError(domain)\n\n del self.user_pool_domains[domain]\n\n # User pool client\n def create_user_pool_client(self, user_pool_id, extended_config):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n user_pool_client = CognitoIdpUserPoolClient(user_pool_id, extended_config)\n user_pool.clients[user_pool_client.id] = user_pool_client\n return user_pool_client\n\n def list_user_pool_clients(self, user_pool_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n return user_pool.clients.values()\n\n def describe_user_pool_client(self, user_pool_id, client_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n client = user_pool.clients.get(client_id)\n if not client:\n raise ResourceNotFoundError(client_id)\n\n return client\n\n def update_user_pool_client(self, user_pool_id, client_id, extended_config):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n client = user_pool.clients.get(client_id)\n if not client:\n raise ResourceNotFoundError(client_id)\n\n client.extended_config.update(extended_config)\n return client\n\n def delete_user_pool_client(self, user_pool_id, client_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n if client_id not in user_pool.clients:\n raise ResourceNotFoundError(client_id)\n\n del user_pool.clients[client_id]\n\n # Identity provider\n def create_identity_provider(self, user_pool_id, name, extended_config):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n identity_provider = CognitoIdpIdentityProvider(name, extended_config)\n user_pool.identity_providers[name] = identity_provider\n return identity_provider\n\n def list_identity_providers(self, user_pool_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n return user_pool.identity_providers.values()\n\n def describe_identity_provider(self, user_pool_id, name):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n identity_provider = user_pool.identity_providers.get(name)\n if not identity_provider:\n raise ResourceNotFoundError(name)\n\n return identity_provider\n\n def delete_identity_provider(self, user_pool_id, name):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n if name not in user_pool.identity_providers:\n raise 
ResourceNotFoundError(name)\n\n del user_pool.identity_providers[name]\n\n # User\n def admin_create_user(self, user_pool_id, username, temporary_password, attributes):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n user = CognitoIdpUser(user_pool_id, username, temporary_password, UserStatus[\"FORCE_CHANGE_PASSWORD\"], attributes)\n user_pool.users[user.username] = user\n return user\n\n def admin_get_user(self, user_pool_id, username):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n if username not in user_pool.users:\n raise ResourceNotFoundError(username)\n\n return user_pool.users[username]\n\n def list_users(self, user_pool_id):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n return user_pool.users.values()\n\n def admin_delete_user(self, user_pool_id, username):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n if username not in user_pool.users:\n raise ResourceNotFoundError(username)\n\n del user_pool.users[username]\n\n def _log_user_in(self, user_pool, client, username):\n refresh_token = user_pool.create_refresh_token(client.id, username)\n access_token, id_token, expires_in = user_pool.create_tokens_from_refresh_token(refresh_token)\n\n return {\n \"AuthenticationResult\": {\n \"IdToken\": id_token,\n \"AccessToken\": access_token,\n \"RefreshToken\": refresh_token,\n \"ExpiresIn\": expires_in,\n }\n }\n\n def admin_initiate_auth(self, user_pool_id, client_id, auth_flow, auth_parameters):\n user_pool = self.user_pools.get(user_pool_id)\n if not user_pool:\n raise ResourceNotFoundError(user_pool_id)\n\n client = user_pool.clients.get(client_id)\n if not client:\n raise ResourceNotFoundError(client_id)\n\n if auth_flow == \"ADMIN_NO_SRP_AUTH\":\n username = auth_parameters.get(\"USERNAME\")\n password = auth_parameters.get(\"PASSWORD\")\n user = user_pool.users.get(username)\n if not user:\n raise UserNotFoundError(username)\n\n if user.password != password:\n raise NotAuthorizedError(username)\n\n if user.status == UserStatus[\"FORCE_CHANGE_PASSWORD\"]:\n session = str(uuid.uuid4())\n self.sessions[session] = user_pool\n\n return {\n \"ChallengeName\": \"NEW_PASSWORD_REQUIRED\",\n \"ChallengeParameters\": {},\n \"Session\": session,\n }\n\n return self._log_user_in(user_pool, client, username)\n elif auth_flow == \"REFRESH_TOKEN\":\n refresh_token = auth_parameters.get(\"REFRESH_TOKEN\")\n id_token, access_token, expires_in = user_pool.create_tokens_from_refresh_token(refresh_token)\n\n return {\n \"AuthenticationResult\": {\n \"IdToken\": id_token,\n \"AccessToken\": access_token,\n \"ExpiresIn\": expires_in,\n }\n }\n else:\n return {}\n\n def respond_to_auth_challenge(self, session, client_id, challenge_name, challenge_responses):\n user_pool = self.sessions.get(session)\n if not user_pool:\n raise ResourceNotFoundError(session)\n\n client = user_pool.clients.get(client_id)\n if not client:\n raise ResourceNotFoundError(client_id)\n\n if challenge_name == \"NEW_PASSWORD_REQUIRED\":\n username = challenge_responses.get(\"USERNAME\")\n new_password = challenge_responses.get(\"NEW_PASSWORD\")\n user = user_pool.users.get(username)\n if not user:\n raise UserNotFoundError(username)\n\n user.password = new_password\n user.status = UserStatus[\"CONFIRMED\"]\n del self.sessions[session]\n\n return 
self._log_user_in(user_pool, client, username)\n else:\n return {}\n\n def confirm_forgot_password(self, client_id, username, password):\n for user_pool in self.user_pools.values():\n if client_id in user_pool.clients and username in user_pool.users:\n user_pool.users[username].password = password\n break\n else:\n raise ResourceNotFoundError(client_id)\n\n def change_password(self, access_token, previous_password, proposed_password):\n for user_pool in self.user_pools.values():\n if access_token in user_pool.access_tokens:\n _, username = user_pool.access_tokens[access_token]\n user = user_pool.users.get(username)\n if not user:\n raise UserNotFoundError(username)\n\n if user.password != previous_password:\n raise NotAuthorizedError(username)\n\n user.password = proposed_password\n if user.status == UserStatus[\"FORCE_CHANGE_PASSWORD\"]:\n user.status = UserStatus[\"CONFIRMED\"]\n\n break\n else:\n raise NotAuthorizedError(access_token)\n\n\ncognitoidp_backends = {}\nfor region in boto.cognito.identity.regions():\n cognitoidp_backends[region.name] = CognitoIdpBackend(region.name)\n\n\n# Hack to help moto-server process requests on localhost, where the region isn't\n# specified in the host header. Some endpoints (change password, confirm forgot\n# password) have no authorization header from which to extract the region.\ndef find_region_by_value(key, value):\n for region in cognitoidp_backends:\n backend = cognitoidp_backends[region]\n for user_pool in backend.user_pools.values():\n if key == \"client_id\" and value in user_pool.clients:\n return region\n\n if key == \"access_token\" and value in user_pool.access_tokens:\n return region\n\n return cognitoidp_backends.keys()[0]\n", "path": "moto/cognitoidp/models.py"}]} |
gh_patches_debug_1001 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-4910 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Validate profile fields on form
Related code
https://github.com/rtfd/readthedocs.org/blob/164800694a25d769234c6e7019c483f347fe9226/readthedocs/core/forms.py#L20-L46
This will raise an exception if the length is greater than what the model field allows
Sentry issue https://sentry.io/read-the-docs/readthedocs-org/issues/666774301/
--- END ISSUE ---
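As a quick aside before the file listing: the failure described above is easy to reproduce in isolation. The sketch that follows is written for this note only, the form name and data values are invented, and the one assumption taken from the issue is that the underlying model caps these columns at 30 characters (the limit Django's auth `User` used at the time). Declaring the same limit on the form field turns the crash at save time into an ordinary validation error; the project's actual change appears in the diff further down in this entry.
```python
# Hypothetical, self-contained sketch; ProfileNameForm is not Read the Docs code.
import django
from django.conf import settings

if not settings.configured:
    settings.configure(USE_I18N=False)  # just enough configuration to validate a form
    django.setup()

from django import forms


class ProfileNameForm(forms.Form):
    # max_length mirrors the assumed 30-character limit on User.first_name / User.last_name
    first_name = forms.CharField(required=False, max_length=30)
    last_name = forms.CharField(required=False, max_length=30)


form = ProfileNameForm(data={"first_name": "x" * 100})
print(form.is_valid())                 # False: the over-long value is rejected up front
print(form.errors.get("first_name"))   # instead of raising when user.save() runs
```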
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/core/forms.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Forms for core app."""
3
4 from __future__ import (
5 absolute_import, division, print_function, unicode_literals)
6
7 import logging
8 from builtins import object
9
10 from django import forms
11 from django.contrib.auth.models import User
12 from django.forms.fields import CharField
13 from django.utils.translation import ugettext_lazy as _
14
15 from .models import UserProfile
16
17 log = logging.getLogger(__name__)
18
19
20 class UserProfileForm(forms.ModelForm):
21 first_name = CharField(label=_('First name'), required=False)
22 last_name = CharField(label=_('Last name'), required=False)
23
24 class Meta(object):
25 model = UserProfile
26 # Don't allow users edit someone else's user page
27 fields = ['first_name', 'last_name', 'homepage']
28
29 def __init__(self, *args, **kwargs):
30 super(UserProfileForm, self).__init__(*args, **kwargs)
31 try:
32 self.fields['first_name'].initial = self.instance.user.first_name
33 self.fields['last_name'].initial = self.instance.user.last_name
34 except AttributeError:
35 pass
36
37 def save(self, commit=True):
38 first_name = self.cleaned_data.pop('first_name', None)
39 last_name = self.cleaned_data.pop('last_name', None)
40 profile = super(UserProfileForm, self).save(commit=commit)
41 if commit:
42 user = profile.user
43 user.first_name = first_name
44 user.last_name = last_name
45 user.save()
46 return profile
47
48
49 class UserDeleteForm(forms.ModelForm):
50 username = CharField(
51 label=_('Username'),
52 help_text=_('Please type your username to confirm.'),
53 )
54
55 class Meta(object):
56 model = User
57 fields = ['username']
58
59 def clean_username(self):
60 data = self.cleaned_data['username']
61
62 if self.instance.username != data:
63 raise forms.ValidationError(_('Username does not match!'))
64
65 return data
66
67
68 class UserAdvertisingForm(forms.ModelForm):
69 class Meta(object):
70 model = UserProfile
71 fields = ['allow_ads']
72
73
74 class FacetField(forms.MultipleChoiceField):
75
76 """
77 For filtering searches on a facet.
78
79 Has validation for the format of facet values.
80 """
81
82 def valid_value(self, value):
83 """
84 Although this is a choice field, no choices need to be supplied.
85
86 Instead, we just validate that the value is in the correct format for
87 facet filtering (facet_name:value)
88 """
89 if ':' not in value:
90 return False
91 return True
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/readthedocs/core/forms.py b/readthedocs/core/forms.py
--- a/readthedocs/core/forms.py
+++ b/readthedocs/core/forms.py
@@ -18,8 +18,8 @@
class UserProfileForm(forms.ModelForm):
- first_name = CharField(label=_('First name'), required=False)
- last_name = CharField(label=_('Last name'), required=False)
+ first_name = CharField(label=_('First name'), required=False, max_length=30)
+ last_name = CharField(label=_('Last name'), required=False, max_length=30)
class Meta(object):
model = UserProfile
| {"golden_diff": "diff --git a/readthedocs/core/forms.py b/readthedocs/core/forms.py\n--- a/readthedocs/core/forms.py\n+++ b/readthedocs/core/forms.py\n@@ -18,8 +18,8 @@\n \n \n class UserProfileForm(forms.ModelForm):\n- first_name = CharField(label=_('First name'), required=False)\n- last_name = CharField(label=_('Last name'), required=False)\n+ first_name = CharField(label=_('First name'), required=False, max_length=30)\n+ last_name = CharField(label=_('Last name'), required=False, max_length=30)\n \n class Meta(object):\n model = UserProfile\n", "issue": "Validate profile fields on form\nRelated code\r\n\r\nhttps://github.com/rtfd/readthedocs.org/blob/164800694a25d769234c6e7019c483f347fe9226/readthedocs/core/forms.py#L20-L46\r\n\r\nThis will raise an exception if the length is greater than the model\r\n\r\nSentry issue https://sentry.io/read-the-docs/readthedocs-org/issues/666774301/\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Forms for core app.\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport logging\nfrom builtins import object\n\nfrom django import forms\nfrom django.contrib.auth.models import User\nfrom django.forms.fields import CharField\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom .models import UserProfile\n\nlog = logging.getLogger(__name__)\n\n\nclass UserProfileForm(forms.ModelForm):\n first_name = CharField(label=_('First name'), required=False)\n last_name = CharField(label=_('Last name'), required=False)\n\n class Meta(object):\n model = UserProfile\n # Don't allow users edit someone else's user page\n fields = ['first_name', 'last_name', 'homepage']\n\n def __init__(self, *args, **kwargs):\n super(UserProfileForm, self).__init__(*args, **kwargs)\n try:\n self.fields['first_name'].initial = self.instance.user.first_name\n self.fields['last_name'].initial = self.instance.user.last_name\n except AttributeError:\n pass\n\n def save(self, commit=True):\n first_name = self.cleaned_data.pop('first_name', None)\n last_name = self.cleaned_data.pop('last_name', None)\n profile = super(UserProfileForm, self).save(commit=commit)\n if commit:\n user = profile.user\n user.first_name = first_name\n user.last_name = last_name\n user.save()\n return profile\n\n\nclass UserDeleteForm(forms.ModelForm):\n username = CharField(\n label=_('Username'),\n help_text=_('Please type your username to confirm.'),\n )\n\n class Meta(object):\n model = User\n fields = ['username']\n\n def clean_username(self):\n data = self.cleaned_data['username']\n\n if self.instance.username != data:\n raise forms.ValidationError(_('Username does not match!'))\n\n return data\n\n\nclass UserAdvertisingForm(forms.ModelForm):\n class Meta(object):\n model = UserProfile\n fields = ['allow_ads']\n\n\nclass FacetField(forms.MultipleChoiceField):\n\n \"\"\"\n For filtering searches on a facet.\n\n Has validation for the format of facet values.\n \"\"\"\n\n def valid_value(self, value):\n \"\"\"\n Although this is a choice field, no choices need to be supplied.\n\n Instead, we just validate that the value is in the correct format for\n facet filtering (facet_name:value)\n \"\"\"\n if ':' not in value:\n return False\n return True\n", "path": "readthedocs/core/forms.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Forms for core app.\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport logging\nfrom builtins import object\n\nfrom django import 
forms\nfrom django.contrib.auth.models import User\nfrom django.forms.fields import CharField\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom .models import UserProfile\n\nlog = logging.getLogger(__name__)\n\n\nclass UserProfileForm(forms.ModelForm):\n first_name = CharField(label=_('First name'), required=False, max_length=30)\n last_name = CharField(label=_('Last name'), required=False, max_length=30)\n\n class Meta(object):\n model = UserProfile\n # Don't allow users edit someone else's user page\n fields = ['first_name', 'last_name', 'homepage']\n\n def __init__(self, *args, **kwargs):\n super(UserProfileForm, self).__init__(*args, **kwargs)\n try:\n self.fields['first_name'].initial = self.instance.user.first_name\n self.fields['last_name'].initial = self.instance.user.last_name\n except AttributeError:\n pass\n\n def save(self, commit=True):\n first_name = self.cleaned_data.pop('first_name', None)\n last_name = self.cleaned_data.pop('last_name', None)\n profile = super(UserProfileForm, self).save(commit=commit)\n if commit:\n user = profile.user\n user.first_name = first_name\n user.last_name = last_name\n user.save()\n return profile\n\n\nclass UserDeleteForm(forms.ModelForm):\n username = CharField(\n label=_('Username'),\n help_text=_('Please type your username to confirm.'),\n )\n\n class Meta(object):\n model = User\n fields = ['username']\n\n def clean_username(self):\n data = self.cleaned_data['username']\n\n if self.instance.username != data:\n raise forms.ValidationError(_('Username does not match!'))\n\n return data\n\n\nclass UserAdvertisingForm(forms.ModelForm):\n class Meta(object):\n model = UserProfile\n fields = ['allow_ads']\n\n\nclass FacetField(forms.MultipleChoiceField):\n\n \"\"\"\n For filtering searches on a facet.\n\n Has validation for the format of facet values.\n \"\"\"\n\n def valid_value(self, value):\n \"\"\"\n Although this is a choice field, no choices need to be supplied.\n\n Instead, we just validate that the value is in the correct format for\n facet filtering (facet_name:value)\n \"\"\"\n if ':' not in value:\n return False\n return True\n", "path": "readthedocs/core/forms.py"}]} |
gh_patches_debug_1002 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-5611 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/inference/benchmark_ops/benchmark_rmsnorm.py`
Content:
```
1 import torch
2
3 from colossalai.kernel.kernel_loader import InferenceOpsLoader
4 from colossalai.kernel.triton import rms_layernorm
5
6 try:
7 import triton # noqa
8 except ImportError:
9 print("please install triton from https://github.com/openai/triton")
10
11 inference_ops = InferenceOpsLoader().load()
12
13 # Triton benchmark plot attributions
14 configs = [
15 triton.testing.Benchmark(
16 x_names=["SEQUENCE_TOTAL"],
17 x_vals=[i for i in range(128, 1025, 128)],
18 line_arg="provider",
19 line_vals=[
20 "vllm_rms_layernorm",
21 "triton_rms_layernorm",
22 "cuda_rms_layernorm",
23 "vllm_rms_layernorm_with_residual",
24 "triton_rms_layernorm_with_residual",
25 "cuda_rms_layernorm_with_residual",
26 ],
27 line_names=[
28 "vllm_rms_layernorm",
29 "triton_rms_layernorm",
30 "cuda_rms_layernorm",
31 "vllm_rms_layernorm_with_residual",
32 "triton_rms_layernorm_with_residual",
33 "cuda_rms_layernorm_with_residual",
34 ],
35 styles=[("red", "-"), ("blue", "-"), ("yellow", "-"), ("red", "--"), ("blue", "--"), ("yellow", "--")],
36 ylabel="ms",
37 plot_name=f"RMSNorm benchmarking results",
38 args={"HIDDEN_SIZE": 1024},
39 )
40 ]
41
42
43 @triton.testing.perf_report(configs)
44 def benchmark_rms_layernorm(
45 provider: str,
46 SEQUENCE_TOTAL: int,
47 HIDDEN_SIZE: int,
48 ):
49 try:
50 from vllm.model_executor.layers.layernorm import RMSNorm
51 except ImportError:
52 raise ImportError("Please install vllm from https://github.com/vllm-project/vllm")
53
54 warmup = 10
55 rep = 1000
56
57 dtype = torch.float16
58 eps = 1e-5
59 x_shape = (SEQUENCE_TOTAL, HIDDEN_SIZE)
60 w_shape = (x_shape[-1],)
61 residual = torch.rand(x_shape, dtype=dtype, device="cuda")
62 weight = torch.ones(w_shape, dtype=dtype, device="cuda")
63 vllm_norm = RMSNorm(hidden_size=HIDDEN_SIZE, eps=eps).to(dtype=dtype, device="cuda")
64 x = -2.3 + 0.5 * torch.randn(x_shape, dtype=dtype, device="cuda")
65 if provider == "vllm_rms_layernorm":
66 fn = lambda: vllm_norm(x)
67 elif provider == "triton_rms_layernorm":
68 fn = lambda: rms_layernorm(x, weight, eps=eps)
69 elif provider == "cuda_rms_layernorm":
70 out = torch.empty_like(x)
71 fn = lambda: inference_ops.rms_layernorm(out, x, weight, eps)
72 elif provider == "vllm_rms_layernorm_with_residual":
73 fn = lambda: vllm_norm(x, residual=residual)
74 elif provider == "triton_rms_layernorm_with_residual":
75 fn = lambda: rms_layernorm(x, weight, eps=eps, residual=residual)
76 elif provider == "cuda_rms_layernorm_with_residual":
77 fn = lambda: inference_ops.fused_add_rms_layernorm(x, residual, weight, eps)
78 else:
79 raise ValueError("Undefined provider.")
80
81 ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
82
83 return ms
84
85
86 if __name__ == "__main__":
87 benchmark_rms_layernorm.run(save_path=".", print_data=True)
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/inference/benchmark_ops/benchmark_rmsnorm.py b/examples/inference/benchmark_ops/benchmark_rmsnorm.py
--- a/examples/inference/benchmark_ops/benchmark_rmsnorm.py
+++ b/examples/inference/benchmark_ops/benchmark_rmsnorm.py
@@ -35,7 +35,7 @@
styles=[("red", "-"), ("blue", "-"), ("yellow", "-"), ("red", "--"), ("blue", "--"), ("yellow", "--")],
ylabel="ms",
plot_name=f"RMSNorm benchmarking results",
- args={"HIDDEN_SIZE": 1024},
+ args={"HIDDEN_SIZE": 5120},
)
]
| {"golden_diff": "diff --git a/examples/inference/benchmark_ops/benchmark_rmsnorm.py b/examples/inference/benchmark_ops/benchmark_rmsnorm.py\n--- a/examples/inference/benchmark_ops/benchmark_rmsnorm.py\n+++ b/examples/inference/benchmark_ops/benchmark_rmsnorm.py\n@@ -35,7 +35,7 @@\n styles=[(\"red\", \"-\"), (\"blue\", \"-\"), (\"yellow\", \"-\"), (\"red\", \"--\"), (\"blue\", \"--\"), (\"yellow\", \"--\")],\n ylabel=\"ms\",\n plot_name=f\"RMSNorm benchmarking results\",\n- args={\"HIDDEN_SIZE\": 1024},\n+ args={\"HIDDEN_SIZE\": 5120},\n )\n ]\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import torch\n\nfrom colossalai.kernel.kernel_loader import InferenceOpsLoader\nfrom colossalai.kernel.triton import rms_layernorm\n\ntry:\n import triton # noqa\nexcept ImportError:\n print(\"please install triton from https://github.com/openai/triton\")\n\ninference_ops = InferenceOpsLoader().load()\n\n# Triton benchmark plot attributions\nconfigs = [\n triton.testing.Benchmark(\n x_names=[\"SEQUENCE_TOTAL\"],\n x_vals=[i for i in range(128, 1025, 128)],\n line_arg=\"provider\",\n line_vals=[\n \"vllm_rms_layernorm\",\n \"triton_rms_layernorm\",\n \"cuda_rms_layernorm\",\n \"vllm_rms_layernorm_with_residual\",\n \"triton_rms_layernorm_with_residual\",\n \"cuda_rms_layernorm_with_residual\",\n ],\n line_names=[\n \"vllm_rms_layernorm\",\n \"triton_rms_layernorm\",\n \"cuda_rms_layernorm\",\n \"vllm_rms_layernorm_with_residual\",\n \"triton_rms_layernorm_with_residual\",\n \"cuda_rms_layernorm_with_residual\",\n ],\n styles=[(\"red\", \"-\"), (\"blue\", \"-\"), (\"yellow\", \"-\"), (\"red\", \"--\"), (\"blue\", \"--\"), (\"yellow\", \"--\")],\n ylabel=\"ms\",\n plot_name=f\"RMSNorm benchmarking results\",\n args={\"HIDDEN_SIZE\": 1024},\n )\n]\n\n\[email protected]_report(configs)\ndef benchmark_rms_layernorm(\n provider: str,\n SEQUENCE_TOTAL: int,\n HIDDEN_SIZE: int,\n):\n try:\n from vllm.model_executor.layers.layernorm import RMSNorm\n except ImportError:\n raise ImportError(\"Please install vllm from https://github.com/vllm-project/vllm\")\n\n warmup = 10\n rep = 1000\n\n dtype = torch.float16\n eps = 1e-5\n x_shape = (SEQUENCE_TOTAL, HIDDEN_SIZE)\n w_shape = (x_shape[-1],)\n residual = torch.rand(x_shape, dtype=dtype, device=\"cuda\")\n weight = torch.ones(w_shape, dtype=dtype, device=\"cuda\")\n vllm_norm = RMSNorm(hidden_size=HIDDEN_SIZE, eps=eps).to(dtype=dtype, device=\"cuda\")\n x = -2.3 + 0.5 * torch.randn(x_shape, dtype=dtype, device=\"cuda\")\n if provider == \"vllm_rms_layernorm\":\n fn = lambda: vllm_norm(x)\n elif provider == \"triton_rms_layernorm\":\n fn = lambda: rms_layernorm(x, weight, eps=eps)\n elif provider == \"cuda_rms_layernorm\":\n out = torch.empty_like(x)\n fn = lambda: inference_ops.rms_layernorm(out, x, weight, eps)\n elif provider == \"vllm_rms_layernorm_with_residual\":\n fn = lambda: vllm_norm(x, residual=residual)\n elif provider == \"triton_rms_layernorm_with_residual\":\n fn = lambda: rms_layernorm(x, weight, eps=eps, residual=residual)\n elif provider == \"cuda_rms_layernorm_with_residual\":\n fn = lambda: inference_ops.fused_add_rms_layernorm(x, residual, weight, eps)\n else:\n raise ValueError(\"Undefined provider.\")\n\n ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)\n\n return ms\n\n\nif __name__ == \"__main__\":\n benchmark_rms_layernorm.run(save_path=\".\", print_data=True)\n", "path": "examples/inference/benchmark_ops/benchmark_rmsnorm.py"}], 
"after_files": [{"content": "import torch\n\nfrom colossalai.kernel.kernel_loader import InferenceOpsLoader\nfrom colossalai.kernel.triton import rms_layernorm\n\ntry:\n import triton # noqa\nexcept ImportError:\n print(\"please install triton from https://github.com/openai/triton\")\n\ninference_ops = InferenceOpsLoader().load()\n\n# Triton benchmark plot attributions\nconfigs = [\n triton.testing.Benchmark(\n x_names=[\"SEQUENCE_TOTAL\"],\n x_vals=[i for i in range(128, 1025, 128)],\n line_arg=\"provider\",\n line_vals=[\n \"vllm_rms_layernorm\",\n \"triton_rms_layernorm\",\n \"cuda_rms_layernorm\",\n \"vllm_rms_layernorm_with_residual\",\n \"triton_rms_layernorm_with_residual\",\n \"cuda_rms_layernorm_with_residual\",\n ],\n line_names=[\n \"vllm_rms_layernorm\",\n \"triton_rms_layernorm\",\n \"cuda_rms_layernorm\",\n \"vllm_rms_layernorm_with_residual\",\n \"triton_rms_layernorm_with_residual\",\n \"cuda_rms_layernorm_with_residual\",\n ],\n styles=[(\"red\", \"-\"), (\"blue\", \"-\"), (\"yellow\", \"-\"), (\"red\", \"--\"), (\"blue\", \"--\"), (\"yellow\", \"--\")],\n ylabel=\"ms\",\n plot_name=f\"RMSNorm benchmarking results\",\n args={\"HIDDEN_SIZE\": 5120},\n )\n]\n\n\[email protected]_report(configs)\ndef benchmark_rms_layernorm(\n provider: str,\n SEQUENCE_TOTAL: int,\n HIDDEN_SIZE: int,\n):\n try:\n from vllm.model_executor.layers.layernorm import RMSNorm\n except ImportError:\n raise ImportError(\"Please install vllm from https://github.com/vllm-project/vllm\")\n\n warmup = 10\n rep = 1000\n\n dtype = torch.float16\n eps = 1e-5\n x_shape = (SEQUENCE_TOTAL, HIDDEN_SIZE)\n w_shape = (x_shape[-1],)\n residual = torch.rand(x_shape, dtype=dtype, device=\"cuda\")\n weight = torch.ones(w_shape, dtype=dtype, device=\"cuda\")\n vllm_norm = RMSNorm(hidden_size=HIDDEN_SIZE, eps=eps).to(dtype=dtype, device=\"cuda\")\n x = -2.3 + 0.5 * torch.randn(x_shape, dtype=dtype, device=\"cuda\")\n if provider == \"vllm_rms_layernorm\":\n fn = lambda: vllm_norm(x)\n elif provider == \"triton_rms_layernorm\":\n fn = lambda: rms_layernorm(x, weight, eps=eps)\n elif provider == \"cuda_rms_layernorm\":\n out = torch.empty_like(x)\n fn = lambda: inference_ops.rms_layernorm(out, x, weight, eps)\n elif provider == \"vllm_rms_layernorm_with_residual\":\n fn = lambda: vllm_norm(x, residual=residual)\n elif provider == \"triton_rms_layernorm_with_residual\":\n fn = lambda: rms_layernorm(x, weight, eps=eps, residual=residual)\n elif provider == \"cuda_rms_layernorm_with_residual\":\n fn = lambda: inference_ops.fused_add_rms_layernorm(x, residual, weight, eps)\n else:\n raise ValueError(\"Undefined provider.\")\n\n ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)\n\n return ms\n\n\nif __name__ == \"__main__\":\n benchmark_rms_layernorm.run(save_path=\".\", print_data=True)\n", "path": "examples/inference/benchmark_ops/benchmark_rmsnorm.py"}]} |
gh_patches_debug_1003 | rasdani/github-patches | git_diff | spack__spack-19617 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jupyter: No module named ipykernel_launcher
### Steps to reproduce the issue
```console
$ spack env create my-jupyter
$ spack env activate my-jupyter
$ spack add py-jupyter
$ spack add py-ipython
$ spack add py-ipykernel
$ spack add py-notebook
$ spack install
```
### Error Message
If I try to start `jupyter notebook` now and open a Python3 Notebook I get no working Python3 kernel
```
Kernel started: af71e14f-24f7-40a4-92a8-48e79f5d621c, name: python3
/home/axel/src/spack/opt/spack/linux-ubuntu18.04-skylake/gcc-8.4.0/python-3.8.6-wuh5zypqqvf3fba4ootslwky3plqqsqw/bin/python3.8: No module named ipykernel_launcher
[I 00:55:29.178 NotebookApp] KernelRestarter: restarting kernel (1/5), new random ports
/home/axel/src/spack/opt/spack/linux-ubuntu18.04-skylake/gcc-8.4.0/python-3.8.6-wuh5zypqqvf3fba4ootslwky3plqqsqw/bin/python3.8: No module named ipykernel_launcher
# ...
```
### Information on your system
```bash
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
```
`spack debug report`:
* **Spack:** 0.15.4-1470-99ef3d11c1
* **Python:** 3.8.6
* **Platform:** linux-ubuntu18.04-skylake
### Additional information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have searched the issues of this repo and believe this is not a duplicate
- [ ] I have run the failing commands in debug mode and reported the output
--- END ISSUE ---
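One practical note on the error above, before the package file: "No module named ipykernel_launcher" is what Jupyter prints when the kernelspec it launches points at a Python interpreter that cannot import ipykernel, or when no spec was ever registered for the environment's interpreter. The snippet below is a diagnostic sketch written for this note, not output from the reporter's machine; `jupyter kernelspec list --json` and `python -m ipykernel install --user` are standard Jupyter/ipykernel commands, and whether they apply to this exact setup is an assumption on my part.
```python
# Hedged diagnostic sketch: inspect which interpreter each kernelspec launches,
# then register a spec for the current interpreter if none matches.
import json
import subprocess
import sys

out = subprocess.check_output(["jupyter", "kernelspec", "list", "--json"])
for name, spec in json.loads(out)["kernelspecs"].items():
    # argv[0] is the interpreter the kernel will be started with
    print(f"{name}: {spec['spec']['argv'][0]}")

# If the 'python3' spec points at an interpreter lacking ipykernel, re-registering
# a spec for the interpreter that does have it is the usual remedy:
subprocess.check_call([sys.executable, "-m", "ipykernel", "install", "--user"])
```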
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/py-ipykernel/package.py`
Content:
```
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6
7 class PyIpykernel(PythonPackage):
8 """IPython Kernel for Jupyter"""
9
10 homepage = "https://pypi.python.org/pypi/ipykernel"
11 url = "https://pypi.io/packages/source/i/ipykernel/ipykernel-5.3.4.tar.gz"
12
13 version('5.3.4', sha256='9b2652af1607986a1b231c62302d070bc0534f564c393a5d9d130db9abbbe89d')
14 version('5.1.1', sha256='f0e962052718068ad3b1d8bcc703794660858f58803c3798628817f492a8769c')
15 version('5.1.0', sha256='0fc0bf97920d454102168ec2008620066878848fcfca06c22b669696212e292f')
16 version('4.10.0', sha256='699103c8e64886e3ec7053f2a6aa83bb90426063526f63a818732ff385202bad')
17 version('4.5.0', sha256='245a798edb8fd751b95750d8645d736dd739a020e7fc7d5627dac4d1c35d8295')
18 version('4.4.1', sha256='6d48398b3112efb733b254edede4b7f3262c28bd19f665b64ef1acf6ec5cd74f')
19 version('4.4.0', sha256='d516427c3bd689205e6693c9616302ef34017b91ada3c9ea3fca6e90702b7ffe')
20 version('4.3.1', sha256='8219d3eaa3e4d4efc5f349114e41a40f0986c91a960846bb81d5da817fb7cc3f')
21 version('4.3.0', sha256='f214c661328c836e02b6f185f98f3eccd7ce396791937493ffa1babf5e3267ab')
22 version('4.2.2', sha256='a876da43e01acec2c305abdd8e6aa55f052bab1196171ccf1cb9a6aa230298b0')
23 version('4.2.1', sha256='081a5d4db33db58697be2d682b92f79b2c239493445f13dd457c15bc3e52c874')
24 version('4.2.0', sha256='723b3d4baac20f0c9cd91fc75c3e813636ecb6c6e303fb34d628c3df078985a7')
25 version('4.1.1', sha256='d8c5555386d0f18f1336dea9800f9f0fe96dcecc9757c0f980e11fdfadb661ff')
26 version('4.1.0', sha256='e0e150ad55e487e49054efc9a4b0e2e17f27e1de77444b26760789077b146d86')
27
28 depends_on('[email protected]:2.8,3.3:', type=('build', 'run'))
29 depends_on('[email protected]:', when='@5.0:', type=('build', 'run'))
30 depends_on('[email protected]:', when='@5.2:', type=('build', 'run'))
31 depends_on('py-setuptools', type='build', when='@5:')
32 depends_on('[email protected]:', when='@:4.999', type=('build', 'run'))
33 depends_on('[email protected]:', when='@5.0.0:', type=('build', 'run'))
34 depends_on('[email protected]:', type=('build', 'run'))
35 depends_on('py-jupyter-client', type=('build', 'run'))
36 depends_on('[email protected]:', when='@:4.999', type=('build', 'run'))
37 depends_on('[email protected]:', when='@5.0.0:', type=('build', 'run'))
38 depends_on('py-appnope', when='platform=darwin', type=('build', 'run'))
39 depends_on('py-pytest@:5.3.3,5.3.5:', type='test')
40 depends_on('py-pytest-cov', type='test')
41 # depends_on('py-flaky', type='test')
42 depends_on('py-nose', type='test')
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/var/spack/repos/builtin/packages/py-ipykernel/package.py b/var/spack/repos/builtin/packages/py-ipykernel/package.py
--- a/var/spack/repos/builtin/packages/py-ipykernel/package.py
+++ b/var/spack/repos/builtin/packages/py-ipykernel/package.py
@@ -40,3 +40,9 @@
depends_on('py-pytest-cov', type='test')
# depends_on('py-flaky', type='test')
depends_on('py-nose', type='test')
+
+ phases = ['build', 'install', 'install_data']
+
+ def install_data(self):
+ """ install the Jupyter kernel spec """
+ self.spec['python'].command('-m ipykernel', ['install'])
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/py-ipykernel/package.py b/var/spack/repos/builtin/packages/py-ipykernel/package.py\n--- a/var/spack/repos/builtin/packages/py-ipykernel/package.py\n+++ b/var/spack/repos/builtin/packages/py-ipykernel/package.py\n@@ -40,3 +40,9 @@\n depends_on('py-pytest-cov', type='test')\n # depends_on('py-flaky', type='test')\n depends_on('py-nose', type='test')\n+\n+ phases = ['build', 'install', 'install_data']\n+\n+ def install_data(self):\n+ \"\"\" install the Jupyter kernel spec \"\"\"\n+ self.spec['python'].command('-m ipykernel', ['install'])\n", "issue": "Jupyter: No module named ipykernel_launcher\n### Steps to reproduce the issue\r\n\r\n```console\r\n$ spack env create my-jupyter\r\n$ spack env activate my-jupyter\r\n$ spack add py-jupyter\r\n$ spack add py-ipython\r\n$ spack add py-ipykernel\r\n$ spack add py-notebook\r\n$ spack install\r\n```\r\n\r\n### Error Message\r\n\r\nIf I try to start `jupyter notebook` now and open a Python3 Notebook I get no working Python3 kernel\r\n```\r\nKernel started: af71e14f-24f7-40a4-92a8-48e79f5d621c, name: python3\r\n/home/axel/src/spack/opt/spack/linux-ubuntu18.04-skylake/gcc-8.4.0/python-3.8.6-wuh5zypqqvf3fba4ootslwky3plqqsqw/bin/python3.8: No module named ipykernel_launcher\r\n\r\n[I 00:55:29.178 NotebookApp] KernelRestarter: restarting kernel (1/5), new random ports\r\n/home/axel/src/spack/opt/spack/linux-ubuntu18.04-skylake/gcc-8.4.0/python-3.8.6-wuh5zypqqvf3fba4ootslwky3plqqsqw/bin/python3.8: No module named ipykernel_launcher\r\n\r\n# ...\r\n```\r\n\r\n### Information on your system\r\n\r\n```bash\r\n$ lsb_release -a\r\nNo LSB modules are available.\r\nDistributor ID:\tUbuntu\r\nDescription:\tUbuntu 18.04.5 LTS\r\nRelease:\t18.04\r\nCodename:\tbionic\r\n```\r\n\r\n`spack debug report`:\r\n* **Spack:** 0.15.4-1470-99ef3d11c1\r\n* **Python:** 3.8.6\r\n* **Platform:** linux-ubuntu18.04-skylake\r\n\r\n### Additional information\r\n\r\n<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->\r\n- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform\r\n- [x] I have searched the issues of this repo and believe this is not a duplicate\r\n- [ ] I have run the failing commands in debug mode and reported the output\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\n\nclass PyIpykernel(PythonPackage):\n \"\"\"IPython Kernel for Jupyter\"\"\"\n\n homepage = \"https://pypi.python.org/pypi/ipykernel\"\n url = \"https://pypi.io/packages/source/i/ipykernel/ipykernel-5.3.4.tar.gz\"\n\n version('5.3.4', sha256='9b2652af1607986a1b231c62302d070bc0534f564c393a5d9d130db9abbbe89d')\n version('5.1.1', sha256='f0e962052718068ad3b1d8bcc703794660858f58803c3798628817f492a8769c')\n version('5.1.0', sha256='0fc0bf97920d454102168ec2008620066878848fcfca06c22b669696212e292f')\n version('4.10.0', sha256='699103c8e64886e3ec7053f2a6aa83bb90426063526f63a818732ff385202bad')\n version('4.5.0', sha256='245a798edb8fd751b95750d8645d736dd739a020e7fc7d5627dac4d1c35d8295')\n version('4.4.1', sha256='6d48398b3112efb733b254edede4b7f3262c28bd19f665b64ef1acf6ec5cd74f')\n version('4.4.0', sha256='d516427c3bd689205e6693c9616302ef34017b91ada3c9ea3fca6e90702b7ffe')\n version('4.3.1', sha256='8219d3eaa3e4d4efc5f349114e41a40f0986c91a960846bb81d5da817fb7cc3f')\n version('4.3.0', sha256='f214c661328c836e02b6f185f98f3eccd7ce396791937493ffa1babf5e3267ab')\n version('4.2.2', sha256='a876da43e01acec2c305abdd8e6aa55f052bab1196171ccf1cb9a6aa230298b0')\n version('4.2.1', sha256='081a5d4db33db58697be2d682b92f79b2c239493445f13dd457c15bc3e52c874')\n version('4.2.0', sha256='723b3d4baac20f0c9cd91fc75c3e813636ecb6c6e303fb34d628c3df078985a7')\n version('4.1.1', sha256='d8c5555386d0f18f1336dea9800f9f0fe96dcecc9757c0f980e11fdfadb661ff')\n version('4.1.0', sha256='e0e150ad55e487e49054efc9a4b0e2e17f27e1de77444b26760789077b146d86')\n\n depends_on('[email protected]:2.8,3.3:', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.0:', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.2:', type=('build', 'run'))\n depends_on('py-setuptools', type='build', when='@5:')\n depends_on('[email protected]:', when='@:4.999', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.0.0:', type=('build', 'run'))\n depends_on('[email protected]:', type=('build', 'run'))\n depends_on('py-jupyter-client', type=('build', 'run'))\n depends_on('[email protected]:', when='@:4.999', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.0.0:', type=('build', 'run'))\n depends_on('py-appnope', when='platform=darwin', type=('build', 'run'))\n depends_on('py-pytest@:5.3.3,5.3.5:', type='test')\n depends_on('py-pytest-cov', type='test')\n # depends_on('py-flaky', type='test')\n depends_on('py-nose', type='test')\n", "path": "var/spack/repos/builtin/packages/py-ipykernel/package.py"}], "after_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\n\nclass PyIpykernel(PythonPackage):\n \"\"\"IPython Kernel for Jupyter\"\"\"\n\n homepage = \"https://pypi.python.org/pypi/ipykernel\"\n url = \"https://pypi.io/packages/source/i/ipykernel/ipykernel-5.3.4.tar.gz\"\n\n version('5.3.4', sha256='9b2652af1607986a1b231c62302d070bc0534f564c393a5d9d130db9abbbe89d')\n version('5.1.1', sha256='f0e962052718068ad3b1d8bcc703794660858f58803c3798628817f492a8769c')\n version('5.1.0', sha256='0fc0bf97920d454102168ec2008620066878848fcfca06c22b669696212e292f')\n version('4.10.0', sha256='699103c8e64886e3ec7053f2a6aa83bb90426063526f63a818732ff385202bad')\n version('4.5.0', sha256='245a798edb8fd751b95750d8645d736dd739a020e7fc7d5627dac4d1c35d8295')\n version('4.4.1', sha256='6d48398b3112efb733b254edede4b7f3262c28bd19f665b64ef1acf6ec5cd74f')\n version('4.4.0', sha256='d516427c3bd689205e6693c9616302ef34017b91ada3c9ea3fca6e90702b7ffe')\n version('4.3.1', sha256='8219d3eaa3e4d4efc5f349114e41a40f0986c91a960846bb81d5da817fb7cc3f')\n version('4.3.0', sha256='f214c661328c836e02b6f185f98f3eccd7ce396791937493ffa1babf5e3267ab')\n version('4.2.2', sha256='a876da43e01acec2c305abdd8e6aa55f052bab1196171ccf1cb9a6aa230298b0')\n version('4.2.1', sha256='081a5d4db33db58697be2d682b92f79b2c239493445f13dd457c15bc3e52c874')\n version('4.2.0', sha256='723b3d4baac20f0c9cd91fc75c3e813636ecb6c6e303fb34d628c3df078985a7')\n version('4.1.1', sha256='d8c5555386d0f18f1336dea9800f9f0fe96dcecc9757c0f980e11fdfadb661ff')\n version('4.1.0', sha256='e0e150ad55e487e49054efc9a4b0e2e17f27e1de77444b26760789077b146d86')\n\n depends_on('[email protected]:2.8,3.3:', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.0:', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.2:', type=('build', 'run'))\n depends_on('py-setuptools', type='build', when='@5:')\n depends_on('[email protected]:', when='@:4.999', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.0.0:', type=('build', 'run'))\n depends_on('[email protected]:', type=('build', 'run'))\n depends_on('py-jupyter-client', type=('build', 'run'))\n depends_on('[email protected]:', when='@:4.999', type=('build', 'run'))\n depends_on('[email protected]:', when='@5.0.0:', type=('build', 'run'))\n depends_on('py-appnope', when='platform=darwin', type=('build', 'run'))\n depends_on('py-pytest@:5.3.3,5.3.5:', type='test')\n depends_on('py-pytest-cov', type='test')\n # depends_on('py-flaky', type='test')\n depends_on('py-nose', type='test')\n\n phases = ['build', 'install', 'install_data']\n\n def install_data(self):\n \"\"\" install the Jupyter kernel spec \"\"\"\n self.spec['python'].command('-m ipykernel', ['install'])\n", "path": "var/spack/repos/builtin/packages/py-ipykernel/package.py"}]} |
gh_patches_debug_1004 | rasdani/github-patches | git_diff | spack__spack-43770 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Installation issue: nettle fails to build due to undocumented openssl dependency?
### Steps to reproduce the issue
```console
$ spack spec -I <spec>
Input spec
--------------------------------
- nettle
Concretized
--------------------------------
- [email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]~guile build_system=generic arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+cxx build_system=autotools libs=shared,static patches=69ad2e2 arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools patches=35c4492,7793209,a49dd5b arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+cpanm+opcode+open+shared+threads build_system=generic patches=714e4d1 arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+cxx~docs+stl build_system=autotools patches=26090f4,b231fcc arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]~debug~pic+shared build_system=generic arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools patches=bbf97f1 arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]~symlinks+termlib abi=none build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+compat+opt build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+sigsegv build_system=autotools patches=9dc5fbd,bfdffa7 arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools libs=shared,static arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
```
### Error message
<details><summary>Error message</summary>
<pre>
==> nettle: Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
'make' '-j16' 'V=1'
...
1151 nettle-openssl.o: In function `openssl_md5_init':
>> 1152 /localData/000scratch/rowanw/spack-stage/spack-stage-nettle-3.9.1-bv6yy4efn7x73ybk5at6bly7tplvvul
5/spack-src/examples/nettle-openssl.c:408: undefined reference to `EVP_MD_CTX_new'
1153 nettle-openssl.o: In function `openssl_sha1_init':
>> 1154 /localData/000scratch/rowanw/spack-stage/spack-stage-nettle-3.9.1-bv6yy4efn7x73ybk5at6bly7tplvvul
5/spack-src/examples/nettle-openssl.c:409: undefined reference to `EVP_MD_CTX_new'
>> 1155 collect2: error: ld returned 1 exit status
>> 1156 make[1]: *** [Makefile:100: nettle-benchmark] Error 1
</pre></details>
### Information on your system
* **Spack:** 0.21.0 (c35700db51bfc673798643697df3ef0e8a5177f1)
* **Python:** 3.8.18
* **Platform:** linux-centos7-ivybridge
* **Concretizer:** clingo
### Additional information
A [quick google](https://stackoverflow.com/questions/46768071/openssl-linking-undefined-reference-evp-md-ctx-new-and-fre) of the error message suggests this is due to linking against an old openssl version, which checks out as I'm running on centos 7 and the default system libcrypto does not include the missing symbol while a newer version does:
```
$ ls -al /lib64/libcrypto.so
lrwxrwxrwx 1 root root 19 Apr 11 2023 /lib64/libcrypto.so -> libcrypto.so.1.0.2k
$ nm --dynamic /lib64/libcrypto.so.1.0.2k |grep EVP_MD_CTX_new
$ nm --dynamic /lib64/libcrypto.so.1.1.1k |grep EVP_MD_CTX_new
000000000015be20 T EVP_MD_CTX_new
```
Obviously spack shouldn't be relying on the system library; the nettle package doesn't specify any kind of dependency on openssl so that seems like a bug.
### General information
- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [X] I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers
- [X] I have uploaded the build log and environment files
- [X] I have searched the issues of this repo and believe this is not a duplicate
--- END ISSUE ---
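Before the package file below, one clarification that may help localize the problem: nettle only touches OpenSSL to build its example and benchmark programs, and its configure script exposes a switch for that probe. The fragment below is a sketch written for this note; it assumes the Spack package API shown elsewhere in this document and the `--disable-openssl` configure option (which the diff further down in this entry also relies on), and it is not presented as the repository's final change.
```python
# Illustrative fragment only; it runs inside a Spack repository, not standalone.
from spack.package import *


class Nettle(AutotoolsPackage, GNUMirrorPackage):
    # ... versions and depends_on("gmp"), depends_on("m4", type="build") as before ...

    def configure_args(self):
        # Keep the C99 flag, but stop the examples/benchmarks from linking
        # against whatever libcrypto happens to be on the host system.
        return [
            "CFLAGS={0}".format(self.compiler.c99_flag),
            "--disable-openssl",
        ]
```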
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/nettle/package.py`
Content:
```
1 # Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack.package import *
7
8
9 class Nettle(AutotoolsPackage, GNUMirrorPackage):
10 """The Nettle package contains the low-level cryptographic library
11 that is designed to fit easily in many contexts."""
12
13 homepage = "https://www.lysator.liu.se/~nisse/nettle/"
14 gnu_mirror_path = "nettle/nettle-3.3.tar.gz"
15
16 license("GPL-2.0-or-later OR LGPL-3.0-or-later")
17
18 version("3.9.1", sha256="ccfeff981b0ca71bbd6fbcb054f407c60ffb644389a5be80d6716d5b550c6ce3")
19 version("3.8.1", sha256="364f3e2b77cd7dcde83fd7c45219c834e54b0c75e428b6f894a23d12dd41cbfe")
20 version("3.4.1", sha256="f941cf1535cd5d1819be5ccae5babef01f6db611f9b5a777bae9c7604b8a92ad")
21 version("3.4", sha256="ae7a42df026550b85daca8389b6a60ba6313b0567f374392e54918588a411e94")
22 version("3.3", sha256="46942627d5d0ca11720fec18d81fc38f7ef837ea4197c1f630e71ce0d470b11e")
23 version("3.2", sha256="ea4283def236413edab5a4cf9cf32adf540c8df1b9b67641cfc2302fca849d97")
24 version("2.7.1", sha256="bc71ebd43435537d767799e414fce88e521b7278d48c860651216e1fc6555b40")
25 version("2.7", sha256="c294ea133c05382cc2effb1734d49f4abeb1ad8515543a333de49a11422cd4d6")
26
27 depends_on("gmp")
28 depends_on("m4", type="build")
29 depends_on("openssl")
30
31 def configure_args(self):
32 return ["CFLAGS={0}".format(self.compiler.c99_flag)]
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/var/spack/repos/builtin/packages/nettle/package.py b/var/spack/repos/builtin/packages/nettle/package.py
--- a/var/spack/repos/builtin/packages/nettle/package.py
+++ b/var/spack/repos/builtin/packages/nettle/package.py
@@ -26,7 +26,11 @@
depends_on("gmp")
depends_on("m4", type="build")
- depends_on("openssl")
+
+ def flag_handler(self, name, flags):
+ if name == "cflags":
+ flags.append(self.compiler.c99_flag)
+ return (flags, None, None)
def configure_args(self):
- return ["CFLAGS={0}".format(self.compiler.c99_flag)]
+ return ["--disable-openssl"]
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/nettle/package.py b/var/spack/repos/builtin/packages/nettle/package.py\n--- a/var/spack/repos/builtin/packages/nettle/package.py\n+++ b/var/spack/repos/builtin/packages/nettle/package.py\n@@ -26,7 +26,11 @@\n \n depends_on(\"gmp\")\n depends_on(\"m4\", type=\"build\")\n- depends_on(\"openssl\")\n+\n+ def flag_handler(self, name, flags):\n+ if name == \"cflags\":\n+ flags.append(self.compiler.c99_flag)\n+ return (flags, None, None)\n \n def configure_args(self):\n- return [\"CFLAGS={0}\".format(self.compiler.c99_flag)]\n+ return [\"--disable-openssl\"]\n", "issue": "Installation issue: nettle fails to build due to undocumented openssl dependency?\n### Steps to reproduce the issue\n\n```console\r\n$ spack spec -I <spec>\r\nInput spec\r\n--------------------------------\r\n - nettle\r\n\r\nConcretized\r\n--------------------------------\r\n - [email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~guile build_system=generic arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+cxx build_system=autotools libs=shared,static patches=69ad2e2 arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools patches=35c4492,7793209,a49dd5b arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+cpanm+opcode+open+shared+threads build_system=generic patches=714e4d1 arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+cxx~docs+stl build_system=autotools patches=26090f4,b231fcc arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~debug~pic+shared build_system=generic arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools patches=bbf97f1 arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~symlinks+termlib abi=none build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+compat+opt build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+sigsegv build_system=autotools patches=9dc5fbd,bfdffa7 arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools libs=shared,static arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n```\r\n\n\n### Error message\n\n<details><summary>Error message</summary>\r\n\r\n<pre>\r\n==> nettle: Executing phase: 'build'\r\n==> Error: ProcessError: Command exited with status 2:\r\n 'make' '-j16' 'V=1'\r\n...\r\n 1151 nettle-openssl.o: In function `openssl_md5_init':\r\n >> 1152 /localData/000scratch/rowanw/spack-stage/spack-stage-nettle-3.9.1-bv6yy4efn7x73ybk5at6bly7tplvvul\r\n 5/spack-src/examples/nettle-openssl.c:408: undefined reference to `EVP_MD_CTX_new'\r\n 1153 nettle-openssl.o: In function `openssl_sha1_init':\r\n >> 1154 /localData/000scratch/rowanw/spack-stage/spack-stage-nettle-3.9.1-bv6yy4efn7x73ybk5at6bly7tplvvul\r\n 5/spack-src/examples/nettle-openssl.c:409: 
undefined reference to `EVP_MD_CTX_new'\r\n >> 1155 collect2: error: ld returned 1 exit status\r\n >> 1156 make[1]: *** [Makefile:100: nettle-benchmark] Error 1\r\n</pre></details>\r\n\n\n### Information on your system\n\n* **Spack:** 0.21.0 (c35700db51bfc673798643697df3ef0e8a5177f1)\r\n* **Python:** 3.8.18\r\n* **Platform:** linux-centos7-ivybridge\r\n* **Concretizer:** clingo\n\n### Additional information\n\nA [quick google](https://stackoverflow.com/questions/46768071/openssl-linking-undefined-reference-evp-md-ctx-new-and-fre) of the error message suggests this is due to linking against an old openssl version, which checks out as I'm running on centos 7 and the default system libcrypto does not include the missing symbol while a newer version does:\r\n\r\n```\r\n$ ls -al /lib64/libcrypto.so\r\nlrwxrwxrwx 1 root root 19 Apr 11 2023 /lib64/libcrypto.so -> libcrypto.so.1.0.2k\r\n\r\n$ nm --dynamic /lib64/libcrypto.so.1.0.2k |grep EVP_MD_CTX_new\r\n\r\n$ nm --dynamic /lib64/libcrypto.so.1.1.1k |grep EVP_MD_CTX_new\r\n000000000015be20 T EVP_MD_CTX_new\r\n```\r\n\r\nObviously spack shouldn't be relying on the system library; the nettle package doesn't specify any kind of dependency on openssl so that seems like a bug.\n\n### General information\n\n- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform\n- [X] I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers\n- [X] I have uploaded the build log and environment files\n- [X] I have searched the issues of this repo and believe this is not a duplicate\n", "before_files": [{"content": "# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack.package import *\n\n\nclass Nettle(AutotoolsPackage, GNUMirrorPackage):\n \"\"\"The Nettle package contains the low-level cryptographic library\n that is designed to fit easily in many contexts.\"\"\"\n\n homepage = \"https://www.lysator.liu.se/~nisse/nettle/\"\n gnu_mirror_path = \"nettle/nettle-3.3.tar.gz\"\n\n license(\"GPL-2.0-or-later OR LGPL-3.0-or-later\")\n\n version(\"3.9.1\", sha256=\"ccfeff981b0ca71bbd6fbcb054f407c60ffb644389a5be80d6716d5b550c6ce3\")\n version(\"3.8.1\", sha256=\"364f3e2b77cd7dcde83fd7c45219c834e54b0c75e428b6f894a23d12dd41cbfe\")\n version(\"3.4.1\", sha256=\"f941cf1535cd5d1819be5ccae5babef01f6db611f9b5a777bae9c7604b8a92ad\")\n version(\"3.4\", sha256=\"ae7a42df026550b85daca8389b6a60ba6313b0567f374392e54918588a411e94\")\n version(\"3.3\", sha256=\"46942627d5d0ca11720fec18d81fc38f7ef837ea4197c1f630e71ce0d470b11e\")\n version(\"3.2\", sha256=\"ea4283def236413edab5a4cf9cf32adf540c8df1b9b67641cfc2302fca849d97\")\n version(\"2.7.1\", sha256=\"bc71ebd43435537d767799e414fce88e521b7278d48c860651216e1fc6555b40\")\n version(\"2.7\", sha256=\"c294ea133c05382cc2effb1734d49f4abeb1ad8515543a333de49a11422cd4d6\")\n\n depends_on(\"gmp\")\n depends_on(\"m4\", type=\"build\")\n depends_on(\"openssl\")\n\n def configure_args(self):\n return [\"CFLAGS={0}\".format(self.compiler.c99_flag)]\n", "path": "var/spack/repos/builtin/packages/nettle/package.py"}], "after_files": [{"content": "# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack.package import *\n\n\nclass Nettle(AutotoolsPackage, GNUMirrorPackage):\n \"\"\"The Nettle package contains the low-level cryptographic library\n that is designed to fit easily in many contexts.\"\"\"\n\n homepage = \"https://www.lysator.liu.se/~nisse/nettle/\"\n gnu_mirror_path = \"nettle/nettle-3.3.tar.gz\"\n\n license(\"GPL-2.0-or-later OR LGPL-3.0-or-later\")\n\n version(\"3.9.1\", sha256=\"ccfeff981b0ca71bbd6fbcb054f407c60ffb644389a5be80d6716d5b550c6ce3\")\n version(\"3.8.1\", sha256=\"364f3e2b77cd7dcde83fd7c45219c834e54b0c75e428b6f894a23d12dd41cbfe\")\n version(\"3.4.1\", sha256=\"f941cf1535cd5d1819be5ccae5babef01f6db611f9b5a777bae9c7604b8a92ad\")\n version(\"3.4\", sha256=\"ae7a42df026550b85daca8389b6a60ba6313b0567f374392e54918588a411e94\")\n version(\"3.3\", sha256=\"46942627d5d0ca11720fec18d81fc38f7ef837ea4197c1f630e71ce0d470b11e\")\n version(\"3.2\", sha256=\"ea4283def236413edab5a4cf9cf32adf540c8df1b9b67641cfc2302fca849d97\")\n version(\"2.7.1\", sha256=\"bc71ebd43435537d767799e414fce88e521b7278d48c860651216e1fc6555b40\")\n version(\"2.7\", sha256=\"c294ea133c05382cc2effb1734d49f4abeb1ad8515543a333de49a11422cd4d6\")\n\n depends_on(\"gmp\")\n depends_on(\"m4\", type=\"build\")\n\n def flag_handler(self, name, flags):\n if name == \"cflags\":\n flags.append(self.compiler.c99_flag)\n return (flags, None, None)\n\n def configure_args(self):\n return [\"--disable-openssl\"]\n", "path": "var/spack/repos/builtin/packages/nettle/package.py"}]} |
gh_patches_debug_1005 | rasdani/github-patches | git_diff | superduper-io__superduper-1837 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG]: Variable inject for list values in a serialised component missing kwargs
c = Component()
c.dict() -> {some keys: [ {}, { 'v': Variable_type }] }
due to
```
def _replace_variables(x, db, **kwargs):
from .document import Document
if isinstance(x, dict):
return {
_replace_variables(k, db, **kwargs): _replace_variables(v, db, **kwargs)
for k, v in x.items()
}
if isinstance(x, (list, tuple)):
return [_replace_variables(v, db) for v in x] -> BUG (need **kwargs here)
if isinstance(x, Variable):
return x.set(db, **kwargs)
if isinstance(x, Document):
return x.set_variables(db, **kwargs)
return x
```
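The effect of that missing `**kwargs` is easier to see with a stripped-down stand-in. Nothing below is superduperdb code: `replace` and the `"$name"` string convention are invented for this note to mimic `_replace_variables` and `Variable.set`, purely to show why a variable nested inside a list loses its values.
```python
# Self-contained illustration with invented stand-ins (not superduperdb code).
def replace(x, **kwargs):
    if isinstance(x, dict):
        return {k: replace(v, **kwargs) for k, v in x.items()}
    if isinstance(x, (list, tuple)):
        return [replace(v) for v in x]  # same bug: kwargs are dropped here
    if isinstance(x, str) and x.startswith("$"):
        return kwargs[x[1:]]            # stand-in for Variable.set(db, **kwargs)
    return x


def replace_fixed(x, **kwargs):
    if isinstance(x, dict):
        return {k: replace_fixed(v, **kwargs) for k, v in x.items()}
    if isinstance(x, (list, tuple)):
        return [replace_fixed(v, **kwargs) for v in x]  # kwargs forwarded
    if isinstance(x, str) and x.startswith("$"):
        return kwargs[x[1:]]
    return x


doc = {"models": ["$model_name"]}
try:
    replace(doc, model_name="resnet")
except KeyError as exc:
    print("variable inside a list lost its kwargs:", exc)   # KeyError: 'model_name'

print(replace_fixed(doc, model_name="resnet"))               # {'models': ['resnet']}
```
In the real function the fix is the same one-token change in the list branch, exactly as the `-> BUG (need **kwargs here)` annotation in the issue already points out.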
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `superduperdb/base/serializable.py`
Content:
```
1 import dataclasses as dc
2 import importlib
3 import typing as t
4 from copy import deepcopy
5
6 from superduperdb.base.leaf import Leaf
7 from superduperdb.misc.serialization import asdict
8
9
10 def _from_dict(r: t.Any, db: None = None) -> t.Any:
11 from superduperdb.base.document import Document
12 from superduperdb.components.datatype import File, LazyArtifact
13
14 if isinstance(r, Document):
15 r = r.unpack(db, leaves_to_keep=(LazyArtifact, File))
16 if isinstance(r, (list, tuple)):
17 return [_from_dict(i, db=db) for i in r]
18 if not isinstance(r, dict):
19 return r
20 if '_content' in r:
21 r = r['_content']
22 if 'cls' in r and 'module' in r and 'dict' in r:
23 module = importlib.import_module(r['module'])
24 cls_ = getattr(module, r['cls'])
25 kwargs = _from_dict(r['dict'])
26 kwargs_init = {k: v for k, v in kwargs.items() if k not in cls_.set_post_init}
27 kwargs_post_init = {k: v for k, v in kwargs.items() if k in cls_.set_post_init}
28 instance = cls_(**kwargs_init)
29 for k, v in kwargs_post_init.items():
30 setattr(instance, k, v)
31 return instance
32 else:
33 return {k: _from_dict(v, db=db) for k, v in r.items()}
34
35
36 class VariableError(Exception):
37 ...
38
39
40 def _find_variables(r):
41 if isinstance(r, dict):
42 return sum([_find_variables(v) for v in r.values()], [])
43 elif isinstance(r, (list, tuple)):
44 return sum([_find_variables(v) for v in r], [])
45 elif isinstance(r, Variable):
46 return [r]
47 return []
48
49
50 def _replace_variables(x, db, **kwargs):
51 from .document import Document
52
53 if isinstance(x, dict):
54 return {
55 _replace_variables(k, db, **kwargs): _replace_variables(v, db, **kwargs)
56 for k, v in x.items()
57 }
58 if isinstance(x, (list, tuple)):
59 return [_replace_variables(v, db) for v in x]
60 if isinstance(x, Variable):
61 return x.set(db, **kwargs)
62 if isinstance(x, Document):
63 return x.set_variables(db, **kwargs)
64 return x
65
66
67 @dc.dataclass
68 class Serializable(Leaf):
69 """
70 Base class for serializable objects. This class is used to serialize and
71 deserialize objects to and from JSON + Artifact instances.
72 """
73
74 set_post_init: t.ClassVar[t.Sequence] = ()
75
76 @property
77 def unique_id(self):
78 return str(hash(self.dict().encode()))
79
80 @property
81 def variables(self) -> t.List['Variable']:
82 out = {}
83 r = self.encode(leaf_types_to_keep=(Variable,))
84 v = _find_variables(r)
85 for var in v:
86 out[var.value] = var
87 return sorted(list(out.values()), key=lambda x: x.value)
88
89 def set_variables(self, db, **kwargs) -> 'Serializable':
90 """
91 Set free variables of self.
92
93 :param db:
94 """
95 r = self.encode(leaf_types_to_keep=(Variable,))
96 r = _replace_variables(r, db, **kwargs)
97 return self.decode(r)
98
99 def encode(
100 self,
101 leaf_types_to_keep: t.Sequence = (),
102 ):
103 r = dict(self.dict().encode(leaf_types_to_keep=leaf_types_to_keep))
104 r['leaf_type'] = 'serializable'
105 return {'_content': r}
106
107 @classmethod
108 def decode(cls, r, db: t.Optional[t.Any] = None):
109 return _from_dict(r, db=db)
110
111 def dict(self):
112 from superduperdb import Document
113
114 return Document(asdict(self))
115
116 def copy(self):
117 return deepcopy(self)
118
119
120 @dc.dataclass
121 class Variable(Serializable):
122 """
123 Mechanism for allowing "free variables" in a serializable object.
124 The idea is to allow a variable to be set at runtime, rather than
125 at object creation time.
126
127 :param value: The name of the variable to be set at runtime.
128 :param setter_callback: A callback function that takes the value, datalayer
129 and kwargs as input and returns the formatted
130 variable.
131 """
132
133 value: t.Any
134 setter_callback: dc.InitVar[t.Optional[t.Callable]] = None
135
136 def __post_init__(self, setter_callback):
137 self.setter_callback = setter_callback
138
139 def __repr__(self) -> str:
140 return '$' + str(self.value)
141
142 def __hash__(self) -> int:
143 return hash(self.value)
144
145 def set(self, db, **kwargs):
146 """
147 Get the intended value from the values of the global variables.
148
149 >>> Variable('number').set(db, number=1.5, other='test')
150 1.5
151
152 :param db: The datalayer instance.
153 :param kwargs: Variables to be used in the setter_callback
154 or as formatting variables.
155 """
156 if self.setter_callback is not None:
157 try:
158 return self.setter_callback(db, self.value, kwargs)
159 except Exception as e:
160 raise VariableError(
161 f'Could not set variable {self.value} '
162 f'based on {self.setter_callback} and **kwargs: {kwargs}'
163 ) from e
164 else:
165 assert isinstance(self.value, str)
166 return kwargs[self.value]
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/superduperdb/base/serializable.py b/superduperdb/base/serializable.py
--- a/superduperdb/base/serializable.py
+++ b/superduperdb/base/serializable.py
@@ -56,7 +56,7 @@
for k, v in x.items()
}
if isinstance(x, (list, tuple)):
- return [_replace_variables(v, db) for v in x]
+ return [_replace_variables(v, db, **kwargs) for v in x]
if isinstance(x, Variable):
return x.set(db, **kwargs)
if isinstance(x, Document):
| {"golden_diff": "diff --git a/superduperdb/base/serializable.py b/superduperdb/base/serializable.py\n--- a/superduperdb/base/serializable.py\n+++ b/superduperdb/base/serializable.py\n@@ -56,7 +56,7 @@\n for k, v in x.items()\n }\n if isinstance(x, (list, tuple)):\n- return [_replace_variables(v, db) for v in x]\n+ return [_replace_variables(v, db, **kwargs) for v in x]\n if isinstance(x, Variable):\n return x.set(db, **kwargs)\n if isinstance(x, Document):\n", "issue": "[BUG]: Variable inject for list values in a serialised component missing kwargs\nc = Component()\r\n\r\nc.dict() -> {some keys: [ {}, { 'v': Variable_type }] }\r\n\r\ndue to \r\n```\r\n\r\ndef _replace_variables(x, db, **kwargs):\r\n from .document import Document\r\n\r\n if isinstance(x, dict):\r\n return {\r\n _replace_variables(k, db, **kwargs): _replace_variables(v, db, **kwargs)\r\n for k, v in x.items()\r\n }\r\n if isinstance(x, (list, tuple)):\r\n return [_replace_variables(v, db) for v in x] -> BUG (need **kwargs here)\r\n if isinstance(x, Variable):\r\n return x.set(db, **kwargs)\r\n if isinstance(x, Document):\r\n return x.set_variables(db, **kwargs)\r\n return x\r\n\r\n```\n", "before_files": [{"content": "import dataclasses as dc\nimport importlib\nimport typing as t\nfrom copy import deepcopy\n\nfrom superduperdb.base.leaf import Leaf\nfrom superduperdb.misc.serialization import asdict\n\n\ndef _from_dict(r: t.Any, db: None = None) -> t.Any:\n from superduperdb.base.document import Document\n from superduperdb.components.datatype import File, LazyArtifact\n\n if isinstance(r, Document):\n r = r.unpack(db, leaves_to_keep=(LazyArtifact, File))\n if isinstance(r, (list, tuple)):\n return [_from_dict(i, db=db) for i in r]\n if not isinstance(r, dict):\n return r\n if '_content' in r:\n r = r['_content']\n if 'cls' in r and 'module' in r and 'dict' in r:\n module = importlib.import_module(r['module'])\n cls_ = getattr(module, r['cls'])\n kwargs = _from_dict(r['dict'])\n kwargs_init = {k: v for k, v in kwargs.items() if k not in cls_.set_post_init}\n kwargs_post_init = {k: v for k, v in kwargs.items() if k in cls_.set_post_init}\n instance = cls_(**kwargs_init)\n for k, v in kwargs_post_init.items():\n setattr(instance, k, v)\n return instance\n else:\n return {k: _from_dict(v, db=db) for k, v in r.items()}\n\n\nclass VariableError(Exception):\n ...\n\n\ndef _find_variables(r):\n if isinstance(r, dict):\n return sum([_find_variables(v) for v in r.values()], [])\n elif isinstance(r, (list, tuple)):\n return sum([_find_variables(v) for v in r], [])\n elif isinstance(r, Variable):\n return [r]\n return []\n\n\ndef _replace_variables(x, db, **kwargs):\n from .document import Document\n\n if isinstance(x, dict):\n return {\n _replace_variables(k, db, **kwargs): _replace_variables(v, db, **kwargs)\n for k, v in x.items()\n }\n if isinstance(x, (list, tuple)):\n return [_replace_variables(v, db) for v in x]\n if isinstance(x, Variable):\n return x.set(db, **kwargs)\n if isinstance(x, Document):\n return x.set_variables(db, **kwargs)\n return x\n\n\[email protected]\nclass Serializable(Leaf):\n \"\"\"\n Base class for serializable objects. 
This class is used to serialize and\n deserialize objects to and from JSON + Artifact instances.\n \"\"\"\n\n set_post_init: t.ClassVar[t.Sequence] = ()\n\n @property\n def unique_id(self):\n return str(hash(self.dict().encode()))\n\n @property\n def variables(self) -> t.List['Variable']:\n out = {}\n r = self.encode(leaf_types_to_keep=(Variable,))\n v = _find_variables(r)\n for var in v:\n out[var.value] = var\n return sorted(list(out.values()), key=lambda x: x.value)\n\n def set_variables(self, db, **kwargs) -> 'Serializable':\n \"\"\"\n Set free variables of self.\n\n :param db:\n \"\"\"\n r = self.encode(leaf_types_to_keep=(Variable,))\n r = _replace_variables(r, db, **kwargs)\n return self.decode(r)\n\n def encode(\n self,\n leaf_types_to_keep: t.Sequence = (),\n ):\n r = dict(self.dict().encode(leaf_types_to_keep=leaf_types_to_keep))\n r['leaf_type'] = 'serializable'\n return {'_content': r}\n\n @classmethod\n def decode(cls, r, db: t.Optional[t.Any] = None):\n return _from_dict(r, db=db)\n\n def dict(self):\n from superduperdb import Document\n\n return Document(asdict(self))\n\n def copy(self):\n return deepcopy(self)\n\n\[email protected]\nclass Variable(Serializable):\n \"\"\"\n Mechanism for allowing \"free variables\" in a serializable object.\n The idea is to allow a variable to be set at runtime, rather than\n at object creation time.\n\n :param value: The name of the variable to be set at runtime.\n :param setter_callback: A callback function that takes the value, datalayer\n and kwargs as input and returns the formatted\n variable.\n \"\"\"\n\n value: t.Any\n setter_callback: dc.InitVar[t.Optional[t.Callable]] = None\n\n def __post_init__(self, setter_callback):\n self.setter_callback = setter_callback\n\n def __repr__(self) -> str:\n return '$' + str(self.value)\n\n def __hash__(self) -> int:\n return hash(self.value)\n\n def set(self, db, **kwargs):\n \"\"\"\n Get the intended value from the values of the global variables.\n\n >>> Variable('number').set(db, number=1.5, other='test')\n 1.5\n\n :param db: The datalayer instance.\n :param kwargs: Variables to be used in the setter_callback\n or as formatting variables.\n \"\"\"\n if self.setter_callback is not None:\n try:\n return self.setter_callback(db, self.value, kwargs)\n except Exception as e:\n raise VariableError(\n f'Could not set variable {self.value} '\n f'based on {self.setter_callback} and **kwargs: {kwargs}'\n ) from e\n else:\n assert isinstance(self.value, str)\n return kwargs[self.value]\n", "path": "superduperdb/base/serializable.py"}], "after_files": [{"content": "import dataclasses as dc\nimport importlib\nimport typing as t\nfrom copy import deepcopy\n\nfrom superduperdb.base.leaf import Leaf\nfrom superduperdb.misc.serialization import asdict\n\n\ndef _from_dict(r: t.Any, db: None = None) -> t.Any:\n from superduperdb.base.document import Document\n from superduperdb.components.datatype import File, LazyArtifact\n\n if isinstance(r, Document):\n r = r.unpack(db, leaves_to_keep=(LazyArtifact, File))\n if isinstance(r, (list, tuple)):\n return [_from_dict(i, db=db) for i in r]\n if not isinstance(r, dict):\n return r\n if '_content' in r:\n r = r['_content']\n if 'cls' in r and 'module' in r and 'dict' in r:\n module = importlib.import_module(r['module'])\n cls_ = getattr(module, r['cls'])\n kwargs = _from_dict(r['dict'])\n kwargs_init = {k: v for k, v in kwargs.items() if k not in cls_.set_post_init}\n kwargs_post_init = {k: v for k, v in kwargs.items() if k in cls_.set_post_init}\n instance = 
cls_(**kwargs_init)\n for k, v in kwargs_post_init.items():\n setattr(instance, k, v)\n return instance\n else:\n return {k: _from_dict(v, db=db) for k, v in r.items()}\n\n\nclass VariableError(Exception):\n ...\n\n\ndef _find_variables(r):\n if isinstance(r, dict):\n return sum([_find_variables(v) for v in r.values()], [])\n elif isinstance(r, (list, tuple)):\n return sum([_find_variables(v) for v in r], [])\n elif isinstance(r, Variable):\n return [r]\n return []\n\n\ndef _replace_variables(x, db, **kwargs):\n from .document import Document\n\n if isinstance(x, dict):\n return {\n _replace_variables(k, db, **kwargs): _replace_variables(v, db, **kwargs)\n for k, v in x.items()\n }\n if isinstance(x, (list, tuple)):\n return [_replace_variables(v, db, **kwargs) for v in x]\n if isinstance(x, Variable):\n return x.set(db, **kwargs)\n if isinstance(x, Document):\n return x.set_variables(db, **kwargs)\n return x\n\n\[email protected]\nclass Serializable(Leaf):\n \"\"\"\n Base class for serializable objects. This class is used to serialize and\n deserialize objects to and from JSON + Artifact instances.\n \"\"\"\n\n set_post_init: t.ClassVar[t.Sequence] = ()\n\n @property\n def unique_id(self):\n return str(hash(self.dict().encode()))\n\n @property\n def variables(self) -> t.List['Variable']:\n out = {}\n r = self.encode(leaf_types_to_keep=(Variable,))\n v = _find_variables(r)\n for var in v:\n out[var.value] = var\n return sorted(list(out.values()), key=lambda x: x.value)\n\n def set_variables(self, db, **kwargs) -> 'Serializable':\n \"\"\"\n Set free variables of self.\n\n :param db:\n \"\"\"\n r = self.encode(leaf_types_to_keep=(Variable,))\n r = _replace_variables(r, db, **kwargs)\n return self.decode(r)\n\n def encode(\n self,\n leaf_types_to_keep: t.Sequence = (),\n ):\n r = dict(self.dict().encode(leaf_types_to_keep=leaf_types_to_keep))\n r['leaf_type'] = 'serializable'\n return {'_content': r}\n\n @classmethod\n def decode(cls, r, db: t.Optional[t.Any] = None):\n return _from_dict(r, db=db)\n\n def dict(self):\n from superduperdb import Document\n\n return Document(asdict(self))\n\n def copy(self):\n return deepcopy(self)\n\n\[email protected]\nclass Variable(Serializable):\n \"\"\"\n Mechanism for allowing \"free variables\" in a serializable object.\n The idea is to allow a variable to be set at runtime, rather than\n at object creation time.\n\n :param value: The name of the variable to be set at runtime.\n :param setter_callback: A callback function that takes the value, datalayer\n and kwargs as input and returns the formatted\n variable.\n \"\"\"\n\n value: t.Any\n setter_callback: dc.InitVar[t.Optional[t.Callable]] = None\n\n def __post_init__(self, setter_callback):\n self.setter_callback = setter_callback\n\n def __repr__(self) -> str:\n return '$' + str(self.value)\n\n def __hash__(self) -> int:\n return hash(self.value)\n\n def set(self, db, **kwargs):\n \"\"\"\n Get the intended value from the values of the global variables.\n\n >>> Variable('number').set(db, number=1.5, other='test')\n 1.5\n\n :param db: The datalayer instance.\n :param kwargs: Variables to be used in the setter_callback\n or as formatting variables.\n \"\"\"\n if self.setter_callback is not None:\n try:\n return self.setter_callback(db, self.value, kwargs)\n except Exception as e:\n raise VariableError(\n f'Could not set variable {self.value} '\n f'based on {self.setter_callback} and **kwargs: {kwargs}'\n ) from e\n else:\n assert isinstance(self.value, str)\n return kwargs[self.value]\n", "path": 
"superduperdb/base/serializable.py"}]} |
gh_patches_debug_1006 | rasdani/github-patches | git_diff | DataBiosphere__toil-4528 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WES ignores host in production
When trying to run `toil server --host 0.0.0.0`, I noticed that it would always listen only on `127.0.0.1` no matter what `--host` is set to, but running with `--debug` didn't have this problem.
```
❯ toil server --host 0.0.0.0
...
[2022-11-11 16:50:46 +0000] [7173] [INFO] Starting gunicorn 20.1.0
[2022-11-11 16:50:46 +0000] [7173] [INFO] Listening at: http://127.0.0.1:8000
...
```
vs
```
❯ toil server --host 0.0.0.0 --debug
...
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:8080
...
```
I tracked the problem down to [this line](https://github.com/DataBiosphere/toil/blob/master/src/toil/server/wsgi_app.py#L44). It appears to overwrite the settings taken from the command line with Gunicorn's defaults before checking whether anything has been set, so `bind` ends up as `None` in the merged dict and the command-line value is never applied.
Swapping the dictionaries around seems to have fixed it.
```python
for key, value in {**vars(env_args), **self.options}.items():
```
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/TOIL-1242)
┆Issue Number: TOIL-1242
--- END ISSUE ---
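The suggested fix relies on Python's dict-unpacking precedence: in `{**a, **b}`, keys from `b` overwrite keys from `a`. A small sketch of why the original merge order discards the command-line settings (made-up values, not real Gunicorn configuration):

```python
cli_options = {"bind": "0.0.0.0:8000", "workers": 2}  # e.g. what --host/--port might produce
env_defaults = {"bind": None, "workers": None}        # parser defaults, mostly None

buggy = {**cli_options, **env_defaults}  # defaults win -> both values become None
fixed = {**env_defaults, **cli_options}  # CLI wins -> bind stays '0.0.0.0:8000'

print(buggy["bind"], fixed["bind"])  # None 0.0.0.0:8000
```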
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/toil/server/wsgi_app.py`
Content:
```
1 # Copyright (C) 2015-2021 Regents of the University of California
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any, Dict, Optional
15
16 from gunicorn.app.base import BaseApplication # type: ignore
17
18
19 class GunicornApplication(BaseApplication): # type: ignore
20 """
21 An entry point to integrate a Gunicorn WSGI server in Python. To start a
22 WSGI application with callable `app`, run the following code:
23
24 WSGIApplication(app, options={
25 ...
26 }).run()
27
28 For more details, see: https://docs.gunicorn.org/en/latest/custom.html
29 """
30 def __init__(self, app: object, options: Optional[Dict[str, Any]] = None):
31 self.options = options or {}
32 self.application = app
33 super().__init__()
34
35 def init(self, *args: Any) -> None:
36 pass
37
38 def load_config(self) -> None:
39 parser = self.cfg.parser()
40 env_args = parser.parse_args(self.cfg.get_cmd_args_from_env())
41
42 # TODO: also read from the Gunicorn config file?
43
44 for key, value in {**self.options, **vars(env_args)}.items():
45 if key in self.cfg.settings and value is not None:
46 self.cfg.set(key.lower(), value)
47
48 def load(self) -> object:
49 return self.application
50
51
52 def run_app(app: object, options: Optional[Dict[str, Any]] = None) -> None:
53 """
54 Run a Gunicorn WSGI server.
55 """
56 GunicornApplication(app, options=options).run()
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/toil/server/wsgi_app.py b/src/toil/server/wsgi_app.py
--- a/src/toil/server/wsgi_app.py
+++ b/src/toil/server/wsgi_app.py
@@ -41,7 +41,7 @@
# TODO: also read from the Gunicorn config file?
- for key, value in {**self.options, **vars(env_args)}.items():
+ for key, value in {**vars(env_args), **self.options}.items():
if key in self.cfg.settings and value is not None:
self.cfg.set(key.lower(), value)
| {"golden_diff": "diff --git a/src/toil/server/wsgi_app.py b/src/toil/server/wsgi_app.py\n--- a/src/toil/server/wsgi_app.py\n+++ b/src/toil/server/wsgi_app.py\n@@ -41,7 +41,7 @@\n \n # TODO: also read from the Gunicorn config file?\n \n- for key, value in {**self.options, **vars(env_args)}.items():\n+ for key, value in {**vars(env_args), **self.options}.items():\n if key in self.cfg.settings and value is not None:\n self.cfg.set(key.lower(), value)\n", "issue": "WES ignores host in production\nWhen trying to run `toil server --host 0.0.0.0`, I noticed that it would always only listen on `127.0.0.1` no matter what `--host` is set to but running with `--debug` didn't have this problem.\n\n```\n\u276f toil server --host 0.0.0.0\n...\n[2022-11-11 16:50:46 +0000] [7173] [INFO] Starting gunicorn 20.1.0\n[2022-11-11 16:50:46 +0000] [7173] [INFO] Listening at: http://127.0.0.1:8000\n...\n```\nvs\n```\n\u276f toil server --host 0.0.0.0 --debug\n...\nINFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.\n * Running on all addresses (0.0.0.0)\n * Running on http://127.0.0.1:8080\n...\n```\n\nI tracked the problem down to [this line](https://github.com/DataBiosphere/toil/blob/master/src/toil/server/wsgi_app.py#L44). It appears to be overwriting the settings taken from the command line with Gunicorn's defaults before checking to see if anything has been set which `bind` won't be as it's been set to `None` in the merge.\n\nSwapping the dictionaries around seems to have fixed it.\n```python\n for key, value in {**vars(env_args), **self.options}.items():\n```\n\n\u2506Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/TOIL-1242)\n\u2506Issue Number: TOIL-1242\n\n", "before_files": [{"content": "# Copyright (C) 2015-2021 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Dict, Optional\n\nfrom gunicorn.app.base import BaseApplication # type: ignore\n\n\nclass GunicornApplication(BaseApplication): # type: ignore\n \"\"\"\n An entry point to integrate a Gunicorn WSGI server in Python. 
To start a\n WSGI application with callable `app`, run the following code:\n\n WSGIApplication(app, options={\n ...\n }).run()\n\n For more details, see: https://docs.gunicorn.org/en/latest/custom.html\n \"\"\"\n def __init__(self, app: object, options: Optional[Dict[str, Any]] = None):\n self.options = options or {}\n self.application = app\n super().__init__()\n\n def init(self, *args: Any) -> None:\n pass\n\n def load_config(self) -> None:\n parser = self.cfg.parser()\n env_args = parser.parse_args(self.cfg.get_cmd_args_from_env())\n\n # TODO: also read from the Gunicorn config file?\n\n for key, value in {**self.options, **vars(env_args)}.items():\n if key in self.cfg.settings and value is not None:\n self.cfg.set(key.lower(), value)\n\n def load(self) -> object:\n return self.application\n\n\ndef run_app(app: object, options: Optional[Dict[str, Any]] = None) -> None:\n \"\"\"\n Run a Gunicorn WSGI server.\n \"\"\"\n GunicornApplication(app, options=options).run()\n", "path": "src/toil/server/wsgi_app.py"}], "after_files": [{"content": "# Copyright (C) 2015-2021 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Dict, Optional\n\nfrom gunicorn.app.base import BaseApplication # type: ignore\n\n\nclass GunicornApplication(BaseApplication): # type: ignore\n \"\"\"\n An entry point to integrate a Gunicorn WSGI server in Python. To start a\n WSGI application with callable `app`, run the following code:\n\n WSGIApplication(app, options={\n ...\n }).run()\n\n For more details, see: https://docs.gunicorn.org/en/latest/custom.html\n \"\"\"\n def __init__(self, app: object, options: Optional[Dict[str, Any]] = None):\n self.options = options or {}\n self.application = app\n super().__init__()\n\n def init(self, *args: Any) -> None:\n pass\n\n def load_config(self) -> None:\n parser = self.cfg.parser()\n env_args = parser.parse_args(self.cfg.get_cmd_args_from_env())\n\n # TODO: also read from the Gunicorn config file?\n\n for key, value in {**vars(env_args), **self.options}.items():\n if key in self.cfg.settings and value is not None:\n self.cfg.set(key.lower(), value)\n\n def load(self) -> object:\n return self.application\n\n\ndef run_app(app: object, options: Optional[Dict[str, Any]] = None) -> None:\n \"\"\"\n Run a Gunicorn WSGI server.\n \"\"\"\n GunicornApplication(app, options=options).run()\n", "path": "src/toil/server/wsgi_app.py"}]} |
gh_patches_debug_1007 | rasdani/github-patches | git_diff | pypa__pip-1283 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pip Generated Scripts and .py(a|w) on Windows
It appears setuptools also supports `.pya` and `.pyw` on Windows for generated script wrappers instead of `.exe`. We should also strip these when installing a Wheel.
https://pythonhosted.org/setuptools/easy_install.html#natural-script-launcher
--- END ISSUE ---
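For context, setuptools may emit several wrapper artefacts per entry point on Windows (`EP.exe`, `EP-script.py`, and per the linked documentation also `EP.pya`/`EP.pyw`), so recognising a wrapper means mapping the filename back to its entry-point name by suffix. A rough, hypothetical sketch of that mapping; it follows the issue's suggestion rather than pip's actual code:

```python
def wrapper_basename(filename):
    """Strip known setuptools wrapper suffixes to recover the entry-point name."""
    lower = filename.lower()
    for suffix in (".exe", "-script.py", ".pya", ".pyw"):
        if lower.endswith(suffix):
            return filename[: -len(suffix)]
    return filename

print(wrapper_basename("pip.exe"))        # pip
print(wrapper_basename("pip-script.py"))  # pip
print(wrapper_basename("pip.pya"))        # pip
```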
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pip/wheel.py`
Content:
```
1 """
2 Support for installing and building the "wheel" binary package format.
3 """
4 from __future__ import with_statement
5
6 import csv
7 import functools
8 import hashlib
9 import os
10 import pkg_resources
11 import re
12 import shutil
13 import sys
14 from base64 import urlsafe_b64encode
15
16 from pip.backwardcompat import ConfigParser
17 from pip.locations import distutils_scheme
18 from pip.log import logger
19 from pip import pep425tags
20 from pip.util import call_subprocess, normalize_path, make_path_relative
21 from pip._vendor.distlib.scripts import ScriptMaker
22
23 wheel_ext = '.whl'
24
25 def wheel_setuptools_support():
26 """
27 Return True if we have a setuptools that supports wheel.
28 """
29 fulfilled = hasattr(pkg_resources, 'DistInfoDistribution')
30 if not fulfilled:
31 logger.warn("Wheel installs require setuptools >= 0.8 for dist-info support.")
32 return fulfilled
33
34 def rehash(path, algo='sha256', blocksize=1<<20):
35 """Return (hash, length) for path using hashlib.new(algo)"""
36 h = hashlib.new(algo)
37 length = 0
38 with open(path, 'rb') as f:
39 block = f.read(blocksize)
40 while block:
41 length += len(block)
42 h.update(block)
43 block = f.read(blocksize)
44 digest = 'sha256='+urlsafe_b64encode(h.digest()).decode('latin1').rstrip('=')
45 return (digest, length)
46
47 try:
48 unicode
49 def binary(s):
50 if isinstance(s, unicode):
51 return s.encode('ascii')
52 return s
53 except NameError:
54 def binary(s):
55 if isinstance(s, str):
56 return s.encode('ascii')
57
58 def open_for_csv(name, mode):
59 if sys.version_info[0] < 3:
60 nl = {}
61 bin = 'b'
62 else:
63 nl = { 'newline': '' }
64 bin = ''
65 return open(name, mode + bin, **nl)
66
67 def fix_script(path):
68 """Replace #!python with #!/path/to/python
69 Return True if file was changed."""
70 # XXX RECORD hashes will need to be updated
71 if os.path.isfile(path):
72 script = open(path, 'rb')
73 try:
74 firstline = script.readline()
75 if not firstline.startswith(binary('#!python')):
76 return False
77 exename = sys.executable.encode(sys.getfilesystemencoding())
78 firstline = binary('#!') + exename + binary(os.linesep)
79 rest = script.read()
80 finally:
81 script.close()
82 script = open(path, 'wb')
83 try:
84 script.write(firstline)
85 script.write(rest)
86 finally:
87 script.close()
88 return True
89
90 dist_info_re = re.compile(r"""^(?P<namever>(?P<name>.+?)(-(?P<ver>\d.+?))?)
91 \.dist-info$""", re.VERBOSE)
92
93 def root_is_purelib(name, wheeldir):
94 """
95 Return True if the extracted wheel in wheeldir should go into purelib.
96 """
97 name_folded = name.replace("-", "_")
98 for item in os.listdir(wheeldir):
99 match = dist_info_re.match(item)
100 if match and match.group('name') == name_folded:
101 with open(os.path.join(wheeldir, item, 'WHEEL')) as wheel:
102 for line in wheel:
103 line = line.lower().rstrip()
104 if line == "root-is-purelib: true":
105 return True
106 return False
107
108 def get_entrypoints(filename):
109 if not os.path.exists(filename):
110 return {}, {}
111 cp = ConfigParser.RawConfigParser()
112 cp.read(filename)
113 console = {}
114 gui = {}
115 if cp.has_section('console_scripts'):
116 console = dict(cp.items('console_scripts'))
117 if cp.has_section('gui_scripts'):
118 gui = dict(cp.items('gui_scripts'))
119 return console, gui
120
121 def move_wheel_files(name, req, wheeldir, user=False, home=None, root=None):
122 """Install a wheel"""
123
124 scheme = distutils_scheme(name, user=user, home=home, root=root)
125
126 if root_is_purelib(name, wheeldir):
127 lib_dir = scheme['purelib']
128 else:
129 lib_dir = scheme['platlib']
130
131 info_dir = []
132 data_dirs = []
133 source = wheeldir.rstrip(os.path.sep) + os.path.sep
134
135 # Record details of the files moved
136 # installed = files copied from the wheel to the destination
137 # changed = files changed while installing (scripts #! line typically)
138 # generated = files newly generated during the install (script wrappers)
139 installed = {}
140 changed = set()
141 generated = []
142
143 def normpath(src, p):
144 return make_path_relative(src, p).replace(os.path.sep, '/')
145
146 def record_installed(srcfile, destfile, modified=False):
147 """Map archive RECORD paths to installation RECORD paths."""
148 oldpath = normpath(srcfile, wheeldir)
149 newpath = normpath(destfile, lib_dir)
150 installed[oldpath] = newpath
151 if modified:
152 changed.add(destfile)
153
154 def clobber(source, dest, is_base, fixer=None, filter=None):
155 if not os.path.exists(dest): # common for the 'include' path
156 os.makedirs(dest)
157
158 for dir, subdirs, files in os.walk(source):
159 basedir = dir[len(source):].lstrip(os.path.sep)
160 if is_base and basedir.split(os.path.sep, 1)[0].endswith('.data'):
161 continue
162 for s in subdirs:
163 destsubdir = os.path.join(dest, basedir, s)
164 if is_base and basedir == '' and destsubdir.endswith('.data'):
165 data_dirs.append(s)
166 continue
167 elif (is_base
168 and s.endswith('.dist-info')
169 # is self.req.project_name case preserving?
170 and s.lower().startswith(req.project_name.replace('-', '_').lower())):
171 assert not info_dir, 'Multiple .dist-info directories'
172 info_dir.append(destsubdir)
173 if not os.path.exists(destsubdir):
174 os.makedirs(destsubdir)
175 for f in files:
176 # Skip unwanted files
177 if filter and filter(f):
178 continue
179 srcfile = os.path.join(dir, f)
180 destfile = os.path.join(dest, basedir, f)
181 shutil.move(srcfile, destfile)
182 changed = False
183 if fixer:
184 changed = fixer(destfile)
185 record_installed(srcfile, destfile, changed)
186
187 clobber(source, lib_dir, True)
188
189 assert info_dir, "%s .dist-info directory not found" % req
190
191 # Get the defined entry points
192 ep_file = os.path.join(info_dir[0], 'entry_points.txt')
193 console, gui = get_entrypoints(ep_file)
194
195 def is_entrypoint_wrapper(name):
196 # EP, EP.exe and EP-script.py are scripts generated for
197 # entry point EP by setuptools
198 if name.lower().endswith('.exe'):
199 matchname = name[:-4]
200 elif name.lower().endswith('-script.py'):
201 matchname = name[:-10]
202 else:
203 matchname = name
204 # Ignore setuptools-generated scripts
205 return (matchname in console or matchname in gui)
206
207 for datadir in data_dirs:
208 fixer = None
209 filter = None
210 for subdir in os.listdir(os.path.join(wheeldir, datadir)):
211 fixer = None
212 if subdir == 'scripts':
213 fixer = fix_script
214 filter = is_entrypoint_wrapper
215 source = os.path.join(wheeldir, datadir, subdir)
216 dest = scheme[subdir]
217 clobber(source, dest, False, fixer=fixer, filter=filter)
218
219 maker = ScriptMaker(None, scheme['scripts'])
220 maker.variants = set(('', ))
221
222 # This is required because otherwise distlib creates scripts that are not
223 # executable.
224 # See https://bitbucket.org/pypa/distlib/issue/32/
225 maker.set_mode = True
226
227 # Special case pip and setuptools to generate versioned wrappers
228 #
229 # The issue is that some projects (specifically, pip and setuptools) use
230 # code in setup.py to create "versioned" entry points - pip2.7 on Python
231 # 2.7, pip3.3 on Python 3.3, etc. But these entry points are baked into
232 # the wheel metadata at build time, and so if the wheel is installed with
233 # a *different* version of Python the entry points will be wrong. The
234 # correct fix for this is to enhance the metadata to be able to describe
235 # such versioned entry points, but that won't happen till Metadata 2.0 is
236 # available.
237 # In the meantime, projects using versioned entry points will either have
238 # incorrect versioned entry points, or they will not be able to distribute
239 # "universal" wheels (i.e., they will need a wheel per Python version).
240 #
241 # Because setuptools and pip are bundled with _ensurepip and virtualenv,
242 # we need to use universal wheels. So, as a stopgap until Metadata 2.0, we
243 # override the versioned entry points in the wheel and generate the
244 # correct ones. This code is purely a short-term measure until Metadat 2.0
245 # is available.
246 pip_script = console.pop('pip', None)
247 if pip_script:
248 spec = 'pip = ' + pip_script
249 generated.extend(maker.make(spec))
250 spec = 'pip%s = %s' % (sys.version[:1], pip_script)
251 generated.extend(maker.make(spec))
252 spec = 'pip%s = %s' % (sys.version[:3], pip_script)
253 generated.extend(maker.make(spec))
254 # Delete any other versioned pip entry points
255 pip_ep = [k for k in console if re.match(r'pip(\d(\.\d)?)?$', k)]
256 for k in pip_ep:
257 del console[k]
258 easy_install_script = console.pop('easy_install', None)
259 if easy_install_script:
260 spec = 'easy_install = ' + easy_install_script
261 generated.extend(maker.make(spec))
262 spec = 'easy_install-%s = %s' % (sys.version[:3], easy_install_script)
263 generated.extend(maker.make(spec))
264 # Delete any other versioned easy_install entry points
265 easy_install_ep = [k for k in console
266 if re.match(r'easy_install(-\d\.\d)?$', k)]
267 for k in easy_install_ep:
268 del console[k]
269
270 # Generate the console and GUI entry points specified in the wheel
271 if len(console) > 0:
272 generated.extend(maker.make_multiple(['%s = %s' % kv for kv in console.items()]))
273 if len(gui) > 0:
274 generated.extend(maker.make_multiple(['%s = %s' % kv for kv in gui.items()], {'gui': True}))
275
276 record = os.path.join(info_dir[0], 'RECORD')
277 temp_record = os.path.join(info_dir[0], 'RECORD.pip')
278 with open_for_csv(record, 'r') as record_in:
279 with open_for_csv(temp_record, 'w+') as record_out:
280 reader = csv.reader(record_in)
281 writer = csv.writer(record_out)
282 for row in reader:
283 row[0] = installed.pop(row[0], row[0])
284 if row[0] in changed:
285 row[1], row[2] = rehash(row[0])
286 writer.writerow(row)
287 for f in generated:
288 h, l = rehash(f)
289 writer.writerow((f, h, l))
290 for f in installed:
291 writer.writerow((installed[f], '', ''))
292 shutil.move(temp_record, record)
293
294 def _unique(fn):
295 @functools.wraps(fn)
296 def unique(*args, **kw):
297 seen = set()
298 for item in fn(*args, **kw):
299 if item not in seen:
300 seen.add(item)
301 yield item
302 return unique
303
304 # TODO: this goes somewhere besides the wheel module
305 @_unique
306 def uninstallation_paths(dist):
307 """
308 Yield all the uninstallation paths for dist based on RECORD-without-.pyc
309
310 Yield paths to all the files in RECORD. For each .py file in RECORD, add
311 the .pyc in the same directory.
312
313 UninstallPathSet.add() takes care of the __pycache__ .pyc.
314 """
315 from pip.req import FakeFile # circular import
316 r = csv.reader(FakeFile(dist.get_metadata_lines('RECORD')))
317 for row in r:
318 path = os.path.join(dist.location, row[0])
319 yield path
320 if path.endswith('.py'):
321 dn, fn = os.path.split(path)
322 base = fn[:-3]
323 path = os.path.join(dn, base+'.pyc')
324 yield path
325
326
327 class Wheel(object):
328 """A wheel file"""
329
330 # TODO: maybe move the install code into this class
331
332 wheel_file_re = re.compile(
333 r"""^(?P<namever>(?P<name>.+?)(-(?P<ver>\d.+?))?)
334 ((-(?P<build>\d.*?))?-(?P<pyver>.+?)-(?P<abi>.+?)-(?P<plat>.+?)
335 \.whl|\.dist-info)$""",
336 re.VERBOSE)
337
338 def __init__(self, filename):
339 wheel_info = self.wheel_file_re.match(filename)
340 self.filename = filename
341 self.name = wheel_info.group('name').replace('_', '-')
342 # we'll assume "_" means "-" due to wheel naming scheme
343 # (https://github.com/pypa/pip/issues/1150)
344 self.version = wheel_info.group('ver').replace('_', '-')
345 self.pyversions = wheel_info.group('pyver').split('.')
346 self.abis = wheel_info.group('abi').split('.')
347 self.plats = wheel_info.group('plat').split('.')
348
349 # All the tag combinations from this file
350 self.file_tags = set((x, y, z) for x in self.pyversions for y
351 in self.abis for z in self.plats)
352
353 def support_index_min(self, tags=None):
354 """
355 Return the lowest index that a file_tag achieves in the supported_tags list
356 e.g. if there are 8 supported tags, and one of the file tags is first in the
357 list, then return 0.
358 """
359 if tags is None: # for mock
360 tags = pep425tags.supported_tags
361 indexes = [tags.index(c) for c in self.file_tags if c in tags]
362 return min(indexes) if indexes else None
363
364 def supported(self, tags=None):
365 """Is this wheel supported on this system?"""
366 if tags is None: # for mock
367 tags = pep425tags.supported_tags
368 return bool(set(tags).intersection(self.file_tags))
369
370
371 class WheelBuilder(object):
372 """Build wheels from a RequirementSet."""
373
374 def __init__(self, requirement_set, finder, wheel_dir, build_options=[], global_options=[]):
375 self.requirement_set = requirement_set
376 self.finder = finder
377 self.wheel_dir = normalize_path(wheel_dir)
378 self.build_options = build_options
379 self.global_options = global_options
380
381 def _build_one(self, req):
382 """Build one wheel."""
383
384 base_args = [
385 sys.executable, '-c',
386 "import setuptools;__file__=%r;"\
387 "exec(compile(open(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))" % req.setup_py] + \
388 list(self.global_options)
389
390 logger.notify('Running setup.py bdist_wheel for %s' % req.name)
391 logger.notify('Destination directory: %s' % self.wheel_dir)
392 wheel_args = base_args + ['bdist_wheel', '-d', self.wheel_dir] + self.build_options
393 try:
394 call_subprocess(wheel_args, cwd=req.source_dir, show_stdout=False)
395 return True
396 except:
397 logger.error('Failed building wheel for %s' % req.name)
398 return False
399
400 def build(self):
401 """Build wheels."""
402
403 #unpack and constructs req set
404 self.requirement_set.prepare_files(self.finder)
405
406 reqset = self.requirement_set.requirements.values()
407
408 #make the wheelhouse
409 if not os.path.exists(self.wheel_dir):
410 os.makedirs(self.wheel_dir)
411
412 #build the wheels
413 logger.notify('Building wheels for collected packages: %s' % ', '.join([req.name for req in reqset]))
414 logger.indent += 2
415 build_success, build_failure = [], []
416 for req in reqset:
417 if req.is_wheel:
418 logger.notify("Skipping building wheel: %s", req.url)
419 continue
420 if self._build_one(req):
421 build_success.append(req)
422 else:
423 build_failure.append(req)
424 logger.indent -= 2
425
426 #notify sucess/failure
427 if build_success:
428 logger.notify('Successfully built %s' % ' '.join([req.name for req in build_success]))
429 if build_failure:
430 logger.notify('Failed to build %s' % ' '.join([req.name for req in build_failure]))
431
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pip/wheel.py b/pip/wheel.py
--- a/pip/wheel.py
+++ b/pip/wheel.py
@@ -199,6 +199,8 @@
matchname = name[:-4]
elif name.lower().endswith('-script.py'):
matchname = name[:-10]
+ elif name.lower().endswith(".pya"):
+ matchname = name[:-4]
else:
matchname = name
# Ignore setuptools-generated scripts
| {"golden_diff": "diff --git a/pip/wheel.py b/pip/wheel.py\n--- a/pip/wheel.py\n+++ b/pip/wheel.py\n@@ -199,6 +199,8 @@\n matchname = name[:-4]\n elif name.lower().endswith('-script.py'):\n matchname = name[:-10]\n+ elif name.lower().endswith(\".pya\"):\n+ matchname = name[:-4]\n else:\n matchname = name\n # Ignore setuptools-generated scripts\n", "issue": "Pip Generated Scripts and .py(a|w) on Windows\nIt appears setuptools also supports `.pya` and `pyw` on Windows for generated script wrappers instead of `.exe`. We should also strip these when installing a Wheel.\n\nhttps://pythonhosted.org/setuptools/easy_install.html#natural-script-launcher\n\n", "before_files": [{"content": "\"\"\"\nSupport for installing and building the \"wheel\" binary package format.\n\"\"\"\nfrom __future__ import with_statement\n\nimport csv\nimport functools\nimport hashlib\nimport os\nimport pkg_resources\nimport re\nimport shutil\nimport sys\nfrom base64 import urlsafe_b64encode\n\nfrom pip.backwardcompat import ConfigParser\nfrom pip.locations import distutils_scheme\nfrom pip.log import logger\nfrom pip import pep425tags\nfrom pip.util import call_subprocess, normalize_path, make_path_relative\nfrom pip._vendor.distlib.scripts import ScriptMaker\n\nwheel_ext = '.whl'\n\ndef wheel_setuptools_support():\n \"\"\"\n Return True if we have a setuptools that supports wheel.\n \"\"\"\n fulfilled = hasattr(pkg_resources, 'DistInfoDistribution')\n if not fulfilled:\n logger.warn(\"Wheel installs require setuptools >= 0.8 for dist-info support.\")\n return fulfilled\n\ndef rehash(path, algo='sha256', blocksize=1<<20):\n \"\"\"Return (hash, length) for path using hashlib.new(algo)\"\"\"\n h = hashlib.new(algo)\n length = 0\n with open(path, 'rb') as f:\n block = f.read(blocksize)\n while block:\n length += len(block)\n h.update(block)\n block = f.read(blocksize)\n digest = 'sha256='+urlsafe_b64encode(h.digest()).decode('latin1').rstrip('=')\n return (digest, length)\n\ntry:\n unicode\n def binary(s):\n if isinstance(s, unicode):\n return s.encode('ascii')\n return s\nexcept NameError:\n def binary(s):\n if isinstance(s, str):\n return s.encode('ascii')\n\ndef open_for_csv(name, mode):\n if sys.version_info[0] < 3:\n nl = {}\n bin = 'b'\n else:\n nl = { 'newline': '' }\n bin = ''\n return open(name, mode + bin, **nl)\n\ndef fix_script(path):\n \"\"\"Replace #!python with #!/path/to/python\n Return True if file was changed.\"\"\"\n # XXX RECORD hashes will need to be updated\n if os.path.isfile(path):\n script = open(path, 'rb')\n try:\n firstline = script.readline()\n if not firstline.startswith(binary('#!python')):\n return False\n exename = sys.executable.encode(sys.getfilesystemencoding())\n firstline = binary('#!') + exename + binary(os.linesep)\n rest = script.read()\n finally:\n script.close()\n script = open(path, 'wb')\n try:\n script.write(firstline)\n script.write(rest)\n finally:\n script.close()\n return True\n\ndist_info_re = re.compile(r\"\"\"^(?P<namever>(?P<name>.+?)(-(?P<ver>\\d.+?))?)\n \\.dist-info$\"\"\", re.VERBOSE)\n\ndef root_is_purelib(name, wheeldir):\n \"\"\"\n Return True if the extracted wheel in wheeldir should go into purelib.\n \"\"\"\n name_folded = name.replace(\"-\", \"_\")\n for item in os.listdir(wheeldir):\n match = dist_info_re.match(item)\n if match and match.group('name') == name_folded:\n with open(os.path.join(wheeldir, item, 'WHEEL')) as wheel:\n for line in wheel:\n line = line.lower().rstrip()\n if line == \"root-is-purelib: true\":\n return True\n return False\n\ndef 
get_entrypoints(filename):\n if not os.path.exists(filename):\n return {}, {}\n cp = ConfigParser.RawConfigParser()\n cp.read(filename)\n console = {}\n gui = {}\n if cp.has_section('console_scripts'):\n console = dict(cp.items('console_scripts'))\n if cp.has_section('gui_scripts'):\n gui = dict(cp.items('gui_scripts'))\n return console, gui\n\ndef move_wheel_files(name, req, wheeldir, user=False, home=None, root=None):\n \"\"\"Install a wheel\"\"\"\n\n scheme = distutils_scheme(name, user=user, home=home, root=root)\n\n if root_is_purelib(name, wheeldir):\n lib_dir = scheme['purelib']\n else:\n lib_dir = scheme['platlib']\n\n info_dir = []\n data_dirs = []\n source = wheeldir.rstrip(os.path.sep) + os.path.sep\n\n # Record details of the files moved\n # installed = files copied from the wheel to the destination\n # changed = files changed while installing (scripts #! line typically)\n # generated = files newly generated during the install (script wrappers)\n installed = {}\n changed = set()\n generated = []\n\n def normpath(src, p):\n return make_path_relative(src, p).replace(os.path.sep, '/')\n\n def record_installed(srcfile, destfile, modified=False):\n \"\"\"Map archive RECORD paths to installation RECORD paths.\"\"\"\n oldpath = normpath(srcfile, wheeldir)\n newpath = normpath(destfile, lib_dir)\n installed[oldpath] = newpath\n if modified:\n changed.add(destfile)\n\n def clobber(source, dest, is_base, fixer=None, filter=None):\n if not os.path.exists(dest): # common for the 'include' path\n os.makedirs(dest)\n\n for dir, subdirs, files in os.walk(source):\n basedir = dir[len(source):].lstrip(os.path.sep)\n if is_base and basedir.split(os.path.sep, 1)[0].endswith('.data'):\n continue\n for s in subdirs:\n destsubdir = os.path.join(dest, basedir, s)\n if is_base and basedir == '' and destsubdir.endswith('.data'):\n data_dirs.append(s)\n continue\n elif (is_base\n and s.endswith('.dist-info')\n # is self.req.project_name case preserving?\n and s.lower().startswith(req.project_name.replace('-', '_').lower())):\n assert not info_dir, 'Multiple .dist-info directories'\n info_dir.append(destsubdir)\n if not os.path.exists(destsubdir):\n os.makedirs(destsubdir)\n for f in files:\n # Skip unwanted files\n if filter and filter(f):\n continue\n srcfile = os.path.join(dir, f)\n destfile = os.path.join(dest, basedir, f)\n shutil.move(srcfile, destfile)\n changed = False\n if fixer:\n changed = fixer(destfile)\n record_installed(srcfile, destfile, changed)\n\n clobber(source, lib_dir, True)\n\n assert info_dir, \"%s .dist-info directory not found\" % req\n\n # Get the defined entry points\n ep_file = os.path.join(info_dir[0], 'entry_points.txt')\n console, gui = get_entrypoints(ep_file)\n\n def is_entrypoint_wrapper(name):\n # EP, EP.exe and EP-script.py are scripts generated for\n # entry point EP by setuptools\n if name.lower().endswith('.exe'):\n matchname = name[:-4]\n elif name.lower().endswith('-script.py'):\n matchname = name[:-10]\n else:\n matchname = name\n # Ignore setuptools-generated scripts\n return (matchname in console or matchname in gui)\n\n for datadir in data_dirs:\n fixer = None\n filter = None\n for subdir in os.listdir(os.path.join(wheeldir, datadir)):\n fixer = None\n if subdir == 'scripts':\n fixer = fix_script\n filter = is_entrypoint_wrapper\n source = os.path.join(wheeldir, datadir, subdir)\n dest = scheme[subdir]\n clobber(source, dest, False, fixer=fixer, filter=filter)\n\n maker = ScriptMaker(None, scheme['scripts'])\n maker.variants = set(('', ))\n\n # This is 
required because otherwise distlib creates scripts that are not\n # executable.\n # See https://bitbucket.org/pypa/distlib/issue/32/\n maker.set_mode = True\n\n # Special case pip and setuptools to generate versioned wrappers\n #\n # The issue is that some projects (specifically, pip and setuptools) use\n # code in setup.py to create \"versioned\" entry points - pip2.7 on Python\n # 2.7, pip3.3 on Python 3.3, etc. But these entry points are baked into\n # the wheel metadata at build time, and so if the wheel is installed with\n # a *different* version of Python the entry points will be wrong. The\n # correct fix for this is to enhance the metadata to be able to describe\n # such versioned entry points, but that won't happen till Metadata 2.0 is\n # available.\n # In the meantime, projects using versioned entry points will either have\n # incorrect versioned entry points, or they will not be able to distribute\n # \"universal\" wheels (i.e., they will need a wheel per Python version).\n #\n # Because setuptools and pip are bundled with _ensurepip and virtualenv,\n # we need to use universal wheels. So, as a stopgap until Metadata 2.0, we\n # override the versioned entry points in the wheel and generate the\n # correct ones. This code is purely a short-term measure until Metadat 2.0\n # is available.\n pip_script = console.pop('pip', None)\n if pip_script:\n spec = 'pip = ' + pip_script\n generated.extend(maker.make(spec))\n spec = 'pip%s = %s' % (sys.version[:1], pip_script)\n generated.extend(maker.make(spec))\n spec = 'pip%s = %s' % (sys.version[:3], pip_script)\n generated.extend(maker.make(spec))\n # Delete any other versioned pip entry points\n pip_ep = [k for k in console if re.match(r'pip(\\d(\\.\\d)?)?$', k)]\n for k in pip_ep:\n del console[k]\n easy_install_script = console.pop('easy_install', None)\n if easy_install_script:\n spec = 'easy_install = ' + easy_install_script\n generated.extend(maker.make(spec))\n spec = 'easy_install-%s = %s' % (sys.version[:3], easy_install_script)\n generated.extend(maker.make(spec))\n # Delete any other versioned easy_install entry points\n easy_install_ep = [k for k in console\n if re.match(r'easy_install(-\\d\\.\\d)?$', k)]\n for k in easy_install_ep:\n del console[k]\n\n # Generate the console and GUI entry points specified in the wheel\n if len(console) > 0:\n generated.extend(maker.make_multiple(['%s = %s' % kv for kv in console.items()]))\n if len(gui) > 0:\n generated.extend(maker.make_multiple(['%s = %s' % kv for kv in gui.items()], {'gui': True}))\n\n record = os.path.join(info_dir[0], 'RECORD')\n temp_record = os.path.join(info_dir[0], 'RECORD.pip')\n with open_for_csv(record, 'r') as record_in:\n with open_for_csv(temp_record, 'w+') as record_out:\n reader = csv.reader(record_in)\n writer = csv.writer(record_out)\n for row in reader:\n row[0] = installed.pop(row[0], row[0])\n if row[0] in changed:\n row[1], row[2] = rehash(row[0])\n writer.writerow(row)\n for f in generated:\n h, l = rehash(f)\n writer.writerow((f, h, l))\n for f in installed:\n writer.writerow((installed[f], '', ''))\n shutil.move(temp_record, record)\n\ndef _unique(fn):\n @functools.wraps(fn)\n def unique(*args, **kw):\n seen = set()\n for item in fn(*args, **kw):\n if item not in seen:\n seen.add(item)\n yield item\n return unique\n\n# TODO: this goes somewhere besides the wheel module\n@_unique\ndef uninstallation_paths(dist):\n \"\"\"\n Yield all the uninstallation paths for dist based on RECORD-without-.pyc\n\n Yield paths to all the files in RECORD. 
For each .py file in RECORD, add\n the .pyc in the same directory.\n\n UninstallPathSet.add() takes care of the __pycache__ .pyc.\n \"\"\"\n from pip.req import FakeFile # circular import\n r = csv.reader(FakeFile(dist.get_metadata_lines('RECORD')))\n for row in r:\n path = os.path.join(dist.location, row[0])\n yield path\n if path.endswith('.py'):\n dn, fn = os.path.split(path)\n base = fn[:-3]\n path = os.path.join(dn, base+'.pyc')\n yield path\n\n\nclass Wheel(object):\n \"\"\"A wheel file\"\"\"\n\n # TODO: maybe move the install code into this class\n\n wheel_file_re = re.compile(\n r\"\"\"^(?P<namever>(?P<name>.+?)(-(?P<ver>\\d.+?))?)\n ((-(?P<build>\\d.*?))?-(?P<pyver>.+?)-(?P<abi>.+?)-(?P<plat>.+?)\n \\.whl|\\.dist-info)$\"\"\",\n re.VERBOSE)\n\n def __init__(self, filename):\n wheel_info = self.wheel_file_re.match(filename)\n self.filename = filename\n self.name = wheel_info.group('name').replace('_', '-')\n # we'll assume \"_\" means \"-\" due to wheel naming scheme\n # (https://github.com/pypa/pip/issues/1150)\n self.version = wheel_info.group('ver').replace('_', '-')\n self.pyversions = wheel_info.group('pyver').split('.')\n self.abis = wheel_info.group('abi').split('.')\n self.plats = wheel_info.group('plat').split('.')\n\n # All the tag combinations from this file\n self.file_tags = set((x, y, z) for x in self.pyversions for y\n in self.abis for z in self.plats)\n\n def support_index_min(self, tags=None):\n \"\"\"\n Return the lowest index that a file_tag achieves in the supported_tags list\n e.g. if there are 8 supported tags, and one of the file tags is first in the\n list, then return 0.\n \"\"\"\n if tags is None: # for mock\n tags = pep425tags.supported_tags\n indexes = [tags.index(c) for c in self.file_tags if c in tags]\n return min(indexes) if indexes else None\n\n def supported(self, tags=None):\n \"\"\"Is this wheel supported on this system?\"\"\"\n if tags is None: # for mock\n tags = pep425tags.supported_tags\n return bool(set(tags).intersection(self.file_tags))\n\n\nclass WheelBuilder(object):\n \"\"\"Build wheels from a RequirementSet.\"\"\"\n\n def __init__(self, requirement_set, finder, wheel_dir, build_options=[], global_options=[]):\n self.requirement_set = requirement_set\n self.finder = finder\n self.wheel_dir = normalize_path(wheel_dir)\n self.build_options = build_options\n self.global_options = global_options\n\n def _build_one(self, req):\n \"\"\"Build one wheel.\"\"\"\n\n base_args = [\n sys.executable, '-c',\n \"import setuptools;__file__=%r;\"\\\n \"exec(compile(open(__file__).read().replace('\\\\r\\\\n', '\\\\n'), __file__, 'exec'))\" % req.setup_py] + \\\n list(self.global_options)\n\n logger.notify('Running setup.py bdist_wheel for %s' % req.name)\n logger.notify('Destination directory: %s' % self.wheel_dir)\n wheel_args = base_args + ['bdist_wheel', '-d', self.wheel_dir] + self.build_options\n try:\n call_subprocess(wheel_args, cwd=req.source_dir, show_stdout=False)\n return True\n except:\n logger.error('Failed building wheel for %s' % req.name)\n return False\n\n def build(self):\n \"\"\"Build wheels.\"\"\"\n\n #unpack and constructs req set\n self.requirement_set.prepare_files(self.finder)\n\n reqset = self.requirement_set.requirements.values()\n\n #make the wheelhouse\n if not os.path.exists(self.wheel_dir):\n os.makedirs(self.wheel_dir)\n\n #build the wheels\n logger.notify('Building wheels for collected packages: %s' % ', '.join([req.name for req in reqset]))\n logger.indent += 2\n build_success, build_failure = [], []\n for req in 
reqset:\n if req.is_wheel:\n logger.notify(\"Skipping building wheel: %s\", req.url)\n continue\n if self._build_one(req):\n build_success.append(req)\n else:\n build_failure.append(req)\n logger.indent -= 2\n\n #notify sucess/failure\n if build_success:\n logger.notify('Successfully built %s' % ' '.join([req.name for req in build_success]))\n if build_failure:\n logger.notify('Failed to build %s' % ' '.join([req.name for req in build_failure]))\n", "path": "pip/wheel.py"}], "after_files": [{"content": "\"\"\"\nSupport for installing and building the \"wheel\" binary package format.\n\"\"\"\nfrom __future__ import with_statement\n\nimport csv\nimport functools\nimport hashlib\nimport os\nimport pkg_resources\nimport re\nimport shutil\nimport sys\nfrom base64 import urlsafe_b64encode\n\nfrom pip.backwardcompat import ConfigParser\nfrom pip.locations import distutils_scheme\nfrom pip.log import logger\nfrom pip import pep425tags\nfrom pip.util import call_subprocess, normalize_path, make_path_relative\nfrom pip._vendor.distlib.scripts import ScriptMaker\n\nwheel_ext = '.whl'\n\ndef wheel_setuptools_support():\n \"\"\"\n Return True if we have a setuptools that supports wheel.\n \"\"\"\n fulfilled = hasattr(pkg_resources, 'DistInfoDistribution')\n if not fulfilled:\n logger.warn(\"Wheel installs require setuptools >= 0.8 for dist-info support.\")\n return fulfilled\n\ndef rehash(path, algo='sha256', blocksize=1<<20):\n \"\"\"Return (hash, length) for path using hashlib.new(algo)\"\"\"\n h = hashlib.new(algo)\n length = 0\n with open(path, 'rb') as f:\n block = f.read(blocksize)\n while block:\n length += len(block)\n h.update(block)\n block = f.read(blocksize)\n digest = 'sha256='+urlsafe_b64encode(h.digest()).decode('latin1').rstrip('=')\n return (digest, length)\n\ntry:\n unicode\n def binary(s):\n if isinstance(s, unicode):\n return s.encode('ascii')\n return s\nexcept NameError:\n def binary(s):\n if isinstance(s, str):\n return s.encode('ascii')\n\ndef open_for_csv(name, mode):\n if sys.version_info[0] < 3:\n nl = {}\n bin = 'b'\n else:\n nl = { 'newline': '' }\n bin = ''\n return open(name, mode + bin, **nl)\n\ndef fix_script(path):\n \"\"\"Replace #!python with #!/path/to/python\n Return True if file was changed.\"\"\"\n # XXX RECORD hashes will need to be updated\n if os.path.isfile(path):\n script = open(path, 'rb')\n try:\n firstline = script.readline()\n if not firstline.startswith(binary('#!python')):\n return False\n exename = sys.executable.encode(sys.getfilesystemencoding())\n firstline = binary('#!') + exename + binary(os.linesep)\n rest = script.read()\n finally:\n script.close()\n script = open(path, 'wb')\n try:\n script.write(firstline)\n script.write(rest)\n finally:\n script.close()\n return True\n\ndist_info_re = re.compile(r\"\"\"^(?P<namever>(?P<name>.+?)(-(?P<ver>\\d.+?))?)\n \\.dist-info$\"\"\", re.VERBOSE)\n\ndef root_is_purelib(name, wheeldir):\n \"\"\"\n Return True if the extracted wheel in wheeldir should go into purelib.\n \"\"\"\n name_folded = name.replace(\"-\", \"_\")\n for item in os.listdir(wheeldir):\n match = dist_info_re.match(item)\n if match and match.group('name') == name_folded:\n with open(os.path.join(wheeldir, item, 'WHEEL')) as wheel:\n for line in wheel:\n line = line.lower().rstrip()\n if line == \"root-is-purelib: true\":\n return True\n return False\n\ndef get_entrypoints(filename):\n if not os.path.exists(filename):\n return {}, {}\n cp = ConfigParser.RawConfigParser()\n cp.read(filename)\n console = {}\n gui = {}\n if 
cp.has_section('console_scripts'):\n console = dict(cp.items('console_scripts'))\n if cp.has_section('gui_scripts'):\n gui = dict(cp.items('gui_scripts'))\n return console, gui\n\ndef move_wheel_files(name, req, wheeldir, user=False, home=None, root=None):\n \"\"\"Install a wheel\"\"\"\n\n scheme = distutils_scheme(name, user=user, home=home, root=root)\n\n if root_is_purelib(name, wheeldir):\n lib_dir = scheme['purelib']\n else:\n lib_dir = scheme['platlib']\n\n info_dir = []\n data_dirs = []\n source = wheeldir.rstrip(os.path.sep) + os.path.sep\n\n # Record details of the files moved\n # installed = files copied from the wheel to the destination\n # changed = files changed while installing (scripts #! line typically)\n # generated = files newly generated during the install (script wrappers)\n installed = {}\n changed = set()\n generated = []\n\n def normpath(src, p):\n return make_path_relative(src, p).replace(os.path.sep, '/')\n\n def record_installed(srcfile, destfile, modified=False):\n \"\"\"Map archive RECORD paths to installation RECORD paths.\"\"\"\n oldpath = normpath(srcfile, wheeldir)\n newpath = normpath(destfile, lib_dir)\n installed[oldpath] = newpath\n if modified:\n changed.add(destfile)\n\n def clobber(source, dest, is_base, fixer=None, filter=None):\n if not os.path.exists(dest): # common for the 'include' path\n os.makedirs(dest)\n\n for dir, subdirs, files in os.walk(source):\n basedir = dir[len(source):].lstrip(os.path.sep)\n if is_base and basedir.split(os.path.sep, 1)[0].endswith('.data'):\n continue\n for s in subdirs:\n destsubdir = os.path.join(dest, basedir, s)\n if is_base and basedir == '' and destsubdir.endswith('.data'):\n data_dirs.append(s)\n continue\n elif (is_base\n and s.endswith('.dist-info')\n # is self.req.project_name case preserving?\n and s.lower().startswith(req.project_name.replace('-', '_').lower())):\n assert not info_dir, 'Multiple .dist-info directories'\n info_dir.append(destsubdir)\n if not os.path.exists(destsubdir):\n os.makedirs(destsubdir)\n for f in files:\n # Skip unwanted files\n if filter and filter(f):\n continue\n srcfile = os.path.join(dir, f)\n destfile = os.path.join(dest, basedir, f)\n shutil.move(srcfile, destfile)\n changed = False\n if fixer:\n changed = fixer(destfile)\n record_installed(srcfile, destfile, changed)\n\n clobber(source, lib_dir, True)\n\n assert info_dir, \"%s .dist-info directory not found\" % req\n\n # Get the defined entry points\n ep_file = os.path.join(info_dir[0], 'entry_points.txt')\n console, gui = get_entrypoints(ep_file)\n\n def is_entrypoint_wrapper(name):\n # EP, EP.exe and EP-script.py are scripts generated for\n # entry point EP by setuptools\n if name.lower().endswith('.exe'):\n matchname = name[:-4]\n elif name.lower().endswith('-script.py'):\n matchname = name[:-10]\n elif name.lower().endswith(\".pya\"):\n matchname = name[:-4]\n else:\n matchname = name\n # Ignore setuptools-generated scripts\n return (matchname in console or matchname in gui)\n\n for datadir in data_dirs:\n fixer = None\n filter = None\n for subdir in os.listdir(os.path.join(wheeldir, datadir)):\n fixer = None\n if subdir == 'scripts':\n fixer = fix_script\n filter = is_entrypoint_wrapper\n source = os.path.join(wheeldir, datadir, subdir)\n dest = scheme[subdir]\n clobber(source, dest, False, fixer=fixer, filter=filter)\n\n maker = ScriptMaker(None, scheme['scripts'])\n maker.variants = set(('', ))\n\n # This is required because otherwise distlib creates scripts that are not\n # executable.\n # See 
https://bitbucket.org/pypa/distlib/issue/32/\n maker.set_mode = True\n\n # Special case pip and setuptools to generate versioned wrappers\n #\n # The issue is that some projects (specifically, pip and setuptools) use\n # code in setup.py to create \"versioned\" entry points - pip2.7 on Python\n # 2.7, pip3.3 on Python 3.3, etc. But these entry points are baked into\n # the wheel metadata at build time, and so if the wheel is installed with\n # a *different* version of Python the entry points will be wrong. The\n # correct fix for this is to enhance the metadata to be able to describe\n # such versioned entry points, but that won't happen till Metadata 2.0 is\n # available.\n # In the meantime, projects using versioned entry points will either have\n # incorrect versioned entry points, or they will not be able to distribute\n # \"universal\" wheels (i.e., they will need a wheel per Python version).\n #\n # Because setuptools and pip are bundled with _ensurepip and virtualenv,\n # we need to use universal wheels. So, as a stopgap until Metadata 2.0, we\n # override the versioned entry points in the wheel and generate the\n # correct ones. This code is purely a short-term measure until Metadat 2.0\n # is available.\n pip_script = console.pop('pip', None)\n if pip_script:\n spec = 'pip = ' + pip_script\n generated.extend(maker.make(spec))\n spec = 'pip%s = %s' % (sys.version[:1], pip_script)\n generated.extend(maker.make(spec))\n spec = 'pip%s = %s' % (sys.version[:3], pip_script)\n generated.extend(maker.make(spec))\n # Delete any other versioned pip entry points\n pip_ep = [k for k in console if re.match(r'pip(\\d(\\.\\d)?)?$', k)]\n for k in pip_ep:\n del console[k]\n easy_install_script = console.pop('easy_install', None)\n if easy_install_script:\n spec = 'easy_install = ' + easy_install_script\n generated.extend(maker.make(spec))\n spec = 'easy_install-%s = %s' % (sys.version[:3], easy_install_script)\n generated.extend(maker.make(spec))\n # Delete any other versioned easy_install entry points\n easy_install_ep = [k for k in console\n if re.match(r'easy_install(-\\d\\.\\d)?$', k)]\n for k in easy_install_ep:\n del console[k]\n\n # Generate the console and GUI entry points specified in the wheel\n if len(console) > 0:\n generated.extend(maker.make_multiple(['%s = %s' % kv for kv in console.items()]))\n if len(gui) > 0:\n generated.extend(maker.make_multiple(['%s = %s' % kv for kv in gui.items()], {'gui': True}))\n\n record = os.path.join(info_dir[0], 'RECORD')\n temp_record = os.path.join(info_dir[0], 'RECORD.pip')\n with open_for_csv(record, 'r') as record_in:\n with open_for_csv(temp_record, 'w+') as record_out:\n reader = csv.reader(record_in)\n writer = csv.writer(record_out)\n for row in reader:\n row[0] = installed.pop(row[0], row[0])\n if row[0] in changed:\n row[1], row[2] = rehash(row[0])\n writer.writerow(row)\n for f in generated:\n h, l = rehash(f)\n writer.writerow((f, h, l))\n for f in installed:\n writer.writerow((installed[f], '', ''))\n shutil.move(temp_record, record)\n\ndef _unique(fn):\n @functools.wraps(fn)\n def unique(*args, **kw):\n seen = set()\n for item in fn(*args, **kw):\n if item not in seen:\n seen.add(item)\n yield item\n return unique\n\n# TODO: this goes somewhere besides the wheel module\n@_unique\ndef uninstallation_paths(dist):\n \"\"\"\n Yield all the uninstallation paths for dist based on RECORD-without-.pyc\n\n Yield paths to all the files in RECORD. 
For each .py file in RECORD, add\n the .pyc in the same directory.\n\n UninstallPathSet.add() takes care of the __pycache__ .pyc.\n \"\"\"\n from pip.req import FakeFile # circular import\n r = csv.reader(FakeFile(dist.get_metadata_lines('RECORD')))\n for row in r:\n path = os.path.join(dist.location, row[0])\n yield path\n if path.endswith('.py'):\n dn, fn = os.path.split(path)\n base = fn[:-3]\n path = os.path.join(dn, base+'.pyc')\n yield path\n\n\nclass Wheel(object):\n \"\"\"A wheel file\"\"\"\n\n # TODO: maybe move the install code into this class\n\n wheel_file_re = re.compile(\n r\"\"\"^(?P<namever>(?P<name>.+?)(-(?P<ver>\\d.+?))?)\n ((-(?P<build>\\d.*?))?-(?P<pyver>.+?)-(?P<abi>.+?)-(?P<plat>.+?)\n \\.whl|\\.dist-info)$\"\"\",\n re.VERBOSE)\n\n def __init__(self, filename):\n wheel_info = self.wheel_file_re.match(filename)\n self.filename = filename\n self.name = wheel_info.group('name').replace('_', '-')\n # we'll assume \"_\" means \"-\" due to wheel naming scheme\n # (https://github.com/pypa/pip/issues/1150)\n self.version = wheel_info.group('ver').replace('_', '-')\n self.pyversions = wheel_info.group('pyver').split('.')\n self.abis = wheel_info.group('abi').split('.')\n self.plats = wheel_info.group('plat').split('.')\n\n # All the tag combinations from this file\n self.file_tags = set((x, y, z) for x in self.pyversions for y\n in self.abis for z in self.plats)\n\n def support_index_min(self, tags=None):\n \"\"\"\n Return the lowest index that a file_tag achieves in the supported_tags list\n e.g. if there are 8 supported tags, and one of the file tags is first in the\n list, then return 0.\n \"\"\"\n if tags is None: # for mock\n tags = pep425tags.supported_tags\n indexes = [tags.index(c) for c in self.file_tags if c in tags]\n return min(indexes) if indexes else None\n\n def supported(self, tags=None):\n \"\"\"Is this wheel supported on this system?\"\"\"\n if tags is None: # for mock\n tags = pep425tags.supported_tags\n return bool(set(tags).intersection(self.file_tags))\n\n\nclass WheelBuilder(object):\n \"\"\"Build wheels from a RequirementSet.\"\"\"\n\n def __init__(self, requirement_set, finder, wheel_dir, build_options=[], global_options=[]):\n self.requirement_set = requirement_set\n self.finder = finder\n self.wheel_dir = normalize_path(wheel_dir)\n self.build_options = build_options\n self.global_options = global_options\n\n def _build_one(self, req):\n \"\"\"Build one wheel.\"\"\"\n\n base_args = [\n sys.executable, '-c',\n \"import setuptools;__file__=%r;\"\\\n \"exec(compile(open(__file__).read().replace('\\\\r\\\\n', '\\\\n'), __file__, 'exec'))\" % req.setup_py] + \\\n list(self.global_options)\n\n logger.notify('Running setup.py bdist_wheel for %s' % req.name)\n logger.notify('Destination directory: %s' % self.wheel_dir)\n wheel_args = base_args + ['bdist_wheel', '-d', self.wheel_dir] + self.build_options\n try:\n call_subprocess(wheel_args, cwd=req.source_dir, show_stdout=False)\n return True\n except:\n logger.error('Failed building wheel for %s' % req.name)\n return False\n\n def build(self):\n \"\"\"Build wheels.\"\"\"\n\n #unpack and constructs req set\n self.requirement_set.prepare_files(self.finder)\n\n reqset = self.requirement_set.requirements.values()\n\n #make the wheelhouse\n if not os.path.exists(self.wheel_dir):\n os.makedirs(self.wheel_dir)\n\n #build the wheels\n logger.notify('Building wheels for collected packages: %s' % ', '.join([req.name for req in reqset]))\n logger.indent += 2\n build_success, build_failure = [], []\n for req in 
reqset:\n if req.is_wheel:\n logger.notify(\"Skipping building wheel: %s\", req.url)\n continue\n if self._build_one(req):\n build_success.append(req)\n else:\n build_failure.append(req)\n logger.indent -= 2\n\n #notify sucess/failure\n if build_success:\n logger.notify('Successfully built %s' % ' '.join([req.name for req in build_success]))\n if build_failure:\n logger.notify('Failed to build %s' % ' '.join([req.name for req in build_failure]))\n", "path": "pip/wheel.py"}]} |
gh_patches_debug_1008 | rasdani/github-patches | git_diff | translate__pootle-5619 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Priority column is missing
Since the column reordering, we've lost the priority column in the vfolders table.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/virtualfolder/views.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django import forms
10 from django.http import Http404
11 from django.shortcuts import get_object_or_404
12 from django.urls import reverse
13 from django.utils.functional import cached_property
14
15 from pootle.core.browser import get_table_headings
16 from pootle.core.delegate import search_backend
17 from pootle.core.exceptions import Http400
18 from pootle.core.http import JsonResponse
19 from pootle.core.url_helpers import get_path_parts, split_pootle_path
20 from pootle.i18n.gettext import ugettext as _
21 from pootle_misc.util import ajax_required
22 from pootle_store.forms import UnitSearchForm
23 from pootle_store.unit.results import GroupedResults
24 from pootle_translationproject.views import TPTranslateView
25
26 from .delegate import vfolders_data_tool
27 from .models import VirtualFolder
28
29
30 def make_vfolder_dict(context, vf, stats):
31 lang_code, proj_code = split_pootle_path(context.pootle_path)[:2]
32 base_url = reverse(
33 "pootle-vfolder-tp-translate",
34 kwargs=dict(
35 vfolder_name=vf,
36 language_code=lang_code,
37 project_code=proj_code))
38 return {
39 'href_translate': base_url,
40 'title': stats["title"],
41 'code': vf,
42 'priority': stats.get("priority"),
43 'is_grayed': not stats["isVisible"],
44 'stats': stats,
45 'icon': 'vfolder'}
46
47
48 class VFolderTPTranslateView(TPTranslateView):
49 display_vfolder_priority = False
50
51 @cached_property
52 def check_data(self):
53 return self.vfolders_data_view.vfolder_data_tool.get_checks(
54 user=self.request.user).get(self.vfolder_pk, {})
55
56 @cached_property
57 def vfolder(self):
58 return VirtualFolder.objects.get(name=self.kwargs["vfolder_name"])
59
60 @property
61 def vfolder_pk(self):
62 return self.vfolder.pk
63
64 def get_context_data(self, *args, **kwargs):
65 ctx = super(
66 VFolderTPTranslateView,
67 self).get_context_data(*args, **kwargs)
68 ctx["unit_api_root"] = reverse(
69 "vfolder-pootle-xhr-units",
70 kwargs=dict(vfolder_name=self.vfolder.name))
71 ctx["resource_path"] = (
72 "/".join(
73 ["++vfolder",
74 self.vfolder.name,
75 self.object.pootle_path.replace(self.ctx_path, "")]))
76 ctx["resource_path_parts"] = get_path_parts(ctx["resource_path"])
77 return ctx
78
79
80 @ajax_required
81 def get_vfolder_units(request, **kwargs):
82 """Gets source and target texts and its metadata.
83
84 :return: A JSON-encoded string containing the source and target texts
85 grouped by the store they belong to.
86
87 The optional `count` GET parameter defines the chunk size to
88 consider. The user's preference will be used by default.
89
90 When the `initial` GET parameter is present, a sorted list of
91 the result set ids will be returned too.
92 """
93 search_form = UnitSearchForm(request.GET, user=request.user)
94
95 vfolder = get_object_or_404(
96 VirtualFolder,
97 name=kwargs.get("vfolder_name"))
98
99 if not search_form.is_valid():
100 errors = search_form.errors.as_data()
101 if "path" in errors:
102 for error in errors["path"]:
103 if error.code == "max_length":
104 raise Http400(_('Path too long.'))
105 elif error.code == "required":
106 raise Http400(_('Arguments missing.'))
107 raise Http404(forms.ValidationError(search_form.errors).messages)
108
109 search_form.cleaned_data["vfolder"] = vfolder
110 backend = search_backend.get(VirtualFolder)(
111 request.user, **search_form.cleaned_data)
112 total, start, end, units_qs = backend.search()
113 return JsonResponse(
114 {'start': start,
115 'end': end,
116 'total': total,
117 'unitGroups': GroupedResults(units_qs).data})
118
119
120 class VFoldersDataView(object):
121
122 _table_fields = (
123 'name', 'progress', 'activity',
124 'total', 'need-translation',
125 'suggestions', 'critical')
126
127 def __init__(self, context, user, has_admin_access=False):
128 self.context = context
129 self.user = user
130 self.has_admin_access = has_admin_access
131
132 @property
133 def vfolder_data_tool(self):
134 return vfolders_data_tool.get(self.context.__class__)(self.context)
135
136 @property
137 def table_fields(self):
138 fields = self._table_fields
139 if self.has_admin_access:
140 fields += ('last-updated', )
141 return fields
142
143 @cached_property
144 def table_data(self):
145 ctx = {}
146 if len(self.all_stats) > 0:
147 ctx.update({
148 'children': {
149 'id': 'vfolders',
150 'fields': self.table_fields,
151 'headings': get_table_headings(self.table_fields),
152 'rows': self.table_items}})
153 return ctx
154
155 @cached_property
156 def all_stats(self):
157 return self.vfolder_data_tool.get_stats(user=self.user)
158
159 @cached_property
160 def stats(self):
161 return dict(children=self.all_stats)
162
163 @property
164 def table_items(self):
165 return [
166 make_vfolder_dict(self.context, *vf)
167 for vf
168 in self.all_stats.items()]
169
170 @cached_property
171 def has_data(self):
172 return (
173 self.vfolder_data_tool.all_stat_data.exists()
174 if self.vfolder_data_tool.show_all_to(self.user)
175 else self.vfolder_data_tool.stat_data.exists())
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/virtualfolder/views.py b/pootle/apps/virtualfolder/views.py
--- a/pootle/apps/virtualfolder/views.py
+++ b/pootle/apps/virtualfolder/views.py
@@ -122,7 +122,7 @@
_table_fields = (
'name', 'progress', 'activity',
'total', 'need-translation',
- 'suggestions', 'critical')
+ 'suggestions', 'critical', 'priority')
def __init__(self, context, user, has_admin_access=False):
self.context = context
| {"golden_diff": "diff --git a/pootle/apps/virtualfolder/views.py b/pootle/apps/virtualfolder/views.py\n--- a/pootle/apps/virtualfolder/views.py\n+++ b/pootle/apps/virtualfolder/views.py\n@@ -122,7 +122,7 @@\n _table_fields = (\n 'name', 'progress', 'activity',\n 'total', 'need-translation',\n- 'suggestions', 'critical')\n+ 'suggestions', 'critical', 'priority')\n \n def __init__(self, context, user, has_admin_access=False):\n self.context = context\n", "issue": "Priority column is missing\nSince the column reordering we've lost the priority column in the vfolders table\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django import forms\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404\nfrom django.urls import reverse\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.browser import get_table_headings\nfrom pootle.core.delegate import search_backend\nfrom pootle.core.exceptions import Http400\nfrom pootle.core.http import JsonResponse\nfrom pootle.core.url_helpers import get_path_parts, split_pootle_path\nfrom pootle.i18n.gettext import ugettext as _\nfrom pootle_misc.util import ajax_required\nfrom pootle_store.forms import UnitSearchForm\nfrom pootle_store.unit.results import GroupedResults\nfrom pootle_translationproject.views import TPTranslateView\n\nfrom .delegate import vfolders_data_tool\nfrom .models import VirtualFolder\n\n\ndef make_vfolder_dict(context, vf, stats):\n lang_code, proj_code = split_pootle_path(context.pootle_path)[:2]\n base_url = reverse(\n \"pootle-vfolder-tp-translate\",\n kwargs=dict(\n vfolder_name=vf,\n language_code=lang_code,\n project_code=proj_code))\n return {\n 'href_translate': base_url,\n 'title': stats[\"title\"],\n 'code': vf,\n 'priority': stats.get(\"priority\"),\n 'is_grayed': not stats[\"isVisible\"],\n 'stats': stats,\n 'icon': 'vfolder'}\n\n\nclass VFolderTPTranslateView(TPTranslateView):\n display_vfolder_priority = False\n\n @cached_property\n def check_data(self):\n return self.vfolders_data_view.vfolder_data_tool.get_checks(\n user=self.request.user).get(self.vfolder_pk, {})\n\n @cached_property\n def vfolder(self):\n return VirtualFolder.objects.get(name=self.kwargs[\"vfolder_name\"])\n\n @property\n def vfolder_pk(self):\n return self.vfolder.pk\n\n def get_context_data(self, *args, **kwargs):\n ctx = super(\n VFolderTPTranslateView,\n self).get_context_data(*args, **kwargs)\n ctx[\"unit_api_root\"] = reverse(\n \"vfolder-pootle-xhr-units\",\n kwargs=dict(vfolder_name=self.vfolder.name))\n ctx[\"resource_path\"] = (\n \"/\".join(\n [\"++vfolder\",\n self.vfolder.name,\n self.object.pootle_path.replace(self.ctx_path, \"\")]))\n ctx[\"resource_path_parts\"] = get_path_parts(ctx[\"resource_path\"])\n return ctx\n\n\n@ajax_required\ndef get_vfolder_units(request, **kwargs):\n \"\"\"Gets source and target texts and its metadata.\n\n :return: A JSON-encoded string containing the source and target texts\n grouped by the store they belong to.\n\n The optional `count` GET parameter defines the chunk size to\n consider. 
The user's preference will be used by default.\n\n When the `initial` GET parameter is present, a sorted list of\n the result set ids will be returned too.\n \"\"\"\n search_form = UnitSearchForm(request.GET, user=request.user)\n\n vfolder = get_object_or_404(\n VirtualFolder,\n name=kwargs.get(\"vfolder_name\"))\n\n if not search_form.is_valid():\n errors = search_form.errors.as_data()\n if \"path\" in errors:\n for error in errors[\"path\"]:\n if error.code == \"max_length\":\n raise Http400(_('Path too long.'))\n elif error.code == \"required\":\n raise Http400(_('Arguments missing.'))\n raise Http404(forms.ValidationError(search_form.errors).messages)\n\n search_form.cleaned_data[\"vfolder\"] = vfolder\n backend = search_backend.get(VirtualFolder)(\n request.user, **search_form.cleaned_data)\n total, start, end, units_qs = backend.search()\n return JsonResponse(\n {'start': start,\n 'end': end,\n 'total': total,\n 'unitGroups': GroupedResults(units_qs).data})\n\n\nclass VFoldersDataView(object):\n\n _table_fields = (\n 'name', 'progress', 'activity',\n 'total', 'need-translation',\n 'suggestions', 'critical')\n\n def __init__(self, context, user, has_admin_access=False):\n self.context = context\n self.user = user\n self.has_admin_access = has_admin_access\n\n @property\n def vfolder_data_tool(self):\n return vfolders_data_tool.get(self.context.__class__)(self.context)\n\n @property\n def table_fields(self):\n fields = self._table_fields\n if self.has_admin_access:\n fields += ('last-updated', )\n return fields\n\n @cached_property\n def table_data(self):\n ctx = {}\n if len(self.all_stats) > 0:\n ctx.update({\n 'children': {\n 'id': 'vfolders',\n 'fields': self.table_fields,\n 'headings': get_table_headings(self.table_fields),\n 'rows': self.table_items}})\n return ctx\n\n @cached_property\n def all_stats(self):\n return self.vfolder_data_tool.get_stats(user=self.user)\n\n @cached_property\n def stats(self):\n return dict(children=self.all_stats)\n\n @property\n def table_items(self):\n return [\n make_vfolder_dict(self.context, *vf)\n for vf\n in self.all_stats.items()]\n\n @cached_property\n def has_data(self):\n return (\n self.vfolder_data_tool.all_stat_data.exists()\n if self.vfolder_data_tool.show_all_to(self.user)\n else self.vfolder_data_tool.stat_data.exists())\n", "path": "pootle/apps/virtualfolder/views.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django import forms\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404\nfrom django.urls import reverse\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.browser import get_table_headings\nfrom pootle.core.delegate import search_backend\nfrom pootle.core.exceptions import Http400\nfrom pootle.core.http import JsonResponse\nfrom pootle.core.url_helpers import get_path_parts, split_pootle_path\nfrom pootle.i18n.gettext import ugettext as _\nfrom pootle_misc.util import ajax_required\nfrom pootle_store.forms import UnitSearchForm\nfrom pootle_store.unit.results import GroupedResults\nfrom pootle_translationproject.views import TPTranslateView\n\nfrom .delegate import vfolders_data_tool\nfrom .models import VirtualFolder\n\n\ndef make_vfolder_dict(context, vf, stats):\n lang_code, proj_code = split_pootle_path(context.pootle_path)[:2]\n base_url = reverse(\n \"pootle-vfolder-tp-translate\",\n kwargs=dict(\n vfolder_name=vf,\n language_code=lang_code,\n project_code=proj_code))\n return {\n 'href_translate': base_url,\n 'title': stats[\"title\"],\n 'code': vf,\n 'priority': stats.get(\"priority\"),\n 'is_grayed': not stats[\"isVisible\"],\n 'stats': stats,\n 'icon': 'vfolder'}\n\n\nclass VFolderTPTranslateView(TPTranslateView):\n display_vfolder_priority = False\n\n @cached_property\n def check_data(self):\n return self.vfolders_data_view.vfolder_data_tool.get_checks(\n user=self.request.user).get(self.vfolder_pk, {})\n\n @cached_property\n def vfolder(self):\n return VirtualFolder.objects.get(name=self.kwargs[\"vfolder_name\"])\n\n @property\n def vfolder_pk(self):\n return self.vfolder.pk\n\n def get_context_data(self, *args, **kwargs):\n ctx = super(\n VFolderTPTranslateView,\n self).get_context_data(*args, **kwargs)\n ctx[\"unit_api_root\"] = reverse(\n \"vfolder-pootle-xhr-units\",\n kwargs=dict(vfolder_name=self.vfolder.name))\n ctx[\"resource_path\"] = (\n \"/\".join(\n [\"++vfolder\",\n self.vfolder.name,\n self.object.pootle_path.replace(self.ctx_path, \"\")]))\n ctx[\"resource_path_parts\"] = get_path_parts(ctx[\"resource_path\"])\n return ctx\n\n\n@ajax_required\ndef get_vfolder_units(request, **kwargs):\n \"\"\"Gets source and target texts and its metadata.\n\n :return: A JSON-encoded string containing the source and target texts\n grouped by the store they belong to.\n\n The optional `count` GET parameter defines the chunk size to\n consider. 
The user's preference will be used by default.\n\n When the `initial` GET parameter is present, a sorted list of\n the result set ids will be returned too.\n \"\"\"\n search_form = UnitSearchForm(request.GET, user=request.user)\n\n vfolder = get_object_or_404(\n VirtualFolder,\n name=kwargs.get(\"vfolder_name\"))\n\n if not search_form.is_valid():\n errors = search_form.errors.as_data()\n if \"path\" in errors:\n for error in errors[\"path\"]:\n if error.code == \"max_length\":\n raise Http400(_('Path too long.'))\n elif error.code == \"required\":\n raise Http400(_('Arguments missing.'))\n raise Http404(forms.ValidationError(search_form.errors).messages)\n\n search_form.cleaned_data[\"vfolder\"] = vfolder\n backend = search_backend.get(VirtualFolder)(\n request.user, **search_form.cleaned_data)\n total, start, end, units_qs = backend.search()\n return JsonResponse(\n {'start': start,\n 'end': end,\n 'total': total,\n 'unitGroups': GroupedResults(units_qs).data})\n\n\nclass VFoldersDataView(object):\n\n _table_fields = (\n 'name', 'progress', 'activity',\n 'total', 'need-translation',\n 'suggestions', 'critical', 'priority')\n\n def __init__(self, context, user, has_admin_access=False):\n self.context = context\n self.user = user\n self.has_admin_access = has_admin_access\n\n @property\n def vfolder_data_tool(self):\n return vfolders_data_tool.get(self.context.__class__)(self.context)\n\n @property\n def table_fields(self):\n fields = self._table_fields\n if self.has_admin_access:\n fields += ('last-updated', )\n return fields\n\n @cached_property\n def table_data(self):\n ctx = {}\n if len(self.all_stats) > 0:\n ctx.update({\n 'children': {\n 'id': 'vfolders',\n 'fields': self.table_fields,\n 'headings': get_table_headings(self.table_fields),\n 'rows': self.table_items}})\n return ctx\n\n @cached_property\n def all_stats(self):\n return self.vfolder_data_tool.get_stats(user=self.user)\n\n @cached_property\n def stats(self):\n return dict(children=self.all_stats)\n\n @property\n def table_items(self):\n return [\n make_vfolder_dict(self.context, *vf)\n for vf\n in self.all_stats.items()]\n\n @cached_property\n def has_data(self):\n return (\n self.vfolder_data_tool.all_stat_data.exists()\n if self.vfolder_data_tool.show_all_to(self.user)\n else self.vfolder_data_tool.stat_data.exists())\n", "path": "pootle/apps/virtualfolder/views.py"}]} |
gh_patches_debug_1009 | rasdani/github-patches | git_diff | redis__redis-py-1108 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PythonParser vs HiredisParser on_disconnect behavior
PythonParser's `on_disconnect` implementation is inconsistent with the HiredisParser implementation (or vice versa):
```python
class PythonParser(...):
def on_disconnect(self):
"Called when the socket disconnects"
if self._sock is not None:
self._sock.close()
self._sock = None
if self._buffer is not None:
self._buffer.close()
self._buffer = None
self.encoder = None
```
and
```python
class HiredisParser(...):
def on_disconnect(self):
self._sock = None
self._reader = None
self._next_response = False
```
Why does the PythonParser close the `_sock` object?
By doing this, the subsequent `shutdown()` and `close()` in `Connection.disconnect` no longer make sense; in fact, calling `shutdown()` on an already-closed socket raises an error, which is then silently ignored.
I can submit a PR, but please tell me which place should be fixed (HiredisParser/PythonParser/shutdown).
PS: this issue causes other issues in other repos (celery/kombu#954, celery/celery#3898)
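For reference, here is a minimal, runnable sketch of one possible direction. The class name `PythonParserSketch` is hypothetical and only illustrates the shape of the change (the real class would still derive from `BaseParser` and keep its other methods); it assumes the `Connection` object should remain the single owner of the socket lifecycle, so the parser only drops its reference, the same way `HiredisParser` already does:
```python
import socket

# 1) The symptom described above: once the socket has already been closed,
#    the later shutdown() call in Connection.disconnect() raises OSError,
#    which is then silently swallowed.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.close()
try:
    s.shutdown(socket.SHUT_RDWR)
except OSError as exc:
    print("shutdown() after close() raised:", exc)


# 2) One possible shape for the parser side of a fix: drop the reference to
#    the socket instead of closing it, mirroring HiredisParser.on_disconnect,
#    and only close the parser-owned read buffer.
class PythonParserSketch(object):
    def __init__(self):
        self._sock = None
        self._buffer = None
        self.encoder = None

    def on_disconnect(self):
        "Called when the socket disconnects"
        self._sock = None              # do not close the socket here
        if self._buffer is not None:
            self._buffer.close()       # the SocketBuffer belongs to the parser
            self._buffer = None
        self.encoder = None
```
With this shape, `Connection.disconnect()` keeps its `shutdown()`/`close()` calls and no longer operates on a descriptor the parser has already closed, while the parser still releases its own read buffer. This is only a hedged sketch, not a statement of what the maintainers would actually merge.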
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redis/connection.py`
Content:
```
1 from __future__ import unicode_literals
2 from distutils.version import StrictVersion
3 from itertools import chain
4 import io
5 import os
6 import socket
7 import sys
8 import threading
9 import warnings
10
11 try:
12 import ssl
13 ssl_available = True
14 except ImportError:
15 ssl_available = False
16
17 from redis._compat import (xrange, imap, byte_to_chr, unicode, long,
18 nativestr, basestring, iteritems,
19 LifoQueue, Empty, Full, urlparse, parse_qs,
20 recv, recv_into, select, unquote)
21 from redis.exceptions import (
22 DataError,
23 RedisError,
24 ConnectionError,
25 TimeoutError,
26 BusyLoadingError,
27 ResponseError,
28 InvalidResponse,
29 AuthenticationError,
30 NoScriptError,
31 ExecAbortError,
32 ReadOnlyError
33 )
34 from redis.utils import HIREDIS_AVAILABLE
35 if HIREDIS_AVAILABLE:
36 import hiredis
37
38 hiredis_version = StrictVersion(hiredis.__version__)
39 HIREDIS_SUPPORTS_CALLABLE_ERRORS = \
40 hiredis_version >= StrictVersion('0.1.3')
41 HIREDIS_SUPPORTS_BYTE_BUFFER = \
42 hiredis_version >= StrictVersion('0.1.4')
43
44 if not HIREDIS_SUPPORTS_BYTE_BUFFER:
45 msg = ("redis-py works best with hiredis >= 0.1.4. You're running "
46 "hiredis %s. Please consider upgrading." % hiredis.__version__)
47 warnings.warn(msg)
48
49 HIREDIS_USE_BYTE_BUFFER = True
50 # only use byte buffer if hiredis supports it
51 if not HIREDIS_SUPPORTS_BYTE_BUFFER:
52 HIREDIS_USE_BYTE_BUFFER = False
53
54 SYM_STAR = b'*'
55 SYM_DOLLAR = b'$'
56 SYM_CRLF = b'\r\n'
57 SYM_EMPTY = b''
58
59 SERVER_CLOSED_CONNECTION_ERROR = "Connection closed by server."
60
61
62 class Token(object):
63 """
64 Literal strings in Redis commands, such as the command names and any
65 hard-coded arguments are wrapped in this class so we know not to apply
66 and encoding rules on them.
67 """
68
69 _cache = {}
70
71 @classmethod
72 def get_token(cls, value):
73 "Gets a cached token object or creates a new one if not already cached"
74
75 # Use try/except because after running for a short time most tokens
76 # should already be cached
77 try:
78 return cls._cache[value]
79 except KeyError:
80 token = Token(value)
81 cls._cache[value] = token
82 return token
83
84 def __init__(self, value):
85 if isinstance(value, Token):
86 value = value.value
87 self.value = value
88 self.encoded_value = value.encode()
89
90 def __repr__(self):
91 return self.value
92
93 def __str__(self):
94 return self.value
95
96
97 class Encoder(object):
98 "Encode strings to bytes and decode bytes to strings"
99
100 def __init__(self, encoding, encoding_errors, decode_responses):
101 self.encoding = encoding
102 self.encoding_errors = encoding_errors
103 self.decode_responses = decode_responses
104
105 def encode(self, value):
106 "Return a bytestring representation of the value"
107 if isinstance(value, Token):
108 return value.encoded_value
109 elif isinstance(value, bytes):
110 return value
111 elif isinstance(value, bool):
112 # special case bool since it is a subclass of int
113 raise DataError("Invalid input of type: 'bool'. Convert to a "
114 "byte, string or number first.")
115 elif isinstance(value, float):
116 value = repr(value).encode()
117 elif isinstance(value, (int, long)):
118 # python 2 repr() on longs is '123L', so use str() instead
119 value = str(value).encode()
120 elif not isinstance(value, basestring):
121 # a value we don't know how to deal with. throw an error
122 typename = type(value).__name__
123 raise DataError("Invalid input of type: '%s'. Convert to a "
124 "byte, string or number first." % typename)
125 if isinstance(value, unicode):
126 value = value.encode(self.encoding, self.encoding_errors)
127 return value
128
129 def decode(self, value, force=False):
130 "Return a unicode string from the byte representation"
131 if (self.decode_responses or force) and isinstance(value, bytes):
132 value = value.decode(self.encoding, self.encoding_errors)
133 return value
134
135
136 class BaseParser(object):
137 EXCEPTION_CLASSES = {
138 'ERR': {
139 'max number of clients reached': ConnectionError
140 },
141 'EXECABORT': ExecAbortError,
142 'LOADING': BusyLoadingError,
143 'NOSCRIPT': NoScriptError,
144 'READONLY': ReadOnlyError,
145 }
146
147 def parse_error(self, response):
148 "Parse an error response"
149 error_code = response.split(' ')[0]
150 if error_code in self.EXCEPTION_CLASSES:
151 response = response[len(error_code) + 1:]
152 exception_class = self.EXCEPTION_CLASSES[error_code]
153 if isinstance(exception_class, dict):
154 exception_class = exception_class.get(response, ResponseError)
155 return exception_class(response)
156 return ResponseError(response)
157
158
159 class SocketBuffer(object):
160 def __init__(self, socket, socket_read_size):
161 self._sock = socket
162 self.socket_read_size = socket_read_size
163 self._buffer = io.BytesIO()
164 # number of bytes written to the buffer from the socket
165 self.bytes_written = 0
166 # number of bytes read from the buffer
167 self.bytes_read = 0
168
169 @property
170 def length(self):
171 return self.bytes_written - self.bytes_read
172
173 def _read_from_socket(self, length=None):
174 socket_read_size = self.socket_read_size
175 buf = self._buffer
176 buf.seek(self.bytes_written)
177 marker = 0
178
179 try:
180 while True:
181 data = recv(self._sock, socket_read_size)
182 # an empty string indicates the server shutdown the socket
183 if isinstance(data, bytes) and len(data) == 0:
184 raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)
185 buf.write(data)
186 data_length = len(data)
187 self.bytes_written += data_length
188 marker += data_length
189
190 if length is not None and length > marker:
191 continue
192 break
193 except socket.timeout:
194 raise TimeoutError("Timeout reading from socket")
195 except socket.error:
196 e = sys.exc_info()[1]
197 raise ConnectionError("Error while reading from socket: %s" %
198 (e.args,))
199
200 def read(self, length):
201 length = length + 2 # make sure to read the \r\n terminator
202 # make sure we've read enough data from the socket
203 if length > self.length:
204 self._read_from_socket(length - self.length)
205
206 self._buffer.seek(self.bytes_read)
207 data = self._buffer.read(length)
208 self.bytes_read += len(data)
209
210 # purge the buffer when we've consumed it all so it doesn't
211 # grow forever
212 if self.bytes_read == self.bytes_written:
213 self.purge()
214
215 return data[:-2]
216
217 def readline(self):
218 buf = self._buffer
219 buf.seek(self.bytes_read)
220 data = buf.readline()
221 while not data.endswith(SYM_CRLF):
222 # there's more data in the socket that we need
223 self._read_from_socket()
224 buf.seek(self.bytes_read)
225 data = buf.readline()
226
227 self.bytes_read += len(data)
228
229 # purge the buffer when we've consumed it all so it doesn't
230 # grow forever
231 if self.bytes_read == self.bytes_written:
232 self.purge()
233
234 return data[:-2]
235
236 def purge(self):
237 self._buffer.seek(0)
238 self._buffer.truncate()
239 self.bytes_written = 0
240 self.bytes_read = 0
241
242 def close(self):
243 try:
244 self.purge()
245 self._buffer.close()
246 except Exception:
247 # issue #633 suggests the purge/close somehow raised a
248 # BadFileDescriptor error. Perhaps the client ran out of
249 # memory or something else? It's probably OK to ignore
250 # any error being raised from purge/close since we're
251 # removing the reference to the instance below.
252 pass
253 self._buffer = None
254 self._sock = None
255
256
257 class PythonParser(BaseParser):
258 "Plain Python parsing class"
259 def __init__(self, socket_read_size):
260 self.socket_read_size = socket_read_size
261 self.encoder = None
262 self._sock = None
263 self._buffer = None
264
265 def __del__(self):
266 try:
267 self.on_disconnect()
268 except Exception:
269 pass
270
271 def on_connect(self, connection):
272 "Called when the socket connects"
273 self._sock = connection._sock
274 self._buffer = SocketBuffer(self._sock, self.socket_read_size)
275 self.encoder = connection.encoder
276
277 def on_disconnect(self):
278 "Called when the socket disconnects"
279 if self._sock is not None:
280 self._sock.close()
281 self._sock = None
282 if self._buffer is not None:
283 self._buffer.close()
284 self._buffer = None
285 self.encoder = None
286
287 def can_read(self):
288 return self._buffer and bool(self._buffer.length)
289
290 def read_response(self):
291 response = self._buffer.readline()
292 if not response:
293 raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
294
295 byte, response = byte_to_chr(response[0]), response[1:]
296
297 if byte not in ('-', '+', ':', '$', '*'):
298 raise InvalidResponse("Protocol Error: %s, %s" %
299 (str(byte), str(response)))
300
301 # server returned an error
302 if byte == '-':
303 response = nativestr(response)
304 error = self.parse_error(response)
305 # if the error is a ConnectionError, raise immediately so the user
306 # is notified
307 if isinstance(error, ConnectionError):
308 raise error
309 # otherwise, we're dealing with a ResponseError that might belong
310 # inside a pipeline response. the connection's read_response()
311 # and/or the pipeline's execute() will raise this error if
312 # necessary, so just return the exception instance here.
313 return error
314 # single value
315 elif byte == '+':
316 pass
317 # int value
318 elif byte == ':':
319 response = long(response)
320 # bulk response
321 elif byte == '$':
322 length = int(response)
323 if length == -1:
324 return None
325 response = self._buffer.read(length)
326 # multi-bulk response
327 elif byte == '*':
328 length = int(response)
329 if length == -1:
330 return None
331 response = [self.read_response() for i in xrange(length)]
332 if isinstance(response, bytes):
333 response = self.encoder.decode(response)
334 return response
335
336
337 class HiredisParser(BaseParser):
338 "Parser class for connections using Hiredis"
339 def __init__(self, socket_read_size):
340 if not HIREDIS_AVAILABLE:
341 raise RedisError("Hiredis is not installed")
342 self.socket_read_size = socket_read_size
343
344 if HIREDIS_USE_BYTE_BUFFER:
345 self._buffer = bytearray(socket_read_size)
346
347 def __del__(self):
348 try:
349 self.on_disconnect()
350 except Exception:
351 pass
352
353 def on_connect(self, connection):
354 self._sock = connection._sock
355 kwargs = {
356 'protocolError': InvalidResponse,
357 'replyError': self.parse_error,
358 }
359
360 # hiredis < 0.1.3 doesn't support functions that create exceptions
361 if not HIREDIS_SUPPORTS_CALLABLE_ERRORS:
362 kwargs['replyError'] = ResponseError
363
364 if connection.encoder.decode_responses:
365 kwargs['encoding'] = connection.encoder.encoding
366 self._reader = hiredis.Reader(**kwargs)
367 self._next_response = False
368
369 def on_disconnect(self):
370 self._sock = None
371 self._reader = None
372 self._next_response = False
373
374 def can_read(self):
375 if not self._reader:
376 raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
377
378 if self._next_response is False:
379 self._next_response = self._reader.gets()
380 return self._next_response is not False
381
382 def read_response(self):
383 if not self._reader:
384 raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
385
386 # _next_response might be cached from a can_read() call
387 if self._next_response is not False:
388 response = self._next_response
389 self._next_response = False
390 return response
391
392 response = self._reader.gets()
393 socket_read_size = self.socket_read_size
394 while response is False:
395 try:
396 if HIREDIS_USE_BYTE_BUFFER:
397 bufflen = recv_into(self._sock, self._buffer)
398 if bufflen == 0:
399 raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)
400 else:
401 buffer = recv(self._sock, socket_read_size)
402 # an empty string indicates the server shutdown the socket
403 if not isinstance(buffer, bytes) or len(buffer) == 0:
404 raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)
405 except socket.timeout:
406 raise TimeoutError("Timeout reading from socket")
407 except socket.error:
408 e = sys.exc_info()[1]
409 raise ConnectionError("Error while reading from socket: %s" %
410 (e.args,))
411 if HIREDIS_USE_BYTE_BUFFER:
412 self._reader.feed(self._buffer, 0, bufflen)
413 else:
414 self._reader.feed(buffer)
415 response = self._reader.gets()
416 # if an older version of hiredis is installed, we need to attempt
417 # to convert ResponseErrors to their appropriate types.
418 if not HIREDIS_SUPPORTS_CALLABLE_ERRORS:
419 if isinstance(response, ResponseError):
420 response = self.parse_error(response.args[0])
421 elif isinstance(response, list) and response and \
422 isinstance(response[0], ResponseError):
423 response[0] = self.parse_error(response[0].args[0])
424 # if the response is a ConnectionError or the response is a list and
425 # the first item is a ConnectionError, raise it as something bad
426 # happened
427 if isinstance(response, ConnectionError):
428 raise response
429 elif isinstance(response, list) and response and \
430 isinstance(response[0], ConnectionError):
431 raise response[0]
432 return response
433
434
435 if HIREDIS_AVAILABLE:
436 DefaultParser = HiredisParser
437 else:
438 DefaultParser = PythonParser
439
440
441 class Connection(object):
442 "Manages TCP communication to and from a Redis server"
443 description_format = "Connection<host=%(host)s,port=%(port)s,db=%(db)s>"
444
445 def __init__(self, host='localhost', port=6379, db=0, password=None,
446 socket_timeout=None, socket_connect_timeout=None,
447 socket_keepalive=False, socket_keepalive_options=None,
448 socket_type=0, retry_on_timeout=False, encoding='utf-8',
449 encoding_errors='strict', decode_responses=False,
450 parser_class=DefaultParser, socket_read_size=65536):
451 self.pid = os.getpid()
452 self.host = host
453 self.port = int(port)
454 self.db = db
455 self.password = password
456 self.socket_timeout = socket_timeout
457 self.socket_connect_timeout = socket_connect_timeout or socket_timeout
458 self.socket_keepalive = socket_keepalive
459 self.socket_keepalive_options = socket_keepalive_options or {}
460 self.socket_type = socket_type
461 self.retry_on_timeout = retry_on_timeout
462 self.encoder = Encoder(encoding, encoding_errors, decode_responses)
463 self._sock = None
464 self._parser = parser_class(socket_read_size=socket_read_size)
465 self._description_args = {
466 'host': self.host,
467 'port': self.port,
468 'db': self.db,
469 }
470 self._connect_callbacks = []
471 self._buffer_cutoff = 6000
472
473 def __repr__(self):
474 return self.description_format % self._description_args
475
476 def __del__(self):
477 try:
478 self.disconnect()
479 except Exception:
480 pass
481
482 def register_connect_callback(self, callback):
483 self._connect_callbacks.append(callback)
484
485 def clear_connect_callbacks(self):
486 self._connect_callbacks = []
487
488 def connect(self):
489 "Connects to the Redis server if not already connected"
490 if self._sock:
491 return
492 try:
493 sock = self._connect()
494 except socket.timeout:
495 raise TimeoutError("Timeout connecting to server")
496 except socket.error:
497 e = sys.exc_info()[1]
498 raise ConnectionError(self._error_message(e))
499
500 self._sock = sock
501 try:
502 self.on_connect()
503 except RedisError:
504 # clean up after any error in on_connect
505 self.disconnect()
506 raise
507
508 # run any user callbacks. right now the only internal callback
509 # is for pubsub channel/pattern resubscription
510 for callback in self._connect_callbacks:
511 callback(self)
512
513 def _connect(self):
514 "Create a TCP socket connection"
515 # we want to mimic what socket.create_connection does to support
516 # ipv4/ipv6, but we want to set options prior to calling
517 # socket.connect()
518 err = None
519 for res in socket.getaddrinfo(self.host, self.port, self.socket_type,
520 socket.SOCK_STREAM):
521 family, socktype, proto, canonname, socket_address = res
522 sock = None
523 try:
524 sock = socket.socket(family, socktype, proto)
525 # TCP_NODELAY
526 sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
527
528 # TCP_KEEPALIVE
529 if self.socket_keepalive:
530 sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
531 for k, v in iteritems(self.socket_keepalive_options):
532 sock.setsockopt(socket.SOL_TCP, k, v)
533
534 # set the socket_connect_timeout before we connect
535 sock.settimeout(self.socket_connect_timeout)
536
537 # connect
538 sock.connect(socket_address)
539
540 # set the socket_timeout now that we're connected
541 sock.settimeout(self.socket_timeout)
542 return sock
543
544 except socket.error as _:
545 err = _
546 if sock is not None:
547 sock.close()
548
549 if err is not None:
550 raise err
551 raise socket.error("socket.getaddrinfo returned an empty list")
552
553 def _error_message(self, exception):
554 # args for socket.error can either be (errno, "message")
555 # or just "message"
556 if len(exception.args) == 1:
557 return "Error connecting to %s:%s. %s." % \
558 (self.host, self.port, exception.args[0])
559 else:
560 return "Error %s connecting to %s:%s. %s." % \
561 (exception.args[0], self.host, self.port, exception.args[1])
562
563 def on_connect(self):
564 "Initialize the connection, authenticate and select a database"
565 self._parser.on_connect(self)
566
567 # if a password is specified, authenticate
568 if self.password:
569 self.send_command('AUTH', self.password)
570 if nativestr(self.read_response()) != 'OK':
571 raise AuthenticationError('Invalid Password')
572
573 # if a database is specified, switch to it
574 if self.db:
575 self.send_command('SELECT', self.db)
576 if nativestr(self.read_response()) != 'OK':
577 raise ConnectionError('Invalid Database')
578
579 def disconnect(self):
580 "Disconnects from the Redis server"
581 self._parser.on_disconnect()
582 if self._sock is None:
583 return
584 try:
585 self._sock.shutdown(socket.SHUT_RDWR)
586 self._sock.close()
587 except socket.error:
588 pass
589 self._sock = None
590
591 def send_packed_command(self, command):
592 "Send an already packed command to the Redis server"
593 if not self._sock:
594 self.connect()
595 try:
596 if isinstance(command, str):
597 command = [command]
598 for item in command:
599 self._sock.sendall(item)
600 except socket.timeout:
601 self.disconnect()
602 raise TimeoutError("Timeout writing to socket")
603 except socket.error:
604 e = sys.exc_info()[1]
605 self.disconnect()
606 if len(e.args) == 1:
607 errno, errmsg = 'UNKNOWN', e.args[0]
608 else:
609 errno = e.args[0]
610 errmsg = e.args[1]
611 raise ConnectionError("Error %s while writing to socket. %s." %
612 (errno, errmsg))
613 except Exception as e:
614 self.disconnect()
615 raise e
616
617 def send_command(self, *args):
618 "Pack and send a command to the Redis server"
619 self.send_packed_command(self.pack_command(*args))
620
621 def can_read(self, timeout=0):
622 "Poll the socket to see if there's data that can be read."
623 sock = self._sock
624 if not sock:
625 self.connect()
626 sock = self._sock
627 return self._parser.can_read() or \
628 bool(select([sock], [], [], timeout)[0])
629
630 def read_response(self):
631 "Read the response from a previously sent command"
632 try:
633 response = self._parser.read_response()
634 except Exception as e:
635 self.disconnect()
636 raise e
637 if isinstance(response, ResponseError):
638 raise response
639 return response
640
641 def pack_command(self, *args):
642 "Pack a series of arguments into the Redis protocol"
643 output = []
644 # the client might have included 1 or more literal arguments in
645 # the command name, e.g., 'CONFIG GET'. The Redis server expects these
646 # arguments to be sent separately, so split the first argument
647 # manually. All of these arguements get wrapped in the Token class
648 # to prevent them from being encoded.
649 command = args[0]
650 if ' ' in command:
651 args = tuple(Token.get_token(s)
652 for s in command.split()) + args[1:]
653 else:
654 args = (Token.get_token(command),) + args[1:]
655
656 buff = SYM_EMPTY.join((SYM_STAR, str(len(args)).encode(), SYM_CRLF))
657
658 buffer_cutoff = self._buffer_cutoff
659 for arg in imap(self.encoder.encode, args):
660 # to avoid large string mallocs, chunk the command into the
661 # output list if we're sending large values
662 if len(buff) > buffer_cutoff or len(arg) > buffer_cutoff:
663 buff = SYM_EMPTY.join(
664 (buff, SYM_DOLLAR, str(len(arg)).encode(), SYM_CRLF))
665 output.append(buff)
666 output.append(arg)
667 buff = SYM_CRLF
668 else:
669 buff = SYM_EMPTY.join(
670 (buff, SYM_DOLLAR, str(len(arg)).encode(),
671 SYM_CRLF, arg, SYM_CRLF))
672 output.append(buff)
673 return output
674
675 def pack_commands(self, commands):
676 "Pack multiple commands into the Redis protocol"
677 output = []
678 pieces = []
679 buffer_length = 0
680 buffer_cutoff = self._buffer_cutoff
681
682 for cmd in commands:
683 for chunk in self.pack_command(*cmd):
684 chunklen = len(chunk)
685 if buffer_length > buffer_cutoff or chunklen > buffer_cutoff:
686 output.append(SYM_EMPTY.join(pieces))
687 buffer_length = 0
688 pieces = []
689
690 if chunklen > self._buffer_cutoff:
691 output.append(chunk)
692 else:
693 pieces.append(chunk)
694 buffer_length += chunklen
695
696 if pieces:
697 output.append(SYM_EMPTY.join(pieces))
698 return output
699
700
701 class SSLConnection(Connection):
702 description_format = "SSLConnection<host=%(host)s,port=%(port)s,db=%(db)s>"
703
704 def __init__(self, ssl_keyfile=None, ssl_certfile=None,
705 ssl_cert_reqs='required', ssl_ca_certs=None, **kwargs):
706 if not ssl_available:
707 raise RedisError("Python wasn't built with SSL support")
708
709 super(SSLConnection, self).__init__(**kwargs)
710
711 self.keyfile = ssl_keyfile
712 self.certfile = ssl_certfile
713 if ssl_cert_reqs is None:
714 ssl_cert_reqs = ssl.CERT_NONE
715 elif isinstance(ssl_cert_reqs, basestring):
716 CERT_REQS = {
717 'none': ssl.CERT_NONE,
718 'optional': ssl.CERT_OPTIONAL,
719 'required': ssl.CERT_REQUIRED
720 }
721 if ssl_cert_reqs not in CERT_REQS:
722 raise RedisError(
723 "Invalid SSL Certificate Requirements Flag: %s" %
724 ssl_cert_reqs)
725 ssl_cert_reqs = CERT_REQS[ssl_cert_reqs]
726 self.cert_reqs = ssl_cert_reqs
727 self.ca_certs = ssl_ca_certs
728
729 def _connect(self):
730 "Wrap the socket with SSL support"
731 sock = super(SSLConnection, self)._connect()
732 sock = ssl.wrap_socket(sock,
733 cert_reqs=self.cert_reqs,
734 keyfile=self.keyfile,
735 certfile=self.certfile,
736 ca_certs=self.ca_certs)
737 return sock
738
739
740 class UnixDomainSocketConnection(Connection):
741 description_format = "UnixDomainSocketConnection<path=%(path)s,db=%(db)s>"
742
743 def __init__(self, path='', db=0, password=None,
744 socket_timeout=None, encoding='utf-8',
745 encoding_errors='strict', decode_responses=False,
746 retry_on_timeout=False,
747 parser_class=DefaultParser, socket_read_size=65536):
748 self.pid = os.getpid()
749 self.path = path
750 self.db = db
751 self.password = password
752 self.socket_timeout = socket_timeout
753 self.retry_on_timeout = retry_on_timeout
754 self.encoder = Encoder(encoding, encoding_errors, decode_responses)
755 self._sock = None
756 self._parser = parser_class(socket_read_size=socket_read_size)
757 self._description_args = {
758 'path': self.path,
759 'db': self.db,
760 }
761 self._connect_callbacks = []
762 self._buffer_cutoff = 6000
763
764 def _connect(self):
765 "Create a Unix domain socket connection"
766 sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
767 sock.settimeout(self.socket_timeout)
768 sock.connect(self.path)
769 return sock
770
771 def _error_message(self, exception):
772 # args for socket.error can either be (errno, "message")
773 # or just "message"
774 if len(exception.args) == 1:
775 return "Error connecting to unix socket: %s. %s." % \
776 (self.path, exception.args[0])
777 else:
778 return "Error %s connecting to unix socket: %s. %s." % \
779 (exception.args[0], self.path, exception.args[1])
780
781
782 FALSE_STRINGS = ('0', 'F', 'FALSE', 'N', 'NO')
783
784
785 def to_bool(value):
786 if value is None or value == '':
787 return None
788 if isinstance(value, basestring) and value.upper() in FALSE_STRINGS:
789 return False
790 return bool(value)
791
792
793 URL_QUERY_ARGUMENT_PARSERS = {
794 'socket_timeout': float,
795 'socket_connect_timeout': float,
796 'socket_keepalive': to_bool,
797 'retry_on_timeout': to_bool,
798 'max_connections': int,
799 }
800
801
802 class ConnectionPool(object):
803 "Generic connection pool"
804 @classmethod
805 def from_url(cls, url, db=None, decode_components=False, **kwargs):
806 """
807 Return a connection pool configured from the given URL.
808
809 For example::
810
811 redis://[:password]@localhost:6379/0
812 rediss://[:password]@localhost:6379/0
813 unix://[:password]@/path/to/socket.sock?db=0
814
815 Three URL schemes are supported:
816
817 - ```redis://``
818 <https://www.iana.org/assignments/uri-schemes/prov/redis>`_ creates a
819 normal TCP socket connection
820 - ```rediss://``
821 <https://www.iana.org/assignments/uri-schemes/prov/rediss>`_ creates
822 a SSL wrapped TCP socket connection
823 - ``unix://`` creates a Unix Domain Socket connection
824
825 There are several ways to specify a database number. The parse function
826 will return the first specified option:
827 1. A ``db`` querystring option, e.g. redis://localhost?db=0
828 2. If using the redis:// scheme, the path argument of the url, e.g.
829 redis://localhost/0
830 3. The ``db`` argument to this function.
831
832 If none of these options are specified, db=0 is used.
833
834 The ``decode_components`` argument allows this function to work with
835 percent-encoded URLs. If this argument is set to ``True`` all ``%xx``
836 escapes will be replaced by their single-character equivalents after
837 the URL has been parsed. This only applies to the ``hostname``,
838 ``path``, and ``password`` components.
839
840 Any additional querystring arguments and keyword arguments will be
841 passed along to the ConnectionPool class's initializer. The querystring
842 arguments ``socket_connect_timeout`` and ``socket_timeout`` if supplied
843 are parsed as float values. The arguments ``socket_keepalive`` and
844 ``retry_on_timeout`` are parsed to boolean values that accept
845 True/False, Yes/No values to indicate state. Invalid types cause a
846 ``UserWarning`` to be raised. In the case of conflicting arguments,
847 querystring arguments always win.
848
849 """
850 url = urlparse(url)
851 url_options = {}
852
853 for name, value in iteritems(parse_qs(url.query)):
854 if value and len(value) > 0:
855 parser = URL_QUERY_ARGUMENT_PARSERS.get(name)
856 if parser:
857 try:
858 url_options[name] = parser(value[0])
859 except (TypeError, ValueError):
860 warnings.warn(UserWarning(
861 "Invalid value for `%s` in connection URL." % name
862 ))
863 else:
864 url_options[name] = value[0]
865
866 if decode_components:
867 password = unquote(url.password) if url.password else None
868 path = unquote(url.path) if url.path else None
869 hostname = unquote(url.hostname) if url.hostname else None
870 else:
871 password = url.password
872 path = url.path
873 hostname = url.hostname
874
875 # We only support redis:// and unix:// schemes.
876 if url.scheme == 'unix':
877 url_options.update({
878 'password': password,
879 'path': path,
880 'connection_class': UnixDomainSocketConnection,
881 })
882
883 else:
884 url_options.update({
885 'host': hostname,
886 'port': int(url.port or 6379),
887 'password': password,
888 })
889
890 # If there's a path argument, use it as the db argument if a
891 # querystring value wasn't specified
892 if 'db' not in url_options and path:
893 try:
894 url_options['db'] = int(path.replace('/', ''))
895 except (AttributeError, ValueError):
896 pass
897
898 if url.scheme == 'rediss':
899 url_options['connection_class'] = SSLConnection
900
901 # last shot at the db value
902 url_options['db'] = int(url_options.get('db', db or 0))
903
904 # update the arguments from the URL values
905 kwargs.update(url_options)
906
907 # backwards compatability
908 if 'charset' in kwargs:
909 warnings.warn(DeprecationWarning(
910 '"charset" is deprecated. Use "encoding" instead'))
911 kwargs['encoding'] = kwargs.pop('charset')
912 if 'errors' in kwargs:
913 warnings.warn(DeprecationWarning(
914 '"errors" is deprecated. Use "encoding_errors" instead'))
915 kwargs['encoding_errors'] = kwargs.pop('errors')
916
917 return cls(**kwargs)
918
919 def __init__(self, connection_class=Connection, max_connections=None,
920 **connection_kwargs):
921 """
922 Create a connection pool. If max_connections is set, then this
923 object raises redis.ConnectionError when the pool's limit is reached.
924
925 By default, TCP connections are created unless connection_class is
926 specified. Use redis.UnixDomainSocketConnection for unix sockets.
927
928 Any additional keyword arguments are passed to the constructor of
929 connection_class.
930 """
931 max_connections = max_connections or 2 ** 31
932 if not isinstance(max_connections, (int, long)) or max_connections < 0:
933 raise ValueError('"max_connections" must be a positive integer')
934
935 self.connection_class = connection_class
936 self.connection_kwargs = connection_kwargs
937 self.max_connections = max_connections
938
939 self.reset()
940
941 def __repr__(self):
942 return "%s<%s>" % (
943 type(self).__name__,
944 self.connection_class.description_format % self.connection_kwargs,
945 )
946
947 def reset(self):
948 self.pid = os.getpid()
949 self._created_connections = 0
950 self._available_connections = []
951 self._in_use_connections = set()
952 self._check_lock = threading.Lock()
953
954 def _checkpid(self):
955 if self.pid != os.getpid():
956 with self._check_lock:
957 if self.pid == os.getpid():
958 # another thread already did the work while we waited
959 # on the lock.
960 return
961 self.disconnect()
962 self.reset()
963
964 def get_connection(self, command_name, *keys, **options):
965 "Get a connection from the pool"
966 self._checkpid()
967 try:
968 connection = self._available_connections.pop()
969 except IndexError:
970 connection = self.make_connection()
971 self._in_use_connections.add(connection)
972 return connection
973
974 def get_encoder(self):
975 "Return an encoder based on encoding settings"
976 kwargs = self.connection_kwargs
977 return Encoder(
978 encoding=kwargs.get('encoding', 'utf-8'),
979 encoding_errors=kwargs.get('encoding_errors', 'strict'),
980 decode_responses=kwargs.get('decode_responses', False)
981 )
982
983 def make_connection(self):
984 "Create a new connection"
985 if self._created_connections >= self.max_connections:
986 raise ConnectionError("Too many connections")
987 self._created_connections += 1
988 return self.connection_class(**self.connection_kwargs)
989
990 def release(self, connection):
991 "Releases the connection back to the pool"
992 self._checkpid()
993 if connection.pid != self.pid:
994 return
995 self._in_use_connections.remove(connection)
996 self._available_connections.append(connection)
997
998 def disconnect(self):
999 "Disconnects all connections in the pool"
1000 all_conns = chain(self._available_connections,
1001 self._in_use_connections)
1002 for connection in all_conns:
1003 connection.disconnect()
1004
1005
1006 class BlockingConnectionPool(ConnectionPool):
1007 """
1008 Thread-safe blocking connection pool::
1009
1010 >>> from redis.client import Redis
1011 >>> client = Redis(connection_pool=BlockingConnectionPool())
1012
1013 It performs the same function as the default
1014 ``:py:class: ~redis.connection.ConnectionPool`` implementation, in that,
1015 it maintains a pool of reusable connections that can be shared by
1016 multiple redis clients (safely across threads if required).
1017
1018 The difference is that, in the event that a client tries to get a
1019 connection from the pool when all of connections are in use, rather than
1020 raising a ``:py:class: ~redis.exceptions.ConnectionError`` (as the default
1021 ``:py:class: ~redis.connection.ConnectionPool`` implementation does), it
1022 makes the client wait ("blocks") for a specified number of seconds until
1023 a connection becomes available.
1024
1025 Use ``max_connections`` to increase / decrease the pool size::
1026
1027 >>> pool = BlockingConnectionPool(max_connections=10)
1028
1029 Use ``timeout`` to tell it either how many seconds to wait for a connection
1030 to become available, or to block forever:
1031
1032 # Block forever.
1033 >>> pool = BlockingConnectionPool(timeout=None)
1034
1035 # Raise a ``ConnectionError`` after five seconds if a connection is
1036 # not available.
1037 >>> pool = BlockingConnectionPool(timeout=5)
1038 """
1039 def __init__(self, max_connections=50, timeout=20,
1040 connection_class=Connection, queue_class=LifoQueue,
1041 **connection_kwargs):
1042
1043 self.queue_class = queue_class
1044 self.timeout = timeout
1045 super(BlockingConnectionPool, self).__init__(
1046 connection_class=connection_class,
1047 max_connections=max_connections,
1048 **connection_kwargs)
1049
1050 def reset(self):
1051 self.pid = os.getpid()
1052 self._check_lock = threading.Lock()
1053
1054 # Create and fill up a thread safe queue with ``None`` values.
1055 self.pool = self.queue_class(self.max_connections)
1056 while True:
1057 try:
1058 self.pool.put_nowait(None)
1059 except Full:
1060 break
1061
1062 # Keep a list of actual connection instances so that we can
1063 # disconnect them later.
1064 self._connections = []
1065
1066 def make_connection(self):
1067 "Make a fresh connection."
1068 connection = self.connection_class(**self.connection_kwargs)
1069 self._connections.append(connection)
1070 return connection
1071
1072 def get_connection(self, command_name, *keys, **options):
1073 """
1074 Get a connection, blocking for ``self.timeout`` until a connection
1075 is available from the pool.
1076
1077 If the connection returned is ``None`` then creates a new connection.
1078 Because we use a last-in first-out queue, the existing connections
1079 (having been returned to the pool after the initial ``None`` values
1080 were added) will be returned before ``None`` values. This means we only
1081 create new connections when we need to, i.e.: the actual number of
1082 connections will only increase in response to demand.
1083 """
1084 # Make sure we haven't changed process.
1085 self._checkpid()
1086
1087 # Try and get a connection from the pool. If one isn't available within
1088 # self.timeout then raise a ``ConnectionError``.
1089 connection = None
1090 try:
1091 connection = self.pool.get(block=True, timeout=self.timeout)
1092 except Empty:
1093 # Note that this is not caught by the redis client and will be
1094 # raised unless handled by application code. If you want never to
1095 raise ConnectionError("No connection available.")
1096
1097 # If the ``connection`` is actually ``None`` then that's a cue to make
1098 # a new connection to add to the pool.
1099 if connection is None:
1100 connection = self.make_connection()
1101
1102 return connection
1103
1104 def release(self, connection):
1105 "Releases the connection back to the pool."
1106 # Make sure we haven't changed process.
1107 self._checkpid()
1108 if connection.pid != self.pid:
1109 return
1110
1111 # Put the connection back into the pool.
1112 try:
1113 self.pool.put_nowait(connection)
1114 except Full:
1115 # perhaps the pool has been reset() after a fork? regardless,
1116 # we don't want this connection
1117 pass
1118
1119 def disconnect(self):
1120 "Disconnects all connections in the pool."
1121 for connection in self._connections:
1122 connection.disconnect()
1123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
```diff
diff --git a/redis/connection.py b/redis/connection.py
--- a/redis/connection.py
+++ b/redis/connection.py
@@ -276,9 +276,7 @@
 
     def on_disconnect(self):
         "Called when the socket disconnects"
-        if self._sock is not None:
-            self._sock.close()
-            self._sock = None
+        self._sock = None
         if self._buffer is not None:
             self._buffer.close()
             self._buffer = None
```
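
The patch above keeps socket ownership in `Connection.disconnect()`: `PythonParser.on_disconnect()` now only drops its reference, matching `HiredisParser.on_disconnect()`, so the later `shutdown()`/`close()` in `Connection.disconnect()` no longer runs against a socket the parser has already closed. Below is a minimal standalone sketch of that interaction; the class and variable names are illustrative only and this is not redis-py code.

```python
# Illustrative sketch of the disconnect flow; simplified, not redis-py itself.
import socket


class OldParser:
    "Mimics the pre-patch PythonParser: closes the socket itself."
    def on_disconnect(self, conn):
        if conn._sock is not None:
            conn._sock.close()


class NewParser:
    "Mimics the patched PythonParser / HiredisParser: only drops the reference."
    def on_disconnect(self, conn):
        pass


class Conn:
    def __init__(self, parser):
        self._parser = parser
        # a connected socket pair stands in for the TCP connection to Redis
        self._sock, self._peer = socket.socketpair()

    def disconnect(self):
        # same shape as Connection.disconnect(): parser first, then teardown
        self._parser.on_disconnect(self)
        if self._sock is None:
            return
        try:
            self._sock.shutdown(socket.SHUT_RDWR)  # raises EBADF if already closed
            self._sock.close()
        except socket.error as e:
            print("swallowed:", e)
        self._sock = None


Conn(OldParser()).disconnect()  # shutdown() hits a closed socket -> swallowed EBADF
Conn(NewParser()).disconnect()  # clean shutdown/close, nothing swallowed
```

In the old flow the `EBADF` is merely swallowed here, but the parser has taken over the socket teardown that `Connection.disconnect()` is written to own, which is the inconsistency the issue describes.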
| {"golden_diff": "diff --git a/redis/connection.py b/redis/connection.py\n--- a/redis/connection.py\n+++ b/redis/connection.py\n@@ -276,9 +276,7 @@\n \n def on_disconnect(self):\n \"Called when the socket disconnects\"\n- if self._sock is not None:\n- self._sock.close()\n- self._sock = None\n+ self._sock = None\n if self._buffer is not None:\n self._buffer.close()\n self._buffer = None\n", "issue": "PythonParser vs HiredisParser on_disconnect behavior\nPythonParser's `on_disconnect` implementation is inconsistent with HiredisParser implementation (or vice versa):\r\n```python\r\nclass PythonParser(...):\r\n def on_disconnect(self):\r\n \"Called when the socket disconnects\"\r\n if self._sock is not None:\r\n self._sock.close()\r\n self._sock = None\r\n if self._buffer is not None:\r\n self._buffer.close()\r\n self._buffer = None\r\n self.encoder = None\r\n```\r\nand\r\n```python\r\nclass HiredisParser(...):\r\n def on_disconnect(self):\r\n self._sock = None\r\n self._reader = None\r\n self._next_response = False\r\n```\r\nWhy does the PythonParser closes the `_sock` object?\r\nBy doing this the subsequent `shutdown()` and `close()` in `Connection.disconnect` does not make any sense, in fact it shutdown on closed socket raises error which is ignored.\r\n\r\nI can submit a PR but please tell me what place to fix? (HiredisParser/PythonParser/shutdown)\r\n\r\nPS: this issue causes other issues in other repos (celery/kombu#954, celery/celery#3898) \n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom distutils.version import StrictVersion\nfrom itertools import chain\nimport io\nimport os\nimport socket\nimport sys\nimport threading\nimport warnings\n\ntry:\n import ssl\n ssl_available = True\nexcept ImportError:\n ssl_available = False\n\nfrom redis._compat import (xrange, imap, byte_to_chr, unicode, long,\n nativestr, basestring, iteritems,\n LifoQueue, Empty, Full, urlparse, parse_qs,\n recv, recv_into, select, unquote)\nfrom redis.exceptions import (\n DataError,\n RedisError,\n ConnectionError,\n TimeoutError,\n BusyLoadingError,\n ResponseError,\n InvalidResponse,\n AuthenticationError,\n NoScriptError,\n ExecAbortError,\n ReadOnlyError\n)\nfrom redis.utils import HIREDIS_AVAILABLE\nif HIREDIS_AVAILABLE:\n import hiredis\n\n hiredis_version = StrictVersion(hiredis.__version__)\n HIREDIS_SUPPORTS_CALLABLE_ERRORS = \\\n hiredis_version >= StrictVersion('0.1.3')\n HIREDIS_SUPPORTS_BYTE_BUFFER = \\\n hiredis_version >= StrictVersion('0.1.4')\n\n if not HIREDIS_SUPPORTS_BYTE_BUFFER:\n msg = (\"redis-py works best with hiredis >= 0.1.4. You're running \"\n \"hiredis %s. 
Please consider upgrading.\" % hiredis.__version__)\n warnings.warn(msg)\n\n HIREDIS_USE_BYTE_BUFFER = True\n # only use byte buffer if hiredis supports it\n if not HIREDIS_SUPPORTS_BYTE_BUFFER:\n HIREDIS_USE_BYTE_BUFFER = False\n\nSYM_STAR = b'*'\nSYM_DOLLAR = b'$'\nSYM_CRLF = b'\\r\\n'\nSYM_EMPTY = b''\n\nSERVER_CLOSED_CONNECTION_ERROR = \"Connection closed by server.\"\n\n\nclass Token(object):\n \"\"\"\n Literal strings in Redis commands, such as the command names and any\n hard-coded arguments are wrapped in this class so we know not to apply\n and encoding rules on them.\n \"\"\"\n\n _cache = {}\n\n @classmethod\n def get_token(cls, value):\n \"Gets a cached token object or creates a new one if not already cached\"\n\n # Use try/except because after running for a short time most tokens\n # should already be cached\n try:\n return cls._cache[value]\n except KeyError:\n token = Token(value)\n cls._cache[value] = token\n return token\n\n def __init__(self, value):\n if isinstance(value, Token):\n value = value.value\n self.value = value\n self.encoded_value = value.encode()\n\n def __repr__(self):\n return self.value\n\n def __str__(self):\n return self.value\n\n\nclass Encoder(object):\n \"Encode strings to bytes and decode bytes to strings\"\n\n def __init__(self, encoding, encoding_errors, decode_responses):\n self.encoding = encoding\n self.encoding_errors = encoding_errors\n self.decode_responses = decode_responses\n\n def encode(self, value):\n \"Return a bytestring representation of the value\"\n if isinstance(value, Token):\n return value.encoded_value\n elif isinstance(value, bytes):\n return value\n elif isinstance(value, bool):\n # special case bool since it is a subclass of int\n raise DataError(\"Invalid input of type: 'bool'. Convert to a \"\n \"byte, string or number first.\")\n elif isinstance(value, float):\n value = repr(value).encode()\n elif isinstance(value, (int, long)):\n # python 2 repr() on longs is '123L', so use str() instead\n value = str(value).encode()\n elif not isinstance(value, basestring):\n # a value we don't know how to deal with. throw an error\n typename = type(value).__name__\n raise DataError(\"Invalid input of type: '%s'. 
Convert to a \"\n \"byte, string or number first.\" % typename)\n if isinstance(value, unicode):\n value = value.encode(self.encoding, self.encoding_errors)\n return value\n\n def decode(self, value, force=False):\n \"Return a unicode string from the byte representation\"\n if (self.decode_responses or force) and isinstance(value, bytes):\n value = value.decode(self.encoding, self.encoding_errors)\n return value\n\n\nclass BaseParser(object):\n EXCEPTION_CLASSES = {\n 'ERR': {\n 'max number of clients reached': ConnectionError\n },\n 'EXECABORT': ExecAbortError,\n 'LOADING': BusyLoadingError,\n 'NOSCRIPT': NoScriptError,\n 'READONLY': ReadOnlyError,\n }\n\n def parse_error(self, response):\n \"Parse an error response\"\n error_code = response.split(' ')[0]\n if error_code in self.EXCEPTION_CLASSES:\n response = response[len(error_code) + 1:]\n exception_class = self.EXCEPTION_CLASSES[error_code]\n if isinstance(exception_class, dict):\n exception_class = exception_class.get(response, ResponseError)\n return exception_class(response)\n return ResponseError(response)\n\n\nclass SocketBuffer(object):\n def __init__(self, socket, socket_read_size):\n self._sock = socket\n self.socket_read_size = socket_read_size\n self._buffer = io.BytesIO()\n # number of bytes written to the buffer from the socket\n self.bytes_written = 0\n # number of bytes read from the buffer\n self.bytes_read = 0\n\n @property\n def length(self):\n return self.bytes_written - self.bytes_read\n\n def _read_from_socket(self, length=None):\n socket_read_size = self.socket_read_size\n buf = self._buffer\n buf.seek(self.bytes_written)\n marker = 0\n\n try:\n while True:\n data = recv(self._sock, socket_read_size)\n # an empty string indicates the server shutdown the socket\n if isinstance(data, bytes) and len(data) == 0:\n raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)\n buf.write(data)\n data_length = len(data)\n self.bytes_written += data_length\n marker += data_length\n\n if length is not None and length > marker:\n continue\n break\n except socket.timeout:\n raise TimeoutError(\"Timeout reading from socket\")\n except socket.error:\n e = sys.exc_info()[1]\n raise ConnectionError(\"Error while reading from socket: %s\" %\n (e.args,))\n\n def read(self, length):\n length = length + 2 # make sure to read the \\r\\n terminator\n # make sure we've read enough data from the socket\n if length > self.length:\n self._read_from_socket(length - self.length)\n\n self._buffer.seek(self.bytes_read)\n data = self._buffer.read(length)\n self.bytes_read += len(data)\n\n # purge the buffer when we've consumed it all so it doesn't\n # grow forever\n if self.bytes_read == self.bytes_written:\n self.purge()\n\n return data[:-2]\n\n def readline(self):\n buf = self._buffer\n buf.seek(self.bytes_read)\n data = buf.readline()\n while not data.endswith(SYM_CRLF):\n # there's more data in the socket that we need\n self._read_from_socket()\n buf.seek(self.bytes_read)\n data = buf.readline()\n\n self.bytes_read += len(data)\n\n # purge the buffer when we've consumed it all so it doesn't\n # grow forever\n if self.bytes_read == self.bytes_written:\n self.purge()\n\n return data[:-2]\n\n def purge(self):\n self._buffer.seek(0)\n self._buffer.truncate()\n self.bytes_written = 0\n self.bytes_read = 0\n\n def close(self):\n try:\n self.purge()\n self._buffer.close()\n except Exception:\n # issue #633 suggests the purge/close somehow raised a\n # BadFileDescriptor error. Perhaps the client ran out of\n # memory or something else? 
It's probably OK to ignore\n # any error being raised from purge/close since we're\n # removing the reference to the instance below.\n pass\n self._buffer = None\n self._sock = None\n\n\nclass PythonParser(BaseParser):\n \"Plain Python parsing class\"\n def __init__(self, socket_read_size):\n self.socket_read_size = socket_read_size\n self.encoder = None\n self._sock = None\n self._buffer = None\n\n def __del__(self):\n try:\n self.on_disconnect()\n except Exception:\n pass\n\n def on_connect(self, connection):\n \"Called when the socket connects\"\n self._sock = connection._sock\n self._buffer = SocketBuffer(self._sock, self.socket_read_size)\n self.encoder = connection.encoder\n\n def on_disconnect(self):\n \"Called when the socket disconnects\"\n if self._sock is not None:\n self._sock.close()\n self._sock = None\n if self._buffer is not None:\n self._buffer.close()\n self._buffer = None\n self.encoder = None\n\n def can_read(self):\n return self._buffer and bool(self._buffer.length)\n\n def read_response(self):\n response = self._buffer.readline()\n if not response:\n raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)\n\n byte, response = byte_to_chr(response[0]), response[1:]\n\n if byte not in ('-', '+', ':', '$', '*'):\n raise InvalidResponse(\"Protocol Error: %s, %s\" %\n (str(byte), str(response)))\n\n # server returned an error\n if byte == '-':\n response = nativestr(response)\n error = self.parse_error(response)\n # if the error is a ConnectionError, raise immediately so the user\n # is notified\n if isinstance(error, ConnectionError):\n raise error\n # otherwise, we're dealing with a ResponseError that might belong\n # inside a pipeline response. the connection's read_response()\n # and/or the pipeline's execute() will raise this error if\n # necessary, so just return the exception instance here.\n return error\n # single value\n elif byte == '+':\n pass\n # int value\n elif byte == ':':\n response = long(response)\n # bulk response\n elif byte == '$':\n length = int(response)\n if length == -1:\n return None\n response = self._buffer.read(length)\n # multi-bulk response\n elif byte == '*':\n length = int(response)\n if length == -1:\n return None\n response = [self.read_response() for i in xrange(length)]\n if isinstance(response, bytes):\n response = self.encoder.decode(response)\n return response\n\n\nclass HiredisParser(BaseParser):\n \"Parser class for connections using Hiredis\"\n def __init__(self, socket_read_size):\n if not HIREDIS_AVAILABLE:\n raise RedisError(\"Hiredis is not installed\")\n self.socket_read_size = socket_read_size\n\n if HIREDIS_USE_BYTE_BUFFER:\n self._buffer = bytearray(socket_read_size)\n\n def __del__(self):\n try:\n self.on_disconnect()\n except Exception:\n pass\n\n def on_connect(self, connection):\n self._sock = connection._sock\n kwargs = {\n 'protocolError': InvalidResponse,\n 'replyError': self.parse_error,\n }\n\n # hiredis < 0.1.3 doesn't support functions that create exceptions\n if not HIREDIS_SUPPORTS_CALLABLE_ERRORS:\n kwargs['replyError'] = ResponseError\n\n if connection.encoder.decode_responses:\n kwargs['encoding'] = connection.encoder.encoding\n self._reader = hiredis.Reader(**kwargs)\n self._next_response = False\n\n def on_disconnect(self):\n self._sock = None\n self._reader = None\n self._next_response = False\n\n def can_read(self):\n if not self._reader:\n raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)\n\n if self._next_response is False:\n self._next_response = self._reader.gets()\n return self._next_response is 
not False\n\n def read_response(self):\n if not self._reader:\n raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)\n\n # _next_response might be cached from a can_read() call\n if self._next_response is not False:\n response = self._next_response\n self._next_response = False\n return response\n\n response = self._reader.gets()\n socket_read_size = self.socket_read_size\n while response is False:\n try:\n if HIREDIS_USE_BYTE_BUFFER:\n bufflen = recv_into(self._sock, self._buffer)\n if bufflen == 0:\n raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)\n else:\n buffer = recv(self._sock, socket_read_size)\n # an empty string indicates the server shutdown the socket\n if not isinstance(buffer, bytes) or len(buffer) == 0:\n raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)\n except socket.timeout:\n raise TimeoutError(\"Timeout reading from socket\")\n except socket.error:\n e = sys.exc_info()[1]\n raise ConnectionError(\"Error while reading from socket: %s\" %\n (e.args,))\n if HIREDIS_USE_BYTE_BUFFER:\n self._reader.feed(self._buffer, 0, bufflen)\n else:\n self._reader.feed(buffer)\n response = self._reader.gets()\n # if an older version of hiredis is installed, we need to attempt\n # to convert ResponseErrors to their appropriate types.\n if not HIREDIS_SUPPORTS_CALLABLE_ERRORS:\n if isinstance(response, ResponseError):\n response = self.parse_error(response.args[0])\n elif isinstance(response, list) and response and \\\n isinstance(response[0], ResponseError):\n response[0] = self.parse_error(response[0].args[0])\n # if the response is a ConnectionError or the response is a list and\n # the first item is a ConnectionError, raise it as something bad\n # happened\n if isinstance(response, ConnectionError):\n raise response\n elif isinstance(response, list) and response and \\\n isinstance(response[0], ConnectionError):\n raise response[0]\n return response\n\n\nif HIREDIS_AVAILABLE:\n DefaultParser = HiredisParser\nelse:\n DefaultParser = PythonParser\n\n\nclass Connection(object):\n \"Manages TCP communication to and from a Redis server\"\n description_format = \"Connection<host=%(host)s,port=%(port)s,db=%(db)s>\"\n\n def __init__(self, host='localhost', port=6379, db=0, password=None,\n socket_timeout=None, socket_connect_timeout=None,\n socket_keepalive=False, socket_keepalive_options=None,\n socket_type=0, retry_on_timeout=False, encoding='utf-8',\n encoding_errors='strict', decode_responses=False,\n parser_class=DefaultParser, socket_read_size=65536):\n self.pid = os.getpid()\n self.host = host\n self.port = int(port)\n self.db = db\n self.password = password\n self.socket_timeout = socket_timeout\n self.socket_connect_timeout = socket_connect_timeout or socket_timeout\n self.socket_keepalive = socket_keepalive\n self.socket_keepalive_options = socket_keepalive_options or {}\n self.socket_type = socket_type\n self.retry_on_timeout = retry_on_timeout\n self.encoder = Encoder(encoding, encoding_errors, decode_responses)\n self._sock = None\n self._parser = parser_class(socket_read_size=socket_read_size)\n self._description_args = {\n 'host': self.host,\n 'port': self.port,\n 'db': self.db,\n }\n self._connect_callbacks = []\n self._buffer_cutoff = 6000\n\n def __repr__(self):\n return self.description_format % self._description_args\n\n def __del__(self):\n try:\n self.disconnect()\n except Exception:\n pass\n\n def register_connect_callback(self, callback):\n self._connect_callbacks.append(callback)\n\n def clear_connect_callbacks(self):\n self._connect_callbacks = []\n\n def 
connect(self):\n \"Connects to the Redis server if not already connected\"\n if self._sock:\n return\n try:\n sock = self._connect()\n except socket.timeout:\n raise TimeoutError(\"Timeout connecting to server\")\n except socket.error:\n e = sys.exc_info()[1]\n raise ConnectionError(self._error_message(e))\n\n self._sock = sock\n try:\n self.on_connect()\n except RedisError:\n # clean up after any error in on_connect\n self.disconnect()\n raise\n\n # run any user callbacks. right now the only internal callback\n # is for pubsub channel/pattern resubscription\n for callback in self._connect_callbacks:\n callback(self)\n\n def _connect(self):\n \"Create a TCP socket connection\"\n # we want to mimic what socket.create_connection does to support\n # ipv4/ipv6, but we want to set options prior to calling\n # socket.connect()\n err = None\n for res in socket.getaddrinfo(self.host, self.port, self.socket_type,\n socket.SOCK_STREAM):\n family, socktype, proto, canonname, socket_address = res\n sock = None\n try:\n sock = socket.socket(family, socktype, proto)\n # TCP_NODELAY\n sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)\n\n # TCP_KEEPALIVE\n if self.socket_keepalive:\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)\n for k, v in iteritems(self.socket_keepalive_options):\n sock.setsockopt(socket.SOL_TCP, k, v)\n\n # set the socket_connect_timeout before we connect\n sock.settimeout(self.socket_connect_timeout)\n\n # connect\n sock.connect(socket_address)\n\n # set the socket_timeout now that we're connected\n sock.settimeout(self.socket_timeout)\n return sock\n\n except socket.error as _:\n err = _\n if sock is not None:\n sock.close()\n\n if err is not None:\n raise err\n raise socket.error(\"socket.getaddrinfo returned an empty list\")\n\n def _error_message(self, exception):\n # args for socket.error can either be (errno, \"message\")\n # or just \"message\"\n if len(exception.args) == 1:\n return \"Error connecting to %s:%s. %s.\" % \\\n (self.host, self.port, exception.args[0])\n else:\n return \"Error %s connecting to %s:%s. %s.\" % \\\n (exception.args[0], self.host, self.port, exception.args[1])\n\n def on_connect(self):\n \"Initialize the connection, authenticate and select a database\"\n self._parser.on_connect(self)\n\n # if a password is specified, authenticate\n if self.password:\n self.send_command('AUTH', self.password)\n if nativestr(self.read_response()) != 'OK':\n raise AuthenticationError('Invalid Password')\n\n # if a database is specified, switch to it\n if self.db:\n self.send_command('SELECT', self.db)\n if nativestr(self.read_response()) != 'OK':\n raise ConnectionError('Invalid Database')\n\n def disconnect(self):\n \"Disconnects from the Redis server\"\n self._parser.on_disconnect()\n if self._sock is None:\n return\n try:\n self._sock.shutdown(socket.SHUT_RDWR)\n self._sock.close()\n except socket.error:\n pass\n self._sock = None\n\n def send_packed_command(self, command):\n \"Send an already packed command to the Redis server\"\n if not self._sock:\n self.connect()\n try:\n if isinstance(command, str):\n command = [command]\n for item in command:\n self._sock.sendall(item)\n except socket.timeout:\n self.disconnect()\n raise TimeoutError(\"Timeout writing to socket\")\n except socket.error:\n e = sys.exc_info()[1]\n self.disconnect()\n if len(e.args) == 1:\n errno, errmsg = 'UNKNOWN', e.args[0]\n else:\n errno = e.args[0]\n errmsg = e.args[1]\n raise ConnectionError(\"Error %s while writing to socket. 
%s.\" %\n (errno, errmsg))\n except Exception as e:\n self.disconnect()\n raise e\n\n def send_command(self, *args):\n \"Pack and send a command to the Redis server\"\n self.send_packed_command(self.pack_command(*args))\n\n def can_read(self, timeout=0):\n \"Poll the socket to see if there's data that can be read.\"\n sock = self._sock\n if not sock:\n self.connect()\n sock = self._sock\n return self._parser.can_read() or \\\n bool(select([sock], [], [], timeout)[0])\n\n def read_response(self):\n \"Read the response from a previously sent command\"\n try:\n response = self._parser.read_response()\n except Exception as e:\n self.disconnect()\n raise e\n if isinstance(response, ResponseError):\n raise response\n return response\n\n def pack_command(self, *args):\n \"Pack a series of arguments into the Redis protocol\"\n output = []\n # the client might have included 1 or more literal arguments in\n # the command name, e.g., 'CONFIG GET'. The Redis server expects these\n # arguments to be sent separately, so split the first argument\n # manually. All of these arguements get wrapped in the Token class\n # to prevent them from being encoded.\n command = args[0]\n if ' ' in command:\n args = tuple(Token.get_token(s)\n for s in command.split()) + args[1:]\n else:\n args = (Token.get_token(command),) + args[1:]\n\n buff = SYM_EMPTY.join((SYM_STAR, str(len(args)).encode(), SYM_CRLF))\n\n buffer_cutoff = self._buffer_cutoff\n for arg in imap(self.encoder.encode, args):\n # to avoid large string mallocs, chunk the command into the\n # output list if we're sending large values\n if len(buff) > buffer_cutoff or len(arg) > buffer_cutoff:\n buff = SYM_EMPTY.join(\n (buff, SYM_DOLLAR, str(len(arg)).encode(), SYM_CRLF))\n output.append(buff)\n output.append(arg)\n buff = SYM_CRLF\n else:\n buff = SYM_EMPTY.join(\n (buff, SYM_DOLLAR, str(len(arg)).encode(),\n SYM_CRLF, arg, SYM_CRLF))\n output.append(buff)\n return output\n\n def pack_commands(self, commands):\n \"Pack multiple commands into the Redis protocol\"\n output = []\n pieces = []\n buffer_length = 0\n buffer_cutoff = self._buffer_cutoff\n\n for cmd in commands:\n for chunk in self.pack_command(*cmd):\n chunklen = len(chunk)\n if buffer_length > buffer_cutoff or chunklen > buffer_cutoff:\n output.append(SYM_EMPTY.join(pieces))\n buffer_length = 0\n pieces = []\n\n if chunklen > self._buffer_cutoff:\n output.append(chunk)\n else:\n pieces.append(chunk)\n buffer_length += chunklen\n\n if pieces:\n output.append(SYM_EMPTY.join(pieces))\n return output\n\n\nclass SSLConnection(Connection):\n description_format = \"SSLConnection<host=%(host)s,port=%(port)s,db=%(db)s>\"\n\n def __init__(self, ssl_keyfile=None, ssl_certfile=None,\n ssl_cert_reqs='required', ssl_ca_certs=None, **kwargs):\n if not ssl_available:\n raise RedisError(\"Python wasn't built with SSL support\")\n\n super(SSLConnection, self).__init__(**kwargs)\n\n self.keyfile = ssl_keyfile\n self.certfile = ssl_certfile\n if ssl_cert_reqs is None:\n ssl_cert_reqs = ssl.CERT_NONE\n elif isinstance(ssl_cert_reqs, basestring):\n CERT_REQS = {\n 'none': ssl.CERT_NONE,\n 'optional': ssl.CERT_OPTIONAL,\n 'required': ssl.CERT_REQUIRED\n }\n if ssl_cert_reqs not in CERT_REQS:\n raise RedisError(\n \"Invalid SSL Certificate Requirements Flag: %s\" %\n ssl_cert_reqs)\n ssl_cert_reqs = CERT_REQS[ssl_cert_reqs]\n self.cert_reqs = ssl_cert_reqs\n self.ca_certs = ssl_ca_certs\n\n def _connect(self):\n \"Wrap the socket with SSL support\"\n sock = super(SSLConnection, self)._connect()\n sock = 
ssl.wrap_socket(sock,\n cert_reqs=self.cert_reqs,\n keyfile=self.keyfile,\n certfile=self.certfile,\n ca_certs=self.ca_certs)\n return sock\n\n\nclass UnixDomainSocketConnection(Connection):\n description_format = \"UnixDomainSocketConnection<path=%(path)s,db=%(db)s>\"\n\n def __init__(self, path='', db=0, password=None,\n socket_timeout=None, encoding='utf-8',\n encoding_errors='strict', decode_responses=False,\n retry_on_timeout=False,\n parser_class=DefaultParser, socket_read_size=65536):\n self.pid = os.getpid()\n self.path = path\n self.db = db\n self.password = password\n self.socket_timeout = socket_timeout\n self.retry_on_timeout = retry_on_timeout\n self.encoder = Encoder(encoding, encoding_errors, decode_responses)\n self._sock = None\n self._parser = parser_class(socket_read_size=socket_read_size)\n self._description_args = {\n 'path': self.path,\n 'db': self.db,\n }\n self._connect_callbacks = []\n self._buffer_cutoff = 6000\n\n def _connect(self):\n \"Create a Unix domain socket connection\"\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n sock.settimeout(self.socket_timeout)\n sock.connect(self.path)\n return sock\n\n def _error_message(self, exception):\n # args for socket.error can either be (errno, \"message\")\n # or just \"message\"\n if len(exception.args) == 1:\n return \"Error connecting to unix socket: %s. %s.\" % \\\n (self.path, exception.args[0])\n else:\n return \"Error %s connecting to unix socket: %s. %s.\" % \\\n (exception.args[0], self.path, exception.args[1])\n\n\nFALSE_STRINGS = ('0', 'F', 'FALSE', 'N', 'NO')\n\n\ndef to_bool(value):\n if value is None or value == '':\n return None\n if isinstance(value, basestring) and value.upper() in FALSE_STRINGS:\n return False\n return bool(value)\n\n\nURL_QUERY_ARGUMENT_PARSERS = {\n 'socket_timeout': float,\n 'socket_connect_timeout': float,\n 'socket_keepalive': to_bool,\n 'retry_on_timeout': to_bool,\n 'max_connections': int,\n}\n\n\nclass ConnectionPool(object):\n \"Generic connection pool\"\n @classmethod\n def from_url(cls, url, db=None, decode_components=False, **kwargs):\n \"\"\"\n Return a connection pool configured from the given URL.\n\n For example::\n\n redis://[:password]@localhost:6379/0\n rediss://[:password]@localhost:6379/0\n unix://[:password]@/path/to/socket.sock?db=0\n\n Three URL schemes are supported:\n\n - ```redis://``\n <https://www.iana.org/assignments/uri-schemes/prov/redis>`_ creates a\n normal TCP socket connection\n - ```rediss://``\n <https://www.iana.org/assignments/uri-schemes/prov/rediss>`_ creates\n a SSL wrapped TCP socket connection\n - ``unix://`` creates a Unix Domain Socket connection\n\n There are several ways to specify a database number. The parse function\n will return the first specified option:\n 1. A ``db`` querystring option, e.g. redis://localhost?db=0\n 2. If using the redis:// scheme, the path argument of the url, e.g.\n redis://localhost/0\n 3. The ``db`` argument to this function.\n\n If none of these options are specified, db=0 is used.\n\n The ``decode_components`` argument allows this function to work with\n percent-encoded URLs. If this argument is set to ``True`` all ``%xx``\n escapes will be replaced by their single-character equivalents after\n the URL has been parsed. This only applies to the ``hostname``,\n ``path``, and ``password`` components.\n\n Any additional querystring arguments and keyword arguments will be\n passed along to the ConnectionPool class's initializer. 
The querystring\n arguments ``socket_connect_timeout`` and ``socket_timeout`` if supplied\n are parsed as float values. The arguments ``socket_keepalive`` and\n ``retry_on_timeout`` are parsed to boolean values that accept\n True/False, Yes/No values to indicate state. Invalid types cause a\n ``UserWarning`` to be raised. In the case of conflicting arguments,\n querystring arguments always win.\n\n \"\"\"\n url = urlparse(url)\n url_options = {}\n\n for name, value in iteritems(parse_qs(url.query)):\n if value and len(value) > 0:\n parser = URL_QUERY_ARGUMENT_PARSERS.get(name)\n if parser:\n try:\n url_options[name] = parser(value[0])\n except (TypeError, ValueError):\n warnings.warn(UserWarning(\n \"Invalid value for `%s` in connection URL.\" % name\n ))\n else:\n url_options[name] = value[0]\n\n if decode_components:\n password = unquote(url.password) if url.password else None\n path = unquote(url.path) if url.path else None\n hostname = unquote(url.hostname) if url.hostname else None\n else:\n password = url.password\n path = url.path\n hostname = url.hostname\n\n # We only support redis:// and unix:// schemes.\n if url.scheme == 'unix':\n url_options.update({\n 'password': password,\n 'path': path,\n 'connection_class': UnixDomainSocketConnection,\n })\n\n else:\n url_options.update({\n 'host': hostname,\n 'port': int(url.port or 6379),\n 'password': password,\n })\n\n # If there's a path argument, use it as the db argument if a\n # querystring value wasn't specified\n if 'db' not in url_options and path:\n try:\n url_options['db'] = int(path.replace('/', ''))\n except (AttributeError, ValueError):\n pass\n\n if url.scheme == 'rediss':\n url_options['connection_class'] = SSLConnection\n\n # last shot at the db value\n url_options['db'] = int(url_options.get('db', db or 0))\n\n # update the arguments from the URL values\n kwargs.update(url_options)\n\n # backwards compatability\n if 'charset' in kwargs:\n warnings.warn(DeprecationWarning(\n '\"charset\" is deprecated. Use \"encoding\" instead'))\n kwargs['encoding'] = kwargs.pop('charset')\n if 'errors' in kwargs:\n warnings.warn(DeprecationWarning(\n '\"errors\" is deprecated. Use \"encoding_errors\" instead'))\n kwargs['encoding_errors'] = kwargs.pop('errors')\n\n return cls(**kwargs)\n\n def __init__(self, connection_class=Connection, max_connections=None,\n **connection_kwargs):\n \"\"\"\n Create a connection pool. If max_connections is set, then this\n object raises redis.ConnectionError when the pool's limit is reached.\n\n By default, TCP connections are created unless connection_class is\n specified. 
Use redis.UnixDomainSocketConnection for unix sockets.\n\n Any additional keyword arguments are passed to the constructor of\n connection_class.\n \"\"\"\n max_connections = max_connections or 2 ** 31\n if not isinstance(max_connections, (int, long)) or max_connections < 0:\n raise ValueError('\"max_connections\" must be a positive integer')\n\n self.connection_class = connection_class\n self.connection_kwargs = connection_kwargs\n self.max_connections = max_connections\n\n self.reset()\n\n def __repr__(self):\n return \"%s<%s>\" % (\n type(self).__name__,\n self.connection_class.description_format % self.connection_kwargs,\n )\n\n def reset(self):\n self.pid = os.getpid()\n self._created_connections = 0\n self._available_connections = []\n self._in_use_connections = set()\n self._check_lock = threading.Lock()\n\n def _checkpid(self):\n if self.pid != os.getpid():\n with self._check_lock:\n if self.pid == os.getpid():\n # another thread already did the work while we waited\n # on the lock.\n return\n self.disconnect()\n self.reset()\n\n def get_connection(self, command_name, *keys, **options):\n \"Get a connection from the pool\"\n self._checkpid()\n try:\n connection = self._available_connections.pop()\n except IndexError:\n connection = self.make_connection()\n self._in_use_connections.add(connection)\n return connection\n\n def get_encoder(self):\n \"Return an encoder based on encoding settings\"\n kwargs = self.connection_kwargs\n return Encoder(\n encoding=kwargs.get('encoding', 'utf-8'),\n encoding_errors=kwargs.get('encoding_errors', 'strict'),\n decode_responses=kwargs.get('decode_responses', False)\n )\n\n def make_connection(self):\n \"Create a new connection\"\n if self._created_connections >= self.max_connections:\n raise ConnectionError(\"Too many connections\")\n self._created_connections += 1\n return self.connection_class(**self.connection_kwargs)\n\n def release(self, connection):\n \"Releases the connection back to the pool\"\n self._checkpid()\n if connection.pid != self.pid:\n return\n self._in_use_connections.remove(connection)\n self._available_connections.append(connection)\n\n def disconnect(self):\n \"Disconnects all connections in the pool\"\n all_conns = chain(self._available_connections,\n self._in_use_connections)\n for connection in all_conns:\n connection.disconnect()\n\n\nclass BlockingConnectionPool(ConnectionPool):\n \"\"\"\n Thread-safe blocking connection pool::\n\n >>> from redis.client import Redis\n >>> client = Redis(connection_pool=BlockingConnectionPool())\n\n It performs the same function as the default\n ``:py:class: ~redis.connection.ConnectionPool`` implementation, in that,\n it maintains a pool of reusable connections that can be shared by\n multiple redis clients (safely across threads if required).\n\n The difference is that, in the event that a client tries to get a\n connection from the pool when all of connections are in use, rather than\n raising a ``:py:class: ~redis.exceptions.ConnectionError`` (as the default\n ``:py:class: ~redis.connection.ConnectionPool`` implementation does), it\n makes the client wait (\"blocks\") for a specified number of seconds until\n a connection becomes available.\n\n Use ``max_connections`` to increase / decrease the pool size::\n\n >>> pool = BlockingConnectionPool(max_connections=10)\n\n Use ``timeout`` to tell it either how many seconds to wait for a connection\n to become available, or to block forever:\n\n # Block forever.\n >>> pool = BlockingConnectionPool(timeout=None)\n\n # Raise a 
``ConnectionError`` after five seconds if a connection is\n # not available.\n >>> pool = BlockingConnectionPool(timeout=5)\n \"\"\"\n def __init__(self, max_connections=50, timeout=20,\n connection_class=Connection, queue_class=LifoQueue,\n **connection_kwargs):\n\n self.queue_class = queue_class\n self.timeout = timeout\n super(BlockingConnectionPool, self).__init__(\n connection_class=connection_class,\n max_connections=max_connections,\n **connection_kwargs)\n\n def reset(self):\n self.pid = os.getpid()\n self._check_lock = threading.Lock()\n\n # Create and fill up a thread safe queue with ``None`` values.\n self.pool = self.queue_class(self.max_connections)\n while True:\n try:\n self.pool.put_nowait(None)\n except Full:\n break\n\n # Keep a list of actual connection instances so that we can\n # disconnect them later.\n self._connections = []\n\n def make_connection(self):\n \"Make a fresh connection.\"\n connection = self.connection_class(**self.connection_kwargs)\n self._connections.append(connection)\n return connection\n\n def get_connection(self, command_name, *keys, **options):\n \"\"\"\n Get a connection, blocking for ``self.timeout`` until a connection\n is available from the pool.\n\n If the connection returned is ``None`` then creates a new connection.\n Because we use a last-in first-out queue, the existing connections\n (having been returned to the pool after the initial ``None`` values\n were added) will be returned before ``None`` values. This means we only\n create new connections when we need to, i.e.: the actual number of\n connections will only increase in response to demand.\n \"\"\"\n # Make sure we haven't changed process.\n self._checkpid()\n\n # Try and get a connection from the pool. If one isn't available within\n # self.timeout then raise a ``ConnectionError``.\n connection = None\n try:\n connection = self.pool.get(block=True, timeout=self.timeout)\n except Empty:\n # Note that this is not caught by the redis client and will be\n # raised unless handled by application code. If you want never to\n raise ConnectionError(\"No connection available.\")\n\n # If the ``connection`` is actually ``None`` then that's a cue to make\n # a new connection to add to the pool.\n if connection is None:\n connection = self.make_connection()\n\n return connection\n\n def release(self, connection):\n \"Releases the connection back to the pool.\"\n # Make sure we haven't changed process.\n self._checkpid()\n if connection.pid != self.pid:\n return\n\n # Put the connection back into the pool.\n try:\n self.pool.put_nowait(connection)\n except Full:\n # perhaps the pool has been reset() after a fork? 
regardless,\n # we don't want this connection\n pass\n\n def disconnect(self):\n \"Disconnects all connections in the pool.\"\n for connection in self._connections:\n connection.disconnect()\n", "path": "redis/connection.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nfrom distutils.version import StrictVersion\nfrom itertools import chain\nimport io\nimport os\nimport socket\nimport sys\nimport threading\nimport warnings\n\ntry:\n import ssl\n ssl_available = True\nexcept ImportError:\n ssl_available = False\n\nfrom redis._compat import (xrange, imap, byte_to_chr, unicode, long,\n nativestr, basestring, iteritems,\n LifoQueue, Empty, Full, urlparse, parse_qs,\n recv, recv_into, select, unquote)\nfrom redis.exceptions import (\n DataError,\n RedisError,\n ConnectionError,\n TimeoutError,\n BusyLoadingError,\n ResponseError,\n InvalidResponse,\n AuthenticationError,\n NoScriptError,\n ExecAbortError,\n ReadOnlyError\n)\nfrom redis.utils import HIREDIS_AVAILABLE\nif HIREDIS_AVAILABLE:\n import hiredis\n\n hiredis_version = StrictVersion(hiredis.__version__)\n HIREDIS_SUPPORTS_CALLABLE_ERRORS = \\\n hiredis_version >= StrictVersion('0.1.3')\n HIREDIS_SUPPORTS_BYTE_BUFFER = \\\n hiredis_version >= StrictVersion('0.1.4')\n\n if not HIREDIS_SUPPORTS_BYTE_BUFFER:\n msg = (\"redis-py works best with hiredis >= 0.1.4. You're running \"\n \"hiredis %s. Please consider upgrading.\" % hiredis.__version__)\n warnings.warn(msg)\n\n HIREDIS_USE_BYTE_BUFFER = True\n # only use byte buffer if hiredis supports it\n if not HIREDIS_SUPPORTS_BYTE_BUFFER:\n HIREDIS_USE_BYTE_BUFFER = False\n\nSYM_STAR = b'*'\nSYM_DOLLAR = b'$'\nSYM_CRLF = b'\\r\\n'\nSYM_EMPTY = b''\n\nSERVER_CLOSED_CONNECTION_ERROR = \"Connection closed by server.\"\n\n\nclass Token(object):\n \"\"\"\n Literal strings in Redis commands, such as the command names and any\n hard-coded arguments are wrapped in this class so we know not to apply\n and encoding rules on them.\n \"\"\"\n\n _cache = {}\n\n @classmethod\n def get_token(cls, value):\n \"Gets a cached token object or creates a new one if not already cached\"\n\n # Use try/except because after running for a short time most tokens\n # should already be cached\n try:\n return cls._cache[value]\n except KeyError:\n token = Token(value)\n cls._cache[value] = token\n return token\n\n def __init__(self, value):\n if isinstance(value, Token):\n value = value.value\n self.value = value\n self.encoded_value = value.encode()\n\n def __repr__(self):\n return self.value\n\n def __str__(self):\n return self.value\n\n\nclass Encoder(object):\n \"Encode strings to bytes and decode bytes to strings\"\n\n def __init__(self, encoding, encoding_errors, decode_responses):\n self.encoding = encoding\n self.encoding_errors = encoding_errors\n self.decode_responses = decode_responses\n\n def encode(self, value):\n \"Return a bytestring representation of the value\"\n if isinstance(value, Token):\n return value.encoded_value\n elif isinstance(value, bytes):\n return value\n elif isinstance(value, bool):\n # special case bool since it is a subclass of int\n raise DataError(\"Invalid input of type: 'bool'. Convert to a \"\n \"byte, string or number first.\")\n elif isinstance(value, float):\n value = repr(value).encode()\n elif isinstance(value, (int, long)):\n # python 2 repr() on longs is '123L', so use str() instead\n value = str(value).encode()\n elif not isinstance(value, basestring):\n # a value we don't know how to deal with. 
throw an error\n typename = type(value).__name__\n raise DataError(\"Invalid input of type: '%s'. Convert to a \"\n \"byte, string or number first.\" % typename)\n if isinstance(value, unicode):\n value = value.encode(self.encoding, self.encoding_errors)\n return value\n\n def decode(self, value, force=False):\n \"Return a unicode string from the byte representation\"\n if (self.decode_responses or force) and isinstance(value, bytes):\n value = value.decode(self.encoding, self.encoding_errors)\n return value\n\n\nclass BaseParser(object):\n EXCEPTION_CLASSES = {\n 'ERR': {\n 'max number of clients reached': ConnectionError\n },\n 'EXECABORT': ExecAbortError,\n 'LOADING': BusyLoadingError,\n 'NOSCRIPT': NoScriptError,\n 'READONLY': ReadOnlyError,\n }\n\n def parse_error(self, response):\n \"Parse an error response\"\n error_code = response.split(' ')[0]\n if error_code in self.EXCEPTION_CLASSES:\n response = response[len(error_code) + 1:]\n exception_class = self.EXCEPTION_CLASSES[error_code]\n if isinstance(exception_class, dict):\n exception_class = exception_class.get(response, ResponseError)\n return exception_class(response)\n return ResponseError(response)\n\n\nclass SocketBuffer(object):\n def __init__(self, socket, socket_read_size):\n self._sock = socket\n self.socket_read_size = socket_read_size\n self._buffer = io.BytesIO()\n # number of bytes written to the buffer from the socket\n self.bytes_written = 0\n # number of bytes read from the buffer\n self.bytes_read = 0\n\n @property\n def length(self):\n return self.bytes_written - self.bytes_read\n\n def _read_from_socket(self, length=None):\n socket_read_size = self.socket_read_size\n buf = self._buffer\n buf.seek(self.bytes_written)\n marker = 0\n\n try:\n while True:\n data = recv(self._sock, socket_read_size)\n # an empty string indicates the server shutdown the socket\n if isinstance(data, bytes) and len(data) == 0:\n raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)\n buf.write(data)\n data_length = len(data)\n self.bytes_written += data_length\n marker += data_length\n\n if length is not None and length > marker:\n continue\n break\n except socket.timeout:\n raise TimeoutError(\"Timeout reading from socket\")\n except socket.error:\n e = sys.exc_info()[1]\n raise ConnectionError(\"Error while reading from socket: %s\" %\n (e.args,))\n\n def read(self, length):\n length = length + 2 # make sure to read the \\r\\n terminator\n # make sure we've read enough data from the socket\n if length > self.length:\n self._read_from_socket(length - self.length)\n\n self._buffer.seek(self.bytes_read)\n data = self._buffer.read(length)\n self.bytes_read += len(data)\n\n # purge the buffer when we've consumed it all so it doesn't\n # grow forever\n if self.bytes_read == self.bytes_written:\n self.purge()\n\n return data[:-2]\n\n def readline(self):\n buf = self._buffer\n buf.seek(self.bytes_read)\n data = buf.readline()\n while not data.endswith(SYM_CRLF):\n # there's more data in the socket that we need\n self._read_from_socket()\n buf.seek(self.bytes_read)\n data = buf.readline()\n\n self.bytes_read += len(data)\n\n # purge the buffer when we've consumed it all so it doesn't\n # grow forever\n if self.bytes_read == self.bytes_written:\n self.purge()\n\n return data[:-2]\n\n def purge(self):\n self._buffer.seek(0)\n self._buffer.truncate()\n self.bytes_written = 0\n self.bytes_read = 0\n\n def close(self):\n try:\n self.purge()\n self._buffer.close()\n except Exception:\n # issue #633 suggests the purge/close somehow raised a\n # 
BadFileDescriptor error. Perhaps the client ran out of\n # memory or something else? It's probably OK to ignore\n # any error being raised from purge/close since we're\n # removing the reference to the instance below.\n pass\n self._buffer = None\n self._sock = None\n\n\nclass PythonParser(BaseParser):\n \"Plain Python parsing class\"\n def __init__(self, socket_read_size):\n self.socket_read_size = socket_read_size\n self.encoder = None\n self._sock = None\n self._buffer = None\n\n def __del__(self):\n try:\n self.on_disconnect()\n except Exception:\n pass\n\n def on_connect(self, connection):\n \"Called when the socket connects\"\n self._sock = connection._sock\n self._buffer = SocketBuffer(self._sock, self.socket_read_size)\n self.encoder = connection.encoder\n\n def on_disconnect(self):\n \"Called when the socket disconnects\"\n self._sock = None\n if self._buffer is not None:\n self._buffer.close()\n self._buffer = None\n self.encoder = None\n\n def can_read(self):\n return self._buffer and bool(self._buffer.length)\n\n def read_response(self):\n response = self._buffer.readline()\n if not response:\n raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)\n\n byte, response = byte_to_chr(response[0]), response[1:]\n\n if byte not in ('-', '+', ':', '$', '*'):\n raise InvalidResponse(\"Protocol Error: %s, %s\" %\n (str(byte), str(response)))\n\n # server returned an error\n if byte == '-':\n response = nativestr(response)\n error = self.parse_error(response)\n # if the error is a ConnectionError, raise immediately so the user\n # is notified\n if isinstance(error, ConnectionError):\n raise error\n # otherwise, we're dealing with a ResponseError that might belong\n # inside a pipeline response. the connection's read_response()\n # and/or the pipeline's execute() will raise this error if\n # necessary, so just return the exception instance here.\n return error\n # single value\n elif byte == '+':\n pass\n # int value\n elif byte == ':':\n response = long(response)\n # bulk response\n elif byte == '$':\n length = int(response)\n if length == -1:\n return None\n response = self._buffer.read(length)\n # multi-bulk response\n elif byte == '*':\n length = int(response)\n if length == -1:\n return None\n response = [self.read_response() for i in xrange(length)]\n if isinstance(response, bytes):\n response = self.encoder.decode(response)\n return response\n\n\nclass HiredisParser(BaseParser):\n \"Parser class for connections using Hiredis\"\n def __init__(self, socket_read_size):\n if not HIREDIS_AVAILABLE:\n raise RedisError(\"Hiredis is not installed\")\n self.socket_read_size = socket_read_size\n\n if HIREDIS_USE_BYTE_BUFFER:\n self._buffer = bytearray(socket_read_size)\n\n def __del__(self):\n try:\n self.on_disconnect()\n except Exception:\n pass\n\n def on_connect(self, connection):\n self._sock = connection._sock\n kwargs = {\n 'protocolError': InvalidResponse,\n 'replyError': self.parse_error,\n }\n\n # hiredis < 0.1.3 doesn't support functions that create exceptions\n if not HIREDIS_SUPPORTS_CALLABLE_ERRORS:\n kwargs['replyError'] = ResponseError\n\n if connection.encoder.decode_responses:\n kwargs['encoding'] = connection.encoder.encoding\n self._reader = hiredis.Reader(**kwargs)\n self._next_response = False\n\n def on_disconnect(self):\n self._sock = None\n self._reader = None\n self._next_response = False\n\n def can_read(self):\n if not self._reader:\n raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)\n\n if self._next_response is False:\n self._next_response = 
self._reader.gets()\n return self._next_response is not False\n\n def read_response(self):\n if not self._reader:\n raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)\n\n # _next_response might be cached from a can_read() call\n if self._next_response is not False:\n response = self._next_response\n self._next_response = False\n return response\n\n response = self._reader.gets()\n socket_read_size = self.socket_read_size\n while response is False:\n try:\n if HIREDIS_USE_BYTE_BUFFER:\n bufflen = recv_into(self._sock, self._buffer)\n if bufflen == 0:\n raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)\n else:\n buffer = recv(self._sock, socket_read_size)\n # an empty string indicates the server shutdown the socket\n if not isinstance(buffer, bytes) or len(buffer) == 0:\n raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)\n except socket.timeout:\n raise TimeoutError(\"Timeout reading from socket\")\n except socket.error:\n e = sys.exc_info()[1]\n raise ConnectionError(\"Error while reading from socket: %s\" %\n (e.args,))\n if HIREDIS_USE_BYTE_BUFFER:\n self._reader.feed(self._buffer, 0, bufflen)\n else:\n self._reader.feed(buffer)\n response = self._reader.gets()\n # if an older version of hiredis is installed, we need to attempt\n # to convert ResponseErrors to their appropriate types.\n if not HIREDIS_SUPPORTS_CALLABLE_ERRORS:\n if isinstance(response, ResponseError):\n response = self.parse_error(response.args[0])\n elif isinstance(response, list) and response and \\\n isinstance(response[0], ResponseError):\n response[0] = self.parse_error(response[0].args[0])\n # if the response is a ConnectionError or the response is a list and\n # the first item is a ConnectionError, raise it as something bad\n # happened\n if isinstance(response, ConnectionError):\n raise response\n elif isinstance(response, list) and response and \\\n isinstance(response[0], ConnectionError):\n raise response[0]\n return response\n\n\nif HIREDIS_AVAILABLE:\n DefaultParser = HiredisParser\nelse:\n DefaultParser = PythonParser\n\n\nclass Connection(object):\n \"Manages TCP communication to and from a Redis server\"\n description_format = \"Connection<host=%(host)s,port=%(port)s,db=%(db)s>\"\n\n def __init__(self, host='localhost', port=6379, db=0, password=None,\n socket_timeout=None, socket_connect_timeout=None,\n socket_keepalive=False, socket_keepalive_options=None,\n socket_type=0, retry_on_timeout=False, encoding='utf-8',\n encoding_errors='strict', decode_responses=False,\n parser_class=DefaultParser, socket_read_size=65536):\n self.pid = os.getpid()\n self.host = host\n self.port = int(port)\n self.db = db\n self.password = password\n self.socket_timeout = socket_timeout\n self.socket_connect_timeout = socket_connect_timeout or socket_timeout\n self.socket_keepalive = socket_keepalive\n self.socket_keepalive_options = socket_keepalive_options or {}\n self.socket_type = socket_type\n self.retry_on_timeout = retry_on_timeout\n self.encoder = Encoder(encoding, encoding_errors, decode_responses)\n self._sock = None\n self._parser = parser_class(socket_read_size=socket_read_size)\n self._description_args = {\n 'host': self.host,\n 'port': self.port,\n 'db': self.db,\n }\n self._connect_callbacks = []\n self._buffer_cutoff = 6000\n\n def __repr__(self):\n return self.description_format % self._description_args\n\n def __del__(self):\n try:\n self.disconnect()\n except Exception:\n pass\n\n def register_connect_callback(self, callback):\n self._connect_callbacks.append(callback)\n\n def 
clear_connect_callbacks(self):\n self._connect_callbacks = []\n\n def connect(self):\n \"Connects to the Redis server if not already connected\"\n if self._sock:\n return\n try:\n sock = self._connect()\n except socket.timeout:\n raise TimeoutError(\"Timeout connecting to server\")\n except socket.error:\n e = sys.exc_info()[1]\n raise ConnectionError(self._error_message(e))\n\n self._sock = sock\n try:\n self.on_connect()\n except RedisError:\n # clean up after any error in on_connect\n self.disconnect()\n raise\n\n # run any user callbacks. right now the only internal callback\n # is for pubsub channel/pattern resubscription\n for callback in self._connect_callbacks:\n callback(self)\n\n def _connect(self):\n \"Create a TCP socket connection\"\n # we want to mimic what socket.create_connection does to support\n # ipv4/ipv6, but we want to set options prior to calling\n # socket.connect()\n err = None\n for res in socket.getaddrinfo(self.host, self.port, self.socket_type,\n socket.SOCK_STREAM):\n family, socktype, proto, canonname, socket_address = res\n sock = None\n try:\n sock = socket.socket(family, socktype, proto)\n # TCP_NODELAY\n sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)\n\n # TCP_KEEPALIVE\n if self.socket_keepalive:\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)\n for k, v in iteritems(self.socket_keepalive_options):\n sock.setsockopt(socket.SOL_TCP, k, v)\n\n # set the socket_connect_timeout before we connect\n sock.settimeout(self.socket_connect_timeout)\n\n # connect\n sock.connect(socket_address)\n\n # set the socket_timeout now that we're connected\n sock.settimeout(self.socket_timeout)\n return sock\n\n except socket.error as _:\n err = _\n if sock is not None:\n sock.close()\n\n if err is not None:\n raise err\n raise socket.error(\"socket.getaddrinfo returned an empty list\")\n\n def _error_message(self, exception):\n # args for socket.error can either be (errno, \"message\")\n # or just \"message\"\n if len(exception.args) == 1:\n return \"Error connecting to %s:%s. %s.\" % \\\n (self.host, self.port, exception.args[0])\n else:\n return \"Error %s connecting to %s:%s. 
%s.\" % \\\n (exception.args[0], self.host, self.port, exception.args[1])\n\n def on_connect(self):\n \"Initialize the connection, authenticate and select a database\"\n self._parser.on_connect(self)\n\n # if a password is specified, authenticate\n if self.password:\n self.send_command('AUTH', self.password)\n if nativestr(self.read_response()) != 'OK':\n raise AuthenticationError('Invalid Password')\n\n # if a database is specified, switch to it\n if self.db:\n self.send_command('SELECT', self.db)\n if nativestr(self.read_response()) != 'OK':\n raise ConnectionError('Invalid Database')\n\n def disconnect(self):\n \"Disconnects from the Redis server\"\n self._parser.on_disconnect()\n if self._sock is None:\n return\n try:\n self._sock.shutdown(socket.SHUT_RDWR)\n self._sock.close()\n except socket.error:\n pass\n self._sock = None\n\n def send_packed_command(self, command):\n \"Send an already packed command to the Redis server\"\n if not self._sock:\n self.connect()\n try:\n if isinstance(command, str):\n command = [command]\n for item in command:\n self._sock.sendall(item)\n except socket.timeout:\n self.disconnect()\n raise TimeoutError(\"Timeout writing to socket\")\n except socket.error:\n e = sys.exc_info()[1]\n self.disconnect()\n if len(e.args) == 1:\n errno, errmsg = 'UNKNOWN', e.args[0]\n else:\n errno = e.args[0]\n errmsg = e.args[1]\n raise ConnectionError(\"Error %s while writing to socket. %s.\" %\n (errno, errmsg))\n except Exception as e:\n self.disconnect()\n raise e\n\n def send_command(self, *args):\n \"Pack and send a command to the Redis server\"\n self.send_packed_command(self.pack_command(*args))\n\n def can_read(self, timeout=0):\n \"Poll the socket to see if there's data that can be read.\"\n sock = self._sock\n if not sock:\n self.connect()\n sock = self._sock\n return self._parser.can_read() or \\\n bool(select([sock], [], [], timeout)[0])\n\n def read_response(self):\n \"Read the response from a previously sent command\"\n try:\n response = self._parser.read_response()\n except Exception as e:\n self.disconnect()\n raise e\n if isinstance(response, ResponseError):\n raise response\n return response\n\n def pack_command(self, *args):\n \"Pack a series of arguments into the Redis protocol\"\n output = []\n # the client might have included 1 or more literal arguments in\n # the command name, e.g., 'CONFIG GET'. The Redis server expects these\n # arguments to be sent separately, so split the first argument\n # manually. 
All of these arguements get wrapped in the Token class\n # to prevent them from being encoded.\n command = args[0]\n if ' ' in command:\n args = tuple(Token.get_token(s)\n for s in command.split()) + args[1:]\n else:\n args = (Token.get_token(command),) + args[1:]\n\n buff = SYM_EMPTY.join((SYM_STAR, str(len(args)).encode(), SYM_CRLF))\n\n buffer_cutoff = self._buffer_cutoff\n for arg in imap(self.encoder.encode, args):\n # to avoid large string mallocs, chunk the command into the\n # output list if we're sending large values\n if len(buff) > buffer_cutoff or len(arg) > buffer_cutoff:\n buff = SYM_EMPTY.join(\n (buff, SYM_DOLLAR, str(len(arg)).encode(), SYM_CRLF))\n output.append(buff)\n output.append(arg)\n buff = SYM_CRLF\n else:\n buff = SYM_EMPTY.join(\n (buff, SYM_DOLLAR, str(len(arg)).encode(),\n SYM_CRLF, arg, SYM_CRLF))\n output.append(buff)\n return output\n\n def pack_commands(self, commands):\n \"Pack multiple commands into the Redis protocol\"\n output = []\n pieces = []\n buffer_length = 0\n buffer_cutoff = self._buffer_cutoff\n\n for cmd in commands:\n for chunk in self.pack_command(*cmd):\n chunklen = len(chunk)\n if buffer_length > buffer_cutoff or chunklen > buffer_cutoff:\n output.append(SYM_EMPTY.join(pieces))\n buffer_length = 0\n pieces = []\n\n if chunklen > self._buffer_cutoff:\n output.append(chunk)\n else:\n pieces.append(chunk)\n buffer_length += chunklen\n\n if pieces:\n output.append(SYM_EMPTY.join(pieces))\n return output\n\n\nclass SSLConnection(Connection):\n description_format = \"SSLConnection<host=%(host)s,port=%(port)s,db=%(db)s>\"\n\n def __init__(self, ssl_keyfile=None, ssl_certfile=None,\n ssl_cert_reqs='required', ssl_ca_certs=None, **kwargs):\n if not ssl_available:\n raise RedisError(\"Python wasn't built with SSL support\")\n\n super(SSLConnection, self).__init__(**kwargs)\n\n self.keyfile = ssl_keyfile\n self.certfile = ssl_certfile\n if ssl_cert_reqs is None:\n ssl_cert_reqs = ssl.CERT_NONE\n elif isinstance(ssl_cert_reqs, basestring):\n CERT_REQS = {\n 'none': ssl.CERT_NONE,\n 'optional': ssl.CERT_OPTIONAL,\n 'required': ssl.CERT_REQUIRED\n }\n if ssl_cert_reqs not in CERT_REQS:\n raise RedisError(\n \"Invalid SSL Certificate Requirements Flag: %s\" %\n ssl_cert_reqs)\n ssl_cert_reqs = CERT_REQS[ssl_cert_reqs]\n self.cert_reqs = ssl_cert_reqs\n self.ca_certs = ssl_ca_certs\n\n def _connect(self):\n \"Wrap the socket with SSL support\"\n sock = super(SSLConnection, self)._connect()\n sock = ssl.wrap_socket(sock,\n cert_reqs=self.cert_reqs,\n keyfile=self.keyfile,\n certfile=self.certfile,\n ca_certs=self.ca_certs)\n return sock\n\n\nclass UnixDomainSocketConnection(Connection):\n description_format = \"UnixDomainSocketConnection<path=%(path)s,db=%(db)s>\"\n\n def __init__(self, path='', db=0, password=None,\n socket_timeout=None, encoding='utf-8',\n encoding_errors='strict', decode_responses=False,\n retry_on_timeout=False,\n parser_class=DefaultParser, socket_read_size=65536):\n self.pid = os.getpid()\n self.path = path\n self.db = db\n self.password = password\n self.socket_timeout = socket_timeout\n self.retry_on_timeout = retry_on_timeout\n self.encoder = Encoder(encoding, encoding_errors, decode_responses)\n self._sock = None\n self._parser = parser_class(socket_read_size=socket_read_size)\n self._description_args = {\n 'path': self.path,\n 'db': self.db,\n }\n self._connect_callbacks = []\n self._buffer_cutoff = 6000\n\n def _connect(self):\n \"Create a Unix domain socket connection\"\n sock = socket.socket(socket.AF_UNIX, 
socket.SOCK_STREAM)\n sock.settimeout(self.socket_timeout)\n sock.connect(self.path)\n return sock\n\n def _error_message(self, exception):\n # args for socket.error can either be (errno, \"message\")\n # or just \"message\"\n if len(exception.args) == 1:\n return \"Error connecting to unix socket: %s. %s.\" % \\\n (self.path, exception.args[0])\n else:\n return \"Error %s connecting to unix socket: %s. %s.\" % \\\n (exception.args[0], self.path, exception.args[1])\n\n\nFALSE_STRINGS = ('0', 'F', 'FALSE', 'N', 'NO')\n\n\ndef to_bool(value):\n if value is None or value == '':\n return None\n if isinstance(value, basestring) and value.upper() in FALSE_STRINGS:\n return False\n return bool(value)\n\n\nURL_QUERY_ARGUMENT_PARSERS = {\n 'socket_timeout': float,\n 'socket_connect_timeout': float,\n 'socket_keepalive': to_bool,\n 'retry_on_timeout': to_bool,\n 'max_connections': int,\n}\n\n\nclass ConnectionPool(object):\n \"Generic connection pool\"\n @classmethod\n def from_url(cls, url, db=None, decode_components=False, **kwargs):\n \"\"\"\n Return a connection pool configured from the given URL.\n\n For example::\n\n redis://[:password]@localhost:6379/0\n rediss://[:password]@localhost:6379/0\n unix://[:password]@/path/to/socket.sock?db=0\n\n Three URL schemes are supported:\n\n - ```redis://``\n <https://www.iana.org/assignments/uri-schemes/prov/redis>`_ creates a\n normal TCP socket connection\n - ```rediss://``\n <https://www.iana.org/assignments/uri-schemes/prov/rediss>`_ creates\n a SSL wrapped TCP socket connection\n - ``unix://`` creates a Unix Domain Socket connection\n\n There are several ways to specify a database number. The parse function\n will return the first specified option:\n 1. A ``db`` querystring option, e.g. redis://localhost?db=0\n 2. If using the redis:// scheme, the path argument of the url, e.g.\n redis://localhost/0\n 3. The ``db`` argument to this function.\n\n If none of these options are specified, db=0 is used.\n\n The ``decode_components`` argument allows this function to work with\n percent-encoded URLs. If this argument is set to ``True`` all ``%xx``\n escapes will be replaced by their single-character equivalents after\n the URL has been parsed. This only applies to the ``hostname``,\n ``path``, and ``password`` components.\n\n Any additional querystring arguments and keyword arguments will be\n passed along to the ConnectionPool class's initializer. The querystring\n arguments ``socket_connect_timeout`` and ``socket_timeout`` if supplied\n are parsed as float values. The arguments ``socket_keepalive`` and\n ``retry_on_timeout`` are parsed to boolean values that accept\n True/False, Yes/No values to indicate state. Invalid types cause a\n ``UserWarning`` to be raised. 
In the case of conflicting arguments,\n querystring arguments always win.\n\n \"\"\"\n url = urlparse(url)\n url_options = {}\n\n for name, value in iteritems(parse_qs(url.query)):\n if value and len(value) > 0:\n parser = URL_QUERY_ARGUMENT_PARSERS.get(name)\n if parser:\n try:\n url_options[name] = parser(value[0])\n except (TypeError, ValueError):\n warnings.warn(UserWarning(\n \"Invalid value for `%s` in connection URL.\" % name\n ))\n else:\n url_options[name] = value[0]\n\n if decode_components:\n password = unquote(url.password) if url.password else None\n path = unquote(url.path) if url.path else None\n hostname = unquote(url.hostname) if url.hostname else None\n else:\n password = url.password\n path = url.path\n hostname = url.hostname\n\n # We only support redis:// and unix:// schemes.\n if url.scheme == 'unix':\n url_options.update({\n 'password': password,\n 'path': path,\n 'connection_class': UnixDomainSocketConnection,\n })\n\n else:\n url_options.update({\n 'host': hostname,\n 'port': int(url.port or 6379),\n 'password': password,\n })\n\n # If there's a path argument, use it as the db argument if a\n # querystring value wasn't specified\n if 'db' not in url_options and path:\n try:\n url_options['db'] = int(path.replace('/', ''))\n except (AttributeError, ValueError):\n pass\n\n if url.scheme == 'rediss':\n url_options['connection_class'] = SSLConnection\n\n # last shot at the db value\n url_options['db'] = int(url_options.get('db', db or 0))\n\n # update the arguments from the URL values\n kwargs.update(url_options)\n\n # backwards compatability\n if 'charset' in kwargs:\n warnings.warn(DeprecationWarning(\n '\"charset\" is deprecated. Use \"encoding\" instead'))\n kwargs['encoding'] = kwargs.pop('charset')\n if 'errors' in kwargs:\n warnings.warn(DeprecationWarning(\n '\"errors\" is deprecated. Use \"encoding_errors\" instead'))\n kwargs['encoding_errors'] = kwargs.pop('errors')\n\n return cls(**kwargs)\n\n def __init__(self, connection_class=Connection, max_connections=None,\n **connection_kwargs):\n \"\"\"\n Create a connection pool. If max_connections is set, then this\n object raises redis.ConnectionError when the pool's limit is reached.\n\n By default, TCP connections are created unless connection_class is\n specified. 
Use redis.UnixDomainSocketConnection for unix sockets.\n\n Any additional keyword arguments are passed to the constructor of\n connection_class.\n \"\"\"\n max_connections = max_connections or 2 ** 31\n if not isinstance(max_connections, (int, long)) or max_connections < 0:\n raise ValueError('\"max_connections\" must be a positive integer')\n\n self.connection_class = connection_class\n self.connection_kwargs = connection_kwargs\n self.max_connections = max_connections\n\n self.reset()\n\n def __repr__(self):\n return \"%s<%s>\" % (\n type(self).__name__,\n self.connection_class.description_format % self.connection_kwargs,\n )\n\n def reset(self):\n self.pid = os.getpid()\n self._created_connections = 0\n self._available_connections = []\n self._in_use_connections = set()\n self._check_lock = threading.Lock()\n\n def _checkpid(self):\n if self.pid != os.getpid():\n with self._check_lock:\n if self.pid == os.getpid():\n # another thread already did the work while we waited\n # on the lock.\n return\n self.disconnect()\n self.reset()\n\n def get_connection(self, command_name, *keys, **options):\n \"Get a connection from the pool\"\n self._checkpid()\n try:\n connection = self._available_connections.pop()\n except IndexError:\n connection = self.make_connection()\n self._in_use_connections.add(connection)\n return connection\n\n def get_encoder(self):\n \"Return an encoder based on encoding settings\"\n kwargs = self.connection_kwargs\n return Encoder(\n encoding=kwargs.get('encoding', 'utf-8'),\n encoding_errors=kwargs.get('encoding_errors', 'strict'),\n decode_responses=kwargs.get('decode_responses', False)\n )\n\n def make_connection(self):\n \"Create a new connection\"\n if self._created_connections >= self.max_connections:\n raise ConnectionError(\"Too many connections\")\n self._created_connections += 1\n return self.connection_class(**self.connection_kwargs)\n\n def release(self, connection):\n \"Releases the connection back to the pool\"\n self._checkpid()\n if connection.pid != self.pid:\n return\n self._in_use_connections.remove(connection)\n self._available_connections.append(connection)\n\n def disconnect(self):\n \"Disconnects all connections in the pool\"\n all_conns = chain(self._available_connections,\n self._in_use_connections)\n for connection in all_conns:\n connection.disconnect()\n\n\nclass BlockingConnectionPool(ConnectionPool):\n \"\"\"\n Thread-safe blocking connection pool::\n\n >>> from redis.client import Redis\n >>> client = Redis(connection_pool=BlockingConnectionPool())\n\n It performs the same function as the default\n ``:py:class: ~redis.connection.ConnectionPool`` implementation, in that,\n it maintains a pool of reusable connections that can be shared by\n multiple redis clients (safely across threads if required).\n\n The difference is that, in the event that a client tries to get a\n connection from the pool when all of connections are in use, rather than\n raising a ``:py:class: ~redis.exceptions.ConnectionError`` (as the default\n ``:py:class: ~redis.connection.ConnectionPool`` implementation does), it\n makes the client wait (\"blocks\") for a specified number of seconds until\n a connection becomes available.\n\n Use ``max_connections`` to increase / decrease the pool size::\n\n >>> pool = BlockingConnectionPool(max_connections=10)\n\n Use ``timeout`` to tell it either how many seconds to wait for a connection\n to become available, or to block forever:\n\n # Block forever.\n >>> pool = BlockingConnectionPool(timeout=None)\n\n # Raise a 
``ConnectionError`` after five seconds if a connection is\n # not available.\n >>> pool = BlockingConnectionPool(timeout=5)\n \"\"\"\n def __init__(self, max_connections=50, timeout=20,\n connection_class=Connection, queue_class=LifoQueue,\n **connection_kwargs):\n\n self.queue_class = queue_class\n self.timeout = timeout\n super(BlockingConnectionPool, self).__init__(\n connection_class=connection_class,\n max_connections=max_connections,\n **connection_kwargs)\n\n def reset(self):\n self.pid = os.getpid()\n self._check_lock = threading.Lock()\n\n # Create and fill up a thread safe queue with ``None`` values.\n self.pool = self.queue_class(self.max_connections)\n while True:\n try:\n self.pool.put_nowait(None)\n except Full:\n break\n\n # Keep a list of actual connection instances so that we can\n # disconnect them later.\n self._connections = []\n\n def make_connection(self):\n \"Make a fresh connection.\"\n connection = self.connection_class(**self.connection_kwargs)\n self._connections.append(connection)\n return connection\n\n def get_connection(self, command_name, *keys, **options):\n \"\"\"\n Get a connection, blocking for ``self.timeout`` until a connection\n is available from the pool.\n\n If the connection returned is ``None`` then creates a new connection.\n Because we use a last-in first-out queue, the existing connections\n (having been returned to the pool after the initial ``None`` values\n were added) will be returned before ``None`` values. This means we only\n create new connections when we need to, i.e.: the actual number of\n connections will only increase in response to demand.\n \"\"\"\n # Make sure we haven't changed process.\n self._checkpid()\n\n # Try and get a connection from the pool. If one isn't available within\n # self.timeout then raise a ``ConnectionError``.\n connection = None\n try:\n connection = self.pool.get(block=True, timeout=self.timeout)\n except Empty:\n # Note that this is not caught by the redis client and will be\n # raised unless handled by application code. If you want never to\n raise ConnectionError(\"No connection available.\")\n\n # If the ``connection`` is actually ``None`` then that's a cue to make\n # a new connection to add to the pool.\n if connection is None:\n connection = self.make_connection()\n\n return connection\n\n def release(self, connection):\n \"Releases the connection back to the pool.\"\n # Make sure we haven't changed process.\n self._checkpid()\n if connection.pid != self.pid:\n return\n\n # Put the connection back into the pool.\n try:\n self.pool.put_nowait(connection)\n except Full:\n # perhaps the pool has been reset() after a fork? regardless,\n # we don't want this connection\n pass\n\n def disconnect(self):\n \"Disconnects all connections in the pool.\"\n for connection in self._connections:\n connection.disconnect()\n", "path": "redis/connection.py"}]} |
gh_patches_debug_1010 | rasdani/github-patches | git_diff | svthalia__concrexit-3616 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Disable Sentry cron monitoring
### What?
We need to disable the sentry 'cron' monitoring of periodic tasks.
### Why?
Sentry is making cron monitors paid after the beta.
### How?
I think it's a single line in settings.py, and maybe some cleanup on sentry to remove the existing monitors.
--- END ISSUE ---
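As the issue's "How?" note suggests, the change is probably confined to the Sentry initialisation in `settings.py`. Below is a minimal, hypothetical sketch of what disabling the beat-task ("cron") monitoring could look like — assuming the only required code change is to stop passing `monitor_beat_tasks=True` to `CeleryIntegration` (omitting it falls back to the default of not creating monitors), and that any monitors already registered on sentry.io still need to be removed by hand:

```python
# Hypothetical sketch only: initialise Sentry without Celery beat (cron) monitoring.
# The option name comes from the existing CeleryIntegration(monitor_beat_tasks=True)
# call in website/thaliawebsite/settings.py; leaving it out keeps the default
# behaviour of not creating cron monitors for periodic tasks.
import os

import sentry_sdk
from sentry_sdk.integrations.celery import CeleryIntegration
from sentry_sdk.integrations.django import DjangoIntegration

if "SENTRY_DSN" in os.environ:
    sentry_sdk.init(
        dsn=os.environ.get("SENTRY_DSN"),
        integrations=[
            DjangoIntegration(),
            CeleryIntegration(),  # no monitor_beat_tasks=True -> no cron monitors
        ],
    )
```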
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/thaliawebsite/settings.py`
Content:
```
1 """Django settings for concrexit.
2
3 For more information on this file, see
4 https://docs.djangoproject.com/en/dev/topics/settings/
5
6 For the full list of settings and their values, see
7 https://docs.djangoproject.com/en/dev/ref/settings/
8 """
9
10 import base64
11 import json
12 import logging
13 import os
14
15 from django.core.management.commands import makemessages
16 from django.utils import timezone
17 from django.utils.translation import gettext_lazy as _
18
19 from celery.schedules import crontab
20
21 logger = logging.getLogger(__name__)
22
23 # Sentinel objects that are distinct from None
24 _NOT_SET = object()
25
26
27 class Misconfiguration(Exception):
28 """Exception that is raised when something is misconfigured in this file."""
29
30
31 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
32 BASE_DIR = os.path.abspath(
33 os.path.join(os.path.dirname(os.path.abspath(__file__)), "", "..")
34 )
35
36 SOURCE_COMMIT = os.environ.get("SOURCE_COMMIT", "unknown")
37
38 # Many of the settings are dependent on the environment we're running in.
39 # The default environment is development, so the programmer doesn't have to set anything
40 DJANGO_ENV = os.environ.get("DJANGO_ENV", "development")
41 _environments = ["production", "staging", "testing", "development"]
42 if DJANGO_ENV not in _environments:
43 raise Misconfiguration(f"Set DJANGO_ENV to one of: {', '.join(_environments)}")
44
45
46 def _set_django_env(env):
47 """Set the DJANGO_ENV variable.
48
49 This is a helper function for the doctests below because doctests cannot set global variables.
50 """
51 global DJANGO_ENV # noqa: PLW0603
52 DJANGO_ENV = env
53
54
55 def setting(*, development, production, staging=_NOT_SET, testing=_NOT_SET):
56 """Generate a setting depending on the DJANGO_ENV and the arguments.
57
58 This function is meant for static settings that depend on the DJANGO_ENV. If the
59 staging or testing arguments are left to their defaults, they will fall back to
60 the production and development settings respectively.
61
62 Example:
63 >>> _set_django_env("production")
64 >>> SEND_MESSAGES_WITH = setting(development="console", production="mail", staging="DM")
65 >>> SEND_MESSAGES_WITH
66 'mail'
67 >>> _set_django_env("testing")
68 >>> setting(development="console", production="mail", staging="DM")
69 'console'
70 """
71 if DJANGO_ENV == "development" or (DJANGO_ENV == "testing" and testing is _NOT_SET):
72 return development
73 if DJANGO_ENV == "testing":
74 return testing
75 if DJANGO_ENV == "production" or (DJANGO_ENV == "staging" and staging is _NOT_SET):
76 return production
77 if DJANGO_ENV == "staging":
78 return staging
79 raise Misconfiguration(f"Set DJANGO_ENV to one of: {', '.join(_environments)}")
80
81
82 def from_env(
83 name, *, production=_NOT_SET, staging=_NOT_SET, testing=_NOT_SET, development=None
84 ):
85 """Generate a setting that's overridable by the process environment.
86
87 This will raise an exception if a default is not set for production. Because we use
88 the sentinel value _NOT_SET, you can still set a default of None for production if wanted.
89
90 As with :func:`setting` the staging and testing values will fall back to production
91 and development. So if an environment variable is required in production, and no default
92 is set for staging, staging will also raise the exception.
93
94 Example:
95 >>> _set_django_env("production")
96 >>> # A secret key should always be set in production via the environment
97 >>> from_env("MEDIA_ROOT", development="/media/root")
98 Traceback (most recent call last):
99 ...
100 thaliawebsite.settings.Misconfiguration: Environment variable `MEDIA_ROOT` must be supplied in production
101 >>> _set_django_env("development")
102 >>> from_env("MEDIA_ROOT", development="/media/root")
103 '/media/root'
104 """
105 try:
106 return os.environ[name]
107 except KeyError:
108 if DJANGO_ENV == "production" or (
109 DJANGO_ENV == "staging" and staging is _NOT_SET
110 ):
111 if production is _NOT_SET and os.environ.get("MANAGE_PY", "0") == "0":
112 raise Misconfiguration(
113 f"Environment variable `{name}` must be supplied in production"
114 )
115 if production is _NOT_SET and os.environ.get("MANAGE_PY", "0") == "1":
116 logger.warning(
117 "Ignoring unset %s because we're running a management command", name
118 )
119 return development
120 return production
121 if DJANGO_ENV == "staging":
122 return staging
123 if DJANGO_ENV == "development" or (
124 DJANGO_ENV == "testing" and testing is _NOT_SET
125 ):
126 return development
127 if DJANGO_ENV == "testing":
128 return testing
129 raise Misconfiguration(f"DJANGO_ENV set to unsupported value: {DJANGO_ENV}")
130
131
132 ###############################################################################
133 # Site settings
134
135 # We use this setting to generate the email addresses, and for BASE_URL below.
136 SITE_DOMAIN = from_env("SITE_DOMAIN", development="localhost", production="thalia.nu")
137
138 # Used to generate some absolute urls when we don't have access to a request.
139 BASE_URL = from_env(
140 "BASE_URL",
141 development=f"http://{SITE_DOMAIN}:8000",
142 production=f"https://{SITE_DOMAIN}",
143 )
144
145 # Default FROM email
146 DEFAULT_FROM_EMAIL = f"{os.environ.get('ADDRESS_NOREPLY', 'noreply')}@{SITE_DOMAIN}"
147 # https://docs.djangoproject.com/en/dev/ref/settings/#server-email
148 SERVER_EMAIL = DEFAULT_FROM_EMAIL
149 NEWSLETTER_FROM_ADDRESS = (
150 f"{os.environ.get('ADDRESS_NEWSLETTER', 'newsletter')}@{SITE_DOMAIN}"
151 )
152 BOARD_NOTIFICATION_ADDRESS = (
153 f"{os.environ.get('ADDRESS_CONTACT', 'info')}@{SITE_DOMAIN}"
154 )
155 PARTNER_NOTIFICATION_ADDRESS = (
156 f"{os.environ.get('ADDRESS_COLLABORATION', 'samenwerking')}@{SITE_DOMAIN}"
157 )
158 EDUCATION_NOTIFICATION_ADDRESS = (
159 f"{os.environ.get('ADDRESS_EDUCATION', 'educacie')}@{SITE_DOMAIN}"
160 )
161 PROMO_REQUEST_NOTIFICATION_ADDRESS = (
162 f"{os.environ.get('ADDRESS_PROMOREQUESTS', 'promocie')}@{SITE_DOMAIN}"
163 )
164 TREASURER_NOTIFICATION_ADDRESS = (
165 f"{os.environ.get('ADDRESS_TREASURER', 'treasurer')}@{SITE_DOMAIN}"
166 )
167
168
169 # How many days to keep reference faces after a user marks them for deletion
170 FACEDETECTION_REFERENCE_FACE_STORAGE_PERIOD_AFTER_DELETE_DAYS = 180
171
172 # How many reference faces a user can have at the same time
173 FACEDETECTION_MAX_NUM_REFERENCE_FACES = 5
174
175 # ARN of the concrexit-facedetection-lambda function.
176 # See https://github.com/svthalia/concrexit-facedetection-lambda.
177 FACEDETECTION_LAMBDA_ARN = os.environ.get("FACEDETECTION_LAMBDA_ARN")
178
179 FACEDETECTION_LAMBDA_BATCH_SIZE = int(
180 os.environ.get("FACEDETECTION_LAMBDA_BATCH_SIZE", 20)
181 )
182
183 # The scheme the app uses for oauth redirection
184 APP_OAUTH_SCHEME = os.environ.get("APP_OAUTH_SCHEME", "nu.thalia")
185
186 # Membership prices
187 MEMBERSHIP_PRICES = {
188 "year": int(os.environ.get("MEMBERSHIP_PRICE_YEAR_CENTS", "750")) / 100,
189 "study": int(os.environ.get("MEMBERSHIP_PRICE_STUDY_CENTS", "3000")) / 100,
190 }
191
192 # Window during which a payment can be deleted again
193 PAYMENT_CHANGE_WINDOW = int(os.environ.get("PAYMENTS_CHANGE_WINDOW", 10 * 60))
194
195 # Payments creditor identifier
196 SEPA_CREDITOR_ID = os.environ.get("SEPA_CREDITOR_ID", "<unknown>")
197
198 # Payment batch withdrawal date default offset after creation date
199 PAYMENT_BATCH_DEFAULT_WITHDRAWAL_DATE_OFFSET = timezone.timedelta(days=14)
200
201 THALIA_PAY_ENABLED_PAYMENT_METHOD = (
202 from_env("THALIA_PAY_ENABLED", development="1", staging="1", production="0") == "1"
203 )
204 THALIA_PAY_FOR_NEW_MEMBERS = os.environ.get("THALIA_PAY_FOR_NEW_MEMBERS", "1") == "1"
205
206 ###############################################################################
207 # Django settings
208
209 # https://docs.djangoproject.com/en/dev/ref/settings/#secret-key
210 SECRET_KEY = from_env(
211 "SECRET_KEY", development="#o-0d1q5&^&06tn@8pr1f(n3$crafd++^%sacao7hj*ea@c)^t"
212 )
213
214 # https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
215 ALLOWED_HOSTS = [
216 SITE_DOMAIN,
217 *from_env("ALLOWED_HOSTS", development="*", production="").split(","),
218 ]
219
220 DJANGO_DRF_FILEPOND_UPLOAD_TMP = from_env(
221 "DJANGO_DRF_FILEPOND_UPLOAD_TMP",
222 development=os.path.join(BASE_DIR, "filepond-temp-uploads"),
223 )
224 DJANGO_DRF_FILEPOND_FILE_STORE_PATH = from_env(
225 "DJANGO_DRF_FILEPOND_FILE_STORE_PATH",
226 development=os.path.join(BASE_DIR, "filepond-uploaded"),
227 )
228 DJANGO_DRF_FILEPOND_ALLOW_EXTERNAL_UPLOAD_DIR = True
229 DJANGO_DRF_FILEPOND_PERMISSION_CLASSES = {
230 "GET_FETCH": [
231 "oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope",
232 ],
233 "GET_LOAD": [
234 "oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope",
235 ],
236 "POST_PROCESS": [
237 "oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope",
238 ],
239 "GET_RESTORE": [
240 "oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope",
241 ],
242 "DELETE_REVERT": [
243 "oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope",
244 ],
245 "PATCH_PATCH": [
246 "oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope",
247 ],
248 }
249
250 # https://docs.djangoproject.com/en/dev/ref/settings/#static-root
251 STATIC_ROOT = from_env("STATIC_ROOT", development=os.path.join(BASE_DIR, "static"))
252
253 # https://docs.djangoproject.com/en/dev/ref/settings/#media-root
254 MEDIA_ROOT = from_env("MEDIA_ROOT", development=os.path.join(BASE_DIR, "media"))
255
256 # https://github.com/johnsensible/django-sendfile#nginx-backend
257 SENDFILE_URL = "/media/sendfile/"
258 SENDFILE_ROOT = MEDIA_ROOT
259 SENDFILE_BACKEND = setting(
260 development="django_sendfile.backends.development",
261 production="django_sendfile.backends.nginx",
262 )
263
264 PRIVATE_MEDIA_LOCATION = ""
265 PUBLIC_MEDIA_LOCATION = "public"
266 STATICFILES_LOCATION = "static"
267
268 MEDIA_URL = "/media/private/"
269
270 AWS_ACCESS_KEY_ID = from_env("AWS_ACCESS_KEY_ID", production=None)
271 AWS_SECRET_ACCESS_KEY = from_env("AWS_SECRET_ACCESS_KEY", production=None)
272 AWS_STORAGE_BUCKET_NAME = from_env("AWS_STORAGE_BUCKET_NAME", production=None)
273 AWS_DEFAULT_ACL = "private"
274 AWS_S3_OBJECT_PARAMETERS = {"CacheControl": "max-age=86400"}
275 AWS_S3_SIGNATURE_VERSION = "s3v4"
276
277 if AWS_STORAGE_BUCKET_NAME is not None:
278 AWS_CLOUDFRONT_KEY = base64.urlsafe_b64decode(
279 os.environ.get("AWS_CLOUDFRONT_KEY", None)
280 ).decode("utf-8")
281 AWS_CLOUDFRONT_KEY_ID = os.environ.get("AWS_CLOUDFRONT_KEY_ID", None)
282 AWS_S3_CUSTOM_DOMAIN = os.environ.get("AWS_CLOUDFRONT_DOMAIN", None)
283
284 _STATICFILES_STORAGE = "thaliawebsite.storage.backend.StaticS3Storage"
285 STATIC_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/static/"
286
287 _DEFAULT_FILE_STORAGE = "thaliawebsite.storage.backend.PrivateS3Storage"
288
289 _PUBLIC_FILE_STORAGE = "thaliawebsite.storage.backend.PublicS3Storage"
290 PUBLIC_MEDIA_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/"
291 else:
292 _STATICFILES_STORAGE = setting(
293 development="django.contrib.staticfiles.storage.StaticFilesStorage",
294 production="django.contrib.staticfiles.storage.ManifestStaticFilesStorage",
295 )
296 STATIC_URL = "/static/"
297
298 _DEFAULT_FILE_STORAGE = "thaliawebsite.storage.backend.PrivateFileSystemStorage"
299
300 _PUBLIC_FILE_STORAGE = "thaliawebsite.storage.backend.PublicFileSystemStorage"
301 PUBLIC_MEDIA_URL = "/media/public/"
302
303 STORAGES = {
304 "default": {"BACKEND": _DEFAULT_FILE_STORAGE},
305 "public": {"BACKEND": _PUBLIC_FILE_STORAGE},
306 "staticfiles": {"BACKEND": _STATICFILES_STORAGE},
307 }
308
309 # https://docs.djangoproject.com/en/dev/ref/settings/#conn-max-age
310 CONN_MAX_AGE = int(from_env("CONN_MAX_AGE", development="0", production="60"))
311
312 # Useful for managing members
313 # https://docs.djangoproject.com/en/dev/ref/settings/#data-upload-max-number-fields
314 DATA_UPLOAD_MAX_NUMBER_FIELDS = os.environ.get("DATA_UPLOAD_MAX_NUMBER_FIELDS", 10000)
315
316 # https://docs.djangoproject.com/en/dev/ref/settings/#debug
317 DEBUG = bool(
318 from_env("DJANGO_DEBUG", development=True, production=False, testing=False)
319 )
320 # https://docs.djangoproject.com/en/dev/ref/settings/#internal-ips
321 INTERNAL_IPS = ["127.0.0.1", "172.17.0.1"] if DEBUG else []
322
323
324 def show_toolbar(request):
325 return DEBUG
326
327
328 DEBUG_TOOLBAR_CONFIG = {"SHOW_TOOLBAR_CALLBACK": show_toolbar}
329
330 # https://docs.djangoproject.com/en/dev/ref/settings/#session-cookie-secure
331 SESSION_COOKIE_SECURE = setting(development=False, production=True)
332 # https://docs.djangoproject.com/en/dev/ref/settings/#csrf-cookie-secure
333 CSRF_COOKIE_SECURE = setting(development=False, production=True)
334
335 # https://docs.djangoproject.com/en/dev/ref/settings/#std-setting-SECURE_PROXY_SSL_HEADER
336 SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
337
338 # https://docs.djangoproject.com/en/dev/ref/settings/#default-auto-field
339 DEFAULT_AUTO_FIELD = "django.db.models.AutoField"
340
341
342 ###############################################################################
343 # Celery settings
344 # https://docs.celeryq.dev/en/stable/userguide/configuration.html#configuration
345
346 # Set CELERY_BROKER_URL="redis://127.0.0.1:6379" to use a local redis server in development.
347 CELERY_BROKER_URL = from_env("CELERY_BROKER_URL")
348
349 # Always execute tasks synchronously when no broker is configured in development and testing.
350 # See https://docs.celeryq.dev/en/stable/userguide/configuration.html#std-setting-task_always_eager
351 CELERY_TASK_ALWAYS_EAGER = CELERY_BROKER_URL is None
352
353
354 # See https://docs.celeryq.dev/en/stable/getting-started/backends-and-brokers/redis.html#caveats
355 CELERY_BROKER_TRANSPORT_OPTIONS = {"visibility_timeout": 18000}
356
357 # https://docs.celeryq.dev/en/stable/userguide/periodic-tasks.html
358 CELERY_BEAT_SCHEDULE = {
359 "synchronize_mailinglists": {
360 "task": "mailinglists.tasks.sync_mail",
361 "schedule": crontab(minute=30),
362 },
363 "synchronize_moneybird": {
364 "task": "moneybirdsynchronization.tasks.synchronize_moneybird",
365 "schedule": crontab(minute=30, hour=1),
366 },
367 "sendpromooverviewweekly": {
368 "task": "promotion.tasks.promo_update_weekly",
369 "schedule": crontab(minute=0, hour=8, day_of_week=1),
370 },
371 "sendpromoooverviewdaily": {
372 "task": "promotion.tasks.promo_update_daily",
373 "schedule": crontab(minute=0, hour=8),
374 },
375 "facedetectlambda": {
376 "task": "facedetection.tasks.trigger_facedetect_lambda",
377 "schedule": crontab(minute=0, hour=1),
378 },
379 "revokeoldmandates": {
380 "task": "payments.tasks.revoke_mandates",
381 "schedule": crontab(minute=0, hour=1),
382 },
383 "membershipannouncement": {
384 "task": "members.tasks.membership_announcement",
385 "schedule": crontab(minute=0, hour=6, day_of_month=31, month_of_year=8),
386 },
387 "inforequest": {
388 "task": "members.tasks.info_request",
389 "schedule": crontab(minute=0, hour=6, day_of_month=15, month_of_year=10),
390 },
391 "expirationannouncement": {
392 "task": "members.tasks.expiration_announcement",
393 "schedule": crontab(minute=0, hour=6, day_of_month=8, month_of_year=8),
394 },
395 "minimiseregistration": {
396 "task": "registrations.tasks.minimise_registrations",
397 "schedule": crontab(minute=0, hour=3, day_of_month=1),
398 },
399 "sendscheduledmessages": {
400 "task": "pushnotifications.tasks.send_scheduled_messages",
401 "schedule": crontab(minute="*/2"),
402 "args": (120,),
403 },
404 "revokestaff": {
405 "task": "activemembers.tasks.revoke_staff",
406 "schedule": crontab(minute=30, hour=3),
407 },
408 "deletegsuiteusers": {
409 "task": "activemembers.tasks.delete_gsuite_users",
410 "schedule": crontab(minute=30, hour=3, day_of_week=1),
411 },
412 "sendplannednewsletters": {
413 "task": "newsletters.tasks.send_planned_newsletters",
414 "schedule": crontab(minute="*/5"),
415 },
416 "dataminimisation": {
417 "task": "thaliawebsite.tasks.data_minimisation",
418 "schedule": crontab(minute=0, hour=3),
419 },
420 "cleanup": {
421 "task": "thaliawebsite.tasks.clean_up",
422 "schedule": crontab(minute=0, hour=23),
423 },
424 "cleartokens": {
425 "task": "thaliawebsite.tasks.clear_tokens",
426 "schedule": crontab(minute=30, hour=3),
427 },
428 "sendpromoupdateoverviewdaily": {
429 "task": "promotion.tasks.promo_update_overview_daily",
430 "schedule": crontab(minute=0, hour=8),
431 },
432 }
433
434 ###############################################################################
435 # Email settings
436 # https://docs.djangoproject.com/en/dev/ref/settings/#email-backend
437 _EMAIL_BACKEND = from_env("EMAIL_BACKEND", development="console", production="smtp")
438 if _EMAIL_BACKEND == "console":
439 EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
440
441 if _EMAIL_BACKEND == "smtp":
442 EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
443 EMAIL_HOST = os.environ.get("DJANGO_EMAIL_HOST")
444 EMAIL_PORT = os.environ.get("DJANGO_EMAIL_PORT", 25)
445 EMAIL_HOST_USER = os.environ.get("DJANGO_EMAIL_HOST_USER", "")
446 EMAIL_HOST_PASSWORD = os.environ.get("DJANGO_EMAIL_HOST_PASSWORD", "")
447 EMAIL_USE_TLS = os.environ.get("DJANGO_EMAIL_USE_TLS", "1") == "1"
448 EMAIL_TIMEOUT = int(os.environ.get("EMAIL_TIMEOUT", "10"))
449 if EMAIL_HOST is None:
450 logger.warning(
451 "The email host is set to the default of localhost, are you sure you don't want to set EMAIL_HOST?"
452 )
453 EMAIL_HOST = "localhost"
454
455 ###############################################################################
456 # Database settings
457 # https://docs.djangoproject.com/en/dev/ref/settings/#databases
458 DATABASE_ENGINE = from_env(
459 "DATABASE_ENGINE", development="sqlite", production="postgresql", testing=None
460 )
461 if DATABASE_ENGINE == "sqlite":
462 DATABASES = {
463 "default": {
464 "ENGINE": "django.db.backends.sqlite3",
465 "NAME": os.path.join(BASE_DIR, "db.sqlite3"),
466 }
467 }
468
469 if DATABASE_ENGINE == "postgresql":
470 DATABASES = {
471 "default": {
472 "ENGINE": "django.db.backends.postgresql",
473 "USER": os.environ.get("POSTGRES_USER", "concrexit"),
474 "PASSWORD": os.environ.get("POSTGRES_PASSWORD", None),
475 "NAME": os.environ.get("POSTGRES_DB", ""),
476 "HOST": os.environ.get("POSTGRES_HOST", ""),
477 "PORT": os.environ.get("POSTGRES_PORT", "5432"),
478 "CONN_MAX_AGE": 300,
479 }
480 }
481
482 if DJANGO_ENV == "testing":
483 DATABASES = {
484 "default": {
485 "ENGINE": "django.db.backends.postgresql",
486 "NAME": "thalia",
487 "USER": "postgres",
488 "PASSWORD": "postgres",
489 "HOST": "127.0.0.1",
490 "PORT": 5432,
491 },
492 }
493
494 ###############################################################################
495 # Firebase config
496 FIREBASE_CREDENTIALS = os.environ.get("FIREBASE_CREDENTIALS", "{}")
497 if FIREBASE_CREDENTIALS != "{}":
498 FIREBASE_CREDENTIALS = base64.urlsafe_b64decode(FIREBASE_CREDENTIALS)
499 FIREBASE_CREDENTIALS = json.loads(FIREBASE_CREDENTIALS)
500
501 if FIREBASE_CREDENTIALS != {}:
502 from firebase_admin import credentials, initialize_app
503
504 try:
505 initialize_app(credential=credentials.Certificate(FIREBASE_CREDENTIALS))
506 except ValueError:
507 logger.error("Firebase application failed to initialise")
508
509 ###############################################################################
510 # GSuite config
511 GSUITE_ADMIN_SCOPES = [
512 "https://www.googleapis.com/auth/admin.directory.group",
513 "https://www.googleapis.com/auth/admin.directory.user",
514 "https://www.googleapis.com/auth/apps.groups.settings",
515 ]
516
517 GSUITE_ADMIN_CREDENTIALS = os.environ.get("GSUITE_ADMIN_CREDENTIALS", "{}")
518 if GSUITE_ADMIN_CREDENTIALS != "{}":
519 GSUITE_ADMIN_CREDENTIALS = base64.urlsafe_b64decode(GSUITE_ADMIN_CREDENTIALS)
520 GSUITE_ADMIN_CREDENTIALS = json.loads(GSUITE_ADMIN_CREDENTIALS)
521 GSUITE_ADMIN_USER = os.environ.get("GSUITE_ADMIN_USER", "[email protected]")
522 GSUITE_DOMAIN = from_env(
523 "GSUITE_DOMAIN", development="thalia.localhost", production="thalia.nu"
524 )
525 GSUITE_MEMBERS_DOMAIN = from_env(
526 "GSUITE_MEMBERS_DOMAIN",
527 development="members.thalia.localhost",
528 production="members.thalia.nu",
529 )
530 GSUITE_MEMBERS_AUTOSYNC = os.environ.get("GSUITE_MEMBERS_AUTOSYNC", "0") == "1"
531
532 if GSUITE_ADMIN_CREDENTIALS != {}:
533 from google.oauth2 import service_account
534
535 GSUITE_ADMIN_CREDENTIALS = service_account.Credentials.from_service_account_info(
536 GSUITE_ADMIN_CREDENTIALS, scopes=GSUITE_ADMIN_SCOPES
537 ).with_subject(GSUITE_ADMIN_USER)
538
539 EMAIL_DOMAIN_BLACKLIST = [GSUITE_MEMBERS_DOMAIN]
540
541 ###############################################################################
542 # Google maps API key and secrets
543 GOOGLE_MAPS_API_KEY = os.environ.get("GOOGLE_MAPS_API_KEY", "")
544 GOOGLE_MAPS_API_SECRET = os.environ.get("GOOGLE_MAPS_API_SECRET", "")
545 GOOGLE_PLACES_API_KEY = os.environ.get("GOOGLE_PLACES_API_KEY", "")
546
547 ###############################################################################
548 # Sentry setup
549 if "SENTRY_DSN" in os.environ:
550 import sentry_sdk
551 from sentry_sdk.integrations.celery import CeleryIntegration
552 from sentry_sdk.integrations.django import DjangoIntegration
553
554 sentry_sdk.init(
555 dsn=os.environ.get("SENTRY_DSN"),
556 integrations=[
557 DjangoIntegration(),
558 CeleryIntegration(
559 monitor_beat_tasks=True,
560 ),
561 ],
562 release=SOURCE_COMMIT,
563 send_default_pii=True,
564 environment=DJANGO_ENV,
565 traces_sample_rate=float(os.environ.get("SENTRY_TRACES_SAMPLE_RATE", 0.2)),
566 profiles_sample_rate=float(os.environ.get("SENTRY_PROFILES_SAMPLE_RATE", 0.0)),
567 )
568
569
570 ###############################################################################
571 # (Mostly) static settings
572 INSTALLED_APPS = [
573 "django.contrib.auth",
574 "django.contrib.contenttypes",
575 "django.contrib.sessions",
576 "django.contrib.messages",
577 "django.contrib.staticfiles",
578 "django.contrib.sitemaps",
579 # Dependencies
580 "django_otp",
581 "django_otp.plugins.otp_static",
582 "django_otp.plugins.otp_totp",
583 "formtools",
584 "two_factor",
585 "oauth2_provider",
586 "corsheaders",
587 "django_bootstrap5",
588 "tinymce",
589 "rest_framework",
590 "rest_framework.authtoken",
591 "debug_toolbar",
592 "sass_processor",
593 "admin_auto_filters",
594 "django_drf_filepond",
595 "django_filepond_widget",
596 "thumbnails",
597 # Our apps
598 # Directly link to the app config when applicable as recommended
599 # by the docs: https://docs.djangoproject.com/en/2.0/ref/applications/
600 "thaliawebsite.apps.ThaliaWebsiteConfig", # include for admin settings
601 # Load django.contrib.admin after thaliawebsite so the admin page gets modified
602 "django.contrib.admin",
603 # Our apps ordered such that templates in the first
604 # apps can override those used by the later apps.
605 "pushnotifications.apps.PushNotificationsConfig",
606 "facedetection.apps.FaceDetectionConfig",
607 "announcements.apps.AnnouncementsConfig",
608 "promotion.apps.PromotionConfig",
609 "members.apps.MembersConfig",
610 "documents.apps.DocumentsConfig",
611 "activemembers.apps.ActiveMembersConfig",
612 "photos.apps.PhotosConfig",
613 "utils",
614 "mailinglists.apps.MailinglistsConfig",
615 "merchandise.apps.MerchandiseConfig",
616 "thabloid.apps.ThabloidConfig",
617 "partners.apps.PartnersConfig",
618 "events.apps.EventsConfig",
619 "pizzas.apps.PizzasConfig",
620 "newsletters.apps.NewslettersConfig",
621 "education.apps.EducationConfig",
622 "registrations.apps.RegistrationsConfig",
623 "payments.apps.PaymentsConfig",
624 "singlepages.apps.SinglepagesConfig",
625 "shortlinks.apps.ShortLinkConfig",
626 "sales.apps.SalesConfig",
627 "moneybirdsynchronization.apps.MoneybirdsynchronizationConfig",
628 ]
629
630 MIDDLEWARE = [
631 "debug_toolbar.middleware.DebugToolbarMiddleware",
632 "django.middleware.security.SecurityMiddleware",
633 "django.contrib.sessions.middleware.SessionMiddleware",
634 "django.middleware.http.ConditionalGetMiddleware",
635 "corsheaders.middleware.CorsMiddleware",
636 "django.middleware.common.CommonMiddleware",
637 "django.middleware.csrf.CsrfViewMiddleware",
638 "django.contrib.auth.middleware.AuthenticationMiddleware",
639 "django_otp.middleware.OTPMiddleware",
640 "django.contrib.messages.middleware.MessageMiddleware",
641 "thaliawebsite.middleware.RealIPMiddleware",
642 "django_ratelimit.middleware.RatelimitMiddleware",
643 "members.middleware.MemberMiddleware",
644 "announcements.middleware.AnnouncementMiddleware",
645 ]
646
647 if DJANGO_ENV in ("development", "testing"):
648 INSTALLED_APPS += [
649 "django_template_check",
650 "django_extensions",
651 ]
652
653 if DJANGO_ENV == "testing":
654 for x in (
655 "debug_toolbar.middleware.DebugToolbarMiddleware",
656 "django.middleware.http.ConditionalGetMiddleware",
657 "django.middleware.csrf.CsrfViewMiddleware",
658 ):
659 MIDDLEWARE.remove(x)
660 for x in ("debug_toolbar",):
661 INSTALLED_APPS.remove(x)
662
663 ROOT_URLCONF = "thaliawebsite.urls"
664
665 TEMPLATES = [
666 {
667 "BACKEND": "django.template.backends.django.DjangoTemplates",
668 "DIRS": [os.path.join(BASE_DIR, "templates")],
669 "APP_DIRS": setting(development=True, production=False),
670 "OPTIONS": {
671 "context_processors": [
672 "thaliawebsite.context_processors.source_commit",
673 "django.template.context_processors.debug",
674 "django.template.context_processors.request",
675 "django.template.context_processors.media",
676 "django.contrib.auth.context_processors.auth",
677 "django.contrib.messages.context_processors.messages",
678 "announcements.context_processors.announcements",
679 "thaliawebsite.context_processors.aprilfools",
680 "thaliawebsite.context_processors.lustrum_styling",
681 ],
682 },
683 },
684 ]
685
686 if DJANGO_ENV in ["production", "staging"]:
687 # Use caching template loader
688 TEMPLATES[0]["OPTIONS"]["loaders"] = [
689 (
690 "django.template.loaders.cached.Loader",
691 [
692 "django.template.loaders.filesystem.Loader",
693 "django.template.loaders.app_directories.Loader",
694 ],
695 )
696 ]
697
698 # Default logging: https://github.com/django/django/blob/master/django/utils/log.py
699 # We disable mailing the admin.
700 # Server errors will be sent to Sentry via the config below this.
701 LOGGING = {
702 "version": 1,
703 "disable_existing_loggers": False,
704 "filters": {
705 "require_debug_false": {
706 "()": "django.utils.log.RequireDebugFalse",
707 },
708 "require_debug_true": {
709 "()": "django.utils.log.RequireDebugTrue",
710 },
711 },
712 "formatters": {
713 "django.server": {
714 "()": "django.utils.log.ServerFormatter",
715 "format": "[{server_time}] {message}",
716 "style": "{",
717 }
718 },
719 "handlers": {
720 "console": {
721 "level": "INFO",
722 "filters": ["require_debug_true"],
723 "class": "logging.StreamHandler",
724 },
725 "django.server": {
726 "level": "INFO",
727 "class": "logging.StreamHandler",
728 "formatter": "django.server",
729 },
730 },
731 "loggers": {
732 "django": {
733 "handlers": ["console"],
734 "level": "INFO",
735 },
736 "django.server": {
737 "handlers": ["django.server"],
738 "level": "INFO",
739 "propagate": False,
740 },
741 },
742 }
743
744 REDIS_CACHE_PORT = int(
745 from_env("REDIS_CACHE_PORT", development="6379", production="6379")
746 )
747 REDIS_CACHE_HOST = from_env("REDIS_CACHE_HOST")
748 REDIS_CACHE_URL = (
749 f"redis://{REDIS_CACHE_HOST}:{REDIS_CACHE_PORT}" if REDIS_CACHE_HOST else None
750 )
751
752 CACHES = {
753 "default": (
754 {
755 "BACKEND": "django.core.cache.backends.redis.RedisCache",
756 "LOCATION": REDIS_CACHE_URL,
757 }
758 if REDIS_CACHE_URL is not None
759 else {
760 "BACKEND": "django.core.cache.backends.db.DatabaseCache",
761 "LOCATION": "django_default_db_cache",
762 }
763 ),
764 }
765
766 SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
767
768 WSGI_APPLICATION = "thaliawebsite.wsgi.application"
769
770 # Login pages
771 LOGIN_URL = "two_factor:login"
772 LOGIN_REDIRECT_URL = "/"
773
774 # Cors configuration
775 CORS_ORIGIN_ALLOW_ALL = True
776 CORS_URLS_REGEX = r"^/(?:api/v1|api/v2|user/oauth)/.*"
777
778 # OAuth configuration
779 OIDC_RSA_PRIVATE_KEY = from_env("OIDC_RSA_PRIVATE_KEY", testing=None)
780 if OIDC_RSA_PRIVATE_KEY is not None:
781 OIDC_RSA_PRIVATE_KEY = base64.urlsafe_b64decode(OIDC_RSA_PRIVATE_KEY).decode()
782
783 OAUTH2_PROVIDER = {
784 "OIDC_ENABLED": True,
785 "OIDC_RSA_PRIVATE_KEY": OIDC_RSA_PRIVATE_KEY,
786 "ALLOWED_REDIRECT_URI_SCHEMES": setting(
787 production=["https", APP_OAUTH_SCHEME],
788 staging=["http", "https", APP_OAUTH_SCHEME],
789 development=["http", "https", APP_OAUTH_SCHEME],
790 ),
791 "SCOPES": {
792 "openid": "OpenID Connect",
793 "read": "Authenticated read access to the website",
794 "write": "Authenticated write access to the website",
795 "activemembers:read": "Read access to committee, society and board groups",
796 "announcements:read": "Read access to announcements",
797 "events:read": "Read access to events and your event registrations",
798 "events:register": "Write access to the state of your event registrations",
799 "events:admin": "Admin access to the events",
800 "food:read": "Read access to food events",
801 "food:order": "Order access to food events",
802 "food:admin": "Admin access to food events",
803 "members:read": "Read access to the members directory",
804 "photos:read": "Read access to photos",
805 "profile:read": "Read access to your member profile",
806 "profile:write": "Write access to your member profile",
807 "pushnotifications:read": "Read access to push notifications",
808 "pushnotifications:write": "Write access to push notifications",
809 "partners:read": "Read access to partners",
810 "payments:read": "Read access to payments",
811 "payments:write": "Write access to payments",
812 "payments:admin": "Admin access to payments",
813 "sales:read": "Read access to your Point of Sale orders",
814 "sales:order": "Place Point of Sale orders on your behalf",
815 "sales:admin": "Admin access to Point of Sale orders",
816 },
817 }
818
819 # Password validation
820 # https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators
821 AUTH_PASSWORD_VALIDATORS = [
822 {
823 "NAME": (
824 "django.contrib.auth."
825 "password_validation.UserAttributeSimilarityValidator"
826 ),
827 },
828 {
829 "NAME": ("django.contrib.auth.password_validation.MinimumLengthValidator"),
830 },
831 {
832 "NAME": ("django.contrib.auth.password_validation.CommonPasswordValidator"),
833 },
834 {
835 "NAME": ("django.contrib.auth.password_validation.NumericPasswordValidator"),
836 },
837 ]
838
839 PASSWORD_HASHERS = setting(
840 development=(
841 "django.contrib.auth.hashers.PBKDF2PasswordHasher",
842 "django.contrib.auth.hashers.MD5PasswordHasher",
843 ),
844 production=(
845 "django.contrib.auth.hashers.Argon2PasswordHasher",
846 "django.contrib.auth.hashers.PBKDF2PasswordHasher",
847 "django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher",
848 "django.contrib.auth.hashers.BCryptSHA256PasswordHasher",
849 "django.contrib.auth.hashers.BCryptPasswordHasher",
850 ),
851 testing=("django.contrib.auth.hashers.MD5PasswordHasher",),
852 )
853
854 AUTHENTICATION_BACKENDS = [
855 "django.contrib.auth.backends.ModelBackend",
856 "activemembers.backends.MemberGroupBackend",
857 ]
858
859 REST_FRAMEWORK = {
860 "DEFAULT_AUTHENTICATION_CLASSES": (
861 "rest_framework.authentication.SessionAuthentication",
862 "thaliawebsite.api.authentication.APIv1TokenAuthentication",
863 "oauth2_provider.contrib.rest_framework.OAuth2Authentication",
864 ),
865 "DEFAULT_PAGINATION_CLASS": "thaliawebsite.api.pagination.APIv2LimitOffsetPagination",
866 "PAGE_SIZE": 50, # Only for API v2
867 "ALLOWED_VERSIONS": ["v1", "v2", "calendarjs", "facedetection"],
868 "DEFAULT_VERSIONING_CLASS": "rest_framework.versioning.NamespaceVersioning",
869 "DEFAULT_SCHEMA_CLASS": "thaliawebsite.api.openapi.OAuthAutoSchema",
870 "DEFAULT_THROTTLE_CLASSES": [
871 "thaliawebsite.api.throttling.AnonRateThrottle",
872 "thaliawebsite.api.throttling.UserRateThrottle",
873 ],
874 "DEFAULT_THROTTLE_RATES": setting(
875 production={"anon": "30/min", "user": "90/min"},
876 staging={"anon": "30/min", "user": "90/min"},
877 development={"anon": None, "user": None},
878 ),
879 }
880
881 # Rate limiting
882 RATELIMIT_VIEW = "thaliawebsite.views.rate_limited_view"
883
884 # Internationalization
885 # https://docs.djangoproject.com/en/dev/topics/i18n/
886 USE_I18N = True
887 LANGUAGES = [("en", _("English"))]
888 LANGUAGE_CODE = "en"
889 TIME_ZONE = "Europe/Amsterdam"
890
891 # We provide formatting overrides in the `thaliawebsite.en.formats`, because Django
892 # no longer supports running without localization. This works to enforce the same format
893 # regardless of the user's language/locale, because `en` is the only enabled language.
894 FORMAT_MODULE_PATH = ["thaliawebsite.locale"]
895
896 # Static files
897 STATICFILES_FINDERS = (
898 "django.contrib.staticfiles.finders.FileSystemFinder",
899 "django.contrib.staticfiles.finders.AppDirectoriesFinder",
900 "sass_processor.finders.CssFinder",
901 )
902
903 # Allow importing .scss files that don't start with an underscore.
904 # See https://github.com/jrief/django-sass-processor
905 SASS_PROCESSOR_INCLUDE_FILE_PATTERN = r"^.+\.scss$"
906
907 # See utils/model/signals.py for explanation
908 SUSPEND_SIGNALS = False
909
910 THUMBNAILS_METADATA = (
911 {
912 "BACKEND": "thumbnails.backends.metadata.RedisBackend",
913 "host": REDIS_CACHE_HOST,
914 "port": REDIS_CACHE_PORT,
915 }
916 if REDIS_CACHE_HOST
917 else {
918 "BACKEND": "thumbnails.backends.metadata.DatabaseBackend",
919 }
920 )
921
922 THUMBNAILS = {
923 "METADATA": THUMBNAILS_METADATA,
924 "STORAGE": {
925 # django-thumbnails does not use the Django 4.2 `storages` API yet,
926 # but we can simply give it the path as we would with the new API.
927 "BACKEND": _DEFAULT_FILE_STORAGE,
928 },
929 "SIZES": {
930 "small": {
931 "FORMAT": "webp",
932 "PROCESSORS": [
933 {
934 "PATH": "utils.media.processors.thumbnail",
935 "size": (300, 300),
936 "mode": "cover",
937 },
938 ],
939 },
940 "medium": {
941 "FORMAT": "webp",
942 "PROCESSORS": [
943 {
944 "PATH": "utils.media.processors.thumbnail",
945 "size": (600, 600),
946 "mode": "cover",
947 },
948 ],
949 },
950 "large": {
951 "FORMAT": "webp",
952 "PROCESSORS": [
953 {
954 "PATH": "utils.media.processors.thumbnail",
955 "size": (1200, 900),
956 "mode": "cover",
957 },
958 ],
959 },
960 "photo_medium": {
961 "FORMAT": "webp",
962 "PROCESSORS": [
963 {
964 "PATH": "utils.media.processors.thumbnail",
965 "size": (1200, 900),
966 },
967 ],
968 },
969 "photo_large": {
970 "FORMAT": "webp",
971 "PROCESSORS": [
972 {
973 "PATH": "utils.media.processors.thumbnail",
974 "size": (1920, 1920),
975 },
976 ],
977 },
978 "avatar_large": {
979 "FORMAT": "webp",
980 "PROCESSORS": [
981 {
982 "PATH": "utils.media.processors.thumbnail",
983 "size": (900, 900),
984 "mode": "cover",
985 },
986 ],
987 },
988 "slide_small": {
989 "FORMAT": "webp",
990 "PROCESSORS": [
991 {
992 "PATH": "utils.media.processors.thumbnail",
993 "size": (500, 108),
994 "mode": "cover",
995 },
996 ],
997 },
998 "slide_medium": {
999 "FORMAT": "webp",
1000 "PROCESSORS": [
1001 {
1002 "PATH": "utils.media.processors.thumbnail",
1003 "size": (1000, 215),
1004 "mode": "cover",
1005 },
1006 ],
1007 },
1008 "slide": {
1009 "FORMAT": "webp",
1010 "PROCESSORS": [
1011 {
1012 "PATH": "utils.media.processors.thumbnail",
1013 "size": (2000, 430),
1014 "mode": "cover",
1015 },
1016 ],
1017 },
1018 "fit_small": {
1019 "FORMAT": "webp",
1020 "PROCESSORS": [
1021 {
1022 "PATH": "utils.media.processors.thumbnail",
1023 "size": (300, 300),
1024 },
1025 ],
1026 },
1027 "fit_medium": {
1028 "FORMAT": "webp",
1029 "PROCESSORS": [
1030 {
1031 "PATH": "utils.media.processors.thumbnail",
1032 "size": (600, 600),
1033 },
1034 ],
1035 },
1036 "fit_medium_pad": {
1037 "FORMAT": "webp",
1038 "PROCESSORS": [
1039 {
1040 "PATH": "utils.media.processors.thumbnail",
1041 "size": (600, 250),
1042 "mode": "pad",
1043 },
1044 ],
1045 },
1046 "fit_small_pad": {
1047 "FORMAT": "webp",
1048 "PROCESSORS": [
1049 {
1050 "PATH": "utils.media.processors.thumbnail",
1051 "size": (360, 150),
1052 "mode": "pad",
1053 },
1054 ],
1055 },
1056 "fit_large": {
1057 "FORMAT": "webp",
1058 "PROCESSORS": [
1059 {
1060 "PATH": "utils.media.processors.thumbnail",
1061 "size": (1200, 900),
1062 },
1063 ],
1064 },
1065 "source": {
1066 "FORMAT": "jpg",
1067 "PROCESSORS": [
1068 {
1069 "PATH": "utils.media.processors.process_upload",
1070 "size": (8_000, 8_000),
1071 "format": "jpg",
1072 }
1073 ],
1074 },
1075 "source_png": {
1076 "FORMAT": "png",
1077 "PROCESSORS": [
1078 {
1079 "PATH": "utils.media.processors.process_upload",
1080 "size": (8_000, 8_000),
1081 "format": "png",
1082 }
1083 ],
1084 },
1085 },
1086 }
1087
1088 THUMBNAIL_SIZES = set(THUMBNAILS["SIZES"].keys())
1089
1090 # TinyMCE config
1091 TINYMCE_DEFAULT_CONFIG = {
1092 "max_height": 500,
1093 "menubar": False,
1094 "plugins": "autolink autoresize link image code media paste lists",
1095 "toolbar": "h2 h3 | bold italic underline strikethrough | image media | link unlink "
1096 "| bullist numlist | undo redo | code",
1097 "contextmenu": "bold italic underline strikethrough | link",
1098 "paste_as_text": True,
1099 "relative_urls": False,
1100 "remove_script_host": False,
1101 "autoresize_bottom_margin": 50,
1102 }
1103 TINYMCE_EXTRA_MEDIA = {
1104 "css": {
1105 "all": [
1106 "css/tinymce.css",
1107 ],
1108 },
1109 }
1110
1111
1112 BOOTSTRAP5 = {"required_css_class": "required-field"}
1113
1114 # https://docs.djangoproject.com/en/dev/ref/settings/#default-exception-reporter-filter
1115 DEFAULT_EXCEPTION_REPORTER_FILTER = (
1116 "utils.exception_filter.ThaliaSafeExceptionReporterFilter"
1117 )
1118
1119 # Make sure the locations in django.po files don't include line nrs.
1120 makemessages.Command.xgettext_options.append("--add-location=file")
1121
1122 GRAPH_MODELS = {
1123 "all_applications": False,
1124 "group_models": True,
1125 "app_labels": [
1126 "events",
1127 "photos",
1128 "merchandise",
1129 "thabloid",
1130 "partners",
1131 "newsletters",
1132 "shortlinks",
1133 "promotion",
1134 "documents",
1135 "pizzas",
1136 "announcements",
1137 "sales",
1138 "registrations",
1139 "mailinglists",
1140 "payments",
1141 "members",
1142 "admin",
1143 "pushnotifications",
1144 "activemembers",
1145 "education",
1146 "auth",
1147 ],
1148 }
1149
1150 MONEYBIRD_START_DATE = os.environ.get("MONEYBIRD_START_DATE", "2023-09-01")
1151
1152 MONEYBIRD_ADMINISTRATION_ID: int | None = (
1153 int(os.environ.get("MONEYBIRD_ADMINISTRATION_ID"))
1154 if os.environ.get("MONEYBIRD_ADMINISTRATION_ID")
1155 else None
1156 )
1157
1158 MONEYBIRD_API_KEY = os.environ.get("MONEYBIRD_API_KEY")
1159
1160 MONEYBIRD_SYNC_ENABLED = MONEYBIRD_ADMINISTRATION_ID and MONEYBIRD_API_KEY
1161
1162 MONEYBIRD_MEMBER_PK_CUSTOM_FIELD_ID: int | None = (
1163 int(os.environ.get("MONEYBIRD_MEMBER_PK_CUSTOM_FIELD_ID"))
1164 if os.environ.get("MONEYBIRD_MEMBER_PK_CUSTOM_FIELD_ID")
1165 else None
1166 )
1167 MONEYBIRD_UNKNOWN_PAYER_CONTACT_ID: int | None = (
1168 int(os.environ.get("MONEYBIRD_UNKNOWN_PAYER_CONTACT_ID"))
1169 if os.environ.get("MONEYBIRD_UNKNOWN_PAYER_CONTACT_ID")
1170 else None
1171 )
1172 MONEYBIRD_CONTRIBUTION_LEDGER_ID: int | None = (
1173 int(os.environ.get("MONEYBIRD_CONTRIBUTION_LEDGER_ID"))
1174 if os.environ.get("MONEYBIRD_CONTRIBUTION_LEDGER_ID")
1175 else None
1176 )
1177
1178 MONEYBIRD_TPAY_FINANCIAL_ACCOUNT_ID: int | None = (
1179 int(os.environ.get("MONEYBIRD_TPAY_FINANCIAL_ACCOUNT_ID"))
1180 if os.environ.get("MONEYBIRD_TPAY_FINANCIAL_ACCOUNT_ID")
1181 else None
1182 )
1183 MONEYBIRD_CASH_FINANCIAL_ACCOUNT_ID: int | None = (
1184 int(os.environ.get("MONEYBIRD_CASH_FINANCIAL_ACCOUNT_ID"))
1185 if os.environ.get("MONEYBIRD_CASH_FINANCIAL_ACCOUNT_ID")
1186 else None
1187 )
1188 MONEYBIRD_CARD_FINANCIAL_ACCOUNT_ID: int | None = (
1189 int(os.environ.get("MONEYBIRD_CARD_FINANCIAL_ACCOUNT_ID"))
1190 if os.environ.get("MONEYBIRD_CARD_FINANCIAL_ACCOUNT_ID")
1191 else None
1192 )
1193
1194 MONEYBIRD_ZERO_TAX_RATE_ID: int | None = (
1195 int(os.environ.get("MONEYBIRD_ZERO_TAX_RATE_ID"))
1196 if os.environ.get("MONEYBIRD_ZERO_TAX_RATE_ID")
1197 else None
1198 )
1199
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
diff --git a/website/thaliawebsite/settings.py b/website/thaliawebsite/settings.py
--- a/website/thaliawebsite/settings.py
+++ b/website/thaliawebsite/settings.py
@@ -555,9 +555,7 @@
dsn=os.environ.get("SENTRY_DSN"),
integrations=[
DjangoIntegration(),
- CeleryIntegration(
- monitor_beat_tasks=True,
- ),
+ CeleryIntegration(),
],
release=SOURCE_COMMIT,
send_default_pii=True,
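
For readability, here is a minimal sketch (not taken verbatim from the repository) of what the Sentry block in `website/thaliawebsite/settings.py` looks like once the patch above is applied: `CeleryIntegration()` is constructed without `monitor_beat_tasks=True`, which is the flag that registers Celery beat tasks as Sentry cron monitors. The `release` and `environment` arguments from the real settings module are omitted here to keep the sketch self-contained.

```python
import os

import sentry_sdk
from sentry_sdk.integrations.celery import CeleryIntegration
from sentry_sdk.integrations.django import DjangoIntegration

if "SENTRY_DSN" in os.environ:
    sentry_sdk.init(
        dsn=os.environ.get("SENTRY_DSN"),
        integrations=[
            DjangoIntegration(),
            # No monitor_beat_tasks=True here, so Sentry stops creating
            # cron monitors for the periodic tasks in CELERY_BEAT_SCHEDULE.
            CeleryIntegration(),
        ],
        send_default_pii=True,
        traces_sample_rate=float(os.environ.get("SENTRY_TRACES_SAMPLE_RATE", 0.2)),
        profiles_sample_rate=float(os.environ.get("SENTRY_PROFILES_SAMPLE_RATE", 0.0)),
    )
```

As the issue notes, monitors that were already created on Sentry while the flag was enabled may still need to be cleaned up by hand; that part is not covered by the patch.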
| {"golden_diff": "diff --git a/website/thaliawebsite/settings.py b/website/thaliawebsite/settings.py\n--- a/website/thaliawebsite/settings.py\n+++ b/website/thaliawebsite/settings.py\n@@ -555,9 +555,7 @@\n dsn=os.environ.get(\"SENTRY_DSN\"),\n integrations=[\n DjangoIntegration(),\n- CeleryIntegration(\n- monitor_beat_tasks=True,\n- ),\n+ CeleryIntegration(),\n ],\n release=SOURCE_COMMIT,\n send_default_pii=True,\n", "issue": "Disable Sentry cron monitoring\n\r\n\r\n### What?\r\nWe need to disable the sentry 'cron' monitoring of periodic tasks.\r\n\r\n### Why?\r\nSentry is making cron monitors paid after the beta.\r\n\r\n### How?\r\nI think it's a single line in settings.py, and maybe some cleanup on sentry to remove the existing monitors.\r\n\n", "before_files": [{"content": "\"\"\"Django settings for concrexit.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/dev/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/dev/ref/settings/\n\"\"\"\n\nimport base64\nimport json\nimport logging\nimport os\n\nfrom django.core.management.commands import makemessages\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom celery.schedules import crontab\n\nlogger = logging.getLogger(__name__)\n\n# Sentinel objects that are distinct from None\n_NOT_SET = object()\n\n\nclass Misconfiguration(Exception):\n \"\"\"Exception that is raised when something is misconfigured in this file.\"\"\"\n\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.abspath(\n os.path.join(os.path.dirname(os.path.abspath(__file__)), \"\", \"..\")\n)\n\nSOURCE_COMMIT = os.environ.get(\"SOURCE_COMMIT\", \"unknown\")\n\n# Many of the settings are dependent on the environment we're running in.\n# The default environment is development, so the programmer doesn't have to set anything\nDJANGO_ENV = os.environ.get(\"DJANGO_ENV\", \"development\")\n_environments = [\"production\", \"staging\", \"testing\", \"development\"]\nif DJANGO_ENV not in _environments:\n raise Misconfiguration(f\"Set DJANGO_ENV to one of: {', '.join(_environments)}\")\n\n\ndef _set_django_env(env):\n \"\"\"Set the DJANGO_ENV variable.\n\n This is a helper function for the doctests below because doctests cannot set global variables.\n \"\"\"\n global DJANGO_ENV # noqa: PLW0603\n DJANGO_ENV = env\n\n\ndef setting(*, development, production, staging=_NOT_SET, testing=_NOT_SET):\n \"\"\"Generate a setting depending on the DJANGO_ENV and the arguments.\n\n This function is meant for static settings that depend on the DJANGO_ENV. 
If the\n staging or testing arguments are left to their defaults, they will fall back to\n the production and development settings respectively.\n\n Example:\n >>> _set_django_env(\"production\")\n >>> SEND_MESSAGES_WITH = setting(development=\"console\", production=\"mail\", staging=\"DM\")\n >>> SEND_MESSAGES_WITH\n 'mail'\n >>> _set_django_env(\"testing\")\n >>> setting(development=\"console\", production=\"mail\", staging=\"DM\")\n 'console'\n \"\"\"\n if DJANGO_ENV == \"development\" or (DJANGO_ENV == \"testing\" and testing is _NOT_SET):\n return development\n if DJANGO_ENV == \"testing\":\n return testing\n if DJANGO_ENV == \"production\" or (DJANGO_ENV == \"staging\" and staging is _NOT_SET):\n return production\n if DJANGO_ENV == \"staging\":\n return staging\n raise Misconfiguration(f\"Set DJANGO_ENV to one of: {', '.join(_environments)}\")\n\n\ndef from_env(\n name, *, production=_NOT_SET, staging=_NOT_SET, testing=_NOT_SET, development=None\n):\n \"\"\"Generate a setting that's overridable by the process environment.\n\n This will raise an exception if a default is not set for production. Because we use\n the sentinel value _NOT_SET, you can still set a default of None for production if wanted.\n\n As with :func:`setting` the staging and testing values will fall back to production\n and development. So if an environment variable is required in production, and no default\n is set for staging, staging will also raise the exception.\n\n Example:\n >>> _set_django_env(\"production\")\n >>> # A secret key should always be set in production via the environment\n >>> from_env(\"MEDIA_ROOT\", development=\"/media/root\")\n Traceback (most recent call last):\n ...\n thaliawebsite.settings.Misconfiguration: Environment variable `MEDIA_ROOT` must be supplied in production\n >>> _set_django_env(\"development\")\n >>> from_env(\"MEDIA_ROOT\", development=\"/media/root\")\n '/media/root'\n \"\"\"\n try:\n return os.environ[name]\n except KeyError:\n if DJANGO_ENV == \"production\" or (\n DJANGO_ENV == \"staging\" and staging is _NOT_SET\n ):\n if production is _NOT_SET and os.environ.get(\"MANAGE_PY\", \"0\") == \"0\":\n raise Misconfiguration(\n f\"Environment variable `{name}` must be supplied in production\"\n )\n if production is _NOT_SET and os.environ.get(\"MANAGE_PY\", \"0\") == \"1\":\n logger.warning(\n \"Ignoring unset %s because we're running a management command\", name\n )\n return development\n return production\n if DJANGO_ENV == \"staging\":\n return staging\n if DJANGO_ENV == \"development\" or (\n DJANGO_ENV == \"testing\" and testing is _NOT_SET\n ):\n return development\n if DJANGO_ENV == \"testing\":\n return testing\n raise Misconfiguration(f\"DJANGO_ENV set to unsupported value: {DJANGO_ENV}\")\n\n\n###############################################################################\n# Site settings\n\n# We use this setting to generate the email addresses, and for BASE_URL below.\nSITE_DOMAIN = from_env(\"SITE_DOMAIN\", development=\"localhost\", production=\"thalia.nu\")\n\n# Used to generate some absolute urls when we don't have access to a request.\nBASE_URL = from_env(\n \"BASE_URL\",\n development=f\"http://{SITE_DOMAIN}:8000\",\n production=f\"https://{SITE_DOMAIN}\",\n)\n\n# Default FROM email\nDEFAULT_FROM_EMAIL = f\"{os.environ.get('ADDRESS_NOREPLY', 'noreply')}@{SITE_DOMAIN}\"\n# https://docs.djangoproject.com/en/dev/ref/settings/#server-email\nSERVER_EMAIL = DEFAULT_FROM_EMAIL\nNEWSLETTER_FROM_ADDRESS = (\n f\"{os.environ.get('ADDRESS_NEWSLETTER', 
'newsletter')}@{SITE_DOMAIN}\"\n)\nBOARD_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_CONTACT', 'info')}@{SITE_DOMAIN}\"\n)\nPARTNER_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_COLLABORATION', 'samenwerking')}@{SITE_DOMAIN}\"\n)\nEDUCATION_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_EDUCATION', 'educacie')}@{SITE_DOMAIN}\"\n)\nPROMO_REQUEST_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_PROMOREQUESTS', 'promocie')}@{SITE_DOMAIN}\"\n)\nTREASURER_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_TREASURER', 'treasurer')}@{SITE_DOMAIN}\"\n)\n\n\n# How many days to keep reference faces after a user marks them for deletion\nFACEDETECTION_REFERENCE_FACE_STORAGE_PERIOD_AFTER_DELETE_DAYS = 180\n\n# How many reference faces a user can have at the same time\nFACEDETECTION_MAX_NUM_REFERENCE_FACES = 5\n\n# ARN of the concrexit-facedetection-lambda function.\n# See https://github.com/svthalia/concrexit-facedetection-lambda.\nFACEDETECTION_LAMBDA_ARN = os.environ.get(\"FACEDETECTION_LAMBDA_ARN\")\n\nFACEDETECTION_LAMBDA_BATCH_SIZE = int(\n os.environ.get(\"FACEDETECTION_LAMBDA_BATCH_SIZE\", 20)\n)\n\n# The scheme the app uses for oauth redirection\nAPP_OAUTH_SCHEME = os.environ.get(\"APP_OAUTH_SCHEME\", \"nu.thalia\")\n\n# Membership prices\nMEMBERSHIP_PRICES = {\n \"year\": int(os.environ.get(\"MEMBERSHIP_PRICE_YEAR_CENTS\", \"750\")) / 100,\n \"study\": int(os.environ.get(\"MEMBERSHIP_PRICE_STUDY_CENTS\", \"3000\")) / 100,\n}\n\n# Window during which a payment can be deleted again\nPAYMENT_CHANGE_WINDOW = int(os.environ.get(\"PAYMENTS_CHANGE_WINDOW\", 10 * 60))\n\n# Payments creditor identifier\nSEPA_CREDITOR_ID = os.environ.get(\"SEPA_CREDITOR_ID\", \"<unknown>\")\n\n# Payment batch withdrawal date default offset after creation date\nPAYMENT_BATCH_DEFAULT_WITHDRAWAL_DATE_OFFSET = timezone.timedelta(days=14)\n\nTHALIA_PAY_ENABLED_PAYMENT_METHOD = (\n from_env(\"THALIA_PAY_ENABLED\", development=\"1\", staging=\"1\", production=\"0\") == \"1\"\n)\nTHALIA_PAY_FOR_NEW_MEMBERS = os.environ.get(\"THALIA_PAY_FOR_NEW_MEMBERS\", \"1\") == \"1\"\n\n###############################################################################\n# Django settings\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#secret-key\nSECRET_KEY = from_env(\n \"SECRET_KEY\", development=\"#o-0d1q5&^&06tn@8pr1f(n3$crafd++^%sacao7hj*ea@c)^t\"\n)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts\nALLOWED_HOSTS = [\n SITE_DOMAIN,\n *from_env(\"ALLOWED_HOSTS\", development=\"*\", production=\"\").split(\",\"),\n]\n\nDJANGO_DRF_FILEPOND_UPLOAD_TMP = from_env(\n \"DJANGO_DRF_FILEPOND_UPLOAD_TMP\",\n development=os.path.join(BASE_DIR, \"filepond-temp-uploads\"),\n)\nDJANGO_DRF_FILEPOND_FILE_STORE_PATH = from_env(\n \"DJANGO_DRF_FILEPOND_FILE_STORE_PATH\",\n development=os.path.join(BASE_DIR, \"filepond-uploaded\"),\n)\nDJANGO_DRF_FILEPOND_ALLOW_EXTERNAL_UPLOAD_DIR = True\nDJANGO_DRF_FILEPOND_PERMISSION_CLASSES = {\n \"GET_FETCH\": [\n \"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n \"GET_LOAD\": [\n \"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n \"POST_PROCESS\": [\n \"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n \"GET_RESTORE\": [\n \"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n \"DELETE_REVERT\": [\n \"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n \"PATCH_PATCH\": [\n 
\"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n}\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#static-root\nSTATIC_ROOT = from_env(\"STATIC_ROOT\", development=os.path.join(BASE_DIR, \"static\"))\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#media-root\nMEDIA_ROOT = from_env(\"MEDIA_ROOT\", development=os.path.join(BASE_DIR, \"media\"))\n\n# https://github.com/johnsensible/django-sendfile#nginx-backend\nSENDFILE_URL = \"/media/sendfile/\"\nSENDFILE_ROOT = MEDIA_ROOT\nSENDFILE_BACKEND = setting(\n development=\"django_sendfile.backends.development\",\n production=\"django_sendfile.backends.nginx\",\n)\n\nPRIVATE_MEDIA_LOCATION = \"\"\nPUBLIC_MEDIA_LOCATION = \"public\"\nSTATICFILES_LOCATION = \"static\"\n\nMEDIA_URL = \"/media/private/\"\n\nAWS_ACCESS_KEY_ID = from_env(\"AWS_ACCESS_KEY_ID\", production=None)\nAWS_SECRET_ACCESS_KEY = from_env(\"AWS_SECRET_ACCESS_KEY\", production=None)\nAWS_STORAGE_BUCKET_NAME = from_env(\"AWS_STORAGE_BUCKET_NAME\", production=None)\nAWS_DEFAULT_ACL = \"private\"\nAWS_S3_OBJECT_PARAMETERS = {\"CacheControl\": \"max-age=86400\"}\nAWS_S3_SIGNATURE_VERSION = \"s3v4\"\n\nif AWS_STORAGE_BUCKET_NAME is not None:\n AWS_CLOUDFRONT_KEY = base64.urlsafe_b64decode(\n os.environ.get(\"AWS_CLOUDFRONT_KEY\", None)\n ).decode(\"utf-8\")\n AWS_CLOUDFRONT_KEY_ID = os.environ.get(\"AWS_CLOUDFRONT_KEY_ID\", None)\n AWS_S3_CUSTOM_DOMAIN = os.environ.get(\"AWS_CLOUDFRONT_DOMAIN\", None)\n\n _STATICFILES_STORAGE = \"thaliawebsite.storage.backend.StaticS3Storage\"\n STATIC_URL = f\"https://{AWS_S3_CUSTOM_DOMAIN}/static/\"\n\n _DEFAULT_FILE_STORAGE = \"thaliawebsite.storage.backend.PrivateS3Storage\"\n\n _PUBLIC_FILE_STORAGE = \"thaliawebsite.storage.backend.PublicS3Storage\"\n PUBLIC_MEDIA_URL = f\"https://{AWS_S3_CUSTOM_DOMAIN}/\"\nelse:\n _STATICFILES_STORAGE = setting(\n development=\"django.contrib.staticfiles.storage.StaticFilesStorage\",\n production=\"django.contrib.staticfiles.storage.ManifestStaticFilesStorage\",\n )\n STATIC_URL = \"/static/\"\n\n _DEFAULT_FILE_STORAGE = \"thaliawebsite.storage.backend.PrivateFileSystemStorage\"\n\n _PUBLIC_FILE_STORAGE = \"thaliawebsite.storage.backend.PublicFileSystemStorage\"\n PUBLIC_MEDIA_URL = \"/media/public/\"\n\nSTORAGES = {\n \"default\": {\"BACKEND\": _DEFAULT_FILE_STORAGE},\n \"public\": {\"BACKEND\": _PUBLIC_FILE_STORAGE},\n \"staticfiles\": {\"BACKEND\": _STATICFILES_STORAGE},\n}\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#conn-max-age\nCONN_MAX_AGE = int(from_env(\"CONN_MAX_AGE\", development=\"0\", production=\"60\"))\n\n# Useful for managing members\n# https://docs.djangoproject.com/en/dev/ref/settings/#data-upload-max-number-fields\nDATA_UPLOAD_MAX_NUMBER_FIELDS = os.environ.get(\"DATA_UPLOAD_MAX_NUMBER_FIELDS\", 10000)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#debug\nDEBUG = bool(\n from_env(\"DJANGO_DEBUG\", development=True, production=False, testing=False)\n)\n# https://docs.djangoproject.com/en/dev/ref/settings/#internal-ips\nINTERNAL_IPS = [\"127.0.0.1\", \"172.17.0.1\"] if DEBUG else []\n\n\ndef show_toolbar(request):\n return DEBUG\n\n\nDEBUG_TOOLBAR_CONFIG = {\"SHOW_TOOLBAR_CALLBACK\": show_toolbar}\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#session-cookie-secure\nSESSION_COOKIE_SECURE = setting(development=False, production=True)\n# https://docs.djangoproject.com/en/dev/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = setting(development=False, production=True)\n\n# 
https://docs.djangoproject.com/en/dev/ref/settings/#std-setting-SECURE_PROXY_SSL_HEADER\nSECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#default-auto-field\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\n\n###############################################################################\n# Celery settings\n# https://docs.celeryq.dev/en/stable/userguide/configuration.html#configuration\n\n# Set CELERY_BROKER_URL=\"redis://127.0.0.1:6379\" to use a local redis server in development.\nCELERY_BROKER_URL = from_env(\"CELERY_BROKER_URL\")\n\n# Always execute tasks synchronously when no broker is configured in development and testing.\n# See https://docs.celeryq.dev/en/stable/userguide/configuration.html#std-setting-task_always_eager\nCELERY_TASK_ALWAYS_EAGER = CELERY_BROKER_URL is None\n\n\n# See https://docs.celeryq.dev/en/stable/getting-started/backends-and-brokers/redis.html#caveats\nCELERY_BROKER_TRANSPORT_OPTIONS = {\"visibility_timeout\": 18000}\n\n# https://docs.celeryq.dev/en/stable/userguide/periodic-tasks.html\nCELERY_BEAT_SCHEDULE = {\n \"synchronize_mailinglists\": {\n \"task\": \"mailinglists.tasks.sync_mail\",\n \"schedule\": crontab(minute=30),\n },\n \"synchronize_moneybird\": {\n \"task\": \"moneybirdsynchronization.tasks.synchronize_moneybird\",\n \"schedule\": crontab(minute=30, hour=1),\n },\n \"sendpromooverviewweekly\": {\n \"task\": \"promotion.tasks.promo_update_weekly\",\n \"schedule\": crontab(minute=0, hour=8, day_of_week=1),\n },\n \"sendpromoooverviewdaily\": {\n \"task\": \"promotion.tasks.promo_update_daily\",\n \"schedule\": crontab(minute=0, hour=8),\n },\n \"facedetectlambda\": {\n \"task\": \"facedetection.tasks.trigger_facedetect_lambda\",\n \"schedule\": crontab(minute=0, hour=1),\n },\n \"revokeoldmandates\": {\n \"task\": \"payments.tasks.revoke_mandates\",\n \"schedule\": crontab(minute=0, hour=1),\n },\n \"membershipannouncement\": {\n \"task\": \"members.tasks.membership_announcement\",\n \"schedule\": crontab(minute=0, hour=6, day_of_month=31, month_of_year=8),\n },\n \"inforequest\": {\n \"task\": \"members.tasks.info_request\",\n \"schedule\": crontab(minute=0, hour=6, day_of_month=15, month_of_year=10),\n },\n \"expirationannouncement\": {\n \"task\": \"members.tasks.expiration_announcement\",\n \"schedule\": crontab(minute=0, hour=6, day_of_month=8, month_of_year=8),\n },\n \"minimiseregistration\": {\n \"task\": \"registrations.tasks.minimise_registrations\",\n \"schedule\": crontab(minute=0, hour=3, day_of_month=1),\n },\n \"sendscheduledmessages\": {\n \"task\": \"pushnotifications.tasks.send_scheduled_messages\",\n \"schedule\": crontab(minute=\"*/2\"),\n \"args\": (120,),\n },\n \"revokestaff\": {\n \"task\": \"activemembers.tasks.revoke_staff\",\n \"schedule\": crontab(minute=30, hour=3),\n },\n \"deletegsuiteusers\": {\n \"task\": \"activemembers.tasks.delete_gsuite_users\",\n \"schedule\": crontab(minute=30, hour=3, day_of_week=1),\n },\n \"sendplannednewsletters\": {\n \"task\": \"newsletters.tasks.send_planned_newsletters\",\n \"schedule\": crontab(minute=\"*/5\"),\n },\n \"dataminimisation\": {\n \"task\": \"thaliawebsite.tasks.data_minimisation\",\n \"schedule\": crontab(minute=0, hour=3),\n },\n \"cleanup\": {\n \"task\": \"thaliawebsite.tasks.clean_up\",\n \"schedule\": crontab(minute=0, hour=23),\n },\n \"cleartokens\": {\n \"task\": \"thaliawebsite.tasks.clear_tokens\",\n \"schedule\": crontab(minute=30, hour=3),\n },\n \"sendpromoupdateoverviewdaily\": 
{\n \"task\": \"promotion.tasks.promo_update_overview_daily\",\n \"schedule\": crontab(minute=0, hour=8),\n },\n}\n\n###############################################################################\n# Email settings\n# https://docs.djangoproject.com/en/dev/ref/settings/#email-backend\n_EMAIL_BACKEND = from_env(\"EMAIL_BACKEND\", development=\"console\", production=\"smtp\")\nif _EMAIL_BACKEND == \"console\":\n EMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n\nif _EMAIL_BACKEND == \"smtp\":\n EMAIL_BACKEND = \"django.core.mail.backends.smtp.EmailBackend\"\n EMAIL_HOST = os.environ.get(\"DJANGO_EMAIL_HOST\")\n EMAIL_PORT = os.environ.get(\"DJANGO_EMAIL_PORT\", 25)\n EMAIL_HOST_USER = os.environ.get(\"DJANGO_EMAIL_HOST_USER\", \"\")\n EMAIL_HOST_PASSWORD = os.environ.get(\"DJANGO_EMAIL_HOST_PASSWORD\", \"\")\n EMAIL_USE_TLS = os.environ.get(\"DJANGO_EMAIL_USE_TLS\", \"1\") == \"1\"\n EMAIL_TIMEOUT = int(os.environ.get(\"EMAIL_TIMEOUT\", \"10\"))\n if EMAIL_HOST is None:\n logger.warning(\n \"The email host is set to the default of localhost, are you sure you don't want to set EMAIL_HOST?\"\n )\n EMAIL_HOST = \"localhost\"\n\n###############################################################################\n# Database settings\n# https://docs.djangoproject.com/en/dev/ref/settings/#databases\nDATABASE_ENGINE = from_env(\n \"DATABASE_ENGINE\", development=\"sqlite\", production=\"postgresql\", testing=None\n)\nif DATABASE_ENGINE == \"sqlite\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": os.path.join(BASE_DIR, \"db.sqlite3\"),\n }\n }\n\nif DATABASE_ENGINE == \"postgresql\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"USER\": os.environ.get(\"POSTGRES_USER\", \"concrexit\"),\n \"PASSWORD\": os.environ.get(\"POSTGRES_PASSWORD\", None),\n \"NAME\": os.environ.get(\"POSTGRES_DB\", \"\"),\n \"HOST\": os.environ.get(\"POSTGRES_HOST\", \"\"),\n \"PORT\": os.environ.get(\"POSTGRES_PORT\", \"5432\"),\n \"CONN_MAX_AGE\": 300,\n }\n }\n\nif DJANGO_ENV == \"testing\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": \"thalia\",\n \"USER\": \"postgres\",\n \"PASSWORD\": \"postgres\",\n \"HOST\": \"127.0.0.1\",\n \"PORT\": 5432,\n },\n }\n\n###############################################################################\n# Firebase config\nFIREBASE_CREDENTIALS = os.environ.get(\"FIREBASE_CREDENTIALS\", \"{}\")\nif FIREBASE_CREDENTIALS != \"{}\":\n FIREBASE_CREDENTIALS = base64.urlsafe_b64decode(FIREBASE_CREDENTIALS)\nFIREBASE_CREDENTIALS = json.loads(FIREBASE_CREDENTIALS)\n\nif FIREBASE_CREDENTIALS != {}:\n from firebase_admin import credentials, initialize_app\n\n try:\n initialize_app(credential=credentials.Certificate(FIREBASE_CREDENTIALS))\n except ValueError:\n logger.error(\"Firebase application failed to initialise\")\n\n###############################################################################\n# GSuite config\nGSUITE_ADMIN_SCOPES = [\n \"https://www.googleapis.com/auth/admin.directory.group\",\n \"https://www.googleapis.com/auth/admin.directory.user\",\n \"https://www.googleapis.com/auth/apps.groups.settings\",\n]\n\nGSUITE_ADMIN_CREDENTIALS = os.environ.get(\"GSUITE_ADMIN_CREDENTIALS\", \"{}\")\nif GSUITE_ADMIN_CREDENTIALS != \"{}\":\n GSUITE_ADMIN_CREDENTIALS = base64.urlsafe_b64decode(GSUITE_ADMIN_CREDENTIALS)\nGSUITE_ADMIN_CREDENTIALS = json.loads(GSUITE_ADMIN_CREDENTIALS)\nGSUITE_ADMIN_USER = os.environ.get(\"GSUITE_ADMIN_USER\", \"[email 
protected]\")\nGSUITE_DOMAIN = from_env(\n \"GSUITE_DOMAIN\", development=\"thalia.localhost\", production=\"thalia.nu\"\n)\nGSUITE_MEMBERS_DOMAIN = from_env(\n \"GSUITE_MEMBERS_DOMAIN\",\n development=\"members.thalia.localhost\",\n production=\"members.thalia.nu\",\n)\nGSUITE_MEMBERS_AUTOSYNC = os.environ.get(\"GSUITE_MEMBERS_AUTOSYNC\", \"0\") == \"1\"\n\nif GSUITE_ADMIN_CREDENTIALS != {}:\n from google.oauth2 import service_account\n\n GSUITE_ADMIN_CREDENTIALS = service_account.Credentials.from_service_account_info(\n GSUITE_ADMIN_CREDENTIALS, scopes=GSUITE_ADMIN_SCOPES\n ).with_subject(GSUITE_ADMIN_USER)\n\nEMAIL_DOMAIN_BLACKLIST = [GSUITE_MEMBERS_DOMAIN]\n\n###############################################################################\n# Google maps API key and secrets\nGOOGLE_MAPS_API_KEY = os.environ.get(\"GOOGLE_MAPS_API_KEY\", \"\")\nGOOGLE_MAPS_API_SECRET = os.environ.get(\"GOOGLE_MAPS_API_SECRET\", \"\")\nGOOGLE_PLACES_API_KEY = os.environ.get(\"GOOGLE_PLACES_API_KEY\", \"\")\n\n###############################################################################\n# Sentry setup\nif \"SENTRY_DSN\" in os.environ:\n import sentry_sdk\n from sentry_sdk.integrations.celery import CeleryIntegration\n from sentry_sdk.integrations.django import DjangoIntegration\n\n sentry_sdk.init(\n dsn=os.environ.get(\"SENTRY_DSN\"),\n integrations=[\n DjangoIntegration(),\n CeleryIntegration(\n monitor_beat_tasks=True,\n ),\n ],\n release=SOURCE_COMMIT,\n send_default_pii=True,\n environment=DJANGO_ENV,\n traces_sample_rate=float(os.environ.get(\"SENTRY_TRACES_SAMPLE_RATE\", 0.2)),\n profiles_sample_rate=float(os.environ.get(\"SENTRY_PROFILES_SAMPLE_RATE\", 0.0)),\n )\n\n\n###############################################################################\n# (Mostly) static settings\nINSTALLED_APPS = [\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.sitemaps\",\n # Dependencies\n \"django_otp\",\n \"django_otp.plugins.otp_static\",\n \"django_otp.plugins.otp_totp\",\n \"formtools\",\n \"two_factor\",\n \"oauth2_provider\",\n \"corsheaders\",\n \"django_bootstrap5\",\n \"tinymce\",\n \"rest_framework\",\n \"rest_framework.authtoken\",\n \"debug_toolbar\",\n \"sass_processor\",\n \"admin_auto_filters\",\n \"django_drf_filepond\",\n \"django_filepond_widget\",\n \"thumbnails\",\n # Our apps\n # Directly link to the app config when applicable as recommended\n # by the docs: https://docs.djangoproject.com/en/2.0/ref/applications/\n \"thaliawebsite.apps.ThaliaWebsiteConfig\", # include for admin settings\n # Load django.contrib.admin after thaliawebsite so the admin page gets modified\n \"django.contrib.admin\",\n # Our apps ordered such that templates in the first\n # apps can override those used by the later apps.\n \"pushnotifications.apps.PushNotificationsConfig\",\n \"facedetection.apps.FaceDetectionConfig\",\n \"announcements.apps.AnnouncementsConfig\",\n \"promotion.apps.PromotionConfig\",\n \"members.apps.MembersConfig\",\n \"documents.apps.DocumentsConfig\",\n \"activemembers.apps.ActiveMembersConfig\",\n \"photos.apps.PhotosConfig\",\n \"utils\",\n \"mailinglists.apps.MailinglistsConfig\",\n \"merchandise.apps.MerchandiseConfig\",\n \"thabloid.apps.ThabloidConfig\",\n \"partners.apps.PartnersConfig\",\n \"events.apps.EventsConfig\",\n \"pizzas.apps.PizzasConfig\",\n \"newsletters.apps.NewslettersConfig\",\n \"education.apps.EducationConfig\",\n 
\"registrations.apps.RegistrationsConfig\",\n \"payments.apps.PaymentsConfig\",\n \"singlepages.apps.SinglepagesConfig\",\n \"shortlinks.apps.ShortLinkConfig\",\n \"sales.apps.SalesConfig\",\n \"moneybirdsynchronization.apps.MoneybirdsynchronizationConfig\",\n]\n\nMIDDLEWARE = [\n \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.http.ConditionalGetMiddleware\",\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django_otp.middleware.OTPMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"thaliawebsite.middleware.RealIPMiddleware\",\n \"django_ratelimit.middleware.RatelimitMiddleware\",\n \"members.middleware.MemberMiddleware\",\n \"announcements.middleware.AnnouncementMiddleware\",\n]\n\nif DJANGO_ENV in (\"development\", \"testing\"):\n INSTALLED_APPS += [\n \"django_template_check\",\n \"django_extensions\",\n ]\n\nif DJANGO_ENV == \"testing\":\n for x in (\n \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n \"django.middleware.http.ConditionalGetMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n ):\n MIDDLEWARE.remove(x)\n for x in (\"debug_toolbar\",):\n INSTALLED_APPS.remove(x)\n\nROOT_URLCONF = \"thaliawebsite.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"templates\")],\n \"APP_DIRS\": setting(development=True, production=False),\n \"OPTIONS\": {\n \"context_processors\": [\n \"thaliawebsite.context_processors.source_commit\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.template.context_processors.media\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"announcements.context_processors.announcements\",\n \"thaliawebsite.context_processors.aprilfools\",\n \"thaliawebsite.context_processors.lustrum_styling\",\n ],\n },\n },\n]\n\nif DJANGO_ENV in [\"production\", \"staging\"]:\n # Use caching template loader\n TEMPLATES[0][\"OPTIONS\"][\"loaders\"] = [\n (\n \"django.template.loaders.cached.Loader\",\n [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n )\n ]\n\n# Default logging: https://github.com/django/django/blob/master/django/utils/log.py\n# We disable mailing the admin.\n# Server errors will be sent to Sentry via the config below this.\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"filters\": {\n \"require_debug_false\": {\n \"()\": \"django.utils.log.RequireDebugFalse\",\n },\n \"require_debug_true\": {\n \"()\": \"django.utils.log.RequireDebugTrue\",\n },\n },\n \"formatters\": {\n \"django.server\": {\n \"()\": \"django.utils.log.ServerFormatter\",\n \"format\": \"[{server_time}] {message}\",\n \"style\": \"{\",\n }\n },\n \"handlers\": {\n \"console\": {\n \"level\": \"INFO\",\n \"filters\": [\"require_debug_true\"],\n \"class\": \"logging.StreamHandler\",\n },\n \"django.server\": {\n \"level\": \"INFO\",\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"django.server\",\n },\n },\n \"loggers\": {\n \"django\": {\n \"handlers\": [\"console\"],\n \"level\": \"INFO\",\n },\n \"django.server\": {\n \"handlers\": 
[\"django.server\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n },\n}\n\nREDIS_CACHE_PORT = int(\n from_env(\"REDIS_CACHE_PORT\", development=\"6379\", production=\"6379\")\n)\nREDIS_CACHE_HOST = from_env(\"REDIS_CACHE_HOST\")\nREDIS_CACHE_URL = (\n f\"redis://{REDIS_CACHE_HOST}:{REDIS_CACHE_PORT}\" if REDIS_CACHE_HOST else None\n)\n\nCACHES = {\n \"default\": (\n {\n \"BACKEND\": \"django.core.cache.backends.redis.RedisCache\",\n \"LOCATION\": REDIS_CACHE_URL,\n }\n if REDIS_CACHE_URL is not None\n else {\n \"BACKEND\": \"django.core.cache.backends.db.DatabaseCache\",\n \"LOCATION\": \"django_default_db_cache\",\n }\n ),\n}\n\nSESSION_ENGINE = \"django.contrib.sessions.backends.cached_db\"\n\nWSGI_APPLICATION = \"thaliawebsite.wsgi.application\"\n\n# Login pages\nLOGIN_URL = \"two_factor:login\"\nLOGIN_REDIRECT_URL = \"/\"\n\n# Cors configuration\nCORS_ORIGIN_ALLOW_ALL = True\nCORS_URLS_REGEX = r\"^/(?:api/v1|api/v2|user/oauth)/.*\"\n\n# OAuth configuration\nOIDC_RSA_PRIVATE_KEY = from_env(\"OIDC_RSA_PRIVATE_KEY\", testing=None)\nif OIDC_RSA_PRIVATE_KEY is not None:\n OIDC_RSA_PRIVATE_KEY = base64.urlsafe_b64decode(OIDC_RSA_PRIVATE_KEY).decode()\n\nOAUTH2_PROVIDER = {\n \"OIDC_ENABLED\": True,\n \"OIDC_RSA_PRIVATE_KEY\": OIDC_RSA_PRIVATE_KEY,\n \"ALLOWED_REDIRECT_URI_SCHEMES\": setting(\n production=[\"https\", APP_OAUTH_SCHEME],\n staging=[\"http\", \"https\", APP_OAUTH_SCHEME],\n development=[\"http\", \"https\", APP_OAUTH_SCHEME],\n ),\n \"SCOPES\": {\n \"openid\": \"OpenID Connect\",\n \"read\": \"Authenticated read access to the website\",\n \"write\": \"Authenticated write access to the website\",\n \"activemembers:read\": \"Read access to committee, society and board groups\",\n \"announcements:read\": \"Read access to announcements\",\n \"events:read\": \"Read access to events and your event registrations\",\n \"events:register\": \"Write access to the state of your event registrations\",\n \"events:admin\": \"Admin access to the events\",\n \"food:read\": \"Read access to food events\",\n \"food:order\": \"Order access to food events\",\n \"food:admin\": \"Admin access to food events\",\n \"members:read\": \"Read access to the members directory\",\n \"photos:read\": \"Read access to photos\",\n \"profile:read\": \"Read access to your member profile\",\n \"profile:write\": \"Write access to your member profile\",\n \"pushnotifications:read\": \"Read access to push notifications\",\n \"pushnotifications:write\": \"Write access to push notifications\",\n \"partners:read\": \"Read access to partners\",\n \"payments:read\": \"Read access to payments\",\n \"payments:write\": \"Write access to payments\",\n \"payments:admin\": \"Admin access to payments\",\n \"sales:read\": \"Read access to your Point of Sale orders\",\n \"sales:order\": \"Place Point of Sale orders on your behalf\",\n \"sales:admin\": \"Admin access to Point of Sale orders\",\n },\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": (\n \"django.contrib.auth.\"\n \"password_validation.UserAttributeSimilarityValidator\"\n ),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.MinimumLengthValidator\"),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.CommonPasswordValidator\"),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.NumericPasswordValidator\"),\n },\n]\n\nPASSWORD_HASHERS = setting(\n development=(\n \"django.contrib.auth.hashers.PBKDF2PasswordHasher\",\n 
\"django.contrib.auth.hashers.MD5PasswordHasher\",\n ),\n production=(\n \"django.contrib.auth.hashers.Argon2PasswordHasher\",\n \"django.contrib.auth.hashers.PBKDF2PasswordHasher\",\n \"django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher\",\n \"django.contrib.auth.hashers.BCryptSHA256PasswordHasher\",\n \"django.contrib.auth.hashers.BCryptPasswordHasher\",\n ),\n testing=(\"django.contrib.auth.hashers.MD5PasswordHasher\",),\n)\n\nAUTHENTICATION_BACKENDS = [\n \"django.contrib.auth.backends.ModelBackend\",\n \"activemembers.backends.MemberGroupBackend\",\n]\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.SessionAuthentication\",\n \"thaliawebsite.api.authentication.APIv1TokenAuthentication\",\n \"oauth2_provider.contrib.rest_framework.OAuth2Authentication\",\n ),\n \"DEFAULT_PAGINATION_CLASS\": \"thaliawebsite.api.pagination.APIv2LimitOffsetPagination\",\n \"PAGE_SIZE\": 50, # Only for API v2\n \"ALLOWED_VERSIONS\": [\"v1\", \"v2\", \"calendarjs\", \"facedetection\"],\n \"DEFAULT_VERSIONING_CLASS\": \"rest_framework.versioning.NamespaceVersioning\",\n \"DEFAULT_SCHEMA_CLASS\": \"thaliawebsite.api.openapi.OAuthAutoSchema\",\n \"DEFAULT_THROTTLE_CLASSES\": [\n \"thaliawebsite.api.throttling.AnonRateThrottle\",\n \"thaliawebsite.api.throttling.UserRateThrottle\",\n ],\n \"DEFAULT_THROTTLE_RATES\": setting(\n production={\"anon\": \"30/min\", \"user\": \"90/min\"},\n staging={\"anon\": \"30/min\", \"user\": \"90/min\"},\n development={\"anon\": None, \"user\": None},\n ),\n}\n\n# Rate limiting\nRATELIMIT_VIEW = \"thaliawebsite.views.rate_limited_view\"\n\n# Internationalization\n# https://docs.djangoproject.com/en/dev/topics/i18n/\nUSE_I18N = True\nLANGUAGES = [(\"en\", _(\"English\"))]\nLANGUAGE_CODE = \"en\"\nTIME_ZONE = \"Europe/Amsterdam\"\n\n# We provide formatting overrides in the `thaliawebsite.en.formats`, because Django\n# no longer supports running without localization. 
This works to enforce the same format\n# regardless of the user's language/locale, because `en` is the only enabled language.\nFORMAT_MODULE_PATH = [\"thaliawebsite.locale\"]\n\n# Static files\nSTATICFILES_FINDERS = (\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n \"sass_processor.finders.CssFinder\",\n)\n\n# Allow importing .scss files that don't start with an underscore.\n# See https://github.com/jrief/django-sass-processor\nSASS_PROCESSOR_INCLUDE_FILE_PATTERN = r\"^.+\\.scss$\"\n\n# See utils/model/signals.py for explanation\nSUSPEND_SIGNALS = False\n\nTHUMBNAILS_METADATA = (\n {\n \"BACKEND\": \"thumbnails.backends.metadata.RedisBackend\",\n \"host\": REDIS_CACHE_HOST,\n \"port\": REDIS_CACHE_PORT,\n }\n if REDIS_CACHE_HOST\n else {\n \"BACKEND\": \"thumbnails.backends.metadata.DatabaseBackend\",\n }\n)\n\nTHUMBNAILS = {\n \"METADATA\": THUMBNAILS_METADATA,\n \"STORAGE\": {\n # django-thumbnails does not use the Django 4.2 `storages` API yet,\n # but we can simply give it the path as we would with the new API.\n \"BACKEND\": _DEFAULT_FILE_STORAGE,\n },\n \"SIZES\": {\n \"small\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (300, 300),\n \"mode\": \"cover\",\n },\n ],\n },\n \"medium\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (600, 600),\n \"mode\": \"cover\",\n },\n ],\n },\n \"large\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (1200, 900),\n \"mode\": \"cover\",\n },\n ],\n },\n \"photo_medium\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (1200, 900),\n },\n ],\n },\n \"photo_large\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (1920, 1920),\n },\n ],\n },\n \"avatar_large\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (900, 900),\n \"mode\": \"cover\",\n },\n ],\n },\n \"slide_small\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (500, 108),\n \"mode\": \"cover\",\n },\n ],\n },\n \"slide_medium\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (1000, 215),\n \"mode\": \"cover\",\n },\n ],\n },\n \"slide\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (2000, 430),\n \"mode\": \"cover\",\n },\n ],\n },\n \"fit_small\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (300, 300),\n },\n ],\n },\n \"fit_medium\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (600, 600),\n },\n ],\n },\n \"fit_medium_pad\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (600, 250),\n \"mode\": \"pad\",\n },\n ],\n },\n \"fit_small_pad\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (360, 150),\n \"mode\": \"pad\",\n },\n ],\n },\n \"fit_large\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (1200, 900),\n 
},\n ],\n },\n \"source\": {\n \"FORMAT\": \"jpg\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.process_upload\",\n \"size\": (8_000, 8_000),\n \"format\": \"jpg\",\n }\n ],\n },\n \"source_png\": {\n \"FORMAT\": \"png\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.process_upload\",\n \"size\": (8_000, 8_000),\n \"format\": \"png\",\n }\n ],\n },\n },\n}\n\nTHUMBNAIL_SIZES = set(THUMBNAILS[\"SIZES\"].keys())\n\n# TinyMCE config\nTINYMCE_DEFAULT_CONFIG = {\n \"max_height\": 500,\n \"menubar\": False,\n \"plugins\": \"autolink autoresize link image code media paste lists\",\n \"toolbar\": \"h2 h3 | bold italic underline strikethrough | image media | link unlink \"\n \"| bullist numlist | undo redo | code\",\n \"contextmenu\": \"bold italic underline strikethrough | link\",\n \"paste_as_text\": True,\n \"relative_urls\": False,\n \"remove_script_host\": False,\n \"autoresize_bottom_margin\": 50,\n}\nTINYMCE_EXTRA_MEDIA = {\n \"css\": {\n \"all\": [\n \"css/tinymce.css\",\n ],\n },\n}\n\n\nBOOTSTRAP5 = {\"required_css_class\": \"required-field\"}\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#default-exception-reporter-filter\nDEFAULT_EXCEPTION_REPORTER_FILTER = (\n \"utils.exception_filter.ThaliaSafeExceptionReporterFilter\"\n)\n\n# Make sure the locations in django.po files don't include line nrs.\nmakemessages.Command.xgettext_options.append(\"--add-location=file\")\n\nGRAPH_MODELS = {\n \"all_applications\": False,\n \"group_models\": True,\n \"app_labels\": [\n \"events\",\n \"photos\",\n \"merchandise\",\n \"thabloid\",\n \"partners\",\n \"newsletters\",\n \"shortlinks\",\n \"promotion\",\n \"documents\",\n \"pizzas\",\n \"announcements\",\n \"sales\",\n \"registrations\",\n \"mailinglists\",\n \"payments\",\n \"members\",\n \"admin\",\n \"pushnotifications\",\n \"activemembers\",\n \"education\",\n \"auth\",\n ],\n}\n\nMONEYBIRD_START_DATE = os.environ.get(\"MONEYBIRD_START_DATE\", \"2023-09-01\")\n\nMONEYBIRD_ADMINISTRATION_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_ADMINISTRATION_ID\"))\n if os.environ.get(\"MONEYBIRD_ADMINISTRATION_ID\")\n else None\n)\n\nMONEYBIRD_API_KEY = os.environ.get(\"MONEYBIRD_API_KEY\")\n\nMONEYBIRD_SYNC_ENABLED = MONEYBIRD_ADMINISTRATION_ID and MONEYBIRD_API_KEY\n\nMONEYBIRD_MEMBER_PK_CUSTOM_FIELD_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_MEMBER_PK_CUSTOM_FIELD_ID\"))\n if os.environ.get(\"MONEYBIRD_MEMBER_PK_CUSTOM_FIELD_ID\")\n else None\n)\nMONEYBIRD_UNKNOWN_PAYER_CONTACT_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_UNKNOWN_PAYER_CONTACT_ID\"))\n if os.environ.get(\"MONEYBIRD_UNKNOWN_PAYER_CONTACT_ID\")\n else None\n)\nMONEYBIRD_CONTRIBUTION_LEDGER_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_CONTRIBUTION_LEDGER_ID\"))\n if os.environ.get(\"MONEYBIRD_CONTRIBUTION_LEDGER_ID\")\n else None\n)\n\nMONEYBIRD_TPAY_FINANCIAL_ACCOUNT_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_TPAY_FINANCIAL_ACCOUNT_ID\"))\n if os.environ.get(\"MONEYBIRD_TPAY_FINANCIAL_ACCOUNT_ID\")\n else None\n)\nMONEYBIRD_CASH_FINANCIAL_ACCOUNT_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_CASH_FINANCIAL_ACCOUNT_ID\"))\n if os.environ.get(\"MONEYBIRD_CASH_FINANCIAL_ACCOUNT_ID\")\n else None\n)\nMONEYBIRD_CARD_FINANCIAL_ACCOUNT_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_CARD_FINANCIAL_ACCOUNT_ID\"))\n if os.environ.get(\"MONEYBIRD_CARD_FINANCIAL_ACCOUNT_ID\")\n else None\n)\n\nMONEYBIRD_ZERO_TAX_RATE_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_ZERO_TAX_RATE_ID\"))\n if 
os.environ.get(\"MONEYBIRD_ZERO_TAX_RATE_ID\")\n else None\n)\n", "path": "website/thaliawebsite/settings.py"}], "after_files": [{"content": "\"\"\"Django settings for concrexit.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/dev/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/dev/ref/settings/\n\"\"\"\n\nimport base64\nimport json\nimport logging\nimport os\n\nfrom django.core.management.commands import makemessages\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom celery.schedules import crontab\n\nlogger = logging.getLogger(__name__)\n\n# Sentinel objects that are distinct from None\n_NOT_SET = object()\n\n\nclass Misconfiguration(Exception):\n \"\"\"Exception that is raised when something is misconfigured in this file.\"\"\"\n\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.abspath(\n os.path.join(os.path.dirname(os.path.abspath(__file__)), \"\", \"..\")\n)\n\nSOURCE_COMMIT = os.environ.get(\"SOURCE_COMMIT\", \"unknown\")\n\n# Many of the settings are dependent on the environment we're running in.\n# The default environment is development, so the programmer doesn't have to set anything\nDJANGO_ENV = os.environ.get(\"DJANGO_ENV\", \"development\")\n_environments = [\"production\", \"staging\", \"testing\", \"development\"]\nif DJANGO_ENV not in _environments:\n raise Misconfiguration(f\"Set DJANGO_ENV to one of: {', '.join(_environments)}\")\n\n\ndef _set_django_env(env):\n \"\"\"Set the DJANGO_ENV variable.\n\n This is a helper function for the doctests below because doctests cannot set global variables.\n \"\"\"\n global DJANGO_ENV # noqa: PLW0603\n DJANGO_ENV = env\n\n\ndef setting(*, development, production, staging=_NOT_SET, testing=_NOT_SET):\n \"\"\"Generate a setting depending on the DJANGO_ENV and the arguments.\n\n This function is meant for static settings that depend on the DJANGO_ENV. If the\n staging or testing arguments are left to their defaults, they will fall back to\n the production and development settings respectively.\n\n Example:\n >>> _set_django_env(\"production\")\n >>> SEND_MESSAGES_WITH = setting(development=\"console\", production=\"mail\", staging=\"DM\")\n >>> SEND_MESSAGES_WITH\n 'mail'\n >>> _set_django_env(\"testing\")\n >>> setting(development=\"console\", production=\"mail\", staging=\"DM\")\n 'console'\n \"\"\"\n if DJANGO_ENV == \"development\" or (DJANGO_ENV == \"testing\" and testing is _NOT_SET):\n return development\n if DJANGO_ENV == \"testing\":\n return testing\n if DJANGO_ENV == \"production\" or (DJANGO_ENV == \"staging\" and staging is _NOT_SET):\n return production\n if DJANGO_ENV == \"staging\":\n return staging\n raise Misconfiguration(f\"Set DJANGO_ENV to one of: {', '.join(_environments)}\")\n\n\ndef from_env(\n name, *, production=_NOT_SET, staging=_NOT_SET, testing=_NOT_SET, development=None\n):\n \"\"\"Generate a setting that's overridable by the process environment.\n\n This will raise an exception if a default is not set for production. Because we use\n the sentinel value _NOT_SET, you can still set a default of None for production if wanted.\n\n As with :func:`setting` the staging and testing values will fall back to production\n and development. 
So if an environment variable is required in production, and no default\n is set for staging, staging will also raise the exception.\n\n Example:\n >>> _set_django_env(\"production\")\n >>> # A secret key should always be set in production via the environment\n >>> from_env(\"MEDIA_ROOT\", development=\"/media/root\")\n Traceback (most recent call last):\n ...\n thaliawebsite.settings.Misconfiguration: Environment variable `MEDIA_ROOT` must be supplied in production\n >>> _set_django_env(\"development\")\n >>> from_env(\"MEDIA_ROOT\", development=\"/media/root\")\n '/media/root'\n \"\"\"\n try:\n return os.environ[name]\n except KeyError:\n if DJANGO_ENV == \"production\" or (\n DJANGO_ENV == \"staging\" and staging is _NOT_SET\n ):\n if production is _NOT_SET and os.environ.get(\"MANAGE_PY\", \"0\") == \"0\":\n raise Misconfiguration(\n f\"Environment variable `{name}` must be supplied in production\"\n )\n if production is _NOT_SET and os.environ.get(\"MANAGE_PY\", \"0\") == \"1\":\n logger.warning(\n \"Ignoring unset %s because we're running a management command\", name\n )\n return development\n return production\n if DJANGO_ENV == \"staging\":\n return staging\n if DJANGO_ENV == \"development\" or (\n DJANGO_ENV == \"testing\" and testing is _NOT_SET\n ):\n return development\n if DJANGO_ENV == \"testing\":\n return testing\n raise Misconfiguration(f\"DJANGO_ENV set to unsupported value: {DJANGO_ENV}\")\n\n\n###############################################################################\n# Site settings\n\n# We use this setting to generate the email addresses, and for BASE_URL below.\nSITE_DOMAIN = from_env(\"SITE_DOMAIN\", development=\"localhost\", production=\"thalia.nu\")\n\n# Used to generate some absolute urls when we don't have access to a request.\nBASE_URL = from_env(\n \"BASE_URL\",\n development=f\"http://{SITE_DOMAIN}:8000\",\n production=f\"https://{SITE_DOMAIN}\",\n)\n\n# Default FROM email\nDEFAULT_FROM_EMAIL = f\"{os.environ.get('ADDRESS_NOREPLY', 'noreply')}@{SITE_DOMAIN}\"\n# https://docs.djangoproject.com/en/dev/ref/settings/#server-email\nSERVER_EMAIL = DEFAULT_FROM_EMAIL\nNEWSLETTER_FROM_ADDRESS = (\n f\"{os.environ.get('ADDRESS_NEWSLETTER', 'newsletter')}@{SITE_DOMAIN}\"\n)\nBOARD_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_CONTACT', 'info')}@{SITE_DOMAIN}\"\n)\nPARTNER_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_COLLABORATION', 'samenwerking')}@{SITE_DOMAIN}\"\n)\nEDUCATION_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_EDUCATION', 'educacie')}@{SITE_DOMAIN}\"\n)\nPROMO_REQUEST_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_PROMOREQUESTS', 'promocie')}@{SITE_DOMAIN}\"\n)\nTREASURER_NOTIFICATION_ADDRESS = (\n f\"{os.environ.get('ADDRESS_TREASURER', 'treasurer')}@{SITE_DOMAIN}\"\n)\n\n\n# How many days to keep reference faces after a user marks them for deletion\nFACEDETECTION_REFERENCE_FACE_STORAGE_PERIOD_AFTER_DELETE_DAYS = 180\n\n# How many reference faces a user can have at the same time\nFACEDETECTION_MAX_NUM_REFERENCE_FACES = 5\n\n# ARN of the concrexit-facedetection-lambda function.\n# See https://github.com/svthalia/concrexit-facedetection-lambda.\nFACEDETECTION_LAMBDA_ARN = os.environ.get(\"FACEDETECTION_LAMBDA_ARN\")\n\nFACEDETECTION_LAMBDA_BATCH_SIZE = int(\n os.environ.get(\"FACEDETECTION_LAMBDA_BATCH_SIZE\", 20)\n)\n\n# The scheme the app uses for oauth redirection\nAPP_OAUTH_SCHEME = os.environ.get(\"APP_OAUTH_SCHEME\", \"nu.thalia\")\n\n# Membership prices\nMEMBERSHIP_PRICES = {\n \"year\": 
int(os.environ.get(\"MEMBERSHIP_PRICE_YEAR_CENTS\", \"750\")) / 100,\n \"study\": int(os.environ.get(\"MEMBERSHIP_PRICE_STUDY_CENTS\", \"3000\")) / 100,\n}\n\n# Window during which a payment can be deleted again\nPAYMENT_CHANGE_WINDOW = int(os.environ.get(\"PAYMENTS_CHANGE_WINDOW\", 10 * 60))\n\n# Payments creditor identifier\nSEPA_CREDITOR_ID = os.environ.get(\"SEPA_CREDITOR_ID\", \"<unknown>\")\n\n# Payment batch withdrawal date default offset after creation date\nPAYMENT_BATCH_DEFAULT_WITHDRAWAL_DATE_OFFSET = timezone.timedelta(days=14)\n\nTHALIA_PAY_ENABLED_PAYMENT_METHOD = (\n from_env(\"THALIA_PAY_ENABLED\", development=\"1\", staging=\"1\", production=\"0\") == \"1\"\n)\nTHALIA_PAY_FOR_NEW_MEMBERS = os.environ.get(\"THALIA_PAY_FOR_NEW_MEMBERS\", \"1\") == \"1\"\n\n###############################################################################\n# Django settings\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#secret-key\nSECRET_KEY = from_env(\n \"SECRET_KEY\", development=\"#o-0d1q5&^&06tn@8pr1f(n3$crafd++^%sacao7hj*ea@c)^t\"\n)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts\nALLOWED_HOSTS = [\n SITE_DOMAIN,\n *from_env(\"ALLOWED_HOSTS\", development=\"*\", production=\"\").split(\",\"),\n]\n\nDJANGO_DRF_FILEPOND_UPLOAD_TMP = from_env(\n \"DJANGO_DRF_FILEPOND_UPLOAD_TMP\",\n development=os.path.join(BASE_DIR, \"filepond-temp-uploads\"),\n)\nDJANGO_DRF_FILEPOND_FILE_STORE_PATH = from_env(\n \"DJANGO_DRF_FILEPOND_FILE_STORE_PATH\",\n development=os.path.join(BASE_DIR, \"filepond-uploaded\"),\n)\nDJANGO_DRF_FILEPOND_ALLOW_EXTERNAL_UPLOAD_DIR = True\nDJANGO_DRF_FILEPOND_PERMISSION_CLASSES = {\n \"GET_FETCH\": [\n \"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n \"GET_LOAD\": [\n \"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n \"POST_PROCESS\": [\n \"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n \"GET_RESTORE\": [\n \"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n \"DELETE_REVERT\": [\n \"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n \"PATCH_PATCH\": [\n \"oauth2_provider.contrib.rest_framework.IsAuthenticatedOrTokenHasScope\",\n ],\n}\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#static-root\nSTATIC_ROOT = from_env(\"STATIC_ROOT\", development=os.path.join(BASE_DIR, \"static\"))\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#media-root\nMEDIA_ROOT = from_env(\"MEDIA_ROOT\", development=os.path.join(BASE_DIR, \"media\"))\n\n# https://github.com/johnsensible/django-sendfile#nginx-backend\nSENDFILE_URL = \"/media/sendfile/\"\nSENDFILE_ROOT = MEDIA_ROOT\nSENDFILE_BACKEND = setting(\n development=\"django_sendfile.backends.development\",\n production=\"django_sendfile.backends.nginx\",\n)\n\nPRIVATE_MEDIA_LOCATION = \"\"\nPUBLIC_MEDIA_LOCATION = \"public\"\nSTATICFILES_LOCATION = \"static\"\n\nMEDIA_URL = \"/media/private/\"\n\nAWS_ACCESS_KEY_ID = from_env(\"AWS_ACCESS_KEY_ID\", production=None)\nAWS_SECRET_ACCESS_KEY = from_env(\"AWS_SECRET_ACCESS_KEY\", production=None)\nAWS_STORAGE_BUCKET_NAME = from_env(\"AWS_STORAGE_BUCKET_NAME\", production=None)\nAWS_DEFAULT_ACL = \"private\"\nAWS_S3_OBJECT_PARAMETERS = {\"CacheControl\": \"max-age=86400\"}\nAWS_S3_SIGNATURE_VERSION = \"s3v4\"\n\nif AWS_STORAGE_BUCKET_NAME is not None:\n AWS_CLOUDFRONT_KEY = base64.urlsafe_b64decode(\n os.environ.get(\"AWS_CLOUDFRONT_KEY\", None)\n ).decode(\"utf-8\")\n AWS_CLOUDFRONT_KEY_ID = 
os.environ.get(\"AWS_CLOUDFRONT_KEY_ID\", None)\n AWS_S3_CUSTOM_DOMAIN = os.environ.get(\"AWS_CLOUDFRONT_DOMAIN\", None)\n\n _STATICFILES_STORAGE = \"thaliawebsite.storage.backend.StaticS3Storage\"\n STATIC_URL = f\"https://{AWS_S3_CUSTOM_DOMAIN}/static/\"\n\n _DEFAULT_FILE_STORAGE = \"thaliawebsite.storage.backend.PrivateS3Storage\"\n\n _PUBLIC_FILE_STORAGE = \"thaliawebsite.storage.backend.PublicS3Storage\"\n PUBLIC_MEDIA_URL = f\"https://{AWS_S3_CUSTOM_DOMAIN}/\"\nelse:\n _STATICFILES_STORAGE = setting(\n development=\"django.contrib.staticfiles.storage.StaticFilesStorage\",\n production=\"django.contrib.staticfiles.storage.ManifestStaticFilesStorage\",\n )\n STATIC_URL = \"/static/\"\n\n _DEFAULT_FILE_STORAGE = \"thaliawebsite.storage.backend.PrivateFileSystemStorage\"\n\n _PUBLIC_FILE_STORAGE = \"thaliawebsite.storage.backend.PublicFileSystemStorage\"\n PUBLIC_MEDIA_URL = \"/media/public/\"\n\nSTORAGES = {\n \"default\": {\"BACKEND\": _DEFAULT_FILE_STORAGE},\n \"public\": {\"BACKEND\": _PUBLIC_FILE_STORAGE},\n \"staticfiles\": {\"BACKEND\": _STATICFILES_STORAGE},\n}\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#conn-max-age\nCONN_MAX_AGE = int(from_env(\"CONN_MAX_AGE\", development=\"0\", production=\"60\"))\n\n# Useful for managing members\n# https://docs.djangoproject.com/en/dev/ref/settings/#data-upload-max-number-fields\nDATA_UPLOAD_MAX_NUMBER_FIELDS = os.environ.get(\"DATA_UPLOAD_MAX_NUMBER_FIELDS\", 10000)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#debug\nDEBUG = bool(\n from_env(\"DJANGO_DEBUG\", development=True, production=False, testing=False)\n)\n# https://docs.djangoproject.com/en/dev/ref/settings/#internal-ips\nINTERNAL_IPS = [\"127.0.0.1\", \"172.17.0.1\"] if DEBUG else []\n\n\ndef show_toolbar(request):\n return DEBUG\n\n\nDEBUG_TOOLBAR_CONFIG = {\"SHOW_TOOLBAR_CALLBACK\": show_toolbar}\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#session-cookie-secure\nSESSION_COOKIE_SECURE = setting(development=False, production=True)\n# https://docs.djangoproject.com/en/dev/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = setting(development=False, production=True)\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#std-setting-SECURE_PROXY_SSL_HEADER\nSECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#default-auto-field\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\n\n###############################################################################\n# Celery settings\n# https://docs.celeryq.dev/en/stable/userguide/configuration.html#configuration\n\n# Set CELERY_BROKER_URL=\"redis://127.0.0.1:6379\" to use a local redis server in development.\nCELERY_BROKER_URL = from_env(\"CELERY_BROKER_URL\")\n\n# Always execute tasks synchronously when no broker is configured in development and testing.\n# See https://docs.celeryq.dev/en/stable/userguide/configuration.html#std-setting-task_always_eager\nCELERY_TASK_ALWAYS_EAGER = CELERY_BROKER_URL is None\n\n\n# See https://docs.celeryq.dev/en/stable/getting-started/backends-and-brokers/redis.html#caveats\nCELERY_BROKER_TRANSPORT_OPTIONS = {\"visibility_timeout\": 18000}\n\n# https://docs.celeryq.dev/en/stable/userguide/periodic-tasks.html\nCELERY_BEAT_SCHEDULE = {\n \"synchronize_mailinglists\": {\n \"task\": \"mailinglists.tasks.sync_mail\",\n \"schedule\": crontab(minute=30),\n },\n \"synchronize_moneybird\": {\n \"task\": \"moneybirdsynchronization.tasks.synchronize_moneybird\",\n \"schedule\": crontab(minute=30, 
hour=1),\n },\n \"sendpromooverviewweekly\": {\n \"task\": \"promotion.tasks.promo_update_weekly\",\n \"schedule\": crontab(minute=0, hour=8, day_of_week=1),\n },\n \"sendpromoooverviewdaily\": {\n \"task\": \"promotion.tasks.promo_update_daily\",\n \"schedule\": crontab(minute=0, hour=8),\n },\n \"facedetectlambda\": {\n \"task\": \"facedetection.tasks.trigger_facedetect_lambda\",\n \"schedule\": crontab(minute=0, hour=1),\n },\n \"revokeoldmandates\": {\n \"task\": \"payments.tasks.revoke_mandates\",\n \"schedule\": crontab(minute=0, hour=1),\n },\n \"membershipannouncement\": {\n \"task\": \"members.tasks.membership_announcement\",\n \"schedule\": crontab(minute=0, hour=6, day_of_month=31, month_of_year=8),\n },\n \"inforequest\": {\n \"task\": \"members.tasks.info_request\",\n \"schedule\": crontab(minute=0, hour=6, day_of_month=15, month_of_year=10),\n },\n \"expirationannouncement\": {\n \"task\": \"members.tasks.expiration_announcement\",\n \"schedule\": crontab(minute=0, hour=6, day_of_month=8, month_of_year=8),\n },\n \"minimiseregistration\": {\n \"task\": \"registrations.tasks.minimise_registrations\",\n \"schedule\": crontab(minute=0, hour=3, day_of_month=1),\n },\n \"sendscheduledmessages\": {\n \"task\": \"pushnotifications.tasks.send_scheduled_messages\",\n \"schedule\": crontab(minute=\"*/2\"),\n \"args\": (120,),\n },\n \"revokestaff\": {\n \"task\": \"activemembers.tasks.revoke_staff\",\n \"schedule\": crontab(minute=30, hour=3),\n },\n \"deletegsuiteusers\": {\n \"task\": \"activemembers.tasks.delete_gsuite_users\",\n \"schedule\": crontab(minute=30, hour=3, day_of_week=1),\n },\n \"sendplannednewsletters\": {\n \"task\": \"newsletters.tasks.send_planned_newsletters\",\n \"schedule\": crontab(minute=\"*/5\"),\n },\n \"dataminimisation\": {\n \"task\": \"thaliawebsite.tasks.data_minimisation\",\n \"schedule\": crontab(minute=0, hour=3),\n },\n \"cleanup\": {\n \"task\": \"thaliawebsite.tasks.clean_up\",\n \"schedule\": crontab(minute=0, hour=23),\n },\n \"cleartokens\": {\n \"task\": \"thaliawebsite.tasks.clear_tokens\",\n \"schedule\": crontab(minute=30, hour=3),\n },\n \"sendpromoupdateoverviewdaily\": {\n \"task\": \"promotion.tasks.promo_update_overview_daily\",\n \"schedule\": crontab(minute=0, hour=8),\n },\n}\n\n###############################################################################\n# Email settings\n# https://docs.djangoproject.com/en/dev/ref/settings/#email-backend\n_EMAIL_BACKEND = from_env(\"EMAIL_BACKEND\", development=\"console\", production=\"smtp\")\nif _EMAIL_BACKEND == \"console\":\n EMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n\nif _EMAIL_BACKEND == \"smtp\":\n EMAIL_BACKEND = \"django.core.mail.backends.smtp.EmailBackend\"\n EMAIL_HOST = os.environ.get(\"DJANGO_EMAIL_HOST\")\n EMAIL_PORT = os.environ.get(\"DJANGO_EMAIL_PORT\", 25)\n EMAIL_HOST_USER = os.environ.get(\"DJANGO_EMAIL_HOST_USER\", \"\")\n EMAIL_HOST_PASSWORD = os.environ.get(\"DJANGO_EMAIL_HOST_PASSWORD\", \"\")\n EMAIL_USE_TLS = os.environ.get(\"DJANGO_EMAIL_USE_TLS\", \"1\") == \"1\"\n EMAIL_TIMEOUT = int(os.environ.get(\"EMAIL_TIMEOUT\", \"10\"))\n if EMAIL_HOST is None:\n logger.warning(\n \"The email host is set to the default of localhost, are you sure you don't want to set EMAIL_HOST?\"\n )\n EMAIL_HOST = \"localhost\"\n\n###############################################################################\n# Database settings\n# https://docs.djangoproject.com/en/dev/ref/settings/#databases\nDATABASE_ENGINE = from_env(\n \"DATABASE_ENGINE\", 
development=\"sqlite\", production=\"postgresql\", testing=None\n)\nif DATABASE_ENGINE == \"sqlite\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": os.path.join(BASE_DIR, \"db.sqlite3\"),\n }\n }\n\nif DATABASE_ENGINE == \"postgresql\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"USER\": os.environ.get(\"POSTGRES_USER\", \"concrexit\"),\n \"PASSWORD\": os.environ.get(\"POSTGRES_PASSWORD\", None),\n \"NAME\": os.environ.get(\"POSTGRES_DB\", \"\"),\n \"HOST\": os.environ.get(\"POSTGRES_HOST\", \"\"),\n \"PORT\": os.environ.get(\"POSTGRES_PORT\", \"5432\"),\n \"CONN_MAX_AGE\": 300,\n }\n }\n\nif DJANGO_ENV == \"testing\":\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": \"thalia\",\n \"USER\": \"postgres\",\n \"PASSWORD\": \"postgres\",\n \"HOST\": \"127.0.0.1\",\n \"PORT\": 5432,\n },\n }\n\n###############################################################################\n# Firebase config\nFIREBASE_CREDENTIALS = os.environ.get(\"FIREBASE_CREDENTIALS\", \"{}\")\nif FIREBASE_CREDENTIALS != \"{}\":\n FIREBASE_CREDENTIALS = base64.urlsafe_b64decode(FIREBASE_CREDENTIALS)\nFIREBASE_CREDENTIALS = json.loads(FIREBASE_CREDENTIALS)\n\nif FIREBASE_CREDENTIALS != {}:\n from firebase_admin import credentials, initialize_app\n\n try:\n initialize_app(credential=credentials.Certificate(FIREBASE_CREDENTIALS))\n except ValueError:\n logger.error(\"Firebase application failed to initialise\")\n\n###############################################################################\n# GSuite config\nGSUITE_ADMIN_SCOPES = [\n \"https://www.googleapis.com/auth/admin.directory.group\",\n \"https://www.googleapis.com/auth/admin.directory.user\",\n \"https://www.googleapis.com/auth/apps.groups.settings\",\n]\n\nGSUITE_ADMIN_CREDENTIALS = os.environ.get(\"GSUITE_ADMIN_CREDENTIALS\", \"{}\")\nif GSUITE_ADMIN_CREDENTIALS != \"{}\":\n GSUITE_ADMIN_CREDENTIALS = base64.urlsafe_b64decode(GSUITE_ADMIN_CREDENTIALS)\nGSUITE_ADMIN_CREDENTIALS = json.loads(GSUITE_ADMIN_CREDENTIALS)\nGSUITE_ADMIN_USER = os.environ.get(\"GSUITE_ADMIN_USER\", \"[email protected]\")\nGSUITE_DOMAIN = from_env(\n \"GSUITE_DOMAIN\", development=\"thalia.localhost\", production=\"thalia.nu\"\n)\nGSUITE_MEMBERS_DOMAIN = from_env(\n \"GSUITE_MEMBERS_DOMAIN\",\n development=\"members.thalia.localhost\",\n production=\"members.thalia.nu\",\n)\nGSUITE_MEMBERS_AUTOSYNC = os.environ.get(\"GSUITE_MEMBERS_AUTOSYNC\", \"0\") == \"1\"\n\nif GSUITE_ADMIN_CREDENTIALS != {}:\n from google.oauth2 import service_account\n\n GSUITE_ADMIN_CREDENTIALS = service_account.Credentials.from_service_account_info(\n GSUITE_ADMIN_CREDENTIALS, scopes=GSUITE_ADMIN_SCOPES\n ).with_subject(GSUITE_ADMIN_USER)\n\nEMAIL_DOMAIN_BLACKLIST = [GSUITE_MEMBERS_DOMAIN]\n\n###############################################################################\n# Google maps API key and secrets\nGOOGLE_MAPS_API_KEY = os.environ.get(\"GOOGLE_MAPS_API_KEY\", \"\")\nGOOGLE_MAPS_API_SECRET = os.environ.get(\"GOOGLE_MAPS_API_SECRET\", \"\")\nGOOGLE_PLACES_API_KEY = os.environ.get(\"GOOGLE_PLACES_API_KEY\", \"\")\n\n###############################################################################\n# Sentry setup\nif \"SENTRY_DSN\" in os.environ:\n import sentry_sdk\n from sentry_sdk.integrations.celery import CeleryIntegration\n from sentry_sdk.integrations.django import DjangoIntegration\n\n sentry_sdk.init(\n dsn=os.environ.get(\"SENTRY_DSN\"),\n integrations=[\n 
DjangoIntegration(),\n CeleryIntegration(),\n ],\n release=SOURCE_COMMIT,\n send_default_pii=True,\n environment=DJANGO_ENV,\n traces_sample_rate=float(os.environ.get(\"SENTRY_TRACES_SAMPLE_RATE\", 0.2)),\n profiles_sample_rate=float(os.environ.get(\"SENTRY_PROFILES_SAMPLE_RATE\", 0.0)),\n )\n\n\n###############################################################################\n# (Mostly) static settings\nINSTALLED_APPS = [\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.sitemaps\",\n # Dependencies\n \"django_otp\",\n \"django_otp.plugins.otp_static\",\n \"django_otp.plugins.otp_totp\",\n \"formtools\",\n \"two_factor\",\n \"oauth2_provider\",\n \"corsheaders\",\n \"django_bootstrap5\",\n \"tinymce\",\n \"rest_framework\",\n \"rest_framework.authtoken\",\n \"debug_toolbar\",\n \"sass_processor\",\n \"admin_auto_filters\",\n \"django_drf_filepond\",\n \"django_filepond_widget\",\n \"thumbnails\",\n # Our apps\n # Directly link to the app config when applicable as recommended\n # by the docs: https://docs.djangoproject.com/en/2.0/ref/applications/\n \"thaliawebsite.apps.ThaliaWebsiteConfig\", # include for admin settings\n # Load django.contrib.admin after thaliawebsite so the admin page gets modified\n \"django.contrib.admin\",\n # Our apps ordered such that templates in the first\n # apps can override those used by the later apps.\n \"pushnotifications.apps.PushNotificationsConfig\",\n \"facedetection.apps.FaceDetectionConfig\",\n \"announcements.apps.AnnouncementsConfig\",\n \"promotion.apps.PromotionConfig\",\n \"members.apps.MembersConfig\",\n \"documents.apps.DocumentsConfig\",\n \"activemembers.apps.ActiveMembersConfig\",\n \"photos.apps.PhotosConfig\",\n \"utils\",\n \"mailinglists.apps.MailinglistsConfig\",\n \"merchandise.apps.MerchandiseConfig\",\n \"thabloid.apps.ThabloidConfig\",\n \"partners.apps.PartnersConfig\",\n \"events.apps.EventsConfig\",\n \"pizzas.apps.PizzasConfig\",\n \"newsletters.apps.NewslettersConfig\",\n \"education.apps.EducationConfig\",\n \"registrations.apps.RegistrationsConfig\",\n \"payments.apps.PaymentsConfig\",\n \"singlepages.apps.SinglepagesConfig\",\n \"shortlinks.apps.ShortLinkConfig\",\n \"sales.apps.SalesConfig\",\n \"moneybirdsynchronization.apps.MoneybirdsynchronizationConfig\",\n]\n\nMIDDLEWARE = [\n \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.http.ConditionalGetMiddleware\",\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django_otp.middleware.OTPMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"thaliawebsite.middleware.RealIPMiddleware\",\n \"django_ratelimit.middleware.RatelimitMiddleware\",\n \"members.middleware.MemberMiddleware\",\n \"announcements.middleware.AnnouncementMiddleware\",\n]\n\nif DJANGO_ENV in (\"development\", \"testing\"):\n INSTALLED_APPS += [\n \"django_template_check\",\n \"django_extensions\",\n ]\n\nif DJANGO_ENV == \"testing\":\n for x in (\n \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n \"django.middleware.http.ConditionalGetMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n ):\n MIDDLEWARE.remove(x)\n for x in (\"debug_toolbar\",):\n 
INSTALLED_APPS.remove(x)\n\nROOT_URLCONF = \"thaliawebsite.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"templates\")],\n \"APP_DIRS\": setting(development=True, production=False),\n \"OPTIONS\": {\n \"context_processors\": [\n \"thaliawebsite.context_processors.source_commit\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.template.context_processors.media\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"announcements.context_processors.announcements\",\n \"thaliawebsite.context_processors.aprilfools\",\n \"thaliawebsite.context_processors.lustrum_styling\",\n ],\n },\n },\n]\n\nif DJANGO_ENV in [\"production\", \"staging\"]:\n # Use caching template loader\n TEMPLATES[0][\"OPTIONS\"][\"loaders\"] = [\n (\n \"django.template.loaders.cached.Loader\",\n [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n )\n ]\n\n# Default logging: https://github.com/django/django/blob/master/django/utils/log.py\n# We disable mailing the admin.\n# Server errors will be sent to Sentry via the config below this.\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"filters\": {\n \"require_debug_false\": {\n \"()\": \"django.utils.log.RequireDebugFalse\",\n },\n \"require_debug_true\": {\n \"()\": \"django.utils.log.RequireDebugTrue\",\n },\n },\n \"formatters\": {\n \"django.server\": {\n \"()\": \"django.utils.log.ServerFormatter\",\n \"format\": \"[{server_time}] {message}\",\n \"style\": \"{\",\n }\n },\n \"handlers\": {\n \"console\": {\n \"level\": \"INFO\",\n \"filters\": [\"require_debug_true\"],\n \"class\": \"logging.StreamHandler\",\n },\n \"django.server\": {\n \"level\": \"INFO\",\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"django.server\",\n },\n },\n \"loggers\": {\n \"django\": {\n \"handlers\": [\"console\"],\n \"level\": \"INFO\",\n },\n \"django.server\": {\n \"handlers\": [\"django.server\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n },\n}\n\nREDIS_CACHE_PORT = int(\n from_env(\"REDIS_CACHE_PORT\", development=\"6379\", production=\"6379\")\n)\nREDIS_CACHE_HOST = from_env(\"REDIS_CACHE_HOST\")\nREDIS_CACHE_URL = (\n f\"redis://{REDIS_CACHE_HOST}:{REDIS_CACHE_PORT}\" if REDIS_CACHE_HOST else None\n)\n\nCACHES = {\n \"default\": (\n {\n \"BACKEND\": \"django.core.cache.backends.redis.RedisCache\",\n \"LOCATION\": REDIS_CACHE_URL,\n }\n if REDIS_CACHE_URL is not None\n else {\n \"BACKEND\": \"django.core.cache.backends.db.DatabaseCache\",\n \"LOCATION\": \"django_default_db_cache\",\n }\n ),\n}\n\nSESSION_ENGINE = \"django.contrib.sessions.backends.cached_db\"\n\nWSGI_APPLICATION = \"thaliawebsite.wsgi.application\"\n\n# Login pages\nLOGIN_URL = \"two_factor:login\"\nLOGIN_REDIRECT_URL = \"/\"\n\n# Cors configuration\nCORS_ORIGIN_ALLOW_ALL = True\nCORS_URLS_REGEX = r\"^/(?:api/v1|api/v2|user/oauth)/.*\"\n\n# OAuth configuration\nOIDC_RSA_PRIVATE_KEY = from_env(\"OIDC_RSA_PRIVATE_KEY\", testing=None)\nif OIDC_RSA_PRIVATE_KEY is not None:\n OIDC_RSA_PRIVATE_KEY = base64.urlsafe_b64decode(OIDC_RSA_PRIVATE_KEY).decode()\n\nOAUTH2_PROVIDER = {\n \"OIDC_ENABLED\": True,\n \"OIDC_RSA_PRIVATE_KEY\": OIDC_RSA_PRIVATE_KEY,\n \"ALLOWED_REDIRECT_URI_SCHEMES\": setting(\n production=[\"https\", APP_OAUTH_SCHEME],\n staging=[\"http\", \"https\", APP_OAUTH_SCHEME],\n 
development=[\"http\", \"https\", APP_OAUTH_SCHEME],\n ),\n \"SCOPES\": {\n \"openid\": \"OpenID Connect\",\n \"read\": \"Authenticated read access to the website\",\n \"write\": \"Authenticated write access to the website\",\n \"activemembers:read\": \"Read access to committee, society and board groups\",\n \"announcements:read\": \"Read access to announcements\",\n \"events:read\": \"Read access to events and your event registrations\",\n \"events:register\": \"Write access to the state of your event registrations\",\n \"events:admin\": \"Admin access to the events\",\n \"food:read\": \"Read access to food events\",\n \"food:order\": \"Order access to food events\",\n \"food:admin\": \"Admin access to food events\",\n \"members:read\": \"Read access to the members directory\",\n \"photos:read\": \"Read access to photos\",\n \"profile:read\": \"Read access to your member profile\",\n \"profile:write\": \"Write access to your member profile\",\n \"pushnotifications:read\": \"Read access to push notifications\",\n \"pushnotifications:write\": \"Write access to push notifications\",\n \"partners:read\": \"Read access to partners\",\n \"payments:read\": \"Read access to payments\",\n \"payments:write\": \"Write access to payments\",\n \"payments:admin\": \"Admin access to payments\",\n \"sales:read\": \"Read access to your Point of Sale orders\",\n \"sales:order\": \"Place Point of Sale orders on your behalf\",\n \"sales:admin\": \"Admin access to Point of Sale orders\",\n },\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": (\n \"django.contrib.auth.\"\n \"password_validation.UserAttributeSimilarityValidator\"\n ),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.MinimumLengthValidator\"),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.CommonPasswordValidator\"),\n },\n {\n \"NAME\": (\"django.contrib.auth.password_validation.NumericPasswordValidator\"),\n },\n]\n\nPASSWORD_HASHERS = setting(\n development=(\n \"django.contrib.auth.hashers.PBKDF2PasswordHasher\",\n \"django.contrib.auth.hashers.MD5PasswordHasher\",\n ),\n production=(\n \"django.contrib.auth.hashers.Argon2PasswordHasher\",\n \"django.contrib.auth.hashers.PBKDF2PasswordHasher\",\n \"django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher\",\n \"django.contrib.auth.hashers.BCryptSHA256PasswordHasher\",\n \"django.contrib.auth.hashers.BCryptPasswordHasher\",\n ),\n testing=(\"django.contrib.auth.hashers.MD5PasswordHasher\",),\n)\n\nAUTHENTICATION_BACKENDS = [\n \"django.contrib.auth.backends.ModelBackend\",\n \"activemembers.backends.MemberGroupBackend\",\n]\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.SessionAuthentication\",\n \"thaliawebsite.api.authentication.APIv1TokenAuthentication\",\n \"oauth2_provider.contrib.rest_framework.OAuth2Authentication\",\n ),\n \"DEFAULT_PAGINATION_CLASS\": \"thaliawebsite.api.pagination.APIv2LimitOffsetPagination\",\n \"PAGE_SIZE\": 50, # Only for API v2\n \"ALLOWED_VERSIONS\": [\"v1\", \"v2\", \"calendarjs\", \"facedetection\"],\n \"DEFAULT_VERSIONING_CLASS\": \"rest_framework.versioning.NamespaceVersioning\",\n \"DEFAULT_SCHEMA_CLASS\": \"thaliawebsite.api.openapi.OAuthAutoSchema\",\n \"DEFAULT_THROTTLE_CLASSES\": [\n \"thaliawebsite.api.throttling.AnonRateThrottle\",\n \"thaliawebsite.api.throttling.UserRateThrottle\",\n ],\n \"DEFAULT_THROTTLE_RATES\": setting(\n production={\"anon\": \"30/min\", \"user\": 
\"90/min\"},\n staging={\"anon\": \"30/min\", \"user\": \"90/min\"},\n development={\"anon\": None, \"user\": None},\n ),\n}\n\n# Rate limiting\nRATELIMIT_VIEW = \"thaliawebsite.views.rate_limited_view\"\n\n# Internationalization\n# https://docs.djangoproject.com/en/dev/topics/i18n/\nUSE_I18N = True\nLANGUAGES = [(\"en\", _(\"English\"))]\nLANGUAGE_CODE = \"en\"\nTIME_ZONE = \"Europe/Amsterdam\"\n\n# We provide formatting overrides in the `thaliawebsite.en.formats`, because Django\n# no longer supports running without localization. This works to enforce the same format\n# regardless of the user's language/locale, because `en` is the only enabled language.\nFORMAT_MODULE_PATH = [\"thaliawebsite.locale\"]\n\n# Static files\nSTATICFILES_FINDERS = (\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n \"sass_processor.finders.CssFinder\",\n)\n\n# Allow importing .scss files that don't start with an underscore.\n# See https://github.com/jrief/django-sass-processor\nSASS_PROCESSOR_INCLUDE_FILE_PATTERN = r\"^.+\\.scss$\"\n\n# See utils/model/signals.py for explanation\nSUSPEND_SIGNALS = False\n\nTHUMBNAILS_METADATA = (\n {\n \"BACKEND\": \"thumbnails.backends.metadata.RedisBackend\",\n \"host\": REDIS_CACHE_HOST,\n \"port\": REDIS_CACHE_PORT,\n }\n if REDIS_CACHE_HOST\n else {\n \"BACKEND\": \"thumbnails.backends.metadata.DatabaseBackend\",\n }\n)\n\nTHUMBNAILS = {\n \"METADATA\": THUMBNAILS_METADATA,\n \"STORAGE\": {\n # django-thumbnails does not use the Django 4.2 `storages` API yet,\n # but we can simply give it the path as we would with the new API.\n \"BACKEND\": _DEFAULT_FILE_STORAGE,\n },\n \"SIZES\": {\n \"small\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (300, 300),\n \"mode\": \"cover\",\n },\n ],\n },\n \"medium\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (600, 600),\n \"mode\": \"cover\",\n },\n ],\n },\n \"large\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (1200, 900),\n \"mode\": \"cover\",\n },\n ],\n },\n \"photo_medium\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (1200, 900),\n },\n ],\n },\n \"photo_large\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (1920, 1920),\n },\n ],\n },\n \"avatar_large\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (900, 900),\n \"mode\": \"cover\",\n },\n ],\n },\n \"slide_small\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (500, 108),\n \"mode\": \"cover\",\n },\n ],\n },\n \"slide_medium\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (1000, 215),\n \"mode\": \"cover\",\n },\n ],\n },\n \"slide\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (2000, 430),\n \"mode\": \"cover\",\n },\n ],\n },\n \"fit_small\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (300, 300),\n },\n ],\n },\n \"fit_medium\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n 
\"size\": (600, 600),\n },\n ],\n },\n \"fit_medium_pad\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (600, 250),\n \"mode\": \"pad\",\n },\n ],\n },\n \"fit_small_pad\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (360, 150),\n \"mode\": \"pad\",\n },\n ],\n },\n \"fit_large\": {\n \"FORMAT\": \"webp\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.thumbnail\",\n \"size\": (1200, 900),\n },\n ],\n },\n \"source\": {\n \"FORMAT\": \"jpg\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.process_upload\",\n \"size\": (8_000, 8_000),\n \"format\": \"jpg\",\n }\n ],\n },\n \"source_png\": {\n \"FORMAT\": \"png\",\n \"PROCESSORS\": [\n {\n \"PATH\": \"utils.media.processors.process_upload\",\n \"size\": (8_000, 8_000),\n \"format\": \"png\",\n }\n ],\n },\n },\n}\n\nTHUMBNAIL_SIZES = set(THUMBNAILS[\"SIZES\"].keys())\n\n# TinyMCE config\nTINYMCE_DEFAULT_CONFIG = {\n \"max_height\": 500,\n \"menubar\": False,\n \"plugins\": \"autolink autoresize link image code media paste lists\",\n \"toolbar\": \"h2 h3 | bold italic underline strikethrough | image media | link unlink \"\n \"| bullist numlist | undo redo | code\",\n \"contextmenu\": \"bold italic underline strikethrough | link\",\n \"paste_as_text\": True,\n \"relative_urls\": False,\n \"remove_script_host\": False,\n \"autoresize_bottom_margin\": 50,\n}\nTINYMCE_EXTRA_MEDIA = {\n \"css\": {\n \"all\": [\n \"css/tinymce.css\",\n ],\n },\n}\n\n\nBOOTSTRAP5 = {\"required_css_class\": \"required-field\"}\n\n# https://docs.djangoproject.com/en/dev/ref/settings/#default-exception-reporter-filter\nDEFAULT_EXCEPTION_REPORTER_FILTER = (\n \"utils.exception_filter.ThaliaSafeExceptionReporterFilter\"\n)\n\n# Make sure the locations in django.po files don't include line nrs.\nmakemessages.Command.xgettext_options.append(\"--add-location=file\")\n\nGRAPH_MODELS = {\n \"all_applications\": False,\n \"group_models\": True,\n \"app_labels\": [\n \"events\",\n \"photos\",\n \"merchandise\",\n \"thabloid\",\n \"partners\",\n \"newsletters\",\n \"shortlinks\",\n \"promotion\",\n \"documents\",\n \"pizzas\",\n \"announcements\",\n \"sales\",\n \"registrations\",\n \"mailinglists\",\n \"payments\",\n \"members\",\n \"admin\",\n \"pushnotifications\",\n \"activemembers\",\n \"education\",\n \"auth\",\n ],\n}\n\nMONEYBIRD_START_DATE = os.environ.get(\"MONEYBIRD_START_DATE\", \"2023-09-01\")\n\nMONEYBIRD_ADMINISTRATION_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_ADMINISTRATION_ID\"))\n if os.environ.get(\"MONEYBIRD_ADMINISTRATION_ID\")\n else None\n)\n\nMONEYBIRD_API_KEY = os.environ.get(\"MONEYBIRD_API_KEY\")\n\nMONEYBIRD_SYNC_ENABLED = MONEYBIRD_ADMINISTRATION_ID and MONEYBIRD_API_KEY\n\nMONEYBIRD_MEMBER_PK_CUSTOM_FIELD_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_MEMBER_PK_CUSTOM_FIELD_ID\"))\n if os.environ.get(\"MONEYBIRD_MEMBER_PK_CUSTOM_FIELD_ID\")\n else None\n)\nMONEYBIRD_UNKNOWN_PAYER_CONTACT_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_UNKNOWN_PAYER_CONTACT_ID\"))\n if os.environ.get(\"MONEYBIRD_UNKNOWN_PAYER_CONTACT_ID\")\n else None\n)\nMONEYBIRD_CONTRIBUTION_LEDGER_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_CONTRIBUTION_LEDGER_ID\"))\n if os.environ.get(\"MONEYBIRD_CONTRIBUTION_LEDGER_ID\")\n else None\n)\n\nMONEYBIRD_TPAY_FINANCIAL_ACCOUNT_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_TPAY_FINANCIAL_ACCOUNT_ID\"))\n if 
os.environ.get(\"MONEYBIRD_TPAY_FINANCIAL_ACCOUNT_ID\")\n else None\n)\nMONEYBIRD_CASH_FINANCIAL_ACCOUNT_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_CASH_FINANCIAL_ACCOUNT_ID\"))\n if os.environ.get(\"MONEYBIRD_CASH_FINANCIAL_ACCOUNT_ID\")\n else None\n)\nMONEYBIRD_CARD_FINANCIAL_ACCOUNT_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_CARD_FINANCIAL_ACCOUNT_ID\"))\n if os.environ.get(\"MONEYBIRD_CARD_FINANCIAL_ACCOUNT_ID\")\n else None\n)\n\nMONEYBIRD_ZERO_TAX_RATE_ID: int | None = (\n int(os.environ.get(\"MONEYBIRD_ZERO_TAX_RATE_ID\"))\n if os.environ.get(\"MONEYBIRD_ZERO_TAX_RATE_ID\")\n else None\n)\n", "path": "website/thaliawebsite/settings.py"}]} |
gh_patches_debug_1011 | rasdani/github-patches | git_diff | freedomofpress__securedrop-5595 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dev server hot reload has stopped working
## Description
In #5532 the `env` attribute was dropped from `SDConfig` in `sdconfig.py`. That value is checked in [`source.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/source.py#L11) and [`journalist.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/journalist.py#L26) to determine whether the Flask app will run in [debug](https://flask.palletsprojects.com/en/1.1.x/config/#DEBUG) mode. By default it will not, so the dev server has stopped responding to code changes.
Given the Flask documentation warnings about setting debug mode via code and not the `FLASK_DEBUG` environment variable, we may want to reevaluate all of this, but right now let's just get back to a properly functioning dev server.
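
For reference, the pattern at issue looks roughly like this (an illustrative sketch only, not the exact SecureDrop code):

```python
# Sketch of how the entry points decide debug mode (illustrative, not verbatim;
# host/port arguments omitted). `config.env` is the SDConfig attribute that
# #5532 removed; with it gone the check falls back to production behaviour and
# the reloader never runs.
debug = getattr(config, "env", "prod") != "prod"
app.run(debug=debug)

# Flask's recommended alternative would be to leave the code alone and export
# FLASK_DEBUG=1 before starting the dev server, but that is a follow-up question.
```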
## Steps to Reproduce
- Check out `develop` at a commit before the `sdconfig.py` change (eff931fa8a0e74d5c3be87e46c5d0f004f02e289).
- Run `make dev`.
- Change `securedrop/journalist_app/main.py` to trigger a reload, and confirm that the change is detected.
- Stop the dev server.
- Check out `develop` and run `make dev`.
- Change `securedrop/journalist_app/main.py` again, and observe that the change is not detected.
## Expected Behavior
That the dev server would notice code changes and reload to pick them up.
## Actual Behavior
It does not care one whit about your useless flailings. We are all `prod` now.
## Comments
Just need to restore `SDConfig.env`.
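
Concretely, something along these lines in `SDConfig.__init__` should do it (a sketch; the `getattr` default keeps configs without an `env` entry working):

```python
# Restore the attribute dropped in #5532, defaulting to 'prod' when
# config.py does not define env (e.g. on production installs).
self.env = getattr(_config, 'env', 'prod')  # type: str
if self.env == 'test':
    self.RQ_WORKER_NAME = 'test'  # type: str
else:
    self.RQ_WORKER_NAME = 'default'
```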
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/sdconfig.py`
Content:
```
1 from typing import Dict
2 from typing import Optional
3
4 from typing import Type
5
6 import config as _config
7 from typing import List
8
9
10 class SDConfig:
11 def __init__(self) -> None:
12 self.JOURNALIST_APP_FLASK_CONFIG_CLS = \
13 _config.JournalistInterfaceFlaskConfig # type: Type
14
15 self.SOURCE_APP_FLASK_CONFIG_CLS = \
16 _config.SourceInterfaceFlaskConfig # type: Type
17
18 self.DATABASE_ENGINE = _config.DATABASE_ENGINE # type: str
19 self.DATABASE_FILE = _config.DATABASE_FILE # type: str
20
21 self.DATABASE_USERNAME = getattr(_config, "DATABASE_USERNAME", None) # type: Optional[str]
22 self.DATABASE_PASSWORD = getattr(_config, "DATABASE_PASSWORD", None) # type: Optional[str]
23 self.DATABASE_HOST = getattr(_config, "DATABASE_HOST", None) # type: Optional[str]
24 self.DATABASE_NAME = getattr(_config, "DATABASE_NAME", None) # type: Optional[str]
25
26 self.ADJECTIVES = _config.ADJECTIVES # type: str
27 self.NOUNS = _config.NOUNS # type: str
28 self.WORD_LIST = _config.WORD_LIST # type: str
29
30 self.DEFAULT_LOCALE = _config.DEFAULT_LOCALE # type: str
31 self.SUPPORTED_LOCALES = getattr(
32 _config, "SUPPORTED_LOCALES", [self.DEFAULT_LOCALE]
33 ) # type: List[str]
34
35 self.GPG_KEY_DIR = _config.GPG_KEY_DIR # type: str
36
37 self.JOURNALIST_KEY = _config.JOURNALIST_KEY # type: str
38 self.JOURNALIST_TEMPLATES_DIR = _config.JOURNALIST_TEMPLATES_DIR # type: str
39
40 self.SCRYPT_GPG_PEPPER = _config.SCRYPT_GPG_PEPPER # type: str
41 self.SCRYPT_ID_PEPPER = _config.SCRYPT_ID_PEPPER # type: str
42 self.SCRYPT_PARAMS = _config.SCRYPT_PARAMS # type: Dict[str, int]
43
44 self.SECUREDROP_DATA_ROOT = _config.SECUREDROP_DATA_ROOT # type: str
45 self.SECUREDROP_ROOT = _config.SECUREDROP_ROOT # type: str
46
47 self.SESSION_EXPIRATION_MINUTES = _config.SESSION_EXPIRATION_MINUTES # type: int
48
49 self.SOURCE_TEMPLATES_DIR = _config.SOURCE_TEMPLATES_DIR # type: str
50 self.TEMP_DIR = _config.TEMP_DIR # type: str
51 self.STORE_DIR = _config.STORE_DIR # type: str
52 self.TRANSLATION_DIRS = getattr(_config, "TRANSLATION_DIRS", None) # type: Optional[str]
53
54 self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str
55
56 if _config.env == 'test':
57 self.RQ_WORKER_NAME = 'test' # type: str
58 else:
59 self.RQ_WORKER_NAME = 'default'
60
61 @property
62 def DATABASE_URI(self) -> str:
63 if self.DATABASE_ENGINE == "sqlite":
64 db_uri = (self.DATABASE_ENGINE + ":///" +
65 self.DATABASE_FILE)
66 else:
67 if self.DATABASE_USERNAME is None:
68 raise RuntimeError("Missing DATABASE_USERNAME entry from config.py")
69 if self.DATABASE_PASSWORD is None:
70 raise RuntimeError("Missing DATABASE_PASSWORD entry from config.py")
71 if self.DATABASE_HOST is None:
72 raise RuntimeError("Missing DATABASE_HOST entry from config.py")
73 if self.DATABASE_NAME is None:
74 raise RuntimeError("Missing DATABASE_NAME entry from config.py")
75
76 db_uri = (
77 self.DATABASE_ENGINE + '://' +
78 self.DATABASE_USERNAME + ':' +
79 self.DATABASE_PASSWORD + '@' +
80 self.DATABASE_HOST + '/' +
81 self.DATABASE_NAME
82 )
83 return db_uri
84
85
86 config = SDConfig() # type: SDConfig
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/sdconfig.py b/securedrop/sdconfig.py
--- a/securedrop/sdconfig.py
+++ b/securedrop/sdconfig.py
@@ -53,7 +53,8 @@
self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str
- if _config.env == 'test':
+ self.env = getattr(_config, 'env', 'prod') # type: str
+ if self.env == 'test':
self.RQ_WORKER_NAME = 'test' # type: str
else:
self.RQ_WORKER_NAME = 'default'
| {"golden_diff": "diff --git a/securedrop/sdconfig.py b/securedrop/sdconfig.py\n--- a/securedrop/sdconfig.py\n+++ b/securedrop/sdconfig.py\n@@ -53,7 +53,8 @@\n \n self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str\n \n- if _config.env == 'test':\n+ self.env = getattr(_config, 'env', 'prod') # type: str\n+ if self.env == 'test':\n self.RQ_WORKER_NAME = 'test' # type: str\n else:\n self.RQ_WORKER_NAME = 'default'\n", "issue": "dev server hot reload has stopped working\n## Description\r\n\r\nIn #5532 the `env` attribute was dropped from `SDConfig` in `sdconfig.py`. That value is checked in [`source.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/source.py#L11) and [`journalist.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/journalist.py#L26) to determine whether the Flask app will run in [debug](https://flask.palletsprojects.com/en/1.1.x/config/#DEBUG) mode. By default it will not, so the dev server has stopped responding to code changes.\r\n\r\nGiven the Flask documentation warnings about setting debug mode via code and not the `FLASK_DEBUG` environment variable, we may want to reevaluate all of this, but right now let's just get back to a properly functioning dev server.\r\n\r\n## Steps to Reproduce\r\n\r\n- Check out `develop` at a commit before the `sdconfig.py` change (eff931fa8a0e74d5c3be87e46c5d0f004f02e289).\r\n- Run `make dev`.\r\n- Change `securedrop/journalist_app/main.py` to trigger a reload, and confirm that the change is detected.\r\n- Stop the dev server.\r\n- Check out `develop` and run `make dev`.\r\n- Change `securedrop/journalist_app/main.py` again, and observe that the change is not detected.\r\n\r\n## Expected Behavior\r\n\r\nThat the dev server would notice code changes and reload to pick them up.\r\n\r\n## Actual Behavior\r\n\r\nIt does not care one whit about your useless flailings. 
We are all `prod` now.\r\n\r\n## Comments\r\n\r\nJust need to restore `SDConfig.env`.\r\n\n", "before_files": [{"content": "from typing import Dict\nfrom typing import Optional\n\nfrom typing import Type\n\nimport config as _config\nfrom typing import List\n\n\nclass SDConfig:\n def __init__(self) -> None:\n self.JOURNALIST_APP_FLASK_CONFIG_CLS = \\\n _config.JournalistInterfaceFlaskConfig # type: Type\n\n self.SOURCE_APP_FLASK_CONFIG_CLS = \\\n _config.SourceInterfaceFlaskConfig # type: Type\n\n self.DATABASE_ENGINE = _config.DATABASE_ENGINE # type: str\n self.DATABASE_FILE = _config.DATABASE_FILE # type: str\n\n self.DATABASE_USERNAME = getattr(_config, \"DATABASE_USERNAME\", None) # type: Optional[str]\n self.DATABASE_PASSWORD = getattr(_config, \"DATABASE_PASSWORD\", None) # type: Optional[str]\n self.DATABASE_HOST = getattr(_config, \"DATABASE_HOST\", None) # type: Optional[str]\n self.DATABASE_NAME = getattr(_config, \"DATABASE_NAME\", None) # type: Optional[str]\n\n self.ADJECTIVES = _config.ADJECTIVES # type: str\n self.NOUNS = _config.NOUNS # type: str\n self.WORD_LIST = _config.WORD_LIST # type: str\n\n self.DEFAULT_LOCALE = _config.DEFAULT_LOCALE # type: str\n self.SUPPORTED_LOCALES = getattr(\n _config, \"SUPPORTED_LOCALES\", [self.DEFAULT_LOCALE]\n ) # type: List[str]\n\n self.GPG_KEY_DIR = _config.GPG_KEY_DIR # type: str\n\n self.JOURNALIST_KEY = _config.JOURNALIST_KEY # type: str\n self.JOURNALIST_TEMPLATES_DIR = _config.JOURNALIST_TEMPLATES_DIR # type: str\n\n self.SCRYPT_GPG_PEPPER = _config.SCRYPT_GPG_PEPPER # type: str\n self.SCRYPT_ID_PEPPER = _config.SCRYPT_ID_PEPPER # type: str\n self.SCRYPT_PARAMS = _config.SCRYPT_PARAMS # type: Dict[str, int]\n\n self.SECUREDROP_DATA_ROOT = _config.SECUREDROP_DATA_ROOT # type: str\n self.SECUREDROP_ROOT = _config.SECUREDROP_ROOT # type: str\n\n self.SESSION_EXPIRATION_MINUTES = _config.SESSION_EXPIRATION_MINUTES # type: int\n\n self.SOURCE_TEMPLATES_DIR = _config.SOURCE_TEMPLATES_DIR # type: str\n self.TEMP_DIR = _config.TEMP_DIR # type: str\n self.STORE_DIR = _config.STORE_DIR # type: str\n self.TRANSLATION_DIRS = getattr(_config, \"TRANSLATION_DIRS\", None) # type: Optional[str]\n\n self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str\n\n if _config.env == 'test':\n self.RQ_WORKER_NAME = 'test' # type: str\n else:\n self.RQ_WORKER_NAME = 'default'\n\n @property\n def DATABASE_URI(self) -> str:\n if self.DATABASE_ENGINE == \"sqlite\":\n db_uri = (self.DATABASE_ENGINE + \":///\" +\n self.DATABASE_FILE)\n else:\n if self.DATABASE_USERNAME is None:\n raise RuntimeError(\"Missing DATABASE_USERNAME entry from config.py\")\n if self.DATABASE_PASSWORD is None:\n raise RuntimeError(\"Missing DATABASE_PASSWORD entry from config.py\")\n if self.DATABASE_HOST is None:\n raise RuntimeError(\"Missing DATABASE_HOST entry from config.py\")\n if self.DATABASE_NAME is None:\n raise RuntimeError(\"Missing DATABASE_NAME entry from config.py\")\n\n db_uri = (\n self.DATABASE_ENGINE + '://' +\n self.DATABASE_USERNAME + ':' +\n self.DATABASE_PASSWORD + '@' +\n self.DATABASE_HOST + '/' +\n self.DATABASE_NAME\n )\n return db_uri\n\n\nconfig = SDConfig() # type: SDConfig\n", "path": "securedrop/sdconfig.py"}], "after_files": [{"content": "from typing import Dict\nfrom typing import Optional\n\nfrom typing import Type\n\nimport config as _config\nfrom typing import List\n\n\nclass SDConfig:\n def __init__(self) -> None:\n self.JOURNALIST_APP_FLASK_CONFIG_CLS = \\\n _config.JournalistInterfaceFlaskConfig # type: Type\n\n self.SOURCE_APP_FLASK_CONFIG_CLS 
= \\\n _config.SourceInterfaceFlaskConfig # type: Type\n\n self.DATABASE_ENGINE = _config.DATABASE_ENGINE # type: str\n self.DATABASE_FILE = _config.DATABASE_FILE # type: str\n\n self.DATABASE_USERNAME = getattr(_config, \"DATABASE_USERNAME\", None) # type: Optional[str]\n self.DATABASE_PASSWORD = getattr(_config, \"DATABASE_PASSWORD\", None) # type: Optional[str]\n self.DATABASE_HOST = getattr(_config, \"DATABASE_HOST\", None) # type: Optional[str]\n self.DATABASE_NAME = getattr(_config, \"DATABASE_NAME\", None) # type: Optional[str]\n\n self.ADJECTIVES = _config.ADJECTIVES # type: str\n self.NOUNS = _config.NOUNS # type: str\n self.WORD_LIST = _config.WORD_LIST # type: str\n\n self.DEFAULT_LOCALE = _config.DEFAULT_LOCALE # type: str\n self.SUPPORTED_LOCALES = getattr(\n _config, \"SUPPORTED_LOCALES\", [self.DEFAULT_LOCALE]\n ) # type: List[str]\n\n self.GPG_KEY_DIR = _config.GPG_KEY_DIR # type: str\n\n self.JOURNALIST_KEY = _config.JOURNALIST_KEY # type: str\n self.JOURNALIST_TEMPLATES_DIR = _config.JOURNALIST_TEMPLATES_DIR # type: str\n\n self.SCRYPT_GPG_PEPPER = _config.SCRYPT_GPG_PEPPER # type: str\n self.SCRYPT_ID_PEPPER = _config.SCRYPT_ID_PEPPER # type: str\n self.SCRYPT_PARAMS = _config.SCRYPT_PARAMS # type: Dict[str, int]\n\n self.SECUREDROP_DATA_ROOT = _config.SECUREDROP_DATA_ROOT # type: str\n self.SECUREDROP_ROOT = _config.SECUREDROP_ROOT # type: str\n\n self.SESSION_EXPIRATION_MINUTES = _config.SESSION_EXPIRATION_MINUTES # type: int\n\n self.SOURCE_TEMPLATES_DIR = _config.SOURCE_TEMPLATES_DIR # type: str\n self.TEMP_DIR = _config.TEMP_DIR # type: str\n self.STORE_DIR = _config.STORE_DIR # type: str\n self.TRANSLATION_DIRS = getattr(_config, \"TRANSLATION_DIRS\", None) # type: Optional[str]\n\n self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str\n\n self.env = getattr(_config, 'env', 'prod') # type: str\n if self.env == 'test':\n self.RQ_WORKER_NAME = 'test' # type: str\n else:\n self.RQ_WORKER_NAME = 'default'\n\n @property\n def DATABASE_URI(self) -> str:\n if self.DATABASE_ENGINE == \"sqlite\":\n db_uri = (self.DATABASE_ENGINE + \":///\" +\n self.DATABASE_FILE)\n else:\n if self.DATABASE_USERNAME is None:\n raise RuntimeError(\"Missing DATABASE_USERNAME entry from config.py\")\n if self.DATABASE_PASSWORD is None:\n raise RuntimeError(\"Missing DATABASE_PASSWORD entry from config.py\")\n if self.DATABASE_HOST is None:\n raise RuntimeError(\"Missing DATABASE_HOST entry from config.py\")\n if self.DATABASE_NAME is None:\n raise RuntimeError(\"Missing DATABASE_NAME entry from config.py\")\n\n db_uri = (\n self.DATABASE_ENGINE + '://' +\n self.DATABASE_USERNAME + ':' +\n self.DATABASE_PASSWORD + '@' +\n self.DATABASE_HOST + '/' +\n self.DATABASE_NAME\n )\n return db_uri\n\n\nconfig = SDConfig() # type: SDConfig\n", "path": "securedrop/sdconfig.py"}]} |
gh_patches_debug_1012 | rasdani/github-patches | git_diff | facebookresearch__hydra-2161 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Link to upgrade guide crashes documentation site
In `hydra-core==1.2.0dev5`, `basic_launcher` produces the following warning:
```
/home/runner/work/hydra-zen/hydra-zen/.tox/pre-release/lib/python3.8/site-packages/hydra/_internal/core_plugins
/basic_launcher.py:74:
UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/upgrades/1.1_to_1.2/changes_to_job_working_dir for more information.
```
But following the provided URL, https://hydra.cc/docs/upgrades/1.1_to_1.2/changes_to_job_working_dir , leads to a crash in the docs site:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/core/utils.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import copy
3 import logging
4 import os
5 import re
6 import sys
7 from contextlib import contextmanager
8 from dataclasses import dataclass
9 from datetime import datetime
10 from enum import Enum
11 from os.path import splitext
12 from pathlib import Path
13 from textwrap import dedent
14 from typing import Any, Dict, Optional, Sequence, Union, cast
15
16 from omegaconf import DictConfig, OmegaConf, open_dict, read_write
17
18 from hydra import version
19 from hydra._internal.deprecation_warning import deprecation_warning
20 from hydra.core.hydra_config import HydraConfig
21 from hydra.core.singleton import Singleton
22 from hydra.types import HydraContext, TaskFunction
23
24 log = logging.getLogger(__name__)
25
26
27 def simple_stdout_log_config(level: int = logging.INFO) -> None:
28 root = logging.getLogger()
29 root.setLevel(level)
30 handler = logging.StreamHandler(sys.stdout)
31 formatter = logging.Formatter("%(message)s")
32 handler.setFormatter(formatter)
33 root.addHandler(handler)
34
35
36 def configure_log(
37 log_config: DictConfig,
38 verbose_config: Union[bool, str, Sequence[str]] = False,
39 ) -> None:
40 assert isinstance(verbose_config, (bool, str)) or OmegaConf.is_list(verbose_config)
41 if log_config is not None:
42 conf: Dict[str, Any] = OmegaConf.to_container( # type: ignore
43 log_config, resolve=True
44 )
45 if conf["root"] is not None:
46 logging.config.dictConfig(conf)
47 else:
48 # default logging to stdout
49 root = logging.getLogger()
50 root.setLevel(logging.INFO)
51 handler = logging.StreamHandler(sys.stdout)
52 formatter = logging.Formatter(
53 "[%(asctime)s][%(name)s][%(levelname)s] - %(message)s"
54 )
55 handler.setFormatter(formatter)
56 root.addHandler(handler)
57 if isinstance(verbose_config, bool):
58 if verbose_config:
59 logging.getLogger().setLevel(logging.DEBUG)
60 else:
61 if isinstance(verbose_config, str):
62 verbose_list = OmegaConf.create([verbose_config])
63 elif OmegaConf.is_list(verbose_config):
64 verbose_list = verbose_config # type: ignore
65 else:
66 assert False
67
68 for logger in verbose_list:
69 logging.getLogger(logger).setLevel(logging.DEBUG)
70
71
72 def _save_config(cfg: DictConfig, filename: str, output_dir: Path) -> None:
73 output_dir.mkdir(parents=True, exist_ok=True)
74 with open(str(output_dir / filename), "w", encoding="utf-8") as file:
75 file.write(OmegaConf.to_yaml(cfg))
76
77
78 def filter_overrides(overrides: Sequence[str]) -> Sequence[str]:
79 """
80 :param overrides: overrides list
81 :return: returning a new overrides list with all the keys starting with hydra. filtered.
82 """
83 return [x for x in overrides if not x.startswith("hydra.")]
84
85
86 def _check_hydra_context(hydra_context: Optional[HydraContext]) -> None:
87 if hydra_context is None:
88 # hydra_context is required as of Hydra 1.2.
89 # We can remove this check in Hydra 1.3.
90 raise TypeError(
91 dedent(
92 """
93 run_job's signature has changed: the `hydra_context` arg is now required.
94 For more info, check https://github.com/facebookresearch/hydra/pull/1581."""
95 ),
96 )
97
98
99 def run_job(
100 task_function: TaskFunction,
101 config: DictConfig,
102 job_dir_key: str,
103 job_subdir_key: Optional[str],
104 hydra_context: HydraContext,
105 configure_logging: bool = True,
106 ) -> "JobReturn":
107 _check_hydra_context(hydra_context)
108 callbacks = hydra_context.callbacks
109
110 old_cwd = os.getcwd()
111 orig_hydra_cfg = HydraConfig.instance().cfg
112
113 # init Hydra config for config evaluation
114 HydraConfig.instance().set_config(config)
115
116 output_dir = str(OmegaConf.select(config, job_dir_key))
117 if job_subdir_key is not None:
118 # evaluate job_subdir_key lazily.
119 # this is running on the client side in sweep and contains things such as job:id which
120 # are only available there.
121 subdir = str(OmegaConf.select(config, job_subdir_key))
122 output_dir = os.path.join(output_dir, subdir)
123
124 with read_write(config.hydra.runtime):
125 with open_dict(config.hydra.runtime):
126 config.hydra.runtime.output_dir = os.path.abspath(output_dir)
127
128 # update Hydra config
129 HydraConfig.instance().set_config(config)
130 _chdir = None
131 try:
132 ret = JobReturn()
133 task_cfg = copy.deepcopy(config)
134 with read_write(task_cfg):
135 with open_dict(task_cfg):
136 del task_cfg["hydra"]
137
138 ret.cfg = task_cfg
139 hydra_cfg = copy.deepcopy(HydraConfig.instance().cfg)
140 assert isinstance(hydra_cfg, DictConfig)
141 ret.hydra_cfg = hydra_cfg
142 overrides = OmegaConf.to_container(config.hydra.overrides.task)
143 assert isinstance(overrides, list)
144 ret.overrides = overrides
145 # handle output directories here
146 Path(str(output_dir)).mkdir(parents=True, exist_ok=True)
147
148 _chdir = hydra_cfg.hydra.job.chdir
149
150 if _chdir is None:
151 if version.base_at_least("1.2"):
152 _chdir = False
153
154 if _chdir is None:
155 url = "https://hydra.cc/docs/upgrades/1.1_to_1.2/changes_to_job_working_dir"
156 deprecation_warning(
157 message=dedent(
158 f"""\
159 Future Hydra versions will no longer change working directory at job runtime by default.
160 See {url} for more information."""
161 ),
162 stacklevel=2,
163 )
164 _chdir = True
165
166 if _chdir:
167 os.chdir(output_dir)
168 ret.working_dir = output_dir
169 else:
170 ret.working_dir = os.getcwd()
171
172 if configure_logging:
173 configure_log(config.hydra.job_logging, config.hydra.verbose)
174
175 if config.hydra.output_subdir is not None:
176 hydra_output = Path(config.hydra.runtime.output_dir) / Path(
177 config.hydra.output_subdir
178 )
179 _save_config(task_cfg, "config.yaml", hydra_output)
180 _save_config(hydra_cfg, "hydra.yaml", hydra_output)
181 _save_config(config.hydra.overrides.task, "overrides.yaml", hydra_output)
182
183 with env_override(hydra_cfg.hydra.job.env_set):
184 callbacks.on_job_start(config=config)
185 try:
186 ret.return_value = task_function(task_cfg)
187 ret.status = JobStatus.COMPLETED
188 except Exception as e:
189 ret.return_value = e
190 ret.status = JobStatus.FAILED
191
192 ret.task_name = JobRuntime.instance().get("name")
193
194 _flush_loggers()
195
196 callbacks.on_job_end(config=config, job_return=ret)
197
198 return ret
199 finally:
200 HydraConfig.instance().cfg = orig_hydra_cfg
201 if _chdir:
202 os.chdir(old_cwd)
203
204
205 def get_valid_filename(s: str) -> str:
206 s = str(s).strip().replace(" ", "_")
207 return re.sub(r"(?u)[^-\w.]", "", s)
208
209
210 def setup_globals() -> None:
211 # please add documentation when you add a new resolver
212 OmegaConf.register_new_resolver(
213 "now",
214 lambda pattern: datetime.now().strftime(pattern),
215 use_cache=True,
216 replace=True,
217 )
218 OmegaConf.register_new_resolver(
219 "hydra",
220 lambda path: OmegaConf.select(cast(DictConfig, HydraConfig.get()), path),
221 replace=True,
222 )
223
224 vi = sys.version_info
225 version_dict = {
226 "major": f"{vi[0]}",
227 "minor": f"{vi[0]}.{vi[1]}",
228 "micro": f"{vi[0]}.{vi[1]}.{vi[2]}",
229 }
230 OmegaConf.register_new_resolver(
231 "python_version", lambda level="minor": version_dict.get(level), replace=True
232 )
233
234
235 class JobStatus(Enum):
236 UNKNOWN = 0
237 COMPLETED = 1
238 FAILED = 2
239
240
241 @dataclass
242 class JobReturn:
243 overrides: Optional[Sequence[str]] = None
244 cfg: Optional[DictConfig] = None
245 hydra_cfg: Optional[DictConfig] = None
246 working_dir: Optional[str] = None
247 task_name: Optional[str] = None
248 status: JobStatus = JobStatus.UNKNOWN
249 _return_value: Any = None
250
251 @property
252 def return_value(self) -> Any:
253 assert self.status != JobStatus.UNKNOWN, "return_value not yet available"
254 if self.status == JobStatus.COMPLETED:
255 return self._return_value
256 else:
257 sys.stderr.write(
258 f"Error executing job with overrides: {self.overrides}" + os.linesep
259 )
260 raise self._return_value
261
262 @return_value.setter
263 def return_value(self, value: Any) -> None:
264 self._return_value = value
265
266
267 class JobRuntime(metaclass=Singleton):
268 def __init__(self) -> None:
269 self.conf: DictConfig = OmegaConf.create()
270 self.set("name", "UNKNOWN_NAME")
271
272 def get(self, key: str) -> Any:
273 ret = OmegaConf.select(self.conf, key)
274 if ret is None:
275 raise KeyError(f"Key not found in {type(self).__name__}: {key}")
276 return ret
277
278 def set(self, key: str, value: Any) -> None:
279 log.debug(f"Setting {type(self).__name__}:{key}={value}")
280 self.conf[key] = value
281
282
283 def validate_config_path(config_path: Optional[str]) -> None:
284 if config_path is not None:
285 split_file = splitext(config_path)
286 if split_file[1] in (".yaml", ".yml"):
287 msg = dedent(
288 """\
289 Using config_path to specify the config name is not supported, specify the config name via config_name.
290 See https://hydra.cc/docs/next/upgrades/0.11_to_1.0/config_path_changes
291 """
292 )
293 raise ValueError(msg)
294
295
296 @contextmanager
297 def env_override(env: Dict[str, str]) -> Any:
298 """Temporarily set environment variables inside the context manager and
299 fully restore previous environment afterwards
300 """
301 original_env = {key: os.getenv(key) for key in env}
302 os.environ.update(env)
303 try:
304 yield
305 finally:
306 for key, value in original_env.items():
307 if value is None:
308 del os.environ[key]
309 else:
310 os.environ[key] = value
311
312
313 def _flush_loggers() -> None:
314 # Python logging does not have an official API to flush all loggers.
315 # This will have to do.
316 for h_weak_ref in logging._handlerList: # type: ignore
317 try:
318 h_weak_ref().flush()
319 except Exception:
320 # ignore exceptions thrown during flushing
321 pass
322
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hydra/core/utils.py b/hydra/core/utils.py
--- a/hydra/core/utils.py
+++ b/hydra/core/utils.py
@@ -152,7 +152,7 @@
_chdir = False
if _chdir is None:
- url = "https://hydra.cc/docs/upgrades/1.1_to_1.2/changes_to_job_working_dir"
+ url = "https://hydra.cc/docs/next/upgrades/1.1_to_1.2/changes_to_job_working_dir/"
deprecation_warning(
message=dedent(
f"""\
| {"golden_diff": "diff --git a/hydra/core/utils.py b/hydra/core/utils.py\n--- a/hydra/core/utils.py\n+++ b/hydra/core/utils.py\n@@ -152,7 +152,7 @@\n _chdir = False\n \n if _chdir is None:\n- url = \"https://hydra.cc/docs/upgrades/1.1_to_1.2/changes_to_job_working_dir\"\n+ url = \"https://hydra.cc/docs/next/upgrades/1.1_to_1.2/changes_to_job_working_dir/\"\n deprecation_warning(\n message=dedent(\n f\"\"\"\\\n", "issue": "[Bug] Link to upgrade guide crashes documentation site\nIn `hydra-core==1.2.0dev5`, `basic_launcher` produces the following warning:\r\n``` \r\n/home/runner/work/hydra-zen/hydra-zen/.tox/pre-release/lib/python3.8/site-packages/hydra/_internal/core_plugins\r\n/basic_launcher.py:74: \r\n\r\nUserWarning: Future Hydra versions will no longer change working directory at job runtime by default.\r\n See https://hydra.cc/docs/upgrades/1.1_to_1.2/changes_to_job_working_dir for more information.\r\n```\r\n\r\nBut following the provided URL, https://hydra.cc/docs/upgrades/1.1_to_1.2/changes_to_job_working_dir , leads to a crash in the docs site:\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport copy\nimport logging\nimport os\nimport re\nimport sys\nfrom contextlib import contextmanager\nfrom dataclasses import dataclass\nfrom datetime import datetime\nfrom enum import Enum\nfrom os.path import splitext\nfrom pathlib import Path\nfrom textwrap import dedent\nfrom typing import Any, Dict, Optional, Sequence, Union, cast\n\nfrom omegaconf import DictConfig, OmegaConf, open_dict, read_write\n\nfrom hydra import version\nfrom hydra._internal.deprecation_warning import deprecation_warning\nfrom hydra.core.hydra_config import HydraConfig\nfrom hydra.core.singleton import Singleton\nfrom hydra.types import HydraContext, TaskFunction\n\nlog = logging.getLogger(__name__)\n\n\ndef simple_stdout_log_config(level: int = logging.INFO) -> None:\n root = logging.getLogger()\n root.setLevel(level)\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(\"%(message)s\")\n handler.setFormatter(formatter)\n root.addHandler(handler)\n\n\ndef configure_log(\n log_config: DictConfig,\n verbose_config: Union[bool, str, Sequence[str]] = False,\n) -> None:\n assert isinstance(verbose_config, (bool, str)) or OmegaConf.is_list(verbose_config)\n if log_config is not None:\n conf: Dict[str, Any] = OmegaConf.to_container( # type: ignore\n log_config, resolve=True\n )\n if conf[\"root\"] is not None:\n logging.config.dictConfig(conf)\n else:\n # default logging to stdout\n root = logging.getLogger()\n root.setLevel(logging.INFO)\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(\n \"[%(asctime)s][%(name)s][%(levelname)s] - %(message)s\"\n )\n handler.setFormatter(formatter)\n root.addHandler(handler)\n if isinstance(verbose_config, bool):\n if verbose_config:\n logging.getLogger().setLevel(logging.DEBUG)\n else:\n if isinstance(verbose_config, str):\n verbose_list = OmegaConf.create([verbose_config])\n elif OmegaConf.is_list(verbose_config):\n verbose_list = verbose_config # type: ignore\n else:\n assert False\n\n for logger in verbose_list:\n logging.getLogger(logger).setLevel(logging.DEBUG)\n\n\ndef _save_config(cfg: DictConfig, filename: str, output_dir: Path) -> None:\n output_dir.mkdir(parents=True, exist_ok=True)\n with open(str(output_dir / filename), \"w\", encoding=\"utf-8\") as file:\n file.write(OmegaConf.to_yaml(cfg))\n\n\ndef filter_overrides(overrides: Sequence[str]) -> 
Sequence[str]:\n \"\"\"\n :param overrides: overrides list\n :return: returning a new overrides list with all the keys starting with hydra. filtered.\n \"\"\"\n return [x for x in overrides if not x.startswith(\"hydra.\")]\n\n\ndef _check_hydra_context(hydra_context: Optional[HydraContext]) -> None:\n if hydra_context is None:\n # hydra_context is required as of Hydra 1.2.\n # We can remove this check in Hydra 1.3.\n raise TypeError(\n dedent(\n \"\"\"\n run_job's signature has changed: the `hydra_context` arg is now required.\n For more info, check https://github.com/facebookresearch/hydra/pull/1581.\"\"\"\n ),\n )\n\n\ndef run_job(\n task_function: TaskFunction,\n config: DictConfig,\n job_dir_key: str,\n job_subdir_key: Optional[str],\n hydra_context: HydraContext,\n configure_logging: bool = True,\n) -> \"JobReturn\":\n _check_hydra_context(hydra_context)\n callbacks = hydra_context.callbacks\n\n old_cwd = os.getcwd()\n orig_hydra_cfg = HydraConfig.instance().cfg\n\n # init Hydra config for config evaluation\n HydraConfig.instance().set_config(config)\n\n output_dir = str(OmegaConf.select(config, job_dir_key))\n if job_subdir_key is not None:\n # evaluate job_subdir_key lazily.\n # this is running on the client side in sweep and contains things such as job:id which\n # are only available there.\n subdir = str(OmegaConf.select(config, job_subdir_key))\n output_dir = os.path.join(output_dir, subdir)\n\n with read_write(config.hydra.runtime):\n with open_dict(config.hydra.runtime):\n config.hydra.runtime.output_dir = os.path.abspath(output_dir)\n\n # update Hydra config\n HydraConfig.instance().set_config(config)\n _chdir = None\n try:\n ret = JobReturn()\n task_cfg = copy.deepcopy(config)\n with read_write(task_cfg):\n with open_dict(task_cfg):\n del task_cfg[\"hydra\"]\n\n ret.cfg = task_cfg\n hydra_cfg = copy.deepcopy(HydraConfig.instance().cfg)\n assert isinstance(hydra_cfg, DictConfig)\n ret.hydra_cfg = hydra_cfg\n overrides = OmegaConf.to_container(config.hydra.overrides.task)\n assert isinstance(overrides, list)\n ret.overrides = overrides\n # handle output directories here\n Path(str(output_dir)).mkdir(parents=True, exist_ok=True)\n\n _chdir = hydra_cfg.hydra.job.chdir\n\n if _chdir is None:\n if version.base_at_least(\"1.2\"):\n _chdir = False\n\n if _chdir is None:\n url = \"https://hydra.cc/docs/upgrades/1.1_to_1.2/changes_to_job_working_dir\"\n deprecation_warning(\n message=dedent(\n f\"\"\"\\\n Future Hydra versions will no longer change working directory at job runtime by default.\n See {url} for more information.\"\"\"\n ),\n stacklevel=2,\n )\n _chdir = True\n\n if _chdir:\n os.chdir(output_dir)\n ret.working_dir = output_dir\n else:\n ret.working_dir = os.getcwd()\n\n if configure_logging:\n configure_log(config.hydra.job_logging, config.hydra.verbose)\n\n if config.hydra.output_subdir is not None:\n hydra_output = Path(config.hydra.runtime.output_dir) / Path(\n config.hydra.output_subdir\n )\n _save_config(task_cfg, \"config.yaml\", hydra_output)\n _save_config(hydra_cfg, \"hydra.yaml\", hydra_output)\n _save_config(config.hydra.overrides.task, \"overrides.yaml\", hydra_output)\n\n with env_override(hydra_cfg.hydra.job.env_set):\n callbacks.on_job_start(config=config)\n try:\n ret.return_value = task_function(task_cfg)\n ret.status = JobStatus.COMPLETED\n except Exception as e:\n ret.return_value = e\n ret.status = JobStatus.FAILED\n\n ret.task_name = JobRuntime.instance().get(\"name\")\n\n _flush_loggers()\n\n callbacks.on_job_end(config=config, job_return=ret)\n\n 
return ret\n finally:\n HydraConfig.instance().cfg = orig_hydra_cfg\n if _chdir:\n os.chdir(old_cwd)\n\n\ndef get_valid_filename(s: str) -> str:\n s = str(s).strip().replace(\" \", \"_\")\n return re.sub(r\"(?u)[^-\\w.]\", \"\", s)\n\n\ndef setup_globals() -> None:\n # please add documentation when you add a new resolver\n OmegaConf.register_new_resolver(\n \"now\",\n lambda pattern: datetime.now().strftime(pattern),\n use_cache=True,\n replace=True,\n )\n OmegaConf.register_new_resolver(\n \"hydra\",\n lambda path: OmegaConf.select(cast(DictConfig, HydraConfig.get()), path),\n replace=True,\n )\n\n vi = sys.version_info\n version_dict = {\n \"major\": f\"{vi[0]}\",\n \"minor\": f\"{vi[0]}.{vi[1]}\",\n \"micro\": f\"{vi[0]}.{vi[1]}.{vi[2]}\",\n }\n OmegaConf.register_new_resolver(\n \"python_version\", lambda level=\"minor\": version_dict.get(level), replace=True\n )\n\n\nclass JobStatus(Enum):\n UNKNOWN = 0\n COMPLETED = 1\n FAILED = 2\n\n\n@dataclass\nclass JobReturn:\n overrides: Optional[Sequence[str]] = None\n cfg: Optional[DictConfig] = None\n hydra_cfg: Optional[DictConfig] = None\n working_dir: Optional[str] = None\n task_name: Optional[str] = None\n status: JobStatus = JobStatus.UNKNOWN\n _return_value: Any = None\n\n @property\n def return_value(self) -> Any:\n assert self.status != JobStatus.UNKNOWN, \"return_value not yet available\"\n if self.status == JobStatus.COMPLETED:\n return self._return_value\n else:\n sys.stderr.write(\n f\"Error executing job with overrides: {self.overrides}\" + os.linesep\n )\n raise self._return_value\n\n @return_value.setter\n def return_value(self, value: Any) -> None:\n self._return_value = value\n\n\nclass JobRuntime(metaclass=Singleton):\n def __init__(self) -> None:\n self.conf: DictConfig = OmegaConf.create()\n self.set(\"name\", \"UNKNOWN_NAME\")\n\n def get(self, key: str) -> Any:\n ret = OmegaConf.select(self.conf, key)\n if ret is None:\n raise KeyError(f\"Key not found in {type(self).__name__}: {key}\")\n return ret\n\n def set(self, key: str, value: Any) -> None:\n log.debug(f\"Setting {type(self).__name__}:{key}={value}\")\n self.conf[key] = value\n\n\ndef validate_config_path(config_path: Optional[str]) -> None:\n if config_path is not None:\n split_file = splitext(config_path)\n if split_file[1] in (\".yaml\", \".yml\"):\n msg = dedent(\n \"\"\"\\\n Using config_path to specify the config name is not supported, specify the config name via config_name.\n See https://hydra.cc/docs/next/upgrades/0.11_to_1.0/config_path_changes\n \"\"\"\n )\n raise ValueError(msg)\n\n\n@contextmanager\ndef env_override(env: Dict[str, str]) -> Any:\n \"\"\"Temporarily set environment variables inside the context manager and\n fully restore previous environment afterwards\n \"\"\"\n original_env = {key: os.getenv(key) for key in env}\n os.environ.update(env)\n try:\n yield\n finally:\n for key, value in original_env.items():\n if value is None:\n del os.environ[key]\n else:\n os.environ[key] = value\n\n\ndef _flush_loggers() -> None:\n # Python logging does not have an official API to flush all loggers.\n # This will have to do.\n for h_weak_ref in logging._handlerList: # type: ignore\n try:\n h_weak_ref().flush()\n except Exception:\n # ignore exceptions thrown during flushing\n pass\n", "path": "hydra/core/utils.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nimport copy\nimport logging\nimport os\nimport re\nimport sys\nfrom contextlib import contextmanager\nfrom dataclasses import dataclass\nfrom datetime import datetime\nfrom enum import Enum\nfrom os.path import splitext\nfrom pathlib import Path\nfrom textwrap import dedent\nfrom typing import Any, Dict, Optional, Sequence, Union, cast\n\nfrom omegaconf import DictConfig, OmegaConf, open_dict, read_write\n\nfrom hydra import version\nfrom hydra._internal.deprecation_warning import deprecation_warning\nfrom hydra.core.hydra_config import HydraConfig\nfrom hydra.core.singleton import Singleton\nfrom hydra.types import HydraContext, TaskFunction\n\nlog = logging.getLogger(__name__)\n\n\ndef simple_stdout_log_config(level: int = logging.INFO) -> None:\n root = logging.getLogger()\n root.setLevel(level)\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(\"%(message)s\")\n handler.setFormatter(formatter)\n root.addHandler(handler)\n\n\ndef configure_log(\n log_config: DictConfig,\n verbose_config: Union[bool, str, Sequence[str]] = False,\n) -> None:\n assert isinstance(verbose_config, (bool, str)) or OmegaConf.is_list(verbose_config)\n if log_config is not None:\n conf: Dict[str, Any] = OmegaConf.to_container( # type: ignore\n log_config, resolve=True\n )\n if conf[\"root\"] is not None:\n logging.config.dictConfig(conf)\n else:\n # default logging to stdout\n root = logging.getLogger()\n root.setLevel(logging.INFO)\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(\n \"[%(asctime)s][%(name)s][%(levelname)s] - %(message)s\"\n )\n handler.setFormatter(formatter)\n root.addHandler(handler)\n if isinstance(verbose_config, bool):\n if verbose_config:\n logging.getLogger().setLevel(logging.DEBUG)\n else:\n if isinstance(verbose_config, str):\n verbose_list = OmegaConf.create([verbose_config])\n elif OmegaConf.is_list(verbose_config):\n verbose_list = verbose_config # type: ignore\n else:\n assert False\n\n for logger in verbose_list:\n logging.getLogger(logger).setLevel(logging.DEBUG)\n\n\ndef _save_config(cfg: DictConfig, filename: str, output_dir: Path) -> None:\n output_dir.mkdir(parents=True, exist_ok=True)\n with open(str(output_dir / filename), \"w\", encoding=\"utf-8\") as file:\n file.write(OmegaConf.to_yaml(cfg))\n\n\ndef filter_overrides(overrides: Sequence[str]) -> Sequence[str]:\n \"\"\"\n :param overrides: overrides list\n :return: returning a new overrides list with all the keys starting with hydra. 
filtered.\n \"\"\"\n return [x for x in overrides if not x.startswith(\"hydra.\")]\n\n\ndef _check_hydra_context(hydra_context: Optional[HydraContext]) -> None:\n if hydra_context is None:\n # hydra_context is required as of Hydra 1.2.\n # We can remove this check in Hydra 1.3.\n raise TypeError(\n dedent(\n \"\"\"\n run_job's signature has changed: the `hydra_context` arg is now required.\n For more info, check https://github.com/facebookresearch/hydra/pull/1581.\"\"\"\n ),\n )\n\n\ndef run_job(\n task_function: TaskFunction,\n config: DictConfig,\n job_dir_key: str,\n job_subdir_key: Optional[str],\n hydra_context: HydraContext,\n configure_logging: bool = True,\n) -> \"JobReturn\":\n _check_hydra_context(hydra_context)\n callbacks = hydra_context.callbacks\n\n old_cwd = os.getcwd()\n orig_hydra_cfg = HydraConfig.instance().cfg\n\n # init Hydra config for config evaluation\n HydraConfig.instance().set_config(config)\n\n output_dir = str(OmegaConf.select(config, job_dir_key))\n if job_subdir_key is not None:\n # evaluate job_subdir_key lazily.\n # this is running on the client side in sweep and contains things such as job:id which\n # are only available there.\n subdir = str(OmegaConf.select(config, job_subdir_key))\n output_dir = os.path.join(output_dir, subdir)\n\n with read_write(config.hydra.runtime):\n with open_dict(config.hydra.runtime):\n config.hydra.runtime.output_dir = os.path.abspath(output_dir)\n\n # update Hydra config\n HydraConfig.instance().set_config(config)\n _chdir = None\n try:\n ret = JobReturn()\n task_cfg = copy.deepcopy(config)\n with read_write(task_cfg):\n with open_dict(task_cfg):\n del task_cfg[\"hydra\"]\n\n ret.cfg = task_cfg\n hydra_cfg = copy.deepcopy(HydraConfig.instance().cfg)\n assert isinstance(hydra_cfg, DictConfig)\n ret.hydra_cfg = hydra_cfg\n overrides = OmegaConf.to_container(config.hydra.overrides.task)\n assert isinstance(overrides, list)\n ret.overrides = overrides\n # handle output directories here\n Path(str(output_dir)).mkdir(parents=True, exist_ok=True)\n\n _chdir = hydra_cfg.hydra.job.chdir\n\n if _chdir is None:\n if version.base_at_least(\"1.2\"):\n _chdir = False\n\n if _chdir is None:\n url = \"https://hydra.cc/docs/next/upgrades/1.1_to_1.2/changes_to_job_working_dir/\"\n deprecation_warning(\n message=dedent(\n f\"\"\"\\\n Future Hydra versions will no longer change working directory at job runtime by default.\n See {url} for more information.\"\"\"\n ),\n stacklevel=2,\n )\n _chdir = True\n\n if _chdir:\n os.chdir(output_dir)\n ret.working_dir = output_dir\n else:\n ret.working_dir = os.getcwd()\n\n if configure_logging:\n configure_log(config.hydra.job_logging, config.hydra.verbose)\n\n if config.hydra.output_subdir is not None:\n hydra_output = Path(config.hydra.runtime.output_dir) / Path(\n config.hydra.output_subdir\n )\n _save_config(task_cfg, \"config.yaml\", hydra_output)\n _save_config(hydra_cfg, \"hydra.yaml\", hydra_output)\n _save_config(config.hydra.overrides.task, \"overrides.yaml\", hydra_output)\n\n with env_override(hydra_cfg.hydra.job.env_set):\n callbacks.on_job_start(config=config)\n try:\n ret.return_value = task_function(task_cfg)\n ret.status = JobStatus.COMPLETED\n except Exception as e:\n ret.return_value = e\n ret.status = JobStatus.FAILED\n\n ret.task_name = JobRuntime.instance().get(\"name\")\n\n _flush_loggers()\n\n callbacks.on_job_end(config=config, job_return=ret)\n\n return ret\n finally:\n HydraConfig.instance().cfg = orig_hydra_cfg\n if _chdir:\n os.chdir(old_cwd)\n\n\ndef get_valid_filename(s: str) 
-> str:\n s = str(s).strip().replace(\" \", \"_\")\n return re.sub(r\"(?u)[^-\\w.]\", \"\", s)\n\n\ndef setup_globals() -> None:\n # please add documentation when you add a new resolver\n OmegaConf.register_new_resolver(\n \"now\",\n lambda pattern: datetime.now().strftime(pattern),\n use_cache=True,\n replace=True,\n )\n OmegaConf.register_new_resolver(\n \"hydra\",\n lambda path: OmegaConf.select(cast(DictConfig, HydraConfig.get()), path),\n replace=True,\n )\n\n vi = sys.version_info\n version_dict = {\n \"major\": f\"{vi[0]}\",\n \"minor\": f\"{vi[0]}.{vi[1]}\",\n \"micro\": f\"{vi[0]}.{vi[1]}.{vi[2]}\",\n }\n OmegaConf.register_new_resolver(\n \"python_version\", lambda level=\"minor\": version_dict.get(level), replace=True\n )\n\n\nclass JobStatus(Enum):\n UNKNOWN = 0\n COMPLETED = 1\n FAILED = 2\n\n\n@dataclass\nclass JobReturn:\n overrides: Optional[Sequence[str]] = None\n cfg: Optional[DictConfig] = None\n hydra_cfg: Optional[DictConfig] = None\n working_dir: Optional[str] = None\n task_name: Optional[str] = None\n status: JobStatus = JobStatus.UNKNOWN\n _return_value: Any = None\n\n @property\n def return_value(self) -> Any:\n assert self.status != JobStatus.UNKNOWN, \"return_value not yet available\"\n if self.status == JobStatus.COMPLETED:\n return self._return_value\n else:\n sys.stderr.write(\n f\"Error executing job with overrides: {self.overrides}\" + os.linesep\n )\n raise self._return_value\n\n @return_value.setter\n def return_value(self, value: Any) -> None:\n self._return_value = value\n\n\nclass JobRuntime(metaclass=Singleton):\n def __init__(self) -> None:\n self.conf: DictConfig = OmegaConf.create()\n self.set(\"name\", \"UNKNOWN_NAME\")\n\n def get(self, key: str) -> Any:\n ret = OmegaConf.select(self.conf, key)\n if ret is None:\n raise KeyError(f\"Key not found in {type(self).__name__}: {key}\")\n return ret\n\n def set(self, key: str, value: Any) -> None:\n log.debug(f\"Setting {type(self).__name__}:{key}={value}\")\n self.conf[key] = value\n\n\ndef validate_config_path(config_path: Optional[str]) -> None:\n if config_path is not None:\n split_file = splitext(config_path)\n if split_file[1] in (\".yaml\", \".yml\"):\n msg = dedent(\n \"\"\"\\\n Using config_path to specify the config name is not supported, specify the config name via config_name.\n See https://hydra.cc/docs/next/upgrades/0.11_to_1.0/config_path_changes\n \"\"\"\n )\n raise ValueError(msg)\n\n\n@contextmanager\ndef env_override(env: Dict[str, str]) -> Any:\n \"\"\"Temporarily set environment variables inside the context manager and\n fully restore previous environment afterwards\n \"\"\"\n original_env = {key: os.getenv(key) for key in env}\n os.environ.update(env)\n try:\n yield\n finally:\n for key, value in original_env.items():\n if value is None:\n del os.environ[key]\n else:\n os.environ[key] = value\n\n\ndef _flush_loggers() -> None:\n # Python logging does not have an official API to flush all loggers.\n # This will have to do.\n for h_weak_ref in logging._handlerList: # type: ignore\n try:\n h_weak_ref().flush()\n except Exception:\n # ignore exceptions thrown during flushing\n pass\n", "path": "hydra/core/utils.py"}]} |
gh_patches_debug_1013 | rasdani/github-patches | git_diff | keras-team__keras-19387 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError: DTypePolicy.__new__() when deepcopy(layer_instance)
Hello,
I use `Python==3.11.8` with `keras==3.1.1`.
When I create a layer instance and try to deepcopy it, I receive the following error, which did not happen before.
```python
>>> import keras
>>> import copy
>>> layer_obj = keras.layers.Dense(1)
>>> copy.deepcopy(layer_obj)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py", line 146, in deepcopy
y = copier(x, memo)
^^^^^^^^^^^^^^^
File "/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py", line 265, in _reconstruct
y = func(*args)
^^^^^^^^^^^
File "/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copyreg.py", line 105, in __newobj__
return cls.__new__(cls, *args)
^^^^^^^^^^^^^^^^^^^^^^^
TypeError: DTypePolicy.__new__() missing 1 required positional argument: 'name'
```
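
For what it's worth, the failure seems to come from the generic `copy`/`pickle` reconstruction path: `copyreg.__newobj__` calls `cls.__new__(cls, *args)` with an empty `args` tuple unless the class provides the arguments (e.g. via `__getnewargs__`). A minimal, Keras-independent sketch (the class name here is made up for illustration) hits the same error:

```python
import copy


class Policy:
    # like DTypePolicy, __new__ takes a required positional argument
    def __new__(cls, name):
        return super().__new__(cls)

    def __init__(self, name):
        self._name = name


copy.deepcopy(Policy("float32"))
# TypeError: Policy.__new__() missing 1 required positional argument: 'name'
```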
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras/dtype_policies/dtype_policy.py`
Content:
```
1 from keras import backend
2 from keras import ops
3 from keras.api_export import keras_export
4 from keras.backend.common import global_state
5
6
7 @keras_export(
8 [
9 "keras.DTypePolicy",
10 "keras.dtype_policies.DTypePolicy",
11 "keras.mixed_precision.DTypePolicy", # Legacy
12 "keras.mixed_precision.Policy", # Legacy
13 ]
14 )
15 class DTypePolicy:
16 """A dtype policy for a Keras layer.
17
18 A dtype policy determines a layer's computation and variable dtypes. Each
19 layer has a policy. Policies can be passed to the `dtype` argument of layer
20 constructors, or a global policy can be set with
21 `keras.config.set_dtype_policy`.
22
23 Args:
24 name: The policy name, which determines the compute and variable dtypes.
25 Can be any dtype name, such as `"float32"` or `"float64"`,
26 which causes both the compute and variable dtypes
27 will be that dtype.
28 Can also be the string `"mixed_float16"` or `"mixed_bfloat16"`,
29 which causes the compute dtype to be `float16` or `bfloat16`
30 and the variable dtype to be `float32`.
31
32 Typically you only need to interact with dtype policies when using mixed
33 precision, which is the use of float16 or bfloat16 for computations and
34 float32 for variables. This is why the term `mixed_precision` appears in the
35 API name. Mixed precision can be enabled by passing `"mixed_float16"` or
36 `"mixed_bfloat16"` to `keras.mixed_precision.set_dtype_policy()`.
37
38 >>> keras.config.set_dtype_policy("mixed_float16")
39 >>> layer1 = keras.layers.Dense(10)
40 >>> layer1.dtype_policy # layer1 will automatically use mixed precision
41 <DTypePolicy "mixed_float16">
42 >>> # Can optionally override layer to use float32
43 >>> # instead of mixed precision.
44 >>> layer2 = keras.layers.Dense(10, dtype="float32")
45 >>> layer2.dtype_policy
46 <DTypePolicy "float32">
47 >>> # Set policy back to initial float32.
48 >>> keras.config.set_dtype_policy('float32')
49
50 In the example above, passing `dtype="float32"` to the layer is
51 equivalent to passing
52 `dtype=keras.config.DTypePolicy("float32")`.
53 In general, passing a dtype policy name to a layer is equivalent
54 to passing the corresponding policy, so it is never necessary
55 to explicitly construct a `DTypePolicy` object.
56 """
57
58 def __new__(cls, name):
59 if not isinstance(name, str):
60 raise TypeError(
61 "'name' must be a string, such as 'mixed_float16'. "
62 f"Received: name={name} (of type {type(name)})"
63 )
64 # For backwards compatibility
65 # TODO: We should consider deprecating this behavior
66 if cls is __class__:
67 if name.startswith("int8"):
68 return QuantizedDTypePolicy(name)
69 return FloatDTypePolicy(name)
70 return super().__new__(cls)
71
72 def __init__(self, name):
73 self._name = name
74 self._compute_dtype = backend.floatx()
75 self._variable_dtype = backend.floatx()
76
77 def _parse_name(self, name):
78 """Parses a `DTypePolicy` name into a compute and variable dtype.
79
80 Args:
81 name: The name of the policy.
82
83 Returns:
84 The `(compute_dtype, variable_dtype)` pair.
85 """
86 raise NotImplementedError
87
88 @property
89 def variable_dtype(self):
90 """The variable dtype of this policy.
91
92 This is the dtype layers will create their variables in, unless a layer
93 explicitly chooses a different dtype. If this is different than
94 `DTypePolicy.compute_dtype`, Layers will cast variables to
95 the compute dtype to avoid type errors.
96
97 Variable regularizers are run in the variable dtype, not the compute
98 dtype.
99
100 Returns:
101 The variable dtype of this policy, as a string.
102 """
103 return self._variable_dtype
104
105 @property
106 def compute_dtype(self):
107 """The compute dtype of this policy.
108
109 This is the dtype layers will do their computations in. Typically layers
110 output tensors with the compute dtype as well.
111
112 Note that even if the compute dtype is float16 or bfloat16, hardware
113 devices may not do individual adds, multiplies, and other fundamental
114 operations in float16 or bfloat16, but instead may do some of them in
115 float32 for numeric stability. The compute dtype is the dtype of the
116 inputs and outputs of the ops that the layer executes.
117 Internally, many ops will do certain internal calculations in
118 float32 or some other device-internal intermediate format with higher
119 precision than float16/bfloat16, to increase numeric stability.
120
121 Returns:
122 The compute dtype of this policy, as a string.
123 """
124 return self._compute_dtype
125
126 @property
127 def name(self):
128 """Returns the name of this policy."""
129 return self._name
130
131 def convert_input(self, x, autocast, dtype):
132 dtype = backend.standardize_dtype(dtype)
133 if backend.is_tensor(x):
134 if (
135 autocast
136 and backend.is_float_dtype(x.dtype)
137 and x.dtype != dtype
138 ):
139 x = backend.cast(x, dtype=dtype)
140 return x
141 elif backend.is_keras_tensor(x):
142 if (
143 autocast
144 and backend.is_float_dtype(x.dtype)
145 and x.dtype != dtype
146 ):
147 x.dtype = dtype
148 return x
149 elif hasattr(x, "__array__"):
150 return ops.convert_to_tensor(x, dtype=dtype)
151 return x
152
153 def get_config(self):
154 return {"name": self.name}
155
156 @classmethod
157 def from_config(cls, config):
158 return cls(**config)
159
160
161 @keras_export(
162 ["keras.FloatDTypePolicy", "keras.dtype_policies.FloatDTypePolicy"]
163 )
164 class FloatDTypePolicy(DTypePolicy):
165 def __init__(self, name):
166 super().__init__(name)
167 self._compute_dtype, self._variable_dtype = self._parse_name(name)
168 # TODO: check that the current hardware supports the provided
169 # dtype policy and raise/warn otherwise.
170
171 def _parse_name(self, name):
172 if name == "mixed_float16":
173 return "float16", "float32"
174 elif name == "mixed_bfloat16":
175 return "bfloat16", "float32"
176 try:
177 dtype = backend.standardize_dtype(name)
178 return dtype, dtype
179 except ValueError:
180 raise ValueError(
181 f"Cannot convert '{name}' to a mixed precision "
182 "FloatDTypePolicy. Valid policies include 'mixed_float16', "
183 "'mixed_bfloat16', and the name of any float dtype such as "
184 "'float32'."
185 )
186
187 def __repr__(self):
188 return f'<FloatDTypePolicy "{self._name}">'
189
190
191 @keras_export(
192 ["keras.QuantizedDTypePolicy", "keras.dtype_policies.QuantizedDTypePolicy"]
193 )
194 class QuantizedDTypePolicy(DTypePolicy):
195 def __init__(self, name):
196 super().__init__(name)
197 self._quantization_mode, self._compute_dtype, self._variable_dtype = (
198 self._parse_name(name)
199 )
200
201 def _parse_name(self, name):
202 error_msg = (
203 f"Cannot convert '{name}' to a QuantizedDTypePolicy. "
204 "Valid policies include "
205 "'int8_from_float32', 'int8_from_float16', 'int8_from_bfloat16', "
206 "'int8_from_mixed_float16', 'int8_from_mixed_bfloat16'."
207 )
208 split_name = name.split("_from_")
209 if len(split_name) != 2:
210 raise ValueError(error_msg)
211 mode, from_name = split_name
212 if mode not in ("int8",):
213 raise ValueError(error_msg)
214 if from_name == "mixed_float16":
215 return mode, "float16", "float32"
216 elif from_name == "mixed_bfloat16":
217 return mode, "bfloat16", "float32"
218 try:
219 dtype = backend.standardize_dtype(from_name)
220 return mode, dtype, dtype
221 except ValueError:
222 raise ValueError(error_msg)
223
224 @property
225 def quantization_mode(self):
226 """The quantization mode of this policy.
227
228 Returns:
229 The quantization mode of this policy, as a string.
230 """
231 return self._quantization_mode
232
233 def __repr__(self):
234 return f'<QuantizedDTypePolicy "{self._name}">'
235
236
237 @keras_export(
238 [
239 "keras.config.set_dtype_policy",
240 "keras.mixed_precision.set_dtype_policy", # Legacy
241 "keras.mixed_precision.set_global_policy", # Legacy
242 ]
243 )
244 def set_dtype_policy(policy):
245 """Sets the default dtype policy globally.
246
247 Example:
248
249 >>> keras.config.set_dtype_policy("mixed_float16")
250 """
251 if not isinstance(policy, DTypePolicy):
252 if isinstance(policy, str):
253 if policy.startswith("int8"):
254 policy = QuantizedDTypePolicy(policy)
255 else:
256 policy = FloatDTypePolicy(policy)
257 else:
258 raise ValueError(
259 "Invalid `policy` argument. "
260 "Expected the string name of a policy "
261 "(such as 'mixed_float16') or a `DTypePolicy` "
262 f"instance. Received: policy={policy} "
263 f"(of type {type(policy)})"
264 )
265 global_state.set_global_attribute("dtype_policy", policy)
266
267
268 @keras_export(
269 [
270 "keras.config.dtype_policy",
271 "keras.mixed_precision.dtype_policy", # Legacy
272 "keras.mixed_precision.global_policy", # Legacy
273 ]
274 )
275 def dtype_policy():
276 """Returns the current default dtype policy object."""
277 policy = global_state.get_global_attribute("dtype_policy", None)
278 if policy is None:
279 policy = FloatDTypePolicy(backend.floatx())
280 set_dtype_policy(policy)
281 return policy
282
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/keras/dtype_policies/dtype_policy.py b/keras/dtype_policies/dtype_policy.py
--- a/keras/dtype_policies/dtype_policy.py
+++ b/keras/dtype_policies/dtype_policy.py
@@ -69,6 +69,10 @@
return FloatDTypePolicy(name)
return super().__new__(cls)
+ def __getnewargs__(self):
+ # To support `copy`, `deepcopy` and `pickle`
+ return (self._name,)
+
def __init__(self, name):
self._name = name
self._compute_dtype = backend.floatx()
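Assuming the `__getnewargs__` addition above is applied, a quick sanity check (a sketch, not an official test) would look something like this:

```python
import copy
import pickle

import keras

# the layer-level repro from the issue no longer hits DTypePolicy.__new__()
layer = keras.layers.Dense(1)
clone = copy.deepcopy(layer)

# the policy itself now round-trips through deepcopy and pickle
policy = keras.dtype_policies.DTypePolicy("mixed_float16")
assert copy.deepcopy(policy).name == "mixed_float16"
assert pickle.loads(pickle.dumps(policy)).name == "mixed_float16"
```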
| {"golden_diff": "diff --git a/keras/dtype_policies/dtype_policy.py b/keras/dtype_policies/dtype_policy.py\n--- a/keras/dtype_policies/dtype_policy.py\n+++ b/keras/dtype_policies/dtype_policy.py\n@@ -69,6 +69,10 @@\n return FloatDTypePolicy(name)\n return super().__new__(cls)\n \n+ def __getnewargs__(self):\n+ # To support `copy`, `deepcopy` and `pickle`\n+ return (self._name,)\n+\n def __init__(self, name):\n self._name = name\n self._compute_dtype = backend.floatx()\n", "issue": "TypeError: DTypePolicy.__new__() when deepcopy(layer_instance)\nHello,\r\n\r\nI use `Python==3.11.8` with `keras==3.1.1`.\r\n\r\nWhen I create a layer instance and try to deepcopy this layer I receive the following error which did not happen before.\r\n\r\n\r\n```python\r\n>>> import keras\r\n>>> import copy\r\n>>> layer_obj = keras.layers.Dense(1)\r\n>>> copy.deepcopy(layer_obj)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 172, in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 271, in _reconstruct\r\n state = deepcopy(state, memo)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 146, in deepcopy\r\n y = copier(x, memo)\r\n ^^^^^^^^^^^^^^^\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 231, in _deepcopy_dict\r\n y[deepcopy(key, memo)] = deepcopy(value, memo)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 172, in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 265, in _reconstruct\r\n y = func(*args)\r\n ^^^^^^^^^^^\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copyreg.py\", line 105, in __newobj__\r\n return cls.__new__(cls, *args)\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\nTypeError: DTypePolicy.__new__() missing 1 required positional argument: 'name'\r\n>>> >>> copy.deepcopy(layer_obj)\r\n File \"<stdin>\", line 1\r\n >>> copy.deepcopy(layer_obj)\r\n ^^\r\nSyntaxError: invalid syntax\r\n>>> Traceback (most recent call last):\r\n File \"<stdin>\", line 1\r\n Traceback (most recent call last):\r\n ^^^^^^^^^^^\r\nSyntaxError: invalid syntax. 
Perhaps you forgot a comma?\r\n>>> File \"<stdin>\", line 1, in <module>\r\n File \"<stdin>\", line 1\r\n File \"<stdin>\", line 1, in <module>\r\nIndentationError: unexpected indent\r\n>>> File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 172, in deepcopy\r\n File \"<stdin>\", line 1\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 172, in deepcopy\r\nIndentationError: unexpected indent\r\n>>> y = _reconstruct(x, memo, *rv)\r\n File \"<stdin>\", line 1\r\n y = _reconstruct(x, memo, *rv)\r\nIndentationError: unexpected indent\r\n>>> ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"<stdin>\", line 1\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nIndentationError: unexpected indent\r\n>>> File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 271, in _reconstruct\r\n File \"<stdin>\", line 1\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 271, in _reconstruct\r\nIndentationError: unexpected indent\r\n>>> state = deepcopy(state, memo)\r\n File \"<stdin>\", line 1\r\n state = deepcopy(state, memo)\r\nIndentationError: unexpected indent\r\n>>> ^^^^^^^^^^^^^^^^^^^^^\r\n File \"<stdin>\", line 1\r\n ^^^^^^^^^^^^^^^^^^^^^\r\nIndentationError: unexpected indent\r\n>>> File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 146, in deepcopy\r\n File \"<stdin>\", line 1\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 146, in deepcopy\r\nIndentationError: unexpected indent\r\n>>> y = copier(x, memo)\r\n File \"<stdin>\", line 1\r\n y = copier(x, memo)\r\nIndentationError: unexpected indent\r\n>>> ^^^^^^^^^^^^^^^\r\n File \"<stdin>\", line 1\r\n ^^^^^^^^^^^^^^^\r\nIndentationError: unexpected indent\r\n>>> File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 231, in _deepcopy_dict\r\n File \"<stdin>\", line 1\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 231, in _deepcopy_dict\r\nIndentationError: unexpected indent\r\n>>> y[deepcopy(key, memo)] = deepcopy(value, memo)\r\n File \"<stdin>\", line 1\r\n y[deepcopy(key, memo)] = deepcopy(value, memo)\r\nIndentationError: unexpected indent\r\n>>> ^^^^^^^^^^^^^^^^^^^^^\r\n File \"<stdin>\", line 1\r\n ^^^^^^^^^^^^^^^^^^^^^\r\nIndentationError: unexpected indent\r\n>>> File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 172, in deepcopy\r\n File \"<stdin>\", line 1\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 172, in deepcopy\r\nIndentationError: unexpected indent\r\n>>> y = _reconstruct(x, memo, *rv)\r\n File \"<stdin>\", line 1\r\n y = _reconstruct(x, memo, *rv)\r\nIndentationError: unexpected indent\r\n>>> ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"<stdin>\", line 1\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nIndentationError: unexpected indent\r\n>>> File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 265, in _reconstruct\r\n File \"<stdin>\", line 1\r\n File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copy.py\", line 265, in _reconstruct\r\nIndentationError: unexpected indent\r\n>>> y = func(*args)\r\n File \"<stdin>\", line 1\r\n y = func(*args)\r\nIndentationError: unexpected indent\r\n>>> ^^^^^^^^^^^\r\n File \"<stdin>\", line 1\r\n ^^^^^^^^^^^\r\nIndentationError: unexpected indent\r\n>>> File \"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copyreg.py\", line 105, in __newobj__\r\n File \"<stdin>\", line 1\r\n File 
\"/Users/romainegele/miniforge3/envs/dlp/lib/python3.11/copyreg.py\", line 105, in __newobj__\r\nIndentationError: unexpected indent\r\n>>> return cls.__new__(cls, *args)\r\n File \"<stdin>\", line 1\r\n return cls.__new__(cls, *args)\r\nIndentationError: unexpected indent\r\n>>> ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"<stdin>\", line 1\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\nIndentationError: unexpected indent\r\n>>> TypeError: DTypePolicy.__new__() missing 1 required positional argument: 'name'\r\n```\n", "before_files": [{"content": "from keras import backend\nfrom keras import ops\nfrom keras.api_export import keras_export\nfrom keras.backend.common import global_state\n\n\n@keras_export(\n [\n \"keras.DTypePolicy\",\n \"keras.dtype_policies.DTypePolicy\",\n \"keras.mixed_precision.DTypePolicy\", # Legacy\n \"keras.mixed_precision.Policy\", # Legacy\n ]\n)\nclass DTypePolicy:\n \"\"\"A dtype policy for a Keras layer.\n\n A dtype policy determines a layer's computation and variable dtypes. Each\n layer has a policy. Policies can be passed to the `dtype` argument of layer\n constructors, or a global policy can be set with\n `keras.config.set_dtype_policy`.\n\n Args:\n name: The policy name, which determines the compute and variable dtypes.\n Can be any dtype name, such as `\"float32\"` or `\"float64\"`,\n which causes both the compute and variable dtypes\n will be that dtype.\n Can also be the string `\"mixed_float16\"` or `\"mixed_bfloat16\"`,\n which causes the compute dtype to be `float16` or `bfloat16`\n and the variable dtype to be `float32`.\n\n Typically you only need to interact with dtype policies when using mixed\n precision, which is the use of float16 or bfloat16 for computations and\n float32 for variables. This is why the term `mixed_precision` appears in the\n API name. Mixed precision can be enabled by passing `\"mixed_float16\"` or\n `\"mixed_bfloat16\"` to `keras.mixed_precision.set_dtype_policy()`.\n\n >>> keras.config.set_dtype_policy(\"mixed_float16\")\n >>> layer1 = keras.layers.Dense(10)\n >>> layer1.dtype_policy # layer1 will automatically use mixed precision\n <DTypePolicy \"mixed_float16\">\n >>> # Can optionally override layer to use float32\n >>> # instead of mixed precision.\n >>> layer2 = keras.layers.Dense(10, dtype=\"float32\")\n >>> layer2.dtype_policy\n <DTypePolicy \"float32\">\n >>> # Set policy back to initial float32.\n >>> keras.config.set_dtype_policy('float32')\n\n In the example above, passing `dtype=\"float32\"` to the layer is\n equivalent to passing\n `dtype=keras.config.DTypePolicy(\"float32\")`.\n In general, passing a dtype policy name to a layer is equivalent\n to passing the corresponding policy, so it is never necessary\n to explicitly construct a `DTypePolicy` object.\n \"\"\"\n\n def __new__(cls, name):\n if not isinstance(name, str):\n raise TypeError(\n \"'name' must be a string, such as 'mixed_float16'. 
\"\n f\"Received: name={name} (of type {type(name)})\"\n )\n # For backwards compatibility\n # TODO: We should consider deprecating this behavior\n if cls is __class__:\n if name.startswith(\"int8\"):\n return QuantizedDTypePolicy(name)\n return FloatDTypePolicy(name)\n return super().__new__(cls)\n\n def __init__(self, name):\n self._name = name\n self._compute_dtype = backend.floatx()\n self._variable_dtype = backend.floatx()\n\n def _parse_name(self, name):\n \"\"\"Parses a `DTypePolicy` name into a compute and variable dtype.\n\n Args:\n name: The name of the policy.\n\n Returns:\n The `(compute_dtype, variable_dtype)` pair.\n \"\"\"\n raise NotImplementedError\n\n @property\n def variable_dtype(self):\n \"\"\"The variable dtype of this policy.\n\n This is the dtype layers will create their variables in, unless a layer\n explicitly chooses a different dtype. If this is different than\n `DTypePolicy.compute_dtype`, Layers will cast variables to\n the compute dtype to avoid type errors.\n\n Variable regularizers are run in the variable dtype, not the compute\n dtype.\n\n Returns:\n The variable dtype of this policy, as a string.\n \"\"\"\n return self._variable_dtype\n\n @property\n def compute_dtype(self):\n \"\"\"The compute dtype of this policy.\n\n This is the dtype layers will do their computations in. Typically layers\n output tensors with the compute dtype as well.\n\n Note that even if the compute dtype is float16 or bfloat16, hardware\n devices may not do individual adds, multiplies, and other fundamental\n operations in float16 or bfloat16, but instead may do some of them in\n float32 for numeric stability. The compute dtype is the dtype of the\n inputs and outputs of the ops that the layer executes.\n Internally, many ops will do certain internal calculations in\n float32 or some other device-internal intermediate format with higher\n precision than float16/bfloat16, to increase numeric stability.\n\n Returns:\n The compute dtype of this policy, as a string.\n \"\"\"\n return self._compute_dtype\n\n @property\n def name(self):\n \"\"\"Returns the name of this policy.\"\"\"\n return self._name\n\n def convert_input(self, x, autocast, dtype):\n dtype = backend.standardize_dtype(dtype)\n if backend.is_tensor(x):\n if (\n autocast\n and backend.is_float_dtype(x.dtype)\n and x.dtype != dtype\n ):\n x = backend.cast(x, dtype=dtype)\n return x\n elif backend.is_keras_tensor(x):\n if (\n autocast\n and backend.is_float_dtype(x.dtype)\n and x.dtype != dtype\n ):\n x.dtype = dtype\n return x\n elif hasattr(x, \"__array__\"):\n return ops.convert_to_tensor(x, dtype=dtype)\n return x\n\n def get_config(self):\n return {\"name\": self.name}\n\n @classmethod\n def from_config(cls, config):\n return cls(**config)\n\n\n@keras_export(\n [\"keras.FloatDTypePolicy\", \"keras.dtype_policies.FloatDTypePolicy\"]\n)\nclass FloatDTypePolicy(DTypePolicy):\n def __init__(self, name):\n super().__init__(name)\n self._compute_dtype, self._variable_dtype = self._parse_name(name)\n # TODO: check that the current hardware supports the provided\n # dtype policy and raise/warn otherwise.\n\n def _parse_name(self, name):\n if name == \"mixed_float16\":\n return \"float16\", \"float32\"\n elif name == \"mixed_bfloat16\":\n return \"bfloat16\", \"float32\"\n try:\n dtype = backend.standardize_dtype(name)\n return dtype, dtype\n except ValueError:\n raise ValueError(\n f\"Cannot convert '{name}' to a mixed precision \"\n \"FloatDTypePolicy. 
Valid policies include 'mixed_float16', \"\n \"'mixed_bfloat16', and the name of any float dtype such as \"\n \"'float32'.\"\n )\n\n def __repr__(self):\n return f'<FloatDTypePolicy \"{self._name}\">'\n\n\n@keras_export(\n [\"keras.QuantizedDTypePolicy\", \"keras.dtype_policies.QuantizedDTypePolicy\"]\n)\nclass QuantizedDTypePolicy(DTypePolicy):\n def __init__(self, name):\n super().__init__(name)\n self._quantization_mode, self._compute_dtype, self._variable_dtype = (\n self._parse_name(name)\n )\n\n def _parse_name(self, name):\n error_msg = (\n f\"Cannot convert '{name}' to a QuantizedDTypePolicy. \"\n \"Valid policies include \"\n \"'int8_from_float32', 'int8_from_float16', 'int8_from_bfloat16', \"\n \"'int8_from_mixed_float16', 'int8_from_mixed_bfloat16'.\"\n )\n split_name = name.split(\"_from_\")\n if len(split_name) != 2:\n raise ValueError(error_msg)\n mode, from_name = split_name\n if mode not in (\"int8\",):\n raise ValueError(error_msg)\n if from_name == \"mixed_float16\":\n return mode, \"float16\", \"float32\"\n elif from_name == \"mixed_bfloat16\":\n return mode, \"bfloat16\", \"float32\"\n try:\n dtype = backend.standardize_dtype(from_name)\n return mode, dtype, dtype\n except ValueError:\n raise ValueError(error_msg)\n\n @property\n def quantization_mode(self):\n \"\"\"The quantization mode of this policy.\n\n Returns:\n The quantization mode of this policy, as a string.\n \"\"\"\n return self._quantization_mode\n\n def __repr__(self):\n return f'<QuantizedDTypePolicy \"{self._name}\">'\n\n\n@keras_export(\n [\n \"keras.config.set_dtype_policy\",\n \"keras.mixed_precision.set_dtype_policy\", # Legacy\n \"keras.mixed_precision.set_global_policy\", # Legacy\n ]\n)\ndef set_dtype_policy(policy):\n \"\"\"Sets the default dtype policy globally.\n\n Example:\n\n >>> keras.config.set_dtype_policy(\"mixed_float16\")\n \"\"\"\n if not isinstance(policy, DTypePolicy):\n if isinstance(policy, str):\n if policy.startswith(\"int8\"):\n policy = QuantizedDTypePolicy(policy)\n else:\n policy = FloatDTypePolicy(policy)\n else:\n raise ValueError(\n \"Invalid `policy` argument. \"\n \"Expected the string name of a policy \"\n \"(such as 'mixed_float16') or a `DTypePolicy` \"\n f\"instance. Received: policy={policy} \"\n f\"(of type {type(policy)})\"\n )\n global_state.set_global_attribute(\"dtype_policy\", policy)\n\n\n@keras_export(\n [\n \"keras.config.dtype_policy\",\n \"keras.mixed_precision.dtype_policy\", # Legacy\n \"keras.mixed_precision.global_policy\", # Legacy\n ]\n)\ndef dtype_policy():\n \"\"\"Returns the current default dtype policy object.\"\"\"\n policy = global_state.get_global_attribute(\"dtype_policy\", None)\n if policy is None:\n policy = FloatDTypePolicy(backend.floatx())\n set_dtype_policy(policy)\n return policy\n", "path": "keras/dtype_policies/dtype_policy.py"}], "after_files": [{"content": "from keras import backend\nfrom keras import ops\nfrom keras.api_export import keras_export\nfrom keras.backend.common import global_state\n\n\n@keras_export(\n [\n \"keras.DTypePolicy\",\n \"keras.dtype_policies.DTypePolicy\",\n \"keras.mixed_precision.DTypePolicy\", # Legacy\n \"keras.mixed_precision.Policy\", # Legacy\n ]\n)\nclass DTypePolicy:\n \"\"\"A dtype policy for a Keras layer.\n\n A dtype policy determines a layer's computation and variable dtypes. Each\n layer has a policy. 
Policies can be passed to the `dtype` argument of layer\n constructors, or a global policy can be set with\n `keras.config.set_dtype_policy`.\n\n Args:\n name: The policy name, which determines the compute and variable dtypes.\n Can be any dtype name, such as `\"float32\"` or `\"float64\"`,\n which causes both the compute and variable dtypes\n will be that dtype.\n Can also be the string `\"mixed_float16\"` or `\"mixed_bfloat16\"`,\n which causes the compute dtype to be `float16` or `bfloat16`\n and the variable dtype to be `float32`.\n\n Typically you only need to interact with dtype policies when using mixed\n precision, which is the use of float16 or bfloat16 for computations and\n float32 for variables. This is why the term `mixed_precision` appears in the\n API name. Mixed precision can be enabled by passing `\"mixed_float16\"` or\n `\"mixed_bfloat16\"` to `keras.mixed_precision.set_dtype_policy()`.\n\n >>> keras.config.set_dtype_policy(\"mixed_float16\")\n >>> layer1 = keras.layers.Dense(10)\n >>> layer1.dtype_policy # layer1 will automatically use mixed precision\n <DTypePolicy \"mixed_float16\">\n >>> # Can optionally override layer to use float32\n >>> # instead of mixed precision.\n >>> layer2 = keras.layers.Dense(10, dtype=\"float32\")\n >>> layer2.dtype_policy\n <DTypePolicy \"float32\">\n >>> # Set policy back to initial float32.\n >>> keras.config.set_dtype_policy('float32')\n\n In the example above, passing `dtype=\"float32\"` to the layer is\n equivalent to passing\n `dtype=keras.config.DTypePolicy(\"float32\")`.\n In general, passing a dtype policy name to a layer is equivalent\n to passing the corresponding policy, so it is never necessary\n to explicitly construct a `DTypePolicy` object.\n \"\"\"\n\n def __new__(cls, name):\n if not isinstance(name, str):\n raise TypeError(\n \"'name' must be a string, such as 'mixed_float16'. \"\n f\"Received: name={name} (of type {type(name)})\"\n )\n # For backwards compatibility\n # TODO: We should consider deprecating this behavior\n if cls is __class__:\n if name.startswith(\"int8\"):\n return QuantizedDTypePolicy(name)\n return FloatDTypePolicy(name)\n return super().__new__(cls)\n\n def __getnewargs__(self):\n # To support `copy`, `deepcopy` and `pickle`\n return (self._name,)\n\n def __init__(self, name):\n self._name = name\n self._compute_dtype = backend.floatx()\n self._variable_dtype = backend.floatx()\n\n def _parse_name(self, name):\n \"\"\"Parses a `DTypePolicy` name into a compute and variable dtype.\n\n Args:\n name: The name of the policy.\n\n Returns:\n The `(compute_dtype, variable_dtype)` pair.\n \"\"\"\n raise NotImplementedError\n\n @property\n def variable_dtype(self):\n \"\"\"The variable dtype of this policy.\n\n This is the dtype layers will create their variables in, unless a layer\n explicitly chooses a different dtype. If this is different than\n `DTypePolicy.compute_dtype`, Layers will cast variables to\n the compute dtype to avoid type errors.\n\n Variable regularizers are run in the variable dtype, not the compute\n dtype.\n\n Returns:\n The variable dtype of this policy, as a string.\n \"\"\"\n return self._variable_dtype\n\n @property\n def compute_dtype(self):\n \"\"\"The compute dtype of this policy.\n\n This is the dtype layers will do their computations in. 
Typically layers\n output tensors with the compute dtype as well.\n\n Note that even if the compute dtype is float16 or bfloat16, hardware\n devices may not do individual adds, multiplies, and other fundamental\n operations in float16 or bfloat16, but instead may do some of them in\n float32 for numeric stability. The compute dtype is the dtype of the\n inputs and outputs of the ops that the layer executes.\n Internally, many ops will do certain internal calculations in\n float32 or some other device-internal intermediate format with higher\n precision than float16/bfloat16, to increase numeric stability.\n\n Returns:\n The compute dtype of this policy, as a string.\n \"\"\"\n return self._compute_dtype\n\n @property\n def name(self):\n \"\"\"Returns the name of this policy.\"\"\"\n return self._name\n\n def convert_input(self, x, autocast, dtype):\n dtype = backend.standardize_dtype(dtype)\n if backend.is_tensor(x):\n if (\n autocast\n and backend.is_float_dtype(x.dtype)\n and x.dtype != dtype\n ):\n x = backend.cast(x, dtype=dtype)\n return x\n elif backend.is_keras_tensor(x):\n if (\n autocast\n and backend.is_float_dtype(x.dtype)\n and x.dtype != dtype\n ):\n x.dtype = dtype\n return x\n elif hasattr(x, \"__array__\"):\n return ops.convert_to_tensor(x, dtype=dtype)\n return x\n\n def get_config(self):\n return {\"name\": self.name}\n\n @classmethod\n def from_config(cls, config):\n return cls(**config)\n\n\n@keras_export(\n [\"keras.FloatDTypePolicy\", \"keras.dtype_policies.FloatDTypePolicy\"]\n)\nclass FloatDTypePolicy(DTypePolicy):\n def __init__(self, name):\n super().__init__(name)\n self._compute_dtype, self._variable_dtype = self._parse_name(name)\n # TODO: check that the current hardware supports the provided\n # dtype policy and raise/warn otherwise.\n\n def _parse_name(self, name):\n if name == \"mixed_float16\":\n return \"float16\", \"float32\"\n elif name == \"mixed_bfloat16\":\n return \"bfloat16\", \"float32\"\n try:\n dtype = backend.standardize_dtype(name)\n return dtype, dtype\n except ValueError:\n raise ValueError(\n f\"Cannot convert '{name}' to a mixed precision \"\n \"FloatDTypePolicy. Valid policies include 'mixed_float16', \"\n \"'mixed_bfloat16', and the name of any float dtype such as \"\n \"'float32'.\"\n )\n\n def __repr__(self):\n return f'<FloatDTypePolicy \"{self._name}\">'\n\n\n@keras_export(\n [\"keras.QuantizedDTypePolicy\", \"keras.dtype_policies.QuantizedDTypePolicy\"]\n)\nclass QuantizedDTypePolicy(DTypePolicy):\n def __init__(self, name):\n super().__init__(name)\n self._quantization_mode, self._compute_dtype, self._variable_dtype = (\n self._parse_name(name)\n )\n\n def _parse_name(self, name):\n error_msg = (\n f\"Cannot convert '{name}' to a QuantizedDTypePolicy. 
\"\n \"Valid policies include \"\n \"'int8_from_float32', 'int8_from_float16', 'int8_from_bfloat16', \"\n \"'int8_from_mixed_float16', 'int8_from_mixed_bfloat16'.\"\n )\n split_name = name.split(\"_from_\")\n if len(split_name) != 2:\n raise ValueError(error_msg)\n mode, from_name = split_name\n if mode not in (\"int8\",):\n raise ValueError(error_msg)\n if from_name == \"mixed_float16\":\n return mode, \"float16\", \"float32\"\n elif from_name == \"mixed_bfloat16\":\n return mode, \"bfloat16\", \"float32\"\n try:\n dtype = backend.standardize_dtype(from_name)\n return mode, dtype, dtype\n except ValueError:\n raise ValueError(error_msg)\n\n @property\n def quantization_mode(self):\n \"\"\"The quantization mode of this policy.\n\n Returns:\n The quantization mode of this policy, as a string.\n \"\"\"\n return self._quantization_mode\n\n def __repr__(self):\n return f'<QuantizedDTypePolicy \"{self._name}\">'\n\n\n@keras_export(\n [\n \"keras.config.set_dtype_policy\",\n \"keras.mixed_precision.set_dtype_policy\", # Legacy\n \"keras.mixed_precision.set_global_policy\", # Legacy\n ]\n)\ndef set_dtype_policy(policy):\n \"\"\"Sets the default dtype policy globally.\n\n Example:\n\n >>> keras.config.set_dtype_policy(\"mixed_float16\")\n \"\"\"\n if not isinstance(policy, DTypePolicy):\n if isinstance(policy, str):\n if policy.startswith(\"int8\"):\n policy = QuantizedDTypePolicy(policy)\n else:\n policy = FloatDTypePolicy(policy)\n else:\n raise ValueError(\n \"Invalid `policy` argument. \"\n \"Expected the string name of a policy \"\n \"(such as 'mixed_float16') or a `DTypePolicy` \"\n f\"instance. Received: policy={policy} \"\n f\"(of type {type(policy)})\"\n )\n global_state.set_global_attribute(\"dtype_policy\", policy)\n\n\n@keras_export(\n [\n \"keras.config.dtype_policy\",\n \"keras.mixed_precision.dtype_policy\", # Legacy\n \"keras.mixed_precision.global_policy\", # Legacy\n ]\n)\ndef dtype_policy():\n \"\"\"Returns the current default dtype policy object.\"\"\"\n policy = global_state.get_global_attribute(\"dtype_policy\", None)\n if policy is None:\n policy = FloatDTypePolicy(backend.floatx())\n set_dtype_policy(policy)\n return policy\n", "path": "keras/dtype_policies/dtype_policy.py"}]} |
gh_patches_debug_1014 | rasdani/github-patches | git_diff | google__turbinia-602 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Configuration file not behaving as expected
I was struggling a bit today with having the configuration file mapped to what I needed it to be (for launching dftimewolf with a one-off configuration of Turbinia).
My `~/.turbiniarc` is set to what I want, but the config is still picked up from "whatever file it can find" in the directory pointed to by `TURBINIA_CONFIG_PATH` (specified in `ENVCONFIGVAR`).
This happens because when importing `evidence` (e.g. `from turbinia import evidence`), `LoadConfig` is called with no parameters, and thus populates the config with whatever files it can find there. Subsequent calls to `LoadConfig`, even when passing a `config_file`, will still return this first configuration because it has already been loaded.
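A rough illustration of what I am seeing (the config path below is just a placeholder):

```python
from turbinia import evidence  # import side effect: LoadConfig() runs with no arguments
from turbinia import config

cfg = config.LoadConfig(config_file='/path/to/one_off_config.py')
# cfg is still the config found via TURBINIA_CONFIG_PATH / CONFIGPATH,
# because the module-level CONFIG cache was already populated above.
```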
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `turbinia/config/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2016 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Basic Turbinia config."""
16
17 from __future__ import unicode_literals
18
19 import imp
20 import itertools
21 import logging
22 import os
23 import sys
24
25 from turbinia import TurbiniaException
26
27 DATETIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'
28
29 # Look for config files with these names
30 CONFIGFILES = ['.turbiniarc', 'turbinia.conf', 'turbinia_config_tmpl.py']
31 # Look in homedir first, then /etc/turbinia
32 CONFIGPATH = [
33 os.path.expanduser('~'),
34 '/etc/turbinia',
35 os.path.dirname(os.path.abspath(__file__)),
36 ]
37 # Config setup reminder for cleaner error handling on empty configs.
38 CONFIG_MSG = (
39 'Copy turbinia/config/turbinia_config_tmpl.py to ~/.turbiniarc '
40 'or /etc/turbinia/turbinia.conf, edit, and re-run.')
41
42 # Required config vars
43 REQUIRED_VARS = [
44 # Turbinia Config
45 'INSTANCE_ID',
46 'STATE_MANAGER',
47 'TASK_MANAGER',
48 'LOG_FILE',
49 'LOCK_FILE',
50 'OUTPUT_DIR',
51 'TMP_DIR',
52 'SLEEP_TIME',
53 'SINGLE_RUN',
54 'MOUNT_DIR_PREFIX',
55 'SHARED_FILESYSTEM',
56 'DEBUG_TASKS',
57 'DEPENDENCIES',
58 'DOCKER_ENABLED',
59 'DISABLED_JOBS',
60 ]
61
62 # Optional config vars. Some may be mandatory depending on the configuration
63 # (e.g. if TASK_MANAGER is set to 'PSQ', then the GCE Config variables are
64 # required), but these requirements are not enforced.
65 OPTIONAL_VARS = [
66 # GCE CONFIG
67 'TURBINIA_PROJECT',
68 'TURBINIA_ZONE',
69 'TURBINIA_REGION',
70 'BUCKET_NAME',
71 'PSQ_TOPIC',
72 'PUBSUB_TOPIC',
73 'GCS_OUTPUT_PATH',
74 'STACKDRIVER_LOGGING',
75 'STACKDRIVER_TRACEBACK',
76 # REDIS CONFIG
77 'REDIS_HOST',
78 'REDIS_PORT',
79 'REDIS_DB',
80 # Celery config
81 'CELERY_BROKER',
82 'CELERY_BACKEND',
83 'KOMBU_BROKER',
84 'KOMBU_CHANNEL',
85 'KOMBU_DURABLE',
86 # Email config
87 'EMAIL_NOTIFICATIONS',
88 'EMAIL_HOST_ADDRESS',
89 'EMAIL_PORT',
90 'EMAIL_ADDRESS',
91 'EMAIL_PASSWORD',
92 ]
93
94 # Environment variable to look for path data in
95 ENVCONFIGVAR = 'TURBINIA_CONFIG_PATH'
96
97 CONFIG = None
98
99 log = logging.getLogger('turbinia')
100
101
102 def LoadConfig(config_file=None):
103 """Finds Turbinia config file and loads it.
104
105 Args:
106 config_file(str): full path to config file
107 """
108 # TODO(aarontp): Find way to not require global var here. Maybe a singleton
109 # pattern on the config class.
110 # pylint: disable=global-statement
111 global CONFIG
112 if CONFIG:
113 log.debug(
114 'Returning cached config from {0:s} instead of reloading config'.format(
115 CONFIG.configSource))
116 return CONFIG
117
118 if not config_file:
119 log.debug('No config specified. Looking in default locations for config.')
120 # If the environment variable is set, take precedence over the pre-defined
121 # CONFIGPATHs.
122 configpath = CONFIGPATH
123 if ENVCONFIGVAR in os.environ:
124 configpath = os.environ[ENVCONFIGVAR].split(':')
125
126 # Load first file found
127 for _dir, _file in itertools.product(configpath, CONFIGFILES):
128 if os.path.exists(os.path.join(_dir, _file)):
129 config_file = os.path.join(_dir, _file)
130 break
131
132 if config_file is None:
133 raise TurbiniaException('No config files found')
134
135 log.debug('Loading config from {0:s}'.format(config_file))
136 # Warn about using fallback source config, but it's currently necessary for
137 # tests. See issue #446.
138 if 'turbinia_config_tmpl' in config_file:
139 log.warning('Using fallback source config. {0:s}'.format(CONFIG_MSG))
140 try:
141 _config = imp.load_source('config', config_file)
142 except IOError as exception:
143 message = (
144 'Could not load config file {0:s}: {1!s}'.format(
145 config_file, exception))
146 log.error(message)
147 raise TurbiniaException(message)
148
149 _config.configSource = config_file
150 ValidateAndSetConfig(_config)
151
152 # Set the environment var for this so that we don't see the "No project ID
153 # could be determined." warning later.
154 if hasattr(_config, 'TURBINIA_PROJECT') and _config.TURBINIA_PROJECT:
155 os.environ['GOOGLE_CLOUD_PROJECT'] = _config.TURBINIA_PROJECT
156
157 CONFIG = _config
158 log.debug(
159 'Returning parsed config loaded from {0:s}'.format(CONFIG.configSource))
160 return _config
161
162
163 def ValidateAndSetConfig(_config):
164 """Makes sure that the config has the vars loaded and set in the module."""
165 # Explicitly set the config path
166 setattr(sys.modules[__name__], 'configSource', _config.configSource)
167
168 CONFIGVARS = REQUIRED_VARS + OPTIONAL_VARS
169 for var in CONFIGVARS:
170 empty_value = False
171 if not hasattr(_config, var):
172 if var in OPTIONAL_VARS:
173 log.debug(
174 'Setting non-existent but optional config variable {0:s} to '
175 'None'.format(var))
176 empty_value = True
177 else:
178 raise TurbiniaException(
179 'Required config attribute {0:s}:{1:s} not in config'.format(
180 _config.configSource, var))
181 if var in REQUIRED_VARS and getattr(_config, var) is None:
182 raise TurbiniaException(
183 'Config attribute {0:s}:{1:s} is not set'.format(
184 _config.configSource, var))
185
186 # Set the attribute in the current module
187 if empty_value:
188 setattr(sys.modules[__name__], var, None)
189 else:
190 setattr(sys.modules[__name__], var, getattr(_config, var))
191
192
193 def ParseDependencies():
194 """Parses the config file DEPENDENCIES variable.
195
196 Raises:
197 TurbiniaException: If bad config file.
198
199 Returns:
200 dependencies(dict): The parsed dependency values.
201 """
202 dependencies = {}
203 try:
204 for values in CONFIG.DEPENDENCIES:
205 job = values['job'].lower()
206 dependencies[job] = {}
207 dependencies[job]['programs'] = values['programs']
208 dependencies[job]['docker_image'] = values.get('docker_image')
209 except (KeyError, TypeError) as exception:
210 raise TurbiniaException(
211 'An issue has occurred while parsing the '
212 'dependency config: {0!s}'.format(exception))
213 return dependencies
214
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/turbinia/config/__init__.py b/turbinia/config/__init__.py
--- a/turbinia/config/__init__.py
+++ b/turbinia/config/__init__.py
@@ -109,7 +109,7 @@
# pattern on the config class.
# pylint: disable=global-statement
global CONFIG
- if CONFIG:
+ if CONFIG and not config_file:
log.debug(
'Returning cached config from {0:s} instead of reloading config'.format(
CONFIG.configSource))
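The one-line change above makes the cached config conditional on no explicit path being supplied, so a caller can force a reload. A minimal usage sketch under that patch, assuming `turbinia` is importable and that `/home/user/.turbiniarc` (an illustrative path, not taken from the diff) exists and is a valid config file:

```python
from turbinia import config

# First call caches whatever file is found via TURBINIA_CONFIG_PATH / CONFIGPATH.
default_cfg = config.LoadConfig()

# With the patch, passing an explicit path bypasses the cached module and reloads.
custom_cfg = config.LoadConfig(config_file='/home/user/.turbiniarc')
assert custom_cfg.configSource == '/home/user/.turbiniarc'
```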
| {"golden_diff": "diff --git a/turbinia/config/__init__.py b/turbinia/config/__init__.py\n--- a/turbinia/config/__init__.py\n+++ b/turbinia/config/__init__.py\n@@ -109,7 +109,7 @@\n # pattern on the config class.\n # pylint: disable=global-statement\n global CONFIG\n- if CONFIG:\n+ if CONFIG and not config_file:\n log.debug(\n 'Returning cached config from {0:s} instead of reloading config'.format(\n CONFIG.configSource))\n", "issue": "Configuration file not behaving as expected\nI was struggling a bit today with having the configuration file mapped to what I needed it to be (for launching dftimewolf with a one-off configuration of Turbinia).\r\n\r\nMy ~/.turbiniarc is set to what I want, but the config is still picked up from \"whatever file it can find\" in the directory pointed to by `TURBINIA_CONFIG_PATH` (specified in `ENVCONFIGVAR`)\r\n\r\nThis happens because when importing `evidence` (e.g. `from turbinia import evidence`), `LoadConfig` is called with no parameters, and thus populates the config with whatever files it can find there. Subsequent calls to `LoadConfig`, even when passing a `config_file` will still return this first configuration because it has already been loaded.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Basic Turbinia config.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport imp\nimport itertools\nimport logging\nimport os\nimport sys\n\nfrom turbinia import TurbiniaException\n\nDATETIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'\n\n# Look for config files with these names\nCONFIGFILES = ['.turbiniarc', 'turbinia.conf', 'turbinia_config_tmpl.py']\n# Look in homedir first, then /etc/turbinia\nCONFIGPATH = [\n os.path.expanduser('~'),\n '/etc/turbinia',\n os.path.dirname(os.path.abspath(__file__)),\n]\n# Config setup reminder for cleaner error handling on empty configs.\nCONFIG_MSG = (\n 'Copy turbinia/config/turbinia_config_tmpl.py to ~/.turbiniarc '\n 'or /etc/turbinia/turbinia.conf, edit, and re-run.')\n\n# Required config vars\nREQUIRED_VARS = [\n # Turbinia Config\n 'INSTANCE_ID',\n 'STATE_MANAGER',\n 'TASK_MANAGER',\n 'LOG_FILE',\n 'LOCK_FILE',\n 'OUTPUT_DIR',\n 'TMP_DIR',\n 'SLEEP_TIME',\n 'SINGLE_RUN',\n 'MOUNT_DIR_PREFIX',\n 'SHARED_FILESYSTEM',\n 'DEBUG_TASKS',\n 'DEPENDENCIES',\n 'DOCKER_ENABLED',\n 'DISABLED_JOBS',\n]\n\n# Optional config vars. Some may be mandatory depending on the configuration\n# (e.g. 
if TASK_MANAGER is set to 'PSQ', then the GCE Config variables are\n# required), but these requirements are not enforced.\nOPTIONAL_VARS = [\n # GCE CONFIG\n 'TURBINIA_PROJECT',\n 'TURBINIA_ZONE',\n 'TURBINIA_REGION',\n 'BUCKET_NAME',\n 'PSQ_TOPIC',\n 'PUBSUB_TOPIC',\n 'GCS_OUTPUT_PATH',\n 'STACKDRIVER_LOGGING',\n 'STACKDRIVER_TRACEBACK',\n # REDIS CONFIG\n 'REDIS_HOST',\n 'REDIS_PORT',\n 'REDIS_DB',\n # Celery config\n 'CELERY_BROKER',\n 'CELERY_BACKEND',\n 'KOMBU_BROKER',\n 'KOMBU_CHANNEL',\n 'KOMBU_DURABLE',\n # Email config\n 'EMAIL_NOTIFICATIONS',\n 'EMAIL_HOST_ADDRESS',\n 'EMAIL_PORT',\n 'EMAIL_ADDRESS',\n 'EMAIL_PASSWORD',\n]\n\n# Environment variable to look for path data in\nENVCONFIGVAR = 'TURBINIA_CONFIG_PATH'\n\nCONFIG = None\n\nlog = logging.getLogger('turbinia')\n\n\ndef LoadConfig(config_file=None):\n \"\"\"Finds Turbinia config file and loads it.\n\n Args:\n config_file(str): full path to config file\n \"\"\"\n # TODO(aarontp): Find way to not require global var here. Maybe a singleton\n # pattern on the config class.\n # pylint: disable=global-statement\n global CONFIG\n if CONFIG:\n log.debug(\n 'Returning cached config from {0:s} instead of reloading config'.format(\n CONFIG.configSource))\n return CONFIG\n\n if not config_file:\n log.debug('No config specified. Looking in default locations for config.')\n # If the environment variable is set, take precedence over the pre-defined\n # CONFIGPATHs.\n configpath = CONFIGPATH\n if ENVCONFIGVAR in os.environ:\n configpath = os.environ[ENVCONFIGVAR].split(':')\n\n # Load first file found\n for _dir, _file in itertools.product(configpath, CONFIGFILES):\n if os.path.exists(os.path.join(_dir, _file)):\n config_file = os.path.join(_dir, _file)\n break\n\n if config_file is None:\n raise TurbiniaException('No config files found')\n\n log.debug('Loading config from {0:s}'.format(config_file))\n # Warn about using fallback source config, but it's currently necessary for\n # tests. See issue #446.\n if 'turbinia_config_tmpl' in config_file:\n log.warning('Using fallback source config. 
{0:s}'.format(CONFIG_MSG))\n try:\n _config = imp.load_source('config', config_file)\n except IOError as exception:\n message = (\n 'Could not load config file {0:s}: {1!s}'.format(\n config_file, exception))\n log.error(message)\n raise TurbiniaException(message)\n\n _config.configSource = config_file\n ValidateAndSetConfig(_config)\n\n # Set the environment var for this so that we don't see the \"No project ID\n # could be determined.\" warning later.\n if hasattr(_config, 'TURBINIA_PROJECT') and _config.TURBINIA_PROJECT:\n os.environ['GOOGLE_CLOUD_PROJECT'] = _config.TURBINIA_PROJECT\n\n CONFIG = _config\n log.debug(\n 'Returning parsed config loaded from {0:s}'.format(CONFIG.configSource))\n return _config\n\n\ndef ValidateAndSetConfig(_config):\n \"\"\"Makes sure that the config has the vars loaded and set in the module.\"\"\"\n # Explicitly set the config path\n setattr(sys.modules[__name__], 'configSource', _config.configSource)\n\n CONFIGVARS = REQUIRED_VARS + OPTIONAL_VARS\n for var in CONFIGVARS:\n empty_value = False\n if not hasattr(_config, var):\n if var in OPTIONAL_VARS:\n log.debug(\n 'Setting non-existent but optional config variable {0:s} to '\n 'None'.format(var))\n empty_value = True\n else:\n raise TurbiniaException(\n 'Required config attribute {0:s}:{1:s} not in config'.format(\n _config.configSource, var))\n if var in REQUIRED_VARS and getattr(_config, var) is None:\n raise TurbiniaException(\n 'Config attribute {0:s}:{1:s} is not set'.format(\n _config.configSource, var))\n\n # Set the attribute in the current module\n if empty_value:\n setattr(sys.modules[__name__], var, None)\n else:\n setattr(sys.modules[__name__], var, getattr(_config, var))\n\n\ndef ParseDependencies():\n \"\"\"Parses the config file DEPENDENCIES variable.\n\n Raises:\n TurbiniaException: If bad config file.\n\n Returns:\n dependencies(dict): The parsed dependency values.\n \"\"\"\n dependencies = {}\n try:\n for values in CONFIG.DEPENDENCIES:\n job = values['job'].lower()\n dependencies[job] = {}\n dependencies[job]['programs'] = values['programs']\n dependencies[job]['docker_image'] = values.get('docker_image')\n except (KeyError, TypeError) as exception:\n raise TurbiniaException(\n 'An issue has occurred while parsing the '\n 'dependency config: {0!s}'.format(exception))\n return dependencies\n", "path": "turbinia/config/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Basic Turbinia config.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport imp\nimport itertools\nimport logging\nimport os\nimport sys\n\nfrom turbinia import TurbiniaException\n\nDATETIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'\n\n# Look for config files with these names\nCONFIGFILES = ['.turbiniarc', 'turbinia.conf', 'turbinia_config_tmpl.py']\n# Look in homedir first, then /etc/turbinia\nCONFIGPATH = [\n os.path.expanduser('~'),\n '/etc/turbinia',\n os.path.dirname(os.path.abspath(__file__)),\n]\n# Config setup 
reminder for cleaner error handling on empty configs.\nCONFIG_MSG = (\n 'Copy turbinia/config/turbinia_config_tmpl.py to ~/.turbiniarc '\n 'or /etc/turbinia/turbinia.conf, edit, and re-run.')\n\n# Required config vars\nREQUIRED_VARS = [\n # Turbinia Config\n 'INSTANCE_ID',\n 'STATE_MANAGER',\n 'TASK_MANAGER',\n 'LOG_FILE',\n 'LOCK_FILE',\n 'OUTPUT_DIR',\n 'TMP_DIR',\n 'SLEEP_TIME',\n 'SINGLE_RUN',\n 'MOUNT_DIR_PREFIX',\n 'SHARED_FILESYSTEM',\n 'DEBUG_TASKS',\n 'DEPENDENCIES',\n 'DOCKER_ENABLED',\n 'DISABLED_JOBS',\n]\n\n# Optional config vars. Some may be mandatory depending on the configuration\n# (e.g. if TASK_MANAGER is set to 'PSQ', then the GCE Config variables are\n# required), but these requirements are not enforced.\nOPTIONAL_VARS = [\n # GCE CONFIG\n 'TURBINIA_PROJECT',\n 'TURBINIA_ZONE',\n 'TURBINIA_REGION',\n 'BUCKET_NAME',\n 'PSQ_TOPIC',\n 'PUBSUB_TOPIC',\n 'GCS_OUTPUT_PATH',\n 'STACKDRIVER_LOGGING',\n 'STACKDRIVER_TRACEBACK',\n # REDIS CONFIG\n 'REDIS_HOST',\n 'REDIS_PORT',\n 'REDIS_DB',\n # Celery config\n 'CELERY_BROKER',\n 'CELERY_BACKEND',\n 'KOMBU_BROKER',\n 'KOMBU_CHANNEL',\n 'KOMBU_DURABLE',\n # Email config\n 'EMAIL_NOTIFICATIONS',\n 'EMAIL_HOST_ADDRESS',\n 'EMAIL_PORT',\n 'EMAIL_ADDRESS',\n 'EMAIL_PASSWORD',\n]\n\n# Environment variable to look for path data in\nENVCONFIGVAR = 'TURBINIA_CONFIG_PATH'\n\nCONFIG = None\n\nlog = logging.getLogger('turbinia')\n\n\ndef LoadConfig(config_file=None):\n \"\"\"Finds Turbinia config file and loads it.\n\n Args:\n config_file(str): full path to config file\n \"\"\"\n # TODO(aarontp): Find way to not require global var here. Maybe a singleton\n # pattern on the config class.\n # pylint: disable=global-statement\n global CONFIG\n if CONFIG and not config_file:\n log.debug(\n 'Returning cached config from {0:s} instead of reloading config'.format(\n CONFIG.configSource))\n return CONFIG\n\n if not config_file:\n log.debug('No config specified. Looking in default locations for config.')\n # If the environment variable is set, take precedence over the pre-defined\n # CONFIGPATHs.\n configpath = CONFIGPATH\n if ENVCONFIGVAR in os.environ:\n configpath = os.environ[ENVCONFIGVAR].split(':')\n\n # Load first file found\n for _dir, _file in itertools.product(configpath, CONFIGFILES):\n if os.path.exists(os.path.join(_dir, _file)):\n config_file = os.path.join(_dir, _file)\n break\n\n if config_file is None:\n raise TurbiniaException('No config files found')\n\n log.debug('Loading config from {0:s}'.format(config_file))\n # Warn about using fallback source config, but it's currently necessary for\n # tests. See issue #446.\n if 'turbinia_config_tmpl' in config_file:\n log.warning('Using fallback source config. 
{0:s}'.format(CONFIG_MSG))\n try:\n _config = imp.load_source('config', config_file)\n except IOError as exception:\n message = (\n 'Could not load config file {0:s}: {1!s}'.format(\n config_file, exception))\n log.error(message)\n raise TurbiniaException(message)\n\n _config.configSource = config_file\n ValidateAndSetConfig(_config)\n\n # Set the environment var for this so that we don't see the \"No project ID\n # could be determined.\" warning later.\n if hasattr(_config, 'TURBINIA_PROJECT') and _config.TURBINIA_PROJECT:\n os.environ['GOOGLE_CLOUD_PROJECT'] = _config.TURBINIA_PROJECT\n\n CONFIG = _config\n log.debug(\n 'Returning parsed config loaded from {0:s}'.format(CONFIG.configSource))\n return _config\n\n\ndef ValidateAndSetConfig(_config):\n \"\"\"Makes sure that the config has the vars loaded and set in the module.\"\"\"\n # Explicitly set the config path\n setattr(sys.modules[__name__], 'configSource', _config.configSource)\n\n CONFIGVARS = REQUIRED_VARS + OPTIONAL_VARS\n for var in CONFIGVARS:\n empty_value = False\n if not hasattr(_config, var):\n if var in OPTIONAL_VARS:\n log.debug(\n 'Setting non-existent but optional config variable {0:s} to '\n 'None'.format(var))\n empty_value = True\n else:\n raise TurbiniaException(\n 'Required config attribute {0:s}:{1:s} not in config'.format(\n _config.configSource, var))\n if var in REQUIRED_VARS and getattr(_config, var) is None:\n raise TurbiniaException(\n 'Config attribute {0:s}:{1:s} is not set'.format(\n _config.configSource, var))\n\n # Set the attribute in the current module\n if empty_value:\n setattr(sys.modules[__name__], var, None)\n else:\n setattr(sys.modules[__name__], var, getattr(_config, var))\n\n\ndef ParseDependencies():\n \"\"\"Parses the config file DEPENDENCIES variable.\n\n Raises:\n TurbiniaException: If bad config file.\n\n Returns:\n dependencies(dict): The parsed dependency values.\n \"\"\"\n dependencies = {}\n try:\n for values in CONFIG.DEPENDENCIES:\n job = values['job'].lower()\n dependencies[job] = {}\n dependencies[job]['programs'] = values['programs']\n dependencies[job]['docker_image'] = values.get('docker_image')\n except (KeyError, TypeError) as exception:\n raise TurbiniaException(\n 'An issue has occurred while parsing the '\n 'dependency config: {0!s}'.format(exception))\n return dependencies\n", "path": "turbinia/config/__init__.py"}]} |
gh_patches_debug_1015 | rasdani/github-patches | git_diff | celery__celery-2349 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Internal error when embedding app
Celery 3.1.13 and 3.1.16 (latest release as of this writing)
I'm wrapping the celery app inside a utility class, which constructs the app and the worker:
``` python
self.celery = celery.Celery()
self.worker = self.celery.WorkController(pool_cls='solo', queues=[self.queue_name])
self.celery.task(self._receive_callback, name=self.callback_task_name)
```
The utility class has a start() method which starts the worker like this:
``` python
t = threading.Thread(target=self.worker.start)
# Starting the worker in a daemonic thread so that it doesn't keep the process
# alive when the main thread exits
t.setDaemon(True)
t.start()
```
When the embedded app receives the task, it crashes with the following traceback:
``` python
CRITICAL:celery.worker.job:Task [my_task_name][cfe87fb7-373d-4082-a72c-0f44d265cc9f] INTERNAL ERROR: AttributeError("'NoneType' object has no attribute 'loader'",)
Traceback (most recent call last):
File "/virtualenvdir/lib/python2.7/site-packages/celery/app/trace.py", line 333, in trace_task
task.__trace__ = build_tracer(task.name, task, **opts)
File "/virtualenvdir/lib/python2.7/site-packages/celery/app/trace.py", line 160, in build_tracer
loader = loader or app.loader
AttributeError: 'NoneType' object has no attribute 'loader'
```
I printed the stack trace from the exception handler in celery.app.trace.trace_task right before report_internal_error is called, and the error seems to be triggered in _trace_task_ret:
``` python
def _trace_task_ret(name, uuid, args, kwargs, request={}, app=None, **opts):
return trace_task((app or current_app).tasks[name],
uuid, args, kwargs, request, app=app, **opts)
```
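Reading that snippet against the traceback, the registry lookup `(app or current_app).tasks[name]` can succeed via `current_app` while the original `None` is still forwarded as `app`, so `build_tracer` ends up evaluating `None.loader`. A minimal, self-contained stand-in for that call chain (plain Python, not Celery's real classes; the names only mirror the functions above):

``` python
class FakeApp(object):
    loader = object()
    tasks = {'my_task_name': object()}

current_app = FakeApp()

def build_tracer(name, task, app=None, **opts):
    loader = opts.get('loader') or app.loader  # AttributeError when app is None
    return loader

def _trace_task_ret(name, app=None, **opts):
    # The registry lookup falls back to current_app...
    task = (app or current_app).tasks[name]
    # ...but the original (possibly None) app is still passed down.
    return build_tracer(name, task, app=app, **opts)

try:
    _trace_task_ret('my_task_name')
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'loader'
```

Resolving the app once (e.g. `app = app or current_app`) before both uses removes that mismatch.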
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `celery/app/trace.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 celery.app.trace
4 ~~~~~~~~~~~~~~~~
5
6 This module defines how the task execution is traced:
7 errors are recorded, handlers are applied and so on.
8
9 """
10 from __future__ import absolute_import
11
12 # ## ---
13 # This is the heart of the worker, the inner loop so to speak.
14 # It used to be split up into nice little classes and methods,
15 # but in the end it only resulted in bad performance and horrible tracebacks,
16 # so instead we now use one closure per task class.
17
18 import os
19 import socket
20 import sys
21
22 from warnings import warn
23
24 from billiard.einfo import ExceptionInfo
25 from kombu.exceptions import EncodeError
26 from kombu.utils import kwdict
27
28 from celery import current_app, group
29 from celery import states, signals
30 from celery._state import _task_stack
31 from celery.app import set_default_app
32 from celery.app.task import Task as BaseTask, Context
33 from celery.exceptions import Ignore, Reject, Retry
34 from celery.utils.log import get_logger
35 from celery.utils.objects import mro_lookup
36 from celery.utils.serialization import (
37 get_pickleable_exception,
38 get_pickleable_etype,
39 )
40
41 __all__ = ['TraceInfo', 'build_tracer', 'trace_task', 'eager_trace_task',
42 'setup_worker_optimizations', 'reset_worker_optimizations']
43
44 _logger = get_logger(__name__)
45
46 send_prerun = signals.task_prerun.send
47 send_postrun = signals.task_postrun.send
48 send_success = signals.task_success.send
49 STARTED = states.STARTED
50 SUCCESS = states.SUCCESS
51 IGNORED = states.IGNORED
52 REJECTED = states.REJECTED
53 RETRY = states.RETRY
54 FAILURE = states.FAILURE
55 EXCEPTION_STATES = states.EXCEPTION_STATES
56 IGNORE_STATES = frozenset([IGNORED, RETRY, REJECTED])
57
58 #: set by :func:`setup_worker_optimizations`
59 _tasks = None
60 _patched = {}
61
62
63 def task_has_custom(task, attr):
64 """Return true if the task or one of its bases
65 defines ``attr`` (excluding the one in BaseTask)."""
66 return mro_lookup(task.__class__, attr, stop=(BaseTask, object),
67 monkey_patched=['celery.app.task'])
68
69
70 class TraceInfo(object):
71 __slots__ = ('state', 'retval')
72
73 def __init__(self, state, retval=None):
74 self.state = state
75 self.retval = retval
76
77 def handle_error_state(self, task, eager=False):
78 store_errors = not eager
79 if task.ignore_result:
80 store_errors = task.store_errors_even_if_ignored
81
82 return {
83 RETRY: self.handle_retry,
84 FAILURE: self.handle_failure,
85 }[self.state](task, store_errors=store_errors)
86
87 def handle_retry(self, task, store_errors=True):
88 """Handle retry exception."""
89 # the exception raised is the Retry semi-predicate,
90 # and it's exc' attribute is the original exception raised (if any).
91 req = task.request
92 type_, _, tb = sys.exc_info()
93 try:
94 reason = self.retval
95 einfo = ExceptionInfo((type_, reason, tb))
96 if store_errors:
97 task.backend.mark_as_retry(
98 req.id, reason.exc, einfo.traceback, request=req,
99 )
100 task.on_retry(reason.exc, req.id, req.args, req.kwargs, einfo)
101 signals.task_retry.send(sender=task, request=req,
102 reason=reason, einfo=einfo)
103 return einfo
104 finally:
105 del(tb)
106
107 def handle_failure(self, task, store_errors=True):
108 """Handle exception."""
109 req = task.request
110 type_, _, tb = sys.exc_info()
111 try:
112 exc = self.retval
113 einfo = ExceptionInfo()
114 einfo.exception = get_pickleable_exception(einfo.exception)
115 einfo.type = get_pickleable_etype(einfo.type)
116 if store_errors:
117 task.backend.mark_as_failure(
118 req.id, exc, einfo.traceback, request=req,
119 )
120 task.on_failure(exc, req.id, req.args, req.kwargs, einfo)
121 signals.task_failure.send(sender=task, task_id=req.id,
122 exception=exc, args=req.args,
123 kwargs=req.kwargs,
124 traceback=tb,
125 einfo=einfo)
126 return einfo
127 finally:
128 del(tb)
129
130
131 def build_tracer(name, task, loader=None, hostname=None, store_errors=True,
132 Info=TraceInfo, eager=False, propagate=False, app=None,
133 IGNORE_STATES=IGNORE_STATES):
134 """Return a function that traces task execution; catches all
135 exceptions and updates result backend with the state and result
136
137 If the call was successful, it saves the result to the task result
138 backend, and sets the task status to `"SUCCESS"`.
139
140 If the call raises :exc:`~@Retry`, it extracts
141 the original exception, uses that as the result and sets the task state
142 to `"RETRY"`.
143
144 If the call results in an exception, it saves the exception as the task
145 result, and sets the task state to `"FAILURE"`.
146
147 Return a function that takes the following arguments:
148
149 :param uuid: The id of the task.
150 :param args: List of positional args to pass on to the function.
151 :param kwargs: Keyword arguments mapping to pass on to the function.
152 :keyword request: Request dict.
153
154 """
155 # If the task doesn't define a custom __call__ method
156 # we optimize it away by simply calling the run method directly,
157 # saving the extra method call and a line less in the stack trace.
158 fun = task if task_has_custom(task, '__call__') else task.run
159
160 loader = loader or app.loader
161 backend = task.backend
162 ignore_result = task.ignore_result
163 track_started = task.track_started
164 track_started = not eager and (task.track_started and not ignore_result)
165 publish_result = not eager and not ignore_result
166 hostname = hostname or socket.gethostname()
167
168 loader_task_init = loader.on_task_init
169 loader_cleanup = loader.on_process_cleanup
170
171 task_on_success = None
172 task_after_return = None
173 if task_has_custom(task, 'on_success'):
174 task_on_success = task.on_success
175 if task_has_custom(task, 'after_return'):
176 task_after_return = task.after_return
177
178 store_result = backend.store_result
179 backend_cleanup = backend.process_cleanup
180
181 pid = os.getpid()
182
183 request_stack = task.request_stack
184 push_request = request_stack.push
185 pop_request = request_stack.pop
186 push_task = _task_stack.push
187 pop_task = _task_stack.pop
188 on_chord_part_return = backend.on_chord_part_return
189
190 prerun_receivers = signals.task_prerun.receivers
191 postrun_receivers = signals.task_postrun.receivers
192 success_receivers = signals.task_success.receivers
193
194 from celery import canvas
195 signature = canvas.maybe_signature # maybe_ does not clone if already
196
197 def on_error(request, exc, uuid, state=FAILURE, call_errbacks=True):
198 if propagate:
199 raise
200 I = Info(state, exc)
201 R = I.handle_error_state(task, eager=eager)
202 if call_errbacks:
203 group(
204 [signature(errback, app=app)
205 for errback in request.errbacks or []], app=app,
206 ).apply_async((uuid, ))
207 return I, R, I.state, I.retval
208
209 def trace_task(uuid, args, kwargs, request=None):
210 # R - is the possibly prepared return value.
211 # I - is the Info object.
212 # retval - is the always unmodified return value.
213 # state - is the resulting task state.
214
215 # This function is very long because we have unrolled all the calls
216 # for performance reasons, and because the function is so long
217 # we want the main variables (I, and R) to stand out visually from the
218 # the rest of the variables, so breaking PEP8 is worth it ;)
219 R = I = retval = state = None
220 kwargs = kwdict(kwargs)
221 try:
222 push_task(task)
223 task_request = Context(request or {}, args=args,
224 called_directly=False, kwargs=kwargs)
225 push_request(task_request)
226 try:
227 # -*- PRE -*-
228 if prerun_receivers:
229 send_prerun(sender=task, task_id=uuid, task=task,
230 args=args, kwargs=kwargs)
231 loader_task_init(uuid, task)
232 if track_started:
233 store_result(
234 uuid, {'pid': pid, 'hostname': hostname}, STARTED,
235 request=task_request,
236 )
237
238 # -*- TRACE -*-
239 try:
240 R = retval = fun(*args, **kwargs)
241 state = SUCCESS
242 except Reject as exc:
243 I, R = Info(REJECTED, exc), ExceptionInfo(internal=True)
244 state, retval = I.state, I.retval
245 except Ignore as exc:
246 I, R = Info(IGNORED, exc), ExceptionInfo(internal=True)
247 state, retval = I.state, I.retval
248 except Retry as exc:
249 I, R, state, retval = on_error(
250 task_request, exc, uuid, RETRY, call_errbacks=False,
251 )
252 except Exception as exc:
253 I, R, state, retval = on_error(task_request, exc, uuid)
254 except BaseException as exc:
255 raise
256 else:
257 try:
258 # callback tasks must be applied before the result is
259 # stored, so that result.children is populated.
260
261 # groups are called inline and will store trail
262 # separately, so need to call them separately
263 # so that the trail's not added multiple times :(
264 # (Issue #1936)
265 callbacks = task.request.callbacks
266 if callbacks:
267 if len(task.request.callbacks) > 1:
268 sigs, groups = [], []
269 for sig in callbacks:
270 sig = signature(sig, app=app)
271 if isinstance(sig, group):
272 groups.append(sig)
273 else:
274 sigs.append(sig)
275 for group_ in groups:
276 group.apply_async((retval, ))
277 if sigs:
278 group(sigs).apply_async(retval, )
279 else:
280 signature(callbacks[0], app=app).delay(retval)
281 if publish_result:
282 store_result(
283 uuid, retval, SUCCESS, request=task_request,
284 )
285 except EncodeError as exc:
286 I, R, state, retval = on_error(task_request, exc, uuid)
287 else:
288 if task_on_success:
289 task_on_success(retval, uuid, args, kwargs)
290 if success_receivers:
291 send_success(sender=task, result=retval)
292
293 # -* POST *-
294 if state not in IGNORE_STATES:
295 if task_request.chord:
296 on_chord_part_return(task, state, R)
297 if task_after_return:
298 task_after_return(
299 state, retval, uuid, args, kwargs, None,
300 )
301 finally:
302 try:
303 if postrun_receivers:
304 send_postrun(sender=task, task_id=uuid, task=task,
305 args=args, kwargs=kwargs,
306 retval=retval, state=state)
307 finally:
308 pop_task()
309 pop_request()
310 if not eager:
311 try:
312 backend_cleanup()
313 loader_cleanup()
314 except (KeyboardInterrupt, SystemExit, MemoryError):
315 raise
316 except Exception as exc:
317 _logger.error('Process cleanup failed: %r', exc,
318 exc_info=True)
319 except MemoryError:
320 raise
321 except Exception as exc:
322 if eager:
323 raise
324 R = report_internal_error(task, exc)
325 return R, I
326
327 return trace_task
328
329
330 def trace_task(task, uuid, args, kwargs, request={}, **opts):
331 try:
332 if task.__trace__ is None:
333 task.__trace__ = build_tracer(task.name, task, **opts)
334 return task.__trace__(uuid, args, kwargs, request)[0]
335 except Exception as exc:
336 return report_internal_error(task, exc)
337
338
339 def _trace_task_ret(name, uuid, args, kwargs, request={}, app=None, **opts):
340 return trace_task((app or current_app).tasks[name],
341 uuid, args, kwargs, request, app=app, **opts)
342 trace_task_ret = _trace_task_ret
343
344
345 def _fast_trace_task(task, uuid, args, kwargs, request={}):
346 # setup_worker_optimizations will point trace_task_ret to here,
347 # so this is the function used in the worker.
348 return _tasks[task].__trace__(uuid, args, kwargs, request)[0]
349
350
351 def eager_trace_task(task, uuid, args, kwargs, request=None, **opts):
352 opts.setdefault('eager', True)
353 return build_tracer(task.name, task, **opts)(
354 uuid, args, kwargs, request)
355
356
357 def report_internal_error(task, exc):
358 _type, _value, _tb = sys.exc_info()
359 try:
360 _value = task.backend.prepare_exception(exc, 'pickle')
361 exc_info = ExceptionInfo((_type, _value, _tb), internal=True)
362 warn(RuntimeWarning(
363 'Exception raised outside body: {0!r}:\n{1}'.format(
364 exc, exc_info.traceback)))
365 return exc_info
366 finally:
367 del(_tb)
368
369
370 def setup_worker_optimizations(app):
371 global _tasks
372 global trace_task_ret
373
374 # make sure custom Task.__call__ methods that calls super
375 # will not mess up the request/task stack.
376 _install_stack_protection()
377
378 # all new threads start without a current app, so if an app is not
379 # passed on to the thread it will fall back to the "default app",
380 # which then could be the wrong app. So for the worker
381 # we set this to always return our app. This is a hack,
382 # and means that only a single app can be used for workers
383 # running in the same process.
384 app.set_current()
385 set_default_app(app)
386
387 # evaluate all task classes by finalizing the app.
388 app.finalize()
389
390 # set fast shortcut to task registry
391 _tasks = app._tasks
392
393 trace_task_ret = _fast_trace_task
394 from celery.worker import job as job_module
395 job_module.trace_task_ret = _fast_trace_task
396 job_module.__optimize__()
397
398
399 def reset_worker_optimizations():
400 global trace_task_ret
401 trace_task_ret = _trace_task_ret
402 try:
403 delattr(BaseTask, '_stackprotected')
404 except AttributeError:
405 pass
406 try:
407 BaseTask.__call__ = _patched.pop('BaseTask.__call__')
408 except KeyError:
409 pass
410 from celery.worker import job as job_module
411 job_module.trace_task_ret = _trace_task_ret
412
413
414 def _install_stack_protection():
415 # Patches BaseTask.__call__ in the worker to handle the edge case
416 # where people override it and also call super.
417 #
418 # - The worker optimizes away BaseTask.__call__ and instead
419 # calls task.run directly.
420 # - so with the addition of current_task and the request stack
421 # BaseTask.__call__ now pushes to those stacks so that
422 # they work when tasks are called directly.
423 #
424 # The worker only optimizes away __call__ in the case
425 # where it has not been overridden, so the request/task stack
426 # will blow if a custom task class defines __call__ and also
427 # calls super().
428 if not getattr(BaseTask, '_stackprotected', False):
429 _patched['BaseTask.__call__'] = orig = BaseTask.__call__
430
431 def __protected_call__(self, *args, **kwargs):
432 stack = self.request_stack
433 req = stack.top
434 if req and not req._protected and \
435 len(stack) == 1 and not req.called_directly:
436 req._protected = 1
437 return self.run(*args, **kwargs)
438 return orig(self, *args, **kwargs)
439 BaseTask.__call__ = __protected_call__
440 BaseTask._stackprotected = True
441
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/celery/app/trace.py b/celery/app/trace.py
--- a/celery/app/trace.py
+++ b/celery/app/trace.py
@@ -337,7 +337,8 @@
def _trace_task_ret(name, uuid, args, kwargs, request={}, app=None, **opts):
- return trace_task((app or current_app).tasks[name],
+ app = app or current_app
+ return trace_task(app.tasks[name],
uuid, args, kwargs, request, app=app, **opts)
trace_task_ret = _trace_task_ret
| {"golden_diff": "diff --git a/celery/app/trace.py b/celery/app/trace.py\n--- a/celery/app/trace.py\n+++ b/celery/app/trace.py\n@@ -337,7 +337,8 @@\n \n \n def _trace_task_ret(name, uuid, args, kwargs, request={}, app=None, **opts):\n- return trace_task((app or current_app).tasks[name],\n+ app = app or current_app\n+ return trace_task(app.tasks[name],\n uuid, args, kwargs, request, app=app, **opts)\n trace_task_ret = _trace_task_ret\n", "issue": "Internal error when embedding app\nCelery 3.1.13 and 3.1.16 (latest release as of this writing)\n\nI'm wrapping the celery app inside a utility class, which constructs the app and the worker:\n\n``` python\n self.celery = celery.Celery()\n self.worker = self.celery.WorkController(pool_cls='solo', queues=[self.queue_name])\n self.celery.task(self._receive_callback, name=self.callback_task_name)\n```\n\nThe utility class has a start() method which starts the worker like this:\n\n``` python\n t = threading.Thread(target=self.worker.start)\n # Starting the worker in a daemonic thread so that it doesn't keep the process\n # alive when the main thread exits\n t.setDaemon(True)\n t.start()\n```\n\nWhen the embedded app receives the task it crashes with the following traceback:\n\n``` python\n CRITICAL:celery.worker.job:Task [my_task_name][cfe87fb7-373d-4082-a72c-0f44d265cc9f] INTERNAL ERROR: AttributeError(\"'NoneType' object has no attribute 'loader'\",)\n Traceback (most recent call last):\n File \"/virtualenvdir/lib/python2.7/site-packages/celery/app/trace.py\", line 333, in trace_task\n task.__trace__ = build_tracer(task.name, task, **opts)\n File \"/virtualenvdir/lib/python2.7/site-packages/celery/app/trace.py\", line 160, in build_tracer\n loader = loader or app.loader\n AttributeError: 'NoneType' object has no attribute 'loader'\n```\n\nI printed the stack trace from the exception handler in celery.app.trace.trace_task right before report_internal error is called and the error seems to be triggered in _trace_task_ret: \n\n``` python\ndef _trace_task_ret(name, uuid, args, kwargs, request={}, app=None, **opts):\n return trace_task((app or current_app).tasks[name],\n uuid, args, kwargs, request, app=app, **opts)\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n celery.app.trace\n ~~~~~~~~~~~~~~~~\n\n This module defines how the task execution is traced:\n errors are recorded, handlers are applied and so on.\n\n\"\"\"\nfrom __future__ import absolute_import\n\n# ## ---\n# This is the heart of the worker, the inner loop so to speak.\n# It used to be split up into nice little classes and methods,\n# but in the end it only resulted in bad performance and horrible tracebacks,\n# so instead we now use one closure per task class.\n\nimport os\nimport socket\nimport sys\n\nfrom warnings import warn\n\nfrom billiard.einfo import ExceptionInfo\nfrom kombu.exceptions import EncodeError\nfrom kombu.utils import kwdict\n\nfrom celery import current_app, group\nfrom celery import states, signals\nfrom celery._state import _task_stack\nfrom celery.app import set_default_app\nfrom celery.app.task import Task as BaseTask, Context\nfrom celery.exceptions import Ignore, Reject, Retry\nfrom celery.utils.log import get_logger\nfrom celery.utils.objects import mro_lookup\nfrom celery.utils.serialization import (\n get_pickleable_exception,\n get_pickleable_etype,\n)\n\n__all__ = ['TraceInfo', 'build_tracer', 'trace_task', 'eager_trace_task',\n 'setup_worker_optimizations', 'reset_worker_optimizations']\n\n_logger = 
get_logger(__name__)\n\nsend_prerun = signals.task_prerun.send\nsend_postrun = signals.task_postrun.send\nsend_success = signals.task_success.send\nSTARTED = states.STARTED\nSUCCESS = states.SUCCESS\nIGNORED = states.IGNORED\nREJECTED = states.REJECTED\nRETRY = states.RETRY\nFAILURE = states.FAILURE\nEXCEPTION_STATES = states.EXCEPTION_STATES\nIGNORE_STATES = frozenset([IGNORED, RETRY, REJECTED])\n\n#: set by :func:`setup_worker_optimizations`\n_tasks = None\n_patched = {}\n\n\ndef task_has_custom(task, attr):\n \"\"\"Return true if the task or one of its bases\n defines ``attr`` (excluding the one in BaseTask).\"\"\"\n return mro_lookup(task.__class__, attr, stop=(BaseTask, object),\n monkey_patched=['celery.app.task'])\n\n\nclass TraceInfo(object):\n __slots__ = ('state', 'retval')\n\n def __init__(self, state, retval=None):\n self.state = state\n self.retval = retval\n\n def handle_error_state(self, task, eager=False):\n store_errors = not eager\n if task.ignore_result:\n store_errors = task.store_errors_even_if_ignored\n\n return {\n RETRY: self.handle_retry,\n FAILURE: self.handle_failure,\n }[self.state](task, store_errors=store_errors)\n\n def handle_retry(self, task, store_errors=True):\n \"\"\"Handle retry exception.\"\"\"\n # the exception raised is the Retry semi-predicate,\n # and it's exc' attribute is the original exception raised (if any).\n req = task.request\n type_, _, tb = sys.exc_info()\n try:\n reason = self.retval\n einfo = ExceptionInfo((type_, reason, tb))\n if store_errors:\n task.backend.mark_as_retry(\n req.id, reason.exc, einfo.traceback, request=req,\n )\n task.on_retry(reason.exc, req.id, req.args, req.kwargs, einfo)\n signals.task_retry.send(sender=task, request=req,\n reason=reason, einfo=einfo)\n return einfo\n finally:\n del(tb)\n\n def handle_failure(self, task, store_errors=True):\n \"\"\"Handle exception.\"\"\"\n req = task.request\n type_, _, tb = sys.exc_info()\n try:\n exc = self.retval\n einfo = ExceptionInfo()\n einfo.exception = get_pickleable_exception(einfo.exception)\n einfo.type = get_pickleable_etype(einfo.type)\n if store_errors:\n task.backend.mark_as_failure(\n req.id, exc, einfo.traceback, request=req,\n )\n task.on_failure(exc, req.id, req.args, req.kwargs, einfo)\n signals.task_failure.send(sender=task, task_id=req.id,\n exception=exc, args=req.args,\n kwargs=req.kwargs,\n traceback=tb,\n einfo=einfo)\n return einfo\n finally:\n del(tb)\n\n\ndef build_tracer(name, task, loader=None, hostname=None, store_errors=True,\n Info=TraceInfo, eager=False, propagate=False, app=None,\n IGNORE_STATES=IGNORE_STATES):\n \"\"\"Return a function that traces task execution; catches all\n exceptions and updates result backend with the state and result\n\n If the call was successful, it saves the result to the task result\n backend, and sets the task status to `\"SUCCESS\"`.\n\n If the call raises :exc:`~@Retry`, it extracts\n the original exception, uses that as the result and sets the task state\n to `\"RETRY\"`.\n\n If the call results in an exception, it saves the exception as the task\n result, and sets the task state to `\"FAILURE\"`.\n\n Return a function that takes the following arguments:\n\n :param uuid: The id of the task.\n :param args: List of positional args to pass on to the function.\n :param kwargs: Keyword arguments mapping to pass on to the function.\n :keyword request: Request dict.\n\n \"\"\"\n # If the task doesn't define a custom __call__ method\n # we optimize it away by simply calling the run method directly,\n # saving the extra 
method call and a line less in the stack trace.\n fun = task if task_has_custom(task, '__call__') else task.run\n\n loader = loader or app.loader\n backend = task.backend\n ignore_result = task.ignore_result\n track_started = task.track_started\n track_started = not eager and (task.track_started and not ignore_result)\n publish_result = not eager and not ignore_result\n hostname = hostname or socket.gethostname()\n\n loader_task_init = loader.on_task_init\n loader_cleanup = loader.on_process_cleanup\n\n task_on_success = None\n task_after_return = None\n if task_has_custom(task, 'on_success'):\n task_on_success = task.on_success\n if task_has_custom(task, 'after_return'):\n task_after_return = task.after_return\n\n store_result = backend.store_result\n backend_cleanup = backend.process_cleanup\n\n pid = os.getpid()\n\n request_stack = task.request_stack\n push_request = request_stack.push\n pop_request = request_stack.pop\n push_task = _task_stack.push\n pop_task = _task_stack.pop\n on_chord_part_return = backend.on_chord_part_return\n\n prerun_receivers = signals.task_prerun.receivers\n postrun_receivers = signals.task_postrun.receivers\n success_receivers = signals.task_success.receivers\n\n from celery import canvas\n signature = canvas.maybe_signature # maybe_ does not clone if already\n\n def on_error(request, exc, uuid, state=FAILURE, call_errbacks=True):\n if propagate:\n raise\n I = Info(state, exc)\n R = I.handle_error_state(task, eager=eager)\n if call_errbacks:\n group(\n [signature(errback, app=app)\n for errback in request.errbacks or []], app=app,\n ).apply_async((uuid, ))\n return I, R, I.state, I.retval\n\n def trace_task(uuid, args, kwargs, request=None):\n # R - is the possibly prepared return value.\n # I - is the Info object.\n # retval - is the always unmodified return value.\n # state - is the resulting task state.\n\n # This function is very long because we have unrolled all the calls\n # for performance reasons, and because the function is so long\n # we want the main variables (I, and R) to stand out visually from the\n # the rest of the variables, so breaking PEP8 is worth it ;)\n R = I = retval = state = None\n kwargs = kwdict(kwargs)\n try:\n push_task(task)\n task_request = Context(request or {}, args=args,\n called_directly=False, kwargs=kwargs)\n push_request(task_request)\n try:\n # -*- PRE -*-\n if prerun_receivers:\n send_prerun(sender=task, task_id=uuid, task=task,\n args=args, kwargs=kwargs)\n loader_task_init(uuid, task)\n if track_started:\n store_result(\n uuid, {'pid': pid, 'hostname': hostname}, STARTED,\n request=task_request,\n )\n\n # -*- TRACE -*-\n try:\n R = retval = fun(*args, **kwargs)\n state = SUCCESS\n except Reject as exc:\n I, R = Info(REJECTED, exc), ExceptionInfo(internal=True)\n state, retval = I.state, I.retval\n except Ignore as exc:\n I, R = Info(IGNORED, exc), ExceptionInfo(internal=True)\n state, retval = I.state, I.retval\n except Retry as exc:\n I, R, state, retval = on_error(\n task_request, exc, uuid, RETRY, call_errbacks=False,\n )\n except Exception as exc:\n I, R, state, retval = on_error(task_request, exc, uuid)\n except BaseException as exc:\n raise\n else:\n try:\n # callback tasks must be applied before the result is\n # stored, so that result.children is populated.\n\n # groups are called inline and will store trail\n # separately, so need to call them separately\n # so that the trail's not added multiple times :(\n # (Issue #1936)\n callbacks = task.request.callbacks\n if callbacks:\n if len(task.request.callbacks) 
> 1:\n sigs, groups = [], []\n for sig in callbacks:\n sig = signature(sig, app=app)\n if isinstance(sig, group):\n groups.append(sig)\n else:\n sigs.append(sig)\n for group_ in groups:\n group.apply_async((retval, ))\n if sigs:\n group(sigs).apply_async(retval, )\n else:\n signature(callbacks[0], app=app).delay(retval)\n if publish_result:\n store_result(\n uuid, retval, SUCCESS, request=task_request,\n )\n except EncodeError as exc:\n I, R, state, retval = on_error(task_request, exc, uuid)\n else:\n if task_on_success:\n task_on_success(retval, uuid, args, kwargs)\n if success_receivers:\n send_success(sender=task, result=retval)\n\n # -* POST *-\n if state not in IGNORE_STATES:\n if task_request.chord:\n on_chord_part_return(task, state, R)\n if task_after_return:\n task_after_return(\n state, retval, uuid, args, kwargs, None,\n )\n finally:\n try:\n if postrun_receivers:\n send_postrun(sender=task, task_id=uuid, task=task,\n args=args, kwargs=kwargs,\n retval=retval, state=state)\n finally:\n pop_task()\n pop_request()\n if not eager:\n try:\n backend_cleanup()\n loader_cleanup()\n except (KeyboardInterrupt, SystemExit, MemoryError):\n raise\n except Exception as exc:\n _logger.error('Process cleanup failed: %r', exc,\n exc_info=True)\n except MemoryError:\n raise\n except Exception as exc:\n if eager:\n raise\n R = report_internal_error(task, exc)\n return R, I\n\n return trace_task\n\n\ndef trace_task(task, uuid, args, kwargs, request={}, **opts):\n try:\n if task.__trace__ is None:\n task.__trace__ = build_tracer(task.name, task, **opts)\n return task.__trace__(uuid, args, kwargs, request)[0]\n except Exception as exc:\n return report_internal_error(task, exc)\n\n\ndef _trace_task_ret(name, uuid, args, kwargs, request={}, app=None, **opts):\n return trace_task((app or current_app).tasks[name],\n uuid, args, kwargs, request, app=app, **opts)\ntrace_task_ret = _trace_task_ret\n\n\ndef _fast_trace_task(task, uuid, args, kwargs, request={}):\n # setup_worker_optimizations will point trace_task_ret to here,\n # so this is the function used in the worker.\n return _tasks[task].__trace__(uuid, args, kwargs, request)[0]\n\n\ndef eager_trace_task(task, uuid, args, kwargs, request=None, **opts):\n opts.setdefault('eager', True)\n return build_tracer(task.name, task, **opts)(\n uuid, args, kwargs, request)\n\n\ndef report_internal_error(task, exc):\n _type, _value, _tb = sys.exc_info()\n try:\n _value = task.backend.prepare_exception(exc, 'pickle')\n exc_info = ExceptionInfo((_type, _value, _tb), internal=True)\n warn(RuntimeWarning(\n 'Exception raised outside body: {0!r}:\\n{1}'.format(\n exc, exc_info.traceback)))\n return exc_info\n finally:\n del(_tb)\n\n\ndef setup_worker_optimizations(app):\n global _tasks\n global trace_task_ret\n\n # make sure custom Task.__call__ methods that calls super\n # will not mess up the request/task stack.\n _install_stack_protection()\n\n # all new threads start without a current app, so if an app is not\n # passed on to the thread it will fall back to the \"default app\",\n # which then could be the wrong app. So for the worker\n # we set this to always return our app. 
This is a hack,\n # and means that only a single app can be used for workers\n # running in the same process.\n app.set_current()\n set_default_app(app)\n\n # evaluate all task classes by finalizing the app.\n app.finalize()\n\n # set fast shortcut to task registry\n _tasks = app._tasks\n\n trace_task_ret = _fast_trace_task\n from celery.worker import job as job_module\n job_module.trace_task_ret = _fast_trace_task\n job_module.__optimize__()\n\n\ndef reset_worker_optimizations():\n global trace_task_ret\n trace_task_ret = _trace_task_ret\n try:\n delattr(BaseTask, '_stackprotected')\n except AttributeError:\n pass\n try:\n BaseTask.__call__ = _patched.pop('BaseTask.__call__')\n except KeyError:\n pass\n from celery.worker import job as job_module\n job_module.trace_task_ret = _trace_task_ret\n\n\ndef _install_stack_protection():\n # Patches BaseTask.__call__ in the worker to handle the edge case\n # where people override it and also call super.\n #\n # - The worker optimizes away BaseTask.__call__ and instead\n # calls task.run directly.\n # - so with the addition of current_task and the request stack\n # BaseTask.__call__ now pushes to those stacks so that\n # they work when tasks are called directly.\n #\n # The worker only optimizes away __call__ in the case\n # where it has not been overridden, so the request/task stack\n # will blow if a custom task class defines __call__ and also\n # calls super().\n if not getattr(BaseTask, '_stackprotected', False):\n _patched['BaseTask.__call__'] = orig = BaseTask.__call__\n\n def __protected_call__(self, *args, **kwargs):\n stack = self.request_stack\n req = stack.top\n if req and not req._protected and \\\n len(stack) == 1 and not req.called_directly:\n req._protected = 1\n return self.run(*args, **kwargs)\n return orig(self, *args, **kwargs)\n BaseTask.__call__ = __protected_call__\n BaseTask._stackprotected = True\n", "path": "celery/app/trace.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n celery.app.trace\n ~~~~~~~~~~~~~~~~\n\n This module defines how the task execution is traced:\n errors are recorded, handlers are applied and so on.\n\n\"\"\"\nfrom __future__ import absolute_import\n\n# ## ---\n# This is the heart of the worker, the inner loop so to speak.\n# It used to be split up into nice little classes and methods,\n# but in the end it only resulted in bad performance and horrible tracebacks,\n# so instead we now use one closure per task class.\n\nimport os\nimport socket\nimport sys\n\nfrom warnings import warn\n\nfrom billiard.einfo import ExceptionInfo\nfrom kombu.exceptions import EncodeError\nfrom kombu.utils import kwdict\n\nfrom celery import current_app, group\nfrom celery import states, signals\nfrom celery._state import _task_stack\nfrom celery.app import set_default_app\nfrom celery.app.task import Task as BaseTask, Context\nfrom celery.exceptions import Ignore, Reject, Retry\nfrom celery.utils.log import get_logger\nfrom celery.utils.objects import mro_lookup\nfrom celery.utils.serialization import (\n get_pickleable_exception,\n get_pickleable_etype,\n)\n\n__all__ = ['TraceInfo', 'build_tracer', 'trace_task', 'eager_trace_task',\n 'setup_worker_optimizations', 'reset_worker_optimizations']\n\n_logger = get_logger(__name__)\n\nsend_prerun = signals.task_prerun.send\nsend_postrun = signals.task_postrun.send\nsend_success = signals.task_success.send\nSTARTED = states.STARTED\nSUCCESS = states.SUCCESS\nIGNORED = states.IGNORED\nREJECTED = states.REJECTED\nRETRY = states.RETRY\nFAILURE = 
states.FAILURE\nEXCEPTION_STATES = states.EXCEPTION_STATES\nIGNORE_STATES = frozenset([IGNORED, RETRY, REJECTED])\n\n#: set by :func:`setup_worker_optimizations`\n_tasks = None\n_patched = {}\n\n\ndef task_has_custom(task, attr):\n \"\"\"Return true if the task or one of its bases\n defines ``attr`` (excluding the one in BaseTask).\"\"\"\n return mro_lookup(task.__class__, attr, stop=(BaseTask, object),\n monkey_patched=['celery.app.task'])\n\n\nclass TraceInfo(object):\n __slots__ = ('state', 'retval')\n\n def __init__(self, state, retval=None):\n self.state = state\n self.retval = retval\n\n def handle_error_state(self, task, eager=False):\n store_errors = not eager\n if task.ignore_result:\n store_errors = task.store_errors_even_if_ignored\n\n return {\n RETRY: self.handle_retry,\n FAILURE: self.handle_failure,\n }[self.state](task, store_errors=store_errors)\n\n def handle_retry(self, task, store_errors=True):\n \"\"\"Handle retry exception.\"\"\"\n # the exception raised is the Retry semi-predicate,\n # and it's exc' attribute is the original exception raised (if any).\n req = task.request\n type_, _, tb = sys.exc_info()\n try:\n reason = self.retval\n einfo = ExceptionInfo((type_, reason, tb))\n if store_errors:\n task.backend.mark_as_retry(\n req.id, reason.exc, einfo.traceback, request=req,\n )\n task.on_retry(reason.exc, req.id, req.args, req.kwargs, einfo)\n signals.task_retry.send(sender=task, request=req,\n reason=reason, einfo=einfo)\n return einfo\n finally:\n del(tb)\n\n def handle_failure(self, task, store_errors=True):\n \"\"\"Handle exception.\"\"\"\n req = task.request\n type_, _, tb = sys.exc_info()\n try:\n exc = self.retval\n einfo = ExceptionInfo()\n einfo.exception = get_pickleable_exception(einfo.exception)\n einfo.type = get_pickleable_etype(einfo.type)\n if store_errors:\n task.backend.mark_as_failure(\n req.id, exc, einfo.traceback, request=req,\n )\n task.on_failure(exc, req.id, req.args, req.kwargs, einfo)\n signals.task_failure.send(sender=task, task_id=req.id,\n exception=exc, args=req.args,\n kwargs=req.kwargs,\n traceback=tb,\n einfo=einfo)\n return einfo\n finally:\n del(tb)\n\n\ndef build_tracer(name, task, loader=None, hostname=None, store_errors=True,\n Info=TraceInfo, eager=False, propagate=False, app=None,\n IGNORE_STATES=IGNORE_STATES):\n \"\"\"Return a function that traces task execution; catches all\n exceptions and updates result backend with the state and result\n\n If the call was successful, it saves the result to the task result\n backend, and sets the task status to `\"SUCCESS\"`.\n\n If the call raises :exc:`~@Retry`, it extracts\n the original exception, uses that as the result and sets the task state\n to `\"RETRY\"`.\n\n If the call results in an exception, it saves the exception as the task\n result, and sets the task state to `\"FAILURE\"`.\n\n Return a function that takes the following arguments:\n\n :param uuid: The id of the task.\n :param args: List of positional args to pass on to the function.\n :param kwargs: Keyword arguments mapping to pass on to the function.\n :keyword request: Request dict.\n\n \"\"\"\n # If the task doesn't define a custom __call__ method\n # we optimize it away by simply calling the run method directly,\n # saving the extra method call and a line less in the stack trace.\n fun = task if task_has_custom(task, '__call__') else task.run\n\n loader = loader or app.loader\n backend = task.backend\n ignore_result = task.ignore_result\n track_started = task.track_started\n track_started = not eager and 
(task.track_started and not ignore_result)\n publish_result = not eager and not ignore_result\n hostname = hostname or socket.gethostname()\n\n loader_task_init = loader.on_task_init\n loader_cleanup = loader.on_process_cleanup\n\n task_on_success = None\n task_after_return = None\n if task_has_custom(task, 'on_success'):\n task_on_success = task.on_success\n if task_has_custom(task, 'after_return'):\n task_after_return = task.after_return\n\n store_result = backend.store_result\n backend_cleanup = backend.process_cleanup\n\n pid = os.getpid()\n\n request_stack = task.request_stack\n push_request = request_stack.push\n pop_request = request_stack.pop\n push_task = _task_stack.push\n pop_task = _task_stack.pop\n on_chord_part_return = backend.on_chord_part_return\n\n prerun_receivers = signals.task_prerun.receivers\n postrun_receivers = signals.task_postrun.receivers\n success_receivers = signals.task_success.receivers\n\n from celery import canvas\n signature = canvas.maybe_signature # maybe_ does not clone if already\n\n def on_error(request, exc, uuid, state=FAILURE, call_errbacks=True):\n if propagate:\n raise\n I = Info(state, exc)\n R = I.handle_error_state(task, eager=eager)\n if call_errbacks:\n group(\n [signature(errback, app=app)\n for errback in request.errbacks or []], app=app,\n ).apply_async((uuid, ))\n return I, R, I.state, I.retval\n\n def trace_task(uuid, args, kwargs, request=None):\n # R - is the possibly prepared return value.\n # I - is the Info object.\n # retval - is the always unmodified return value.\n # state - is the resulting task state.\n\n # This function is very long because we have unrolled all the calls\n # for performance reasons, and because the function is so long\n # we want the main variables (I, and R) to stand out visually from the\n # the rest of the variables, so breaking PEP8 is worth it ;)\n R = I = retval = state = None\n kwargs = kwdict(kwargs)\n try:\n push_task(task)\n task_request = Context(request or {}, args=args,\n called_directly=False, kwargs=kwargs)\n push_request(task_request)\n try:\n # -*- PRE -*-\n if prerun_receivers:\n send_prerun(sender=task, task_id=uuid, task=task,\n args=args, kwargs=kwargs)\n loader_task_init(uuid, task)\n if track_started:\n store_result(\n uuid, {'pid': pid, 'hostname': hostname}, STARTED,\n request=task_request,\n )\n\n # -*- TRACE -*-\n try:\n R = retval = fun(*args, **kwargs)\n state = SUCCESS\n except Reject as exc:\n I, R = Info(REJECTED, exc), ExceptionInfo(internal=True)\n state, retval = I.state, I.retval\n except Ignore as exc:\n I, R = Info(IGNORED, exc), ExceptionInfo(internal=True)\n state, retval = I.state, I.retval\n except Retry as exc:\n I, R, state, retval = on_error(\n task_request, exc, uuid, RETRY, call_errbacks=False,\n )\n except Exception as exc:\n I, R, state, retval = on_error(task_request, exc, uuid)\n except BaseException as exc:\n raise\n else:\n try:\n # callback tasks must be applied before the result is\n # stored, so that result.children is populated.\n\n # groups are called inline and will store trail\n # separately, so need to call them separately\n # so that the trail's not added multiple times :(\n # (Issue #1936)\n callbacks = task.request.callbacks\n if callbacks:\n if len(task.request.callbacks) > 1:\n sigs, groups = [], []\n for sig in callbacks:\n sig = signature(sig, app=app)\n if isinstance(sig, group):\n groups.append(sig)\n else:\n sigs.append(sig)\n for group_ in groups:\n group.apply_async((retval, ))\n if sigs:\n group(sigs).apply_async(retval, )\n else:\n 
signature(callbacks[0], app=app).delay(retval)\n if publish_result:\n store_result(\n uuid, retval, SUCCESS, request=task_request,\n )\n except EncodeError as exc:\n I, R, state, retval = on_error(task_request, exc, uuid)\n else:\n if task_on_success:\n task_on_success(retval, uuid, args, kwargs)\n if success_receivers:\n send_success(sender=task, result=retval)\n\n # -* POST *-\n if state not in IGNORE_STATES:\n if task_request.chord:\n on_chord_part_return(task, state, R)\n if task_after_return:\n task_after_return(\n state, retval, uuid, args, kwargs, None,\n )\n finally:\n try:\n if postrun_receivers:\n send_postrun(sender=task, task_id=uuid, task=task,\n args=args, kwargs=kwargs,\n retval=retval, state=state)\n finally:\n pop_task()\n pop_request()\n if not eager:\n try:\n backend_cleanup()\n loader_cleanup()\n except (KeyboardInterrupt, SystemExit, MemoryError):\n raise\n except Exception as exc:\n _logger.error('Process cleanup failed: %r', exc,\n exc_info=True)\n except MemoryError:\n raise\n except Exception as exc:\n if eager:\n raise\n R = report_internal_error(task, exc)\n return R, I\n\n return trace_task\n\n\ndef trace_task(task, uuid, args, kwargs, request={}, **opts):\n try:\n if task.__trace__ is None:\n task.__trace__ = build_tracer(task.name, task, **opts)\n return task.__trace__(uuid, args, kwargs, request)[0]\n except Exception as exc:\n return report_internal_error(task, exc)\n\n\ndef _trace_task_ret(name, uuid, args, kwargs, request={}, app=None, **opts):\n app = app or current_app\n return trace_task(app.tasks[name],\n uuid, args, kwargs, request, app=app, **opts)\ntrace_task_ret = _trace_task_ret\n\n\ndef _fast_trace_task(task, uuid, args, kwargs, request={}):\n # setup_worker_optimizations will point trace_task_ret to here,\n # so this is the function used in the worker.\n return _tasks[task].__trace__(uuid, args, kwargs, request)[0]\n\n\ndef eager_trace_task(task, uuid, args, kwargs, request=None, **opts):\n opts.setdefault('eager', True)\n return build_tracer(task.name, task, **opts)(\n uuid, args, kwargs, request)\n\n\ndef report_internal_error(task, exc):\n _type, _value, _tb = sys.exc_info()\n try:\n _value = task.backend.prepare_exception(exc, 'pickle')\n exc_info = ExceptionInfo((_type, _value, _tb), internal=True)\n warn(RuntimeWarning(\n 'Exception raised outside body: {0!r}:\\n{1}'.format(\n exc, exc_info.traceback)))\n return exc_info\n finally:\n del(_tb)\n\n\ndef setup_worker_optimizations(app):\n global _tasks\n global trace_task_ret\n\n # make sure custom Task.__call__ methods that calls super\n # will not mess up the request/task stack.\n _install_stack_protection()\n\n # all new threads start without a current app, so if an app is not\n # passed on to the thread it will fall back to the \"default app\",\n # which then could be the wrong app. So for the worker\n # we set this to always return our app. 
This is a hack,\n # and means that only a single app can be used for workers\n # running in the same process.\n app.set_current()\n set_default_app(app)\n\n # evaluate all task classes by finalizing the app.\n app.finalize()\n\n # set fast shortcut to task registry\n _tasks = app._tasks\n\n trace_task_ret = _fast_trace_task\n from celery.worker import job as job_module\n job_module.trace_task_ret = _fast_trace_task\n job_module.__optimize__()\n\n\ndef reset_worker_optimizations():\n global trace_task_ret\n trace_task_ret = _trace_task_ret\n try:\n delattr(BaseTask, '_stackprotected')\n except AttributeError:\n pass\n try:\n BaseTask.__call__ = _patched.pop('BaseTask.__call__')\n except KeyError:\n pass\n from celery.worker import job as job_module\n job_module.trace_task_ret = _trace_task_ret\n\n\ndef _install_stack_protection():\n # Patches BaseTask.__call__ in the worker to handle the edge case\n # where people override it and also call super.\n #\n # - The worker optimizes away BaseTask.__call__ and instead\n # calls task.run directly.\n # - so with the addition of current_task and the request stack\n # BaseTask.__call__ now pushes to those stacks so that\n # they work when tasks are called directly.\n #\n # The worker only optimizes away __call__ in the case\n # where it has not been overridden, so the request/task stack\n # will blow if a custom task class defines __call__ and also\n # calls super().\n if not getattr(BaseTask, '_stackprotected', False):\n _patched['BaseTask.__call__'] = orig = BaseTask.__call__\n\n def __protected_call__(self, *args, **kwargs):\n stack = self.request_stack\n req = stack.top\n if req and not req._protected and \\\n len(stack) == 1 and not req.called_directly:\n req._protected = 1\n return self.run(*args, **kwargs)\n return orig(self, *args, **kwargs)\n BaseTask.__call__ = __protected_call__\n BaseTask._stackprotected = True\n", "path": "celery/app/trace.py"}]} |
gh_patches_debug_1016 | rasdani/github-patches | git_diff | ansible__ansible-modules-core-3507 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Filter also branch info in has_local_mods
##### ISSUE TYPE
- Bugfix Pull Request
##### COMPONENT NAME
git
##### ANSIBLE VERSION
```
ansible 2.0.1.0
```
##### SUMMARY
Filter also branch info in has_local_mods
If the show-branch option (`status.branch`) is enabled in the git configuration, a change will always be reported.
The output of `git status -s` is then, for example:

    ## master...origin/master
    ?? untracked.file
     M changed.file
--- END ISSUE ---
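For illustration, here is a minimal, self-contained sketch (not part of the original module) of how `has_local_mods` can avoid the false positive described in the issue: it uses `git status --porcelain`, whose output is stable and unaffected by user configuration such as `status.branch`, and it defensively filters any `##` branch-info line alongside the existing `??` untracked-file filter. The function name mirrors the module's, but the signature and the use of `subprocess` are simplifications for demonstration only.

```python
import re
import subprocess

def has_local_mods_sketch(dest, git_path='git', bare=False):
    """Return True if tracked files in `dest` have local modifications.

    Illustrative sketch only: the real module runs the command through
    AnsibleModule.run_command instead of subprocess.
    """
    if bare:
        return False
    # --porcelain gives a stable, config-independent short format, so the
    # "## branch...remote" header produced by status.branch cannot appear.
    out = subprocess.check_output(
        [git_path, 'status', '--porcelain'], cwd=dest).decode()
    lines = out.splitlines()
    # Ignore untracked files ("?? ...") and, defensively, any branch info
    # line ("## ..."), keeping only real modifications to tracked files.
    lines = [line for line in lines if not re.match(r'^(\?\?|##)', line)]
    return len(lines) > 0
```

An equivalent effect could be achieved by keeping `git status -s` and only extending the regular-expression filter, but switching to `--porcelain` (as the accepted patch later in this record does) is the more robust choice, because porcelain output is guaranteed not to change with git version or user configuration.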
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source_control/git.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2012, Michael DeHaan <[email protected]>
5 #
6 # This file is part of Ansible
7 #
8 # Ansible is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # Ansible is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
20
21 DOCUMENTATION = '''
22 ---
23 module: git
24 author:
25 - "Ansible Core Team"
26 - "Michael DeHaan"
27 version_added: "0.0.1"
28 short_description: Deploy software (or files) from git checkouts
29 description:
30 - Manage I(git) checkouts of repositories to deploy files or software.
31 options:
32 repo:
33 required: true
34 aliases: [ name ]
35 description:
36 - git, SSH, or HTTP(S) protocol address of the git repository.
37 dest:
38 required: true
39 description:
40 - Absolute path of where the repository should be checked out to.
41 This parameter is required, unless C(clone) is set to C(no)
42 This change was made in version 1.8.3. Prior to this version,
43 the C(dest) parameter was always required.
44 version:
45 required: false
46 default: "HEAD"
47 description:
48 - What version of the repository to check out. This can be the
49 full 40-character I(SHA-1) hash, the literal string C(HEAD), a
50 branch name, or a tag name.
51 accept_hostkey:
52 required: false
53 default: "no"
54 choices: [ "yes", "no" ]
55 version_added: "1.5"
56 description:
57 - if C(yes), adds the hostkey for the repo url if not already
58 added. If ssh_opts contains "-o StrictHostKeyChecking=no",
59 this parameter is ignored.
60 ssh_opts:
61 required: false
62 default: None
63 version_added: "1.5"
64 description:
65 - Creates a wrapper script and exports the path as GIT_SSH
66 which git then automatically uses to override ssh arguments.
67 An example value could be "-o StrictHostKeyChecking=no"
68 key_file:
69 required: false
70 default: None
71 version_added: "1.5"
72 description:
73 - Specify an optional private key file to use for the checkout.
74 reference:
75 required: false
76 default: null
77 version_added: "1.4"
78 description:
79 - Reference repository (see "git clone --reference ...")
80 remote:
81 required: false
82 default: "origin"
83 description:
84 - Name of the remote.
85 refspec:
86 required: false
87 default: null
88 version_added: "1.9"
89 description:
90 - Add an additional refspec to be fetched.
91 If version is set to a I(SHA-1) not reachable from any branch
92 or tag, this option may be necessary to specify the ref containing
93 the I(SHA-1).
94 Uses the same syntax as the 'git fetch' command.
95 An example value could be "refs/meta/config".
96 force:
97 required: false
98 default: "no"
99 choices: [ "yes", "no" ]
100 version_added: "0.7"
101 description:
102 - If C(yes), any modified files in the working
103 repository will be discarded. Prior to 0.7, this was always
104 'yes' and could not be disabled. Prior to 1.9, the default was
105 `yes`
106 depth:
107 required: false
108 default: null
109 version_added: "1.2"
110 description:
111 - Create a shallow clone with a history truncated to the specified
112 number or revisions. The minimum possible value is C(1), otherwise
113 ignored.
114 clone:
115 required: false
116 default: "yes"
117 choices: [ "yes", "no" ]
118 version_added: "1.9"
119 description:
120 - If C(no), do not clone the repository if it does not exist locally
121 update:
122 required: false
123 default: "yes"
124 choices: [ "yes", "no" ]
125 version_added: "1.2"
126 description:
127 - If C(no), do not retrieve new revisions from the origin repository
128 executable:
129 required: false
130 default: null
131 version_added: "1.4"
132 description:
133 - Path to git executable to use. If not supplied,
134 the normal mechanism for resolving binary paths will be used.
135 bare:
136 required: false
137 default: "no"
138 choices: [ "yes", "no" ]
139 version_added: "1.4"
140 description:
141 - if C(yes), repository will be created as a bare repo, otherwise
142 it will be a standard repo with a workspace.
143
144 recursive:
145 required: false
146 default: "yes"
147 choices: [ "yes", "no" ]
148 version_added: "1.6"
149 description:
150 - if C(no), repository will be cloned without the --recursive
151 option, skipping sub-modules.
152
153 track_submodules:
154 required: false
155 default: "no"
156 choices: ["yes", "no"]
157 version_added: "1.8"
158 description:
159 - if C(yes), submodules will track the latest commit on their
160 master branch (or other branch specified in .gitmodules). If
161 C(no), submodules will be kept at the revision specified by the
162 main project. This is equivalent to specifying the --remote flag
163 to git submodule update.
164
165 verify_commit:
166 required: false
167 default: "no"
168 choices: ["yes", "no"]
169 version_added: "2.0"
170 description:
171 - if C(yes), when cloning or checking out a C(version) verify the
172 signature of a GPG signed commit. This requires C(git) version>=2.1.0
173 to be installed. The commit MUST be signed and the public key MUST
174 be trusted in the GPG trustdb.
175
176 requirements:
177 - git (the command line tool)
178 notes:
179 - "If the task seems to be hanging, first verify remote host is in C(known_hosts).
180 SSH will prompt user to authorize the first contact with a remote host. To avoid this prompt,
181 one solution is to add the remote host public key in C(/etc/ssh/ssh_known_hosts) before calling
182 the git module, with the following command: ssh-keyscan -H remote_host.com >> /etc/ssh/ssh_known_hosts."
183 '''
184
185 EXAMPLES = '''
186 # Example git checkout from Ansible Playbooks
187 - git: repo=git://foosball.example.org/path/to/repo.git
188 dest=/srv/checkout
189 version=release-0.22
190
191 # Example read-write git checkout from github
192 - git: repo=ssh://[email protected]/mylogin/hello.git dest=/home/mylogin/hello
193
194 # Example just ensuring the repo checkout exists
195 - git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout update=no
196
197 # Example just get information about the repository whether or not it has
198 # already been cloned locally.
199 - git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout clone=no update=no
200
201 # Example checkout a github repo and use refspec to fetch all pull requests
202 - git: repo=https://github.com/ansible/ansible-examples.git dest=/src/ansible-examples refspec=+refs/pull/*:refs/heads/*
203 '''
204
205 import re
206 import tempfile
207
208 def get_submodule_update_params(module, git_path, cwd):
209
210 #or: git submodule [--quiet] update [--init] [-N|--no-fetch]
211 #[-f|--force] [--rebase] [--reference <repository>] [--merge]
212 #[--recursive] [--] [<path>...]
213
214 params = []
215
216 # run a bad submodule command to get valid params
217 cmd = "%s submodule update --help" % (git_path)
218 rc, stdout, stderr = module.run_command(cmd, cwd=cwd)
219 lines = stderr.split('\n')
220 update_line = None
221 for line in lines:
222 if 'git submodule [--quiet] update ' in line:
223 update_line = line
224 if update_line:
225 update_line = update_line.replace('[','')
226 update_line = update_line.replace(']','')
227 update_line = update_line.replace('|',' ')
228 parts = shlex.split(update_line)
229 for part in parts:
230 if part.startswith('--'):
231 part = part.replace('--', '')
232 params.append(part)
233
234 return params
235
236 def write_ssh_wrapper():
237 module_dir = get_module_path()
238 try:
239 # make sure we have full permission to the module_dir, which
240 # may not be the case if we're sudo'ing to a non-root user
241 if os.access(module_dir, os.W_OK|os.R_OK|os.X_OK):
242 fd, wrapper_path = tempfile.mkstemp(prefix=module_dir + '/')
243 else:
244 raise OSError
245 except (IOError, OSError):
246 fd, wrapper_path = tempfile.mkstemp()
247 fh = os.fdopen(fd, 'w+b')
248 template = """#!/bin/sh
249 if [ -z "$GIT_SSH_OPTS" ]; then
250 BASEOPTS=""
251 else
252 BASEOPTS=$GIT_SSH_OPTS
253 fi
254
255 if [ -z "$GIT_KEY" ]; then
256 ssh $BASEOPTS "$@"
257 else
258 ssh -i "$GIT_KEY" $BASEOPTS "$@"
259 fi
260 """
261 fh.write(template)
262 fh.close()
263 st = os.stat(wrapper_path)
264 os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC)
265 return wrapper_path
266
267 def set_git_ssh(ssh_wrapper, key_file, ssh_opts):
268
269 if os.environ.get("GIT_SSH"):
270 del os.environ["GIT_SSH"]
271 os.environ["GIT_SSH"] = ssh_wrapper
272
273 if os.environ.get("GIT_KEY"):
274 del os.environ["GIT_KEY"]
275
276 if key_file:
277 os.environ["GIT_KEY"] = key_file
278
279 if os.environ.get("GIT_SSH_OPTS"):
280 del os.environ["GIT_SSH_OPTS"]
281
282 if ssh_opts:
283 os.environ["GIT_SSH_OPTS"] = ssh_opts
284
285 def get_version(module, git_path, dest, ref="HEAD"):
286 ''' samples the version of the git repo '''
287
288 cmd = "%s rev-parse %s" % (git_path, ref)
289 rc, stdout, stderr = module.run_command(cmd, cwd=dest)
290 sha = stdout.rstrip('\n')
291 return sha
292
293 def get_submodule_versions(git_path, module, dest, version='HEAD'):
294 cmd = [git_path, 'submodule', 'foreach', git_path, 'rev-parse', version]
295 (rc, out, err) = module.run_command(cmd, cwd=dest)
296 if rc != 0:
297 module.fail_json(msg='Unable to determine hashes of submodules')
298 submodules = {}
299 subm_name = None
300 for line in out.splitlines():
301 if line.startswith("Entering '"):
302 subm_name = line[10:-1]
303 elif len(line.strip()) == 40:
304 if subm_name is None:
305 module.fail_json()
306 submodules[subm_name] = line.strip()
307 subm_name = None
308 else:
309 module.fail_json(msg='Unable to parse submodule hash line: %s' % line.strip())
310 if subm_name is not None:
311 module.fail_json(msg='Unable to find hash for submodule: %s' % subm_name)
312
313 return submodules
314
315 def clone(git_path, module, repo, dest, remote, depth, version, bare,
316 reference, refspec, verify_commit):
317 ''' makes a new git repo if it does not already exist '''
318 dest_dirname = os.path.dirname(dest)
319 try:
320 os.makedirs(dest_dirname)
321 except:
322 pass
323 cmd = [ git_path, 'clone' ]
324
325 branch_or_tag = is_remote_branch(git_path, module, dest, repo, version) \
326 or is_remote_tag(git_path, module, dest, repo, version)
327
328 if bare:
329 cmd.append('--bare')
330 else:
331 cmd.extend([ '--origin', remote ])
332 if branch_or_tag:
333 cmd.extend([ '--branch', version ])
334 if depth and (branch_or_tag or version == 'HEAD' or refspec):
335 # only use depth if the remote opject is branch or tag (i.e. fetchable)
336 cmd.extend([ '--depth', str(depth) ])
337 if reference:
338 cmd.extend([ '--reference', str(reference) ])
339 cmd.extend([ repo, dest ])
340 module.run_command(cmd, check_rc=True, cwd=dest_dirname)
341 if bare:
342 if remote != 'origin':
343 module.run_command([git_path, 'remote', 'add', remote, repo], check_rc=True, cwd=dest)
344
345 if refspec:
346 cmd = [git_path, 'fetch']
347 if depth:
348 cmd.extend([ '--depth', str(depth) ])
349 cmd.extend([remote, refspec])
350 module.run_command(cmd, check_rc=True, cwd=dest)
351
352 if verify_commit:
353 verify_commit_sign(git_path, module, dest, version)
354
355 def has_local_mods(module, git_path, dest, bare):
356 if bare:
357 return False
358
359 cmd = "%s status -s" % (git_path)
360 rc, stdout, stderr = module.run_command(cmd, cwd=dest)
361 lines = stdout.splitlines()
362 lines = filter(lambda c: not re.search('^\\?\\?.*$', c), lines)
363
364 return len(lines) > 0
365
366 def reset(git_path, module, dest):
367 '''
368 Resets the index and working tree to HEAD.
369 Discards any changes to tracked files in working
370 tree since that commit.
371 '''
372 cmd = "%s reset --hard HEAD" % (git_path,)
373 return module.run_command(cmd, check_rc=True, cwd=dest)
374
375 def get_remote_head(git_path, module, dest, version, remote, bare):
376 cloning = False
377 cwd = None
378 tag = False
379 if remote == module.params['repo']:
380 cloning = True
381 else:
382 cwd = dest
383 if version == 'HEAD':
384 if cloning:
385 # cloning the repo, just get the remote's HEAD version
386 cmd = '%s ls-remote %s -h HEAD' % (git_path, remote)
387 else:
388 head_branch = get_head_branch(git_path, module, dest, remote, bare)
389 cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, head_branch)
390 elif is_remote_branch(git_path, module, dest, remote, version):
391 cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
392 elif is_remote_tag(git_path, module, dest, remote, version):
393 tag = True
394 cmd = '%s ls-remote %s -t refs/tags/%s*' % (git_path, remote, version)
395 else:
396 # appears to be a sha1. return as-is since it appears
397 # cannot check for a specific sha1 on remote
398 return version
399 (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=cwd)
400 if len(out) < 1:
401 module.fail_json(msg="Could not determine remote revision for %s" % version)
402
403 if tag:
404 # Find the dereferenced tag if this is an annotated tag.
405 for tag in out.split('\n'):
406 if tag.endswith(version + '^{}'):
407 out = tag
408 break
409 elif tag.endswith(version):
410 out = tag
411
412 rev = out.split()[0]
413 return rev
414
415 def is_remote_tag(git_path, module, dest, remote, version):
416 cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version)
417 (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
418 if version in out:
419 return True
420 else:
421 return False
422
423 def get_branches(git_path, module, dest):
424 branches = []
425 cmd = '%s branch -a' % (git_path,)
426 (rc, out, err) = module.run_command(cmd, cwd=dest)
427 if rc != 0:
428 module.fail_json(msg="Could not determine branch data - received %s" % out)
429 for line in out.split('\n'):
430 branches.append(line.strip())
431 return branches
432
433 def get_tags(git_path, module, dest):
434 tags = []
435 cmd = '%s tag' % (git_path,)
436 (rc, out, err) = module.run_command(cmd, cwd=dest)
437 if rc != 0:
438 module.fail_json(msg="Could not determine tag data - received %s" % out)
439 for line in out.split('\n'):
440 tags.append(line.strip())
441 return tags
442
443 def is_remote_branch(git_path, module, dest, remote, version):
444 cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
445 (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
446 if version in out:
447 return True
448 else:
449 return False
450
451 def is_local_branch(git_path, module, dest, branch):
452 branches = get_branches(git_path, module, dest)
453 lbranch = '%s' % branch
454 if lbranch in branches:
455 return True
456 elif '* %s' % branch in branches:
457 return True
458 else:
459 return False
460
461 def is_not_a_branch(git_path, module, dest):
462 branches = get_branches(git_path, module, dest)
463 for b in branches:
464 if b.startswith('* ') and ('no branch' in b or 'detached from' in b):
465 return True
466 return False
467
468 def get_head_branch(git_path, module, dest, remote, bare=False):
469 '''
470 Determine what branch HEAD is associated with. This is partly
471 taken from lib/ansible/utils/__init__.py. It finds the correct
472 path to .git/HEAD and reads from that file the branch that HEAD is
473 associated with. In the case of a detached HEAD, this will look
474 up the branch in .git/refs/remotes/<remote>/HEAD.
475 '''
476 if bare:
477 repo_path = dest
478 else:
479 repo_path = os.path.join(dest, '.git')
480 # Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
481 if os.path.isfile(repo_path):
482 try:
483 gitdir = yaml.safe_load(open(repo_path)).get('gitdir')
484 # There is a posibility the .git file to have an absolute path.
485 if os.path.isabs(gitdir):
486 repo_path = gitdir
487 else:
488 repo_path = os.path.join(repo_path.split('.git')[0], gitdir)
489 except (IOError, AttributeError):
490 return ''
491 # Read .git/HEAD for the name of the branch.
492 # If we're in a detached HEAD state, look up the branch associated with
493 # the remote HEAD in .git/refs/remotes/<remote>/HEAD
494 f = open(os.path.join(repo_path, "HEAD"))
495 if is_not_a_branch(git_path, module, dest):
496 f.close()
497 f = open(os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD'))
498 branch = f.readline().split('/')[-1].rstrip("\n")
499 f.close()
500 return branch
501
502 def set_remote_url(git_path, module, repo, dest, remote):
503 ''' updates repo from remote sources '''
504 commands = [("set a new url %s for %s" % (repo, remote), [git_path, 'remote', 'set-url', remote, repo])]
505
506 for (label,command) in commands:
507 (rc,out,err) = module.run_command(command, cwd=dest)
508 if rc != 0:
509 module.fail_json(msg="Failed to %s: %s %s" % (label, out, err))
510
511 def fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec):
512 ''' updates repo from remote sources '''
513 set_remote_url(git_path, module, repo, dest, remote)
514 commands = []
515
516 fetch_str = 'download remote objects and refs'
517 fetch_cmd = [git_path, 'fetch']
518
519
520 refspecs = []
521 if depth:
522 # try to find the minimal set of refs we need to fetch to get a
523 # successful checkout
524 if refspec:
525 refspecs.append(refspec)
526 elif version == 'HEAD':
527 refspecs.append('HEAD')
528 elif is_remote_branch(git_path, module, dest, repo, version):
529 refspecs.append(version)
530 elif is_remote_tag(git_path, module, dest, repo, version):
531 refspecs.append('+refs/tags/'+version+':refs/tags/'+version)
532 if refspecs:
533 # if refspecs is empty, i.e. version is neither heads nor tags
534 # fall back to a full clone, otherwise we might not be able to checkout
535 # version
536 fetch_cmd.extend(['--depth', str(depth)])
537
538 fetch_cmd.extend([remote])
539 if not depth or not refspecs:
540 # don't try to be minimalistic but do a full clone
541 # also do this if depth is given, but version is something that can't be fetched directly
542 if bare:
543 refspecs = ['+refs/heads/*:refs/heads/*', '+refs/tags/*:refs/tags/*']
544 else:
545 # unlike in bare mode, there's no way to combine the
546 # additional refspec with the default git fetch behavior,
547 # so use two commands
548 commands.append((fetch_str, fetch_cmd))
549 refspecs = ['+refs/tags/*:refs/tags/*']
550 if refspec:
551 refspecs.append(refspec)
552
553 commands.append((fetch_str, fetch_cmd + refspecs))
554
555 for (label,command) in commands:
556 (rc,out,err) = module.run_command(command, cwd=dest)
557 if rc != 0:
558 module.fail_json(msg="Failed to %s: %s %s" % (label, out, err), cmd=command)
559
560 def submodules_fetch(git_path, module, remote, track_submodules, dest):
561 changed = False
562
563 if not os.path.exists(os.path.join(dest, '.gitmodules')):
564 # no submodules
565 return changed
566
567 gitmodules_file = open(os.path.join(dest, '.gitmodules'), 'r')
568 for line in gitmodules_file:
569 # Check for new submodules
570 if not changed and line.strip().startswith('path'):
571 path = line.split('=', 1)[1].strip()
572 # Check that dest/path/.git exists
573 if not os.path.exists(os.path.join(dest, path, '.git')):
574 changed = True
575
576 # add the submodule repo's hostkey
577 if line.strip().startswith('url'):
578 repo = line.split('=', 1)[1].strip()
579 if module.params['ssh_opts'] is not None:
580 if not "-o StrictHostKeyChecking=no" in module.params['ssh_opts']:
581 add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
582 else:
583 add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
584
585 # Check for updates to existing modules
586 if not changed:
587 # Fetch updates
588 begin = get_submodule_versions(git_path, module, dest)
589 cmd = [git_path, 'submodule', 'foreach', git_path, 'fetch']
590 (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
591 if rc != 0:
592 module.fail_json(msg="Failed to fetch submodules: %s" % out + err)
593
594 if track_submodules:
595 # Compare against submodule HEAD
596 ### FIXME: determine this from .gitmodules
597 version = 'master'
598 after = get_submodule_versions(git_path, module, dest, '%s/%s'
599 % (remote, version))
600 if begin != after:
601 changed = True
602 else:
603 # Compare against the superproject's expectation
604 cmd = [git_path, 'submodule', 'status']
605 (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
606 if rc != 0:
607 module.fail_json(msg='Failed to retrieve submodule status: %s' % out + err)
608 for line in out.splitlines():
609 if line[0] != ' ':
610 changed = True
611 break
612 return changed
613
614 def submodule_update(git_path, module, dest, track_submodules):
615 ''' init and update any submodules '''
616
617 # get the valid submodule params
618 params = get_submodule_update_params(module, git_path, dest)
619
620 # skip submodule commands if .gitmodules is not present
621 if not os.path.exists(os.path.join(dest, '.gitmodules')):
622 return (0, '', '')
623 cmd = [ git_path, 'submodule', 'sync' ]
624 (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
625 if 'remote' in params and track_submodules:
626 cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ,'--remote' ]
627 else:
628 cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ]
629 (rc, out, err) = module.run_command(cmd, cwd=dest)
630 if rc != 0:
631 module.fail_json(msg="Failed to init/update submodules: %s" % out + err)
632 return (rc, out, err)
633
634 def set_remote_branch(git_path, module, dest, remote, version, depth):
635 cmd = "%s remote set-branches %s %s" % (git_path, remote, version)
636 (rc, out, err) = module.run_command(cmd, cwd=dest)
637 if rc != 0:
638 module.fail_json(msg="Failed to set remote branch: %s" % version)
639 cmd = "%s fetch --depth=%s %s %s" % (git_path, depth, remote, version)
640 (rc, out, err) = module.run_command(cmd, cwd=dest)
641 if rc != 0:
642 module.fail_json(msg="Failed to fetch branch from remote: %s" % version)
643
644 def switch_version(git_path, module, dest, remote, version, verify_commit):
645 cmd = ''
646 if version != 'HEAD':
647 if is_remote_branch(git_path, module, dest, remote, version):
648 if not is_local_branch(git_path, module, dest, version):
649 depth = module.params['depth']
650 if depth:
651 # git clone --depth implies --single-branch, which makes
652 # the checkout fail if the version changes
653 set_remote_branch(git_path, module, dest, remote, version, depth)
654 cmd = "%s checkout --track -b %s %s/%s" % (git_path, version, remote, version)
655 else:
656 (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, version), cwd=dest)
657 if rc != 0:
658 module.fail_json(msg="Failed to checkout branch %s" % version,
659 stdout=out, stderr=err, rc=rc)
660 cmd = "%s reset --hard %s/%s" % (git_path, remote, version)
661 else:
662 cmd = "%s checkout --force %s" % (git_path, version)
663 else:
664 branch = get_head_branch(git_path, module, dest, remote)
665 (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, branch), cwd=dest)
666 if rc != 0:
667 module.fail_json(msg="Failed to checkout branch %s" % branch,
668 stdout=out, stderr=err, rc=rc)
669 cmd = "%s reset --hard %s" % (git_path, remote)
670 (rc, out1, err1) = module.run_command(cmd, cwd=dest)
671 if rc != 0:
672 if version != 'HEAD':
673 module.fail_json(msg="Failed to checkout %s" % (version),
674 stdout=out1, stderr=err1, rc=rc, cmd=cmd)
675 else:
676 module.fail_json(msg="Failed to checkout branch %s" % (branch),
677 stdout=out1, stderr=err1, rc=rc, cmd=cmd)
678
679 if verify_commit:
680 verify_commit_sign(git_path, module, dest, version)
681
682 return (rc, out1, err1)
683
684
685 def verify_commit_sign(git_path, module, dest, version):
686 cmd = "%s verify-commit %s" % (git_path, version)
687 (rc, out, err) = module.run_command(cmd, cwd=dest)
688 if rc != 0:
689 module.fail_json(msg='Failed to verify GPG signature of commit/tag "%s"' % version)
690 return (rc, out, err)
691
692 # ===========================================
693
694 def main():
695 module = AnsibleModule(
696 argument_spec = dict(
697 dest=dict(type='path'),
698 repo=dict(required=True, aliases=['name']),
699 version=dict(default='HEAD'),
700 remote=dict(default='origin'),
701 refspec=dict(default=None),
702 reference=dict(default=None),
703 force=dict(default='no', type='bool'),
704 depth=dict(default=None, type='int'),
705 clone=dict(default='yes', type='bool'),
706 update=dict(default='yes', type='bool'),
707 verify_commit=dict(default='no', type='bool'),
708 accept_hostkey=dict(default='no', type='bool'),
709 key_file=dict(default=None, type='path', required=False),
710 ssh_opts=dict(default=None, required=False),
711 executable=dict(default=None, type='path'),
712 bare=dict(default='no', type='bool'),
713 recursive=dict(default='yes', type='bool'),
714 track_submodules=dict(default='no', type='bool'),
715 ),
716 supports_check_mode=True
717 )
718
719 dest = module.params['dest']
720 repo = module.params['repo']
721 version = module.params['version']
722 remote = module.params['remote']
723 refspec = module.params['refspec']
724 force = module.params['force']
725 depth = module.params['depth']
726 update = module.params['update']
727 allow_clone = module.params['clone']
728 bare = module.params['bare']
729 verify_commit = module.params['verify_commit']
730 reference = module.params['reference']
731 git_path = module.params['executable'] or module.get_bin_path('git', True)
732 key_file = module.params['key_file']
733 ssh_opts = module.params['ssh_opts']
734
735 # We screenscrape a huge amount of git commands so use C locale anytime we
736 # call run_command()
737 module.run_command_environ_update = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C', LC_CTYPE='C')
738
739 gitconfig = None
740 if not dest and allow_clone:
741 module.fail_json(msg="the destination directory must be specified unless clone=no")
742 elif dest:
743 dest = os.path.abspath(dest)
744 if bare:
745 gitconfig = os.path.join(dest, 'config')
746 else:
747 gitconfig = os.path.join(dest, '.git', 'config')
748
749 # create a wrapper script and export
750 # GIT_SSH=<path> as an environment variable
751 # for git to use the wrapper script
752 ssh_wrapper = None
753 if key_file or ssh_opts:
754 ssh_wrapper = write_ssh_wrapper()
755 set_git_ssh(ssh_wrapper, key_file, ssh_opts)
756 module.add_cleanup_file(path=ssh_wrapper)
757
758 # add the git repo's hostkey
759 if module.params['ssh_opts'] is not None:
760 if not "-o StrictHostKeyChecking=no" in module.params['ssh_opts']:
761 add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
762 else:
763 add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
764
765 recursive = module.params['recursive']
766 track_submodules = module.params['track_submodules']
767
768 rc, out, err, status = (0, None, None, None)
769
770 before = None
771 local_mods = False
772 repo_updated = None
773 if (dest and not os.path.exists(gitconfig)) or (not dest and not allow_clone):
774 # if there is no git configuration, do a clone operation unless:
775 # * the user requested no clone (they just want info)
776 # * we're doing a check mode test
777 # In those cases we do an ls-remote
778 if module.check_mode or not allow_clone:
779 remote_head = get_remote_head(git_path, module, dest, version, repo, bare)
780 module.exit_json(changed=True, before=before, after=remote_head)
781 # there's no git config, so clone
782 clone(git_path, module, repo, dest, remote, depth, version, bare, reference, refspec, verify_commit)
783 repo_updated = True
784 elif not update:
785 # Just return having found a repo already in the dest path
786 # this does no checking that the repo is the actual repo
787 # requested.
788 before = get_version(module, git_path, dest)
789 module.exit_json(changed=False, before=before, after=before)
790 else:
791 # else do a pull
792 local_mods = has_local_mods(module, git_path, dest, bare)
793 before = get_version(module, git_path, dest)
794 if local_mods:
795 # failure should happen regardless of check mode
796 if not force:
797 module.fail_json(msg="Local modifications exist in repository (force=no).")
798 # if force and in non-check mode, do a reset
799 if not module.check_mode:
800 reset(git_path, module, dest)
801 # exit if already at desired sha version
802 set_remote_url(git_path, module, repo, dest, remote)
803 remote_head = get_remote_head(git_path, module, dest, version, remote, bare)
804 if before == remote_head:
805 if local_mods:
806 module.exit_json(changed=True, before=before, after=remote_head,
807 msg="Local modifications exist")
808 elif is_remote_tag(git_path, module, dest, repo, version):
809 # if the remote is a tag and we have the tag locally, exit early
810 if version in get_tags(git_path, module, dest):
811 repo_updated = False
812 else:
813 # if the remote is a branch and we have the branch locally, exit early
814 if version in get_branches(git_path, module, dest):
815 repo_updated = False
816 if repo_updated is None:
817 if module.check_mode:
818 module.exit_json(changed=True, before=before, after=remote_head)
819 fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec)
820 repo_updated = True
821
822 # switch to version specified regardless of whether
823 # we got new revisions from the repository
824 if not bare:
825 switch_version(git_path, module, dest, remote, version, verify_commit)
826
827 # Deal with submodules
828 submodules_updated = False
829 if recursive and not bare:
830 submodules_updated = submodules_fetch(git_path, module, remote, track_submodules, dest)
831
832 if module.check_mode:
833 if submodules_updated:
834 module.exit_json(changed=True, before=before, after=remote_head, submodules_changed=True)
835 else:
836 module.exit_json(changed=False, before=before, after=remote_head)
837
838 if submodules_updated:
839 # Switch to version specified
840 submodule_update(git_path, module, dest, track_submodules)
841
842 # determine if we changed anything
843 after = get_version(module, git_path, dest)
844
845 changed = False
846 if before != after or local_mods or submodules_updated:
847 changed = True
848
849 # cleanup the wrapper script
850 if ssh_wrapper:
851 try:
852 os.remove(ssh_wrapper)
853 except OSError:
854 # No need to fail if the file already doesn't exist
855 pass
856
857 module.exit_json(changed=changed, before=before, after=after)
858
859 # import module snippets
860 from ansible.module_utils.basic import *
861 from ansible.module_utils.known_hosts import *
862
863 if __name__ == '__main__':
864 main()
865
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/source_control/git.py b/source_control/git.py
--- a/source_control/git.py
+++ b/source_control/git.py
@@ -356,7 +356,7 @@
if bare:
return False
- cmd = "%s status -s" % (git_path)
+ cmd = "%s status --porcelain" % (git_path)
rc, stdout, stderr = module.run_command(cmd, cwd=dest)
lines = stdout.splitlines()
lines = filter(lambda c: not re.search('^\\?\\?.*$', c), lines)
| {"golden_diff": "diff --git a/source_control/git.py b/source_control/git.py\n--- a/source_control/git.py\n+++ b/source_control/git.py\n@@ -356,7 +356,7 @@\n if bare:\n return False\n \n- cmd = \"%s status -s\" % (git_path)\n+ cmd = \"%s status --porcelain\" % (git_path)\n rc, stdout, stderr = module.run_command(cmd, cwd=dest)\n lines = stdout.splitlines()\n lines = filter(lambda c: not re.search('^\\\\?\\\\?.*$', c), lines)\n", "issue": "Filter also branch info in has_local_mods\n##### ISSUE TYPE\n- Bugfix Pull Request\n##### COMPONENT NAME\n\ngit\n##### ANSIBLE VERSION\n\n```\nansible 2.0.1.0\n```\n##### SUMMARY\n\nFilter also branch info in has_local_mods\n\nIf you have in the git configuration show branch enabled there will be always a change. \nThe output of git status is for example:\n\n ## master...origin/master\n ?? untracked.file\n M changed.file\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2012, Michael DeHaan <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: git\nauthor: \n - \"Ansible Core Team\"\n - \"Michael DeHaan\"\nversion_added: \"0.0.1\"\nshort_description: Deploy software (or files) from git checkouts\ndescription:\n - Manage I(git) checkouts of repositories to deploy files or software.\noptions:\n repo:\n required: true\n aliases: [ name ]\n description:\n - git, SSH, or HTTP(S) protocol address of the git repository.\n dest:\n required: true\n description:\n - Absolute path of where the repository should be checked out to.\n This parameter is required, unless C(clone) is set to C(no)\n This change was made in version 1.8.3. Prior to this version,\n the C(dest) parameter was always required.\n version:\n required: false\n default: \"HEAD\"\n description:\n - What version of the repository to check out. This can be the\n full 40-character I(SHA-1) hash, the literal string C(HEAD), a\n branch name, or a tag name.\n accept_hostkey:\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"1.5\"\n description:\n - if C(yes), adds the hostkey for the repo url if not already \n added. 
If ssh_opts contains \"-o StrictHostKeyChecking=no\", \n this parameter is ignored.\n ssh_opts:\n required: false\n default: None\n version_added: \"1.5\"\n description:\n - Creates a wrapper script and exports the path as GIT_SSH\n which git then automatically uses to override ssh arguments.\n An example value could be \"-o StrictHostKeyChecking=no\"\n key_file:\n required: false\n default: None\n version_added: \"1.5\"\n description:\n - Specify an optional private key file to use for the checkout.\n reference:\n required: false\n default: null\n version_added: \"1.4\"\n description:\n - Reference repository (see \"git clone --reference ...\")\n remote:\n required: false\n default: \"origin\"\n description:\n - Name of the remote.\n refspec:\n required: false\n default: null\n version_added: \"1.9\"\n description:\n - Add an additional refspec to be fetched.\n If version is set to a I(SHA-1) not reachable from any branch\n or tag, this option may be necessary to specify the ref containing\n the I(SHA-1).\n Uses the same syntax as the 'git fetch' command.\n An example value could be \"refs/meta/config\".\n force:\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"0.7\"\n description:\n - If C(yes), any modified files in the working\n repository will be discarded. Prior to 0.7, this was always\n 'yes' and could not be disabled. Prior to 1.9, the default was\n `yes`\n depth:\n required: false\n default: null\n version_added: \"1.2\"\n description:\n - Create a shallow clone with a history truncated to the specified\n number or revisions. The minimum possible value is C(1), otherwise\n ignored.\n clone:\n required: false\n default: \"yes\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"1.9\"\n description:\n - If C(no), do not clone the repository if it does not exist locally\n update:\n required: false\n default: \"yes\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"1.2\"\n description:\n - If C(no), do not retrieve new revisions from the origin repository\n executable:\n required: false\n default: null\n version_added: \"1.4\"\n description:\n - Path to git executable to use. If not supplied,\n the normal mechanism for resolving binary paths will be used.\n bare:\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"1.4\"\n description:\n - if C(yes), repository will be created as a bare repo, otherwise\n it will be a standard repo with a workspace.\n\n recursive:\n required: false\n default: \"yes\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"1.6\"\n description:\n - if C(no), repository will be cloned without the --recursive\n option, skipping sub-modules.\n\n track_submodules:\n required: false\n default: \"no\"\n choices: [\"yes\", \"no\"]\n version_added: \"1.8\"\n description:\n - if C(yes), submodules will track the latest commit on their\n master branch (or other branch specified in .gitmodules). If\n C(no), submodules will be kept at the revision specified by the\n main project. This is equivalent to specifying the --remote flag\n to git submodule update.\n\n verify_commit:\n required: false\n default: \"no\"\n choices: [\"yes\", \"no\"]\n version_added: \"2.0\"\n description:\n - if C(yes), when cloning or checking out a C(version) verify the\n signature of a GPG signed commit. This requires C(git) version>=2.1.0\n to be installed. 
The commit MUST be signed and the public key MUST\n be trusted in the GPG trustdb.\n\nrequirements:\n - git (the command line tool)\nnotes:\n - \"If the task seems to be hanging, first verify remote host is in C(known_hosts).\n SSH will prompt user to authorize the first contact with a remote host. To avoid this prompt, \n one solution is to add the remote host public key in C(/etc/ssh/ssh_known_hosts) before calling \n the git module, with the following command: ssh-keyscan -H remote_host.com >> /etc/ssh/ssh_known_hosts.\"\n'''\n\nEXAMPLES = '''\n# Example git checkout from Ansible Playbooks\n- git: repo=git://foosball.example.org/path/to/repo.git\n dest=/srv/checkout\n version=release-0.22\n\n# Example read-write git checkout from github\n- git: repo=ssh://[email protected]/mylogin/hello.git dest=/home/mylogin/hello\n\n# Example just ensuring the repo checkout exists\n- git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout update=no\n\n# Example just get information about the repository whether or not it has\n# already been cloned locally.\n- git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout clone=no update=no\n\n# Example checkout a github repo and use refspec to fetch all pull requests\n- git: repo=https://github.com/ansible/ansible-examples.git dest=/src/ansible-examples refspec=+refs/pull/*:refs/heads/*\n'''\n\nimport re\nimport tempfile\n\ndef get_submodule_update_params(module, git_path, cwd):\n\n #or: git submodule [--quiet] update [--init] [-N|--no-fetch] \n #[-f|--force] [--rebase] [--reference <repository>] [--merge] \n #[--recursive] [--] [<path>...]\n\n params = []\n\n # run a bad submodule command to get valid params \n cmd = \"%s submodule update --help\" % (git_path)\n rc, stdout, stderr = module.run_command(cmd, cwd=cwd)\n lines = stderr.split('\\n')\n update_line = None\n for line in lines:\n if 'git submodule [--quiet] update ' in line:\n update_line = line\n if update_line:\n update_line = update_line.replace('[','')\n update_line = update_line.replace(']','')\n update_line = update_line.replace('|',' ')\n parts = shlex.split(update_line)\n for part in parts: \n if part.startswith('--'):\n part = part.replace('--', '')\n params.append(part)\n\n return params\n\ndef write_ssh_wrapper():\n module_dir = get_module_path()\n try:\n # make sure we have full permission to the module_dir, which\n # may not be the case if we're sudo'ing to a non-root user\n if os.access(module_dir, os.W_OK|os.R_OK|os.X_OK):\n fd, wrapper_path = tempfile.mkstemp(prefix=module_dir + '/')\n else:\n raise OSError\n except (IOError, OSError):\n fd, wrapper_path = tempfile.mkstemp()\n fh = os.fdopen(fd, 'w+b')\n template = \"\"\"#!/bin/sh\nif [ -z \"$GIT_SSH_OPTS\" ]; then\n BASEOPTS=\"\"\nelse\n BASEOPTS=$GIT_SSH_OPTS\nfi\n\nif [ -z \"$GIT_KEY\" ]; then\n ssh $BASEOPTS \"$@\"\nelse\n ssh -i \"$GIT_KEY\" $BASEOPTS \"$@\"\nfi\n\"\"\"\n fh.write(template)\n fh.close()\n st = os.stat(wrapper_path)\n os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC)\n return wrapper_path\n\ndef set_git_ssh(ssh_wrapper, key_file, ssh_opts):\n\n if os.environ.get(\"GIT_SSH\"):\n del os.environ[\"GIT_SSH\"]\n os.environ[\"GIT_SSH\"] = ssh_wrapper\n\n if os.environ.get(\"GIT_KEY\"):\n del os.environ[\"GIT_KEY\"]\n\n if key_file:\n os.environ[\"GIT_KEY\"] = key_file \n\n if os.environ.get(\"GIT_SSH_OPTS\"):\n del os.environ[\"GIT_SSH_OPTS\"]\n\n if ssh_opts:\n os.environ[\"GIT_SSH_OPTS\"] = ssh_opts\n\ndef get_version(module, git_path, dest, ref=\"HEAD\"):\n ''' samples the version of 
the git repo '''\n\n cmd = \"%s rev-parse %s\" % (git_path, ref)\n rc, stdout, stderr = module.run_command(cmd, cwd=dest)\n sha = stdout.rstrip('\\n')\n return sha\n\ndef get_submodule_versions(git_path, module, dest, version='HEAD'):\n cmd = [git_path, 'submodule', 'foreach', git_path, 'rev-parse', version]\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg='Unable to determine hashes of submodules')\n submodules = {}\n subm_name = None\n for line in out.splitlines():\n if line.startswith(\"Entering '\"):\n subm_name = line[10:-1]\n elif len(line.strip()) == 40:\n if subm_name is None:\n module.fail_json()\n submodules[subm_name] = line.strip()\n subm_name = None\n else:\n module.fail_json(msg='Unable to parse submodule hash line: %s' % line.strip())\n if subm_name is not None:\n module.fail_json(msg='Unable to find hash for submodule: %s' % subm_name)\n\n return submodules\n\ndef clone(git_path, module, repo, dest, remote, depth, version, bare,\n reference, refspec, verify_commit):\n ''' makes a new git repo if it does not already exist '''\n dest_dirname = os.path.dirname(dest)\n try:\n os.makedirs(dest_dirname)\n except:\n pass\n cmd = [ git_path, 'clone' ]\n\n branch_or_tag = is_remote_branch(git_path, module, dest, repo, version) \\\n or is_remote_tag(git_path, module, dest, repo, version)\n\n if bare:\n cmd.append('--bare')\n else:\n cmd.extend([ '--origin', remote ])\n if branch_or_tag:\n cmd.extend([ '--branch', version ])\n if depth and (branch_or_tag or version == 'HEAD' or refspec):\n # only use depth if the remote opject is branch or tag (i.e. fetchable)\n cmd.extend([ '--depth', str(depth) ])\n if reference:\n cmd.extend([ '--reference', str(reference) ])\n cmd.extend([ repo, dest ])\n module.run_command(cmd, check_rc=True, cwd=dest_dirname)\n if bare:\n if remote != 'origin':\n module.run_command([git_path, 'remote', 'add', remote, repo], check_rc=True, cwd=dest)\n\n if refspec:\n cmd = [git_path, 'fetch']\n if depth:\n cmd.extend([ '--depth', str(depth) ])\n cmd.extend([remote, refspec])\n module.run_command(cmd, check_rc=True, cwd=dest)\n\n if verify_commit:\n verify_commit_sign(git_path, module, dest, version)\n\ndef has_local_mods(module, git_path, dest, bare):\n if bare:\n return False\n\n cmd = \"%s status -s\" % (git_path)\n rc, stdout, stderr = module.run_command(cmd, cwd=dest)\n lines = stdout.splitlines()\n lines = filter(lambda c: not re.search('^\\\\?\\\\?.*$', c), lines)\n\n return len(lines) > 0\n\ndef reset(git_path, module, dest):\n '''\n Resets the index and working tree to HEAD.\n Discards any changes to tracked files in working\n tree since that commit.\n '''\n cmd = \"%s reset --hard HEAD\" % (git_path,)\n return module.run_command(cmd, check_rc=True, cwd=dest)\n\ndef get_remote_head(git_path, module, dest, version, remote, bare):\n cloning = False\n cwd = None\n tag = False\n if remote == module.params['repo']:\n cloning = True\n else:\n cwd = dest\n if version == 'HEAD':\n if cloning:\n # cloning the repo, just get the remote's HEAD version\n cmd = '%s ls-remote %s -h HEAD' % (git_path, remote)\n else:\n head_branch = get_head_branch(git_path, module, dest, remote, bare)\n cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, head_branch)\n elif is_remote_branch(git_path, module, dest, remote, version):\n cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)\n elif is_remote_tag(git_path, module, dest, remote, version):\n tag = True\n cmd = '%s ls-remote %s -t refs/tags/%s*' % (git_path, 
remote, version)\n else:\n # appears to be a sha1. return as-is since it appears\n # cannot check for a specific sha1 on remote\n return version\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=cwd)\n if len(out) < 1:\n module.fail_json(msg=\"Could not determine remote revision for %s\" % version)\n\n if tag:\n # Find the dereferenced tag if this is an annotated tag.\n for tag in out.split('\\n'):\n if tag.endswith(version + '^{}'):\n out = tag\n break\n elif tag.endswith(version):\n out = tag\n\n rev = out.split()[0]\n return rev\n\ndef is_remote_tag(git_path, module, dest, remote, version):\n cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version)\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)\n if version in out:\n return True\n else:\n return False\n\ndef get_branches(git_path, module, dest):\n branches = []\n cmd = '%s branch -a' % (git_path,)\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Could not determine branch data - received %s\" % out)\n for line in out.split('\\n'):\n branches.append(line.strip())\n return branches\n\ndef get_tags(git_path, module, dest):\n tags = []\n cmd = '%s tag' % (git_path,)\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Could not determine tag data - received %s\" % out)\n for line in out.split('\\n'):\n tags.append(line.strip())\n return tags\n\ndef is_remote_branch(git_path, module, dest, remote, version):\n cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)\n if version in out:\n return True\n else:\n return False\n\ndef is_local_branch(git_path, module, dest, branch):\n branches = get_branches(git_path, module, dest)\n lbranch = '%s' % branch\n if lbranch in branches:\n return True\n elif '* %s' % branch in branches:\n return True\n else:\n return False\n\ndef is_not_a_branch(git_path, module, dest):\n branches = get_branches(git_path, module, dest)\n for b in branches:\n if b.startswith('* ') and ('no branch' in b or 'detached from' in b):\n return True\n return False\n\ndef get_head_branch(git_path, module, dest, remote, bare=False):\n '''\n Determine what branch HEAD is associated with. This is partly\n taken from lib/ansible/utils/__init__.py. It finds the correct\n path to .git/HEAD and reads from that file the branch that HEAD is\n associated with. In the case of a detached HEAD, this will look\n up the branch in .git/refs/remotes/<remote>/HEAD.\n '''\n if bare:\n repo_path = dest\n else:\n repo_path = os.path.join(dest, '.git')\n # Check if the .git is a file. 
If it is a file, it means that we are in a submodule structure.\n if os.path.isfile(repo_path):\n try:\n gitdir = yaml.safe_load(open(repo_path)).get('gitdir')\n # There is a posibility the .git file to have an absolute path.\n if os.path.isabs(gitdir):\n repo_path = gitdir\n else:\n repo_path = os.path.join(repo_path.split('.git')[0], gitdir)\n except (IOError, AttributeError):\n return ''\n # Read .git/HEAD for the name of the branch.\n # If we're in a detached HEAD state, look up the branch associated with\n # the remote HEAD in .git/refs/remotes/<remote>/HEAD\n f = open(os.path.join(repo_path, \"HEAD\"))\n if is_not_a_branch(git_path, module, dest):\n f.close()\n f = open(os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD'))\n branch = f.readline().split('/')[-1].rstrip(\"\\n\")\n f.close()\n return branch\n\ndef set_remote_url(git_path, module, repo, dest, remote):\n ''' updates repo from remote sources '''\n commands = [(\"set a new url %s for %s\" % (repo, remote), [git_path, 'remote', 'set-url', remote, repo])]\n\n for (label,command) in commands:\n (rc,out,err) = module.run_command(command, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to %s: %s %s\" % (label, out, err))\n\ndef fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec):\n ''' updates repo from remote sources '''\n set_remote_url(git_path, module, repo, dest, remote)\n commands = []\n\n fetch_str = 'download remote objects and refs'\n fetch_cmd = [git_path, 'fetch']\n\n\n refspecs = []\n if depth:\n # try to find the minimal set of refs we need to fetch to get a\n # successful checkout\n if refspec:\n refspecs.append(refspec)\n elif version == 'HEAD':\n refspecs.append('HEAD')\n elif is_remote_branch(git_path, module, dest, repo, version):\n refspecs.append(version)\n elif is_remote_tag(git_path, module, dest, repo, version):\n refspecs.append('+refs/tags/'+version+':refs/tags/'+version)\n if refspecs:\n # if refspecs is empty, i.e. 
version is neither heads nor tags\n # fall back to a full clone, otherwise we might not be able to checkout\n # version\n fetch_cmd.extend(['--depth', str(depth)])\n\n fetch_cmd.extend([remote])\n if not depth or not refspecs:\n # don't try to be minimalistic but do a full clone\n # also do this if depth is given, but version is something that can't be fetched directly\n if bare:\n refspecs = ['+refs/heads/*:refs/heads/*', '+refs/tags/*:refs/tags/*']\n else:\n # unlike in bare mode, there's no way to combine the\n # additional refspec with the default git fetch behavior,\n # so use two commands\n commands.append((fetch_str, fetch_cmd))\n refspecs = ['+refs/tags/*:refs/tags/*']\n if refspec:\n refspecs.append(refspec)\n\n commands.append((fetch_str, fetch_cmd + refspecs))\n\n for (label,command) in commands:\n (rc,out,err) = module.run_command(command, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to %s: %s %s\" % (label, out, err), cmd=command)\n\ndef submodules_fetch(git_path, module, remote, track_submodules, dest):\n changed = False\n\n if not os.path.exists(os.path.join(dest, '.gitmodules')):\n # no submodules\n return changed\n\n gitmodules_file = open(os.path.join(dest, '.gitmodules'), 'r')\n for line in gitmodules_file:\n # Check for new submodules\n if not changed and line.strip().startswith('path'):\n path = line.split('=', 1)[1].strip()\n # Check that dest/path/.git exists\n if not os.path.exists(os.path.join(dest, path, '.git')):\n changed = True\n\n # add the submodule repo's hostkey\n if line.strip().startswith('url'):\n repo = line.split('=', 1)[1].strip()\n if module.params['ssh_opts'] is not None:\n if not \"-o StrictHostKeyChecking=no\" in module.params['ssh_opts']:\n add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])\n else:\n add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])\n\n # Check for updates to existing modules\n if not changed:\n # Fetch updates\n begin = get_submodule_versions(git_path, module, dest)\n cmd = [git_path, 'submodule', 'foreach', git_path, 'fetch']\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to fetch submodules: %s\" % out + err)\n\n if track_submodules:\n # Compare against submodule HEAD\n ### FIXME: determine this from .gitmodules\n version = 'master'\n after = get_submodule_versions(git_path, module, dest, '%s/%s'\n % (remote, version))\n if begin != after:\n changed = True\n else:\n # Compare against the superproject's expectation\n cmd = [git_path, 'submodule', 'status']\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)\n if rc != 0:\n module.fail_json(msg='Failed to retrieve submodule status: %s' % out + err)\n for line in out.splitlines():\n if line[0] != ' ':\n changed = True\n break\n return changed\n\ndef submodule_update(git_path, module, dest, track_submodules):\n ''' init and update any submodules '''\n\n # get the valid submodule params\n params = get_submodule_update_params(module, git_path, dest)\n\n # skip submodule commands if .gitmodules is not present\n if not os.path.exists(os.path.join(dest, '.gitmodules')):\n return (0, '', '')\n cmd = [ git_path, 'submodule', 'sync' ]\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)\n if 'remote' in params and track_submodules:\n cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ,'--remote' ]\n else:\n cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ]\n (rc, out, err) = 
module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to init/update submodules: %s\" % out + err)\n return (rc, out, err)\n\ndef set_remote_branch(git_path, module, dest, remote, version, depth):\n cmd = \"%s remote set-branches %s %s\" % (git_path, remote, version)\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to set remote branch: %s\" % version)\n cmd = \"%s fetch --depth=%s %s %s\" % (git_path, depth, remote, version)\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to fetch branch from remote: %s\" % version)\n\ndef switch_version(git_path, module, dest, remote, version, verify_commit):\n cmd = ''\n if version != 'HEAD':\n if is_remote_branch(git_path, module, dest, remote, version):\n if not is_local_branch(git_path, module, dest, version):\n depth = module.params['depth']\n if depth:\n # git clone --depth implies --single-branch, which makes\n # the checkout fail if the version changes\n set_remote_branch(git_path, module, dest, remote, version, depth)\n cmd = \"%s checkout --track -b %s %s/%s\" % (git_path, version, remote, version)\n else:\n (rc, out, err) = module.run_command(\"%s checkout --force %s\" % (git_path, version), cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to checkout branch %s\" % version,\n stdout=out, stderr=err, rc=rc)\n cmd = \"%s reset --hard %s/%s\" % (git_path, remote, version)\n else:\n cmd = \"%s checkout --force %s\" % (git_path, version)\n else:\n branch = get_head_branch(git_path, module, dest, remote)\n (rc, out, err) = module.run_command(\"%s checkout --force %s\" % (git_path, branch), cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to checkout branch %s\" % branch,\n stdout=out, stderr=err, rc=rc)\n cmd = \"%s reset --hard %s\" % (git_path, remote)\n (rc, out1, err1) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n if version != 'HEAD':\n module.fail_json(msg=\"Failed to checkout %s\" % (version),\n stdout=out1, stderr=err1, rc=rc, cmd=cmd)\n else:\n module.fail_json(msg=\"Failed to checkout branch %s\" % (branch),\n stdout=out1, stderr=err1, rc=rc, cmd=cmd)\n\n if verify_commit:\n verify_commit_sign(git_path, module, dest, version)\n\n return (rc, out1, err1)\n\n\ndef verify_commit_sign(git_path, module, dest, version):\n cmd = \"%s verify-commit %s\" % (git_path, version)\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg='Failed to verify GPG signature of commit/tag \"%s\"' % version)\n return (rc, out, err)\n\n# ===========================================\n\ndef main():\n module = AnsibleModule(\n argument_spec = dict(\n dest=dict(type='path'),\n repo=dict(required=True, aliases=['name']),\n version=dict(default='HEAD'),\n remote=dict(default='origin'),\n refspec=dict(default=None),\n reference=dict(default=None),\n force=dict(default='no', type='bool'),\n depth=dict(default=None, type='int'),\n clone=dict(default='yes', type='bool'),\n update=dict(default='yes', type='bool'),\n verify_commit=dict(default='no', type='bool'),\n accept_hostkey=dict(default='no', type='bool'),\n key_file=dict(default=None, type='path', required=False),\n ssh_opts=dict(default=None, required=False),\n executable=dict(default=None, type='path'),\n bare=dict(default='no', type='bool'),\n recursive=dict(default='yes', type='bool'),\n track_submodules=dict(default='no', type='bool'),\n ),\n supports_check_mode=True\n )\n\n dest = module.params['dest']\n repo = module.params['repo']\n 
version = module.params['version']\n remote = module.params['remote']\n refspec = module.params['refspec']\n force = module.params['force']\n depth = module.params['depth']\n update = module.params['update']\n allow_clone = module.params['clone']\n bare = module.params['bare']\n verify_commit = module.params['verify_commit']\n reference = module.params['reference']\n git_path = module.params['executable'] or module.get_bin_path('git', True)\n key_file = module.params['key_file']\n ssh_opts = module.params['ssh_opts']\n\n # We screenscrape a huge amount of git commands so use C locale anytime we\n # call run_command()\n module.run_command_environ_update = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C', LC_CTYPE='C')\n\n gitconfig = None\n if not dest and allow_clone:\n module.fail_json(msg=\"the destination directory must be specified unless clone=no\")\n elif dest:\n dest = os.path.abspath(dest)\n if bare:\n gitconfig = os.path.join(dest, 'config')\n else:\n gitconfig = os.path.join(dest, '.git', 'config')\n\n # create a wrapper script and export\n # GIT_SSH=<path> as an environment variable\n # for git to use the wrapper script\n ssh_wrapper = None\n if key_file or ssh_opts:\n ssh_wrapper = write_ssh_wrapper()\n set_git_ssh(ssh_wrapper, key_file, ssh_opts)\n module.add_cleanup_file(path=ssh_wrapper)\n\n # add the git repo's hostkey \n if module.params['ssh_opts'] is not None:\n if not \"-o StrictHostKeyChecking=no\" in module.params['ssh_opts']:\n add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])\n else:\n add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])\n\n recursive = module.params['recursive']\n track_submodules = module.params['track_submodules']\n\n rc, out, err, status = (0, None, None, None)\n\n before = None\n local_mods = False\n repo_updated = None\n if (dest and not os.path.exists(gitconfig)) or (not dest and not allow_clone):\n # if there is no git configuration, do a clone operation unless:\n # * the user requested no clone (they just want info)\n # * we're doing a check mode test\n # In those cases we do an ls-remote\n if module.check_mode or not allow_clone:\n remote_head = get_remote_head(git_path, module, dest, version, repo, bare)\n module.exit_json(changed=True, before=before, after=remote_head)\n # there's no git config, so clone\n clone(git_path, module, repo, dest, remote, depth, version, bare, reference, refspec, verify_commit)\n repo_updated = True\n elif not update:\n # Just return having found a repo already in the dest path\n # this does no checking that the repo is the actual repo\n # requested.\n before = get_version(module, git_path, dest)\n module.exit_json(changed=False, before=before, after=before)\n else:\n # else do a pull\n local_mods = has_local_mods(module, git_path, dest, bare)\n before = get_version(module, git_path, dest)\n if local_mods:\n # failure should happen regardless of check mode\n if not force:\n module.fail_json(msg=\"Local modifications exist in repository (force=no).\")\n # if force and in non-check mode, do a reset\n if not module.check_mode:\n reset(git_path, module, dest)\n # exit if already at desired sha version\n set_remote_url(git_path, module, repo, dest, remote)\n remote_head = get_remote_head(git_path, module, dest, version, remote, bare)\n if before == remote_head:\n if local_mods:\n module.exit_json(changed=True, before=before, after=remote_head,\n msg=\"Local modifications exist\")\n elif is_remote_tag(git_path, module, dest, repo, version):\n # if the remote is a tag 
and we have the tag locally, exit early\n if version in get_tags(git_path, module, dest):\n repo_updated = False\n else:\n # if the remote is a branch and we have the branch locally, exit early\n if version in get_branches(git_path, module, dest):\n repo_updated = False\n if repo_updated is None:\n if module.check_mode:\n module.exit_json(changed=True, before=before, after=remote_head)\n fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec)\n repo_updated = True\n\n # switch to version specified regardless of whether\n # we got new revisions from the repository\n if not bare:\n switch_version(git_path, module, dest, remote, version, verify_commit)\n\n # Deal with submodules\n submodules_updated = False\n if recursive and not bare:\n submodules_updated = submodules_fetch(git_path, module, remote, track_submodules, dest)\n\n if module.check_mode:\n if submodules_updated:\n module.exit_json(changed=True, before=before, after=remote_head, submodules_changed=True)\n else:\n module.exit_json(changed=False, before=before, after=remote_head)\n\n if submodules_updated:\n # Switch to version specified\n submodule_update(git_path, module, dest, track_submodules)\n\n # determine if we changed anything\n after = get_version(module, git_path, dest)\n\n changed = False\n if before != after or local_mods or submodules_updated:\n changed = True\n\n # cleanup the wrapper script\n if ssh_wrapper:\n try:\n os.remove(ssh_wrapper)\n except OSError:\n # No need to fail if the file already doesn't exist\n pass\n\n module.exit_json(changed=changed, before=before, after=after)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.known_hosts import *\n\nif __name__ == '__main__':\n main()\n", "path": "source_control/git.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2012, Michael DeHaan <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: git\nauthor: \n - \"Ansible Core Team\"\n - \"Michael DeHaan\"\nversion_added: \"0.0.1\"\nshort_description: Deploy software (or files) from git checkouts\ndescription:\n - Manage I(git) checkouts of repositories to deploy files or software.\noptions:\n repo:\n required: true\n aliases: [ name ]\n description:\n - git, SSH, or HTTP(S) protocol address of the git repository.\n dest:\n required: true\n description:\n - Absolute path of where the repository should be checked out to.\n This parameter is required, unless C(clone) is set to C(no)\n This change was made in version 1.8.3. Prior to this version,\n the C(dest) parameter was always required.\n version:\n required: false\n default: \"HEAD\"\n description:\n - What version of the repository to check out. 
This can be the\n full 40-character I(SHA-1) hash, the literal string C(HEAD), a\n branch name, or a tag name.\n accept_hostkey:\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"1.5\"\n description:\n - if C(yes), adds the hostkey for the repo url if not already \n added. If ssh_opts contains \"-o StrictHostKeyChecking=no\", \n this parameter is ignored.\n ssh_opts:\n required: false\n default: None\n version_added: \"1.5\"\n description:\n - Creates a wrapper script and exports the path as GIT_SSH\n which git then automatically uses to override ssh arguments.\n An example value could be \"-o StrictHostKeyChecking=no\"\n key_file:\n required: false\n default: None\n version_added: \"1.5\"\n description:\n - Specify an optional private key file to use for the checkout.\n reference:\n required: false\n default: null\n version_added: \"1.4\"\n description:\n - Reference repository (see \"git clone --reference ...\")\n remote:\n required: false\n default: \"origin\"\n description:\n - Name of the remote.\n refspec:\n required: false\n default: null\n version_added: \"1.9\"\n description:\n - Add an additional refspec to be fetched.\n If version is set to a I(SHA-1) not reachable from any branch\n or tag, this option may be necessary to specify the ref containing\n the I(SHA-1).\n Uses the same syntax as the 'git fetch' command.\n An example value could be \"refs/meta/config\".\n force:\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"0.7\"\n description:\n - If C(yes), any modified files in the working\n repository will be discarded. Prior to 0.7, this was always\n 'yes' and could not be disabled. Prior to 1.9, the default was\n `yes`\n depth:\n required: false\n default: null\n version_added: \"1.2\"\n description:\n - Create a shallow clone with a history truncated to the specified\n number or revisions. The minimum possible value is C(1), otherwise\n ignored.\n clone:\n required: false\n default: \"yes\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"1.9\"\n description:\n - If C(no), do not clone the repository if it does not exist locally\n update:\n required: false\n default: \"yes\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"1.2\"\n description:\n - If C(no), do not retrieve new revisions from the origin repository\n executable:\n required: false\n default: null\n version_added: \"1.4\"\n description:\n - Path to git executable to use. If not supplied,\n the normal mechanism for resolving binary paths will be used.\n bare:\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"1.4\"\n description:\n - if C(yes), repository will be created as a bare repo, otherwise\n it will be a standard repo with a workspace.\n\n recursive:\n required: false\n default: \"yes\"\n choices: [ \"yes\", \"no\" ]\n version_added: \"1.6\"\n description:\n - if C(no), repository will be cloned without the --recursive\n option, skipping sub-modules.\n\n track_submodules:\n required: false\n default: \"no\"\n choices: [\"yes\", \"no\"]\n version_added: \"1.8\"\n description:\n - if C(yes), submodules will track the latest commit on their\n master branch (or other branch specified in .gitmodules). If\n C(no), submodules will be kept at the revision specified by the\n main project. 
This is equivalent to specifying the --remote flag\n to git submodule update.\n\n verify_commit:\n required: false\n default: \"no\"\n choices: [\"yes\", \"no\"]\n version_added: \"2.0\"\n description:\n - if C(yes), when cloning or checking out a C(version) verify the\n signature of a GPG signed commit. This requires C(git) version>=2.1.0\n to be installed. The commit MUST be signed and the public key MUST\n be trusted in the GPG trustdb.\n\nrequirements:\n - git (the command line tool)\nnotes:\n - \"If the task seems to be hanging, first verify remote host is in C(known_hosts).\n SSH will prompt user to authorize the first contact with a remote host. To avoid this prompt, \n one solution is to add the remote host public key in C(/etc/ssh/ssh_known_hosts) before calling \n the git module, with the following command: ssh-keyscan -H remote_host.com >> /etc/ssh/ssh_known_hosts.\"\n'''\n\nEXAMPLES = '''\n# Example git checkout from Ansible Playbooks\n- git: repo=git://foosball.example.org/path/to/repo.git\n dest=/srv/checkout\n version=release-0.22\n\n# Example read-write git checkout from github\n- git: repo=ssh://[email protected]/mylogin/hello.git dest=/home/mylogin/hello\n\n# Example just ensuring the repo checkout exists\n- git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout update=no\n\n# Example just get information about the repository whether or not it has\n# already been cloned locally.\n- git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout clone=no update=no\n\n# Example checkout a github repo and use refspec to fetch all pull requests\n- git: repo=https://github.com/ansible/ansible-examples.git dest=/src/ansible-examples refspec=+refs/pull/*:refs/heads/*\n'''\n\nimport re\nimport tempfile\n\ndef get_submodule_update_params(module, git_path, cwd):\n\n #or: git submodule [--quiet] update [--init] [-N|--no-fetch] \n #[-f|--force] [--rebase] [--reference <repository>] [--merge] \n #[--recursive] [--] [<path>...]\n\n params = []\n\n # run a bad submodule command to get valid params \n cmd = \"%s submodule update --help\" % (git_path)\n rc, stdout, stderr = module.run_command(cmd, cwd=cwd)\n lines = stderr.split('\\n')\n update_line = None\n for line in lines:\n if 'git submodule [--quiet] update ' in line:\n update_line = line\n if update_line:\n update_line = update_line.replace('[','')\n update_line = update_line.replace(']','')\n update_line = update_line.replace('|',' ')\n parts = shlex.split(update_line)\n for part in parts: \n if part.startswith('--'):\n part = part.replace('--', '')\n params.append(part)\n\n return params\n\ndef write_ssh_wrapper():\n module_dir = get_module_path()\n try:\n # make sure we have full permission to the module_dir, which\n # may not be the case if we're sudo'ing to a non-root user\n if os.access(module_dir, os.W_OK|os.R_OK|os.X_OK):\n fd, wrapper_path = tempfile.mkstemp(prefix=module_dir + '/')\n else:\n raise OSError\n except (IOError, OSError):\n fd, wrapper_path = tempfile.mkstemp()\n fh = os.fdopen(fd, 'w+b')\n template = \"\"\"#!/bin/sh\nif [ -z \"$GIT_SSH_OPTS\" ]; then\n BASEOPTS=\"\"\nelse\n BASEOPTS=$GIT_SSH_OPTS\nfi\n\nif [ -z \"$GIT_KEY\" ]; then\n ssh $BASEOPTS \"$@\"\nelse\n ssh -i \"$GIT_KEY\" $BASEOPTS \"$@\"\nfi\n\"\"\"\n fh.write(template)\n fh.close()\n st = os.stat(wrapper_path)\n os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC)\n return wrapper_path\n\ndef set_git_ssh(ssh_wrapper, key_file, ssh_opts):\n\n if os.environ.get(\"GIT_SSH\"):\n del os.environ[\"GIT_SSH\"]\n 
os.environ[\"GIT_SSH\"] = ssh_wrapper\n\n if os.environ.get(\"GIT_KEY\"):\n del os.environ[\"GIT_KEY\"]\n\n if key_file:\n os.environ[\"GIT_KEY\"] = key_file \n\n if os.environ.get(\"GIT_SSH_OPTS\"):\n del os.environ[\"GIT_SSH_OPTS\"]\n\n if ssh_opts:\n os.environ[\"GIT_SSH_OPTS\"] = ssh_opts\n\ndef get_version(module, git_path, dest, ref=\"HEAD\"):\n ''' samples the version of the git repo '''\n\n cmd = \"%s rev-parse %s\" % (git_path, ref)\n rc, stdout, stderr = module.run_command(cmd, cwd=dest)\n sha = stdout.rstrip('\\n')\n return sha\n\ndef get_submodule_versions(git_path, module, dest, version='HEAD'):\n cmd = [git_path, 'submodule', 'foreach', git_path, 'rev-parse', version]\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg='Unable to determine hashes of submodules')\n submodules = {}\n subm_name = None\n for line in out.splitlines():\n if line.startswith(\"Entering '\"):\n subm_name = line[10:-1]\n elif len(line.strip()) == 40:\n if subm_name is None:\n module.fail_json()\n submodules[subm_name] = line.strip()\n subm_name = None\n else:\n module.fail_json(msg='Unable to parse submodule hash line: %s' % line.strip())\n if subm_name is not None:\n module.fail_json(msg='Unable to find hash for submodule: %s' % subm_name)\n\n return submodules\n\ndef clone(git_path, module, repo, dest, remote, depth, version, bare,\n reference, refspec, verify_commit):\n ''' makes a new git repo if it does not already exist '''\n dest_dirname = os.path.dirname(dest)\n try:\n os.makedirs(dest_dirname)\n except:\n pass\n cmd = [ git_path, 'clone' ]\n\n branch_or_tag = is_remote_branch(git_path, module, dest, repo, version) \\\n or is_remote_tag(git_path, module, dest, repo, version)\n\n if bare:\n cmd.append('--bare')\n else:\n cmd.extend([ '--origin', remote ])\n if branch_or_tag:\n cmd.extend([ '--branch', version ])\n if depth and (branch_or_tag or version == 'HEAD' or refspec):\n # only use depth if the remote opject is branch or tag (i.e. 
fetchable)\n cmd.extend([ '--depth', str(depth) ])\n if reference:\n cmd.extend([ '--reference', str(reference) ])\n cmd.extend([ repo, dest ])\n module.run_command(cmd, check_rc=True, cwd=dest_dirname)\n if bare:\n if remote != 'origin':\n module.run_command([git_path, 'remote', 'add', remote, repo], check_rc=True, cwd=dest)\n\n if refspec:\n cmd = [git_path, 'fetch']\n if depth:\n cmd.extend([ '--depth', str(depth) ])\n cmd.extend([remote, refspec])\n module.run_command(cmd, check_rc=True, cwd=dest)\n\n if verify_commit:\n verify_commit_sign(git_path, module, dest, version)\n\ndef has_local_mods(module, git_path, dest, bare):\n if bare:\n return False\n\n cmd = \"%s status --porcelain\" % (git_path)\n rc, stdout, stderr = module.run_command(cmd, cwd=dest)\n lines = stdout.splitlines()\n lines = filter(lambda c: not re.search('^\\\\?\\\\?.*$', c), lines)\n\n return len(lines) > 0\n\ndef reset(git_path, module, dest):\n '''\n Resets the index and working tree to HEAD.\n Discards any changes to tracked files in working\n tree since that commit.\n '''\n cmd = \"%s reset --hard HEAD\" % (git_path,)\n return module.run_command(cmd, check_rc=True, cwd=dest)\n\ndef get_remote_head(git_path, module, dest, version, remote, bare):\n cloning = False\n cwd = None\n tag = False\n if remote == module.params['repo']:\n cloning = True\n else:\n cwd = dest\n if version == 'HEAD':\n if cloning:\n # cloning the repo, just get the remote's HEAD version\n cmd = '%s ls-remote %s -h HEAD' % (git_path, remote)\n else:\n head_branch = get_head_branch(git_path, module, dest, remote, bare)\n cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, head_branch)\n elif is_remote_branch(git_path, module, dest, remote, version):\n cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)\n elif is_remote_tag(git_path, module, dest, remote, version):\n tag = True\n cmd = '%s ls-remote %s -t refs/tags/%s*' % (git_path, remote, version)\n else:\n # appears to be a sha1. 
return as-is since it appears\n # cannot check for a specific sha1 on remote\n return version\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=cwd)\n if len(out) < 1:\n module.fail_json(msg=\"Could not determine remote revision for %s\" % version)\n\n if tag:\n # Find the dereferenced tag if this is an annotated tag.\n for tag in out.split('\\n'):\n if tag.endswith(version + '^{}'):\n out = tag\n break\n elif tag.endswith(version):\n out = tag\n\n rev = out.split()[0]\n return rev\n\ndef is_remote_tag(git_path, module, dest, remote, version):\n cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version)\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)\n if version in out:\n return True\n else:\n return False\n\ndef get_branches(git_path, module, dest):\n branches = []\n cmd = '%s branch -a' % (git_path,)\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Could not determine branch data - received %s\" % out)\n for line in out.split('\\n'):\n branches.append(line.strip())\n return branches\n\ndef get_tags(git_path, module, dest):\n tags = []\n cmd = '%s tag' % (git_path,)\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Could not determine tag data - received %s\" % out)\n for line in out.split('\\n'):\n tags.append(line.strip())\n return tags\n\ndef is_remote_branch(git_path, module, dest, remote, version):\n cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)\n if version in out:\n return True\n else:\n return False\n\ndef is_local_branch(git_path, module, dest, branch):\n branches = get_branches(git_path, module, dest)\n lbranch = '%s' % branch\n if lbranch in branches:\n return True\n elif '* %s' % branch in branches:\n return True\n else:\n return False\n\ndef is_not_a_branch(git_path, module, dest):\n branches = get_branches(git_path, module, dest)\n for b in branches:\n if b.startswith('* ') and ('no branch' in b or 'detached from' in b):\n return True\n return False\n\ndef get_head_branch(git_path, module, dest, remote, bare=False):\n '''\n Determine what branch HEAD is associated with. This is partly\n taken from lib/ansible/utils/__init__.py. It finds the correct\n path to .git/HEAD and reads from that file the branch that HEAD is\n associated with. In the case of a detached HEAD, this will look\n up the branch in .git/refs/remotes/<remote>/HEAD.\n '''\n if bare:\n repo_path = dest\n else:\n repo_path = os.path.join(dest, '.git')\n # Check if the .git is a file. 
If it is a file, it means that we are in a submodule structure.\n if os.path.isfile(repo_path):\n try:\n gitdir = yaml.safe_load(open(repo_path)).get('gitdir')\n # There is a posibility the .git file to have an absolute path.\n if os.path.isabs(gitdir):\n repo_path = gitdir\n else:\n repo_path = os.path.join(repo_path.split('.git')[0], gitdir)\n except (IOError, AttributeError):\n return ''\n # Read .git/HEAD for the name of the branch.\n # If we're in a detached HEAD state, look up the branch associated with\n # the remote HEAD in .git/refs/remotes/<remote>/HEAD\n f = open(os.path.join(repo_path, \"HEAD\"))\n if is_not_a_branch(git_path, module, dest):\n f.close()\n f = open(os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD'))\n branch = f.readline().split('/')[-1].rstrip(\"\\n\")\n f.close()\n return branch\n\ndef set_remote_url(git_path, module, repo, dest, remote):\n ''' updates repo from remote sources '''\n commands = [(\"set a new url %s for %s\" % (repo, remote), [git_path, 'remote', 'set-url', remote, repo])]\n\n for (label,command) in commands:\n (rc,out,err) = module.run_command(command, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to %s: %s %s\" % (label, out, err))\n\ndef fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec):\n ''' updates repo from remote sources '''\n set_remote_url(git_path, module, repo, dest, remote)\n commands = []\n\n fetch_str = 'download remote objects and refs'\n fetch_cmd = [git_path, 'fetch']\n\n\n refspecs = []\n if depth:\n # try to find the minimal set of refs we need to fetch to get a\n # successful checkout\n if refspec:\n refspecs.append(refspec)\n elif version == 'HEAD':\n refspecs.append('HEAD')\n elif is_remote_branch(git_path, module, dest, repo, version):\n refspecs.append(version)\n elif is_remote_tag(git_path, module, dest, repo, version):\n refspecs.append('+refs/tags/'+version+':refs/tags/'+version)\n if refspecs:\n # if refspecs is empty, i.e. 
version is neither heads nor tags\n # fall back to a full clone, otherwise we might not be able to checkout\n # version\n fetch_cmd.extend(['--depth', str(depth)])\n\n fetch_cmd.extend([remote])\n if not depth or not refspecs:\n # don't try to be minimalistic but do a full clone\n # also do this if depth is given, but version is something that can't be fetched directly\n if bare:\n refspecs = ['+refs/heads/*:refs/heads/*', '+refs/tags/*:refs/tags/*']\n else:\n # unlike in bare mode, there's no way to combine the\n # additional refspec with the default git fetch behavior,\n # so use two commands\n commands.append((fetch_str, fetch_cmd))\n refspecs = ['+refs/tags/*:refs/tags/*']\n if refspec:\n refspecs.append(refspec)\n\n commands.append((fetch_str, fetch_cmd + refspecs))\n\n for (label,command) in commands:\n (rc,out,err) = module.run_command(command, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to %s: %s %s\" % (label, out, err), cmd=command)\n\ndef submodules_fetch(git_path, module, remote, track_submodules, dest):\n changed = False\n\n if not os.path.exists(os.path.join(dest, '.gitmodules')):\n # no submodules\n return changed\n\n gitmodules_file = open(os.path.join(dest, '.gitmodules'), 'r')\n for line in gitmodules_file:\n # Check for new submodules\n if not changed and line.strip().startswith('path'):\n path = line.split('=', 1)[1].strip()\n # Check that dest/path/.git exists\n if not os.path.exists(os.path.join(dest, path, '.git')):\n changed = True\n\n # add the submodule repo's hostkey\n if line.strip().startswith('url'):\n repo = line.split('=', 1)[1].strip()\n if module.params['ssh_opts'] is not None:\n if not \"-o StrictHostKeyChecking=no\" in module.params['ssh_opts']:\n add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])\n else:\n add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])\n\n # Check for updates to existing modules\n if not changed:\n # Fetch updates\n begin = get_submodule_versions(git_path, module, dest)\n cmd = [git_path, 'submodule', 'foreach', git_path, 'fetch']\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to fetch submodules: %s\" % out + err)\n\n if track_submodules:\n # Compare against submodule HEAD\n ### FIXME: determine this from .gitmodules\n version = 'master'\n after = get_submodule_versions(git_path, module, dest, '%s/%s'\n % (remote, version))\n if begin != after:\n changed = True\n else:\n # Compare against the superproject's expectation\n cmd = [git_path, 'submodule', 'status']\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)\n if rc != 0:\n module.fail_json(msg='Failed to retrieve submodule status: %s' % out + err)\n for line in out.splitlines():\n if line[0] != ' ':\n changed = True\n break\n return changed\n\ndef submodule_update(git_path, module, dest, track_submodules):\n ''' init and update any submodules '''\n\n # get the valid submodule params\n params = get_submodule_update_params(module, git_path, dest)\n\n # skip submodule commands if .gitmodules is not present\n if not os.path.exists(os.path.join(dest, '.gitmodules')):\n return (0, '', '')\n cmd = [ git_path, 'submodule', 'sync' ]\n (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)\n if 'remote' in params and track_submodules:\n cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ,'--remote' ]\n else:\n cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ]\n (rc, out, err) = 
module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to init/update submodules: %s\" % out + err)\n return (rc, out, err)\n\ndef set_remote_branch(git_path, module, dest, remote, version, depth):\n cmd = \"%s remote set-branches %s %s\" % (git_path, remote, version)\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to set remote branch: %s\" % version)\n cmd = \"%s fetch --depth=%s %s %s\" % (git_path, depth, remote, version)\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to fetch branch from remote: %s\" % version)\n\ndef switch_version(git_path, module, dest, remote, version, verify_commit):\n cmd = ''\n if version != 'HEAD':\n if is_remote_branch(git_path, module, dest, remote, version):\n if not is_local_branch(git_path, module, dest, version):\n depth = module.params['depth']\n if depth:\n # git clone --depth implies --single-branch, which makes\n # the checkout fail if the version changes\n set_remote_branch(git_path, module, dest, remote, version, depth)\n cmd = \"%s checkout --track -b %s %s/%s\" % (git_path, version, remote, version)\n else:\n (rc, out, err) = module.run_command(\"%s checkout --force %s\" % (git_path, version), cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to checkout branch %s\" % version,\n stdout=out, stderr=err, rc=rc)\n cmd = \"%s reset --hard %s/%s\" % (git_path, remote, version)\n else:\n cmd = \"%s checkout --force %s\" % (git_path, version)\n else:\n branch = get_head_branch(git_path, module, dest, remote)\n (rc, out, err) = module.run_command(\"%s checkout --force %s\" % (git_path, branch), cwd=dest)\n if rc != 0:\n module.fail_json(msg=\"Failed to checkout branch %s\" % branch,\n stdout=out, stderr=err, rc=rc)\n cmd = \"%s reset --hard %s\" % (git_path, remote)\n (rc, out1, err1) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n if version != 'HEAD':\n module.fail_json(msg=\"Failed to checkout %s\" % (version),\n stdout=out1, stderr=err1, rc=rc, cmd=cmd)\n else:\n module.fail_json(msg=\"Failed to checkout branch %s\" % (branch),\n stdout=out1, stderr=err1, rc=rc, cmd=cmd)\n\n if verify_commit:\n verify_commit_sign(git_path, module, dest, version)\n\n return (rc, out1, err1)\n\n\ndef verify_commit_sign(git_path, module, dest, version):\n cmd = \"%s verify-commit %s\" % (git_path, version)\n (rc, out, err) = module.run_command(cmd, cwd=dest)\n if rc != 0:\n module.fail_json(msg='Failed to verify GPG signature of commit/tag \"%s\"' % version)\n return (rc, out, err)\n\n# ===========================================\n\ndef main():\n module = AnsibleModule(\n argument_spec = dict(\n dest=dict(type='path'),\n repo=dict(required=True, aliases=['name']),\n version=dict(default='HEAD'),\n remote=dict(default='origin'),\n refspec=dict(default=None),\n reference=dict(default=None),\n force=dict(default='no', type='bool'),\n depth=dict(default=None, type='int'),\n clone=dict(default='yes', type='bool'),\n update=dict(default='yes', type='bool'),\n verify_commit=dict(default='no', type='bool'),\n accept_hostkey=dict(default='no', type='bool'),\n key_file=dict(default=None, type='path', required=False),\n ssh_opts=dict(default=None, required=False),\n executable=dict(default=None, type='path'),\n bare=dict(default='no', type='bool'),\n recursive=dict(default='yes', type='bool'),\n track_submodules=dict(default='no', type='bool'),\n ),\n supports_check_mode=True\n )\n\n dest = module.params['dest']\n repo = module.params['repo']\n 
version = module.params['version']\n remote = module.params['remote']\n refspec = module.params['refspec']\n force = module.params['force']\n depth = module.params['depth']\n update = module.params['update']\n allow_clone = module.params['clone']\n bare = module.params['bare']\n verify_commit = module.params['verify_commit']\n reference = module.params['reference']\n git_path = module.params['executable'] or module.get_bin_path('git', True)\n key_file = module.params['key_file']\n ssh_opts = module.params['ssh_opts']\n\n # We screenscrape a huge amount of git commands so use C locale anytime we\n # call run_command()\n module.run_command_environ_update = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C', LC_CTYPE='C')\n\n gitconfig = None\n if not dest and allow_clone:\n module.fail_json(msg=\"the destination directory must be specified unless clone=no\")\n elif dest:\n dest = os.path.abspath(dest)\n if bare:\n gitconfig = os.path.join(dest, 'config')\n else:\n gitconfig = os.path.join(dest, '.git', 'config')\n\n # create a wrapper script and export\n # GIT_SSH=<path> as an environment variable\n # for git to use the wrapper script\n ssh_wrapper = None\n if key_file or ssh_opts:\n ssh_wrapper = write_ssh_wrapper()\n set_git_ssh(ssh_wrapper, key_file, ssh_opts)\n module.add_cleanup_file(path=ssh_wrapper)\n\n # add the git repo's hostkey \n if module.params['ssh_opts'] is not None:\n if not \"-o StrictHostKeyChecking=no\" in module.params['ssh_opts']:\n add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])\n else:\n add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])\n\n recursive = module.params['recursive']\n track_submodules = module.params['track_submodules']\n\n rc, out, err, status = (0, None, None, None)\n\n before = None\n local_mods = False\n repo_updated = None\n if (dest and not os.path.exists(gitconfig)) or (not dest and not allow_clone):\n # if there is no git configuration, do a clone operation unless:\n # * the user requested no clone (they just want info)\n # * we're doing a check mode test\n # In those cases we do an ls-remote\n if module.check_mode or not allow_clone:\n remote_head = get_remote_head(git_path, module, dest, version, repo, bare)\n module.exit_json(changed=True, before=before, after=remote_head)\n # there's no git config, so clone\n clone(git_path, module, repo, dest, remote, depth, version, bare, reference, refspec, verify_commit)\n repo_updated = True\n elif not update:\n # Just return having found a repo already in the dest path\n # this does no checking that the repo is the actual repo\n # requested.\n before = get_version(module, git_path, dest)\n module.exit_json(changed=False, before=before, after=before)\n else:\n # else do a pull\n local_mods = has_local_mods(module, git_path, dest, bare)\n before = get_version(module, git_path, dest)\n if local_mods:\n # failure should happen regardless of check mode\n if not force:\n module.fail_json(msg=\"Local modifications exist in repository (force=no).\")\n # if force and in non-check mode, do a reset\n if not module.check_mode:\n reset(git_path, module, dest)\n # exit if already at desired sha version\n set_remote_url(git_path, module, repo, dest, remote)\n remote_head = get_remote_head(git_path, module, dest, version, remote, bare)\n if before == remote_head:\n if local_mods:\n module.exit_json(changed=True, before=before, after=remote_head,\n msg=\"Local modifications exist\")\n elif is_remote_tag(git_path, module, dest, repo, version):\n # if the remote is a tag 
and we have the tag locally, exit early\n if version in get_tags(git_path, module, dest):\n repo_updated = False\n else:\n # if the remote is a branch and we have the branch locally, exit early\n if version in get_branches(git_path, module, dest):\n repo_updated = False\n if repo_updated is None:\n if module.check_mode:\n module.exit_json(changed=True, before=before, after=remote_head)\n fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec)\n repo_updated = True\n\n # switch to version specified regardless of whether\n # we got new revisions from the repository\n if not bare:\n switch_version(git_path, module, dest, remote, version, verify_commit)\n\n # Deal with submodules\n submodules_updated = False\n if recursive and not bare:\n submodules_updated = submodules_fetch(git_path, module, remote, track_submodules, dest)\n\n if module.check_mode:\n if submodules_updated:\n module.exit_json(changed=True, before=before, after=remote_head, submodules_changed=True)\n else:\n module.exit_json(changed=False, before=before, after=remote_head)\n\n if submodules_updated:\n # Switch to version specified\n submodule_update(git_path, module, dest, track_submodules)\n\n # determine if we changed anything\n after = get_version(module, git_path, dest)\n\n changed = False\n if before != after or local_mods or submodules_updated:\n changed = True\n\n # cleanup the wrapper script\n if ssh_wrapper:\n try:\n os.remove(ssh_wrapper)\n except OSError:\n # No need to fail if the file already doesn't exist\n pass\n\n module.exit_json(changed=changed, before=before, after=after)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.known_hosts import *\n\nif __name__ == '__main__':\n main()\n", "path": "source_control/git.py"}]} |
gh_patches_debug_1017 | rasdani/github-patches | git_diff | openai__gym-1092 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ImportError when installing on Windows 10 and WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>
Dears,
Would you please let me know how I could solve this warning and this error? (Windows 10)
Using TensorFlow backend.
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
File "C:\Users\fi\Desktop\rl\code\3.6\stock_market_reinforcement_learning-master\environment.py", line 43, in __init__
self.reset()
File "C:\Users\fi\Anaconda30\envs\tensorflow\lib\site-packages\gym\core.py", line 70, in reset
raise NotImplementedError
NotImplementedError
--- END ISSUE ---
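For orientation before the code below: the warning and the traceback point at two separate problems. The `WARN` line is printed whenever `gym.spaces.Box` is constructed without an explicit `dtype`, and the `NotImplementedError` is raised by the base `gym.Env.reset`, which every custom environment must override. A minimal sketch of an environment that avoids both (the class name, shapes and bounds are made-up placeholders, not taken from the reporter's project):
```python
import numpy as np
import gym
from gym import spaces


class StockEnv(gym.Env):
    """Hypothetical custom environment; only the dtype/reset handling matters here."""

    def __init__(self):
        # Passing dtype explicitly silences the autodetection warning.
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32)
        self.state = np.zeros(4, dtype=np.float32)

    def reset(self):
        # Without this override, gym.Env.reset raises NotImplementedError.
        self.state = np.zeros(4, dtype=np.float32)
        return self.state

    def step(self, action):
        reward, done, info = 0.0, False, {}
        return self.state, reward, done, info
```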
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gym/envs/mujoco/mujoco_env.py`
Content:
```
1 import os
2
3 from gym import error, spaces
4 from gym.utils import seeding
5 import numpy as np
6 from os import path
7 import gym
8 import six
9
10 try:
11 import mujoco_py
12 except ImportError as e:
13 raise error.DependencyNotInstalled("{}. (HINT: you need to install mujoco_py, and also perform the setup instructions here: https://github.com/openai/mujoco-py/.)".format(e))
14
15 DEFAULT_SIZE = 500
16
17 class MujocoEnv(gym.Env):
18 """Superclass for all MuJoCo environments.
19 """
20
21 def __init__(self, model_path, frame_skip):
22 if model_path.startswith("/"):
23 fullpath = model_path
24 else:
25 fullpath = os.path.join(os.path.dirname(__file__), "assets", model_path)
26 if not path.exists(fullpath):
27 raise IOError("File %s does not exist" % fullpath)
28 self.frame_skip = frame_skip
29 self.model = mujoco_py.load_model_from_path(fullpath)
30 self.sim = mujoco_py.MjSim(self.model)
31 self.data = self.sim.data
32 self.viewer = None
33 self._viewers = {}
34
35 self.metadata = {
36 'render.modes': ['human', 'rgb_array'],
37 'video.frames_per_second': int(np.round(1.0 / self.dt))
38 }
39
40 self.init_qpos = self.sim.data.qpos.ravel().copy()
41 self.init_qvel = self.sim.data.qvel.ravel().copy()
42 observation, _reward, done, _info = self.step(np.zeros(self.model.nu))
43 assert not done
44 self.obs_dim = observation.size
45
46 bounds = self.model.actuator_ctrlrange.copy()
47 low = bounds[:, 0]
48 high = bounds[:, 1]
49 self.action_space = spaces.Box(low=low, high=high)
50
51 high = np.inf*np.ones(self.obs_dim)
52 low = -high
53 self.observation_space = spaces.Box(low, high)
54
55 self.seed()
56
57 def seed(self, seed=None):
58 self.np_random, seed = seeding.np_random(seed)
59 return [seed]
60
61 # methods to override:
62 # ----------------------------
63
64 def reset_model(self):
65 """
66 Reset the robot degrees of freedom (qpos and qvel).
67 Implement this in each subclass.
68 """
69 raise NotImplementedError
70
71 def viewer_setup(self):
72 """
73 This method is called when the viewer is initialized and after every reset
74 Optionally implement this method, if you need to tinker with camera position
75 and so forth.
76 """
77 pass
78
79 # -----------------------------
80
81 def reset(self):
82 self.sim.reset()
83 ob = self.reset_model()
84 old_viewer = self.viewer
85 for v in self._viewers.values():
86 self.viewer = v
87 self.viewer_setup()
88 self.viewer = old_viewer
89 return ob
90
91 def set_state(self, qpos, qvel):
92 assert qpos.shape == (self.model.nq,) and qvel.shape == (self.model.nv,)
93 old_state = self.sim.get_state()
94 new_state = mujoco_py.MjSimState(old_state.time, qpos, qvel,
95 old_state.act, old_state.udd_state)
96 self.sim.set_state(new_state)
97 self.sim.forward()
98
99 @property
100 def dt(self):
101 return self.model.opt.timestep * self.frame_skip
102
103 def do_simulation(self, ctrl, n_frames):
104 self.sim.data.ctrl[:] = ctrl
105 for _ in range(n_frames):
106 self.sim.step()
107
108 def render(self, mode='human', width=DEFAULT_SIZE, height=DEFAULT_SIZE):
109 if mode == 'rgb_array':
110 self._get_viewer(mode).render(width, height)
111 # window size used for old mujoco-py:
112 data = self._get_viewer(mode).read_pixels(width, height, depth=False)
113 # original image is upside-down, so flip it
114 return data[::-1, :, :]
115 elif mode == 'human':
116 self._get_viewer(mode).render()
117
118 def close(self):
119 if self.viewer is not None:
120 # self.viewer.finish()
121 self.viewer = None
122 self._viewers = {}
123
124 def _get_viewer(self, mode):
125 self.viewer = self._viewers.get(mode)
126 if self.viewer is None:
127 if mode == 'human':
128 self.viewer = mujoco_py.MjViewer(self.sim)
129 elif mode == 'rgb_array':
130 self.viewer = mujoco_py.MjRenderContextOffscreen(self.sim, 0)
131 self.viewer_setup()
132 self._viewers[mode] = self.viewer
133 return self.viewer
134
135 def get_body_com(self, body_name):
136 return self.data.get_body_xpos(body_name)
137
138 def state_vector(self):
139 return np.concatenate([
140 self.sim.data.qpos.flat,
141 self.sim.data.qvel.flat
142 ])
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gym/envs/mujoco/mujoco_env.py b/gym/envs/mujoco/mujoco_env.py
--- a/gym/envs/mujoco/mujoco_env.py
+++ b/gym/envs/mujoco/mujoco_env.py
@@ -46,7 +46,7 @@
bounds = self.model.actuator_ctrlrange.copy()
low = bounds[:, 0]
high = bounds[:, 1]
- self.action_space = spaces.Box(low=low, high=high)
+ self.action_space = spaces.Box(low=low, high=high, dtype=np.float32)
high = np.inf*np.ones(self.obs_dim)
low = -high
| {"golden_diff": "diff --git a/gym/envs/mujoco/mujoco_env.py b/gym/envs/mujoco/mujoco_env.py\n--- a/gym/envs/mujoco/mujoco_env.py\n+++ b/gym/envs/mujoco/mujoco_env.py\n@@ -46,7 +46,7 @@\n bounds = self.model.actuator_ctrlrange.copy()\n low = bounds[:, 0]\n high = bounds[:, 1]\n- self.action_space = spaces.Box(low=low, high=high)\n+ self.action_space = spaces.Box(low=low, high=high, dtype=np.float32)\n \n high = np.inf*np.ones(self.obs_dim)\n low = -high\n", "issue": "ImportError when installing on Windows 10 and [33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>\nDears,\r\nWould you please let me know how I could solve this warning and this error? (Windows 10)\r\n\r\nUsing TensorFlow backend.\r\n\u001b[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.\u001b[0m\r\n\r\n File \"C:\\Users\\fi\\Desktop\\rl\\code\\3.6\\stock_market_reinforcement_learning-master\\environment.py\", line 43, in __init__\r\n self.reset()\r\n File \"C:\\Users\\fi\\Anaconda30\\envs\\tensorflow\\lib\\site-packages\\gym\\core.py\", line 70, in reset\r\n raise NotImplementedError\r\nNotImplementedErrorr\r\n\n", "before_files": [{"content": "import os\n\nfrom gym import error, spaces\nfrom gym.utils import seeding\nimport numpy as np\nfrom os import path\nimport gym\nimport six\n\ntry:\n import mujoco_py\nexcept ImportError as e:\n raise error.DependencyNotInstalled(\"{}. (HINT: you need to install mujoco_py, and also perform the setup instructions here: https://github.com/openai/mujoco-py/.)\".format(e))\n\nDEFAULT_SIZE = 500\n\nclass MujocoEnv(gym.Env):\n \"\"\"Superclass for all MuJoCo environments.\n \"\"\"\n\n def __init__(self, model_path, frame_skip):\n if model_path.startswith(\"/\"):\n fullpath = model_path\n else:\n fullpath = os.path.join(os.path.dirname(__file__), \"assets\", model_path)\n if not path.exists(fullpath):\n raise IOError(\"File %s does not exist\" % fullpath)\n self.frame_skip = frame_skip\n self.model = mujoco_py.load_model_from_path(fullpath)\n self.sim = mujoco_py.MjSim(self.model)\n self.data = self.sim.data\n self.viewer = None\n self._viewers = {}\n\n self.metadata = {\n 'render.modes': ['human', 'rgb_array'],\n 'video.frames_per_second': int(np.round(1.0 / self.dt))\n }\n\n self.init_qpos = self.sim.data.qpos.ravel().copy()\n self.init_qvel = self.sim.data.qvel.ravel().copy()\n observation, _reward, done, _info = self.step(np.zeros(self.model.nu))\n assert not done\n self.obs_dim = observation.size\n\n bounds = self.model.actuator_ctrlrange.copy()\n low = bounds[:, 0]\n high = bounds[:, 1]\n self.action_space = spaces.Box(low=low, high=high)\n\n high = np.inf*np.ones(self.obs_dim)\n low = -high\n self.observation_space = spaces.Box(low, high)\n\n self.seed()\n\n def seed(self, seed=None):\n self.np_random, seed = seeding.np_random(seed)\n return [seed]\n\n # methods to override:\n # ----------------------------\n\n def reset_model(self):\n \"\"\"\n Reset the robot degrees of freedom (qpos and qvel).\n Implement this in each subclass.\n \"\"\"\n raise NotImplementedError\n\n def viewer_setup(self):\n \"\"\"\n This method is called when the viewer is initialized and after every reset\n Optionally implement this method, if you need to tinker with camera position\n and so forth.\n \"\"\"\n pass\n\n # -----------------------------\n\n def reset(self):\n self.sim.reset()\n ob = self.reset_model()\n old_viewer = self.viewer\n for v in self._viewers.values():\n self.viewer = v\n self.viewer_setup()\n self.viewer = 
old_viewer\n return ob\n\n def set_state(self, qpos, qvel):\n assert qpos.shape == (self.model.nq,) and qvel.shape == (self.model.nv,)\n old_state = self.sim.get_state()\n new_state = mujoco_py.MjSimState(old_state.time, qpos, qvel,\n old_state.act, old_state.udd_state)\n self.sim.set_state(new_state)\n self.sim.forward()\n\n @property\n def dt(self):\n return self.model.opt.timestep * self.frame_skip\n\n def do_simulation(self, ctrl, n_frames):\n self.sim.data.ctrl[:] = ctrl\n for _ in range(n_frames):\n self.sim.step()\n\n def render(self, mode='human', width=DEFAULT_SIZE, height=DEFAULT_SIZE):\n if mode == 'rgb_array':\n self._get_viewer(mode).render(width, height)\n # window size used for old mujoco-py:\n data = self._get_viewer(mode).read_pixels(width, height, depth=False)\n # original image is upside-down, so flip it\n return data[::-1, :, :]\n elif mode == 'human':\n self._get_viewer(mode).render()\n\n def close(self):\n if self.viewer is not None:\n # self.viewer.finish()\n self.viewer = None\n self._viewers = {}\n\n def _get_viewer(self, mode):\n self.viewer = self._viewers.get(mode)\n if self.viewer is None:\n if mode == 'human':\n self.viewer = mujoco_py.MjViewer(self.sim)\n elif mode == 'rgb_array':\n self.viewer = mujoco_py.MjRenderContextOffscreen(self.sim, 0)\n self.viewer_setup()\n self._viewers[mode] = self.viewer\n return self.viewer\n\n def get_body_com(self, body_name):\n return self.data.get_body_xpos(body_name)\n\n def state_vector(self):\n return np.concatenate([\n self.sim.data.qpos.flat,\n self.sim.data.qvel.flat\n ])\n", "path": "gym/envs/mujoco/mujoco_env.py"}], "after_files": [{"content": "import os\n\nfrom gym import error, spaces\nfrom gym.utils import seeding\nimport numpy as np\nfrom os import path\nimport gym\nimport six\n\ntry:\n import mujoco_py\nexcept ImportError as e:\n raise error.DependencyNotInstalled(\"{}. 
(HINT: you need to install mujoco_py, and also perform the setup instructions here: https://github.com/openai/mujoco-py/.)\".format(e))\n\nDEFAULT_SIZE = 500\n\nclass MujocoEnv(gym.Env):\n \"\"\"Superclass for all MuJoCo environments.\n \"\"\"\n\n def __init__(self, model_path, frame_skip):\n if model_path.startswith(\"/\"):\n fullpath = model_path\n else:\n fullpath = os.path.join(os.path.dirname(__file__), \"assets\", model_path)\n if not path.exists(fullpath):\n raise IOError(\"File %s does not exist\" % fullpath)\n self.frame_skip = frame_skip\n self.model = mujoco_py.load_model_from_path(fullpath)\n self.sim = mujoco_py.MjSim(self.model)\n self.data = self.sim.data\n self.viewer = None\n self._viewers = {}\n\n self.metadata = {\n 'render.modes': ['human', 'rgb_array'],\n 'video.frames_per_second': int(np.round(1.0 / self.dt))\n }\n\n self.init_qpos = self.sim.data.qpos.ravel().copy()\n self.init_qvel = self.sim.data.qvel.ravel().copy()\n observation, _reward, done, _info = self.step(np.zeros(self.model.nu))\n assert not done\n self.obs_dim = observation.size\n\n bounds = self.model.actuator_ctrlrange.copy()\n low = bounds[:, 0]\n high = bounds[:, 1]\n self.action_space = spaces.Box(low=low, high=high, dtype=np.float32)\n\n high = np.inf*np.ones(self.obs_dim)\n low = -high\n self.observation_space = spaces.Box(low, high)\n\n self.seed()\n\n def seed(self, seed=None):\n self.np_random, seed = seeding.np_random(seed)\n return [seed]\n\n # methods to override:\n # ----------------------------\n\n def reset_model(self):\n \"\"\"\n Reset the robot degrees of freedom (qpos and qvel).\n Implement this in each subclass.\n \"\"\"\n raise NotImplementedError\n\n def viewer_setup(self):\n \"\"\"\n This method is called when the viewer is initialized and after every reset\n Optionally implement this method, if you need to tinker with camera position\n and so forth.\n \"\"\"\n pass\n\n # -----------------------------\n\n def reset(self):\n self.sim.reset()\n ob = self.reset_model()\n old_viewer = self.viewer\n for v in self._viewers.values():\n self.viewer = v\n self.viewer_setup()\n self.viewer = old_viewer\n return ob\n\n def set_state(self, qpos, qvel):\n assert qpos.shape == (self.model.nq,) and qvel.shape == (self.model.nv,)\n old_state = self.sim.get_state()\n new_state = mujoco_py.MjSimState(old_state.time, qpos, qvel,\n old_state.act, old_state.udd_state)\n self.sim.set_state(new_state)\n self.sim.forward()\n\n @property\n def dt(self):\n return self.model.opt.timestep * self.frame_skip\n\n def do_simulation(self, ctrl, n_frames):\n self.sim.data.ctrl[:] = ctrl\n for _ in range(n_frames):\n self.sim.step()\n\n def render(self, mode='human', width=DEFAULT_SIZE, height=DEFAULT_SIZE):\n if mode == 'rgb_array':\n self._get_viewer(mode).render(width, height)\n # window size used for old mujoco-py:\n data = self._get_viewer(mode).read_pixels(width, height, depth=False)\n # original image is upside-down, so flip it\n return data[::-1, :, :]\n elif mode == 'human':\n self._get_viewer(mode).render()\n\n def close(self):\n if self.viewer is not None:\n # self.viewer.finish()\n self.viewer = None\n self._viewers = {}\n\n def _get_viewer(self, mode):\n self.viewer = self._viewers.get(mode)\n if self.viewer is None:\n if mode == 'human':\n self.viewer = mujoco_py.MjViewer(self.sim)\n elif mode == 'rgb_array':\n self.viewer = mujoco_py.MjRenderContextOffscreen(self.sim, 0)\n self.viewer_setup()\n self._viewers[mode] = self.viewer\n return self.viewer\n\n def get_body_com(self, body_name):\n return 
self.data.get_body_xpos(body_name)\n\n def state_vector(self):\n return np.concatenate([\n self.sim.data.qpos.flat,\n self.sim.data.qvel.flat\n ])\n", "path": "gym/envs/mujoco/mujoco_env.py"}]} |
gh_patches_debug_1018 | rasdani/github-patches | git_diff | ivy-llc__ivy-13797 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
diagflat
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/numpy/creation_routines/building_matrices.py`
Content:
```
1 import ivy
2 from ivy.functional.frontends.numpy.func_wrapper import (
3 to_ivy_arrays_and_back,
4 handle_numpy_dtype,
5 )
6
7
8 @to_ivy_arrays_and_back
9 def tril(m, k=0):
10 return ivy.tril(m, k=k)
11
12
13 @to_ivy_arrays_and_back
14 def triu(m, k=0):
15 return ivy.triu(m, k=k)
16
17
18 @handle_numpy_dtype
19 @to_ivy_arrays_and_back
20 def tri(N, M=None, k=0, dtype="float64", *, like=None):
21 if M is None:
22 M = N
23 ones = ivy.ones((N, M), dtype=dtype)
24 return ivy.tril(ones, k=k)
25
26
27 @to_ivy_arrays_and_back
28 def diag(v, k=0):
29 return ivy.diag(v, k=k)
30
31
32 @to_ivy_arrays_and_back
33 def vander(x, N=None, increasing=False):
34 if ivy.is_float_dtype(x):
35 x = x.astype(ivy.float64)
36 elif ivy.is_bool_dtype or ivy.is_int_dtype(x):
37 x = x.astype(ivy.int64)
38 return ivy.vander(x, N=N, increasing=increasing)
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/numpy/creation_routines/building_matrices.py b/ivy/functional/frontends/numpy/creation_routines/building_matrices.py
--- a/ivy/functional/frontends/numpy/creation_routines/building_matrices.py
+++ b/ivy/functional/frontends/numpy/creation_routines/building_matrices.py
@@ -36,3 +36,12 @@
elif ivy.is_bool_dtype or ivy.is_int_dtype(x):
x = x.astype(ivy.int64)
return ivy.vander(x, N=N, increasing=increasing)
+
+
+# diagflat
+@to_ivy_arrays_and_back
+def diagflat(v, k=0):
+ ret = ivy.diagflat(v, offset=k)
+ while len(ivy.shape(ret)) < 2:
+ ret = ret.expand_dims(axis=0)
+ return ret
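For reference, the behaviour this new frontend is meant to mirror is NumPy's own `numpy.diagflat`, which flattens its input and writes it onto the k-th diagonal of a fresh 2-D array. A quick sanity check against plain NumPy (not the ivy frontend itself):
```python
import numpy as np

print(np.diagflat([1, 2]))
# [[1 0]
#  [0 2]]

print(np.diagflat([1, 2], k=1))
# [[0 1 0]
#  [0 0 2]
#  [0 0 0]]
```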
| {"golden_diff": "diff --git a/ivy/functional/frontends/numpy/creation_routines/building_matrices.py b/ivy/functional/frontends/numpy/creation_routines/building_matrices.py\n--- a/ivy/functional/frontends/numpy/creation_routines/building_matrices.py\n+++ b/ivy/functional/frontends/numpy/creation_routines/building_matrices.py\n@@ -36,3 +36,12 @@\n elif ivy.is_bool_dtype or ivy.is_int_dtype(x):\n x = x.astype(ivy.int64)\n return ivy.vander(x, N=N, increasing=increasing)\n+\n+\n+# diagflat\n+@to_ivy_arrays_and_back\n+def diagflat(v, k=0):\n+ ret = ivy.diagflat(v, offset=k)\n+ while len(ivy.shape(ret)) < 2:\n+ ret = ret.expand_dims(axis=0)\n+ return ret\n", "issue": "diagflat\n\n", "before_files": [{"content": "import ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_numpy_dtype,\n)\n\n\n@to_ivy_arrays_and_back\ndef tril(m, k=0):\n return ivy.tril(m, k=k)\n\n\n@to_ivy_arrays_and_back\ndef triu(m, k=0):\n return ivy.triu(m, k=k)\n\n\n@handle_numpy_dtype\n@to_ivy_arrays_and_back\ndef tri(N, M=None, k=0, dtype=\"float64\", *, like=None):\n if M is None:\n M = N\n ones = ivy.ones((N, M), dtype=dtype)\n return ivy.tril(ones, k=k)\n\n\n@to_ivy_arrays_and_back\ndef diag(v, k=0):\n return ivy.diag(v, k=k)\n\n\n@to_ivy_arrays_and_back\ndef vander(x, N=None, increasing=False):\n if ivy.is_float_dtype(x):\n x = x.astype(ivy.float64)\n elif ivy.is_bool_dtype or ivy.is_int_dtype(x):\n x = x.astype(ivy.int64)\n return ivy.vander(x, N=N, increasing=increasing)\n", "path": "ivy/functional/frontends/numpy/creation_routines/building_matrices.py"}], "after_files": [{"content": "import ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_numpy_dtype,\n)\n\n\n@to_ivy_arrays_and_back\ndef tril(m, k=0):\n return ivy.tril(m, k=k)\n\n\n@to_ivy_arrays_and_back\ndef triu(m, k=0):\n return ivy.triu(m, k=k)\n\n\n@handle_numpy_dtype\n@to_ivy_arrays_and_back\ndef tri(N, M=None, k=0, dtype=\"float64\", *, like=None):\n if M is None:\n M = N\n ones = ivy.ones((N, M), dtype=dtype)\n return ivy.tril(ones, k=k)\n\n\n@to_ivy_arrays_and_back\ndef diag(v, k=0):\n return ivy.diag(v, k=k)\n\n\n@to_ivy_arrays_and_back\ndef vander(x, N=None, increasing=False):\n if ivy.is_float_dtype(x):\n x = x.astype(ivy.float64)\n elif ivy.is_bool_dtype or ivy.is_int_dtype(x):\n x = x.astype(ivy.int64)\n return ivy.vander(x, N=N, increasing=increasing)\n\n\n# diagflat\n@to_ivy_arrays_and_back\ndef diagflat(v, k=0):\n ret = ivy.diagflat(v, offset=k)\n while len(ivy.shape(ret)) < 2:\n ret = ret.expand_dims(axis=0)\n return ret\n", "path": "ivy/functional/frontends/numpy/creation_routines/building_matrices.py"}]} |
gh_patches_debug_1019 | rasdani/github-patches | git_diff | beetbox__beets-3980 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Duplicate Plugin: beet dup -C "md5sum {file}" doesn't calculate checksum on Win10 cmd (incl ad-hoc fix)
I was trying to check for duplicates with md5sum and ran into several problems.
### Problem
Running this command in verbose (`-vv`) mode:
(copied as it's shown in this doc: https://beets.readthedocs.io/en/stable/plugins/duplicates.html)
```cmd
$ beet -vv dup -C 'md5sum {file}'
```
Led to this problem:
```
user configuration: F:\Users\yasok\AppData\Roaming\beets\config.yaml
data directory: F:\Users\yasok\AppData\Roaming\beets
plugin paths:
Sending event: pluginload
library database: F:\Users\yasok\AppData\Roaming\beets\library.db
library directory: G:\MusicNoDupes
Sending event: library_opened
Traceback (most recent call last):
File "f:\users\yasok\anaconda3\envs\tagging\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "f:\users\yasok\anaconda3\envs\tagging\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "F:\Users\yasok\anaconda3\envs\tagging\Scripts\beet.exe\__main__.py", line 7, in <module>
File "f:\users\yasok\anaconda3\envs\tagging\lib\site-packages\beets\ui\__init__.py", line 1291, in main
_raw_main(args)
File "f:\users\yasok\anaconda3\envs\tagging\lib\site-packages\beets\ui\__init__.py", line 1278, in _raw_main
subcommand.func(lib, suboptions, subargs)
File "f:\users\yasok\anaconda3\envs\tagging\lib\site-packages\beetsplug\duplicates.py", line 152, in _dup
keys = [k]
UnboundLocalError: local variable 'k' referenced before assignment
```
After debugging I noticed that the command it's trying to run is:
```
'md5sum
```
missing the "{file}", so I figured I'd try it with
```cmd
$ beet -vv dup -C "md5sum {file}"
```
which didn't crash, but led to:
```
duplicates: key md5sum on item G:\MusicNoDupes\SWR3.online\Jogis Jungs\00 Jogis Jungs 026 Masern in der Schweiz.mp3 not cached:computing checksum
duplicates: failed to checksum G:\MusicNoDupes\SWR3.online\Jogis Jungs\00 Jogis Jungs 026 Masern in der Schweiz.mp3: Command 'md5sum b'G:\\MusicNoDupes\\SWR3.online\\Jogis Jungs\\00 Jogis Jungs 026 Masern in der Schweiz.mp3'' returned non-zero exit status 1.
```
I debugged again and realized it tries to run the command as:
```
md5sum b'G:\\MusicNoDupes\\SWR3.online\\Jogis Jungs\\00 Jogis Jungs 026 Masern in der Schweiz.mp3
```
The "b' " at the start confuses md5sum and leads to it not finding the file.
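For context, the stray `b'...'` comes from ordinary Python string formatting, not from md5sum itself: beets stores paths as `bytes`, and interpolating a `bytes` object into a `str` template inserts its `repr()`. A minimal standalone illustration (the path below is made up; this is not beets code):

```python
# Formatting a bytes path into a str template embeds its repr, b'' prefix and all.
path = b"G:\\Music\\track.mp3"

cmd = "md5sum {file}".format(file=path)
print(cmd)   # md5sum b'G:\\Music\\track.mp3'  -> md5sum cannot find this "file"

# Decoding the bytes first (roughly what displayable_path() does) gives a usable command:
cmd = "md5sum {file}".format(file=path.decode("utf-8"))
print(cmd)   # md5sum G:\Music\track.mp3
```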
### ad-hoc fix
I changed the following line:
F:\Users\yasok\anaconda3\envs\tagging\Lib\site-packages\beetsplug\duplicates.py:200
From
```python
args = [p.format(file=item.path) for p in shlex.split(prog)]
```
To
```python
args = [p.format(file=displayable_path(item.path)) for p in shlex.split(prog)]
```
Now `$ beet -vv dup -C "md5sum {file}"` works.
### Setup
* OS: Windows 10 Pro 20H2
* beets version 1.5.0
* Python version 3.8.8
* plugins: duplicates
* Turning off plugins made problem go away (yes/no): not applicable
My configuration (output of `beet config`) is:
```yaml
duplicates:
tiebreak:
items: [bitrate]
directory: G:\MusicNoDupes
import:
move: yes
plugins: duplicates
terminal_encoding: utf-8
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/duplicates.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This file is part of beets.
3 # Copyright 2016, Pedro Silva.
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining
6 # a copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish,
9 # distribute, sublicense, and/or sell copies of the Software, and to
10 # permit persons to whom the Software is furnished to do so, subject to
11 # the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be
14 # included in all copies or substantial portions of the Software.
15
16 """List duplicate tracks or albums.
17 """
18 from __future__ import division, absolute_import, print_function
19
20 import shlex
21 import six
22
23 from beets.plugins import BeetsPlugin
24 from beets.ui import decargs, print_, Subcommand, UserError
25 from beets.util import command_output, displayable_path, subprocess, \
26 bytestring_path, MoveOperation, decode_commandline_path
27 from beets.library import Item, Album
28
29
30 PLUGIN = 'duplicates'
31
32
33 class DuplicatesPlugin(BeetsPlugin):
34 """List duplicate tracks or albums
35 """
36 def __init__(self):
37 super(DuplicatesPlugin, self).__init__()
38
39 self.config.add({
40 'album': False,
41 'checksum': '',
42 'copy': '',
43 'count': False,
44 'delete': False,
45 'format': '',
46 'full': False,
47 'keys': [],
48 'merge': False,
49 'move': '',
50 'path': False,
51 'tiebreak': {},
52 'strict': False,
53 'tag': '',
54 })
55
56 self._command = Subcommand('duplicates',
57 help=__doc__,
58 aliases=['dup'])
59 self._command.parser.add_option(
60 u'-c', u'--count', dest='count',
61 action='store_true',
62 help=u'show duplicate counts',
63 )
64 self._command.parser.add_option(
65 u'-C', u'--checksum', dest='checksum',
66 action='store', metavar='PROG',
67 help=u'report duplicates based on arbitrary command',
68 )
69 self._command.parser.add_option(
70 u'-d', u'--delete', dest='delete',
71 action='store_true',
72 help=u'delete items from library and disk',
73 )
74 self._command.parser.add_option(
75 u'-F', u'--full', dest='full',
76 action='store_true',
77 help=u'show all versions of duplicate tracks or albums',
78 )
79 self._command.parser.add_option(
80 u'-s', u'--strict', dest='strict',
81 action='store_true',
82 help=u'report duplicates only if all attributes are set',
83 )
84 self._command.parser.add_option(
85 u'-k', u'--key', dest='keys',
86 action='append', metavar='KEY',
87 help=u'report duplicates based on keys (use multiple times)',
88 )
89 self._command.parser.add_option(
90 u'-M', u'--merge', dest='merge',
91 action='store_true',
92 help=u'merge duplicate items',
93 )
94 self._command.parser.add_option(
95 u'-m', u'--move', dest='move',
96 action='store', metavar='DEST',
97 help=u'move items to dest',
98 )
99 self._command.parser.add_option(
100 u'-o', u'--copy', dest='copy',
101 action='store', metavar='DEST',
102 help=u'copy items to dest',
103 )
104 self._command.parser.add_option(
105 u'-t', u'--tag', dest='tag',
106 action='store',
107 help=u'tag matched items with \'k=v\' attribute',
108 )
109 self._command.parser.add_all_common_options()
110
111 def commands(self):
112
113 def _dup(lib, opts, args):
114 self.config.set_args(opts)
115 album = self.config['album'].get(bool)
116 checksum = self.config['checksum'].get(str)
117 copy = bytestring_path(self.config['copy'].as_str())
118 count = self.config['count'].get(bool)
119 delete = self.config['delete'].get(bool)
120 fmt = self.config['format'].get(str)
121 full = self.config['full'].get(bool)
122 keys = self.config['keys'].as_str_seq()
123 merge = self.config['merge'].get(bool)
124 move = bytestring_path(self.config['move'].as_str())
125 path = self.config['path'].get(bool)
126 tiebreak = self.config['tiebreak'].get(dict)
127 strict = self.config['strict'].get(bool)
128 tag = self.config['tag'].get(str)
129
130 if album:
131 if not keys:
132 keys = ['mb_albumid']
133 items = lib.albums(decargs(args))
134 else:
135 if not keys:
136 keys = ['mb_trackid', 'mb_albumid']
137 items = lib.items(decargs(args))
138
139 if path:
140 fmt = u'$path'
141
142 # Default format string for count mode.
143 if count and not fmt:
144 if album:
145 fmt = u'$albumartist - $album'
146 else:
147 fmt = u'$albumartist - $album - $title'
148 fmt += u': {0}'
149
150 if checksum:
151 for i in items:
152 k, _ = self._checksum(i, checksum)
153 keys = [k]
154
155 for obj_id, obj_count, objs in self._duplicates(items,
156 keys=keys,
157 full=full,
158 strict=strict,
159 tiebreak=tiebreak,
160 merge=merge):
161 if obj_id: # Skip empty IDs.
162 for o in objs:
163 self._process_item(o,
164 copy=copy,
165 move=move,
166 delete=delete,
167 tag=tag,
168 fmt=fmt.format(obj_count))
169
170 self._command.func = _dup
171 return [self._command]
172
173 def _process_item(self, item, copy=False, move=False, delete=False,
174 tag=False, fmt=u''):
175 """Process Item `item`.
176 """
177 print_(format(item, fmt))
178 if copy:
179 item.move(basedir=copy, operation=MoveOperation.COPY)
180 item.store()
181 if move:
182 item.move(basedir=move)
183 item.store()
184 if delete:
185 item.remove(delete=True)
186 if tag:
187 try:
188 k, v = tag.split('=')
189 except Exception:
190 raise UserError(
191 u"{}: can't parse k=v tag: {}".format(PLUGIN, tag)
192 )
193 setattr(item, k, v)
194 item.store()
195
196 def _checksum(self, item, prog):
197 """Run external `prog` on file path associated with `item`, cache
198 output as flexattr on a key that is the name of the program, and
199 return the key, checksum tuple.
200 """
201 args = [p.format(file=decode_commandline_path(item.path))
202 for p in shlex.split(prog)]
203 key = args[0]
204 checksum = getattr(item, key, False)
205 if not checksum:
206 self._log.debug(u'key {0} on item {1} not cached:'
207 u'computing checksum',
208 key, displayable_path(item.path))
209 try:
210 checksum = command_output(args).stdout
211 setattr(item, key, checksum)
212 item.store()
213 self._log.debug(u'computed checksum for {0} using {1}',
214 item.title, key)
215 except subprocess.CalledProcessError as e:
216 self._log.debug(u'failed to checksum {0}: {1}',
217 displayable_path(item.path), e)
218 else:
219 self._log.debug(u'key {0} on item {1} cached:'
220 u'not computing checksum',
221 key, displayable_path(item.path))
222 return key, checksum
223
224 def _group_by(self, objs, keys, strict):
225 """Return a dictionary with keys arbitrary concatenations of attributes
226 and values lists of objects (Albums or Items) with those keys.
227
228 If strict, all attributes must be defined for a duplicate match.
229 """
230 import collections
231 counts = collections.defaultdict(list)
232 for obj in objs:
233 values = [getattr(obj, k, None) for k in keys]
234 values = [v for v in values if v not in (None, '')]
235 if strict and len(values) < len(keys):
236 self._log.debug(u'some keys {0} on item {1} are null or empty:'
237 u' skipping',
238 keys, displayable_path(obj.path))
239 elif (not strict and not len(values)):
240 self._log.debug(u'all keys {0} on item {1} are null or empty:'
241 u' skipping',
242 keys, displayable_path(obj.path))
243 else:
244 key = tuple(values)
245 counts[key].append(obj)
246
247 return counts
248
249 def _order(self, objs, tiebreak=None):
250 """Return the objects (Items or Albums) sorted by descending
251 order of priority.
252
253 If provided, the `tiebreak` dict indicates the field to use to
254 prioritize the objects. Otherwise, Items are placed in order of
255 "completeness" (objects with more non-null fields come first)
256 and Albums are ordered by their track count.
257 """
258 kind = 'items' if all(isinstance(o, Item) for o in objs) else 'albums'
259
260 if tiebreak and kind in tiebreak.keys():
261 key = lambda x: tuple(getattr(x, k) for k in tiebreak[kind])
262 else:
263 if kind == 'items':
264 def truthy(v):
265 # Avoid a Unicode warning by avoiding comparison
266 # between a bytes object and the empty Unicode
267 # string ''.
268 return v is not None and \
269 (v != '' if isinstance(v, six.text_type) else True)
270 fields = Item.all_keys()
271 key = lambda x: sum(1 for f in fields if truthy(getattr(x, f)))
272 else:
273 key = lambda x: len(x.items())
274
275 return sorted(objs, key=key, reverse=True)
276
277 def _merge_items(self, objs):
278 """Merge Item objs by copying missing fields from items in the tail to
279 the head item.
280
281 Return same number of items, with the head item modified.
282 """
283 fields = Item.all_keys()
284 for f in fields:
285 for o in objs[1:]:
286 if getattr(objs[0], f, None) in (None, ''):
287 value = getattr(o, f, None)
288 if value:
289 self._log.debug(u'key {0} on item {1} is null '
290 u'or empty: setting from item {2}',
291 f, displayable_path(objs[0].path),
292 displayable_path(o.path))
293 setattr(objs[0], f, value)
294 objs[0].store()
295 break
296 return objs
297
298 def _merge_albums(self, objs):
299 """Merge Album objs by copying missing items from albums in the tail
300 to the head album.
301
302 Return same number of albums, with the head album modified."""
303 ids = [i.mb_trackid for i in objs[0].items()]
304 for o in objs[1:]:
305 for i in o.items():
306 if i.mb_trackid not in ids:
307 missing = Item.from_path(i.path)
308 missing.album_id = objs[0].id
309 missing.add(i._db)
310 self._log.debug(u'item {0} missing from album {1}:'
311 u' merging from {2} into {3}',
312 missing,
313 objs[0],
314 displayable_path(o.path),
315 displayable_path(missing.destination()))
316 missing.move(operation=MoveOperation.COPY)
317 return objs
318
319 def _merge(self, objs):
320 """Merge duplicate items. See ``_merge_items`` and ``_merge_albums``
321 for the relevant strategies.
322 """
323 kind = Item if all(isinstance(o, Item) for o in objs) else Album
324 if kind is Item:
325 objs = self._merge_items(objs)
326 else:
327 objs = self._merge_albums(objs)
328 return objs
329
330 def _duplicates(self, objs, keys, full, strict, tiebreak, merge):
331 """Generate triples of keys, duplicate counts, and constituent objects.
332 """
333 offset = 0 if full else 1
334 for k, objs in self._group_by(objs, keys, strict).items():
335 if len(objs) > 1:
336 objs = self._order(objs, tiebreak)
337 if merge:
338 objs = self._merge(objs)
339 yield (k, len(objs) - offset, objs[offset:])
340
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/beetsplug/duplicates.py b/beetsplug/duplicates.py
--- a/beetsplug/duplicates.py
+++ b/beetsplug/duplicates.py
@@ -136,6 +136,11 @@
keys = ['mb_trackid', 'mb_albumid']
items = lib.items(decargs(args))
+ # If there's nothing to do, return early. The code below assumes
+ # `items` to be non-empty.
+ if not items:
+ return
+
if path:
fmt = u'$path'
| {"golden_diff": "diff --git a/beetsplug/duplicates.py b/beetsplug/duplicates.py\n--- a/beetsplug/duplicates.py\n+++ b/beetsplug/duplicates.py\n@@ -136,6 +136,11 @@\n keys = ['mb_trackid', 'mb_albumid']\n items = lib.items(decargs(args))\n \n+ # If there's nothing to do, return early. The code below assumes\n+ # `items` to be non-empty.\n+ if not items:\n+ return\n+\n if path:\n fmt = u'$path'\n", "issue": "Duplicate Plugin: beet dup -C \"md5sum {file}\" doesn't calculate checksum on Win10 cmd (incl ad-hoc fix)\nI was trying to check for duplicates with md5sum. And ran into several problems.\r\n\r\n### Problem\r\n\r\nRunning this command in verbose (`-vv`) mode:\r\n(copied like its shown in this doc https://beets.readthedocs.io/en/stable/plugins/duplicates.html)\r\n```cmd\r\n$ beet -vv dup -C 'md5sum {file}'\r\n```\r\n\r\nLed to this problem:\r\n\r\n```\r\nuser configuration: F:\\Users\\yasok\\AppData\\Roaming\\beets\\config.yaml\r\ndata directory: F:\\Users\\yasok\\AppData\\Roaming\\beets\r\nplugin paths:\r\nSending event: pluginload\r\nlibrary database: F:\\Users\\yasok\\AppData\\Roaming\\beets\\library.db\r\nlibrary directory: G:\\MusicNoDupes\r\nSending event: library_opened\r\nTraceback (most recent call last):\r\n File \"f:\\users\\yasok\\anaconda3\\envs\\tagging\\lib\\runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"f:\\users\\yasok\\anaconda3\\envs\\tagging\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"F:\\Users\\yasok\\anaconda3\\envs\\tagging\\Scripts\\beet.exe\\__main__.py\", line 7, in <module>\r\n File \"f:\\users\\yasok\\anaconda3\\envs\\tagging\\lib\\site-packages\\beets\\ui\\__init__.py\", line 1291, in main\r\n _raw_main(args)\r\n File \"f:\\users\\yasok\\anaconda3\\envs\\tagging\\lib\\site-packages\\beets\\ui\\__init__.py\", line 1278, in _raw_main\r\n subcommand.func(lib, suboptions, subargs)\r\n File \"f:\\users\\yasok\\anaconda3\\envs\\tagging\\lib\\site-packages\\beetsplug\\duplicates.py\", line 152, in _dup\r\n keys = [k]\r\nUnboundLocalError: local variable 'k' referenced before assignment\r\n```\r\n\r\nAfter debugging I noticed that the command it's trying to run is:\r\n```\r\n'md5sum\r\n```\r\nmissing the \"{file}\", so I figured I'll try it with \r\n```cmd\r\n$ beet -vv dup -C \"md5sum {file}\"\r\n```\r\nwhich didn't crash, but led to:\r\n\r\n```\r\nduplicates: key md5sum on item G:\\MusicNoDupes\\SWR3.online\\Jogis Jungs\\00 Jogis Jungs 026 Masern in der Schweiz.mp3 not cached:computing checksum\r\nduplicates: failed to checksum G:\\MusicNoDupes\\SWR3.online\\Jogis Jungs\\00 Jogis Jungs 026 Masern in der Schweiz.mp3: Command 'md5sum b'G:\\\\MusicNoDupes\\\\SWR3.online\\\\Jogis Jungs\\\\00 Jogis Jungs 026 Masern in der Schweiz.mp3'' returned non-zero exit status 1.\r\n```\r\n\r\nI debugged again and realized it tries to run the command as:\r\n```\r\nmd5sum b'G:\\\\MusicNoDupes\\\\SWR3.online\\\\Jogis Jungs\\\\00 Jogis Jungs 026 Masern in der Schweiz.mp3\r\n```\r\nThe \"b' \" at the start confuses md5sum and leads to it not finding the file.\r\n\r\n### ad-hoc fix\r\nI changed the following line:\r\nF:\\Users\\yasok\\anaconda3\\envs\\tagging\\Lib\\site-packages\\beetsplug\\duplicates.py:200\r\nFrom\r\n```python\r\n args = [p.format(file=item.path) for p in shlex.split(prog)]\r\n```\r\nTo\r\n```python\r\n args = [p.format(file=displayable_path(item.path)) for p in shlex.split(prog)]\r\n```\r\n\r\nNow `$ beet -vv dup -C \"md5sum {file}\"` works.\r\n\r\n### Setup\r\n\r\n* OS: 
Windows 10 Pro 20H2\r\n* beets version 1.5.0\r\n* Python version 3.8.8\r\n* plugins: duplicates\r\n* Turning off plugins made problem go away (yes/no): not applicable\r\n\r\nMy configuration (output of `beet config`) is:\r\n\r\n```yaml\r\nduplicates:\r\n tiebreak: \r\n items: [bitrate]\r\ndirectory: G:\\MusicNoDupes\r\nimport:\r\n move: yes\r\nplugins: duplicates\r\nterminal_encoding: utf-8\r\n```\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Pedro Silva.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"List duplicate tracks or albums.\n\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nimport shlex\nimport six\n\nfrom beets.plugins import BeetsPlugin\nfrom beets.ui import decargs, print_, Subcommand, UserError\nfrom beets.util import command_output, displayable_path, subprocess, \\\n bytestring_path, MoveOperation, decode_commandline_path\nfrom beets.library import Item, Album\n\n\nPLUGIN = 'duplicates'\n\n\nclass DuplicatesPlugin(BeetsPlugin):\n \"\"\"List duplicate tracks or albums\n \"\"\"\n def __init__(self):\n super(DuplicatesPlugin, self).__init__()\n\n self.config.add({\n 'album': False,\n 'checksum': '',\n 'copy': '',\n 'count': False,\n 'delete': False,\n 'format': '',\n 'full': False,\n 'keys': [],\n 'merge': False,\n 'move': '',\n 'path': False,\n 'tiebreak': {},\n 'strict': False,\n 'tag': '',\n })\n\n self._command = Subcommand('duplicates',\n help=__doc__,\n aliases=['dup'])\n self._command.parser.add_option(\n u'-c', u'--count', dest='count',\n action='store_true',\n help=u'show duplicate counts',\n )\n self._command.parser.add_option(\n u'-C', u'--checksum', dest='checksum',\n action='store', metavar='PROG',\n help=u'report duplicates based on arbitrary command',\n )\n self._command.parser.add_option(\n u'-d', u'--delete', dest='delete',\n action='store_true',\n help=u'delete items from library and disk',\n )\n self._command.parser.add_option(\n u'-F', u'--full', dest='full',\n action='store_true',\n help=u'show all versions of duplicate tracks or albums',\n )\n self._command.parser.add_option(\n u'-s', u'--strict', dest='strict',\n action='store_true',\n help=u'report duplicates only if all attributes are set',\n )\n self._command.parser.add_option(\n u'-k', u'--key', dest='keys',\n action='append', metavar='KEY',\n help=u'report duplicates based on keys (use multiple times)',\n )\n self._command.parser.add_option(\n u'-M', u'--merge', dest='merge',\n action='store_true',\n help=u'merge duplicate items',\n )\n self._command.parser.add_option(\n u'-m', u'--move', dest='move',\n action='store', metavar='DEST',\n help=u'move items to dest',\n )\n self._command.parser.add_option(\n u'-o', u'--copy', dest='copy',\n action='store', metavar='DEST',\n help=u'copy items to dest',\n )\n self._command.parser.add_option(\n u'-t', u'--tag', dest='tag',\n action='store',\n help=u'tag matched items with \\'k=v\\' attribute',\n )\n 
self._command.parser.add_all_common_options()\n\n def commands(self):\n\n def _dup(lib, opts, args):\n self.config.set_args(opts)\n album = self.config['album'].get(bool)\n checksum = self.config['checksum'].get(str)\n copy = bytestring_path(self.config['copy'].as_str())\n count = self.config['count'].get(bool)\n delete = self.config['delete'].get(bool)\n fmt = self.config['format'].get(str)\n full = self.config['full'].get(bool)\n keys = self.config['keys'].as_str_seq()\n merge = self.config['merge'].get(bool)\n move = bytestring_path(self.config['move'].as_str())\n path = self.config['path'].get(bool)\n tiebreak = self.config['tiebreak'].get(dict)\n strict = self.config['strict'].get(bool)\n tag = self.config['tag'].get(str)\n\n if album:\n if not keys:\n keys = ['mb_albumid']\n items = lib.albums(decargs(args))\n else:\n if not keys:\n keys = ['mb_trackid', 'mb_albumid']\n items = lib.items(decargs(args))\n\n if path:\n fmt = u'$path'\n\n # Default format string for count mode.\n if count and not fmt:\n if album:\n fmt = u'$albumartist - $album'\n else:\n fmt = u'$albumartist - $album - $title'\n fmt += u': {0}'\n\n if checksum:\n for i in items:\n k, _ = self._checksum(i, checksum)\n keys = [k]\n\n for obj_id, obj_count, objs in self._duplicates(items,\n keys=keys,\n full=full,\n strict=strict,\n tiebreak=tiebreak,\n merge=merge):\n if obj_id: # Skip empty IDs.\n for o in objs:\n self._process_item(o,\n copy=copy,\n move=move,\n delete=delete,\n tag=tag,\n fmt=fmt.format(obj_count))\n\n self._command.func = _dup\n return [self._command]\n\n def _process_item(self, item, copy=False, move=False, delete=False,\n tag=False, fmt=u''):\n \"\"\"Process Item `item`.\n \"\"\"\n print_(format(item, fmt))\n if copy:\n item.move(basedir=copy, operation=MoveOperation.COPY)\n item.store()\n if move:\n item.move(basedir=move)\n item.store()\n if delete:\n item.remove(delete=True)\n if tag:\n try:\n k, v = tag.split('=')\n except Exception:\n raise UserError(\n u\"{}: can't parse k=v tag: {}\".format(PLUGIN, tag)\n )\n setattr(item, k, v)\n item.store()\n\n def _checksum(self, item, prog):\n \"\"\"Run external `prog` on file path associated with `item`, cache\n output as flexattr on a key that is the name of the program, and\n return the key, checksum tuple.\n \"\"\"\n args = [p.format(file=decode_commandline_path(item.path))\n for p in shlex.split(prog)]\n key = args[0]\n checksum = getattr(item, key, False)\n if not checksum:\n self._log.debug(u'key {0} on item {1} not cached:'\n u'computing checksum',\n key, displayable_path(item.path))\n try:\n checksum = command_output(args).stdout\n setattr(item, key, checksum)\n item.store()\n self._log.debug(u'computed checksum for {0} using {1}',\n item.title, key)\n except subprocess.CalledProcessError as e:\n self._log.debug(u'failed to checksum {0}: {1}',\n displayable_path(item.path), e)\n else:\n self._log.debug(u'key {0} on item {1} cached:'\n u'not computing checksum',\n key, displayable_path(item.path))\n return key, checksum\n\n def _group_by(self, objs, keys, strict):\n \"\"\"Return a dictionary with keys arbitrary concatenations of attributes\n and values lists of objects (Albums or Items) with those keys.\n\n If strict, all attributes must be defined for a duplicate match.\n \"\"\"\n import collections\n counts = collections.defaultdict(list)\n for obj in objs:\n values = [getattr(obj, k, None) for k in keys]\n values = [v for v in values if v not in (None, '')]\n if strict and len(values) < len(keys):\n self._log.debug(u'some keys {0} on item {1} 
are null or empty:'\n u' skipping',\n keys, displayable_path(obj.path))\n elif (not strict and not len(values)):\n self._log.debug(u'all keys {0} on item {1} are null or empty:'\n u' skipping',\n keys, displayable_path(obj.path))\n else:\n key = tuple(values)\n counts[key].append(obj)\n\n return counts\n\n def _order(self, objs, tiebreak=None):\n \"\"\"Return the objects (Items or Albums) sorted by descending\n order of priority.\n\n If provided, the `tiebreak` dict indicates the field to use to\n prioritize the objects. Otherwise, Items are placed in order of\n \"completeness\" (objects with more non-null fields come first)\n and Albums are ordered by their track count.\n \"\"\"\n kind = 'items' if all(isinstance(o, Item) for o in objs) else 'albums'\n\n if tiebreak and kind in tiebreak.keys():\n key = lambda x: tuple(getattr(x, k) for k in tiebreak[kind])\n else:\n if kind == 'items':\n def truthy(v):\n # Avoid a Unicode warning by avoiding comparison\n # between a bytes object and the empty Unicode\n # string ''.\n return v is not None and \\\n (v != '' if isinstance(v, six.text_type) else True)\n fields = Item.all_keys()\n key = lambda x: sum(1 for f in fields if truthy(getattr(x, f)))\n else:\n key = lambda x: len(x.items())\n\n return sorted(objs, key=key, reverse=True)\n\n def _merge_items(self, objs):\n \"\"\"Merge Item objs by copying missing fields from items in the tail to\n the head item.\n\n Return same number of items, with the head item modified.\n \"\"\"\n fields = Item.all_keys()\n for f in fields:\n for o in objs[1:]:\n if getattr(objs[0], f, None) in (None, ''):\n value = getattr(o, f, None)\n if value:\n self._log.debug(u'key {0} on item {1} is null '\n u'or empty: setting from item {2}',\n f, displayable_path(objs[0].path),\n displayable_path(o.path))\n setattr(objs[0], f, value)\n objs[0].store()\n break\n return objs\n\n def _merge_albums(self, objs):\n \"\"\"Merge Album objs by copying missing items from albums in the tail\n to the head album.\n\n Return same number of albums, with the head album modified.\"\"\"\n ids = [i.mb_trackid for i in objs[0].items()]\n for o in objs[1:]:\n for i in o.items():\n if i.mb_trackid not in ids:\n missing = Item.from_path(i.path)\n missing.album_id = objs[0].id\n missing.add(i._db)\n self._log.debug(u'item {0} missing from album {1}:'\n u' merging from {2} into {3}',\n missing,\n objs[0],\n displayable_path(o.path),\n displayable_path(missing.destination()))\n missing.move(operation=MoveOperation.COPY)\n return objs\n\n def _merge(self, objs):\n \"\"\"Merge duplicate items. 
See ``_merge_items`` and ``_merge_albums``\n for the relevant strategies.\n \"\"\"\n kind = Item if all(isinstance(o, Item) for o in objs) else Album\n if kind is Item:\n objs = self._merge_items(objs)\n else:\n objs = self._merge_albums(objs)\n return objs\n\n def _duplicates(self, objs, keys, full, strict, tiebreak, merge):\n \"\"\"Generate triples of keys, duplicate counts, and constituent objects.\n \"\"\"\n offset = 0 if full else 1\n for k, objs in self._group_by(objs, keys, strict).items():\n if len(objs) > 1:\n objs = self._order(objs, tiebreak)\n if merge:\n objs = self._merge(objs)\n yield (k, len(objs) - offset, objs[offset:])\n", "path": "beetsplug/duplicates.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Pedro Silva.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"List duplicate tracks or albums.\n\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nimport shlex\nimport six\n\nfrom beets.plugins import BeetsPlugin\nfrom beets.ui import decargs, print_, Subcommand, UserError\nfrom beets.util import command_output, displayable_path, subprocess, \\\n bytestring_path, MoveOperation, decode_commandline_path\nfrom beets.library import Item, Album\n\n\nPLUGIN = 'duplicates'\n\n\nclass DuplicatesPlugin(BeetsPlugin):\n \"\"\"List duplicate tracks or albums\n \"\"\"\n def __init__(self):\n super(DuplicatesPlugin, self).__init__()\n\n self.config.add({\n 'album': False,\n 'checksum': '',\n 'copy': '',\n 'count': False,\n 'delete': False,\n 'format': '',\n 'full': False,\n 'keys': [],\n 'merge': False,\n 'move': '',\n 'path': False,\n 'tiebreak': {},\n 'strict': False,\n 'tag': '',\n })\n\n self._command = Subcommand('duplicates',\n help=__doc__,\n aliases=['dup'])\n self._command.parser.add_option(\n u'-c', u'--count', dest='count',\n action='store_true',\n help=u'show duplicate counts',\n )\n self._command.parser.add_option(\n u'-C', u'--checksum', dest='checksum',\n action='store', metavar='PROG',\n help=u'report duplicates based on arbitrary command',\n )\n self._command.parser.add_option(\n u'-d', u'--delete', dest='delete',\n action='store_true',\n help=u'delete items from library and disk',\n )\n self._command.parser.add_option(\n u'-F', u'--full', dest='full',\n action='store_true',\n help=u'show all versions of duplicate tracks or albums',\n )\n self._command.parser.add_option(\n u'-s', u'--strict', dest='strict',\n action='store_true',\n help=u'report duplicates only if all attributes are set',\n )\n self._command.parser.add_option(\n u'-k', u'--key', dest='keys',\n action='append', metavar='KEY',\n help=u'report duplicates based on keys (use multiple times)',\n )\n self._command.parser.add_option(\n u'-M', u'--merge', dest='merge',\n action='store_true',\n help=u'merge duplicate items',\n )\n self._command.parser.add_option(\n u'-m', u'--move', dest='move',\n action='store', metavar='DEST',\n help=u'move items to dest',\n )\n 
self._command.parser.add_option(\n u'-o', u'--copy', dest='copy',\n action='store', metavar='DEST',\n help=u'copy items to dest',\n )\n self._command.parser.add_option(\n u'-t', u'--tag', dest='tag',\n action='store',\n help=u'tag matched items with \\'k=v\\' attribute',\n )\n self._command.parser.add_all_common_options()\n\n def commands(self):\n\n def _dup(lib, opts, args):\n self.config.set_args(opts)\n album = self.config['album'].get(bool)\n checksum = self.config['checksum'].get(str)\n copy = bytestring_path(self.config['copy'].as_str())\n count = self.config['count'].get(bool)\n delete = self.config['delete'].get(bool)\n fmt = self.config['format'].get(str)\n full = self.config['full'].get(bool)\n keys = self.config['keys'].as_str_seq()\n merge = self.config['merge'].get(bool)\n move = bytestring_path(self.config['move'].as_str())\n path = self.config['path'].get(bool)\n tiebreak = self.config['tiebreak'].get(dict)\n strict = self.config['strict'].get(bool)\n tag = self.config['tag'].get(str)\n\n if album:\n if not keys:\n keys = ['mb_albumid']\n items = lib.albums(decargs(args))\n else:\n if not keys:\n keys = ['mb_trackid', 'mb_albumid']\n items = lib.items(decargs(args))\n\n # If there's nothing to do, return early. The code below assumes\n # `items` to be non-empty.\n if not items:\n return\n\n if path:\n fmt = u'$path'\n\n # Default format string for count mode.\n if count and not fmt:\n if album:\n fmt = u'$albumartist - $album'\n else:\n fmt = u'$albumartist - $album - $title'\n fmt += u': {0}'\n\n if checksum:\n for i in items:\n k, _ = self._checksum(i, checksum)\n keys = [k]\n\n for obj_id, obj_count, objs in self._duplicates(items,\n keys=keys,\n full=full,\n strict=strict,\n tiebreak=tiebreak,\n merge=merge):\n if obj_id: # Skip empty IDs.\n for o in objs:\n self._process_item(o,\n copy=copy,\n move=move,\n delete=delete,\n tag=tag,\n fmt=fmt.format(obj_count))\n\n self._command.func = _dup\n return [self._command]\n\n def _process_item(self, item, copy=False, move=False, delete=False,\n tag=False, fmt=u''):\n \"\"\"Process Item `item`.\n \"\"\"\n print_(format(item, fmt))\n if copy:\n item.move(basedir=copy, operation=MoveOperation.COPY)\n item.store()\n if move:\n item.move(basedir=move)\n item.store()\n if delete:\n item.remove(delete=True)\n if tag:\n try:\n k, v = tag.split('=')\n except Exception:\n raise UserError(\n u\"{}: can't parse k=v tag: {}\".format(PLUGIN, tag)\n )\n setattr(item, k, v)\n item.store()\n\n def _checksum(self, item, prog):\n \"\"\"Run external `prog` on file path associated with `item`, cache\n output as flexattr on a key that is the name of the program, and\n return the key, checksum tuple.\n \"\"\"\n args = [p.format(file=decode_commandline_path(item.path))\n for p in shlex.split(prog)]\n key = args[0]\n checksum = getattr(item, key, False)\n if not checksum:\n self._log.debug(u'key {0} on item {1} not cached:'\n u'computing checksum',\n key, displayable_path(item.path))\n try:\n checksum = command_output(args).stdout\n setattr(item, key, checksum)\n item.store()\n self._log.debug(u'computed checksum for {0} using {1}',\n item.title, key)\n except subprocess.CalledProcessError as e:\n self._log.debug(u'failed to checksum {0}: {1}',\n displayable_path(item.path), e)\n else:\n self._log.debug(u'key {0} on item {1} cached:'\n u'not computing checksum',\n key, displayable_path(item.path))\n return key, checksum\n\n def _group_by(self, objs, keys, strict):\n \"\"\"Return a dictionary with keys arbitrary concatenations of attributes\n and values 
lists of objects (Albums or Items) with those keys.\n\n If strict, all attributes must be defined for a duplicate match.\n \"\"\"\n import collections\n counts = collections.defaultdict(list)\n for obj in objs:\n values = [getattr(obj, k, None) for k in keys]\n values = [v for v in values if v not in (None, '')]\n if strict and len(values) < len(keys):\n self._log.debug(u'some keys {0} on item {1} are null or empty:'\n u' skipping',\n keys, displayable_path(obj.path))\n elif (not strict and not len(values)):\n self._log.debug(u'all keys {0} on item {1} are null or empty:'\n u' skipping',\n keys, displayable_path(obj.path))\n else:\n key = tuple(values)\n counts[key].append(obj)\n\n return counts\n\n def _order(self, objs, tiebreak=None):\n \"\"\"Return the objects (Items or Albums) sorted by descending\n order of priority.\n\n If provided, the `tiebreak` dict indicates the field to use to\n prioritize the objects. Otherwise, Items are placed in order of\n \"completeness\" (objects with more non-null fields come first)\n and Albums are ordered by their track count.\n \"\"\"\n kind = 'items' if all(isinstance(o, Item) for o in objs) else 'albums'\n\n if tiebreak and kind in tiebreak.keys():\n key = lambda x: tuple(getattr(x, k) for k in tiebreak[kind])\n else:\n if kind == 'items':\n def truthy(v):\n # Avoid a Unicode warning by avoiding comparison\n # between a bytes object and the empty Unicode\n # string ''.\n return v is not None and \\\n (v != '' if isinstance(v, six.text_type) else True)\n fields = Item.all_keys()\n key = lambda x: sum(1 for f in fields if truthy(getattr(x, f)))\n else:\n key = lambda x: len(x.items())\n\n return sorted(objs, key=key, reverse=True)\n\n def _merge_items(self, objs):\n \"\"\"Merge Item objs by copying missing fields from items in the tail to\n the head item.\n\n Return same number of items, with the head item modified.\n \"\"\"\n fields = Item.all_keys()\n for f in fields:\n for o in objs[1:]:\n if getattr(objs[0], f, None) in (None, ''):\n value = getattr(o, f, None)\n if value:\n self._log.debug(u'key {0} on item {1} is null '\n u'or empty: setting from item {2}',\n f, displayable_path(objs[0].path),\n displayable_path(o.path))\n setattr(objs[0], f, value)\n objs[0].store()\n break\n return objs\n\n def _merge_albums(self, objs):\n \"\"\"Merge Album objs by copying missing items from albums in the tail\n to the head album.\n\n Return same number of albums, with the head album modified.\"\"\"\n ids = [i.mb_trackid for i in objs[0].items()]\n for o in objs[1:]:\n for i in o.items():\n if i.mb_trackid not in ids:\n missing = Item.from_path(i.path)\n missing.album_id = objs[0].id\n missing.add(i._db)\n self._log.debug(u'item {0} missing from album {1}:'\n u' merging from {2} into {3}',\n missing,\n objs[0],\n displayable_path(o.path),\n displayable_path(missing.destination()))\n missing.move(operation=MoveOperation.COPY)\n return objs\n\n def _merge(self, objs):\n \"\"\"Merge duplicate items. 
See ``_merge_items`` and ``_merge_albums``\n for the relevant strategies.\n \"\"\"\n kind = Item if all(isinstance(o, Item) for o in objs) else Album\n if kind is Item:\n objs = self._merge_items(objs)\n else:\n objs = self._merge_albums(objs)\n return objs\n\n def _duplicates(self, objs, keys, full, strict, tiebreak, merge):\n \"\"\"Generate triples of keys, duplicate counts, and constituent objects.\n \"\"\"\n offset = 0 if full else 1\n for k, objs in self._group_by(objs, keys, strict).items():\n if len(objs) > 1:\n objs = self._order(objs, tiebreak)\n if merge:\n objs = self._merge(objs)\n yield (k, len(objs) - offset, objs[offset:])\n", "path": "beetsplug/duplicates.py"}]} |
gh_patches_debug_1020 | rasdani/github-patches | git_diff | facebookresearch__CompilerGym-548 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve examples documentation to make it clear that they are standalone
## 🚀 Feature
Tangentially related to #532, I think it would be good to add a "Usage" section to examples/README.md that makes it clear that these example scripts can be used with a pip-installed CompilerGym, and possibly split the examples rules out of the top-level Makefile into an examples/Makefile for standalone usage.
## Motivation
It is not clear whether the included examples require building from source (they don't) or can be used on their own (they can).
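For illustration, the kind of snippet a "Usage" section could open with, making the standalone story explicit. This is only a sketch: it assumes the pip package name `compiler_gym` and the `llvm-v0` environment id from the public docs, and the examples' own dependencies would still come from `examples/requirements.txt`:

```python
# Sanity check that a plain pip install is enough -- no source build required.
import compiler_gym  # installed with: pip install compiler_gym

env = compiler_gym.make("llvm-v0")      # environment id taken from the public docs
env.reset()
observation, reward, done, info = env.step(env.action_space.sample())
env.close()
```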
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/setup.py`
Content:
```
1 #!/usr/bin/env python3
2 #
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 #
5 # This source code is licensed under the MIT license found in the
6 # LICENSE file in the root directory of this source tree.
7
8 import distutils.util
9
10 import setuptools
11
12 with open("../VERSION") as f:
13 version = f.read().strip()
14 with open("requirements.txt") as f:
15 requirements = [ln.split("#")[0].rstrip() for ln in f.readlines()]
16
17 setuptools.setup(
18 name="compiler_gym_examples",
19 version=version,
20 description="Example code for CompilerGym",
21 author="Facebook AI Research",
22 url="https://github.com/facebookresearch/CompilerGym",
23 license="MIT",
24 install_requires=requirements,
25 packages=[
26 "llvm_autotuning",
27 "llvm_autotuning.autotuners",
28 "llvm_rl",
29 "llvm_rl.model",
30 ],
31 python_requires=">=3.8",
32 platforms=[distutils.util.get_platform()],
33 zip_safe=False,
34 )
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/setup.py b/examples/setup.py
--- a/examples/setup.py
+++ b/examples/setup.py
@@ -13,6 +13,8 @@
version = f.read().strip()
with open("requirements.txt") as f:
requirements = [ln.split("#")[0].rstrip() for ln in f.readlines()]
+with open("../tests/requirements.txt") as f:
+ requirements += [ln.split("#")[0].rstrip() for ln in f.readlines()]
setuptools.setup(
name="compiler_gym_examples",
| {"golden_diff": "diff --git a/examples/setup.py b/examples/setup.py\n--- a/examples/setup.py\n+++ b/examples/setup.py\n@@ -13,6 +13,8 @@\n version = f.read().strip()\n with open(\"requirements.txt\") as f:\n requirements = [ln.split(\"#\")[0].rstrip() for ln in f.readlines()]\n+with open(\"../tests/requirements.txt\") as f:\n+ requirements += [ln.split(\"#\")[0].rstrip() for ln in f.readlines()]\n \n setuptools.setup(\n name=\"compiler_gym_examples\",\n", "issue": "Improve examples documentation to make it clear that they are standalone\n## \ud83d\ude80 Feature\r\n\r\nTangentially to #532, I think it would be good to add a \"Usage\" section to examples/README.md that makes it clear that these example scripts can be used through pip-installed CompilerGym, and possibly split the examples rules out of the top level makefile into an examples/Makefile file for standalone usage.\r\n\r\n## Motivation\r\n\r\nIt is not clear whether the included examples require building from source (they don't) or can be used on their own (they can).\n", "before_files": [{"content": "#!/usr/bin/env python3\n#\n# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport distutils.util\n\nimport setuptools\n\nwith open(\"../VERSION\") as f:\n version = f.read().strip()\nwith open(\"requirements.txt\") as f:\n requirements = [ln.split(\"#\")[0].rstrip() for ln in f.readlines()]\n\nsetuptools.setup(\n name=\"compiler_gym_examples\",\n version=version,\n description=\"Example code for CompilerGym\",\n author=\"Facebook AI Research\",\n url=\"https://github.com/facebookresearch/CompilerGym\",\n license=\"MIT\",\n install_requires=requirements,\n packages=[\n \"llvm_autotuning\",\n \"llvm_autotuning.autotuners\",\n \"llvm_rl\",\n \"llvm_rl.model\",\n ],\n python_requires=\">=3.8\",\n platforms=[distutils.util.get_platform()],\n zip_safe=False,\n)\n", "path": "examples/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n#\n# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport distutils.util\n\nimport setuptools\n\nwith open(\"../VERSION\") as f:\n version = f.read().strip()\nwith open(\"requirements.txt\") as f:\n requirements = [ln.split(\"#\")[0].rstrip() for ln in f.readlines()]\nwith open(\"../tests/requirements.txt\") as f:\n requirements += [ln.split(\"#\")[0].rstrip() for ln in f.readlines()]\n\nsetuptools.setup(\n name=\"compiler_gym_examples\",\n version=version,\n description=\"Example code for CompilerGym\",\n author=\"Facebook AI Research\",\n url=\"https://github.com/facebookresearch/CompilerGym\",\n license=\"MIT\",\n install_requires=requirements,\n packages=[\n \"llvm_autotuning\",\n \"llvm_autotuning.autotuners\",\n \"llvm_rl\",\n \"llvm_rl.model\",\n ],\n python_requires=\">=3.8\",\n platforms=[distutils.util.get_platform()],\n zip_safe=False,\n)\n", "path": "examples/setup.py"}]} |
gh_patches_debug_1021 | rasdani/github-patches | git_diff | kedro-org__kedro-2087 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change autogeneration of node names to not contain commas
## Description
Change how node names are generated to not contain commas. Currently we stringify a node's definition and set that as the node name if no name has been set explicitly: https://github.com/kedro-org/kedro/blob/main/kedro/pipeline/node.py#L239
The change is that the generated name will contain the function name and outputs, where the outputs are separated by `_` or `-` instead of `,`.
So `two_inputs([A0,B0]) -> [C1,C2]` would become `two_inputs -> [C1-C2]`
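A rough sketch of what the autogenerated-name logic might look like (illustrative only — `_func_name` and `outputs` are the existing `Node` properties shown in the file below, and the exact output format is up for discussion):

```python
# Hypothetical replacement for `self._name or str(self)` in Node.name:
# build the default name from the function name and the outputs only,
# joining output names with '-' so no comma appears in the node name.
def _default_node_name(func_name: str, outputs: list) -> str:
    out_str = "-".join(outputs) if outputs else "None"
    return f"{func_name} -> [{out_str}]"

# _default_node_name("two_inputs", ["C1", "C2"]) == "two_inputs -> [C1-C2]"
```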
## Context
https://github.com/kedro-org/kedro/issues/1828
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/pipeline/node.py`
Content:
```
1 """This module provides user-friendly functions for creating nodes as parts
2 of Kedro pipelines.
3 """
4 import copy
5 import inspect
6 import logging
7 import re
8 from collections import Counter
9 from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Union
10 from warnings import warn
11
12
13 class Node:
14 """``Node`` is an auxiliary class facilitating the operations required to
15 run user-provided functions as part of Kedro pipelines.
16 """
17
18 def __init__(
19 self,
20 func: Callable,
21 inputs: Union[None, str, List[str], Dict[str, str]],
22 outputs: Union[None, str, List[str], Dict[str, str]],
23 *,
24 name: str = None,
25 tags: Union[str, Iterable[str]] = None,
26 confirms: Union[str, List[str]] = None,
27 namespace: str = None,
28 ):
29 """Create a node in the pipeline by providing a function to be called
30 along with variable names for inputs and/or outputs.
31
32 Args:
33 func: A function that corresponds to the node logic.
34 The function should have at least one input or output.
35 inputs: The name or the list of the names of variables used as
36 inputs to the function. The number of names should match
37 the number of arguments in the definition of the provided
38 function. When Dict[str, str] is provided, variable names
39 will be mapped to function argument names.
40 outputs: The name or the list of the names of variables used
41 as outputs to the function. The number of names should match
42 the number of outputs returned by the provided function.
43 When Dict[str, str] is provided, variable names will be mapped
44 to the named outputs the function returns.
45 name: Optional node name to be used when displaying the node in
46 logs or any other visualisations.
47 tags: Optional set of tags to be applied to the node.
48 confirms: Optional name or the list of the names of the datasets
49 that should be confirmed. This will result in calling
50 ``confirm()`` method of the corresponding data set instance.
51 Specified dataset names do not necessarily need to be present
52 in the node ``inputs`` or ``outputs``.
53 namespace: Optional node namespace.
54
55 Raises:
56 ValueError: Raised in the following cases:
57 a) When the provided arguments do not conform to
58 the format suggested by the type hint of the argument.
59 b) When the node produces multiple outputs with the same name.
60 c) When an input has the same name as an output.
61 d) When the given node name violates the requirements:
62 it must contain only letters, digits, hyphens, underscores
63 and/or fullstops.
64
65 """
66
67 if not callable(func):
68 raise ValueError(
69 _node_error_message(
70 f"first argument must be a function, not '{type(func).__name__}'."
71 )
72 )
73
74 if inputs and not isinstance(inputs, (list, dict, str)):
75 raise ValueError(
76 _node_error_message(
77 f"'inputs' type must be one of [String, List, Dict, None], "
78 f"not '{type(inputs).__name__}'."
79 )
80 )
81
82 if outputs and not isinstance(outputs, (list, dict, str)):
83 raise ValueError(
84 _node_error_message(
85 f"'outputs' type must be one of [String, List, Dict, None], "
86 f"not '{type(outputs).__name__}'."
87 )
88 )
89
90 if not inputs and not outputs:
91 raise ValueError(
92 _node_error_message("it must have some 'inputs' or 'outputs'.")
93 )
94
95 self._validate_inputs(func, inputs)
96
97 self._func = func
98 self._inputs = inputs
99 self._outputs = outputs
100 if name and not re.match(r"[\w\.-]+$", name):
101 raise ValueError(
102 f"'{name}' is not a valid node name. It must contain only "
103 f"letters, digits, hyphens, underscores and/or fullstops."
104 )
105 self._name = name
106 self._namespace = namespace
107 self._tags = set(_to_list(tags))
108
109 self._validate_unique_outputs()
110 self._validate_inputs_dif_than_outputs()
111 self._confirms = confirms
112
113 def _copy(self, **overwrite_params):
114 """
115 Helper function to copy the node, replacing some values.
116 """
117 params = {
118 "func": self._func,
119 "inputs": self._inputs,
120 "outputs": self._outputs,
121 "name": self._name,
122 "namespace": self._namespace,
123 "tags": self._tags,
124 "confirms": self._confirms,
125 }
126 params.update(overwrite_params)
127 return Node(**params)
128
129 @property
130 def _logger(self):
131 return logging.getLogger(__name__)
132
133 @property
134 def _unique_key(self):
135 def hashable(value):
136 if isinstance(value, dict):
137 # we sort it because a node with inputs/outputs
138 # {"arg1": "a", "arg2": "b"} is equivalent to
139 # a node with inputs/outputs {"arg2": "b", "arg1": "a"}
140 return tuple(sorted(value.items()))
141 if isinstance(value, list):
142 return tuple(value)
143 return value
144
145 return (self.name, hashable(self._inputs), hashable(self._outputs))
146
147 def __eq__(self, other):
148 if not isinstance(other, Node):
149 return NotImplemented
150 return self._unique_key == other._unique_key
151
152 def __lt__(self, other):
153 if not isinstance(other, Node):
154 return NotImplemented
155 return self._unique_key < other._unique_key
156
157 def __hash__(self):
158 return hash(self._unique_key)
159
160 def __str__(self):
161 def _set_to_str(xset):
162 return f"[{','.join(xset)}]"
163
164 out_str = _set_to_str(self.outputs) if self._outputs else "None"
165 in_str = _set_to_str(self.inputs) if self._inputs else "None"
166
167 prefix = self._name + ": " if self._name else ""
168 return prefix + f"{self._func_name}({in_str}) -> {out_str}"
169
170 def __repr__(self): # pragma: no cover
171 return (
172 f"Node({self._func_name}, {repr(self._inputs)}, {repr(self._outputs)}, "
173 f"{repr(self._name)})"
174 )
175
176 def __call__(self, **kwargs) -> Dict[str, Any]:
177 return self.run(inputs=kwargs)
178
179 @property
180 def _func_name(self) -> str:
181 name = _get_readable_func_name(self._func)
182 if name == "<partial>":
183 warn(
184 f"The node producing outputs '{self.outputs}' is made from a 'partial' function. "
185 f"Partial functions do not have a '__name__' attribute: consider using "
186 f"'functools.update_wrapper' for better log messages."
187 )
188 return name
189
190 @property
191 def func(self) -> Callable:
192 """Exposes the underlying function of the node.
193
194 Returns:
195 Return the underlying function of the node.
196 """
197 return self._func
198
199 @func.setter
200 def func(self, func: Callable):
201 """Sets the underlying function of the node.
202 Useful if user wants to decorate the function in a node's Hook implementation.
203
204 Args:
205 func: The new function for node's execution.
206 """
207 self._func = func
208
209 @property
210 def tags(self) -> Set[str]:
211 """Return the tags assigned to the node.
212
213 Returns:
214 Return the set of all assigned tags to the node.
215
216 """
217 return set(self._tags)
218
219 def tag(self, tags: Union[str, Iterable[str]]) -> "Node":
220 """Create a new ``Node`` which is an exact copy of the current one,
221 but with more tags added to it.
222
223 Args:
224 tags: The tags to be added to the new node.
225
226 Returns:
227 A copy of the current ``Node`` object with the tags added.
228
229 """
230 return self._copy(tags=self.tags | set(_to_list(tags)))
231
232 @property
233 def name(self) -> str:
234 """Node's name.
235
236 Returns:
237 Node's name if provided or the name of its function.
238 """
239 node_name = self._name or str(self)
240 if self.namespace:
241 return f"{self.namespace}.{node_name}"
242 return node_name
243
244 @property
245 def short_name(self) -> str:
246 """Node's name.
247
248 Returns:
249 Returns a short, user-friendly name that is not guaranteed to be unique.
250 The namespace is stripped out of the node name.
251 """
252 if self._name:
253 return self._name
254
255 return self._func_name.replace("_", " ").title()
256
257 @property
258 def namespace(self) -> Optional[str]:
259 """Node's namespace.
260
261 Returns:
262 String representing node's namespace, typically from outer to inner scopes.
263 """
264 return self._namespace
265
266 @property
267 def inputs(self) -> List[str]:
268 """Return node inputs as a list, in the order required to bind them properly to
269 the node's function.
270
271 Returns:
272 Node input names as a list.
273
274 """
275 if isinstance(self._inputs, dict):
276 return _dict_inputs_to_list(self._func, self._inputs)
277 return _to_list(self._inputs)
278
279 @property
280 def outputs(self) -> List[str]:
281 """Return node outputs as a list preserving the original order
282 if possible.
283
284 Returns:
285 Node output names as a list.
286
287 """
288 return _to_list(self._outputs)
289
290 @property
291 def confirms(self) -> List[str]:
292 """Return dataset names to confirm as a list.
293
294 Returns:
295 Dataset names to confirm as a list.
296 """
297 return _to_list(self._confirms)
298
299 def run(self, inputs: Dict[str, Any] = None) -> Dict[str, Any]:
300 """Run this node using the provided inputs and return its results
301 in a dictionary.
302
303 Args:
304 inputs: Dictionary of inputs as specified at the creation of
305 the node.
306
307 Raises:
308 ValueError: In the following cases:
309 a) The node function inputs are incompatible with the node
310 input definition.
311 Example 1: node definition input is a list of 2
312 DataFrames, whereas only 1 was provided or 2 different ones
313 were provided.
314 b) The node function outputs are incompatible with the node
315 output definition.
316 Example 1: node function definition is a dictionary,
317 whereas function returns a list.
318 Example 2: node definition output is a list of 5
319 strings, whereas the function returns a list of 4 objects.
320 Exception: Any exception thrown during execution of the node.
321
322 Returns:
323 All produced node outputs are returned in a dictionary, where the
324 keys are defined by the node outputs.
325
326 """
327 self._logger.info("Running node: %s", str(self))
328
329 outputs = None
330
331 if not (inputs is None or isinstance(inputs, dict)):
332 raise ValueError(
333 f"Node.run() expects a dictionary or None, "
334 f"but got {type(inputs)} instead"
335 )
336
337 try:
338 inputs = {} if inputs is None else inputs
339 if not self._inputs:
340 outputs = self._run_with_no_inputs(inputs)
341 elif isinstance(self._inputs, str):
342 outputs = self._run_with_one_input(inputs, self._inputs)
343 elif isinstance(self._inputs, list):
344 outputs = self._run_with_list(inputs, self._inputs)
345 elif isinstance(self._inputs, dict):
346 outputs = self._run_with_dict(inputs, self._inputs)
347
348 return self._outputs_to_dictionary(outputs)
349
350 # purposely catch all exceptions
351 except Exception as exc:
352 self._logger.error(
353 "Node %s failed with error: \n%s",
354 str(self),
355 str(exc),
356 extra={"markup": True},
357 )
358 raise exc
359
360 def _run_with_no_inputs(self, inputs: Dict[str, Any]):
361 if inputs:
362 raise ValueError(
363 f"Node {str(self)} expected no inputs, "
364 f"but got the following {len(inputs)} input(s) instead: "
365 f"{sorted(inputs.keys())}."
366 )
367
368 return self._func()
369
370 def _run_with_one_input(self, inputs: Dict[str, Any], node_input: str):
371 if len(inputs) != 1 or node_input not in inputs:
372 raise ValueError(
373 f"Node {str(self)} expected one input named '{node_input}', "
374 f"but got the following {len(inputs)} input(s) instead: "
375 f"{sorted(inputs.keys())}."
376 )
377
378 return self._func(inputs[node_input])
379
380 def _run_with_list(self, inputs: Dict[str, Any], node_inputs: List[str]):
381 # Node inputs and provided run inputs should completely overlap
382 if set(node_inputs) != set(inputs.keys()):
383 raise ValueError(
384 f"Node {str(self)} expected {len(node_inputs)} input(s) {node_inputs}, "
385 f"but got the following {len(inputs)} input(s) instead: "
386 f"{sorted(inputs.keys())}."
387 )
388 # Ensure the function gets the inputs in the correct order
389 return self._func(*(inputs[item] for item in node_inputs))
390
391 def _run_with_dict(self, inputs: Dict[str, Any], node_inputs: Dict[str, str]):
392 # Node inputs and provided run inputs should completely overlap
393 if set(node_inputs.values()) != set(inputs.keys()):
394 raise ValueError(
395 f"Node {str(self)} expected {len(set(node_inputs.values()))} input(s) "
396 f"{sorted(set(node_inputs.values()))}, "
397 f"but got the following {len(inputs)} input(s) instead: "
398 f"{sorted(inputs.keys())}."
399 )
400 kwargs = {arg: inputs[alias] for arg, alias in node_inputs.items()}
401 return self._func(**kwargs)
402
403 def _outputs_to_dictionary(self, outputs):
404 def _from_dict():
405 if set(self._outputs.keys()) != set(outputs.keys()):
406 raise ValueError(
407 f"Failed to save outputs of node {str(self)}.\n"
408 f"The node's output keys {set(outputs.keys())} do not match with "
409 f"the returned output's keys {set(self._outputs.keys())}."
410 )
411 return {name: outputs[key] for key, name in self._outputs.items()}
412
413 def _from_list():
414 if not isinstance(outputs, (list, tuple)):
415 raise ValueError(
416 f"Failed to save outputs of node {str(self)}.\n"
417 f"The node definition contains a list of "
418 f"outputs {self._outputs}, whereas the node function "
419 f"returned a '{type(outputs).__name__}'."
420 )
421 if len(outputs) != len(self._outputs):
422 raise ValueError(
423 f"Failed to save outputs of node {str(self)}.\n"
424 f"The node function returned {len(outputs)} output(s), "
425 f"whereas the node definition contains {len(self._outputs)} "
426 f"output(s)."
427 )
428
429 return dict(zip(self._outputs, outputs))
430
431 if isinstance(self._outputs, dict) and not isinstance(outputs, dict):
432 raise ValueError(
433 f"Failed to save outputs of node {self}.\n"
434 f"The node output is a dictionary, whereas the "
435 f"function output is not."
436 )
437
438 if self._outputs is None:
439 return {}
440 if isinstance(self._outputs, str):
441 return {self._outputs: outputs}
442 if isinstance(self._outputs, dict):
443 return _from_dict()
444 return _from_list()
445
446 def _validate_inputs(self, func, inputs):
447 # inspect does not support built-in Python functions written in C.
448 # Thus we only validate func if it is not built-in.
449 if not inspect.isbuiltin(func):
450 args, kwargs = self._process_inputs_for_bind(inputs)
451 try:
452 inspect.signature(func, follow_wrapped=False).bind(*args, **kwargs)
453 except Exception as exc:
454 func_args = inspect.signature(
455 func, follow_wrapped=False
456 ).parameters.keys()
457 func_name = _get_readable_func_name(func)
458
459 raise TypeError(
460 f"Inputs of '{func_name}' function expected {list(func_args)}, "
461 f"but got {inputs}"
462 ) from exc
463
464 def _validate_unique_outputs(self):
465 diff = Counter(self.outputs) - Counter(set(self.outputs))
466 if diff:
467 raise ValueError(
468 f"Failed to create node {self} due to duplicate"
469 f" output(s) {set(diff.keys())}.\nNode outputs must be unique."
470 )
471
472 def _validate_inputs_dif_than_outputs(self):
473 common_in_out = set(self.inputs).intersection(set(self.outputs))
474 if common_in_out:
475 raise ValueError(
476 f"Failed to create node {self}.\n"
477 f"A node cannot have the same inputs and outputs: "
478 f"{common_in_out}"
479 )
480
481 @staticmethod
482 def _process_inputs_for_bind(inputs: Union[None, str, List[str], Dict[str, str]]):
483 # Safeguard that we do not mutate list inputs
484 inputs = copy.copy(inputs)
485 args = [] # type: List[str]
486 kwargs = {} # type: Dict[str, str]
487 if isinstance(inputs, str):
488 args = [inputs]
489 elif isinstance(inputs, list):
490 args = inputs
491 elif isinstance(inputs, dict):
492 kwargs = inputs
493 return args, kwargs
494
495
496 def _node_error_message(msg) -> str:
497 return (
498 f"Invalid Node definition: {msg}\n"
499 f"Format should be: node(function, inputs, outputs)"
500 )
501
502
503 def node(
504 func: Callable,
505 inputs: Union[None, str, List[str], Dict[str, str]],
506 outputs: Union[None, str, List[str], Dict[str, str]],
507 *,
508 name: str = None,
509 tags: Union[str, Iterable[str]] = None,
510 confirms: Union[str, List[str]] = None,
511 namespace: str = None,
512 ) -> Node:
513 """Create a node in the pipeline by providing a function to be called
514 along with variable names for inputs and/or outputs.
515
516 Args:
517 func: A function that corresponds to the node logic. The function
518 should have at least one input or output.
519 inputs: The name or the list of the names of variables used as inputs
520 to the function. The number of names should match the number of
521 arguments in the definition of the provided function. When
522 Dict[str, str] is provided, variable names will be mapped to
523 function argument names.
524 outputs: The name or the list of the names of variables used as outputs
525 to the function. The number of names should match the number of
526 outputs returned by the provided function. When Dict[str, str]
527 is provided, variable names will be mapped to the named outputs the
528 function returns.
529 name: Optional node name to be used when displaying the node in logs or
530 any other visualisations.
531 tags: Optional set of tags to be applied to the node.
532 confirms: Optional name or the list of the names of the datasets
533 that should be confirmed. This will result in calling ``confirm()``
534 method of the corresponding data set instance. Specified dataset
535 names do not necessarily need to be present in the node ``inputs``
536 or ``outputs``.
537 namespace: Optional node namespace.
538
539 Returns:
540 A Node object with mapped inputs, outputs and function.
541
542 Example:
543 ::
544
545 >>> import pandas as pd
546 >>> import numpy as np
547 >>>
548 >>> def clean_data(cars: pd.DataFrame,
549 >>> boats: pd.DataFrame) -> Dict[str, pd.DataFrame]:
550 >>> return dict(cars_df=cars.dropna(), boats_df=boats.dropna())
551 >>>
552 >>> def halve_dataframe(data: pd.DataFrame) -> List[pd.DataFrame]:
553 >>> return np.array_split(data, 2)
554 >>>
555 >>> nodes = [
556 >>> node(clean_data,
557 >>> inputs=['cars2017', 'boats2017'],
558 >>> outputs=dict(cars_df='clean_cars2017',
559 >>> boats_df='clean_boats2017')),
560 >>> node(halve_dataframe,
561 >>> 'clean_cars2017',
562 >>> ['train_cars2017', 'test_cars2017']),
563 >>> node(halve_dataframe,
564 >>> dict(data='clean_boats2017'),
565 >>> ['train_boats2017', 'test_boats2017'])
566 >>> ]
567 """
568 return Node(
569 func,
570 inputs,
571 outputs,
572 name=name,
573 tags=tags,
574 confirms=confirms,
575 namespace=namespace,
576 )
577
578
579 def _dict_inputs_to_list(func: Callable[[Any], Any], inputs: Dict[str, str]):
580 """Convert a dict representation of the node inputs to a list, ensuring
581 the appropriate order for binding them to the node's function.
582 """
583 sig = inspect.signature(func, follow_wrapped=False).bind(**inputs)
584 return [*sig.args, *sig.kwargs.values()]
585
586
587 def _to_list(element: Union[None, str, Iterable[str], Dict[str, str]]) -> List[str]:
588 """Make a list out of node inputs/outputs.
589
590 Returns:
591 List[str]: Node input/output names as a list to standardise.
592 """
593
594 if element is None:
595 return []
596 if isinstance(element, str):
597 return [element]
598 if isinstance(element, dict):
599 return list(element.values())
600 return list(element)
601
602
603 def _get_readable_func_name(func: Callable) -> str:
604 """Get a user-friendly readable name of the function provided.
605
606 Returns:
607 str: readable name of the provided callable func.
608 """
609
610 if hasattr(func, "__name__"):
611 return func.__name__
612
613 name = repr(func)
614 if "functools.partial" in name:
615 name = "<partial>"
616
617 return name
618
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kedro/pipeline/node.py b/kedro/pipeline/node.py
--- a/kedro/pipeline/node.py
+++ b/kedro/pipeline/node.py
@@ -159,7 +159,7 @@
def __str__(self):
def _set_to_str(xset):
- return f"[{','.join(xset)}]"
+ return f"[{';'.join(xset)}]"
out_str = _set_to_str(self.outputs) if self._outputs else "None"
in_str = _set_to_str(self.inputs) if self._inputs else "None"
| {"golden_diff": "diff --git a/kedro/pipeline/node.py b/kedro/pipeline/node.py\n--- a/kedro/pipeline/node.py\n+++ b/kedro/pipeline/node.py\n@@ -159,7 +159,7 @@\n \n def __str__(self):\n def _set_to_str(xset):\n- return f\"[{','.join(xset)}]\"\n+ return f\"[{';'.join(xset)}]\"\n \n out_str = _set_to_str(self.outputs) if self._outputs else \"None\"\n in_str = _set_to_str(self.inputs) if self._inputs else \"None\"\n", "issue": "Change autogeneration of node names to not contain commas\n## Description\r\nChange how node names are generated to not contain commas. Currently we stringify a node's definition and set that as the node name if no name has been set explicitly: https://github.com/kedro-org/kedro/blob/main/kedro/pipeline/node.py#L239\r\n\r\nChange is so that the generated name will contain the function name and output, where the outputs are separated by `_` or `-` instead of `,`.\r\n\r\nSo `two_inputs([A0,B0]) -> [C1,C2]` would become `two_inputs -> [C1-C2]`\r\n\r\n## Context\r\nhttps://github.com/kedro-org/kedro/issues/1828\n", "before_files": [{"content": "\"\"\"This module provides user-friendly functions for creating nodes as parts\nof Kedro pipelines.\n\"\"\"\nimport copy\nimport inspect\nimport logging\nimport re\nfrom collections import Counter\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Set, Union\nfrom warnings import warn\n\n\nclass Node:\n \"\"\"``Node`` is an auxiliary class facilitating the operations required to\n run user-provided functions as part of Kedro pipelines.\n \"\"\"\n\n def __init__(\n self,\n func: Callable,\n inputs: Union[None, str, List[str], Dict[str, str]],\n outputs: Union[None, str, List[str], Dict[str, str]],\n *,\n name: str = None,\n tags: Union[str, Iterable[str]] = None,\n confirms: Union[str, List[str]] = None,\n namespace: str = None,\n ):\n \"\"\"Create a node in the pipeline by providing a function to be called\n along with variable names for inputs and/or outputs.\n\n Args:\n func: A function that corresponds to the node logic.\n The function should have at least one input or output.\n inputs: The name or the list of the names of variables used as\n inputs to the function. The number of names should match\n the number of arguments in the definition of the provided\n function. When Dict[str, str] is provided, variable names\n will be mapped to function argument names.\n outputs: The name or the list of the names of variables used\n as outputs to the function. The number of names should match\n the number of outputs returned by the provided function.\n When Dict[str, str] is provided, variable names will be mapped\n to the named outputs the function returns.\n name: Optional node name to be used when displaying the node in\n logs or any other visualisations.\n tags: Optional set of tags to be applied to the node.\n confirms: Optional name or the list of the names of the datasets\n that should be confirmed. 
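For context, a minimal sketch of how the patched separator surfaces in an auto-generated node name, reusing the `two_inputs`/`A0`/`B0`/`C1`/`C2` names from the issue's example; the printed string is inferred from the patched `__str__`, so treat it as illustrative rather than authoritative:

```python
from kedro.pipeline import node

def two_inputs(a, b):
    return a, b

n = node(two_inputs, ["A0", "B0"], ["C1", "C2"])
# With the patched _set_to_str, the default name rendered by __str__ becomes
# "two_inputs([A0;B0]) -> [C1;C2]" rather than "two_inputs([A0,B0]) -> [C1,C2]",
# so auto-generated node names no longer contain commas.
print(n.name)
```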
This will result in calling\n ``confirm()`` method of the corresponding data set instance.\n Specified dataset names do not necessarily need to be present\n in the node ``inputs`` or ``outputs``.\n namespace: Optional node namespace.\n\n Raises:\n ValueError: Raised in the following cases:\n a) When the provided arguments do not conform to\n the format suggested by the type hint of the argument.\n b) When the node produces multiple outputs with the same name.\n c) When an input has the same name as an output.\n d) When the given node name violates the requirements:\n it must contain only letters, digits, hyphens, underscores\n and/or fullstops.\n\n \"\"\"\n\n if not callable(func):\n raise ValueError(\n _node_error_message(\n f\"first argument must be a function, not '{type(func).__name__}'.\"\n )\n )\n\n if inputs and not isinstance(inputs, (list, dict, str)):\n raise ValueError(\n _node_error_message(\n f\"'inputs' type must be one of [String, List, Dict, None], \"\n f\"not '{type(inputs).__name__}'.\"\n )\n )\n\n if outputs and not isinstance(outputs, (list, dict, str)):\n raise ValueError(\n _node_error_message(\n f\"'outputs' type must be one of [String, List, Dict, None], \"\n f\"not '{type(outputs).__name__}'.\"\n )\n )\n\n if not inputs and not outputs:\n raise ValueError(\n _node_error_message(\"it must have some 'inputs' or 'outputs'.\")\n )\n\n self._validate_inputs(func, inputs)\n\n self._func = func\n self._inputs = inputs\n self._outputs = outputs\n if name and not re.match(r\"[\\w\\.-]+$\", name):\n raise ValueError(\n f\"'{name}' is not a valid node name. It must contain only \"\n f\"letters, digits, hyphens, underscores and/or fullstops.\"\n )\n self._name = name\n self._namespace = namespace\n self._tags = set(_to_list(tags))\n\n self._validate_unique_outputs()\n self._validate_inputs_dif_than_outputs()\n self._confirms = confirms\n\n def _copy(self, **overwrite_params):\n \"\"\"\n Helper function to copy the node, replacing some values.\n \"\"\"\n params = {\n \"func\": self._func,\n \"inputs\": self._inputs,\n \"outputs\": self._outputs,\n \"name\": self._name,\n \"namespace\": self._namespace,\n \"tags\": self._tags,\n \"confirms\": self._confirms,\n }\n params.update(overwrite_params)\n return Node(**params)\n\n @property\n def _logger(self):\n return logging.getLogger(__name__)\n\n @property\n def _unique_key(self):\n def hashable(value):\n if isinstance(value, dict):\n # we sort it because a node with inputs/outputs\n # {\"arg1\": \"a\", \"arg2\": \"b\"} is equivalent to\n # a node with inputs/outputs {\"arg2\": \"b\", \"arg1\": \"a\"}\n return tuple(sorted(value.items()))\n if isinstance(value, list):\n return tuple(value)\n return value\n\n return (self.name, hashable(self._inputs), hashable(self._outputs))\n\n def __eq__(self, other):\n if not isinstance(other, Node):\n return NotImplemented\n return self._unique_key == other._unique_key\n\n def __lt__(self, other):\n if not isinstance(other, Node):\n return NotImplemented\n return self._unique_key < other._unique_key\n\n def __hash__(self):\n return hash(self._unique_key)\n\n def __str__(self):\n def _set_to_str(xset):\n return f\"[{','.join(xset)}]\"\n\n out_str = _set_to_str(self.outputs) if self._outputs else \"None\"\n in_str = _set_to_str(self.inputs) if self._inputs else \"None\"\n\n prefix = self._name + \": \" if self._name else \"\"\n return prefix + f\"{self._func_name}({in_str}) -> {out_str}\"\n\n def __repr__(self): # pragma: no cover\n return (\n f\"Node({self._func_name}, {repr(self._inputs)}, 
{repr(self._outputs)}, \"\n f\"{repr(self._name)})\"\n )\n\n def __call__(self, **kwargs) -> Dict[str, Any]:\n return self.run(inputs=kwargs)\n\n @property\n def _func_name(self) -> str:\n name = _get_readable_func_name(self._func)\n if name == \"<partial>\":\n warn(\n f\"The node producing outputs '{self.outputs}' is made from a 'partial' function. \"\n f\"Partial functions do not have a '__name__' attribute: consider using \"\n f\"'functools.update_wrapper' for better log messages.\"\n )\n return name\n\n @property\n def func(self) -> Callable:\n \"\"\"Exposes the underlying function of the node.\n\n Returns:\n Return the underlying function of the node.\n \"\"\"\n return self._func\n\n @func.setter\n def func(self, func: Callable):\n \"\"\"Sets the underlying function of the node.\n Useful if user wants to decorate the function in a node's Hook implementation.\n\n Args:\n func: The new function for node's execution.\n \"\"\"\n self._func = func\n\n @property\n def tags(self) -> Set[str]:\n \"\"\"Return the tags assigned to the node.\n\n Returns:\n Return the set of all assigned tags to the node.\n\n \"\"\"\n return set(self._tags)\n\n def tag(self, tags: Union[str, Iterable[str]]) -> \"Node\":\n \"\"\"Create a new ``Node`` which is an exact copy of the current one,\n but with more tags added to it.\n\n Args:\n tags: The tags to be added to the new node.\n\n Returns:\n A copy of the current ``Node`` object with the tags added.\n\n \"\"\"\n return self._copy(tags=self.tags | set(_to_list(tags)))\n\n @property\n def name(self) -> str:\n \"\"\"Node's name.\n\n Returns:\n Node's name if provided or the name of its function.\n \"\"\"\n node_name = self._name or str(self)\n if self.namespace:\n return f\"{self.namespace}.{node_name}\"\n return node_name\n\n @property\n def short_name(self) -> str:\n \"\"\"Node's name.\n\n Returns:\n Returns a short, user-friendly name that is not guaranteed to be unique.\n The namespace is stripped out of the node name.\n \"\"\"\n if self._name:\n return self._name\n\n return self._func_name.replace(\"_\", \" \").title()\n\n @property\n def namespace(self) -> Optional[str]:\n \"\"\"Node's namespace.\n\n Returns:\n String representing node's namespace, typically from outer to inner scopes.\n \"\"\"\n return self._namespace\n\n @property\n def inputs(self) -> List[str]:\n \"\"\"Return node inputs as a list, in the order required to bind them properly to\n the node's function.\n\n Returns:\n Node input names as a list.\n\n \"\"\"\n if isinstance(self._inputs, dict):\n return _dict_inputs_to_list(self._func, self._inputs)\n return _to_list(self._inputs)\n\n @property\n def outputs(self) -> List[str]:\n \"\"\"Return node outputs as a list preserving the original order\n if possible.\n\n Returns:\n Node output names as a list.\n\n \"\"\"\n return _to_list(self._outputs)\n\n @property\n def confirms(self) -> List[str]:\n \"\"\"Return dataset names to confirm as a list.\n\n Returns:\n Dataset names to confirm as a list.\n \"\"\"\n return _to_list(self._confirms)\n\n def run(self, inputs: Dict[str, Any] = None) -> Dict[str, Any]:\n \"\"\"Run this node using the provided inputs and return its results\n in a dictionary.\n\n Args:\n inputs: Dictionary of inputs as specified at the creation of\n the node.\n\n Raises:\n ValueError: In the following cases:\n a) The node function inputs are incompatible with the node\n input definition.\n Example 1: node definition input is a list of 2\n DataFrames, whereas only 1 was provided or 2 different ones\n were provided.\n b) The 
node function outputs are incompatible with the node\n output definition.\n Example 1: node function definition is a dictionary,\n whereas function returns a list.\n Example 2: node definition output is a list of 5\n strings, whereas the function returns a list of 4 objects.\n Exception: Any exception thrown during execution of the node.\n\n Returns:\n All produced node outputs are returned in a dictionary, where the\n keys are defined by the node outputs.\n\n \"\"\"\n self._logger.info(\"Running node: %s\", str(self))\n\n outputs = None\n\n if not (inputs is None or isinstance(inputs, dict)):\n raise ValueError(\n f\"Node.run() expects a dictionary or None, \"\n f\"but got {type(inputs)} instead\"\n )\n\n try:\n inputs = {} if inputs is None else inputs\n if not self._inputs:\n outputs = self._run_with_no_inputs(inputs)\n elif isinstance(self._inputs, str):\n outputs = self._run_with_one_input(inputs, self._inputs)\n elif isinstance(self._inputs, list):\n outputs = self._run_with_list(inputs, self._inputs)\n elif isinstance(self._inputs, dict):\n outputs = self._run_with_dict(inputs, self._inputs)\n\n return self._outputs_to_dictionary(outputs)\n\n # purposely catch all exceptions\n except Exception as exc:\n self._logger.error(\n \"Node %s failed with error: \\n%s\",\n str(self),\n str(exc),\n extra={\"markup\": True},\n )\n raise exc\n\n def _run_with_no_inputs(self, inputs: Dict[str, Any]):\n if inputs:\n raise ValueError(\n f\"Node {str(self)} expected no inputs, \"\n f\"but got the following {len(inputs)} input(s) instead: \"\n f\"{sorted(inputs.keys())}.\"\n )\n\n return self._func()\n\n def _run_with_one_input(self, inputs: Dict[str, Any], node_input: str):\n if len(inputs) != 1 or node_input not in inputs:\n raise ValueError(\n f\"Node {str(self)} expected one input named '{node_input}', \"\n f\"but got the following {len(inputs)} input(s) instead: \"\n f\"{sorted(inputs.keys())}.\"\n )\n\n return self._func(inputs[node_input])\n\n def _run_with_list(self, inputs: Dict[str, Any], node_inputs: List[str]):\n # Node inputs and provided run inputs should completely overlap\n if set(node_inputs) != set(inputs.keys()):\n raise ValueError(\n f\"Node {str(self)} expected {len(node_inputs)} input(s) {node_inputs}, \"\n f\"but got the following {len(inputs)} input(s) instead: \"\n f\"{sorted(inputs.keys())}.\"\n )\n # Ensure the function gets the inputs in the correct order\n return self._func(*(inputs[item] for item in node_inputs))\n\n def _run_with_dict(self, inputs: Dict[str, Any], node_inputs: Dict[str, str]):\n # Node inputs and provided run inputs should completely overlap\n if set(node_inputs.values()) != set(inputs.keys()):\n raise ValueError(\n f\"Node {str(self)} expected {len(set(node_inputs.values()))} input(s) \"\n f\"{sorted(set(node_inputs.values()))}, \"\n f\"but got the following {len(inputs)} input(s) instead: \"\n f\"{sorted(inputs.keys())}.\"\n )\n kwargs = {arg: inputs[alias] for arg, alias in node_inputs.items()}\n return self._func(**kwargs)\n\n def _outputs_to_dictionary(self, outputs):\n def _from_dict():\n if set(self._outputs.keys()) != set(outputs.keys()):\n raise ValueError(\n f\"Failed to save outputs of node {str(self)}.\\n\"\n f\"The node's output keys {set(outputs.keys())} do not match with \"\n f\"the returned output's keys {set(self._outputs.keys())}.\"\n )\n return {name: outputs[key] for key, name in self._outputs.items()}\n\n def _from_list():\n if not isinstance(outputs, (list, tuple)):\n raise ValueError(\n f\"Failed to save outputs of node 
{str(self)}.\\n\"\n f\"The node definition contains a list of \"\n f\"outputs {self._outputs}, whereas the node function \"\n f\"returned a '{type(outputs).__name__}'.\"\n )\n if len(outputs) != len(self._outputs):\n raise ValueError(\n f\"Failed to save outputs of node {str(self)}.\\n\"\n f\"The node function returned {len(outputs)} output(s), \"\n f\"whereas the node definition contains {len(self._outputs)} \"\n f\"output(s).\"\n )\n\n return dict(zip(self._outputs, outputs))\n\n if isinstance(self._outputs, dict) and not isinstance(outputs, dict):\n raise ValueError(\n f\"Failed to save outputs of node {self}.\\n\"\n f\"The node output is a dictionary, whereas the \"\n f\"function output is not.\"\n )\n\n if self._outputs is None:\n return {}\n if isinstance(self._outputs, str):\n return {self._outputs: outputs}\n if isinstance(self._outputs, dict):\n return _from_dict()\n return _from_list()\n\n def _validate_inputs(self, func, inputs):\n # inspect does not support built-in Python functions written in C.\n # Thus we only validate func if it is not built-in.\n if not inspect.isbuiltin(func):\n args, kwargs = self._process_inputs_for_bind(inputs)\n try:\n inspect.signature(func, follow_wrapped=False).bind(*args, **kwargs)\n except Exception as exc:\n func_args = inspect.signature(\n func, follow_wrapped=False\n ).parameters.keys()\n func_name = _get_readable_func_name(func)\n\n raise TypeError(\n f\"Inputs of '{func_name}' function expected {list(func_args)}, \"\n f\"but got {inputs}\"\n ) from exc\n\n def _validate_unique_outputs(self):\n diff = Counter(self.outputs) - Counter(set(self.outputs))\n if diff:\n raise ValueError(\n f\"Failed to create node {self} due to duplicate\"\n f\" output(s) {set(diff.keys())}.\\nNode outputs must be unique.\"\n )\n\n def _validate_inputs_dif_than_outputs(self):\n common_in_out = set(self.inputs).intersection(set(self.outputs))\n if common_in_out:\n raise ValueError(\n f\"Failed to create node {self}.\\n\"\n f\"A node cannot have the same inputs and outputs: \"\n f\"{common_in_out}\"\n )\n\n @staticmethod\n def _process_inputs_for_bind(inputs: Union[None, str, List[str], Dict[str, str]]):\n # Safeguard that we do not mutate list inputs\n inputs = copy.copy(inputs)\n args = [] # type: List[str]\n kwargs = {} # type: Dict[str, str]\n if isinstance(inputs, str):\n args = [inputs]\n elif isinstance(inputs, list):\n args = inputs\n elif isinstance(inputs, dict):\n kwargs = inputs\n return args, kwargs\n\n\ndef _node_error_message(msg) -> str:\n return (\n f\"Invalid Node definition: {msg}\\n\"\n f\"Format should be: node(function, inputs, outputs)\"\n )\n\n\ndef node(\n func: Callable,\n inputs: Union[None, str, List[str], Dict[str, str]],\n outputs: Union[None, str, List[str], Dict[str, str]],\n *,\n name: str = None,\n tags: Union[str, Iterable[str]] = None,\n confirms: Union[str, List[str]] = None,\n namespace: str = None,\n) -> Node:\n \"\"\"Create a node in the pipeline by providing a function to be called\n along with variable names for inputs and/or outputs.\n\n Args:\n func: A function that corresponds to the node logic. The function\n should have at least one input or output.\n inputs: The name or the list of the names of variables used as inputs\n to the function. The number of names should match the number of\n arguments in the definition of the provided function. 
When\n Dict[str, str] is provided, variable names will be mapped to\n function argument names.\n outputs: The name or the list of the names of variables used as outputs\n to the function. The number of names should match the number of\n outputs returned by the provided function. When Dict[str, str]\n is provided, variable names will be mapped to the named outputs the\n function returns.\n name: Optional node name to be used when displaying the node in logs or\n any other visualisations.\n tags: Optional set of tags to be applied to the node.\n confirms: Optional name or the list of the names of the datasets\n that should be confirmed. This will result in calling ``confirm()``\n method of the corresponding data set instance. Specified dataset\n names do not necessarily need to be present in the node ``inputs``\n or ``outputs``.\n namespace: Optional node namespace.\n\n Returns:\n A Node object with mapped inputs, outputs and function.\n\n Example:\n ::\n\n >>> import pandas as pd\n >>> import numpy as np\n >>>\n >>> def clean_data(cars: pd.DataFrame,\n >>> boats: pd.DataFrame) -> Dict[str, pd.DataFrame]:\n >>> return dict(cars_df=cars.dropna(), boats_df=boats.dropna())\n >>>\n >>> def halve_dataframe(data: pd.DataFrame) -> List[pd.DataFrame]:\n >>> return np.array_split(data, 2)\n >>>\n >>> nodes = [\n >>> node(clean_data,\n >>> inputs=['cars2017', 'boats2017'],\n >>> outputs=dict(cars_df='clean_cars2017',\n >>> boats_df='clean_boats2017')),\n >>> node(halve_dataframe,\n >>> 'clean_cars2017',\n >>> ['train_cars2017', 'test_cars2017']),\n >>> node(halve_dataframe,\n >>> dict(data='clean_boats2017'),\n >>> ['train_boats2017', 'test_boats2017'])\n >>> ]\n \"\"\"\n return Node(\n func,\n inputs,\n outputs,\n name=name,\n tags=tags,\n confirms=confirms,\n namespace=namespace,\n )\n\n\ndef _dict_inputs_to_list(func: Callable[[Any], Any], inputs: Dict[str, str]):\n \"\"\"Convert a dict representation of the node inputs to a list, ensuring\n the appropriate order for binding them to the node's function.\n \"\"\"\n sig = inspect.signature(func, follow_wrapped=False).bind(**inputs)\n return [*sig.args, *sig.kwargs.values()]\n\n\ndef _to_list(element: Union[None, str, Iterable[str], Dict[str, str]]) -> List[str]:\n \"\"\"Make a list out of node inputs/outputs.\n\n Returns:\n List[str]: Node input/output names as a list to standardise.\n \"\"\"\n\n if element is None:\n return []\n if isinstance(element, str):\n return [element]\n if isinstance(element, dict):\n return list(element.values())\n return list(element)\n\n\ndef _get_readable_func_name(func: Callable) -> str:\n \"\"\"Get a user-friendly readable name of the function provided.\n\n Returns:\n str: readable name of the provided callable func.\n \"\"\"\n\n if hasattr(func, \"__name__\"):\n return func.__name__\n\n name = repr(func)\n if \"functools.partial\" in name:\n name = \"<partial>\"\n\n return name\n", "path": "kedro/pipeline/node.py"}], "after_files": [{"content": "\"\"\"This module provides user-friendly functions for creating nodes as parts\nof Kedro pipelines.\n\"\"\"\nimport copy\nimport inspect\nimport logging\nimport re\nfrom collections import Counter\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Set, Union\nfrom warnings import warn\n\n\nclass Node:\n \"\"\"``Node`` is an auxiliary class facilitating the operations required to\n run user-provided functions as part of Kedro pipelines.\n \"\"\"\n\n def __init__(\n self,\n func: Callable,\n inputs: Union[None, str, List[str], Dict[str, str]],\n outputs: 
Union[None, str, List[str], Dict[str, str]],\n *,\n name: str = None,\n tags: Union[str, Iterable[str]] = None,\n confirms: Union[str, List[str]] = None,\n namespace: str = None,\n ):\n \"\"\"Create a node in the pipeline by providing a function to be called\n along with variable names for inputs and/or outputs.\n\n Args:\n func: A function that corresponds to the node logic.\n The function should have at least one input or output.\n inputs: The name or the list of the names of variables used as\n inputs to the function. The number of names should match\n the number of arguments in the definition of the provided\n function. When Dict[str, str] is provided, variable names\n will be mapped to function argument names.\n outputs: The name or the list of the names of variables used\n as outputs to the function. The number of names should match\n the number of outputs returned by the provided function.\n When Dict[str, str] is provided, variable names will be mapped\n to the named outputs the function returns.\n name: Optional node name to be used when displaying the node in\n logs or any other visualisations.\n tags: Optional set of tags to be applied to the node.\n confirms: Optional name or the list of the names of the datasets\n that should be confirmed. This will result in calling\n ``confirm()`` method of the corresponding data set instance.\n Specified dataset names do not necessarily need to be present\n in the node ``inputs`` or ``outputs``.\n namespace: Optional node namespace.\n\n Raises:\n ValueError: Raised in the following cases:\n a) When the provided arguments do not conform to\n the format suggested by the type hint of the argument.\n b) When the node produces multiple outputs with the same name.\n c) When an input has the same name as an output.\n d) When the given node name violates the requirements:\n it must contain only letters, digits, hyphens, underscores\n and/or fullstops.\n\n \"\"\"\n\n if not callable(func):\n raise ValueError(\n _node_error_message(\n f\"first argument must be a function, not '{type(func).__name__}'.\"\n )\n )\n\n if inputs and not isinstance(inputs, (list, dict, str)):\n raise ValueError(\n _node_error_message(\n f\"'inputs' type must be one of [String, List, Dict, None], \"\n f\"not '{type(inputs).__name__}'.\"\n )\n )\n\n if outputs and not isinstance(outputs, (list, dict, str)):\n raise ValueError(\n _node_error_message(\n f\"'outputs' type must be one of [String, List, Dict, None], \"\n f\"not '{type(outputs).__name__}'.\"\n )\n )\n\n if not inputs and not outputs:\n raise ValueError(\n _node_error_message(\"it must have some 'inputs' or 'outputs'.\")\n )\n\n self._validate_inputs(func, inputs)\n\n self._func = func\n self._inputs = inputs\n self._outputs = outputs\n if name and not re.match(r\"[\\w\\.-]+$\", name):\n raise ValueError(\n f\"'{name}' is not a valid node name. 
It must contain only \"\n f\"letters, digits, hyphens, underscores and/or fullstops.\"\n )\n self._name = name\n self._namespace = namespace\n self._tags = set(_to_list(tags))\n\n self._validate_unique_outputs()\n self._validate_inputs_dif_than_outputs()\n self._confirms = confirms\n\n def _copy(self, **overwrite_params):\n \"\"\"\n Helper function to copy the node, replacing some values.\n \"\"\"\n params = {\n \"func\": self._func,\n \"inputs\": self._inputs,\n \"outputs\": self._outputs,\n \"name\": self._name,\n \"namespace\": self._namespace,\n \"tags\": self._tags,\n \"confirms\": self._confirms,\n }\n params.update(overwrite_params)\n return Node(**params)\n\n @property\n def _logger(self):\n return logging.getLogger(__name__)\n\n @property\n def _unique_key(self):\n def hashable(value):\n if isinstance(value, dict):\n # we sort it because a node with inputs/outputs\n # {\"arg1\": \"a\", \"arg2\": \"b\"} is equivalent to\n # a node with inputs/outputs {\"arg2\": \"b\", \"arg1\": \"a\"}\n return tuple(sorted(value.items()))\n if isinstance(value, list):\n return tuple(value)\n return value\n\n return (self.name, hashable(self._inputs), hashable(self._outputs))\n\n def __eq__(self, other):\n if not isinstance(other, Node):\n return NotImplemented\n return self._unique_key == other._unique_key\n\n def __lt__(self, other):\n if not isinstance(other, Node):\n return NotImplemented\n return self._unique_key < other._unique_key\n\n def __hash__(self):\n return hash(self._unique_key)\n\n def __str__(self):\n def _set_to_str(xset):\n return f\"[{';'.join(xset)}]\"\n\n out_str = _set_to_str(self.outputs) if self._outputs else \"None\"\n in_str = _set_to_str(self.inputs) if self._inputs else \"None\"\n\n prefix = self._name + \": \" if self._name else \"\"\n return prefix + f\"{self._func_name}({in_str}) -> {out_str}\"\n\n def __repr__(self): # pragma: no cover\n return (\n f\"Node({self._func_name}, {repr(self._inputs)}, {repr(self._outputs)}, \"\n f\"{repr(self._name)})\"\n )\n\n def __call__(self, **kwargs) -> Dict[str, Any]:\n return self.run(inputs=kwargs)\n\n @property\n def _func_name(self) -> str:\n name = _get_readable_func_name(self._func)\n if name == \"<partial>\":\n warn(\n f\"The node producing outputs '{self.outputs}' is made from a 'partial' function. 
\"\n f\"Partial functions do not have a '__name__' attribute: consider using \"\n f\"'functools.update_wrapper' for better log messages.\"\n )\n return name\n\n @property\n def func(self) -> Callable:\n \"\"\"Exposes the underlying function of the node.\n\n Returns:\n Return the underlying function of the node.\n \"\"\"\n return self._func\n\n @func.setter\n def func(self, func: Callable):\n \"\"\"Sets the underlying function of the node.\n Useful if user wants to decorate the function in a node's Hook implementation.\n\n Args:\n func: The new function for node's execution.\n \"\"\"\n self._func = func\n\n @property\n def tags(self) -> Set[str]:\n \"\"\"Return the tags assigned to the node.\n\n Returns:\n Return the set of all assigned tags to the node.\n\n \"\"\"\n return set(self._tags)\n\n def tag(self, tags: Union[str, Iterable[str]]) -> \"Node\":\n \"\"\"Create a new ``Node`` which is an exact copy of the current one,\n but with more tags added to it.\n\n Args:\n tags: The tags to be added to the new node.\n\n Returns:\n A copy of the current ``Node`` object with the tags added.\n\n \"\"\"\n return self._copy(tags=self.tags | set(_to_list(tags)))\n\n @property\n def name(self) -> str:\n \"\"\"Node's name.\n\n Returns:\n Node's name if provided or the name of its function.\n \"\"\"\n node_name = self._name or str(self)\n if self.namespace:\n return f\"{self.namespace}.{node_name}\"\n return node_name\n\n @property\n def short_name(self) -> str:\n \"\"\"Node's name.\n\n Returns:\n Returns a short, user-friendly name that is not guaranteed to be unique.\n The namespace is stripped out of the node name.\n \"\"\"\n if self._name:\n return self._name\n\n return self._func_name.replace(\"_\", \" \").title()\n\n @property\n def namespace(self) -> Optional[str]:\n \"\"\"Node's namespace.\n\n Returns:\n String representing node's namespace, typically from outer to inner scopes.\n \"\"\"\n return self._namespace\n\n @property\n def inputs(self) -> List[str]:\n \"\"\"Return node inputs as a list, in the order required to bind them properly to\n the node's function.\n\n Returns:\n Node input names as a list.\n\n \"\"\"\n if isinstance(self._inputs, dict):\n return _dict_inputs_to_list(self._func, self._inputs)\n return _to_list(self._inputs)\n\n @property\n def outputs(self) -> List[str]:\n \"\"\"Return node outputs as a list preserving the original order\n if possible.\n\n Returns:\n Node output names as a list.\n\n \"\"\"\n return _to_list(self._outputs)\n\n @property\n def confirms(self) -> List[str]:\n \"\"\"Return dataset names to confirm as a list.\n\n Returns:\n Dataset names to confirm as a list.\n \"\"\"\n return _to_list(self._confirms)\n\n def run(self, inputs: Dict[str, Any] = None) -> Dict[str, Any]:\n \"\"\"Run this node using the provided inputs and return its results\n in a dictionary.\n\n Args:\n inputs: Dictionary of inputs as specified at the creation of\n the node.\n\n Raises:\n ValueError: In the following cases:\n a) The node function inputs are incompatible with the node\n input definition.\n Example 1: node definition input is a list of 2\n DataFrames, whereas only 1 was provided or 2 different ones\n were provided.\n b) The node function outputs are incompatible with the node\n output definition.\n Example 1: node function definition is a dictionary,\n whereas function returns a list.\n Example 2: node definition output is a list of 5\n strings, whereas the function returns a list of 4 objects.\n Exception: Any exception thrown during execution of the node.\n\n Returns:\n 
All produced node outputs are returned in a dictionary, where the\n keys are defined by the node outputs.\n\n \"\"\"\n self._logger.info(\"Running node: %s\", str(self))\n\n outputs = None\n\n if not (inputs is None or isinstance(inputs, dict)):\n raise ValueError(\n f\"Node.run() expects a dictionary or None, \"\n f\"but got {type(inputs)} instead\"\n )\n\n try:\n inputs = {} if inputs is None else inputs\n if not self._inputs:\n outputs = self._run_with_no_inputs(inputs)\n elif isinstance(self._inputs, str):\n outputs = self._run_with_one_input(inputs, self._inputs)\n elif isinstance(self._inputs, list):\n outputs = self._run_with_list(inputs, self._inputs)\n elif isinstance(self._inputs, dict):\n outputs = self._run_with_dict(inputs, self._inputs)\n\n return self._outputs_to_dictionary(outputs)\n\n # purposely catch all exceptions\n except Exception as exc:\n self._logger.error(\n \"Node %s failed with error: \\n%s\",\n str(self),\n str(exc),\n extra={\"markup\": True},\n )\n raise exc\n\n def _run_with_no_inputs(self, inputs: Dict[str, Any]):\n if inputs:\n raise ValueError(\n f\"Node {str(self)} expected no inputs, \"\n f\"but got the following {len(inputs)} input(s) instead: \"\n f\"{sorted(inputs.keys())}.\"\n )\n\n return self._func()\n\n def _run_with_one_input(self, inputs: Dict[str, Any], node_input: str):\n if len(inputs) != 1 or node_input not in inputs:\n raise ValueError(\n f\"Node {str(self)} expected one input named '{node_input}', \"\n f\"but got the following {len(inputs)} input(s) instead: \"\n f\"{sorted(inputs.keys())}.\"\n )\n\n return self._func(inputs[node_input])\n\n def _run_with_list(self, inputs: Dict[str, Any], node_inputs: List[str]):\n # Node inputs and provided run inputs should completely overlap\n if set(node_inputs) != set(inputs.keys()):\n raise ValueError(\n f\"Node {str(self)} expected {len(node_inputs)} input(s) {node_inputs}, \"\n f\"but got the following {len(inputs)} input(s) instead: \"\n f\"{sorted(inputs.keys())}.\"\n )\n # Ensure the function gets the inputs in the correct order\n return self._func(*(inputs[item] for item in node_inputs))\n\n def _run_with_dict(self, inputs: Dict[str, Any], node_inputs: Dict[str, str]):\n # Node inputs and provided run inputs should completely overlap\n if set(node_inputs.values()) != set(inputs.keys()):\n raise ValueError(\n f\"Node {str(self)} expected {len(set(node_inputs.values()))} input(s) \"\n f\"{sorted(set(node_inputs.values()))}, \"\n f\"but got the following {len(inputs)} input(s) instead: \"\n f\"{sorted(inputs.keys())}.\"\n )\n kwargs = {arg: inputs[alias] for arg, alias in node_inputs.items()}\n return self._func(**kwargs)\n\n def _outputs_to_dictionary(self, outputs):\n def _from_dict():\n if set(self._outputs.keys()) != set(outputs.keys()):\n raise ValueError(\n f\"Failed to save outputs of node {str(self)}.\\n\"\n f\"The node's output keys {set(outputs.keys())} do not match with \"\n f\"the returned output's keys {set(self._outputs.keys())}.\"\n )\n return {name: outputs[key] for key, name in self._outputs.items()}\n\n def _from_list():\n if not isinstance(outputs, (list, tuple)):\n raise ValueError(\n f\"Failed to save outputs of node {str(self)}.\\n\"\n f\"The node definition contains a list of \"\n f\"outputs {self._outputs}, whereas the node function \"\n f\"returned a '{type(outputs).__name__}'.\"\n )\n if len(outputs) != len(self._outputs):\n raise ValueError(\n f\"Failed to save outputs of node {str(self)}.\\n\"\n f\"The node function returned {len(outputs)} output(s), \"\n f\"whereas 
the node definition contains {len(self._outputs)} \"\n f\"output(s).\"\n )\n\n return dict(zip(self._outputs, outputs))\n\n if isinstance(self._outputs, dict) and not isinstance(outputs, dict):\n raise ValueError(\n f\"Failed to save outputs of node {self}.\\n\"\n f\"The node output is a dictionary, whereas the \"\n f\"function output is not.\"\n )\n\n if self._outputs is None:\n return {}\n if isinstance(self._outputs, str):\n return {self._outputs: outputs}\n if isinstance(self._outputs, dict):\n return _from_dict()\n return _from_list()\n\n def _validate_inputs(self, func, inputs):\n # inspect does not support built-in Python functions written in C.\n # Thus we only validate func if it is not built-in.\n if not inspect.isbuiltin(func):\n args, kwargs = self._process_inputs_for_bind(inputs)\n try:\n inspect.signature(func, follow_wrapped=False).bind(*args, **kwargs)\n except Exception as exc:\n func_args = inspect.signature(\n func, follow_wrapped=False\n ).parameters.keys()\n func_name = _get_readable_func_name(func)\n\n raise TypeError(\n f\"Inputs of '{func_name}' function expected {list(func_args)}, \"\n f\"but got {inputs}\"\n ) from exc\n\n def _validate_unique_outputs(self):\n diff = Counter(self.outputs) - Counter(set(self.outputs))\n if diff:\n raise ValueError(\n f\"Failed to create node {self} due to duplicate\"\n f\" output(s) {set(diff.keys())}.\\nNode outputs must be unique.\"\n )\n\n def _validate_inputs_dif_than_outputs(self):\n common_in_out = set(self.inputs).intersection(set(self.outputs))\n if common_in_out:\n raise ValueError(\n f\"Failed to create node {self}.\\n\"\n f\"A node cannot have the same inputs and outputs: \"\n f\"{common_in_out}\"\n )\n\n @staticmethod\n def _process_inputs_for_bind(inputs: Union[None, str, List[str], Dict[str, str]]):\n # Safeguard that we do not mutate list inputs\n inputs = copy.copy(inputs)\n args = [] # type: List[str]\n kwargs = {} # type: Dict[str, str]\n if isinstance(inputs, str):\n args = [inputs]\n elif isinstance(inputs, list):\n args = inputs\n elif isinstance(inputs, dict):\n kwargs = inputs\n return args, kwargs\n\n\ndef _node_error_message(msg) -> str:\n return (\n f\"Invalid Node definition: {msg}\\n\"\n f\"Format should be: node(function, inputs, outputs)\"\n )\n\n\ndef node(\n func: Callable,\n inputs: Union[None, str, List[str], Dict[str, str]],\n outputs: Union[None, str, List[str], Dict[str, str]],\n *,\n name: str = None,\n tags: Union[str, Iterable[str]] = None,\n confirms: Union[str, List[str]] = None,\n namespace: str = None,\n) -> Node:\n \"\"\"Create a node in the pipeline by providing a function to be called\n along with variable names for inputs and/or outputs.\n\n Args:\n func: A function that corresponds to the node logic. The function\n should have at least one input or output.\n inputs: The name or the list of the names of variables used as inputs\n to the function. The number of names should match the number of\n arguments in the definition of the provided function. When\n Dict[str, str] is provided, variable names will be mapped to\n function argument names.\n outputs: The name or the list of the names of variables used as outputs\n to the function. The number of names should match the number of\n outputs returned by the provided function. 
When Dict[str, str]\n is provided, variable names will be mapped to the named outputs the\n function returns.\n name: Optional node name to be used when displaying the node in logs or\n any other visualisations.\n tags: Optional set of tags to be applied to the node.\n confirms: Optional name or the list of the names of the datasets\n that should be confirmed. This will result in calling ``confirm()``\n method of the corresponding data set instance. Specified dataset\n names do not necessarily need to be present in the node ``inputs``\n or ``outputs``.\n namespace: Optional node namespace.\n\n Returns:\n A Node object with mapped inputs, outputs and function.\n\n Example:\n ::\n\n >>> import pandas as pd\n >>> import numpy as np\n >>>\n >>> def clean_data(cars: pd.DataFrame,\n >>> boats: pd.DataFrame) -> Dict[str, pd.DataFrame]:\n >>> return dict(cars_df=cars.dropna(), boats_df=boats.dropna())\n >>>\n >>> def halve_dataframe(data: pd.DataFrame) -> List[pd.DataFrame]:\n >>> return np.array_split(data, 2)\n >>>\n >>> nodes = [\n >>> node(clean_data,\n >>> inputs=['cars2017', 'boats2017'],\n >>> outputs=dict(cars_df='clean_cars2017',\n >>> boats_df='clean_boats2017')),\n >>> node(halve_dataframe,\n >>> 'clean_cars2017',\n >>> ['train_cars2017', 'test_cars2017']),\n >>> node(halve_dataframe,\n >>> dict(data='clean_boats2017'),\n >>> ['train_boats2017', 'test_boats2017'])\n >>> ]\n \"\"\"\n return Node(\n func,\n inputs,\n outputs,\n name=name,\n tags=tags,\n confirms=confirms,\n namespace=namespace,\n )\n\n\ndef _dict_inputs_to_list(func: Callable[[Any], Any], inputs: Dict[str, str]):\n \"\"\"Convert a dict representation of the node inputs to a list, ensuring\n the appropriate order for binding them to the node's function.\n \"\"\"\n sig = inspect.signature(func, follow_wrapped=False).bind(**inputs)\n return [*sig.args, *sig.kwargs.values()]\n\n\ndef _to_list(element: Union[None, str, Iterable[str], Dict[str, str]]) -> List[str]:\n \"\"\"Make a list out of node inputs/outputs.\n\n Returns:\n List[str]: Node input/output names as a list to standardise.\n \"\"\"\n\n if element is None:\n return []\n if isinstance(element, str):\n return [element]\n if isinstance(element, dict):\n return list(element.values())\n return list(element)\n\n\ndef _get_readable_func_name(func: Callable) -> str:\n \"\"\"Get a user-friendly readable name of the function provided.\n\n Returns:\n str: readable name of the provided callable func.\n \"\"\"\n\n if hasattr(func, \"__name__\"):\n return func.__name__\n\n name = repr(func)\n if \"functools.partial\" in name:\n name = \"<partial>\"\n\n return name\n", "path": "kedro/pipeline/node.py"}]} |
gh_patches_debug_1022 | rasdani/github-patches | git_diff | encode__django-rest-framework-4973 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
A "pure" HEAD request on a ViewSet does'nt behave like a GET
A HEAD request on a ViewSet let the action attribute empty.
```curl -I http://myhost/api/foo/```
It should fall back to simulating a GET request for everything but the rendering,
meaning self.action should be either 'list' or 'retrieve'.
Note: ```curl -I -XGET [...]``` behaves as expected.
--- END ISSUE ---
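To make the reported behaviour concrete before looking at the code, here is a minimal sketch; the viewset name, URL and use of `APIRequestFactory` are illustrative, not taken from the report. Only the methods listed in the `actions` mapping get a handler bound, and the action is looked up by HTTP method, so a plain HEAD request never resolves to 'list':

```python
from rest_framework import viewsets
from rest_framework.response import Response
from rest_framework.test import APIRequestFactory

class FooViewSet(viewsets.ViewSet):
    def list(self, request):
        # Expected to see self.action == 'list' for GET and HEAD alike
        return Response({"action": self.action})

view = FooViewSet.as_view({'get': 'list'})  # binds only a 'get' handler
factory = APIRequestFactory()

view(factory.get('/foo/'))   # action_map has 'get', so self.action == 'list'
view(factory.head('/foo/'))  # action_map has no 'head', so self.action is never set to 'list'
```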
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rest_framework/viewsets.py`
Content:
```
1 """
2 ViewSets are essentially just a type of class based view, that doesn't provide
3 any method handlers, such as `get()`, `post()`, etc... but instead has actions,
4 such as `list()`, `retrieve()`, `create()`, etc...
5
6 Actions are only bound to methods at the point of instantiating the views.
7
8 user_list = UserViewSet.as_view({'get': 'list'})
9 user_detail = UserViewSet.as_view({'get': 'retrieve'})
10
11 Typically, rather than instantiate views from viewsets directly, you'll
12 register the viewset with a router and let the URL conf be determined
13 automatically.
14
15 router = DefaultRouter()
16 router.register(r'users', UserViewSet, 'user')
17 urlpatterns = router.urls
18 """
19 from __future__ import unicode_literals
20
21 from functools import update_wrapper
22
23 from django.utils.decorators import classonlymethod
24 from django.views.decorators.csrf import csrf_exempt
25
26 from rest_framework import generics, mixins, views
27
28
29 class ViewSetMixin(object):
30 """
31 This is the magic.
32
33 Overrides `.as_view()` so that it takes an `actions` keyword that performs
34 the binding of HTTP methods to actions on the Resource.
35
36 For example, to create a concrete view binding the 'GET' and 'POST' methods
37 to the 'list' and 'create' actions...
38
39 view = MyViewSet.as_view({'get': 'list', 'post': 'create'})
40 """
41
42 @classonlymethod
43 def as_view(cls, actions=None, **initkwargs):
44 """
45 Because of the way class based views create a closure around the
46 instantiated view, we need to totally reimplement `.as_view`,
47 and slightly modify the view function that is created and returned.
48 """
49 # The suffix initkwarg is reserved for identifying the viewset type
50 # eg. 'List' or 'Instance'.
51 cls.suffix = None
52
53 # actions must not be empty
54 if not actions:
55 raise TypeError("The `actions` argument must be provided when "
56 "calling `.as_view()` on a ViewSet. For example "
57 "`.as_view({'get': 'list'})`")
58
59 # sanitize keyword arguments
60 for key in initkwargs:
61 if key in cls.http_method_names:
62 raise TypeError("You tried to pass in the %s method name as a "
63 "keyword argument to %s(). Don't do that."
64 % (key, cls.__name__))
65 if not hasattr(cls, key):
66 raise TypeError("%s() received an invalid keyword %r" % (
67 cls.__name__, key))
68
69 def view(request, *args, **kwargs):
70 self = cls(**initkwargs)
71 # We also store the mapping of request methods to actions,
72 # so that we can later set the action attribute.
73 # eg. `self.action = 'list'` on an incoming GET request.
74 self.action_map = actions
75
76 # Bind methods to actions
77 # This is the bit that's different to a standard view
78 for method, action in actions.items():
79 handler = getattr(self, action)
80 setattr(self, method, handler)
81
82 # And continue as usual
83 return self.dispatch(request, *args, **kwargs)
84
85 # take name and docstring from class
86 update_wrapper(view, cls, updated=())
87
88 # and possible attributes set by decorators
89 # like csrf_exempt from dispatch
90 update_wrapper(view, cls.dispatch, assigned=())
91
92 # We need to set these on the view function, so that breadcrumb
93 # generation can pick out these bits of information from a
94 # resolved URL.
95 view.cls = cls
96 view.initkwargs = initkwargs
97 view.suffix = initkwargs.get('suffix', None)
98 view.actions = actions
99 return csrf_exempt(view)
100
101 def initialize_request(self, request, *args, **kwargs):
102 """
103 Set the `.action` attribute on the view,
104 depending on the request method.
105 """
106 request = super(ViewSetMixin, self).initialize_request(request, *args, **kwargs)
107 method = request.method.lower()
108 if method == 'options':
109 # This is a special case as we always provide handling for the
110 # options method in the base `View` class.
111 # Unlike the other explicitly defined actions, 'metadata' is implicit.
112 self.action = 'metadata'
113 else:
114 self.action = self.action_map.get(method)
115 return request
116
117
118 class ViewSet(ViewSetMixin, views.APIView):
119 """
120 The base ViewSet class does not provide any actions by default.
121 """
122 pass
123
124
125 class GenericViewSet(ViewSetMixin, generics.GenericAPIView):
126 """
127 The GenericViewSet class does not provide any actions by default,
128 but does include the base set of generic view behavior, such as
129 the `get_object` and `get_queryset` methods.
130 """
131 pass
132
133
134 class ReadOnlyModelViewSet(mixins.RetrieveModelMixin,
135 mixins.ListModelMixin,
136 GenericViewSet):
137 """
138 A viewset that provides default `list()` and `retrieve()` actions.
139 """
140 pass
141
142
143 class ModelViewSet(mixins.CreateModelMixin,
144 mixins.RetrieveModelMixin,
145 mixins.UpdateModelMixin,
146 mixins.DestroyModelMixin,
147 mixins.ListModelMixin,
148 GenericViewSet):
149 """
150 A viewset that provides default `create()`, `retrieve()`, `update()`,
151 `partial_update()`, `destroy()` and `list()` actions.
152 """
153 pass
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rest_framework/viewsets.py b/rest_framework/viewsets.py
--- a/rest_framework/viewsets.py
+++ b/rest_framework/viewsets.py
@@ -79,6 +79,9 @@
handler = getattr(self, action)
setattr(self, method, handler)
+ if hasattr(self, 'get') and not hasattr(self, 'head'):
+ self.head = self.get
+
# And continue as usual
return self.dispatch(request, *args, **kwargs)
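A hedged usage note on this fix: the added fallback makes an unmapped HEAD request reuse the bound GET handler, mirroring Django's base `View` behaviour. If the action itself also needs to be populated for HEAD, mapping it explicitly in the `actions` dict works too; the names below reuse the module docstring's example and are illustrative:

```python
# Explicit mapping: routes HEAD through the same action and, unlike the implicit
# fallback, records it in action_map so self.action is set for HEAD requests.
user_list = UserViewSet.as_view({'get': 'list', 'head': 'list'})
user_detail = UserViewSet.as_view({'get': 'retrieve', 'head': 'retrieve'})
```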
| {"golden_diff": "diff --git a/rest_framework/viewsets.py b/rest_framework/viewsets.py\n--- a/rest_framework/viewsets.py\n+++ b/rest_framework/viewsets.py\n@@ -79,6 +79,9 @@\n handler = getattr(self, action)\n setattr(self, method, handler)\n \n+ if hasattr(self, 'get') and not hasattr(self, 'head'):\n+ self.head = self.get\n+\n # And continue as usual\n return self.dispatch(request, *args, **kwargs)\n", "issue": "A \"pure\" HEAD request on a ViewSet does'nt behave like a GET\nA HEAD request on a ViewSet let the action attribute empty.\r\n```curl -I http://myhost/api/foo/```\r\n\r\nIt should fallback to simulate a GET request for everything but the rendering.\r\nMeaning self.action should be either 'list' or 'retrieve'.\r\n\r\nNote: ```curl -I -XGET [...]``` behaves as expected.\n", "before_files": [{"content": "\"\"\"\nViewSets are essentially just a type of class based view, that doesn't provide\nany method handlers, such as `get()`, `post()`, etc... but instead has actions,\nsuch as `list()`, `retrieve()`, `create()`, etc...\n\nActions are only bound to methods at the point of instantiating the views.\n\n user_list = UserViewSet.as_view({'get': 'list'})\n user_detail = UserViewSet.as_view({'get': 'retrieve'})\n\nTypically, rather than instantiate views from viewsets directly, you'll\nregister the viewset with a router and let the URL conf be determined\nautomatically.\n\n router = DefaultRouter()\n router.register(r'users', UserViewSet, 'user')\n urlpatterns = router.urls\n\"\"\"\nfrom __future__ import unicode_literals\n\nfrom functools import update_wrapper\n\nfrom django.utils.decorators import classonlymethod\nfrom django.views.decorators.csrf import csrf_exempt\n\nfrom rest_framework import generics, mixins, views\n\n\nclass ViewSetMixin(object):\n \"\"\"\n This is the magic.\n\n Overrides `.as_view()` so that it takes an `actions` keyword that performs\n the binding of HTTP methods to actions on the Resource.\n\n For example, to create a concrete view binding the 'GET' and 'POST' methods\n to the 'list' and 'create' actions...\n\n view = MyViewSet.as_view({'get': 'list', 'post': 'create'})\n \"\"\"\n\n @classonlymethod\n def as_view(cls, actions=None, **initkwargs):\n \"\"\"\n Because of the way class based views create a closure around the\n instantiated view, we need to totally reimplement `.as_view`,\n and slightly modify the view function that is created and returned.\n \"\"\"\n # The suffix initkwarg is reserved for identifying the viewset type\n # eg. 'List' or 'Instance'.\n cls.suffix = None\n\n # actions must not be empty\n if not actions:\n raise TypeError(\"The `actions` argument must be provided when \"\n \"calling `.as_view()` on a ViewSet. For example \"\n \"`.as_view({'get': 'list'})`\")\n\n # sanitize keyword arguments\n for key in initkwargs:\n if key in cls.http_method_names:\n raise TypeError(\"You tried to pass in the %s method name as a \"\n \"keyword argument to %s(). Don't do that.\"\n % (key, cls.__name__))\n if not hasattr(cls, key):\n raise TypeError(\"%s() received an invalid keyword %r\" % (\n cls.__name__, key))\n\n def view(request, *args, **kwargs):\n self = cls(**initkwargs)\n # We also store the mapping of request methods to actions,\n # so that we can later set the action attribute.\n # eg. 
`self.action = 'list'` on an incoming GET request.\n self.action_map = actions\n\n # Bind methods to actions\n # This is the bit that's different to a standard view\n for method, action in actions.items():\n handler = getattr(self, action)\n setattr(self, method, handler)\n\n # And continue as usual\n return self.dispatch(request, *args, **kwargs)\n\n # take name and docstring from class\n update_wrapper(view, cls, updated=())\n\n # and possible attributes set by decorators\n # like csrf_exempt from dispatch\n update_wrapper(view, cls.dispatch, assigned=())\n\n # We need to set these on the view function, so that breadcrumb\n # generation can pick out these bits of information from a\n # resolved URL.\n view.cls = cls\n view.initkwargs = initkwargs\n view.suffix = initkwargs.get('suffix', None)\n view.actions = actions\n return csrf_exempt(view)\n\n def initialize_request(self, request, *args, **kwargs):\n \"\"\"\n Set the `.action` attribute on the view,\n depending on the request method.\n \"\"\"\n request = super(ViewSetMixin, self).initialize_request(request, *args, **kwargs)\n method = request.method.lower()\n if method == 'options':\n # This is a special case as we always provide handling for the\n # options method in the base `View` class.\n # Unlike the other explicitly defined actions, 'metadata' is implicit.\n self.action = 'metadata'\n else:\n self.action = self.action_map.get(method)\n return request\n\n\nclass ViewSet(ViewSetMixin, views.APIView):\n \"\"\"\n The base ViewSet class does not provide any actions by default.\n \"\"\"\n pass\n\n\nclass GenericViewSet(ViewSetMixin, generics.GenericAPIView):\n \"\"\"\n The GenericViewSet class does not provide any actions by default,\n but does include the base set of generic view behavior, such as\n the `get_object` and `get_queryset` methods.\n \"\"\"\n pass\n\n\nclass ReadOnlyModelViewSet(mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n GenericViewSet):\n \"\"\"\n A viewset that provides default `list()` and `retrieve()` actions.\n \"\"\"\n pass\n\n\nclass ModelViewSet(mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.UpdateModelMixin,\n mixins.DestroyModelMixin,\n mixins.ListModelMixin,\n GenericViewSet):\n \"\"\"\n A viewset that provides default `create()`, `retrieve()`, `update()`,\n `partial_update()`, `destroy()` and `list()` actions.\n \"\"\"\n pass\n", "path": "rest_framework/viewsets.py"}], "after_files": [{"content": "\"\"\"\nViewSets are essentially just a type of class based view, that doesn't provide\nany method handlers, such as `get()`, `post()`, etc... 
but instead has actions,\nsuch as `list()`, `retrieve()`, `create()`, etc...\n\nActions are only bound to methods at the point of instantiating the views.\n\n user_list = UserViewSet.as_view({'get': 'list'})\n user_detail = UserViewSet.as_view({'get': 'retrieve'})\n\nTypically, rather than instantiate views from viewsets directly, you'll\nregister the viewset with a router and let the URL conf be determined\nautomatically.\n\n router = DefaultRouter()\n router.register(r'users', UserViewSet, 'user')\n urlpatterns = router.urls\n\"\"\"\nfrom __future__ import unicode_literals\n\nfrom functools import update_wrapper\n\nfrom django.utils.decorators import classonlymethod\nfrom django.views.decorators.csrf import csrf_exempt\n\nfrom rest_framework import generics, mixins, views\n\n\nclass ViewSetMixin(object):\n \"\"\"\n This is the magic.\n\n Overrides `.as_view()` so that it takes an `actions` keyword that performs\n the binding of HTTP methods to actions on the Resource.\n\n For example, to create a concrete view binding the 'GET' and 'POST' methods\n to the 'list' and 'create' actions...\n\n view = MyViewSet.as_view({'get': 'list', 'post': 'create'})\n \"\"\"\n\n @classonlymethod\n def as_view(cls, actions=None, **initkwargs):\n \"\"\"\n Because of the way class based views create a closure around the\n instantiated view, we need to totally reimplement `.as_view`,\n and slightly modify the view function that is created and returned.\n \"\"\"\n # The suffix initkwarg is reserved for identifying the viewset type\n # eg. 'List' or 'Instance'.\n cls.suffix = None\n\n # actions must not be empty\n if not actions:\n raise TypeError(\"The `actions` argument must be provided when \"\n \"calling `.as_view()` on a ViewSet. For example \"\n \"`.as_view({'get': 'list'})`\")\n\n # sanitize keyword arguments\n for key in initkwargs:\n if key in cls.http_method_names:\n raise TypeError(\"You tried to pass in the %s method name as a \"\n \"keyword argument to %s(). Don't do that.\"\n % (key, cls.__name__))\n if not hasattr(cls, key):\n raise TypeError(\"%s() received an invalid keyword %r\" % (\n cls.__name__, key))\n\n def view(request, *args, **kwargs):\n self = cls(**initkwargs)\n # We also store the mapping of request methods to actions,\n # so that we can later set the action attribute.\n # eg. 
`self.action = 'list'` on an incoming GET request.\n self.action_map = actions\n\n # Bind methods to actions\n # This is the bit that's different to a standard view\n for method, action in actions.items():\n handler = getattr(self, action)\n setattr(self, method, handler)\n\n if hasattr(self, 'get') and not hasattr(self, 'head'):\n self.head = self.get\n\n # And continue as usual\n return self.dispatch(request, *args, **kwargs)\n\n # take name and docstring from class\n update_wrapper(view, cls, updated=())\n\n # and possible attributes set by decorators\n # like csrf_exempt from dispatch\n update_wrapper(view, cls.dispatch, assigned=())\n\n # We need to set these on the view function, so that breadcrumb\n # generation can pick out these bits of information from a\n # resolved URL.\n view.cls = cls\n view.initkwargs = initkwargs\n view.suffix = initkwargs.get('suffix', None)\n view.actions = actions\n return csrf_exempt(view)\n\n def initialize_request(self, request, *args, **kwargs):\n \"\"\"\n Set the `.action` attribute on the view,\n depending on the request method.\n \"\"\"\n request = super(ViewSetMixin, self).initialize_request(request, *args, **kwargs)\n method = request.method.lower()\n if method == 'options':\n # This is a special case as we always provide handling for the\n # options method in the base `View` class.\n # Unlike the other explicitly defined actions, 'metadata' is implicit.\n self.action = 'metadata'\n else:\n self.action = self.action_map.get(method)\n return request\n\n\nclass ViewSet(ViewSetMixin, views.APIView):\n \"\"\"\n The base ViewSet class does not provide any actions by default.\n \"\"\"\n pass\n\n\nclass GenericViewSet(ViewSetMixin, generics.GenericAPIView):\n \"\"\"\n The GenericViewSet class does not provide any actions by default,\n but does include the base set of generic view behavior, such as\n the `get_object` and `get_queryset` methods.\n \"\"\"\n pass\n\n\nclass ReadOnlyModelViewSet(mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n GenericViewSet):\n \"\"\"\n A viewset that provides default `list()` and `retrieve()` actions.\n \"\"\"\n pass\n\n\nclass ModelViewSet(mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.UpdateModelMixin,\n mixins.DestroyModelMixin,\n mixins.ListModelMixin,\n GenericViewSet):\n \"\"\"\n A viewset that provides default `create()`, `retrieve()`, `update()`,\n `partial_update()`, `destroy()` and `list()` actions.\n \"\"\"\n pass\n", "path": "rest_framework/viewsets.py"}]} |
gh_patches_debug_1023 | rasdani/github-patches | git_diff | Cog-Creators__Red-DiscordBot-1324 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error on [p]cog update
### Type:
- [ ] Suggestion
- [x] Bug
### Brief description of the problem
`[p]cog update` fails with ValueError
### Expected behavior
Cogs to update normally
### Actual behavior
```
Exception in command 'cog update'
Traceback (most recent call last):
File "c:\program files\python35\lib\site-packages\discord\ext\commands\core.py", line 62, in wrapped
ret = yield from coro(*args, **kwargs)
File "C:\Program Files\Python35\Lib\site-packages\redbot\cogs\downloader\downloader.py", line 303, in _cog_update
updated = await self._repo_manager.update_all_repos()
File "C:\Program Files\Python35\Lib\site-packages\redbot\cogs\downloader\repo_manager.py", line 642, in update_all_repos
repo, (old, new) = await self.update_repo(repo_name)
ValueError: not enough values to unpack (expected 2, got 1)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\program files\python35\lib\site-packages\discord\ext\commands\bot.py", line 886, in invoke
yield from ctx.command.invoke(ctx)
File "c:\program files\python35\lib\site-packages\discord\ext\commands\core.py", line 899, in invoke
yield from ctx.invoked_subcommand.invoke(ctx)
File "c:\program files\python35\lib\site-packages\discord\ext\commands\core.py", line 493, in invoke
yield from injected(*ctx.args, **ctx.kwargs)
File "c:\program files\python35\lib\site-packages\discord\ext\commands\core.py", line 71, in wrapped
raise CommandInvokeError(e) from e
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: ValueError: not enough values to unpack (expected 2, got 1)
```
### Steps to reproduce
1. Fresh RedBot
2. Add repo (I used https://github.com/bobloy/Fox-V3)
3. Made a change to info.json (Was using tabs and spaces in same file)
4. `[p]cog update`
--- END ISSUE ---
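The traceback points at the unpacking in `update_all_repos`: `update_repo` hands back a one-entry dict shaped like `{repo: (old_hash, new_hash)}`, and unpacking a dict directly iterates only its keys, which produces exactly the `ValueError: not enough values to unpack (expected 2, got 1)` shown above. A minimal, stand-alone reproduction with placeholder values:

```python
# Placeholder objects standing in for the real Repo and commit hashes.
repo = "Fox-V3"
result = {repo: ("old_hash", "new_hash")}  # shape returned by update_repo()

try:
    name, (old, new) = result  # iterating a dict yields its keys only
except ValueError as exc:
    print(exc)  # not enough values to unpack (expected 2, got 1)
```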
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redbot/cogs/downloader/repo_manager.py`
Content:
```
1 import asyncio
2 import functools
3 import os
4 import pkgutil
5 import shutil
6 from concurrent.futures import ThreadPoolExecutor
7 from pathlib import Path
8 from subprocess import run as sp_run, PIPE
9 from sys import executable
10 from typing import Tuple, MutableMapping, Union
11
12 from discord.ext import commands
13
14 from redbot.core import Config
15 from redbot.core import data_manager
16 from .errors import *
17 from .installable import Installable, InstallableType
18 from .json_mixins import RepoJSONMixin
19 from .log import log
20
21
22 class Repo(RepoJSONMixin):
23 GIT_CLONE = "git clone -b {branch} {url} {folder}"
24 GIT_CLONE_NO_BRANCH = "git clone {url} {folder}"
25 GIT_CURRENT_BRANCH = "git -C {path} rev-parse --abbrev-ref HEAD"
26 GIT_LATEST_COMMIT = "git -C {path} rev-parse {branch}"
27 GIT_HARD_RESET = "git -C {path} reset --hard origin/{branch} -q"
28 GIT_PULL = "git -C {path} pull -q --ff-only"
29 GIT_DIFF_FILE_STATUS = ("git -C {path} diff --no-commit-id --name-status"
30 " {old_hash} {new_hash}")
31 GIT_LOG = ("git -C {path} log --relative-date --reverse {old_hash}.."
32 " {relative_file_path}")
33 GIT_DISCOVER_REMOTE_URL = "git -C {path} config --get remote.origin.url"
34
35 PIP_INSTALL = "{python} -m pip install -U -t {target_dir} {reqs}"
36
37 def __init__(self, name: str, url: str, branch: str, folder_path: Path,
38 available_modules: Tuple[Installable]=(), loop: asyncio.AbstractEventLoop=None):
39 self.url = url
40 self.branch = branch
41
42 self.name = name
43
44 self.folder_path = folder_path
45 self.folder_path.mkdir(parents=True, exist_ok=True)
46
47 super().__init__(self.folder_path)
48
49 self.available_modules = available_modules
50
51 self._executor = ThreadPoolExecutor(1)
52
53 self._repo_lock = asyncio.Lock()
54
55 self._loop = loop
56 if self._loop is None:
57 self._loop = asyncio.get_event_loop()
58
59 @classmethod
60 async def convert(cls, ctx: commands.Context, argument: str):
61 downloader_cog = ctx.bot.get_cog("Downloader")
62 if downloader_cog is None:
63 raise commands.CommandError("No Downloader cog found.")
64
65 # noinspection PyProtectedMember
66 repo_manager = downloader_cog._repo_manager
67 poss_repo = repo_manager.get_repo(argument)
68 if poss_repo is None:
69 raise commands.BadArgument("Repo by the name {} does not exist.".format(argument))
70 return poss_repo
71
72 def _existing_git_repo(self) -> (bool, Path):
73 git_path = self.folder_path / '.git'
74 return git_path.exists(), git_path
75
76 async def _get_file_update_statuses(
77 self, old_hash: str, new_hash: str) -> MutableMapping[str, str]:
78 """
79 Gets the file update status letters for each changed file between
80 the two hashes.
81 :param old_hash: Pre-update
82 :param new_hash: Post-update
83 :return: Mapping of filename -> status_letter
84 """
85 p = await self._run(
86 self.GIT_DIFF_FILE_STATUS.format(
87 path=self.folder_path,
88 old_hash=old_hash,
89 new_hash=new_hash
90 )
91 )
92
93 if p.returncode != 0:
94 raise GitDiffError("Git diff failed for repo at path:"
95 " {}".format(self.folder_path))
96
97 stdout = p.stdout.strip().decode().split('\n')
98
99 ret = {}
100
101 for filename in stdout:
102 # TODO: filter these filenames by ones in self.available_modules
103 status, _, filepath = filename.partition('\t')
104 ret[filepath] = status
105
106 return ret
107
108 async def _get_commit_notes(self, old_commit_hash: str,
109 relative_file_path: str) -> str:
110 """
111 Gets the commit notes from git log.
112 :param old_commit_hash: Point in time to start getting messages
113 :param relative_file_path: Path relative to the repo folder of the file
114 to get messages for.
115 :return: Git commit note log
116 """
117 p = await self._run(
118 self.GIT_LOG.format(
119 path=self.folder_path,
120 old_hash=old_commit_hash,
121 relative_file_path=relative_file_path
122 )
123 )
124
125 if p.returncode != 0:
126 raise GitException("An exception occurred while executing git log on"
127 " this repo: {}".format(self.folder_path))
128
129 return p.stdout.decode().strip()
130
131 def _update_available_modules(self) -> Tuple[str]:
132 """
133 Updates the available modules attribute for this repo.
134 :return: List of available modules.
135 """
136 curr_modules = []
137 """
138 for name in self.folder_path.iterdir():
139 if name.is_dir():
140 spec = importlib.util.spec_from_file_location(
141 name.stem, location=str(name.parent)
142 )
143 if spec is not None:
144 curr_modules.append(
145 Installable(location=name)
146 )
147 """
148 for file_finder, name, is_pkg in pkgutil.walk_packages(path=[str(self.folder_path), ]):
149 curr_modules.append(
150 Installable(location=self.folder_path / name)
151 )
152 self.available_modules = curr_modules
153
154 # noinspection PyTypeChecker
155 return tuple(self.available_modules)
156
157 async def _run(self, *args, **kwargs):
158 env = os.environ.copy()
159 env['GIT_TERMINAL_PROMPT'] = '0'
160 kwargs['env'] = env
161 async with self._repo_lock:
162 return await self._loop.run_in_executor(
163 self._executor,
164 functools.partial(sp_run, *args, stdout=PIPE, **kwargs)
165 )
166
167 async def clone(self) -> Tuple[str]:
168 """Clone a new repo.
169
170 Returns
171 -------
172 `tuple` of `str`
173 All available module names from this repo.
174
175 """
176 exists, path = self._existing_git_repo()
177 if exists:
178 raise ExistingGitRepo(
179 "A git repo already exists at path: {}".format(path)
180 )
181
182 if self.branch is not None:
183 p = await self._run(
184 self.GIT_CLONE.format(
185 branch=self.branch,
186 url=self.url,
187 folder=self.folder_path
188 ).split()
189 )
190 else:
191 p = await self._run(
192 self.GIT_CLONE_NO_BRANCH.format(
193 url=self.url,
194 folder=self.folder_path
195 ).split()
196 )
197
198 if p.returncode != 0:
199 raise CloningError("Error when running git clone.")
200
201 if self.branch is None:
202 self.branch = await self.current_branch()
203
204 self._read_info_file()
205
206 return self._update_available_modules()
207
208 async def current_branch(self) -> str:
209 """Determine the current branch using git commands.
210
211 Returns
212 -------
213 str
214 The current branch name.
215
216 """
217 exists, _ = self._existing_git_repo()
218 if not exists:
219 raise MissingGitRepo(
220 "A git repo does not exist at path: {}".format(self.folder_path)
221 )
222
223 p = await self._run(
224 self.GIT_CURRENT_BRANCH.format(
225 path=self.folder_path
226 ).split()
227 )
228
229 if p.returncode != 0:
230 raise GitException("Could not determine current branch"
231 " at path: {}".format(self.folder_path))
232
233 return p.stdout.decode().strip()
234
235 async def current_commit(self, branch: str=None) -> str:
236 """Determine the current commit hash of the repo.
237
238 Parameters
239 ----------
240 branch : `str`, optional
241 Override for repo's branch attribute.
242
243 Returns
244 -------
245 str
246 The requested commit hash.
247
248 """
249 if branch is None:
250 branch = self.branch
251
252 exists, _ = self._existing_git_repo()
253 if not exists:
254 raise MissingGitRepo(
255 "A git repo does not exist at path: {}".format(self.folder_path)
256 )
257
258 p = await self._run(
259 self.GIT_LATEST_COMMIT.format(
260 path=self.folder_path,
261 branch=branch
262 ).split()
263 )
264
265 if p.returncode != 0:
266 raise CurrentHashError("Unable to determine old commit hash.")
267
268 return p.stdout.decode().strip()
269
270 async def current_url(self, folder: Path=None) -> str:
271 """
272 Discovers the FETCH URL for a Git repo.
273
274 Parameters
275 ----------
276 folder : pathlib.Path
277 The folder to search for a URL.
278
279 Returns
280 -------
281 str
282 The FETCH URL.
283
284 Raises
285 ------
286 RuntimeError
287 When the folder does not contain a git repo with a FETCH URL.
288 """
289 if folder is None:
290 folder = self.folder_path
291
292 p = await self._run(
293 Repo.GIT_DISCOVER_REMOTE_URL.format(
294 path=folder
295 ).split()
296 )
297
298 if p.returncode != 0:
299 raise RuntimeError("Unable to discover a repo URL.")
300
301 return p.stdout.decode().strip()
302
303 async def hard_reset(self, branch: str=None) -> None:
304 """Perform a hard reset on the current repo.
305
306 Parameters
307 ----------
308 branch : `str`, optional
309 Override for repo branch attribute.
310
311 """
312 if branch is None:
313 branch = self.branch
314
315 exists, _ = self._existing_git_repo()
316 if not exists:
317 raise MissingGitRepo(
318 "A git repo does not exist at path: {}".format(self.folder_path)
319 )
320
321 p = await self._run(
322 self.GIT_HARD_RESET.format(
323 path=self.folder_path,
324 branch=branch
325 ).split()
326 )
327
328 if p.returncode != 0:
329 raise HardResetError("Some error occurred when trying to"
330 " execute a hard reset on the repo at"
331 " the following path: {}".format(self.folder_path))
332
333 async def update(self) -> (str, str):
334 """Update the current branch of this repo.
335
336 Returns
337 -------
338 `tuple` of `str`
339 :py:code`(old commit hash, new commit hash)`
340
341 """
342 curr_branch = await self.current_branch()
343 old_commit = await self.current_commit(branch=curr_branch)
344
345 await self.hard_reset(branch=curr_branch)
346
347 p = await self._run(
348 self.GIT_PULL.format(
349 path=self.folder_path
350 ).split()
351 )
352
353 if p.returncode != 0:
354 raise UpdateError("Git pull returned a non zero exit code"
355 " for the repo located at path: {}".format(self.folder_path))
356
357 new_commit = await self.current_commit(branch=curr_branch)
358
359 self._update_available_modules()
360 self._read_info_file()
361
362 return old_commit, new_commit
363
364 async def install_cog(self, cog: Installable, target_dir: Path) -> bool:
365 """Install a cog to the target directory.
366
367 Parameters
368 ----------
369 cog : Installable
370 The package to install.
371 target_dir : pathlib.Path
372 The target directory for the cog installation.
373
374 Returns
375 -------
376 bool
377 The success of the installation.
378
379 """
380 if cog not in self.available_cogs:
381 raise DownloaderException("That cog does not exist in this repo")
382
383 if not target_dir.is_dir():
384 raise ValueError("That target directory is not actually a directory.")
385
386 if not target_dir.exists():
387 raise ValueError("That target directory does not exist.")
388
389 return await cog.copy_to(target_dir=target_dir)
390
391 async def install_libraries(self, target_dir: Path, libraries: Tuple[Installable]=()) -> bool:
392 """Install shared libraries to the target directory.
393
394 If :code:`libraries` is not specified, all shared libraries in the repo
395 will be installed.
396
397 Parameters
398 ----------
399 target_dir : pathlib.Path
400 Directory to install shared libraries to.
401 libraries : `tuple` of `Installable`
402 A subset of available libraries.
403
404 Returns
405 -------
406 bool
407 The success of the installation.
408
409 """
410 if len(libraries) > 0:
411 if not all([i in self.available_libraries for i in libraries]):
412 raise ValueError("Some given libraries are not available in this repo.")
413 else:
414 libraries = self.available_libraries
415
416 if len(libraries) > 0:
417 ret = True
418 for lib in libraries:
419 ret = ret and await lib.copy_to(target_dir=target_dir)
420 return ret
421 return True
422
423 async def install_requirements(self, cog: Installable, target_dir: Path) -> bool:
424 """Install a cog's requirements.
425
426 Requirements will be installed via pip directly into
427 :code:`target_dir`.
428
429 Parameters
430 ----------
431 cog : Installable
432 Cog for which to install requirements.
433 target_dir : pathlib.Path
434 Path to directory where requirements are to be installed.
435
436 Returns
437 -------
438 bool
439 Success of the installation.
440
441 """
442 if not target_dir.is_dir():
443 raise ValueError("Target directory is not a directory.")
444 target_dir.mkdir(parents=True, exist_ok=True)
445
446 return await self.install_raw_requirements(cog.requirements, target_dir)
447
448 async def install_raw_requirements(self, requirements: Tuple[str], target_dir: Path) -> bool:
449 """Install a list of requirements using pip.
450
451 Parameters
452 ----------
453 requirements : `tuple` of `str`
454 List of requirement names to install via pip.
455 target_dir : pathlib.Path
456 Path to directory where requirements are to be installed.
457
458 Returns
459 -------
460 bool
461 Success of the installation
462
463 """
464 if len(requirements) == 0:
465 return True
466
467 # TODO: Check and see if any of these modules are already available
468
469 p = await self._run(
470 self.PIP_INSTALL.format(
471 python=executable,
472 target_dir=target_dir,
473 reqs=" ".join(requirements)
474 ).split()
475 )
476
477 if p.returncode != 0:
478 log.error("Something went wrong when installing"
479 " the following requirements:"
480 " {}".format(", ".join(requirements)))
481 return False
482 return True
483
484 @property
485 def available_cogs(self) -> Tuple[Installable]:
486 """`tuple` of `installable` : All available cogs in this Repo.
487
488 This excludes hidden or shared packages.
489 """
490 # noinspection PyTypeChecker
491 return tuple(
492 [m for m in self.available_modules
493 if m.type == InstallableType.COG and not m.hidden]
494 )
495
496 @property
497 def available_libraries(self) -> Tuple[Installable]:
498 """`tuple` of `installable` : All available shared libraries in this
499 Repo.
500 """
501 # noinspection PyTypeChecker
502 return tuple(
503 [m for m in self.available_modules
504 if m.type == InstallableType.SHARED_LIBRARY]
505 )
506
507 @classmethod
508 async def from_folder(cls, folder: Path):
509 repo = cls(name=folder.stem, branch="", url="", folder_path=folder)
510 repo.branch = await repo.current_branch()
511 repo.url = await repo.current_url()
512 repo._update_available_modules()
513 return repo
514
515
516 class RepoManager:
517 def __init__(self, downloader_config: Config):
518 self.downloader_config = downloader_config
519
520 self._repos = {}
521
522 loop = asyncio.get_event_loop()
523 loop.create_task(self._load_repos(set=True)) # str_name: Repo
524
525 @property
526 def repos_folder(self) -> Path:
527 data_folder = data_manager.cog_data_path(self)
528 return data_folder / 'repos'
529
530 def does_repo_exist(self, name: str) -> bool:
531 return name in self._repos
532
533 @staticmethod
534 def validate_and_normalize_repo_name(name: str) -> str:
535 if not name.isidentifier():
536 raise InvalidRepoName("Not a valid Python variable name.")
537 return name.lower()
538
539 async def add_repo(self, url: str, name: str, branch: str="master") -> Repo:
540 """Add and clone a git repository.
541
542 Parameters
543 ----------
544 url : str
545 URL to the git repository.
546 name : str
547 Internal name of the repository.
548 branch : str
549 Name of the default branch to checkout into.
550
551 Returns
552 -------
553 Repo
554 New Repo object representing the cloned repository.
555
556 """
557 name = self.validate_and_normalize_repo_name(name)
558 if self.does_repo_exist(name):
559 raise InvalidRepoName(
560 "That repo name you provided already exists."
561 " Please choose another."
562 )
563
564 # noinspection PyTypeChecker
565 r = Repo(url=url, name=name, branch=branch,
566 folder_path=self.repos_folder / name)
567 await r.clone()
568
569 self._repos[name] = r
570
571 return r
572
573 def get_repo(self, name: str) -> Union[Repo, None]:
574 """Get a Repo object for a repository.
575
576 Parameters
577 ----------
578 name : str
579 The name of the repository to retrieve.
580
581 Returns
582 -------
583 `Repo` or `None`
584 Repo object for the repository, if it exists.
585
586 """
587 return self._repos.get(name, None)
588
589 def get_all_repo_names(self) -> Tuple[str]:
590 """Get all repo names.
591
592 Returns
593 -------
594 `tuple` of `str`
595
596 """
597 # noinspection PyTypeChecker
598 return tuple(self._repos.keys())
599
600 async def delete_repo(self, name: str):
601 """Delete a repository and its folders.
602
603 Parameters
604 ----------
605 name : str
606 The name of the repository to delete.
607
608 Raises
609 ------
610 MissingGitRepo
611 If the repo does not exist.
612
613 """
614 repo = self.get_repo(name)
615 if repo is None:
616 raise MissingGitRepo("There is no repo with the name {}".format(name))
617
618 shutil.rmtree(str(repo.folder_path))
619
620 try:
621 del self._repos[name]
622 except KeyError:
623 pass
624
625 async def update_repo(self, repo_name: str) -> MutableMapping[Repo, Tuple[str, str]]:
626 repo = self._repos[repo_name]
627 old, new = await repo.update()
628 return {repo: (old, new)}
629
630 async def update_all_repos(self) -> MutableMapping[Repo, Tuple[str, str]]:
631 """Call `Repo.update` on all repositories.
632
633 Returns
634 -------
635 dict
636 A mapping of `Repo` objects that received new commits to a `tuple`
637 of `str` containing old and new commit hashes.
638
639 """
640 ret = {}
641 for repo_name, _ in self._repos.items():
642 repo, (old, new) = await self.update_repo(repo_name)
643 if old != new:
644 ret[repo] = (old, new)
645 return ret
646
647 async def _load_repos(self, set=False) -> MutableMapping[str, Repo]:
648 ret = {}
649 for folder in self.repos_folder.iterdir():
650 if not folder.is_dir():
651 continue
652 try:
653 ret[folder.stem] = await Repo.from_folder(folder)
654 except RuntimeError:
655 # Thrown when there's no findable git remote URL
656 pass
657
658 if set:
659 self._repos = ret
660 return ret
661
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redbot/cogs/downloader/repo_manager.py b/redbot/cogs/downloader/repo_manager.py
--- a/redbot/cogs/downloader/repo_manager.py
+++ b/redbot/cogs/downloader/repo_manager.py
@@ -639,7 +639,7 @@
"""
ret = {}
for repo_name, _ in self._repos.items():
- repo, (old, new) = await self.update_repo(repo_name)
+ repo, (old, new) = (await self.update_repo(repo_name)).popitem()
if old != new:
ret[repo] = (old, new)
return ret
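The one-line change works because `dict.popitem()` pulls the single `(repo, (old, new))` pair out of the mapping that `update_repo` returns, so the tuple-unpacking on the left-hand side lines up again; returning a plain tuple from `update_repo` would also line things up, at the cost of changing its documented return type. A small sketch of the unpacking, with placeholder hashes:

```python
# Placeholder for the awaited result of update_repo(): a one-entry mapping.
result = {"repo": ("old_hash", "new_hash")}

repo, (old, new) = result.popitem()  # popitem() returns a (key, value) pair
print(repo, old, new)                # repo old_hash new_hash
```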
| {"golden_diff": "diff --git a/redbot/cogs/downloader/repo_manager.py b/redbot/cogs/downloader/repo_manager.py\n--- a/redbot/cogs/downloader/repo_manager.py\n+++ b/redbot/cogs/downloader/repo_manager.py\n@@ -639,7 +639,7 @@\n \"\"\"\n ret = {}\n for repo_name, _ in self._repos.items():\n- repo, (old, new) = await self.update_repo(repo_name)\n+ repo, (old, new) = (await self.update_repo(repo_name)).popitem()\n if old != new:\n ret[repo] = (old, new)\n return ret\n", "issue": "Error on [p]cog update\n### Type:\r\n\r\n- [ ] Suggestion\r\n- [ X] Bug\r\n\r\n### Brief description of the problem\r\n`[p]cog update` fails with ValueError\r\n\r\n### Expected behavior\r\nCogs to update normally\r\n### Actual behavior\r\n\r\n```Exception in command 'cog update'\r\nTraceback (most recent call last):\r\n File \"c:\\program files\\python35\\lib\\site-packages\\discord\\ext\\commands\\core.py\", line 62, in wrapped\r\n ret = yield from coro(*args, **kwargs)\r\n File \"C:\\Program Files\\Python35\\Lib\\site-packages\\redbot\\cogs\\downloader\\downloader.py\", line 303, in _cog_update\r\n updated = await self._repo_manager.update_all_repos()\r\n File \"C:\\Program Files\\Python35\\Lib\\site-packages\\redbot\\cogs\\downloader\\repo_manager.py\", line 642, in update_all_repos\r\n repo, (old, new) = await self.update_repo(repo_name)\r\nValueError: not enough values to unpack (expected 2, got 1)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\program files\\python35\\lib\\site-packages\\discord\\ext\\commands\\bot.py\", line 886, in invoke\r\n yield from ctx.command.invoke(ctx)\r\n File \"c:\\program files\\python35\\lib\\site-packages\\discord\\ext\\commands\\core.py\", line 899, in invoke\r\n yield from ctx.invoked_subcommand.invoke(ctx)\r\n File \"c:\\program files\\python35\\lib\\site-packages\\discord\\ext\\commands\\core.py\", line 493, in invoke\r\n yield from injected(*ctx.args, **ctx.kwargs)\r\n File \"c:\\program files\\python35\\lib\\site-packages\\discord\\ext\\commands\\core.py\", line 71, in wrapped\r\n raise CommandInvokeError(e) from e\r\ndiscord.ext.commands.errors.CommandInvokeError: Command raised an exception: ValueError: not enough values to unpack (expected 2, got 1)\r\n```\r\n\r\n### Steps to reproduce\r\n\r\n1. Fresh RedBot\r\n2. Add repo (I used https://github.com/bobloy/Fox-V3)\r\n3. Made a change to info.json (Was using tabs and spaces in same file)\r\n4. 
`[p]cog update`\r\n\n", "before_files": [{"content": "import asyncio\nimport functools\nimport os\nimport pkgutil\nimport shutil\nfrom concurrent.futures import ThreadPoolExecutor\nfrom pathlib import Path\nfrom subprocess import run as sp_run, PIPE\nfrom sys import executable\nfrom typing import Tuple, MutableMapping, Union\n\nfrom discord.ext import commands\n\nfrom redbot.core import Config\nfrom redbot.core import data_manager\nfrom .errors import *\nfrom .installable import Installable, InstallableType\nfrom .json_mixins import RepoJSONMixin\nfrom .log import log\n\n\nclass Repo(RepoJSONMixin):\n GIT_CLONE = \"git clone -b {branch} {url} {folder}\"\n GIT_CLONE_NO_BRANCH = \"git clone {url} {folder}\"\n GIT_CURRENT_BRANCH = \"git -C {path} rev-parse --abbrev-ref HEAD\"\n GIT_LATEST_COMMIT = \"git -C {path} rev-parse {branch}\"\n GIT_HARD_RESET = \"git -C {path} reset --hard origin/{branch} -q\"\n GIT_PULL = \"git -C {path} pull -q --ff-only\"\n GIT_DIFF_FILE_STATUS = (\"git -C {path} diff --no-commit-id --name-status\"\n \" {old_hash} {new_hash}\")\n GIT_LOG = (\"git -C {path} log --relative-date --reverse {old_hash}..\"\n \" {relative_file_path}\")\n GIT_DISCOVER_REMOTE_URL = \"git -C {path} config --get remote.origin.url\"\n\n PIP_INSTALL = \"{python} -m pip install -U -t {target_dir} {reqs}\"\n\n def __init__(self, name: str, url: str, branch: str, folder_path: Path,\n available_modules: Tuple[Installable]=(), loop: asyncio.AbstractEventLoop=None):\n self.url = url\n self.branch = branch\n\n self.name = name\n\n self.folder_path = folder_path\n self.folder_path.mkdir(parents=True, exist_ok=True)\n\n super().__init__(self.folder_path)\n\n self.available_modules = available_modules\n\n self._executor = ThreadPoolExecutor(1)\n\n self._repo_lock = asyncio.Lock()\n\n self._loop = loop\n if self._loop is None:\n self._loop = asyncio.get_event_loop()\n\n @classmethod\n async def convert(cls, ctx: commands.Context, argument: str):\n downloader_cog = ctx.bot.get_cog(\"Downloader\")\n if downloader_cog is None:\n raise commands.CommandError(\"No Downloader cog found.\")\n\n # noinspection PyProtectedMember\n repo_manager = downloader_cog._repo_manager\n poss_repo = repo_manager.get_repo(argument)\n if poss_repo is None:\n raise commands.BadArgument(\"Repo by the name {} does not exist.\".format(argument))\n return poss_repo\n\n def _existing_git_repo(self) -> (bool, Path):\n git_path = self.folder_path / '.git'\n return git_path.exists(), git_path\n\n async def _get_file_update_statuses(\n self, old_hash: str, new_hash: str) -> MutableMapping[str, str]:\n \"\"\"\n Gets the file update status letters for each changed file between\n the two hashes.\n :param old_hash: Pre-update\n :param new_hash: Post-update\n :return: Mapping of filename -> status_letter\n \"\"\"\n p = await self._run(\n self.GIT_DIFF_FILE_STATUS.format(\n path=self.folder_path,\n old_hash=old_hash,\n new_hash=new_hash\n )\n )\n\n if p.returncode != 0:\n raise GitDiffError(\"Git diff failed for repo at path:\"\n \" {}\".format(self.folder_path))\n\n stdout = p.stdout.strip().decode().split('\\n')\n\n ret = {}\n\n for filename in stdout:\n # TODO: filter these filenames by ones in self.available_modules\n status, _, filepath = filename.partition('\\t')\n ret[filepath] = status\n\n return ret\n\n async def _get_commit_notes(self, old_commit_hash: str,\n relative_file_path: str) -> str:\n \"\"\"\n Gets the commit notes from git log.\n :param old_commit_hash: Point in time to start getting messages\n :param relative_file_path: 
Path relative to the repo folder of the file\n to get messages for.\n :return: Git commit note log\n \"\"\"\n p = await self._run(\n self.GIT_LOG.format(\n path=self.folder_path,\n old_hash=old_commit_hash,\n relative_file_path=relative_file_path\n )\n )\n\n if p.returncode != 0:\n raise GitException(\"An exception occurred while executing git log on\"\n \" this repo: {}\".format(self.folder_path))\n\n return p.stdout.decode().strip()\n\n def _update_available_modules(self) -> Tuple[str]:\n \"\"\"\n Updates the available modules attribute for this repo.\n :return: List of available modules.\n \"\"\"\n curr_modules = []\n \"\"\"\n for name in self.folder_path.iterdir():\n if name.is_dir():\n spec = importlib.util.spec_from_file_location(\n name.stem, location=str(name.parent)\n )\n if spec is not None:\n curr_modules.append(\n Installable(location=name)\n )\n \"\"\"\n for file_finder, name, is_pkg in pkgutil.walk_packages(path=[str(self.folder_path), ]):\n curr_modules.append(\n Installable(location=self.folder_path / name)\n )\n self.available_modules = curr_modules\n\n # noinspection PyTypeChecker\n return tuple(self.available_modules)\n\n async def _run(self, *args, **kwargs):\n env = os.environ.copy()\n env['GIT_TERMINAL_PROMPT'] = '0'\n kwargs['env'] = env\n async with self._repo_lock:\n return await self._loop.run_in_executor(\n self._executor,\n functools.partial(sp_run, *args, stdout=PIPE, **kwargs)\n )\n\n async def clone(self) -> Tuple[str]:\n \"\"\"Clone a new repo.\n\n Returns\n -------\n `tuple` of `str`\n All available module names from this repo.\n\n \"\"\"\n exists, path = self._existing_git_repo()\n if exists:\n raise ExistingGitRepo(\n \"A git repo already exists at path: {}\".format(path)\n )\n\n if self.branch is not None:\n p = await self._run(\n self.GIT_CLONE.format(\n branch=self.branch,\n url=self.url,\n folder=self.folder_path\n ).split()\n )\n else:\n p = await self._run(\n self.GIT_CLONE_NO_BRANCH.format(\n url=self.url,\n folder=self.folder_path\n ).split()\n )\n\n if p.returncode != 0:\n raise CloningError(\"Error when running git clone.\")\n\n if self.branch is None:\n self.branch = await self.current_branch()\n\n self._read_info_file()\n\n return self._update_available_modules()\n\n async def current_branch(self) -> str:\n \"\"\"Determine the current branch using git commands.\n\n Returns\n -------\n str\n The current branch name.\n\n \"\"\"\n exists, _ = self._existing_git_repo()\n if not exists:\n raise MissingGitRepo(\n \"A git repo does not exist at path: {}\".format(self.folder_path)\n )\n\n p = await self._run(\n self.GIT_CURRENT_BRANCH.format(\n path=self.folder_path\n ).split()\n )\n\n if p.returncode != 0:\n raise GitException(\"Could not determine current branch\"\n \" at path: {}\".format(self.folder_path))\n\n return p.stdout.decode().strip()\n\n async def current_commit(self, branch: str=None) -> str:\n \"\"\"Determine the current commit hash of the repo.\n\n Parameters\n ----------\n branch : `str`, optional\n Override for repo's branch attribute.\n \n Returns\n -------\n str\n The requested commit hash.\n\n \"\"\"\n if branch is None:\n branch = self.branch\n\n exists, _ = self._existing_git_repo()\n if not exists:\n raise MissingGitRepo(\n \"A git repo does not exist at path: {}\".format(self.folder_path)\n )\n\n p = await self._run(\n self.GIT_LATEST_COMMIT.format(\n path=self.folder_path,\n branch=branch\n ).split()\n )\n\n if p.returncode != 0:\n raise CurrentHashError(\"Unable to determine old commit hash.\")\n\n return 
p.stdout.decode().strip()\n\n async def current_url(self, folder: Path=None) -> str:\n \"\"\"\n Discovers the FETCH URL for a Git repo.\n\n Parameters\n ----------\n folder : pathlib.Path\n The folder to search for a URL.\n\n Returns\n -------\n str\n The FETCH URL.\n\n Raises\n ------\n RuntimeError\n When the folder does not contain a git repo with a FETCH URL.\n \"\"\"\n if folder is None:\n folder = self.folder_path\n\n p = await self._run(\n Repo.GIT_DISCOVER_REMOTE_URL.format(\n path=folder\n ).split()\n )\n\n if p.returncode != 0:\n raise RuntimeError(\"Unable to discover a repo URL.\")\n\n return p.stdout.decode().strip()\n\n async def hard_reset(self, branch: str=None) -> None:\n \"\"\"Perform a hard reset on the current repo.\n\n Parameters\n ----------\n branch : `str`, optional\n Override for repo branch attribute.\n\n \"\"\"\n if branch is None:\n branch = self.branch\n\n exists, _ = self._existing_git_repo()\n if not exists:\n raise MissingGitRepo(\n \"A git repo does not exist at path: {}\".format(self.folder_path)\n )\n\n p = await self._run(\n self.GIT_HARD_RESET.format(\n path=self.folder_path,\n branch=branch\n ).split()\n )\n\n if p.returncode != 0:\n raise HardResetError(\"Some error occurred when trying to\"\n \" execute a hard reset on the repo at\"\n \" the following path: {}\".format(self.folder_path))\n\n async def update(self) -> (str, str):\n \"\"\"Update the current branch of this repo.\n\n Returns\n -------\n `tuple` of `str`\n :py:code`(old commit hash, new commit hash)`\n\n \"\"\"\n curr_branch = await self.current_branch()\n old_commit = await self.current_commit(branch=curr_branch)\n\n await self.hard_reset(branch=curr_branch)\n\n p = await self._run(\n self.GIT_PULL.format(\n path=self.folder_path\n ).split()\n )\n\n if p.returncode != 0:\n raise UpdateError(\"Git pull returned a non zero exit code\"\n \" for the repo located at path: {}\".format(self.folder_path))\n\n new_commit = await self.current_commit(branch=curr_branch)\n\n self._update_available_modules()\n self._read_info_file()\n\n return old_commit, new_commit\n\n async def install_cog(self, cog: Installable, target_dir: Path) -> bool:\n \"\"\"Install a cog to the target directory.\n\n Parameters\n ----------\n cog : Installable\n The package to install.\n target_dir : pathlib.Path\n The target directory for the cog installation.\n\n Returns\n -------\n bool\n The success of the installation.\n\n \"\"\"\n if cog not in self.available_cogs:\n raise DownloaderException(\"That cog does not exist in this repo\")\n\n if not target_dir.is_dir():\n raise ValueError(\"That target directory is not actually a directory.\")\n\n if not target_dir.exists():\n raise ValueError(\"That target directory does not exist.\")\n\n return await cog.copy_to(target_dir=target_dir)\n\n async def install_libraries(self, target_dir: Path, libraries: Tuple[Installable]=()) -> bool:\n \"\"\"Install shared libraries to the target directory.\n\n If :code:`libraries` is not specified, all shared libraries in the repo\n will be installed.\n\n Parameters\n ----------\n target_dir : pathlib.Path\n Directory to install shared libraries to.\n libraries : `tuple` of `Installable`\n A subset of available libraries.\n \n Returns\n -------\n bool\n The success of the installation.\n\n \"\"\"\n if len(libraries) > 0:\n if not all([i in self.available_libraries for i in libraries]):\n raise ValueError(\"Some given libraries are not available in this repo.\")\n else:\n libraries = self.available_libraries\n\n if len(libraries) > 0:\n ret 
= True\n for lib in libraries:\n ret = ret and await lib.copy_to(target_dir=target_dir)\n return ret\n return True\n\n async def install_requirements(self, cog: Installable, target_dir: Path) -> bool:\n \"\"\"Install a cog's requirements.\n \n Requirements will be installed via pip directly into\n :code:`target_dir`.\n\n Parameters\n ----------\n cog : Installable\n Cog for which to install requirements.\n target_dir : pathlib.Path\n Path to directory where requirements are to be installed.\n\n Returns\n -------\n bool\n Success of the installation.\n\n \"\"\"\n if not target_dir.is_dir():\n raise ValueError(\"Target directory is not a directory.\")\n target_dir.mkdir(parents=True, exist_ok=True)\n\n return await self.install_raw_requirements(cog.requirements, target_dir)\n\n async def install_raw_requirements(self, requirements: Tuple[str], target_dir: Path) -> bool:\n \"\"\"Install a list of requirements using pip.\n\n Parameters\n ----------\n requirements : `tuple` of `str`\n List of requirement names to install via pip.\n target_dir : pathlib.Path\n Path to directory where requirements are to be installed.\n\n Returns\n -------\n bool\n Success of the installation\n\n \"\"\"\n if len(requirements) == 0:\n return True\n\n # TODO: Check and see if any of these modules are already available\n\n p = await self._run(\n self.PIP_INSTALL.format(\n python=executable,\n target_dir=target_dir,\n reqs=\" \".join(requirements)\n ).split()\n )\n\n if p.returncode != 0:\n log.error(\"Something went wrong when installing\"\n \" the following requirements:\"\n \" {}\".format(\", \".join(requirements)))\n return False\n return True\n\n @property\n def available_cogs(self) -> Tuple[Installable]:\n \"\"\"`tuple` of `installable` : All available cogs in this Repo.\n \n This excludes hidden or shared packages.\n \"\"\"\n # noinspection PyTypeChecker\n return tuple(\n [m for m in self.available_modules\n if m.type == InstallableType.COG and not m.hidden]\n )\n\n @property\n def available_libraries(self) -> Tuple[Installable]:\n \"\"\"`tuple` of `installable` : All available shared libraries in this\n Repo.\n \"\"\"\n # noinspection PyTypeChecker\n return tuple(\n [m for m in self.available_modules\n if m.type == InstallableType.SHARED_LIBRARY]\n )\n\n @classmethod\n async def from_folder(cls, folder: Path):\n repo = cls(name=folder.stem, branch=\"\", url=\"\", folder_path=folder)\n repo.branch = await repo.current_branch()\n repo.url = await repo.current_url()\n repo._update_available_modules()\n return repo\n\n\nclass RepoManager:\n def __init__(self, downloader_config: Config):\n self.downloader_config = downloader_config\n\n self._repos = {}\n\n loop = asyncio.get_event_loop()\n loop.create_task(self._load_repos(set=True)) # str_name: Repo\n\n @property\n def repos_folder(self) -> Path:\n data_folder = data_manager.cog_data_path(self)\n return data_folder / 'repos'\n\n def does_repo_exist(self, name: str) -> bool:\n return name in self._repos\n\n @staticmethod\n def validate_and_normalize_repo_name(name: str) -> str:\n if not name.isidentifier():\n raise InvalidRepoName(\"Not a valid Python variable name.\")\n return name.lower()\n\n async def add_repo(self, url: str, name: str, branch: str=\"master\") -> Repo:\n \"\"\"Add and clone a git repository.\n\n Parameters\n ----------\n url : str\n URL to the git repository.\n name : str\n Internal name of the repository.\n branch : str\n Name of the default branch to checkout into.\n\n Returns\n -------\n Repo\n New Repo object representing the cloned 
repository.\n\n \"\"\"\n name = self.validate_and_normalize_repo_name(name)\n if self.does_repo_exist(name):\n raise InvalidRepoName(\n \"That repo name you provided already exists.\"\n \" Please choose another.\"\n )\n\n # noinspection PyTypeChecker\n r = Repo(url=url, name=name, branch=branch,\n folder_path=self.repos_folder / name)\n await r.clone()\n\n self._repos[name] = r\n\n return r\n\n def get_repo(self, name: str) -> Union[Repo, None]:\n \"\"\"Get a Repo object for a repository.\n\n Parameters\n ----------\n name : str\n The name of the repository to retrieve.\n\n Returns\n -------\n `Repo` or `None`\n Repo object for the repository, if it exists.\n\n \"\"\"\n return self._repos.get(name, None)\n\n def get_all_repo_names(self) -> Tuple[str]:\n \"\"\"Get all repo names.\n\n Returns\n -------\n `tuple` of `str`\n\n \"\"\"\n # noinspection PyTypeChecker\n return tuple(self._repos.keys())\n\n async def delete_repo(self, name: str):\n \"\"\"Delete a repository and its folders.\n\n Parameters\n ----------\n name : str\n The name of the repository to delete.\n\n Raises\n ------\n MissingGitRepo\n If the repo does not exist.\n\n \"\"\"\n repo = self.get_repo(name)\n if repo is None:\n raise MissingGitRepo(\"There is no repo with the name {}\".format(name))\n\n shutil.rmtree(str(repo.folder_path))\n\n try:\n del self._repos[name]\n except KeyError:\n pass\n\n async def update_repo(self, repo_name: str) -> MutableMapping[Repo, Tuple[str, str]]:\n repo = self._repos[repo_name]\n old, new = await repo.update()\n return {repo: (old, new)}\n\n async def update_all_repos(self) -> MutableMapping[Repo, Tuple[str, str]]:\n \"\"\"Call `Repo.update` on all repositories.\n\n Returns\n -------\n dict\n A mapping of `Repo` objects that received new commits to a `tuple`\n of `str` containing old and new commit hashes.\n\n \"\"\"\n ret = {}\n for repo_name, _ in self._repos.items():\n repo, (old, new) = await self.update_repo(repo_name)\n if old != new:\n ret[repo] = (old, new)\n return ret\n\n async def _load_repos(self, set=False) -> MutableMapping[str, Repo]:\n ret = {}\n for folder in self.repos_folder.iterdir():\n if not folder.is_dir():\n continue\n try:\n ret[folder.stem] = await Repo.from_folder(folder)\n except RuntimeError:\n # Thrown when there's no findable git remote URL\n pass\n\n if set:\n self._repos = ret\n return ret\n", "path": "redbot/cogs/downloader/repo_manager.py"}], "after_files": [{"content": "import asyncio\nimport functools\nimport os\nimport pkgutil\nimport shutil\nfrom concurrent.futures import ThreadPoolExecutor\nfrom pathlib import Path\nfrom subprocess import run as sp_run, PIPE\nfrom sys import executable\nfrom typing import Tuple, MutableMapping, Union\n\nfrom discord.ext import commands\n\nfrom redbot.core import Config\nfrom redbot.core import data_manager\nfrom .errors import *\nfrom .installable import Installable, InstallableType\nfrom .json_mixins import RepoJSONMixin\nfrom .log import log\n\n\nclass Repo(RepoJSONMixin):\n GIT_CLONE = \"git clone -b {branch} {url} {folder}\"\n GIT_CLONE_NO_BRANCH = \"git clone {url} {folder}\"\n GIT_CURRENT_BRANCH = \"git -C {path} rev-parse --abbrev-ref HEAD\"\n GIT_LATEST_COMMIT = \"git -C {path} rev-parse {branch}\"\n GIT_HARD_RESET = \"git -C {path} reset --hard origin/{branch} -q\"\n GIT_PULL = \"git -C {path} pull -q --ff-only\"\n GIT_DIFF_FILE_STATUS = (\"git -C {path} diff --no-commit-id --name-status\"\n \" {old_hash} {new_hash}\")\n GIT_LOG = (\"git -C {path} log --relative-date --reverse {old_hash}..\"\n \" 
{relative_file_path}\")\n GIT_DISCOVER_REMOTE_URL = \"git -C {path} config --get remote.origin.url\"\n\n PIP_INSTALL = \"{python} -m pip install -U -t {target_dir} {reqs}\"\n\n def __init__(self, name: str, url: str, branch: str, folder_path: Path,\n available_modules: Tuple[Installable]=(), loop: asyncio.AbstractEventLoop=None):\n self.url = url\n self.branch = branch\n\n self.name = name\n\n self.folder_path = folder_path\n self.folder_path.mkdir(parents=True, exist_ok=True)\n\n super().__init__(self.folder_path)\n\n self.available_modules = available_modules\n\n self._executor = ThreadPoolExecutor(1)\n\n self._repo_lock = asyncio.Lock()\n\n self._loop = loop\n if self._loop is None:\n self._loop = asyncio.get_event_loop()\n\n @classmethod\n async def convert(cls, ctx: commands.Context, argument: str):\n downloader_cog = ctx.bot.get_cog(\"Downloader\")\n if downloader_cog is None:\n raise commands.CommandError(\"No Downloader cog found.\")\n\n # noinspection PyProtectedMember\n repo_manager = downloader_cog._repo_manager\n poss_repo = repo_manager.get_repo(argument)\n if poss_repo is None:\n raise commands.BadArgument(\"Repo by the name {} does not exist.\".format(argument))\n return poss_repo\n\n def _existing_git_repo(self) -> (bool, Path):\n git_path = self.folder_path / '.git'\n return git_path.exists(), git_path\n\n async def _get_file_update_statuses(\n self, old_hash: str, new_hash: str) -> MutableMapping[str, str]:\n \"\"\"\n Gets the file update status letters for each changed file between\n the two hashes.\n :param old_hash: Pre-update\n :param new_hash: Post-update\n :return: Mapping of filename -> status_letter\n \"\"\"\n p = await self._run(\n self.GIT_DIFF_FILE_STATUS.format(\n path=self.folder_path,\n old_hash=old_hash,\n new_hash=new_hash\n )\n )\n\n if p.returncode != 0:\n raise GitDiffError(\"Git diff failed for repo at path:\"\n \" {}\".format(self.folder_path))\n\n stdout = p.stdout.strip().decode().split('\\n')\n\n ret = {}\n\n for filename in stdout:\n # TODO: filter these filenames by ones in self.available_modules\n status, _, filepath = filename.partition('\\t')\n ret[filepath] = status\n\n return ret\n\n async def _get_commit_notes(self, old_commit_hash: str,\n relative_file_path: str) -> str:\n \"\"\"\n Gets the commit notes from git log.\n :param old_commit_hash: Point in time to start getting messages\n :param relative_file_path: Path relative to the repo folder of the file\n to get messages for.\n :return: Git commit note log\n \"\"\"\n p = await self._run(\n self.GIT_LOG.format(\n path=self.folder_path,\n old_hash=old_commit_hash,\n relative_file_path=relative_file_path\n )\n )\n\n if p.returncode != 0:\n raise GitException(\"An exception occurred while executing git log on\"\n \" this repo: {}\".format(self.folder_path))\n\n return p.stdout.decode().strip()\n\n def _update_available_modules(self) -> Tuple[str]:\n \"\"\"\n Updates the available modules attribute for this repo.\n :return: List of available modules.\n \"\"\"\n curr_modules = []\n \"\"\"\n for name in self.folder_path.iterdir():\n if name.is_dir():\n spec = importlib.util.spec_from_file_location(\n name.stem, location=str(name.parent)\n )\n if spec is not None:\n curr_modules.append(\n Installable(location=name)\n )\n \"\"\"\n for file_finder, name, is_pkg in pkgutil.walk_packages(path=[str(self.folder_path), ]):\n curr_modules.append(\n Installable(location=self.folder_path / name)\n )\n self.available_modules = curr_modules\n\n # noinspection PyTypeChecker\n return 
tuple(self.available_modules)\n\n async def _run(self, *args, **kwargs):\n env = os.environ.copy()\n env['GIT_TERMINAL_PROMPT'] = '0'\n kwargs['env'] = env\n async with self._repo_lock:\n return await self._loop.run_in_executor(\n self._executor,\n functools.partial(sp_run, *args, stdout=PIPE, **kwargs)\n )\n\n async def clone(self) -> Tuple[str]:\n \"\"\"Clone a new repo.\n\n Returns\n -------\n `tuple` of `str`\n All available module names from this repo.\n\n \"\"\"\n exists, path = self._existing_git_repo()\n if exists:\n raise ExistingGitRepo(\n \"A git repo already exists at path: {}\".format(path)\n )\n\n if self.branch is not None:\n p = await self._run(\n self.GIT_CLONE.format(\n branch=self.branch,\n url=self.url,\n folder=self.folder_path\n ).split()\n )\n else:\n p = await self._run(\n self.GIT_CLONE_NO_BRANCH.format(\n url=self.url,\n folder=self.folder_path\n ).split()\n )\n\n if p.returncode != 0:\n raise CloningError(\"Error when running git clone.\")\n\n if self.branch is None:\n self.branch = await self.current_branch()\n\n self._read_info_file()\n\n return self._update_available_modules()\n\n async def current_branch(self) -> str:\n \"\"\"Determine the current branch using git commands.\n\n Returns\n -------\n str\n The current branch name.\n\n \"\"\"\n exists, _ = self._existing_git_repo()\n if not exists:\n raise MissingGitRepo(\n \"A git repo does not exist at path: {}\".format(self.folder_path)\n )\n\n p = await self._run(\n self.GIT_CURRENT_BRANCH.format(\n path=self.folder_path\n ).split()\n )\n\n if p.returncode != 0:\n raise GitException(\"Could not determine current branch\"\n \" at path: {}\".format(self.folder_path))\n\n return p.stdout.decode().strip()\n\n async def current_commit(self, branch: str=None) -> str:\n \"\"\"Determine the current commit hash of the repo.\n\n Parameters\n ----------\n branch : `str`, optional\n Override for repo's branch attribute.\n \n Returns\n -------\n str\n The requested commit hash.\n\n \"\"\"\n if branch is None:\n branch = self.branch\n\n exists, _ = self._existing_git_repo()\n if not exists:\n raise MissingGitRepo(\n \"A git repo does not exist at path: {}\".format(self.folder_path)\n )\n\n p = await self._run(\n self.GIT_LATEST_COMMIT.format(\n path=self.folder_path,\n branch=branch\n ).split()\n )\n\n if p.returncode != 0:\n raise CurrentHashError(\"Unable to determine old commit hash.\")\n\n return p.stdout.decode().strip()\n\n async def current_url(self, folder: Path=None) -> str:\n \"\"\"\n Discovers the FETCH URL for a Git repo.\n\n Parameters\n ----------\n folder : pathlib.Path\n The folder to search for a URL.\n\n Returns\n -------\n str\n The FETCH URL.\n\n Raises\n ------\n RuntimeError\n When the folder does not contain a git repo with a FETCH URL.\n \"\"\"\n if folder is None:\n folder = self.folder_path\n\n p = await self._run(\n Repo.GIT_DISCOVER_REMOTE_URL.format(\n path=folder\n ).split()\n )\n\n if p.returncode != 0:\n raise RuntimeError(\"Unable to discover a repo URL.\")\n\n return p.stdout.decode().strip()\n\n async def hard_reset(self, branch: str=None) -> None:\n \"\"\"Perform a hard reset on the current repo.\n\n Parameters\n ----------\n branch : `str`, optional\n Override for repo branch attribute.\n\n \"\"\"\n if branch is None:\n branch = self.branch\n\n exists, _ = self._existing_git_repo()\n if not exists:\n raise MissingGitRepo(\n \"A git repo does not exist at path: {}\".format(self.folder_path)\n )\n\n p = await self._run(\n self.GIT_HARD_RESET.format(\n path=self.folder_path,\n 
branch=branch\n ).split()\n )\n\n if p.returncode != 0:\n raise HardResetError(\"Some error occurred when trying to\"\n \" execute a hard reset on the repo at\"\n \" the following path: {}\".format(self.folder_path))\n\n async def update(self) -> (str, str):\n \"\"\"Update the current branch of this repo.\n\n Returns\n -------\n `tuple` of `str`\n :py:code`(old commit hash, new commit hash)`\n\n \"\"\"\n curr_branch = await self.current_branch()\n old_commit = await self.current_commit(branch=curr_branch)\n\n await self.hard_reset(branch=curr_branch)\n\n p = await self._run(\n self.GIT_PULL.format(\n path=self.folder_path\n ).split()\n )\n\n if p.returncode != 0:\n raise UpdateError(\"Git pull returned a non zero exit code\"\n \" for the repo located at path: {}\".format(self.folder_path))\n\n new_commit = await self.current_commit(branch=curr_branch)\n\n self._update_available_modules()\n self._read_info_file()\n\n return old_commit, new_commit\n\n async def install_cog(self, cog: Installable, target_dir: Path) -> bool:\n \"\"\"Install a cog to the target directory.\n\n Parameters\n ----------\n cog : Installable\n The package to install.\n target_dir : pathlib.Path\n The target directory for the cog installation.\n\n Returns\n -------\n bool\n The success of the installation.\n\n \"\"\"\n if cog not in self.available_cogs:\n raise DownloaderException(\"That cog does not exist in this repo\")\n\n if not target_dir.is_dir():\n raise ValueError(\"That target directory is not actually a directory.\")\n\n if not target_dir.exists():\n raise ValueError(\"That target directory does not exist.\")\n\n return await cog.copy_to(target_dir=target_dir)\n\n async def install_libraries(self, target_dir: Path, libraries: Tuple[Installable]=()) -> bool:\n \"\"\"Install shared libraries to the target directory.\n\n If :code:`libraries` is not specified, all shared libraries in the repo\n will be installed.\n\n Parameters\n ----------\n target_dir : pathlib.Path\n Directory to install shared libraries to.\n libraries : `tuple` of `Installable`\n A subset of available libraries.\n \n Returns\n -------\n bool\n The success of the installation.\n\n \"\"\"\n if len(libraries) > 0:\n if not all([i in self.available_libraries for i in libraries]):\n raise ValueError(\"Some given libraries are not available in this repo.\")\n else:\n libraries = self.available_libraries\n\n if len(libraries) > 0:\n ret = True\n for lib in libraries:\n ret = ret and await lib.copy_to(target_dir=target_dir)\n return ret\n return True\n\n async def install_requirements(self, cog: Installable, target_dir: Path) -> bool:\n \"\"\"Install a cog's requirements.\n \n Requirements will be installed via pip directly into\n :code:`target_dir`.\n\n Parameters\n ----------\n cog : Installable\n Cog for which to install requirements.\n target_dir : pathlib.Path\n Path to directory where requirements are to be installed.\n\n Returns\n -------\n bool\n Success of the installation.\n\n \"\"\"\n if not target_dir.is_dir():\n raise ValueError(\"Target directory is not a directory.\")\n target_dir.mkdir(parents=True, exist_ok=True)\n\n return await self.install_raw_requirements(cog.requirements, target_dir)\n\n async def install_raw_requirements(self, requirements: Tuple[str], target_dir: Path) -> bool:\n \"\"\"Install a list of requirements using pip.\n\n Parameters\n ----------\n requirements : `tuple` of `str`\n List of requirement names to install via pip.\n target_dir : pathlib.Path\n Path to directory where requirements are to be installed.\n\n 
Returns\n -------\n bool\n Success of the installation\n\n \"\"\"\n if len(requirements) == 0:\n return True\n\n # TODO: Check and see if any of these modules are already available\n\n p = await self._run(\n self.PIP_INSTALL.format(\n python=executable,\n target_dir=target_dir,\n reqs=\" \".join(requirements)\n ).split()\n )\n\n if p.returncode != 0:\n log.error(\"Something went wrong when installing\"\n \" the following requirements:\"\n \" {}\".format(\", \".join(requirements)))\n return False\n return True\n\n @property\n def available_cogs(self) -> Tuple[Installable]:\n \"\"\"`tuple` of `installable` : All available cogs in this Repo.\n \n This excludes hidden or shared packages.\n \"\"\"\n # noinspection PyTypeChecker\n return tuple(\n [m for m in self.available_modules\n if m.type == InstallableType.COG and not m.hidden]\n )\n\n @property\n def available_libraries(self) -> Tuple[Installable]:\n \"\"\"`tuple` of `installable` : All available shared libraries in this\n Repo.\n \"\"\"\n # noinspection PyTypeChecker\n return tuple(\n [m for m in self.available_modules\n if m.type == InstallableType.SHARED_LIBRARY]\n )\n\n @classmethod\n async def from_folder(cls, folder: Path):\n repo = cls(name=folder.stem, branch=\"\", url=\"\", folder_path=folder)\n repo.branch = await repo.current_branch()\n repo.url = await repo.current_url()\n repo._update_available_modules()\n return repo\n\n\nclass RepoManager:\n def __init__(self, downloader_config: Config):\n self.downloader_config = downloader_config\n\n self._repos = {}\n\n loop = asyncio.get_event_loop()\n loop.create_task(self._load_repos(set=True)) # str_name: Repo\n\n @property\n def repos_folder(self) -> Path:\n data_folder = data_manager.cog_data_path(self)\n return data_folder / 'repos'\n\n def does_repo_exist(self, name: str) -> bool:\n return name in self._repos\n\n @staticmethod\n def validate_and_normalize_repo_name(name: str) -> str:\n if not name.isidentifier():\n raise InvalidRepoName(\"Not a valid Python variable name.\")\n return name.lower()\n\n async def add_repo(self, url: str, name: str, branch: str=\"master\") -> Repo:\n \"\"\"Add and clone a git repository.\n\n Parameters\n ----------\n url : str\n URL to the git repository.\n name : str\n Internal name of the repository.\n branch : str\n Name of the default branch to checkout into.\n\n Returns\n -------\n Repo\n New Repo object representing the cloned repository.\n\n \"\"\"\n name = self.validate_and_normalize_repo_name(name)\n if self.does_repo_exist(name):\n raise InvalidRepoName(\n \"That repo name you provided already exists.\"\n \" Please choose another.\"\n )\n\n # noinspection PyTypeChecker\n r = Repo(url=url, name=name, branch=branch,\n folder_path=self.repos_folder / name)\n await r.clone()\n\n self._repos[name] = r\n\n return r\n\n def get_repo(self, name: str) -> Union[Repo, None]:\n \"\"\"Get a Repo object for a repository.\n\n Parameters\n ----------\n name : str\n The name of the repository to retrieve.\n\n Returns\n -------\n `Repo` or `None`\n Repo object for the repository, if it exists.\n\n \"\"\"\n return self._repos.get(name, None)\n\n def get_all_repo_names(self) -> Tuple[str]:\n \"\"\"Get all repo names.\n\n Returns\n -------\n `tuple` of `str`\n\n \"\"\"\n # noinspection PyTypeChecker\n return tuple(self._repos.keys())\n\n async def delete_repo(self, name: str):\n \"\"\"Delete a repository and its folders.\n\n Parameters\n ----------\n name : str\n The name of the repository to delete.\n\n Raises\n ------\n MissingGitRepo\n If the repo does not 
exist.\n\n \"\"\"\n repo = self.get_repo(name)\n if repo is None:\n raise MissingGitRepo(\"There is no repo with the name {}\".format(name))\n\n shutil.rmtree(str(repo.folder_path))\n\n try:\n del self._repos[name]\n except KeyError:\n pass\n\n async def update_repo(self, repo_name: str) -> MutableMapping[Repo, Tuple[str, str]]:\n repo = self._repos[repo_name]\n old, new = await repo.update()\n return {repo: (old, new)}\n\n async def update_all_repos(self) -> MutableMapping[Repo, Tuple[str, str]]:\n \"\"\"Call `Repo.update` on all repositories.\n\n Returns\n -------\n dict\n A mapping of `Repo` objects that received new commits to a `tuple`\n of `str` containing old and new commit hashes.\n\n \"\"\"\n ret = {}\n for repo_name, _ in self._repos.items():\n repo, (old, new) = (await self.update_repo(repo_name)).popitem()\n if old != new:\n ret[repo] = (old, new)\n return ret\n\n async def _load_repos(self, set=False) -> MutableMapping[str, Repo]:\n ret = {}\n for folder in self.repos_folder.iterdir():\n if not folder.is_dir():\n continue\n try:\n ret[folder.stem] = await Repo.from_folder(folder)\n except RuntimeError:\n # Thrown when there's no findable git remote URL\n pass\n\n if set:\n self._repos = ret\n return ret\n", "path": "redbot/cogs/downloader/repo_manager.py"}]} |
gh_patches_debug_1024 | rasdani/github-patches | git_diff | celery__celery-1970 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Task.throws cannot be a list, misleading documentation
The check at https://github.com/celery/celery/blob/b35569090c5cabfa784b00a68b55c7628fee813d/celery/worker/job.py#L456 throws this error when `Task.throws` is a list:
``` shell
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
```
The documentation for `Task.throws` is misleading because it states that `throws` can be a `List/tuple`:
https://github.com/celery/celery/blob/b35569090c5cabfa784b00a68b55c7628fee813d/celery/app/task.py#L316-L322
--- END ISSUE ---
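For context, the failure comes from Python's `isinstance` itself, which only accepts a class or a tuple of classes as its second argument. A minimal standalone sketch reproducing the behaviour (the `DummyTask` name is illustrative, not celery's worker code):

```python
# Minimal reproduction of the check failing when `throws` is a list.
class DummyTask:
    throws = [KeyError, ValueError]        # a list, as the old docstring seems to allow

exc = KeyError("missing")

try:
    isinstance(exc, DummyTask.throws)      # TypeError: arg 2 must be a class, type, or tuple
except TypeError as err:
    print(err)

DummyTask.throws = (KeyError, ValueError)  # a tuple is what isinstance() actually accepts
print(isinstance(exc, DummyTask.throws))   # True
```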
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `celery/app/task.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 celery.app.task
4 ~~~~~~~~~~~~~~~
5
6 Task Implementation: Task request context, and the base task class.
7
8 """
9 from __future__ import absolute_import
10
11 import sys
12
13 from billiard.einfo import ExceptionInfo
14
15 from celery import current_app
16 from celery import states
17 from celery._state import _task_stack
18 from celery.canvas import signature
19 from celery.exceptions import MaxRetriesExceededError, Reject, Retry
20 from celery.five import class_property, items, with_metaclass
21 from celery.local import Proxy
22 from celery.result import EagerResult
23 from celery.utils import gen_task_name, fun_takes_kwargs, uuid, maybe_reraise
24 from celery.utils.functional import mattrgetter, maybe_list
25 from celery.utils.imports import instantiate
26 from celery.utils.mail import ErrorMail
27
28 from .annotations import resolve_all as resolve_all_annotations
29 from .registry import _unpickle_task_v2
30 from .utils import appstr
31
32 __all__ = ['Context', 'Task']
33
34 #: extracts attributes related to publishing a message from an object.
35 extract_exec_options = mattrgetter(
36 'queue', 'routing_key', 'exchange', 'priority', 'expires',
37 'serializer', 'delivery_mode', 'compression', 'time_limit',
38 'soft_time_limit', 'immediate', 'mandatory', # imm+man is deprecated
39 )
40
41 # We take __repr__ very seriously around here ;)
42 R_BOUND_TASK = '<class {0.__name__} of {app}{flags}>'
43 R_UNBOUND_TASK = '<unbound {0.__name__}{flags}>'
44 R_SELF_TASK = '<@task {0.name} bound to other {0.__self__}>'
45 R_INSTANCE = '<@task: {0.name} of {app}{flags}>'
46
47
48 class _CompatShared(object):
49
50 def __init__(self, name, cons):
51 self.name = name
52 self.cons = cons
53
54 def __hash__(self):
55 return hash(self.name)
56
57 def __repr__(self):
58 return '<OldTask: %r>' % (self.name, )
59
60 def __call__(self, app):
61 return self.cons(app)
62
63
64 def _strflags(flags, default=''):
65 if flags:
66 return ' ({0})'.format(', '.join(flags))
67 return default
68
69
70 def _reprtask(task, fmt=None, flags=None):
71 flags = list(flags) if flags is not None else []
72 flags.append('v2 compatible') if task.__v2_compat__ else None
73 if not fmt:
74 fmt = R_BOUND_TASK if task._app else R_UNBOUND_TASK
75 return fmt.format(
76 task, flags=_strflags(flags),
77 app=appstr(task._app) if task._app else None,
78 )
79
80
81 class Context(object):
82 # Default context
83 logfile = None
84 loglevel = None
85 hostname = None
86 id = None
87 args = None
88 kwargs = None
89 retries = 0
90 eta = None
91 expires = None
92 is_eager = False
93 headers = None
94 delivery_info = None
95 reply_to = None
96 correlation_id = None
97 taskset = None # compat alias to group
98 group = None
99 chord = None
100 utc = None
101 called_directly = True
102 callbacks = None
103 errbacks = None
104 timelimit = None
105 _children = None # see property
106 _protected = 0
107
108 def __init__(self, *args, **kwargs):
109 self.update(*args, **kwargs)
110
111 def update(self, *args, **kwargs):
112 return self.__dict__.update(*args, **kwargs)
113
114 def clear(self):
115 return self.__dict__.clear()
116
117 def get(self, key, default=None):
118 return getattr(self, key, default)
119
120 def __repr__(self):
121 return '<Context: {0!r}>'.format(vars(self))
122
123 @property
124 def children(self):
125         # children must be an empty list for every thread
126 if self._children is None:
127 self._children = []
128 return self._children
129
130
131 class TaskType(type):
132 """Meta class for tasks.
133
134 Automatically registers the task in the task registry (except
135     if the :attr:`Task.abstract` attribute is set).
136
137 If no :attr:`Task.name` attribute is provided, then the name is generated
138 from the module and class name.
139
140 """
141 _creation_count = {} # used by old non-abstract task classes
142
143 def __new__(cls, name, bases, attrs):
144 new = super(TaskType, cls).__new__
145 task_module = attrs.get('__module__') or '__main__'
146
147 # - Abstract class: abstract attribute should not be inherited.
148 abstract = attrs.pop('abstract', None)
149 if abstract or not attrs.get('autoregister', True):
150 return new(cls, name, bases, attrs)
151
152 # The 'app' attribute is now a property, with the real app located
153 # in the '_app' attribute. Previously this was a regular attribute,
154 # so we should support classes defining it.
155 app = attrs.pop('_app', None) or attrs.pop('app', None)
156
157         # Attempt to inherit app from one of the bases
158 if not isinstance(app, Proxy) and app is None:
159 for base in bases:
160 if getattr(base, '_app', None):
161 app = base._app
162 break
163 else:
164 app = current_app._get_current_object()
165 attrs['_app'] = app
166
167 # - Automatically generate missing/empty name.
168 task_name = attrs.get('name')
169 if not task_name:
170 attrs['name'] = task_name = gen_task_name(app, name, task_module)
171
172 if not attrs.get('_decorated'):
173 # non decorated tasks must also be shared in case
174 # an app is created multiple times due to modules
175 # imported under multiple names.
176 # Hairy stuff, here to be compatible with 2.x.
177 # People should not use non-abstract task classes anymore,
178 # use the task decorator.
179 from celery.app.builtins import shared_task
180 unique_name = '.'.join([task_module, name])
181 if unique_name not in cls._creation_count:
182 # the creation count is used as a safety
183 # so that the same task is not added recursively
184 # to the set of constructors.
185 cls._creation_count[unique_name] = 1
186 shared_task(_CompatShared(
187 unique_name,
188 lambda app: TaskType.__new__(cls, name, bases,
189 dict(attrs, _app=app)),
190 ))
191
192 # - Create and register class.
193 # Because of the way import happens (recursively)
194 # we may or may not be the first time the task tries to register
195 # with the framework. There should only be one class for each task
196 # name, so we always return the registered version.
197 tasks = app._tasks
198 if task_name not in tasks:
199 tasks.register(new(cls, name, bases, attrs))
200 instance = tasks[task_name]
201 instance.bind(app)
202 return instance.__class__
203
204 def __repr__(cls):
205 return _reprtask(cls)
206
207
208 @with_metaclass(TaskType)
209 class Task(object):
210 """Task base class.
211
212 When called tasks apply the :meth:`run` method. This method must
213 be defined by all tasks (that is unless the :meth:`__call__` method
214 is overridden).
215
216 """
217 __trace__ = None
218 __v2_compat__ = False # set by old base in celery.task.base
219
220 ErrorMail = ErrorMail
221 MaxRetriesExceededError = MaxRetriesExceededError
222
223 #: Execution strategy used, or the qualified name of one.
224 Strategy = 'celery.worker.strategy:default'
225
226 #: This is the instance bound to if the task is a method of a class.
227 __self__ = None
228
229 #: The application instance associated with this task class.
230 _app = None
231
232 #: Name of the task.
233 name = None
234
235 #: If :const:`True` the task is an abstract base class.
236 abstract = True
237
238 #: If disabled the worker will not forward magic keyword arguments.
239 #: Deprecated and scheduled for removal in v4.0.
240 accept_magic_kwargs = False
241
242 #: Maximum number of retries before giving up. If set to :const:`None`,
243 #: it will **never** stop retrying.
244 max_retries = 3
245
246 #: Default time in seconds before a retry of the task should be
247 #: executed. 3 minutes by default.
248 default_retry_delay = 3 * 60
249
250 #: Rate limit for this task type. Examples: :const:`None` (no rate
251 #: limit), `'100/s'` (hundred tasks a second), `'100/m'` (hundred tasks
252 #: a minute),`'100/h'` (hundred tasks an hour)
253 rate_limit = None
254
255 #: If enabled the worker will not store task state and return values
256 #: for this task. Defaults to the :setting:`CELERY_IGNORE_RESULT`
257 #: setting.
258 ignore_result = None
259
260 #: If enabled the request will keep track of subtasks started by
261 #: this task, and this information will be sent with the result
262 #: (``result.children``).
263 trail = True
264
265 #: When enabled errors will be stored even if the task is otherwise
266 #: configured to ignore results.
267 store_errors_even_if_ignored = None
268
269 #: If enabled an email will be sent to :setting:`ADMINS` whenever a task
270 #: of this type fails.
271 send_error_emails = None
272
273 #: The name of a serializer that are registered with
274 #: :mod:`kombu.serialization.registry`. Default is `'pickle'`.
275 serializer = None
276
277 #: Hard time limit.
278 #: Defaults to the :setting:`CELERYD_TASK_TIME_LIMIT` setting.
279 time_limit = None
280
281 #: Soft time limit.
282 #: Defaults to the :setting:`CELERYD_TASK_SOFT_TIME_LIMIT` setting.
283 soft_time_limit = None
284
285 #: The result store backend used for this task.
286 backend = None
287
288 #: If disabled this task won't be registered automatically.
289 autoregister = True
290
291 #: If enabled the task will report its status as 'started' when the task
292 #: is executed by a worker. Disabled by default as the normal behaviour
293 #: is to not report that level of granularity. Tasks are either pending,
294 #: finished, or waiting to be retried.
295 #:
296 #: Having a 'started' status can be useful for when there are long
297 #: running tasks and there is a need to report which task is currently
298 #: running.
299 #:
300 #: The application default can be overridden using the
301 #: :setting:`CELERY_TRACK_STARTED` setting.
302 track_started = None
303
304 #: When enabled messages for this task will be acknowledged **after**
305 #: the task has been executed, and not *just before* which is the
306 #: default behavior.
307 #:
308 #: Please note that this means the task may be executed twice if the
309 #: worker crashes mid execution (which may be acceptable for some
310 #: applications).
311 #:
312 #: The application default can be overridden with the
313 #: :setting:`CELERY_ACKS_LATE` setting.
314 acks_late = None
315
316 #: List/tuple of expected exceptions.
317 #:
318 #: These are errors that are expected in normal operation
319 #: and that should not be regarded as a real error by the worker.
320 #: Currently this means that the state will be updated to an error
321 #: state, but the worker will not log the event as an error.
322 throws = ()
323
324 #: Default task expiry time.
325 expires = None
326
327 #: Some may expect a request to exist even if the task has not been
328 #: called. This should probably be deprecated.
329 _default_request = None
330
331 _exec_options = None
332
333 __bound__ = False
334
335 from_config = (
336 ('send_error_emails', 'CELERY_SEND_TASK_ERROR_EMAILS'),
337 ('serializer', 'CELERY_TASK_SERIALIZER'),
338 ('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),
339 ('track_started', 'CELERY_TRACK_STARTED'),
340 ('acks_late', 'CELERY_ACKS_LATE'),
341 ('ignore_result', 'CELERY_IGNORE_RESULT'),
342 ('store_errors_even_if_ignored',
343 'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),
344 )
345
346 _backend = None # set by backend property.
347
348 __bound__ = False
349
350 # - Tasks are lazily bound, so that configuration is not set
351 # - until the task is actually used
352
353 @classmethod
354 def bind(self, app):
355 was_bound, self.__bound__ = self.__bound__, True
356 self._app = app
357 conf = app.conf
358 self._exec_options = None # clear option cache
359
360 for attr_name, config_name in self.from_config:
361 if getattr(self, attr_name, None) is None:
362 setattr(self, attr_name, conf[config_name])
363 if self.accept_magic_kwargs is None:
364 self.accept_magic_kwargs = app.accept_magic_kwargs
365
366 # decorate with annotations from config.
367 if not was_bound:
368 self.annotate()
369
370 from celery.utils.threads import LocalStack
371 self.request_stack = LocalStack()
372
373 # PeriodicTask uses this to add itself to the PeriodicTask schedule.
374 self.on_bound(app)
375
376 return app
377
378 @classmethod
379 def on_bound(self, app):
380 """This method can be defined to do additional actions when the
381 task class is bound to an app."""
382 pass
383
384 @classmethod
385 def _get_app(self):
386 if self._app is None:
387 self._app = current_app
388 if not self.__bound__:
389 # The app property's __set__ method is not called
390 # if Task.app is set (on the class), so must bind on use.
391 self.bind(self._app)
392 return self._app
393 app = class_property(_get_app, bind)
394
395 @classmethod
396 def annotate(self):
397 for d in resolve_all_annotations(self.app.annotations, self):
398 for key, value in items(d):
399 if key.startswith('@'):
400 self.add_around(key[1:], value)
401 else:
402 setattr(self, key, value)
403
404 @classmethod
405 def add_around(self, attr, around):
406 orig = getattr(self, attr)
407 if getattr(orig, '__wrapped__', None):
408 orig = orig.__wrapped__
409 meth = around(orig)
410 meth.__wrapped__ = orig
411 setattr(self, attr, meth)
412
413 def __call__(self, *args, **kwargs):
414 _task_stack.push(self)
415 self.push_request()
416 try:
417 # add self if this is a bound task
418 if self.__self__ is not None:
419 return self.run(self.__self__, *args, **kwargs)
420 return self.run(*args, **kwargs)
421 finally:
422 self.pop_request()
423 _task_stack.pop()
424
425 def __reduce__(self):
426         # - tasks are pickled into the name of the task only, and the receiver
427 # - simply grabs it from the local registry.
428 # - in later versions the module of the task is also included,
429 # - and the receiving side tries to import that module so that
430 # - it will work even if the task has not been registered.
431 mod = type(self).__module__
432 mod = mod if mod and mod in sys.modules else None
433 return (_unpickle_task_v2, (self.name, mod), None)
434
435 def run(self, *args, **kwargs):
436 """The body of the task executed by workers."""
437 raise NotImplementedError('Tasks must define the run method.')
438
439 def start_strategy(self, app, consumer, **kwargs):
440 return instantiate(self.Strategy, self, app, consumer, **kwargs)
441
442 def delay(self, *args, **kwargs):
443 """Star argument version of :meth:`apply_async`.
444
445 Does not support the extra options enabled by :meth:`apply_async`.
446
447 :param \*args: positional arguments passed on to the task.
448 :param \*\*kwargs: keyword arguments passed on to the task.
449
450 :returns :class:`celery.result.AsyncResult`:
451
452 """
453 return self.apply_async(args, kwargs)
454
455 def apply_async(self, args=None, kwargs=None, task_id=None, producer=None,
456 link=None, link_error=None, **options):
457 """Apply tasks asynchronously by sending a message.
458
459 :keyword args: The positional arguments to pass on to the
460 task (a :class:`list` or :class:`tuple`).
461
462 :keyword kwargs: The keyword arguments to pass on to the
463 task (a :class:`dict`)
464
465 :keyword countdown: Number of seconds into the future that the
466 task should execute. Defaults to immediate
467 execution.
468
469 :keyword eta: A :class:`~datetime.datetime` object describing
470 the absolute time and date of when the task should
471 be executed. May not be specified if `countdown`
472 is also supplied.
473
474 :keyword expires: Either a :class:`int`, describing the number of
475 seconds, or a :class:`~datetime.datetime` object
476 that describes the absolute time and date of when
477 the task should expire. The task will not be
478 executed after the expiration time.
479
480 :keyword connection: Re-use existing broker connection instead
481 of establishing a new one.
482
483 :keyword retry: If enabled sending of the task message will be retried
484 in the event of connection loss or failure. Default
485 is taken from the :setting:`CELERY_TASK_PUBLISH_RETRY`
486 setting. Note you need to handle the
487 producer/connection manually for this to work.
488
489 :keyword retry_policy: Override the retry policy used. See the
490 :setting:`CELERY_TASK_PUBLISH_RETRY` setting.
491
492 :keyword routing_key: Custom routing key used to route the task to a
493 worker server. If in combination with a
494 ``queue`` argument only used to specify custom
495 routing keys to topic exchanges.
496
497 :keyword queue: The queue to route the task to. This must be a key
498 present in :setting:`CELERY_QUEUES`, or
499 :setting:`CELERY_CREATE_MISSING_QUEUES` must be
500 enabled. See :ref:`guide-routing` for more
501 information.
502
503 :keyword exchange: Named custom exchange to send the task to.
504 Usually not used in combination with the ``queue``
505 argument.
506
507 :keyword priority: The task priority, a number between 0 and 9.
508 Defaults to the :attr:`priority` attribute.
509
510 :keyword serializer: A string identifying the default
511 serialization method to use. Can be `pickle`,
512 `json`, `yaml`, `msgpack` or any custom
513 serialization method that has been registered
514 with :mod:`kombu.serialization.registry`.
515 Defaults to the :attr:`serializer` attribute.
516
517 :keyword compression: A string identifying the compression method
518 to use. Can be one of ``zlib``, ``bzip2``,
519 or any custom compression methods registered with
520 :func:`kombu.compression.register`. Defaults to
521 the :setting:`CELERY_MESSAGE_COMPRESSION`
522 setting.
523 :keyword link: A single, or a list of tasks to apply if the
524 task exits successfully.
525 :keyword link_error: A single, or a list of tasks to apply
526 if an error occurs while executing the task.
527
528         :keyword producer: :class:`~@amqp.TaskProducer` instance to use.
529 :keyword add_to_parent: If set to True (default) and the task
530 is applied while executing another task, then the result
531 will be appended to the parent tasks ``request.children``
532 attribute. Trailing can also be disabled by default using the
533 :attr:`trail` attribute
534 :keyword publisher: Deprecated alias to ``producer``.
535
536 Also supports all keyword arguments supported by
537 :meth:`kombu.Producer.publish`.
538
539 .. note::
540 If the :setting:`CELERY_ALWAYS_EAGER` setting is set, it will
541 be replaced by a local :func:`apply` call instead.
542
543 """
544 app = self._get_app()
545 if app.conf.CELERY_ALWAYS_EAGER:
546 return self.apply(args, kwargs, task_id=task_id or uuid(),
547 link=link, link_error=link_error, **options)
548 # add 'self' if this is a "task_method".
549 if self.__self__ is not None:
550 args = args if isinstance(args, tuple) else tuple(args or ())
551 args = (self.__self__, ) + args
552 return app.send_task(
553 self.name, args, kwargs, task_id=task_id, producer=producer,
554 link=link, link_error=link_error, result_cls=self.AsyncResult,
555 **dict(self._get_exec_options(), **options)
556 )
557
558 def subtask_from_request(self, request=None, args=None, kwargs=None,
559 queue=None, **extra_options):
560 request = self.request if request is None else request
561 args = request.args if args is None else args
562 kwargs = request.kwargs if kwargs is None else kwargs
563 limit_hard, limit_soft = request.timelimit or (None, None)
564 options = {
565 'task_id': request.id,
566 'link': request.callbacks,
567 'link_error': request.errbacks,
568 'group_id': request.group,
569 'chord': request.chord,
570 'soft_time_limit': limit_soft,
571 'time_limit': limit_hard,
572 }
573 options.update(
574 {'queue': queue} if queue else (request.delivery_info or {})
575 )
576 return self.subtask(args, kwargs, options, type=self, **extra_options)
577
578 def retry(self, args=None, kwargs=None, exc=None, throw=True,
579 eta=None, countdown=None, max_retries=None, **options):
580 """Retry the task.
581
582 :param args: Positional arguments to retry with.
583 :param kwargs: Keyword arguments to retry with.
584 :keyword exc: Custom exception to report when the max restart
585 limit has been exceeded (default:
586 :exc:`~@MaxRetriesExceededError`).
587
588 If this argument is set and retry is called while
589 an exception was raised (``sys.exc_info()`` is set)
590 it will attempt to reraise the current exception.
591
592 If no exception was raised it will raise the ``exc``
593 argument provided.
594 :keyword countdown: Time in seconds to delay the retry for.
595 :keyword eta: Explicit time and date to run the retry at
596 (must be a :class:`~datetime.datetime` instance).
597 :keyword max_retries: If set, overrides the default retry limit.
598 :keyword time_limit: If set, overrides the default time limit.
599 :keyword soft_time_limit: If set, overrides the default soft
600 time limit.
601 :keyword \*\*options: Any extra options to pass on to
602 meth:`apply_async`.
603 :keyword throw: If this is :const:`False`, do not raise the
604 :exc:`~@Retry` exception,
605 that tells the worker to mark the task as being
606 retried. Note that this means the task will be
607 marked as failed if the task raises an exception,
608 or successful if it returns.
609
610 :raises celery.exceptions.Retry: To tell the worker that
611 the task has been re-sent for retry. This always happens,
612 unless the `throw` keyword argument has been explicitly set
613 to :const:`False`, and is considered normal operation.
614
615 **Example**
616
617 .. code-block:: python
618
619 >>> from imaginary_twitter_lib import Twitter
620 >>> from proj.celery import app
621
622 >>> @app.task()
623 ... def tweet(auth, message):
624 ... twitter = Twitter(oauth=auth)
625 ... try:
626 ... twitter.post_status_update(message)
627 ... except twitter.FailWhale as exc:
628 ... # Retry in 5 minutes.
629 ... raise tweet.retry(countdown=60 * 5, exc=exc)
630
631 Although the task will never return above as `retry` raises an
632 exception to notify the worker, we use `raise` in front of the retry
633 to convey that the rest of the block will not be executed.
634
635 """
636 request = self.request
637 retries = request.retries + 1
638 max_retries = self.max_retries if max_retries is None else max_retries
639
640 # Not in worker or emulated by (apply/always_eager),
641 # so just raise the original exception.
642 if request.called_directly:
643 maybe_reraise() # raise orig stack if PyErr_Occurred
644 raise exc or Retry('Task can be retried', None)
645
646 if not eta and countdown is None:
647 countdown = self.default_retry_delay
648
649 is_eager = request.is_eager
650 S = self.subtask_from_request(
651 request, args, kwargs,
652 countdown=countdown, eta=eta, retries=retries,
653 **options
654 )
655
656 if max_retries is not None and retries > max_retries:
657 if exc:
658 # first try to reraise the original exception
659 maybe_reraise()
660 # or if not in an except block then raise the custom exc.
661 raise exc()
662 raise self.MaxRetriesExceededError(
663 "Can't retry {0}[{1}] args:{2} kwargs:{3}".format(
664 self.name, request.id, S.args, S.kwargs))
665
666 # If task was executed eagerly using apply(),
667 # then the retry must also be executed eagerly.
668 try:
669 S.apply().get() if is_eager else S.apply_async()
670 except Exception as exc:
671 if is_eager:
672 raise
673 raise Reject(exc, requeue=True)
674 ret = Retry(exc=exc, when=eta or countdown)
675 if throw:
676 raise ret
677 return ret
678
679 def apply(self, args=None, kwargs=None,
680 link=None, link_error=None, **options):
681 """Execute this task locally, by blocking until the task returns.
682
683 :param args: positional arguments passed on to the task.
684 :param kwargs: keyword arguments passed on to the task.
685 :keyword throw: Re-raise task exceptions. Defaults to
686 the :setting:`CELERY_EAGER_PROPAGATES_EXCEPTIONS`
687 setting.
688
689 :rtype :class:`celery.result.EagerResult`:
690
691 """
692 # trace imports Task, so need to import inline.
693 from celery.app.trace import eager_trace_task
694
695 app = self._get_app()
696 args = args or ()
697 # add 'self' if this is a bound method.
698 if self.__self__ is not None:
699 args = (self.__self__, ) + tuple(args)
700 kwargs = kwargs or {}
701 task_id = options.get('task_id') or uuid()
702 retries = options.get('retries', 0)
703 throw = app.either('CELERY_EAGER_PROPAGATES_EXCEPTIONS',
704 options.pop('throw', None))
705
706 # Make sure we get the task instance, not class.
707 task = app._tasks[self.name]
708
709 request = {'id': task_id,
710 'retries': retries,
711 'is_eager': True,
712 'logfile': options.get('logfile'),
713 'loglevel': options.get('loglevel', 0),
714 'callbacks': maybe_list(link),
715 'errbacks': maybe_list(link_error),
716 'headers': options.get('headers'),
717 'delivery_info': {'is_eager': True}}
718 if self.accept_magic_kwargs:
719 default_kwargs = {'task_name': task.name,
720 'task_id': task_id,
721 'task_retries': retries,
722 'task_is_eager': True,
723 'logfile': options.get('logfile'),
724 'loglevel': options.get('loglevel', 0),
725 'delivery_info': {'is_eager': True}}
726 supported_keys = fun_takes_kwargs(task.run, default_kwargs)
727 extend_with = {
728 key: val for key, val in items(default_kwargs)
729 if key in supported_keys
730 }
731 kwargs.update(extend_with)
732
733 tb = None
734 retval, info = eager_trace_task(task, task_id, args, kwargs,
735 app=self._get_app(),
736 request=request, propagate=throw)
737 if isinstance(retval, ExceptionInfo):
738 retval, tb = retval.exception, retval.traceback
739 state = states.SUCCESS if info is None else info.state
740 return EagerResult(task_id, retval, state, traceback=tb)
741
742 def AsyncResult(self, task_id, **kwargs):
743 """Get AsyncResult instance for this kind of task.
744
745 :param task_id: Task id to get result for.
746
747 """
748 return self._get_app().AsyncResult(task_id, backend=self.backend,
749 task_name=self.name, **kwargs)
750
751 def subtask(self, args=None, *starargs, **starkwargs):
752 """Return :class:`~celery.signature` object for
753 this task, wrapping arguments and execution options
754 for a single task invocation."""
755 starkwargs.setdefault('app', self.app)
756 return signature(self, args, *starargs, **starkwargs)
757
758 def s(self, *args, **kwargs):
759 """``.s(*a, **k) -> .subtask(a, k)``"""
760 return self.subtask(args, kwargs)
761
762 def si(self, *args, **kwargs):
763 """``.si(*a, **k) -> .subtask(a, k, immutable=True)``"""
764 return self.subtask(args, kwargs, immutable=True)
765
766 def chunks(self, it, n):
767 """Creates a :class:`~celery.canvas.chunks` task for this task."""
768 from celery import chunks
769 return chunks(self.s(), it, n, app=self.app)
770
771 def map(self, it):
772 """Creates a :class:`~celery.canvas.xmap` task from ``it``."""
773 from celery import xmap
774 return xmap(self.s(), it, app=self.app)
775
776 def starmap(self, it):
777 """Creates a :class:`~celery.canvas.xstarmap` task from ``it``."""
778 from celery import xstarmap
779 return xstarmap(self.s(), it, app=self.app)
780
781 def send_event(self, type_, **fields):
782 req = self.request
783 with self.app.events.default_dispatcher(hostname=req.hostname) as d:
784 return d.send(type_, uuid=req.id, **fields)
785
786 def update_state(self, task_id=None, state=None, meta=None):
787 """Update task state.
788
789 :keyword task_id: Id of the task to update, defaults to the
790 id of the current task
791 :keyword state: New state (:class:`str`).
792 :keyword meta: State metadata (:class:`dict`).
793
794
795
796 """
797 if task_id is None:
798 task_id = self.request.id
799 self.backend.store_result(task_id, meta, state)
800
801 def on_success(self, retval, task_id, args, kwargs):
802 """Success handler.
803
804 Run by the worker if the task executes successfully.
805
806 :param retval: The return value of the task.
807 :param task_id: Unique id of the executed task.
808 :param args: Original arguments for the executed task.
809 :param kwargs: Original keyword arguments for the executed task.
810
811 The return value of this handler is ignored.
812
813 """
814 pass
815
816 def on_retry(self, exc, task_id, args, kwargs, einfo):
817 """Retry handler.
818
819 This is run by the worker when the task is to be retried.
820
821 :param exc: The exception sent to :meth:`retry`.
822 :param task_id: Unique id of the retried task.
823 :param args: Original arguments for the retried task.
824 :param kwargs: Original keyword arguments for the retried task.
825
826 :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
827 instance, containing the traceback.
828
829 The return value of this handler is ignored.
830
831 """
832 pass
833
834 def on_failure(self, exc, task_id, args, kwargs, einfo):
835 """Error handler.
836
837 This is run by the worker when the task fails.
838
839 :param exc: The exception raised by the task.
840 :param task_id: Unique id of the failed task.
841 :param args: Original arguments for the task that failed.
842 :param kwargs: Original keyword arguments for the task
843 that failed.
844
845 :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
846 instance, containing the traceback.
847
848 The return value of this handler is ignored.
849
850 """
851 pass
852
853 def after_return(self, status, retval, task_id, args, kwargs, einfo):
854 """Handler called after the task returns.
855
856 :param status: Current task state.
857 :param retval: Task return value/exception.
858 :param task_id: Unique id of the task.
859 :param args: Original arguments for the task that failed.
860 :param kwargs: Original keyword arguments for the task
861 that failed.
862
863 :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
864 instance, containing the traceback (if any).
865
866 The return value of this handler is ignored.
867
868 """
869 pass
870
871 def send_error_email(self, context, exc, **kwargs):
872 if self.send_error_emails and \
873 not getattr(self, 'disable_error_emails', None):
874 self.ErrorMail(self, **kwargs).send(context, exc)
875
876 def add_trail(self, result):
877 if self.trail:
878 self.request.children.append(result)
879 return result
880
881 def push_request(self, *args, **kwargs):
882 self.request_stack.push(Context(*args, **kwargs))
883
884 def pop_request(self):
885 self.request_stack.pop()
886
887 def __repr__(self):
888 """`repr(task)`"""
889 return _reprtask(self, R_SELF_TASK if self.__self__ else R_INSTANCE)
890
891 def _get_request(self):
892 """Get current request object."""
893 req = self.request_stack.top
894 if req is None:
895 # task was not called, but some may still expect a request
896 # to be there, perhaps that should be deprecated.
897 if self._default_request is None:
898 self._default_request = Context()
899 return self._default_request
900 return req
901 request = property(_get_request)
902
903 def _get_exec_options(self):
904 if self._exec_options is None:
905 self._exec_options = extract_exec_options(self)
906 return self._exec_options
907
908 @property
909 def backend(self):
910 backend = self._backend
911 if backend is None:
912 return self.app.backend
913 return backend
914
915 @backend.setter
916 def backend(self, value): # noqa
917 self._backend = value
918
919 @property
920 def __name__(self):
921 return self.__class__.__name__
922 BaseTask = Task # compat alias
923
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -313,7 +313,7 @@
#: :setting:`CELERY_ACKS_LATE` setting.
acks_late = None
- #: List/tuple of expected exceptions.
+ #: Tuple of expected exceptions.
#:
#: These are errors that are expected in normal operation
#: and that should not be regarded as a real error by the worker.
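As a usage note alongside the patch: with the corrected docstring, `throws` is expected to be a tuple of exception classes. A hypothetical task definition consistent with that contract (the task body and data below are assumptions for illustration only):

```python
from celery import Celery

app = Celery('proj')

DATA = {'a': 1}

# Expected exceptions declared as a tuple, matching the corrected documentation.
@app.task(throws=(KeyError,))
def fetch(key):
    return DATA[key]   # a missing key raises KeyError, treated as an expected error
```

Note that the accepted fix only corrects the documentation rather than coercing lists at the worker-side check, keeping the attribute contract aligned with what `isinstance` accepts.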
| {"golden_diff": "diff --git a/celery/app/task.py b/celery/app/task.py\n--- a/celery/app/task.py\n+++ b/celery/app/task.py\n@@ -313,7 +313,7 @@\n #: :setting:`CELERY_ACKS_LATE` setting.\n acks_late = None\n \n- #: List/tuple of expected exceptions.\n+ #: Tuple of expected exceptions.\n #:\n #: These are errors that are expected in normal operation\n #: and that should not be regarded as a real error by the worker.\n", "issue": "Task.throws cannot be a list, misleading documentation\nThe check at https://github.com/celery/celery/blob/b35569090c5cabfa784b00a68b55c7628fee813d/celery/worker/job.py#L456 throws this error when the `Task.throws` is a list:\n\n``` shell\nTypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types\n```\n\nDocumentation on `Task.throws` is misleading by mentioning that `throws` can be a `List/tuple`: \nhttps://github.com/celery/celery/blob/b35569090c5cabfa784b00a68b55c7628fee813d/celery/app/task.py#L316-L322\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n celery.app.task\n ~~~~~~~~~~~~~~~\n\n Task Implementation: Task request context, and the base task class.\n\n\"\"\"\nfrom __future__ import absolute_import\n\nimport sys\n\nfrom billiard.einfo import ExceptionInfo\n\nfrom celery import current_app\nfrom celery import states\nfrom celery._state import _task_stack\nfrom celery.canvas import signature\nfrom celery.exceptions import MaxRetriesExceededError, Reject, Retry\nfrom celery.five import class_property, items, with_metaclass\nfrom celery.local import Proxy\nfrom celery.result import EagerResult\nfrom celery.utils import gen_task_name, fun_takes_kwargs, uuid, maybe_reraise\nfrom celery.utils.functional import mattrgetter, maybe_list\nfrom celery.utils.imports import instantiate\nfrom celery.utils.mail import ErrorMail\n\nfrom .annotations import resolve_all as resolve_all_annotations\nfrom .registry import _unpickle_task_v2\nfrom .utils import appstr\n\n__all__ = ['Context', 'Task']\n\n#: extracts attributes related to publishing a message from an object.\nextract_exec_options = mattrgetter(\n 'queue', 'routing_key', 'exchange', 'priority', 'expires',\n 'serializer', 'delivery_mode', 'compression', 'time_limit',\n 'soft_time_limit', 'immediate', 'mandatory', # imm+man is deprecated\n)\n\n# We take __repr__ very seriously around here ;)\nR_BOUND_TASK = '<class {0.__name__} of {app}{flags}>'\nR_UNBOUND_TASK = '<unbound {0.__name__}{flags}>'\nR_SELF_TASK = '<@task {0.name} bound to other {0.__self__}>'\nR_INSTANCE = '<@task: {0.name} of {app}{flags}>'\n\n\nclass _CompatShared(object):\n\n def __init__(self, name, cons):\n self.name = name\n self.cons = cons\n\n def __hash__(self):\n return hash(self.name)\n\n def __repr__(self):\n return '<OldTask: %r>' % (self.name, )\n\n def __call__(self, app):\n return self.cons(app)\n\n\ndef _strflags(flags, default=''):\n if flags:\n return ' ({0})'.format(', '.join(flags))\n return default\n\n\ndef _reprtask(task, fmt=None, flags=None):\n flags = list(flags) if flags is not None else []\n flags.append('v2 compatible') if task.__v2_compat__ else None\n if not fmt:\n fmt = R_BOUND_TASK if task._app else R_UNBOUND_TASK\n return fmt.format(\n task, flags=_strflags(flags),\n app=appstr(task._app) if task._app else None,\n )\n\n\nclass Context(object):\n # Default context\n logfile = None\n loglevel = None\n hostname = None\n id = None\n args = None\n kwargs = None\n retries = 0\n eta = None\n expires = None\n is_eager = False\n headers = None\n delivery_info = None\n reply_to = None\n 
correlation_id = None\n taskset = None # compat alias to group\n group = None\n chord = None\n utc = None\n called_directly = True\n callbacks = None\n errbacks = None\n timelimit = None\n _children = None # see property\n _protected = 0\n\n def __init__(self, *args, **kwargs):\n self.update(*args, **kwargs)\n\n def update(self, *args, **kwargs):\n return self.__dict__.update(*args, **kwargs)\n\n def clear(self):\n return self.__dict__.clear()\n\n def get(self, key, default=None):\n return getattr(self, key, default)\n\n def __repr__(self):\n return '<Context: {0!r}>'.format(vars(self))\n\n @property\n def children(self):\n # children must be an empy list for every thread\n if self._children is None:\n self._children = []\n return self._children\n\n\nclass TaskType(type):\n \"\"\"Meta class for tasks.\n\n Automatically registers the task in the task registry (except\n if the :attr:`Task.abstract`` attribute is set).\n\n If no :attr:`Task.name` attribute is provided, then the name is generated\n from the module and class name.\n\n \"\"\"\n _creation_count = {} # used by old non-abstract task classes\n\n def __new__(cls, name, bases, attrs):\n new = super(TaskType, cls).__new__\n task_module = attrs.get('__module__') or '__main__'\n\n # - Abstract class: abstract attribute should not be inherited.\n abstract = attrs.pop('abstract', None)\n if abstract or not attrs.get('autoregister', True):\n return new(cls, name, bases, attrs)\n\n # The 'app' attribute is now a property, with the real app located\n # in the '_app' attribute. Previously this was a regular attribute,\n # so we should support classes defining it.\n app = attrs.pop('_app', None) or attrs.pop('app', None)\n\n # Attempt to inherit app from one the bases\n if not isinstance(app, Proxy) and app is None:\n for base in bases:\n if getattr(base, '_app', None):\n app = base._app\n break\n else:\n app = current_app._get_current_object()\n attrs['_app'] = app\n\n # - Automatically generate missing/empty name.\n task_name = attrs.get('name')\n if not task_name:\n attrs['name'] = task_name = gen_task_name(app, name, task_module)\n\n if not attrs.get('_decorated'):\n # non decorated tasks must also be shared in case\n # an app is created multiple times due to modules\n # imported under multiple names.\n # Hairy stuff, here to be compatible with 2.x.\n # People should not use non-abstract task classes anymore,\n # use the task decorator.\n from celery.app.builtins import shared_task\n unique_name = '.'.join([task_module, name])\n if unique_name not in cls._creation_count:\n # the creation count is used as a safety\n # so that the same task is not added recursively\n # to the set of constructors.\n cls._creation_count[unique_name] = 1\n shared_task(_CompatShared(\n unique_name,\n lambda app: TaskType.__new__(cls, name, bases,\n dict(attrs, _app=app)),\n ))\n\n # - Create and register class.\n # Because of the way import happens (recursively)\n # we may or may not be the first time the task tries to register\n # with the framework. There should only be one class for each task\n # name, so we always return the registered version.\n tasks = app._tasks\n if task_name not in tasks:\n tasks.register(new(cls, name, bases, attrs))\n instance = tasks[task_name]\n instance.bind(app)\n return instance.__class__\n\n def __repr__(cls):\n return _reprtask(cls)\n\n\n@with_metaclass(TaskType)\nclass Task(object):\n \"\"\"Task base class.\n\n When called tasks apply the :meth:`run` method. 
This method must\n be defined by all tasks (that is unless the :meth:`__call__` method\n is overridden).\n\n \"\"\"\n __trace__ = None\n __v2_compat__ = False # set by old base in celery.task.base\n\n ErrorMail = ErrorMail\n MaxRetriesExceededError = MaxRetriesExceededError\n\n #: Execution strategy used, or the qualified name of one.\n Strategy = 'celery.worker.strategy:default'\n\n #: This is the instance bound to if the task is a method of a class.\n __self__ = None\n\n #: The application instance associated with this task class.\n _app = None\n\n #: Name of the task.\n name = None\n\n #: If :const:`True` the task is an abstract base class.\n abstract = True\n\n #: If disabled the worker will not forward magic keyword arguments.\n #: Deprecated and scheduled for removal in v4.0.\n accept_magic_kwargs = False\n\n #: Maximum number of retries before giving up. If set to :const:`None`,\n #: it will **never** stop retrying.\n max_retries = 3\n\n #: Default time in seconds before a retry of the task should be\n #: executed. 3 minutes by default.\n default_retry_delay = 3 * 60\n\n #: Rate limit for this task type. Examples: :const:`None` (no rate\n #: limit), `'100/s'` (hundred tasks a second), `'100/m'` (hundred tasks\n #: a minute),`'100/h'` (hundred tasks an hour)\n rate_limit = None\n\n #: If enabled the worker will not store task state and return values\n #: for this task. Defaults to the :setting:`CELERY_IGNORE_RESULT`\n #: setting.\n ignore_result = None\n\n #: If enabled the request will keep track of subtasks started by\n #: this task, and this information will be sent with the result\n #: (``result.children``).\n trail = True\n\n #: When enabled errors will be stored even if the task is otherwise\n #: configured to ignore results.\n store_errors_even_if_ignored = None\n\n #: If enabled an email will be sent to :setting:`ADMINS` whenever a task\n #: of this type fails.\n send_error_emails = None\n\n #: The name of a serializer that are registered with\n #: :mod:`kombu.serialization.registry`. Default is `'pickle'`.\n serializer = None\n\n #: Hard time limit.\n #: Defaults to the :setting:`CELERYD_TASK_TIME_LIMIT` setting.\n time_limit = None\n\n #: Soft time limit.\n #: Defaults to the :setting:`CELERYD_TASK_SOFT_TIME_LIMIT` setting.\n soft_time_limit = None\n\n #: The result store backend used for this task.\n backend = None\n\n #: If disabled this task won't be registered automatically.\n autoregister = True\n\n #: If enabled the task will report its status as 'started' when the task\n #: is executed by a worker. Disabled by default as the normal behaviour\n #: is to not report that level of granularity. 
Tasks are either pending,\n #: finished, or waiting to be retried.\n #:\n #: Having a 'started' status can be useful for when there are long\n #: running tasks and there is a need to report which task is currently\n #: running.\n #:\n #: The application default can be overridden using the\n #: :setting:`CELERY_TRACK_STARTED` setting.\n track_started = None\n\n #: When enabled messages for this task will be acknowledged **after**\n #: the task has been executed, and not *just before* which is the\n #: default behavior.\n #:\n #: Please note that this means the task may be executed twice if the\n #: worker crashes mid execution (which may be acceptable for some\n #: applications).\n #:\n #: The application default can be overridden with the\n #: :setting:`CELERY_ACKS_LATE` setting.\n acks_late = None\n\n #: List/tuple of expected exceptions.\n #:\n #: These are errors that are expected in normal operation\n #: and that should not be regarded as a real error by the worker.\n #: Currently this means that the state will be updated to an error\n #: state, but the worker will not log the event as an error.\n throws = ()\n\n #: Default task expiry time.\n expires = None\n\n #: Some may expect a request to exist even if the task has not been\n #: called. This should probably be deprecated.\n _default_request = None\n\n _exec_options = None\n\n __bound__ = False\n\n from_config = (\n ('send_error_emails', 'CELERY_SEND_TASK_ERROR_EMAILS'),\n ('serializer', 'CELERY_TASK_SERIALIZER'),\n ('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),\n ('track_started', 'CELERY_TRACK_STARTED'),\n ('acks_late', 'CELERY_ACKS_LATE'),\n ('ignore_result', 'CELERY_IGNORE_RESULT'),\n ('store_errors_even_if_ignored',\n 'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),\n )\n\n _backend = None # set by backend property.\n\n __bound__ = False\n\n # - Tasks are lazily bound, so that configuration is not set\n # - until the task is actually used\n\n @classmethod\n def bind(self, app):\n was_bound, self.__bound__ = self.__bound__, True\n self._app = app\n conf = app.conf\n self._exec_options = None # clear option cache\n\n for attr_name, config_name in self.from_config:\n if getattr(self, attr_name, None) is None:\n setattr(self, attr_name, conf[config_name])\n if self.accept_magic_kwargs is None:\n self.accept_magic_kwargs = app.accept_magic_kwargs\n\n # decorate with annotations from config.\n if not was_bound:\n self.annotate()\n\n from celery.utils.threads import LocalStack\n self.request_stack = LocalStack()\n\n # PeriodicTask uses this to add itself to the PeriodicTask schedule.\n self.on_bound(app)\n\n return app\n\n @classmethod\n def on_bound(self, app):\n \"\"\"This method can be defined to do additional actions when the\n task class is bound to an app.\"\"\"\n pass\n\n @classmethod\n def _get_app(self):\n if self._app is None:\n self._app = current_app\n if not self.__bound__:\n # The app property's __set__ method is not called\n # if Task.app is set (on the class), so must bind on use.\n self.bind(self._app)\n return self._app\n app = class_property(_get_app, bind)\n\n @classmethod\n def annotate(self):\n for d in resolve_all_annotations(self.app.annotations, self):\n for key, value in items(d):\n if key.startswith('@'):\n self.add_around(key[1:], value)\n else:\n setattr(self, key, value)\n\n @classmethod\n def add_around(self, attr, around):\n orig = getattr(self, attr)\n if getattr(orig, '__wrapped__', None):\n orig = orig.__wrapped__\n meth = around(orig)\n meth.__wrapped__ = orig\n setattr(self, attr, meth)\n\n def 
__call__(self, *args, **kwargs):\n _task_stack.push(self)\n self.push_request()\n try:\n # add self if this is a bound task\n if self.__self__ is not None:\n return self.run(self.__self__, *args, **kwargs)\n return self.run(*args, **kwargs)\n finally:\n self.pop_request()\n _task_stack.pop()\n\n def __reduce__(self):\n # - tasks are pickled into the name of the task only, and the reciever\n # - simply grabs it from the local registry.\n # - in later versions the module of the task is also included,\n # - and the receiving side tries to import that module so that\n # - it will work even if the task has not been registered.\n mod = type(self).__module__\n mod = mod if mod and mod in sys.modules else None\n return (_unpickle_task_v2, (self.name, mod), None)\n\n def run(self, *args, **kwargs):\n \"\"\"The body of the task executed by workers.\"\"\"\n raise NotImplementedError('Tasks must define the run method.')\n\n def start_strategy(self, app, consumer, **kwargs):\n return instantiate(self.Strategy, self, app, consumer, **kwargs)\n\n def delay(self, *args, **kwargs):\n \"\"\"Star argument version of :meth:`apply_async`.\n\n Does not support the extra options enabled by :meth:`apply_async`.\n\n :param \\*args: positional arguments passed on to the task.\n :param \\*\\*kwargs: keyword arguments passed on to the task.\n\n :returns :class:`celery.result.AsyncResult`:\n\n \"\"\"\n return self.apply_async(args, kwargs)\n\n def apply_async(self, args=None, kwargs=None, task_id=None, producer=None,\n link=None, link_error=None, **options):\n \"\"\"Apply tasks asynchronously by sending a message.\n\n :keyword args: The positional arguments to pass on to the\n task (a :class:`list` or :class:`tuple`).\n\n :keyword kwargs: The keyword arguments to pass on to the\n task (a :class:`dict`)\n\n :keyword countdown: Number of seconds into the future that the\n task should execute. Defaults to immediate\n execution.\n\n :keyword eta: A :class:`~datetime.datetime` object describing\n the absolute time and date of when the task should\n be executed. May not be specified if `countdown`\n is also supplied.\n\n :keyword expires: Either a :class:`int`, describing the number of\n seconds, or a :class:`~datetime.datetime` object\n that describes the absolute time and date of when\n the task should expire. The task will not be\n executed after the expiration time.\n\n :keyword connection: Re-use existing broker connection instead\n of establishing a new one.\n\n :keyword retry: If enabled sending of the task message will be retried\n in the event of connection loss or failure. Default\n is taken from the :setting:`CELERY_TASK_PUBLISH_RETRY`\n setting. Note you need to handle the\n producer/connection manually for this to work.\n\n :keyword retry_policy: Override the retry policy used. See the\n :setting:`CELERY_TASK_PUBLISH_RETRY` setting.\n\n :keyword routing_key: Custom routing key used to route the task to a\n worker server. If in combination with a\n ``queue`` argument only used to specify custom\n routing keys to topic exchanges.\n\n :keyword queue: The queue to route the task to. This must be a key\n present in :setting:`CELERY_QUEUES`, or\n :setting:`CELERY_CREATE_MISSING_QUEUES` must be\n enabled. 
See :ref:`guide-routing` for more\n information.\n\n :keyword exchange: Named custom exchange to send the task to.\n Usually not used in combination with the ``queue``\n argument.\n\n :keyword priority: The task priority, a number between 0 and 9.\n Defaults to the :attr:`priority` attribute.\n\n :keyword serializer: A string identifying the default\n serialization method to use. Can be `pickle`,\n `json`, `yaml`, `msgpack` or any custom\n serialization method that has been registered\n with :mod:`kombu.serialization.registry`.\n Defaults to the :attr:`serializer` attribute.\n\n :keyword compression: A string identifying the compression method\n to use. Can be one of ``zlib``, ``bzip2``,\n or any custom compression methods registered with\n :func:`kombu.compression.register`. Defaults to\n the :setting:`CELERY_MESSAGE_COMPRESSION`\n setting.\n :keyword link: A single, or a list of tasks to apply if the\n task exits successfully.\n :keyword link_error: A single, or a list of tasks to apply\n if an error occurs while executing the task.\n\n :keyword producer: :class:[email protected]` instance to use.\n :keyword add_to_parent: If set to True (default) and the task\n is applied while executing another task, then the result\n will be appended to the parent tasks ``request.children``\n attribute. Trailing can also be disabled by default using the\n :attr:`trail` attribute\n :keyword publisher: Deprecated alias to ``producer``.\n\n Also supports all keyword arguments supported by\n :meth:`kombu.Producer.publish`.\n\n .. note::\n If the :setting:`CELERY_ALWAYS_EAGER` setting is set, it will\n be replaced by a local :func:`apply` call instead.\n\n \"\"\"\n app = self._get_app()\n if app.conf.CELERY_ALWAYS_EAGER:\n return self.apply(args, kwargs, task_id=task_id or uuid(),\n link=link, link_error=link_error, **options)\n # add 'self' if this is a \"task_method\".\n if self.__self__ is not None:\n args = args if isinstance(args, tuple) else tuple(args or ())\n args = (self.__self__, ) + args\n return app.send_task(\n self.name, args, kwargs, task_id=task_id, producer=producer,\n link=link, link_error=link_error, result_cls=self.AsyncResult,\n **dict(self._get_exec_options(), **options)\n )\n\n def subtask_from_request(self, request=None, args=None, kwargs=None,\n queue=None, **extra_options):\n request = self.request if request is None else request\n args = request.args if args is None else args\n kwargs = request.kwargs if kwargs is None else kwargs\n limit_hard, limit_soft = request.timelimit or (None, None)\n options = {\n 'task_id': request.id,\n 'link': request.callbacks,\n 'link_error': request.errbacks,\n 'group_id': request.group,\n 'chord': request.chord,\n 'soft_time_limit': limit_soft,\n 'time_limit': limit_hard,\n }\n options.update(\n {'queue': queue} if queue else (request.delivery_info or {})\n )\n return self.subtask(args, kwargs, options, type=self, **extra_options)\n\n def retry(self, args=None, kwargs=None, exc=None, throw=True,\n eta=None, countdown=None, max_retries=None, **options):\n \"\"\"Retry the task.\n\n :param args: Positional arguments to retry with.\n :param kwargs: Keyword arguments to retry with.\n :keyword exc: Custom exception to report when the max restart\n limit has been exceeded (default:\n :exc:`~@MaxRetriesExceededError`).\n\n If this argument is set and retry is called while\n an exception was raised (``sys.exc_info()`` is set)\n it will attempt to reraise the current exception.\n\n If no exception was raised it will raise the ``exc``\n argument provided.\n 
:keyword countdown: Time in seconds to delay the retry for.\n :keyword eta: Explicit time and date to run the retry at\n (must be a :class:`~datetime.datetime` instance).\n :keyword max_retries: If set, overrides the default retry limit.\n :keyword time_limit: If set, overrides the default time limit.\n :keyword soft_time_limit: If set, overrides the default soft\n time limit.\n :keyword \\*\\*options: Any extra options to pass on to\n meth:`apply_async`.\n :keyword throw: If this is :const:`False`, do not raise the\n :exc:`~@Retry` exception,\n that tells the worker to mark the task as being\n retried. Note that this means the task will be\n marked as failed if the task raises an exception,\n or successful if it returns.\n\n :raises celery.exceptions.Retry: To tell the worker that\n the task has been re-sent for retry. This always happens,\n unless the `throw` keyword argument has been explicitly set\n to :const:`False`, and is considered normal operation.\n\n **Example**\n\n .. code-block:: python\n\n >>> from imaginary_twitter_lib import Twitter\n >>> from proj.celery import app\n\n >>> @app.task()\n ... def tweet(auth, message):\n ... twitter = Twitter(oauth=auth)\n ... try:\n ... twitter.post_status_update(message)\n ... except twitter.FailWhale as exc:\n ... # Retry in 5 minutes.\n ... raise tweet.retry(countdown=60 * 5, exc=exc)\n\n Although the task will never return above as `retry` raises an\n exception to notify the worker, we use `raise` in front of the retry\n to convey that the rest of the block will not be executed.\n\n \"\"\"\n request = self.request\n retries = request.retries + 1\n max_retries = self.max_retries if max_retries is None else max_retries\n\n # Not in worker or emulated by (apply/always_eager),\n # so just raise the original exception.\n if request.called_directly:\n maybe_reraise() # raise orig stack if PyErr_Occurred\n raise exc or Retry('Task can be retried', None)\n\n if not eta and countdown is None:\n countdown = self.default_retry_delay\n\n is_eager = request.is_eager\n S = self.subtask_from_request(\n request, args, kwargs,\n countdown=countdown, eta=eta, retries=retries,\n **options\n )\n\n if max_retries is not None and retries > max_retries:\n if exc:\n # first try to reraise the original exception\n maybe_reraise()\n # or if not in an except block then raise the custom exc.\n raise exc()\n raise self.MaxRetriesExceededError(\n \"Can't retry {0}[{1}] args:{2} kwargs:{3}\".format(\n self.name, request.id, S.args, S.kwargs))\n\n # If task was executed eagerly using apply(),\n # then the retry must also be executed eagerly.\n try:\n S.apply().get() if is_eager else S.apply_async()\n except Exception as exc:\n if is_eager:\n raise\n raise Reject(exc, requeue=True)\n ret = Retry(exc=exc, when=eta or countdown)\n if throw:\n raise ret\n return ret\n\n def apply(self, args=None, kwargs=None,\n link=None, link_error=None, **options):\n \"\"\"Execute this task locally, by blocking until the task returns.\n\n :param args: positional arguments passed on to the task.\n :param kwargs: keyword arguments passed on to the task.\n :keyword throw: Re-raise task exceptions. 
Defaults to\n the :setting:`CELERY_EAGER_PROPAGATES_EXCEPTIONS`\n setting.\n\n :rtype :class:`celery.result.EagerResult`:\n\n \"\"\"\n # trace imports Task, so need to import inline.\n from celery.app.trace import eager_trace_task\n\n app = self._get_app()\n args = args or ()\n # add 'self' if this is a bound method.\n if self.__self__ is not None:\n args = (self.__self__, ) + tuple(args)\n kwargs = kwargs or {}\n task_id = options.get('task_id') or uuid()\n retries = options.get('retries', 0)\n throw = app.either('CELERY_EAGER_PROPAGATES_EXCEPTIONS',\n options.pop('throw', None))\n\n # Make sure we get the task instance, not class.\n task = app._tasks[self.name]\n\n request = {'id': task_id,\n 'retries': retries,\n 'is_eager': True,\n 'logfile': options.get('logfile'),\n 'loglevel': options.get('loglevel', 0),\n 'callbacks': maybe_list(link),\n 'errbacks': maybe_list(link_error),\n 'headers': options.get('headers'),\n 'delivery_info': {'is_eager': True}}\n if self.accept_magic_kwargs:\n default_kwargs = {'task_name': task.name,\n 'task_id': task_id,\n 'task_retries': retries,\n 'task_is_eager': True,\n 'logfile': options.get('logfile'),\n 'loglevel': options.get('loglevel', 0),\n 'delivery_info': {'is_eager': True}}\n supported_keys = fun_takes_kwargs(task.run, default_kwargs)\n extend_with = {\n key: val for key, val in items(default_kwargs)\n if key in supported_keys\n }\n kwargs.update(extend_with)\n\n tb = None\n retval, info = eager_trace_task(task, task_id, args, kwargs,\n app=self._get_app(),\n request=request, propagate=throw)\n if isinstance(retval, ExceptionInfo):\n retval, tb = retval.exception, retval.traceback\n state = states.SUCCESS if info is None else info.state\n return EagerResult(task_id, retval, state, traceback=tb)\n\n def AsyncResult(self, task_id, **kwargs):\n \"\"\"Get AsyncResult instance for this kind of task.\n\n :param task_id: Task id to get result for.\n\n \"\"\"\n return self._get_app().AsyncResult(task_id, backend=self.backend,\n task_name=self.name, **kwargs)\n\n def subtask(self, args=None, *starargs, **starkwargs):\n \"\"\"Return :class:`~celery.signature` object for\n this task, wrapping arguments and execution options\n for a single task invocation.\"\"\"\n starkwargs.setdefault('app', self.app)\n return signature(self, args, *starargs, **starkwargs)\n\n def s(self, *args, **kwargs):\n \"\"\"``.s(*a, **k) -> .subtask(a, k)``\"\"\"\n return self.subtask(args, kwargs)\n\n def si(self, *args, **kwargs):\n \"\"\"``.si(*a, **k) -> .subtask(a, k, immutable=True)``\"\"\"\n return self.subtask(args, kwargs, immutable=True)\n\n def chunks(self, it, n):\n \"\"\"Creates a :class:`~celery.canvas.chunks` task for this task.\"\"\"\n from celery import chunks\n return chunks(self.s(), it, n, app=self.app)\n\n def map(self, it):\n \"\"\"Creates a :class:`~celery.canvas.xmap` task from ``it``.\"\"\"\n from celery import xmap\n return xmap(self.s(), it, app=self.app)\n\n def starmap(self, it):\n \"\"\"Creates a :class:`~celery.canvas.xstarmap` task from ``it``.\"\"\"\n from celery import xstarmap\n return xstarmap(self.s(), it, app=self.app)\n\n def send_event(self, type_, **fields):\n req = self.request\n with self.app.events.default_dispatcher(hostname=req.hostname) as d:\n return d.send(type_, uuid=req.id, **fields)\n\n def update_state(self, task_id=None, state=None, meta=None):\n \"\"\"Update task state.\n\n :keyword task_id: Id of the task to update, defaults to the\n id of the current task\n :keyword state: New state (:class:`str`).\n :keyword meta: State 
metadata (:class:`dict`).\n\n\n\n \"\"\"\n if task_id is None:\n task_id = self.request.id\n self.backend.store_result(task_id, meta, state)\n\n def on_success(self, retval, task_id, args, kwargs):\n \"\"\"Success handler.\n\n Run by the worker if the task executes successfully.\n\n :param retval: The return value of the task.\n :param task_id: Unique id of the executed task.\n :param args: Original arguments for the executed task.\n :param kwargs: Original keyword arguments for the executed task.\n\n The return value of this handler is ignored.\n\n \"\"\"\n pass\n\n def on_retry(self, exc, task_id, args, kwargs, einfo):\n \"\"\"Retry handler.\n\n This is run by the worker when the task is to be retried.\n\n :param exc: The exception sent to :meth:`retry`.\n :param task_id: Unique id of the retried task.\n :param args: Original arguments for the retried task.\n :param kwargs: Original keyword arguments for the retried task.\n\n :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`\n instance, containing the traceback.\n\n The return value of this handler is ignored.\n\n \"\"\"\n pass\n\n def on_failure(self, exc, task_id, args, kwargs, einfo):\n \"\"\"Error handler.\n\n This is run by the worker when the task fails.\n\n :param exc: The exception raised by the task.\n :param task_id: Unique id of the failed task.\n :param args: Original arguments for the task that failed.\n :param kwargs: Original keyword arguments for the task\n that failed.\n\n :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`\n instance, containing the traceback.\n\n The return value of this handler is ignored.\n\n \"\"\"\n pass\n\n def after_return(self, status, retval, task_id, args, kwargs, einfo):\n \"\"\"Handler called after the task returns.\n\n :param status: Current task state.\n :param retval: Task return value/exception.\n :param task_id: Unique id of the task.\n :param args: Original arguments for the task that failed.\n :param kwargs: Original keyword arguments for the task\n that failed.\n\n :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`\n instance, containing the traceback (if any).\n\n The return value of this handler is ignored.\n\n \"\"\"\n pass\n\n def send_error_email(self, context, exc, **kwargs):\n if self.send_error_emails and \\\n not getattr(self, 'disable_error_emails', None):\n self.ErrorMail(self, **kwargs).send(context, exc)\n\n def add_trail(self, result):\n if self.trail:\n self.request.children.append(result)\n return result\n\n def push_request(self, *args, **kwargs):\n self.request_stack.push(Context(*args, **kwargs))\n\n def pop_request(self):\n self.request_stack.pop()\n\n def __repr__(self):\n \"\"\"`repr(task)`\"\"\"\n return _reprtask(self, R_SELF_TASK if self.__self__ else R_INSTANCE)\n\n def _get_request(self):\n \"\"\"Get current request object.\"\"\"\n req = self.request_stack.top\n if req is None:\n # task was not called, but some may still expect a request\n # to be there, perhaps that should be deprecated.\n if self._default_request is None:\n self._default_request = Context()\n return self._default_request\n return req\n request = property(_get_request)\n\n def _get_exec_options(self):\n if self._exec_options is None:\n self._exec_options = extract_exec_options(self)\n return self._exec_options\n\n @property\n def backend(self):\n backend = self._backend\n if backend is None:\n return self.app.backend\n return backend\n\n @backend.setter\n def backend(self, value): # noqa\n self._backend = value\n\n @property\n def __name__(self):\n return 
self.__class__.__name__\nBaseTask = Task # compat alias\n", "path": "celery/app/task.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n celery.app.task\n ~~~~~~~~~~~~~~~\n\n Task Implementation: Task request context, and the base task class.\n\n\"\"\"\nfrom __future__ import absolute_import\n\nimport sys\n\nfrom billiard.einfo import ExceptionInfo\n\nfrom celery import current_app\nfrom celery import states\nfrom celery._state import _task_stack\nfrom celery.canvas import signature\nfrom celery.exceptions import MaxRetriesExceededError, Reject, Retry\nfrom celery.five import class_property, items, with_metaclass\nfrom celery.local import Proxy\nfrom celery.result import EagerResult\nfrom celery.utils import gen_task_name, fun_takes_kwargs, uuid, maybe_reraise\nfrom celery.utils.functional import mattrgetter, maybe_list\nfrom celery.utils.imports import instantiate\nfrom celery.utils.mail import ErrorMail\n\nfrom .annotations import resolve_all as resolve_all_annotations\nfrom .registry import _unpickle_task_v2\nfrom .utils import appstr\n\n__all__ = ['Context', 'Task']\n\n#: extracts attributes related to publishing a message from an object.\nextract_exec_options = mattrgetter(\n 'queue', 'routing_key', 'exchange', 'priority', 'expires',\n 'serializer', 'delivery_mode', 'compression', 'time_limit',\n 'soft_time_limit', 'immediate', 'mandatory', # imm+man is deprecated\n)\n\n# We take __repr__ very seriously around here ;)\nR_BOUND_TASK = '<class {0.__name__} of {app}{flags}>'\nR_UNBOUND_TASK = '<unbound {0.__name__}{flags}>'\nR_SELF_TASK = '<@task {0.name} bound to other {0.__self__}>'\nR_INSTANCE = '<@task: {0.name} of {app}{flags}>'\n\n\nclass _CompatShared(object):\n\n def __init__(self, name, cons):\n self.name = name\n self.cons = cons\n\n def __hash__(self):\n return hash(self.name)\n\n def __repr__(self):\n return '<OldTask: %r>' % (self.name, )\n\n def __call__(self, app):\n return self.cons(app)\n\n\ndef _strflags(flags, default=''):\n if flags:\n return ' ({0})'.format(', '.join(flags))\n return default\n\n\ndef _reprtask(task, fmt=None, flags=None):\n flags = list(flags) if flags is not None else []\n flags.append('v2 compatible') if task.__v2_compat__ else None\n if not fmt:\n fmt = R_BOUND_TASK if task._app else R_UNBOUND_TASK\n return fmt.format(\n task, flags=_strflags(flags),\n app=appstr(task._app) if task._app else None,\n )\n\n\nclass Context(object):\n # Default context\n logfile = None\n loglevel = None\n hostname = None\n id = None\n args = None\n kwargs = None\n retries = 0\n eta = None\n expires = None\n is_eager = False\n headers = None\n delivery_info = None\n reply_to = None\n correlation_id = None\n taskset = None # compat alias to group\n group = None\n chord = None\n utc = None\n called_directly = True\n callbacks = None\n errbacks = None\n timelimit = None\n _children = None # see property\n _protected = 0\n\n def __init__(self, *args, **kwargs):\n self.update(*args, **kwargs)\n\n def update(self, *args, **kwargs):\n return self.__dict__.update(*args, **kwargs)\n\n def clear(self):\n return self.__dict__.clear()\n\n def get(self, key, default=None):\n return getattr(self, key, default)\n\n def __repr__(self):\n return '<Context: {0!r}>'.format(vars(self))\n\n @property\n def children(self):\n # children must be an empy list for every thread\n if self._children is None:\n self._children = []\n return self._children\n\n\nclass TaskType(type):\n \"\"\"Meta class for tasks.\n\n Automatically registers the task in the task registry (except\n if 
the :attr:`Task.abstract`` attribute is set).\n\n If no :attr:`Task.name` attribute is provided, then the name is generated\n from the module and class name.\n\n \"\"\"\n _creation_count = {} # used by old non-abstract task classes\n\n def __new__(cls, name, bases, attrs):\n new = super(TaskType, cls).__new__\n task_module = attrs.get('__module__') or '__main__'\n\n # - Abstract class: abstract attribute should not be inherited.\n abstract = attrs.pop('abstract', None)\n if abstract or not attrs.get('autoregister', True):\n return new(cls, name, bases, attrs)\n\n # The 'app' attribute is now a property, with the real app located\n # in the '_app' attribute. Previously this was a regular attribute,\n # so we should support classes defining it.\n app = attrs.pop('_app', None) or attrs.pop('app', None)\n\n # Attempt to inherit app from one the bases\n if not isinstance(app, Proxy) and app is None:\n for base in bases:\n if getattr(base, '_app', None):\n app = base._app\n break\n else:\n app = current_app._get_current_object()\n attrs['_app'] = app\n\n # - Automatically generate missing/empty name.\n task_name = attrs.get('name')\n if not task_name:\n attrs['name'] = task_name = gen_task_name(app, name, task_module)\n\n if not attrs.get('_decorated'):\n # non decorated tasks must also be shared in case\n # an app is created multiple times due to modules\n # imported under multiple names.\n # Hairy stuff, here to be compatible with 2.x.\n # People should not use non-abstract task classes anymore,\n # use the task decorator.\n from celery.app.builtins import shared_task\n unique_name = '.'.join([task_module, name])\n if unique_name not in cls._creation_count:\n # the creation count is used as a safety\n # so that the same task is not added recursively\n # to the set of constructors.\n cls._creation_count[unique_name] = 1\n shared_task(_CompatShared(\n unique_name,\n lambda app: TaskType.__new__(cls, name, bases,\n dict(attrs, _app=app)),\n ))\n\n # - Create and register class.\n # Because of the way import happens (recursively)\n # we may or may not be the first time the task tries to register\n # with the framework. There should only be one class for each task\n # name, so we always return the registered version.\n tasks = app._tasks\n if task_name not in tasks:\n tasks.register(new(cls, name, bases, attrs))\n instance = tasks[task_name]\n instance.bind(app)\n return instance.__class__\n\n def __repr__(cls):\n return _reprtask(cls)\n\n\n@with_metaclass(TaskType)\nclass Task(object):\n \"\"\"Task base class.\n\n When called tasks apply the :meth:`run` method. This method must\n be defined by all tasks (that is unless the :meth:`__call__` method\n is overridden).\n\n \"\"\"\n __trace__ = None\n __v2_compat__ = False # set by old base in celery.task.base\n\n ErrorMail = ErrorMail\n MaxRetriesExceededError = MaxRetriesExceededError\n\n #: Execution strategy used, or the qualified name of one.\n Strategy = 'celery.worker.strategy:default'\n\n #: This is the instance bound to if the task is a method of a class.\n __self__ = None\n\n #: The application instance associated with this task class.\n _app = None\n\n #: Name of the task.\n name = None\n\n #: If :const:`True` the task is an abstract base class.\n abstract = True\n\n #: If disabled the worker will not forward magic keyword arguments.\n #: Deprecated and scheduled for removal in v4.0.\n accept_magic_kwargs = False\n\n #: Maximum number of retries before giving up. 
If set to :const:`None`,\n #: it will **never** stop retrying.\n max_retries = 3\n\n #: Default time in seconds before a retry of the task should be\n #: executed. 3 minutes by default.\n default_retry_delay = 3 * 60\n\n #: Rate limit for this task type. Examples: :const:`None` (no rate\n #: limit), `'100/s'` (hundred tasks a second), `'100/m'` (hundred tasks\n #: a minute),`'100/h'` (hundred tasks an hour)\n rate_limit = None\n\n #: If enabled the worker will not store task state and return values\n #: for this task. Defaults to the :setting:`CELERY_IGNORE_RESULT`\n #: setting.\n ignore_result = None\n\n #: If enabled the request will keep track of subtasks started by\n #: this task, and this information will be sent with the result\n #: (``result.children``).\n trail = True\n\n #: When enabled errors will be stored even if the task is otherwise\n #: configured to ignore results.\n store_errors_even_if_ignored = None\n\n #: If enabled an email will be sent to :setting:`ADMINS` whenever a task\n #: of this type fails.\n send_error_emails = None\n\n #: The name of a serializer that are registered with\n #: :mod:`kombu.serialization.registry`. Default is `'pickle'`.\n serializer = None\n\n #: Hard time limit.\n #: Defaults to the :setting:`CELERYD_TASK_TIME_LIMIT` setting.\n time_limit = None\n\n #: Soft time limit.\n #: Defaults to the :setting:`CELERYD_TASK_SOFT_TIME_LIMIT` setting.\n soft_time_limit = None\n\n #: The result store backend used for this task.\n backend = None\n\n #: If disabled this task won't be registered automatically.\n autoregister = True\n\n #: If enabled the task will report its status as 'started' when the task\n #: is executed by a worker. Disabled by default as the normal behaviour\n #: is to not report that level of granularity. Tasks are either pending,\n #: finished, or waiting to be retried.\n #:\n #: Having a 'started' status can be useful for when there are long\n #: running tasks and there is a need to report which task is currently\n #: running.\n #:\n #: The application default can be overridden using the\n #: :setting:`CELERY_TRACK_STARTED` setting.\n track_started = None\n\n #: When enabled messages for this task will be acknowledged **after**\n #: the task has been executed, and not *just before* which is the\n #: default behavior.\n #:\n #: Please note that this means the task may be executed twice if the\n #: worker crashes mid execution (which may be acceptable for some\n #: applications).\n #:\n #: The application default can be overridden with the\n #: :setting:`CELERY_ACKS_LATE` setting.\n acks_late = None\n\n #: Tuple of expected exceptions.\n #:\n #: These are errors that are expected in normal operation\n #: and that should not be regarded as a real error by the worker.\n #: Currently this means that the state will be updated to an error\n #: state, but the worker will not log the event as an error.\n throws = ()\n\n #: Default task expiry time.\n expires = None\n\n #: Some may expect a request to exist even if the task has not been\n #: called. 
This should probably be deprecated.\n _default_request = None\n\n _exec_options = None\n\n __bound__ = False\n\n from_config = (\n ('send_error_emails', 'CELERY_SEND_TASK_ERROR_EMAILS'),\n ('serializer', 'CELERY_TASK_SERIALIZER'),\n ('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),\n ('track_started', 'CELERY_TRACK_STARTED'),\n ('acks_late', 'CELERY_ACKS_LATE'),\n ('ignore_result', 'CELERY_IGNORE_RESULT'),\n ('store_errors_even_if_ignored',\n 'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),\n )\n\n _backend = None # set by backend property.\n\n __bound__ = False\n\n # - Tasks are lazily bound, so that configuration is not set\n # - until the task is actually used\n\n @classmethod\n def bind(self, app):\n was_bound, self.__bound__ = self.__bound__, True\n self._app = app\n conf = app.conf\n self._exec_options = None # clear option cache\n\n for attr_name, config_name in self.from_config:\n if getattr(self, attr_name, None) is None:\n setattr(self, attr_name, conf[config_name])\n if self.accept_magic_kwargs is None:\n self.accept_magic_kwargs = app.accept_magic_kwargs\n\n # decorate with annotations from config.\n if not was_bound:\n self.annotate()\n\n from celery.utils.threads import LocalStack\n self.request_stack = LocalStack()\n\n # PeriodicTask uses this to add itself to the PeriodicTask schedule.\n self.on_bound(app)\n\n return app\n\n @classmethod\n def on_bound(self, app):\n \"\"\"This method can be defined to do additional actions when the\n task class is bound to an app.\"\"\"\n pass\n\n @classmethod\n def _get_app(self):\n if self._app is None:\n self._app = current_app\n if not self.__bound__:\n # The app property's __set__ method is not called\n # if Task.app is set (on the class), so must bind on use.\n self.bind(self._app)\n return self._app\n app = class_property(_get_app, bind)\n\n @classmethod\n def annotate(self):\n for d in resolve_all_annotations(self.app.annotations, self):\n for key, value in items(d):\n if key.startswith('@'):\n self.add_around(key[1:], value)\n else:\n setattr(self, key, value)\n\n @classmethod\n def add_around(self, attr, around):\n orig = getattr(self, attr)\n if getattr(orig, '__wrapped__', None):\n orig = orig.__wrapped__\n meth = around(orig)\n meth.__wrapped__ = orig\n setattr(self, attr, meth)\n\n def __call__(self, *args, **kwargs):\n _task_stack.push(self)\n self.push_request()\n try:\n # add self if this is a bound task\n if self.__self__ is not None:\n return self.run(self.__self__, *args, **kwargs)\n return self.run(*args, **kwargs)\n finally:\n self.pop_request()\n _task_stack.pop()\n\n def __reduce__(self):\n # - tasks are pickled into the name of the task only, and the reciever\n # - simply grabs it from the local registry.\n # - in later versions the module of the task is also included,\n # - and the receiving side tries to import that module so that\n # - it will work even if the task has not been registered.\n mod = type(self).__module__\n mod = mod if mod and mod in sys.modules else None\n return (_unpickle_task_v2, (self.name, mod), None)\n\n def run(self, *args, **kwargs):\n \"\"\"The body of the task executed by workers.\"\"\"\n raise NotImplementedError('Tasks must define the run method.')\n\n def start_strategy(self, app, consumer, **kwargs):\n return instantiate(self.Strategy, self, app, consumer, **kwargs)\n\n def delay(self, *args, **kwargs):\n \"\"\"Star argument version of :meth:`apply_async`.\n\n Does not support the extra options enabled by :meth:`apply_async`.\n\n :param \\*args: positional arguments passed on to the task.\n :param 
\\*\\*kwargs: keyword arguments passed on to the task.\n\n :returns :class:`celery.result.AsyncResult`:\n\n \"\"\"\n return self.apply_async(args, kwargs)\n\n def apply_async(self, args=None, kwargs=None, task_id=None, producer=None,\n link=None, link_error=None, **options):\n \"\"\"Apply tasks asynchronously by sending a message.\n\n :keyword args: The positional arguments to pass on to the\n task (a :class:`list` or :class:`tuple`).\n\n :keyword kwargs: The keyword arguments to pass on to the\n task (a :class:`dict`)\n\n :keyword countdown: Number of seconds into the future that the\n task should execute. Defaults to immediate\n execution.\n\n :keyword eta: A :class:`~datetime.datetime` object describing\n the absolute time and date of when the task should\n be executed. May not be specified if `countdown`\n is also supplied.\n\n :keyword expires: Either a :class:`int`, describing the number of\n seconds, or a :class:`~datetime.datetime` object\n that describes the absolute time and date of when\n the task should expire. The task will not be\n executed after the expiration time.\n\n :keyword connection: Re-use existing broker connection instead\n of establishing a new one.\n\n :keyword retry: If enabled sending of the task message will be retried\n in the event of connection loss or failure. Default\n is taken from the :setting:`CELERY_TASK_PUBLISH_RETRY`\n setting. Note you need to handle the\n producer/connection manually for this to work.\n\n :keyword retry_policy: Override the retry policy used. See the\n :setting:`CELERY_TASK_PUBLISH_RETRY` setting.\n\n :keyword routing_key: Custom routing key used to route the task to a\n worker server. If in combination with a\n ``queue`` argument only used to specify custom\n routing keys to topic exchanges.\n\n :keyword queue: The queue to route the task to. This must be a key\n present in :setting:`CELERY_QUEUES`, or\n :setting:`CELERY_CREATE_MISSING_QUEUES` must be\n enabled. See :ref:`guide-routing` for more\n information.\n\n :keyword exchange: Named custom exchange to send the task to.\n Usually not used in combination with the ``queue``\n argument.\n\n :keyword priority: The task priority, a number between 0 and 9.\n Defaults to the :attr:`priority` attribute.\n\n :keyword serializer: A string identifying the default\n serialization method to use. Can be `pickle`,\n `json`, `yaml`, `msgpack` or any custom\n serialization method that has been registered\n with :mod:`kombu.serialization.registry`.\n Defaults to the :attr:`serializer` attribute.\n\n :keyword compression: A string identifying the compression method\n to use. Can be one of ``zlib``, ``bzip2``,\n or any custom compression methods registered with\n :func:`kombu.compression.register`. Defaults to\n the :setting:`CELERY_MESSAGE_COMPRESSION`\n setting.\n :keyword link: A single, or a list of tasks to apply if the\n task exits successfully.\n :keyword link_error: A single, or a list of tasks to apply\n if an error occurs while executing the task.\n\n :keyword producer: :class:[email protected]` instance to use.\n :keyword add_to_parent: If set to True (default) and the task\n is applied while executing another task, then the result\n will be appended to the parent tasks ``request.children``\n attribute. Trailing can also be disabled by default using the\n :attr:`trail` attribute\n :keyword publisher: Deprecated alias to ``producer``.\n\n Also supports all keyword arguments supported by\n :meth:`kombu.Producer.publish`.\n\n .. 
note::\n If the :setting:`CELERY_ALWAYS_EAGER` setting is set, it will\n be replaced by a local :func:`apply` call instead.\n\n \"\"\"\n app = self._get_app()\n if app.conf.CELERY_ALWAYS_EAGER:\n return self.apply(args, kwargs, task_id=task_id or uuid(),\n link=link, link_error=link_error, **options)\n # add 'self' if this is a \"task_method\".\n if self.__self__ is not None:\n args = args if isinstance(args, tuple) else tuple(args or ())\n args = (self.__self__, ) + args\n return app.send_task(\n self.name, args, kwargs, task_id=task_id, producer=producer,\n link=link, link_error=link_error, result_cls=self.AsyncResult,\n **dict(self._get_exec_options(), **options)\n )\n\n def subtask_from_request(self, request=None, args=None, kwargs=None,\n queue=None, **extra_options):\n request = self.request if request is None else request\n args = request.args if args is None else args\n kwargs = request.kwargs if kwargs is None else kwargs\n limit_hard, limit_soft = request.timelimit or (None, None)\n options = {\n 'task_id': request.id,\n 'link': request.callbacks,\n 'link_error': request.errbacks,\n 'group_id': request.group,\n 'chord': request.chord,\n 'soft_time_limit': limit_soft,\n 'time_limit': limit_hard,\n }\n options.update(\n {'queue': queue} if queue else (request.delivery_info or {})\n )\n return self.subtask(args, kwargs, options, type=self, **extra_options)\n\n def retry(self, args=None, kwargs=None, exc=None, throw=True,\n eta=None, countdown=None, max_retries=None, **options):\n \"\"\"Retry the task.\n\n :param args: Positional arguments to retry with.\n :param kwargs: Keyword arguments to retry with.\n :keyword exc: Custom exception to report when the max restart\n limit has been exceeded (default:\n :exc:`~@MaxRetriesExceededError`).\n\n If this argument is set and retry is called while\n an exception was raised (``sys.exc_info()`` is set)\n it will attempt to reraise the current exception.\n\n If no exception was raised it will raise the ``exc``\n argument provided.\n :keyword countdown: Time in seconds to delay the retry for.\n :keyword eta: Explicit time and date to run the retry at\n (must be a :class:`~datetime.datetime` instance).\n :keyword max_retries: If set, overrides the default retry limit.\n :keyword time_limit: If set, overrides the default time limit.\n :keyword soft_time_limit: If set, overrides the default soft\n time limit.\n :keyword \\*\\*options: Any extra options to pass on to\n meth:`apply_async`.\n :keyword throw: If this is :const:`False`, do not raise the\n :exc:`~@Retry` exception,\n that tells the worker to mark the task as being\n retried. Note that this means the task will be\n marked as failed if the task raises an exception,\n or successful if it returns.\n\n :raises celery.exceptions.Retry: To tell the worker that\n the task has been re-sent for retry. This always happens,\n unless the `throw` keyword argument has been explicitly set\n to :const:`False`, and is considered normal operation.\n\n **Example**\n\n .. code-block:: python\n\n >>> from imaginary_twitter_lib import Twitter\n >>> from proj.celery import app\n\n >>> @app.task()\n ... def tweet(auth, message):\n ... twitter = Twitter(oauth=auth)\n ... try:\n ... twitter.post_status_update(message)\n ... except twitter.FailWhale as exc:\n ... # Retry in 5 minutes.\n ... 
raise tweet.retry(countdown=60 * 5, exc=exc)\n\n Although the task will never return above as `retry` raises an\n exception to notify the worker, we use `raise` in front of the retry\n to convey that the rest of the block will not be executed.\n\n \"\"\"\n request = self.request\n retries = request.retries + 1\n max_retries = self.max_retries if max_retries is None else max_retries\n\n # Not in worker or emulated by (apply/always_eager),\n # so just raise the original exception.\n if request.called_directly:\n maybe_reraise() # raise orig stack if PyErr_Occurred\n raise exc or Retry('Task can be retried', None)\n\n if not eta and countdown is None:\n countdown = self.default_retry_delay\n\n is_eager = request.is_eager\n S = self.subtask_from_request(\n request, args, kwargs,\n countdown=countdown, eta=eta, retries=retries,\n **options\n )\n\n if max_retries is not None and retries > max_retries:\n if exc:\n # first try to reraise the original exception\n maybe_reraise()\n # or if not in an except block then raise the custom exc.\n raise exc()\n raise self.MaxRetriesExceededError(\n \"Can't retry {0}[{1}] args:{2} kwargs:{3}\".format(\n self.name, request.id, S.args, S.kwargs))\n\n # If task was executed eagerly using apply(),\n # then the retry must also be executed eagerly.\n try:\n S.apply().get() if is_eager else S.apply_async()\n except Exception as exc:\n if is_eager:\n raise\n raise Reject(exc, requeue=True)\n ret = Retry(exc=exc, when=eta or countdown)\n if throw:\n raise ret\n return ret\n\n def apply(self, args=None, kwargs=None,\n link=None, link_error=None, **options):\n \"\"\"Execute this task locally, by blocking until the task returns.\n\n :param args: positional arguments passed on to the task.\n :param kwargs: keyword arguments passed on to the task.\n :keyword throw: Re-raise task exceptions. 
Defaults to\n the :setting:`CELERY_EAGER_PROPAGATES_EXCEPTIONS`\n setting.\n\n :rtype :class:`celery.result.EagerResult`:\n\n \"\"\"\n # trace imports Task, so need to import inline.\n from celery.app.trace import eager_trace_task\n\n app = self._get_app()\n args = args or ()\n # add 'self' if this is a bound method.\n if self.__self__ is not None:\n args = (self.__self__, ) + tuple(args)\n kwargs = kwargs or {}\n task_id = options.get('task_id') or uuid()\n retries = options.get('retries', 0)\n throw = app.either('CELERY_EAGER_PROPAGATES_EXCEPTIONS',\n options.pop('throw', None))\n\n # Make sure we get the task instance, not class.\n task = app._tasks[self.name]\n\n request = {'id': task_id,\n 'retries': retries,\n 'is_eager': True,\n 'logfile': options.get('logfile'),\n 'loglevel': options.get('loglevel', 0),\n 'callbacks': maybe_list(link),\n 'errbacks': maybe_list(link_error),\n 'headers': options.get('headers'),\n 'delivery_info': {'is_eager': True}}\n if self.accept_magic_kwargs:\n default_kwargs = {'task_name': task.name,\n 'task_id': task_id,\n 'task_retries': retries,\n 'task_is_eager': True,\n 'logfile': options.get('logfile'),\n 'loglevel': options.get('loglevel', 0),\n 'delivery_info': {'is_eager': True}}\n supported_keys = fun_takes_kwargs(task.run, default_kwargs)\n extend_with = {\n key: val for key, val in items(default_kwargs)\n if key in supported_keys\n }\n kwargs.update(extend_with)\n\n tb = None\n retval, info = eager_trace_task(task, task_id, args, kwargs,\n app=self._get_app(),\n request=request, propagate=throw)\n if isinstance(retval, ExceptionInfo):\n retval, tb = retval.exception, retval.traceback\n state = states.SUCCESS if info is None else info.state\n return EagerResult(task_id, retval, state, traceback=tb)\n\n def AsyncResult(self, task_id, **kwargs):\n \"\"\"Get AsyncResult instance for this kind of task.\n\n :param task_id: Task id to get result for.\n\n \"\"\"\n return self._get_app().AsyncResult(task_id, backend=self.backend,\n task_name=self.name, **kwargs)\n\n def subtask(self, args=None, *starargs, **starkwargs):\n \"\"\"Return :class:`~celery.signature` object for\n this task, wrapping arguments and execution options\n for a single task invocation.\"\"\"\n starkwargs.setdefault('app', self.app)\n return signature(self, args, *starargs, **starkwargs)\n\n def s(self, *args, **kwargs):\n \"\"\"``.s(*a, **k) -> .subtask(a, k)``\"\"\"\n return self.subtask(args, kwargs)\n\n def si(self, *args, **kwargs):\n \"\"\"``.si(*a, **k) -> .subtask(a, k, immutable=True)``\"\"\"\n return self.subtask(args, kwargs, immutable=True)\n\n def chunks(self, it, n):\n \"\"\"Creates a :class:`~celery.canvas.chunks` task for this task.\"\"\"\n from celery import chunks\n return chunks(self.s(), it, n, app=self.app)\n\n def map(self, it):\n \"\"\"Creates a :class:`~celery.canvas.xmap` task from ``it``.\"\"\"\n from celery import xmap\n return xmap(self.s(), it, app=self.app)\n\n def starmap(self, it):\n \"\"\"Creates a :class:`~celery.canvas.xstarmap` task from ``it``.\"\"\"\n from celery import xstarmap\n return xstarmap(self.s(), it, app=self.app)\n\n def send_event(self, type_, **fields):\n req = self.request\n with self.app.events.default_dispatcher(hostname=req.hostname) as d:\n return d.send(type_, uuid=req.id, **fields)\n\n def update_state(self, task_id=None, state=None, meta=None):\n \"\"\"Update task state.\n\n :keyword task_id: Id of the task to update, defaults to the\n id of the current task\n :keyword state: New state (:class:`str`).\n :keyword meta: State 
metadata (:class:`dict`).\n\n\n\n \"\"\"\n if task_id is None:\n task_id = self.request.id\n self.backend.store_result(task_id, meta, state)\n\n def on_success(self, retval, task_id, args, kwargs):\n \"\"\"Success handler.\n\n Run by the worker if the task executes successfully.\n\n :param retval: The return value of the task.\n :param task_id: Unique id of the executed task.\n :param args: Original arguments for the executed task.\n :param kwargs: Original keyword arguments for the executed task.\n\n The return value of this handler is ignored.\n\n \"\"\"\n pass\n\n def on_retry(self, exc, task_id, args, kwargs, einfo):\n \"\"\"Retry handler.\n\n This is run by the worker when the task is to be retried.\n\n :param exc: The exception sent to :meth:`retry`.\n :param task_id: Unique id of the retried task.\n :param args: Original arguments for the retried task.\n :param kwargs: Original keyword arguments for the retried task.\n\n :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`\n instance, containing the traceback.\n\n The return value of this handler is ignored.\n\n \"\"\"\n pass\n\n def on_failure(self, exc, task_id, args, kwargs, einfo):\n \"\"\"Error handler.\n\n This is run by the worker when the task fails.\n\n :param exc: The exception raised by the task.\n :param task_id: Unique id of the failed task.\n :param args: Original arguments for the task that failed.\n :param kwargs: Original keyword arguments for the task\n that failed.\n\n :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`\n instance, containing the traceback.\n\n The return value of this handler is ignored.\n\n \"\"\"\n pass\n\n def after_return(self, status, retval, task_id, args, kwargs, einfo):\n \"\"\"Handler called after the task returns.\n\n :param status: Current task state.\n :param retval: Task return value/exception.\n :param task_id: Unique id of the task.\n :param args: Original arguments for the task that failed.\n :param kwargs: Original keyword arguments for the task\n that failed.\n\n :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`\n instance, containing the traceback (if any).\n\n The return value of this handler is ignored.\n\n \"\"\"\n pass\n\n def send_error_email(self, context, exc, **kwargs):\n if self.send_error_emails and \\\n not getattr(self, 'disable_error_emails', None):\n self.ErrorMail(self, **kwargs).send(context, exc)\n\n def add_trail(self, result):\n if self.trail:\n self.request.children.append(result)\n return result\n\n def push_request(self, *args, **kwargs):\n self.request_stack.push(Context(*args, **kwargs))\n\n def pop_request(self):\n self.request_stack.pop()\n\n def __repr__(self):\n \"\"\"`repr(task)`\"\"\"\n return _reprtask(self, R_SELF_TASK if self.__self__ else R_INSTANCE)\n\n def _get_request(self):\n \"\"\"Get current request object.\"\"\"\n req = self.request_stack.top\n if req is None:\n # task was not called, but some may still expect a request\n # to be there, perhaps that should be deprecated.\n if self._default_request is None:\n self._default_request = Context()\n return self._default_request\n return req\n request = property(_get_request)\n\n def _get_exec_options(self):\n if self._exec_options is None:\n self._exec_options = extract_exec_options(self)\n return self._exec_options\n\n @property\n def backend(self):\n backend = self._backend\n if backend is None:\n return self.app.backend\n return backend\n\n @backend.setter\n def backend(self, value): # noqa\n self._backend = value\n\n @property\n def __name__(self):\n return 
self.__class__.__name__\nBaseTask = Task # compat alias\n", "path": "celery/app/task.py"}]} |
gh_patches_debug_1025 | rasdani/github-patches | git_diff | Rapptz__discord.py-1057 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot connect to voice channels
Running `await voice_channel.connect()` raises
`AttributeError: 'LP_EncoderStruct' object has no attribute 'value'`
Relevant portions of the traceback ([and the line itself](https://github.com/Rapptz/discord.py/blob/rewrite/discord/opus.py#L52)):
```
File ".../lib/python3.6/site-packages/discord/abc.py", line 985, in connect
voice = VoiceClient(state=state, timeout=timeout, channel=self)
File ".../lib/python3.6/site-packages/discord/voice_client.py", line 109, in __init__
self.encoder = opus.Encoder()
File ".../lib/python3.6/site-packages/discord/opus.py", line 225, in __init__
self._state = self._create_state()
File ".../lib/python3.6/site-packages/discord/opus.py", line 239, in _create_state
return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))
File ".../lib/python3.6/site-packages/discord/opus.py", line 52, in _err_ne
if result.value != 0:
AttributeError: 'LP_EncoderStruct' object has no attribute 'value'
```
I have opus 1.2.1 installed on 64-bit Linux, and it is loaded according to `discord.opus.is_loaded()`.
Any clue as to what might be the issue?
--- END ISSUE ---
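Background on the error (an illustrative, standalone sketch, not code from the repository): ctypes invokes an `errcheck` handler as `errcheck(result, func, args)`, where `result` is the foreign function's return value and `args` is the tuple of arguments originally passed to the call. Because `opus_encoder_create` is declared with `restype = EncoderStructPtr`, the `result` handed to `_err_ne` is a structure pointer, which has no `.value` attribute; the integer error code written through the `ctypes.byref()` argument is only reachable via the argument tuple.

```
import ctypes

class EncoderStruct(ctypes.Structure):
    pass

EncoderStructPtr = ctypes.POINTER(EncoderStruct)

# Simulate what ctypes would hand to an errcheck handler after a call like
# opus_encoder_create(48000, 2, APPLICATION_AUDIO, ctypes.byref(error)).
error = ctypes.c_int(0)                       # out-parameter for the status code
args = (48000, 2, 2049, ctypes.byref(error))  # arguments as originally passed
result = EncoderStructPtr()                   # the pointer return value

print(hasattr(result, 'value'))   # False -> result.value raises AttributeError
print(args[-1]._obj is error)     # True  -> byref() keeps the wrapped object in ._obj
print(args[-1]._obj.value)        # 0     -> the error code the handler actually needs
```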
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `discord/opus.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """
4 The MIT License (MIT)
5
6 Copyright (c) 2015-2017 Rapptz
7
8 Permission is hereby granted, free of charge, to any person obtaining a
9 copy of this software and associated documentation files (the "Software"),
10 to deal in the Software without restriction, including without limitation
11 the rights to use, copy, modify, merge, publish, distribute, sublicense,
12 and/or sell copies of the Software, and to permit persons to whom the
13 Software is furnished to do so, subject to the following conditions:
14
15 The above copyright notice and this permission notice shall be included in
16 all copies or substantial portions of the Software.
17
18 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
19 OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
20 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
21 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
22 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
23 FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
24 DEALINGS IN THE SOFTWARE.
25 """
26
27 import ctypes
28 import ctypes.util
29 import array
30 from .errors import DiscordException
31 import logging
32 import sys
33 import os.path
34
35 log = logging.getLogger(__name__)
36 c_int_ptr = ctypes.POINTER(ctypes.c_int)
37 c_int16_ptr = ctypes.POINTER(ctypes.c_int16)
38 c_float_ptr = ctypes.POINTER(ctypes.c_float)
39
40 class EncoderStruct(ctypes.Structure):
41 pass
42
43 EncoderStructPtr = ctypes.POINTER(EncoderStruct)
44
45 def _err_lt(result, func, args):
46 if result < 0:
47 log.info('error has happened in {0.__name__}'.format(func))
48 raise OpusError(result)
49 return result
50
51 def _err_ne(result, func, args):
52 if result.value != 0:
53 log.info('error has happened in {0.__name__}'.format(func))
54 raise OpusError(result.value)
55 return result
56
57 # A list of exported functions.
58 # The first argument is obviously the name.
59 # The second one are the types of arguments it takes.
60 # The third is the result type.
61 # The fourth is the error handler.
62 exported_functions = [
63 ('opus_strerror',
64 [ctypes.c_int], ctypes.c_char_p, None),
65 ('opus_encoder_get_size',
66 [ctypes.c_int], ctypes.c_int, None),
67 ('opus_encoder_create',
68 [ctypes.c_int, ctypes.c_int, ctypes.c_int, c_int_ptr], EncoderStructPtr, _err_ne),
69 ('opus_encode',
70 [EncoderStructPtr, c_int16_ptr, ctypes.c_int, ctypes.c_char_p, ctypes.c_int32], ctypes.c_int32, _err_lt),
71 ('opus_encoder_ctl',
72 None, ctypes.c_int32, _err_lt),
73 ('opus_encoder_destroy',
74 [EncoderStructPtr], None, None),
75 ]
76
77 def libopus_loader(name):
78 # create the library...
79 lib = ctypes.cdll.LoadLibrary(name)
80
81 # register the functions...
82 for item in exported_functions:
83 try:
84 func = getattr(lib, item[0])
85 except Exception as e:
86 raise e
87
88 try:
89 if item[1]:
90 func.argtypes = item[1]
91
92 func.restype = item[2]
93 except KeyError:
94 pass
95
96 try:
97 if item[3]:
98 func.errcheck = item[3]
99 except KeyError:
100 log.exception("Error assigning check function to %s", func)
101
102 return lib
103
104 try:
105 if sys.platform == 'win32':
106 _basedir = os.path.dirname(os.path.abspath(__file__))
107 _bitness = 'x64' if sys.maxsize > 2**32 else 'x86'
108 _filename = os.path.join(_basedir, 'bin', 'libopus-0.{}.dll'.format(_bitness))
109 _lib = libopus_loader(_filename)
110 else:
111 _lib = libopus_loader(ctypes.util.find_library('opus'))
112 except Exception as e:
113 _lib = None
114
115 def load_opus(name):
116 """Loads the libopus shared library for use with voice.
117
118 If this function is not called then the library uses the function
119 `ctypes.util.find_library`__ and then loads that one
120 if available.
121
122 .. _find library: https://docs.python.org/3.5/library/ctypes.html#finding-shared-libraries
123 __ `find library`_
124
125 Not loading a library leads to voice not working.
126
127 This function propagates the exceptions thrown.
128
129 Warning
130 --------
131 The bitness of the library must match the bitness of your python
132 interpreter. If the library is 64-bit then your python interpreter
133 must be 64-bit as well. Usually if there's a mismatch in bitness then
134 the load will throw an exception.
135
136 Note
137 ----
138 On Windows, the .dll extension is not necessary. However, on Linux
139 the full extension is required to load the library, e.g. ``libopus.so.1``.
140 On Linux however, `find library`_ will usually find the library automatically
141 without you having to call this.
142
143 Parameters
144 ----------
145 name: str
146 The filename of the shared library.
147 """
148 global _lib
149 _lib = libopus_loader(name)
150
151 def is_loaded():
152 """Function to check if opus lib is successfully loaded either
153 via the ``ctypes.util.find_library`` call of :func:`load_opus`.
154
155 This must return ``True`` for voice to work.
156
157 Returns
158 -------
159 bool
160 Indicates if the opus library has been loaded.
161 """
162 global _lib
163 return _lib is not None
164
165 class OpusError(DiscordException):
166 """An exception that is thrown for libopus related errors.
167
168 Attributes
169 ----------
170 code : :class:`int`
171 The error code returned.
172 """
173
174 def __init__(self, code):
175 self.code = code
176 msg = _lib.opus_strerror(self.code).decode('utf-8')
177 log.info('"%s" has happened', msg)
178 super().__init__(msg)
179
180 class OpusNotLoaded(DiscordException):
181 """An exception that is thrown for when libopus is not loaded."""
182 pass
183
184
185 # Some constants...
186 OK = 0
187 APPLICATION_AUDIO = 2049
188 APPLICATION_VOIP = 2048
189 APPLICATION_LOWDELAY = 2051
190 CTL_SET_BITRATE = 4002
191 CTL_SET_BANDWIDTH = 4008
192 CTL_SET_FEC = 4012
193 CTL_SET_PLP = 4014
194 CTL_SET_SIGNAL = 4024
195
196 band_ctl = {
197 'narrow': 1101,
198 'medium': 1102,
199 'wide': 1103,
200 'superwide': 1104,
201 'full': 1105,
202 }
203
204 signal_ctl = {
205 'auto': -1000,
206 'voice': 3001,
207 'music': 3002,
208 }
209
210 class Encoder:
211 SAMPLING_RATE = 48000
212 CHANNELS = 2
213 FRAME_LENGTH = 20
214 SAMPLE_SIZE = 4 # (bit_rate / 8) * CHANNELS (bit_rate == 16)
215 SAMPLES_PER_FRAME = int(SAMPLING_RATE / 1000 * FRAME_LENGTH)
216
217 FRAME_SIZE = SAMPLES_PER_FRAME * SAMPLE_SIZE
218
219 def __init__(self, application=APPLICATION_AUDIO):
220 self.application = application
221
222 if not is_loaded():
223 raise OpusNotLoaded()
224
225 self._state = self._create_state()
226 self.set_bitrate(128)
227 self.set_fec(True)
228 self.set_expected_packet_loss_percent(0.15)
229 self.set_bandwidth('full')
230 self.set_signal_type('auto')
231
232 def __del__(self):
233 if hasattr(self, '_state'):
234 _lib.opus_encoder_destroy(self._state)
235 self._state = None
236
237 def _create_state(self):
238 ret = ctypes.c_int()
239 return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))
240
241 def set_bitrate(self, kbps):
242 kbps = min(128, max(16, int(kbps)))
243
244 _lib.opus_encoder_ctl(self._state, CTL_SET_BITRATE, kbps * 1024)
245 return kbps
246
247 def set_bandwidth(self, req):
248 if req not in band_ctl:
249 raise KeyError('%r is not a valid bandwidth setting. Try one of: %s' % (req, ','.join(band_ctl)))
250
251 k = band_ctl[req]
252 _lib.opus_encoder_ctl(self._state, CTL_SET_BANDWIDTH, k)
253
254 def set_signal_type(self, req):
255 if req not in signal_ctl:
256 raise KeyError('%r is not a valid signal setting. Try one of: %s' % (req, ','.join(signal_ctl)))
257
258 k = signal_ctl[req]
259 _lib.opus_encoder_ctl(self._state, CTL_SET_SIGNAL, k)
260
261 def set_fec(self, enabled=True):
262 _lib.opus_encoder_ctl(self._state, CTL_SET_FEC, 1 if enabled else 0)
263
264 def set_expected_packet_loss_percent(self, percentage):
265 _lib.opus_encoder_ctl(self._state, CTL_SET_PLP, min(100, max(0, int(percentage * 100))))
266
267 def encode(self, pcm, frame_size):
268 max_data_bytes = len(pcm)
269 pcm = ctypes.cast(pcm, c_int16_ptr)
270 data = (ctypes.c_char * max_data_bytes)()
271
272 ret = _lib.opus_encode(self._state, pcm, frame_size, data, max_data_bytes)
273
274 return array.array('b', data[:ret]).tobytes()
275
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/discord/opus.py b/discord/opus.py
--- a/discord/opus.py
+++ b/discord/opus.py
@@ -49,9 +49,10 @@
return result
def _err_ne(result, func, args):
- if result.value != 0:
+ ret = args[-1]._obj
+ if ret.value != 0:
log.info('error has happened in {0.__name__}'.format(func))
- raise OpusError(result.value)
+ raise OpusError(ret.value)
return result
# A list of exported functions.
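The patch reads the error code from the `byref()`'d out-parameter (`args[-1]._obj`) rather than from the returned pointer, so a non-zero status still raises `OpusError` while a successful call passes the pointer through unchanged. A rough manual check (hypothetical snippet, assuming discord.py rewrite and a loadable libopus are installed):

```
import discord.opus

# With the patched _err_ne, constructing an encoder exercises
# opus_encoder_create and its errcheck handler without the AttributeError.
assert discord.opus.is_loaded()
encoder = discord.opus.Encoder()
print(encoder.FRAME_SIZE)   # 3840 bytes per 20 ms stereo frame
```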
| {"golden_diff": "diff --git a/discord/opus.py b/discord/opus.py\n--- a/discord/opus.py\n+++ b/discord/opus.py\n@@ -49,9 +49,10 @@\n return result\n \n def _err_ne(result, func, args):\n- if result.value != 0:\n+ ret = args[-1]._obj\n+ if ret.value != 0:\n log.info('error has happened in {0.__name__}'.format(func))\n- raise OpusError(result.value)\n+ raise OpusError(ret.value)\n return result\n \n # A list of exported functions.\n", "issue": "Cannot connect to voice channels\nRunning `await voice_channel.connect()` raises\r\n`AttributeError: 'LP_EncoderStruct' object has no attribute 'value'`\r\n\r\nRelevant portions of the traceback ([and the line itself](https://github.com/Rapptz/discord.py/blob/rewrite/discord/opus.py#L52)):\r\n```\r\n File \".../lib/python3.6/site-packages/discord/abc.py\", line 985, in connect\r\n voice = VoiceClient(state=state, timeout=timeout, channel=self)\r\n File \".../lib/python3.6/site-packages/discord/voice_client.py\", line 109, in __init__\r\n self.encoder = opus.Encoder()\r\n File \".../lib/python3.6/site-packages/discord/opus.py\", line 225, in __init__\r\n self._state = self._create_state()\r\n File \".../lib/python3.6/site-packages/discord/opus.py\", line 239, in _create_state\r\n return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))\r\n File \".../lib/python3.6/site-packages/discord/opus.py\", line 52, in _err_ne\r\n if result.value != 0:\r\nAttributeError: 'LP_EncoderStruct' object has no attribute 'value'\r\n```\r\nI have opus 1.2.1 installed on 64-bit Linux, and it is loaded according to `discord.opus.is_loaded()`.\r\n\r\nAny clue as to what might be the issue?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2017 Rapptz\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\n\nimport ctypes\nimport ctypes.util\nimport array\nfrom .errors import DiscordException\nimport logging\nimport sys\nimport os.path\n\nlog = logging.getLogger(__name__)\nc_int_ptr = ctypes.POINTER(ctypes.c_int)\nc_int16_ptr = ctypes.POINTER(ctypes.c_int16)\nc_float_ptr = ctypes.POINTER(ctypes.c_float)\n\nclass EncoderStruct(ctypes.Structure):\n pass\n\nEncoderStructPtr = ctypes.POINTER(EncoderStruct)\n\ndef _err_lt(result, func, args):\n if result < 0:\n log.info('error has happened in {0.__name__}'.format(func))\n raise OpusError(result)\n return result\n\ndef _err_ne(result, func, args):\n if result.value != 0:\n log.info('error has happened in {0.__name__}'.format(func))\n raise OpusError(result.value)\n return result\n\n# A list of exported functions.\n# The first argument is obviously the name.\n# The second one are the types of arguments it takes.\n# The third is the result type.\n# The fourth is the error handler.\nexported_functions = [\n ('opus_strerror',\n [ctypes.c_int], ctypes.c_char_p, None),\n ('opus_encoder_get_size',\n [ctypes.c_int], ctypes.c_int, None),\n ('opus_encoder_create',\n [ctypes.c_int, ctypes.c_int, ctypes.c_int, c_int_ptr], EncoderStructPtr, _err_ne),\n ('opus_encode',\n [EncoderStructPtr, c_int16_ptr, ctypes.c_int, ctypes.c_char_p, ctypes.c_int32], ctypes.c_int32, _err_lt),\n ('opus_encoder_ctl',\n None, ctypes.c_int32, _err_lt),\n ('opus_encoder_destroy',\n [EncoderStructPtr], None, None),\n]\n\ndef libopus_loader(name):\n # create the library...\n lib = ctypes.cdll.LoadLibrary(name)\n\n # register the functions...\n for item in exported_functions:\n try:\n func = getattr(lib, item[0])\n except Exception as e:\n raise e\n\n try:\n if item[1]:\n func.argtypes = item[1]\n\n func.restype = item[2]\n except KeyError:\n pass\n\n try:\n if item[3]:\n func.errcheck = item[3]\n except KeyError:\n log.exception(\"Error assigning check function to %s\", func)\n\n return lib\n\ntry:\n if sys.platform == 'win32':\n _basedir = os.path.dirname(os.path.abspath(__file__))\n _bitness = 'x64' if sys.maxsize > 2**32 else 'x86'\n _filename = os.path.join(_basedir, 'bin', 'libopus-0.{}.dll'.format(_bitness))\n _lib = libopus_loader(_filename)\n else:\n _lib = libopus_loader(ctypes.util.find_library('opus'))\nexcept Exception as e:\n _lib = None\n\ndef load_opus(name):\n \"\"\"Loads the libopus shared library for use with voice.\n\n If this function is not called then the library uses the function\n `ctypes.util.find_library`__ and then loads that one\n if available.\n\n .. _find library: https://docs.python.org/3.5/library/ctypes.html#finding-shared-libraries\n __ `find library`_\n\n Not loading a library leads to voice not working.\n\n This function propagates the exceptions thrown.\n\n Warning\n --------\n The bitness of the library must match the bitness of your python\n interpreter. If the library is 64-bit then your python interpreter\n must be 64-bit as well. Usually if there's a mismatch in bitness then\n the load will throw an exception.\n\n Note\n ----\n On Windows, the .dll extension is not necessary. However, on Linux\n the full extension is required to load the library, e.g. 
``libopus.so.1``.\n On Linux however, `find library`_ will usually find the library automatically\n without you having to call this.\n\n Parameters\n ----------\n name: str\n The filename of the shared library.\n \"\"\"\n global _lib\n _lib = libopus_loader(name)\n\ndef is_loaded():\n \"\"\"Function to check if opus lib is successfully loaded either\n via the ``ctypes.util.find_library`` call of :func:`load_opus`.\n\n This must return ``True`` for voice to work.\n\n Returns\n -------\n bool\n Indicates if the opus library has been loaded.\n \"\"\"\n global _lib\n return _lib is not None\n\nclass OpusError(DiscordException):\n \"\"\"An exception that is thrown for libopus related errors.\n\n Attributes\n ----------\n code : :class:`int`\n The error code returned.\n \"\"\"\n\n def __init__(self, code):\n self.code = code\n msg = _lib.opus_strerror(self.code).decode('utf-8')\n log.info('\"%s\" has happened', msg)\n super().__init__(msg)\n\nclass OpusNotLoaded(DiscordException):\n \"\"\"An exception that is thrown for when libopus is not loaded.\"\"\"\n pass\n\n\n# Some constants...\nOK = 0\nAPPLICATION_AUDIO = 2049\nAPPLICATION_VOIP = 2048\nAPPLICATION_LOWDELAY = 2051\nCTL_SET_BITRATE = 4002\nCTL_SET_BANDWIDTH = 4008\nCTL_SET_FEC = 4012\nCTL_SET_PLP = 4014\nCTL_SET_SIGNAL = 4024\n\nband_ctl = {\n 'narrow': 1101,\n 'medium': 1102,\n 'wide': 1103,\n 'superwide': 1104,\n 'full': 1105,\n}\n\nsignal_ctl = {\n 'auto': -1000,\n 'voice': 3001,\n 'music': 3002,\n}\n\nclass Encoder:\n SAMPLING_RATE = 48000\n CHANNELS = 2\n FRAME_LENGTH = 20\n SAMPLE_SIZE = 4 # (bit_rate / 8) * CHANNELS (bit_rate == 16)\n SAMPLES_PER_FRAME = int(SAMPLING_RATE / 1000 * FRAME_LENGTH)\n\n FRAME_SIZE = SAMPLES_PER_FRAME * SAMPLE_SIZE\n\n def __init__(self, application=APPLICATION_AUDIO):\n self.application = application\n\n if not is_loaded():\n raise OpusNotLoaded()\n\n self._state = self._create_state()\n self.set_bitrate(128)\n self.set_fec(True)\n self.set_expected_packet_loss_percent(0.15)\n self.set_bandwidth('full')\n self.set_signal_type('auto')\n\n def __del__(self):\n if hasattr(self, '_state'):\n _lib.opus_encoder_destroy(self._state)\n self._state = None\n\n def _create_state(self):\n ret = ctypes.c_int()\n return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))\n\n def set_bitrate(self, kbps):\n kbps = min(128, max(16, int(kbps)))\n\n _lib.opus_encoder_ctl(self._state, CTL_SET_BITRATE, kbps * 1024)\n return kbps\n\n def set_bandwidth(self, req):\n if req not in band_ctl:\n raise KeyError('%r is not a valid bandwidth setting. Try one of: %s' % (req, ','.join(band_ctl)))\n\n k = band_ctl[req]\n _lib.opus_encoder_ctl(self._state, CTL_SET_BANDWIDTH, k)\n\n def set_signal_type(self, req):\n if req not in signal_ctl:\n raise KeyError('%r is not a valid signal setting. 
Try one of: %s' % (req, ','.join(signal_ctl)))\n\n k = signal_ctl[req]\n _lib.opus_encoder_ctl(self._state, CTL_SET_SIGNAL, k)\n\n def set_fec(self, enabled=True):\n _lib.opus_encoder_ctl(self._state, CTL_SET_FEC, 1 if enabled else 0)\n\n def set_expected_packet_loss_percent(self, percentage):\n _lib.opus_encoder_ctl(self._state, CTL_SET_PLP, min(100, max(0, int(percentage * 100))))\n\n def encode(self, pcm, frame_size):\n max_data_bytes = len(pcm)\n pcm = ctypes.cast(pcm, c_int16_ptr)\n data = (ctypes.c_char * max_data_bytes)()\n\n ret = _lib.opus_encode(self._state, pcm, frame_size, data, max_data_bytes)\n\n return array.array('b', data[:ret]).tobytes()\n", "path": "discord/opus.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2017 Rapptz\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\n\nimport ctypes\nimport ctypes.util\nimport array\nfrom .errors import DiscordException\nimport logging\nimport sys\nimport os.path\n\nlog = logging.getLogger(__name__)\nc_int_ptr = ctypes.POINTER(ctypes.c_int)\nc_int16_ptr = ctypes.POINTER(ctypes.c_int16)\nc_float_ptr = ctypes.POINTER(ctypes.c_float)\n\nclass EncoderStruct(ctypes.Structure):\n pass\n\nEncoderStructPtr = ctypes.POINTER(EncoderStruct)\n\ndef _err_lt(result, func, args):\n if result < 0:\n log.info('error has happened in {0.__name__}'.format(func))\n raise OpusError(result)\n return result\n\ndef _err_ne(result, func, args):\n ret = args[-1]._obj\n if ret.value != 0:\n log.info('error has happened in {0.__name__}'.format(func))\n raise OpusError(ret.value)\n return result\n\n# A list of exported functions.\n# The first argument is obviously the name.\n# The second one are the types of arguments it takes.\n# The third is the result type.\n# The fourth is the error handler.\nexported_functions = [\n ('opus_strerror',\n [ctypes.c_int], ctypes.c_char_p, None),\n ('opus_encoder_get_size',\n [ctypes.c_int], ctypes.c_int, None),\n ('opus_encoder_create',\n [ctypes.c_int, ctypes.c_int, ctypes.c_int, c_int_ptr], EncoderStructPtr, _err_ne),\n ('opus_encode',\n [EncoderStructPtr, c_int16_ptr, ctypes.c_int, ctypes.c_char_p, ctypes.c_int32], ctypes.c_int32, _err_lt),\n ('opus_encoder_ctl',\n None, ctypes.c_int32, _err_lt),\n ('opus_encoder_destroy',\n [EncoderStructPtr], None, None),\n]\n\ndef libopus_loader(name):\n # create the library...\n lib = ctypes.cdll.LoadLibrary(name)\n\n # register the functions...\n for item in exported_functions:\n try:\n func = getattr(lib, item[0])\n except Exception as 
e:\n raise e\n\n try:\n if item[1]:\n func.argtypes = item[1]\n\n func.restype = item[2]\n except KeyError:\n pass\n\n try:\n if item[3]:\n func.errcheck = item[3]\n except KeyError:\n log.exception(\"Error assigning check function to %s\", func)\n\n return lib\n\ntry:\n if sys.platform == 'win32':\n _basedir = os.path.dirname(os.path.abspath(__file__))\n _bitness = 'x64' if sys.maxsize > 2**32 else 'x86'\n _filename = os.path.join(_basedir, 'bin', 'libopus-0.{}.dll'.format(_bitness))\n _lib = libopus_loader(_filename)\n else:\n _lib = libopus_loader(ctypes.util.find_library('opus'))\nexcept Exception as e:\n _lib = None\n\ndef load_opus(name):\n \"\"\"Loads the libopus shared library for use with voice.\n\n If this function is not called then the library uses the function\n `ctypes.util.find_library`__ and then loads that one\n if available.\n\n .. _find library: https://docs.python.org/3.5/library/ctypes.html#finding-shared-libraries\n __ `find library`_\n\n Not loading a library leads to voice not working.\n\n This function propagates the exceptions thrown.\n\n Warning\n --------\n The bitness of the library must match the bitness of your python\n interpreter. If the library is 64-bit then your python interpreter\n must be 64-bit as well. Usually if there's a mismatch in bitness then\n the load will throw an exception.\n\n Note\n ----\n On Windows, the .dll extension is not necessary. However, on Linux\n the full extension is required to load the library, e.g. ``libopus.so.1``.\n On Linux however, `find library`_ will usually find the library automatically\n without you having to call this.\n\n Parameters\n ----------\n name: str\n The filename of the shared library.\n \"\"\"\n global _lib\n _lib = libopus_loader(name)\n\ndef is_loaded():\n \"\"\"Function to check if opus lib is successfully loaded either\n via the ``ctypes.util.find_library`` call of :func:`load_opus`.\n\n This must return ``True`` for voice to work.\n\n Returns\n -------\n bool\n Indicates if the opus library has been loaded.\n \"\"\"\n global _lib\n return _lib is not None\n\nclass OpusError(DiscordException):\n \"\"\"An exception that is thrown for libopus related errors.\n\n Attributes\n ----------\n code : :class:`int`\n The error code returned.\n \"\"\"\n\n def __init__(self, code):\n self.code = code\n msg = _lib.opus_strerror(self.code).decode('utf-8')\n log.info('\"%s\" has happened', msg)\n super().__init__(msg)\n\nclass OpusNotLoaded(DiscordException):\n \"\"\"An exception that is thrown for when libopus is not loaded.\"\"\"\n pass\n\n\n# Some constants...\nOK = 0\nAPPLICATION_AUDIO = 2049\nAPPLICATION_VOIP = 2048\nAPPLICATION_LOWDELAY = 2051\nCTL_SET_BITRATE = 4002\nCTL_SET_BANDWIDTH = 4008\nCTL_SET_FEC = 4012\nCTL_SET_PLP = 4014\nCTL_SET_SIGNAL = 4024\n\nband_ctl = {\n 'narrow': 1101,\n 'medium': 1102,\n 'wide': 1103,\n 'superwide': 1104,\n 'full': 1105,\n}\n\nsignal_ctl = {\n 'auto': -1000,\n 'voice': 3001,\n 'music': 3002,\n}\n\nclass Encoder:\n SAMPLING_RATE = 48000\n CHANNELS = 2\n FRAME_LENGTH = 20\n SAMPLE_SIZE = 4 # (bit_rate / 8) * CHANNELS (bit_rate == 16)\n SAMPLES_PER_FRAME = int(SAMPLING_RATE / 1000 * FRAME_LENGTH)\n\n FRAME_SIZE = SAMPLES_PER_FRAME * SAMPLE_SIZE\n\n def __init__(self, application=APPLICATION_AUDIO):\n self.application = application\n\n if not is_loaded():\n raise OpusNotLoaded()\n\n self._state = self._create_state()\n self.set_bitrate(128)\n self.set_fec(True)\n self.set_expected_packet_loss_percent(0.15)\n self.set_bandwidth('full')\n self.set_signal_type('auto')\n\n def 
__del__(self):\n if hasattr(self, '_state'):\n _lib.opus_encoder_destroy(self._state)\n self._state = None\n\n def _create_state(self):\n ret = ctypes.c_int()\n return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))\n\n def set_bitrate(self, kbps):\n kbps = min(128, max(16, int(kbps)))\n\n _lib.opus_encoder_ctl(self._state, CTL_SET_BITRATE, kbps * 1024)\n return kbps\n\n def set_bandwidth(self, req):\n if req not in band_ctl:\n raise KeyError('%r is not a valid bandwidth setting. Try one of: %s' % (req, ','.join(band_ctl)))\n\n k = band_ctl[req]\n _lib.opus_encoder_ctl(self._state, CTL_SET_BANDWIDTH, k)\n\n def set_signal_type(self, req):\n if req not in signal_ctl:\n raise KeyError('%r is not a valid signal setting. Try one of: %s' % (req, ','.join(signal_ctl)))\n\n k = signal_ctl[req]\n _lib.opus_encoder_ctl(self._state, CTL_SET_SIGNAL, k)\n\n def set_fec(self, enabled=True):\n _lib.opus_encoder_ctl(self._state, CTL_SET_FEC, 1 if enabled else 0)\n\n def set_expected_packet_loss_percent(self, percentage):\n _lib.opus_encoder_ctl(self._state, CTL_SET_PLP, min(100, max(0, int(percentage * 100))))\n\n def encode(self, pcm, frame_size):\n max_data_bytes = len(pcm)\n pcm = ctypes.cast(pcm, c_int16_ptr)\n data = (ctypes.c_char * max_data_bytes)()\n\n ret = _lib.opus_encode(self._state, pcm, frame_size, data, max_data_bytes)\n\n return array.array('b', data[:ret]).tobytes()\n", "path": "discord/opus.py"}]} |
gh_patches_debug_1026 | rasdani/github-patches | git_diff | python-pillow__Pillow-7555 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[10.1.0 regression] Palette images save as blank PDFs
Minimal example (you can use [this tiny PNG](https://github.com/python-pillow/Pillow/assets/1119169/d8d45152-7734-4fe3-a2d3-fb49839a0893) for example):
```python
from PIL import Image
image = Image.open('test.png')
image = image.convert('P')
image.save('test.pdf')
```
Output PDF with Pillow 10.0.1:

Output PDF with Pillow 10.1.0:

Issue faced with Python 3.11.6 on Ubuntu 22.04 and Debian 12 (bookworm). I also had the same issue in Docker environments, so I could make a Docker image if needed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/PIL/PdfImagePlugin.py`
Content:
```
1 #
2 # The Python Imaging Library.
3 # $Id$
4 #
5 # PDF (Acrobat) file handling
6 #
7 # History:
8 # 1996-07-16 fl Created
9 # 1997-01-18 fl Fixed header
10 # 2004-02-21 fl Fixes for 1/L/CMYK images, etc.
11 # 2004-02-24 fl Fixes for 1 and P images.
12 #
13 # Copyright (c) 1997-2004 by Secret Labs AB. All rights reserved.
14 # Copyright (c) 1996-1997 by Fredrik Lundh.
15 #
16 # See the README file for information on usage and redistribution.
17 #
18
19 ##
20 # Image plugin for PDF images (output only).
21 ##
22
23 import io
24 import math
25 import os
26 import time
27
28 from . import Image, ImageFile, ImageSequence, PdfParser, __version__, features
29
30 #
31 # --------------------------------------------------------------------
32
33 # object ids:
34 # 1. catalogue
35 # 2. pages
36 # 3. image
37 # 4. page
38 # 5. page contents
39
40
41 def _save_all(im, fp, filename):
42 _save(im, fp, filename, save_all=True)
43
44
45 ##
46 # (Internal) Image save plugin for the PDF format.
47
48
49 def _write_image(im, filename, existing_pdf, image_refs):
50 # FIXME: Should replace ASCIIHexDecode with RunLengthDecode
51 # (packbits) or LZWDecode (tiff/lzw compression). Note that
52 # PDF 1.2 also supports Flatedecode (zip compression).
53
54 params = None
55 decode = None
56
57 #
58 # Get image characteristics
59
60 width, height = im.size
61
62 dict_obj = {"BitsPerComponent": 8}
63 if im.mode == "1":
64 if features.check("libtiff"):
65 filter = "CCITTFaxDecode"
66 dict_obj["BitsPerComponent"] = 1
67 params = PdfParser.PdfArray(
68 [
69 PdfParser.PdfDict(
70 {
71 "K": -1,
72 "BlackIs1": True,
73 "Columns": width,
74 "Rows": height,
75 }
76 )
77 ]
78 )
79 else:
80 filter = "DCTDecode"
81 dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceGray")
82 procset = "ImageB" # grayscale
83 elif im.mode == "L":
84 filter = "DCTDecode"
85 # params = f"<< /Predictor 15 /Columns {width-2} >>"
86 dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceGray")
87 procset = "ImageB" # grayscale
88 elif im.mode == "LA":
89 filter = "JPXDecode"
90 # params = f"<< /Predictor 15 /Columns {width-2} >>"
91 procset = "ImageB" # grayscale
92 dict_obj["SMaskInData"] = 1
93 elif im.mode == "P":
94 filter = "ASCIIHexDecode"
95 palette = im.getpalette()
96 dict_obj["ColorSpace"] = [
97 PdfParser.PdfName("Indexed"),
98 PdfParser.PdfName("DeviceRGB"),
99 255,
100 PdfParser.PdfBinary(palette),
101 ]
102 procset = "ImageI" # indexed color
103
104 if "transparency" in im.info:
105 smask = im.convert("LA").getchannel("A")
106 smask.encoderinfo = {}
107
108 image_ref = _write_image(smask, filename, existing_pdf, image_refs)[0]
109 dict_obj["SMask"] = image_ref
110 elif im.mode == "RGB":
111 filter = "DCTDecode"
112 dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceRGB")
113 procset = "ImageC" # color images
114 elif im.mode == "RGBA":
115 filter = "JPXDecode"
116 procset = "ImageC" # color images
117 dict_obj["SMaskInData"] = 1
118 elif im.mode == "CMYK":
119 filter = "DCTDecode"
120 dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceCMYK")
121 procset = "ImageC" # color images
122 decode = [1, 0, 1, 0, 1, 0, 1, 0]
123 else:
124 msg = f"cannot save mode {im.mode}"
125 raise ValueError(msg)
126
127 #
128 # image
129
130 op = io.BytesIO()
131
132 if filter == "ASCIIHexDecode":
133 ImageFile._save(im, op, [("hex", (0, 0) + im.size, 0, im.mode)])
134 elif filter == "CCITTFaxDecode":
135 im.save(
136 op,
137 "TIFF",
138 compression="group4",
139 # use a single strip
140 strip_size=math.ceil(width / 8) * height,
141 )
142 elif filter == "DCTDecode":
143 Image.SAVE["JPEG"](im, op, filename)
144 elif filter == "JPXDecode":
145 del dict_obj["BitsPerComponent"]
146 Image.SAVE["JPEG2000"](im, op, filename)
147 else:
148 msg = f"unsupported PDF filter ({filter})"
149 raise ValueError(msg)
150
151 stream = op.getvalue()
152 if filter == "CCITTFaxDecode":
153 stream = stream[8:]
154 filter = PdfParser.PdfArray([PdfParser.PdfName(filter)])
155 else:
156 filter = PdfParser.PdfName(filter)
157
158 image_ref = image_refs.pop(0)
159 existing_pdf.write_obj(
160 image_ref,
161 stream=stream,
162 Type=PdfParser.PdfName("XObject"),
163 Subtype=PdfParser.PdfName("Image"),
164 Width=width, # * 72.0 / x_resolution,
165 Height=height, # * 72.0 / y_resolution,
166 Filter=filter,
167 Decode=decode,
168 DecodeParms=params,
169 **dict_obj,
170 )
171
172 return image_ref, procset
173
174
175 def _save(im, fp, filename, save_all=False):
176 is_appending = im.encoderinfo.get("append", False)
177 if is_appending:
178 existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode="r+b")
179 else:
180 existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode="w+b")
181
182 dpi = im.encoderinfo.get("dpi")
183 if dpi:
184 x_resolution = dpi[0]
185 y_resolution = dpi[1]
186 else:
187 x_resolution = y_resolution = im.encoderinfo.get("resolution", 72.0)
188
189 info = {
190 "title": None
191 if is_appending
192 else os.path.splitext(os.path.basename(filename))[0],
193 "author": None,
194 "subject": None,
195 "keywords": None,
196 "creator": None,
197 "producer": None,
198 "creationDate": None if is_appending else time.gmtime(),
199 "modDate": None if is_appending else time.gmtime(),
200 }
201 for k, default in info.items():
202 v = im.encoderinfo.get(k) if k in im.encoderinfo else default
203 if v:
204 existing_pdf.info[k[0].upper() + k[1:]] = v
205
206 #
207 # make sure image data is available
208 im.load()
209
210 existing_pdf.start_writing()
211 existing_pdf.write_header()
212 existing_pdf.write_comment(f"created by Pillow {__version__} PDF driver")
213
214 #
215 # pages
216 ims = [im]
217 if save_all:
218 append_images = im.encoderinfo.get("append_images", [])
219 for append_im in append_images:
220 append_im.encoderinfo = im.encoderinfo.copy()
221 ims.append(append_im)
222 number_of_pages = 0
223 image_refs = []
224 page_refs = []
225 contents_refs = []
226 for im in ims:
227 im_number_of_pages = 1
228 if save_all:
229 try:
230 im_number_of_pages = im.n_frames
231 except AttributeError:
232 # Image format does not have n_frames.
233 # It is a single frame image
234 pass
235 number_of_pages += im_number_of_pages
236 for i in range(im_number_of_pages):
237 image_refs.append(existing_pdf.next_object_id(0))
238 if im.mode == "P" and "transparency" in im.info:
239 image_refs.append(existing_pdf.next_object_id(0))
240
241 page_refs.append(existing_pdf.next_object_id(0))
242 contents_refs.append(existing_pdf.next_object_id(0))
243 existing_pdf.pages.append(page_refs[-1])
244
245 #
246 # catalog and list of pages
247 existing_pdf.write_catalog()
248
249 page_number = 0
250 for im_sequence in ims:
251 im_pages = ImageSequence.Iterator(im_sequence) if save_all else [im_sequence]
252 for im in im_pages:
253 image_ref, procset = _write_image(im, filename, existing_pdf, image_refs)
254
255 #
256 # page
257
258 existing_pdf.write_page(
259 page_refs[page_number],
260 Resources=PdfParser.PdfDict(
261 ProcSet=[PdfParser.PdfName("PDF"), PdfParser.PdfName(procset)],
262 XObject=PdfParser.PdfDict(image=image_ref),
263 ),
264 MediaBox=[
265 0,
266 0,
267 im.width * 72.0 / x_resolution,
268 im.height * 72.0 / y_resolution,
269 ],
270 Contents=contents_refs[page_number],
271 )
272
273 #
274 # page contents
275
276 page_contents = b"q %f 0 0 %f 0 0 cm /image Do Q\n" % (
277 im.width * 72.0 / x_resolution,
278 im.height * 72.0 / y_resolution,
279 )
280
281 existing_pdf.write_obj(contents_refs[page_number], stream=page_contents)
282
283 page_number += 1
284
285 #
286 # trailer
287 existing_pdf.write_xref_and_trailer()
288 if hasattr(fp, "flush"):
289 fp.flush()
290 existing_pdf.close()
291
292
293 #
294 # --------------------------------------------------------------------
295
296
297 Image.register_save("PDF", _save)
298 Image.register_save_all("PDF", _save_all)
299
300 Image.register_extension("PDF", ".pdf")
301
302 Image.register_mime("PDF", "application/pdf")
303
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/PIL/PdfImagePlugin.py b/src/PIL/PdfImagePlugin.py
--- a/src/PIL/PdfImagePlugin.py
+++ b/src/PIL/PdfImagePlugin.py
@@ -96,7 +96,7 @@
dict_obj["ColorSpace"] = [
PdfParser.PdfName("Indexed"),
PdfParser.PdfName("DeviceRGB"),
- 255,
+ len(palette) // 3 - 1,
PdfParser.PdfBinary(palette),
]
procset = "ImageI" # indexed color
| {"golden_diff": "diff --git a/src/PIL/PdfImagePlugin.py b/src/PIL/PdfImagePlugin.py\n--- a/src/PIL/PdfImagePlugin.py\n+++ b/src/PIL/PdfImagePlugin.py\n@@ -96,7 +96,7 @@\n dict_obj[\"ColorSpace\"] = [\n PdfParser.PdfName(\"Indexed\"),\n PdfParser.PdfName(\"DeviceRGB\"),\n- 255,\n+ len(palette) // 3 - 1,\n PdfParser.PdfBinary(palette),\n ]\n procset = \"ImageI\" # indexed color\n", "issue": "[10.1.0 regression] Palette images save as blank PDFs\nMinimal example (you can use [this tiny PNG](https://github.com/python-pillow/Pillow/assets/1119169/d8d45152-7734-4fe3-a2d3-fb49839a0893) for example):\r\n\r\n```python\r\nfrom PIL import Image\r\n\r\nimage = Image.open('test.png')\r\nimage = image.convert('P')\r\nimage.save('test.pdf')\r\n```\r\n\r\nOutput PDF with Pillow 10.0.1:\r\n\r\n\r\nOutput PDF with Pillow 10.1.0:\r\n\r\n\r\nIssue faced with Python 3.11.6 on Ubuntu 22.04 and Debian 12 (bookworm). I also had the same issue in Docker environments, so I could make a Docker image if needed.\n", "before_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# PDF (Acrobat) file handling\n#\n# History:\n# 1996-07-16 fl Created\n# 1997-01-18 fl Fixed header\n# 2004-02-21 fl Fixes for 1/L/CMYK images, etc.\n# 2004-02-24 fl Fixes for 1 and P images.\n#\n# Copyright (c) 1997-2004 by Secret Labs AB. All rights reserved.\n# Copyright (c) 1996-1997 by Fredrik Lundh.\n#\n# See the README file for information on usage and redistribution.\n#\n\n##\n# Image plugin for PDF images (output only).\n##\n\nimport io\nimport math\nimport os\nimport time\n\nfrom . import Image, ImageFile, ImageSequence, PdfParser, __version__, features\n\n#\n# --------------------------------------------------------------------\n\n# object ids:\n# 1. catalogue\n# 2. pages\n# 3. image\n# 4. page\n# 5. page contents\n\n\ndef _save_all(im, fp, filename):\n _save(im, fp, filename, save_all=True)\n\n\n##\n# (Internal) Image save plugin for the PDF format.\n\n\ndef _write_image(im, filename, existing_pdf, image_refs):\n # FIXME: Should replace ASCIIHexDecode with RunLengthDecode\n # (packbits) or LZWDecode (tiff/lzw compression). 
Note that\n # PDF 1.2 also supports Flatedecode (zip compression).\n\n params = None\n decode = None\n\n #\n # Get image characteristics\n\n width, height = im.size\n\n dict_obj = {\"BitsPerComponent\": 8}\n if im.mode == \"1\":\n if features.check(\"libtiff\"):\n filter = \"CCITTFaxDecode\"\n dict_obj[\"BitsPerComponent\"] = 1\n params = PdfParser.PdfArray(\n [\n PdfParser.PdfDict(\n {\n \"K\": -1,\n \"BlackIs1\": True,\n \"Columns\": width,\n \"Rows\": height,\n }\n )\n ]\n )\n else:\n filter = \"DCTDecode\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceGray\")\n procset = \"ImageB\" # grayscale\n elif im.mode == \"L\":\n filter = \"DCTDecode\"\n # params = f\"<< /Predictor 15 /Columns {width-2} >>\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceGray\")\n procset = \"ImageB\" # grayscale\n elif im.mode == \"LA\":\n filter = \"JPXDecode\"\n # params = f\"<< /Predictor 15 /Columns {width-2} >>\"\n procset = \"ImageB\" # grayscale\n dict_obj[\"SMaskInData\"] = 1\n elif im.mode == \"P\":\n filter = \"ASCIIHexDecode\"\n palette = im.getpalette()\n dict_obj[\"ColorSpace\"] = [\n PdfParser.PdfName(\"Indexed\"),\n PdfParser.PdfName(\"DeviceRGB\"),\n 255,\n PdfParser.PdfBinary(palette),\n ]\n procset = \"ImageI\" # indexed color\n\n if \"transparency\" in im.info:\n smask = im.convert(\"LA\").getchannel(\"A\")\n smask.encoderinfo = {}\n\n image_ref = _write_image(smask, filename, existing_pdf, image_refs)[0]\n dict_obj[\"SMask\"] = image_ref\n elif im.mode == \"RGB\":\n filter = \"DCTDecode\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceRGB\")\n procset = \"ImageC\" # color images\n elif im.mode == \"RGBA\":\n filter = \"JPXDecode\"\n procset = \"ImageC\" # color images\n dict_obj[\"SMaskInData\"] = 1\n elif im.mode == \"CMYK\":\n filter = \"DCTDecode\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceCMYK\")\n procset = \"ImageC\" # color images\n decode = [1, 0, 1, 0, 1, 0, 1, 0]\n else:\n msg = f\"cannot save mode {im.mode}\"\n raise ValueError(msg)\n\n #\n # image\n\n op = io.BytesIO()\n\n if filter == \"ASCIIHexDecode\":\n ImageFile._save(im, op, [(\"hex\", (0, 0) + im.size, 0, im.mode)])\n elif filter == \"CCITTFaxDecode\":\n im.save(\n op,\n \"TIFF\",\n compression=\"group4\",\n # use a single strip\n strip_size=math.ceil(width / 8) * height,\n )\n elif filter == \"DCTDecode\":\n Image.SAVE[\"JPEG\"](im, op, filename)\n elif filter == \"JPXDecode\":\n del dict_obj[\"BitsPerComponent\"]\n Image.SAVE[\"JPEG2000\"](im, op, filename)\n else:\n msg = f\"unsupported PDF filter ({filter})\"\n raise ValueError(msg)\n\n stream = op.getvalue()\n if filter == \"CCITTFaxDecode\":\n stream = stream[8:]\n filter = PdfParser.PdfArray([PdfParser.PdfName(filter)])\n else:\n filter = PdfParser.PdfName(filter)\n\n image_ref = image_refs.pop(0)\n existing_pdf.write_obj(\n image_ref,\n stream=stream,\n Type=PdfParser.PdfName(\"XObject\"),\n Subtype=PdfParser.PdfName(\"Image\"),\n Width=width, # * 72.0 / x_resolution,\n Height=height, # * 72.0 / y_resolution,\n Filter=filter,\n Decode=decode,\n DecodeParms=params,\n **dict_obj,\n )\n\n return image_ref, procset\n\n\ndef _save(im, fp, filename, save_all=False):\n is_appending = im.encoderinfo.get(\"append\", False)\n if is_appending:\n existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode=\"r+b\")\n else:\n existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode=\"w+b\")\n\n dpi = im.encoderinfo.get(\"dpi\")\n if dpi:\n x_resolution = dpi[0]\n y_resolution = dpi[1]\n else:\n x_resolution = y_resolution = 
im.encoderinfo.get(\"resolution\", 72.0)\n\n info = {\n \"title\": None\n if is_appending\n else os.path.splitext(os.path.basename(filename))[0],\n \"author\": None,\n \"subject\": None,\n \"keywords\": None,\n \"creator\": None,\n \"producer\": None,\n \"creationDate\": None if is_appending else time.gmtime(),\n \"modDate\": None if is_appending else time.gmtime(),\n }\n for k, default in info.items():\n v = im.encoderinfo.get(k) if k in im.encoderinfo else default\n if v:\n existing_pdf.info[k[0].upper() + k[1:]] = v\n\n #\n # make sure image data is available\n im.load()\n\n existing_pdf.start_writing()\n existing_pdf.write_header()\n existing_pdf.write_comment(f\"created by Pillow {__version__} PDF driver\")\n\n #\n # pages\n ims = [im]\n if save_all:\n append_images = im.encoderinfo.get(\"append_images\", [])\n for append_im in append_images:\n append_im.encoderinfo = im.encoderinfo.copy()\n ims.append(append_im)\n number_of_pages = 0\n image_refs = []\n page_refs = []\n contents_refs = []\n for im in ims:\n im_number_of_pages = 1\n if save_all:\n try:\n im_number_of_pages = im.n_frames\n except AttributeError:\n # Image format does not have n_frames.\n # It is a single frame image\n pass\n number_of_pages += im_number_of_pages\n for i in range(im_number_of_pages):\n image_refs.append(existing_pdf.next_object_id(0))\n if im.mode == \"P\" and \"transparency\" in im.info:\n image_refs.append(existing_pdf.next_object_id(0))\n\n page_refs.append(existing_pdf.next_object_id(0))\n contents_refs.append(existing_pdf.next_object_id(0))\n existing_pdf.pages.append(page_refs[-1])\n\n #\n # catalog and list of pages\n existing_pdf.write_catalog()\n\n page_number = 0\n for im_sequence in ims:\n im_pages = ImageSequence.Iterator(im_sequence) if save_all else [im_sequence]\n for im in im_pages:\n image_ref, procset = _write_image(im, filename, existing_pdf, image_refs)\n\n #\n # page\n\n existing_pdf.write_page(\n page_refs[page_number],\n Resources=PdfParser.PdfDict(\n ProcSet=[PdfParser.PdfName(\"PDF\"), PdfParser.PdfName(procset)],\n XObject=PdfParser.PdfDict(image=image_ref),\n ),\n MediaBox=[\n 0,\n 0,\n im.width * 72.0 / x_resolution,\n im.height * 72.0 / y_resolution,\n ],\n Contents=contents_refs[page_number],\n )\n\n #\n # page contents\n\n page_contents = b\"q %f 0 0 %f 0 0 cm /image Do Q\\n\" % (\n im.width * 72.0 / x_resolution,\n im.height * 72.0 / y_resolution,\n )\n\n existing_pdf.write_obj(contents_refs[page_number], stream=page_contents)\n\n page_number += 1\n\n #\n # trailer\n existing_pdf.write_xref_and_trailer()\n if hasattr(fp, \"flush\"):\n fp.flush()\n existing_pdf.close()\n\n\n#\n# --------------------------------------------------------------------\n\n\nImage.register_save(\"PDF\", _save)\nImage.register_save_all(\"PDF\", _save_all)\n\nImage.register_extension(\"PDF\", \".pdf\")\n\nImage.register_mime(\"PDF\", \"application/pdf\")\n", "path": "src/PIL/PdfImagePlugin.py"}], "after_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# PDF (Acrobat) file handling\n#\n# History:\n# 1996-07-16 fl Created\n# 1997-01-18 fl Fixed header\n# 2004-02-21 fl Fixes for 1/L/CMYK images, etc.\n# 2004-02-24 fl Fixes for 1 and P images.\n#\n# Copyright (c) 1997-2004 by Secret Labs AB. All rights reserved.\n# Copyright (c) 1996-1997 by Fredrik Lundh.\n#\n# See the README file for information on usage and redistribution.\n#\n\n##\n# Image plugin for PDF images (output only).\n##\n\nimport io\nimport math\nimport os\nimport time\n\nfrom . 
import Image, ImageFile, ImageSequence, PdfParser, __version__, features\n\n#\n# --------------------------------------------------------------------\n\n# object ids:\n# 1. catalogue\n# 2. pages\n# 3. image\n# 4. page\n# 5. page contents\n\n\ndef _save_all(im, fp, filename):\n _save(im, fp, filename, save_all=True)\n\n\n##\n# (Internal) Image save plugin for the PDF format.\n\n\ndef _write_image(im, filename, existing_pdf, image_refs):\n # FIXME: Should replace ASCIIHexDecode with RunLengthDecode\n # (packbits) or LZWDecode (tiff/lzw compression). Note that\n # PDF 1.2 also supports Flatedecode (zip compression).\n\n params = None\n decode = None\n\n #\n # Get image characteristics\n\n width, height = im.size\n\n dict_obj = {\"BitsPerComponent\": 8}\n if im.mode == \"1\":\n if features.check(\"libtiff\"):\n filter = \"CCITTFaxDecode\"\n dict_obj[\"BitsPerComponent\"] = 1\n params = PdfParser.PdfArray(\n [\n PdfParser.PdfDict(\n {\n \"K\": -1,\n \"BlackIs1\": True,\n \"Columns\": width,\n \"Rows\": height,\n }\n )\n ]\n )\n else:\n filter = \"DCTDecode\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceGray\")\n procset = \"ImageB\" # grayscale\n elif im.mode == \"L\":\n filter = \"DCTDecode\"\n # params = f\"<< /Predictor 15 /Columns {width-2} >>\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceGray\")\n procset = \"ImageB\" # grayscale\n elif im.mode == \"LA\":\n filter = \"JPXDecode\"\n # params = f\"<< /Predictor 15 /Columns {width-2} >>\"\n procset = \"ImageB\" # grayscale\n dict_obj[\"SMaskInData\"] = 1\n elif im.mode == \"P\":\n filter = \"ASCIIHexDecode\"\n palette = im.getpalette()\n dict_obj[\"ColorSpace\"] = [\n PdfParser.PdfName(\"Indexed\"),\n PdfParser.PdfName(\"DeviceRGB\"),\n len(palette) // 3 - 1,\n PdfParser.PdfBinary(palette),\n ]\n procset = \"ImageI\" # indexed color\n\n if \"transparency\" in im.info:\n smask = im.convert(\"LA\").getchannel(\"A\")\n smask.encoderinfo = {}\n\n image_ref = _write_image(smask, filename, existing_pdf, image_refs)[0]\n dict_obj[\"SMask\"] = image_ref\n elif im.mode == \"RGB\":\n filter = \"DCTDecode\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceRGB\")\n procset = \"ImageC\" # color images\n elif im.mode == \"RGBA\":\n filter = \"JPXDecode\"\n procset = \"ImageC\" # color images\n dict_obj[\"SMaskInData\"] = 1\n elif im.mode == \"CMYK\":\n filter = \"DCTDecode\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceCMYK\")\n procset = \"ImageC\" # color images\n decode = [1, 0, 1, 0, 1, 0, 1, 0]\n else:\n msg = f\"cannot save mode {im.mode}\"\n raise ValueError(msg)\n\n #\n # image\n\n op = io.BytesIO()\n\n if filter == \"ASCIIHexDecode\":\n ImageFile._save(im, op, [(\"hex\", (0, 0) + im.size, 0, im.mode)])\n elif filter == \"CCITTFaxDecode\":\n im.save(\n op,\n \"TIFF\",\n compression=\"group4\",\n # use a single strip\n strip_size=math.ceil(width / 8) * height,\n )\n elif filter == \"DCTDecode\":\n Image.SAVE[\"JPEG\"](im, op, filename)\n elif filter == \"JPXDecode\":\n del dict_obj[\"BitsPerComponent\"]\n Image.SAVE[\"JPEG2000\"](im, op, filename)\n else:\n msg = f\"unsupported PDF filter ({filter})\"\n raise ValueError(msg)\n\n stream = op.getvalue()\n if filter == \"CCITTFaxDecode\":\n stream = stream[8:]\n filter = PdfParser.PdfArray([PdfParser.PdfName(filter)])\n else:\n filter = PdfParser.PdfName(filter)\n\n image_ref = image_refs.pop(0)\n existing_pdf.write_obj(\n image_ref,\n stream=stream,\n Type=PdfParser.PdfName(\"XObject\"),\n Subtype=PdfParser.PdfName(\"Image\"),\n Width=width, # * 72.0 / 
x_resolution,\n Height=height, # * 72.0 / y_resolution,\n Filter=filter,\n Decode=decode,\n DecodeParms=params,\n **dict_obj,\n )\n\n return image_ref, procset\n\n\ndef _save(im, fp, filename, save_all=False):\n is_appending = im.encoderinfo.get(\"append\", False)\n if is_appending:\n existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode=\"r+b\")\n else:\n existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode=\"w+b\")\n\n dpi = im.encoderinfo.get(\"dpi\")\n if dpi:\n x_resolution = dpi[0]\n y_resolution = dpi[1]\n else:\n x_resolution = y_resolution = im.encoderinfo.get(\"resolution\", 72.0)\n\n info = {\n \"title\": None\n if is_appending\n else os.path.splitext(os.path.basename(filename))[0],\n \"author\": None,\n \"subject\": None,\n \"keywords\": None,\n \"creator\": None,\n \"producer\": None,\n \"creationDate\": None if is_appending else time.gmtime(),\n \"modDate\": None if is_appending else time.gmtime(),\n }\n for k, default in info.items():\n v = im.encoderinfo.get(k) if k in im.encoderinfo else default\n if v:\n existing_pdf.info[k[0].upper() + k[1:]] = v\n\n #\n # make sure image data is available\n im.load()\n\n existing_pdf.start_writing()\n existing_pdf.write_header()\n existing_pdf.write_comment(f\"created by Pillow {__version__} PDF driver\")\n\n #\n # pages\n ims = [im]\n if save_all:\n append_images = im.encoderinfo.get(\"append_images\", [])\n for append_im in append_images:\n append_im.encoderinfo = im.encoderinfo.copy()\n ims.append(append_im)\n number_of_pages = 0\n image_refs = []\n page_refs = []\n contents_refs = []\n for im in ims:\n im_number_of_pages = 1\n if save_all:\n try:\n im_number_of_pages = im.n_frames\n except AttributeError:\n # Image format does not have n_frames.\n # It is a single frame image\n pass\n number_of_pages += im_number_of_pages\n for i in range(im_number_of_pages):\n image_refs.append(existing_pdf.next_object_id(0))\n if im.mode == \"P\" and \"transparency\" in im.info:\n image_refs.append(existing_pdf.next_object_id(0))\n\n page_refs.append(existing_pdf.next_object_id(0))\n contents_refs.append(existing_pdf.next_object_id(0))\n existing_pdf.pages.append(page_refs[-1])\n\n #\n # catalog and list of pages\n existing_pdf.write_catalog()\n\n page_number = 0\n for im_sequence in ims:\n im_pages = ImageSequence.Iterator(im_sequence) if save_all else [im_sequence]\n for im in im_pages:\n image_ref, procset = _write_image(im, filename, existing_pdf, image_refs)\n\n #\n # page\n\n existing_pdf.write_page(\n page_refs[page_number],\n Resources=PdfParser.PdfDict(\n ProcSet=[PdfParser.PdfName(\"PDF\"), PdfParser.PdfName(procset)],\n XObject=PdfParser.PdfDict(image=image_ref),\n ),\n MediaBox=[\n 0,\n 0,\n im.width * 72.0 / x_resolution,\n im.height * 72.0 / y_resolution,\n ],\n Contents=contents_refs[page_number],\n )\n\n #\n # page contents\n\n page_contents = b\"q %f 0 0 %f 0 0 cm /image Do Q\\n\" % (\n im.width * 72.0 / x_resolution,\n im.height * 72.0 / y_resolution,\n )\n\n existing_pdf.write_obj(contents_refs[page_number], stream=page_contents)\n\n page_number += 1\n\n #\n # trailer\n existing_pdf.write_xref_and_trailer()\n if hasattr(fp, \"flush\"):\n fp.flush()\n existing_pdf.close()\n\n\n#\n# --------------------------------------------------------------------\n\n\nImage.register_save(\"PDF\", _save)\nImage.register_save_all(\"PDF\", _save_all)\n\nImage.register_extension(\"PDF\", \".pdf\")\n\nImage.register_mime(\"PDF\", \"application/pdf\")\n", "path": "src/PIL/PdfImagePlugin.py"}]} |
gh_patches_debug_1027 | rasdani/github-patches | git_diff | Pyomo__pyomo-895 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NEOS error
Our current build is failing because of the error below, I speculate due to usually-untested code being triggered by a transient network failure.
```
======================================================================
ERROR: test_kestrel_plugin (pyomo.neos.tests.test_neos.TestKestrel)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/build/Pyomo/pyomo/pyomo/neos/tests/test_neos.py", line 90, in test_kestrel_plugin
results = solver_manager.solve(m, opt='cbc')
File "/home/travis/build/Pyomo/pyomo/pyomo/opt/parallel/async_solver.py", line 28, in solve
return self.execute(*args, **kwds)
File "/home/travis/build/Pyomo/pyomo/pyomo/opt/parallel/manager.py", line 107, in execute
ah = self.queue(*args, **kwds)
File "/home/travis/build/Pyomo/pyomo/pyomo/opt/parallel/manager.py", line 122, in queue
return self._perform_queue(ah, *args, **kwds)
File "/home/travis/build/Pyomo/pyomo/pyomo/neos/plugins/kestrel_plugin.py", line 127, in _perform_queue
raise ActionManagerError(
NameError: name 'ActionManagerError' is not defined
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyomo/neos/plugins/kestrel_plugin.py`
Content:
```
1 # ___________________________________________________________________________
2 #
3 # Pyomo: Python Optimization Modeling Objects
4 # Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC
5 # Under the terms of Contract DE-NA0003525 with National Technology and
6 # Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
7 # rights in this software.
8 # This software is distributed under the 3-clause BSD License.
9 # ___________________________________________________________________________
10
11 import logging
12 import os
13 import re
14 import six
15
16 from six.moves.xmlrpc_client import ProtocolError
17
18 from pyomo.opt import SolverFactory, SolverManagerFactory, OptSolver
19 from pyomo.opt.parallel.async_solver import (
20 AsynchronousSolverManager, ActionStatus
21 )
22 from pyomo.opt.base import OptSolver
23 from pyomo.core.base import Block
24 import pyomo.neos.kestrel
25
26
27 logger = logging.getLogger('pyomo.neos')
28
29
30 def _neos_error(msg, results, current_message):
31 error_re = re.compile('error', flags=re.I)
32 warn_re = re.compile('warn', flags=re.I)
33
34 logger.error("%s NEOS log:\n%s" % ( msg, current_message, ))
35 soln_data = results.data
36 if six.PY3:
37 soln_data = soln_data.decode('utf-8')
38 for line in soln_data.splitlines():
39 if error_re.search(line):
40 logger.error(line)
41 elif warn_re.search(line):
42 logger.warn(line)
43
44
45 @SolverManagerFactory.register(
46 'neos', doc="Asynchronously execute solvers on the NEOS server")
47 class SolverManager_NEOS(AsynchronousSolverManager):
48
49 def clear(self):
50 """
51 Clear manager state
52 """
53 AsynchronousSolverManager.clear(self)
54 self.kestrel = pyomo.neos.kestrel.kestrelAMPL()
55 self._ah = {} # maps NEOS job numbers to their corresponding
56 # action handle.
57 self._args = {}
58 self._opt_data = {}
59
60 # to grab streamed output from NEOS, need to keep
61 # map of action handle to the to-date string of
62 # extracted output.
63 # TBD: The following entries aren't currently cleaned up, but
64 # we're still trying to get the basics down.
65 # store pairs of NEOS message offset and NEOS message string.
66 # index into the map is the NEOS job number
67 self._neos_log = {}
68 self._solvers = {}
69
70 def _perform_queue(self, ah, *args, **kwds):
71 """
72 Perform the queue operation. This method returns the ActionHandle,
73 and the ActionHandle status indicates whether the queue was successful.
74 """
75 solver = kwds.pop('solver', kwds.pop('opt', None))
76 if solver is None:
77 raise ActionManagerError(
78 "No solver passed to %s, use keyword option 'solver'"
79 % (type(self).__name__) )
80 if not isinstance(solver, six.string_types):
81 solver_name = solver.name
82 if solver_name == 'asl':
83 solver_name = \
84 os.path.basename(solver.executable())
85 else:
86 solver_name = solver
87 solver = None
88
89 #
90 # Handle ephemeral solvers options here. These
91 # will override whatever is currently in the options
92 # dictionary, but we will reset these options to
93 # their original value at the end of this method.
94 #
95 user_solver_options = {}
96 # make sure to transfer the options dict on the
97 # solver plugin if the user does not use a string
98 # to identify the neos solver. The ephemeral
99 # options must also go after these.
100 if solver is not None:
101 user_solver_options.update(solver.options)
102 _options = kwds.pop('options', {})
103 if isinstance(_options, six.string_types):
104 _options = OptSolver._options_string_to_dict(_options)
105 user_solver_options.update(_options)
106 user_solver_options.update(
107 OptSolver._options_string_to_dict(kwds.pop('options_string', '')))
108
109 # JDS: [5/13/17] The following is a HACK. This timeout flag is
110 # set by pyomo/scripting/util.py:apply_optimizer. If we do not
111 # remove it, it will get passed to the NEOS solver. For solvers
112 # like CPLEX 12.7.0, this will cause a fatal error as it is not
113 # a known option.
114 if user_solver_options.get('timelimit',0) is None:
115 del user_solver_options['timelimit']
116
117 opt = SolverFactory('_neos')
118 opt._presolve(*args, **kwds)
119 #
120 # Map NEOS name, using lowercase convention in Pyomo
121 #
122 if len(self._solvers) == 0:
123 for name in self.kestrel.solvers():
124 if name.endswith('AMPL'):
125 self._solvers[ name[:-5].lower() ] = name[:-5]
126 if solver_name not in self._solvers:
127 raise ActionManagerError(
128 "Solver '%s' is not recognized by NEOS. "
129 "Solver names recognized:\n%s"
130 % (solver_name, str(sorted(self._solvers.keys()))))
131 #
132 # Apply kestrel
133 #
134 # Set the kestrel_options environment
135 #
136 neos_sname = self._solvers[solver_name].lower()
137 os.environ['kestrel_options'] = 'solver=%s' % self._solvers[solver_name]
138 #
139 # Set the <solver>_options environment
140 #
141 solver_options = {}
142 for key in opt.options:
143 solver_options[key]=opt.options[key]
144 solver_options.update(user_solver_options)
145 options = opt._get_options_string(solver_options)
146 if not options == "":
147 os.environ[neos_sname+'_options'] = options
148 #
149 # Generate an XML string using these two environment variables
150 #
151 xml = self.kestrel.formXML(opt._problem_files[0])
152 (jobNumber, password) = self.kestrel.submit(xml)
153 ah.job = jobNumber
154 ah.password = password
155 #
156 # Cleanup
157 #
158 del os.environ['kestrel_options']
159 try:
160 del os.environ[neos_sname+"_options"]
161 except:
162 pass
163 #
164 # Store action handle, and return
165 #
166 self._ah[jobNumber] = ah
167 self._neos_log[jobNumber] = (0, "")
168 self._opt_data[jobNumber] = (opt,
169 opt._smap_id,
170 opt._load_solutions,
171 opt._select_index,
172 opt._default_variable_value)
173 self._args[jobNumber] = args
174 return ah
175
176 def _perform_wait_any(self):
177 """
178 Perform the wait_any operation. This method returns an
179 ActionHandle with the results of waiting. If None is returned
180 then the ActionManager assumes that it can call this method again.
181 Note that an ActionHandle can be returned with a dummy value,
182 to indicate an error.
183 """
184 for jobNumber in self._ah:
185
186 status = self.kestrel.neos.getJobStatus(jobNumber,
187 self._ah[jobNumber].password)
188
189 if status not in ("Running", "Waiting"):
190 # the job is done.
191 ah = self._ah[jobNumber]
192 del self._ah[jobNumber]
193 ah.status = ActionStatus.done
194
195 (opt,
196 smap_id,
197 load_solutions,
198 select_index,
199 default_variable_value) = self._opt_data[jobNumber]
200 del self._opt_data[jobNumber]
201
202 args = self._args[jobNumber]
203 del self._args[jobNumber]
204
205 # retrieve the final results, which are in message/log format.
206 results = self.kestrel.neos.getFinalResults(jobNumber, ah.password)
207
208 (current_offset, current_message) = self._neos_log[jobNumber]
209 with open(opt._log_file, 'w') as OUTPUT:
210 OUTPUT.write(current_message)
211 with open(opt._soln_file, 'w') as OUTPUT:
212 if six.PY2:
213 OUTPUT.write(results.data)
214 else:
215 OUTPUT.write(results.data.decode('utf-8'))
216
217 rc = None
218 try:
219 solver_results = opt.process_output(rc)
220 except:
221 _neos_error( "Error parsing NEOS solution file",
222 results, current_message )
223 return ah
224
225 solver_results._smap_id = smap_id
226 self.results[ah.id] = solver_results
227
228 if isinstance(args[0], Block):
229 _model = args[0]
230 if load_solutions:
231 try:
232 _model.solutions.load_from(
233 solver_results,
234 select=select_index,
235 default_variable_value=default_variable_value)
236 except:
237 _neos_error(
238 "Error loading NEOS solution into model",
239 results, current_message )
240 solver_results._smap_id = None
241 solver_results.solution.clear()
242 else:
243 solver_results._smap = _model.solutions.symbol_map[smap_id]
244 _model.solutions.delete_symbol_map(smap_id)
245
246 return ah
247 else:
248 # The job is still running...
249 #
250 # Grab the partial messages from NEOS as you go, in case
251 # you want to output on-the-fly. You will only get data
252 # if the job was routed to the "short" priority queue.
253 (current_offset, current_message) = self._neos_log[jobNumber]
254 # TBD: blocking isn't the way to go, but non-blocking
255 # was triggering some exception in kestrel.
256 #
257 # [5/13/17]: The blocking fetch will timeout in 2
258 # minutes. If NEOS doesn't produce intermediate results
259 # by then we will need to catch (and eat) the exception
260 try:
261 (message_fragment, new_offset) \
262 = self.kestrel.neos.getIntermediateResults(
263 jobNumber,
264 self._ah[jobNumber].password,
265 current_offset )
266 logger.info(message_fragment)
267 self._neos_log[jobNumber] = (
268 new_offset,
269 current_message + (
270 message_fragment.data if six.PY2
271 else (message_fragment.data).decode('utf-8') ) )
272 except ProtocolError:
273 # The command probably timed out
274 pass
275
276 return None
277
278
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyomo/neos/plugins/kestrel_plugin.py b/pyomo/neos/plugins/kestrel_plugin.py
--- a/pyomo/neos/plugins/kestrel_plugin.py
+++ b/pyomo/neos/plugins/kestrel_plugin.py
@@ -16,6 +16,7 @@
from six.moves.xmlrpc_client import ProtocolError
from pyomo.opt import SolverFactory, SolverManagerFactory, OptSolver
+from pyomo.opt.parallel.manager import ActionManagerError
from pyomo.opt.parallel.async_solver import (
AsynchronousSolverManager, ActionStatus
)
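
For context on why such a small patch fixes the build failure: `ActionManagerError` is raised on two error paths inside `_perform_queue` (no solver supplied, or a solver name that NEOS does not recognize), but the module never imported the class, so those paths surfaced as a `NameError` instead of the intended message. That also explains why the bug only appears when NEOS misbehaves, as the issue speculates. The sketch below reproduces the failure mode in isolation; `queue_solver` and `UndefinedManagerError` are made-up stand-ins, not Pyomo APIs.

```python
def queue_solver(solver=None):
    """Mimics the buggy _perform_queue: the exception class was never imported."""
    if solver is None:
        # Intended: raise a descriptive manager error.
        # Actual: NameError, because UndefinedManagerError is not defined here.
        raise UndefinedManagerError("No solver passed, use keyword option 'solver'")
    return solver


print(queue_solver("cbc"))   # happy path never evaluates the broken raise
try:
    queue_solver()            # error path -> NameError masks the intended message
except NameError as exc:
    print(type(exc).__name__, exc)
```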
| {"golden_diff": "diff --git a/pyomo/neos/plugins/kestrel_plugin.py b/pyomo/neos/plugins/kestrel_plugin.py\n--- a/pyomo/neos/plugins/kestrel_plugin.py\n+++ b/pyomo/neos/plugins/kestrel_plugin.py\n@@ -16,6 +16,7 @@\n from six.moves.xmlrpc_client import ProtocolError\n \n from pyomo.opt import SolverFactory, SolverManagerFactory, OptSolver\n+from pyomo.opt.parallel.manager import ActionManagerError\n from pyomo.opt.parallel.async_solver import (\n AsynchronousSolverManager, ActionStatus\n )\n", "issue": "NEOS error\nOur current build is failing because of the error below, I speculate due to usually-untested code being triggered by a transient network failure.\r\n```\r\n======================================================================\r\nERROR: test_kestrel_plugin (pyomo.neos.tests.test_neos.TestKestrel)\r\n----------------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/home/travis/build/Pyomo/pyomo/pyomo/neos/tests/test_neos.py\", line 90, in test_kestrel_plugin\r\n results = solver_manager.solve(m, opt='cbc')\r\n File \"/home/travis/build/Pyomo/pyomo/pyomo/opt/parallel/async_solver.py\", line 28, in solve\r\n return self.execute(*args, **kwds)\r\n File \"/home/travis/build/Pyomo/pyomo/pyomo/opt/parallel/manager.py\", line 107, in execute\r\n ah = self.queue(*args, **kwds)\r\n File \"/home/travis/build/Pyomo/pyomo/pyomo/opt/parallel/manager.py\", line 122, in queue\r\n return self._perform_queue(ah, *args, **kwds)\r\n File \"/home/travis/build/Pyomo/pyomo/pyomo/neos/plugins/kestrel_plugin.py\", line 127, in _perform_queue\r\n raise ActionManagerError(\r\nNameError: name 'ActionManagerError' is not defined\r\n```\nNEOS error\nOur current build is failing because of the error below, I speculate due to usually-untested code being triggered by a transient network failure.\r\n```\r\n======================================================================\r\nERROR: test_kestrel_plugin (pyomo.neos.tests.test_neos.TestKestrel)\r\n----------------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/home/travis/build/Pyomo/pyomo/pyomo/neos/tests/test_neos.py\", line 90, in test_kestrel_plugin\r\n results = solver_manager.solve(m, opt='cbc')\r\n File \"/home/travis/build/Pyomo/pyomo/pyomo/opt/parallel/async_solver.py\", line 28, in solve\r\n return self.execute(*args, **kwds)\r\n File \"/home/travis/build/Pyomo/pyomo/pyomo/opt/parallel/manager.py\", line 107, in execute\r\n ah = self.queue(*args, **kwds)\r\n File \"/home/travis/build/Pyomo/pyomo/pyomo/opt/parallel/manager.py\", line 122, in queue\r\n return self._perform_queue(ah, *args, **kwds)\r\n File \"/home/travis/build/Pyomo/pyomo/pyomo/neos/plugins/kestrel_plugin.py\", line 127, in _perform_queue\r\n raise ActionManagerError(\r\nNameError: name 'ActionManagerError' is not defined\r\n```\n", "before_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. 
Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\nimport logging\nimport os\nimport re\nimport six\n\nfrom six.moves.xmlrpc_client import ProtocolError\n\nfrom pyomo.opt import SolverFactory, SolverManagerFactory, OptSolver\nfrom pyomo.opt.parallel.async_solver import (\n AsynchronousSolverManager, ActionStatus\n)\nfrom pyomo.opt.base import OptSolver\nfrom pyomo.core.base import Block\nimport pyomo.neos.kestrel\n\n\nlogger = logging.getLogger('pyomo.neos')\n\n\ndef _neos_error(msg, results, current_message):\n error_re = re.compile('error', flags=re.I)\n warn_re = re.compile('warn', flags=re.I)\n\n logger.error(\"%s NEOS log:\\n%s\" % ( msg, current_message, ))\n soln_data = results.data\n if six.PY3:\n soln_data = soln_data.decode('utf-8')\n for line in soln_data.splitlines():\n if error_re.search(line):\n logger.error(line)\n elif warn_re.search(line):\n logger.warn(line)\n\n\[email protected](\n 'neos', doc=\"Asynchronously execute solvers on the NEOS server\")\nclass SolverManager_NEOS(AsynchronousSolverManager):\n\n def clear(self):\n \"\"\"\n Clear manager state\n \"\"\"\n AsynchronousSolverManager.clear(self)\n self.kestrel = pyomo.neos.kestrel.kestrelAMPL()\n self._ah = {} # maps NEOS job numbers to their corresponding\n # action handle.\n self._args = {}\n self._opt_data = {}\n\n # to grab streamed output from NEOS, need to keep\n # map of action handle to the to-date string of\n # extracted output.\n # TBD: The following entries aren't currently cleaned up, but\n # we're still trying to get the basics down.\n # store pairs of NEOS message offset and NEOS message string.\n # index into the map is the NEOS job number\n self._neos_log = {}\n self._solvers = {}\n\n def _perform_queue(self, ah, *args, **kwds):\n \"\"\"\n Perform the queue operation. This method returns the ActionHandle,\n and the ActionHandle status indicates whether the queue was successful.\n \"\"\"\n solver = kwds.pop('solver', kwds.pop('opt', None))\n if solver is None:\n raise ActionManagerError(\n \"No solver passed to %s, use keyword option 'solver'\"\n % (type(self).__name__) )\n if not isinstance(solver, six.string_types):\n solver_name = solver.name\n if solver_name == 'asl':\n solver_name = \\\n os.path.basename(solver.executable())\n else:\n solver_name = solver\n solver = None\n\n #\n # Handle ephemeral solvers options here. These\n # will override whatever is currently in the options\n # dictionary, but we will reset these options to\n # their original value at the end of this method.\n #\n user_solver_options = {}\n # make sure to transfer the options dict on the\n # solver plugin if the user does not use a string\n # to identify the neos solver. The ephemeral\n # options must also go after these.\n if solver is not None:\n user_solver_options.update(solver.options)\n _options = kwds.pop('options', {})\n if isinstance(_options, six.string_types):\n _options = OptSolver._options_string_to_dict(_options)\n user_solver_options.update(_options)\n user_solver_options.update(\n OptSolver._options_string_to_dict(kwds.pop('options_string', '')))\n\n # JDS: [5/13/17] The following is a HACK. This timeout flag is\n # set by pyomo/scripting/util.py:apply_optimizer. If we do not\n # remove it, it will get passed to the NEOS solver. 
For solvers\n # like CPLEX 12.7.0, this will cause a fatal error as it is not\n # a known option.\n if user_solver_options.get('timelimit',0) is None:\n del user_solver_options['timelimit']\n\n opt = SolverFactory('_neos')\n opt._presolve(*args, **kwds)\n #\n # Map NEOS name, using lowercase convention in Pyomo\n #\n if len(self._solvers) == 0:\n for name in self.kestrel.solvers():\n if name.endswith('AMPL'):\n self._solvers[ name[:-5].lower() ] = name[:-5]\n if solver_name not in self._solvers:\n raise ActionManagerError(\n \"Solver '%s' is not recognized by NEOS. \"\n \"Solver names recognized:\\n%s\"\n % (solver_name, str(sorted(self._solvers.keys()))))\n #\n # Apply kestrel\n #\n # Set the kestrel_options environment\n #\n neos_sname = self._solvers[solver_name].lower()\n os.environ['kestrel_options'] = 'solver=%s' % self._solvers[solver_name]\n #\n # Set the <solver>_options environment\n #\n solver_options = {}\n for key in opt.options:\n solver_options[key]=opt.options[key]\n solver_options.update(user_solver_options)\n options = opt._get_options_string(solver_options)\n if not options == \"\":\n os.environ[neos_sname+'_options'] = options\n #\n # Generate an XML string using these two environment variables\n #\n xml = self.kestrel.formXML(opt._problem_files[0])\n (jobNumber, password) = self.kestrel.submit(xml)\n ah.job = jobNumber\n ah.password = password\n #\n # Cleanup\n #\n del os.environ['kestrel_options']\n try:\n del os.environ[neos_sname+\"_options\"]\n except:\n pass\n #\n # Store action handle, and return\n #\n self._ah[jobNumber] = ah\n self._neos_log[jobNumber] = (0, \"\")\n self._opt_data[jobNumber] = (opt,\n opt._smap_id,\n opt._load_solutions,\n opt._select_index,\n opt._default_variable_value)\n self._args[jobNumber] = args\n return ah\n\n def _perform_wait_any(self):\n \"\"\"\n Perform the wait_any operation. This method returns an\n ActionHandle with the results of waiting. 
If None is returned\n then the ActionManager assumes that it can call this method again.\n Note that an ActionHandle can be returned with a dummy value,\n to indicate an error.\n \"\"\"\n for jobNumber in self._ah:\n\n status = self.kestrel.neos.getJobStatus(jobNumber,\n self._ah[jobNumber].password)\n\n if status not in (\"Running\", \"Waiting\"):\n # the job is done.\n ah = self._ah[jobNumber]\n del self._ah[jobNumber]\n ah.status = ActionStatus.done\n\n (opt,\n smap_id,\n load_solutions,\n select_index,\n default_variable_value) = self._opt_data[jobNumber]\n del self._opt_data[jobNumber]\n\n args = self._args[jobNumber]\n del self._args[jobNumber]\n\n # retrieve the final results, which are in message/log format.\n results = self.kestrel.neos.getFinalResults(jobNumber, ah.password)\n\n (current_offset, current_message) = self._neos_log[jobNumber]\n with open(opt._log_file, 'w') as OUTPUT:\n OUTPUT.write(current_message)\n with open(opt._soln_file, 'w') as OUTPUT:\n if six.PY2:\n OUTPUT.write(results.data)\n else:\n OUTPUT.write(results.data.decode('utf-8'))\n\n rc = None\n try:\n solver_results = opt.process_output(rc)\n except:\n _neos_error( \"Error parsing NEOS solution file\",\n results, current_message )\n return ah\n\n solver_results._smap_id = smap_id\n self.results[ah.id] = solver_results\n\n if isinstance(args[0], Block):\n _model = args[0]\n if load_solutions:\n try:\n _model.solutions.load_from(\n solver_results,\n select=select_index,\n default_variable_value=default_variable_value)\n except:\n _neos_error(\n \"Error loading NEOS solution into model\",\n results, current_message )\n solver_results._smap_id = None\n solver_results.solution.clear()\n else:\n solver_results._smap = _model.solutions.symbol_map[smap_id]\n _model.solutions.delete_symbol_map(smap_id)\n\n return ah\n else:\n # The job is still running...\n #\n # Grab the partial messages from NEOS as you go, in case\n # you want to output on-the-fly. You will only get data\n # if the job was routed to the \"short\" priority queue.\n (current_offset, current_message) = self._neos_log[jobNumber]\n # TBD: blocking isn't the way to go, but non-blocking\n # was triggering some exception in kestrel.\n #\n # [5/13/17]: The blocking fetch will timeout in 2\n # minutes. If NEOS doesn't produce intermediate results\n # by then we will need to catch (and eat) the exception\n try:\n (message_fragment, new_offset) \\\n = self.kestrel.neos.getIntermediateResults(\n jobNumber,\n self._ah[jobNumber].password,\n current_offset )\n logger.info(message_fragment)\n self._neos_log[jobNumber] = (\n new_offset,\n current_message + (\n message_fragment.data if six.PY2\n else (message_fragment.data).decode('utf-8') ) )\n except ProtocolError:\n # The command probably timed out\n pass\n\n return None\n\n", "path": "pyomo/neos/plugins/kestrel_plugin.py"}], "after_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. 
Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\nimport logging\nimport os\nimport re\nimport six\n\nfrom six.moves.xmlrpc_client import ProtocolError\n\nfrom pyomo.opt import SolverFactory, SolverManagerFactory, OptSolver\nfrom pyomo.opt.parallel.manager import ActionManagerError\nfrom pyomo.opt.parallel.async_solver import (\n AsynchronousSolverManager, ActionStatus\n)\nfrom pyomo.opt.base import OptSolver\nfrom pyomo.core.base import Block\nimport pyomo.neos.kestrel\n\n\nlogger = logging.getLogger('pyomo.neos')\n\n\ndef _neos_error(msg, results, current_message):\n error_re = re.compile('error', flags=re.I)\n warn_re = re.compile('warn', flags=re.I)\n\n logger.error(\"%s NEOS log:\\n%s\" % ( msg, current_message, ))\n soln_data = results.data\n if six.PY3:\n soln_data = soln_data.decode('utf-8')\n for line in soln_data.splitlines():\n if error_re.search(line):\n logger.error(line)\n elif warn_re.search(line):\n logger.warn(line)\n\n\[email protected](\n 'neos', doc=\"Asynchronously execute solvers on the NEOS server\")\nclass SolverManager_NEOS(AsynchronousSolverManager):\n\n def clear(self):\n \"\"\"\n Clear manager state\n \"\"\"\n AsynchronousSolverManager.clear(self)\n self.kestrel = pyomo.neos.kestrel.kestrelAMPL()\n self._ah = {} # maps NEOS job numbers to their corresponding\n # action handle.\n self._args = {}\n self._opt_data = {}\n\n # to grab streamed output from NEOS, need to keep\n # map of action handle to the to-date string of\n # extracted output.\n # TBD: The following entries aren't currently cleaned up, but\n # we're still trying to get the basics down.\n # store pairs of NEOS message offset and NEOS message string.\n # index into the map is the NEOS job number\n self._neos_log = {}\n self._solvers = {}\n\n def _perform_queue(self, ah, *args, **kwds):\n \"\"\"\n Perform the queue operation. This method returns the ActionHandle,\n and the ActionHandle status indicates whether the queue was successful.\n \"\"\"\n solver = kwds.pop('solver', kwds.pop('opt', None))\n if solver is None:\n raise ActionManagerError(\n \"No solver passed to %s, use keyword option 'solver'\"\n % (type(self).__name__) )\n if not isinstance(solver, six.string_types):\n solver_name = solver.name\n if solver_name == 'asl':\n solver_name = \\\n os.path.basename(solver.executable())\n else:\n solver_name = solver\n solver = None\n\n #\n # Handle ephemeral solvers options here. These\n # will override whatever is currently in the options\n # dictionary, but we will reset these options to\n # their original value at the end of this method.\n #\n user_solver_options = {}\n # make sure to transfer the options dict on the\n # solver plugin if the user does not use a string\n # to identify the neos solver. The ephemeral\n # options must also go after these.\n if solver is not None:\n user_solver_options.update(solver.options)\n _options = kwds.pop('options', {})\n if isinstance(_options, six.string_types):\n _options = OptSolver._options_string_to_dict(_options)\n user_solver_options.update(_options)\n user_solver_options.update(\n OptSolver._options_string_to_dict(kwds.pop('options_string', '')))\n\n # JDS: [5/13/17] The following is a HACK. This timeout flag is\n # set by pyomo/scripting/util.py:apply_optimizer. If we do not\n # remove it, it will get passed to the NEOS solver. 
For solvers\n # like CPLEX 12.7.0, this will cause a fatal error as it is not\n # a known option.\n if user_solver_options.get('timelimit',0) is None:\n del user_solver_options['timelimit']\n\n opt = SolverFactory('_neos')\n opt._presolve(*args, **kwds)\n #\n # Map NEOS name, using lowercase convention in Pyomo\n #\n if len(self._solvers) == 0:\n for name in self.kestrel.solvers():\n if name.endswith('AMPL'):\n self._solvers[ name[:-5].lower() ] = name[:-5]\n if solver_name not in self._solvers:\n raise ActionManagerError(\n \"Solver '%s' is not recognized by NEOS. \"\n \"Solver names recognized:\\n%s\"\n % (solver_name, str(sorted(self._solvers.keys()))))\n #\n # Apply kestrel\n #\n # Set the kestrel_options environment\n #\n neos_sname = self._solvers[solver_name].lower()\n os.environ['kestrel_options'] = 'solver=%s' % self._solvers[solver_name]\n #\n # Set the <solver>_options environment\n #\n solver_options = {}\n for key in opt.options:\n solver_options[key]=opt.options[key]\n solver_options.update(user_solver_options)\n options = opt._get_options_string(solver_options)\n if not options == \"\":\n os.environ[neos_sname+'_options'] = options\n #\n # Generate an XML string using these two environment variables\n #\n xml = self.kestrel.formXML(opt._problem_files[0])\n (jobNumber, password) = self.kestrel.submit(xml)\n ah.job = jobNumber\n ah.password = password\n #\n # Cleanup\n #\n del os.environ['kestrel_options']\n try:\n del os.environ[neos_sname+\"_options\"]\n except:\n pass\n #\n # Store action handle, and return\n #\n self._ah[jobNumber] = ah\n self._neos_log[jobNumber] = (0, \"\")\n self._opt_data[jobNumber] = (opt,\n opt._smap_id,\n opt._load_solutions,\n opt._select_index,\n opt._default_variable_value)\n self._args[jobNumber] = args\n return ah\n\n def _perform_wait_any(self):\n \"\"\"\n Perform the wait_any operation. This method returns an\n ActionHandle with the results of waiting. 
If None is returned\n then the ActionManager assumes that it can call this method again.\n Note that an ActionHandle can be returned with a dummy value,\n to indicate an error.\n \"\"\"\n for jobNumber in self._ah:\n\n status = self.kestrel.neos.getJobStatus(jobNumber,\n self._ah[jobNumber].password)\n\n if status not in (\"Running\", \"Waiting\"):\n # the job is done.\n ah = self._ah[jobNumber]\n del self._ah[jobNumber]\n ah.status = ActionStatus.done\n\n (opt,\n smap_id,\n load_solutions,\n select_index,\n default_variable_value) = self._opt_data[jobNumber]\n del self._opt_data[jobNumber]\n\n args = self._args[jobNumber]\n del self._args[jobNumber]\n\n # retrieve the final results, which are in message/log format.\n results = self.kestrel.neos.getFinalResults(jobNumber, ah.password)\n\n (current_offset, current_message) = self._neos_log[jobNumber]\n with open(opt._log_file, 'w') as OUTPUT:\n OUTPUT.write(current_message)\n with open(opt._soln_file, 'w') as OUTPUT:\n if six.PY2:\n OUTPUT.write(results.data)\n else:\n OUTPUT.write(results.data.decode('utf-8'))\n\n rc = None\n try:\n solver_results = opt.process_output(rc)\n except:\n _neos_error( \"Error parsing NEOS solution file\",\n results, current_message )\n return ah\n\n solver_results._smap_id = smap_id\n self.results[ah.id] = solver_results\n\n if isinstance(args[0], Block):\n _model = args[0]\n if load_solutions:\n try:\n _model.solutions.load_from(\n solver_results,\n select=select_index,\n default_variable_value=default_variable_value)\n except:\n _neos_error(\n \"Error loading NEOS solution into model\",\n results, current_message )\n solver_results._smap_id = None\n solver_results.solution.clear()\n else:\n solver_results._smap = _model.solutions.symbol_map[smap_id]\n _model.solutions.delete_symbol_map(smap_id)\n\n return ah\n else:\n # The job is still running...\n #\n # Grab the partial messages from NEOS as you go, in case\n # you want to output on-the-fly. You will only get data\n # if the job was routed to the \"short\" priority queue.\n (current_offset, current_message) = self._neos_log[jobNumber]\n # TBD: blocking isn't the way to go, but non-blocking\n # was triggering some exception in kestrel.\n #\n # [5/13/17]: The blocking fetch will timeout in 2\n # minutes. If NEOS doesn't produce intermediate results\n # by then we will need to catch (and eat) the exception\n try:\n (message_fragment, new_offset) \\\n = self.kestrel.neos.getIntermediateResults(\n jobNumber,\n self._ah[jobNumber].password,\n current_offset )\n logger.info(message_fragment)\n self._neos_log[jobNumber] = (\n new_offset,\n current_message + (\n message_fragment.data if six.PY2\n else (message_fragment.data).decode('utf-8') ) )\n except ProtocolError:\n # The command probably timed out\n pass\n\n return None\n\n", "path": "pyomo/neos/plugins/kestrel_plugin.py"}]} |
gh_patches_debug_1028 | rasdani/github-patches | git_diff | googleapis__python-bigquery-348 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fix(dbapi): avoid running % format when no query parameters are passed
**Is your feature request related to a problem? Please describe.**
It is unexpected to get formatting errors when a query string contains `%` but no query parameters are passed.
https://github.com/mxmzdlv/pybigquery/issues/72
**Describe the solution you'd like**
In addition to checking if `parameters` is `None`, check if `len(parameters) == 0` to avoid unnecessary format operations.
https://github.com/googleapis/python-bigquery/blob/dca2e4ca7c2ae183ac4bb60f653d425a43a86bea/google/cloud/bigquery/dbapi/cursor.py#L444
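For illustration, a minimal sketch of the proposed guard (hypothetical code, not the final patch; it simply mirrors the empty-parameters check described above):

```python
def _format_operation(operation, parameters=None):
    # None or an empty mapping/sequence means there is nothing to substitute,
    # so return the query untouched and a literal '%' can no longer raise.
    if parameters is None or len(parameters) == 0:
        return operation
    # otherwise fall through to the existing dict / sequence formatting
    ...
```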
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `google/cloud/bigquery/dbapi/cursor.py`
Content:
```
1 # Copyright 2017 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Cursor for the Google BigQuery DB-API."""
16
17 import collections
18 from collections import abc as collections_abc
19 import copy
20 import logging
21
22 import six
23
24 from google.cloud.bigquery import job
25 from google.cloud.bigquery.dbapi import _helpers
26 from google.cloud.bigquery.dbapi import exceptions
27 import google.cloud.exceptions
28
29
30 _LOGGER = logging.getLogger(__name__)
31
32 # Per PEP 249: A 7-item sequence containing information describing one result
33 # column. The first two items (name and type_code) are mandatory, the other
34 # five are optional and are set to None if no meaningful values can be
35 # provided.
36 Column = collections.namedtuple(
37 "Column",
38 [
39 "name",
40 "type_code",
41 "display_size",
42 "internal_size",
43 "precision",
44 "scale",
45 "null_ok",
46 ],
47 )
48
49
50 @_helpers.raise_on_closed("Operating on a closed cursor.")
51 class Cursor(object):
52 """DB-API Cursor to Google BigQuery.
53
54 Args:
55 connection (google.cloud.bigquery.dbapi.Connection):
56 A DB-API connection to Google BigQuery.
57 """
58
59 def __init__(self, connection):
60 self.connection = connection
61 self.description = None
62 # Per PEP 249: The attribute is -1 in case no .execute*() has been
63 # performed on the cursor or the rowcount of the last operation
64 # cannot be determined by the interface.
65 self.rowcount = -1
66 # Per PEP 249: The arraysize attribute defaults to 1, meaning to fetch
67 # a single row at a time. However, we deviate from that, and set the
68 # default to None, allowing the backend to automatically determine the
69 # most appropriate size.
70 self.arraysize = None
71 self._query_data = None
72 self._query_job = None
73 self._closed = False
74
75 def close(self):
76 """Mark the cursor as closed, preventing its further use."""
77 self._closed = True
78
79 def _set_description(self, schema):
80 """Set description from schema.
81
82 Args:
83 schema (Sequence[google.cloud.bigquery.schema.SchemaField]):
84 A description of fields in the schema.
85 """
86 if schema is None:
87 self.description = None
88 return
89
90 self.description = tuple(
91 Column(
92 name=field.name,
93 type_code=field.field_type,
94 display_size=None,
95 internal_size=None,
96 precision=None,
97 scale=None,
98 null_ok=field.is_nullable,
99 )
100 for field in schema
101 )
102
103 def _set_rowcount(self, query_results):
104 """Set the rowcount from query results.
105
106 Normally, this sets rowcount to the number of rows returned by the
107 query, but if it was a DML statement, it sets rowcount to the number
108 of modified rows.
109
110 Args:
111 query_results (google.cloud.bigquery.query._QueryResults):
112 Results of a query.
113 """
114 total_rows = 0
115 num_dml_affected_rows = query_results.num_dml_affected_rows
116
117 if query_results.total_rows is not None and query_results.total_rows > 0:
118 total_rows = query_results.total_rows
119 if num_dml_affected_rows is not None and num_dml_affected_rows > 0:
120 total_rows = num_dml_affected_rows
121 self.rowcount = total_rows
122
123 def execute(self, operation, parameters=None, job_id=None, job_config=None):
124 """Prepare and execute a database operation.
125
126 .. note::
127 When setting query parameters, values which are "text"
128 (``unicode`` in Python2, ``str`` in Python3) will use
129 the 'STRING' BigQuery type. Values which are "bytes" (``str`` in
130 Python2, ``bytes`` in Python3), will use using the 'BYTES' type.
131
132 A `~datetime.datetime` parameter without timezone information uses
133 the 'DATETIME' BigQuery type (example: Global Pi Day Celebration
134 March 14, 2017 at 1:59pm). A `~datetime.datetime` parameter with
135 timezone information uses the 'TIMESTAMP' BigQuery type (example:
136 a wedding on April 29, 2011 at 11am, British Summer Time).
137
138 For more information about BigQuery data types, see:
139 https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types
140
141 ``STRUCT``/``RECORD`` and ``REPEATED`` query parameters are not
142 yet supported. See:
143 https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3524
144
145 Args:
146 operation (str): A Google BigQuery query string.
147
148 parameters (Union[Mapping[str, Any], Sequence[Any]]):
149 (Optional) dictionary or sequence of parameter values.
150
151 job_id (str):
152 (Optional) The job_id to use. If not set, a job ID
153 is generated at random.
154
155 job_config (google.cloud.bigquery.job.QueryJobConfig):
156 (Optional) Extra configuration options for the query job.
157 """
158 self._query_data = None
159 self._query_job = None
160 client = self.connection._client
161
162 # The DB-API uses the pyformat formatting, since the way BigQuery does
163 # query parameters was not one of the standard options. Convert both
164 # the query and the parameters to the format expected by the client
165 # libraries.
166 formatted_operation = _format_operation(operation, parameters=parameters)
167 query_parameters = _helpers.to_query_parameters(parameters)
168
169 if client._default_query_job_config:
170 if job_config:
171 config = job_config._fill_from_default(client._default_query_job_config)
172 else:
173 config = copy.deepcopy(client._default_query_job_config)
174 else:
175 config = job_config or job.QueryJobConfig(use_legacy_sql=False)
176
177 config.query_parameters = query_parameters
178 self._query_job = client.query(
179 formatted_operation, job_config=config, job_id=job_id
180 )
181
182 if self._query_job.dry_run:
183 self._set_description(schema=None)
184 self.rowcount = 0
185 return
186
187 # Wait for the query to finish.
188 try:
189 self._query_job.result()
190 except google.cloud.exceptions.GoogleCloudError as exc:
191 raise exceptions.DatabaseError(exc)
192
193 query_results = self._query_job._query_results
194 self._set_rowcount(query_results)
195 self._set_description(query_results.schema)
196
197 def executemany(self, operation, seq_of_parameters):
198 """Prepare and execute a database operation multiple times.
199
200 Args:
201 operation (str): A Google BigQuery query string.
202
203 seq_of_parameters (Union[Sequence[Mapping[str, Any], Sequence[Any]]]):
204 Sequence of many sets of parameter values.
205 """
206 for parameters in seq_of_parameters:
207 self.execute(operation, parameters)
208
209 def _try_fetch(self, size=None):
210 """Try to start fetching data, if not yet started.
211
212 Mutates self to indicate that iteration has started.
213 """
214 if self._query_job is None:
215 raise exceptions.InterfaceError(
216 "No query results: execute() must be called before fetch."
217 )
218
219 if self._query_job.dry_run:
220 self._query_data = iter([])
221 return
222
223 is_dml = (
224 self._query_job.statement_type
225 and self._query_job.statement_type.upper() != "SELECT"
226 )
227 if is_dml:
228 self._query_data = iter([])
229 return
230
231 if self._query_data is None:
232 client = self.connection._client
233 bqstorage_client = self.connection._bqstorage_client
234
235 if bqstorage_client is not None:
236 rows_iterable = self._bqstorage_fetch(bqstorage_client)
237 self._query_data = _helpers.to_bq_table_rows(rows_iterable)
238 return
239
240 rows_iter = client.list_rows(
241 self._query_job.destination,
242 selected_fields=self._query_job._query_results.schema,
243 page_size=self.arraysize,
244 )
245 self._query_data = iter(rows_iter)
246
247 def _bqstorage_fetch(self, bqstorage_client):
248 """Start fetching data with the BigQuery Storage API.
249
250 The method assumes that the data about the relevant query job already
251 exists internally.
252
253 Args:
254 bqstorage_client(\
255 google.cloud.bigquery_storage_v1.BigQueryReadClient \
256 ):
257 A client tha know how to talk to the BigQuery Storage API.
258
259 Returns:
260 Iterable[Mapping]:
261 A sequence of rows, represented as dictionaries.
262 """
263 # Hitting this code path with a BQ Storage client instance implies that
264 # bigquery_storage can indeed be imported here without errors.
265 from google.cloud import bigquery_storage
266
267 table_reference = self._query_job.destination
268
269 requested_session = bigquery_storage.types.ReadSession(
270 table=table_reference.to_bqstorage(),
271 data_format=bigquery_storage.types.DataFormat.ARROW,
272 )
273 read_session = bqstorage_client.create_read_session(
274 parent="projects/{}".format(table_reference.project),
275 read_session=requested_session,
276 # a single stream only, as DB API is not well-suited for multithreading
277 max_stream_count=1,
278 )
279
280 if not read_session.streams:
281 return iter([]) # empty table, nothing to read
282
283 stream_name = read_session.streams[0].name
284 read_rows_stream = bqstorage_client.read_rows(stream_name)
285
286 rows_iterable = read_rows_stream.rows(read_session)
287 return rows_iterable
288
289 def fetchone(self):
290 """Fetch a single row from the results of the last ``execute*()`` call.
291
292 .. note::
293 If a dry run query was executed, no rows are returned.
294
295 Returns:
296 Tuple:
297 A tuple representing a row or ``None`` if no more data is
298 available.
299
300 Raises:
301 google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.
302 """
303 self._try_fetch()
304 try:
305 return six.next(self._query_data)
306 except StopIteration:
307 return None
308
309 def fetchmany(self, size=None):
310 """Fetch multiple results from the last ``execute*()`` call.
311
312 .. note::
313 If a dry run query was executed, no rows are returned.
314
315 .. note::
316 The size parameter is not used for the request/response size.
317 Set the ``arraysize`` attribute before calling ``execute()`` to
318 set the batch size.
319
320 Args:
321 size (int):
322 (Optional) Maximum number of rows to return. Defaults to the
323 ``arraysize`` property value. If ``arraysize`` is not set, it
324 defaults to ``1``.
325
326 Returns:
327 List[Tuple]: A list of rows.
328
329 Raises:
330 google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.
331 """
332 if size is None:
333 # Since self.arraysize can be None (a deviation from PEP 249),
334 # use an actual PEP 249 default of 1 in such case (*some* number
335 # is needed here).
336 size = self.arraysize if self.arraysize else 1
337
338 self._try_fetch(size=size)
339 rows = []
340
341 for row in self._query_data:
342 rows.append(row)
343 if len(rows) >= size:
344 break
345
346 return rows
347
348 def fetchall(self):
349 """Fetch all remaining results from the last ``execute*()`` call.
350
351 .. note::
352 If a dry run query was executed, no rows are returned.
353
354 Returns:
355 List[Tuple]: A list of all the rows in the results.
356
357 Raises:
358 google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.
359 """
360 self._try_fetch()
361 return list(self._query_data)
362
363 def setinputsizes(self, sizes):
364 """No-op, but for consistency raise an error if cursor is closed."""
365
366 def setoutputsize(self, size, column=None):
367 """No-op, but for consistency raise an error if cursor is closed."""
368
369
370 def _format_operation_list(operation, parameters):
371 """Formats parameters in operation in the way BigQuery expects.
372
373 The input operation will be a query like ``SELECT %s`` and the output
374 will be a query like ``SELECT ?``.
375
376 Args:
377 operation (str): A Google BigQuery query string.
378
379 parameters (Sequence[Any]): Sequence of parameter values.
380
381 Returns:
382 str: A formatted query string.
383
384 Raises:
385 google.cloud.bigquery.dbapi.ProgrammingError:
386 if a parameter used in the operation is not found in the
387 ``parameters`` argument.
388 """
389 formatted_params = ["?" for _ in parameters]
390
391 try:
392 return operation % tuple(formatted_params)
393 except TypeError as exc:
394 raise exceptions.ProgrammingError(exc)
395
396
397 def _format_operation_dict(operation, parameters):
398 """Formats parameters in operation in the way BigQuery expects.
399
400 The input operation will be a query like ``SELECT %(namedparam)s`` and
401 the output will be a query like ``SELECT @namedparam``.
402
403 Args:
404 operation (str): A Google BigQuery query string.
405
406 parameters (Mapping[str, Any]): Dictionary of parameter values.
407
408 Returns:
409 str: A formatted query string.
410
411 Raises:
412 google.cloud.bigquery.dbapi.ProgrammingError:
413 if a parameter used in the operation is not found in the
414 ``parameters`` argument.
415 """
416 formatted_params = {}
417 for name in parameters:
418 escaped_name = name.replace("`", r"\`")
419 formatted_params[name] = "@`{}`".format(escaped_name)
420
421 try:
422 return operation % formatted_params
423 except KeyError as exc:
424 raise exceptions.ProgrammingError(exc)
425
426
427 def _format_operation(operation, parameters=None):
428 """Formats parameters in operation in way BigQuery expects.
429
430 Args:
431 operation (str): A Google BigQuery query string.
432
433 parameters (Union[Mapping[str, Any], Sequence[Any]]):
434 Optional parameter values.
435
436 Returns:
437 str: A formatted query string.
438
439 Raises:
440 google.cloud.bigquery.dbapi.ProgrammingError:
441 if a parameter used in the operation is not found in the
442 ``parameters`` argument.
443 """
444 if parameters is None:
445 return operation
446
447 if isinstance(parameters, collections_abc.Mapping):
448 return _format_operation_dict(operation, parameters)
449
450 return _format_operation_list(operation, parameters)
451
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/google/cloud/bigquery/dbapi/cursor.py b/google/cloud/bigquery/dbapi/cursor.py
--- a/google/cloud/bigquery/dbapi/cursor.py
+++ b/google/cloud/bigquery/dbapi/cursor.py
@@ -441,7 +441,7 @@
if a parameter used in the operation is not found in the
``parameters`` argument.
"""
- if parameters is None:
+ if parameters is None or len(parameters) == 0:
return operation
if isinstance(parameters, collections_abc.Mapping):
| {"golden_diff": "diff --git a/google/cloud/bigquery/dbapi/cursor.py b/google/cloud/bigquery/dbapi/cursor.py\n--- a/google/cloud/bigquery/dbapi/cursor.py\n+++ b/google/cloud/bigquery/dbapi/cursor.py\n@@ -441,7 +441,7 @@\n if a parameter used in the operation is not found in the\n ``parameters`` argument.\n \"\"\"\n- if parameters is None:\n+ if parameters is None or len(parameters) == 0:\n return operation\n \n if isinstance(parameters, collections_abc.Mapping):\n", "issue": "fix(dbapi): avoid running % format when no query parameters are passed\n **Is your feature request related to a problem? Please describe.**\r\n\r\nIt is unexpected to get format errors when a string contains `%`, but there are no query parameters in the query.\r\n\r\nhttps://github.com/mxmzdlv/pybigquery/issues/72\r\n\r\n **Describe the solution you'd like**\r\n\r\nIn addition to checking if `parameters` is none, check if `len(parameters) == 0` to avoid unnecessary format operations.\r\n\r\nhttps://github.com/googleapis/python-bigquery/blob/dca2e4ca7c2ae183ac4bb60f653d425a43a86bea/google/cloud/bigquery/dbapi/cursor.py#L444\r\n\n", "before_files": [{"content": "# Copyright 2017 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Cursor for the Google BigQuery DB-API.\"\"\"\n\nimport collections\nfrom collections import abc as collections_abc\nimport copy\nimport logging\n\nimport six\n\nfrom google.cloud.bigquery import job\nfrom google.cloud.bigquery.dbapi import _helpers\nfrom google.cloud.bigquery.dbapi import exceptions\nimport google.cloud.exceptions\n\n\n_LOGGER = logging.getLogger(__name__)\n\n# Per PEP 249: A 7-item sequence containing information describing one result\n# column. The first two items (name and type_code) are mandatory, the other\n# five are optional and are set to None if no meaningful values can be\n# provided.\nColumn = collections.namedtuple(\n \"Column\",\n [\n \"name\",\n \"type_code\",\n \"display_size\",\n \"internal_size\",\n \"precision\",\n \"scale\",\n \"null_ok\",\n ],\n)\n\n\n@_helpers.raise_on_closed(\"Operating on a closed cursor.\")\nclass Cursor(object):\n \"\"\"DB-API Cursor to Google BigQuery.\n\n Args:\n connection (google.cloud.bigquery.dbapi.Connection):\n A DB-API connection to Google BigQuery.\n \"\"\"\n\n def __init__(self, connection):\n self.connection = connection\n self.description = None\n # Per PEP 249: The attribute is -1 in case no .execute*() has been\n # performed on the cursor or the rowcount of the last operation\n # cannot be determined by the interface.\n self.rowcount = -1\n # Per PEP 249: The arraysize attribute defaults to 1, meaning to fetch\n # a single row at a time. 
However, we deviate from that, and set the\n # default to None, allowing the backend to automatically determine the\n # most appropriate size.\n self.arraysize = None\n self._query_data = None\n self._query_job = None\n self._closed = False\n\n def close(self):\n \"\"\"Mark the cursor as closed, preventing its further use.\"\"\"\n self._closed = True\n\n def _set_description(self, schema):\n \"\"\"Set description from schema.\n\n Args:\n schema (Sequence[google.cloud.bigquery.schema.SchemaField]):\n A description of fields in the schema.\n \"\"\"\n if schema is None:\n self.description = None\n return\n\n self.description = tuple(\n Column(\n name=field.name,\n type_code=field.field_type,\n display_size=None,\n internal_size=None,\n precision=None,\n scale=None,\n null_ok=field.is_nullable,\n )\n for field in schema\n )\n\n def _set_rowcount(self, query_results):\n \"\"\"Set the rowcount from query results.\n\n Normally, this sets rowcount to the number of rows returned by the\n query, but if it was a DML statement, it sets rowcount to the number\n of modified rows.\n\n Args:\n query_results (google.cloud.bigquery.query._QueryResults):\n Results of a query.\n \"\"\"\n total_rows = 0\n num_dml_affected_rows = query_results.num_dml_affected_rows\n\n if query_results.total_rows is not None and query_results.total_rows > 0:\n total_rows = query_results.total_rows\n if num_dml_affected_rows is not None and num_dml_affected_rows > 0:\n total_rows = num_dml_affected_rows\n self.rowcount = total_rows\n\n def execute(self, operation, parameters=None, job_id=None, job_config=None):\n \"\"\"Prepare and execute a database operation.\n\n .. note::\n When setting query parameters, values which are \"text\"\n (``unicode`` in Python2, ``str`` in Python3) will use\n the 'STRING' BigQuery type. Values which are \"bytes\" (``str`` in\n Python2, ``bytes`` in Python3), will use using the 'BYTES' type.\n\n A `~datetime.datetime` parameter without timezone information uses\n the 'DATETIME' BigQuery type (example: Global Pi Day Celebration\n March 14, 2017 at 1:59pm). A `~datetime.datetime` parameter with\n timezone information uses the 'TIMESTAMP' BigQuery type (example:\n a wedding on April 29, 2011 at 11am, British Summer Time).\n\n For more information about BigQuery data types, see:\n https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types\n\n ``STRUCT``/``RECORD`` and ``REPEATED`` query parameters are not\n yet supported. See:\n https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3524\n\n Args:\n operation (str): A Google BigQuery query string.\n\n parameters (Union[Mapping[str, Any], Sequence[Any]]):\n (Optional) dictionary or sequence of parameter values.\n\n job_id (str):\n (Optional) The job_id to use. If not set, a job ID\n is generated at random.\n\n job_config (google.cloud.bigquery.job.QueryJobConfig):\n (Optional) Extra configuration options for the query job.\n \"\"\"\n self._query_data = None\n self._query_job = None\n client = self.connection._client\n\n # The DB-API uses the pyformat formatting, since the way BigQuery does\n # query parameters was not one of the standard options. 
Convert both\n # the query and the parameters to the format expected by the client\n # libraries.\n formatted_operation = _format_operation(operation, parameters=parameters)\n query_parameters = _helpers.to_query_parameters(parameters)\n\n if client._default_query_job_config:\n if job_config:\n config = job_config._fill_from_default(client._default_query_job_config)\n else:\n config = copy.deepcopy(client._default_query_job_config)\n else:\n config = job_config or job.QueryJobConfig(use_legacy_sql=False)\n\n config.query_parameters = query_parameters\n self._query_job = client.query(\n formatted_operation, job_config=config, job_id=job_id\n )\n\n if self._query_job.dry_run:\n self._set_description(schema=None)\n self.rowcount = 0\n return\n\n # Wait for the query to finish.\n try:\n self._query_job.result()\n except google.cloud.exceptions.GoogleCloudError as exc:\n raise exceptions.DatabaseError(exc)\n\n query_results = self._query_job._query_results\n self._set_rowcount(query_results)\n self._set_description(query_results.schema)\n\n def executemany(self, operation, seq_of_parameters):\n \"\"\"Prepare and execute a database operation multiple times.\n\n Args:\n operation (str): A Google BigQuery query string.\n\n seq_of_parameters (Union[Sequence[Mapping[str, Any], Sequence[Any]]]):\n Sequence of many sets of parameter values.\n \"\"\"\n for parameters in seq_of_parameters:\n self.execute(operation, parameters)\n\n def _try_fetch(self, size=None):\n \"\"\"Try to start fetching data, if not yet started.\n\n Mutates self to indicate that iteration has started.\n \"\"\"\n if self._query_job is None:\n raise exceptions.InterfaceError(\n \"No query results: execute() must be called before fetch.\"\n )\n\n if self._query_job.dry_run:\n self._query_data = iter([])\n return\n\n is_dml = (\n self._query_job.statement_type\n and self._query_job.statement_type.upper() != \"SELECT\"\n )\n if is_dml:\n self._query_data = iter([])\n return\n\n if self._query_data is None:\n client = self.connection._client\n bqstorage_client = self.connection._bqstorage_client\n\n if bqstorage_client is not None:\n rows_iterable = self._bqstorage_fetch(bqstorage_client)\n self._query_data = _helpers.to_bq_table_rows(rows_iterable)\n return\n\n rows_iter = client.list_rows(\n self._query_job.destination,\n selected_fields=self._query_job._query_results.schema,\n page_size=self.arraysize,\n )\n self._query_data = iter(rows_iter)\n\n def _bqstorage_fetch(self, bqstorage_client):\n \"\"\"Start fetching data with the BigQuery Storage API.\n\n The method assumes that the data about the relevant query job already\n exists internally.\n\n Args:\n bqstorage_client(\\\n google.cloud.bigquery_storage_v1.BigQueryReadClient \\\n ):\n A client tha know how to talk to the BigQuery Storage API.\n\n Returns:\n Iterable[Mapping]:\n A sequence of rows, represented as dictionaries.\n \"\"\"\n # Hitting this code path with a BQ Storage client instance implies that\n # bigquery_storage can indeed be imported here without errors.\n from google.cloud import bigquery_storage\n\n table_reference = self._query_job.destination\n\n requested_session = bigquery_storage.types.ReadSession(\n table=table_reference.to_bqstorage(),\n data_format=bigquery_storage.types.DataFormat.ARROW,\n )\n read_session = bqstorage_client.create_read_session(\n parent=\"projects/{}\".format(table_reference.project),\n read_session=requested_session,\n # a single stream only, as DB API is not well-suited for multithreading\n max_stream_count=1,\n )\n\n if not 
read_session.streams:\n return iter([]) # empty table, nothing to read\n\n stream_name = read_session.streams[0].name\n read_rows_stream = bqstorage_client.read_rows(stream_name)\n\n rows_iterable = read_rows_stream.rows(read_session)\n return rows_iterable\n\n def fetchone(self):\n \"\"\"Fetch a single row from the results of the last ``execute*()`` call.\n\n .. note::\n If a dry run query was executed, no rows are returned.\n\n Returns:\n Tuple:\n A tuple representing a row or ``None`` if no more data is\n available.\n\n Raises:\n google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.\n \"\"\"\n self._try_fetch()\n try:\n return six.next(self._query_data)\n except StopIteration:\n return None\n\n def fetchmany(self, size=None):\n \"\"\"Fetch multiple results from the last ``execute*()`` call.\n\n .. note::\n If a dry run query was executed, no rows are returned.\n\n .. note::\n The size parameter is not used for the request/response size.\n Set the ``arraysize`` attribute before calling ``execute()`` to\n set the batch size.\n\n Args:\n size (int):\n (Optional) Maximum number of rows to return. Defaults to the\n ``arraysize`` property value. If ``arraysize`` is not set, it\n defaults to ``1``.\n\n Returns:\n List[Tuple]: A list of rows.\n\n Raises:\n google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.\n \"\"\"\n if size is None:\n # Since self.arraysize can be None (a deviation from PEP 249),\n # use an actual PEP 249 default of 1 in such case (*some* number\n # is needed here).\n size = self.arraysize if self.arraysize else 1\n\n self._try_fetch(size=size)\n rows = []\n\n for row in self._query_data:\n rows.append(row)\n if len(rows) >= size:\n break\n\n return rows\n\n def fetchall(self):\n \"\"\"Fetch all remaining results from the last ``execute*()`` call.\n\n .. 
note::\n If a dry run query was executed, no rows are returned.\n\n Returns:\n List[Tuple]: A list of all the rows in the results.\n\n Raises:\n google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.\n \"\"\"\n self._try_fetch()\n return list(self._query_data)\n\n def setinputsizes(self, sizes):\n \"\"\"No-op, but for consistency raise an error if cursor is closed.\"\"\"\n\n def setoutputsize(self, size, column=None):\n \"\"\"No-op, but for consistency raise an error if cursor is closed.\"\"\"\n\n\ndef _format_operation_list(operation, parameters):\n \"\"\"Formats parameters in operation in the way BigQuery expects.\n\n The input operation will be a query like ``SELECT %s`` and the output\n will be a query like ``SELECT ?``.\n\n Args:\n operation (str): A Google BigQuery query string.\n\n parameters (Sequence[Any]): Sequence of parameter values.\n\n Returns:\n str: A formatted query string.\n\n Raises:\n google.cloud.bigquery.dbapi.ProgrammingError:\n if a parameter used in the operation is not found in the\n ``parameters`` argument.\n \"\"\"\n formatted_params = [\"?\" for _ in parameters]\n\n try:\n return operation % tuple(formatted_params)\n except TypeError as exc:\n raise exceptions.ProgrammingError(exc)\n\n\ndef _format_operation_dict(operation, parameters):\n \"\"\"Formats parameters in operation in the way BigQuery expects.\n\n The input operation will be a query like ``SELECT %(namedparam)s`` and\n the output will be a query like ``SELECT @namedparam``.\n\n Args:\n operation (str): A Google BigQuery query string.\n\n parameters (Mapping[str, Any]): Dictionary of parameter values.\n\n Returns:\n str: A formatted query string.\n\n Raises:\n google.cloud.bigquery.dbapi.ProgrammingError:\n if a parameter used in the operation is not found in the\n ``parameters`` argument.\n \"\"\"\n formatted_params = {}\n for name in parameters:\n escaped_name = name.replace(\"`\", r\"\\`\")\n formatted_params[name] = \"@`{}`\".format(escaped_name)\n\n try:\n return operation % formatted_params\n except KeyError as exc:\n raise exceptions.ProgrammingError(exc)\n\n\ndef _format_operation(operation, parameters=None):\n \"\"\"Formats parameters in operation in way BigQuery expects.\n\n Args:\n operation (str): A Google BigQuery query string.\n\n parameters (Union[Mapping[str, Any], Sequence[Any]]):\n Optional parameter values.\n\n Returns:\n str: A formatted query string.\n\n Raises:\n google.cloud.bigquery.dbapi.ProgrammingError:\n if a parameter used in the operation is not found in the\n ``parameters`` argument.\n \"\"\"\n if parameters is None:\n return operation\n\n if isinstance(parameters, collections_abc.Mapping):\n return _format_operation_dict(operation, parameters)\n\n return _format_operation_list(operation, parameters)\n", "path": "google/cloud/bigquery/dbapi/cursor.py"}], "after_files": [{"content": "# Copyright 2017 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Cursor for the Google BigQuery DB-API.\"\"\"\n\nimport collections\nfrom collections 
import abc as collections_abc\nimport copy\nimport logging\n\nimport six\n\nfrom google.cloud.bigquery import job\nfrom google.cloud.bigquery.dbapi import _helpers\nfrom google.cloud.bigquery.dbapi import exceptions\nimport google.cloud.exceptions\n\n\n_LOGGER = logging.getLogger(__name__)\n\n# Per PEP 249: A 7-item sequence containing information describing one result\n# column. The first two items (name and type_code) are mandatory, the other\n# five are optional and are set to None if no meaningful values can be\n# provided.\nColumn = collections.namedtuple(\n \"Column\",\n [\n \"name\",\n \"type_code\",\n \"display_size\",\n \"internal_size\",\n \"precision\",\n \"scale\",\n \"null_ok\",\n ],\n)\n\n\n@_helpers.raise_on_closed(\"Operating on a closed cursor.\")\nclass Cursor(object):\n \"\"\"DB-API Cursor to Google BigQuery.\n\n Args:\n connection (google.cloud.bigquery.dbapi.Connection):\n A DB-API connection to Google BigQuery.\n \"\"\"\n\n def __init__(self, connection):\n self.connection = connection\n self.description = None\n # Per PEP 249: The attribute is -1 in case no .execute*() has been\n # performed on the cursor or the rowcount of the last operation\n # cannot be determined by the interface.\n self.rowcount = -1\n # Per PEP 249: The arraysize attribute defaults to 1, meaning to fetch\n # a single row at a time. However, we deviate from that, and set the\n # default to None, allowing the backend to automatically determine the\n # most appropriate size.\n self.arraysize = None\n self._query_data = None\n self._query_job = None\n self._closed = False\n\n def close(self):\n \"\"\"Mark the cursor as closed, preventing its further use.\"\"\"\n self._closed = True\n\n def _set_description(self, schema):\n \"\"\"Set description from schema.\n\n Args:\n schema (Sequence[google.cloud.bigquery.schema.SchemaField]):\n A description of fields in the schema.\n \"\"\"\n if schema is None:\n self.description = None\n return\n\n self.description = tuple(\n Column(\n name=field.name,\n type_code=field.field_type,\n display_size=None,\n internal_size=None,\n precision=None,\n scale=None,\n null_ok=field.is_nullable,\n )\n for field in schema\n )\n\n def _set_rowcount(self, query_results):\n \"\"\"Set the rowcount from query results.\n\n Normally, this sets rowcount to the number of rows returned by the\n query, but if it was a DML statement, it sets rowcount to the number\n of modified rows.\n\n Args:\n query_results (google.cloud.bigquery.query._QueryResults):\n Results of a query.\n \"\"\"\n total_rows = 0\n num_dml_affected_rows = query_results.num_dml_affected_rows\n\n if query_results.total_rows is not None and query_results.total_rows > 0:\n total_rows = query_results.total_rows\n if num_dml_affected_rows is not None and num_dml_affected_rows > 0:\n total_rows = num_dml_affected_rows\n self.rowcount = total_rows\n\n def execute(self, operation, parameters=None, job_id=None, job_config=None):\n \"\"\"Prepare and execute a database operation.\n\n .. note::\n When setting query parameters, values which are \"text\"\n (``unicode`` in Python2, ``str`` in Python3) will use\n the 'STRING' BigQuery type. Values which are \"bytes\" (``str`` in\n Python2, ``bytes`` in Python3), will use using the 'BYTES' type.\n\n A `~datetime.datetime` parameter without timezone information uses\n the 'DATETIME' BigQuery type (example: Global Pi Day Celebration\n March 14, 2017 at 1:59pm). 
A `~datetime.datetime` parameter with\n timezone information uses the 'TIMESTAMP' BigQuery type (example:\n a wedding on April 29, 2011 at 11am, British Summer Time).\n\n For more information about BigQuery data types, see:\n https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types\n\n ``STRUCT``/``RECORD`` and ``REPEATED`` query parameters are not\n yet supported. See:\n https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3524\n\n Args:\n operation (str): A Google BigQuery query string.\n\n parameters (Union[Mapping[str, Any], Sequence[Any]]):\n (Optional) dictionary or sequence of parameter values.\n\n job_id (str):\n (Optional) The job_id to use. If not set, a job ID\n is generated at random.\n\n job_config (google.cloud.bigquery.job.QueryJobConfig):\n (Optional) Extra configuration options for the query job.\n \"\"\"\n self._query_data = None\n self._query_job = None\n client = self.connection._client\n\n # The DB-API uses the pyformat formatting, since the way BigQuery does\n # query parameters was not one of the standard options. Convert both\n # the query and the parameters to the format expected by the client\n # libraries.\n formatted_operation = _format_operation(operation, parameters=parameters)\n query_parameters = _helpers.to_query_parameters(parameters)\n\n if client._default_query_job_config:\n if job_config:\n config = job_config._fill_from_default(client._default_query_job_config)\n else:\n config = copy.deepcopy(client._default_query_job_config)\n else:\n config = job_config or job.QueryJobConfig(use_legacy_sql=False)\n\n config.query_parameters = query_parameters\n self._query_job = client.query(\n formatted_operation, job_config=config, job_id=job_id\n )\n\n if self._query_job.dry_run:\n self._set_description(schema=None)\n self.rowcount = 0\n return\n\n # Wait for the query to finish.\n try:\n self._query_job.result()\n except google.cloud.exceptions.GoogleCloudError as exc:\n raise exceptions.DatabaseError(exc)\n\n query_results = self._query_job._query_results\n self._set_rowcount(query_results)\n self._set_description(query_results.schema)\n\n def executemany(self, operation, seq_of_parameters):\n \"\"\"Prepare and execute a database operation multiple times.\n\n Args:\n operation (str): A Google BigQuery query string.\n\n seq_of_parameters (Union[Sequence[Mapping[str, Any], Sequence[Any]]]):\n Sequence of many sets of parameter values.\n \"\"\"\n for parameters in seq_of_parameters:\n self.execute(operation, parameters)\n\n def _try_fetch(self, size=None):\n \"\"\"Try to start fetching data, if not yet started.\n\n Mutates self to indicate that iteration has started.\n \"\"\"\n if self._query_job is None:\n raise exceptions.InterfaceError(\n \"No query results: execute() must be called before fetch.\"\n )\n\n if self._query_job.dry_run:\n self._query_data = iter([])\n return\n\n is_dml = (\n self._query_job.statement_type\n and self._query_job.statement_type.upper() != \"SELECT\"\n )\n if is_dml:\n self._query_data = iter([])\n return\n\n if self._query_data is None:\n client = self.connection._client\n bqstorage_client = self.connection._bqstorage_client\n\n if bqstorage_client is not None:\n rows_iterable = self._bqstorage_fetch(bqstorage_client)\n self._query_data = _helpers.to_bq_table_rows(rows_iterable)\n return\n\n rows_iter = client.list_rows(\n self._query_job.destination,\n selected_fields=self._query_job._query_results.schema,\n page_size=self.arraysize,\n )\n self._query_data = iter(rows_iter)\n\n def _bqstorage_fetch(self, 
bqstorage_client):\n \"\"\"Start fetching data with the BigQuery Storage API.\n\n The method assumes that the data about the relevant query job already\n exists internally.\n\n Args:\n bqstorage_client(\\\n google.cloud.bigquery_storage_v1.BigQueryReadClient \\\n ):\n A client tha know how to talk to the BigQuery Storage API.\n\n Returns:\n Iterable[Mapping]:\n A sequence of rows, represented as dictionaries.\n \"\"\"\n # Hitting this code path with a BQ Storage client instance implies that\n # bigquery_storage can indeed be imported here without errors.\n from google.cloud import bigquery_storage\n\n table_reference = self._query_job.destination\n\n requested_session = bigquery_storage.types.ReadSession(\n table=table_reference.to_bqstorage(),\n data_format=bigquery_storage.types.DataFormat.ARROW,\n )\n read_session = bqstorage_client.create_read_session(\n parent=\"projects/{}\".format(table_reference.project),\n read_session=requested_session,\n # a single stream only, as DB API is not well-suited for multithreading\n max_stream_count=1,\n )\n\n if not read_session.streams:\n return iter([]) # empty table, nothing to read\n\n stream_name = read_session.streams[0].name\n read_rows_stream = bqstorage_client.read_rows(stream_name)\n\n rows_iterable = read_rows_stream.rows(read_session)\n return rows_iterable\n\n def fetchone(self):\n \"\"\"Fetch a single row from the results of the last ``execute*()`` call.\n\n .. note::\n If a dry run query was executed, no rows are returned.\n\n Returns:\n Tuple:\n A tuple representing a row or ``None`` if no more data is\n available.\n\n Raises:\n google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.\n \"\"\"\n self._try_fetch()\n try:\n return six.next(self._query_data)\n except StopIteration:\n return None\n\n def fetchmany(self, size=None):\n \"\"\"Fetch multiple results from the last ``execute*()`` call.\n\n .. note::\n If a dry run query was executed, no rows are returned.\n\n .. note::\n The size parameter is not used for the request/response size.\n Set the ``arraysize`` attribute before calling ``execute()`` to\n set the batch size.\n\n Args:\n size (int):\n (Optional) Maximum number of rows to return. Defaults to the\n ``arraysize`` property value. If ``arraysize`` is not set, it\n defaults to ``1``.\n\n Returns:\n List[Tuple]: A list of rows.\n\n Raises:\n google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.\n \"\"\"\n if size is None:\n # Since self.arraysize can be None (a deviation from PEP 249),\n # use an actual PEP 249 default of 1 in such case (*some* number\n # is needed here).\n size = self.arraysize if self.arraysize else 1\n\n self._try_fetch(size=size)\n rows = []\n\n for row in self._query_data:\n rows.append(row)\n if len(rows) >= size:\n break\n\n return rows\n\n def fetchall(self):\n \"\"\"Fetch all remaining results from the last ``execute*()`` call.\n\n .. 
note::\n If a dry run query was executed, no rows are returned.\n\n Returns:\n List[Tuple]: A list of all the rows in the results.\n\n Raises:\n google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.\n \"\"\"\n self._try_fetch()\n return list(self._query_data)\n\n def setinputsizes(self, sizes):\n \"\"\"No-op, but for consistency raise an error if cursor is closed.\"\"\"\n\n def setoutputsize(self, size, column=None):\n \"\"\"No-op, but for consistency raise an error if cursor is closed.\"\"\"\n\n\ndef _format_operation_list(operation, parameters):\n \"\"\"Formats parameters in operation in the way BigQuery expects.\n\n The input operation will be a query like ``SELECT %s`` and the output\n will be a query like ``SELECT ?``.\n\n Args:\n operation (str): A Google BigQuery query string.\n\n parameters (Sequence[Any]): Sequence of parameter values.\n\n Returns:\n str: A formatted query string.\n\n Raises:\n google.cloud.bigquery.dbapi.ProgrammingError:\n if a parameter used in the operation is not found in the\n ``parameters`` argument.\n \"\"\"\n formatted_params = [\"?\" for _ in parameters]\n\n try:\n return operation % tuple(formatted_params)\n except TypeError as exc:\n raise exceptions.ProgrammingError(exc)\n\n\ndef _format_operation_dict(operation, parameters):\n \"\"\"Formats parameters in operation in the way BigQuery expects.\n\n The input operation will be a query like ``SELECT %(namedparam)s`` and\n the output will be a query like ``SELECT @namedparam``.\n\n Args:\n operation (str): A Google BigQuery query string.\n\n parameters (Mapping[str, Any]): Dictionary of parameter values.\n\n Returns:\n str: A formatted query string.\n\n Raises:\n google.cloud.bigquery.dbapi.ProgrammingError:\n if a parameter used in the operation is not found in the\n ``parameters`` argument.\n \"\"\"\n formatted_params = {}\n for name in parameters:\n escaped_name = name.replace(\"`\", r\"\\`\")\n formatted_params[name] = \"@`{}`\".format(escaped_name)\n\n try:\n return operation % formatted_params\n except KeyError as exc:\n raise exceptions.ProgrammingError(exc)\n\n\ndef _format_operation(operation, parameters=None):\n \"\"\"Formats parameters in operation in way BigQuery expects.\n\n Args:\n operation (str): A Google BigQuery query string.\n\n parameters (Union[Mapping[str, Any], Sequence[Any]]):\n Optional parameter values.\n\n Returns:\n str: A formatted query string.\n\n Raises:\n google.cloud.bigquery.dbapi.ProgrammingError:\n if a parameter used in the operation is not found in the\n ``parameters`` argument.\n \"\"\"\n if parameters is None or len(parameters) == 0:\n return operation\n\n if isinstance(parameters, collections_abc.Mapping):\n return _format_operation_dict(operation, parameters)\n\n return _format_operation_list(operation, parameters)\n", "path": "google/cloud/bigquery/dbapi/cursor.py"}]} |
gh_patches_debug_1029 | rasdani/github-patches | git_diff | qtile__qtile-4682 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mirrored widgets on multi-display get wrong background transparency
### Issue description
I'm trying to use the advised way of putting the same widget on multiple bars (displays). That means I define a widget object like this:
```python
widget_volume = widget.PulseVolume(
fmt=" {}",
mouse_callbacks={"Button3": lambda: qtile.spawn("pavucontrol")},
limit_max_volume=True,
background="#242936" + "99",
**powerline_left,
)
```
Note that for clarity I have separated out the alpha channel from the background color.
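(For reference, the concatenation simply yields an eight-digit `#RRGGBBAA` string, which qtile accepts as a color with alpha; a quick check, assuming nothing beyond plain string concatenation:)

```python
background = "#242936" + "99"  # RGB #242936 with alpha 0x99 (roughly 60% opaque)
print(background)              # -> "#24293699"
```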
After this, I add this widget variable to multiple Bar objects:
```python
screens = []
for monitor in range(monitors):
screens.append(
Screen(
top=bar.Bar(
widgets=[
...
widget.Sep(
background="#242936" + "99",
size_percent=60,
),
widget_volume,
...
```
On Screen 1, this works fine, but on Screens 2 and 3 the same widget gets a weird background transparency. Please see the screenshots below for what I mean. All widgets except the volume widget are declared inside the bar and they get the correct background color and transparency.
Screen 1:

Screen 2:

Screen 3:

I have tried modifying the transparency part ("99") to fully opaque ("ff") and fully transparent ("00") and those show as expected on all screens. It's just with partial transparency that the calculation seems to be off on my 2nd and 3rd screen.
Additionally, these screenshots were taken with the powerline decoration from qtile_extras, but the same happens when using the widgets from qtile proper.
### Version
Current master (551269802) + PR 4525 patch
### Backend
Wayland (experimental)
### Config
_No response_
### Logs
_No response_
### Required
- [X] I have searched past issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/widget/base.py`
Content:
```
1 # Copyright (c) 2008-2010 Aldo Cortesi
2 # Copyright (c) 2011 Florian Mounier
3 # Copyright (c) 2011 Kenji_Takahashi
4 # Copyright (c) 2011 Paul Colomiets
5 # Copyright (c) 2012 roger
6 # Copyright (c) 2012 Craig Barnes
7 # Copyright (c) 2012-2015 Tycho Andersen
8 # Copyright (c) 2013 dequis
9 # Copyright (c) 2013 David R. Andersen
10 # Copyright (c) 2013 Tao Sauvage
11 # Copyright (c) 2014-2015 Sean Vig
12 # Copyright (c) 2014 Justin Bronder
13 #
14 # Permission is hereby granted, free of charge, to any person obtaining a copy
15 # of this software and associated documentation files (the "Software"), to deal
16 # in the Software without restriction, including without limitation the rights
17 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
18 # copies of the Software, and to permit persons to whom the Software is
19 # furnished to do so, subject to the following conditions:
20 #
21 # The above copyright notice and this permission notice shall be included in
22 # all copies or substantial portions of the Software.
23 #
24 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
25 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
26 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
27 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
28 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
29 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
30 # SOFTWARE.
31
32 from __future__ import annotations
33
34 import asyncio
35 import copy
36 import math
37 import subprocess
38 from typing import TYPE_CHECKING
39
40 from libqtile import bar, configurable, confreader
41 from libqtile.command import interface
42 from libqtile.command.base import CommandError, CommandObject, expose_command
43 from libqtile.lazy import LazyCall
44 from libqtile.log_utils import logger
45 from libqtile.utils import create_task
46
47 if TYPE_CHECKING:
48 from typing import Any
49
50 from libqtile.command.base import ItemT
51
52 # Each widget class must define which bar orientation(s) it supports by setting
53 # these bits in an 'orientations' class attribute. Simply having the attribute
54 # inherited by superclasses is discouraged, because if a superclass that was
55 # only supporting one orientation, adds support for the other, its subclasses
56 # will have to be adapted too, in general. ORIENTATION_NONE is only added for
57 # completeness' sake.
58 # +------------------------+--------------------+--------------------+
59 # | Widget bits | Horizontal bar | Vertical bar |
60 # +========================+====================+====================+
61 # | ORIENTATION_NONE | ConfigError raised | ConfigError raised |
62 # +------------------------+--------------------+--------------------+
63 # | ORIENTATION_HORIZONTAL | Widget displayed | ConfigError raised |
64 # | | horizontally | |
65 # +------------------------+--------------------+--------------------+
66 # | ORIENTATION_VERTICAL | ConfigError raised | Widget displayed |
67 # | | | vertically |
68 # +------------------------+--------------------+--------------------+
69 # | ORIENTATION_BOTH | Widget displayed | Widget displayed |
70 # | | horizontally | vertically |
71 # +------------------------+--------------------+--------------------+
72
73
74 class _Orientations(int):
75 def __new__(cls, value, doc):
76 return super().__new__(cls, value)
77
78 def __init__(self, value, doc):
79 self.doc = doc
80
81 def __str__(self):
82 return self.doc
83
84 def __repr__(self):
85 return self.doc
86
87
88 ORIENTATION_NONE = _Orientations(0, "none")
89 ORIENTATION_HORIZONTAL = _Orientations(1, "horizontal only")
90 ORIENTATION_VERTICAL = _Orientations(2, "vertical only")
91 ORIENTATION_BOTH = _Orientations(3, "horizontal and vertical")
92
93
94 class _Widget(CommandObject, configurable.Configurable):
95 """Base Widget class
96
97 If length is set to the special value `bar.STRETCH`, the bar itself will
98 set the length to the maximum remaining space, after all other widgets have
99 been configured.
100
101 In horizontal bars, 'length' corresponds to the width of the widget; in
102 vertical bars, it corresponds to the widget's height.
103
104 The offsetx and offsety attributes are set by the Bar after all widgets
105 have been configured.
106
107 Callback functions can be assigned to button presses by passing a dict to the
108 'callbacks' kwarg. No arguments are passed to the function so, if
109 you need access to the qtile object, it needs to be imported into your code.
110
111 ``lazy`` functions can also be passed as callback functions and can be used in
112 the same way as keybindings.
113
114 For example:
115
116 .. code-block:: python
117
118 from libqtile import qtile
119
120 def open_calendar():
121 qtile.spawn('gsimplecal next_month')
122
123 clock = widget.Clock(
124 mouse_callbacks={
125 'Button1': open_calendar,
126 'Button3': lazy.spawn('gsimplecal prev_month')
127 }
128 )
129
130 When the clock widget receives a click with button 1, the ``open_calendar`` function
131 will be executed.
132 """
133
134 orientations = ORIENTATION_BOTH
135
136 # Default (empty set) is for all backends to be supported. Widgets can override this
137 # to explicitly confirm which backends are supported
138 supported_backends: set[str] = set()
139
140 offsetx: int = 0
141 offsety: int = 0
142 defaults: list[tuple[str, Any, str]] = [
143 ("background", None, "Widget background color"),
144 (
145 "mouse_callbacks",
146 {},
147 "Dict of mouse button press callback functions. Accepts functions and ``lazy`` calls.",
148 ),
149 ]
150
151 def __init__(self, length, **config):
152 """
153 length: bar.STRETCH, bar.CALCULATED, or a specified length.
154 """
155 CommandObject.__init__(self)
156 self.name = self.__class__.__name__.lower()
157 if "name" in config:
158 self.name = config["name"]
159
160 configurable.Configurable.__init__(self, **config)
161 self.add_defaults(_Widget.defaults)
162
163 if length in (bar.CALCULATED, bar.STRETCH):
164 self.length_type = length
165 self.length = 0
166 elif isinstance(length, int):
167 self.length_type = bar.STATIC
168 self.length = length
169 else:
170 raise confreader.ConfigError("Widget width must be an int")
171
172 self.configured = False
173 self._futures: list[asyncio.Handle] = []
174 self._mirrors: set[_Widget] = set()
175 self.finalized = False
176
177 @property
178 def length(self):
179 if self.length_type == bar.CALCULATED:
180 return int(self.calculate_length())
181 return self._length
182
183 @length.setter
184 def length(self, value):
185 self._length = value
186
187 @property
188 def width(self):
189 if self.bar.horizontal:
190 return self.length
191 return self.bar.width
192
193 @property
194 def height(self):
195 if self.bar.horizontal:
196 return self.bar.height
197 return self.length
198
199 @property
200 def offset(self):
201 if self.bar.horizontal:
202 return self.offsetx
203 return self.offsety
204
205 def _test_orientation_compatibility(self, horizontal):
206 if horizontal:
207 if not self.orientations & ORIENTATION_HORIZONTAL:
208 raise confreader.ConfigError(
209 self.__class__.__name__
210 + " is not compatible with the orientation of the bar."
211 )
212 elif not self.orientations & ORIENTATION_VERTICAL:
213 raise confreader.ConfigError(
214 self.__class__.__name__ + " is not compatible with the orientation of the bar."
215 )
216
217 def timer_setup(self):
218 """This is called exactly once, after the widget has been configured
219 and timers are available to be set up."""
220 pass
221
222 def _configure(self, qtile, bar):
223 self._test_orientation_compatibility(bar.horizontal)
224
225 self.qtile = qtile
226 self.bar = bar
227 self.drawer = bar.window.create_drawer(self.bar.width, self.bar.height)
228
229 # Clear this flag as widget may be restarted (e.g. if screen removed and re-added)
230 self.finalized = False
231
232 # Timers are added to futures list so they can be cancelled if the `finalize` method is
233 # called before the timers have fired.
234 if not self.configured:
235 timer = self.qtile.call_soon(self.timer_setup)
236 async_timer = self.qtile.call_soon(asyncio.create_task, self._config_async())
237
238 # Add these to our list of futures so they can be cancelled.
239 self._futures.extend([timer, async_timer])
240
241 async def _config_async(self):
242 """
243 This is called once when the main eventloop has started. this
244 happens after _configure has been run.
245
246 Widgets that need to use asyncio coroutines after this point may
247 wish to initialise the relevant code (e.g. connections to dbus
248 using dbus_next) here.
249 """
250 pass
251
252 def finalize(self):
253 for future in self._futures:
254 future.cancel()
255 if hasattr(self, "layout") and self.layout:
256 self.layout.finalize()
257 self.drawer.finalize()
258 self.finalized = True
259
260 # Reset configuration status so the widget can be reconfigured
261 # e.g. when screen is re-added
262 self.configured = False
263
264 def clear(self):
265 self.drawer.set_source_rgb(self.bar.background)
266 self.drawer.fillrect(self.offsetx, self.offsety, self.width, self.height)
267
268 @expose_command()
269 def info(self):
270 """Info for this object."""
271 return dict(
272 name=self.name,
273 offset=self.offset,
274 length=self.length,
275 width=self.width,
276 height=self.height,
277 )
278
279 def add_callbacks(self, defaults):
280 """Add default callbacks with a lower priority than user-specified callbacks."""
281 defaults.update(self.mouse_callbacks)
282 self.mouse_callbacks = defaults
283
284 def button_press(self, x, y, button):
285 name = "Button{0}".format(button)
286 if name in self.mouse_callbacks:
287 cmd = self.mouse_callbacks[name]
288 if isinstance(cmd, LazyCall):
289 if cmd.check(self.qtile):
290 status, val = self.qtile.server.call(
291 (cmd.selectors, cmd.name, cmd.args, cmd.kwargs)
292 )
293 if status in (interface.ERROR, interface.EXCEPTION):
294 logger.error("Mouse callback command error %s: %s", cmd.name, val)
295 else:
296 cmd()
297
298 def button_release(self, x, y, button):
299 pass
300
301 def get(self, q, name):
302 """
303 Utility function for quick retrieval of a widget by name.
304 """
305 w = q.widgets_map.get(name)
306 if not w:
307 raise CommandError("No such widget: %s" % name)
308 return w
309
310 def _items(self, name: str) -> ItemT:
311 if name == "bar":
312 return True, []
313 elif name == "screen":
314 return True, []
315 return None
316
317 def _select(self, name, sel):
318 if name == "bar":
319 return self.bar
320 elif name == "screen":
321 return self.bar.screen
322
323 def draw(self):
324 """
325 Method that draws the widget. You may call this explicitly to
326 redraw the widget, but only if the length of the widget hasn't
327 changed. If it has, you must call bar.draw instead.
328 """
329 raise NotImplementedError
330
331 def calculate_length(self):
332 """
333 Must be implemented if the widget can take CALCULATED for length.
334 It must return the width of the widget if it's installed in a
335 horizontal bar; it must return the height of the widget if it's
336 installed in a vertical bar. Usually you will test the orientation
337 of the bar with 'self.bar.horizontal'.
338 """
339 raise NotImplementedError
340
341 def timeout_add(self, seconds, method, method_args=()):
342 """
343 This method calls ``.call_later`` with given arguments.
344 """
345 # Don't add timers for finalised widgets
346 if self.finalized:
347 return
348
349 future = self.qtile.call_later(seconds, self._wrapper, method, *method_args)
350
351 self._futures.append(future)
352 return future
353
354 def call_process(self, command, **kwargs):
355 """
356 This method uses `subprocess.check_output` to run the given command
357 and return the string from stdout, which is decoded when using
358 Python 3.
359 """
360 return subprocess.check_output(command, **kwargs, encoding="utf-8")
361
362 def _remove_dead_timers(self):
363 """Remove completed and cancelled timers from the list."""
364
365 def is_ready(timer):
366 return timer in self.qtile._eventloop._ready
367
368 self._futures = [
369 timer
370 for timer in self._futures
371 # Filter out certain handles...
372 if not (
373 timer.cancelled()
374 # Once a scheduled timer is ready to be run its _scheduled flag is set to False
375 # and it's added to the loop's `_ready` queue
376 or (
377 isinstance(timer, asyncio.TimerHandle)
378 and not timer._scheduled
379 and not is_ready(timer)
380 )
381 # Callbacks scheduled via `call_soon` are put into the loop's `_ready` queue
382 # and are removed once they've been executed
383 or (isinstance(timer, asyncio.Handle) and not is_ready(timer))
384 )
385 ]
386
387 def _wrapper(self, method, *method_args):
388 self._remove_dead_timers()
389 try:
390 if asyncio.iscoroutinefunction(method):
391 create_task(method(*method_args))
392 elif asyncio.iscoroutine(method):
393 create_task(method)
394 else:
395 method(*method_args)
396 except: # noqa: E722
397 logger.exception("got exception from widget timer")
398
399 def create_mirror(self):
400 return Mirror(self, background=self.background)
401
402 def clone(self):
403 return copy.deepcopy(self)
404
405 def mouse_enter(self, x, y):
406 pass
407
408 def mouse_leave(self, x, y):
409 pass
410
411 def _draw_with_mirrors(self) -> None:
412 self._old_draw()
413 for mirror in self._mirrors:
414 if not mirror.configured:
415 continue
416
417 # If the widget and mirror are on the same bar then we could have an
418 # infinite loop when we call bar.draw(). mirror.draw() will trigger a resize
419 # if it's the wrong size.
420 if mirror.length_type == bar.CALCULATED and mirror.bar is not self.bar:
421 mirror.bar.draw()
422 else:
423 mirror.draw()
424
425 def add_mirror(self, widget: _Widget):
426 if not self._mirrors:
427 self._old_draw = self.draw
428 self.draw = self._draw_with_mirrors # type: ignore
429
430 self._mirrors.add(widget)
431 if not self.drawer.has_mirrors:
432 self.drawer.has_mirrors = True
433
434 def remove_mirror(self, widget: _Widget):
435 try:
436 self._mirrors.remove(widget)
437 except KeyError:
438 pass
439
440 if not self._mirrors:
441 self.drawer.has_mirrors = False
442
443 if hasattr(self, "_old_draw"):
444 # Deletes the reference to draw and falls back to the original
445 del self.draw
446 del self._old_draw
447
448
449 UNSPECIFIED = bar.Obj("UNSPECIFIED")
450
451
452 class _TextBox(_Widget):
453 """
454 Base class for widgets that are just boxes containing text.
455 """
456
457 orientations = ORIENTATION_BOTH
458 defaults = [
459 ("font", "sans", "Default font"),
460 ("fontsize", None, "Font size. Calculated if None."),
461 ("padding", None, "Padding. Calculated if None."),
462 ("foreground", "ffffff", "Foreground colour"),
463 ("fontshadow", None, "font shadow color, default is None(no shadow)"),
464 ("markup", True, "Whether or not to use pango markup"),
465 (
466 "fmt",
467 "{}",
468 "Format to apply to the string returned by the widget. Main purpose: applying markup. "
469 "For a widget that returns ``foo``, using ``fmt='<i>{}</i>'`` would give you ``<i>foo</i>``. "
470 "To control what the widget outputs in the first place, use the ``format`` parameter of the widget (if it has one).",
471 ),
472 ("max_chars", 0, "Maximum number of characters to display in widget."),
473 (
474 "scroll",
475 False,
476 "Whether text should be scrolled. When True, you must set the widget's ``width``.",
477 ),
478 (
479 "scroll_repeat",
480 True,
481 "Whether text should restart scrolling once the text has ended",
482 ),
483 (
484 "scroll_delay",
485 2,
486 "Number of seconds to pause before starting scrolling and restarting/clearing text at end",
487 ),
488 ("scroll_step", 1, "Number of pixels to scroll with each step"),
489 ("scroll_interval", 0.1, "Time in seconds before next scrolling step"),
490 (
491 "scroll_clear",
492 False,
493 "Whether text should scroll completely away (True) or stop when the end of the text is shown (False)",
494 ),
495 ("scroll_hide", False, "Whether the widget should hide when scrolling has finished"),
496 (
497 "scroll_fixed_width",
498 False,
499 "When ``scroll=True`` the ``width`` parameter is a maximum width and, when text is shorter than this, the widget will resize. "
500 "Setting ``scroll_fixed_width=True`` will force the widget to have a fixed width, regardless of the size of the text.",
501 ),
502 ] # type: list[tuple[str, Any, str]]
503
504 def __init__(self, text=" ", width=bar.CALCULATED, **config):
505 self.layout = None
506 _Widget.__init__(self, width, **config)
507 self.add_defaults(_TextBox.defaults)
508 self.text = text
509 self._is_scrolling = False
510 self._should_scroll = False
511 self._scroll_offset = 0
512 self._scroll_queued = False
513 self._scroll_timer = None
514 self._scroll_width = width
515
516 @property
517 def text(self):
518 return self._text
519
520 @text.setter
521 def text(self, value):
522 if len(value) > self.max_chars > 0:
523 value = value[: self.max_chars] + "…"
524 self._text = value
525 if self.layout:
526 self.layout.text = self.formatted_text
527 if self.scroll:
528 self.check_width()
529 self.reset_scroll()
530
531 @property
532 def formatted_text(self):
533 return self.fmt.format(self._text)
534
535 @property
536 def foreground(self):
537 return self._foreground
538
539 @foreground.setter
540 def foreground(self, fg):
541 self._foreground = fg
542 if self.layout:
543 self.layout.colour = fg
544
545 @property
546 def font(self):
547 return self._font
548
549 @font.setter
550 def font(self, value):
551 self._font = value
552 if self.layout:
553 self.layout.font = value
554
555 @property
556 def fontshadow(self):
557 return self._fontshadow
558
559 @fontshadow.setter
560 def fontshadow(self, value):
561 self._fontshadow = value
562 if self.layout:
563 self.layout.font_shadow = value
564
565 @property
566 def actual_padding(self):
567 if self.padding is None:
568 return self.fontsize / 2
569 else:
570 return self.padding
571
572 def _configure(self, qtile, bar):
573 _Widget._configure(self, qtile, bar)
574 if self.fontsize is None:
575 self.fontsize = self.bar.height - self.bar.height / 5
576 self.layout = self.drawer.textlayout(
577 self.formatted_text,
578 self.foreground,
579 self.font,
580 self.fontsize,
581 self.fontshadow,
582 markup=self.markup,
583 )
584 if not isinstance(self._scroll_width, int) and self.scroll:
585 logger.warning("%s: You must specify a width when enabling scrolling.", self.name)
586 self.scroll = False
587
588 if self.scroll:
589 self.check_width()
590
591 def check_width(self):
592 """
593 Check whether the widget needs to have calculated or fixed width
594 and whether the text should be scrolled.
595 """
596 if self.layout.width > self._scroll_width:
597 self.length_type = bar.STATIC
598 self.length = self._scroll_width
599 self._is_scrolling = True
600 self._should_scroll = True
601 else:
602 if self.scroll_fixed_width:
603 self.length_type = bar.STATIC
604 self.length = self._scroll_width
605 else:
606 self.length_type = bar.CALCULATED
607 self._should_scroll = False
608
609 def calculate_length(self):
610 if self.text:
611 if self.bar.horizontal:
612 return min(self.layout.width, self.bar.width) + self.actual_padding * 2
613 else:
614 return min(self.layout.width, self.bar.height) + self.actual_padding * 2
615 else:
616 return 0
617
618 def can_draw(self):
619 can_draw = (
620 self.layout is not None and not self.layout.finalized() and self.offsetx is not None
621 ) # if the bar hasn't placed us yet
622 return can_draw
623
624 def draw(self):
625 if not self.can_draw():
626 return
627 self.drawer.clear(self.background or self.bar.background)
628
629 # size = self.bar.height if self.bar.horizontal else self.bar.width
630 self.drawer.ctx.save()
631
632 if not self.bar.horizontal:
633 # Left bar reads bottom to top
634 if self.bar.screen.left is self.bar:
635 self.drawer.ctx.rotate(-90 * math.pi / 180.0)
636 self.drawer.ctx.translate(-self.length, 0)
637
638 # Right bar is top to bottom
639 else:
640 self.drawer.ctx.translate(self.bar.width, 0)
641 self.drawer.ctx.rotate(90 * math.pi / 180.0)
642
643 # If we're scrolling, we clip the context to the scroll width less the padding
644 # Move the text layout position (and we only see the clipped portion)
645 if self._should_scroll:
646 self.drawer.ctx.rectangle(
647 self.actual_padding,
648 0,
649 self._scroll_width - 2 * self.actual_padding,
650 self.bar.size,
651 )
652 self.drawer.ctx.clip()
653
654 size = self.bar.height if self.bar.horizontal else self.bar.width
655
656 self.layout.draw(
657 (self.actual_padding or 0) - self._scroll_offset,
658 int(size / 2.0 - self.layout.height / 2.0) + 1,
659 )
660 self.drawer.ctx.restore()
661
662 self.drawer.draw(
663 offsetx=self.offsetx, offsety=self.offsety, width=self.width, height=self.height
664 )
665
666 # We only want to scroll if:
667 # - User has asked us to scroll and the scroll width is smaller than the layout (should_scroll=True)
668 # - We are still scrolling (is_scrolling=True)
669 # - We haven't already queued the next scroll (scroll_queued=False)
670 if self._should_scroll and self._is_scrolling and not self._scroll_queued:
671 self._scroll_queued = True
672 if self._scroll_offset == 0:
673 interval = self.scroll_delay
674 else:
675 interval = self.scroll_interval
676 self._scroll_timer = self.timeout_add(interval, self.do_scroll)
677
678 def do_scroll(self):
679 # Allow the next scroll tick to be queued
680 self._scroll_queued = False
681
682 # If we're still scrolling, adjust the next offset
683 if self._is_scrolling:
684 self._scroll_offset += self.scroll_step
685
686 # Check whether we need to stop scrolling when:
687 # - we've scrolled all the text off the widget (scroll_clear = True)
688 # - the final pixel is visible (scroll_clear = False)
689 if (self.scroll_clear and self._scroll_offset > self.layout.width) or (
690 not self.scroll_clear
691 and (self.layout.width - self._scroll_offset)
692 < (self._scroll_width - 2 * self.actual_padding)
693 ):
694 self._is_scrolling = False
695
696 # We've reached the end of the scroll so what next?
697 if not self._is_scrolling:
698 if self.scroll_repeat:
699 # Pause and restart scrolling
700 self._scroll_timer = self.timeout_add(self.scroll_delay, self.reset_scroll)
701 elif self.scroll_hide:
702 # Clear the text
703 self._scroll_timer = self.timeout_add(self.scroll_delay, self.hide_scroll)
704 # If neither of these options then the text is no longer updated.
705
706 self.draw()
707
708 def reset_scroll(self):
709 self._scroll_offset = 0
710 self._is_scrolling = True
711 self._scroll_queued = False
712 if self._scroll_timer:
713 self._scroll_timer.cancel()
714 self.draw()
715
716 def hide_scroll(self):
717 self.update("")
718
719 @expose_command()
720 def set_font(self, font=UNSPECIFIED, fontsize=UNSPECIFIED, fontshadow=UNSPECIFIED):
721 """
722 Change the font used by this widget. If font is None, the current
723 font is used.
724 """
725 if font is not UNSPECIFIED:
726 self.font = font
727 if fontsize is not UNSPECIFIED:
728 self.fontsize = fontsize
729 if fontshadow is not UNSPECIFIED:
730 self.fontshadow = fontshadow
731 self.bar.draw()
732
733 @expose_command()
734 def info(self):
735 d = _Widget.info(self)
736 d["foreground"] = self.foreground
737 d["text"] = self.formatted_text
738 return d
739
740 def update(self, text):
741 """Update the widget text."""
742 # Don't try to update text in dead layouts
743 # This is mainly required for ThreadPoolText based widgets as the
744 # polling function cannot be cancelled and so may be called after the widget
745 # is finalised.
746 if not self.can_draw():
747 return
748
749 if self.text == text:
750 return
751 if text is None:
752 text = ""
753
754 old_width = self.layout.width
755 self.text = text
756
757 # If our width hasn't changed, we just draw ourselves. Otherwise,
758 # we draw the whole bar.
759 if self.layout.width == old_width:
760 self.draw()
761 else:
762 self.bar.draw()
763
764
765 class InLoopPollText(_TextBox):
766 """A common interface for polling some 'fast' information, munging it, and
767 rendering the result in a text box. You probably want to use
768 ThreadPoolText instead.
769
770 ('fast' here means that this runs /in/ the event loop, so don't block! If
771 you want to run something nontrivial, use ThreadedPollWidget.)"""
772
773 defaults = [
774 (
775 "update_interval",
776 600,
777 "Update interval in seconds, if none, the widget updates only once.",
778 ),
779 ] # type: list[tuple[str, Any, str]]
780
781 def __init__(self, default_text="N/A", **config):
782 _TextBox.__init__(self, default_text, **config)
783 self.add_defaults(InLoopPollText.defaults)
784
785 def timer_setup(self):
786 update_interval = self.tick()
787 # If self.update_interval is defined and .tick() returns None, re-call
788 # after self.update_interval
789 if update_interval is None and self.update_interval is not None:
790 self.timeout_add(self.update_interval, self.timer_setup)
791 # We can change the update interval by returning something from .tick()
792 elif update_interval:
793 self.timeout_add(update_interval, self.timer_setup)
794 # If update_interval is False, we won't re-call
795
796 def _configure(self, qtile, bar):
797 should_tick = self.configured
798 _TextBox._configure(self, qtile, bar)
799
800 # Update when we are being re-configured.
801 if should_tick:
802 self.tick()
803
804 def button_press(self, x, y, button):
805 self.tick()
806 _TextBox.button_press(self, x, y, button)
807
808 def poll(self):
809 return "N/A"
810
811 def tick(self):
812 text = self.poll()
813 self.update(text)
814
815
816 class ThreadPoolText(_TextBox):
817 """A common interface for wrapping blocking events which when triggered
818 will update a textbox.
819
820 The poll method is intended to wrap a blocking function which may take
821 quite a while to return anything. It will be executed as a future and
822 should return updated text when completed. It may also return None to
823 disable any further updates.
824
825 param: text - Initial text to display.
826 """
827
828 defaults = [
829 (
830 "update_interval",
831 600,
832 "Update interval in seconds, if none, the widget updates only once.",
833 ),
834 ] # type: list[tuple[str, Any, str]]
835
836 def __init__(self, text, **config):
837 super().__init__(text, **config)
838 self.add_defaults(ThreadPoolText.defaults)
839
840 def timer_setup(self):
841 def on_done(future):
842 try:
843 result = future.result()
844 except Exception:
845 result = None
846 logger.exception("poll() raised exceptions, not rescheduling")
847
848 if result is not None:
849 try:
850 self.update(result)
851
852 if self.update_interval is not None:
853 self.timeout_add(self.update_interval, self.timer_setup)
854
855 except Exception:
856 logger.exception("Failed to reschedule timer for %s.", self.name)
857 else:
858 logger.warning("%s's poll() returned None, not rescheduling", self.name)
859
860 self.future = self.qtile.run_in_executor(self.poll)
861 self.future.add_done_callback(on_done)
862
863 def poll(self):
864 pass
865
866 @expose_command()
867 def force_update(self):
868 """Immediately poll the widget. Existing timers are unaffected."""
869 self.update(self.poll())
870
871
872 # these two classes below look SUSPICIOUSLY similar
873
874
875 class PaddingMixin(configurable.Configurable):
876 """Mixin that provides padding(_x|_y|)
877
878 To use it, subclass and add this to __init__:
879
880 self.add_defaults(base.PaddingMixin.defaults)
881 """
882
883 defaults = [
884 ("padding", 3, "Padding inside the box"),
885 ("padding_x", None, "X Padding. Overrides 'padding' if set"),
886 ("padding_y", None, "Y Padding. Overrides 'padding' if set"),
887 ] # type: list[tuple[str, Any, str]]
888
889 padding_x = configurable.ExtraFallback("padding_x", "padding")
890 padding_y = configurable.ExtraFallback("padding_y", "padding")
891
892
893 class MarginMixin(configurable.Configurable):
894 """Mixin that provides margin(_x|_y|)
895
896 To use it, subclass and add this to __init__:
897
898 self.add_defaults(base.MarginMixin.defaults)
899 """
900
901 defaults = [
902 ("margin", 3, "Margin inside the box"),
903 ("margin_x", None, "X Margin. Overrides 'margin' if set"),
904 ("margin_y", None, "Y Margin. Overrides 'margin' if set"),
905 ] # type: list[tuple[str, Any, str]]
906
907 margin_x = configurable.ExtraFallback("margin_x", "margin")
908 margin_y = configurable.ExtraFallback("margin_y", "margin")
909
910
911 class Mirror(_Widget):
912 """
913 A widget for showing the same widget content in more than one place, for
914 instance, on bars across multiple screens.
915
916 You don't need to use it directly; instead, just instantiate your widget
917 once and hand it in to multiple bars. For instance::
918
919 cpu = widget.CPUGraph()
920 clock = widget.Clock()
921
922 screens = [
923 Screen(top=bar.Bar([widget.GroupBox(), cpu, clock])),
924 Screen(top=bar.Bar([widget.GroupBox(), cpu, clock])),
925 ]
926
927 Widgets can be passed to more than one bar, so that there don't need to be
928 any duplicates executing the same code all the time, and they'll always be
929 visually identical.
930
931 This works for all widgets that use `drawers` (and nothing else) to display
932 their contents. Currently, this is all widgets except for `Systray`.
933 """
934
935 def __init__(self, reflection, **config):
936 _Widget.__init__(self, reflection.length, **config)
937 self.reflects = reflection
938 self._length = 0
939 self.length_type = self.reflects.length_type
940
941 def _configure(self, qtile, bar):
942 _Widget._configure(self, qtile, bar)
943 self.reflects.add_mirror(self)
944 # We need to fill the background once before `draw` is called so, if
945 # there's no reflection, the mirror matches its parent bar.
946 self.drawer.clear(self.background or self.bar.background)
947
948 def calculate_length(self):
949 return self.reflects.calculate_length()
950
951 @property
952 def length(self):
953 if self.length_type != bar.STRETCH:
954 return self.reflects.length
955 return self._length
956
957 @length.setter
958 def length(self, value):
959 self._length = value
960
961 def draw(self):
962 self.drawer.clear(self.reflects.background or self.bar.background)
963 self.reflects.drawer.paint_to(self.drawer)
964 self.drawer.draw(offsetx=self.offset, offsety=self.offsety, width=self.width)
965
966 def button_press(self, x, y, button):
967 self.reflects.button_press(x, y, button)
968
969 def mouse_enter(self, x, y):
970 self.reflects.mouse_enter(x, y)
971
972 def mouse_leave(self, x, y):
973 self.reflects.mouse_leave(x, y)
974
975 def finalize(self):
976 self.reflects.remove_mirror(self)
977 _Widget.finalize(self)
978
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
| diff --git a/libqtile/widget/base.py b/libqtile/widget/base.py
--- a/libqtile/widget/base.py
+++ b/libqtile/widget/base.py
@@ -959,7 +959,7 @@
         self._length = value
 
     def draw(self):
-        self.drawer.clear(self.reflects.background or self.bar.background)
+        self.drawer.clear_rect()
         self.reflects.drawer.paint_to(self.drawer)
         self.drawer.draw(offsetx=self.offset, offsety=self.offsety, width=self.width)
 
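If that double-compositing reading is right, the old `Mirror.draw` first cleared the mirror's drawer with the translucent background and then painted the reflected widget's drawer (which already contains that background) on top, so semi-transparent colours ended up more opaque on every mirrored bar. The patch wipes the mirror's surface instead (`clear_rect()` is assumed here to reset it to fully transparent) before painting the source drawer, keeping the mirror pixel-identical to the original widget. A standalone sanity check of the compositing numbers, written against plain pycairo rather than qtile's drawer, with illustrative values only:

```python
import cairo

BG = (0x24 / 0xFF, 0x29 / 0xFF, 0x36 / 0xFF, 0x99 / 0xFF)  # "#242936" + "99"

def resulting_alpha(paint_background_twice: bool) -> float:
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 1, 1)
    ctx = cairo.Context(surface)
    if paint_background_twice:
        # Old behaviour: the mirror cleared its surface with the translucent
        # background before the reflected drawer was painted over it.
        ctx.set_source_rgba(*BG)
        ctx.paint()
    # The reflected widget's drawer already holds the background once.
    ctx.set_source_rgba(*BG)
    ctx.paint()
    surface.flush()
    # ARGB32 is native-endian; on little-endian machines the byte order is
    # B, G, R, A, so index 3 is the alpha of this single pixel.
    return surface.get_data()[3] / 0xFF

print(f"painted once : {resulting_alpha(False):.2f}")  # ~0.60, as configured
print(f"painted twice: {resulting_alpha(True):.2f}")   # ~0.84, the reported artefact
```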
| {"golden_diff": "diff --git a/libqtile/widget/base.py b/libqtile/widget/base.py\n--- a/libqtile/widget/base.py\n+++ b/libqtile/widget/base.py\n@@ -959,7 +959,7 @@\n self._length = value\n \n def draw(self):\n- self.drawer.clear(self.reflects.background or self.bar.background)\n+ self.drawer.clear_rect()\n self.reflects.drawer.paint_to(self.drawer)\n self.drawer.draw(offsetx=self.offset, offsety=self.offsety, width=self.width)\n", "issue": "Mirrored widgets on multi-display get wrong background transparency\n### Issue description\n\nI'm trying to use the advised way of putting the same widget on multiple bars (displays). That means I define a widget object like this:\r\n```python\r\nwidget_volume = widget.PulseVolume(\r\n fmt=\"\udb81\udd7e {}\",\r\n mouse_callbacks={\"Button3\": lambda: qtile.spawn(\"pavucontrol\")},\r\n limit_max_volume=True,\r\n background=\"#242936\" + \"99\",\r\n **powerline_left,\r\n)\r\n```\r\nNote that for clarity I have separated out the alpha channel from the background color.\r\n\r\nAfter this, I add this widget variable to multiple Bar objects:\r\n```python\r\nscreens = []\r\n for monitor in range(monitors):\r\n screens.append(\r\n Screen(\r\n top=bar.Bar(\r\n widgets=[\r\n...\r\n widget.Sep(\r\n background=\"#242936\" + \"99\",\r\n size_percent=60,\r\n ),\r\n widget_volume,\r\n...\r\n```\r\nOn Screen 1, this works fine, but on Screens 2 and 3 the same widget gets a weird background transparency. Please see the screenshots below for what I mean. All widgets except the volume widget are declared inside the bar and they get the correct background color and transparency.\r\n\r\nScreen 1:\r\n\r\nScreen 2:\r\n\r\nScreen 3:\r\n\r\n\r\nI have tried modifying the transparency part (\"99\") to fully opaque (\"ff\") and fully transparent (\"00\") and those show as expected on all screens. It's just with partial transparency that the calculation seems to be off on my 2nd and 3rd screen.\r\n\r\nAdditionally, as you can see these screenshots are when using the powerline decoration from qtile_extras, but the same happens when using the widgets from qtile proper.\n\n### Version\n\nCurrent master (551269802) + PR 4525 patch\n\n### Backend\n\nWayland (experimental)\n\n### Config\n\n_No response_\n\n### Logs\n\n_No response_\n\n### Required\n\n- [X] I have searched past issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "# Copyright (c) 2008-2010 Aldo Cortesi\n# Copyright (c) 2011 Florian Mounier\n# Copyright (c) 2011 Kenji_Takahashi\n# Copyright (c) 2011 Paul Colomiets\n# Copyright (c) 2012 roger\n# Copyright (c) 2012 Craig Barnes\n# Copyright (c) 2012-2015 Tycho Andersen\n# Copyright (c) 2013 dequis\n# Copyright (c) 2013 David R. 
Andersen\n# Copyright (c) 2013 Tao Sauvage\n# Copyright (c) 2014-2015 Sean Vig\n# Copyright (c) 2014 Justin Bronder\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom __future__ import annotations\n\nimport asyncio\nimport copy\nimport math\nimport subprocess\nfrom typing import TYPE_CHECKING\n\nfrom libqtile import bar, configurable, confreader\nfrom libqtile.command import interface\nfrom libqtile.command.base import CommandError, CommandObject, expose_command\nfrom libqtile.lazy import LazyCall\nfrom libqtile.log_utils import logger\nfrom libqtile.utils import create_task\n\nif TYPE_CHECKING:\n from typing import Any\n\n from libqtile.command.base import ItemT\n\n# Each widget class must define which bar orientation(s) it supports by setting\n# these bits in an 'orientations' class attribute. Simply having the attribute\n# inherited by superclasses is discouraged, because if a superclass that was\n# only supporting one orientation, adds support for the other, its subclasses\n# will have to be adapted too, in general. 
ORIENTATION_NONE is only added for\n# completeness' sake.\n# +------------------------+--------------------+--------------------+\n# | Widget bits | Horizontal bar | Vertical bar |\n# +========================+====================+====================+\n# | ORIENTATION_NONE | ConfigError raised | ConfigError raised |\n# +------------------------+--------------------+--------------------+\n# | ORIENTATION_HORIZONTAL | Widget displayed | ConfigError raised |\n# | | horizontally | |\n# +------------------------+--------------------+--------------------+\n# | ORIENTATION_VERTICAL | ConfigError raised | Widget displayed |\n# | | | vertically |\n# +------------------------+--------------------+--------------------+\n# | ORIENTATION_BOTH | Widget displayed | Widget displayed |\n# | | horizontally | vertically |\n# +------------------------+--------------------+--------------------+\n\n\nclass _Orientations(int):\n def __new__(cls, value, doc):\n return super().__new__(cls, value)\n\n def __init__(self, value, doc):\n self.doc = doc\n\n def __str__(self):\n return self.doc\n\n def __repr__(self):\n return self.doc\n\n\nORIENTATION_NONE = _Orientations(0, \"none\")\nORIENTATION_HORIZONTAL = _Orientations(1, \"horizontal only\")\nORIENTATION_VERTICAL = _Orientations(2, \"vertical only\")\nORIENTATION_BOTH = _Orientations(3, \"horizontal and vertical\")\n\n\nclass _Widget(CommandObject, configurable.Configurable):\n \"\"\"Base Widget class\n\n If length is set to the special value `bar.STRETCH`, the bar itself will\n set the length to the maximum remaining space, after all other widgets have\n been configured.\n\n In horizontal bars, 'length' corresponds to the width of the widget; in\n vertical bars, it corresponds to the widget's height.\n\n The offsetx and offsety attributes are set by the Bar after all widgets\n have been configured.\n\n Callback functions can be assigned to button presses by passing a dict to the\n 'callbacks' kwarg. No arguments are passed to the function so, if\n you need access to the qtile object, it needs to be imported into your code.\n\n ``lazy`` functions can also be passed as callback functions and can be used in\n the same way as keybindings.\n\n For example:\n\n .. code-block:: python\n\n from libqtile import qtile\n\n def open_calendar():\n qtile.spawn('gsimplecal next_month')\n\n clock = widget.Clock(\n mouse_callbacks={\n 'Button1': open_calendar,\n 'Button3': lazy.spawn('gsimplecal prev_month')\n }\n )\n\n When the clock widget receives a click with button 1, the ``open_calendar`` function\n will be executed.\n \"\"\"\n\n orientations = ORIENTATION_BOTH\n\n # Default (empty set) is for all backends to be supported. Widgets can override this\n # to explicitly confirm which backends are supported\n supported_backends: set[str] = set()\n\n offsetx: int = 0\n offsety: int = 0\n defaults: list[tuple[str, Any, str]] = [\n (\"background\", None, \"Widget background color\"),\n (\n \"mouse_callbacks\",\n {},\n \"Dict of mouse button press callback functions. 
Accepts functions and ``lazy`` calls.\",\n ),\n ]\n\n def __init__(self, length, **config):\n \"\"\"\n length: bar.STRETCH, bar.CALCULATED, or a specified length.\n \"\"\"\n CommandObject.__init__(self)\n self.name = self.__class__.__name__.lower()\n if \"name\" in config:\n self.name = config[\"name\"]\n\n configurable.Configurable.__init__(self, **config)\n self.add_defaults(_Widget.defaults)\n\n if length in (bar.CALCULATED, bar.STRETCH):\n self.length_type = length\n self.length = 0\n elif isinstance(length, int):\n self.length_type = bar.STATIC\n self.length = length\n else:\n raise confreader.ConfigError(\"Widget width must be an int\")\n\n self.configured = False\n self._futures: list[asyncio.Handle] = []\n self._mirrors: set[_Widget] = set()\n self.finalized = False\n\n @property\n def length(self):\n if self.length_type == bar.CALCULATED:\n return int(self.calculate_length())\n return self._length\n\n @length.setter\n def length(self, value):\n self._length = value\n\n @property\n def width(self):\n if self.bar.horizontal:\n return self.length\n return self.bar.width\n\n @property\n def height(self):\n if self.bar.horizontal:\n return self.bar.height\n return self.length\n\n @property\n def offset(self):\n if self.bar.horizontal:\n return self.offsetx\n return self.offsety\n\n def _test_orientation_compatibility(self, horizontal):\n if horizontal:\n if not self.orientations & ORIENTATION_HORIZONTAL:\n raise confreader.ConfigError(\n self.__class__.__name__\n + \" is not compatible with the orientation of the bar.\"\n )\n elif not self.orientations & ORIENTATION_VERTICAL:\n raise confreader.ConfigError(\n self.__class__.__name__ + \" is not compatible with the orientation of the bar.\"\n )\n\n def timer_setup(self):\n \"\"\"This is called exactly once, after the widget has been configured\n and timers are available to be set up.\"\"\"\n pass\n\n def _configure(self, qtile, bar):\n self._test_orientation_compatibility(bar.horizontal)\n\n self.qtile = qtile\n self.bar = bar\n self.drawer = bar.window.create_drawer(self.bar.width, self.bar.height)\n\n # Clear this flag as widget may be restarted (e.g. if screen removed and re-added)\n self.finalized = False\n\n # Timers are added to futures list so they can be cancelled if the `finalize` method is\n # called before the timers have fired.\n if not self.configured:\n timer = self.qtile.call_soon(self.timer_setup)\n async_timer = self.qtile.call_soon(asyncio.create_task, self._config_async())\n\n # Add these to our list of futures so they can be cancelled.\n self._futures.extend([timer, async_timer])\n\n async def _config_async(self):\n \"\"\"\n This is called once when the main eventloop has started. this\n happens after _configure has been run.\n\n Widgets that need to use asyncio coroutines after this point may\n wish to initialise the relevant code (e.g. connections to dbus\n using dbus_next) here.\n \"\"\"\n pass\n\n def finalize(self):\n for future in self._futures:\n future.cancel()\n if hasattr(self, \"layout\") and self.layout:\n self.layout.finalize()\n self.drawer.finalize()\n self.finalized = True\n\n # Reset configuration status so the widget can be reconfigured\n # e.g. 
when screen is re-added\n self.configured = False\n\n def clear(self):\n self.drawer.set_source_rgb(self.bar.background)\n self.drawer.fillrect(self.offsetx, self.offsety, self.width, self.height)\n\n @expose_command()\n def info(self):\n \"\"\"Info for this object.\"\"\"\n return dict(\n name=self.name,\n offset=self.offset,\n length=self.length,\n width=self.width,\n height=self.height,\n )\n\n def add_callbacks(self, defaults):\n \"\"\"Add default callbacks with a lower priority than user-specified callbacks.\"\"\"\n defaults.update(self.mouse_callbacks)\n self.mouse_callbacks = defaults\n\n def button_press(self, x, y, button):\n name = \"Button{0}\".format(button)\n if name in self.mouse_callbacks:\n cmd = self.mouse_callbacks[name]\n if isinstance(cmd, LazyCall):\n if cmd.check(self.qtile):\n status, val = self.qtile.server.call(\n (cmd.selectors, cmd.name, cmd.args, cmd.kwargs)\n )\n if status in (interface.ERROR, interface.EXCEPTION):\n logger.error(\"Mouse callback command error %s: %s\", cmd.name, val)\n else:\n cmd()\n\n def button_release(self, x, y, button):\n pass\n\n def get(self, q, name):\n \"\"\"\n Utility function for quick retrieval of a widget by name.\n \"\"\"\n w = q.widgets_map.get(name)\n if not w:\n raise CommandError(\"No such widget: %s\" % name)\n return w\n\n def _items(self, name: str) -> ItemT:\n if name == \"bar\":\n return True, []\n elif name == \"screen\":\n return True, []\n return None\n\n def _select(self, name, sel):\n if name == \"bar\":\n return self.bar\n elif name == \"screen\":\n return self.bar.screen\n\n def draw(self):\n \"\"\"\n Method that draws the widget. You may call this explicitly to\n redraw the widget, but only if the length of the widget hasn't\n changed. If it has, you must call bar.draw instead.\n \"\"\"\n raise NotImplementedError\n\n def calculate_length(self):\n \"\"\"\n Must be implemented if the widget can take CALCULATED for length.\n It must return the width of the widget if it's installed in a\n horizontal bar; it must return the height of the widget if it's\n installed in a vertical bar. 
Usually you will test the orientation\n of the bar with 'self.bar.horizontal'.\n \"\"\"\n raise NotImplementedError\n\n def timeout_add(self, seconds, method, method_args=()):\n \"\"\"\n This method calls ``.call_later`` with given arguments.\n \"\"\"\n # Don't add timers for finalised widgets\n if self.finalized:\n return\n\n future = self.qtile.call_later(seconds, self._wrapper, method, *method_args)\n\n self._futures.append(future)\n return future\n\n def call_process(self, command, **kwargs):\n \"\"\"\n This method uses `subprocess.check_output` to run the given command\n and return the string from stdout, which is decoded when using\n Python 3.\n \"\"\"\n return subprocess.check_output(command, **kwargs, encoding=\"utf-8\")\n\n def _remove_dead_timers(self):\n \"\"\"Remove completed and cancelled timers from the list.\"\"\"\n\n def is_ready(timer):\n return timer in self.qtile._eventloop._ready\n\n self._futures = [\n timer\n for timer in self._futures\n # Filter out certain handles...\n if not (\n timer.cancelled()\n # Once a scheduled timer is ready to be run its _scheduled flag is set to False\n # and it's added to the loop's `_ready` queue\n or (\n isinstance(timer, asyncio.TimerHandle)\n and not timer._scheduled\n and not is_ready(timer)\n )\n # Callbacks scheduled via `call_soon` are put into the loop's `_ready` queue\n # and are removed once they've been executed\n or (isinstance(timer, asyncio.Handle) and not is_ready(timer))\n )\n ]\n\n def _wrapper(self, method, *method_args):\n self._remove_dead_timers()\n try:\n if asyncio.iscoroutinefunction(method):\n create_task(method(*method_args))\n elif asyncio.iscoroutine(method):\n create_task(method)\n else:\n method(*method_args)\n except: # noqa: E722\n logger.exception(\"got exception from widget timer\")\n\n def create_mirror(self):\n return Mirror(self, background=self.background)\n\n def clone(self):\n return copy.deepcopy(self)\n\n def mouse_enter(self, x, y):\n pass\n\n def mouse_leave(self, x, y):\n pass\n\n def _draw_with_mirrors(self) -> None:\n self._old_draw()\n for mirror in self._mirrors:\n if not mirror.configured:\n continue\n\n # If the widget and mirror are on the same bar then we could have an\n # infinite loop when we call bar.draw(). mirror.draw() will trigger a resize\n # if it's the wrong size.\n if mirror.length_type == bar.CALCULATED and mirror.bar is not self.bar:\n mirror.bar.draw()\n else:\n mirror.draw()\n\n def add_mirror(self, widget: _Widget):\n if not self._mirrors:\n self._old_draw = self.draw\n self.draw = self._draw_with_mirrors # type: ignore\n\n self._mirrors.add(widget)\n if not self.drawer.has_mirrors:\n self.drawer.has_mirrors = True\n\n def remove_mirror(self, widget: _Widget):\n try:\n self._mirrors.remove(widget)\n except KeyError:\n pass\n\n if not self._mirrors:\n self.drawer.has_mirrors = False\n\n if hasattr(self, \"_old_draw\"):\n # Deletes the reference to draw and falls back to the original\n del self.draw\n del self._old_draw\n\n\nUNSPECIFIED = bar.Obj(\"UNSPECIFIED\")\n\n\nclass _TextBox(_Widget):\n \"\"\"\n Base class for widgets that are just boxes containing text.\n \"\"\"\n\n orientations = ORIENTATION_BOTH\n defaults = [\n (\"font\", \"sans\", \"Default font\"),\n (\"fontsize\", None, \"Font size. Calculated if None.\"),\n (\"padding\", None, \"Padding. 
Calculated if None.\"),\n (\"foreground\", \"ffffff\", \"Foreground colour\"),\n (\"fontshadow\", None, \"font shadow color, default is None(no shadow)\"),\n (\"markup\", True, \"Whether or not to use pango markup\"),\n (\n \"fmt\",\n \"{}\",\n \"Format to apply to the string returned by the widget. Main purpose: applying markup. \"\n \"For a widget that returns ``foo``, using ``fmt='<i>{}</i>'`` would give you ``<i>foo</i>``. \"\n \"To control what the widget outputs in the first place, use the ``format`` paramater of the widget (if it has one).\",\n ),\n (\"max_chars\", 0, \"Maximum number of characters to display in widget.\"),\n (\n \"scroll\",\n False,\n \"Whether text should be scrolled. When True, you must set the widget's ``width``.\",\n ),\n (\n \"scroll_repeat\",\n True,\n \"Whether text should restart scrolling once the text has ended\",\n ),\n (\n \"scroll_delay\",\n 2,\n \"Number of seconds to pause before starting scrolling and restarting/clearing text at end\",\n ),\n (\"scroll_step\", 1, \"Number of pixels to scroll with each step\"),\n (\"scroll_interval\", 0.1, \"Time in seconds before next scrolling step\"),\n (\n \"scroll_clear\",\n False,\n \"Whether text should scroll completely away (True) or stop when the end of the text is shown (False)\",\n ),\n (\"scroll_hide\", False, \"Whether the widget should hide when scrolling has finished\"),\n (\n \"scroll_fixed_width\",\n False,\n \"When ``scroll=True`` the ``width`` parameter is a maximum width and, when text is shorter than this, the widget will resize. \"\n \"Setting ``scroll_fixed_width=True`` will force the widget to have a fixed width, regardless of the size of the text.\",\n ),\n ] # type: list[tuple[str, Any, str]]\n\n def __init__(self, text=\" \", width=bar.CALCULATED, **config):\n self.layout = None\n _Widget.__init__(self, width, **config)\n self.add_defaults(_TextBox.defaults)\n self.text = text\n self._is_scrolling = False\n self._should_scroll = False\n self._scroll_offset = 0\n self._scroll_queued = False\n self._scroll_timer = None\n self._scroll_width = width\n\n @property\n def text(self):\n return self._text\n\n @text.setter\n def text(self, value):\n if len(value) > self.max_chars > 0:\n value = value[: self.max_chars] + \"\u2026\"\n self._text = value\n if self.layout:\n self.layout.text = self.formatted_text\n if self.scroll:\n self.check_width()\n self.reset_scroll()\n\n @property\n def formatted_text(self):\n return self.fmt.format(self._text)\n\n @property\n def foreground(self):\n return self._foreground\n\n @foreground.setter\n def foreground(self, fg):\n self._foreground = fg\n if self.layout:\n self.layout.colour = fg\n\n @property\n def font(self):\n return self._font\n\n @font.setter\n def font(self, value):\n self._font = value\n if self.layout:\n self.layout.font = value\n\n @property\n def fontshadow(self):\n return self._fontshadow\n\n @fontshadow.setter\n def fontshadow(self, value):\n self._fontshadow = value\n if self.layout:\n self.layout.font_shadow = value\n\n @property\n def actual_padding(self):\n if self.padding is None:\n return self.fontsize / 2\n else:\n return self.padding\n\n def _configure(self, qtile, bar):\n _Widget._configure(self, qtile, bar)\n if self.fontsize is None:\n self.fontsize = self.bar.height - self.bar.height / 5\n self.layout = self.drawer.textlayout(\n self.formatted_text,\n self.foreground,\n self.font,\n self.fontsize,\n self.fontshadow,\n markup=self.markup,\n )\n if not isinstance(self._scroll_width, int) and self.scroll:\n logger.warning(\"%s: You 
must specify a width when enabling scrolling.\", self.name)\n self.scroll = False\n\n if self.scroll:\n self.check_width()\n\n def check_width(self):\n \"\"\"\n Check whether the widget needs to have calculated or fixed width\n and whether the text should be scrolled.\n \"\"\"\n if self.layout.width > self._scroll_width:\n self.length_type = bar.STATIC\n self.length = self._scroll_width\n self._is_scrolling = True\n self._should_scroll = True\n else:\n if self.scroll_fixed_width:\n self.length_type = bar.STATIC\n self.length = self._scroll_width\n else:\n self.length_type = bar.CALCULATED\n self._should_scroll = False\n\n def calculate_length(self):\n if self.text:\n if self.bar.horizontal:\n return min(self.layout.width, self.bar.width) + self.actual_padding * 2\n else:\n return min(self.layout.width, self.bar.height) + self.actual_padding * 2\n else:\n return 0\n\n def can_draw(self):\n can_draw = (\n self.layout is not None and not self.layout.finalized() and self.offsetx is not None\n ) # if the bar hasn't placed us yet\n return can_draw\n\n def draw(self):\n if not self.can_draw():\n return\n self.drawer.clear(self.background or self.bar.background)\n\n # size = self.bar.height if self.bar.horizontal else self.bar.width\n self.drawer.ctx.save()\n\n if not self.bar.horizontal:\n # Left bar reads bottom to top\n if self.bar.screen.left is self.bar:\n self.drawer.ctx.rotate(-90 * math.pi / 180.0)\n self.drawer.ctx.translate(-self.length, 0)\n\n # Right bar is top to bottom\n else:\n self.drawer.ctx.translate(self.bar.width, 0)\n self.drawer.ctx.rotate(90 * math.pi / 180.0)\n\n # If we're scrolling, we clip the context to the scroll width less the padding\n # Move the text layout position (and we only see the clipped portion)\n if self._should_scroll:\n self.drawer.ctx.rectangle(\n self.actual_padding,\n 0,\n self._scroll_width - 2 * self.actual_padding,\n self.bar.size,\n )\n self.drawer.ctx.clip()\n\n size = self.bar.height if self.bar.horizontal else self.bar.width\n\n self.layout.draw(\n (self.actual_padding or 0) - self._scroll_offset,\n int(size / 2.0 - self.layout.height / 2.0) + 1,\n )\n self.drawer.ctx.restore()\n\n self.drawer.draw(\n offsetx=self.offsetx, offsety=self.offsety, width=self.width, height=self.height\n )\n\n # We only want to scroll if:\n # - User has asked us to scroll and the scroll width is smaller than the layout (should_scroll=True)\n # - We are still scrolling (is_scrolling=True)\n # - We haven't already queued the next scroll (scroll_queued=False)\n if self._should_scroll and self._is_scrolling and not self._scroll_queued:\n self._scroll_queued = True\n if self._scroll_offset == 0:\n interval = self.scroll_delay\n else:\n interval = self.scroll_interval\n self._scroll_timer = self.timeout_add(interval, self.do_scroll)\n\n def do_scroll(self):\n # Allow the next scroll tick to be queued\n self._scroll_queued = False\n\n # If we're still scrolling, adjust the next offset\n if self._is_scrolling:\n self._scroll_offset += self.scroll_step\n\n # Check whether we need to stop scrolling when:\n # - we've scrolled all the text off the widget (scroll_clear = True)\n # - the final pixel is visible (scroll_clear = False)\n if (self.scroll_clear and self._scroll_offset > self.layout.width) or (\n not self.scroll_clear\n and (self.layout.width - self._scroll_offset)\n < (self._scroll_width - 2 * self.actual_padding)\n ):\n self._is_scrolling = False\n\n # We've reached the end of the scroll so what next?\n if not self._is_scrolling:\n if self.scroll_repeat:\n # Pause and 
restart scrolling\n self._scroll_timer = self.timeout_add(self.scroll_delay, self.reset_scroll)\n elif self.scroll_hide:\n # Clear the text\n self._scroll_timer = self.timeout_add(self.scroll_delay, self.hide_scroll)\n # If neither of these options then the text is no longer updated.\n\n self.draw()\n\n def reset_scroll(self):\n self._scroll_offset = 0\n self._is_scrolling = True\n self._scroll_queued = False\n if self._scroll_timer:\n self._scroll_timer.cancel()\n self.draw()\n\n def hide_scroll(self):\n self.update(\"\")\n\n @expose_command()\n def set_font(self, font=UNSPECIFIED, fontsize=UNSPECIFIED, fontshadow=UNSPECIFIED):\n \"\"\"\n Change the font used by this widget. If font is None, the current\n font is used.\n \"\"\"\n if font is not UNSPECIFIED:\n self.font = font\n if fontsize is not UNSPECIFIED:\n self.fontsize = fontsize\n if fontshadow is not UNSPECIFIED:\n self.fontshadow = fontshadow\n self.bar.draw()\n\n @expose_command()\n def info(self):\n d = _Widget.info(self)\n d[\"foreground\"] = self.foreground\n d[\"text\"] = self.formatted_text\n return d\n\n def update(self, text):\n \"\"\"Update the widget text.\"\"\"\n # Don't try to update text in dead layouts\n # This is mainly required for ThreadPoolText based widgets as the\n # polling function cannot be cancelled and so may be called after the widget\n # is finalised.\n if not self.can_draw():\n return\n\n if self.text == text:\n return\n if text is None:\n text = \"\"\n\n old_width = self.layout.width\n self.text = text\n\n # If our width hasn't changed, we just draw ourselves. Otherwise,\n # we draw the whole bar.\n if self.layout.width == old_width:\n self.draw()\n else:\n self.bar.draw()\n\n\nclass InLoopPollText(_TextBox):\n \"\"\"A common interface for polling some 'fast' information, munging it, and\n rendering the result in a text box. You probably want to use\n ThreadPoolText instead.\n\n ('fast' here means that this runs /in/ the event loop, so don't block! If\n you want to run something nontrivial, use ThreadedPollWidget.)\"\"\"\n\n defaults = [\n (\n \"update_interval\",\n 600,\n \"Update interval in seconds, if none, the widget updates only once.\",\n ),\n ] # type: list[tuple[str, Any, str]]\n\n def __init__(self, default_text=\"N/A\", **config):\n _TextBox.__init__(self, default_text, **config)\n self.add_defaults(InLoopPollText.defaults)\n\n def timer_setup(self):\n update_interval = self.tick()\n # If self.update_interval is defined and .tick() returns None, re-call\n # after self.update_interval\n if update_interval is None and self.update_interval is not None:\n self.timeout_add(self.update_interval, self.timer_setup)\n # We can change the update interval by returning something from .tick()\n elif update_interval:\n self.timeout_add(update_interval, self.timer_setup)\n # If update_interval is False, we won't re-call\n\n def _configure(self, qtile, bar):\n should_tick = self.configured\n _TextBox._configure(self, qtile, bar)\n\n # Update when we are being re-configured.\n if should_tick:\n self.tick()\n\n def button_press(self, x, y, button):\n self.tick()\n _TextBox.button_press(self, x, y, button)\n\n def poll(self):\n return \"N/A\"\n\n def tick(self):\n text = self.poll()\n self.update(text)\n\n\nclass ThreadPoolText(_TextBox):\n \"\"\"A common interface for wrapping blocking events which when triggered\n will update a textbox.\n\n The poll method is intended to wrap a blocking function which may take\n quite a while to return anything. 
It will be executed as a future and\n should return updated text when completed. It may also return None to\n disable any further updates.\n\n param: text - Initial text to display.\n \"\"\"\n\n defaults = [\n (\n \"update_interval\",\n 600,\n \"Update interval in seconds, if none, the widget updates only once.\",\n ),\n ] # type: list[tuple[str, Any, str]]\n\n def __init__(self, text, **config):\n super().__init__(text, **config)\n self.add_defaults(ThreadPoolText.defaults)\n\n def timer_setup(self):\n def on_done(future):\n try:\n result = future.result()\n except Exception:\n result = None\n logger.exception(\"poll() raised exceptions, not rescheduling\")\n\n if result is not None:\n try:\n self.update(result)\n\n if self.update_interval is not None:\n self.timeout_add(self.update_interval, self.timer_setup)\n\n except Exception:\n logger.exception(\"Failed to reschedule timer for %s.\", self.name)\n else:\n logger.warning(\"%s's poll() returned None, not rescheduling\", self.name)\n\n self.future = self.qtile.run_in_executor(self.poll)\n self.future.add_done_callback(on_done)\n\n def poll(self):\n pass\n\n @expose_command()\n def force_update(self):\n \"\"\"Immediately poll the widget. Existing timers are unaffected.\"\"\"\n self.update(self.poll())\n\n\n# these two classes below look SUSPICIOUSLY similar\n\n\nclass PaddingMixin(configurable.Configurable):\n \"\"\"Mixin that provides padding(_x|_y|)\n\n To use it, subclass and add this to __init__:\n\n self.add_defaults(base.PaddingMixin.defaults)\n \"\"\"\n\n defaults = [\n (\"padding\", 3, \"Padding inside the box\"),\n (\"padding_x\", None, \"X Padding. Overrides 'padding' if set\"),\n (\"padding_y\", None, \"Y Padding. Overrides 'padding' if set\"),\n ] # type: list[tuple[str, Any, str]]\n\n padding_x = configurable.ExtraFallback(\"padding_x\", \"padding\")\n padding_y = configurable.ExtraFallback(\"padding_y\", \"padding\")\n\n\nclass MarginMixin(configurable.Configurable):\n \"\"\"Mixin that provides margin(_x|_y|)\n\n To use it, subclass and add this to __init__:\n\n self.add_defaults(base.MarginMixin.defaults)\n \"\"\"\n\n defaults = [\n (\"margin\", 3, \"Margin inside the box\"),\n (\"margin_x\", None, \"X Margin. Overrides 'margin' if set\"),\n (\"margin_y\", None, \"Y Margin. Overrides 'margin' if set\"),\n ] # type: list[tuple[str, Any, str]]\n\n margin_x = configurable.ExtraFallback(\"margin_x\", \"margin\")\n margin_y = configurable.ExtraFallback(\"margin_y\", \"margin\")\n\n\nclass Mirror(_Widget):\n \"\"\"\n A widget for showing the same widget content in more than one place, for\n instance, on bars across multiple screens.\n\n You don't need to use it directly; instead, just instantiate your widget\n once and hand it in to multiple bars. For instance::\n\n cpu = widget.CPUGraph()\n clock = widget.Clock()\n\n screens = [\n Screen(top=bar.Bar([widget.GroupBox(), cpu, clock])),\n Screen(top=bar.Bar([widget.GroupBox(), cpu, clock])),\n ]\n\n Widgets can be passed to more than one bar, so that there don't need to be\n any duplicates executing the same code all the time, and they'll always be\n visually identical.\n\n This works for all widgets that use `drawers` (and nothing else) to display\n their contents. 
Currently, this is all widgets except for `Systray`.\n \"\"\"\n\n def __init__(self, reflection, **config):\n _Widget.__init__(self, reflection.length, **config)\n self.reflects = reflection\n self._length = 0\n self.length_type = self.reflects.length_type\n\n def _configure(self, qtile, bar):\n _Widget._configure(self, qtile, bar)\n self.reflects.add_mirror(self)\n # We need to fill the background once before `draw` is called so, if\n # there's no reflection, the mirror matches its parent bar.\n self.drawer.clear(self.background or self.bar.background)\n\n def calculate_length(self):\n return self.reflects.calculate_length()\n\n @property\n def length(self):\n if self.length_type != bar.STRETCH:\n return self.reflects.length\n return self._length\n\n @length.setter\n def length(self, value):\n self._length = value\n\n def draw(self):\n self.drawer.clear(self.reflects.background or self.bar.background)\n self.reflects.drawer.paint_to(self.drawer)\n self.drawer.draw(offsetx=self.offset, offsety=self.offsety, width=self.width)\n\n def button_press(self, x, y, button):\n self.reflects.button_press(x, y, button)\n\n def mouse_enter(self, x, y):\n self.reflects.mouse_enter(x, y)\n\n def mouse_leave(self, x, y):\n self.reflects.mouse_leave(x, y)\n\n def finalize(self):\n self.reflects.remove_mirror(self)\n _Widget.finalize(self)\n", "path": "libqtile/widget/base.py"}], "after_files": [{"content": "# Copyright (c) 2008-2010 Aldo Cortesi\n# Copyright (c) 2011 Florian Mounier\n# Copyright (c) 2011 Kenji_Takahashi\n# Copyright (c) 2011 Paul Colomiets\n# Copyright (c) 2012 roger\n# Copyright (c) 2012 Craig Barnes\n# Copyright (c) 2012-2015 Tycho Andersen\n# Copyright (c) 2013 dequis\n# Copyright (c) 2013 David R. Andersen\n# Copyright (c) 2013 Tao Sauvage\n# Copyright (c) 2014-2015 Sean Vig\n# Copyright (c) 2014 Justin Bronder\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom __future__ import annotations\n\nimport asyncio\nimport copy\nimport math\nimport subprocess\nfrom typing import TYPE_CHECKING\n\nfrom libqtile import bar, configurable, confreader\nfrom libqtile.command import interface\nfrom libqtile.command.base import CommandError, CommandObject, expose_command\nfrom libqtile.lazy import LazyCall\nfrom libqtile.log_utils import logger\nfrom libqtile.utils import create_task\n\nif TYPE_CHECKING:\n from typing import Any\n\n from libqtile.command.base import ItemT\n\n# Each widget class must define which bar orientation(s) it supports by setting\n# these bits in an 'orientations' class attribute. Simply having the attribute\n# inherited by superclasses is discouraged, because if a superclass that was\n# only supporting one orientation, adds support for the other, its subclasses\n# will have to be adapted too, in general. ORIENTATION_NONE is only added for\n# completeness' sake.\n# +------------------------+--------------------+--------------------+\n# | Widget bits | Horizontal bar | Vertical bar |\n# +========================+====================+====================+\n# | ORIENTATION_NONE | ConfigError raised | ConfigError raised |\n# +------------------------+--------------------+--------------------+\n# | ORIENTATION_HORIZONTAL | Widget displayed | ConfigError raised |\n# | | horizontally | |\n# +------------------------+--------------------+--------------------+\n# | ORIENTATION_VERTICAL | ConfigError raised | Widget displayed |\n# | | | vertically |\n# +------------------------+--------------------+--------------------+\n# | ORIENTATION_BOTH | Widget displayed | Widget displayed |\n# | | horizontally | vertically |\n# +------------------------+--------------------+--------------------+\n\n\nclass _Orientations(int):\n def __new__(cls, value, doc):\n return super().__new__(cls, value)\n\n def __init__(self, value, doc):\n self.doc = doc\n\n def __str__(self):\n return self.doc\n\n def __repr__(self):\n return self.doc\n\n\nORIENTATION_NONE = _Orientations(0, \"none\")\nORIENTATION_HORIZONTAL = _Orientations(1, \"horizontal only\")\nORIENTATION_VERTICAL = _Orientations(2, \"vertical only\")\nORIENTATION_BOTH = _Orientations(3, \"horizontal and vertical\")\n\n\nclass _Widget(CommandObject, configurable.Configurable):\n \"\"\"Base Widget class\n\n If length is set to the special value `bar.STRETCH`, the bar itself will\n set the length to the maximum remaining space, after all other widgets have\n been configured.\n\n In horizontal bars, 'length' corresponds to the width of the widget; in\n vertical bars, it corresponds to the widget's height.\n\n The offsetx and offsety attributes are set by the Bar after all widgets\n have been configured.\n\n Callback functions can be assigned to button presses by passing a dict to the\n 'callbacks' kwarg. No arguments are passed to the function so, if\n you need access to the qtile object, it needs to be imported into your code.\n\n ``lazy`` functions can also be passed as callback functions and can be used in\n the same way as keybindings.\n\n For example:\n\n .. 
code-block:: python\n\n from libqtile import qtile\n\n def open_calendar():\n qtile.spawn('gsimplecal next_month')\n\n clock = widget.Clock(\n mouse_callbacks={\n 'Button1': open_calendar,\n 'Button3': lazy.spawn('gsimplecal prev_month')\n }\n )\n\n When the clock widget receives a click with button 1, the ``open_calendar`` function\n will be executed.\n \"\"\"\n\n orientations = ORIENTATION_BOTH\n\n # Default (empty set) is for all backends to be supported. Widgets can override this\n # to explicitly confirm which backends are supported\n supported_backends: set[str] = set()\n\n offsetx: int = 0\n offsety: int = 0\n defaults: list[tuple[str, Any, str]] = [\n (\"background\", None, \"Widget background color\"),\n (\n \"mouse_callbacks\",\n {},\n \"Dict of mouse button press callback functions. Accepts functions and ``lazy`` calls.\",\n ),\n ]\n\n def __init__(self, length, **config):\n \"\"\"\n length: bar.STRETCH, bar.CALCULATED, or a specified length.\n \"\"\"\n CommandObject.__init__(self)\n self.name = self.__class__.__name__.lower()\n if \"name\" in config:\n self.name = config[\"name\"]\n\n configurable.Configurable.__init__(self, **config)\n self.add_defaults(_Widget.defaults)\n\n if length in (bar.CALCULATED, bar.STRETCH):\n self.length_type = length\n self.length = 0\n elif isinstance(length, int):\n self.length_type = bar.STATIC\n self.length = length\n else:\n raise confreader.ConfigError(\"Widget width must be an int\")\n\n self.configured = False\n self._futures: list[asyncio.Handle] = []\n self._mirrors: set[_Widget] = set()\n self.finalized = False\n\n @property\n def length(self):\n if self.length_type == bar.CALCULATED:\n return int(self.calculate_length())\n return self._length\n\n @length.setter\n def length(self, value):\n self._length = value\n\n @property\n def width(self):\n if self.bar.horizontal:\n return self.length\n return self.bar.width\n\n @property\n def height(self):\n if self.bar.horizontal:\n return self.bar.height\n return self.length\n\n @property\n def offset(self):\n if self.bar.horizontal:\n return self.offsetx\n return self.offsety\n\n def _test_orientation_compatibility(self, horizontal):\n if horizontal:\n if not self.orientations & ORIENTATION_HORIZONTAL:\n raise confreader.ConfigError(\n self.__class__.__name__\n + \" is not compatible with the orientation of the bar.\"\n )\n elif not self.orientations & ORIENTATION_VERTICAL:\n raise confreader.ConfigError(\n self.__class__.__name__ + \" is not compatible with the orientation of the bar.\"\n )\n\n def timer_setup(self):\n \"\"\"This is called exactly once, after the widget has been configured\n and timers are available to be set up.\"\"\"\n pass\n\n def _configure(self, qtile, bar):\n self._test_orientation_compatibility(bar.horizontal)\n\n self.qtile = qtile\n self.bar = bar\n self.drawer = bar.window.create_drawer(self.bar.width, self.bar.height)\n\n # Clear this flag as widget may be restarted (e.g. if screen removed and re-added)\n self.finalized = False\n\n # Timers are added to futures list so they can be cancelled if the `finalize` method is\n # called before the timers have fired.\n if not self.configured:\n timer = self.qtile.call_soon(self.timer_setup)\n async_timer = self.qtile.call_soon(asyncio.create_task, self._config_async())\n\n # Add these to our list of futures so they can be cancelled.\n self._futures.extend([timer, async_timer])\n\n async def _config_async(self):\n \"\"\"\n This is called once when the main eventloop has started. 
this\n happens after _configure has been run.\n\n Widgets that need to use asyncio coroutines after this point may\n wish to initialise the relevant code (e.g. connections to dbus\n using dbus_next) here.\n \"\"\"\n pass\n\n def finalize(self):\n for future in self._futures:\n future.cancel()\n if hasattr(self, \"layout\") and self.layout:\n self.layout.finalize()\n self.drawer.finalize()\n self.finalized = True\n\n # Reset configuration status so the widget can be reconfigured\n # e.g. when screen is re-added\n self.configured = False\n\n def clear(self):\n self.drawer.set_source_rgb(self.bar.background)\n self.drawer.fillrect(self.offsetx, self.offsety, self.width, self.height)\n\n @expose_command()\n def info(self):\n \"\"\"Info for this object.\"\"\"\n return dict(\n name=self.name,\n offset=self.offset,\n length=self.length,\n width=self.width,\n height=self.height,\n )\n\n def add_callbacks(self, defaults):\n \"\"\"Add default callbacks with a lower priority than user-specified callbacks.\"\"\"\n defaults.update(self.mouse_callbacks)\n self.mouse_callbacks = defaults\n\n def button_press(self, x, y, button):\n name = \"Button{0}\".format(button)\n if name in self.mouse_callbacks:\n cmd = self.mouse_callbacks[name]\n if isinstance(cmd, LazyCall):\n if cmd.check(self.qtile):\n status, val = self.qtile.server.call(\n (cmd.selectors, cmd.name, cmd.args, cmd.kwargs)\n )\n if status in (interface.ERROR, interface.EXCEPTION):\n logger.error(\"Mouse callback command error %s: %s\", cmd.name, val)\n else:\n cmd()\n\n def button_release(self, x, y, button):\n pass\n\n def get(self, q, name):\n \"\"\"\n Utility function for quick retrieval of a widget by name.\n \"\"\"\n w = q.widgets_map.get(name)\n if not w:\n raise CommandError(\"No such widget: %s\" % name)\n return w\n\n def _items(self, name: str) -> ItemT:\n if name == \"bar\":\n return True, []\n elif name == \"screen\":\n return True, []\n return None\n\n def _select(self, name, sel):\n if name == \"bar\":\n return self.bar\n elif name == \"screen\":\n return self.bar.screen\n\n def draw(self):\n \"\"\"\n Method that draws the widget. You may call this explicitly to\n redraw the widget, but only if the length of the widget hasn't\n changed. If it has, you must call bar.draw instead.\n \"\"\"\n raise NotImplementedError\n\n def calculate_length(self):\n \"\"\"\n Must be implemented if the widget can take CALCULATED for length.\n It must return the width of the widget if it's installed in a\n horizontal bar; it must return the height of the widget if it's\n installed in a vertical bar. 
Usually you will test the orientation\n of the bar with 'self.bar.horizontal'.\n \"\"\"\n raise NotImplementedError\n\n def timeout_add(self, seconds, method, method_args=()):\n \"\"\"\n This method calls ``.call_later`` with given arguments.\n \"\"\"\n # Don't add timers for finalised widgets\n if self.finalized:\n return\n\n future = self.qtile.call_later(seconds, self._wrapper, method, *method_args)\n\n self._futures.append(future)\n return future\n\n def call_process(self, command, **kwargs):\n \"\"\"\n This method uses `subprocess.check_output` to run the given command\n and return the string from stdout, which is decoded when using\n Python 3.\n \"\"\"\n return subprocess.check_output(command, **kwargs, encoding=\"utf-8\")\n\n def _remove_dead_timers(self):\n \"\"\"Remove completed and cancelled timers from the list.\"\"\"\n\n def is_ready(timer):\n return timer in self.qtile._eventloop._ready\n\n self._futures = [\n timer\n for timer in self._futures\n # Filter out certain handles...\n if not (\n timer.cancelled()\n # Once a scheduled timer is ready to be run its _scheduled flag is set to False\n # and it's added to the loop's `_ready` queue\n or (\n isinstance(timer, asyncio.TimerHandle)\n and not timer._scheduled\n and not is_ready(timer)\n )\n # Callbacks scheduled via `call_soon` are put into the loop's `_ready` queue\n # and are removed once they've been executed\n or (isinstance(timer, asyncio.Handle) and not is_ready(timer))\n )\n ]\n\n def _wrapper(self, method, *method_args):\n self._remove_dead_timers()\n try:\n if asyncio.iscoroutinefunction(method):\n create_task(method(*method_args))\n elif asyncio.iscoroutine(method):\n create_task(method)\n else:\n method(*method_args)\n except: # noqa: E722\n logger.exception(\"got exception from widget timer\")\n\n def create_mirror(self):\n return Mirror(self, background=self.background)\n\n def clone(self):\n return copy.copy(self)\n\n def mouse_enter(self, x, y):\n pass\n\n def mouse_leave(self, x, y):\n pass\n\n def _draw_with_mirrors(self) -> None:\n self._old_draw()\n for mirror in self._mirrors:\n if not mirror.configured:\n continue\n\n # If the widget and mirror are on the same bar then we could have an\n # infinite loop when we call bar.draw(). mirror.draw() will trigger a resize\n # if it's the wrong size.\n if mirror.length_type == bar.CALCULATED and mirror.bar is not self.bar:\n mirror.bar.draw()\n else:\n mirror.draw()\n\n def add_mirror(self, widget: _Widget):\n if not self._mirrors:\n self._old_draw = self.draw\n self.draw = self._draw_with_mirrors # type: ignore\n\n self._mirrors.add(widget)\n if not self.drawer.has_mirrors:\n self.drawer.has_mirrors = True\n\n def remove_mirror(self, widget: _Widget):\n try:\n self._mirrors.remove(widget)\n except KeyError:\n pass\n\n if not self._mirrors:\n self.drawer.has_mirrors = False\n\n if hasattr(self, \"_old_draw\"):\n # Deletes the reference to draw and falls back to the original\n del self.draw\n del self._old_draw\n\n\nUNSPECIFIED = bar.Obj(\"UNSPECIFIED\")\n\n\nclass _TextBox(_Widget):\n \"\"\"\n Base class for widgets that are just boxes containing text.\n \"\"\"\n\n orientations = ORIENTATION_BOTH\n defaults = [\n (\"font\", \"sans\", \"Default font\"),\n (\"fontsize\", None, \"Font size. Calculated if None.\"),\n (\"padding\", None, \"Padding. 
Calculated if None.\"),\n (\"foreground\", \"ffffff\", \"Foreground colour\"),\n (\"fontshadow\", None, \"font shadow color, default is None(no shadow)\"),\n (\"markup\", True, \"Whether or not to use pango markup\"),\n (\n \"fmt\",\n \"{}\",\n \"Format to apply to the string returned by the widget. Main purpose: applying markup. \"\n \"For a widget that returns ``foo``, using ``fmt='<i>{}</i>'`` would give you ``<i>foo</i>``. \"\n \"To control what the widget outputs in the first place, use the ``format`` paramater of the widget (if it has one).\",\n ),\n (\"max_chars\", 0, \"Maximum number of characters to display in widget.\"),\n (\n \"scroll\",\n False,\n \"Whether text should be scrolled. When True, you must set the widget's ``width``.\",\n ),\n (\n \"scroll_repeat\",\n True,\n \"Whether text should restart scrolling once the text has ended\",\n ),\n (\n \"scroll_delay\",\n 2,\n \"Number of seconds to pause before starting scrolling and restarting/clearing text at end\",\n ),\n (\"scroll_step\", 1, \"Number of pixels to scroll with each step\"),\n (\"scroll_interval\", 0.1, \"Time in seconds before next scrolling step\"),\n (\n \"scroll_clear\",\n False,\n \"Whether text should scroll completely away (True) or stop when the end of the text is shown (False)\",\n ),\n (\"scroll_hide\", False, \"Whether the widget should hide when scrolling has finished\"),\n (\n \"scroll_fixed_width\",\n False,\n \"When ``scroll=True`` the ``width`` parameter is a maximum width and, when text is shorter than this, the widget will resize. \"\n \"Setting ``scroll_fixed_width=True`` will force the widget to have a fixed width, regardless of the size of the text.\",\n ),\n ] # type: list[tuple[str, Any, str]]\n\n def __init__(self, text=\" \", width=bar.CALCULATED, **config):\n self.layout = None\n _Widget.__init__(self, width, **config)\n self.add_defaults(_TextBox.defaults)\n self.text = text\n self._is_scrolling = False\n self._should_scroll = False\n self._scroll_offset = 0\n self._scroll_queued = False\n self._scroll_timer = None\n self._scroll_width = width\n\n @property\n def text(self):\n return self._text\n\n @text.setter\n def text(self, value):\n if len(value) > self.max_chars > 0:\n value = value[: self.max_chars] + \"\u2026\"\n self._text = value\n if self.layout:\n self.layout.text = self.formatted_text\n if self.scroll:\n self.check_width()\n self.reset_scroll()\n\n @property\n def formatted_text(self):\n return self.fmt.format(self._text)\n\n @property\n def foreground(self):\n return self._foreground\n\n @foreground.setter\n def foreground(self, fg):\n self._foreground = fg\n if self.layout:\n self.layout.colour = fg\n\n @property\n def font(self):\n return self._font\n\n @font.setter\n def font(self, value):\n self._font = value\n if self.layout:\n self.layout.font = value\n\n @property\n def fontshadow(self):\n return self._fontshadow\n\n @fontshadow.setter\n def fontshadow(self, value):\n self._fontshadow = value\n if self.layout:\n self.layout.font_shadow = value\n\n @property\n def actual_padding(self):\n if self.padding is None:\n return self.fontsize / 2\n else:\n return self.padding\n\n def _configure(self, qtile, bar):\n _Widget._configure(self, qtile, bar)\n if self.fontsize is None:\n self.fontsize = self.bar.height - self.bar.height / 5\n self.layout = self.drawer.textlayout(\n self.formatted_text,\n self.foreground,\n self.font,\n self.fontsize,\n self.fontshadow,\n markup=self.markup,\n )\n if not isinstance(self._scroll_width, int) and self.scroll:\n logger.warning(\"%s: You 
must specify a width when enabling scrolling.\", self.name)\n self.scroll = False\n\n if self.scroll:\n self.check_width()\n\n def check_width(self):\n \"\"\"\n Check whether the widget needs to have calculated or fixed width\n and whether the text should be scrolled.\n \"\"\"\n if self.layout.width > self._scroll_width:\n self.length_type = bar.STATIC\n self.length = self._scroll_width\n self._is_scrolling = True\n self._should_scroll = True\n else:\n if self.scroll_fixed_width:\n self.length_type = bar.STATIC\n self.length = self._scroll_width\n else:\n self.length_type = bar.CALCULATED\n self._should_scroll = False\n\n def calculate_length(self):\n if self.text:\n if self.bar.horizontal:\n return min(self.layout.width, self.bar.width) + self.actual_padding * 2\n else:\n return min(self.layout.width, self.bar.height) + self.actual_padding * 2\n else:\n return 0\n\n def can_draw(self):\n can_draw = (\n self.layout is not None and not self.layout.finalized() and self.offsetx is not None\n ) # if the bar hasn't placed us yet\n return can_draw\n\n def draw(self):\n if not self.can_draw():\n return\n self.drawer.clear(self.background or self.bar.background)\n\n # size = self.bar.height if self.bar.horizontal else self.bar.width\n self.drawer.ctx.save()\n\n if not self.bar.horizontal:\n # Left bar reads bottom to top\n if self.bar.screen.left is self.bar:\n self.drawer.ctx.rotate(-90 * math.pi / 180.0)\n self.drawer.ctx.translate(-self.length, 0)\n\n # Right bar is top to bottom\n else:\n self.drawer.ctx.translate(self.bar.width, 0)\n self.drawer.ctx.rotate(90 * math.pi / 180.0)\n\n # If we're scrolling, we clip the context to the scroll width less the padding\n # Move the text layout position (and we only see the clipped portion)\n if self._should_scroll:\n self.drawer.ctx.rectangle(\n self.actual_padding,\n 0,\n self._scroll_width - 2 * self.actual_padding,\n self.bar.size,\n )\n self.drawer.ctx.clip()\n\n size = self.bar.height if self.bar.horizontal else self.bar.width\n\n self.layout.draw(\n (self.actual_padding or 0) - self._scroll_offset,\n int(size / 2.0 - self.layout.height / 2.0) + 1,\n )\n self.drawer.ctx.restore()\n\n self.drawer.draw(\n offsetx=self.offsetx, offsety=self.offsety, width=self.width, height=self.height\n )\n\n # We only want to scroll if:\n # - User has asked us to scroll and the scroll width is smaller than the layout (should_scroll=True)\n # - We are still scrolling (is_scrolling=True)\n # - We haven't already queued the next scroll (scroll_queued=False)\n if self._should_scroll and self._is_scrolling and not self._scroll_queued:\n self._scroll_queued = True\n if self._scroll_offset == 0:\n interval = self.scroll_delay\n else:\n interval = self.scroll_interval\n self._scroll_timer = self.timeout_add(interval, self.do_scroll)\n\n def do_scroll(self):\n # Allow the next scroll tick to be queued\n self._scroll_queued = False\n\n # If we're still scrolling, adjust the next offset\n if self._is_scrolling:\n self._scroll_offset += self.scroll_step\n\n # Check whether we need to stop scrolling when:\n # - we've scrolled all the text off the widget (scroll_clear = True)\n # - the final pixel is visible (scroll_clear = False)\n if (self.scroll_clear and self._scroll_offset > self.layout.width) or (\n not self.scroll_clear\n and (self.layout.width - self._scroll_offset)\n < (self._scroll_width - 2 * self.actual_padding)\n ):\n self._is_scrolling = False\n\n # We've reached the end of the scroll so what next?\n if not self._is_scrolling:\n if self.scroll_repeat:\n # Pause and 
restart scrolling\n self._scroll_timer = self.timeout_add(self.scroll_delay, self.reset_scroll)\n elif self.scroll_hide:\n # Clear the text\n self._scroll_timer = self.timeout_add(self.scroll_delay, self.hide_scroll)\n # If neither of these options then the text is no longer updated.\n\n self.draw()\n\n def reset_scroll(self):\n self._scroll_offset = 0\n self._is_scrolling = True\n self._scroll_queued = False\n if self._scroll_timer:\n self._scroll_timer.cancel()\n self.draw()\n\n def hide_scroll(self):\n self.update(\"\")\n\n @expose_command()\n def set_font(self, font=UNSPECIFIED, fontsize=UNSPECIFIED, fontshadow=UNSPECIFIED):\n \"\"\"\n Change the font used by this widget. If font is None, the current\n font is used.\n \"\"\"\n if font is not UNSPECIFIED:\n self.font = font\n if fontsize is not UNSPECIFIED:\n self.fontsize = fontsize\n if fontshadow is not UNSPECIFIED:\n self.fontshadow = fontshadow\n self.bar.draw()\n\n @expose_command()\n def info(self):\n d = _Widget.info(self)\n d[\"foreground\"] = self.foreground\n d[\"text\"] = self.formatted_text\n return d\n\n def update(self, text):\n \"\"\"Update the widget text.\"\"\"\n # Don't try to update text in dead layouts\n # This is mainly required for ThreadPoolText based widgets as the\n # polling function cannot be cancelled and so may be called after the widget\n # is finalised.\n if not self.can_draw():\n return\n\n if self.text == text:\n return\n if text is None:\n text = \"\"\n\n old_width = self.layout.width\n self.text = text\n\n # If our width hasn't changed, we just draw ourselves. Otherwise,\n # we draw the whole bar.\n if self.layout.width == old_width:\n self.draw()\n else:\n self.bar.draw()\n\n\nclass InLoopPollText(_TextBox):\n \"\"\"A common interface for polling some 'fast' information, munging it, and\n rendering the result in a text box. You probably want to use\n ThreadPoolText instead.\n\n ('fast' here means that this runs /in/ the event loop, so don't block! If\n you want to run something nontrivial, use ThreadedPollWidget.)\"\"\"\n\n defaults = [\n (\n \"update_interval\",\n 600,\n \"Update interval in seconds, if none, the widget updates only once.\",\n ),\n ] # type: list[tuple[str, Any, str]]\n\n def __init__(self, default_text=\"N/A\", **config):\n _TextBox.__init__(self, default_text, **config)\n self.add_defaults(InLoopPollText.defaults)\n\n def timer_setup(self):\n update_interval = self.tick()\n # If self.update_interval is defined and .tick() returns None, re-call\n # after self.update_interval\n if update_interval is None and self.update_interval is not None:\n self.timeout_add(self.update_interval, self.timer_setup)\n # We can change the update interval by returning something from .tick()\n elif update_interval:\n self.timeout_add(update_interval, self.timer_setup)\n # If update_interval is False, we won't re-call\n\n def _configure(self, qtile, bar):\n should_tick = self.configured\n _TextBox._configure(self, qtile, bar)\n\n # Update when we are being re-configured.\n if should_tick:\n self.tick()\n\n def button_press(self, x, y, button):\n self.tick()\n _TextBox.button_press(self, x, y, button)\n\n def poll(self):\n return \"N/A\"\n\n def tick(self):\n text = self.poll()\n self.update(text)\n\n\nclass ThreadPoolText(_TextBox):\n \"\"\"A common interface for wrapping blocking events which when triggered\n will update a textbox.\n\n The poll method is intended to wrap a blocking function which may take\n quite a while to return anything. 
It will be executed as a future and\n should return updated text when completed. It may also return None to\n disable any further updates.\n\n param: text - Initial text to display.\n \"\"\"\n\n defaults = [\n (\n \"update_interval\",\n 600,\n \"Update interval in seconds, if none, the widget updates only once.\",\n ),\n ] # type: list[tuple[str, Any, str]]\n\n def __init__(self, text, **config):\n super().__init__(text, **config)\n self.add_defaults(ThreadPoolText.defaults)\n\n def timer_setup(self):\n def on_done(future):\n try:\n result = future.result()\n except Exception:\n result = None\n logger.exception(\"poll() raised exceptions, not rescheduling\")\n\n if result is not None:\n try:\n self.update(result)\n\n if self.update_interval is not None:\n self.timeout_add(self.update_interval, self.timer_setup)\n\n except Exception:\n logger.exception(\"Failed to reschedule timer for %s.\", self.name)\n else:\n logger.warning(\"%s's poll() returned None, not rescheduling\", self.name)\n\n self.future = self.qtile.run_in_executor(self.poll)\n self.future.add_done_callback(on_done)\n\n def poll(self):\n pass\n\n @expose_command()\n def force_update(self):\n \"\"\"Immediately poll the widget. Existing timers are unaffected.\"\"\"\n self.update(self.poll())\n\n\n# these two classes below look SUSPICIOUSLY similar\n\n\nclass PaddingMixin(configurable.Configurable):\n \"\"\"Mixin that provides padding(_x|_y|)\n\n To use it, subclass and add this to __init__:\n\n self.add_defaults(base.PaddingMixin.defaults)\n \"\"\"\n\n defaults = [\n (\"padding\", 3, \"Padding inside the box\"),\n (\"padding_x\", None, \"X Padding. Overrides 'padding' if set\"),\n (\"padding_y\", None, \"Y Padding. Overrides 'padding' if set\"),\n ] # type: list[tuple[str, Any, str]]\n\n padding_x = configurable.ExtraFallback(\"padding_x\", \"padding\")\n padding_y = configurable.ExtraFallback(\"padding_y\", \"padding\")\n\n\nclass MarginMixin(configurable.Configurable):\n \"\"\"Mixin that provides margin(_x|_y|)\n\n To use it, subclass and add this to __init__:\n\n self.add_defaults(base.MarginMixin.defaults)\n \"\"\"\n\n defaults = [\n (\"margin\", 3, \"Margin inside the box\"),\n (\"margin_x\", None, \"X Margin. Overrides 'margin' if set\"),\n (\"margin_y\", None, \"Y Margin. Overrides 'margin' if set\"),\n ] # type: list[tuple[str, Any, str]]\n\n margin_x = configurable.ExtraFallback(\"margin_x\", \"margin\")\n margin_y = configurable.ExtraFallback(\"margin_y\", \"margin\")\n\n\nclass Mirror(_Widget):\n \"\"\"\n A widget for showing the same widget content in more than one place, for\n instance, on bars across multiple screens.\n\n You don't need to use it directly; instead, just instantiate your widget\n once and hand it in to multiple bars. For instance::\n\n cpu = widget.CPUGraph()\n clock = widget.Clock()\n\n screens = [\n Screen(top=bar.Bar([widget.GroupBox(), cpu, clock])),\n Screen(top=bar.Bar([widget.GroupBox(), cpu, clock])),\n ]\n\n Widgets can be passed to more than one bar, so that there don't need to be\n any duplicates executing the same code all the time, and they'll always be\n visually identical.\n\n This works for all widgets that use `drawers` (and nothing else) to display\n their contents. 
Currently, this is all widgets except for `Systray`.\n \"\"\"\n\n def __init__(self, reflection, **config):\n _Widget.__init__(self, reflection.length, **config)\n self.reflects = reflection\n self._length = 0\n self.length_type = self.reflects.length_type\n\n def _configure(self, qtile, bar):\n _Widget._configure(self, qtile, bar)\n self.reflects.add_mirror(self)\n # We need to fill the background once before `draw` is called so, if\n # there's no reflection, the mirror matches its parent bar.\n self.drawer.clear(self.background or self.bar.background)\n\n def calculate_length(self):\n return self.reflects.calculate_length()\n\n @property\n def length(self):\n if self.length_type != bar.STRETCH:\n return self.reflects.length\n return self._length\n\n @length.setter\n def length(self, value):\n self._length = value\n\n def draw(self):\n self.drawer.clear_rect()\n self.reflects.drawer.paint_to(self.drawer)\n self.drawer.draw(offsetx=self.offset, offsety=self.offsety, width=self.width)\n\n def button_press(self, x, y, button):\n self.reflects.button_press(x, y, button)\n\n def mouse_enter(self, x, y):\n self.reflects.mouse_enter(x, y)\n\n def mouse_leave(self, x, y):\n self.reflects.mouse_leave(x, y)\n\n def finalize(self):\n self.reflects.remove_mirror(self)\n _Widget.finalize(self)\n", "path": "libqtile/widget/base.py"}]} |
gh_patches_debug_1030 | rasdani/github-patches | git_diff | wagtail__wagtail-10871 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Display pages and snippets’ "First published at" as absolute value
### Is your proposal related to a problem?
In side panels, we display pages and snippets’ "First published at" datetime as a relative date. To know the absolute date(time), users have to go to the history page and look at the first entry.
### Describe the solution you'd like
It’d be simpler if hovering over the "First published at \[time ago\]" text would reveal the absolute datetime in a tooltip, similarly to what we do in listings, with a dotted underline as a cue.
Code: https://github.com/wagtail/wagtail/blob/e0b0d03cf025c025a19dbcb803b64f0c1fce212c/wagtail/admin/templates/wagtailadmin/shared/side_panels/status.html#L19
What we do on listings (different field) for reference:
```html
<button type="button" class="w-human-readable-date" data-tippy-content="Aug. 14, 2023, 10:47 a.m.">
<time class="w-human-readable-date__date" datetime="2023-08-14T10:47:04.536893+00:00">
1 hour ago
</time>
</button>
```
### Describe alternatives you've considered
We could also use a read-only FieldPanel to display `first_published_at`.
### Additional context
See for example https://static-wagtail-v5-1.netlify.app/admin/pages/69/edit/
> Form page
> First published 4 years ago
--- END ISSUE ---
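
For illustration only, here is a minimal sketch of how the side panel line could reuse the listing markup quoted in the issue. This is not the actual Wagtail implementation: the `object` template variable, the reuse of the `w-human-readable-date` classes, and the `data-tippy-content` attribute are assumptions carried over from the listing example above. The `naturaltime` filter is already registered by `wagtailadmin_tags` (see line 65 of the file below), so no extra tag library beyond `i18n` is assumed.

```html
{# Hypothetical adaptation of wagtailadmin/shared/side_panels/status.html — names are illustrative #}
{% load i18n wagtailadmin_tags %}
{% if object.first_published_at %}
  <button type="button" class="w-human-readable-date" data-tippy-content="{{ object.first_published_at }}">
    {% trans "First published" %}
    <time class="w-human-readable-date__date" datetime="{{ object.first_published_at|date:'c' }}">
      {{ object.first_published_at|naturaltime }}
    </time>
  </button>
{% endif %}
```

The point of the sketch is reusing the existing tooltip pattern (relative text with the absolute datetime on hover) rather than the relative-only string; the alternative mentioned in the issue — a read-only `FieldPanel` for `first_published_at` — would instead surface the absolute value as a form field.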
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/admin/templatetags/wagtailadmin_tags.py`
Content:
```
1 import json
2 from datetime import datetime
3 from urllib.parse import urljoin
4 from warnings import warn
5
6 from django import template
7 from django.conf import settings
8 from django.contrib.admin.utils import quote
9 from django.contrib.humanize.templatetags.humanize import intcomma, naturaltime
10 from django.contrib.messages.constants import DEFAULT_TAGS as MESSAGE_TAGS
11 from django.http.request import HttpHeaders
12 from django.middleware.csrf import get_token
13 from django.shortcuts import resolve_url as resolve_url_func
14 from django.template import Context
15 from django.template.base import token_kwargs
16 from django.template.defaultfilters import stringfilter
17 from django.templatetags.static import static
18 from django.urls import reverse
19 from django.urls.exceptions import NoReverseMatch
20 from django.utils import timezone
21 from django.utils.encoding import force_str
22 from django.utils.html import avoid_wrapping, json_script
23 from django.utils.http import urlencode
24 from django.utils.safestring import mark_safe
25 from django.utils.timesince import timesince
26 from django.utils.translation import gettext_lazy as _
27
28 from wagtail import hooks
29 from wagtail.admin.admin_url_finder import AdminURLFinder
30 from wagtail.admin.localization import get_js_translation_strings
31 from wagtail.admin.menu import admin_menu
32 from wagtail.admin.search import admin_search_areas
33 from wagtail.admin.staticfiles import versioned_static as versioned_static_func
34 from wagtail.admin.ui import sidebar
35 from wagtail.admin.utils import (
36 get_admin_base_url,
37 get_latest_str,
38 get_user_display_name,
39 get_valid_next_url_from_request,
40 )
41 from wagtail.admin.views.bulk_action.registry import bulk_action_registry
42 from wagtail.admin.widgets import ButtonWithDropdown, PageListingButton
43 from wagtail.coreutils import (
44 camelcase_to_underscore,
45 escape_script,
46 get_content_type_label,
47 get_locales_display_names,
48 )
49 from wagtail.coreutils import cautious_slugify as _cautious_slugify
50 from wagtail.models import (
51 CollectionViewRestriction,
52 Locale,
53 Page,
54 PageViewRestriction,
55 UserPagePermissionsProxy,
56 )
57 from wagtail.permission_policies.pages import PagePermissionPolicy
58 from wagtail.telepath import JSContext
59 from wagtail.users.utils import get_gravatar_url
60 from wagtail.utils.deprecation import RemovedInWagtail60Warning
61
62 register = template.Library()
63
64 register.filter("intcomma", intcomma)
65 register.filter("naturaltime", naturaltime)
66
67
68 @register.inclusion_tag("wagtailadmin/shared/breadcrumbs.html", takes_context=True)
69 def breadcrumbs(
70 context,
71 page,
72 url_name,
73 url_root_name=None,
74 include_self=True,
75 is_expanded=False,
76 page_perms=None,
77 querystring_value=None,
78 trailing_breadcrumb_title=None,
79 classname=None,
80 ):
81 user = context["request"].user
82
83 # find the closest common ancestor of the pages that this user has direct explore permission
84 # (i.e. add/edit/publish/lock) over; this will be the root of the breadcrumb
85 cca = PagePermissionPolicy().explorable_root_instance(user)
86 if not cca:
87 return {"pages": Page.objects.none()}
88
89 return {
90 "pages": page.get_ancestors(inclusive=include_self)
91 .descendant_of(cca, inclusive=True)
92 .specific(),
93 "current_page": page,
94 "is_expanded": is_expanded,
95 "page_perms": page_perms,
96 "querystring_value": querystring_value or "",
97 "trailing_breadcrumb_title": trailing_breadcrumb_title, # Only used in collapsible breadcrumb templates
98 "url_name": url_name,
99 "url_root_name": url_root_name,
100 "classname": classname,
101 }
102
103
104 @register.inclusion_tag("wagtailadmin/shared/search_other.html", takes_context=True)
105 def search_other(context, current=None):
106 request = context["request"]
107
108 return {
109 "options_html": admin_search_areas.render_html(request, current),
110 "request": request,
111 }
112
113
114 @register.filter("ellipsistrim")
115 def ellipsistrim(value, max_length):
116 if len(value) > max_length:
117 truncd_val = value[:max_length]
118 if not len(value) == (max_length + 1) and value[max_length + 1] != " ":
119 truncd_val = truncd_val[: truncd_val.rfind(" ")]
120 return truncd_val + "…"
121 return value
122
123
124 @register.filter
125 def fieldtype(bound_field):
126 try:
127 return camelcase_to_underscore(bound_field.field.__class__.__name__)
128 except AttributeError:
129 try:
130 return camelcase_to_underscore(bound_field.__class__.__name__)
131 except AttributeError:
132 return ""
133
134
135 @register.filter
136 def widgettype(bound_field):
137 try:
138 return camelcase_to_underscore(bound_field.field.widget.__class__.__name__)
139 except AttributeError:
140 try:
141 return camelcase_to_underscore(bound_field.widget.__class__.__name__)
142 except AttributeError:
143 return ""
144
145
146 def _get_user_page_permissions(context):
147 # RemovedInWagtail60Warning: Remove this function
148
149 # Create a UserPagePermissionsProxy object to represent the user's global permissions, and
150 # cache it in the context for the duration of the page request, if one does not exist already
151 if "user_page_permissions" not in context:
152 context["user_page_permissions"] = UserPagePermissionsProxy(
153 context["request"].user
154 )
155 return context["user_page_permissions"]
156
157
158 @register.simple_tag(takes_context=True)
159 def page_permissions(context, page):
160 """
161 Usage: {% page_permissions page as page_perms %}
162 Sets the variable 'page_perms' to a PagePermissionTester object that can be queried to find out
163 what actions the current logged-in user can perform on the given page.
164 """
165 # RemovedInWagtail60Warning: Keep the UserPagePermissionsProxy object in the context
166 # for backwards compatibility during the deprecation period, even though we don't use it
167 _get_user_page_permissions(context)
168 return page.permissions_for_user(context["request"].user)
169
170
171 @register.simple_tag
172 def is_page(obj):
173 """
174 Usage: {% is_page obj as is_page %}
175 Sets the variable 'is_page' to True if the given object is a Page instance,
176 False otherwise. Useful in shared templates that accept both Page and
177 non-Page objects (e.g. snippets with the optional features enabled).
178 """
179 return isinstance(obj, Page)
180
181
182 @register.simple_tag(takes_context=True)
183 def admin_edit_url(context, obj, user=None):
184 """
185 Usage: {% admin_edit_url obj user %}
186 Returns the URL of the edit view for the given object and user using the
187 registered AdminURLFinder for the object. The AdminURLFinder instance is
188 cached in the context for the duration of the page request.
189 The user argument is optional and defaults to request.user if request is
190 available in the context.
191 """
192 if not user and "request" in context:
193 user = context["request"].user
194 if "admin_url_finder" not in context:
195 context["admin_url_finder"] = AdminURLFinder(user)
196 return context["admin_url_finder"].get_edit_url(obj)
197
198
199 @register.simple_tag
200 def admin_url_name(obj, action):
201 """
202 Usage: {% admin_url_name obj action %}
203 Returns the URL name of the given action for the given object, e.g.
204 'wagtailadmin_pages:edit' for a Page object and 'edit' action.
205 Works with pages and snippets only.
206 """
207 if isinstance(obj, Page):
208 return f"wagtailadmin_pages:{action}"
209 return obj.snippet_viewset.get_url_name(action)
210
211
212 @register.simple_tag
213 def latest_str(obj):
214 """
215 Usage: {% latest_str obj %}
216 Returns the latest string representation of an object, making use of the
217 latest revision where available to reflect draft changes.
218 """
219 return get_latest_str(obj)
220
221
222 @register.simple_tag
223 def classnames(*classes):
224 """
225 Usage <div class="{% classnames "w-base" classname active|yesno:"w-base--active," any_other_var %}"></div>
226 Returns any args as a space-separated joined string for using in HTML class names.
227 """
228 return " ".join([classname.strip() for classname in classes if classname])
229
230
231 @register.simple_tag(takes_context=True)
232 def test_collection_is_public(context, collection):
233 """
234 Usage: {% test_collection_is_public collection as is_public %}
235 Sets 'is_public' to True iff there are no collection view restrictions in place
236 on this collection.
237 Caches the list of collection view restrictions in the context, to avoid repeated
238 DB queries on repeated calls.
239 """
240 if "all_collection_view_restrictions" not in context:
241 context[
242 "all_collection_view_restrictions"
243 ] = CollectionViewRestriction.objects.select_related("collection").values_list(
244 "collection__name", flat=True
245 )
246
247 is_private = collection.name in context["all_collection_view_restrictions"]
248
249 return not is_private
250
251
252 @register.simple_tag(takes_context=True)
253 def test_page_is_public(context, page):
254 """
255 Usage: {% test_page_is_public page as is_public %}
256 Sets 'is_public' to True iff there are no page view restrictions in place on
257 this page.
258 Caches the list of page view restrictions on the request, to avoid repeated
259 DB queries on repeated calls.
260 """
261 if not hasattr(context["request"], "all_page_view_restriction_paths"):
262 context[
263 "request"
264 ].all_page_view_restriction_paths = PageViewRestriction.objects.select_related(
265 "page"
266 ).values_list(
267 "page__path", flat=True
268 )
269
270 is_private = any(
271 page.path.startswith(restricted_path)
272 for restricted_path in context["request"].all_page_view_restriction_paths
273 )
274
275 return not is_private
276
277
278 @register.simple_tag
279 def hook_output(hook_name):
280 """
281 Example: {% hook_output 'insert_global_admin_css' %}
282 Whenever we have a hook whose functions take no parameters and return a string, this tag can be used
283 to output the concatenation of all of those return values onto the page.
284 Note that the output is not escaped - it is the hook function's responsibility to escape unsafe content.
285 """
286 snippets = [fn() for fn in hooks.get_hooks(hook_name)]
287
288 if hook_name == "insert_editor_css" and snippets:
289 warn(
290 "The `insert_editor_css` hook is deprecated - use `insert_global_admin_css` instead.",
291 category=RemovedInWagtail60Warning,
292 )
293
294 return mark_safe("".join(snippets))
295
296
297 @register.simple_tag
298 def base_url_setting(default=None):
299 return get_admin_base_url() or default
300
301
302 @register.simple_tag
303 def allow_unicode_slugs():
304 return getattr(settings, "WAGTAIL_ALLOW_UNICODE_SLUGS", True)
305
306
307 class EscapeScriptNode(template.Node):
308 TAG_NAME = "escapescript"
309
310 def __init__(self, nodelist):
311 super().__init__()
312 self.nodelist = nodelist
313
314 def render(self, context):
315 out = self.nodelist.render(context)
316 return escape_script(out)
317
318 @classmethod
319 def handle(cls, parser, token):
320 nodelist = parser.parse(("end" + EscapeScriptNode.TAG_NAME,))
321 parser.delete_first_token()
322 return cls(nodelist)
323
324
325 register.tag(EscapeScriptNode.TAG_NAME, EscapeScriptNode.handle)
326
327
328 # Helpers for Widget.render_with_errors, our extension to the Django widget API that allows widgets to
329 # take on the responsibility of rendering their own error messages
330 @register.filter
331 def render_with_errors(bound_field):
332 """
333 Usage: {{ field|render_with_errors }} as opposed to {{ field }}.
334 If the field (a BoundField instance) has errors on it, and the associated widget implements
335 a render_with_errors method, call that; otherwise, call the regular widget rendering mechanism.
336 """
337 widget = bound_field.field.widget
338 if bound_field.errors and hasattr(widget, "render_with_errors"):
339 return widget.render_with_errors(
340 bound_field.html_name,
341 bound_field.value(),
342 attrs={"id": bound_field.auto_id},
343 errors=bound_field.errors,
344 )
345 else:
346 attrs = {}
347 # If the widget doesn't have an aria-describedby attribute,
348 # and the field has help text, and the field has an id,
349 # add an aria-describedby attribute pointing to the help text.
350 # In this case, the corresponding help text element's id is set in the
351 # wagtailadmin/shared/field.html template.
352
353 # In Django 5.0 and up, this is done automatically, but we want to keep
354 # this code because we use a different convention for the help text id
355 # (we use -helptext suffix instead of Django's _helptext).
356 if (
357 not bound_field.field.widget.attrs.get("aria-describedby")
358 and bound_field.field.help_text
359 and bound_field.id_for_label
360 ):
361 attrs["aria-describedby"] = f"{bound_field.id_for_label}-helptext"
362 return bound_field.as_widget(attrs=attrs)
363
364
365 @register.filter
366 def has_unrendered_errors(bound_field):
367 """
368 Return true if this field has errors that were not accounted for by render_with_errors, because
369 the widget does not support the render_with_errors method
370 """
371 return bound_field.errors and not hasattr(
372 bound_field.field.widget, "render_with_errors"
373 )
374
375
376 @register.filter(is_safe=True)
377 @stringfilter
378 def cautious_slugify(value):
379 return _cautious_slugify(value)
380
381
382 @register.simple_tag(takes_context=True)
383 def querystring(context, **kwargs):
384 """
385 Print out the current querystring. Any keyword arguments to this template
386 tag will be added to the querystring before it is printed out.
387
388 <a href="/page/{% querystring key='value' %}">
389
390 Will result in something like:
391
392 <a href="/page/?foo=bar&key=value">
393 """
394 request = context["request"]
395 querydict = request.GET.copy()
396 # Can't do querydict.update(kwargs), because QueryDict.update() appends to
397 # the list of values, instead of replacing the values.
398 for key, value in kwargs.items():
399 if value is None:
400 # Remove the key if the value is None
401 querydict.pop(key, None)
402 else:
403 # Set the key otherwise
404 querydict[key] = str(value)
405
406 return "?" + querydict.urlencode()
407
408
409 @register.simple_tag(takes_context=True)
410 def pagination_querystring(context, page_number, page_key="p"):
411 """
412 Print out a querystring with an updated page number:
413
414 {% if page.has_next_page %}
415 <a href="{% pagination_link page.next_page_number %}">Next page</a>
416 {% endif %}
417 """
418 return querystring(context, **{page_key: page_number})
419
420
421 @register.inclusion_tag(
422 "wagtailadmin/pages/listing/_pagination.html", takes_context=True
423 )
424 def paginate(context, page, base_url="", page_key="p", classname=""):
425 """
426 Print pagination previous/next links, and the page count. Take the
427 following arguments:
428
429 page
430 The current page of results. This should be a Django pagination `Page`
431 instance
432
433 base_url
434 The base URL of the next/previous page, with no querystring.
435 This is optional, and defaults to the current page by just printing the
436 querystring for the next/previous page.
437
438 page_key
439 The name of the page variable in the query string. Defaults to 'p'.
440
441 classname
442 Extra classes to add to the next/previous links.
443 """
444 request = context["request"]
445 return {
446 "base_url": base_url,
447 "classname": classname,
448 "request": request,
449 "page": page,
450 "page_key": page_key,
451 "paginator": page.paginator,
452 }
453
454
455 @register.inclusion_tag("wagtailadmin/pages/listing/_buttons.html", takes_context=True)
456 def page_listing_buttons(context, page, page_perms):
457 next_url = context["request"].path
458 button_hooks = hooks.get_hooks("register_page_listing_buttons")
459
460 buttons = []
461 for hook in button_hooks:
462 buttons.extend(hook(page, page_perms, next_url))
463
464 buttons.sort()
465
466 for hook in hooks.get_hooks("construct_page_listing_buttons"):
467 hook(buttons, page, page_perms, context)
468
469 return {"page": page, "buttons": buttons}
470
471
472 @register.inclusion_tag(
473 "wagtailadmin/pages/listing/_page_header_buttons.html", takes_context=True
474 )
475 def page_header_buttons(context, page, page_perms):
476 next_url = context["request"].path
477 button_hooks = hooks.get_hooks("register_page_header_buttons")
478
479 buttons = []
480 for hook in button_hooks:
481 buttons.extend(hook(page, page_perms, next_url))
482
483 buttons.sort()
484 return {
485 "page": page,
486 "buttons": buttons,
487 "title": _("Actions"),
488 "icon_name": "dots-horizontal",
489 "button_classes": [
490 "w-p-0",
491 "w-w-12",
492 "w-h-slim-header",
493 "hover:w-scale-110",
494 "w-transition",
495 "w-outline-offset-inside",
496 "w-relative",
497 "w-z-30",
498 ],
499 }
500
501
502 @register.inclusion_tag("wagtailadmin/pages/listing/_buttons.html", takes_context=True)
503 def bulk_action_choices(context, app_label, model_name):
504 bulk_actions_list = list(
505 bulk_action_registry.get_bulk_actions_for_model(app_label, model_name)
506 )
507 bulk_actions_list.sort(key=lambda x: x.action_priority)
508
509 bulk_action_more_list = []
510 if len(bulk_actions_list) > 4:
511 bulk_action_more_list = bulk_actions_list[4:]
512 bulk_actions_list = bulk_actions_list[:4]
513
514 next_url = get_valid_next_url_from_request(context["request"])
515 if not next_url:
516 next_url = context["request"].path
517
518 bulk_action_buttons = [
519 PageListingButton(
520 action.display_name,
521 reverse(
522 "wagtail_bulk_action", args=[app_label, model_name, action.action_type]
523 )
524 + "?"
525 + urlencode({"next": next_url}),
526 attrs={"aria-label": action.aria_label},
527 priority=action.action_priority,
528 classes=action.classes | {"bulk-action-btn"},
529 )
530 for action in bulk_actions_list
531 ]
532
533 if bulk_action_more_list:
534 more_button = ButtonWithDropdown(
535 label=_("More"),
536 attrs={"title": _("More bulk actions")},
537 button_classes={"button", "button-secondary", "button-small"},
538 buttons_data=[
539 {
540 "label": action.display_name,
541 "url": reverse(
542 "wagtail_bulk_action",
543 args=[app_label, model_name, action.action_type],
544 )
545 + "?"
546 + urlencode({"next": next_url}),
547 "attrs": {"aria-label": action.aria_label},
548 "priority": action.action_priority,
549 "classes": {"bulk-action-btn"},
550 }
551 for action in bulk_action_more_list
552 ],
553 )
554 bulk_action_buttons.append(more_button)
555
556 return {"buttons": bulk_action_buttons}
557
558
559 @register.inclusion_tag("wagtailadmin/shared/avatar.html")
560 def avatar(user=None, classname=None, size=None, tooltip=None):
561 """
562 Displays a user avatar using the avatar template
563 Usage:
564 {% load wagtailadmin_tags %}
565 ...
566 {% avatar user=request.user size='small' tooltip='JaneDoe' %}
567 :param user: the user to get avatar information from (User)
568 :param size: default None (None|'small'|'large'|'square')
569 :param tooltip: Optional tooltip to display under the avatar (string)
570 :return: Rendered template snippet
571 """
572 return {"user": user, "classname": classname, "size": size, "tooltip": tooltip}
573
574
575 @register.simple_tag
576 def message_level_tag(message):
577 """
578 Return the tag for this message's level as defined in
579 django.contrib.messages.constants.DEFAULT_TAGS, ignoring the project-level
580 MESSAGE_TAGS setting (which end-users might customise).
581 """
582 return MESSAGE_TAGS.get(message.level)
583
584
585 @register.simple_tag
586 def message_tags(message):
587 level_tag = message_level_tag(message)
588 if message.extra_tags and level_tag:
589 return message.extra_tags + " " + level_tag
590 elif message.extra_tags:
591 return message.extra_tags
592 elif level_tag:
593 return level_tag
594 else:
595 return ""
596
597
598 @register.filter("abs")
599 def _abs(val):
600 return abs(val)
601
602
603 @register.filter
604 def admin_urlquote(value):
605 return quote(value)
606
607
608 @register.simple_tag
609 def avatar_url(user, size=50, gravatar_only=False):
610 """
611 A template tag that receives a user and size and return
612 the appropriate avatar url for that user.
613 Example usage: {% avatar_url request.user 50 %}
614 """
615
616 if (
617 not gravatar_only
618 and hasattr(user, "wagtail_userprofile")
619 and user.wagtail_userprofile.avatar
620 ):
621 return user.wagtail_userprofile.avatar.url
622
623 if hasattr(user, "email"):
624 gravatar_url = get_gravatar_url(user.email, size=size)
625 if gravatar_url is not None:
626 return gravatar_url
627
628 return versioned_static_func("wagtailadmin/images/default-user-avatar.png")
629
630
631 @register.simple_tag(takes_context=True)
632 def admin_theme_classname(context):
633 """
634 Retrieves the theme name for the current user.
635 """
636 user = context["request"].user
637 theme_name = (
638 user.wagtail_userprofile.theme
639 if hasattr(user, "wagtail_userprofile")
640 else "system"
641 )
642 return f"w-theme-{theme_name}"
643
644
645 @register.simple_tag
646 def js_translation_strings():
647 return mark_safe(json.dumps(get_js_translation_strings()))
648
649
650 @register.simple_tag
651 def notification_static(path):
652 """
653     Variant of the {% static %} tag for use in notification emails - tries to form
654 a full URL using WAGTAILADMIN_BASE_URL if the static URL isn't already a full URL.
655 """
656 return urljoin(base_url_setting(), static(path))
657
658
659 @register.simple_tag
660 def versioned_static(path):
661 """
662 Wrapper for Django's static file finder to append a cache-busting query parameter
663 that updates on each Wagtail version
664 """
665 return versioned_static_func(path)
666
667
668 @register.inclusion_tag("wagtailadmin/shared/icon.html", takes_context=False)
669 def icon(name=None, classname=None, title=None, wrapped=False, class_name=None):
670 """
671 Abstracts away the actual icon implementation.
672
673 Usage:
674 {% load wagtailadmin_tags %}
675 ...
676 {% icon name="cogs" classname="icon--red" title="Settings" %}
677
678 :param name: the icon name/id, required (string)
679 :param classname: defaults to 'icon' if not provided (string)
680 :param title: accessible label intended for screen readers (string)
681 :return: Rendered template snippet (string)
682 """
683 if not name:
684 raise ValueError("You must supply an icon name")
685
686 if class_name:
687 warn(
688 (
689 "Icon template tag `class_name` has been renamed to `classname`, please adopt the new usage instead. "
690 f'Replace `{{% icon ... class_name="{class_name}" %}}` with `{{% icon ... classname="{class_name}" %}}`'
691 ),
692 category=RemovedInWagtail60Warning,
693 )
694
695 deprecated_icons = [
696 "angle-double-left",
697 "angle-double-right",
698 "arrow-down-big",
699 "arrow-up-big",
700 "arrows-up-down",
701 "chain-broken",
702 "dots-vertical",
703 "ellipsis-v",
704 "horizontalrule",
705 "repeat",
706 "reset",
707 "undo",
708 "wagtail-inverse",
709 ]
710
711 if name in deprecated_icons:
712 warn(
713 (f"Icon `{name}` is deprecated and will be removed in a future release."),
714 category=RemovedInWagtail60Warning,
715 )
716
717 renamed_icons = {
718 "chevron-down": "arrow-down",
719 "download-alt": "download",
720 "duplicate": "copy",
721 "tick": "check",
722 "uni52": "folder-inverse",
723 }
724
725 if name in renamed_icons:
726 old_name = name
727 name = renamed_icons[name]
728 warn(
729 (
730 f"Icon `{old_name}` has been renamed to `{name}`, please adopt the new usage instead. "
731 f'Replace `{{% icon name="{old_name}" ... %}}` with `{{% icon name="{name}" ... %}}`'
732 ),
733 category=RemovedInWagtail60Warning,
734 )
735
736 return {
737 "name": name,
738 # supporting class_name for backwards compatibility
739 "classname": classname or class_name or "icon",
740 "title": title,
741 "wrapped": wrapped,
742 }
743
744
745 @register.inclusion_tag("wagtailadmin/shared/status_tag.html")
746 def status(
747 label=None,
748 classname=None,
749 url=None,
750 title=None,
751 hidden_label=None,
752 attrs=None,
753 ):
754 """
755     Generates a status tag with a <span></span> or <a></a> implementation.
756
757 Usage:
758
759 {% status label="live" url="/test/" title="title" hidden_label="current status:" classname="w-status--primary" %}
760
761     :param label: the status text (string)
762 :param classname: defaults to 'status-tag' if not provided (string)
763     :param url: the status url (to specify the use of an anchor tag instead of the default span) (string)
764 :param title: accessible label intended for screen readers (string)
765     :param hidden_label: text for an additional visually hidden span (string)
766 :param attrs: any additional HTML attributes (as a string) to append to the root element
767 :return: Rendered template snippet (string)
768
769 """
770 return {
771 "label": label,
772 "attrs": attrs,
773 "classname": classname,
774 "hidden_label": hidden_label,
775 "title": title,
776 "url": url,
777 }
778
779
780 @register.filter()
781 def timesince_simple(d):
782 """
783 Returns a simplified timesince:
784 19 hours, 48 minutes ago -> 19 hours ago
785 1 week, 1 day ago -> 1 week ago
786 0 minutes ago -> just now
787 """
788 # Note: Duplicate code in timesince_last_update()
789 time_period = timesince(d).split(",")[0]
790 if time_period == avoid_wrapping(_("0 minutes")):
791 return _("just now")
792 return _("%(time_period)s ago") % {"time_period": time_period}
793
794
795 @register.simple_tag
796 def timesince_last_update(
797 last_update, show_time_prefix=False, user_display_name="", use_shorthand=True
798 ):
799 """
800 Returns:
801     - the time of update if last_update is today; if show_time_prefix=True, the output will be prefixed with "at "
802 - time since last update otherwise. Defaults to the simplified timesince,
803 but can return the full string if needed
804 """
805     # translation usage below is intentionally verbose to make it easier to work with translations
806
807 if last_update.date() == datetime.today().date():
808 if timezone.is_aware(last_update):
809 time_str = timezone.localtime(last_update).strftime("%H:%M")
810 else:
811 time_str = last_update.strftime("%H:%M")
812
813 if show_time_prefix:
814 if user_display_name:
815 return _("at %(time)s by %(user_display_name)s") % {
816 "time": time_str,
817 "user_display_name": user_display_name,
818 }
819 else:
820 return _("at %(time)s") % {"time": time_str}
821 else:
822 if user_display_name:
823 return _("%(time)s by %(user_display_name)s") % {
824 "time": time_str,
825 "user_display_name": user_display_name,
826 }
827 else:
828 return time_str
829 else:
830 if use_shorthand:
831 # Note: Duplicate code in timesince_simple()
832 time_period = timesince(last_update).split(",")[0]
833 if time_period == avoid_wrapping(_("0 minutes")):
834 if user_display_name:
835 return _("just now by %(user_display_name)s") % {
836 "user_display_name": user_display_name
837 }
838 else:
839 return _("just now")
840 else:
841 time_period = timesince(last_update)
842
843 if user_display_name:
844 return _("%(time_period)s ago by %(user_display_name)s") % {
845 "time_period": time_period,
846 "user_display_name": user_display_name,
847 }
848 else:
849 return _("%(time_period)s ago") % {"time_period": time_period}
850
851
852 @register.filter
853 def user_display_name(user):
854 return get_user_display_name(user)
855
856
857 @register.filter
858 def format_content_type(content_type):
859 return get_content_type_label(content_type)
860
861
862 @register.simple_tag
863 def i18n_enabled():
864 return getattr(settings, "WAGTAIL_I18N_ENABLED", False)
865
866
867 @register.simple_tag
868 def locales():
869 return json.dumps(
870 [
871 {
872 "code": locale.language_code,
873 "display_name": force_str(locale.get_display_name()),
874 }
875 for locale in Locale.objects.all()
876 ]
877 )
878
879
880 @register.simple_tag
881 def locale_label_from_id(locale_id):
882 """
883 Returns the Locale display name given its id.
884 """
885 return get_locales_display_names().get(locale_id)
886
887
888 @register.simple_tag(takes_context=True)
889 def sidebar_collapsed(context):
890 request = context.get("request")
891 collapsed = request.COOKIES.get("wagtail_sidebar_collapsed", "0")
892 if collapsed == "0":
893 return False
894 return True
895
896
897 @register.simple_tag(takes_context=True)
898 def sidebar_props(context):
899 request = context["request"]
900 search_areas = admin_search_areas.search_items_for_request(request)
901 if search_areas:
902 search_area = search_areas[0]
903 else:
904 search_area = None
905
906 account_menu = [
907 sidebar.LinkMenuItem(
908 "account", _("Account"), reverse("wagtailadmin_account"), icon_name="user"
909 ),
910 sidebar.ActionMenuItem(
911 "logout", _("Log out"), reverse("wagtailadmin_logout"), icon_name="logout"
912 ),
913 ]
914
915 modules = [
916 sidebar.WagtailBrandingModule(),
917 sidebar.SearchModule(search_area) if search_area else None,
918 sidebar.MainMenuModule(
919 admin_menu.render_component(request), account_menu, request.user
920 ),
921 ]
922 modules = [module for module in modules if module is not None]
923
924 return json_script(
925 {
926 "modules": JSContext().pack(modules),
927 },
928 element_id="wagtail-sidebar-props",
929 )
930
931
932 @register.simple_tag
933 def get_comments_enabled():
934 return getattr(settings, "WAGTAILADMIN_COMMENTS_ENABLED", True)
935
936
937 @register.simple_tag(takes_context=True)
938 def wagtail_config(context):
939 request = context["request"]
940 config = {
941 "CSRF_TOKEN": get_token(request),
942 "CSRF_HEADER_NAME": HttpHeaders.parse_header_name(
943 getattr(settings, "CSRF_HEADER_NAME")
944 ),
945 "ADMIN_URLS": {
946 "DISMISSIBLES": reverse("wagtailadmin_dismissibles"),
947 },
948 }
949
950 default_settings = {
951 "WAGTAIL_AUTO_UPDATE_PREVIEW": True,
952 "WAGTAIL_AUTO_UPDATE_PREVIEW_INTERVAL": 500,
953 }
954 config.update(
955 {
956 option: getattr(settings, option, default)
957 for option, default in default_settings.items()
958 }
959 )
960
961 return config
962
963
964 @register.simple_tag
965 def resolve_url(url):
966 # Used by wagtailadmin/shared/pagination_nav.html - given an input that may be a URL route
967 # name, or a direct URL path, return it as a direct URL path. On failure (or being passed
968 # an empty / None value), return empty string
969 if not url:
970 return ""
971
972 try:
973 return resolve_url_func(url)
974 except NoReverseMatch:
975 return ""
976
977
978 @register.simple_tag(takes_context=True)
979 def component(context, obj, fallback_render_method=False):
980 # Render a component by calling its render_html method, passing request and context from the
981 # calling template.
982 # If fallback_render_method is true, objects without a render_html method will have render()
983     # called instead (with no arguments) - this is to provide a deprecation path for things that have
984 # been newly upgraded to use the component pattern.
985
986 has_render_html_method = hasattr(obj, "render_html")
987 if fallback_render_method and not has_render_html_method and hasattr(obj, "render"):
988 return obj.render()
989 elif not has_render_html_method:
990 raise ValueError(f"Cannot render {obj!r} as a component")
991
992 return obj.render_html(context)
993
994
995 class FragmentNode(template.Node):
996 def __init__(self, nodelist, target_var):
997 self.nodelist = nodelist
998 self.target_var = target_var
999
1000 def render(self, context):
1001 fragment = self.nodelist.render(context) if self.nodelist else ""
1002 context[self.target_var] = fragment
1003 return ""
1004
1005
1006 @register.tag(name="fragment")
1007 def fragment(parser, token):
1008 """
1009 Store a template fragment as a variable.
1010
1011 Usage:
1012 {% fragment as header_title %}
1013 {% blocktrans trimmed %}Welcome to the {{ site_name }} Wagtail CMS{% endblocktrans %}
1014 {% endfragment %}
1015
1016 Copy-paste of slippers’ fragment template tag.
1017 See https://github.com/mixxorz/slippers/blob/254c720e6bb02eb46ae07d104863fce41d4d3164/slippers/templatetags/slippers.py#L173.
1018 """
1019 error_message = "The syntax for fragment is {% fragment as variable_name %}"
1020
1021 try:
1022 tag_name, _, target_var = token.split_contents()
1023 nodelist = parser.parse(("endfragment",))
1024 parser.delete_first_token()
1025 except ValueError:
1026 if settings.DEBUG:
1027 raise template.TemplateSyntaxError(error_message)
1028 return ""
1029
1030 return FragmentNode(nodelist, target_var)
1031
1032
1033 class BlockInclusionNode(template.Node):
1034 """
1035 Create template-driven tags like Django’s inclusion_tag / InclusionNode, but for block-level tags.
1036
1037 Usage:
1038 {% my_tag status="test" label="Alert" %}
1039 Proceed with caution.
1040 {% endmy_tag %}
1041
1042 Within `my_tag`’s template, the template fragment will be accessible as the {{ children }} context variable.
1043
1044 The output can also be stored as a variable in the parent context:
1045
1046 {% my_tag status="test" label="Alert" as my_variable %}
1047 Proceed with caution.
1048 {% endmy_tag %}
1049
1050 Inspired by slippers’ Component Node.
1051 See https://github.com/mixxorz/slippers/blob/254c720e6bb02eb46ae07d104863fce41d4d3164/slippers/templatetags/slippers.py#L47.
1052 """
1053
1054 def __init__(self, nodelist, template, extra_context, target_var=None):
1055 self.nodelist = nodelist
1056 self.template = template
1057 self.extra_context = extra_context
1058 self.target_var = target_var
1059
1060 def get_context_data(self, parent_context):
1061 return parent_context
1062
1063 def render(self, context):
1064 children = self.nodelist.render(context) if self.nodelist else ""
1065
1066 values = {
1067 # Resolve the tag’s parameters within the current context.
1068 key: value.resolve(context)
1069 for key, value in self.extra_context.items()
1070 }
1071
1072 t = context.template.engine.get_template(self.template)
1073 # Add the `children` variable in the rendered template’s context.
1074 context_data = self.get_context_data({**values, "children": children})
1075 output = t.render(Context(context_data, autoescape=context.autoescape))
1076
1077 if self.target_var:
1078 context[self.target_var] = output
1079 return ""
1080
1081 return output
1082
1083 @classmethod
1084 def handle(cls, parser, token):
1085 tag_name, *remaining_bits = token.split_contents()
1086
1087 nodelist = parser.parse((f"end{tag_name}",))
1088 parser.delete_first_token()
1089
1090 extra_context = token_kwargs(remaining_bits, parser)
1091
1092 # Allow component fragment to be assigned to a variable
1093 target_var = None
1094 if len(remaining_bits) >= 2 and remaining_bits[-2] == "as":
1095 target_var = remaining_bits[-1]
1096
1097 return cls(nodelist, cls.template, extra_context, target_var)
1098
1099
1100 class DialogNode(BlockInclusionNode):
1101 template = "wagtailadmin/shared/dialog/dialog.html"
1102
1103 def get_context_data(self, parent_context):
1104 context = super().get_context_data(parent_context)
1105
1106 if "title" not in context:
1107 raise TypeError("You must supply a title")
1108 if "id" not in context:
1109 raise TypeError("You must supply an id")
1110
1111 # Used for determining which icon the message will use
1112 message_icon_name = {
1113 "info": "info-circle",
1114 "warning": "warning",
1115 "critical": "warning",
1116 "success": "circle-check",
1117 }
1118
1119 message_status = context.get("message_status")
1120
1121 # If there is a message status then determine which icon to use.
1122 if message_status:
1123 context["message_icon_name"] = message_icon_name[message_status]
1124
1125 return context
1126
1127
1128 register.tag("dialog", DialogNode.handle)
1129
1130
1131 class HelpBlockNode(BlockInclusionNode):
1132 template = "wagtailadmin/shared/help_block.html"
1133
1134
1135 register.tag("help_block", HelpBlockNode.handle)
1136
1137
1138 class DropdownNode(BlockInclusionNode):
1139 template = "wagtailadmin/shared/dropdown/dropdown.html"
1140
1141
1142 register.tag("dropdown", DropdownNode.handle)
1143
1144
1145 class PanelNode(BlockInclusionNode):
1146 template = "wagtailadmin/shared/panel.html"
1147
1148
1149 register.tag("panel", PanelNode.handle)
1150
1151
1152 class FieldNode(BlockInclusionNode):
1153 template = "wagtailadmin/shared/field.html"
1154
1155
1156 register.tag("field", FieldNode.handle)
1157
1158
1159 class FieldRowNode(BlockInclusionNode):
1160 template = "wagtailadmin/shared/forms/field_row.html"
1161
1162
1163 register.tag("field_row", FieldRowNode.handle)
1164
1165
1166 # Button used to open dialogs
1167 @register.inclusion_tag("wagtailadmin/shared/dialog/dialog_toggle.html")
1168 def dialog_toggle(dialog_id, classname="", text=None):
1169 if not dialog_id:
1170 raise ValueError("You must supply the dialog ID")
1171
1172 return {
1173 "classname": classname,
1174 "text": text,
1175 # dialog_id must match the ID of the dialog you are toggling
1176 "dialog_id": dialog_id,
1177 }
1178
1179
1180 @register.simple_tag()
1181 def workflow_status_with_date(workflow_state):
1182 translation_context = {
1183 "finished_at": naturaltime(workflow_state.current_task_state.finished_at),
1184 "started_at": naturaltime(workflow_state.current_task_state.started_at),
1185 "task_name": workflow_state.current_task_state.task.name,
1186 "status_display": workflow_state.get_status_display,
1187 }
1188
1189 if workflow_state.status == "needs_changes":
1190 return _("Changes requested %(finished_at)s") % translation_context
1191
1192 if workflow_state.status == "in_progress":
1193 return _("Sent to %(task_name)s %(started_at)s") % translation_context
1194
1195 return _("%(status_display)s %(task_name)s %(started_at)s") % translation_context
1196
1197
1198 @register.inclusion_tag("wagtailadmin/shared/human_readable_date.html")
1199 def human_readable_date(date, description=None):
1200 return {
1201 "date": date,
1202 "description": description,
1203 }
1204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/admin/templatetags/wagtailadmin_tags.py b/wagtail/admin/templatetags/wagtailadmin_tags.py
--- a/wagtail/admin/templatetags/wagtailadmin_tags.py
+++ b/wagtail/admin/templatetags/wagtailadmin_tags.py
@@ -1196,8 +1196,9 @@
@register.inclusion_tag("wagtailadmin/shared/human_readable_date.html")
-def human_readable_date(date, description=None):
+def human_readable_date(date, description=None, position="top"):
return {
"date": date,
"description": description,
+ "position": position,
}
| {"golden_diff": "diff --git a/wagtail/admin/templatetags/wagtailadmin_tags.py b/wagtail/admin/templatetags/wagtailadmin_tags.py\n--- a/wagtail/admin/templatetags/wagtailadmin_tags.py\n+++ b/wagtail/admin/templatetags/wagtailadmin_tags.py\n@@ -1196,8 +1196,9 @@\n \n \n @register.inclusion_tag(\"wagtailadmin/shared/human_readable_date.html\")\n-def human_readable_date(date, description=None):\n+def human_readable_date(date, description=None, position=\"top\"):\n return {\n \"date\": date,\n \"description\": description,\n+ \"position\": position,\n }\n", "issue": "Display pages and snippets\u2019 \"First published at\" as absolute value\n### Is your proposal related to a problem?\r\n\r\nIn side panels, we display pages and snippets\u2019 \"First published at\" datetime as a relative date. To know the absolute date(time), users have to go to the history page and look at the first entry.\r\n\r\n### Describe the solution you'd like\r\n\r\nIt\u2019d be simpler if hovering over the \"First published at \\[time ago\\]\" text would reveal the absolute datetime in a tooltip, similarly to what we do in listings, with a dotted underline as a cue.\r\n\r\nCode: https://github.com/wagtail/wagtail/blob/e0b0d03cf025c025a19dbcb803b64f0c1fce212c/wagtail/admin/templates/wagtailadmin/shared/side_panels/status.html#L19\r\n\r\nWhat we do on listings (different field) for reference:\r\n\r\n```html\r\n<button type=\"button\" class=\"w-human-readable-date\" data-tippy-content=\"Aug. 14, 2023, 10:47 a.m.\">\r\n <time class=\"w-human-readable-date__date\" datetime=\"2023-08-14T10:47:04.536893+00:00\">\r\n 1 hour ago\r\n </time>\r\n \r\n</button>\r\n```\r\n\r\n### Describe alternatives you've considered\r\n\r\nWe could also use a read-only FieldPanel to display `first_published_at`.\r\n\r\n### Additional context\r\n\r\nSee for example https://static-wagtail-v5-1.netlify.app/admin/pages/69/edit/\r\n\r\n> Form page\r\n> First published 4 years ago\r\n\nDisplay pages and snippets\u2019 \"First published at\" as absolute value\n### Is your proposal related to a problem?\r\n\r\nIn side panels, we display pages and snippets\u2019 \"First published at\" datetime as a relative date. To know the absolute date(time), users have to go to the history page and look at the first entry.\r\n\r\n### Describe the solution you'd like\r\n\r\nIt\u2019d be simpler if hovering over the \"First published at \\[time ago\\]\" text would reveal the absolute datetime in a tooltip, similarly to what we do in listings, with a dotted underline as a cue.\r\n\r\nCode: https://github.com/wagtail/wagtail/blob/e0b0d03cf025c025a19dbcb803b64f0c1fce212c/wagtail/admin/templates/wagtailadmin/shared/side_panels/status.html#L19\r\n\r\nWhat we do on listings (different field) for reference:\r\n\r\n```html\r\n<button type=\"button\" class=\"w-human-readable-date\" data-tippy-content=\"Aug. 
14, 2023, 10:47 a.m.\">\r\n <time class=\"w-human-readable-date__date\" datetime=\"2023-08-14T10:47:04.536893+00:00\">\r\n 1 hour ago\r\n </time>\r\n \r\n</button>\r\n```\r\n\r\n### Describe alternatives you've considered\r\n\r\nWe could also use a read-only FieldPanel to display `first_published_at`.\r\n\r\n### Additional context\r\n\r\nSee for example https://static-wagtail-v5-1.netlify.app/admin/pages/69/edit/\r\n\r\n> Form page\r\n> First published 4 years ago\r\n\n", "before_files": [{"content": "import json\nfrom datetime import datetime\nfrom urllib.parse import urljoin\nfrom warnings import warn\n\nfrom django import template\nfrom django.conf import settings\nfrom django.contrib.admin.utils import quote\nfrom django.contrib.humanize.templatetags.humanize import intcomma, naturaltime\nfrom django.contrib.messages.constants import DEFAULT_TAGS as MESSAGE_TAGS\nfrom django.http.request import HttpHeaders\nfrom django.middleware.csrf import get_token\nfrom django.shortcuts import resolve_url as resolve_url_func\nfrom django.template import Context\nfrom django.template.base import token_kwargs\nfrom django.template.defaultfilters import stringfilter\nfrom django.templatetags.static import static\nfrom django.urls import reverse\nfrom django.urls.exceptions import NoReverseMatch\nfrom django.utils import timezone\nfrom django.utils.encoding import force_str\nfrom django.utils.html import avoid_wrapping, json_script\nfrom django.utils.http import urlencode\nfrom django.utils.safestring import mark_safe\nfrom django.utils.timesince import timesince\nfrom django.utils.translation import gettext_lazy as _\n\nfrom wagtail import hooks\nfrom wagtail.admin.admin_url_finder import AdminURLFinder\nfrom wagtail.admin.localization import get_js_translation_strings\nfrom wagtail.admin.menu import admin_menu\nfrom wagtail.admin.search import admin_search_areas\nfrom wagtail.admin.staticfiles import versioned_static as versioned_static_func\nfrom wagtail.admin.ui import sidebar\nfrom wagtail.admin.utils import (\n get_admin_base_url,\n get_latest_str,\n get_user_display_name,\n get_valid_next_url_from_request,\n)\nfrom wagtail.admin.views.bulk_action.registry import bulk_action_registry\nfrom wagtail.admin.widgets import ButtonWithDropdown, PageListingButton\nfrom wagtail.coreutils import (\n camelcase_to_underscore,\n escape_script,\n get_content_type_label,\n get_locales_display_names,\n)\nfrom wagtail.coreutils import cautious_slugify as _cautious_slugify\nfrom wagtail.models import (\n CollectionViewRestriction,\n Locale,\n Page,\n PageViewRestriction,\n UserPagePermissionsProxy,\n)\nfrom wagtail.permission_policies.pages import PagePermissionPolicy\nfrom wagtail.telepath import JSContext\nfrom wagtail.users.utils import get_gravatar_url\nfrom wagtail.utils.deprecation import RemovedInWagtail60Warning\n\nregister = template.Library()\n\nregister.filter(\"intcomma\", intcomma)\nregister.filter(\"naturaltime\", naturaltime)\n\n\[email protected]_tag(\"wagtailadmin/shared/breadcrumbs.html\", takes_context=True)\ndef breadcrumbs(\n context,\n page,\n url_name,\n url_root_name=None,\n include_self=True,\n is_expanded=False,\n page_perms=None,\n querystring_value=None,\n trailing_breadcrumb_title=None,\n classname=None,\n):\n user = context[\"request\"].user\n\n # find the closest common ancestor of the pages that this user has direct explore permission\n # (i.e. 
add/edit/publish/lock) over; this will be the root of the breadcrumb\n cca = PagePermissionPolicy().explorable_root_instance(user)\n if not cca:\n return {\"pages\": Page.objects.none()}\n\n return {\n \"pages\": page.get_ancestors(inclusive=include_self)\n .descendant_of(cca, inclusive=True)\n .specific(),\n \"current_page\": page,\n \"is_expanded\": is_expanded,\n \"page_perms\": page_perms,\n \"querystring_value\": querystring_value or \"\",\n \"trailing_breadcrumb_title\": trailing_breadcrumb_title, # Only used in collapsible breadcrumb templates\n \"url_name\": url_name,\n \"url_root_name\": url_root_name,\n \"classname\": classname,\n }\n\n\[email protected]_tag(\"wagtailadmin/shared/search_other.html\", takes_context=True)\ndef search_other(context, current=None):\n request = context[\"request\"]\n\n return {\n \"options_html\": admin_search_areas.render_html(request, current),\n \"request\": request,\n }\n\n\[email protected](\"ellipsistrim\")\ndef ellipsistrim(value, max_length):\n if len(value) > max_length:\n truncd_val = value[:max_length]\n if not len(value) == (max_length + 1) and value[max_length + 1] != \" \":\n truncd_val = truncd_val[: truncd_val.rfind(\" \")]\n return truncd_val + \"\u2026\"\n return value\n\n\[email protected]\ndef fieldtype(bound_field):\n try:\n return camelcase_to_underscore(bound_field.field.__class__.__name__)\n except AttributeError:\n try:\n return camelcase_to_underscore(bound_field.__class__.__name__)\n except AttributeError:\n return \"\"\n\n\[email protected]\ndef widgettype(bound_field):\n try:\n return camelcase_to_underscore(bound_field.field.widget.__class__.__name__)\n except AttributeError:\n try:\n return camelcase_to_underscore(bound_field.widget.__class__.__name__)\n except AttributeError:\n return \"\"\n\n\ndef _get_user_page_permissions(context):\n # RemovedInWagtail60Warning: Remove this function\n\n # Create a UserPagePermissionsProxy object to represent the user's global permissions, and\n # cache it in the context for the duration of the page request, if one does not exist already\n if \"user_page_permissions\" not in context:\n context[\"user_page_permissions\"] = UserPagePermissionsProxy(\n context[\"request\"].user\n )\n return context[\"user_page_permissions\"]\n\n\[email protected]_tag(takes_context=True)\ndef page_permissions(context, page):\n \"\"\"\n Usage: {% page_permissions page as page_perms %}\n Sets the variable 'page_perms' to a PagePermissionTester object that can be queried to find out\n what actions the current logged-in user can perform on the given page.\n \"\"\"\n # RemovedInWagtail60Warning: Keep the UserPagePermissionsProxy object in the context\n # for backwards compatibility during the deprecation period, even though we don't use it\n _get_user_page_permissions(context)\n return page.permissions_for_user(context[\"request\"].user)\n\n\[email protected]_tag\ndef is_page(obj):\n \"\"\"\n Usage: {% is_page obj as is_page %}\n Sets the variable 'is_page' to True if the given object is a Page instance,\n False otherwise. Useful in shared templates that accept both Page and\n non-Page objects (e.g. snippets with the optional features enabled).\n \"\"\"\n return isinstance(obj, Page)\n\n\[email protected]_tag(takes_context=True)\ndef admin_edit_url(context, obj, user=None):\n \"\"\"\n Usage: {% admin_edit_url obj user %}\n Returns the URL of the edit view for the given object and user using the\n registered AdminURLFinder for the object. 
The AdminURLFinder instance is\n cached in the context for the duration of the page request.\n The user argument is optional and defaults to request.user if request is\n available in the context.\n \"\"\"\n if not user and \"request\" in context:\n user = context[\"request\"].user\n if \"admin_url_finder\" not in context:\n context[\"admin_url_finder\"] = AdminURLFinder(user)\n return context[\"admin_url_finder\"].get_edit_url(obj)\n\n\[email protected]_tag\ndef admin_url_name(obj, action):\n \"\"\"\n Usage: {% admin_url_name obj action %}\n Returns the URL name of the given action for the given object, e.g.\n 'wagtailadmin_pages:edit' for a Page object and 'edit' action.\n Works with pages and snippets only.\n \"\"\"\n if isinstance(obj, Page):\n return f\"wagtailadmin_pages:{action}\"\n return obj.snippet_viewset.get_url_name(action)\n\n\[email protected]_tag\ndef latest_str(obj):\n \"\"\"\n Usage: {% latest_str obj %}\n Returns the latest string representation of an object, making use of the\n latest revision where available to reflect draft changes.\n \"\"\"\n return get_latest_str(obj)\n\n\[email protected]_tag\ndef classnames(*classes):\n \"\"\"\n Usage <div class=\"{% classnames \"w-base\" classname active|yesno:\"w-base--active,\" any_other_var %}\"></div>\n Returns any args as a space-separated joined string for using in HTML class names.\n \"\"\"\n return \" \".join([classname.strip() for classname in classes if classname])\n\n\[email protected]_tag(takes_context=True)\ndef test_collection_is_public(context, collection):\n \"\"\"\n Usage: {% test_collection_is_public collection as is_public %}\n Sets 'is_public' to True iff there are no collection view restrictions in place\n on this collection.\n Caches the list of collection view restrictions in the context, to avoid repeated\n DB queries on repeated calls.\n \"\"\"\n if \"all_collection_view_restrictions\" not in context:\n context[\n \"all_collection_view_restrictions\"\n ] = CollectionViewRestriction.objects.select_related(\"collection\").values_list(\n \"collection__name\", flat=True\n )\n\n is_private = collection.name in context[\"all_collection_view_restrictions\"]\n\n return not is_private\n\n\[email protected]_tag(takes_context=True)\ndef test_page_is_public(context, page):\n \"\"\"\n Usage: {% test_page_is_public page as is_public %}\n Sets 'is_public' to True iff there are no page view restrictions in place on\n this page.\n Caches the list of page view restrictions on the request, to avoid repeated\n DB queries on repeated calls.\n \"\"\"\n if not hasattr(context[\"request\"], \"all_page_view_restriction_paths\"):\n context[\n \"request\"\n ].all_page_view_restriction_paths = PageViewRestriction.objects.select_related(\n \"page\"\n ).values_list(\n \"page__path\", flat=True\n )\n\n is_private = any(\n page.path.startswith(restricted_path)\n for restricted_path in context[\"request\"].all_page_view_restriction_paths\n )\n\n return not is_private\n\n\[email protected]_tag\ndef hook_output(hook_name):\n \"\"\"\n Example: {% hook_output 'insert_global_admin_css' %}\n Whenever we have a hook whose functions take no parameters and return a string, this tag can be used\n to output the concatenation of all of those return values onto the page.\n Note that the output is not escaped - it is the hook function's responsibility to escape unsafe content.\n \"\"\"\n snippets = [fn() for fn in hooks.get_hooks(hook_name)]\n\n if hook_name == \"insert_editor_css\" and snippets:\n warn(\n \"The `insert_editor_css` hook is deprecated - 
use `insert_global_admin_css` instead.\",\n category=RemovedInWagtail60Warning,\n )\n\n return mark_safe(\"\".join(snippets))\n\n\[email protected]_tag\ndef base_url_setting(default=None):\n return get_admin_base_url() or default\n\n\[email protected]_tag\ndef allow_unicode_slugs():\n return getattr(settings, \"WAGTAIL_ALLOW_UNICODE_SLUGS\", True)\n\n\nclass EscapeScriptNode(template.Node):\n TAG_NAME = \"escapescript\"\n\n def __init__(self, nodelist):\n super().__init__()\n self.nodelist = nodelist\n\n def render(self, context):\n out = self.nodelist.render(context)\n return escape_script(out)\n\n @classmethod\n def handle(cls, parser, token):\n nodelist = parser.parse((\"end\" + EscapeScriptNode.TAG_NAME,))\n parser.delete_first_token()\n return cls(nodelist)\n\n\nregister.tag(EscapeScriptNode.TAG_NAME, EscapeScriptNode.handle)\n\n\n# Helpers for Widget.render_with_errors, our extension to the Django widget API that allows widgets to\n# take on the responsibility of rendering their own error messages\[email protected]\ndef render_with_errors(bound_field):\n \"\"\"\n Usage: {{ field|render_with_errors }} as opposed to {{ field }}.\n If the field (a BoundField instance) has errors on it, and the associated widget implements\n a render_with_errors method, call that; otherwise, call the regular widget rendering mechanism.\n \"\"\"\n widget = bound_field.field.widget\n if bound_field.errors and hasattr(widget, \"render_with_errors\"):\n return widget.render_with_errors(\n bound_field.html_name,\n bound_field.value(),\n attrs={\"id\": bound_field.auto_id},\n errors=bound_field.errors,\n )\n else:\n attrs = {}\n # If the widget doesn't have an aria-describedby attribute,\n # and the field has help text, and the field has an id,\n # add an aria-describedby attribute pointing to the help text.\n # In this case, the corresponding help text element's id is set in the\n # wagtailadmin/shared/field.html template.\n\n # In Django 5.0 and up, this is done automatically, but we want to keep\n # this code because we use a different convention for the help text id\n # (we use -helptext suffix instead of Django's _helptext).\n if (\n not bound_field.field.widget.attrs.get(\"aria-describedby\")\n and bound_field.field.help_text\n and bound_field.id_for_label\n ):\n attrs[\"aria-describedby\"] = f\"{bound_field.id_for_label}-helptext\"\n return bound_field.as_widget(attrs=attrs)\n\n\[email protected]\ndef has_unrendered_errors(bound_field):\n \"\"\"\n Return true if this field has errors that were not accounted for by render_with_errors, because\n the widget does not support the render_with_errors method\n \"\"\"\n return bound_field.errors and not hasattr(\n bound_field.field.widget, \"render_with_errors\"\n )\n\n\[email protected](is_safe=True)\n@stringfilter\ndef cautious_slugify(value):\n return _cautious_slugify(value)\n\n\[email protected]_tag(takes_context=True)\ndef querystring(context, **kwargs):\n \"\"\"\n Print out the current querystring. 
Any keyword arguments to this template\n tag will be added to the querystring before it is printed out.\n\n <a href=\"/page/{% querystring key='value' %}\">\n\n Will result in something like:\n\n <a href=\"/page/?foo=bar&key=value\">\n \"\"\"\n request = context[\"request\"]\n querydict = request.GET.copy()\n # Can't do querydict.update(kwargs), because QueryDict.update() appends to\n # the list of values, instead of replacing the values.\n for key, value in kwargs.items():\n if value is None:\n # Remove the key if the value is None\n querydict.pop(key, None)\n else:\n # Set the key otherwise\n querydict[key] = str(value)\n\n return \"?\" + querydict.urlencode()\n\n\[email protected]_tag(takes_context=True)\ndef pagination_querystring(context, page_number, page_key=\"p\"):\n \"\"\"\n Print out a querystring with an updated page number:\n\n {% if page.has_next_page %}\n <a href=\"{% pagination_link page.next_page_number %}\">Next page</a>\n {% endif %}\n \"\"\"\n return querystring(context, **{page_key: page_number})\n\n\[email protected]_tag(\n \"wagtailadmin/pages/listing/_pagination.html\", takes_context=True\n)\ndef paginate(context, page, base_url=\"\", page_key=\"p\", classname=\"\"):\n \"\"\"\n Print pagination previous/next links, and the page count. Take the\n following arguments:\n\n page\n The current page of results. This should be a Django pagination `Page`\n instance\n\n base_url\n The base URL of the next/previous page, with no querystring.\n This is optional, and defaults to the current page by just printing the\n querystring for the next/previous page.\n\n page_key\n The name of the page variable in the query string. Defaults to 'p'.\n\n classname\n Extra classes to add to the next/previous links.\n \"\"\"\n request = context[\"request\"]\n return {\n \"base_url\": base_url,\n \"classname\": classname,\n \"request\": request,\n \"page\": page,\n \"page_key\": page_key,\n \"paginator\": page.paginator,\n }\n\n\[email protected]_tag(\"wagtailadmin/pages/listing/_buttons.html\", takes_context=True)\ndef page_listing_buttons(context, page, page_perms):\n next_url = context[\"request\"].path\n button_hooks = hooks.get_hooks(\"register_page_listing_buttons\")\n\n buttons = []\n for hook in button_hooks:\n buttons.extend(hook(page, page_perms, next_url))\n\n buttons.sort()\n\n for hook in hooks.get_hooks(\"construct_page_listing_buttons\"):\n hook(buttons, page, page_perms, context)\n\n return {\"page\": page, \"buttons\": buttons}\n\n\[email protected]_tag(\n \"wagtailadmin/pages/listing/_page_header_buttons.html\", takes_context=True\n)\ndef page_header_buttons(context, page, page_perms):\n next_url = context[\"request\"].path\n button_hooks = hooks.get_hooks(\"register_page_header_buttons\")\n\n buttons = []\n for hook in button_hooks:\n buttons.extend(hook(page, page_perms, next_url))\n\n buttons.sort()\n return {\n \"page\": page,\n \"buttons\": buttons,\n \"title\": _(\"Actions\"),\n \"icon_name\": \"dots-horizontal\",\n \"button_classes\": [\n \"w-p-0\",\n \"w-w-12\",\n \"w-h-slim-header\",\n \"hover:w-scale-110\",\n \"w-transition\",\n \"w-outline-offset-inside\",\n \"w-relative\",\n \"w-z-30\",\n ],\n }\n\n\[email protected]_tag(\"wagtailadmin/pages/listing/_buttons.html\", takes_context=True)\ndef bulk_action_choices(context, app_label, model_name):\n bulk_actions_list = list(\n bulk_action_registry.get_bulk_actions_for_model(app_label, model_name)\n )\n bulk_actions_list.sort(key=lambda x: x.action_priority)\n\n bulk_action_more_list = []\n if len(bulk_actions_list) > 
4:\n bulk_action_more_list = bulk_actions_list[4:]\n bulk_actions_list = bulk_actions_list[:4]\n\n next_url = get_valid_next_url_from_request(context[\"request\"])\n if not next_url:\n next_url = context[\"request\"].path\n\n bulk_action_buttons = [\n PageListingButton(\n action.display_name,\n reverse(\n \"wagtail_bulk_action\", args=[app_label, model_name, action.action_type]\n )\n + \"?\"\n + urlencode({\"next\": next_url}),\n attrs={\"aria-label\": action.aria_label},\n priority=action.action_priority,\n classes=action.classes | {\"bulk-action-btn\"},\n )\n for action in bulk_actions_list\n ]\n\n if bulk_action_more_list:\n more_button = ButtonWithDropdown(\n label=_(\"More\"),\n attrs={\"title\": _(\"More bulk actions\")},\n button_classes={\"button\", \"button-secondary\", \"button-small\"},\n buttons_data=[\n {\n \"label\": action.display_name,\n \"url\": reverse(\n \"wagtail_bulk_action\",\n args=[app_label, model_name, action.action_type],\n )\n + \"?\"\n + urlencode({\"next\": next_url}),\n \"attrs\": {\"aria-label\": action.aria_label},\n \"priority\": action.action_priority,\n \"classes\": {\"bulk-action-btn\"},\n }\n for action in bulk_action_more_list\n ],\n )\n bulk_action_buttons.append(more_button)\n\n return {\"buttons\": bulk_action_buttons}\n\n\[email protected]_tag(\"wagtailadmin/shared/avatar.html\")\ndef avatar(user=None, classname=None, size=None, tooltip=None):\n \"\"\"\n Displays a user avatar using the avatar template\n Usage:\n {% load wagtailadmin_tags %}\n ...\n {% avatar user=request.user size='small' tooltip='JaneDoe' %}\n :param user: the user to get avatar information from (User)\n :param size: default None (None|'small'|'large'|'square')\n :param tooltip: Optional tooltip to display under the avatar (string)\n :return: Rendered template snippet\n \"\"\"\n return {\"user\": user, \"classname\": classname, \"size\": size, \"tooltip\": tooltip}\n\n\[email protected]_tag\ndef message_level_tag(message):\n \"\"\"\n Return the tag for this message's level as defined in\n django.contrib.messages.constants.DEFAULT_TAGS, ignoring the project-level\n MESSAGE_TAGS setting (which end-users might customise).\n \"\"\"\n return MESSAGE_TAGS.get(message.level)\n\n\[email protected]_tag\ndef message_tags(message):\n level_tag = message_level_tag(message)\n if message.extra_tags and level_tag:\n return message.extra_tags + \" \" + level_tag\n elif message.extra_tags:\n return message.extra_tags\n elif level_tag:\n return level_tag\n else:\n return \"\"\n\n\[email protected](\"abs\")\ndef _abs(val):\n return abs(val)\n\n\[email protected]\ndef admin_urlquote(value):\n return quote(value)\n\n\[email protected]_tag\ndef avatar_url(user, size=50, gravatar_only=False):\n \"\"\"\n A template tag that receives a user and size and return\n the appropriate avatar url for that user.\n Example usage: {% avatar_url request.user 50 %}\n \"\"\"\n\n if (\n not gravatar_only\n and hasattr(user, \"wagtail_userprofile\")\n and user.wagtail_userprofile.avatar\n ):\n return user.wagtail_userprofile.avatar.url\n\n if hasattr(user, \"email\"):\n gravatar_url = get_gravatar_url(user.email, size=size)\n if gravatar_url is not None:\n return gravatar_url\n\n return versioned_static_func(\"wagtailadmin/images/default-user-avatar.png\")\n\n\[email protected]_tag(takes_context=True)\ndef admin_theme_classname(context):\n \"\"\"\n Retrieves the theme name for the current user.\n \"\"\"\n user = context[\"request\"].user\n theme_name = (\n user.wagtail_userprofile.theme\n if hasattr(user, 
\"wagtail_userprofile\")\n else \"system\"\n )\n return f\"w-theme-{theme_name}\"\n\n\[email protected]_tag\ndef js_translation_strings():\n return mark_safe(json.dumps(get_js_translation_strings()))\n\n\[email protected]_tag\ndef notification_static(path):\n \"\"\"\n Variant of the {% static %}` tag for use in notification emails - tries to form\n a full URL using WAGTAILADMIN_BASE_URL if the static URL isn't already a full URL.\n \"\"\"\n return urljoin(base_url_setting(), static(path))\n\n\[email protected]_tag\ndef versioned_static(path):\n \"\"\"\n Wrapper for Django's static file finder to append a cache-busting query parameter\n that updates on each Wagtail version\n \"\"\"\n return versioned_static_func(path)\n\n\[email protected]_tag(\"wagtailadmin/shared/icon.html\", takes_context=False)\ndef icon(name=None, classname=None, title=None, wrapped=False, class_name=None):\n \"\"\"\n Abstracts away the actual icon implementation.\n\n Usage:\n {% load wagtailadmin_tags %}\n ...\n {% icon name=\"cogs\" classname=\"icon--red\" title=\"Settings\" %}\n\n :param name: the icon name/id, required (string)\n :param classname: defaults to 'icon' if not provided (string)\n :param title: accessible label intended for screen readers (string)\n :return: Rendered template snippet (string)\n \"\"\"\n if not name:\n raise ValueError(\"You must supply an icon name\")\n\n if class_name:\n warn(\n (\n \"Icon template tag `class_name` has been renamed to `classname`, please adopt the new usage instead. \"\n f'Replace `{{% icon ... class_name=\"{class_name}\" %}}` with `{{% icon ... classname=\"{class_name}\" %}}`'\n ),\n category=RemovedInWagtail60Warning,\n )\n\n deprecated_icons = [\n \"angle-double-left\",\n \"angle-double-right\",\n \"arrow-down-big\",\n \"arrow-up-big\",\n \"arrows-up-down\",\n \"chain-broken\",\n \"dots-vertical\",\n \"ellipsis-v\",\n \"horizontalrule\",\n \"repeat\",\n \"reset\",\n \"undo\",\n \"wagtail-inverse\",\n ]\n\n if name in deprecated_icons:\n warn(\n (f\"Icon `{name}` is deprecated and will be removed in a future release.\"),\n category=RemovedInWagtail60Warning,\n )\n\n renamed_icons = {\n \"chevron-down\": \"arrow-down\",\n \"download-alt\": \"download\",\n \"duplicate\": \"copy\",\n \"tick\": \"check\",\n \"uni52\": \"folder-inverse\",\n }\n\n if name in renamed_icons:\n old_name = name\n name = renamed_icons[name]\n warn(\n (\n f\"Icon `{old_name}` has been renamed to `{name}`, please adopt the new usage instead. \"\n f'Replace `{{% icon name=\"{old_name}\" ... %}}` with `{{% icon name=\"{name}\" ... 
%}}`'\n ),\n category=RemovedInWagtail60Warning,\n )\n\n return {\n \"name\": name,\n # supporting class_name for backwards compatibility\n \"classname\": classname or class_name or \"icon\",\n \"title\": title,\n \"wrapped\": wrapped,\n }\n\n\[email protected]_tag(\"wagtailadmin/shared/status_tag.html\")\ndef status(\n label=None,\n classname=None,\n url=None,\n title=None,\n hidden_label=None,\n attrs=None,\n):\n \"\"\"\n Generates a status-tag css with <span></span> or <a><a/> implementation.\n\n Usage:\n\n {% status label=\"live\" url=\"/test/\" title=\"title\" hidden_label=\"current status:\" classname=\"w-status--primary\" %}\n\n :param label: the status test, (string)\n :param classname: defaults to 'status-tag' if not provided (string)\n :param url: the status url(to specify the use of anchor tag instead of default span), (string)\n :param title: accessible label intended for screen readers (string)\n :param hidden_label : the to specify the additional visually hidden span text, (string)\n :param attrs: any additional HTML attributes (as a string) to append to the root element\n :return: Rendered template snippet (string)\n\n \"\"\"\n return {\n \"label\": label,\n \"attrs\": attrs,\n \"classname\": classname,\n \"hidden_label\": hidden_label,\n \"title\": title,\n \"url\": url,\n }\n\n\[email protected]()\ndef timesince_simple(d):\n \"\"\"\n Returns a simplified timesince:\n 19 hours, 48 minutes ago -> 19 hours ago\n 1 week, 1 day ago -> 1 week ago\n 0 minutes ago -> just now\n \"\"\"\n # Note: Duplicate code in timesince_last_update()\n time_period = timesince(d).split(\",\")[0]\n if time_period == avoid_wrapping(_(\"0 minutes\")):\n return _(\"just now\")\n return _(\"%(time_period)s ago\") % {\"time_period\": time_period}\n\n\[email protected]_tag\ndef timesince_last_update(\n last_update, show_time_prefix=False, user_display_name=\"\", use_shorthand=True\n):\n \"\"\"\n Returns:\n - the time of update if last_update is today, if show_time_prefix=True, the output will be prefixed with \"at \"\n - time since last update otherwise. 
Defaults to the simplified timesince,\n but can return the full string if needed\n \"\"\"\n # translation usage below is intentionally verbose to be easier to work with translations\n\n if last_update.date() == datetime.today().date():\n if timezone.is_aware(last_update):\n time_str = timezone.localtime(last_update).strftime(\"%H:%M\")\n else:\n time_str = last_update.strftime(\"%H:%M\")\n\n if show_time_prefix:\n if user_display_name:\n return _(\"at %(time)s by %(user_display_name)s\") % {\n \"time\": time_str,\n \"user_display_name\": user_display_name,\n }\n else:\n return _(\"at %(time)s\") % {\"time\": time_str}\n else:\n if user_display_name:\n return _(\"%(time)s by %(user_display_name)s\") % {\n \"time\": time_str,\n \"user_display_name\": user_display_name,\n }\n else:\n return time_str\n else:\n if use_shorthand:\n # Note: Duplicate code in timesince_simple()\n time_period = timesince(last_update).split(\",\")[0]\n if time_period == avoid_wrapping(_(\"0 minutes\")):\n if user_display_name:\n return _(\"just now by %(user_display_name)s\") % {\n \"user_display_name\": user_display_name\n }\n else:\n return _(\"just now\")\n else:\n time_period = timesince(last_update)\n\n if user_display_name:\n return _(\"%(time_period)s ago by %(user_display_name)s\") % {\n \"time_period\": time_period,\n \"user_display_name\": user_display_name,\n }\n else:\n return _(\"%(time_period)s ago\") % {\"time_period\": time_period}\n\n\[email protected]\ndef user_display_name(user):\n return get_user_display_name(user)\n\n\[email protected]\ndef format_content_type(content_type):\n return get_content_type_label(content_type)\n\n\[email protected]_tag\ndef i18n_enabled():\n return getattr(settings, \"WAGTAIL_I18N_ENABLED\", False)\n\n\[email protected]_tag\ndef locales():\n return json.dumps(\n [\n {\n \"code\": locale.language_code,\n \"display_name\": force_str(locale.get_display_name()),\n }\n for locale in Locale.objects.all()\n ]\n )\n\n\[email protected]_tag\ndef locale_label_from_id(locale_id):\n \"\"\"\n Returns the Locale display name given its id.\n \"\"\"\n return get_locales_display_names().get(locale_id)\n\n\[email protected]_tag(takes_context=True)\ndef sidebar_collapsed(context):\n request = context.get(\"request\")\n collapsed = request.COOKIES.get(\"wagtail_sidebar_collapsed\", \"0\")\n if collapsed == \"0\":\n return False\n return True\n\n\[email protected]_tag(takes_context=True)\ndef sidebar_props(context):\n request = context[\"request\"]\n search_areas = admin_search_areas.search_items_for_request(request)\n if search_areas:\n search_area = search_areas[0]\n else:\n search_area = None\n\n account_menu = [\n sidebar.LinkMenuItem(\n \"account\", _(\"Account\"), reverse(\"wagtailadmin_account\"), icon_name=\"user\"\n ),\n sidebar.ActionMenuItem(\n \"logout\", _(\"Log out\"), reverse(\"wagtailadmin_logout\"), icon_name=\"logout\"\n ),\n ]\n\n modules = [\n sidebar.WagtailBrandingModule(),\n sidebar.SearchModule(search_area) if search_area else None,\n sidebar.MainMenuModule(\n admin_menu.render_component(request), account_menu, request.user\n ),\n ]\n modules = [module for module in modules if module is not None]\n\n return json_script(\n {\n \"modules\": JSContext().pack(modules),\n },\n element_id=\"wagtail-sidebar-props\",\n )\n\n\[email protected]_tag\ndef get_comments_enabled():\n return getattr(settings, \"WAGTAILADMIN_COMMENTS_ENABLED\", True)\n\n\[email protected]_tag(takes_context=True)\ndef wagtail_config(context):\n request = context[\"request\"]\n config = {\n 
\"CSRF_TOKEN\": get_token(request),\n \"CSRF_HEADER_NAME\": HttpHeaders.parse_header_name(\n getattr(settings, \"CSRF_HEADER_NAME\")\n ),\n \"ADMIN_URLS\": {\n \"DISMISSIBLES\": reverse(\"wagtailadmin_dismissibles\"),\n },\n }\n\n default_settings = {\n \"WAGTAIL_AUTO_UPDATE_PREVIEW\": True,\n \"WAGTAIL_AUTO_UPDATE_PREVIEW_INTERVAL\": 500,\n }\n config.update(\n {\n option: getattr(settings, option, default)\n for option, default in default_settings.items()\n }\n )\n\n return config\n\n\[email protected]_tag\ndef resolve_url(url):\n # Used by wagtailadmin/shared/pagination_nav.html - given an input that may be a URL route\n # name, or a direct URL path, return it as a direct URL path. On failure (or being passed\n # an empty / None value), return empty string\n if not url:\n return \"\"\n\n try:\n return resolve_url_func(url)\n except NoReverseMatch:\n return \"\"\n\n\[email protected]_tag(takes_context=True)\ndef component(context, obj, fallback_render_method=False):\n # Render a component by calling its render_html method, passing request and context from the\n # calling template.\n # If fallback_render_method is true, objects without a render_html method will have render()\n # called instead (with no arguments) - this is to provide deprecation path for things that have\n # been newly upgraded to use the component pattern.\n\n has_render_html_method = hasattr(obj, \"render_html\")\n if fallback_render_method and not has_render_html_method and hasattr(obj, \"render\"):\n return obj.render()\n elif not has_render_html_method:\n raise ValueError(f\"Cannot render {obj!r} as a component\")\n\n return obj.render_html(context)\n\n\nclass FragmentNode(template.Node):\n def __init__(self, nodelist, target_var):\n self.nodelist = nodelist\n self.target_var = target_var\n\n def render(self, context):\n fragment = self.nodelist.render(context) if self.nodelist else \"\"\n context[self.target_var] = fragment\n return \"\"\n\n\[email protected](name=\"fragment\")\ndef fragment(parser, token):\n \"\"\"\n Store a template fragment as a variable.\n\n Usage:\n {% fragment as header_title %}\n {% blocktrans trimmed %}Welcome to the {{ site_name }} Wagtail CMS{% endblocktrans %}\n {% endfragment %}\n\n Copy-paste of slippers\u2019 fragment template tag.\n See https://github.com/mixxorz/slippers/blob/254c720e6bb02eb46ae07d104863fce41d4d3164/slippers/templatetags/slippers.py#L173.\n \"\"\"\n error_message = \"The syntax for fragment is {% fragment as variable_name %}\"\n\n try:\n tag_name, _, target_var = token.split_contents()\n nodelist = parser.parse((\"endfragment\",))\n parser.delete_first_token()\n except ValueError:\n if settings.DEBUG:\n raise template.TemplateSyntaxError(error_message)\n return \"\"\n\n return FragmentNode(nodelist, target_var)\n\n\nclass BlockInclusionNode(template.Node):\n \"\"\"\n Create template-driven tags like Django\u2019s inclusion_tag / InclusionNode, but for block-level tags.\n\n Usage:\n {% my_tag status=\"test\" label=\"Alert\" %}\n Proceed with caution.\n {% endmy_tag %}\n\n Within `my_tag`\u2019s template, the template fragment will be accessible as the {{ children }} context variable.\n\n The output can also be stored as a variable in the parent context:\n\n {% my_tag status=\"test\" label=\"Alert\" as my_variable %}\n Proceed with caution.\n {% endmy_tag %}\n\n Inspired by slippers\u2019 Component Node.\n See https://github.com/mixxorz/slippers/blob/254c720e6bb02eb46ae07d104863fce41d4d3164/slippers/templatetags/slippers.py#L47.\n \"\"\"\n\n def __init__(self, 
nodelist, template, extra_context, target_var=None):\n self.nodelist = nodelist\n self.template = template\n self.extra_context = extra_context\n self.target_var = target_var\n\n def get_context_data(self, parent_context):\n return parent_context\n\n def render(self, context):\n children = self.nodelist.render(context) if self.nodelist else \"\"\n\n values = {\n # Resolve the tag\u2019s parameters within the current context.\n key: value.resolve(context)\n for key, value in self.extra_context.items()\n }\n\n t = context.template.engine.get_template(self.template)\n # Add the `children` variable in the rendered template\u2019s context.\n context_data = self.get_context_data({**values, \"children\": children})\n output = t.render(Context(context_data, autoescape=context.autoescape))\n\n if self.target_var:\n context[self.target_var] = output\n return \"\"\n\n return output\n\n @classmethod\n def handle(cls, parser, token):\n tag_name, *remaining_bits = token.split_contents()\n\n nodelist = parser.parse((f\"end{tag_name}\",))\n parser.delete_first_token()\n\n extra_context = token_kwargs(remaining_bits, parser)\n\n # Allow component fragment to be assigned to a variable\n target_var = None\n if len(remaining_bits) >= 2 and remaining_bits[-2] == \"as\":\n target_var = remaining_bits[-1]\n\n return cls(nodelist, cls.template, extra_context, target_var)\n\n\nclass DialogNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/dialog/dialog.html\"\n\n def get_context_data(self, parent_context):\n context = super().get_context_data(parent_context)\n\n if \"title\" not in context:\n raise TypeError(\"You must supply a title\")\n if \"id\" not in context:\n raise TypeError(\"You must supply an id\")\n\n # Used for determining which icon the message will use\n message_icon_name = {\n \"info\": \"info-circle\",\n \"warning\": \"warning\",\n \"critical\": \"warning\",\n \"success\": \"circle-check\",\n }\n\n message_status = context.get(\"message_status\")\n\n # If there is a message status then determine which icon to use.\n if message_status:\n context[\"message_icon_name\"] = message_icon_name[message_status]\n\n return context\n\n\nregister.tag(\"dialog\", DialogNode.handle)\n\n\nclass HelpBlockNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/help_block.html\"\n\n\nregister.tag(\"help_block\", HelpBlockNode.handle)\n\n\nclass DropdownNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/dropdown/dropdown.html\"\n\n\nregister.tag(\"dropdown\", DropdownNode.handle)\n\n\nclass PanelNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/panel.html\"\n\n\nregister.tag(\"panel\", PanelNode.handle)\n\n\nclass FieldNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/field.html\"\n\n\nregister.tag(\"field\", FieldNode.handle)\n\n\nclass FieldRowNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/forms/field_row.html\"\n\n\nregister.tag(\"field_row\", FieldRowNode.handle)\n\n\n# Button used to open dialogs\[email protected]_tag(\"wagtailadmin/shared/dialog/dialog_toggle.html\")\ndef dialog_toggle(dialog_id, classname=\"\", text=None):\n if not dialog_id:\n raise ValueError(\"You must supply the dialog ID\")\n\n return {\n \"classname\": classname,\n \"text\": text,\n # dialog_id must match the ID of the dialog you are toggling\n \"dialog_id\": dialog_id,\n }\n\n\[email protected]_tag()\ndef workflow_status_with_date(workflow_state):\n translation_context = {\n \"finished_at\": naturaltime(workflow_state.current_task_state.finished_at),\n \"started_at\": 
naturaltime(workflow_state.current_task_state.started_at),\n \"task_name\": workflow_state.current_task_state.task.name,\n \"status_display\": workflow_state.get_status_display,\n }\n\n if workflow_state.status == \"needs_changes\":\n return _(\"Changes requested %(finished_at)s\") % translation_context\n\n if workflow_state.status == \"in_progress\":\n return _(\"Sent to %(task_name)s %(started_at)s\") % translation_context\n\n return _(\"%(status_display)s %(task_name)s %(started_at)s\") % translation_context\n\n\[email protected]_tag(\"wagtailadmin/shared/human_readable_date.html\")\ndef human_readable_date(date, description=None):\n return {\n \"date\": date,\n \"description\": description,\n }\n", "path": "wagtail/admin/templatetags/wagtailadmin_tags.py"}], "after_files": [{"content": "import json\nfrom datetime import datetime\nfrom urllib.parse import urljoin\nfrom warnings import warn\n\nfrom django import template\nfrom django.conf import settings\nfrom django.contrib.admin.utils import quote\nfrom django.contrib.humanize.templatetags.humanize import intcomma, naturaltime\nfrom django.contrib.messages.constants import DEFAULT_TAGS as MESSAGE_TAGS\nfrom django.http.request import HttpHeaders\nfrom django.middleware.csrf import get_token\nfrom django.shortcuts import resolve_url as resolve_url_func\nfrom django.template import Context\nfrom django.template.base import token_kwargs\nfrom django.template.defaultfilters import stringfilter\nfrom django.templatetags.static import static\nfrom django.urls import reverse\nfrom django.urls.exceptions import NoReverseMatch\nfrom django.utils import timezone\nfrom django.utils.encoding import force_str\nfrom django.utils.html import avoid_wrapping, json_script\nfrom django.utils.http import urlencode\nfrom django.utils.safestring import mark_safe\nfrom django.utils.timesince import timesince\nfrom django.utils.translation import gettext_lazy as _\n\nfrom wagtail import hooks\nfrom wagtail.admin.admin_url_finder import AdminURLFinder\nfrom wagtail.admin.localization import get_js_translation_strings\nfrom wagtail.admin.menu import admin_menu\nfrom wagtail.admin.search import admin_search_areas\nfrom wagtail.admin.staticfiles import versioned_static as versioned_static_func\nfrom wagtail.admin.ui import sidebar\nfrom wagtail.admin.utils import (\n get_admin_base_url,\n get_latest_str,\n get_user_display_name,\n get_valid_next_url_from_request,\n)\nfrom wagtail.admin.views.bulk_action.registry import bulk_action_registry\nfrom wagtail.admin.widgets import ButtonWithDropdown, PageListingButton\nfrom wagtail.coreutils import (\n camelcase_to_underscore,\n escape_script,\n get_content_type_label,\n get_locales_display_names,\n)\nfrom wagtail.coreutils import cautious_slugify as _cautious_slugify\nfrom wagtail.models import (\n CollectionViewRestriction,\n Locale,\n Page,\n PageViewRestriction,\n UserPagePermissionsProxy,\n)\nfrom wagtail.permission_policies.pages import PagePermissionPolicy\nfrom wagtail.telepath import JSContext\nfrom wagtail.users.utils import get_gravatar_url\nfrom wagtail.utils.deprecation import RemovedInWagtail60Warning\n\nregister = template.Library()\n\nregister.filter(\"intcomma\", intcomma)\nregister.filter(\"naturaltime\", naturaltime)\n\n\[email protected]_tag(\"wagtailadmin/shared/breadcrumbs.html\", takes_context=True)\ndef breadcrumbs(\n context,\n page,\n url_name,\n url_root_name=None,\n include_self=True,\n is_expanded=False,\n page_perms=None,\n querystring_value=None,\n trailing_breadcrumb_title=None,\n 
classname=None,\n):\n user = context[\"request\"].user\n\n # find the closest common ancestor of the pages that this user has direct explore permission\n # (i.e. add/edit/publish/lock) over; this will be the root of the breadcrumb\n cca = PagePermissionPolicy().explorable_root_instance(user)\n if not cca:\n return {\"pages\": Page.objects.none()}\n\n return {\n \"pages\": page.get_ancestors(inclusive=include_self)\n .descendant_of(cca, inclusive=True)\n .specific(),\n \"current_page\": page,\n \"is_expanded\": is_expanded,\n \"page_perms\": page_perms,\n \"querystring_value\": querystring_value or \"\",\n \"trailing_breadcrumb_title\": trailing_breadcrumb_title, # Only used in collapsible breadcrumb templates\n \"url_name\": url_name,\n \"url_root_name\": url_root_name,\n \"classname\": classname,\n }\n\n\[email protected]_tag(\"wagtailadmin/shared/search_other.html\", takes_context=True)\ndef search_other(context, current=None):\n request = context[\"request\"]\n\n return {\n \"options_html\": admin_search_areas.render_html(request, current),\n \"request\": request,\n }\n\n\[email protected](\"ellipsistrim\")\ndef ellipsistrim(value, max_length):\n if len(value) > max_length:\n truncd_val = value[:max_length]\n if not len(value) == (max_length + 1) and value[max_length + 1] != \" \":\n truncd_val = truncd_val[: truncd_val.rfind(\" \")]\n return truncd_val + \"\u2026\"\n return value\n\n\[email protected]\ndef fieldtype(bound_field):\n try:\n return camelcase_to_underscore(bound_field.field.__class__.__name__)\n except AttributeError:\n try:\n return camelcase_to_underscore(bound_field.__class__.__name__)\n except AttributeError:\n return \"\"\n\n\[email protected]\ndef widgettype(bound_field):\n try:\n return camelcase_to_underscore(bound_field.field.widget.__class__.__name__)\n except AttributeError:\n try:\n return camelcase_to_underscore(bound_field.widget.__class__.__name__)\n except AttributeError:\n return \"\"\n\n\ndef _get_user_page_permissions(context):\n # RemovedInWagtail60Warning: Remove this function\n\n # Create a UserPagePermissionsProxy object to represent the user's global permissions, and\n # cache it in the context for the duration of the page request, if one does not exist already\n if \"user_page_permissions\" not in context:\n context[\"user_page_permissions\"] = UserPagePermissionsProxy(\n context[\"request\"].user\n )\n return context[\"user_page_permissions\"]\n\n\[email protected]_tag(takes_context=True)\ndef page_permissions(context, page):\n \"\"\"\n Usage: {% page_permissions page as page_perms %}\n Sets the variable 'page_perms' to a PagePermissionTester object that can be queried to find out\n what actions the current logged-in user can perform on the given page.\n \"\"\"\n # RemovedInWagtail60Warning: Keep the UserPagePermissionsProxy object in the context\n # for backwards compatibility during the deprecation period, even though we don't use it\n _get_user_page_permissions(context)\n return page.permissions_for_user(context[\"request\"].user)\n\n\[email protected]_tag\ndef is_page(obj):\n \"\"\"\n Usage: {% is_page obj as is_page %}\n Sets the variable 'is_page' to True if the given object is a Page instance,\n False otherwise. Useful in shared templates that accept both Page and\n non-Page objects (e.g. 
snippets with the optional features enabled).\n \"\"\"\n return isinstance(obj, Page)\n\n\[email protected]_tag(takes_context=True)\ndef admin_edit_url(context, obj, user=None):\n \"\"\"\n Usage: {% admin_edit_url obj user %}\n Returns the URL of the edit view for the given object and user using the\n registered AdminURLFinder for the object. The AdminURLFinder instance is\n cached in the context for the duration of the page request.\n The user argument is optional and defaults to request.user if request is\n available in the context.\n \"\"\"\n if not user and \"request\" in context:\n user = context[\"request\"].user\n if \"admin_url_finder\" not in context:\n context[\"admin_url_finder\"] = AdminURLFinder(user)\n return context[\"admin_url_finder\"].get_edit_url(obj)\n\n\[email protected]_tag\ndef admin_url_name(obj, action):\n \"\"\"\n Usage: {% admin_url_name obj action %}\n Returns the URL name of the given action for the given object, e.g.\n 'wagtailadmin_pages:edit' for a Page object and 'edit' action.\n Works with pages and snippets only.\n \"\"\"\n if isinstance(obj, Page):\n return f\"wagtailadmin_pages:{action}\"\n return obj.snippet_viewset.get_url_name(action)\n\n\[email protected]_tag\ndef latest_str(obj):\n \"\"\"\n Usage: {% latest_str obj %}\n Returns the latest string representation of an object, making use of the\n latest revision where available to reflect draft changes.\n \"\"\"\n return get_latest_str(obj)\n\n\[email protected]_tag\ndef classnames(*classes):\n \"\"\"\n Usage <div class=\"{% classnames \"w-base\" classname active|yesno:\"w-base--active,\" any_other_var %}\"></div>\n Returns any args as a space-separated joined string for using in HTML class names.\n \"\"\"\n return \" \".join([classname.strip() for classname in classes if classname])\n\n\[email protected]_tag(takes_context=True)\ndef test_collection_is_public(context, collection):\n \"\"\"\n Usage: {% test_collection_is_public collection as is_public %}\n Sets 'is_public' to True iff there are no collection view restrictions in place\n on this collection.\n Caches the list of collection view restrictions in the context, to avoid repeated\n DB queries on repeated calls.\n \"\"\"\n if \"all_collection_view_restrictions\" not in context:\n context[\n \"all_collection_view_restrictions\"\n ] = CollectionViewRestriction.objects.select_related(\"collection\").values_list(\n \"collection__name\", flat=True\n )\n\n is_private = collection.name in context[\"all_collection_view_restrictions\"]\n\n return not is_private\n\n\[email protected]_tag(takes_context=True)\ndef test_page_is_public(context, page):\n \"\"\"\n Usage: {% test_page_is_public page as is_public %}\n Sets 'is_public' to True iff there are no page view restrictions in place on\n this page.\n Caches the list of page view restrictions on the request, to avoid repeated\n DB queries on repeated calls.\n \"\"\"\n if not hasattr(context[\"request\"], \"all_page_view_restriction_paths\"):\n context[\n \"request\"\n ].all_page_view_restriction_paths = PageViewRestriction.objects.select_related(\n \"page\"\n ).values_list(\n \"page__path\", flat=True\n )\n\n is_private = any(\n page.path.startswith(restricted_path)\n for restricted_path in context[\"request\"].all_page_view_restriction_paths\n )\n\n return not is_private\n\n\[email protected]_tag\ndef hook_output(hook_name):\n \"\"\"\n Example: {% hook_output 'insert_global_admin_css' %}\n Whenever we have a hook whose functions take no parameters and return a string, this tag can be used\n to output the 
concatenation of all of those return values onto the page.\n Note that the output is not escaped - it is the hook function's responsibility to escape unsafe content.\n \"\"\"\n snippets = [fn() for fn in hooks.get_hooks(hook_name)]\n\n if hook_name == \"insert_editor_css\" and snippets:\n warn(\n \"The `insert_editor_css` hook is deprecated - use `insert_global_admin_css` instead.\",\n category=RemovedInWagtail60Warning,\n )\n\n return mark_safe(\"\".join(snippets))\n\n\[email protected]_tag\ndef base_url_setting(default=None):\n return get_admin_base_url() or default\n\n\[email protected]_tag\ndef allow_unicode_slugs():\n return getattr(settings, \"WAGTAIL_ALLOW_UNICODE_SLUGS\", True)\n\n\nclass EscapeScriptNode(template.Node):\n TAG_NAME = \"escapescript\"\n\n def __init__(self, nodelist):\n super().__init__()\n self.nodelist = nodelist\n\n def render(self, context):\n out = self.nodelist.render(context)\n return escape_script(out)\n\n @classmethod\n def handle(cls, parser, token):\n nodelist = parser.parse((\"end\" + EscapeScriptNode.TAG_NAME,))\n parser.delete_first_token()\n return cls(nodelist)\n\n\nregister.tag(EscapeScriptNode.TAG_NAME, EscapeScriptNode.handle)\n\n\n# Helpers for Widget.render_with_errors, our extension to the Django widget API that allows widgets to\n# take on the responsibility of rendering their own error messages\[email protected]\ndef render_with_errors(bound_field):\n \"\"\"\n Usage: {{ field|render_with_errors }} as opposed to {{ field }}.\n If the field (a BoundField instance) has errors on it, and the associated widget implements\n a render_with_errors method, call that; otherwise, call the regular widget rendering mechanism.\n \"\"\"\n widget = bound_field.field.widget\n if bound_field.errors and hasattr(widget, \"render_with_errors\"):\n return widget.render_with_errors(\n bound_field.html_name,\n bound_field.value(),\n attrs={\"id\": bound_field.auto_id},\n errors=bound_field.errors,\n )\n else:\n attrs = {}\n # If the widget doesn't have an aria-describedby attribute,\n # and the field has help text, and the field has an id,\n # add an aria-describedby attribute pointing to the help text.\n # In this case, the corresponding help text element's id is set in the\n # wagtailadmin/shared/field.html template.\n\n # In Django 5.0 and up, this is done automatically, but we want to keep\n # this code because we use a different convention for the help text id\n # (we use -helptext suffix instead of Django's _helptext).\n if (\n not bound_field.field.widget.attrs.get(\"aria-describedby\")\n and bound_field.field.help_text\n and bound_field.id_for_label\n ):\n attrs[\"aria-describedby\"] = f\"{bound_field.id_for_label}-helptext\"\n return bound_field.as_widget(attrs=attrs)\n\n\[email protected]\ndef has_unrendered_errors(bound_field):\n \"\"\"\n Return true if this field has errors that were not accounted for by render_with_errors, because\n the widget does not support the render_with_errors method\n \"\"\"\n return bound_field.errors and not hasattr(\n bound_field.field.widget, \"render_with_errors\"\n )\n\n\[email protected](is_safe=True)\n@stringfilter\ndef cautious_slugify(value):\n return _cautious_slugify(value)\n\n\[email protected]_tag(takes_context=True)\ndef querystring(context, **kwargs):\n \"\"\"\n Print out the current querystring. 
Any keyword arguments to this template\n tag will be added to the querystring before it is printed out.\n\n <a href=\"/page/{% querystring key='value' %}\">\n\n Will result in something like:\n\n <a href=\"/page/?foo=bar&key=value\">\n \"\"\"\n request = context[\"request\"]\n querydict = request.GET.copy()\n # Can't do querydict.update(kwargs), because QueryDict.update() appends to\n # the list of values, instead of replacing the values.\n for key, value in kwargs.items():\n if value is None:\n # Remove the key if the value is None\n querydict.pop(key, None)\n else:\n # Set the key otherwise\n querydict[key] = str(value)\n\n return \"?\" + querydict.urlencode()\n\n\[email protected]_tag(takes_context=True)\ndef pagination_querystring(context, page_number, page_key=\"p\"):\n \"\"\"\n Print out a querystring with an updated page number:\n\n {% if page.has_next_page %}\n <a href=\"{% pagination_link page.next_page_number %}\">Next page</a>\n {% endif %}\n \"\"\"\n return querystring(context, **{page_key: page_number})\n\n\[email protected]_tag(\n \"wagtailadmin/pages/listing/_pagination.html\", takes_context=True\n)\ndef paginate(context, page, base_url=\"\", page_key=\"p\", classname=\"\"):\n \"\"\"\n Print pagination previous/next links, and the page count. Take the\n following arguments:\n\n page\n The current page of results. This should be a Django pagination `Page`\n instance\n\n base_url\n The base URL of the next/previous page, with no querystring.\n This is optional, and defaults to the current page by just printing the\n querystring for the next/previous page.\n\n page_key\n The name of the page variable in the query string. Defaults to 'p'.\n\n classname\n Extra classes to add to the next/previous links.\n \"\"\"\n request = context[\"request\"]\n return {\n \"base_url\": base_url,\n \"classname\": classname,\n \"request\": request,\n \"page\": page,\n \"page_key\": page_key,\n \"paginator\": page.paginator,\n }\n\n\[email protected]_tag(\"wagtailadmin/pages/listing/_buttons.html\", takes_context=True)\ndef page_listing_buttons(context, page, page_perms):\n next_url = context[\"request\"].path\n button_hooks = hooks.get_hooks(\"register_page_listing_buttons\")\n\n buttons = []\n for hook in button_hooks:\n buttons.extend(hook(page, page_perms, next_url))\n\n buttons.sort()\n\n for hook in hooks.get_hooks(\"construct_page_listing_buttons\"):\n hook(buttons, page, page_perms, context)\n\n return {\"page\": page, \"buttons\": buttons}\n\n\[email protected]_tag(\n \"wagtailadmin/pages/listing/_page_header_buttons.html\", takes_context=True\n)\ndef page_header_buttons(context, page, page_perms):\n next_url = context[\"request\"].path\n button_hooks = hooks.get_hooks(\"register_page_header_buttons\")\n\n buttons = []\n for hook in button_hooks:\n buttons.extend(hook(page, page_perms, next_url))\n\n buttons.sort()\n return {\n \"page\": page,\n \"buttons\": buttons,\n \"title\": _(\"Actions\"),\n \"icon_name\": \"dots-horizontal\",\n \"button_classes\": [\n \"w-p-0\",\n \"w-w-12\",\n \"w-h-slim-header\",\n \"hover:w-scale-110\",\n \"w-transition\",\n \"w-outline-offset-inside\",\n \"w-relative\",\n \"w-z-30\",\n ],\n }\n\n\[email protected]_tag(\"wagtailadmin/pages/listing/_buttons.html\", takes_context=True)\ndef bulk_action_choices(context, app_label, model_name):\n bulk_actions_list = list(\n bulk_action_registry.get_bulk_actions_for_model(app_label, model_name)\n )\n bulk_actions_list.sort(key=lambda x: x.action_priority)\n\n bulk_action_more_list = []\n if len(bulk_actions_list) > 
4:\n bulk_action_more_list = bulk_actions_list[4:]\n bulk_actions_list = bulk_actions_list[:4]\n\n next_url = get_valid_next_url_from_request(context[\"request\"])\n if not next_url:\n next_url = context[\"request\"].path\n\n bulk_action_buttons = [\n PageListingButton(\n action.display_name,\n reverse(\n \"wagtail_bulk_action\", args=[app_label, model_name, action.action_type]\n )\n + \"?\"\n + urlencode({\"next\": next_url}),\n attrs={\"aria-label\": action.aria_label},\n priority=action.action_priority,\n classes=action.classes | {\"bulk-action-btn\"},\n )\n for action in bulk_actions_list\n ]\n\n if bulk_action_more_list:\n more_button = ButtonWithDropdown(\n label=_(\"More\"),\n attrs={\"title\": _(\"More bulk actions\")},\n button_classes={\"button\", \"button-secondary\", \"button-small\"},\n buttons_data=[\n {\n \"label\": action.display_name,\n \"url\": reverse(\n \"wagtail_bulk_action\",\n args=[app_label, model_name, action.action_type],\n )\n + \"?\"\n + urlencode({\"next\": next_url}),\n \"attrs\": {\"aria-label\": action.aria_label},\n \"priority\": action.action_priority,\n \"classes\": {\"bulk-action-btn\"},\n }\n for action in bulk_action_more_list\n ],\n )\n bulk_action_buttons.append(more_button)\n\n return {\"buttons\": bulk_action_buttons}\n\n\[email protected]_tag(\"wagtailadmin/shared/avatar.html\")\ndef avatar(user=None, classname=None, size=None, tooltip=None):\n \"\"\"\n Displays a user avatar using the avatar template\n Usage:\n {% load wagtailadmin_tags %}\n ...\n {% avatar user=request.user size='small' tooltip='JaneDoe' %}\n :param user: the user to get avatar information from (User)\n :param size: default None (None|'small'|'large'|'square')\n :param tooltip: Optional tooltip to display under the avatar (string)\n :return: Rendered template snippet\n \"\"\"\n return {\"user\": user, \"classname\": classname, \"size\": size, \"tooltip\": tooltip}\n\n\[email protected]_tag\ndef message_level_tag(message):\n \"\"\"\n Return the tag for this message's level as defined in\n django.contrib.messages.constants.DEFAULT_TAGS, ignoring the project-level\n MESSAGE_TAGS setting (which end-users might customise).\n \"\"\"\n return MESSAGE_TAGS.get(message.level)\n\n\[email protected]_tag\ndef message_tags(message):\n level_tag = message_level_tag(message)\n if message.extra_tags and level_tag:\n return message.extra_tags + \" \" + level_tag\n elif message.extra_tags:\n return message.extra_tags\n elif level_tag:\n return level_tag\n else:\n return \"\"\n\n\[email protected](\"abs\")\ndef _abs(val):\n return abs(val)\n\n\[email protected]\ndef admin_urlquote(value):\n return quote(value)\n\n\[email protected]_tag\ndef avatar_url(user, size=50, gravatar_only=False):\n \"\"\"\n A template tag that receives a user and size and return\n the appropriate avatar url for that user.\n Example usage: {% avatar_url request.user 50 %}\n \"\"\"\n\n if (\n not gravatar_only\n and hasattr(user, \"wagtail_userprofile\")\n and user.wagtail_userprofile.avatar\n ):\n return user.wagtail_userprofile.avatar.url\n\n if hasattr(user, \"email\"):\n gravatar_url = get_gravatar_url(user.email, size=size)\n if gravatar_url is not None:\n return gravatar_url\n\n return versioned_static_func(\"wagtailadmin/images/default-user-avatar.png\")\n\n\[email protected]_tag(takes_context=True)\ndef admin_theme_classname(context):\n \"\"\"\n Retrieves the theme name for the current user.\n \"\"\"\n user = context[\"request\"].user\n theme_name = (\n user.wagtail_userprofile.theme\n if hasattr(user, 
\"wagtail_userprofile\")\n else \"system\"\n )\n return f\"w-theme-{theme_name}\"\n\n\[email protected]_tag\ndef js_translation_strings():\n return mark_safe(json.dumps(get_js_translation_strings()))\n\n\[email protected]_tag\ndef notification_static(path):\n \"\"\"\n Variant of the {% static %}` tag for use in notification emails - tries to form\n a full URL using WAGTAILADMIN_BASE_URL if the static URL isn't already a full URL.\n \"\"\"\n return urljoin(base_url_setting(), static(path))\n\n\[email protected]_tag\ndef versioned_static(path):\n \"\"\"\n Wrapper for Django's static file finder to append a cache-busting query parameter\n that updates on each Wagtail version\n \"\"\"\n return versioned_static_func(path)\n\n\[email protected]_tag(\"wagtailadmin/shared/icon.html\", takes_context=False)\ndef icon(name=None, classname=None, title=None, wrapped=False, class_name=None):\n \"\"\"\n Abstracts away the actual icon implementation.\n\n Usage:\n {% load wagtailadmin_tags %}\n ...\n {% icon name=\"cogs\" classname=\"icon--red\" title=\"Settings\" %}\n\n :param name: the icon name/id, required (string)\n :param classname: defaults to 'icon' if not provided (string)\n :param title: accessible label intended for screen readers (string)\n :return: Rendered template snippet (string)\n \"\"\"\n if not name:\n raise ValueError(\"You must supply an icon name\")\n\n if class_name:\n warn(\n (\n \"Icon template tag `class_name` has been renamed to `classname`, please adopt the new usage instead. \"\n f'Replace `{{% icon ... class_name=\"{class_name}\" %}}` with `{{% icon ... classname=\"{class_name}\" %}}`'\n ),\n category=RemovedInWagtail60Warning,\n )\n\n deprecated_icons = [\n \"angle-double-left\",\n \"angle-double-right\",\n \"arrow-down-big\",\n \"arrow-up-big\",\n \"arrows-up-down\",\n \"chain-broken\",\n \"dots-vertical\",\n \"ellipsis-v\",\n \"horizontalrule\",\n \"repeat\",\n \"reset\",\n \"undo\",\n \"wagtail-inverse\",\n ]\n\n if name in deprecated_icons:\n warn(\n (f\"Icon `{name}` is deprecated and will be removed in a future release.\"),\n category=RemovedInWagtail60Warning,\n )\n\n renamed_icons = {\n \"chevron-down\": \"arrow-down\",\n \"download-alt\": \"download\",\n \"duplicate\": \"copy\",\n \"tick\": \"check\",\n \"uni52\": \"folder-inverse\",\n }\n\n if name in renamed_icons:\n old_name = name\n name = renamed_icons[name]\n warn(\n (\n f\"Icon `{old_name}` has been renamed to `{name}`, please adopt the new usage instead. \"\n f'Replace `{{% icon name=\"{old_name}\" ... %}}` with `{{% icon name=\"{name}\" ... 
%}}`'\n ),\n category=RemovedInWagtail60Warning,\n )\n\n return {\n \"name\": name,\n # supporting class_name for backwards compatibility\n \"classname\": classname or class_name or \"icon\",\n \"title\": title,\n \"wrapped\": wrapped,\n }\n\n\[email protected]_tag(\"wagtailadmin/shared/status_tag.html\")\ndef status(\n label=None,\n classname=None,\n url=None,\n title=None,\n hidden_label=None,\n attrs=None,\n):\n \"\"\"\n Generates a status-tag css with <span></span> or <a><a/> implementation.\n\n Usage:\n\n {% status label=\"live\" url=\"/test/\" title=\"title\" hidden_label=\"current status:\" classname=\"w-status--primary\" %}\n\n :param label: the status test, (string)\n :param classname: defaults to 'status-tag' if not provided (string)\n :param url: the status url(to specify the use of anchor tag instead of default span), (string)\n :param title: accessible label intended for screen readers (string)\n :param hidden_label : the to specify the additional visually hidden span text, (string)\n :param attrs: any additional HTML attributes (as a string) to append to the root element\n :return: Rendered template snippet (string)\n\n \"\"\"\n return {\n \"label\": label,\n \"attrs\": attrs,\n \"classname\": classname,\n \"hidden_label\": hidden_label,\n \"title\": title,\n \"url\": url,\n }\n\n\[email protected]()\ndef timesince_simple(d):\n \"\"\"\n Returns a simplified timesince:\n 19 hours, 48 minutes ago -> 19 hours ago\n 1 week, 1 day ago -> 1 week ago\n 0 minutes ago -> just now\n \"\"\"\n # Note: Duplicate code in timesince_last_update()\n time_period = timesince(d).split(\",\")[0]\n if time_period == avoid_wrapping(_(\"0 minutes\")):\n return _(\"just now\")\n return _(\"%(time_period)s ago\") % {\"time_period\": time_period}\n\n\[email protected]_tag\ndef timesince_last_update(\n last_update, show_time_prefix=False, user_display_name=\"\", use_shorthand=True\n):\n \"\"\"\n Returns:\n - the time of update if last_update is today, if show_time_prefix=True, the output will be prefixed with \"at \"\n - time since last update otherwise. 
Defaults to the simplified timesince,\n but can return the full string if needed\n \"\"\"\n # translation usage below is intentionally verbose to be easier to work with translations\n\n if last_update.date() == datetime.today().date():\n if timezone.is_aware(last_update):\n time_str = timezone.localtime(last_update).strftime(\"%H:%M\")\n else:\n time_str = last_update.strftime(\"%H:%M\")\n\n if show_time_prefix:\n if user_display_name:\n return _(\"at %(time)s by %(user_display_name)s\") % {\n \"time\": time_str,\n \"user_display_name\": user_display_name,\n }\n else:\n return _(\"at %(time)s\") % {\"time\": time_str}\n else:\n if user_display_name:\n return _(\"%(time)s by %(user_display_name)s\") % {\n \"time\": time_str,\n \"user_display_name\": user_display_name,\n }\n else:\n return time_str\n else:\n if use_shorthand:\n # Note: Duplicate code in timesince_simple()\n time_period = timesince(last_update).split(\",\")[0]\n if time_period == avoid_wrapping(_(\"0 minutes\")):\n if user_display_name:\n return _(\"just now by %(user_display_name)s\") % {\n \"user_display_name\": user_display_name\n }\n else:\n return _(\"just now\")\n else:\n time_period = timesince(last_update)\n\n if user_display_name:\n return _(\"%(time_period)s ago by %(user_display_name)s\") % {\n \"time_period\": time_period,\n \"user_display_name\": user_display_name,\n }\n else:\n return _(\"%(time_period)s ago\") % {\"time_period\": time_period}\n\n\[email protected]\ndef user_display_name(user):\n return get_user_display_name(user)\n\n\[email protected]\ndef format_content_type(content_type):\n return get_content_type_label(content_type)\n\n\[email protected]_tag\ndef i18n_enabled():\n return getattr(settings, \"WAGTAIL_I18N_ENABLED\", False)\n\n\[email protected]_tag\ndef locales():\n return json.dumps(\n [\n {\n \"code\": locale.language_code,\n \"display_name\": force_str(locale.get_display_name()),\n }\n for locale in Locale.objects.all()\n ]\n )\n\n\[email protected]_tag\ndef locale_label_from_id(locale_id):\n \"\"\"\n Returns the Locale display name given its id.\n \"\"\"\n return get_locales_display_names().get(locale_id)\n\n\[email protected]_tag(takes_context=True)\ndef sidebar_collapsed(context):\n request = context.get(\"request\")\n collapsed = request.COOKIES.get(\"wagtail_sidebar_collapsed\", \"0\")\n if collapsed == \"0\":\n return False\n return True\n\n\[email protected]_tag(takes_context=True)\ndef sidebar_props(context):\n request = context[\"request\"]\n search_areas = admin_search_areas.search_items_for_request(request)\n if search_areas:\n search_area = search_areas[0]\n else:\n search_area = None\n\n account_menu = [\n sidebar.LinkMenuItem(\n \"account\", _(\"Account\"), reverse(\"wagtailadmin_account\"), icon_name=\"user\"\n ),\n sidebar.ActionMenuItem(\n \"logout\", _(\"Log out\"), reverse(\"wagtailadmin_logout\"), icon_name=\"logout\"\n ),\n ]\n\n modules = [\n sidebar.WagtailBrandingModule(),\n sidebar.SearchModule(search_area) if search_area else None,\n sidebar.MainMenuModule(\n admin_menu.render_component(request), account_menu, request.user\n ),\n ]\n modules = [module for module in modules if module is not None]\n\n return json_script(\n {\n \"modules\": JSContext().pack(modules),\n },\n element_id=\"wagtail-sidebar-props\",\n )\n\n\[email protected]_tag\ndef get_comments_enabled():\n return getattr(settings, \"WAGTAILADMIN_COMMENTS_ENABLED\", True)\n\n\[email protected]_tag(takes_context=True)\ndef wagtail_config(context):\n request = context[\"request\"]\n config = {\n 
\"CSRF_TOKEN\": get_token(request),\n \"CSRF_HEADER_NAME\": HttpHeaders.parse_header_name(\n getattr(settings, \"CSRF_HEADER_NAME\")\n ),\n \"ADMIN_URLS\": {\n \"DISMISSIBLES\": reverse(\"wagtailadmin_dismissibles\"),\n },\n }\n\n default_settings = {\n \"WAGTAIL_AUTO_UPDATE_PREVIEW\": True,\n \"WAGTAIL_AUTO_UPDATE_PREVIEW_INTERVAL\": 500,\n }\n config.update(\n {\n option: getattr(settings, option, default)\n for option, default in default_settings.items()\n }\n )\n\n return config\n\n\[email protected]_tag\ndef resolve_url(url):\n # Used by wagtailadmin/shared/pagination_nav.html - given an input that may be a URL route\n # name, or a direct URL path, return it as a direct URL path. On failure (or being passed\n # an empty / None value), return empty string\n if not url:\n return \"\"\n\n try:\n return resolve_url_func(url)\n except NoReverseMatch:\n return \"\"\n\n\[email protected]_tag(takes_context=True)\ndef component(context, obj, fallback_render_method=False):\n # Render a component by calling its render_html method, passing request and context from the\n # calling template.\n # If fallback_render_method is true, objects without a render_html method will have render()\n # called instead (with no arguments) - this is to provide deprecation path for things that have\n # been newly upgraded to use the component pattern.\n\n has_render_html_method = hasattr(obj, \"render_html\")\n if fallback_render_method and not has_render_html_method and hasattr(obj, \"render\"):\n return obj.render()\n elif not has_render_html_method:\n raise ValueError(f\"Cannot render {obj!r} as a component\")\n\n return obj.render_html(context)\n\n\nclass FragmentNode(template.Node):\n def __init__(self, nodelist, target_var):\n self.nodelist = nodelist\n self.target_var = target_var\n\n def render(self, context):\n fragment = self.nodelist.render(context) if self.nodelist else \"\"\n context[self.target_var] = fragment\n return \"\"\n\n\[email protected](name=\"fragment\")\ndef fragment(parser, token):\n \"\"\"\n Store a template fragment as a variable.\n\n Usage:\n {% fragment as header_title %}\n {% blocktrans trimmed %}Welcome to the {{ site_name }} Wagtail CMS{% endblocktrans %}\n {% endfragment %}\n\n Copy-paste of slippers\u2019 fragment template tag.\n See https://github.com/mixxorz/slippers/blob/254c720e6bb02eb46ae07d104863fce41d4d3164/slippers/templatetags/slippers.py#L173.\n \"\"\"\n error_message = \"The syntax for fragment is {% fragment as variable_name %}\"\n\n try:\n tag_name, _, target_var = token.split_contents()\n nodelist = parser.parse((\"endfragment\",))\n parser.delete_first_token()\n except ValueError:\n if settings.DEBUG:\n raise template.TemplateSyntaxError(error_message)\n return \"\"\n\n return FragmentNode(nodelist, target_var)\n\n\nclass BlockInclusionNode(template.Node):\n \"\"\"\n Create template-driven tags like Django\u2019s inclusion_tag / InclusionNode, but for block-level tags.\n\n Usage:\n {% my_tag status=\"test\" label=\"Alert\" %}\n Proceed with caution.\n {% endmy_tag %}\n\n Within `my_tag`\u2019s template, the template fragment will be accessible as the {{ children }} context variable.\n\n The output can also be stored as a variable in the parent context:\n\n {% my_tag status=\"test\" label=\"Alert\" as my_variable %}\n Proceed with caution.\n {% endmy_tag %}\n\n Inspired by slippers\u2019 Component Node.\n See https://github.com/mixxorz/slippers/blob/254c720e6bb02eb46ae07d104863fce41d4d3164/slippers/templatetags/slippers.py#L47.\n \"\"\"\n\n def __init__(self, 
nodelist, template, extra_context, target_var=None):\n self.nodelist = nodelist\n self.template = template\n self.extra_context = extra_context\n self.target_var = target_var\n\n def get_context_data(self, parent_context):\n return parent_context\n\n def render(self, context):\n children = self.nodelist.render(context) if self.nodelist else \"\"\n\n values = {\n # Resolve the tag\u2019s parameters within the current context.\n key: value.resolve(context)\n for key, value in self.extra_context.items()\n }\n\n t = context.template.engine.get_template(self.template)\n # Add the `children` variable in the rendered template\u2019s context.\n context_data = self.get_context_data({**values, \"children\": children})\n output = t.render(Context(context_data, autoescape=context.autoescape))\n\n if self.target_var:\n context[self.target_var] = output\n return \"\"\n\n return output\n\n @classmethod\n def handle(cls, parser, token):\n tag_name, *remaining_bits = token.split_contents()\n\n nodelist = parser.parse((f\"end{tag_name}\",))\n parser.delete_first_token()\n\n extra_context = token_kwargs(remaining_bits, parser)\n\n # Allow component fragment to be assigned to a variable\n target_var = None\n if len(remaining_bits) >= 2 and remaining_bits[-2] == \"as\":\n target_var = remaining_bits[-1]\n\n return cls(nodelist, cls.template, extra_context, target_var)\n\n\nclass DialogNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/dialog/dialog.html\"\n\n def get_context_data(self, parent_context):\n context = super().get_context_data(parent_context)\n\n if \"title\" not in context:\n raise TypeError(\"You must supply a title\")\n if \"id\" not in context:\n raise TypeError(\"You must supply an id\")\n\n # Used for determining which icon the message will use\n message_icon_name = {\n \"info\": \"info-circle\",\n \"warning\": \"warning\",\n \"critical\": \"warning\",\n \"success\": \"circle-check\",\n }\n\n message_status = context.get(\"message_status\")\n\n # If there is a message status then determine which icon to use.\n if message_status:\n context[\"message_icon_name\"] = message_icon_name[message_status]\n\n return context\n\n\nregister.tag(\"dialog\", DialogNode.handle)\n\n\nclass HelpBlockNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/help_block.html\"\n\n\nregister.tag(\"help_block\", HelpBlockNode.handle)\n\n\nclass DropdownNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/dropdown/dropdown.html\"\n\n\nregister.tag(\"dropdown\", DropdownNode.handle)\n\n\nclass PanelNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/panel.html\"\n\n\nregister.tag(\"panel\", PanelNode.handle)\n\n\nclass FieldNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/field.html\"\n\n\nregister.tag(\"field\", FieldNode.handle)\n\n\nclass FieldRowNode(BlockInclusionNode):\n template = \"wagtailadmin/shared/forms/field_row.html\"\n\n\nregister.tag(\"field_row\", FieldRowNode.handle)\n\n\n# Button used to open dialogs\[email protected]_tag(\"wagtailadmin/shared/dialog/dialog_toggle.html\")\ndef dialog_toggle(dialog_id, classname=\"\", text=None):\n if not dialog_id:\n raise ValueError(\"You must supply the dialog ID\")\n\n return {\n \"classname\": classname,\n \"text\": text,\n # dialog_id must match the ID of the dialog you are toggling\n \"dialog_id\": dialog_id,\n }\n\n\[email protected]_tag()\ndef workflow_status_with_date(workflow_state):\n translation_context = {\n \"finished_at\": naturaltime(workflow_state.current_task_state.finished_at),\n \"started_at\": 
naturaltime(workflow_state.current_task_state.started_at),\n \"task_name\": workflow_state.current_task_state.task.name,\n \"status_display\": workflow_state.get_status_display,\n }\n\n if workflow_state.status == \"needs_changes\":\n return _(\"Changes requested %(finished_at)s\") % translation_context\n\n if workflow_state.status == \"in_progress\":\n return _(\"Sent to %(task_name)s %(started_at)s\") % translation_context\n\n return _(\"%(status_display)s %(task_name)s %(started_at)s\") % translation_context\n\n\[email protected]_tag(\"wagtailadmin/shared/human_readable_date.html\")\ndef human_readable_date(date, description=None, position=\"top\"):\n return {\n \"date\": date,\n \"description\": description,\n \"position\": position,\n }\n", "path": "wagtail/admin/templatetags/wagtailadmin_tags.py"}]} |
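One change that is easy to miss in the escaped `before_files`/`after_files` listings of the record above is the `human_readable_date` inclusion tag: the patched version adds a `position` parameter that is forwarded to the template context. Below is a minimal unescaped sketch of the updated tag, assuming the `register = template.Library()` instance defined at the top of `wagtailadmin_tags.py`; the listings may differ elsewhere as well.

```python
from django import template

register = template.Library()  # stand-in for the register defined in wagtailadmin_tags.py


@register.inclusion_tag("wagtailadmin/shared/human_readable_date.html")
def human_readable_date(date, description=None, position="top"):
    # The new "position" keyword is simply forwarded to the template context.
    return {
        "date": date,
        "description": description,
        "position": position,
    }
```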
gh_patches_debug_1031 | rasdani/github-patches | git_diff | getsentry__sentry-55707 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to edit WHEN conditions from issue alert
### Environment
SaaS (https://sentry.io/)
### Steps to Reproduce
1. Create an issue alert with a few WHEN conditions
2. Save it
3. Go to the Alert details page
4. Click on Edit rule
5. Delete all the WHEN conditions
6. Click on Save
7. When you're back to the Alert details page, the WHEN conditions are still there, and the "Updated alert rule" message appears
### Expected Result
The users should be able to edit the alert rules
### Actual Result
The alert rule stays the same after editing
### Product Area
Alerts
### Link
_No response_
### DSN
_No response_
### Version
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/mediators/project_rules/updater.py`
Content:
```
1 from django.db import router
2 from rest_framework.request import Request
3
4 from sentry.mediators.mediator import Mediator
5 from sentry.mediators.param import Param
6 from sentry.models import Actor, Project, Rule
7
8
9 class Updater(Mediator):
10 rule = Param(Rule)
11 name = Param(str, required=False)
12 owner = Param(int, required=False)
13 environment = Param(int, required=False)
14 project = Param(Project)
15 action_match = Param(str, required=False)
16 filter_match = Param(str, required=False)
17 actions = Param(list, required=False)
18 conditions = Param(list, required=False)
19 frequency = Param(int, required=False)
20 request = Param(Request, required=False)
21 using = router.db_for_write(Project)
22
23 def call(self):
24 self._update_name()
25 self._update_owner()
26 self._update_environment()
27 self._update_project()
28 self._update_actions()
29 self._update_action_match()
30 self._update_filter_match()
31 self._update_conditions()
32 self._update_frequency()
33 self.rule.save()
34 return self.rule
35
36 def _update_name(self):
37 if self.name:
38 self.rule.label = self.name
39
40 def _update_owner(self) -> None:
41 self.rule.owner = Actor.objects.get(id=self.owner) if self.owner else None
42
43 def _update_environment(self):
44 self.rule.environment_id = self.environment
45
46 def _update_project(self):
47 if self.project:
48 self.rule.project = self.project
49
50 def _update_actions(self):
51 if self.actions:
52 self.rule.data["actions"] = self.actions
53
54 def _update_action_match(self):
55 if self.action_match:
56 self.rule.data["action_match"] = self.action_match
57
58 def _update_filter_match(self):
59 if self.filter_match:
60 self.rule.data["filter_match"] = self.filter_match
61
62 def _update_conditions(self):
63 if self.conditions:
64 self.rule.data["conditions"] = self.conditions
65
66 def _update_frequency(self):
67 if self.frequency:
68 self.rule.data["frequency"] = self.frequency
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/mediators/project_rules/updater.py b/src/sentry/mediators/project_rules/updater.py
--- a/src/sentry/mediators/project_rules/updater.py
+++ b/src/sentry/mediators/project_rules/updater.py
@@ -60,8 +60,7 @@
self.rule.data["filter_match"] = self.filter_match
def _update_conditions(self):
- if self.conditions:
- self.rule.data["conditions"] = self.conditions
+ self.rule.data["conditions"] = self.conditions or []
def _update_frequency(self):
if self.frequency:
| {"golden_diff": "diff --git a/src/sentry/mediators/project_rules/updater.py b/src/sentry/mediators/project_rules/updater.py\n--- a/src/sentry/mediators/project_rules/updater.py\n+++ b/src/sentry/mediators/project_rules/updater.py\n@@ -60,8 +60,7 @@\n self.rule.data[\"filter_match\"] = self.filter_match\n \n def _update_conditions(self):\n- if self.conditions:\n- self.rule.data[\"conditions\"] = self.conditions\n+ self.rule.data[\"conditions\"] = self.conditions or []\n \n def _update_frequency(self):\n if self.frequency:\n", "issue": "Unable to edit WHEN conditions from issue alert\n### Environment\n\nSaaS (https://sentry.io/)\n\n### Steps to Reproduce\n\n1. Create an issue alert with a few WHEN conditions\r\n2. Save it\r\n3. Go to the Alert details page\r\n4. Click on Edit rule\r\n5. Delete all the WHEN conditions\r\n6. Click on Save\r\n7. When you're back to the Alert details page, the WHEN conditions are still there, and the \"Updated alert rule\" message appears\n\n### Expected Result\n\nThe users should be able to edit the alert rules\n\n### Actual Result\n\nThe alert rule stays the same after editing\n\n### Product Area\n\nAlerts\n\n### Link\n\n_No response_\n\n### DSN\n\n_No response_\n\n### Version\n\n_No response_\n", "before_files": [{"content": "from django.db import router\nfrom rest_framework.request import Request\n\nfrom sentry.mediators.mediator import Mediator\nfrom sentry.mediators.param import Param\nfrom sentry.models import Actor, Project, Rule\n\n\nclass Updater(Mediator):\n rule = Param(Rule)\n name = Param(str, required=False)\n owner = Param(int, required=False)\n environment = Param(int, required=False)\n project = Param(Project)\n action_match = Param(str, required=False)\n filter_match = Param(str, required=False)\n actions = Param(list, required=False)\n conditions = Param(list, required=False)\n frequency = Param(int, required=False)\n request = Param(Request, required=False)\n using = router.db_for_write(Project)\n\n def call(self):\n self._update_name()\n self._update_owner()\n self._update_environment()\n self._update_project()\n self._update_actions()\n self._update_action_match()\n self._update_filter_match()\n self._update_conditions()\n self._update_frequency()\n self.rule.save()\n return self.rule\n\n def _update_name(self):\n if self.name:\n self.rule.label = self.name\n\n def _update_owner(self) -> None:\n self.rule.owner = Actor.objects.get(id=self.owner) if self.owner else None\n\n def _update_environment(self):\n self.rule.environment_id = self.environment\n\n def _update_project(self):\n if self.project:\n self.rule.project = self.project\n\n def _update_actions(self):\n if self.actions:\n self.rule.data[\"actions\"] = self.actions\n\n def _update_action_match(self):\n if self.action_match:\n self.rule.data[\"action_match\"] = self.action_match\n\n def _update_filter_match(self):\n if self.filter_match:\n self.rule.data[\"filter_match\"] = self.filter_match\n\n def _update_conditions(self):\n if self.conditions:\n self.rule.data[\"conditions\"] = self.conditions\n\n def _update_frequency(self):\n if self.frequency:\n self.rule.data[\"frequency\"] = self.frequency\n", "path": "src/sentry/mediators/project_rules/updater.py"}], "after_files": [{"content": "from django.db import router\nfrom rest_framework.request import Request\n\nfrom sentry.mediators.mediator import Mediator\nfrom sentry.mediators.param import Param\nfrom sentry.models import Actor, Project, Rule\n\n\nclass Updater(Mediator):\n rule = Param(Rule)\n name = Param(str, 
required=False)\n owner = Param(int, required=False)\n environment = Param(int, required=False)\n project = Param(Project)\n action_match = Param(str, required=False)\n filter_match = Param(str, required=False)\n actions = Param(list, required=False)\n conditions = Param(list, required=False)\n frequency = Param(int, required=False)\n request = Param(Request, required=False)\n using = router.db_for_write(Project)\n\n def call(self):\n self._update_name()\n self._update_owner()\n self._update_environment()\n self._update_project()\n self._update_actions()\n self._update_action_match()\n self._update_filter_match()\n self._update_conditions()\n self._update_frequency()\n self.rule.save()\n return self.rule\n\n def _update_name(self):\n if self.name:\n self.rule.label = self.name\n\n def _update_owner(self) -> None:\n self.rule.owner = Actor.objects.get(id=self.owner) if self.owner else None\n\n def _update_environment(self):\n self.rule.environment_id = self.environment\n\n def _update_project(self):\n if self.project:\n self.rule.project = self.project\n\n def _update_actions(self):\n if self.actions:\n self.rule.data[\"actions\"] = self.actions\n\n def _update_action_match(self):\n if self.action_match:\n self.rule.data[\"action_match\"] = self.action_match\n\n def _update_filter_match(self):\n if self.filter_match:\n self.rule.data[\"filter_match\"] = self.filter_match\n\n def _update_conditions(self):\n self.rule.data[\"conditions\"] = self.conditions or []\n\n def _update_frequency(self):\n if self.frequency:\n self.rule.data[\"frequency\"] = self.frequency\n", "path": "src/sentry/mediators/project_rules/updater.py"}]} |
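The golden diff above removes a truthiness guard: with `if self.conditions:`, an empty list — which is what the updater receives after every WHEN condition is deleted in the UI — is falsy, so the stale `rule.data["conditions"]` value survives and the edit silently does nothing. The following is a minimal, self-contained sketch of the failure mode and the fix; `SimpleRule` and the condition id are illustrative stand-ins, not Sentry's real model or identifiers.

```python
class SimpleRule:
    """Illustrative stand-in; Sentry's actual Rule model is far richer."""

    def __init__(self):
        self.data = {"conditions": [{"id": "placeholder-condition"}]}


def update_conditions_buggy(rule, conditions):
    # An empty list is falsy, so clearing all conditions keeps the old value.
    if conditions:
        rule.data["conditions"] = conditions


def update_conditions_fixed(rule, conditions):
    # Mirrors the golden diff: always assign, defaulting None to an empty list.
    rule.data["conditions"] = conditions or []


rule = SimpleRule()
update_conditions_buggy(rule, [])
print(rule.data["conditions"])  # [{'id': 'placeholder-condition'}] -> bug from the issue

update_conditions_fixed(rule, [])
print(rule.data["conditions"])  # [] -> the WHEN conditions are actually removed
```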
gh_patches_debug_1032 | rasdani/github-patches | git_diff | xonsh__xonsh-138 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
In .xonshrc, import does not create a global name
xonsh: git checkout f44013b31756ba5491f2a7e1dffb7ad64513b28e
python: 3.4.1
OS: Fedora 21
With this as your .xonshrc:
``` python
import subprocess
def get_tty():
tty = subprocess.check_output('tty').decode().strip()
segments = tty.split('/')
return '/'.join(segments[-2:])
$PROMPT='{tty}@{{hostname}}$ '.format(tty=get_tty())
```
Starting xonsh with this .xonshrc yields a traceback:
```
Traceback (most recent call last):
File "scripts/xonsh", line 3, in <module>
main()
File "/srv/git/wishlist/xonsh/xonsh/main.py", line 36, in main
shell = Shell()
File "/srv/git/wishlist/xonsh/xonsh/shell.py", line 94, in __init__
execer=self.execer)
File "/srv/git/wishlist/xonsh/xonsh/environ.py", line 168, in xonshrc_context
execer.exec(rc, glbs={}, locs=env)
File "/srv/git/wishlist/xonsh/xonsh/execer.py", line 110, in exec
return exec(code, glbs, locs)
File "/home/badger/.xonshrc", line 7, in <module>
File "/home/badger/.xonshrc", line 259, in get_tty
NameError: name 'subprocess' is not defined
Exception ignored in: <bound method Shell.__del__ of <xonsh.shell.Shell object at 0x7f383127e4e0>>
Traceback (most recent call last):
File "/srv/git/wishlist/xonsh/xonsh/shell.py", line 102, in __del__
teardown_readline()
File "/srv/git/wishlist/xonsh/xonsh/shell.py", line 65, in teardown_readline
import readline
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2222, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 2164, in _find_spec
File "<frozen importlib._bootstrap>", line 1940, in find_spec
File "<frozen importlib._bootstrap>", line 1908, in _get_spec
TypeError: 'NoneType' object is not iterable
```
If I change .xonshrc so that the subprocess import is inside the function, then it starts up fine. So it seems that importing does not create a globally available name. The other things I tried, such as:
``` python
import subprocess as subprocess
subprocess = __import__('subprocess')
```
also lead to the same traceback.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xonsh/environ.py`
Content:
```
1 """Environment for the xonsh shell.
2 """
3 import os
4 import re
5 import socket
6 import locale
7 import builtins
8 import platform
9 import subprocess
10 from warnings import warn
11
12 from xonsh.tools import TERM_COLORS
13
14 def current_branch(cwd=None):
15 """Gets the branch for a current working directory. Returns None
16 if the cwd is not a repository. This currently only works for git,
17 bust should be extended in the future.
18 """
19 branch = None
20 cwd = os.getcwd() if cwd is None else cwd
21
22 # step out completely if git is not installed
23 try:
24 binary_location = subprocess.check_output(['which', 'git'], cwd=cwd,
25 stderr=subprocess.PIPE,
26 universal_newlines=True)
27 if not binary_location:
28 return branch
29 except subprocess.CalledProcessError:
30 return branch
31
32 prompt_scripts = [
33 '/usr/lib/git-core/git-sh-prompt',
34 '/usr/local/etc/bash_completion.d/git-prompt.sh'
35 ]
36
37 for script in prompt_scripts:
38 # note that this is about 10x faster than bash -i "__git_ps1"
39 _input = ('source {}; __git_ps1 "${{1:-%s}}"'.format(script))
40 try:
41 branch = subprocess.check_output(['bash',], cwd=cwd, input=_input,
42 stderr=subprocess.PIPE,
43 universal_newlines=True) or None
44 except subprocess.CalledProcessError:
45 continue
46
47 # fall back to using the git binary if the above failed
48 if branch is None:
49 try:
50 s = subprocess.check_output(['git', 'rev-parse','--abbrev-ref', 'HEAD'],
51 stderr=subprocess.PIPE, cwd=cwd,
52 universal_newlines=True)
53 s = s.strip()
54 if len(s) > 0:
55 branch = s
56 except subprocess.CalledProcessError:
57 pass
58
59 return branch
60
61
62 default_prompt = ('{BOLD_GREEN}{user}@{hostname}{BOLD_BLUE} '
63 '{cwd}{BOLD_RED}{curr_branch} {BOLD_BLUE}${NO_COLOR} ')
64 default_title = '{user}@{hostname}: {cwd} | xonsh'
65
66 def format_prompt(template=default_prompt):
67 """Formats a xonsh prompt template string.
68
69 The following keyword arguments are recognized in the template string:
70
71 + user -- Name of current user
72 + hostname -- Name of host computer
73 + cwd -- Current working directory
74 + curr_branch -- Name of current git branch (preceded by a space), if any
75 + (QUALIFIER\_)COLORNAME -- Inserts an ANSI color code
76 - COLORNAME can be any of:
77 BLACK, RED, GREEN, YELLOW, BLUE, PURPLE, CYAN, WHITE
78 - QUALIFIER is optional and can be any of:
79 BOLD, UNDERLINE, BACKGROUND, INTENSE,
80 BOLD_INTENSE, BACKGROUND_INTENSE
81 + NO_COLOR -- Resets any previously used color codes
82 """
83 env = builtins.__xonsh_env__
84 cwd = env['PWD']
85 branch = current_branch(cwd=cwd)
86 branch = '' if branch is None else ' ' + branch
87 p = template.format(
88 user=env.get('USER', '<user>'),
89 hostname=socket.gethostname(),
90 cwd=cwd.replace(env['HOME'], '~'),
91 curr_branch=branch,
92 **TERM_COLORS
93 )
94 return p
95
96
97 RE_HIDDEN = re.compile('\001.*?\002')
98
99 def multiline_prompt():
100 """Returns the filler text for the prompt in multiline scenarios."""
101 curr = builtins.__xonsh_env__.get('PROMPT', "set '$PROMPT = ...' $ ")
102 curr = curr() if callable(curr) else curr
103 curr = format_prompt(curr)
104 line = curr.rsplit('\n', 1)[1] if '\n' in curr else curr
105 line = RE_HIDDEN.sub('', line) # gets rid of colors
106 # most prompts end in whitespace, head is the part before that.
107 head = line.rstrip()
108 headlen = len(head)
109 # tail is the trailing whitespace
110 tail = line if headlen == 0 else line.rsplit(head[-1], 1)[1]
111 # now to constuct the actual string
112 dots = builtins.__xonsh_env__.get('MULTILINE_PROMPT', '.')
113 dots = dots() if callable(dots) else dots
114 if dots is None or len(dots) == 0:
115 return ''
116 return (dots*(headlen//len(dots))) + dots[:headlen%len(dots)] + tail
117
118
119 BASE_ENV = {
120 'INDENT': ' ',
121 'PROMPT': default_prompt,
122 'TITLE': default_title,
123 'MULTILINE_PROMPT': '.',
124 'XONSHRC': os.path.expanduser('~/.xonshrc'),
125 'XONSH_HISTORY_SIZE': 8128,
126 'XONSH_HISTORY_FILE': os.path.expanduser('~/.xonsh_history'),
127 'LC_CTYPE': locale.setlocale(locale.LC_CTYPE),
128 'LC_COLLATE': locale.setlocale(locale.LC_COLLATE),
129 'LC_TIME': locale.setlocale(locale.LC_TIME),
130 'LC_MONETARY': locale.setlocale(locale.LC_MONETARY),
131 'LC_MESSAGES': locale.setlocale(locale.LC_MESSAGES),
132 'LC_NUMERIC': locale.setlocale(locale.LC_NUMERIC),
133 }
134
135 if platform.system() == 'Darwin':
136 BASE_ENV['BASH_COMPLETIONS'] = []
137 else:
138 BASE_ENV['BASH_COMPLETIONS'] = ['/etc/bash_completion',
139 '/usr/share/bash-completion/completions/git']
140
141 def bash_env():
142 """Attempts to compute the bash envinronment variables."""
143 currenv = None
144 if hasattr(builtins, '__xonsh_env__'):
145 currenv = builtins.__xonsh_env__.detype()
146 try:
147 s = subprocess.check_output(['bash', '-i'], input='env', env=currenv,
148 stderr=subprocess.PIPE,
149 universal_newlines=True)
150 except subprocess.CalledProcessError:
151 s = ''
152 items = [line.split('=', 1) for line in s.splitlines() if '=' in line]
153 env = dict(items)
154 return env
155
156 def xonshrc_context(rcfile=None, execer=None):
157 """Attempts to read in xonshrc file, and return the contents."""
158 if rcfile is None or execer is None or not os.path.isfile(rcfile):
159 return {}
160 with open(rcfile, 'r') as f:
161 rc = f.read()
162 if not rc.endswith('\n'):
163 rc += '\n'
164 fname = execer.filename
165 env = {}
166 try:
167 execer.filename = rcfile
168 execer.exec(rc, glbs={}, locs=env)
169 except SyntaxError as err:
170 msg = 'syntax error in xonsh run control file {0!r}: {1!s}'
171 warn(msg.format(rcfile, err), RuntimeWarning)
172 finally:
173 execer.filename = fname
174 return env
175
176 def default_env(env=None):
177 """Constructs a default xonsh environment."""
178 # in order of increasing precedence
179 ctx = dict(BASE_ENV)
180 ctx.update(os.environ)
181 ctx.update(bash_env())
182 if env is not None:
183 ctx.update(env)
184 return ctx
185
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/xonsh/environ.py b/xonsh/environ.py
--- a/xonsh/environ.py
+++ b/xonsh/environ.py
@@ -165,7 +165,7 @@
env = {}
try:
execer.filename = rcfile
- execer.exec(rc, glbs={}, locs=env)
+ execer.exec(rc, glbs=env)
except SyntaxError as err:
msg = 'syntax error in xonsh run control file {0!r}: {1!s}'
warn(msg.format(rcfile, err), RuntimeWarning)
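The one-line change above works because of how `exec` handles distinct globals and locals mappings: top-level statements in the executed source (including `import subprocess` and `def get_tty(): ...`) bind names in the *locals* dict, while functions defined there resolve free names through the *globals* dict, so the import is invisible by the time `get_tty()` actually runs. Passing a single mapping makes imports and function lookups share one namespace. A small standalone demonstration of this CPython behaviour, independent of xonsh:

```python
# Standalone demonstration of why exec(code, glbs={}, locs=env) hides
# module-level imports from functions defined in the executed code.
SOURCE = """
import subprocess

def show():
    return subprocess.__name__
"""

glbs, locs = {}, {}
exec(SOURCE, glbs, locs)       # both 'subprocess' and 'show' are bound in locs
try:
    locs["show"]()             # show() resolves names through glbs -> not found
except NameError as err:
    print("separate dicts:", err)

env = {}
exec(SOURCE, env)              # one shared mapping, as in the patched xonshrc_context()
print("single dict:", env["show"]())   # -> single dict: subprocess
```

This matches the traceback in the issue: the `NameError` surfaces inside `get_tty()` when the function is called, not at the point of the import.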
| {"golden_diff": "diff --git a/xonsh/environ.py b/xonsh/environ.py\n--- a/xonsh/environ.py\n+++ b/xonsh/environ.py\n@@ -165,7 +165,7 @@\n env = {}\n try:\n execer.filename = rcfile\n- execer.exec(rc, glbs={}, locs=env)\n+ execer.exec(rc, glbs=env)\n except SyntaxError as err:\n msg = 'syntax error in xonsh run control file {0!r}: {1!s}'\n warn(msg.format(rcfile, err), RuntimeWarning)\n", "issue": "In .xonshrc, import does not create a global name\nxonsh: git checkout f44013b31756ba5491f2a7e1dffb7ad64513b28e\npython: 3.4.1\nOS: Fedora 21\n\nWith this as your .xonshrc:\n\n``` python\nimport subprocess\n\ndef get_tty():\n tty = subprocess.check_output('tty').decode().strip()\n segments = tty.split('/')\n return '/'.join(segments[-2:])\n\n$PROMPT='{tty}@{{hostname}}$ '.format(tty=get_tty())\n```\n\nTrying to start .xonshrc yields a traceback:\n\n```\nTraceback (most recent call last):\n File \"scripts/xonsh\", line 3, in <module>\n main()\n File \"/srv/git/wishlist/xonsh/xonsh/main.py\", line 36, in main\n shell = Shell()\n File \"/srv/git/wishlist/xonsh/xonsh/shell.py\", line 94, in __init__\n execer=self.execer)\n File \"/srv/git/wishlist/xonsh/xonsh/environ.py\", line 168, in xonshrc_context\n execer.exec(rc, glbs={}, locs=env)\n File \"/srv/git/wishlist/xonsh/xonsh/execer.py\", line 110, in exec\n return exec(code, glbs, locs)\n File \"/home/badger/.xonshrc\", line 7, in <module>\n\n File \"/home/badger/.xonshrc\", line 259, in get_tty\nNameError: name 'subprocess' is not defined\nException ignored in: <bound method Shell.__del__ of <xonsh.shell.Shell object at 0x7f383127e4e0>>\nTraceback (most recent call last):\n File \"/srv/git/wishlist/xonsh/xonsh/shell.py\", line 102, in __del__\n teardown_readline()\n File \"/srv/git/wishlist/xonsh/xonsh/shell.py\", line 65, in teardown_readline\n import readline\n File \"<frozen importlib._bootstrap>\", line 2237, in _find_and_load\n File \"<frozen importlib._bootstrap>\", line 2222, in _find_and_load_unlocked\n File \"<frozen importlib._bootstrap>\", line 2164, in _find_spec\n File \"<frozen importlib._bootstrap>\", line 1940, in find_spec\n File \"<frozen importlib._bootstrap>\", line 1908, in _get_spec\nTypeError: 'NoneType' object is not iterable\n```\n\nIf I change .xonshrc to have the subprocess import inside of the function then it starts up fine. So it seems like importing does not create a globally available name. The other things I tried such as:\n\n``` python\nimport subprocess as subprocess\nsubprocess = __import__('subprocess')\n```\n\nalso lead to the same traceback.\n\n", "before_files": [{"content": "\"\"\"Environment for the xonsh shell.\n\"\"\"\nimport os\nimport re\nimport socket\nimport locale\nimport builtins\nimport platform\nimport subprocess\nfrom warnings import warn\n\nfrom xonsh.tools import TERM_COLORS\n\ndef current_branch(cwd=None):\n \"\"\"Gets the branch for a current working directory. Returns None\n if the cwd is not a repository. 
This currently only works for git, \n bust should be extended in the future.\n \"\"\"\n branch = None\n cwd = os.getcwd() if cwd is None else cwd\n\n # step out completely if git is not installed\n try:\n binary_location = subprocess.check_output(['which', 'git'], cwd=cwd,\n stderr=subprocess.PIPE,\n universal_newlines=True)\n if not binary_location:\n return branch\n except subprocess.CalledProcessError:\n return branch\n\n prompt_scripts = [\n '/usr/lib/git-core/git-sh-prompt',\n '/usr/local/etc/bash_completion.d/git-prompt.sh'\n ]\n\n for script in prompt_scripts:\n # note that this is about 10x faster than bash -i \"__git_ps1\"\n _input = ('source {}; __git_ps1 \"${{1:-%s}}\"'.format(script))\n try:\n branch = subprocess.check_output(['bash',], cwd=cwd, input=_input,\n stderr=subprocess.PIPE,\n universal_newlines=True) or None\n except subprocess.CalledProcessError:\n continue\n\n # fall back to using the git binary if the above failed\n if branch is None:\n try:\n s = subprocess.check_output(['git', 'rev-parse','--abbrev-ref', 'HEAD'],\n stderr=subprocess.PIPE, cwd=cwd,\n universal_newlines=True) \n s = s.strip()\n if len(s) > 0:\n branch = s\n except subprocess.CalledProcessError:\n pass\n\n return branch\n\n\ndefault_prompt = ('{BOLD_GREEN}{user}@{hostname}{BOLD_BLUE} '\n '{cwd}{BOLD_RED}{curr_branch} {BOLD_BLUE}${NO_COLOR} ')\ndefault_title = '{user}@{hostname}: {cwd} | xonsh'\n\ndef format_prompt(template=default_prompt):\n \"\"\"Formats a xonsh prompt template string.\n\n The following keyword arguments are recognized in the template string:\n\n + user -- Name of current user\n + hostname -- Name of host computer\n + cwd -- Current working directory\n + curr_branch -- Name of current git branch (preceded by a space), if any\n + (QUALIFIER\\_)COLORNAME -- Inserts an ANSI color code\n - COLORNAME can be any of:\n BLACK, RED, GREEN, YELLOW, BLUE, PURPLE, CYAN, WHITE\n - QUALIFIER is optional and can be any of:\n BOLD, UNDERLINE, BACKGROUND, INTENSE,\n BOLD_INTENSE, BACKGROUND_INTENSE\n + NO_COLOR -- Resets any previously used color codes\n \"\"\"\n env = builtins.__xonsh_env__\n cwd = env['PWD']\n branch = current_branch(cwd=cwd)\n branch = '' if branch is None else ' ' + branch\n p = template.format(\n user=env.get('USER', '<user>'),\n hostname=socket.gethostname(),\n cwd=cwd.replace(env['HOME'], '~'),\n curr_branch=branch,\n **TERM_COLORS\n )\n return p\n\n\nRE_HIDDEN = re.compile('\\001.*?\\002')\n\ndef multiline_prompt():\n \"\"\"Returns the filler text for the prompt in multiline scenarios.\"\"\"\n curr = builtins.__xonsh_env__.get('PROMPT', \"set '$PROMPT = ...' 
$ \")\n curr = curr() if callable(curr) else curr\n curr = format_prompt(curr)\n line = curr.rsplit('\\n', 1)[1] if '\\n' in curr else curr\n line = RE_HIDDEN.sub('', line) # gets rid of colors\n # most prompts end in whitespace, head is the part before that.\n head = line.rstrip()\n headlen = len(head)\n # tail is the trailing whitespace\n tail = line if headlen == 0 else line.rsplit(head[-1], 1)[1]\n # now to constuct the actual string\n dots = builtins.__xonsh_env__.get('MULTILINE_PROMPT', '.')\n dots = dots() if callable(dots) else dots\n if dots is None or len(dots) == 0:\n return ''\n return (dots*(headlen//len(dots))) + dots[:headlen%len(dots)] + tail\n\n\nBASE_ENV = {\n 'INDENT': ' ',\n 'PROMPT': default_prompt,\n 'TITLE': default_title,\n 'MULTILINE_PROMPT': '.',\n 'XONSHRC': os.path.expanduser('~/.xonshrc'),\n 'XONSH_HISTORY_SIZE': 8128,\n 'XONSH_HISTORY_FILE': os.path.expanduser('~/.xonsh_history'),\n 'LC_CTYPE': locale.setlocale(locale.LC_CTYPE),\n 'LC_COLLATE': locale.setlocale(locale.LC_COLLATE),\n 'LC_TIME': locale.setlocale(locale.LC_TIME),\n 'LC_MONETARY': locale.setlocale(locale.LC_MONETARY),\n 'LC_MESSAGES': locale.setlocale(locale.LC_MESSAGES),\n 'LC_NUMERIC': locale.setlocale(locale.LC_NUMERIC),\n }\n\nif platform.system() == 'Darwin':\n BASE_ENV['BASH_COMPLETIONS'] = []\nelse:\n BASE_ENV['BASH_COMPLETIONS'] = ['/etc/bash_completion', \n '/usr/share/bash-completion/completions/git']\n\ndef bash_env():\n \"\"\"Attempts to compute the bash envinronment variables.\"\"\"\n currenv = None\n if hasattr(builtins, '__xonsh_env__'):\n currenv = builtins.__xonsh_env__.detype()\n try:\n s = subprocess.check_output(['bash', '-i'], input='env', env=currenv, \n stderr=subprocess.PIPE,\n universal_newlines=True)\n except subprocess.CalledProcessError:\n s = ''\n items = [line.split('=', 1) for line in s.splitlines() if '=' in line]\n env = dict(items)\n return env\n\ndef xonshrc_context(rcfile=None, execer=None):\n \"\"\"Attempts to read in xonshrc file, and return the contents.\"\"\"\n if rcfile is None or execer is None or not os.path.isfile(rcfile):\n return {}\n with open(rcfile, 'r') as f:\n rc = f.read()\n if not rc.endswith('\\n'):\n rc += '\\n'\n fname = execer.filename\n env = {}\n try:\n execer.filename = rcfile\n execer.exec(rc, glbs={}, locs=env)\n except SyntaxError as err:\n msg = 'syntax error in xonsh run control file {0!r}: {1!s}'\n warn(msg.format(rcfile, err), RuntimeWarning)\n finally:\n execer.filename = fname\n return env\n\ndef default_env(env=None):\n \"\"\"Constructs a default xonsh environment.\"\"\"\n # in order of increasing precedence\n ctx = dict(BASE_ENV)\n ctx.update(os.environ)\n ctx.update(bash_env())\n if env is not None:\n ctx.update(env)\n return ctx\n", "path": "xonsh/environ.py"}], "after_files": [{"content": "\"\"\"Environment for the xonsh shell.\n\"\"\"\nimport os\nimport re\nimport socket\nimport locale\nimport builtins\nimport platform\nimport subprocess\nfrom warnings import warn\n\nfrom xonsh.tools import TERM_COLORS\n\ndef current_branch(cwd=None):\n \"\"\"Gets the branch for a current working directory. Returns None\n if the cwd is not a repository. 
This currently only works for git, \n bust should be extended in the future.\n \"\"\"\n branch = None\n cwd = os.getcwd() if cwd is None else cwd\n\n # step out completely if git is not installed\n try:\n binary_location = subprocess.check_output(['which', 'git'], cwd=cwd,\n stderr=subprocess.PIPE,\n universal_newlines=True)\n if not binary_location:\n return branch\n except subprocess.CalledProcessError:\n return branch\n\n prompt_scripts = [\n '/usr/lib/git-core/git-sh-prompt',\n '/usr/local/etc/bash_completion.d/git-prompt.sh'\n ]\n\n for script in prompt_scripts:\n # note that this is about 10x faster than bash -i \"__git_ps1\"\n _input = ('source {}; __git_ps1 \"${{1:-%s}}\"'.format(script))\n try:\n branch = subprocess.check_output(['bash',], cwd=cwd, input=_input,\n stderr=subprocess.PIPE,\n universal_newlines=True) or None\n except subprocess.CalledProcessError:\n continue\n\n # fall back to using the git binary if the above failed\n if branch is None:\n try:\n s = subprocess.check_output(['git', 'rev-parse','--abbrev-ref', 'HEAD'],\n stderr=subprocess.PIPE, cwd=cwd,\n universal_newlines=True) \n s = s.strip()\n if len(s) > 0:\n branch = s\n except subprocess.CalledProcessError:\n pass\n\n return branch\n\n\ndefault_prompt = ('{BOLD_GREEN}{user}@{hostname}{BOLD_BLUE} '\n '{cwd}{BOLD_RED}{curr_branch} {BOLD_BLUE}${NO_COLOR} ')\ndefault_title = '{user}@{hostname}: {cwd} | xonsh'\n\ndef format_prompt(template=default_prompt):\n \"\"\"Formats a xonsh prompt template string.\n\n The following keyword arguments are recognized in the template string:\n\n + user -- Name of current user\n + hostname -- Name of host computer\n + cwd -- Current working directory\n + curr_branch -- Name of current git branch (preceded by a space), if any\n + (QUALIFIER\\_)COLORNAME -- Inserts an ANSI color code\n - COLORNAME can be any of:\n BLACK, RED, GREEN, YELLOW, BLUE, PURPLE, CYAN, WHITE\n - QUALIFIER is optional and can be any of:\n BOLD, UNDERLINE, BACKGROUND, INTENSE,\n BOLD_INTENSE, BACKGROUND_INTENSE\n + NO_COLOR -- Resets any previously used color codes\n \"\"\"\n env = builtins.__xonsh_env__\n cwd = env['PWD']\n branch = current_branch(cwd=cwd)\n branch = '' if branch is None else ' ' + branch\n p = template.format(\n user=env.get('USER', '<user>'),\n hostname=socket.gethostname(),\n cwd=cwd.replace(env['HOME'], '~'),\n curr_branch=branch,\n **TERM_COLORS\n )\n return p\n\n\nRE_HIDDEN = re.compile('\\001.*?\\002')\n\ndef multiline_prompt():\n \"\"\"Returns the filler text for the prompt in multiline scenarios.\"\"\"\n curr = builtins.__xonsh_env__.get('PROMPT', \"set '$PROMPT = ...' 
$ \")\n curr = curr() if callable(curr) else curr\n curr = format_prompt(curr)\n line = curr.rsplit('\\n', 1)[1] if '\\n' in curr else curr\n line = RE_HIDDEN.sub('', line) # gets rid of colors\n # most prompts end in whitespace, head is the part before that.\n head = line.rstrip()\n headlen = len(head)\n # tail is the trailing whitespace\n tail = line if headlen == 0 else line.rsplit(head[-1], 1)[1]\n # now to constuct the actual string\n dots = builtins.__xonsh_env__.get('MULTILINE_PROMPT', '.')\n dots = dots() if callable(dots) else dots\n if dots is None or len(dots) == 0:\n return ''\n return (dots*(headlen//len(dots))) + dots[:headlen%len(dots)] + tail\n\n\nBASE_ENV = {\n 'INDENT': ' ',\n 'PROMPT': default_prompt,\n 'TITLE': default_title,\n 'MULTILINE_PROMPT': '.',\n 'XONSHRC': os.path.expanduser('~/.xonshrc'),\n 'XONSH_HISTORY_SIZE': 8128,\n 'XONSH_HISTORY_FILE': os.path.expanduser('~/.xonsh_history'),\n 'LC_CTYPE': locale.setlocale(locale.LC_CTYPE),\n 'LC_COLLATE': locale.setlocale(locale.LC_COLLATE),\n 'LC_TIME': locale.setlocale(locale.LC_TIME),\n 'LC_MONETARY': locale.setlocale(locale.LC_MONETARY),\n 'LC_MESSAGES': locale.setlocale(locale.LC_MESSAGES),\n 'LC_NUMERIC': locale.setlocale(locale.LC_NUMERIC),\n }\n\nif platform.system() == 'Darwin':\n BASE_ENV['BASH_COMPLETIONS'] = []\nelse:\n BASE_ENV['BASH_COMPLETIONS'] = ['/etc/bash_completion', \n '/usr/share/bash-completion/completions/git']\n\ndef bash_env():\n \"\"\"Attempts to compute the bash envinronment variables.\"\"\"\n currenv = None\n if hasattr(builtins, '__xonsh_env__'):\n currenv = builtins.__xonsh_env__.detype()\n try:\n s = subprocess.check_output(['bash', '-i'], input='env', env=currenv, \n stderr=subprocess.PIPE,\n universal_newlines=True)\n except subprocess.CalledProcessError:\n s = ''\n items = [line.split('=', 1) for line in s.splitlines() if '=' in line]\n env = dict(items)\n return env\n\ndef xonshrc_context(rcfile=None, execer=None):\n \"\"\"Attempts to read in xonshrc file, and return the contents.\"\"\"\n if rcfile is None or execer is None or not os.path.isfile(rcfile):\n return {}\n with open(rcfile, 'r') as f:\n rc = f.read()\n if not rc.endswith('\\n'):\n rc += '\\n'\n fname = execer.filename\n env = {}\n try:\n execer.filename = rcfile\n execer.exec(rc, glbs=env)\n except SyntaxError as err:\n msg = 'syntax error in xonsh run control file {0!r}: {1!s}'\n warn(msg.format(rcfile, err), RuntimeWarning)\n finally:\n execer.filename = fname\n return env\n\ndef default_env(env=None):\n \"\"\"Constructs a default xonsh environment.\"\"\"\n # in order of increasing precedence\n ctx = dict(BASE_ENV)\n ctx.update(os.environ)\n ctx.update(bash_env())\n if env is not None:\n ctx.update(env)\n return ctx\n", "path": "xonsh/environ.py"}]} |
gh_patches_debug_1033 | rasdani/github-patches | git_diff | google__turbinia-1086 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
sphinx docs build broken
Getting an error when trying to build the docs:
```
$ sphinx-build -b html -d build/doctrees docs dist/docs
Running Sphinx v4.5.0
WARNING: html_static_path entry '_static' does not exist
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 19 source files that are out of date
updating environment: [new config] 19 added, 0 changed, 0 removed
reading sources... [ 5%] developer/contributing
Extension error (sphinx_markdown_tables):
Handler <function process_tables at 0x7fb9b1b0a700> for event 'source-read' threw an exception (exception: __init__() missing 1 required positional argument: 'config')
```
Trying an earlier version of sphinx and an earlier version of the repo does not resolve the issue. It seems to be something in the sphinx-markdown-tables module, but that doesn't seem to have changed that recently either (more than a month ago: https://pypi.org/project/sphinx-markdown-tables/0.0.15/#history).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12 #
13 # import os
14 # import sys
15 # sys.path.insert(0, os.path.abspath('.'))
16
17 from __future__ import unicode_literals
18 import re
19
20 from recommonmark.parser import CommonMarkParser
21 from recommonmark.transform import AutoStructify
22 from docutils import nodes, transforms
23
24 # -- Project information -----------------------------------------------------
25
26 project = 'Turbinia'
27 copyright = '2020, Google Inc'
28 author = 'Turbinia maintainers'
29
30 # -- General configuration ---------------------------------------------------
31
32 # Add any Sphinx extension module names here, as strings. They can be
33 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
34 # ones.
35 extensions = [
36 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage',
37 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'sphinx_markdown_tables',
38 'recommonmark'
39 ]
40
41 # Add any paths that contain templates here, relative to this directory.
42 templates_path = ['_templates']
43
44 # List of patterns, relative to source directory, that match files and
45 # directories to ignore when looking for source files.
46 # This pattern also affects html_static_path and html_extra_path.
47 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'design/*']
48
49 # -- Options for HTML output -------------------------------------------------
50
51 # The theme to use for HTML and HTML Help pages. See the documentation for
52 # a list of builtin themes.
53 #
54 html_theme = 'sphinx_rtd_theme'
55
56 # The master toctree document.
57 master_doc = 'index'
58
59 # The name of the Pygments (syntax highlighting) style to use.
60 pygments_style = 'sphinx'
61
62 # Add any paths that contain custom static files (such as style sheets) here,
63 # relative to this directory. They are copied after the builtin static files,
64 # so a file named "default.css" will overwrite the builtin "default.css".
65 html_static_path = ['_static']
66
67 # The default sidebars (for documents that don't match any pattern) are
68 # defined by theme itself. Builtin themes are using these templates by
69 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
70 # 'searchbox.html']``.
71 #
72 html_sidebars = {
73 '**': [
74 'sidebar.html', 'localtoc.html', 'relations.html', 'sourcelink.html',
75 'searchbox.html'
76 ]
77 }
78
79 # Adding retries to linkchecks before declaring a link broken
80 linkcheck_retries = 3
81
82 # Output file base name for HTML help builder.
83 htmlhelp_basename = 'turbiniadoc'
84
85 html_logo = "images/turbinia-logo.jpg"
86
87
88 class ProcessLink(transforms.Transform):
89 """Transform definition to parse .md references to internal pages."""
90
91 default_priority = 1000
92
93 def find_replace(self, node):
94 """Parses URIs containing .md and replaces them with their HTML page."""
95 if isinstance(node, nodes.reference) and 'refuri' in node:
96 r = node['refuri']
97 if r.endswith('.md'):
98 r = r[:-3] + '.html'
99 node['refuri'] = r
100
101 return node
102
103 def traverse(self, node):
104 """Traverse the document tree rooted at node.
105 node : docutil node
106 current root node to traverse
107 """
108 self.find_replace(node)
109
110 for c in node.children:
111 self.traverse(c)
112
113 # pylint: disable=arguments-differ,attribute-defined-outside-init
114 # this was taken from GRR's config file for documentation
115 def apply(self):
116 self.current_level = 0
117 self.traverse(self.document)
118
119
120 def setup(app):
121 """Add custom parsers to Sphinx generation."""
122 app.add_config_value(
123 'recommonmark_config', {
124 'enable_auto_doc_ref': False,
125 }, True)
126 app.add_transform(AutoStructify)
127 app.add_transform(ProcessLink)
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -34,8 +34,7 @@
# ones.
extensions = [
'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage',
- 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'sphinx_markdown_tables',
- 'recommonmark'
+ 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'recommonmark'
]
# Add any paths that contain templates here, relative to this directory.
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -34,8 +34,7 @@\n # ones.\n extensions = [\n 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage',\n- 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'sphinx_markdown_tables',\n- 'recommonmark'\n+ 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'recommonmark'\n ]\n \n # Add any paths that contain templates here, relative to this directory.\n", "issue": "sphinx docs build broken\nGetting an error when trying to build the docs:\r\n```\r\n$ sphinx-build -b html -d build/doctrees docs dist/docs\r\nRunning Sphinx v4.5.0\r\nWARNING: html_static_path entry '_static' does not exist\r\nbuilding [mo]: targets for 0 po files that are out of date\r\nbuilding [html]: targets for 19 source files that are out of date\r\nupdating environment: [new config] 19 added, 0 changed, 0 removed\r\nreading sources... [ 5%] developer/contributing \r\nExtension error (sphinx_markdown_tables):\r\nHandler <function process_tables at 0x7fb9b1b0a700> for event 'source-read' threw an exception (exception: __init__() missing 1 required positional argument: 'config')\r\n```\r\n\r\nTrying an earlier version of sphinx and an earlier version of the repo does not resolve the issue. It seems to be something in the sphinx-markdown-tables module, but that doesn't seem to have changed that recently either (more than a month ago: https://pypi.org/project/sphinx-markdown-tables/0.0.15/#history).\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nfrom __future__ import unicode_literals\nimport re\n\nfrom recommonmark.parser import CommonMarkParser\nfrom recommonmark.transform import AutoStructify\nfrom docutils import nodes, transforms\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Turbinia'\ncopyright = '2020, Google Inc'\nauthor = 'Turbinia maintainers'\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'sphinx_markdown_tables',\n 'recommonmark'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'design/*']\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\nhtml_sidebars = {\n '**': [\n 'sidebar.html', 'localtoc.html', 'relations.html', 'sourcelink.html',\n 'searchbox.html'\n ]\n}\n\n# Adding retries to linkchecks before declaring a link broken\nlinkcheck_retries = 3\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'turbiniadoc'\n\nhtml_logo = \"images/turbinia-logo.jpg\"\n\n\nclass ProcessLink(transforms.Transform):\n \"\"\"Transform definition to parse .md references to internal pages.\"\"\"\n\n default_priority = 1000\n\n def find_replace(self, node):\n \"\"\"Parses URIs containing .md and replaces them with their HTML page.\"\"\"\n if isinstance(node, nodes.reference) and 'refuri' in node:\n r = node['refuri']\n if r.endswith('.md'):\n r = r[:-3] + '.html'\n node['refuri'] = r\n\n return node\n\n def traverse(self, node):\n \"\"\"Traverse the document tree rooted at node.\n node : docutil node\n current root node to traverse\n \"\"\"\n self.find_replace(node)\n\n for c in node.children:\n self.traverse(c)\n\n # pylint: disable=arguments-differ,attribute-defined-outside-init\n # this was taken from GRR's config file for documentation\n def apply(self):\n self.current_level = 0\n self.traverse(self.document)\n\n\ndef setup(app):\n \"\"\"Add custom parsers to Sphinx generation.\"\"\"\n app.add_config_value(\n 'recommonmark_config', {\n 'enable_auto_doc_ref': False,\n }, True)\n app.add_transform(AutoStructify)\n app.add_transform(ProcessLink)\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nfrom __future__ import unicode_literals\nimport re\n\nfrom recommonmark.parser import CommonMarkParser\nfrom recommonmark.transform import AutoStructify\nfrom docutils import nodes, transforms\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Turbinia'\ncopyright = '2020, Google Inc'\nauthor = 'Turbinia maintainers'\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'recommonmark'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'design/*']\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\nhtml_sidebars = {\n '**': [\n 'sidebar.html', 'localtoc.html', 'relations.html', 'sourcelink.html',\n 'searchbox.html'\n ]\n}\n\n# Adding retries to linkchecks before declaring a link broken\nlinkcheck_retries = 3\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'turbiniadoc'\n\nhtml_logo = \"images/turbinia-logo.jpg\"\n\n\nclass ProcessLink(transforms.Transform):\n \"\"\"Transform definition to parse .md references to internal pages.\"\"\"\n\n default_priority = 1000\n\n def find_replace(self, node):\n \"\"\"Parses URIs containing .md and replaces them with their HTML page.\"\"\"\n if isinstance(node, nodes.reference) and 'refuri' in node:\n r = node['refuri']\n if r.endswith('.md'):\n r = r[:-3] + '.html'\n node['refuri'] = r\n\n return node\n\n def traverse(self, node):\n \"\"\"Traverse the document tree rooted at node.\n node : docutil node\n current root node to traverse\n \"\"\"\n self.find_replace(node)\n\n for c in node.children:\n self.traverse(c)\n\n # pylint: disable=arguments-differ,attribute-defined-outside-init\n # this was taken from GRR's config file for documentation\n def apply(self):\n self.current_level = 0\n self.traverse(self.document)\n\n\ndef setup(app):\n \"\"\"Add custom parsers to Sphinx generation.\"\"\"\n app.add_config_value(\n 'recommonmark_config', {\n 'enable_auto_doc_ref': False,\n }, True)\n app.add_transform(AutoStructify)\n app.add_transform(ProcessLink)\n", "path": "docs/conf.py"}]} |
gh_patches_debug_1034 | rasdani/github-patches | git_diff | django-import-export__django-import-export-214 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Export order
Is there a way to specify a partial export order? For example, I'd like to specify that the first two columns should be "id" and "name", then I'd like to have all remaining fields in whatever order.
Currently I have two options:
- Specify `export_order` in the resource's meta object, but any field that is not listed will not be included;
- Not specify `export_order`, in which case the export starts with the fields declared explicitly in the resource in no particular order, followed by introspected fields in the order they were declared in the model.
Ideally, what I would like is to have introspected fields first in order of declaration, then explicit fields. Since other applications may have different requirements, I would be happy with specifying a couple of fields explicitly in `export_order` so that those fields will come first and in the specified order, then have any remaining fields come after in no particular order.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `import_export/resources.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import functools
4 from copy import deepcopy
5 import sys
6 import traceback
7
8 import tablib
9 from diff_match_patch import diff_match_patch
10
11 from django.utils.safestring import mark_safe
12 from django.utils import six
13 from django.db import transaction
14 from django.db.models.fields import FieldDoesNotExist
15 from django.db.models.query import QuerySet
16 from django.db.models.related import RelatedObject
17 from django.conf import settings
18
19 from .results import Error, Result, RowResult
20 from .fields import Field
21 from import_export import widgets
22 from .instance_loaders import (
23 ModelInstanceLoader,
24 )
25
26
27 try:
28 from django.utils.encoding import force_text
29 except ImportError:
30 from django.utils.encoding import force_unicode as force_text
31
32 try:
33 from collections import OrderedDict
34 except ImportError:
35 from django.utils.datastructures import SortedDict as OrderedDict
36
37 USE_TRANSACTIONS = getattr(settings, 'IMPORT_EXPORT_USE_TRANSACTIONS', False)
38
39
40 class ResourceOptions(object):
41 """
42 The inner Meta class allows for class-level configuration of how the
43 Resource should behave. The following options are available:
44
45 * ``fields`` - Controls what introspected fields the Resource
46 should include. A whitelist of fields.
47
48 * ``exclude`` - Controls what introspected fields the Resource should
49 NOT include. A blacklist of fields.
50
51 * ``model`` - Django Model class. It is used to introspect available
52 fields.
53
54 * ``instance_loader_class`` - Controls which class instance will take
55 care of loading existing objects.
56
57 * ``import_id_fields`` - Controls which object fields will be used to
58 identify existing instances.
59
60 * ``export_order`` - Controls export order for columns.
61
62 * ``widgets`` - dictionary defines widget kwargs for fields.
63
64 * ``use_transactions`` - Controls if import should use database
65 transactions. Default value is ``None`` meaning
66 ``settings.IMPORT_EXPORT_USE_TRANSACTIONS`` will be evaluated.
67
68 * ``skip_unchanged`` - Controls if the import should skip unchanged records.
69 Default value is False
70
71 * ``report_skipped`` - Controls if the result reports skipped rows
72 Default value is True
73
74 """
75 fields = None
76 model = None
77 exclude = None
78 instance_loader_class = None
79 import_id_fields = ['id']
80 export_order = None
81 widgets = None
82 use_transactions = None
83 skip_unchanged = False
84 report_skipped = True
85
86 def __new__(cls, meta=None):
87 overrides = {}
88
89 if meta:
90 for override_name in dir(meta):
91 if not override_name.startswith('_'):
92 overrides[override_name] = getattr(meta, override_name)
93
94 return object.__new__(type(str('ResourceOptions'), (cls,), overrides))
95
96
97 class DeclarativeMetaclass(type):
98
99 def __new__(cls, name, bases, attrs):
100 declared_fields = []
101
102 for field_name, obj in attrs.copy().items():
103 if isinstance(obj, Field):
104 field = attrs.pop(field_name)
105 if not field.column_name:
106 field.column_name = field_name
107 declared_fields.append((field_name, field))
108
109 attrs['fields'] = OrderedDict(declared_fields)
110 new_class = super(DeclarativeMetaclass, cls).__new__(cls, name,
111 bases, attrs)
112 opts = getattr(new_class, 'Meta', None)
113 new_class._meta = ResourceOptions(opts)
114
115 return new_class
116
117
118 class Resource(six.with_metaclass(DeclarativeMetaclass)):
119 """
120 Resource defines how objects are mapped to their import and export
121 representations and handle importing and exporting data.
122 """
123
124 def get_use_transactions(self):
125 if self._meta.use_transactions is None:
126 return USE_TRANSACTIONS
127 else:
128 return self._meta.use_transactions
129
130 def get_fields(self):
131 """
132 Returns fields in ``export_order`` order.
133 """
134 return [self.fields[f] for f in self.get_export_order()]
135
136 @classmethod
137 def get_field_name(cls, field):
138 """
139 Returns field name for given field.
140 """
141 for field_name, f in cls.fields.items():
142 if f == field:
143 return field_name
144 raise AttributeError("Field %s does not exists in %s resource" % (
145 field, cls))
146
147 def init_instance(self, row=None):
148 raise NotImplementedError()
149
150 def get_instance(self, instance_loader, row):
151 return instance_loader.get_instance(row)
152
153 def get_or_init_instance(self, instance_loader, row):
154 instance = self.get_instance(instance_loader, row)
155 if instance:
156 return (instance, False)
157 else:
158 return (self.init_instance(row), True)
159
160 def save_instance(self, instance, dry_run=False):
161 self.before_save_instance(instance, dry_run)
162 if not dry_run:
163 instance.save()
164 self.after_save_instance(instance, dry_run)
165
166 def before_save_instance(self, instance, dry_run):
167 """
168 Override to add additional logic.
169 """
170 pass
171
172 def after_save_instance(self, instance, dry_run):
173 """
174 Override to add additional logic.
175 """
176 pass
177
178 def delete_instance(self, instance, dry_run=False):
179 self.before_delete_instance(instance, dry_run)
180 if not dry_run:
181 instance.delete()
182 self.after_delete_instance(instance, dry_run)
183
184 def before_delete_instance(self, instance, dry_run):
185 """
186 Override to add additional logic.
187 """
188 pass
189
190 def after_delete_instance(self, instance, dry_run):
191 """
192 Override to add additional logic.
193 """
194 pass
195
196 def import_field(self, field, obj, data):
197 if field.attribute and field.column_name in data:
198 field.save(obj, data)
199
200 def import_obj(self, obj, data, dry_run):
201 """
202 """
203 for field in self.get_fields():
204 if isinstance(field.widget, widgets.ManyToManyWidget):
205 continue
206 self.import_field(field, obj, data)
207
208 def save_m2m(self, obj, data, dry_run):
209 """
210 Saves m2m fields.
211
212 Model instance need to have a primary key value before
213 a many-to-many relationship can be used.
214 """
215 if not dry_run:
216 for field in self.get_fields():
217 if not isinstance(field.widget, widgets.ManyToManyWidget):
218 continue
219 self.import_field(field, obj, data)
220
221 def for_delete(self, row, instance):
222 """
223 Returns ``True`` if ``row`` importing should delete instance.
224
225 Default implementation returns ``False``.
226 Override this method to handle deletion.
227 """
228 return False
229
230 def skip_row(self, instance, original):
231 """
232 Returns ``True`` if ``row`` importing should be skipped.
233
234 Default implementation returns ``False`` unless skip_unchanged == True.
235 Override this method to handle skipping rows meeting certain conditions.
236 """
237 if not self._meta.skip_unchanged:
238 return False
239 for field in self.get_fields():
240 try:
241 # For fields that are models.fields.related.ManyRelatedManager
242 # we need to compare the results
243 if list(field.get_value(instance).all()) != list(field.get_value(original).all()):
244 return False
245 except AttributeError:
246 if field.get_value(instance) != field.get_value(original):
247 return False
248 return True
249
250 def get_diff(self, original, current, dry_run=False):
251 """
252 Get diff between original and current object when ``import_data``
253 is run.
254
255 ``dry_run`` allows handling special cases when object is not saved
256 to database (ie. m2m relationships).
257 """
258 data = []
259 dmp = diff_match_patch()
260 for field in self.get_fields():
261 v1 = self.export_field(field, original) if original else ""
262 v2 = self.export_field(field, current) if current else ""
263 diff = dmp.diff_main(force_text(v1), force_text(v2))
264 dmp.diff_cleanupSemantic(diff)
265 html = dmp.diff_prettyHtml(diff)
266 html = mark_safe(html)
267 data.append(html)
268 return data
269
270 def get_diff_headers(self):
271 """
272 Diff representation headers.
273 """
274 return self.get_export_headers()
275
276 def before_import(self, dataset, dry_run):
277 """
278 Override to add additional logic.
279 """
280 pass
281
282 def import_data(self, dataset, dry_run=False, raise_errors=False,
283 use_transactions=None):
284 """
285 Imports data from ``dataset``.
286
287 ``use_transactions``
288 If ``True`` import process will be processed inside transaction.
289 If ``dry_run`` is set, or error occurs, transaction will be rolled
290 back.
291 """
292 result = Result()
293 result.diff_headers = self.get_diff_headers()
294
295 if use_transactions is None:
296 use_transactions = self.get_use_transactions()
297
298 if use_transactions is True:
299 # when transactions are used we want to create/update/delete object
300 # as transaction will be rolled back if dry_run is set
301 real_dry_run = False
302 transaction.enter_transaction_management()
303 transaction.managed(True)
304 else:
305 real_dry_run = dry_run
306
307 try:
308 self.before_import(dataset, real_dry_run)
309 except Exception as e:
310 tb_info = traceback.format_exc(2)
311 result.base_errors.append(Error(repr(e), tb_info))
312 if raise_errors:
313 if use_transactions:
314 transaction.rollback()
315 transaction.leave_transaction_management()
316 raise
317
318 instance_loader = self._meta.instance_loader_class(self, dataset)
319
320 for row in dataset.dict:
321 try:
322 row_result = RowResult()
323 instance, new = self.get_or_init_instance(instance_loader, row)
324 if new:
325 row_result.import_type = RowResult.IMPORT_TYPE_NEW
326 else:
327 row_result.import_type = RowResult.IMPORT_TYPE_UPDATE
328 row_result.new_record = new
329 original = deepcopy(instance)
330 if self.for_delete(row, instance):
331 if new:
332 row_result.import_type = RowResult.IMPORT_TYPE_SKIP
333 row_result.diff = self.get_diff(None, None,
334 real_dry_run)
335 else:
336 row_result.import_type = RowResult.IMPORT_TYPE_DELETE
337 self.delete_instance(instance, real_dry_run)
338 row_result.diff = self.get_diff(original, None,
339 real_dry_run)
340 else:
341 self.import_obj(instance, row, real_dry_run)
342 if self.skip_row(instance, original):
343 row_result.import_type = RowResult.IMPORT_TYPE_SKIP
344 else:
345 self.save_instance(instance, real_dry_run)
346 self.save_m2m(instance, row, real_dry_run)
347 # Add object info to RowResult for LogEntry
348 row_result.object_repr = force_text(instance)
349 row_result.object_id = instance.pk
350 row_result.diff = self.get_diff(original, instance,
351 real_dry_run)
352 except Exception as e:
353 tb_info = traceback.format_exc(2)
354 row_result.errors.append(Error(e, tb_info))
355 if raise_errors:
356 if use_transactions:
357 transaction.rollback()
358 transaction.leave_transaction_management()
359 six.reraise(*sys.exc_info())
360 if (row_result.import_type != RowResult.IMPORT_TYPE_SKIP or
361 self._meta.report_skipped):
362 result.rows.append(row_result)
363
364 if use_transactions:
365 if dry_run or result.has_errors():
366 transaction.rollback()
367 else:
368 transaction.commit()
369 transaction.leave_transaction_management()
370
371 return result
372
373 def get_export_order(self):
374 return self._meta.export_order or self.fields.keys()
375
376 def export_field(self, field, obj):
377 field_name = self.get_field_name(field)
378 method = getattr(self, 'dehydrate_%s' % field_name, None)
379 if method is not None:
380 return method(obj)
381 return field.export(obj)
382
383 def export_resource(self, obj):
384 return [self.export_field(field, obj) for field in self.get_fields()]
385
386 def get_export_headers(self):
387 headers = [force_text(field.column_name) for field in self.get_fields()]
388 return headers
389
390 def export(self, queryset=None):
391 """
392 Exports a resource.
393 """
394 if queryset is None:
395 queryset = self.get_queryset()
396 headers = self.get_export_headers()
397 data = tablib.Dataset(headers=headers)
398
399 if isinstance(queryset, QuerySet):
400 # Iterate without the queryset cache, to avoid wasting memory when
401 # exporting large datasets.
402 iterable = queryset.iterator()
403 else:
404 iterable = queryset
405 for obj in iterable:
406 data.append(self.export_resource(obj))
407 return data
408
409
410 class ModelDeclarativeMetaclass(DeclarativeMetaclass):
411
412 def __new__(cls, name, bases, attrs):
413 new_class = super(ModelDeclarativeMetaclass,
414 cls).__new__(cls, name, bases, attrs)
415
416 opts = new_class._meta
417
418 if not opts.instance_loader_class:
419 opts.instance_loader_class = ModelInstanceLoader
420
421 if opts.model:
422 model_opts = opts.model._meta
423 declared_fields = new_class.fields
424
425 field_list = []
426 for f in sorted(model_opts.fields + model_opts.many_to_many):
427 if opts.fields is not None and not f.name in opts.fields:
428 continue
429 if opts.exclude and f.name in opts.exclude:
430 continue
431 if f.name in declared_fields:
432 continue
433
434 field = new_class.field_from_django_field(f.name, f,
435 readonly=False)
436 field_list.append((f.name, field, ))
437
438 new_class.fields.update(OrderedDict(field_list))
439
440 #add fields that follow relationships
441 if opts.fields is not None:
442 field_list = []
443 for field_name in opts.fields:
444 if field_name in declared_fields:
445 continue
446 if field_name.find('__') == -1:
447 continue
448
449 model = opts.model
450 attrs = field_name.split('__')
451 for i, attr in enumerate(attrs):
452 verbose_path = ".".join([opts.model.__name__] + attrs[0:i+1])
453
454 try:
455 f = model._meta.get_field_by_name(attr)[0]
456 except FieldDoesNotExist as e:
457 raise FieldDoesNotExist("%s: %s has no field named '%s'" %
458 (verbose_path, model.__name__, attr))
459
460 if i < len(attrs) - 1:
461 # We're not at the last attribute yet, so check that
462 # we're looking at a relation, and move on to the
463 # next model.
464 if isinstance(f, RelatedObject):
465 model = f.model
466 else:
467 if f.rel is None:
468 raise KeyError('%s is not a relation' % verbose_path)
469 model = f.rel.to
470
471 if isinstance(f, RelatedObject):
472 f = f.field
473
474 field = new_class.field_from_django_field(field_name, f,
475 readonly=True)
476 field_list.append((field_name, field))
477
478 new_class.fields.update(OrderedDict(field_list))
479
480 return new_class
481
482
483 class ModelResource(six.with_metaclass(ModelDeclarativeMetaclass, Resource)):
484 """
485 ModelResource is Resource subclass for handling Django models.
486 """
487
488 @classmethod
489 def widget_from_django_field(cls, f, default=widgets.Widget):
490 """
491 Returns the widget that would likely be associated with each
492 Django type.
493 """
494 result = default
495 internal_type = f.get_internal_type()
496 if internal_type in ('ManyToManyField', ):
497 result = functools.partial(widgets.ManyToManyWidget,
498 model=f.rel.to)
499 if internal_type in ('ForeignKey', 'OneToOneField', ):
500 result = functools.partial(widgets.ForeignKeyWidget,
501 model=f.rel.to)
502 if internal_type in ('DecimalField', ):
503 result = widgets.DecimalWidget
504 if internal_type in ('DateTimeField', ):
505 result = widgets.DateTimeWidget
506 elif internal_type in ('DateField', ):
507 result = widgets.DateWidget
508 elif internal_type in ('IntegerField', 'PositiveIntegerField',
509 'PositiveSmallIntegerField', 'SmallIntegerField', 'AutoField'):
510 result = widgets.IntegerWidget
511 elif internal_type in ('BooleanField', 'NullBooleanField'):
512 result = widgets.BooleanWidget
513 return result
514
515 @classmethod
516 def widget_kwargs_for_field(self, field_name):
517 """
518 Returns widget kwargs for given field_name.
519 """
520 if self._meta.widgets:
521 return self._meta.widgets.get(field_name, {})
522 return {}
523
524 @classmethod
525 def field_from_django_field(self, field_name, django_field, readonly):
526 """
527 Returns a Resource Field instance for the given Django model field.
528 """
529
530 FieldWidget = self.widget_from_django_field(django_field)
531 widget_kwargs = self.widget_kwargs_for_field(field_name)
532 field = Field(attribute=field_name, column_name=field_name,
533 widget=FieldWidget(**widget_kwargs), readonly=readonly)
534 return field
535
536 def get_import_id_fields(self):
537 return self._meta.import_id_fields
538
539 def get_queryset(self):
540 return self._meta.model.objects.all()
541
542 def init_instance(self, row=None):
543 return self._meta.model()
544
545
546 def modelresource_factory(model, resource_class=ModelResource):
547 """
548 Factory for creating ``ModelResource`` class for given Django model.
549 """
550 attrs = {'model': model}
551 Meta = type(str('Meta'), (object,), attrs)
552
553 class_name = model.__name__ + str('Resource')
554
555 class_attrs = {
556 'Meta': Meta,
557 }
558
559 metaclass = ModelDeclarativeMetaclass
560 return metaclass(class_name, (resource_class,), class_attrs)
561
562
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/import_export/resources.py b/import_export/resources.py
--- a/import_export/resources.py
+++ b/import_export/resources.py
@@ -371,7 +371,8 @@
return result
def get_export_order(self):
- return self._meta.export_order or self.fields.keys()
+ order = tuple (self._meta.export_order or ())
+ return order + tuple (k for k in self.fields.keys() if k not in order)
def export_field(self, field, obj):
field_name = self.get_field_name(field)
| {"golden_diff": "diff --git a/import_export/resources.py b/import_export/resources.py\n--- a/import_export/resources.py\n+++ b/import_export/resources.py\n@@ -371,7 +371,8 @@\n return result\n \n def get_export_order(self):\n- return self._meta.export_order or self.fields.keys()\n+ order = tuple (self._meta.export_order or ())\n+ return order + tuple (k for k in self.fields.keys() if k not in order)\n \n def export_field(self, field, obj):\n field_name = self.get_field_name(field)\n", "issue": "Export order\nIs there a way to specify a partial export order\u00a0? For example, I'd like to specify that the first two columns should be \"id\" and \"name\", then I'd like to have all remaining fields in whatever order.\n\nCurrently I have two options: \n- Specify `export_order` in the resource's meta object, but any field that is not listed will not be included;\n- Not specify `export_order` in which case the export starts with the fields declared explicitly in the resource in no particular order followed by introspected fields in the order they were declared in the model.\n\nIdeally, what I would like is to have introspected fields first in order of declaration, then explicit fields. Since other applications may have different requirements, I would be happy with specifying a couple of fields explicitly in `export_order` so that those fields will come first and in the specified order, then have any remaining fields come after in no particular order.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport functools\nfrom copy import deepcopy\nimport sys\nimport traceback\n\nimport tablib\nfrom diff_match_patch import diff_match_patch\n\nfrom django.utils.safestring import mark_safe\nfrom django.utils import six\nfrom django.db import transaction\nfrom django.db.models.fields import FieldDoesNotExist\nfrom django.db.models.query import QuerySet\nfrom django.db.models.related import RelatedObject\nfrom django.conf import settings\n\nfrom .results import Error, Result, RowResult\nfrom .fields import Field\nfrom import_export import widgets\nfrom .instance_loaders import (\n ModelInstanceLoader,\n)\n\n\ntry:\n from django.utils.encoding import force_text\nexcept ImportError:\n from django.utils.encoding import force_unicode as force_text\n\ntry:\n from collections import OrderedDict\nexcept ImportError:\n from django.utils.datastructures import SortedDict as OrderedDict\n\nUSE_TRANSACTIONS = getattr(settings, 'IMPORT_EXPORT_USE_TRANSACTIONS', False)\n\n\nclass ResourceOptions(object):\n \"\"\"\n The inner Meta class allows for class-level configuration of how the\n Resource should behave. The following options are available:\n\n * ``fields`` - Controls what introspected fields the Resource\n should include. A whitelist of fields.\n\n * ``exclude`` - Controls what introspected fields the Resource should\n NOT include. A blacklist of fields.\n\n * ``model`` - Django Model class. It is used to introspect available\n fields.\n\n * ``instance_loader_class`` - Controls which class instance will take\n care of loading existing objects.\n\n * ``import_id_fields`` - Controls which object fields will be used to\n identify existing instances.\n\n * ``export_order`` - Controls export order for columns.\n\n * ``widgets`` - dictionary defines widget kwargs for fields.\n\n * ``use_transactions`` - Controls if import should use database\n transactions. 
Default value is ``None`` meaning\n ``settings.IMPORT_EXPORT_USE_TRANSACTIONS`` will be evaluated.\n\n * ``skip_unchanged`` - Controls if the import should skip unchanged records.\n Default value is False\n\n * ``report_skipped`` - Controls if the result reports skipped rows\n Default value is True\n\n \"\"\"\n fields = None\n model = None\n exclude = None\n instance_loader_class = None\n import_id_fields = ['id']\n export_order = None\n widgets = None\n use_transactions = None\n skip_unchanged = False\n report_skipped = True\n\n def __new__(cls, meta=None):\n overrides = {}\n\n if meta:\n for override_name in dir(meta):\n if not override_name.startswith('_'):\n overrides[override_name] = getattr(meta, override_name)\n\n return object.__new__(type(str('ResourceOptions'), (cls,), overrides))\n\n\nclass DeclarativeMetaclass(type):\n\n def __new__(cls, name, bases, attrs):\n declared_fields = []\n\n for field_name, obj in attrs.copy().items():\n if isinstance(obj, Field):\n field = attrs.pop(field_name)\n if not field.column_name:\n field.column_name = field_name\n declared_fields.append((field_name, field))\n\n attrs['fields'] = OrderedDict(declared_fields)\n new_class = super(DeclarativeMetaclass, cls).__new__(cls, name,\n bases, attrs)\n opts = getattr(new_class, 'Meta', None)\n new_class._meta = ResourceOptions(opts)\n\n return new_class\n\n\nclass Resource(six.with_metaclass(DeclarativeMetaclass)):\n \"\"\"\n Resource defines how objects are mapped to their import and export\n representations and handle importing and exporting data.\n \"\"\"\n\n def get_use_transactions(self):\n if self._meta.use_transactions is None:\n return USE_TRANSACTIONS\n else:\n return self._meta.use_transactions\n\n def get_fields(self):\n \"\"\"\n Returns fields in ``export_order`` order.\n \"\"\"\n return [self.fields[f] for f in self.get_export_order()]\n\n @classmethod\n def get_field_name(cls, field):\n \"\"\"\n Returns field name for given field.\n \"\"\"\n for field_name, f in cls.fields.items():\n if f == field:\n return field_name\n raise AttributeError(\"Field %s does not exists in %s resource\" % (\n field, cls))\n\n def init_instance(self, row=None):\n raise NotImplementedError()\n\n def get_instance(self, instance_loader, row):\n return instance_loader.get_instance(row)\n\n def get_or_init_instance(self, instance_loader, row):\n instance = self.get_instance(instance_loader, row)\n if instance:\n return (instance, False)\n else:\n return (self.init_instance(row), True)\n\n def save_instance(self, instance, dry_run=False):\n self.before_save_instance(instance, dry_run)\n if not dry_run:\n instance.save()\n self.after_save_instance(instance, dry_run)\n\n def before_save_instance(self, instance, dry_run):\n \"\"\"\n Override to add additional logic.\n \"\"\"\n pass\n\n def after_save_instance(self, instance, dry_run):\n \"\"\"\n Override to add additional logic.\n \"\"\"\n pass\n\n def delete_instance(self, instance, dry_run=False):\n self.before_delete_instance(instance, dry_run)\n if not dry_run:\n instance.delete()\n self.after_delete_instance(instance, dry_run)\n\n def before_delete_instance(self, instance, dry_run):\n \"\"\"\n Override to add additional logic.\n \"\"\"\n pass\n\n def after_delete_instance(self, instance, dry_run):\n \"\"\"\n Override to add additional logic.\n \"\"\"\n pass\n\n def import_field(self, field, obj, data):\n if field.attribute and field.column_name in data:\n field.save(obj, data)\n\n def import_obj(self, obj, data, dry_run):\n \"\"\"\n \"\"\"\n for field in 
self.get_fields():\n if isinstance(field.widget, widgets.ManyToManyWidget):\n continue\n self.import_field(field, obj, data)\n\n def save_m2m(self, obj, data, dry_run):\n \"\"\"\n Saves m2m fields.\n\n Model instance need to have a primary key value before\n a many-to-many relationship can be used.\n \"\"\"\n if not dry_run:\n for field in self.get_fields():\n if not isinstance(field.widget, widgets.ManyToManyWidget):\n continue\n self.import_field(field, obj, data)\n\n def for_delete(self, row, instance):\n \"\"\"\n Returns ``True`` if ``row`` importing should delete instance.\n\n Default implementation returns ``False``.\n Override this method to handle deletion.\n \"\"\"\n return False\n\n def skip_row(self, instance, original):\n \"\"\"\n Returns ``True`` if ``row`` importing should be skipped.\n\n Default implementation returns ``False`` unless skip_unchanged == True.\n Override this method to handle skipping rows meeting certain conditions.\n \"\"\"\n if not self._meta.skip_unchanged:\n return False\n for field in self.get_fields():\n try:\n # For fields that are models.fields.related.ManyRelatedManager\n # we need to compare the results\n if list(field.get_value(instance).all()) != list(field.get_value(original).all()):\n return False\n except AttributeError:\n if field.get_value(instance) != field.get_value(original):\n return False\n return True\n\n def get_diff(self, original, current, dry_run=False):\n \"\"\"\n Get diff between original and current object when ``import_data``\n is run.\n\n ``dry_run`` allows handling special cases when object is not saved\n to database (ie. m2m relationships).\n \"\"\"\n data = []\n dmp = diff_match_patch()\n for field in self.get_fields():\n v1 = self.export_field(field, original) if original else \"\"\n v2 = self.export_field(field, current) if current else \"\"\n diff = dmp.diff_main(force_text(v1), force_text(v2))\n dmp.diff_cleanupSemantic(diff)\n html = dmp.diff_prettyHtml(diff)\n html = mark_safe(html)\n data.append(html)\n return data\n\n def get_diff_headers(self):\n \"\"\"\n Diff representation headers.\n \"\"\"\n return self.get_export_headers()\n\n def before_import(self, dataset, dry_run):\n \"\"\"\n Override to add additional logic.\n \"\"\"\n pass\n\n def import_data(self, dataset, dry_run=False, raise_errors=False,\n use_transactions=None):\n \"\"\"\n Imports data from ``dataset``.\n\n ``use_transactions``\n If ``True`` import process will be processed inside transaction.\n If ``dry_run`` is set, or error occurs, transaction will be rolled\n back.\n \"\"\"\n result = Result()\n result.diff_headers = self.get_diff_headers()\n\n if use_transactions is None:\n use_transactions = self.get_use_transactions()\n\n if use_transactions is True:\n # when transactions are used we want to create/update/delete object\n # as transaction will be rolled back if dry_run is set\n real_dry_run = False\n transaction.enter_transaction_management()\n transaction.managed(True)\n else:\n real_dry_run = dry_run\n\n try:\n self.before_import(dataset, real_dry_run)\n except Exception as e:\n tb_info = traceback.format_exc(2)\n result.base_errors.append(Error(repr(e), tb_info))\n if raise_errors:\n if use_transactions:\n transaction.rollback()\n transaction.leave_transaction_management()\n raise\n\n instance_loader = self._meta.instance_loader_class(self, dataset)\n\n for row in dataset.dict:\n try:\n row_result = RowResult()\n instance, new = self.get_or_init_instance(instance_loader, row)\n if new:\n row_result.import_type = RowResult.IMPORT_TYPE_NEW\n 
else:\n row_result.import_type = RowResult.IMPORT_TYPE_UPDATE\n row_result.new_record = new\n original = deepcopy(instance)\n if self.for_delete(row, instance):\n if new:\n row_result.import_type = RowResult.IMPORT_TYPE_SKIP\n row_result.diff = self.get_diff(None, None,\n real_dry_run)\n else:\n row_result.import_type = RowResult.IMPORT_TYPE_DELETE\n self.delete_instance(instance, real_dry_run)\n row_result.diff = self.get_diff(original, None,\n real_dry_run)\n else:\n self.import_obj(instance, row, real_dry_run)\n if self.skip_row(instance, original):\n row_result.import_type = RowResult.IMPORT_TYPE_SKIP\n else:\n self.save_instance(instance, real_dry_run)\n self.save_m2m(instance, row, real_dry_run)\n # Add object info to RowResult for LogEntry\n row_result.object_repr = force_text(instance)\n row_result.object_id = instance.pk\n row_result.diff = self.get_diff(original, instance,\n real_dry_run)\n except Exception as e:\n tb_info = traceback.format_exc(2)\n row_result.errors.append(Error(e, tb_info))\n if raise_errors:\n if use_transactions:\n transaction.rollback()\n transaction.leave_transaction_management()\n six.reraise(*sys.exc_info())\n if (row_result.import_type != RowResult.IMPORT_TYPE_SKIP or\n self._meta.report_skipped):\n result.rows.append(row_result)\n\n if use_transactions:\n if dry_run or result.has_errors():\n transaction.rollback()\n else:\n transaction.commit()\n transaction.leave_transaction_management()\n\n return result\n\n def get_export_order(self):\n return self._meta.export_order or self.fields.keys()\n\n def export_field(self, field, obj):\n field_name = self.get_field_name(field)\n method = getattr(self, 'dehydrate_%s' % field_name, None)\n if method is not None:\n return method(obj)\n return field.export(obj)\n\n def export_resource(self, obj):\n return [self.export_field(field, obj) for field in self.get_fields()]\n\n def get_export_headers(self):\n headers = [force_text(field.column_name) for field in self.get_fields()]\n return headers\n\n def export(self, queryset=None):\n \"\"\"\n Exports a resource.\n \"\"\"\n if queryset is None:\n queryset = self.get_queryset()\n headers = self.get_export_headers()\n data = tablib.Dataset(headers=headers)\n\n if isinstance(queryset, QuerySet):\n # Iterate without the queryset cache, to avoid wasting memory when\n # exporting large datasets.\n iterable = queryset.iterator()\n else:\n iterable = queryset\n for obj in iterable:\n data.append(self.export_resource(obj))\n return data\n\n\nclass ModelDeclarativeMetaclass(DeclarativeMetaclass):\n\n def __new__(cls, name, bases, attrs):\n new_class = super(ModelDeclarativeMetaclass,\n cls).__new__(cls, name, bases, attrs)\n\n opts = new_class._meta\n\n if not opts.instance_loader_class:\n opts.instance_loader_class = ModelInstanceLoader\n\n if opts.model:\n model_opts = opts.model._meta\n declared_fields = new_class.fields\n\n field_list = []\n for f in sorted(model_opts.fields + model_opts.many_to_many):\n if opts.fields is not None and not f.name in opts.fields:\n continue\n if opts.exclude and f.name in opts.exclude:\n continue\n if f.name in declared_fields:\n continue\n\n field = new_class.field_from_django_field(f.name, f,\n readonly=False)\n field_list.append((f.name, field, ))\n\n new_class.fields.update(OrderedDict(field_list))\n\n #add fields that follow relationships\n if opts.fields is not None:\n field_list = []\n for field_name in opts.fields:\n if field_name in declared_fields:\n continue\n if field_name.find('__') == -1:\n continue\n\n model = opts.model\n 
attrs = field_name.split('__')\n for i, attr in enumerate(attrs):\n verbose_path = \".\".join([opts.model.__name__] + attrs[0:i+1])\n\n try:\n f = model._meta.get_field_by_name(attr)[0]\n except FieldDoesNotExist as e:\n raise FieldDoesNotExist(\"%s: %s has no field named '%s'\" %\n (verbose_path, model.__name__, attr))\n\n if i < len(attrs) - 1:\n # We're not at the last attribute yet, so check that\n # we're looking at a relation, and move on to the\n # next model.\n if isinstance(f, RelatedObject):\n model = f.model\n else:\n if f.rel is None:\n raise KeyError('%s is not a relation' % verbose_path)\n model = f.rel.to\n\n if isinstance(f, RelatedObject):\n f = f.field\n\n field = new_class.field_from_django_field(field_name, f,\n readonly=True)\n field_list.append((field_name, field))\n\n new_class.fields.update(OrderedDict(field_list))\n\n return new_class\n\n\nclass ModelResource(six.with_metaclass(ModelDeclarativeMetaclass, Resource)):\n \"\"\"\n ModelResource is Resource subclass for handling Django models.\n \"\"\"\n\n @classmethod\n def widget_from_django_field(cls, f, default=widgets.Widget):\n \"\"\"\n Returns the widget that would likely be associated with each\n Django type.\n \"\"\"\n result = default\n internal_type = f.get_internal_type()\n if internal_type in ('ManyToManyField', ):\n result = functools.partial(widgets.ManyToManyWidget,\n model=f.rel.to)\n if internal_type in ('ForeignKey', 'OneToOneField', ):\n result = functools.partial(widgets.ForeignKeyWidget,\n model=f.rel.to)\n if internal_type in ('DecimalField', ):\n result = widgets.DecimalWidget\n if internal_type in ('DateTimeField', ):\n result = widgets.DateTimeWidget\n elif internal_type in ('DateField', ):\n result = widgets.DateWidget\n elif internal_type in ('IntegerField', 'PositiveIntegerField',\n 'PositiveSmallIntegerField', 'SmallIntegerField', 'AutoField'):\n result = widgets.IntegerWidget\n elif internal_type in ('BooleanField', 'NullBooleanField'):\n result = widgets.BooleanWidget\n return result\n\n @classmethod\n def widget_kwargs_for_field(self, field_name):\n \"\"\"\n Returns widget kwargs for given field_name.\n \"\"\"\n if self._meta.widgets:\n return self._meta.widgets.get(field_name, {})\n return {}\n\n @classmethod\n def field_from_django_field(self, field_name, django_field, readonly):\n \"\"\"\n Returns a Resource Field instance for the given Django model field.\n \"\"\"\n\n FieldWidget = self.widget_from_django_field(django_field)\n widget_kwargs = self.widget_kwargs_for_field(field_name)\n field = Field(attribute=field_name, column_name=field_name,\n widget=FieldWidget(**widget_kwargs), readonly=readonly)\n return field\n\n def get_import_id_fields(self):\n return self._meta.import_id_fields\n\n def get_queryset(self):\n return self._meta.model.objects.all()\n\n def init_instance(self, row=None):\n return self._meta.model()\n\n\ndef modelresource_factory(model, resource_class=ModelResource):\n \"\"\"\n Factory for creating ``ModelResource`` class for given Django model.\n \"\"\"\n attrs = {'model': model}\n Meta = type(str('Meta'), (object,), attrs)\n\n class_name = model.__name__ + str('Resource')\n\n class_attrs = {\n 'Meta': Meta,\n }\n\n metaclass = ModelDeclarativeMetaclass\n return metaclass(class_name, (resource_class,), class_attrs)\n\n", "path": "import_export/resources.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport functools\nfrom copy import deepcopy\nimport sys\nimport traceback\n\nimport tablib\nfrom diff_match_patch import 
diff_match_patch\n\nfrom django.utils.safestring import mark_safe\nfrom django.utils import six\nfrom django.db import transaction\nfrom django.db.models.fields import FieldDoesNotExist\nfrom django.db.models.query import QuerySet\nfrom django.db.models.related import RelatedObject\nfrom django.conf import settings\n\nfrom .results import Error, Result, RowResult\nfrom .fields import Field\nfrom import_export import widgets\nfrom .instance_loaders import (\n ModelInstanceLoader,\n)\n\n\ntry:\n from django.utils.encoding import force_text\nexcept ImportError:\n from django.utils.encoding import force_unicode as force_text\n\ntry:\n from collections import OrderedDict\nexcept ImportError:\n from django.utils.datastructures import SortedDict as OrderedDict\n\nUSE_TRANSACTIONS = getattr(settings, 'IMPORT_EXPORT_USE_TRANSACTIONS', False)\n\n\nclass ResourceOptions(object):\n \"\"\"\n The inner Meta class allows for class-level configuration of how the\n Resource should behave. The following options are available:\n\n * ``fields`` - Controls what introspected fields the Resource\n should include. A whitelist of fields.\n\n * ``exclude`` - Controls what introspected fields the Resource should\n NOT include. A blacklist of fields.\n\n * ``model`` - Django Model class. It is used to introspect available\n fields.\n\n * ``instance_loader_class`` - Controls which class instance will take\n care of loading existing objects.\n\n * ``import_id_fields`` - Controls which object fields will be used to\n identify existing instances.\n\n * ``export_order`` - Controls export order for columns.\n\n * ``widgets`` - dictionary defines widget kwargs for fields.\n\n * ``use_transactions`` - Controls if import should use database\n transactions. Default value is ``None`` meaning\n ``settings.IMPORT_EXPORT_USE_TRANSACTIONS`` will be evaluated.\n\n * ``skip_unchanged`` - Controls if the import should skip unchanged records.\n Default value is False\n\n * ``report_skipped`` - Controls if the result reports skipped rows\n Default value is True\n\n \"\"\"\n fields = None\n model = None\n exclude = None\n instance_loader_class = None\n import_id_fields = ['id']\n export_order = None\n widgets = None\n use_transactions = None\n skip_unchanged = False\n report_skipped = True\n\n def __new__(cls, meta=None):\n overrides = {}\n\n if meta:\n for override_name in dir(meta):\n if not override_name.startswith('_'):\n overrides[override_name] = getattr(meta, override_name)\n\n return object.__new__(type(str('ResourceOptions'), (cls,), overrides))\n\n\nclass DeclarativeMetaclass(type):\n\n def __new__(cls, name, bases, attrs):\n declared_fields = []\n\n for field_name, obj in attrs.copy().items():\n if isinstance(obj, Field):\n field = attrs.pop(field_name)\n if not field.column_name:\n field.column_name = field_name\n declared_fields.append((field_name, field))\n\n attrs['fields'] = OrderedDict(declared_fields)\n new_class = super(DeclarativeMetaclass, cls).__new__(cls, name,\n bases, attrs)\n opts = getattr(new_class, 'Meta', None)\n new_class._meta = ResourceOptions(opts)\n\n return new_class\n\n\nclass Resource(six.with_metaclass(DeclarativeMetaclass)):\n \"\"\"\n Resource defines how objects are mapped to their import and export\n representations and handle importing and exporting data.\n \"\"\"\n\n def get_use_transactions(self):\n if self._meta.use_transactions is None:\n return USE_TRANSACTIONS\n else:\n return self._meta.use_transactions\n\n def get_fields(self):\n \"\"\"\n Returns fields in ``export_order`` order.\n 
\"\"\"\n return [self.fields[f] for f in self.get_export_order()]\n\n @classmethod\n def get_field_name(cls, field):\n \"\"\"\n Returns field name for given field.\n \"\"\"\n for field_name, f in cls.fields.items():\n if f == field:\n return field_name\n raise AttributeError(\"Field %s does not exists in %s resource\" % (\n field, cls))\n\n def init_instance(self, row=None):\n raise NotImplementedError()\n\n def get_instance(self, instance_loader, row):\n return instance_loader.get_instance(row)\n\n def get_or_init_instance(self, instance_loader, row):\n instance = self.get_instance(instance_loader, row)\n if instance:\n return (instance, False)\n else:\n return (self.init_instance(row), True)\n\n def save_instance(self, instance, dry_run=False):\n self.before_save_instance(instance, dry_run)\n if not dry_run:\n instance.save()\n self.after_save_instance(instance, dry_run)\n\n def before_save_instance(self, instance, dry_run):\n \"\"\"\n Override to add additional logic.\n \"\"\"\n pass\n\n def after_save_instance(self, instance, dry_run):\n \"\"\"\n Override to add additional logic.\n \"\"\"\n pass\n\n def delete_instance(self, instance, dry_run=False):\n self.before_delete_instance(instance, dry_run)\n if not dry_run:\n instance.delete()\n self.after_delete_instance(instance, dry_run)\n\n def before_delete_instance(self, instance, dry_run):\n \"\"\"\n Override to add additional logic.\n \"\"\"\n pass\n\n def after_delete_instance(self, instance, dry_run):\n \"\"\"\n Override to add additional logic.\n \"\"\"\n pass\n\n def import_field(self, field, obj, data):\n if field.attribute and field.column_name in data:\n field.save(obj, data)\n\n def import_obj(self, obj, data, dry_run):\n \"\"\"\n \"\"\"\n for field in self.get_fields():\n if isinstance(field.widget, widgets.ManyToManyWidget):\n continue\n self.import_field(field, obj, data)\n\n def save_m2m(self, obj, data, dry_run):\n \"\"\"\n Saves m2m fields.\n\n Model instance need to have a primary key value before\n a many-to-many relationship can be used.\n \"\"\"\n if not dry_run:\n for field in self.get_fields():\n if not isinstance(field.widget, widgets.ManyToManyWidget):\n continue\n self.import_field(field, obj, data)\n\n def for_delete(self, row, instance):\n \"\"\"\n Returns ``True`` if ``row`` importing should delete instance.\n\n Default implementation returns ``False``.\n Override this method to handle deletion.\n \"\"\"\n return False\n\n def skip_row(self, instance, original):\n \"\"\"\n Returns ``True`` if ``row`` importing should be skipped.\n\n Default implementation returns ``False`` unless skip_unchanged == True.\n Override this method to handle skipping rows meeting certain conditions.\n \"\"\"\n if not self._meta.skip_unchanged:\n return False\n for field in self.get_fields():\n try:\n # For fields that are models.fields.related.ManyRelatedManager\n # we need to compare the results\n if list(field.get_value(instance).all()) != list(field.get_value(original).all()):\n return False\n except AttributeError:\n if field.get_value(instance) != field.get_value(original):\n return False\n return True\n\n def get_diff(self, original, current, dry_run=False):\n \"\"\"\n Get diff between original and current object when ``import_data``\n is run.\n\n ``dry_run`` allows handling special cases when object is not saved\n to database (ie. 
m2m relationships).\n \"\"\"\n data = []\n dmp = diff_match_patch()\n for field in self.get_fields():\n v1 = self.export_field(field, original) if original else \"\"\n v2 = self.export_field(field, current) if current else \"\"\n diff = dmp.diff_main(force_text(v1), force_text(v2))\n dmp.diff_cleanupSemantic(diff)\n html = dmp.diff_prettyHtml(diff)\n html = mark_safe(html)\n data.append(html)\n return data\n\n def get_diff_headers(self):\n \"\"\"\n Diff representation headers.\n \"\"\"\n return self.get_export_headers()\n\n def before_import(self, dataset, dry_run):\n \"\"\"\n Override to add additional logic.\n \"\"\"\n pass\n\n def import_data(self, dataset, dry_run=False, raise_errors=False,\n use_transactions=None):\n \"\"\"\n Imports data from ``dataset``.\n\n ``use_transactions``\n If ``True`` import process will be processed inside transaction.\n If ``dry_run`` is set, or error occurs, transaction will be rolled\n back.\n \"\"\"\n result = Result()\n result.diff_headers = self.get_diff_headers()\n\n if use_transactions is None:\n use_transactions = self.get_use_transactions()\n\n if use_transactions is True:\n # when transactions are used we want to create/update/delete object\n # as transaction will be rolled back if dry_run is set\n real_dry_run = False\n transaction.enter_transaction_management()\n transaction.managed(True)\n else:\n real_dry_run = dry_run\n\n try:\n self.before_import(dataset, real_dry_run)\n except Exception as e:\n tb_info = traceback.format_exc(2)\n result.base_errors.append(Error(repr(e), tb_info))\n if raise_errors:\n if use_transactions:\n transaction.rollback()\n transaction.leave_transaction_management()\n raise\n\n instance_loader = self._meta.instance_loader_class(self, dataset)\n\n for row in dataset.dict:\n try:\n row_result = RowResult()\n instance, new = self.get_or_init_instance(instance_loader, row)\n if new:\n row_result.import_type = RowResult.IMPORT_TYPE_NEW\n else:\n row_result.import_type = RowResult.IMPORT_TYPE_UPDATE\n row_result.new_record = new\n original = deepcopy(instance)\n if self.for_delete(row, instance):\n if new:\n row_result.import_type = RowResult.IMPORT_TYPE_SKIP\n row_result.diff = self.get_diff(None, None,\n real_dry_run)\n else:\n row_result.import_type = RowResult.IMPORT_TYPE_DELETE\n self.delete_instance(instance, real_dry_run)\n row_result.diff = self.get_diff(original, None,\n real_dry_run)\n else:\n self.import_obj(instance, row, real_dry_run)\n if self.skip_row(instance, original):\n row_result.import_type = RowResult.IMPORT_TYPE_SKIP\n else:\n self.save_instance(instance, real_dry_run)\n self.save_m2m(instance, row, real_dry_run)\n # Add object info to RowResult for LogEntry\n row_result.object_repr = force_text(instance)\n row_result.object_id = instance.pk\n row_result.diff = self.get_diff(original, instance,\n real_dry_run)\n except Exception as e:\n tb_info = traceback.format_exc(2)\n row_result.errors.append(Error(e, tb_info))\n if raise_errors:\n if use_transactions:\n transaction.rollback()\n transaction.leave_transaction_management()\n six.reraise(*sys.exc_info())\n if (row_result.import_type != RowResult.IMPORT_TYPE_SKIP or\n self._meta.report_skipped):\n result.rows.append(row_result)\n\n if use_transactions:\n if dry_run or result.has_errors():\n transaction.rollback()\n else:\n transaction.commit()\n transaction.leave_transaction_management()\n\n return result\n\n def get_export_order(self):\n order = tuple (self._meta.export_order or ())\n return order + tuple (k for k in self.fields.keys() if k not in 
order)\n\n def export_field(self, field, obj):\n field_name = self.get_field_name(field)\n method = getattr(self, 'dehydrate_%s' % field_name, None)\n if method is not None:\n return method(obj)\n return field.export(obj)\n\n def export_resource(self, obj):\n return [self.export_field(field, obj) for field in self.get_fields()]\n\n def get_export_headers(self):\n headers = [force_text(field.column_name) for field in self.get_fields()]\n return headers\n\n def export(self, queryset=None):\n \"\"\"\n Exports a resource.\n \"\"\"\n if queryset is None:\n queryset = self.get_queryset()\n headers = self.get_export_headers()\n data = tablib.Dataset(headers=headers)\n\n if isinstance(queryset, QuerySet):\n # Iterate without the queryset cache, to avoid wasting memory when\n # exporting large datasets.\n iterable = queryset.iterator()\n else:\n iterable = queryset\n for obj in iterable:\n data.append(self.export_resource(obj))\n return data\n\n\nclass ModelDeclarativeMetaclass(DeclarativeMetaclass):\n\n def __new__(cls, name, bases, attrs):\n new_class = super(ModelDeclarativeMetaclass,\n cls).__new__(cls, name, bases, attrs)\n\n opts = new_class._meta\n\n if not opts.instance_loader_class:\n opts.instance_loader_class = ModelInstanceLoader\n\n if opts.model:\n model_opts = opts.model._meta\n declared_fields = new_class.fields\n\n field_list = []\n for f in sorted(model_opts.fields + model_opts.many_to_many):\n if opts.fields is not None and not f.name in opts.fields:\n continue\n if opts.exclude and f.name in opts.exclude:\n continue\n if f.name in declared_fields:\n continue\n\n field = new_class.field_from_django_field(f.name, f,\n readonly=False)\n field_list.append((f.name, field, ))\n\n new_class.fields.update(OrderedDict(field_list))\n\n #add fields that follow relationships\n if opts.fields is not None:\n field_list = []\n for field_name in opts.fields:\n if field_name in declared_fields:\n continue\n if field_name.find('__') == -1:\n continue\n\n model = opts.model\n attrs = field_name.split('__')\n for i, attr in enumerate(attrs):\n verbose_path = \".\".join([opts.model.__name__] + attrs[0:i+1])\n\n try:\n f = model._meta.get_field_by_name(attr)[0]\n except FieldDoesNotExist as e:\n raise FieldDoesNotExist(\"%s: %s has no field named '%s'\" %\n (verbose_path, model.__name__, attr))\n\n if i < len(attrs) - 1:\n # We're not at the last attribute yet, so check that\n # we're looking at a relation, and move on to the\n # next model.\n if isinstance(f, RelatedObject):\n model = f.model\n else:\n if f.rel is None:\n raise KeyError('%s is not a relation' % verbose_path)\n model = f.rel.to\n\n if isinstance(f, RelatedObject):\n f = f.field\n\n field = new_class.field_from_django_field(field_name, f,\n readonly=True)\n field_list.append((field_name, field))\n\n new_class.fields.update(OrderedDict(field_list))\n\n return new_class\n\n\nclass ModelResource(six.with_metaclass(ModelDeclarativeMetaclass, Resource)):\n \"\"\"\n ModelResource is Resource subclass for handling Django models.\n \"\"\"\n\n @classmethod\n def widget_from_django_field(cls, f, default=widgets.Widget):\n \"\"\"\n Returns the widget that would likely be associated with each\n Django type.\n \"\"\"\n result = default\n internal_type = f.get_internal_type()\n if internal_type in ('ManyToManyField', ):\n result = functools.partial(widgets.ManyToManyWidget,\n model=f.rel.to)\n if internal_type in ('ForeignKey', 'OneToOneField', ):\n result = functools.partial(widgets.ForeignKeyWidget,\n model=f.rel.to)\n if internal_type in 
('DecimalField', ):\n result = widgets.DecimalWidget\n if internal_type in ('DateTimeField', ):\n result = widgets.DateTimeWidget\n elif internal_type in ('DateField', ):\n result = widgets.DateWidget\n elif internal_type in ('IntegerField', 'PositiveIntegerField',\n 'PositiveSmallIntegerField', 'SmallIntegerField', 'AutoField'):\n result = widgets.IntegerWidget\n elif internal_type in ('BooleanField', 'NullBooleanField'):\n result = widgets.BooleanWidget\n return result\n\n @classmethod\n def widget_kwargs_for_field(self, field_name):\n \"\"\"\n Returns widget kwargs for given field_name.\n \"\"\"\n if self._meta.widgets:\n return self._meta.widgets.get(field_name, {})\n return {}\n\n @classmethod\n def field_from_django_field(self, field_name, django_field, readonly):\n \"\"\"\n Returns a Resource Field instance for the given Django model field.\n \"\"\"\n\n FieldWidget = self.widget_from_django_field(django_field)\n widget_kwargs = self.widget_kwargs_for_field(field_name)\n field = Field(attribute=field_name, column_name=field_name,\n widget=FieldWidget(**widget_kwargs), readonly=readonly)\n return field\n\n def get_import_id_fields(self):\n return self._meta.import_id_fields\n\n def get_queryset(self):\n return self._meta.model.objects.all()\n\n def init_instance(self, row=None):\n return self._meta.model()\n\n\ndef modelresource_factory(model, resource_class=ModelResource):\n \"\"\"\n Factory for creating ``ModelResource`` class for given Django model.\n \"\"\"\n attrs = {'model': model}\n Meta = type(str('Meta'), (object,), attrs)\n\n class_name = model.__name__ + str('Resource')\n\n class_attrs = {\n 'Meta': Meta,\n }\n\n metaclass = ModelDeclarativeMetaclass\n return metaclass(class_name, (resource_class,), class_attrs)\n\n", "path": "import_export/resources.py"}]} |
gh_patches_debug_1035 | rasdani/github-patches | git_diff | koxudaxi__datamodel-code-generator-689 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
--enum-field-as-literal=one converts integer to string - still
**Describe the bug**
When using `--enum-field-as-literal=one`, literal integers get converted to strings, depending on the exact versions of `datamodel-code-generator`'s dependencies which are installed. For details see the bottom of the description. This is highly problematic when using `datamodel-code-generator` as a library, as its output changes unpredictably depending on which exact versions of other dependencies are installed.
This is not a duplicate of https://github.com/koxudaxi/datamodel-code-generator/issues/440 .
**To Reproduce**
Example schema:
```json
{
"title": "SomeModel",
"type": "object",
"properties": {
"attribute": {
"title": "Attribute",
"enum": [
1
],
"type": "integer"
}
},
"required": [
"attribute"
]
}
```
Used commandline:
```
$ datamodel-codegen --input file.json --enum-field-as-literal=one
```
**Expected behavior**
I expected the result to look something like
```
class SomeModel(BaseModel):
attribute: Literal[1] = Field(..., title='Attribute')
```
instead it looks like
```
class SomeModel(BaseModel):
attribute: Literal['1'] = Field(..., title='Attribute')
```
**Version:**
- OS: Linux
- Python version: 3.8.0
- datamodel-code-generator version: 0.11.16
**Additional context**
The problem seems to lie in https://github.com/koxudaxi/datamodel-code-generator/blob/e2dcb199fc6da3c22aa5df4dd209721f1e71507e/datamodel_code_generator/types.py#L78
Python caches specified generics (see also https://bugs.python.org/issue45679), which means that if
```
List[Union[str, int]]
```
was used in some dependency _before_ Python parses this part, `List[Union[int, str]]` magically becomes `List[Union[str, int]]`. This in turn makes pydantic parse `[1]` to `['1']`. Whether or not `List[Union[str, int]]` was parsed by Python before parsing `types.py` depends on the exact version of the dependencies which are installed.
For an example of this type caching, the following code runs without error in python 3.8:
```
from typing import List, Union
List[Union[str, int]]
assert str(List[Union[int, str]]) == "typing.List[typing.Union[str, int]]"
```
For how this can confuse pydantic, also the following code runs without error in python 3.8 with pydantic version 1.9.0:
```
from pydantic import BaseModel
from typing import List, Literal, Union
List[Union[str, int]]
class SomeModel(BaseModel):
literals: List[Union[int, str]]
my_instance = SomeModel(literals=[1])
assert type(my_instance.literals[0]) == str
```
See also the warning in https://pydantic-docs.helpmanual.io/usage/types/#unions
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `datamodel_code_generator/types.py`
Content:
```
1 from abc import ABC, abstractmethod
2 from enum import Enum, auto
3 from itertools import chain
4 from typing import (
5 TYPE_CHECKING,
6 Any,
7 ClassVar,
8 Dict,
9 FrozenSet,
10 Iterable,
11 Iterator,
12 List,
13 Optional,
14 Sequence,
15 Set,
16 Tuple,
17 Type,
18 TypeVar,
19 Union,
20 )
21
22 from pydantic import create_model
23
24 from datamodel_code_generator import Protocol, runtime_checkable
25 from datamodel_code_generator.format import PythonVersion
26 from datamodel_code_generator.imports import (
27 IMPORT_ABC_MAPPING,
28 IMPORT_ABC_SEQUENCE,
29 IMPORT_DICT,
30 IMPORT_LIST,
31 IMPORT_LITERAL,
32 IMPORT_LITERAL_BACKPORT,
33 IMPORT_MAPPING,
34 IMPORT_OPTIONAL,
35 IMPORT_SEQUENCE,
36 IMPORT_UNION,
37 Import,
38 )
39 from datamodel_code_generator.reference import Reference, _BaseModel
40
41 T = TypeVar('T')
42
43
44 class StrictTypes(Enum):
45 str = 'str'
46 bytes = 'bytes'
47 int = 'int'
48 float = 'float'
49 bool = 'bool'
50
51
52 def chain_as_tuple(*iterables: Iterable[T]) -> Tuple[T, ...]:
53 return tuple(chain(*iterables))
54
55
56 @runtime_checkable
57 class Modular(Protocol):
58 @property
59 def module_name(self) -> str:
60 raise NotImplementedError
61
62
63 class DataType(_BaseModel):
64 class Config:
65 extra = "forbid"
66
67 type: Optional[str]
68 reference: Optional[Reference]
69 data_types: List['DataType'] = []
70 is_func: bool = False
71 kwargs: Optional[Dict[str, Any]]
72 import_: Optional[Import] = None
73 python_version: PythonVersion = PythonVersion.PY_37
74 is_optional: bool = False
75 is_dict: bool = False
76 is_list: bool = False
77 is_custom_type: bool = False
78 literals: List[Union[int, str]] = []
79 use_standard_collections: bool = False
80 use_generic_container: bool = False
81 alias: Optional[str] = None
82 parent: Optional[Any] = None
83 children: List[Any] = []
84 strict: bool = False
85 dict_key: Optional['DataType'] = None
86
87 _exclude_fields: ClassVar[Set[str]] = {'parent', 'children'}
88 _pass_fields: ClassVar[Set[str]] = {'parent', 'children', 'data_types', 'reference'}
89
90 @classmethod
91 def from_import(
92 cls: Type['DataTypeT'],
93 import_: Import,
94 *,
95 is_optional: bool = False,
96 is_dict: bool = False,
97 is_list: bool = False,
98 is_custom_type: bool = False,
99 strict: bool = False,
100 kwargs: Optional[Dict[str, Any]] = None,
101 ) -> 'DataTypeT':
102 return cls(
103 type=import_.import_,
104 import_=import_,
105 is_optional=is_optional,
106 is_dict=is_dict,
107 is_list=is_list,
108 is_func=True if kwargs else False,
109 is_custom_type=is_custom_type,
110 strict=strict,
111 kwargs=kwargs,
112 )
113
114 @property
115 def unresolved_types(self) -> FrozenSet[str]:
116 return frozenset(
117 {
118 t.reference.path
119 for data_types in self.data_types
120 for t in data_types.all_data_types
121 if t.reference
122 }
123 | ({self.reference.path} if self.reference else set())
124 )
125
126 def replace_reference(self, reference: Reference) -> None:
127 if not self.reference: # pragma: no cover
128 raise Exception(
129 f'`{self.__class__.__name__}.replace_reference()` can\'t be called'
130 f' when `reference` field is empty.'
131 )
132
133 self.reference.children.remove(self)
134 self.reference = reference
135 reference.children.append(self)
136
137 @property
138 def module_name(self) -> Optional[str]:
139 if self.reference and isinstance(self.reference.source, Modular):
140 return self.reference.source.module_name
141 return None # pragma: no cover
142
143 @property
144 def full_name(self) -> str:
145 module_name = self.module_name
146 if module_name:
147 return f'{module_name}.{self.reference.short_name}' # type: ignore
148 return self.reference.short_name # type: ignore
149
150 @property
151 def all_data_types(self) -> Iterator['DataType']:
152 for data_type in self.data_types:
153 yield from data_type.all_data_types
154 yield self
155
156 @property
157 def all_imports(self) -> Iterator[Import]:
158 for data_type in self.data_types:
159 yield from data_type.all_imports
160 yield from self.imports
161
162 @property
163 def imports(self) -> Iterator[Import]:
164 if self.import_:
165 yield self.import_
166 imports: Tuple[Tuple[bool, Import], ...] = (
167 (self.is_optional, IMPORT_OPTIONAL),
168 (len(self.data_types) > 1, IMPORT_UNION),
169 )
170 if any(self.literals):
171 import_literal = (
172 IMPORT_LITERAL
173 if self.python_version.has_literal_type
174 else IMPORT_LITERAL_BACKPORT
175 )
176 imports = (
177 *imports,
178 (any(self.literals), import_literal),
179 )
180
181 if self.use_generic_container:
182 if self.use_standard_collections:
183 imports = (
184 *imports,
185 (self.is_list, IMPORT_ABC_SEQUENCE),
186 (self.is_dict, IMPORT_ABC_MAPPING),
187 )
188 else:
189 imports = (
190 *imports,
191 (self.is_list, IMPORT_SEQUENCE),
192 (self.is_dict, IMPORT_MAPPING),
193 )
194 elif not self.use_standard_collections:
195 imports = (
196 *imports,
197 (self.is_list, IMPORT_LIST),
198 (self.is_dict, IMPORT_DICT),
199 )
200 for field, import_ in imports:
201 if field and import_ != self.import_:
202 yield import_
203
204 if self.dict_key:
205 yield from self.dict_key.imports
206
207 def __init__(self, **values: Any) -> None:
208 if not TYPE_CHECKING:
209 super().__init__(**values)
210
211 for type_ in self.data_types:
212 if type_.type == 'Any' and type_.is_optional:
213 if any(
214 t for t in self.data_types if t.type != 'Any'
215 ): # pragma: no cover
216 self.is_optional = True
217 self.data_types = [
218 t
219 for t in self.data_types
220 if not (t.type == 'Any' and t.is_optional)
221 ]
222 break
223
224 for data_type in self.data_types:
225 if data_type.reference or data_type.data_types:
226 data_type.parent = self
227
228 if self.reference:
229 self.reference.children.append(self)
230
231 @property
232 def type_hint(self) -> str:
233 type_: Optional[str] = self.alias or self.type
234 if not type_:
235 if len(self.data_types) > 1:
236 type_ = f"Union[{', '.join(data_type.type_hint for data_type in self.data_types)}]"
237 elif len(self.data_types) == 1:
238 type_ = self.data_types[0].type_hint
239 elif self.literals:
240 type_ = (
241 f"Literal[{', '.join(repr(literal) for literal in self.literals)}]"
242 )
243 else:
244 if self.reference:
245 type_ = self.reference.short_name
246 else:
247 # TODO support strict Any
248 # type_ = 'Any'
249 type_ = ''
250 if self.reference and self.python_version == PythonVersion.PY_36:
251 type_ = f"'{type_}'"
252 if self.is_list:
253 if self.use_generic_container:
254 list_ = 'Sequence'
255 elif self.use_standard_collections:
256 list_ = 'list'
257 else:
258 list_ = 'List'
259 type_ = f'{list_}[{type_}]' if type_ else list_
260 elif self.is_dict:
261 if self.use_generic_container:
262 dict_ = 'Mapping'
263 elif self.use_standard_collections:
264 dict_ = 'dict'
265 else:
266 dict_ = 'Dict'
267 if self.dict_key or type_:
268 key = self.dict_key.type_hint if self.dict_key else 'str'
269 type_ = f'{dict_}[{key}, {type_ or "Any"}]'
270 else: # pragma: no cover
271 type_ = dict_
272 if self.is_optional and type_ != 'Any':
273 type_ = f'Optional[{type_}]'
274 elif self.is_func:
275 if self.kwargs:
276 kwargs: str = ', '.join(f'{k}={v}' for k, v in self.kwargs.items())
277 return f'{type_}({kwargs})'
278 return f'{type_}()'
279 return type_
280
281
282 DataType.update_forward_refs()
283
284 DataTypeT = TypeVar('DataTypeT', bound=DataType)
285
286
287 class Types(Enum):
288 integer = auto()
289 int32 = auto()
290 int64 = auto()
291 number = auto()
292 float = auto()
293 double = auto()
294 decimal = auto()
295 time = auto()
296 string = auto()
297 byte = auto()
298 binary = auto()
299 date = auto()
300 date_time = auto()
301 password = auto()
302 email = auto()
303 uuid = auto()
304 uuid1 = auto()
305 uuid2 = auto()
306 uuid3 = auto()
307 uuid4 = auto()
308 uuid5 = auto()
309 uri = auto()
310 hostname = auto()
311 ipv4 = auto()
312 ipv6 = auto()
313 boolean = auto()
314 object = auto()
315 null = auto()
316 array = auto()
317 any = auto()
318
319
320 class DataTypeManager(ABC):
321 def __init__(
322 self,
323 python_version: PythonVersion = PythonVersion.PY_37,
324 use_standard_collections: bool = False,
325 use_generic_container_types: bool = False,
326 strict_types: Optional[Sequence[StrictTypes]] = None,
327 use_non_positive_negative_number_constrained_types: bool = False,
328 ) -> None:
329 self.python_version = python_version
330 self.use_standard_collections: bool = use_standard_collections
331 self.use_generic_container_types: bool = use_generic_container_types
332 self.strict_types: Sequence[StrictTypes] = strict_types or ()
333 self.use_non_positive_negative_number_constrained_types: bool = (
334 use_non_positive_negative_number_constrained_types
335 )
336
337 if (
338 use_generic_container_types and python_version == PythonVersion.PY_36
339 ): # pragma: no cover
340 raise Exception(
341 "use_generic_container_types can not be used with target_python_version 3.6.\n"
342 " The version will be not supported in a future version"
343 )
344
345 if TYPE_CHECKING:
346 self.data_type: Type[DataType]
347 else:
348 self.data_type: Type[DataType] = create_model(
349 'ContextDataType',
350 python_version=python_version,
351 use_standard_collections=use_standard_collections,
352 use_generic_container=use_generic_container_types,
353 __base__=DataType,
354 )
355
356 @abstractmethod
357 def get_data_type(self, types: Types, **kwargs: Any) -> DataType:
358 raise NotImplementedError
359
360 def get_data_type_from_full_path(
361 self, full_path: str, is_custom_type: bool
362 ) -> DataType:
363 return self.data_type.from_import(
364 Import.from_full_path(full_path), is_custom_type=is_custom_type
365 )
366
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/datamodel_code_generator/types.py b/datamodel_code_generator/types.py
--- a/datamodel_code_generator/types.py
+++ b/datamodel_code_generator/types.py
@@ -75,7 +75,7 @@
is_dict: bool = False
is_list: bool = False
is_custom_type: bool = False
- literals: List[Union[int, str]] = []
+ literals: 'List[Union[int, str]]' = []
use_standard_collections: bool = False
use_generic_container: bool = False
alias: Optional[str] = None
| {"golden_diff": "diff --git a/datamodel_code_generator/types.py b/datamodel_code_generator/types.py\n--- a/datamodel_code_generator/types.py\n+++ b/datamodel_code_generator/types.py\n@@ -75,7 +75,7 @@\n is_dict: bool = False\n is_list: bool = False\n is_custom_type: bool = False\n- literals: List[Union[int, str]] = []\n+ literals: 'List[Union[int, str]]' = []\n use_standard_collections: bool = False\n use_generic_container: bool = False\n alias: Optional[str] = None\n", "issue": "--enum-field-as-literal=one converts integer to string - still\n**Describe the bug**\r\nWhen using `--enum-field-as-literal=one`, literal integers get converted to strings, depending on the exact version of `datamodel`'s dependencies which are installed. For details see the bottom of the description. This is highly problematic when using `datamodel-code-generator` as a library, as it's output changes unpredictably depending on which exact version of other dependencies are installed.\r\n\r\nThis is not a duplicate of https://github.com/koxudaxi/datamodel-code-generator/issues/440 .\r\n\r\n**To Reproduce**\r\n\r\nExample schema:\r\n```json\r\n {\r\n \"title\": \"SomeModel\",\r\n \"type\": \"object\",\r\n \"properties\": {\r\n \"attribute\": {\r\n \"title\": \"Attribute\",\r\n \"enum\": [\r\n 1\r\n ],\r\n \"type\": \"integer\"\r\n }\r\n },\r\n \"required\": [\r\n \"attribute\"\r\n ]\r\n }\r\n```\r\n\r\nUsed commandline:\r\n```\r\n$ datamodel-codegen --input file.json --enum-field-as-literal=one\r\n```\r\n\r\n**Expected behavior**\r\nI expected the result to look something like\r\n```\r\nclass SomeModel(BaseModel):\r\n attribute: Literal[1] = Field(..., title='Attribute')\r\n```\r\ninstead it looks like\r\n```\r\nclass SomeModel(BaseModel):\r\n attribute: Literal['1'] = Field(..., title='Attribute')\r\n```\r\n\r\n**Version:**\r\n - OS: Linux\r\n - Python version: 3.8.0\r\n - datamodel-code-generator version: 0.11.16\r\n\r\n**Additional context**\r\nThe problem seems to lie in https://github.com/koxudaxi/datamodel-code-generator/blob/e2dcb199fc6da3c22aa5df4dd209721f1e71507e/datamodel_code_generator/types.py#L78 \r\n\r\nPython caches specified generics - see also https://bugs.python.org/issue45679 -, which means that if\r\n```\r\nList[Union[str, int]]\r\n```\r\nwas used in some dependency _before_ python parses this part, `List[Union[int, str]]` magically becomes `List[Union[str, int]]`. This is turn makes pydantic parse `[1]` to `['1']`. 
Whether or not `List[Union[str, int]]` was parsed by python before parsing `types.py` depends on the exact version of the dependencies which are installed.\r\n\r\nFor an example of this type caching, the following code runs without error in python 3.8:\r\n```\r\nfrom typing import List, Union\r\n\r\nList[Union[str, int]]\r\n\r\nassert str(List[Union[int, str]]) == \"typing.List[typing.Union[str, int]]\"\r\n```\r\nFor how this can confuse pydantic, also the following code runs without error in python 3.8 with pydantic version 1.9.0:\r\n```\r\nfrom pydantic import BaseModel\r\n\r\nfrom typing import List, Literal, Union\r\n\r\nList[Union[str, int]]\r\n\r\nclass SomeModel(BaseModel):\r\n literals: List[Union[int, str]]\r\n\r\nmy_instance = SomeModel(literals=[1])\r\n\r\nassert type(my_instance.literals[0]) == str\r\n```\r\nSee also the warning in https://pydantic-docs.helpmanual.io/usage/types/#unions\n", "before_files": [{"content": "from abc import ABC, abstractmethod\nfrom enum import Enum, auto\nfrom itertools import chain\nfrom typing import (\n TYPE_CHECKING,\n Any,\n ClassVar,\n Dict,\n FrozenSet,\n Iterable,\n Iterator,\n List,\n Optional,\n Sequence,\n Set,\n Tuple,\n Type,\n TypeVar,\n Union,\n)\n\nfrom pydantic import create_model\n\nfrom datamodel_code_generator import Protocol, runtime_checkable\nfrom datamodel_code_generator.format import PythonVersion\nfrom datamodel_code_generator.imports import (\n IMPORT_ABC_MAPPING,\n IMPORT_ABC_SEQUENCE,\n IMPORT_DICT,\n IMPORT_LIST,\n IMPORT_LITERAL,\n IMPORT_LITERAL_BACKPORT,\n IMPORT_MAPPING,\n IMPORT_OPTIONAL,\n IMPORT_SEQUENCE,\n IMPORT_UNION,\n Import,\n)\nfrom datamodel_code_generator.reference import Reference, _BaseModel\n\nT = TypeVar('T')\n\n\nclass StrictTypes(Enum):\n str = 'str'\n bytes = 'bytes'\n int = 'int'\n float = 'float'\n bool = 'bool'\n\n\ndef chain_as_tuple(*iterables: Iterable[T]) -> Tuple[T, ...]:\n return tuple(chain(*iterables))\n\n\n@runtime_checkable\nclass Modular(Protocol):\n @property\n def module_name(self) -> str:\n raise NotImplementedError\n\n\nclass DataType(_BaseModel):\n class Config:\n extra = \"forbid\"\n\n type: Optional[str]\n reference: Optional[Reference]\n data_types: List['DataType'] = []\n is_func: bool = False\n kwargs: Optional[Dict[str, Any]]\n import_: Optional[Import] = None\n python_version: PythonVersion = PythonVersion.PY_37\n is_optional: bool = False\n is_dict: bool = False\n is_list: bool = False\n is_custom_type: bool = False\n literals: List[Union[int, str]] = []\n use_standard_collections: bool = False\n use_generic_container: bool = False\n alias: Optional[str] = None\n parent: Optional[Any] = None\n children: List[Any] = []\n strict: bool = False\n dict_key: Optional['DataType'] = None\n\n _exclude_fields: ClassVar[Set[str]] = {'parent', 'children'}\n _pass_fields: ClassVar[Set[str]] = {'parent', 'children', 'data_types', 'reference'}\n\n @classmethod\n def from_import(\n cls: Type['DataTypeT'],\n import_: Import,\n *,\n is_optional: bool = False,\n is_dict: bool = False,\n is_list: bool = False,\n is_custom_type: bool = False,\n strict: bool = False,\n kwargs: Optional[Dict[str, Any]] = None,\n ) -> 'DataTypeT':\n return cls(\n type=import_.import_,\n import_=import_,\n is_optional=is_optional,\n is_dict=is_dict,\n is_list=is_list,\n is_func=True if kwargs else False,\n is_custom_type=is_custom_type,\n strict=strict,\n kwargs=kwargs,\n )\n\n @property\n def unresolved_types(self) -> FrozenSet[str]:\n return frozenset(\n {\n t.reference.path\n for data_types in 
self.data_types\n for t in data_types.all_data_types\n if t.reference\n }\n | ({self.reference.path} if self.reference else set())\n )\n\n def replace_reference(self, reference: Reference) -> None:\n if not self.reference: # pragma: no cover\n raise Exception(\n f'`{self.__class__.__name__}.replace_reference()` can\\'t be called'\n f' when `reference` field is empty.'\n )\n\n self.reference.children.remove(self)\n self.reference = reference\n reference.children.append(self)\n\n @property\n def module_name(self) -> Optional[str]:\n if self.reference and isinstance(self.reference.source, Modular):\n return self.reference.source.module_name\n return None # pragma: no cover\n\n @property\n def full_name(self) -> str:\n module_name = self.module_name\n if module_name:\n return f'{module_name}.{self.reference.short_name}' # type: ignore\n return self.reference.short_name # type: ignore\n\n @property\n def all_data_types(self) -> Iterator['DataType']:\n for data_type in self.data_types:\n yield from data_type.all_data_types\n yield self\n\n @property\n def all_imports(self) -> Iterator[Import]:\n for data_type in self.data_types:\n yield from data_type.all_imports\n yield from self.imports\n\n @property\n def imports(self) -> Iterator[Import]:\n if self.import_:\n yield self.import_\n imports: Tuple[Tuple[bool, Import], ...] = (\n (self.is_optional, IMPORT_OPTIONAL),\n (len(self.data_types) > 1, IMPORT_UNION),\n )\n if any(self.literals):\n import_literal = (\n IMPORT_LITERAL\n if self.python_version.has_literal_type\n else IMPORT_LITERAL_BACKPORT\n )\n imports = (\n *imports,\n (any(self.literals), import_literal),\n )\n\n if self.use_generic_container:\n if self.use_standard_collections:\n imports = (\n *imports,\n (self.is_list, IMPORT_ABC_SEQUENCE),\n (self.is_dict, IMPORT_ABC_MAPPING),\n )\n else:\n imports = (\n *imports,\n (self.is_list, IMPORT_SEQUENCE),\n (self.is_dict, IMPORT_MAPPING),\n )\n elif not self.use_standard_collections:\n imports = (\n *imports,\n (self.is_list, IMPORT_LIST),\n (self.is_dict, IMPORT_DICT),\n )\n for field, import_ in imports:\n if field and import_ != self.import_:\n yield import_\n\n if self.dict_key:\n yield from self.dict_key.imports\n\n def __init__(self, **values: Any) -> None:\n if not TYPE_CHECKING:\n super().__init__(**values)\n\n for type_ in self.data_types:\n if type_.type == 'Any' and type_.is_optional:\n if any(\n t for t in self.data_types if t.type != 'Any'\n ): # pragma: no cover\n self.is_optional = True\n self.data_types = [\n t\n for t in self.data_types\n if not (t.type == 'Any' and t.is_optional)\n ]\n break\n\n for data_type in self.data_types:\n if data_type.reference or data_type.data_types:\n data_type.parent = self\n\n if self.reference:\n self.reference.children.append(self)\n\n @property\n def type_hint(self) -> str:\n type_: Optional[str] = self.alias or self.type\n if not type_:\n if len(self.data_types) > 1:\n type_ = f\"Union[{', '.join(data_type.type_hint for data_type in self.data_types)}]\"\n elif len(self.data_types) == 1:\n type_ = self.data_types[0].type_hint\n elif self.literals:\n type_ = (\n f\"Literal[{', '.join(repr(literal) for literal in self.literals)}]\"\n )\n else:\n if self.reference:\n type_ = self.reference.short_name\n else:\n # TODO support strict Any\n # type_ = 'Any'\n type_ = ''\n if self.reference and self.python_version == PythonVersion.PY_36:\n type_ = f\"'{type_}'\"\n if self.is_list:\n if self.use_generic_container:\n list_ = 'Sequence'\n elif self.use_standard_collections:\n list_ = 'list'\n 
else:\n list_ = 'List'\n type_ = f'{list_}[{type_}]' if type_ else list_\n elif self.is_dict:\n if self.use_generic_container:\n dict_ = 'Mapping'\n elif self.use_standard_collections:\n dict_ = 'dict'\n else:\n dict_ = 'Dict'\n if self.dict_key or type_:\n key = self.dict_key.type_hint if self.dict_key else 'str'\n type_ = f'{dict_}[{key}, {type_ or \"Any\"}]'\n else: # pragma: no cover\n type_ = dict_\n if self.is_optional and type_ != 'Any':\n type_ = f'Optional[{type_}]'\n elif self.is_func:\n if self.kwargs:\n kwargs: str = ', '.join(f'{k}={v}' for k, v in self.kwargs.items())\n return f'{type_}({kwargs})'\n return f'{type_}()'\n return type_\n\n\nDataType.update_forward_refs()\n\nDataTypeT = TypeVar('DataTypeT', bound=DataType)\n\n\nclass Types(Enum):\n integer = auto()\n int32 = auto()\n int64 = auto()\n number = auto()\n float = auto()\n double = auto()\n decimal = auto()\n time = auto()\n string = auto()\n byte = auto()\n binary = auto()\n date = auto()\n date_time = auto()\n password = auto()\n email = auto()\n uuid = auto()\n uuid1 = auto()\n uuid2 = auto()\n uuid3 = auto()\n uuid4 = auto()\n uuid5 = auto()\n uri = auto()\n hostname = auto()\n ipv4 = auto()\n ipv6 = auto()\n boolean = auto()\n object = auto()\n null = auto()\n array = auto()\n any = auto()\n\n\nclass DataTypeManager(ABC):\n def __init__(\n self,\n python_version: PythonVersion = PythonVersion.PY_37,\n use_standard_collections: bool = False,\n use_generic_container_types: bool = False,\n strict_types: Optional[Sequence[StrictTypes]] = None,\n use_non_positive_negative_number_constrained_types: bool = False,\n ) -> None:\n self.python_version = python_version\n self.use_standard_collections: bool = use_standard_collections\n self.use_generic_container_types: bool = use_generic_container_types\n self.strict_types: Sequence[StrictTypes] = strict_types or ()\n self.use_non_positive_negative_number_constrained_types: bool = (\n use_non_positive_negative_number_constrained_types\n )\n\n if (\n use_generic_container_types and python_version == PythonVersion.PY_36\n ): # pragma: no cover\n raise Exception(\n \"use_generic_container_types can not be used with target_python_version 3.6.\\n\"\n \" The version will be not supported in a future version\"\n )\n\n if TYPE_CHECKING:\n self.data_type: Type[DataType]\n else:\n self.data_type: Type[DataType] = create_model(\n 'ContextDataType',\n python_version=python_version,\n use_standard_collections=use_standard_collections,\n use_generic_container=use_generic_container_types,\n __base__=DataType,\n )\n\n @abstractmethod\n def get_data_type(self, types: Types, **kwargs: Any) -> DataType:\n raise NotImplementedError\n\n def get_data_type_from_full_path(\n self, full_path: str, is_custom_type: bool\n ) -> DataType:\n return self.data_type.from_import(\n Import.from_full_path(full_path), is_custom_type=is_custom_type\n )\n", "path": "datamodel_code_generator/types.py"}], "after_files": [{"content": "from abc import ABC, abstractmethod\nfrom enum import Enum, auto\nfrom itertools import chain\nfrom typing import (\n TYPE_CHECKING,\n Any,\n ClassVar,\n Dict,\n FrozenSet,\n Iterable,\n Iterator,\n List,\n Optional,\n Sequence,\n Set,\n Tuple,\n Type,\n TypeVar,\n Union,\n)\n\nfrom pydantic import create_model\n\nfrom datamodel_code_generator import Protocol, runtime_checkable\nfrom datamodel_code_generator.format import PythonVersion\nfrom datamodel_code_generator.imports import (\n IMPORT_ABC_MAPPING,\n IMPORT_ABC_SEQUENCE,\n IMPORT_DICT,\n IMPORT_LIST,\n IMPORT_LITERAL,\n 
IMPORT_LITERAL_BACKPORT,\n IMPORT_MAPPING,\n IMPORT_OPTIONAL,\n IMPORT_SEQUENCE,\n IMPORT_UNION,\n Import,\n)\nfrom datamodel_code_generator.reference import Reference, _BaseModel\n\nT = TypeVar('T')\n\n\nclass StrictTypes(Enum):\n str = 'str'\n bytes = 'bytes'\n int = 'int'\n float = 'float'\n bool = 'bool'\n\n\ndef chain_as_tuple(*iterables: Iterable[T]) -> Tuple[T, ...]:\n return tuple(chain(*iterables))\n\n\n@runtime_checkable\nclass Modular(Protocol):\n @property\n def module_name(self) -> str:\n raise NotImplementedError\n\n\nclass DataType(_BaseModel):\n class Config:\n extra = \"forbid\"\n\n type: Optional[str]\n reference: Optional[Reference]\n data_types: List['DataType'] = []\n is_func: bool = False\n kwargs: Optional[Dict[str, Any]]\n import_: Optional[Import] = None\n python_version: PythonVersion = PythonVersion.PY_37\n is_optional: bool = False\n is_dict: bool = False\n is_list: bool = False\n is_custom_type: bool = False\n literals: 'List[Union[int, str]]' = []\n use_standard_collections: bool = False\n use_generic_container: bool = False\n alias: Optional[str] = None\n parent: Optional[Any] = None\n children: List[Any] = []\n strict: bool = False\n dict_key: Optional['DataType'] = None\n\n _exclude_fields: ClassVar[Set[str]] = {'parent', 'children'}\n _pass_fields: ClassVar[Set[str]] = {'parent', 'children', 'data_types', 'reference'}\n\n @classmethod\n def from_import(\n cls: Type['DataTypeT'],\n import_: Import,\n *,\n is_optional: bool = False,\n is_dict: bool = False,\n is_list: bool = False,\n is_custom_type: bool = False,\n strict: bool = False,\n kwargs: Optional[Dict[str, Any]] = None,\n ) -> 'DataTypeT':\n return cls(\n type=import_.import_,\n import_=import_,\n is_optional=is_optional,\n is_dict=is_dict,\n is_list=is_list,\n is_func=True if kwargs else False,\n is_custom_type=is_custom_type,\n strict=strict,\n kwargs=kwargs,\n )\n\n @property\n def unresolved_types(self) -> FrozenSet[str]:\n return frozenset(\n {\n t.reference.path\n for data_types in self.data_types\n for t in data_types.all_data_types\n if t.reference\n }\n | ({self.reference.path} if self.reference else set())\n )\n\n def replace_reference(self, reference: Reference) -> None:\n if not self.reference: # pragma: no cover\n raise Exception(\n f'`{self.__class__.__name__}.replace_reference()` can\\'t be called'\n f' when `reference` field is empty.'\n )\n\n self.reference.children.remove(self)\n self.reference = reference\n reference.children.append(self)\n\n @property\n def module_name(self) -> Optional[str]:\n if self.reference and isinstance(self.reference.source, Modular):\n return self.reference.source.module_name\n return None # pragma: no cover\n\n @property\n def full_name(self) -> str:\n module_name = self.module_name\n if module_name:\n return f'{module_name}.{self.reference.short_name}' # type: ignore\n return self.reference.short_name # type: ignore\n\n @property\n def all_data_types(self) -> Iterator['DataType']:\n for data_type in self.data_types:\n yield from data_type.all_data_types\n yield self\n\n @property\n def all_imports(self) -> Iterator[Import]:\n for data_type in self.data_types:\n yield from data_type.all_imports\n yield from self.imports\n\n @property\n def imports(self) -> Iterator[Import]:\n if self.import_:\n yield self.import_\n imports: Tuple[Tuple[bool, Import], ...] 
= (\n (self.is_optional, IMPORT_OPTIONAL),\n (len(self.data_types) > 1, IMPORT_UNION),\n )\n if any(self.literals):\n import_literal = (\n IMPORT_LITERAL\n if self.python_version.has_literal_type\n else IMPORT_LITERAL_BACKPORT\n )\n imports = (\n *imports,\n (any(self.literals), import_literal),\n )\n\n if self.use_generic_container:\n if self.use_standard_collections:\n imports = (\n *imports,\n (self.is_list, IMPORT_ABC_SEQUENCE),\n (self.is_dict, IMPORT_ABC_MAPPING),\n )\n else:\n imports = (\n *imports,\n (self.is_list, IMPORT_SEQUENCE),\n (self.is_dict, IMPORT_MAPPING),\n )\n elif not self.use_standard_collections:\n imports = (\n *imports,\n (self.is_list, IMPORT_LIST),\n (self.is_dict, IMPORT_DICT),\n )\n for field, import_ in imports:\n if field and import_ != self.import_:\n yield import_\n\n if self.dict_key:\n yield from self.dict_key.imports\n\n def __init__(self, **values: Any) -> None:\n if not TYPE_CHECKING:\n super().__init__(**values)\n\n for type_ in self.data_types:\n if type_.type == 'Any' and type_.is_optional:\n if any(\n t for t in self.data_types if t.type != 'Any'\n ): # pragma: no cover\n self.is_optional = True\n self.data_types = [\n t\n for t in self.data_types\n if not (t.type == 'Any' and t.is_optional)\n ]\n break\n\n for data_type in self.data_types:\n if data_type.reference or data_type.data_types:\n data_type.parent = self\n\n if self.reference:\n self.reference.children.append(self)\n\n @property\n def type_hint(self) -> str:\n type_: Optional[str] = self.alias or self.type\n if not type_:\n if len(self.data_types) > 1:\n type_ = f\"Union[{', '.join(data_type.type_hint for data_type in self.data_types)}]\"\n elif len(self.data_types) == 1:\n type_ = self.data_types[0].type_hint\n elif self.literals:\n type_ = (\n f\"Literal[{', '.join(repr(literal) for literal in self.literals)}]\"\n )\n else:\n if self.reference:\n type_ = self.reference.short_name\n else:\n # TODO support strict Any\n # type_ = 'Any'\n type_ = ''\n if self.reference and self.python_version == PythonVersion.PY_36:\n type_ = f\"'{type_}'\"\n if self.is_list:\n if self.use_generic_container:\n list_ = 'Sequence'\n elif self.use_standard_collections:\n list_ = 'list'\n else:\n list_ = 'List'\n type_ = f'{list_}[{type_}]' if type_ else list_\n elif self.is_dict:\n if self.use_generic_container:\n dict_ = 'Mapping'\n elif self.use_standard_collections:\n dict_ = 'dict'\n else:\n dict_ = 'Dict'\n if self.dict_key or type_:\n key = self.dict_key.type_hint if self.dict_key else 'str'\n type_ = f'{dict_}[{key}, {type_ or \"Any\"}]'\n else: # pragma: no cover\n type_ = dict_\n if self.is_optional and type_ != 'Any':\n type_ = f'Optional[{type_}]'\n elif self.is_func:\n if self.kwargs:\n kwargs: str = ', '.join(f'{k}={v}' for k, v in self.kwargs.items())\n return f'{type_}({kwargs})'\n return f'{type_}()'\n return type_\n\n\nDataType.update_forward_refs()\n\nDataTypeT = TypeVar('DataTypeT', bound=DataType)\n\n\nclass Types(Enum):\n integer = auto()\n int32 = auto()\n int64 = auto()\n number = auto()\n float = auto()\n double = auto()\n decimal = auto()\n time = auto()\n string = auto()\n byte = auto()\n binary = auto()\n date = auto()\n date_time = auto()\n password = auto()\n email = auto()\n uuid = auto()\n uuid1 = auto()\n uuid2 = auto()\n uuid3 = auto()\n uuid4 = auto()\n uuid5 = auto()\n uri = auto()\n hostname = auto()\n ipv4 = auto()\n ipv6 = auto()\n boolean = auto()\n object = auto()\n null = auto()\n array = auto()\n any = auto()\n\n\nclass DataTypeManager(ABC):\n def __init__(\n self,\n 
python_version: PythonVersion = PythonVersion.PY_37,\n use_standard_collections: bool = False,\n use_generic_container_types: bool = False,\n strict_types: Optional[Sequence[StrictTypes]] = None,\n use_non_positive_negative_number_constrained_types: bool = False,\n ) -> None:\n self.python_version = python_version\n self.use_standard_collections: bool = use_standard_collections\n self.use_generic_container_types: bool = use_generic_container_types\n self.strict_types: Sequence[StrictTypes] = strict_types or ()\n self.use_non_positive_negative_number_constrained_types: bool = (\n use_non_positive_negative_number_constrained_types\n )\n\n if (\n use_generic_container_types and python_version == PythonVersion.PY_36\n ): # pragma: no cover\n raise Exception(\n \"use_generic_container_types can not be used with target_python_version 3.6.\\n\"\n \" The version will be not supported in a future version\"\n )\n\n if TYPE_CHECKING:\n self.data_type: Type[DataType]\n else:\n self.data_type: Type[DataType] = create_model(\n 'ContextDataType',\n python_version=python_version,\n use_standard_collections=use_standard_collections,\n use_generic_container=use_generic_container_types,\n __base__=DataType,\n )\n\n @abstractmethod\n def get_data_type(self, types: Types, **kwargs: Any) -> DataType:\n raise NotImplementedError\n\n def get_data_type_from_full_path(\n self, full_path: str, is_custom_type: bool\n ) -> DataType:\n return self.data_type.from_import(\n Import.from_full_path(full_path), is_custom_type=is_custom_type\n )\n", "path": "datamodel_code_generator/types.py"}]} |
gh_patches_debug_1036 | rasdani/github-patches | git_diff | pypa__pipenv-2450 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot generate Pipfile.lock when installing packages from requirements.txt on Python 2.7
Run: $ pipenv install -r requirements.txt
Got error:
Creating a virtualenv for this project...
Pipfile: /home/ec2-user/test/Pipfile
Using /usr/bin/python2.7 (2.7.14) to create virtualenv...
Already using interpreter /usr/bin/python2.7
New python executable in /home/ec2-user/.local/share/virtualenvs/test-LVXQY0Nl/bin/python2.7
Also creating executable in /home/ec2-user/.local/share/virtualenvs/test-LVXQY0Nl/bin/python
Installing setuptools, pip, wheel...done.
Setting project for test-LVXQY0Nl to /home/ec2-user/test
Virtualenv location: /home/ec2-user/.local/share/virtualenvs/test-LVXQY0Nl
Creating a Pipfile for this project...
Requirements file provided! Importing into Pipfile...
Traceback (most recent call last):
File "/usr/local/bin/pipenv", line 11, in <module>
sys.exit(cli())
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pipenv/cli.py", line 416, in install
selective_upgrade=selective_upgrade,
File "/usr/local/lib/python2.7/site-packages/pipenv/core.py", line 1854, in do_install
import_requirements(r=project.path_to(requirements), dev=dev)
File "/usr/local/lib/python2.7/site-packages/pipenv/core.py", line 228, in import_requirements
project.recase_pipfile()
File "/usr/local/lib/python2.7/site-packages/pipenv/project.py", line 766, in recase_pipfile
if self.ensure_proper_casing():
File "/usr/local/lib/python2.7/site-packages/pipenv/project.py", line 802, in ensure_proper_casing
casing_changed = self.proper_case_section(pfile.get('packages', {}))
File "/usr/local/lib/python2.7/site-packages/pipenv/project.py", line 826, in proper_case_section
self.register_proper_name(new_casing)
File "/usr/local/lib/python2.7/site-packages/pipenv/project.py", line 366, in register_proper_name
f.write('{0}\n'.format(name))
TypeError: write() argument 1 must be unicode, not str
/usr/local/lib/python2.7/site-packages/pipenv/_compat.py:108: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/pipenv-2ttppI-requirements'>
warnings.warn(warn_message, ResourceWarning)
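
For context (editor's note, not part of the original report): the crash boils down to writing a Python 2 `str` (byte string) to a text-mode stream opened via `pathlib2`, which only accepts `unicode`. A minimal sketch of the failure mode, assuming Python 2.7 with `pathlib2` installed:

```python
# Minimal sketch of the failure mode, assuming Python 2.7 with pathlib2.
# Path.open() is backed by io.open(), whose text-mode streams only accept
# unicode on Python 2, so writing a plain str raises the TypeError above.
from pathlib2 import Path

p = Path('pipenv-proper-names.txt')
p.touch()

with p.open('a') as f:
    f.write(u'{0}\n'.format('Flask'))   # unicode literal: fine on Python 2 and 3

with p.open('a') as f:
    f.write('{0}\n'.format('Flask'))    # plain str: TypeError on Python 2
```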
Please run `$ python -m pipenv.help`, and paste the results here.
<details><summary>$ python -m pipenv.help output</summary>
Pipenv version: `'2018.6.25'`
Pipenv location: `'/usr/local/lib/python2.7/site-packages/pipenv'`
Python location: `'/usr/bin/python'`
Other Python installations in `PATH`:
- `2.7`: `/usr/bin/python2.7`
- `2.7`: `/usr/bin/python2.7`
- `2.7.14`: `/usr/bin/python`
PEP 508 Information:
```
{'implementation_name': 'cpython',
'implementation_version': '0',
'os_name': 'posix',
'platform_machine': 'x86_64',
'platform_python_implementation': 'CPython',
'platform_release': '4.14.33-51.37.amzn1.x86_64',
'platform_system': 'Linux',
'platform_version': '#1 SMP Thu May 3 20:07:43 UTC 2018',
'python_full_version': '2.7.14',
'python_version': '2.7',
'sys_platform': 'linux2'}
```
System environment variables:
- `LC_CTYPE`
- `PYTHONDONTWRITEBYTECODE`
- `LESSOPEN`
- `SSH_CLIENT`
- `LOGNAME`
- `USER`
- `HOME`
- `PATH`
- `AWS_PATH`
- `LANG`
- `LESS_TERMCAP_se`
- `TERM`
- `SHELL`
- `EC2_AMITOOL_HOME`
- `LESS_TERMCAP_me`
- `LESS_TERMCAP_md`
- `LESS_TERMCAP_mb`
- `HISTSIZE`
- `AWS_ELB_HOME`
- `JAVA_HOME`
- `EC2_HOME`
- `AWS_AUTO_SCALING_HOME`
- `PIP_PYTHON_PATH`
- `_`
- `LESS_TERMCAP_ue`
- `SSH_CONNECTION`
- `AWS_CLOUDWATCH_HOME`
- `SSH_TTY`
- `OLDPWD`
- `HOSTNAME`
- `HISTCONTROL`
- `SHLVL`
- `PWD`
- `LESS_TERMCAP_us`
- `MAIL`
- `LS_COLORS`
Pipenv–specific environment variables:
Debug–specific environment variables:
- `PATH`: `/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:/home/ec2-user/.local/bin:/home/ec2-user/bin`
- `SHELL`: `/bin/bash`
- `LANG`: `en_US.UTF-8`
- `PWD`: `/home/ec2-user/test`
---------------------------
Contents of `Pipfile` ('/home/ec2-user/test/Pipfile'):
```toml
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[dev-packages]
[packages]
flask = "==0.10.1"
"jinja2" = "==2.7.3"
werkzeug = "==0.10"
[requires]
python_version = "2.7"
```
</details>
If you're on MacOS, just run the following:
$ python -m pipenv.help | pbcopy
------------
##### Expected result
Pipenv should import the packages from requirements.txt into the Pipfile and generate Pipfile.lock without raising an error.
##### Actual result
The import fails with `TypeError: write() argument 1 must be unicode, not str` (full traceback above) and no Pipfile.lock is generated.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pipenv/project.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import io
3 import json
4 import os
5 import re
6 import sys
7 import base64
8 import hashlib
9 import contoml
10 from first import first
11 import pipfile
12 import pipfile.api
13 import six
14 import toml
15 import json as simplejson
16
17 try:
18 from pathlib import Path
19 except ImportError:
20 from pathlib2 import Path
21
22 from .cmdparse import Script
23 from .vendor.requirementslib import Requirement
24 from .utils import (
25 atomic_open_for_write,
26 mkdir_p,
27 pep423_name,
28 proper_case,
29 find_requirements,
30 is_editable,
31 is_vcs,
32 cleanup_toml,
33 is_installable_file,
34 is_valid_url,
35 normalize_drive,
36 python_version,
37 safe_expandvars,
38 is_star,
39 )
40 from .environments import (
41 PIPENV_MAX_DEPTH,
42 PIPENV_PIPFILE,
43 PIPENV_VENV_IN_PROJECT,
44 PIPENV_VIRTUALENV,
45 PIPENV_TEST_INDEX,
46 PIPENV_PYTHON,
47 PIPENV_DEFAULT_PYTHON_VERSION,
48 )
49
50
51 def _normalized(p):
52 if p is None:
53 return None
54 loc = Path(p)
55 if loc.is_absolute():
56 return normalize_drive(str(loc))
57 else:
58 try:
59 loc = loc.resolve()
60 except OSError:
61 loc = loc.absolute()
62 return normalize_drive(str(loc))
63
64
65 DEFAULT_NEWLINES = u'\n'
66
67
68 def preferred_newlines(f):
69 if isinstance(f.newlines, six.text_type):
70 return f.newlines
71
72 return DEFAULT_NEWLINES
73
74
75 if PIPENV_PIPFILE:
76 if not os.path.isfile(PIPENV_PIPFILE):
77 raise RuntimeError('Given PIPENV_PIPFILE is not found!')
78
79 else:
80 PIPENV_PIPFILE = _normalized(PIPENV_PIPFILE)
81 # (path, file contents) => TOMLFile
82 # keeps track of pipfiles that we've seen so we do not need to re-parse 'em
83 _pipfile_cache = {}
84
85
86 if PIPENV_TEST_INDEX:
87 DEFAULT_SOURCE = {
88 u'url': PIPENV_TEST_INDEX,
89 u'verify_ssl': True,
90 u'name': u'custom',
91 }
92 else:
93 DEFAULT_SOURCE = {
94 u'url': u'https://pypi.org/simple',
95 u'verify_ssl': True,
96 u'name': u'pypi',
97 }
98
99 pipfile.api.DEFAULT_SOURCE = DEFAULT_SOURCE
100
101
102 class SourceNotFound(KeyError):
103 pass
104
105
106 class Project(object):
107 """docstring for Project"""
108
109 def __init__(self, which=None, python_version=None, chdir=True):
110 super(Project, self).__init__()
111 self._name = None
112 self._virtualenv_location = None
113 self._download_location = None
114 self._proper_names_db_path = None
115 self._pipfile_location = None
116 self._pipfile_newlines = DEFAULT_NEWLINES
117 self._lockfile_newlines = DEFAULT_NEWLINES
118 self._requirements_location = None
119 self._original_dir = os.path.abspath(os.curdir)
120 self.which = which
121 self.python_version = python_version
122 # Hack to skip this during pipenv run, or -r.
123 if ('run' not in sys.argv) and chdir:
124 try:
125 os.chdir(self.project_directory)
126 except (TypeError, AttributeError):
127 pass
128
129 def path_to(self, p):
130 """Returns the absolute path to a given relative path."""
131 if os.path.isabs(p):
132 return p
133
134 return os.sep.join([self._original_dir, p])
135
136 def _build_package_list(self, package_section):
137 """Returns a list of packages for pip-tools to consume."""
138 ps = {}
139 # TODO: Separate the logic for showing packages from the filters for supplying pip-tools
140 for k, v in self.parsed_pipfile.get(package_section, {}).items():
141 # Skip editable VCS deps.
142 if hasattr(v, 'keys'):
143 # When a vcs url is gven without editable it only appears as a key
144 # Eliminate any vcs, path, or url entries which are not editable
145 # Since pip-tools can't do deep resolution on them, even setuptools-installable ones
146 if (
147 is_vcs(v) or
148 is_vcs(k) or
149 (is_installable_file(k) or is_installable_file(v)) or
150 any(
151 (
152 prefix in v and
153 (
154 os.path.isfile(v[prefix]) or
155 is_valid_url(v[prefix])
156 )
157 )
158 for prefix in ['path', 'file']
159 )
160 ):
161 # If they are editable, do resolve them
162 if 'editable' not in v:
163 # allow wheels to be passed through
164 if not (hasattr(v, 'keys') and v.get('path', v.get('file', '')).endswith('.whl')):
165 continue
166 ps.update({k: v})
167
168 else:
169 ps.update({k: v})
170 else:
171 ps.update({k: v})
172 else:
173 # Since these entries have no attributes we know they are not editable
174 # So we can safely exclude things that need to be editable in order to be resolved
175 # First exclude anything that is a vcs entry either in the key or value
176 if not (
177 any(is_vcs(i) for i in [k, v]) or
178 # Then exclude any installable files that are not directories
179 # Because pip-tools can resolve setup.py for example
180 any(is_installable_file(i) for i in [k, v]) or
181 # Then exclude any URLs because they need to be editable also
182 # Things that are excluded can only be 'shallow resolved'
183 any(is_valid_url(i) for i in [k, v])
184 ):
185 ps.update({k: v})
186 return ps
187
188 @property
189 def name(self):
190 if self._name is None:
191 self._name = self.pipfile_location.split(os.sep)[-2]
192 return self._name
193
194 @property
195 def pipfile_exists(self):
196 return bool(self.pipfile_location)
197
198 @property
199 def required_python_version(self):
200 if self.pipfile_exists:
201 required = self.parsed_pipfile.get('requires', {}).get(
202 'python_full_version'
203 )
204 if not required:
205 required = self.parsed_pipfile.get('requires', {}).get(
206 'python_version'
207 )
208 if required != "*":
209 return required
210
211 @property
212 def project_directory(self):
213 if self.pipfile_location is not None:
214 return os.path.abspath(
215 os.path.join(self.pipfile_location, os.pardir)
216 )
217
218 else:
219 return None
220
221 @property
222 def requirements_exists(self):
223 return bool(self.requirements_location)
224
225 def is_venv_in_project(self):
226 return PIPENV_VENV_IN_PROJECT or (
227 self.project_directory and
228 os.path.exists(os.path.join(self.project_directory, '.venv'))
229 )
230
231 @property
232 def virtualenv_exists(self):
233 # TODO: Decouple project from existence of Pipfile.
234 if self.pipfile_exists and os.path.exists(self.virtualenv_location):
235 if os.name == 'nt':
236 extra = ['Scripts', 'activate.bat']
237 else:
238 extra = ['bin', 'activate']
239 return os.path.isfile(
240 os.sep.join([self.virtualenv_location] + extra)
241 )
242
243 return False
244
245 @classmethod
246 def _get_virtualenv_location(cls, name):
247 from .patched.pew.pew import get_workon_home
248 venv = get_workon_home() / name
249 if not venv.exists():
250 return ''
251 return '{0}'.format(venv)
252
253 @classmethod
254 def _sanitize(cls, name):
255 # Replace dangerous characters into '_'. The length of the sanitized
256 # project name is limited as 42 because of the limit of linux kernel
257 #
258 # 42 = 127 - len('/home//.local/share/virtualenvs//bin/python2') - 32 - len('-HASHHASH')
259 #
260 # 127 : BINPRM_BUF_SIZE - 1
261 # 32 : Maximum length of username
262 #
263 # References:
264 # https://www.gnu.org/software/bash/manual/html_node/Double-Quotes.html
265 # http://www.tldp.org/LDP/abs/html/special-chars.html#FIELDREF
266 # https://github.com/torvalds/linux/blob/2bfe01ef/include/uapi/linux/binfmts.h#L18
267 return re.sub(r'[ $`!*@"\\\r\n\t]', '_', name)[0:42]
268
269 def _get_virtualenv_hash(self, name):
270 """Get the name of the virtualenv adjusted for windows if needed
271
272 Returns (name, encoded_hash)
273 """
274 def get_name(name, location):
275 name = self._sanitize(name)
276 hash = hashlib.sha256(location.encode()).digest()[:6]
277 encoded_hash = base64.urlsafe_b64encode(hash).decode()
278 return name, encoded_hash[:8]
279
280 clean_name, encoded_hash = get_name(name, self.pipfile_location)
281 venv_name = '{0}-{1}'.format(clean_name, encoded_hash)
282
283 # This should work most of the time, for non-WIndows, in-project venv,
284 # or "proper" path casing (on Windows).
285 if (os.name != 'nt' or
286 self.is_venv_in_project() or
287 self._get_virtualenv_location(venv_name)):
288 return clean_name, encoded_hash
289
290 # Check for different capitalization of the same project.
291 from .patched.pew.pew import lsenvs
292 for env in lsenvs():
293 try:
294 env_name, hash_ = env.rsplit('-', 1)
295 except ValueError:
296 continue
297 if len(hash_) != 8 or env_name.lower() != name.lower():
298 continue
299 return get_name(env_name, self.pipfile_location.replace(name, env_name))
300
301 # Use the default if no matching env exists.
302 return clean_name, encoded_hash
303
304 @property
305 def virtualenv_name(self):
306 sanitized, encoded_hash = self._get_virtualenv_hash(self.name)
307 suffix = '-{0}'.format(PIPENV_PYTHON) if PIPENV_PYTHON else ''
308 # If the pipfile was located at '/home/user/MY_PROJECT/Pipfile',
309 # the name of its virtualenv will be 'my-project-wyUfYPqE'
310 return sanitized + '-' + encoded_hash + suffix
311
312 @property
313 def virtualenv_location(self):
314 # if VIRTUAL_ENV is set, use that.
315 if PIPENV_VIRTUALENV:
316 return PIPENV_VIRTUALENV
317
318 # Use cached version, if available.
319 if self._virtualenv_location:
320 return self._virtualenv_location
321
322 # Default mode.
323 if not self.is_venv_in_project():
324 loc = self._get_virtualenv_location(self.virtualenv_name)
325 # The user wants the virtualenv in the project.
326 else:
327 loc = os.sep.join(
328 self.pipfile_location.split(os.sep)[:-1] + ['.venv']
329 )
330 self._virtualenv_location = loc
331 return loc
332
333 @property
334 def virtualenv_src_location(self):
335 loc = os.sep.join([self.virtualenv_location, 'src'])
336 mkdir_p(loc)
337 return loc
338
339 @property
340 def download_location(self):
341 if self._download_location is None:
342 loc = os.sep.join([self.virtualenv_location, 'downloads'])
343 self._download_location = loc
344 # Create the directory, if it doesn't exist.
345 mkdir_p(self._download_location)
346 return self._download_location
347
348 @property
349 def proper_names_db_path(self):
350 if self._proper_names_db_path is None:
351 self._proper_names_db_path = Path(
352 self.virtualenv_location,
353 'pipenv-proper-names.txt',
354 )
355 self._proper_names_db_path.touch() # Ensure the file exists.
356 return self._proper_names_db_path
357
358 @property
359 def proper_names(self):
360 with self.proper_names_db_path.open() as f:
361 return f.read().splitlines()
362
363 def register_proper_name(self, name):
364 """Registers a proper name to the database."""
365 with self.proper_names_db_path.open('a') as f:
366 f.write('{0}\n'.format(name))
367
368 @property
369 def pipfile_location(self):
370 if PIPENV_PIPFILE:
371 return PIPENV_PIPFILE
372
373 if self._pipfile_location is None:
374 try:
375 loc = pipfile.Pipfile.find(max_depth=PIPENV_MAX_DEPTH)
376 except RuntimeError:
377 loc = None
378 self._pipfile_location = _normalized(loc)
379 return self._pipfile_location
380
381 @property
382 def requirements_location(self):
383 if self._requirements_location is None:
384 try:
385 loc = find_requirements(max_depth=PIPENV_MAX_DEPTH)
386 except RuntimeError:
387 loc = None
388 self._requirements_location = loc
389 return self._requirements_location
390
391 @property
392 def parsed_pipfile(self):
393 """Parse Pipfile into a TOMLFile and cache it
394
395 (call clear_pipfile_cache() afterwards if mutating)"""
396 contents = self.read_pipfile()
397 # use full contents to get around str/bytes 2/3 issues
398 cache_key = (self.pipfile_location, contents)
399 if cache_key not in _pipfile_cache:
400 parsed = self._parse_pipfile(contents)
401 _pipfile_cache[cache_key] = parsed
402 return _pipfile_cache[cache_key]
403
404 def read_pipfile(self):
405 # Open the pipfile, read it into memory.
406 with io.open(self.pipfile_location) as f:
407 contents = f.read()
408 self._pipfile_newlines = preferred_newlines(f)
409
410 return contents
411
412 @property
413 def pased_pure_pipfile(self):
414 contents = self.read_pipfile()
415
416 return self._parse_pipfile(contents)
417
418 def clear_pipfile_cache(self):
419 """Clear pipfile cache (e.g., so we can mutate parsed pipfile)"""
420 _pipfile_cache.clear()
421
422 def _parse_pipfile(self, contents):
423 # If any outline tables are present...
424 if ('[packages.' in contents) or ('[dev-packages.' in contents):
425 data = toml.loads(contents)
426 # Convert all outline tables to inline tables.
427 for section in ('packages', 'dev-packages'):
428 for package in data.get(section, {}):
429 # Convert things to inline tables — fancy :)
430 if hasattr(data[section][package], 'keys'):
431 _data = data[section][package]
432 data[section][package] = toml._get_empty_inline_table(
433 dict
434 )
435 data[section][package].update(_data)
436 # We lose comments here, but it's for the best.)
437 try:
438 return contoml.loads(toml.dumps(data, preserve=True))
439
440 except RuntimeError:
441 return toml.loads(toml.dumps(data, preserve=True))
442
443 else:
444 # Fallback to toml parser, for large files.
445 try:
446 return contoml.loads(contents)
447
448 except Exception:
449 return toml.loads(contents)
450
451 @property
452 def settings(self):
453 """A dictionary of the settings added to the Pipfile."""
454 return self.parsed_pipfile.get('pipenv', {})
455
456 def has_script(self, name):
457 try:
458 return name in self.parsed_pipfile['scripts']
459 except KeyError:
460 return False
461
462 def build_script(self, name, extra_args=None):
463 try:
464 script = Script.parse(self.parsed_pipfile['scripts'][name])
465 except KeyError:
466 script = Script(name)
467 if extra_args:
468 script.extend(extra_args)
469 return script
470
471 def update_settings(self, d):
472 settings = self.settings
473 changed = False
474 for new in d:
475 if new not in settings:
476 settings[new] = d[new]
477 changed = True
478 if changed:
479 p = self.parsed_pipfile
480 p['pipenv'] = settings
481 # Write the changes to disk.
482 self.write_toml(p)
483
484 @property
485 def _lockfile(self):
486 """Pipfile.lock divided by PyPI and external dependencies."""
487 pfile = pipfile.load(self.pipfile_location, inject_env=False)
488 lockfile = json.loads(pfile.lock())
489 for section in ('default', 'develop'):
490 lock_section = lockfile.get(section, {})
491 for key in list(lock_section.keys()):
492 norm_key = pep423_name(key)
493 lockfile[section][norm_key] = lock_section.pop(key)
494 return lockfile
495
496 @property
497 def lockfile_location(self):
498 return '{0}.lock'.format(self.pipfile_location)
499
500 @property
501 def lockfile_exists(self):
502 return os.path.isfile(self.lockfile_location)
503
504 @property
505 def lockfile_content(self):
506 return self.load_lockfile()
507
508 def _get_editable_packages(self, dev=False):
509 section = 'dev-packages' if dev else 'packages'
510 packages = {
511 k: v
512 for k, v in self.parsed_pipfile.get(section, {}).items()
513 if is_editable(v)
514 }
515 return packages
516
517 def _get_vcs_packages(self, dev=False):
518 section = 'dev-packages' if dev else 'packages'
519 packages = {
520 k: v
521 for k, v in self.parsed_pipfile.get(section, {}).items()
522 if is_vcs(v) or is_vcs(k)
523 }
524 return packages or {}
525
526 @property
527 def editable_packages(self):
528 return self._get_editable_packages(dev=False)
529
530 @property
531 def editable_dev_packages(self):
532 return self._get_editable_packages(dev=True)
533
534 @property
535 def vcs_packages(self):
536 """Returns a list of VCS packages, for not pip-tools to consume."""
537 return self._get_vcs_packages(dev=False)
538
539 @property
540 def vcs_dev_packages(self):
541 """Returns a list of VCS packages, for not pip-tools to consume."""
542 return self._get_vcs_packages(dev=True)
543
544 @property
545 def all_packages(self):
546 """Returns a list of all packages."""
547 p = dict(self.parsed_pipfile.get('dev-packages', {}))
548 p.update(self.parsed_pipfile.get('packages', {}))
549 return p
550
551 @property
552 def packages(self):
553 """Returns a list of packages, for pip-tools to consume."""
554 return self._build_package_list('packages')
555
556 @property
557 def dev_packages(self):
558 """Returns a list of dev-packages, for pip-tools to consume."""
559 return self._build_package_list('dev-packages')
560
561 def touch_pipfile(self):
562 """Simply touches the Pipfile, for later use."""
563 with open('Pipfile', 'a'):
564 os.utime('Pipfile', None)
565
566 @property
567 def pipfile_is_empty(self):
568 if not self.pipfile_exists:
569 return True
570
571 if not len(self.read_pipfile()):
572 return True
573
574 return False
575
576 def create_pipfile(self, python=None):
577 """Creates the Pipfile, filled with juicy defaults."""
578 from .patched.notpip._internal import ConfigOptionParser
579 from .patched.notpip._internal.cmdoptions import make_option_group, index_group
580 config_parser = ConfigOptionParser(name=self.name)
581 config_parser.add_option_group(make_option_group(index_group, config_parser))
582 install = config_parser.option_groups[0]
583 indexes = ' '.join(install.get_option('--extra-index-url').default).lstrip('\n').split('\n')
584 sources = [DEFAULT_SOURCE]
585 for i, index in enumerate(indexes):
586 if not index:
587 continue
588
589 source_name = 'pip_index_{}'.format(i)
590 verify_ssl = index.startswith('https')
591 sources.append(
592 {
593 u'url': index,
594 u'verify_ssl': verify_ssl,
595 u'name': source_name,
596 }
597 )
598
599 data = {
600 u'source': sources,
601 # Default packages.
602 u'packages': {},
603 u'dev-packages': {},
604 }
605 # Default requires.
606 required_python = python
607 if not python:
608 if self.virtualenv_location:
609 required_python = self.which('python', self.virtualenv_location)
610 else:
611 required_python = self.which('python')
612 version = python_version(required_python) or PIPENV_DEFAULT_PYTHON_VERSION
613 if version and len(version) >= 3:
614 data[u'requires'] = {
615 'python_version': version[: len('2.7')]
616 }
617 self.write_toml(data, 'Pipfile')
618
619 def write_toml(self, data, path=None):
620 """Writes the given data structure out as TOML."""
621 if path is None:
622 path = self.pipfile_location
623 try:
624 formatted_data = contoml.dumps(data).rstrip()
625 except Exception:
626 for section in ('packages', 'dev-packages'):
627 for package in data.get(section, {}):
628 # Convert things to inline tables — fancy :)
629 if hasattr(data[section][package], 'keys'):
630 _data = data[section][package]
631 data[section][package] = toml._get_empty_inline_table(
632 dict
633 )
634 data[section][package].update(_data)
635 formatted_data = toml.dumps(data).rstrip()
636
637 if Path(path).absolute() == Path(self.pipfile_location).absolute():
638 newlines = self._pipfile_newlines
639 else:
640 newlines = DEFAULT_NEWLINES
641 formatted_data = cleanup_toml(formatted_data)
642 with io.open(path, 'w', newline=newlines) as f:
643 f.write(formatted_data)
644 # pipfile is mutated!
645 self.clear_pipfile_cache()
646
647 def write_lockfile(self, content):
648 """Write out the lockfile.
649 """
650 newlines = self._lockfile_newlines
651 s = simplejson.dumps( # Send Unicode in to guarentee Unicode out.
652 content, indent=4, separators=(u',', u': '), sort_keys=True,
653 )
654 with atomic_open_for_write(self.lockfile_location, newline=newlines) as f:
655 f.write(s)
656 if not s.endswith(u'\n'):
657 f.write(u'\n') # Write newline at end of document. GH #319.
658
659 @property
660 def pipfile_sources(self):
661 if 'source' not in self.parsed_pipfile:
662 return [DEFAULT_SOURCE]
663 # We need to make copies of the source info so we don't
664 # accidentally modify the cache. See #2100 where values are
665 # written after the os.path.expandvars() call.
666 return [
667 {k: safe_expandvars(v) for k, v in source.items()}
668 for source in self.parsed_pipfile['source']
669 ]
670
671 @property
672 def sources(self):
673 if self.lockfile_exists and hasattr(self.lockfile_content, 'keys'):
674 meta_ = self.lockfile_content['_meta']
675 sources_ = meta_.get('sources')
676 if sources_:
677 return sources_
678
679 else:
680 return self.pipfile_sources
681
682 def find_source(self, source):
683 """given a source, find it.
684
685 source can be a url or an index name.
686 """
687 if not is_valid_url(source):
688 try:
689 source = self.get_source(name=source)
690 except SourceNotFound:
691 source = self.get_source(url=source)
692 else:
693 source = self.get_source(url=source)
694 return source
695
696 def get_source(self, name=None, url=None):
697 def find_source(sources, name=None, url=None):
698 source = None
699 if name:
700 source = [s for s in sources if s.get('name') == name]
701 elif url:
702 source = [s for s in sources if url.startswith(s.get('url'))]
703 if source:
704 return first(source)
705
706 found_source = find_source(self.sources, name=name, url=url)
707 if found_source:
708 return found_source
709 found_source = find_source(self.pipfile_sources, name=name, url=url)
710 if found_source:
711 return found_source
712 raise SourceNotFound(name or url)
713
714 def get_package_name_in_pipfile(self, package_name, dev=False):
715 """Get the equivalent package name in pipfile"""
716 key = 'dev-packages' if dev else 'packages'
717 section = self.parsed_pipfile.get(key, {})
718 package_name = pep423_name(package_name)
719 for name in section.keys():
720 if pep423_name(name) == package_name:
721 return name
722 return None
723
724 def remove_package_from_pipfile(self, package_name, dev=False):
725 # Read and append Pipfile.
726 name = self.get_package_name_in_pipfile(package_name, dev)
727 key = 'dev-packages' if dev else 'packages'
728 p = self.parsed_pipfile
729 if name:
730 del p[key][name]
731 self.write_toml(p)
732
733 def add_package_to_pipfile(self, package_name, dev=False):
734 # Read and append Pipfile.
735 p = self.parsed_pipfile
736 # Don't re-capitalize file URLs or VCSs.
737 package = Requirement.from_line(package_name.strip())
738 _, converted = package.pipfile_entry
739 key = 'dev-packages' if dev else 'packages'
740 # Set empty group if it doesn't exist yet.
741 if key not in p:
742 p[key] = {}
743 name = self.get_package_name_in_pipfile(package.name, dev)
744 if name and is_star(converted):
745 # Skip for wildcard version
746 return
747 # Add the package to the group.
748 p[key][name or package.normalized_name] = converted
749 # Write Pipfile.
750 self.write_toml(p)
751
752 def add_index_to_pipfile(self, index):
753 """Adds a given index to the Pipfile."""
754 # Read and append Pipfile.
755 p = self.parsed_pipfile
756 source = {'url': index, 'verify_ssl': True}
757 # Add the package to the group.
758 if 'source' not in p:
759 p['source'] = [source]
760 else:
761 p['source'].append(source)
762 # Write Pipfile.
763 self.write_toml(p)
764
765 def recase_pipfile(self):
766 if self.ensure_proper_casing():
767 self.write_toml(self.parsed_pipfile)
768
769 def load_lockfile(self, expand_env_vars=True):
770 with io.open(self.lockfile_location) as lock:
771 j = json.load(lock)
772 self._lockfile_newlines = preferred_newlines(lock)
773 # lockfile is just a string
774 if not j or not hasattr(j, 'keys'):
775 return j
776
777 if expand_env_vars:
778 # Expand environment variables in Pipfile.lock at runtime.
779 for i, source in enumerate(j['_meta']['sources'][:]):
780 j['_meta']['sources'][i]['url'] = os.path.expandvars(j['_meta']['sources'][i]['url'])
781
782 return j
783
784 def get_lockfile_hash(self):
785 if not os.path.exists(self.lockfile_location):
786 return
787
788 lockfile = self.load_lockfile(expand_env_vars=False)
789 if '_meta' in lockfile and hasattr(lockfile, 'keys'):
790 return lockfile['_meta'].get('hash', {}).get('sha256')
791 # Lockfile exists but has no hash at all
792 return ''
793
794 def calculate_pipfile_hash(self):
795 # Update the lockfile if it is out-of-date.
796 p = pipfile.load(self.pipfile_location, inject_env=False)
797 return p.hash
798
799 def ensure_proper_casing(self):
800 """Ensures proper casing of Pipfile packages"""
801 pfile = self.parsed_pipfile
802 casing_changed = self.proper_case_section(pfile.get('packages', {}))
803 casing_changed |= self.proper_case_section(pfile.get('dev-packages', {}))
804 return casing_changed
805
806 def proper_case_section(self, section):
807 """Verify proper casing is retrieved, when available, for each
808 dependency in the section.
809 """
810 # Casing for section.
811 changed_values = False
812 unknown_names = [
813 k for k in section.keys() if k not in set(self.proper_names)
814 ]
815 # Replace each package with proper casing.
816 for dep in unknown_names:
817 try:
818 # Get new casing for package name.
819 new_casing = proper_case(dep)
820 except IOError:
821 # Unable to normalize package name.
822 continue
823
824 if new_casing != dep:
825 changed_values = True
826 self.register_proper_name(new_casing)
827 # Replace old value with new value.
828 old_value = section[dep]
829 section[new_casing] = old_value
830 del section[dep]
831 # Return whether or not values have been changed.
832 return changed_values
833
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pipenv/project.py b/pipenv/project.py
--- a/pipenv/project.py
+++ b/pipenv/project.py
@@ -363,7 +363,7 @@
def register_proper_name(self, name):
"""Registers a proper name to the database."""
with self.proper_names_db_path.open('a') as f:
- f.write('{0}\n'.format(name))
+ f.write(u'{0}\n'.format(name))
@property
def pipfile_location(self):
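
Editor's note on the patch above (an illustrative sketch, not part of the dataset record): on Python 2, `u'{0}\n'.format(name)` returns `unicode` even when `name` is a byte string, so the text-mode stream opened by `proper_names_db_path.open('a')` accepts it; on Python 3 the `u` prefix is a no-op. A quick check of that behaviour:

```python
# Quick check of the str/unicode difference behind the one-character fix.
# Run under both Python 2.7 and Python 3 to compare (illustrative sketch).
name = 'Flask'
print(type('{0}\n'.format(name)))    # <type 'str'> on Python 2, <class 'str'> on Python 3
print(type(u'{0}\n'.format(name)))   # <type 'unicode'> on Python 2, <class 'str'> on Python 3
```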
| {"golden_diff": "diff --git a/pipenv/project.py b/pipenv/project.py\n--- a/pipenv/project.py\n+++ b/pipenv/project.py\n@@ -363,7 +363,7 @@\n def register_proper_name(self, name):\n \"\"\"Registers a proper name to the database.\"\"\"\n with self.proper_names_db_path.open('a') as f:\n- f.write('{0}\\n'.format(name))\n+ f.write(u'{0}\\n'.format(name))\n \n @property\n def pipfile_location(self):\n", "issue": "Can not generate Pipfile.lock by installing packages with requirementx.txt on Python 2.7\nDescribe the issue briefly here.\r\nRun: $pipenv install -r requirements.txt\r\nGot error: \r\nCreating a virtualenv for this project...\r\nPipfile: /home/ec2-user/test/Pipfile\r\nUsing /usr/bin/python2.7 (2.7.14) to create virtualenv...\r\n\u280bAlready using interpreter /usr/bin/python2.7\r\nNew python executable in /home/ec2-user/.local/share/virtualenvs/test-LVXQY0Nl/bin/python2.7\r\nAlso creating executable in /home/ec2-user/.local/share/virtualenvs/test-LVXQY0Nl/bin/python\r\nInstalling setuptools, pip, wheel...done.\r\nSetting project for test-LVXQY0Nl to /home/ec2-user/test\r\n\r\nVirtualenv location: /home/ec2-user/.local/share/virtualenvs/test-LVXQY0Nl\r\nCreating a Pipfile for this project...\r\nRequirements file provided! Importing into Pipfile...\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/pipenv\", line 11, in <module>\r\n sys.exit(cli())\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/click/core.py\", line 1066, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/cli.py\", line 416, in install\r\n selective_upgrade=selective_upgrade,\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/core.py\", line 1854, in do_install\r\n import_requirements(r=project.path_to(requirements), dev=dev)\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/core.py\", line 228, in import_requirements\r\n project.recase_pipfile()\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/project.py\", line 766, in recase_pipfile\r\n if self.ensure_proper_casing():\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/project.py\", line 802, in ensure_proper_casing\r\n casing_changed = self.proper_case_section(pfile.get('packages', {}))\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/project.py\", line 826, in proper_case_section\r\n self.register_proper_name(new_casing)\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/project.py\", line 366, in register_proper_name\r\n f.write('{0}\\n'.format(name))\r\nTypeError: write() argument 1 must be unicode, not str\r\n/usr/local/lib/python2.7/site-packages/pipenv/_compat.py:108: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/pipenv-2ttppI-requirements'>\r\n warnings.warn(warn_message, ResourceWarning)\r\n\r\n\r\nPlease run `$ python -m pipenv.help`, and paste the results here.\r\n<details><summary>$ python -m pipenv.help output</summary>\r\n\r\nPipenv version: 
`'2018.6.25'`\r\n\r\nPipenv location: `'/usr/local/lib/python2.7/site-packages/pipenv'`\r\n\r\nPython location: `'/usr/bin/python'`\r\n\r\nOther Python installations in `PATH`:\r\n\r\n - `2.7`: `/usr/bin/python2.7`\r\n - `2.7`: `/usr/bin/python2.7`\r\n\r\n - `2.7.14`: `/usr/bin/python`\r\n\r\nPEP 508 Information:\r\n\r\n```\r\n{'implementation_name': 'cpython',\r\n 'implementation_version': '0',\r\n 'os_name': 'posix',\r\n 'platform_machine': 'x86_64',\r\n 'platform_python_implementation': 'CPython',\r\n 'platform_release': '4.14.33-51.37.amzn1.x86_64',\r\n 'platform_system': 'Linux',\r\n 'platform_version': '#1 SMP Thu May 3 20:07:43 UTC 2018',\r\n 'python_full_version': '2.7.14',\r\n 'python_version': '2.7',\r\n 'sys_platform': 'linux2'}\r\n```\r\n\r\nSystem environment variables:\r\n\r\n - `LC_CTYPE`\r\n - `PYTHONDONTWRITEBYTECODE`\r\n - `LESSOPEN`\r\n - `SSH_CLIENT`\r\n - `LOGNAME`\r\n - `USER`\r\n - `HOME`\r\n - `PATH`\r\n - `AWS_PATH`\r\n - `LANG`\r\n - `LESS_TERMCAP_se`\r\n - `TERM`\r\n - `SHELL`\r\n - `EC2_AMITOOL_HOME`\r\n - `LESS_TERMCAP_me`\r\n - `LESS_TERMCAP_md`\r\n - `LESS_TERMCAP_mb`\r\n - `HISTSIZE`\r\n - `AWS_ELB_HOME`\r\n - `JAVA_HOME`\r\n - `EC2_HOME`\r\n - `AWS_AUTO_SCALING_HOME`\r\n - `PIP_PYTHON_PATH`\r\n - `_`\r\n - `LESS_TERMCAP_ue`\r\n - `SSH_CONNECTION`\r\n - `AWS_CLOUDWATCH_HOME`\r\n - `SSH_TTY`\r\n - `OLDPWD`\r\n - `HOSTNAME`\r\n - `HISTCONTROL`\r\n - `SHLVL`\r\n - `PWD`\r\n - `LESS_TERMCAP_us`\r\n - `MAIL`\r\n - `LS_COLORS`\r\n\r\nPipenv\u2013specific environment variables:\r\n\r\n\r\nDebug\u2013specific environment variables:\r\n\r\n - `PATH`: `/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:/home/ec2-user/.local/bin:/home/ec2-user/bin`\r\n - `SHELL`: `/bin/bash`\r\n - `LANG`: `en_US.UTF-8`\r\n - `PWD`: `/home/ec2-user/test`\r\n\r\n\r\n---------------------------\r\n\r\nContents of `Pipfile` ('/home/ec2-user/test/Pipfile'):\r\n\r\n```toml\r\n[[source]]\r\nurl = \"https://pypi.org/simple\"\r\nverify_ssl = true\r\nname = \"pypi\"\r\n\r\n[dev-packages]\r\n\r\n[packages]\r\nflask = \"==0.10.1\"\r\n\"jinja2\" = \"==2.7.3\"\r\nwerkzeug = \"==0.10\"\r\n\r\n[requires]\r\npython_version = \"2.7\"\r\n\r\n```\r\n\r\n</details>\r\nIf you're on MacOS, just run the following:\r\n\r\n $ python -m pipenv.help | pbcopy\r\n\r\n------------\r\n\r\n##### Expected result\r\n\r\nDescribe what you expected.\r\n\r\n##### Actual result\r\n\r\nWhen possible, provide the verbose output (`--verbose`), especially for locking and dependencies resolving issues.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport io\nimport json\nimport os\nimport re\nimport sys\nimport base64\nimport hashlib\nimport contoml\nfrom first import first\nimport pipfile\nimport pipfile.api\nimport six\nimport toml\nimport json as simplejson\n\ntry:\n from pathlib import Path\nexcept ImportError:\n from pathlib2 import Path\n\nfrom .cmdparse import Script\nfrom .vendor.requirementslib import Requirement\nfrom .utils import (\n atomic_open_for_write,\n mkdir_p,\n pep423_name,\n proper_case,\n find_requirements,\n is_editable,\n is_vcs,\n cleanup_toml,\n is_installable_file,\n is_valid_url,\n normalize_drive,\n python_version,\n safe_expandvars,\n is_star,\n)\nfrom .environments import (\n PIPENV_MAX_DEPTH,\n PIPENV_PIPFILE,\n PIPENV_VENV_IN_PROJECT,\n PIPENV_VIRTUALENV,\n PIPENV_TEST_INDEX,\n PIPENV_PYTHON,\n PIPENV_DEFAULT_PYTHON_VERSION,\n)\n\n\ndef _normalized(p):\n if p is None:\n return None\n loc = Path(p)\n if loc.is_absolute():\n return normalize_drive(str(loc))\n 
else:\n try:\n loc = loc.resolve()\n except OSError:\n loc = loc.absolute()\n return normalize_drive(str(loc))\n\n\nDEFAULT_NEWLINES = u'\\n'\n\n\ndef preferred_newlines(f):\n if isinstance(f.newlines, six.text_type):\n return f.newlines\n\n return DEFAULT_NEWLINES\n\n\nif PIPENV_PIPFILE:\n if not os.path.isfile(PIPENV_PIPFILE):\n raise RuntimeError('Given PIPENV_PIPFILE is not found!')\n\n else:\n PIPENV_PIPFILE = _normalized(PIPENV_PIPFILE)\n# (path, file contents) => TOMLFile\n# keeps track of pipfiles that we've seen so we do not need to re-parse 'em\n_pipfile_cache = {}\n\n\nif PIPENV_TEST_INDEX:\n DEFAULT_SOURCE = {\n u'url': PIPENV_TEST_INDEX,\n u'verify_ssl': True,\n u'name': u'custom',\n }\nelse:\n DEFAULT_SOURCE = {\n u'url': u'https://pypi.org/simple',\n u'verify_ssl': True,\n u'name': u'pypi',\n }\n\npipfile.api.DEFAULT_SOURCE = DEFAULT_SOURCE\n\n\nclass SourceNotFound(KeyError):\n pass\n\n\nclass Project(object):\n \"\"\"docstring for Project\"\"\"\n\n def __init__(self, which=None, python_version=None, chdir=True):\n super(Project, self).__init__()\n self._name = None\n self._virtualenv_location = None\n self._download_location = None\n self._proper_names_db_path = None\n self._pipfile_location = None\n self._pipfile_newlines = DEFAULT_NEWLINES\n self._lockfile_newlines = DEFAULT_NEWLINES\n self._requirements_location = None\n self._original_dir = os.path.abspath(os.curdir)\n self.which = which\n self.python_version = python_version\n # Hack to skip this during pipenv run, or -r.\n if ('run' not in sys.argv) and chdir:\n try:\n os.chdir(self.project_directory)\n except (TypeError, AttributeError):\n pass\n\n def path_to(self, p):\n \"\"\"Returns the absolute path to a given relative path.\"\"\"\n if os.path.isabs(p):\n return p\n\n return os.sep.join([self._original_dir, p])\n\n def _build_package_list(self, package_section):\n \"\"\"Returns a list of packages for pip-tools to consume.\"\"\"\n ps = {}\n # TODO: Separate the logic for showing packages from the filters for supplying pip-tools\n for k, v in self.parsed_pipfile.get(package_section, {}).items():\n # Skip editable VCS deps.\n if hasattr(v, 'keys'):\n # When a vcs url is gven without editable it only appears as a key\n # Eliminate any vcs, path, or url entries which are not editable\n # Since pip-tools can't do deep resolution on them, even setuptools-installable ones\n if (\n is_vcs(v) or\n is_vcs(k) or\n (is_installable_file(k) or is_installable_file(v)) or\n any(\n (\n prefix in v and\n (\n os.path.isfile(v[prefix]) or\n is_valid_url(v[prefix])\n )\n )\n for prefix in ['path', 'file']\n )\n ):\n # If they are editable, do resolve them\n if 'editable' not in v:\n # allow wheels to be passed through\n if not (hasattr(v, 'keys') and v.get('path', v.get('file', '')).endswith('.whl')):\n continue\n ps.update({k: v})\n\n else:\n ps.update({k: v})\n else:\n ps.update({k: v})\n else:\n # Since these entries have no attributes we know they are not editable\n # So we can safely exclude things that need to be editable in order to be resolved\n # First exclude anything that is a vcs entry either in the key or value\n if not (\n any(is_vcs(i) for i in [k, v]) or\n # Then exclude any installable files that are not directories\n # Because pip-tools can resolve setup.py for example\n any(is_installable_file(i) for i in [k, v]) or\n # Then exclude any URLs because they need to be editable also\n # Things that are excluded can only be 'shallow resolved'\n any(is_valid_url(i) for i in [k, v])\n ):\n ps.update({k: v})\n return 
ps\n\n @property\n def name(self):\n if self._name is None:\n self._name = self.pipfile_location.split(os.sep)[-2]\n return self._name\n\n @property\n def pipfile_exists(self):\n return bool(self.pipfile_location)\n\n @property\n def required_python_version(self):\n if self.pipfile_exists:\n required = self.parsed_pipfile.get('requires', {}).get(\n 'python_full_version'\n )\n if not required:\n required = self.parsed_pipfile.get('requires', {}).get(\n 'python_version'\n )\n if required != \"*\":\n return required\n\n @property\n def project_directory(self):\n if self.pipfile_location is not None:\n return os.path.abspath(\n os.path.join(self.pipfile_location, os.pardir)\n )\n\n else:\n return None\n\n @property\n def requirements_exists(self):\n return bool(self.requirements_location)\n\n def is_venv_in_project(self):\n return PIPENV_VENV_IN_PROJECT or (\n self.project_directory and\n os.path.exists(os.path.join(self.project_directory, '.venv'))\n )\n\n @property\n def virtualenv_exists(self):\n # TODO: Decouple project from existence of Pipfile.\n if self.pipfile_exists and os.path.exists(self.virtualenv_location):\n if os.name == 'nt':\n extra = ['Scripts', 'activate.bat']\n else:\n extra = ['bin', 'activate']\n return os.path.isfile(\n os.sep.join([self.virtualenv_location] + extra)\n )\n\n return False\n\n @classmethod\n def _get_virtualenv_location(cls, name):\n from .patched.pew.pew import get_workon_home\n venv = get_workon_home() / name\n if not venv.exists():\n return ''\n return '{0}'.format(venv)\n\n @classmethod\n def _sanitize(cls, name):\n # Replace dangerous characters into '_'. The length of the sanitized\n # project name is limited as 42 because of the limit of linux kernel\n #\n # 42 = 127 - len('/home//.local/share/virtualenvs//bin/python2') - 32 - len('-HASHHASH')\n #\n # 127 : BINPRM_BUF_SIZE - 1\n # 32 : Maximum length of username\n #\n # References:\n # https://www.gnu.org/software/bash/manual/html_node/Double-Quotes.html\n # http://www.tldp.org/LDP/abs/html/special-chars.html#FIELDREF\n # https://github.com/torvalds/linux/blob/2bfe01ef/include/uapi/linux/binfmts.h#L18\n return re.sub(r'[ $`!*@\"\\\\\\r\\n\\t]', '_', name)[0:42]\n\n def _get_virtualenv_hash(self, name):\n \"\"\"Get the name of the virtualenv adjusted for windows if needed\n\n Returns (name, encoded_hash)\n \"\"\"\n def get_name(name, location):\n name = self._sanitize(name)\n hash = hashlib.sha256(location.encode()).digest()[:6]\n encoded_hash = base64.urlsafe_b64encode(hash).decode()\n return name, encoded_hash[:8]\n\n clean_name, encoded_hash = get_name(name, self.pipfile_location)\n venv_name = '{0}-{1}'.format(clean_name, encoded_hash)\n\n # This should work most of the time, for non-WIndows, in-project venv,\n # or \"proper\" path casing (on Windows).\n if (os.name != 'nt' or\n self.is_venv_in_project() or\n self._get_virtualenv_location(venv_name)):\n return clean_name, encoded_hash\n\n # Check for different capitalization of the same project.\n from .patched.pew.pew import lsenvs\n for env in lsenvs():\n try:\n env_name, hash_ = env.rsplit('-', 1)\n except ValueError:\n continue\n if len(hash_) != 8 or env_name.lower() != name.lower():\n continue\n return get_name(env_name, self.pipfile_location.replace(name, env_name))\n\n # Use the default if no matching env exists.\n return clean_name, encoded_hash\n\n @property\n def virtualenv_name(self):\n sanitized, encoded_hash = self._get_virtualenv_hash(self.name)\n suffix = '-{0}'.format(PIPENV_PYTHON) if PIPENV_PYTHON else ''\n # If the pipfile was 
located at '/home/user/MY_PROJECT/Pipfile',\n # the name of its virtualenv will be 'my-project-wyUfYPqE'\n return sanitized + '-' + encoded_hash + suffix\n\n @property\n def virtualenv_location(self):\n # if VIRTUAL_ENV is set, use that.\n if PIPENV_VIRTUALENV:\n return PIPENV_VIRTUALENV\n\n # Use cached version, if available.\n if self._virtualenv_location:\n return self._virtualenv_location\n\n # Default mode.\n if not self.is_venv_in_project():\n loc = self._get_virtualenv_location(self.virtualenv_name)\n # The user wants the virtualenv in the project.\n else:\n loc = os.sep.join(\n self.pipfile_location.split(os.sep)[:-1] + ['.venv']\n )\n self._virtualenv_location = loc\n return loc\n\n @property\n def virtualenv_src_location(self):\n loc = os.sep.join([self.virtualenv_location, 'src'])\n mkdir_p(loc)\n return loc\n\n @property\n def download_location(self):\n if self._download_location is None:\n loc = os.sep.join([self.virtualenv_location, 'downloads'])\n self._download_location = loc\n # Create the directory, if it doesn't exist.\n mkdir_p(self._download_location)\n return self._download_location\n\n @property\n def proper_names_db_path(self):\n if self._proper_names_db_path is None:\n self._proper_names_db_path = Path(\n self.virtualenv_location,\n 'pipenv-proper-names.txt',\n )\n self._proper_names_db_path.touch() # Ensure the file exists.\n return self._proper_names_db_path\n\n @property\n def proper_names(self):\n with self.proper_names_db_path.open() as f:\n return f.read().splitlines()\n\n def register_proper_name(self, name):\n \"\"\"Registers a proper name to the database.\"\"\"\n with self.proper_names_db_path.open('a') as f:\n f.write('{0}\\n'.format(name))\n\n @property\n def pipfile_location(self):\n if PIPENV_PIPFILE:\n return PIPENV_PIPFILE\n\n if self._pipfile_location is None:\n try:\n loc = pipfile.Pipfile.find(max_depth=PIPENV_MAX_DEPTH)\n except RuntimeError:\n loc = None\n self._pipfile_location = _normalized(loc)\n return self._pipfile_location\n\n @property\n def requirements_location(self):\n if self._requirements_location is None:\n try:\n loc = find_requirements(max_depth=PIPENV_MAX_DEPTH)\n except RuntimeError:\n loc = None\n self._requirements_location = loc\n return self._requirements_location\n\n @property\n def parsed_pipfile(self):\n \"\"\"Parse Pipfile into a TOMLFile and cache it\n\n (call clear_pipfile_cache() afterwards if mutating)\"\"\"\n contents = self.read_pipfile()\n # use full contents to get around str/bytes 2/3 issues\n cache_key = (self.pipfile_location, contents)\n if cache_key not in _pipfile_cache:\n parsed = self._parse_pipfile(contents)\n _pipfile_cache[cache_key] = parsed\n return _pipfile_cache[cache_key]\n\n def read_pipfile(self):\n # Open the pipfile, read it into memory.\n with io.open(self.pipfile_location) as f:\n contents = f.read()\n self._pipfile_newlines = preferred_newlines(f)\n\n return contents\n\n @property\n def pased_pure_pipfile(self):\n contents = self.read_pipfile()\n\n return self._parse_pipfile(contents)\n\n def clear_pipfile_cache(self):\n \"\"\"Clear pipfile cache (e.g., so we can mutate parsed pipfile)\"\"\"\n _pipfile_cache.clear()\n\n def _parse_pipfile(self, contents):\n # If any outline tables are present...\n if ('[packages.' in contents) or ('[dev-packages.' 
in contents):\n data = toml.loads(contents)\n # Convert all outline tables to inline tables.\n for section in ('packages', 'dev-packages'):\n for package in data.get(section, {}):\n # Convert things to inline tables \u2014 fancy :)\n if hasattr(data[section][package], 'keys'):\n _data = data[section][package]\n data[section][package] = toml._get_empty_inline_table(\n dict\n )\n data[section][package].update(_data)\n # We lose comments here, but it's for the best.)\n try:\n return contoml.loads(toml.dumps(data, preserve=True))\n\n except RuntimeError:\n return toml.loads(toml.dumps(data, preserve=True))\n\n else:\n # Fallback to toml parser, for large files.\n try:\n return contoml.loads(contents)\n\n except Exception:\n return toml.loads(contents)\n\n @property\n def settings(self):\n \"\"\"A dictionary of the settings added to the Pipfile.\"\"\"\n return self.parsed_pipfile.get('pipenv', {})\n\n def has_script(self, name):\n try:\n return name in self.parsed_pipfile['scripts']\n except KeyError:\n return False\n\n def build_script(self, name, extra_args=None):\n try:\n script = Script.parse(self.parsed_pipfile['scripts'][name])\n except KeyError:\n script = Script(name)\n if extra_args:\n script.extend(extra_args)\n return script\n\n def update_settings(self, d):\n settings = self.settings\n changed = False\n for new in d:\n if new not in settings:\n settings[new] = d[new]\n changed = True\n if changed:\n p = self.parsed_pipfile\n p['pipenv'] = settings\n # Write the changes to disk.\n self.write_toml(p)\n\n @property\n def _lockfile(self):\n \"\"\"Pipfile.lock divided by PyPI and external dependencies.\"\"\"\n pfile = pipfile.load(self.pipfile_location, inject_env=False)\n lockfile = json.loads(pfile.lock())\n for section in ('default', 'develop'):\n lock_section = lockfile.get(section, {})\n for key in list(lock_section.keys()):\n norm_key = pep423_name(key)\n lockfile[section][norm_key] = lock_section.pop(key)\n return lockfile\n\n @property\n def lockfile_location(self):\n return '{0}.lock'.format(self.pipfile_location)\n\n @property\n def lockfile_exists(self):\n return os.path.isfile(self.lockfile_location)\n\n @property\n def lockfile_content(self):\n return self.load_lockfile()\n\n def _get_editable_packages(self, dev=False):\n section = 'dev-packages' if dev else 'packages'\n packages = {\n k: v\n for k, v in self.parsed_pipfile.get(section, {}).items()\n if is_editable(v)\n }\n return packages\n\n def _get_vcs_packages(self, dev=False):\n section = 'dev-packages' if dev else 'packages'\n packages = {\n k: v\n for k, v in self.parsed_pipfile.get(section, {}).items()\n if is_vcs(v) or is_vcs(k)\n }\n return packages or {}\n\n @property\n def editable_packages(self):\n return self._get_editable_packages(dev=False)\n\n @property\n def editable_dev_packages(self):\n return self._get_editable_packages(dev=True)\n\n @property\n def vcs_packages(self):\n \"\"\"Returns a list of VCS packages, for not pip-tools to consume.\"\"\"\n return self._get_vcs_packages(dev=False)\n\n @property\n def vcs_dev_packages(self):\n \"\"\"Returns a list of VCS packages, for not pip-tools to consume.\"\"\"\n return self._get_vcs_packages(dev=True)\n\n @property\n def all_packages(self):\n \"\"\"Returns a list of all packages.\"\"\"\n p = dict(self.parsed_pipfile.get('dev-packages', {}))\n p.update(self.parsed_pipfile.get('packages', {}))\n return p\n\n @property\n def packages(self):\n \"\"\"Returns a list of packages, for pip-tools to consume.\"\"\"\n return self._build_package_list('packages')\n\n 
@property\n def dev_packages(self):\n \"\"\"Returns a list of dev-packages, for pip-tools to consume.\"\"\"\n return self._build_package_list('dev-packages')\n\n def touch_pipfile(self):\n \"\"\"Simply touches the Pipfile, for later use.\"\"\"\n with open('Pipfile', 'a'):\n os.utime('Pipfile', None)\n\n @property\n def pipfile_is_empty(self):\n if not self.pipfile_exists:\n return True\n\n if not len(self.read_pipfile()):\n return True\n\n return False\n\n def create_pipfile(self, python=None):\n \"\"\"Creates the Pipfile, filled with juicy defaults.\"\"\"\n from .patched.notpip._internal import ConfigOptionParser\n from .patched.notpip._internal.cmdoptions import make_option_group, index_group\n config_parser = ConfigOptionParser(name=self.name)\n config_parser.add_option_group(make_option_group(index_group, config_parser))\n install = config_parser.option_groups[0]\n indexes = ' '.join(install.get_option('--extra-index-url').default).lstrip('\\n').split('\\n')\n sources = [DEFAULT_SOURCE]\n for i, index in enumerate(indexes):\n if not index:\n continue\n\n source_name = 'pip_index_{}'.format(i)\n verify_ssl = index.startswith('https')\n sources.append(\n {\n u'url': index,\n u'verify_ssl': verify_ssl,\n u'name': source_name,\n }\n )\n\n data = {\n u'source': sources,\n # Default packages.\n u'packages': {},\n u'dev-packages': {},\n }\n # Default requires.\n required_python = python\n if not python:\n if self.virtualenv_location:\n required_python = self.which('python', self.virtualenv_location)\n else:\n required_python = self.which('python')\n version = python_version(required_python) or PIPENV_DEFAULT_PYTHON_VERSION\n if version and len(version) >= 3:\n data[u'requires'] = {\n 'python_version': version[: len('2.7')]\n }\n self.write_toml(data, 'Pipfile')\n\n def write_toml(self, data, path=None):\n \"\"\"Writes the given data structure out as TOML.\"\"\"\n if path is None:\n path = self.pipfile_location\n try:\n formatted_data = contoml.dumps(data).rstrip()\n except Exception:\n for section in ('packages', 'dev-packages'):\n for package in data.get(section, {}):\n # Convert things to inline tables \u2014 fancy :)\n if hasattr(data[section][package], 'keys'):\n _data = data[section][package]\n data[section][package] = toml._get_empty_inline_table(\n dict\n )\n data[section][package].update(_data)\n formatted_data = toml.dumps(data).rstrip()\n\n if Path(path).absolute() == Path(self.pipfile_location).absolute():\n newlines = self._pipfile_newlines\n else:\n newlines = DEFAULT_NEWLINES\n formatted_data = cleanup_toml(formatted_data)\n with io.open(path, 'w', newline=newlines) as f:\n f.write(formatted_data)\n # pipfile is mutated!\n self.clear_pipfile_cache()\n\n def write_lockfile(self, content):\n \"\"\"Write out the lockfile.\n \"\"\"\n newlines = self._lockfile_newlines\n s = simplejson.dumps( # Send Unicode in to guarentee Unicode out.\n content, indent=4, separators=(u',', u': '), sort_keys=True,\n )\n with atomic_open_for_write(self.lockfile_location, newline=newlines) as f:\n f.write(s)\n if not s.endswith(u'\\n'):\n f.write(u'\\n') # Write newline at end of document. GH #319.\n\n @property\n def pipfile_sources(self):\n if 'source' not in self.parsed_pipfile:\n return [DEFAULT_SOURCE]\n # We need to make copies of the source info so we don't\n # accidentally modify the cache. 
See #2100 where values are\n # written after the os.path.expandvars() call.\n return [\n {k: safe_expandvars(v) for k, v in source.items()}\n for source in self.parsed_pipfile['source']\n ]\n\n @property\n def sources(self):\n if self.lockfile_exists and hasattr(self.lockfile_content, 'keys'):\n meta_ = self.lockfile_content['_meta']\n sources_ = meta_.get('sources')\n if sources_:\n return sources_\n\n else:\n return self.pipfile_sources\n\n def find_source(self, source):\n \"\"\"given a source, find it.\n\n source can be a url or an index name.\n \"\"\"\n if not is_valid_url(source):\n try:\n source = self.get_source(name=source)\n except SourceNotFound:\n source = self.get_source(url=source)\n else:\n source = self.get_source(url=source)\n return source\n\n def get_source(self, name=None, url=None):\n def find_source(sources, name=None, url=None):\n source = None\n if name:\n source = [s for s in sources if s.get('name') == name]\n elif url:\n source = [s for s in sources if url.startswith(s.get('url'))]\n if source:\n return first(source)\n\n found_source = find_source(self.sources, name=name, url=url)\n if found_source:\n return found_source\n found_source = find_source(self.pipfile_sources, name=name, url=url)\n if found_source:\n return found_source\n raise SourceNotFound(name or url)\n\n def get_package_name_in_pipfile(self, package_name, dev=False):\n \"\"\"Get the equivalent package name in pipfile\"\"\"\n key = 'dev-packages' if dev else 'packages'\n section = self.parsed_pipfile.get(key, {})\n package_name = pep423_name(package_name)\n for name in section.keys():\n if pep423_name(name) == package_name:\n return name\n return None\n\n def remove_package_from_pipfile(self, package_name, dev=False):\n # Read and append Pipfile.\n name = self.get_package_name_in_pipfile(package_name, dev)\n key = 'dev-packages' if dev else 'packages'\n p = self.parsed_pipfile\n if name:\n del p[key][name]\n self.write_toml(p)\n\n def add_package_to_pipfile(self, package_name, dev=False):\n # Read and append Pipfile.\n p = self.parsed_pipfile\n # Don't re-capitalize file URLs or VCSs.\n package = Requirement.from_line(package_name.strip())\n _, converted = package.pipfile_entry\n key = 'dev-packages' if dev else 'packages'\n # Set empty group if it doesn't exist yet.\n if key not in p:\n p[key] = {}\n name = self.get_package_name_in_pipfile(package.name, dev)\n if name and is_star(converted):\n # Skip for wildcard version\n return\n # Add the package to the group.\n p[key][name or package.normalized_name] = converted\n # Write Pipfile.\n self.write_toml(p)\n\n def add_index_to_pipfile(self, index):\n \"\"\"Adds a given index to the Pipfile.\"\"\"\n # Read and append Pipfile.\n p = self.parsed_pipfile\n source = {'url': index, 'verify_ssl': True}\n # Add the package to the group.\n if 'source' not in p:\n p['source'] = [source]\n else:\n p['source'].append(source)\n # Write Pipfile.\n self.write_toml(p)\n\n def recase_pipfile(self):\n if self.ensure_proper_casing():\n self.write_toml(self.parsed_pipfile)\n\n def load_lockfile(self, expand_env_vars=True):\n with io.open(self.lockfile_location) as lock:\n j = json.load(lock)\n self._lockfile_newlines = preferred_newlines(lock)\n # lockfile is just a string\n if not j or not hasattr(j, 'keys'):\n return j\n\n if expand_env_vars:\n # Expand environment variables in Pipfile.lock at runtime.\n for i, source in enumerate(j['_meta']['sources'][:]):\n j['_meta']['sources'][i]['url'] = os.path.expandvars(j['_meta']['sources'][i]['url'])\n\n return j\n\n def 
get_lockfile_hash(self):\n if not os.path.exists(self.lockfile_location):\n return\n\n lockfile = self.load_lockfile(expand_env_vars=False)\n if '_meta' in lockfile and hasattr(lockfile, 'keys'):\n return lockfile['_meta'].get('hash', {}).get('sha256')\n # Lockfile exists but has no hash at all\n return ''\n\n def calculate_pipfile_hash(self):\n # Update the lockfile if it is out-of-date.\n p = pipfile.load(self.pipfile_location, inject_env=False)\n return p.hash\n\n def ensure_proper_casing(self):\n \"\"\"Ensures proper casing of Pipfile packages\"\"\"\n pfile = self.parsed_pipfile\n casing_changed = self.proper_case_section(pfile.get('packages', {}))\n casing_changed |= self.proper_case_section(pfile.get('dev-packages', {}))\n return casing_changed\n\n def proper_case_section(self, section):\n \"\"\"Verify proper casing is retrieved, when available, for each\n dependency in the section.\n \"\"\"\n # Casing for section.\n changed_values = False\n unknown_names = [\n k for k in section.keys() if k not in set(self.proper_names)\n ]\n # Replace each package with proper casing.\n for dep in unknown_names:\n try:\n # Get new casing for package name.\n new_casing = proper_case(dep)\n except IOError:\n # Unable to normalize package name.\n continue\n\n if new_casing != dep:\n changed_values = True\n self.register_proper_name(new_casing)\n # Replace old value with new value.\n old_value = section[dep]\n section[new_casing] = old_value\n del section[dep]\n # Return whether or not values have been changed.\n return changed_values\n", "path": "pipenv/project.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport io\nimport json\nimport os\nimport re\nimport sys\nimport base64\nimport hashlib\nimport contoml\nfrom first import first\nimport pipfile\nimport pipfile.api\nimport six\nimport toml\nimport json as simplejson\n\ntry:\n from pathlib import Path\nexcept ImportError:\n from pathlib2 import Path\n\nfrom .cmdparse import Script\nfrom .vendor.requirementslib import Requirement\nfrom .utils import (\n atomic_open_for_write,\n mkdir_p,\n pep423_name,\n proper_case,\n find_requirements,\n is_editable,\n is_vcs,\n cleanup_toml,\n is_installable_file,\n is_valid_url,\n normalize_drive,\n python_version,\n safe_expandvars,\n is_star,\n)\nfrom .environments import (\n PIPENV_MAX_DEPTH,\n PIPENV_PIPFILE,\n PIPENV_VENV_IN_PROJECT,\n PIPENV_VIRTUALENV,\n PIPENV_TEST_INDEX,\n PIPENV_PYTHON,\n PIPENV_DEFAULT_PYTHON_VERSION,\n)\n\n\ndef _normalized(p):\n if p is None:\n return None\n loc = Path(p)\n if loc.is_absolute():\n return normalize_drive(str(loc))\n else:\n try:\n loc = loc.resolve()\n except OSError:\n loc = loc.absolute()\n return normalize_drive(str(loc))\n\n\nDEFAULT_NEWLINES = u'\\n'\n\n\ndef preferred_newlines(f):\n if isinstance(f.newlines, six.text_type):\n return f.newlines\n\n return DEFAULT_NEWLINES\n\n\nif PIPENV_PIPFILE:\n if not os.path.isfile(PIPENV_PIPFILE):\n raise RuntimeError('Given PIPENV_PIPFILE is not found!')\n\n else:\n PIPENV_PIPFILE = _normalized(PIPENV_PIPFILE)\n# (path, file contents) => TOMLFile\n# keeps track of pipfiles that we've seen so we do not need to re-parse 'em\n_pipfile_cache = {}\n\n\nif PIPENV_TEST_INDEX:\n DEFAULT_SOURCE = {\n u'url': PIPENV_TEST_INDEX,\n u'verify_ssl': True,\n u'name': u'custom',\n }\nelse:\n DEFAULT_SOURCE = {\n u'url': u'https://pypi.org/simple',\n u'verify_ssl': True,\n u'name': u'pypi',\n }\n\npipfile.api.DEFAULT_SOURCE = DEFAULT_SOURCE\n\n\nclass SourceNotFound(KeyError):\n pass\n\n\nclass Project(object):\n \"\"\"docstring 
for Project\"\"\"\n\n def __init__(self, which=None, python_version=None, chdir=True):\n super(Project, self).__init__()\n self._name = None\n self._virtualenv_location = None\n self._download_location = None\n self._proper_names_db_path = None\n self._pipfile_location = None\n self._pipfile_newlines = DEFAULT_NEWLINES\n self._lockfile_newlines = DEFAULT_NEWLINES\n self._requirements_location = None\n self._original_dir = os.path.abspath(os.curdir)\n self.which = which\n self.python_version = python_version\n # Hack to skip this during pipenv run, or -r.\n if ('run' not in sys.argv) and chdir:\n try:\n os.chdir(self.project_directory)\n except (TypeError, AttributeError):\n pass\n\n def path_to(self, p):\n \"\"\"Returns the absolute path to a given relative path.\"\"\"\n if os.path.isabs(p):\n return p\n\n return os.sep.join([self._original_dir, p])\n\n def _build_package_list(self, package_section):\n \"\"\"Returns a list of packages for pip-tools to consume.\"\"\"\n ps = {}\n # TODO: Separate the logic for showing packages from the filters for supplying pip-tools\n for k, v in self.parsed_pipfile.get(package_section, {}).items():\n # Skip editable VCS deps.\n if hasattr(v, 'keys'):\n # When a vcs url is gven without editable it only appears as a key\n # Eliminate any vcs, path, or url entries which are not editable\n # Since pip-tools can't do deep resolution on them, even setuptools-installable ones\n if (\n is_vcs(v) or\n is_vcs(k) or\n (is_installable_file(k) or is_installable_file(v)) or\n any(\n (\n prefix in v and\n (\n os.path.isfile(v[prefix]) or\n is_valid_url(v[prefix])\n )\n )\n for prefix in ['path', 'file']\n )\n ):\n # If they are editable, do resolve them\n if 'editable' not in v:\n # allow wheels to be passed through\n if not (hasattr(v, 'keys') and v.get('path', v.get('file', '')).endswith('.whl')):\n continue\n ps.update({k: v})\n\n else:\n ps.update({k: v})\n else:\n ps.update({k: v})\n else:\n # Since these entries have no attributes we know they are not editable\n # So we can safely exclude things that need to be editable in order to be resolved\n # First exclude anything that is a vcs entry either in the key or value\n if not (\n any(is_vcs(i) for i in [k, v]) or\n # Then exclude any installable files that are not directories\n # Because pip-tools can resolve setup.py for example\n any(is_installable_file(i) for i in [k, v]) or\n # Then exclude any URLs because they need to be editable also\n # Things that are excluded can only be 'shallow resolved'\n any(is_valid_url(i) for i in [k, v])\n ):\n ps.update({k: v})\n return ps\n\n @property\n def name(self):\n if self._name is None:\n self._name = self.pipfile_location.split(os.sep)[-2]\n return self._name\n\n @property\n def pipfile_exists(self):\n return bool(self.pipfile_location)\n\n @property\n def required_python_version(self):\n if self.pipfile_exists:\n required = self.parsed_pipfile.get('requires', {}).get(\n 'python_full_version'\n )\n if not required:\n required = self.parsed_pipfile.get('requires', {}).get(\n 'python_version'\n )\n if required != \"*\":\n return required\n\n @property\n def project_directory(self):\n if self.pipfile_location is not None:\n return os.path.abspath(\n os.path.join(self.pipfile_location, os.pardir)\n )\n\n else:\n return None\n\n @property\n def requirements_exists(self):\n return bool(self.requirements_location)\n\n def is_venv_in_project(self):\n return PIPENV_VENV_IN_PROJECT or (\n self.project_directory and\n os.path.exists(os.path.join(self.project_directory, '.venv'))\n 
)\n\n @property\n def virtualenv_exists(self):\n # TODO: Decouple project from existence of Pipfile.\n if self.pipfile_exists and os.path.exists(self.virtualenv_location):\n if os.name == 'nt':\n extra = ['Scripts', 'activate.bat']\n else:\n extra = ['bin', 'activate']\n return os.path.isfile(\n os.sep.join([self.virtualenv_location] + extra)\n )\n\n return False\n\n @classmethod\n def _get_virtualenv_location(cls, name):\n from .patched.pew.pew import get_workon_home\n venv = get_workon_home() / name\n if not venv.exists():\n return ''\n return '{0}'.format(venv)\n\n @classmethod\n def _sanitize(cls, name):\n # Replace dangerous characters into '_'. The length of the sanitized\n # project name is limited as 42 because of the limit of linux kernel\n #\n # 42 = 127 - len('/home//.local/share/virtualenvs//bin/python2') - 32 - len('-HASHHASH')\n #\n # 127 : BINPRM_BUF_SIZE - 1\n # 32 : Maximum length of username\n #\n # References:\n # https://www.gnu.org/software/bash/manual/html_node/Double-Quotes.html\n # http://www.tldp.org/LDP/abs/html/special-chars.html#FIELDREF\n # https://github.com/torvalds/linux/blob/2bfe01ef/include/uapi/linux/binfmts.h#L18\n return re.sub(r'[ $`!*@\"\\\\\\r\\n\\t]', '_', name)[0:42]\n\n def _get_virtualenv_hash(self, name):\n \"\"\"Get the name of the virtualenv adjusted for windows if needed\n\n Returns (name, encoded_hash)\n \"\"\"\n def get_name(name, location):\n name = self._sanitize(name)\n hash = hashlib.sha256(location.encode()).digest()[:6]\n encoded_hash = base64.urlsafe_b64encode(hash).decode()\n return name, encoded_hash[:8]\n\n clean_name, encoded_hash = get_name(name, self.pipfile_location)\n venv_name = '{0}-{1}'.format(clean_name, encoded_hash)\n\n # This should work most of the time, for non-WIndows, in-project venv,\n # or \"proper\" path casing (on Windows).\n if (os.name != 'nt' or\n self.is_venv_in_project() or\n self._get_virtualenv_location(venv_name)):\n return clean_name, encoded_hash\n\n # Check for different capitalization of the same project.\n from .patched.pew.pew import lsenvs\n for env in lsenvs():\n try:\n env_name, hash_ = env.rsplit('-', 1)\n except ValueError:\n continue\n if len(hash_) != 8 or env_name.lower() != name.lower():\n continue\n return get_name(env_name, self.pipfile_location.replace(name, env_name))\n\n # Use the default if no matching env exists.\n return clean_name, encoded_hash\n\n @property\n def virtualenv_name(self):\n sanitized, encoded_hash = self._get_virtualenv_hash(self.name)\n suffix = '-{0}'.format(PIPENV_PYTHON) if PIPENV_PYTHON else ''\n # If the pipfile was located at '/home/user/MY_PROJECT/Pipfile',\n # the name of its virtualenv will be 'my-project-wyUfYPqE'\n return sanitized + '-' + encoded_hash + suffix\n\n @property\n def virtualenv_location(self):\n # if VIRTUAL_ENV is set, use that.\n if PIPENV_VIRTUALENV:\n return PIPENV_VIRTUALENV\n\n # Use cached version, if available.\n if self._virtualenv_location:\n return self._virtualenv_location\n\n # Default mode.\n if not self.is_venv_in_project():\n loc = self._get_virtualenv_location(self.virtualenv_name)\n # The user wants the virtualenv in the project.\n else:\n loc = os.sep.join(\n self.pipfile_location.split(os.sep)[:-1] + ['.venv']\n )\n self._virtualenv_location = loc\n return loc\n\n @property\n def virtualenv_src_location(self):\n loc = os.sep.join([self.virtualenv_location, 'src'])\n mkdir_p(loc)\n return loc\n\n @property\n def download_location(self):\n if self._download_location is None:\n loc = os.sep.join([self.virtualenv_location, 
'downloads'])\n self._download_location = loc\n # Create the directory, if it doesn't exist.\n mkdir_p(self._download_location)\n return self._download_location\n\n @property\n def proper_names_db_path(self):\n if self._proper_names_db_path is None:\n self._proper_names_db_path = Path(\n self.virtualenv_location,\n 'pipenv-proper-names.txt',\n )\n self._proper_names_db_path.touch() # Ensure the file exists.\n return self._proper_names_db_path\n\n @property\n def proper_names(self):\n with self.proper_names_db_path.open() as f:\n return f.read().splitlines()\n\n def register_proper_name(self, name):\n \"\"\"Registers a proper name to the database.\"\"\"\n with self.proper_names_db_path.open('a') as f:\n f.write(u'{0}\\n'.format(name))\n\n @property\n def pipfile_location(self):\n if PIPENV_PIPFILE:\n return PIPENV_PIPFILE\n\n if self._pipfile_location is None:\n try:\n loc = pipfile.Pipfile.find(max_depth=PIPENV_MAX_DEPTH)\n except RuntimeError:\n loc = None\n self._pipfile_location = _normalized(loc)\n return self._pipfile_location\n\n @property\n def requirements_location(self):\n if self._requirements_location is None:\n try:\n loc = find_requirements(max_depth=PIPENV_MAX_DEPTH)\n except RuntimeError:\n loc = None\n self._requirements_location = loc\n return self._requirements_location\n\n @property\n def parsed_pipfile(self):\n \"\"\"Parse Pipfile into a TOMLFile and cache it\n\n (call clear_pipfile_cache() afterwards if mutating)\"\"\"\n contents = self.read_pipfile()\n # use full contents to get around str/bytes 2/3 issues\n cache_key = (self.pipfile_location, contents)\n if cache_key not in _pipfile_cache:\n parsed = self._parse_pipfile(contents)\n _pipfile_cache[cache_key] = parsed\n return _pipfile_cache[cache_key]\n\n def read_pipfile(self):\n # Open the pipfile, read it into memory.\n with io.open(self.pipfile_location) as f:\n contents = f.read()\n self._pipfile_newlines = preferred_newlines(f)\n\n return contents\n\n @property\n def pased_pure_pipfile(self):\n contents = self.read_pipfile()\n\n return self._parse_pipfile(contents)\n\n def clear_pipfile_cache(self):\n \"\"\"Clear pipfile cache (e.g., so we can mutate parsed pipfile)\"\"\"\n _pipfile_cache.clear()\n\n def _parse_pipfile(self, contents):\n # If any outline tables are present...\n if ('[packages.' in contents) or ('[dev-packages.' 
in contents):\n data = toml.loads(contents)\n # Convert all outline tables to inline tables.\n for section in ('packages', 'dev-packages'):\n for package in data.get(section, {}):\n # Convert things to inline tables \u2014 fancy :)\n if hasattr(data[section][package], 'keys'):\n _data = data[section][package]\n data[section][package] = toml._get_empty_inline_table(\n dict\n )\n data[section][package].update(_data)\n # We lose comments here, but it's for the best.)\n try:\n return contoml.loads(toml.dumps(data, preserve=True))\n\n except RuntimeError:\n return toml.loads(toml.dumps(data, preserve=True))\n\n else:\n # Fallback to toml parser, for large files.\n try:\n return contoml.loads(contents)\n\n except Exception:\n return toml.loads(contents)\n\n @property\n def settings(self):\n \"\"\"A dictionary of the settings added to the Pipfile.\"\"\"\n return self.parsed_pipfile.get('pipenv', {})\n\n def has_script(self, name):\n try:\n return name in self.parsed_pipfile['scripts']\n except KeyError:\n return False\n\n def build_script(self, name, extra_args=None):\n try:\n script = Script.parse(self.parsed_pipfile['scripts'][name])\n except KeyError:\n script = Script(name)\n if extra_args:\n script.extend(extra_args)\n return script\n\n def update_settings(self, d):\n settings = self.settings\n changed = False\n for new in d:\n if new not in settings:\n settings[new] = d[new]\n changed = True\n if changed:\n p = self.parsed_pipfile\n p['pipenv'] = settings\n # Write the changes to disk.\n self.write_toml(p)\n\n @property\n def _lockfile(self):\n \"\"\"Pipfile.lock divided by PyPI and external dependencies.\"\"\"\n pfile = pipfile.load(self.pipfile_location, inject_env=False)\n lockfile = json.loads(pfile.lock())\n for section in ('default', 'develop'):\n lock_section = lockfile.get(section, {})\n for key in list(lock_section.keys()):\n norm_key = pep423_name(key)\n lockfile[section][norm_key] = lock_section.pop(key)\n return lockfile\n\n @property\n def lockfile_location(self):\n return '{0}.lock'.format(self.pipfile_location)\n\n @property\n def lockfile_exists(self):\n return os.path.isfile(self.lockfile_location)\n\n @property\n def lockfile_content(self):\n return self.load_lockfile()\n\n def _get_editable_packages(self, dev=False):\n section = 'dev-packages' if dev else 'packages'\n packages = {\n k: v\n for k, v in self.parsed_pipfile.get(section, {}).items()\n if is_editable(v)\n }\n return packages\n\n def _get_vcs_packages(self, dev=False):\n section = 'dev-packages' if dev else 'packages'\n packages = {\n k: v\n for k, v in self.parsed_pipfile.get(section, {}).items()\n if is_vcs(v) or is_vcs(k)\n }\n return packages or {}\n\n @property\n def editable_packages(self):\n return self._get_editable_packages(dev=False)\n\n @property\n def editable_dev_packages(self):\n return self._get_editable_packages(dev=True)\n\n @property\n def vcs_packages(self):\n \"\"\"Returns a list of VCS packages, for not pip-tools to consume.\"\"\"\n return self._get_vcs_packages(dev=False)\n\n @property\n def vcs_dev_packages(self):\n \"\"\"Returns a list of VCS packages, for not pip-tools to consume.\"\"\"\n return self._get_vcs_packages(dev=True)\n\n @property\n def all_packages(self):\n \"\"\"Returns a list of all packages.\"\"\"\n p = dict(self.parsed_pipfile.get('dev-packages', {}))\n p.update(self.parsed_pipfile.get('packages', {}))\n return p\n\n @property\n def packages(self):\n \"\"\"Returns a list of packages, for pip-tools to consume.\"\"\"\n return self._build_package_list('packages')\n\n 
@property\n def dev_packages(self):\n \"\"\"Returns a list of dev-packages, for pip-tools to consume.\"\"\"\n return self._build_package_list('dev-packages')\n\n def touch_pipfile(self):\n \"\"\"Simply touches the Pipfile, for later use.\"\"\"\n with open('Pipfile', 'a'):\n os.utime('Pipfile', None)\n\n @property\n def pipfile_is_empty(self):\n if not self.pipfile_exists:\n return True\n\n if not len(self.read_pipfile()):\n return True\n\n return False\n\n def create_pipfile(self, python=None):\n \"\"\"Creates the Pipfile, filled with juicy defaults.\"\"\"\n from .patched.notpip._internal import ConfigOptionParser\n from .patched.notpip._internal.cmdoptions import make_option_group, index_group\n config_parser = ConfigOptionParser(name=self.name)\n config_parser.add_option_group(make_option_group(index_group, config_parser))\n install = config_parser.option_groups[0]\n indexes = ' '.join(install.get_option('--extra-index-url').default).lstrip('\\n').split('\\n')\n sources = [DEFAULT_SOURCE]\n for i, index in enumerate(indexes):\n if not index:\n continue\n\n source_name = 'pip_index_{}'.format(i)\n verify_ssl = index.startswith('https')\n sources.append(\n {\n u'url': index,\n u'verify_ssl': verify_ssl,\n u'name': source_name,\n }\n )\n\n data = {\n u'source': sources,\n # Default packages.\n u'packages': {},\n u'dev-packages': {},\n }\n # Default requires.\n required_python = python\n if not python:\n if self.virtualenv_location:\n required_python = self.which('python', self.virtualenv_location)\n else:\n required_python = self.which('python')\n version = python_version(required_python) or PIPENV_DEFAULT_PYTHON_VERSION\n if version and len(version) >= 3:\n data[u'requires'] = {\n 'python_version': version[: len('2.7')]\n }\n self.write_toml(data, 'Pipfile')\n\n def write_toml(self, data, path=None):\n \"\"\"Writes the given data structure out as TOML.\"\"\"\n if path is None:\n path = self.pipfile_location\n try:\n formatted_data = contoml.dumps(data).rstrip()\n except Exception:\n for section in ('packages', 'dev-packages'):\n for package in data.get(section, {}):\n # Convert things to inline tables \u2014 fancy :)\n if hasattr(data[section][package], 'keys'):\n _data = data[section][package]\n data[section][package] = toml._get_empty_inline_table(\n dict\n )\n data[section][package].update(_data)\n formatted_data = toml.dumps(data).rstrip()\n\n if Path(path).absolute() == Path(self.pipfile_location).absolute():\n newlines = self._pipfile_newlines\n else:\n newlines = DEFAULT_NEWLINES\n formatted_data = cleanup_toml(formatted_data)\n with io.open(path, 'w', newline=newlines) as f:\n f.write(formatted_data)\n # pipfile is mutated!\n self.clear_pipfile_cache()\n\n def write_lockfile(self, content):\n \"\"\"Write out the lockfile.\n \"\"\"\n newlines = self._lockfile_newlines\n s = simplejson.dumps( # Send Unicode in to guarentee Unicode out.\n content, indent=4, separators=(u',', u': '), sort_keys=True,\n )\n with atomic_open_for_write(self.lockfile_location, newline=newlines) as f:\n f.write(s)\n if not s.endswith(u'\\n'):\n f.write(u'\\n') # Write newline at end of document. GH #319.\n\n @property\n def pipfile_sources(self):\n if 'source' not in self.parsed_pipfile:\n return [DEFAULT_SOURCE]\n # We need to make copies of the source info so we don't\n # accidentally modify the cache. 
See #2100 where values are\n # written after the os.path.expandvars() call.\n return [\n {k: safe_expandvars(v) for k, v in source.items()}\n for source in self.parsed_pipfile['source']\n ]\n\n @property\n def sources(self):\n if self.lockfile_exists and hasattr(self.lockfile_content, 'keys'):\n meta_ = self.lockfile_content['_meta']\n sources_ = meta_.get('sources')\n if sources_:\n return sources_\n\n else:\n return self.pipfile_sources\n\n def find_source(self, source):\n \"\"\"given a source, find it.\n\n source can be a url or an index name.\n \"\"\"\n if not is_valid_url(source):\n try:\n source = self.get_source(name=source)\n except SourceNotFound:\n source = self.get_source(url=source)\n else:\n source = self.get_source(url=source)\n return source\n\n def get_source(self, name=None, url=None):\n def find_source(sources, name=None, url=None):\n source = None\n if name:\n source = [s for s in sources if s.get('name') == name]\n elif url:\n source = [s for s in sources if url.startswith(s.get('url'))]\n if source:\n return first(source)\n\n found_source = find_source(self.sources, name=name, url=url)\n if found_source:\n return found_source\n found_source = find_source(self.pipfile_sources, name=name, url=url)\n if found_source:\n return found_source\n raise SourceNotFound(name or url)\n\n def get_package_name_in_pipfile(self, package_name, dev=False):\n \"\"\"Get the equivalent package name in pipfile\"\"\"\n key = 'dev-packages' if dev else 'packages'\n section = self.parsed_pipfile.get(key, {})\n package_name = pep423_name(package_name)\n for name in section.keys():\n if pep423_name(name) == package_name:\n return name\n return None\n\n def remove_package_from_pipfile(self, package_name, dev=False):\n # Read and append Pipfile.\n name = self.get_package_name_in_pipfile(package_name, dev)\n key = 'dev-packages' if dev else 'packages'\n p = self.parsed_pipfile\n if name:\n del p[key][name]\n self.write_toml(p)\n\n def add_package_to_pipfile(self, package_name, dev=False):\n # Read and append Pipfile.\n p = self.parsed_pipfile\n # Don't re-capitalize file URLs or VCSs.\n package = Requirement.from_line(package_name.strip())\n _, converted = package.pipfile_entry\n key = 'dev-packages' if dev else 'packages'\n # Set empty group if it doesn't exist yet.\n if key not in p:\n p[key] = {}\n name = self.get_package_name_in_pipfile(package.name, dev)\n if name and is_star(converted):\n # Skip for wildcard version\n return\n # Add the package to the group.\n p[key][name or package.normalized_name] = converted\n # Write Pipfile.\n self.write_toml(p)\n\n def add_index_to_pipfile(self, index):\n \"\"\"Adds a given index to the Pipfile.\"\"\"\n # Read and append Pipfile.\n p = self.parsed_pipfile\n source = {'url': index, 'verify_ssl': True}\n # Add the package to the group.\n if 'source' not in p:\n p['source'] = [source]\n else:\n p['source'].append(source)\n # Write Pipfile.\n self.write_toml(p)\n\n def recase_pipfile(self):\n if self.ensure_proper_casing():\n self.write_toml(self.parsed_pipfile)\n\n def load_lockfile(self, expand_env_vars=True):\n with io.open(self.lockfile_location) as lock:\n j = json.load(lock)\n self._lockfile_newlines = preferred_newlines(lock)\n # lockfile is just a string\n if not j or not hasattr(j, 'keys'):\n return j\n\n if expand_env_vars:\n # Expand environment variables in Pipfile.lock at runtime.\n for i, source in enumerate(j['_meta']['sources'][:]):\n j['_meta']['sources'][i]['url'] = os.path.expandvars(j['_meta']['sources'][i]['url'])\n\n return j\n\n def 
get_lockfile_hash(self):\n if not os.path.exists(self.lockfile_location):\n return\n\n lockfile = self.load_lockfile(expand_env_vars=False)\n if '_meta' in lockfile and hasattr(lockfile, 'keys'):\n return lockfile['_meta'].get('hash', {}).get('sha256')\n # Lockfile exists but has no hash at all\n return ''\n\n def calculate_pipfile_hash(self):\n # Update the lockfile if it is out-of-date.\n p = pipfile.load(self.pipfile_location, inject_env=False)\n return p.hash\n\n def ensure_proper_casing(self):\n \"\"\"Ensures proper casing of Pipfile packages\"\"\"\n pfile = self.parsed_pipfile\n casing_changed = self.proper_case_section(pfile.get('packages', {}))\n casing_changed |= self.proper_case_section(pfile.get('dev-packages', {}))\n return casing_changed\n\n def proper_case_section(self, section):\n \"\"\"Verify proper casing is retrieved, when available, for each\n dependency in the section.\n \"\"\"\n # Casing for section.\n changed_values = False\n unknown_names = [\n k for k in section.keys() if k not in set(self.proper_names)\n ]\n # Replace each package with proper casing.\n for dep in unknown_names:\n try:\n # Get new casing for package name.\n new_casing = proper_case(dep)\n except IOError:\n # Unable to normalize package name.\n continue\n\n if new_casing != dep:\n changed_values = True\n self.register_proper_name(new_casing)\n # Replace old value with new value.\n old_value = section[dep]\n section[new_casing] = old_value\n del section[dep]\n # Return whether or not values have been changed.\n return changed_values\n", "path": "pipenv/project.py"}]} |
gh_patches_debug_1037 | rasdani/github-patches | git_diff | airctic__icevision-500 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add tutorial with hard negative samples
## 📓 Documentation Update
"how to use an image as background annotation" is a common question. We can provide a tutorial showing how to do that
### Racoon and dogs
If you train a model on the racoon dataset and show the model a picture of a dog it will classify it as a racoon. We can add images of dogs to the dataset (without any annotations) and show how the difference of model performance in both scenarios.
--- END ISSUE ---
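The request boils down to showing that a hard-negative sample is just an image with an empty annotation list. A minimal, library-agnostic sketch of that idea (the record layout and file names below are illustrative assumptions, not icevision's API):

```python
# Illustrative only: a tiny, library-agnostic record layout, not icevision's API.
# A "hard negative" / background image is simply an entry with no annotations.

annotated_sample = {
    "image": "images/raccoon_001.jpg",
    "boxes": [[48, 30, 220, 180]],   # one raccoon box in xyxy format
    "labels": ["raccoon"],
}

background_sample = {
    "image": "images/dog_017.jpg",   # a dog photo added as background only
    "boxes": [],                     # deliberately empty
    "labels": [],
}

dataset = [annotated_sample, background_sample]

for sample in dataset:
    kind = "annotated" if sample["boxes"] else "background"
    print(sample["image"], "->", kind, "-", len(sample["boxes"]), "boxes")
```

During training, such background-only entries contribute only negative (background) signal, which is what teaches the model that a dog is not a raccoon.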
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `icevision/models/base_show_results.py`
Content:
```
1 __all__ = ["base_show_results"]
2
3 from icevision.imports import *
4 from icevision.utils import *
5 from icevision.core import *
6 from icevision.visualize import *
7 from icevision.data import *
8
9
10 def base_show_results(
11 predict_fn: callable,
12 build_infer_batch_fn: callable,
13 model: nn.Module,
14 dataset: Dataset,
15 class_map: Optional[ClassMap] = None,
16 num_samples: int = 6,
17 ncols: int = 3,
18 denormalize_fn: Optional[callable] = denormalize_imagenet,
19 show: bool = True,
20 ) -> None:
21 samples = [dataset[i] for i in range(num_samples)]
22 batch, samples = build_infer_batch_fn(samples)
23 preds = predict_fn(model, batch)
24
25 imgs = [sample["img"] for sample in samples]
26 show_preds(
27 imgs,
28 preds,
29 class_map=class_map,
30 denormalize_fn=denormalize_fn,
31 ncols=ncols,
32 show=show,
33 )
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/icevision/models/base_show_results.py b/icevision/models/base_show_results.py
--- a/icevision/models/base_show_results.py
+++ b/icevision/models/base_show_results.py
@@ -18,7 +18,7 @@
denormalize_fn: Optional[callable] = denormalize_imagenet,
show: bool = True,
) -> None:
- samples = [dataset[i] for i in range(num_samples)]
+ samples = random.choices(dataset, k=num_samples)
batch, samples = build_infer_batch_fn(samples)
preds = predict_fn(model, batch)
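The one-line fix above matters for hard negatives because `base_show_results` previously always displayed the first `num_samples` items of the dataset. A small, self-contained sketch of the difference, with a plain list standing in for the `Dataset`:

```python
import random

# Stand-in for a Dataset where hard negatives (dog images) were appended last.
dataset = [f"raccoon_{i:03d}.jpg" for i in range(100)]
dataset += [f"dog_{i:03d}.jpg" for i in range(20)]

num_samples = 6

# Old behaviour: always the same first six items (all raccoons).
first_n = [dataset[i] for i in range(num_samples)]

# Patched behaviour: random draw, so appended background images
# can show up in the results grid as well.
random_n = random.choices(dataset, k=num_samples)

print("first_n :", first_n)
print("random_n:", random_n)
```

Note that `random.choices` samples with replacement, so the same item can appear more than once in the displayed grid.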
| {"golden_diff": "diff --git a/icevision/models/base_show_results.py b/icevision/models/base_show_results.py\n--- a/icevision/models/base_show_results.py\n+++ b/icevision/models/base_show_results.py\n@@ -18,7 +18,7 @@\n denormalize_fn: Optional[callable] = denormalize_imagenet,\n show: bool = True,\n ) -> None:\n- samples = [dataset[i] for i in range(num_samples)]\n+ samples = random.choices(dataset, k=num_samples)\n batch, samples = build_infer_batch_fn(samples)\n preds = predict_fn(model, batch)\n", "issue": "Add tutorial with hard negative samples\n## \ud83d\udcd3 Documentation Update\r\n\"how to use an image as background annotation\" is a common question. We can provide a tutorial showing how to do that\r\n\r\n### Racoon and dogs\r\nIf you train a model on the racoon dataset and show the model a picture of a dog it will classify it as a racoon. We can add images of dogs to the dataset (without any annotations) and show how the difference of model performance in both scenarios.\n", "before_files": [{"content": "__all__ = [\"base_show_results\"]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.core import *\nfrom icevision.visualize import *\nfrom icevision.data import *\n\n\ndef base_show_results(\n predict_fn: callable,\n build_infer_batch_fn: callable,\n model: nn.Module,\n dataset: Dataset,\n class_map: Optional[ClassMap] = None,\n num_samples: int = 6,\n ncols: int = 3,\n denormalize_fn: Optional[callable] = denormalize_imagenet,\n show: bool = True,\n) -> None:\n samples = [dataset[i] for i in range(num_samples)]\n batch, samples = build_infer_batch_fn(samples)\n preds = predict_fn(model, batch)\n\n imgs = [sample[\"img\"] for sample in samples]\n show_preds(\n imgs,\n preds,\n class_map=class_map,\n denormalize_fn=denormalize_fn,\n ncols=ncols,\n show=show,\n )\n", "path": "icevision/models/base_show_results.py"}], "after_files": [{"content": "__all__ = [\"base_show_results\"]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.core import *\nfrom icevision.visualize import *\nfrom icevision.data import *\n\n\ndef base_show_results(\n predict_fn: callable,\n build_infer_batch_fn: callable,\n model: nn.Module,\n dataset: Dataset,\n class_map: Optional[ClassMap] = None,\n num_samples: int = 6,\n ncols: int = 3,\n denormalize_fn: Optional[callable] = denormalize_imagenet,\n show: bool = True,\n) -> None:\n samples = random.choices(dataset, k=num_samples)\n batch, samples = build_infer_batch_fn(samples)\n preds = predict_fn(model, batch)\n\n imgs = [sample[\"img\"] for sample in samples]\n show_preds(\n imgs,\n preds,\n class_map=class_map,\n denormalize_fn=denormalize_fn,\n ncols=ncols,\n show=show,\n )\n", "path": "icevision/models/base_show_results.py"}]} |
gh_patches_debug_1038 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1162 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reader study completed message is visible when study is not completed

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/reader_studies/templatetags/get_ground_truth.py`
Content:
```
1 from django import template
2
3 register = template.Library()
4
5
6 @register.simple_tag
7 def get_ground_truth(obj, image, question):
8 """Get the auth token for the user."""
9 ground_truths = obj.statistics["ground_truths"]
10 return ground_truths[image][question]
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/reader_studies/templatetags/get_ground_truth.py b/app/grandchallenge/reader_studies/templatetags/get_ground_truth.py
--- a/app/grandchallenge/reader_studies/templatetags/get_ground_truth.py
+++ b/app/grandchallenge/reader_studies/templatetags/get_ground_truth.py
@@ -5,6 +5,7 @@
@register.simple_tag
def get_ground_truth(obj, image, question):
- """Get the auth token for the user."""
+ """Get the ground truth value for the image/question combination in reader
+ study obj."""
ground_truths = obj.statistics["ground_truths"]
return ground_truths[image][question]
| {"golden_diff": "diff --git a/app/grandchallenge/reader_studies/templatetags/get_ground_truth.py b/app/grandchallenge/reader_studies/templatetags/get_ground_truth.py\n--- a/app/grandchallenge/reader_studies/templatetags/get_ground_truth.py\n+++ b/app/grandchallenge/reader_studies/templatetags/get_ground_truth.py\n@@ -5,6 +5,7 @@\n \n @register.simple_tag\n def get_ground_truth(obj, image, question):\n- \"\"\"Get the auth token for the user.\"\"\"\n+ \"\"\"Get the ground truth value for the image/question combination in reader\n+ study obj.\"\"\"\n ground_truths = obj.statistics[\"ground_truths\"]\n return ground_truths[image][question]\n", "issue": "Reader study completed message is visible when study is not completed\n\r\n\n", "before_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag\ndef get_ground_truth(obj, image, question):\n \"\"\"Get the auth token for the user.\"\"\"\n ground_truths = obj.statistics[\"ground_truths\"]\n return ground_truths[image][question]\n", "path": "app/grandchallenge/reader_studies/templatetags/get_ground_truth.py"}], "after_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag\ndef get_ground_truth(obj, image, question):\n \"\"\"Get the ground truth value for the image/question combination in reader\n study obj.\"\"\"\n ground_truths = obj.statistics[\"ground_truths\"]\n return ground_truths[image][question]\n", "path": "app/grandchallenge/reader_studies/templatetags/get_ground_truth.py"}]} |
gh_patches_debug_1039 | rasdani/github-patches | git_diff | conan-io__conan-7509 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Removing credentials from 'url' for SCM doesn't play well with SSH git repository hosting
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
conan 1.12+; the previously used version was 1.11.
Once upon a time there was a pull request #4207 that changed URLs like `ssh://[email protected]:port/GROUP/repo.git` to `ssh://a.b.c.d:port/GROUP/repo.git` (i.e. without the username) for the SCM attribute.
Recently I updated to conan 1.18.5 and ran into a problem.
I'm pulling sources from a GitLab instance via SSH (I can't switch it to HTTP right now), and GitLab (and, I think, other git hosting services) doesn't accept SSH connections without a username.
So what options do I have now?
- Every user of the package has to execute `git config --global url.ssh://[email protected]:port.insteadOf ssh://a.b.c.d:port`, i.e. add a config entry to rewrite the URL. It doesn't scale well.
- Every package developer must hardcode the username 'git' in the scm attribute, i.e.
```
scm = {
"type": "git",
"username": "git",
"url": "auto",
"revision": "auto",
}
```
It doesn't scale either, and what if someone wants to use HTTPS and their username is not `git`?
For me, as a user of conan, this looks like a regression.
Could you suggest a scalable workaround or fix this issue?
--- END ISSUE ---
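To make the reported behaviour concrete, here is a standalone sketch of the same `urlparse`-based stripping that conan's SCM helper performs (the relevant code appears in the files below); this is an illustrative rewrite, not the library function itself:

```python
from urllib.parse import urlparse

def strip_credentials(url):
    # Rebuild the netloc from hostname[:port] only, as the pre-fix code does,
    # which drops any username - including the "git" user SSH hosting requires.
    parsed = urlparse(url)
    netloc = parsed.hostname
    if parsed.port:
        netloc += ":{}".format(parsed.port)
    return parsed._replace(netloc=netloc).geturl()

print(strip_credentials("ssh://[email protected]:2222/GROUP/repo.git"))
# ssh://10.20.30.40:2222/GROUP/repo.git   <- GitLab rejects this over SSH

print(strip_credentials("https://user:[email protected]/GROUP/repo.git"))
# https://example.com/GROUP/repo.git      <- the intended credential scrubbing
```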
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/tools/scm.py`
Content:
```
1 import os
2 import platform
3 import re
4 import xml.etree.ElementTree as ET
5 from subprocess import CalledProcessError
6
7 from six.moves.urllib.parse import quote_plus, unquote, urlparse
8
9 from conans.client.tools.env import environment_append, no_op
10 from conans.client.tools.files import chdir
11 from conans.errors import ConanException
12 from conans.model.version import Version
13 from conans.util.files import decode_text, to_file_bytes, walk, mkdir
14 from conans.util.runners import check_output_runner, version_runner, muted_runner, input_runner, \
15 pyinstaller_bundle_env_cleaned
16
17
18 def _check_repo(cmd, folder):
19 msg = "Not a valid '{0}' repository or '{0}' not found.".format(cmd[0])
20 try:
21 ret = muted_runner(cmd, folder=folder)
22 except Exception:
23 raise ConanException(msg)
24 else:
25 if bool(ret):
26 raise ConanException(msg)
27
28
29 class SCMBase(object):
30 cmd_command = None
31
32 @classmethod
33 def get_version(cls):
34 try:
35 out = version_runner([cls.cmd_command, "--version"])
36 version_line = decode_text(out).split('\n', 1)[0]
37 version_str = version_line.split(' ', 3)[2]
38 return Version(version_str)
39 except Exception as e:
40 raise ConanException("Error retrieving {} version: '{}'".format(cls.cmd_command, e))
41
42 def __init__(self, folder=None, verify_ssl=True, username=None, password=None,
43 force_english=True, runner=None, output=None):
44 self.folder = folder or os.getcwd()
45 if not os.path.exists(self.folder):
46 os.makedirs(self.folder)
47 self._verify_ssl = verify_ssl
48 self._force_eng = force_english
49 self._username = username
50 self._password = password
51 self._runner = runner
52 self._output = output
53
54 def run(self, command):
55 command = "%s %s" % (self.cmd_command, command)
56 with chdir(self.folder) if self.folder else no_op():
57 with environment_append({"LC_ALL": "en_US.UTF-8"}) if self._force_eng else no_op():
58 with pyinstaller_bundle_env_cleaned():
59 if not self._runner:
60 return check_output_runner(command).strip()
61 else:
62 return self._runner(command)
63
64 def get_url_with_credentials(self, url):
65 if not self._username or not self._password:
66 return url
67 if urlparse(url).password:
68 return url
69
70 user_enc = quote_plus(self._username)
71 pwd_enc = quote_plus(self._password)
72 url = url.replace("://", "://" + user_enc + ":" + pwd_enc + "@", 1)
73 return url
74
75 @classmethod
76 def _remove_credentials_url(cls, url):
77 parsed = urlparse(url)
78 netloc = parsed.hostname
79 if parsed.port:
80 netloc += ":{}".format(parsed.port)
81 replaced = parsed._replace(netloc=netloc)
82 return replaced.geturl()
83
84
85 class Git(SCMBase):
86 cmd_command = "git"
87
88 @property
89 def _configure_ssl_verify(self):
90 return "-c http.sslVerify=%s " % ("true" if self._verify_ssl else "false")
91
92 def run(self, command):
93 command = self._configure_ssl_verify + command
94 return super(Git, self).run(command)
95
96 def _fetch(self, url, branch, shallow):
97 if not branch:
98 raise ConanException("The destination folder '%s' is not empty, "
99 "specify a branch to checkout (not a tag or commit) "
100 "or specify a 'subfolder' "
101 "attribute in the 'scm'" % self.folder)
102
103 output = self.run("init")
104 output += self.run('remote add origin "%s"' % url)
105 if shallow:
106 output += self.run('fetch --depth 1 origin "%s"' % branch)
107 output += self.run('checkout FETCH_HEAD')
108 else:
109 output += self.run("fetch")
110 output += self.run("checkout -t origin/%s" % branch)
111 return output
112
113 def clone(self, url, branch=None, args="", shallow=False):
114 """
115 :param url: repository remote URL to clone from (e.g. https, git or local)
116 :param branch: actually, can be any valid git ref expression like,
117 - None, use default branch, usually it's "master"
118 - branch name
119 - tag name
120 - revision sha256
121 - expression like HEAD~1
122 :param args: additional arguments to be passed to the git command (e.g. config args)
123 :param shallow:
124 :return: output of the clone command
125 """
126 # TODO: rename "branch" -> "element" in Conan 2.0
127 url = self.get_url_with_credentials(url)
128 if os.path.exists(url):
129 url = url.replace("\\", "/") # Windows local directory
130 mkdir(self.folder) # might not exist in case of shallow clone
131 if os.listdir(self.folder):
132 return self._fetch(url, branch, shallow)
133 if shallow and branch:
134 return self._fetch(url, branch, shallow)
135 branch_cmd = "--branch %s" % branch if branch else ""
136 shallow_cmd = "--depth 1" if shallow else ""
137 output = self.run('clone "%s" . %s %s %s' % (url, branch_cmd, shallow_cmd, args))
138
139 return output
140
141 def checkout(self, element, submodule=None):
142 # Element can be a tag, branch or commit
143 self.check_repo()
144 output = self.run('checkout "%s"' % element)
145 output += self.checkout_submodules(submodule)
146
147 return output
148
149 def checkout_submodules(self, submodule=None):
150 """Do the checkout only for submodules"""
151 if not submodule:
152 return ""
153 if submodule == "shallow":
154 output = self.run("submodule sync")
155 output += self.run("submodule update --init")
156 return output
157 elif submodule == "recursive":
158 output = self.run("submodule sync --recursive")
159 output += self.run("submodule update --init --recursive")
160 return output
161 else:
162 raise ConanException("Invalid 'submodule' attribute value in the 'scm'. "
163 "Unknown value '%s'. Allowed values: ['shallow', 'recursive']"
164 % submodule)
165
166 def excluded_files(self):
167 ret = []
168 try:
169 file_paths = [os.path.normpath(
170 os.path.join(
171 os.path.relpath(folder, self.folder), el)).replace("\\", "/")
172 for folder, dirpaths, fs in walk(self.folder)
173 for el in fs + dirpaths]
174 if file_paths:
175 paths = to_file_bytes("\n".join(file_paths))
176 out = input_runner(['git', 'check-ignore', '--stdin'], paths, self.folder)
177 grep_stdout = decode_text(out)
178 ret = grep_stdout.splitlines()
179 except (CalledProcessError, IOError, OSError) as e:
180 if self._output:
181 self._output.warn("Error checking excluded git files: %s. "
182 "Ignoring excluded files" % e)
183 ret = []
184 return ret
185
186 def get_remote_url(self, remote_name=None, remove_credentials=False):
187 self.check_repo()
188 remote_name = remote_name or "origin"
189 remotes = self.run("remote -v")
190 for remote in remotes.splitlines():
191 name, url = remote.split(None, 1)
192 if name == remote_name:
193 url, _ = url.rsplit(None, 1)
194 if remove_credentials and not os.path.exists(url): # only if not local
195 url = self._remove_credentials_url(url)
196 if os.path.exists(url): # Windows local directory
197 url = url.replace("\\", "/")
198 return url
199 return None
200
201 def is_local_repository(self):
202 url = self.get_remote_url()
203 return os.path.exists(url)
204
205 def get_commit(self):
206 self.check_repo()
207 try:
208 commit = self.run("rev-parse HEAD")
209 commit = commit.strip()
210 return commit
211 except Exception as e:
212 raise ConanException("Unable to get git commit from '%s': %s" % (self.folder, str(e)))
213
214 get_revision = get_commit
215
216 def get_commit_message(self):
217 self.check_repo()
218 try:
219 message = self.run("log -1 --format=%s%n%b")
220 return message.strip()
221 except Exception:
222 return None
223
224 def is_pristine(self):
225 self.check_repo()
226 status = self.run("status --porcelain").strip()
227 if not status:
228 return True
229 else:
230 return False
231
232 def get_repo_root(self):
233 self.check_repo()
234 return self.run("rev-parse --show-toplevel")
235
236 def get_branch(self):
237 self.check_repo()
238 try:
239 status = self.run("status -bs --porcelain")
240 # ## feature/scm_branch...myorigin/feature/scm_branch
241 branch = status.splitlines()[0].split("...")[0].strip("#").strip()
242 return branch
243 except Exception as e:
244 raise ConanException("Unable to get git branch from %s: %s" % (self.folder, str(e)))
245
246 def get_tag(self):
247 self.check_repo()
248 try:
249 status = self.run("describe --exact-match --tags")
250 tag = status.strip()
251 return tag
252 except Exception:
253 return None
254
255 def check_repo(self):
256 """ Check if it is a valid GIT repo """
257 _check_repo(["git", "status"], folder=self.folder)
258
259
260 class SVN(SCMBase):
261 cmd_command = "svn"
262 file_protocol = 'file:///' if platform.system() == "Windows" else 'file://'
263 API_CHANGE_VERSION = Version("1.9") # CLI changes in 1.9
264
265 def __init__(self, folder=None, runner=None, *args, **kwargs):
266 def runner_no_strip(command):
267 return check_output_runner(command)
268 runner = runner or runner_no_strip
269 super(SVN, self).__init__(folder=folder, runner=runner, *args, **kwargs)
270
271 @property
272 def version(self):
273 if not hasattr(self, '_version'):
274 version = SVN.get_version()
275 setattr(self, '_version', version)
276 return getattr(self, '_version')
277
278 def run(self, command):
279 # Ensure we always pass some params
280 extra_options = " --no-auth-cache --non-interactive"
281 if not self._verify_ssl:
282 if self.version >= SVN.API_CHANGE_VERSION:
283 extra_options += " --trust-server-cert-failures=unknown-ca"
284 else:
285 extra_options += " --trust-server-cert"
286 if self._username and self._password:
287 extra_options += " --username=" + self._username
288 extra_options += " --password=" + self._password
289 return super(SVN, self).run(command="{} {}".format(command, extra_options))
290
291 def _show_item(self, item, target='.'):
292 self.check_repo()
293 if self.version >= SVN.API_CHANGE_VERSION:
294 value = self.run("info --show-item {item} \"{target}\"".format(item=item, target=target))
295 return value.strip()
296 else:
297 output = self.run("info --xml \"{target}\"".format(target=target))
298 root = ET.fromstring(output)
299 if item == 'revision':
300 return root.findall("./entry")[0].get("revision")
301 elif item == 'url':
302 return root.findall("./entry/url")[0].text
303 elif item == 'wc-root':
304 return root.findall("./entry/wc-info/wcroot-abspath")[0].text
305 elif item == 'last-changed-revision':
306 return root.findall("./entry/commit")[0].get("revision")
307 elif item == 'relative-url':
308 root_url = root.findall("./entry/repository/root")[0].text
309 url = self._show_item(item='url', target=target)
310 if url.startswith(root_url):
311 return url[len(root_url):]
312 raise ConanException("Retrieval of item '{}' not implemented for SVN<{}".format(
313 item, SVN.API_CHANGE_VERSION))
314
315 def checkout(self, url, revision="HEAD"):
316 output = ""
317 try:
318 self.check_repo()
319 except ConanException:
320 output += self.run('co "{url}" .'.format(url=url))
321 else:
322 assert url.lower() == self.get_remote_url().lower(), \
323 "%s != %s" % (url, self.get_remote_url())
324 output += self.run("revert . --recursive")
325 finally:
326 output += self.update(revision=revision)
327 return output
328
329 def update(self, revision='HEAD'):
330 self.check_repo()
331 return self.run("update -r {rev}".format(rev=revision))
332
333 def excluded_files(self):
334 self.check_repo()
335 excluded_list = []
336 output = self.run("status --no-ignore")
337 for it in output.splitlines():
338 if it.startswith('I'): # Only ignored files
339 filepath = it[8:].strip()
340 excluded_list.append(os.path.normpath(filepath))
341 return excluded_list
342
343 def get_remote_url(self, remove_credentials=False):
344 url = self._show_item('url')
345 if remove_credentials and not os.path.exists(url): # only if not local
346 url = self._remove_credentials_url(url)
347 return url
348
349 def get_qualified_remote_url(self, remove_credentials=False):
350 # Return url with peg revision
351 url = self.get_remote_url(remove_credentials=remove_credentials)
352 revision = self.get_revision()
353 return "{url}@{revision}".format(url=url, revision=revision)
354
355 def is_local_repository(self):
356 url = self.get_remote_url()
357 return (url.startswith(self.file_protocol) and
358 os.path.exists(unquote(url[len(self.file_protocol):])))
359
360 def is_pristine(self):
361 # Check if working copy is pristine/consistent
362 if self.version >= SVN.API_CHANGE_VERSION:
363 try:
364 output = self.run("status -u -r {} --xml".format(self.get_revision()))
365 except CalledProcessError:
366 return False
367 else:
368 root = ET.fromstring(output)
369
370 pristine_item_list = ['external', 'ignored', 'none', 'normal']
371 pristine_props_list = ['normal', 'none']
372 for item in root.findall('.//wc-status'):
373 if item.get('item', 'none') not in pristine_item_list:
374 return False
375 if item.get('props', 'none') not in pristine_props_list:
376 return False
377
378 for item in root.findall('.//repos-status'):
379 if item.get('item', 'none') not in pristine_item_list:
380 return False
381 if item.get('props', 'none') not in pristine_props_list:
382 return False
383 return True
384 else:
385 if self._output:
386 self._output.warn("SVN::is_pristine for SVN v{} (less than {}) is not implemented,"
387 " it is returning not-pristine always because it cannot compare"
388 " with checked out version.".format(self.version,
389 SVN.API_CHANGE_VERSION))
390 return False
391
392 def get_revision(self):
393 return self._show_item('revision')
394
395 def get_revision_message(self):
396 output = self.run("log -r COMMITTED").splitlines()
397 return output[3] if len(output) > 2 else None
398
399 def get_repo_root(self):
400 return self._show_item('wc-root')
401
402 def get_last_changed_revision(self, use_wc_root=True):
403 if use_wc_root:
404 return self._show_item(item='last-changed-revision', target=self.get_repo_root())
405 else:
406 return self._show_item(item='last-changed-revision')
407
408 def get_branch(self):
409 item = self._get_item("branches/[^/]+|trunk", "branch")
410 return item.replace("branches/", "") if item else None
411
412 def get_tag(self):
413 item = self._get_item("tags/[^/]+", "tag")
414 return item.replace("tags/", "") if item else None
415
416 def _get_item(self, pattern, item_name):
417 try:
418 url = self._show_item('relative-url')
419 except Exception as e:
420 raise ConanException("Unable to get svn %s from %s: %s"
421 % (item_name, self.folder, str(e)))
422 item = re.search(pattern, url)
423 return item.group(0) if item else None
424
425 def check_repo(self):
426 """ Check if it is a valid SVN repo """
427 _check_repo(["svn", "info"], folder=self.folder)
428
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conans/client/tools/scm.py b/conans/client/tools/scm.py
--- a/conans/client/tools/scm.py
+++ b/conans/client/tools/scm.py
@@ -78,6 +78,8 @@
netloc = parsed.hostname
if parsed.port:
netloc += ":{}".format(parsed.port)
+ if parsed.username and parsed.scheme == "ssh":
+ netloc = "{}@{}".format(parsed.username, netloc)
replaced = parsed._replace(netloc=netloc)
return replaced.geturl()
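A matching standalone sketch of the patched logic (mirroring the diff above, not the library function itself): the username is preserved only for `ssh://` URLs, so HTTP(S) credentials are still scrubbed.

```python
from urllib.parse import urlparse

def strip_credentials_fixed(url):
    parsed = urlparse(url)
    netloc = parsed.hostname
    if parsed.port:
        netloc += ":{}".format(parsed.port)
    # The fix: SSH remotes keep their user, everything else is scrubbed.
    if parsed.username and parsed.scheme == "ssh":
        netloc = "{}@{}".format(parsed.username, netloc)
    return parsed._replace(netloc=netloc).geturl()

print(strip_credentials_fixed("ssh://[email protected]:2222/GROUP/repo.git"))
# ssh://[email protected]:2222/GROUP/repo.git

print(strip_credentials_fixed("https://user:[email protected]/GROUP/repo.git"))
# https://example.com/GROUP/repo.git
```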
| {"golden_diff": "diff --git a/conans/client/tools/scm.py b/conans/client/tools/scm.py\n--- a/conans/client/tools/scm.py\n+++ b/conans/client/tools/scm.py\n@@ -78,6 +78,8 @@\n netloc = parsed.hostname\n if parsed.port:\n netloc += \":{}\".format(parsed.port)\n+ if parsed.username and parsed.scheme == \"ssh\":\n+ netloc = \"{}@{}\".format(parsed.username, netloc)\n replaced = parsed._replace(netloc=netloc)\n return replaced.geturl()\n", "issue": "Removing credentials from 'url' for SCM doesn't play well with SSH git repository hosting\n- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).\r\n- [x] I've specified the Conan version, operating system version and any tool that can be relevant.\r\n- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.\r\n\r\nconan 1.12+, previous used version was 1.11\r\n\r\nOnce upon a time there was a pull request #4207 that changed URLs like `ssh://[email protected]:port/GROUP/repo.git` to `ssh://a.b.c.d:port/GROUP/repo.git` (i.e. without username) for SCM attribute.\r\n\r\nRecently I updated to conan 1.18.5 and got a problem.\r\n\r\nI'm pulling sources from a GitLab instance via SSH (I can't change it now to HTTP) and GitLab (I think git hosting services too) doesn't accept SSH connections without username.\r\n\r\nSo what options do I have now?\r\n- Every user of package have to execute `git config --global url.ssh://[email protected]:port.insteadOf ssh://a.b.c.d:port`, i.e. add config to rewrite URL. It doesn't scale well.\r\n- Every package developer must hardcode username 'git' in the scm attribute i.e.\r\n```scm = {\r\n \"type\": \"git\",\r\n \"username\": \"git\",\r\n \"url\": \"auto\",\r\n \"revision\": \"auto\",\r\n }\r\n```\r\nIt doesn't scale too and what if someone wants to use HTTPS and his name is not `git`?\r\n\r\nFor me as a user of conan it looks like a regression.\r\n\r\nCould you suggest a scalable workaround or fix this issue?\n", "before_files": [{"content": "import os\nimport platform\nimport re\nimport xml.etree.ElementTree as ET\nfrom subprocess import CalledProcessError\n\nfrom six.moves.urllib.parse import quote_plus, unquote, urlparse\n\nfrom conans.client.tools.env import environment_append, no_op\nfrom conans.client.tools.files import chdir\nfrom conans.errors import ConanException\nfrom conans.model.version import Version\nfrom conans.util.files import decode_text, to_file_bytes, walk, mkdir\nfrom conans.util.runners import check_output_runner, version_runner, muted_runner, input_runner, \\\n pyinstaller_bundle_env_cleaned\n\n\ndef _check_repo(cmd, folder):\n msg = \"Not a valid '{0}' repository or '{0}' not found.\".format(cmd[0])\n try:\n ret = muted_runner(cmd, folder=folder)\n except Exception:\n raise ConanException(msg)\n else:\n if bool(ret):\n raise ConanException(msg)\n\n\nclass SCMBase(object):\n cmd_command = None\n\n @classmethod\n def get_version(cls):\n try:\n out = version_runner([cls.cmd_command, \"--version\"])\n version_line = decode_text(out).split('\\n', 1)[0]\n version_str = version_line.split(' ', 3)[2]\n return Version(version_str)\n except Exception as e:\n raise ConanException(\"Error retrieving {} version: '{}'\".format(cls.cmd_command, e))\n\n def __init__(self, folder=None, verify_ssl=True, username=None, password=None,\n force_english=True, runner=None, output=None):\n self.folder = folder or os.getcwd()\n if not os.path.exists(self.folder):\n os.makedirs(self.folder)\n self._verify_ssl = 
verify_ssl\n self._force_eng = force_english\n self._username = username\n self._password = password\n self._runner = runner\n self._output = output\n\n def run(self, command):\n command = \"%s %s\" % (self.cmd_command, command)\n with chdir(self.folder) if self.folder else no_op():\n with environment_append({\"LC_ALL\": \"en_US.UTF-8\"}) if self._force_eng else no_op():\n with pyinstaller_bundle_env_cleaned():\n if not self._runner:\n return check_output_runner(command).strip()\n else:\n return self._runner(command)\n\n def get_url_with_credentials(self, url):\n if not self._username or not self._password:\n return url\n if urlparse(url).password:\n return url\n\n user_enc = quote_plus(self._username)\n pwd_enc = quote_plus(self._password)\n url = url.replace(\"://\", \"://\" + user_enc + \":\" + pwd_enc + \"@\", 1)\n return url\n\n @classmethod\n def _remove_credentials_url(cls, url):\n parsed = urlparse(url)\n netloc = parsed.hostname\n if parsed.port:\n netloc += \":{}\".format(parsed.port)\n replaced = parsed._replace(netloc=netloc)\n return replaced.geturl()\n\n\nclass Git(SCMBase):\n cmd_command = \"git\"\n\n @property\n def _configure_ssl_verify(self):\n return \"-c http.sslVerify=%s \" % (\"true\" if self._verify_ssl else \"false\")\n\n def run(self, command):\n command = self._configure_ssl_verify + command\n return super(Git, self).run(command)\n\n def _fetch(self, url, branch, shallow):\n if not branch:\n raise ConanException(\"The destination folder '%s' is not empty, \"\n \"specify a branch to checkout (not a tag or commit) \"\n \"or specify a 'subfolder' \"\n \"attribute in the 'scm'\" % self.folder)\n\n output = self.run(\"init\")\n output += self.run('remote add origin \"%s\"' % url)\n if shallow:\n output += self.run('fetch --depth 1 origin \"%s\"' % branch)\n output += self.run('checkout FETCH_HEAD')\n else:\n output += self.run(\"fetch\")\n output += self.run(\"checkout -t origin/%s\" % branch)\n return output\n\n def clone(self, url, branch=None, args=\"\", shallow=False):\n \"\"\"\n :param url: repository remote URL to clone from (e.g. https, git or local)\n :param branch: actually, can be any valid git ref expression like,\n - None, use default branch, usually it's \"master\"\n - branch name\n - tag name\n - revision sha256\n - expression like HEAD~1\n :param args: additional arguments to be passed to the git command (e.g. config args)\n :param shallow:\n :return: output of the clone command\n \"\"\"\n # TODO: rename \"branch\" -> \"element\" in Conan 2.0\n url = self.get_url_with_credentials(url)\n if os.path.exists(url):\n url = url.replace(\"\\\\\", \"/\") # Windows local directory\n mkdir(self.folder) # might not exist in case of shallow clone\n if os.listdir(self.folder):\n return self._fetch(url, branch, shallow)\n if shallow and branch:\n return self._fetch(url, branch, shallow)\n branch_cmd = \"--branch %s\" % branch if branch else \"\"\n shallow_cmd = \"--depth 1\" if shallow else \"\"\n output = self.run('clone \"%s\" . 
%s %s %s' % (url, branch_cmd, shallow_cmd, args))\n\n return output\n\n def checkout(self, element, submodule=None):\n # Element can be a tag, branch or commit\n self.check_repo()\n output = self.run('checkout \"%s\"' % element)\n output += self.checkout_submodules(submodule)\n\n return output\n\n def checkout_submodules(self, submodule=None):\n \"\"\"Do the checkout only for submodules\"\"\"\n if not submodule:\n return \"\"\n if submodule == \"shallow\":\n output = self.run(\"submodule sync\")\n output += self.run(\"submodule update --init\")\n return output\n elif submodule == \"recursive\":\n output = self.run(\"submodule sync --recursive\")\n output += self.run(\"submodule update --init --recursive\")\n return output\n else:\n raise ConanException(\"Invalid 'submodule' attribute value in the 'scm'. \"\n \"Unknown value '%s'. Allowed values: ['shallow', 'recursive']\"\n % submodule)\n\n def excluded_files(self):\n ret = []\n try:\n file_paths = [os.path.normpath(\n os.path.join(\n os.path.relpath(folder, self.folder), el)).replace(\"\\\\\", \"/\")\n for folder, dirpaths, fs in walk(self.folder)\n for el in fs + dirpaths]\n if file_paths:\n paths = to_file_bytes(\"\\n\".join(file_paths))\n out = input_runner(['git', 'check-ignore', '--stdin'], paths, self.folder)\n grep_stdout = decode_text(out)\n ret = grep_stdout.splitlines()\n except (CalledProcessError, IOError, OSError) as e:\n if self._output:\n self._output.warn(\"Error checking excluded git files: %s. \"\n \"Ignoring excluded files\" % e)\n ret = []\n return ret\n\n def get_remote_url(self, remote_name=None, remove_credentials=False):\n self.check_repo()\n remote_name = remote_name or \"origin\"\n remotes = self.run(\"remote -v\")\n for remote in remotes.splitlines():\n name, url = remote.split(None, 1)\n if name == remote_name:\n url, _ = url.rsplit(None, 1)\n if remove_credentials and not os.path.exists(url): # only if not local\n url = self._remove_credentials_url(url)\n if os.path.exists(url): # Windows local directory\n url = url.replace(\"\\\\\", \"/\")\n return url\n return None\n\n def is_local_repository(self):\n url = self.get_remote_url()\n return os.path.exists(url)\n\n def get_commit(self):\n self.check_repo()\n try:\n commit = self.run(\"rev-parse HEAD\")\n commit = commit.strip()\n return commit\n except Exception as e:\n raise ConanException(\"Unable to get git commit from '%s': %s\" % (self.folder, str(e)))\n\n get_revision = get_commit\n\n def get_commit_message(self):\n self.check_repo()\n try:\n message = self.run(\"log -1 --format=%s%n%b\")\n return message.strip()\n except Exception:\n return None\n\n def is_pristine(self):\n self.check_repo()\n status = self.run(\"status --porcelain\").strip()\n if not status:\n return True\n else:\n return False\n\n def get_repo_root(self):\n self.check_repo()\n return self.run(\"rev-parse --show-toplevel\")\n\n def get_branch(self):\n self.check_repo()\n try:\n status = self.run(\"status -bs --porcelain\")\n # ## feature/scm_branch...myorigin/feature/scm_branch\n branch = status.splitlines()[0].split(\"...\")[0].strip(\"#\").strip()\n return branch\n except Exception as e:\n raise ConanException(\"Unable to get git branch from %s: %s\" % (self.folder, str(e)))\n\n def get_tag(self):\n self.check_repo()\n try:\n status = self.run(\"describe --exact-match --tags\")\n tag = status.strip()\n return tag\n except Exception:\n return None\n\n def check_repo(self):\n \"\"\" Check if it is a valid GIT repo \"\"\"\n _check_repo([\"git\", \"status\"], folder=self.folder)\n\n\nclass 
SVN(SCMBase):\n cmd_command = \"svn\"\n file_protocol = 'file:///' if platform.system() == \"Windows\" else 'file://'\n API_CHANGE_VERSION = Version(\"1.9\") # CLI changes in 1.9\n\n def __init__(self, folder=None, runner=None, *args, **kwargs):\n def runner_no_strip(command):\n return check_output_runner(command)\n runner = runner or runner_no_strip\n super(SVN, self).__init__(folder=folder, runner=runner, *args, **kwargs)\n\n @property\n def version(self):\n if not hasattr(self, '_version'):\n version = SVN.get_version()\n setattr(self, '_version', version)\n return getattr(self, '_version')\n\n def run(self, command):\n # Ensure we always pass some params\n extra_options = \" --no-auth-cache --non-interactive\"\n if not self._verify_ssl:\n if self.version >= SVN.API_CHANGE_VERSION:\n extra_options += \" --trust-server-cert-failures=unknown-ca\"\n else:\n extra_options += \" --trust-server-cert\"\n if self._username and self._password:\n extra_options += \" --username=\" + self._username\n extra_options += \" --password=\" + self._password\n return super(SVN, self).run(command=\"{} {}\".format(command, extra_options))\n\n def _show_item(self, item, target='.'):\n self.check_repo()\n if self.version >= SVN.API_CHANGE_VERSION:\n value = self.run(\"info --show-item {item} \\\"{target}\\\"\".format(item=item, target=target))\n return value.strip()\n else:\n output = self.run(\"info --xml \\\"{target}\\\"\".format(target=target))\n root = ET.fromstring(output)\n if item == 'revision':\n return root.findall(\"./entry\")[0].get(\"revision\")\n elif item == 'url':\n return root.findall(\"./entry/url\")[0].text\n elif item == 'wc-root':\n return root.findall(\"./entry/wc-info/wcroot-abspath\")[0].text\n elif item == 'last-changed-revision':\n return root.findall(\"./entry/commit\")[0].get(\"revision\")\n elif item == 'relative-url':\n root_url = root.findall(\"./entry/repository/root\")[0].text\n url = self._show_item(item='url', target=target)\n if url.startswith(root_url):\n return url[len(root_url):]\n raise ConanException(\"Retrieval of item '{}' not implemented for SVN<{}\".format(\n item, SVN.API_CHANGE_VERSION))\n\n def checkout(self, url, revision=\"HEAD\"):\n output = \"\"\n try:\n self.check_repo()\n except ConanException:\n output += self.run('co \"{url}\" .'.format(url=url))\n else:\n assert url.lower() == self.get_remote_url().lower(), \\\n \"%s != %s\" % (url, self.get_remote_url())\n output += self.run(\"revert . 
--recursive\")\n finally:\n output += self.update(revision=revision)\n return output\n\n def update(self, revision='HEAD'):\n self.check_repo()\n return self.run(\"update -r {rev}\".format(rev=revision))\n\n def excluded_files(self):\n self.check_repo()\n excluded_list = []\n output = self.run(\"status --no-ignore\")\n for it in output.splitlines():\n if it.startswith('I'): # Only ignored files\n filepath = it[8:].strip()\n excluded_list.append(os.path.normpath(filepath))\n return excluded_list\n\n def get_remote_url(self, remove_credentials=False):\n url = self._show_item('url')\n if remove_credentials and not os.path.exists(url): # only if not local\n url = self._remove_credentials_url(url)\n return url\n\n def get_qualified_remote_url(self, remove_credentials=False):\n # Return url with peg revision\n url = self.get_remote_url(remove_credentials=remove_credentials)\n revision = self.get_revision()\n return \"{url}@{revision}\".format(url=url, revision=revision)\n\n def is_local_repository(self):\n url = self.get_remote_url()\n return (url.startswith(self.file_protocol) and\n os.path.exists(unquote(url[len(self.file_protocol):])))\n\n def is_pristine(self):\n # Check if working copy is pristine/consistent\n if self.version >= SVN.API_CHANGE_VERSION:\n try:\n output = self.run(\"status -u -r {} --xml\".format(self.get_revision()))\n except CalledProcessError:\n return False\n else:\n root = ET.fromstring(output)\n\n pristine_item_list = ['external', 'ignored', 'none', 'normal']\n pristine_props_list = ['normal', 'none']\n for item in root.findall('.//wc-status'):\n if item.get('item', 'none') not in pristine_item_list:\n return False\n if item.get('props', 'none') not in pristine_props_list:\n return False\n\n for item in root.findall('.//repos-status'):\n if item.get('item', 'none') not in pristine_item_list:\n return False\n if item.get('props', 'none') not in pristine_props_list:\n return False\n return True\n else:\n if self._output:\n self._output.warn(\"SVN::is_pristine for SVN v{} (less than {}) is not implemented,\"\n \" it is returning not-pristine always because it cannot compare\"\n \" with checked out version.\".format(self.version,\n SVN.API_CHANGE_VERSION))\n return False\n\n def get_revision(self):\n return self._show_item('revision')\n\n def get_revision_message(self):\n output = self.run(\"log -r COMMITTED\").splitlines()\n return output[3] if len(output) > 2 else None\n\n def get_repo_root(self):\n return self._show_item('wc-root')\n\n def get_last_changed_revision(self, use_wc_root=True):\n if use_wc_root:\n return self._show_item(item='last-changed-revision', target=self.get_repo_root())\n else:\n return self._show_item(item='last-changed-revision')\n\n def get_branch(self):\n item = self._get_item(\"branches/[^/]+|trunk\", \"branch\")\n return item.replace(\"branches/\", \"\") if item else None\n\n def get_tag(self):\n item = self._get_item(\"tags/[^/]+\", \"tag\")\n return item.replace(\"tags/\", \"\") if item else None\n\n def _get_item(self, pattern, item_name):\n try:\n url = self._show_item('relative-url')\n except Exception as e:\n raise ConanException(\"Unable to get svn %s from %s: %s\"\n % (item_name, self.folder, str(e)))\n item = re.search(pattern, url)\n return item.group(0) if item else None\n\n def check_repo(self):\n \"\"\" Check if it is a valid SVN repo \"\"\"\n _check_repo([\"svn\", \"info\"], folder=self.folder)\n", "path": "conans/client/tools/scm.py"}], "after_files": [{"content": "import os\nimport platform\nimport re\nimport 
xml.etree.ElementTree as ET\nfrom subprocess import CalledProcessError\n\nfrom six.moves.urllib.parse import quote_plus, unquote, urlparse\n\nfrom conans.client.tools.env import environment_append, no_op\nfrom conans.client.tools.files import chdir\nfrom conans.errors import ConanException\nfrom conans.model.version import Version\nfrom conans.util.files import decode_text, to_file_bytes, walk, mkdir\nfrom conans.util.runners import check_output_runner, version_runner, muted_runner, input_runner, \\\n pyinstaller_bundle_env_cleaned\n\n\ndef _check_repo(cmd, folder):\n msg = \"Not a valid '{0}' repository or '{0}' not found.\".format(cmd[0])\n try:\n ret = muted_runner(cmd, folder=folder)\n except Exception:\n raise ConanException(msg)\n else:\n if bool(ret):\n raise ConanException(msg)\n\n\nclass SCMBase(object):\n cmd_command = None\n\n @classmethod\n def get_version(cls):\n try:\n out = version_runner([cls.cmd_command, \"--version\"])\n version_line = decode_text(out).split('\\n', 1)[0]\n version_str = version_line.split(' ', 3)[2]\n return Version(version_str)\n except Exception as e:\n raise ConanException(\"Error retrieving {} version: '{}'\".format(cls.cmd_command, e))\n\n def __init__(self, folder=None, verify_ssl=True, username=None, password=None,\n force_english=True, runner=None, output=None):\n self.folder = folder or os.getcwd()\n if not os.path.exists(self.folder):\n os.makedirs(self.folder)\n self._verify_ssl = verify_ssl\n self._force_eng = force_english\n self._username = username\n self._password = password\n self._runner = runner\n self._output = output\n\n def run(self, command):\n command = \"%s %s\" % (self.cmd_command, command)\n with chdir(self.folder) if self.folder else no_op():\n with environment_append({\"LC_ALL\": \"en_US.UTF-8\"}) if self._force_eng else no_op():\n with pyinstaller_bundle_env_cleaned():\n if not self._runner:\n return check_output_runner(command).strip()\n else:\n return self._runner(command)\n\n def get_url_with_credentials(self, url):\n if not self._username or not self._password:\n return url\n if urlparse(url).password:\n return url\n\n user_enc = quote_plus(self._username)\n pwd_enc = quote_plus(self._password)\n url = url.replace(\"://\", \"://\" + user_enc + \":\" + pwd_enc + \"@\", 1)\n return url\n\n @classmethod\n def _remove_credentials_url(cls, url):\n parsed = urlparse(url)\n netloc = parsed.hostname\n if parsed.port:\n netloc += \":{}\".format(parsed.port)\n if parsed.username and parsed.scheme == \"ssh\":\n netloc = \"{}@{}\".format(parsed.username, netloc)\n replaced = parsed._replace(netloc=netloc)\n return replaced.geturl()\n\n\nclass Git(SCMBase):\n cmd_command = \"git\"\n\n @property\n def _configure_ssl_verify(self):\n return \"-c http.sslVerify=%s \" % (\"true\" if self._verify_ssl else \"false\")\n\n def run(self, command):\n command = self._configure_ssl_verify + command\n return super(Git, self).run(command)\n\n def _fetch(self, url, branch, shallow):\n if not branch:\n raise ConanException(\"The destination folder '%s' is not empty, \"\n \"specify a branch to checkout (not a tag or commit) \"\n \"or specify a 'subfolder' \"\n \"attribute in the 'scm'\" % self.folder)\n\n output = self.run(\"init\")\n output += self.run('remote add origin \"%s\"' % url)\n if shallow:\n output += self.run('fetch --depth 1 origin \"%s\"' % branch)\n output += self.run('checkout FETCH_HEAD')\n else:\n output += self.run(\"fetch\")\n output += self.run(\"checkout -t origin/%s\" % branch)\n return output\n\n def clone(self, url, 
branch=None, args=\"\", shallow=False):\n \"\"\"\n :param url: repository remote URL to clone from (e.g. https, git or local)\n :param branch: actually, can be any valid git ref expression like,\n - None, use default branch, usually it's \"master\"\n - branch name\n - tag name\n - revision sha256\n - expression like HEAD~1\n :param args: additional arguments to be passed to the git command (e.g. config args)\n :param shallow:\n :return: output of the clone command\n \"\"\"\n # TODO: rename \"branch\" -> \"element\" in Conan 2.0\n url = self.get_url_with_credentials(url)\n if os.path.exists(url):\n url = url.replace(\"\\\\\", \"/\") # Windows local directory\n mkdir(self.folder) # might not exist in case of shallow clone\n if os.listdir(self.folder):\n return self._fetch(url, branch, shallow)\n if shallow and branch:\n return self._fetch(url, branch, shallow)\n branch_cmd = \"--branch %s\" % branch if branch else \"\"\n shallow_cmd = \"--depth 1\" if shallow else \"\"\n output = self.run('clone \"%s\" . %s %s %s' % (url, branch_cmd, shallow_cmd, args))\n\n return output\n\n def checkout(self, element, submodule=None):\n # Element can be a tag, branch or commit\n self.check_repo()\n output = self.run('checkout \"%s\"' % element)\n output += self.checkout_submodules(submodule)\n\n return output\n\n def checkout_submodules(self, submodule=None):\n \"\"\"Do the checkout only for submodules\"\"\"\n if not submodule:\n return \"\"\n if submodule == \"shallow\":\n output = self.run(\"submodule sync\")\n output += self.run(\"submodule update --init\")\n return output\n elif submodule == \"recursive\":\n output = self.run(\"submodule sync --recursive\")\n output += self.run(\"submodule update --init --recursive\")\n return output\n else:\n raise ConanException(\"Invalid 'submodule' attribute value in the 'scm'. \"\n \"Unknown value '%s'. Allowed values: ['shallow', 'recursive']\"\n % submodule)\n\n def excluded_files(self):\n ret = []\n try:\n file_paths = [os.path.normpath(\n os.path.join(\n os.path.relpath(folder, self.folder), el)).replace(\"\\\\\", \"/\")\n for folder, dirpaths, fs in walk(self.folder)\n for el in fs + dirpaths]\n if file_paths:\n paths = to_file_bytes(\"\\n\".join(file_paths))\n out = input_runner(['git', 'check-ignore', '--stdin'], paths, self.folder)\n grep_stdout = decode_text(out)\n ret = grep_stdout.splitlines()\n except (CalledProcessError, IOError, OSError) as e:\n if self._output:\n self._output.warn(\"Error checking excluded git files: %s. 
\"\n \"Ignoring excluded files\" % e)\n ret = []\n return ret\n\n def get_remote_url(self, remote_name=None, remove_credentials=False):\n self.check_repo()\n remote_name = remote_name or \"origin\"\n remotes = self.run(\"remote -v\")\n for remote in remotes.splitlines():\n name, url = remote.split(None, 1)\n if name == remote_name:\n url, _ = url.rsplit(None, 1)\n if remove_credentials and not os.path.exists(url): # only if not local\n url = self._remove_credentials_url(url)\n if os.path.exists(url): # Windows local directory\n url = url.replace(\"\\\\\", \"/\")\n return url\n return None\n\n def is_local_repository(self):\n url = self.get_remote_url()\n return os.path.exists(url)\n\n def get_commit(self):\n self.check_repo()\n try:\n commit = self.run(\"rev-parse HEAD\")\n commit = commit.strip()\n return commit\n except Exception as e:\n raise ConanException(\"Unable to get git commit from '%s': %s\" % (self.folder, str(e)))\n\n get_revision = get_commit\n\n def get_commit_message(self):\n self.check_repo()\n try:\n message = self.run(\"log -1 --format=%s%n%b\")\n return message.strip()\n except Exception:\n return None\n\n def is_pristine(self):\n self.check_repo()\n status = self.run(\"status --porcelain\").strip()\n if not status:\n return True\n else:\n return False\n\n def get_repo_root(self):\n self.check_repo()\n return self.run(\"rev-parse --show-toplevel\")\n\n def get_branch(self):\n self.check_repo()\n try:\n status = self.run(\"status -bs --porcelain\")\n # ## feature/scm_branch...myorigin/feature/scm_branch\n branch = status.splitlines()[0].split(\"...\")[0].strip(\"#\").strip()\n return branch\n except Exception as e:\n raise ConanException(\"Unable to get git branch from %s: %s\" % (self.folder, str(e)))\n\n def get_tag(self):\n self.check_repo()\n try:\n status = self.run(\"describe --exact-match --tags\")\n tag = status.strip()\n return tag\n except Exception:\n return None\n\n def check_repo(self):\n \"\"\" Check if it is a valid GIT repo \"\"\"\n _check_repo([\"git\", \"status\"], folder=self.folder)\n\n\nclass SVN(SCMBase):\n cmd_command = \"svn\"\n file_protocol = 'file:///' if platform.system() == \"Windows\" else 'file://'\n API_CHANGE_VERSION = Version(\"1.9\") # CLI changes in 1.9\n\n def __init__(self, folder=None, runner=None, *args, **kwargs):\n def runner_no_strip(command):\n return check_output_runner(command)\n runner = runner or runner_no_strip\n super(SVN, self).__init__(folder=folder, runner=runner, *args, **kwargs)\n\n @property\n def version(self):\n if not hasattr(self, '_version'):\n version = SVN.get_version()\n setattr(self, '_version', version)\n return getattr(self, '_version')\n\n def run(self, command):\n # Ensure we always pass some params\n extra_options = \" --no-auth-cache --non-interactive\"\n if not self._verify_ssl:\n if self.version >= SVN.API_CHANGE_VERSION:\n extra_options += \" --trust-server-cert-failures=unknown-ca\"\n else:\n extra_options += \" --trust-server-cert\"\n if self._username and self._password:\n extra_options += \" --username=\" + self._username\n extra_options += \" --password=\" + self._password\n return super(SVN, self).run(command=\"{} {}\".format(command, extra_options))\n\n def _show_item(self, item, target='.'):\n self.check_repo()\n if self.version >= SVN.API_CHANGE_VERSION:\n value = self.run(\"info --show-item {item} \\\"{target}\\\"\".format(item=item, target=target))\n return value.strip()\n else:\n output = self.run(\"info --xml \\\"{target}\\\"\".format(target=target))\n root = ET.fromstring(output)\n if 
item == 'revision':\n return root.findall(\"./entry\")[0].get(\"revision\")\n elif item == 'url':\n return root.findall(\"./entry/url\")[0].text\n elif item == 'wc-root':\n return root.findall(\"./entry/wc-info/wcroot-abspath\")[0].text\n elif item == 'last-changed-revision':\n return root.findall(\"./entry/commit\")[0].get(\"revision\")\n elif item == 'relative-url':\n root_url = root.findall(\"./entry/repository/root\")[0].text\n url = self._show_item(item='url', target=target)\n if url.startswith(root_url):\n return url[len(root_url):]\n raise ConanException(\"Retrieval of item '{}' not implemented for SVN<{}\".format(\n item, SVN.API_CHANGE_VERSION))\n\n def checkout(self, url, revision=\"HEAD\"):\n output = \"\"\n try:\n self.check_repo()\n except ConanException:\n output += self.run('co \"{url}\" .'.format(url=url))\n else:\n assert url.lower() == self.get_remote_url().lower(), \\\n \"%s != %s\" % (url, self.get_remote_url())\n output += self.run(\"revert . --recursive\")\n finally:\n output += self.update(revision=revision)\n return output\n\n def update(self, revision='HEAD'):\n self.check_repo()\n return self.run(\"update -r {rev}\".format(rev=revision))\n\n def excluded_files(self):\n self.check_repo()\n excluded_list = []\n output = self.run(\"status --no-ignore\")\n for it in output.splitlines():\n if it.startswith('I'): # Only ignored files\n filepath = it[8:].strip()\n excluded_list.append(os.path.normpath(filepath))\n return excluded_list\n\n def get_remote_url(self, remove_credentials=False):\n url = self._show_item('url')\n if remove_credentials and not os.path.exists(url): # only if not local\n url = self._remove_credentials_url(url)\n return url\n\n def get_qualified_remote_url(self, remove_credentials=False):\n # Return url with peg revision\n url = self.get_remote_url(remove_credentials=remove_credentials)\n revision = self.get_revision()\n return \"{url}@{revision}\".format(url=url, revision=revision)\n\n def is_local_repository(self):\n url = self.get_remote_url()\n return (url.startswith(self.file_protocol) and\n os.path.exists(unquote(url[len(self.file_protocol):])))\n\n def is_pristine(self):\n # Check if working copy is pristine/consistent\n if self.version >= SVN.API_CHANGE_VERSION:\n try:\n output = self.run(\"status -u -r {} --xml\".format(self.get_revision()))\n except CalledProcessError:\n return False\n else:\n root = ET.fromstring(output)\n\n pristine_item_list = ['external', 'ignored', 'none', 'normal']\n pristine_props_list = ['normal', 'none']\n for item in root.findall('.//wc-status'):\n if item.get('item', 'none') not in pristine_item_list:\n return False\n if item.get('props', 'none') not in pristine_props_list:\n return False\n\n for item in root.findall('.//repos-status'):\n if item.get('item', 'none') not in pristine_item_list:\n return False\n if item.get('props', 'none') not in pristine_props_list:\n return False\n return True\n else:\n if self._output:\n self._output.warn(\"SVN::is_pristine for SVN v{} (less than {}) is not implemented,\"\n \" it is returning not-pristine always because it cannot compare\"\n \" with checked out version.\".format(self.version,\n SVN.API_CHANGE_VERSION))\n return False\n\n def get_revision(self):\n return self._show_item('revision')\n\n def get_revision_message(self):\n output = self.run(\"log -r COMMITTED\").splitlines()\n return output[3] if len(output) > 2 else None\n\n def get_repo_root(self):\n return self._show_item('wc-root')\n\n def get_last_changed_revision(self, use_wc_root=True):\n if use_wc_root:\n 
return self._show_item(item='last-changed-revision', target=self.get_repo_root())\n else:\n return self._show_item(item='last-changed-revision')\n\n def get_branch(self):\n item = self._get_item(\"branches/[^/]+|trunk\", \"branch\")\n return item.replace(\"branches/\", \"\") if item else None\n\n def get_tag(self):\n item = self._get_item(\"tags/[^/]+\", \"tag\")\n return item.replace(\"tags/\", \"\") if item else None\n\n def _get_item(self, pattern, item_name):\n try:\n url = self._show_item('relative-url')\n except Exception as e:\n raise ConanException(\"Unable to get svn %s from %s: %s\"\n % (item_name, self.folder, str(e)))\n item = re.search(pattern, url)\n return item.group(0) if item else None\n\n def check_repo(self):\n \"\"\" Check if it is a valid SVN repo \"\"\"\n _check_repo([\"svn\", \"info\"], folder=self.folder)\n", "path": "conans/client/tools/scm.py"}]} |
gh_patches_debug_1040 | rasdani/github-patches | git_diff | coreruleset__coreruleset-3550 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
.changes-pending.md lacks space before asterisk
Our CHANGES.md has a leading space before the bullet / asterisk. The .changes-pending.md does not.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `.github/create-changelog-prs.py`
Content:
```
1 #! /usr/bin/env python
2
3 import subprocess
4 import json
5 import datetime
6 import sys
7 import os
8 import re
9
10 DEVELOPERS = dict()
11
12 def get_pr(repository: str, number: int) -> dict:
13 command = f"""gh pr view \
14 --repo "{repository}" \
15 "{number}" \
16 --json mergeCommit,mergedBy,title,author,baseRefName,number
17 """
18 proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
19 pr_json, errors = proc.communicate()
20 if proc.returncode != 0:
21 print(errors)
22 exit(1)
23 return json.loads(pr_json)
24
25 def get_prs(repository: str, day: datetime.date) -> list:
26 print(f"Fetching PRs for {day}")
27 command = f"""gh search prs \
28 --repo "{repository}" \
29 --merged-at "{day}" \
30 --json number \
31 -- \
32 -label:changelog-pr # ignore changelog prs
33 """
34 proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
35 prs_json, errors = proc.communicate()
36 if proc.returncode != 0:
37 print(errors)
38 exit(1)
39 prs = list()
40 for result in json.loads(prs_json):
41 prs.append(get_pr(repository, result["number"]))
42
43 return prs
44
45 def parse_prs(prs: list) -> dict:
46 pr_map = dict()
47 for pr in prs:
48 merged_by = pr["mergedBy"]["login"]
49 if merged_by not in pr_map:
50 pr_list = list()
51 pr_map[merged_by] = pr_list
52 else:
53 pr_list = pr_map[merged_by]
54 pr_list.append(pr)
55 return pr_map
56
57
58 def create_prs(repository: str, merged_by_prs_map: dict, day: datetime.date):
59 for author in merged_by_prs_map.keys():
60 create_pr(repository, author, merged_by_prs_map[author], day)
61
62 def create_pr(repository: str, merged_by: str, prs: list, day: datetime.date):
63 if len(prs) == 0:
64 return
65 print(f"Creating changelog PR for @{merged_by}")
66
67 sample_pr = prs[0]
68 base_branch = sample_pr["baseRefName"]
69 pr_branch_name = create_pr_branch(day, merged_by, base_branch)
70 pr_body, changelog_lines = generate_content(prs, merged_by)
71 create_commit(changelog_lines)
72 push_pr_branch(pr_branch_name)
73
74 command = f"""gh pr create \
75 --repo "{repository}" \
76 --assignee "{merged_by}" \
77 --base "{base_branch}" \
78 --label "changelog-pr" \
79 --title "chore: changelog updates for {day}, merged by @{merged_by}" \
80 --body-file -
81 """
82
83 proc = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
84 outs, errors = proc.communicate(input=pr_body.encode())
85 if proc.returncode != 0:
86 print(errors)
87 exit(1)
88 print(f"Created PR: {outs.decode()}")
89
90 def create_commit(changelog_lines: str):
91 with open('.changes-pending.md', 'a') as changelog:
92 changelog.write(changelog_lines)
93
94 command = "git commit .changes-pending.md -m 'Add pending changelog entries'"
95 proc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)
96 _, errors = proc.communicate()
97 if proc.returncode != 0:
98 print(errors)
99 exit(1)
100
101 def generate_content(prs: list, merged_by: str) -> (str, str):
102 changelog_lines = ""
103 pr_body = f"This PR was auto-generated to update the changelog with the following entries, merged by @{merged_by}:\n```\n"
104 pr_links = ""
105 for pr in prs:
106 pr_number = pr["number"]
107 pr_title = pr["title"]
108 pr_author = get_pr_author_name(pr["author"]["login"])
109 new_line = f"* {pr_title} ({pr_author}) [#{pr_number}]\n"
110 pr_body += new_line
111 pr_links += f"- #{pr_number}\n"
112
113 changelog_lines += new_line
114 pr_body += "```\n\n" + pr_links
115
116 return pr_body, changelog_lines
117
118 def get_pr_author_name(login: str) -> str:
119 if len(DEVELOPERS) == 0:
120 parse_contributors()
121
122 return DEVELOPERS[login] if login in DEVELOPERS else f"@{login}"
123
124 def parse_contributors():
125 regex = re.compile(r'^\s*?-\s*?\[([^]]+)\]\s*?\(http.*/([^/]+)\s*?\)')
126 with open('CONTRIBUTORS.md', 'rt') as handle:
127 line = handle.readline()
128 while not ('##' in line and 'Contributors' in line):
129 match = regex.match(line)
130 if match:
131 DEVELOPERS[match.group(2)] = match.group(1)
132 line = handle.readline()
133
134 def create_pr_branch(day: datetime.date, author: str, base_branch: str) -> str:
135 branch_name = f"changelog-updates-for-{day}-{author} {base_branch}"
136 command = f"git checkout -b {branch_name}"
137 proc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)
138 _, errors = proc.communicate()
139 if proc.returncode != 0:
140 print(errors)
141 exit(1)
142
143 return branch_name
144
145 def push_pr_branch(branch_name: str):
146 command = f"git push -u origin {branch_name}"
147 proc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)
148 _, errors = proc.communicate()
149 if proc.returncode != 0:
150 print(errors)
151 exit(1)
152
153 def run(source_repository: str, target_repository: str, today: datetime.date):
154 day = today - datetime.timedelta(days=1)
155 prs = get_prs(source_repository, day)
156 prs_length = len(prs)
157 print(f"Found {prs_length} PRs")
158 if prs_length == 0:
159 return
160
161 merged_by_prs_map = parse_prs(prs)
162 create_prs(target_repository, merged_by_prs_map, day)
163
164 if __name__ == "__main__":
165 # disable pager
166 os.environ["GH_PAGER"] = ''
167 # set variables for Git
168 os.environ["GIT_AUTHOR_NAME"] = "changelog-pr-bot"
169 os.environ["GIT_AUTHOR_EMAIL"] = "[email protected]"
170 os.environ["GIT_COMMITTER_NAME"] = "changelog-pr-bot"
171 os.environ["GIT_COMMITTER_EMAIL"] = "[email protected]"
172
173 source_repository = 'coreruleset/coreruleset'
174 target_repository = source_repository
175 # the cron schedule for the workflow uses UTC
176 today = datetime.datetime.now(datetime.timezone.utc).date()
177
178 if len(sys.argv) > 1:
179 source_repository = sys.argv[1]
180 if len(sys.argv) > 2:
181 target_repository = sys.argv[2]
182 if len(sys.argv) > 3:
183 today = datetime.date.fromisoformat(sys.argv[3])
184 run(source_repository, target_repository, today)
185
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/.github/create-changelog-prs.py b/.github/create-changelog-prs.py
--- a/.github/create-changelog-prs.py
+++ b/.github/create-changelog-prs.py
@@ -106,7 +106,7 @@
pr_number = pr["number"]
pr_title = pr["title"]
pr_author = get_pr_author_name(pr["author"]["login"])
- new_line = f"* {pr_title} ({pr_author}) [#{pr_number}]\n"
+ new_line = f" * {pr_title} ({pr_author}) [#{pr_number}]\n"
pr_body += new_line
pr_links += f"- #{pr_number}\n"
| {"golden_diff": "diff --git a/.github/create-changelog-prs.py b/.github/create-changelog-prs.py\n--- a/.github/create-changelog-prs.py\n+++ b/.github/create-changelog-prs.py\n@@ -106,7 +106,7 @@\n \t\tpr_number = pr[\"number\"]\n \t\tpr_title = pr[\"title\"]\n \t\tpr_author = get_pr_author_name(pr[\"author\"][\"login\"])\n-\t\tnew_line = f\"* {pr_title} ({pr_author}) [#{pr_number}]\\n\"\n+\t\tnew_line = f\" * {pr_title} ({pr_author}) [#{pr_number}]\\n\"\n \t\tpr_body += new_line\n \t\tpr_links += f\"- #{pr_number}\\n\"\n", "issue": ".changes-pending.md lacks space before asterisk\nOur CHANGES.md has a leading space before the bullet / asterisk. The .changes-pending.md does not.\n", "before_files": [{"content": "#! /usr/bin/env python\n\nimport subprocess\nimport json\nimport datetime\nimport sys\nimport os\nimport re\n\nDEVELOPERS = dict()\n\ndef get_pr(repository: str, number: int) -> dict:\n\tcommand = f\"\"\"gh pr view \\\n\t\t--repo \"{repository}\" \\\n\t\t\"{number}\" \\\n\t\t--json mergeCommit,mergedBy,title,author,baseRefName,number\n\t\"\"\"\n\tproc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\tpr_json, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\treturn json.loads(pr_json)\n\ndef get_prs(repository: str, day: datetime.date) -> list:\n\tprint(f\"Fetching PRs for {day}\")\n\tcommand = f\"\"\"gh search prs \\\n\t\t--repo \"{repository}\" \\\n\t\t--merged-at \"{day}\" \\\n\t\t--json number \\\n\t\t-- \\\n\t\t-label:changelog-pr # ignore changelog prs\n\t\"\"\"\n\tproc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\tprs_json, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\tprs = list()\n\tfor result in json.loads(prs_json):\n\t\tprs.append(get_pr(repository, result[\"number\"]))\n\n\treturn prs\n\ndef parse_prs(prs: list) -> dict:\n\tpr_map = dict()\n\tfor pr in prs:\n\t\tmerged_by = pr[\"mergedBy\"][\"login\"]\n\t\tif merged_by not in pr_map:\n\t\t\tpr_list = list()\n\t\t\tpr_map[merged_by] = pr_list\n\t\telse:\n\t\t\tpr_list = pr_map[merged_by]\n\t\tpr_list.append(pr)\n\treturn pr_map\n\n\ndef create_prs(repository: str, merged_by_prs_map: dict, day: datetime.date):\n\tfor author in merged_by_prs_map.keys():\n\t\tcreate_pr(repository, author, merged_by_prs_map[author], day)\n\ndef create_pr(repository: str, merged_by: str, prs: list, day: datetime.date):\n\tif len(prs) == 0:\n\t\treturn\n\tprint(f\"Creating changelog PR for @{merged_by}\")\n\n\tsample_pr = prs[0]\n\tbase_branch = sample_pr[\"baseRefName\"]\n\tpr_branch_name = create_pr_branch(day, merged_by, base_branch)\n\tpr_body, changelog_lines = generate_content(prs, merged_by)\n\tcreate_commit(changelog_lines)\n\tpush_pr_branch(pr_branch_name)\n\n\tcommand = f\"\"\"gh pr create \\\n\t\t--repo \"{repository}\" \\\n\t\t--assignee \"{merged_by}\" \\\n\t\t--base \"{base_branch}\" \\\n\t\t--label \"changelog-pr\" \\\n\t\t--title \"chore: changelog updates for {day}, merged by @{merged_by}\" \\\n\t\t--body-file -\n\t\"\"\"\n\n\tproc = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\touts, errors = proc.communicate(input=pr_body.encode())\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\tprint(f\"Created PR: {outs.decode()}\")\n\ndef create_commit(changelog_lines: str):\n\twith open('.changes-pending.md', 'a') as changelog:\n\t\tchangelog.write(changelog_lines)\n\n\tcommand = \"git 
commit .changes-pending.md -m 'Add pending changelog entries'\"\n\tproc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)\n\t_, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\ndef generate_content(prs: list, merged_by: str) -> (str, str):\n\tchangelog_lines = \"\"\n\tpr_body = f\"This PR was auto-generated to update the changelog with the following entries, merged by @{merged_by}:\\n```\\n\"\n\tpr_links = \"\"\n\tfor pr in prs:\n\t\tpr_number = pr[\"number\"]\n\t\tpr_title = pr[\"title\"]\n\t\tpr_author = get_pr_author_name(pr[\"author\"][\"login\"])\n\t\tnew_line = f\"* {pr_title} ({pr_author}) [#{pr_number}]\\n\"\n\t\tpr_body += new_line\n\t\tpr_links += f\"- #{pr_number}\\n\"\n\n\t\tchangelog_lines += new_line\n\tpr_body += \"```\\n\\n\" + pr_links\n\n\treturn pr_body, changelog_lines\n\ndef get_pr_author_name(login: str) -> str:\n\tif len(DEVELOPERS) == 0:\n\t\tparse_contributors()\n\n\treturn DEVELOPERS[login] if login in DEVELOPERS else f\"@{login}\"\n\ndef parse_contributors():\n\tregex = re.compile(r'^\\s*?-\\s*?\\[([^]]+)\\]\\s*?\\(http.*/([^/]+)\\s*?\\)')\n\twith open('CONTRIBUTORS.md', 'rt') as handle:\n\t\tline = handle.readline()\n\t\twhile not ('##' in line and 'Contributors' in line):\n\t\t\tmatch = regex.match(line)\n\t\t\tif match:\n\t\t\t\tDEVELOPERS[match.group(2)] = match.group(1)\n\t\t\tline = handle.readline()\n\ndef create_pr_branch(day: datetime.date, author: str, base_branch: str) -> str:\n\tbranch_name = f\"changelog-updates-for-{day}-{author} {base_branch}\"\n\tcommand = f\"git checkout -b {branch_name}\"\n\tproc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)\n\t_, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\n\treturn branch_name\n\ndef push_pr_branch(branch_name: str):\n\tcommand = f\"git push -u origin {branch_name}\"\n\tproc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)\n\t_, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\ndef run(source_repository: str, target_repository: str, today: datetime.date):\n\tday = today - datetime.timedelta(days=1)\n\tprs = get_prs(source_repository, day)\n\tprs_length = len(prs)\n\tprint(f\"Found {prs_length} PRs\")\n\tif prs_length == 0:\n\t\treturn\n\n\tmerged_by_prs_map = parse_prs(prs)\n\tcreate_prs(target_repository, merged_by_prs_map, day)\n\nif __name__ == \"__main__\":\n\t# disable pager\n\tos.environ[\"GH_PAGER\"] = ''\n\t# set variables for Git\n\tos.environ[\"GIT_AUTHOR_NAME\"] = \"changelog-pr-bot\"\n\tos.environ[\"GIT_AUTHOR_EMAIL\"] = \"[email protected]\"\n\tos.environ[\"GIT_COMMITTER_NAME\"] = \"changelog-pr-bot\"\n\tos.environ[\"GIT_COMMITTER_EMAIL\"] = \"[email protected]\"\n\n\tsource_repository = 'coreruleset/coreruleset'\n\ttarget_repository = source_repository\n\t# the cron schedule for the workflow uses UTC\n\ttoday = datetime.datetime.now(datetime.timezone.utc).date()\n\n\tif len(sys.argv) > 1:\n\t\tsource_repository = sys.argv[1]\n\tif len(sys.argv) > 2:\n\t\ttarget_repository = sys.argv[2]\n\tif len(sys.argv) > 3:\n\t\ttoday = datetime.date.fromisoformat(sys.argv[3])\n\trun(source_repository, target_repository, today)\n", "path": ".github/create-changelog-prs.py"}], "after_files": [{"content": "#! 
/usr/bin/env python\n\nimport subprocess\nimport json\nimport datetime\nimport sys\nimport os\nimport re\n\nDEVELOPERS = dict()\n\ndef get_pr(repository: str, number: int) -> dict:\n\tcommand = f\"\"\"gh pr view \\\n\t\t--repo \"{repository}\" \\\n\t\t\"{number}\" \\\n\t\t--json mergeCommit,mergedBy,title,author,baseRefName,number\n\t\"\"\"\n\tproc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\tpr_json, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\treturn json.loads(pr_json)\n\ndef get_prs(repository: str, day: datetime.date) -> list:\n\tprint(f\"Fetching PRs for {day}\")\n\tcommand = f\"\"\"gh search prs \\\n\t\t--repo \"{repository}\" \\\n\t\t--merged-at \"{day}\" \\\n\t\t--json number \\\n\t\t-- \\\n\t\t-label:changelog-pr # ignore changelog prs\n\t\"\"\"\n\tproc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\tprs_json, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\tprs = list()\n\tfor result in json.loads(prs_json):\n\t\tprs.append(get_pr(repository, result[\"number\"]))\n\n\treturn prs\n\ndef parse_prs(prs: list) -> dict:\n\tpr_map = dict()\n\tfor pr in prs:\n\t\tmerged_by = pr[\"mergedBy\"][\"login\"]\n\t\tif merged_by not in pr_map:\n\t\t\tpr_list = list()\n\t\t\tpr_map[merged_by] = pr_list\n\t\telse:\n\t\t\tpr_list = pr_map[merged_by]\n\t\tpr_list.append(pr)\n\treturn pr_map\n\n\ndef create_prs(repository: str, merged_by_prs_map: dict, day: datetime.date):\n\tfor author in merged_by_prs_map.keys():\n\t\tcreate_pr(repository, author, merged_by_prs_map[author], day)\n\ndef create_pr(repository: str, merged_by: str, prs: list, day: datetime.date):\n\tif len(prs) == 0:\n\t\treturn\n\tprint(f\"Creating changelog PR for @{merged_by}\")\n\n\tsample_pr = prs[0]\n\tbase_branch = sample_pr[\"baseRefName\"]\n\tpr_branch_name = create_pr_branch(day, merged_by, base_branch)\n\tpr_body, changelog_lines = generate_content(prs, merged_by)\n\tcreate_commit(changelog_lines)\n\tpush_pr_branch(pr_branch_name)\n\n\tcommand = f\"\"\"gh pr create \\\n\t\t--repo \"{repository}\" \\\n\t\t--assignee \"{merged_by}\" \\\n\t\t--base \"{base_branch}\" \\\n\t\t--label \"changelog-pr\" \\\n\t\t--title \"chore: changelog updates for {day}, merged by @{merged_by}\" \\\n\t\t--body-file -\n\t\"\"\"\n\n\tproc = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\touts, errors = proc.communicate(input=pr_body.encode())\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\tprint(f\"Created PR: {outs.decode()}\")\n\ndef create_commit(changelog_lines: str):\n\twith open('.changes-pending.md', 'a') as changelog:\n\t\tchangelog.write(changelog_lines)\n\n\tcommand = \"git commit .changes-pending.md -m 'Add pending changelog entries'\"\n\tproc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)\n\t_, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\ndef generate_content(prs: list, merged_by: str) -> (str, str):\n\tchangelog_lines = \"\"\n\tpr_body = f\"This PR was auto-generated to update the changelog with the following entries, merged by @{merged_by}:\\n```\\n\"\n\tpr_links = \"\"\n\tfor pr in prs:\n\t\tpr_number = pr[\"number\"]\n\t\tpr_title = pr[\"title\"]\n\t\tpr_author = get_pr_author_name(pr[\"author\"][\"login\"])\n\t\tnew_line = f\" * {pr_title} ({pr_author}) [#{pr_number}]\\n\"\n\t\tpr_body += new_line\n\t\tpr_links += f\"- 
#{pr_number}\\n\"\n\n\t\tchangelog_lines += new_line\n\tpr_body += \"```\\n\\n\" + pr_links\n\n\treturn pr_body, changelog_lines\n\ndef get_pr_author_name(login: str) -> str:\n\tif len(DEVELOPERS) == 0:\n\t\tparse_contributors()\n\n\treturn DEVELOPERS[login] if login in DEVELOPERS else f\"@{login}\"\n\ndef parse_contributors():\n\tregex = re.compile(r'^\\s*?-\\s*?\\[([^]]+)\\]\\s*?\\(http.*/([^/]+)\\s*?\\)')\n\twith open('CONTRIBUTORS.md', 'rt') as handle:\n\t\tline = handle.readline()\n\t\twhile not ('##' in line and 'Contributors' in line):\n\t\t\tmatch = regex.match(line)\n\t\t\tif match:\n\t\t\t\tDEVELOPERS[match.group(2)] = match.group(1)\n\t\t\tline = handle.readline()\n\ndef create_pr_branch(day: datetime.date, author: str, base_branch: str) -> str:\n\tbranch_name = f\"changelog-updates-for-{day}-{author} {base_branch}\"\n\tcommand = f\"git checkout -b {branch_name}\"\n\tproc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)\n\t_, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\n\treturn branch_name\n\ndef push_pr_branch(branch_name: str):\n\tcommand = f\"git push -u origin {branch_name}\"\n\tproc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)\n\t_, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\ndef run(source_repository: str, target_repository: str, today: datetime.date):\n\tday = today - datetime.timedelta(days=1)\n\tprs = get_prs(source_repository, day)\n\tprs_length = len(prs)\n\tprint(f\"Found {prs_length} PRs\")\n\tif prs_length == 0:\n\t\treturn\n\n\tmerged_by_prs_map = parse_prs(prs)\n\tcreate_prs(target_repository, merged_by_prs_map, day)\n\nif __name__ == \"__main__\":\n\t# disable pager\n\tos.environ[\"GH_PAGER\"] = ''\n\t# set variables for Git\n\tos.environ[\"GIT_AUTHOR_NAME\"] = \"changelog-pr-bot\"\n\tos.environ[\"GIT_AUTHOR_EMAIL\"] = \"[email protected]\"\n\tos.environ[\"GIT_COMMITTER_NAME\"] = \"changelog-pr-bot\"\n\tos.environ[\"GIT_COMMITTER_EMAIL\"] = \"[email protected]\"\n\n\tsource_repository = 'coreruleset/coreruleset'\n\ttarget_repository = source_repository\n\t# the cron schedule for the workflow uses UTC\n\ttoday = datetime.datetime.now(datetime.timezone.utc).date()\n\n\tif len(sys.argv) > 1:\n\t\tsource_repository = sys.argv[1]\n\tif len(sys.argv) > 2:\n\t\ttarget_repository = sys.argv[2]\n\tif len(sys.argv) > 3:\n\t\ttoday = datetime.date.fromisoformat(sys.argv[3])\n\trun(source_repository, target_repository, today)\n", "path": ".github/create-changelog-prs.py"}]} |
gh_patches_debug_1041 | rasdani/github-patches | git_diff | openshift__openshift-ansible-2630 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
maximum recursion depth exceeded -- related to callback/default.py
Running the `ansible-playbook -b --become-user root -i ansible-ose-inventory /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml`
i am getting
```
statically included: /usr/share/ansible/openshift-ansible/roles/openshift_hosted/tasks/registry/registry.yml
statically included: /usr/share/ansible/openshift-ansible/roles/openshift_metrics/tasks/install.yml
ERROR! Unexpected Exception: maximum recursion depth exceeded while calling a Python object
the full traceback was:
Traceback (most recent call last):
File "/bin/ansible-playbook", line 103, in <module>
exit_code = cli.run()
File "/usr/lib/python2.7/site-packages/ansible/cli/playbook.py", line 159, in run
results = pbex.run()
File "/usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py", line 89, in run
self._tqm.load_callbacks()
File "/usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py", line 172, in load_callbacks
self._stdout_callback = callback_loader.get(self._stdout_callback)
File "/usr/lib/python2.7/site-packages/ansible/plugins/__init__.py", line 358, in get
obj = obj(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py", line 41, in __init__
super(CallbackModule, self).__init__()
...
super(CallbackModule, self).__init__()
File "/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py", line 41, in __init__
super(CallbackModule, self).__init__()
File "/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py", line 41, in __init__
super(CallbackModule, self).__init__()
RuntimeError: maximum recursion depth exceeded while calling a Python object
```
##### Version
```
atomic-openshift-utils-3.3.37-1.git.0.10ff25b.el7.noarch
openshift-ansible-3.3.37-1.git.0.10ff25b.el7.noarch
```
The playbooks are installed from AtomicOpenShift/3.3/2016-10-18.2
The 3.4 has same problem. 3.2 Doesn't
```
openshift-ansible.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-callback-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-docs.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-filter-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-lookup-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-playbooks.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-roles.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
ansible-playbook 2.2.0.0
config file = /root/ansible.cfg
configured module search path = Default w/o overrides
```
##### Steps To Reproduce
In description
##### Current Result
Infinite recursion with ansible 2.2.0.0
No problem with ansible 2.1.2.0
The difference seems to be that the 2.1.2.0 do not have the `__init__` in the
```
/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py
```
```
class CallbackModule(CallbackBase):
...
def __init__(self):
self._play = None
self._last_task_banner = None
super(CallbackModule, self).__init__()
```
If I remove it from the same file on the old ansible, deployment seems
to work. Though I have no idea why it get's to the infinite recursion.
It doesn't make sense to me.
##### Expected Result
No problems with the infinite recursion
##### Additional Information
Red Hat Enterprise Linux Server release 7.2 (Maipo)
The inventory file
```
[OSEv3:children]
masters
nodes
[OSEv3:vars]
deployment_type=openshift-enterprise
ansible_ssh_user=cloud-user
ansible_sudo=true
ansible_sudo_user=root
openshift_use_manageiq=True
#use_cluster_metrics=true
openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://download.xxx.redhat.com/rcm-guest/puddles/RHAOS/AtomicOpenShift-errata/3.3/latest/RH7-RHAOS-3.3/x86_64/os/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}, {'id':'rhel-extras-candidate','name':'rhel-extras-candidate','baseurl':'http://download.xxx..redhat.com/brewroot/repos/extras-rhel-7.2-candidate/latest/x86_64/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}]
openshift_docker_additional_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
openshift_docker_insecure_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
[masters]
ose3-master-08w85 openshift_scheduleable=True openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85
[nodes]
ose3-master-08w85 openshift_node_labels="{'region':'infra','zone':'default'}" openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85
ose3-node0-08w85 openshift_node_labels="{'region':'primary','zone':'east'}" openshift_hostname=ose3-node0-08w85 openshift_public_hostname=ose3-node0-08w85
ose3-node1-08w85 openshift_node_labels="{'region':'primary','zone':'west'}" openshift_hostname=ose3-node1-08w85 openshift_public_hostname=ose3-node1-08w85
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `callback_plugins/default.py`
Content:
```
1 '''Plugin to override the default output logic.'''
2
3 # upstream: https://gist.github.com/cliffano/9868180
4
5 # This program is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with this program. If not, see <http://www.gnu.org/licenses/>.
17
18
19 # For some reason this has to be done
20 import imp
21 import os
22
23 ANSIBLE_PATH = imp.find_module('ansible')[1]
24 DEFAULT_PATH = os.path.join(ANSIBLE_PATH, 'plugins/callback/default.py')
25 DEFAULT_MODULE = imp.load_source(
26 'ansible.plugins.callback.default',
27 DEFAULT_PATH
28 )
29
30 try:
31 from ansible.plugins.callback import CallbackBase
32 BASECLASS = CallbackBase
33 except ImportError: # < ansible 2.1
34 BASECLASS = DEFAULT_MODULE.CallbackModule
35
36
37 class CallbackModule(DEFAULT_MODULE.CallbackModule): # pylint: disable=too-few-public-methods,no-init
38 '''
39 Override for the default callback module.
40
41 Render std err/out outside of the rest of the result which it prints with
42 indentation.
43 '''
44 CALLBACK_VERSION = 2.0
45 CALLBACK_TYPE = 'stdout'
46 CALLBACK_NAME = 'default'
47
48 def _dump_results(self, result):
49 '''Return the text to output for a result.'''
50 result['_ansible_verbose_always'] = True
51
52 save = {}
53 for key in ['stdout', 'stdout_lines', 'stderr', 'stderr_lines', 'msg']:
54 if key in result:
55 save[key] = result.pop(key)
56
57 output = BASECLASS._dump_results(self, result) # pylint: disable=protected-access
58
59 for key in ['stdout', 'stderr', 'msg']:
60 if key in save and save[key]:
61 output += '\n\n%s:\n\n%s\n' % (key.upper(), save[key])
62
63 for key, value in save.items():
64 result[key] = value
65
66 return output
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/callback_plugins/default.py b/callback_plugins/default.py
--- a/callback_plugins/default.py
+++ b/callback_plugins/default.py
@@ -45,6 +45,9 @@
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'default'
+ def __init__(self, *args, **kwargs):
+ BASECLASS.__init__(self, *args, **kwargs)
+
def _dump_results(self, result):
'''Return the text to output for a result.'''
result['_ansible_verbose_always'] = True
| {"golden_diff": "diff --git a/callback_plugins/default.py b/callback_plugins/default.py\n--- a/callback_plugins/default.py\n+++ b/callback_plugins/default.py\n@@ -45,6 +45,9 @@\n CALLBACK_TYPE = 'stdout'\n CALLBACK_NAME = 'default'\n \n+ def __init__(self, *args, **kwargs):\n+ BASECLASS.__init__(self, *args, **kwargs)\n+\n def _dump_results(self, result):\n '''Return the text to output for a result.'''\n result['_ansible_verbose_always'] = True\n", "issue": "maximum recursion depth exceeded -- related to callback/default.py\nRunning the `ansible-playbook -b --become-user root -i ansible-ose-inventory /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml`\n\ni am getting\n\n```\nstatically included: /usr/share/ansible/openshift-ansible/roles/openshift_hosted/tasks/registry/registry.yml\nstatically included: /usr/share/ansible/openshift-ansible/roles/openshift_metrics/tasks/install.yml\nERROR! Unexpected Exception: maximum recursion depth exceeded while calling a Python object\nthe full traceback was:\n\nTraceback (most recent call last):\n File \"/bin/ansible-playbook\", line 103, in <module>\n exit_code = cli.run()\n File \"/usr/lib/python2.7/site-packages/ansible/cli/playbook.py\", line 159, in run\n results = pbex.run()\n File \"/usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py\", line 89, in run\n self._tqm.load_callbacks()\n File \"/usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py\", line 172, in load_callbacks\n self._stdout_callback = callback_loader.get(self._stdout_callback)\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/__init__.py\", line 358, in get\n obj = obj(*args, **kwargs)\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\", line 41, in __init__\n super(CallbackModule, self).__init__()\n...\n super(CallbackModule, self).__init__()\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\", line 41, in __init__\n super(CallbackModule, self).__init__()\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\", line 41, in __init__\n super(CallbackModule, self).__init__()\nRuntimeError: maximum recursion depth exceeded while calling a Python object\n```\n##### Version\n\n```\natomic-openshift-utils-3.3.37-1.git.0.10ff25b.el7.noarch\nopenshift-ansible-3.3.37-1.git.0.10ff25b.el7.noarch\n```\n\nThe playbooks are installed from AtomicOpenShift/3.3/2016-10-18.2\nThe 3.4 has same problem. 
3.2 Doesn't\n\n```\nopenshift-ansible.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-callback-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-docs.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-filter-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-lookup-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-playbooks.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-roles.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\n\nansible-playbook 2.2.0.0\n config file = /root/ansible.cfg\n configured module search path = Default w/o overrides\n```\n##### Steps To Reproduce\n\nIn description\n##### Current Result\n\nInfinite recursion with ansible 2.2.0.0\nNo problem with ansible 2.1.2.0\n\n The difference seems to be that the 2.1.2.0 do not have the `__init__` in the\n\n```\n /usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\n```\n\n```\nclass CallbackModule(CallbackBase):\n...\n def __init__(self):\n\n self._play = None\n self._last_task_banner = None\n super(CallbackModule, self).__init__()\n```\n\nIf I remove it from the same file on the old ansible, deployment seems\nto work. Though I have no idea why it get's to the infinite recursion.\nIt doesn't make sense to me.\n##### Expected Result\n\nNo problems with the infinite recursion\n##### Additional Information\n\nRed Hat Enterprise Linux Server release 7.2 (Maipo)\n\nThe inventory file\n\n```\n[OSEv3:children]\nmasters\nnodes\n\n[OSEv3:vars]\ndeployment_type=openshift-enterprise\nansible_ssh_user=cloud-user\nansible_sudo=true\nansible_sudo_user=root\nopenshift_use_manageiq=True\n#use_cluster_metrics=true\n\nopenshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://download.xxx.redhat.com/rcm-guest/puddles/RHAOS/AtomicOpenShift-errata/3.3/latest/RH7-RHAOS-3.3/x86_64/os/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}, {'id':'rhel-extras-candidate','name':'rhel-extras-candidate','baseurl':'http://download.xxx..redhat.com/brewroot/repos/extras-rhel-7.2-candidate/latest/x86_64/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}]\nopenshift_docker_additional_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888\nopenshift_docker_insecure_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888\n\n[masters]\nose3-master-08w85 openshift_scheduleable=True openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85\n\n[nodes]\nose3-master-08w85 openshift_node_labels=\"{'region':'infra','zone':'default'}\" openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85\n\nose3-node0-08w85 openshift_node_labels=\"{'region':'primary','zone':'east'}\" openshift_hostname=ose3-node0-08w85 openshift_public_hostname=ose3-node0-08w85\nose3-node1-08w85 openshift_node_labels=\"{'region':'primary','zone':'west'}\" openshift_hostname=ose3-node1-08w85 openshift_public_hostname=ose3-node1-08w85\n```\n\n", "before_files": [{"content": "'''Plugin to override the default output logic.'''\n\n# upstream: https://gist.github.com/cliffano/9868180\n\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed 
in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n\n\n# For some reason this has to be done\nimport imp\nimport os\n\nANSIBLE_PATH = imp.find_module('ansible')[1]\nDEFAULT_PATH = os.path.join(ANSIBLE_PATH, 'plugins/callback/default.py')\nDEFAULT_MODULE = imp.load_source(\n 'ansible.plugins.callback.default',\n DEFAULT_PATH\n)\n\ntry:\n from ansible.plugins.callback import CallbackBase\n BASECLASS = CallbackBase\nexcept ImportError: # < ansible 2.1\n BASECLASS = DEFAULT_MODULE.CallbackModule\n\n\nclass CallbackModule(DEFAULT_MODULE.CallbackModule): # pylint: disable=too-few-public-methods,no-init\n '''\n Override for the default callback module.\n\n Render std err/out outside of the rest of the result which it prints with\n indentation.\n '''\n CALLBACK_VERSION = 2.0\n CALLBACK_TYPE = 'stdout'\n CALLBACK_NAME = 'default'\n\n def _dump_results(self, result):\n '''Return the text to output for a result.'''\n result['_ansible_verbose_always'] = True\n\n save = {}\n for key in ['stdout', 'stdout_lines', 'stderr', 'stderr_lines', 'msg']:\n if key in result:\n save[key] = result.pop(key)\n\n output = BASECLASS._dump_results(self, result) # pylint: disable=protected-access\n\n for key in ['stdout', 'stderr', 'msg']:\n if key in save and save[key]:\n output += '\\n\\n%s:\\n\\n%s\\n' % (key.upper(), save[key])\n\n for key, value in save.items():\n result[key] = value\n\n return output\n", "path": "callback_plugins/default.py"}], "after_files": [{"content": "'''Plugin to override the default output logic.'''\n\n# upstream: https://gist.github.com/cliffano/9868180\n\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n\n\n# For some reason this has to be done\nimport imp\nimport os\n\nANSIBLE_PATH = imp.find_module('ansible')[1]\nDEFAULT_PATH = os.path.join(ANSIBLE_PATH, 'plugins/callback/default.py')\nDEFAULT_MODULE = imp.load_source(\n 'ansible.plugins.callback.default',\n DEFAULT_PATH\n)\n\ntry:\n from ansible.plugins.callback import CallbackBase\n BASECLASS = CallbackBase\nexcept ImportError: # < ansible 2.1\n BASECLASS = DEFAULT_MODULE.CallbackModule\n\n\nclass CallbackModule(DEFAULT_MODULE.CallbackModule): # pylint: disable=too-few-public-methods,no-init\n '''\n Override for the default callback module.\n\n Render std err/out outside of the rest of the result which it prints with\n indentation.\n '''\n CALLBACK_VERSION = 2.0\n CALLBACK_TYPE = 'stdout'\n CALLBACK_NAME = 'default'\n\n def __init__(self, *args, **kwargs):\n BASECLASS.__init__(self, *args, **kwargs)\n\n def _dump_results(self, result):\n '''Return the text to output for a result.'''\n result['_ansible_verbose_always'] = True\n\n save = {}\n for key in ['stdout', 'stdout_lines', 'stderr', 'stderr_lines', 'msg']:\n if key in result:\n save[key] = result.pop(key)\n\n output = BASECLASS._dump_results(self, result) # pylint: disable=protected-access\n\n for key in ['stdout', 'stderr', 'msg']:\n if key in save and save[key]:\n output += '\\n\\n%s:\\n\\n%s\\n' % (key.upper(), save[key])\n\n for key, value in save.items():\n result[key] = value\n\n return output\n", "path": "callback_plugins/default.py"}]} |
gh_patches_debug_1042 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-3045 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The virsh_list_all parser is raising ValueError exceptions in production
The VirshListAll parser is throwing a large number of ValueError("Line containing 'Id,Name,State' was not found in table") exceptions in production.
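For context, a minimal sketch of the failure mode: when virsh cannot reach libvirtd the captured output is an error line rather than the expected table, so `parse_fixed_table` never finds the `Id Name State` heading. The exact error string below is only illustrative:

```python
# Hypothetical collector output when libvirtd is unreachable (exact wording may differ)
content = [
    "error: failed to connect to the hypervisor",
    "",
]

# parse_fixed_table(content, heading_ignore=['Id', 'Name', 'State'], ...) raises
# ValueError("Line containing 'Id,Name,State' was not found in table") on such input.

# Dropping error/empty lines first leaves an empty list, which the parser can
# treat as "no VMs" instead of raising:
content = [l for l in content if not l.startswith("error: ") and l != ""]
assert content == []
```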
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `insights/parsers/virsh_list_all.py`
Content:
```
1 """VirshListAll - command ``virsh --readonly list --all``
2 =========================================================
3
4 This module provides VM status using output of command ``virsh --readonly list --all``.
5 """
6 from collections import namedtuple
7
8 from insights.specs import Specs
9 from .. import CommandParser, parser
10 from . import parse_fixed_table, keyword_search
11
12
13 @parser(Specs.virsh_list_all)
14 class VirshListAll(CommandParser):
15 """Parsing output of ``virsh --readonly list --all``.
16
17 Typical output of ``virsh --readonly list --all`` command is::
18
19 Id Name State
20 ----------------------------------------------------
21 2 rhel7.4 running
22 4 rhel7.0 paused
23 - centos6.8-router shut off
24 - cfme-5.7.13 shut off
25 - cfme-rhos-5.9.0.15 shut off
26 - fedora-24-kernel shut off
27 - fedora-saio_fedoraSaio shut off
28 - fedora24-misc shut off
29 - freebsd11.0 shut off
30 - guixSD shut off
31 - miq-gap-1 shut off
32 - rhel7.2 shut off
33 - RHOSP10 shut off
34
35
36 Examples:
37
38 >>> len(output.search(state='shut off')) == 11
39 True
40 >>> len(output.search(id=None)) == 11
41 True
42 >>> len(output.search(id=2)) == 1
43 True
44 >>> output.search(name='rhel7.4') == [{'state': 'running', 'id': 2, 'name': 'rhel7.4'}]
45 True
46 >>> output.get_vm_state('rhel7.0') == 'paused'
47 True
48 >>> output.get_vm_state('rhel9.0') is None
49 True
50 >>> 'cfme' in output
51 False
52 >>> 'cfme-5.7.13' in output
53 True
54
55 Attributes:
56 fields (list): List of ``KeyValue`` namedtupules for each line
57 in the command.
58
59 cols (list): List id key value pair derived from the command.
60
61 keywords (list): keywords present in the command, each
62 keyword is converted to lowercase.
63
64 """
65 keyvalue = namedtuple('KeyValue',
66 ['name', 'state', 'id', 'name_lower'])
67 """namedtuple: Represent name value pair as a namedtuple with case."""
68 def _cleanup(self):
69 for col in self.cols:
70 if col['id'] == '-':
71 col['id'] = None
72 else:
73 col['id'] = (lambda x: int(x) if x.isdigit() else x)(col['id'])
74
75 def parse_content(self, content):
76 self.fields = []
77 self.cols = []
78 self.keywords = []
79 if not content:
80 return
81
82 self.cols = parse_fixed_table(content,
83 heading_ignore=['Id', 'Name', 'State'],
84 header_substitute=[('Id', 'id'), ('Name', 'name'), ('State', 'state')])[1:] # noqa
85 self._cleanup()
86
87 for item in self.cols:
88 self.fields.append(self.keyvalue(item['name'], item['state'], item['id'], item['name'].lower())) # noqa
89 self.keywords = [name.name_lower for name in self.fields]
90
91 def __contains__(self, keyword):
92 return keyword.lower() in self.keywords
93
94 def __iter__(self):
95 return iter(self.fields)
96
97 def search(self, **kw):
98 '''Search item based on key value pair.
99
100 Example:
101
102 >>> len(output.search(state='shut off')) == 11
103 True
104 >>> len(output.search(id=None)) == 11
105 True
106 >>> len(output.search(id=2)) == 1
107 True
108 '''
109 return keyword_search(self.cols, **kw)
110
111 def get_vm_state(self, vmname):
112 '''Get VM state associated with vmname
113
114 Typical output is ``virsh --readonly list --all`` command::
115
116 Id Name State
117 ----------------------------------------------------
118 2 rhel7.4 running
119 4 rhel7.0 paused
120
121
122 Example:
123
124 >>> output.get_vm_state('rhel7.0')
125 'paused'
126
127 Args:
128
129 vmname (str): A key. For ex. ``rhel7.0``.
130
131 Returns:
132
133 str: State of VM. Returns None if, ``vmname`` does not exist.
134 '''
135 if vmname.lower() in self.keywords:
136 return self.search(name=vmname)[0]['state']
137 return None
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/insights/parsers/virsh_list_all.py b/insights/parsers/virsh_list_all.py
--- a/insights/parsers/virsh_list_all.py
+++ b/insights/parsers/virsh_list_all.py
@@ -76,6 +76,10 @@
self.fields = []
self.cols = []
self.keywords = []
+ # Check and remove any error message, or empty lines. This to
+ # prevent any ValueError exceptions when parse_fixed_table is
+ # called below.
+ content = [l for l in content if not l.startswith("error: ") and l != ""]
if not content:
return
| {"golden_diff": "diff --git a/insights/parsers/virsh_list_all.py b/insights/parsers/virsh_list_all.py\n--- a/insights/parsers/virsh_list_all.py\n+++ b/insights/parsers/virsh_list_all.py\n@@ -76,6 +76,10 @@\n self.fields = []\n self.cols = []\n self.keywords = []\n+ # Check and remove any error message, or empty lines. This to\n+ # prevent any ValueError exceptions when parse_fixed_table is\n+ # called below.\n+ content = [l for l in content if not l.startswith(\"error: \") and l != \"\"]\n if not content:\n return\n", "issue": "The virsh_list_all parser is raising ValueError exceptions in production\nThe VirshListAll parser is throwing a large number of the exception ValueError(\"Line containing 'Id,Name,State' was not found in table\",) in production.\n", "before_files": [{"content": "\"\"\"VirshListAll - command ``virsh --readonly list --all``\n=========================================================\n\nThis module provides VM status using output of command ``virsh --readonly list --all``.\n\"\"\"\nfrom collections import namedtuple\n\nfrom insights.specs import Specs\nfrom .. import CommandParser, parser\nfrom . import parse_fixed_table, keyword_search\n\n\n@parser(Specs.virsh_list_all)\nclass VirshListAll(CommandParser):\n \"\"\"Parsing output of ``virsh --readonly list --all``.\n\n Typical output of ``virsh --readonly list --all`` command is::\n\n Id Name State\n ----------------------------------------------------\n 2 rhel7.4 running\n 4 rhel7.0 paused\n - centos6.8-router shut off\n - cfme-5.7.13 shut off\n - cfme-rhos-5.9.0.15 shut off\n - fedora-24-kernel shut off\n - fedora-saio_fedoraSaio shut off\n - fedora24-misc shut off\n - freebsd11.0 shut off\n - guixSD shut off\n - miq-gap-1 shut off\n - rhel7.2 shut off\n - RHOSP10 shut off\n\n\n Examples:\n\n >>> len(output.search(state='shut off')) == 11\n True\n >>> len(output.search(id=None)) == 11\n True\n >>> len(output.search(id=2)) == 1\n True\n >>> output.search(name='rhel7.4') == [{'state': 'running', 'id': 2, 'name': 'rhel7.4'}]\n True\n >>> output.get_vm_state('rhel7.0') == 'paused'\n True\n >>> output.get_vm_state('rhel9.0') is None\n True\n >>> 'cfme' in output\n False\n >>> 'cfme-5.7.13' in output\n True\n\n Attributes:\n fields (list): List of ``KeyValue`` namedtupules for each line\n in the command.\n\n cols (list): List id key value pair derived from the command.\n\n keywords (list): keywords present in the command, each\n keyword is converted to lowercase.\n\n \"\"\"\n keyvalue = namedtuple('KeyValue',\n ['name', 'state', 'id', 'name_lower'])\n \"\"\"namedtuple: Represent name value pair as a namedtuple with case.\"\"\"\n def _cleanup(self):\n for col in self.cols:\n if col['id'] == '-':\n col['id'] = None\n else:\n col['id'] = (lambda x: int(x) if x.isdigit() else x)(col['id'])\n\n def parse_content(self, content):\n self.fields = []\n self.cols = []\n self.keywords = []\n if not content:\n return\n\n self.cols = parse_fixed_table(content,\n heading_ignore=['Id', 'Name', 'State'],\n header_substitute=[('Id', 'id'), ('Name', 'name'), ('State', 'state')])[1:] # noqa\n self._cleanup()\n\n for item in self.cols:\n self.fields.append(self.keyvalue(item['name'], item['state'], item['id'], item['name'].lower())) # noqa\n self.keywords = [name.name_lower for name in self.fields]\n\n def __contains__(self, keyword):\n return keyword.lower() in self.keywords\n\n def __iter__(self):\n return iter(self.fields)\n\n def search(self, **kw):\n '''Search item based on key value pair.\n\n Example:\n\n >>> 
len(output.search(state='shut off')) == 11\n True\n >>> len(output.search(id=None)) == 11\n True\n >>> len(output.search(id=2)) == 1\n True\n '''\n return keyword_search(self.cols, **kw)\n\n def get_vm_state(self, vmname):\n '''Get VM state associated with vmname\n\n Typical output is ``virsh --readonly list --all`` command::\n\n Id Name State\n ----------------------------------------------------\n 2 rhel7.4 running\n 4 rhel7.0 paused\n\n\n Example:\n\n >>> output.get_vm_state('rhel7.0')\n 'paused'\n\n Args:\n\n vmname (str): A key. For ex. ``rhel7.0``.\n\n Returns:\n\n str: State of VM. Returns None if, ``vmname`` does not exist.\n '''\n if vmname.lower() in self.keywords:\n return self.search(name=vmname)[0]['state']\n return None\n", "path": "insights/parsers/virsh_list_all.py"}], "after_files": [{"content": "\"\"\"VirshListAll - command ``virsh --readonly list --all``\n=========================================================\n\nThis module provides VM status using output of command ``virsh --readonly list --all``.\n\"\"\"\nfrom collections import namedtuple\n\nfrom insights.specs import Specs\nfrom .. import CommandParser, parser\nfrom . import parse_fixed_table, keyword_search\n\n\n@parser(Specs.virsh_list_all)\nclass VirshListAll(CommandParser):\n \"\"\"Parsing output of ``virsh --readonly list --all``.\n\n Typical output of ``virsh --readonly list --all`` command is::\n\n Id Name State\n ----------------------------------------------------\n 2 rhel7.4 running\n 4 rhel7.0 paused\n - centos6.8-router shut off\n - cfme-5.7.13 shut off\n - cfme-rhos-5.9.0.15 shut off\n - fedora-24-kernel shut off\n - fedora-saio_fedoraSaio shut off\n - fedora24-misc shut off\n - freebsd11.0 shut off\n - guixSD shut off\n - miq-gap-1 shut off\n - rhel7.2 shut off\n - RHOSP10 shut off\n\n\n Examples:\n\n >>> len(output.search(state='shut off')) == 11\n True\n >>> len(output.search(id=None)) == 11\n True\n >>> len(output.search(id=2)) == 1\n True\n >>> output.search(name='rhel7.4') == [{'state': 'running', 'id': 2, 'name': 'rhel7.4'}]\n True\n >>> output.get_vm_state('rhel7.0') == 'paused'\n True\n >>> output.get_vm_state('rhel9.0') is None\n True\n >>> 'cfme' in output\n False\n >>> 'cfme-5.7.13' in output\n True\n\n Attributes:\n fields (list): List of ``KeyValue`` namedtupules for each line\n in the command.\n\n cols (list): List id key value pair derived from the command.\n\n keywords (list): keywords present in the command, each\n keyword is converted to lowercase.\n\n \"\"\"\n keyvalue = namedtuple('KeyValue',\n ['name', 'state', 'id', 'name_lower'])\n \"\"\"namedtuple: Represent name value pair as a namedtuple with case.\"\"\"\n def _cleanup(self):\n for col in self.cols:\n if col['id'] == '-':\n col['id'] = None\n else:\n col['id'] = (lambda x: int(x) if x.isdigit() else x)(col['id'])\n\n def parse_content(self, content):\n self.fields = []\n self.cols = []\n self.keywords = []\n # Check and remove any error message, or empty lines. 
This to\n # prevent any ValueError exceptions when parse_fixed_table is\n # called below.\n content = [l for l in content if not l.startswith(\"error: \") and l != \"\"]\n if not content:\n return\n\n self.cols = parse_fixed_table(content,\n heading_ignore=['Id', 'Name', 'State'],\n header_substitute=[('Id', 'id'), ('Name', 'name'), ('State', 'state')])[1:] # noqa\n self._cleanup()\n\n for item in self.cols:\n self.fields.append(self.keyvalue(item['name'], item['state'], item['id'], item['name'].lower())) # noqa\n self.keywords = [name.name_lower for name in self.fields]\n\n def __contains__(self, keyword):\n return keyword.lower() in self.keywords\n\n def __iter__(self):\n return iter(self.fields)\n\n def search(self, **kw):\n '''Search item based on key value pair.\n\n Example:\n\n >>> len(output.search(state='shut off')) == 11\n True\n >>> len(output.search(id=None)) == 11\n True\n >>> len(output.search(id=2)) == 1\n True\n '''\n return keyword_search(self.cols, **kw)\n\n def get_vm_state(self, vmname):\n '''Get VM state associated with vmname\n\n Typical output is ``virsh --readonly list --all`` command::\n\n Id Name State\n ----------------------------------------------------\n 2 rhel7.4 running\n 4 rhel7.0 paused\n\n\n Example:\n\n >>> output.get_vm_state('rhel7.0')\n 'paused'\n\n Args:\n\n vmname (str): A key. For ex. ``rhel7.0``.\n\n Returns:\n\n str: State of VM. Returns None if, ``vmname`` does not exist.\n '''\n if vmname.lower() in self.keywords:\n return self.search(name=vmname)[0]['state']\n return None\n", "path": "insights/parsers/virsh_list_all.py"}]} |
gh_patches_debug_1043 | rasdani/github-patches | git_diff | cornellius-gp__gpytorch-1685 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]
# 🐛 Bug: Possible error with multitask learning with additive kernel structure
<!-- A clear and concise description of what the bug is. -->
When I define the multitask kernel in the class MultitaskGPModel as
self.covar_module = (gpytorch.kernels.ScaleKernel(
gpytorch.kernels.PeriodicKernel(batch_shape=torch.Size([num_latents])) *
gpytorch.kernels.RQKernel(batch_shape=torch.Size([num_latents])),
batch_shape=torch.Size([num_latents])
) +
gpytorch.kernels.ScaleKernel(
gpytorch.kernels.MaternKernel(nu=0.5, batch_shape=torch.Size([num_latents])),
batch_shape=torch.Size([num_latents])
)
)
which uses the additive kernel as its outermost layer, and I apply the class to data as
w_l = 50
num_latents = 24
Xc_t_npa = np.arange(0,w_l,1,dtype=np.float32).reshape(-1, 1)
Xc_t = torch.from_numpy(Xc_t_npa).type(torch.Tensor)
model_mul(Xc_t)
I get
'RuntimeError: The expected shape of the kernel was torch.Size([100, 100]), but got torch.Size([24, 100, 100]). This is likely a bug in GPyTorch'.
This behavior seems not to change when changing the number of tasks or the number of latent gps.
If I use the same kernel in a non-batch setting, it works smoothly.
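Concretely, the non-batch form of the same kernel (a sketch of what I mean; only the `batch_shape` arguments are dropped) evaluates without any shape error:

```python
import torch
import gpytorch

x = torch.arange(0, 50, dtype=torch.float).reshape(-1, 1)
covar_module = (
    gpytorch.kernels.ScaleKernel(
        gpytorch.kernels.PeriodicKernel() * gpytorch.kernels.RQKernel()
    )
    + gpytorch.kernels.ScaleKernel(gpytorch.kernels.MaternKernel(nu=0.5))
)
print(covar_module(x).evaluate().shape)  # torch.Size([50, 50]) -- evaluates fine
```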
I wrote the batched problem with another kernel which is mathematically the same but which doesn't use the outer additive kernel, and it works smoothly. Unfortunately, the role of the subkernel parameters in the new form is not the same as that of the malfunctioning kernel, and I have to re-run a lot of past non-batch fits in the new form to make them comparable with the new setting.
## To reproduce
** Code snippet to reproduce **
```python
# Your code goes here
# Please make sure it does not require any external dependencies (other than PyTorch!)
# (We much prefer small snippets rather than links to existing libraries!)
```
Zc_intra_np = np.arange(0, 24, 1).reshape(-1, 1)
Zc_intra = torch.tensor(Zc_intra_np, dtype=torch.float)
w_l = 50
num_latents = 24
num_tasks = 12
Xc_t_npa = np.arange(0,w_l,1,dtype=np.float32).reshape(-1, 1)
Xc_t = torch.from_numpy(Xc_t_npa).type(torch.Tensor)
model_mul = MultitaskGPModel()
likelihood_mul = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=num_tasks)
model_mul(Xc_t)
class MultitaskGPModel(gpytorch.models.ApproximateGP):
def __init__(self):
inducing_points = Zc_intra
variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
inducing_points.size(-2), batch_shape=torch.Size([num_latents])
)
variational_strategy = gpytorch.variational.LMCVariationalStrategy(
gpytorch.variational.VariationalStrategy(
self, inducing_points, variational_distribution, learn_inducing_locations=True
),
num_tasks=num_tasks,
num_latents=num_latents,
# could be 0
latent_dim=-1
)
super().__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean(batch_shape=torch.Size([num_latents]))
self.covar_module = gpytorch.kernels.ScaleKernel(
gpytorch.kernels.PeriodicKernel(batch_shape=torch.Size([num_latents])) *
gpytorch.kernels.RQKernel(batch_shape=torch.Size([num_latents])) +
gpytorch.kernels.ScaleKernel(
gpytorch.kernels.MaternKernel(nu=0.5, batch_shape=torch.Size([num_latents])),
batch_shape=torch.Size([num_latents])),
batch_shape=torch.Size([num_latents])
)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
** Stack trace/error message **
```
Traceback (most recent call last):
File "<ipython-input-398-5fc832e3a3f0>", line 1, in <module>
model_mul(Xc_t)
File "C:\Users\lucheron\Anaconda3\envs\pyro16_py37\lib\site-packages\gpytorch\models\approximate_gp.py", line 81, in __call__
return self.variational_strategy(inputs, prior=prior, **kwargs)
File "C:\Users\lucheron\Anaconda3\envs\pyro16_py37\lib\site-packages\gpytorch\variational\lmc_variational_strategy.py", line 124, in __call__
function_dist = self.base_variational_strategy(x, prior=prior, **kwargs)
File "C:\Users\lucheron\Anaconda3\envs\pyro16_py37\lib\site-packages\gpytorch\variational\variational_strategy.py", line 168, in __call__
return super().__call__(x, prior=prior, **kwargs)
File "C:\Users\lucheron\Anaconda3\envs\pyro16_py37\lib\site-packages\gpytorch\variational\_variational_strategy.py", line 129, in __call__
**kwargs,
File "C:\Users\lucheron\Anaconda3\envs\pyro16_py37\lib\site-packages\gpytorch\module.py", line 28, in __call__
outputs = self.forward(*inputs, **kwargs)
File "C:\Users\lucheron\Anaconda3\envs\pyro16_py37\lib\site-packages\gpytorch\variational\variational_strategy.py", line 96, in forward
induc_induc_covar = full_covar[..., :num_induc, :num_induc].add_jitter()
File "C:\Users\lucheron\Anaconda3\envs\pyro16_py37\lib\site-packages\gpytorch\lazy\lazy_evaluated_kernel_tensor.py", line 237, in add_jitter
return self.evaluate_kernel().add_jitter(jitter_val)
File "C:\Users\lucheron\Anaconda3\envs\pyro16_py37\lib\site-packages\gpytorch\utils\memoize.py", line 59, in g
return _add_to_cache(self, cache_name, method(self, *args, **kwargs), *args, kwargs_pkl=kwargs_pkl)
File "C:\Users\lucheron\Anaconda3\envs\pyro16_py37\lib\site-packages\gpytorch\lazy\lazy_evaluated_kernel_tensor.py", line 291, in evaluate_kernel
f"The expected shape of the kernel was {self.shape}, but got {res.shape}. "
RuntimeError: The expected shape of the kernel was torch.Size([100, 100]), but got torch.Size([24, 100, 100]). This is likely a bug in GPyTorch.
```
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. --> Run with no errors
## System information
**Please complete the following information:**
- <!-- GPyTorch Version (run `print(gpytorch.__version__)` --> 1.4.1
- <!-- PyTorch Version (run `print(torch.__version__)` --> 1.8.1
- <!-- Computer OS --> Win10 pro 19042.1052
## Additional context
Add any other context about the problem here.
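One way to see where the missing batch dimension comes from (a sketch against the 1.4.1 `kernel.py` shown below, not a confirmed diagnosis): the components of an `AdditiveKernel` live inside a `torch.nn.ModuleList`, so they are not direct entries of `_modules`, `named_sub_kernels()` yields nothing for them, and the `batch_shape` property falls back to the outer kernel's empty `_batch_shape`:

```python
import torch
import gpytorch

num_latents = 24
k = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.RQKernel(batch_shape=torch.Size([num_latents])),
    batch_shape=torch.Size([num_latents]),
) + gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.MaternKernel(nu=0.5, batch_shape=torch.Size([num_latents])),
    batch_shape=torch.Size([num_latents]),
)

print(list(k._modules.keys()))      # ['kernels'] -- a ModuleList, not a Kernel
print(list(k.named_sub_kernels()))  # [] -- the direct _modules scan misses them
print(k.batch_shape)                # torch.Size([]) instead of torch.Size([24])
```

With an empty batch shape the expected kernel shape drops the leading `[24]` dimension, while the evaluated sum of the batched components keeps it -- exactly the `[100, 100]` vs `[24, 100, 100]` mismatch reported above.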
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpytorch/kernels/kernel.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import warnings
4 from abc import abstractmethod
5 from copy import deepcopy
6
7 import torch
8 from torch.nn import ModuleList
9
10 from .. import settings
11 from ..constraints import Positive
12 from ..lazy import LazyEvaluatedKernelTensor, ZeroLazyTensor, delazify, lazify
13 from ..models import exact_prediction_strategies
14 from ..module import Module
15 from ..utils.broadcasting import _mul_broadcast_shape
16
17
18 def default_postprocess_script(x):
19 return x
20
21
22 class Distance(torch.nn.Module):
23 def __init__(self, postprocess_script=default_postprocess_script):
24 super().__init__()
25 self._postprocess = postprocess_script
26
27 def _sq_dist(self, x1, x2, postprocess, x1_eq_x2=False):
28 # TODO: use torch squared cdist once implemented: https://github.com/pytorch/pytorch/pull/25799
29 adjustment = x1.mean(-2, keepdim=True)
30 x1 = x1 - adjustment
31 x2 = x2 - adjustment # x1 and x2 should be identical in all dims except -2 at this point
32
33 # Compute squared distance matrix using quadratic expansion
34 x1_norm = x1.pow(2).sum(dim=-1, keepdim=True)
35 x1_pad = torch.ones_like(x1_norm)
36 if x1_eq_x2 and not x1.requires_grad and not x2.requires_grad:
37 x2_norm, x2_pad = x1_norm, x1_pad
38 else:
39 x2_norm = x2.pow(2).sum(dim=-1, keepdim=True)
40 x2_pad = torch.ones_like(x2_norm)
41 x1_ = torch.cat([-2.0 * x1, x1_norm, x1_pad], dim=-1)
42 x2_ = torch.cat([x2, x2_pad, x2_norm], dim=-1)
43 res = x1_.matmul(x2_.transpose(-2, -1))
44
45 if x1_eq_x2 and not x1.requires_grad and not x2.requires_grad:
46 res.diagonal(dim1=-2, dim2=-1).fill_(0)
47
48 # Zero out negative values
49 res.clamp_min_(0)
50 return self._postprocess(res) if postprocess else res
51
52 def _dist(self, x1, x2, postprocess, x1_eq_x2=False):
53 # TODO: use torch cdist once implementation is improved: https://github.com/pytorch/pytorch/pull/25799
54 res = self._sq_dist(x1, x2, postprocess=False, x1_eq_x2=x1_eq_x2)
55 res = res.clamp_min_(1e-30).sqrt_()
56 return self._postprocess(res) if postprocess else res
57
58
59 class Kernel(Module):
60 r"""
61 Kernels in GPyTorch are implemented as a :class:`gpytorch.Module` that, when called on two :obj:`torch.tensor`
62 objects `x1` and `x2` returns either a :obj:`torch.tensor` or a :obj:`gpytorch.lazy.LazyTensor` that represents
63 the covariance matrix between `x1` and `x2`.
64
65 In the typical use case, to extend this class means to implement the :func:`~gpytorch.kernels.Kernel.forward`
66 method.
67
68 .. note::
69 The :func:`~gpytorch.kernels.Kernel.__call__` does some additional internal work. In particular,
70 all kernels are lazily evaluated so that, in some cases, we can index in to the kernel matrix before actually
71 computing it. Furthermore, many built in kernel modules return LazyTensors that allow for more efficient
72 inference than if we explicitly computed the kernel matrix itself.
73
74 As a result, if you want to use a :obj:`gpytorch.kernels.Kernel` object just to get an actual
75 :obj:`torch.tensor` representing the covariance matrix, you may need to call the
76 :func:`gpytorch.lazy.LazyTensor.evaluate` method on the output.
77
78 This base :class:`Kernel` class includes a lengthscale parameter
79 :math:`\Theta`, which is used by many common kernel functions.
80 There are a few options for the lengthscale:
81
82 * Default: No lengthscale (i.e. :math:`\Theta` is the identity matrix).
83
84 * Single lengthscale: One lengthscale can be applied to all input dimensions/batches
85 (i.e. :math:`\Theta` is a constant diagonal matrix).
86 This is controlled by setting the attribute `has_lengthscale=True`.
87
88 * ARD: Each input dimension gets its own separate lengthscale
89 (i.e. :math:`\Theta` is a non-constant diagonal matrix).
90 This is controlled by the `ard_num_dims` keyword argument (as well as `has_lengthscale=True`).
91
92 In batch-mode (i.e. when :math:`x_1` and :math:`x_2` are batches of input matrices), each
93 batch of data can have its own lengthscale parameter by setting the `batch_shape`
94 keyword argument to the appropriate number of batches.
95
96 .. note::
97
98 The :attr:`lengthscale` parameter is parameterized on a log scale to constrain it to be positive.
99 You can set a prior on this parameter using the :attr:`lengthscale_prior` argument.
100
101 Base Args:
102 :attr:`ard_num_dims` (int, optional):
103 Set this if you want a separate lengthscale for each input
104 dimension. It should be `d` if :attr:`x1` is a `n x d` matrix. Default: `None`
105 :attr:`batch_shape` (torch.Size, optional):
106 Set this if you want a separate lengthscale for each batch of input
107 data. It should be `b1 x ... x bk` if :attr:`x1` is a `b1 x ... x bk x n x d` tensor.
108 :attr:`active_dims` (tuple of ints, optional):
109 Set this if you want to compute the covariance of only a few input dimensions. The ints
110 corresponds to the indices of the dimensions. Default: `None`.
111 :attr:`lengthscale_prior` (Prior, optional):
112 Set this if you want to apply a prior to the lengthscale parameter. Default: `None`
113 :attr:`lengthscale_constraint` (Constraint, optional):
114 Set this if you want to apply a constraint to the lengthscale parameter. Default: `Positive`.
115 :attr:`eps` (float):
116 The minimum value that the lengthscale can take (prevents divide by zero errors). Default: `1e-6`.
117
118 Base Attributes:
119 :attr:`lengthscale` (Tensor):
120 The lengthscale parameter. Size/shape of parameter depends on the
121 :attr:`ard_num_dims` and :attr:`batch_shape` arguments.
122
123 Example:
124 >>> covar_module = gpytorch.kernels.LinearKernel()
125 >>> x1 = torch.randn(50, 3)
126 >>> lazy_covar_matrix = covar_module(x1) # Returns a RootLazyTensor
127 >>> tensor_covar_matrix = lazy_covar_matrix.evaluate() # Gets the actual tensor for this kernel matrix
128 """
129
130 has_lengthscale = False
131
132 def __init__(
133 self,
134 ard_num_dims=None,
135 batch_shape=torch.Size([]),
136 active_dims=None,
137 lengthscale_prior=None,
138 lengthscale_constraint=None,
139 eps=1e-6,
140 **kwargs,
141 ):
142 super(Kernel, self).__init__()
143 self._batch_shape = batch_shape
144 if active_dims is not None and not torch.is_tensor(active_dims):
145 active_dims = torch.tensor(active_dims, dtype=torch.long)
146 self.register_buffer("active_dims", active_dims)
147 self.ard_num_dims = ard_num_dims
148
149 self.eps = eps
150
151 param_transform = kwargs.get("param_transform")
152
153 if lengthscale_constraint is None:
154 lengthscale_constraint = Positive()
155
156 if param_transform is not None:
157 warnings.warn(
158 "The 'param_transform' argument is now deprecated. If you want to use a different "
159 "transformation, specify a different 'lengthscale_constraint' instead.",
160 DeprecationWarning,
161 )
162
163 if self.has_lengthscale:
164 lengthscale_num_dims = 1 if ard_num_dims is None else ard_num_dims
165 self.register_parameter(
166 name="raw_lengthscale",
167 parameter=torch.nn.Parameter(torch.zeros(*self.batch_shape, 1, lengthscale_num_dims)),
168 )
169 if lengthscale_prior is not None:
170 self.register_prior(
171 "lengthscale_prior", lengthscale_prior, lambda m: m.lengthscale, lambda m, v: m._set_lengthscale(v)
172 )
173
174 self.register_constraint("raw_lengthscale", lengthscale_constraint)
175
176 self.distance_module = None
177 # TODO: Remove this on next official PyTorch release.
178 self.__pdist_supports_batch = True
179
180 @abstractmethod
181 def forward(self, x1, x2, diag=False, last_dim_is_batch=False, **params):
182 r"""
183 Computes the covariance between x1 and x2.
184 This method should be imlemented by all Kernel subclasses.
185
186 Args:
187 :attr:`x1` (Tensor `n x d` or `b x n x d`):
188 First set of data
189 :attr:`x2` (Tensor `m x d` or `b x m x d`):
190 Second set of data
191 :attr:`diag` (bool):
192 Should the Kernel compute the whole kernel, or just the diag?
193 :attr:`last_dim_is_batch` (tuple, optional):
194 If this is true, it treats the last dimension of the data as another batch dimension.
195 (Useful for additive structure over the dimensions). Default: False
196
197 Returns:
198 :class:`Tensor` or :class:`gpytorch.lazy.LazyTensor`.
199 The exact size depends on the kernel's evaluation mode:
200
201 * `full_covar`: `n x m` or `b x n x m`
202 * `full_covar` with `last_dim_is_batch=True`: `k x n x m` or `b x k x n x m`
203 * `diag`: `n` or `b x n`
204 * `diag` with `last_dim_is_batch=True`: `k x n` or `b x k x n`
205 """
206 raise NotImplementedError()
207
208 @property
209 def batch_shape(self):
210 kernels = list(self.sub_kernels())
211 if len(kernels):
212 return _mul_broadcast_shape(self._batch_shape, *[k.batch_shape for k in kernels])
213 else:
214 return self._batch_shape
215
216 @batch_shape.setter
217 def batch_shape(self, val):
218 self._batch_shape = val
219
220 @property
221 def dtype(self):
222 if self.has_lengthscale:
223 return self.lengthscale.dtype
224 else:
225 for param in self.parameters():
226 return param.dtype
227 return torch.get_default_dtype()
228
229 @property
230 def is_stationary(self) -> bool:
231 """
232 Property to indicate whether kernel is stationary or not.
233 """
234 return self.has_lengthscale
235
236 @property
237 def lengthscale(self):
238 if self.has_lengthscale:
239 return self.raw_lengthscale_constraint.transform(self.raw_lengthscale)
240 else:
241 return None
242
243 @lengthscale.setter
244 def lengthscale(self, value):
245 self._set_lengthscale(value)
246
247 def _set_lengthscale(self, value):
248 if not self.has_lengthscale:
249 raise RuntimeError("Kernel has no lengthscale.")
250
251 if not torch.is_tensor(value):
252 value = torch.as_tensor(value).to(self.raw_lengthscale)
253
254 self.initialize(raw_lengthscale=self.raw_lengthscale_constraint.inverse_transform(value))
255
256 def local_load_samples(self, samples_dict, memo, prefix):
257 num_samples = next(iter(samples_dict.values())).size(0)
258 self.batch_shape = torch.Size([num_samples]) + self.batch_shape
259 super().local_load_samples(samples_dict, memo, prefix)
260
261 def covar_dist(
262 self,
263 x1,
264 x2,
265 diag=False,
266 last_dim_is_batch=False,
267 square_dist=False,
268 dist_postprocess_func=default_postprocess_script,
269 postprocess=True,
270 **params,
271 ):
272 r"""
273 This is a helper method for computing the Euclidean distance between
274 all pairs of points in x1 and x2.
275
276 Args:
277 :attr:`x1` (Tensor `n x d` or `b1 x ... x bk x n x d`):
278 First set of data.
279 :attr:`x2` (Tensor `m x d` or `b1 x ... x bk x m x d`):
280 Second set of data.
281 :attr:`diag` (bool):
282 Should we return the whole distance matrix, or just the diagonal? If True, we must have `x1 == x2`.
283 :attr:`last_dim_is_batch` (tuple, optional):
284 Is the last dimension of the data a batch dimension or not?
285 :attr:`square_dist` (bool):
286 Should we square the distance matrix before returning?
287
288 Returns:
289 (:class:`Tensor`, :class:`Tensor) corresponding to the distance matrix between `x1` and `x2`.
290 The shape depends on the kernel's mode
291 * `diag=False`
292 * `diag=False` and `last_dim_is_batch=True`: (`b x d x n x n`)
293 * `diag=True`
294 * `diag=True` and `last_dim_is_batch=True`: (`b x d x n`)
295 """
296 if last_dim_is_batch:
297 x1 = x1.transpose(-1, -2).unsqueeze(-1)
298 x2 = x2.transpose(-1, -2).unsqueeze(-1)
299
300 x1_eq_x2 = torch.equal(x1, x2)
301
302 # torch scripts expect tensors
303 postprocess = torch.tensor(postprocess)
304
305 res = None
306
307 # Cache the Distance object or else JIT will recompile every time
308 if not self.distance_module or self.distance_module._postprocess != dist_postprocess_func:
309 self.distance_module = Distance(dist_postprocess_func)
310
311 if diag:
312 # Special case the diagonal because we can return all zeros most of the time.
313 if x1_eq_x2:
314 res = torch.zeros(*x1.shape[:-2], x1.shape[-2], dtype=x1.dtype, device=x1.device)
315 if postprocess:
316 res = dist_postprocess_func(res)
317 return res
318 else:
319 res = torch.norm(x1 - x2, p=2, dim=-1)
320 if square_dist:
321 res = res.pow(2)
322 if postprocess:
323 res = dist_postprocess_func(res)
324 return res
325
326 elif square_dist:
327 res = self.distance_module._sq_dist(x1, x2, postprocess, x1_eq_x2)
328 else:
329 res = self.distance_module._dist(x1, x2, postprocess, x1_eq_x2)
330
331 return res
332
333 def named_sub_kernels(self):
334 for name, module in self._modules.items():
335 if isinstance(module, Kernel):
336 yield name, module
337
338 def num_outputs_per_input(self, x1, x2):
339 """
340 How many outputs are produced per input (default 1)
341 if x1 is size `n x d` and x2 is size `m x d`, then the size of the kernel
342 will be `(n * num_outputs_per_input) x (m * num_outputs_per_input)`
343 Default: 1
344 """
345 return 1
346
347 def prediction_strategy(self, train_inputs, train_prior_dist, train_labels, likelihood):
348 return exact_prediction_strategies.DefaultPredictionStrategy(
349 train_inputs, train_prior_dist, train_labels, likelihood
350 )
351
352 def sub_kernels(self):
353 for _, kernel in self.named_sub_kernels():
354 yield kernel
355
356 def __call__(self, x1, x2=None, diag=False, last_dim_is_batch=False, **params):
357 x1_, x2_ = x1, x2
358
359 # Select the active dimensions
360 if self.active_dims is not None:
361 x1_ = x1_.index_select(-1, self.active_dims)
362 if x2_ is not None:
363 x2_ = x2_.index_select(-1, self.active_dims)
364
365 # Give x1_ and x2_ a last dimension, if necessary
366 if x1_.ndimension() == 1:
367 x1_ = x1_.unsqueeze(1)
368 if x2_ is not None:
369 if x2_.ndimension() == 1:
370 x2_ = x2_.unsqueeze(1)
371 if not x1_.size(-1) == x2_.size(-1):
372 raise RuntimeError("x1_ and x2_ must have the same number of dimensions!")
373
374 if x2_ is None:
375 x2_ = x1_
376
377 # Check that ard_num_dims matches the supplied number of dimensions
378 if settings.debug.on():
379 if self.ard_num_dims is not None and self.ard_num_dims != x1_.size(-1):
380 raise RuntimeError(
381 "Expected the input to have {} dimensionality "
382 "(based on the ard_num_dims argument). Got {}.".format(self.ard_num_dims, x1_.size(-1))
383 )
384
385 if diag:
386 res = super(Kernel, self).__call__(x1_, x2_, diag=True, last_dim_is_batch=last_dim_is_batch, **params)
387 # Did this Kernel eat the diag option?
388 # If it does not return a LazyEvaluatedKernelTensor, we can call diag on the output
389 if not isinstance(res, LazyEvaluatedKernelTensor):
390 if res.dim() == x1_.dim() and res.shape[-2:] == torch.Size((x1_.size(-2), x2_.size(-2))):
391 res = res.diag()
392 return res
393
394 else:
395 if settings.lazily_evaluate_kernels.on():
396 res = LazyEvaluatedKernelTensor(x1_, x2_, kernel=self, last_dim_is_batch=last_dim_is_batch, **params)
397 else:
398 res = lazify(super(Kernel, self).__call__(x1_, x2_, last_dim_is_batch=last_dim_is_batch, **params))
399 return res
400
401 def __getstate__(self):
402 # JIT ScriptModules cannot be pickled
403 self.distance_module = None
404 return self.__dict__
405
406 def __add__(self, other):
407 kernels = []
408 kernels += self.kernels if isinstance(self, AdditiveKernel) else [self]
409 kernels += other.kernels if isinstance(other, AdditiveKernel) else [other]
410 return AdditiveKernel(*kernels)
411
412 def __mul__(self, other):
413 kernels = []
414 kernels += self.kernels if isinstance(self, ProductKernel) else [self]
415 kernels += other.kernels if isinstance(other, ProductKernel) else [other]
416 return ProductKernel(*kernels)
417
418 def __setstate__(self, d):
419 self.__dict__ = d
420
421 def __getitem__(self, index):
422 if len(self.batch_shape) == 0:
423 return self
424
425 new_kernel = deepcopy(self)
426 # Process the index
427 index = index if isinstance(index, tuple) else (index,)
428
429 for param_name, param in self._parameters.items():
430 new_kernel._parameters[param_name].data = param.__getitem__(index)
431 ndim_removed = len(param.shape) - len(new_kernel._parameters[param_name].shape)
432 new_batch_shape_len = len(self.batch_shape) - ndim_removed
433 new_kernel.batch_shape = new_kernel._parameters[param_name].shape[:new_batch_shape_len]
434
435 for sub_module_name, sub_module in self.named_sub_kernels():
436 self._modules[sub_module_name] = sub_module.__getitem__(index)
437
438 return new_kernel
439
440
441 class AdditiveKernel(Kernel):
442 """
443 A Kernel that supports summing over multiple component kernels.
444
445 Example:
446 >>> covar_module = RBFKernel(active_dims=torch.tensor([1])) + RBFKernel(active_dims=torch.tensor([2]))
447 >>> x1 = torch.randn(50, 2)
448 >>> additive_kernel_matrix = covar_module(x1)
449 """
450
451 @property
452 def is_stationary(self) -> bool:
453 """
454 Kernel is stationary if all components are stationary.
455 """
456 return all(k.is_stationary for k in self.kernels)
457
458 def __init__(self, *kernels):
459 super(AdditiveKernel, self).__init__()
460 self.kernels = ModuleList(kernels)
461
462 def forward(self, x1, x2, diag=False, **params):
463 res = ZeroLazyTensor() if not diag else 0
464 for kern in self.kernels:
465 next_term = kern(x1, x2, diag=diag, **params)
466 if not diag:
467 res = res + lazify(next_term)
468 else:
469 res = res + next_term
470
471 return res
472
473 def num_outputs_per_input(self, x1, x2):
474 return self.kernels[0].num_outputs_per_input(x1, x2)
475
476 def __getitem__(self, index):
477 new_kernel = deepcopy(self)
478 for i, kernel in enumerate(self.kernels):
479 new_kernel.kernels[i] = self.kernels[i].__getitem__(index)
480
481 return new_kernel
482
483
484 class ProductKernel(Kernel):
485 """
486 A Kernel that supports elementwise multiplying multiple component kernels together.
487
488 Example:
489 >>> covar_module = RBFKernel(active_dims=torch.tensor([1])) * RBFKernel(active_dims=torch.tensor([2]))
490 >>> x1 = torch.randn(50, 2)
491 >>> kernel_matrix = covar_module(x1) # The RBF Kernel already decomposes multiplicatively, so this is foolish!
492 """
493
494 @property
495 def is_stationary(self) -> bool:
496 """
497 Kernel is stationary if all components are stationary.
498 """
499 return all(k.is_stationary for k in self.kernels)
500
501 def __init__(self, *kernels):
502 super(ProductKernel, self).__init__()
503 self.kernels = ModuleList(kernels)
504
505 def forward(self, x1, x2, diag=False, **params):
506 x1_eq_x2 = torch.equal(x1, x2)
507
508 if not x1_eq_x2:
509 # If x1 != x2, then we can't make a MulLazyTensor because the kernel won't necessarily be square/symmetric
510 res = delazify(self.kernels[0](x1, x2, diag=diag, **params))
511 else:
512 res = self.kernels[0](x1, x2, diag=diag, **params)
513
514 if not diag:
515 res = lazify(res)
516
517 for kern in self.kernels[1:]:
518 next_term = kern(x1, x2, diag=diag, **params)
519 if not x1_eq_x2:
520 # Again delazify if x1 != x2
521 res = res * delazify(next_term)
522 else:
523 if not diag:
524 res = res * lazify(next_term)
525 else:
526 res = res * next_term
527
528 return res
529
530 def num_outputs_per_input(self, x1, x2):
531 return self.kernels[0].num_outputs_per_input(x1, x2)
532
533 def __getitem__(self, index):
534 new_kernel = deepcopy(self)
535 for i, kernel in enumerate(self.kernels):
536 new_kernel.kernels[i] = self.kernels[i].__getitem__(index)
537
538 return new_kernel
539
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gpytorch/kernels/kernel.py b/gpytorch/kernels/kernel.py
--- a/gpytorch/kernels/kernel.py
+++ b/gpytorch/kernels/kernel.py
@@ -331,8 +331,8 @@
return res
def named_sub_kernels(self):
- for name, module in self._modules.items():
- if isinstance(module, Kernel):
+ for name, module in self.named_modules():
+ if module is not self and isinstance(module, Kernel):
yield name, module
def num_outputs_per_input(self, x1, x2):
| {"golden_diff": "diff --git a/gpytorch/kernels/kernel.py b/gpytorch/kernels/kernel.py\n--- a/gpytorch/kernels/kernel.py\n+++ b/gpytorch/kernels/kernel.py\n@@ -331,8 +331,8 @@\n return res\n \n def named_sub_kernels(self):\n- for name, module in self._modules.items():\n- if isinstance(module, Kernel):\n+ for name, module in self.named_modules():\n+ if module is not self and isinstance(module, Kernel):\n yield name, module\n \n def num_outputs_per_input(self, x1, x2):\n", "issue": "[Bug]\n# \ud83d\udc1b Bug: Possible error with multitask learning with additive kernel structure\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\nWhen I define in the class MultitaskGPModel the multitask kernel\r\n\r\n self.covar_module = (gpytorch.kernels.ScaleKernel(\r\n gpytorch.kernels.PeriodicKernel(batch_shape=torch.Size([num_latents])) * \r\n gpytorch.kernels.RQKernel(batch_shape=torch.Size([num_latents])),\r\n batch_shape=torch.Size([num_latents])\r\n ) + \r\n gpytorch.kernels.ScaleKernel(\r\n gpytorch.kernels.MaternKernel(nu=0.5, batch_shape=torch.Size([num_latents])), \r\n batch_shape=torch.Size([num_latents])\r\n )\r\n )\r\n\r\nwhich uses the additive kernel as its outermost layer, and I apply the class on data as\r\n\r\n w_l = 50\r\n num_latents = 24\r\n Xc_t_npa = np.arange(0,w_l,1,dtype=np.float32).reshape(-1, 1)\r\n Xc_t = torch.from_numpy(Xc_t_npa).type(torch.Tensor)\r\n model_mul(Xc_t)\r\n\r\nI get \r\n'RuntimeError: The expected shape of the kernel was torch.Size([100, 100]), but got torch.Size([24, 100, 100]). This is likely a bug in GPyTorch'.\r\nThis behavior seems not to change when changing the number of tasks or the number of latent gps.\r\n\r\nIf I use the same kernel in a non-batch setting, it works smoothly.\r\n\r\nI wrote the batched problem with another kernel which is mathematically the same but which doesn't use the outer additive kernel, and it works smoothly. 
Unfortunatly the role of the subkernel parameters in the new form is not the same as that of the malfunctioning kernel, and I have to re-run a lot of past non-batch fits in the new form to make them comparable with the new setting.\r\n\r\n\r\n## To reproduce\r\n\r\n** Code snippet to reproduce **\r\n```python\r\n# Your code goes here\r\n# Please make sure it does not require any external dependencies (other than PyTorch!)\r\n# (We much prefer small snippets rather than links to existing libraries!)\r\n```\r\n Zc_intra_np = np.arange(0, 24, 1).reshape(-1, 1)\r\n Zc_intra = torch.tensor(Zc_intra_np, dtype=torch.float)\r\n \r\n w_l = 50\r\n num_latents = 24\r\n num_tasks = 12\r\n Xc_t_npa = np.arange(0,w_l,1,dtype=np.float32).reshape(-1, 1)\r\n Xc_t = torch.from_numpy(Xc_t_npa).type(torch.Tensor)\r\n\r\n model_mul = MultitaskGPModel()\r\n likelihood_mul = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=num_tasks)\r\n model_mul(Xc_t)\r\n\r\n\r\n class MultitaskGPModel(gpytorch.models.ApproximateGP):\r\n \r\n def __init__(self):\r\n \r\n\r\n inducing_points = Zc_intra\r\n \r\n variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(\r\n inducing_points.size(-2), batch_shape=torch.Size([num_latents])\r\n )\r\n\r\n variational_strategy = gpytorch.variational.LMCVariationalStrategy(\r\n gpytorch.variational.VariationalStrategy(\r\n self, inducing_points, variational_distribution, learn_inducing_locations=True\r\n ),\r\n num_tasks=num_tasks,\r\n num_latents=num_latents,\r\n # could be 0\r\n latent_dim=-1\r\n )\r\n\r\n super().__init__(variational_strategy)\r\n\r\n self.mean_module = gpytorch.means.ConstantMean(batch_shape=torch.Size([num_latents]))\r\n \r\n self.covar_module = gpytorch.kernels.ScaleKernel(\r\n gpytorch.kernels.PeriodicKernel(batch_shape=torch.Size([num_latents])) * \r\n gpytorch.kernels.RQKernel(batch_shape=torch.Size([num_latents])) + \r\n gpytorch.kernels.ScaleKernel(\r\n gpytorch.kernels.MaternKernel(nu=0.5, batch_shape=torch.Size([num_latents])), \r\n batch_shape=torch.Size([num_latents])), \r\n batch_shape=torch.Size([num_latents]) \r\n )\r\n\r\n\r\n def forward(self, x):\r\n mean_x = self.mean_module(x)\r\n covar_x = self.covar_module(x)\r\n return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)\r\n\r\n\r\n\r\n\r\n\r\n\r\n** Stack trace/error message **\r\n```\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-398-5fc832e3a3f0>\", line 1, in <module>\r\n model_mul(Xc_t)\r\n\r\n File \"C:\\Users\\lucheron\\Anaconda3\\envs\\pyro16_py37\\lib\\site-packages\\gpytorch\\models\\approximate_gp.py\", line 81, in __call__\r\n return self.variational_strategy(inputs, prior=prior, **kwargs)\r\n\r\n File \"C:\\Users\\lucheron\\Anaconda3\\envs\\pyro16_py37\\lib\\site-packages\\gpytorch\\variational\\lmc_variational_strategy.py\", line 124, in __call__\r\n function_dist = self.base_variational_strategy(x, prior=prior, **kwargs)\r\n\r\n File \"C:\\Users\\lucheron\\Anaconda3\\envs\\pyro16_py37\\lib\\site-packages\\gpytorch\\variational\\variational_strategy.py\", line 168, in __call__\r\n return super().__call__(x, prior=prior, **kwargs)\r\n\r\n File \"C:\\Users\\lucheron\\Anaconda3\\envs\\pyro16_py37\\lib\\site-packages\\gpytorch\\variational\\_variational_strategy.py\", line 129, in __call__\r\n **kwargs,\r\n\r\n File \"C:\\Users\\lucheron\\Anaconda3\\envs\\pyro16_py37\\lib\\site-packages\\gpytorch\\module.py\", line 28, in __call__\r\n outputs = self.forward(*inputs, **kwargs)\r\n\r\n File 
\"C:\\Users\\lucheron\\Anaconda3\\envs\\pyro16_py37\\lib\\site-packages\\gpytorch\\variational\\variational_strategy.py\", line 96, in forward\r\n induc_induc_covar = full_covar[..., :num_induc, :num_induc].add_jitter()\r\n\r\n File \"C:\\Users\\lucheron\\Anaconda3\\envs\\pyro16_py37\\lib\\site-packages\\gpytorch\\lazy\\lazy_evaluated_kernel_tensor.py\", line 237, in add_jitter\r\n return self.evaluate_kernel().add_jitter(jitter_val)\r\n\r\n File \"C:\\Users\\lucheron\\Anaconda3\\envs\\pyro16_py37\\lib\\site-packages\\gpytorch\\utils\\memoize.py\", line 59, in g\r\n return _add_to_cache(self, cache_name, method(self, *args, **kwargs), *args, kwargs_pkl=kwargs_pkl)\r\n\r\n File \"C:\\Users\\lucheron\\Anaconda3\\envs\\pyro16_py37\\lib\\site-packages\\gpytorch\\lazy\\lazy_evaluated_kernel_tensor.py\", line 291, in evaluate_kernel\r\n f\"The expected shape of the kernel was {self.shape}, but got {res.shape}. \"\r\n\r\nRuntimeError: The expected shape of the kernel was torch.Size([100, 100]), but got torch.Size([24, 100, 100]). This is likely a bug in GPyTorch.\r\n```\r\n\r\n## Expected Behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. --> Run with no errors\r\n\r\n## System information\r\n\r\n**Please complete the following information:**\r\n- <!-- GPyTorch Version (run `print(gpytorch.__version__)` --> 1.4.1\r\n- <!-- PyTorch Version (run `print(torch.__version__)` --> 1.8.1\r\n- <!-- Computer OS --> Win10 pro 19042.1052\r\n\r\n## Additional context\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport warnings\nfrom abc import abstractmethod\nfrom copy import deepcopy\n\nimport torch\nfrom torch.nn import ModuleList\n\nfrom .. import settings\nfrom ..constraints import Positive\nfrom ..lazy import LazyEvaluatedKernelTensor, ZeroLazyTensor, delazify, lazify\nfrom ..models import exact_prediction_strategies\nfrom ..module import Module\nfrom ..utils.broadcasting import _mul_broadcast_shape\n\n\ndef default_postprocess_script(x):\n return x\n\n\nclass Distance(torch.nn.Module):\n def __init__(self, postprocess_script=default_postprocess_script):\n super().__init__()\n self._postprocess = postprocess_script\n\n def _sq_dist(self, x1, x2, postprocess, x1_eq_x2=False):\n # TODO: use torch squared cdist once implemented: https://github.com/pytorch/pytorch/pull/25799\n adjustment = x1.mean(-2, keepdim=True)\n x1 = x1 - adjustment\n x2 = x2 - adjustment # x1 and x2 should be identical in all dims except -2 at this point\n\n # Compute squared distance matrix using quadratic expansion\n x1_norm = x1.pow(2).sum(dim=-1, keepdim=True)\n x1_pad = torch.ones_like(x1_norm)\n if x1_eq_x2 and not x1.requires_grad and not x2.requires_grad:\n x2_norm, x2_pad = x1_norm, x1_pad\n else:\n x2_norm = x2.pow(2).sum(dim=-1, keepdim=True)\n x2_pad = torch.ones_like(x2_norm)\n x1_ = torch.cat([-2.0 * x1, x1_norm, x1_pad], dim=-1)\n x2_ = torch.cat([x2, x2_pad, x2_norm], dim=-1)\n res = x1_.matmul(x2_.transpose(-2, -1))\n\n if x1_eq_x2 and not x1.requires_grad and not x2.requires_grad:\n res.diagonal(dim1=-2, dim2=-1).fill_(0)\n\n # Zero out negative values\n res.clamp_min_(0)\n return self._postprocess(res) if postprocess else res\n\n def _dist(self, x1, x2, postprocess, x1_eq_x2=False):\n # TODO: use torch cdist once implementation is improved: https://github.com/pytorch/pytorch/pull/25799\n res = self._sq_dist(x1, x2, postprocess=False, x1_eq_x2=x1_eq_x2)\n res = res.clamp_min_(1e-30).sqrt_()\n return 
self._postprocess(res) if postprocess else res\n\n\nclass Kernel(Module):\n r\"\"\"\n Kernels in GPyTorch are implemented as a :class:`gpytorch.Module` that, when called on two :obj:`torch.tensor`\n objects `x1` and `x2` returns either a :obj:`torch.tensor` or a :obj:`gpytorch.lazy.LazyTensor` that represents\n the covariance matrix between `x1` and `x2`.\n\n In the typical use case, to extend this class means to implement the :func:`~gpytorch.kernels.Kernel.forward`\n method.\n\n .. note::\n The :func:`~gpytorch.kernels.Kernel.__call__` does some additional internal work. In particular,\n all kernels are lazily evaluated so that, in some cases, we can index in to the kernel matrix before actually\n computing it. Furthermore, many built in kernel modules return LazyTensors that allow for more efficient\n inference than if we explicitly computed the kernel matrix itself.\n\n As a result, if you want to use a :obj:`gpytorch.kernels.Kernel` object just to get an actual\n :obj:`torch.tensor` representing the covariance matrix, you may need to call the\n :func:`gpytorch.lazy.LazyTensor.evaluate` method on the output.\n\n This base :class:`Kernel` class includes a lengthscale parameter\n :math:`\\Theta`, which is used by many common kernel functions.\n There are a few options for the lengthscale:\n\n * Default: No lengthscale (i.e. :math:`\\Theta` is the identity matrix).\n\n * Single lengthscale: One lengthscale can be applied to all input dimensions/batches\n (i.e. :math:`\\Theta` is a constant diagonal matrix).\n This is controlled by setting the attribute `has_lengthscale=True`.\n\n * ARD: Each input dimension gets its own separate lengthscale\n (i.e. :math:`\\Theta` is a non-constant diagonal matrix).\n This is controlled by the `ard_num_dims` keyword argument (as well as `has_lengthscale=True`).\n\n In batch-mode (i.e. when :math:`x_1` and :math:`x_2` are batches of input matrices), each\n batch of data can have its own lengthscale parameter by setting the `batch_shape`\n keyword argument to the appropriate number of batches.\n\n .. note::\n\n The :attr:`lengthscale` parameter is parameterized on a log scale to constrain it to be positive.\n You can set a prior on this parameter using the :attr:`lengthscale_prior` argument.\n\n Base Args:\n :attr:`ard_num_dims` (int, optional):\n Set this if you want a separate lengthscale for each input\n dimension. It should be `d` if :attr:`x1` is a `n x d` matrix. Default: `None`\n :attr:`batch_shape` (torch.Size, optional):\n Set this if you want a separate lengthscale for each batch of input\n data. It should be `b1 x ... x bk` if :attr:`x1` is a `b1 x ... x bk x n x d` tensor.\n :attr:`active_dims` (tuple of ints, optional):\n Set this if you want to compute the covariance of only a few input dimensions. The ints\n corresponds to the indices of the dimensions. Default: `None`.\n :attr:`lengthscale_prior` (Prior, optional):\n Set this if you want to apply a prior to the lengthscale parameter. Default: `None`\n :attr:`lengthscale_constraint` (Constraint, optional):\n Set this if you want to apply a constraint to the lengthscale parameter. Default: `Positive`.\n :attr:`eps` (float):\n The minimum value that the lengthscale can take (prevents divide by zero errors). Default: `1e-6`.\n\n Base Attributes:\n :attr:`lengthscale` (Tensor):\n The lengthscale parameter. 
Size/shape of parameter depends on the\n :attr:`ard_num_dims` and :attr:`batch_shape` arguments.\n\n Example:\n >>> covar_module = gpytorch.kernels.LinearKernel()\n >>> x1 = torch.randn(50, 3)\n >>> lazy_covar_matrix = covar_module(x1) # Returns a RootLazyTensor\n >>> tensor_covar_matrix = lazy_covar_matrix.evaluate() # Gets the actual tensor for this kernel matrix\n \"\"\"\n\n has_lengthscale = False\n\n def __init__(\n self,\n ard_num_dims=None,\n batch_shape=torch.Size([]),\n active_dims=None,\n lengthscale_prior=None,\n lengthscale_constraint=None,\n eps=1e-6,\n **kwargs,\n ):\n super(Kernel, self).__init__()\n self._batch_shape = batch_shape\n if active_dims is not None and not torch.is_tensor(active_dims):\n active_dims = torch.tensor(active_dims, dtype=torch.long)\n self.register_buffer(\"active_dims\", active_dims)\n self.ard_num_dims = ard_num_dims\n\n self.eps = eps\n\n param_transform = kwargs.get(\"param_transform\")\n\n if lengthscale_constraint is None:\n lengthscale_constraint = Positive()\n\n if param_transform is not None:\n warnings.warn(\n \"The 'param_transform' argument is now deprecated. If you want to use a different \"\n \"transformation, specify a different 'lengthscale_constraint' instead.\",\n DeprecationWarning,\n )\n\n if self.has_lengthscale:\n lengthscale_num_dims = 1 if ard_num_dims is None else ard_num_dims\n self.register_parameter(\n name=\"raw_lengthscale\",\n parameter=torch.nn.Parameter(torch.zeros(*self.batch_shape, 1, lengthscale_num_dims)),\n )\n if lengthscale_prior is not None:\n self.register_prior(\n \"lengthscale_prior\", lengthscale_prior, lambda m: m.lengthscale, lambda m, v: m._set_lengthscale(v)\n )\n\n self.register_constraint(\"raw_lengthscale\", lengthscale_constraint)\n\n self.distance_module = None\n # TODO: Remove this on next official PyTorch release.\n self.__pdist_supports_batch = True\n\n @abstractmethod\n def forward(self, x1, x2, diag=False, last_dim_is_batch=False, **params):\n r\"\"\"\n Computes the covariance between x1 and x2.\n This method should be imlemented by all Kernel subclasses.\n\n Args:\n :attr:`x1` (Tensor `n x d` or `b x n x d`):\n First set of data\n :attr:`x2` (Tensor `m x d` or `b x m x d`):\n Second set of data\n :attr:`diag` (bool):\n Should the Kernel compute the whole kernel, or just the diag?\n :attr:`last_dim_is_batch` (tuple, optional):\n If this is true, it treats the last dimension of the data as another batch dimension.\n (Useful for additive structure over the dimensions). 
Default: False\n\n Returns:\n :class:`Tensor` or :class:`gpytorch.lazy.LazyTensor`.\n The exact size depends on the kernel's evaluation mode:\n\n * `full_covar`: `n x m` or `b x n x m`\n * `full_covar` with `last_dim_is_batch=True`: `k x n x m` or `b x k x n x m`\n * `diag`: `n` or `b x n`\n * `diag` with `last_dim_is_batch=True`: `k x n` or `b x k x n`\n \"\"\"\n raise NotImplementedError()\n\n @property\n def batch_shape(self):\n kernels = list(self.sub_kernels())\n if len(kernels):\n return _mul_broadcast_shape(self._batch_shape, *[k.batch_shape for k in kernels])\n else:\n return self._batch_shape\n\n @batch_shape.setter\n def batch_shape(self, val):\n self._batch_shape = val\n\n @property\n def dtype(self):\n if self.has_lengthscale:\n return self.lengthscale.dtype\n else:\n for param in self.parameters():\n return param.dtype\n return torch.get_default_dtype()\n\n @property\n def is_stationary(self) -> bool:\n \"\"\"\n Property to indicate whether kernel is stationary or not.\n \"\"\"\n return self.has_lengthscale\n\n @property\n def lengthscale(self):\n if self.has_lengthscale:\n return self.raw_lengthscale_constraint.transform(self.raw_lengthscale)\n else:\n return None\n\n @lengthscale.setter\n def lengthscale(self, value):\n self._set_lengthscale(value)\n\n def _set_lengthscale(self, value):\n if not self.has_lengthscale:\n raise RuntimeError(\"Kernel has no lengthscale.\")\n\n if not torch.is_tensor(value):\n value = torch.as_tensor(value).to(self.raw_lengthscale)\n\n self.initialize(raw_lengthscale=self.raw_lengthscale_constraint.inverse_transform(value))\n\n def local_load_samples(self, samples_dict, memo, prefix):\n num_samples = next(iter(samples_dict.values())).size(0)\n self.batch_shape = torch.Size([num_samples]) + self.batch_shape\n super().local_load_samples(samples_dict, memo, prefix)\n\n def covar_dist(\n self,\n x1,\n x2,\n diag=False,\n last_dim_is_batch=False,\n square_dist=False,\n dist_postprocess_func=default_postprocess_script,\n postprocess=True,\n **params,\n ):\n r\"\"\"\n This is a helper method for computing the Euclidean distance between\n all pairs of points in x1 and x2.\n\n Args:\n :attr:`x1` (Tensor `n x d` or `b1 x ... x bk x n x d`):\n First set of data.\n :attr:`x2` (Tensor `m x d` or `b1 x ... x bk x m x d`):\n Second set of data.\n :attr:`diag` (bool):\n Should we return the whole distance matrix, or just the diagonal? 
If True, we must have `x1 == x2`.\n :attr:`last_dim_is_batch` (tuple, optional):\n Is the last dimension of the data a batch dimension or not?\n :attr:`square_dist` (bool):\n Should we square the distance matrix before returning?\n\n Returns:\n (:class:`Tensor`, :class:`Tensor) corresponding to the distance matrix between `x1` and `x2`.\n The shape depends on the kernel's mode\n * `diag=False`\n * `diag=False` and `last_dim_is_batch=True`: (`b x d x n x n`)\n * `diag=True`\n * `diag=True` and `last_dim_is_batch=True`: (`b x d x n`)\n \"\"\"\n if last_dim_is_batch:\n x1 = x1.transpose(-1, -2).unsqueeze(-1)\n x2 = x2.transpose(-1, -2).unsqueeze(-1)\n\n x1_eq_x2 = torch.equal(x1, x2)\n\n # torch scripts expect tensors\n postprocess = torch.tensor(postprocess)\n\n res = None\n\n # Cache the Distance object or else JIT will recompile every time\n if not self.distance_module or self.distance_module._postprocess != dist_postprocess_func:\n self.distance_module = Distance(dist_postprocess_func)\n\n if diag:\n # Special case the diagonal because we can return all zeros most of the time.\n if x1_eq_x2:\n res = torch.zeros(*x1.shape[:-2], x1.shape[-2], dtype=x1.dtype, device=x1.device)\n if postprocess:\n res = dist_postprocess_func(res)\n return res\n else:\n res = torch.norm(x1 - x2, p=2, dim=-1)\n if square_dist:\n res = res.pow(2)\n if postprocess:\n res = dist_postprocess_func(res)\n return res\n\n elif square_dist:\n res = self.distance_module._sq_dist(x1, x2, postprocess, x1_eq_x2)\n else:\n res = self.distance_module._dist(x1, x2, postprocess, x1_eq_x2)\n\n return res\n\n def named_sub_kernels(self):\n for name, module in self._modules.items():\n if isinstance(module, Kernel):\n yield name, module\n\n def num_outputs_per_input(self, x1, x2):\n \"\"\"\n How many outputs are produced per input (default 1)\n if x1 is size `n x d` and x2 is size `m x d`, then the size of the kernel\n will be `(n * num_outputs_per_input) x (m * num_outputs_per_input)`\n Default: 1\n \"\"\"\n return 1\n\n def prediction_strategy(self, train_inputs, train_prior_dist, train_labels, likelihood):\n return exact_prediction_strategies.DefaultPredictionStrategy(\n train_inputs, train_prior_dist, train_labels, likelihood\n )\n\n def sub_kernels(self):\n for _, kernel in self.named_sub_kernels():\n yield kernel\n\n def __call__(self, x1, x2=None, diag=False, last_dim_is_batch=False, **params):\n x1_, x2_ = x1, x2\n\n # Select the active dimensions\n if self.active_dims is not None:\n x1_ = x1_.index_select(-1, self.active_dims)\n if x2_ is not None:\n x2_ = x2_.index_select(-1, self.active_dims)\n\n # Give x1_ and x2_ a last dimension, if necessary\n if x1_.ndimension() == 1:\n x1_ = x1_.unsqueeze(1)\n if x2_ is not None:\n if x2_.ndimension() == 1:\n x2_ = x2_.unsqueeze(1)\n if not x1_.size(-1) == x2_.size(-1):\n raise RuntimeError(\"x1_ and x2_ must have the same number of dimensions!\")\n\n if x2_ is None:\n x2_ = x1_\n\n # Check that ard_num_dims matches the supplied number of dimensions\n if settings.debug.on():\n if self.ard_num_dims is not None and self.ard_num_dims != x1_.size(-1):\n raise RuntimeError(\n \"Expected the input to have {} dimensionality \"\n \"(based on the ard_num_dims argument). 
Got {}.\".format(self.ard_num_dims, x1_.size(-1))\n )\n\n if diag:\n res = super(Kernel, self).__call__(x1_, x2_, diag=True, last_dim_is_batch=last_dim_is_batch, **params)\n # Did this Kernel eat the diag option?\n # If it does not return a LazyEvaluatedKernelTensor, we can call diag on the output\n if not isinstance(res, LazyEvaluatedKernelTensor):\n if res.dim() == x1_.dim() and res.shape[-2:] == torch.Size((x1_.size(-2), x2_.size(-2))):\n res = res.diag()\n return res\n\n else:\n if settings.lazily_evaluate_kernels.on():\n res = LazyEvaluatedKernelTensor(x1_, x2_, kernel=self, last_dim_is_batch=last_dim_is_batch, **params)\n else:\n res = lazify(super(Kernel, self).__call__(x1_, x2_, last_dim_is_batch=last_dim_is_batch, **params))\n return res\n\n def __getstate__(self):\n # JIT ScriptModules cannot be pickled\n self.distance_module = None\n return self.__dict__\n\n def __add__(self, other):\n kernels = []\n kernels += self.kernels if isinstance(self, AdditiveKernel) else [self]\n kernels += other.kernels if isinstance(other, AdditiveKernel) else [other]\n return AdditiveKernel(*kernels)\n\n def __mul__(self, other):\n kernels = []\n kernels += self.kernels if isinstance(self, ProductKernel) else [self]\n kernels += other.kernels if isinstance(other, ProductKernel) else [other]\n return ProductKernel(*kernels)\n\n def __setstate__(self, d):\n self.__dict__ = d\n\n def __getitem__(self, index):\n if len(self.batch_shape) == 0:\n return self\n\n new_kernel = deepcopy(self)\n # Process the index\n index = index if isinstance(index, tuple) else (index,)\n\n for param_name, param in self._parameters.items():\n new_kernel._parameters[param_name].data = param.__getitem__(index)\n ndim_removed = len(param.shape) - len(new_kernel._parameters[param_name].shape)\n new_batch_shape_len = len(self.batch_shape) - ndim_removed\n new_kernel.batch_shape = new_kernel._parameters[param_name].shape[:new_batch_shape_len]\n\n for sub_module_name, sub_module in self.named_sub_kernels():\n self._modules[sub_module_name] = sub_module.__getitem__(index)\n\n return new_kernel\n\n\nclass AdditiveKernel(Kernel):\n \"\"\"\n A Kernel that supports summing over multiple component kernels.\n\n Example:\n >>> covar_module = RBFKernel(active_dims=torch.tensor([1])) + RBFKernel(active_dims=torch.tensor([2]))\n >>> x1 = torch.randn(50, 2)\n >>> additive_kernel_matrix = covar_module(x1)\n \"\"\"\n\n @property\n def is_stationary(self) -> bool:\n \"\"\"\n Kernel is stationary if all components are stationary.\n \"\"\"\n return all(k.is_stationary for k in self.kernels)\n\n def __init__(self, *kernels):\n super(AdditiveKernel, self).__init__()\n self.kernels = ModuleList(kernels)\n\n def forward(self, x1, x2, diag=False, **params):\n res = ZeroLazyTensor() if not diag else 0\n for kern in self.kernels:\n next_term = kern(x1, x2, diag=diag, **params)\n if not diag:\n res = res + lazify(next_term)\n else:\n res = res + next_term\n\n return res\n\n def num_outputs_per_input(self, x1, x2):\n return self.kernels[0].num_outputs_per_input(x1, x2)\n\n def __getitem__(self, index):\n new_kernel = deepcopy(self)\n for i, kernel in enumerate(self.kernels):\n new_kernel.kernels[i] = self.kernels[i].__getitem__(index)\n\n return new_kernel\n\n\nclass ProductKernel(Kernel):\n \"\"\"\n A Kernel that supports elementwise multiplying multiple component kernels together.\n\n Example:\n >>> covar_module = RBFKernel(active_dims=torch.tensor([1])) * RBFKernel(active_dims=torch.tensor([2]))\n >>> x1 = torch.randn(50, 2)\n >>> kernel_matrix = 
covar_module(x1) # The RBF Kernel already decomposes multiplicatively, so this is foolish!\n \"\"\"\n\n @property\n def is_stationary(self) -> bool:\n \"\"\"\n Kernel is stationary if all components are stationary.\n \"\"\"\n return all(k.is_stationary for k in self.kernels)\n\n def __init__(self, *kernels):\n super(ProductKernel, self).__init__()\n self.kernels = ModuleList(kernels)\n\n def forward(self, x1, x2, diag=False, **params):\n x1_eq_x2 = torch.equal(x1, x2)\n\n if not x1_eq_x2:\n # If x1 != x2, then we can't make a MulLazyTensor because the kernel won't necessarily be square/symmetric\n res = delazify(self.kernels[0](x1, x2, diag=diag, **params))\n else:\n res = self.kernels[0](x1, x2, diag=diag, **params)\n\n if not diag:\n res = lazify(res)\n\n for kern in self.kernels[1:]:\n next_term = kern(x1, x2, diag=diag, **params)\n if not x1_eq_x2:\n # Again delazify if x1 != x2\n res = res * delazify(next_term)\n else:\n if not diag:\n res = res * lazify(next_term)\n else:\n res = res * next_term\n\n return res\n\n def num_outputs_per_input(self, x1, x2):\n return self.kernels[0].num_outputs_per_input(x1, x2)\n\n def __getitem__(self, index):\n new_kernel = deepcopy(self)\n for i, kernel in enumerate(self.kernels):\n new_kernel.kernels[i] = self.kernels[i].__getitem__(index)\n\n return new_kernel\n", "path": "gpytorch/kernels/kernel.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport warnings\nfrom abc import abstractmethod\nfrom copy import deepcopy\n\nimport torch\nfrom torch.nn import ModuleList\n\nfrom .. import settings\nfrom ..constraints import Positive\nfrom ..lazy import LazyEvaluatedKernelTensor, ZeroLazyTensor, delazify, lazify\nfrom ..models import exact_prediction_strategies\nfrom ..module import Module\nfrom ..utils.broadcasting import _mul_broadcast_shape\n\n\ndef default_postprocess_script(x):\n return x\n\n\nclass Distance(torch.nn.Module):\n def __init__(self, postprocess_script=default_postprocess_script):\n super().__init__()\n self._postprocess = postprocess_script\n\n def _sq_dist(self, x1, x2, postprocess, x1_eq_x2=False):\n # TODO: use torch squared cdist once implemented: https://github.com/pytorch/pytorch/pull/25799\n adjustment = x1.mean(-2, keepdim=True)\n x1 = x1 - adjustment\n x2 = x2 - adjustment # x1 and x2 should be identical in all dims except -2 at this point\n\n # Compute squared distance matrix using quadratic expansion\n x1_norm = x1.pow(2).sum(dim=-1, keepdim=True)\n x1_pad = torch.ones_like(x1_norm)\n if x1_eq_x2 and not x1.requires_grad and not x2.requires_grad:\n x2_norm, x2_pad = x1_norm, x1_pad\n else:\n x2_norm = x2.pow(2).sum(dim=-1, keepdim=True)\n x2_pad = torch.ones_like(x2_norm)\n x1_ = torch.cat([-2.0 * x1, x1_norm, x1_pad], dim=-1)\n x2_ = torch.cat([x2, x2_pad, x2_norm], dim=-1)\n res = x1_.matmul(x2_.transpose(-2, -1))\n\n if x1_eq_x2 and not x1.requires_grad and not x2.requires_grad:\n res.diagonal(dim1=-2, dim2=-1).fill_(0)\n\n # Zero out negative values\n res.clamp_min_(0)\n return self._postprocess(res) if postprocess else res\n\n def _dist(self, x1, x2, postprocess, x1_eq_x2=False):\n # TODO: use torch cdist once implementation is improved: https://github.com/pytorch/pytorch/pull/25799\n res = self._sq_dist(x1, x2, postprocess=False, x1_eq_x2=x1_eq_x2)\n res = res.clamp_min_(1e-30).sqrt_()\n return self._postprocess(res) if postprocess else res\n\n\nclass Kernel(Module):\n r\"\"\"\n Kernels in GPyTorch are implemented as a :class:`gpytorch.Module` that, when called on two :obj:`torch.tensor`\n objects `x1` and 
`x2` returns either a :obj:`torch.tensor` or a :obj:`gpytorch.lazy.LazyTensor` that represents\n the covariance matrix between `x1` and `x2`.\n\n In the typical use case, to extend this class means to implement the :func:`~gpytorch.kernels.Kernel.forward`\n method.\n\n .. note::\n The :func:`~gpytorch.kernels.Kernel.__call__` does some additional internal work. In particular,\n all kernels are lazily evaluated so that, in some cases, we can index in to the kernel matrix before actually\n computing it. Furthermore, many built in kernel modules return LazyTensors that allow for more efficient\n inference than if we explicitly computed the kernel matrix itself.\n\n As a result, if you want to use a :obj:`gpytorch.kernels.Kernel` object just to get an actual\n :obj:`torch.tensor` representing the covariance matrix, you may need to call the\n :func:`gpytorch.lazy.LazyTensor.evaluate` method on the output.\n\n This base :class:`Kernel` class includes a lengthscale parameter\n :math:`\\Theta`, which is used by many common kernel functions.\n There are a few options for the lengthscale:\n\n * Default: No lengthscale (i.e. :math:`\\Theta` is the identity matrix).\n\n * Single lengthscale: One lengthscale can be applied to all input dimensions/batches\n (i.e. :math:`\\Theta` is a constant diagonal matrix).\n This is controlled by setting the attribute `has_lengthscale=True`.\n\n * ARD: Each input dimension gets its own separate lengthscale\n (i.e. :math:`\\Theta` is a non-constant diagonal matrix).\n This is controlled by the `ard_num_dims` keyword argument (as well as `has_lengthscale=True`).\n\n In batch-mode (i.e. when :math:`x_1` and :math:`x_2` are batches of input matrices), each\n batch of data can have its own lengthscale parameter by setting the `batch_shape`\n keyword argument to the appropriate number of batches.\n\n .. note::\n\n The :attr:`lengthscale` parameter is parameterized on a log scale to constrain it to be positive.\n You can set a prior on this parameter using the :attr:`lengthscale_prior` argument.\n\n Base Args:\n :attr:`ard_num_dims` (int, optional):\n Set this if you want a separate lengthscale for each input\n dimension. It should be `d` if :attr:`x1` is a `n x d` matrix. Default: `None`\n :attr:`batch_shape` (torch.Size, optional):\n Set this if you want a separate lengthscale for each batch of input\n data. It should be `b1 x ... x bk` if :attr:`x1` is a `b1 x ... x bk x n x d` tensor.\n :attr:`active_dims` (tuple of ints, optional):\n Set this if you want to compute the covariance of only a few input dimensions. The ints\n corresponds to the indices of the dimensions. Default: `None`.\n :attr:`lengthscale_prior` (Prior, optional):\n Set this if you want to apply a prior to the lengthscale parameter. Default: `None`\n :attr:`lengthscale_constraint` (Constraint, optional):\n Set this if you want to apply a constraint to the lengthscale parameter. Default: `Positive`.\n :attr:`eps` (float):\n The minimum value that the lengthscale can take (prevents divide by zero errors). Default: `1e-6`.\n\n Base Attributes:\n :attr:`lengthscale` (Tensor):\n The lengthscale parameter. 
Size/shape of parameter depends on the\n :attr:`ard_num_dims` and :attr:`batch_shape` arguments.\n\n Example:\n >>> covar_module = gpytorch.kernels.LinearKernel()\n >>> x1 = torch.randn(50, 3)\n >>> lazy_covar_matrix = covar_module(x1) # Returns a RootLazyTensor\n >>> tensor_covar_matrix = lazy_covar_matrix.evaluate() # Gets the actual tensor for this kernel matrix\n \"\"\"\n\n has_lengthscale = False\n\n def __init__(\n self,\n ard_num_dims=None,\n batch_shape=torch.Size([]),\n active_dims=None,\n lengthscale_prior=None,\n lengthscale_constraint=None,\n eps=1e-6,\n **kwargs,\n ):\n super(Kernel, self).__init__()\n self._batch_shape = batch_shape\n if active_dims is not None and not torch.is_tensor(active_dims):\n active_dims = torch.tensor(active_dims, dtype=torch.long)\n self.register_buffer(\"active_dims\", active_dims)\n self.ard_num_dims = ard_num_dims\n\n self.eps = eps\n\n param_transform = kwargs.get(\"param_transform\")\n\n if lengthscale_constraint is None:\n lengthscale_constraint = Positive()\n\n if param_transform is not None:\n warnings.warn(\n \"The 'param_transform' argument is now deprecated. If you want to use a different \"\n \"transformation, specify a different 'lengthscale_constraint' instead.\",\n DeprecationWarning,\n )\n\n if self.has_lengthscale:\n lengthscale_num_dims = 1 if ard_num_dims is None else ard_num_dims\n self.register_parameter(\n name=\"raw_lengthscale\",\n parameter=torch.nn.Parameter(torch.zeros(*self.batch_shape, 1, lengthscale_num_dims)),\n )\n if lengthscale_prior is not None:\n self.register_prior(\n \"lengthscale_prior\", lengthscale_prior, lambda m: m.lengthscale, lambda m, v: m._set_lengthscale(v)\n )\n\n self.register_constraint(\"raw_lengthscale\", lengthscale_constraint)\n\n self.distance_module = None\n # TODO: Remove this on next official PyTorch release.\n self.__pdist_supports_batch = True\n\n @abstractmethod\n def forward(self, x1, x2, diag=False, last_dim_is_batch=False, **params):\n r\"\"\"\n Computes the covariance between x1 and x2.\n This method should be imlemented by all Kernel subclasses.\n\n Args:\n :attr:`x1` (Tensor `n x d` or `b x n x d`):\n First set of data\n :attr:`x2` (Tensor `m x d` or `b x m x d`):\n Second set of data\n :attr:`diag` (bool):\n Should the Kernel compute the whole kernel, or just the diag?\n :attr:`last_dim_is_batch` (tuple, optional):\n If this is true, it treats the last dimension of the data as another batch dimension.\n (Useful for additive structure over the dimensions). 
Default: False\n\n Returns:\n :class:`Tensor` or :class:`gpytorch.lazy.LazyTensor`.\n The exact size depends on the kernel's evaluation mode:\n\n * `full_covar`: `n x m` or `b x n x m`\n * `full_covar` with `last_dim_is_batch=True`: `k x n x m` or `b x k x n x m`\n * `diag`: `n` or `b x n`\n * `diag` with `last_dim_is_batch=True`: `k x n` or `b x k x n`\n \"\"\"\n raise NotImplementedError()\n\n @property\n def batch_shape(self):\n kernels = list(self.sub_kernels())\n if len(kernels):\n return _mul_broadcast_shape(self._batch_shape, *[k.batch_shape for k in kernels])\n else:\n return self._batch_shape\n\n @batch_shape.setter\n def batch_shape(self, val):\n self._batch_shape = val\n\n @property\n def dtype(self):\n if self.has_lengthscale:\n return self.lengthscale.dtype\n else:\n for param in self.parameters():\n return param.dtype\n return torch.get_default_dtype()\n\n @property\n def is_stationary(self) -> bool:\n \"\"\"\n Property to indicate whether kernel is stationary or not.\n \"\"\"\n return self.has_lengthscale\n\n @property\n def lengthscale(self):\n if self.has_lengthscale:\n return self.raw_lengthscale_constraint.transform(self.raw_lengthscale)\n else:\n return None\n\n @lengthscale.setter\n def lengthscale(self, value):\n self._set_lengthscale(value)\n\n def _set_lengthscale(self, value):\n if not self.has_lengthscale:\n raise RuntimeError(\"Kernel has no lengthscale.\")\n\n if not torch.is_tensor(value):\n value = torch.as_tensor(value).to(self.raw_lengthscale)\n\n self.initialize(raw_lengthscale=self.raw_lengthscale_constraint.inverse_transform(value))\n\n def local_load_samples(self, samples_dict, memo, prefix):\n num_samples = next(iter(samples_dict.values())).size(0)\n self.batch_shape = torch.Size([num_samples]) + self.batch_shape\n super().local_load_samples(samples_dict, memo, prefix)\n\n def covar_dist(\n self,\n x1,\n x2,\n diag=False,\n last_dim_is_batch=False,\n square_dist=False,\n dist_postprocess_func=default_postprocess_script,\n postprocess=True,\n **params,\n ):\n r\"\"\"\n This is a helper method for computing the Euclidean distance between\n all pairs of points in x1 and x2.\n\n Args:\n :attr:`x1` (Tensor `n x d` or `b1 x ... x bk x n x d`):\n First set of data.\n :attr:`x2` (Tensor `m x d` or `b1 x ... x bk x m x d`):\n Second set of data.\n :attr:`diag` (bool):\n Should we return the whole distance matrix, or just the diagonal? 
If True, we must have `x1 == x2`.\n :attr:`last_dim_is_batch` (tuple, optional):\n Is the last dimension of the data a batch dimension or not?\n :attr:`square_dist` (bool):\n Should we square the distance matrix before returning?\n\n Returns:\n (:class:`Tensor`, :class:`Tensor) corresponding to the distance matrix between `x1` and `x2`.\n The shape depends on the kernel's mode\n * `diag=False`\n * `diag=False` and `last_dim_is_batch=True`: (`b x d x n x n`)\n * `diag=True`\n * `diag=True` and `last_dim_is_batch=True`: (`b x d x n`)\n \"\"\"\n if last_dim_is_batch:\n x1 = x1.transpose(-1, -2).unsqueeze(-1)\n x2 = x2.transpose(-1, -2).unsqueeze(-1)\n\n x1_eq_x2 = torch.equal(x1, x2)\n\n # torch scripts expect tensors\n postprocess = torch.tensor(postprocess)\n\n res = None\n\n # Cache the Distance object or else JIT will recompile every time\n if not self.distance_module or self.distance_module._postprocess != dist_postprocess_func:\n self.distance_module = Distance(dist_postprocess_func)\n\n if diag:\n # Special case the diagonal because we can return all zeros most of the time.\n if x1_eq_x2:\n res = torch.zeros(*x1.shape[:-2], x1.shape[-2], dtype=x1.dtype, device=x1.device)\n if postprocess:\n res = dist_postprocess_func(res)\n return res\n else:\n res = torch.norm(x1 - x2, p=2, dim=-1)\n if square_dist:\n res = res.pow(2)\n if postprocess:\n res = dist_postprocess_func(res)\n return res\n\n elif square_dist:\n res = self.distance_module._sq_dist(x1, x2, postprocess, x1_eq_x2)\n else:\n res = self.distance_module._dist(x1, x2, postprocess, x1_eq_x2)\n\n return res\n\n def named_sub_kernels(self):\n for name, module in self.named_modules():\n if module is not self and isinstance(module, Kernel):\n yield name, module\n\n def num_outputs_per_input(self, x1, x2):\n \"\"\"\n How many outputs are produced per input (default 1)\n if x1 is size `n x d` and x2 is size `m x d`, then the size of the kernel\n will be `(n * num_outputs_per_input) x (m * num_outputs_per_input)`\n Default: 1\n \"\"\"\n return 1\n\n def prediction_strategy(self, train_inputs, train_prior_dist, train_labels, likelihood):\n return exact_prediction_strategies.DefaultPredictionStrategy(\n train_inputs, train_prior_dist, train_labels, likelihood\n )\n\n def sub_kernels(self):\n for _, kernel in self.named_sub_kernels():\n yield kernel\n\n def __call__(self, x1, x2=None, diag=False, last_dim_is_batch=False, **params):\n x1_, x2_ = x1, x2\n\n # Select the active dimensions\n if self.active_dims is not None:\n x1_ = x1_.index_select(-1, self.active_dims)\n if x2_ is not None:\n x2_ = x2_.index_select(-1, self.active_dims)\n\n # Give x1_ and x2_ a last dimension, if necessary\n if x1_.ndimension() == 1:\n x1_ = x1_.unsqueeze(1)\n if x2_ is not None:\n if x2_.ndimension() == 1:\n x2_ = x2_.unsqueeze(1)\n if not x1_.size(-1) == x2_.size(-1):\n raise RuntimeError(\"x1_ and x2_ must have the same number of dimensions!\")\n\n if x2_ is None:\n x2_ = x1_\n\n # Check that ard_num_dims matches the supplied number of dimensions\n if settings.debug.on():\n if self.ard_num_dims is not None and self.ard_num_dims != x1_.size(-1):\n raise RuntimeError(\n \"Expected the input to have {} dimensionality \"\n \"(based on the ard_num_dims argument). 
Got {}.\".format(self.ard_num_dims, x1_.size(-1))\n )\n\n if diag:\n res = super(Kernel, self).__call__(x1_, x2_, diag=True, last_dim_is_batch=last_dim_is_batch, **params)\n # Did this Kernel eat the diag option?\n # If it does not return a LazyEvaluatedKernelTensor, we can call diag on the output\n if not isinstance(res, LazyEvaluatedKernelTensor):\n if res.dim() == x1_.dim() and res.shape[-2:] == torch.Size((x1_.size(-2), x2_.size(-2))):\n res = res.diag()\n return res\n\n else:\n if settings.lazily_evaluate_kernels.on():\n res = LazyEvaluatedKernelTensor(x1_, x2_, kernel=self, last_dim_is_batch=last_dim_is_batch, **params)\n else:\n res = lazify(super(Kernel, self).__call__(x1_, x2_, last_dim_is_batch=last_dim_is_batch, **params))\n return res\n\n def __getstate__(self):\n # JIT ScriptModules cannot be pickled\n self.distance_module = None\n return self.__dict__\n\n def __add__(self, other):\n kernels = []\n kernels += self.kernels if isinstance(self, AdditiveKernel) else [self]\n kernels += other.kernels if isinstance(other, AdditiveKernel) else [other]\n return AdditiveKernel(*kernels)\n\n def __mul__(self, other):\n kernels = []\n kernels += self.kernels if isinstance(self, ProductKernel) else [self]\n kernels += other.kernels if isinstance(other, ProductKernel) else [other]\n return ProductKernel(*kernels)\n\n def __setstate__(self, d):\n self.__dict__ = d\n\n def __getitem__(self, index):\n if len(self.batch_shape) == 0:\n return self\n\n new_kernel = deepcopy(self)\n # Process the index\n index = index if isinstance(index, tuple) else (index,)\n\n for param_name, param in self._parameters.items():\n new_kernel._parameters[param_name].data = param.__getitem__(index)\n ndim_removed = len(param.shape) - len(new_kernel._parameters[param_name].shape)\n new_batch_shape_len = len(self.batch_shape) - ndim_removed\n new_kernel.batch_shape = new_kernel._parameters[param_name].shape[:new_batch_shape_len]\n\n for sub_module_name, sub_module in self.named_sub_kernels():\n self._modules[sub_module_name] = sub_module.__getitem__(index)\n\n return new_kernel\n\n\nclass AdditiveKernel(Kernel):\n \"\"\"\n A Kernel that supports summing over multiple component kernels.\n\n Example:\n >>> covar_module = RBFKernel(active_dims=torch.tensor([1])) + RBFKernel(active_dims=torch.tensor([2]))\n >>> x1 = torch.randn(50, 2)\n >>> additive_kernel_matrix = covar_module(x1)\n \"\"\"\n\n @property\n def is_stationary(self) -> bool:\n \"\"\"\n Kernel is stationary if all components are stationary.\n \"\"\"\n return all(k.is_stationary for k in self.kernels)\n\n def __init__(self, *kernels):\n super(AdditiveKernel, self).__init__()\n self.kernels = ModuleList(kernels)\n\n def forward(self, x1, x2, diag=False, **params):\n res = ZeroLazyTensor() if not diag else 0\n for kern in self.kernels:\n next_term = kern(x1, x2, diag=diag, **params)\n if not diag:\n res = res + lazify(next_term)\n else:\n res = res + next_term\n\n return res\n\n def num_outputs_per_input(self, x1, x2):\n return self.kernels[0].num_outputs_per_input(x1, x2)\n\n def __getitem__(self, index):\n new_kernel = deepcopy(self)\n for i, kernel in enumerate(self.kernels):\n new_kernel.kernels[i] = self.kernels[i].__getitem__(index)\n\n return new_kernel\n\n\nclass ProductKernel(Kernel):\n \"\"\"\n A Kernel that supports elementwise multiplying multiple component kernels together.\n\n Example:\n >>> covar_module = RBFKernel(active_dims=torch.tensor([1])) * RBFKernel(active_dims=torch.tensor([2]))\n >>> x1 = torch.randn(50, 2)\n >>> kernel_matrix = 
covar_module(x1) # The RBF Kernel already decomposes multiplicatively, so this is foolish!\n \"\"\"\n\n @property\n def is_stationary(self) -> bool:\n \"\"\"\n Kernel is stationary if all components are stationary.\n \"\"\"\n return all(k.is_stationary for k in self.kernels)\n\n def __init__(self, *kernels):\n super(ProductKernel, self).__init__()\n self.kernels = ModuleList(kernels)\n\n def forward(self, x1, x2, diag=False, **params):\n x1_eq_x2 = torch.equal(x1, x2)\n\n if not x1_eq_x2:\n # If x1 != x2, then we can't make a MulLazyTensor because the kernel won't necessarily be square/symmetric\n res = delazify(self.kernels[0](x1, x2, diag=diag, **params))\n else:\n res = self.kernels[0](x1, x2, diag=diag, **params)\n\n if not diag:\n res = lazify(res)\n\n for kern in self.kernels[1:]:\n next_term = kern(x1, x2, diag=diag, **params)\n if not x1_eq_x2:\n # Again delazify if x1 != x2\n res = res * delazify(next_term)\n else:\n if not diag:\n res = res * lazify(next_term)\n else:\n res = res * next_term\n\n return res\n\n def num_outputs_per_input(self, x1, x2):\n return self.kernels[0].num_outputs_per_input(x1, x2)\n\n def __getitem__(self, index):\n new_kernel = deepcopy(self)\n for i, kernel in enumerate(self.kernels):\n new_kernel.kernels[i] = self.kernels[i].__getitem__(index)\n\n return new_kernel\n", "path": "gpytorch/kernels/kernel.py"}]} |
gh_patches_debug_1044 | rasdani/github-patches | git_diff | django-oscar__django-oscar-3495 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The postal code validation for Israel should also accept 5-digit numbers
In `oscar.apps.address.abstract_models.AbstractAddress`:
`'IL': r'^[0-9]{7}$',`
Should be:
`'IL': r'^([0-9]{5}|[0-9]{7})$',`
For more info: https://en.wikipedia.org/wiki/Postal_codes_in_Israel
--- END ISSUE ---
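The proposed change is easy to sanity-check outside of Oscar. The snippet below is not part of the original report; it is a minimal sketch, assuming only the standard-library `re` module, the two patterns quoted in the issue, and hypothetical sample postcodes, showing that the proposed pattern accepts both the legacy 5-digit and the current 7-digit Israeli postcodes while the existing pattern rejects the 5-digit form.
```python
import re

CURRENT_IL = r'^[0-9]{7}$'              # pattern currently in AbstractAddress
PROPOSED_IL = r'^([0-9]{5}|[0-9]{7})$'  # pattern suggested in the issue

# Hypothetical sample values: a legacy 5-digit code, a current 7-digit code,
# and a malformed 6-digit code that should fail under both patterns.
for postcode in ('12345', '9414219', '123456'):
    current_ok = bool(re.match(CURRENT_IL, postcode))
    proposed_ok = bool(re.match(PROPOSED_IL, postcode))
    print(f'{postcode!r}: current={current_ok} proposed={proposed_ok}')

# Expected output:
# '12345': current=False proposed=True
# '9414219': current=True proposed=True
# '123456': current=False proposed=False
```
Note that the 6-digit value stays invalid under both patterns, so widening the regex does not weaken validation for malformed codes.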
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/oscar/apps/address/abstract_models.py`
Content:
```
1 import re
2 import zlib
3
4 from django.conf import settings
5 from django.core import exceptions
6 from django.db import models
7 from django.utils.translation import gettext_lazy as _
8 from django.utils.translation import pgettext_lazy
9 from phonenumber_field.modelfields import PhoneNumberField
10
11 from oscar.core.compat import AUTH_USER_MODEL
12 from oscar.models.fields import UppercaseCharField
13
14
15 class AbstractAddress(models.Model):
16 """
17 Superclass address object
18
19 This is subclassed and extended to provide models for
20 user, shipping and billing addresses.
21 """
22 MR, MISS, MRS, MS, DR = ('Mr', 'Miss', 'Mrs', 'Ms', 'Dr')
23 TITLE_CHOICES = (
24 (MR, _("Mr")),
25 (MISS, _("Miss")),
26 (MRS, _("Mrs")),
27 (MS, _("Ms")),
28 (DR, _("Dr")),
29 )
30
31 POSTCODE_REQUIRED = 'postcode' in settings.OSCAR_REQUIRED_ADDRESS_FIELDS
32
33 # Regex for each country. Not listed countries don't use postcodes
34 # Based on http://en.wikipedia.org/wiki/List_of_postal_codes
35 POSTCODES_REGEX = {
36 'AC': r'^[A-Z]{4}[0-9][A-Z]$',
37 'AD': r'^AD[0-9]{3}$',
38 'AF': r'^[0-9]{4}$',
39 'AI': r'^AI-2640$',
40 'AL': r'^[0-9]{4}$',
41 'AM': r'^[0-9]{4}$',
42 'AR': r'^([0-9]{4}|[A-Z][0-9]{4}[A-Z]{3})$',
43 'AS': r'^[0-9]{5}(-[0-9]{4}|-[0-9]{6})?$',
44 'AT': r'^[0-9]{4}$',
45 'AU': r'^[0-9]{4}$',
46 'AX': r'^[0-9]{5}$',
47 'AZ': r'^AZ[0-9]{4}$',
48 'BA': r'^[0-9]{5}$',
49 'BB': r'^BB[0-9]{5}$',
50 'BD': r'^[0-9]{4}$',
51 'BE': r'^[0-9]{4}$',
52 'BG': r'^[0-9]{4}$',
53 'BH': r'^[0-9]{3,4}$',
54 'BL': r'^[0-9]{5}$',
55 'BM': r'^[A-Z]{2}([0-9]{2}|[A-Z]{2})',
56 'BN': r'^[A-Z]{2}[0-9]{4}$',
57 'BO': r'^[0-9]{4}$',
58 'BR': r'^[0-9]{5}(-[0-9]{3})?$',
59 'BT': r'^[0-9]{3}$',
60 'BY': r'^[0-9]{6}$',
61 'CA': r'^[A-Z][0-9][A-Z][0-9][A-Z][0-9]$',
62 'CC': r'^[0-9]{4}$',
63 'CH': r'^[0-9]{4}$',
64 'CL': r'^([0-9]{7}|[0-9]{3}-[0-9]{4})$',
65 'CN': r'^[0-9]{6}$',
66 'CO': r'^[0-9]{6}$',
67 'CR': r'^[0-9]{4,5}$',
68 'CU': r'^[0-9]{5}$',
69 'CV': r'^[0-9]{4}$',
70 'CX': r'^[0-9]{4}$',
71 'CY': r'^[0-9]{4}$',
72 'CZ': r'^[0-9]{5}$',
73 'DE': r'^[0-9]{5}$',
74 'DK': r'^[0-9]{4}$',
75 'DO': r'^[0-9]{5}$',
76 'DZ': r'^[0-9]{5}$',
77 'EC': r'^EC[0-9]{6}$',
78 'EE': r'^[0-9]{5}$',
79 'EG': r'^[0-9]{5}$',
80 'ES': r'^[0-9]{5}$',
81 'ET': r'^[0-9]{4}$',
82 'FI': r'^[0-9]{5}$',
83 'FK': r'^[A-Z]{4}[0-9][A-Z]{2}$',
84 'FM': r'^[0-9]{5}(-[0-9]{4})?$',
85 'FO': r'^[0-9]{3}$',
86 'FR': r'^[0-9]{5}$',
87 'GA': r'^[0-9]{2}.*[0-9]{2}$',
88 'GB': r'^[A-Z][A-Z0-9]{1,3}[0-9][A-Z]{2}$',
89 'GE': r'^[0-9]{4}$',
90 'GF': r'^[0-9]{5}$',
91 'GG': r'^([A-Z]{2}[0-9]{2,3}[A-Z]{2})$',
92 'GI': r'^GX111AA$',
93 'GL': r'^[0-9]{4}$',
94 'GP': r'^[0-9]{5}$',
95 'GR': r'^[0-9]{5}$',
96 'GS': r'^SIQQ1ZZ$',
97 'GT': r'^[0-9]{5}$',
98 'GU': r'^[0-9]{5}$',
99 'GW': r'^[0-9]{4}$',
100 'HM': r'^[0-9]{4}$',
101 'HN': r'^[0-9]{5}$',
102 'HR': r'^[0-9]{5}$',
103 'HT': r'^[0-9]{4}$',
104 'HU': r'^[0-9]{4}$',
105 'ID': r'^[0-9]{5}$',
106 'IL': r'^[0-9]{7}$',
107 'IM': r'^IM[0-9]{2,3}[A-Z]{2}$$',
108 'IN': r'^[0-9]{6}$',
109 'IO': r'^[A-Z]{4}[0-9][A-Z]{2}$',
110 'IQ': r'^[0-9]{5}$',
111 'IR': r'^[0-9]{5}-[0-9]{5}$',
112 'IS': r'^[0-9]{3}$',
113 'IT': r'^[0-9]{5}$',
114 'JE': r'^JE[0-9]{2}[A-Z]{2}$',
115 'JM': r'^JM[A-Z]{3}[0-9]{2}$',
116 'JO': r'^[0-9]{5}$',
117 'JP': r'^[0-9]{3}-?[0-9]{4}$',
118 'KE': r'^[0-9]{5}$',
119 'KG': r'^[0-9]{6}$',
120 'KH': r'^[0-9]{5}$',
121 'KR': r'^[0-9]{5}$',
122 'KY': r'^KY[0-9]-[0-9]{4}$',
123 'KZ': r'^[0-9]{6}$',
124 'LA': r'^[0-9]{5}$',
125 'LB': r'^[0-9]{8}$',
126 'LI': r'^[0-9]{4}$',
127 'LK': r'^[0-9]{5}$',
128 'LR': r'^[0-9]{4}$',
129 'LS': r'^[0-9]{3}$',
130 'LT': r'^(LT-)?[0-9]{5}$',
131 'LU': r'^[0-9]{4}$',
132 'LV': r'^LV-[0-9]{4}$',
133 'LY': r'^[0-9]{5}$',
134 'MA': r'^[0-9]{5}$',
135 'MC': r'^980[0-9]{2}$',
136 'MD': r'^MD-?[0-9]{4}$',
137 'ME': r'^[0-9]{5}$',
138 'MF': r'^[0-9]{5}$',
139 'MG': r'^[0-9]{3}$',
140 'MH': r'^[0-9]{5}$',
141 'MK': r'^[0-9]{4}$',
142 'MM': r'^[0-9]{5}$',
143 'MN': r'^[0-9]{5}$',
144 'MP': r'^[0-9]{5}$',
145 'MQ': r'^[0-9]{5}$',
146 'MT': r'^[A-Z]{3}[0-9]{4}$',
147 'MV': r'^[0-9]{4,5}$',
148 'MX': r'^[0-9]{5}$',
149 'MY': r'^[0-9]{5}$',
150 'MZ': r'^[0-9]{4}$',
151 'NA': r'^[0-9]{5}$',
152 'NC': r'^[0-9]{5}$',
153 'NE': r'^[0-9]{4}$',
154 'NF': r'^[0-9]{4}$',
155 'NG': r'^[0-9]{6}$',
156 'NI': r'^[0-9]{5}$',
157 'NL': r'^[0-9]{4}[A-Z]{2}$',
158 'NO': r'^[0-9]{4}$',
159 'NP': r'^[0-9]{5}$',
160 'NZ': r'^[0-9]{4}$',
161 'OM': r'^[0-9]{3}$',
162 'PA': r'^[0-9]{6}$',
163 'PE': r'^[0-9]{5}$',
164 'PF': r'^[0-9]{5}$',
165 'PG': r'^[0-9]{3}$',
166 'PH': r'^[0-9]{4}$',
167 'PK': r'^[0-9]{5}$',
168 'PL': r'^[0-9]{2}-?[0-9]{3}$',
169 'PM': r'^[0-9]{5}$',
170 'PN': r'^[A-Z]{4}[0-9][A-Z]{2}$',
171 'PR': r'^[0-9]{5}$',
172 'PT': r'^[0-9]{4}(-?[0-9]{3})?$',
173 'PW': r'^[0-9]{5}$',
174 'PY': r'^[0-9]{4}$',
175 'RE': r'^[0-9]{5}$',
176 'RO': r'^[0-9]{6}$',
177 'RS': r'^[0-9]{5}$',
178 'RU': r'^[0-9]{6}$',
179 'SA': r'^[0-9]{5}$',
180 'SD': r'^[0-9]{5}$',
181 'SE': r'^[0-9]{5}$',
182 'SG': r'^([0-9]{2}|[0-9]{4}|[0-9]{6})$',
183 'SH': r'^(STHL1ZZ|TDCU1ZZ)$',
184 'SI': r'^(SI-)?[0-9]{4}$',
185 'SK': r'^[0-9]{5}$',
186 'SM': r'^[0-9]{5}$',
187 'SN': r'^[0-9]{5}$',
188 'SV': r'^01101$',
189 'SZ': r'^[A-Z][0-9]{3}$',
190 'TC': r'^TKCA1ZZ$',
191 'TD': r'^[0-9]{5}$',
192 'TH': r'^[0-9]{5}$',
193 'TJ': r'^[0-9]{6}$',
194 'TM': r'^[0-9]{6}$',
195 'TN': r'^[0-9]{4}$',
196 'TR': r'^[0-9]{5}$',
197 'TT': r'^[0-9]{6}$',
198 'TW': r'^([0-9]{3}|[0-9]{5})$',
199 'UA': r'^[0-9]{5}$',
200 'US': r'^[0-9]{5}(-[0-9]{4}|-[0-9]{6})?$',
201 'UY': r'^[0-9]{5}$',
202 'UZ': r'^[0-9]{6}$',
203 'VA': r'^00120$',
204 'VC': r'^VC[0-9]{4}',
205 'VE': r'^[0-9]{4}[A-Z]?$',
206 'VG': r'^VG[0-9]{4}$',
207 'VI': r'^[0-9]{5}$',
208 'VN': r'^[0-9]{6}$',
209 'WF': r'^[0-9]{5}$',
210 'XK': r'^[0-9]{5}$',
211 'YT': r'^[0-9]{5}$',
212 'ZA': r'^[0-9]{4}$',
213 'ZM': r'^[0-9]{5}$',
214 }
215
216 title = models.CharField(
217 pgettext_lazy("Treatment Pronouns for the customer", "Title"),
218 max_length=64, choices=TITLE_CHOICES, blank=True)
219 first_name = models.CharField(_("First name"), max_length=255, blank=True)
220 last_name = models.CharField(_("Last name"), max_length=255, blank=True)
221
222 # We use quite a few lines of an address as they are often quite long and
223 # it's easier to just hide the unnecessary ones than add extra ones.
224 line1 = models.CharField(_("First line of address"), max_length=255)
225 line2 = models.CharField(
226 _("Second line of address"), max_length=255, blank=True)
227 line3 = models.CharField(
228 _("Third line of address"), max_length=255, blank=True)
229 line4 = models.CharField(_("City"), max_length=255, blank=True)
230 state = models.CharField(_("State/County"), max_length=255, blank=True)
231 postcode = UppercaseCharField(
232 _("Post/Zip-code"), max_length=64, blank=True)
233 country = models.ForeignKey(
234 'address.Country',
235 on_delete=models.CASCADE,
236 verbose_name=_("Country"))
237
238 #: A field only used for searching addresses - this contains all the
239 #: relevant fields. This is effectively a poor man's Solr text field.
240 search_text = models.TextField(
241 _("Search text - used only for searching addresses"), editable=False)
242
243 # Fields, used for `summary` property definition and hash generation.
244 base_fields = hash_fields = ['salutation', 'line1', 'line2', 'line3', 'line4', 'state', 'postcode', 'country']
245
246 def __str__(self):
247 return self.summary
248
249 class Meta:
250 abstract = True
251 verbose_name = _('Address')
252 verbose_name_plural = _('Addresses')
253
254 # Saving
255
256 def save(self, *args, **kwargs):
257 self._update_search_text()
258 super().save(*args, **kwargs)
259
260 def clean(self):
261 # Strip all whitespace
262 for field in ['first_name', 'last_name', 'line1', 'line2', 'line3',
263 'line4', 'state', 'postcode']:
264 if self.__dict__[field]:
265 self.__dict__[field] = self.__dict__[field].strip()
266
267 # Ensure postcodes are valid for country
268 self.ensure_postcode_is_valid_for_country()
269
270 def ensure_postcode_is_valid_for_country(self):
271 """
272 Validate postcode given the country
273 """
274 if not self.postcode and self.POSTCODE_REQUIRED and self.country_id:
275 country_code = self.country.iso_3166_1_a2
276 regex = self.POSTCODES_REGEX.get(country_code, None)
277 if regex:
278 msg = _("Addresses in %(country)s require a valid postcode") \
279 % {'country': self.country}
280 raise exceptions.ValidationError(msg)
281
282 if self.postcode and self.country_id:
283 # Ensure postcodes are always uppercase
284 postcode = self.postcode.upper().replace(' ', '')
285 country_code = self.country.iso_3166_1_a2
286 regex = self.POSTCODES_REGEX.get(country_code, None)
287
288 # Validate postcode against regex for the country if available
289 if regex and not re.match(regex, postcode):
290 msg = _("The postcode '%(postcode)s' is not valid "
291 "for %(country)s") \
292 % {'postcode': self.postcode,
293 'country': self.country}
294 raise exceptions.ValidationError(
295 {'postcode': [msg]})
296
297 def _update_search_text(self):
298 search_fields = filter(
299 bool, [self.first_name, self.last_name,
300 self.line1, self.line2, self.line3, self.line4,
301 self.state, self.postcode, self.country.name])
302 self.search_text = ' '.join(search_fields)
303
304 # Properties
305
306 @property
307 def city(self):
308 # Common alias
309 return self.line4
310
311 @property
312 def summary(self):
313 """
314 Returns a single string summary of the address,
315 separating fields using commas.
316 """
317 return ", ".join(self.active_address_fields())
318
319 @property
320 def salutation(self):
321 """
322 Name (including title)
323 """
324 return self.join_fields(
325 ('title', 'first_name', 'last_name'),
326 separator=" ")
327
328 @property
329 def name(self):
330 return self.join_fields(('first_name', 'last_name'), separator=" ")
331
332 # Helpers
333
334 def get_field_values(self, fields):
335 field_values = []
336 for field in fields:
337 # Title is special case
338 if field == 'title':
339 value = self.get_title_display()
340 elif field == 'country':
341 try:
342 value = self.country.printable_name
343 except exceptions.ObjectDoesNotExist:
344 value = ''
345 elif field == 'salutation':
346 value = self.salutation
347 else:
348 value = getattr(self, field)
349 field_values.append(value)
350 return field_values
351
352 def get_address_field_values(self, fields):
353 """
354 Returns set of field values within the salutation and country.
355 """
356 field_values = [f.strip() for f in self.get_field_values(fields) if f]
357 return field_values
358
359 def generate_hash(self):
360 """
361 Returns a hash of the address, based on standard set of fields, listed
362 out in `hash_fields` property.
363 """
364 field_values = self.get_address_field_values(self.hash_fields)
365 # Python 2 and 3 generates CRC checksum in different ranges, so
366 # in order to generate platform-independent value we apply
367 # `& 0xffffffff` expression.
368 return zlib.crc32(', '.join(field_values).upper().encode('UTF8')) & 0xffffffff
369
370 def join_fields(self, fields, separator=", "):
371 """
372 Join a sequence of fields using the specified separator
373 """
374 field_values = self.get_field_values(fields)
375 return separator.join(filter(bool, field_values))
376
377 def populate_alternative_model(self, address_model):
378 """
379 For populating an address model using the matching fields
380 from this one.
381
382 This is used to convert a user address to a shipping address
383 as part of the checkout process.
384 """
385 destination_field_names = [
386 field.name for field in address_model._meta.fields]
387 for field_name in [field.name for field in self._meta.fields]:
388 if field_name in destination_field_names and field_name != 'id':
389 setattr(address_model, field_name, getattr(self, field_name))
390
391 def active_address_fields(self):
392 """
393 Returns the non-empty components of the address, but merging the
394 title, first_name and last_name into a single line. It uses fields
395 listed out in `base_fields` property.
396 """
397 return self.get_address_field_values(self.base_fields)
398
399
400 class AbstractCountry(models.Model):
401 """
402 `ISO 3166 Country Codes <https://www.iso.org/iso-3166-country-codes.html>`_
403
404 The field names are a bit awkward, but kept for backwards compatibility.
405 pycountry's syntax of alpha2, alpha3, name and official_name seems sane.
406 """
407 iso_3166_1_a2 = models.CharField(
408 _('ISO 3166-1 alpha-2'), max_length=2, primary_key=True)
409 iso_3166_1_a3 = models.CharField(
410 _('ISO 3166-1 alpha-3'), max_length=3, blank=True)
411 iso_3166_1_numeric = models.CharField(
412 _('ISO 3166-1 numeric'), blank=True, max_length=3)
413
414 #: The commonly used name; e.g. 'United Kingdom'
415 printable_name = models.CharField(_('Country name'), max_length=128, db_index=True)
416 #: The full official name of a country
417 #: e.g. 'United Kingdom of Great Britain and Northern Ireland'
418 name = models.CharField(_('Official name'), max_length=128)
419
420 display_order = models.PositiveSmallIntegerField(
421 _("Display order"), default=0, db_index=True,
422 help_text=_('Higher the number, higher the country in the list.'))
423
424 is_shipping_country = models.BooleanField(
425 _("Is shipping country"), default=False, db_index=True)
426
427 class Meta:
428 abstract = True
429 app_label = 'address'
430 verbose_name = _('Country')
431 verbose_name_plural = _('Countries')
432 ordering = ('-display_order', 'printable_name',)
433
434 def __str__(self):
435 return self.printable_name or self.name
436
437 @property
438 def code(self):
439 """
440 Shorthand for the ISO 3166 Alpha-2 code
441 """
442 return self.iso_3166_1_a2
443
444 @property
445 def numeric_code(self):
446 """
447 Shorthand for the ISO 3166 numeric code.
448
449 :py:attr:`.iso_3166_1_numeric` used to wrongly be a integer field, but has to
450 be padded with leading zeroes. It's since been converted to a char
451 field, but the database might still contain non-padded strings. That's
452 why the padding is kept.
453 """
454 return "%.03d" % int(self.iso_3166_1_numeric)
455
456
457 class AbstractShippingAddress(AbstractAddress):
458 """
459 A shipping address.
460
461 A shipping address should not be edited once the order has been placed -
462 it should be read-only after that.
463
464 NOTE:
465 ShippingAddress is a model of the order app. But moving it there is tricky
466 due to circular import issues that are amplified by get_model/get_class
467 calls pre-Django 1.7 to register receivers. So...
468 TODO: Once Django 1.6 support is dropped, move AbstractBillingAddress and
469 AbstractShippingAddress to the order app, and move
470 PartnerAddress to the partner app.
471 """
472
473 phone_number = PhoneNumberField(
474 _("Phone number"), blank=True,
475 help_text=_("In case we need to call you about your order"))
476 notes = models.TextField(
477 blank=True, verbose_name=_('Instructions'),
478 help_text=_("Tell us anything we should know when delivering "
479 "your order."))
480
481 class Meta:
482 abstract = True
483 # ShippingAddress is registered in order/models.py
484 app_label = 'order'
485 verbose_name = _("Shipping address")
486 verbose_name_plural = _("Shipping addresses")
487
488 @property
489 def order(self):
490 """
491 Return the order linked to this shipping address
492 """
493 return self.order_set.first()
494
495
496 class AbstractUserAddress(AbstractShippingAddress):
497 """
498 A user's address. A user can have many of these and together they form an
499 'address book' of sorts for the user.
500
501 We use a separate model for shipping and billing (even though there will be
502 some data duplication) because we don't want shipping/billing addresses
503 changed or deleted once an order has been placed. By having a separate
504 model, we allow users the ability to add/edit/delete from their address
505 book without affecting orders already placed.
506 """
507 user = models.ForeignKey(
508 AUTH_USER_MODEL,
509 on_delete=models.CASCADE,
510 related_name='addresses',
511 verbose_name=_("User"))
512
513 #: Whether this address is the default for shipping
514 is_default_for_shipping = models.BooleanField(
515 _("Default shipping address?"), default=False)
516
517 #: Whether this address should be the default for billing.
518 is_default_for_billing = models.BooleanField(
519 _("Default billing address?"), default=False)
520
521 #: We keep track of the number of times an address has been used
522 #: as a shipping address so we can show the most popular ones
523 #: first at the checkout.
524 num_orders_as_shipping_address = models.PositiveIntegerField(
525 _("Number of Orders as Shipping Address"), default=0)
526
527 #: Same as previous, but for billing address.
528 num_orders_as_billing_address = models.PositiveIntegerField(
529 _("Number of Orders as Billing Address"), default=0)
530
531 #: A hash is kept to try and avoid duplicate addresses being added
532 #: to the address book.
533 hash = models.CharField(_("Address Hash"), max_length=255, db_index=True,
534 editable=False)
535 date_created = models.DateTimeField(_("Date Created"), auto_now_add=True)
536
537 def save(self, *args, **kwargs):
538 """
539 Save a hash of the address fields
540 """
541 # Save a hash of the address fields so we can check whether two
542 # addresses are the same to avoid saving duplicates
543 self.hash = self.generate_hash()
544
545 # Ensure that each user only has one default shipping address
546 # and billing address
547 self._ensure_defaults_integrity()
548 super().save(*args, **kwargs)
549
550 def _ensure_defaults_integrity(self):
551 if self.is_default_for_shipping:
552 self.__class__._default_manager\
553 .filter(user=self.user, is_default_for_shipping=True)\
554 .update(is_default_for_shipping=False)
555 if self.is_default_for_billing:
556 self.__class__._default_manager\
557 .filter(user=self.user, is_default_for_billing=True)\
558 .update(is_default_for_billing=False)
559
560 class Meta:
561 abstract = True
562 app_label = 'address'
563 verbose_name = _("User address")
564 verbose_name_plural = _("User addresses")
565 ordering = ['-num_orders_as_shipping_address']
566 unique_together = ('user', 'hash')
567
568 def validate_unique(self, exclude=None):
569 super().validate_unique(exclude)
570 qs = self.__class__.objects.filter(
571 user=self.user,
572 hash=self.generate_hash())
573 if self.id:
574 qs = qs.exclude(id=self.id)
575 if qs.exists():
576 raise exceptions.ValidationError({
577 '__all__': [_("This address is already in your address"
578 " book")]})
579
580
581 class AbstractBillingAddress(AbstractAddress):
582 class Meta:
583 abstract = True
584 # BillingAddress is registered in order/models.py
585 app_label = 'order'
586 verbose_name = _("Billing address")
587 verbose_name_plural = _("Billing addresses")
588
589 @property
590 def order(self):
591 """
592 Return the order linked to this shipping address
593 """
594 return self.order_set.first()
595
596
597 class AbstractPartnerAddress(AbstractAddress):
598 """
599 A partner can have one or more addresses. This can be useful e.g. when
600 determining US tax which depends on the origin of the shipment.
601 """
602 partner = models.ForeignKey(
603 'partner.Partner',
604 on_delete=models.CASCADE,
605 related_name='addresses',
606 verbose_name=_('Partner'))
607
608 class Meta:
609 abstract = True
610 app_label = 'partner'
611 verbose_name = _("Partner address")
612 verbose_name_plural = _("Partner addresses")
613
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/oscar/apps/address/abstract_models.py b/src/oscar/apps/address/abstract_models.py
--- a/src/oscar/apps/address/abstract_models.py
+++ b/src/oscar/apps/address/abstract_models.py
@@ -103,7 +103,7 @@
'HT': r'^[0-9]{4}$',
'HU': r'^[0-9]{4}$',
'ID': r'^[0-9]{5}$',
- 'IL': r'^[0-9]{7}$',
+ 'IL': r'^([0-9]{5}|[0-9]{7})$',
'IM': r'^IM[0-9]{2,3}[A-Z]{2}$$',
'IN': r'^[0-9]{6}$',
'IO': r'^[A-Z]{4}[0-9][A-Z]{2}$',
| {"golden_diff": "diff --git a/src/oscar/apps/address/abstract_models.py b/src/oscar/apps/address/abstract_models.py\n--- a/src/oscar/apps/address/abstract_models.py\n+++ b/src/oscar/apps/address/abstract_models.py\n@@ -103,7 +103,7 @@\n 'HT': r'^[0-9]{4}$',\n 'HU': r'^[0-9]{4}$',\n 'ID': r'^[0-9]{5}$',\n- 'IL': r'^[0-9]{7}$',\n+ 'IL': r'^([0-9]{5}|[0-9]{7})$',\n 'IM': r'^IM[0-9]{2,3}[A-Z]{2}$$',\n 'IN': r'^[0-9]{6}$',\n 'IO': r'^[A-Z]{4}[0-9][A-Z]{2}$',\n", "issue": "The postal Code validation for Israel should also take 5 digit numbers\nIn oscar.apps.address.abstract_models.AbstractAddress:\r\n\r\n`'IL': r'^[0-9]{7}$',`\r\n\r\nShould be:\r\n`'IL': r'^([0-9]{5}|[0-9]{7})$',`\r\n\r\nFor more info: https://en.wikipedia.org/wiki/Postal_codes_in_Israel\n", "before_files": [{"content": "import re\nimport zlib\n\nfrom django.conf import settings\nfrom django.core import exceptions\nfrom django.db import models\nfrom django.utils.translation import gettext_lazy as _\nfrom django.utils.translation import pgettext_lazy\nfrom phonenumber_field.modelfields import PhoneNumberField\n\nfrom oscar.core.compat import AUTH_USER_MODEL\nfrom oscar.models.fields import UppercaseCharField\n\n\nclass AbstractAddress(models.Model):\n \"\"\"\n Superclass address object\n\n This is subclassed and extended to provide models for\n user, shipping and billing addresses.\n \"\"\"\n MR, MISS, MRS, MS, DR = ('Mr', 'Miss', 'Mrs', 'Ms', 'Dr')\n TITLE_CHOICES = (\n (MR, _(\"Mr\")),\n (MISS, _(\"Miss\")),\n (MRS, _(\"Mrs\")),\n (MS, _(\"Ms\")),\n (DR, _(\"Dr\")),\n )\n\n POSTCODE_REQUIRED = 'postcode' in settings.OSCAR_REQUIRED_ADDRESS_FIELDS\n\n # Regex for each country. Not listed countries don't use postcodes\n # Based on http://en.wikipedia.org/wiki/List_of_postal_codes\n POSTCODES_REGEX = {\n 'AC': r'^[A-Z]{4}[0-9][A-Z]$',\n 'AD': r'^AD[0-9]{3}$',\n 'AF': r'^[0-9]{4}$',\n 'AI': r'^AI-2640$',\n 'AL': r'^[0-9]{4}$',\n 'AM': r'^[0-9]{4}$',\n 'AR': r'^([0-9]{4}|[A-Z][0-9]{4}[A-Z]{3})$',\n 'AS': r'^[0-9]{5}(-[0-9]{4}|-[0-9]{6})?$',\n 'AT': r'^[0-9]{4}$',\n 'AU': r'^[0-9]{4}$',\n 'AX': r'^[0-9]{5}$',\n 'AZ': r'^AZ[0-9]{4}$',\n 'BA': r'^[0-9]{5}$',\n 'BB': r'^BB[0-9]{5}$',\n 'BD': r'^[0-9]{4}$',\n 'BE': r'^[0-9]{4}$',\n 'BG': r'^[0-9]{4}$',\n 'BH': r'^[0-9]{3,4}$',\n 'BL': r'^[0-9]{5}$',\n 'BM': r'^[A-Z]{2}([0-9]{2}|[A-Z]{2})',\n 'BN': r'^[A-Z]{2}[0-9]{4}$',\n 'BO': r'^[0-9]{4}$',\n 'BR': r'^[0-9]{5}(-[0-9]{3})?$',\n 'BT': r'^[0-9]{3}$',\n 'BY': r'^[0-9]{6}$',\n 'CA': r'^[A-Z][0-9][A-Z][0-9][A-Z][0-9]$',\n 'CC': r'^[0-9]{4}$',\n 'CH': r'^[0-9]{4}$',\n 'CL': r'^([0-9]{7}|[0-9]{3}-[0-9]{4})$',\n 'CN': r'^[0-9]{6}$',\n 'CO': r'^[0-9]{6}$',\n 'CR': r'^[0-9]{4,5}$',\n 'CU': r'^[0-9]{5}$',\n 'CV': r'^[0-9]{4}$',\n 'CX': r'^[0-9]{4}$',\n 'CY': r'^[0-9]{4}$',\n 'CZ': r'^[0-9]{5}$',\n 'DE': r'^[0-9]{5}$',\n 'DK': r'^[0-9]{4}$',\n 'DO': r'^[0-9]{5}$',\n 'DZ': r'^[0-9]{5}$',\n 'EC': r'^EC[0-9]{6}$',\n 'EE': r'^[0-9]{5}$',\n 'EG': r'^[0-9]{5}$',\n 'ES': r'^[0-9]{5}$',\n 'ET': r'^[0-9]{4}$',\n 'FI': r'^[0-9]{5}$',\n 'FK': r'^[A-Z]{4}[0-9][A-Z]{2}$',\n 'FM': r'^[0-9]{5}(-[0-9]{4})?$',\n 'FO': r'^[0-9]{3}$',\n 'FR': r'^[0-9]{5}$',\n 'GA': r'^[0-9]{2}.*[0-9]{2}$',\n 'GB': r'^[A-Z][A-Z0-9]{1,3}[0-9][A-Z]{2}$',\n 'GE': r'^[0-9]{4}$',\n 'GF': r'^[0-9]{5}$',\n 'GG': r'^([A-Z]{2}[0-9]{2,3}[A-Z]{2})$',\n 'GI': r'^GX111AA$',\n 'GL': r'^[0-9]{4}$',\n 'GP': r'^[0-9]{5}$',\n 'GR': r'^[0-9]{5}$',\n 'GS': r'^SIQQ1ZZ$',\n 'GT': r'^[0-9]{5}$',\n 'GU': r'^[0-9]{5}$',\n 'GW': r'^[0-9]{4}$',\n 'HM': r'^[0-9]{4}$',\n 'HN': r'^[0-9]{5}$',\n 'HR': 
r'^[0-9]{5}$',\n 'HT': r'^[0-9]{4}$',\n 'HU': r'^[0-9]{4}$',\n 'ID': r'^[0-9]{5}$',\n 'IL': r'^[0-9]{7}$',\n 'IM': r'^IM[0-9]{2,3}[A-Z]{2}$$',\n 'IN': r'^[0-9]{6}$',\n 'IO': r'^[A-Z]{4}[0-9][A-Z]{2}$',\n 'IQ': r'^[0-9]{5}$',\n 'IR': r'^[0-9]{5}-[0-9]{5}$',\n 'IS': r'^[0-9]{3}$',\n 'IT': r'^[0-9]{5}$',\n 'JE': r'^JE[0-9]{2}[A-Z]{2}$',\n 'JM': r'^JM[A-Z]{3}[0-9]{2}$',\n 'JO': r'^[0-9]{5}$',\n 'JP': r'^[0-9]{3}-?[0-9]{4}$',\n 'KE': r'^[0-9]{5}$',\n 'KG': r'^[0-9]{6}$',\n 'KH': r'^[0-9]{5}$',\n 'KR': r'^[0-9]{5}$',\n 'KY': r'^KY[0-9]-[0-9]{4}$',\n 'KZ': r'^[0-9]{6}$',\n 'LA': r'^[0-9]{5}$',\n 'LB': r'^[0-9]{8}$',\n 'LI': r'^[0-9]{4}$',\n 'LK': r'^[0-9]{5}$',\n 'LR': r'^[0-9]{4}$',\n 'LS': r'^[0-9]{3}$',\n 'LT': r'^(LT-)?[0-9]{5}$',\n 'LU': r'^[0-9]{4}$',\n 'LV': r'^LV-[0-9]{4}$',\n 'LY': r'^[0-9]{5}$',\n 'MA': r'^[0-9]{5}$',\n 'MC': r'^980[0-9]{2}$',\n 'MD': r'^MD-?[0-9]{4}$',\n 'ME': r'^[0-9]{5}$',\n 'MF': r'^[0-9]{5}$',\n 'MG': r'^[0-9]{3}$',\n 'MH': r'^[0-9]{5}$',\n 'MK': r'^[0-9]{4}$',\n 'MM': r'^[0-9]{5}$',\n 'MN': r'^[0-9]{5}$',\n 'MP': r'^[0-9]{5}$',\n 'MQ': r'^[0-9]{5}$',\n 'MT': r'^[A-Z]{3}[0-9]{4}$',\n 'MV': r'^[0-9]{4,5}$',\n 'MX': r'^[0-9]{5}$',\n 'MY': r'^[0-9]{5}$',\n 'MZ': r'^[0-9]{4}$',\n 'NA': r'^[0-9]{5}$',\n 'NC': r'^[0-9]{5}$',\n 'NE': r'^[0-9]{4}$',\n 'NF': r'^[0-9]{4}$',\n 'NG': r'^[0-9]{6}$',\n 'NI': r'^[0-9]{5}$',\n 'NL': r'^[0-9]{4}[A-Z]{2}$',\n 'NO': r'^[0-9]{4}$',\n 'NP': r'^[0-9]{5}$',\n 'NZ': r'^[0-9]{4}$',\n 'OM': r'^[0-9]{3}$',\n 'PA': r'^[0-9]{6}$',\n 'PE': r'^[0-9]{5}$',\n 'PF': r'^[0-9]{5}$',\n 'PG': r'^[0-9]{3}$',\n 'PH': r'^[0-9]{4}$',\n 'PK': r'^[0-9]{5}$',\n 'PL': r'^[0-9]{2}-?[0-9]{3}$',\n 'PM': r'^[0-9]{5}$',\n 'PN': r'^[A-Z]{4}[0-9][A-Z]{2}$',\n 'PR': r'^[0-9]{5}$',\n 'PT': r'^[0-9]{4}(-?[0-9]{3})?$',\n 'PW': r'^[0-9]{5}$',\n 'PY': r'^[0-9]{4}$',\n 'RE': r'^[0-9]{5}$',\n 'RO': r'^[0-9]{6}$',\n 'RS': r'^[0-9]{5}$',\n 'RU': r'^[0-9]{6}$',\n 'SA': r'^[0-9]{5}$',\n 'SD': r'^[0-9]{5}$',\n 'SE': r'^[0-9]{5}$',\n 'SG': r'^([0-9]{2}|[0-9]{4}|[0-9]{6})$',\n 'SH': r'^(STHL1ZZ|TDCU1ZZ)$',\n 'SI': r'^(SI-)?[0-9]{4}$',\n 'SK': r'^[0-9]{5}$',\n 'SM': r'^[0-9]{5}$',\n 'SN': r'^[0-9]{5}$',\n 'SV': r'^01101$',\n 'SZ': r'^[A-Z][0-9]{3}$',\n 'TC': r'^TKCA1ZZ$',\n 'TD': r'^[0-9]{5}$',\n 'TH': r'^[0-9]{5}$',\n 'TJ': r'^[0-9]{6}$',\n 'TM': r'^[0-9]{6}$',\n 'TN': r'^[0-9]{4}$',\n 'TR': r'^[0-9]{5}$',\n 'TT': r'^[0-9]{6}$',\n 'TW': r'^([0-9]{3}|[0-9]{5})$',\n 'UA': r'^[0-9]{5}$',\n 'US': r'^[0-9]{5}(-[0-9]{4}|-[0-9]{6})?$',\n 'UY': r'^[0-9]{5}$',\n 'UZ': r'^[0-9]{6}$',\n 'VA': r'^00120$',\n 'VC': r'^VC[0-9]{4}',\n 'VE': r'^[0-9]{4}[A-Z]?$',\n 'VG': r'^VG[0-9]{4}$',\n 'VI': r'^[0-9]{5}$',\n 'VN': r'^[0-9]{6}$',\n 'WF': r'^[0-9]{5}$',\n 'XK': r'^[0-9]{5}$',\n 'YT': r'^[0-9]{5}$',\n 'ZA': r'^[0-9]{4}$',\n 'ZM': r'^[0-9]{5}$',\n }\n\n title = models.CharField(\n pgettext_lazy(\"Treatment Pronouns for the customer\", \"Title\"),\n max_length=64, choices=TITLE_CHOICES, blank=True)\n first_name = models.CharField(_(\"First name\"), max_length=255, blank=True)\n last_name = models.CharField(_(\"Last name\"), max_length=255, blank=True)\n\n # We use quite a few lines of an address as they are often quite long and\n # it's easier to just hide the unnecessary ones than add extra ones.\n line1 = models.CharField(_(\"First line of address\"), max_length=255)\n line2 = models.CharField(\n _(\"Second line of address\"), max_length=255, blank=True)\n line3 = models.CharField(\n _(\"Third line of address\"), max_length=255, blank=True)\n line4 = models.CharField(_(\"City\"), max_length=255, 
blank=True)\n state = models.CharField(_(\"State/County\"), max_length=255, blank=True)\n postcode = UppercaseCharField(\n _(\"Post/Zip-code\"), max_length=64, blank=True)\n country = models.ForeignKey(\n 'address.Country',\n on_delete=models.CASCADE,\n verbose_name=_(\"Country\"))\n\n #: A field only used for searching addresses - this contains all the\n #: relevant fields. This is effectively a poor man's Solr text field.\n search_text = models.TextField(\n _(\"Search text - used only for searching addresses\"), editable=False)\n\n # Fields, used for `summary` property definition and hash generation.\n base_fields = hash_fields = ['salutation', 'line1', 'line2', 'line3', 'line4', 'state', 'postcode', 'country']\n\n def __str__(self):\n return self.summary\n\n class Meta:\n abstract = True\n verbose_name = _('Address')\n verbose_name_plural = _('Addresses')\n\n # Saving\n\n def save(self, *args, **kwargs):\n self._update_search_text()\n super().save(*args, **kwargs)\n\n def clean(self):\n # Strip all whitespace\n for field in ['first_name', 'last_name', 'line1', 'line2', 'line3',\n 'line4', 'state', 'postcode']:\n if self.__dict__[field]:\n self.__dict__[field] = self.__dict__[field].strip()\n\n # Ensure postcodes are valid for country\n self.ensure_postcode_is_valid_for_country()\n\n def ensure_postcode_is_valid_for_country(self):\n \"\"\"\n Validate postcode given the country\n \"\"\"\n if not self.postcode and self.POSTCODE_REQUIRED and self.country_id:\n country_code = self.country.iso_3166_1_a2\n regex = self.POSTCODES_REGEX.get(country_code, None)\n if regex:\n msg = _(\"Addresses in %(country)s require a valid postcode\") \\\n % {'country': self.country}\n raise exceptions.ValidationError(msg)\n\n if self.postcode and self.country_id:\n # Ensure postcodes are always uppercase\n postcode = self.postcode.upper().replace(' ', '')\n country_code = self.country.iso_3166_1_a2\n regex = self.POSTCODES_REGEX.get(country_code, None)\n\n # Validate postcode against regex for the country if available\n if regex and not re.match(regex, postcode):\n msg = _(\"The postcode '%(postcode)s' is not valid \"\n \"for %(country)s\") \\\n % {'postcode': self.postcode,\n 'country': self.country}\n raise exceptions.ValidationError(\n {'postcode': [msg]})\n\n def _update_search_text(self):\n search_fields = filter(\n bool, [self.first_name, self.last_name,\n self.line1, self.line2, self.line3, self.line4,\n self.state, self.postcode, self.country.name])\n self.search_text = ' '.join(search_fields)\n\n # Properties\n\n @property\n def city(self):\n # Common alias\n return self.line4\n\n @property\n def summary(self):\n \"\"\"\n Returns a single string summary of the address,\n separating fields using commas.\n \"\"\"\n return \", \".join(self.active_address_fields())\n\n @property\n def salutation(self):\n \"\"\"\n Name (including title)\n \"\"\"\n return self.join_fields(\n ('title', 'first_name', 'last_name'),\n separator=\" \")\n\n @property\n def name(self):\n return self.join_fields(('first_name', 'last_name'), separator=\" \")\n\n # Helpers\n\n def get_field_values(self, fields):\n field_values = []\n for field in fields:\n # Title is special case\n if field == 'title':\n value = self.get_title_display()\n elif field == 'country':\n try:\n value = self.country.printable_name\n except exceptions.ObjectDoesNotExist:\n value = ''\n elif field == 'salutation':\n value = self.salutation\n else:\n value = getattr(self, field)\n field_values.append(value)\n return field_values\n\n def 
get_address_field_values(self, fields):\n \"\"\"\n Returns set of field values within the salutation and country.\n \"\"\"\n field_values = [f.strip() for f in self.get_field_values(fields) if f]\n return field_values\n\n def generate_hash(self):\n \"\"\"\n Returns a hash of the address, based on standard set of fields, listed\n out in `hash_fields` property.\n \"\"\"\n field_values = self.get_address_field_values(self.hash_fields)\n # Python 2 and 3 generates CRC checksum in different ranges, so\n # in order to generate platform-independent value we apply\n # `& 0xffffffff` expression.\n return zlib.crc32(', '.join(field_values).upper().encode('UTF8')) & 0xffffffff\n\n def join_fields(self, fields, separator=\", \"):\n \"\"\"\n Join a sequence of fields using the specified separator\n \"\"\"\n field_values = self.get_field_values(fields)\n return separator.join(filter(bool, field_values))\n\n def populate_alternative_model(self, address_model):\n \"\"\"\n For populating an address model using the matching fields\n from this one.\n\n This is used to convert a user address to a shipping address\n as part of the checkout process.\n \"\"\"\n destination_field_names = [\n field.name for field in address_model._meta.fields]\n for field_name in [field.name for field in self._meta.fields]:\n if field_name in destination_field_names and field_name != 'id':\n setattr(address_model, field_name, getattr(self, field_name))\n\n def active_address_fields(self):\n \"\"\"\n Returns the non-empty components of the address, but merging the\n title, first_name and last_name into a single line. It uses fields\n listed out in `base_fields` property.\n \"\"\"\n return self.get_address_field_values(self.base_fields)\n\n\nclass AbstractCountry(models.Model):\n \"\"\"\n `ISO 3166 Country Codes <https://www.iso.org/iso-3166-country-codes.html>`_\n\n The field names are a bit awkward, but kept for backwards compatibility.\n pycountry's syntax of alpha2, alpha3, name and official_name seems sane.\n \"\"\"\n iso_3166_1_a2 = models.CharField(\n _('ISO 3166-1 alpha-2'), max_length=2, primary_key=True)\n iso_3166_1_a3 = models.CharField(\n _('ISO 3166-1 alpha-3'), max_length=3, blank=True)\n iso_3166_1_numeric = models.CharField(\n _('ISO 3166-1 numeric'), blank=True, max_length=3)\n\n #: The commonly used name; e.g. 'United Kingdom'\n printable_name = models.CharField(_('Country name'), max_length=128, db_index=True)\n #: The full official name of a country\n #: e.g. 'United Kingdom of Great Britain and Northern Ireland'\n name = models.CharField(_('Official name'), max_length=128)\n\n display_order = models.PositiveSmallIntegerField(\n _(\"Display order\"), default=0, db_index=True,\n help_text=_('Higher the number, higher the country in the list.'))\n\n is_shipping_country = models.BooleanField(\n _(\"Is shipping country\"), default=False, db_index=True)\n\n class Meta:\n abstract = True\n app_label = 'address'\n verbose_name = _('Country')\n verbose_name_plural = _('Countries')\n ordering = ('-display_order', 'printable_name',)\n\n def __str__(self):\n return self.printable_name or self.name\n\n @property\n def code(self):\n \"\"\"\n Shorthand for the ISO 3166 Alpha-2 code\n \"\"\"\n return self.iso_3166_1_a2\n\n @property\n def numeric_code(self):\n \"\"\"\n Shorthand for the ISO 3166 numeric code.\n\n :py:attr:`.iso_3166_1_numeric` used to wrongly be a integer field, but has to\n be padded with leading zeroes. It's since been converted to a char\n field, but the database might still contain non-padded strings. 
That's\n why the padding is kept.\n \"\"\"\n return \"%.03d\" % int(self.iso_3166_1_numeric)\n\n\nclass AbstractShippingAddress(AbstractAddress):\n \"\"\"\n A shipping address.\n\n A shipping address should not be edited once the order has been placed -\n it should be read-only after that.\n\n NOTE:\n ShippingAddress is a model of the order app. But moving it there is tricky\n due to circular import issues that are amplified by get_model/get_class\n calls pre-Django 1.7 to register receivers. So...\n TODO: Once Django 1.6 support is dropped, move AbstractBillingAddress and\n AbstractShippingAddress to the order app, and move\n PartnerAddress to the partner app.\n \"\"\"\n\n phone_number = PhoneNumberField(\n _(\"Phone number\"), blank=True,\n help_text=_(\"In case we need to call you about your order\"))\n notes = models.TextField(\n blank=True, verbose_name=_('Instructions'),\n help_text=_(\"Tell us anything we should know when delivering \"\n \"your order.\"))\n\n class Meta:\n abstract = True\n # ShippingAddress is registered in order/models.py\n app_label = 'order'\n verbose_name = _(\"Shipping address\")\n verbose_name_plural = _(\"Shipping addresses\")\n\n @property\n def order(self):\n \"\"\"\n Return the order linked to this shipping address\n \"\"\"\n return self.order_set.first()\n\n\nclass AbstractUserAddress(AbstractShippingAddress):\n \"\"\"\n A user's address. A user can have many of these and together they form an\n 'address book' of sorts for the user.\n\n We use a separate model for shipping and billing (even though there will be\n some data duplication) because we don't want shipping/billing addresses\n changed or deleted once an order has been placed. By having a separate\n model, we allow users the ability to add/edit/delete from their address\n book without affecting orders already placed.\n \"\"\"\n user = models.ForeignKey(\n AUTH_USER_MODEL,\n on_delete=models.CASCADE,\n related_name='addresses',\n verbose_name=_(\"User\"))\n\n #: Whether this address is the default for shipping\n is_default_for_shipping = models.BooleanField(\n _(\"Default shipping address?\"), default=False)\n\n #: Whether this address should be the default for billing.\n is_default_for_billing = models.BooleanField(\n _(\"Default billing address?\"), default=False)\n\n #: We keep track of the number of times an address has been used\n #: as a shipping address so we can show the most popular ones\n #: first at the checkout.\n num_orders_as_shipping_address = models.PositiveIntegerField(\n _(\"Number of Orders as Shipping Address\"), default=0)\n\n #: Same as previous, but for billing address.\n num_orders_as_billing_address = models.PositiveIntegerField(\n _(\"Number of Orders as Billing Address\"), default=0)\n\n #: A hash is kept to try and avoid duplicate addresses being added\n #: to the address book.\n hash = models.CharField(_(\"Address Hash\"), max_length=255, db_index=True,\n editable=False)\n date_created = models.DateTimeField(_(\"Date Created\"), auto_now_add=True)\n\n def save(self, *args, **kwargs):\n \"\"\"\n Save a hash of the address fields\n \"\"\"\n # Save a hash of the address fields so we can check whether two\n # addresses are the same to avoid saving duplicates\n self.hash = self.generate_hash()\n\n # Ensure that each user only has one default shipping address\n # and billing address\n self._ensure_defaults_integrity()\n super().save(*args, **kwargs)\n\n def _ensure_defaults_integrity(self):\n if self.is_default_for_shipping:\n self.__class__._default_manager\\\n 
.filter(user=self.user, is_default_for_shipping=True)\\\n .update(is_default_for_shipping=False)\n if self.is_default_for_billing:\n self.__class__._default_manager\\\n .filter(user=self.user, is_default_for_billing=True)\\\n .update(is_default_for_billing=False)\n\n class Meta:\n abstract = True\n app_label = 'address'\n verbose_name = _(\"User address\")\n verbose_name_plural = _(\"User addresses\")\n ordering = ['-num_orders_as_shipping_address']\n unique_together = ('user', 'hash')\n\n def validate_unique(self, exclude=None):\n super().validate_unique(exclude)\n qs = self.__class__.objects.filter(\n user=self.user,\n hash=self.generate_hash())\n if self.id:\n qs = qs.exclude(id=self.id)\n if qs.exists():\n raise exceptions.ValidationError({\n '__all__': [_(\"This address is already in your address\"\n \" book\")]})\n\n\nclass AbstractBillingAddress(AbstractAddress):\n class Meta:\n abstract = True\n # BillingAddress is registered in order/models.py\n app_label = 'order'\n verbose_name = _(\"Billing address\")\n verbose_name_plural = _(\"Billing addresses\")\n\n @property\n def order(self):\n \"\"\"\n Return the order linked to this shipping address\n \"\"\"\n return self.order_set.first()\n\n\nclass AbstractPartnerAddress(AbstractAddress):\n \"\"\"\n A partner can have one or more addresses. This can be useful e.g. when\n determining US tax which depends on the origin of the shipment.\n \"\"\"\n partner = models.ForeignKey(\n 'partner.Partner',\n on_delete=models.CASCADE,\n related_name='addresses',\n verbose_name=_('Partner'))\n\n class Meta:\n abstract = True\n app_label = 'partner'\n verbose_name = _(\"Partner address\")\n verbose_name_plural = _(\"Partner addresses\")\n", "path": "src/oscar/apps/address/abstract_models.py"}], "after_files": [{"content": "import re\nimport zlib\n\nfrom django.conf import settings\nfrom django.core import exceptions\nfrom django.db import models\nfrom django.utils.translation import gettext_lazy as _\nfrom django.utils.translation import pgettext_lazy\nfrom phonenumber_field.modelfields import PhoneNumberField\n\nfrom oscar.core.compat import AUTH_USER_MODEL\nfrom oscar.models.fields import UppercaseCharField\n\n\nclass AbstractAddress(models.Model):\n \"\"\"\n Superclass address object\n\n This is subclassed and extended to provide models for\n user, shipping and billing addresses.\n \"\"\"\n MR, MISS, MRS, MS, DR = ('Mr', 'Miss', 'Mrs', 'Ms', 'Dr')\n TITLE_CHOICES = (\n (MR, _(\"Mr\")),\n (MISS, _(\"Miss\")),\n (MRS, _(\"Mrs\")),\n (MS, _(\"Ms\")),\n (DR, _(\"Dr\")),\n )\n\n POSTCODE_REQUIRED = 'postcode' in settings.OSCAR_REQUIRED_ADDRESS_FIELDS\n\n # Regex for each country. 
Not listed countries don't use postcodes\n # Based on http://en.wikipedia.org/wiki/List_of_postal_codes\n POSTCODES_REGEX = {\n 'AC': r'^[A-Z]{4}[0-9][A-Z]$',\n 'AD': r'^AD[0-9]{3}$',\n 'AF': r'^[0-9]{4}$',\n 'AI': r'^AI-2640$',\n 'AL': r'^[0-9]{4}$',\n 'AM': r'^[0-9]{4}$',\n 'AR': r'^([0-9]{4}|[A-Z][0-9]{4}[A-Z]{3})$',\n 'AS': r'^[0-9]{5}(-[0-9]{4}|-[0-9]{6})?$',\n 'AT': r'^[0-9]{4}$',\n 'AU': r'^[0-9]{4}$',\n 'AX': r'^[0-9]{5}$',\n 'AZ': r'^AZ[0-9]{4}$',\n 'BA': r'^[0-9]{5}$',\n 'BB': r'^BB[0-9]{5}$',\n 'BD': r'^[0-9]{4}$',\n 'BE': r'^[0-9]{4}$',\n 'BG': r'^[0-9]{4}$',\n 'BH': r'^[0-9]{3,4}$',\n 'BL': r'^[0-9]{5}$',\n 'BM': r'^[A-Z]{2}([0-9]{2}|[A-Z]{2})',\n 'BN': r'^[A-Z]{2}[0-9]{4}$',\n 'BO': r'^[0-9]{4}$',\n 'BR': r'^[0-9]{5}(-[0-9]{3})?$',\n 'BT': r'^[0-9]{3}$',\n 'BY': r'^[0-9]{6}$',\n 'CA': r'^[A-Z][0-9][A-Z][0-9][A-Z][0-9]$',\n 'CC': r'^[0-9]{4}$',\n 'CH': r'^[0-9]{4}$',\n 'CL': r'^([0-9]{7}|[0-9]{3}-[0-9]{4})$',\n 'CN': r'^[0-9]{6}$',\n 'CO': r'^[0-9]{6}$',\n 'CR': r'^[0-9]{4,5}$',\n 'CU': r'^[0-9]{5}$',\n 'CV': r'^[0-9]{4}$',\n 'CX': r'^[0-9]{4}$',\n 'CY': r'^[0-9]{4}$',\n 'CZ': r'^[0-9]{5}$',\n 'DE': r'^[0-9]{5}$',\n 'DK': r'^[0-9]{4}$',\n 'DO': r'^[0-9]{5}$',\n 'DZ': r'^[0-9]{5}$',\n 'EC': r'^EC[0-9]{6}$',\n 'EE': r'^[0-9]{5}$',\n 'EG': r'^[0-9]{5}$',\n 'ES': r'^[0-9]{5}$',\n 'ET': r'^[0-9]{4}$',\n 'FI': r'^[0-9]{5}$',\n 'FK': r'^[A-Z]{4}[0-9][A-Z]{2}$',\n 'FM': r'^[0-9]{5}(-[0-9]{4})?$',\n 'FO': r'^[0-9]{3}$',\n 'FR': r'^[0-9]{5}$',\n 'GA': r'^[0-9]{2}.*[0-9]{2}$',\n 'GB': r'^[A-Z][A-Z0-9]{1,3}[0-9][A-Z]{2}$',\n 'GE': r'^[0-9]{4}$',\n 'GF': r'^[0-9]{5}$',\n 'GG': r'^([A-Z]{2}[0-9]{2,3}[A-Z]{2})$',\n 'GI': r'^GX111AA$',\n 'GL': r'^[0-9]{4}$',\n 'GP': r'^[0-9]{5}$',\n 'GR': r'^[0-9]{5}$',\n 'GS': r'^SIQQ1ZZ$',\n 'GT': r'^[0-9]{5}$',\n 'GU': r'^[0-9]{5}$',\n 'GW': r'^[0-9]{4}$',\n 'HM': r'^[0-9]{4}$',\n 'HN': r'^[0-9]{5}$',\n 'HR': r'^[0-9]{5}$',\n 'HT': r'^[0-9]{4}$',\n 'HU': r'^[0-9]{4}$',\n 'ID': r'^[0-9]{5}$',\n 'IL': r'^([0-9]{5}|[0-9]{7})$',\n 'IM': r'^IM[0-9]{2,3}[A-Z]{2}$$',\n 'IN': r'^[0-9]{6}$',\n 'IO': r'^[A-Z]{4}[0-9][A-Z]{2}$',\n 'IQ': r'^[0-9]{5}$',\n 'IR': r'^[0-9]{5}-[0-9]{5}$',\n 'IS': r'^[0-9]{3}$',\n 'IT': r'^[0-9]{5}$',\n 'JE': r'^JE[0-9]{2}[A-Z]{2}$',\n 'JM': r'^JM[A-Z]{3}[0-9]{2}$',\n 'JO': r'^[0-9]{5}$',\n 'JP': r'^[0-9]{3}-?[0-9]{4}$',\n 'KE': r'^[0-9]{5}$',\n 'KG': r'^[0-9]{6}$',\n 'KH': r'^[0-9]{5}$',\n 'KR': r'^[0-9]{5}$',\n 'KY': r'^KY[0-9]-[0-9]{4}$',\n 'KZ': r'^[0-9]{6}$',\n 'LA': r'^[0-9]{5}$',\n 'LB': r'^[0-9]{8}$',\n 'LI': r'^[0-9]{4}$',\n 'LK': r'^[0-9]{5}$',\n 'LR': r'^[0-9]{4}$',\n 'LS': r'^[0-9]{3}$',\n 'LT': r'^(LT-)?[0-9]{5}$',\n 'LU': r'^[0-9]{4}$',\n 'LV': r'^LV-[0-9]{4}$',\n 'LY': r'^[0-9]{5}$',\n 'MA': r'^[0-9]{5}$',\n 'MC': r'^980[0-9]{2}$',\n 'MD': r'^MD-?[0-9]{4}$',\n 'ME': r'^[0-9]{5}$',\n 'MF': r'^[0-9]{5}$',\n 'MG': r'^[0-9]{3}$',\n 'MH': r'^[0-9]{5}$',\n 'MK': r'^[0-9]{4}$',\n 'MM': r'^[0-9]{5}$',\n 'MN': r'^[0-9]{5}$',\n 'MP': r'^[0-9]{5}$',\n 'MQ': r'^[0-9]{5}$',\n 'MT': r'^[A-Z]{3}[0-9]{4}$',\n 'MV': r'^[0-9]{4,5}$',\n 'MX': r'^[0-9]{5}$',\n 'MY': r'^[0-9]{5}$',\n 'MZ': r'^[0-9]{4}$',\n 'NA': r'^[0-9]{5}$',\n 'NC': r'^[0-9]{5}$',\n 'NE': r'^[0-9]{4}$',\n 'NF': r'^[0-9]{4}$',\n 'NG': r'^[0-9]{6}$',\n 'NI': r'^[0-9]{5}$',\n 'NL': r'^[0-9]{4}[A-Z]{2}$',\n 'NO': r'^[0-9]{4}$',\n 'NP': r'^[0-9]{5}$',\n 'NZ': r'^[0-9]{4}$',\n 'OM': r'^[0-9]{3}$',\n 'PA': r'^[0-9]{6}$',\n 'PE': r'^[0-9]{5}$',\n 'PF': r'^[0-9]{5}$',\n 'PG': r'^[0-9]{3}$',\n 'PH': r'^[0-9]{4}$',\n 'PK': r'^[0-9]{5}$',\n 'PL': r'^[0-9]{2}-?[0-9]{3}$',\n 'PM': 
r'^[0-9]{5}$',\n 'PN': r'^[A-Z]{4}[0-9][A-Z]{2}$',\n 'PR': r'^[0-9]{5}$',\n 'PT': r'^[0-9]{4}(-?[0-9]{3})?$',\n 'PW': r'^[0-9]{5}$',\n 'PY': r'^[0-9]{4}$',\n 'RE': r'^[0-9]{5}$',\n 'RO': r'^[0-9]{6}$',\n 'RS': r'^[0-9]{5}$',\n 'RU': r'^[0-9]{6}$',\n 'SA': r'^[0-9]{5}$',\n 'SD': r'^[0-9]{5}$',\n 'SE': r'^[0-9]{5}$',\n 'SG': r'^([0-9]{2}|[0-9]{4}|[0-9]{6})$',\n 'SH': r'^(STHL1ZZ|TDCU1ZZ)$',\n 'SI': r'^(SI-)?[0-9]{4}$',\n 'SK': r'^[0-9]{5}$',\n 'SM': r'^[0-9]{5}$',\n 'SN': r'^[0-9]{5}$',\n 'SV': r'^01101$',\n 'SZ': r'^[A-Z][0-9]{3}$',\n 'TC': r'^TKCA1ZZ$',\n 'TD': r'^[0-9]{5}$',\n 'TH': r'^[0-9]{5}$',\n 'TJ': r'^[0-9]{6}$',\n 'TM': r'^[0-9]{6}$',\n 'TN': r'^[0-9]{4}$',\n 'TR': r'^[0-9]{5}$',\n 'TT': r'^[0-9]{6}$',\n 'TW': r'^([0-9]{3}|[0-9]{5})$',\n 'UA': r'^[0-9]{5}$',\n 'US': r'^[0-9]{5}(-[0-9]{4}|-[0-9]{6})?$',\n 'UY': r'^[0-9]{5}$',\n 'UZ': r'^[0-9]{6}$',\n 'VA': r'^00120$',\n 'VC': r'^VC[0-9]{4}',\n 'VE': r'^[0-9]{4}[A-Z]?$',\n 'VG': r'^VG[0-9]{4}$',\n 'VI': r'^[0-9]{5}$',\n 'VN': r'^[0-9]{6}$',\n 'WF': r'^[0-9]{5}$',\n 'XK': r'^[0-9]{5}$',\n 'YT': r'^[0-9]{5}$',\n 'ZA': r'^[0-9]{4}$',\n 'ZM': r'^[0-9]{5}$',\n }\n\n title = models.CharField(\n pgettext_lazy(\"Treatment Pronouns for the customer\", \"Title\"),\n max_length=64, choices=TITLE_CHOICES, blank=True)\n first_name = models.CharField(_(\"First name\"), max_length=255, blank=True)\n last_name = models.CharField(_(\"Last name\"), max_length=255, blank=True)\n\n # We use quite a few lines of an address as they are often quite long and\n # it's easier to just hide the unnecessary ones than add extra ones.\n line1 = models.CharField(_(\"First line of address\"), max_length=255)\n line2 = models.CharField(\n _(\"Second line of address\"), max_length=255, blank=True)\n line3 = models.CharField(\n _(\"Third line of address\"), max_length=255, blank=True)\n line4 = models.CharField(_(\"City\"), max_length=255, blank=True)\n state = models.CharField(_(\"State/County\"), max_length=255, blank=True)\n postcode = UppercaseCharField(\n _(\"Post/Zip-code\"), max_length=64, blank=True)\n country = models.ForeignKey(\n 'address.Country',\n on_delete=models.CASCADE,\n verbose_name=_(\"Country\"))\n\n #: A field only used for searching addresses - this contains all the\n #: relevant fields. 
This is effectively a poor man's Solr text field.\n search_text = models.TextField(\n _(\"Search text - used only for searching addresses\"), editable=False)\n\n # Fields, used for `summary` property definition and hash generation.\n base_fields = hash_fields = ['salutation', 'line1', 'line2', 'line3', 'line4', 'state', 'postcode', 'country']\n\n def __str__(self):\n return self.summary\n\n class Meta:\n abstract = True\n verbose_name = _('Address')\n verbose_name_plural = _('Addresses')\n\n # Saving\n\n def save(self, *args, **kwargs):\n self._update_search_text()\n super().save(*args, **kwargs)\n\n def clean(self):\n # Strip all whitespace\n for field in ['first_name', 'last_name', 'line1', 'line2', 'line3',\n 'line4', 'state', 'postcode']:\n if self.__dict__[field]:\n self.__dict__[field] = self.__dict__[field].strip()\n\n # Ensure postcodes are valid for country\n self.ensure_postcode_is_valid_for_country()\n\n def ensure_postcode_is_valid_for_country(self):\n \"\"\"\n Validate postcode given the country\n \"\"\"\n if not self.postcode and self.POSTCODE_REQUIRED and self.country_id:\n country_code = self.country.iso_3166_1_a2\n regex = self.POSTCODES_REGEX.get(country_code, None)\n if regex:\n msg = _(\"Addresses in %(country)s require a valid postcode\") \\\n % {'country': self.country}\n raise exceptions.ValidationError(msg)\n\n if self.postcode and self.country_id:\n # Ensure postcodes are always uppercase\n postcode = self.postcode.upper().replace(' ', '')\n country_code = self.country.iso_3166_1_a2\n regex = self.POSTCODES_REGEX.get(country_code, None)\n\n # Validate postcode against regex for the country if available\n if regex and not re.match(regex, postcode):\n msg = _(\"The postcode '%(postcode)s' is not valid \"\n \"for %(country)s\") \\\n % {'postcode': self.postcode,\n 'country': self.country}\n raise exceptions.ValidationError(\n {'postcode': [msg]})\n\n def _update_search_text(self):\n search_fields = filter(\n bool, [self.first_name, self.last_name,\n self.line1, self.line2, self.line3, self.line4,\n self.state, self.postcode, self.country.name])\n self.search_text = ' '.join(search_fields)\n\n # Properties\n\n @property\n def city(self):\n # Common alias\n return self.line4\n\n @property\n def summary(self):\n \"\"\"\n Returns a single string summary of the address,\n separating fields using commas.\n \"\"\"\n return \", \".join(self.active_address_fields())\n\n @property\n def salutation(self):\n \"\"\"\n Name (including title)\n \"\"\"\n return self.join_fields(\n ('title', 'first_name', 'last_name'),\n separator=\" \")\n\n @property\n def name(self):\n return self.join_fields(('first_name', 'last_name'), separator=\" \")\n\n # Helpers\n\n def get_field_values(self, fields):\n field_values = []\n for field in fields:\n # Title is special case\n if field == 'title':\n value = self.get_title_display()\n elif field == 'country':\n try:\n value = self.country.printable_name\n except exceptions.ObjectDoesNotExist:\n value = ''\n elif field == 'salutation':\n value = self.salutation\n else:\n value = getattr(self, field)\n field_values.append(value)\n return field_values\n\n def get_address_field_values(self, fields):\n \"\"\"\n Returns set of field values within the salutation and country.\n \"\"\"\n field_values = [f.strip() for f in self.get_field_values(fields) if f]\n return field_values\n\n def generate_hash(self):\n \"\"\"\n Returns a hash of the address, based on standard set of fields, listed\n out in `hash_fields` property.\n \"\"\"\n field_values = 
self.get_address_field_values(self.hash_fields)\n # Python 2 and 3 generates CRC checksum in different ranges, so\n # in order to generate platform-independent value we apply\n # `& 0xffffffff` expression.\n return zlib.crc32(', '.join(field_values).upper().encode('UTF8')) & 0xffffffff\n\n def join_fields(self, fields, separator=\", \"):\n \"\"\"\n Join a sequence of fields using the specified separator\n \"\"\"\n field_values = self.get_field_values(fields)\n return separator.join(filter(bool, field_values))\n\n def populate_alternative_model(self, address_model):\n \"\"\"\n For populating an address model using the matching fields\n from this one.\n\n This is used to convert a user address to a shipping address\n as part of the checkout process.\n \"\"\"\n destination_field_names = [\n field.name for field in address_model._meta.fields]\n for field_name in [field.name for field in self._meta.fields]:\n if field_name in destination_field_names and field_name != 'id':\n setattr(address_model, field_name, getattr(self, field_name))\n\n def active_address_fields(self):\n \"\"\"\n Returns the non-empty components of the address, but merging the\n title, first_name and last_name into a single line. It uses fields\n listed out in `base_fields` property.\n \"\"\"\n return self.get_address_field_values(self.base_fields)\n\n\nclass AbstractCountry(models.Model):\n \"\"\"\n `ISO 3166 Country Codes <https://www.iso.org/iso-3166-country-codes.html>`_\n\n The field names are a bit awkward, but kept for backwards compatibility.\n pycountry's syntax of alpha2, alpha3, name and official_name seems sane.\n \"\"\"\n iso_3166_1_a2 = models.CharField(\n _('ISO 3166-1 alpha-2'), max_length=2, primary_key=True)\n iso_3166_1_a3 = models.CharField(\n _('ISO 3166-1 alpha-3'), max_length=3, blank=True)\n iso_3166_1_numeric = models.CharField(\n _('ISO 3166-1 numeric'), blank=True, max_length=3)\n\n #: The commonly used name; e.g. 'United Kingdom'\n printable_name = models.CharField(_('Country name'), max_length=128, db_index=True)\n #: The full official name of a country\n #: e.g. 'United Kingdom of Great Britain and Northern Ireland'\n name = models.CharField(_('Official name'), max_length=128)\n\n display_order = models.PositiveSmallIntegerField(\n _(\"Display order\"), default=0, db_index=True,\n help_text=_('Higher the number, higher the country in the list.'))\n\n is_shipping_country = models.BooleanField(\n _(\"Is shipping country\"), default=False, db_index=True)\n\n class Meta:\n abstract = True\n app_label = 'address'\n verbose_name = _('Country')\n verbose_name_plural = _('Countries')\n ordering = ('-display_order', 'printable_name',)\n\n def __str__(self):\n return self.printable_name or self.name\n\n @property\n def code(self):\n \"\"\"\n Shorthand for the ISO 3166 Alpha-2 code\n \"\"\"\n return self.iso_3166_1_a2\n\n @property\n def numeric_code(self):\n \"\"\"\n Shorthand for the ISO 3166 numeric code.\n\n :py:attr:`.iso_3166_1_numeric` used to wrongly be a integer field, but has to\n be padded with leading zeroes. It's since been converted to a char\n field, but the database might still contain non-padded strings. That's\n why the padding is kept.\n \"\"\"\n return \"%.03d\" % int(self.iso_3166_1_numeric)\n\n\nclass AbstractShippingAddress(AbstractAddress):\n \"\"\"\n A shipping address.\n\n A shipping address should not be edited once the order has been placed -\n it should be read-only after that.\n\n NOTE:\n ShippingAddress is a model of the order app. 
But moving it there is tricky\n due to circular import issues that are amplified by get_model/get_class\n calls pre-Django 1.7 to register receivers. So...\n TODO: Once Django 1.6 support is dropped, move AbstractBillingAddress and\n AbstractShippingAddress to the order app, and move\n PartnerAddress to the partner app.\n \"\"\"\n\n phone_number = PhoneNumberField(\n _(\"Phone number\"), blank=True,\n help_text=_(\"In case we need to call you about your order\"))\n notes = models.TextField(\n blank=True, verbose_name=_('Instructions'),\n help_text=_(\"Tell us anything we should know when delivering \"\n \"your order.\"))\n\n class Meta:\n abstract = True\n # ShippingAddress is registered in order/models.py\n app_label = 'order'\n verbose_name = _(\"Shipping address\")\n verbose_name_plural = _(\"Shipping addresses\")\n\n @property\n def order(self):\n \"\"\"\n Return the order linked to this shipping address\n \"\"\"\n return self.order_set.first()\n\n\nclass AbstractUserAddress(AbstractShippingAddress):\n \"\"\"\n A user's address. A user can have many of these and together they form an\n 'address book' of sorts for the user.\n\n We use a separate model for shipping and billing (even though there will be\n some data duplication) because we don't want shipping/billing addresses\n changed or deleted once an order has been placed. By having a separate\n model, we allow users the ability to add/edit/delete from their address\n book without affecting orders already placed.\n \"\"\"\n user = models.ForeignKey(\n AUTH_USER_MODEL,\n on_delete=models.CASCADE,\n related_name='addresses',\n verbose_name=_(\"User\"))\n\n #: Whether this address is the default for shipping\n is_default_for_shipping = models.BooleanField(\n _(\"Default shipping address?\"), default=False)\n\n #: Whether this address should be the default for billing.\n is_default_for_billing = models.BooleanField(\n _(\"Default billing address?\"), default=False)\n\n #: We keep track of the number of times an address has been used\n #: as a shipping address so we can show the most popular ones\n #: first at the checkout.\n num_orders_as_shipping_address = models.PositiveIntegerField(\n _(\"Number of Orders as Shipping Address\"), default=0)\n\n #: Same as previous, but for billing address.\n num_orders_as_billing_address = models.PositiveIntegerField(\n _(\"Number of Orders as Billing Address\"), default=0)\n\n #: A hash is kept to try and avoid duplicate addresses being added\n #: to the address book.\n hash = models.CharField(_(\"Address Hash\"), max_length=255, db_index=True,\n editable=False)\n date_created = models.DateTimeField(_(\"Date Created\"), auto_now_add=True)\n\n def save(self, *args, **kwargs):\n \"\"\"\n Save a hash of the address fields\n \"\"\"\n # Save a hash of the address fields so we can check whether two\n # addresses are the same to avoid saving duplicates\n self.hash = self.generate_hash()\n\n # Ensure that each user only has one default shipping address\n # and billing address\n self._ensure_defaults_integrity()\n super().save(*args, **kwargs)\n\n def _ensure_defaults_integrity(self):\n if self.is_default_for_shipping:\n self.__class__._default_manager\\\n .filter(user=self.user, is_default_for_shipping=True)\\\n .update(is_default_for_shipping=False)\n if self.is_default_for_billing:\n self.__class__._default_manager\\\n .filter(user=self.user, is_default_for_billing=True)\\\n .update(is_default_for_billing=False)\n\n class Meta:\n abstract = True\n app_label = 'address'\n verbose_name = _(\"User address\")\n 
verbose_name_plural = _(\"User addresses\")\n ordering = ['-num_orders_as_shipping_address']\n unique_together = ('user', 'hash')\n\n def validate_unique(self, exclude=None):\n super().validate_unique(exclude)\n qs = self.__class__.objects.filter(\n user=self.user,\n hash=self.generate_hash())\n if self.id:\n qs = qs.exclude(id=self.id)\n if qs.exists():\n raise exceptions.ValidationError({\n '__all__': [_(\"This address is already in your address\"\n \" book\")]})\n\n\nclass AbstractBillingAddress(AbstractAddress):\n class Meta:\n abstract = True\n # BillingAddress is registered in order/models.py\n app_label = 'order'\n verbose_name = _(\"Billing address\")\n verbose_name_plural = _(\"Billing addresses\")\n\n @property\n def order(self):\n \"\"\"\n Return the order linked to this shipping address\n \"\"\"\n return self.order_set.first()\n\n\nclass AbstractPartnerAddress(AbstractAddress):\n \"\"\"\n A partner can have one or more addresses. This can be useful e.g. when\n determining US tax which depends on the origin of the shipment.\n \"\"\"\n partner = models.ForeignKey(\n 'partner.Partner',\n on_delete=models.CASCADE,\n related_name='addresses',\n verbose_name=_('Partner'))\n\n class Meta:\n abstract = True\n app_label = 'partner'\n verbose_name = _(\"Partner address\")\n verbose_name_plural = _(\"Partner addresses\")\n", "path": "src/oscar/apps/address/abstract_models.py"}]} |
gh_patches_debug_1045 | rasdani/github-patches | git_diff | graphql-python__graphene-django-1155 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Why `DjangoFormMutation.perform_mutate()` calls `form.save()`
Django's plain (non-model) `Form`s don't have the `save()` method. Why does `DjangoFormMutation.perform_mutate()` still call that here:
https://github.com/graphql-python/graphene-django/blob/3058118e8fc64a0a0853b67364381eccc7746f67/graphene_django/forms/mutation.py#L104
Am I missing something or does this just always end up in an error?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `graphene_django/forms/mutation.py`
Content:
```
1 # from django import forms
2 from collections import OrderedDict
3
4 import graphene
5 from graphene import Field, InputField
6 from graphene.relay.mutation import ClientIDMutation
7 from graphene.types.mutation import MutationOptions
8
9 # from graphene.types.inputobjecttype import (
10 # InputObjectTypeOptions,
11 # InputObjectType,
12 # )
13 from graphene.types.utils import yank_fields_from_attrs
14 from graphene_django.constants import MUTATION_ERRORS_FLAG
15 from graphene_django.registry import get_global_registry
16
17 from ..types import ErrorType
18 from .converter import convert_form_field
19
20
21 def fields_for_form(form, only_fields, exclude_fields):
22 fields = OrderedDict()
23 for name, field in form.fields.items():
24 is_not_in_only = only_fields and name not in only_fields
25 is_excluded = (
26 name
27 in exclude_fields # or
28 # name in already_created_fields
29 )
30
31 if is_not_in_only or is_excluded:
32 continue
33
34 fields[name] = convert_form_field(field)
35 return fields
36
37
38 class BaseDjangoFormMutation(ClientIDMutation):
39 class Meta:
40 abstract = True
41
42 @classmethod
43 def mutate_and_get_payload(cls, root, info, **input):
44 form = cls.get_form(root, info, **input)
45
46 if form.is_valid():
47 return cls.perform_mutate(form, info)
48 else:
49 errors = ErrorType.from_errors(form.errors)
50 _set_errors_flag_to_context(info)
51
52 return cls(errors=errors, **form.data)
53
54 @classmethod
55 def get_form(cls, root, info, **input):
56 form_kwargs = cls.get_form_kwargs(root, info, **input)
57 return cls._meta.form_class(**form_kwargs)
58
59 @classmethod
60 def get_form_kwargs(cls, root, info, **input):
61 kwargs = {"data": input}
62
63 pk = input.pop("id", None)
64 if pk:
65 instance = cls._meta.model._default_manager.get(pk=pk)
66 kwargs["instance"] = instance
67
68 return kwargs
69
70
71 class DjangoFormMutationOptions(MutationOptions):
72 form_class = None
73
74
75 class DjangoFormMutation(BaseDjangoFormMutation):
76 class Meta:
77 abstract = True
78
79 errors = graphene.List(ErrorType)
80
81 @classmethod
82 def __init_subclass_with_meta__(
83 cls, form_class=None, only_fields=(), exclude_fields=(), **options
84 ):
85
86 if not form_class:
87 raise Exception("form_class is required for DjangoFormMutation")
88
89 form = form_class()
90 input_fields = fields_for_form(form, only_fields, exclude_fields)
91 output_fields = fields_for_form(form, only_fields, exclude_fields)
92
93 _meta = DjangoFormMutationOptions(cls)
94 _meta.form_class = form_class
95 _meta.fields = yank_fields_from_attrs(output_fields, _as=Field)
96
97 input_fields = yank_fields_from_attrs(input_fields, _as=InputField)
98 super(DjangoFormMutation, cls).__init_subclass_with_meta__(
99 _meta=_meta, input_fields=input_fields, **options
100 )
101
102 @classmethod
103 def perform_mutate(cls, form, info):
104 form.save()
105 return cls(errors=[], **form.cleaned_data)
106
107
108 class DjangoModelDjangoFormMutationOptions(DjangoFormMutationOptions):
109 model = None
110 return_field_name = None
111
112
113 class DjangoModelFormMutation(BaseDjangoFormMutation):
114 class Meta:
115 abstract = True
116
117 errors = graphene.List(ErrorType)
118
119 @classmethod
120 def __init_subclass_with_meta__(
121 cls,
122 form_class=None,
123 model=None,
124 return_field_name=None,
125 only_fields=(),
126 exclude_fields=(),
127 **options
128 ):
129
130 if not form_class:
131 raise Exception("form_class is required for DjangoModelFormMutation")
132
133 if not model:
134 model = form_class._meta.model
135
136 if not model:
137 raise Exception("model is required for DjangoModelFormMutation")
138
139 form = form_class()
140 input_fields = fields_for_form(form, only_fields, exclude_fields)
141 if "id" not in exclude_fields:
142 input_fields["id"] = graphene.ID()
143
144 registry = get_global_registry()
145 model_type = registry.get_type_for_model(model)
146 if not model_type:
147 raise Exception("No type registered for model: {}".format(model.__name__))
148
149 if not return_field_name:
150 model_name = model.__name__
151 return_field_name = model_name[:1].lower() + model_name[1:]
152
153 output_fields = OrderedDict()
154 output_fields[return_field_name] = graphene.Field(model_type)
155
156 _meta = DjangoModelDjangoFormMutationOptions(cls)
157 _meta.form_class = form_class
158 _meta.model = model
159 _meta.return_field_name = return_field_name
160 _meta.fields = yank_fields_from_attrs(output_fields, _as=Field)
161
162 input_fields = yank_fields_from_attrs(input_fields, _as=InputField)
163 super(DjangoModelFormMutation, cls).__init_subclass_with_meta__(
164 _meta=_meta, input_fields=input_fields, **options
165 )
166
167 @classmethod
168 def mutate_and_get_payload(cls, root, info, **input):
169 form = cls.get_form(root, info, **input)
170
171 if form.is_valid():
172 return cls.perform_mutate(form, info)
173 else:
174 errors = ErrorType.from_errors(form.errors)
175 _set_errors_flag_to_context(info)
176
177 return cls(errors=errors)
178
179 @classmethod
180 def perform_mutate(cls, form, info):
181 obj = form.save()
182 kwargs = {cls._meta.return_field_name: obj}
183 return cls(errors=[], **kwargs)
184
185
186 def _set_errors_flag_to_context(info):
187 # This is not ideal but necessary to keep the response errors empty
188 if info and info.context:
189 setattr(info.context, MUTATION_ERRORS_FLAG, True)
190
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/graphene_django/forms/mutation.py b/graphene_django/forms/mutation.py
--- a/graphene_django/forms/mutation.py
+++ b/graphene_django/forms/mutation.py
@@ -101,7 +101,10 @@
@classmethod
def perform_mutate(cls, form, info):
- form.save()
+ if hasattr(form, "save"):
+ # `save` method won't exist on plain Django forms, but this mutation can
+ # in theory be used with `ModelForm`s as well and we do want to save them.
+ form.save()
return cls(errors=[], **form.cleaned_data)
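A brief, hedged illustration of why the guard in the diff above matters (hypothetical names, assuming a configured Django project; this is not code from graphene-django itself): a plain `django.forms.Form` exposes `cleaned_data` after `is_valid()` but defines no `save()` method, so an unconditional `form.save()` raises `AttributeError` for non-model forms, while `ModelForm`s still need the call.

```python
# Illustrative sketch only (assumed names, not taken from the repository above).
from django import forms


class ContactForm(forms.Form):
    """A plain, non-model form: it has cleaned_data after is_valid(), but no save()."""
    email = forms.EmailField()


def perform_mutate_sketch(form):
    # Same guard as the patch: only persist when the form actually defines save(),
    # so plain Forms pass through and ModelForms still get saved.
    if hasattr(form, "save"):
        form.save()
    return form.cleaned_data


form = ContactForm(data={"email": "user@example.com"})
if form.is_valid():
    print(perform_mutate_sketch(form))  # prints {'email': 'user@example.com'} without raising AttributeError
```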
| {"golden_diff": "diff --git a/graphene_django/forms/mutation.py b/graphene_django/forms/mutation.py\n--- a/graphene_django/forms/mutation.py\n+++ b/graphene_django/forms/mutation.py\n@@ -101,7 +101,10 @@\n \n @classmethod\n def perform_mutate(cls, form, info):\n- form.save()\n+ if hasattr(form, \"save\"):\n+ # `save` method won't exist on plain Django forms, but this mutation can\n+ # in theory be used with `ModelForm`s as well and we do want to save them.\n+ form.save()\n return cls(errors=[], **form.cleaned_data)\n", "issue": "Why `DjangoFormMutation.perform_mutate()` calls `form.save()`\nDjango's plain (non-model) `Form`s don't have the `save()` method. Why does `DjangoFormMutation.perform_mutate()` still call that here: \r\n\r\nhttps://github.com/graphql-python/graphene-django/blob/3058118e8fc64a0a0853b67364381eccc7746f67/graphene_django/forms/mutation.py#L104\r\n\r\nAm I missing something or does this just always end up in an error?\n", "before_files": [{"content": "# from django import forms\nfrom collections import OrderedDict\n\nimport graphene\nfrom graphene import Field, InputField\nfrom graphene.relay.mutation import ClientIDMutation\nfrom graphene.types.mutation import MutationOptions\n\n# from graphene.types.inputobjecttype import (\n# InputObjectTypeOptions,\n# InputObjectType,\n# )\nfrom graphene.types.utils import yank_fields_from_attrs\nfrom graphene_django.constants import MUTATION_ERRORS_FLAG\nfrom graphene_django.registry import get_global_registry\n\nfrom ..types import ErrorType\nfrom .converter import convert_form_field\n\n\ndef fields_for_form(form, only_fields, exclude_fields):\n fields = OrderedDict()\n for name, field in form.fields.items():\n is_not_in_only = only_fields and name not in only_fields\n is_excluded = (\n name\n in exclude_fields # or\n # name in already_created_fields\n )\n\n if is_not_in_only or is_excluded:\n continue\n\n fields[name] = convert_form_field(field)\n return fields\n\n\nclass BaseDjangoFormMutation(ClientIDMutation):\n class Meta:\n abstract = True\n\n @classmethod\n def mutate_and_get_payload(cls, root, info, **input):\n form = cls.get_form(root, info, **input)\n\n if form.is_valid():\n return cls.perform_mutate(form, info)\n else:\n errors = ErrorType.from_errors(form.errors)\n _set_errors_flag_to_context(info)\n\n return cls(errors=errors, **form.data)\n\n @classmethod\n def get_form(cls, root, info, **input):\n form_kwargs = cls.get_form_kwargs(root, info, **input)\n return cls._meta.form_class(**form_kwargs)\n\n @classmethod\n def get_form_kwargs(cls, root, info, **input):\n kwargs = {\"data\": input}\n\n pk = input.pop(\"id\", None)\n if pk:\n instance = cls._meta.model._default_manager.get(pk=pk)\n kwargs[\"instance\"] = instance\n\n return kwargs\n\n\nclass DjangoFormMutationOptions(MutationOptions):\n form_class = None\n\n\nclass DjangoFormMutation(BaseDjangoFormMutation):\n class Meta:\n abstract = True\n\n errors = graphene.List(ErrorType)\n\n @classmethod\n def __init_subclass_with_meta__(\n cls, form_class=None, only_fields=(), exclude_fields=(), **options\n ):\n\n if not form_class:\n raise Exception(\"form_class is required for DjangoFormMutation\")\n\n form = form_class()\n input_fields = fields_for_form(form, only_fields, exclude_fields)\n output_fields = fields_for_form(form, only_fields, exclude_fields)\n\n _meta = DjangoFormMutationOptions(cls)\n _meta.form_class = form_class\n _meta.fields = yank_fields_from_attrs(output_fields, _as=Field)\n\n input_fields = yank_fields_from_attrs(input_fields, 
_as=InputField)\n super(DjangoFormMutation, cls).__init_subclass_with_meta__(\n _meta=_meta, input_fields=input_fields, **options\n )\n\n @classmethod\n def perform_mutate(cls, form, info):\n form.save()\n return cls(errors=[], **form.cleaned_data)\n\n\nclass DjangoModelDjangoFormMutationOptions(DjangoFormMutationOptions):\n model = None\n return_field_name = None\n\n\nclass DjangoModelFormMutation(BaseDjangoFormMutation):\n class Meta:\n abstract = True\n\n errors = graphene.List(ErrorType)\n\n @classmethod\n def __init_subclass_with_meta__(\n cls,\n form_class=None,\n model=None,\n return_field_name=None,\n only_fields=(),\n exclude_fields=(),\n **options\n ):\n\n if not form_class:\n raise Exception(\"form_class is required for DjangoModelFormMutation\")\n\n if not model:\n model = form_class._meta.model\n\n if not model:\n raise Exception(\"model is required for DjangoModelFormMutation\")\n\n form = form_class()\n input_fields = fields_for_form(form, only_fields, exclude_fields)\n if \"id\" not in exclude_fields:\n input_fields[\"id\"] = graphene.ID()\n\n registry = get_global_registry()\n model_type = registry.get_type_for_model(model)\n if not model_type:\n raise Exception(\"No type registered for model: {}\".format(model.__name__))\n\n if not return_field_name:\n model_name = model.__name__\n return_field_name = model_name[:1].lower() + model_name[1:]\n\n output_fields = OrderedDict()\n output_fields[return_field_name] = graphene.Field(model_type)\n\n _meta = DjangoModelDjangoFormMutationOptions(cls)\n _meta.form_class = form_class\n _meta.model = model\n _meta.return_field_name = return_field_name\n _meta.fields = yank_fields_from_attrs(output_fields, _as=Field)\n\n input_fields = yank_fields_from_attrs(input_fields, _as=InputField)\n super(DjangoModelFormMutation, cls).__init_subclass_with_meta__(\n _meta=_meta, input_fields=input_fields, **options\n )\n\n @classmethod\n def mutate_and_get_payload(cls, root, info, **input):\n form = cls.get_form(root, info, **input)\n\n if form.is_valid():\n return cls.perform_mutate(form, info)\n else:\n errors = ErrorType.from_errors(form.errors)\n _set_errors_flag_to_context(info)\n\n return cls(errors=errors)\n\n @classmethod\n def perform_mutate(cls, form, info):\n obj = form.save()\n kwargs = {cls._meta.return_field_name: obj}\n return cls(errors=[], **kwargs)\n\n\ndef _set_errors_flag_to_context(info):\n # This is not ideal but necessary to keep the response errors empty\n if info and info.context:\n setattr(info.context, MUTATION_ERRORS_FLAG, True)\n", "path": "graphene_django/forms/mutation.py"}], "after_files": [{"content": "# from django import forms\nfrom collections import OrderedDict\n\nimport graphene\nfrom graphene import Field, InputField\nfrom graphene.relay.mutation import ClientIDMutation\nfrom graphene.types.mutation import MutationOptions\n\n# from graphene.types.inputobjecttype import (\n# InputObjectTypeOptions,\n# InputObjectType,\n# )\nfrom graphene.types.utils import yank_fields_from_attrs\nfrom graphene_django.constants import MUTATION_ERRORS_FLAG\nfrom graphene_django.registry import get_global_registry\n\nfrom ..types import ErrorType\nfrom .converter import convert_form_field\n\n\ndef fields_for_form(form, only_fields, exclude_fields):\n fields = OrderedDict()\n for name, field in form.fields.items():\n is_not_in_only = only_fields and name not in only_fields\n is_excluded = (\n name\n in exclude_fields # or\n # name in already_created_fields\n )\n\n if is_not_in_only or is_excluded:\n continue\n\n fields[name] = 
convert_form_field(field)\n return fields\n\n\nclass BaseDjangoFormMutation(ClientIDMutation):\n class Meta:\n abstract = True\n\n @classmethod\n def mutate_and_get_payload(cls, root, info, **input):\n form = cls.get_form(root, info, **input)\n\n if form.is_valid():\n return cls.perform_mutate(form, info)\n else:\n errors = ErrorType.from_errors(form.errors)\n _set_errors_flag_to_context(info)\n\n return cls(errors=errors, **form.data)\n\n @classmethod\n def get_form(cls, root, info, **input):\n form_kwargs = cls.get_form_kwargs(root, info, **input)\n return cls._meta.form_class(**form_kwargs)\n\n @classmethod\n def get_form_kwargs(cls, root, info, **input):\n kwargs = {\"data\": input}\n\n pk = input.pop(\"id\", None)\n if pk:\n instance = cls._meta.model._default_manager.get(pk=pk)\n kwargs[\"instance\"] = instance\n\n return kwargs\n\n\nclass DjangoFormMutationOptions(MutationOptions):\n form_class = None\n\n\nclass DjangoFormMutation(BaseDjangoFormMutation):\n class Meta:\n abstract = True\n\n errors = graphene.List(ErrorType)\n\n @classmethod\n def __init_subclass_with_meta__(\n cls, form_class=None, only_fields=(), exclude_fields=(), **options\n ):\n\n if not form_class:\n raise Exception(\"form_class is required for DjangoFormMutation\")\n\n form = form_class()\n input_fields = fields_for_form(form, only_fields, exclude_fields)\n output_fields = fields_for_form(form, only_fields, exclude_fields)\n\n _meta = DjangoFormMutationOptions(cls)\n _meta.form_class = form_class\n _meta.fields = yank_fields_from_attrs(output_fields, _as=Field)\n\n input_fields = yank_fields_from_attrs(input_fields, _as=InputField)\n super(DjangoFormMutation, cls).__init_subclass_with_meta__(\n _meta=_meta, input_fields=input_fields, **options\n )\n\n @classmethod\n def perform_mutate(cls, form, info):\n if hasattr(form, \"save\"):\n # `save` method won't exist on plain Django forms, but this mutation can\n # in theory be used with `ModelForm`s as well and we do want to save them.\n form.save()\n return cls(errors=[], **form.cleaned_data)\n\n\nclass DjangoModelDjangoFormMutationOptions(DjangoFormMutationOptions):\n model = None\n return_field_name = None\n\n\nclass DjangoModelFormMutation(BaseDjangoFormMutation):\n class Meta:\n abstract = True\n\n errors = graphene.List(ErrorType)\n\n @classmethod\n def __init_subclass_with_meta__(\n cls,\n form_class=None,\n model=None,\n return_field_name=None,\n only_fields=(),\n exclude_fields=(),\n **options\n ):\n\n if not form_class:\n raise Exception(\"form_class is required for DjangoModelFormMutation\")\n\n if not model:\n model = form_class._meta.model\n\n if not model:\n raise Exception(\"model is required for DjangoModelFormMutation\")\n\n form = form_class()\n input_fields = fields_for_form(form, only_fields, exclude_fields)\n if \"id\" not in exclude_fields:\n input_fields[\"id\"] = graphene.ID()\n\n registry = get_global_registry()\n model_type = registry.get_type_for_model(model)\n if not model_type:\n raise Exception(\"No type registered for model: {}\".format(model.__name__))\n\n if not return_field_name:\n model_name = model.__name__\n return_field_name = model_name[:1].lower() + model_name[1:]\n\n output_fields = OrderedDict()\n output_fields[return_field_name] = graphene.Field(model_type)\n\n _meta = DjangoModelDjangoFormMutationOptions(cls)\n _meta.form_class = form_class\n _meta.model = model\n _meta.return_field_name = return_field_name\n _meta.fields = yank_fields_from_attrs(output_fields, _as=Field)\n\n input_fields = 
yank_fields_from_attrs(input_fields, _as=InputField)\n super(DjangoModelFormMutation, cls).__init_subclass_with_meta__(\n _meta=_meta, input_fields=input_fields, **options\n )\n\n @classmethod\n def mutate_and_get_payload(cls, root, info, **input):\n form = cls.get_form(root, info, **input)\n\n if form.is_valid():\n return cls.perform_mutate(form, info)\n else:\n errors = ErrorType.from_errors(form.errors)\n _set_errors_flag_to_context(info)\n\n return cls(errors=errors)\n\n @classmethod\n def perform_mutate(cls, form, info):\n obj = form.save()\n kwargs = {cls._meta.return_field_name: obj}\n return cls(errors=[], **kwargs)\n\n\ndef _set_errors_flag_to_context(info):\n # This is not ideal but necessary to keep the response errors empty\n if info and info.context:\n setattr(info.context, MUTATION_ERRORS_FLAG, True)\n", "path": "graphene_django/forms/mutation.py"}]} |
gh_patches_debug_1046 | rasdani/github-patches | git_diff | DDMAL__CantusDB-845 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"my sources"" page side panel
http://206.12.88.113/my-sources/ has a "created by" side panel. This is
1. not paginated
2. includes all the sources and not just the created ones (so it's both extra-long and also redundant).
Staging:
<img width="1106" alt="image" src="https://github.com/DDMAL/CantusDB/assets/67451875/3d11789e-6027-4358-8595-328e95e89d7b">
On production this only has the sources created on production, so it only has one source (I'm assuming the others will show up once we sort out the "Created by" info from OldCantus?).
<img width="1160" alt="image" src="https://github.com/DDMAL/CantusDB/assets/67451875/f6e98d78-0f66-421c-aad9-2ede47400d88">
On OldCantus it looks like this:
<img width="981" alt="image" src="https://github.com/DDMAL/CantusDB/assets/67451875/15f4b995-d930-4645-9ca4-3befce6a868d">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/views/user.py`
Content:
```
1 from django.urls import reverse
2 from django.db.models.aggregates import Count
3 from django.views.generic import DetailView
4 from django.contrib.auth import get_user_model, login as auth_login
5 from main_app.models import Source
6 from django.views.generic import ListView
7 from django.contrib.auth.mixins import LoginRequiredMixin
8 from django.db.models import Q
9 from django.core.paginator import Paginator
10 from django.contrib.auth.views import LogoutView, LoginView
11 from django.contrib import messages
12 from extra_views import SearchableListMixin
13 from django.http import HttpResponseRedirect
14 from django.core.exceptions import PermissionDenied
15
16
17 class UserDetailView(DetailView):
18 """Detail view for User model
19
20 Accessed by /users/<pk>
21 """
22
23 model = get_user_model()
24 context_object_name = "user"
25 template_name = "user_detail.html"
26
27 def get_context_data(self, **kwargs):
28 user = self.get_object()
29 # to begin, if the person viewing the site is not logged in,
30 # they should only be able to view the detail pages of indexers,
31 # and not the detail pages of run-of-the-mill users
32 viewing_user = self.request.user
33 if not (viewing_user.is_authenticated or user.is_indexer):
34 raise PermissionDenied()
35
36 context = super().get_context_data(**kwargs)
37 display_unpublished = viewing_user.is_authenticated
38 sort_by_siglum = lambda source: source.siglum
39 if display_unpublished:
40 context["inventoried_sources"] = sorted(
41 user.inventoried_sources.all(), key=sort_by_siglum
42 )
43 context["full_text_sources"] = sorted(
44 user.entered_full_text_for_sources.all(), key=sort_by_siglum
45 )
46 context["melody_sources"] = sorted(
47 user.entered_melody_for_sources.all(), key=sort_by_siglum
48 )
49 context["proofread_sources"] = sorted(
50 user.proofread_sources.all(), key=sort_by_siglum
51 )
52 context["edited_sources"] = sorted(
53 user.edited_sources.all(), key=sort_by_siglum
54 )
55 else:
56 context["inventoried_sources"] = sorted(
57 user.inventoried_sources.all().filter(published=True),
58 key=sort_by_siglum,
59 )
60 context["full_text_sources"] = sorted(
61 user.entered_full_text_for_sources.all().filter(published=True),
62 key=sort_by_siglum,
63 )
64 context["melody_sources"] = sorted(
65 user.entered_melody_for_sources.all().filter(published=True),
66 key=sort_by_siglum,
67 )
68 context["proofread_sources"] = sorted(
69 user.proofread_sources.all().filter(published=True), key=sort_by_siglum
70 )
71 context["edited_sources"] = sorted(
72 user.edited_sources.all().filter(published=True), key=sort_by_siglum
73 )
74
75 return context
76
77
78 class UserSourceListView(LoginRequiredMixin, ListView):
79 model = Source
80 context_object_name = "sources"
81 template_name = "user_source_list.html"
82 paginate_by = 100
83
84 def get_queryset(self):
85 return (
86 Source.objects.filter(
87 Q(current_editors=self.request.user)
88 | Q(created_by=self.request.user)
89 # | Q(inventoried_by=self.request.user)
90 # | Q(full_text_entered_by=self.request.user)
91 # | Q(melodies_entered_by=self.request.user)
92 # | Q(proofreaders=self.request.user)
93 # | Q(other_editors=self.request.user)
94 )
95 .order_by("-date_created")
96 .distinct()
97 )
98
99 def get_context_data(self, **kwargs):
100 context = super().get_context_data(**kwargs)
101
102 user_created_sources = (
103 Source.objects.filter(created_by=self.request.user)
104 .order_by("-date_created")
105 .distinct()
106 )
107 paginator = Paginator(user_created_sources, 10)
108 page_number = self.request.GET.get("page2")
109 page_obj = paginator.get_page(page_number)
110
111 context["user_created_sources_page_obj"] = page_obj
112 return context
113
114
115 class CustomLogoutView(LogoutView):
116 def get_next_page(self):
117 next_page = super().get_next_page()
118 messages.success(self.request, "You have successfully logged out!")
119 return next_page
120
121
122 class UserListView(LoginRequiredMixin, SearchableListMixin, ListView):
123 """A list of all User objects
124
125 This view is equivalent to the user list view on the old Cantus.
126 This includes all User objects on the old Cantus.
127 When passed a `?q=<query>` argument in the GET request, it will filter users
128 based on the fields defined in `search_fields` with the `icontains` lookup.
129
130 Accessed by /users/
131 """
132
133 model = get_user_model()
134 ordering = "full_name"
135 search_fields = ["full_name", "institution", "city", "country"]
136 paginate_by = 100
137 template_name = "user_list.html"
138 context_object_name = "users"
139
140
141 class IndexerListView(SearchableListMixin, ListView):
142 """A list of User objects shown to the public
143
144 This view replaces the indexer list view on the old Cantus.
145 The indexers are considered a subset of all User objects, the subset shown to the public.
146 This includes the User objects corresponding to Indexer objects on the old Cantus.
147 When passed a `?q=<query>` argument in the GET request, it will filter users
148 based on the fields defined in `search_fields` with the `icontains` lookup.
149
150 Accessed by /indexers/
151 """
152
153 model = get_user_model()
154 ordering = "full_name"
155 search_fields = ["full_name", "institution", "city", "country"]
156 paginate_by = 100
157 template_name = "indexer_list.html"
158 context_object_name = "indexers"
159
160 def get_queryset(self):
161 all_users = super().get_queryset()
162 indexers = all_users.filter(is_indexer=True)
163 display_unpublished = self.request.user.is_authenticated
164 if display_unpublished:
165 indexers = indexers.annotate(source_count=Count("inventoried_sources"))
166 # display those who have at least one source
167 return indexers.filter(source_count__gte=1)
168 else:
169 indexers = indexers.annotate(
170 source_count=Count(
171 "inventoried_sources", filter=Q(inventoried_sources__published=True)
172 )
173 )
174 # display those who have at least one published source
175 return indexers.filter(source_count__gte=1)
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/main_app/views/user.py b/django/cantusdb_project/main_app/views/user.py
--- a/django/cantusdb_project/main_app/views/user.py
+++ b/django/cantusdb_project/main_app/views/user.py
@@ -104,7 +104,7 @@
.order_by("-date_created")
.distinct()
)
- paginator = Paginator(user_created_sources, 10)
+ paginator = Paginator(user_created_sources, 6)
page_number = self.request.GET.get("page2")
page_obj = paginator.get_page(page_number)
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/user.py b/django/cantusdb_project/main_app/views/user.py\n--- a/django/cantusdb_project/main_app/views/user.py\n+++ b/django/cantusdb_project/main_app/views/user.py\n@@ -104,7 +104,7 @@\n .order_by(\"-date_created\")\n .distinct()\n )\n- paginator = Paginator(user_created_sources, 10)\n+ paginator = Paginator(user_created_sources, 6)\n page_number = self.request.GET.get(\"page2\")\n page_obj = paginator.get_page(page_number)\n", "issue": "\"my sources\"\" page side panel\nhttp://206.12.88.113/my-sources/ has a \"created by\" side panel. This is\r\n1. not paginated\r\n3. includes all the sources and not just the created ones (so it's both extra- long and also redundant).\r\nStaging:\r\n<img width=\"1106\" alt=\"image\" src=\"https://github.com/DDMAL/CantusDB/assets/67451875/3d11789e-6027-4358-8595-328e95e89d7b\">\r\non production this only has the sources created on production, so it only has one source (I'm assuming the others will show up once we sort out the \"Created by\" info from OldCantus?\r\n<img width=\"1160\" alt=\"image\" src=\"https://github.com/DDMAL/CantusDB/assets/67451875/f6e98d78-0f66-421c-aad9-2ede47400d88\">\r\nOn OldCantus it looks like this:\r\n\r\n<img width=\"981\" alt=\"image\" src=\"https://github.com/DDMAL/CantusDB/assets/67451875/15f4b995-d930-4645-9ca4-3befce6a868d\">\r\n\r\n\n", "before_files": [{"content": "from django.urls import reverse\nfrom django.db.models.aggregates import Count\nfrom django.views.generic import DetailView\nfrom django.contrib.auth import get_user_model, login as auth_login\nfrom main_app.models import Source\nfrom django.views.generic import ListView\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.db.models import Q\nfrom django.core.paginator import Paginator\nfrom django.contrib.auth.views import LogoutView, LoginView\nfrom django.contrib import messages\nfrom extra_views import SearchableListMixin\nfrom django.http import HttpResponseRedirect\nfrom django.core.exceptions import PermissionDenied\n\n\nclass UserDetailView(DetailView):\n \"\"\"Detail view for User model\n\n Accessed by /users/<pk>\n \"\"\"\n\n model = get_user_model()\n context_object_name = \"user\"\n template_name = \"user_detail.html\"\n\n def get_context_data(self, **kwargs):\n user = self.get_object()\n # to begin, if the person viewing the site is not logged in,\n # they should only be able to view the detail pages of indexers,\n # and not the detail pages of run-of-the-mill users\n viewing_user = self.request.user\n if not (viewing_user.is_authenticated or user.is_indexer):\n raise PermissionDenied()\n\n context = super().get_context_data(**kwargs)\n display_unpublished = viewing_user.is_authenticated\n sort_by_siglum = lambda source: source.siglum\n if display_unpublished:\n context[\"inventoried_sources\"] = sorted(\n user.inventoried_sources.all(), key=sort_by_siglum\n )\n context[\"full_text_sources\"] = sorted(\n user.entered_full_text_for_sources.all(), key=sort_by_siglum\n )\n context[\"melody_sources\"] = sorted(\n user.entered_melody_for_sources.all(), key=sort_by_siglum\n )\n context[\"proofread_sources\"] = sorted(\n user.proofread_sources.all(), key=sort_by_siglum\n )\n context[\"edited_sources\"] = sorted(\n user.edited_sources.all(), key=sort_by_siglum\n )\n else:\n context[\"inventoried_sources\"] = sorted(\n user.inventoried_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"full_text_sources\"] = sorted(\n 
user.entered_full_text_for_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"melody_sources\"] = sorted(\n user.entered_melody_for_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"proofread_sources\"] = sorted(\n user.proofread_sources.all().filter(published=True), key=sort_by_siglum\n )\n context[\"edited_sources\"] = sorted(\n user.edited_sources.all().filter(published=True), key=sort_by_siglum\n )\n\n return context\n\n\nclass UserSourceListView(LoginRequiredMixin, ListView):\n model = Source\n context_object_name = \"sources\"\n template_name = \"user_source_list.html\"\n paginate_by = 100\n\n def get_queryset(self):\n return (\n Source.objects.filter(\n Q(current_editors=self.request.user)\n | Q(created_by=self.request.user)\n # | Q(inventoried_by=self.request.user)\n # | Q(full_text_entered_by=self.request.user)\n # | Q(melodies_entered_by=self.request.user)\n # | Q(proofreaders=self.request.user)\n # | Q(other_editors=self.request.user)\n )\n .order_by(\"-date_created\")\n .distinct()\n )\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n user_created_sources = (\n Source.objects.filter(created_by=self.request.user)\n .order_by(\"-date_created\")\n .distinct()\n )\n paginator = Paginator(user_created_sources, 10)\n page_number = self.request.GET.get(\"page2\")\n page_obj = paginator.get_page(page_number)\n\n context[\"user_created_sources_page_obj\"] = page_obj\n return context\n\n\nclass CustomLogoutView(LogoutView):\n def get_next_page(self):\n next_page = super().get_next_page()\n messages.success(self.request, \"You have successfully logged out!\")\n return next_page\n\n\nclass UserListView(LoginRequiredMixin, SearchableListMixin, ListView):\n \"\"\"A list of all User objects\n\n This view is equivalent to the user list view on the old Cantus.\n This includes all User objects on the old Cantus.\n When passed a `?q=<query>` argument in the GET request, it will filter users\n based on the fields defined in `search_fields` with the `icontains` lookup.\n\n Accessed by /users/\n \"\"\"\n\n model = get_user_model()\n ordering = \"full_name\"\n search_fields = [\"full_name\", \"institution\", \"city\", \"country\"]\n paginate_by = 100\n template_name = \"user_list.html\"\n context_object_name = \"users\"\n\n\nclass IndexerListView(SearchableListMixin, ListView):\n \"\"\"A list of User objects shown to the public\n\n This view replaces the indexer list view on the old Cantus.\n The indexers are considered a subset of all User objects, the subset shown to the public.\n This includes the User objects corresponding to Indexer objects on the old Cantus.\n When passed a `?q=<query>` argument in the GET request, it will filter users\n based on the fields defined in `search_fields` with the `icontains` lookup.\n\n Accessed by /indexers/\n \"\"\"\n\n model = get_user_model()\n ordering = \"full_name\"\n search_fields = [\"full_name\", \"institution\", \"city\", \"country\"]\n paginate_by = 100\n template_name = \"indexer_list.html\"\n context_object_name = \"indexers\"\n\n def get_queryset(self):\n all_users = super().get_queryset()\n indexers = all_users.filter(is_indexer=True)\n display_unpublished = self.request.user.is_authenticated\n if display_unpublished:\n indexers = indexers.annotate(source_count=Count(\"inventoried_sources\"))\n # display those who have at least one source\n return indexers.filter(source_count__gte=1)\n else:\n indexers = indexers.annotate(\n source_count=Count(\n 
\"inventoried_sources\", filter=Q(inventoried_sources__published=True)\n )\n )\n # display those who have at least one published source\n return indexers.filter(source_count__gte=1)\n", "path": "django/cantusdb_project/main_app/views/user.py"}], "after_files": [{"content": "from django.urls import reverse\nfrom django.db.models.aggregates import Count\nfrom django.views.generic import DetailView\nfrom django.contrib.auth import get_user_model, login as auth_login\nfrom main_app.models import Source\nfrom django.views.generic import ListView\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.db.models import Q\nfrom django.core.paginator import Paginator\nfrom django.contrib.auth.views import LogoutView, LoginView\nfrom django.contrib import messages\nfrom extra_views import SearchableListMixin\nfrom django.http import HttpResponseRedirect\nfrom django.core.exceptions import PermissionDenied\n\n\nclass UserDetailView(DetailView):\n \"\"\"Detail view for User model\n\n Accessed by /users/<pk>\n \"\"\"\n\n model = get_user_model()\n context_object_name = \"user\"\n template_name = \"user_detail.html\"\n\n def get_context_data(self, **kwargs):\n user = self.get_object()\n # to begin, if the person viewing the site is not logged in,\n # they should only be able to view the detail pages of indexers,\n # and not the detail pages of run-of-the-mill users\n viewing_user = self.request.user\n if not (viewing_user.is_authenticated or user.is_indexer):\n raise PermissionDenied()\n\n context = super().get_context_data(**kwargs)\n display_unpublished = viewing_user.is_authenticated\n sort_by_siglum = lambda source: source.siglum\n if display_unpublished:\n context[\"inventoried_sources\"] = sorted(\n user.inventoried_sources.all(), key=sort_by_siglum\n )\n context[\"full_text_sources\"] = sorted(\n user.entered_full_text_for_sources.all(), key=sort_by_siglum\n )\n context[\"melody_sources\"] = sorted(\n user.entered_melody_for_sources.all(), key=sort_by_siglum\n )\n context[\"proofread_sources\"] = sorted(\n user.proofread_sources.all(), key=sort_by_siglum\n )\n context[\"edited_sources\"] = sorted(\n user.edited_sources.all(), key=sort_by_siglum\n )\n else:\n context[\"inventoried_sources\"] = sorted(\n user.inventoried_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"full_text_sources\"] = sorted(\n user.entered_full_text_for_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"melody_sources\"] = sorted(\n user.entered_melody_for_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"proofread_sources\"] = sorted(\n user.proofread_sources.all().filter(published=True), key=sort_by_siglum\n )\n context[\"edited_sources\"] = sorted(\n user.edited_sources.all().filter(published=True), key=sort_by_siglum\n )\n\n return context\n\n\nclass UserSourceListView(LoginRequiredMixin, ListView):\n model = Source\n context_object_name = \"sources\"\n template_name = \"user_source_list.html\"\n paginate_by = 100\n\n def get_queryset(self):\n return (\n Source.objects.filter(\n Q(current_editors=self.request.user)\n | Q(created_by=self.request.user)\n # | Q(inventoried_by=self.request.user)\n # | Q(full_text_entered_by=self.request.user)\n # | Q(melodies_entered_by=self.request.user)\n # | Q(proofreaders=self.request.user)\n # | Q(other_editors=self.request.user)\n )\n .order_by(\"-date_created\")\n .distinct()\n )\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n user_created_sources 
= (\n Source.objects.filter(created_by=self.request.user)\n .order_by(\"-date_created\")\n .distinct()\n )\n paginator = Paginator(user_created_sources, 6)\n page_number = self.request.GET.get(\"page2\")\n page_obj = paginator.get_page(page_number)\n\n context[\"user_created_sources_page_obj\"] = page_obj\n return context\n\n\nclass CustomLogoutView(LogoutView):\n def get_next_page(self):\n next_page = super().get_next_page()\n messages.success(self.request, \"You have successfully logged out!\")\n return next_page\n\n\nclass UserListView(LoginRequiredMixin, SearchableListMixin, ListView):\n \"\"\"A list of all User objects\n\n This view is equivalent to the user list view on the old Cantus.\n This includes all User objects on the old Cantus.\n When passed a `?q=<query>` argument in the GET request, it will filter users\n based on the fields defined in `search_fields` with the `icontains` lookup.\n\n Accessed by /users/\n \"\"\"\n\n model = get_user_model()\n ordering = \"full_name\"\n search_fields = [\"full_name\", \"institution\", \"city\", \"country\"]\n paginate_by = 100\n template_name = \"user_list.html\"\n context_object_name = \"users\"\n\n\nclass IndexerListView(SearchableListMixin, ListView):\n \"\"\"A list of User objects shown to the public\n\n This view replaces the indexer list view on the old Cantus.\n The indexers are considered a subset of all User objects, the subset shown to the public.\n This includes the User objects corresponding to Indexer objects on the old Cantus.\n When passed a `?q=<query>` argument in the GET request, it will filter users\n based on the fields defined in `search_fields` with the `icontains` lookup.\n\n Accessed by /indexers/\n \"\"\"\n\n model = get_user_model()\n ordering = \"full_name\"\n search_fields = [\"full_name\", \"institution\", \"city\", \"country\"]\n paginate_by = 100\n template_name = \"indexer_list.html\"\n context_object_name = \"indexers\"\n\n def get_queryset(self):\n all_users = super().get_queryset()\n indexers = all_users.filter(is_indexer=True)\n display_unpublished = self.request.user.is_authenticated\n if display_unpublished:\n indexers = indexers.annotate(source_count=Count(\"inventoried_sources\"))\n # display those who have at least one source\n return indexers.filter(source_count__gte=1)\n else:\n indexers = indexers.annotate(\n source_count=Count(\n \"inventoried_sources\", filter=Q(inventoried_sources__published=True)\n )\n )\n # display those who have at least one published source\n return indexers.filter(source_count__gte=1)\n", "path": "django/cantusdb_project/main_app/views/user.py"}]} |
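The patch above only changes the page size handed to Django's `Paginator` for the secondary "created by" panel. As a minimal sketch of that pattern, assuming a plain Django view and a separate `page2` GET parameter (the helper name and defaults below are illustrative, not from the repository):

```python
# Sketch only: independent pagination for a side panel, alongside a ListView's
# normal pagination, using a separate "page2" query parameter.
from django.core.paginator import Paginator

def paginate_side_panel(context, request, queryset, per_page=6, param="page2"):
    paginator = Paginator(queryset, per_page)      # 6 items per side-panel page
    page_number = request.GET.get(param)           # e.g. ?page2=3
    # get_page() tolerates missing or invalid numbers and falls back to page 1
    context["user_created_sources_page_obj"] = paginator.get_page(page_number)
    return context
```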
gh_patches_debug_1047 | rasdani/github-patches | git_diff | CTFd__CTFd-1101 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CTFd crash right after install
## Git + python 3.7.2 (3b1b82b9a0fbcb8731d7a3a3bbac99499c466c99)
[asciinema recording](https://asciinema.org/a/iHDMVNRWSYJDTZUgIy4wIBsOK)
## Git + python 2.7.15 (3b1b82b9a0fbcb8731d7a3a3bbac99499c466c99)
[asciinema recording](https://asciinema.org/a/JRqfe0rMl0QeZAEklyvV1LQX9)
## 2.1.4 + python 3.7.2
[asciinema recording](https://asciinema.org/a/wErebw8ZN2HFER9P71tLW4FBv)
**Environment**:
- CTFd Version/Commit: see titles
- Operating System: ArchLinux
- Web Browser and Version: Firefox
**What happened?**
CTFd crash right after install.
**What did you expect to happen?**
Working or at least a more comprehensible error.
**How to reproduce your issue**
See the asciinema, but basically
- Download CTFd
- Install dependencies
- run flask
- crash
PS : with the untouched default config
**Any associated stack traces or error logs**
See the asciinema
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wsgi.py`
Content:
```
1 from gevent import monkey
2 monkey.patch_all()
3 from CTFd import create_app
4
5 app = create_app()
6
7 if __name__ == '__main__':
8 app.run(debug=True, threaded=True, host="127.0.0.1", port=4000)
9
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wsgi.py b/wsgi.py
--- a/wsgi.py
+++ b/wsgi.py
@@ -1,8 +1,14 @@
-from gevent import monkey
-monkey.patch_all()
+import os
+
+# Detect if we're running via `flask run` and don't monkey patch
+if not os.getenv("FLASK_RUN_FROM_CLI"):
+ from gevent import monkey
+
+ monkey.patch_all()
+
from CTFd import create_app
app = create_app()
-if __name__ == '__main__':
+if __name__ == "__main__":
app.run(debug=True, threaded=True, host="127.0.0.1", port=4000)
| {"golden_diff": "diff --git a/wsgi.py b/wsgi.py\n--- a/wsgi.py\n+++ b/wsgi.py\n@@ -1,8 +1,14 @@\n-from gevent import monkey\n-monkey.patch_all()\n+import os\n+\n+# Detect if we're running via `flask run` and don't monkey patch\n+if not os.getenv(\"FLASK_RUN_FROM_CLI\"):\n+ from gevent import monkey\n+\n+ monkey.patch_all()\n+\n from CTFd import create_app\n \n app = create_app()\n \n-if __name__ == '__main__':\n+if __name__ == \"__main__\":\n app.run(debug=True, threaded=True, host=\"127.0.0.1\", port=4000)\n", "issue": "CTFd crash right after install\n## Git + python 3.7.2 (3b1b82b9a0fbcb8731d7a3a3bbac99499c466c99)\r\n\r\n[](https://asciinema.org/a/iHDMVNRWSYJDTZUgIy4wIBsOK)\r\n\r\n## Git + python 2.7.15 (3b1b82b9a0fbcb8731d7a3a3bbac99499c466c99)\r\n\r\n[](https://asciinema.org/a/JRqfe0rMl0QeZAEklyvV1LQX9)\r\n\r\n## 2.1.4 + python 3.7.2\r\n\r\n[](https://asciinema.org/a/wErebw8ZN2HFER9P71tLW4FBv)\r\n\r\n**Environment**:\r\n\r\n - CTFd Version/Commit: see titles\r\n - Operating System: ArchLinux\r\n - Web Browser and Version: Firefox\r\n\r\n**What happened?**\r\n\r\nCTFd crash right after install.\r\n\r\n**What did you expect to happen?**\r\n\r\nWorking or at least a more comprehensible error.\r\n\r\n**How to reproduce your issue**\r\n\r\nSee the asciinema, but basically\r\n\r\n- Download CTFd\r\n- Install dependencies\r\n- run flask\r\n- crash\r\n\r\nPS : with the untouched default config\r\n\r\n**Any associated stack traces or error logs**\r\n\r\nSee the asciinema\n", "before_files": [{"content": "from gevent import monkey\nmonkey.patch_all()\nfrom CTFd import create_app\n\napp = create_app()\n\nif __name__ == '__main__':\n app.run(debug=True, threaded=True, host=\"127.0.0.1\", port=4000)\n", "path": "wsgi.py"}], "after_files": [{"content": "import os\n\n# Detect if we're running via `flask run` and don't monkey patch\nif not os.getenv(\"FLASK_RUN_FROM_CLI\"):\n from gevent import monkey\n\n monkey.patch_all()\n\nfrom CTFd import create_app\n\napp = create_app()\n\nif __name__ == \"__main__\":\n app.run(debug=True, threaded=True, host=\"127.0.0.1\", port=4000)\n", "path": "wsgi.py"}]} |
gh_patches_debug_1048 | rasdani/github-patches | git_diff | keras-team__keras-2268 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Textual information for labels?
I seem unable to use text for labels, whilst using to_categorical
```
Using Theano backend.
Traceback (most recent call last):
File "playground.py", line 88, in <module>
train_model_and_test(number_of_epochs, number_of_classes, train_data, train_label, augmented_data_generator)
File "playground.py", line 62, in train_model_and_test
train_label = np_utils.to_categorical(train_label, number_of_classes)
File "/usr/local/lib/python2.7/dist-packages/keras/utils/np_utils.py", line 12, in to_categorical
y = np.asarray(y, dtype='int32')
File "/usr/lib/python2.7/dist-packages/numpy/core/numeric.py", line 460, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: invalid literal for int() with base 10: 'yellow'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras/utils/np_utils.py`
Content:
```
1 from __future__ import absolute_import
2 import numpy as np
3 import scipy as sp
4 from six.moves import range
5 from six.moves import zip
6
7
8 def to_categorical(y, nb_classes=None):
9 '''Convert class vector (integers from 0 to nb_classes)
10 to binary class matrix, for use with categorical_crossentropy.
11 '''
12 y = np.asarray(y, dtype='int32')
13 if not nb_classes:
14 nb_classes = np.max(y)+1
15 Y = np.zeros((len(y), nb_classes))
16 for i in range(len(y)):
17 Y[i, y[i]] = 1.
18 return Y
19
20
21 def normalize(a, axis=-1, order=2):
22 l2 = np.atleast_1d(np.linalg.norm(a, order, axis))
23 l2[l2 == 0] = 1
24 return a / np.expand_dims(l2, axis)
25
26
27 def binary_logloss(p, y):
28 epsilon = 1e-15
29 p = sp.maximum(epsilon, p)
30 p = sp.minimum(1-epsilon, p)
31 res = sum(y * sp.log(p) + sp.subtract(1, y) * sp.log(sp.subtract(1, p)))
32 res *= -1.0/len(y)
33 return res
34
35
36 def multiclass_logloss(P, Y):
37 npreds = [P[i][Y[i]-1] for i in range(len(Y))]
38 score = -(1. / len(Y)) * np.sum(np.log(npreds))
39 return score
40
41
42 def accuracy(p, y):
43 return np.mean([a == b for a, b in zip(p, y)])
44
45
46 def probas_to_classes(y_pred):
47 if len(y_pred.shape) > 1 and y_pred.shape[1] > 1:
48 return categorical_probas_to_classes(y_pred)
49 return np.array([1 if p > 0.5 else 0 for p in y_pred])
50
51
52 def categorical_probas_to_classes(p):
53 return np.argmax(p, axis=1)
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/keras/utils/np_utils.py b/keras/utils/np_utils.py
--- a/keras/utils/np_utils.py
+++ b/keras/utils/np_utils.py
@@ -9,7 +9,6 @@
'''Convert class vector (integers from 0 to nb_classes)
to binary class matrix, for use with categorical_crossentropy.
'''
- y = np.asarray(y, dtype='int32')
if not nb_classes:
nb_classes = np.max(y)+1
Y = np.zeros((len(y), nb_classes))
| {"golden_diff": "diff --git a/keras/utils/np_utils.py b/keras/utils/np_utils.py\n--- a/keras/utils/np_utils.py\n+++ b/keras/utils/np_utils.py\n@@ -9,7 +9,6 @@\n '''Convert class vector (integers from 0 to nb_classes)\n to binary class matrix, for use with categorical_crossentropy.\n '''\n- y = np.asarray(y, dtype='int32')\n if not nb_classes:\n nb_classes = np.max(y)+1\n Y = np.zeros((len(y), nb_classes))\n", "issue": "Textual information for labels?\nI seem unable to use text for labels, whilst using to_categorical\n\n```\nUsing Theano backend.\nTraceback (most recent call last):\n File \"playground.py\", line 88, in <module>\n train_model_and_test(number_of_epochs, number_of_classes, train_data, train_label, augmented_data_generator)\n File \"playground.py\", line 62, in train_model_and_test\n train_label = np_utils.to_categorical(train_label, number_of_classes)\n File \"/usr/local/lib/python2.7/dist-packages/keras/utils/np_utils.py\", line 12, in to_categorical\n y = np.asarray(y, dtype='int32')\n File \"/usr/lib/python2.7/dist-packages/numpy/core/numeric.py\", line 460, in asarray\n return array(a, dtype, copy=False, order=order)\nValueError: invalid literal for int() with base 10: 'yellow'\n```\n\n", "before_files": [{"content": "from __future__ import absolute_import\nimport numpy as np\nimport scipy as sp\nfrom six.moves import range\nfrom six.moves import zip\n\n\ndef to_categorical(y, nb_classes=None):\n '''Convert class vector (integers from 0 to nb_classes)\n to binary class matrix, for use with categorical_crossentropy.\n '''\n y = np.asarray(y, dtype='int32')\n if not nb_classes:\n nb_classes = np.max(y)+1\n Y = np.zeros((len(y), nb_classes))\n for i in range(len(y)):\n Y[i, y[i]] = 1.\n return Y\n\n\ndef normalize(a, axis=-1, order=2):\n l2 = np.atleast_1d(np.linalg.norm(a, order, axis))\n l2[l2 == 0] = 1\n return a / np.expand_dims(l2, axis)\n\n\ndef binary_logloss(p, y):\n epsilon = 1e-15\n p = sp.maximum(epsilon, p)\n p = sp.minimum(1-epsilon, p)\n res = sum(y * sp.log(p) + sp.subtract(1, y) * sp.log(sp.subtract(1, p)))\n res *= -1.0/len(y)\n return res\n\n\ndef multiclass_logloss(P, Y):\n npreds = [P[i][Y[i]-1] for i in range(len(Y))]\n score = -(1. 
/ len(Y)) * np.sum(np.log(npreds))\n return score\n\n\ndef accuracy(p, y):\n return np.mean([a == b for a, b in zip(p, y)])\n\n\ndef probas_to_classes(y_pred):\n if len(y_pred.shape) > 1 and y_pred.shape[1] > 1:\n return categorical_probas_to_classes(y_pred)\n return np.array([1 if p > 0.5 else 0 for p in y_pred])\n\n\ndef categorical_probas_to_classes(p):\n return np.argmax(p, axis=1)\n", "path": "keras/utils/np_utils.py"}], "after_files": [{"content": "from __future__ import absolute_import\nimport numpy as np\nimport scipy as sp\nfrom six.moves import range\nfrom six.moves import zip\n\n\ndef to_categorical(y, nb_classes=None):\n '''Convert class vector (integers from 0 to nb_classes)\n to binary class matrix, for use with categorical_crossentropy.\n '''\n if not nb_classes:\n nb_classes = np.max(y)+1\n Y = np.zeros((len(y), nb_classes))\n for i in range(len(y)):\n Y[i, y[i]] = 1.\n return Y\n\n\ndef normalize(a, axis=-1, order=2):\n l2 = np.atleast_1d(np.linalg.norm(a, order, axis))\n l2[l2 == 0] = 1\n return a / np.expand_dims(l2, axis)\n\n\ndef binary_logloss(p, y):\n epsilon = 1e-15\n p = sp.maximum(epsilon, p)\n p = sp.minimum(1-epsilon, p)\n res = sum(y * sp.log(p) + sp.subtract(1, y) * sp.log(sp.subtract(1, p)))\n res *= -1.0/len(y)\n return res\n\n\ndef multiclass_logloss(P, Y):\n npreds = [P[i][Y[i]-1] for i in range(len(Y))]\n score = -(1. / len(Y)) * np.sum(np.log(npreds))\n return score\n\n\ndef accuracy(p, y):\n return np.mean([a == b for a, b in zip(p, y)])\n\n\ndef probas_to_classes(y_pred):\n if len(y_pred.shape) > 1 and y_pred.shape[1] > 1:\n return categorical_probas_to_classes(y_pred)\n return np.array([1 if p > 0.5 else 0 for p in y_pred])\n\n\ndef categorical_probas_to_classes(p):\n return np.argmax(p, axis=1)\n", "path": "keras/utils/np_utils.py"}]} |
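Removing the `int32` cast lets non-integer class vectors through `to_categorical`, but string labels such as `'yellow'` still need to be mapped to integer class indices before one-hot encoding. A small, self-contained sketch of that preprocessing step in plain NumPy (the helper name and sample labels are invented for illustration):

```python
import numpy as np

def encode_labels(labels):
    """Map string labels to integer class indices with a stable ordering."""
    classes = sorted(set(labels))
    index = {name: i for i, name in enumerate(classes)}
    return np.array([index[name] for name in labels]), classes

labels = ["yellow", "red", "yellow", "blue"]
y, classes = encode_labels(labels)   # y -> array([2, 1, 2, 0]), classes -> ['blue', 'red', 'yellow']

# Equivalent of to_categorical(y, nb_classes=len(classes)) from the file above.
one_hot = np.eye(len(classes))[y]
```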
gh_patches_debug_1049 | rasdani/github-patches | git_diff | geopandas__geopandas-1566 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUG: the clip function doesn't dynamically take the geometry column name
Hi, today I noticed that the clip function didn't work for me, but then I found what the problem was.
Import GeoDataFrames with the geometry column called "WKT":
```
field_map = pd.read_csv('./field_map.csv')
field_map['WKT'] = field_map['WKT'].apply(wkt.loads)
field_map = gpd.GeoDataFrame(field_map, geometry = 'WKT', crs = {'init': 'epsg:4326'})
print(field_map.columns)
boundary_map = pd.read_csv('./boundary_map.csv')
boundary_map['WKT'] = boundary_map['WKT'].apply(wkt.loads)
boundary_map = gpd.GeoDataFrame(boundary_map, geometry = 'WKT', crs = {'init': 'epsg:4326'})
print(boundary_map.columns)
> Index(['Unnamed: 0', 'IDX', 'Value', 'WKT', 'WKTTypeID', 'IDXmaster'], dtype='object')
> Index(['Unnamed: 0', 'WKT'], dtype='object')
```
Clip the map and plot to validate:
```
clip_map = gpd.clip(field_map, boundary_map)
fig, ax = plt.subplots(figsize=(10,10))
clip_map.plot(ax=ax)
boundary_map.geometry.boundary.plot(ax=ax, color='red')
```

It seems that the clip has not worked, but if we look at the list of clip_map columns we see both "WKT" and "geometry".

**SOLUTION:**
This worked for me, renaming the geometry column as "geometry"
```
field_map = field_map.rename_geometry('geometry')
boundary_map = boundary_map.rename_geometry('geometry')
clip_map = gpd.clip(field_map, boundary_map)
fig, ax = plt.subplots(figsize=(10,10))
clip_map.plot(ax=ax)
boundary_map.geometry.boundary.plot(ax=ax, color='red')
```

The clip function now works correctly.
Regards
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geopandas/tools/clip.py`
Content:
```
1 """
2 geopandas.clip
3 ==============
4
5 A module to clip vector data using GeoPandas.
6
7 """
8 import warnings
9
10 import numpy as np
11 import pandas as pd
12
13 from shapely.geometry import Polygon, MultiPolygon
14
15 from geopandas import GeoDataFrame, GeoSeries
16 from geopandas.array import _check_crs, _crs_mismatch_warn
17
18
19 def _clip_points(gdf, poly):
20 """Clip point geometry to the polygon extent.
21
22 Clip an input point GeoDataFrame to the polygon extent of the poly
23 parameter. Points that intersect the poly geometry are extracted with
24 associated attributes and returned.
25
26 Parameters
27 ----------
28 gdf : GeoDataFrame, GeoSeries
29 Composed of point geometry that will be clipped to the poly.
30
31 poly : (Multi)Polygon
32 Reference geometry used to spatially clip the data.
33
34 Returns
35 -------
36 GeoDataFrame
37 The returned GeoDataFrame is a subset of gdf that intersects
38 with poly.
39 """
40 return gdf.iloc[gdf.sindex.query(poly, predicate="intersects")]
41
42
43 def _clip_line_poly(gdf, poly):
44 """Clip line and polygon geometry to the polygon extent.
45
46 Clip an input line or polygon to the polygon extent of the poly
47 parameter. Parts of Lines or Polygons that intersect the poly geometry are
48 extracted with associated attributes and returned.
49
50 Parameters
51 ----------
52 gdf : GeoDataFrame, GeoSeries
53 Line or polygon geometry that is clipped to poly.
54
55 poly : (Multi)Polygon
56 Reference polygon for clipping.
57
58 Returns
59 -------
60 GeoDataFrame
61 The returned GeoDataFrame is a clipped subset of gdf
62 that intersects with poly.
63 """
64 gdf_sub = gdf.iloc[gdf.sindex.query(poly, predicate="intersects")]
65
66 # Clip the data with the polygon
67 if isinstance(gdf_sub, GeoDataFrame):
68 clipped = gdf_sub.copy()
69 clipped["geometry"] = gdf_sub.intersection(poly)
70 else:
71 # GeoSeries
72 clipped = gdf_sub.intersection(poly)
73
74 return clipped
75
76
77 def clip(gdf, mask, keep_geom_type=False):
78 """Clip points, lines, or polygon geometries to the mask extent.
79
80 Both layers must be in the same Coordinate Reference System (CRS).
81 The `gdf` will be clipped to the full extent of the clip object.
82
83 If there are multiple polygons in mask, data from `gdf` will be
84 clipped to the total boundary of all polygons in mask.
85
86 Parameters
87 ----------
88 gdf : GeoDataFrame or GeoSeries
89 Vector layer (point, line, polygon) to be clipped to mask.
90 mask : GeoDataFrame, GeoSeries, (Multi)Polygon
91 Polygon vector layer used to clip `gdf`.
92 The mask's geometry is dissolved into one geometric feature
93 and intersected with `gdf`.
94 keep_geom_type : boolean, default False
95 If True, return only geometries of original type in case of intersection
96 resulting in multiple geometry types or GeometryCollections.
97 If False, return all resulting geometries (potentially mixed-types).
98
99 Returns
100 -------
101 GeoDataFrame or GeoSeries
102 Vector data (points, lines, polygons) from `gdf` clipped to
103 polygon boundary from mask.
104
105 Examples
106 --------
107 Clip points (global cities) with a polygon (the South American continent):
108
109 >>> import geopandas
110 >>> path =
111 >>> world = geopandas.read_file(
112 ... geopandas.datasets.get_path('naturalearth_lowres'))
113 >>> south_america = world[world['continent'] == "South America"]
114 >>> capitals = geopandas.read_file(
115 ... geopandas.datasets.get_path('naturalearth_cities'))
116 >>> capitals.shape
117 (202, 2)
118 >>> sa_capitals = geopandas.clip(capitals, south_america)
119 >>> sa_capitals.shape
120 (12, 2)
121 """
122 if not isinstance(gdf, (GeoDataFrame, GeoSeries)):
123 raise TypeError(
124 "'gdf' should be GeoDataFrame or GeoSeries, got {}".format(type(gdf))
125 )
126
127 if not isinstance(mask, (GeoDataFrame, GeoSeries, Polygon, MultiPolygon)):
128 raise TypeError(
129 "'mask' should be GeoDataFrame, GeoSeries or"
130 "(Multi)Polygon, got {}".format(type(gdf))
131 )
132
133 if isinstance(mask, (GeoDataFrame, GeoSeries)):
134 if not _check_crs(gdf, mask):
135 _crs_mismatch_warn(gdf, mask, stacklevel=3)
136
137 if isinstance(mask, (GeoDataFrame, GeoSeries)):
138 box_mask = mask.total_bounds
139 else:
140 box_mask = mask.bounds
141 box_gdf = gdf.total_bounds
142 if not (
143 ((box_mask[0] <= box_gdf[2]) and (box_gdf[0] <= box_mask[2]))
144 and ((box_mask[1] <= box_gdf[3]) and (box_gdf[1] <= box_mask[3]))
145 ):
146 return gdf.iloc[:0]
147
148 if isinstance(mask, (GeoDataFrame, GeoSeries)):
149 poly = mask.geometry.unary_union
150 else:
151 poly = mask
152
153 geom_types = gdf.geometry.type
154 poly_idx = np.asarray((geom_types == "Polygon") | (geom_types == "MultiPolygon"))
155 line_idx = np.asarray(
156 (geom_types == "LineString")
157 | (geom_types == "LinearRing")
158 | (geom_types == "MultiLineString")
159 )
160 point_idx = np.asarray((geom_types == "Point") | (geom_types == "MultiPoint"))
161 geomcoll_idx = np.asarray((geom_types == "GeometryCollection"))
162
163 if point_idx.any():
164 point_gdf = _clip_points(gdf[point_idx], poly)
165 else:
166 point_gdf = None
167
168 if poly_idx.any():
169 poly_gdf = _clip_line_poly(gdf[poly_idx], poly)
170 else:
171 poly_gdf = None
172
173 if line_idx.any():
174 line_gdf = _clip_line_poly(gdf[line_idx], poly)
175 else:
176 line_gdf = None
177
178 if geomcoll_idx.any():
179 geomcoll_gdf = _clip_line_poly(gdf[geomcoll_idx], poly)
180 else:
181 geomcoll_gdf = None
182
183 order = pd.Series(range(len(gdf)), index=gdf.index)
184 concat = pd.concat([point_gdf, line_gdf, poly_gdf, geomcoll_gdf])
185
186 if keep_geom_type:
187 geomcoll_concat = (concat.geom_type == "GeometryCollection").any()
188 geomcoll_orig = geomcoll_idx.any()
189
190 new_collection = geomcoll_concat and not geomcoll_orig
191
192 if geomcoll_orig:
193 warnings.warn(
194 "keep_geom_type can not be called on a "
195 "GeoDataFrame with GeometryCollection."
196 )
197 else:
198 polys = ["Polygon", "MultiPolygon"]
199 lines = ["LineString", "MultiLineString", "LinearRing"]
200 points = ["Point", "MultiPoint"]
201
202 # Check that the gdf for multiple geom types (points, lines and/or polys)
203 orig_types_total = sum(
204 [
205 gdf.geom_type.isin(polys).any(),
206 gdf.geom_type.isin(lines).any(),
207 gdf.geom_type.isin(points).any(),
208 ]
209 )
210
211 # Check how many geometry types are in the clipped GeoDataFrame
212 clip_types_total = sum(
213 [
214 concat.geom_type.isin(polys).any(),
215 concat.geom_type.isin(lines).any(),
216 concat.geom_type.isin(points).any(),
217 ]
218 )
219
220 # Check there aren't any new geom types in the clipped GeoDataFrame
221 more_types = orig_types_total < clip_types_total
222
223 if orig_types_total > 1:
224 warnings.warn(
225 "keep_geom_type can not be called on a mixed type GeoDataFrame."
226 )
227 elif new_collection or more_types:
228 orig_type = gdf.geom_type.iloc[0]
229 if new_collection:
230 concat = concat.explode()
231 if orig_type in polys:
232 concat = concat.loc[concat.geom_type.isin(polys)]
233 elif orig_type in lines:
234 concat = concat.loc[concat.geom_type.isin(lines)]
235
236 # Return empty GeoDataFrame or GeoSeries if no shapes remain
237 if len(concat) == 0:
238 return gdf.iloc[:0]
239
240 # Preserve the original order of the input
241 if isinstance(concat, GeoDataFrame):
242 concat["_order"] = order
243 return concat.sort_values(by="_order").drop(columns="_order")
244 else:
245 concat = GeoDataFrame(geometry=concat)
246 concat["_order"] = order
247 return concat.sort_values(by="_order").geometry
248
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geopandas/tools/clip.py b/geopandas/tools/clip.py
--- a/geopandas/tools/clip.py
+++ b/geopandas/tools/clip.py
@@ -66,7 +66,7 @@
# Clip the data with the polygon
if isinstance(gdf_sub, GeoDataFrame):
clipped = gdf_sub.copy()
- clipped["geometry"] = gdf_sub.intersection(poly)
+ clipped[gdf.geometry.name] = gdf_sub.intersection(poly)
else:
# GeoSeries
clipped = gdf_sub.intersection(poly)
| {"golden_diff": "diff --git a/geopandas/tools/clip.py b/geopandas/tools/clip.py\n--- a/geopandas/tools/clip.py\n+++ b/geopandas/tools/clip.py\n@@ -66,7 +66,7 @@\n # Clip the data with the polygon\n if isinstance(gdf_sub, GeoDataFrame):\n clipped = gdf_sub.copy()\n- clipped[\"geometry\"] = gdf_sub.intersection(poly)\n+ clipped[gdf.geometry.name] = gdf_sub.intersection(poly)\n else:\n # GeoSeries\n clipped = gdf_sub.intersection(poly)\n", "issue": "BUG: the clip function don't take dynamically the geometry column name \nHi, today i noticed that the clip function didn't work for me but then i found which was the problem.\r\n\r\nImport geodataframes with geometry called \"WKT\":\r\n```\r\nfield_map = pd.read_csv('./field_map.csv')\r\nfield_map['WKT'] = field_map['WKT'].apply(wkt.loads)\r\nfield_map = gpd.GeoDataFrame(field_map, geometry = 'WKT', crs = {'init': 'epsg:4326'})\r\n\r\nprint(field_map.columns)\r\n\r\nboundary_map = pd.read_csv('./boundary_map.csv')\r\nboundary_map['WKT'] = boundary_map['WKT'].apply(wkt.loads)\r\nboundary_map = gpd.GeoDataFrame(boundary_map, geometry = 'WKT', crs = {'init': 'epsg:4326'})\r\n\r\nprint(boundary_map.columns)\r\n\r\n> Index(['Unnamed: 0', 'IDX', 'Value', 'WKT', 'WKTTypeID', 'IDXmaster'], dtype='object')\r\n> Index(['Unnamed: 0', 'WKT'], dtype='object')\r\n\r\n```\r\nClip the map and plot to validate:\r\n```\r\nclip_map = gpd.clip(field_map, boundary_map)\r\n\r\nfig, ax = plt.subplots(figsize=(10,10))\r\nclip_map.plot(ax=ax)\r\nboundary_map.geometry.boundary.plot(ax=ax, color='red')\r\n```\r\n\r\n\r\nit seems that the clip has not worked but if we look at the of clip_map columns we see \"WKT\" and \"geometry\"\r\n\r\n\r\n\r\n\r\n**SOLUTION:**\r\nThis worked for me, renaming the geometry column as \"geometry\"\r\n\r\n```\r\nfield_map = field_map.rename_geometry('geometry')\r\nboundary_map = boundary_map.rename_geometry('geometry')\r\n\r\nclip_map = gpd.clip(field_map, boundary_map)\r\n\r\nfig, ax = plt.subplots(figsize=(10,10))\r\nclip_map.plot(ax=ax)\r\nboundary_map.geometry.boundary.plot(ax=ax, color='red')\r\n```\r\n\r\n\r\n\r\nThe clip function now work correctly\r\nRegards\r\n\r\n\n", "before_files": [{"content": "\"\"\"\ngeopandas.clip\n==============\n\nA module to clip vector data using GeoPandas.\n\n\"\"\"\nimport warnings\n\nimport numpy as np\nimport pandas as pd\n\nfrom shapely.geometry import Polygon, MultiPolygon\n\nfrom geopandas import GeoDataFrame, GeoSeries\nfrom geopandas.array import _check_crs, _crs_mismatch_warn\n\n\ndef _clip_points(gdf, poly):\n \"\"\"Clip point geometry to the polygon extent.\n\n Clip an input point GeoDataFrame to the polygon extent of the poly\n parameter. Points that intersect the poly geometry are extracted with\n associated attributes and returned.\n\n Parameters\n ----------\n gdf : GeoDataFrame, GeoSeries\n Composed of point geometry that will be clipped to the poly.\n\n poly : (Multi)Polygon\n Reference geometry used to spatially clip the data.\n\n Returns\n -------\n GeoDataFrame\n The returned GeoDataFrame is a subset of gdf that intersects\n with poly.\n \"\"\"\n return gdf.iloc[gdf.sindex.query(poly, predicate=\"intersects\")]\n\n\ndef _clip_line_poly(gdf, poly):\n \"\"\"Clip line and polygon geometry to the polygon extent.\n\n Clip an input line or polygon to the polygon extent of the poly\n parameter. 
Parts of Lines or Polygons that intersect the poly geometry are\n extracted with associated attributes and returned.\n\n Parameters\n ----------\n gdf : GeoDataFrame, GeoSeries\n Line or polygon geometry that is clipped to poly.\n\n poly : (Multi)Polygon\n Reference polygon for clipping.\n\n Returns\n -------\n GeoDataFrame\n The returned GeoDataFrame is a clipped subset of gdf\n that intersects with poly.\n \"\"\"\n gdf_sub = gdf.iloc[gdf.sindex.query(poly, predicate=\"intersects\")]\n\n # Clip the data with the polygon\n if isinstance(gdf_sub, GeoDataFrame):\n clipped = gdf_sub.copy()\n clipped[\"geometry\"] = gdf_sub.intersection(poly)\n else:\n # GeoSeries\n clipped = gdf_sub.intersection(poly)\n\n return clipped\n\n\ndef clip(gdf, mask, keep_geom_type=False):\n \"\"\"Clip points, lines, or polygon geometries to the mask extent.\n\n Both layers must be in the same Coordinate Reference System (CRS).\n The `gdf` will be clipped to the full extent of the clip object.\n\n If there are multiple polygons in mask, data from `gdf` will be\n clipped to the total boundary of all polygons in mask.\n\n Parameters\n ----------\n gdf : GeoDataFrame or GeoSeries\n Vector layer (point, line, polygon) to be clipped to mask.\n mask : GeoDataFrame, GeoSeries, (Multi)Polygon\n Polygon vector layer used to clip `gdf`.\n The mask's geometry is dissolved into one geometric feature\n and intersected with `gdf`.\n keep_geom_type : boolean, default False\n If True, return only geometries of original type in case of intersection\n resulting in multiple geometry types or GeometryCollections.\n If False, return all resulting geometries (potentially mixed-types).\n\n Returns\n -------\n GeoDataFrame or GeoSeries\n Vector data (points, lines, polygons) from `gdf` clipped to\n polygon boundary from mask.\n\n Examples\n --------\n Clip points (global cities) with a polygon (the South American continent):\n\n >>> import geopandas\n >>> path =\n >>> world = geopandas.read_file(\n ... geopandas.datasets.get_path('naturalearth_lowres'))\n >>> south_america = world[world['continent'] == \"South America\"]\n >>> capitals = geopandas.read_file(\n ... 
geopandas.datasets.get_path('naturalearth_cities'))\n >>> capitals.shape\n (202, 2)\n >>> sa_capitals = geopandas.clip(capitals, south_america)\n >>> sa_capitals.shape\n (12, 2)\n \"\"\"\n if not isinstance(gdf, (GeoDataFrame, GeoSeries)):\n raise TypeError(\n \"'gdf' should be GeoDataFrame or GeoSeries, got {}\".format(type(gdf))\n )\n\n if not isinstance(mask, (GeoDataFrame, GeoSeries, Polygon, MultiPolygon)):\n raise TypeError(\n \"'mask' should be GeoDataFrame, GeoSeries or\"\n \"(Multi)Polygon, got {}\".format(type(gdf))\n )\n\n if isinstance(mask, (GeoDataFrame, GeoSeries)):\n if not _check_crs(gdf, mask):\n _crs_mismatch_warn(gdf, mask, stacklevel=3)\n\n if isinstance(mask, (GeoDataFrame, GeoSeries)):\n box_mask = mask.total_bounds\n else:\n box_mask = mask.bounds\n box_gdf = gdf.total_bounds\n if not (\n ((box_mask[0] <= box_gdf[2]) and (box_gdf[0] <= box_mask[2]))\n and ((box_mask[1] <= box_gdf[3]) and (box_gdf[1] <= box_mask[3]))\n ):\n return gdf.iloc[:0]\n\n if isinstance(mask, (GeoDataFrame, GeoSeries)):\n poly = mask.geometry.unary_union\n else:\n poly = mask\n\n geom_types = gdf.geometry.type\n poly_idx = np.asarray((geom_types == \"Polygon\") | (geom_types == \"MultiPolygon\"))\n line_idx = np.asarray(\n (geom_types == \"LineString\")\n | (geom_types == \"LinearRing\")\n | (geom_types == \"MultiLineString\")\n )\n point_idx = np.asarray((geom_types == \"Point\") | (geom_types == \"MultiPoint\"))\n geomcoll_idx = np.asarray((geom_types == \"GeometryCollection\"))\n\n if point_idx.any():\n point_gdf = _clip_points(gdf[point_idx], poly)\n else:\n point_gdf = None\n\n if poly_idx.any():\n poly_gdf = _clip_line_poly(gdf[poly_idx], poly)\n else:\n poly_gdf = None\n\n if line_idx.any():\n line_gdf = _clip_line_poly(gdf[line_idx], poly)\n else:\n line_gdf = None\n\n if geomcoll_idx.any():\n geomcoll_gdf = _clip_line_poly(gdf[geomcoll_idx], poly)\n else:\n geomcoll_gdf = None\n\n order = pd.Series(range(len(gdf)), index=gdf.index)\n concat = pd.concat([point_gdf, line_gdf, poly_gdf, geomcoll_gdf])\n\n if keep_geom_type:\n geomcoll_concat = (concat.geom_type == \"GeometryCollection\").any()\n geomcoll_orig = geomcoll_idx.any()\n\n new_collection = geomcoll_concat and not geomcoll_orig\n\n if geomcoll_orig:\n warnings.warn(\n \"keep_geom_type can not be called on a \"\n \"GeoDataFrame with GeometryCollection.\"\n )\n else:\n polys = [\"Polygon\", \"MultiPolygon\"]\n lines = [\"LineString\", \"MultiLineString\", \"LinearRing\"]\n points = [\"Point\", \"MultiPoint\"]\n\n # Check that the gdf for multiple geom types (points, lines and/or polys)\n orig_types_total = sum(\n [\n gdf.geom_type.isin(polys).any(),\n gdf.geom_type.isin(lines).any(),\n gdf.geom_type.isin(points).any(),\n ]\n )\n\n # Check how many geometry types are in the clipped GeoDataFrame\n clip_types_total = sum(\n [\n concat.geom_type.isin(polys).any(),\n concat.geom_type.isin(lines).any(),\n concat.geom_type.isin(points).any(),\n ]\n )\n\n # Check there aren't any new geom types in the clipped GeoDataFrame\n more_types = orig_types_total < clip_types_total\n\n if orig_types_total > 1:\n warnings.warn(\n \"keep_geom_type can not be called on a mixed type GeoDataFrame.\"\n )\n elif new_collection or more_types:\n orig_type = gdf.geom_type.iloc[0]\n if new_collection:\n concat = concat.explode()\n if orig_type in polys:\n concat = concat.loc[concat.geom_type.isin(polys)]\n elif orig_type in lines:\n concat = concat.loc[concat.geom_type.isin(lines)]\n\n # Return empty GeoDataFrame or GeoSeries if no shapes remain\n if 
len(concat) == 0:\n return gdf.iloc[:0]\n\n # Preserve the original order of the input\n if isinstance(concat, GeoDataFrame):\n concat[\"_order\"] = order\n return concat.sort_values(by=\"_order\").drop(columns=\"_order\")\n else:\n concat = GeoDataFrame(geometry=concat)\n concat[\"_order\"] = order\n return concat.sort_values(by=\"_order\").geometry\n", "path": "geopandas/tools/clip.py"}], "after_files": [{"content": "\"\"\"\ngeopandas.clip\n==============\n\nA module to clip vector data using GeoPandas.\n\n\"\"\"\nimport warnings\n\nimport numpy as np\nimport pandas as pd\n\nfrom shapely.geometry import Polygon, MultiPolygon\n\nfrom geopandas import GeoDataFrame, GeoSeries\nfrom geopandas.array import _check_crs, _crs_mismatch_warn\n\n\ndef _clip_points(gdf, poly):\n \"\"\"Clip point geometry to the polygon extent.\n\n Clip an input point GeoDataFrame to the polygon extent of the poly\n parameter. Points that intersect the poly geometry are extracted with\n associated attributes and returned.\n\n Parameters\n ----------\n gdf : GeoDataFrame, GeoSeries\n Composed of point geometry that will be clipped to the poly.\n\n poly : (Multi)Polygon\n Reference geometry used to spatially clip the data.\n\n Returns\n -------\n GeoDataFrame\n The returned GeoDataFrame is a subset of gdf that intersects\n with poly.\n \"\"\"\n return gdf.iloc[gdf.sindex.query(poly, predicate=\"intersects\")]\n\n\ndef _clip_line_poly(gdf, poly):\n \"\"\"Clip line and polygon geometry to the polygon extent.\n\n Clip an input line or polygon to the polygon extent of the poly\n parameter. Parts of Lines or Polygons that intersect the poly geometry are\n extracted with associated attributes and returned.\n\n Parameters\n ----------\n gdf : GeoDataFrame, GeoSeries\n Line or polygon geometry that is clipped to poly.\n\n poly : (Multi)Polygon\n Reference polygon for clipping.\n\n Returns\n -------\n GeoDataFrame\n The returned GeoDataFrame is a clipped subset of gdf\n that intersects with poly.\n \"\"\"\n gdf_sub = gdf.iloc[gdf.sindex.query(poly, predicate=\"intersects\")]\n\n # Clip the data with the polygon\n if isinstance(gdf_sub, GeoDataFrame):\n clipped = gdf_sub.copy()\n clipped[gdf.geometry.name] = gdf_sub.intersection(poly)\n else:\n # GeoSeries\n clipped = gdf_sub.intersection(poly)\n\n return clipped\n\n\ndef clip(gdf, mask, keep_geom_type=False):\n \"\"\"Clip points, lines, or polygon geometries to the mask extent.\n\n Both layers must be in the same Coordinate Reference System (CRS).\n The `gdf` will be clipped to the full extent of the clip object.\n\n If there are multiple polygons in mask, data from `gdf` will be\n clipped to the total boundary of all polygons in mask.\n\n Parameters\n ----------\n gdf : GeoDataFrame or GeoSeries\n Vector layer (point, line, polygon) to be clipped to mask.\n mask : GeoDataFrame, GeoSeries, (Multi)Polygon\n Polygon vector layer used to clip `gdf`.\n The mask's geometry is dissolved into one geometric feature\n and intersected with `gdf`.\n keep_geom_type : boolean, default False\n If True, return only geometries of original type in case of intersection\n resulting in multiple geometry types or GeometryCollections.\n If False, return all resulting geometries (potentially mixed-types).\n\n Returns\n -------\n GeoDataFrame or GeoSeries\n Vector data (points, lines, polygons) from `gdf` clipped to\n polygon boundary from mask.\n\n Examples\n --------\n Clip points (global cities) with a polygon (the South American continent):\n\n >>> import geopandas\n >>> path =\n >>> world = 
geopandas.read_file(\n ... geopandas.datasets.get_path('naturalearth_lowres'))\n >>> south_america = world[world['continent'] == \"South America\"]\n >>> capitals = geopandas.read_file(\n ... geopandas.datasets.get_path('naturalearth_cities'))\n >>> capitals.shape\n (202, 2)\n >>> sa_capitals = geopandas.clip(capitals, south_america)\n >>> sa_capitals.shape\n (12, 2)\n \"\"\"\n if not isinstance(gdf, (GeoDataFrame, GeoSeries)):\n raise TypeError(\n \"'gdf' should be GeoDataFrame or GeoSeries, got {}\".format(type(gdf))\n )\n\n if not isinstance(mask, (GeoDataFrame, GeoSeries, Polygon, MultiPolygon)):\n raise TypeError(\n \"'mask' should be GeoDataFrame, GeoSeries or\"\n \"(Multi)Polygon, got {}\".format(type(gdf))\n )\n\n if isinstance(mask, (GeoDataFrame, GeoSeries)):\n if not _check_crs(gdf, mask):\n _crs_mismatch_warn(gdf, mask, stacklevel=3)\n\n if isinstance(mask, (GeoDataFrame, GeoSeries)):\n box_mask = mask.total_bounds\n else:\n box_mask = mask.bounds\n box_gdf = gdf.total_bounds\n if not (\n ((box_mask[0] <= box_gdf[2]) and (box_gdf[0] <= box_mask[2]))\n and ((box_mask[1] <= box_gdf[3]) and (box_gdf[1] <= box_mask[3]))\n ):\n return gdf.iloc[:0]\n\n if isinstance(mask, (GeoDataFrame, GeoSeries)):\n poly = mask.geometry.unary_union\n else:\n poly = mask\n\n geom_types = gdf.geometry.type\n poly_idx = np.asarray((geom_types == \"Polygon\") | (geom_types == \"MultiPolygon\"))\n line_idx = np.asarray(\n (geom_types == \"LineString\")\n | (geom_types == \"LinearRing\")\n | (geom_types == \"MultiLineString\")\n )\n point_idx = np.asarray((geom_types == \"Point\") | (geom_types == \"MultiPoint\"))\n geomcoll_idx = np.asarray((geom_types == \"GeometryCollection\"))\n\n if point_idx.any():\n point_gdf = _clip_points(gdf[point_idx], poly)\n else:\n point_gdf = None\n\n if poly_idx.any():\n poly_gdf = _clip_line_poly(gdf[poly_idx], poly)\n else:\n poly_gdf = None\n\n if line_idx.any():\n line_gdf = _clip_line_poly(gdf[line_idx], poly)\n else:\n line_gdf = None\n\n if geomcoll_idx.any():\n geomcoll_gdf = _clip_line_poly(gdf[geomcoll_idx], poly)\n else:\n geomcoll_gdf = None\n\n order = pd.Series(range(len(gdf)), index=gdf.index)\n concat = pd.concat([point_gdf, line_gdf, poly_gdf, geomcoll_gdf])\n\n if keep_geom_type:\n geomcoll_concat = (concat.geom_type == \"GeometryCollection\").any()\n geomcoll_orig = geomcoll_idx.any()\n\n new_collection = geomcoll_concat and not geomcoll_orig\n\n if geomcoll_orig:\n warnings.warn(\n \"keep_geom_type can not be called on a \"\n \"GeoDataFrame with GeometryCollection.\"\n )\n else:\n polys = [\"Polygon\", \"MultiPolygon\"]\n lines = [\"LineString\", \"MultiLineString\", \"LinearRing\"]\n points = [\"Point\", \"MultiPoint\"]\n\n # Check that the gdf for multiple geom types (points, lines and/or polys)\n orig_types_total = sum(\n [\n gdf.geom_type.isin(polys).any(),\n gdf.geom_type.isin(lines).any(),\n gdf.geom_type.isin(points).any(),\n ]\n )\n\n # Check how many geometry types are in the clipped GeoDataFrame\n clip_types_total = sum(\n [\n concat.geom_type.isin(polys).any(),\n concat.geom_type.isin(lines).any(),\n concat.geom_type.isin(points).any(),\n ]\n )\n\n # Check there aren't any new geom types in the clipped GeoDataFrame\n more_types = orig_types_total < clip_types_total\n\n if orig_types_total > 1:\n warnings.warn(\n \"keep_geom_type can not be called on a mixed type GeoDataFrame.\"\n )\n elif new_collection or more_types:\n orig_type = gdf.geom_type.iloc[0]\n if new_collection:\n concat = concat.explode()\n if orig_type in polys:\n concat = 
concat.loc[concat.geom_type.isin(polys)]\n elif orig_type in lines:\n concat = concat.loc[concat.geom_type.isin(lines)]\n\n # Return empty GeoDataFrame or GeoSeries if no shapes remain\n if len(concat) == 0:\n return gdf.iloc[:0]\n\n # Preserve the original order of the input\n if isinstance(concat, GeoDataFrame):\n concat[\"_order\"] = order\n return concat.sort_values(by=\"_order\").drop(columns=\"_order\")\n else:\n concat = GeoDataFrame(geometry=concat)\n concat[\"_order\"] = order\n return concat.sort_values(by=\"_order\").geometry\n", "path": "geopandas/tools/clip.py"}]} |
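With `clipped[gdf.geometry.name] = ...`, the clipped frame keeps the caller's active geometry column regardless of its name. A toy sketch of clipping a frame whose geometry column is named "WKT" (the data below is invented for illustration, not taken from the issue's CSV files):

```python
import geopandas as gpd
from shapely.geometry import box

fields = gpd.GeoDataFrame(
    {"field_id": [1, 2]},
    geometry=[box(0, 0, 2, 2), box(5, 5, 6, 6)],
    crs="EPSG:4326",
).rename_geometry("WKT")

boundary = gpd.GeoDataFrame(geometry=[box(0, 0, 1, 1)], crs="EPSG:4326")

clipped = gpd.clip(fields, boundary)
print(clipped.geometry.name)   # "WKT": the intersection is written back into the caller's column
print(len(clipped))            # 1: only the polygon overlapping the boundary remains
```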
gh_patches_debug_1050 | rasdani/github-patches | git_diff | google-parfait__tensorflow-federated-1334 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Keras model in federated_learning_for_image_classification.ipynb throws warning
**Describe the bug**
The Keras Sequential model in [federated_learning_for_image_classification.ipynb](https://github.com/tensorflow/federated/blob/master/docs/tutorials/federated_learning_for_image_classification.ipynb) throws a warning.
The model in the notebook is
```python
def create_keras_model():
return tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(784,)),
tf.keras.layers.Dense(10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
```
Warning thrown:
```python
WARNING:tensorflow:Please add `keras.layers.InputLayer` instead of `keras.Input` to Sequential model. `keras.Input` is intended to be used by Functional model.
<tensorflow.python.keras.engine.sequential.Sequential at 0x7f66178a46d0>
```
Easily fixed using the correct layer type:
```python
def create_keras_model():
return tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(784,)),
tf.keras.layers.Dense(10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
```
[colab](https://colab.research.google.com/drive/1LFgBiu9xUa-k92IW24fiSX_kVp7lb0SB?usp=sharing) notebook that reproduces the bug.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_federated/python/examples/remote_execution/remote_executor_example.py`
Content:
```
1 # Copyright 2018, The TensorFlow Federated Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Example showing how to run a multi-machine simulation.
15
16 In order to run this example, you must have a running instance of the
17 Executor Service, either locally or on Kubernetes.
18
19 The model trains EMNIST for a small number of rounds, but uses a RemoteExecutor
20 to distribute the work to the ExecutorService.
21 """
22
23 import collections
24 import warnings
25
26 from absl import app
27 from absl import flags
28 import grpc
29 import numpy as np
30 import tensorflow as tf
31 import tensorflow_federated as tff
32
33 FLAGS = flags.FLAGS
34
35 flags.DEFINE_string('host', None, 'The host to connect to.')
36 flags.mark_flag_as_required('host')
37 flags.DEFINE_string('port', '8000', 'The port to connect to.')
38 flags.DEFINE_integer('n_clients', 10, 'Number of clients.')
39 flags.DEFINE_integer('n_rounds', 3, 'Number of rounds.')
40
41
42 def preprocess(dataset):
43
44 def element_fn(element):
45 return collections.OrderedDict([
46 ('x', tf.reshape(element['pixels'], [-1])),
47 ('y', tf.reshape(element['label'], [1])),
48 ])
49
50 return dataset.repeat(NUM_EPOCHS).map(element_fn).batch(BATCH_SIZE)
51
52
53 def make_federated_data(client_data, client_ids):
54 return [
55 preprocess(client_data.create_tf_dataset_for_client(x))
56 for x in client_ids
57 ]
58
59
60 NUM_EPOCHS = 10
61 BATCH_SIZE = 20
62
63
64 def make_remote_executor(inferred_cardinalities):
65 """Make remote executor."""
66
67 def create_worker_stack(ex):
68 ex = tff.framework.ThreadDelegatingExecutor(ex)
69 return tff.framework.ReferenceResolvingExecutor(ex)
70
71 client_ex = []
72 num_clients = inferred_cardinalities.get(tff.CLIENTS, None)
73 if num_clients:
74 print('Inferred that there are {} clients'.format(num_clients))
75 else:
76 print('No CLIENTS placement provided')
77
78 for _ in range(num_clients or 0):
79 channel = grpc.insecure_channel('{}:{}'.format(FLAGS.host, FLAGS.port))
80 remote_ex = tff.framework.RemoteExecutor(channel)
81 worker_stack = create_worker_stack(remote_ex)
82 client_ex.append(worker_stack)
83
84 federating_strategy_factory = tff.framework.FederatedResolvingStrategy.factory(
85 {
86 tff.SERVER: create_worker_stack(tff.framework.EagerTFExecutor()),
87 tff.CLIENTS: client_ex,
88 })
89 unplaced_ex = create_worker_stack(tff.framework.EagerTFExecutor())
90 federating_ex = tff.framework.FederatingExecutor(federating_strategy_factory,
91 unplaced_ex)
92 return tff.framework.ReferenceResolvingExecutor(federating_ex)
93
94
95 def main(argv):
96 if len(argv) > 1:
97 raise app.UsageError('Too many command-line arguments.')
98
99 warnings.simplefilter('ignore')
100
101 np.random.seed(0)
102
103 emnist_train, _ = tff.simulation.datasets.emnist.load_data()
104
105 sample_clients = emnist_train.client_ids[0:FLAGS.n_clients]
106
107 federated_train_data = make_federated_data(emnist_train, sample_clients)
108
109 example_dataset = emnist_train.create_tf_dataset_for_client(
110 emnist_train.client_ids[0])
111
112 preprocessed_example_dataset = preprocess(example_dataset)
113 input_spec = preprocessed_example_dataset.element_spec
114
115 def model_fn():
116 model = tf.keras.models.Sequential([
117 tf.keras.layers.Input(shape=(784,)),
118 tf.keras.layers.Dense(10, kernel_initializer='zeros'),
119 tf.keras.layers.Softmax(),
120 ])
121 return tff.learning.from_keras_model(
122 model,
123 input_spec=input_spec,
124 loss=tf.keras.losses.SparseCategoricalCrossentropy(),
125 metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
126
127 iterative_process = tff.learning.build_federated_averaging_process(
128 model_fn,
129 client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))
130
131 factory = tff.framework.ResourceManagingExecutorFactory(make_remote_executor)
132 context = tff.framework.ExecutionContext(factory)
133 tff.framework.set_default_context(context)
134
135 state = iterative_process.initialize()
136
137 state, metrics = iterative_process.next(state, federated_train_data)
138 print('round 1, metrics={}'.format(metrics))
139
140 for round_num in range(2, FLAGS.n_rounds + 1):
141 state, metrics = iterative_process.next(state, federated_train_data)
142 print('round {:2d}, metrics={}'.format(round_num, metrics))
143
144
145 if __name__ == '__main__':
146 app.run(main)
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py b/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py
--- a/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py
+++ b/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py
@@ -114,7 +114,7 @@
def model_fn():
model = tf.keras.models.Sequential([
- tf.keras.layers.Input(shape=(784,)),
+ tf.keras.layers.InputLayer(input_shape=(784,)),
tf.keras.layers.Dense(10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
| {"golden_diff": "diff --git a/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py b/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py\n--- a/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py\n+++ b/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py\n@@ -114,7 +114,7 @@\n \n def model_fn():\n model = tf.keras.models.Sequential([\n- tf.keras.layers.Input(shape=(784,)),\n+ tf.keras.layers.InputLayer(input_shape=(784,)),\n tf.keras.layers.Dense(10, kernel_initializer='zeros'),\n tf.keras.layers.Softmax(),\n ])\n", "issue": "Keras model in federated_learning_for_image_classification.ipynb throws warning\n**Describe the bug**\r\nKeras Sequential Model in [federated_learning_for_image_classification.ipynb](https://github.com/tensorflow/federated/blob/master/docs/tutorials/federated_learning_for_image_classification.ipynb) throws warning.\r\nThe model in the notebook is\r\n```python\r\ndef create_keras_model():\r\n return tf.keras.models.Sequential([\r\n tf.keras.layers.Input(shape=(784,)),\r\n tf.keras.layers.Dense(10, kernel_initializer='zeros'),\r\n tf.keras.layers.Softmax(),\r\n ])\r\n```\r\nWarning thrown:\r\n```python\r\nWARNING:tensorflow:Please add `keras.layers.InputLayer` instead of `keras.Input` to Sequential model. `keras.Input` is intended to be used by Functional model.\r\n<tensorflow.python.keras.engine.sequential.Sequential at 0x7f66178a46d0>\r\n```\r\n\r\nEasily fixed using the correct layer type:\r\n```python\r\ndef create_keras_model():\r\n return tf.keras.models.Sequential([\r\n tf.keras.layers.InputLayer(input_shape=(784,)),\r\n tf.keras.layers.Dense(10, kernel_initializer='zeros'),\r\n tf.keras.layers.Softmax(),\r\n ])\r\n```\r\n\r\n[colab](https://colab.research.google.com/drive/1LFgBiu9xUa-k92IW24fiSX_kVp7lb0SB?usp=sharing) notebook that reproduces the bug.\r\n\n", "before_files": [{"content": "# Copyright 2018, The TensorFlow Federated Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Example showing how to run a multi-machine simulation.\n\nIn order to run this example, you must have a running instance of the\nExecutor Service, either locally or on Kubernetes.\n\nThe model trains EMNIST for a small number of rounds, but uses a RemoteExecutor\nto distribute the work to the ExecutorService.\n\"\"\"\n\nimport collections\nimport warnings\n\nfrom absl import app\nfrom absl import flags\nimport grpc\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_federated as tff\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string('host', None, 'The host to connect to.')\nflags.mark_flag_as_required('host')\nflags.DEFINE_string('port', '8000', 'The port to connect to.')\nflags.DEFINE_integer('n_clients', 10, 'Number of clients.')\nflags.DEFINE_integer('n_rounds', 3, 'Number of rounds.')\n\n\ndef preprocess(dataset):\n\n def element_fn(element):\n return collections.OrderedDict([\n ('x', tf.reshape(element['pixels'], [-1])),\n ('y', 
tf.reshape(element['label'], [1])),\n ])\n\n return dataset.repeat(NUM_EPOCHS).map(element_fn).batch(BATCH_SIZE)\n\n\ndef make_federated_data(client_data, client_ids):\n return [\n preprocess(client_data.create_tf_dataset_for_client(x))\n for x in client_ids\n ]\n\n\nNUM_EPOCHS = 10\nBATCH_SIZE = 20\n\n\ndef make_remote_executor(inferred_cardinalities):\n \"\"\"Make remote executor.\"\"\"\n\n def create_worker_stack(ex):\n ex = tff.framework.ThreadDelegatingExecutor(ex)\n return tff.framework.ReferenceResolvingExecutor(ex)\n\n client_ex = []\n num_clients = inferred_cardinalities.get(tff.CLIENTS, None)\n if num_clients:\n print('Inferred that there are {} clients'.format(num_clients))\n else:\n print('No CLIENTS placement provided')\n\n for _ in range(num_clients or 0):\n channel = grpc.insecure_channel('{}:{}'.format(FLAGS.host, FLAGS.port))\n remote_ex = tff.framework.RemoteExecutor(channel)\n worker_stack = create_worker_stack(remote_ex)\n client_ex.append(worker_stack)\n\n federating_strategy_factory = tff.framework.FederatedResolvingStrategy.factory(\n {\n tff.SERVER: create_worker_stack(tff.framework.EagerTFExecutor()),\n tff.CLIENTS: client_ex,\n })\n unplaced_ex = create_worker_stack(tff.framework.EagerTFExecutor())\n federating_ex = tff.framework.FederatingExecutor(federating_strategy_factory,\n unplaced_ex)\n return tff.framework.ReferenceResolvingExecutor(federating_ex)\n\n\ndef main(argv):\n if len(argv) > 1:\n raise app.UsageError('Too many command-line arguments.')\n\n warnings.simplefilter('ignore')\n\n np.random.seed(0)\n\n emnist_train, _ = tff.simulation.datasets.emnist.load_data()\n\n sample_clients = emnist_train.client_ids[0:FLAGS.n_clients]\n\n federated_train_data = make_federated_data(emnist_train, sample_clients)\n\n example_dataset = emnist_train.create_tf_dataset_for_client(\n emnist_train.client_ids[0])\n\n preprocessed_example_dataset = preprocess(example_dataset)\n input_spec = preprocessed_example_dataset.element_spec\n\n def model_fn():\n model = tf.keras.models.Sequential([\n tf.keras.layers.Input(shape=(784,)),\n tf.keras.layers.Dense(10, kernel_initializer='zeros'),\n tf.keras.layers.Softmax(),\n ])\n return tff.learning.from_keras_model(\n model,\n input_spec=input_spec,\n loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])\n\n iterative_process = tff.learning.build_federated_averaging_process(\n model_fn,\n client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))\n\n factory = tff.framework.ResourceManagingExecutorFactory(make_remote_executor)\n context = tff.framework.ExecutionContext(factory)\n tff.framework.set_default_context(context)\n\n state = iterative_process.initialize()\n\n state, metrics = iterative_process.next(state, federated_train_data)\n print('round 1, metrics={}'.format(metrics))\n\n for round_num in range(2, FLAGS.n_rounds + 1):\n state, metrics = iterative_process.next(state, federated_train_data)\n print('round {:2d}, metrics={}'.format(round_num, metrics))\n\n\nif __name__ == '__main__':\n app.run(main)\n", "path": "tensorflow_federated/python/examples/remote_execution/remote_executor_example.py"}], "after_files": [{"content": "# Copyright 2018, The TensorFlow Federated Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed 
to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Example showing how to run a multi-machine simulation.\n\nIn order to run this example, you must have a running instance of the\nExecutor Service, either locally or on Kubernetes.\n\nThe model trains EMNIST for a small number of rounds, but uses a RemoteExecutor\nto distribute the work to the ExecutorService.\n\"\"\"\n\nimport collections\nimport warnings\n\nfrom absl import app\nfrom absl import flags\nimport grpc\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_federated as tff\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string('host', None, 'The host to connect to.')\nflags.mark_flag_as_required('host')\nflags.DEFINE_string('port', '8000', 'The port to connect to.')\nflags.DEFINE_integer('n_clients', 10, 'Number of clients.')\nflags.DEFINE_integer('n_rounds', 3, 'Number of rounds.')\n\n\ndef preprocess(dataset):\n\n def element_fn(element):\n return collections.OrderedDict([\n ('x', tf.reshape(element['pixels'], [-1])),\n ('y', tf.reshape(element['label'], [1])),\n ])\n\n return dataset.repeat(NUM_EPOCHS).map(element_fn).batch(BATCH_SIZE)\n\n\ndef make_federated_data(client_data, client_ids):\n return [\n preprocess(client_data.create_tf_dataset_for_client(x))\n for x in client_ids\n ]\n\n\nNUM_EPOCHS = 10\nBATCH_SIZE = 20\n\n\ndef make_remote_executor(inferred_cardinalities):\n \"\"\"Make remote executor.\"\"\"\n\n def create_worker_stack(ex):\n ex = tff.framework.ThreadDelegatingExecutor(ex)\n return tff.framework.ReferenceResolvingExecutor(ex)\n\n client_ex = []\n num_clients = inferred_cardinalities.get(tff.CLIENTS, None)\n if num_clients:\n print('Inferred that there are {} clients'.format(num_clients))\n else:\n print('No CLIENTS placement provided')\n\n for _ in range(num_clients or 0):\n channel = grpc.insecure_channel('{}:{}'.format(FLAGS.host, FLAGS.port))\n remote_ex = tff.framework.RemoteExecutor(channel)\n worker_stack = create_worker_stack(remote_ex)\n client_ex.append(worker_stack)\n\n federating_strategy_factory = tff.framework.FederatedResolvingStrategy.factory(\n {\n tff.SERVER: create_worker_stack(tff.framework.EagerTFExecutor()),\n tff.CLIENTS: client_ex,\n })\n unplaced_ex = create_worker_stack(tff.framework.EagerTFExecutor())\n federating_ex = tff.framework.FederatingExecutor(federating_strategy_factory,\n unplaced_ex)\n return tff.framework.ReferenceResolvingExecutor(federating_ex)\n\n\ndef main(argv):\n if len(argv) > 1:\n raise app.UsageError('Too many command-line arguments.')\n\n warnings.simplefilter('ignore')\n\n np.random.seed(0)\n\n emnist_train, _ = tff.simulation.datasets.emnist.load_data()\n\n sample_clients = emnist_train.client_ids[0:FLAGS.n_clients]\n\n federated_train_data = make_federated_data(emnist_train, sample_clients)\n\n example_dataset = emnist_train.create_tf_dataset_for_client(\n emnist_train.client_ids[0])\n\n preprocessed_example_dataset = preprocess(example_dataset)\n input_spec = preprocessed_example_dataset.element_spec\n\n def model_fn():\n model = tf.keras.models.Sequential([\n tf.keras.layers.InputLayer(input_shape=(784,)),\n tf.keras.layers.Dense(10, kernel_initializer='zeros'),\n tf.keras.layers.Softmax(),\n ])\n return tff.learning.from_keras_model(\n model,\n input_spec=input_spec,\n loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n 
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])\n\n iterative_process = tff.learning.build_federated_averaging_process(\n model_fn,\n client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))\n\n factory = tff.framework.ResourceManagingExecutorFactory(make_remote_executor)\n context = tff.framework.ExecutionContext(factory)\n tff.framework.set_default_context(context)\n\n state = iterative_process.initialize()\n\n state, metrics = iterative_process.next(state, federated_train_data)\n print('round 1, metrics={}'.format(metrics))\n\n for round_num in range(2, FLAGS.n_rounds + 1):\n state, metrics = iterative_process.next(state, federated_train_data)\n print('round {:2d}, metrics={}'.format(round_num, metrics))\n\n\nif __name__ == '__main__':\n app.run(main)\n", "path": "tensorflow_federated/python/examples/remote_execution/remote_executor_example.py"}]} |
gh_patches_debug_1051 | rasdani/github-patches | git_diff | holoviz__hvplot-494 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Entrypoint broken
The setup.py specifies `hvplot.__main__:main` as a console_script entry point, but the `hvplot.__main__` module doesn't actually exist.
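A quick way to see the failure (a minimal sketch; assumes hvplot is installed from this setup.py):
```python
from importlib import import_module

import_module("hvplot")           # the package itself imports fine
import_module("hvplot.__main__")  # raises ModuleNotFoundError: No module named 'hvplot.__main__'
```
So the generated `hvplot` console script breaks as soon as it tries to resolve `hvplot.__main__:main`.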
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import sys
3 import shutil
4 from collections import defaultdict
5 from setuptools import setup, find_packages
6
7
8 ########## autover ##########
9
10 def embed_version(basepath, ref='v0.2.2'):
11 """
12 Autover is purely a build time dependency in all cases (conda and
13 pip) except for when you use pip's remote git support [git+url] as
14 1) you need a dynamically changing version and 2) the environment
15 starts off clean with zero dependencies installed.
16 This function acts as a fallback to make Version available until
17 PEP518 is commonly supported by pip to express build dependencies.
18 """
19 import io, zipfile, importlib
20 try: from urllib.request import urlopen
21 except: from urllib import urlopen
22 try:
23 url = 'https://github.com/ioam/autover/archive/{ref}.zip'
24 response = urlopen(url.format(ref=ref))
25 zf = zipfile.ZipFile(io.BytesIO(response.read()))
26 ref = ref[1:] if ref.startswith('v') else ref
27 embed_version = zf.read('autover-{ref}/autover/version.py'.format(ref=ref))
28 with open(os.path.join(basepath, 'version.py'), 'wb') as f:
29 f.write(embed_version)
30 return importlib.import_module("version")
31 except:
32 return None
33
34 def get_setup_version(reponame):
35 """
36 Helper to get the current version from either git describe or the
37 .version file (if available).
38 """
39 import json
40 basepath = os.path.split(__file__)[0]
41 version_file_path = os.path.join(basepath, reponame, '.version')
42 try:
43 from param import version
44 except:
45 version = embed_version(basepath)
46 if version is not None:
47 return version.Version.setup_version(basepath, reponame, archive_commit="$Format:%h$")
48 else:
49 print("WARNING: param>=1.6.0 unavailable. If you are installing a package, this warning can safely be ignored. If you are creating a package or otherwise operating in a git repository, you should install param>=1.6.0.")
50 return json.load(open(version_file_path, 'r'))['version_string']
51
52 ########## examples ##########
53
54 def check_pseudo_package(path):
55 """
56 Verifies that a fake subpackage path for assets (notebooks, svgs,
57 pngs etc) both exists and is populated with files.
58 """
59 if not os.path.isdir(path):
60 raise Exception("Please make sure pseudo-package %s exists." % path)
61 else:
62 assets = os.listdir(path)
63 if len(assets) == 0:
64 raise Exception("Please make sure pseudo-package %s is populated." % path)
65
66
67 excludes = ['DS_Store', '.log', 'ipynb_checkpoints']
68 packages = []
69 extensions = defaultdict(list)
70
71 def walker(top, names):
72 """
73 Walks a directory and records all packages and file extensions.
74 """
75 global packages, extensions
76 if any(exc in top for exc in excludes):
77 return
78 package = top[top.rfind('hvplot'):].replace(os.path.sep, '.')
79 packages.append(package)
80 for name in names:
81 ext = '.'.join(name.split('.')[1:])
82 ext_str = '*.%s' % ext
83 if ext and ext not in excludes and ext_str not in extensions[package]:
84 extensions[package].append(ext_str)
85
86
87 def examples(path='hvplot-examples', verbose=False, force=False, root=__file__):
88 """
89 Copies the notebooks to the supplied path.
90 """
91 filepath = os.path.abspath(os.path.dirname(root))
92 example_dir = os.path.join(filepath, './examples')
93 if not os.path.exists(example_dir):
94 example_dir = os.path.join(filepath, '../examples')
95 if os.path.exists(path):
96 if not force:
97 print('%s directory already exists, either delete it or set the force flag' % path)
98 return
99 shutil.rmtree(path)
100 ignore = shutil.ignore_patterns('.ipynb_checkpoints', '*.pyc', '*~')
101 tree_root = os.path.abspath(example_dir)
102 if os.path.isdir(tree_root):
103 shutil.copytree(tree_root, path, ignore=ignore, symlinks=True)
104 else:
105 print('Cannot find %s' % tree_root)
106
107
108
109 def package_assets(example_path):
110 """
111 Generates pseudo-packages for the examples directory.
112 """
113 examples(example_path, force=True, root=__file__)
114 for root, dirs, files in os.walk(example_path):
115 walker(root, dirs+files)
116 setup_args['packages'] += packages
117 for p, exts in extensions.items():
118 if exts:
119 setup_args['package_data'][p] = exts
120
121
122 ########## dependencies ##########
123
124 install_requires = [
125 'bokeh >=1.0.0',
126 'colorcet >=2',
127 'holoviews >=1.11.0',
128 'pandas',
129 'numpy>=1.15'
130 ]
131
132 _examples = [
133 'geoviews >=1.6.0',
134 'panel',
135 'geopandas',
136 'xarray',
137 'networkx',
138 'streamz >=0.3.0',
139 'intake',
140 'intake-parquet',
141 'intake-xarray',
142 'dask',
143 'datashader >=0.6.5',
144 'notebook >=5.4',
145 'rasterio',
146 's3fs',
147 'scipy',
148 'pillow',
149 'selenium',
150 'spatialpandas',
151 'scikit-image'
152 ]
153
154 _examples_extra = _examples + [
155 'pygraphviz',
156 ]
157
158 extras_require = {
159 'tests': [
160 'coveralls',
161 'nose',
162 'flake8',
163 'parameterized',
164 'pytest',
165 'nbsmoke >=0.2.0',
166 ],
167 'examples': _examples,
168 'examples_extra': _examples_extra,
169 'doc': _examples_extra + [
170 'nbsite >=0.5.1',
171 'sphinx_holoviz_theme',
172 'tornado <6.0'
173 ]
174 }
175
176 # until pyproject.toml/equivalent is widely supported (setup_requires
177 # doesn't work well with pip)
178 extras_require['build'] = [
179 'param >=1.6.1',
180 'setuptools' # should make this pip now
181 ]
182
183 extras_require['all'] = sorted(set(sum(extras_require.values(), [])))
184
185 ########## metadata for setuptools ##########
186
187 setup_args = dict(
188 name='hvplot',
189 version=get_setup_version("hvplot"),
190 description='A high-level plotting API for the PyData ecosystem built on HoloViews.',
191 long_description=open("README.md").read(),
192 long_description_content_type="text/markdown",
193 author= "Philipp Rudiger",
194 author_email= "[email protected]",
195 maintainer="HoloViz developers",
196 maintainer_email="[email protected]",
197 packages=find_packages()+packages,
198 package_data={'hvplot': ['.version']},
199 platforms=['Windows', 'Mac OS X', 'Linux'],
200 license='BSD',
201 url='https://hvplot.pyviz.org',
202 classifiers = [
203 "License :: OSI Approved :: BSD License",
204 "Development Status :: 5 - Production/Stable",
205 "Programming Language :: Python :: 2",
206 "Programming Language :: Python :: 2.7",
207 "Programming Language :: Python :: 3.5",
208 "Programming Language :: Python :: 3.6",
209 "Operating System :: OS Independent",
210 "Intended Audience :: Science/Research",
211 "Intended Audience :: Developers",
212 "Natural Language :: English",
213 "Topic :: Scientific/Engineering",
214 "Topic :: Software Development :: Libraries"],
215 python_requires=">=2.7",
216 install_requires=install_requires,
217 extras_require=extras_require,
218 tests_require=extras_require['tests'],
219 entry_points={
220 'console_scripts': [
221 'hvplot = hvplot.__main__:main'
222 ],
223 'pandas_plotting_backends': [
224 'holoviews = hvplot:plotting',
225 ],
226 },
227 )
228
229
230 if __name__ == '__main__':
231 example_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
232 'hvplot','examples')
233 if 'develop' not in sys.argv:
234 package_assets(example_path)
235
236 setup(**setup_args)
237
238 if os.path.isdir(example_path):
239 shutil.rmtree(example_path)
240
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -217,9 +217,7 @@
extras_require=extras_require,
tests_require=extras_require['tests'],
entry_points={
- 'console_scripts': [
- 'hvplot = hvplot.__main__:main'
- ],
+ 'console_scripts': [],
'pandas_plotting_backends': [
'holoviews = hvplot:plotting',
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -217,9 +217,7 @@\n extras_require=extras_require,\n tests_require=extras_require['tests'],\n entry_points={\n- 'console_scripts': [\n- 'hvplot = hvplot.__main__:main'\n- ],\n+ 'console_scripts': [],\n 'pandas_plotting_backends': [\n 'holoviews = hvplot:plotting',\n ],\n", "issue": "Entrypoint broken\nThe setup.py specifies `hvplot.__main__` as a console_script but that doesn't actually exist.\n", "before_files": [{"content": "import os\nimport sys\nimport shutil\nfrom collections import defaultdict\nfrom setuptools import setup, find_packages\n\n\n########## autover ##########\n\ndef embed_version(basepath, ref='v0.2.2'):\n \"\"\"\n Autover is purely a build time dependency in all cases (conda and\n pip) except for when you use pip's remote git support [git+url] as\n 1) you need a dynamically changing version and 2) the environment\n starts off clean with zero dependencies installed.\n This function acts as a fallback to make Version available until\n PEP518 is commonly supported by pip to express build dependencies.\n \"\"\"\n import io, zipfile, importlib\n try: from urllib.request import urlopen\n except: from urllib import urlopen\n try:\n url = 'https://github.com/ioam/autover/archive/{ref}.zip'\n response = urlopen(url.format(ref=ref))\n zf = zipfile.ZipFile(io.BytesIO(response.read()))\n ref = ref[1:] if ref.startswith('v') else ref\n embed_version = zf.read('autover-{ref}/autover/version.py'.format(ref=ref))\n with open(os.path.join(basepath, 'version.py'), 'wb') as f:\n f.write(embed_version)\n return importlib.import_module(\"version\")\n except:\n return None\n\ndef get_setup_version(reponame):\n \"\"\"\n Helper to get the current version from either git describe or the\n .version file (if available).\n \"\"\"\n import json\n basepath = os.path.split(__file__)[0]\n version_file_path = os.path.join(basepath, reponame, '.version')\n try:\n from param import version\n except:\n version = embed_version(basepath)\n if version is not None:\n return version.Version.setup_version(basepath, reponame, archive_commit=\"$Format:%h$\")\n else:\n print(\"WARNING: param>=1.6.0 unavailable. If you are installing a package, this warning can safely be ignored. 
If you are creating a package or otherwise operating in a git repository, you should install param>=1.6.0.\")\n return json.load(open(version_file_path, 'r'))['version_string']\n\n########## examples ##########\n\ndef check_pseudo_package(path):\n \"\"\"\n Verifies that a fake subpackage path for assets (notebooks, svgs,\n pngs etc) both exists and is populated with files.\n \"\"\"\n if not os.path.isdir(path):\n raise Exception(\"Please make sure pseudo-package %s exists.\" % path)\n else:\n assets = os.listdir(path)\n if len(assets) == 0:\n raise Exception(\"Please make sure pseudo-package %s is populated.\" % path)\n\n\nexcludes = ['DS_Store', '.log', 'ipynb_checkpoints']\npackages = []\nextensions = defaultdict(list)\n\ndef walker(top, names):\n \"\"\"\n Walks a directory and records all packages and file extensions.\n \"\"\"\n global packages, extensions\n if any(exc in top for exc in excludes):\n return\n package = top[top.rfind('hvplot'):].replace(os.path.sep, '.')\n packages.append(package)\n for name in names:\n ext = '.'.join(name.split('.')[1:])\n ext_str = '*.%s' % ext\n if ext and ext not in excludes and ext_str not in extensions[package]:\n extensions[package].append(ext_str)\n\n\ndef examples(path='hvplot-examples', verbose=False, force=False, root=__file__):\n \"\"\"\n Copies the notebooks to the supplied path.\n \"\"\"\n filepath = os.path.abspath(os.path.dirname(root))\n example_dir = os.path.join(filepath, './examples')\n if not os.path.exists(example_dir):\n example_dir = os.path.join(filepath, '../examples')\n if os.path.exists(path):\n if not force:\n print('%s directory already exists, either delete it or set the force flag' % path)\n return\n shutil.rmtree(path)\n ignore = shutil.ignore_patterns('.ipynb_checkpoints', '*.pyc', '*~')\n tree_root = os.path.abspath(example_dir)\n if os.path.isdir(tree_root):\n shutil.copytree(tree_root, path, ignore=ignore, symlinks=True)\n else:\n print('Cannot find %s' % tree_root)\n\n\n\ndef package_assets(example_path):\n \"\"\"\n Generates pseudo-packages for the examples directory.\n \"\"\"\n examples(example_path, force=True, root=__file__)\n for root, dirs, files in os.walk(example_path):\n walker(root, dirs+files)\n setup_args['packages'] += packages\n for p, exts in extensions.items():\n if exts:\n setup_args['package_data'][p] = exts\n\n\n########## dependencies ##########\n\ninstall_requires = [\n 'bokeh >=1.0.0',\n 'colorcet >=2',\n 'holoviews >=1.11.0',\n 'pandas',\n 'numpy>=1.15'\n]\n\n_examples = [\n 'geoviews >=1.6.0',\n 'panel',\n 'geopandas',\n 'xarray',\n 'networkx',\n 'streamz >=0.3.0',\n 'intake',\n 'intake-parquet',\n 'intake-xarray',\n 'dask',\n 'datashader >=0.6.5',\n 'notebook >=5.4',\n 'rasterio',\n 's3fs',\n 'scipy',\n 'pillow',\n 'selenium',\n 'spatialpandas',\n 'scikit-image'\n]\n\n_examples_extra = _examples + [\n 'pygraphviz',\n]\n\nextras_require = {\n 'tests': [\n 'coveralls',\n 'nose',\n 'flake8',\n 'parameterized',\n 'pytest',\n 'nbsmoke >=0.2.0',\n ],\n 'examples': _examples,\n 'examples_extra': _examples_extra,\n 'doc': _examples_extra + [\n 'nbsite >=0.5.1',\n 'sphinx_holoviz_theme',\n 'tornado <6.0'\n ]\n}\n\n# until pyproject.toml/equivalent is widely supported (setup_requires\n# doesn't work well with pip)\nextras_require['build'] = [\n 'param >=1.6.1',\n 'setuptools' # should make this pip now\n]\n\nextras_require['all'] = sorted(set(sum(extras_require.values(), [])))\n\n########## metadata for setuptools ##########\n\nsetup_args = dict(\n name='hvplot',\n 
version=get_setup_version(\"hvplot\"),\n description='A high-level plotting API for the PyData ecosystem built on HoloViews.',\n long_description=open(\"README.md\").read(),\n long_description_content_type=\"text/markdown\",\n author= \"Philipp Rudiger\",\n author_email= \"[email protected]\",\n maintainer=\"HoloViz developers\",\n maintainer_email=\"[email protected]\",\n packages=find_packages()+packages,\n package_data={'hvplot': ['.version']},\n platforms=['Windows', 'Mac OS X', 'Linux'],\n license='BSD',\n url='https://hvplot.pyviz.org',\n classifiers = [\n \"License :: OSI Approved :: BSD License\",\n \"Development Status :: 5 - Production/Stable\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Operating System :: OS Independent\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Software Development :: Libraries\"],\n python_requires=\">=2.7\",\n install_requires=install_requires,\n extras_require=extras_require,\n tests_require=extras_require['tests'],\n entry_points={\n 'console_scripts': [\n 'hvplot = hvplot.__main__:main'\n ],\n 'pandas_plotting_backends': [\n 'holoviews = hvplot:plotting',\n ],\n },\n)\n\n\nif __name__ == '__main__':\n example_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n 'hvplot','examples')\n if 'develop' not in sys.argv:\n package_assets(example_path)\n\n setup(**setup_args)\n\n if os.path.isdir(example_path):\n shutil.rmtree(example_path)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport sys\nimport shutil\nfrom collections import defaultdict\nfrom setuptools import setup, find_packages\n\n\n########## autover ##########\n\ndef embed_version(basepath, ref='v0.2.2'):\n \"\"\"\n Autover is purely a build time dependency in all cases (conda and\n pip) except for when you use pip's remote git support [git+url] as\n 1) you need a dynamically changing version and 2) the environment\n starts off clean with zero dependencies installed.\n This function acts as a fallback to make Version available until\n PEP518 is commonly supported by pip to express build dependencies.\n \"\"\"\n import io, zipfile, importlib\n try: from urllib.request import urlopen\n except: from urllib import urlopen\n try:\n url = 'https://github.com/ioam/autover/archive/{ref}.zip'\n response = urlopen(url.format(ref=ref))\n zf = zipfile.ZipFile(io.BytesIO(response.read()))\n ref = ref[1:] if ref.startswith('v') else ref\n embed_version = zf.read('autover-{ref}/autover/version.py'.format(ref=ref))\n with open(os.path.join(basepath, 'version.py'), 'wb') as f:\n f.write(embed_version)\n return importlib.import_module(\"version\")\n except:\n return None\n\ndef get_setup_version(reponame):\n \"\"\"\n Helper to get the current version from either git describe or the\n .version file (if available).\n \"\"\"\n import json\n basepath = os.path.split(__file__)[0]\n version_file_path = os.path.join(basepath, reponame, '.version')\n try:\n from param import version\n except:\n version = embed_version(basepath)\n if version is not None:\n return version.Version.setup_version(basepath, reponame, archive_commit=\"$Format:%h$\")\n else:\n print(\"WARNING: param>=1.6.0 unavailable. If you are installing a package, this warning can safely be ignored. 
If you are creating a package or otherwise operating in a git repository, you should install param>=1.6.0.\")\n return json.load(open(version_file_path, 'r'))['version_string']\n\n########## examples ##########\n\ndef check_pseudo_package(path):\n \"\"\"\n Verifies that a fake subpackage path for assets (notebooks, svgs,\n pngs etc) both exists and is populated with files.\n \"\"\"\n if not os.path.isdir(path):\n raise Exception(\"Please make sure pseudo-package %s exists.\" % path)\n else:\n assets = os.listdir(path)\n if len(assets) == 0:\n raise Exception(\"Please make sure pseudo-package %s is populated.\" % path)\n\n\nexcludes = ['DS_Store', '.log', 'ipynb_checkpoints']\npackages = []\nextensions = defaultdict(list)\n\ndef walker(top, names):\n \"\"\"\n Walks a directory and records all packages and file extensions.\n \"\"\"\n global packages, extensions\n if any(exc in top for exc in excludes):\n return\n package = top[top.rfind('hvplot'):].replace(os.path.sep, '.')\n packages.append(package)\n for name in names:\n ext = '.'.join(name.split('.')[1:])\n ext_str = '*.%s' % ext\n if ext and ext not in excludes and ext_str not in extensions[package]:\n extensions[package].append(ext_str)\n\n\ndef examples(path='hvplot-examples', verbose=False, force=False, root=__file__):\n \"\"\"\n Copies the notebooks to the supplied path.\n \"\"\"\n filepath = os.path.abspath(os.path.dirname(root))\n example_dir = os.path.join(filepath, './examples')\n if not os.path.exists(example_dir):\n example_dir = os.path.join(filepath, '../examples')\n if os.path.exists(path):\n if not force:\n print('%s directory already exists, either delete it or set the force flag' % path)\n return\n shutil.rmtree(path)\n ignore = shutil.ignore_patterns('.ipynb_checkpoints', '*.pyc', '*~')\n tree_root = os.path.abspath(example_dir)\n if os.path.isdir(tree_root):\n shutil.copytree(tree_root, path, ignore=ignore, symlinks=True)\n else:\n print('Cannot find %s' % tree_root)\n\n\n\ndef package_assets(example_path):\n \"\"\"\n Generates pseudo-packages for the examples directory.\n \"\"\"\n examples(example_path, force=True, root=__file__)\n for root, dirs, files in os.walk(example_path):\n walker(root, dirs+files)\n setup_args['packages'] += packages\n for p, exts in extensions.items():\n if exts:\n setup_args['package_data'][p] = exts\n\n\n########## dependencies ##########\n\ninstall_requires = [\n 'bokeh >=1.0.0',\n 'colorcet >=2',\n 'holoviews >=1.11.0',\n 'pandas',\n 'numpy>=1.15'\n]\n\n_examples = [\n 'geoviews >=1.6.0',\n 'panel',\n 'geopandas',\n 'xarray',\n 'networkx',\n 'streamz >=0.3.0',\n 'intake',\n 'intake-parquet',\n 'intake-xarray',\n 'dask',\n 'datashader >=0.6.5',\n 'notebook >=5.4',\n 'rasterio',\n 's3fs',\n 'scipy',\n 'pillow',\n 'selenium',\n 'spatialpandas',\n 'scikit-image'\n]\n\n_examples_extra = _examples + [\n 'pygraphviz',\n]\n\nextras_require = {\n 'tests': [\n 'coveralls',\n 'nose',\n 'flake8',\n 'parameterized',\n 'pytest',\n 'nbsmoke >=0.2.0',\n ],\n 'examples': _examples,\n 'examples_extra': _examples_extra,\n 'doc': _examples_extra + [\n 'nbsite >=0.5.1',\n 'sphinx_holoviz_theme',\n 'tornado <6.0'\n ]\n}\n\n# until pyproject.toml/equivalent is widely supported (setup_requires\n# doesn't work well with pip)\nextras_require['build'] = [\n 'param >=1.6.1',\n 'setuptools' # should make this pip now\n]\n\nextras_require['all'] = sorted(set(sum(extras_require.values(), [])))\n\n########## metadata for setuptools ##########\n\nsetup_args = dict(\n name='hvplot',\n 
version=get_setup_version(\"hvplot\"),\n description='A high-level plotting API for the PyData ecosystem built on HoloViews.',\n long_description=open(\"README.md\").read(),\n long_description_content_type=\"text/markdown\",\n author= \"Philipp Rudiger\",\n author_email= \"[email protected]\",\n maintainer=\"HoloViz developers\",\n maintainer_email=\"[email protected]\",\n packages=find_packages()+packages,\n package_data={'hvplot': ['.version']},\n platforms=['Windows', 'Mac OS X', 'Linux'],\n license='BSD',\n url='https://hvplot.pyviz.org',\n classifiers = [\n \"License :: OSI Approved :: BSD License\",\n \"Development Status :: 5 - Production/Stable\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Operating System :: OS Independent\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Software Development :: Libraries\"],\n python_requires=\">=2.7\",\n install_requires=install_requires,\n extras_require=extras_require,\n tests_require=extras_require['tests'],\n entry_points={\n 'console_scripts': [],\n 'pandas_plotting_backends': [\n 'holoviews = hvplot:plotting',\n ],\n },\n)\n\n\nif __name__ == '__main__':\n example_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n 'hvplot','examples')\n if 'develop' not in sys.argv:\n package_assets(example_path)\n\n setup(**setup_args)\n\n if os.path.isdir(example_path):\n shutil.rmtree(example_path)\n", "path": "setup.py"}]} |
gh_patches_debug_1052 | rasdani/github-patches | git_diff | learningequality__kolibri-11737 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Environment variables and configuration flags for Kolibri options are persisted to the `options.ini` file by `generate_empty_options_file`
## Observed behavior
When using either environment variables or configuration flags like `--port` or `--debug` on the first run of a Kolibri device, these will be written to the empty `options.ini` file generated by `generate_empty_options_file`. This means that settings chosen on the first run via ephemeral configuration, such as environment variables and runtime flags, are persisted for subsequent runs of the server.
## Expected behavior
Ephemeral configuration should not be persisted to disk.
## User-facing consequences
The main place this could be problematic is if someone is using a base Kolibri to generate a `KOLIBRI_HOME` image and they do not intend their configuration flags to be persisted.
Additionally, Kolibri developers might be confused about why flags are being persisted when they shouldn't be.
## Steps to reproduce
Run kolibri with a new `KOLIBRI_HOME` directory. Run with the `--debug` flag.
See that `options.ini` contains an uncommented line `DEBUG = True`
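A minimal way to inspect the result programmatically (a sketch; the default `~/.kolibri` home path and the `[Server]` section name are taken from the option spec shown below and may differ on a given install):
```python
import configparser
import os

options_path = os.path.join(
    os.environ.get("KOLIBRI_HOME", os.path.expanduser("~/.kolibri")), "options.ini"
)
parser = configparser.ConfigParser()
parser.read(options_path)

# On a fresh home directory DEBUG should stay commented out (i.e. unset),
# but after `kolibri start --debug` it comes back as a persisted value.
print(parser.get("Server", "DEBUG", fallback="<not set>"))
```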
## Context
Tell us about your environment, including:
* Kolibri version: 0.15.5
* Operating system: Ubuntu
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/utils/options.py`
Content:
```
1 """
2 This module is intended to allow customization of Kolibri settings with the
3 options.ini file.
4 The settings can be changed through environment variables or sections and keys
5 in the options.ini file.
6 """
7 import ast
8 import logging
9 import os
10 import sys
11 from functools import update_wrapper
12
13 from configobj import ConfigObj
14 from configobj import flatten_errors
15 from configobj import get_extra_values
16 from django.utils.functional import SimpleLazyObject
17 from django.utils.module_loading import import_string
18 from django.utils.six import string_types
19 from six.moves.urllib.parse import urlparse
20 from six.moves.urllib.parse import urlunparse
21 from validate import is_boolean
22 from validate import Validator
23 from validate import VdtTypeError
24 from validate import VdtValueError
25
26 try:
27 import kolibri.utils.pskolibri as psutil
28 except NotImplementedError:
29 # This module can't work on this OS
30 psutil = None
31
32
33 from kolibri.utils.data import bytes_from_humans
34 from kolibri.utils.i18n import KOLIBRI_LANGUAGE_INFO
35 from kolibri.utils.i18n import KOLIBRI_SUPPORTED_LANGUAGES
36 from kolibri.plugins.utils.options import extend_config_spec
37 from kolibri.deployment.default.sqlite_db_names import (
38 ADDITIONAL_SQLITE_DATABASES,
39 )
40 from kolibri.utils.system import get_fd_limit
41
42
43 logger = logging.getLogger(__name__)
44
45
46 CACHE_SHARDS = 8
47
48 # file descriptors per thread
49 FD_PER_THREAD = sum(
50 (
51 5, # minimum allowance
52 1 + len(ADDITIONAL_SQLITE_DATABASES), # DBs assuming SQLite
53 CACHE_SHARDS, # assuming diskcache
54 )
55 )
56
57 # Reserve some file descriptors for file operations happening in asynchronous tasks
58 # when the server is running with threaded task runners.
59 MIN_RESERVED_FD = 64
60
61
62 def calculate_thread_pool():
63 """
64 Returns the default value for CherryPY thread_pool:
65 - calculated based on the best values obtained in several partners installations
66 - servers with more memory can deal with more threads
67 - calculations are done for servers with more than 2 Gb of RAM
68 - restricts value to avoid exceeding file descriptor limit
69 """
70 MIN_POOL = 50
71 MAX_POOL = 150
72
73 pool_size = MIN_POOL
74 if psutil:
75 MIN_MEM = 2
76 MAX_MEM = 6
77 total_memory = psutil.virtual_memory().total / pow(10, 9) # in GB
78 # if it's in the range, scale thread count linearly with available memory
79 if MIN_MEM < total_memory < MAX_MEM:
80 pool_size = MIN_POOL + int(
81 (MAX_POOL - MIN_POOL)
82 * float(total_memory - MIN_MEM)
83 / (MAX_MEM - MIN_MEM)
84 )
85 elif total_memory >= MAX_MEM:
86 pool_size = MAX_POOL
87 elif sys.platform.startswith(
88 "darwin"
89 ): # Considering MacOS has at least 4 Gb of RAM
90 pool_size = MAX_POOL
91
92 # ensure (number of threads) x (open file descriptors) < (fd limit)
93 max_threads = (get_fd_limit() - MIN_RESERVED_FD) // FD_PER_THREAD
94 # Ensure that the number of threads never goes below 1
95 return max(1, min(pool_size, max_threads))
96
97
98 ALL_LANGUAGES = "kolibri-all"
99 SUPPORTED_LANGUAGES = "kolibri-supported"
100
101
102 def _process_language_string(value):
103 """
104 Used to validate string values.
105 The only valid argument in this case is that it is a string
106 so we first try to coerce it to a string, then do some checks
107 to see if it is any of our special values. Then if it is an
108 appropriate language code value.
109 If no value is appropriate, raise a ValueError.
110 """
111 value = str(value)
112 if value == ALL_LANGUAGES:
113 return list(KOLIBRI_LANGUAGE_INFO.keys())
114 if value == SUPPORTED_LANGUAGES:
115 return list(KOLIBRI_SUPPORTED_LANGUAGES)
116 if value in KOLIBRI_LANGUAGE_INFO:
117 return [value]
118 raise ValueError
119
120
121 def _process_list(value, separator=","):
122 """
123 Used to validate list values.
124 The only valid argument in this case is that it is a list
125 so we first try to coerce it to a list, then do some checks
126 to see if it is any of our special values. Then if it is an
127 appropriate list value.
128 If no value is appropriate, raise a ValueError.
129 """
130
131 # Check the supplied value is a list
132 if not isinstance(value, list):
133 if not value:
134 value = []
135 elif isinstance(value, string_types):
136 value = value.split(separator)
137 else:
138 value = [value]
139 return value
140
141
142 def language_list(value):
143 """
144 Check that the supplied value is a list of languages,
145 or a single language, or a special shortcut parameter.
146 In the case that it is a special shortcut name, we return the full list
147 of relevant languages for that parameter, or throw a validation error
148 if that parameter would return an empty list.
149 If a single language code is the parameter, this function will return a list
150 with that language code as the only member.
151
152 :param Union[str, list[str]] value: Either a string or a list of strings
153 String can be any value that is a key of KOLIBRI_LANGUAGE_INFO
154 or one of the special strings represented by ALL_LANGUAGES or SUPPORTED_LANGUAGES
155 A list must be a list of these strings.
156 """
157 value = _process_list(value)
158
159 out = set()
160 errors = []
161 for entry in value:
162 try:
163 entry_list = _process_language_string(entry)
164 out.update(entry_list)
165 except ValueError:
166 errors.append(entry)
167 if errors:
168 raise VdtValueError(errors)
169
170 if not out:
171 raise VdtValueError(value)
172
173 return sorted(list(out))
174
175
176 def path(value):
177 from kolibri.utils.conf import KOLIBRI_HOME
178
179 if not isinstance(value, string_types):
180 raise VdtValueError(repr(value))
181 # Allow for blank paths
182 if value:
183 # ensure all path arguments, e.g. under section "Paths", are fully resolved and expanded, relative to KOLIBRI_HOME
184 return os.path.join(KOLIBRI_HOME, os.path.expanduser(value))
185 return value
186
187
188 def path_list(value):
189 """
190 Check that the supplied value is a semicolon-delimited list of paths.
191 Note: we do not guarantee that these paths all currently exist.
192 """
193 if isinstance(value, string_types):
194 value = value.split(";")
195
196 out = []
197
198 if isinstance(value, list):
199 errors = []
200 for item in value:
201 try:
202 out.append(path(item))
203 except VdtValueError:
204 errors.append(repr(item))
205 if errors:
206 raise VdtValueError(errors)
207
208 return out
209
210
211 def validate_port_number(value):
212 if 0 <= value <= 65535:
213 return value
214 raise VdtValueError(value)
215
216
217 def port(value):
218 try:
219 return validate_port_number(int(value))
220 except ValueError:
221 raise VdtTypeError(value)
222
223
224 def origin_or_port(value):
225 """
226 Check that the passed value can either be coerced to an integer to supply
227 a port, or is a valid origin string.
228
229 :param Union[integer, str] value: Either an integer or a string
230 """
231 if value != "":
232 try:
233 value = validate_port_number(int(value))
234 except ValueError:
235 url = urlparse(value)
236 if not url.scheme or not url.netloc:
237 raise VdtValueError(value)
238 value = urlunparse((url.scheme, url.netloc, "", "", "", ""))
239 return value
240
241
242 def validate_bytes(value):
243 try:
244 value = bytes_from_humans(value)
245 except ValueError:
246 raise VdtValueError(value)
247 return value
248
249
250 def url_prefix(value):
251 if not isinstance(value, string_types):
252 raise VdtValueError(value)
253 return value.lstrip("/").rstrip("/") + "/"
254
255
256 def multiprocess_bool(value):
257 """
258 Validate the boolean value of a multiprocessing option.
259 Do this by checking it's a boolean, and also that multiprocessing
260 can be imported properly on this platform.
261 """
262 value = is_boolean(value)
263 try:
264 if not value:
265 raise ImportError()
266 # Import in order to check if multiprocessing is supported on this platform
267 from multiprocessing import synchronize # noqa
268
269 return True
270 except ImportError:
271 return False
272
273
274 class LazyImportFunction(object):
275 """
276 A function wrapper that will import a module when called.
277 We may be able to drop this when Python 2.7 support is dropped
278 and use Python LazyLoader module machinery instead.
279 """
280
281 def __init__(self, module_name):
282 self.module_name = module_name
283 self._fn = None
284
285 def __call__(self, *args, **kwargs):
286 if self._fn is None:
287 fn = import_string(self.module_name)
288 if not callable(fn):
289 raise ImportError("Module {} is not callable".format(self.module_name))
290 self._fn = fn
291 update_wrapper(self, self._fn)
292 return self._fn(*args, **kwargs)
293
294
295 def lazy_import_callback(value):
296 """
297 Validate that the value is a string that is a valid import name.
298 Does not validate that the module exists or can be imported,
299 so as to avoid premature evaluation of the module.
300 This is necessary to prevent circular dependencies if the module path
301 is internal to Kolibri, and also because the module may not be available
302 in some contexts.
303 """
304 if not isinstance(value, string_types):
305 raise VdtValueError(value)
306 try:
307 # Check that the string is at least parseable as a module name
308 ast.parse(value)
309 except SyntaxError:
310 raise VdtValueError(value)
311 # We seem to have something that is somewhat valid, so return a function
312 # that does the import and tries to invoke the returned function.
313
314 return LazyImportFunction(value)
315
316
317 def lazy_import_callback_list(value):
318 """
319 Check that the supplied value is a list of import paths.
320
321 :param list[str] value: A list of strings that are valid import paths
322 """
323 value = _process_list(value)
324
325 out = []
326 errors = []
327 for entry in value:
328 try:
329 entry_list = lazy_import_callback(entry)
330 out.append(entry_list)
331 except ValueError:
332 errors.append(entry)
333 if errors:
334 raise VdtValueError(errors)
335
336 return out
337
338
339 base_option_spec = {
340 "Cache": {
341 "CACHE_BACKEND": {
342 "type": "option",
343 "options": ("memory", "redis"),
344 "default": "memory",
345 "description": """
346 Which backend to use for the main cache - if 'memory' is selected, then for most cache operations,
347 an in-memory, process-local cache will be used, but a disk based cache will be used for some data
348 that needs to be persistent across processes. If 'redis' is used, it is used for all caches.
349 """,
350 },
351 "CACHE_TIMEOUT": {
352 "type": "integer",
353 "default": 300,
354 "description": "Default timeout for entries put into the cache.",
355 },
356 "CACHE_MAX_ENTRIES": {
357 "type": "integer",
358 "default": 1000,
359 "description": "Maximum number of entries to maintain in the cache at once.",
360 },
361 "CACHE_PASSWORD": {
362 "type": "string",
363 "default": "",
364 "description": "Password to authenticate to Redis, Redis only.",
365 },
366 "CACHE_LOCATION": {
367 "type": "string",
368 "default": "localhost:6379",
369 "description": "Host and port at which to connect to Redis, Redis only.",
370 },
371 "CACHE_REDIS_DB": {
372 "type": "integer",
373 "default": 0,
374 "description": "The database number for Redis.",
375 "deprecated_aliases": ("CACHE_REDIS_MIN_DB",),
376 },
377 "CACHE_REDIS_MAX_POOL_SIZE": {
378 "type": "integer",
379 "default": 50, # use redis-benchmark to determine better value
380 "description": "Maximum number of simultaneous connections to allow to Redis, Redis only.",
381 },
382 "CACHE_REDIS_POOL_TIMEOUT": {
383 "type": "integer",
384 "default": 30, # seconds
385 "description": "How long to wait when trying to connect to Redis before timing out, Redis only.",
386 },
387 # Optional redis settings to overwrite redis.conf
388 "CACHE_REDIS_MAXMEMORY": {
389 "type": "integer",
390 "default": 0,
391 "description": "Maximum memory that Redis should use, Redis only.",
392 },
393 "CACHE_REDIS_MAXMEMORY_POLICY": {
394 "type": "option",
395 "options": (
396 "",
397 "allkeys-lru",
398 "volatile-lru",
399 "allkeys-random",
400 "volatile-random",
401 "volatile-ttl",
402 "noeviction",
403 ),
404 "default": "",
405 "description": "Eviction policy to use when using Redis for caching, Redis only.",
406 },
407 "STREAMED_FILE_CACHE_SIZE": {
408 "type": "bytes",
409 "default": "500MB",
410 "description": """
411 Disk space to be used for caching streamed files. This is used for caching files that are
412 being streamed from remote libraries, if these files are later imported, these should be cleaned up,
413 and will no longer count to this cache size.
414 Value can either be a number suffixed with a unit (e.g. MB, GB, TB) or an integer number of bytes.
415 """,
416 },
417 },
418 "Database": {
419 "DATABASE_ENGINE": {
420 "type": "option",
421 "options": ("sqlite", "postgres"),
422 "default": "sqlite",
423 "description": "Which database backend to use, choices are 'sqlite' or 'postgres'",
424 },
425 "DATABASE_NAME": {
426 "type": "string",
427 "description": """
428 For SQLite - the name of a database file to use for the main Kolibri database.
429 For Postgresql, the name of the database to use for all Kolibri data.
430 """,
431 },
432 "DATABASE_PASSWORD": {
433 "type": "string",
434 "description": "The password to authenticate with when connecting to the database, Postgresql only.",
435 },
436 "DATABASE_USER": {
437 "type": "string",
438 "description": "The user to authenticate with when connecting to the database, Postgresql only.",
439 },
440 "DATABASE_HOST": {
441 "type": "string",
442 "description": "The host on which to connect to the database, Postgresql only.",
443 },
444 "DATABASE_PORT": {
445 "type": "string",
446 "description": "The port on which to connect to the database, Postgresql only.",
447 },
448 },
449 "Server": {
450 "CHERRYPY_START": {
451 "type": "boolean",
452 "default": True,
453 "description": "DEPRECATED - do not use this option, use the 'kolibri services' command instead.",
454 "deprecated": True,
455 },
456 "CHERRYPY_THREAD_POOL": {
457 "type": "integer",
458 "default": calculate_thread_pool(),
459 "description": "How many threads the Kolibri server should use to serve requests",
460 },
461 "CHERRYPY_SOCKET_TIMEOUT": {
462 "type": "integer",
463 "default": 10,
464 "description": """
465 How long a socket should wait for data flow to resume before
466 it considers that the connection has been interrupted.
467 Increasing this may help in situations where there is high
468 latency on a network or the bandwidth is bursty, with some
469 expected data flow interruptions which may not be indicative of the connection failing.
470 """,
471 },
472 "CHERRYPY_QUEUE_SIZE": {
473 "type": "integer",
474 "default": 30,
475 "description": """
476 How many requests to allow in the queue.
477 Increasing this may help situations where requests are instantly refused by the server.
478 """,
479 },
480 "CHERRYPY_QUEUE_TIMEOUT": {
481 "type": "float",
482 "default": 0.1,
483 "description": """
484 How many seconds to wait for a request to be put into the queue.
485 Increasing this may help situations where requests are instantly refused by the server.
486 """,
487 },
488 "PROFILE": {
489 "type": "boolean",
490 "default": False,
491 "envvars": ("KOLIBRI_SERVER_PROFILE",),
492 "description": "Activate the server profiling middleware.",
493 },
494 "DEBUG": {
495 "type": "boolean",
496 "default": False,
497 "description": "Run Kolibri with Django setting DEBUG = True",
498 },
499 "DEBUG_LOG_DATABASE": {
500 "type": "boolean",
501 "default": False,
502 "description": "Activate debug logging for Django ORM operations.",
503 },
504 },
505 "Paths": {
506 "CONTENT_DIR": {
507 "type": "path",
508 "default": "content",
509 "description": """
510 The directory that will store content files and content database files.
511 To change this in a currently active server it is recommended to use the
512 'content movedirectory' management command.
513 """,
514 },
515 "CONTENT_FALLBACK_DIRS": {
516 "type": "path_list",
517 "default": "",
518 "description": "Additional directories in which Kolibri will look for content files and content database files.",
519 },
520 "AUTOMATIC_PROVISION_FILE": {
521 "type": "path",
522 "default": "",
523 "description": "The file that contains the automatic device provisioning data.",
524 },
525 },
526 "Urls": {
527 "CENTRAL_CONTENT_BASE_URL": {
528 "type": "string",
529 "default": "https://studio.learningequality.org",
530 "deprecated_envvars": ("CENTRAL_CONTENT_DOWNLOAD_BASE_URL",),
531 "description": """
532 URL to use as the default source for content import.
533 Slightly counterintuitively this will still be displayed in the UI as 'import from Kolibri Studio'.
534 """,
535 },
536 "DATA_PORTAL_SYNCING_BASE_URL": {
537 "type": "string",
538 "default": "https://kolibridataportal.learningequality.org",
539 "description": "URL to use as the target for data portal syncing.",
540 },
541 },
542 "Deployment": {
543 "HTTP_PORT": {
544 "type": "port",
545 "default": 8080,
546 "deprecated_envvars": ("KOLIBRI_LISTEN_PORT",),
547 "description": "Sets the port that Kolibri will serve on. This can be further overridden by command line arguments.",
548 },
549 "RUN_MODE": {
550 "type": "string",
551 "description": "Used to flag non-user Kolibri instances",
552 "skip_blank": True,
553 },
554 "DISABLE_PING": {
555 "type": "boolean",
556 "default": False,
557 "description": "Turn off the statistics pingback. This will also disable update notifications",
558 },
559 "URL_PATH_PREFIX": {
560 "type": "url_prefix",
561 "default": "/",
562 "description": """
563 Serve Kolibri from a subpath under the main domain. Used when serving multiple applications from
564 the same origin. This option is not heavily tested, but is provided for user convenience.
565 """,
566 },
567 "LANGUAGES": {
568 "type": "language_list",
569 "default": SUPPORTED_LANGUAGES,
570 "description": """
571 The user interface languages to enable on this instance of Kolibri (has no effect on languages of imported content channels).
572 The default will include all the languages Kolibri supports.
573 """,
574 },
575 "ZIP_CONTENT_ORIGIN": {
576 "type": "origin_or_port",
577 "default": "",
578 "description": """
579 When running by default (value blank), Kolibri frontend looks for the zipcontent endpoints
580 on the same domain as Kolibri proper, but uses ZIP_CONTENT_PORT instead of HTTP_PORT.
581 When running behind a proxy, set the value to the port where zipcontent endpoint is served on,
582 and it will be substituted for the port that Kolibri proper is being served on.
583 When zipcontent is being served from a completely separate domain, you can set an
584 absolute origin (full protocol plus domain, e.g. 'https://myzipcontent.com/')
585 to be used for all zipcontent origin requests.
586 It is strongly recommended that zipcontent is served from a different origin from the main Kolibri app,
587 either by port or domain, to allow for proper sandboxing.
588 """,
589 },
590 "ZIP_CONTENT_PORT": {
591 "type": "port",
592 "default": 0,
593 "description": """
594 Sets the port that Kolibri will serve the alternate origin server on. This is the server that
595 is used to serve all content for the zipcontent endpoint, so as to provide safe IFrame sandboxing
596 but avoiding issues with null origins.
597 This is the alternate origin server equivalent of HTTP_PORT.
598 It is strongly recommended that zipcontent is served from a different origin from the main Kolibri app,
599 either by port or domain, to allow for proper sandboxing.
600 """,
601 },
602 "ZIP_CONTENT_URL_PATH_PREFIX": {
603 "type": "url_prefix",
604 "default": "/",
605 "description": """
606 The zip content equivalent of URL_PATH_PREFIX - allows all zip content URLs to be prefixed with
607 a fixed path. This both changes the URL from which the endpoints are served by the alternate
608 origin server, and the URL prefix where the Kolibri frontend looks for it.
609 In the case that ZIP_CONTENT_ORIGIN is pointing to an entirely separate origin, this setting
610 can still be used to set a URL prefix that the frontend of Kolibri will look to when
611 retrieving alternate origin URLs.
612 """,
613 },
614 "REMOTE_CONTENT": {
615 "type": "boolean",
616 "default": False,
617 "description": """
618 Boolean flag that causes content import processes to skip trying to import any
619 content, as it is assumed that the remote source has everything available.
620 Server configuration should handle ensuring that the files are properly served.
621 """,
622 },
623 "SYNC_INTERVAL": {
624 "type": "integer",
625 "default": 60,
626 "description": """
627 In case a SoUD connects to this server, the SoUD should use this interval to resync every user.
628 """,
629 },
630 "PROJECT": {
631 "type": "string",
632 "skip_blank": True,
633 "description": """
634 The custom identifier for a project. This is used to identify the project in the telemetry
635 data that is returned to our telemetry server.
636 """,
637 },
638 "MINIMUM_DISK_SPACE": {
639 "type": "bytes",
640 "default": "250MB",
641 "description": """
642 The minimum free disk space that Kolibri should try to maintain on the device. This will
643 be used as the floor value to prevent Kolibri completely filling the disk during file import.
644 Value can either be a number suffixed with a unit (e.g. MB, GB, TB) or an integer number of bytes.
645 """,
646 },
647 "LISTEN_ADDRESS": {
648 "type": "ip_addr",
649 "default": "0.0.0.0",
650 "description": """
651 The address that the server should listen on. This can be used to restrict access to the server
652 to a specific network interface.
653 """,
654 },
655 "RESTART_HOOKS": {
656 "type": "lazy_import_callback_list",
657 "default": ["kolibri.utils.server.signal_restart"],
658 "description": """
659 A list of module paths for function callbacks that will be called when server restart is called.
660 The default is to disallow server restarts, so callbacks need to be added to enable restarting.
661 """,
662 },
663 },
664 "Python": {
665 "PICKLE_PROTOCOL": {
666 "type": "integer",
667 "default": 2,
668 "description": """
669 Which Python pickle protocol to use. Pinned to 2 for now to provide maximal cross-Python version compatibility.
670 Can safely be set to a higher value for deployments that will never change Python versions.
671 """,
672 }
673 },
674 "Tasks": {
675 "USE_WORKER_MULTIPROCESSING": {
676 "type": "multiprocess_bool",
677 "default": False,
678 "description": """
679 Whether to use Python multiprocessing for worker pools. If False, then it will use threading. This may be useful,
680 if running on a dedicated device with multiple cores, and a lot of asynchronous tasks get run.
681 """,
682 },
683 "REGULAR_PRIORITY_WORKERS": {
684 "type": "integer",
685 "default": 4,
686 "description": """
687 The number of workers to spin up for regular priority asynchronous tasks.
688 """,
689 },
690 "HIGH_PRIORITY_WORKERS": {
691 "type": "integer",
692 "default": 2,
693 "description": """
694 The number of workers to spin up for high priority asynchronous tasks.
695 """,
696 },
697 "JOB_STORAGE_FILEPATH": {
698 "type": "path",
699 "default": "job_storage.sqlite3",
700 "description": """
701 The file to use for the job storage database. This is only used in the case that the database backend being used is SQLite.
702 """,
703 },
704 },
705 }
706
707
708 def _get_validator():
709 return Validator(
710 {
711 "language_list": language_list,
712 "path": path,
713 "path_list": path_list,
714 "origin_or_port": origin_or_port,
715 "port": port,
716 "url_prefix": url_prefix,
717 "bytes": validate_bytes,
718 "multiprocess_bool": multiprocess_bool,
719 "lazy_import_callback_list": lazy_import_callback_list,
720 }
721 )
722
723
724 def _get_option_spec():
725 """
726 Combine the default option spec with any options that are defined in plugins
727 """
728 option_spec = extend_config_spec(base_option_spec)
729 envvars = set()
730 for section, opts in option_spec.items():
731 for optname, attrs in opts.items():
732 if "deprecated_aliases" in attrs:
733 attrs["deprecated_envvars"] = attrs.get("deprecated_envvars", ())
734 for alias in attrs["deprecated_aliases"]:
735 alias_ev = "KOLIBRI_{}".format(alias)
736 if alias_ev not in envvars:
737 attrs["deprecated_envvars"] += (alias_ev,)
738
739 opt_envvars = attrs.get("envvars", ()) + attrs.get("deprecated_envvars", ())
740 default_envvar = "KOLIBRI_{}".format(optname.upper())
741 if default_envvar not in envvars:
742 envvars.add(default_envvar)
743 else:
744 logging.warning(
745 "Duplicate environment variable for options {}".format(
746 default_envvar
747 )
748 )
749 default_envvar = "KOLIBRI_{}_{}".format(
750 section.upper(), optname.upper()
751 )
752 if default_envvar not in opt_envvars:
753 attrs["envvars"] = (default_envvar,) + opt_envvars
754 return option_spec
755
756
757 option_spec = SimpleLazyObject(_get_option_spec)
758
759
760 def get_configspec():
761 """
762 Read the option_spec dict defined above, and turn it into a "configspec" object (per the configobj library)
763 so that we can use it to parse the options.ini file.
764 """
765
766 lines = []
767
768 for section, opts in option_spec.items():
769 lines.append("[{section}]".format(section=section))
770 for name, attrs in opts.items():
771 default = attrs.get("default", "")
772 if isinstance(default, list) and not default:
773 raise RuntimeError("For an empty list don't specify a default")
774 the_type = attrs["type"]
775 args = ["%r" % op for op in attrs.get("options", [])] + [
776 "default=list('{default_list}')".format(
777 default_list="','".join(default)
778 )
779 if isinstance(default, list)
780 else "default='{default}'".format(default=default)
781 ]
782 line = "{name} = {type}({args})".format(
783 name=name, type=the_type, args=", ".join(args)
784 )
785 lines.append(line)
786
787 return ConfigObj(lines, _inspec=True)
788
789
790 def _set_from_envvars(conf):
791 """
792 Set the configuration from environment variables.
793 """
794 # keep track of which options were overridden using environment variables, to support error reporting
795 using_env_vars = {}
796
797 deprecation_warning = "Option {optname} in section [{section}] being overridden by deprecated environment variable {envvar}, please update to: {envvars}"
798 # override any values from their environment variables (if set)
799 # and check for use of deprecated environment variables and options
800 for section, opts in option_spec.items():
801 for optname, attrs in opts.items():
802 for envvar in attrs.get("envvars", []):
803 if envvar in os.environ:
804 deprecated_envvars = attrs.get("deprecated_envvars", ())
805 if envvar in deprecated_envvars:
806 logger.warning(
807 deprecation_warning.format(
808 optname=optname,
809 section=section,
810 envvar=envvar,
811 envvars=", ".join(
812 e
813 for e in attrs.get("envvars", [])
814 if e not in deprecated_envvars
815 ),
816 )
817 )
818 else:
819 logger.info(
820 "Option {optname} in section [{section}] being overridden by environment variable {envvar}".format(
821 optname=optname, section=section, envvar=envvar
822 )
823 )
824 if attrs.get("deprecated", False):
825 logger.warning(
826 "Option {optname} in section [{section}] is deprecated, please remove it from your options.ini file".format(
827 optname=optname, section=section
828 )
829 )
830 conf[section][optname] = os.environ[envvar]
831 using_env_vars[optname] = envvar
832 break
833 return using_env_vars
834
835
836 def _set_from_deprecated_aliases(conf):
837 """
838 Set the configuration from deprecated aliases.
839 """
840 # keep track of which options were overridden using environment variables, to support error reporting
841 using_deprecated_alias = {}
842
843 deprecation_warning = "Option {optname} in section [{section}] being set by deprecated alias {alias}, please update to: {optname}"
844 # override any values from their environment variables (if set)
845 # and check for use of deprecated environment variables and options
846 for section, opts in option_spec.items():
847 for optname, attrs in opts.items():
848 for alias in attrs.get("deprecated_aliases", ()):
849 if alias in conf[section]:
850 logger.warning(
851 deprecation_warning.format(
852 optname=optname,
853 section=section,
854 alias=alias,
855 )
856 )
857 conf[section][optname] = conf[section][alias]
858 del conf[section][alias]
859 using_deprecated_alias[optname] = alias
860 break
861 return using_deprecated_alias
862
863
864 def read_options_file(ini_filename="options.ini"):
865
866 from kolibri.utils.conf import KOLIBRI_HOME
867
868 ini_path = os.path.join(KOLIBRI_HOME, ini_filename)
869
870 conf = ConfigObj(ini_path, configspec=get_configspec())
871
872 # Check for use of deprecated options
873 for section, opts in option_spec.items():
874 for optname, attrs in opts.items():
875 if (
876 attrs.get("deprecated", False)
877 and section in conf
878 and optname in conf[section]
879 ):
880 logger.warning(
881 "Option {optname} in section [{section}] is deprecated, please remove it from your options.ini file".format(
882 optname=optname, section=section
883 )
884 )
885
886 # validate once up front to ensure section structure is in place
887 conf.validate(_get_validator())
888
889 using_env_vars = _set_from_envvars(conf)
890
891 using_deprecated_alias = _set_from_deprecated_aliases(conf)
892
893 validation = conf.validate(_get_validator(), preserve_errors=True)
894
895 # loop over and display any errors with config values, and then bail
896 if validation is not True:
897 for section_list, optname, error in flatten_errors(conf, validation):
898 section = section_list[0]
899 if optname in using_env_vars:
900 logger.error(
901 "Error processing environment variable option {envvar}: {error}".format(
902 envvar=using_env_vars[optname], error=error
903 )
904 )
905 elif optname in using_deprecated_alias:
906 logger.error(
907 "Error processing {file} under section [{section}] for option {alias}: {error}".format(
908 file=ini_path,
909 section=section,
910 alias=using_deprecated_alias[optname],
911 error=error,
912 )
913 )
914 else:
915 logger.error(
916 "Error processing {file} under section [{section}] for option {option}: {error}".format(
917 file=ini_path, section=section, option=optname, error=error
918 )
919 )
920 logger.critical(
921 "Aborting: Could not process options config (see errors above for more details)"
922 )
923 raise SystemExit(1)
924
925 # loop over any extraneous options and warn the user that we're ignoring them
926 for sections, name in get_extra_values(conf):
927
928 # this code gets the extra values themselves
929 the_section = conf
930 for section in sections:
931 the_section = the_section[section]
932
933 # the_value may be a section or a value
934 the_value = the_section.pop(name)
935
936 # determine whether the extra item is a section (dict) or value
937 kind = "section" if isinstance(the_value, dict) else "option"
938
939 logger.warning(
940 "Ignoring unknown {kind} in options file {file} under {section}: {name}.".format(
941 kind=kind,
942 file=ini_path,
943 section=sections[0] if sections else "top level",
944 name=name,
945 )
946 )
947
948 # run validation once again to fill in any default values for options we deleted due to issues
949 conf.validate(_get_validator())
950
951 return conf
952
953
954 def update_options_file(section, key, value, ini_filename="options.ini"):
955 """
956 Updates the configuration file on top of what is currently in the
957 file.
958
959 Note to future: Do not change the implementation to write the
960 in-memory conf.OPTIONS as it can contain temporary in-memory values
961 that are not intended to be stored.
962 """
963
964 # load the current conf from disk into memory
965 conf = read_options_file(ini_filename=ini_filename)
966
967 # update the requested option value
968 conf[section][key] = value
969
970 # check for any errors with the provided value, and abort
971 validation = conf.validate(_get_validator(), preserve_errors=True)
972 if validation is not True:
973 error = validation.get(section, {}).get(key) or "unknown error"
974 raise ValueError(
975 "Unable to set {key} in {file}: {error}".format(
976 key=key, file=ini_filename, error=error
977 )
978 )
979
980 # write the settings file back to disk
981 conf.write()
982
983 logger.warning(
984 "Options file {file} has been updated; server restart is required before change will take effect.".format(
985 file=conf.filename
986 )
987 )
988
989
990 def generate_empty_options_file(ini_filename="options.ini"):
991 # Generate an options.ini file inside the KOLIBRI_HOME as default placeholder config
992
993 conf = read_options_file(ini_filename=ini_filename)
994
995 comments = None
996
997 for section, opts in option_spec.items():
998 if comments is not None:
999 conf.comments[section] = comments
1000 comments = []
1001 for optname, attrs in opts.items():
1002 if not attrs.get("skip_blank", False) and not attrs.get(
1003 "deprecated", False
1004 ):
1005 if "description" in attrs:
1006 comments.extend(attrs["description"].strip().split("\n"))
1007 comments.append("{} = {}".format(optname, attrs.get("default", "")))
1008 comments.append("")
1009 conf.final_comment = comments
1010
1011 conf.write()
1012
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kolibri/utils/options.py b/kolibri/utils/options.py
--- a/kolibri/utils/options.py
+++ b/kolibri/utils/options.py
@@ -992,6 +992,12 @@
 
     conf = read_options_file(ini_filename=ini_filename)
 
+    for section, opts in option_spec.items():
+        for optname, attrs in opts.items():
+            for envvar in attrs.get("envvars", []):
+                if envvar in os.environ:
+                    conf[section].pop(optname, None)
+
     comments = None
 
     for section, opts in option_spec.items():
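
For context, the hunk above removes any option whose value was injected from an environment variable from the in-memory `ConfigObj` before `generate_empty_options_file` writes the placeholder `options.ini`, so that ephemeral configuration is not persisted to disk. The sketch below illustrates the same pattern in isolation; the plain dicts and the `KOLIBRI_DEBUG` variable name are illustrative stand-ins for Kolibri's real `ConfigObj` and `option_spec`, not the actual objects.

```python
# Minimal sketch (assumptions noted above): drop options whose values came from
# environment variables before writing a placeholder config file to disk.
import os

option_spec = {
    "Server": {
        # Envvar name assumed for illustration; Kolibri derives defaults like KOLIBRI_<OPTNAME>.
        "DEBUG": {"envvars": ("KOLIBRI_DEBUG",)},
    },
}

def drop_env_overridden(conf, spec):
    """Remove options that were set via environment variables from `conf`."""
    for section, opts in spec.items():
        for optname, attrs in opts.items():
            for envvar in attrs.get("envvars", ()):
                if envvar in os.environ:
                    # pop() with a default ignores options that were never read from disk
                    conf.get(section, {}).pop(optname, None)
                    break

os.environ["KOLIBRI_DEBUG"] = "True"
conf = {"Server": {"DEBUG": "True"}}   # value that only exists because of the envvar
drop_env_overridden(conf, option_spec)
assert conf == {"Server": {}}          # nothing ephemeral left to write out
```

In this sketch, only the on-disk and default values survive into the generated file, which matches the behaviour the patch introduces for the reported issue (an envvar- or flag-driven setting such as `DEBUG = True` no longer ends up uncommented in a freshly generated `options.ini`).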
| {"golden_diff": "diff --git a/kolibri/utils/options.py b/kolibri/utils/options.py\n--- a/kolibri/utils/options.py\n+++ b/kolibri/utils/options.py\n@@ -992,6 +992,12 @@\n \n conf = read_options_file(ini_filename=ini_filename)\n \n+ for section, opts in option_spec.items():\n+ for optname, attrs in opts.items():\n+ for envvar in attrs.get(\"envvars\", []):\n+ if envvar in os.environ:\n+ conf[section].pop(optname, None)\n+\n comments = None\n \n for section, opts in option_spec.items():\n", "issue": "Environment variables and configuration flags for Kolibri options are persisted to the `options.ini` file by `generate_empty_options_file`\n## Observed behavior\r\nWhen using either environment variables or configuration flags like `--port` or `--debug` on the first run of a Kolibri device, these will be written to the empty `options.ini` file generated by `generate_empty_options_file` - this means that settings chosen on the first run by ephemeral configuration like environment variables and runtime flags will be persisted for subsequent runs of the server.\r\n\r\n## Expected behavior\r\nEphemeral configuration should not be persisted to disk.\r\n\r\n## User-facing consequences\r\nThe main place this could be problematic is if someone is using a base Kolibri to generate a `KOLIBRI_HOME` image and they do not intend their configuration flags to be persisted.\r\n\r\nAdditionally, Kolibri developers might be confused why flags are being persisted when they shouldn't.\r\n\r\n## Steps to reproduce\r\nRun kolibri with a new `KOLIBRI_HOME` directory. Run with the `--debug` flag.\r\n\r\nSee that `options.ini` contains an uncommented line `DEBUG = True`\r\n\r\n## Context\r\nTell us about your environment, including:\r\n * Kolibri version: 0.15.5\r\n * Operating system: Ubuntu\n", "before_files": [{"content": "\"\"\"\nThis module is intended to allow customization of Kolibri settings with the\noptions.ini file.\nThe settings can be changed through environment variables or sections and keys\nin the options.ini file.\n\"\"\"\nimport ast\nimport logging\nimport os\nimport sys\nfrom functools import update_wrapper\n\nfrom configobj import ConfigObj\nfrom configobj import flatten_errors\nfrom configobj import get_extra_values\nfrom django.utils.functional import SimpleLazyObject\nfrom django.utils.module_loading import import_string\nfrom django.utils.six import string_types\nfrom six.moves.urllib.parse import urlparse\nfrom six.moves.urllib.parse import urlunparse\nfrom validate import is_boolean\nfrom validate import Validator\nfrom validate import VdtTypeError\nfrom validate import VdtValueError\n\ntry:\n import kolibri.utils.pskolibri as psutil\nexcept NotImplementedError:\n # This module can't work on this OS\n psutil = None\n\n\nfrom kolibri.utils.data import bytes_from_humans\nfrom kolibri.utils.i18n import KOLIBRI_LANGUAGE_INFO\nfrom kolibri.utils.i18n import KOLIBRI_SUPPORTED_LANGUAGES\nfrom kolibri.plugins.utils.options import extend_config_spec\nfrom kolibri.deployment.default.sqlite_db_names import (\n ADDITIONAL_SQLITE_DATABASES,\n)\nfrom kolibri.utils.system import get_fd_limit\n\n\nlogger = logging.getLogger(__name__)\n\n\nCACHE_SHARDS = 8\n\n# file descriptors per thread\nFD_PER_THREAD = sum(\n (\n 5, # minimum allowance\n 1 + len(ADDITIONAL_SQLITE_DATABASES), # DBs assuming SQLite\n CACHE_SHARDS, # assuming diskcache\n )\n)\n\n# Reserve some file descriptors for file operations happening in asynchronous tasks\n# when the server is running with threaded task runners.\nMIN_RESERVED_FD = 
64\n\n\ndef calculate_thread_pool():\n \"\"\"\n Returns the default value for CherryPY thread_pool:\n - calculated based on the best values obtained in several partners installations\n - servers with more memory can deal with more threads\n - calculations are done for servers with more than 2 Gb of RAM\n - restricts value to avoid exceeding file descriptor limit\n \"\"\"\n MIN_POOL = 50\n MAX_POOL = 150\n\n pool_size = MIN_POOL\n if psutil:\n MIN_MEM = 2\n MAX_MEM = 6\n total_memory = psutil.virtual_memory().total / pow(10, 9) # in GB\n # if it's in the range, scale thread count linearly with available memory\n if MIN_MEM < total_memory < MAX_MEM:\n pool_size = MIN_POOL + int(\n (MAX_POOL - MIN_POOL)\n * float(total_memory - MIN_MEM)\n / (MAX_MEM - MIN_MEM)\n )\n elif total_memory >= MAX_MEM:\n pool_size = MAX_POOL\n elif sys.platform.startswith(\n \"darwin\"\n ): # Considering MacOS has at least 4 Gb of RAM\n pool_size = MAX_POOL\n\n # ensure (number of threads) x (open file descriptors) < (fd limit)\n max_threads = (get_fd_limit() - MIN_RESERVED_FD) // FD_PER_THREAD\n # Ensure that the number of threads never goes below 1\n return max(1, min(pool_size, max_threads))\n\n\nALL_LANGUAGES = \"kolibri-all\"\nSUPPORTED_LANGUAGES = \"kolibri-supported\"\n\n\ndef _process_language_string(value):\n \"\"\"\n Used to validate string values.\n The only valid argument in this case is that it is a string\n so we first try to coerce it to a string, then do some checks\n to see if it is any of our special values. Then if it is an\n appropriate language code value.\n If no value is appropriate, raise a ValueError.\n \"\"\"\n value = str(value)\n if value == ALL_LANGUAGES:\n return list(KOLIBRI_LANGUAGE_INFO.keys())\n if value == SUPPORTED_LANGUAGES:\n return list(KOLIBRI_SUPPORTED_LANGUAGES)\n if value in KOLIBRI_LANGUAGE_INFO:\n return [value]\n raise ValueError\n\n\ndef _process_list(value, separator=\",\"):\n \"\"\"\n Used to validate list values.\n The only valid argument in this case is that it is a list\n so we first try to coerce it to a list, then do some checks\n to see if it is any of our special values. 
Then if it is an\n appropriate list value.\n If no value is appropriate, raise a ValueError.\n \"\"\"\n\n # Check the supplied value is a list\n if not isinstance(value, list):\n if not value:\n value = []\n elif isinstance(value, string_types):\n value = value.split(separator)\n else:\n value = [value]\n return value\n\n\ndef language_list(value):\n \"\"\"\n Check that the supplied value is a list of languages,\n or a single language, or a special shortcut parameter.\n In the case that it is a special shortcut name, we return the full list\n of relevant languages for that parameter, or throw a validation error\n if that parameter would return an empty list.\n If a single language code is the parameter, this function will return a list\n with that language code as the only member.\n\n :param Union[str, list[str]] value: Either a string or a list of strings\n String can be any value that is a key of KOLIBRI_LANGUAGE_INFO\n or one of the special strings represented by ALL_LANGUAGES or SUPPORTED_LANGUAGES\n A list must be a list of these strings.\n \"\"\"\n value = _process_list(value)\n\n out = set()\n errors = []\n for entry in value:\n try:\n entry_list = _process_language_string(entry)\n out.update(entry_list)\n except ValueError:\n errors.append(entry)\n if errors:\n raise VdtValueError(errors)\n\n if not out:\n raise VdtValueError(value)\n\n return sorted(list(out))\n\n\ndef path(value):\n from kolibri.utils.conf import KOLIBRI_HOME\n\n if not isinstance(value, string_types):\n raise VdtValueError(repr(value))\n # Allow for blank paths\n if value:\n # ensure all path arguments, e.g. under section \"Paths\", are fully resolved and expanded, relative to KOLIBRI_HOME\n return os.path.join(KOLIBRI_HOME, os.path.expanduser(value))\n return value\n\n\ndef path_list(value):\n \"\"\"\n Check that the supplied value is a semicolon-delimited list of paths.\n Note: we do not guarantee that these paths all currently exist.\n \"\"\"\n if isinstance(value, string_types):\n value = value.split(\";\")\n\n out = []\n\n if isinstance(value, list):\n errors = []\n for item in value:\n try:\n out.append(path(item))\n except VdtValueError:\n errors.append(repr(item))\n if errors:\n raise VdtValueError(errors)\n\n return out\n\n\ndef validate_port_number(value):\n if 0 <= value <= 65535:\n return value\n raise VdtValueError(value)\n\n\ndef port(value):\n try:\n return validate_port_number(int(value))\n except ValueError:\n raise VdtTypeError(value)\n\n\ndef origin_or_port(value):\n \"\"\"\n Check that the passed value can either be coerced to an integer to supply\n a port, or is a valid origin string.\n\n :param Union[integer, str] value: Either an integer or a string\n \"\"\"\n if value != \"\":\n try:\n value = validate_port_number(int(value))\n except ValueError:\n url = urlparse(value)\n if not url.scheme or not url.netloc:\n raise VdtValueError(value)\n value = urlunparse((url.scheme, url.netloc, \"\", \"\", \"\", \"\"))\n return value\n\n\ndef validate_bytes(value):\n try:\n value = bytes_from_humans(value)\n except ValueError:\n raise VdtValueError(value)\n return value\n\n\ndef url_prefix(value):\n if not isinstance(value, string_types):\n raise VdtValueError(value)\n return value.lstrip(\"/\").rstrip(\"/\") + \"/\"\n\n\ndef multiprocess_bool(value):\n \"\"\"\n Validate the boolean value of a multiprocessing option.\n Do this by checking it's a boolean, and also that multiprocessing\n can be imported properly on this platform.\n \"\"\"\n value = is_boolean(value)\n try:\n if not value:\n raise 
ImportError()\n # Import in order to check if multiprocessing is supported on this platform\n from multiprocessing import synchronize # noqa\n\n return True\n except ImportError:\n return False\n\n\nclass LazyImportFunction(object):\n \"\"\"\n A function wrapper that will import a module when called.\n We may be able to drop this when Python 2.7 support is dropped\n and use Python LazyLoader module machinery instead.\n \"\"\"\n\n def __init__(self, module_name):\n self.module_name = module_name\n self._fn = None\n\n def __call__(self, *args, **kwargs):\n if self._fn is None:\n fn = import_string(self.module_name)\n if not callable(fn):\n raise ImportError(\"Module {} is not callable\".format(self.module_name))\n self._fn = fn\n update_wrapper(self, self._fn)\n return self._fn(*args, **kwargs)\n\n\ndef lazy_import_callback(value):\n \"\"\"\n Validate that the value is a string that is a valid import name.\n Does not validate that the module exists or can be imported,\n so as to avoid premature evaluation of the module.\n This is necessary to prevent circular dependencies if the module path\n is internal to Kolibri, and also because the module may not be available\n in some contexts.\n \"\"\"\n if not isinstance(value, string_types):\n raise VdtValueError(value)\n try:\n # Check that the string is at least parseable as a module name\n ast.parse(value)\n except SyntaxError:\n raise VdtValueError(value)\n # We seem to have something that is somewhat valid, so return a function\n # that does the import and tries to invoke the returned function.\n\n return LazyImportFunction(value)\n\n\ndef lazy_import_callback_list(value):\n \"\"\"\n Check that the supplied value is a list of import paths.\n\n :param list[str] value: A list of strings that are valid import paths\n \"\"\"\n value = _process_list(value)\n\n out = []\n errors = []\n for entry in value:\n try:\n entry_list = lazy_import_callback(entry)\n out.append(entry_list)\n except ValueError:\n errors.append(entry)\n if errors:\n raise VdtValueError(errors)\n\n return out\n\n\nbase_option_spec = {\n \"Cache\": {\n \"CACHE_BACKEND\": {\n \"type\": \"option\",\n \"options\": (\"memory\", \"redis\"),\n \"default\": \"memory\",\n \"description\": \"\"\"\n Which backend to use for the main cache - if 'memory' is selected, then for most cache operations,\n an in-memory, process-local cache will be used, but a disk based cache will be used for some data\n that needs to be persistent across processes. 
If 'redis' is used, it is used for all caches.\n \"\"\",\n },\n \"CACHE_TIMEOUT\": {\n \"type\": \"integer\",\n \"default\": 300,\n \"description\": \"Default timeout for entries put into the cache.\",\n },\n \"CACHE_MAX_ENTRIES\": {\n \"type\": \"integer\",\n \"default\": 1000,\n \"description\": \"Maximum number of entries to maintain in the cache at once.\",\n },\n \"CACHE_PASSWORD\": {\n \"type\": \"string\",\n \"default\": \"\",\n \"description\": \"Password to authenticate to Redis, Redis only.\",\n },\n \"CACHE_LOCATION\": {\n \"type\": \"string\",\n \"default\": \"localhost:6379\",\n \"description\": \"Host and port at which to connect to Redis, Redis only.\",\n },\n \"CACHE_REDIS_DB\": {\n \"type\": \"integer\",\n \"default\": 0,\n \"description\": \"The database number for Redis.\",\n \"deprecated_aliases\": (\"CACHE_REDIS_MIN_DB\",),\n },\n \"CACHE_REDIS_MAX_POOL_SIZE\": {\n \"type\": \"integer\",\n \"default\": 50, # use redis-benchmark to determine better value\n \"description\": \"Maximum number of simultaneous connections to allow to Redis, Redis only.\",\n },\n \"CACHE_REDIS_POOL_TIMEOUT\": {\n \"type\": \"integer\",\n \"default\": 30, # seconds\n \"description\": \"How long to wait when trying to connect to Redis before timing out, Redis only.\",\n },\n # Optional redis settings to overwrite redis.conf\n \"CACHE_REDIS_MAXMEMORY\": {\n \"type\": \"integer\",\n \"default\": 0,\n \"description\": \"Maximum memory that Redis should use, Redis only.\",\n },\n \"CACHE_REDIS_MAXMEMORY_POLICY\": {\n \"type\": \"option\",\n \"options\": (\n \"\",\n \"allkeys-lru\",\n \"volatile-lru\",\n \"allkeys-random\",\n \"volatile-random\",\n \"volatile-ttl\",\n \"noeviction\",\n ),\n \"default\": \"\",\n \"description\": \"Eviction policy to use when using Redis for caching, Redis only.\",\n },\n \"STREAMED_FILE_CACHE_SIZE\": {\n \"type\": \"bytes\",\n \"default\": \"500MB\",\n \"description\": \"\"\"\n Disk space to be used for caching streamed files. This is used for caching files that are\n being streamed from remote libraries, if these files are later imported, these should be cleaned up,\n and will no longer count to this cache size.\n Value can either be a number suffixed with a unit (e.g. 
MB, GB, TB) or an integer number of bytes.\n \"\"\",\n },\n },\n \"Database\": {\n \"DATABASE_ENGINE\": {\n \"type\": \"option\",\n \"options\": (\"sqlite\", \"postgres\"),\n \"default\": \"sqlite\",\n \"description\": \"Which database backend to use, choices are 'sqlite' or 'postgres'\",\n },\n \"DATABASE_NAME\": {\n \"type\": \"string\",\n \"description\": \"\"\"\n For SQLite - the name of a database file to use for the main Kolibri database.\n For Postgresql, the name of the database to use for all Kolibri data.\n \"\"\",\n },\n \"DATABASE_PASSWORD\": {\n \"type\": \"string\",\n \"description\": \"The password to authenticate with when connecting to the database, Postgresql only.\",\n },\n \"DATABASE_USER\": {\n \"type\": \"string\",\n \"description\": \"The user to authenticate with when connecting to the database, Postgresql only.\",\n },\n \"DATABASE_HOST\": {\n \"type\": \"string\",\n \"description\": \"The host on which to connect to the database, Postgresql only.\",\n },\n \"DATABASE_PORT\": {\n \"type\": \"string\",\n \"description\": \"The port on which to connect to the database, Postgresql only.\",\n },\n },\n \"Server\": {\n \"CHERRYPY_START\": {\n \"type\": \"boolean\",\n \"default\": True,\n \"description\": \"DEPRECATED - do not use this option, use the 'kolibri services' command instead.\",\n \"deprecated\": True,\n },\n \"CHERRYPY_THREAD_POOL\": {\n \"type\": \"integer\",\n \"default\": calculate_thread_pool(),\n \"description\": \"How many threads the Kolibri server should use to serve requests\",\n },\n \"CHERRYPY_SOCKET_TIMEOUT\": {\n \"type\": \"integer\",\n \"default\": 10,\n \"description\": \"\"\"\n How long a socket should wait for data flow to resume before\n it considers that the connection has been interrupted.\n Increasing this may help in situations where there is high\n latency on a network or the bandwidth is bursty, with some\n expected data flow interruptions which may not be indicative of the connection failing.\n \"\"\",\n },\n \"CHERRYPY_QUEUE_SIZE\": {\n \"type\": \"integer\",\n \"default\": 30,\n \"description\": \"\"\"\n How many requests to allow in the queue.\n Increasing this may help situations where requests are instantly refused by the server.\n \"\"\",\n },\n \"CHERRYPY_QUEUE_TIMEOUT\": {\n \"type\": \"float\",\n \"default\": 0.1,\n \"description\": \"\"\"\n How many seconds to wait for a request to be put into the queue.\n Increasing this may help situations where requests are instantly refused by the server.\n \"\"\",\n },\n \"PROFILE\": {\n \"type\": \"boolean\",\n \"default\": False,\n \"envvars\": (\"KOLIBRI_SERVER_PROFILE\",),\n \"description\": \"Activate the server profiling middleware.\",\n },\n \"DEBUG\": {\n \"type\": \"boolean\",\n \"default\": False,\n \"description\": \"Run Kolibri with Django setting DEBUG = True\",\n },\n \"DEBUG_LOG_DATABASE\": {\n \"type\": \"boolean\",\n \"default\": False,\n \"description\": \"Activate debug logging for Django ORM operations.\",\n },\n },\n \"Paths\": {\n \"CONTENT_DIR\": {\n \"type\": \"path\",\n \"default\": \"content\",\n \"description\": \"\"\"\n The directory that will store content files and content database files.\n To change this in a currently active server it is recommended to use the\n 'content movedirectory' management command.\n \"\"\",\n },\n \"CONTENT_FALLBACK_DIRS\": {\n \"type\": \"path_list\",\n \"default\": \"\",\n \"description\": \"Additional directories in which Kolibri will look for content files and content database files.\",\n },\n \"AUTOMATIC_PROVISION_FILE\": 
{\n \"type\": \"path\",\n \"default\": \"\",\n \"description\": \"The file that contains the automatic device provisioning data.\",\n },\n },\n \"Urls\": {\n \"CENTRAL_CONTENT_BASE_URL\": {\n \"type\": \"string\",\n \"default\": \"https://studio.learningequality.org\",\n \"deprecated_envvars\": (\"CENTRAL_CONTENT_DOWNLOAD_BASE_URL\",),\n \"description\": \"\"\"\n URL to use as the default source for content import.\n Slightly counterintuitively this will still be displayed in the UI as 'import from Kolibri Studio'.\n \"\"\",\n },\n \"DATA_PORTAL_SYNCING_BASE_URL\": {\n \"type\": \"string\",\n \"default\": \"https://kolibridataportal.learningequality.org\",\n \"description\": \"URL to use as the target for data portal syncing.\",\n },\n },\n \"Deployment\": {\n \"HTTP_PORT\": {\n \"type\": \"port\",\n \"default\": 8080,\n \"deprecated_envvars\": (\"KOLIBRI_LISTEN_PORT\",),\n \"description\": \"Sets the port that Kolibri will serve on. This can be further overridden by command line arguments.\",\n },\n \"RUN_MODE\": {\n \"type\": \"string\",\n \"description\": \"Used to flag non-user Kolibri instances\",\n \"skip_blank\": True,\n },\n \"DISABLE_PING\": {\n \"type\": \"boolean\",\n \"default\": False,\n \"description\": \"Turn off the statistics pingback. This will also disable update notifications\",\n },\n \"URL_PATH_PREFIX\": {\n \"type\": \"url_prefix\",\n \"default\": \"/\",\n \"description\": \"\"\"\n Serve Kolibri from a subpath under the main domain. Used when serving multiple applications from\n the same origin. This option is not heavily tested, but is provided for user convenience.\n \"\"\",\n },\n \"LANGUAGES\": {\n \"type\": \"language_list\",\n \"default\": SUPPORTED_LANGUAGES,\n \"description\": \"\"\"\n The user interface languages to enable on this instance of Kolibri (has no effect on languages of imported content channels).\n The default will include all the languages Kolibri supports.\n \"\"\",\n },\n \"ZIP_CONTENT_ORIGIN\": {\n \"type\": \"origin_or_port\",\n \"default\": \"\",\n \"description\": \"\"\"\n When running by default (value blank), Kolibri frontend looks for the zipcontent endpoints\n on the same domain as Kolibri proper, but uses ZIP_CONTENT_PORT instead of HTTP_PORT.\n When running behind a proxy, set the value to the port where zipcontent endpoint is served on,\n and it will be substituted for the port that Kolibri proper is being served on.\n When zipcontent is being served from a completely separate domain, you can set an\n absolute origin (full protocol plus domain, e.g. 'https://myzipcontent.com/')\n to be used for all zipcontent origin requests.\n It is strongly recommended that zipcontent is served from a different origin from the main Kolibri app,\n either by port or domain, to allow for proper sandboxing.\n \"\"\",\n },\n \"ZIP_CONTENT_PORT\": {\n \"type\": \"port\",\n \"default\": 0,\n \"description\": \"\"\"\n Sets the port that Kolibri will serve the alternate origin server on. 
This is the server that\n is used to serve all content for the zipcontent endpoint, so as to provide safe IFrame sandboxing\n but avoiding issues with null origins.\n This is the alternate origin server equivalent of HTTP_PORT.\n It is strongly recommended that zipcontent is served from a different origin from the main Kolibri app,\n either by port or domain, to allow for proper sandboxing.\n \"\"\",\n },\n \"ZIP_CONTENT_URL_PATH_PREFIX\": {\n \"type\": \"url_prefix\",\n \"default\": \"/\",\n \"description\": \"\"\"\n The zip content equivalent of URL_PATH_PREFIX - allows all zip content URLs to be prefixed with\n a fixed path. This both changes the URL from which the endpoints are served by the alternate\n origin server, and the URL prefix where the Kolibri frontend looks for it.\n In the case that ZIP_CONTENT_ORIGIN is pointing to an entirely separate origin, this setting\n can still be used to set a URL prefix that the frontend of Kolibri will look to when\n retrieving alternate origin URLs.\n \"\"\",\n },\n \"REMOTE_CONTENT\": {\n \"type\": \"boolean\",\n \"default\": False,\n \"description\": \"\"\"\n Boolean flag that causes content import processes to skip trying to import any\n content, as it is assumed that the remote source has everything available.\n Server configuration should handle ensuring that the files are properly served.\n \"\"\",\n },\n \"SYNC_INTERVAL\": {\n \"type\": \"integer\",\n \"default\": 60,\n \"description\": \"\"\"\n In case a SoUD connects to this server, the SoUD should use this interval to resync every user.\n \"\"\",\n },\n \"PROJECT\": {\n \"type\": \"string\",\n \"skip_blank\": True,\n \"description\": \"\"\"\n The custom identifier for a project. This is used to identify the project in the telemetry\n data that is returned to our telemetry server.\n \"\"\",\n },\n \"MINIMUM_DISK_SPACE\": {\n \"type\": \"bytes\",\n \"default\": \"250MB\",\n \"description\": \"\"\"\n The minimum free disk space that Kolibri should try to maintain on the device. This will\n be used as the floor value to prevent Kolibri completely filling the disk during file import.\n Value can either be a number suffixed with a unit (e.g. MB, GB, TB) or an integer number of bytes.\n \"\"\",\n },\n \"LISTEN_ADDRESS\": {\n \"type\": \"ip_addr\",\n \"default\": \"0.0.0.0\",\n \"description\": \"\"\"\n The address that the server should listen on. This can be used to restrict access to the server\n to a specific network interface.\n \"\"\",\n },\n \"RESTART_HOOKS\": {\n \"type\": \"lazy_import_callback_list\",\n \"default\": [\"kolibri.utils.server.signal_restart\"],\n \"description\": \"\"\"\n A list of module paths for function callbacks that will be called when server restart is called.\n The default is to disallow server restarts, so callbacks need to be added to enable restarting.\n \"\"\",\n },\n },\n \"Python\": {\n \"PICKLE_PROTOCOL\": {\n \"type\": \"integer\",\n \"default\": 2,\n \"description\": \"\"\"\n Which Python pickle protocol to use. Pinned to 2 for now to provide maximal cross-Python version compatibility.\n Can safely be set to a higher value for deployments that will never change Python versions.\n \"\"\",\n }\n },\n \"Tasks\": {\n \"USE_WORKER_MULTIPROCESSING\": {\n \"type\": \"multiprocess_bool\",\n \"default\": False,\n \"description\": \"\"\"\n Whether to use Python multiprocessing for worker pools. If False, then it will use threading. 
This may be useful,\n if running on a dedicated device with multiple cores, and a lot of asynchronous tasks get run.\n \"\"\",\n },\n \"REGULAR_PRIORITY_WORKERS\": {\n \"type\": \"integer\",\n \"default\": 4,\n \"description\": \"\"\"\n The number of workers to spin up for regular priority asynchronous tasks.\n \"\"\",\n },\n \"HIGH_PRIORITY_WORKERS\": {\n \"type\": \"integer\",\n \"default\": 2,\n \"description\": \"\"\"\n The number of workers to spin up for high priority asynchronous tasks.\n \"\"\",\n },\n \"JOB_STORAGE_FILEPATH\": {\n \"type\": \"path\",\n \"default\": \"job_storage.sqlite3\",\n \"description\": \"\"\"\n The file to use for the job storage database. This is only used in the case that the database backend being used is SQLite.\n \"\"\",\n },\n },\n}\n\n\ndef _get_validator():\n return Validator(\n {\n \"language_list\": language_list,\n \"path\": path,\n \"path_list\": path_list,\n \"origin_or_port\": origin_or_port,\n \"port\": port,\n \"url_prefix\": url_prefix,\n \"bytes\": validate_bytes,\n \"multiprocess_bool\": multiprocess_bool,\n \"lazy_import_callback_list\": lazy_import_callback_list,\n }\n )\n\n\ndef _get_option_spec():\n \"\"\"\n Combine the default option spec with any options that are defined in plugins\n \"\"\"\n option_spec = extend_config_spec(base_option_spec)\n envvars = set()\n for section, opts in option_spec.items():\n for optname, attrs in opts.items():\n if \"deprecated_aliases\" in attrs:\n attrs[\"deprecated_envvars\"] = attrs.get(\"deprecated_envvars\", ())\n for alias in attrs[\"deprecated_aliases\"]:\n alias_ev = \"KOLIBRI_{}\".format(alias)\n if alias_ev not in envvars:\n attrs[\"deprecated_envvars\"] += (alias_ev,)\n\n opt_envvars = attrs.get(\"envvars\", ()) + attrs.get(\"deprecated_envvars\", ())\n default_envvar = \"KOLIBRI_{}\".format(optname.upper())\n if default_envvar not in envvars:\n envvars.add(default_envvar)\n else:\n logging.warning(\n \"Duplicate environment variable for options {}\".format(\n default_envvar\n )\n )\n default_envvar = \"KOLIBRI_{}_{}\".format(\n section.upper(), optname.upper()\n )\n if default_envvar not in opt_envvars:\n attrs[\"envvars\"] = (default_envvar,) + opt_envvars\n return option_spec\n\n\noption_spec = SimpleLazyObject(_get_option_spec)\n\n\ndef get_configspec():\n \"\"\"\n Read the option_spec dict defined above, and turn it into a \"configspec\" object (per the configobj library)\n so that we can use it to parse the options.ini file.\n \"\"\"\n\n lines = []\n\n for section, opts in option_spec.items():\n lines.append(\"[{section}]\".format(section=section))\n for name, attrs in opts.items():\n default = attrs.get(\"default\", \"\")\n if isinstance(default, list) and not default:\n raise RuntimeError(\"For an empty list don't specify a default\")\n the_type = attrs[\"type\"]\n args = [\"%r\" % op for op in attrs.get(\"options\", [])] + [\n \"default=list('{default_list}')\".format(\n default_list=\"','\".join(default)\n )\n if isinstance(default, list)\n else \"default='{default}'\".format(default=default)\n ]\n line = \"{name} = {type}({args})\".format(\n name=name, type=the_type, args=\", \".join(args)\n )\n lines.append(line)\n\n return ConfigObj(lines, _inspec=True)\n\n\ndef _set_from_envvars(conf):\n \"\"\"\n Set the configuration from environment variables.\n \"\"\"\n # keep track of which options were overridden using environment variables, to support error reporting\n using_env_vars = {}\n\n deprecation_warning = \"Option {optname} in section [{section}] being overridden by deprecated 
environment variable {envvar}, please update to: {envvars}\"\n # override any values from their environment variables (if set)\n # and check for use of deprecated environment variables and options\n for section, opts in option_spec.items():\n for optname, attrs in opts.items():\n for envvar in attrs.get(\"envvars\", []):\n if envvar in os.environ:\n deprecated_envvars = attrs.get(\"deprecated_envvars\", ())\n if envvar in deprecated_envvars:\n logger.warning(\n deprecation_warning.format(\n optname=optname,\n section=section,\n envvar=envvar,\n envvars=\", \".join(\n e\n for e in attrs.get(\"envvars\", [])\n if e not in deprecated_envvars\n ),\n )\n )\n else:\n logger.info(\n \"Option {optname} in section [{section}] being overridden by environment variable {envvar}\".format(\n optname=optname, section=section, envvar=envvar\n )\n )\n if attrs.get(\"deprecated\", False):\n logger.warning(\n \"Option {optname} in section [{section}] is deprecated, please remove it from your options.ini file\".format(\n optname=optname, section=section\n )\n )\n conf[section][optname] = os.environ[envvar]\n using_env_vars[optname] = envvar\n break\n return using_env_vars\n\n\ndef _set_from_deprecated_aliases(conf):\n \"\"\"\n Set the configuration from deprecated aliases.\n \"\"\"\n # keep track of which options were overridden using environment variables, to support error reporting\n using_deprecated_alias = {}\n\n deprecation_warning = \"Option {optname} in section [{section}] being set by deprecated alias {alias}, please update to: {optname}\"\n # override any values from their environment variables (if set)\n # and check for use of deprecated environment variables and options\n for section, opts in option_spec.items():\n for optname, attrs in opts.items():\n for alias in attrs.get(\"deprecated_aliases\", ()):\n if alias in conf[section]:\n logger.warning(\n deprecation_warning.format(\n optname=optname,\n section=section,\n alias=alias,\n )\n )\n conf[section][optname] = conf[section][alias]\n del conf[section][alias]\n using_deprecated_alias[optname] = alias\n break\n return using_deprecated_alias\n\n\ndef read_options_file(ini_filename=\"options.ini\"):\n\n from kolibri.utils.conf import KOLIBRI_HOME\n\n ini_path = os.path.join(KOLIBRI_HOME, ini_filename)\n\n conf = ConfigObj(ini_path, configspec=get_configspec())\n\n # Check for use of deprecated options\n for section, opts in option_spec.items():\n for optname, attrs in opts.items():\n if (\n attrs.get(\"deprecated\", False)\n and section in conf\n and optname in conf[section]\n ):\n logger.warning(\n \"Option {optname} in section [{section}] is deprecated, please remove it from your options.ini file\".format(\n optname=optname, section=section\n )\n )\n\n # validate once up front to ensure section structure is in place\n conf.validate(_get_validator())\n\n using_env_vars = _set_from_envvars(conf)\n\n using_deprecated_alias = _set_from_deprecated_aliases(conf)\n\n validation = conf.validate(_get_validator(), preserve_errors=True)\n\n # loop over and display any errors with config values, and then bail\n if validation is not True:\n for section_list, optname, error in flatten_errors(conf, validation):\n section = section_list[0]\n if optname in using_env_vars:\n logger.error(\n \"Error processing environment variable option {envvar}: {error}\".format(\n envvar=using_env_vars[optname], error=error\n )\n )\n elif optname in using_deprecated_alias:\n logger.error(\n \"Error processing {file} under section [{section}] for option {alias}: 
{error}\".format(\n file=ini_path,\n section=section,\n alias=using_deprecated_alias[optname],\n error=error,\n )\n )\n else:\n logger.error(\n \"Error processing {file} under section [{section}] for option {option}: {error}\".format(\n file=ini_path, section=section, option=optname, error=error\n )\n )\n logger.critical(\n \"Aborting: Could not process options config (see errors above for more details)\"\n )\n raise SystemExit(1)\n\n # loop over any extraneous options and warn the user that we're ignoring them\n for sections, name in get_extra_values(conf):\n\n # this code gets the extra values themselves\n the_section = conf\n for section in sections:\n the_section = the_section[section]\n\n # the_value may be a section or a value\n the_value = the_section.pop(name)\n\n # determine whether the extra item is a section (dict) or value\n kind = \"section\" if isinstance(the_value, dict) else \"option\"\n\n logger.warning(\n \"Ignoring unknown {kind} in options file {file} under {section}: {name}.\".format(\n kind=kind,\n file=ini_path,\n section=sections[0] if sections else \"top level\",\n name=name,\n )\n )\n\n # run validation once again to fill in any default values for options we deleted due to issues\n conf.validate(_get_validator())\n\n return conf\n\n\ndef update_options_file(section, key, value, ini_filename=\"options.ini\"):\n \"\"\"\n Updates the configuration file on top of what is currently in the\n file.\n\n Note to future: Do not change the implementation to write the\n in-memory conf.OPTIONS as it can contain temporary in-memory values\n that are not intended to be stored.\n \"\"\"\n\n # load the current conf from disk into memory\n conf = read_options_file(ini_filename=ini_filename)\n\n # update the requested option value\n conf[section][key] = value\n\n # check for any errors with the provided value, and abort\n validation = conf.validate(_get_validator(), preserve_errors=True)\n if validation is not True:\n error = validation.get(section, {}).get(key) or \"unknown error\"\n raise ValueError(\n \"Unable to set {key} in {file}: {error}\".format(\n key=key, file=ini_filename, error=error\n )\n )\n\n # write the settings file back to disk\n conf.write()\n\n logger.warning(\n \"Options file {file} has been updated; server restart is required before change will take effect.\".format(\n file=conf.filename\n )\n )\n\n\ndef generate_empty_options_file(ini_filename=\"options.ini\"):\n # Generate an options.ini file inside the KOLIBRI_HOME as default placeholder config\n\n conf = read_options_file(ini_filename=ini_filename)\n\n comments = None\n\n for section, opts in option_spec.items():\n if comments is not None:\n conf.comments[section] = comments\n comments = []\n for optname, attrs in opts.items():\n if not attrs.get(\"skip_blank\", False) and not attrs.get(\n \"deprecated\", False\n ):\n if \"description\" in attrs:\n comments.extend(attrs[\"description\"].strip().split(\"\\n\"))\n comments.append(\"{} = {}\".format(optname, attrs.get(\"default\", \"\")))\n comments.append(\"\")\n conf.final_comment = comments\n\n conf.write()\n", "path": "kolibri/utils/options.py"}], "after_files": [{"content": "\"\"\"\nThis module is intended to allow customization of Kolibri settings with the\noptions.ini file.\nThe settings can be changed through environment variables or sections and keys\nin the options.ini file.\n\"\"\"\nimport ast\nimport logging\nimport os\nimport sys\nfrom functools import update_wrapper\n\nfrom configobj import ConfigObj\nfrom configobj import flatten_errors\nfrom 
configobj import get_extra_values\nfrom django.utils.functional import SimpleLazyObject\nfrom django.utils.module_loading import import_string\nfrom django.utils.six import string_types\nfrom six.moves.urllib.parse import urlparse\nfrom six.moves.urllib.parse import urlunparse\nfrom validate import is_boolean\nfrom validate import Validator\nfrom validate import VdtTypeError\nfrom validate import VdtValueError\n\ntry:\n import kolibri.utils.pskolibri as psutil\nexcept NotImplementedError:\n # This module can't work on this OS\n psutil = None\n\n\nfrom kolibri.utils.data import bytes_from_humans\nfrom kolibri.utils.i18n import KOLIBRI_LANGUAGE_INFO\nfrom kolibri.utils.i18n import KOLIBRI_SUPPORTED_LANGUAGES\nfrom kolibri.plugins.utils.options import extend_config_spec\nfrom kolibri.deployment.default.sqlite_db_names import (\n ADDITIONAL_SQLITE_DATABASES,\n)\nfrom kolibri.utils.system import get_fd_limit\n\n\nlogger = logging.getLogger(__name__)\n\n\nCACHE_SHARDS = 8\n\n# file descriptors per thread\nFD_PER_THREAD = sum(\n (\n 5, # minimum allowance\n 1 + len(ADDITIONAL_SQLITE_DATABASES), # DBs assuming SQLite\n CACHE_SHARDS, # assuming diskcache\n )\n)\n\n# Reserve some file descriptors for file operations happening in asynchronous tasks\n# when the server is running with threaded task runners.\nMIN_RESERVED_FD = 64\n\n\ndef calculate_thread_pool():\n \"\"\"\n Returns the default value for CherryPY thread_pool:\n - calculated based on the best values obtained in several partners installations\n - servers with more memory can deal with more threads\n - calculations are done for servers with more than 2 Gb of RAM\n - restricts value to avoid exceeding file descriptor limit\n \"\"\"\n MIN_POOL = 50\n MAX_POOL = 150\n\n pool_size = MIN_POOL\n if psutil:\n MIN_MEM = 2\n MAX_MEM = 6\n total_memory = psutil.virtual_memory().total / pow(10, 9) # in GB\n # if it's in the range, scale thread count linearly with available memory\n if MIN_MEM < total_memory < MAX_MEM:\n pool_size = MIN_POOL + int(\n (MAX_POOL - MIN_POOL)\n * float(total_memory - MIN_MEM)\n / (MAX_MEM - MIN_MEM)\n )\n elif total_memory >= MAX_MEM:\n pool_size = MAX_POOL\n elif sys.platform.startswith(\n \"darwin\"\n ): # Considering MacOS has at least 4 Gb of RAM\n pool_size = MAX_POOL\n\n # ensure (number of threads) x (open file descriptors) < (fd limit)\n max_threads = (get_fd_limit() - MIN_RESERVED_FD) // FD_PER_THREAD\n # Ensure that the number of threads never goes below 1\n return max(1, min(pool_size, max_threads))\n\n\nALL_LANGUAGES = \"kolibri-all\"\nSUPPORTED_LANGUAGES = \"kolibri-supported\"\n\n\ndef _process_language_string(value):\n \"\"\"\n Used to validate string values.\n The only valid argument in this case is that it is a string\n so we first try to coerce it to a string, then do some checks\n to see if it is any of our special values. Then if it is an\n appropriate language code value.\n If no value is appropriate, raise a ValueError.\n \"\"\"\n value = str(value)\n if value == ALL_LANGUAGES:\n return list(KOLIBRI_LANGUAGE_INFO.keys())\n if value == SUPPORTED_LANGUAGES:\n return list(KOLIBRI_SUPPORTED_LANGUAGES)\n if value in KOLIBRI_LANGUAGE_INFO:\n return [value]\n raise ValueError\n\n\ndef _process_list(value, separator=\",\"):\n \"\"\"\n Used to validate list values.\n The only valid argument in this case is that it is a list\n so we first try to coerce it to a list, then do some checks\n to see if it is any of our special values. 
Then if it is an\n appropriate list value.\n If no value is appropriate, raise a ValueError.\n \"\"\"\n\n # Check the supplied value is a list\n if not isinstance(value, list):\n if not value:\n value = []\n elif isinstance(value, string_types):\n value = value.split(separator)\n else:\n value = [value]\n return value\n\n\ndef language_list(value):\n \"\"\"\n Check that the supplied value is a list of languages,\n or a single language, or a special shortcut parameter.\n In the case that it is a special shortcut name, we return the full list\n of relevant languages for that parameter, or throw a validation error\n if that parameter would return an empty list.\n If a single language code is the parameter, this function will return a list\n with that language code as the only member.\n\n :param Union[str, list[str]] value: Either a string or a list of strings\n String can be any value that is a key of KOLIBRI_LANGUAGE_INFO\n or one of the special strings represented by ALL_LANGUAGES or SUPPORTED_LANGUAGES\n A list must be a list of these strings.\n \"\"\"\n value = _process_list(value)\n\n out = set()\n errors = []\n for entry in value:\n try:\n entry_list = _process_language_string(entry)\n out.update(entry_list)\n except ValueError:\n errors.append(entry)\n if errors:\n raise VdtValueError(errors)\n\n if not out:\n raise VdtValueError(value)\n\n return sorted(list(out))\n\n\ndef path(value):\n from kolibri.utils.conf import KOLIBRI_HOME\n\n if not isinstance(value, string_types):\n raise VdtValueError(repr(value))\n # Allow for blank paths\n if value:\n # ensure all path arguments, e.g. under section \"Paths\", are fully resolved and expanded, relative to KOLIBRI_HOME\n return os.path.join(KOLIBRI_HOME, os.path.expanduser(value))\n return value\n\n\ndef path_list(value):\n \"\"\"\n Check that the supplied value is a semicolon-delimited list of paths.\n Note: we do not guarantee that these paths all currently exist.\n \"\"\"\n if isinstance(value, string_types):\n value = value.split(\";\")\n\n out = []\n\n if isinstance(value, list):\n errors = []\n for item in value:\n try:\n out.append(path(item))\n except VdtValueError:\n errors.append(repr(item))\n if errors:\n raise VdtValueError(errors)\n\n return out\n\n\ndef validate_port_number(value):\n if 0 <= value <= 65535:\n return value\n raise VdtValueError(value)\n\n\ndef port(value):\n try:\n return validate_port_number(int(value))\n except ValueError:\n raise VdtTypeError(value)\n\n\ndef origin_or_port(value):\n \"\"\"\n Check that the passed value can either be coerced to an integer to supply\n a port, or is a valid origin string.\n\n :param Union[integer, str] value: Either an integer or a string\n \"\"\"\n if value != \"\":\n try:\n value = validate_port_number(int(value))\n except ValueError:\n url = urlparse(value)\n if not url.scheme or not url.netloc:\n raise VdtValueError(value)\n value = urlunparse((url.scheme, url.netloc, \"\", \"\", \"\", \"\"))\n return value\n\n\ndef validate_bytes(value):\n try:\n value = bytes_from_humans(value)\n except ValueError:\n raise VdtValueError(value)\n return value\n\n\ndef url_prefix(value):\n if not isinstance(value, string_types):\n raise VdtValueError(value)\n return value.lstrip(\"/\").rstrip(\"/\") + \"/\"\n\n\ndef multiprocess_bool(value):\n \"\"\"\n Validate the boolean value of a multiprocessing option.\n Do this by checking it's a boolean, and also that multiprocessing\n can be imported properly on this platform.\n \"\"\"\n value = is_boolean(value)\n try:\n if not value:\n raise 
ImportError()\n # Import in order to check if multiprocessing is supported on this platform\n from multiprocessing import synchronize # noqa\n\n return True\n except ImportError:\n return False\n\n\nclass LazyImportFunction(object):\n \"\"\"\n A function wrapper that will import a module when called.\n We may be able to drop this when Python 2.7 support is dropped\n and use Python LazyLoader module machinery instead.\n \"\"\"\n\n def __init__(self, module_name):\n self.module_name = module_name\n self._fn = None\n\n def __call__(self, *args, **kwargs):\n if self._fn is None:\n fn = import_string(self.module_name)\n if not callable(fn):\n raise ImportError(\"Module {} is not callable\".format(self.module_name))\n self._fn = fn\n update_wrapper(self, self._fn)\n return self._fn(*args, **kwargs)\n\n\ndef lazy_import_callback(value):\n \"\"\"\n Validate that the value is a string that is a valid import name.\n Does not validate that the module exists or can be imported,\n so as to avoid premature evaluation of the module.\n This is necessary to prevent circular dependencies if the module path\n is internal to Kolibri, and also because the module may not be available\n in some contexts.\n \"\"\"\n if not isinstance(value, string_types):\n raise VdtValueError(value)\n try:\n # Check that the string is at least parseable as a module name\n ast.parse(value)\n except SyntaxError:\n raise VdtValueError(value)\n # We seem to have something that is somewhat valid, so return a function\n # that does the import and tries to invoke the returned function.\n\n return LazyImportFunction(value)\n\n\ndef lazy_import_callback_list(value):\n \"\"\"\n Check that the supplied value is a list of import paths.\n\n :param list[str] value: A list of strings that are valid import paths\n \"\"\"\n value = _process_list(value)\n\n out = []\n errors = []\n for entry in value:\n try:\n entry_list = lazy_import_callback(entry)\n out.append(entry_list)\n except ValueError:\n errors.append(entry)\n if errors:\n raise VdtValueError(errors)\n\n return out\n\n\nbase_option_spec = {\n \"Cache\": {\n \"CACHE_BACKEND\": {\n \"type\": \"option\",\n \"options\": (\"memory\", \"redis\"),\n \"default\": \"memory\",\n \"description\": \"\"\"\n Which backend to use for the main cache - if 'memory' is selected, then for most cache operations,\n an in-memory, process-local cache will be used, but a disk based cache will be used for some data\n that needs to be persistent across processes. 
If 'redis' is used, it is used for all caches.\n \"\"\",\n },\n \"CACHE_TIMEOUT\": {\n \"type\": \"integer\",\n \"default\": 300,\n \"description\": \"Default timeout for entries put into the cache.\",\n },\n \"CACHE_MAX_ENTRIES\": {\n \"type\": \"integer\",\n \"default\": 1000,\n \"description\": \"Maximum number of entries to maintain in the cache at once.\",\n },\n \"CACHE_PASSWORD\": {\n \"type\": \"string\",\n \"default\": \"\",\n \"description\": \"Password to authenticate to Redis, Redis only.\",\n },\n \"CACHE_LOCATION\": {\n \"type\": \"string\",\n \"default\": \"localhost:6379\",\n \"description\": \"Host and port at which to connect to Redis, Redis only.\",\n },\n \"CACHE_REDIS_DB\": {\n \"type\": \"integer\",\n \"default\": 0,\n \"description\": \"The database number for Redis.\",\n \"deprecated_aliases\": (\"CACHE_REDIS_MIN_DB\",),\n },\n \"CACHE_REDIS_MAX_POOL_SIZE\": {\n \"type\": \"integer\",\n \"default\": 50, # use redis-benchmark to determine better value\n \"description\": \"Maximum number of simultaneous connections to allow to Redis, Redis only.\",\n },\n \"CACHE_REDIS_POOL_TIMEOUT\": {\n \"type\": \"integer\",\n \"default\": 30, # seconds\n \"description\": \"How long to wait when trying to connect to Redis before timing out, Redis only.\",\n },\n # Optional redis settings to overwrite redis.conf\n \"CACHE_REDIS_MAXMEMORY\": {\n \"type\": \"integer\",\n \"default\": 0,\n \"description\": \"Maximum memory that Redis should use, Redis only.\",\n },\n \"CACHE_REDIS_MAXMEMORY_POLICY\": {\n \"type\": \"option\",\n \"options\": (\n \"\",\n \"allkeys-lru\",\n \"volatile-lru\",\n \"allkeys-random\",\n \"volatile-random\",\n \"volatile-ttl\",\n \"noeviction\",\n ),\n \"default\": \"\",\n \"description\": \"Eviction policy to use when using Redis for caching, Redis only.\",\n },\n \"STREAMED_FILE_CACHE_SIZE\": {\n \"type\": \"bytes\",\n \"default\": \"500MB\",\n \"description\": \"\"\"\n Disk space to be used for caching streamed files. This is used for caching files that are\n being streamed from remote libraries, if these files are later imported, these should be cleaned up,\n and will no longer count to this cache size.\n Value can either be a number suffixed with a unit (e.g. 
MB, GB, TB) or an integer number of bytes.\n \"\"\",\n },\n },\n \"Database\": {\n \"DATABASE_ENGINE\": {\n \"type\": \"option\",\n \"options\": (\"sqlite\", \"postgres\"),\n \"default\": \"sqlite\",\n \"description\": \"Which database backend to use, choices are 'sqlite' or 'postgres'\",\n },\n \"DATABASE_NAME\": {\n \"type\": \"string\",\n \"description\": \"\"\"\n For SQLite - the name of a database file to use for the main Kolibri database.\n For Postgresql, the name of the database to use for all Kolibri data.\n \"\"\",\n },\n \"DATABASE_PASSWORD\": {\n \"type\": \"string\",\n \"description\": \"The password to authenticate with when connecting to the database, Postgresql only.\",\n },\n \"DATABASE_USER\": {\n \"type\": \"string\",\n \"description\": \"The user to authenticate with when connecting to the database, Postgresql only.\",\n },\n \"DATABASE_HOST\": {\n \"type\": \"string\",\n \"description\": \"The host on which to connect to the database, Postgresql only.\",\n },\n \"DATABASE_PORT\": {\n \"type\": \"string\",\n \"description\": \"The port on which to connect to the database, Postgresql only.\",\n },\n },\n \"Server\": {\n \"CHERRYPY_START\": {\n \"type\": \"boolean\",\n \"default\": True,\n \"description\": \"DEPRECATED - do not use this option, use the 'kolibri services' command instead.\",\n \"deprecated\": True,\n },\n \"CHERRYPY_THREAD_POOL\": {\n \"type\": \"integer\",\n \"default\": calculate_thread_pool(),\n \"description\": \"How many threads the Kolibri server should use to serve requests\",\n },\n \"CHERRYPY_SOCKET_TIMEOUT\": {\n \"type\": \"integer\",\n \"default\": 10,\n \"description\": \"\"\"\n How long a socket should wait for data flow to resume before\n it considers that the connection has been interrupted.\n Increasing this may help in situations where there is high\n latency on a network or the bandwidth is bursty, with some\n expected data flow interruptions which may not be indicative of the connection failing.\n \"\"\",\n },\n \"CHERRYPY_QUEUE_SIZE\": {\n \"type\": \"integer\",\n \"default\": 30,\n \"description\": \"\"\"\n How many requests to allow in the queue.\n Increasing this may help situations where requests are instantly refused by the server.\n \"\"\",\n },\n \"CHERRYPY_QUEUE_TIMEOUT\": {\n \"type\": \"float\",\n \"default\": 0.1,\n \"description\": \"\"\"\n How many seconds to wait for a request to be put into the queue.\n Increasing this may help situations where requests are instantly refused by the server.\n \"\"\",\n },\n \"PROFILE\": {\n \"type\": \"boolean\",\n \"default\": False,\n \"envvars\": (\"KOLIBRI_SERVER_PROFILE\",),\n \"description\": \"Activate the server profiling middleware.\",\n },\n \"DEBUG\": {\n \"type\": \"boolean\",\n \"default\": False,\n \"description\": \"Run Kolibri with Django setting DEBUG = True\",\n },\n \"DEBUG_LOG_DATABASE\": {\n \"type\": \"boolean\",\n \"default\": False,\n \"description\": \"Activate debug logging for Django ORM operations.\",\n },\n },\n \"Paths\": {\n \"CONTENT_DIR\": {\n \"type\": \"path\",\n \"default\": \"content\",\n \"description\": \"\"\"\n The directory that will store content files and content database files.\n To change this in a currently active server it is recommended to use the\n 'content movedirectory' management command.\n \"\"\",\n },\n \"CONTENT_FALLBACK_DIRS\": {\n \"type\": \"path_list\",\n \"default\": \"\",\n \"description\": \"Additional directories in which Kolibri will look for content files and content database files.\",\n },\n \"AUTOMATIC_PROVISION_FILE\": 
{\n \"type\": \"path\",\n \"default\": \"\",\n \"description\": \"The file that contains the automatic device provisioning data.\",\n },\n },\n \"Urls\": {\n \"CENTRAL_CONTENT_BASE_URL\": {\n \"type\": \"string\",\n \"default\": \"https://studio.learningequality.org\",\n \"deprecated_envvars\": (\"CENTRAL_CONTENT_DOWNLOAD_BASE_URL\",),\n \"description\": \"\"\"\n URL to use as the default source for content import.\n Slightly counterintuitively this will still be displayed in the UI as 'import from Kolibri Studio'.\n \"\"\",\n },\n \"DATA_PORTAL_SYNCING_BASE_URL\": {\n \"type\": \"string\",\n \"default\": \"https://kolibridataportal.learningequality.org\",\n \"description\": \"URL to use as the target for data portal syncing.\",\n },\n },\n \"Deployment\": {\n \"HTTP_PORT\": {\n \"type\": \"port\",\n \"default\": 8080,\n \"deprecated_envvars\": (\"KOLIBRI_LISTEN_PORT\",),\n \"description\": \"Sets the port that Kolibri will serve on. This can be further overridden by command line arguments.\",\n },\n \"RUN_MODE\": {\n \"type\": \"string\",\n \"description\": \"Used to flag non-user Kolibri instances\",\n \"skip_blank\": True,\n },\n \"DISABLE_PING\": {\n \"type\": \"boolean\",\n \"default\": False,\n \"description\": \"Turn off the statistics pingback. This will also disable update notifications\",\n },\n \"URL_PATH_PREFIX\": {\n \"type\": \"url_prefix\",\n \"default\": \"/\",\n \"description\": \"\"\"\n Serve Kolibri from a subpath under the main domain. Used when serving multiple applications from\n the same origin. This option is not heavily tested, but is provided for user convenience.\n \"\"\",\n },\n \"LANGUAGES\": {\n \"type\": \"language_list\",\n \"default\": SUPPORTED_LANGUAGES,\n \"description\": \"\"\"\n The user interface languages to enable on this instance of Kolibri (has no effect on languages of imported content channels).\n The default will include all the languages Kolibri supports.\n \"\"\",\n },\n \"ZIP_CONTENT_ORIGIN\": {\n \"type\": \"origin_or_port\",\n \"default\": \"\",\n \"description\": \"\"\"\n When running by default (value blank), Kolibri frontend looks for the zipcontent endpoints\n on the same domain as Kolibri proper, but uses ZIP_CONTENT_PORT instead of HTTP_PORT.\n When running behind a proxy, set the value to the port where zipcontent endpoint is served on,\n and it will be substituted for the port that Kolibri proper is being served on.\n When zipcontent is being served from a completely separate domain, you can set an\n absolute origin (full protocol plus domain, e.g. 'https://myzipcontent.com/')\n to be used for all zipcontent origin requests.\n It is strongly recommended that zipcontent is served from a different origin from the main Kolibri app,\n either by port or domain, to allow for proper sandboxing.\n \"\"\",\n },\n \"ZIP_CONTENT_PORT\": {\n \"type\": \"port\",\n \"default\": 0,\n \"description\": \"\"\"\n Sets the port that Kolibri will serve the alternate origin server on. 
This is the server that\n is used to serve all content for the zipcontent endpoint, so as to provide safe IFrame sandboxing\n but avoiding issues with null origins.\n This is the alternate origin server equivalent of HTTP_PORT.\n It is strongly recommended that zipcontent is served from a different origin from the main Kolibri app,\n either by port or domain, to allow for proper sandboxing.\n \"\"\",\n },\n \"ZIP_CONTENT_URL_PATH_PREFIX\": {\n \"type\": \"url_prefix\",\n \"default\": \"/\",\n \"description\": \"\"\"\n The zip content equivalent of URL_PATH_PREFIX - allows all zip content URLs to be prefixed with\n a fixed path. This both changes the URL from which the endpoints are served by the alternate\n origin server, and the URL prefix where the Kolibri frontend looks for it.\n In the case that ZIP_CONTENT_ORIGIN is pointing to an entirely separate origin, this setting\n can still be used to set a URL prefix that the frontend of Kolibri will look to when\n retrieving alternate origin URLs.\n \"\"\",\n },\n \"REMOTE_CONTENT\": {\n \"type\": \"boolean\",\n \"default\": False,\n \"description\": \"\"\"\n Boolean flag that causes content import processes to skip trying to import any\n content, as it is assumed that the remote source has everything available.\n Server configuration should handle ensuring that the files are properly served.\n \"\"\",\n },\n \"SYNC_INTERVAL\": {\n \"type\": \"integer\",\n \"default\": 60,\n \"description\": \"\"\"\n In case a SoUD connects to this server, the SoUD should use this interval to resync every user.\n \"\"\",\n },\n \"PROJECT\": {\n \"type\": \"string\",\n \"skip_blank\": True,\n \"description\": \"\"\"\n The custom identifier for a project. This is used to identify the project in the telemetry\n data that is returned to our telemetry server.\n \"\"\",\n },\n \"MINIMUM_DISK_SPACE\": {\n \"type\": \"bytes\",\n \"default\": \"250MB\",\n \"description\": \"\"\"\n The minimum free disk space that Kolibri should try to maintain on the device. This will\n be used as the floor value to prevent Kolibri completely filling the disk during file import.\n Value can either be a number suffixed with a unit (e.g. MB, GB, TB) or an integer number of bytes.\n \"\"\",\n },\n \"LISTEN_ADDRESS\": {\n \"type\": \"ip_addr\",\n \"default\": \"0.0.0.0\",\n \"description\": \"\"\"\n The address that the server should listen on. This can be used to restrict access to the server\n to a specific network interface.\n \"\"\",\n },\n \"RESTART_HOOKS\": {\n \"type\": \"lazy_import_callback_list\",\n \"default\": [\"kolibri.utils.server.signal_restart\"],\n \"description\": \"\"\"\n A list of module paths for function callbacks that will be called when server restart is called.\n The default is to disallow server restarts, so callbacks need to be added to enable restarting.\n \"\"\",\n },\n },\n \"Python\": {\n \"PICKLE_PROTOCOL\": {\n \"type\": \"integer\",\n \"default\": 2,\n \"description\": \"\"\"\n Which Python pickle protocol to use. Pinned to 2 for now to provide maximal cross-Python version compatibility.\n Can safely be set to a higher value for deployments that will never change Python versions.\n \"\"\",\n }\n },\n \"Tasks\": {\n \"USE_WORKER_MULTIPROCESSING\": {\n \"type\": \"multiprocess_bool\",\n \"default\": False,\n \"description\": \"\"\"\n Whether to use Python multiprocessing for worker pools. If False, then it will use threading. 
This may be useful,\n if running on a dedicated device with multiple cores, and a lot of asynchronous tasks get run.\n \"\"\",\n },\n \"REGULAR_PRIORITY_WORKERS\": {\n \"type\": \"integer\",\n \"default\": 4,\n \"description\": \"\"\"\n The number of workers to spin up for regular priority asynchronous tasks.\n \"\"\",\n },\n \"HIGH_PRIORITY_WORKERS\": {\n \"type\": \"integer\",\n \"default\": 2,\n \"description\": \"\"\"\n The number of workers to spin up for high priority asynchronous tasks.\n \"\"\",\n },\n \"JOB_STORAGE_FILEPATH\": {\n \"type\": \"path\",\n \"default\": \"job_storage.sqlite3\",\n \"description\": \"\"\"\n The file to use for the job storage database. This is only used in the case that the database backend being used is SQLite.\n \"\"\",\n },\n },\n}\n\n\ndef _get_validator():\n return Validator(\n {\n \"language_list\": language_list,\n \"path\": path,\n \"path_list\": path_list,\n \"origin_or_port\": origin_or_port,\n \"port\": port,\n \"url_prefix\": url_prefix,\n \"bytes\": validate_bytes,\n \"multiprocess_bool\": multiprocess_bool,\n \"lazy_import_callback_list\": lazy_import_callback_list,\n }\n )\n\n\ndef _get_option_spec():\n \"\"\"\n Combine the default option spec with any options that are defined in plugins\n \"\"\"\n option_spec = extend_config_spec(base_option_spec)\n envvars = set()\n for section, opts in option_spec.items():\n for optname, attrs in opts.items():\n if \"deprecated_aliases\" in attrs:\n attrs[\"deprecated_envvars\"] = attrs.get(\"deprecated_envvars\", ())\n for alias in attrs[\"deprecated_aliases\"]:\n alias_ev = \"KOLIBRI_{}\".format(alias)\n if alias_ev not in envvars:\n attrs[\"deprecated_envvars\"] += (alias_ev,)\n\n opt_envvars = attrs.get(\"envvars\", ()) + attrs.get(\"deprecated_envvars\", ())\n default_envvar = \"KOLIBRI_{}\".format(optname.upper())\n if default_envvar not in envvars:\n envvars.add(default_envvar)\n else:\n logging.warning(\n \"Duplicate environment variable for options {}\".format(\n default_envvar\n )\n )\n default_envvar = \"KOLIBRI_{}_{}\".format(\n section.upper(), optname.upper()\n )\n if default_envvar not in opt_envvars:\n attrs[\"envvars\"] = (default_envvar,) + opt_envvars\n return option_spec\n\n\noption_spec = SimpleLazyObject(_get_option_spec)\n\n\ndef get_configspec():\n \"\"\"\n Read the option_spec dict defined above, and turn it into a \"configspec\" object (per the configobj library)\n so that we can use it to parse the options.ini file.\n \"\"\"\n\n lines = []\n\n for section, opts in option_spec.items():\n lines.append(\"[{section}]\".format(section=section))\n for name, attrs in opts.items():\n default = attrs.get(\"default\", \"\")\n if isinstance(default, list) and not default:\n raise RuntimeError(\"For an empty list don't specify a default\")\n the_type = attrs[\"type\"]\n args = [\"%r\" % op for op in attrs.get(\"options\", [])] + [\n \"default=list('{default_list}')\".format(\n default_list=\"','\".join(default)\n )\n if isinstance(default, list)\n else \"default='{default}'\".format(default=default)\n ]\n line = \"{name} = {type}({args})\".format(\n name=name, type=the_type, args=\", \".join(args)\n )\n lines.append(line)\n\n return ConfigObj(lines, _inspec=True)\n\n\ndef _set_from_envvars(conf):\n \"\"\"\n Set the configuration from environment variables.\n \"\"\"\n # keep track of which options were overridden using environment variables, to support error reporting\n using_env_vars = {}\n\n deprecation_warning = \"Option {optname} in section [{section}] being overridden by deprecated 
environment variable {envvar}, please update to: {envvars}\"\n # override any values from their environment variables (if set)\n # and check for use of deprecated environment variables and options\n for section, opts in option_spec.items():\n for optname, attrs in opts.items():\n for envvar in attrs.get(\"envvars\", []):\n if envvar in os.environ:\n deprecated_envvars = attrs.get(\"deprecated_envvars\", ())\n if envvar in deprecated_envvars:\n logger.warning(\n deprecation_warning.format(\n optname=optname,\n section=section,\n envvar=envvar,\n envvars=\", \".join(\n e\n for e in attrs.get(\"envvars\", [])\n if e not in deprecated_envvars\n ),\n )\n )\n else:\n logger.info(\n \"Option {optname} in section [{section}] being overridden by environment variable {envvar}\".format(\n optname=optname, section=section, envvar=envvar\n )\n )\n if attrs.get(\"deprecated\", False):\n logger.warning(\n \"Option {optname} in section [{section}] is deprecated, please remove it from your options.ini file\".format(\n optname=optname, section=section\n )\n )\n conf[section][optname] = os.environ[envvar]\n using_env_vars[optname] = envvar\n break\n return using_env_vars\n\n\ndef _set_from_deprecated_aliases(conf):\n \"\"\"\n Set the configuration from deprecated aliases.\n \"\"\"\n # keep track of which options were overridden using environment variables, to support error reporting\n using_deprecated_alias = {}\n\n deprecation_warning = \"Option {optname} in section [{section}] being set by deprecated alias {alias}, please update to: {optname}\"\n # override any values from their environment variables (if set)\n # and check for use of deprecated environment variables and options\n for section, opts in option_spec.items():\n for optname, attrs in opts.items():\n for alias in attrs.get(\"deprecated_aliases\", ()):\n if alias in conf[section]:\n logger.warning(\n deprecation_warning.format(\n optname=optname,\n section=section,\n alias=alias,\n )\n )\n conf[section][optname] = conf[section][alias]\n del conf[section][alias]\n using_deprecated_alias[optname] = alias\n break\n return using_deprecated_alias\n\n\ndef read_options_file(ini_filename=\"options.ini\"):\n\n from kolibri.utils.conf import KOLIBRI_HOME\n\n ini_path = os.path.join(KOLIBRI_HOME, ini_filename)\n\n conf = ConfigObj(ini_path, configspec=get_configspec())\n\n # Check for use of deprecated options\n for section, opts in option_spec.items():\n for optname, attrs in opts.items():\n if (\n attrs.get(\"deprecated\", False)\n and section in conf\n and optname in conf[section]\n ):\n logger.warning(\n \"Option {optname} in section [{section}] is deprecated, please remove it from your options.ini file\".format(\n optname=optname, section=section\n )\n )\n\n # validate once up front to ensure section structure is in place\n conf.validate(_get_validator())\n\n using_env_vars = _set_from_envvars(conf)\n\n using_deprecated_alias = _set_from_deprecated_aliases(conf)\n\n validation = conf.validate(_get_validator(), preserve_errors=True)\n\n # loop over and display any errors with config values, and then bail\n if validation is not True:\n for section_list, optname, error in flatten_errors(conf, validation):\n section = section_list[0]\n if optname in using_env_vars:\n logger.error(\n \"Error processing environment variable option {envvar}: {error}\".format(\n envvar=using_env_vars[optname], error=error\n )\n )\n elif optname in using_deprecated_alias:\n logger.error(\n \"Error processing {file} under section [{section}] for option {alias}: 
{error}\".format(\n file=ini_path,\n section=section,\n alias=using_deprecated_alias[optname],\n error=error,\n )\n )\n else:\n logger.error(\n \"Error processing {file} under section [{section}] for option {option}: {error}\".format(\n file=ini_path, section=section, option=optname, error=error\n )\n )\n logger.critical(\n \"Aborting: Could not process options config (see errors above for more details)\"\n )\n raise SystemExit(1)\n\n # loop over any extraneous options and warn the user that we're ignoring them\n for sections, name in get_extra_values(conf):\n\n # this code gets the extra values themselves\n the_section = conf\n for section in sections:\n the_section = the_section[section]\n\n # the_value may be a section or a value\n the_value = the_section.pop(name)\n\n # determine whether the extra item is a section (dict) or value\n kind = \"section\" if isinstance(the_value, dict) else \"option\"\n\n logger.warning(\n \"Ignoring unknown {kind} in options file {file} under {section}: {name}.\".format(\n kind=kind,\n file=ini_path,\n section=sections[0] if sections else \"top level\",\n name=name,\n )\n )\n\n # run validation once again to fill in any default values for options we deleted due to issues\n conf.validate(_get_validator())\n\n return conf\n\n\ndef update_options_file(section, key, value, ini_filename=\"options.ini\"):\n \"\"\"\n Updates the configuration file on top of what is currently in the\n file.\n\n Note to future: Do not change the implementation to write the\n in-memory conf.OPTIONS as it can contain temporary in-memory values\n that are not intended to be stored.\n \"\"\"\n\n # load the current conf from disk into memory\n conf = read_options_file(ini_filename=ini_filename)\n\n # update the requested option value\n conf[section][key] = value\n\n # check for any errors with the provided value, and abort\n validation = conf.validate(_get_validator(), preserve_errors=True)\n if validation is not True:\n error = validation.get(section, {}).get(key) or \"unknown error\"\n raise ValueError(\n \"Unable to set {key} in {file}: {error}\".format(\n key=key, file=ini_filename, error=error\n )\n )\n\n # write the settings file back to disk\n conf.write()\n\n logger.warning(\n \"Options file {file} has been updated; server restart is required before change will take effect.\".format(\n file=conf.filename\n )\n )\n\n\ndef generate_empty_options_file(ini_filename=\"options.ini\"):\n # Generate an options.ini file inside the KOLIBRI_HOME as default placeholder config\n\n conf = read_options_file(ini_filename=ini_filename)\n\n for section, opts in option_spec.items():\n for optname, attrs in opts.items():\n for envvar in attrs.get(\"envvars\", []):\n if envvar in os.environ:\n conf[section].pop(optname, None)\n\n comments = None\n\n for section, opts in option_spec.items():\n if comments is not None:\n conf.comments[section] = comments\n comments = []\n for optname, attrs in opts.items():\n if not attrs.get(\"skip_blank\", False) and not attrs.get(\n \"deprecated\", False\n ):\n if \"description\" in attrs:\n comments.extend(attrs[\"description\"].strip().split(\"\\n\"))\n comments.append(\"{} = {}\".format(optname, attrs.get(\"default\", \"\")))\n comments.append(\"\")\n conf.final_comment = comments\n\n conf.write()\n", "path": "kolibri/utils/options.py"}]} |
gh_patches_debug_1053 | rasdani/github-patches | git_diff | python-poetry__poetry-578 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Poetry run: ModuleOrPackageNotFound with implicit namespace packages (PEP420)
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
- **OS version and name**: Arch Linux 4.18.16
- **Poetry version**: 0.12.5
- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: https://gist.github.com/Kazy/692963f6a41c64d38f38ac9a3f95619a
## Issue
I'm using implicit namespace packages to organize the packages at work, which works well by specifying the `packages` value in the `pyproject.toml` like this:
```toml
packages = [
{ include = "org" }
]
```
The file structure looks like this:
```
├── org
│   └── subpackage
│       ├── __init__.py
│       └── command
│           └── cli.py
└── pyproject.toml
```
The issue is when you try to do `poetry run my-command`, you get:
```
[ModuleOrPackageNotFound]
No file/folder found for package org-subpackage-command
```
I already found how to fix it and will follow up with a PR, but I wanted to create the issue in case my fix isn't the right one, and to make organization easier on your side as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/console/commands/run.py`
Content:
```
1 from .env_command import EnvCommand
2
3
4 class RunCommand(EnvCommand):
5 """
6 Runs a command in the appropriate environment.
7
8 run
9 { args* : The command and arguments/options to run. }
10 """
11
12 def handle(self):
13 args = self.argument("args")
14 script = args[0]
15 scripts = self.poetry.local_config.get("scripts")
16
17 if scripts and script in scripts:
18 return self.run_script(scripts[script], args)
19
20 return self.env.execute(*args)
21
22 def run_script(self, script, args):
23 if isinstance(script, dict):
24 script = script["callable"]
25
26 module, callable_ = script.split(":")
27
28 src_in_sys_path = "sys.path.append('src'); " if self._module.is_in_src() else ""
29
30 cmd = ["python", "-c"]
31
32 cmd += [
33 '"import sys; '
34 "from importlib import import_module; "
35 "sys.argv = {!r}; {}"
36 "import_module('{}').{}()\"".format(
37 args, src_in_sys_path, module, callable_
38 )
39 ]
40
41 return self.env.run(*cmd, shell=True, call=True)
42
43 @property
44 def _module(self):
45 from ...masonry.utils.module import Module
46
47 poetry = self.poetry
48 package = poetry.package
49 path = poetry.file.parent
50 module = Module(package.name, path.as_posix())
51 return module
52
53 def merge_application_definition(self, merge_args=True):
54 if self._application is None or (
55 self._application_definition_merged
56 and (self._application_definition_merged_with_args or not merge_args)
57 ):
58 return
59
60 if merge_args:
61 current_arguments = self._definition.get_arguments()
62 self._definition.set_arguments(
63 self._application.get_definition().get_arguments()
64 )
65 self._definition.add_arguments(current_arguments)
66
67 self._application_definition_merged = True
68 if merge_args:
69 self._application_definition_merged_with_args = True
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/poetry/console/commands/run.py b/poetry/console/commands/run.py
--- a/poetry/console/commands/run.py
+++ b/poetry/console/commands/run.py
@@ -47,7 +47,7 @@
poetry = self.poetry
package = poetry.package
path = poetry.file.parent
- module = Module(package.name, path.as_posix())
+ module = Module(package.name, path.as_posix(), package.packages)
return module
def merge_application_definition(self, merge_args=True):
| {"golden_diff": "diff --git a/poetry/console/commands/run.py b/poetry/console/commands/run.py\n--- a/poetry/console/commands/run.py\n+++ b/poetry/console/commands/run.py\n@@ -47,7 +47,7 @@\n poetry = self.poetry\n package = poetry.package\n path = poetry.file.parent\n- module = Module(package.name, path.as_posix())\n+ module = Module(package.name, path.as_posix(), package.packages)\n return module\n \n def merge_application_definition(self, merge_args=True):\n", "issue": "Poetry run: ModuleOrPackageNotFound with implicit namespace packages (PEP420)\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n- **OS version and name**: Arch Linux 4.18.16\r\n- **Poetry version**: 0.12.5\r\n- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: https://gist.github.com/Kazy/692963f6a41c64d38f38ac9a3f95619a\r\n\r\n## Issue\r\nI'm using implicit namespace packages to organize the packages at work, which works well by specifying the `packages` value in the `pyproject.toml` like that:\r\n```toml\r\npackages = [\r\n { include = \"org\" }\r\n]\r\n```\r\nThe file structure is like that:\r\n```\r\n\u251c\u2500\u2500 org\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 subpackage\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 __init__.py\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 command\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 cli.py\r\n\u2514\u2500\u2500 pyproject.toml\r\n```\r\n\r\nThe issue is when you try to do `poetry run my-command`, you get:\r\n```\r\n[ModuleOrPackageNotFound]\r\nNo file/folder found for package org-subpackage-command\r\n```\r\n\r\nI already found how to fix it and will follow with a PR, but I wanted to create the issue in case my fix isn't the right one, and to make organization easier on your side as well.\r\n\n", "before_files": [{"content": "from .env_command import EnvCommand\n\n\nclass RunCommand(EnvCommand):\n \"\"\"\n Runs a command in the appropriate environment.\n\n run\n { args* : The command and arguments/options to run. 
}\n \"\"\"\n\n def handle(self):\n args = self.argument(\"args\")\n script = args[0]\n scripts = self.poetry.local_config.get(\"scripts\")\n\n if scripts and script in scripts:\n return self.run_script(scripts[script], args)\n\n return self.env.execute(*args)\n\n def run_script(self, script, args):\n if isinstance(script, dict):\n script = script[\"callable\"]\n\n module, callable_ = script.split(\":\")\n\n src_in_sys_path = \"sys.path.append('src'); \" if self._module.is_in_src() else \"\"\n\n cmd = [\"python\", \"-c\"]\n\n cmd += [\n '\"import sys; '\n \"from importlib import import_module; \"\n \"sys.argv = {!r}; {}\"\n \"import_module('{}').{}()\\\"\".format(\n args, src_in_sys_path, module, callable_\n )\n ]\n\n return self.env.run(*cmd, shell=True, call=True)\n\n @property\n def _module(self):\n from ...masonry.utils.module import Module\n\n poetry = self.poetry\n package = poetry.package\n path = poetry.file.parent\n module = Module(package.name, path.as_posix())\n return module\n\n def merge_application_definition(self, merge_args=True):\n if self._application is None or (\n self._application_definition_merged\n and (self._application_definition_merged_with_args or not merge_args)\n ):\n return\n\n if merge_args:\n current_arguments = self._definition.get_arguments()\n self._definition.set_arguments(\n self._application.get_definition().get_arguments()\n )\n self._definition.add_arguments(current_arguments)\n\n self._application_definition_merged = True\n if merge_args:\n self._application_definition_merged_with_args = True\n", "path": "poetry/console/commands/run.py"}], "after_files": [{"content": "from .env_command import EnvCommand\n\n\nclass RunCommand(EnvCommand):\n \"\"\"\n Runs a command in the appropriate environment.\n\n run\n { args* : The command and arguments/options to run. }\n \"\"\"\n\n def handle(self):\n args = self.argument(\"args\")\n script = args[0]\n scripts = self.poetry.local_config.get(\"scripts\")\n\n if scripts and script in scripts:\n return self.run_script(scripts[script], args)\n\n return self.env.execute(*args)\n\n def run_script(self, script, args):\n if isinstance(script, dict):\n script = script[\"callable\"]\n\n module, callable_ = script.split(\":\")\n\n src_in_sys_path = \"sys.path.append('src'); \" if self._module.is_in_src() else \"\"\n\n cmd = [\"python\", \"-c\"]\n\n cmd += [\n '\"import sys; '\n \"from importlib import import_module; \"\n \"sys.argv = {!r}; {}\"\n \"import_module('{}').{}()\\\"\".format(\n args, src_in_sys_path, module, callable_\n )\n ]\n\n return self.env.run(*cmd, shell=True, call=True)\n\n @property\n def _module(self):\n from ...masonry.utils.module import Module\n\n poetry = self.poetry\n package = poetry.package\n path = poetry.file.parent\n module = Module(package.name, path.as_posix(), package.packages)\n return module\n\n def merge_application_definition(self, merge_args=True):\n if self._application is None or (\n self._application_definition_merged\n and (self._application_definition_merged_with_args or not merge_args)\n ):\n return\n\n if merge_args:\n current_arguments = self._definition.get_arguments()\n self._definition.set_arguments(\n self._application.get_definition().get_arguments()\n )\n self._definition.add_arguments(current_arguments)\n\n self._application_definition_merged = True\n if merge_args:\n self._application_definition_merged_with_args = True\n", "path": "poetry/console/commands/run.py"}]} |
gh_patches_debug_1054 | rasdani/github-patches | git_diff | certbot__certbot-6643 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
certbot delete list must be sorted
Subj.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `certbot/storage.py`
Content:
```
1 """Renewable certificates storage."""
2 import datetime
3 import glob
4 import logging
5 import os
6 import re
7 import stat
8
9 import configobj
10 import parsedatetime
11 import pytz
12 import shutil
13 import six
14
15 import certbot
16 from certbot import cli
17 from certbot import compat
18 from certbot import constants
19 from certbot import crypto_util
20 from certbot import errors
21 from certbot import error_handler
22 from certbot import util
23
24 from certbot.plugins import common as plugins_common
25 from certbot.plugins import disco as plugins_disco
26
27 logger = logging.getLogger(__name__)
28
29 ALL_FOUR = ("cert", "privkey", "chain", "fullchain")
30 README = "README"
31 CURRENT_VERSION = util.get_strict_version(certbot.__version__)
32 BASE_PRIVKEY_MODE = 0o600
33
34
35 def renewal_conf_files(config):
36 """Build a list of all renewal configuration files.
37
38 :param certbot.interfaces.IConfig config: Configuration object
39
40 :returns: list of renewal configuration files
41 :rtype: `list` of `str`
42
43 """
44 return glob.glob(os.path.join(config.renewal_configs_dir, "*.conf"))
45
46 def renewal_file_for_certname(config, certname):
47 """Return /path/to/certname.conf in the renewal conf directory"""
48 path = os.path.join(config.renewal_configs_dir, "{0}.conf".format(certname))
49 if not os.path.exists(path):
50 raise errors.CertStorageError("No certificate found with name {0} (expected "
51 "{1}).".format(certname, path))
52 return path
53
54
55 def cert_path_for_cert_name(config, cert_name):
56 """ If `--cert-name` was specified, but you need a value for `--cert-path`.
57
58 :param `configuration.NamespaceConfig` config: parsed command line arguments
59 :param str cert_name: cert name.
60
61 """
62 cert_name_implied_conf = renewal_file_for_certname(config, cert_name)
63 fullchain_path = configobj.ConfigObj(cert_name_implied_conf)["fullchain"]
64 with open(fullchain_path) as f:
65 cert_path = (fullchain_path, f.read())
66 return cert_path
67
68
69 def config_with_defaults(config=None):
70 """Merge supplied config, if provided, on top of builtin defaults."""
71 defaults_copy = configobj.ConfigObj(constants.RENEWER_DEFAULTS)
72 defaults_copy.merge(config if config is not None else configobj.ConfigObj())
73 return defaults_copy
74
75
76 def add_time_interval(base_time, interval, textparser=parsedatetime.Calendar()):
77 """Parse the time specified time interval, and add it to the base_time
78
79 The interval can be in the English-language format understood by
80 parsedatetime, e.g., '10 days', '3 weeks', '6 months', '9 hours', or
81 a sequence of such intervals like '6 months 1 week' or '3 days 12
82 hours'. If an integer is found with no associated unit, it is
83 interpreted by default as a number of days.
84
85 :param datetime.datetime base_time: The time to be added with the interval.
86 :param str interval: The time interval to parse.
87
88 :returns: The base_time plus the interpretation of the time interval.
89 :rtype: :class:`datetime.datetime`"""
90
91 if interval.strip().isdigit():
92 interval += " days"
93
94 # try to use the same timezone, but fallback to UTC
95 tzinfo = base_time.tzinfo or pytz.UTC
96
97 return textparser.parseDT(interval, base_time, tzinfo=tzinfo)[0]
98
99
100 def write_renewal_config(o_filename, n_filename, archive_dir, target, relevant_data):
101 """Writes a renewal config file with the specified name and values.
102
103 :param str o_filename: Absolute path to the previous version of config file
104 :param str n_filename: Absolute path to the new destination of config file
105 :param str archive_dir: Absolute path to the archive directory
106 :param dict target: Maps ALL_FOUR to their symlink paths
107 :param dict relevant_data: Renewal configuration options to save
108
109 :returns: Configuration object for the new config file
110 :rtype: configobj.ConfigObj
111
112 """
113 config = configobj.ConfigObj(o_filename)
114 config["version"] = certbot.__version__
115 config["archive_dir"] = archive_dir
116 for kind in ALL_FOUR:
117 config[kind] = target[kind]
118
119 if "renewalparams" not in config:
120 config["renewalparams"] = {}
121 config.comments["renewalparams"] = ["",
122 "Options used in "
123 "the renewal process"]
124
125 config["renewalparams"].update(relevant_data)
126
127 for k in config["renewalparams"].keys():
128 if k not in relevant_data:
129 del config["renewalparams"][k]
130
131 if "renew_before_expiry" not in config:
132 default_interval = constants.RENEWER_DEFAULTS["renew_before_expiry"]
133 config.initial_comment = ["renew_before_expiry = " + default_interval]
134
135 # TODO: add human-readable comments explaining other available
136 # parameters
137 logger.debug("Writing new config %s.", n_filename)
138
139 # Ensure that the file exists
140 open(n_filename, 'a').close()
141
142 # Copy permissions from the old version of the file, if it exists.
143 if os.path.exists(o_filename):
144 current_permissions = stat.S_IMODE(os.lstat(o_filename).st_mode)
145 os.chmod(n_filename, current_permissions)
146
147 with open(n_filename, "wb") as f:
148 config.write(outfile=f)
149 return config
150
151
152 def rename_renewal_config(prev_name, new_name, cli_config):
153 """Renames cli_config.certname's config to cli_config.new_certname.
154
155 :param .NamespaceConfig cli_config: parsed command line
156 arguments
157 """
158 prev_filename = renewal_filename_for_lineagename(cli_config, prev_name)
159 new_filename = renewal_filename_for_lineagename(cli_config, new_name)
160 if os.path.exists(new_filename):
161 raise errors.ConfigurationError("The new certificate name "
162 "is already in use.")
163 try:
164 os.rename(prev_filename, new_filename)
165 except OSError:
166 raise errors.ConfigurationError("Please specify a valid filename "
167 "for the new certificate name.")
168
169
170 def update_configuration(lineagename, archive_dir, target, cli_config):
171 """Modifies lineagename's config to contain the specified values.
172
173 :param str lineagename: Name of the lineage being modified
174 :param str archive_dir: Absolute path to the archive directory
175 :param dict target: Maps ALL_FOUR to their symlink paths
176 :param .NamespaceConfig cli_config: parsed command line
177 arguments
178
179 :returns: Configuration object for the updated config file
180 :rtype: configobj.ConfigObj
181
182 """
183 config_filename = renewal_filename_for_lineagename(cli_config, lineagename)
184 temp_filename = config_filename + ".new"
185
186 # If an existing tempfile exists, delete it
187 if os.path.exists(temp_filename):
188 os.unlink(temp_filename)
189
190 # Save only the config items that are relevant to renewal
191 values = relevant_values(vars(cli_config.namespace))
192 write_renewal_config(config_filename, temp_filename, archive_dir, target, values)
193 compat.os_rename(temp_filename, config_filename)
194
195 return configobj.ConfigObj(config_filename)
196
197
198 def get_link_target(link):
199 """Get an absolute path to the target of link.
200
201 :param str link: Path to a symbolic link
202
203 :returns: Absolute path to the target of link
204 :rtype: str
205
206 :raises .CertStorageError: If link does not exists.
207
208 """
209 try:
210 target = os.readlink(link)
211 except OSError:
212 raise errors.CertStorageError(
213 "Expected {0} to be a symlink".format(link))
214
215 if not os.path.isabs(target):
216 target = os.path.join(os.path.dirname(link), target)
217 return os.path.abspath(target)
218
219 def _write_live_readme_to(readme_path, is_base_dir=False):
220 prefix = ""
221 if is_base_dir:
222 prefix = "[cert name]/"
223 with open(readme_path, "w") as f:
224 logger.debug("Writing README to %s.", readme_path)
225 f.write("This directory contains your keys and certificates.\n\n"
226 "`{prefix}privkey.pem` : the private key for your certificate.\n"
227 "`{prefix}fullchain.pem`: the certificate file used in most server software.\n"
228 "`{prefix}chain.pem` : used for OCSP stapling in Nginx >=1.3.7.\n"
229 "`{prefix}cert.pem` : will break many server configurations, and "
230 "should not be used\n"
231 " without reading further documentation (see link below).\n\n"
232 "WARNING: DO NOT MOVE OR RENAME THESE FILES!\n"
233 " Certbot expects these files to remain in this location in order\n"
234 " to function properly!\n\n"
235 "We recommend not moving these files. For more information, see the Certbot\n"
236 "User Guide at https://certbot.eff.org/docs/using.html#where-are-my-"
237 "certificates.\n".format(prefix=prefix))
238
239
240 def _relevant(option):
241 """
242 Is this option one that could be restored for future renewal purposes?
243 :param str option: the name of the option
244
245 :rtype: bool
246 """
247 from certbot import renewal
248 plugins = plugins_disco.PluginsRegistry.find_all()
249 namespaces = [plugins_common.dest_namespace(plugin) for plugin in plugins]
250
251 return (option in renewal.CONFIG_ITEMS or
252 any(option.startswith(namespace) for namespace in namespaces))
253
254
255 def relevant_values(all_values):
256 """Return a new dict containing only items relevant for renewal.
257
258 :param dict all_values: The original values.
259
260 :returns: A new dictionary containing items that can be used in renewal.
261 :rtype dict:
262
263 """
264 rv = dict(
265 (option, value)
266 for option, value in six.iteritems(all_values)
267 if _relevant(option) and cli.option_was_set(option, value))
268 # We always save the server value to help with forward compatibility
269 # and behavioral consistency when versions of Certbot with different
270 # server defaults are used.
271 rv["server"] = all_values["server"]
272 return rv
273
274 def lineagename_for_filename(config_filename):
275 """Returns the lineagename for a configuration filename.
276 """
277 if not config_filename.endswith(".conf"):
278 raise errors.CertStorageError(
279 "renewal config file name must end in .conf")
280 return os.path.basename(config_filename[:-len(".conf")])
281
282 def renewal_filename_for_lineagename(config, lineagename):
283 """Returns the lineagename for a configuration filename.
284 """
285 return os.path.join(config.renewal_configs_dir, lineagename) + ".conf"
286
287 def _relpath_from_file(archive_dir, from_file):
288 """Path to a directory from a file"""
289 return os.path.relpath(archive_dir, os.path.dirname(from_file))
290
291 def full_archive_path(config_obj, cli_config, lineagename):
292 """Returns the full archive path for a lineagename
293
294 Uses cli_config to determine archive path if not available from config_obj.
295
296 :param configobj.ConfigObj config_obj: Renewal conf file contents (can be None)
297 :param configuration.NamespaceConfig cli_config: Main config file
298 :param str lineagename: Certificate name
299 """
300 if config_obj and "archive_dir" in config_obj:
301 return config_obj["archive_dir"]
302 else:
303 return os.path.join(cli_config.default_archive_dir, lineagename)
304
305 def _full_live_path(cli_config, lineagename):
306 """Returns the full default live path for a lineagename"""
307 return os.path.join(cli_config.live_dir, lineagename)
308
309 def delete_files(config, certname):
310 """Delete all files related to the certificate.
311
312 If some files are not found, ignore them and continue.
313 """
314 renewal_filename = renewal_file_for_certname(config, certname)
315 # file exists
316 full_default_archive_dir = full_archive_path(None, config, certname)
317 full_default_live_dir = _full_live_path(config, certname)
318 try:
319 renewal_config = configobj.ConfigObj(renewal_filename)
320 except configobj.ConfigObjError:
321 # config is corrupted
322 logger.warning("Could not parse %s. You may wish to manually "
323 "delete the contents of %s and %s.", renewal_filename,
324 full_default_live_dir, full_default_archive_dir)
325 raise errors.CertStorageError(
326 "error parsing {0}".format(renewal_filename))
327 finally:
328 # we couldn't read it, but let's at least delete it
329 # if this was going to fail, it already would have.
330 os.remove(renewal_filename)
331 logger.debug("Removed %s", renewal_filename)
332
333 # cert files and (hopefully) live directory
334 # it's not guaranteed that the files are in our default storage
335 # structure. so, first delete the cert files.
336 directory_names = set()
337 for kind in ALL_FOUR:
338 link = renewal_config.get(kind)
339 try:
340 os.remove(link)
341 logger.debug("Removed %s", link)
342 except OSError:
343 logger.debug("Unable to delete %s", link)
344 directory = os.path.dirname(link)
345 directory_names.add(directory)
346
347 # if all four were in the same directory, and the only thing left
348 # is the README file (or nothing), delete that directory.
349 # this will be wrong in very few but some cases.
350 if len(directory_names) == 1:
351 # delete the README file
352 directory = directory_names.pop()
353 readme_path = os.path.join(directory, README)
354 try:
355 os.remove(readme_path)
356 logger.debug("Removed %s", readme_path)
357 except OSError:
358 logger.debug("Unable to delete %s", readme_path)
359 # if it's now empty, delete the directory
360 try:
361 os.rmdir(directory) # only removes empty directories
362 logger.debug("Removed %s", directory)
363 except OSError:
364 logger.debug("Unable to remove %s; may not be empty.", directory)
365
366 # archive directory
367 try:
368 archive_path = full_archive_path(renewal_config, config, certname)
369 shutil.rmtree(archive_path)
370 logger.debug("Removed %s", archive_path)
371 except OSError:
372 logger.debug("Unable to remove %s", archive_path)
373
374
375 class RenewableCert(object):
376 # pylint: disable=too-many-instance-attributes,too-many-public-methods
377 """Renewable certificate.
378
379 Represents a lineage of certificates that is under the management of
380 Certbot, indicated by the existence of an associated renewal
381 configuration file.
382
383 Note that the notion of "current version" for a lineage is
384 maintained on disk in the structure of symbolic links, and is not
385 explicitly stored in any instance variable in this object. The
386 RenewableCert object is able to determine information about the
387 current (or other) version by accessing data on disk, but does not
388 inherently know any of this information except by examining the
389 symbolic links as needed. The instance variables mentioned below
390 point to symlinks that reflect the notion of "current version" of
391 each managed object, and it is these paths that should be used when
392 configuring servers to use the certificate managed in a lineage.
393 These paths are normally within the "live" directory, and their
394 symlink targets -- the actual cert files -- are normally found
395 within the "archive" directory.
396
397 :ivar str cert: The path to the symlink representing the current
398 version of the certificate managed by this lineage.
399 :ivar str privkey: The path to the symlink representing the current
400 version of the private key managed by this lineage.
401 :ivar str chain: The path to the symlink representing the current version
402 of the chain managed by this lineage.
403 :ivar str fullchain: The path to the symlink representing the
404 current version of the fullchain (combined chain and cert)
405 managed by this lineage.
406 :ivar configobj.ConfigObj configuration: The renewal configuration
407 options associated with this lineage, obtained from parsing the
408 renewal configuration file and/or systemwide defaults.
409
410 """
411 def __init__(self, config_filename, cli_config, update_symlinks=False):
412 """Instantiate a RenewableCert object from an existing lineage.
413
414 :param str config_filename: the path to the renewal config file
415 that defines this lineage.
416 :param .NamespaceConfig: parsed command line arguments
417
418 :raises .CertStorageError: if the configuration file's name didn't end
419 in ".conf", or the file is missing or broken.
420
421 """
422 self.cli_config = cli_config
423 self.lineagename = lineagename_for_filename(config_filename)
424
425 # self.configuration should be used to read parameters that
426 # may have been chosen based on default values from the
427 # systemwide renewal configuration; self.configfile should be
428 # used to make and save changes.
429 try:
430 self.configfile = configobj.ConfigObj(config_filename)
431 except configobj.ConfigObjError:
432 raise errors.CertStorageError(
433 "error parsing {0}".format(config_filename))
434 # TODO: Do we actually use anything from defaults and do we want to
435 # read further defaults from the systemwide renewal configuration
436 # file at this stage?
437 self.configuration = config_with_defaults(self.configfile)
438
439 if not all(x in self.configuration for x in ALL_FOUR):
440 raise errors.CertStorageError(
441 "renewal config file {0} is missing a required "
442 "file reference".format(self.configfile))
443
444 conf_version = self.configuration.get("version")
445 if (conf_version is not None and
446 util.get_strict_version(conf_version) > CURRENT_VERSION):
447 logger.info(
448 "Attempting to parse the version %s renewal configuration "
449 "file found at %s with version %s of Certbot. This might not "
450 "work.", conf_version, config_filename, certbot.__version__)
451
452 self.cert = self.configuration["cert"]
453 self.privkey = self.configuration["privkey"]
454 self.chain = self.configuration["chain"]
455 self.fullchain = self.configuration["fullchain"]
456 self.live_dir = os.path.dirname(self.cert)
457
458 self._fix_symlinks()
459 if update_symlinks:
460 self._update_symlinks()
461 self._check_symlinks()
462
463 @property
464 def key_path(self):
465 """Duck type for self.privkey"""
466 return self.privkey
467
468 @property
469 def cert_path(self):
470 """Duck type for self.cert"""
471 return self.cert
472
473 @property
474 def chain_path(self):
475 """Duck type for self.chain"""
476 return self.chain
477
478 @property
479 def fullchain_path(self):
480 """Duck type for self.fullchain"""
481 return self.fullchain
482
483 @property
484 def target_expiry(self):
485 """The current target certificate's expiration datetime
486
487 :returns: Expiration datetime of the current target certificate
488 :rtype: :class:`datetime.datetime`
489 """
490 return crypto_util.notAfter(self.current_target("cert"))
491
492 @property
493 def archive_dir(self):
494 """Returns the default or specified archive directory"""
495 return full_archive_path(self.configuration,
496 self.cli_config, self.lineagename)
497
498 def relative_archive_dir(self, from_file):
499 """Returns the default or specified archive directory as a relative path
500
501 Used for creating symbolic links.
502 """
503 return _relpath_from_file(self.archive_dir, from_file)
504
505 @property
506 def is_test_cert(self):
507 """Returns true if this is a test cert from a staging server."""
508 server = self.configuration["renewalparams"].get("server", None)
509 if server:
510 return util.is_staging(server)
511 else:
512 return False
513
514 def _check_symlinks(self):
515 """Raises an exception if a symlink doesn't exist"""
516 for kind in ALL_FOUR:
517 link = getattr(self, kind)
518 if not os.path.islink(link):
519 raise errors.CertStorageError(
520 "expected {0} to be a symlink".format(link))
521 target = get_link_target(link)
522 if not os.path.exists(target):
523 raise errors.CertStorageError("target {0} of symlink {1} does "
524 "not exist".format(target, link))
525
526 def _update_symlinks(self):
527 """Updates symlinks to use archive_dir"""
528 for kind in ALL_FOUR:
529 link = getattr(self, kind)
530 previous_link = get_link_target(link)
531 new_link = os.path.join(self.relative_archive_dir(link),
532 os.path.basename(previous_link))
533
534 os.unlink(link)
535 os.symlink(new_link, link)
536
537 def _consistent(self):
538 """Are the files associated with this lineage self-consistent?
539
540 :returns: Whether the files stored in connection with this
541 lineage appear to be correct and consistent with one
542 another.
543 :rtype: bool
544
545 """
546 # Each element must be referenced with an absolute path
547 for x in (self.cert, self.privkey, self.chain, self.fullchain):
548 if not os.path.isabs(x):
549 logger.debug("Element %s is not referenced with an "
550 "absolute path.", x)
551 return False
552
553 # Each element must exist and be a symbolic link
554 for x in (self.cert, self.privkey, self.chain, self.fullchain):
555 if not os.path.islink(x):
556 logger.debug("Element %s is not a symbolic link.", x)
557 return False
558 for kind in ALL_FOUR:
559 link = getattr(self, kind)
560 target = get_link_target(link)
561
562 # Each element's link must point within the cert lineage's
563 # directory within the official archive directory
564 if not os.path.samefile(os.path.dirname(target), self.archive_dir):
565 logger.debug("Element's link does not point within the "
566 "cert lineage's directory within the "
567 "official archive directory. Link: %s, "
568 "target directory: %s, "
569 "archive directory: %s. If you've specified "
570 "the archive directory in the renewal configuration "
571 "file, you may need to update links by running "
572 "certbot update_symlinks.",
573 link, os.path.dirname(target), self.archive_dir)
574 return False
575
576 # The link must point to a file that exists
577 if not os.path.exists(target):
578 logger.debug("Link %s points to file %s that does not exist.",
579 link, target)
580 return False
581
582 # The link must point to a file that follows the archive
583 # naming convention
584 pattern = re.compile(r"^{0}([0-9]+)\.pem$".format(kind))
585 if not pattern.match(os.path.basename(target)):
586 logger.debug("%s does not follow the archive naming "
587 "convention.", target)
588 return False
589
590 # It is NOT required that the link's target be a regular
591 # file (it may itself be a symlink). But we should probably
592 # do a recursive check that ultimately the target does
593 # exist?
594 # XXX: Additional possible consistency checks (e.g.
595 # cryptographic validation of the chain being a chain,
596 # the chain matching the cert, and the cert matching
597 # the subject key)
598 # XXX: All four of the targets are in the same directory
599 # (This check is redundant with the check that they
600 # are all in the desired directory!)
601 # len(set(os.path.basename(self.current_target(x)
602 # for x in ALL_FOUR))) == 1
603 return True
604
605 def _fix(self):
606 """Attempt to fix defects or inconsistencies in this lineage.
607
608 .. todo:: Currently unimplemented.
609
610 """
611 # TODO: Figure out what kinds of fixes are possible. For
612 # example, checking if there is a valid version that
613 # we can update the symlinks to. (Maybe involve
614 # parsing keys and certs to see if they exist and
615 # if a key corresponds to the subject key of a cert?)
616
617 # TODO: In general, the symlink-reading functions below are not
618 # cautious enough about the possibility that links or their
619 # targets may not exist. (This shouldn't happen, but might
620 # happen as a result of random tampering by a sysadmin, or
621 # filesystem errors, or crashes.)
622
623 def _previous_symlinks(self):
624 """Returns the kind and path of all symlinks used in recovery.
625
626 :returns: list of (kind, symlink) tuples
627 :rtype: list
628
629 """
630 previous_symlinks = []
631 for kind in ALL_FOUR:
632 link_dir = os.path.dirname(getattr(self, kind))
633 link_base = "previous_{0}.pem".format(kind)
634 previous_symlinks.append((kind, os.path.join(link_dir, link_base)))
635
636 return previous_symlinks
637
638 def _fix_symlinks(self):
639 """Fixes symlinks in the event of an incomplete version update.
640
641 If there is no problem with the current symlinks, this function
642 has no effect.
643
644 """
645 previous_symlinks = self._previous_symlinks()
646 if all(os.path.exists(link[1]) for link in previous_symlinks):
647 for kind, previous_link in previous_symlinks:
648 current_link = getattr(self, kind)
649 if os.path.lexists(current_link):
650 os.unlink(current_link)
651 os.symlink(os.readlink(previous_link), current_link)
652
653 for _, link in previous_symlinks:
654 if os.path.exists(link):
655 os.unlink(link)
656
657 def current_target(self, kind):
658 """Returns full path to which the specified item currently points.
659
660 :param str kind: the lineage member item ("cert", "privkey",
661 "chain", or "fullchain")
662
663 :returns: The path to the current version of the specified
664 member.
665 :rtype: str or None
666
667 """
668 if kind not in ALL_FOUR:
669 raise errors.CertStorageError("unknown kind of item")
670 link = getattr(self, kind)
671 if not os.path.exists(link):
672 logger.debug("Expected symlink %s for %s does not exist.",
673 link, kind)
674 return None
675 return get_link_target(link)
676
677 def current_version(self, kind):
678 """Returns numerical version of the specified item.
679
680 For example, if kind is "chain" and the current chain link
681 points to a file named "chain7.pem", returns the integer 7.
682
683 :param str kind: the lineage member item ("cert", "privkey",
684 "chain", or "fullchain")
685
686 :returns: the current version of the specified member.
687 :rtype: int
688
689 """
690 if kind not in ALL_FOUR:
691 raise errors.CertStorageError("unknown kind of item")
692 pattern = re.compile(r"^{0}([0-9]+)\.pem$".format(kind))
693 target = self.current_target(kind)
694 if target is None or not os.path.exists(target):
695 logger.debug("Current-version target for %s "
696 "does not exist at %s.", kind, target)
697 target = ""
698 matches = pattern.match(os.path.basename(target))
699 if matches:
700 return int(matches.groups()[0])
701 else:
702 logger.debug("No matches for target %s.", kind)
703 return None
704
705 def version(self, kind, version):
706 """The filename that corresponds to the specified version and kind.
707
708 .. warning:: The specified version may not exist in this
709 lineage. There is no guarantee that the file path returned
710 by this method actually exists.
711
712 :param str kind: the lineage member item ("cert", "privkey",
713 "chain", or "fullchain")
714 :param int version: the desired version
715
716 :returns: The path to the specified version of the specified member.
717 :rtype: str
718
719 """
720 if kind not in ALL_FOUR:
721 raise errors.CertStorageError("unknown kind of item")
722 where = os.path.dirname(self.current_target(kind))
723 return os.path.join(where, "{0}{1}.pem".format(kind, version))
724
725 def available_versions(self, kind):
726 """Which alternative versions of the specified kind of item exist?
727
728 The archive directory where the current version is stored is
729 consulted to obtain the list of alternatives.
730
731 :param str kind: the lineage member item (
732 ``cert``, ``privkey``, ``chain``, or ``fullchain``)
733
734 :returns: all of the version numbers that currently exist
735 :rtype: `list` of `int`
736
737 """
738 if kind not in ALL_FOUR:
739 raise errors.CertStorageError("unknown kind of item")
740 where = os.path.dirname(self.current_target(kind))
741 files = os.listdir(where)
742 pattern = re.compile(r"^{0}([0-9]+)\.pem$".format(kind))
743 matches = [pattern.match(f) for f in files]
744 return sorted([int(m.groups()[0]) for m in matches if m])
745
746 def newest_available_version(self, kind):
747 """Newest available version of the specified kind of item?
748
749 :param str kind: the lineage member item (``cert``,
750 ``privkey``, ``chain``, or ``fullchain``)
751
752 :returns: the newest available version of this member
753 :rtype: int
754
755 """
756 return max(self.available_versions(kind))
757
758 def latest_common_version(self):
759 """Newest version for which all items are available?
760
761 :returns: the newest available version for which all members
762 (``cert``, ``privkey``, ``chain``, and ``fullchain``) exist
763 :rtype: int
764
765 """
766 # TODO: this can raise CertStorageError if there is no version overlap
767 # (it should probably return None instead)
768 # TODO: this can raise a spurious AttributeError if the current
769 # link for any kind is missing (it should probably return None)
770 versions = [self.available_versions(x) for x in ALL_FOUR]
771 return max(n for n in versions[0] if all(n in v for v in versions[1:]))
772
773 def next_free_version(self):
774 """Smallest version newer than all full or partial versions?
775
776 :returns: the smallest version number that is larger than any
777 version of any item currently stored in this lineage
778 :rtype: int
779
780 """
781 # TODO: consider locking/mutual exclusion between updating processes
782 # This isn't self.latest_common_version() + 1 because we don't want to
783 # collide with a version that might exist for one file type but not
784 # for the others.
785 return max(self.newest_available_version(x) for x in ALL_FOUR) + 1
786
787 def ensure_deployed(self):
788 """Make sure we've deployed the latest version.
789
790 :returns: False if a change was needed, True otherwise
791 :rtype: bool
792
793 May need to recover from rare interrupted / crashed states."""
794
795 if self.has_pending_deployment():
796 logger.warning("Found a new cert /archive/ that was not linked to in /live/; "
797 "fixing...")
798 self.update_all_links_to(self.latest_common_version())
799 return False
800 return True
801
802
803 def has_pending_deployment(self):
804 """Is there a later version of all of the managed items?
805
806 :returns: ``True`` if there is a complete version of this
807 lineage with a larger version number than the current
808 version, and ``False`` otherwise
809 :rtype: bool
810
811 """
812 # TODO: consider whether to assume consistency or treat
813 # inconsistent/consistent versions differently
814 smallest_current = min(self.current_version(x) for x in ALL_FOUR)
815 return smallest_current < self.latest_common_version()
816
817 def _update_link_to(self, kind, version):
818 """Make the specified item point at the specified version.
819
820 (Note that this method doesn't verify that the specified version
821 exists.)
822
823 :param str kind: the lineage member item ("cert", "privkey",
824 "chain", or "fullchain")
825 :param int version: the desired version
826
827 """
828 if kind not in ALL_FOUR:
829 raise errors.CertStorageError("unknown kind of item")
830 link = getattr(self, kind)
831 filename = "{0}{1}.pem".format(kind, version)
832 # Relative rather than absolute target directory
833 target_directory = os.path.dirname(os.readlink(link))
834 # TODO: it could be safer to make the link first under a temporary
835 # filename, then unlink the old link, then rename the new link
836 # to the old link; this ensures that this process is able to
837 # create symlinks.
838 # TODO: we might also want to check consistency of related links
839 # for the other corresponding items
840 os.unlink(link)
841 os.symlink(os.path.join(target_directory, filename), link)
842
843 def update_all_links_to(self, version):
844 """Change all member objects to point to the specified version.
845
846 :param int version: the desired version
847
848 """
849 with error_handler.ErrorHandler(self._fix_symlinks):
850 previous_links = self._previous_symlinks()
851 for kind, link in previous_links:
852 os.symlink(self.current_target(kind), link)
853
854 for kind in ALL_FOUR:
855 self._update_link_to(kind, version)
856
857 for _, link in previous_links:
858 os.unlink(link)
859
860 def names(self, version=None):
861 """What are the subject names of this certificate?
862
863 (If no version is specified, use the current version.)
864
865 :param int version: the desired version number
866 :returns: the subject names
867 :rtype: `list` of `str`
868 :raises .CertStorageError: if could not find cert file.
869
870 """
871 if version is None:
872 target = self.current_target("cert")
873 else:
874 target = self.version("cert", version)
875 if target is None:
876 raise errors.CertStorageError("could not find cert file")
877 with open(target) as f:
878 return crypto_util.get_names_from_cert(f.read())
879
880 def autodeployment_is_enabled(self):
881 """Is automatic deployment enabled for this cert?
882
883 If autodeploy is not specified, defaults to True.
884
885 :returns: True if automatic deployment is enabled
886 :rtype: bool
887
888 """
889 return ("autodeploy" not in self.configuration or
890 self.configuration.as_bool("autodeploy"))
891
892 def should_autodeploy(self, interactive=False):
893 """Should this lineage now automatically deploy a newer version?
894
895 This is a policy question and does not only depend on whether
896 there is a newer version of the cert. (This considers whether
897 autodeployment is enabled, whether a relevant newer version
898 exists, and whether the time interval for autodeployment has
899 been reached.)
900
901 :param bool interactive: set to True to examine the question
902 regardless of whether the renewal configuration allows
903 automated deployment (for interactive use). Default False.
904
905 :returns: whether the lineage now ought to autodeploy an
906 existing newer cert version
907 :rtype: bool
908
909 """
910 if interactive or self.autodeployment_is_enabled():
911 if self.has_pending_deployment():
912 interval = self.configuration.get("deploy_before_expiry",
913 "5 days")
914 now = pytz.UTC.fromutc(datetime.datetime.utcnow())
915 if self.target_expiry < add_time_interval(now, interval):
916 return True
917 return False
918
919 def ocsp_revoked(self, version=None):
920 # pylint: disable=no-self-use,unused-argument
921 """Is the specified cert version revoked according to OCSP?
922
923 Also returns True if the cert version is declared as intended
924 to be revoked according to Let's Encrypt OCSP extensions.
925 (If no version is specified, uses the current version.)
926
927 This method is not yet implemented and currently always returns
928 False.
929
930 :param int version: the desired version number
931
932 :returns: whether the certificate is or will be revoked
933 :rtype: bool
934
935 """
936 # XXX: This query and its associated network service aren't
937 # implemented yet, so we currently return False (indicating that the
938 # certificate is not revoked).
939 return False
940
941 def autorenewal_is_enabled(self):
942 """Is automatic renewal enabled for this cert?
943
944 If autorenew is not specified, defaults to True.
945
946 :returns: True if automatic renewal is enabled
947 :rtype: bool
948
949 """
950 return ("autorenew" not in self.configuration["renewalparams"] or
951 self.configuration["renewalparams"].as_bool("autorenew"))
952
953 def should_autorenew(self):
954 """Should we now try to autorenew the most recent cert version?
955
956 This is a policy question and does not only depend on whether
957 the cert is expired. (This considers whether autorenewal is
958 enabled, whether the cert is revoked, and whether the time
959 interval for autorenewal has been reached.)
960
961 Note that this examines the numerically most recent cert version,
962 not the currently deployed version.
963
964 :returns: whether an attempt should now be made to autorenew the
965 most current cert version in this lineage
966 :rtype: bool
967
968 """
969 if self.autorenewal_is_enabled():
970 # Consider whether to attempt to autorenew this cert now
971
972 # Renewals on the basis of revocation
973 if self.ocsp_revoked(self.latest_common_version()):
974 logger.debug("Should renew, certificate is revoked.")
975 return True
976
977 # Renews some period before expiry time
978 default_interval = constants.RENEWER_DEFAULTS["renew_before_expiry"]
979 interval = self.configuration.get("renew_before_expiry", default_interval)
980 expiry = crypto_util.notAfter(self.version(
981 "cert", self.latest_common_version()))
982 now = pytz.UTC.fromutc(datetime.datetime.utcnow())
983 if expiry < add_time_interval(now, interval):
984 logger.debug("Should renew, less than %s before certificate "
985 "expiry %s.", interval,
986 expiry.strftime("%Y-%m-%d %H:%M:%S %Z"))
987 return True
988 return False
989
990 @classmethod
991 def new_lineage(cls, lineagename, cert, privkey, chain, cli_config):
992 # pylint: disable=too-many-locals
993 """Create a new certificate lineage.
994
995 Attempts to create a certificate lineage -- enrolled for
996 potential future renewal -- with the (suggested) lineage name
997 lineagename, and the associated cert, privkey, and chain (the
998 associated fullchain will be created automatically). Optional
999 configurator and renewalparams record the configuration that was
1000 originally used to obtain this cert, so that it can be reused
1001 later during automated renewal.
1002
1003 Returns a new RenewableCert object referring to the created
1004 lineage. (The actual lineage name, as well as all the relevant
1005 file paths, will be available within this object.)
1006
1007 :param str lineagename: the suggested name for this lineage
1008 (normally the current cert's first subject DNS name)
1009 :param str cert: the initial certificate version in PEM format
1010 :param str privkey: the private key in PEM format
1011 :param str chain: the certificate chain in PEM format
1012 :param .NamespaceConfig cli_config: parsed command line
1013 arguments
1014
1015 :returns: the newly-created RenewableCert object
1016 :rtype: :class:`storage.RenewableCert`
1017
1018 """
1019
1020 # Examine the configuration and find the new lineage's name
1021 for i in (cli_config.renewal_configs_dir, cli_config.default_archive_dir,
1022 cli_config.live_dir):
1023 if not os.path.exists(i):
1024 os.makedirs(i, 0o700)
1025 logger.debug("Creating directory %s.", i)
1026 config_file, config_filename = util.unique_lineage_name(
1027 cli_config.renewal_configs_dir, lineagename)
1028 base_readme_path = os.path.join(cli_config.live_dir, README)
1029 if not os.path.exists(base_readme_path):
1030 _write_live_readme_to(base_readme_path, is_base_dir=True)
1031
1032 # Determine where on disk everything will go
1033 # lineagename will now potentially be modified based on which
1034 # renewal configuration file could actually be created
1035 lineagename = lineagename_for_filename(config_filename)
1036 archive = full_archive_path(None, cli_config, lineagename)
1037 live_dir = _full_live_path(cli_config, lineagename)
1038 if os.path.exists(archive):
1039 config_file.close()
1040 raise errors.CertStorageError(
1041 "archive directory exists for " + lineagename)
1042 if os.path.exists(live_dir):
1043 config_file.close()
1044 raise errors.CertStorageError(
1045 "live directory exists for " + lineagename)
1046 os.mkdir(archive)
1047 os.mkdir(live_dir)
1048 logger.debug("Archive directory %s and live "
1049 "directory %s created.", archive, live_dir)
1050
1051 # Put the data into the appropriate files on disk
1052 target = dict([(kind, os.path.join(live_dir, kind + ".pem"))
1053 for kind in ALL_FOUR])
1054 archive_target = dict([(kind, os.path.join(archive, kind + "1.pem"))
1055 for kind in ALL_FOUR])
1056 for kind in ALL_FOUR:
1057 os.symlink(_relpath_from_file(archive_target[kind], target[kind]), target[kind])
1058 with open(target["cert"], "wb") as f:
1059 logger.debug("Writing certificate to %s.", target["cert"])
1060 f.write(cert)
1061 with util.safe_open(archive_target["privkey"], "wb", chmod=BASE_PRIVKEY_MODE) as f:
1062 logger.debug("Writing private key to %s.", target["privkey"])
1063 f.write(privkey)
1064 # XXX: Let's make sure to get the file permissions right here
1065 with open(target["chain"], "wb") as f:
1066 logger.debug("Writing chain to %s.", target["chain"])
1067 f.write(chain)
1068 with open(target["fullchain"], "wb") as f:
1069 # assumes that OpenSSL.crypto.dump_certificate includes
1070 # ending newline character
1071 logger.debug("Writing full chain to %s.", target["fullchain"])
1072 f.write(cert + chain)
1073
1074 # Write a README file to the live directory
1075 readme_path = os.path.join(live_dir, README)
1076 _write_live_readme_to(readme_path)
1077
1078 # Document what we've done in a new renewal config file
1079 config_file.close()
1080
1081 # Save only the config items that are relevant to renewal
1082 values = relevant_values(vars(cli_config.namespace))
1083
1084 new_config = write_renewal_config(config_filename, config_filename, archive,
1085 target, values)
1086 return cls(new_config.filename, cli_config)
1087
1088 def save_successor(self, prior_version, new_cert,
1089 new_privkey, new_chain, cli_config):
1090 """Save new cert and chain as a successor of a prior version.
1091
1092 Returns the new version number that was created.
1093
1094 .. note:: this function does NOT update links to deploy this
1095 version
1096
1097 :param int prior_version: the old version to which this version
1098 is regarded as a successor (used to choose a privkey, if the
1099 key has not changed, but otherwise this information is not
1100 permanently recorded anywhere)
1101 :param bytes new_cert: the new certificate, in PEM format
1102 :param bytes new_privkey: the new private key, in PEM format,
1103 or ``None``, if the private key has not changed
1104 :param bytes new_chain: the new chain, in PEM format
1105 :param .NamespaceConfig cli_config: parsed command line
1106 arguments
1107
1108 :returns: the new version number that was created
1109 :rtype: int
1110
1111 """
1112 # XXX: assumes official archive location rather than examining links
1113 # XXX: consider using os.open for availability of os.O_EXCL
1114 # XXX: ensure file permissions are correct; also create directories
1115 # if needed (ensuring their permissions are correct)
1116 # Figure out what the new version is and hence where to save things
1117
1118 self.cli_config = cli_config
1119 target_version = self.next_free_version()
1120 target = dict(
1121 [(kind,
1122 os.path.join(self.archive_dir, "{0}{1}.pem".format(kind, target_version)))
1123 for kind in ALL_FOUR])
1124
1125 old_privkey = os.path.join(
1126 self.archive_dir, "privkey{0}.pem".format(prior_version))
1127
1128 # Distinguish the cases where the privkey has changed and where it
1129 # has not changed (in the latter case, making an appropriate symlink
1130 # to an earlier privkey version)
1131 if new_privkey is None:
1132 # The behavior below keeps the prior key by creating a new
1133 # symlink to the old key or the target of the old key symlink.
1134 if os.path.islink(old_privkey):
1135 old_privkey = os.readlink(old_privkey)
1136 else:
1137 old_privkey = "privkey{0}.pem".format(prior_version)
1138 logger.debug("Writing symlink to old private key, %s.", old_privkey)
1139 os.symlink(old_privkey, target["privkey"])
1140 else:
1141 with util.safe_open(target["privkey"], "wb", chmod=BASE_PRIVKEY_MODE) as f:
1142 logger.debug("Writing new private key to %s.", target["privkey"])
1143 f.write(new_privkey)
1144 # Preserve gid and (mode & 074) from previous privkey in this lineage.
1145 old_mode = stat.S_IMODE(os.stat(old_privkey).st_mode) & \
1146 (stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP | \
1147 stat.S_IROTH)
1148 mode = BASE_PRIVKEY_MODE | old_mode
1149 os.chown(target["privkey"], -1, os.stat(old_privkey).st_gid)
1150 os.chmod(target["privkey"], mode)
1151
1152 # Save everything else
1153 with open(target["cert"], "wb") as f:
1154 logger.debug("Writing certificate to %s.", target["cert"])
1155 f.write(new_cert)
1156 with open(target["chain"], "wb") as f:
1157 logger.debug("Writing chain to %s.", target["chain"])
1158 f.write(new_chain)
1159 with open(target["fullchain"], "wb") as f:
1160 logger.debug("Writing full chain to %s.", target["fullchain"])
1161 f.write(new_cert + new_chain)
1162
1163 symlinks = dict((kind, self.configuration[kind]) for kind in ALL_FOUR)
1164 # Update renewal config file
1165 self.configfile = update_configuration(
1166 self.lineagename, self.archive_dir, symlinks, cli_config)
1167 self.configuration = config_with_defaults(self.configfile)
1168
1169 return target_version
1170
```
--- END FILES ---
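For orientation, the `RenewableCert` class in the file above manages the layout that `new_lineage` creates: a `live/<name>/` directory holding the symlinks `cert.pem`, `privkey.pem`, `chain.pem` and `fullchain.pem`, each pointing via a relative path at a numbered file in `archive/<name>/`. The following is a rough, self-contained sketch of that structure; the `/tmp` root and the empty placeholder files are assumptions for illustration only, not anything certbot itself requires.

```python
import os

base = "/tmp/certbot-layout-demo"   # throwaway root, not a real certbot path
archive = os.path.join(base, "archive", "example.org")
live = os.path.join(base, "live", "example.org")
os.makedirs(archive, exist_ok=True)
os.makedirs(live, exist_ok=True)

for kind in ("cert", "privkey", "chain", "fullchain"):
    target = os.path.join(archive, "{0}1.pem".format(kind))  # version 1, e.g. cert1.pem
    open(target, "wb").close()                                # empty stand-in for PEM data
    link = os.path.join(live, kind + ".pem")
    # new_lineage() writes relative symlinks from the live dir into the archive dir
    os.symlink(os.path.relpath(target, live), link)
```

The symlink targets are what `RenewableCert.current_version` later parses (`cert1.pem` → version 1); bumping a version means adding `cert2.pem` and friends to the archive and repointing the live symlinks.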
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
| diff --git a/certbot/storage.py b/certbot/storage.py
--- a/certbot/storage.py
+++ b/certbot/storage.py
@@ -41,7 +41,9 @@
     :rtype: `list` of `str`
 
     """
-    return glob.glob(os.path.join(config.renewal_configs_dir, "*.conf"))
+    result = glob.glob(os.path.join(config.renewal_configs_dir, "*.conf"))
+    result.sort()
+    return result
 
 def renewal_file_for_certname(config, certname):
     """Return /path/to/certname.conf in the renewal conf directory"""
| {"golden_diff": "diff --git a/certbot/storage.py b/certbot/storage.py\n--- a/certbot/storage.py\n+++ b/certbot/storage.py\n@@ -41,7 +41,9 @@\n :rtype: `list` of `str`\n \n \"\"\"\n- return glob.glob(os.path.join(config.renewal_configs_dir, \"*.conf\"))\n+ result = glob.glob(os.path.join(config.renewal_configs_dir, \"*.conf\"))\n+ result.sort()\n+ return result\n \n def renewal_file_for_certname(config, certname):\n \"\"\"Return /path/to/certname.conf in the renewal conf directory\"\"\"\n", "issue": "certbot delete list must be sorted\nSubj.\n", "before_files": [{"content": "\"\"\"Renewable certificates storage.\"\"\"\nimport datetime\nimport glob\nimport logging\nimport os\nimport re\nimport stat\n\nimport configobj\nimport parsedatetime\nimport pytz\nimport shutil\nimport six\n\nimport certbot\nfrom certbot import cli\nfrom certbot import compat\nfrom certbot import constants\nfrom certbot import crypto_util\nfrom certbot import errors\nfrom certbot import error_handler\nfrom certbot import util\n\nfrom certbot.plugins import common as plugins_common\nfrom certbot.plugins import disco as plugins_disco\n\nlogger = logging.getLogger(__name__)\n\nALL_FOUR = (\"cert\", \"privkey\", \"chain\", \"fullchain\")\nREADME = \"README\"\nCURRENT_VERSION = util.get_strict_version(certbot.__version__)\nBASE_PRIVKEY_MODE = 0o600\n\n\ndef renewal_conf_files(config):\n \"\"\"Build a list of all renewal configuration files.\n\n :param certbot.interfaces.IConfig config: Configuration object\n\n :returns: list of renewal configuration files\n :rtype: `list` of `str`\n\n \"\"\"\n return glob.glob(os.path.join(config.renewal_configs_dir, \"*.conf\"))\n\ndef renewal_file_for_certname(config, certname):\n \"\"\"Return /path/to/certname.conf in the renewal conf directory\"\"\"\n path = os.path.join(config.renewal_configs_dir, \"{0}.conf\".format(certname))\n if not os.path.exists(path):\n raise errors.CertStorageError(\"No certificate found with name {0} (expected \"\n \"{1}).\".format(certname, path))\n return path\n\n\ndef cert_path_for_cert_name(config, cert_name):\n \"\"\" If `--cert-name` was specified, but you need a value for `--cert-path`.\n\n :param `configuration.NamespaceConfig` config: parsed command line arguments\n :param str cert_name: cert name.\n\n \"\"\"\n cert_name_implied_conf = renewal_file_for_certname(config, cert_name)\n fullchain_path = configobj.ConfigObj(cert_name_implied_conf)[\"fullchain\"]\n with open(fullchain_path) as f:\n cert_path = (fullchain_path, f.read())\n return cert_path\n\n\ndef config_with_defaults(config=None):\n \"\"\"Merge supplied config, if provided, on top of builtin defaults.\"\"\"\n defaults_copy = configobj.ConfigObj(constants.RENEWER_DEFAULTS)\n defaults_copy.merge(config if config is not None else configobj.ConfigObj())\n return defaults_copy\n\n\ndef add_time_interval(base_time, interval, textparser=parsedatetime.Calendar()):\n \"\"\"Parse the time specified time interval, and add it to the base_time\n\n The interval can be in the English-language format understood by\n parsedatetime, e.g., '10 days', '3 weeks', '6 months', '9 hours', or\n a sequence of such intervals like '6 months 1 week' or '3 days 12\n hours'. 
If an integer is found with no associated unit, it is\n interpreted by default as a number of days.\n\n :param datetime.datetime base_time: The time to be added with the interval.\n :param str interval: The time interval to parse.\n\n :returns: The base_time plus the interpretation of the time interval.\n :rtype: :class:`datetime.datetime`\"\"\"\n\n if interval.strip().isdigit():\n interval += \" days\"\n\n # try to use the same timezone, but fallback to UTC\n tzinfo = base_time.tzinfo or pytz.UTC\n\n return textparser.parseDT(interval, base_time, tzinfo=tzinfo)[0]\n\n\ndef write_renewal_config(o_filename, n_filename, archive_dir, target, relevant_data):\n \"\"\"Writes a renewal config file with the specified name and values.\n\n :param str o_filename: Absolute path to the previous version of config file\n :param str n_filename: Absolute path to the new destination of config file\n :param str archive_dir: Absolute path to the archive directory\n :param dict target: Maps ALL_FOUR to their symlink paths\n :param dict relevant_data: Renewal configuration options to save\n\n :returns: Configuration object for the new config file\n :rtype: configobj.ConfigObj\n\n \"\"\"\n config = configobj.ConfigObj(o_filename)\n config[\"version\"] = certbot.__version__\n config[\"archive_dir\"] = archive_dir\n for kind in ALL_FOUR:\n config[kind] = target[kind]\n\n if \"renewalparams\" not in config:\n config[\"renewalparams\"] = {}\n config.comments[\"renewalparams\"] = [\"\",\n \"Options used in \"\n \"the renewal process\"]\n\n config[\"renewalparams\"].update(relevant_data)\n\n for k in config[\"renewalparams\"].keys():\n if k not in relevant_data:\n del config[\"renewalparams\"][k]\n\n if \"renew_before_expiry\" not in config:\n default_interval = constants.RENEWER_DEFAULTS[\"renew_before_expiry\"]\n config.initial_comment = [\"renew_before_expiry = \" + default_interval]\n\n # TODO: add human-readable comments explaining other available\n # parameters\n logger.debug(\"Writing new config %s.\", n_filename)\n\n # Ensure that the file exists\n open(n_filename, 'a').close()\n\n # Copy permissions from the old version of the file, if it exists.\n if os.path.exists(o_filename):\n current_permissions = stat.S_IMODE(os.lstat(o_filename).st_mode)\n os.chmod(n_filename, current_permissions)\n\n with open(n_filename, \"wb\") as f:\n config.write(outfile=f)\n return config\n\n\ndef rename_renewal_config(prev_name, new_name, cli_config):\n \"\"\"Renames cli_config.certname's config to cli_config.new_certname.\n\n :param .NamespaceConfig cli_config: parsed command line\n arguments\n \"\"\"\n prev_filename = renewal_filename_for_lineagename(cli_config, prev_name)\n new_filename = renewal_filename_for_lineagename(cli_config, new_name)\n if os.path.exists(new_filename):\n raise errors.ConfigurationError(\"The new certificate name \"\n \"is already in use.\")\n try:\n os.rename(prev_filename, new_filename)\n except OSError:\n raise errors.ConfigurationError(\"Please specify a valid filename \"\n \"for the new certificate name.\")\n\n\ndef update_configuration(lineagename, archive_dir, target, cli_config):\n \"\"\"Modifies lineagename's config to contain the specified values.\n\n :param str lineagename: Name of the lineage being modified\n :param str archive_dir: Absolute path to the archive directory\n :param dict target: Maps ALL_FOUR to their symlink paths\n :param .NamespaceConfig cli_config: parsed command line\n arguments\n\n :returns: Configuration object for the updated config file\n :rtype: 
configobj.ConfigObj\n\n \"\"\"\n config_filename = renewal_filename_for_lineagename(cli_config, lineagename)\n temp_filename = config_filename + \".new\"\n\n # If an existing tempfile exists, delete it\n if os.path.exists(temp_filename):\n os.unlink(temp_filename)\n\n # Save only the config items that are relevant to renewal\n values = relevant_values(vars(cli_config.namespace))\n write_renewal_config(config_filename, temp_filename, archive_dir, target, values)\n compat.os_rename(temp_filename, config_filename)\n\n return configobj.ConfigObj(config_filename)\n\n\ndef get_link_target(link):\n \"\"\"Get an absolute path to the target of link.\n\n :param str link: Path to a symbolic link\n\n :returns: Absolute path to the target of link\n :rtype: str\n\n :raises .CertStorageError: If link does not exists.\n\n \"\"\"\n try:\n target = os.readlink(link)\n except OSError:\n raise errors.CertStorageError(\n \"Expected {0} to be a symlink\".format(link))\n\n if not os.path.isabs(target):\n target = os.path.join(os.path.dirname(link), target)\n return os.path.abspath(target)\n\ndef _write_live_readme_to(readme_path, is_base_dir=False):\n prefix = \"\"\n if is_base_dir:\n prefix = \"[cert name]/\"\n with open(readme_path, \"w\") as f:\n logger.debug(\"Writing README to %s.\", readme_path)\n f.write(\"This directory contains your keys and certificates.\\n\\n\"\n \"`{prefix}privkey.pem` : the private key for your certificate.\\n\"\n \"`{prefix}fullchain.pem`: the certificate file used in most server software.\\n\"\n \"`{prefix}chain.pem` : used for OCSP stapling in Nginx >=1.3.7.\\n\"\n \"`{prefix}cert.pem` : will break many server configurations, and \"\n \"should not be used\\n\"\n \" without reading further documentation (see link below).\\n\\n\"\n \"WARNING: DO NOT MOVE OR RENAME THESE FILES!\\n\"\n \" Certbot expects these files to remain in this location in order\\n\"\n \" to function properly!\\n\\n\"\n \"We recommend not moving these files. 
For more information, see the Certbot\\n\"\n \"User Guide at https://certbot.eff.org/docs/using.html#where-are-my-\"\n \"certificates.\\n\".format(prefix=prefix))\n\n\ndef _relevant(option):\n \"\"\"\n Is this option one that could be restored for future renewal purposes?\n :param str option: the name of the option\n\n :rtype: bool\n \"\"\"\n from certbot import renewal\n plugins = plugins_disco.PluginsRegistry.find_all()\n namespaces = [plugins_common.dest_namespace(plugin) for plugin in plugins]\n\n return (option in renewal.CONFIG_ITEMS or\n any(option.startswith(namespace) for namespace in namespaces))\n\n\ndef relevant_values(all_values):\n \"\"\"Return a new dict containing only items relevant for renewal.\n\n :param dict all_values: The original values.\n\n :returns: A new dictionary containing items that can be used in renewal.\n :rtype dict:\n\n \"\"\"\n rv = dict(\n (option, value)\n for option, value in six.iteritems(all_values)\n if _relevant(option) and cli.option_was_set(option, value))\n # We always save the server value to help with forward compatibility\n # and behavioral consistency when versions of Certbot with different\n # server defaults are used.\n rv[\"server\"] = all_values[\"server\"]\n return rv\n\ndef lineagename_for_filename(config_filename):\n \"\"\"Returns the lineagename for a configuration filename.\n \"\"\"\n if not config_filename.endswith(\".conf\"):\n raise errors.CertStorageError(\n \"renewal config file name must end in .conf\")\n return os.path.basename(config_filename[:-len(\".conf\")])\n\ndef renewal_filename_for_lineagename(config, lineagename):\n \"\"\"Returns the lineagename for a configuration filename.\n \"\"\"\n return os.path.join(config.renewal_configs_dir, lineagename) + \".conf\"\n\ndef _relpath_from_file(archive_dir, from_file):\n \"\"\"Path to a directory from a file\"\"\"\n return os.path.relpath(archive_dir, os.path.dirname(from_file))\n\ndef full_archive_path(config_obj, cli_config, lineagename):\n \"\"\"Returns the full archive path for a lineagename\n\n Uses cli_config to determine archive path if not available from config_obj.\n\n :param configobj.ConfigObj config_obj: Renewal conf file contents (can be None)\n :param configuration.NamespaceConfig cli_config: Main config file\n :param str lineagename: Certificate name\n \"\"\"\n if config_obj and \"archive_dir\" in config_obj:\n return config_obj[\"archive_dir\"]\n else:\n return os.path.join(cli_config.default_archive_dir, lineagename)\n\ndef _full_live_path(cli_config, lineagename):\n \"\"\"Returns the full default live path for a lineagename\"\"\"\n return os.path.join(cli_config.live_dir, lineagename)\n\ndef delete_files(config, certname):\n \"\"\"Delete all files related to the certificate.\n\n If some files are not found, ignore them and continue.\n \"\"\"\n renewal_filename = renewal_file_for_certname(config, certname)\n # file exists\n full_default_archive_dir = full_archive_path(None, config, certname)\n full_default_live_dir = _full_live_path(config, certname)\n try:\n renewal_config = configobj.ConfigObj(renewal_filename)\n except configobj.ConfigObjError:\n # config is corrupted\n logger.warning(\"Could not parse %s. 
You may wish to manually \"\n \"delete the contents of %s and %s.\", renewal_filename,\n full_default_live_dir, full_default_archive_dir)\n raise errors.CertStorageError(\n \"error parsing {0}\".format(renewal_filename))\n finally:\n # we couldn't read it, but let's at least delete it\n # if this was going to fail, it already would have.\n os.remove(renewal_filename)\n logger.debug(\"Removed %s\", renewal_filename)\n\n # cert files and (hopefully) live directory\n # it's not guaranteed that the files are in our default storage\n # structure. so, first delete the cert files.\n directory_names = set()\n for kind in ALL_FOUR:\n link = renewal_config.get(kind)\n try:\n os.remove(link)\n logger.debug(\"Removed %s\", link)\n except OSError:\n logger.debug(\"Unable to delete %s\", link)\n directory = os.path.dirname(link)\n directory_names.add(directory)\n\n # if all four were in the same directory, and the only thing left\n # is the README file (or nothing), delete that directory.\n # this will be wrong in very few but some cases.\n if len(directory_names) == 1:\n # delete the README file\n directory = directory_names.pop()\n readme_path = os.path.join(directory, README)\n try:\n os.remove(readme_path)\n logger.debug(\"Removed %s\", readme_path)\n except OSError:\n logger.debug(\"Unable to delete %s\", readme_path)\n # if it's now empty, delete the directory\n try:\n os.rmdir(directory) # only removes empty directories\n logger.debug(\"Removed %s\", directory)\n except OSError:\n logger.debug(\"Unable to remove %s; may not be empty.\", directory)\n\n # archive directory\n try:\n archive_path = full_archive_path(renewal_config, config, certname)\n shutil.rmtree(archive_path)\n logger.debug(\"Removed %s\", archive_path)\n except OSError:\n logger.debug(\"Unable to remove %s\", archive_path)\n\n\nclass RenewableCert(object):\n # pylint: disable=too-many-instance-attributes,too-many-public-methods\n \"\"\"Renewable certificate.\n\n Represents a lineage of certificates that is under the management of\n Certbot, indicated by the existence of an associated renewal\n configuration file.\n\n Note that the notion of \"current version\" for a lineage is\n maintained on disk in the structure of symbolic links, and is not\n explicitly stored in any instance variable in this object. The\n RenewableCert object is able to determine information about the\n current (or other) version by accessing data on disk, but does not\n inherently know any of this information except by examining the\n symbolic links as needed. 
The instance variables mentioned below\n point to symlinks that reflect the notion of \"current version\" of\n each managed object, and it is these paths that should be used when\n configuring servers to use the certificate managed in a lineage.\n These paths are normally within the \"live\" directory, and their\n symlink targets -- the actual cert files -- are normally found\n within the \"archive\" directory.\n\n :ivar str cert: The path to the symlink representing the current\n version of the certificate managed by this lineage.\n :ivar str privkey: The path to the symlink representing the current\n version of the private key managed by this lineage.\n :ivar str chain: The path to the symlink representing the current version\n of the chain managed by this lineage.\n :ivar str fullchain: The path to the symlink representing the\n current version of the fullchain (combined chain and cert)\n managed by this lineage.\n :ivar configobj.ConfigObj configuration: The renewal configuration\n options associated with this lineage, obtained from parsing the\n renewal configuration file and/or systemwide defaults.\n\n \"\"\"\n def __init__(self, config_filename, cli_config, update_symlinks=False):\n \"\"\"Instantiate a RenewableCert object from an existing lineage.\n\n :param str config_filename: the path to the renewal config file\n that defines this lineage.\n :param .NamespaceConfig: parsed command line arguments\n\n :raises .CertStorageError: if the configuration file's name didn't end\n in \".conf\", or the file is missing or broken.\n\n \"\"\"\n self.cli_config = cli_config\n self.lineagename = lineagename_for_filename(config_filename)\n\n # self.configuration should be used to read parameters that\n # may have been chosen based on default values from the\n # systemwide renewal configuration; self.configfile should be\n # used to make and save changes.\n try:\n self.configfile = configobj.ConfigObj(config_filename)\n except configobj.ConfigObjError:\n raise errors.CertStorageError(\n \"error parsing {0}\".format(config_filename))\n # TODO: Do we actually use anything from defaults and do we want to\n # read further defaults from the systemwide renewal configuration\n # file at this stage?\n self.configuration = config_with_defaults(self.configfile)\n\n if not all(x in self.configuration for x in ALL_FOUR):\n raise errors.CertStorageError(\n \"renewal config file {0} is missing a required \"\n \"file reference\".format(self.configfile))\n\n conf_version = self.configuration.get(\"version\")\n if (conf_version is not None and\n util.get_strict_version(conf_version) > CURRENT_VERSION):\n logger.info(\n \"Attempting to parse the version %s renewal configuration \"\n \"file found at %s with version %s of Certbot. 
This might not \"\n \"work.\", conf_version, config_filename, certbot.__version__)\n\n self.cert = self.configuration[\"cert\"]\n self.privkey = self.configuration[\"privkey\"]\n self.chain = self.configuration[\"chain\"]\n self.fullchain = self.configuration[\"fullchain\"]\n self.live_dir = os.path.dirname(self.cert)\n\n self._fix_symlinks()\n if update_symlinks:\n self._update_symlinks()\n self._check_symlinks()\n\n @property\n def key_path(self):\n \"\"\"Duck type for self.privkey\"\"\"\n return self.privkey\n\n @property\n def cert_path(self):\n \"\"\"Duck type for self.cert\"\"\"\n return self.cert\n\n @property\n def chain_path(self):\n \"\"\"Duck type for self.chain\"\"\"\n return self.chain\n\n @property\n def fullchain_path(self):\n \"\"\"Duck type for self.fullchain\"\"\"\n return self.fullchain\n\n @property\n def target_expiry(self):\n \"\"\"The current target certificate's expiration datetime\n\n :returns: Expiration datetime of the current target certificate\n :rtype: :class:`datetime.datetime`\n \"\"\"\n return crypto_util.notAfter(self.current_target(\"cert\"))\n\n @property\n def archive_dir(self):\n \"\"\"Returns the default or specified archive directory\"\"\"\n return full_archive_path(self.configuration,\n self.cli_config, self.lineagename)\n\n def relative_archive_dir(self, from_file):\n \"\"\"Returns the default or specified archive directory as a relative path\n\n Used for creating symbolic links.\n \"\"\"\n return _relpath_from_file(self.archive_dir, from_file)\n\n @property\n def is_test_cert(self):\n \"\"\"Returns true if this is a test cert from a staging server.\"\"\"\n server = self.configuration[\"renewalparams\"].get(\"server\", None)\n if server:\n return util.is_staging(server)\n else:\n return False\n\n def _check_symlinks(self):\n \"\"\"Raises an exception if a symlink doesn't exist\"\"\"\n for kind in ALL_FOUR:\n link = getattr(self, kind)\n if not os.path.islink(link):\n raise errors.CertStorageError(\n \"expected {0} to be a symlink\".format(link))\n target = get_link_target(link)\n if not os.path.exists(target):\n raise errors.CertStorageError(\"target {0} of symlink {1} does \"\n \"not exist\".format(target, link))\n\n def _update_symlinks(self):\n \"\"\"Updates symlinks to use archive_dir\"\"\"\n for kind in ALL_FOUR:\n link = getattr(self, kind)\n previous_link = get_link_target(link)\n new_link = os.path.join(self.relative_archive_dir(link),\n os.path.basename(previous_link))\n\n os.unlink(link)\n os.symlink(new_link, link)\n\n def _consistent(self):\n \"\"\"Are the files associated with this lineage self-consistent?\n\n :returns: Whether the files stored in connection with this\n lineage appear to be correct and consistent with one\n another.\n :rtype: bool\n\n \"\"\"\n # Each element must be referenced with an absolute path\n for x in (self.cert, self.privkey, self.chain, self.fullchain):\n if not os.path.isabs(x):\n logger.debug(\"Element %s is not referenced with an \"\n \"absolute path.\", x)\n return False\n\n # Each element must exist and be a symbolic link\n for x in (self.cert, self.privkey, self.chain, self.fullchain):\n if not os.path.islink(x):\n logger.debug(\"Element %s is not a symbolic link.\", x)\n return False\n for kind in ALL_FOUR:\n link = getattr(self, kind)\n target = get_link_target(link)\n\n # Each element's link must point within the cert lineage's\n # directory within the official archive directory\n if not os.path.samefile(os.path.dirname(target), self.archive_dir):\n logger.debug(\"Element's link does not point within 
the \"\n \"cert lineage's directory within the \"\n \"official archive directory. Link: %s, \"\n \"target directory: %s, \"\n \"archive directory: %s. If you've specified \"\n \"the archive directory in the renewal configuration \"\n \"file, you may need to update links by running \"\n \"certbot update_symlinks.\",\n link, os.path.dirname(target), self.archive_dir)\n return False\n\n # The link must point to a file that exists\n if not os.path.exists(target):\n logger.debug(\"Link %s points to file %s that does not exist.\",\n link, target)\n return False\n\n # The link must point to a file that follows the archive\n # naming convention\n pattern = re.compile(r\"^{0}([0-9]+)\\.pem$\".format(kind))\n if not pattern.match(os.path.basename(target)):\n logger.debug(\"%s does not follow the archive naming \"\n \"convention.\", target)\n return False\n\n # It is NOT required that the link's target be a regular\n # file (it may itself be a symlink). But we should probably\n # do a recursive check that ultimately the target does\n # exist?\n # XXX: Additional possible consistency checks (e.g.\n # cryptographic validation of the chain being a chain,\n # the chain matching the cert, and the cert matching\n # the subject key)\n # XXX: All four of the targets are in the same directory\n # (This check is redundant with the check that they\n # are all in the desired directory!)\n # len(set(os.path.basename(self.current_target(x)\n # for x in ALL_FOUR))) == 1\n return True\n\n def _fix(self):\n \"\"\"Attempt to fix defects or inconsistencies in this lineage.\n\n .. todo:: Currently unimplemented.\n\n \"\"\"\n # TODO: Figure out what kinds of fixes are possible. For\n # example, checking if there is a valid version that\n # we can update the symlinks to. (Maybe involve\n # parsing keys and certs to see if they exist and\n # if a key corresponds to the subject key of a cert?)\n\n # TODO: In general, the symlink-reading functions below are not\n # cautious enough about the possibility that links or their\n # targets may not exist. 
(This shouldn't happen, but might\n # happen as a result of random tampering by a sysadmin, or\n # filesystem errors, or crashes.)\n\n def _previous_symlinks(self):\n \"\"\"Returns the kind and path of all symlinks used in recovery.\n\n :returns: list of (kind, symlink) tuples\n :rtype: list\n\n \"\"\"\n previous_symlinks = []\n for kind in ALL_FOUR:\n link_dir = os.path.dirname(getattr(self, kind))\n link_base = \"previous_{0}.pem\".format(kind)\n previous_symlinks.append((kind, os.path.join(link_dir, link_base)))\n\n return previous_symlinks\n\n def _fix_symlinks(self):\n \"\"\"Fixes symlinks in the event of an incomplete version update.\n\n If there is no problem with the current symlinks, this function\n has no effect.\n\n \"\"\"\n previous_symlinks = self._previous_symlinks()\n if all(os.path.exists(link[1]) for link in previous_symlinks):\n for kind, previous_link in previous_symlinks:\n current_link = getattr(self, kind)\n if os.path.lexists(current_link):\n os.unlink(current_link)\n os.symlink(os.readlink(previous_link), current_link)\n\n for _, link in previous_symlinks:\n if os.path.exists(link):\n os.unlink(link)\n\n def current_target(self, kind):\n \"\"\"Returns full path to which the specified item currently points.\n\n :param str kind: the lineage member item (\"cert\", \"privkey\",\n \"chain\", or \"fullchain\")\n\n :returns: The path to the current version of the specified\n member.\n :rtype: str or None\n\n \"\"\"\n if kind not in ALL_FOUR:\n raise errors.CertStorageError(\"unknown kind of item\")\n link = getattr(self, kind)\n if not os.path.exists(link):\n logger.debug(\"Expected symlink %s for %s does not exist.\",\n link, kind)\n return None\n return get_link_target(link)\n\n def current_version(self, kind):\n \"\"\"Returns numerical version of the specified item.\n\n For example, if kind is \"chain\" and the current chain link\n points to a file named \"chain7.pem\", returns the integer 7.\n\n :param str kind: the lineage member item (\"cert\", \"privkey\",\n \"chain\", or \"fullchain\")\n\n :returns: the current version of the specified member.\n :rtype: int\n\n \"\"\"\n if kind not in ALL_FOUR:\n raise errors.CertStorageError(\"unknown kind of item\")\n pattern = re.compile(r\"^{0}([0-9]+)\\.pem$\".format(kind))\n target = self.current_target(kind)\n if target is None or not os.path.exists(target):\n logger.debug(\"Current-version target for %s \"\n \"does not exist at %s.\", kind, target)\n target = \"\"\n matches = pattern.match(os.path.basename(target))\n if matches:\n return int(matches.groups()[0])\n else:\n logger.debug(\"No matches for target %s.\", kind)\n return None\n\n def version(self, kind, version):\n \"\"\"The filename that corresponds to the specified version and kind.\n\n .. warning:: The specified version may not exist in this\n lineage. 
There is no guarantee that the file path returned\n by this method actually exists.\n\n :param str kind: the lineage member item (\"cert\", \"privkey\",\n \"chain\", or \"fullchain\")\n :param int version: the desired version\n\n :returns: The path to the specified version of the specified member.\n :rtype: str\n\n \"\"\"\n if kind not in ALL_FOUR:\n raise errors.CertStorageError(\"unknown kind of item\")\n where = os.path.dirname(self.current_target(kind))\n return os.path.join(where, \"{0}{1}.pem\".format(kind, version))\n\n def available_versions(self, kind):\n \"\"\"Which alternative versions of the specified kind of item exist?\n\n The archive directory where the current version is stored is\n consulted to obtain the list of alternatives.\n\n :param str kind: the lineage member item (\n ``cert``, ``privkey``, ``chain``, or ``fullchain``)\n\n :returns: all of the version numbers that currently exist\n :rtype: `list` of `int`\n\n \"\"\"\n if kind not in ALL_FOUR:\n raise errors.CertStorageError(\"unknown kind of item\")\n where = os.path.dirname(self.current_target(kind))\n files = os.listdir(where)\n pattern = re.compile(r\"^{0}([0-9]+)\\.pem$\".format(kind))\n matches = [pattern.match(f) for f in files]\n return sorted([int(m.groups()[0]) for m in matches if m])\n\n def newest_available_version(self, kind):\n \"\"\"Newest available version of the specified kind of item?\n\n :param str kind: the lineage member item (``cert``,\n ``privkey``, ``chain``, or ``fullchain``)\n\n :returns: the newest available version of this member\n :rtype: int\n\n \"\"\"\n return max(self.available_versions(kind))\n\n def latest_common_version(self):\n \"\"\"Newest version for which all items are available?\n\n :returns: the newest available version for which all members\n (``cert, ``privkey``, ``chain``, and ``fullchain``) exist\n :rtype: int\n\n \"\"\"\n # TODO: this can raise CertStorageError if there is no version overlap\n # (it should probably return None instead)\n # TODO: this can raise a spurious AttributeError if the current\n # link for any kind is missing (it should probably return None)\n versions = [self.available_versions(x) for x in ALL_FOUR]\n return max(n for n in versions[0] if all(n in v for v in versions[1:]))\n\n def next_free_version(self):\n \"\"\"Smallest version newer than all full or partial versions?\n\n :returns: the smallest version number that is larger than any\n version of any item currently stored in this lineage\n :rtype: int\n\n \"\"\"\n # TODO: consider locking/mutual exclusion between updating processes\n # This isn't self.latest_common_version() + 1 because we don't want\n # collide with a version that might exist for one file type but not\n # for the others.\n return max(self.newest_available_version(x) for x in ALL_FOUR) + 1\n\n def ensure_deployed(self):\n \"\"\"Make sure we've deployed the latest version.\n\n :returns: False if a change was needed, True otherwise\n :rtype: bool\n\n May need to recover from rare interrupted / crashed states.\"\"\"\n\n if self.has_pending_deployment():\n logger.warning(\"Found a new cert /archive/ that was not linked to in /live/; \"\n \"fixing...\")\n self.update_all_links_to(self.latest_common_version())\n return False\n return True\n\n\n def has_pending_deployment(self):\n \"\"\"Is there a later version of all of the managed items?\n\n :returns: ``True`` if there is a complete version of this\n lineage with a larger version number than the current\n version, and ``False`` otherwise\n :rtype: bool\n\n \"\"\"\n # TODO: consider 
whether to assume consistency or treat\n # inconsistent/consistent versions differently\n smallest_current = min(self.current_version(x) for x in ALL_FOUR)\n return smallest_current < self.latest_common_version()\n\n def _update_link_to(self, kind, version):\n \"\"\"Make the specified item point at the specified version.\n\n (Note that this method doesn't verify that the specified version\n exists.)\n\n :param str kind: the lineage member item (\"cert\", \"privkey\",\n \"chain\", or \"fullchain\")\n :param int version: the desired version\n\n \"\"\"\n if kind not in ALL_FOUR:\n raise errors.CertStorageError(\"unknown kind of item\")\n link = getattr(self, kind)\n filename = \"{0}{1}.pem\".format(kind, version)\n # Relative rather than absolute target directory\n target_directory = os.path.dirname(os.readlink(link))\n # TODO: it could be safer to make the link first under a temporary\n # filename, then unlink the old link, then rename the new link\n # to the old link; this ensures that this process is able to\n # create symlinks.\n # TODO: we might also want to check consistency of related links\n # for the other corresponding items\n os.unlink(link)\n os.symlink(os.path.join(target_directory, filename), link)\n\n def update_all_links_to(self, version):\n \"\"\"Change all member objects to point to the specified version.\n\n :param int version: the desired version\n\n \"\"\"\n with error_handler.ErrorHandler(self._fix_symlinks):\n previous_links = self._previous_symlinks()\n for kind, link in previous_links:\n os.symlink(self.current_target(kind), link)\n\n for kind in ALL_FOUR:\n self._update_link_to(kind, version)\n\n for _, link in previous_links:\n os.unlink(link)\n\n def names(self, version=None):\n \"\"\"What are the subject names of this certificate?\n\n (If no version is specified, use the current version.)\n\n :param int version: the desired version number\n :returns: the subject names\n :rtype: `list` of `str`\n :raises .CertStorageError: if could not find cert file.\n\n \"\"\"\n if version is None:\n target = self.current_target(\"cert\")\n else:\n target = self.version(\"cert\", version)\n if target is None:\n raise errors.CertStorageError(\"could not find cert file\")\n with open(target) as f:\n return crypto_util.get_names_from_cert(f.read())\n\n def autodeployment_is_enabled(self):\n \"\"\"Is automatic deployment enabled for this cert?\n\n If autodeploy is not specified, defaults to True.\n\n :returns: True if automatic deployment is enabled\n :rtype: bool\n\n \"\"\"\n return (\"autodeploy\" not in self.configuration or\n self.configuration.as_bool(\"autodeploy\"))\n\n def should_autodeploy(self, interactive=False):\n \"\"\"Should this lineage now automatically deploy a newer version?\n\n This is a policy question and does not only depend on whether\n there is a newer version of the cert. (This considers whether\n autodeployment is enabled, whether a relevant newer version\n exists, and whether the time interval for autodeployment has\n been reached.)\n\n :param bool interactive: set to True to examine the question\n regardless of whether the renewal configuration allows\n automated deployment (for interactive use). 
Default False.\n\n :returns: whether the lineage now ought to autodeploy an\n existing newer cert version\n :rtype: bool\n\n \"\"\"\n if interactive or self.autodeployment_is_enabled():\n if self.has_pending_deployment():\n interval = self.configuration.get(\"deploy_before_expiry\",\n \"5 days\")\n now = pytz.UTC.fromutc(datetime.datetime.utcnow())\n if self.target_expiry < add_time_interval(now, interval):\n return True\n return False\n\n def ocsp_revoked(self, version=None):\n # pylint: disable=no-self-use,unused-argument\n \"\"\"Is the specified cert version revoked according to OCSP?\n\n Also returns True if the cert version is declared as intended\n to be revoked according to Let's Encrypt OCSP extensions.\n (If no version is specified, uses the current version.)\n\n This method is not yet implemented and currently always returns\n False.\n\n :param int version: the desired version number\n\n :returns: whether the certificate is or will be revoked\n :rtype: bool\n\n \"\"\"\n # XXX: This query and its associated network service aren't\n # implemented yet, so we currently return False (indicating that the\n # certificate is not revoked).\n return False\n\n def autorenewal_is_enabled(self):\n \"\"\"Is automatic renewal enabled for this cert?\n\n If autorenew is not specified, defaults to True.\n\n :returns: True if automatic renewal is enabled\n :rtype: bool\n\n \"\"\"\n return (\"autorenew\" not in self.configuration[\"renewalparams\"] or\n self.configuration[\"renewalparams\"].as_bool(\"autorenew\"))\n\n def should_autorenew(self):\n \"\"\"Should we now try to autorenew the most recent cert version?\n\n This is a policy question and does not only depend on whether\n the cert is expired. (This considers whether autorenewal is\n enabled, whether the cert is revoked, and whether the time\n interval for autorenewal has been reached.)\n\n Note that this examines the numerically most recent cert version,\n not the currently deployed version.\n\n :returns: whether an attempt should now be made to autorenew the\n most current cert version in this lineage\n :rtype: bool\n\n \"\"\"\n if self.autorenewal_is_enabled():\n # Consider whether to attempt to autorenew this cert now\n\n # Renewals on the basis of revocation\n if self.ocsp_revoked(self.latest_common_version()):\n logger.debug(\"Should renew, certificate is revoked.\")\n return True\n\n # Renews some period before expiry time\n default_interval = constants.RENEWER_DEFAULTS[\"renew_before_expiry\"]\n interval = self.configuration.get(\"renew_before_expiry\", default_interval)\n expiry = crypto_util.notAfter(self.version(\n \"cert\", self.latest_common_version()))\n now = pytz.UTC.fromutc(datetime.datetime.utcnow())\n if expiry < add_time_interval(now, interval):\n logger.debug(\"Should renew, less than %s before certificate \"\n \"expiry %s.\", interval,\n expiry.strftime(\"%Y-%m-%d %H:%M:%S %Z\"))\n return True\n return False\n\n @classmethod\n def new_lineage(cls, lineagename, cert, privkey, chain, cli_config):\n # pylint: disable=too-many-locals\n \"\"\"Create a new certificate lineage.\n\n Attempts to create a certificate lineage -- enrolled for\n potential future renewal -- with the (suggested) lineage name\n lineagename, and the associated cert, privkey, and chain (the\n associated fullchain will be created automatically). 
Optional\n configurator and renewalparams record the configuration that was\n originally used to obtain this cert, so that it can be reused\n later during automated renewal.\n\n Returns a new RenewableCert object referring to the created\n lineage. (The actual lineage name, as well as all the relevant\n file paths, will be available within this object.)\n\n :param str lineagename: the suggested name for this lineage\n (normally the current cert's first subject DNS name)\n :param str cert: the initial certificate version in PEM format\n :param str privkey: the private key in PEM format\n :param str chain: the certificate chain in PEM format\n :param .NamespaceConfig cli_config: parsed command line\n arguments\n\n :returns: the newly-created RenewalCert object\n :rtype: :class:`storage.renewableCert`\n\n \"\"\"\n\n # Examine the configuration and find the new lineage's name\n for i in (cli_config.renewal_configs_dir, cli_config.default_archive_dir,\n cli_config.live_dir):\n if not os.path.exists(i):\n os.makedirs(i, 0o700)\n logger.debug(\"Creating directory %s.\", i)\n config_file, config_filename = util.unique_lineage_name(\n cli_config.renewal_configs_dir, lineagename)\n base_readme_path = os.path.join(cli_config.live_dir, README)\n if not os.path.exists(base_readme_path):\n _write_live_readme_to(base_readme_path, is_base_dir=True)\n\n # Determine where on disk everything will go\n # lineagename will now potentially be modified based on which\n # renewal configuration file could actually be created\n lineagename = lineagename_for_filename(config_filename)\n archive = full_archive_path(None, cli_config, lineagename)\n live_dir = _full_live_path(cli_config, lineagename)\n if os.path.exists(archive):\n config_file.close()\n raise errors.CertStorageError(\n \"archive directory exists for \" + lineagename)\n if os.path.exists(live_dir):\n config_file.close()\n raise errors.CertStorageError(\n \"live directory exists for \" + lineagename)\n os.mkdir(archive)\n os.mkdir(live_dir)\n logger.debug(\"Archive directory %s and live \"\n \"directory %s created.\", archive, live_dir)\n\n # Put the data into the appropriate files on disk\n target = dict([(kind, os.path.join(live_dir, kind + \".pem\"))\n for kind in ALL_FOUR])\n archive_target = dict([(kind, os.path.join(archive, kind + \"1.pem\"))\n for kind in ALL_FOUR])\n for kind in ALL_FOUR:\n os.symlink(_relpath_from_file(archive_target[kind], target[kind]), target[kind])\n with open(target[\"cert\"], \"wb\") as f:\n logger.debug(\"Writing certificate to %s.\", target[\"cert\"])\n f.write(cert)\n with util.safe_open(archive_target[\"privkey\"], \"wb\", chmod=BASE_PRIVKEY_MODE) as f:\n logger.debug(\"Writing private key to %s.\", target[\"privkey\"])\n f.write(privkey)\n # XXX: Let's make sure to get the file permissions right here\n with open(target[\"chain\"], \"wb\") as f:\n logger.debug(\"Writing chain to %s.\", target[\"chain\"])\n f.write(chain)\n with open(target[\"fullchain\"], \"wb\") as f:\n # assumes that OpenSSL.crypto.dump_certificate includes\n # ending newline character\n logger.debug(\"Writing full chain to %s.\", target[\"fullchain\"])\n f.write(cert + chain)\n\n # Write a README file to the live directory\n readme_path = os.path.join(live_dir, README)\n _write_live_readme_to(readme_path)\n\n # Document what we've done in a new renewal config file\n config_file.close()\n\n # Save only the config items that are relevant to renewal\n values = relevant_values(vars(cli_config.namespace))\n\n new_config = 
write_renewal_config(config_filename, config_filename, archive,\n target, values)\n return cls(new_config.filename, cli_config)\n\n def save_successor(self, prior_version, new_cert,\n new_privkey, new_chain, cli_config):\n \"\"\"Save new cert and chain as a successor of a prior version.\n\n Returns the new version number that was created.\n\n .. note:: this function does NOT update links to deploy this\n version\n\n :param int prior_version: the old version to which this version\n is regarded as a successor (used to choose a privkey, if the\n key has not changed, but otherwise this information is not\n permanently recorded anywhere)\n :param bytes new_cert: the new certificate, in PEM format\n :param bytes new_privkey: the new private key, in PEM format,\n or ``None``, if the private key has not changed\n :param bytes new_chain: the new chain, in PEM format\n :param .NamespaceConfig cli_config: parsed command line\n arguments\n\n :returns: the new version number that was created\n :rtype: int\n\n \"\"\"\n # XXX: assumes official archive location rather than examining links\n # XXX: consider using os.open for availability of os.O_EXCL\n # XXX: ensure file permissions are correct; also create directories\n # if needed (ensuring their permissions are correct)\n # Figure out what the new version is and hence where to save things\n\n self.cli_config = cli_config\n target_version = self.next_free_version()\n target = dict(\n [(kind,\n os.path.join(self.archive_dir, \"{0}{1}.pem\".format(kind, target_version)))\n for kind in ALL_FOUR])\n\n old_privkey = os.path.join(\n self.archive_dir, \"privkey{0}.pem\".format(prior_version))\n\n # Distinguish the cases where the privkey has changed and where it\n # has not changed (in the latter case, making an appropriate symlink\n # to an earlier privkey version)\n if new_privkey is None:\n # The behavior below keeps the prior key by creating a new\n # symlink to the old key or the target of the old key symlink.\n if os.path.islink(old_privkey):\n old_privkey = os.readlink(old_privkey)\n else:\n old_privkey = \"privkey{0}.pem\".format(prior_version)\n logger.debug(\"Writing symlink to old private key, %s.\", old_privkey)\n os.symlink(old_privkey, target[\"privkey\"])\n else:\n with util.safe_open(target[\"privkey\"], \"wb\", chmod=BASE_PRIVKEY_MODE) as f:\n logger.debug(\"Writing new private key to %s.\", target[\"privkey\"])\n f.write(new_privkey)\n # Preserve gid and (mode & 074) from previous privkey in this lineage.\n old_mode = stat.S_IMODE(os.stat(old_privkey).st_mode) & \\\n (stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP | \\\n stat.S_IROTH)\n mode = BASE_PRIVKEY_MODE | old_mode\n os.chown(target[\"privkey\"], -1, os.stat(old_privkey).st_gid)\n os.chmod(target[\"privkey\"], mode)\n\n # Save everything else\n with open(target[\"cert\"], \"wb\") as f:\n logger.debug(\"Writing certificate to %s.\", target[\"cert\"])\n f.write(new_cert)\n with open(target[\"chain\"], \"wb\") as f:\n logger.debug(\"Writing chain to %s.\", target[\"chain\"])\n f.write(new_chain)\n with open(target[\"fullchain\"], \"wb\") as f:\n logger.debug(\"Writing full chain to %s.\", target[\"fullchain\"])\n f.write(new_cert + new_chain)\n\n symlinks = dict((kind, self.configuration[kind]) for kind in ALL_FOUR)\n # Update renewal config file\n self.configfile = update_configuration(\n self.lineagename, self.archive_dir, symlinks, cli_config)\n self.configuration = config_with_defaults(self.configfile)\n\n return target_version\n", "path": "certbot/storage.py"}], "after_files": 
[{"content": "\"\"\"Renewable certificates storage.\"\"\"\nimport datetime\nimport glob\nimport logging\nimport os\nimport re\nimport stat\n\nimport configobj\nimport parsedatetime\nimport pytz\nimport shutil\nimport six\n\nimport certbot\nfrom certbot import cli\nfrom certbot import compat\nfrom certbot import constants\nfrom certbot import crypto_util\nfrom certbot import errors\nfrom certbot import error_handler\nfrom certbot import util\n\nfrom certbot.plugins import common as plugins_common\nfrom certbot.plugins import disco as plugins_disco\n\nlogger = logging.getLogger(__name__)\n\nALL_FOUR = (\"cert\", \"privkey\", \"chain\", \"fullchain\")\nREADME = \"README\"\nCURRENT_VERSION = util.get_strict_version(certbot.__version__)\nBASE_PRIVKEY_MODE = 0o600\n\n\ndef renewal_conf_files(config):\n \"\"\"Build a list of all renewal configuration files.\n\n :param certbot.interfaces.IConfig config: Configuration object\n\n :returns: list of renewal configuration files\n :rtype: `list` of `str`\n\n \"\"\"\n result = glob.glob(os.path.join(config.renewal_configs_dir, \"*.conf\"))\n result.sort()\n return result\n\ndef renewal_file_for_certname(config, certname):\n \"\"\"Return /path/to/certname.conf in the renewal conf directory\"\"\"\n path = os.path.join(config.renewal_configs_dir, \"{0}.conf\".format(certname))\n if not os.path.exists(path):\n raise errors.CertStorageError(\"No certificate found with name {0} (expected \"\n \"{1}).\".format(certname, path))\n return path\n\n\ndef cert_path_for_cert_name(config, cert_name):\n \"\"\" If `--cert-name` was specified, but you need a value for `--cert-path`.\n\n :param `configuration.NamespaceConfig` config: parsed command line arguments\n :param str cert_name: cert name.\n\n \"\"\"\n cert_name_implied_conf = renewal_file_for_certname(config, cert_name)\n fullchain_path = configobj.ConfigObj(cert_name_implied_conf)[\"fullchain\"]\n with open(fullchain_path) as f:\n cert_path = (fullchain_path, f.read())\n return cert_path\n\n\ndef config_with_defaults(config=None):\n \"\"\"Merge supplied config, if provided, on top of builtin defaults.\"\"\"\n defaults_copy = configobj.ConfigObj(constants.RENEWER_DEFAULTS)\n defaults_copy.merge(config if config is not None else configobj.ConfigObj())\n return defaults_copy\n\n\ndef add_time_interval(base_time, interval, textparser=parsedatetime.Calendar()):\n \"\"\"Parse the time specified time interval, and add it to the base_time\n\n The interval can be in the English-language format understood by\n parsedatetime, e.g., '10 days', '3 weeks', '6 months', '9 hours', or\n a sequence of such intervals like '6 months 1 week' or '3 days 12\n hours'. 
If an integer is found with no associated unit, it is\n interpreted by default as a number of days.\n\n :param datetime.datetime base_time: The time to be added with the interval.\n :param str interval: The time interval to parse.\n\n :returns: The base_time plus the interpretation of the time interval.\n :rtype: :class:`datetime.datetime`\"\"\"\n\n if interval.strip().isdigit():\n interval += \" days\"\n\n # try to use the same timezone, but fallback to UTC\n tzinfo = base_time.tzinfo or pytz.UTC\n\n return textparser.parseDT(interval, base_time, tzinfo=tzinfo)[0]\n\n\ndef write_renewal_config(o_filename, n_filename, archive_dir, target, relevant_data):\n \"\"\"Writes a renewal config file with the specified name and values.\n\n :param str o_filename: Absolute path to the previous version of config file\n :param str n_filename: Absolute path to the new destination of config file\n :param str archive_dir: Absolute path to the archive directory\n :param dict target: Maps ALL_FOUR to their symlink paths\n :param dict relevant_data: Renewal configuration options to save\n\n :returns: Configuration object for the new config file\n :rtype: configobj.ConfigObj\n\n \"\"\"\n config = configobj.ConfigObj(o_filename)\n config[\"version\"] = certbot.__version__\n config[\"archive_dir\"] = archive_dir\n for kind in ALL_FOUR:\n config[kind] = target[kind]\n\n if \"renewalparams\" not in config:\n config[\"renewalparams\"] = {}\n config.comments[\"renewalparams\"] = [\"\",\n \"Options used in \"\n \"the renewal process\"]\n\n config[\"renewalparams\"].update(relevant_data)\n\n for k in config[\"renewalparams\"].keys():\n if k not in relevant_data:\n del config[\"renewalparams\"][k]\n\n if \"renew_before_expiry\" not in config:\n default_interval = constants.RENEWER_DEFAULTS[\"renew_before_expiry\"]\n config.initial_comment = [\"renew_before_expiry = \" + default_interval]\n\n # TODO: add human-readable comments explaining other available\n # parameters\n logger.debug(\"Writing new config %s.\", n_filename)\n\n # Ensure that the file exists\n open(n_filename, 'a').close()\n\n # Copy permissions from the old version of the file, if it exists.\n if os.path.exists(o_filename):\n current_permissions = stat.S_IMODE(os.lstat(o_filename).st_mode)\n os.chmod(n_filename, current_permissions)\n\n with open(n_filename, \"wb\") as f:\n config.write(outfile=f)\n return config\n\n\ndef rename_renewal_config(prev_name, new_name, cli_config):\n \"\"\"Renames cli_config.certname's config to cli_config.new_certname.\n\n :param .NamespaceConfig cli_config: parsed command line\n arguments\n \"\"\"\n prev_filename = renewal_filename_for_lineagename(cli_config, prev_name)\n new_filename = renewal_filename_for_lineagename(cli_config, new_name)\n if os.path.exists(new_filename):\n raise errors.ConfigurationError(\"The new certificate name \"\n \"is already in use.\")\n try:\n os.rename(prev_filename, new_filename)\n except OSError:\n raise errors.ConfigurationError(\"Please specify a valid filename \"\n \"for the new certificate name.\")\n\n\ndef update_configuration(lineagename, archive_dir, target, cli_config):\n \"\"\"Modifies lineagename's config to contain the specified values.\n\n :param str lineagename: Name of the lineage being modified\n :param str archive_dir: Absolute path to the archive directory\n :param dict target: Maps ALL_FOUR to their symlink paths\n :param .NamespaceConfig cli_config: parsed command line\n arguments\n\n :returns: Configuration object for the updated config file\n :rtype: 
configobj.ConfigObj\n\n \"\"\"\n config_filename = renewal_filename_for_lineagename(cli_config, lineagename)\n temp_filename = config_filename + \".new\"\n\n # If an existing tempfile exists, delete it\n if os.path.exists(temp_filename):\n os.unlink(temp_filename)\n\n # Save only the config items that are relevant to renewal\n values = relevant_values(vars(cli_config.namespace))\n write_renewal_config(config_filename, temp_filename, archive_dir, target, values)\n compat.os_rename(temp_filename, config_filename)\n\n return configobj.ConfigObj(config_filename)\n\n\ndef get_link_target(link):\n \"\"\"Get an absolute path to the target of link.\n\n :param str link: Path to a symbolic link\n\n :returns: Absolute path to the target of link\n :rtype: str\n\n :raises .CertStorageError: If link does not exists.\n\n \"\"\"\n try:\n target = os.readlink(link)\n except OSError:\n raise errors.CertStorageError(\n \"Expected {0} to be a symlink\".format(link))\n\n if not os.path.isabs(target):\n target = os.path.join(os.path.dirname(link), target)\n return os.path.abspath(target)\n\ndef _write_live_readme_to(readme_path, is_base_dir=False):\n prefix = \"\"\n if is_base_dir:\n prefix = \"[cert name]/\"\n with open(readme_path, \"w\") as f:\n logger.debug(\"Writing README to %s.\", readme_path)\n f.write(\"This directory contains your keys and certificates.\\n\\n\"\n \"`{prefix}privkey.pem` : the private key for your certificate.\\n\"\n \"`{prefix}fullchain.pem`: the certificate file used in most server software.\\n\"\n \"`{prefix}chain.pem` : used for OCSP stapling in Nginx >=1.3.7.\\n\"\n \"`{prefix}cert.pem` : will break many server configurations, and \"\n \"should not be used\\n\"\n \" without reading further documentation (see link below).\\n\\n\"\n \"WARNING: DO NOT MOVE OR RENAME THESE FILES!\\n\"\n \" Certbot expects these files to remain in this location in order\\n\"\n \" to function properly!\\n\\n\"\n \"We recommend not moving these files. 
For more information, see the Certbot\\n\"\n \"User Guide at https://certbot.eff.org/docs/using.html#where-are-my-\"\n \"certificates.\\n\".format(prefix=prefix))\n\n\ndef _relevant(option):\n \"\"\"\n Is this option one that could be restored for future renewal purposes?\n :param str option: the name of the option\n\n :rtype: bool\n \"\"\"\n from certbot import renewal\n plugins = plugins_disco.PluginsRegistry.find_all()\n namespaces = [plugins_common.dest_namespace(plugin) for plugin in plugins]\n\n return (option in renewal.CONFIG_ITEMS or\n any(option.startswith(namespace) for namespace in namespaces))\n\n\ndef relevant_values(all_values):\n \"\"\"Return a new dict containing only items relevant for renewal.\n\n :param dict all_values: The original values.\n\n :returns: A new dictionary containing items that can be used in renewal.\n :rtype dict:\n\n \"\"\"\n rv = dict(\n (option, value)\n for option, value in six.iteritems(all_values)\n if _relevant(option) and cli.option_was_set(option, value))\n # We always save the server value to help with forward compatibility\n # and behavioral consistency when versions of Certbot with different\n # server defaults are used.\n rv[\"server\"] = all_values[\"server\"]\n return rv\n\ndef lineagename_for_filename(config_filename):\n \"\"\"Returns the lineagename for a configuration filename.\n \"\"\"\n if not config_filename.endswith(\".conf\"):\n raise errors.CertStorageError(\n \"renewal config file name must end in .conf\")\n return os.path.basename(config_filename[:-len(\".conf\")])\n\ndef renewal_filename_for_lineagename(config, lineagename):\n \"\"\"Returns the lineagename for a configuration filename.\n \"\"\"\n return os.path.join(config.renewal_configs_dir, lineagename) + \".conf\"\n\ndef _relpath_from_file(archive_dir, from_file):\n \"\"\"Path to a directory from a file\"\"\"\n return os.path.relpath(archive_dir, os.path.dirname(from_file))\n\ndef full_archive_path(config_obj, cli_config, lineagename):\n \"\"\"Returns the full archive path for a lineagename\n\n Uses cli_config to determine archive path if not available from config_obj.\n\n :param configobj.ConfigObj config_obj: Renewal conf file contents (can be None)\n :param configuration.NamespaceConfig cli_config: Main config file\n :param str lineagename: Certificate name\n \"\"\"\n if config_obj and \"archive_dir\" in config_obj:\n return config_obj[\"archive_dir\"]\n else:\n return os.path.join(cli_config.default_archive_dir, lineagename)\n\ndef _full_live_path(cli_config, lineagename):\n \"\"\"Returns the full default live path for a lineagename\"\"\"\n return os.path.join(cli_config.live_dir, lineagename)\n\ndef delete_files(config, certname):\n \"\"\"Delete all files related to the certificate.\n\n If some files are not found, ignore them and continue.\n \"\"\"\n renewal_filename = renewal_file_for_certname(config, certname)\n # file exists\n full_default_archive_dir = full_archive_path(None, config, certname)\n full_default_live_dir = _full_live_path(config, certname)\n try:\n renewal_config = configobj.ConfigObj(renewal_filename)\n except configobj.ConfigObjError:\n # config is corrupted\n logger.warning(\"Could not parse %s. 
You may wish to manually \"\n \"delete the contents of %s and %s.\", renewal_filename,\n full_default_live_dir, full_default_archive_dir)\n raise errors.CertStorageError(\n \"error parsing {0}\".format(renewal_filename))\n finally:\n # we couldn't read it, but let's at least delete it\n # if this was going to fail, it already would have.\n os.remove(renewal_filename)\n logger.debug(\"Removed %s\", renewal_filename)\n\n # cert files and (hopefully) live directory\n # it's not guaranteed that the files are in our default storage\n # structure. so, first delete the cert files.\n directory_names = set()\n for kind in ALL_FOUR:\n link = renewal_config.get(kind)\n try:\n os.remove(link)\n logger.debug(\"Removed %s\", link)\n except OSError:\n logger.debug(\"Unable to delete %s\", link)\n directory = os.path.dirname(link)\n directory_names.add(directory)\n\n # if all four were in the same directory, and the only thing left\n # is the README file (or nothing), delete that directory.\n # this will be wrong in very few but some cases.\n if len(directory_names) == 1:\n # delete the README file\n directory = directory_names.pop()\n readme_path = os.path.join(directory, README)\n try:\n os.remove(readme_path)\n logger.debug(\"Removed %s\", readme_path)\n except OSError:\n logger.debug(\"Unable to delete %s\", readme_path)\n # if it's now empty, delete the directory\n try:\n os.rmdir(directory) # only removes empty directories\n logger.debug(\"Removed %s\", directory)\n except OSError:\n logger.debug(\"Unable to remove %s; may not be empty.\", directory)\n\n # archive directory\n try:\n archive_path = full_archive_path(renewal_config, config, certname)\n shutil.rmtree(archive_path)\n logger.debug(\"Removed %s\", archive_path)\n except OSError:\n logger.debug(\"Unable to remove %s\", archive_path)\n\n\nclass RenewableCert(object):\n # pylint: disable=too-many-instance-attributes,too-many-public-methods\n \"\"\"Renewable certificate.\n\n Represents a lineage of certificates that is under the management of\n Certbot, indicated by the existence of an associated renewal\n configuration file.\n\n Note that the notion of \"current version\" for a lineage is\n maintained on disk in the structure of symbolic links, and is not\n explicitly stored in any instance variable in this object. The\n RenewableCert object is able to determine information about the\n current (or other) version by accessing data on disk, but does not\n inherently know any of this information except by examining the\n symbolic links as needed. 
The instance variables mentioned below\n point to symlinks that reflect the notion of \"current version\" of\n each managed object, and it is these paths that should be used when\n configuring servers to use the certificate managed in a lineage.\n These paths are normally within the \"live\" directory, and their\n symlink targets -- the actual cert files -- are normally found\n within the \"archive\" directory.\n\n :ivar str cert: The path to the symlink representing the current\n version of the certificate managed by this lineage.\n :ivar str privkey: The path to the symlink representing the current\n version of the private key managed by this lineage.\n :ivar str chain: The path to the symlink representing the current version\n of the chain managed by this lineage.\n :ivar str fullchain: The path to the symlink representing the\n current version of the fullchain (combined chain and cert)\n managed by this lineage.\n :ivar configobj.ConfigObj configuration: The renewal configuration\n options associated with this lineage, obtained from parsing the\n renewal configuration file and/or systemwide defaults.\n\n \"\"\"\n def __init__(self, config_filename, cli_config, update_symlinks=False):\n \"\"\"Instantiate a RenewableCert object from an existing lineage.\n\n :param str config_filename: the path to the renewal config file\n that defines this lineage.\n :param .NamespaceConfig: parsed command line arguments\n\n :raises .CertStorageError: if the configuration file's name didn't end\n in \".conf\", or the file is missing or broken.\n\n \"\"\"\n self.cli_config = cli_config\n self.lineagename = lineagename_for_filename(config_filename)\n\n # self.configuration should be used to read parameters that\n # may have been chosen based on default values from the\n # systemwide renewal configuration; self.configfile should be\n # used to make and save changes.\n try:\n self.configfile = configobj.ConfigObj(config_filename)\n except configobj.ConfigObjError:\n raise errors.CertStorageError(\n \"error parsing {0}\".format(config_filename))\n # TODO: Do we actually use anything from defaults and do we want to\n # read further defaults from the systemwide renewal configuration\n # file at this stage?\n self.configuration = config_with_defaults(self.configfile)\n\n if not all(x in self.configuration for x in ALL_FOUR):\n raise errors.CertStorageError(\n \"renewal config file {0} is missing a required \"\n \"file reference\".format(self.configfile))\n\n conf_version = self.configuration.get(\"version\")\n if (conf_version is not None and\n util.get_strict_version(conf_version) > CURRENT_VERSION):\n logger.info(\n \"Attempting to parse the version %s renewal configuration \"\n \"file found at %s with version %s of Certbot. 
This might not \"\n \"work.\", conf_version, config_filename, certbot.__version__)\n\n self.cert = self.configuration[\"cert\"]\n self.privkey = self.configuration[\"privkey\"]\n self.chain = self.configuration[\"chain\"]\n self.fullchain = self.configuration[\"fullchain\"]\n self.live_dir = os.path.dirname(self.cert)\n\n self._fix_symlinks()\n if update_symlinks:\n self._update_symlinks()\n self._check_symlinks()\n\n @property\n def key_path(self):\n \"\"\"Duck type for self.privkey\"\"\"\n return self.privkey\n\n @property\n def cert_path(self):\n \"\"\"Duck type for self.cert\"\"\"\n return self.cert\n\n @property\n def chain_path(self):\n \"\"\"Duck type for self.chain\"\"\"\n return self.chain\n\n @property\n def fullchain_path(self):\n \"\"\"Duck type for self.fullchain\"\"\"\n return self.fullchain\n\n @property\n def target_expiry(self):\n \"\"\"The current target certificate's expiration datetime\n\n :returns: Expiration datetime of the current target certificate\n :rtype: :class:`datetime.datetime`\n \"\"\"\n return crypto_util.notAfter(self.current_target(\"cert\"))\n\n @property\n def archive_dir(self):\n \"\"\"Returns the default or specified archive directory\"\"\"\n return full_archive_path(self.configuration,\n self.cli_config, self.lineagename)\n\n def relative_archive_dir(self, from_file):\n \"\"\"Returns the default or specified archive directory as a relative path\n\n Used for creating symbolic links.\n \"\"\"\n return _relpath_from_file(self.archive_dir, from_file)\n\n @property\n def is_test_cert(self):\n \"\"\"Returns true if this is a test cert from a staging server.\"\"\"\n server = self.configuration[\"renewalparams\"].get(\"server\", None)\n if server:\n return util.is_staging(server)\n else:\n return False\n\n def _check_symlinks(self):\n \"\"\"Raises an exception if a symlink doesn't exist\"\"\"\n for kind in ALL_FOUR:\n link = getattr(self, kind)\n if not os.path.islink(link):\n raise errors.CertStorageError(\n \"expected {0} to be a symlink\".format(link))\n target = get_link_target(link)\n if not os.path.exists(target):\n raise errors.CertStorageError(\"target {0} of symlink {1} does \"\n \"not exist\".format(target, link))\n\n def _update_symlinks(self):\n \"\"\"Updates symlinks to use archive_dir\"\"\"\n for kind in ALL_FOUR:\n link = getattr(self, kind)\n previous_link = get_link_target(link)\n new_link = os.path.join(self.relative_archive_dir(link),\n os.path.basename(previous_link))\n\n os.unlink(link)\n os.symlink(new_link, link)\n\n def _consistent(self):\n \"\"\"Are the files associated with this lineage self-consistent?\n\n :returns: Whether the files stored in connection with this\n lineage appear to be correct and consistent with one\n another.\n :rtype: bool\n\n \"\"\"\n # Each element must be referenced with an absolute path\n for x in (self.cert, self.privkey, self.chain, self.fullchain):\n if not os.path.isabs(x):\n logger.debug(\"Element %s is not referenced with an \"\n \"absolute path.\", x)\n return False\n\n # Each element must exist and be a symbolic link\n for x in (self.cert, self.privkey, self.chain, self.fullchain):\n if not os.path.islink(x):\n logger.debug(\"Element %s is not a symbolic link.\", x)\n return False\n for kind in ALL_FOUR:\n link = getattr(self, kind)\n target = get_link_target(link)\n\n # Each element's link must point within the cert lineage's\n # directory within the official archive directory\n if not os.path.samefile(os.path.dirname(target), self.archive_dir):\n logger.debug(\"Element's link does not point within 
the \"\n \"cert lineage's directory within the \"\n \"official archive directory. Link: %s, \"\n \"target directory: %s, \"\n \"archive directory: %s. If you've specified \"\n \"the archive directory in the renewal configuration \"\n \"file, you may need to update links by running \"\n \"certbot update_symlinks.\",\n link, os.path.dirname(target), self.archive_dir)\n return False\n\n # The link must point to a file that exists\n if not os.path.exists(target):\n logger.debug(\"Link %s points to file %s that does not exist.\",\n link, target)\n return False\n\n # The link must point to a file that follows the archive\n # naming convention\n pattern = re.compile(r\"^{0}([0-9]+)\\.pem$\".format(kind))\n if not pattern.match(os.path.basename(target)):\n logger.debug(\"%s does not follow the archive naming \"\n \"convention.\", target)\n return False\n\n # It is NOT required that the link's target be a regular\n # file (it may itself be a symlink). But we should probably\n # do a recursive check that ultimately the target does\n # exist?\n # XXX: Additional possible consistency checks (e.g.\n # cryptographic validation of the chain being a chain,\n # the chain matching the cert, and the cert matching\n # the subject key)\n # XXX: All four of the targets are in the same directory\n # (This check is redundant with the check that they\n # are all in the desired directory!)\n # len(set(os.path.basename(self.current_target(x)\n # for x in ALL_FOUR))) == 1\n return True\n\n def _fix(self):\n \"\"\"Attempt to fix defects or inconsistencies in this lineage.\n\n .. todo:: Currently unimplemented.\n\n \"\"\"\n # TODO: Figure out what kinds of fixes are possible. For\n # example, checking if there is a valid version that\n # we can update the symlinks to. (Maybe involve\n # parsing keys and certs to see if they exist and\n # if a key corresponds to the subject key of a cert?)\n\n # TODO: In general, the symlink-reading functions below are not\n # cautious enough about the possibility that links or their\n # targets may not exist. 
(This shouldn't happen, but might\n # happen as a result of random tampering by a sysadmin, or\n # filesystem errors, or crashes.)\n\n def _previous_symlinks(self):\n \"\"\"Returns the kind and path of all symlinks used in recovery.\n\n :returns: list of (kind, symlink) tuples\n :rtype: list\n\n \"\"\"\n previous_symlinks = []\n for kind in ALL_FOUR:\n link_dir = os.path.dirname(getattr(self, kind))\n link_base = \"previous_{0}.pem\".format(kind)\n previous_symlinks.append((kind, os.path.join(link_dir, link_base)))\n\n return previous_symlinks\n\n def _fix_symlinks(self):\n \"\"\"Fixes symlinks in the event of an incomplete version update.\n\n If there is no problem with the current symlinks, this function\n has no effect.\n\n \"\"\"\n previous_symlinks = self._previous_symlinks()\n if all(os.path.exists(link[1]) for link in previous_symlinks):\n for kind, previous_link in previous_symlinks:\n current_link = getattr(self, kind)\n if os.path.lexists(current_link):\n os.unlink(current_link)\n os.symlink(os.readlink(previous_link), current_link)\n\n for _, link in previous_symlinks:\n if os.path.exists(link):\n os.unlink(link)\n\n def current_target(self, kind):\n \"\"\"Returns full path to which the specified item currently points.\n\n :param str kind: the lineage member item (\"cert\", \"privkey\",\n \"chain\", or \"fullchain\")\n\n :returns: The path to the current version of the specified\n member.\n :rtype: str or None\n\n \"\"\"\n if kind not in ALL_FOUR:\n raise errors.CertStorageError(\"unknown kind of item\")\n link = getattr(self, kind)\n if not os.path.exists(link):\n logger.debug(\"Expected symlink %s for %s does not exist.\",\n link, kind)\n return None\n return get_link_target(link)\n\n def current_version(self, kind):\n \"\"\"Returns numerical version of the specified item.\n\n For example, if kind is \"chain\" and the current chain link\n points to a file named \"chain7.pem\", returns the integer 7.\n\n :param str kind: the lineage member item (\"cert\", \"privkey\",\n \"chain\", or \"fullchain\")\n\n :returns: the current version of the specified member.\n :rtype: int\n\n \"\"\"\n if kind not in ALL_FOUR:\n raise errors.CertStorageError(\"unknown kind of item\")\n pattern = re.compile(r\"^{0}([0-9]+)\\.pem$\".format(kind))\n target = self.current_target(kind)\n if target is None or not os.path.exists(target):\n logger.debug(\"Current-version target for %s \"\n \"does not exist at %s.\", kind, target)\n target = \"\"\n matches = pattern.match(os.path.basename(target))\n if matches:\n return int(matches.groups()[0])\n else:\n logger.debug(\"No matches for target %s.\", kind)\n return None\n\n def version(self, kind, version):\n \"\"\"The filename that corresponds to the specified version and kind.\n\n .. warning:: The specified version may not exist in this\n lineage. 
There is no guarantee that the file path returned\n by this method actually exists.\n\n :param str kind: the lineage member item (\"cert\", \"privkey\",\n \"chain\", or \"fullchain\")\n :param int version: the desired version\n\n :returns: The path to the specified version of the specified member.\n :rtype: str\n\n \"\"\"\n if kind not in ALL_FOUR:\n raise errors.CertStorageError(\"unknown kind of item\")\n where = os.path.dirname(self.current_target(kind))\n return os.path.join(where, \"{0}{1}.pem\".format(kind, version))\n\n def available_versions(self, kind):\n \"\"\"Which alternative versions of the specified kind of item exist?\n\n The archive directory where the current version is stored is\n consulted to obtain the list of alternatives.\n\n :param str kind: the lineage member item (\n ``cert``, ``privkey``, ``chain``, or ``fullchain``)\n\n :returns: all of the version numbers that currently exist\n :rtype: `list` of `int`\n\n \"\"\"\n if kind not in ALL_FOUR:\n raise errors.CertStorageError(\"unknown kind of item\")\n where = os.path.dirname(self.current_target(kind))\n files = os.listdir(where)\n pattern = re.compile(r\"^{0}([0-9]+)\\.pem$\".format(kind))\n matches = [pattern.match(f) for f in files]\n return sorted([int(m.groups()[0]) for m in matches if m])\n\n def newest_available_version(self, kind):\n \"\"\"Newest available version of the specified kind of item?\n\n :param str kind: the lineage member item (``cert``,\n ``privkey``, ``chain``, or ``fullchain``)\n\n :returns: the newest available version of this member\n :rtype: int\n\n \"\"\"\n return max(self.available_versions(kind))\n\n def latest_common_version(self):\n \"\"\"Newest version for which all items are available?\n\n :returns: the newest available version for which all members\n (``cert, ``privkey``, ``chain``, and ``fullchain``) exist\n :rtype: int\n\n \"\"\"\n # TODO: this can raise CertStorageError if there is no version overlap\n # (it should probably return None instead)\n # TODO: this can raise a spurious AttributeError if the current\n # link for any kind is missing (it should probably return None)\n versions = [self.available_versions(x) for x in ALL_FOUR]\n return max(n for n in versions[0] if all(n in v for v in versions[1:]))\n\n def next_free_version(self):\n \"\"\"Smallest version newer than all full or partial versions?\n\n :returns: the smallest version number that is larger than any\n version of any item currently stored in this lineage\n :rtype: int\n\n \"\"\"\n # TODO: consider locking/mutual exclusion between updating processes\n # This isn't self.latest_common_version() + 1 because we don't want\n # collide with a version that might exist for one file type but not\n # for the others.\n return max(self.newest_available_version(x) for x in ALL_FOUR) + 1\n\n def ensure_deployed(self):\n \"\"\"Make sure we've deployed the latest version.\n\n :returns: False if a change was needed, True otherwise\n :rtype: bool\n\n May need to recover from rare interrupted / crashed states.\"\"\"\n\n if self.has_pending_deployment():\n logger.warning(\"Found a new cert /archive/ that was not linked to in /live/; \"\n \"fixing...\")\n self.update_all_links_to(self.latest_common_version())\n return False\n return True\n\n\n def has_pending_deployment(self):\n \"\"\"Is there a later version of all of the managed items?\n\n :returns: ``True`` if there is a complete version of this\n lineage with a larger version number than the current\n version, and ``False`` otherwise\n :rtype: bool\n\n \"\"\"\n # TODO: consider 
whether to assume consistency or treat\n # inconsistent/consistent versions differently\n smallest_current = min(self.current_version(x) for x in ALL_FOUR)\n return smallest_current < self.latest_common_version()\n\n def _update_link_to(self, kind, version):\n \"\"\"Make the specified item point at the specified version.\n\n (Note that this method doesn't verify that the specified version\n exists.)\n\n :param str kind: the lineage member item (\"cert\", \"privkey\",\n \"chain\", or \"fullchain\")\n :param int version: the desired version\n\n \"\"\"\n if kind not in ALL_FOUR:\n raise errors.CertStorageError(\"unknown kind of item\")\n link = getattr(self, kind)\n filename = \"{0}{1}.pem\".format(kind, version)\n # Relative rather than absolute target directory\n target_directory = os.path.dirname(os.readlink(link))\n # TODO: it could be safer to make the link first under a temporary\n # filename, then unlink the old link, then rename the new link\n # to the old link; this ensures that this process is able to\n # create symlinks.\n # TODO: we might also want to check consistency of related links\n # for the other corresponding items\n os.unlink(link)\n os.symlink(os.path.join(target_directory, filename), link)\n\n def update_all_links_to(self, version):\n \"\"\"Change all member objects to point to the specified version.\n\n :param int version: the desired version\n\n \"\"\"\n with error_handler.ErrorHandler(self._fix_symlinks):\n previous_links = self._previous_symlinks()\n for kind, link in previous_links:\n os.symlink(self.current_target(kind), link)\n\n for kind in ALL_FOUR:\n self._update_link_to(kind, version)\n\n for _, link in previous_links:\n os.unlink(link)\n\n def names(self, version=None):\n \"\"\"What are the subject names of this certificate?\n\n (If no version is specified, use the current version.)\n\n :param int version: the desired version number\n :returns: the subject names\n :rtype: `list` of `str`\n :raises .CertStorageError: if could not find cert file.\n\n \"\"\"\n if version is None:\n target = self.current_target(\"cert\")\n else:\n target = self.version(\"cert\", version)\n if target is None:\n raise errors.CertStorageError(\"could not find cert file\")\n with open(target) as f:\n return crypto_util.get_names_from_cert(f.read())\n\n def autodeployment_is_enabled(self):\n \"\"\"Is automatic deployment enabled for this cert?\n\n If autodeploy is not specified, defaults to True.\n\n :returns: True if automatic deployment is enabled\n :rtype: bool\n\n \"\"\"\n return (\"autodeploy\" not in self.configuration or\n self.configuration.as_bool(\"autodeploy\"))\n\n def should_autodeploy(self, interactive=False):\n \"\"\"Should this lineage now automatically deploy a newer version?\n\n This is a policy question and does not only depend on whether\n there is a newer version of the cert. (This considers whether\n autodeployment is enabled, whether a relevant newer version\n exists, and whether the time interval for autodeployment has\n been reached.)\n\n :param bool interactive: set to True to examine the question\n regardless of whether the renewal configuration allows\n automated deployment (for interactive use). 
Default False.\n\n :returns: whether the lineage now ought to autodeploy an\n existing newer cert version\n :rtype: bool\n\n \"\"\"\n if interactive or self.autodeployment_is_enabled():\n if self.has_pending_deployment():\n interval = self.configuration.get(\"deploy_before_expiry\",\n \"5 days\")\n now = pytz.UTC.fromutc(datetime.datetime.utcnow())\n if self.target_expiry < add_time_interval(now, interval):\n return True\n return False\n\n def ocsp_revoked(self, version=None):\n # pylint: disable=no-self-use,unused-argument\n \"\"\"Is the specified cert version revoked according to OCSP?\n\n Also returns True if the cert version is declared as intended\n to be revoked according to Let's Encrypt OCSP extensions.\n (If no version is specified, uses the current version.)\n\n This method is not yet implemented and currently always returns\n False.\n\n :param int version: the desired version number\n\n :returns: whether the certificate is or will be revoked\n :rtype: bool\n\n \"\"\"\n # XXX: This query and its associated network service aren't\n # implemented yet, so we currently return False (indicating that the\n # certificate is not revoked).\n return False\n\n def autorenewal_is_enabled(self):\n \"\"\"Is automatic renewal enabled for this cert?\n\n If autorenew is not specified, defaults to True.\n\n :returns: True if automatic renewal is enabled\n :rtype: bool\n\n \"\"\"\n return (\"autorenew\" not in self.configuration[\"renewalparams\"] or\n self.configuration[\"renewalparams\"].as_bool(\"autorenew\"))\n\n def should_autorenew(self):\n \"\"\"Should we now try to autorenew the most recent cert version?\n\n This is a policy question and does not only depend on whether\n the cert is expired. (This considers whether autorenewal is\n enabled, whether the cert is revoked, and whether the time\n interval for autorenewal has been reached.)\n\n Note that this examines the numerically most recent cert version,\n not the currently deployed version.\n\n :returns: whether an attempt should now be made to autorenew the\n most current cert version in this lineage\n :rtype: bool\n\n \"\"\"\n if self.autorenewal_is_enabled():\n # Consider whether to attempt to autorenew this cert now\n\n # Renewals on the basis of revocation\n if self.ocsp_revoked(self.latest_common_version()):\n logger.debug(\"Should renew, certificate is revoked.\")\n return True\n\n # Renews some period before expiry time\n default_interval = constants.RENEWER_DEFAULTS[\"renew_before_expiry\"]\n interval = self.configuration.get(\"renew_before_expiry\", default_interval)\n expiry = crypto_util.notAfter(self.version(\n \"cert\", self.latest_common_version()))\n now = pytz.UTC.fromutc(datetime.datetime.utcnow())\n if expiry < add_time_interval(now, interval):\n logger.debug(\"Should renew, less than %s before certificate \"\n \"expiry %s.\", interval,\n expiry.strftime(\"%Y-%m-%d %H:%M:%S %Z\"))\n return True\n return False\n\n @classmethod\n def new_lineage(cls, lineagename, cert, privkey, chain, cli_config):\n # pylint: disable=too-many-locals\n \"\"\"Create a new certificate lineage.\n\n Attempts to create a certificate lineage -- enrolled for\n potential future renewal -- with the (suggested) lineage name\n lineagename, and the associated cert, privkey, and chain (the\n associated fullchain will be created automatically). 
Optional\n configurator and renewalparams record the configuration that was\n originally used to obtain this cert, so that it can be reused\n later during automated renewal.\n\n Returns a new RenewableCert object referring to the created\n lineage. (The actual lineage name, as well as all the relevant\n file paths, will be available within this object.)\n\n :param str lineagename: the suggested name for this lineage\n (normally the current cert's first subject DNS name)\n :param str cert: the initial certificate version in PEM format\n :param str privkey: the private key in PEM format\n :param str chain: the certificate chain in PEM format\n :param .NamespaceConfig cli_config: parsed command line\n arguments\n\n :returns: the newly-created RenewalCert object\n :rtype: :class:`storage.renewableCert`\n\n \"\"\"\n\n # Examine the configuration and find the new lineage's name\n for i in (cli_config.renewal_configs_dir, cli_config.default_archive_dir,\n cli_config.live_dir):\n if not os.path.exists(i):\n os.makedirs(i, 0o700)\n logger.debug(\"Creating directory %s.\", i)\n config_file, config_filename = util.unique_lineage_name(\n cli_config.renewal_configs_dir, lineagename)\n base_readme_path = os.path.join(cli_config.live_dir, README)\n if not os.path.exists(base_readme_path):\n _write_live_readme_to(base_readme_path, is_base_dir=True)\n\n # Determine where on disk everything will go\n # lineagename will now potentially be modified based on which\n # renewal configuration file could actually be created\n lineagename = lineagename_for_filename(config_filename)\n archive = full_archive_path(None, cli_config, lineagename)\n live_dir = _full_live_path(cli_config, lineagename)\n if os.path.exists(archive):\n config_file.close()\n raise errors.CertStorageError(\n \"archive directory exists for \" + lineagename)\n if os.path.exists(live_dir):\n config_file.close()\n raise errors.CertStorageError(\n \"live directory exists for \" + lineagename)\n os.mkdir(archive)\n os.mkdir(live_dir)\n logger.debug(\"Archive directory %s and live \"\n \"directory %s created.\", archive, live_dir)\n\n # Put the data into the appropriate files on disk\n target = dict([(kind, os.path.join(live_dir, kind + \".pem\"))\n for kind in ALL_FOUR])\n archive_target = dict([(kind, os.path.join(archive, kind + \"1.pem\"))\n for kind in ALL_FOUR])\n for kind in ALL_FOUR:\n os.symlink(_relpath_from_file(archive_target[kind], target[kind]), target[kind])\n with open(target[\"cert\"], \"wb\") as f:\n logger.debug(\"Writing certificate to %s.\", target[\"cert\"])\n f.write(cert)\n with util.safe_open(archive_target[\"privkey\"], \"wb\", chmod=BASE_PRIVKEY_MODE) as f:\n logger.debug(\"Writing private key to %s.\", target[\"privkey\"])\n f.write(privkey)\n # XXX: Let's make sure to get the file permissions right here\n with open(target[\"chain\"], \"wb\") as f:\n logger.debug(\"Writing chain to %s.\", target[\"chain\"])\n f.write(chain)\n with open(target[\"fullchain\"], \"wb\") as f:\n # assumes that OpenSSL.crypto.dump_certificate includes\n # ending newline character\n logger.debug(\"Writing full chain to %s.\", target[\"fullchain\"])\n f.write(cert + chain)\n\n # Write a README file to the live directory\n readme_path = os.path.join(live_dir, README)\n _write_live_readme_to(readme_path)\n\n # Document what we've done in a new renewal config file\n config_file.close()\n\n # Save only the config items that are relevant to renewal\n values = relevant_values(vars(cli_config.namespace))\n\n new_config = 
write_renewal_config(config_filename, config_filename, archive,\n target, values)\n return cls(new_config.filename, cli_config)\n\n def save_successor(self, prior_version, new_cert,\n new_privkey, new_chain, cli_config):\n \"\"\"Save new cert and chain as a successor of a prior version.\n\n Returns the new version number that was created.\n\n .. note:: this function does NOT update links to deploy this\n version\n\n :param int prior_version: the old version to which this version\n is regarded as a successor (used to choose a privkey, if the\n key has not changed, but otherwise this information is not\n permanently recorded anywhere)\n :param bytes new_cert: the new certificate, in PEM format\n :param bytes new_privkey: the new private key, in PEM format,\n or ``None``, if the private key has not changed\n :param bytes new_chain: the new chain, in PEM format\n :param .NamespaceConfig cli_config: parsed command line\n arguments\n\n :returns: the new version number that was created\n :rtype: int\n\n \"\"\"\n # XXX: assumes official archive location rather than examining links\n # XXX: consider using os.open for availability of os.O_EXCL\n # XXX: ensure file permissions are correct; also create directories\n # if needed (ensuring their permissions are correct)\n # Figure out what the new version is and hence where to save things\n\n self.cli_config = cli_config\n target_version = self.next_free_version()\n target = dict(\n [(kind,\n os.path.join(self.archive_dir, \"{0}{1}.pem\".format(kind, target_version)))\n for kind in ALL_FOUR])\n\n old_privkey = os.path.join(\n self.archive_dir, \"privkey{0}.pem\".format(prior_version))\n\n # Distinguish the cases where the privkey has changed and where it\n # has not changed (in the latter case, making an appropriate symlink\n # to an earlier privkey version)\n if new_privkey is None:\n # The behavior below keeps the prior key by creating a new\n # symlink to the old key or the target of the old key symlink.\n if os.path.islink(old_privkey):\n old_privkey = os.readlink(old_privkey)\n else:\n old_privkey = \"privkey{0}.pem\".format(prior_version)\n logger.debug(\"Writing symlink to old private key, %s.\", old_privkey)\n os.symlink(old_privkey, target[\"privkey\"])\n else:\n with util.safe_open(target[\"privkey\"], \"wb\", chmod=BASE_PRIVKEY_MODE) as f:\n logger.debug(\"Writing new private key to %s.\", target[\"privkey\"])\n f.write(new_privkey)\n # Preserve gid and (mode & 074) from previous privkey in this lineage.\n old_mode = stat.S_IMODE(os.stat(old_privkey).st_mode) & \\\n (stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP | \\\n stat.S_IROTH)\n mode = BASE_PRIVKEY_MODE | old_mode\n os.chown(target[\"privkey\"], -1, os.stat(old_privkey).st_gid)\n os.chmod(target[\"privkey\"], mode)\n\n # Save everything else\n with open(target[\"cert\"], \"wb\") as f:\n logger.debug(\"Writing certificate to %s.\", target[\"cert\"])\n f.write(new_cert)\n with open(target[\"chain\"], \"wb\") as f:\n logger.debug(\"Writing chain to %s.\", target[\"chain\"])\n f.write(new_chain)\n with open(target[\"fullchain\"], \"wb\") as f:\n logger.debug(\"Writing full chain to %s.\", target[\"fullchain\"])\n f.write(new_cert + new_chain)\n\n symlinks = dict((kind, self.configuration[kind]) for kind in ALL_FOUR)\n # Update renewal config file\n self.configfile = update_configuration(\n self.lineagename, self.archive_dir, symlinks, cli_config)\n self.configuration = config_with_defaults(self.configfile)\n\n return target_version\n", "path": "certbot/storage.py"}]} |
gh_patches_debug_1055 | rasdani/github-patches | git_diff | open-mmlab__mmocr-633 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IndexError when running model_inference with empty list
Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
**Bug description**
I am using the mmocr/utils/ocr.py script. I create an MMOCR object with both a detector and a recognizer. However, when running the readtext method, there are some images where I get the following error:
```python
Traceback (most recent call last):
File "test.py", line 16, in <module>
result = ocr.readtext(data, print_result=True, imshow=False, batch_mode=True, merge=True)
File "/home/mpena/mmocr/mmocr/utils/ocr.py", line 426, in readtext
self.detect_model, self.recog_model, kie_model=self.kie_model)
File "/home/mpena/mmocr/mmocr/utils/ocr.py", line 574, in det_recog_kie_inference
recog_model, box_imgs, True, self.args.recog_batch_size)
File "/home/mpena/mmocr/mmocr/utils/ocr.py", line 628, in single_inference
result = model_inference(model, arrays, batch_mode=True)
File "/home/mpena/mmocr/mmocr/apis/inference.py", line 101, in model_inference
if not isinstance(imgs[0], (np.ndarray, str)):
IndexError: list index out of range
```
This happens because, for some images, the detector returns an empty list as 'boundary_result' in det_result at https://github.com/open-mmlab/mmocr/blob/main/mmocr/utils/ocr.py#L522
And this breaks at https://github.com/open-mmlab/mmocr/blob/main/mmocr/apis/inference.py#L101
```python
if isinstance(imgs, (list, tuple)):
is_batch = True
if not isinstance(imgs[0], (np.ndarray, str)):
raise AssertionError('imgs must be strings or numpy arrays')
```
because `imgs[0]` does not exist when the list is empty.
**Reproduction**
The error can be reproduced with the following script, called from the mmocr directory:
```python
from mmocr.utils.ocr import MMOCR
ocr = MMOCR()
det_result = []
results = ocr.readtext(det_result, batch_mode=True, merge=True, print_result=True, imshow=False)
```
**Environment**
1. Please run `python mmocr/utils/collect_env.py` to collect necessary environment information and paste it here.
```bash
sys.platform: linux
Python: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]
CUDA available: True
GPU 0: GeForce GTX 1080 Ti
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 10.1, V10.1.243
GCC: gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
PyTorch: 1.6.0
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.1
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.3
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
TorchVision: 0.7.0
OpenCV: 4.5.4
MMCV: 1.3.18
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 10.1
MMOCR: 0.3.0+3188e53
```
2. You may add additional information that may be helpful for locating the problem, such as
- How you installed PyTorch [e.g., pip, conda, source]
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
**Error traceback**
```bash
Traceback (most recent call last):
File "test.py", line 16, in <module>
result = ocr.readtext(data, print_result=True, imshow=False, batch_mode=True, merge=True)
File "/home/mpena/mmocr/mmocr/utils/ocr.py", line 426, in readtext
self.detect_model, self.recog_model, kie_model=self.kie_model)
File "/home/mpena/mmocr/mmocr/utils/ocr.py", line 574, in det_recog_kie_inference
recog_model, box_imgs, True, self.args.recog_batch_size)
File "/home/mpena/mmocr/mmocr/utils/ocr.py", line 628, in single_inference
result = model_inference(model, arrays, batch_mode=True)
File "/home/mpena/mmocr/mmocr/apis/inference.py", line 101, in model_inference
if not isinstance(imgs[0], (np.ndarray, str)):
IndexError: list index out of range
```
**Bug fix**
It is necessary to check the input size to ensure the input image list is not empty. I am willing to send a PR to fix this
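A minimal sketch of such a check (illustrative only — the helper name and the exact error message below are assumptions, not existing mmocr code):
```python
import numpy as np


def _validate_imgs(imgs):
    """Mirror the type checks in model_inference, but fail fast with a clear
    error when an empty list/tuple is passed instead of raising IndexError."""
    if isinstance(imgs, (list, tuple)):
        if len(imgs) == 0:
            raise ValueError('empty imgs provided, please check and try again')
        if not isinstance(imgs[0], (np.ndarray, str)):
            raise AssertionError('imgs must be strings or numpy arrays')
        return list(imgs), True  # imgs, is_batch
    if isinstance(imgs, (np.ndarray, str)):
        return [imgs], False
    raise AssertionError('imgs must be strings or numpy arrays')
```
The same `len(imgs) == 0` guard could equally be added inline at the top of `model_inference`, before `imgs[0]` is touched.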
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmocr/apis/inference.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import warnings
3
4 import mmcv
5 import numpy as np
6 import torch
7 from mmcv.ops import RoIPool
8 from mmcv.parallel import collate, scatter
9 from mmcv.runner import load_checkpoint
10 from mmdet.core import get_classes
11 from mmdet.datasets import replace_ImageToTensor
12 from mmdet.datasets.pipelines import Compose
13
14 from mmocr.models import build_detector
15
16
17 def init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None):
18 """Initialize a detector from config file.
19
20 Args:
21 config (str or :obj:`mmcv.Config`): Config file path or the config
22 object.
23 checkpoint (str, optional): Checkpoint path. If left as None, the model
24 will not load any weights.
25 cfg_options (dict): Options to override some settings in the used
26 config.
27
28 Returns:
29 nn.Module: The constructed detector.
30 """
31 if isinstance(config, str):
32 config = mmcv.Config.fromfile(config)
33 elif not isinstance(config, mmcv.Config):
34 raise TypeError('config must be a filename or Config object, '
35 f'but got {type(config)}')
36 if cfg_options is not None:
37 config.merge_from_dict(cfg_options)
38 if config.model.get('pretrained'):
39 config.model.pretrained = None
40 config.model.train_cfg = None
41 model = build_detector(config.model, test_cfg=config.get('test_cfg'))
42 if checkpoint is not None:
43 checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')
44 if 'CLASSES' in checkpoint.get('meta', {}):
45 model.CLASSES = checkpoint['meta']['CLASSES']
46 else:
47 warnings.simplefilter('once')
48 warnings.warn('Class names are not saved in the checkpoint\'s '
49 'meta data, use COCO classes by default.')
50 model.CLASSES = get_classes('coco')
51 model.cfg = config # save the config in the model for convenience
52 model.to(device)
53 model.eval()
54 return model
55
56
57 def disable_text_recog_aug_test(cfg, set_types=None):
58 """Remove aug_test from test pipeline of text recognition.
59 Args:
60 cfg (mmcv.Config): Input config.
61 set_types (list[str]): Type of dataset source. Should be
62 None or sublist of ['test', 'val']
63
64 Returns:
65 cfg (mmcv.Config): Output config removing
66 `MultiRotateAugOCR` in test pipeline.
67 """
68 assert set_types is None or isinstance(set_types, list)
69 if set_types is None:
70 set_types = ['val', 'test']
71 for set_type in set_types:
72 if cfg.data[set_type].pipeline[1].type == 'MultiRotateAugOCR':
73 cfg.data[set_type].pipeline = [
74 cfg.data[set_type].pipeline[0],
75 *cfg.data[set_type].pipeline[1].transforms
76 ]
77
78 return cfg
79
80
81 def model_inference(model,
82 imgs,
83 ann=None,
84 batch_mode=False,
85 return_data=False):
86 """Inference image(s) with the detector.
87
88 Args:
89 model (nn.Module): The loaded detector.
90 imgs (str/ndarray or list[str/ndarray] or tuple[str/ndarray]):
91 Either image files or loaded images.
92 batch_mode (bool): If True, use batch mode for inference.
93 ann (dict): Annotation info for key information extraction.
94 return_data: Return postprocessed data.
95 Returns:
96 result (dict): Predicted results.
97 """
98
99 if isinstance(imgs, (list, tuple)):
100 is_batch = True
101 if not isinstance(imgs[0], (np.ndarray, str)):
102 raise AssertionError('imgs must be strings or numpy arrays')
103
104 elif isinstance(imgs, (np.ndarray, str)):
105 imgs = [imgs]
106 is_batch = False
107 else:
108 raise AssertionError('imgs must be strings or numpy arrays')
109
110 is_ndarray = isinstance(imgs[0], np.ndarray)
111
112 cfg = model.cfg
113
114 if batch_mode:
115 cfg = disable_text_recog_aug_test(cfg, set_types=['test'])
116
117 device = next(model.parameters()).device # model device
118
119 if is_ndarray:
120 cfg = cfg.copy()
121 # set loading pipeline type
122 cfg.data.test.pipeline[0].type = 'LoadImageFromNdarray'
123
124 cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)
125 test_pipeline = Compose(cfg.data.test.pipeline)
126
127 datas = []
128 for img in imgs:
129 # prepare data
130 if is_ndarray:
131 # directly add img
132 data = dict(img=img, ann_info=ann, bbox_fields=[])
133 else:
134 # add information into dict
135 data = dict(
136 img_info=dict(filename=img),
137 img_prefix=None,
138 ann_info=ann,
139 bbox_fields=[])
140 if ann is not None:
141 data.update(dict(**ann))
142
143 # build the data pipeline
144 data = test_pipeline(data)
145 # get tensor from list to stack for batch mode (text detection)
146 if batch_mode:
147 if cfg.data.test.pipeline[1].type == 'MultiScaleFlipAug':
148 for key, value in data.items():
149 data[key] = value[0]
150 datas.append(data)
151
152 if isinstance(datas[0]['img'], list) and len(datas) > 1:
153 raise Exception('aug test does not support '
154 f'inference with batch size '
155 f'{len(datas)}')
156
157 data = collate(datas, samples_per_gpu=len(imgs))
158
159 # process img_metas
160 if isinstance(data['img_metas'], list):
161 data['img_metas'] = [
162 img_metas.data[0] for img_metas in data['img_metas']
163 ]
164 else:
165 data['img_metas'] = data['img_metas'].data
166
167 if isinstance(data['img'], list):
168 data['img'] = [img.data for img in data['img']]
169 if isinstance(data['img'][0], list):
170 data['img'] = [img[0] for img in data['img']]
171 else:
172 data['img'] = data['img'].data
173
174 # for KIE models
175 if ann is not None:
176 data['relations'] = data['relations'].data[0]
177 data['gt_bboxes'] = data['gt_bboxes'].data[0]
178 data['texts'] = data['texts'].data[0]
179 data['img'] = data['img'][0]
180 data['img_metas'] = data['img_metas'][0]
181
182 if next(model.parameters()).is_cuda:
183 # scatter to specified GPU
184 data = scatter(data, [device])[0]
185 else:
186 for m in model.modules():
187 assert not isinstance(
188 m, RoIPool
189 ), 'CPU inference with RoIPool is not supported currently.'
190
191 # forward the model
192 with torch.no_grad():
193 results = model(return_loss=False, rescale=True, **data)
194
195 if not is_batch:
196 if not return_data:
197 return results[0]
198 return results[0], datas[0]
199 else:
200 if not return_data:
201 return results
202 return results, datas
203
204
205 def text_model_inference(model, input_sentence):
206 """Inference text(s) with the entity recognizer.
207
208 Args:
209 model (nn.Module): The loaded recognizer.
210 input_sentence (str): A text entered by the user.
211
212 Returns:
213 result (dict): Predicted results.
214 """
215
216 assert isinstance(input_sentence, str)
217
218 cfg = model.cfg
219 test_pipeline = Compose(cfg.data.test.pipeline)
220 data = {'text': input_sentence, 'label': {}}
221
222 # build the data pipeline
223 data = test_pipeline(data)
224 if isinstance(data['img_metas'], dict):
225 img_metas = data['img_metas']
226 else:
227 img_metas = data['img_metas'].data
228
229 assert isinstance(img_metas, dict)
230 img_metas = {
231 'input_ids': img_metas['input_ids'].unsqueeze(0),
232 'attention_masks': img_metas['attention_masks'].unsqueeze(0),
233 'token_type_ids': img_metas['token_type_ids'].unsqueeze(0),
234 'labels': img_metas['labels'].unsqueeze(0)
235 }
236 # forward the model
237 with torch.no_grad():
238 result = model(None, img_metas, return_loss=False)
239 return result
240
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmocr/apis/inference.py b/mmocr/apis/inference.py
--- a/mmocr/apis/inference.py
+++ b/mmocr/apis/inference.py
@@ -98,6 +98,8 @@
if isinstance(imgs, (list, tuple)):
is_batch = True
+ if len(imgs) == 0:
+ raise Exception('empty imgs provided, please check and try again')
if not isinstance(imgs[0], (np.ndarray, str)):
raise AssertionError('imgs must be strings or numpy arrays')
| {"golden_diff": "diff --git a/mmocr/apis/inference.py b/mmocr/apis/inference.py\n--- a/mmocr/apis/inference.py\n+++ b/mmocr/apis/inference.py\n@@ -98,6 +98,8 @@\n \n if isinstance(imgs, (list, tuple)):\n is_batch = True\n+ if len(imgs) == 0:\n+ raise Exception('empty imgs provided, please check and try again')\n if not isinstance(imgs[0], (np.ndarray, str)):\n raise AssertionError('imgs must be strings or numpy arrays')\n", "issue": "IndexError when running model_inference with empty list\nThanks for your error report and we appreciate it a lot.\r\n\r\n**Checklist**\r\n\r\n1. I have searched related issues but cannot get the expected help.\r\n2. The bug has not been fixed in the latest version.\r\n\r\n**Bug description**\r\nI am using the mmocr/utils/ocr.py script. I create a MMOCR object with both detector and recognition. However, when running the readtext method ,there are some images where I get the following error:\r\n```python\r\nTraceback (most recent call last):\r\n File \"test.py\", line 16, in <module>\r\n result = ocr.readtext(data, print_result=True, imshow=False, batch_mode=True, merge=True)\r\n File \"/home/mpena/mmocr/mmocr/utils/ocr.py\", line 426, in readtext\r\n self.detect_model, self.recog_model, kie_model=self.kie_model)\r\n File \"/home/mpena/mmocr/mmocr/utils/ocr.py\", line 574, in det_recog_kie_inference\r\n recog_model, box_imgs, True, self.args.recog_batch_size)\r\n File \"/home/mpena/mmocr/mmocr/utils/ocr.py\", line 628, in single_inference\r\n result = model_inference(model, arrays, batch_mode=True)\r\n File \"/home/mpena/mmocr/mmocr/apis/inference.py\", line 101, in model_inference\r\n if not isinstance(imgs[0], (np.ndarray, str)):\r\nIndexError: list index out of range\r\n```\r\nThis happens because there are some images where the detector returns an empty list: 'boundary_result' from det_result at https://github.com/open-mmlab/mmocr/blob/main/mmocr/utils/ocr.py#L522\r\n\r\nAnd this breaks at https://github.com/open-mmlab/mmocr/blob/main/mmocr/apis/inference.py#L101\r\n```python\r\nif isinstance(imgs, (list, tuple)):\r\n is_batch = True\r\n if not isinstance(imgs[0], (np.ndarray, str)):\r\n raise AssertionError('imgs must be strings or numpy arrays')\r\n\r\n```\r\nbecause imgs[0] doesn't exist.\r\n\r\n**Reproduction**\r\nThe error can be reproduced with the following script, called from the mmocr directory\r\n\r\n```python\r\nfrom mmocr.utils.ocr import MMOCR\r\nocr = MMOCR() \r\ndet_result = []\r\nresults = ocr.readtext(det_result, batch_mode=True, merge=True, print_result=True, imshow=False)\r\n```\r\n\r\n**Environment**\r\n\r\n1. Please run `python mmocr/utils/collect_env.py` to collect necessary environment information and paste it here.\r\n```bash\r\nsys.platform: linux\r\nPython: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]\r\nCUDA available: True\r\nGPU 0: GeForce GTX 1080 Ti\r\nCUDA_HOME: /usr\r\nNVCC: Cuda compilation tools, release 10.1, V10.1.243\r\nGCC: gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0\r\nPyTorch: 1.6.0\r\nPyTorch compiling details: PyTorch built with:\r\n - GCC 7.3\r\n - C++ Version: 201402\r\n - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications\r\n - Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)\r\n - OpenMP 201511 (a.k.a. 
OpenMP 4.5)\r\n - NNPACK is enabled\r\n - CPU capability usage: AVX2\r\n - CUDA Runtime 10.1\r\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37\r\n - CuDNN 7.6.3\r\n - Magma 2.5.2\r\n - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, \r\n\r\nTorchVision: 0.7.0\r\nOpenCV: 4.5.4\r\nMMCV: 1.3.18\r\nMMCV Compiler: GCC 7.3\r\nMMCV CUDA Compiler: 10.1\r\nMMOCR: 0.3.0+3188e53\r\n```\r\n2. You may add addition that may be helpful for locating the problem, such as\r\n - How you installed PyTorch [e.g., pip, conda, source]\r\n - Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)\r\n\r\n**Error traceback**\r\n```bash\r\nTraceback (most recent call last):\r\n File \"test.py\", line 16, in <module>\r\n result = ocr.readtext(data, print_result=True, imshow=False, batch_mode=True, merge=True)\r\n File \"/home/mpena/mmocr/mmocr/utils/ocr.py\", line 426, in readtext\r\n self.detect_model, self.recog_model, kie_model=self.kie_model)\r\n File \"/home/mpena/mmocr/mmocr/utils/ocr.py\", line 574, in det_recog_kie_inference\r\n recog_model, box_imgs, True, self.args.recog_batch_size)\r\n File \"/home/mpena/mmocr/mmocr/utils/ocr.py\", line 628, in single_inference\r\n result = model_inference(model, arrays, batch_mode=True)\r\n File \"/home/mpena/mmocr/mmocr/apis/inference.py\", line 101, in model_inference\r\n if not isinstance(imgs[0], (np.ndarray, str)):\r\nIndexError: list index out of range\r\n```\r\n\r\n\r\n\r\n**Bug fix**\r\nIt is necessary to check input size to ensure input image list is not empy, I am wiling to send a PR to fix this\r\n\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. 
All rights reserved.\nimport warnings\n\nimport mmcv\nimport numpy as np\nimport torch\nfrom mmcv.ops import RoIPool\nfrom mmcv.parallel import collate, scatter\nfrom mmcv.runner import load_checkpoint\nfrom mmdet.core import get_classes\nfrom mmdet.datasets import replace_ImageToTensor\nfrom mmdet.datasets.pipelines import Compose\n\nfrom mmocr.models import build_detector\n\n\ndef init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None):\n \"\"\"Initialize a detector from config file.\n\n Args:\n config (str or :obj:`mmcv.Config`): Config file path or the config\n object.\n checkpoint (str, optional): Checkpoint path. If left as None, the model\n will not load any weights.\n cfg_options (dict): Options to override some settings in the used\n config.\n\n Returns:\n nn.Module: The constructed detector.\n \"\"\"\n if isinstance(config, str):\n config = mmcv.Config.fromfile(config)\n elif not isinstance(config, mmcv.Config):\n raise TypeError('config must be a filename or Config object, '\n f'but got {type(config)}')\n if cfg_options is not None:\n config.merge_from_dict(cfg_options)\n if config.model.get('pretrained'):\n config.model.pretrained = None\n config.model.train_cfg = None\n model = build_detector(config.model, test_cfg=config.get('test_cfg'))\n if checkpoint is not None:\n checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')\n if 'CLASSES' in checkpoint.get('meta', {}):\n model.CLASSES = checkpoint['meta']['CLASSES']\n else:\n warnings.simplefilter('once')\n warnings.warn('Class names are not saved in the checkpoint\\'s '\n 'meta data, use COCO classes by default.')\n model.CLASSES = get_classes('coco')\n model.cfg = config # save the config in the model for convenience\n model.to(device)\n model.eval()\n return model\n\n\ndef disable_text_recog_aug_test(cfg, set_types=None):\n \"\"\"Remove aug_test from test pipeline of text recognition.\n Args:\n cfg (mmcv.Config): Input config.\n set_types (list[str]): Type of dataset source. 
Should be\n None or sublist of ['test', 'val']\n\n Returns:\n cfg (mmcv.Config): Output config removing\n `MultiRotateAugOCR` in test pipeline.\n \"\"\"\n assert set_types is None or isinstance(set_types, list)\n if set_types is None:\n set_types = ['val', 'test']\n for set_type in set_types:\n if cfg.data[set_type].pipeline[1].type == 'MultiRotateAugOCR':\n cfg.data[set_type].pipeline = [\n cfg.data[set_type].pipeline[0],\n *cfg.data[set_type].pipeline[1].transforms\n ]\n\n return cfg\n\n\ndef model_inference(model,\n imgs,\n ann=None,\n batch_mode=False,\n return_data=False):\n \"\"\"Inference image(s) with the detector.\n\n Args:\n model (nn.Module): The loaded detector.\n imgs (str/ndarray or list[str/ndarray] or tuple[str/ndarray]):\n Either image files or loaded images.\n batch_mode (bool): If True, use batch mode for inference.\n ann (dict): Annotation info for key information extraction.\n return_data: Return postprocessed data.\n Returns:\n result (dict): Predicted results.\n \"\"\"\n\n if isinstance(imgs, (list, tuple)):\n is_batch = True\n if not isinstance(imgs[0], (np.ndarray, str)):\n raise AssertionError('imgs must be strings or numpy arrays')\n\n elif isinstance(imgs, (np.ndarray, str)):\n imgs = [imgs]\n is_batch = False\n else:\n raise AssertionError('imgs must be strings or numpy arrays')\n\n is_ndarray = isinstance(imgs[0], np.ndarray)\n\n cfg = model.cfg\n\n if batch_mode:\n cfg = disable_text_recog_aug_test(cfg, set_types=['test'])\n\n device = next(model.parameters()).device # model device\n\n if is_ndarray:\n cfg = cfg.copy()\n # set loading pipeline type\n cfg.data.test.pipeline[0].type = 'LoadImageFromNdarray'\n\n cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)\n test_pipeline = Compose(cfg.data.test.pipeline)\n\n datas = []\n for img in imgs:\n # prepare data\n if is_ndarray:\n # directly add img\n data = dict(img=img, ann_info=ann, bbox_fields=[])\n else:\n # add information into dict\n data = dict(\n img_info=dict(filename=img),\n img_prefix=None,\n ann_info=ann,\n bbox_fields=[])\n if ann is not None:\n data.update(dict(**ann))\n\n # build the data pipeline\n data = test_pipeline(data)\n # get tensor from list to stack for batch mode (text detection)\n if batch_mode:\n if cfg.data.test.pipeline[1].type == 'MultiScaleFlipAug':\n for key, value in data.items():\n data[key] = value[0]\n datas.append(data)\n\n if isinstance(datas[0]['img'], list) and len(datas) > 1:\n raise Exception('aug test does not support '\n f'inference with batch size '\n f'{len(datas)}')\n\n data = collate(datas, samples_per_gpu=len(imgs))\n\n # process img_metas\n if isinstance(data['img_metas'], list):\n data['img_metas'] = [\n img_metas.data[0] for img_metas in data['img_metas']\n ]\n else:\n data['img_metas'] = data['img_metas'].data\n\n if isinstance(data['img'], list):\n data['img'] = [img.data for img in data['img']]\n if isinstance(data['img'][0], list):\n data['img'] = [img[0] for img in data['img']]\n else:\n data['img'] = data['img'].data\n\n # for KIE models\n if ann is not None:\n data['relations'] = data['relations'].data[0]\n data['gt_bboxes'] = data['gt_bboxes'].data[0]\n data['texts'] = data['texts'].data[0]\n data['img'] = data['img'][0]\n data['img_metas'] = data['img_metas'][0]\n\n if next(model.parameters()).is_cuda:\n # scatter to specified GPU\n data = scatter(data, [device])[0]\n else:\n for m in model.modules():\n assert not isinstance(\n m, RoIPool\n ), 'CPU inference with RoIPool is not supported currently.'\n\n # forward the model\n with 
torch.no_grad():\n results = model(return_loss=False, rescale=True, **data)\n\n if not is_batch:\n if not return_data:\n return results[0]\n return results[0], datas[0]\n else:\n if not return_data:\n return results\n return results, datas\n\n\ndef text_model_inference(model, input_sentence):\n \"\"\"Inference text(s) with the entity recognizer.\n\n Args:\n model (nn.Module): The loaded recognizer.\n input_sentence (str): A text entered by the user.\n\n Returns:\n result (dict): Predicted results.\n \"\"\"\n\n assert isinstance(input_sentence, str)\n\n cfg = model.cfg\n test_pipeline = Compose(cfg.data.test.pipeline)\n data = {'text': input_sentence, 'label': {}}\n\n # build the data pipeline\n data = test_pipeline(data)\n if isinstance(data['img_metas'], dict):\n img_metas = data['img_metas']\n else:\n img_metas = data['img_metas'].data\n\n assert isinstance(img_metas, dict)\n img_metas = {\n 'input_ids': img_metas['input_ids'].unsqueeze(0),\n 'attention_masks': img_metas['attention_masks'].unsqueeze(0),\n 'token_type_ids': img_metas['token_type_ids'].unsqueeze(0),\n 'labels': img_metas['labels'].unsqueeze(0)\n }\n # forward the model\n with torch.no_grad():\n result = model(None, img_metas, return_loss=False)\n return result\n", "path": "mmocr/apis/inference.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport warnings\n\nimport mmcv\nimport numpy as np\nimport torch\nfrom mmcv.ops import RoIPool\nfrom mmcv.parallel import collate, scatter\nfrom mmcv.runner import load_checkpoint\nfrom mmdet.core import get_classes\nfrom mmdet.datasets import replace_ImageToTensor\nfrom mmdet.datasets.pipelines import Compose\n\nfrom mmocr.models import build_detector\n\n\ndef init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None):\n \"\"\"Initialize a detector from config file.\n\n Args:\n config (str or :obj:`mmcv.Config`): Config file path or the config\n object.\n checkpoint (str, optional): Checkpoint path. If left as None, the model\n will not load any weights.\n cfg_options (dict): Options to override some settings in the used\n config.\n\n Returns:\n nn.Module: The constructed detector.\n \"\"\"\n if isinstance(config, str):\n config = mmcv.Config.fromfile(config)\n elif not isinstance(config, mmcv.Config):\n raise TypeError('config must be a filename or Config object, '\n f'but got {type(config)}')\n if cfg_options is not None:\n config.merge_from_dict(cfg_options)\n if config.model.get('pretrained'):\n config.model.pretrained = None\n config.model.train_cfg = None\n model = build_detector(config.model, test_cfg=config.get('test_cfg'))\n if checkpoint is not None:\n checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')\n if 'CLASSES' in checkpoint.get('meta', {}):\n model.CLASSES = checkpoint['meta']['CLASSES']\n else:\n warnings.simplefilter('once')\n warnings.warn('Class names are not saved in the checkpoint\\'s '\n 'meta data, use COCO classes by default.')\n model.CLASSES = get_classes('coco')\n model.cfg = config # save the config in the model for convenience\n model.to(device)\n model.eval()\n return model\n\n\ndef disable_text_recog_aug_test(cfg, set_types=None):\n \"\"\"Remove aug_test from test pipeline of text recognition.\n Args:\n cfg (mmcv.Config): Input config.\n set_types (list[str]): Type of dataset source. 
Should be\n None or sublist of ['test', 'val']\n\n Returns:\n cfg (mmcv.Config): Output config removing\n `MultiRotateAugOCR` in test pipeline.\n \"\"\"\n assert set_types is None or isinstance(set_types, list)\n if set_types is None:\n set_types = ['val', 'test']\n for set_type in set_types:\n if cfg.data[set_type].pipeline[1].type == 'MultiRotateAugOCR':\n cfg.data[set_type].pipeline = [\n cfg.data[set_type].pipeline[0],\n *cfg.data[set_type].pipeline[1].transforms\n ]\n\n return cfg\n\n\ndef model_inference(model,\n imgs,\n ann=None,\n batch_mode=False,\n return_data=False):\n \"\"\"Inference image(s) with the detector.\n\n Args:\n model (nn.Module): The loaded detector.\n imgs (str/ndarray or list[str/ndarray] or tuple[str/ndarray]):\n Either image files or loaded images.\n batch_mode (bool): If True, use batch mode for inference.\n ann (dict): Annotation info for key information extraction.\n return_data: Return postprocessed data.\n Returns:\n result (dict): Predicted results.\n \"\"\"\n\n if isinstance(imgs, (list, tuple)):\n is_batch = True\n if len(imgs) == 0:\n raise Exception('empty imgs provided, please check and try again')\n if not isinstance(imgs[0], (np.ndarray, str)):\n raise AssertionError('imgs must be strings or numpy arrays')\n\n elif isinstance(imgs, (np.ndarray, str)):\n imgs = [imgs]\n is_batch = False\n else:\n raise AssertionError('imgs must be strings or numpy arrays')\n\n is_ndarray = isinstance(imgs[0], np.ndarray)\n\n cfg = model.cfg\n\n if batch_mode:\n cfg = disable_text_recog_aug_test(cfg, set_types=['test'])\n\n device = next(model.parameters()).device # model device\n\n if is_ndarray:\n cfg = cfg.copy()\n # set loading pipeline type\n cfg.data.test.pipeline[0].type = 'LoadImageFromNdarray'\n\n cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)\n test_pipeline = Compose(cfg.data.test.pipeline)\n\n datas = []\n for img in imgs:\n # prepare data\n if is_ndarray:\n # directly add img\n data = dict(img=img, ann_info=ann, bbox_fields=[])\n else:\n # add information into dict\n data = dict(\n img_info=dict(filename=img),\n img_prefix=None,\n ann_info=ann,\n bbox_fields=[])\n if ann is not None:\n data.update(dict(**ann))\n\n # build the data pipeline\n data = test_pipeline(data)\n # get tensor from list to stack for batch mode (text detection)\n if batch_mode:\n if cfg.data.test.pipeline[1].type == 'MultiScaleFlipAug':\n for key, value in data.items():\n data[key] = value[0]\n datas.append(data)\n\n if isinstance(datas[0]['img'], list) and len(datas) > 1:\n raise Exception('aug test does not support '\n f'inference with batch size '\n f'{len(datas)}')\n\n data = collate(datas, samples_per_gpu=len(imgs))\n\n # process img_metas\n if isinstance(data['img_metas'], list):\n data['img_metas'] = [\n img_metas.data[0] for img_metas in data['img_metas']\n ]\n else:\n data['img_metas'] = data['img_metas'].data\n\n if isinstance(data['img'], list):\n data['img'] = [img.data for img in data['img']]\n if isinstance(data['img'][0], list):\n data['img'] = [img[0] for img in data['img']]\n else:\n data['img'] = data['img'].data\n\n # for KIE models\n if ann is not None:\n data['relations'] = data['relations'].data[0]\n data['gt_bboxes'] = data['gt_bboxes'].data[0]\n data['texts'] = data['texts'].data[0]\n data['img'] = data['img'][0]\n data['img_metas'] = data['img_metas'][0]\n\n if next(model.parameters()).is_cuda:\n # scatter to specified GPU\n data = scatter(data, [device])[0]\n else:\n for m in model.modules():\n assert not isinstance(\n m, RoIPool\n ), 
'CPU inference with RoIPool is not supported currently.'\n\n # forward the model\n with torch.no_grad():\n results = model(return_loss=False, rescale=True, **data)\n\n if not is_batch:\n if not return_data:\n return results[0]\n return results[0], datas[0]\n else:\n if not return_data:\n return results\n return results, datas\n\n\ndef text_model_inference(model, input_sentence):\n \"\"\"Inference text(s) with the entity recognizer.\n\n Args:\n model (nn.Module): The loaded recognizer.\n input_sentence (str): A text entered by the user.\n\n Returns:\n result (dict): Predicted results.\n \"\"\"\n\n assert isinstance(input_sentence, str)\n\n cfg = model.cfg\n test_pipeline = Compose(cfg.data.test.pipeline)\n data = {'text': input_sentence, 'label': {}}\n\n # build the data pipeline\n data = test_pipeline(data)\n if isinstance(data['img_metas'], dict):\n img_metas = data['img_metas']\n else:\n img_metas = data['img_metas'].data\n\n assert isinstance(img_metas, dict)\n img_metas = {\n 'input_ids': img_metas['input_ids'].unsqueeze(0),\n 'attention_masks': img_metas['attention_masks'].unsqueeze(0),\n 'token_type_ids': img_metas['token_type_ids'].unsqueeze(0),\n 'labels': img_metas['labels'].unsqueeze(0)\n }\n # forward the model\n with torch.no_grad():\n result = model(None, img_metas, return_loss=False)\n return result\n", "path": "mmocr/apis/inference.py"}]} |
gh_patches_debug_1056 | rasdani/github-patches | git_diff | Kinto__kinto-889 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash when header contains control characters
```
Python 2.7.12 (default, Jul 1 2016, 15:12:24)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> requests.get("http://localhost:8888/crlftest%0DSet-Cookie:test%3Dtest%3Bdomain%3D.yelp.com")
<Response [500]>
>>>
```
```
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/tweens.py", line 22, in excview_tween
response = handler(request)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid_tm/__init__.py", line 109, in tm_tween
reraise(*exc_info)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid_tm/__init__.py", line 88, in tm_tween
response = handler(request)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/router.py", line 158, in handle_request
view_name
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/view.py", line 547, in _call_view
response = view_callable(context, request)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/viewderivers.py", line 413, in viewresult_to_response
result = view(context, request)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/viewderivers.py", line 147, in _requestonly_view
response = view(request)
File "/data/kinto-dist/lib/python2.7/site-packages/kinto/core/initialization.py", line 79, in _redirect_to_version_view
raise HTTPTemporaryRedirect(redirect)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/httpexceptions.py", line 493, in __init__
body_template=body_template, location=location, **kw)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/httpexceptions.py", line 221, in __init__
Response.__init__(self, status=status, **kw)
File "/data/kinto-dist/lib/python2.7/site-packages/webob/response.py", line 153, in __init__
setattr(self, name, value)
File "/data/kinto-dist/lib/python2.7/site-packages/webob/descriptors.py", line 142, in fset
raise ValueError('Header value may not contain control characters')
ValueError: Header value may not contain control characters","uid":null,"errno":null,"querystring":"{}","agent":"Amazon CloudFront","method":"GET","path":"/crlftest%0DSet-Cookie:test%3Dtest%3Bdomain%3D.yelp.com","authn_type":null},"Logger":"kinto","Type":["Header value may not contain control characters"],"Severity":2}
```
Crash when header contains control characters
```
Python 2.7.12 (default, Jul 1 2016, 15:12:24)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> requests.get("http://localhost:8888/crlftest%0DSet-Cookie:test%3Dtest%3Bdomain%3D.yelp.com")
<Response [500]>
>>>
```
```
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/tweens.py", line 22, in excview_tween
response = handler(request)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid_tm/__init__.py", line 109, in tm_tween
reraise(*exc_info)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid_tm/__init__.py", line 88, in tm_tween
response = handler(request)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/router.py", line 158, in handle_request
view_name
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/view.py", line 547, in _call_view
response = view_callable(context, request)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/viewderivers.py", line 413, in viewresult_to_response
result = view(context, request)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/viewderivers.py", line 147, in _requestonly_view
response = view(request)
File "/data/kinto-dist/lib/python2.7/site-packages/kinto/core/initialization.py", line 79, in _redirect_to_version_view
raise HTTPTemporaryRedirect(redirect)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/httpexceptions.py", line 493, in __init__
body_template=body_template, location=location, **kw)
File "/data/kinto-dist/lib/python2.7/site-packages/pyramid/httpexceptions.py", line 221, in __init__
Response.__init__(self, status=status, **kw)
File "/data/kinto-dist/lib/python2.7/site-packages/webob/response.py", line 153, in __init__
setattr(self, name, value)
File "/data/kinto-dist/lib/python2.7/site-packages/webob/descriptors.py", line 142, in fset
raise ValueError('Header value may not contain control characters')
ValueError: Header value may not contain control characters","uid":null,"errno":null,"querystring":"{}","agent":"Amazon CloudFront","method":"GET","path":"/crlftest%0DSet-Cookie:test%3Dtest%3Bdomain%3D.yelp.com","authn_type":null},"Logger":"kinto","Type":["Header value may not contain control characters"],"Severity":2}
```
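For illustration, the crash can be reduced to the redirect string that `_redirect_to_version_view` builds: once the URL-decoded path contains a carriage return (`%0D`), WebOb refuses to put it into the `Location` response header. A minimal sketch (assumes only `pyramid`/`webob` are installed; the `v1` prefix is just an example value):
```python
from pyramid.httpexceptions import HTTPTemporaryRedirect

# %0D in the request URL is decoded to '\r' in request.matchdict['path'].
path = 'crlftest\rSet-Cookie:test=test;domain=.yelp.com'
redirect = '/%s/%s%s' % ('v1', path, '')  # same formatting as the view

# Constructing the redirect sets the Location header, and WebOb raises:
# ValueError: Header value may not contain control characters
HTTPTemporaryRedirect(redirect)
```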
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/core/initialization.py`
Content:
```
1 import re
2 import warnings
3 from datetime import datetime
4 from dateutil import parser as dateparser
5
6 import structlog
7 from pyramid.events import NewRequest, NewResponse
8 from pyramid.exceptions import ConfigurationError
9 from pyramid.httpexceptions import (HTTPTemporaryRedirect, HTTPGone,
10 HTTPBadRequest)
11 from pyramid.renderers import JSON as JSONRenderer
12 from pyramid.security import NO_PERMISSION_REQUIRED
13 from pyramid.interfaces import IAuthenticationPolicy
14 from pyramid.settings import asbool, aslist
15 from pyramid_multiauth import (MultiAuthenticationPolicy,
16 MultiAuthPolicySelected)
17 try:
18 import newrelic.agent
19 except ImportError: # pragma: no cover
20 newrelic = None
21 try:
22 from werkzeug.contrib.profiler import ProfilerMiddleware
23 except ImportError: # pragma: no cover
24 pass
25
26 from kinto.core import errors
27 from kinto.core import utils
28 from kinto.core import cache
29 from kinto.core import storage
30 from kinto.core import permission
31 from kinto.core.logs import logger
32 from kinto.core.events import ResourceRead, ResourceChanged, ACTIONS
33
34
35 def setup_request_bound_data(config):
36 """Attach custom data on request object, and share it with parent
37 requests during batch."""
38 def attach_bound_data(request):
39 parent = getattr(request, 'parent', None)
40 return parent.bound_data if parent else {}
41
42 config.add_request_method(attach_bound_data, name='bound_data', reify=True)
43
44
45 def setup_json_serializer(config):
46 import requests
47 import webob
48
49 # Monkey patch to use ujson
50 webob.request.json = utils.json
51 requests.models.json = utils.json
52
53 # Override json renderer using ujson
54 renderer = JSONRenderer(serializer=utils.json_serializer)
55 config.add_renderer('json', renderer)
56
57
58 def setup_version_redirection(config):
59 """Add a view which redirects to the current version of the API.
60 """
61 settings = config.get_settings()
62 redirect_enabled = settings['version_prefix_redirect_enabled']
63 version_prefix_redirection_enabled = asbool(redirect_enabled)
64
65 route_prefix = config.route_prefix
66 config.registry.route_prefix = route_prefix
67
68 # Redirect to the current version of the API if the prefix isn't used.
69 # Do not redirect if kinto.version_prefix_redirect_enabled is set to
70 # False.
71 if not version_prefix_redirection_enabled:
72 return
73
74 def _redirect_to_version_view(request):
75 path = request.matchdict['path']
76 querystring = request.url[(request.url.rindex(request.path) +
77 len(request.path)):]
78 redirect = '/%s/%s%s' % (route_prefix, path, querystring)
79 raise HTTPTemporaryRedirect(redirect)
80
81 # Disable the route prefix passed by the app.
82 config.route_prefix = None
83
84 config.add_route(name='redirect_to_version',
85 pattern='/{path:(?!v[0-9]+).*}')
86
87 config.add_view(view=_redirect_to_version_view,
88 route_name='redirect_to_version',
89 permission=NO_PERMISSION_REQUIRED)
90
91 config.route_prefix = route_prefix
92
93
94 def setup_authentication(config):
95 """Let pyramid_multiauth manage authentication and authorization
96 from configuration.
97 """
98 config.include('pyramid_multiauth')
99
100 # Track policy used, for prefixing user_id and for logging.
101 def on_policy_selected(event):
102 authn_type = event.policy_name.lower()
103 event.request.authn_type = authn_type
104 event.request.selected_userid = event.userid
105 # Add authentication info to context.
106 logger.bind(uid=event.userid, authn_type=authn_type)
107
108 config.add_subscriber(on_policy_selected, MultiAuthPolicySelected)
109
110
111 def setup_backoff(config):
112 """Attach HTTP requests/responses objects.
113
114 This is useful to attach objects to the request object for easier
115 access, and to pre-process responses.
116 """
117 def on_new_response(event):
118 # Add backoff in response headers.
119 backoff = config.registry.settings['backoff']
120 if backoff is not None:
121 backoff = utils.encode_header('%s' % backoff)
122 event.response.headers['Backoff'] = backoff
123
124 config.add_subscriber(on_new_response, NewResponse)
125
126
127 def setup_requests_scheme(config):
128 """Force server scheme, host and port at the application level."""
129 settings = config.get_settings()
130
131 http_scheme = settings['http_scheme']
132 http_host = settings['http_host']
133
134 def on_new_request(event):
135 if http_scheme:
136 event.request.scheme = http_scheme
137 if http_host:
138 event.request.host = http_host
139
140 if http_scheme or http_host:
141 config.add_subscriber(on_new_request, NewRequest)
142
143
144 def setup_deprecation(config):
145 config.add_tween("kinto.core.initialization._end_of_life_tween_factory")
146
147
148 def _end_of_life_tween_factory(handler, registry):
149 """Pyramid tween to handle service end of life."""
150 deprecation_msg = ("The service you are trying to connect no longer exists"
151 " at this location.")
152
153 def eos_tween(request):
154 eos_date = registry.settings['eos']
155 eos_url = registry.settings['eos_url']
156 eos_message = registry.settings['eos_message']
157 if not eos_date:
158 return handler(request)
159
160 eos_date = dateparser.parse(eos_date)
161 if eos_date > datetime.now():
162 code = "soft-eol"
163 request.response = handler(request)
164 else:
165 code = "hard-eol"
166 request.response = errors.http_error(
167 HTTPGone(),
168 errno=errors.ERRORS.SERVICE_DEPRECATED,
169 message=deprecation_msg)
170
171 errors.send_alert(request, eos_message, url=eos_url, code=code)
172 return request.response
173
174 return eos_tween
175
176
177 def setup_storage(config):
178 settings = config.get_settings()
179
180 # Id generators by resource name.
181 config.registry.id_generators = {}
182 for key, value in settings.items():
183 m = re.match(r'^([^_]*)_?id_generator', key)
184 if m is None:
185 continue
186 resource_name = m.group(1)
187 id_generator = config.maybe_dotted(value)
188 config.registry.id_generators[resource_name] = id_generator()
189
190 storage_mod = settings['storage_backend']
191 if not storage_mod:
192 return
193
194 storage_mod = config.maybe_dotted(storage_mod)
195 backend = storage_mod.load_from_config(config)
196 if not isinstance(backend, storage.StorageBase):
197 raise ConfigurationError("Invalid storage backend: %s" % backend)
198 config.registry.storage = backend
199
200 heartbeat = storage.heartbeat(backend)
201 config.registry.heartbeats['storage'] = heartbeat
202
203
204 def setup_permission(config):
205 settings = config.get_settings()
206 permission_mod = settings['permission_backend']
207 if not permission_mod:
208 return
209
210 permission_mod = config.maybe_dotted(permission_mod)
211 backend = permission_mod.load_from_config(config)
212 if not isinstance(backend, permission.PermissionBase):
213 raise ConfigurationError("Invalid permission backend: %s" % backend)
214 config.registry.permission = backend
215
216 heartbeat = permission.heartbeat(backend)
217 config.registry.heartbeats['permission'] = heartbeat
218
219
220 def setup_cache(config):
221 settings = config.get_settings()
222 cache_mod = settings['cache_backend']
223 if not cache_mod:
224 return
225
226 cache_mod = config.maybe_dotted(cache_mod)
227 backend = cache_mod.load_from_config(config)
228 if not isinstance(backend, cache.CacheBase):
229 raise ConfigurationError("Invalid cache backend: %s" % backend)
230 config.registry.cache = backend
231
232 heartbeat = cache.heartbeat(backend)
233 config.registry.heartbeats['cache'] = heartbeat
234
235
236 def setup_statsd(config):
237 settings = config.get_settings()
238 config.registry.statsd = None
239
240 if settings['statsd_url']:
241 statsd_mod = settings['statsd_backend']
242 statsd_mod = config.maybe_dotted(statsd_mod)
243 client = statsd_mod.load_from_config(config)
244
245 config.registry.statsd = client
246
247 client.watch_execution_time(config.registry.cache, prefix='backend')
248 client.watch_execution_time(config.registry.storage, prefix='backend')
249 client.watch_execution_time(config.registry.permission, prefix='backend')
250
251 # Commit so that configured policy can be queried.
252 config.commit()
253 policy = config.registry.queryUtility(IAuthenticationPolicy)
254 if isinstance(policy, MultiAuthenticationPolicy):
255 for name, subpolicy in policy.get_policies():
256 client.watch_execution_time(subpolicy,
257 prefix='authentication',
258 classname=name)
259 else:
260 client.watch_execution_time(policy, prefix='authentication')
261
262 def on_new_response(event):
263 request = event.request
264
265 # Count unique users.
266 user_id = request.prefixed_userid
267 if user_id:
268 client.count('users', unique=user_id)
269
270 # Count authentication verifications.
271 if hasattr(request, 'authn_type'):
272 client.count('authn_type.%s' % request.authn_type)
273
274 # Count view calls.
275 service = request.current_service
276 if service:
277 client.count('view.%s.%s' % (service.name, request.method))
278
279 config.add_subscriber(on_new_response, NewResponse)
280
281 return client
282
283
284 def install_middlewares(app, settings):
285 "Install a set of middlewares defined in the ini file on the given app."
286 # Setup new-relic.
287 if settings.get('newrelic_config'):
288 ini_file = settings['newrelic_config']
289 env = settings['newrelic_env']
290 newrelic.agent.initialize(ini_file, env)
291 app = newrelic.agent.WSGIApplicationWrapper(app)
292
293 # Adds the Werkzeug profiler.
294 if asbool(settings.get('profiler_enabled')):
295 profile_dir = settings['profiler_dir']
296 app = ProfilerMiddleware(app, profile_dir=profile_dir,
297 restrictions=('*kinto.core*'))
298
299 return app
300
301
302 def setup_logging(config):
303 """Setup structured logging, and emit `request.summary` event on each
304 request, as recommanded by Mozilla Services standard:
305
306 * https://mana.mozilla.org/wiki/display/CLOUDSERVICES/Logging+Standard
307 * http://12factor.net/logs
308 """
309 settings = config.get_settings()
310
311 renderer_klass = config.maybe_dotted(settings['logging_renderer'])
312 renderer = renderer_klass(settings)
313
314 structlog.configure(
315 # Share the logger context by thread.
316 context_class=structlog.threadlocal.wrap_dict(dict),
317 # Integrate with Pyramid logging facilities.
318 logger_factory=structlog.stdlib.LoggerFactory(),
319 wrapper_class=structlog.stdlib.BoundLogger,
320 # Setup logger output format.
321 processors=[
322 structlog.stdlib.filter_by_level,
323 structlog.processors.format_exc_info,
324 renderer,
325 ])
326
327 def on_new_request(event):
328 request = event.request
329 # Save the time the request was received by the server.
330 event.request._received_at = utils.msec_time()
331
332 try:
333 # Pyramid fails if the URL contains invalid UTF-8 characters.
334 request_path = event.request.path
335 except UnicodeDecodeError:
336 raise errors.http_error(
337 HTTPBadRequest(),
338 errno=errors.ERRORS.INVALID_PARAMETERS,
339 message="Invalid URL path.")
340
341 # New logger context, with infos for request summary logger.
342 logger.new(agent=request.headers.get('User-Agent'),
343 path=request_path,
344 method=request.method,
345 querystring=dict(request.GET),
346 lang=request.headers.get('Accept-Language'),
347 uid=None,
348 authn_type=None,
349 errno=None)
350
351 config.add_subscriber(on_new_request, NewRequest)
352
353 def on_new_response(event):
354 response = event.response
355 request = event.request
356
357 # Compute the request processing time in msec (-1 if unknown)
358 current = utils.msec_time()
359 duration = current - getattr(request, '_received_at', current - 1)
360 isotimestamp = datetime.fromtimestamp(current/1000).isoformat()
361
362 # Bind infos for request summary logger.
363 logger.bind(time=isotimestamp,
364 code=response.status_code,
365 t=duration)
366
367 # Ouput application request summary.
368 if not hasattr(request, 'parent'):
369 logger.info('request.summary')
370
371 config.add_subscriber(on_new_response, NewResponse)
372
373
374 class EventActionFilter(object):
375 def __init__(self, actions, config):
376 actions = ACTIONS.from_string_list(actions)
377 self.actions = [action.value for action in actions]
378
379 def phash(self):
380 return 'for_actions = %s' % (','.join(self.actions))
381
382 def __call__(self, event):
383 action = event.payload.get('action')
384 return not action or action in self.actions
385
386
387 class EventResourceFilter(object):
388 def __init__(self, resources, config):
389 self.resources = resources
390
391 def phash(self):
392 return 'for_resources = %s' % (','.join(self.resources))
393
394 def __call__(self, event):
395 resource = event.payload.get('resource_name')
396 return not resource or not self.resources or resource in self.resources
397
398
399 def setup_listeners(config):
400 # Register basic subscriber predicates, to filter events.
401 config.add_subscriber_predicate('for_actions', EventActionFilter)
402 config.add_subscriber_predicate('for_resources', EventResourceFilter)
403
404 write_actions = (ACTIONS.CREATE, ACTIONS.UPDATE, ACTIONS.DELETE)
405 settings = config.get_settings()
406 project_name = settings.get('project_name', '')
407 listeners = aslist(settings['event_listeners'])
408
409 for name in listeners:
410 logger.info('Setting up %r listener' % name)
411 prefix = 'event_listeners.%s.' % name
412
413 try:
414 listener_mod = config.maybe_dotted(name)
415 prefix = 'event_listeners.%s.' % name.split('.')[-1]
416 listener = listener_mod.load_from_config(config, prefix)
417 except (ImportError, AttributeError):
418 module_setting = prefix + "use"
419 # Read from ENV or settings.
420 module_value = utils.read_env(project_name + "." + module_setting,
421 settings.get(module_setting))
422 listener_mod = config.maybe_dotted(module_value)
423 listener = listener_mod.load_from_config(config, prefix)
424
425 # If StatsD is enabled, monitor execution time of listeners.
426 if getattr(config.registry, "statsd", None):
427 statsd_client = config.registry.statsd
428 key = 'listeners.%s' % name
429 listener = statsd_client.timer(key)(listener.__call__)
430
431 # Optional filter by event action.
432 actions_setting = prefix + "actions"
433 # Read from ENV or settings.
434 actions_value = utils.read_env(project_name + "." + actions_setting,
435 settings.get(actions_setting, ""))
436 actions = aslist(actions_value)
437 if len(actions) > 0:
438 actions = ACTIONS.from_string_list(actions)
439 else:
440 actions = write_actions
441
442 # Optional filter by event resource name.
443 resource_setting = prefix + "resources"
444 # Read from ENV or settings.
445 resource_value = utils.read_env(project_name + "." + resource_setting,
446 settings.get(resource_setting, ""))
447 resource_names = aslist(resource_value)
448
449 # Pyramid event predicates.
450 options = dict(for_actions=actions, for_resources=resource_names)
451
452 if ACTIONS.READ in actions:
453 config.add_subscriber(listener, ResourceRead, **options)
454 if len(actions) == 1:
455 return
456
457 config.add_subscriber(listener, ResourceChanged, **options)
458
459
460 def load_default_settings(config, default_settings):
461 """Read settings provided in Paste ini file, set default values and
462 replace if defined as environment variable.
463 """
464 settings = config.get_settings()
465
466 project_name = settings['project_name']
467
468 def _prefixed_keys(key):
469 unprefixed = key
470 if key.startswith('kinto.') or key.startswith(project_name + '.'):
471 unprefixed = key.split('.', 1)[1]
472 project_prefix = project_name + '.' + unprefixed
473 kinto_prefix = 'kinto.' + unprefixed
474 return unprefixed, project_prefix, kinto_prefix
475
476 # Fill settings with default values if not defined.
477 for key, default_value in sorted(default_settings.items()):
478 unprefixed, project_prefix, kinto_prefix = keys = _prefixed_keys(key)
479 is_defined = len(set(settings.keys()).intersection(set(keys))) > 0
480 if not is_defined:
481 settings[unprefixed] = default_value
482
483 for key, value in sorted(settings.items()):
484 unprefixed, project_prefix, kinto_prefix = keys = _prefixed_keys(key)
485
486 # Fail if not only one is defined.
487 defined = set(settings.keys()).intersection(set(keys))
488 distinct_values = set([str(settings[d]) for d in defined])
489
490 if len(defined) > 1 and len(distinct_values) > 1:
491 names = "', '".join(defined)
492 raise ValueError("Settings '%s' are in conflict." % names)
493
494 # Maintain backwards compatibility with old settings files that
495 # have backend settings like cliquet.foo (which is now
496 # kinto.core.foo).
497 unprefixed, _, _ = _prefixed_keys(key)
498 CONTAIN_CLIQUET_MODULE_NAMES = [
499 'storage_backend',
500 'cache_backend',
501 'permission_backend',
502 'logging_renderer',
503 ]
504 if unprefixed in CONTAIN_CLIQUET_MODULE_NAMES and \
505 value.startswith('cliquet.'):
506 new_value = value.replace('cliquet.', 'kinto.core.')
507 logger.warn(
508 "Backend settings referring to cliquet are DEPRECATED. "
509 "Please update your {} setting to {} (was: {}).".format(
510 key, new_value, value))
511 value = new_value
512
513 # Override settings from OS env values.
514 # e.g. HTTP_PORT, READINGLIST_HTTP_PORT, KINTO_HTTP_PORT
515 from_env = utils.read_env(unprefixed, value)
516 from_env = utils.read_env(project_prefix, from_env)
517 from_env = utils.read_env(kinto_prefix, from_env)
518
519 settings[unprefixed] = from_env
520
521 config.add_settings(settings)
522
523
524 def initialize(config, version=None, project_name='', default_settings=None):
525 """Initialize kinto.core with the given configuration, version and project
526 name.
527
528 This will basically include kinto.core in Pyramid and set route prefix
529 based on the specified version.
530
531 :param config: Pyramid configuration
532 :type config: ~pyramid:pyramid.config.Configurator
533 :param str version: Current project version (e.g. '0.0.1') if not defined
534 in application settings.
535 :param str project_name: Project name if not defined
536 in application settings.
537 :param dict default_settings: Override kinto.core default settings values.
538 """
539 from kinto.core import DEFAULT_SETTINGS
540
541 settings = config.get_settings()
542
543 project_name = settings.pop('kinto.project_name',
544 settings.get('project_name')) or project_name
545 settings['project_name'] = project_name
546 if not project_name:
547 warnings.warn('No value specified for `project_name`')
548
549 kinto_core_defaults = DEFAULT_SETTINGS.copy()
550
551 if default_settings:
552 kinto_core_defaults.update(default_settings)
553
554 load_default_settings(config, kinto_core_defaults)
555
556 http_scheme = settings['http_scheme']
557 if http_scheme != 'https':
558 warnings.warn('HTTPS is not enabled')
559
560 # Override project version from settings.
561 project_version = settings.get('project_version') or version
562 if not project_version:
563 error_msg = "Invalid project version: %s" % project_version
564 raise ConfigurationError(error_msg)
565 settings['project_version'] = project_version = str(project_version)
566
567 # HTTP API version.
568 http_api_version = settings.get('http_api_version')
569 if http_api_version is None:
570 # The API version is derivated from the module version if not provided.
571 http_api_version = '.'.join(project_version.split('.')[0:2])
572 settings['http_api_version'] = http_api_version = str(http_api_version)
573 api_version = 'v%s' % http_api_version.split('.')[0]
574
575 # Include kinto.core views with the correct api version prefix.
576 config.include("kinto.core", route_prefix=api_version)
577 config.route_prefix = api_version
578
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/core/initialization.py b/kinto/core/initialization.py
--- a/kinto/core/initialization.py
+++ b/kinto/core/initialization.py
@@ -82,7 +82,7 @@
config.route_prefix = None
config.add_route(name='redirect_to_version',
- pattern='/{path:(?!v[0-9]+).*}')
+ pattern=r'/{path:(?!v[0-9]+)[^\r]*}')
config.add_view(view=_redirect_to_version_view,
route_name='redirect_to_version',
| {"golden_diff": "diff --git a/kinto/core/initialization.py b/kinto/core/initialization.py\n--- a/kinto/core/initialization.py\n+++ b/kinto/core/initialization.py\n@@ -82,7 +82,7 @@\n config.route_prefix = None\n \n config.add_route(name='redirect_to_version',\n- pattern='/{path:(?!v[0-9]+).*}')\n+ pattern=r'/{path:(?!v[0-9]+)[^\\r]*}')\n \n config.add_view(view=_redirect_to_version_view,\n route_name='redirect_to_version',\n", "issue": "Crash when header contains control characters\n```\nPython 2.7.12 (default, Jul 1 2016, 15:12:24) \n[GCC 5.4.0 20160609] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import requests\n>>> requests.get(\"http://localhost:8888/crlftest%0DSet-Cookie:test%3Dtest%3Bdomain%3D.yelp.com\")\n<Response [500]>\n>>> \n```\n\n```\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/tweens.py\", line 22, in excview_tween\n response = handler(request)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid_tm/__init__.py\", line 109, in tm_tween\n reraise(*exc_info)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid_tm/__init__.py\", line 88, in tm_tween\n response = handler(request)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/router.py\", line 158, in handle_request\n view_name\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/view.py\", line 547, in _call_view\n response = view_callable(context, request)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/viewderivers.py\", line 413, in viewresult_to_response\n result = view(context, request)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/viewderivers.py\", line 147, in _requestonly_view\n response = view(request)\n File \"/data/kinto-dist/lib/python2.7/site-packages/kinto/core/initialization.py\", line 79, in _redirect_to_version_view\n raise HTTPTemporaryRedirect(redirect)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/httpexceptions.py\", line 493, in __init__\n body_template=body_template, location=location, **kw)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/httpexceptions.py\", line 221, in __init__\n Response.__init__(self, status=status, **kw)\n File \"/data/kinto-dist/lib/python2.7/site-packages/webob/response.py\", line 153, in __init__\n setattr(self, name, value)\n File \"/data/kinto-dist/lib/python2.7/site-packages/webob/descriptors.py\", line 142, in fset\n raise ValueError('Header value may not contain control characters')\n ValueError: Header value may not contain control characters\",\"uid\":null,\"errno\":null,\"querystring\":\"{}\",\"agent\":\"Amazon CloudFront\",\"method\":\"GET\",\"path\":\"/crlftest%0DSet-Cookie:test%3Dtest%3Bdomain%3D.yelp.com\",\"authn_type\":null},\"Logger\":\"kinto\",\"Type\":[\"Header value may not contain control characters\"],\"Severity\":2}\n```\n\nCrash when header contains control characters\n```\nPython 2.7.12 (default, Jul 1 2016, 15:12:24) \n[GCC 5.4.0 20160609] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import requests\n>>> requests.get(\"http://localhost:8888/crlftest%0DSet-Cookie:test%3Dtest%3Bdomain%3D.yelp.com\")\n<Response [500]>\n>>> \n```\n\n```\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/tweens.py\", line 22, in excview_tween\n response = handler(request)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid_tm/__init__.py\", line 109, in tm_tween\n reraise(*exc_info)\n File 
\"/data/kinto-dist/lib/python2.7/site-packages/pyramid_tm/__init__.py\", line 88, in tm_tween\n response = handler(request)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/router.py\", line 158, in handle_request\n view_name\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/view.py\", line 547, in _call_view\n response = view_callable(context, request)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/viewderivers.py\", line 413, in viewresult_to_response\n result = view(context, request)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/viewderivers.py\", line 147, in _requestonly_view\n response = view(request)\n File \"/data/kinto-dist/lib/python2.7/site-packages/kinto/core/initialization.py\", line 79, in _redirect_to_version_view\n raise HTTPTemporaryRedirect(redirect)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/httpexceptions.py\", line 493, in __init__\n body_template=body_template, location=location, **kw)\n File \"/data/kinto-dist/lib/python2.7/site-packages/pyramid/httpexceptions.py\", line 221, in __init__\n Response.__init__(self, status=status, **kw)\n File \"/data/kinto-dist/lib/python2.7/site-packages/webob/response.py\", line 153, in __init__\n setattr(self, name, value)\n File \"/data/kinto-dist/lib/python2.7/site-packages/webob/descriptors.py\", line 142, in fset\n raise ValueError('Header value may not contain control characters')\n ValueError: Header value may not contain control characters\",\"uid\":null,\"errno\":null,\"querystring\":\"{}\",\"agent\":\"Amazon CloudFront\",\"method\":\"GET\",\"path\":\"/crlftest%0DSet-Cookie:test%3Dtest%3Bdomain%3D.yelp.com\",\"authn_type\":null},\"Logger\":\"kinto\",\"Type\":[\"Header value may not contain control characters\"],\"Severity\":2}\n```\n\n", "before_files": [{"content": "import re\nimport warnings\nfrom datetime import datetime\nfrom dateutil import parser as dateparser\n\nimport structlog\nfrom pyramid.events import NewRequest, NewResponse\nfrom pyramid.exceptions import ConfigurationError\nfrom pyramid.httpexceptions import (HTTPTemporaryRedirect, HTTPGone,\n HTTPBadRequest)\nfrom pyramid.renderers import JSON as JSONRenderer\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.interfaces import IAuthenticationPolicy\nfrom pyramid.settings import asbool, aslist\nfrom pyramid_multiauth import (MultiAuthenticationPolicy,\n MultiAuthPolicySelected)\ntry:\n import newrelic.agent\nexcept ImportError: # pragma: no cover\n newrelic = None\ntry:\n from werkzeug.contrib.profiler import ProfilerMiddleware\nexcept ImportError: # pragma: no cover\n pass\n\nfrom kinto.core import errors\nfrom kinto.core import utils\nfrom kinto.core import cache\nfrom kinto.core import storage\nfrom kinto.core import permission\nfrom kinto.core.logs import logger\nfrom kinto.core.events import ResourceRead, ResourceChanged, ACTIONS\n\n\ndef setup_request_bound_data(config):\n \"\"\"Attach custom data on request object, and share it with parent\n requests during batch.\"\"\"\n def attach_bound_data(request):\n parent = getattr(request, 'parent', None)\n return parent.bound_data if parent else {}\n\n config.add_request_method(attach_bound_data, name='bound_data', reify=True)\n\n\ndef setup_json_serializer(config):\n import requests\n import webob\n\n # Monkey patch to use ujson\n webob.request.json = utils.json\n requests.models.json = utils.json\n\n # Override json renderer using ujson\n renderer = JSONRenderer(serializer=utils.json_serializer)\n 
config.add_renderer('json', renderer)\n\n\ndef setup_version_redirection(config):\n \"\"\"Add a view which redirects to the current version of the API.\n \"\"\"\n settings = config.get_settings()\n redirect_enabled = settings['version_prefix_redirect_enabled']\n version_prefix_redirection_enabled = asbool(redirect_enabled)\n\n route_prefix = config.route_prefix\n config.registry.route_prefix = route_prefix\n\n # Redirect to the current version of the API if the prefix isn't used.\n # Do not redirect if kinto.version_prefix_redirect_enabled is set to\n # False.\n if not version_prefix_redirection_enabled:\n return\n\n def _redirect_to_version_view(request):\n path = request.matchdict['path']\n querystring = request.url[(request.url.rindex(request.path) +\n len(request.path)):]\n redirect = '/%s/%s%s' % (route_prefix, path, querystring)\n raise HTTPTemporaryRedirect(redirect)\n\n # Disable the route prefix passed by the app.\n config.route_prefix = None\n\n config.add_route(name='redirect_to_version',\n pattern='/{path:(?!v[0-9]+).*}')\n\n config.add_view(view=_redirect_to_version_view,\n route_name='redirect_to_version',\n permission=NO_PERMISSION_REQUIRED)\n\n config.route_prefix = route_prefix\n\n\ndef setup_authentication(config):\n \"\"\"Let pyramid_multiauth manage authentication and authorization\n from configuration.\n \"\"\"\n config.include('pyramid_multiauth')\n\n # Track policy used, for prefixing user_id and for logging.\n def on_policy_selected(event):\n authn_type = event.policy_name.lower()\n event.request.authn_type = authn_type\n event.request.selected_userid = event.userid\n # Add authentication info to context.\n logger.bind(uid=event.userid, authn_type=authn_type)\n\n config.add_subscriber(on_policy_selected, MultiAuthPolicySelected)\n\n\ndef setup_backoff(config):\n \"\"\"Attach HTTP requests/responses objects.\n\n This is useful to attach objects to the request object for easier\n access, and to pre-process responses.\n \"\"\"\n def on_new_response(event):\n # Add backoff in response headers.\n backoff = config.registry.settings['backoff']\n if backoff is not None:\n backoff = utils.encode_header('%s' % backoff)\n event.response.headers['Backoff'] = backoff\n\n config.add_subscriber(on_new_response, NewResponse)\n\n\ndef setup_requests_scheme(config):\n \"\"\"Force server scheme, host and port at the application level.\"\"\"\n settings = config.get_settings()\n\n http_scheme = settings['http_scheme']\n http_host = settings['http_host']\n\n def on_new_request(event):\n if http_scheme:\n event.request.scheme = http_scheme\n if http_host:\n event.request.host = http_host\n\n if http_scheme or http_host:\n config.add_subscriber(on_new_request, NewRequest)\n\n\ndef setup_deprecation(config):\n config.add_tween(\"kinto.core.initialization._end_of_life_tween_factory\")\n\n\ndef _end_of_life_tween_factory(handler, registry):\n \"\"\"Pyramid tween to handle service end of life.\"\"\"\n deprecation_msg = (\"The service you are trying to connect no longer exists\"\n \" at this location.\")\n\n def eos_tween(request):\n eos_date = registry.settings['eos']\n eos_url = registry.settings['eos_url']\n eos_message = registry.settings['eos_message']\n if not eos_date:\n return handler(request)\n\n eos_date = dateparser.parse(eos_date)\n if eos_date > datetime.now():\n code = \"soft-eol\"\n request.response = handler(request)\n else:\n code = \"hard-eol\"\n request.response = errors.http_error(\n HTTPGone(),\n errno=errors.ERRORS.SERVICE_DEPRECATED,\n message=deprecation_msg)\n\n 
errors.send_alert(request, eos_message, url=eos_url, code=code)\n return request.response\n\n return eos_tween\n\n\ndef setup_storage(config):\n settings = config.get_settings()\n\n # Id generators by resource name.\n config.registry.id_generators = {}\n for key, value in settings.items():\n m = re.match(r'^([^_]*)_?id_generator', key)\n if m is None:\n continue\n resource_name = m.group(1)\n id_generator = config.maybe_dotted(value)\n config.registry.id_generators[resource_name] = id_generator()\n\n storage_mod = settings['storage_backend']\n if not storage_mod:\n return\n\n storage_mod = config.maybe_dotted(storage_mod)\n backend = storage_mod.load_from_config(config)\n if not isinstance(backend, storage.StorageBase):\n raise ConfigurationError(\"Invalid storage backend: %s\" % backend)\n config.registry.storage = backend\n\n heartbeat = storage.heartbeat(backend)\n config.registry.heartbeats['storage'] = heartbeat\n\n\ndef setup_permission(config):\n settings = config.get_settings()\n permission_mod = settings['permission_backend']\n if not permission_mod:\n return\n\n permission_mod = config.maybe_dotted(permission_mod)\n backend = permission_mod.load_from_config(config)\n if not isinstance(backend, permission.PermissionBase):\n raise ConfigurationError(\"Invalid permission backend: %s\" % backend)\n config.registry.permission = backend\n\n heartbeat = permission.heartbeat(backend)\n config.registry.heartbeats['permission'] = heartbeat\n\n\ndef setup_cache(config):\n settings = config.get_settings()\n cache_mod = settings['cache_backend']\n if not cache_mod:\n return\n\n cache_mod = config.maybe_dotted(cache_mod)\n backend = cache_mod.load_from_config(config)\n if not isinstance(backend, cache.CacheBase):\n raise ConfigurationError(\"Invalid cache backend: %s\" % backend)\n config.registry.cache = backend\n\n heartbeat = cache.heartbeat(backend)\n config.registry.heartbeats['cache'] = heartbeat\n\n\ndef setup_statsd(config):\n settings = config.get_settings()\n config.registry.statsd = None\n\n if settings['statsd_url']:\n statsd_mod = settings['statsd_backend']\n statsd_mod = config.maybe_dotted(statsd_mod)\n client = statsd_mod.load_from_config(config)\n\n config.registry.statsd = client\n\n client.watch_execution_time(config.registry.cache, prefix='backend')\n client.watch_execution_time(config.registry.storage, prefix='backend')\n client.watch_execution_time(config.registry.permission, prefix='backend')\n\n # Commit so that configured policy can be queried.\n config.commit()\n policy = config.registry.queryUtility(IAuthenticationPolicy)\n if isinstance(policy, MultiAuthenticationPolicy):\n for name, subpolicy in policy.get_policies():\n client.watch_execution_time(subpolicy,\n prefix='authentication',\n classname=name)\n else:\n client.watch_execution_time(policy, prefix='authentication')\n\n def on_new_response(event):\n request = event.request\n\n # Count unique users.\n user_id = request.prefixed_userid\n if user_id:\n client.count('users', unique=user_id)\n\n # Count authentication verifications.\n if hasattr(request, 'authn_type'):\n client.count('authn_type.%s' % request.authn_type)\n\n # Count view calls.\n service = request.current_service\n if service:\n client.count('view.%s.%s' % (service.name, request.method))\n\n config.add_subscriber(on_new_response, NewResponse)\n\n return client\n\n\ndef install_middlewares(app, settings):\n \"Install a set of middlewares defined in the ini file on the given app.\"\n # Setup new-relic.\n if settings.get('newrelic_config'):\n 
ini_file = settings['newrelic_config']\n env = settings['newrelic_env']\n newrelic.agent.initialize(ini_file, env)\n app = newrelic.agent.WSGIApplicationWrapper(app)\n\n # Adds the Werkzeug profiler.\n if asbool(settings.get('profiler_enabled')):\n profile_dir = settings['profiler_dir']\n app = ProfilerMiddleware(app, profile_dir=profile_dir,\n restrictions=('*kinto.core*'))\n\n return app\n\n\ndef setup_logging(config):\n \"\"\"Setup structured logging, and emit `request.summary` event on each\n request, as recommanded by Mozilla Services standard:\n\n * https://mana.mozilla.org/wiki/display/CLOUDSERVICES/Logging+Standard\n * http://12factor.net/logs\n \"\"\"\n settings = config.get_settings()\n\n renderer_klass = config.maybe_dotted(settings['logging_renderer'])\n renderer = renderer_klass(settings)\n\n structlog.configure(\n # Share the logger context by thread.\n context_class=structlog.threadlocal.wrap_dict(dict),\n # Integrate with Pyramid logging facilities.\n logger_factory=structlog.stdlib.LoggerFactory(),\n wrapper_class=structlog.stdlib.BoundLogger,\n # Setup logger output format.\n processors=[\n structlog.stdlib.filter_by_level,\n structlog.processors.format_exc_info,\n renderer,\n ])\n\n def on_new_request(event):\n request = event.request\n # Save the time the request was received by the server.\n event.request._received_at = utils.msec_time()\n\n try:\n # Pyramid fails if the URL contains invalid UTF-8 characters.\n request_path = event.request.path\n except UnicodeDecodeError:\n raise errors.http_error(\n HTTPBadRequest(),\n errno=errors.ERRORS.INVALID_PARAMETERS,\n message=\"Invalid URL path.\")\n\n # New logger context, with infos for request summary logger.\n logger.new(agent=request.headers.get('User-Agent'),\n path=request_path,\n method=request.method,\n querystring=dict(request.GET),\n lang=request.headers.get('Accept-Language'),\n uid=None,\n authn_type=None,\n errno=None)\n\n config.add_subscriber(on_new_request, NewRequest)\n\n def on_new_response(event):\n response = event.response\n request = event.request\n\n # Compute the request processing time in msec (-1 if unknown)\n current = utils.msec_time()\n duration = current - getattr(request, '_received_at', current - 1)\n isotimestamp = datetime.fromtimestamp(current/1000).isoformat()\n\n # Bind infos for request summary logger.\n logger.bind(time=isotimestamp,\n code=response.status_code,\n t=duration)\n\n # Ouput application request summary.\n if not hasattr(request, 'parent'):\n logger.info('request.summary')\n\n config.add_subscriber(on_new_response, NewResponse)\n\n\nclass EventActionFilter(object):\n def __init__(self, actions, config):\n actions = ACTIONS.from_string_list(actions)\n self.actions = [action.value for action in actions]\n\n def phash(self):\n return 'for_actions = %s' % (','.join(self.actions))\n\n def __call__(self, event):\n action = event.payload.get('action')\n return not action or action in self.actions\n\n\nclass EventResourceFilter(object):\n def __init__(self, resources, config):\n self.resources = resources\n\n def phash(self):\n return 'for_resources = %s' % (','.join(self.resources))\n\n def __call__(self, event):\n resource = event.payload.get('resource_name')\n return not resource or not self.resources or resource in self.resources\n\n\ndef setup_listeners(config):\n # Register basic subscriber predicates, to filter events.\n config.add_subscriber_predicate('for_actions', EventActionFilter)\n config.add_subscriber_predicate('for_resources', EventResourceFilter)\n\n write_actions 
= (ACTIONS.CREATE, ACTIONS.UPDATE, ACTIONS.DELETE)\n settings = config.get_settings()\n project_name = settings.get('project_name', '')\n listeners = aslist(settings['event_listeners'])\n\n for name in listeners:\n logger.info('Setting up %r listener' % name)\n prefix = 'event_listeners.%s.' % name\n\n try:\n listener_mod = config.maybe_dotted(name)\n prefix = 'event_listeners.%s.' % name.split('.')[-1]\n listener = listener_mod.load_from_config(config, prefix)\n except (ImportError, AttributeError):\n module_setting = prefix + \"use\"\n # Read from ENV or settings.\n module_value = utils.read_env(project_name + \".\" + module_setting,\n settings.get(module_setting))\n listener_mod = config.maybe_dotted(module_value)\n listener = listener_mod.load_from_config(config, prefix)\n\n # If StatsD is enabled, monitor execution time of listeners.\n if getattr(config.registry, \"statsd\", None):\n statsd_client = config.registry.statsd\n key = 'listeners.%s' % name\n listener = statsd_client.timer(key)(listener.__call__)\n\n # Optional filter by event action.\n actions_setting = prefix + \"actions\"\n # Read from ENV or settings.\n actions_value = utils.read_env(project_name + \".\" + actions_setting,\n settings.get(actions_setting, \"\"))\n actions = aslist(actions_value)\n if len(actions) > 0:\n actions = ACTIONS.from_string_list(actions)\n else:\n actions = write_actions\n\n # Optional filter by event resource name.\n resource_setting = prefix + \"resources\"\n # Read from ENV or settings.\n resource_value = utils.read_env(project_name + \".\" + resource_setting,\n settings.get(resource_setting, \"\"))\n resource_names = aslist(resource_value)\n\n # Pyramid event predicates.\n options = dict(for_actions=actions, for_resources=resource_names)\n\n if ACTIONS.READ in actions:\n config.add_subscriber(listener, ResourceRead, **options)\n if len(actions) == 1:\n return\n\n config.add_subscriber(listener, ResourceChanged, **options)\n\n\ndef load_default_settings(config, default_settings):\n \"\"\"Read settings provided in Paste ini file, set default values and\n replace if defined as environment variable.\n \"\"\"\n settings = config.get_settings()\n\n project_name = settings['project_name']\n\n def _prefixed_keys(key):\n unprefixed = key\n if key.startswith('kinto.') or key.startswith(project_name + '.'):\n unprefixed = key.split('.', 1)[1]\n project_prefix = project_name + '.' + unprefixed\n kinto_prefix = 'kinto.' 
+ unprefixed\n return unprefixed, project_prefix, kinto_prefix\n\n # Fill settings with default values if not defined.\n for key, default_value in sorted(default_settings.items()):\n unprefixed, project_prefix, kinto_prefix = keys = _prefixed_keys(key)\n is_defined = len(set(settings.keys()).intersection(set(keys))) > 0\n if not is_defined:\n settings[unprefixed] = default_value\n\n for key, value in sorted(settings.items()):\n unprefixed, project_prefix, kinto_prefix = keys = _prefixed_keys(key)\n\n # Fail if not only one is defined.\n defined = set(settings.keys()).intersection(set(keys))\n distinct_values = set([str(settings[d]) for d in defined])\n\n if len(defined) > 1 and len(distinct_values) > 1:\n names = \"', '\".join(defined)\n raise ValueError(\"Settings '%s' are in conflict.\" % names)\n\n # Maintain backwards compatibility with old settings files that\n # have backend settings like cliquet.foo (which is now\n # kinto.core.foo).\n unprefixed, _, _ = _prefixed_keys(key)\n CONTAIN_CLIQUET_MODULE_NAMES = [\n 'storage_backend',\n 'cache_backend',\n 'permission_backend',\n 'logging_renderer',\n ]\n if unprefixed in CONTAIN_CLIQUET_MODULE_NAMES and \\\n value.startswith('cliquet.'):\n new_value = value.replace('cliquet.', 'kinto.core.')\n logger.warn(\n \"Backend settings referring to cliquet are DEPRECATED. \"\n \"Please update your {} setting to {} (was: {}).\".format(\n key, new_value, value))\n value = new_value\n\n # Override settings from OS env values.\n # e.g. HTTP_PORT, READINGLIST_HTTP_PORT, KINTO_HTTP_PORT\n from_env = utils.read_env(unprefixed, value)\n from_env = utils.read_env(project_prefix, from_env)\n from_env = utils.read_env(kinto_prefix, from_env)\n\n settings[unprefixed] = from_env\n\n config.add_settings(settings)\n\n\ndef initialize(config, version=None, project_name='', default_settings=None):\n \"\"\"Initialize kinto.core with the given configuration, version and project\n name.\n\n This will basically include kinto.core in Pyramid and set route prefix\n based on the specified version.\n\n :param config: Pyramid configuration\n :type config: ~pyramid:pyramid.config.Configurator\n :param str version: Current project version (e.g. 
'0.0.1') if not defined\n in application settings.\n :param str project_name: Project name if not defined\n in application settings.\n :param dict default_settings: Override kinto.core default settings values.\n \"\"\"\n from kinto.core import DEFAULT_SETTINGS\n\n settings = config.get_settings()\n\n project_name = settings.pop('kinto.project_name',\n settings.get('project_name')) or project_name\n settings['project_name'] = project_name\n if not project_name:\n warnings.warn('No value specified for `project_name`')\n\n kinto_core_defaults = DEFAULT_SETTINGS.copy()\n\n if default_settings:\n kinto_core_defaults.update(default_settings)\n\n load_default_settings(config, kinto_core_defaults)\n\n http_scheme = settings['http_scheme']\n if http_scheme != 'https':\n warnings.warn('HTTPS is not enabled')\n\n # Override project version from settings.\n project_version = settings.get('project_version') or version\n if not project_version:\n error_msg = \"Invalid project version: %s\" % project_version\n raise ConfigurationError(error_msg)\n settings['project_version'] = project_version = str(project_version)\n\n # HTTP API version.\n http_api_version = settings.get('http_api_version')\n if http_api_version is None:\n # The API version is derivated from the module version if not provided.\n http_api_version = '.'.join(project_version.split('.')[0:2])\n settings['http_api_version'] = http_api_version = str(http_api_version)\n api_version = 'v%s' % http_api_version.split('.')[0]\n\n # Include kinto.core views with the correct api version prefix.\n config.include(\"kinto.core\", route_prefix=api_version)\n config.route_prefix = api_version\n", "path": "kinto/core/initialization.py"}], "after_files": [{"content": "import re\nimport warnings\nfrom datetime import datetime\nfrom dateutil import parser as dateparser\n\nimport structlog\nfrom pyramid.events import NewRequest, NewResponse\nfrom pyramid.exceptions import ConfigurationError\nfrom pyramid.httpexceptions import (HTTPTemporaryRedirect, HTTPGone,\n HTTPBadRequest)\nfrom pyramid.renderers import JSON as JSONRenderer\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.interfaces import IAuthenticationPolicy\nfrom pyramid.settings import asbool, aslist\nfrom pyramid_multiauth import (MultiAuthenticationPolicy,\n MultiAuthPolicySelected)\ntry:\n import newrelic.agent\nexcept ImportError: # pragma: no cover\n newrelic = None\ntry:\n from werkzeug.contrib.profiler import ProfilerMiddleware\nexcept ImportError: # pragma: no cover\n pass\n\nfrom kinto.core import errors\nfrom kinto.core import utils\nfrom kinto.core import cache\nfrom kinto.core import storage\nfrom kinto.core import permission\nfrom kinto.core.logs import logger\nfrom kinto.core.events import ResourceRead, ResourceChanged, ACTIONS\n\n\ndef setup_request_bound_data(config):\n \"\"\"Attach custom data on request object, and share it with parent\n requests during batch.\"\"\"\n def attach_bound_data(request):\n parent = getattr(request, 'parent', None)\n return parent.bound_data if parent else {}\n\n config.add_request_method(attach_bound_data, name='bound_data', reify=True)\n\n\ndef setup_json_serializer(config):\n import requests\n import webob\n\n # Monkey patch to use ujson\n webob.request.json = utils.json\n requests.models.json = utils.json\n\n # Override json renderer using ujson\n renderer = JSONRenderer(serializer=utils.json_serializer)\n config.add_renderer('json', renderer)\n\n\ndef setup_version_redirection(config):\n \"\"\"Add a view which redirects to the 
current version of the API.\n \"\"\"\n settings = config.get_settings()\n redirect_enabled = settings['version_prefix_redirect_enabled']\n version_prefix_redirection_enabled = asbool(redirect_enabled)\n\n route_prefix = config.route_prefix\n config.registry.route_prefix = route_prefix\n\n # Redirect to the current version of the API if the prefix isn't used.\n # Do not redirect if kinto.version_prefix_redirect_enabled is set to\n # False.\n if not version_prefix_redirection_enabled:\n return\n\n def _redirect_to_version_view(request):\n path = request.matchdict['path']\n querystring = request.url[(request.url.rindex(request.path) +\n len(request.path)):]\n redirect = '/%s/%s%s' % (route_prefix, path, querystring)\n raise HTTPTemporaryRedirect(redirect)\n\n # Disable the route prefix passed by the app.\n config.route_prefix = None\n\n config.add_route(name='redirect_to_version',\n pattern=r'/{path:(?!v[0-9]+)[^\\r]*}')\n\n config.add_view(view=_redirect_to_version_view,\n route_name='redirect_to_version',\n permission=NO_PERMISSION_REQUIRED)\n\n config.route_prefix = route_prefix\n\n\ndef setup_authentication(config):\n \"\"\"Let pyramid_multiauth manage authentication and authorization\n from configuration.\n \"\"\"\n config.include('pyramid_multiauth')\n\n # Track policy used, for prefixing user_id and for logging.\n def on_policy_selected(event):\n authn_type = event.policy_name.lower()\n event.request.authn_type = authn_type\n event.request.selected_userid = event.userid\n # Add authentication info to context.\n logger.bind(uid=event.userid, authn_type=authn_type)\n\n config.add_subscriber(on_policy_selected, MultiAuthPolicySelected)\n\n\ndef setup_backoff(config):\n \"\"\"Attach HTTP requests/responses objects.\n\n This is useful to attach objects to the request object for easier\n access, and to pre-process responses.\n \"\"\"\n def on_new_response(event):\n # Add backoff in response headers.\n backoff = config.registry.settings['backoff']\n if backoff is not None:\n backoff = utils.encode_header('%s' % backoff)\n event.response.headers['Backoff'] = backoff\n\n config.add_subscriber(on_new_response, NewResponse)\n\n\ndef setup_requests_scheme(config):\n \"\"\"Force server scheme, host and port at the application level.\"\"\"\n settings = config.get_settings()\n\n http_scheme = settings['http_scheme']\n http_host = settings['http_host']\n\n def on_new_request(event):\n if http_scheme:\n event.request.scheme = http_scheme\n if http_host:\n event.request.host = http_host\n\n if http_scheme or http_host:\n config.add_subscriber(on_new_request, NewRequest)\n\n\ndef setup_deprecation(config):\n config.add_tween(\"kinto.core.initialization._end_of_life_tween_factory\")\n\n\ndef _end_of_life_tween_factory(handler, registry):\n \"\"\"Pyramid tween to handle service end of life.\"\"\"\n deprecation_msg = (\"The service you are trying to connect no longer exists\"\n \" at this location.\")\n\n def eos_tween(request):\n eos_date = registry.settings['eos']\n eos_url = registry.settings['eos_url']\n eos_message = registry.settings['eos_message']\n if not eos_date:\n return handler(request)\n\n eos_date = dateparser.parse(eos_date)\n if eos_date > datetime.now():\n code = \"soft-eol\"\n request.response = handler(request)\n else:\n code = \"hard-eol\"\n request.response = errors.http_error(\n HTTPGone(),\n errno=errors.ERRORS.SERVICE_DEPRECATED,\n message=deprecation_msg)\n\n errors.send_alert(request, eos_message, url=eos_url, code=code)\n return request.response\n\n return eos_tween\n\n\ndef 
setup_storage(config):\n settings = config.get_settings()\n\n # Id generators by resource name.\n config.registry.id_generators = {}\n for key, value in settings.items():\n m = re.match(r'^([^_]*)_?id_generator', key)\n if m is None:\n continue\n resource_name = m.group(1)\n id_generator = config.maybe_dotted(value)\n config.registry.id_generators[resource_name] = id_generator()\n\n storage_mod = settings['storage_backend']\n if not storage_mod:\n return\n\n storage_mod = config.maybe_dotted(storage_mod)\n backend = storage_mod.load_from_config(config)\n if not isinstance(backend, storage.StorageBase):\n raise ConfigurationError(\"Invalid storage backend: %s\" % backend)\n config.registry.storage = backend\n\n heartbeat = storage.heartbeat(backend)\n config.registry.heartbeats['storage'] = heartbeat\n\n\ndef setup_permission(config):\n settings = config.get_settings()\n permission_mod = settings['permission_backend']\n if not permission_mod:\n return\n\n permission_mod = config.maybe_dotted(permission_mod)\n backend = permission_mod.load_from_config(config)\n if not isinstance(backend, permission.PermissionBase):\n raise ConfigurationError(\"Invalid permission backend: %s\" % backend)\n config.registry.permission = backend\n\n heartbeat = permission.heartbeat(backend)\n config.registry.heartbeats['permission'] = heartbeat\n\n\ndef setup_cache(config):\n settings = config.get_settings()\n cache_mod = settings['cache_backend']\n if not cache_mod:\n return\n\n cache_mod = config.maybe_dotted(cache_mod)\n backend = cache_mod.load_from_config(config)\n if not isinstance(backend, cache.CacheBase):\n raise ConfigurationError(\"Invalid cache backend: %s\" % backend)\n config.registry.cache = backend\n\n heartbeat = cache.heartbeat(backend)\n config.registry.heartbeats['cache'] = heartbeat\n\n\ndef setup_statsd(config):\n settings = config.get_settings()\n config.registry.statsd = None\n\n if settings['statsd_url']:\n statsd_mod = settings['statsd_backend']\n statsd_mod = config.maybe_dotted(statsd_mod)\n client = statsd_mod.load_from_config(config)\n\n config.registry.statsd = client\n\n client.watch_execution_time(config.registry.cache, prefix='backend')\n client.watch_execution_time(config.registry.storage, prefix='backend')\n client.watch_execution_time(config.registry.permission, prefix='backend')\n\n # Commit so that configured policy can be queried.\n config.commit()\n policy = config.registry.queryUtility(IAuthenticationPolicy)\n if isinstance(policy, MultiAuthenticationPolicy):\n for name, subpolicy in policy.get_policies():\n client.watch_execution_time(subpolicy,\n prefix='authentication',\n classname=name)\n else:\n client.watch_execution_time(policy, prefix='authentication')\n\n def on_new_response(event):\n request = event.request\n\n # Count unique users.\n user_id = request.prefixed_userid\n if user_id:\n client.count('users', unique=user_id)\n\n # Count authentication verifications.\n if hasattr(request, 'authn_type'):\n client.count('authn_type.%s' % request.authn_type)\n\n # Count view calls.\n service = request.current_service\n if service:\n client.count('view.%s.%s' % (service.name, request.method))\n\n config.add_subscriber(on_new_response, NewResponse)\n\n return client\n\n\ndef install_middlewares(app, settings):\n \"Install a set of middlewares defined in the ini file on the given app.\"\n # Setup new-relic.\n if settings.get('newrelic_config'):\n ini_file = settings['newrelic_config']\n env = settings['newrelic_env']\n newrelic.agent.initialize(ini_file, env)\n app = 
newrelic.agent.WSGIApplicationWrapper(app)\n\n # Adds the Werkzeug profiler.\n if asbool(settings.get('profiler_enabled')):\n profile_dir = settings['profiler_dir']\n app = ProfilerMiddleware(app, profile_dir=profile_dir,\n restrictions=('*kinto.core*'))\n\n return app\n\n\ndef setup_logging(config):\n \"\"\"Setup structured logging, and emit `request.summary` event on each\n request, as recommanded by Mozilla Services standard:\n\n * https://mana.mozilla.org/wiki/display/CLOUDSERVICES/Logging+Standard\n * http://12factor.net/logs\n \"\"\"\n settings = config.get_settings()\n\n renderer_klass = config.maybe_dotted(settings['logging_renderer'])\n renderer = renderer_klass(settings)\n\n structlog.configure(\n # Share the logger context by thread.\n context_class=structlog.threadlocal.wrap_dict(dict),\n # Integrate with Pyramid logging facilities.\n logger_factory=structlog.stdlib.LoggerFactory(),\n wrapper_class=structlog.stdlib.BoundLogger,\n # Setup logger output format.\n processors=[\n structlog.stdlib.filter_by_level,\n structlog.processors.format_exc_info,\n renderer,\n ])\n\n def on_new_request(event):\n request = event.request\n # Save the time the request was received by the server.\n event.request._received_at = utils.msec_time()\n\n try:\n # Pyramid fails if the URL contains invalid UTF-8 characters.\n request_path = event.request.path\n except UnicodeDecodeError:\n raise errors.http_error(\n HTTPBadRequest(),\n errno=errors.ERRORS.INVALID_PARAMETERS,\n message=\"Invalid URL path.\")\n\n # New logger context, with infos for request summary logger.\n logger.new(agent=request.headers.get('User-Agent'),\n path=request_path,\n method=request.method,\n querystring=dict(request.GET),\n lang=request.headers.get('Accept-Language'),\n uid=None,\n authn_type=None,\n errno=None)\n\n config.add_subscriber(on_new_request, NewRequest)\n\n def on_new_response(event):\n response = event.response\n request = event.request\n\n # Compute the request processing time in msec (-1 if unknown)\n current = utils.msec_time()\n duration = current - getattr(request, '_received_at', current - 1)\n isotimestamp = datetime.fromtimestamp(current/1000).isoformat()\n\n # Bind infos for request summary logger.\n logger.bind(time=isotimestamp,\n code=response.status_code,\n t=duration)\n\n # Ouput application request summary.\n if not hasattr(request, 'parent'):\n logger.info('request.summary')\n\n config.add_subscriber(on_new_response, NewResponse)\n\n\nclass EventActionFilter(object):\n def __init__(self, actions, config):\n actions = ACTIONS.from_string_list(actions)\n self.actions = [action.value for action in actions]\n\n def phash(self):\n return 'for_actions = %s' % (','.join(self.actions))\n\n def __call__(self, event):\n action = event.payload.get('action')\n return not action or action in self.actions\n\n\nclass EventResourceFilter(object):\n def __init__(self, resources, config):\n self.resources = resources\n\n def phash(self):\n return 'for_resources = %s' % (','.join(self.resources))\n\n def __call__(self, event):\n resource = event.payload.get('resource_name')\n return not resource or not self.resources or resource in self.resources\n\n\ndef setup_listeners(config):\n # Register basic subscriber predicates, to filter events.\n config.add_subscriber_predicate('for_actions', EventActionFilter)\n config.add_subscriber_predicate('for_resources', EventResourceFilter)\n\n write_actions = (ACTIONS.CREATE, ACTIONS.UPDATE, ACTIONS.DELETE)\n settings = config.get_settings()\n project_name = 
settings.get('project_name', '')\n listeners = aslist(settings['event_listeners'])\n\n for name in listeners:\n logger.info('Setting up %r listener' % name)\n prefix = 'event_listeners.%s.' % name\n\n try:\n listener_mod = config.maybe_dotted(name)\n prefix = 'event_listeners.%s.' % name.split('.')[-1]\n listener = listener_mod.load_from_config(config, prefix)\n except (ImportError, AttributeError):\n module_setting = prefix + \"use\"\n # Read from ENV or settings.\n module_value = utils.read_env(project_name + \".\" + module_setting,\n settings.get(module_setting))\n listener_mod = config.maybe_dotted(module_value)\n listener = listener_mod.load_from_config(config, prefix)\n\n # If StatsD is enabled, monitor execution time of listeners.\n if getattr(config.registry, \"statsd\", None):\n statsd_client = config.registry.statsd\n key = 'listeners.%s' % name\n listener = statsd_client.timer(key)(listener.__call__)\n\n # Optional filter by event action.\n actions_setting = prefix + \"actions\"\n # Read from ENV or settings.\n actions_value = utils.read_env(project_name + \".\" + actions_setting,\n settings.get(actions_setting, \"\"))\n actions = aslist(actions_value)\n if len(actions) > 0:\n actions = ACTIONS.from_string_list(actions)\n else:\n actions = write_actions\n\n # Optional filter by event resource name.\n resource_setting = prefix + \"resources\"\n # Read from ENV or settings.\n resource_value = utils.read_env(project_name + \".\" + resource_setting,\n settings.get(resource_setting, \"\"))\n resource_names = aslist(resource_value)\n\n # Pyramid event predicates.\n options = dict(for_actions=actions, for_resources=resource_names)\n\n if ACTIONS.READ in actions:\n config.add_subscriber(listener, ResourceRead, **options)\n if len(actions) == 1:\n return\n\n config.add_subscriber(listener, ResourceChanged, **options)\n\n\ndef load_default_settings(config, default_settings):\n \"\"\"Read settings provided in Paste ini file, set default values and\n replace if defined as environment variable.\n \"\"\"\n settings = config.get_settings()\n\n project_name = settings['project_name']\n\n def _prefixed_keys(key):\n unprefixed = key\n if key.startswith('kinto.') or key.startswith(project_name + '.'):\n unprefixed = key.split('.', 1)[1]\n project_prefix = project_name + '.' + unprefixed\n kinto_prefix = 'kinto.' 
+ unprefixed\n return unprefixed, project_prefix, kinto_prefix\n\n # Fill settings with default values if not defined.\n for key, default_value in sorted(default_settings.items()):\n unprefixed, project_prefix, kinto_prefix = keys = _prefixed_keys(key)\n is_defined = len(set(settings.keys()).intersection(set(keys))) > 0\n if not is_defined:\n settings[unprefixed] = default_value\n\n for key, value in sorted(settings.items()):\n unprefixed, project_prefix, kinto_prefix = keys = _prefixed_keys(key)\n\n # Fail if not only one is defined.\n defined = set(settings.keys()).intersection(set(keys))\n distinct_values = set([str(settings[d]) for d in defined])\n\n if len(defined) > 1 and len(distinct_values) > 1:\n names = \"', '\".join(defined)\n raise ValueError(\"Settings '%s' are in conflict.\" % names)\n\n # Maintain backwards compatibility with old settings files that\n # have backend settings like cliquet.foo (which is now\n # kinto.core.foo).\n unprefixed, _, _ = _prefixed_keys(key)\n CONTAIN_CLIQUET_MODULE_NAMES = [\n 'storage_backend',\n 'cache_backend',\n 'permission_backend',\n 'logging_renderer',\n ]\n if unprefixed in CONTAIN_CLIQUET_MODULE_NAMES and \\\n value.startswith('cliquet.'):\n new_value = value.replace('cliquet.', 'kinto.core.')\n logger.warn(\n \"Backend settings referring to cliquet are DEPRECATED. \"\n \"Please update your {} setting to {} (was: {}).\".format(\n key, new_value, value))\n value = new_value\n\n # Override settings from OS env values.\n # e.g. HTTP_PORT, READINGLIST_HTTP_PORT, KINTO_HTTP_PORT\n from_env = utils.read_env(unprefixed, value)\n from_env = utils.read_env(project_prefix, from_env)\n from_env = utils.read_env(kinto_prefix, from_env)\n\n settings[unprefixed] = from_env\n\n config.add_settings(settings)\n\n\ndef initialize(config, version=None, project_name='', default_settings=None):\n \"\"\"Initialize kinto.core with the given configuration, version and project\n name.\n\n This will basically include kinto.core in Pyramid and set route prefix\n based on the specified version.\n\n :param config: Pyramid configuration\n :type config: ~pyramid:pyramid.config.Configurator\n :param str version: Current project version (e.g. 
'0.0.1') if not defined\n in application settings.\n :param str project_name: Project name if not defined\n in application settings.\n :param dict default_settings: Override kinto.core default settings values.\n \"\"\"\n from kinto.core import DEFAULT_SETTINGS\n\n settings = config.get_settings()\n\n project_name = settings.pop('kinto.project_name',\n settings.get('project_name')) or project_name\n settings['project_name'] = project_name\n if not project_name:\n warnings.warn('No value specified for `project_name`')\n\n kinto_core_defaults = DEFAULT_SETTINGS.copy()\n\n if default_settings:\n kinto_core_defaults.update(default_settings)\n\n load_default_settings(config, kinto_core_defaults)\n\n http_scheme = settings['http_scheme']\n if http_scheme != 'https':\n warnings.warn('HTTPS is not enabled')\n\n # Override project version from settings.\n project_version = settings.get('project_version') or version\n if not project_version:\n error_msg = \"Invalid project version: %s\" % project_version\n raise ConfigurationError(error_msg)\n settings['project_version'] = project_version = str(project_version)\n\n # HTTP API version.\n http_api_version = settings.get('http_api_version')\n if http_api_version is None:\n # The API version is derivated from the module version if not provided.\n http_api_version = '.'.join(project_version.split('.')[0:2])\n settings['http_api_version'] = http_api_version = str(http_api_version)\n api_version = 'v%s' % http_api_version.split('.')[0]\n\n # Include kinto.core views with the correct api version prefix.\n config.include(\"kinto.core\", route_prefix=api_version)\n config.route_prefix = api_version\n", "path": "kinto/core/initialization.py"}]} |
gh_patches_debug_1057 | rasdani/github-patches | git_diff | Kinto__kinto-2027 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash with "in <string>' requires string as left operand, not int"
```
ValidationError: 'minVersion' is a required property
Failed validating 'required' in schema['properties']['versionRange']['items']['properties']['targetApplication']['items']:
{'additionalProperties': False,
'description': 'Target application',
'properties': {'guid': {'description': 'The application unique '
'identifier.',
'enum': ['{ec8030f7-c20a-464f-9b0e-13a3a9e97384}',
'{3550f703-e582-4d05-9a08-453d09bdfdc6}',
'{92650c4d-4b8e-4d2a-b7eb-24ecf4f6b63a}',
'{aa3c5121-dab2-40e2-81ca-7ea25febc110}'],
'enumNames': ['Firefox',
'Thunderbird',
'Seamonkey',
'Android'],
'title': 'Application id',
'type': 'string'},
'maxVersion': {'$ref': '#/definitions/maxVersion'},
'minVersion': {'$ref': '#/definitions/minVersion'}},
'required': ['guid', 'minVersion', 'maxVersion'],
'title': 'Target application',
'type': 'object'}
On instance['versionRange'][0]['targetApplication'][0]:
{'guid': 'ec8030f7-c20a-464f-9b0e-13a3a9e97384', 'maxVersion': '57.0.*'}
File "kinto/views/records.py", line 73, in process_record
jsonschema.validate(data, schema)
File "jsonschema/validators.py", line 541, in validate
cls(schema, *args, **kwargs).validate(instance)
File "jsonschema/validators.py", line 130, in validate
raise error
TypeError: 'in <string>' requires string as left operand, not int
(11 additional frame(s) were not displayed)
...
File "cornice/service.py", line 494, in wrapper
response = view_()
File "kinto/core/resource/__init__.py", line 463, in put
new_record = self.process_record(post_record, old=existing)
File "kinto/views/records.py", line 81, in process_record
raise_invalid(self.request, name=field, description=e.message)
File "kinto/core/errors.py", line 178, in raise_invalid
response = json_error_handler(request)
File "kinto/core/errors.py", line 149, in json_error_handler
if name in description:
```
--- END ISSUE ---
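The failing comparison in the last traceback frame can be reproduced in isolation. The sketch below is illustrative only: the way `kinto/views/records.py` derives `field` is not shown in the traceback, so popping `e.path` here is an assumption, but it matches the observed behaviour where a list index (an `int`) reaches `raise_invalid(name=...)` and is then tested for membership in a string:

```python
# Minimal sketch of the failure mode; the schema and values are hypothetical, not Kinto code.
import jsonschema

schema = {"type": "array", "items": {"type": "object", "required": ["minVersion"]}}

try:
    jsonschema.validate([{"maxVersion": "57.0.*"}], schema)
except jsonschema.ValidationError as e:
    name = e.path.pop()       # deque([0]).pop() -> 0, the list index, an int
    description = e.message   # "'minVersion' is a required property"
    name in description       # TypeError: 'in <string>' requires string as left operand, not int
```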
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/core/errors.py`
Content:
```
1 import colander
2 import logging
3 from pyramid import httpexceptions
4 from enum import Enum
5
6 from kinto.core.schema import Any
7 from kinto.core.utils import json, reapply_cors
8
9
10 class ERRORS(Enum):
11 """Predefined errors as specified by the API.
12
13 +-------------+-------+------------------------------------------------+
14 | Status code | Errno | Description |
15 +=============+=======+================================================+
16 | 401 | 104 | Missing Authorization Token |
17 +-------------+-------+------------------------------------------------+
18 | 401 | 105 | Invalid Authorization Token |
19 +-------------+-------+------------------------------------------------+
20 | 400 | 106 | request body was not valid JSON |
21 +-------------+-------+------------------------------------------------+
22 | 400 | 107 | invalid request parameter |
23 +-------------+-------+------------------------------------------------+
24 | 400 | 108 | missing request parameter |
25 +-------------+-------+------------------------------------------------+
26 | 400 | 109 | invalid posted data |
27 +-------------+-------+------------------------------------------------+
28 | 404 | 110 | Invalid Token / id |
29 +-------------+-------+------------------------------------------------+
30 | 404 | 111 | Missing Token / id |
31 +-------------+-------+------------------------------------------------+
32 | 411 | 112 | Content-Length header was not provided |
33 +-------------+-------+------------------------------------------------+
34 | 413 | 113 | Request body too large |
35 +-------------+-------+------------------------------------------------+
36 | 412 | 114 | Resource was modified meanwhile |
37 +-------------+-------+------------------------------------------------+
38 | 405 | 115 | Method not allowed on this end point |
39 +-------------+-------+------------------------------------------------+
40 | 404 | 116 | Requested version not available on this server |
41 +-------------+-------+------------------------------------------------+
42 | 429 | 117 | Client has sent too many requests |
43 +-------------+-------+------------------------------------------------+
44 | 403 | 121 | Resource's access forbidden for this user |
45 +-------------+-------+------------------------------------------------+
46 | 409 | 122 | Another resource violates constraint |
47 +-------------+-------+------------------------------------------------+
48 | 500 | 999 | Internal Server Error |
49 +-------------+-------+------------------------------------------------+
50 | 503 | 201 | Service Temporary unavailable due to high load |
51 +-------------+-------+------------------------------------------------+
52 | 410 | 202 | Service deprecated |
53 +-------------+-------+------------------------------------------------+
54 """
55
56 MISSING_AUTH_TOKEN = 104
57 INVALID_AUTH_TOKEN = 105
58 BADJSON = 106
59 INVALID_PARAMETERS = 107
60 MISSING_PARAMETERS = 108
61 INVALID_POSTED_DATA = 109
62 INVALID_RESOURCE_ID = 110
63 MISSING_RESOURCE = 111
64 MISSING_CONTENT_LENGTH = 112
65 REQUEST_TOO_LARGE = 113
66 MODIFIED_MEANWHILE = 114
67 METHOD_NOT_ALLOWED = 115
68 VERSION_NOT_AVAILABLE = 116
69 CLIENT_REACHED_CAPACITY = 117
70 FORBIDDEN = 121
71 CONSTRAINT_VIOLATED = 122
72 UNDEFINED = 999
73 BACKEND = 201
74 SERVICE_DEPRECATED = 202
75
76
77 class ErrorSchema(colander.MappingSchema):
78 """Payload schema for Kinto errors."""
79
80 code = colander.SchemaNode(colander.Integer())
81 errno = colander.SchemaNode(colander.Integer())
82 error = colander.SchemaNode(colander.String())
83 message = colander.SchemaNode(colander.String(), missing=colander.drop)
84 info = colander.SchemaNode(colander.String(), missing=colander.drop)
85 details = colander.SchemaNode(Any(), missing=colander.drop)
86
87
88 def http_error(
89 httpexception, errno=None, code=None, error=None, message=None, info=None, details=None
90 ):
91 """Return a JSON formated response matching the error HTTP API.
92
93 :param httpexception: Instance of :mod:`~pyramid:pyramid.httpexceptions`
94 :param errno: stable application-level error number (e.g. 109)
95 :param code: matches the HTTP status code (e.g 400)
96 :param error: string description of error type (e.g. "Bad request")
97 :param message: context information (e.g. "Invalid request parameters")
98 :param info: information about error (e.g. URL to troubleshooting)
99 :param details: additional structured details (conflicting object)
100 :returns: the formatted response object
101 :rtype: pyramid.httpexceptions.HTTPException
102 """
103 errno = errno or ERRORS.UNDEFINED
104
105 if isinstance(errno, Enum):
106 errno = errno.value
107
108 body = {
109 "code": code or httpexception.code,
110 "errno": errno,
111 "error": error or httpexception.title,
112 "message": message,
113 "info": info,
114 "details": details or colander.drop,
115 }
116
117 response = httpexception
118 response.errno = errno
119 response.json = ErrorSchema().deserialize(body)
120 response.content_type = "application/json"
121 return response
122
123
124 def json_error_handler(request):
125 """Cornice JSON error handler, returning consistant JSON formatted errors
126 from schema validation errors.
127
128 This is meant to be used is custom services in your applications.
129
130 .. code-block:: python
131
132 upload = Service(name="upload", path='/upload',
133 error_handler=errors.json_error_handler)
134
135 .. warning::
136
137 Only the first error of the list is formatted in the response.
138 (c.f. HTTP API).
139 """
140 errors = request.errors
141 sorted_errors = sorted(errors, key=lambda x: str(x["name"]))
142 # In Cornice, we call error handler if at least one error was set.
143 error = sorted_errors[0]
144 name = error["name"]
145 description = error["description"]
146
147 if isinstance(description, bytes):
148 description = error["description"].decode("utf-8")
149
150 if name is not None:
151 if name in description:
152 message = description
153 else:
154 message = "{name} in {location}: {description}".format_map(error)
155 else:
156 message = "{location}: {description}".format_map(error)
157
158 response = http_error(
159 httpexceptions.HTTPBadRequest(),
160 code=errors.status,
161 errno=ERRORS.INVALID_PARAMETERS.value,
162 error="Invalid parameters",
163 message=message,
164 details=errors,
165 )
166 response.status = errors.status
167 response = reapply_cors(request, response)
168 return response
169
170
171 def raise_invalid(request, location="body", name=None, description=None, **kwargs):
172 """Helper to raise a validation error.
173
174 :param location: location in request (e.g. ``'querystring'``)
175 :param name: field name
176 :param description: detailed description of validation error
177
178 :raises: :class:`~pyramid:pyramid.httpexceptions.HTTPBadRequest`
179 """
180 request.errors.add(location, name, description, **kwargs)
181 response = json_error_handler(request)
182 raise response
183
184
185 def send_alert(request, message=None, url=None, code="soft-eol"):
186 """Helper to add an Alert header to the response.
187
188 :param code: The type of error 'soft-eol', 'hard-eol'
189 :param message: The description message.
190 :param url: The URL for more information, default to the documentation url.
191 """
192 if url is None:
193 url = request.registry.settings["project_docs"]
194
195 request.response.headers["Alert"] = json.dumps({"code": code, "message": message, "url": url})
196
197
198 def request_GET(request):
199 """Catches a UnicodeDecode error in request.GET in case a wrong request was received.
200 Fixing a webob long term issue: https://github.com/Pylons/webob/issues/161
201 """
202 try:
203 return request.GET
204 except UnicodeDecodeError:
205 querystring = request.environ.get("QUERY_STRING", "")
206 logger = logging.getLogger(__name__)
207 logger.warning("Error decoding QUERY_STRING: %s" % request.environ)
208 raise http_error(
209 httpexceptions.HTTPBadRequest(),
210 errno=ERRORS.INVALID_PARAMETERS,
211 message="A request with an incorrect encoding in the querystring was"
212 "received. Please make sure your requests are encoded in UTF-8: %s" % querystring,
213 )
214
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/core/errors.py b/kinto/core/errors.py
--- a/kinto/core/errors.py
+++ b/kinto/core/errors.py
@@ -148,7 +148,7 @@
description = error["description"].decode("utf-8")
if name is not None:
- if name in description:
+ if str(name) in description:
message = description
else:
message = "{name} in {location}: {description}".format_map(error)
| {"golden_diff": "diff --git a/kinto/core/errors.py b/kinto/core/errors.py\n--- a/kinto/core/errors.py\n+++ b/kinto/core/errors.py\n@@ -148,7 +148,7 @@\n description = error[\"description\"].decode(\"utf-8\")\n \n if name is not None:\n- if name in description:\n+ if str(name) in description:\n message = description\n else:\n message = \"{name} in {location}: {description}\".format_map(error)\n", "issue": "Crash with \"in <string>' requires string as left operand, not int\"\n```\r\nValidationError: 'minVersion' is a required property\r\n\r\nFailed validating 'required' in schema['properties']['versionRange']['items']['properties']['targetApplication']['items']:\r\n {'additionalProperties': False,\r\n 'description': 'Target application',\r\n 'properties': {'guid': {'description': 'The application unique '\r\n 'identifier.',\r\n 'enum': ['{ec8030f7-c20a-464f-9b0e-13a3a9e97384}',\r\n '{3550f703-e582-4d05-9a08-453d09bdfdc6}',\r\n '{92650c4d-4b8e-4d2a-b7eb-24ecf4f6b63a}',\r\n '{aa3c5121-dab2-40e2-81ca-7ea25febc110}'],\r\n 'enumNames': ['Firefox',\r\n 'Thunderbird',\r\n 'Seamonkey',\r\n 'Android'],\r\n 'title': 'Application id',\r\n 'type': 'string'},\r\n 'maxVersion': {'$ref': '#/definitions/maxVersion'},\r\n 'minVersion': {'$ref': '#/definitions/minVersion'}},\r\n 'required': ['guid', 'minVersion', 'maxVersion'],\r\n 'title': 'Target application',\r\n 'type': 'object'}\r\n\r\nOn instance['versionRange'][0]['targetApplication'][0]:\r\n {'guid': 'ec8030f7-c20a-464f-9b0e-13a3a9e97384', 'maxVersion': '57.0.*'}\r\n File \"kinto/views/records.py\", line 73, in process_record\r\n jsonschema.validate(data, schema)\r\n File \"jsonschema/validators.py\", line 541, in validate\r\n cls(schema, *args, **kwargs).validate(instance)\r\n File \"jsonschema/validators.py\", line 130, in validate\r\n raise error\r\n\r\nTypeError: 'in <string>' requires string as left operand, not int\r\n(11 additional frame(s) were not displayed)\r\n...\r\n File \"cornice/service.py\", line 494, in wrapper\r\n response = view_()\r\n File \"kinto/core/resource/__init__.py\", line 463, in put\r\n new_record = self.process_record(post_record, old=existing)\r\n File \"kinto/views/records.py\", line 81, in process_record\r\n raise_invalid(self.request, name=field, description=e.message)\r\n File \"kinto/core/errors.py\", line 178, in raise_invalid\r\n response = json_error_handler(request)\r\n File \"kinto/core/errors.py\", line 149, in json_error_handler\r\n if name in description:\r\n```\n", "before_files": [{"content": "import colander\nimport logging\nfrom pyramid import httpexceptions\nfrom enum import Enum\n\nfrom kinto.core.schema import Any\nfrom kinto.core.utils import json, reapply_cors\n\n\nclass ERRORS(Enum):\n \"\"\"Predefined errors as specified by the API.\n\n +-------------+-------+------------------------------------------------+\n | Status code | Errno | Description |\n +=============+=======+================================================+\n | 401 | 104 | Missing Authorization Token |\n +-------------+-------+------------------------------------------------+\n | 401 | 105 | Invalid Authorization Token |\n +-------------+-------+------------------------------------------------+\n | 400 | 106 | request body was not valid JSON |\n +-------------+-------+------------------------------------------------+\n | 400 | 107 | invalid request parameter |\n +-------------+-------+------------------------------------------------+\n | 400 | 108 | missing request parameter |\n 
+-------------+-------+------------------------------------------------+\n | 400 | 109 | invalid posted data |\n +-------------+-------+------------------------------------------------+\n | 404 | 110 | Invalid Token / id |\n +-------------+-------+------------------------------------------------+\n | 404 | 111 | Missing Token / id |\n +-------------+-------+------------------------------------------------+\n | 411 | 112 | Content-Length header was not provided |\n +-------------+-------+------------------------------------------------+\n | 413 | 113 | Request body too large |\n +-------------+-------+------------------------------------------------+\n | 412 | 114 | Resource was modified meanwhile |\n +-------------+-------+------------------------------------------------+\n | 405 | 115 | Method not allowed on this end point |\n +-------------+-------+------------------------------------------------+\n | 404 | 116 | Requested version not available on this server |\n +-------------+-------+------------------------------------------------+\n | 429 | 117 | Client has sent too many requests |\n +-------------+-------+------------------------------------------------+\n | 403 | 121 | Resource's access forbidden for this user |\n +-------------+-------+------------------------------------------------+\n | 409 | 122 | Another resource violates constraint |\n +-------------+-------+------------------------------------------------+\n | 500 | 999 | Internal Server Error |\n +-------------+-------+------------------------------------------------+\n | 503 | 201 | Service Temporary unavailable due to high load |\n +-------------+-------+------------------------------------------------+\n | 410 | 202 | Service deprecated |\n +-------------+-------+------------------------------------------------+\n \"\"\"\n\n MISSING_AUTH_TOKEN = 104\n INVALID_AUTH_TOKEN = 105\n BADJSON = 106\n INVALID_PARAMETERS = 107\n MISSING_PARAMETERS = 108\n INVALID_POSTED_DATA = 109\n INVALID_RESOURCE_ID = 110\n MISSING_RESOURCE = 111\n MISSING_CONTENT_LENGTH = 112\n REQUEST_TOO_LARGE = 113\n MODIFIED_MEANWHILE = 114\n METHOD_NOT_ALLOWED = 115\n VERSION_NOT_AVAILABLE = 116\n CLIENT_REACHED_CAPACITY = 117\n FORBIDDEN = 121\n CONSTRAINT_VIOLATED = 122\n UNDEFINED = 999\n BACKEND = 201\n SERVICE_DEPRECATED = 202\n\n\nclass ErrorSchema(colander.MappingSchema):\n \"\"\"Payload schema for Kinto errors.\"\"\"\n\n code = colander.SchemaNode(colander.Integer())\n errno = colander.SchemaNode(colander.Integer())\n error = colander.SchemaNode(colander.String())\n message = colander.SchemaNode(colander.String(), missing=colander.drop)\n info = colander.SchemaNode(colander.String(), missing=colander.drop)\n details = colander.SchemaNode(Any(), missing=colander.drop)\n\n\ndef http_error(\n httpexception, errno=None, code=None, error=None, message=None, info=None, details=None\n):\n \"\"\"Return a JSON formated response matching the error HTTP API.\n\n :param httpexception: Instance of :mod:`~pyramid:pyramid.httpexceptions`\n :param errno: stable application-level error number (e.g. 109)\n :param code: matches the HTTP status code (e.g 400)\n :param error: string description of error type (e.g. \"Bad request\")\n :param message: context information (e.g. \"Invalid request parameters\")\n :param info: information about error (e.g. 
URL to troubleshooting)\n :param details: additional structured details (conflicting object)\n :returns: the formatted response object\n :rtype: pyramid.httpexceptions.HTTPException\n \"\"\"\n errno = errno or ERRORS.UNDEFINED\n\n if isinstance(errno, Enum):\n errno = errno.value\n\n body = {\n \"code\": code or httpexception.code,\n \"errno\": errno,\n \"error\": error or httpexception.title,\n \"message\": message,\n \"info\": info,\n \"details\": details or colander.drop,\n }\n\n response = httpexception\n response.errno = errno\n response.json = ErrorSchema().deserialize(body)\n response.content_type = \"application/json\"\n return response\n\n\ndef json_error_handler(request):\n \"\"\"Cornice JSON error handler, returning consistant JSON formatted errors\n from schema validation errors.\n\n This is meant to be used is custom services in your applications.\n\n .. code-block:: python\n\n upload = Service(name=\"upload\", path='/upload',\n error_handler=errors.json_error_handler)\n\n .. warning::\n\n Only the first error of the list is formatted in the response.\n (c.f. HTTP API).\n \"\"\"\n errors = request.errors\n sorted_errors = sorted(errors, key=lambda x: str(x[\"name\"]))\n # In Cornice, we call error handler if at least one error was set.\n error = sorted_errors[0]\n name = error[\"name\"]\n description = error[\"description\"]\n\n if isinstance(description, bytes):\n description = error[\"description\"].decode(\"utf-8\")\n\n if name is not None:\n if name in description:\n message = description\n else:\n message = \"{name} in {location}: {description}\".format_map(error)\n else:\n message = \"{location}: {description}\".format_map(error)\n\n response = http_error(\n httpexceptions.HTTPBadRequest(),\n code=errors.status,\n errno=ERRORS.INVALID_PARAMETERS.value,\n error=\"Invalid parameters\",\n message=message,\n details=errors,\n )\n response.status = errors.status\n response = reapply_cors(request, response)\n return response\n\n\ndef raise_invalid(request, location=\"body\", name=None, description=None, **kwargs):\n \"\"\"Helper to raise a validation error.\n\n :param location: location in request (e.g. 
``'querystring'``)\n :param name: field name\n :param description: detailed description of validation error\n\n :raises: :class:`~pyramid:pyramid.httpexceptions.HTTPBadRequest`\n \"\"\"\n request.errors.add(location, name, description, **kwargs)\n response = json_error_handler(request)\n raise response\n\n\ndef send_alert(request, message=None, url=None, code=\"soft-eol\"):\n \"\"\"Helper to add an Alert header to the response.\n\n :param code: The type of error 'soft-eol', 'hard-eol'\n :param message: The description message.\n :param url: The URL for more information, default to the documentation url.\n \"\"\"\n if url is None:\n url = request.registry.settings[\"project_docs\"]\n\n request.response.headers[\"Alert\"] = json.dumps({\"code\": code, \"message\": message, \"url\": url})\n\n\ndef request_GET(request):\n \"\"\"Catches a UnicodeDecode error in request.GET in case a wrong request was received.\n Fixing a webob long term issue: https://github.com/Pylons/webob/issues/161\n \"\"\"\n try:\n return request.GET\n except UnicodeDecodeError:\n querystring = request.environ.get(\"QUERY_STRING\", \"\")\n logger = logging.getLogger(__name__)\n logger.warning(\"Error decoding QUERY_STRING: %s\" % request.environ)\n raise http_error(\n httpexceptions.HTTPBadRequest(),\n errno=ERRORS.INVALID_PARAMETERS,\n message=\"A request with an incorrect encoding in the querystring was\"\n \"received. Please make sure your requests are encoded in UTF-8: %s\" % querystring,\n )\n", "path": "kinto/core/errors.py"}], "after_files": [{"content": "import colander\nimport logging\nfrom pyramid import httpexceptions\nfrom enum import Enum\n\nfrom kinto.core.schema import Any\nfrom kinto.core.utils import json, reapply_cors\n\n\nclass ERRORS(Enum):\n \"\"\"Predefined errors as specified by the API.\n\n +-------------+-------+------------------------------------------------+\n | Status code | Errno | Description |\n +=============+=======+================================================+\n | 401 | 104 | Missing Authorization Token |\n +-------------+-------+------------------------------------------------+\n | 401 | 105 | Invalid Authorization Token |\n +-------------+-------+------------------------------------------------+\n | 400 | 106 | request body was not valid JSON |\n +-------------+-------+------------------------------------------------+\n | 400 | 107 | invalid request parameter |\n +-------------+-------+------------------------------------------------+\n | 400 | 108 | missing request parameter |\n +-------------+-------+------------------------------------------------+\n | 400 | 109 | invalid posted data |\n +-------------+-------+------------------------------------------------+\n | 404 | 110 | Invalid Token / id |\n +-------------+-------+------------------------------------------------+\n | 404 | 111 | Missing Token / id |\n +-------------+-------+------------------------------------------------+\n | 411 | 112 | Content-Length header was not provided |\n +-------------+-------+------------------------------------------------+\n | 413 | 113 | Request body too large |\n +-------------+-------+------------------------------------------------+\n | 412 | 114 | Resource was modified meanwhile |\n +-------------+-------+------------------------------------------------+\n | 405 | 115 | Method not allowed on this end point |\n +-------------+-------+------------------------------------------------+\n | 404 | 116 | Requested version not available on this server |\n 
+-------------+-------+------------------------------------------------+\n | 429 | 117 | Client has sent too many requests |\n +-------------+-------+------------------------------------------------+\n | 403 | 121 | Resource's access forbidden for this user |\n +-------------+-------+------------------------------------------------+\n | 409 | 122 | Another resource violates constraint |\n +-------------+-------+------------------------------------------------+\n | 500 | 999 | Internal Server Error |\n +-------------+-------+------------------------------------------------+\n | 503 | 201 | Service Temporary unavailable due to high load |\n +-------------+-------+------------------------------------------------+\n | 410 | 202 | Service deprecated |\n +-------------+-------+------------------------------------------------+\n \"\"\"\n\n MISSING_AUTH_TOKEN = 104\n INVALID_AUTH_TOKEN = 105\n BADJSON = 106\n INVALID_PARAMETERS = 107\n MISSING_PARAMETERS = 108\n INVALID_POSTED_DATA = 109\n INVALID_RESOURCE_ID = 110\n MISSING_RESOURCE = 111\n MISSING_CONTENT_LENGTH = 112\n REQUEST_TOO_LARGE = 113\n MODIFIED_MEANWHILE = 114\n METHOD_NOT_ALLOWED = 115\n VERSION_NOT_AVAILABLE = 116\n CLIENT_REACHED_CAPACITY = 117\n FORBIDDEN = 121\n CONSTRAINT_VIOLATED = 122\n UNDEFINED = 999\n BACKEND = 201\n SERVICE_DEPRECATED = 202\n\n\nclass ErrorSchema(colander.MappingSchema):\n \"\"\"Payload schema for Kinto errors.\"\"\"\n\n code = colander.SchemaNode(colander.Integer())\n errno = colander.SchemaNode(colander.Integer())\n error = colander.SchemaNode(colander.String())\n message = colander.SchemaNode(colander.String(), missing=colander.drop)\n info = colander.SchemaNode(colander.String(), missing=colander.drop)\n details = colander.SchemaNode(Any(), missing=colander.drop)\n\n\ndef http_error(\n httpexception, errno=None, code=None, error=None, message=None, info=None, details=None\n):\n \"\"\"Return a JSON formated response matching the error HTTP API.\n\n :param httpexception: Instance of :mod:`~pyramid:pyramid.httpexceptions`\n :param errno: stable application-level error number (e.g. 109)\n :param code: matches the HTTP status code (e.g 400)\n :param error: string description of error type (e.g. \"Bad request\")\n :param message: context information (e.g. \"Invalid request parameters\")\n :param info: information about error (e.g. URL to troubleshooting)\n :param details: additional structured details (conflicting object)\n :returns: the formatted response object\n :rtype: pyramid.httpexceptions.HTTPException\n \"\"\"\n errno = errno or ERRORS.UNDEFINED\n\n if isinstance(errno, Enum):\n errno = errno.value\n\n body = {\n \"code\": code or httpexception.code,\n \"errno\": errno,\n \"error\": error or httpexception.title,\n \"message\": message,\n \"info\": info,\n \"details\": details or colander.drop,\n }\n\n response = httpexception\n response.errno = errno\n response.json = ErrorSchema().deserialize(body)\n response.content_type = \"application/json\"\n return response\n\n\ndef json_error_handler(request):\n \"\"\"Cornice JSON error handler, returning consistant JSON formatted errors\n from schema validation errors.\n\n This is meant to be used is custom services in your applications.\n\n .. code-block:: python\n\n upload = Service(name=\"upload\", path='/upload',\n error_handler=errors.json_error_handler)\n\n .. warning::\n\n Only the first error of the list is formatted in the response.\n (c.f. 
HTTP API).\n \"\"\"\n errors = request.errors\n sorted_errors = sorted(errors, key=lambda x: str(x[\"name\"]))\n # In Cornice, we call error handler if at least one error was set.\n error = sorted_errors[0]\n name = error[\"name\"]\n description = error[\"description\"]\n\n if isinstance(description, bytes):\n description = error[\"description\"].decode(\"utf-8\")\n\n if name is not None:\n if str(name) in description:\n message = description\n else:\n message = \"{name} in {location}: {description}\".format_map(error)\n else:\n message = \"{location}: {description}\".format_map(error)\n\n response = http_error(\n httpexceptions.HTTPBadRequest(),\n code=errors.status,\n errno=ERRORS.INVALID_PARAMETERS.value,\n error=\"Invalid parameters\",\n message=message,\n details=errors,\n )\n response.status = errors.status\n response = reapply_cors(request, response)\n return response\n\n\ndef raise_invalid(request, location=\"body\", name=None, description=None, **kwargs):\n \"\"\"Helper to raise a validation error.\n\n :param location: location in request (e.g. ``'querystring'``)\n :param name: field name\n :param description: detailed description of validation error\n\n :raises: :class:`~pyramid:pyramid.httpexceptions.HTTPBadRequest`\n \"\"\"\n request.errors.add(location, name, description, **kwargs)\n response = json_error_handler(request)\n raise response\n\n\ndef send_alert(request, message=None, url=None, code=\"soft-eol\"):\n \"\"\"Helper to add an Alert header to the response.\n\n :param code: The type of error 'soft-eol', 'hard-eol'\n :param message: The description message.\n :param url: The URL for more information, default to the documentation url.\n \"\"\"\n if url is None:\n url = request.registry.settings[\"project_docs\"]\n\n request.response.headers[\"Alert\"] = json.dumps({\"code\": code, \"message\": message, \"url\": url})\n\n\ndef request_GET(request):\n \"\"\"Catches a UnicodeDecode error in request.GET in case a wrong request was received.\n Fixing a webob long term issue: https://github.com/Pylons/webob/issues/161\n \"\"\"\n try:\n return request.GET\n except UnicodeDecodeError:\n querystring = request.environ.get(\"QUERY_STRING\", \"\")\n logger = logging.getLogger(__name__)\n logger.warning(\"Error decoding QUERY_STRING: %s\" % request.environ)\n raise http_error(\n httpexceptions.HTTPBadRequest(),\n errno=ERRORS.INVALID_PARAMETERS,\n message=\"A request with an incorrect encoding in the querystring was\"\n \"received. Please make sure your requests are encoded in UTF-8: %s\" % querystring,\n )\n", "path": "kinto/core/errors.py"}]} |
gh_patches_debug_1058 | rasdani/github-patches | git_diff | Mailu__Mailu-719 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Alternatives useless after podop
After updating to master to get all the up-to-date fixes, postfix now uses podop, and it seems to no longer support receiving external mail from alternative domains 😢
Sending internal mail between alternatives works as expected, but external mail does not: a "relay denied" message is shown in the logs, and when checking the postfix podop views it looks like alternative domains are never mentioned.
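From a quick look, the podop domain view only answers for `Domain` entries, so an alternative domain is never treated as locally hosted. A rough sketch of the kind of change that seems to be needed (untested, just to illustrate the idea):

```python
from mailu import models
from mailu.internal import internal
import flask

@internal.route("/postfix/domain/<domain_name>")
def postfix_mailbox_domain(domain_name):
    # Treat alternative domains as local too, otherwise postfix answers
    # "relay denied" for mail addressed to an alternative domain.
    domain = models.Domain.query.get(domain_name) or \
             models.Alternative.query.get(domain_name) or \
             flask.abort(404)
    return flask.jsonify(domain.name)
```

With that, mail for an alternative domain would go through the same alias/mailbox lookups as the primary domain.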
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/internal/views/postfix.py`
Content:
```
1 from mailu import db, models
2 from mailu.internal import internal
3
4 import flask
5
6
7 @internal.route("/postfix/domain/<domain_name>")
8 def postfix_mailbox_domain(domain_name):
9 domain = models.Domain.query.get(domain_name) or flask.abort(404)
10 return flask.jsonify(domain.name)
11
12
13 @internal.route("/postfix/mailbox/<email>")
14 def postfix_mailbox_map(email):
15 user = models.User.query.get(email) or flask.abort(404)
16 return flask.jsonify(user.email)
17
18
19 @internal.route("/postfix/alias/<alias>")
20 def postfix_alias_map(alias):
21 localpart, domain = alias.split('@', 1) if '@' in alias else (None, alias)
22 alternative = models.Alternative.query.get(domain)
23 if alternative:
24 domain = alternative.domain_name
25 email = '{}@{}'.format(localpart, domain)
26 if localpart is None:
27 return flask.jsonify(domain)
28 else:
29 alias_obj = models.Alias.resolve(localpart, domain)
30 if alias_obj:
31 return flask.jsonify(",".join(alias_obj.destination))
32 user_obj = models.User.query.get(email)
33 if user_obj:
34 return flask.jsonify(user_obj.destination)
35 return flask.abort(404)
36
37
38 @internal.route("/postfix/transport/<email>")
39 def postfix_transport(email):
40 localpart, domain = email.split('@', 1) if '@' in email else (None, email)
41 relay = models.Relay.query.get(domain) or flask.abort(404)
42 return flask.jsonify("smtp:[{}]".format(relay.smtp))
43
44
45 @internal.route("/postfix/sender/<sender>")
46 def postfix_sender(sender):
47 """ Simply reject any sender that pretends to be from a local domain
48 """
49 localpart, domain_name = sender.split('@', 1) if '@' in sender else (None, sender)
50 domain = models.Domain.query.get(domain_name)
51 alternative = models.Alternative.query.get(domain_name)
52 if domain or alternative:
53 return flask.jsonify("REJECT")
54 return flask.abort(404)
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/internal/views/postfix.py b/core/admin/mailu/internal/views/postfix.py
--- a/core/admin/mailu/internal/views/postfix.py
+++ b/core/admin/mailu/internal/views/postfix.py
@@ -6,7 +6,9 @@
@internal.route("/postfix/domain/<domain_name>")
def postfix_mailbox_domain(domain_name):
- domain = models.Domain.query.get(domain_name) or flask.abort(404)
+ domain = models.Domain.query.get(domain_name) or \
+ models.Alternative.query.get(domain_name) or \
+ flask.abort(404)
return flask.jsonify(domain.name)
| {"golden_diff": "diff --git a/core/admin/mailu/internal/views/postfix.py b/core/admin/mailu/internal/views/postfix.py\n--- a/core/admin/mailu/internal/views/postfix.py\n+++ b/core/admin/mailu/internal/views/postfix.py\n@@ -6,7 +6,9 @@\n \n @internal.route(\"/postfix/domain/<domain_name>\")\n def postfix_mailbox_domain(domain_name):\n- domain = models.Domain.query.get(domain_name) or flask.abort(404)\n+ domain = models.Domain.query.get(domain_name) or \\\n+ models.Alternative.query.get(domain_name) or \\\n+ flask.abort(404)\n return flask.jsonify(domain.name)\n", "issue": "Alternatives useless after podop\nAfter updating to master to get all the up-to-date fixes it also moves postfix to use podop and it seems to no longer support receiving external mail from alternative domains \ud83d\ude22 \r\n\r\nSending internal mail between alternatives works as expected but not with external mail, a \"relay denied\" message is shown in the logs and when checking the postfix podop views it looks like alternative is never mentioned.\n", "before_files": [{"content": "from mailu import db, models\nfrom mailu.internal import internal\n\nimport flask\n\n\[email protected](\"/postfix/domain/<domain_name>\")\ndef postfix_mailbox_domain(domain_name):\n domain = models.Domain.query.get(domain_name) or flask.abort(404)\n return flask.jsonify(domain.name)\n\n\[email protected](\"/postfix/mailbox/<email>\")\ndef postfix_mailbox_map(email):\n user = models.User.query.get(email) or flask.abort(404)\n return flask.jsonify(user.email)\n\n\[email protected](\"/postfix/alias/<alias>\")\ndef postfix_alias_map(alias):\n localpart, domain = alias.split('@', 1) if '@' in alias else (None, alias)\n alternative = models.Alternative.query.get(domain)\n if alternative:\n domain = alternative.domain_name\n email = '{}@{}'.format(localpart, domain)\n if localpart is None:\n return flask.jsonify(domain)\n else:\n alias_obj = models.Alias.resolve(localpart, domain)\n if alias_obj:\n return flask.jsonify(\",\".join(alias_obj.destination))\n user_obj = models.User.query.get(email)\n if user_obj:\n return flask.jsonify(user_obj.destination)\n return flask.abort(404)\n\n\[email protected](\"/postfix/transport/<email>\")\ndef postfix_transport(email):\n localpart, domain = email.split('@', 1) if '@' in email else (None, email)\n relay = models.Relay.query.get(domain) or flask.abort(404)\n return flask.jsonify(\"smtp:[{}]\".format(relay.smtp))\n\n\[email protected](\"/postfix/sender/<sender>\")\ndef postfix_sender(sender):\n \"\"\" Simply reject any sender that pretends to be from a local domain\n \"\"\"\n localpart, domain_name = sender.split('@', 1) if '@' in sender else (None, sender)\n domain = models.Domain.query.get(domain_name)\n alternative = models.Alternative.query.get(domain_name)\n if domain or alternative:\n return flask.jsonify(\"REJECT\")\n return flask.abort(404)\n", "path": "core/admin/mailu/internal/views/postfix.py"}], "after_files": [{"content": "from mailu import db, models\nfrom mailu.internal import internal\n\nimport flask\n\n\[email protected](\"/postfix/domain/<domain_name>\")\ndef postfix_mailbox_domain(domain_name):\n domain = models.Domain.query.get(domain_name) or \\\n models.Alternative.query.get(domain_name) or \\\n flask.abort(404)\n return flask.jsonify(domain.name)\n\n\[email protected](\"/postfix/mailbox/<email>\")\ndef postfix_mailbox_map(email):\n user = models.User.query.get(email) or flask.abort(404)\n return flask.jsonify(user.email)\n\n\[email protected](\"/postfix/alias/<alias>\")\ndef 
postfix_alias_map(alias):\n localpart, domain = alias.split('@', 1) if '@' in alias else (None, alias)\n alternative = models.Alternative.query.get(domain)\n if alternative:\n domain = alternative.domain_name\n email = '{}@{}'.format(localpart, domain)\n if localpart is None:\n return flask.jsonify(domain)\n else:\n alias_obj = models.Alias.resolve(localpart, domain)\n if alias_obj:\n return flask.jsonify(\",\".join(alias_obj.destination))\n user_obj = models.User.query.get(email)\n if user_obj:\n return flask.jsonify(user_obj.destination)\n return flask.abort(404)\n\n\[email protected](\"/postfix/transport/<email>\")\ndef postfix_transport(email):\n localpart, domain = email.split('@', 1) if '@' in email else (None, email)\n relay = models.Relay.query.get(domain) or flask.abort(404)\n return flask.jsonify(\"smtp:[{}]\".format(relay.smtp))\n\n\[email protected](\"/postfix/sender/<sender>\")\ndef postfix_sender(sender):\n \"\"\" Simply reject any sender that pretends to be from a local domain\n \"\"\"\n localpart, domain_name = sender.split('@', 1) if '@' in sender else (None, sender)\n domain = models.Domain.query.get(domain_name)\n alternative = models.Alternative.query.get(domain_name)\n if domain or alternative:\n return flask.jsonify(\"REJECT\")\n return flask.abort(404)\n", "path": "core/admin/mailu/internal/views/postfix.py"}]} |
gh_patches_debug_1059 | rasdani/github-patches | git_diff | privacyidea__privacyidea-2418 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update requirements for upcoming version 3.5
Bump the requirements to their newest versions according to https://github.com/privacyidea/privacyidea/wiki/Development-workflow#requirements
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import print_function
3 from setuptools import setup, find_packages
4 import os
5 import stat
6 import sys
7
8 #VERSION = "2.1dev4"
9 VERSION = "3.4"
10
11 # Taken from kennethreitz/requests/setup.py
12 package_directory = os.path.realpath(os.path.dirname(__file__))
13
14
15 def get_file_contents(file_path):
16 """Get the context of the file using full path name."""
17 content = ""
18 try:
19 full_path = os.path.join(package_directory, file_path)
20 content = open(full_path, 'r').read()
21 except:
22 print("### could not open file {0!r}".format(file_path), file=sys.stderr)
23 return content
24
25
26 def get_file_list(file_path):
27 full_path = os.path.join(package_directory, file_path)
28 file_list = os.listdir(full_path)
29 # now we need to add the path to the files
30 return [file_path + f for f in file_list]
31
32
33 install_requires = ["beautifulsoup4[lxml]>=4.3.2",
34 "cbor2>=5.0.1",
35 "configobj>=5.0.6",
36 "croniter>=0.3.8",
37 "cryptography>=2.4.2",
38 "defusedxml>=0.4.1",
39 "ecdsa>=0.13.3",
40 "Flask>=0.10.1",
41 "Flask-Babel>=0.9",
42 "Flask-Migrate>=1.2.0",
43 "Flask-Script>=2.0.5",
44 "Flask-SQLAlchemy>=2.0",
45 "Flask-Versioned>=0.9.4",
46 "future>=0.18.2;python_version<'3.0'",
47 "huey[redis]>=1.11.0",
48 "ldap3>=2.6",
49 "netaddr>=0.7.12",
50 "oauth2client>=2.0.1",
51 "passlib[bcrypt]>=1.7.0",
52 "Pillow>=6.2.1",
53 "PyJWT>=1.3.0",
54 "PyMySQL>=0.6.6",
55 "pyOpenSSL>=17.5",
56 "pyrad>=2.0",
57 "python-dateutil>=2.7.3",
58 "python-gnupg>=0.4.4",
59 "PyYAML>=5.1",
60 "qrcode>=6.1",
61 "requests>=2.7.0",
62 "smpplib>=2.0",
63 "SQLAlchemy>=1.3.0",
64 "sqlsoup>=0.9.0"]
65
66
67 def get_man_pages(dir):
68 """
69 Get man pages in a directory.
70 :param dir:
71 :return: list of file names
72 """
73 files = os.listdir(dir)
74 r_files = []
75 for file in files:
76 if file.endswith(".1"):
77 r_files.append(dir + "/" + file)
78 return r_files
79
80
81 def get_scripts(dir):
82 """
83 Get files that are executable
84 :param dir:
85 :return: list of file names
86 """
87 files = os.listdir(dir)
88 r_files = []
89 for file in files:
90 if os.stat(dir + "/" + file)[stat.ST_MODE] & stat.S_IEXEC:
91 r_files.append(dir + "/" + file)
92 return r_files
93
94
95 setup(
96 name='privacyIDEA',
97 version=VERSION,
98 description='privacyIDEA: identity, multifactor authentication (OTP), '
99 'authorization, audit',
100 author='privacyidea.org',
101 license='AGPLv3',
102 author_email='[email protected]',
103 url='http://www.privacyidea.org',
104 keywords='OTP, two factor authentication, management, security',
105 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.9.*',
106 packages=find_packages(),
107 scripts=["pi-manage"] + get_scripts("tools"),
108 extras_require={
109 'doc': ["Sphinx>=1.3.1",
110 "sphinxcontrib-httpdomain>=1.3.0",
111 "sphinxcontrib-plantuml>=0.18"],
112 'test': ["mock>=2.0.0",
113 "pytest>=3.6.0",
114 "pytest-cov>=2.5.1",
115 "responses>=0.9.0"],
116 'postgres': ['psycopg2>=2.8.3']
117 },
118 install_requires=install_requires,
119 include_package_data=True,
120 data_files=[('etc/privacyidea/',
121 ['deploy/apache/privacyideaapp.wsgi',
122 'deploy/privacyidea/dictionary']),
123 ('share/man/man1', get_man_pages("tools")),
124 ('lib/privacyidea/migrations',
125 ["migrations/alembic.ini",
126 "migrations/env.py",
127 "migrations/README",
128 "migrations/script.py.mako"]),
129 ('lib/privacyidea/migrations/versions',
130 get_file_list("migrations/versions/")),
131 ('lib/privacyidea/', ['requirements.txt'])
132 ],
133 classifiers=["Framework :: Flask",
134 "License :: OSI Approved :: "
135 "GNU Affero General Public License v3",
136 "Programming Language :: Python",
137 "Development Status :: 5 - Production/Stable",
138 "Topic :: Internet",
139 "Topic :: Security",
140 "Topic :: System ::"
141 " Systems Administration :: Authentication/Directory",
142 'Programming Language :: Python',
143 'Programming Language :: Python :: 2',
144 'Programming Language :: Python :: 2.7',
145 'Programming Language :: Python :: 3',
146 'Programming Language :: Python :: 3.5',
147 'Programming Language :: Python :: 3.6',
148 'Programming Language :: Python :: 3.7',
149 'Programming Language :: Python :: 3.8'
150 ],
151 zip_safe=False,
152 long_description=get_file_contents('README.rst')
153 )
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -50,6 +50,7 @@
"oauth2client>=2.0.1",
"passlib[bcrypt]>=1.7.0",
"Pillow>=6.2.1",
+ "pydash>=4.7.4",
"PyJWT>=1.3.0",
"PyMySQL>=0.6.6",
"pyOpenSSL>=17.5",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -50,6 +50,7 @@\n \"oauth2client>=2.0.1\",\n \"passlib[bcrypt]>=1.7.0\",\n \"Pillow>=6.2.1\",\n+ \"pydash>=4.7.4\",\n \"PyJWT>=1.3.0\",\n \"PyMySQL>=0.6.6\",\n \"pyOpenSSL>=17.5\",\n", "issue": "Update requirements for upcoming version 3.5\nPush requirements to newest versions according to https://github.com/privacyidea/privacyidea/wiki/Development-workflow#requirements\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import print_function\nfrom setuptools import setup, find_packages\nimport os\nimport stat\nimport sys\n\n#VERSION = \"2.1dev4\"\nVERSION = \"3.4\"\n\n# Taken from kennethreitz/requests/setup.py\npackage_directory = os.path.realpath(os.path.dirname(__file__))\n\n\ndef get_file_contents(file_path):\n \"\"\"Get the context of the file using full path name.\"\"\"\n content = \"\"\n try:\n full_path = os.path.join(package_directory, file_path)\n content = open(full_path, 'r').read()\n except:\n print(\"### could not open file {0!r}\".format(file_path), file=sys.stderr)\n return content\n\n\ndef get_file_list(file_path):\n full_path = os.path.join(package_directory, file_path)\n file_list = os.listdir(full_path)\n # now we need to add the path to the files\n return [file_path + f for f in file_list]\n\n\ninstall_requires = [\"beautifulsoup4[lxml]>=4.3.2\",\n \"cbor2>=5.0.1\",\n \"configobj>=5.0.6\",\n \"croniter>=0.3.8\",\n \"cryptography>=2.4.2\",\n \"defusedxml>=0.4.1\",\n \"ecdsa>=0.13.3\",\n \"Flask>=0.10.1\",\n \"Flask-Babel>=0.9\",\n \"Flask-Migrate>=1.2.0\",\n \"Flask-Script>=2.0.5\",\n \"Flask-SQLAlchemy>=2.0\",\n \"Flask-Versioned>=0.9.4\",\n \"future>=0.18.2;python_version<'3.0'\",\n \"huey[redis]>=1.11.0\",\n \"ldap3>=2.6\",\n \"netaddr>=0.7.12\",\n \"oauth2client>=2.0.1\",\n \"passlib[bcrypt]>=1.7.0\",\n \"Pillow>=6.2.1\",\n \"PyJWT>=1.3.0\",\n \"PyMySQL>=0.6.6\",\n \"pyOpenSSL>=17.5\",\n \"pyrad>=2.0\",\n \"python-dateutil>=2.7.3\",\n \"python-gnupg>=0.4.4\",\n \"PyYAML>=5.1\",\n \"qrcode>=6.1\",\n \"requests>=2.7.0\",\n \"smpplib>=2.0\",\n \"SQLAlchemy>=1.3.0\",\n \"sqlsoup>=0.9.0\"]\n\n\ndef get_man_pages(dir):\n \"\"\"\n Get man pages in a directory.\n :param dir:\n :return: list of file names\n \"\"\"\n files = os.listdir(dir)\n r_files = []\n for file in files:\n if file.endswith(\".1\"):\n r_files.append(dir + \"/\" + file)\n return r_files\n\n\ndef get_scripts(dir):\n \"\"\"\n Get files that are executable\n :param dir:\n :return: list of file names\n \"\"\"\n files = os.listdir(dir)\n r_files = []\n for file in files:\n if os.stat(dir + \"/\" + file)[stat.ST_MODE] & stat.S_IEXEC:\n r_files.append(dir + \"/\" + file)\n return r_files\n\n\nsetup(\n name='privacyIDEA',\n version=VERSION,\n description='privacyIDEA: identity, multifactor authentication (OTP), '\n 'authorization, audit',\n author='privacyidea.org',\n license='AGPLv3',\n author_email='[email protected]',\n url='http://www.privacyidea.org',\n keywords='OTP, two factor authentication, management, security',\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.9.*',\n packages=find_packages(),\n scripts=[\"pi-manage\"] + get_scripts(\"tools\"),\n extras_require={\n 'doc': [\"Sphinx>=1.3.1\",\n \"sphinxcontrib-httpdomain>=1.3.0\",\n \"sphinxcontrib-plantuml>=0.18\"],\n 'test': [\"mock>=2.0.0\",\n \"pytest>=3.6.0\",\n \"pytest-cov>=2.5.1\",\n \"responses>=0.9.0\"],\n 'postgres': ['psycopg2>=2.8.3']\n },\n install_requires=install_requires,\n 
include_package_data=True,\n data_files=[('etc/privacyidea/',\n ['deploy/apache/privacyideaapp.wsgi',\n 'deploy/privacyidea/dictionary']),\n ('share/man/man1', get_man_pages(\"tools\")),\n ('lib/privacyidea/migrations',\n [\"migrations/alembic.ini\",\n \"migrations/env.py\",\n \"migrations/README\",\n \"migrations/script.py.mako\"]),\n ('lib/privacyidea/migrations/versions',\n get_file_list(\"migrations/versions/\")),\n ('lib/privacyidea/', ['requirements.txt'])\n ],\n classifiers=[\"Framework :: Flask\",\n \"License :: OSI Approved :: \"\n \"GNU Affero General Public License v3\",\n \"Programming Language :: Python\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Internet\",\n \"Topic :: Security\",\n \"Topic :: System ::\"\n \" Systems Administration :: Authentication/Directory\",\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8'\n ],\n zip_safe=False,\n long_description=get_file_contents('README.rst')\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import print_function\nfrom setuptools import setup, find_packages\nimport os\nimport stat\nimport sys\n\n#VERSION = \"2.1dev4\"\nVERSION = \"3.4\"\n\n# Taken from kennethreitz/requests/setup.py\npackage_directory = os.path.realpath(os.path.dirname(__file__))\n\n\ndef get_file_contents(file_path):\n \"\"\"Get the context of the file using full path name.\"\"\"\n content = \"\"\n try:\n full_path = os.path.join(package_directory, file_path)\n content = open(full_path, 'r').read()\n except:\n print(\"### could not open file {0!r}\".format(file_path), file=sys.stderr)\n return content\n\n\ndef get_file_list(file_path):\n full_path = os.path.join(package_directory, file_path)\n file_list = os.listdir(full_path)\n # now we need to add the path to the files\n return [file_path + f for f in file_list]\n\n\ninstall_requires = [\"beautifulsoup4[lxml]>=4.3.2\",\n \"cbor2>=5.0.1\",\n \"configobj>=5.0.6\",\n \"croniter>=0.3.8\",\n \"cryptography>=2.4.2\",\n \"defusedxml>=0.4.1\",\n \"ecdsa>=0.13.3\",\n \"Flask>=0.10.1\",\n \"Flask-Babel>=0.9\",\n \"Flask-Migrate>=1.2.0\",\n \"Flask-Script>=2.0.5\",\n \"Flask-SQLAlchemy>=2.0\",\n \"Flask-Versioned>=0.9.4\",\n \"future>=0.18.2;python_version<'3.0'\",\n \"huey[redis]>=1.11.0\",\n \"ldap3>=2.6\",\n \"netaddr>=0.7.12\",\n \"oauth2client>=2.0.1\",\n \"passlib[bcrypt]>=1.7.0\",\n \"Pillow>=6.2.1\",\n \"pydash>=4.7.4\",\n \"PyJWT>=1.3.0\",\n \"PyMySQL>=0.6.6\",\n \"pyOpenSSL>=17.5\",\n \"pyrad>=2.0\",\n \"python-dateutil>=2.7.3\",\n \"python-gnupg>=0.4.4\",\n \"PyYAML>=5.1\",\n \"qrcode>=6.1\",\n \"requests>=2.7.0\",\n \"smpplib>=2.0\",\n \"SQLAlchemy>=1.3.0\",\n \"sqlsoup>=0.9.0\"]\n\n\ndef get_man_pages(dir):\n \"\"\"\n Get man pages in a directory.\n :param dir:\n :return: list of file names\n \"\"\"\n files = os.listdir(dir)\n r_files = []\n for file in files:\n if file.endswith(\".1\"):\n r_files.append(dir + \"/\" + file)\n return r_files\n\n\ndef get_scripts(dir):\n \"\"\"\n Get files that are executable\n :param dir:\n :return: list of file names\n \"\"\"\n files = os.listdir(dir)\n r_files = []\n for file in files:\n if os.stat(dir + \"/\" + file)[stat.ST_MODE] & stat.S_IEXEC:\n r_files.append(dir + \"/\" + file)\n return r_files\n\n\nsetup(\n 
name='privacyIDEA',\n version=VERSION,\n description='privacyIDEA: identity, multifactor authentication (OTP), '\n 'authorization, audit',\n author='privacyidea.org',\n license='AGPLv3',\n author_email='[email protected]',\n url='http://www.privacyidea.org',\n keywords='OTP, two factor authentication, management, security',\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.9.*',\n packages=find_packages(),\n scripts=[\"pi-manage\"] + get_scripts(\"tools\"),\n extras_require={\n 'doc': [\"Sphinx>=1.3.1\",\n \"sphinxcontrib-httpdomain>=1.3.0\",\n \"sphinxcontrib-plantuml>=0.18\"],\n 'test': [\"mock>=2.0.0\",\n \"pytest>=3.6.0\",\n \"pytest-cov>=2.5.1\",\n \"responses>=0.9.0\"],\n 'postgres': ['psycopg2>=2.8.3']\n },\n install_requires=install_requires,\n include_package_data=True,\n data_files=[('etc/privacyidea/',\n ['deploy/apache/privacyideaapp.wsgi',\n 'deploy/privacyidea/dictionary']),\n ('share/man/man1', get_man_pages(\"tools\")),\n ('lib/privacyidea/migrations',\n [\"migrations/alembic.ini\",\n \"migrations/env.py\",\n \"migrations/README\",\n \"migrations/script.py.mako\"]),\n ('lib/privacyidea/migrations/versions',\n get_file_list(\"migrations/versions/\")),\n ('lib/privacyidea/', ['requirements.txt'])\n ],\n classifiers=[\"Framework :: Flask\",\n \"License :: OSI Approved :: \"\n \"GNU Affero General Public License v3\",\n \"Programming Language :: Python\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Internet\",\n \"Topic :: Security\",\n \"Topic :: System ::\"\n \" Systems Administration :: Authentication/Directory\",\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8'\n ],\n zip_safe=False,\n long_description=get_file_contents('README.rst')\n)\n", "path": "setup.py"}]} |
gh_patches_debug_1060 | rasdani/github-patches | git_diff | numba__numba-8723 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`compile_ptx()` allows compilation of a kernel with a non-`void` return type
For example:
```python
from numba import cuda, int32
def f(x, y):
return x[0] + y[0]
ptx, resty = cuda.compile_ptx(f, (int32[::1], int32[::1]))
print(resty)
# int64
```
compiles, and generates PTX in which the function body returns nothing (heavily edited for clarity, but the idea is represented):
```assembly
.visible .entry f (
// args omitted for brevity
)
{
ret;
}
```
Usually we check that the kernel has a void return type in the `@cuda.jit` decorator, which is why this slips by in `compile_ptx`. The check should probably be pushed a bit deeper to cover both uses.
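A guard along these lines, run once the return type is known inside `compile_ptx`, would cover both paths (a sketch only — the helper name is made up here, and `types.void` is what the dispatcher path checks against):

```python
from numba.core import types

def check_kernel_return_type(resty, device):
    # A kernel (device=False) has no way to hand a return value back to the
    # caller, so any inferred return type other than void is a user error.
    if resty and not device and resty != types.void:
        raise TypeError("CUDA kernel must have void return type.")
```

With such a check in place, the snippet above would raise instead of silently producing a kernel that discards `x[0] + y[0]`.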
(cc @brandonwillard )
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numba/cuda/compiler.py`
Content:
```
1 from numba.core.typing.templates import ConcreteTemplate
2 from numba.core import types, typing, funcdesc, config, compiler
3 from numba.core.compiler import (sanitize_compile_result_entries, CompilerBase,
4 DefaultPassBuilder, Flags, Option,
5 CompileResult)
6 from numba.core.compiler_lock import global_compiler_lock
7 from numba.core.compiler_machinery import (LoweringPass, AnalysisPass,
8 PassManager, register_pass)
9 from numba.core.errors import NumbaInvalidConfigWarning, TypingError
10 from numba.core.typed_passes import (IRLegalization, NativeLowering,
11 AnnotateTypes)
12 from warnings import warn
13 from numba.cuda.api import get_current_device
14
15
16 def _nvvm_options_type(x):
17 if x is None:
18 return None
19
20 else:
21 assert isinstance(x, dict)
22 return x
23
24
25 class CUDAFlags(Flags):
26 nvvm_options = Option(
27 type=_nvvm_options_type,
28 default=None,
29 doc="NVVM options",
30 )
31
32
33 # The CUDACompileResult (CCR) has a specially-defined entry point equal to its
34 # id. This is because the entry point is used as a key into a dict of
35 # overloads by the base dispatcher. The id of the CCR is the only small and
36 # unique property of a CompileResult in the CUDA target (cf. the CPU target,
37 # which uses its entry_point, which is a pointer value).
38 #
39 # This does feel a little hackish, and there are two ways in which this could
40 # be improved:
41 #
42 # 1. We could change the core of Numba so that each CompileResult has its own
43 # unique ID that can be used as a key - e.g. a count, similar to the way in
44 # which types have unique counts.
45 # 2. At some future time when kernel launch uses a compiled function, the entry
46 # point will no longer need to be a synthetic value, but will instead be a
47 # pointer to the compiled function as in the CPU target.
48
49 class CUDACompileResult(CompileResult):
50 @property
51 def entry_point(self):
52 return id(self)
53
54
55 def cuda_compile_result(**entries):
56 entries = sanitize_compile_result_entries(entries)
57 return CUDACompileResult(**entries)
58
59
60 @register_pass(mutates_CFG=True, analysis_only=False)
61 class CUDABackend(LoweringPass):
62
63 _name = "cuda_backend"
64
65 def __init__(self):
66 LoweringPass.__init__(self)
67
68 def run_pass(self, state):
69 """
70 Back-end: Packages lowering output in a compile result
71 """
72 lowered = state['cr']
73 signature = typing.signature(state.return_type, *state.args)
74
75 state.cr = cuda_compile_result(
76 typing_context=state.typingctx,
77 target_context=state.targetctx,
78 typing_error=state.status.fail_reason,
79 type_annotation=state.type_annotation,
80 library=state.library,
81 call_helper=lowered.call_helper,
82 signature=signature,
83 fndesc=lowered.fndesc,
84 )
85 return True
86
87
88 @register_pass(mutates_CFG=False, analysis_only=False)
89 class CreateLibrary(LoweringPass):
90 """
91 Create a CUDACodeLibrary for the NativeLowering pass to populate. The
92 NativeLowering pass will create a code library if none exists, but we need
93 to set it up with nvvm_options from the flags if they are present.
94 """
95
96 _name = "create_library"
97
98 def __init__(self):
99 LoweringPass.__init__(self)
100
101 def run_pass(self, state):
102 codegen = state.targetctx.codegen()
103 name = state.func_id.func_qualname
104 nvvm_options = state.flags.nvvm_options
105 state.library = codegen.create_library(name, nvvm_options=nvvm_options)
106 # Enable object caching upfront so that the library can be serialized.
107 state.library.enable_object_caching()
108
109 return True
110
111
112 @register_pass(mutates_CFG=False, analysis_only=True)
113 class CUDALegalization(AnalysisPass):
114
115 _name = "cuda_legalization"
116
117 def __init__(self):
118 AnalysisPass.__init__(self)
119
120 def run_pass(self, state):
121 # Early return if NVVM 7
122 from numba.cuda.cudadrv.nvvm import NVVM
123 if NVVM().is_nvvm70:
124 return False
125 # NVVM < 7, need to check for charseq
126 typmap = state.typemap
127
128 def check_dtype(dtype):
129 if isinstance(dtype, (types.UnicodeCharSeq, types.CharSeq)):
130 msg = (f"{k} is a char sequence type. This type is not "
131 "supported with CUDA toolkit versions < 11.2. To "
132 "use this type, you need to update your CUDA "
133 "toolkit - try 'conda install cudatoolkit=11' if "
134 "you are using conda to manage your environment.")
135 raise TypingError(msg)
136 elif isinstance(dtype, types.Record):
137 for subdtype in dtype.fields.items():
138 # subdtype is a (name, _RecordField) pair
139 check_dtype(subdtype[1].type)
140
141 for k, v in typmap.items():
142 if isinstance(v, types.Array):
143 check_dtype(v.dtype)
144 return False
145
146
147 class CUDACompiler(CompilerBase):
148 def define_pipelines(self):
149 dpb = DefaultPassBuilder
150 pm = PassManager('cuda')
151
152 untyped_passes = dpb.define_untyped_pipeline(self.state)
153 pm.passes.extend(untyped_passes.passes)
154
155 typed_passes = dpb.define_typed_pipeline(self.state)
156 pm.passes.extend(typed_passes.passes)
157 pm.add_pass(CUDALegalization, "CUDA legalization")
158
159 lowering_passes = self.define_cuda_lowering_pipeline(self.state)
160 pm.passes.extend(lowering_passes.passes)
161
162 pm.finalize()
163 return [pm]
164
165 def define_cuda_lowering_pipeline(self, state):
166 pm = PassManager('cuda_lowering')
167 # legalise
168 pm.add_pass(IRLegalization,
169 "ensure IR is legal prior to lowering")
170 pm.add_pass(AnnotateTypes, "annotate types")
171
172 # lower
173 pm.add_pass(CreateLibrary, "create library")
174 pm.add_pass(NativeLowering, "native lowering")
175 pm.add_pass(CUDABackend, "cuda backend")
176
177 pm.finalize()
178 return pm
179
180
181 @global_compiler_lock
182 def compile_cuda(pyfunc, return_type, args, debug=False, lineinfo=False,
183 inline=False, fastmath=False, nvvm_options=None):
184 from .descriptor import cuda_target
185 typingctx = cuda_target.typing_context
186 targetctx = cuda_target.target_context
187
188 flags = CUDAFlags()
189 # Do not compile (generate native code), just lower (to LLVM)
190 flags.no_compile = True
191 flags.no_cpython_wrapper = True
192 flags.no_cfunc_wrapper = True
193 if debug or lineinfo:
194 # Note both debug and lineinfo turn on debug information in the
195 # compiled code, but we keep them separate arguments in case we
196 # later want to overload some other behavior on the debug flag.
197 # In particular, -opt=3 is not supported with -g.
198 flags.debuginfo = True
199 flags.error_model = 'python'
200 else:
201 flags.error_model = 'numpy'
202 if inline:
203 flags.forceinline = True
204 if fastmath:
205 flags.fastmath = True
206 if nvvm_options:
207 flags.nvvm_options = nvvm_options
208
209 # Run compilation pipeline
210 from numba.core.target_extension import target_override
211 with target_override('cuda'):
212 cres = compiler.compile_extra(typingctx=typingctx,
213 targetctx=targetctx,
214 func=pyfunc,
215 args=args,
216 return_type=return_type,
217 flags=flags,
218 locals={},
219 pipeline_class=CUDACompiler)
220
221 library = cres.library
222 library.finalize()
223
224 return cres
225
226
227 @global_compiler_lock
228 def compile_ptx(pyfunc, args, debug=False, lineinfo=False, device=False,
229 fastmath=False, cc=None, opt=True):
230 """Compile a Python function to PTX for a given set of argument types.
231
232 :param pyfunc: The Python function to compile.
233 :param args: A tuple of argument types to compile for.
234 :param debug: Whether to include debug info in the generated PTX.
235 :type debug: bool
236 :param lineinfo: Whether to include a line mapping from the generated PTX
237 to the source code. Usually this is used with optimized
238 code (since debug mode would automatically include this),
239 so we want debug info in the LLVM but only the line
240 mapping in the final PTX.
241 :type lineinfo: bool
242 :param device: Whether to compile a device function. Defaults to ``False``,
243 to compile global kernel functions.
244 :type device: bool
245 :param fastmath: Whether to enable fast math flags (ftz=1, prec_sqrt=0,
246 prec_div=, and fma=1)
247 :type fastmath: bool
248 :param cc: Compute capability to compile for, as a tuple ``(MAJOR, MINOR)``.
249 Defaults to ``(5, 3)``.
250 :type cc: tuple
251 :param opt: Enable optimizations. Defaults to ``True``.
252 :type opt: bool
253 :return: (ptx, resty): The PTX code and inferred return type
254 :rtype: tuple
255 """
256 if debug and opt:
257 msg = ("debug=True with opt=True (the default) "
258 "is not supported by CUDA. This may result in a crash"
259 " - set debug=False or opt=False.")
260 warn(NumbaInvalidConfigWarning(msg))
261
262 nvvm_options = {
263 'debug': debug,
264 'lineinfo': lineinfo,
265 'fastmath': fastmath,
266 'opt': 3 if opt else 0
267 }
268
269 cres = compile_cuda(pyfunc, None, args, debug=debug, lineinfo=lineinfo,
270 fastmath=fastmath,
271 nvvm_options=nvvm_options)
272 resty = cres.signature.return_type
273 if device:
274 lib = cres.library
275 else:
276 tgt = cres.target_context
277 code = pyfunc.__code__
278 filename = code.co_filename
279 linenum = code.co_firstlineno
280
281 lib, kernel = tgt.prepare_cuda_kernel(cres.library, cres.fndesc, debug,
282 nvvm_options, filename, linenum)
283
284 cc = cc or config.CUDA_DEFAULT_PTX_CC
285 ptx = lib.get_asm_str(cc=cc)
286 return ptx, resty
287
288
289 def compile_ptx_for_current_device(pyfunc, args, debug=False, lineinfo=False,
290 device=False, fastmath=False, opt=True):
291 """Compile a Python function to PTX for a given set of argument types for
292 the current device's compute capabilility. This calls :func:`compile_ptx`
293 with an appropriate ``cc`` value for the current device."""
294 cc = get_current_device().compute_capability
295 return compile_ptx(pyfunc, args, debug=debug, lineinfo=lineinfo,
296 device=device, fastmath=fastmath, cc=cc, opt=True)
297
298
299 def declare_device_function(name, restype, argtypes):
300 return declare_device_function_template(name, restype, argtypes).key
301
302
303 def declare_device_function_template(name, restype, argtypes):
304 from .descriptor import cuda_target
305 typingctx = cuda_target.typing_context
306 targetctx = cuda_target.target_context
307 sig = typing.signature(restype, *argtypes)
308 extfn = ExternFunction(name, sig)
309
310 class device_function_template(ConcreteTemplate):
311 key = extfn
312 cases = [sig]
313
314 fndesc = funcdesc.ExternalFunctionDescriptor(
315 name=name, restype=restype, argtypes=argtypes)
316 typingctx.insert_user_function(extfn, device_function_template)
317 targetctx.insert_user_function(extfn, fndesc)
318
319 return device_function_template
320
321
322 class ExternFunction(object):
323 def __init__(self, name, sig):
324 self.name = name
325 self.sig = sig
326
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numba/cuda/compiler.py b/numba/cuda/compiler.py
--- a/numba/cuda/compiler.py
+++ b/numba/cuda/compiler.py
@@ -270,6 +270,10 @@
fastmath=fastmath,
nvvm_options=nvvm_options)
resty = cres.signature.return_type
+
+ if resty and not device and resty != types.void:
+ raise TypeError("CUDA kernel must have void return type.")
+
if device:
lib = cres.library
else:
| {"golden_diff": "diff --git a/numba/cuda/compiler.py b/numba/cuda/compiler.py\n--- a/numba/cuda/compiler.py\n+++ b/numba/cuda/compiler.py\n@@ -270,6 +270,10 @@\n fastmath=fastmath,\n nvvm_options=nvvm_options)\n resty = cres.signature.return_type\n+\n+ if resty and not device and resty != types.void:\n+ raise TypeError(\"CUDA kernel must have void return type.\")\n+\n if device:\n lib = cres.library\n else:\n", "issue": "`compile_ptx()` allows compilation of kernel with non-`void` return type\nFor example:\r\n\r\n```python\r\nfrom numba import cuda, int32\r\n\r\ndef f(x, y):\r\n return x[0] + y[0]\r\n\r\nptx, resty = cuda.compile_ptx(f, (int32[::1], int32[::1]))\r\n\r\nprint(resty)\r\n# int64\r\n```\r\n\r\ncompiles, and generates PTX where the function body returns nothing (heavily edited for clarity, but the idea is represented:\r\n\r\n```assembly\r\n.visible .entry f (\r\n // args omitted for brevity\r\n)\r\n{\r\n\tret;\r\n}\r\n```\r\n\r\nUsually we check that the kernel has a void return type in the `@cuda.jit` decorator, which is why this slips by in `compile_ptx`. The check should probably be pushed a bit deeper to cover both uses.\r\n\r\n(cc @brandonwillard )\n", "before_files": [{"content": "from numba.core.typing.templates import ConcreteTemplate\nfrom numba.core import types, typing, funcdesc, config, compiler\nfrom numba.core.compiler import (sanitize_compile_result_entries, CompilerBase,\n DefaultPassBuilder, Flags, Option,\n CompileResult)\nfrom numba.core.compiler_lock import global_compiler_lock\nfrom numba.core.compiler_machinery import (LoweringPass, AnalysisPass,\n PassManager, register_pass)\nfrom numba.core.errors import NumbaInvalidConfigWarning, TypingError\nfrom numba.core.typed_passes import (IRLegalization, NativeLowering,\n AnnotateTypes)\nfrom warnings import warn\nfrom numba.cuda.api import get_current_device\n\n\ndef _nvvm_options_type(x):\n if x is None:\n return None\n\n else:\n assert isinstance(x, dict)\n return x\n\n\nclass CUDAFlags(Flags):\n nvvm_options = Option(\n type=_nvvm_options_type,\n default=None,\n doc=\"NVVM options\",\n )\n\n\n# The CUDACompileResult (CCR) has a specially-defined entry point equal to its\n# id. This is because the entry point is used as a key into a dict of\n# overloads by the base dispatcher. The id of the CCR is the only small and\n# unique property of a CompileResult in the CUDA target (cf. the CPU target,\n# which uses its entry_point, which is a pointer value).\n#\n# This does feel a little hackish, and there are two ways in which this could\n# be improved:\n#\n# 1. We could change the core of Numba so that each CompileResult has its own\n# unique ID that can be used as a key - e.g. a count, similar to the way in\n# which types have unique counts.\n# 2. 
At some future time when kernel launch uses a compiled function, the entry\n# point will no longer need to be a synthetic value, but will instead be a\n# pointer to the compiled function as in the CPU target.\n\nclass CUDACompileResult(CompileResult):\n @property\n def entry_point(self):\n return id(self)\n\n\ndef cuda_compile_result(**entries):\n entries = sanitize_compile_result_entries(entries)\n return CUDACompileResult(**entries)\n\n\n@register_pass(mutates_CFG=True, analysis_only=False)\nclass CUDABackend(LoweringPass):\n\n _name = \"cuda_backend\"\n\n def __init__(self):\n LoweringPass.__init__(self)\n\n def run_pass(self, state):\n \"\"\"\n Back-end: Packages lowering output in a compile result\n \"\"\"\n lowered = state['cr']\n signature = typing.signature(state.return_type, *state.args)\n\n state.cr = cuda_compile_result(\n typing_context=state.typingctx,\n target_context=state.targetctx,\n typing_error=state.status.fail_reason,\n type_annotation=state.type_annotation,\n library=state.library,\n call_helper=lowered.call_helper,\n signature=signature,\n fndesc=lowered.fndesc,\n )\n return True\n\n\n@register_pass(mutates_CFG=False, analysis_only=False)\nclass CreateLibrary(LoweringPass):\n \"\"\"\n Create a CUDACodeLibrary for the NativeLowering pass to populate. The\n NativeLowering pass will create a code library if none exists, but we need\n to set it up with nvvm_options from the flags if they are present.\n \"\"\"\n\n _name = \"create_library\"\n\n def __init__(self):\n LoweringPass.__init__(self)\n\n def run_pass(self, state):\n codegen = state.targetctx.codegen()\n name = state.func_id.func_qualname\n nvvm_options = state.flags.nvvm_options\n state.library = codegen.create_library(name, nvvm_options=nvvm_options)\n # Enable object caching upfront so that the library can be serialized.\n state.library.enable_object_caching()\n\n return True\n\n\n@register_pass(mutates_CFG=False, analysis_only=True)\nclass CUDALegalization(AnalysisPass):\n\n _name = \"cuda_legalization\"\n\n def __init__(self):\n AnalysisPass.__init__(self)\n\n def run_pass(self, state):\n # Early return if NVVM 7\n from numba.cuda.cudadrv.nvvm import NVVM\n if NVVM().is_nvvm70:\n return False\n # NVVM < 7, need to check for charseq\n typmap = state.typemap\n\n def check_dtype(dtype):\n if isinstance(dtype, (types.UnicodeCharSeq, types.CharSeq)):\n msg = (f\"{k} is a char sequence type. This type is not \"\n \"supported with CUDA toolkit versions < 11.2. 
To \"\n \"use this type, you need to update your CUDA \"\n \"toolkit - try 'conda install cudatoolkit=11' if \"\n \"you are using conda to manage your environment.\")\n raise TypingError(msg)\n elif isinstance(dtype, types.Record):\n for subdtype in dtype.fields.items():\n # subdtype is a (name, _RecordField) pair\n check_dtype(subdtype[1].type)\n\n for k, v in typmap.items():\n if isinstance(v, types.Array):\n check_dtype(v.dtype)\n return False\n\n\nclass CUDACompiler(CompilerBase):\n def define_pipelines(self):\n dpb = DefaultPassBuilder\n pm = PassManager('cuda')\n\n untyped_passes = dpb.define_untyped_pipeline(self.state)\n pm.passes.extend(untyped_passes.passes)\n\n typed_passes = dpb.define_typed_pipeline(self.state)\n pm.passes.extend(typed_passes.passes)\n pm.add_pass(CUDALegalization, \"CUDA legalization\")\n\n lowering_passes = self.define_cuda_lowering_pipeline(self.state)\n pm.passes.extend(lowering_passes.passes)\n\n pm.finalize()\n return [pm]\n\n def define_cuda_lowering_pipeline(self, state):\n pm = PassManager('cuda_lowering')\n # legalise\n pm.add_pass(IRLegalization,\n \"ensure IR is legal prior to lowering\")\n pm.add_pass(AnnotateTypes, \"annotate types\")\n\n # lower\n pm.add_pass(CreateLibrary, \"create library\")\n pm.add_pass(NativeLowering, \"native lowering\")\n pm.add_pass(CUDABackend, \"cuda backend\")\n\n pm.finalize()\n return pm\n\n\n@global_compiler_lock\ndef compile_cuda(pyfunc, return_type, args, debug=False, lineinfo=False,\n inline=False, fastmath=False, nvvm_options=None):\n from .descriptor import cuda_target\n typingctx = cuda_target.typing_context\n targetctx = cuda_target.target_context\n\n flags = CUDAFlags()\n # Do not compile (generate native code), just lower (to LLVM)\n flags.no_compile = True\n flags.no_cpython_wrapper = True\n flags.no_cfunc_wrapper = True\n if debug or lineinfo:\n # Note both debug and lineinfo turn on debug information in the\n # compiled code, but we keep them separate arguments in case we\n # later want to overload some other behavior on the debug flag.\n # In particular, -opt=3 is not supported with -g.\n flags.debuginfo = True\n flags.error_model = 'python'\n else:\n flags.error_model = 'numpy'\n if inline:\n flags.forceinline = True\n if fastmath:\n flags.fastmath = True\n if nvvm_options:\n flags.nvvm_options = nvvm_options\n\n # Run compilation pipeline\n from numba.core.target_extension import target_override\n with target_override('cuda'):\n cres = compiler.compile_extra(typingctx=typingctx,\n targetctx=targetctx,\n func=pyfunc,\n args=args,\n return_type=return_type,\n flags=flags,\n locals={},\n pipeline_class=CUDACompiler)\n\n library = cres.library\n library.finalize()\n\n return cres\n\n\n@global_compiler_lock\ndef compile_ptx(pyfunc, args, debug=False, lineinfo=False, device=False,\n fastmath=False, cc=None, opt=True):\n \"\"\"Compile a Python function to PTX for a given set of argument types.\n\n :param pyfunc: The Python function to compile.\n :param args: A tuple of argument types to compile for.\n :param debug: Whether to include debug info in the generated PTX.\n :type debug: bool\n :param lineinfo: Whether to include a line mapping from the generated PTX\n to the source code. Usually this is used with optimized\n code (since debug mode would automatically include this),\n so we want debug info in the LLVM but only the line\n mapping in the final PTX.\n :type lineinfo: bool\n :param device: Whether to compile a device function. 
Defaults to ``False``,\n to compile global kernel functions.\n :type device: bool\n :param fastmath: Whether to enable fast math flags (ftz=1, prec_sqrt=0,\n prec_div=, and fma=1)\n :type fastmath: bool\n :param cc: Compute capability to compile for, as a tuple ``(MAJOR, MINOR)``.\n Defaults to ``(5, 3)``.\n :type cc: tuple\n :param opt: Enable optimizations. Defaults to ``True``.\n :type opt: bool\n :return: (ptx, resty): The PTX code and inferred return type\n :rtype: tuple\n \"\"\"\n if debug and opt:\n msg = (\"debug=True with opt=True (the default) \"\n \"is not supported by CUDA. This may result in a crash\"\n \" - set debug=False or opt=False.\")\n warn(NumbaInvalidConfigWarning(msg))\n\n nvvm_options = {\n 'debug': debug,\n 'lineinfo': lineinfo,\n 'fastmath': fastmath,\n 'opt': 3 if opt else 0\n }\n\n cres = compile_cuda(pyfunc, None, args, debug=debug, lineinfo=lineinfo,\n fastmath=fastmath,\n nvvm_options=nvvm_options)\n resty = cres.signature.return_type\n if device:\n lib = cres.library\n else:\n tgt = cres.target_context\n code = pyfunc.__code__\n filename = code.co_filename\n linenum = code.co_firstlineno\n\n lib, kernel = tgt.prepare_cuda_kernel(cres.library, cres.fndesc, debug,\n nvvm_options, filename, linenum)\n\n cc = cc or config.CUDA_DEFAULT_PTX_CC\n ptx = lib.get_asm_str(cc=cc)\n return ptx, resty\n\n\ndef compile_ptx_for_current_device(pyfunc, args, debug=False, lineinfo=False,\n device=False, fastmath=False, opt=True):\n \"\"\"Compile a Python function to PTX for a given set of argument types for\n the current device's compute capabilility. This calls :func:`compile_ptx`\n with an appropriate ``cc`` value for the current device.\"\"\"\n cc = get_current_device().compute_capability\n return compile_ptx(pyfunc, args, debug=debug, lineinfo=lineinfo,\n device=device, fastmath=fastmath, cc=cc, opt=True)\n\n\ndef declare_device_function(name, restype, argtypes):\n return declare_device_function_template(name, restype, argtypes).key\n\n\ndef declare_device_function_template(name, restype, argtypes):\n from .descriptor import cuda_target\n typingctx = cuda_target.typing_context\n targetctx = cuda_target.target_context\n sig = typing.signature(restype, *argtypes)\n extfn = ExternFunction(name, sig)\n\n class device_function_template(ConcreteTemplate):\n key = extfn\n cases = [sig]\n\n fndesc = funcdesc.ExternalFunctionDescriptor(\n name=name, restype=restype, argtypes=argtypes)\n typingctx.insert_user_function(extfn, device_function_template)\n targetctx.insert_user_function(extfn, fndesc)\n\n return device_function_template\n\n\nclass ExternFunction(object):\n def __init__(self, name, sig):\n self.name = name\n self.sig = sig\n", "path": "numba/cuda/compiler.py"}], "after_files": [{"content": "from numba.core.typing.templates import ConcreteTemplate\nfrom numba.core import types, typing, funcdesc, config, compiler\nfrom numba.core.compiler import (sanitize_compile_result_entries, CompilerBase,\n DefaultPassBuilder, Flags, Option,\n CompileResult)\nfrom numba.core.compiler_lock import global_compiler_lock\nfrom numba.core.compiler_machinery import (LoweringPass, AnalysisPass,\n PassManager, register_pass)\nfrom numba.core.errors import NumbaInvalidConfigWarning, TypingError\nfrom numba.core.typed_passes import (IRLegalization, NativeLowering,\n AnnotateTypes)\nfrom warnings import warn\nfrom numba.cuda.api import get_current_device\n\n\ndef _nvvm_options_type(x):\n if x is None:\n return None\n\n else:\n assert isinstance(x, dict)\n return x\n\n\nclass CUDAFlags(Flags):\n 
nvvm_options = Option(\n type=_nvvm_options_type,\n default=None,\n doc=\"NVVM options\",\n )\n\n\n# The CUDACompileResult (CCR) has a specially-defined entry point equal to its\n# id. This is because the entry point is used as a key into a dict of\n# overloads by the base dispatcher. The id of the CCR is the only small and\n# unique property of a CompileResult in the CUDA target (cf. the CPU target,\n# which uses its entry_point, which is a pointer value).\n#\n# This does feel a little hackish, and there are two ways in which this could\n# be improved:\n#\n# 1. We could change the core of Numba so that each CompileResult has its own\n# unique ID that can be used as a key - e.g. a count, similar to the way in\n# which types have unique counts.\n# 2. At some future time when kernel launch uses a compiled function, the entry\n# point will no longer need to be a synthetic value, but will instead be a\n# pointer to the compiled function as in the CPU target.\n\nclass CUDACompileResult(CompileResult):\n @property\n def entry_point(self):\n return id(self)\n\n\ndef cuda_compile_result(**entries):\n entries = sanitize_compile_result_entries(entries)\n return CUDACompileResult(**entries)\n\n\n@register_pass(mutates_CFG=True, analysis_only=False)\nclass CUDABackend(LoweringPass):\n\n _name = \"cuda_backend\"\n\n def __init__(self):\n LoweringPass.__init__(self)\n\n def run_pass(self, state):\n \"\"\"\n Back-end: Packages lowering output in a compile result\n \"\"\"\n lowered = state['cr']\n signature = typing.signature(state.return_type, *state.args)\n\n state.cr = cuda_compile_result(\n typing_context=state.typingctx,\n target_context=state.targetctx,\n typing_error=state.status.fail_reason,\n type_annotation=state.type_annotation,\n library=state.library,\n call_helper=lowered.call_helper,\n signature=signature,\n fndesc=lowered.fndesc,\n )\n return True\n\n\n@register_pass(mutates_CFG=False, analysis_only=False)\nclass CreateLibrary(LoweringPass):\n \"\"\"\n Create a CUDACodeLibrary for the NativeLowering pass to populate. The\n NativeLowering pass will create a code library if none exists, but we need\n to set it up with nvvm_options from the flags if they are present.\n \"\"\"\n\n _name = \"create_library\"\n\n def __init__(self):\n LoweringPass.__init__(self)\n\n def run_pass(self, state):\n codegen = state.targetctx.codegen()\n name = state.func_id.func_qualname\n nvvm_options = state.flags.nvvm_options\n state.library = codegen.create_library(name, nvvm_options=nvvm_options)\n # Enable object caching upfront so that the library can be serialized.\n state.library.enable_object_caching()\n\n return True\n\n\n@register_pass(mutates_CFG=False, analysis_only=True)\nclass CUDALegalization(AnalysisPass):\n\n _name = \"cuda_legalization\"\n\n def __init__(self):\n AnalysisPass.__init__(self)\n\n def run_pass(self, state):\n # Early return if NVVM 7\n from numba.cuda.cudadrv.nvvm import NVVM\n if NVVM().is_nvvm70:\n return False\n # NVVM < 7, need to check for charseq\n typmap = state.typemap\n\n def check_dtype(dtype):\n if isinstance(dtype, (types.UnicodeCharSeq, types.CharSeq)):\n msg = (f\"{k} is a char sequence type. This type is not \"\n \"supported with CUDA toolkit versions < 11.2. 
To \"\n \"use this type, you need to update your CUDA \"\n \"toolkit - try 'conda install cudatoolkit=11' if \"\n \"you are using conda to manage your environment.\")\n raise TypingError(msg)\n elif isinstance(dtype, types.Record):\n for subdtype in dtype.fields.items():\n # subdtype is a (name, _RecordField) pair\n check_dtype(subdtype[1].type)\n\n for k, v in typmap.items():\n if isinstance(v, types.Array):\n check_dtype(v.dtype)\n return False\n\n\nclass CUDACompiler(CompilerBase):\n def define_pipelines(self):\n dpb = DefaultPassBuilder\n pm = PassManager('cuda')\n\n untyped_passes = dpb.define_untyped_pipeline(self.state)\n pm.passes.extend(untyped_passes.passes)\n\n typed_passes = dpb.define_typed_pipeline(self.state)\n pm.passes.extend(typed_passes.passes)\n pm.add_pass(CUDALegalization, \"CUDA legalization\")\n\n lowering_passes = self.define_cuda_lowering_pipeline(self.state)\n pm.passes.extend(lowering_passes.passes)\n\n pm.finalize()\n return [pm]\n\n def define_cuda_lowering_pipeline(self, state):\n pm = PassManager('cuda_lowering')\n # legalise\n pm.add_pass(IRLegalization,\n \"ensure IR is legal prior to lowering\")\n pm.add_pass(AnnotateTypes, \"annotate types\")\n\n # lower\n pm.add_pass(CreateLibrary, \"create library\")\n pm.add_pass(NativeLowering, \"native lowering\")\n pm.add_pass(CUDABackend, \"cuda backend\")\n\n pm.finalize()\n return pm\n\n\n@global_compiler_lock\ndef compile_cuda(pyfunc, return_type, args, debug=False, lineinfo=False,\n inline=False, fastmath=False, nvvm_options=None):\n from .descriptor import cuda_target\n typingctx = cuda_target.typing_context\n targetctx = cuda_target.target_context\n\n flags = CUDAFlags()\n # Do not compile (generate native code), just lower (to LLVM)\n flags.no_compile = True\n flags.no_cpython_wrapper = True\n flags.no_cfunc_wrapper = True\n if debug or lineinfo:\n # Note both debug and lineinfo turn on debug information in the\n # compiled code, but we keep them separate arguments in case we\n # later want to overload some other behavior on the debug flag.\n # In particular, -opt=3 is not supported with -g.\n flags.debuginfo = True\n flags.error_model = 'python'\n else:\n flags.error_model = 'numpy'\n if inline:\n flags.forceinline = True\n if fastmath:\n flags.fastmath = True\n if nvvm_options:\n flags.nvvm_options = nvvm_options\n\n # Run compilation pipeline\n from numba.core.target_extension import target_override\n with target_override('cuda'):\n cres = compiler.compile_extra(typingctx=typingctx,\n targetctx=targetctx,\n func=pyfunc,\n args=args,\n return_type=return_type,\n flags=flags,\n locals={},\n pipeline_class=CUDACompiler)\n\n library = cres.library\n library.finalize()\n\n return cres\n\n\n@global_compiler_lock\ndef compile_ptx(pyfunc, args, debug=False, lineinfo=False, device=False,\n fastmath=False, cc=None, opt=True):\n \"\"\"Compile a Python function to PTX for a given set of argument types.\n\n :param pyfunc: The Python function to compile.\n :param args: A tuple of argument types to compile for.\n :param debug: Whether to include debug info in the generated PTX.\n :type debug: bool\n :param lineinfo: Whether to include a line mapping from the generated PTX\n to the source code. Usually this is used with optimized\n code (since debug mode would automatically include this),\n so we want debug info in the LLVM but only the line\n mapping in the final PTX.\n :type lineinfo: bool\n :param device: Whether to compile a device function. 
Defaults to ``False``,\n to compile global kernel functions.\n :type device: bool\n :param fastmath: Whether to enable fast math flags (ftz=1, prec_sqrt=0,\n prec_div=, and fma=1)\n :type fastmath: bool\n :param cc: Compute capability to compile for, as a tuple ``(MAJOR, MINOR)``.\n Defaults to ``(5, 3)``.\n :type cc: tuple\n :param opt: Enable optimizations. Defaults to ``True``.\n :type opt: bool\n :return: (ptx, resty): The PTX code and inferred return type\n :rtype: tuple\n \"\"\"\n if debug and opt:\n msg = (\"debug=True with opt=True (the default) \"\n \"is not supported by CUDA. This may result in a crash\"\n \" - set debug=False or opt=False.\")\n warn(NumbaInvalidConfigWarning(msg))\n\n nvvm_options = {\n 'debug': debug,\n 'lineinfo': lineinfo,\n 'fastmath': fastmath,\n 'opt': 3 if opt else 0\n }\n\n cres = compile_cuda(pyfunc, None, args, debug=debug, lineinfo=lineinfo,\n fastmath=fastmath,\n nvvm_options=nvvm_options)\n resty = cres.signature.return_type\n\n if resty and not device and resty != types.void:\n raise TypeError(\"CUDA kernel must have void return type.\")\n\n if device:\n lib = cres.library\n else:\n tgt = cres.target_context\n code = pyfunc.__code__\n filename = code.co_filename\n linenum = code.co_firstlineno\n\n lib, kernel = tgt.prepare_cuda_kernel(cres.library, cres.fndesc, debug,\n nvvm_options, filename, linenum)\n\n cc = cc or config.CUDA_DEFAULT_PTX_CC\n ptx = lib.get_asm_str(cc=cc)\n return ptx, resty\n\n\ndef compile_ptx_for_current_device(pyfunc, args, debug=False, lineinfo=False,\n device=False, fastmath=False, opt=True):\n \"\"\"Compile a Python function to PTX for a given set of argument types for\n the current device's compute capabilility. This calls :func:`compile_ptx`\n with an appropriate ``cc`` value for the current device.\"\"\"\n cc = get_current_device().compute_capability\n return compile_ptx(pyfunc, args, debug=debug, lineinfo=lineinfo,\n device=device, fastmath=fastmath, cc=cc, opt=True)\n\n\ndef declare_device_function(name, restype, argtypes):\n return declare_device_function_template(name, restype, argtypes).key\n\n\ndef declare_device_function_template(name, restype, argtypes):\n from .descriptor import cuda_target\n typingctx = cuda_target.typing_context\n targetctx = cuda_target.target_context\n sig = typing.signature(restype, *argtypes)\n extfn = ExternFunction(name, sig)\n\n class device_function_template(ConcreteTemplate):\n key = extfn\n cases = [sig]\n\n fndesc = funcdesc.ExternalFunctionDescriptor(\n name=name, restype=restype, argtypes=argtypes)\n typingctx.insert_user_function(extfn, device_function_template)\n targetctx.insert_user_function(extfn, fndesc)\n\n return device_function_template\n\n\nclass ExternFunction(object):\n def __init__(self, name, sig):\n self.name = name\n self.sig = sig\n", "path": "numba/cuda/compiler.py"}]} |
gh_patches_debug_1061 | rasdani/github-patches | git_diff | WeblateOrg__weblate-10794 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Syntax highlighting of search input
### Describe the issue
1. Go to a screenshot
2. Enter "not found" as the search term
3. A lot of strings appear as search results, most of them not containing anything related to "not found"
If I enter "not" or "found" then fewer results are found compared to "not found".
### I already tried
- [X] I've read and searched [the documentation](https://docs.weblate.org/).
- [X] I've searched for similar issues in this repository.
### Steps to reproduce the behavior
1. Go to a screenshot
2. Enter "not found" as the search term
3. A lot of strings appear as search results, most of them not containing anything related to "not found"
### Expected behavior
Search only lists strings containing "not found"
### Screenshots

### Exception traceback
_No response_
### How do you run Weblate?
weblate.org service
### Weblate versions
_No response_
### Weblate deploy checks
_No response_
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `weblate/utils/forms.py`
Content:
```
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 from crispy_forms.layout import Div, Field
6 from crispy_forms.utils import TEMPLATE_PACK
7 from django import forms
8 from django.core.exceptions import ValidationError
9 from django.db.models import Q
10 from django.forms.models import ModelChoiceIterator
11 from django.template.loader import render_to_string
12 from django.utils.translation import gettext, gettext_lazy
13
14 from weblate.trans.defines import EMAIL_LENGTH, USERNAME_LENGTH
15 from weblate.trans.filter import FILTERS
16 from weblate.trans.util import sort_unicode
17 from weblate.utils.errors import report_error
18 from weblate.utils.search import parse_query
19 from weblate.utils.validators import validate_email, validate_username
20
21
22 class QueryField(forms.CharField):
23 def __init__(self, parser: str = "unit", **kwargs):
24 if "label" not in kwargs:
25 kwargs["label"] = gettext_lazy("Query")
26 if "required" not in kwargs:
27 kwargs["required"] = False
28 self.parser = parser
29 super().__init__(**kwargs)
30
31 def clean(self, value):
32 if not value:
33 if self.required:
34 raise ValidationError(gettext("Missing query string."))
35 return ""
36 try:
37 parse_query(value, parser=self.parser)
38 except ValueError as error:
39 raise ValidationError(
40 gettext("Could not parse query string: {}").format(error)
41 ) from error
42 except Exception as error:
43 report_error(cause="Error parsing search query")
44 raise ValidationError(
45 gettext("Could not parse query string: {}").format(error)
46 ) from error
47 return value
48
49
50 class UsernameField(forms.CharField):
51 default_validators = [validate_username]
52
53 def __init__(self, *args, **kwargs):
54 params = {
55 "max_length": USERNAME_LENGTH,
56 "help_text": gettext_lazy(
57 "Username may only contain letters, "
58 "numbers or the following characters: @ . + - _"
59 ),
60 "label": gettext_lazy("Username"),
61 "required": True,
62 }
63 params.update(kwargs)
64 self.valid = None
65
66 super().__init__(*args, **params)
67
68
69 class UserField(forms.CharField):
70 def __init__(
71 self,
72 queryset=None,
73 empty_label="---------",
74 to_field_name=None,
75 limit_choices_to=None,
76 blank=None,
77 **kwargs,
78 ):
79 # This swallows some parameters to mimic ModelChoiceField API
80 super().__init__(**kwargs)
81
82 def widget_attrs(self, widget):
83 attrs = super().widget_attrs(widget)
84 attrs["dir"] = "ltr"
85 attrs["class"] = "user-autocomplete"
86 attrs["spellcheck"] = "false"
87 attrs["autocorrect"] = "off"
88 attrs["autocomplete"] = "off"
89 attrs["autocapitalize"] = "off"
90 return attrs
91
92 def clean(self, value):
93 from weblate.auth.models import User
94
95 if not value:
96 if self.required:
97 raise ValidationError(gettext("Missing username or e-mail."))
98 return None
99 try:
100 return User.objects.get(Q(username=value) | Q(email=value))
101 except User.DoesNotExist:
102 raise ValidationError(gettext("Could not find any such user."))
103 except User.MultipleObjectsReturned:
104 raise ValidationError(gettext("More possible users were found."))
105
106
107 class EmailField(forms.EmailField):
108 """
109 Slightly restricted EmailField.
110
111 We blacklist some additional local parts and customize error messages.
112 """
113
114 default_validators = [validate_email]
115
116 def __init__(self, *args, **kwargs):
117 kwargs.setdefault("max_length", EMAIL_LENGTH)
118 super().__init__(*args, **kwargs)
119
120
121 class SortedSelectMixin:
122 """Mixin for Select widgets to sort choices alphabetically."""
123
124 def optgroups(self, name, value, attrs=None):
125 groups = super().optgroups(name, value, attrs)
126 return sort_unicode(groups, lambda val: str(val[1][0]["label"]))
127
128
129 class ColorWidget(forms.RadioSelect):
130 def __init__(self, attrs=None, choices=()):
131 attrs = {**(attrs or {}), "class": "color_edit"}
132 super().__init__(attrs, choices)
133
134
135 class SortedSelectMultiple(SortedSelectMixin, forms.SelectMultiple):
136 """Wrapper class to sort choices alphabetically."""
137
138
139 class SortedSelect(SortedSelectMixin, forms.Select):
140 """Wrapper class to sort choices alphabetically."""
141
142
143 class ContextDiv(Div):
144 def __init__(self, *fields, **kwargs):
145 self.context = kwargs.pop("context", {})
146 super().__init__(*fields, **kwargs)
147
148 def render(self, form, context, template_pack=TEMPLATE_PACK, **kwargs):
149 template = self.get_template_name(template_pack)
150 return render_to_string(template, self.context)
151
152
153 class SearchField(Field):
154 def __init__(self, *args, **kwargs):
155 kwargs["template"] = "snippets/query-field.html"
156 super().__init__(*args, **kwargs)
157
158 def render(self, form, context, template_pack=TEMPLATE_PACK, **kwargs):
159 extra_context = {"custom_filter_list": self.get_search_query_choices()}
160 return super().render(form, context, template_pack, extra_context, **kwargs)
161
162 def get_search_query_choices(self):
163 """Return all filtering choices for query field."""
164 filter_keys = [
165 "nottranslated",
166 "todo",
167 "translated",
168 "fuzzy",
169 "suggestions",
170 "variants",
171 "screenshots",
172 "labels",
173 "context",
174 "nosuggestions",
175 "comments",
176 "allchecks",
177 "approved",
178 "unapproved",
179 ]
180 return [
181 (key, FILTERS.get_filter_name(key), FILTERS.get_filter_query(key))
182 for key in filter_keys
183 ]
184
185
186 class CachedQueryIterator(ModelChoiceIterator):
187 """
188 Choice iterator for cached querysets.
189
190 It assumes the queryset is reused and avoids using an iterator or counting queries.
191 """
192
193 def __iter__(self):
194 if self.field.empty_label is not None:
195 yield ("", self.field.empty_label)
196 for obj in self.queryset:
197 yield self.choice(obj)
198
199 def __len__(self):
200 return len(self.queryset) + (1 if self.field.empty_label is not None else 0)
201
202 def __bool__(self):
203 return self.field.empty_label is not None or bool(self.queryset)
204
205
206 class NonCopyingSetQuerysetMixin:
207 iterator = CachedQueryIterator
208
209 def _get_queryset(self):
210 return self._queryset
211
212 def _set_queryset(self, queryset):
213 self._queryset = queryset
214 self.widget.choices = self.choices
215
216 queryset = property(_get_queryset, _set_queryset)
217
218
219 class CachedModelChoiceField(NonCopyingSetQuerysetMixin, forms.ModelChoiceField):
220 pass
221
222
223 class CachedModelMultipleChoiceField(
224 NonCopyingSetQuerysetMixin, forms.ModelMultipleChoiceField
225 ):
226 pass
227
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/weblate/utils/forms.py b/weblate/utils/forms.py
--- a/weblate/utils/forms.py
+++ b/weblate/utils/forms.py
@@ -25,6 +25,8 @@
kwargs["label"] = gettext_lazy("Query")
if "required" not in kwargs:
kwargs["required"] = False
+ if "widget" not in kwargs:
+ kwargs["widget"] = forms.Textarea(attrs={"cols": None, "rows": 1})
self.parser = parser
super().__init__(**kwargs)
| {"golden_diff": "diff --git a/weblate/utils/forms.py b/weblate/utils/forms.py\n--- a/weblate/utils/forms.py\n+++ b/weblate/utils/forms.py\n@@ -25,6 +25,8 @@\n kwargs[\"label\"] = gettext_lazy(\"Query\")\n if \"required\" not in kwargs:\n kwargs[\"required\"] = False\n+ if \"widget\" not in kwargs:\n+ kwargs[\"widget\"] = forms.Textarea(attrs={\"cols\": None, \"rows\": 1})\n self.parser = parser\n super().__init__(**kwargs)\n", "issue": "Syntax highlighting of search input\n### Describe the issue\n\n1. Go to a screenshot\r\n2. Enter \"not found\" as the search term\r\n3. A lot of strings appear as search results, most of them not containing anything related to \"not found\"\r\n\r\n\r\nIf I enter \"not\" or \"found\" then fewer results are found compared to \"not found\".\n\n### I already tried\n\n- [X] I've read and searched [the documentation](https://docs.weblate.org/).\n- [X] I've searched for similar issues in this repository.\n\n### Steps to reproduce the behavior\n\n1. Go to a screenshot\r\n2. Enter \"not found\" as the search term\r\n3. A lot of strings appear as search results, most of them not containing anything related to \"not found\"\n\n### Expected behavior\n\nSearch only lists strings containing \"not found\"\n\n### Screenshots\n\n\r\n\n\n### Exception traceback\n\n_No response_\n\n### How do you run Weblate?\n\nweblate.org service\n\n### Weblate versions\n\n_No response_\n\n### Weblate deploy checks\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom crispy_forms.layout import Div, Field\nfrom crispy_forms.utils import TEMPLATE_PACK\nfrom django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import Q\nfrom django.forms.models import ModelChoiceIterator\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import gettext, gettext_lazy\n\nfrom weblate.trans.defines import EMAIL_LENGTH, USERNAME_LENGTH\nfrom weblate.trans.filter import FILTERS\nfrom weblate.trans.util import sort_unicode\nfrom weblate.utils.errors import report_error\nfrom weblate.utils.search import parse_query\nfrom weblate.utils.validators import validate_email, validate_username\n\n\nclass QueryField(forms.CharField):\n def __init__(self, parser: str = \"unit\", **kwargs):\n if \"label\" not in kwargs:\n kwargs[\"label\"] = gettext_lazy(\"Query\")\n if \"required\" not in kwargs:\n kwargs[\"required\"] = False\n self.parser = parser\n super().__init__(**kwargs)\n\n def clean(self, value):\n if not value:\n if self.required:\n raise ValidationError(gettext(\"Missing query string.\"))\n return \"\"\n try:\n parse_query(value, parser=self.parser)\n except ValueError as error:\n raise ValidationError(\n gettext(\"Could not parse query string: {}\").format(error)\n ) from error\n except Exception as error:\n report_error(cause=\"Error parsing search query\")\n raise ValidationError(\n gettext(\"Could not parse query string: {}\").format(error)\n ) from error\n return value\n\n\nclass UsernameField(forms.CharField):\n default_validators = [validate_username]\n\n def __init__(self, *args, **kwargs):\n params = {\n \"max_length\": USERNAME_LENGTH,\n \"help_text\": gettext_lazy(\n \"Username may only contain letters, \"\n \"numbers or the following characters: @ . 
+ - _\"\n ),\n \"label\": gettext_lazy(\"Username\"),\n \"required\": True,\n }\n params.update(kwargs)\n self.valid = None\n\n super().__init__(*args, **params)\n\n\nclass UserField(forms.CharField):\n def __init__(\n self,\n queryset=None,\n empty_label=\"---------\",\n to_field_name=None,\n limit_choices_to=None,\n blank=None,\n **kwargs,\n ):\n # This swallows some parameters to mimic ModelChoiceField API\n super().__init__(**kwargs)\n\n def widget_attrs(self, widget):\n attrs = super().widget_attrs(widget)\n attrs[\"dir\"] = \"ltr\"\n attrs[\"class\"] = \"user-autocomplete\"\n attrs[\"spellcheck\"] = \"false\"\n attrs[\"autocorrect\"] = \"off\"\n attrs[\"autocomplete\"] = \"off\"\n attrs[\"autocapitalize\"] = \"off\"\n return attrs\n\n def clean(self, value):\n from weblate.auth.models import User\n\n if not value:\n if self.required:\n raise ValidationError(gettext(\"Missing username or e-mail.\"))\n return None\n try:\n return User.objects.get(Q(username=value) | Q(email=value))\n except User.DoesNotExist:\n raise ValidationError(gettext(\"Could not find any such user.\"))\n except User.MultipleObjectsReturned:\n raise ValidationError(gettext(\"More possible users were found.\"))\n\n\nclass EmailField(forms.EmailField):\n \"\"\"\n Slightly restricted EmailField.\n\n We blacklist some additional local parts and customize error messages.\n \"\"\"\n\n default_validators = [validate_email]\n\n def __init__(self, *args, **kwargs):\n kwargs.setdefault(\"max_length\", EMAIL_LENGTH)\n super().__init__(*args, **kwargs)\n\n\nclass SortedSelectMixin:\n \"\"\"Mixin for Select widgets to sort choices alphabetically.\"\"\"\n\n def optgroups(self, name, value, attrs=None):\n groups = super().optgroups(name, value, attrs)\n return sort_unicode(groups, lambda val: str(val[1][0][\"label\"]))\n\n\nclass ColorWidget(forms.RadioSelect):\n def __init__(self, attrs=None, choices=()):\n attrs = {**(attrs or {}), \"class\": \"color_edit\"}\n super().__init__(attrs, choices)\n\n\nclass SortedSelectMultiple(SortedSelectMixin, forms.SelectMultiple):\n \"\"\"Wrapper class to sort choices alphabetically.\"\"\"\n\n\nclass SortedSelect(SortedSelectMixin, forms.Select):\n \"\"\"Wrapper class to sort choices alphabetically.\"\"\"\n\n\nclass ContextDiv(Div):\n def __init__(self, *fields, **kwargs):\n self.context = kwargs.pop(\"context\", {})\n super().__init__(*fields, **kwargs)\n\n def render(self, form, context, template_pack=TEMPLATE_PACK, **kwargs):\n template = self.get_template_name(template_pack)\n return render_to_string(template, self.context)\n\n\nclass SearchField(Field):\n def __init__(self, *args, **kwargs):\n kwargs[\"template\"] = \"snippets/query-field.html\"\n super().__init__(*args, **kwargs)\n\n def render(self, form, context, template_pack=TEMPLATE_PACK, **kwargs):\n extra_context = {\"custom_filter_list\": self.get_search_query_choices()}\n return super().render(form, context, template_pack, extra_context, **kwargs)\n\n def get_search_query_choices(self):\n \"\"\"Return all filtering choices for query field.\"\"\"\n filter_keys = [\n \"nottranslated\",\n \"todo\",\n \"translated\",\n \"fuzzy\",\n \"suggestions\",\n \"variants\",\n \"screenshots\",\n \"labels\",\n \"context\",\n \"nosuggestions\",\n \"comments\",\n \"allchecks\",\n \"approved\",\n \"unapproved\",\n ]\n return [\n (key, FILTERS.get_filter_name(key), FILTERS.get_filter_query(key))\n for key in filter_keys\n ]\n\n\nclass CachedQueryIterator(ModelChoiceIterator):\n \"\"\"\n Choice iterator for cached querysets.\n\n It assumes the 
queryset is reused and avoids using an iterator or counting queries.\n \"\"\"\n\n def __iter__(self):\n if self.field.empty_label is not None:\n yield (\"\", self.field.empty_label)\n for obj in self.queryset:\n yield self.choice(obj)\n\n def __len__(self):\n return len(self.queryset) + (1 if self.field.empty_label is not None else 0)\n\n def __bool__(self):\n return self.field.empty_label is not None or bool(self.queryset)\n\n\nclass NonCopyingSetQuerysetMixin:\n iterator = CachedQueryIterator\n\n def _get_queryset(self):\n return self._queryset\n\n def _set_queryset(self, queryset):\n self._queryset = queryset\n self.widget.choices = self.choices\n\n queryset = property(_get_queryset, _set_queryset)\n\n\nclass CachedModelChoiceField(NonCopyingSetQuerysetMixin, forms.ModelChoiceField):\n pass\n\n\nclass CachedModelMultipleChoiceField(\n NonCopyingSetQuerysetMixin, forms.ModelMultipleChoiceField\n):\n pass\n", "path": "weblate/utils/forms.py"}], "after_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom crispy_forms.layout import Div, Field\nfrom crispy_forms.utils import TEMPLATE_PACK\nfrom django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import Q\nfrom django.forms.models import ModelChoiceIterator\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import gettext, gettext_lazy\n\nfrom weblate.trans.defines import EMAIL_LENGTH, USERNAME_LENGTH\nfrom weblate.trans.filter import FILTERS\nfrom weblate.trans.util import sort_unicode\nfrom weblate.utils.errors import report_error\nfrom weblate.utils.search import parse_query\nfrom weblate.utils.validators import validate_email, validate_username\n\n\nclass QueryField(forms.CharField):\n def __init__(self, parser: str = \"unit\", **kwargs):\n if \"label\" not in kwargs:\n kwargs[\"label\"] = gettext_lazy(\"Query\")\n if \"required\" not in kwargs:\n kwargs[\"required\"] = False\n if \"widget\" not in kwargs:\n kwargs[\"widget\"] = forms.Textarea(attrs={\"cols\": None, \"rows\": 1})\n self.parser = parser\n super().__init__(**kwargs)\n\n def clean(self, value):\n if not value:\n if self.required:\n raise ValidationError(gettext(\"Missing query string.\"))\n return \"\"\n try:\n parse_query(value, parser=self.parser)\n except ValueError as error:\n raise ValidationError(\n gettext(\"Could not parse query string: {}\").format(error)\n ) from error\n except Exception as error:\n report_error(cause=\"Error parsing search query\")\n raise ValidationError(\n gettext(\"Could not parse query string: {}\").format(error)\n ) from error\n return value\n\n\nclass UsernameField(forms.CharField):\n default_validators = [validate_username]\n\n def __init__(self, *args, **kwargs):\n params = {\n \"max_length\": USERNAME_LENGTH,\n \"help_text\": gettext_lazy(\n \"Username may only contain letters, \"\n \"numbers or the following characters: @ . 
+ - _\"\n ),\n \"label\": gettext_lazy(\"Username\"),\n \"required\": True,\n }\n params.update(kwargs)\n self.valid = None\n\n super().__init__(*args, **params)\n\n\nclass UserField(forms.CharField):\n def __init__(\n self,\n queryset=None,\n empty_label=\"---------\",\n to_field_name=None,\n limit_choices_to=None,\n blank=None,\n **kwargs,\n ):\n # This swallows some parameters to mimic ModelChoiceField API\n super().__init__(**kwargs)\n\n def widget_attrs(self, widget):\n attrs = super().widget_attrs(widget)\n attrs[\"dir\"] = \"ltr\"\n attrs[\"class\"] = \"user-autocomplete\"\n attrs[\"spellcheck\"] = \"false\"\n attrs[\"autocorrect\"] = \"off\"\n attrs[\"autocomplete\"] = \"off\"\n attrs[\"autocapitalize\"] = \"off\"\n return attrs\n\n def clean(self, value):\n from weblate.auth.models import User\n\n if not value:\n if self.required:\n raise ValidationError(gettext(\"Missing username or e-mail.\"))\n return None\n try:\n return User.objects.get(Q(username=value) | Q(email=value))\n except User.DoesNotExist:\n raise ValidationError(gettext(\"Could not find any such user.\"))\n except User.MultipleObjectsReturned:\n raise ValidationError(gettext(\"More possible users were found.\"))\n\n\nclass EmailField(forms.EmailField):\n \"\"\"\n Slightly restricted EmailField.\n\n We blacklist some additional local parts and customize error messages.\n \"\"\"\n\n default_validators = [validate_email]\n\n def __init__(self, *args, **kwargs):\n kwargs.setdefault(\"max_length\", EMAIL_LENGTH)\n super().__init__(*args, **kwargs)\n\n\nclass SortedSelectMixin:\n \"\"\"Mixin for Select widgets to sort choices alphabetically.\"\"\"\n\n def optgroups(self, name, value, attrs=None):\n groups = super().optgroups(name, value, attrs)\n return sort_unicode(groups, lambda val: str(val[1][0][\"label\"]))\n\n\nclass ColorWidget(forms.RadioSelect):\n def __init__(self, attrs=None, choices=()):\n attrs = {**(attrs or {}), \"class\": \"color_edit\"}\n super().__init__(attrs, choices)\n\n\nclass SortedSelectMultiple(SortedSelectMixin, forms.SelectMultiple):\n \"\"\"Wrapper class to sort choices alphabetically.\"\"\"\n\n\nclass SortedSelect(SortedSelectMixin, forms.Select):\n \"\"\"Wrapper class to sort choices alphabetically.\"\"\"\n\n\nclass ContextDiv(Div):\n def __init__(self, *fields, **kwargs):\n self.context = kwargs.pop(\"context\", {})\n super().__init__(*fields, **kwargs)\n\n def render(self, form, context, template_pack=TEMPLATE_PACK, **kwargs):\n template = self.get_template_name(template_pack)\n return render_to_string(template, self.context)\n\n\nclass SearchField(Field):\n def __init__(self, *args, **kwargs):\n kwargs[\"template\"] = \"snippets/query-field.html\"\n super().__init__(*args, **kwargs)\n\n def render(self, form, context, template_pack=TEMPLATE_PACK, **kwargs):\n extra_context = {\"custom_filter_list\": self.get_search_query_choices()}\n return super().render(form, context, template_pack, extra_context, **kwargs)\n\n def get_search_query_choices(self):\n \"\"\"Return all filtering choices for query field.\"\"\"\n filter_keys = [\n \"nottranslated\",\n \"todo\",\n \"translated\",\n \"fuzzy\",\n \"suggestions\",\n \"variants\",\n \"screenshots\",\n \"labels\",\n \"context\",\n \"nosuggestions\",\n \"comments\",\n \"allchecks\",\n \"approved\",\n \"unapproved\",\n ]\n return [\n (key, FILTERS.get_filter_name(key), FILTERS.get_filter_query(key))\n for key in filter_keys\n ]\n\n\nclass CachedQueryIterator(ModelChoiceIterator):\n \"\"\"\n Choice iterator for cached querysets.\n\n It assumes the 
queryset is reused and avoids using an iterator or counting queries.\n \"\"\"\n\n def __iter__(self):\n if self.field.empty_label is not None:\n yield (\"\", self.field.empty_label)\n for obj in self.queryset:\n yield self.choice(obj)\n\n def __len__(self):\n return len(self.queryset) + (1 if self.field.empty_label is not None else 0)\n\n def __bool__(self):\n return self.field.empty_label is not None or bool(self.queryset)\n\n\nclass NonCopyingSetQuerysetMixin:\n iterator = CachedQueryIterator\n\n def _get_queryset(self):\n return self._queryset\n\n def _set_queryset(self, queryset):\n self._queryset = queryset\n self.widget.choices = self.choices\n\n queryset = property(_get_queryset, _set_queryset)\n\n\nclass CachedModelChoiceField(NonCopyingSetQuerysetMixin, forms.ModelChoiceField):\n pass\n\n\nclass CachedModelMultipleChoiceField(\n NonCopyingSetQuerysetMixin, forms.ModelMultipleChoiceField\n):\n pass\n", "path": "weblate/utils/forms.py"}]} |
gh_patches_debug_1062 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-1228 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't change filename when send document after upgrading to v11.1.0
### Steps to reproduce
1. Generate a pickle file "test" (I didn't test other common files yet)
2. Send this file to user
`bot.send_document(chat_id=user_chat_id, document=open('./test', 'rb'), filename="test")`
or
`bot.send_document(chat_id=user_chat_id, document=open('./test', 'rb'))`
### Expected behaviour
User will receive a file named **test**
### Actual behaviour
User received a file named **application.octet-stream**
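
The renaming seems to come from the filename fallback in `telegram/files/inputfile.py` (the full file is included below). A condensed view of that check (the second condition is what discards an extension-less name such as `test`):

```python
# Condensed from InputFile.__init__ as shipped in v11.1.0 (full source below);
# not a fix, just the two lines that produce the observed name.
if not self.filename or '.' not in self.filename:
    self.filename = self.mimetype.replace('/', '.')  # "application/octet-stream" -> "application.octet-stream"
```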
### Configuration
**Operating System:**
Debian (Server, where I first found this issue)
Ubuntu (Local; **I tested on v10.1.0 and everything was fine**, so I upgraded to v11.1.0 and then hit the same issue as on the Debian server)
**Version of Python, python-telegram-bot & dependencies:**
``$ python -m telegram``
*My Local Ubuntu After Upgrade:*
python-telegram-bot 11.1.0
certifi 2018.08.24
future 0.16.0
Python 3.6.6 (default, Sep 12 2018, 18:26:19) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
The pictures show the results of python-telegram-bot v10.1.0 (the first one) and v11.1.0 (the second one):

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `telegram/files/inputfile.py`
Content:
```
1 #!/usr/bin/env python
2 # pylint: disable=W0622,E0611
3 #
4 # A library that provides a Python interface to the Telegram Bot API
5 # Copyright (C) 2015-2018
6 # Leandro Toledo de Souza <[email protected]>
7 #
8 # This program is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU Lesser Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # This program is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU Lesser Public License for more details.
17 #
18 # You should have received a copy of the GNU Lesser Public License
19 # along with this program. If not, see [http://www.gnu.org/licenses/].
20 """This module contains an object that represents a Telegram InputFile."""
21
22 import imghdr
23 import mimetypes
24 import os
25 from uuid import uuid4
26
27 from telegram import TelegramError
28
29 DEFAULT_MIME_TYPE = 'application/octet-stream'
30
31
32 class InputFile(object):
33 """This object represents a Telegram InputFile.
34
35 Attributes:
36 input_file_content (:obj:`bytes`): The binaray content of the file to send.
37 filename (:obj:`str`): Optional, Filename for the file to be sent.
38 attach (:obj:`str`): Optional, attach id for sending multiple files.
39
40 Args:
41 obj (:obj:`File handler`): An open file descriptor.
42 filename (:obj:`str`, optional): Filename for this InputFile.
43 attach (:obj:`bool`, optional): Whether this should be send as one file or is part of a
44 collection of files.
45
46 Raises:
47 TelegramError
48
49 """
50
51 def __init__(self, obj, filename=None, attach=None):
52 self.filename = None
53 self.input_file_content = obj.read()
54 self.attach = 'attached' + uuid4().hex if attach else None
55
56 if filename:
57 self.filename = filename
58 elif (hasattr(obj, 'name') and
59 not isinstance(obj.name, int) and # py3
60 obj.name != '<fdopen>'): # py2
61 # on py2.7, pylint fails to understand this properly
62 # pylint: disable=E1101
63 self.filename = os.path.basename(obj.name)
64
65 try:
66 self.mimetype = self.is_image(self.input_file_content)
67 except TelegramError:
68 if self.filename:
69 self.mimetype = mimetypes.guess_type(
70 self.filename)[0] or DEFAULT_MIME_TYPE
71 else:
72 self.mimetype = DEFAULT_MIME_TYPE
73 if not self.filename or '.' not in self.filename:
74 self.filename = self.mimetype.replace('/', '.')
75
76 @property
77 def field_tuple(self):
78 return self.filename, self.input_file_content, self.mimetype
79
80 @staticmethod
81 def is_image(stream):
82 """Check if the content file is an image by analyzing its headers.
83
84 Args:
85 stream (:obj:`str`): A str representing the content of a file.
86
87 Returns:
88 :obj:`str`: The str mime-type of an image.
89
90 """
91 image = imghdr.what(None, stream)
92 if image:
93 return 'image/%s' % image
94
95 raise TelegramError('Could not parse file content')
96
97 @staticmethod
98 def is_file(obj):
99 return hasattr(obj, 'read')
100
101 def to_dict(self):
102 if self.attach:
103 return 'attach://' + self.attach
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/telegram/files/inputfile.py b/telegram/files/inputfile.py
--- a/telegram/files/inputfile.py
+++ b/telegram/files/inputfile.py
@@ -70,7 +70,7 @@
self.filename)[0] or DEFAULT_MIME_TYPE
else:
self.mimetype = DEFAULT_MIME_TYPE
- if not self.filename or '.' not in self.filename:
+ if not self.filename:
self.filename = self.mimetype.replace('/', '.')
@property
| {"golden_diff": "diff --git a/telegram/files/inputfile.py b/telegram/files/inputfile.py\n--- a/telegram/files/inputfile.py\n+++ b/telegram/files/inputfile.py\n@@ -70,7 +70,7 @@\n self.filename)[0] or DEFAULT_MIME_TYPE\n else:\n self.mimetype = DEFAULT_MIME_TYPE\n- if not self.filename or '.' not in self.filename:\n+ if not self.filename:\n self.filename = self.mimetype.replace('/', '.')\n \n @property\n", "issue": "Can't change filename when send document after upgrading to v11.1.0\n### Steps to reproduce\r\n1. Generate a pickle file \"test\" (I didn't test other common files yet)\r\n\r\n2. Send this file to user\r\n\r\n`bot.send_document(chat_id=user_chat_id, document=open('./test', 'rb'), filename=\"test\")`\r\n\r\nor\r\n\r\n`bot.send_document(chat_id=user_chat_id, document=open('./test', 'rb'))`\r\n\r\n### Expected behaviour\r\nUser will receive a file named **test**\r\n\r\n### Actual behaviour\r\nUser received a file named **application.octet-stream**\r\n\r\n### Configuration\r\n**Operating System:** \r\n\r\nDebian (Server, where I first found this issue)\r\n\r\nUbuntu(Local, **I test on v10.1.0, everything is fine**, so I upgrade to v11.1.0, then I have the same issue as Debian Server)\r\n\r\n**Version of Python, python-telegram-bot & dependencies:**\r\n\r\n``$ python -m telegram``\r\n\r\n*My Local Ubuntu After Upgrade:*\r\npython-telegram-bot 11.1.0\r\ncertifi 2018.08.24\r\nfuture 0.16.0\r\nPython 3.6.6 (default, Sep 12 2018, 18:26:19) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]\r\n\r\nThe pictures shows results of python-telegram-bot v10.1.0 (the first one) and v11.1.0 (the second one) :\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# pylint: disable=W0622,E0611\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. 
If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram InputFile.\"\"\"\n\nimport imghdr\nimport mimetypes\nimport os\nfrom uuid import uuid4\n\nfrom telegram import TelegramError\n\nDEFAULT_MIME_TYPE = 'application/octet-stream'\n\n\nclass InputFile(object):\n \"\"\"This object represents a Telegram InputFile.\n\n Attributes:\n input_file_content (:obj:`bytes`): The binaray content of the file to send.\n filename (:obj:`str`): Optional, Filename for the file to be sent.\n attach (:obj:`str`): Optional, attach id for sending multiple files.\n\n Args:\n obj (:obj:`File handler`): An open file descriptor.\n filename (:obj:`str`, optional): Filename for this InputFile.\n attach (:obj:`bool`, optional): Whether this should be send as one file or is part of a\n collection of files.\n\n Raises:\n TelegramError\n\n \"\"\"\n\n def __init__(self, obj, filename=None, attach=None):\n self.filename = None\n self.input_file_content = obj.read()\n self.attach = 'attached' + uuid4().hex if attach else None\n\n if filename:\n self.filename = filename\n elif (hasattr(obj, 'name') and\n not isinstance(obj.name, int) and # py3\n obj.name != '<fdopen>'): # py2\n # on py2.7, pylint fails to understand this properly\n # pylint: disable=E1101\n self.filename = os.path.basename(obj.name)\n\n try:\n self.mimetype = self.is_image(self.input_file_content)\n except TelegramError:\n if self.filename:\n self.mimetype = mimetypes.guess_type(\n self.filename)[0] or DEFAULT_MIME_TYPE\n else:\n self.mimetype = DEFAULT_MIME_TYPE\n if not self.filename or '.' not in self.filename:\n self.filename = self.mimetype.replace('/', '.')\n\n @property\n def field_tuple(self):\n return self.filename, self.input_file_content, self.mimetype\n\n @staticmethod\n def is_image(stream):\n \"\"\"Check if the content file is an image by analyzing its headers.\n\n Args:\n stream (:obj:`str`): A str representing the content of a file.\n\n Returns:\n :obj:`str`: The str mime-type of an image.\n\n \"\"\"\n image = imghdr.what(None, stream)\n if image:\n return 'image/%s' % image\n\n raise TelegramError('Could not parse file content')\n\n @staticmethod\n def is_file(obj):\n return hasattr(obj, 'read')\n\n def to_dict(self):\n if self.attach:\n return 'attach://' + self.attach\n", "path": "telegram/files/inputfile.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# pylint: disable=W0622,E0611\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. 
If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram InputFile.\"\"\"\n\nimport imghdr\nimport mimetypes\nimport os\nfrom uuid import uuid4\n\nfrom telegram import TelegramError\n\nDEFAULT_MIME_TYPE = 'application/octet-stream'\n\n\nclass InputFile(object):\n \"\"\"This object represents a Telegram InputFile.\n\n Attributes:\n input_file_content (:obj:`bytes`): The binaray content of the file to send.\n filename (:obj:`str`): Optional, Filename for the file to be sent.\n attach (:obj:`str`): Optional, attach id for sending multiple files.\n\n Args:\n obj (:obj:`File handler`): An open file descriptor.\n filename (:obj:`str`, optional): Filename for this InputFile.\n attach (:obj:`bool`, optional): Whether this should be send as one file or is part of a\n collection of files.\n\n Raises:\n TelegramError\n\n \"\"\"\n\n def __init__(self, obj, filename=None, attach=None):\n self.filename = None\n self.input_file_content = obj.read()\n self.attach = 'attached' + uuid4().hex if attach else None\n\n if filename:\n self.filename = filename\n elif (hasattr(obj, 'name') and\n not isinstance(obj.name, int) and # py3\n obj.name != '<fdopen>'): # py2\n # on py2.7, pylint fails to understand this properly\n # pylint: disable=E1101\n self.filename = os.path.basename(obj.name)\n\n try:\n self.mimetype = self.is_image(self.input_file_content)\n except TelegramError:\n if self.filename:\n self.mimetype = mimetypes.guess_type(\n self.filename)[0] or DEFAULT_MIME_TYPE\n else:\n self.mimetype = DEFAULT_MIME_TYPE\n if not self.filename:\n self.filename = self.mimetype.replace('/', '.')\n\n @property\n def field_tuple(self):\n return self.filename, self.input_file_content, self.mimetype\n\n @staticmethod\n def is_image(stream):\n \"\"\"Check if the content file is an image by analyzing its headers.\n\n Args:\n stream (:obj:`str`): A str representing the content of a file.\n\n Returns:\n :obj:`str`: The str mime-type of an image.\n\n \"\"\"\n image = imghdr.what(None, stream)\n if image:\n return 'image/%s' % image\n\n raise TelegramError('Could not parse file content')\n\n @staticmethod\n def is_file(obj):\n return hasattr(obj, 'read')\n\n def to_dict(self):\n if self.attach:\n return 'attach://' + self.attach\n", "path": "telegram/files/inputfile.py"}]} |
gh_patches_debug_1063 | rasdani/github-patches | git_diff | pypa__setuptools-4065 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Docs] Preview pop-up on a link covers the link itself
### Summary
I've come across an issue that breaks the online documentation completely for me. Whenever I move my mouse pointer over a link to a different part of the documentation, a pop-up appears that covers the link, making it near-impossible to click the link. See this screen recording of the situation where this first manifested for me:
https://github.com/pypa/setuptools/assets/50332/af946044-7222-4e2d-b090-c771be758598
(On this page: https://setuptools.pypa.io/en/latest/pkg_resources.html)
### OS / Environment
Safari 16.6, macOS 12.6.8.
### Additional Information
IMHO, as it presents itself to me, this feature has downsides that are orders of magnitude bigger than its upsides. My browser already allows me to preview a page by triple-tapping on the trackpad (macOS) or long-pressing (iOS), so it doesn't add a benefit on these platforms.
---
As an additional note: Even if this feature were implemented in a way that didn't make it impossible to click on some links, it would still be an accessibility issue for me:
I'm on the ADD spectrum and I use my mouse pointer for focussing while reading. It's very natural for me to move my pointer along the text while reading. Such an unavoidable popup will draw my focus away from what I am reading (because it appears when I'm not expecting it and haven't performed an explicit action to make it appear). I also have this issue on GitHub, for example, where some links have pop-ups that appear on mouse hover.
If you intend to keep these pop-ups, there is something you could do to make them a bit less intrusive for people like me (I can't speak for everyone on the ADD spectrum, of course): make the pop-up appear immediately when entering the link's region _and also_ disappear immediately when leaving the region, instead of after a short delay. For example, buttons and links that change appearance while hovering, or tool-tips in UIs that appear immediately, are much less distracting to me. I think my brain is more likely to associate my action with the appearance of the pop-up and thus be able to ignore the stimulus.
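
For what it's worth, the pop-ups appear to come from the `hoverxref.extension` entry in `docs/conf.py` (shown below). If you keep them, something along these lines might reduce the delay; the option name is my guess and would need to be checked against the sphinx-hoverxref documentation:

```python
# docs/conf.py sketch only; the option name is an assumption, please verify it
# against sphinx-hoverxref's documentation before relying on it.
hoverxref_tooltip_animation_duration = 0  # show/hide the pop-up without the fade delay
```

Alternatively, removing `'hoverxref.extension'` from `extensions` would drop the pop-ups entirely.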
But hey, thanks for your work anyways!
### Code of Conduct
- [X] I agree to follow the PSF Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 extensions = [
2 'sphinx.ext.autodoc',
3 'jaraco.packaging.sphinx',
4 ]
5
6 master_doc = "index"
7 html_theme = "furo"
8
9 # Link dates and other references in the changelog
10 extensions += ['rst.linker']
11 link_files = {
12 '../NEWS.rst': dict(
13 using=dict(
14 BB='https://bitbucket.org',
15 GH='https://github.com',
16 ),
17 replace=[
18 dict(
19 pattern=r'(Issue #|\B#)(?P<issue>\d+)',
20 url='{package_url}/issues/{issue}',
21 ),
22 dict(
23 pattern=r'(?m:^((?P<scm_version>v?\d+(\.\d+){1,2}))\n[-=]+\n)',
24 with_scm='{text}\n{rev[timestamp]:%d %b %Y}\n',
25 ),
26 dict(
27 pattern=r'PEP[- ](?P<pep_number>\d+)',
28 url='https://peps.python.org/pep-{pep_number:0>4}/',
29 ),
30 dict(
31 pattern=r'(?<!\w)PR #(?P<pull>\d+)',
32 url='{package_url}/pull/{pull}',
33 ),
34 dict(
35 pattern=r'BB Pull Request ?#(?P<bb_pull_request>\d+)',
36 url='{BB}/pypa/setuptools/pull-request/{bb_pull_request}',
37 ),
38 dict(
39 pattern=r'Distribute #(?P<distribute>\d+)',
40 url='{BB}/tarek/distribute/issue/{distribute}',
41 ),
42 dict(
43 pattern=r'Buildout #(?P<buildout>\d+)',
44 url='{GH}/buildout/buildout/issues/{buildout}',
45 ),
46 dict(
47 pattern=r'Old Setuptools #(?P<old_setuptools>\d+)',
48 url='http://bugs.python.org/setuptools/issue{old_setuptools}',
49 ),
50 dict(
51 pattern=r'Jython #(?P<jython>\d+)',
52 url='http://bugs.jython.org/issue{jython}',
53 ),
54 dict(
55 pattern=r'(Python #|bpo-)(?P<python>\d+)',
56 url='http://bugs.python.org/issue{python}',
57 ),
58 dict(
59 pattern=r'Interop #(?P<interop>\d+)',
60 url='{GH}/pypa/interoperability-peps/issues/{interop}',
61 ),
62 dict(
63 pattern=r'Pip #(?P<pip>\d+)',
64 url='{GH}/pypa/pip/issues/{pip}',
65 ),
66 dict(
67 pattern=r'Packaging #(?P<packaging>\d+)',
68 url='{GH}/pypa/packaging/issues/{packaging}',
69 ),
70 dict(
71 pattern=r'[Pp]ackaging (?P<packaging_ver>\d+(\.\d+)+)',
72 url='{GH}/pypa/packaging/blob/{packaging_ver}/CHANGELOG.rst',
73 ),
74 dict(
75 pattern=r'setuptools_svn #(?P<setuptools_svn>\d+)',
76 url='{GH}/jaraco/setuptools_svn/issues/{setuptools_svn}',
77 ),
78 dict(
79 pattern=r'pypa/(?P<issue_repo>[\-\.\w]+)#(?P<issue_number>\d+)',
80 url='{GH}/pypa/{issue_repo}/issues/{issue_number}',
81 ),
82 dict(
83 pattern=r'pypa/(?P<commit_repo>[\-\.\w]+)@(?P<commit_number>[\da-f]+)',
84 url='{GH}/pypa/{commit_repo}/commit/{commit_number}',
85 ),
86 ],
87 ),
88 }
89
90 # Be strict about any broken references
91 nitpicky = True
92
93 # Include Python intersphinx mapping to prevent failures
94 # jaraco/skeleton#51
95 extensions += ['sphinx.ext.intersphinx']
96 intersphinx_mapping = {
97 'python': ('https://docs.python.org/3', None),
98 }
99
100 # Preserve authored syntax for defaults
101 autodoc_preserve_defaults = True
102
103 intersphinx_mapping.update(
104 {
105 'pip': ('https://pip.pypa.io/en/latest', None),
106 'build': ('https://pypa-build.readthedocs.io/en/latest', None),
107 'PyPUG': ('https://packaging.python.org/en/latest/', None),
108 'packaging': ('https://packaging.pypa.io/en/latest/', None),
109 'twine': ('https://twine.readthedocs.io/en/stable/', None),
110 'importlib-resources': (
111 'https://importlib-resources.readthedocs.io/en/latest',
112 None,
113 ),
114 }
115 )
116
117 # Support tooltips on references
118 extensions += ['hoverxref.extension']
119 hoverxref_auto_ref = True
120 hoverxref_intersphinx = [
121 'python',
122 'pip',
123 'build',
124 'PyPUG',
125 'packaging',
126 'twine',
127 'importlib-resources',
128 ]
129
130 # Add support for linking usernames
131 github_url = 'https://github.com'
132 github_repo_org = 'pypa'
133 github_repo_name = 'setuptools'
134 github_repo_slug = f'{github_repo_org}/{github_repo_name}'
135 github_repo_url = f'{github_url}/{github_repo_slug}'
136 github_sponsors_url = f'{github_url}/sponsors'
137 extlinks = {
138 'user': (f'{github_sponsors_url}/%s', '@%s'), # noqa: WPS323
139 'pypi': ('https://pypi.org/project/%s', '%s'), # noqa: WPS323
140 'wiki': ('https://wikipedia.org/wiki/%s', '%s'), # noqa: WPS323
141 }
142 extensions += ['sphinx.ext.extlinks']
143
144 # Ref: https://github.com/python-attrs/attrs/pull/571/files\
145 # #diff-85987f48f1258d9ee486e3191495582dR82
146 default_role = 'any'
147
148 # HTML theme
149 html_theme = 'furo'
150 html_logo = "images/logo.svg"
151
152 html_theme_options = {
153 "sidebar_hide_name": True,
154 "light_css_variables": {
155 "color-brand-primary": "#336790", # "blue"
156 "color-brand-content": "#336790",
157 },
158 "dark_css_variables": {
159 "color-brand-primary": "#E5B62F", # "yellow"
160 "color-brand-content": "#E5B62F",
161 },
162 }
163
164 # Redirect old docs so links and references in the ecosystem don't break
165 extensions += ['sphinx_reredirects']
166 redirects = {
167 "userguide/keywords": "/deprecated/changed_keywords.html",
168 "userguide/commands": "/deprecated/commands.html",
169 }
170
171 # Add support for inline tabs
172 extensions += ['sphinx_inline_tabs']
173
174 # Support for distutils
175
176 # Ref: https://stackoverflow.com/a/30624034/595220
177 nitpick_ignore = [
178 ('c:func', 'SHGetSpecialFolderPath'), # ref to MS docs
179 ('envvar', 'DISTUTILS_DEBUG'), # undocumented
180 ('envvar', 'HOME'), # undocumented
181 ('envvar', 'PLAT'), # undocumented
182 ('envvar', 'DIST_EXTRA_CONFIG'), # undocumented
183 ('py:attr', 'CCompiler.language_map'), # undocumented
184 ('py:attr', 'CCompiler.language_order'), # undocumented
185 ('py:class', 'distutils.dist.Distribution'), # undocumented
186 ('py:class', 'distutils.extension.Extension'), # undocumented
187 ('py:class', 'BorlandCCompiler'), # undocumented
188 ('py:class', 'CCompiler'), # undocumented
189 ('py:class', 'CygwinCCompiler'), # undocumented
190 ('py:class', 'distutils.dist.DistributionMetadata'), # undocumented
191 ('py:class', 'FileList'), # undocumented
192 ('py:class', 'IShellLink'), # ref to MS docs
193 ('py:class', 'MSVCCompiler'), # undocumented
194 ('py:class', 'OptionDummy'), # undocumented
195 ('py:class', 'UnixCCompiler'), # undocumented
196 ('py:exc', 'CompileError'), # undocumented
197 ('py:exc', 'DistutilsExecError'), # undocumented
198 ('py:exc', 'DistutilsFileError'), # undocumented
199 ('py:exc', 'LibError'), # undocumented
200 ('py:exc', 'LinkError'), # undocumented
201 ('py:exc', 'PreprocessError'), # undocumented
202 ('py:exc', 'setuptools.errors.PlatformError'), # sphinx cannot find it
203 ('py:func', 'distutils.CCompiler.new_compiler'), # undocumented
204 # undocumented:
205 ('py:func', 'distutils.dist.DistributionMetadata.read_pkg_file'),
206 ('py:func', 'distutils.file_util._copy_file_contents'), # undocumented
207 ('py:func', 'distutils.log.debug'), # undocumented
208 ('py:func', 'distutils.spawn.find_executable'), # undocumented
209 ('py:func', 'distutils.spawn.spawn'), # undocumented
210 # TODO: check https://docutils.rtfd.io in the future
211 ('py:mod', 'docutils'), # there's no Sphinx site documenting this
212 ]
213
214 # Allow linking objects on other Sphinx sites seamlessly:
215 intersphinx_mapping.update(
216 python=('https://docs.python.org/3', None),
217 )
218
219 # Add support for the unreleased "next-version" change notes
220 extensions += ['sphinxcontrib.towncrier']
221 # Extension needs a path from here to the towncrier config.
222 towncrier_draft_working_directory = '..'
223 # Avoid an empty section for unpublished changes.
224 towncrier_draft_include_empty = False
225 # sphinx-contrib/sphinxcontrib-towncrier#81
226 towncrier_draft_config_path = 'towncrier.toml'
227
228 extensions += ['jaraco.tidelift']
229
230 # Add icons (aka "favicons") to documentation
231 extensions += ['sphinx_favicon']
232 html_static_path = ['images'] # should contain the folder with icons
233
234 # Add support for nice Not Found 404 pages
235 extensions += ['notfound.extension']
236
237 # List of dicts with <link> HTML attributes
238 # static-file points to files in the html_static_path (href is computed)
239 favicons = [
240 { # "Catch-all" goes first, otherwise some browsers will overwrite
241 "rel": "icon",
242 "type": "image/svg+xml",
243 "static-file": "logo-symbol-only.svg",
244 "sizes": "any",
245 },
246 { # Version with thicker strokes for better visibility at smaller sizes
247 "rel": "icon",
248 "type": "image/svg+xml",
249 "static-file": "favicon.svg",
250 "sizes": "16x16 24x24 32x32 48x48",
251 },
252 # rel="apple-touch-icon" does not support SVG yet
253 ]
254
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -114,19 +114,6 @@
}
)
-# Support tooltips on references
-extensions += ['hoverxref.extension']
-hoverxref_auto_ref = True
-hoverxref_intersphinx = [
- 'python',
- 'pip',
- 'build',
- 'PyPUG',
- 'packaging',
- 'twine',
- 'importlib-resources',
-]
-
# Add support for linking usernames
github_url = 'https://github.com'
github_repo_org = 'pypa'
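A quick way to sanity-check the patched configuration (a hypothetical sketch, not part of the repository or of the recorded fix; it assumes `docs/conf.py` still executes as a plain Python script) is to load it and confirm that no hoverxref settings survive:

```python
# Hypothetical post-patch check; the path and the standalone-execution
# assumption are illustrative only.
import runpy

conf = runpy.run_path("docs/conf.py")  # the file is plain assignments, no Sphinx required
assert "hoverxref.extension" not in conf["extensions"]
assert not any(name.startswith("hoverxref_") for name in conf)
```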
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -114,19 +114,6 @@\n }\n )\n \n-# Support tooltips on references\n-extensions += ['hoverxref.extension']\n-hoverxref_auto_ref = True\n-hoverxref_intersphinx = [\n- 'python',\n- 'pip',\n- 'build',\n- 'PyPUG',\n- 'packaging',\n- 'twine',\n- 'importlib-resources',\n-]\n-\n # Add support for linking usernames\n github_url = 'https://github.com'\n github_repo_org = 'pypa'\n", "issue": "[Docs] Preview pop-up on a link covers the link itself\n### Summary\n\nI've come across an issue with that breaks the online documentation completely for me. Whenever I move my mouse pointer over a link to a different part of the documentation, a pop-up appears that covers the link, making it near-impossible to click the link. See this screen recording of the situation that this has manifested first for me:\r\n\r\nhttps://github.com/pypa/setuptools/assets/50332/af946044-7222-4e2d-b090-c771be758598\r\n\r\n(On this page: https://setuptools.pypa.io/en/latest/pkg_resources.html)\n\n### OS / Environment\n\nSafari 16.6, macOS 12.6.8.\n\n### Additional Information\n\nIMHO, as it presents itself to me, this feature has downsides that are orders of magnitude bigger that it's upsides. My browser already allows me to preview a page by triple-tapping on the trackpad (macOS) or long-pressing (iOS), so it doesn't add a benefit on these platforms.\r\n\r\n---\r\n\r\nAs an additional note: Even if this feature was implemented in a way where it wouldn't make it impossible to click on some links, it would still be an accessibility issue for me:\r\n\r\nI'm on the ADD spectrum and I use my mouse pointer for focussing while reading. It's very natural for me to move my pointer along the text while reading. Such an unavoidable popup will draw my focus away from what I am reading (because it appears when I'm not expecting it and haven't performed an explicit action to make it appear). I'm having this issue also e.g. on GitHub, where some links have pop-ups that appear on mouse hovering.\r\n\r\nIf you intend on keeping these pop-ups, there is something that you could do to make it a bit less intrusive for people like me (I can't speak for everyone on the ADD spectrum of course): Make the pop-up appear immediately when entering the link's region _and also_ disappear immediately when leaving the region, instead of after a short delay. For example, buttons and links that change appearance while hovering or tool-tips in UIs that appear immediately are much less distracting to me. 
I think my brain is more likely to associate my action with the appearance of the pop-up and thus able to ignore the stimulus.\r\n\r\nBut hey, thanks for your work anyways!\n\n### Code of Conduct\n\n- [X] I agree to follow the PSF Code of Conduct\n", "before_files": [{"content": "extensions = [\n 'sphinx.ext.autodoc',\n 'jaraco.packaging.sphinx',\n]\n\nmaster_doc = \"index\"\nhtml_theme = \"furo\"\n\n# Link dates and other references in the changelog\nextensions += ['rst.linker']\nlink_files = {\n '../NEWS.rst': dict(\n using=dict(\n BB='https://bitbucket.org',\n GH='https://github.com',\n ),\n replace=[\n dict(\n pattern=r'(Issue #|\\B#)(?P<issue>\\d+)',\n url='{package_url}/issues/{issue}',\n ),\n dict(\n pattern=r'(?m:^((?P<scm_version>v?\\d+(\\.\\d+){1,2}))\\n[-=]+\\n)',\n with_scm='{text}\\n{rev[timestamp]:%d %b %Y}\\n',\n ),\n dict(\n pattern=r'PEP[- ](?P<pep_number>\\d+)',\n url='https://peps.python.org/pep-{pep_number:0>4}/',\n ),\n dict(\n pattern=r'(?<!\\w)PR #(?P<pull>\\d+)',\n url='{package_url}/pull/{pull}',\n ),\n dict(\n pattern=r'BB Pull Request ?#(?P<bb_pull_request>\\d+)',\n url='{BB}/pypa/setuptools/pull-request/{bb_pull_request}',\n ),\n dict(\n pattern=r'Distribute #(?P<distribute>\\d+)',\n url='{BB}/tarek/distribute/issue/{distribute}',\n ),\n dict(\n pattern=r'Buildout #(?P<buildout>\\d+)',\n url='{GH}/buildout/buildout/issues/{buildout}',\n ),\n dict(\n pattern=r'Old Setuptools #(?P<old_setuptools>\\d+)',\n url='http://bugs.python.org/setuptools/issue{old_setuptools}',\n ),\n dict(\n pattern=r'Jython #(?P<jython>\\d+)',\n url='http://bugs.jython.org/issue{jython}',\n ),\n dict(\n pattern=r'(Python #|bpo-)(?P<python>\\d+)',\n url='http://bugs.python.org/issue{python}',\n ),\n dict(\n pattern=r'Interop #(?P<interop>\\d+)',\n url='{GH}/pypa/interoperability-peps/issues/{interop}',\n ),\n dict(\n pattern=r'Pip #(?P<pip>\\d+)',\n url='{GH}/pypa/pip/issues/{pip}',\n ),\n dict(\n pattern=r'Packaging #(?P<packaging>\\d+)',\n url='{GH}/pypa/packaging/issues/{packaging}',\n ),\n dict(\n pattern=r'[Pp]ackaging (?P<packaging_ver>\\d+(\\.\\d+)+)',\n url='{GH}/pypa/packaging/blob/{packaging_ver}/CHANGELOG.rst',\n ),\n dict(\n pattern=r'setuptools_svn #(?P<setuptools_svn>\\d+)',\n url='{GH}/jaraco/setuptools_svn/issues/{setuptools_svn}',\n ),\n dict(\n pattern=r'pypa/(?P<issue_repo>[\\-\\.\\w]+)#(?P<issue_number>\\d+)',\n url='{GH}/pypa/{issue_repo}/issues/{issue_number}',\n ),\n dict(\n pattern=r'pypa/(?P<commit_repo>[\\-\\.\\w]+)@(?P<commit_number>[\\da-f]+)',\n url='{GH}/pypa/{commit_repo}/commit/{commit_number}',\n ),\n ],\n ),\n}\n\n# Be strict about any broken references\nnitpicky = True\n\n# Include Python intersphinx mapping to prevent failures\n# jaraco/skeleton#51\nextensions += ['sphinx.ext.intersphinx']\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n}\n\n# Preserve authored syntax for defaults\nautodoc_preserve_defaults = True\n\nintersphinx_mapping.update(\n {\n 'pip': ('https://pip.pypa.io/en/latest', None),\n 'build': ('https://pypa-build.readthedocs.io/en/latest', None),\n 'PyPUG': ('https://packaging.python.org/en/latest/', None),\n 'packaging': ('https://packaging.pypa.io/en/latest/', None),\n 'twine': ('https://twine.readthedocs.io/en/stable/', None),\n 'importlib-resources': (\n 'https://importlib-resources.readthedocs.io/en/latest',\n None,\n ),\n }\n)\n\n# Support tooltips on references\nextensions += ['hoverxref.extension']\nhoverxref_auto_ref = True\nhoverxref_intersphinx = [\n 'python',\n 'pip',\n 'build',\n 'PyPUG',\n 
'packaging',\n 'twine',\n 'importlib-resources',\n]\n\n# Add support for linking usernames\ngithub_url = 'https://github.com'\ngithub_repo_org = 'pypa'\ngithub_repo_name = 'setuptools'\ngithub_repo_slug = f'{github_repo_org}/{github_repo_name}'\ngithub_repo_url = f'{github_url}/{github_repo_slug}'\ngithub_sponsors_url = f'{github_url}/sponsors'\nextlinks = {\n 'user': (f'{github_sponsors_url}/%s', '@%s'), # noqa: WPS323\n 'pypi': ('https://pypi.org/project/%s', '%s'), # noqa: WPS323\n 'wiki': ('https://wikipedia.org/wiki/%s', '%s'), # noqa: WPS323\n}\nextensions += ['sphinx.ext.extlinks']\n\n# Ref: https://github.com/python-attrs/attrs/pull/571/files\\\n# #diff-85987f48f1258d9ee486e3191495582dR82\ndefault_role = 'any'\n\n# HTML theme\nhtml_theme = 'furo'\nhtml_logo = \"images/logo.svg\"\n\nhtml_theme_options = {\n \"sidebar_hide_name\": True,\n \"light_css_variables\": {\n \"color-brand-primary\": \"#336790\", # \"blue\"\n \"color-brand-content\": \"#336790\",\n },\n \"dark_css_variables\": {\n \"color-brand-primary\": \"#E5B62F\", # \"yellow\"\n \"color-brand-content\": \"#E5B62F\",\n },\n}\n\n# Redirect old docs so links and references in the ecosystem don't break\nextensions += ['sphinx_reredirects']\nredirects = {\n \"userguide/keywords\": \"/deprecated/changed_keywords.html\",\n \"userguide/commands\": \"/deprecated/commands.html\",\n}\n\n# Add support for inline tabs\nextensions += ['sphinx_inline_tabs']\n\n# Support for distutils\n\n# Ref: https://stackoverflow.com/a/30624034/595220\nnitpick_ignore = [\n ('c:func', 'SHGetSpecialFolderPath'), # ref to MS docs\n ('envvar', 'DISTUTILS_DEBUG'), # undocumented\n ('envvar', 'HOME'), # undocumented\n ('envvar', 'PLAT'), # undocumented\n ('envvar', 'DIST_EXTRA_CONFIG'), # undocumented\n ('py:attr', 'CCompiler.language_map'), # undocumented\n ('py:attr', 'CCompiler.language_order'), # undocumented\n ('py:class', 'distutils.dist.Distribution'), # undocumented\n ('py:class', 'distutils.extension.Extension'), # undocumented\n ('py:class', 'BorlandCCompiler'), # undocumented\n ('py:class', 'CCompiler'), # undocumented\n ('py:class', 'CygwinCCompiler'), # undocumented\n ('py:class', 'distutils.dist.DistributionMetadata'), # undocumented\n ('py:class', 'FileList'), # undocumented\n ('py:class', 'IShellLink'), # ref to MS docs\n ('py:class', 'MSVCCompiler'), # undocumented\n ('py:class', 'OptionDummy'), # undocumented\n ('py:class', 'UnixCCompiler'), # undocumented\n ('py:exc', 'CompileError'), # undocumented\n ('py:exc', 'DistutilsExecError'), # undocumented\n ('py:exc', 'DistutilsFileError'), # undocumented\n ('py:exc', 'LibError'), # undocumented\n ('py:exc', 'LinkError'), # undocumented\n ('py:exc', 'PreprocessError'), # undocumented\n ('py:exc', 'setuptools.errors.PlatformError'), # sphinx cannot find it\n ('py:func', 'distutils.CCompiler.new_compiler'), # undocumented\n # undocumented:\n ('py:func', 'distutils.dist.DistributionMetadata.read_pkg_file'),\n ('py:func', 'distutils.file_util._copy_file_contents'), # undocumented\n ('py:func', 'distutils.log.debug'), # undocumented\n ('py:func', 'distutils.spawn.find_executable'), # undocumented\n ('py:func', 'distutils.spawn.spawn'), # undocumented\n # TODO: check https://docutils.rtfd.io in the future\n ('py:mod', 'docutils'), # there's no Sphinx site documenting this\n]\n\n# Allow linking objects on other Sphinx sites seamlessly:\nintersphinx_mapping.update(\n python=('https://docs.python.org/3', None),\n)\n\n# Add support for the unreleased \"next-version\" change notes\nextensions += 
['sphinxcontrib.towncrier']\n# Extension needs a path from here to the towncrier config.\ntowncrier_draft_working_directory = '..'\n# Avoid an empty section for unpublished changes.\ntowncrier_draft_include_empty = False\n# sphinx-contrib/sphinxcontrib-towncrier#81\ntowncrier_draft_config_path = 'towncrier.toml'\n\nextensions += ['jaraco.tidelift']\n\n# Add icons (aka \"favicons\") to documentation\nextensions += ['sphinx_favicon']\nhtml_static_path = ['images'] # should contain the folder with icons\n\n# Add support for nice Not Found 404 pages\nextensions += ['notfound.extension']\n\n# List of dicts with <link> HTML attributes\n# static-file points to files in the html_static_path (href is computed)\nfavicons = [\n { # \"Catch-all\" goes first, otherwise some browsers will overwrite\n \"rel\": \"icon\",\n \"type\": \"image/svg+xml\",\n \"static-file\": \"logo-symbol-only.svg\",\n \"sizes\": \"any\",\n },\n { # Version with thicker strokes for better visibility at smaller sizes\n \"rel\": \"icon\",\n \"type\": \"image/svg+xml\",\n \"static-file\": \"favicon.svg\",\n \"sizes\": \"16x16 24x24 32x32 48x48\",\n },\n # rel=\"apple-touch-icon\" does not support SVG yet\n]\n", "path": "docs/conf.py"}], "after_files": [{"content": "extensions = [\n 'sphinx.ext.autodoc',\n 'jaraco.packaging.sphinx',\n]\n\nmaster_doc = \"index\"\nhtml_theme = \"furo\"\n\n# Link dates and other references in the changelog\nextensions += ['rst.linker']\nlink_files = {\n '../NEWS.rst': dict(\n using=dict(\n BB='https://bitbucket.org',\n GH='https://github.com',\n ),\n replace=[\n dict(\n pattern=r'(Issue #|\\B#)(?P<issue>\\d+)',\n url='{package_url}/issues/{issue}',\n ),\n dict(\n pattern=r'(?m:^((?P<scm_version>v?\\d+(\\.\\d+){1,2}))\\n[-=]+\\n)',\n with_scm='{text}\\n{rev[timestamp]:%d %b %Y}\\n',\n ),\n dict(\n pattern=r'PEP[- ](?P<pep_number>\\d+)',\n url='https://peps.python.org/pep-{pep_number:0>4}/',\n ),\n dict(\n pattern=r'(?<!\\w)PR #(?P<pull>\\d+)',\n url='{package_url}/pull/{pull}',\n ),\n dict(\n pattern=r'BB Pull Request ?#(?P<bb_pull_request>\\d+)',\n url='{BB}/pypa/setuptools/pull-request/{bb_pull_request}',\n ),\n dict(\n pattern=r'Distribute #(?P<distribute>\\d+)',\n url='{BB}/tarek/distribute/issue/{distribute}',\n ),\n dict(\n pattern=r'Buildout #(?P<buildout>\\d+)',\n url='{GH}/buildout/buildout/issues/{buildout}',\n ),\n dict(\n pattern=r'Old Setuptools #(?P<old_setuptools>\\d+)',\n url='http://bugs.python.org/setuptools/issue{old_setuptools}',\n ),\n dict(\n pattern=r'Jython #(?P<jython>\\d+)',\n url='http://bugs.jython.org/issue{jython}',\n ),\n dict(\n pattern=r'(Python #|bpo-)(?P<python>\\d+)',\n url='http://bugs.python.org/issue{python}',\n ),\n dict(\n pattern=r'Interop #(?P<interop>\\d+)',\n url='{GH}/pypa/interoperability-peps/issues/{interop}',\n ),\n dict(\n pattern=r'Pip #(?P<pip>\\d+)',\n url='{GH}/pypa/pip/issues/{pip}',\n ),\n dict(\n pattern=r'Packaging #(?P<packaging>\\d+)',\n url='{GH}/pypa/packaging/issues/{packaging}',\n ),\n dict(\n pattern=r'[Pp]ackaging (?P<packaging_ver>\\d+(\\.\\d+)+)',\n url='{GH}/pypa/packaging/blob/{packaging_ver}/CHANGELOG.rst',\n ),\n dict(\n pattern=r'setuptools_svn #(?P<setuptools_svn>\\d+)',\n url='{GH}/jaraco/setuptools_svn/issues/{setuptools_svn}',\n ),\n dict(\n pattern=r'pypa/(?P<issue_repo>[\\-\\.\\w]+)#(?P<issue_number>\\d+)',\n url='{GH}/pypa/{issue_repo}/issues/{issue_number}',\n ),\n dict(\n pattern=r'pypa/(?P<commit_repo>[\\-\\.\\w]+)@(?P<commit_number>[\\da-f]+)',\n url='{GH}/pypa/{commit_repo}/commit/{commit_number}',\n ),\n ],\n 
),\n}\n\n# Be strict about any broken references\nnitpicky = True\n\n# Include Python intersphinx mapping to prevent failures\n# jaraco/skeleton#51\nextensions += ['sphinx.ext.intersphinx']\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n}\n\n# Preserve authored syntax for defaults\nautodoc_preserve_defaults = True\n\nintersphinx_mapping.update(\n {\n 'pip': ('https://pip.pypa.io/en/latest', None),\n 'build': ('https://pypa-build.readthedocs.io/en/latest', None),\n 'PyPUG': ('https://packaging.python.org/en/latest/', None),\n 'packaging': ('https://packaging.pypa.io/en/latest/', None),\n 'twine': ('https://twine.readthedocs.io/en/stable/', None),\n 'importlib-resources': (\n 'https://importlib-resources.readthedocs.io/en/latest',\n None,\n ),\n }\n)\n\n# Add support for linking usernames\ngithub_url = 'https://github.com'\ngithub_repo_org = 'pypa'\ngithub_repo_name = 'setuptools'\ngithub_repo_slug = f'{github_repo_org}/{github_repo_name}'\ngithub_repo_url = f'{github_url}/{github_repo_slug}'\ngithub_sponsors_url = f'{github_url}/sponsors'\nextlinks = {\n 'user': (f'{github_sponsors_url}/%s', '@%s'), # noqa: WPS323\n 'pypi': ('https://pypi.org/project/%s', '%s'), # noqa: WPS323\n 'wiki': ('https://wikipedia.org/wiki/%s', '%s'), # noqa: WPS323\n}\nextensions += ['sphinx.ext.extlinks']\n\n# Ref: https://github.com/python-attrs/attrs/pull/571/files\\\n# #diff-85987f48f1258d9ee486e3191495582dR82\ndefault_role = 'any'\n\n# HTML theme\nhtml_theme = 'furo'\nhtml_logo = \"images/logo.svg\"\n\nhtml_theme_options = {\n \"sidebar_hide_name\": True,\n \"light_css_variables\": {\n \"color-brand-primary\": \"#336790\", # \"blue\"\n \"color-brand-content\": \"#336790\",\n },\n \"dark_css_variables\": {\n \"color-brand-primary\": \"#E5B62F\", # \"yellow\"\n \"color-brand-content\": \"#E5B62F\",\n },\n}\n\n# Redirect old docs so links and references in the ecosystem don't break\nextensions += ['sphinx_reredirects']\nredirects = {\n \"userguide/keywords\": \"/deprecated/changed_keywords.html\",\n \"userguide/commands\": \"/deprecated/commands.html\",\n}\n\n# Add support for inline tabs\nextensions += ['sphinx_inline_tabs']\n\n# Support for distutils\n\n# Ref: https://stackoverflow.com/a/30624034/595220\nnitpick_ignore = [\n ('c:func', 'SHGetSpecialFolderPath'), # ref to MS docs\n ('envvar', 'DISTUTILS_DEBUG'), # undocumented\n ('envvar', 'HOME'), # undocumented\n ('envvar', 'PLAT'), # undocumented\n ('envvar', 'DIST_EXTRA_CONFIG'), # undocumented\n ('py:attr', 'CCompiler.language_map'), # undocumented\n ('py:attr', 'CCompiler.language_order'), # undocumented\n ('py:class', 'distutils.dist.Distribution'), # undocumented\n ('py:class', 'distutils.extension.Extension'), # undocumented\n ('py:class', 'BorlandCCompiler'), # undocumented\n ('py:class', 'CCompiler'), # undocumented\n ('py:class', 'CygwinCCompiler'), # undocumented\n ('py:class', 'distutils.dist.DistributionMetadata'), # undocumented\n ('py:class', 'FileList'), # undocumented\n ('py:class', 'IShellLink'), # ref to MS docs\n ('py:class', 'MSVCCompiler'), # undocumented\n ('py:class', 'OptionDummy'), # undocumented\n ('py:class', 'UnixCCompiler'), # undocumented\n ('py:exc', 'CompileError'), # undocumented\n ('py:exc', 'DistutilsExecError'), # undocumented\n ('py:exc', 'DistutilsFileError'), # undocumented\n ('py:exc', 'LibError'), # undocumented\n ('py:exc', 'LinkError'), # undocumented\n ('py:exc', 'PreprocessError'), # undocumented\n ('py:exc', 'setuptools.errors.PlatformError'), # sphinx cannot find it\n ('py:func', 
'distutils.CCompiler.new_compiler'), # undocumented\n # undocumented:\n ('py:func', 'distutils.dist.DistributionMetadata.read_pkg_file'),\n ('py:func', 'distutils.file_util._copy_file_contents'), # undocumented\n ('py:func', 'distutils.log.debug'), # undocumented\n ('py:func', 'distutils.spawn.find_executable'), # undocumented\n ('py:func', 'distutils.spawn.spawn'), # undocumented\n # TODO: check https://docutils.rtfd.io in the future\n ('py:mod', 'docutils'), # there's no Sphinx site documenting this\n]\n\n# Allow linking objects on other Sphinx sites seamlessly:\nintersphinx_mapping.update(\n python=('https://docs.python.org/3', None),\n)\n\n# Add support for the unreleased \"next-version\" change notes\nextensions += ['sphinxcontrib.towncrier']\n# Extension needs a path from here to the towncrier config.\ntowncrier_draft_working_directory = '..'\n# Avoid an empty section for unpublished changes.\ntowncrier_draft_include_empty = False\n# sphinx-contrib/sphinxcontrib-towncrier#81\ntowncrier_draft_config_path = 'towncrier.toml'\n\nextensions += ['jaraco.tidelift']\n\n# Add icons (aka \"favicons\") to documentation\nextensions += ['sphinx_favicon']\nhtml_static_path = ['images'] # should contain the folder with icons\n\n# Add support for nice Not Found 404 pages\nextensions += ['notfound.extension']\n\n# List of dicts with <link> HTML attributes\n# static-file points to files in the html_static_path (href is computed)\nfavicons = [\n { # \"Catch-all\" goes first, otherwise some browsers will overwrite\n \"rel\": \"icon\",\n \"type\": \"image/svg+xml\",\n \"static-file\": \"logo-symbol-only.svg\",\n \"sizes\": \"any\",\n },\n { # Version with thicker strokes for better visibility at smaller sizes\n \"rel\": \"icon\",\n \"type\": \"image/svg+xml\",\n \"static-file\": \"favicon.svg\",\n \"sizes\": \"16x16 24x24 32x32 48x48\",\n },\n # rel=\"apple-touch-icon\" does not support SVG yet\n]\n", "path": "docs/conf.py"}]} |
gh_patches_debug_1064 | rasdani/github-patches | git_diff | streamlit__streamlit-2711 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Color picker was not fully removed out of beta
Feedback after product review of #2625
- **`st.beta_color_picker` still exists in the wheel file**, and when I use it I get a message saying it will be removed on Jan 28, 2021. Were we supposed to remove `beta_color_picker` for this release? (Depends on which stage of the beta we're in.)
See Notion for images/more details: https://www.notion.so/streamlit/0-76-Candidate-5c0ba34f05384adaa487fddf6d132d08
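A minimal reproduction sketch (the script name and default colour are hypothetical, not from the original report): saved as a script and launched with `streamlit run` against the 0.76 wheel, it still resolves `st.beta_color_picker` and renders the graduation warning alongside the widget.

```python
# repro.py -- hypothetical example; run with: streamlit run repro.py
import streamlit as st

color = st.beta_color_picker("Pick a color", "#00f900")  # still importable from the wheel
st.write("Selected:", color)  # the "graduated out of beta" warning is shown as well
```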
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/streamlit/__init__.py`
Content:
```
1 # Copyright 2018-2021 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Streamlit.
16
17 How to use Streamlit in 3 seconds:
18
19 1. Write an app
20 >>> import streamlit as st
21 >>> st.write(anything_you_want)
22
23 2. Run your app
24 $ streamlit run my_script.py
25
26 3. Use your app
27 A new tab will open on your browser. That's your Streamlit app!
28
29 4. Modify your code, save it, and watch changes live on your browser.
30
31 Take a look at the other commands in this module to find out what else
32 Streamlit can do:
33
34 >>> dir(streamlit)
35
36 Or try running our "Hello World":
37
38 $ streamlit hello
39
40 For more detailed info, see https://docs.streamlit.io.
41 """
42
43 # IMPORTANT: Prefix with an underscore anything that the user shouldn't see.
44
45 # NOTE: You'll see lots of "noqa: F821" in this file. That's because we
46 # manually mess with the local namespace so the linter can't know that some
47 # identifiers actually exist in the namespace.
48
49 # Must be at the top, to avoid circular dependency.
50 from streamlit import logger as _logger
51 from streamlit import config as _config
52 from streamlit.proto.RootContainer_pb2 import RootContainer
53
54 _LOGGER = _logger.get_logger("root")
55
56 # Give the package a version.
57 import pkg_resources as _pkg_resources
58 from typing import List
59
60 # This used to be pkg_resources.require('streamlit') but it would cause
61 # pex files to fail. See #394 for more details.
62 __version__ = _pkg_resources.get_distribution("streamlit").version
63
64 import contextlib as _contextlib
65 import re as _re
66 import sys as _sys
67 import textwrap as _textwrap
68 import threading as _threading
69 import traceback as _traceback
70 import urllib.parse as _parse
71
72 from streamlit import code_util as _code_util
73 from streamlit import env_util as _env_util
74 from streamlit import source_util as _source_util
75 from streamlit import string_util as _string_util
76 from streamlit.delta_generator import DeltaGenerator as _DeltaGenerator
77 from streamlit.report_thread import add_report_ctx as _add_report_ctx
78 from streamlit.report_thread import get_report_ctx as _get_report_ctx
79 from streamlit.script_runner import StopException
80 from streamlit.script_runner import RerunException as _RerunException
81 from streamlit.script_request_queue import RerunData as _RerunData
82 from streamlit.errors import StreamlitAPIException
83 from streamlit.proto import ForwardMsg_pb2 as _ForwardMsg_pb2
84
85 # Modules that the user should have access to. These are imported with "as"
86 # syntax pass mypy checking with implicit_reexport disabled.
87 from streamlit.caching import cache as cache # noqa: F401
88
89 # This is set to True inside cli._main_run(), and is False otherwise.
90 # If False, we should assume that DeltaGenerator functions are effectively
91 # no-ops, and adapt gracefully.
92 _is_running_with_streamlit = False
93
94
95 def _update_logger():
96 _logger.set_log_level(_config.get_option("logger.level").upper())
97 _logger.update_formatter()
98 _logger.init_tornado_logs()
99
100
101 # Make this file only depend on config option in an asynchronous manner. This
102 # avoids a race condition when another file (such as a test file) tries to pass
103 # in an alternative config.
104 _config.on_config_parsed(_update_logger, True)
105
106
107 _main = _DeltaGenerator(root_container=RootContainer.MAIN)
108 sidebar = _DeltaGenerator(root_container=RootContainer.SIDEBAR, parent=_main)
109
110 # DeltaGenerator methods:
111
112 altair_chart = _main.altair_chart # noqa: E221
113 area_chart = _main.area_chart # noqa: E221
114 audio = _main.audio # noqa: E221
115 balloons = _main.balloons # noqa: E221
116 bar_chart = _main.bar_chart # noqa: E221
117 bokeh_chart = _main.bokeh_chart # noqa: E221
118 button = _main.button # noqa: E221
119 checkbox = _main.checkbox # noqa: E221
120 code = _main.code # noqa: E221
121 dataframe = _main.dataframe # noqa: E221
122 date_input = _main.date_input # noqa: E221
123 pydeck_chart = _main.pydeck_chart # noqa: E221
124 empty = _main.empty # noqa: E221
125 error = _main.error # noqa: E221
126 exception = _main.exception # noqa: E221
127 file_uploader = _main.file_uploader # noqa: E221
128 graphviz_chart = _main.graphviz_chart # noqa: E221
129 header = _main.header # noqa: E221
130 help = _main.help # noqa: E221
131 image = _main.image # noqa: E221
132 info = _main.info # noqa: E221
133 json = _main.json # noqa: E221
134 latex = _main.latex # noqa: E221
135 line_chart = _main.line_chart # noqa: E221
136 map = _main.map # noqa: E221
137 markdown = _main.markdown # noqa: E221
138 multiselect = _main.multiselect # noqa: E221
139 number_input = _main.number_input # noqa: E221
140 plotly_chart = _main.plotly_chart # noqa: E221
141 progress = _main.progress # noqa: E221
142 pyplot = _main.pyplot # noqa: E221
143 radio = _main.radio # noqa: E221
144 selectbox = _main.selectbox # noqa: E221
145 select_slider = _main.select_slider # noqa: E221
146 slider = _main.slider # noqa: E221
147 subheader = _main.subheader # noqa: E221
148 success = _main.success # noqa: E221
149 table = _main.table # noqa: E221
150 text = _main.text # noqa: E221
151 text_area = _main.text_area # noqa: E221
152 text_input = _main.text_input # noqa: E221
153 time_input = _main.time_input # noqa: E221
154 title = _main.title # noqa: E221
155 vega_lite_chart = _main.vega_lite_chart # noqa: E221
156 video = _main.video # noqa: E221
157 warning = _main.warning # noqa: E221
158 write = _main.write # noqa: E221
159 color_picker = _main.color_picker # noqa: E221
160
161 # Config
162
163 get_option = _config.get_option
164 from streamlit.commands.page_config import set_page_config
165
166
167 def _beta_warning(func, date):
168 """Wrapper for functions that are no longer in beta.
169
170 Wrapped functions will run as normal, but then proceed to show an st.warning
171 saying that the beta_ version will be removed in ~3 months.
172
173 Parameters
174 ----------
175 func: function
176 The `st.` function that used to be in beta.
177
178 date: str
179 A date like "2020-01-01", indicating the last day we'll guarantee
180 support for the beta_ prefix.
181 """
182
183 def wrapped(*args, **kwargs):
184 # Note: Since we use a wrapper, beta_ functions will not autocomplete
185 # correctly on VSCode.
186 result = func(*args, **kwargs)
187 warning(
188 f"`st.{func.__name__}` has graduated out of beta. "
189 + f"On {date}, the beta_ version will be removed.\n\n"
190 + f"Before then, update your code from `st.beta_{func.__name__}` to `st.{func.__name__}`."
191 )
192 return result
193
194 # Update the wrapped func's name & docstring so st.help does the right thing
195 wrapped.__name__ = "beta_" + func.__name__
196 wrapped.__doc__ = func.__doc__
197 return wrapped
198
199
200 beta_set_page_config = _beta_warning(set_page_config, "2021-01-06")
201 beta_color_picker = _beta_warning(_main.color_picker, "January 28, 2021")
202 beta_container = _main.beta_container # noqa: E221
203 beta_expander = _main.beta_expander # noqa: E221
204 beta_columns = _main.beta_columns # noqa: E221
205
206
207 def set_option(key, value):
208 """Set config option.
209
210 Currently, only the following config options can be set within the script itself:
211 * client.caching
212 * client.displayEnabled
213 * deprecation.*
214
215 Calling with any other options will raise StreamlitAPIException.
216
217 Run `streamlit config show` in the terminal to see all available options.
218
219 Parameters
220 ----------
221 key : str
222 The config option key of the form "section.optionName". To see all
223 available options, run `streamlit config show` on a terminal.
224
225 value
226 The new value to assign to this config option.
227
228 """
229 opt = _config._config_options[key]
230 if opt.scriptable:
231 _config.set_option(key, value)
232 return
233
234 raise StreamlitAPIException(
235 "{key} cannot be set on the fly. Set as command line option, e.g. streamlit run script.py --{key}, or in config.toml instead.".format(
236 key=key
237 )
238 )
239
240
241 def experimental_show(*args):
242 """Write arguments and *argument names* to your app for debugging purposes.
243
244 Show() has similar properties to write():
245
246 1. You can pass in multiple arguments, all of which will be debugged.
247 2. It returns None, so it's "slot" in the app cannot be reused.
248
249 Note: This is an experimental feature. See
250 https://docs.streamlit.io/en/latest/api.html#pre-release-features for more information.
251
252 Parameters
253 ----------
254 *args : any
255 One or many objects to debug in the App.
256
257 Example
258 -------
259
260 >>> dataframe = pd.DataFrame({
261 ... 'first column': [1, 2, 3, 4],
262 ... 'second column': [10, 20, 30, 40],
263 ... }))
264 >>> st.experimental_show(dataframe)
265
266 Notes
267 -----
268
269 This is an experimental feature with usage limitations:
270
271 - The method must be called with the name `show`.
272 - Must be called in one line of code, and only once per line.
273 - When passing multiple arguments the inclusion of `,` or `)` in a string
274 argument may cause an error.
275
276 """
277 if not args:
278 return
279
280 try:
281 import inspect
282
283 # Get the calling line of code
284 current_frame = inspect.currentframe()
285 if current_frame is None:
286 warning("`show` not enabled in the shell")
287 return
288 lines = inspect.getframeinfo(current_frame.f_back)[3]
289
290 if not lines:
291 warning("`show` not enabled in the shell")
292 return
293
294 # Parse arguments from the line
295 line = lines[0].split("show", 1)[1]
296 inputs = _code_util.get_method_args_from_code(args, line)
297
298 # Escape markdown and add deltas
299 for idx, input in enumerate(inputs):
300 escaped = _string_util.escape_markdown(input)
301
302 markdown("**%s**" % escaped)
303 write(args[idx])
304
305 except Exception:
306 _, exc, exc_tb = _sys.exc_info()
307 exception(exc)
308
309
310 def experimental_get_query_params():
311 """Return the query parameters that is currently showing in the browser's URL bar.
312
313 Returns
314 -------
315 dict
316 The current query parameters as a dict. "Query parameters" are the part of the URL that comes
317 after the first "?".
318
319 Example
320 -------
321
322 Let's say the user's web browser is at
323 `http://localhost:8501/?show_map=True&selected=asia&selected=america`.
324 Then, you can get the query parameters using the following:
325
326 >>> st.experimental_get_query_params()
327 {"show_map": ["True"], "selected": ["asia", "america"]}
328
329 Note that the values in the returned dict are *always* lists. This is
330 because we internally use Python's urllib.parse.parse_qs(), which behaves
331 this way. And this behavior makes sense when you consider that every item
332 in a query string is potentially a 1-element array.
333
334 """
335 ctx = _get_report_ctx()
336 if ctx is None:
337 return ""
338 return _parse.parse_qs(ctx.query_string)
339
340
341 def experimental_set_query_params(**query_params):
342 """Set the query parameters that are shown in the browser's URL bar.
343
344 Parameters
345 ----------
346 **query_params : dict
347 The query parameters to set, as key-value pairs.
348
349 Example
350 -------
351
352 To point the user's web browser to something like
353 "http://localhost:8501/?show_map=True&selected=asia&selected=america",
354 you would do the following:
355
356 >>> st.experimental_set_query_params(
357 ... show_map=True,
358 ... selected=["asia", "america"],
359 ... )
360
361 """
362 ctx = _get_report_ctx()
363 if ctx is None:
364 return
365 ctx.query_string = _parse.urlencode(query_params, doseq=True)
366 msg = _ForwardMsg_pb2.ForwardMsg()
367 msg.page_info_changed.query_string = ctx.query_string
368 ctx.enqueue(msg)
369
370
371 @_contextlib.contextmanager
372 def spinner(text="In progress..."):
373 """Temporarily displays a message while executing a block of code.
374
375 Parameters
376 ----------
377 text : str
378 A message to display while executing that block
379
380 Example
381 -------
382
383 >>> with st.spinner('Wait for it...'):
384 >>> time.sleep(5)
385 >>> st.success('Done!')
386
387 """
388 import streamlit.caching as caching
389
390 # @st.cache optionally uses spinner for long-running computations.
391 # Normally, streamlit warns the user when they call st functions
392 # from within an @st.cache'd function. But we do *not* want to show
393 # these warnings for spinner's message, so we create and mutate this
394 # message delta within the "suppress_cached_st_function_warning"
395 # context.
396 with caching.suppress_cached_st_function_warning():
397 message = empty()
398
399 try:
400 # Set the message 0.1 seconds in the future to avoid annoying
401 # flickering if this spinner runs too quickly.
402 DELAY_SECS = 0.1
403 display_message = True
404 display_message_lock = _threading.Lock()
405
406 def set_message():
407 with display_message_lock:
408 if display_message:
409 with caching.suppress_cached_st_function_warning():
410 message.warning(str(text))
411
412 _add_report_ctx(_threading.Timer(DELAY_SECS, set_message)).start()
413
414 # Yield control back to the context.
415 yield
416 finally:
417 if display_message_lock:
418 with display_message_lock:
419 display_message = False
420 with caching.suppress_cached_st_function_warning():
421 message.empty()
422
423
424 _SPACES_RE = _re.compile("\\s*")
425
426
427 @_contextlib.contextmanager
428 def echo(code_location="above"):
429 """Use in a `with` block to draw some code on the app, then execute it.
430
431 Parameters
432 ----------
433 code_location : "above" or "below"
434 Whether to show the echoed code before or after the results of the
435 executed code block.
436
437 Example
438 -------
439
440 >>> with st.echo():
441 >>> st.write('This code will be printed')
442
443 """
444 if code_location == "below":
445 show_code = code
446 show_warning = warning
447 else:
448 placeholder = empty() # noqa: F821
449 show_code = placeholder.code
450 show_warning = placeholder.warning
451
452 try:
453 frame = _traceback.extract_stack()[-3]
454 filename, start_line = frame.filename, frame.lineno
455 yield
456 frame = _traceback.extract_stack()[-3]
457 end_line = frame.lineno
458 lines_to_display = [] # type: List[str]
459 with _source_util.open_python_file(filename) as source_file:
460 source_lines = source_file.readlines()
461 lines_to_display.extend(source_lines[start_line:end_line])
462 match = _SPACES_RE.match(lines_to_display[0])
463 initial_spaces = match.end() if match else 0
464 for line in source_lines[end_line:]:
465 match = _SPACES_RE.match(line)
466 indentation = match.end() if match else 0
467 # The != 1 is because we want to allow '\n' between sections.
468 if indentation != 1 and indentation < initial_spaces:
469 break
470 lines_to_display.append(line)
471 line_to_display = _textwrap.dedent("".join(lines_to_display))
472
473 show_code(line_to_display, "python")
474
475 except FileNotFoundError as err:
476 show_warning("Unable to display code. %s" % err)
477
478
479 def _transparent_write(*args):
480 """This is just st.write, but returns the arguments you passed to it."""
481 write(*args)
482 if len(args) == 1:
483 return args[0]
484 return args
485
486
487 # We want to show a warning when the user runs a Streamlit script without
488 # 'streamlit run', but we need to make sure the warning appears only once no
489 # matter how many times __init__ gets loaded.
490 _use_warning_has_been_displayed = False
491
492
493 def _maybe_print_use_warning():
494 """Print a warning if Streamlit is imported but not being run with `streamlit run`.
495 The warning is printed only once.
496 """
497 global _use_warning_has_been_displayed
498
499 if not _use_warning_has_been_displayed:
500 _use_warning_has_been_displayed = True
501
502 if _env_util.is_repl():
503 _LOGGER.warning(
504 _textwrap.dedent(
505 """
506
507 Will not generate Streamlit app
508
509 To generate an app, use Streamlit in a file and run it with:
510 $ streamlit run [FILE_NAME] [ARGUMENTS]
511
512 """
513 )
514 )
515
516 elif not _is_running_with_streamlit and _config.get_option(
517 "global.showWarningOnDirectExecution"
518 ):
519 script_name = _sys.argv[0]
520
521 _LOGGER.warning(
522 _textwrap.dedent(
523 f"""
524
525 Will not generate Streamlit App
526
527 To generate an App, run this file with:
528 $ streamlit run {script_name} [ARGUMENTS]
529
530 """
531 )
532 )
533
534
535 def stop():
536 """Stops execution immediately.
537
538 Streamlit will not run any statements after `st.stop()`.
539 We recommend rendering a message to explain why the script has stopped.
540 When run outside of Streamlit, this will raise an Exception.
541
542 Example
543 -------
544
545 >>> name = st.text_input('Name')
546 >>> if not name:
547 >>> st.warning('Please input a name.')
548 >>> st.stop()
549 >>> st.success('Thank you for inputting a name.')
550
551 """
552 raise StopException()
553
554
555 def experimental_rerun():
556 """Rerun the script immediately.
557
558 When `st.experimental_rerun()` is called, the script is halted - no
559 more statements will be run, and the script will be queued to re-run
560 from the top.
561
562 If this function is called outside of Streamlit, it will raise an
563 Exception.
564 """
565
566 raise _RerunException(_RerunData(None))
567
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/streamlit/__init__.py b/lib/streamlit/__init__.py
--- a/lib/streamlit/__init__.py
+++ b/lib/streamlit/__init__.py
@@ -197,8 +197,6 @@
return wrapped
-beta_set_page_config = _beta_warning(set_page_config, "2021-01-06")
-beta_color_picker = _beta_warning(_main.color_picker, "January 28, 2021")
beta_container = _main.beta_container # noqa: E221
beta_expander = _main.beta_expander # noqa: E221
beta_columns = _main.beta_columns # noqa: E221
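One way to confirm the patch does what the issue asks (a hedged sketch against a rebuilt package, not part of the recorded fix): the beta_ aliases of the two graduated APIs should disappear while the graduated names and the still-in-beta layout helpers keep working.

```python
# Hypothetical post-patch check against a rebuilt streamlit wheel.
import streamlit as st

assert hasattr(st, "color_picker") and hasattr(st, "set_page_config")
assert not hasattr(st, "beta_color_picker")
assert not hasattr(st, "beta_set_page_config")
# Layout APIs still in beta are intentionally untouched by the patch.
assert hasattr(st, "beta_container") and hasattr(st, "beta_columns")
```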
| {"golden_diff": "diff --git a/lib/streamlit/__init__.py b/lib/streamlit/__init__.py\n--- a/lib/streamlit/__init__.py\n+++ b/lib/streamlit/__init__.py\n@@ -197,8 +197,6 @@\n return wrapped\n \n \n-beta_set_page_config = _beta_warning(set_page_config, \"2021-01-06\")\n-beta_color_picker = _beta_warning(_main.color_picker, \"January 28, 2021\")\n beta_container = _main.beta_container # noqa: E221\n beta_expander = _main.beta_expander # noqa: E221\n beta_columns = _main.beta_columns # noqa: E221\n", "issue": "Color picker was not fully removed out of beta\nFeedback after product review of #2625 \r\n\r\n- `**st.beta_color_picker` still exists in the wheel file,** and when I use it I get a message saying it will be removed on Jan 28 2021. Were we supposed to remove beta_color_picker for this release? (Depends on which stage of the beta we're in)\r\n\r\nSee notion for images/more details: https://www.notion.so/streamlit/0-76-Candidate-5c0ba34f05384adaa487fddf6d132d08 \n", "before_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Streamlit.\n\nHow to use Streamlit in 3 seconds:\n\n 1. Write an app\n >>> import streamlit as st\n >>> st.write(anything_you_want)\n\n 2. Run your app\n $ streamlit run my_script.py\n\n 3. Use your app\n A new tab will open on your browser. That's your Streamlit app!\n\n 4. Modify your code, save it, and watch changes live on your browser.\n\nTake a look at the other commands in this module to find out what else\nStreamlit can do:\n\n >>> dir(streamlit)\n\nOr try running our \"Hello World\":\n\n $ streamlit hello\n\nFor more detailed info, see https://docs.streamlit.io.\n\"\"\"\n\n# IMPORTANT: Prefix with an underscore anything that the user shouldn't see.\n\n# NOTE: You'll see lots of \"noqa: F821\" in this file. That's because we\n# manually mess with the local namespace so the linter can't know that some\n# identifiers actually exist in the namespace.\n\n# Must be at the top, to avoid circular dependency.\nfrom streamlit import logger as _logger\nfrom streamlit import config as _config\nfrom streamlit.proto.RootContainer_pb2 import RootContainer\n\n_LOGGER = _logger.get_logger(\"root\")\n\n# Give the package a version.\nimport pkg_resources as _pkg_resources\nfrom typing import List\n\n# This used to be pkg_resources.require('streamlit') but it would cause\n# pex files to fail. 
See #394 for more details.\n__version__ = _pkg_resources.get_distribution(\"streamlit\").version\n\nimport contextlib as _contextlib\nimport re as _re\nimport sys as _sys\nimport textwrap as _textwrap\nimport threading as _threading\nimport traceback as _traceback\nimport urllib.parse as _parse\n\nfrom streamlit import code_util as _code_util\nfrom streamlit import env_util as _env_util\nfrom streamlit import source_util as _source_util\nfrom streamlit import string_util as _string_util\nfrom streamlit.delta_generator import DeltaGenerator as _DeltaGenerator\nfrom streamlit.report_thread import add_report_ctx as _add_report_ctx\nfrom streamlit.report_thread import get_report_ctx as _get_report_ctx\nfrom streamlit.script_runner import StopException\nfrom streamlit.script_runner import RerunException as _RerunException\nfrom streamlit.script_request_queue import RerunData as _RerunData\nfrom streamlit.errors import StreamlitAPIException\nfrom streamlit.proto import ForwardMsg_pb2 as _ForwardMsg_pb2\n\n# Modules that the user should have access to. These are imported with \"as\"\n# syntax pass mypy checking with implicit_reexport disabled.\nfrom streamlit.caching import cache as cache # noqa: F401\n\n# This is set to True inside cli._main_run(), and is False otherwise.\n# If False, we should assume that DeltaGenerator functions are effectively\n# no-ops, and adapt gracefully.\n_is_running_with_streamlit = False\n\n\ndef _update_logger():\n _logger.set_log_level(_config.get_option(\"logger.level\").upper())\n _logger.update_formatter()\n _logger.init_tornado_logs()\n\n\n# Make this file only depend on config option in an asynchronous manner. This\n# avoids a race condition when another file (such as a test file) tries to pass\n# in an alternative config.\n_config.on_config_parsed(_update_logger, True)\n\n\n_main = _DeltaGenerator(root_container=RootContainer.MAIN)\nsidebar = _DeltaGenerator(root_container=RootContainer.SIDEBAR, parent=_main)\n\n# DeltaGenerator methods:\n\naltair_chart = _main.altair_chart # noqa: E221\narea_chart = _main.area_chart # noqa: E221\naudio = _main.audio # noqa: E221\nballoons = _main.balloons # noqa: E221\nbar_chart = _main.bar_chart # noqa: E221\nbokeh_chart = _main.bokeh_chart # noqa: E221\nbutton = _main.button # noqa: E221\ncheckbox = _main.checkbox # noqa: E221\ncode = _main.code # noqa: E221\ndataframe = _main.dataframe # noqa: E221\ndate_input = _main.date_input # noqa: E221\npydeck_chart = _main.pydeck_chart # noqa: E221\nempty = _main.empty # noqa: E221\nerror = _main.error # noqa: E221\nexception = _main.exception # noqa: E221\nfile_uploader = _main.file_uploader # noqa: E221\ngraphviz_chart = _main.graphviz_chart # noqa: E221\nheader = _main.header # noqa: E221\nhelp = _main.help # noqa: E221\nimage = _main.image # noqa: E221\ninfo = _main.info # noqa: E221\njson = _main.json # noqa: E221\nlatex = _main.latex # noqa: E221\nline_chart = _main.line_chart # noqa: E221\nmap = _main.map # noqa: E221\nmarkdown = _main.markdown # noqa: E221\nmultiselect = _main.multiselect # noqa: E221\nnumber_input = _main.number_input # noqa: E221\nplotly_chart = _main.plotly_chart # noqa: E221\nprogress = _main.progress # noqa: E221\npyplot = _main.pyplot # noqa: E221\nradio = _main.radio # noqa: E221\nselectbox = _main.selectbox # noqa: E221\nselect_slider = _main.select_slider # noqa: E221\nslider = _main.slider # noqa: E221\nsubheader = _main.subheader # noqa: E221\nsuccess = _main.success # noqa: E221\ntable = _main.table # noqa: E221\ntext = _main.text # noqa: 
E221\ntext_area = _main.text_area # noqa: E221\ntext_input = _main.text_input # noqa: E221\ntime_input = _main.time_input # noqa: E221\ntitle = _main.title # noqa: E221\nvega_lite_chart = _main.vega_lite_chart # noqa: E221\nvideo = _main.video # noqa: E221\nwarning = _main.warning # noqa: E221\nwrite = _main.write # noqa: E221\ncolor_picker = _main.color_picker # noqa: E221\n\n# Config\n\nget_option = _config.get_option\nfrom streamlit.commands.page_config import set_page_config\n\n\ndef _beta_warning(func, date):\n \"\"\"Wrapper for functions that are no longer in beta.\n\n Wrapped functions will run as normal, but then proceed to show an st.warning\n saying that the beta_ version will be removed in ~3 months.\n\n Parameters\n ----------\n func: function\n The `st.` function that used to be in beta.\n\n date: str\n A date like \"2020-01-01\", indicating the last day we'll guarantee\n support for the beta_ prefix.\n \"\"\"\n\n def wrapped(*args, **kwargs):\n # Note: Since we use a wrapper, beta_ functions will not autocomplete\n # correctly on VSCode.\n result = func(*args, **kwargs)\n warning(\n f\"`st.{func.__name__}` has graduated out of beta. \"\n + f\"On {date}, the beta_ version will be removed.\\n\\n\"\n + f\"Before then, update your code from `st.beta_{func.__name__}` to `st.{func.__name__}`.\"\n )\n return result\n\n # Update the wrapped func's name & docstring so st.help does the right thing\n wrapped.__name__ = \"beta_\" + func.__name__\n wrapped.__doc__ = func.__doc__\n return wrapped\n\n\nbeta_set_page_config = _beta_warning(set_page_config, \"2021-01-06\")\nbeta_color_picker = _beta_warning(_main.color_picker, \"January 28, 2021\")\nbeta_container = _main.beta_container # noqa: E221\nbeta_expander = _main.beta_expander # noqa: E221\nbeta_columns = _main.beta_columns # noqa: E221\n\n\ndef set_option(key, value):\n \"\"\"Set config option.\n\n Currently, only the following config options can be set within the script itself:\n * client.caching\n * client.displayEnabled\n * deprecation.*\n\n Calling with any other options will raise StreamlitAPIException.\n\n Run `streamlit config show` in the terminal to see all available options.\n\n Parameters\n ----------\n key : str\n The config option key of the form \"section.optionName\". To see all\n available options, run `streamlit config show` on a terminal.\n\n value\n The new value to assign to this config option.\n\n \"\"\"\n opt = _config._config_options[key]\n if opt.scriptable:\n _config.set_option(key, value)\n return\n\n raise StreamlitAPIException(\n \"{key} cannot be set on the fly. Set as command line option, e.g. streamlit run script.py --{key}, or in config.toml instead.\".format(\n key=key\n )\n )\n\n\ndef experimental_show(*args):\n \"\"\"Write arguments and *argument names* to your app for debugging purposes.\n\n Show() has similar properties to write():\n\n 1. You can pass in multiple arguments, all of which will be debugged.\n 2. It returns None, so it's \"slot\" in the app cannot be reused.\n\n Note: This is an experimental feature. See\n https://docs.streamlit.io/en/latest/api.html#pre-release-features for more information.\n\n Parameters\n ----------\n *args : any\n One or many objects to debug in the App.\n\n Example\n -------\n\n >>> dataframe = pd.DataFrame({\n ... 'first column': [1, 2, 3, 4],\n ... 'second column': [10, 20, 30, 40],\n ... 
}))\n >>> st.experimental_show(dataframe)\n\n Notes\n -----\n\n This is an experimental feature with usage limitations:\n\n - The method must be called with the name `show`.\n - Must be called in one line of code, and only once per line.\n - When passing multiple arguments the inclusion of `,` or `)` in a string\n argument may cause an error.\n\n \"\"\"\n if not args:\n return\n\n try:\n import inspect\n\n # Get the calling line of code\n current_frame = inspect.currentframe()\n if current_frame is None:\n warning(\"`show` not enabled in the shell\")\n return\n lines = inspect.getframeinfo(current_frame.f_back)[3]\n\n if not lines:\n warning(\"`show` not enabled in the shell\")\n return\n\n # Parse arguments from the line\n line = lines[0].split(\"show\", 1)[1]\n inputs = _code_util.get_method_args_from_code(args, line)\n\n # Escape markdown and add deltas\n for idx, input in enumerate(inputs):\n escaped = _string_util.escape_markdown(input)\n\n markdown(\"**%s**\" % escaped)\n write(args[idx])\n\n except Exception:\n _, exc, exc_tb = _sys.exc_info()\n exception(exc)\n\n\ndef experimental_get_query_params():\n \"\"\"Return the query parameters that is currently showing in the browser's URL bar.\n\n Returns\n -------\n dict\n The current query parameters as a dict. \"Query parameters\" are the part of the URL that comes\n after the first \"?\".\n\n Example\n -------\n\n Let's say the user's web browser is at\n `http://localhost:8501/?show_map=True&selected=asia&selected=america`.\n Then, you can get the query parameters using the following:\n\n >>> st.experimental_get_query_params()\n {\"show_map\": [\"True\"], \"selected\": [\"asia\", \"america\"]}\n\n Note that the values in the returned dict are *always* lists. This is\n because we internally use Python's urllib.parse.parse_qs(), which behaves\n this way. And this behavior makes sense when you consider that every item\n in a query string is potentially a 1-element array.\n\n \"\"\"\n ctx = _get_report_ctx()\n if ctx is None:\n return \"\"\n return _parse.parse_qs(ctx.query_string)\n\n\ndef experimental_set_query_params(**query_params):\n \"\"\"Set the query parameters that are shown in the browser's URL bar.\n\n Parameters\n ----------\n **query_params : dict\n The query parameters to set, as key-value pairs.\n\n Example\n -------\n\n To point the user's web browser to something like\n \"http://localhost:8501/?show_map=True&selected=asia&selected=america\",\n you would do the following:\n\n >>> st.experimental_set_query_params(\n ... show_map=True,\n ... selected=[\"asia\", \"america\"],\n ... )\n\n \"\"\"\n ctx = _get_report_ctx()\n if ctx is None:\n return\n ctx.query_string = _parse.urlencode(query_params, doseq=True)\n msg = _ForwardMsg_pb2.ForwardMsg()\n msg.page_info_changed.query_string = ctx.query_string\n ctx.enqueue(msg)\n\n\n@_contextlib.contextmanager\ndef spinner(text=\"In progress...\"):\n \"\"\"Temporarily displays a message while executing a block of code.\n\n Parameters\n ----------\n text : str\n A message to display while executing that block\n\n Example\n -------\n\n >>> with st.spinner('Wait for it...'):\n >>> time.sleep(5)\n >>> st.success('Done!')\n\n \"\"\"\n import streamlit.caching as caching\n\n # @st.cache optionally uses spinner for long-running computations.\n # Normally, streamlit warns the user when they call st functions\n # from within an @st.cache'd function. 
But we do *not* want to show\n # these warnings for spinner's message, so we create and mutate this\n # message delta within the \"suppress_cached_st_function_warning\"\n # context.\n with caching.suppress_cached_st_function_warning():\n message = empty()\n\n try:\n # Set the message 0.1 seconds in the future to avoid annoying\n # flickering if this spinner runs too quickly.\n DELAY_SECS = 0.1\n display_message = True\n display_message_lock = _threading.Lock()\n\n def set_message():\n with display_message_lock:\n if display_message:\n with caching.suppress_cached_st_function_warning():\n message.warning(str(text))\n\n _add_report_ctx(_threading.Timer(DELAY_SECS, set_message)).start()\n\n # Yield control back to the context.\n yield\n finally:\n if display_message_lock:\n with display_message_lock:\n display_message = False\n with caching.suppress_cached_st_function_warning():\n message.empty()\n\n\n_SPACES_RE = _re.compile(\"\\\\s*\")\n\n\n@_contextlib.contextmanager\ndef echo(code_location=\"above\"):\n \"\"\"Use in a `with` block to draw some code on the app, then execute it.\n\n Parameters\n ----------\n code_location : \"above\" or \"below\"\n Whether to show the echoed code before or after the results of the\n executed code block.\n\n Example\n -------\n\n >>> with st.echo():\n >>> st.write('This code will be printed')\n\n \"\"\"\n if code_location == \"below\":\n show_code = code\n show_warning = warning\n else:\n placeholder = empty() # noqa: F821\n show_code = placeholder.code\n show_warning = placeholder.warning\n\n try:\n frame = _traceback.extract_stack()[-3]\n filename, start_line = frame.filename, frame.lineno\n yield\n frame = _traceback.extract_stack()[-3]\n end_line = frame.lineno\n lines_to_display = [] # type: List[str]\n with _source_util.open_python_file(filename) as source_file:\n source_lines = source_file.readlines()\n lines_to_display.extend(source_lines[start_line:end_line])\n match = _SPACES_RE.match(lines_to_display[0])\n initial_spaces = match.end() if match else 0\n for line in source_lines[end_line:]:\n match = _SPACES_RE.match(line)\n indentation = match.end() if match else 0\n # The != 1 is because we want to allow '\\n' between sections.\n if indentation != 1 and indentation < initial_spaces:\n break\n lines_to_display.append(line)\n line_to_display = _textwrap.dedent(\"\".join(lines_to_display))\n\n show_code(line_to_display, \"python\")\n\n except FileNotFoundError as err:\n show_warning(\"Unable to display code. 
%s\" % err)\n\n\ndef _transparent_write(*args):\n \"\"\"This is just st.write, but returns the arguments you passed to it.\"\"\"\n write(*args)\n if len(args) == 1:\n return args[0]\n return args\n\n\n# We want to show a warning when the user runs a Streamlit script without\n# 'streamlit run', but we need to make sure the warning appears only once no\n# matter how many times __init__ gets loaded.\n_use_warning_has_been_displayed = False\n\n\ndef _maybe_print_use_warning():\n \"\"\"Print a warning if Streamlit is imported but not being run with `streamlit run`.\n The warning is printed only once.\n \"\"\"\n global _use_warning_has_been_displayed\n\n if not _use_warning_has_been_displayed:\n _use_warning_has_been_displayed = True\n\n if _env_util.is_repl():\n _LOGGER.warning(\n _textwrap.dedent(\n \"\"\"\n\n Will not generate Streamlit app\n\n To generate an app, use Streamlit in a file and run it with:\n $ streamlit run [FILE_NAME] [ARGUMENTS]\n\n \"\"\"\n )\n )\n\n elif not _is_running_with_streamlit and _config.get_option(\n \"global.showWarningOnDirectExecution\"\n ):\n script_name = _sys.argv[0]\n\n _LOGGER.warning(\n _textwrap.dedent(\n f\"\"\"\n\n Will not generate Streamlit App\n\n To generate an App, run this file with:\n $ streamlit run {script_name} [ARGUMENTS]\n\n \"\"\"\n )\n )\n\n\ndef stop():\n \"\"\"Stops execution immediately.\n\n Streamlit will not run any statements after `st.stop()`.\n We recommend rendering a message to explain why the script has stopped.\n When run outside of Streamlit, this will raise an Exception.\n\n Example\n -------\n\n >>> name = st.text_input('Name')\n >>> if not name:\n >>> st.warning('Please input a name.')\n >>> st.stop()\n >>> st.success('Thank you for inputting a name.')\n\n \"\"\"\n raise StopException()\n\n\ndef experimental_rerun():\n \"\"\"Rerun the script immediately.\n\n When `st.experimental_rerun()` is called, the script is halted - no\n more statements will be run, and the script will be queued to re-run\n from the top.\n\n If this function is called outside of Streamlit, it will raise an\n Exception.\n \"\"\"\n\n raise _RerunException(_RerunData(None))\n", "path": "lib/streamlit/__init__.py"}], "after_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Streamlit.\n\nHow to use Streamlit in 3 seconds:\n\n 1. Write an app\n >>> import streamlit as st\n >>> st.write(anything_you_want)\n\n 2. Run your app\n $ streamlit run my_script.py\n\n 3. Use your app\n A new tab will open on your browser. That's your Streamlit app!\n\n 4. Modify your code, save it, and watch changes live on your browser.\n\nTake a look at the other commands in this module to find out what else\nStreamlit can do:\n\n >>> dir(streamlit)\n\nOr try running our \"Hello World\":\n\n $ streamlit hello\n\nFor more detailed info, see https://docs.streamlit.io.\n\"\"\"\n\n# IMPORTANT: Prefix with an underscore anything that the user shouldn't see.\n\n# NOTE: You'll see lots of \"noqa: F821\" in this file. 
That's because we\n# manually mess with the local namespace so the linter can't know that some\n# identifiers actually exist in the namespace.\n\n# Must be at the top, to avoid circular dependency.\nfrom streamlit import logger as _logger\nfrom streamlit import config as _config\nfrom streamlit.proto.RootContainer_pb2 import RootContainer\n\n_LOGGER = _logger.get_logger(\"root\")\n\n# Give the package a version.\nimport pkg_resources as _pkg_resources\nfrom typing import List\n\n# This used to be pkg_resources.require('streamlit') but it would cause\n# pex files to fail. See #394 for more details.\n__version__ = _pkg_resources.get_distribution(\"streamlit\").version\n\nimport contextlib as _contextlib\nimport re as _re\nimport sys as _sys\nimport textwrap as _textwrap\nimport threading as _threading\nimport traceback as _traceback\nimport urllib.parse as _parse\n\nfrom streamlit import code_util as _code_util\nfrom streamlit import env_util as _env_util\nfrom streamlit import source_util as _source_util\nfrom streamlit import string_util as _string_util\nfrom streamlit.delta_generator import DeltaGenerator as _DeltaGenerator\nfrom streamlit.report_thread import add_report_ctx as _add_report_ctx\nfrom streamlit.report_thread import get_report_ctx as _get_report_ctx\nfrom streamlit.script_runner import StopException\nfrom streamlit.script_runner import RerunException as _RerunException\nfrom streamlit.script_request_queue import RerunData as _RerunData\nfrom streamlit.errors import StreamlitAPIException\nfrom streamlit.proto import ForwardMsg_pb2 as _ForwardMsg_pb2\n\n# Modules that the user should have access to. These are imported with \"as\"\n# syntax pass mypy checking with implicit_reexport disabled.\nfrom streamlit.caching import cache as cache # noqa: F401\n\n# This is set to True inside cli._main_run(), and is False otherwise.\n# If False, we should assume that DeltaGenerator functions are effectively\n# no-ops, and adapt gracefully.\n_is_running_with_streamlit = False\n\n\ndef _update_logger():\n _logger.set_log_level(_config.get_option(\"logger.level\").upper())\n _logger.update_formatter()\n _logger.init_tornado_logs()\n\n\n# Make this file only depend on config option in an asynchronous manner. 
This\n# avoids a race condition when another file (such as a test file) tries to pass\n# in an alternative config.\n_config.on_config_parsed(_update_logger, True)\n\n\n_main = _DeltaGenerator(root_container=RootContainer.MAIN)\nsidebar = _DeltaGenerator(root_container=RootContainer.SIDEBAR, parent=_main)\n\n# DeltaGenerator methods:\n\naltair_chart = _main.altair_chart # noqa: E221\narea_chart = _main.area_chart # noqa: E221\naudio = _main.audio # noqa: E221\nballoons = _main.balloons # noqa: E221\nbar_chart = _main.bar_chart # noqa: E221\nbokeh_chart = _main.bokeh_chart # noqa: E221\nbutton = _main.button # noqa: E221\ncheckbox = _main.checkbox # noqa: E221\ncode = _main.code # noqa: E221\ndataframe = _main.dataframe # noqa: E221\ndate_input = _main.date_input # noqa: E221\npydeck_chart = _main.pydeck_chart # noqa: E221\nempty = _main.empty # noqa: E221\nerror = _main.error # noqa: E221\nexception = _main.exception # noqa: E221\nfile_uploader = _main.file_uploader # noqa: E221\ngraphviz_chart = _main.graphviz_chart # noqa: E221\nheader = _main.header # noqa: E221\nhelp = _main.help # noqa: E221\nimage = _main.image # noqa: E221\ninfo = _main.info # noqa: E221\njson = _main.json # noqa: E221\nlatex = _main.latex # noqa: E221\nline_chart = _main.line_chart # noqa: E221\nmap = _main.map # noqa: E221\nmarkdown = _main.markdown # noqa: E221\nmultiselect = _main.multiselect # noqa: E221\nnumber_input = _main.number_input # noqa: E221\nplotly_chart = _main.plotly_chart # noqa: E221\nprogress = _main.progress # noqa: E221\npyplot = _main.pyplot # noqa: E221\nradio = _main.radio # noqa: E221\nselectbox = _main.selectbox # noqa: E221\nselect_slider = _main.select_slider # noqa: E221\nslider = _main.slider # noqa: E221\nsubheader = _main.subheader # noqa: E221\nsuccess = _main.success # noqa: E221\ntable = _main.table # noqa: E221\ntext = _main.text # noqa: E221\ntext_area = _main.text_area # noqa: E221\ntext_input = _main.text_input # noqa: E221\ntime_input = _main.time_input # noqa: E221\ntitle = _main.title # noqa: E221\nvega_lite_chart = _main.vega_lite_chart # noqa: E221\nvideo = _main.video # noqa: E221\nwarning = _main.warning # noqa: E221\nwrite = _main.write # noqa: E221\ncolor_picker = _main.color_picker # noqa: E221\n\n# Config\n\nget_option = _config.get_option\nfrom streamlit.commands.page_config import set_page_config\n\n\ndef _beta_warning(func, date):\n \"\"\"Wrapper for functions that are no longer in beta.\n\n Wrapped functions will run as normal, but then proceed to show an st.warning\n saying that the beta_ version will be removed in ~3 months.\n\n Parameters\n ----------\n func: function\n The `st.` function that used to be in beta.\n\n date: str\n A date like \"2020-01-01\", indicating the last day we'll guarantee\n support for the beta_ prefix.\n \"\"\"\n\n def wrapped(*args, **kwargs):\n # Note: Since we use a wrapper, beta_ functions will not autocomplete\n # correctly on VSCode.\n result = func(*args, **kwargs)\n warning(\n f\"`st.{func.__name__}` has graduated out of beta. 
\"\n + f\"On {date}, the beta_ version will be removed.\\n\\n\"\n + f\"Before then, update your code from `st.beta_{func.__name__}` to `st.{func.__name__}`.\"\n )\n return result\n\n # Update the wrapped func's name & docstring so st.help does the right thing\n wrapped.__name__ = \"beta_\" + func.__name__\n wrapped.__doc__ = func.__doc__\n return wrapped\n\n\nbeta_container = _main.beta_container # noqa: E221\nbeta_expander = _main.beta_expander # noqa: E221\nbeta_columns = _main.beta_columns # noqa: E221\n\n\ndef set_option(key, value):\n \"\"\"Set config option.\n\n Currently, only the following config options can be set within the script itself:\n * client.caching\n * client.displayEnabled\n * deprecation.*\n\n Calling with any other options will raise StreamlitAPIException.\n\n Run `streamlit config show` in the terminal to see all available options.\n\n Parameters\n ----------\n key : str\n The config option key of the form \"section.optionName\". To see all\n available options, run `streamlit config show` on a terminal.\n\n value\n The new value to assign to this config option.\n\n \"\"\"\n opt = _config._config_options[key]\n if opt.scriptable:\n _config.set_option(key, value)\n return\n\n raise StreamlitAPIException(\n \"{key} cannot be set on the fly. Set as command line option, e.g. streamlit run script.py --{key}, or in config.toml instead.\".format(\n key=key\n )\n )\n\n\ndef experimental_show(*args):\n \"\"\"Write arguments and *argument names* to your app for debugging purposes.\n\n Show() has similar properties to write():\n\n 1. You can pass in multiple arguments, all of which will be debugged.\n 2. It returns None, so it's \"slot\" in the app cannot be reused.\n\n Note: This is an experimental feature. See\n https://docs.streamlit.io/en/latest/api.html#pre-release-features for more information.\n\n Parameters\n ----------\n *args : any\n One or many objects to debug in the App.\n\n Example\n -------\n\n >>> dataframe = pd.DataFrame({\n ... 'first column': [1, 2, 3, 4],\n ... 'second column': [10, 20, 30, 40],\n ... }))\n >>> st.experimental_show(dataframe)\n\n Notes\n -----\n\n This is an experimental feature with usage limitations:\n\n - The method must be called with the name `show`.\n - Must be called in one line of code, and only once per line.\n - When passing multiple arguments the inclusion of `,` or `)` in a string\n argument may cause an error.\n\n \"\"\"\n if not args:\n return\n\n try:\n import inspect\n\n # Get the calling line of code\n current_frame = inspect.currentframe()\n if current_frame is None:\n warning(\"`show` not enabled in the shell\")\n return\n lines = inspect.getframeinfo(current_frame.f_back)[3]\n\n if not lines:\n warning(\"`show` not enabled in the shell\")\n return\n\n # Parse arguments from the line\n line = lines[0].split(\"show\", 1)[1]\n inputs = _code_util.get_method_args_from_code(args, line)\n\n # Escape markdown and add deltas\n for idx, input in enumerate(inputs):\n escaped = _string_util.escape_markdown(input)\n\n markdown(\"**%s**\" % escaped)\n write(args[idx])\n\n except Exception:\n _, exc, exc_tb = _sys.exc_info()\n exception(exc)\n\n\ndef experimental_get_query_params():\n \"\"\"Return the query parameters that is currently showing in the browser's URL bar.\n\n Returns\n -------\n dict\n The current query parameters as a dict. 
\"Query parameters\" are the part of the URL that comes\n after the first \"?\".\n\n Example\n -------\n\n Let's say the user's web browser is at\n `http://localhost:8501/?show_map=True&selected=asia&selected=america`.\n Then, you can get the query parameters using the following:\n\n >>> st.experimental_get_query_params()\n {\"show_map\": [\"True\"], \"selected\": [\"asia\", \"america\"]}\n\n Note that the values in the returned dict are *always* lists. This is\n because we internally use Python's urllib.parse.parse_qs(), which behaves\n this way. And this behavior makes sense when you consider that every item\n in a query string is potentially a 1-element array.\n\n \"\"\"\n ctx = _get_report_ctx()\n if ctx is None:\n return \"\"\n return _parse.parse_qs(ctx.query_string)\n\n\ndef experimental_set_query_params(**query_params):\n \"\"\"Set the query parameters that are shown in the browser's URL bar.\n\n Parameters\n ----------\n **query_params : dict\n The query parameters to set, as key-value pairs.\n\n Example\n -------\n\n To point the user's web browser to something like\n \"http://localhost:8501/?show_map=True&selected=asia&selected=america\",\n you would do the following:\n\n >>> st.experimental_set_query_params(\n ... show_map=True,\n ... selected=[\"asia\", \"america\"],\n ... )\n\n \"\"\"\n ctx = _get_report_ctx()\n if ctx is None:\n return\n ctx.query_string = _parse.urlencode(query_params, doseq=True)\n msg = _ForwardMsg_pb2.ForwardMsg()\n msg.page_info_changed.query_string = ctx.query_string\n ctx.enqueue(msg)\n\n\n@_contextlib.contextmanager\ndef spinner(text=\"In progress...\"):\n \"\"\"Temporarily displays a message while executing a block of code.\n\n Parameters\n ----------\n text : str\n A message to display while executing that block\n\n Example\n -------\n\n >>> with st.spinner('Wait for it...'):\n >>> time.sleep(5)\n >>> st.success('Done!')\n\n \"\"\"\n import streamlit.caching as caching\n\n # @st.cache optionally uses spinner for long-running computations.\n # Normally, streamlit warns the user when they call st functions\n # from within an @st.cache'd function. 
But we do *not* want to show\n # these warnings for spinner's message, so we create and mutate this\n # message delta within the \"suppress_cached_st_function_warning\"\n # context.\n with caching.suppress_cached_st_function_warning():\n message = empty()\n\n try:\n # Set the message 0.1 seconds in the future to avoid annoying\n # flickering if this spinner runs too quickly.\n DELAY_SECS = 0.1\n display_message = True\n display_message_lock = _threading.Lock()\n\n def set_message():\n with display_message_lock:\n if display_message:\n with caching.suppress_cached_st_function_warning():\n message.warning(str(text))\n\n _add_report_ctx(_threading.Timer(DELAY_SECS, set_message)).start()\n\n # Yield control back to the context.\n yield\n finally:\n if display_message_lock:\n with display_message_lock:\n display_message = False\n with caching.suppress_cached_st_function_warning():\n message.empty()\n\n\n_SPACES_RE = _re.compile(\"\\\\s*\")\n\n\n@_contextlib.contextmanager\ndef echo(code_location=\"above\"):\n \"\"\"Use in a `with` block to draw some code on the app, then execute it.\n\n Parameters\n ----------\n code_location : \"above\" or \"below\"\n Whether to show the echoed code before or after the results of the\n executed code block.\n\n Example\n -------\n\n >>> with st.echo():\n >>> st.write('This code will be printed')\n\n \"\"\"\n if code_location == \"below\":\n show_code = code\n show_warning = warning\n else:\n placeholder = empty() # noqa: F821\n show_code = placeholder.code\n show_warning = placeholder.warning\n\n try:\n frame = _traceback.extract_stack()[-3]\n filename, start_line = frame.filename, frame.lineno\n yield\n frame = _traceback.extract_stack()[-3]\n end_line = frame.lineno\n lines_to_display = [] # type: List[str]\n with _source_util.open_python_file(filename) as source_file:\n source_lines = source_file.readlines()\n lines_to_display.extend(source_lines[start_line:end_line])\n match = _SPACES_RE.match(lines_to_display[0])\n initial_spaces = match.end() if match else 0\n for line in source_lines[end_line:]:\n match = _SPACES_RE.match(line)\n indentation = match.end() if match else 0\n # The != 1 is because we want to allow '\\n' between sections.\n if indentation != 1 and indentation < initial_spaces:\n break\n lines_to_display.append(line)\n line_to_display = _textwrap.dedent(\"\".join(lines_to_display))\n\n show_code(line_to_display, \"python\")\n\n except FileNotFoundError as err:\n show_warning(\"Unable to display code. 
%s\" % err)\n\n\ndef _transparent_write(*args):\n \"\"\"This is just st.write, but returns the arguments you passed to it.\"\"\"\n write(*args)\n if len(args) == 1:\n return args[0]\n return args\n\n\n# We want to show a warning when the user runs a Streamlit script without\n# 'streamlit run', but we need to make sure the warning appears only once no\n# matter how many times __init__ gets loaded.\n_use_warning_has_been_displayed = False\n\n\ndef _maybe_print_use_warning():\n \"\"\"Print a warning if Streamlit is imported but not being run with `streamlit run`.\n The warning is printed only once.\n \"\"\"\n global _use_warning_has_been_displayed\n\n if not _use_warning_has_been_displayed:\n _use_warning_has_been_displayed = True\n\n if _env_util.is_repl():\n _LOGGER.warning(\n _textwrap.dedent(\n \"\"\"\n\n Will not generate Streamlit app\n\n To generate an app, use Streamlit in a file and run it with:\n $ streamlit run [FILE_NAME] [ARGUMENTS]\n\n \"\"\"\n )\n )\n\n elif not _is_running_with_streamlit and _config.get_option(\n \"global.showWarningOnDirectExecution\"\n ):\n script_name = _sys.argv[0]\n\n _LOGGER.warning(\n _textwrap.dedent(\n f\"\"\"\n\n Will not generate Streamlit App\n\n To generate an App, run this file with:\n $ streamlit run {script_name} [ARGUMENTS]\n\n \"\"\"\n )\n )\n\n\ndef stop():\n \"\"\"Stops execution immediately.\n\n Streamlit will not run any statements after `st.stop()`.\n We recommend rendering a message to explain why the script has stopped.\n When run outside of Streamlit, this will raise an Exception.\n\n Example\n -------\n\n >>> name = st.text_input('Name')\n >>> if not name:\n >>> st.warning('Please input a name.')\n >>> st.stop()\n >>> st.success('Thank you for inputting a name.')\n\n \"\"\"\n raise StopException()\n\n\ndef experimental_rerun():\n \"\"\"Rerun the script immediately.\n\n When `st.experimental_rerun()` is called, the script is halted - no\n more statements will be run, and the script will be queued to re-run\n from the top.\n\n If this function is called outside of Streamlit, it will raise an\n Exception.\n \"\"\"\n\n raise _RerunException(_RerunData(None))\n", "path": "lib/streamlit/__init__.py"}]} |
gh_patches_debug_1065 | rasdani/github-patches | git_diff | aio-libs__aiohttp-1532 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
reading nested multipart messages does not work correctly
## Long story short
Multipart reader breaks after reading a sub multipart end boundary and starting the next part
## Expected behaviour
Nested multipart reader reads a message created with the multipart writer correctly
## Actual behaviour
```python
ValueError: Invalid boundary b'', expected b'--b0b69248b3a345cf8256a8dd25f07874'
```
## Steps to reproduce
Receive the multipart response from #1525
Server:
```python
from aiohttp.multipart import MultipartWriter
from aiohttp.web import Response
def handle(request):
with MultipartWriter('mixed') as root:
with MultipartWriter('mixed') as subwriter1:
subwriter1.append('first message')
root.append(subwriter1, headers=subwriter1.headers)
with MultipartWriter('mixed') as subwriter2:
subwriter2.append('second message')
root.append(subwriter2, headers=subwriter2.headers)
return Response(body=b''.join(root.serialize()), headers=root.headers)
# ... create web app which responds with the handler above ...
```
Client:
```python
import aiohttp
import asyncio
from aiohttp.multipart import BodyPartReader, MultipartReader
@asyncio.coroutine
def read_multipart(reader):
while True:
part = yield from reader.next()
if part is None: break
if isinstance(part, BodyPartReader):
body = yield from part.read(decode=True)
print('body part: %r' % body)
else:
print('nested part')
yield from read_multipart(part)
@asyncio.coroutine
def request(url):
response = yield from aiohttp.get(url)
yield from read_multipart(MultipartReader.from_response(response))
# ... drive event loop and call request(handler_url) ...
```
## Issue
Lines [multipart.py:767](https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/multipart.py#L767) and [multipart.py:969](https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/multipart.py#L969) emit line endings that leave an empty line after the end boundary of a multipart message (which is valid). However, the nested reader terminates right after the end boundary, and the parent reader then expects the next boundary; what it actually finds is the blank line produced by [multipart.py:767](https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/multipart.py#L767).
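
To make this concrete, the serialized stream around the first nested part looks roughly like the sketch below (boundary names are placeholders; the real writer generates uuid4 hex boundaries):

```python
# Rough sketch of the wire format; "ROOT"/"SUB" stand in for the generated boundaries.
payload_fragment = (
    b'--ROOT\r\n'
    b'Content-Type: multipart/mixed; boundary="SUB"\r\n'
    b'\r\n'
    b'--SUB\r\n'
    b'Content-Type: text/plain; charset=utf-8\r\n'
    b'Content-Length: 13\r\n'
    b'\r\n'
    b'first message\r\n'
    b'--SUB--\r\n'  # nested reader sees its end boundary here and sets _at_eof
    b'\r\n'         # trailing CRLF yielded at the end of BodyPartWriter.serialize()
    b'--ROOT\r\n'   # parent reader expects this next, but readline() returns b'\r\n' first
)
```

The nested reader stops as soon as it reads `--SUB--`, so the very next `readline()` happens in the parent reader's `_read_boundary()`, which receives the bare `b'\r\n'`, strips it to `b''`, and raises the `ValueError` shown above.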
## Possible fix
```diff
diff --git a/aiohttp/multipart.py b/aiohttp/multipart.py
index af7f19b1..82ad2306 100644
--- a/aiohttp/multipart.py
+++ b/aiohttp/multipart.py
@@ -639,6 +639,7 @@ class MultipartReader(object):
pass
elif chunk == self._boundary + b'--':
self._at_eof = True
+ yield from self._readline()
else:
raise ValueError('Invalid boundary %r, expected %r'
% (chunk, self._boundary))
```
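
The change can be sanity-checked without a running server. The snippet below is a rough, self-contained sketch and not part of the original report: the in-memory `Stream` stub and helper names are assumptions, similar in spirit to the fake content streams used in aiohttp's own multipart tests. With the one-line fix applied the loop completes and the assertion passes; without it, the parent reader raises the `ValueError` shown above.

```python
import asyncio
import io

from aiohttp.multipart import BodyPartReader, MultipartReader, MultipartWriter


class Stream:
    """Minimal in-memory stand-in for a response content stream (assumed helper)."""

    def __init__(self, data):
        self.content = io.BytesIO(data)

    @asyncio.coroutine
    def read(self, size=None):
        return self.content.read(size)

    @asyncio.coroutine
    def readline(self):
        return self.content.readline()

    def at_eof(self):
        return self.content.tell() == len(self.content.getbuffer())

    def unread_data(self, data):
        self.content = io.BytesIO(data + self.content.read())


@asyncio.coroutine
def collect(reader, out):
    # Same traversal as the client above, but collecting decoded bodies.
    while True:
        part = yield from reader.next()
        if part is None:
            break
        if isinstance(part, BodyPartReader):
            out.append(bytes((yield from part.read(decode=True))))
        else:
            yield from collect(part, out)


def check_nested_roundtrip():
    with MultipartWriter('mixed') as root:
        for text in ('first message', 'second message'):
            with MultipartWriter('mixed') as sub:
                sub.append(text)
            root.append(sub, headers=sub.headers)

    reader = MultipartReader(root.headers, Stream(b''.join(root.serialize())))
    out = []
    asyncio.get_event_loop().run_until_complete(collect(reader, out))
    assert out == [b'first message', b'second message'], out


check_nested_roundtrip()
```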
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aiohttp/multipart.py`
Content:
```
1 import asyncio
2 import base64
3 import binascii
4 import io
5 import json
6 import mimetypes
7 import os
8 import re
9 import sys
10 import uuid
11 import warnings
12 import zlib
13 from collections import Mapping, Sequence, deque
14 from pathlib import Path
15 from urllib.parse import parse_qsl, quote, unquote, urlencode
16
17 from multidict import CIMultiDict
18
19 from .hdrs import (CONTENT_DISPOSITION, CONTENT_ENCODING, CONTENT_LENGTH,
20 CONTENT_TRANSFER_ENCODING, CONTENT_TYPE)
21 from .helpers import parse_mimetype
22 from .protocol import HttpParser
23
24 __all__ = ('MultipartReader', 'MultipartWriter',
25 'BodyPartReader', 'BodyPartWriter',
26 'BadContentDispositionHeader', 'BadContentDispositionParam',
27 'parse_content_disposition', 'content_disposition_filename')
28
29
30 CHAR = set(chr(i) for i in range(0, 128))
31 CTL = set(chr(i) for i in range(0, 32)) | {chr(127), }
32 SEPARATORS = {'(', ')', '<', '>', '@', ',', ';', ':', '\\', '"', '/', '[', ']',
33 '?', '=', '{', '}', ' ', chr(9)}
34 TOKEN = CHAR ^ CTL ^ SEPARATORS
35
36 PY_35 = sys.version_info >= (3, 5)
37 PY_352 = sys.version_info >= (3, 5, 2)
38
39
40 class BadContentDispositionHeader(RuntimeWarning):
41 pass
42
43
44 class BadContentDispositionParam(RuntimeWarning):
45 pass
46
47
48 def parse_content_disposition(header):
49 def is_token(string):
50 return string and TOKEN >= set(string)
51
52 def is_quoted(string):
53 return string[0] == string[-1] == '"'
54
55 def is_rfc5987(string):
56 return is_token(string) and string.count("'") == 2
57
58 def is_extended_param(string):
59 return string.endswith('*')
60
61 def is_continuous_param(string):
62 pos = string.find('*') + 1
63 if not pos:
64 return False
65 substring = string[pos:-1] if string.endswith('*') else string[pos:]
66 return substring.isdigit()
67
68 def unescape(text, *, chars=''.join(map(re.escape, CHAR))):
69 return re.sub('\\\\([{}])'.format(chars), '\\1', text)
70
71 if not header:
72 return None, {}
73
74 disptype, *parts = header.split(';')
75 if not is_token(disptype):
76 warnings.warn(BadContentDispositionHeader(header))
77 return None, {}
78
79 params = {}
80 for item in parts:
81 if '=' not in item:
82 warnings.warn(BadContentDispositionHeader(header))
83 return None, {}
84
85 key, value = item.split('=', 1)
86 key = key.lower().strip()
87 value = value.lstrip()
88
89 if key in params:
90 warnings.warn(BadContentDispositionHeader(header))
91 return None, {}
92
93 if not is_token(key):
94 warnings.warn(BadContentDispositionParam(item))
95 continue
96
97 elif is_continuous_param(key):
98 if is_quoted(value):
99 value = unescape(value[1:-1])
100 elif not is_token(value):
101 warnings.warn(BadContentDispositionParam(item))
102 continue
103
104 elif is_extended_param(key):
105 if is_rfc5987(value):
106 encoding, _, value = value.split("'", 2)
107 encoding = encoding or 'utf-8'
108 else:
109 warnings.warn(BadContentDispositionParam(item))
110 continue
111
112 try:
113 value = unquote(value, encoding, 'strict')
114 except UnicodeDecodeError: # pragma: nocover
115 warnings.warn(BadContentDispositionParam(item))
116 continue
117
118 else:
119 if is_quoted(value):
120 value = unescape(value[1:-1].lstrip('\\/'))
121 elif not is_token(value):
122 warnings.warn(BadContentDispositionHeader(header))
123 return None, {}
124
125 params[key] = value
126
127 return disptype.lower(), params
128
129
130 def content_disposition_filename(params):
131 if not params:
132 return None
133 elif 'filename*' in params:
134 return params['filename*']
135 elif 'filename' in params:
136 return params['filename']
137 else:
138 parts = []
139 fnparams = sorted((key, value)
140 for key, value in params.items()
141 if key.startswith('filename*'))
142 for num, (key, value) in enumerate(fnparams):
143 _, tail = key.split('*', 1)
144 if tail.endswith('*'):
145 tail = tail[:-1]
146 if tail == str(num):
147 parts.append(value)
148 else:
149 break
150 if not parts:
151 return None
152 value = ''.join(parts)
153 if "'" in value:
154 encoding, _, value = value.split("'", 2)
155 encoding = encoding or 'utf-8'
156 return unquote(value, encoding, 'strict')
157 return value
158
159
160 class MultipartResponseWrapper(object):
161 """Wrapper around the :class:`MultipartBodyReader` to take care about
162 underlying connection and close it when it needs in."""
163
164 def __init__(self, resp, stream):
165 self.resp = resp
166 self.stream = stream
167
168 if PY_35:
169 def __aiter__(self):
170 return self
171
172 if not PY_352: # pragma: no cover
173 __aiter__ = asyncio.coroutine(__aiter__)
174
175 @asyncio.coroutine
176 def __anext__(self):
177 part = yield from self.next()
178 if part is None:
179 raise StopAsyncIteration # NOQA
180 return part
181
182 def at_eof(self):
183 """Returns ``True`` when all response data had been read.
184
185 :rtype: bool
186 """
187 return self.resp.content.at_eof()
188
189 @asyncio.coroutine
190 def next(self):
191 """Emits next multipart reader object."""
192 item = yield from self.stream.next()
193 if self.stream.at_eof():
194 yield from self.release()
195 return item
196
197 @asyncio.coroutine
198 def release(self):
199 """Releases the connection gracefully, reading all the content
200 to the void."""
201 yield from self.resp.release()
202
203
204 class BodyPartReader(object):
205 """Multipart reader for single body part."""
206
207 chunk_size = 8192
208
209 def __init__(self, boundary, headers, content):
210 self.headers = headers
211 self._boundary = boundary
212 self._content = content
213 self._at_eof = False
214 length = self.headers.get(CONTENT_LENGTH, None)
215 self._length = int(length) if length is not None else None
216 self._read_bytes = 0
217 self._unread = deque()
218 self._prev_chunk = None
219 self._content_eof = 0
220
221 if PY_35:
222 def __aiter__(self):
223 return self
224
225 if not PY_352: # pragma: no cover
226 __aiter__ = asyncio.coroutine(__aiter__)
227
228 @asyncio.coroutine
229 def __anext__(self):
230 part = yield from self.next()
231 if part is None:
232 raise StopAsyncIteration # NOQA
233 return part
234
235 @asyncio.coroutine
236 def next(self):
237 item = yield from self.read()
238 if not item:
239 return None
240 return item
241
242 @asyncio.coroutine
243 def read(self, *, decode=False):
244 """Reads body part data.
245
246 :param bool decode: Decodes data following by encoding
247 method from `Content-Encoding` header. If it missed
248 data remains untouched
249
250 :rtype: bytearray
251 """
252 if self._at_eof:
253 return b''
254 data = bytearray()
255 while not self._at_eof:
256 data.extend((yield from self.read_chunk(self.chunk_size)))
257 if decode:
258 return self.decode(data)
259 return data
260
261 @asyncio.coroutine
262 def read_chunk(self, size=chunk_size):
263 """Reads body part content chunk of the specified size.
264
265 :param int size: chunk size
266
267 :rtype: bytearray
268 """
269 if self._at_eof:
270 return b''
271 if self._length:
272 chunk = yield from self._read_chunk_from_length(size)
273 else:
274 chunk = yield from self._read_chunk_from_stream(size)
275
276 self._read_bytes += len(chunk)
277 if self._read_bytes == self._length:
278 self._at_eof = True
279 if self._at_eof:
280 assert b'\r\n' == (yield from self._content.readline()), \
281 'reader did not read all the data or it is malformed'
282 return chunk
283
284 @asyncio.coroutine
285 def _read_chunk_from_length(self, size):
286 """Reads body part content chunk of the specified size.
287 The body part must has `Content-Length` header with proper value.
288
289 :param int size: chunk size
290
291 :rtype: bytearray
292 """
293 assert self._length is not None, \
294 'Content-Length required for chunked read'
295 chunk_size = min(size, self._length - self._read_bytes)
296 chunk = yield from self._content.read(chunk_size)
297 return chunk
298
299 @asyncio.coroutine
300 def _read_chunk_from_stream(self, size):
301 """Reads content chunk of body part with unknown length.
302 The `Content-Length` header for body part is not necessary.
303
304 :param int size: chunk size
305
306 :rtype: bytearray
307 """
308 assert size >= len(self._boundary) + 2, \
309 'Chunk size must be greater or equal than boundary length + 2'
310 first_chunk = self._prev_chunk is None
311 if first_chunk:
312 self._prev_chunk = yield from self._content.read(size)
313
314 chunk = yield from self._content.read(size)
315 self._content_eof += int(self._content.at_eof())
316 assert self._content_eof < 3, "Reading after EOF"
317 window = self._prev_chunk + chunk
318 sub = b'\r\n' + self._boundary
319 if first_chunk:
320 idx = window.find(sub)
321 else:
322 idx = window.find(sub, max(0, len(self._prev_chunk) - len(sub)))
323 if idx >= 0:
324 # pushing boundary back to content
325 self._content.unread_data(window[idx:])
326 if size > idx:
327 self._prev_chunk = self._prev_chunk[:idx]
328 chunk = window[len(self._prev_chunk):idx]
329 if not chunk:
330 self._at_eof = True
331 result = self._prev_chunk
332 self._prev_chunk = chunk
333 return result
334
335 @asyncio.coroutine
336 def readline(self):
337 """Reads body part by line by line.
338
339 :rtype: bytearray
340 """
341 if self._at_eof:
342 return b''
343
344 if self._unread:
345 line = self._unread.popleft()
346 else:
347 line = yield from self._content.readline()
348
349 if line.startswith(self._boundary):
350 # the very last boundary may not come with \r\n,
351 # so set single rules for everyone
352 sline = line.rstrip(b'\r\n')
353 boundary = self._boundary
354 last_boundary = self._boundary + b'--'
355 # ensure that we read exactly the boundary, not something alike
356 if sline == boundary or sline == last_boundary:
357 self._at_eof = True
358 self._unread.append(line)
359 return b''
360 else:
361 next_line = yield from self._content.readline()
362 if next_line.startswith(self._boundary):
363 line = line[:-2] # strip CRLF but only once
364 self._unread.append(next_line)
365
366 return line
367
368 @asyncio.coroutine
369 def release(self):
370 """Like :meth:`read`, but reads all the data to the void.
371
372 :rtype: None
373 """
374 if self._at_eof:
375 return
376 while not self._at_eof:
377 yield from self.read_chunk(self.chunk_size)
378
379 @asyncio.coroutine
380 def text(self, *, encoding=None):
381 """Like :meth:`read`, but assumes that body part contains text data.
382
383 :param str encoding: Custom text encoding. Overrides specified
384 in charset param of `Content-Type` header
385
386 :rtype: str
387 """
388 data = yield from self.read(decode=True)
389 # see https://www.w3.org/TR/html5/forms.html#multipart/form-data-encoding-algorithm # NOQA
390 # and https://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#dom-xmlhttprequest-send # NOQA
391 encoding = encoding or self.get_charset(default='utf-8')
392 return data.decode(encoding)
393
394 @asyncio.coroutine
395 def json(self, *, encoding=None):
396 """Like :meth:`read`, but assumes that body parts contains JSON data.
397
398 :param str encoding: Custom JSON encoding. Overrides specified
399 in charset param of `Content-Type` header
400 """
401 data = yield from self.read(decode=True)
402 if not data:
403 return None
404 encoding = encoding or self.get_charset(default='utf-8')
405 return json.loads(data.decode(encoding))
406
407 @asyncio.coroutine
408 def form(self, *, encoding=None):
409 """Like :meth:`read`, but assumes that body parts contains form
410 urlencoded data.
411
412 :param str encoding: Custom form encoding. Overrides specified
413 in charset param of `Content-Type` header
414 """
415 data = yield from self.read(decode=True)
416 if not data:
417 return None
418 encoding = encoding or self.get_charset(default='utf-8')
419 return parse_qsl(data.rstrip().decode(encoding), encoding=encoding)
420
421 def at_eof(self):
422 """Returns ``True`` if the boundary was reached or
423 ``False`` otherwise.
424
425 :rtype: bool
426 """
427 return self._at_eof
428
429 def decode(self, data):
430 """Decodes data according the specified `Content-Encoding`
431 or `Content-Transfer-Encoding` headers value.
432
433 Supports ``gzip``, ``deflate`` and ``identity`` encodings for
434 `Content-Encoding` header.
435
436 Supports ``base64``, ``quoted-printable``, ``binary`` encodings for
437 `Content-Transfer-Encoding` header.
438
439 :param bytearray data: Data to decode.
440
441 :raises: :exc:`RuntimeError` - if encoding is unknown.
442
443 :rtype: bytes
444 """
445 if CONTENT_TRANSFER_ENCODING in self.headers:
446 data = self._decode_content_transfer(data)
447 if CONTENT_ENCODING in self.headers:
448 return self._decode_content(data)
449 return data
450
451 def _decode_content(self, data):
452 encoding = self.headers[CONTENT_ENCODING].lower()
453
454 if encoding == 'deflate':
455 return zlib.decompress(data, -zlib.MAX_WBITS)
456 elif encoding == 'gzip':
457 return zlib.decompress(data, 16 + zlib.MAX_WBITS)
458 elif encoding == 'identity':
459 return data
460 else:
461 raise RuntimeError('unknown content encoding: {}'.format(encoding))
462
463 def _decode_content_transfer(self, data):
464 encoding = self.headers[CONTENT_TRANSFER_ENCODING].lower()
465
466 if encoding == 'base64':
467 return base64.b64decode(data)
468 elif encoding == 'quoted-printable':
469 return binascii.a2b_qp(data)
470 elif encoding == 'binary':
471 return data
472 else:
473 raise RuntimeError('unknown content transfer encoding: {}'
474 ''.format(encoding))
475
476 def get_charset(self, default=None):
477 """Returns charset parameter from ``Content-Type`` header or default.
478 """
479 ctype = self.headers.get(CONTENT_TYPE, '')
480 *_, params = parse_mimetype(ctype)
481 return params.get('charset', default)
482
483 @property
484 def filename(self):
485 """Returns filename specified in Content-Disposition header or ``None``
486 if missed or header is malformed."""
487 _, params = parse_content_disposition(
488 self.headers.get(CONTENT_DISPOSITION))
489 return content_disposition_filename(params)
490
491
492 class MultipartReader(object):
493 """Multipart body reader."""
494
495 #: Response wrapper, used when multipart readers constructs from response.
496 response_wrapper_cls = MultipartResponseWrapper
497 #: Multipart reader class, used to handle multipart/* body parts.
498 #: None points to type(self)
499 multipart_reader_cls = None
500 #: Body part reader class for non multipart/* content types.
501 part_reader_cls = BodyPartReader
502
503 def __init__(self, headers, content):
504 self.headers = headers
505 self._boundary = ('--' + self._get_boundary()).encode()
506 self._content = content
507 self._last_part = None
508 self._at_eof = False
509 self._at_bof = True
510 self._unread = []
511
512 if PY_35:
513 def __aiter__(self):
514 return self
515
516 if not PY_352: # pragma: no cover
517 __aiter__ = asyncio.coroutine(__aiter__)
518
519 @asyncio.coroutine
520 def __anext__(self):
521 part = yield from self.next()
522 if part is None:
523 raise StopAsyncIteration # NOQA
524 return part
525
526 @classmethod
527 def from_response(cls, response):
528 """Constructs reader instance from HTTP response.
529
530 :param response: :class:`~aiohttp.client.ClientResponse` instance
531 """
532 obj = cls.response_wrapper_cls(response, cls(response.headers,
533 response.content))
534 return obj
535
536 def at_eof(self):
537 """Returns ``True`` if the final boundary was reached or
538 ``False`` otherwise.
539
540 :rtype: bool
541 """
542 return self._at_eof
543
544 @asyncio.coroutine
545 def next(self):
546 """Emits the next multipart body part."""
547 # So, if we're at BOF, we need to skip till the boundary.
548 if self._at_eof:
549 return
550 yield from self._maybe_release_last_part()
551 if self._at_bof:
552 yield from self._read_until_first_boundary()
553 self._at_bof = False
554 else:
555 yield from self._read_boundary()
556 if self._at_eof: # we just read the last boundary, nothing to do there
557 return
558 self._last_part = yield from self.fetch_next_part()
559 return self._last_part
560
561 @asyncio.coroutine
562 def release(self):
563 """Reads all the body parts to the void till the final boundary."""
564 while not self._at_eof:
565 item = yield from self.next()
566 if item is None:
567 break
568 yield from item.release()
569
570 @asyncio.coroutine
571 def fetch_next_part(self):
572 """Returns the next body part reader."""
573 headers = yield from self._read_headers()
574 return self._get_part_reader(headers)
575
576 def _get_part_reader(self, headers):
577 """Dispatches the response by the `Content-Type` header, returning
578 suitable reader instance.
579
580 :param dict headers: Response headers
581 """
582 ctype = headers.get(CONTENT_TYPE, '')
583 mtype, *_ = parse_mimetype(ctype)
584 if mtype == 'multipart':
585 if self.multipart_reader_cls is None:
586 return type(self)(headers, self._content)
587 return self.multipart_reader_cls(headers, self._content)
588 else:
589 return self.part_reader_cls(self._boundary, headers, self._content)
590
591 def _get_boundary(self):
592 mtype, *_, params = parse_mimetype(self.headers[CONTENT_TYPE])
593
594 assert mtype == 'multipart', 'multipart/* content type expected'
595
596 if 'boundary' not in params:
597 raise ValueError('boundary missed for Content-Type: %s'
598 % self.headers[CONTENT_TYPE])
599
600 boundary = params['boundary']
601 if len(boundary) > 70:
602 raise ValueError('boundary %r is too long (70 chars max)'
603 % boundary)
604
605 return boundary
606
607 @asyncio.coroutine
608 def _readline(self):
609 if self._unread:
610 return self._unread.pop()
611 return (yield from self._content.readline())
612
613 @asyncio.coroutine
614 def _read_until_first_boundary(self):
615 while True:
616 chunk = yield from self._readline()
617 if chunk == b'':
618 raise ValueError("Could not find starting boundary %r"
619 % (self._boundary))
620 chunk = chunk.rstrip()
621 if chunk == self._boundary:
622 return
623 elif chunk == self._boundary + b'--':
624 self._at_eof = True
625 return
626
627 @asyncio.coroutine
628 def _read_boundary(self):
629 chunk = (yield from self._readline()).rstrip()
630 if chunk == self._boundary:
631 pass
632 elif chunk == self._boundary + b'--':
633 self._at_eof = True
634 else:
635 raise ValueError('Invalid boundary %r, expected %r'
636 % (chunk, self._boundary))
637
638 @asyncio.coroutine
639 def _read_headers(self):
640 lines = [b'']
641 while True:
642 chunk = yield from self._content.readline()
643 chunk = chunk.strip()
644 lines.append(chunk)
645 if not chunk:
646 break
647 parser = HttpParser()
648 headers, *_ = parser.parse_headers(lines)
649 return headers
650
651 @asyncio.coroutine
652 def _maybe_release_last_part(self):
653 """Ensures that the last read body part is read completely."""
654 if self._last_part is not None:
655 if not self._last_part.at_eof():
656 yield from self._last_part.release()
657 self._unread.extend(self._last_part._unread)
658 self._last_part = None
659
660
661 class BodyPartWriter(object):
662 """Multipart writer for single body part."""
663
664 def __init__(self, obj, headers=None, *, chunk_size=8192):
665 if isinstance(obj, MultipartWriter):
666 if headers is not None:
667 obj.headers.update(headers)
668 headers = obj.headers
669 elif headers is None:
670 headers = CIMultiDict()
671 elif not isinstance(headers, CIMultiDict):
672 headers = CIMultiDict(headers)
673
674 self.obj = obj
675 self.headers = headers
676 self._chunk_size = chunk_size
677 self._fill_headers_with_defaults()
678
679 self._serialize_map = {
680 bytes: self._serialize_bytes,
681 str: self._serialize_str,
682 io.IOBase: self._serialize_io,
683 MultipartWriter: self._serialize_multipart,
684 ('application', 'json'): self._serialize_json,
685 ('application', 'x-www-form-urlencoded'): self._serialize_form
686 }
687
688 def _fill_headers_with_defaults(self):
689 if CONTENT_TYPE not in self.headers:
690 content_type = self._guess_content_type(self.obj)
691 if content_type is not None:
692 self.headers[CONTENT_TYPE] = content_type
693
694 if CONTENT_LENGTH not in self.headers:
695 content_length = self._guess_content_length(self.obj)
696 if content_length is not None:
697 self.headers[CONTENT_LENGTH] = str(content_length)
698
699 if CONTENT_DISPOSITION not in self.headers:
700 filename = self._guess_filename(self.obj)
701 if filename is not None:
702 self.set_content_disposition('attachment', filename=filename)
703
704 def _guess_content_length(self, obj):
705 if isinstance(obj, bytes):
706 return len(obj)
707 elif isinstance(obj, str):
708 *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))
709 charset = params.get('charset', 'us-ascii')
710 return len(obj.encode(charset))
711 elif isinstance(obj, io.StringIO):
712 *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))
713 charset = params.get('charset', 'us-ascii')
714 return len(obj.getvalue().encode(charset)) - obj.tell()
715 elif isinstance(obj, io.BytesIO):
716 return len(obj.getvalue()) - obj.tell()
717 elif isinstance(obj, io.IOBase):
718 try:
719 return os.fstat(obj.fileno()).st_size - obj.tell()
720 except (AttributeError, OSError):
721 return None
722 else:
723 return None
724
725 def _guess_content_type(self, obj, default='application/octet-stream'):
726 if hasattr(obj, 'name'):
727 name = getattr(obj, 'name')
728 return mimetypes.guess_type(name)[0]
729 elif isinstance(obj, (str, io.StringIO)):
730 return 'text/plain; charset=utf-8'
731 else:
732 return default
733
734 def _guess_filename(self, obj):
735 if isinstance(obj, io.IOBase):
736 name = getattr(obj, 'name', None)
737 if name is not None:
738 return Path(name).name
739
740 def serialize(self):
741 """Yields byte chunks for body part."""
742
743 has_encoding = (
744 CONTENT_ENCODING in self.headers and
745 self.headers[CONTENT_ENCODING] != 'identity' or
746 CONTENT_TRANSFER_ENCODING in self.headers
747 )
748 if has_encoding:
749 # since we're following streaming approach which doesn't assumes
750 # any intermediate buffers, we cannot calculate real content length
751 # with the specified content encoding scheme. So, instead of lying
752 # about content length and cause reading issues, we have to strip
753 # this information.
754 self.headers.pop(CONTENT_LENGTH, None)
755
756 if self.headers:
757 yield b'\r\n'.join(
758 b': '.join(map(lambda i: i.encode('latin1'), item))
759 for item in self.headers.items()
760 )
761 yield b'\r\n\r\n'
762 yield from self._maybe_encode_stream(self._serialize_obj())
763 yield b'\r\n'
764
765 def _serialize_obj(self):
766 obj = self.obj
767 mtype, stype, *_ = parse_mimetype(self.headers.get(CONTENT_TYPE))
768 serializer = self._serialize_map.get((mtype, stype))
769 if serializer is not None:
770 return serializer(obj)
771
772 for key in self._serialize_map:
773 if not isinstance(key, tuple) and isinstance(obj, key):
774 return self._serialize_map[key](obj)
775 return self._serialize_default(obj)
776
777 def _serialize_bytes(self, obj):
778 yield obj
779
780 def _serialize_str(self, obj):
781 *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))
782 yield obj.encode(params.get('charset', 'us-ascii'))
783
784 def _serialize_io(self, obj):
785 while True:
786 chunk = obj.read(self._chunk_size)
787 if not chunk:
788 break
789 if isinstance(chunk, str):
790 yield from self._serialize_str(chunk)
791 else:
792 yield from self._serialize_bytes(chunk)
793
794 def _serialize_multipart(self, obj):
795 yield from obj.serialize()
796
797 def _serialize_json(self, obj):
798 *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))
799 yield json.dumps(obj).encode(params.get('charset', 'utf-8'))
800
801 def _serialize_form(self, obj):
802 if isinstance(obj, Mapping):
803 obj = list(obj.items())
804 return self._serialize_str(urlencode(obj, doseq=True))
805
806 def _serialize_default(self, obj):
807 raise TypeError('unknown body part type %r' % type(obj))
808
809 def _maybe_encode_stream(self, stream):
810 if CONTENT_ENCODING in self.headers:
811 stream = self._apply_content_encoding(stream)
812 if CONTENT_TRANSFER_ENCODING in self.headers:
813 stream = self._apply_content_transfer_encoding(stream)
814 yield from stream
815
816 def _apply_content_encoding(self, stream):
817 encoding = self.headers[CONTENT_ENCODING].lower()
818 if encoding == 'identity':
819 yield from stream
820 elif encoding in ('deflate', 'gzip'):
821 if encoding == 'gzip':
822 zlib_mode = 16 + zlib.MAX_WBITS
823 else:
824 zlib_mode = -zlib.MAX_WBITS
825 zcomp = zlib.compressobj(wbits=zlib_mode)
826 for chunk in stream:
827 yield zcomp.compress(chunk)
828 else:
829 yield zcomp.flush()
830 else:
831 raise RuntimeError('unknown content encoding: {}'
832 ''.format(encoding))
833
834 def _apply_content_transfer_encoding(self, stream):
835 encoding = self.headers[CONTENT_TRANSFER_ENCODING].lower()
836 if encoding == 'base64':
837 buffer = bytearray()
838 while True:
839 if buffer:
840 div, mod = divmod(len(buffer), 3)
841 chunk, buffer = buffer[:div * 3], buffer[div * 3:]
842 if chunk:
843 yield base64.b64encode(chunk)
844 chunk = next(stream, None)
845 if not chunk:
846 if buffer:
847 yield base64.b64encode(buffer[:])
848 return
849 buffer.extend(chunk)
850 elif encoding == 'quoted-printable':
851 for chunk in stream:
852 yield binascii.b2a_qp(chunk)
853 elif encoding == 'binary':
854 yield from stream
855 else:
856 raise RuntimeError('unknown content transfer encoding: {}'
857 ''.format(encoding))
858
859 def set_content_disposition(self, disptype, **params):
860 """Sets ``Content-Disposition`` header.
861
862 :param str disptype: Disposition type: inline, attachment, form-data.
863 Should be valid extension token (see RFC 2183)
864 :param dict params: Disposition params
865 """
866 if not disptype or not (TOKEN > set(disptype)):
867 raise ValueError('bad content disposition type {!r}'
868 ''.format(disptype))
869 value = disptype
870 if params:
871 lparams = []
872 for key, val in params.items():
873 if not key or not (TOKEN > set(key)):
874 raise ValueError('bad content disposition parameter'
875 ' {!r}={!r}'.format(key, val))
876 qval = quote(val, '')
877 lparams.append((key, '"%s"' % qval))
878 if key == 'filename':
879 lparams.append(('filename*', "utf-8''" + qval))
880 sparams = '; '.join('='.join(pair) for pair in lparams)
881 value = '; '.join((value, sparams))
882 self.headers[CONTENT_DISPOSITION] = value
883
884 @property
885 def filename(self):
886 """Returns filename specified in Content-Disposition header or ``None``
887 if missed."""
888 _, params = parse_content_disposition(
889 self.headers.get(CONTENT_DISPOSITION))
890 return content_disposition_filename(params)
891
892
893 class MultipartWriter(object):
894 """Multipart body writer."""
895
896 #: Body part reader class for non multipart/* content types.
897 part_writer_cls = BodyPartWriter
898
899 def __init__(self, subtype='mixed', boundary=None):
900 boundary = boundary if boundary is not None else uuid.uuid4().hex
901 try:
902 boundary.encode('us-ascii')
903 except UnicodeEncodeError:
904 raise ValueError('boundary should contains ASCII only chars')
905 self.headers = CIMultiDict()
906 self.headers[CONTENT_TYPE] = 'multipart/{}; boundary="{}"'.format(
907 subtype, boundary
908 )
909 self.parts = []
910
911 def __enter__(self):
912 return self
913
914 def __exit__(self, exc_type, exc_val, exc_tb):
915 pass
916
917 def __iter__(self):
918 return iter(self.parts)
919
920 def __len__(self):
921 return len(self.parts)
922
923 @property
924 def boundary(self):
925 *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))
926 return params['boundary'].encode('us-ascii')
927
928 def append(self, obj, headers=None):
929 """Adds a new body part to multipart writer."""
930 if isinstance(obj, self.part_writer_cls):
931 if headers:
932 obj.headers.update(headers)
933 self.parts.append(obj)
934 else:
935 if not headers:
936 headers = CIMultiDict()
937 self.parts.append(self.part_writer_cls(obj, headers))
938 return self.parts[-1]
939
940 def append_json(self, obj, headers=None):
941 """Helper to append JSON part."""
942 if not headers:
943 headers = CIMultiDict()
944 headers[CONTENT_TYPE] = 'application/json'
945 return self.append(obj, headers)
946
947 def append_form(self, obj, headers=None):
948 """Helper to append form urlencoded part."""
949 if not headers:
950 headers = CIMultiDict()
951 headers[CONTENT_TYPE] = 'application/x-www-form-urlencoded'
952 assert isinstance(obj, (Sequence, Mapping))
953 return self.append(obj, headers)
954
955 def serialize(self):
956 """Yields multipart byte chunks."""
957 if not self.parts:
958 yield b''
959 return
960
961 for part in self.parts:
962 yield b'--' + self.boundary + b'\r\n'
963 yield from part.serialize()
964 else:
965 yield b'--' + self.boundary + b'--\r\n'
966
967 yield b''
968
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aiohttp/multipart.py b/aiohttp/multipart.py
--- a/aiohttp/multipart.py
+++ b/aiohttp/multipart.py
@@ -639,6 +639,7 @@
pass
elif chunk == self._boundary + b'--':
self._at_eof = True
+ yield from self._readline()
else:
raise ValueError('Invalid boundary %r, expected %r'
% (chunk, self._boundary))
| {"golden_diff": "diff --git a/aiohttp/multipart.py b/aiohttp/multipart.py\n--- a/aiohttp/multipart.py\n+++ b/aiohttp/multipart.py\n@@ -639,6 +639,7 @@\n pass\n elif chunk == self._boundary + b'--':\n self._at_eof = True\n+ yield from self._readline()\n else:\n raise ValueError('Invalid boundary %r, expected %r'\n % (chunk, self._boundary))\n", "issue": "reading nested multipart messages does not work correctly\n## Long story short\r\n\r\nMultipart reader breaks after reading a sub multipart end boundary and starting the next part\r\n\r\n## Expected behaviour\r\n\r\nNested multipart reader reads a message created with the multipart writer correctly\r\n\r\n## Actual behaviour\r\n\r\n```python\r\nValueError: Invalid boundary b'', expected b'--b0b69248b3a345cf8256a8dd25f07874'\r\n```\r\n\r\n## Steps to reproduce\r\n\r\nReceive the multipart response from #1525 \r\n\r\nServer:\r\n```python\r\nfrom aiohttp.multipart import MultipartWriter\r\nfrom aiohttp.web import Response\r\n\r\ndef handle(request):\r\n with MultipartWriter('mixed') as root:\r\n with MultipartWriter('mixed') as subwriter1:\r\n subwriter1.append('first message')\r\n root.append(subwriter1, headers=subwriter1.headers)\r\n\r\n with MultipartWriter('mixed') as subwriter2:\r\n subwriter2.append('second message')\r\n root.append(subwriter2, headers=subwriter2.headers)\r\n return Response(body=b''.join(root.serialize()), headers=root.headers)\r\n\r\n# ... create web app which responds with the handler above ...\r\n```\r\n\r\nClient:\r\n```python\r\nimport aiohttp\r\nimport asyncio\r\nfrom aiohttp.multipart import BodyPartReader, MultipartReader\r\n\r\[email protected]\r\ndef read_multipart(reader):\r\n while True:\r\n part = yield from reader.next()\r\n if part is None: break\r\n if isinstance(part, BodyPartReader):\r\n body = yield from part.read(decode=True)\r\n print('body part: %r' % body)\r\n else:\r\n print('nested part')\r\n yield from read_multipart(part)\r\n\r\[email protected]\r\ndef request(url):\r\n response = yield from aiohttp.get(url)\r\n yield from read_multipart(MultipartReader.from_response(response))\r\n\r\n# ... drive event loop and call request(handler_url) ...\r\n```\r\n\r\n## Issue\r\n\r\nLines [multipart.py:767](https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/multipart.py#L767) and [multipart.py:969](https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/multipart.py#L969) have line endings which result in an empty line after the end boundary of a multipart message (valid). 
However the reader terminates after the end boundary and the parent reader now expects the next boundary, but what is found is the blank line from [multipart.py:767](https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/multipart.py#L767)\r\n\r\n## Possible fix\r\n\r\n```diff\r\ndiff --git a/aiohttp/multipart.py b/aiohttp/multipart.py\r\nindex af7f19b1..82ad2306 100644\r\n--- a/aiohttp/multipart.py\r\n+++ b/aiohttp/multipart.py\r\n@@ -639,6 +639,7 @@ class MultipartReader(object):\r\n pass\r\n elif chunk == self._boundary + b'--':\r\n self._at_eof = True\r\n+ yield from self._readline()\r\n else:\r\n raise ValueError('Invalid boundary %r, expected %r'\r\n % (chunk, self._boundary))\r\n```\n", "before_files": [{"content": "import asyncio\nimport base64\nimport binascii\nimport io\nimport json\nimport mimetypes\nimport os\nimport re\nimport sys\nimport uuid\nimport warnings\nimport zlib\nfrom collections import Mapping, Sequence, deque\nfrom pathlib import Path\nfrom urllib.parse import parse_qsl, quote, unquote, urlencode\n\nfrom multidict import CIMultiDict\n\nfrom .hdrs import (CONTENT_DISPOSITION, CONTENT_ENCODING, CONTENT_LENGTH,\n CONTENT_TRANSFER_ENCODING, CONTENT_TYPE)\nfrom .helpers import parse_mimetype\nfrom .protocol import HttpParser\n\n__all__ = ('MultipartReader', 'MultipartWriter',\n 'BodyPartReader', 'BodyPartWriter',\n 'BadContentDispositionHeader', 'BadContentDispositionParam',\n 'parse_content_disposition', 'content_disposition_filename')\n\n\nCHAR = set(chr(i) for i in range(0, 128))\nCTL = set(chr(i) for i in range(0, 32)) | {chr(127), }\nSEPARATORS = {'(', ')', '<', '>', '@', ',', ';', ':', '\\\\', '\"', '/', '[', ']',\n '?', '=', '{', '}', ' ', chr(9)}\nTOKEN = CHAR ^ CTL ^ SEPARATORS\n\nPY_35 = sys.version_info >= (3, 5)\nPY_352 = sys.version_info >= (3, 5, 2)\n\n\nclass BadContentDispositionHeader(RuntimeWarning):\n pass\n\n\nclass BadContentDispositionParam(RuntimeWarning):\n pass\n\n\ndef parse_content_disposition(header):\n def is_token(string):\n return string and TOKEN >= set(string)\n\n def is_quoted(string):\n return string[0] == string[-1] == '\"'\n\n def is_rfc5987(string):\n return is_token(string) and string.count(\"'\") == 2\n\n def is_extended_param(string):\n return string.endswith('*')\n\n def is_continuous_param(string):\n pos = string.find('*') + 1\n if not pos:\n return False\n substring = string[pos:-1] if string.endswith('*') else string[pos:]\n return substring.isdigit()\n\n def unescape(text, *, chars=''.join(map(re.escape, CHAR))):\n return re.sub('\\\\\\\\([{}])'.format(chars), '\\\\1', text)\n\n if not header:\n return None, {}\n\n disptype, *parts = header.split(';')\n if not is_token(disptype):\n warnings.warn(BadContentDispositionHeader(header))\n return None, {}\n\n params = {}\n for item in parts:\n if '=' not in item:\n warnings.warn(BadContentDispositionHeader(header))\n return None, {}\n\n key, value = item.split('=', 1)\n key = key.lower().strip()\n value = value.lstrip()\n\n if key in params:\n warnings.warn(BadContentDispositionHeader(header))\n return None, {}\n\n if not is_token(key):\n warnings.warn(BadContentDispositionParam(item))\n continue\n\n elif is_continuous_param(key):\n if is_quoted(value):\n value = unescape(value[1:-1])\n elif not is_token(value):\n warnings.warn(BadContentDispositionParam(item))\n continue\n\n elif is_extended_param(key):\n if is_rfc5987(value):\n encoding, _, value = value.split(\"'\", 2)\n encoding = encoding or 'utf-8'\n else:\n warnings.warn(BadContentDispositionParam(item))\n continue\n\n 
try:\n value = unquote(value, encoding, 'strict')\n except UnicodeDecodeError: # pragma: nocover\n warnings.warn(BadContentDispositionParam(item))\n continue\n\n else:\n if is_quoted(value):\n value = unescape(value[1:-1].lstrip('\\\\/'))\n elif not is_token(value):\n warnings.warn(BadContentDispositionHeader(header))\n return None, {}\n\n params[key] = value\n\n return disptype.lower(), params\n\n\ndef content_disposition_filename(params):\n if not params:\n return None\n elif 'filename*' in params:\n return params['filename*']\n elif 'filename' in params:\n return params['filename']\n else:\n parts = []\n fnparams = sorted((key, value)\n for key, value in params.items()\n if key.startswith('filename*'))\n for num, (key, value) in enumerate(fnparams):\n _, tail = key.split('*', 1)\n if tail.endswith('*'):\n tail = tail[:-1]\n if tail == str(num):\n parts.append(value)\n else:\n break\n if not parts:\n return None\n value = ''.join(parts)\n if \"'\" in value:\n encoding, _, value = value.split(\"'\", 2)\n encoding = encoding or 'utf-8'\n return unquote(value, encoding, 'strict')\n return value\n\n\nclass MultipartResponseWrapper(object):\n \"\"\"Wrapper around the :class:`MultipartBodyReader` to take care about\n underlying connection and close it when it needs in.\"\"\"\n\n def __init__(self, resp, stream):\n self.resp = resp\n self.stream = stream\n\n if PY_35:\n def __aiter__(self):\n return self\n\n if not PY_352: # pragma: no cover\n __aiter__ = asyncio.coroutine(__aiter__)\n\n @asyncio.coroutine\n def __anext__(self):\n part = yield from self.next()\n if part is None:\n raise StopAsyncIteration # NOQA\n return part\n\n def at_eof(self):\n \"\"\"Returns ``True`` when all response data had been read.\n\n :rtype: bool\n \"\"\"\n return self.resp.content.at_eof()\n\n @asyncio.coroutine\n def next(self):\n \"\"\"Emits next multipart reader object.\"\"\"\n item = yield from self.stream.next()\n if self.stream.at_eof():\n yield from self.release()\n return item\n\n @asyncio.coroutine\n def release(self):\n \"\"\"Releases the connection gracefully, reading all the content\n to the void.\"\"\"\n yield from self.resp.release()\n\n\nclass BodyPartReader(object):\n \"\"\"Multipart reader for single body part.\"\"\"\n\n chunk_size = 8192\n\n def __init__(self, boundary, headers, content):\n self.headers = headers\n self._boundary = boundary\n self._content = content\n self._at_eof = False\n length = self.headers.get(CONTENT_LENGTH, None)\n self._length = int(length) if length is not None else None\n self._read_bytes = 0\n self._unread = deque()\n self._prev_chunk = None\n self._content_eof = 0\n\n if PY_35:\n def __aiter__(self):\n return self\n\n if not PY_352: # pragma: no cover\n __aiter__ = asyncio.coroutine(__aiter__)\n\n @asyncio.coroutine\n def __anext__(self):\n part = yield from self.next()\n if part is None:\n raise StopAsyncIteration # NOQA\n return part\n\n @asyncio.coroutine\n def next(self):\n item = yield from self.read()\n if not item:\n return None\n return item\n\n @asyncio.coroutine\n def read(self, *, decode=False):\n \"\"\"Reads body part data.\n\n :param bool decode: Decodes data following by encoding\n method from `Content-Encoding` header. 
If it missed\n data remains untouched\n\n :rtype: bytearray\n \"\"\"\n if self._at_eof:\n return b''\n data = bytearray()\n while not self._at_eof:\n data.extend((yield from self.read_chunk(self.chunk_size)))\n if decode:\n return self.decode(data)\n return data\n\n @asyncio.coroutine\n def read_chunk(self, size=chunk_size):\n \"\"\"Reads body part content chunk of the specified size.\n\n :param int size: chunk size\n\n :rtype: bytearray\n \"\"\"\n if self._at_eof:\n return b''\n if self._length:\n chunk = yield from self._read_chunk_from_length(size)\n else:\n chunk = yield from self._read_chunk_from_stream(size)\n\n self._read_bytes += len(chunk)\n if self._read_bytes == self._length:\n self._at_eof = True\n if self._at_eof:\n assert b'\\r\\n' == (yield from self._content.readline()), \\\n 'reader did not read all the data or it is malformed'\n return chunk\n\n @asyncio.coroutine\n def _read_chunk_from_length(self, size):\n \"\"\"Reads body part content chunk of the specified size.\n The body part must has `Content-Length` header with proper value.\n\n :param int size: chunk size\n\n :rtype: bytearray\n \"\"\"\n assert self._length is not None, \\\n 'Content-Length required for chunked read'\n chunk_size = min(size, self._length - self._read_bytes)\n chunk = yield from self._content.read(chunk_size)\n return chunk\n\n @asyncio.coroutine\n def _read_chunk_from_stream(self, size):\n \"\"\"Reads content chunk of body part with unknown length.\n The `Content-Length` header for body part is not necessary.\n\n :param int size: chunk size\n\n :rtype: bytearray\n \"\"\"\n assert size >= len(self._boundary) + 2, \\\n 'Chunk size must be greater or equal than boundary length + 2'\n first_chunk = self._prev_chunk is None\n if first_chunk:\n self._prev_chunk = yield from self._content.read(size)\n\n chunk = yield from self._content.read(size)\n self._content_eof += int(self._content.at_eof())\n assert self._content_eof < 3, \"Reading after EOF\"\n window = self._prev_chunk + chunk\n sub = b'\\r\\n' + self._boundary\n if first_chunk:\n idx = window.find(sub)\n else:\n idx = window.find(sub, max(0, len(self._prev_chunk) - len(sub)))\n if idx >= 0:\n # pushing boundary back to content\n self._content.unread_data(window[idx:])\n if size > idx:\n self._prev_chunk = self._prev_chunk[:idx]\n chunk = window[len(self._prev_chunk):idx]\n if not chunk:\n self._at_eof = True\n result = self._prev_chunk\n self._prev_chunk = chunk\n return result\n\n @asyncio.coroutine\n def readline(self):\n \"\"\"Reads body part by line by line.\n\n :rtype: bytearray\n \"\"\"\n if self._at_eof:\n return b''\n\n if self._unread:\n line = self._unread.popleft()\n else:\n line = yield from self._content.readline()\n\n if line.startswith(self._boundary):\n # the very last boundary may not come with \\r\\n,\n # so set single rules for everyone\n sline = line.rstrip(b'\\r\\n')\n boundary = self._boundary\n last_boundary = self._boundary + b'--'\n # ensure that we read exactly the boundary, not something alike\n if sline == boundary or sline == last_boundary:\n self._at_eof = True\n self._unread.append(line)\n return b''\n else:\n next_line = yield from self._content.readline()\n if next_line.startswith(self._boundary):\n line = line[:-2] # strip CRLF but only once\n self._unread.append(next_line)\n\n return line\n\n @asyncio.coroutine\n def release(self):\n \"\"\"Like :meth:`read`, but reads all the data to the void.\n\n :rtype: None\n \"\"\"\n if self._at_eof:\n return\n while not self._at_eof:\n yield from 
self.read_chunk(self.chunk_size)\n\n @asyncio.coroutine\n def text(self, *, encoding=None):\n \"\"\"Like :meth:`read`, but assumes that body part contains text data.\n\n :param str encoding: Custom text encoding. Overrides specified\n in charset param of `Content-Type` header\n\n :rtype: str\n \"\"\"\n data = yield from self.read(decode=True)\n # see https://www.w3.org/TR/html5/forms.html#multipart/form-data-encoding-algorithm # NOQA\n # and https://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#dom-xmlhttprequest-send # NOQA\n encoding = encoding or self.get_charset(default='utf-8')\n return data.decode(encoding)\n\n @asyncio.coroutine\n def json(self, *, encoding=None):\n \"\"\"Like :meth:`read`, but assumes that body parts contains JSON data.\n\n :param str encoding: Custom JSON encoding. Overrides specified\n in charset param of `Content-Type` header\n \"\"\"\n data = yield from self.read(decode=True)\n if not data:\n return None\n encoding = encoding or self.get_charset(default='utf-8')\n return json.loads(data.decode(encoding))\n\n @asyncio.coroutine\n def form(self, *, encoding=None):\n \"\"\"Like :meth:`read`, but assumes that body parts contains form\n urlencoded data.\n\n :param str encoding: Custom form encoding. Overrides specified\n in charset param of `Content-Type` header\n \"\"\"\n data = yield from self.read(decode=True)\n if not data:\n return None\n encoding = encoding or self.get_charset(default='utf-8')\n return parse_qsl(data.rstrip().decode(encoding), encoding=encoding)\n\n def at_eof(self):\n \"\"\"Returns ``True`` if the boundary was reached or\n ``False`` otherwise.\n\n :rtype: bool\n \"\"\"\n return self._at_eof\n\n def decode(self, data):\n \"\"\"Decodes data according the specified `Content-Encoding`\n or `Content-Transfer-Encoding` headers value.\n\n Supports ``gzip``, ``deflate`` and ``identity`` encodings for\n `Content-Encoding` header.\n\n Supports ``base64``, ``quoted-printable``, ``binary`` encodings for\n `Content-Transfer-Encoding` header.\n\n :param bytearray data: Data to decode.\n\n :raises: :exc:`RuntimeError` - if encoding is unknown.\n\n :rtype: bytes\n \"\"\"\n if CONTENT_TRANSFER_ENCODING in self.headers:\n data = self._decode_content_transfer(data)\n if CONTENT_ENCODING in self.headers:\n return self._decode_content(data)\n return data\n\n def _decode_content(self, data):\n encoding = self.headers[CONTENT_ENCODING].lower()\n\n if encoding == 'deflate':\n return zlib.decompress(data, -zlib.MAX_WBITS)\n elif encoding == 'gzip':\n return zlib.decompress(data, 16 + zlib.MAX_WBITS)\n elif encoding == 'identity':\n return data\n else:\n raise RuntimeError('unknown content encoding: {}'.format(encoding))\n\n def _decode_content_transfer(self, data):\n encoding = self.headers[CONTENT_TRANSFER_ENCODING].lower()\n\n if encoding == 'base64':\n return base64.b64decode(data)\n elif encoding == 'quoted-printable':\n return binascii.a2b_qp(data)\n elif encoding == 'binary':\n return data\n else:\n raise RuntimeError('unknown content transfer encoding: {}'\n ''.format(encoding))\n\n def get_charset(self, default=None):\n \"\"\"Returns charset parameter from ``Content-Type`` header or default.\n \"\"\"\n ctype = self.headers.get(CONTENT_TYPE, '')\n *_, params = parse_mimetype(ctype)\n return params.get('charset', default)\n\n @property\n def filename(self):\n \"\"\"Returns filename specified in Content-Disposition header or ``None``\n if missed or header is malformed.\"\"\"\n _, params = parse_content_disposition(\n self.headers.get(CONTENT_DISPOSITION))\n 
return content_disposition_filename(params)\n\n\nclass MultipartReader(object):\n \"\"\"Multipart body reader.\"\"\"\n\n #: Response wrapper, used when multipart readers constructs from response.\n response_wrapper_cls = MultipartResponseWrapper\n #: Multipart reader class, used to handle multipart/* body parts.\n #: None points to type(self)\n multipart_reader_cls = None\n #: Body part reader class for non multipart/* content types.\n part_reader_cls = BodyPartReader\n\n def __init__(self, headers, content):\n self.headers = headers\n self._boundary = ('--' + self._get_boundary()).encode()\n self._content = content\n self._last_part = None\n self._at_eof = False\n self._at_bof = True\n self._unread = []\n\n if PY_35:\n def __aiter__(self):\n return self\n\n if not PY_352: # pragma: no cover\n __aiter__ = asyncio.coroutine(__aiter__)\n\n @asyncio.coroutine\n def __anext__(self):\n part = yield from self.next()\n if part is None:\n raise StopAsyncIteration # NOQA\n return part\n\n @classmethod\n def from_response(cls, response):\n \"\"\"Constructs reader instance from HTTP response.\n\n :param response: :class:`~aiohttp.client.ClientResponse` instance\n \"\"\"\n obj = cls.response_wrapper_cls(response, cls(response.headers,\n response.content))\n return obj\n\n def at_eof(self):\n \"\"\"Returns ``True`` if the final boundary was reached or\n ``False`` otherwise.\n\n :rtype: bool\n \"\"\"\n return self._at_eof\n\n @asyncio.coroutine\n def next(self):\n \"\"\"Emits the next multipart body part.\"\"\"\n # So, if we're at BOF, we need to skip till the boundary.\n if self._at_eof:\n return\n yield from self._maybe_release_last_part()\n if self._at_bof:\n yield from self._read_until_first_boundary()\n self._at_bof = False\n else:\n yield from self._read_boundary()\n if self._at_eof: # we just read the last boundary, nothing to do there\n return\n self._last_part = yield from self.fetch_next_part()\n return self._last_part\n\n @asyncio.coroutine\n def release(self):\n \"\"\"Reads all the body parts to the void till the final boundary.\"\"\"\n while not self._at_eof:\n item = yield from self.next()\n if item is None:\n break\n yield from item.release()\n\n @asyncio.coroutine\n def fetch_next_part(self):\n \"\"\"Returns the next body part reader.\"\"\"\n headers = yield from self._read_headers()\n return self._get_part_reader(headers)\n\n def _get_part_reader(self, headers):\n \"\"\"Dispatches the response by the `Content-Type` header, returning\n suitable reader instance.\n\n :param dict headers: Response headers\n \"\"\"\n ctype = headers.get(CONTENT_TYPE, '')\n mtype, *_ = parse_mimetype(ctype)\n if mtype == 'multipart':\n if self.multipart_reader_cls is None:\n return type(self)(headers, self._content)\n return self.multipart_reader_cls(headers, self._content)\n else:\n return self.part_reader_cls(self._boundary, headers, self._content)\n\n def _get_boundary(self):\n mtype, *_, params = parse_mimetype(self.headers[CONTENT_TYPE])\n\n assert mtype == 'multipart', 'multipart/* content type expected'\n\n if 'boundary' not in params:\n raise ValueError('boundary missed for Content-Type: %s'\n % self.headers[CONTENT_TYPE])\n\n boundary = params['boundary']\n if len(boundary) > 70:\n raise ValueError('boundary %r is too long (70 chars max)'\n % boundary)\n\n return boundary\n\n @asyncio.coroutine\n def _readline(self):\n if self._unread:\n return self._unread.pop()\n return (yield from self._content.readline())\n\n @asyncio.coroutine\n def _read_until_first_boundary(self):\n while True:\n chunk = yield 
from self._readline()\n if chunk == b'':\n raise ValueError(\"Could not find starting boundary %r\"\n % (self._boundary))\n chunk = chunk.rstrip()\n if chunk == self._boundary:\n return\n elif chunk == self._boundary + b'--':\n self._at_eof = True\n return\n\n @asyncio.coroutine\n def _read_boundary(self):\n chunk = (yield from self._readline()).rstrip()\n if chunk == self._boundary:\n pass\n elif chunk == self._boundary + b'--':\n self._at_eof = True\n else:\n raise ValueError('Invalid boundary %r, expected %r'\n % (chunk, self._boundary))\n\n @asyncio.coroutine\n def _read_headers(self):\n lines = [b'']\n while True:\n chunk = yield from self._content.readline()\n chunk = chunk.strip()\n lines.append(chunk)\n if not chunk:\n break\n parser = HttpParser()\n headers, *_ = parser.parse_headers(lines)\n return headers\n\n @asyncio.coroutine\n def _maybe_release_last_part(self):\n \"\"\"Ensures that the last read body part is read completely.\"\"\"\n if self._last_part is not None:\n if not self._last_part.at_eof():\n yield from self._last_part.release()\n self._unread.extend(self._last_part._unread)\n self._last_part = None\n\n\nclass BodyPartWriter(object):\n \"\"\"Multipart writer for single body part.\"\"\"\n\n def __init__(self, obj, headers=None, *, chunk_size=8192):\n if isinstance(obj, MultipartWriter):\n if headers is not None:\n obj.headers.update(headers)\n headers = obj.headers\n elif headers is None:\n headers = CIMultiDict()\n elif not isinstance(headers, CIMultiDict):\n headers = CIMultiDict(headers)\n\n self.obj = obj\n self.headers = headers\n self._chunk_size = chunk_size\n self._fill_headers_with_defaults()\n\n self._serialize_map = {\n bytes: self._serialize_bytes,\n str: self._serialize_str,\n io.IOBase: self._serialize_io,\n MultipartWriter: self._serialize_multipart,\n ('application', 'json'): self._serialize_json,\n ('application', 'x-www-form-urlencoded'): self._serialize_form\n }\n\n def _fill_headers_with_defaults(self):\n if CONTENT_TYPE not in self.headers:\n content_type = self._guess_content_type(self.obj)\n if content_type is not None:\n self.headers[CONTENT_TYPE] = content_type\n\n if CONTENT_LENGTH not in self.headers:\n content_length = self._guess_content_length(self.obj)\n if content_length is not None:\n self.headers[CONTENT_LENGTH] = str(content_length)\n\n if CONTENT_DISPOSITION not in self.headers:\n filename = self._guess_filename(self.obj)\n if filename is not None:\n self.set_content_disposition('attachment', filename=filename)\n\n def _guess_content_length(self, obj):\n if isinstance(obj, bytes):\n return len(obj)\n elif isinstance(obj, str):\n *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))\n charset = params.get('charset', 'us-ascii')\n return len(obj.encode(charset))\n elif isinstance(obj, io.StringIO):\n *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))\n charset = params.get('charset', 'us-ascii')\n return len(obj.getvalue().encode(charset)) - obj.tell()\n elif isinstance(obj, io.BytesIO):\n return len(obj.getvalue()) - obj.tell()\n elif isinstance(obj, io.IOBase):\n try:\n return os.fstat(obj.fileno()).st_size - obj.tell()\n except (AttributeError, OSError):\n return None\n else:\n return None\n\n def _guess_content_type(self, obj, default='application/octet-stream'):\n if hasattr(obj, 'name'):\n name = getattr(obj, 'name')\n return mimetypes.guess_type(name)[0]\n elif isinstance(obj, (str, io.StringIO)):\n return 'text/plain; charset=utf-8'\n else:\n return default\n\n def _guess_filename(self, obj):\n if 
isinstance(obj, io.IOBase):\n name = getattr(obj, 'name', None)\n if name is not None:\n return Path(name).name\n\n def serialize(self):\n \"\"\"Yields byte chunks for body part.\"\"\"\n\n has_encoding = (\n CONTENT_ENCODING in self.headers and\n self.headers[CONTENT_ENCODING] != 'identity' or\n CONTENT_TRANSFER_ENCODING in self.headers\n )\n if has_encoding:\n # since we're following streaming approach which doesn't assumes\n # any intermediate buffers, we cannot calculate real content length\n # with the specified content encoding scheme. So, instead of lying\n # about content length and cause reading issues, we have to strip\n # this information.\n self.headers.pop(CONTENT_LENGTH, None)\n\n if self.headers:\n yield b'\\r\\n'.join(\n b': '.join(map(lambda i: i.encode('latin1'), item))\n for item in self.headers.items()\n )\n yield b'\\r\\n\\r\\n'\n yield from self._maybe_encode_stream(self._serialize_obj())\n yield b'\\r\\n'\n\n def _serialize_obj(self):\n obj = self.obj\n mtype, stype, *_ = parse_mimetype(self.headers.get(CONTENT_TYPE))\n serializer = self._serialize_map.get((mtype, stype))\n if serializer is not None:\n return serializer(obj)\n\n for key in self._serialize_map:\n if not isinstance(key, tuple) and isinstance(obj, key):\n return self._serialize_map[key](obj)\n return self._serialize_default(obj)\n\n def _serialize_bytes(self, obj):\n yield obj\n\n def _serialize_str(self, obj):\n *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))\n yield obj.encode(params.get('charset', 'us-ascii'))\n\n def _serialize_io(self, obj):\n while True:\n chunk = obj.read(self._chunk_size)\n if not chunk:\n break\n if isinstance(chunk, str):\n yield from self._serialize_str(chunk)\n else:\n yield from self._serialize_bytes(chunk)\n\n def _serialize_multipart(self, obj):\n yield from obj.serialize()\n\n def _serialize_json(self, obj):\n *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))\n yield json.dumps(obj).encode(params.get('charset', 'utf-8'))\n\n def _serialize_form(self, obj):\n if isinstance(obj, Mapping):\n obj = list(obj.items())\n return self._serialize_str(urlencode(obj, doseq=True))\n\n def _serialize_default(self, obj):\n raise TypeError('unknown body part type %r' % type(obj))\n\n def _maybe_encode_stream(self, stream):\n if CONTENT_ENCODING in self.headers:\n stream = self._apply_content_encoding(stream)\n if CONTENT_TRANSFER_ENCODING in self.headers:\n stream = self._apply_content_transfer_encoding(stream)\n yield from stream\n\n def _apply_content_encoding(self, stream):\n encoding = self.headers[CONTENT_ENCODING].lower()\n if encoding == 'identity':\n yield from stream\n elif encoding in ('deflate', 'gzip'):\n if encoding == 'gzip':\n zlib_mode = 16 + zlib.MAX_WBITS\n else:\n zlib_mode = -zlib.MAX_WBITS\n zcomp = zlib.compressobj(wbits=zlib_mode)\n for chunk in stream:\n yield zcomp.compress(chunk)\n else:\n yield zcomp.flush()\n else:\n raise RuntimeError('unknown content encoding: {}'\n ''.format(encoding))\n\n def _apply_content_transfer_encoding(self, stream):\n encoding = self.headers[CONTENT_TRANSFER_ENCODING].lower()\n if encoding == 'base64':\n buffer = bytearray()\n while True:\n if buffer:\n div, mod = divmod(len(buffer), 3)\n chunk, buffer = buffer[:div * 3], buffer[div * 3:]\n if chunk:\n yield base64.b64encode(chunk)\n chunk = next(stream, None)\n if not chunk:\n if buffer:\n yield base64.b64encode(buffer[:])\n return\n buffer.extend(chunk)\n elif encoding == 'quoted-printable':\n for chunk in stream:\n yield binascii.b2a_qp(chunk)\n elif 
encoding == 'binary':\n yield from stream\n else:\n raise RuntimeError('unknown content transfer encoding: {}'\n ''.format(encoding))\n\n def set_content_disposition(self, disptype, **params):\n \"\"\"Sets ``Content-Disposition`` header.\n\n :param str disptype: Disposition type: inline, attachment, form-data.\n Should be valid extension token (see RFC 2183)\n :param dict params: Disposition params\n \"\"\"\n if not disptype or not (TOKEN > set(disptype)):\n raise ValueError('bad content disposition type {!r}'\n ''.format(disptype))\n value = disptype\n if params:\n lparams = []\n for key, val in params.items():\n if not key or not (TOKEN > set(key)):\n raise ValueError('bad content disposition parameter'\n ' {!r}={!r}'.format(key, val))\n qval = quote(val, '')\n lparams.append((key, '\"%s\"' % qval))\n if key == 'filename':\n lparams.append(('filename*', \"utf-8''\" + qval))\n sparams = '; '.join('='.join(pair) for pair in lparams)\n value = '; '.join((value, sparams))\n self.headers[CONTENT_DISPOSITION] = value\n\n @property\n def filename(self):\n \"\"\"Returns filename specified in Content-Disposition header or ``None``\n if missed.\"\"\"\n _, params = parse_content_disposition(\n self.headers.get(CONTENT_DISPOSITION))\n return content_disposition_filename(params)\n\n\nclass MultipartWriter(object):\n \"\"\"Multipart body writer.\"\"\"\n\n #: Body part reader class for non multipart/* content types.\n part_writer_cls = BodyPartWriter\n\n def __init__(self, subtype='mixed', boundary=None):\n boundary = boundary if boundary is not None else uuid.uuid4().hex\n try:\n boundary.encode('us-ascii')\n except UnicodeEncodeError:\n raise ValueError('boundary should contains ASCII only chars')\n self.headers = CIMultiDict()\n self.headers[CONTENT_TYPE] = 'multipart/{}; boundary=\"{}\"'.format(\n subtype, boundary\n )\n self.parts = []\n\n def __enter__(self):\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n pass\n\n def __iter__(self):\n return iter(self.parts)\n\n def __len__(self):\n return len(self.parts)\n\n @property\n def boundary(self):\n *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))\n return params['boundary'].encode('us-ascii')\n\n def append(self, obj, headers=None):\n \"\"\"Adds a new body part to multipart writer.\"\"\"\n if isinstance(obj, self.part_writer_cls):\n if headers:\n obj.headers.update(headers)\n self.parts.append(obj)\n else:\n if not headers:\n headers = CIMultiDict()\n self.parts.append(self.part_writer_cls(obj, headers))\n return self.parts[-1]\n\n def append_json(self, obj, headers=None):\n \"\"\"Helper to append JSON part.\"\"\"\n if not headers:\n headers = CIMultiDict()\n headers[CONTENT_TYPE] = 'application/json'\n return self.append(obj, headers)\n\n def append_form(self, obj, headers=None):\n \"\"\"Helper to append form urlencoded part.\"\"\"\n if not headers:\n headers = CIMultiDict()\n headers[CONTENT_TYPE] = 'application/x-www-form-urlencoded'\n assert isinstance(obj, (Sequence, Mapping))\n return self.append(obj, headers)\n\n def serialize(self):\n \"\"\"Yields multipart byte chunks.\"\"\"\n if not self.parts:\n yield b''\n return\n\n for part in self.parts:\n yield b'--' + self.boundary + b'\\r\\n'\n yield from part.serialize()\n else:\n yield b'--' + self.boundary + b'--\\r\\n'\n\n yield b''\n", "path": "aiohttp/multipart.py"}], "after_files": [{"content": "import asyncio\nimport base64\nimport binascii\nimport io\nimport json\nimport mimetypes\nimport os\nimport re\nimport sys\nimport uuid\nimport warnings\nimport 
zlib\nfrom collections import Mapping, Sequence, deque\nfrom pathlib import Path\nfrom urllib.parse import parse_qsl, quote, unquote, urlencode\n\nfrom multidict import CIMultiDict\n\nfrom .hdrs import (CONTENT_DISPOSITION, CONTENT_ENCODING, CONTENT_LENGTH,\n CONTENT_TRANSFER_ENCODING, CONTENT_TYPE)\nfrom .helpers import parse_mimetype\nfrom .protocol import HttpParser\n\n__all__ = ('MultipartReader', 'MultipartWriter',\n 'BodyPartReader', 'BodyPartWriter',\n 'BadContentDispositionHeader', 'BadContentDispositionParam',\n 'parse_content_disposition', 'content_disposition_filename')\n\n\nCHAR = set(chr(i) for i in range(0, 128))\nCTL = set(chr(i) for i in range(0, 32)) | {chr(127), }\nSEPARATORS = {'(', ')', '<', '>', '@', ',', ';', ':', '\\\\', '\"', '/', '[', ']',\n '?', '=', '{', '}', ' ', chr(9)}\nTOKEN = CHAR ^ CTL ^ SEPARATORS\n\nPY_35 = sys.version_info >= (3, 5)\nPY_352 = sys.version_info >= (3, 5, 2)\n\n\nclass BadContentDispositionHeader(RuntimeWarning):\n pass\n\n\nclass BadContentDispositionParam(RuntimeWarning):\n pass\n\n\ndef parse_content_disposition(header):\n def is_token(string):\n return string and TOKEN >= set(string)\n\n def is_quoted(string):\n return string[0] == string[-1] == '\"'\n\n def is_rfc5987(string):\n return is_token(string) and string.count(\"'\") == 2\n\n def is_extended_param(string):\n return string.endswith('*')\n\n def is_continuous_param(string):\n pos = string.find('*') + 1\n if not pos:\n return False\n substring = string[pos:-1] if string.endswith('*') else string[pos:]\n return substring.isdigit()\n\n def unescape(text, *, chars=''.join(map(re.escape, CHAR))):\n return re.sub('\\\\\\\\([{}])'.format(chars), '\\\\1', text)\n\n if not header:\n return None, {}\n\n disptype, *parts = header.split(';')\n if not is_token(disptype):\n warnings.warn(BadContentDispositionHeader(header))\n return None, {}\n\n params = {}\n for item in parts:\n if '=' not in item:\n warnings.warn(BadContentDispositionHeader(header))\n return None, {}\n\n key, value = item.split('=', 1)\n key = key.lower().strip()\n value = value.lstrip()\n\n if key in params:\n warnings.warn(BadContentDispositionHeader(header))\n return None, {}\n\n if not is_token(key):\n warnings.warn(BadContentDispositionParam(item))\n continue\n\n elif is_continuous_param(key):\n if is_quoted(value):\n value = unescape(value[1:-1])\n elif not is_token(value):\n warnings.warn(BadContentDispositionParam(item))\n continue\n\n elif is_extended_param(key):\n if is_rfc5987(value):\n encoding, _, value = value.split(\"'\", 2)\n encoding = encoding or 'utf-8'\n else:\n warnings.warn(BadContentDispositionParam(item))\n continue\n\n try:\n value = unquote(value, encoding, 'strict')\n except UnicodeDecodeError: # pragma: nocover\n warnings.warn(BadContentDispositionParam(item))\n continue\n\n else:\n if is_quoted(value):\n value = unescape(value[1:-1].lstrip('\\\\/'))\n elif not is_token(value):\n warnings.warn(BadContentDispositionHeader(header))\n return None, {}\n\n params[key] = value\n\n return disptype.lower(), params\n\n\ndef content_disposition_filename(params):\n if not params:\n return None\n elif 'filename*' in params:\n return params['filename*']\n elif 'filename' in params:\n return params['filename']\n else:\n parts = []\n fnparams = sorted((key, value)\n for key, value in params.items()\n if key.startswith('filename*'))\n for num, (key, value) in enumerate(fnparams):\n _, tail = key.split('*', 1)\n if tail.endswith('*'):\n tail = tail[:-1]\n if tail == str(num):\n parts.append(value)\n else:\n 
break\n if not parts:\n return None\n value = ''.join(parts)\n if \"'\" in value:\n encoding, _, value = value.split(\"'\", 2)\n encoding = encoding or 'utf-8'\n return unquote(value, encoding, 'strict')\n return value\n\n\nclass MultipartResponseWrapper(object):\n \"\"\"Wrapper around the :class:`MultipartBodyReader` to take care about\n underlying connection and close it when it needs in.\"\"\"\n\n def __init__(self, resp, stream):\n self.resp = resp\n self.stream = stream\n\n if PY_35:\n def __aiter__(self):\n return self\n\n if not PY_352: # pragma: no cover\n __aiter__ = asyncio.coroutine(__aiter__)\n\n @asyncio.coroutine\n def __anext__(self):\n part = yield from self.next()\n if part is None:\n raise StopAsyncIteration # NOQA\n return part\n\n def at_eof(self):\n \"\"\"Returns ``True`` when all response data had been read.\n\n :rtype: bool\n \"\"\"\n return self.resp.content.at_eof()\n\n @asyncio.coroutine\n def next(self):\n \"\"\"Emits next multipart reader object.\"\"\"\n item = yield from self.stream.next()\n if self.stream.at_eof():\n yield from self.release()\n return item\n\n @asyncio.coroutine\n def release(self):\n \"\"\"Releases the connection gracefully, reading all the content\n to the void.\"\"\"\n yield from self.resp.release()\n\n\nclass BodyPartReader(object):\n \"\"\"Multipart reader for single body part.\"\"\"\n\n chunk_size = 8192\n\n def __init__(self, boundary, headers, content):\n self.headers = headers\n self._boundary = boundary\n self._content = content\n self._at_eof = False\n length = self.headers.get(CONTENT_LENGTH, None)\n self._length = int(length) if length is not None else None\n self._read_bytes = 0\n self._unread = deque()\n self._prev_chunk = None\n self._content_eof = 0\n\n if PY_35:\n def __aiter__(self):\n return self\n\n if not PY_352: # pragma: no cover\n __aiter__ = asyncio.coroutine(__aiter__)\n\n @asyncio.coroutine\n def __anext__(self):\n part = yield from self.next()\n if part is None:\n raise StopAsyncIteration # NOQA\n return part\n\n @asyncio.coroutine\n def next(self):\n item = yield from self.read()\n if not item:\n return None\n return item\n\n @asyncio.coroutine\n def read(self, *, decode=False):\n \"\"\"Reads body part data.\n\n :param bool decode: Decodes data following by encoding\n method from `Content-Encoding` header. 
If it missed\n data remains untouched\n\n :rtype: bytearray\n \"\"\"\n if self._at_eof:\n return b''\n data = bytearray()\n if self._length is None:\n while not self._at_eof:\n data.extend((yield from self.readline()))\n else:\n while not self._at_eof:\n data.extend((yield from self.read_chunk(self.chunk_size)))\n if decode:\n return self.decode(data)\n return data\n\n @asyncio.coroutine\n def read_chunk(self, size=chunk_size):\n \"\"\"Reads body part content chunk of the specified size.\n\n :param int size: chunk size\n\n :rtype: bytearray\n \"\"\"\n if self._at_eof:\n return b''\n if self._length:\n chunk = yield from self._read_chunk_from_length(size)\n else:\n chunk = yield from self._read_chunk_from_stream(size)\n\n self._read_bytes += len(chunk)\n if self._read_bytes == self._length:\n self._at_eof = True\n if self._at_eof:\n assert b'\\r\\n' == (yield from self._content.readline()), \\\n 'reader did not read all the data or it is malformed'\n return chunk\n\n @asyncio.coroutine\n def _read_chunk_from_length(self, size):\n \"\"\"Reads body part content chunk of the specified size.\n The body part must has `Content-Length` header with proper value.\n\n :param int size: chunk size\n\n :rtype: bytearray\n \"\"\"\n assert self._length is not None, \\\n 'Content-Length required for chunked read'\n chunk_size = min(size, self._length - self._read_bytes)\n chunk = yield from self._content.read(chunk_size)\n return chunk\n\n @asyncio.coroutine\n def _read_chunk_from_stream(self, size):\n \"\"\"Reads content chunk of body part with unknown length.\n The `Content-Length` header for body part is not necessary.\n\n :param int size: chunk size\n\n :rtype: bytearray\n \"\"\"\n assert size >= len(self._boundary) + 2, \\\n 'Chunk size must be greater or equal than boundary length + 2'\n first_chunk = self._prev_chunk is None\n if first_chunk:\n self._prev_chunk = yield from self._content.read(size)\n\n chunk = yield from self._content.read(size)\n self._content_eof += int(self._content.at_eof())\n assert self._content_eof < 3, \"Reading after EOF\"\n window = self._prev_chunk + chunk\n sub = b'\\r\\n' + self._boundary\n if first_chunk:\n idx = window.find(sub)\n else:\n idx = window.find(sub, max(0, len(self._prev_chunk) - len(sub)))\n if idx >= 0:\n # pushing boundary back to content\n self._content.unread_data(window[idx:])\n if size > idx:\n self._prev_chunk = self._prev_chunk[:idx]\n chunk = window[len(self._prev_chunk):idx]\n if not chunk:\n self._at_eof = True\n result = self._prev_chunk\n self._prev_chunk = chunk\n return result\n\n @asyncio.coroutine\n def readline(self):\n \"\"\"Reads body part by line by line.\n\n :rtype: bytearray\n \"\"\"\n if self._at_eof:\n return b''\n\n if self._unread:\n line = self._unread.popleft()\n else:\n line = yield from self._content.readline()\n\n if line.startswith(self._boundary):\n # the very last boundary may not come with \\r\\n,\n # so set single rules for everyone\n sline = line.rstrip(b'\\r\\n')\n boundary = self._boundary\n last_boundary = self._boundary + b'--'\n # ensure that we read exactly the boundary, not something alike\n if sline == boundary or sline == last_boundary:\n self._at_eof = True\n self._unread.append(line)\n return b''\n else:\n next_line = yield from self._content.readline()\n if next_line.startswith(self._boundary):\n line = line[:-2] # strip CRLF but only once\n self._unread.append(next_line)\n\n return line\n\n @asyncio.coroutine\n def release(self):\n \"\"\"Like :meth:`read`, but reads all the data to the void.\n\n :rtype: 
None\n \"\"\"\n if self._at_eof:\n return\n if self._length is None:\n while not self._at_eof:\n yield from self.readline()\n else:\n while not self._at_eof:\n yield from self.read_chunk(self.chunk_size)\n\n @asyncio.coroutine\n def text(self, *, encoding=None):\n \"\"\"Like :meth:`read`, but assumes that body part contains text data.\n\n :param str encoding: Custom text encoding. Overrides specified\n in charset param of `Content-Type` header\n\n :rtype: str\n \"\"\"\n data = yield from self.read(decode=True)\n # see https://www.w3.org/TR/html5/forms.html#multipart/form-data-encoding-algorithm # NOQA\n # and https://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#dom-xmlhttprequest-send # NOQA\n encoding = encoding or self.get_charset(default='utf-8')\n return data.decode(encoding)\n\n @asyncio.coroutine\n def json(self, *, encoding=None):\n \"\"\"Like :meth:`read`, but assumes that body parts contains JSON data.\n\n :param str encoding: Custom JSON encoding. Overrides specified\n in charset param of `Content-Type` header\n \"\"\"\n data = yield from self.read(decode=True)\n if not data:\n return None\n encoding = encoding or self.get_charset(default='utf-8')\n return json.loads(data.decode(encoding))\n\n @asyncio.coroutine\n def form(self, *, encoding=None):\n \"\"\"Like :meth:`read`, but assumes that body parts contains form\n urlencoded data.\n\n :param str encoding: Custom form encoding. Overrides specified\n in charset param of `Content-Type` header\n \"\"\"\n data = yield from self.read(decode=True)\n if not data:\n return None\n encoding = encoding or self.get_charset(default='utf-8')\n return parse_qsl(data.rstrip().decode(encoding), encoding=encoding)\n\n def at_eof(self):\n \"\"\"Returns ``True`` if the boundary was reached or\n ``False`` otherwise.\n\n :rtype: bool\n \"\"\"\n return self._at_eof\n\n def decode(self, data):\n \"\"\"Decodes data according the specified `Content-Encoding`\n or `Content-Transfer-Encoding` headers value.\n\n Supports ``gzip``, ``deflate`` and ``identity`` encodings for\n `Content-Encoding` header.\n\n Supports ``base64``, ``quoted-printable``, ``binary`` encodings for\n `Content-Transfer-Encoding` header.\n\n :param bytearray data: Data to decode.\n\n :raises: :exc:`RuntimeError` - if encoding is unknown.\n\n :rtype: bytes\n \"\"\"\n if CONTENT_TRANSFER_ENCODING in self.headers:\n data = self._decode_content_transfer(data)\n if CONTENT_ENCODING in self.headers:\n return self._decode_content(data)\n return data\n\n def _decode_content(self, data):\n encoding = self.headers[CONTENT_ENCODING].lower()\n\n if encoding == 'deflate':\n return zlib.decompress(data, -zlib.MAX_WBITS)\n elif encoding == 'gzip':\n return zlib.decompress(data, 16 + zlib.MAX_WBITS)\n elif encoding == 'identity':\n return data\n else:\n raise RuntimeError('unknown content encoding: {}'.format(encoding))\n\n def _decode_content_transfer(self, data):\n encoding = self.headers[CONTENT_TRANSFER_ENCODING].lower()\n\n if encoding == 'base64':\n return base64.b64decode(data)\n elif encoding == 'quoted-printable':\n return binascii.a2b_qp(data)\n elif encoding == 'binary':\n return data\n else:\n raise RuntimeError('unknown content transfer encoding: {}'\n ''.format(encoding))\n\n def get_charset(self, default=None):\n \"\"\"Returns charset parameter from ``Content-Type`` header or default.\n \"\"\"\n ctype = self.headers.get(CONTENT_TYPE, '')\n *_, params = parse_mimetype(ctype)\n return params.get('charset', default)\n\n @property\n def filename(self):\n \"\"\"Returns filename specified 
in Content-Disposition header or ``None``\n if missed or header is malformed.\"\"\"\n _, params = parse_content_disposition(\n self.headers.get(CONTENT_DISPOSITION))\n return content_disposition_filename(params)\n\n\nclass MultipartReader(object):\n \"\"\"Multipart body reader.\"\"\"\n\n #: Response wrapper, used when multipart readers constructs from response.\n response_wrapper_cls = MultipartResponseWrapper\n #: Multipart reader class, used to handle multipart/* body parts.\n #: None points to type(self)\n multipart_reader_cls = None\n #: Body part reader class for non multipart/* content types.\n part_reader_cls = BodyPartReader\n\n def __init__(self, headers, content):\n self.headers = headers\n self._boundary = ('--' + self._get_boundary()).encode()\n self._content = content\n self._last_part = None\n self._at_eof = False\n self._at_bof = True\n self._unread = []\n\n if PY_35:\n def __aiter__(self):\n return self\n\n if not PY_352: # pragma: no cover\n __aiter__ = asyncio.coroutine(__aiter__)\n\n @asyncio.coroutine\n def __anext__(self):\n part = yield from self.next()\n if part is None:\n raise StopAsyncIteration # NOQA\n return part\n\n @classmethod\n def from_response(cls, response):\n \"\"\"Constructs reader instance from HTTP response.\n\n :param response: :class:`~aiohttp.client.ClientResponse` instance\n \"\"\"\n obj = cls.response_wrapper_cls(response, cls(response.headers,\n response.content))\n return obj\n\n def at_eof(self):\n \"\"\"Returns ``True`` if the final boundary was reached or\n ``False`` otherwise.\n\n :rtype: bool\n \"\"\"\n return self._at_eof\n\n @asyncio.coroutine\n def next(self):\n \"\"\"Emits the next multipart body part.\"\"\"\n # So, if we're at BOF, we need to skip till the boundary.\n if self._at_eof:\n return\n yield from self._maybe_release_last_part()\n if self._at_bof:\n yield from self._read_until_first_boundary()\n self._at_bof = False\n else:\n yield from self._read_boundary()\n if self._at_eof: # we just read the last boundary, nothing to do there\n return\n self._last_part = yield from self.fetch_next_part()\n return self._last_part\n\n @asyncio.coroutine\n def release(self):\n \"\"\"Reads all the body parts to the void till the final boundary.\"\"\"\n while not self._at_eof:\n item = yield from self.next()\n if item is None:\n break\n yield from item.release()\n\n @asyncio.coroutine\n def fetch_next_part(self):\n \"\"\"Returns the next body part reader.\"\"\"\n headers = yield from self._read_headers()\n return self._get_part_reader(headers)\n\n def _get_part_reader(self, headers):\n \"\"\"Dispatches the response by the `Content-Type` header, returning\n suitable reader instance.\n\n :param dict headers: Response headers\n \"\"\"\n ctype = headers.get(CONTENT_TYPE, '')\n mtype, *_ = parse_mimetype(ctype)\n if mtype == 'multipart':\n if self.multipart_reader_cls is None:\n return type(self)(headers, self._content)\n return self.multipart_reader_cls(headers, self._content)\n else:\n return self.part_reader_cls(self._boundary, headers, self._content)\n\n def _get_boundary(self):\n mtype, *_, params = parse_mimetype(self.headers[CONTENT_TYPE])\n\n assert mtype == 'multipart', 'multipart/* content type expected'\n\n if 'boundary' not in params:\n raise ValueError('boundary missed for Content-Type: %s'\n % self.headers[CONTENT_TYPE])\n\n boundary = params['boundary']\n if len(boundary) > 70:\n raise ValueError('boundary %r is too long (70 chars max)'\n % boundary)\n\n return boundary\n\n @asyncio.coroutine\n def _readline(self):\n if 
self._unread:\n return self._unread.pop()\n return (yield from self._content.readline())\n\n @asyncio.coroutine\n def _read_until_first_boundary(self):\n while True:\n chunk = yield from self._readline()\n if chunk == b'':\n raise ValueError(\"Could not find starting boundary %r\"\n % (self._boundary))\n chunk = chunk.rstrip()\n if chunk == self._boundary:\n return\n elif chunk == self._boundary + b'--':\n self._at_eof = True\n return\n\n @asyncio.coroutine\n def _read_boundary(self):\n chunk = (yield from self._readline()).rstrip()\n if chunk == self._boundary:\n pass\n elif chunk == self._boundary + b'--':\n self._at_eof = True\n yield from self._readline()\n else:\n raise ValueError('Invalid boundary %r, expected %r'\n % (chunk, self._boundary))\n\n @asyncio.coroutine\n def _read_headers(self):\n lines = [b'']\n while True:\n chunk = yield from self._content.readline()\n chunk = chunk.strip()\n lines.append(chunk)\n if not chunk:\n break\n parser = HttpParser()\n headers, *_ = parser.parse_headers(lines)\n return headers\n\n @asyncio.coroutine\n def _maybe_release_last_part(self):\n \"\"\"Ensures that the last read body part is read completely.\"\"\"\n if self._last_part is not None:\n if not self._last_part.at_eof():\n yield from self._last_part.release()\n self._unread.extend(self._last_part._unread)\n self._last_part = None\n\n\nclass BodyPartWriter(object):\n \"\"\"Multipart writer for single body part.\"\"\"\n\n def __init__(self, obj, headers=None, *, chunk_size=8192):\n if headers is None:\n headers = CIMultiDict()\n elif not isinstance(headers, CIMultiDict):\n headers = CIMultiDict(headers)\n\n self.obj = obj\n self.headers = headers\n self._chunk_size = chunk_size\n self._fill_headers_with_defaults()\n\n self._serialize_map = {\n bytes: self._serialize_bytes,\n str: self._serialize_str,\n io.IOBase: self._serialize_io,\n MultipartWriter: self._serialize_multipart,\n ('application', 'json'): self._serialize_json,\n ('application', 'x-www-form-urlencoded'): self._serialize_form\n }\n\n def _fill_headers_with_defaults(self):\n if CONTENT_TYPE not in self.headers:\n content_type = self._guess_content_type(self.obj)\n if content_type is not None:\n self.headers[CONTENT_TYPE] = content_type\n\n if CONTENT_LENGTH not in self.headers:\n content_length = self._guess_content_length(self.obj)\n if content_length is not None:\n self.headers[CONTENT_LENGTH] = str(content_length)\n\n if CONTENT_DISPOSITION not in self.headers:\n filename = self._guess_filename(self.obj)\n if filename is not None:\n self.set_content_disposition('attachment', filename=filename)\n\n def _guess_content_length(self, obj):\n if isinstance(obj, bytes):\n return len(obj)\n elif isinstance(obj, str):\n *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))\n charset = params.get('charset', 'us-ascii')\n return len(obj.encode(charset))\n elif isinstance(obj, io.StringIO):\n *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))\n charset = params.get('charset', 'us-ascii')\n return len(obj.getvalue().encode(charset)) - obj.tell()\n elif isinstance(obj, io.BytesIO):\n return len(obj.getvalue()) - obj.tell()\n elif isinstance(obj, io.IOBase):\n try:\n return os.fstat(obj.fileno()).st_size - obj.tell()\n except (AttributeError, OSError):\n return None\n else:\n return None\n\n def _guess_content_type(self, obj, default='application/octet-stream'):\n if hasattr(obj, 'name'):\n name = getattr(obj, 'name')\n return mimetypes.guess_type(name)[0]\n elif isinstance(obj, (str, io.StringIO)):\n return 'text/plain; 
charset=utf-8'\n else:\n return default\n\n def _guess_filename(self, obj):\n if isinstance(obj, io.IOBase):\n name = getattr(obj, 'name', None)\n if name is not None:\n return Path(name).name\n\n def serialize(self):\n \"\"\"Yields byte chunks for body part.\"\"\"\n\n has_encoding = (\n CONTENT_ENCODING in self.headers and\n self.headers[CONTENT_ENCODING] != 'identity' or\n CONTENT_TRANSFER_ENCODING in self.headers\n )\n if has_encoding:\n # since we're following streaming approach which doesn't assumes\n # any intermediate buffers, we cannot calculate real content length\n # with the specified content encoding scheme. So, instead of lying\n # about content length and cause reading issues, we have to strip\n # this information.\n self.headers.pop(CONTENT_LENGTH, None)\n\n if self.headers:\n yield b'\\r\\n'.join(\n b': '.join(map(lambda i: i.encode('latin1'), item))\n for item in self.headers.items()\n )\n yield b'\\r\\n\\r\\n'\n yield from self._maybe_encode_stream(self._serialize_obj())\n yield b'\\r\\n'\n\n def _serialize_obj(self):\n obj = self.obj\n mtype, stype, *_ = parse_mimetype(self.headers.get(CONTENT_TYPE))\n serializer = self._serialize_map.get((mtype, stype))\n if serializer is not None:\n return serializer(obj)\n\n for key in self._serialize_map:\n if not isinstance(key, tuple) and isinstance(obj, key):\n return self._serialize_map[key](obj)\n return self._serialize_default(obj)\n\n def _serialize_bytes(self, obj):\n yield obj\n\n def _serialize_str(self, obj):\n *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))\n yield obj.encode(params.get('charset', 'us-ascii'))\n\n def _serialize_io(self, obj):\n while True:\n chunk = obj.read(self._chunk_size)\n if not chunk:\n break\n if isinstance(chunk, str):\n yield from self._serialize_str(chunk)\n else:\n yield from self._serialize_bytes(chunk)\n\n def _serialize_multipart(self, obj):\n yield from obj.serialize()\n\n def _serialize_json(self, obj):\n *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))\n yield json.dumps(obj).encode(params.get('charset', 'utf-8'))\n\n def _serialize_form(self, obj):\n if isinstance(obj, Mapping):\n obj = list(obj.items())\n return self._serialize_str(urlencode(obj, doseq=True))\n\n def _serialize_default(self, obj):\n raise TypeError('unknown body part type %r' % type(obj))\n\n def _maybe_encode_stream(self, stream):\n if CONTENT_ENCODING in self.headers:\n stream = self._apply_content_encoding(stream)\n if CONTENT_TRANSFER_ENCODING in self.headers:\n stream = self._apply_content_transfer_encoding(stream)\n yield from stream\n\n def _apply_content_encoding(self, stream):\n encoding = self.headers[CONTENT_ENCODING].lower()\n if encoding == 'identity':\n yield from stream\n elif encoding in ('deflate', 'gzip'):\n if encoding == 'gzip':\n zlib_mode = 16 + zlib.MAX_WBITS\n else:\n zlib_mode = -zlib.MAX_WBITS\n zcomp = zlib.compressobj(wbits=zlib_mode)\n for chunk in stream:\n yield zcomp.compress(chunk)\n else:\n yield zcomp.flush()\n else:\n raise RuntimeError('unknown content encoding: {}'\n ''.format(encoding))\n\n def _apply_content_transfer_encoding(self, stream):\n encoding = self.headers[CONTENT_TRANSFER_ENCODING].lower()\n if encoding == 'base64':\n buffer = bytearray()\n while True:\n if buffer:\n div, mod = divmod(len(buffer), 3)\n chunk, buffer = buffer[:div * 3], buffer[div * 3:]\n if chunk:\n yield base64.b64encode(chunk)\n chunk = next(stream, None)\n if not chunk:\n if buffer:\n yield base64.b64encode(buffer[:])\n return\n buffer.extend(chunk)\n elif encoding == 
'quoted-printable':\n for chunk in stream:\n yield binascii.b2a_qp(chunk)\n elif encoding == 'binary':\n yield from stream\n else:\n raise RuntimeError('unknown content transfer encoding: {}'\n ''.format(encoding))\n\n def set_content_disposition(self, disptype, **params):\n \"\"\"Sets ``Content-Disposition`` header.\n\n :param str disptype: Disposition type: inline, attachment, form-data.\n Should be valid extension token (see RFC 2183)\n :param dict params: Disposition params\n \"\"\"\n if not disptype or not (TOKEN > set(disptype)):\n raise ValueError('bad content disposition type {!r}'\n ''.format(disptype))\n value = disptype\n if params:\n lparams = []\n for key, val in params.items():\n if not key or not (TOKEN > set(key)):\n raise ValueError('bad content disposition parameter'\n ' {!r}={!r}'.format(key, val))\n qval = quote(val, '')\n lparams.append((key, '\"%s\"' % qval))\n if key == 'filename':\n lparams.append(('filename*', \"utf-8''\" + qval))\n sparams = '; '.join('='.join(pair) for pair in lparams)\n value = '; '.join((value, sparams))\n self.headers[CONTENT_DISPOSITION] = value\n\n @property\n def filename(self):\n \"\"\"Returns filename specified in Content-Disposition header or ``None``\n if missed.\"\"\"\n _, params = parse_content_disposition(\n self.headers.get(CONTENT_DISPOSITION))\n return content_disposition_filename(params)\n\n\nclass MultipartWriter(object):\n \"\"\"Multipart body writer.\"\"\"\n\n #: Body part reader class for non multipart/* content types.\n part_writer_cls = BodyPartWriter\n\n def __init__(self, subtype='mixed', boundary=None):\n boundary = boundary if boundary is not None else uuid.uuid4().hex\n try:\n boundary.encode('us-ascii')\n except UnicodeEncodeError:\n raise ValueError('boundary should contains ASCII only chars')\n self.headers = CIMultiDict()\n self.headers[CONTENT_TYPE] = 'multipart/{}; boundary=\"{}\"'.format(\n subtype, boundary\n )\n self.parts = []\n\n def __enter__(self):\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n pass\n\n def __iter__(self):\n return iter(self.parts)\n\n def __len__(self):\n return len(self.parts)\n\n @property\n def boundary(self):\n *_, params = parse_mimetype(self.headers.get(CONTENT_TYPE))\n return params['boundary'].encode('us-ascii')\n\n def append(self, obj, headers=None):\n \"\"\"Adds a new body part to multipart writer.\"\"\"\n if isinstance(obj, self.part_writer_cls):\n if headers:\n obj.headers.update(headers)\n self.parts.append(obj)\n else:\n if not headers:\n headers = CIMultiDict()\n self.parts.append(self.part_writer_cls(obj, headers))\n return self.parts[-1]\n\n def append_json(self, obj, headers=None):\n \"\"\"Helper to append JSON part.\"\"\"\n if not headers:\n headers = CIMultiDict()\n headers[CONTENT_TYPE] = 'application/json'\n return self.append(obj, headers)\n\n def append_form(self, obj, headers=None):\n \"\"\"Helper to append form urlencoded part.\"\"\"\n if not headers:\n headers = CIMultiDict()\n headers[CONTENT_TYPE] = 'application/x-www-form-urlencoded'\n assert isinstance(obj, (Sequence, Mapping))\n return self.append(obj, headers)\n\n def serialize(self):\n \"\"\"Yields multipart byte chunks.\"\"\"\n if not self.parts:\n yield b''\n return\n\n for part in self.parts:\n yield b'--' + self.boundary + b'\\r\\n'\n yield from part.serialize()\n else:\n yield b'--' + self.boundary + b'--\\r\\n'\n\n yield b''\n", "path": "aiohttp/multipart.py"}]} |
gh_patches_debug_1066 | rasdani/github-patches | git_diff | pypa__pipenv-2168 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pipenv Shell raises error if shell's path contains space
This might be a Windows-only issue - Pipenv version 11.10.1
I am using the Cmder console
##### Steps to replicate
1. `set PIPENV_SHELL="c:\path_with space\enclosed\inquotes\shell"`
2. `pipenv shell`
##### Expected result
Pipenv Shell is activated with the desired shell
##### Actual result
Pipenv Shell is activated with the default shell (cmd.exe)
> 'c:\path_with' is not recognized as an internal or external command
I use a portable Cmder that is in a folder location that has spaces in its path.
To use Pipenv with Cmder, I need to set `PIPENV_SHELL` to Cmder's `init.bat` file.
This fails if there is a space in the folder path.
#### Workaround
Moving Cmder's `init.bat` to a location with no spaces fixes it.
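
To make the failure mode concrete, here is a minimal sketch (using a hypothetical Cmder path and plain `subprocess` calls, not pew's or pipenv's actual code) of how an unquoted space splits the command on Windows:

```python
# Minimal illustration of the quoting problem (Windows only; the path is hypothetical).
import subprocess

init_bat = r"C:\portable apps\cmder\vendor\init.bat"  # path containing a space

# Handing cmd.exe a single unquoted string splits it at the space, so it tries
# to run "C:\portable" and fails with "... is not recognized as an internal or
# external command":
# subprocess.check_call("cmd /k " + init_bat, shell=True)

# Passing the arguments as a list lets subprocess quote the path for us, so it
# stays a single token and the space is harmless:
subprocess.check_call(["cmd", "/k", init_bat])
```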

--- END ISSUE ---
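
Separately from whatever fix the codebase below ends up receiving, one common way to tolerate a `PIPENV_SHELL` value that was exported with surrounding quotes is to strip one enclosing pair before using it. The helper below is only an illustrative assumption, not pipenv's API:

```python
# Hypothetical helper (not part of pipenv): normalize a PIPENV_SHELL value so a
# quoted path containing spaces is kept as a single token.
import os

def resolve_pipenv_shell(default="cmd"):
    value = os.environ.get("PIPENV_SHELL", "").strip()
    # Drop exactly one pair of surrounding double quotes, if present.
    if len(value) >= 2 and value[0] == value[-1] == '"':
        value = value[1:-1]
    return value or default
```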
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pipenv/patched/pew/pew.py`
Content:
```
1 from __future__ import print_function, absolute_import, unicode_literals
2
3 import os
4 import sys
5 import argparse
6 import shutil
7 import random
8 import textwrap
9 from functools import partial
10 from subprocess import CalledProcessError
11 try:
12 from pathlib import Path
13 except ImportError:
14 from pipenv.vendor.pathlib2 import Path
15
16 try:
17 from shutil import get_terminal_size
18 except ImportError:
19 from pipenv.vendor.backports.shutil_get_terminal_size import get_terminal_size
20
21 windows = sys.platform == 'win32'
22
23 from clonevirtualenv import clone_virtualenv
24 if not windows:
25 try:
26 # Try importing these packages if avaiable
27 from pythonz.commands.install import InstallCommand
28 from pythonz.commands.uninstall import UninstallCommand
29 from pythonz.installer.pythoninstaller import PythonInstaller, AlreadyInstalledError
30 from pythonz.commands.list import ListCommand as ListPythons
31 from pythonz.define import PATH_PYTHONS
32 from pythonz.commands.locate import LocateCommand as LocatePython
33 except:
34 # create mock commands
35 InstallCommand = ListPythons = LocatePython = UninstallCommand = \
36 lambda : sys.exit('You need to install the pythonz extra. pip install pew[pythonz]')
37 else:
38 # Pythonz does not support windows
39 InstallCommand = ListPythons = LocatePython = UninstallCommand = \
40 lambda : sys.exit('Command not supported on this platform')
41
42 from ._win_utils import get_shell
43
44 from pew._utils import (check_call, invoke, expandpath, own, env_bin_dir,
45 check_path, temp_environ, NamedTemporaryFile, to_unicode)
46 from pew._print_utils import print_virtualenvs
47
48 if sys.version_info[0] == 2:
49 input = raw_input
50
51 err = partial(print, file=sys.stderr)
52
53 if windows:
54 default_home = '~/.virtualenvs'
55 else:
56 default_home = os.path.join(
57 os.environ.get('XDG_DATA_HOME', '~/.local/share'), 'virtualenvs')
58
59 def get_workon_home():
60 return expandpath(os.environ.get('WORKON_HOME', default_home))
61
62
63 def makedirs_and_symlink_if_needed(workon_home):
64 if not workon_home.exists() and own(workon_home):
65 workon_home.mkdir(parents=True)
66 link = expandpath('~/.virtualenvs')
67 if os.name == 'posix' and 'WORKON_HOME' not in os.environ and \
68 'XDG_DATA_HOME' not in os.environ and not link.exists():
69 link.symlink_to(str(workon_home))
70 return True
71 else:
72 return False
73
74 pew_site = Path(__file__).parent
75
76 def supported_shell():
77 shell = Path(os.environ.get('SHELL', '')).stem
78 if shell in ('bash', 'zsh', 'fish'):
79 return shell
80
81
82 def shell_config_cmd(argv):
83 "Prints the path for the current $SHELL helper file"
84 shell = supported_shell()
85 if shell:
86 print(pew_site / 'shell_config' / ('init.' + shell))
87 else:
88 err('Completions and prompts are unavailable for %s' %
89 repr(os.environ.get('SHELL', '')))
90
91
92 def deploy_completions():
93 completions = {'complete.bash': Path('/etc/bash_completion.d/pew'),
94 'complete.zsh': Path('/usr/local/share/zsh/site-functions/_pew'),
95 'complete.fish': Path('/etc/fish/completions/pew.fish')}
96 for comp, dest in completions.items():
97 if not dest.parent.exists():
98 dest.parent.mkdir(parents=True)
99 shutil.copy(str(pew_site / 'shell_config' / comp), str(dest))
100
101
102 def get_project_dir(env):
103 project_file = get_workon_home() / env / '.project'
104 if project_file.exists():
105 with project_file.open() as f:
106 project_dir = f.readline().strip()
107 if os.path.exists(project_dir):
108 return project_dir
109 else:
110 err('Corrupted or outdated:', project_file, '\nDirectory',
111 project_dir, "doesn't exist.")
112
113
114 def unsetenv(key):
115 if key in os.environ:
116 del os.environ[key]
117
118
119 def compute_path(env):
120 envdir = get_workon_home() / env
121 return os.pathsep.join([
122 str(envdir / env_bin_dir),
123 os.environ['PATH'],
124 ])
125
126
127 def inve(env, command, *args, **kwargs):
128 """Run a command in the given virtual environment.
129
130 Pass additional keyword arguments to ``subprocess.check_call()``."""
131 # we don't strictly need to restore the environment, since pew runs in
132 # its own process, but it feels like the right thing to do
133 with temp_environ():
134 os.environ['VIRTUAL_ENV'] = str(get_workon_home() / env)
135 os.environ['PATH'] = compute_path(env)
136
137 unsetenv('PYTHONHOME')
138 unsetenv('__PYVENV_LAUNCHER__')
139
140 try:
141 return check_call([command] + list(args), shell=windows, **kwargs)
142 # need to have shell=True on windows, otherwise the PYTHONPATH
143 # won't inherit the PATH
144 except OSError as e:
145 if e.errno == 2:
146 err('Unable to find', command)
147 else:
148 raise
149
150
151 def fork_shell(env, shellcmd, cwd):
152 or_ctrld = '' if windows else "or 'Ctrl+D' "
153 err("Launching subshell in virtual environment. Type 'exit' ", or_ctrld,
154 "to return.", sep='')
155 if 'VIRTUAL_ENV' in os.environ:
156 err("Be aware that this environment will be nested on top "
157 "of '%s'" % Path(os.environ['VIRTUAL_ENV']).name)
158 try:
159 inve(env, *shellcmd, cwd=cwd)
160 except CalledProcessError:
161 # These shells report errors when the last command executed in the
162     # subshell ends in an error. This causes the subprocess to fail, which is
163 # not what we want. Stay silent for them, there's nothing we can do.
164 shell_name, _ = os.path.splitext(os.path.basename(shellcmd[0]))
165 suppress_error = shell_name.lower() in ('cmd', 'powershell', 'pwsh')
166 if not suppress_error:
167 raise
168
169
170 def fork_bash(env, cwd):
171 # bash is a special little snowflake, and prevent_path_errors cannot work there
172 # https://github.com/berdario/pew/issues/58#issuecomment-102182346
173 bashrcpath = expandpath('~/.bashrc')
174 if bashrcpath.exists():
175 with NamedTemporaryFile('w+') as rcfile:
176 with bashrcpath.open() as bashrc:
177 rcfile.write(bashrc.read())
178 rcfile.write('\nexport PATH="' + to_unicode(compute_path(env)) + '"')
179 rcfile.flush()
180 fork_shell(env, ['bash', '--rcfile', rcfile.name], cwd)
181 else:
182 fork_shell(env, ['bash'], cwd)
183
184
185 def fork_cmder(env, cwd):
186 shell_cmd = ['cmd']
187 cmderrc_path = r'%CMDER_ROOT%\vendor\init.bat'
188 if expandpath(cmderrc_path).exists():
189 shell_cmd += ['/k', cmderrc_path]
190 if cwd:
191 os.environ['CMDER_START'] = cwd
192 fork_shell(env, shell_cmd, cwd)
193
194 def _detect_shell():
195 shell = os.environ.get('SHELL', None)
196 if not shell:
197 if 'CMDER_ROOT' in os.environ:
198 shell = 'Cmder'
199 elif windows:
200 shell = get_shell(os.getpid())
201 else:
202 shell = 'sh'
203 return shell
204
205 def shell(env, cwd=None):
206 env = str(env)
207 shell = _detect_shell()
208 shell_name = Path(shell).stem
209 if shell_name not in ('Cmder', 'bash', 'elvish', 'powershell', 'pwsh', 'klingon', 'cmd'):
210 # On Windows the PATH is usually set with System Utility
211 # so we won't worry about trying to check mistakes there
212 shell_check = (sys.executable + ' -c "from pipenv.patched.pew.pew import '
213 'prevent_path_errors; prevent_path_errors()"')
214 try:
215 inve(env, shell, '-c', shell_check)
216 except CalledProcessError:
217 return
218 if shell_name in ('Cmder', 'cmd'):
219 os.environ['PROMPT'] = '({0}) {1}'.format(env, os.environ['PROMPT'])
220 if shell_name == 'bash':
221 fork_bash(env, cwd)
222 elif shell_name == 'Cmder':
223 fork_cmder(env, cwd)
224 else:
225 fork_shell(env, [shell], cwd)
226
227
228 def mkvirtualenv(envname, python=None, packages=[], project=None,
229 requirements=None, rest=[]):
230
231 if python:
232 rest = ["--python=%s" % python] + rest
233
234 path = (get_workon_home() / envname).absolute()
235
236 try:
237 check_call([sys.executable, "-m", "virtualenv", str(path)] + rest)
238 except (CalledProcessError, KeyboardInterrupt):
239 rmvirtualenvs([envname])
240 raise
241 else:
242 if project:
243 setvirtualenvproject(envname, project.absolute())
244 if requirements:
245 inve(envname, 'pip', 'install', '-r', str(expandpath(requirements)))
246 if packages:
247 inve(envname, 'pip', 'install', *packages)
248
249
250 def mkvirtualenv_argparser():
251 parser = argparse.ArgumentParser()
252 parser.add_argument('-p', '--python')
253 parser.add_argument('-i', action='append', dest='packages', help='Install \
254 a package after the environment is created. This option may be repeated.')
255 parser.add_argument('-r', dest='requirements', help='Provide a pip \
256 requirements file to install a base set of packages into the new environment.')
257 parser.add_argument('-d', '--dont-activate', action='store_false',
258 default=True, dest='activate', help="After \
259 creation, continue with the existing shell (don't \
260 activate the new environment).")
261 return parser
262
263
264 def new_cmd(argv):
265 """Create a new environment, in $WORKON_HOME."""
266 parser = mkvirtualenv_argparser()
267 parser.add_argument('-a', dest='project', help='Provide a full path to a \
268 project directory to associate with the new environment.')
269
270 parser.add_argument('envname')
271 args, rest = parser.parse_known_args(argv)
272 project = expandpath(args.project) if args.project else None
273
274 mkvirtualenv(args.envname, args.python, args.packages, project,
275 args.requirements, rest)
276 if args.activate:
277 shell(args.envname)
278
279
280 def rmvirtualenvs(envs):
281 error_happened = False
282 for env in envs:
283 env = get_workon_home() / env
284 if os.environ.get('VIRTUAL_ENV') == str(env):
285 err("ERROR: You cannot remove the active environment (%s)." % env)
286 error_happened = True
287 break
288 try:
289 shutil.rmtree(str(env))
290 except OSError as e:
291 err("Error while trying to remove the {0} env: \n{1}".format
292 (env, e.strerror))
293 error_happened = True
294 return error_happened
295
296
297
298 def rm_cmd(argv):
299     """Remove one or more environments from $WORKON_HOME."""
300 if len(argv) < 1:
301 sys.exit("Please specify an environment")
302 return rmvirtualenvs(argv)
303
304
305 def packages(site_packages):
306 nodes = site_packages.iterdir()
307 return set([x.stem.split('-')[0] for x in nodes]) - set(['__pycache__'])
308
309
310 def showvirtualenv(env):
311 columns, _ = get_terminal_size()
312 pkgs = sorted(packages(sitepackages_dir(env)))
313 env_python = get_workon_home() / env / env_bin_dir / 'python'
314 l = len(env) + 2
315 version = invoke(str(env_python), '-V')
316 version = ' - '.join((version.out + version.err).splitlines())
317 print(env, ': ', version, sep='')
318 print(textwrap.fill(' '.join(pkgs),
319 width=columns-l,
320 initial_indent=(l * ' '),
321 subsequent_indent=(l * ' ')), '\n')
322
323
324 def show_cmd(argv):
325 try:
326 showvirtualenv(argv[0])
327 except IndexError:
328 if 'VIRTUAL_ENV' in os.environ:
329 showvirtualenv(Path(os.environ['VIRTUAL_ENV']).name)
330 else:
331 sys.exit('pew show [env]')
332
333
334 def lsenvs():
335 items = get_workon_home().glob(os.path.join('*', env_bin_dir, 'python*'))
336 return sorted(set(env.parts[-3] for env in items))
337
338
339 def lsvirtualenv(verbose):
340 envs = lsenvs()
341
342 if not verbose:
343 print_virtualenvs(*envs)
344 else:
345 for env in envs:
346 showvirtualenv(env)
347
348
349 def ls_cmd(argv):
350 """List available environments."""
351 parser = argparse.ArgumentParser()
352 p_group = parser.add_mutually_exclusive_group()
353 p_group.add_argument('-b', '--brief', action='store_false')
354 p_group.add_argument('-l', '--long', action='store_true')
355 args = parser.parse_args(argv)
356 lsvirtualenv(args.long)
357
358 def parse_envname(argv, no_arg_callback):
359 if len(argv) < 1:
360 no_arg_callback()
361
362 env = argv[0]
363 if env.startswith('/'):
364 sys.exit("ERROR: Invalid environment name '{0}'.".format(env))
365 if not (get_workon_home() / env).exists():
366 sys.exit("ERROR: Environment '{0}' does not exist. Create it with \
367 'pew new {0}'.".format(env))
368 else:
369 return env
370
371 def workon_cmd(argv):
372 """List or change working virtual environments."""
373
374 def list_and_exit():
375 lsvirtualenv(False)
376 sys.exit(0)
377
378 env = parse_envname(argv, list_and_exit)
379
380 # Check if the virtualenv has an associated project directory and in
381 # this case, use it as the current working directory.
382 project_dir = get_project_dir(env) or os.getcwd()
383 shell(env, cwd=project_dir)
384
385
386 def sitepackages_dir(env=os.environ.get('VIRTUAL_ENV')):
387 if not env:
388 sys.exit('ERROR: no virtualenv active')
389 else:
390 env_python = get_workon_home() / env / env_bin_dir / 'python'
391 return Path(invoke(str(env_python), '-c', 'import distutils; \
392 print(distutils.sysconfig.get_python_lib())').out)
393
394
395 def add_cmd(argv):
396 """Add the specified directories to the Python path for the currently active virtualenv.
397
398 This will be done by placing the directory names in a path file named
399 "virtualenv_path_extensions.pth" inside the virtualenv's site-packages
400 directory; if this file does not exist, it will be created first.
401
402 """
403 parser = argparse.ArgumentParser()
404 parser.add_argument('-d', dest='remove', action='store_true')
405 parser.add_argument('dirs', nargs='+')
406 args = parser.parse_args(argv)
407
408 extra_paths = sitepackages_dir() / '_virtualenv_path_extensions.pth'
409 new_paths = [os.path.abspath(d) + "\n" for d in args.dirs]
410 if not extra_paths.exists():
411 with extra_paths.open('w') as extra:
412 extra.write('''import sys; sys.__plen = len(sys.path)
413 import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)
414 ''')
415
416 def rewrite(f):
417 with extra_paths.open('r+') as extra:
418 to_write = f(extra.readlines())
419 extra.seek(0)
420 extra.truncate()
421 extra.writelines(to_write)
422
423 if args.remove:
424 rewrite(lambda ls: [line for line in ls if line not in new_paths])
425 else:
426 rewrite(lambda lines: lines[0:1] + new_paths + lines[1:])
427
428
429 def sitepackages_dir_cmd(argv):
430 print(sitepackages_dir())
431
432
433 def lssitepackages_cmd(argv):
434 """Show the content of the site-packages directory of the current virtualenv."""
435 site = sitepackages_dir()
436 print(*sorted(site.iterdir()), sep=os.linesep)
437 extra_paths = site / '_virtualenv_path_extensions.pth'
438 if extra_paths.exists():
439 print('from _virtualenv_path_extensions.pth:')
440 with extra_paths.open() as extra:
441 print(''.join(extra.readlines()))
442
443
444 def toggleglobalsitepackages_cmd(argv):
445 """Toggle the current virtualenv between having and not having access to the global site-packages."""
446 quiet = argv == ['-q']
447 site = sitepackages_dir()
448 ngsp_file = site.parent / 'no-global-site-packages.txt'
449 if ngsp_file.exists():
450 ngsp_file.unlink()
451 if not quiet:
452 print('Enabled global site-packages')
453 else:
454 with ngsp_file.open('w'):
455 if not quiet:
456 print('Disabled global site-packages')
457
458
459 def cp_cmd(argv):
460 """Duplicate the named virtualenv to make a new one."""
461 parser = argparse.ArgumentParser()
462 parser.add_argument('source')
463 parser.add_argument('target', nargs='?')
464 parser.add_argument('-d', '--dont-activate', action='store_false',
465 default=True, dest='activate', help="After \
466 creation, continue with the existing shell (don't \
467 activate the new environment).")
468
469 args = parser.parse_args(argv)
470 target_name = copy_virtualenv_project(args.source, args.target)
471 if args.activate:
472 shell(target_name)
473
474
475 def copy_virtualenv_project(source, target):
476 source = expandpath(source)
477 workon_home = get_workon_home()
478 if not source.exists():
479 source = workon_home / source
480 if not source.exists():
481 sys.exit('Please provide a valid virtualenv to copy')
482
483 target_name = target or source.name
484
485 target = workon_home / target_name
486
487 if target.exists():
488 sys.exit('%s virtualenv already exists in %s.' % (
489 target_name, workon_home
490 ))
491
492 print('Copying {0} in {1}'.format(source, target_name))
493 clone_virtualenv(str(source), str(target))
494 return target_name
495
496
497 def rename_cmd(argv):
498 """Rename a virtualenv"""
499 parser = argparse.ArgumentParser()
500 parser.add_argument('source')
501 parser.add_argument('target')
502 pargs = parser.parse_args(argv)
503 copy_virtualenv_project(pargs.source, pargs.target)
504 return rmvirtualenvs([pargs.source])
505
506
507 def setvirtualenvproject(env, project):
508 print('Setting project for {0} to {1}'.format(env, project))
509 with (get_workon_home() / env / '.project').open('wb') as prj:
510 prj.write(str(project).encode())
511
512
513 def setproject_cmd(argv):
514 """Given a virtualenv directory and a project directory, set the
515 virtualenv up to be associated with the project."""
516 args = dict(enumerate(argv))
517 project = os.path.abspath(args.get(1, '.'))
518 env = args.get(0, os.environ.get('VIRTUAL_ENV'))
519 if not env:
520 sys.exit('pew setproject [virtualenv] [project_path]')
521 if not (get_workon_home() / env).exists():
522 sys.exit("Environment '%s' doesn't exist." % env)
523 if not os.path.isdir(project):
524 sys.exit('pew setproject: %s does not exist' % project)
525 setvirtualenvproject(env, project)
526
527
528 def mkproject_cmd(argv):
529 """Create a new project directory and its associated virtualenv."""
530 if '-l' in argv or '--list' in argv:
531 templates = [t.name[9:] for t in get_workon_home().glob("template_*")]
532 print("Available project templates:", *templates, sep='\n')
533 return
534
535 parser = mkvirtualenv_argparser()
536 parser.add_argument('envname')
537 parser.add_argument(
538 '-t', action='append', default=[], dest='templates', help='Multiple \
539 templates may be selected. They are applied in the order specified on the \
540 command line.')
541 parser.add_argument(
542 '-l', '--list', action='store_true', help='List available templates.')
543
544 args, rest = parser.parse_known_args(argv)
545
546 projects_home = Path(os.environ.get('PROJECT_HOME', '.'))
547 if not projects_home.exists():
548 sys.exit('ERROR: Projects directory %s does not exist. \
549 Create it or set PROJECT_HOME to an existing directory.' % projects_home)
550
551 project = (projects_home / args.envname).absolute()
552 if project.exists():
553 sys.exit('Project %s already exists.' % args.envname)
554
555 mkvirtualenv(args.envname, args.python, args.packages, project.absolute(),
556 args.requirements, rest)
557
558 project.mkdir()
559
560 for template_name in args.templates:
561 template = get_workon_home() / ("template_" + template_name)
562 inve(args.envname, str(template), args.envname, str(project))
563 if args.activate:
564 shell(args.envname, cwd=str(project))
565
566
567 def mktmpenv_cmd(argv):
568 """Create a temporary virtualenv."""
569 parser = mkvirtualenv_argparser()
570 env = '.'
571 while (get_workon_home() / env).exists():
572 env = hex(random.getrandbits(64))[2:-1]
573
574 args, rest = parser.parse_known_args(argv)
575
576 mkvirtualenv(env, args.python, args.packages, requirements=args.requirements,
577 rest=rest)
578 print('This is a temporary environment. It will be deleted when you exit')
579 try:
580 if args.activate:
581 # only used for testing on windows
582 shell(env)
583 finally:
584 return rmvirtualenvs([env])
585
586
587 def wipeenv_cmd(argv):
588 """Remove all installed packages from the current (or supplied) env."""
589 env = argv[0] if argv else os.environ.get('VIRTUAL_ENV')
590
591 if not env:
592 sys.exit('ERROR: no virtualenv active')
593 elif not (get_workon_home() / env).exists():
594 sys.exit("ERROR: Environment '{0}' does not exist.".format(env))
595 else:
596 env_pip = str(get_workon_home() / env / env_bin_dir / 'pip')
597 all_pkgs = set(invoke(env_pip, 'freeze').out.splitlines())
598 pkgs = set(p for p in all_pkgs if len(p.split("==")) == 2)
599 ignored = sorted(all_pkgs - pkgs)
600 pkgs = set(p.split("==")[0] for p in pkgs)
601 to_remove = sorted(pkgs - set(['distribute', 'wsgiref']))
602 if to_remove:
603 print("Ignoring:\n %s" % "\n ".join(ignored))
604 print("Uninstalling packages:\n %s" % "\n ".join(to_remove))
605 inve(env, 'pip', 'uninstall', '-y', *to_remove)
606 else:
607 print("Nothing to remove")
608
609
610 def inall_cmd(argv):
611 """Run a command in each virtualenv."""
612 envs = lsenvs()
613 errors = False
614 for env in envs:
615 print("\n%s:" % env)
616 try:
617 inve(env, *argv)
618 except CalledProcessError as e:
619 errors = True
620 err(e)
621 sys.exit(errors)
622
623
624 def in_cmd(argv):
625 """Run a command in the given virtualenv."""
626
627 if len(argv) == 1:
628 return workon_cmd(argv)
629
630 parse_envname(argv, lambda : sys.exit('You must provide a valid virtualenv to target'))
631
632 inve(*argv)
633
634
635 def restore_cmd(argv):
636 """Try to restore a broken virtualenv by reinstalling the same python version on top of it"""
637
638 if len(argv) < 1:
639 sys.exit('You must provide a valid virtualenv to target')
640
641 env = argv[0]
642 path = get_workon_home() / env
643     # path = workon_home / env  (commented out: 'workon_home' is undefined here; line 642 already sets 'path')
644 py = path / env_bin_dir / ('python.exe' if windows else 'python')
645 exact_py = py.resolve().name
646
647 check_call([sys.executable, "-m", "virtualenv", str(path.absolute()), "--python=%s" % exact_py])
648
649
650 def dir_cmd(argv):
651 """Print the path for the virtualenv directory"""
652 env = parse_envname(argv, lambda : sys.exit('You must provide a valid virtualenv to target'))
653 print(get_workon_home() / env)
654
655
656 def install_cmd(argv):
657 '''Use Pythonz to download and build the specified Python version'''
658 installer = InstallCommand()
659 options, versions = installer.parser.parse_args(argv)
660 if len(versions) != 1:
661 installer.parser.print_help()
662 sys.exit(1)
663 else:
664 try:
665 actual_installer = PythonInstaller.get_installer(versions[0], options)
666 actual_installer.install()
667 except AlreadyInstalledError as e:
668 print(e)
669
670
671 def uninstall_cmd(argv):
672 '''Use Pythonz to uninstall the specified Python version'''
673 UninstallCommand().run(argv)
674
675
676 def list_pythons_cmd(argv):
677 '''List the pythons installed by Pythonz (or all the installable ones)'''
678 try:
679 Path(PATH_PYTHONS).mkdir(parents=True)
680 except OSError:
681 pass
682 ListPythons().run(argv)
683
684
685 def locate_python_cmd(argv):
686 '''Locate the path for the python version installed by Pythonz'''
687 LocatePython().run(argv)
688
689
690 def version_cmd(argv):
691 """Prints current pew version"""
692 import pkg_resources
693
694 try:
695 __version__ = pkg_resources.get_distribution('pew').version
696 except pkg_resources.DistributionNotFound:
697 __version__ = 'unknown'
698 print('Setuptools has some issues here, failed to get our own package.', file=sys.stderr)
699
700 print(__version__)
701
702
703 def prevent_path_errors():
704 if 'VIRTUAL_ENV' in os.environ and not check_path():
705 sys.exit('''ERROR: The virtualenv hasn't been activated correctly.
706 Either the env is corrupted (try running `pew restore env`),
707 Or an upgrade of your Python version broke your env,
708 Or check the contents of your $PATH. You might be adding new directories to it
709 from inside your shell's configuration file.
710 In this case, for further details please see: https://github.com/berdario/pew#the-environment-doesnt-seem-to-be-activated''')
711
712
713 def first_run_setup():
714 shell = supported_shell()
715 if shell:
716 if shell == 'fish':
717 source_cmd = 'source (pew shell_config)'
718 else:
719 source_cmd = 'source $(pew shell_config)'
720 rcpath = expandpath({'bash': '~/.bashrc'
721 , 'zsh': '~/.zshrc'
722 , 'fish': '~/.config/fish/config.fish'}[shell])
723 if rcpath.exists():
724 update_config_file(rcpath, source_cmd)
725 else:
726 print("It seems that you're running pew for the first time\n"
727                   "If you want to source shell completions and update your prompt, "
728                   "add the following line to your shell config file:\n %s" % source_cmd)
729 print('\nWill now continue with the command:', *sys.argv[1:])
730 input('[enter]')
731
732 def update_config_file(rcpath, source_cmd):
733 with rcpath.open('r+') as rcfile:
734 if source_cmd not in (line.strip() for line in rcfile.readlines()):
735 choice = 'X'
736 while choice not in ('y', '', 'n'):
737 choice = input("It seems that you're running pew for the first time\n"
738 "do you want to modify %s to source completions and"
739 " update your prompt? [y/N]\n> " % rcpath).lower()
740 if choice == 'y':
741 rcfile.write('\n# added by Pew\n%s\n' % source_cmd)
742 print('Done')
743 else:
744 print('\nOk, if you want to do it manually, just add\n %s\nat'
745 ' the end of %s' % (source_cmd, rcpath))
746
747
748 def print_commands(cmds):
749 longest = max(map(len, cmds)) + 3
750 columns, _ = get_terminal_size()
751
752 print('Available commands:\n')
753 for cmd, fun in sorted(cmds.items()):
754 if fun.__doc__:
755 print(textwrap.fill(
756 fun.__doc__.splitlines()[0],
757 columns or 1000,
758 initial_indent=(' {0}: '.format(cmd)).ljust(longest),
759 subsequent_indent=longest * ' '))
760 else:
761 print(' ' + cmd)
762
763
764 def pew():
765 first_run = makedirs_and_symlink_if_needed(get_workon_home())
766 if first_run and sys.stdin.isatty():
767 first_run_setup()
768
769 cmds = dict((cmd[:-4], fun)
770 for cmd, fun in globals().items() if cmd.endswith('_cmd'))
771 if sys.argv[1:]:
772 if sys.argv[1] in cmds:
773 command = cmds[sys.argv[1]]
774 try:
775 return command(sys.argv[2:])
776 except CalledProcessError as e:
777 return e.returncode
778 except KeyboardInterrupt:
779 pass
780 else:
781 err("ERROR: command", sys.argv[1], "does not exist.")
782 print_commands(cmds)
783 sys.exit(1)
784 else:
785 print_commands(cmds)
786
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
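As a quick orientation on the format: each `@@ -M,K +N,L @@` hunk header gives the starting line and line count of the affected region in the old (`-`) and new (`+`) file; context lines begin with a space, removed lines with `-`, and added lines with `+`. The snippet below is only a minimal, hypothetical illustration of that anatomy; the file name, line numbers, and code in it are invented for illustration and do not refer to the files above.
```diff
diff --git a/example.py b/example.py
--- a/example.py
+++ b/example.py
@@ -10,2 +10,2 @@
 def greet(name):
-    print("hello " + name)
+    print("Hello, " + name)
```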
| diff --git a/pipenv/patched/pew/pew.py b/pipenv/patched/pew/pew.py
--- a/pipenv/patched/pew/pew.py
+++ b/pipenv/patched/pew/pew.py
@@ -184,7 +184,8 @@
def fork_cmder(env, cwd):
shell_cmd = ['cmd']
- cmderrc_path = r'%CMDER_ROOT%\vendor\init.bat'
+ escaped_cmder_root = os.environ['CMDER_ROOT'].replace(' ', '^ ')
+ cmderrc_path = r'{0}\vendor\init.bat'.format(escaped_cmder_root)
if expandpath(cmderrc_path).exists():
shell_cmd += ['/k', cmderrc_path]
if cwd:
| {"golden_diff": "diff --git a/pipenv/patched/pew/pew.py b/pipenv/patched/pew/pew.py\n--- a/pipenv/patched/pew/pew.py\n+++ b/pipenv/patched/pew/pew.py\n@@ -184,7 +184,8 @@\n \n def fork_cmder(env, cwd):\n shell_cmd = ['cmd']\n- cmderrc_path = r'%CMDER_ROOT%\\vendor\\init.bat'\n+ escaped_cmder_root = os.environ['CMDER_ROOT'].replace(' ', '^ ')\n+ cmderrc_path = r'{0}\\vendor\\init.bat'.format(escaped_cmder_root)\n if expandpath(cmderrc_path).exists():\n shell_cmd += ['/k', cmderrc_path]\n if cwd:\n", "issue": "Pipenv Shell raises error if shell's path contains space\nThis might be a windows only issue - Pipenv version 11.10.1\r\nI am using cmder console\r\n\r\n##### Steps to replicate\r\n1. `set PIPENV_SHELL=\"c:\\path_with space\\enclosed\\inquotes\\shell\"\r\n2. `pipenv shell`\r\n\r\n##### Expected result\r\nPipenv Shell is activated with the desired shell\r\n\r\n##### Actual result\r\nPipenv Shell is activated with the default shell (cmd.exe)\r\n> 'c:\\path_with' is not regognized as an internal or external command\r\n\r\nI use a portable cmder that is in a folder location that has spaces in its path.\r\nTo use Pipenv with cmder, I need to set `PIPENV_SHELL` to cmder's `init.bat` file\r\nThis files if there is a space in the folder path.\r\n\r\n#### Work Around\r\nMoving cmder's `init.bat` to a location with no spaces fixes it\r\n\r\n\r\n\r\n\nPipenv Shell raises error if shell's path contains space\nThis might be a windows only issue - Pipenv version 11.10.1\r\nI am using cmder console\r\n\r\n##### Steps to replicate\r\n1. `set PIPENV_SHELL=\"c:\\path_with space\\enclosed\\inquotes\\shell\"\r\n2. `pipenv shell`\r\n\r\n##### Expected result\r\nPipenv Shell is activated with the desired shell\r\n\r\n##### Actual result\r\nPipenv Shell is activated with the default shell (cmd.exe)\r\n> 'c:\\path_with' is not regognized as an internal or external command\r\n\r\nI use a portable cmder that is in a folder location that has spaces in its path.\r\nTo use Pipenv with cmder, I need to set `PIPENV_SHELL` to cmder's `init.bat` file\r\nThis files if there is a space in the folder path.\r\n\r\n#### Work Around\r\nMoving cmder's `init.bat` to a location with no spaces fixes it\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import print_function, absolute_import, unicode_literals\n\nimport os\nimport sys\nimport argparse\nimport shutil\nimport random\nimport textwrap\nfrom functools import partial\nfrom subprocess import CalledProcessError\ntry:\n from pathlib import Path\nexcept ImportError:\n from pipenv.vendor.pathlib2 import Path\n\ntry:\n from shutil import get_terminal_size\nexcept ImportError:\n from pipenv.vendor.backports.shutil_get_terminal_size import get_terminal_size\n\nwindows = sys.platform == 'win32'\n\nfrom clonevirtualenv import clone_virtualenv\nif not windows:\n try:\n # Try importing these packages if avaiable\n from pythonz.commands.install import InstallCommand\n from pythonz.commands.uninstall import UninstallCommand\n from pythonz.installer.pythoninstaller import PythonInstaller, AlreadyInstalledError\n from pythonz.commands.list import ListCommand as ListPythons\n from pythonz.define import PATH_PYTHONS\n from pythonz.commands.locate import LocateCommand as LocatePython\n except:\n # create mock commands\n InstallCommand = ListPythons = LocatePython = UninstallCommand = \\\n lambda : sys.exit('You need to install the pythonz extra. 
pip install pew[pythonz]')\nelse:\n # Pythonz does not support windows\n InstallCommand = ListPythons = LocatePython = UninstallCommand = \\\n lambda : sys.exit('Command not supported on this platform')\n\n from ._win_utils import get_shell\n\nfrom pew._utils import (check_call, invoke, expandpath, own, env_bin_dir,\n check_path, temp_environ, NamedTemporaryFile, to_unicode)\nfrom pew._print_utils import print_virtualenvs\n\nif sys.version_info[0] == 2:\n input = raw_input\n\nerr = partial(print, file=sys.stderr)\n\nif windows:\n default_home = '~/.virtualenvs'\nelse:\n default_home = os.path.join(\n os.environ.get('XDG_DATA_HOME', '~/.local/share'), 'virtualenvs')\n\ndef get_workon_home():\n return expandpath(os.environ.get('WORKON_HOME', default_home))\n\n\ndef makedirs_and_symlink_if_needed(workon_home):\n if not workon_home.exists() and own(workon_home):\n workon_home.mkdir(parents=True)\n link = expandpath('~/.virtualenvs')\n if os.name == 'posix' and 'WORKON_HOME' not in os.environ and \\\n 'XDG_DATA_HOME' not in os.environ and not link.exists():\n link.symlink_to(str(workon_home))\n return True\n else:\n return False\n\npew_site = Path(__file__).parent\n\ndef supported_shell():\n shell = Path(os.environ.get('SHELL', '')).stem\n if shell in ('bash', 'zsh', 'fish'):\n return shell\n\n\ndef shell_config_cmd(argv):\n \"Prints the path for the current $SHELL helper file\"\n shell = supported_shell()\n if shell:\n print(pew_site / 'shell_config' / ('init.' + shell))\n else:\n err('Completions and prompts are unavailable for %s' %\n repr(os.environ.get('SHELL', '')))\n\n\ndef deploy_completions():\n completions = {'complete.bash': Path('/etc/bash_completion.d/pew'),\n 'complete.zsh': Path('/usr/local/share/zsh/site-functions/_pew'),\n 'complete.fish': Path('/etc/fish/completions/pew.fish')}\n for comp, dest in completions.items():\n if not dest.parent.exists():\n dest.parent.mkdir(parents=True)\n shutil.copy(str(pew_site / 'shell_config' / comp), str(dest))\n\n\ndef get_project_dir(env):\n project_file = get_workon_home() / env / '.project'\n if project_file.exists():\n with project_file.open() as f:\n project_dir = f.readline().strip()\n if os.path.exists(project_dir):\n return project_dir\n else:\n err('Corrupted or outdated:', project_file, '\\nDirectory',\n project_dir, \"doesn't exist.\")\n\n\ndef unsetenv(key):\n if key in os.environ:\n del os.environ[key]\n\n\ndef compute_path(env):\n envdir = get_workon_home() / env\n return os.pathsep.join([\n str(envdir / env_bin_dir),\n os.environ['PATH'],\n ])\n\n\ndef inve(env, command, *args, **kwargs):\n \"\"\"Run a command in the given virtual environment.\n\n Pass additional keyword arguments to ``subprocess.check_call()``.\"\"\"\n # we don't strictly need to restore the environment, since pew runs in\n # its own process, but it feels like the right thing to do\n with temp_environ():\n os.environ['VIRTUAL_ENV'] = str(get_workon_home() / env)\n os.environ['PATH'] = compute_path(env)\n\n unsetenv('PYTHONHOME')\n unsetenv('__PYVENV_LAUNCHER__')\n\n try:\n return check_call([command] + list(args), shell=windows, **kwargs)\n # need to have shell=True on windows, otherwise the PYTHONPATH\n # won't inherit the PATH\n except OSError as e:\n if e.errno == 2:\n err('Unable to find', command)\n else:\n raise\n\n\ndef fork_shell(env, shellcmd, cwd):\n or_ctrld = '' if windows else \"or 'Ctrl+D' \"\n err(\"Launching subshell in virtual environment. 
Type 'exit' \", or_ctrld,\n \"to return.\", sep='')\n if 'VIRTUAL_ENV' in os.environ:\n err(\"Be aware that this environment will be nested on top \"\n \"of '%s'\" % Path(os.environ['VIRTUAL_ENV']).name)\n try:\n inve(env, *shellcmd, cwd=cwd)\n except CalledProcessError:\n # These shells report errors when the last command executed in the\n # subshell in an error. This causes the subprocess to fail, which is\n # not what we want. Stay silent for them, there's nothing we can do.\n shell_name, _ = os.path.splitext(os.path.basename(shellcmd[0]))\n suppress_error = shell_name.lower() in ('cmd', 'powershell', 'pwsh')\n if not suppress_error:\n raise\n\n\ndef fork_bash(env, cwd):\n # bash is a special little snowflake, and prevent_path_errors cannot work there\n # https://github.com/berdario/pew/issues/58#issuecomment-102182346\n bashrcpath = expandpath('~/.bashrc')\n if bashrcpath.exists():\n with NamedTemporaryFile('w+') as rcfile:\n with bashrcpath.open() as bashrc:\n rcfile.write(bashrc.read())\n rcfile.write('\\nexport PATH=\"' + to_unicode(compute_path(env)) + '\"')\n rcfile.flush()\n fork_shell(env, ['bash', '--rcfile', rcfile.name], cwd)\n else:\n fork_shell(env, ['bash'], cwd)\n\n\ndef fork_cmder(env, cwd):\n shell_cmd = ['cmd']\n cmderrc_path = r'%CMDER_ROOT%\\vendor\\init.bat'\n if expandpath(cmderrc_path).exists():\n shell_cmd += ['/k', cmderrc_path]\n if cwd:\n os.environ['CMDER_START'] = cwd\n fork_shell(env, shell_cmd, cwd)\n\ndef _detect_shell():\n shell = os.environ.get('SHELL', None)\n if not shell:\n if 'CMDER_ROOT' in os.environ:\n shell = 'Cmder'\n elif windows:\n shell = get_shell(os.getpid())\n else:\n shell = 'sh'\n return shell\n\ndef shell(env, cwd=None):\n env = str(env)\n shell = _detect_shell()\n shell_name = Path(shell).stem\n if shell_name not in ('Cmder', 'bash', 'elvish', 'powershell', 'pwsh', 'klingon', 'cmd'):\n # On Windows the PATH is usually set with System Utility\n # so we won't worry about trying to check mistakes there\n shell_check = (sys.executable + ' -c \"from pipenv.patched.pew.pew import '\n 'prevent_path_errors; prevent_path_errors()\"')\n try:\n inve(env, shell, '-c', shell_check)\n except CalledProcessError:\n return\n if shell_name in ('Cmder', 'cmd'):\n os.environ['PROMPT'] = '({0}) {1}'.format(env, os.environ['PROMPT'])\n if shell_name == 'bash':\n fork_bash(env, cwd)\n elif shell_name == 'Cmder':\n fork_cmder(env, cwd)\n else:\n fork_shell(env, [shell], cwd)\n\n\ndef mkvirtualenv(envname, python=None, packages=[], project=None,\n requirements=None, rest=[]):\n\n if python:\n rest = [\"--python=%s\" % python] + rest\n\n path = (get_workon_home() / envname).absolute()\n\n try:\n check_call([sys.executable, \"-m\", \"virtualenv\", str(path)] + rest)\n except (CalledProcessError, KeyboardInterrupt):\n rmvirtualenvs([envname])\n raise\n else:\n if project:\n setvirtualenvproject(envname, project.absolute())\n if requirements:\n inve(envname, 'pip', 'install', '-r', str(expandpath(requirements)))\n if packages:\n inve(envname, 'pip', 'install', *packages)\n\n\ndef mkvirtualenv_argparser():\n parser = argparse.ArgumentParser()\n parser.add_argument('-p', '--python')\n parser.add_argument('-i', action='append', dest='packages', help='Install \\\na package after the environment is created. 
This option may be repeated.')\n parser.add_argument('-r', dest='requirements', help='Provide a pip \\\nrequirements file to install a base set of packages into the new environment.')\n parser.add_argument('-d', '--dont-activate', action='store_false',\n default=True, dest='activate', help=\"After \\\n creation, continue with the existing shell (don't \\\n activate the new environment).\")\n return parser\n\n\ndef new_cmd(argv):\n \"\"\"Create a new environment, in $WORKON_HOME.\"\"\"\n parser = mkvirtualenv_argparser()\n parser.add_argument('-a', dest='project', help='Provide a full path to a \\\nproject directory to associate with the new environment.')\n\n parser.add_argument('envname')\n args, rest = parser.parse_known_args(argv)\n project = expandpath(args.project) if args.project else None\n\n mkvirtualenv(args.envname, args.python, args.packages, project,\n args.requirements, rest)\n if args.activate:\n shell(args.envname)\n\n\ndef rmvirtualenvs(envs):\n error_happened = False\n for env in envs:\n env = get_workon_home() / env\n if os.environ.get('VIRTUAL_ENV') == str(env):\n err(\"ERROR: You cannot remove the active environment (%s).\" % env)\n error_happened = True\n break\n try:\n shutil.rmtree(str(env))\n except OSError as e:\n err(\"Error while trying to remove the {0} env: \\n{1}\".format\n (env, e.strerror))\n error_happened = True\n return error_happened\n\n\n\ndef rm_cmd(argv):\n \"\"\"Remove one or more environment, from $WORKON_HOME.\"\"\"\n if len(argv) < 1:\n sys.exit(\"Please specify an environment\")\n return rmvirtualenvs(argv)\n\n\ndef packages(site_packages):\n nodes = site_packages.iterdir()\n return set([x.stem.split('-')[0] for x in nodes]) - set(['__pycache__'])\n\n\ndef showvirtualenv(env):\n columns, _ = get_terminal_size()\n pkgs = sorted(packages(sitepackages_dir(env)))\n env_python = get_workon_home() / env / env_bin_dir / 'python'\n l = len(env) + 2\n version = invoke(str(env_python), '-V')\n version = ' - '.join((version.out + version.err).splitlines())\n print(env, ': ', version, sep='')\n print(textwrap.fill(' '.join(pkgs),\n width=columns-l,\n initial_indent=(l * ' '),\n subsequent_indent=(l * ' ')), '\\n')\n\n\ndef show_cmd(argv):\n try:\n showvirtualenv(argv[0])\n except IndexError:\n if 'VIRTUAL_ENV' in os.environ:\n showvirtualenv(Path(os.environ['VIRTUAL_ENV']).name)\n else:\n sys.exit('pew show [env]')\n\n\ndef lsenvs():\n items = get_workon_home().glob(os.path.join('*', env_bin_dir, 'python*'))\n return sorted(set(env.parts[-3] for env in items))\n\n\ndef lsvirtualenv(verbose):\n envs = lsenvs()\n\n if not verbose:\n print_virtualenvs(*envs)\n else:\n for env in envs:\n showvirtualenv(env)\n\n\ndef ls_cmd(argv):\n \"\"\"List available environments.\"\"\"\n parser = argparse.ArgumentParser()\n p_group = parser.add_mutually_exclusive_group()\n p_group.add_argument('-b', '--brief', action='store_false')\n p_group.add_argument('-l', '--long', action='store_true')\n args = parser.parse_args(argv)\n lsvirtualenv(args.long)\n\ndef parse_envname(argv, no_arg_callback):\n if len(argv) < 1:\n no_arg_callback()\n\n env = argv[0]\n if env.startswith('/'):\n sys.exit(\"ERROR: Invalid environment name '{0}'.\".format(env))\n if not (get_workon_home() / env).exists():\n sys.exit(\"ERROR: Environment '{0}' does not exist. 
Create it with \\\n'pew new {0}'.\".format(env))\n else:\n return env\n\ndef workon_cmd(argv):\n \"\"\"List or change working virtual environments.\"\"\"\n\n def list_and_exit():\n lsvirtualenv(False)\n sys.exit(0)\n\n env = parse_envname(argv, list_and_exit)\n\n # Check if the virtualenv has an associated project directory and in\n # this case, use it as the current working directory.\n project_dir = get_project_dir(env) or os.getcwd()\n shell(env, cwd=project_dir)\n\n\ndef sitepackages_dir(env=os.environ.get('VIRTUAL_ENV')):\n if not env:\n sys.exit('ERROR: no virtualenv active')\n else:\n env_python = get_workon_home() / env / env_bin_dir / 'python'\n return Path(invoke(str(env_python), '-c', 'import distutils; \\\nprint(distutils.sysconfig.get_python_lib())').out)\n\n\ndef add_cmd(argv):\n \"\"\"Add the specified directories to the Python path for the currently active virtualenv.\n\nThis will be done by placing the directory names in a path file named\n\"virtualenv_path_extensions.pth\" inside the virtualenv's site-packages\ndirectory; if this file does not exists, it will be created first.\n\n\"\"\"\n parser = argparse.ArgumentParser()\n parser.add_argument('-d', dest='remove', action='store_true')\n parser.add_argument('dirs', nargs='+')\n args = parser.parse_args(argv)\n\n extra_paths = sitepackages_dir() / '_virtualenv_path_extensions.pth'\n new_paths = [os.path.abspath(d) + \"\\n\" for d in args.dirs]\n if not extra_paths.exists():\n with extra_paths.open('w') as extra:\n extra.write('''import sys; sys.__plen = len(sys.path)\nimport sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)\n ''')\n\n def rewrite(f):\n with extra_paths.open('r+') as extra:\n to_write = f(extra.readlines())\n extra.seek(0)\n extra.truncate()\n extra.writelines(to_write)\n\n if args.remove:\n rewrite(lambda ls: [line for line in ls if line not in new_paths])\n else:\n rewrite(lambda lines: lines[0:1] + new_paths + lines[1:])\n\n\ndef sitepackages_dir_cmd(argv):\n print(sitepackages_dir())\n\n\ndef lssitepackages_cmd(argv):\n \"\"\"Show the content of the site-packages directory of the current virtualenv.\"\"\"\n site = sitepackages_dir()\n print(*sorted(site.iterdir()), sep=os.linesep)\n extra_paths = site / '_virtualenv_path_extensions.pth'\n if extra_paths.exists():\n print('from _virtualenv_path_extensions.pth:')\n with extra_paths.open() as extra:\n print(''.join(extra.readlines()))\n\n\ndef toggleglobalsitepackages_cmd(argv):\n \"\"\"Toggle the current virtualenv between having and not having access to the global site-packages.\"\"\"\n quiet = argv == ['-q']\n site = sitepackages_dir()\n ngsp_file = site.parent / 'no-global-site-packages.txt'\n if ngsp_file.exists():\n ngsp_file.unlink()\n if not quiet:\n print('Enabled global site-packages')\n else:\n with ngsp_file.open('w'):\n if not quiet:\n print('Disabled global site-packages')\n\n\ndef cp_cmd(argv):\n \"\"\"Duplicate the named virtualenv to make a new one.\"\"\"\n parser = argparse.ArgumentParser()\n parser.add_argument('source')\n parser.add_argument('target', nargs='?')\n parser.add_argument('-d', '--dont-activate', action='store_false',\n default=True, dest='activate', help=\"After \\\n creation, continue with the existing shell (don't \\\n activate the new environment).\")\n\n args = parser.parse_args(argv)\n target_name = copy_virtualenv_project(args.source, args.target)\n if args.activate:\n shell(target_name)\n\n\ndef copy_virtualenv_project(source, 
target):\n source = expandpath(source)\n workon_home = get_workon_home()\n if not source.exists():\n source = workon_home / source\n if not source.exists():\n sys.exit('Please provide a valid virtualenv to copy')\n\n target_name = target or source.name\n\n target = workon_home / target_name\n\n if target.exists():\n sys.exit('%s virtualenv already exists in %s.' % (\n target_name, workon_home\n ))\n\n print('Copying {0} in {1}'.format(source, target_name))\n clone_virtualenv(str(source), str(target))\n return target_name\n\n\ndef rename_cmd(argv):\n \"\"\"Rename a virtualenv\"\"\"\n parser = argparse.ArgumentParser()\n parser.add_argument('source')\n parser.add_argument('target')\n pargs = parser.parse_args(argv)\n copy_virtualenv_project(pargs.source, pargs.target)\n return rmvirtualenvs([pargs.source])\n\n\ndef setvirtualenvproject(env, project):\n print('Setting project for {0} to {1}'.format(env, project))\n with (get_workon_home() / env / '.project').open('wb') as prj:\n prj.write(str(project).encode())\n\n\ndef setproject_cmd(argv):\n \"\"\"Given a virtualenv directory and a project directory, set the\n virtualenv up to be associated with the project.\"\"\"\n args = dict(enumerate(argv))\n project = os.path.abspath(args.get(1, '.'))\n env = args.get(0, os.environ.get('VIRTUAL_ENV'))\n if not env:\n sys.exit('pew setproject [virtualenv] [project_path]')\n if not (get_workon_home() / env).exists():\n sys.exit(\"Environment '%s' doesn't exist.\" % env)\n if not os.path.isdir(project):\n sys.exit('pew setproject: %s does not exist' % project)\n setvirtualenvproject(env, project)\n\n\ndef mkproject_cmd(argv):\n \"\"\"Create a new project directory and its associated virtualenv.\"\"\"\n if '-l' in argv or '--list' in argv:\n templates = [t.name[9:] for t in get_workon_home().glob(\"template_*\")]\n print(\"Available project templates:\", *templates, sep='\\n')\n return\n\n parser = mkvirtualenv_argparser()\n parser.add_argument('envname')\n parser.add_argument(\n '-t', action='append', default=[], dest='templates', help='Multiple \\\ntemplates may be selected. They are applied in the order specified on the \\\ncommand line.')\n parser.add_argument(\n '-l', '--list', action='store_true', help='List available templates.')\n\n args, rest = parser.parse_known_args(argv)\n\n projects_home = Path(os.environ.get('PROJECT_HOME', '.'))\n if not projects_home.exists():\n sys.exit('ERROR: Projects directory %s does not exist. \\\nCreate it or set PROJECT_HOME to an existing directory.' % projects_home)\n\n project = (projects_home / args.envname).absolute()\n if project.exists():\n sys.exit('Project %s already exists.' % args.envname)\n\n mkvirtualenv(args.envname, args.python, args.packages, project.absolute(),\n args.requirements, rest)\n\n project.mkdir()\n\n for template_name in args.templates:\n template = get_workon_home() / (\"template_\" + template_name)\n inve(args.envname, str(template), args.envname, str(project))\n if args.activate:\n shell(args.envname, cwd=str(project))\n\n\ndef mktmpenv_cmd(argv):\n \"\"\"Create a temporary virtualenv.\"\"\"\n parser = mkvirtualenv_argparser()\n env = '.'\n while (get_workon_home() / env).exists():\n env = hex(random.getrandbits(64))[2:-1]\n\n args, rest = parser.parse_known_args(argv)\n\n mkvirtualenv(env, args.python, args.packages, requirements=args.requirements,\n rest=rest)\n print('This is a temporary environment. 
It will be deleted when you exit')\n try:\n if args.activate:\n # only used for testing on windows\n shell(env)\n finally:\n return rmvirtualenvs([env])\n\n\ndef wipeenv_cmd(argv):\n \"\"\"Remove all installed packages from the current (or supplied) env.\"\"\"\n env = argv[0] if argv else os.environ.get('VIRTUAL_ENV')\n\n if not env:\n sys.exit('ERROR: no virtualenv active')\n elif not (get_workon_home() / env).exists():\n sys.exit(\"ERROR: Environment '{0}' does not exist.\".format(env))\n else:\n env_pip = str(get_workon_home() / env / env_bin_dir / 'pip')\n all_pkgs = set(invoke(env_pip, 'freeze').out.splitlines())\n pkgs = set(p for p in all_pkgs if len(p.split(\"==\")) == 2)\n ignored = sorted(all_pkgs - pkgs)\n pkgs = set(p.split(\"==\")[0] for p in pkgs)\n to_remove = sorted(pkgs - set(['distribute', 'wsgiref']))\n if to_remove:\n print(\"Ignoring:\\n %s\" % \"\\n \".join(ignored))\n print(\"Uninstalling packages:\\n %s\" % \"\\n \".join(to_remove))\n inve(env, 'pip', 'uninstall', '-y', *to_remove)\n else:\n print(\"Nothing to remove\")\n\n\ndef inall_cmd(argv):\n \"\"\"Run a command in each virtualenv.\"\"\"\n envs = lsenvs()\n errors = False\n for env in envs:\n print(\"\\n%s:\" % env)\n try:\n inve(env, *argv)\n except CalledProcessError as e:\n errors = True\n err(e)\n sys.exit(errors)\n\n\ndef in_cmd(argv):\n \"\"\"Run a command in the given virtualenv.\"\"\"\n\n if len(argv) == 1:\n return workon_cmd(argv)\n\n parse_envname(argv, lambda : sys.exit('You must provide a valid virtualenv to target'))\n\n inve(*argv)\n\n\ndef restore_cmd(argv):\n \"\"\"Try to restore a broken virtualenv by reinstalling the same python version on top of it\"\"\"\n\n if len(argv) < 1:\n sys.exit('You must provide a valid virtualenv to target')\n\n env = argv[0]\n path = get_workon_home() / env\n path = workon_home / env\n py = path / env_bin_dir / ('python.exe' if windows else 'python')\n exact_py = py.resolve().name\n\n check_call([sys.executable, \"-m\", \"virtualenv\", str(path.absolute()), \"--python=%s\" % exact_py])\n\n\ndef dir_cmd(argv):\n \"\"\"Print the path for the virtualenv directory\"\"\"\n env = parse_envname(argv, lambda : sys.exit('You must provide a valid virtualenv to target'))\n print(get_workon_home() / env)\n\n\ndef install_cmd(argv):\n '''Use Pythonz to download and build the specified Python version'''\n installer = InstallCommand()\n options, versions = installer.parser.parse_args(argv)\n if len(versions) != 1:\n installer.parser.print_help()\n sys.exit(1)\n else:\n try:\n actual_installer = PythonInstaller.get_installer(versions[0], options)\n actual_installer.install()\n except AlreadyInstalledError as e:\n print(e)\n\n\ndef uninstall_cmd(argv):\n '''Use Pythonz to uninstall the specified Python version'''\n UninstallCommand().run(argv)\n\n\ndef list_pythons_cmd(argv):\n '''List the pythons installed by Pythonz (or all the installable ones)'''\n try:\n Path(PATH_PYTHONS).mkdir(parents=True)\n except OSError:\n pass\n ListPythons().run(argv)\n\n\ndef locate_python_cmd(argv):\n '''Locate the path for the python version installed by Pythonz'''\n LocatePython().run(argv)\n\n\ndef version_cmd(argv):\n \"\"\"Prints current pew version\"\"\"\n import pkg_resources\n\n try:\n __version__ = pkg_resources.get_distribution('pew').version\n except pkg_resources.DistributionNotFound:\n __version__ = 'unknown'\n print('Setuptools has some issues here, failed to get our own package.', file=sys.stderr)\n\n print(__version__)\n\n\ndef prevent_path_errors():\n if 'VIRTUAL_ENV' in os.environ 
and not check_path():\n sys.exit('''ERROR: The virtualenv hasn't been activated correctly.\nEither the env is corrupted (try running `pew restore env`),\nOr an upgrade of your Python version broke your env,\nOr check the contents of your $PATH. You might be adding new directories to it\nfrom inside your shell's configuration file.\nIn this case, for further details please see: https://github.com/berdario/pew#the-environment-doesnt-seem-to-be-activated''')\n\n\ndef first_run_setup():\n shell = supported_shell()\n if shell:\n if shell == 'fish':\n source_cmd = 'source (pew shell_config)'\n else:\n source_cmd = 'source $(pew shell_config)'\n rcpath = expandpath({'bash': '~/.bashrc'\n , 'zsh': '~/.zshrc'\n , 'fish': '~/.config/fish/config.fish'}[shell])\n if rcpath.exists():\n update_config_file(rcpath, source_cmd)\n else:\n print(\"It seems that you're running pew for the first time\\n\"\n \"If you want source shell competions and update your prompt, \"\n \"Add the following line to your shell config file:\\n %s\" % source_cmd)\n print('\\nWill now continue with the command:', *sys.argv[1:])\n input('[enter]')\n\ndef update_config_file(rcpath, source_cmd):\n with rcpath.open('r+') as rcfile:\n if source_cmd not in (line.strip() for line in rcfile.readlines()):\n choice = 'X'\n while choice not in ('y', '', 'n'):\n choice = input(\"It seems that you're running pew for the first time\\n\"\n \"do you want to modify %s to source completions and\"\n \" update your prompt? [y/N]\\n> \" % rcpath).lower()\n if choice == 'y':\n rcfile.write('\\n# added by Pew\\n%s\\n' % source_cmd)\n print('Done')\n else:\n print('\\nOk, if you want to do it manually, just add\\n %s\\nat'\n ' the end of %s' % (source_cmd, rcpath))\n\n\ndef print_commands(cmds):\n longest = max(map(len, cmds)) + 3\n columns, _ = get_terminal_size()\n\n print('Available commands:\\n')\n for cmd, fun in sorted(cmds.items()):\n if fun.__doc__:\n print(textwrap.fill(\n fun.__doc__.splitlines()[0],\n columns or 1000,\n initial_indent=(' {0}: '.format(cmd)).ljust(longest),\n subsequent_indent=longest * ' '))\n else:\n print(' ' + cmd)\n\n\ndef pew():\n first_run = makedirs_and_symlink_if_needed(get_workon_home())\n if first_run and sys.stdin.isatty():\n first_run_setup()\n\n cmds = dict((cmd[:-4], fun)\n for cmd, fun in globals().items() if cmd.endswith('_cmd'))\n if sys.argv[1:]:\n if sys.argv[1] in cmds:\n command = cmds[sys.argv[1]]\n try:\n return command(sys.argv[2:])\n except CalledProcessError as e:\n return e.returncode\n except KeyboardInterrupt:\n pass\n else:\n err(\"ERROR: command\", sys.argv[1], \"does not exist.\")\n print_commands(cmds)\n sys.exit(1)\n else:\n print_commands(cmds)\n", "path": "pipenv/patched/pew/pew.py"}], "after_files": [{"content": "from __future__ import print_function, absolute_import, unicode_literals\n\nimport os\nimport sys\nimport argparse\nimport shutil\nimport random\nimport textwrap\nfrom functools import partial\nfrom subprocess import CalledProcessError\ntry:\n from pathlib import Path\nexcept ImportError:\n from pipenv.vendor.pathlib2 import Path\n\ntry:\n from shutil import get_terminal_size\nexcept ImportError:\n from pipenv.vendor.backports.shutil_get_terminal_size import get_terminal_size\n\nwindows = sys.platform == 'win32'\n\nfrom clonevirtualenv import clone_virtualenv\nif not windows:\n try:\n # Try importing these packages if avaiable\n from pythonz.commands.install import InstallCommand\n from pythonz.commands.uninstall import UninstallCommand\n from pythonz.installer.pythoninstaller 
import PythonInstaller, AlreadyInstalledError\n from pythonz.commands.list import ListCommand as ListPythons\n from pythonz.define import PATH_PYTHONS\n from pythonz.commands.locate import LocateCommand as LocatePython\n except:\n # create mock commands\n InstallCommand = ListPythons = LocatePython = UninstallCommand = \\\n lambda : sys.exit('You need to install the pythonz extra. pip install pew[pythonz]')\nelse:\n # Pythonz does not support windows\n InstallCommand = ListPythons = LocatePython = UninstallCommand = \\\n lambda : sys.exit('Command not supported on this platform')\n\n from ._win_utils import get_shell\n\nfrom pew._utils import (check_call, invoke, expandpath, own, env_bin_dir,\n check_path, temp_environ, NamedTemporaryFile, to_unicode)\nfrom pew._print_utils import print_virtualenvs\n\nif sys.version_info[0] == 2:\n input = raw_input\n\nerr = partial(print, file=sys.stderr)\n\nif windows:\n default_home = '~/.virtualenvs'\nelse:\n default_home = os.path.join(\n os.environ.get('XDG_DATA_HOME', '~/.local/share'), 'virtualenvs')\n\ndef get_workon_home():\n return expandpath(os.environ.get('WORKON_HOME', default_home))\n\n\ndef makedirs_and_symlink_if_needed(workon_home):\n if not workon_home.exists() and own(workon_home):\n workon_home.mkdir(parents=True)\n link = expandpath('~/.virtualenvs')\n if os.name == 'posix' and 'WORKON_HOME' not in os.environ and \\\n 'XDG_DATA_HOME' not in os.environ and not link.exists():\n link.symlink_to(str(workon_home))\n return True\n else:\n return False\n\npew_site = Path(__file__).parent\n\ndef supported_shell():\n shell = Path(os.environ.get('SHELL', '')).stem\n if shell in ('bash', 'zsh', 'fish'):\n return shell\n\n\ndef shell_config_cmd(argv):\n \"Prints the path for the current $SHELL helper file\"\n shell = supported_shell()\n if shell:\n print(pew_site / 'shell_config' / ('init.' 
+ shell))\n else:\n err('Completions and prompts are unavailable for %s' %\n repr(os.environ.get('SHELL', '')))\n\n\ndef deploy_completions():\n completions = {'complete.bash': Path('/etc/bash_completion.d/pew'),\n 'complete.zsh': Path('/usr/local/share/zsh/site-functions/_pew'),\n 'complete.fish': Path('/etc/fish/completions/pew.fish')}\n for comp, dest in completions.items():\n if not dest.parent.exists():\n dest.parent.mkdir(parents=True)\n shutil.copy(str(pew_site / 'shell_config' / comp), str(dest))\n\n\ndef get_project_dir(env):\n project_file = get_workon_home() / env / '.project'\n if project_file.exists():\n with project_file.open() as f:\n project_dir = f.readline().strip()\n if os.path.exists(project_dir):\n return project_dir\n else:\n err('Corrupted or outdated:', project_file, '\\nDirectory',\n project_dir, \"doesn't exist.\")\n\n\ndef unsetenv(key):\n if key in os.environ:\n del os.environ[key]\n\n\ndef compute_path(env):\n envdir = get_workon_home() / env\n return os.pathsep.join([\n str(envdir / env_bin_dir),\n os.environ['PATH'],\n ])\n\n\ndef inve(env, command, *args, **kwargs):\n \"\"\"Run a command in the given virtual environment.\n\n Pass additional keyword arguments to ``subprocess.check_call()``.\"\"\"\n # we don't strictly need to restore the environment, since pew runs in\n # its own process, but it feels like the right thing to do\n with temp_environ():\n os.environ['VIRTUAL_ENV'] = str(get_workon_home() / env)\n os.environ['PATH'] = compute_path(env)\n\n unsetenv('PYTHONHOME')\n unsetenv('__PYVENV_LAUNCHER__')\n\n try:\n return check_call([command] + list(args), shell=windows, **kwargs)\n # need to have shell=True on windows, otherwise the PYTHONPATH\n # won't inherit the PATH\n except OSError as e:\n if e.errno == 2:\n err('Unable to find', command)\n else:\n raise\n\n\ndef fork_shell(env, shellcmd, cwd):\n or_ctrld = '' if windows else \"or 'Ctrl+D' \"\n err(\"Launching subshell in virtual environment. Type 'exit' \", or_ctrld,\n \"to return.\", sep='')\n if 'VIRTUAL_ENV' in os.environ:\n err(\"Be aware that this environment will be nested on top \"\n \"of '%s'\" % Path(os.environ['VIRTUAL_ENV']).name)\n try:\n inve(env, *shellcmd, cwd=cwd)\n except CalledProcessError:\n # These shells report errors when the last command executed in the\n # subshell in an error. This causes the subprocess to fail, which is\n # not what we want. 
Stay silent for them, there's nothing we can do.\n shell_name, _ = os.path.splitext(os.path.basename(shellcmd[0]))\n suppress_error = shell_name.lower() in ('cmd', 'powershell', 'pwsh')\n if not suppress_error:\n raise\n\n\ndef fork_bash(env, cwd):\n # bash is a special little snowflake, and prevent_path_errors cannot work there\n # https://github.com/berdario/pew/issues/58#issuecomment-102182346\n bashrcpath = expandpath('~/.bashrc')\n if bashrcpath.exists():\n with NamedTemporaryFile('w+') as rcfile:\n with bashrcpath.open() as bashrc:\n rcfile.write(bashrc.read())\n rcfile.write('\\nexport PATH=\"' + to_unicode(compute_path(env)) + '\"')\n rcfile.flush()\n fork_shell(env, ['bash', '--rcfile', rcfile.name], cwd)\n else:\n fork_shell(env, ['bash'], cwd)\n\n\ndef fork_cmder(env, cwd):\n shell_cmd = ['cmd']\n escaped_cmder_root = os.environ['CMDER_ROOT'].replace(' ', '^ ')\n cmderrc_path = r'{0}\\vendor\\init.bat'.format(escaped_cmder_root)\n if expandpath(cmderrc_path).exists():\n shell_cmd += ['/k', cmderrc_path]\n if cwd:\n os.environ['CMDER_START'] = cwd\n fork_shell(env, shell_cmd, cwd)\n\ndef _detect_shell():\n shell = os.environ.get('SHELL', None)\n if not shell:\n if 'CMDER_ROOT' in os.environ:\n shell = 'Cmder'\n elif windows:\n shell = get_shell(os.getpid())\n else:\n shell = 'sh'\n return shell\n\ndef shell(env, cwd=None):\n env = str(env)\n shell = _detect_shell()\n shell_name = Path(shell).stem\n if shell_name not in ('Cmder', 'bash', 'elvish', 'powershell', 'pwsh', 'klingon', 'cmd'):\n # On Windows the PATH is usually set with System Utility\n # so we won't worry about trying to check mistakes there\n shell_check = (sys.executable + ' -c \"from pipenv.patched.pew.pew import '\n 'prevent_path_errors; prevent_path_errors()\"')\n try:\n inve(env, shell, '-c', shell_check)\n except CalledProcessError:\n return\n if shell_name in ('Cmder', 'cmd'):\n os.environ['PROMPT'] = '({0}) {1}'.format(env, os.environ['PROMPT'])\n if shell_name == 'bash':\n fork_bash(env, cwd)\n elif shell_name == 'Cmder':\n fork_cmder(env, cwd)\n else:\n fork_shell(env, [shell], cwd)\n\n\ndef mkvirtualenv(envname, python=None, packages=[], project=None,\n requirements=None, rest=[]):\n\n if python:\n rest = [\"--python=%s\" % python] + rest\n\n path = (get_workon_home() / envname).absolute()\n\n try:\n check_call([sys.executable, \"-m\", \"virtualenv\", str(path)] + rest)\n except (CalledProcessError, KeyboardInterrupt):\n rmvirtualenvs([envname])\n raise\n else:\n if project:\n setvirtualenvproject(envname, project.absolute())\n if requirements:\n inve(envname, 'pip', 'install', '-r', str(expandpath(requirements)))\n if packages:\n inve(envname, 'pip', 'install', *packages)\n\n\ndef mkvirtualenv_argparser():\n parser = argparse.ArgumentParser()\n parser.add_argument('-p', '--python')\n parser.add_argument('-i', action='append', dest='packages', help='Install \\\na package after the environment is created. 
This option may be repeated.')\n parser.add_argument('-r', dest='requirements', help='Provide a pip \\\nrequirements file to install a base set of packages into the new environment.')\n parser.add_argument('-d', '--dont-activate', action='store_false',\n default=True, dest='activate', help=\"After \\\n creation, continue with the existing shell (don't \\\n activate the new environment).\")\n return parser\n\n\ndef new_cmd(argv):\n \"\"\"Create a new environment, in $WORKON_HOME.\"\"\"\n parser = mkvirtualenv_argparser()\n parser.add_argument('-a', dest='project', help='Provide a full path to a \\\nproject directory to associate with the new environment.')\n\n parser.add_argument('envname')\n args, rest = parser.parse_known_args(argv)\n project = expandpath(args.project) if args.project else None\n\n mkvirtualenv(args.envname, args.python, args.packages, project,\n args.requirements, rest)\n if args.activate:\n shell(args.envname)\n\n\ndef rmvirtualenvs(envs):\n error_happened = False\n for env in envs:\n env = get_workon_home() / env\n if os.environ.get('VIRTUAL_ENV') == str(env):\n err(\"ERROR: You cannot remove the active environment (%s).\" % env)\n error_happened = True\n break\n try:\n shutil.rmtree(str(env))\n except OSError as e:\n err(\"Error while trying to remove the {0} env: \\n{1}\".format\n (env, e.strerror))\n error_happened = True\n return error_happened\n\n\n\ndef rm_cmd(argv):\n \"\"\"Remove one or more environment, from $WORKON_HOME.\"\"\"\n if len(argv) < 1:\n sys.exit(\"Please specify an environment\")\n return rmvirtualenvs(argv)\n\n\ndef packages(site_packages):\n nodes = site_packages.iterdir()\n return set([x.stem.split('-')[0] for x in nodes]) - set(['__pycache__'])\n\n\ndef showvirtualenv(env):\n columns, _ = get_terminal_size()\n pkgs = sorted(packages(sitepackages_dir(env)))\n env_python = get_workon_home() / env / env_bin_dir / 'python'\n l = len(env) + 2\n version = invoke(str(env_python), '-V')\n version = ' - '.join((version.out + version.err).splitlines())\n print(env, ': ', version, sep='')\n print(textwrap.fill(' '.join(pkgs),\n width=columns-l,\n initial_indent=(l * ' '),\n subsequent_indent=(l * ' ')), '\\n')\n\n\ndef show_cmd(argv):\n try:\n showvirtualenv(argv[0])\n except IndexError:\n if 'VIRTUAL_ENV' in os.environ:\n showvirtualenv(Path(os.environ['VIRTUAL_ENV']).name)\n else:\n sys.exit('pew show [env]')\n\n\ndef lsenvs():\n items = get_workon_home().glob(os.path.join('*', env_bin_dir, 'python*'))\n return sorted(set(env.parts[-3] for env in items))\n\n\ndef lsvirtualenv(verbose):\n envs = lsenvs()\n\n if not verbose:\n print_virtualenvs(*envs)\n else:\n for env in envs:\n showvirtualenv(env)\n\n\ndef ls_cmd(argv):\n \"\"\"List available environments.\"\"\"\n parser = argparse.ArgumentParser()\n p_group = parser.add_mutually_exclusive_group()\n p_group.add_argument('-b', '--brief', action='store_false')\n p_group.add_argument('-l', '--long', action='store_true')\n args = parser.parse_args(argv)\n lsvirtualenv(args.long)\n\ndef parse_envname(argv, no_arg_callback):\n if len(argv) < 1:\n no_arg_callback()\n\n env = argv[0]\n if env.startswith('/'):\n sys.exit(\"ERROR: Invalid environment name '{0}'.\".format(env))\n if not (get_workon_home() / env).exists():\n sys.exit(\"ERROR: Environment '{0}' does not exist. 
Create it with \\\n'pew new {0}'.\".format(env))\n else:\n return env\n\ndef workon_cmd(argv):\n \"\"\"List or change working virtual environments.\"\"\"\n\n def list_and_exit():\n lsvirtualenv(False)\n sys.exit(0)\n\n env = parse_envname(argv, list_and_exit)\n\n # Check if the virtualenv has an associated project directory and in\n # this case, use it as the current working directory.\n project_dir = get_project_dir(env) or os.getcwd()\n shell(env, cwd=project_dir)\n\n\ndef sitepackages_dir(env=os.environ.get('VIRTUAL_ENV')):\n if not env:\n sys.exit('ERROR: no virtualenv active')\n else:\n env_python = get_workon_home() / env / env_bin_dir / 'python'\n return Path(invoke(str(env_python), '-c', 'import distutils; \\\nprint(distutils.sysconfig.get_python_lib())').out)\n\n\ndef add_cmd(argv):\n \"\"\"Add the specified directories to the Python path for the currently active virtualenv.\n\nThis will be done by placing the directory names in a path file named\n\"virtualenv_path_extensions.pth\" inside the virtualenv's site-packages\ndirectory; if this file does not exists, it will be created first.\n\n\"\"\"\n parser = argparse.ArgumentParser()\n parser.add_argument('-d', dest='remove', action='store_true')\n parser.add_argument('dirs', nargs='+')\n args = parser.parse_args(argv)\n\n extra_paths = sitepackages_dir() / '_virtualenv_path_extensions.pth'\n new_paths = [os.path.abspath(d) + \"\\n\" for d in args.dirs]\n if not extra_paths.exists():\n with extra_paths.open('w') as extra:\n extra.write('''import sys; sys.__plen = len(sys.path)\nimport sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)\n ''')\n\n def rewrite(f):\n with extra_paths.open('r+') as extra:\n to_write = f(extra.readlines())\n extra.seek(0)\n extra.truncate()\n extra.writelines(to_write)\n\n if args.remove:\n rewrite(lambda ls: [line for line in ls if line not in new_paths])\n else:\n rewrite(lambda lines: lines[0:1] + new_paths + lines[1:])\n\n\ndef sitepackages_dir_cmd(argv):\n print(sitepackages_dir())\n\n\ndef lssitepackages_cmd(argv):\n \"\"\"Show the content of the site-packages directory of the current virtualenv.\"\"\"\n site = sitepackages_dir()\n print(*sorted(site.iterdir()), sep=os.linesep)\n extra_paths = site / '_virtualenv_path_extensions.pth'\n if extra_paths.exists():\n print('from _virtualenv_path_extensions.pth:')\n with extra_paths.open() as extra:\n print(''.join(extra.readlines()))\n\n\ndef toggleglobalsitepackages_cmd(argv):\n \"\"\"Toggle the current virtualenv between having and not having access to the global site-packages.\"\"\"\n quiet = argv == ['-q']\n site = sitepackages_dir()\n ngsp_file = site.parent / 'no-global-site-packages.txt'\n if ngsp_file.exists():\n ngsp_file.unlink()\n if not quiet:\n print('Enabled global site-packages')\n else:\n with ngsp_file.open('w'):\n if not quiet:\n print('Disabled global site-packages')\n\n\ndef cp_cmd(argv):\n \"\"\"Duplicate the named virtualenv to make a new one.\"\"\"\n parser = argparse.ArgumentParser()\n parser.add_argument('source')\n parser.add_argument('target', nargs='?')\n parser.add_argument('-d', '--dont-activate', action='store_false',\n default=True, dest='activate', help=\"After \\\n creation, continue with the existing shell (don't \\\n activate the new environment).\")\n\n args = parser.parse_args(argv)\n target_name = copy_virtualenv_project(args.source, args.target)\n if args.activate:\n shell(target_name)\n\n\ndef copy_virtualenv_project(source, 
target):\n source = expandpath(source)\n workon_home = get_workon_home()\n if not source.exists():\n source = workon_home / source\n if not source.exists():\n sys.exit('Please provide a valid virtualenv to copy')\n\n target_name = target or source.name\n\n target = workon_home / target_name\n\n if target.exists():\n sys.exit('%s virtualenv already exists in %s.' % (\n target_name, workon_home\n ))\n\n print('Copying {0} in {1}'.format(source, target_name))\n clone_virtualenv(str(source), str(target))\n return target_name\n\n\ndef rename_cmd(argv):\n \"\"\"Rename a virtualenv\"\"\"\n parser = argparse.ArgumentParser()\n parser.add_argument('source')\n parser.add_argument('target')\n pargs = parser.parse_args(argv)\n copy_virtualenv_project(pargs.source, pargs.target)\n return rmvirtualenvs([pargs.source])\n\n\ndef setvirtualenvproject(env, project):\n print('Setting project for {0} to {1}'.format(env, project))\n with (get_workon_home() / env / '.project').open('wb') as prj:\n prj.write(str(project).encode())\n\n\ndef setproject_cmd(argv):\n \"\"\"Given a virtualenv directory and a project directory, set the\n virtualenv up to be associated with the project.\"\"\"\n args = dict(enumerate(argv))\n project = os.path.abspath(args.get(1, '.'))\n env = args.get(0, os.environ.get('VIRTUAL_ENV'))\n if not env:\n sys.exit('pew setproject [virtualenv] [project_path]')\n if not (get_workon_home() / env).exists():\n sys.exit(\"Environment '%s' doesn't exist.\" % env)\n if not os.path.isdir(project):\n sys.exit('pew setproject: %s does not exist' % project)\n setvirtualenvproject(env, project)\n\n\ndef mkproject_cmd(argv):\n \"\"\"Create a new project directory and its associated virtualenv.\"\"\"\n if '-l' in argv or '--list' in argv:\n templates = [t.name[9:] for t in get_workon_home().glob(\"template_*\")]\n print(\"Available project templates:\", *templates, sep='\\n')\n return\n\n parser = mkvirtualenv_argparser()\n parser.add_argument('envname')\n parser.add_argument(\n '-t', action='append', default=[], dest='templates', help='Multiple \\\ntemplates may be selected. They are applied in the order specified on the \\\ncommand line.')\n parser.add_argument(\n '-l', '--list', action='store_true', help='List available templates.')\n\n args, rest = parser.parse_known_args(argv)\n\n projects_home = Path(os.environ.get('PROJECT_HOME', '.'))\n if not projects_home.exists():\n sys.exit('ERROR: Projects directory %s does not exist. \\\nCreate it or set PROJECT_HOME to an existing directory.' % projects_home)\n\n project = (projects_home / args.envname).absolute()\n if project.exists():\n sys.exit('Project %s already exists.' % args.envname)\n\n mkvirtualenv(args.envname, args.python, args.packages, project.absolute(),\n args.requirements, rest)\n\n project.mkdir()\n\n for template_name in args.templates:\n template = get_workon_home() / (\"template_\" + template_name)\n inve(args.envname, str(template), args.envname, str(project))\n if args.activate:\n shell(args.envname, cwd=str(project))\n\n\ndef mktmpenv_cmd(argv):\n \"\"\"Create a temporary virtualenv.\"\"\"\n parser = mkvirtualenv_argparser()\n env = '.'\n while (get_workon_home() / env).exists():\n env = hex(random.getrandbits(64))[2:-1]\n\n args, rest = parser.parse_known_args(argv)\n\n mkvirtualenv(env, args.python, args.packages, requirements=args.requirements,\n rest=rest)\n print('This is a temporary environment. 
It will be deleted when you exit')\n try:\n if args.activate:\n # only used for testing on windows\n shell(env)\n finally:\n return rmvirtualenvs([env])\n\n\ndef wipeenv_cmd(argv):\n \"\"\"Remove all installed packages from the current (or supplied) env.\"\"\"\n env = argv[0] if argv else os.environ.get('VIRTUAL_ENV')\n\n if not env:\n sys.exit('ERROR: no virtualenv active')\n elif not (get_workon_home() / env).exists():\n sys.exit(\"ERROR: Environment '{0}' does not exist.\".format(env))\n else:\n env_pip = str(get_workon_home() / env / env_bin_dir / 'pip')\n all_pkgs = set(invoke(env_pip, 'freeze').out.splitlines())\n pkgs = set(p for p in all_pkgs if len(p.split(\"==\")) == 2)\n ignored = sorted(all_pkgs - pkgs)\n pkgs = set(p.split(\"==\")[0] for p in pkgs)\n to_remove = sorted(pkgs - set(['distribute', 'wsgiref']))\n if to_remove:\n print(\"Ignoring:\\n %s\" % \"\\n \".join(ignored))\n print(\"Uninstalling packages:\\n %s\" % \"\\n \".join(to_remove))\n inve(env, 'pip', 'uninstall', '-y', *to_remove)\n else:\n print(\"Nothing to remove\")\n\n\ndef inall_cmd(argv):\n \"\"\"Run a command in each virtualenv.\"\"\"\n envs = lsenvs()\n errors = False\n for env in envs:\n print(\"\\n%s:\" % env)\n try:\n inve(env, *argv)\n except CalledProcessError as e:\n errors = True\n err(e)\n sys.exit(errors)\n\n\ndef in_cmd(argv):\n \"\"\"Run a command in the given virtualenv.\"\"\"\n\n if len(argv) == 1:\n return workon_cmd(argv)\n\n parse_envname(argv, lambda : sys.exit('You must provide a valid virtualenv to target'))\n\n inve(*argv)\n\n\ndef restore_cmd(argv):\n \"\"\"Try to restore a broken virtualenv by reinstalling the same python version on top of it\"\"\"\n\n if len(argv) < 1:\n sys.exit('You must provide a valid virtualenv to target')\n\n env = argv[0]\n path = get_workon_home() / env\n path = workon_home / env\n py = path / env_bin_dir / ('python.exe' if windows else 'python')\n exact_py = py.resolve().name\n\n check_call([sys.executable, \"-m\", \"virtualenv\", str(path.absolute()), \"--python=%s\" % exact_py])\n\n\ndef dir_cmd(argv):\n \"\"\"Print the path for the virtualenv directory\"\"\"\n env = parse_envname(argv, lambda : sys.exit('You must provide a valid virtualenv to target'))\n print(get_workon_home() / env)\n\n\ndef install_cmd(argv):\n '''Use Pythonz to download and build the specified Python version'''\n installer = InstallCommand()\n options, versions = installer.parser.parse_args(argv)\n if len(versions) != 1:\n installer.parser.print_help()\n sys.exit(1)\n else:\n try:\n actual_installer = PythonInstaller.get_installer(versions[0], options)\n actual_installer.install()\n except AlreadyInstalledError as e:\n print(e)\n\n\ndef uninstall_cmd(argv):\n '''Use Pythonz to uninstall the specified Python version'''\n UninstallCommand().run(argv)\n\n\ndef list_pythons_cmd(argv):\n '''List the pythons installed by Pythonz (or all the installable ones)'''\n try:\n Path(PATH_PYTHONS).mkdir(parents=True)\n except OSError:\n pass\n ListPythons().run(argv)\n\n\ndef locate_python_cmd(argv):\n '''Locate the path for the python version installed by Pythonz'''\n LocatePython().run(argv)\n\n\ndef version_cmd(argv):\n \"\"\"Prints current pew version\"\"\"\n import pkg_resources\n\n try:\n __version__ = pkg_resources.get_distribution('pew').version\n except pkg_resources.DistributionNotFound:\n __version__ = 'unknown'\n print('Setuptools has some issues here, failed to get our own package.', file=sys.stderr)\n\n print(__version__)\n\n\ndef prevent_path_errors():\n if 'VIRTUAL_ENV' in os.environ 
and not check_path():\n sys.exit('''ERROR: The virtualenv hasn't been activated correctly.\nEither the env is corrupted (try running `pew restore env`),\nOr an upgrade of your Python version broke your env,\nOr check the contents of your $PATH. You might be adding new directories to it\nfrom inside your shell's configuration file.\nIn this case, for further details please see: https://github.com/berdario/pew#the-environment-doesnt-seem-to-be-activated''')\n\n\ndef first_run_setup():\n shell = supported_shell()\n if shell:\n if shell == 'fish':\n source_cmd = 'source (pew shell_config)'\n else:\n source_cmd = 'source $(pew shell_config)'\n rcpath = expandpath({'bash': '~/.bashrc'\n , 'zsh': '~/.zshrc'\n , 'fish': '~/.config/fish/config.fish'}[shell])\n if rcpath.exists():\n update_config_file(rcpath, source_cmd)\n else:\n print(\"It seems that you're running pew for the first time\\n\"\n \"If you want source shell competions and update your prompt, \"\n \"Add the following line to your shell config file:\\n %s\" % source_cmd)\n print('\\nWill now continue with the command:', *sys.argv[1:])\n input('[enter]')\n\ndef update_config_file(rcpath, source_cmd):\n with rcpath.open('r+') as rcfile:\n if source_cmd not in (line.strip() for line in rcfile.readlines()):\n choice = 'X'\n while choice not in ('y', '', 'n'):\n choice = input(\"It seems that you're running pew for the first time\\n\"\n \"do you want to modify %s to source completions and\"\n \" update your prompt? [y/N]\\n> \" % rcpath).lower()\n if choice == 'y':\n rcfile.write('\\n# added by Pew\\n%s\\n' % source_cmd)\n print('Done')\n else:\n print('\\nOk, if you want to do it manually, just add\\n %s\\nat'\n ' the end of %s' % (source_cmd, rcpath))\n\n\ndef print_commands(cmds):\n longest = max(map(len, cmds)) + 3\n columns, _ = get_terminal_size()\n\n print('Available commands:\\n')\n for cmd, fun in sorted(cmds.items()):\n if fun.__doc__:\n print(textwrap.fill(\n fun.__doc__.splitlines()[0],\n columns or 1000,\n initial_indent=(' {0}: '.format(cmd)).ljust(longest),\n subsequent_indent=longest * ' '))\n else:\n print(' ' + cmd)\n\n\ndef pew():\n first_run = makedirs_and_symlink_if_needed(get_workon_home())\n if first_run and sys.stdin.isatty():\n first_run_setup()\n\n cmds = dict((cmd[:-4], fun)\n for cmd, fun in globals().items() if cmd.endswith('_cmd'))\n if sys.argv[1:]:\n if sys.argv[1] in cmds:\n command = cmds[sys.argv[1]]\n try:\n return command(sys.argv[2:])\n except CalledProcessError as e:\n return e.returncode\n except KeyboardInterrupt:\n pass\n else:\n err(\"ERROR: command\", sys.argv[1], \"does not exist.\")\n print_commands(cmds)\n sys.exit(1)\n else:\n print_commands(cmds)\n", "path": "pipenv/patched/pew/pew.py"}]} |
gh_patches_debug_1067 | rasdani/github-patches | git_diff | napari__napari-920 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
to_labels broken in nD_shapes.py
>
> Hi Nicolas,
>
> Thanks for building such a great visualization tool for python!
>
> I have been trying to use the `.to_labels` functionality (napari version `0.2.10+7.g47af135`) on an image stack and am not getting the behavior I expected. For example, if I add the line `print(np.unique(labels))` at line number 36 of the nD_shapes.py example, expecting this to reflect the labels of all unique shapes, the output I get is `array([0])`, indicating no labels are present.
>
> If I change line 34 to read:
> `labels = layer.to_labels(labels_shape=(128, 128))`
> I get an index for all 128 shapes, though of course they are now compressed into one image plane; but if line 34 reads:
> `labels = layer.to_labels()`
> I again get only zeros (but with shape `(127,128,128)`).
>
> It does seem that `to_labels` is meant to be compatible with n-dimensional images; am I expecting the wrong behavior, or otherwise misusing this functionality?
>
_Originally posted by @miketaormina in https://github.com/napari/napari-tutorials/issues/46#issuecomment-578882563_
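
A minimal sketch of the reported behavior, assuming a `Shapes` layer can be constructed directly with one planar polygon per slice (the shape data and `labels_shape` values below are illustrative, not the exact nD_shapes.py code):

```
import numpy as np
from napari.layers import Shapes

# One square polygon per plane; column 0 holds the (non-displayed) plane index.
corners = np.array([[10, 10], [10, 30], [30, 30], [30, 10]])
data = [np.column_stack([np.full(4, plane), corners]) for plane in range(3)]

layer = Shapes(data, shape_type='polygon')

# Expected: one label value per shape somewhere in the volume; observed: only 0.
labels = layer.to_labels(labels_shape=(3, 128, 128))
print(np.unique(labels))

# Collapsing everything into a single 2D plane does produce a label per shape.
print(np.unique(layer.to_labels(labels_shape=(128, 128))))
```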
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/layers/shapes/shape_models/shape.py`
Content:
```
1 from abc import ABC, abstractmethod
2 import numpy as np
3 from copy import copy
4 from vispy.color import Color
5 from ..shape_utils import (
6 triangulate_edge,
7 triangulate_face,
8 is_collinear,
9 poly_to_mask,
10 path_to_mask,
11 )
12
13
14 class Shape(ABC):
15 """Base class for a single shape
16
17 Parameters
18 ----------
19 data : (N, D) array
20 Vertices specifying the shape.
21 edge_width : float
22 thickness of lines and edges.
23 edge_color : str | tuple
24 If string can be any color name recognized by vispy or hex value if
25 starting with `#`. If array-like must be 1-dimensional array with 3 or
26 4 elements.
27 face_color : str | tuple
28 If string can be any color name recognized by vispy or hex value if
29 starting with `#`. If array-like must be 1-dimensional array with 3 or
30 4 elements.
31 opacity : float
32 Opacity of the shape, must be between 0 and 1.
33 z_index : int
34 Specifier of z order priority. Shapes with higher z order are displayed
35 ontop of others.
36 dims_order : (D,) list
37 Order that the dimensions are to be rendered in.
38 ndisplay : int
39 Number of displayed dimensions.
40
41 Attributes
42 ----------
43 data : (N, D) array
44 Vertices specifying the shape.
45 data_displayed : (N, 2) array
46 Vertices of the shape that are currently displayed. Only 2D rendering
47 currently supported.
48 edge_width : float
49 thickness of lines and edges.
50 edge_color : ColorArray
51 Color of the shape edge
52 face_color : ColorArray
53 Color of the shape face
54 opacity : float
55 Opacity of the shape, must be between 0 and 1.
56 name : str
57 Name of shape type.
58 z_index : int
59 Specifier of z order priority. Shapes with higher z order are displayed
60 ontop of others.
61 dims_order : (D,) list
62 Order that the dimensions are rendered in.
63 ndisplay : int
64 Number of dimensions to be displayed, must be 2 as only 2D rendering
65 currently supported.
66 displayed : tuple
67 List of dimensions that are displayed.
68 not_displayed : tuple
69 List of dimensions that are not displayed.
70 slice_key : (2, M) array
71 Min and max values of the M non-displayed dimensions, useful for
72 slicing multidimensional shapes.
73
74 Extended Summary
75 ----------
76 _edge_color_name : str
77 Name of edge color or six digit hex code representing edge color if not
78 recognized
79 _face_color_name : str
80 Name of edge color or six digit hex code representing face color if not
81 recognized
82 _closed : bool
83 Bool if shape edge is a closed path or not
84 _box : np.ndarray
85 9x2 array of vertices of the interaction box. The first 8 points are
86 the corners and midpoints of the box in clockwise order starting in the
87 upper-left corner. The last point is the center of the box
88 _face_vertices : np.ndarray
89 Qx2 array of vertices of all triangles for the shape face
90 _face_triangles : np.ndarray
91 Px3 array of vertex indices that form the triangles for the shape face
92 _edge_vertices : np.ndarray
93 Rx2 array of centers of vertices of triangles for the shape edge.
94 These values should be added to the scaled `_edge_offsets` to get the
95 actual vertex positions. The scaling corresponds to the width of the
96 edge
97 _edge_offsets : np.ndarray
98 Sx2 array of offsets of vertices of triangles for the shape edge. For
99 These values should be scaled and added to the `_edge_vertices` to get
100 the actual vertex positions. The scaling corresponds to the width of
101 the edge
102 _edge_triangles : np.ndarray
103 Tx3 array of vertex indices that form the triangles for the shape edge
104 _filled : bool
105 Flag if array is filled or not.
106 _use_face_vertices : bool
107 Flag to use face vertices for mask generation.
108 """
109
110 def __init__(
111 self,
112 *,
113 shape_type='rectangle',
114 edge_width=1,
115 edge_color='black',
116 face_color='white',
117 opacity=1,
118 z_index=0,
119 dims_order=None,
120 ndisplay=2,
121 ):
122
123 self._dims_order = dims_order or list(range(2))
124 self._ndisplay = ndisplay
125 self.slice_key = None
126
127 self._face_vertices = np.empty((0, self.ndisplay))
128 self._face_triangles = np.empty((0, 3), dtype=np.uint32)
129 self._edge_vertices = np.empty((0, self.ndisplay))
130 self._edge_offsets = np.empty((0, self.ndisplay))
131 self._edge_triangles = np.empty((0, 3), dtype=np.uint32)
132 self._box = np.empty((9, 2))
133 self._edge_color_name = 'black'
134 self._face_color_name = 'white'
135
136 self._closed = False
137 self._filled = True
138 self._use_face_vertices = False
139 self.edge_width = edge_width
140 self.edge_color = edge_color
141 self.face_color = face_color
142 self.opacity = opacity
143 self.z_index = z_index
144 self.name = ''
145
146 @property
147 @abstractmethod
148 def data(self):
149 # user writes own docstring
150 raise NotImplementedError()
151
152 @data.setter
153 @abstractmethod
154 def data(self, data):
155 raise NotImplementedError()
156
157 @abstractmethod
158 def _update_displayed_data(self):
159 raise NotImplementedError()
160
161 @property
162 def ndisplay(self):
163 """int: Number of displayed dimensions."""
164 return self._ndisplay
165
166 @ndisplay.setter
167 def ndisplay(self, ndisplay):
168 if self.ndisplay == ndisplay:
169 return
170 self._ndisplay = ndisplay
171 self._update_displayed_data()
172
173 @property
174 def dims_order(self):
175 """(D,) list: Order that the dimensions are rendered in."""
176 return self._dims_order
177
178 @dims_order.setter
179 def dims_order(self, dims_order):
180 if self.dims_order == dims_order:
181 return
182 self._dims_order = dims_order
183 self._update_displayed_data()
184
185 @property
186 def dims_displayed(self):
187 """tuple: Dimensions that are displayed."""
188 return self.dims_order[-self.ndisplay :]
189
190 @property
191 def dims_not_displayed(self):
192 """tuple: Dimensions that are not displayed."""
193 return self.dims_order[: -self.ndisplay]
194
195 @property
196 def data_displayed(self):
197 """(N, 2) array: Vertices of the shape that are currently displayed."""
198 return self.data[:, self.dims_displayed]
199
200 @property
201 def edge_width(self):
202 """float: thickness of lines and edges.
203 """
204 return self._edge_width
205
206 @edge_width.setter
207 def edge_width(self, edge_width):
208 self._edge_width = edge_width
209
210 @property
211 def edge_color(self):
212 """Color, ColorArray: color of edges
213 """
214 return self._edge_color
215
216 @edge_color.setter
217 def edge_color(self, edge_color):
218 self._edge_color = Color(edge_color)
219 if type(edge_color) is str:
220 self._edge_color_name = edge_color
221 else:
222 rgb = tuple([int(255 * x) for x in self._edge_color.rgba[:3]])
223 self._edge_color_name = '#%02x%02x%02x' % rgb
224
225 @property
226 def face_color(self):
227 """Color, ColorArray: color of faces
228 """
229 return self._face_color
230
231 @face_color.setter
232 def face_color(self, face_color):
233 self._face_color = Color(face_color)
234 if type(face_color) is str:
235 self._face_color_name = face_color
236 else:
237 rgb = tuple([int(255 * x) for x in self._face_color.rgba[:3]])
238 self._face_color_name = '#%02x%02x%02x' % rgb
239
240 @property
241 def opacity(self):
242 """float: opacity of shape
243 """
244 return self._opacity
245
246 @opacity.setter
247 def opacity(self, opacity):
248 self._opacity = opacity
249
250 @property
251 def svg_props(self):
252 """dict: color and width properties in the svg specification
253 """
254 width = str(self.edge_width)
255 face_color = (255 * self.face_color.rgba).astype(np.int)
256 fill = f'rgb{tuple(face_color[:3])}'
257 edge_color = (255 * self.edge_color.rgba).astype(np.int)
258 stroke = f'rgb{tuple(edge_color[:3])}'
259 opacity = str(self.opacity)
260
261 # Currently not using fill or stroke opacity - only global opacity
262 # as otherwise leads to unexpected behavior when reading svg into
263 # other applications
264 # fill_opacity = f'{self.opacity*self.face_color.rgba[3]}'
265 # stroke_opacity = f'{self.opacity*self.edge_color.rgba[3]}'
266
267 props = {
268 'fill': fill,
269 'stroke': stroke,
270 'stroke-width': width,
271 'opacity': opacity,
272 }
273
274 return props
275
276 @property
277 def z_index(self):
278 """int: z order priority of shape. Shapes with higher z order displayed
279 ontop of others.
280 """
281 return self._z_index
282
283 @z_index.setter
284 def z_index(self, z_index):
285 self._z_index = z_index
286
287 def _set_meshes(self, data, closed=True, face=True, edge=True):
288 """Sets the face and edge meshes from a set of points.
289
290 Parameters
291 ----------
292 data : np.ndarray
293 Nx2 or Nx3 array specifying the shape to be triangulated
294 closed : bool
295 Bool which determines if the edge is closed or not
296 face : bool
297 Bool which determines if the face need to be traingulated
298 edge : bool
299 Bool which determines if the edge need to be traingulated
300 """
301 if edge:
302 centers, offsets, triangles = triangulate_edge(data, closed=closed)
303 self._edge_vertices = centers
304 self._edge_offsets = offsets
305 self._edge_triangles = triangles
306 else:
307 self._edge_vertices = np.empty((0, self.ndisplay))
308 self._edge_offsets = np.empty((0, self.ndisplay))
309 self._edge_triangles = np.empty((0, 3), dtype=np.uint32)
310
311 if face:
312 clean_data = np.array(
313 [
314 p
315 for i, p in enumerate(data)
316 if i == 0 or not np.all(p == data[i - 1])
317 ]
318 )
319
320 if not is_collinear(clean_data[:, -2:]):
321 if clean_data.shape[1] == 2:
322 vertices, triangles = triangulate_face(clean_data)
323 elif len(np.unique(clean_data[:, 0])) == 1:
324 val = np.unique(clean_data[:, 0])
325 vertices, triangles = triangulate_face(clean_data[:, -2:])
326 exp = np.expand_dims(np.repeat(val, len(vertices)), axis=1)
327 vertices = np.concatenate([exp, vertices], axis=1)
328 else:
329 triangles = []
330 vertices = []
331 if len(triangles) > 0:
332 self._face_vertices = vertices
333 self._face_triangles = triangles
334 else:
335 self._face_vertices = np.empty((0, self.ndisplay))
336 self._face_triangles = np.empty((0, 3), dtype=np.uint32)
337 else:
338 self._face_vertices = np.empty((0, self.ndisplay))
339 self._face_triangles = np.empty((0, 3), dtype=np.uint32)
340 else:
341 self._face_vertices = np.empty((0, self.ndisplay))
342 self._face_triangles = np.empty((0, 3), dtype=np.uint32)
343
344 def transform(self, transform):
345 """Performs a linear transform on the shape
346
347 Parameters
348 ----------
349 transform : np.ndarray
350 2x2 array specifying linear transform.
351 """
352 self._box = self._box @ transform.T
353 self._data[:, self.dims_displayed] = (
354 self._data[:, self.dims_displayed] @ transform.T
355 )
356 self._face_vertices = self._face_vertices @ transform.T
357
358 points = self.data_displayed
359
360 centers, offsets, triangles = triangulate_edge(
361 points, closed=self._closed
362 )
363 self._edge_vertices = centers
364 self._edge_offsets = offsets
365 self._edge_triangles = triangles
366
367 def shift(self, shift):
368 """Performs a 2D shift on the shape
369
370 Parameters
371 ----------
372 shift : np.ndarray
373 length 2 array specifying shift of shapes.
374 """
375 shift = np.array(shift)
376
377 self._face_vertices = self._face_vertices + shift
378 self._edge_vertices = self._edge_vertices + shift
379 self._box = self._box + shift
380 self._data[:, self.dims_displayed] = self.data_displayed + shift
381
382 def scale(self, scale, center=None):
383 """Performs a scaling on the shape
384
385 Parameters
386 ----------
387 scale : float, list
388 scalar or list specifying rescaling of shape.
389 center : list
390 length 2 list specifying coordinate of center of scaling.
391 """
392 if isinstance(scale, (list, np.ndarray)):
393 transform = np.array([[scale[0], 0], [0, scale[1]]])
394 else:
395 transform = np.array([[scale, 0], [0, scale]])
396 if center is None:
397 self.transform(transform)
398 else:
399 self.shift(-center)
400 self.transform(transform)
401 self.shift(center)
402
403 def rotate(self, angle, center=None):
404 """Performs a rotation on the shape
405
406 Parameters
407 ----------
408 angle : float
409 angle specifying rotation of shape in degrees. CCW is positive.
410 center : list
411 length 2 list specifying coordinate of fixed point of the rotation.
412 """
413 theta = np.radians(angle)
414 transform = np.array(
415 [[np.cos(theta), np.sin(theta)], [-np.sin(theta), np.cos(theta)]]
416 )
417 if center is None:
418 self.transform(transform)
419 else:
420 self.shift(-center)
421 self.transform(transform)
422 self.shift(center)
423
424 def flip(self, axis, center=None):
425 """Performs a flip on the shape, either horizontal or vertical.
426
427 Parameters
428 ----------
429 axis : int
430 integer specifying axis of flip. `0` flips horizontal, `1` flips
431 vertical.
432 center : list
433 length 2 list specifying coordinate of center of flip axes.
434 """
435 if axis == 0:
436 transform = np.array([[1, 0], [0, -1]])
437 elif axis == 1:
438 transform = np.array([[-1, 0], [0, 1]])
439 else:
440 raise ValueError(
441 """Axis not recognized, must be one of "{0, 1}"
442 """
443 )
444 if center is None:
445 self.transform(transform)
446 else:
447 self.shift(-center)
448 self.transform(transform)
449 self.shift(-center)
450
451 def to_mask(self, mask_shape=None, zoom_factor=1, offset=[0, 0]):
452 """Convert the shape vertices to a boolean mask.
453
454 Set points to `True` if they are lying inside the shape if the shape is
455 filled, or if they are lying along the boundary of the shape if the
456 shape is not filled. Negative points or points outside the mask_shape
457 after the zoom and offset are clipped.
458
459 Parameters
460 ----------
461 mask_shape : (D,) array
462 Shape of mask to be generated. If non specified, takes the max of
463 the displayed vertices.
464 zoom_factor : float
465 Premultiplier applied to coordinates before generating mask. Used
466 for generating as downsampled mask.
467 offset : 2-tuple
468 Offset subtracted from coordinates before multiplying by the
469 zoom_factor. Used for putting negative coordinates into the mask.
470
471 Returns
472 ----------
473 mask : np.ndarray
474 Boolean array with `True` for points inside the shape
475 """
476 if mask_shape is None:
477 mask_shape = np.round(self.data_displayed.max(axis=0)).astype(
478 'int'
479 )
480
481 if len(mask_shape) == 2:
482 embedded = False
483 shape_plane = mask_shape
484 elif len(mask_shape) == self.data.shape[1]:
485 embedded = True
486 shape_plane = [mask_shape[d] for d in self.dims_displayed]
487 else:
488 raise ValueError(
489 f"""mask shape length must either be 2 or the same
490 as the dimensionality of the shape, expected {self.data.shape[1]}
491 got {len(mask_shape)}."""
492 )
493
494 if self._use_face_vertices:
495 data = self._face_vertices
496 else:
497 data = self.data_displayed
498
499 data = data[:, -len(shape_plane) :]
500
501 if self._filled:
502 mask_p = poly_to_mask(shape_plane, (data - offset) * zoom_factor)
503 else:
504 mask_p = path_to_mask(shape_plane, (data - offset) * zoom_factor)
505
506 # If the mask is to be embedded in a larger array, compute array
507 # and embed as a slice.
508 if embedded:
509 mask = np.zeros(mask_shape, dtype=bool)
510 slice_key = [0] * len(mask_shape)
511 j = 0
512 for i in range(len(mask_shape)):
513 if i in self.dims_displayed:
514 slice_key[i] = slice(None)
515 else:
516 slice_key[i] = slice(
517 self.slice_key[0, j], self.slice_key[1, j] + 1
518 )
519 j += 1
520 displayed_order = np.array(copy(self.dims_displayed))
521 displayed_order[np.argsort(displayed_order)] = list(
522 range(len(displayed_order))
523 )
524 mask[tuple(slice_key)] == mask_p.transpose(displayed_order)
525 else:
526 mask = mask_p
527
528 return mask
529
530 @abstractmethod
531 def to_xml(self):
532 # user writes own docstring
533 raise NotImplementedError()
534
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/napari/layers/shapes/shape_models/shape.py b/napari/layers/shapes/shape_models/shape.py
--- a/napari/layers/shapes/shape_models/shape.py
+++ b/napari/layers/shapes/shape_models/shape.py
@@ -521,7 +521,7 @@
displayed_order[np.argsort(displayed_order)] = list(
range(len(displayed_order))
)
- mask[tuple(slice_key)] == mask_p.transpose(displayed_order)
+ mask[tuple(slice_key)] = mask_p.transpose(displayed_order)
else:
mask = mask_p
| {"golden_diff": "diff --git a/napari/layers/shapes/shape_models/shape.py b/napari/layers/shapes/shape_models/shape.py\n--- a/napari/layers/shapes/shape_models/shape.py\n+++ b/napari/layers/shapes/shape_models/shape.py\n@@ -521,7 +521,7 @@\n displayed_order[np.argsort(displayed_order)] = list(\n range(len(displayed_order))\n )\n- mask[tuple(slice_key)] == mask_p.transpose(displayed_order)\n+ mask[tuple(slice_key)] = mask_p.transpose(displayed_order)\n else:\n mask = mask_p\n", "issue": "to_labels broken in nD_shapes.py\n>\r\n> Hi Nicolas,\r\n>\r\n> Thanks for building such a great visualization tool for python!\r\n>\r\n> I have been trying to use (napari version `0.2.10+7.g47af135`) the `.to_labels` functionality on an image stack and not getting the behavior I expected. For example, if I add the line `print(np.unique(labels))` at line number 36 of the nD_shapes.py example, expecting this to reflect the labels of all unique shapes, the output that I get is `array([0])`, indicating no labels present.\r\n>\r\n> If I change line 34 to read:\r\n`labels = layer.to_labels(labels_shape=(128, 128))`\r\nI get an index for all 128 shapes, but of course they are now compressed into one image plane, but if line 34 reads:\r\n`labels = layer.to_labels()`\r\nI again get only zeros (but with shape `(127,128,128)`).\r\n>\r\n> It does seem that `to_labels` is meant to be compatible with n-dimensional images, am I expecting the wrong behavior, or otherwise misusing this functionality?\r\n>\r\n_Originally posted by @miketaormina in https://github.com/napari/napari-tutorials/issues/46#issuecomment-578882563_\n", "before_files": [{"content": "from abc import ABC, abstractmethod\nimport numpy as np\nfrom copy import copy\nfrom vispy.color import Color\nfrom ..shape_utils import (\n triangulate_edge,\n triangulate_face,\n is_collinear,\n poly_to_mask,\n path_to_mask,\n)\n\n\nclass Shape(ABC):\n \"\"\"Base class for a single shape\n\n Parameters\n ----------\n data : (N, D) array\n Vertices specifying the shape.\n edge_width : float\n thickness of lines and edges.\n edge_color : str | tuple\n If string can be any color name recognized by vispy or hex value if\n starting with `#`. If array-like must be 1-dimensional array with 3 or\n 4 elements.\n face_color : str | tuple\n If string can be any color name recognized by vispy or hex value if\n starting with `#`. If array-like must be 1-dimensional array with 3 or\n 4 elements.\n opacity : float\n Opacity of the shape, must be between 0 and 1.\n z_index : int\n Specifier of z order priority. Shapes with higher z order are displayed\n ontop of others.\n dims_order : (D,) list\n Order that the dimensions are to be rendered in.\n ndisplay : int\n Number of displayed dimensions.\n\n Attributes\n ----------\n data : (N, D) array\n Vertices specifying the shape.\n data_displayed : (N, 2) array\n Vertices of the shape that are currently displayed. Only 2D rendering\n currently supported.\n edge_width : float\n thickness of lines and edges.\n edge_color : ColorArray\n Color of the shape edge\n face_color : ColorArray\n Color of the shape face\n opacity : float\n Opacity of the shape, must be between 0 and 1.\n name : str\n Name of shape type.\n z_index : int\n Specifier of z order priority. 
Shapes with higher z order are displayed\n ontop of others.\n dims_order : (D,) list\n Order that the dimensions are rendered in.\n ndisplay : int\n Number of dimensions to be displayed, must be 2 as only 2D rendering\n currently supported.\n displayed : tuple\n List of dimensions that are displayed.\n not_displayed : tuple\n List of dimensions that are not displayed.\n slice_key : (2, M) array\n Min and max values of the M non-displayed dimensions, useful for\n slicing multidimensional shapes.\n\n Extended Summary\n ----------\n _edge_color_name : str\n Name of edge color or six digit hex code representing edge color if not\n recognized\n _face_color_name : str\n Name of edge color or six digit hex code representing face color if not\n recognized\n _closed : bool\n Bool if shape edge is a closed path or not\n _box : np.ndarray\n 9x2 array of vertices of the interaction box. The first 8 points are\n the corners and midpoints of the box in clockwise order starting in the\n upper-left corner. The last point is the center of the box\n _face_vertices : np.ndarray\n Qx2 array of vertices of all triangles for the shape face\n _face_triangles : np.ndarray\n Px3 array of vertex indices that form the triangles for the shape face\n _edge_vertices : np.ndarray\n Rx2 array of centers of vertices of triangles for the shape edge.\n These values should be added to the scaled `_edge_offsets` to get the\n actual vertex positions. The scaling corresponds to the width of the\n edge\n _edge_offsets : np.ndarray\n Sx2 array of offsets of vertices of triangles for the shape edge. For\n These values should be scaled and added to the `_edge_vertices` to get\n the actual vertex positions. The scaling corresponds to the width of\n the edge\n _edge_triangles : np.ndarray\n Tx3 array of vertex indices that form the triangles for the shape edge\n _filled : bool\n Flag if array is filled or not.\n _use_face_vertices : bool\n Flag to use face vertices for mask generation.\n \"\"\"\n\n def __init__(\n self,\n *,\n shape_type='rectangle',\n edge_width=1,\n edge_color='black',\n face_color='white',\n opacity=1,\n z_index=0,\n dims_order=None,\n ndisplay=2,\n ):\n\n self._dims_order = dims_order or list(range(2))\n self._ndisplay = ndisplay\n self.slice_key = None\n\n self._face_vertices = np.empty((0, self.ndisplay))\n self._face_triangles = np.empty((0, 3), dtype=np.uint32)\n self._edge_vertices = np.empty((0, self.ndisplay))\n self._edge_offsets = np.empty((0, self.ndisplay))\n self._edge_triangles = np.empty((0, 3), dtype=np.uint32)\n self._box = np.empty((9, 2))\n self._edge_color_name = 'black'\n self._face_color_name = 'white'\n\n self._closed = False\n self._filled = True\n self._use_face_vertices = False\n self.edge_width = edge_width\n self.edge_color = edge_color\n self.face_color = face_color\n self.opacity = opacity\n self.z_index = z_index\n self.name = ''\n\n @property\n @abstractmethod\n def data(self):\n # user writes own docstring\n raise NotImplementedError()\n\n @data.setter\n @abstractmethod\n def data(self, data):\n raise NotImplementedError()\n\n @abstractmethod\n def _update_displayed_data(self):\n raise NotImplementedError()\n\n @property\n def ndisplay(self):\n \"\"\"int: Number of displayed dimensions.\"\"\"\n return self._ndisplay\n\n @ndisplay.setter\n def ndisplay(self, ndisplay):\n if self.ndisplay == ndisplay:\n return\n self._ndisplay = ndisplay\n self._update_displayed_data()\n\n @property\n def dims_order(self):\n \"\"\"(D,) list: Order that the dimensions are rendered in.\"\"\"\n return 
self._dims_order\n\n @dims_order.setter\n def dims_order(self, dims_order):\n if self.dims_order == dims_order:\n return\n self._dims_order = dims_order\n self._update_displayed_data()\n\n @property\n def dims_displayed(self):\n \"\"\"tuple: Dimensions that are displayed.\"\"\"\n return self.dims_order[-self.ndisplay :]\n\n @property\n def dims_not_displayed(self):\n \"\"\"tuple: Dimensions that are not displayed.\"\"\"\n return self.dims_order[: -self.ndisplay]\n\n @property\n def data_displayed(self):\n \"\"\"(N, 2) array: Vertices of the shape that are currently displayed.\"\"\"\n return self.data[:, self.dims_displayed]\n\n @property\n def edge_width(self):\n \"\"\"float: thickness of lines and edges.\n \"\"\"\n return self._edge_width\n\n @edge_width.setter\n def edge_width(self, edge_width):\n self._edge_width = edge_width\n\n @property\n def edge_color(self):\n \"\"\"Color, ColorArray: color of edges\n \"\"\"\n return self._edge_color\n\n @edge_color.setter\n def edge_color(self, edge_color):\n self._edge_color = Color(edge_color)\n if type(edge_color) is str:\n self._edge_color_name = edge_color\n else:\n rgb = tuple([int(255 * x) for x in self._edge_color.rgba[:3]])\n self._edge_color_name = '#%02x%02x%02x' % rgb\n\n @property\n def face_color(self):\n \"\"\"Color, ColorArray: color of faces\n \"\"\"\n return self._face_color\n\n @face_color.setter\n def face_color(self, face_color):\n self._face_color = Color(face_color)\n if type(face_color) is str:\n self._face_color_name = face_color\n else:\n rgb = tuple([int(255 * x) for x in self._face_color.rgba[:3]])\n self._face_color_name = '#%02x%02x%02x' % rgb\n\n @property\n def opacity(self):\n \"\"\"float: opacity of shape\n \"\"\"\n return self._opacity\n\n @opacity.setter\n def opacity(self, opacity):\n self._opacity = opacity\n\n @property\n def svg_props(self):\n \"\"\"dict: color and width properties in the svg specification\n \"\"\"\n width = str(self.edge_width)\n face_color = (255 * self.face_color.rgba).astype(np.int)\n fill = f'rgb{tuple(face_color[:3])}'\n edge_color = (255 * self.edge_color.rgba).astype(np.int)\n stroke = f'rgb{tuple(edge_color[:3])}'\n opacity = str(self.opacity)\n\n # Currently not using fill or stroke opacity - only global opacity\n # as otherwise leads to unexpected behavior when reading svg into\n # other applications\n # fill_opacity = f'{self.opacity*self.face_color.rgba[3]}'\n # stroke_opacity = f'{self.opacity*self.edge_color.rgba[3]}'\n\n props = {\n 'fill': fill,\n 'stroke': stroke,\n 'stroke-width': width,\n 'opacity': opacity,\n }\n\n return props\n\n @property\n def z_index(self):\n \"\"\"int: z order priority of shape. 
Shapes with higher z order displayed\n ontop of others.\n \"\"\"\n return self._z_index\n\n @z_index.setter\n def z_index(self, z_index):\n self._z_index = z_index\n\n def _set_meshes(self, data, closed=True, face=True, edge=True):\n \"\"\"Sets the face and edge meshes from a set of points.\n\n Parameters\n ----------\n data : np.ndarray\n Nx2 or Nx3 array specifying the shape to be triangulated\n closed : bool\n Bool which determines if the edge is closed or not\n face : bool\n Bool which determines if the face need to be traingulated\n edge : bool\n Bool which determines if the edge need to be traingulated\n \"\"\"\n if edge:\n centers, offsets, triangles = triangulate_edge(data, closed=closed)\n self._edge_vertices = centers\n self._edge_offsets = offsets\n self._edge_triangles = triangles\n else:\n self._edge_vertices = np.empty((0, self.ndisplay))\n self._edge_offsets = np.empty((0, self.ndisplay))\n self._edge_triangles = np.empty((0, 3), dtype=np.uint32)\n\n if face:\n clean_data = np.array(\n [\n p\n for i, p in enumerate(data)\n if i == 0 or not np.all(p == data[i - 1])\n ]\n )\n\n if not is_collinear(clean_data[:, -2:]):\n if clean_data.shape[1] == 2:\n vertices, triangles = triangulate_face(clean_data)\n elif len(np.unique(clean_data[:, 0])) == 1:\n val = np.unique(clean_data[:, 0])\n vertices, triangles = triangulate_face(clean_data[:, -2:])\n exp = np.expand_dims(np.repeat(val, len(vertices)), axis=1)\n vertices = np.concatenate([exp, vertices], axis=1)\n else:\n triangles = []\n vertices = []\n if len(triangles) > 0:\n self._face_vertices = vertices\n self._face_triangles = triangles\n else:\n self._face_vertices = np.empty((0, self.ndisplay))\n self._face_triangles = np.empty((0, 3), dtype=np.uint32)\n else:\n self._face_vertices = np.empty((0, self.ndisplay))\n self._face_triangles = np.empty((0, 3), dtype=np.uint32)\n else:\n self._face_vertices = np.empty((0, self.ndisplay))\n self._face_triangles = np.empty((0, 3), dtype=np.uint32)\n\n def transform(self, transform):\n \"\"\"Performs a linear transform on the shape\n\n Parameters\n ----------\n transform : np.ndarray\n 2x2 array specifying linear transform.\n \"\"\"\n self._box = self._box @ transform.T\n self._data[:, self.dims_displayed] = (\n self._data[:, self.dims_displayed] @ transform.T\n )\n self._face_vertices = self._face_vertices @ transform.T\n\n points = self.data_displayed\n\n centers, offsets, triangles = triangulate_edge(\n points, closed=self._closed\n )\n self._edge_vertices = centers\n self._edge_offsets = offsets\n self._edge_triangles = triangles\n\n def shift(self, shift):\n \"\"\"Performs a 2D shift on the shape\n\n Parameters\n ----------\n shift : np.ndarray\n length 2 array specifying shift of shapes.\n \"\"\"\n shift = np.array(shift)\n\n self._face_vertices = self._face_vertices + shift\n self._edge_vertices = self._edge_vertices + shift\n self._box = self._box + shift\n self._data[:, self.dims_displayed] = self.data_displayed + shift\n\n def scale(self, scale, center=None):\n \"\"\"Performs a scaling on the shape\n\n Parameters\n ----------\n scale : float, list\n scalar or list specifying rescaling of shape.\n center : list\n length 2 list specifying coordinate of center of scaling.\n \"\"\"\n if isinstance(scale, (list, np.ndarray)):\n transform = np.array([[scale[0], 0], [0, scale[1]]])\n else:\n transform = np.array([[scale, 0], [0, scale]])\n if center is None:\n self.transform(transform)\n else:\n self.shift(-center)\n self.transform(transform)\n self.shift(center)\n\n def rotate(self, 
angle, center=None):\n \"\"\"Performs a rotation on the shape\n\n Parameters\n ----------\n angle : float\n angle specifying rotation of shape in degrees. CCW is positive.\n center : list\n length 2 list specifying coordinate of fixed point of the rotation.\n \"\"\"\n theta = np.radians(angle)\n transform = np.array(\n [[np.cos(theta), np.sin(theta)], [-np.sin(theta), np.cos(theta)]]\n )\n if center is None:\n self.transform(transform)\n else:\n self.shift(-center)\n self.transform(transform)\n self.shift(center)\n\n def flip(self, axis, center=None):\n \"\"\"Performs a flip on the shape, either horizontal or vertical.\n\n Parameters\n ----------\n axis : int\n integer specifying axis of flip. `0` flips horizontal, `1` flips\n vertical.\n center : list\n length 2 list specifying coordinate of center of flip axes.\n \"\"\"\n if axis == 0:\n transform = np.array([[1, 0], [0, -1]])\n elif axis == 1:\n transform = np.array([[-1, 0], [0, 1]])\n else:\n raise ValueError(\n \"\"\"Axis not recognized, must be one of \"{0, 1}\"\n \"\"\"\n )\n if center is None:\n self.transform(transform)\n else:\n self.shift(-center)\n self.transform(transform)\n self.shift(-center)\n\n def to_mask(self, mask_shape=None, zoom_factor=1, offset=[0, 0]):\n \"\"\"Convert the shape vertices to a boolean mask.\n\n Set points to `True` if they are lying inside the shape if the shape is\n filled, or if they are lying along the boundary of the shape if the\n shape is not filled. Negative points or points outside the mask_shape\n after the zoom and offset are clipped.\n\n Parameters\n ----------\n mask_shape : (D,) array\n Shape of mask to be generated. If non specified, takes the max of\n the displayed vertices.\n zoom_factor : float\n Premultiplier applied to coordinates before generating mask. Used\n for generating as downsampled mask.\n offset : 2-tuple\n Offset subtracted from coordinates before multiplying by the\n zoom_factor. 
Used for putting negative coordinates into the mask.\n\n Returns\n ----------\n mask : np.ndarray\n Boolean array with `True` for points inside the shape\n \"\"\"\n if mask_shape is None:\n mask_shape = np.round(self.data_displayed.max(axis=0)).astype(\n 'int'\n )\n\n if len(mask_shape) == 2:\n embedded = False\n shape_plane = mask_shape\n elif len(mask_shape) == self.data.shape[1]:\n embedded = True\n shape_plane = [mask_shape[d] for d in self.dims_displayed]\n else:\n raise ValueError(\n f\"\"\"mask shape length must either be 2 or the same\n as the dimensionality of the shape, expected {self.data.shape[1]}\n got {len(mask_shape)}.\"\"\"\n )\n\n if self._use_face_vertices:\n data = self._face_vertices\n else:\n data = self.data_displayed\n\n data = data[:, -len(shape_plane) :]\n\n if self._filled:\n mask_p = poly_to_mask(shape_plane, (data - offset) * zoom_factor)\n else:\n mask_p = path_to_mask(shape_plane, (data - offset) * zoom_factor)\n\n # If the mask is to be embedded in a larger array, compute array\n # and embed as a slice.\n if embedded:\n mask = np.zeros(mask_shape, dtype=bool)\n slice_key = [0] * len(mask_shape)\n j = 0\n for i in range(len(mask_shape)):\n if i in self.dims_displayed:\n slice_key[i] = slice(None)\n else:\n slice_key[i] = slice(\n self.slice_key[0, j], self.slice_key[1, j] + 1\n )\n j += 1\n displayed_order = np.array(copy(self.dims_displayed))\n displayed_order[np.argsort(displayed_order)] = list(\n range(len(displayed_order))\n )\n mask[tuple(slice_key)] == mask_p.transpose(displayed_order)\n else:\n mask = mask_p\n\n return mask\n\n @abstractmethod\n def to_xml(self):\n # user writes own docstring\n raise NotImplementedError()\n", "path": "napari/layers/shapes/shape_models/shape.py"}], "after_files": [{"content": "from abc import ABC, abstractmethod\nimport numpy as np\nfrom copy import copy\nfrom vispy.color import Color\nfrom ..shape_utils import (\n triangulate_edge,\n triangulate_face,\n is_collinear,\n poly_to_mask,\n path_to_mask,\n)\n\n\nclass Shape(ABC):\n \"\"\"Base class for a single shape\n\n Parameters\n ----------\n data : (N, D) array\n Vertices specifying the shape.\n edge_width : float\n thickness of lines and edges.\n edge_color : str | tuple\n If string can be any color name recognized by vispy or hex value if\n starting with `#`. If array-like must be 1-dimensional array with 3 or\n 4 elements.\n face_color : str | tuple\n If string can be any color name recognized by vispy or hex value if\n starting with `#`. If array-like must be 1-dimensional array with 3 or\n 4 elements.\n opacity : float\n Opacity of the shape, must be between 0 and 1.\n z_index : int\n Specifier of z order priority. Shapes with higher z order are displayed\n ontop of others.\n dims_order : (D,) list\n Order that the dimensions are to be rendered in.\n ndisplay : int\n Number of displayed dimensions.\n\n Attributes\n ----------\n data : (N, D) array\n Vertices specifying the shape.\n data_displayed : (N, 2) array\n Vertices of the shape that are currently displayed. Only 2D rendering\n currently supported.\n edge_width : float\n thickness of lines and edges.\n edge_color : ColorArray\n Color of the shape edge\n face_color : ColorArray\n Color of the shape face\n opacity : float\n Opacity of the shape, must be between 0 and 1.\n name : str\n Name of shape type.\n z_index : int\n Specifier of z order priority. 
Shapes with higher z order are displayed\n ontop of others.\n dims_order : (D,) list\n Order that the dimensions are rendered in.\n ndisplay : int\n Number of dimensions to be displayed, must be 2 as only 2D rendering\n currently supported.\n displayed : tuple\n List of dimensions that are displayed.\n not_displayed : tuple\n List of dimensions that are not displayed.\n slice_key : (2, M) array\n Min and max values of the M non-displayed dimensions, useful for\n slicing multidimensional shapes.\n\n Extended Summary\n ----------\n _edge_color_name : str\n Name of edge color or six digit hex code representing edge color if not\n recognized\n _face_color_name : str\n Name of edge color or six digit hex code representing face color if not\n recognized\n _closed : bool\n Bool if shape edge is a closed path or not\n _box : np.ndarray\n 9x2 array of vertices of the interaction box. The first 8 points are\n the corners and midpoints of the box in clockwise order starting in the\n upper-left corner. The last point is the center of the box\n _face_vertices : np.ndarray\n Qx2 array of vertices of all triangles for the shape face\n _face_triangles : np.ndarray\n Px3 array of vertex indices that form the triangles for the shape face\n _edge_vertices : np.ndarray\n Rx2 array of centers of vertices of triangles for the shape edge.\n These values should be added to the scaled `_edge_offsets` to get the\n actual vertex positions. The scaling corresponds to the width of the\n edge\n _edge_offsets : np.ndarray\n Sx2 array of offsets of vertices of triangles for the shape edge. For\n These values should be scaled and added to the `_edge_vertices` to get\n the actual vertex positions. The scaling corresponds to the width of\n the edge\n _edge_triangles : np.ndarray\n Tx3 array of vertex indices that form the triangles for the shape edge\n _filled : bool\n Flag if array is filled or not.\n _use_face_vertices : bool\n Flag to use face vertices for mask generation.\n \"\"\"\n\n def __init__(\n self,\n *,\n shape_type='rectangle',\n edge_width=1,\n edge_color='black',\n face_color='white',\n opacity=1,\n z_index=0,\n dims_order=None,\n ndisplay=2,\n ):\n\n self._dims_order = dims_order or list(range(2))\n self._ndisplay = ndisplay\n self.slice_key = None\n\n self._face_vertices = np.empty((0, self.ndisplay))\n self._face_triangles = np.empty((0, 3), dtype=np.uint32)\n self._edge_vertices = np.empty((0, self.ndisplay))\n self._edge_offsets = np.empty((0, self.ndisplay))\n self._edge_triangles = np.empty((0, 3), dtype=np.uint32)\n self._box = np.empty((9, 2))\n self._edge_color_name = 'black'\n self._face_color_name = 'white'\n\n self._closed = False\n self._filled = True\n self._use_face_vertices = False\n self.edge_width = edge_width\n self.edge_color = edge_color\n self.face_color = face_color\n self.opacity = opacity\n self.z_index = z_index\n self.name = ''\n\n @property\n @abstractmethod\n def data(self):\n # user writes own docstring\n raise NotImplementedError()\n\n @data.setter\n @abstractmethod\n def data(self, data):\n raise NotImplementedError()\n\n @abstractmethod\n def _update_displayed_data(self):\n raise NotImplementedError()\n\n @property\n def ndisplay(self):\n \"\"\"int: Number of displayed dimensions.\"\"\"\n return self._ndisplay\n\n @ndisplay.setter\n def ndisplay(self, ndisplay):\n if self.ndisplay == ndisplay:\n return\n self._ndisplay = ndisplay\n self._update_displayed_data()\n\n @property\n def dims_order(self):\n \"\"\"(D,) list: Order that the dimensions are rendered in.\"\"\"\n return 
self._dims_order\n\n @dims_order.setter\n def dims_order(self, dims_order):\n if self.dims_order == dims_order:\n return\n self._dims_order = dims_order\n self._update_displayed_data()\n\n @property\n def dims_displayed(self):\n \"\"\"tuple: Dimensions that are displayed.\"\"\"\n return self.dims_order[-self.ndisplay :]\n\n @property\n def dims_not_displayed(self):\n \"\"\"tuple: Dimensions that are not displayed.\"\"\"\n return self.dims_order[: -self.ndisplay]\n\n @property\n def data_displayed(self):\n \"\"\"(N, 2) array: Vertices of the shape that are currently displayed.\"\"\"\n return self.data[:, self.dims_displayed]\n\n @property\n def edge_width(self):\n \"\"\"float: thickness of lines and edges.\n \"\"\"\n return self._edge_width\n\n @edge_width.setter\n def edge_width(self, edge_width):\n self._edge_width = edge_width\n\n @property\n def edge_color(self):\n \"\"\"Color, ColorArray: color of edges\n \"\"\"\n return self._edge_color\n\n @edge_color.setter\n def edge_color(self, edge_color):\n self._edge_color = Color(edge_color)\n if type(edge_color) is str:\n self._edge_color_name = edge_color\n else:\n rgb = tuple([int(255 * x) for x in self._edge_color.rgba[:3]])\n self._edge_color_name = '#%02x%02x%02x' % rgb\n\n @property\n def face_color(self):\n \"\"\"Color, ColorArray: color of faces\n \"\"\"\n return self._face_color\n\n @face_color.setter\n def face_color(self, face_color):\n self._face_color = Color(face_color)\n if type(face_color) is str:\n self._face_color_name = face_color\n else:\n rgb = tuple([int(255 * x) for x in self._face_color.rgba[:3]])\n self._face_color_name = '#%02x%02x%02x' % rgb\n\n @property\n def opacity(self):\n \"\"\"float: opacity of shape\n \"\"\"\n return self._opacity\n\n @opacity.setter\n def opacity(self, opacity):\n self._opacity = opacity\n\n @property\n def svg_props(self):\n \"\"\"dict: color and width properties in the svg specification\n \"\"\"\n width = str(self.edge_width)\n face_color = (255 * self.face_color.rgba).astype(np.int)\n fill = f'rgb{tuple(face_color[:3])}'\n edge_color = (255 * self.edge_color.rgba).astype(np.int)\n stroke = f'rgb{tuple(edge_color[:3])}'\n opacity = str(self.opacity)\n\n # Currently not using fill or stroke opacity - only global opacity\n # as otherwise leads to unexpected behavior when reading svg into\n # other applications\n # fill_opacity = f'{self.opacity*self.face_color.rgba[3]}'\n # stroke_opacity = f'{self.opacity*self.edge_color.rgba[3]}'\n\n props = {\n 'fill': fill,\n 'stroke': stroke,\n 'stroke-width': width,\n 'opacity': opacity,\n }\n\n return props\n\n @property\n def z_index(self):\n \"\"\"int: z order priority of shape. 
Shapes with higher z order displayed\n ontop of others.\n \"\"\"\n return self._z_index\n\n @z_index.setter\n def z_index(self, z_index):\n self._z_index = z_index\n\n def _set_meshes(self, data, closed=True, face=True, edge=True):\n \"\"\"Sets the face and edge meshes from a set of points.\n\n Parameters\n ----------\n data : np.ndarray\n Nx2 or Nx3 array specifying the shape to be triangulated\n closed : bool\n Bool which determines if the edge is closed or not\n face : bool\n Bool which determines if the face need to be traingulated\n edge : bool\n Bool which determines if the edge need to be traingulated\n \"\"\"\n if edge:\n centers, offsets, triangles = triangulate_edge(data, closed=closed)\n self._edge_vertices = centers\n self._edge_offsets = offsets\n self._edge_triangles = triangles\n else:\n self._edge_vertices = np.empty((0, self.ndisplay))\n self._edge_offsets = np.empty((0, self.ndisplay))\n self._edge_triangles = np.empty((0, 3), dtype=np.uint32)\n\n if face:\n clean_data = np.array(\n [\n p\n for i, p in enumerate(data)\n if i == 0 or not np.all(p == data[i - 1])\n ]\n )\n\n if not is_collinear(clean_data[:, -2:]):\n if clean_data.shape[1] == 2:\n vertices, triangles = triangulate_face(clean_data)\n elif len(np.unique(clean_data[:, 0])) == 1:\n val = np.unique(clean_data[:, 0])\n vertices, triangles = triangulate_face(clean_data[:, -2:])\n exp = np.expand_dims(np.repeat(val, len(vertices)), axis=1)\n vertices = np.concatenate([exp, vertices], axis=1)\n else:\n triangles = []\n vertices = []\n if len(triangles) > 0:\n self._face_vertices = vertices\n self._face_triangles = triangles\n else:\n self._face_vertices = np.empty((0, self.ndisplay))\n self._face_triangles = np.empty((0, 3), dtype=np.uint32)\n else:\n self._face_vertices = np.empty((0, self.ndisplay))\n self._face_triangles = np.empty((0, 3), dtype=np.uint32)\n else:\n self._face_vertices = np.empty((0, self.ndisplay))\n self._face_triangles = np.empty((0, 3), dtype=np.uint32)\n\n def transform(self, transform):\n \"\"\"Performs a linear transform on the shape\n\n Parameters\n ----------\n transform : np.ndarray\n 2x2 array specifying linear transform.\n \"\"\"\n self._box = self._box @ transform.T\n self._data[:, self.dims_displayed] = (\n self._data[:, self.dims_displayed] @ transform.T\n )\n self._face_vertices = self._face_vertices @ transform.T\n\n points = self.data_displayed\n\n centers, offsets, triangles = triangulate_edge(\n points, closed=self._closed\n )\n self._edge_vertices = centers\n self._edge_offsets = offsets\n self._edge_triangles = triangles\n\n def shift(self, shift):\n \"\"\"Performs a 2D shift on the shape\n\n Parameters\n ----------\n shift : np.ndarray\n length 2 array specifying shift of shapes.\n \"\"\"\n shift = np.array(shift)\n\n self._face_vertices = self._face_vertices + shift\n self._edge_vertices = self._edge_vertices + shift\n self._box = self._box + shift\n self._data[:, self.dims_displayed] = self.data_displayed + shift\n\n def scale(self, scale, center=None):\n \"\"\"Performs a scaling on the shape\n\n Parameters\n ----------\n scale : float, list\n scalar or list specifying rescaling of shape.\n center : list\n length 2 list specifying coordinate of center of scaling.\n \"\"\"\n if isinstance(scale, (list, np.ndarray)):\n transform = np.array([[scale[0], 0], [0, scale[1]]])\n else:\n transform = np.array([[scale, 0], [0, scale]])\n if center is None:\n self.transform(transform)\n else:\n self.shift(-center)\n self.transform(transform)\n self.shift(center)\n\n def rotate(self, 
angle, center=None):\n \"\"\"Performs a rotation on the shape\n\n Parameters\n ----------\n angle : float\n angle specifying rotation of shape in degrees. CCW is positive.\n center : list\n length 2 list specifying coordinate of fixed point of the rotation.\n \"\"\"\n theta = np.radians(angle)\n transform = np.array(\n [[np.cos(theta), np.sin(theta)], [-np.sin(theta), np.cos(theta)]]\n )\n if center is None:\n self.transform(transform)\n else:\n self.shift(-center)\n self.transform(transform)\n self.shift(center)\n\n def flip(self, axis, center=None):\n \"\"\"Performs a flip on the shape, either horizontal or vertical.\n\n Parameters\n ----------\n axis : int\n integer specifying axis of flip. `0` flips horizontal, `1` flips\n vertical.\n center : list\n length 2 list specifying coordinate of center of flip axes.\n \"\"\"\n if axis == 0:\n transform = np.array([[1, 0], [0, -1]])\n elif axis == 1:\n transform = np.array([[-1, 0], [0, 1]])\n else:\n raise ValueError(\n \"\"\"Axis not recognized, must be one of \"{0, 1}\"\n \"\"\"\n )\n if center is None:\n self.transform(transform)\n else:\n self.shift(-center)\n self.transform(transform)\n self.shift(-center)\n\n def to_mask(self, mask_shape=None, zoom_factor=1, offset=[0, 0]):\n \"\"\"Convert the shape vertices to a boolean mask.\n\n Set points to `True` if they are lying inside the shape if the shape is\n filled, or if they are lying along the boundary of the shape if the\n shape is not filled. Negative points or points outside the mask_shape\n after the zoom and offset are clipped.\n\n Parameters\n ----------\n mask_shape : (D,) array\n Shape of mask to be generated. If non specified, takes the max of\n the displayed vertices.\n zoom_factor : float\n Premultiplier applied to coordinates before generating mask. Used\n for generating as downsampled mask.\n offset : 2-tuple\n Offset subtracted from coordinates before multiplying by the\n zoom_factor. 
Used for putting negative coordinates into the mask.\n\n Returns\n ----------\n mask : np.ndarray\n Boolean array with `True` for points inside the shape\n \"\"\"\n if mask_shape is None:\n mask_shape = np.round(self.data_displayed.max(axis=0)).astype(\n 'int'\n )\n\n if len(mask_shape) == 2:\n embedded = False\n shape_plane = mask_shape\n elif len(mask_shape) == self.data.shape[1]:\n embedded = True\n shape_plane = [mask_shape[d] for d in self.dims_displayed]\n else:\n raise ValueError(\n f\"\"\"mask shape length must either be 2 or the same\n as the dimensionality of the shape, expected {self.data.shape[1]}\n got {len(mask_shape)}.\"\"\"\n )\n\n if self._use_face_vertices:\n data = self._face_vertices\n else:\n data = self.data_displayed\n\n data = data[:, -len(shape_plane) :]\n\n if self._filled:\n mask_p = poly_to_mask(shape_plane, (data - offset) * zoom_factor)\n else:\n mask_p = path_to_mask(shape_plane, (data - offset) * zoom_factor)\n\n # If the mask is to be embedded in a larger array, compute array\n # and embed as a slice.\n if embedded:\n mask = np.zeros(mask_shape, dtype=bool)\n slice_key = [0] * len(mask_shape)\n j = 0\n for i in range(len(mask_shape)):\n if i in self.dims_displayed:\n slice_key[i] = slice(None)\n else:\n slice_key[i] = slice(\n self.slice_key[0, j], self.slice_key[1, j] + 1\n )\n j += 1\n displayed_order = np.array(copy(self.dims_displayed))\n displayed_order[np.argsort(displayed_order)] = list(\n range(len(displayed_order))\n )\n mask[tuple(slice_key)] = mask_p.transpose(displayed_order)\n else:\n mask = mask_p\n\n return mask\n\n @abstractmethod\n def to_xml(self):\n # user writes own docstring\n raise NotImplementedError()\n", "path": "napari/layers/shapes/shape_models/shape.py"}]} |
gh_patches_debug_1068 | rasdani/github-patches | git_diff | cleanlab__cleanlab-1000 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Class Imbalance issue checker should not run if labels are not provided in Datalab
```
from cleanlab import Datalab
lab = Datalab(data=df_without_labels)
lab.find_issues()
```
It should not run the ClassImbalanceIssueManager, but it tries to anyway.
Just add a check that the Datalab had labels specified, then it can run the ClassImbalanceIssueManager in find_issues.
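
As a rough sketch of that check (the standalone helper and its name below are illustrative only; in cleanlab itself the guard would sit alongside the existing `drop_label_check` in `IssueFinder.get_available_issue_types` and use `self.datalab.has_labels`):

```
# Hypothetical helper: illustrates the filtering logic, not cleanlab's actual API.
from typing import Any, Dict


def drop_issue_types_that_need_labels(
    issue_types: Dict[str, Any], has_labels: bool, task: str
) -> Dict[str, Any]:
    """Return a copy of `issue_types` without checks that require labels."""
    filtered = dict(issue_types)
    # class_imbalance is meaningless without labels, so skip it for
    # classification datasets constructed without a label column.
    if task == "classification" and not has_labels:
        filtered.pop("class_imbalance", None)
    return filtered


# Example: a Datalab built without labels should not run class_imbalance.
defaults = {"null": {}, "class_imbalance": {}}
print(drop_issue_types_that_need_labels(defaults, has_labels=False, task="classification"))
# -> {'null': {}}
```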
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cleanlab/datalab/internal/issue_finder.py`
Content:
```
1 # Copyright (C) 2017-2023 Cleanlab Inc.
2 # This file is part of cleanlab.
3 #
4 # cleanlab is free software: you can redistribute it and/or modify
5 # it under the terms of the GNU Affero General Public License as published
6 # by the Free Software Foundation, either version 3 of the License, or
7 # (at your option) any later version.
8 #
9 # cleanlab is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU Affero General Public License for more details.
13 #
14 # You should have received a copy of the GNU Affero General Public License
15 # along with cleanlab. If not, see <https://www.gnu.org/licenses/>.
16 """
17 Module for the :class:`IssueFinder` class, which is responsible for configuring,
18 creating and running issue managers.
19
20 It determines which types of issues to look for, instatiates the IssueManagers
21 via a factory, run the issue managers
22 (:py:meth:`IssueManager.find_issues <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager.find_issues>`),
23 and collects the results to :py:class:`DataIssues <cleanlab.datalab.internal.data_issues.DataIssues>`.
24
25 .. note::
26
27 This module is not intended to be used directly. Instead, use the public-facing
28 :py:meth:`Datalab.find_issues <cleanlab.datalab.datalab.Datalab.find_issues>` method.
29 """
30 from __future__ import annotations
31
32 import warnings
33 from typing import TYPE_CHECKING, Any, Dict, Optional
34
35 import numpy as np
36 from scipy.sparse import csr_matrix
37
38 from cleanlab.datalab.internal.issue_manager_factory import (
39 _IssueManagerFactory,
40 list_default_issue_types,
41 )
42 from cleanlab.datalab.internal.model_outputs import (
43 MultiClassPredProbs,
44 RegressionPredictions,
45 MultiLabelPredProbs,
46 )
47 from cleanlab.datalab.internal.task import Task
48
49 if TYPE_CHECKING: # pragma: no cover
50 import numpy.typing as npt
51 from typing import Callable
52
53 from cleanlab.datalab.datalab import Datalab
54
55
56 _CLASSIFICATION_ARGS_DICT = {
57 "label": ["pred_probs", "features"],
58 "outlier": ["pred_probs", "features", "knn_graph"],
59 "near_duplicate": ["features", "knn_graph"],
60 "non_iid": ["pred_probs", "features", "knn_graph"],
61 "underperforming_group": ["pred_probs", "features", "knn_graph", "cluster_ids"],
62 "data_valuation": ["knn_graph"],
63 "class_imbalance": [],
64 "null": ["features"],
65 }
66 _REGRESSION_ARGS_DICT = {
67 "label": ["features", "predictions"],
68 "outlier": ["features", "knn_graph"],
69 "near_duplicate": ["features", "knn_graph"],
70 "non_iid": ["features", "knn_graph"],
71 "null": ["features"],
72 }
73
74 _MULTILABEL_ARGS_DICT = {
75 "label": ["pred_probs"],
76 "outlier": ["features", "knn_graph"],
77 "near_duplicate": ["features", "knn_graph"],
78 "non_iid": ["features", "knn_graph"],
79 "null": ["features"],
80 }
81
82
83 def _resolve_required_args_for_classification(**kwargs):
84 """Resolves the required arguments for each issue type intended for classification tasks."""
85 initial_args_dict = _CLASSIFICATION_ARGS_DICT.copy()
86 args_dict = {
87 issue_type: {arg: kwargs.get(arg, None) for arg in initial_args_dict[issue_type]}
88 for issue_type in initial_args_dict
89 }
90
91 # Some issue types (like class-imbalance) have no required args.
92 # This conditional lambda is used to include them in args dict.
93 keep_empty_argument = lambda k: not len(_CLASSIFICATION_ARGS_DICT[k])
94
95 # Remove None values from argument list, rely on default values in IssueManager
96 args_dict = {
97 k: {k2: v2 for k2, v2 in v.items() if v2 is not None}
98 for k, v in args_dict.items()
99 if (v or keep_empty_argument(k))
100 }
101
102 # Prefer `knn_graph` over `features` if both are provided.
103 for v in args_dict.values():
104 if "cluster_ids" in v and ("knn_graph" in v or "features" in v):
105 warnings.warn(
106 "`cluster_ids` have been provided with `knn_graph` or `features`."
107 "Issue managers that require cluster labels will prefer"
108 "`cluster_ids` over computation of cluster labels using"
109 "`knn_graph` or `features`. "
110 )
111 if "knn_graph" in v and "features" in v:
112 warnings.warn(
113 "Both `features` and `knn_graph` were provided. "
114 "Most issue managers will likely prefer using `knn_graph` "
115 "instead of `features` for efficiency."
116 )
117
118 # Only keep issue types that have at least one argument
119 # or those that require no arguments.
120 args_dict = {k: v for k, v in args_dict.items() if (v or keep_empty_argument(k))}
121
122 return args_dict
123
124
125 def _resolve_required_args_for_regression(**kwargs):
126 """Resolves the required arguments for each issue type intended for regression tasks."""
127 initial_args_dict = _REGRESSION_ARGS_DICT.copy()
128 args_dict = {
129 issue_type: {arg: kwargs.get(arg, None) for arg in initial_args_dict[issue_type]}
130 for issue_type in initial_args_dict
131 }
132 # Some issue types have no required args.
133 # This conditional lambda is used to include them in args dict.
134 keep_empty_argument = lambda k: not len(_REGRESSION_ARGS_DICT[k])
135
136 # Remove None values from argument list, rely on default values in IssueManager
137 args_dict = {
138 k: {k2: v2 for k2, v2 in v.items() if v2 is not None}
139 for k, v in args_dict.items()
140 if v or keep_empty_argument(k)
141 }
142
143 # Only keep issue types that have at least one argument
144 # or those that require no arguments.
145 args_dict = {k: v for k, v in args_dict.items() if (v or keep_empty_argument(k))}
146
147 return args_dict
148
149
150 def _resolve_required_args_for_multilabel(**kwargs):
151 """Resolves the required arguments for each issue type intended for multilabel tasks."""
152 initial_args_dict = _MULTILABEL_ARGS_DICT.copy()
153 args_dict = {
154 issue_type: {arg: kwargs.get(arg, None) for arg in initial_args_dict[issue_type]}
155 for issue_type in initial_args_dict
156 }
157 # Some issue types have no required args.
158 # This conditional lambda is used to include them in args dict.
159 keep_empty_argument = lambda k: not len(_MULTILABEL_ARGS_DICT[k])
160
161 # Remove None values from argument list, rely on default values in IssueManager
162 args_dict = {
163 k: {k2: v2 for k2, v2 in v.items() if v2 is not None}
164 for k, v in args_dict.items()
165 if v or keep_empty_argument(k) # Allow label issues to require no arguments
166 }
167
168 # Only keep issue types that have at least one argument
169 # or those that require no arguments.
170 args_dict = {k: v for k, v in args_dict.items() if (v or keep_empty_argument(k))}
171
172 return args_dict
173
174
175 def _select_strategy_for_resolving_required_args(task: Task) -> Callable:
176 """Helper function that selects the strategy for resolving required arguments for each issue type.
177
178 Each strategy resolves the required arguments for each issue type.
179
180 This is a helper function that filters out any issue manager
181 that does not have the required arguments.
182
183 This does not consider custom hyperparameters for each issue type.
184
185 Parameters
186 ----------
187 task : str
188 The type of machine learning task that the dataset is used for.
189
190 Returns
191 -------
192 args_dict :
193 Dictionary of required arguments for each issue type, if available.
194 """
195 strategies = {
196 Task.CLASSIFICATION: _resolve_required_args_for_classification,
197 Task.REGRESSION: _resolve_required_args_for_regression,
198 Task.MULTILABEL: _resolve_required_args_for_multilabel,
199 }
200 selected_strategy = strategies.get(task, None)
201 if selected_strategy is None:
202 raise ValueError(f"No strategy for resolving required arguments for task '{task}'")
203 return selected_strategy
204
205
206 class IssueFinder:
207 """
208 The IssueFinder class is responsible for managing the process of identifying
209 issues in the dataset by handling the creation and execution of relevant
210 IssueManagers. It serves as a coordinator or helper class for the Datalab class
211 to encapsulate the specific behavior of the issue finding process.
212
213 At a high level, the IssueFinder is responsible for:
214
215 - Determining which types of issues to look for.
216 - Instantiating the appropriate IssueManagers using a factory.
217 - Running the IssueManagers' `find_issues` methods.
218 - Collecting the results into a DataIssues instance.
219
220 Parameters
221 ----------
222 datalab : Datalab
223 The Datalab instance associated with this IssueFinder.
224
225 task : str
226 The type of machine learning task that the dataset is used for.
227
228 verbosity : int
229 Controls the verbosity of the output during the issue finding process.
230
231 Note
232 ----
233 This class is not intended to be used directly. Instead, use the
234 `Datalab.find_issues` method which internally utilizes an IssueFinder instance.
235 """
236
237 def __init__(self, datalab: "Datalab", task: Task, verbosity=1):
238 self.datalab = datalab
239 self.task = task
240 self.verbosity = verbosity
241
242 def find_issues(
243 self,
244 *,
245 pred_probs: Optional[np.ndarray] = None,
246 features: Optional[npt.NDArray] = None,
247 knn_graph: Optional[csr_matrix] = None,
248 issue_types: Optional[Dict[str, Any]] = None,
249 ) -> None:
250 """
251 Checks the dataset for all sorts of common issues in real-world data (in both labels and feature values).
252
253 You can use Datalab to find issues in your data, utilizing *any* model you have already trained.
254 This method only interacts with your model via its predictions or embeddings (and other functions thereof).
255 The more of these inputs you provide, the more types of issues Datalab can detect in your dataset/labels.
256 If you provide a subset of these inputs, Datalab will output what insights it can based on the limited information from your model.
257
258 Note
259 ----
260 This method is not intended to be used directly. Instead, use the
261 :py:meth:`Datalab.find_issues <cleanlab.datalab.datalab.Datalab.find_issues>` method.
262
263 Note
264 ----
265 The issues are saved in the ``self.datalab.data_issues.issues`` attribute, but are not returned.
266
267 Parameters
268 ----------
269 pred_probs :
270 Out-of-sample predicted class probabilities made by the model for every example in the dataset.
271 To best detect label issues, provide this input obtained from the most accurate model you can produce.
272
273 If provided for classification, this must be a 2D array with shape ``(num_examples, K)`` where K is the number of classes in the dataset.
274 If provided for regression, this must be a 1D array with shape ``(num_examples,)``.
275
276 features : Optional[np.ndarray]
277 Feature embeddings (vector representations) of every example in the dataset.
278
279 If provided, this must be a 2D array with shape (num_examples, num_features).
280
281 knn_graph :
282 Sparse matrix representing distances between examples in the dataset in a k nearest neighbor graph.
283
284 For details, refer to the documentation of the same argument in :py:class:`Datalab.find_issues <cleanlab.datalab.datalab.Datalab.find_issues>`
285
286 issue_types :
287 Collection specifying which types of issues to consider in audit and any non-default parameter settings to use.
288 If unspecified, a default set of issue types and recommended parameter settings is considered.
289
290 This is a dictionary of dictionaries, where the keys are the issue types of interest
291 and the values are dictionaries of parameter values that control how each type of issue is detected (only for advanced users).
292 More specifically, the values are constructor keyword arguments passed to the corresponding ``IssueManager``,
293 which is responsible for detecting the particular issue type.
294
295 .. seealso::
296 :py:class:`IssueManager <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager>`
297 """
298
299 issue_types_copy = self.get_available_issue_types(
300 pred_probs=pred_probs,
301 features=features,
302 knn_graph=knn_graph,
303 issue_types=issue_types,
304 )
305
306 if not issue_types_copy:
307 return None
308
309 new_issue_managers = [
310 factory(datalab=self.datalab, **issue_types_copy.get(factory.issue_name, {}))
311 for factory in _IssueManagerFactory.from_list(
312 list(issue_types_copy.keys()), task=self.task
313 )
314 ]
315
316 failed_managers = []
317 data_issues = self.datalab.data_issues
318 for issue_manager, arg_dict in zip(new_issue_managers, issue_types_copy.values()):
319 try:
320 if self.verbosity:
321 print(f"Finding {issue_manager.issue_name} issues ...")
322 issue_manager.find_issues(**arg_dict)
323 data_issues.collect_statistics(issue_manager)
324 data_issues.collect_issues_from_issue_manager(issue_manager)
325 except Exception as e:
326 print(f"Error in {issue_manager.issue_name}: {e}")
327 failed_managers.append(issue_manager)
328 if failed_managers:
329 print(f"Failed to check for these issue types: {failed_managers}")
330 data_issues.set_health_score()
331
332 def _set_issue_types(
333 self,
334 issue_types: Optional[Dict[str, Any]],
335 required_defaults_dict: Dict[str, Any],
336 ) -> Dict[str, Any]:
337 """Set necessary configuration for each IssueManager in a dictionary.
338
339 While each IssueManager defines default values for its arguments,
340 the Datalab class needs to organize the calls to each IssueManager
341 with different arguments, some of which may be user-provided.
342
343 Parameters
344 ----------
345 issue_types :
346 Dictionary of issue types and argument configuration for their respective IssueManagers.
347 If None, then the `required_defaults_dict` is used.
348
349 required_defaults_dict :
350 Dictionary of default parameter configuration for each issue type.
351
352 Returns
353 -------
354 issue_types_copy :
355 Dictionary of issue types and their parameter configuration.
356 The input `issue_types` is copied and updated with the necessary default values.
357 """
358 if issue_types is not None:
359 issue_types_copy = issue_types.copy()
360 self._check_missing_args(required_defaults_dict, issue_types_copy)
361 else:
362 issue_types_copy = required_defaults_dict.copy()
363 # keep only default issue types
364 issue_types_copy = {
365 issue: issue_types_copy[issue]
366 for issue in list_default_issue_types(self.task)
367 if issue in issue_types_copy
368 }
369
370 # Check that all required arguments are provided.
371 self._validate_issue_types_dict(issue_types_copy, required_defaults_dict)
372
373 # Remove None values from argument list, rely on default values in IssueManager
374 for key, value in issue_types_copy.items():
375 issue_types_copy[key] = {k: v for k, v in value.items() if v is not None}
376
377 return issue_types_copy
378
379 @staticmethod
380 def _check_missing_args(required_defaults_dict, issue_types):
381 for key, issue_type_value in issue_types.items():
382 missing_args = set(required_defaults_dict.get(key, {})) - set(issue_type_value.keys())
383 # Impute missing arguments with default values.
384 missing_dict = {
385 missing_arg: required_defaults_dict[key][missing_arg]
386 for missing_arg in missing_args
387 }
388 issue_types[key].update(missing_dict)
389
390 @staticmethod
391 def _validate_issue_types_dict(
392 issue_types: Dict[str, Any], required_defaults_dict: Dict[str, Any]
393 ) -> None:
394 missing_required_args_dict = {}
395 for issue_name, required_args in required_defaults_dict.items():
396 if issue_name in issue_types:
397 missing_args = set(required_args.keys()) - set(issue_types[issue_name].keys())
398 if missing_args:
399 missing_required_args_dict[issue_name] = missing_args
400 if any(missing_required_args_dict.values()):
401 error_message = ""
402 for issue_name, missing_required_args in missing_required_args_dict.items():
403 error_message += f"Required argument {missing_required_args} for issue type {issue_name} was not provided.\n"
404 raise ValueError(error_message)
405
406 def get_available_issue_types(self, **kwargs):
407 """Returns a dictionary of issue types that can be used in :py:meth:`Datalab.find_issues
408 <cleanlab.datalab.datalab.Datalab.find_issues>` method."""
409
410 pred_probs = kwargs.get("pred_probs", None)
411 features = kwargs.get("features", None)
412 knn_graph = kwargs.get("knn_graph", None)
413 issue_types = kwargs.get("issue_types", None)
414
415 model_output = None
416 if pred_probs is not None:
417 model_output_dict = {
418 Task.REGRESSION: RegressionPredictions,
419 Task.CLASSIFICATION: MultiClassPredProbs,
420 Task.MULTILABEL: MultiLabelPredProbs,
421 }
422
423 model_output_class = model_output_dict.get(self.task)
424 if model_output_class is None:
425 raise ValueError(f"Unknown task type '{self.task}'")
426
427 model_output = model_output_class(pred_probs)
428
429 if model_output is not None:
430 # A basic trick to assign the model output to the correct argument
431 # E.g. Datalab accepts only `pred_probs`, but those are assigned to the `predictions` argument for regression-related issue_managers
432 kwargs.update({model_output.argument: model_output.collect()})
433
434 # Determine which parameters are required for each issue type
435 strategy_for_resolving_required_args = _select_strategy_for_resolving_required_args(
436 self.task
437 )
438 required_args_per_issue_type = strategy_for_resolving_required_args(**kwargs)
439
440 issue_types_copy = self._set_issue_types(issue_types, required_args_per_issue_type)
441 if issue_types is None:
442 # Only run default issue types if no issue types are specified
443 issue_types_copy = {
444 issue: issue_types_copy[issue]
445 for issue in list_default_issue_types(self.task)
446 if issue in issue_types_copy
447 }
448 drop_label_check = (
449 "label" in issue_types_copy
450 and not self.datalab.has_labels
451 and self.task != Task.REGRESSION
452 )
453
454 if drop_label_check:
455 warnings.warn("No labels were provided. " "The 'label' issue type will not be run.")
456 issue_types_copy.pop("label")
457
458 outlier_check_needs_features = (
459 self.task == "classification"
460 and "outlier" in issue_types_copy
461 and not self.datalab.has_labels
462 )
463 if outlier_check_needs_features:
464 no_features = features is None
465 no_knn_graph = knn_graph is None
466 pred_probs_given = issue_types_copy["outlier"].get("pred_probs", None) is not None
467
468 only_pred_probs_given = pred_probs_given and no_features and no_knn_graph
469 if only_pred_probs_given:
470 warnings.warn(
471 "No labels were provided. " "The 'outlier' issue type will not be run."
472 )
473 issue_types_copy.pop("outlier")
474
475 return issue_types_copy
476
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cleanlab/datalab/internal/issue_finder.py b/cleanlab/datalab/internal/issue_finder.py
--- a/cleanlab/datalab/internal/issue_finder.py
+++ b/cleanlab/datalab/internal/issue_finder.py
@@ -472,4 +472,12 @@
)
issue_types_copy.pop("outlier")
+ drop_class_imbalance_check = (
+ "class_imbalance" in issue_types_copy
+ and not self.datalab.has_labels
+ and self.task == Task.CLASSIFICATION
+ )
+ if drop_class_imbalance_check:
+ issue_types_copy.pop("class_imbalance")
+
return issue_types_copy
| {"golden_diff": "diff --git a/cleanlab/datalab/internal/issue_finder.py b/cleanlab/datalab/internal/issue_finder.py\n--- a/cleanlab/datalab/internal/issue_finder.py\n+++ b/cleanlab/datalab/internal/issue_finder.py\n@@ -472,4 +472,12 @@\n )\n issue_types_copy.pop(\"outlier\")\n \n+ drop_class_imbalance_check = (\n+ \"class_imbalance\" in issue_types_copy\n+ and not self.datalab.has_labels\n+ and self.task == Task.CLASSIFICATION\n+ )\n+ if drop_class_imbalance_check:\n+ issue_types_copy.pop(\"class_imbalance\")\n+\n return issue_types_copy\n", "issue": "Class Imbalance issue checker should not run if labels are not provided in Datalab\n```\r\nfrom cleanlab import Datalab\r\n\r\nlab = Datalab(data=df_without_labels)\r\nlab.find_issues()\r\n```\r\n\r\nIt should not run the ClassImbalanceIssueManager, but it tries to anyway.\r\n\r\nJust add a check that the Datlab had labels specified, then it can run the ClassImbalanceIssueManager in find_issues.\n", "before_files": [{"content": "# Copyright (C) 2017-2023 Cleanlab Inc.\n# This file is part of cleanlab.\n#\n# cleanlab is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# cleanlab is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with cleanlab. If not, see <https://www.gnu.org/licenses/>.\n\"\"\"\nModule for the :class:`IssueFinder` class, which is responsible for configuring,\ncreating and running issue managers.\n\nIt determines which types of issues to look for, instatiates the IssueManagers\nvia a factory, run the issue managers\n(:py:meth:`IssueManager.find_issues <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager.find_issues>`),\nand collects the results to :py:class:`DataIssues <cleanlab.datalab.internal.data_issues.DataIssues>`.\n\n.. note::\n\n This module is not intended to be used directly. 
Instead, use the public-facing\n :py:meth:`Datalab.find_issues <cleanlab.datalab.datalab.Datalab.find_issues>` method.\n\"\"\"\nfrom __future__ import annotations\n\nimport warnings\nfrom typing import TYPE_CHECKING, Any, Dict, Optional\n\nimport numpy as np\nfrom scipy.sparse import csr_matrix\n\nfrom cleanlab.datalab.internal.issue_manager_factory import (\n _IssueManagerFactory,\n list_default_issue_types,\n)\nfrom cleanlab.datalab.internal.model_outputs import (\n MultiClassPredProbs,\n RegressionPredictions,\n MultiLabelPredProbs,\n)\nfrom cleanlab.datalab.internal.task import Task\n\nif TYPE_CHECKING: # pragma: no cover\n import numpy.typing as npt\n from typing import Callable\n\n from cleanlab.datalab.datalab import Datalab\n\n\n_CLASSIFICATION_ARGS_DICT = {\n \"label\": [\"pred_probs\", \"features\"],\n \"outlier\": [\"pred_probs\", \"features\", \"knn_graph\"],\n \"near_duplicate\": [\"features\", \"knn_graph\"],\n \"non_iid\": [\"pred_probs\", \"features\", \"knn_graph\"],\n \"underperforming_group\": [\"pred_probs\", \"features\", \"knn_graph\", \"cluster_ids\"],\n \"data_valuation\": [\"knn_graph\"],\n \"class_imbalance\": [],\n \"null\": [\"features\"],\n}\n_REGRESSION_ARGS_DICT = {\n \"label\": [\"features\", \"predictions\"],\n \"outlier\": [\"features\", \"knn_graph\"],\n \"near_duplicate\": [\"features\", \"knn_graph\"],\n \"non_iid\": [\"features\", \"knn_graph\"],\n \"null\": [\"features\"],\n}\n\n_MULTILABEL_ARGS_DICT = {\n \"label\": [\"pred_probs\"],\n \"outlier\": [\"features\", \"knn_graph\"],\n \"near_duplicate\": [\"features\", \"knn_graph\"],\n \"non_iid\": [\"features\", \"knn_graph\"],\n \"null\": [\"features\"],\n}\n\n\ndef _resolve_required_args_for_classification(**kwargs):\n \"\"\"Resolves the required arguments for each issue type intended for classification tasks.\"\"\"\n initial_args_dict = _CLASSIFICATION_ARGS_DICT.copy()\n args_dict = {\n issue_type: {arg: kwargs.get(arg, None) for arg in initial_args_dict[issue_type]}\n for issue_type in initial_args_dict\n }\n\n # Some issue types (like class-imbalance) have no required args.\n # This conditional lambda is used to include them in args dict.\n keep_empty_argument = lambda k: not len(_CLASSIFICATION_ARGS_DICT[k])\n\n # Remove None values from argument list, rely on default values in IssueManager\n args_dict = {\n k: {k2: v2 for k2, v2 in v.items() if v2 is not None}\n for k, v in args_dict.items()\n if (v or keep_empty_argument(k))\n }\n\n # Prefer `knn_graph` over `features` if both are provided.\n for v in args_dict.values():\n if \"cluster_ids\" in v and (\"knn_graph\" in v or \"features\" in v):\n warnings.warn(\n \"`cluster_ids` have been provided with `knn_graph` or `features`.\"\n \"Issue managers that require cluster labels will prefer\"\n \"`cluster_ids` over computation of cluster labels using\"\n \"`knn_graph` or `features`. \"\n )\n if \"knn_graph\" in v and \"features\" in v:\n warnings.warn(\n \"Both `features` and `knn_graph` were provided. 
\"\n \"Most issue managers will likely prefer using `knn_graph` \"\n \"instead of `features` for efficiency.\"\n )\n\n # Only keep issue types that have at least one argument\n # or those that require no arguments.\n args_dict = {k: v for k, v in args_dict.items() if (v or keep_empty_argument(k))}\n\n return args_dict\n\n\ndef _resolve_required_args_for_regression(**kwargs):\n \"\"\"Resolves the required arguments for each issue type intended for regression tasks.\"\"\"\n initial_args_dict = _REGRESSION_ARGS_DICT.copy()\n args_dict = {\n issue_type: {arg: kwargs.get(arg, None) for arg in initial_args_dict[issue_type]}\n for issue_type in initial_args_dict\n }\n # Some issue types have no required args.\n # This conditional lambda is used to include them in args dict.\n keep_empty_argument = lambda k: not len(_REGRESSION_ARGS_DICT[k])\n\n # Remove None values from argument list, rely on default values in IssueManager\n args_dict = {\n k: {k2: v2 for k2, v2 in v.items() if v2 is not None}\n for k, v in args_dict.items()\n if v or keep_empty_argument(k)\n }\n\n # Only keep issue types that have at least one argument\n # or those that require no arguments.\n args_dict = {k: v for k, v in args_dict.items() if (v or keep_empty_argument(k))}\n\n return args_dict\n\n\ndef _resolve_required_args_for_multilabel(**kwargs):\n \"\"\"Resolves the required arguments for each issue type intended for multilabel tasks.\"\"\"\n initial_args_dict = _MULTILABEL_ARGS_DICT.copy()\n args_dict = {\n issue_type: {arg: kwargs.get(arg, None) for arg in initial_args_dict[issue_type]}\n for issue_type in initial_args_dict\n }\n # Some issue types have no required args.\n # This conditional lambda is used to include them in args dict.\n keep_empty_argument = lambda k: not len(_MULTILABEL_ARGS_DICT[k])\n\n # Remove None values from argument list, rely on default values in IssueManager\n args_dict = {\n k: {k2: v2 for k2, v2 in v.items() if v2 is not None}\n for k, v in args_dict.items()\n if v or keep_empty_argument(k) # Allow label issues to require no arguments\n }\n\n # Only keep issue types that have at least one argument\n # or those that require no arguments.\n args_dict = {k: v for k, v in args_dict.items() if (v or keep_empty_argument(k))}\n\n return args_dict\n\n\ndef _select_strategy_for_resolving_required_args(task: Task) -> Callable:\n \"\"\"Helper function that selects the strategy for resolving required arguments for each issue type.\n\n Each strategy resolves the required arguments for each issue type.\n\n This is a helper function that filters out any issue manager\n that does not have the required arguments.\n\n This does not consider custom hyperparameters for each issue type.\n\n Parameters\n ----------\n task : str\n The type of machine learning task that the dataset is used for.\n\n Returns\n -------\n args_dict :\n Dictionary of required arguments for each issue type, if available.\n \"\"\"\n strategies = {\n Task.CLASSIFICATION: _resolve_required_args_for_classification,\n Task.REGRESSION: _resolve_required_args_for_regression,\n Task.MULTILABEL: _resolve_required_args_for_multilabel,\n }\n selected_strategy = strategies.get(task, None)\n if selected_strategy is None:\n raise ValueError(f\"No strategy for resolving required arguments for task '{task}'\")\n return selected_strategy\n\n\nclass IssueFinder:\n \"\"\"\n The IssueFinder class is responsible for managing the process of identifying\n issues in the dataset by handling the creation and execution of relevant\n IssueManagers. 
It serves as a coordinator or helper class for the Datalab class\n to encapsulate the specific behavior of the issue finding process.\n\n At a high level, the IssueFinder is responsible for:\n\n - Determining which types of issues to look for.\n - Instantiating the appropriate IssueManagers using a factory.\n - Running the IssueManagers' `find_issues` methods.\n - Collecting the results into a DataIssues instance.\n\n Parameters\n ----------\n datalab : Datalab\n The Datalab instance associated with this IssueFinder.\n\n task : str\n The type of machine learning task that the dataset is used for.\n\n verbosity : int\n Controls the verbosity of the output during the issue finding process.\n\n Note\n ----\n This class is not intended to be used directly. Instead, use the\n `Datalab.find_issues` method which internally utilizes an IssueFinder instance.\n \"\"\"\n\n def __init__(self, datalab: \"Datalab\", task: Task, verbosity=1):\n self.datalab = datalab\n self.task = task\n self.verbosity = verbosity\n\n def find_issues(\n self,\n *,\n pred_probs: Optional[np.ndarray] = None,\n features: Optional[npt.NDArray] = None,\n knn_graph: Optional[csr_matrix] = None,\n issue_types: Optional[Dict[str, Any]] = None,\n ) -> None:\n \"\"\"\n Checks the dataset for all sorts of common issues in real-world data (in both labels and feature values).\n\n You can use Datalab to find issues in your data, utilizing *any* model you have already trained.\n This method only interacts with your model via its predictions or embeddings (and other functions thereof).\n The more of these inputs you provide, the more types of issues Datalab can detect in your dataset/labels.\n If you provide a subset of these inputs, Datalab will output what insights it can based on the limited information from your model.\n\n Note\n ----\n This method is not intended to be used directly. 
Instead, use the\n :py:meth:`Datalab.find_issues <cleanlab.datalab.datalab.Datalab.find_issues>` method.\n\n Note\n ----\n The issues are saved in the ``self.datalab.data_issues.issues`` attribute, but are not returned.\n\n Parameters\n ----------\n pred_probs :\n Out-of-sample predicted class probabilities made by the model for every example in the dataset.\n To best detect label issues, provide this input obtained from the most accurate model you can produce.\n\n If provided for classification, this must be a 2D array with shape ``(num_examples, K)`` where K is the number of classes in the dataset.\n If provided for regression, this must be a 1D array with shape ``(num_examples,)``.\n\n features : Optional[np.ndarray]\n Feature embeddings (vector representations) of every example in the dataset.\n\n If provided, this must be a 2D array with shape (num_examples, num_features).\n\n knn_graph :\n Sparse matrix representing distances between examples in the dataset in a k nearest neighbor graph.\n\n For details, refer to the documentation of the same argument in :py:class:`Datalab.find_issues <cleanlab.datalab.datalab.Datalab.find_issues>`\n\n issue_types :\n Collection specifying which types of issues to consider in audit and any non-default parameter settings to use.\n If unspecified, a default set of issue types and recommended parameter settings is considered.\n\n This is a dictionary of dictionaries, where the keys are the issue types of interest\n and the values are dictionaries of parameter values that control how each type of issue is detected (only for advanced users).\n More specifically, the values are constructor keyword arguments passed to the corresponding ``IssueManager``,\n which is responsible for detecting the particular issue type.\n\n .. 
seealso::\n :py:class:`IssueManager <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager>`\n \"\"\"\n\n issue_types_copy = self.get_available_issue_types(\n pred_probs=pred_probs,\n features=features,\n knn_graph=knn_graph,\n issue_types=issue_types,\n )\n\n if not issue_types_copy:\n return None\n\n new_issue_managers = [\n factory(datalab=self.datalab, **issue_types_copy.get(factory.issue_name, {}))\n for factory in _IssueManagerFactory.from_list(\n list(issue_types_copy.keys()), task=self.task\n )\n ]\n\n failed_managers = []\n data_issues = self.datalab.data_issues\n for issue_manager, arg_dict in zip(new_issue_managers, issue_types_copy.values()):\n try:\n if self.verbosity:\n print(f\"Finding {issue_manager.issue_name} issues ...\")\n issue_manager.find_issues(**arg_dict)\n data_issues.collect_statistics(issue_manager)\n data_issues.collect_issues_from_issue_manager(issue_manager)\n except Exception as e:\n print(f\"Error in {issue_manager.issue_name}: {e}\")\n failed_managers.append(issue_manager)\n if failed_managers:\n print(f\"Failed to check for these issue types: {failed_managers}\")\n data_issues.set_health_score()\n\n def _set_issue_types(\n self,\n issue_types: Optional[Dict[str, Any]],\n required_defaults_dict: Dict[str, Any],\n ) -> Dict[str, Any]:\n \"\"\"Set necessary configuration for each IssueManager in a dictionary.\n\n While each IssueManager defines default values for its arguments,\n the Datalab class needs to organize the calls to each IssueManager\n with different arguments, some of which may be user-provided.\n\n Parameters\n ----------\n issue_types :\n Dictionary of issue types and argument configuration for their respective IssueManagers.\n If None, then the `required_defaults_dict` is used.\n\n required_defaults_dict :\n Dictionary of default parameter configuration for each issue type.\n\n Returns\n -------\n issue_types_copy :\n Dictionary of issue types and their parameter configuration.\n The input `issue_types` is copied and updated with the necessary default values.\n \"\"\"\n if issue_types is not None:\n issue_types_copy = issue_types.copy()\n self._check_missing_args(required_defaults_dict, issue_types_copy)\n else:\n issue_types_copy = required_defaults_dict.copy()\n # keep only default issue types\n issue_types_copy = {\n issue: issue_types_copy[issue]\n for issue in list_default_issue_types(self.task)\n if issue in issue_types_copy\n }\n\n # Check that all required arguments are provided.\n self._validate_issue_types_dict(issue_types_copy, required_defaults_dict)\n\n # Remove None values from argument list, rely on default values in IssueManager\n for key, value in issue_types_copy.items():\n issue_types_copy[key] = {k: v for k, v in value.items() if v is not None}\n\n return issue_types_copy\n\n @staticmethod\n def _check_missing_args(required_defaults_dict, issue_types):\n for key, issue_type_value in issue_types.items():\n missing_args = set(required_defaults_dict.get(key, {})) - set(issue_type_value.keys())\n # Impute missing arguments with default values.\n missing_dict = {\n missing_arg: required_defaults_dict[key][missing_arg]\n for missing_arg in missing_args\n }\n issue_types[key].update(missing_dict)\n\n @staticmethod\n def _validate_issue_types_dict(\n issue_types: Dict[str, Any], required_defaults_dict: Dict[str, Any]\n ) -> None:\n missing_required_args_dict = {}\n for issue_name, required_args in required_defaults_dict.items():\n if issue_name in issue_types:\n missing_args = set(required_args.keys()) - 
set(issue_types[issue_name].keys())\n if missing_args:\n missing_required_args_dict[issue_name] = missing_args\n if any(missing_required_args_dict.values()):\n error_message = \"\"\n for issue_name, missing_required_args in missing_required_args_dict.items():\n error_message += f\"Required argument {missing_required_args} for issue type {issue_name} was not provided.\\n\"\n raise ValueError(error_message)\n\n def get_available_issue_types(self, **kwargs):\n \"\"\"Returns a dictionary of issue types that can be used in :py:meth:`Datalab.find_issues\n <cleanlab.datalab.datalab.Datalab.find_issues>` method.\"\"\"\n\n pred_probs = kwargs.get(\"pred_probs\", None)\n features = kwargs.get(\"features\", None)\n knn_graph = kwargs.get(\"knn_graph\", None)\n issue_types = kwargs.get(\"issue_types\", None)\n\n model_output = None\n if pred_probs is not None:\n model_output_dict = {\n Task.REGRESSION: RegressionPredictions,\n Task.CLASSIFICATION: MultiClassPredProbs,\n Task.MULTILABEL: MultiLabelPredProbs,\n }\n\n model_output_class = model_output_dict.get(self.task)\n if model_output_class is None:\n raise ValueError(f\"Unknown task type '{self.task}'\")\n\n model_output = model_output_class(pred_probs)\n\n if model_output is not None:\n # A basic trick to assign the model output to the correct argument\n # E.g. Datalab accepts only `pred_probs`, but those are assigned to the `predictions` argument for regression-related issue_managers\n kwargs.update({model_output.argument: model_output.collect()})\n\n # Determine which parameters are required for each issue type\n strategy_for_resolving_required_args = _select_strategy_for_resolving_required_args(\n self.task\n )\n required_args_per_issue_type = strategy_for_resolving_required_args(**kwargs)\n\n issue_types_copy = self._set_issue_types(issue_types, required_args_per_issue_type)\n if issue_types is None:\n # Only run default issue types if no issue types are specified\n issue_types_copy = {\n issue: issue_types_copy[issue]\n for issue in list_default_issue_types(self.task)\n if issue in issue_types_copy\n }\n drop_label_check = (\n \"label\" in issue_types_copy\n and not self.datalab.has_labels\n and self.task != Task.REGRESSION\n )\n\n if drop_label_check:\n warnings.warn(\"No labels were provided. \" \"The 'label' issue type will not be run.\")\n issue_types_copy.pop(\"label\")\n\n outlier_check_needs_features = (\n self.task == \"classification\"\n and \"outlier\" in issue_types_copy\n and not self.datalab.has_labels\n )\n if outlier_check_needs_features:\n no_features = features is None\n no_knn_graph = knn_graph is None\n pred_probs_given = issue_types_copy[\"outlier\"].get(\"pred_probs\", None) is not None\n\n only_pred_probs_given = pred_probs_given and no_features and no_knn_graph\n if only_pred_probs_given:\n warnings.warn(\n \"No labels were provided. 
\" \"The 'outlier' issue type will not be run.\"\n )\n issue_types_copy.pop(\"outlier\")\n\n return issue_types_copy\n", "path": "cleanlab/datalab/internal/issue_finder.py"}], "after_files": [{"content": "# Copyright (C) 2017-2023 Cleanlab Inc.\n# This file is part of cleanlab.\n#\n# cleanlab is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# cleanlab is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with cleanlab. If not, see <https://www.gnu.org/licenses/>.\n\"\"\"\nModule for the :class:`IssueFinder` class, which is responsible for configuring,\ncreating and running issue managers.\n\nIt determines which types of issues to look for, instatiates the IssueManagers\nvia a factory, run the issue managers\n(:py:meth:`IssueManager.find_issues <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager.find_issues>`),\nand collects the results to :py:class:`DataIssues <cleanlab.datalab.internal.data_issues.DataIssues>`.\n\n.. note::\n\n This module is not intended to be used directly. Instead, use the public-facing\n :py:meth:`Datalab.find_issues <cleanlab.datalab.datalab.Datalab.find_issues>` method.\n\"\"\"\nfrom __future__ import annotations\n\nimport warnings\nfrom typing import TYPE_CHECKING, Any, Dict, Optional\n\nimport numpy as np\nfrom scipy.sparse import csr_matrix\n\nfrom cleanlab.datalab.internal.issue_manager_factory import (\n _IssueManagerFactory,\n list_default_issue_types,\n)\nfrom cleanlab.datalab.internal.model_outputs import (\n MultiClassPredProbs,\n RegressionPredictions,\n MultiLabelPredProbs,\n)\nfrom cleanlab.datalab.internal.task import Task\n\nif TYPE_CHECKING: # pragma: no cover\n import numpy.typing as npt\n from typing import Callable\n\n from cleanlab.datalab.datalab import Datalab\n\n\n_CLASSIFICATION_ARGS_DICT = {\n \"label\": [\"pred_probs\", \"features\"],\n \"outlier\": [\"pred_probs\", \"features\", \"knn_graph\"],\n \"near_duplicate\": [\"features\", \"knn_graph\"],\n \"non_iid\": [\"pred_probs\", \"features\", \"knn_graph\"],\n \"underperforming_group\": [\"pred_probs\", \"features\", \"knn_graph\", \"cluster_ids\"],\n \"data_valuation\": [\"knn_graph\"],\n \"class_imbalance\": [],\n \"null\": [\"features\"],\n}\n_REGRESSION_ARGS_DICT = {\n \"label\": [\"features\", \"predictions\"],\n \"outlier\": [\"features\", \"knn_graph\"],\n \"near_duplicate\": [\"features\", \"knn_graph\"],\n \"non_iid\": [\"features\", \"knn_graph\"],\n \"null\": [\"features\"],\n}\n\n_MULTILABEL_ARGS_DICT = {\n \"label\": [\"pred_probs\"],\n \"outlier\": [\"features\", \"knn_graph\"],\n \"near_duplicate\": [\"features\", \"knn_graph\"],\n \"non_iid\": [\"features\", \"knn_graph\"],\n \"null\": [\"features\"],\n}\n\n\ndef _resolve_required_args_for_classification(**kwargs):\n \"\"\"Resolves the required arguments for each issue type intended for classification tasks.\"\"\"\n initial_args_dict = _CLASSIFICATION_ARGS_DICT.copy()\n args_dict = {\n issue_type: {arg: kwargs.get(arg, None) for arg in initial_args_dict[issue_type]}\n for issue_type in initial_args_dict\n }\n\n # Some issue types (like 
class-imbalance) have no required args.\n # This conditional lambda is used to include them in args dict.\n keep_empty_argument = lambda k: not len(_CLASSIFICATION_ARGS_DICT[k])\n\n # Remove None values from argument list, rely on default values in IssueManager\n args_dict = {\n k: {k2: v2 for k2, v2 in v.items() if v2 is not None}\n for k, v in args_dict.items()\n if (v or keep_empty_argument(k))\n }\n\n # Prefer `knn_graph` over `features` if both are provided.\n for v in args_dict.values():\n if \"cluster_ids\" in v and (\"knn_graph\" in v or \"features\" in v):\n warnings.warn(\n \"`cluster_ids` have been provided with `knn_graph` or `features`.\"\n \"Issue managers that require cluster labels will prefer\"\n \"`cluster_ids` over computation of cluster labels using\"\n \"`knn_graph` or `features`. \"\n )\n if \"knn_graph\" in v and \"features\" in v:\n warnings.warn(\n \"Both `features` and `knn_graph` were provided. \"\n \"Most issue managers will likely prefer using `knn_graph` \"\n \"instead of `features` for efficiency.\"\n )\n\n # Only keep issue types that have at least one argument\n # or those that require no arguments.\n args_dict = {k: v for k, v in args_dict.items() if (v or keep_empty_argument(k))}\n\n return args_dict\n\n\ndef _resolve_required_args_for_regression(**kwargs):\n \"\"\"Resolves the required arguments for each issue type intended for regression tasks.\"\"\"\n initial_args_dict = _REGRESSION_ARGS_DICT.copy()\n args_dict = {\n issue_type: {arg: kwargs.get(arg, None) for arg in initial_args_dict[issue_type]}\n for issue_type in initial_args_dict\n }\n # Some issue types have no required args.\n # This conditional lambda is used to include them in args dict.\n keep_empty_argument = lambda k: not len(_REGRESSION_ARGS_DICT[k])\n\n # Remove None values from argument list, rely on default values in IssueManager\n args_dict = {\n k: {k2: v2 for k2, v2 in v.items() if v2 is not None}\n for k, v in args_dict.items()\n if v or keep_empty_argument(k)\n }\n\n # Only keep issue types that have at least one argument\n # or those that require no arguments.\n args_dict = {k: v for k, v in args_dict.items() if (v or keep_empty_argument(k))}\n\n return args_dict\n\n\ndef _resolve_required_args_for_multilabel(**kwargs):\n \"\"\"Resolves the required arguments for each issue type intended for multilabel tasks.\"\"\"\n initial_args_dict = _MULTILABEL_ARGS_DICT.copy()\n args_dict = {\n issue_type: {arg: kwargs.get(arg, None) for arg in initial_args_dict[issue_type]}\n for issue_type in initial_args_dict\n }\n # Some issue types have no required args.\n # This conditional lambda is used to include them in args dict.\n keep_empty_argument = lambda k: not len(_MULTILABEL_ARGS_DICT[k])\n\n # Remove None values from argument list, rely on default values in IssueManager\n args_dict = {\n k: {k2: v2 for k2, v2 in v.items() if v2 is not None}\n for k, v in args_dict.items()\n if v or keep_empty_argument(k) # Allow label issues to require no arguments\n }\n\n # Only keep issue types that have at least one argument\n # or those that require no arguments.\n args_dict = {k: v for k, v in args_dict.items() if (v or keep_empty_argument(k))}\n\n return args_dict\n\n\ndef _select_strategy_for_resolving_required_args(task: Task) -> Callable:\n \"\"\"Helper function that selects the strategy for resolving required arguments for each issue type.\n\n Each strategy resolves the required arguments for each issue type.\n\n This is a helper function that filters out any issue manager\n that does not have 
the required arguments.\n\n This does not consider custom hyperparameters for each issue type.\n\n Parameters\n ----------\n task : str\n The type of machine learning task that the dataset is used for.\n\n Returns\n -------\n args_dict :\n Dictionary of required arguments for each issue type, if available.\n \"\"\"\n strategies = {\n Task.CLASSIFICATION: _resolve_required_args_for_classification,\n Task.REGRESSION: _resolve_required_args_for_regression,\n Task.MULTILABEL: _resolve_required_args_for_multilabel,\n }\n selected_strategy = strategies.get(task, None)\n if selected_strategy is None:\n raise ValueError(f\"No strategy for resolving required arguments for task '{task}'\")\n return selected_strategy\n\n\nclass IssueFinder:\n \"\"\"\n The IssueFinder class is responsible for managing the process of identifying\n issues in the dataset by handling the creation and execution of relevant\n IssueManagers. It serves as a coordinator or helper class for the Datalab class\n to encapsulate the specific behavior of the issue finding process.\n\n At a high level, the IssueFinder is responsible for:\n\n - Determining which types of issues to look for.\n - Instantiating the appropriate IssueManagers using a factory.\n - Running the IssueManagers' `find_issues` methods.\n - Collecting the results into a DataIssues instance.\n\n Parameters\n ----------\n datalab : Datalab\n The Datalab instance associated with this IssueFinder.\n\n task : str\n The type of machine learning task that the dataset is used for.\n\n verbosity : int\n Controls the verbosity of the output during the issue finding process.\n\n Note\n ----\n This class is not intended to be used directly. Instead, use the\n `Datalab.find_issues` method which internally utilizes an IssueFinder instance.\n \"\"\"\n\n def __init__(self, datalab: \"Datalab\", task: Task, verbosity=1):\n self.datalab = datalab\n self.task = task\n self.verbosity = verbosity\n\n def find_issues(\n self,\n *,\n pred_probs: Optional[np.ndarray] = None,\n features: Optional[npt.NDArray] = None,\n knn_graph: Optional[csr_matrix] = None,\n issue_types: Optional[Dict[str, Any]] = None,\n ) -> None:\n \"\"\"\n Checks the dataset for all sorts of common issues in real-world data (in both labels and feature values).\n\n You can use Datalab to find issues in your data, utilizing *any* model you have already trained.\n This method only interacts with your model via its predictions or embeddings (and other functions thereof).\n The more of these inputs you provide, the more types of issues Datalab can detect in your dataset/labels.\n If you provide a subset of these inputs, Datalab will output what insights it can based on the limited information from your model.\n\n Note\n ----\n This method is not intended to be used directly. 
Instead, use the\n :py:meth:`Datalab.find_issues <cleanlab.datalab.datalab.Datalab.find_issues>` method.\n\n Note\n ----\n The issues are saved in the ``self.datalab.data_issues.issues`` attribute, but are not returned.\n\n Parameters\n ----------\n pred_probs :\n Out-of-sample predicted class probabilities made by the model for every example in the dataset.\n To best detect label issues, provide this input obtained from the most accurate model you can produce.\n\n If provided for classification, this must be a 2D array with shape ``(num_examples, K)`` where K is the number of classes in the dataset.\n If provided for regression, this must be a 1D array with shape ``(num_examples,)``.\n\n features : Optional[np.ndarray]\n Feature embeddings (vector representations) of every example in the dataset.\n\n If provided, this must be a 2D array with shape (num_examples, num_features).\n\n knn_graph :\n Sparse matrix representing distances between examples in the dataset in a k nearest neighbor graph.\n\n For details, refer to the documentation of the same argument in :py:class:`Datalab.find_issues <cleanlab.datalab.datalab.Datalab.find_issues>`\n\n issue_types :\n Collection specifying which types of issues to consider in audit and any non-default parameter settings to use.\n If unspecified, a default set of issue types and recommended parameter settings is considered.\n\n This is a dictionary of dictionaries, where the keys are the issue types of interest\n and the values are dictionaries of parameter values that control how each type of issue is detected (only for advanced users).\n More specifically, the values are constructor keyword arguments passed to the corresponding ``IssueManager``,\n which is responsible for detecting the particular issue type.\n\n .. 
seealso::\n :py:class:`IssueManager <cleanlab.datalab.internal.issue_manager.issue_manager.IssueManager>`\n \"\"\"\n\n issue_types_copy = self.get_available_issue_types(\n pred_probs=pred_probs,\n features=features,\n knn_graph=knn_graph,\n issue_types=issue_types,\n )\n\n if not issue_types_copy:\n return None\n\n new_issue_managers = [\n factory(datalab=self.datalab, **issue_types_copy.get(factory.issue_name, {}))\n for factory in _IssueManagerFactory.from_list(\n list(issue_types_copy.keys()), task=self.task\n )\n ]\n\n failed_managers = []\n data_issues = self.datalab.data_issues\n for issue_manager, arg_dict in zip(new_issue_managers, issue_types_copy.values()):\n try:\n if self.verbosity:\n print(f\"Finding {issue_manager.issue_name} issues ...\")\n issue_manager.find_issues(**arg_dict)\n data_issues.collect_statistics(issue_manager)\n data_issues.collect_issues_from_issue_manager(issue_manager)\n except Exception as e:\n print(f\"Error in {issue_manager.issue_name}: {e}\")\n failed_managers.append(issue_manager)\n if failed_managers:\n print(f\"Failed to check for these issue types: {failed_managers}\")\n data_issues.set_health_score()\n\n def _set_issue_types(\n self,\n issue_types: Optional[Dict[str, Any]],\n required_defaults_dict: Dict[str, Any],\n ) -> Dict[str, Any]:\n \"\"\"Set necessary configuration for each IssueManager in a dictionary.\n\n While each IssueManager defines default values for its arguments,\n the Datalab class needs to organize the calls to each IssueManager\n with different arguments, some of which may be user-provided.\n\n Parameters\n ----------\n issue_types :\n Dictionary of issue types and argument configuration for their respective IssueManagers.\n If None, then the `required_defaults_dict` is used.\n\n required_defaults_dict :\n Dictionary of default parameter configuration for each issue type.\n\n Returns\n -------\n issue_types_copy :\n Dictionary of issue types and their parameter configuration.\n The input `issue_types` is copied and updated with the necessary default values.\n \"\"\"\n if issue_types is not None:\n issue_types_copy = issue_types.copy()\n self._check_missing_args(required_defaults_dict, issue_types_copy)\n else:\n issue_types_copy = required_defaults_dict.copy()\n # keep only default issue types\n issue_types_copy = {\n issue: issue_types_copy[issue]\n for issue in list_default_issue_types(self.task)\n if issue in issue_types_copy\n }\n\n # Check that all required arguments are provided.\n self._validate_issue_types_dict(issue_types_copy, required_defaults_dict)\n\n # Remove None values from argument list, rely on default values in IssueManager\n for key, value in issue_types_copy.items():\n issue_types_copy[key] = {k: v for k, v in value.items() if v is not None}\n\n return issue_types_copy\n\n @staticmethod\n def _check_missing_args(required_defaults_dict, issue_types):\n for key, issue_type_value in issue_types.items():\n missing_args = set(required_defaults_dict.get(key, {})) - set(issue_type_value.keys())\n # Impute missing arguments with default values.\n missing_dict = {\n missing_arg: required_defaults_dict[key][missing_arg]\n for missing_arg in missing_args\n }\n issue_types[key].update(missing_dict)\n\n @staticmethod\n def _validate_issue_types_dict(\n issue_types: Dict[str, Any], required_defaults_dict: Dict[str, Any]\n ) -> None:\n missing_required_args_dict = {}\n for issue_name, required_args in required_defaults_dict.items():\n if issue_name in issue_types:\n missing_args = set(required_args.keys()) - 
set(issue_types[issue_name].keys())\n if missing_args:\n missing_required_args_dict[issue_name] = missing_args\n if any(missing_required_args_dict.values()):\n error_message = \"\"\n for issue_name, missing_required_args in missing_required_args_dict.items():\n error_message += f\"Required argument {missing_required_args} for issue type {issue_name} was not provided.\\n\"\n raise ValueError(error_message)\n\n def get_available_issue_types(self, **kwargs):\n \"\"\"Returns a dictionary of issue types that can be used in :py:meth:`Datalab.find_issues\n <cleanlab.datalab.datalab.Datalab.find_issues>` method.\"\"\"\n\n pred_probs = kwargs.get(\"pred_probs\", None)\n features = kwargs.get(\"features\", None)\n knn_graph = kwargs.get(\"knn_graph\", None)\n issue_types = kwargs.get(\"issue_types\", None)\n\n model_output = None\n if pred_probs is not None:\n model_output_dict = {\n Task.REGRESSION: RegressionPredictions,\n Task.CLASSIFICATION: MultiClassPredProbs,\n Task.MULTILABEL: MultiLabelPredProbs,\n }\n\n model_output_class = model_output_dict.get(self.task)\n if model_output_class is None:\n raise ValueError(f\"Unknown task type '{self.task}'\")\n\n model_output = model_output_class(pred_probs)\n\n if model_output is not None:\n # A basic trick to assign the model output to the correct argument\n # E.g. Datalab accepts only `pred_probs`, but those are assigned to the `predictions` argument for regression-related issue_managers\n kwargs.update({model_output.argument: model_output.collect()})\n\n # Determine which parameters are required for each issue type\n strategy_for_resolving_required_args = _select_strategy_for_resolving_required_args(\n self.task\n )\n required_args_per_issue_type = strategy_for_resolving_required_args(**kwargs)\n\n issue_types_copy = self._set_issue_types(issue_types, required_args_per_issue_type)\n if issue_types is None:\n # Only run default issue types if no issue types are specified\n issue_types_copy = {\n issue: issue_types_copy[issue]\n for issue in list_default_issue_types(self.task)\n if issue in issue_types_copy\n }\n drop_label_check = (\n \"label\" in issue_types_copy\n and not self.datalab.has_labels\n and self.task != Task.REGRESSION\n )\n\n if drop_label_check:\n warnings.warn(\"No labels were provided. \" \"The 'label' issue type will not be run.\")\n issue_types_copy.pop(\"label\")\n\n outlier_check_needs_features = (\n self.task == \"classification\"\n and \"outlier\" in issue_types_copy\n and not self.datalab.has_labels\n )\n if outlier_check_needs_features:\n no_features = features is None\n no_knn_graph = knn_graph is None\n pred_probs_given = issue_types_copy[\"outlier\"].get(\"pred_probs\", None) is not None\n\n only_pred_probs_given = pred_probs_given and no_features and no_knn_graph\n if only_pred_probs_given:\n warnings.warn(\n \"No labels were provided. \" \"The 'outlier' issue type will not be run.\"\n )\n issue_types_copy.pop(\"outlier\")\n\n drop_class_imbalance_check = (\n \"class_imbalance\" in issue_types_copy\n and not self.datalab.has_labels\n and self.task == Task.CLASSIFICATION\n )\n if drop_class_imbalance_check:\n issue_types_copy.pop(\"class_imbalance\")\n\n return issue_types_copy\n", "path": "cleanlab/datalab/internal/issue_finder.py"}]} |
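The `find_issues` docstrings in the record above spell out the keyword interface (`pred_probs`, `features`, `knn_graph`, `issue_types`). A hedged usage sketch of that entry point follows; only those keyword names and shapes come from the record, while the constructor call and the synthetic data are assumptions about the surrounding library.

```python
# Hedged sketch, not from the record: exercising Datalab.find_issues with the
# keyword arguments described in the docstrings above, on synthetic data.
import numpy as np
from cleanlab import Datalab  # assumes cleanlab with the Datalab extra is installed

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=100)                 # 3-class toy labels
features = rng.normal(size=(100, 8))                  # (num_examples, num_features)
pred_probs = rng.dirichlet(np.ones(3), size=100)      # (num_examples, K)

lab = Datalab(data={"label": labels.tolist()}, label_name="label", task="classification")
lab.find_issues(
    pred_probs=pred_probs,
    features=features,
    issue_types={"label": {}, "outlier": {}},         # optional: restrict/parametrize checks
)
print(lab.get_issue_summary())
```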
gh_patches_debug_1069 | rasdani/github-patches | git_diff | UTNkar__moore-151 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Paragraph block alignment
<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->
See image:

[Description of the issue]
### Steps to Reproduce
1. [First Step]
2. [Second Step]
3. [and so on...]
<!-- Please select the appropriate "topic category"/blue and "issue type"/yellow label -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/blocks/models.py`
Content:
```
1 from wagtail.wagtailcore import blocks
2 from wagtail.wagtailimages.blocks import ImageChooserBlock
3
4 from django.utils.translation import ugettext_lazy as _
5
6
7 class CountersBlock(blocks.StructBlock):
8 title = blocks.CharBlock()
9 counters = blocks.ListBlock(blocks.StructBlock([
10 ('icon', blocks.CharBlock(
11 help_text=_('Material icon font icon text, as found on: '
12 'https://material.io/icons'),
13 )),
14 ('value', blocks.CharBlock()),
15 ('description', blocks.CharBlock(required=False))
16 ]))
17 style = blocks.ChoiceBlock(choices=[
18 ('light', _('Light')),
19 ('dark', _('Dark')),
20 ])
21
22 class Meta:
23 label = _('Counters')
24 icon = 'fa-balance-scale'
25 template = 'blocks/counter.html'
26
27
28 class HeadingBlock(blocks.StructBlock):
29 title = blocks.CharBlock(required=True)
30 subtitle = blocks.CharBlock(required=False)
31
32 class Meta:
33 label = _('Heading')
34 icon = 'fa-header'
35 template = 'blocks/title.html'
36
37
38 class ImageDescriptionBlock(blocks.StructBlock):
39 description = blocks.RichTextBlock()
40 image = ImageChooserBlock()
41 image_alignment = blocks.ChoiceBlock(choices=[
42 ('left', _('Left')),
43 ('right', _('Right')),
44 ])
45 hide_on_med = blocks.BooleanBlock(required=False)
46
47 class Meta:
48 label = _('Image + Description')
49 icon = 'fa-file-image-o '
50 template = 'blocks/image_description.html'
51
52
53 class ImageIconsBlock(blocks.StructBlock):
54 title = blocks.CharBlock()
55 image = ImageChooserBlock()
56 image_alignment = blocks.ChoiceBlock(choices=[
57 ('left', _('Left')),
58 ('right', _('Right')),
59 ])
60 icons = blocks.ListBlock(blocks.StructBlock([
61 ('icon', blocks.CharBlock(
62 help_text=_('Material icon font icon text, as found on: '
63 'https://material.io/icons'),
64 )),
65 ('title', blocks.CharBlock()),
66 ('description', blocks.CharBlock())
67 ]))
68 hide_on_med = blocks.BooleanBlock(required=False)
69
70 class Meta:
71 label = _('Image + Icons')
72 icon = 'fa-file-excel-o'
73 template = 'blocks/image_icons.html'
74
75
76 class OverlayBlock(blocks.StructBlock):
77 image = ImageChooserBlock()
78 title = blocks.CharBlock(required=False)
79 description = blocks.CharBlock(required=False)
80
81 link = blocks.URLBlock(required=False)
82 button = blocks.CharBlock(required=False)
83
84 class Meta:
85 label = _('Image overlay')
86 icon = 'fa-clone'
87 template = 'blocks/overlay.html'
88
89
90 WAGTAIL_STATIC_BLOCKTYPES = [
91 ('heading', HeadingBlock()),
92 ('paragraph', blocks.RichTextBlock()),
93 ('image_description', ImageIconsBlock()),
94 ('image_icons', ImageDescriptionBlock()),
95 ('overlay', OverlayBlock()),
96 ('logos', blocks.ListBlock(
97 ImageChooserBlock(),
98 icon='fa-pied-piper',
99 template='blocks/logos.html',
100 label=_('Logos'),
101 )),
102 ('counters', CountersBlock()),
103 ('image', ImageChooserBlock(template='blocks/image.html')),
104 ]
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/blocks/models.py b/website/blocks/models.py
--- a/website/blocks/models.py
+++ b/website/blocks/models.py
@@ -89,7 +89,7 @@
WAGTAIL_STATIC_BLOCKTYPES = [
('heading', HeadingBlock()),
- ('paragraph', blocks.RichTextBlock()),
+ ('paragraph', blocks.RichTextBlock(template='blocks/paragraph.html')),
('image_description', ImageIconsBlock()),
('image_icons', ImageDescriptionBlock()),
('overlay', OverlayBlock()),
| {"golden_diff": "diff --git a/website/blocks/models.py b/website/blocks/models.py\n--- a/website/blocks/models.py\n+++ b/website/blocks/models.py\n@@ -89,7 +89,7 @@\n \n WAGTAIL_STATIC_BLOCKTYPES = [\n ('heading', HeadingBlock()),\n- ('paragraph', blocks.RichTextBlock()),\n+ ('paragraph', blocks.RichTextBlock(template='blocks/paragraph.html')),\n ('image_description', ImageIconsBlock()),\n ('image_icons', ImageDescriptionBlock()),\n ('overlay', OverlayBlock()),\n", "issue": "Paragraph block alignment\n<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->\r\n\r\nSee image:\r\n\r\n\r\n\r\n[Description of the issue]\r\n\r\n### Steps to Reproduce\r\n\r\n1. [First Step]\r\n2. [Second Step]\r\n3. [and so on...]\r\n\r\n<!-- Please select the appropriate \"topic category\"/blue and \"issue type\"/yellow label -->\r\n\n", "before_files": [{"content": "from wagtail.wagtailcore import blocks\nfrom wagtail.wagtailimages.blocks import ImageChooserBlock\n\nfrom django.utils.translation import ugettext_lazy as _\n\n\nclass CountersBlock(blocks.StructBlock):\n title = blocks.CharBlock()\n counters = blocks.ListBlock(blocks.StructBlock([\n ('icon', blocks.CharBlock(\n help_text=_('Material icon font icon text, as found on: '\n 'https://material.io/icons'),\n )),\n ('value', blocks.CharBlock()),\n ('description', blocks.CharBlock(required=False))\n ]))\n style = blocks.ChoiceBlock(choices=[\n ('light', _('Light')),\n ('dark', _('Dark')),\n ])\n\n class Meta:\n label = _('Counters')\n icon = 'fa-balance-scale'\n template = 'blocks/counter.html'\n\n\nclass HeadingBlock(blocks.StructBlock):\n title = blocks.CharBlock(required=True)\n subtitle = blocks.CharBlock(required=False)\n\n class Meta:\n label = _('Heading')\n icon = 'fa-header'\n template = 'blocks/title.html'\n\n\nclass ImageDescriptionBlock(blocks.StructBlock):\n description = blocks.RichTextBlock()\n image = ImageChooserBlock()\n image_alignment = blocks.ChoiceBlock(choices=[\n ('left', _('Left')),\n ('right', _('Right')),\n ])\n hide_on_med = blocks.BooleanBlock(required=False)\n\n class Meta:\n label = _('Image + Description')\n icon = 'fa-file-image-o '\n template = 'blocks/image_description.html'\n\n\nclass ImageIconsBlock(blocks.StructBlock):\n title = blocks.CharBlock()\n image = ImageChooserBlock()\n image_alignment = blocks.ChoiceBlock(choices=[\n ('left', _('Left')),\n ('right', _('Right')),\n ])\n icons = blocks.ListBlock(blocks.StructBlock([\n ('icon', blocks.CharBlock(\n help_text=_('Material icon font icon text, as found on: '\n 'https://material.io/icons'),\n )),\n ('title', blocks.CharBlock()),\n ('description', blocks.CharBlock())\n ]))\n hide_on_med = blocks.BooleanBlock(required=False)\n\n class Meta:\n label = _('Image + Icons')\n icon = 'fa-file-excel-o'\n template = 'blocks/image_icons.html'\n\n\nclass OverlayBlock(blocks.StructBlock):\n image = ImageChooserBlock()\n title = blocks.CharBlock(required=False)\n description = blocks.CharBlock(required=False)\n\n link = blocks.URLBlock(required=False)\n button = blocks.CharBlock(required=False)\n\n class Meta:\n label = _('Image overlay')\n icon = 'fa-clone'\n template = 'blocks/overlay.html'\n\n\nWAGTAIL_STATIC_BLOCKTYPES = [\n ('heading', HeadingBlock()),\n ('paragraph', blocks.RichTextBlock()),\n ('image_description', ImageIconsBlock()),\n ('image_icons', ImageDescriptionBlock()),\n ('overlay', OverlayBlock()),\n ('logos', blocks.ListBlock(\n ImageChooserBlock(),\n icon='fa-pied-piper',\n 
template='blocks/logos.html',\n label=_('Logos'),\n )),\n ('counters', CountersBlock()),\n ('image', ImageChooserBlock(template='blocks/image.html')),\n]\n", "path": "website/blocks/models.py"}], "after_files": [{"content": "from wagtail.wagtailcore import blocks\nfrom wagtail.wagtailimages.blocks import ImageChooserBlock\n\nfrom django.utils.translation import ugettext_lazy as _\n\n\nclass CountersBlock(blocks.StructBlock):\n title = blocks.CharBlock()\n counters = blocks.ListBlock(blocks.StructBlock([\n ('icon', blocks.CharBlock(\n help_text=_('Material icon font icon text, as found on: '\n 'https://material.io/icons'),\n )),\n ('value', blocks.CharBlock()),\n ('description', blocks.CharBlock(required=False))\n ]))\n style = blocks.ChoiceBlock(choices=[\n ('light', _('Light')),\n ('dark', _('Dark')),\n ])\n\n class Meta:\n label = _('Counters')\n icon = 'fa-balance-scale'\n template = 'blocks/counter.html'\n\n\nclass HeadingBlock(blocks.StructBlock):\n title = blocks.CharBlock(required=True)\n subtitle = blocks.CharBlock(required=False)\n\n class Meta:\n label = _('Heading')\n icon = 'fa-header'\n template = 'blocks/title.html'\n\n\nclass ImageDescriptionBlock(blocks.StructBlock):\n description = blocks.RichTextBlock()\n image = ImageChooserBlock()\n image_alignment = blocks.ChoiceBlock(choices=[\n ('left', _('Left')),\n ('right', _('Right')),\n ])\n hide_on_med = blocks.BooleanBlock(required=False)\n\n class Meta:\n label = _('Image + Description')\n icon = 'fa-file-image-o '\n template = 'blocks/image_description.html'\n\n\nclass ImageIconsBlock(blocks.StructBlock):\n title = blocks.CharBlock()\n image = ImageChooserBlock()\n image_alignment = blocks.ChoiceBlock(choices=[\n ('left', _('Left')),\n ('right', _('Right')),\n ])\n icons = blocks.ListBlock(blocks.StructBlock([\n ('icon', blocks.CharBlock(\n help_text=_('Material icon font icon text, as found on: '\n 'https://material.io/icons'),\n )),\n ('title', blocks.CharBlock()),\n ('description', blocks.CharBlock())\n ]))\n hide_on_med = blocks.BooleanBlock(required=False)\n\n class Meta:\n label = _('Image + Icons')\n icon = 'fa-file-excel-o'\n template = 'blocks/image_icons.html'\n\n\nclass OverlayBlock(blocks.StructBlock):\n image = ImageChooserBlock()\n title = blocks.CharBlock(required=False)\n description = blocks.CharBlock(required=False)\n\n link = blocks.URLBlock(required=False)\n button = blocks.CharBlock(required=False)\n\n class Meta:\n label = _('Image overlay')\n icon = 'fa-clone'\n template = 'blocks/overlay.html'\n\n\nWAGTAIL_STATIC_BLOCKTYPES = [\n ('heading', HeadingBlock()),\n ('paragraph', blocks.RichTextBlock(template='blocks/paragraph.html')),\n ('image_description', ImageIconsBlock()),\n ('image_icons', ImageDescriptionBlock()),\n ('overlay', OverlayBlock()),\n ('logos', blocks.ListBlock(\n ImageChooserBlock(),\n icon='fa-pied-piper',\n template='blocks/logos.html',\n label=_('Logos'),\n )),\n ('counters', CountersBlock()),\n ('image', ImageChooserBlock(template='blocks/image.html')),\n]\n", "path": "website/blocks/models.py"}]} |
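For orientation, a hedged sketch of how the patched `WAGTAIL_STATIC_BLOCKTYPES` list is typically consumed follows; the page model, field name, and import path are assumptions, only the block list itself comes from `website/blocks/models.py` above.

```python
# Hypothetical page model (not in the record) whose StreamField uses the block
# list patched above, so the 'paragraph' entry now renders via blocks/paragraph.html
# like the other block templates.
from wagtail.wagtailcore.fields import StreamField
from wagtail.wagtailcore.models import Page
from wagtail.wagtailadmin.edit_handlers import StreamFieldPanel

from blocks.models import WAGTAIL_STATIC_BLOCKTYPES  # import path is an assumption


class InfoPage(Page):
    body = StreamField(WAGTAIL_STATIC_BLOCKTYPES, blank=True)

    content_panels = Page.content_panels + [
        StreamFieldPanel('body'),
    ]
```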
gh_patches_debug_1070 | rasdani/github-patches | git_diff | e-valuation__EvaP-290 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update installation instructions
https://evap.readthedocs.org/en/latest/installation.html
Someone should follow these instructions and see if they are correct and complete.
The short version at https://github.com/fsr-itse/EvaP should also be checked again.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/settings.py`
Content:
```
1 # Django settings for evap project.
2
3 # automatically determine SITE_ROOT, used for absolute paths below
4 import os.path
5 SITE_ROOT = os.path.dirname(os.path.realpath(__file__))
6
7 DEBUG = True
8 TEMPLATE_DEBUG = DEBUG
9
10 ADMINS = (
11 # ('Your Name', '[email protected]'),
12 )
13
14 MANAGERS = ADMINS
15
16 DATABASES = {
17 'default': {
18 'ENGINE': 'django.db.backends.sqlite3', # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
19 'NAME': os.path.join(SITE_ROOT, 'database.sqlite3'), # Or path to database file if using sqlite3.
20 'USER': '', # Not used with sqlite3.
21 'PASSWORD': '', # Not used with sqlite3.
22 'HOST': '', # Set to empty string for localhost. Not used with sqlite3.
23 'PORT': '', # Set to empty string for default. Not used with sqlite3.
24 }
25 }
26
27 CACHES = {
28 'default': {
29 # 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
30 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
31 }
32 }
33
34 # config for feedback links
35 FEEDBACK_EMAIL = "webmaster@localhost"
36 TRACKER_URL = "https://github.com/fsr-itse/EvaP"
37
38 # config for mail system
39 DEFAULT_FROM_EMAIL = "webmaster@localhost"
40 REPLY_TO_EMAIL = DEFAULT_FROM_EMAIL
41 if DEBUG:
42 EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
43
44 # key authentication settings
45 LOGIN_KEY_VALIDITY = 210 # days, so roughly 7 months
46
47 # minimum answers needed for publishing
48 MIN_ANSWER_COUNT = 2
49 MIN_ANSWER_PERCENTAGE = 0.2
50
51 # days before end date to send reminder
52 REMIND_X_DAYS_AHEAD_OF_END_DATE = 2
53
54 # email domains for the internal users of the hosting institution used to
55 # figure out who can login with username and password and who needs a login key
56 INSTITUTION_EMAIL_DOMAINS = ["hpi.uni-potsdam.de", "student.hpi.uni-potsdam.de"]
57
58 # Local time zone for this installation. Choices can be found here:
59 # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
60 # although not all choices may be available on all operating systems.
61 # On Unix systems, a value of None will cause Django to use the same
62 # timezone as the operating system.
63 # If running in a Windows environment this must be set to the same as your
64 # system time zone.
65 TIME_ZONE = 'Europe/Berlin'
66
67 # Language code for this installation. All choices can be found here:
68 # http://www.i18nguy.com/unicode/language-identifiers.html
69 LANGUAGE_CODE = 'en'
70
71 LANGUAGES = (
72 ('en', "English"),
73 ('de', "Deutsch"),
74 )
75
76 SITE_ID = 1
77
78 # If you set this to False, Django will make some optimizations so as not
79 # to load the internationalization machinery.
80 USE_I18N = True
81
82 # If you set this to False, Django will not format dates, numbers and
83 # calendars according to the current locale
84 USE_L10N = True
85
86 # Locale paths
87 LOCALE_PATHS = (
88 os.path.join(SITE_ROOT, "locale"),
89 )
90
91 # Absolute filesystem path to the directory that will hold user-uploaded files.
92 # Example: "/home/media/media.lawrence.com/media/"
93 MEDIA_ROOT = os.path.join(SITE_ROOT, "upload")
94
95 # URL that handles the media served from MEDIA_ROOT. Make sure to use a
96 # trailing slash.
97 # Examples: "http://media.lawrence.com/media/", "http://example.com/media/"
98 MEDIA_URL = '/media/'
99
100 # Absolute path to the directory static files should be collected to.
101 # Don't put anything in this directory yourself; store your static files
102 # in apps' "static/" subdirectories and in STATICFILES_DIRS.
103 # Example: "/home/media/media.lawrence.com/static/"
104 STATIC_ROOT = os.path.join(SITE_ROOT, "staticfiles")
105
106 # URL prefix for static files.
107 # Example: "http://media.lawrence.com/static/"
108 STATIC_URL = '/static/'
109
110 # URL prefix for admin static files -- CSS, JavaScript and images.
111 # Make sure to use a trailing slash.
112 # Examples: "http://foo.com/static/admin/", "/static/admin/".
113 ADMIN_MEDIA_PREFIX = '/static/admin/'
114
115 # Additional locations of static files
116 STATICFILES_DIRS = (
117 # Put strings here, like "/home/html/static" or "C:/www/django/static".
118 # Always use forward slashes, even on Windows.
119 # Don't forget to use absolute paths, not relative paths.
120 os.path.join(SITE_ROOT, "static"),
121 )
122
123 # List of finder classes that know how to find static files in
124 # various locations.
125 STATICFILES_FINDERS = (
126 'django.contrib.staticfiles.finders.FileSystemFinder',
127 'django.contrib.staticfiles.finders.AppDirectoriesFinder',
128 # 'django.contrib.staticfiles.finders.DefaultStorageFinder',
129 )
130
131 # Make this unique, and don't share it with anybody.
132 SECRET_KEY = 'k9-)vh3c_dtm6bpi7j(!*s_^91v0!ekjt_#o&0i$e22tnn^-vb'
133
134 # List of callables that know how to import templates from various sources.
135 TEMPLATE_LOADERS = (
136 'django.template.loaders.filesystem.Loader',
137 'django.template.loaders.app_directories.Loader',
138 # 'django.template.loaders.eggs.Loader',
139 )
140
141 TEMPLATE_CONTEXT_PROCESSORS = (
142 "django.contrib.auth.context_processors.auth",
143 "django.core.context_processors.debug",
144 "django.core.context_processors.i18n",
145 "django.core.context_processors.media",
146 "django.core.context_processors.static",
147 "django.core.context_processors.request",
148 "django.contrib.messages.context_processors.messages",
149 )
150
151 MIDDLEWARE_CLASSES = (
152 'django.middleware.common.CommonMiddleware',
153 'django.contrib.sessions.middleware.SessionMiddleware',
154 'django.middleware.locale.LocaleMiddleware',
155 'django.middleware.csrf.CsrfViewMiddleware',
156 'django.contrib.auth.middleware.AuthenticationMiddleware',
157 'django.contrib.messages.middleware.MessageMiddleware',
158 'evap.evaluation.auth.RequestAuthMiddleware',
159 'evap.evaluation.403.Django403Middleware',
160 )
161
162 AUTHENTICATION_BACKENDS = (
163 'evap.evaluation.auth.RequestAuthUserBackend',
164 'django.contrib.auth.backends.ModelBackend',
165 )
166
167 LOGIN_URL = "/"
168
169 ROOT_URLCONF = 'evap.urls'
170
171 TEMPLATE_DIRS = (
172 # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
173 # Always use forward slashes, even on Windows.
174 # Don't forget to use absolute paths, not relative paths.
175 os.path.join(SITE_ROOT, "templates"),
176 )
177
178 INSTALLED_APPS = (
179 'django.contrib.auth',
180 'django.contrib.contenttypes',
181 'django.contrib.sessions',
182 'django.contrib.sites',
183 'django.contrib.messages',
184 'django.contrib.staticfiles',
185 'django.contrib.admin',
186 'south',
187 'widget_tweaks',
188 'evap.evaluation',
189 'evap.fsr',
190 'evap.results',
191 'evap.student',
192 'evap.contributor',
193 )
194 if not DEBUG:
195 INSTALLED_APPS += (
196 'raven.contrib.django.raven_compat',
197 )
198
199 RAVEN_CONFIG = {
200 'dsn': 'http://public:[email protected]/1',
201 }
202
203 # A sample logging configuration. The only tangible logging
204 # performed by this configuration is to send an email to
205 # the site admins on every HTTP 500 error.
206 # See http://docs.djangoproject.com/en/dev/topics/logging for
207 # more details on how to customize your logging configuration.
208 LOGGING = {
209 'version': 1,
210 'disable_existing_loggers': False,
211 'filters': {
212 'require_debug_false': {
213 '()': 'django.utils.log.RequireDebugFalse'
214 }
215 },
216 'handlers': {
217 'sentry': {
218 'level': 'ERROR',
219 'class': 'raven.contrib.django.handlers.SentryHandler',
220 },
221 'mail_admins': {
222 'level': 'ERROR',
223 'filters': ['require_debug_false'],
224 'class': 'django.utils.log.AdminEmailHandler'
225 },
226 'console': {
227 'level': 'INFO',
228 'class': 'logging.StreamHandler'
229 }
230 },
231 'loggers': {
232 'django.request': {
233 'handlers': ['mail_admins'],
234 'level': 'ERROR',
235 'propagate': True,
236 },
237 'evap.evaluation.management.commands.import_evaj': {
238 'handlers': ['console'],
239 'level': 'INFO'
240 },
241 'raven': {
242 'level': 'DEBUG',
243 'handlers': ['console'],
244 'propagate': False,
245 },
246 'sentry.errors': {
247 'level': 'DEBUG',
248 'handlers': ['console'],
249 'propagate': False,
250 }
251 }
252 }
253
254 # Create a local_settings.py if you want to override settings per machine
255 # or user, e.g. for development or different settings in deployments using
256 # multiple servers.
257 _LOCAL_SETTINGS_FILENAME = os.path.join(SITE_ROOT, "localsettings.py")
258 if os.path.exists(_LOCAL_SETTINGS_FILENAME):
259 execfile(_LOCAL_SETTINGS_FILENAME)
260 del _LOCAL_SETTINGS_FILENAME
261
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/settings.py b/evap/settings.py
--- a/evap/settings.py
+++ b/evap/settings.py
@@ -251,7 +251,7 @@
}
}
-# Create a local_settings.py if you want to override settings per machine
+# Create a localsettings.py if you want to override settings per machine
# or user, e.g. for development or different settings in deployments using
# multiple servers.
_LOCAL_SETTINGS_FILENAME = os.path.join(SITE_ROOT, "localsettings.py")
| {"golden_diff": "diff --git a/evap/settings.py b/evap/settings.py\n--- a/evap/settings.py\n+++ b/evap/settings.py\n@@ -251,7 +251,7 @@\n }\n }\n \n-# Create a local_settings.py if you want to override settings per machine\n+# Create a localsettings.py if you want to override settings per machine\n # or user, e.g. for development or different settings in deployments using\n # multiple servers.\n _LOCAL_SETTINGS_FILENAME = os.path.join(SITE_ROOT, \"localsettings.py\")\n", "issue": "Update installation instructions\nhttps://evap.readthedocs.org/en/latest/installation.html\n\nSomeone should follow these instructions and see if they are correct and complete.\n\nThe short version at https://github.com/fsr-itse/EvaP should also be checked again.\n\n", "before_files": [{"content": "# Django settings for evap project.\n\n# automatically determine SITE_ROOT, used for absolute paths below\nimport os.path\nSITE_ROOT = os.path.dirname(os.path.realpath(__file__))\n\nDEBUG = True\nTEMPLATE_DEBUG = DEBUG\n\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\nMANAGERS = ADMINS\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3', # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.\n 'NAME': os.path.join(SITE_ROOT, 'database.sqlite3'), # Or path to database file if using sqlite3.\n 'USER': '', # Not used with sqlite3.\n 'PASSWORD': '', # Not used with sqlite3.\n 'HOST': '', # Set to empty string for localhost. Not used with sqlite3.\n 'PORT': '', # Set to empty string for default. Not used with sqlite3.\n }\n}\n\nCACHES = {\n 'default': {\n # 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n }\n}\n\n# config for feedback links\nFEEDBACK_EMAIL = \"webmaster@localhost\"\nTRACKER_URL = \"https://github.com/fsr-itse/EvaP\"\n\n# config for mail system\nDEFAULT_FROM_EMAIL = \"webmaster@localhost\"\nREPLY_TO_EMAIL = DEFAULT_FROM_EMAIL\nif DEBUG:\n EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n\n# key authentication settings\nLOGIN_KEY_VALIDITY = 210 # days, so roughly 7 months\n\n# minimum answers needed for publishing\nMIN_ANSWER_COUNT = 2\nMIN_ANSWER_PERCENTAGE = 0.2\n\n# days before end date to send reminder\nREMIND_X_DAYS_AHEAD_OF_END_DATE = 2\n\n# email domains for the internal users of the hosting institution used to\n# figure out who can login with username and password and who needs a login key\nINSTITUTION_EMAIL_DOMAINS = [\"hpi.uni-potsdam.de\", \"student.hpi.uni-potsdam.de\"]\n\n# Local time zone for this installation. Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# On Unix systems, a value of None will cause Django to use the same\n# timezone as the operating system.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'Europe/Berlin'\n\n# Language code for this installation. 
All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\nLANGUAGE_CODE = 'en'\n\nLANGUAGES = (\n ('en', \"English\"),\n ('de', \"Deutsch\"),\n)\n\nSITE_ID = 1\n\n# If you set this to False, Django will make some optimizations so as not\n# to load the internationalization machinery.\nUSE_I18N = True\n\n# If you set this to False, Django will not format dates, numbers and\n# calendars according to the current locale\nUSE_L10N = True\n\n# Locale paths\nLOCALE_PATHS = (\n os.path.join(SITE_ROOT, \"locale\"),\n)\n\n# Absolute filesystem path to the directory that will hold user-uploaded files.\n# Example: \"/home/media/media.lawrence.com/media/\"\nMEDIA_ROOT = os.path.join(SITE_ROOT, \"upload\")\n\n# URL that handles the media served from MEDIA_ROOT. Make sure to use a\n# trailing slash.\n# Examples: \"http://media.lawrence.com/media/\", \"http://example.com/media/\"\nMEDIA_URL = '/media/'\n\n# Absolute path to the directory static files should be collected to.\n# Don't put anything in this directory yourself; store your static files\n# in apps' \"static/\" subdirectories and in STATICFILES_DIRS.\n# Example: \"/home/media/media.lawrence.com/static/\"\nSTATIC_ROOT = os.path.join(SITE_ROOT, \"staticfiles\")\n\n# URL prefix for static files.\n# Example: \"http://media.lawrence.com/static/\"\nSTATIC_URL = '/static/'\n\n# URL prefix for admin static files -- CSS, JavaScript and images.\n# Make sure to use a trailing slash.\n# Examples: \"http://foo.com/static/admin/\", \"/static/admin/\".\nADMIN_MEDIA_PREFIX = '/static/admin/'\n\n# Additional locations of static files\nSTATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n os.path.join(SITE_ROOT, \"static\"),\n)\n\n# List of finder classes that know how to find static files in\n# various locations.\nSTATICFILES_FINDERS = (\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n# 'django.contrib.staticfiles.finders.DefaultStorageFinder',\n)\n\n# Make this unique, and don't share it with anybody.\nSECRET_KEY = 'k9-)vh3c_dtm6bpi7j(!*s_^91v0!ekjt_#o&0i$e22tnn^-vb'\n\n# List of callables that know how to import templates from various sources.\nTEMPLATE_LOADERS = (\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n# 'django.template.loaders.eggs.Loader',\n)\n\nTEMPLATE_CONTEXT_PROCESSORS = (\n \"django.contrib.auth.context_processors.auth\",\n \"django.core.context_processors.debug\",\n \"django.core.context_processors.i18n\",\n \"django.core.context_processors.media\",\n \"django.core.context_processors.static\",\n \"django.core.context_processors.request\",\n \"django.contrib.messages.context_processors.messages\",\n)\n\nMIDDLEWARE_CLASSES = (\n 'django.middleware.common.CommonMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'evap.evaluation.auth.RequestAuthMiddleware',\n 'evap.evaluation.403.Django403Middleware',\n)\n\nAUTHENTICATION_BACKENDS = (\n 'evap.evaluation.auth.RequestAuthUserBackend',\n 'django.contrib.auth.backends.ModelBackend',\n)\n\nLOGIN_URL = \"/\"\n\nROOT_URLCONF = 'evap.urls'\n\nTEMPLATE_DIRS = (\n # Put 
strings here, like \"/home/html/django_templates\" or \"C:/www/django/templates\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n os.path.join(SITE_ROOT, \"templates\"),\n)\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.sites',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.admin',\n 'south',\n 'widget_tweaks',\n 'evap.evaluation',\n 'evap.fsr',\n 'evap.results',\n 'evap.student',\n 'evap.contributor',\n)\nif not DEBUG:\n INSTALLED_APPS += (\n 'raven.contrib.django.raven_compat',\n )\n\nRAVEN_CONFIG = {\n 'dsn': 'http://public:[email protected]/1',\n}\n\n# A sample logging configuration. The only tangible logging\n# performed by this configuration is to send an email to\n# the site admins on every HTTP 500 error.\n# See http://docs.djangoproject.com/en/dev/topics/logging for\n# more details on how to customize your logging configuration.\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse'\n }\n },\n 'handlers': {\n 'sentry': {\n 'level': 'ERROR',\n 'class': 'raven.contrib.django.handlers.SentryHandler',\n },\n 'mail_admins': {\n 'level': 'ERROR',\n 'filters': ['require_debug_false'],\n 'class': 'django.utils.log.AdminEmailHandler'\n },\n 'console': {\n 'level': 'INFO',\n 'class': 'logging.StreamHandler'\n }\n },\n 'loggers': {\n 'django.request': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': True,\n },\n 'evap.evaluation.management.commands.import_evaj': {\n 'handlers': ['console'],\n 'level': 'INFO'\n },\n 'raven': {\n 'level': 'DEBUG',\n 'handlers': ['console'],\n 'propagate': False,\n },\n 'sentry.errors': {\n 'level': 'DEBUG',\n 'handlers': ['console'],\n 'propagate': False,\n }\n }\n}\n\n# Create a local_settings.py if you want to override settings per machine\n# or user, e.g. for development or different settings in deployments using\n# multiple servers.\n_LOCAL_SETTINGS_FILENAME = os.path.join(SITE_ROOT, \"localsettings.py\")\nif os.path.exists(_LOCAL_SETTINGS_FILENAME):\n execfile(_LOCAL_SETTINGS_FILENAME)\ndel _LOCAL_SETTINGS_FILENAME\n", "path": "evap/settings.py"}], "after_files": [{"content": "# Django settings for evap project.\n\n# automatically determine SITE_ROOT, used for absolute paths below\nimport os.path\nSITE_ROOT = os.path.dirname(os.path.realpath(__file__))\n\nDEBUG = True\nTEMPLATE_DEBUG = DEBUG\n\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\nMANAGERS = ADMINS\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3', # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.\n 'NAME': os.path.join(SITE_ROOT, 'database.sqlite3'), # Or path to database file if using sqlite3.\n 'USER': '', # Not used with sqlite3.\n 'PASSWORD': '', # Not used with sqlite3.\n 'HOST': '', # Set to empty string for localhost. Not used with sqlite3.\n 'PORT': '', # Set to empty string for default. 
Not used with sqlite3.\n }\n}\n\nCACHES = {\n 'default': {\n # 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n }\n}\n\n# config for feedback links\nFEEDBACK_EMAIL = \"webmaster@localhost\"\nTRACKER_URL = \"https://github.com/fsr-itse/EvaP\"\n\n# config for mail system\nDEFAULT_FROM_EMAIL = \"webmaster@localhost\"\nREPLY_TO_EMAIL = DEFAULT_FROM_EMAIL\nif DEBUG:\n EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n\n# key authentication settings\nLOGIN_KEY_VALIDITY = 210 # days, so roughly 7 months\n\n# minimum answers needed for publishing\nMIN_ANSWER_COUNT = 2\nMIN_ANSWER_PERCENTAGE = 0.2\n\n# days before end date to send reminder\nREMIND_X_DAYS_AHEAD_OF_END_DATE = 2\n\n# email domains for the internal users of the hosting institution used to\n# figure out who can login with username and password and who needs a login key\nINSTITUTION_EMAIL_DOMAINS = [\"hpi.uni-potsdam.de\", \"student.hpi.uni-potsdam.de\"]\n\n# Local time zone for this installation. Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# On Unix systems, a value of None will cause Django to use the same\n# timezone as the operating system.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'Europe/Berlin'\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\nLANGUAGE_CODE = 'en'\n\nLANGUAGES = (\n ('en', \"English\"),\n ('de', \"Deutsch\"),\n)\n\nSITE_ID = 1\n\n# If you set this to False, Django will make some optimizations so as not\n# to load the internationalization machinery.\nUSE_I18N = True\n\n# If you set this to False, Django will not format dates, numbers and\n# calendars according to the current locale\nUSE_L10N = True\n\n# Locale paths\nLOCALE_PATHS = (\n os.path.join(SITE_ROOT, \"locale\"),\n)\n\n# Absolute filesystem path to the directory that will hold user-uploaded files.\n# Example: \"/home/media/media.lawrence.com/media/\"\nMEDIA_ROOT = os.path.join(SITE_ROOT, \"upload\")\n\n# URL that handles the media served from MEDIA_ROOT. 
Make sure to use a\n# trailing slash.\n# Examples: \"http://media.lawrence.com/media/\", \"http://example.com/media/\"\nMEDIA_URL = '/media/'\n\n# Absolute path to the directory static files should be collected to.\n# Don't put anything in this directory yourself; store your static files\n# in apps' \"static/\" subdirectories and in STATICFILES_DIRS.\n# Example: \"/home/media/media.lawrence.com/static/\"\nSTATIC_ROOT = os.path.join(SITE_ROOT, \"staticfiles\")\n\n# URL prefix for static files.\n# Example: \"http://media.lawrence.com/static/\"\nSTATIC_URL = '/static/'\n\n# URL prefix for admin static files -- CSS, JavaScript and images.\n# Make sure to use a trailing slash.\n# Examples: \"http://foo.com/static/admin/\", \"/static/admin/\".\nADMIN_MEDIA_PREFIX = '/static/admin/'\n\n# Additional locations of static files\nSTATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n os.path.join(SITE_ROOT, \"static\"),\n)\n\n# List of finder classes that know how to find static files in\n# various locations.\nSTATICFILES_FINDERS = (\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n# 'django.contrib.staticfiles.finders.DefaultStorageFinder',\n)\n\n# Make this unique, and don't share it with anybody.\nSECRET_KEY = 'k9-)vh3c_dtm6bpi7j(!*s_^91v0!ekjt_#o&0i$e22tnn^-vb'\n\n# List of callables that know how to import templates from various sources.\nTEMPLATE_LOADERS = (\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n# 'django.template.loaders.eggs.Loader',\n)\n\nTEMPLATE_CONTEXT_PROCESSORS = (\n \"django.contrib.auth.context_processors.auth\",\n \"django.core.context_processors.debug\",\n \"django.core.context_processors.i18n\",\n \"django.core.context_processors.media\",\n \"django.core.context_processors.static\",\n \"django.core.context_processors.request\",\n \"django.contrib.messages.context_processors.messages\",\n)\n\nMIDDLEWARE_CLASSES = (\n 'django.middleware.common.CommonMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'evap.evaluation.auth.RequestAuthMiddleware',\n 'evap.evaluation.403.Django403Middleware',\n)\n\nAUTHENTICATION_BACKENDS = (\n 'evap.evaluation.auth.RequestAuthUserBackend',\n 'django.contrib.auth.backends.ModelBackend',\n)\n\nLOGIN_URL = \"/\"\n\nROOT_URLCONF = 'evap.urls'\n\nTEMPLATE_DIRS = (\n # Put strings here, like \"/home/html/django_templates\" or \"C:/www/django/templates\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n os.path.join(SITE_ROOT, \"templates\"),\n)\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.sites',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.admin',\n 'south',\n 'widget_tweaks',\n 'evap.evaluation',\n 'evap.fsr',\n 'evap.results',\n 'evap.student',\n 'evap.contributor',\n)\nif not DEBUG:\n INSTALLED_APPS += (\n 'raven.contrib.django.raven_compat',\n )\n\nRAVEN_CONFIG = {\n 'dsn': 'http://public:[email protected]/1',\n}\n\n# A sample logging configuration. 
The only tangible logging\n# performed by this configuration is to send an email to\n# the site admins on every HTTP 500 error.\n# See http://docs.djangoproject.com/en/dev/topics/logging for\n# more details on how to customize your logging configuration.\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse'\n }\n },\n 'handlers': {\n 'sentry': {\n 'level': 'ERROR',\n 'class': 'raven.contrib.django.handlers.SentryHandler',\n },\n 'mail_admins': {\n 'level': 'ERROR',\n 'filters': ['require_debug_false'],\n 'class': 'django.utils.log.AdminEmailHandler'\n },\n 'console': {\n 'level': 'INFO',\n 'class': 'logging.StreamHandler'\n }\n },\n 'loggers': {\n 'django.request': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': True,\n },\n 'evap.evaluation.management.commands.import_evaj': {\n 'handlers': ['console'],\n 'level': 'INFO'\n },\n 'raven': {\n 'level': 'DEBUG',\n 'handlers': ['console'],\n 'propagate': False,\n },\n 'sentry.errors': {\n 'level': 'DEBUG',\n 'handlers': ['console'],\n 'propagate': False,\n }\n }\n}\n\n# Create a localsettings.py if you want to override settings per machine\n# or user, e.g. for development or different settings in deployments using\n# multiple servers.\n_LOCAL_SETTINGS_FILENAME = os.path.join(SITE_ROOT, \"localsettings.py\")\nif os.path.exists(_LOCAL_SETTINGS_FILENAME):\n execfile(_LOCAL_SETTINGS_FILENAME)\ndel _LOCAL_SETTINGS_FILENAME\n", "path": "evap/settings.py"}]} |
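Since `evap/settings.py` above ends by `execfile()`-ing `localsettings.py` into its own namespace, machine-specific overrides are plain assignments in that file. A minimal sketch follows; every value shown is an example, not taken from the repository.

```python
# evap/localsettings.py -- hypothetical per-machine overrides. Because settings.py
# execfile()s this file without a separate globals dict, assignments here rebind
# the module-level settings defined above.
DEBUG = False
TEMPLATE_DEBUG = DEBUG

ADMINS = (
    ('Ops Team', 'ops@example.com'),
)

DATABASES['default'].update({
    'ENGINE': 'django.db.backends.postgresql_psycopg2',
    'NAME': 'evap',
    'USER': 'evap',
    'PASSWORD': 'change-me',
    'HOST': 'localhost',
})

SECRET_KEY = 'generate-a-unique-value-per-deployment'
```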
gh_patches_debug_1071 | rasdani/github-patches | git_diff | facebookresearch__hydra-894 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: Nevergrad sweeper does not work with integers if there are less than 6 choices
Nevergrad sweeper complains if it has fewer than 6 values to sweep over in a range (e.g. `lower: 1`, `upper: 3`) and asks to use a choice instead (`ValueError: For integers with 6 or fewer values, use a choice instead`). But if you then use a list of integers, it does not work, because the parser assumes that choices contain only strings:

Line where the first error is raised: https://github.com/facebookresearch/hydra/blob/0e001afb2a55275b6f7dc33e79035dbf3a797c00/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py#L178
Hydra Version: 1.0.0rc2
Of course I can give a string and then convert it in my code, but it would probably be better to solve this differently. For example, silently treating it as a list without raising the first error? Or at least stating in the raised error that you have to use a list of strings and convert them to int in your own code? Not sure what the best way is.
--- END ISSUE ---
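A minimal repro sketch of the two failure modes described above, driven through the plugin's `make_nevergrad_parameter` helper shown in the file below; the import path is inferred from the file layout in this record and the dict form of the scalar spec is an assumption, not the reporter's exact config.

```python
# Hedged repro sketch (not part of the record); assumes nevergrad and the
# hydra-nevergrad-sweeper plugin are installed.
from hydra_plugins.hydra_nevergrad_sweeper.core import make_nevergrad_parameter

# 1) A small bounded integer range is rejected outright.
try:
    make_nevergrad_parameter({"lower": 1, "upper": 3, "integer": True})
except ValueError as err:
    print(err)  # For integers with 6 or fewer values, use a choice instead

# 2) Following that advice with a list of integers then fails as well, because
#    the list is first collapsed via ",".join(...), which only accepts strings.
try:
    make_nevergrad_parameter([1, 2, 3])
except TypeError as err:
    print(err)  # sequence item 0: expected str instance, int found
```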
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import itertools
3 import logging
4 from dataclasses import dataclass
5 from typing import Any, Dict, List, Optional, Tuple
6
7 from hydra.core.config_loader import ConfigLoader
8 from hydra.core.plugins import Plugins
9 from hydra.plugins.launcher import Launcher
10 from hydra.plugins.sweeper import Sweeper
11 from hydra.types import TaskFunction
12 from omegaconf import DictConfig, ListConfig, OmegaConf
13
14 from .config import OptimConf, ScalarConfigSpec
15
16 # pylint: disable=logging-fstring-interpolation,no-self-used
17 log = logging.getLogger(__name__)
18
19
20 @dataclass
21 class CommandlineSpec:
22 """Structured commandline specification
23 for sweepers handling categorical variables and bounded variables
24
25 Attributes
26 ----------
27 bounds: Optional[Tuple[float, float]]
28 if present, this defines a bounded scalar between bounds[0]
29 and bounds[1]
30 options: Optional[List[Any]]
31 if present, this defines the options/choices of a categorical
32 variable
33 cast: str
34 the name of the variable type to cast it to ("int", "str"
35 or "float")
36 log: bool
37 for bounded scalars, whether it is log-distributed
38
39 Note
40 ----
41 Exactly one of bounds or options must be provided
42 """
43
44 bounds: Optional[Tuple[float, float]] = None
45 options: Optional[List[str]] = None
46 cast: str = "float"
47 log: bool = False
48
49 def __post_init__(self) -> None:
50 if not (self.bounds is None) ^ (self.options is None):
51 raise ValueError("Exactly one of bounds or options must be specified")
52 if self.bounds is not None:
53 if self.cast == "str":
54 raise ValueError(
55 "Inconsistent specifications 'str' for bounded values."
56 )
57 if self.bounds[0] > self.bounds[1]:
58 raise ValueError(f"Bounds must be ordered, but got {self.bounds}")
59 if self.options is not None and self.log:
60 raise ValueError("Inconsistent 'log' specification for choice parameter")
61
62 @classmethod
63 def parse(cls, string: str) -> "CommandlineSpec":
64 """Parses a commandline argument string
65
66 Parameter
67 ---------
68 string: str
69 This can be:
70 - comma-separated values: for a choice parameter
71 Eg.: "a,b,c"
72 - colon-separated values for ranges of scalars.
73 Eg.: "0:10"
74 Colon-separeted can be appended to:
75 - cast to int/str/float (always defaults to float):
76 Eg: "float:0,4,10", "int:0:10"
77 - set log distribution for scalars
78 Eg: "int:log:4:1024"
79 """
80 available_modifiers = {"log", "float", "int", "str"}
81 colon_split = string.split(":")
82 modifiers = set(
83 itertools.takewhile(available_modifiers.__contains__, colon_split)
84 )
85 remain = colon_split[len(modifiers) :]
86 casts = list(modifiers - {"log"})
87 if len(remain) not in {1, 2}:
88 raise ValueError(
89 "Can't interpret non-speficiations: {}.\nthis needs to be "
90 "either colon or coma-separated values".format(":".join(remain))
91 )
92 if len(casts) > 1:
93 raise ValueError(f"Inconsistent specifications: {casts}")
94 if len(remain) == 1: # choice argument
95 cast = casts[0] if casts else "str"
96 options = remain[0].split(",")
97 if len(options) < 2:
98 raise ValueError("At least 2 options are required")
99 if not casts:
100 try: # default to float if possible and no spec provided
101 _ = [float(x) for x in options]
102 cast = "float"
103 except ValueError:
104 pass
105 return cls(options=options, cast=cast)
106 # bounded argument
107 bounds: Tuple[float, float] = tuple(float(x) for x in remain) # type: ignore
108 cast = casts[0] if casts else "float"
109 return cls(bounds=bounds, cast=cast, log="log" in modifiers)
110
111
112 # pylint: disable=too-many-branches
113 def make_nevergrad_parameter(description: Any) -> Any:
114 """Returns a Nevergrad parameter from a definition string or object.
115
116 Parameters
117 ----------
118 description: Any
119 * a commandline definition string. This can be:
120 - comma-separated values: for a choice parameter
121 Eg.: "a,b,c"
122 Note: sequences of increasing scalars provide a specific parametrization
123 compared to unordered categorical values
124 - ":"-separated values for ranges of scalars.
125 "int" and/or "log" modifiers can be added in front to cast to integer or
126 use log-distributed values (Eg: int:log:4:1024)
127 - anything else will be treated as a constant string
128 * a config definition dict for scalar parameters, with potential fields
129 init, lower, upper, step, log, integer
130 * a list for option parameters defined in config file
131
132 Returns
133 -------
134 Parameter or str
135 A Parameter if the string fitted one of the definitions, else the input string
136 """
137 # lazy initialization to avoid overhead when loading hydra
138 import nevergrad as ng
139
140 # revert config parsing
141
142 if isinstance(description, (ListConfig, list)):
143 description = ",".join(description)
144 if isinstance(description, str):
145 # cast to spec if possible
146 try:
147 description = CommandlineSpec.parse(description)
148 except ValueError:
149 pass
150 # convert scalar commandline specs to dict
151 if isinstance(description, CommandlineSpec) and description.bounds is not None:
152 description = ScalarConfigSpec(
153 lower=description.bounds[0],
154 upper=description.bounds[1],
155 log=description.log,
156 integer=description.cast == "int",
157 )
158 # convert scalar config specs to dict
159 # convert dict to Scalar parameter instance
160 if isinstance(description, (dict, DictConfig)):
161 description = ScalarConfigSpec(**description)
162 if isinstance(description, ScalarConfigSpec):
163 init = ["init", "lower", "upper"]
164 init_params = {x: getattr(description, x) for x in init}
165 if not description.log:
166 scalar = ng.p.Scalar(**init_params)
167 if description.step is not None:
168 scalar.set_mutation(sigma=description.step)
169 else:
170 if description.step is not None:
171 init_params["exponent"] = description.step
172 scalar = ng.p.Log(**init_params)
173 if description.integer:
174 scalar.set_integer_casting()
175 a, b = scalar.bounds
176 if a is not None and b is not None and b - a <= 6:
177 raise ValueError(
178 "For integers with 6 or fewer values, use a choice instead"
179 )
180 return scalar
181 # choices
182 if isinstance(description, CommandlineSpec):
183 assert description.options is not None
184 caster = {"int": int, "str": str, "float": float}[description.cast]
185 choices = [caster(x) for x in description.options]
186 ordered = all(isinstance(c, (int, float)) for c in choices)
187 ordered &= all(c0 <= c1 for c0, c1 in zip(choices[:-1], choices[1:]))
188 return ng.p.TransitionChoice(choices) if ordered else ng.p.Choice(choices)
189 # constant
190 if isinstance(description, (str, int, float)):
191 return description
192 raise TypeError(f"Unexpected parameter configuration: {description}")
193
194
195 class NevergradSweeper(Sweeper):
196 """Returns a Nevergrad parameter from a definition string.
197
198 Parameters
199 ----------
200 config: DictConfig
201 the optimization process configuration
202 version: int
203 version of the API
204 """
205
206 def __init__(
207 self, optim: OptimConf, version: int, parametrization: Optional[DictConfig],
208 ):
209 assert (
210 version == 1
211 ), f"Only version 1 of API is currently available (got {version})"
212 self.opt_config = optim
213 self.config: Optional[DictConfig] = None
214 self.launcher: Optional[Launcher] = None
215 self.job_results = None
216 self.parametrization: Dict[str, Any] = {}
217 if parametrization is not None:
218 assert isinstance(parametrization, DictConfig)
219 self.parametrization = {
220 x: make_nevergrad_parameter(y) for x, y in parametrization.items()
221 }
222 self.job_idx: Optional[int] = None
223
224 def setup(
225 self,
226 config: DictConfig,
227 config_loader: ConfigLoader,
228 task_function: TaskFunction,
229 ) -> None:
230 self.job_idx = 0
231 self.config = config
232 self.config_loader = config_loader
233 self.launcher = Plugins.instance().instantiate_launcher(
234 config=config, config_loader=config_loader, task_function=task_function
235 )
236
237 def sweep(self, arguments: List[str]) -> None:
238 # lazy initialization to avoid overhead when loading hydra
239 import nevergrad as ng
240
241 assert self.config is not None
242 assert self.launcher is not None
243 assert self.job_idx is not None
244 direction = -1 if self.opt_config.maximize else 1
245 name = "maximization" if self.opt_config.maximize else "minimization"
246 # Override the parametrization from commandline
247 params = dict(self.parametrization)
248 for s in arguments:
249 key, value = s.split("=", 1)
250 params[key] = make_nevergrad_parameter(value)
251 parametrization = ng.p.Dict(**params)
252 parametrization.descriptors.deterministic_function = not self.opt_config.noisy
253 parametrization.random_state.seed(self.opt_config.seed)
254 # log and build the optimizer
255 opt = self.opt_config.optimizer
256 remaining_budget = self.opt_config.budget
257 nw = self.opt_config.num_workers
258 log.info(
259 f"NevergradSweeper(optimizer={opt}, budget={remaining_budget}, "
260 f"num_workers={nw}) {name}"
261 )
262 log.info(f"with parametrization {parametrization}")
263 log.info(f"Sweep output dir: {self.config.hydra.sweep.dir}")
264 optimizer = ng.optimizers.registry[opt](parametrization, remaining_budget, nw)
265 # loop!
266 all_returns: List[Any] = []
267 best: Tuple[float, ng.p.Parameter] = (float("inf"), parametrization)
268 while remaining_budget > 0:
269 batch = min(nw, remaining_budget)
270 remaining_budget -= batch
271 candidates = [optimizer.ask() for _ in range(batch)]
272 overrides = list(
273 tuple(f"{x}={y}" for x, y in c.value.items()) for c in candidates
274 )
275 self.validate_batch_is_legal(overrides)
276 returns = self.launcher.launch(overrides, initial_job_idx=self.job_idx)
277 self.job_idx += len(returns)
278 # would have been nice to avoid waiting for all jobs to finish
279 # aka batch size Vs steady state (launching a new job whenever one is done)
280 for cand, ret in zip(candidates, returns):
281 loss = direction * ret.return_value
282 optimizer.tell(cand, loss)
283 if loss < best[0]:
284 best = (loss, cand)
285 all_returns.extend(returns)
286 recom = optimizer.provide_recommendation()
287 results_to_serialize = {
288 "name": "nevergrad",
289 "best_evaluated_params": best[1].value,
290 "best_evaluated_result": direction * best[0],
291 }
292 OmegaConf.save(
293 OmegaConf.create(results_to_serialize),
294 f"{self.config.hydra.sweep.dir}/optimization_results.yaml",
295 )
296 log.info(
297 "Best parameters: %s", " ".join(f"{x}={y}" for x, y in recom.value.items())
298 )
299
```
--- END FILES ---
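(Editorial aside, not part of the original task prompt.) The `make_nevergrad_parameter` function shown above is the piece that turns Hydra sweep definitions into Nevergrad parameters. A rough sketch of how a few definitions are interpreted, assuming `nevergrad` is installed and that the import path matches the file path above; the expected results are read off the code, not guaranteed output:

```python
from hydra_plugins.hydra_nevergrad_sweeper.core import make_nevergrad_parameter

make_nevergrad_parameter("a,b,c")           # unordered categorical -> ng.p.Choice(["a", "b", "c"])
make_nevergrad_parameter("1,2,3")           # ordered numeric choice -> ng.p.TransitionChoice([1.0, 2.0, 3.0])
make_nevergrad_parameter("int:4:128")       # bounded integer scalar -> ng.p.Scalar with integer casting
make_nevergrad_parameter("int:log:4:1024")  # log-distributed integer -> ng.p.Log with integer casting
make_nevergrad_parameter("int:1:3")         # raises ValueError: 6 or fewer integer values -> use a choice
make_nevergrad_parameter([1, 2, 3])         # raises TypeError inside ",".join(...) -- the reported bug
```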
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
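(Editorial aside.) The golden diff that follows fixes the crash by stringifying list items before joining them; `str.join` only accepts string elements, so a config list of integers fails on the original line. Illustrative values:

```python
",".join([1, 2, 3])                  # TypeError: sequence item 0: expected str instance, int found
",".join(str(x) for x in [1, 2, 3])  # "1,2,3", which then parses downstream as an ordered choice
```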
| diff --git a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py
--- a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py
+++ b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py
@@ -140,7 +140,7 @@
# revert config parsing
if isinstance(description, (ListConfig, list)):
- description = ",".join(description)
+ description = ",".join(str(x) for x in description)
if isinstance(description, str):
# cast to spec if possible
try:
| {"golden_diff": "diff --git a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py\n--- a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py\n+++ b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py\n@@ -140,7 +140,7 @@\n # revert config parsing\n \n if isinstance(description, (ListConfig, list)):\n- description = \",\".join(description)\n+ description = \",\".join(str(x) for x in description)\n if isinstance(description, str):\n # cast to spec if possible\n try:\n", "issue": "[Bug]: Nevergrad sweeper does not work with integers if there are less than 6 choices\nNevergrad sweeper complains if it has less than 6 values to sweep over in a range (e.g. `lower: 1` `upper:3`) and asks to use a list instead (`ValueError: For integers with 6 or fewer values, use a choice instead`). But if you use a list with integers it does not work because it assumes that choices contain only strings:\r\n\r\n\r\n\r\nLine where the first error is raised: https://github.com/facebookresearch/hydra/blob/0e001afb2a55275b6f7dc33e79035dbf3a797c00/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py#L178\r\nHydra Version: 1.0.0rc2\r\n\r\n\r\nOf course I can give a string and then convert in my code, but it would probably be better to solve it differently? For example sliently treating it as a list without raising the first error ? Or at least to say in the raised error that you have to use a list and convert the str to int in your own code ? Not sure what is the best way..\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport itertools\nimport logging\nfrom dataclasses import dataclass\nfrom typing import Any, Dict, List, Optional, Tuple\n\nfrom hydra.core.config_loader import ConfigLoader\nfrom hydra.core.plugins import Plugins\nfrom hydra.plugins.launcher import Launcher\nfrom hydra.plugins.sweeper import Sweeper\nfrom hydra.types import TaskFunction\nfrom omegaconf import DictConfig, ListConfig, OmegaConf\n\nfrom .config import OptimConf, ScalarConfigSpec\n\n# pylint: disable=logging-fstring-interpolation,no-self-used\nlog = logging.getLogger(__name__)\n\n\n@dataclass\nclass CommandlineSpec:\n \"\"\"Structured commandline specification\n for sweepers handling categorical variables and bounded variables\n\n Attributes\n ----------\n bounds: Optional[Tuple[float, float]]\n if present, this defines a bounded scalar between bounds[0]\n and bounds[1]\n options: Optional[List[Any]]\n if present, this defines the options/choices of a categorical\n variable\n cast: str\n the name of the variable type to cast it to (\"int\", \"str\"\n or \"float\")\n log: bool\n for bounded scalars, whether it is log-distributed\n\n Note\n ----\n Exactly one of bounds or options must be provided\n \"\"\"\n\n bounds: Optional[Tuple[float, float]] = None\n options: Optional[List[str]] = None\n cast: str = \"float\"\n log: bool = False\n\n def __post_init__(self) -> None:\n if not (self.bounds is None) ^ (self.options is None):\n raise ValueError(\"Exactly one of bounds or options must be specified\")\n if self.bounds is not None:\n if self.cast == \"str\":\n raise ValueError(\n \"Inconsistent specifications 'str' for bounded values.\"\n )\n if self.bounds[0] > self.bounds[1]:\n raise ValueError(f\"Bounds must be ordered, but got {self.bounds}\")\n if self.options is not None and self.log:\n raise 
ValueError(\"Inconsistent 'log' specification for choice parameter\")\n\n @classmethod\n def parse(cls, string: str) -> \"CommandlineSpec\":\n \"\"\"Parses a commandline argument string\n\n Parameter\n ---------\n string: str\n This can be:\n - comma-separated values: for a choice parameter\n Eg.: \"a,b,c\"\n - colon-separated values for ranges of scalars.\n Eg.: \"0:10\"\n Colon-separeted can be appended to:\n - cast to int/str/float (always defaults to float):\n Eg: \"float:0,4,10\", \"int:0:10\"\n - set log distribution for scalars\n Eg: \"int:log:4:1024\"\n \"\"\"\n available_modifiers = {\"log\", \"float\", \"int\", \"str\"}\n colon_split = string.split(\":\")\n modifiers = set(\n itertools.takewhile(available_modifiers.__contains__, colon_split)\n )\n remain = colon_split[len(modifiers) :]\n casts = list(modifiers - {\"log\"})\n if len(remain) not in {1, 2}:\n raise ValueError(\n \"Can't interpret non-speficiations: {}.\\nthis needs to be \"\n \"either colon or coma-separated values\".format(\":\".join(remain))\n )\n if len(casts) > 1:\n raise ValueError(f\"Inconsistent specifications: {casts}\")\n if len(remain) == 1: # choice argument\n cast = casts[0] if casts else \"str\"\n options = remain[0].split(\",\")\n if len(options) < 2:\n raise ValueError(\"At least 2 options are required\")\n if not casts:\n try: # default to float if possible and no spec provided\n _ = [float(x) for x in options]\n cast = \"float\"\n except ValueError:\n pass\n return cls(options=options, cast=cast)\n # bounded argument\n bounds: Tuple[float, float] = tuple(float(x) for x in remain) # type: ignore\n cast = casts[0] if casts else \"float\"\n return cls(bounds=bounds, cast=cast, log=\"log\" in modifiers)\n\n\n# pylint: disable=too-many-branches\ndef make_nevergrad_parameter(description: Any) -> Any:\n \"\"\"Returns a Nevergrad parameter from a definition string or object.\n\n Parameters\n ----------\n description: Any\n * a commandline definition string. 
This can be:\n - comma-separated values: for a choice parameter\n Eg.: \"a,b,c\"\n Note: sequences of increasing scalars provide a specific parametrization\n compared to unordered categorical values\n - \":\"-separated values for ranges of scalars.\n \"int\" and/or \"log\" modifiers can be added in front to cast to integer or\n use log-distributed values (Eg: int:log:4:1024)\n - anything else will be treated as a constant string\n * a config definition dict for scalar parameters, with potential fields\n init, lower, upper, step, log, integer\n * a list for option parameters defined in config file\n\n Returns\n -------\n Parameter or str\n A Parameter if the string fitted one of the definitions, else the input string\n \"\"\"\n # lazy initialization to avoid overhead when loading hydra\n import nevergrad as ng\n\n # revert config parsing\n\n if isinstance(description, (ListConfig, list)):\n description = \",\".join(description)\n if isinstance(description, str):\n # cast to spec if possible\n try:\n description = CommandlineSpec.parse(description)\n except ValueError:\n pass\n # convert scalar commandline specs to dict\n if isinstance(description, CommandlineSpec) and description.bounds is not None:\n description = ScalarConfigSpec(\n lower=description.bounds[0],\n upper=description.bounds[1],\n log=description.log,\n integer=description.cast == \"int\",\n )\n # convert scalar config specs to dict\n # convert dict to Scalar parameter instance\n if isinstance(description, (dict, DictConfig)):\n description = ScalarConfigSpec(**description)\n if isinstance(description, ScalarConfigSpec):\n init = [\"init\", \"lower\", \"upper\"]\n init_params = {x: getattr(description, x) for x in init}\n if not description.log:\n scalar = ng.p.Scalar(**init_params)\n if description.step is not None:\n scalar.set_mutation(sigma=description.step)\n else:\n if description.step is not None:\n init_params[\"exponent\"] = description.step\n scalar = ng.p.Log(**init_params)\n if description.integer:\n scalar.set_integer_casting()\n a, b = scalar.bounds\n if a is not None and b is not None and b - a <= 6:\n raise ValueError(\n \"For integers with 6 or fewer values, use a choice instead\"\n )\n return scalar\n # choices\n if isinstance(description, CommandlineSpec):\n assert description.options is not None\n caster = {\"int\": int, \"str\": str, \"float\": float}[description.cast]\n choices = [caster(x) for x in description.options]\n ordered = all(isinstance(c, (int, float)) for c in choices)\n ordered &= all(c0 <= c1 for c0, c1 in zip(choices[:-1], choices[1:]))\n return ng.p.TransitionChoice(choices) if ordered else ng.p.Choice(choices)\n # constant\n if isinstance(description, (str, int, float)):\n return description\n raise TypeError(f\"Unexpected parameter configuration: {description}\")\n\n\nclass NevergradSweeper(Sweeper):\n \"\"\"Returns a Nevergrad parameter from a definition string.\n\n Parameters\n ----------\n config: DictConfig\n the optimization process configuration\n version: int\n version of the API\n \"\"\"\n\n def __init__(\n self, optim: OptimConf, version: int, parametrization: Optional[DictConfig],\n ):\n assert (\n version == 1\n ), f\"Only version 1 of API is currently available (got {version})\"\n self.opt_config = optim\n self.config: Optional[DictConfig] = None\n self.launcher: Optional[Launcher] = None\n self.job_results = None\n self.parametrization: Dict[str, Any] = {}\n if parametrization is not None:\n assert isinstance(parametrization, DictConfig)\n self.parametrization = {\n x: 
make_nevergrad_parameter(y) for x, y in parametrization.items()\n }\n self.job_idx: Optional[int] = None\n\n def setup(\n self,\n config: DictConfig,\n config_loader: ConfigLoader,\n task_function: TaskFunction,\n ) -> None:\n self.job_idx = 0\n self.config = config\n self.config_loader = config_loader\n self.launcher = Plugins.instance().instantiate_launcher(\n config=config, config_loader=config_loader, task_function=task_function\n )\n\n def sweep(self, arguments: List[str]) -> None:\n # lazy initialization to avoid overhead when loading hydra\n import nevergrad as ng\n\n assert self.config is not None\n assert self.launcher is not None\n assert self.job_idx is not None\n direction = -1 if self.opt_config.maximize else 1\n name = \"maximization\" if self.opt_config.maximize else \"minimization\"\n # Override the parametrization from commandline\n params = dict(self.parametrization)\n for s in arguments:\n key, value = s.split(\"=\", 1)\n params[key] = make_nevergrad_parameter(value)\n parametrization = ng.p.Dict(**params)\n parametrization.descriptors.deterministic_function = not self.opt_config.noisy\n parametrization.random_state.seed(self.opt_config.seed)\n # log and build the optimizer\n opt = self.opt_config.optimizer\n remaining_budget = self.opt_config.budget\n nw = self.opt_config.num_workers\n log.info(\n f\"NevergradSweeper(optimizer={opt}, budget={remaining_budget}, \"\n f\"num_workers={nw}) {name}\"\n )\n log.info(f\"with parametrization {parametrization}\")\n log.info(f\"Sweep output dir: {self.config.hydra.sweep.dir}\")\n optimizer = ng.optimizers.registry[opt](parametrization, remaining_budget, nw)\n # loop!\n all_returns: List[Any] = []\n best: Tuple[float, ng.p.Parameter] = (float(\"inf\"), parametrization)\n while remaining_budget > 0:\n batch = min(nw, remaining_budget)\n remaining_budget -= batch\n candidates = [optimizer.ask() for _ in range(batch)]\n overrides = list(\n tuple(f\"{x}={y}\" for x, y in c.value.items()) for c in candidates\n )\n self.validate_batch_is_legal(overrides)\n returns = self.launcher.launch(overrides, initial_job_idx=self.job_idx)\n self.job_idx += len(returns)\n # would have been nice to avoid waiting for all jobs to finish\n # aka batch size Vs steady state (launching a new job whenever one is done)\n for cand, ret in zip(candidates, returns):\n loss = direction * ret.return_value\n optimizer.tell(cand, loss)\n if loss < best[0]:\n best = (loss, cand)\n all_returns.extend(returns)\n recom = optimizer.provide_recommendation()\n results_to_serialize = {\n \"name\": \"nevergrad\",\n \"best_evaluated_params\": best[1].value,\n \"best_evaluated_result\": direction * best[0],\n }\n OmegaConf.save(\n OmegaConf.create(results_to_serialize),\n f\"{self.config.hydra.sweep.dir}/optimization_results.yaml\",\n )\n log.info(\n \"Best parameters: %s\", \" \".join(f\"{x}={y}\" for x, y in recom.value.items())\n )\n", "path": "plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nimport itertools\nimport logging\nfrom dataclasses import dataclass\nfrom typing import Any, Dict, List, Optional, Tuple\n\nfrom hydra.core.config_loader import ConfigLoader\nfrom hydra.core.plugins import Plugins\nfrom hydra.plugins.launcher import Launcher\nfrom hydra.plugins.sweeper import Sweeper\nfrom hydra.types import TaskFunction\nfrom omegaconf import DictConfig, ListConfig, OmegaConf\n\nfrom .config import OptimConf, ScalarConfigSpec\n\n# pylint: disable=logging-fstring-interpolation,no-self-used\nlog = logging.getLogger(__name__)\n\n\n@dataclass\nclass CommandlineSpec:\n \"\"\"Structured commandline specification\n for sweepers handling categorical variables and bounded variables\n\n Attributes\n ----------\n bounds: Optional[Tuple[float, float]]\n if present, this defines a bounded scalar between bounds[0]\n and bounds[1]\n options: Optional[List[Any]]\n if present, this defines the options/choices of a categorical\n variable\n cast: str\n the name of the variable type to cast it to (\"int\", \"str\"\n or \"float\")\n log: bool\n for bounded scalars, whether it is log-distributed\n\n Note\n ----\n Exactly one of bounds or options must be provided\n \"\"\"\n\n bounds: Optional[Tuple[float, float]] = None\n options: Optional[List[str]] = None\n cast: str = \"float\"\n log: bool = False\n\n def __post_init__(self) -> None:\n if not (self.bounds is None) ^ (self.options is None):\n raise ValueError(\"Exactly one of bounds or options must be specified\")\n if self.bounds is not None:\n if self.cast == \"str\":\n raise ValueError(\n \"Inconsistent specifications 'str' for bounded values.\"\n )\n if self.bounds[0] > self.bounds[1]:\n raise ValueError(f\"Bounds must be ordered, but got {self.bounds}\")\n if self.options is not None and self.log:\n raise ValueError(\"Inconsistent 'log' specification for choice parameter\")\n\n @classmethod\n def parse(cls, string: str) -> \"CommandlineSpec\":\n \"\"\"Parses a commandline argument string\n\n Parameter\n ---------\n string: str\n This can be:\n - comma-separated values: for a choice parameter\n Eg.: \"a,b,c\"\n - colon-separated values for ranges of scalars.\n Eg.: \"0:10\"\n Colon-separeted can be appended to:\n - cast to int/str/float (always defaults to float):\n Eg: \"float:0,4,10\", \"int:0:10\"\n - set log distribution for scalars\n Eg: \"int:log:4:1024\"\n \"\"\"\n available_modifiers = {\"log\", \"float\", \"int\", \"str\"}\n colon_split = string.split(\":\")\n modifiers = set(\n itertools.takewhile(available_modifiers.__contains__, colon_split)\n )\n remain = colon_split[len(modifiers) :]\n casts = list(modifiers - {\"log\"})\n if len(remain) not in {1, 2}:\n raise ValueError(\n \"Can't interpret non-speficiations: {}.\\nthis needs to be \"\n \"either colon or coma-separated values\".format(\":\".join(remain))\n )\n if len(casts) > 1:\n raise ValueError(f\"Inconsistent specifications: {casts}\")\n if len(remain) == 1: # choice argument\n cast = casts[0] if casts else \"str\"\n options = remain[0].split(\",\")\n if len(options) < 2:\n raise ValueError(\"At least 2 options are required\")\n if not casts:\n try: # default to float if possible and no spec provided\n _ = [float(x) for x in options]\n cast = \"float\"\n except ValueError:\n pass\n return cls(options=options, cast=cast)\n # bounded argument\n bounds: Tuple[float, float] = tuple(float(x) for x in remain) # type: ignore\n cast = casts[0] if casts else \"float\"\n return cls(bounds=bounds, cast=cast, log=\"log\" in modifiers)\n\n\n# pylint: 
disable=too-many-branches\ndef make_nevergrad_parameter(description: Any) -> Any:\n \"\"\"Returns a Nevergrad parameter from a definition string or object.\n\n Parameters\n ----------\n description: Any\n * a commandline definition string. This can be:\n - comma-separated values: for a choice parameter\n Eg.: \"a,b,c\"\n Note: sequences of increasing scalars provide a specific parametrization\n compared to unordered categorical values\n - \":\"-separated values for ranges of scalars.\n \"int\" and/or \"log\" modifiers can be added in front to cast to integer or\n use log-distributed values (Eg: int:log:4:1024)\n - anything else will be treated as a constant string\n * a config definition dict for scalar parameters, with potential fields\n init, lower, upper, step, log, integer\n * a list for option parameters defined in config file\n\n Returns\n -------\n Parameter or str\n A Parameter if the string fitted one of the definitions, else the input string\n \"\"\"\n # lazy initialization to avoid overhead when loading hydra\n import nevergrad as ng\n\n # revert config parsing\n\n if isinstance(description, (ListConfig, list)):\n description = \",\".join(str(x) for x in description)\n if isinstance(description, str):\n # cast to spec if possible\n try:\n description = CommandlineSpec.parse(description)\n except ValueError:\n pass\n # convert scalar commandline specs to dict\n if isinstance(description, CommandlineSpec) and description.bounds is not None:\n description = ScalarConfigSpec(\n lower=description.bounds[0],\n upper=description.bounds[1],\n log=description.log,\n integer=description.cast == \"int\",\n )\n # convert scalar config specs to dict\n # convert dict to Scalar parameter instance\n if isinstance(description, (dict, DictConfig)):\n description = ScalarConfigSpec(**description)\n if isinstance(description, ScalarConfigSpec):\n init = [\"init\", \"lower\", \"upper\"]\n init_params = {x: getattr(description, x) for x in init}\n if not description.log:\n scalar = ng.p.Scalar(**init_params)\n if description.step is not None:\n scalar.set_mutation(sigma=description.step)\n else:\n if description.step is not None:\n init_params[\"exponent\"] = description.step\n scalar = ng.p.Log(**init_params)\n if description.integer:\n scalar.set_integer_casting()\n a, b = scalar.bounds\n if a is not None and b is not None and b - a <= 6:\n raise ValueError(\n \"For integers with 6 or fewer values, use a choice instead\"\n )\n return scalar\n # choices\n if isinstance(description, CommandlineSpec):\n assert description.options is not None\n caster = {\"int\": int, \"str\": str, \"float\": float}[description.cast]\n choices = [caster(x) for x in description.options]\n ordered = all(isinstance(c, (int, float)) for c in choices)\n ordered &= all(c0 <= c1 for c0, c1 in zip(choices[:-1], choices[1:]))\n return ng.p.TransitionChoice(choices) if ordered else ng.p.Choice(choices)\n # constant\n if isinstance(description, (str, int, float)):\n return description\n raise TypeError(f\"Unexpected parameter configuration: {description}\")\n\n\nclass NevergradSweeper(Sweeper):\n \"\"\"Returns a Nevergrad parameter from a definition string.\n\n Parameters\n ----------\n config: DictConfig\n the optimization process configuration\n version: int\n version of the API\n \"\"\"\n\n def __init__(\n self, optim: OptimConf, version: int, parametrization: Optional[DictConfig],\n ):\n assert (\n version == 1\n ), f\"Only version 1 of API is currently available (got {version})\"\n self.opt_config = optim\n self.config: 
Optional[DictConfig] = None\n self.launcher: Optional[Launcher] = None\n self.job_results = None\n self.parametrization: Dict[str, Any] = {}\n if parametrization is not None:\n assert isinstance(parametrization, DictConfig)\n self.parametrization = {\n x: make_nevergrad_parameter(y) for x, y in parametrization.items()\n }\n self.job_idx: Optional[int] = None\n\n def setup(\n self,\n config: DictConfig,\n config_loader: ConfigLoader,\n task_function: TaskFunction,\n ) -> None:\n self.job_idx = 0\n self.config = config\n self.config_loader = config_loader\n self.launcher = Plugins.instance().instantiate_launcher(\n config=config, config_loader=config_loader, task_function=task_function\n )\n\n def sweep(self, arguments: List[str]) -> None:\n # lazy initialization to avoid overhead when loading hydra\n import nevergrad as ng\n\n assert self.config is not None\n assert self.launcher is not None\n assert self.job_idx is not None\n direction = -1 if self.opt_config.maximize else 1\n name = \"maximization\" if self.opt_config.maximize else \"minimization\"\n # Override the parametrization from commandline\n params = dict(self.parametrization)\n for s in arguments:\n key, value = s.split(\"=\", 1)\n params[key] = make_nevergrad_parameter(value)\n parametrization = ng.p.Dict(**params)\n parametrization.descriptors.deterministic_function = not self.opt_config.noisy\n parametrization.random_state.seed(self.opt_config.seed)\n # log and build the optimizer\n opt = self.opt_config.optimizer\n remaining_budget = self.opt_config.budget\n nw = self.opt_config.num_workers\n log.info(\n f\"NevergradSweeper(optimizer={opt}, budget={remaining_budget}, \"\n f\"num_workers={nw}) {name}\"\n )\n log.info(f\"with parametrization {parametrization}\")\n log.info(f\"Sweep output dir: {self.config.hydra.sweep.dir}\")\n optimizer = ng.optimizers.registry[opt](parametrization, remaining_budget, nw)\n # loop!\n all_returns: List[Any] = []\n best: Tuple[float, ng.p.Parameter] = (float(\"inf\"), parametrization)\n while remaining_budget > 0:\n batch = min(nw, remaining_budget)\n remaining_budget -= batch\n candidates = [optimizer.ask() for _ in range(batch)]\n overrides = list(\n tuple(f\"{x}={y}\" for x, y in c.value.items()) for c in candidates\n )\n self.validate_batch_is_legal(overrides)\n returns = self.launcher.launch(overrides, initial_job_idx=self.job_idx)\n self.job_idx += len(returns)\n # would have been nice to avoid waiting for all jobs to finish\n # aka batch size Vs steady state (launching a new job whenever one is done)\n for cand, ret in zip(candidates, returns):\n loss = direction * ret.return_value\n optimizer.tell(cand, loss)\n if loss < best[0]:\n best = (loss, cand)\n all_returns.extend(returns)\n recom = optimizer.provide_recommendation()\n results_to_serialize = {\n \"name\": \"nevergrad\",\n \"best_evaluated_params\": best[1].value,\n \"best_evaluated_result\": direction * best[0],\n }\n OmegaConf.save(\n OmegaConf.create(results_to_serialize),\n f\"{self.config.hydra.sweep.dir}/optimization_results.yaml\",\n )\n log.info(\n \"Best parameters: %s\", \" \".join(f\"{x}={y}\" for x, y in recom.value.items())\n )\n", "path": "plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/core.py"}]} |
gh_patches_debug_1072 | rasdani/github-patches | git_diff | huggingface__diffusers-1279 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
num_images_per_prompt=num_samples FAILS after latest commit in SD pipelines
### Describe the bug
This used to work:
images = pipe(
prompt=prompt,
image=image,
mask_image=mask_image,
guidance_scale=guidance_scale,
generator=generator,
num_images_per_prompt=num_samples,
num_inference_steps =50,
height=height,
width=width,
).images
Now it doesn't.
### Reproduction
Use the RUNWAYML inpainting notebook. It fails there!
https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb
<img width="1339" alt="image" src="https://user-images.githubusercontent.com/26677859/201589394-ffadb77b-28c4-4667-8e53-7d381f482261.png">
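A minimal reproduction along the same lines, for readers who do not want to open the notebook (the checkpoint id, placeholder inputs and sample count are assumptions, not taken from the report):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# placeholder inputs; the notebook downloads a real photo and mask
image = Image.new("RGB", (512, 512))
mask_image = Image.new("L", (512, 512), 255)

# on the affected diffusers release this raises the RuntimeError shown below
# whenever num_images_per_prompt > 1
images = pipe(
    prompt="a cat sitting on a park bench",
    image=image,
    mask_image=mask_image,
    num_images_per_prompt=3,
).images
```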
### Logs
```shell
RuntimeError Traceback (most recent call last)
<ipython-input-8-7352d4d77608> in <module>
11 guidance_scale=guidance_scale,
12 generator=generator,
---> 13 num_images_per_prompt=num_samples,
14 ).images
1 frames
/usr/local/lib/python3.7/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py in __call__(self, prompt, image, mask_image, height, width, num_inference_steps, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, output_type, return_dict, callback, callback_steps, **kwargs)
567
568 # concat latents, mask, masked_image_latents in the channel dimension
--> 569 latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)
570
571 latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 6 but got size 2 for tensor number 1 in the list.
```
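The sizes in the traceback follow directly from the batch handling in `__call__`: with one prompt, `num_images_per_prompt=3` and classifier-free guidance enabled, the noise latents are built from a batch of 3 and doubled to 6, while the mask is only repeated `batch_size=1` times and doubled to 2. The shapes below assume 512×512 inputs and are illustrative:

```python
# latents:              (1 * 3, 4, 64, 64) -> doubled for guidance -> (6, 4, 64, 64)
# mask:                 (1,     1, 64, 64) -> doubled for guidance -> (2, 1, 64, 64)
# masked_image_latents: (1,     4, 64, 64) -> doubled for guidance -> (2, 4, 64, 64)
# torch.cat([...], dim=1) needs every non-channel dim to match: 6 != 2 -> RuntimeError
```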
### System Info
The colab notebook in the runwayml inpainting page fails.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py`
Content:
```
1 # Copyright 2022 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import inspect
16 from typing import Callable, List, Optional, Union
17
18 import numpy as np
19 import torch
20
21 import PIL
22 from diffusers.utils import is_accelerate_available
23 from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
24
25 from ...configuration_utils import FrozenDict
26 from ...models import AutoencoderKL, UNet2DConditionModel
27 from ...pipeline_utils import DiffusionPipeline
28 from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
29 from ...utils import deprecate, logging
30 from . import StableDiffusionPipelineOutput
31 from .safety_checker import StableDiffusionSafetyChecker
32
33
34 logger = logging.get_logger(__name__) # pylint: disable=invalid-name
35
36
37 def prepare_mask_and_masked_image(image, mask):
38 image = np.array(image.convert("RGB"))
39 image = image[None].transpose(0, 3, 1, 2)
40 image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0
41
42 mask = np.array(mask.convert("L"))
43 mask = mask.astype(np.float32) / 255.0
44 mask = mask[None, None]
45 mask[mask < 0.5] = 0
46 mask[mask >= 0.5] = 1
47 mask = torch.from_numpy(mask)
48
49 masked_image = image * (mask < 0.5)
50
51 return mask, masked_image
52
53
54 class StableDiffusionInpaintPipeline(DiffusionPipeline):
55 r"""
56 Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
57
58 This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
59 library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
60
61 Args:
62 vae ([`AutoencoderKL`]):
63 Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
64 text_encoder ([`CLIPTextModel`]):
65 Frozen text-encoder. Stable Diffusion uses the text portion of
66 [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
67 the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
68 tokenizer (`CLIPTokenizer`):
69 Tokenizer of class
70 [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
71 unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
72 scheduler ([`SchedulerMixin`]):
73 A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
74 [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
75 safety_checker ([`StableDiffusionSafetyChecker`]):
76 Classification module that estimates whether generated images could be considered offensive or harmful.
77 Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
78 feature_extractor ([`CLIPFeatureExtractor`]):
79 Model that extracts features from generated images to be used as inputs for the `safety_checker`.
80 """
81
82 def __init__(
83 self,
84 vae: AutoencoderKL,
85 text_encoder: CLIPTextModel,
86 tokenizer: CLIPTokenizer,
87 unet: UNet2DConditionModel,
88 scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
89 safety_checker: StableDiffusionSafetyChecker,
90 feature_extractor: CLIPFeatureExtractor,
91 ):
92 super().__init__()
93
94 if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
95 deprecation_message = (
96 f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
97 f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
98 "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
99 " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
100 " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
101 " file"
102 )
103 deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
104 new_config = dict(scheduler.config)
105 new_config["steps_offset"] = 1
106 scheduler._internal_dict = FrozenDict(new_config)
107
108 if hasattr(scheduler.config, "skip_prk_steps") and scheduler.config.skip_prk_steps is False:
109 deprecation_message = (
110 f"The configuration file of this scheduler: {scheduler} has not set the configuration"
111 " `skip_prk_steps`. `skip_prk_steps` should be set to True in the configuration file. Please make"
112 " sure to update the config accordingly as not setting `skip_prk_steps` in the config might lead to"
113 " incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face"
114 " Hub, it would be very nice if you could open a Pull request for the"
115 " `scheduler/scheduler_config.json` file"
116 )
117 deprecate("skip_prk_steps not set", "1.0.0", deprecation_message, standard_warn=False)
118 new_config = dict(scheduler.config)
119 new_config["skip_prk_steps"] = True
120 scheduler._internal_dict = FrozenDict(new_config)
121
122 if safety_checker is None:
123 logger.warn(
124 f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
125 " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
126 " results in services or applications open to the public. Both the diffusers team and Hugging Face"
127 " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
128 " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
129 " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
130 )
131
132 self.register_modules(
133 vae=vae,
134 text_encoder=text_encoder,
135 tokenizer=tokenizer,
136 unet=unet,
137 scheduler=scheduler,
138 safety_checker=safety_checker,
139 feature_extractor=feature_extractor,
140 )
141
142 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_attention_slicing
143 def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
144 r"""
145 Enable sliced attention computation.
146
147 When this option is enabled, the attention module will split the input tensor in slices, to compute attention
148 in several steps. This is useful to save some memory in exchange for a small speed decrease.
149
150 Args:
151 slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
152 When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
153 a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
154 `attention_head_dim` must be a multiple of `slice_size`.
155 """
156 if slice_size == "auto":
157 # half the attention head size is usually a good trade-off between
158 # speed and memory
159 slice_size = self.unet.config.attention_head_dim // 2
160 self.unet.set_attention_slice(slice_size)
161
162 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_attention_slicing
163 def disable_attention_slicing(self):
164 r"""
165 Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
166 back to computing attention in one step.
167 """
168 # set slice_size = `None` to disable `attention slicing`
169 self.enable_attention_slicing(None)
170
171 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_sequential_cpu_offload
172 def enable_sequential_cpu_offload(self):
173 r"""
174 Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
175 text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
176 `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called.
177 """
178 if is_accelerate_available():
179 from accelerate import cpu_offload
180 else:
181 raise ImportError("Please install accelerate via `pip install accelerate`")
182
183 device = torch.device("cuda")
184
185 for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]:
186 if cpu_offloaded_model is not None:
187 cpu_offload(cpu_offloaded_model, device)
188
189 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_xformers_memory_efficient_attention
190 def enable_xformers_memory_efficient_attention(self):
191 r"""
192 Enable memory efficient attention as implemented in xformers.
193
194 When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference
195 time. Speed up at training time is not guaranteed.
196
197 Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention
198 is used.
199 """
200 self.unet.set_use_memory_efficient_attention_xformers(True)
201
202 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_xformers_memory_efficient_attention
203 def disable_xformers_memory_efficient_attention(self):
204 r"""
205 Disable memory efficient attention as implemented in xformers.
206 """
207 self.unet.set_use_memory_efficient_attention_xformers(False)
208
209 @property
210 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device
211 def _execution_device(self):
212 r"""
213 Returns the device on which the pipeline's models will be executed. After calling
214 `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
215 hooks.
216 """
217 if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
218 return self.device
219 for module in self.unet.modules():
220 if (
221 hasattr(module, "_hf_hook")
222 and hasattr(module._hf_hook, "execution_device")
223 and module._hf_hook.execution_device is not None
224 ):
225 return torch.device(module._hf_hook.execution_device)
226 return self.device
227
228 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt
229 def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):
230 r"""
231 Encodes the prompt into text encoder hidden states.
232
233 Args:
234 prompt (`str` or `list(int)`):
235 prompt to be encoded
236 device: (`torch.device`):
237 torch device
238 num_images_per_prompt (`int`):
239 number of images that should be generated per prompt
240 do_classifier_free_guidance (`bool`):
241 whether to use classifier free guidance or not
242 negative_prompt (`str` or `List[str]`):
243 The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
244 if `guidance_scale` is less than `1`).
245 """
246 batch_size = len(prompt) if isinstance(prompt, list) else 1
247
248 text_inputs = self.tokenizer(
249 prompt,
250 padding="max_length",
251 max_length=self.tokenizer.model_max_length,
252 truncation=True,
253 return_tensors="pt",
254 )
255 text_input_ids = text_inputs.input_ids
256 untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids
257
258 if not torch.equal(text_input_ids, untruncated_ids):
259 removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
260 logger.warning(
261 "The following part of your input was truncated because CLIP can only handle sequences up to"
262 f" {self.tokenizer.model_max_length} tokens: {removed_text}"
263 )
264 text_embeddings = self.text_encoder(text_input_ids.to(device))[0]
265
266 # duplicate text embeddings for each generation per prompt, using mps friendly method
267 bs_embed, seq_len, _ = text_embeddings.shape
268 text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
269 text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
270
271 # get unconditional embeddings for classifier free guidance
272 if do_classifier_free_guidance:
273 uncond_tokens: List[str]
274 if negative_prompt is None:
275 uncond_tokens = [""] * batch_size
276 elif type(prompt) is not type(negative_prompt):
277 raise TypeError(
278 f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
279 f" {type(prompt)}."
280 )
281 elif isinstance(negative_prompt, str):
282 uncond_tokens = [negative_prompt]
283 elif batch_size != len(negative_prompt):
284 raise ValueError(
285 f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
286 f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
287 " the batch size of `prompt`."
288 )
289 else:
290 uncond_tokens = negative_prompt
291
292 max_length = text_input_ids.shape[-1]
293 uncond_input = self.tokenizer(
294 uncond_tokens,
295 padding="max_length",
296 max_length=max_length,
297 truncation=True,
298 return_tensors="pt",
299 )
300 uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(device))[0]
301
302 # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
303 seq_len = uncond_embeddings.shape[1]
304 uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
305 uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
306
307 # For classifier free guidance, we need to do two forward passes.
308 # Here we concatenate the unconditional and text embeddings into a single batch
309 # to avoid doing two forward passes
310 text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
311
312 return text_embeddings
313
314 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
315 def run_safety_checker(self, image, device, dtype):
316 if self.safety_checker is not None:
317 safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
318 image, has_nsfw_concept = self.safety_checker(
319 images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
320 )
321 else:
322 has_nsfw_concept = None
323 return image, has_nsfw_concept
324
325 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
326 def prepare_extra_step_kwargs(self, generator, eta):
327 # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
328 # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
329 # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
330 # and should be between [0, 1]
331
332 accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
333 extra_step_kwargs = {}
334 if accepts_eta:
335 extra_step_kwargs["eta"] = eta
336
337 # check if the scheduler accepts generator
338 accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
339 if accepts_generator:
340 extra_step_kwargs["generator"] = generator
341 return extra_step_kwargs
342
343 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
344 def decode_latents(self, latents):
345 latents = 1 / 0.18215 * latents
346 image = self.vae.decode(latents).sample
347 image = (image / 2 + 0.5).clamp(0, 1)
348         # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
349 image = image.cpu().permute(0, 2, 3, 1).float().numpy()
350 return image
351
352 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs
353 def check_inputs(self, prompt, height, width, callback_steps):
354 if not isinstance(prompt, str) and not isinstance(prompt, list):
355 raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
356
357 if height % 8 != 0 or width % 8 != 0:
358 raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
359
360 if (callback_steps is None) or (
361 callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
362 ):
363 raise ValueError(
364 f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
365 f" {type(callback_steps)}."
366 )
367
368 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
369 def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
370 shape = (batch_size, num_channels_latents, height // 8, width // 8)
371 if latents is None:
372 if device.type == "mps":
373 # randn does not work reproducibly on mps
374 latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device)
375 else:
376 latents = torch.randn(shape, generator=generator, device=device, dtype=dtype)
377 else:
378 if latents.shape != shape:
379 raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
380 latents = latents.to(device)
381
382 # scale the initial noise by the standard deviation required by the scheduler
383 latents = latents * self.scheduler.init_noise_sigma
384 return latents
385
386 def prepare_mask_latents(
387 self, mask, masked_image, batch_size, height, width, dtype, device, generator, do_classifier_free_guidance
388 ):
389 # resize the mask to latents shape as we concatenate the mask to the latents
390 # we do that before converting to dtype to avoid breaking in case we're using cpu_offload
391 # and half precision
392 mask = torch.nn.functional.interpolate(mask, size=(height // 8, width // 8))
393 mask = mask.to(device=device, dtype=dtype)
394
395 masked_image = masked_image.to(device=device, dtype=dtype)
396
397 # encode the mask image into latents space so we can concatenate it to the latents
398 masked_image_latents = self.vae.encode(masked_image).latent_dist.sample(generator=generator)
399 masked_image_latents = 0.18215 * masked_image_latents
400
401 # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method
402 mask = mask.repeat(batch_size, 1, 1, 1)
403 masked_image_latents = masked_image_latents.repeat(batch_size, 1, 1, 1)
404
405 mask = torch.cat([mask] * 2) if do_classifier_free_guidance else mask
406 masked_image_latents = (
407 torch.cat([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents
408 )
409
410 # aligning device to prevent device errors when concating it with the latent model input
411 masked_image_latents = masked_image_latents.to(device=device, dtype=dtype)
412 return mask, masked_image_latents
413
414 @torch.no_grad()
415 def __call__(
416 self,
417 prompt: Union[str, List[str]],
418 image: Union[torch.FloatTensor, PIL.Image.Image],
419 mask_image: Union[torch.FloatTensor, PIL.Image.Image],
420 height: int = 512,
421 width: int = 512,
422 num_inference_steps: int = 50,
423 guidance_scale: float = 7.5,
424 negative_prompt: Optional[Union[str, List[str]]] = None,
425 num_images_per_prompt: Optional[int] = 1,
426 eta: float = 0.0,
427 generator: Optional[torch.Generator] = None,
428 latents: Optional[torch.FloatTensor] = None,
429 output_type: Optional[str] = "pil",
430 return_dict: bool = True,
431 callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
432 callback_steps: Optional[int] = 1,
433 **kwargs,
434 ):
435 r"""
436 Function invoked when calling the pipeline for generation.
437
438 Args:
439 prompt (`str` or `List[str]`):
440 The prompt or prompts to guide the image generation.
441 image (`PIL.Image.Image`):
442 `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
443 be masked out with `mask_image` and repainted according to `prompt`.
444 mask_image (`PIL.Image.Image`):
445 `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
446 repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
447 to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
448 instead of 3, so the expected shape would be `(B, H, W, 1)`.
449 height (`int`, *optional*, defaults to 512):
450 The height in pixels of the generated image.
451 width (`int`, *optional*, defaults to 512):
452 The width in pixels of the generated image.
453 num_inference_steps (`int`, *optional*, defaults to 50):
454 The number of denoising steps. More denoising steps usually lead to a higher quality image at the
455 expense of slower inference.
456 guidance_scale (`float`, *optional*, defaults to 7.5):
457 Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
458 `guidance_scale` is defined as `w` of equation 2. of [Imagen
459 Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
460 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
461 usually at the expense of lower image quality.
462 negative_prompt (`str` or `List[str]`, *optional*):
463 The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
464 if `guidance_scale` is less than `1`).
465 num_images_per_prompt (`int`, *optional*, defaults to 1):
466 The number of images to generate per prompt.
467 eta (`float`, *optional*, defaults to 0.0):
468 Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
469 [`schedulers.DDIMScheduler`], will be ignored for others.
470 generator (`torch.Generator`, *optional*):
471 A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
472 deterministic.
473 latents (`torch.FloatTensor`, *optional*):
474 Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
475 generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
476                 tensor will be generated by sampling using the supplied random `generator`.
477 output_type (`str`, *optional*, defaults to `"pil"`):
478 The output format of the generate image. Choose between
479 [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
480 return_dict (`bool`, *optional*, defaults to `True`):
481 Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
482 plain tuple.
483 callback (`Callable`, *optional*):
484 A function that will be called every `callback_steps` steps during inference. The function will be
485 called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
486 callback_steps (`int`, *optional*, defaults to 1):
487 The frequency at which the `callback` function will be called. If not specified, the callback will be
488 called at every step.
489
490 Returns:
491 [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
492             [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
493 When returning a tuple, the first element is a list with the generated images, and the second element is a
494 list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
495 (nsfw) content, according to the `safety_checker`.
496 """
497
498 # 1. Check inputs
499 self.check_inputs(prompt, height, width, callback_steps)
500
501 # 2. Define call parameters
502 batch_size = 1 if isinstance(prompt, str) else len(prompt)
503 device = self._execution_device
504 # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
505 # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
506 # corresponds to doing no classifier free guidance.
507 do_classifier_free_guidance = guidance_scale > 1.0
508
509 # 3. Encode input prompt
510 text_embeddings = self._encode_prompt(
511 prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
512 )
513
514 # 4. Preprocess mask and image
515 if isinstance(image, PIL.Image.Image) and isinstance(mask_image, PIL.Image.Image):
516 mask, masked_image = prepare_mask_and_masked_image(image, mask_image)
517
518 # 5. set timesteps
519 self.scheduler.set_timesteps(num_inference_steps, device=device)
520 timesteps_tensor = self.scheduler.timesteps
521
522 # 6. Prepare latent variables
523 num_channels_latents = self.vae.config.latent_channels
524 latents = self.prepare_latents(
525 batch_size * num_images_per_prompt,
526 num_channels_latents,
527 height,
528 width,
529 text_embeddings.dtype,
530 device,
531 generator,
532 latents,
533 )
534
535 # 7. Prepare mask latent variables
536 mask, masked_image_latents = self.prepare_mask_latents(
537 mask,
538 masked_image,
539 batch_size,
540 height,
541 width,
542 text_embeddings.dtype,
543 device,
544 generator,
545 do_classifier_free_guidance,
546 )
547
548 # 8. Check that sizes of mask, masked image and latents match
549 num_channels_mask = mask.shape[1]
550 num_channels_masked_image = masked_image_latents.shape[1]
551 if num_channels_latents + num_channels_mask + num_channels_masked_image != self.unet.config.in_channels:
552 raise ValueError(
553 f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects"
554 f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +"
555 f" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}"
556 f" = {num_channels_latents+num_channels_masked_image+num_channels_mask}. Please verify the config of"
557 " `pipeline.unet` or your `mask_image` or `image` input."
558 )
559
560 # 9. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
561 extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
562
563 # 10. Denoising loop
564 for i, t in enumerate(self.progress_bar(timesteps_tensor)):
565 # expand the latents if we are doing classifier free guidance
566 latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
567
568 # concat latents, mask, masked_image_latents in the channel dimension
569 latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)
570
571 latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
572
573 # predict the noise residual
574 noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
575
576 # perform guidance
577 if do_classifier_free_guidance:
578 noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
579 noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
580
581 # compute the previous noisy sample x_t -> x_t-1
582 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
583
584 # call the callback, if provided
585 if callback is not None and i % callback_steps == 0:
586 callback(i, t, latents)
587
588 # 11. Post-processing
589 image = self.decode_latents(latents)
590
591 # 12. Run safety checker
592 image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype)
593
594 # 13. Convert to PIL
595 if output_type == "pil":
596 image = self.numpy_to_pil(image)
597
598 if not return_dict:
599 return (image, has_nsfw_concept)
600
601 return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
602
```
--- END FILES ---
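(Editorial aside, not part of the original task prompt.) In the file above, step 6 of `__call__` prepares the latents with `batch_size * num_images_per_prompt`, while step 7 passes only `batch_size` to `prepare_mask_latents`, so the mask tensors end up with a smaller batch than the noise latents. A sketch of the call that keeps the two in step — the dataset's golden diff further below is the authoritative change:

```python
# step 7 in __call__, sketched with the same effective batch as the latents
mask, masked_image_latents = self.prepare_mask_latents(
    mask,
    masked_image,
    batch_size * num_images_per_prompt,  # was: batch_size
    height,
    width,
    text_embeddings.dtype,
    device,
    generator,
    do_classifier_free_guidance,
)
```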
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
@@ -536,7 +536,7 @@
mask, masked_image_latents = self.prepare_mask_latents(
mask,
masked_image,
- batch_size,
+ batch_size * num_images_per_prompt,
height,
width,
text_embeddings.dtype,
| {"golden_diff": "diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py\n--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py\n+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py\n@@ -536,7 +536,7 @@\n mask, masked_image_latents = self.prepare_mask_latents(\n mask,\n masked_image,\n- batch_size,\n+ batch_size * num_images_per_prompt,\n height,\n width,\n text_embeddings.dtype,\n", "issue": "num_images_per_prompt=num_samples FAILS after latest commit in SD pipelines\n### Describe the bug\n\nThis used to wwork:\r\n\r\nimages = pipe(\r\n prompt=prompt,\r\n image=image,\r\n mask_image=mask_image,\r\n guidance_scale=guidance_scale,\r\n generator=generator,\r\n num_images_per_prompt=num_samples,\r\n num_inference_steps =50,\r\n height=height,\r\n width=width,\r\n \r\n).images\r\n\r\nNow it doesn't.\r\n\r\n\n\n### Reproduction\n\nUe the RUNWAYML inpainting notebook. IT fails in that!\r\n\r\n\r\nhttps://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb\r\n\r\n\r\n<img width=\"1339\" alt=\"image\" src=\"https://user-images.githubusercontent.com/26677859/201589394-ffadb77b-28c4-4667-8e53-7d381f482261.png\">\r\n \n\n### Logs\n\n```shell\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-8-7352d4d77608> in <module>\r\n 11 guidance_scale=guidance_scale,\r\n 12 generator=generator,\r\n---> 13 num_images_per_prompt=num_samples,\r\n 14 ).images\r\n\r\n1 frames\r\n/usr/local/lib/python3.7/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py in __call__(self, prompt, image, mask_image, height, width, num_inference_steps, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, output_type, return_dict, callback, callback_steps, **kwargs)\r\n 567 \r\n 568 # concat latents, mask, masked_image_latents in the channel dimension\r\n--> 569 latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)\r\n 570 \r\n 571 latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)\r\n\r\nRuntimeError: Sizes of tensors must match except in dimension 1. Expected size 6 but got size 2 for tensor number 1 in the list.\n```\n\n\n### System Info\n\nThe colab notebook in the runwayml inpainting page fails.\n", "before_files": [{"content": "# Copyright 2022 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport inspect\nfrom typing import Callable, List, Optional, Union\n\nimport numpy as np\nimport torch\n\nimport PIL\nfrom diffusers.utils import is_accelerate_available\nfrom transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer\n\nfrom ...configuration_utils import FrozenDict\nfrom ...models import AutoencoderKL, UNet2DConditionModel\nfrom ...pipeline_utils import DiffusionPipeline\nfrom ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler\nfrom ...utils import deprecate, logging\nfrom . import StableDiffusionPipelineOutput\nfrom .safety_checker import StableDiffusionSafetyChecker\n\n\nlogger = logging.get_logger(__name__) # pylint: disable=invalid-name\n\n\ndef prepare_mask_and_masked_image(image, mask):\n image = np.array(image.convert(\"RGB\"))\n image = image[None].transpose(0, 3, 1, 2)\n image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0\n\n mask = np.array(mask.convert(\"L\"))\n mask = mask.astype(np.float32) / 255.0\n mask = mask[None, None]\n mask[mask < 0.5] = 0\n mask[mask >= 0.5] = 1\n mask = torch.from_numpy(mask)\n\n masked_image = image * (mask < 0.5)\n\n return mask, masked_image\n\n\nclass StableDiffusionInpaintPipeline(DiffusionPipeline):\n r\"\"\"\n Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*.\n\n This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the\n library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)\n\n Args:\n vae ([`AutoencoderKL`]):\n Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.\n text_encoder ([`CLIPTextModel`]):\n Frozen text-encoder. Stable Diffusion uses the text portion of\n [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically\n the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.\n tokenizer (`CLIPTokenizer`):\n Tokenizer of class\n [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).\n unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.\n scheduler ([`SchedulerMixin`]):\n A scheduler to be used in combination with `unet` to denoise the encoded image latents. 
Can be one of\n [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].\n safety_checker ([`StableDiffusionSafetyChecker`]):\n Classification module that estimates whether generated images could be considered offensive or harmful.\n Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.\n feature_extractor ([`CLIPFeatureExtractor`]):\n Model that extracts features from generated images to be used as inputs for the `safety_checker`.\n \"\"\"\n\n def __init__(\n self,\n vae: AutoencoderKL,\n text_encoder: CLIPTextModel,\n tokenizer: CLIPTokenizer,\n unet: UNet2DConditionModel,\n scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],\n safety_checker: StableDiffusionSafetyChecker,\n feature_extractor: CLIPFeatureExtractor,\n ):\n super().__init__()\n\n if hasattr(scheduler.config, \"steps_offset\") and scheduler.config.steps_offset != 1:\n deprecation_message = (\n f\"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`\"\n f\" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure \"\n \"to update the config accordingly as leaving `steps_offset` might led to incorrect results\"\n \" in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,\"\n \" it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`\"\n \" file\"\n )\n deprecate(\"steps_offset!=1\", \"1.0.0\", deprecation_message, standard_warn=False)\n new_config = dict(scheduler.config)\n new_config[\"steps_offset\"] = 1\n scheduler._internal_dict = FrozenDict(new_config)\n\n if hasattr(scheduler.config, \"skip_prk_steps\") and scheduler.config.skip_prk_steps is False:\n deprecation_message = (\n f\"The configuration file of this scheduler: {scheduler} has not set the configuration\"\n \" `skip_prk_steps`. `skip_prk_steps` should be set to True in the configuration file. Please make\"\n \" sure to update the config accordingly as not setting `skip_prk_steps` in the config might lead to\"\n \" incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face\"\n \" Hub, it would be very nice if you could open a Pull request for the\"\n \" `scheduler/scheduler_config.json` file\"\n )\n deprecate(\"skip_prk_steps not set\", \"1.0.0\", deprecation_message, standard_warn=False)\n new_config = dict(scheduler.config)\n new_config[\"skip_prk_steps\"] = True\n scheduler._internal_dict = FrozenDict(new_config)\n\n if safety_checker is None:\n logger.warn(\n f\"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure\"\n \" that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered\"\n \" results in services or applications open to the public. Both the diffusers team and Hugging Face\"\n \" strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling\"\n \" it only for use-cases that involve analyzing network behavior or auditing its results. 
For more\"\n \" information, please have a look at https://github.com/huggingface/diffusers/pull/254 .\"\n )\n\n self.register_modules(\n vae=vae,\n text_encoder=text_encoder,\n tokenizer=tokenizer,\n unet=unet,\n scheduler=scheduler,\n safety_checker=safety_checker,\n feature_extractor=feature_extractor,\n )\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_attention_slicing\n def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = \"auto\"):\n r\"\"\"\n Enable sliced attention computation.\n\n When this option is enabled, the attention module will split the input tensor in slices, to compute attention\n in several steps. This is useful to save some memory in exchange for a small speed decrease.\n\n Args:\n slice_size (`str` or `int`, *optional*, defaults to `\"auto\"`):\n When `\"auto\"`, halves the input to the attention heads, so attention will be computed in two steps. If\n a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,\n `attention_head_dim` must be a multiple of `slice_size`.\n \"\"\"\n if slice_size == \"auto\":\n # half the attention head size is usually a good trade-off between\n # speed and memory\n slice_size = self.unet.config.attention_head_dim // 2\n self.unet.set_attention_slice(slice_size)\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_attention_slicing\n def disable_attention_slicing(self):\n r\"\"\"\n Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go\n back to computing attention in one step.\n \"\"\"\n # set slice_size = `None` to disable `attention slicing`\n self.enable_attention_slicing(None)\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_sequential_cpu_offload\n def enable_sequential_cpu_offload(self):\n r\"\"\"\n Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,\n text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a\n `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called.\n \"\"\"\n if is_accelerate_available():\n from accelerate import cpu_offload\n else:\n raise ImportError(\"Please install accelerate via `pip install accelerate`\")\n\n device = torch.device(\"cuda\")\n\n for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]:\n if cpu_offloaded_model is not None:\n cpu_offload(cpu_offloaded_model, device)\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_xformers_memory_efficient_attention\n def enable_xformers_memory_efficient_attention(self):\n r\"\"\"\n Enable memory efficient attention as implemented in xformers.\n\n When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference\n time. 
Speed up at training time is not guaranteed.\n\n Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention\n is used.\n \"\"\"\n self.unet.set_use_memory_efficient_attention_xformers(True)\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_xformers_memory_efficient_attention\n def disable_xformers_memory_efficient_attention(self):\n r\"\"\"\n Disable memory efficient attention as implemented in xformers.\n \"\"\"\n self.unet.set_use_memory_efficient_attention_xformers(False)\n\n @property\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device\n def _execution_device(self):\n r\"\"\"\n Returns the device on which the pipeline's models will be executed. After calling\n `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module\n hooks.\n \"\"\"\n if self.device != torch.device(\"meta\") or not hasattr(self.unet, \"_hf_hook\"):\n return self.device\n for module in self.unet.modules():\n if (\n hasattr(module, \"_hf_hook\")\n and hasattr(module._hf_hook, \"execution_device\")\n and module._hf_hook.execution_device is not None\n ):\n return torch.device(module._hf_hook.execution_device)\n return self.device\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt\n def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):\n r\"\"\"\n Encodes the prompt into text encoder hidden states.\n\n Args:\n prompt (`str` or `list(int)`):\n prompt to be encoded\n device: (`torch.device`):\n torch device\n num_images_per_prompt (`int`):\n number of images that should be generated per prompt\n do_classifier_free_guidance (`bool`):\n whether to use classifier free guidance or not\n negative_prompt (`str` or `List[str]`):\n The prompt or prompts not to guide the image generation. 
Ignored when not using guidance (i.e., ignored\n if `guidance_scale` is less than `1`).\n \"\"\"\n batch_size = len(prompt) if isinstance(prompt, list) else 1\n\n text_inputs = self.tokenizer(\n prompt,\n padding=\"max_length\",\n max_length=self.tokenizer.model_max_length,\n truncation=True,\n return_tensors=\"pt\",\n )\n text_input_ids = text_inputs.input_ids\n untruncated_ids = self.tokenizer(prompt, padding=\"max_length\", return_tensors=\"pt\").input_ids\n\n if not torch.equal(text_input_ids, untruncated_ids):\n removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])\n logger.warning(\n \"The following part of your input was truncated because CLIP can only handle sequences up to\"\n f\" {self.tokenizer.model_max_length} tokens: {removed_text}\"\n )\n text_embeddings = self.text_encoder(text_input_ids.to(device))[0]\n\n # duplicate text embeddings for each generation per prompt, using mps friendly method\n bs_embed, seq_len, _ = text_embeddings.shape\n text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)\n text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)\n\n # get unconditional embeddings for classifier free guidance\n if do_classifier_free_guidance:\n uncond_tokens: List[str]\n if negative_prompt is None:\n uncond_tokens = [\"\"] * batch_size\n elif type(prompt) is not type(negative_prompt):\n raise TypeError(\n f\"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=\"\n f\" {type(prompt)}.\"\n )\n elif isinstance(negative_prompt, str):\n uncond_tokens = [negative_prompt]\n elif batch_size != len(negative_prompt):\n raise ValueError(\n f\"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:\"\n f\" {prompt} has batch size {batch_size}. 
Please make sure that passed `negative_prompt` matches\"\n \" the batch size of `prompt`.\"\n )\n else:\n uncond_tokens = negative_prompt\n\n max_length = text_input_ids.shape[-1]\n uncond_input = self.tokenizer(\n uncond_tokens,\n padding=\"max_length\",\n max_length=max_length,\n truncation=True,\n return_tensors=\"pt\",\n )\n uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(device))[0]\n\n # duplicate unconditional embeddings for each generation per prompt, using mps friendly method\n seq_len = uncond_embeddings.shape[1]\n uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)\n uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)\n\n # For classifier free guidance, we need to do two forward passes.\n # Here we concatenate the unconditional and text embeddings into a single batch\n # to avoid doing two forward passes\n text_embeddings = torch.cat([uncond_embeddings, text_embeddings])\n\n return text_embeddings\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker\n def run_safety_checker(self, image, device, dtype):\n if self.safety_checker is not None:\n safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors=\"pt\").to(device)\n image, has_nsfw_concept = self.safety_checker(\n images=image, clip_input=safety_checker_input.pixel_values.to(dtype)\n )\n else:\n has_nsfw_concept = None\n return image, has_nsfw_concept\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs\n def prepare_extra_step_kwargs(self, generator, eta):\n # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature\n # eta (\u03b7) is only used with the DDIMScheduler, it will be ignored for other schedulers.\n # eta corresponds to \u03b7 in DDIM paper: https://arxiv.org/abs/2010.02502\n # and should be between [0, 1]\n\n accepts_eta = \"eta\" in set(inspect.signature(self.scheduler.step).parameters.keys())\n extra_step_kwargs = {}\n if accepts_eta:\n extra_step_kwargs[\"eta\"] = eta\n\n # check if the scheduler accepts generator\n accepts_generator = \"generator\" in set(inspect.signature(self.scheduler.step).parameters.keys())\n if accepts_generator:\n extra_step_kwargs[\"generator\"] = generator\n return extra_step_kwargs\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents\n def decode_latents(self, latents):\n latents = 1 / 0.18215 * latents\n image = self.vae.decode(latents).sample\n image = (image / 2 + 0.5).clamp(0, 1)\n # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16\n image = image.cpu().permute(0, 2, 3, 1).float().numpy()\n return image\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs\n def check_inputs(self, prompt, height, width, callback_steps):\n if not isinstance(prompt, str) and not isinstance(prompt, list):\n raise ValueError(f\"`prompt` has to be of type `str` or `list` but is {type(prompt)}\")\n\n if height % 8 != 0 or width % 8 != 0:\n raise ValueError(f\"`height` and `width` have to be divisible by 8 but are {height} and {width}.\")\n\n if (callback_steps is None) or (\n callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)\n ):\n raise ValueError(\n f\"`callback_steps` has to be a positive 
integer but is {callback_steps} of type\"\n f\" {type(callback_steps)}.\"\n )\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents\n def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):\n shape = (batch_size, num_channels_latents, height // 8, width // 8)\n if latents is None:\n if device.type == \"mps\":\n # randn does not work reproducibly on mps\n latents = torch.randn(shape, generator=generator, device=\"cpu\", dtype=dtype).to(device)\n else:\n latents = torch.randn(shape, generator=generator, device=device, dtype=dtype)\n else:\n if latents.shape != shape:\n raise ValueError(f\"Unexpected latents shape, got {latents.shape}, expected {shape}\")\n latents = latents.to(device)\n\n # scale the initial noise by the standard deviation required by the scheduler\n latents = latents * self.scheduler.init_noise_sigma\n return latents\n\n def prepare_mask_latents(\n self, mask, masked_image, batch_size, height, width, dtype, device, generator, do_classifier_free_guidance\n ):\n # resize the mask to latents shape as we concatenate the mask to the latents\n # we do that before converting to dtype to avoid breaking in case we're using cpu_offload\n # and half precision\n mask = torch.nn.functional.interpolate(mask, size=(height // 8, width // 8))\n mask = mask.to(device=device, dtype=dtype)\n\n masked_image = masked_image.to(device=device, dtype=dtype)\n\n # encode the mask image into latents space so we can concatenate it to the latents\n masked_image_latents = self.vae.encode(masked_image).latent_dist.sample(generator=generator)\n masked_image_latents = 0.18215 * masked_image_latents\n\n # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method\n mask = mask.repeat(batch_size, 1, 1, 1)\n masked_image_latents = masked_image_latents.repeat(batch_size, 1, 1, 1)\n\n mask = torch.cat([mask] * 2) if do_classifier_free_guidance else mask\n masked_image_latents = (\n torch.cat([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents\n )\n\n # aligning device to prevent device errors when concating it with the latent model input\n masked_image_latents = masked_image_latents.to(device=device, dtype=dtype)\n return mask, masked_image_latents\n\n @torch.no_grad()\n def __call__(\n self,\n prompt: Union[str, List[str]],\n image: Union[torch.FloatTensor, PIL.Image.Image],\n mask_image: Union[torch.FloatTensor, PIL.Image.Image],\n height: int = 512,\n width: int = 512,\n num_inference_steps: int = 50,\n guidance_scale: float = 7.5,\n negative_prompt: Optional[Union[str, List[str]]] = None,\n num_images_per_prompt: Optional[int] = 1,\n eta: float = 0.0,\n generator: Optional[torch.Generator] = None,\n latents: Optional[torch.FloatTensor] = None,\n output_type: Optional[str] = \"pil\",\n return_dict: bool = True,\n callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,\n callback_steps: Optional[int] = 1,\n **kwargs,\n ):\n r\"\"\"\n Function invoked when calling the pipeline for generation.\n\n Args:\n prompt (`str` or `List[str]`):\n The prompt or prompts to guide the image generation.\n image (`PIL.Image.Image`):\n `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will\n be masked out with `mask_image` and repainted according to `prompt`.\n mask_image (`PIL.Image.Image`):\n `Image`, or tensor representing an image batch, to mask `image`. 
White pixels in the mask will be\n repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted\n to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)\n instead of 3, so the expected shape would be `(B, H, W, 1)`.\n height (`int`, *optional*, defaults to 512):\n The height in pixels of the generated image.\n width (`int`, *optional*, defaults to 512):\n The width in pixels of the generated image.\n num_inference_steps (`int`, *optional*, defaults to 50):\n The number of denoising steps. More denoising steps usually lead to a higher quality image at the\n expense of slower inference.\n guidance_scale (`float`, *optional*, defaults to 7.5):\n Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).\n `guidance_scale` is defined as `w` of equation 2. of [Imagen\n Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >\n 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,\n usually at the expense of lower image quality.\n negative_prompt (`str` or `List[str]`, *optional*):\n The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored\n if `guidance_scale` is less than `1`).\n num_images_per_prompt (`int`, *optional*, defaults to 1):\n The number of images to generate per prompt.\n eta (`float`, *optional*, defaults to 0.0):\n Corresponds to parameter eta (\u03b7) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to\n [`schedulers.DDIMScheduler`], will be ignored for others.\n generator (`torch.Generator`, *optional*):\n A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation\n deterministic.\n latents (`torch.FloatTensor`, *optional*):\n Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image\n generation. Can be used to tweak the same generation with different prompts. If not provided, a latents\n tensor will ge generated by sampling using the supplied random `generator`.\n output_type (`str`, *optional*, defaults to `\"pil\"`):\n The output format of the generate image. Choose between\n [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.\n return_dict (`bool`, *optional*, defaults to `True`):\n Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a\n plain tuple.\n callback (`Callable`, *optional*):\n A function that will be called every `callback_steps` steps during inference. The function will be\n called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.\n callback_steps (`int`, *optional*, defaults to 1):\n The frequency at which the `callback` function will be called. If not specified, the callback will be\n called at every step.\n\n Returns:\n [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:\n [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.\n When returning a tuple, the first element is a list with the generated images, and the second element is a\n list of `bool`s denoting whether the corresponding generated image likely represents \"not-safe-for-work\"\n (nsfw) content, according to the `safety_checker`.\n \"\"\"\n\n # 1. Check inputs\n self.check_inputs(prompt, height, width, callback_steps)\n\n # 2. 
Define call parameters\n batch_size = 1 if isinstance(prompt, str) else len(prompt)\n device = self._execution_device\n # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)\n # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`\n # corresponds to doing no classifier free guidance.\n do_classifier_free_guidance = guidance_scale > 1.0\n\n # 3. Encode input prompt\n text_embeddings = self._encode_prompt(\n prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt\n )\n\n # 4. Preprocess mask and image\n if isinstance(image, PIL.Image.Image) and isinstance(mask_image, PIL.Image.Image):\n mask, masked_image = prepare_mask_and_masked_image(image, mask_image)\n\n # 5. set timesteps\n self.scheduler.set_timesteps(num_inference_steps, device=device)\n timesteps_tensor = self.scheduler.timesteps\n\n # 6. Prepare latent variables\n num_channels_latents = self.vae.config.latent_channels\n latents = self.prepare_latents(\n batch_size * num_images_per_prompt,\n num_channels_latents,\n height,\n width,\n text_embeddings.dtype,\n device,\n generator,\n latents,\n )\n\n # 7. Prepare mask latent variables\n mask, masked_image_latents = self.prepare_mask_latents(\n mask,\n masked_image,\n batch_size,\n height,\n width,\n text_embeddings.dtype,\n device,\n generator,\n do_classifier_free_guidance,\n )\n\n # 8. Check that sizes of mask, masked image and latents match\n num_channels_mask = mask.shape[1]\n num_channels_masked_image = masked_image_latents.shape[1]\n if num_channels_latents + num_channels_mask + num_channels_masked_image != self.unet.config.in_channels:\n raise ValueError(\n f\"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects\"\n f\" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +\"\n f\" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}\"\n f\" = {num_channels_latents+num_channels_masked_image+num_channels_mask}. Please verify the config of\"\n \" `pipeline.unet` or your `mask_image` or `image` input.\"\n )\n\n # 9. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline\n extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)\n\n # 10. Denoising loop\n for i, t in enumerate(self.progress_bar(timesteps_tensor)):\n # expand the latents if we are doing classifier free guidance\n latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents\n\n # concat latents, mask, masked_image_latents in the channel dimension\n latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)\n\n latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)\n\n # predict the noise residual\n noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample\n\n # perform guidance\n if do_classifier_free_guidance:\n noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)\n noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)\n\n # compute the previous noisy sample x_t -> x_t-1\n latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample\n\n # call the callback, if provided\n if callback is not None and i % callback_steps == 0:\n callback(i, t, latents)\n\n # 11. Post-processing\n image = self.decode_latents(latents)\n\n # 12. 
Run safety checker\n image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype)\n\n # 13. Convert to PIL\n if output_type == \"pil\":\n image = self.numpy_to_pil(image)\n\n if not return_dict:\n return (image, has_nsfw_concept)\n\n return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)\n", "path": "src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py"}], "after_files": [{"content": "# Copyright 2022 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport inspect\nfrom typing import Callable, List, Optional, Union\n\nimport numpy as np\nimport torch\n\nimport PIL\nfrom diffusers.utils import is_accelerate_available\nfrom transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer\n\nfrom ...configuration_utils import FrozenDict\nfrom ...models import AutoencoderKL, UNet2DConditionModel\nfrom ...pipeline_utils import DiffusionPipeline\nfrom ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler\nfrom ...utils import deprecate, logging\nfrom . import StableDiffusionPipelineOutput\nfrom .safety_checker import StableDiffusionSafetyChecker\n\n\nlogger = logging.get_logger(__name__) # pylint: disable=invalid-name\n\n\ndef prepare_mask_and_masked_image(image, mask):\n image = np.array(image.convert(\"RGB\"))\n image = image[None].transpose(0, 3, 1, 2)\n image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0\n\n mask = np.array(mask.convert(\"L\"))\n mask = mask.astype(np.float32) / 255.0\n mask = mask[None, None]\n mask[mask < 0.5] = 0\n mask[mask >= 0.5] = 1\n mask = torch.from_numpy(mask)\n\n masked_image = image * (mask < 0.5)\n\n return mask, masked_image\n\n\nclass StableDiffusionInpaintPipeline(DiffusionPipeline):\n r\"\"\"\n Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*.\n\n This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the\n library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)\n\n Args:\n vae ([`AutoencoderKL`]):\n Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.\n text_encoder ([`CLIPTextModel`]):\n Frozen text-encoder. Stable Diffusion uses the text portion of\n [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically\n the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.\n tokenizer (`CLIPTokenizer`):\n Tokenizer of class\n [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).\n unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.\n scheduler ([`SchedulerMixin`]):\n A scheduler to be used in combination with `unet` to denoise the encoded image latents. 
Can be one of\n [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].\n safety_checker ([`StableDiffusionSafetyChecker`]):\n Classification module that estimates whether generated images could be considered offensive or harmful.\n Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.\n feature_extractor ([`CLIPFeatureExtractor`]):\n Model that extracts features from generated images to be used as inputs for the `safety_checker`.\n \"\"\"\n\n def __init__(\n self,\n vae: AutoencoderKL,\n text_encoder: CLIPTextModel,\n tokenizer: CLIPTokenizer,\n unet: UNet2DConditionModel,\n scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],\n safety_checker: StableDiffusionSafetyChecker,\n feature_extractor: CLIPFeatureExtractor,\n ):\n super().__init__()\n\n if hasattr(scheduler.config, \"steps_offset\") and scheduler.config.steps_offset != 1:\n deprecation_message = (\n f\"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`\"\n f\" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure \"\n \"to update the config accordingly as leaving `steps_offset` might led to incorrect results\"\n \" in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,\"\n \" it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`\"\n \" file\"\n )\n deprecate(\"steps_offset!=1\", \"1.0.0\", deprecation_message, standard_warn=False)\n new_config = dict(scheduler.config)\n new_config[\"steps_offset\"] = 1\n scheduler._internal_dict = FrozenDict(new_config)\n\n if hasattr(scheduler.config, \"skip_prk_steps\") and scheduler.config.skip_prk_steps is False:\n deprecation_message = (\n f\"The configuration file of this scheduler: {scheduler} has not set the configuration\"\n \" `skip_prk_steps`. `skip_prk_steps` should be set to True in the configuration file. Please make\"\n \" sure to update the config accordingly as not setting `skip_prk_steps` in the config might lead to\"\n \" incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face\"\n \" Hub, it would be very nice if you could open a Pull request for the\"\n \" `scheduler/scheduler_config.json` file\"\n )\n deprecate(\"skip_prk_steps not set\", \"1.0.0\", deprecation_message, standard_warn=False)\n new_config = dict(scheduler.config)\n new_config[\"skip_prk_steps\"] = True\n scheduler._internal_dict = FrozenDict(new_config)\n\n if safety_checker is None:\n logger.warn(\n f\"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure\"\n \" that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered\"\n \" results in services or applications open to the public. Both the diffusers team and Hugging Face\"\n \" strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling\"\n \" it only for use-cases that involve analyzing network behavior or auditing its results. 
For more\"\n \" information, please have a look at https://github.com/huggingface/diffusers/pull/254 .\"\n )\n\n self.register_modules(\n vae=vae,\n text_encoder=text_encoder,\n tokenizer=tokenizer,\n unet=unet,\n scheduler=scheduler,\n safety_checker=safety_checker,\n feature_extractor=feature_extractor,\n )\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_attention_slicing\n def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = \"auto\"):\n r\"\"\"\n Enable sliced attention computation.\n\n When this option is enabled, the attention module will split the input tensor in slices, to compute attention\n in several steps. This is useful to save some memory in exchange for a small speed decrease.\n\n Args:\n slice_size (`str` or `int`, *optional*, defaults to `\"auto\"`):\n When `\"auto\"`, halves the input to the attention heads, so attention will be computed in two steps. If\n a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,\n `attention_head_dim` must be a multiple of `slice_size`.\n \"\"\"\n if slice_size == \"auto\":\n # half the attention head size is usually a good trade-off between\n # speed and memory\n slice_size = self.unet.config.attention_head_dim // 2\n self.unet.set_attention_slice(slice_size)\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_attention_slicing\n def disable_attention_slicing(self):\n r\"\"\"\n Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go\n back to computing attention in one step.\n \"\"\"\n # set slice_size = `None` to disable `attention slicing`\n self.enable_attention_slicing(None)\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_sequential_cpu_offload\n def enable_sequential_cpu_offload(self):\n r\"\"\"\n Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,\n text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a\n `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called.\n \"\"\"\n if is_accelerate_available():\n from accelerate import cpu_offload\n else:\n raise ImportError(\"Please install accelerate via `pip install accelerate`\")\n\n device = torch.device(\"cuda\")\n\n for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]:\n if cpu_offloaded_model is not None:\n cpu_offload(cpu_offloaded_model, device)\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_xformers_memory_efficient_attention\n def enable_xformers_memory_efficient_attention(self):\n r\"\"\"\n Enable memory efficient attention as implemented in xformers.\n\n When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference\n time. 
Speed up at training time is not guaranteed.\n\n Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention\n is used.\n \"\"\"\n self.unet.set_use_memory_efficient_attention_xformers(True)\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_xformers_memory_efficient_attention\n def disable_xformers_memory_efficient_attention(self):\n r\"\"\"\n Disable memory efficient attention as implemented in xformers.\n \"\"\"\n self.unet.set_use_memory_efficient_attention_xformers(False)\n\n @property\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device\n def _execution_device(self):\n r\"\"\"\n Returns the device on which the pipeline's models will be executed. After calling\n `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module\n hooks.\n \"\"\"\n if self.device != torch.device(\"meta\") or not hasattr(self.unet, \"_hf_hook\"):\n return self.device\n for module in self.unet.modules():\n if (\n hasattr(module, \"_hf_hook\")\n and hasattr(module._hf_hook, \"execution_device\")\n and module._hf_hook.execution_device is not None\n ):\n return torch.device(module._hf_hook.execution_device)\n return self.device\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt\n def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):\n r\"\"\"\n Encodes the prompt into text encoder hidden states.\n\n Args:\n prompt (`str` or `list(int)`):\n prompt to be encoded\n device: (`torch.device`):\n torch device\n num_images_per_prompt (`int`):\n number of images that should be generated per prompt\n do_classifier_free_guidance (`bool`):\n whether to use classifier free guidance or not\n negative_prompt (`str` or `List[str]`):\n The prompt or prompts not to guide the image generation. 
Ignored when not using guidance (i.e., ignored\n if `guidance_scale` is less than `1`).\n \"\"\"\n batch_size = len(prompt) if isinstance(prompt, list) else 1\n\n text_inputs = self.tokenizer(\n prompt,\n padding=\"max_length\",\n max_length=self.tokenizer.model_max_length,\n truncation=True,\n return_tensors=\"pt\",\n )\n text_input_ids = text_inputs.input_ids\n untruncated_ids = self.tokenizer(prompt, padding=\"max_length\", return_tensors=\"pt\").input_ids\n\n if not torch.equal(text_input_ids, untruncated_ids):\n removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])\n logger.warning(\n \"The following part of your input was truncated because CLIP can only handle sequences up to\"\n f\" {self.tokenizer.model_max_length} tokens: {removed_text}\"\n )\n text_embeddings = self.text_encoder(text_input_ids.to(device))[0]\n\n # duplicate text embeddings for each generation per prompt, using mps friendly method\n bs_embed, seq_len, _ = text_embeddings.shape\n text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)\n text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)\n\n # get unconditional embeddings for classifier free guidance\n if do_classifier_free_guidance:\n uncond_tokens: List[str]\n if negative_prompt is None:\n uncond_tokens = [\"\"] * batch_size\n elif type(prompt) is not type(negative_prompt):\n raise TypeError(\n f\"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=\"\n f\" {type(prompt)}.\"\n )\n elif isinstance(negative_prompt, str):\n uncond_tokens = [negative_prompt]\n elif batch_size != len(negative_prompt):\n raise ValueError(\n f\"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:\"\n f\" {prompt} has batch size {batch_size}. 
Please make sure that passed `negative_prompt` matches\"\n \" the batch size of `prompt`.\"\n )\n else:\n uncond_tokens = negative_prompt\n\n max_length = text_input_ids.shape[-1]\n uncond_input = self.tokenizer(\n uncond_tokens,\n padding=\"max_length\",\n max_length=max_length,\n truncation=True,\n return_tensors=\"pt\",\n )\n uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(device))[0]\n\n # duplicate unconditional embeddings for each generation per prompt, using mps friendly method\n seq_len = uncond_embeddings.shape[1]\n uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)\n uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)\n\n # For classifier free guidance, we need to do two forward passes.\n # Here we concatenate the unconditional and text embeddings into a single batch\n # to avoid doing two forward passes\n text_embeddings = torch.cat([uncond_embeddings, text_embeddings])\n\n return text_embeddings\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker\n def run_safety_checker(self, image, device, dtype):\n if self.safety_checker is not None:\n safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors=\"pt\").to(device)\n image, has_nsfw_concept = self.safety_checker(\n images=image, clip_input=safety_checker_input.pixel_values.to(dtype)\n )\n else:\n has_nsfw_concept = None\n return image, has_nsfw_concept\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs\n def prepare_extra_step_kwargs(self, generator, eta):\n # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature\n # eta (\u03b7) is only used with the DDIMScheduler, it will be ignored for other schedulers.\n # eta corresponds to \u03b7 in DDIM paper: https://arxiv.org/abs/2010.02502\n # and should be between [0, 1]\n\n accepts_eta = \"eta\" in set(inspect.signature(self.scheduler.step).parameters.keys())\n extra_step_kwargs = {}\n if accepts_eta:\n extra_step_kwargs[\"eta\"] = eta\n\n # check if the scheduler accepts generator\n accepts_generator = \"generator\" in set(inspect.signature(self.scheduler.step).parameters.keys())\n if accepts_generator:\n extra_step_kwargs[\"generator\"] = generator\n return extra_step_kwargs\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents\n def decode_latents(self, latents):\n latents = 1 / 0.18215 * latents\n image = self.vae.decode(latents).sample\n image = (image / 2 + 0.5).clamp(0, 1)\n # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16\n image = image.cpu().permute(0, 2, 3, 1).float().numpy()\n return image\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs\n def check_inputs(self, prompt, height, width, callback_steps):\n if not isinstance(prompt, str) and not isinstance(prompt, list):\n raise ValueError(f\"`prompt` has to be of type `str` or `list` but is {type(prompt)}\")\n\n if height % 8 != 0 or width % 8 != 0:\n raise ValueError(f\"`height` and `width` have to be divisible by 8 but are {height} and {width}.\")\n\n if (callback_steps is None) or (\n callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)\n ):\n raise ValueError(\n f\"`callback_steps` has to be a positive 
integer but is {callback_steps} of type\"\n f\" {type(callback_steps)}.\"\n )\n\n # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents\n def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):\n shape = (batch_size, num_channels_latents, height // 8, width // 8)\n if latents is None:\n if device.type == \"mps\":\n # randn does not work reproducibly on mps\n latents = torch.randn(shape, generator=generator, device=\"cpu\", dtype=dtype).to(device)\n else:\n latents = torch.randn(shape, generator=generator, device=device, dtype=dtype)\n else:\n if latents.shape != shape:\n raise ValueError(f\"Unexpected latents shape, got {latents.shape}, expected {shape}\")\n latents = latents.to(device)\n\n # scale the initial noise by the standard deviation required by the scheduler\n latents = latents * self.scheduler.init_noise_sigma\n return latents\n\n def prepare_mask_latents(\n self, mask, masked_image, batch_size, height, width, dtype, device, generator, do_classifier_free_guidance\n ):\n # resize the mask to latents shape as we concatenate the mask to the latents\n # we do that before converting to dtype to avoid breaking in case we're using cpu_offload\n # and half precision\n mask = torch.nn.functional.interpolate(mask, size=(height // 8, width // 8))\n mask = mask.to(device=device, dtype=dtype)\n\n masked_image = masked_image.to(device=device, dtype=dtype)\n\n # encode the mask image into latents space so we can concatenate it to the latents\n masked_image_latents = self.vae.encode(masked_image).latent_dist.sample(generator=generator)\n masked_image_latents = 0.18215 * masked_image_latents\n\n # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method\n mask = mask.repeat(batch_size, 1, 1, 1)\n masked_image_latents = masked_image_latents.repeat(batch_size, 1, 1, 1)\n\n mask = torch.cat([mask] * 2) if do_classifier_free_guidance else mask\n masked_image_latents = (\n torch.cat([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents\n )\n\n # aligning device to prevent device errors when concating it with the latent model input\n masked_image_latents = masked_image_latents.to(device=device, dtype=dtype)\n return mask, masked_image_latents\n\n @torch.no_grad()\n def __call__(\n self,\n prompt: Union[str, List[str]],\n image: Union[torch.FloatTensor, PIL.Image.Image],\n mask_image: Union[torch.FloatTensor, PIL.Image.Image],\n height: int = 512,\n width: int = 512,\n num_inference_steps: int = 50,\n guidance_scale: float = 7.5,\n negative_prompt: Optional[Union[str, List[str]]] = None,\n num_images_per_prompt: Optional[int] = 1,\n eta: float = 0.0,\n generator: Optional[torch.Generator] = None,\n latents: Optional[torch.FloatTensor] = None,\n output_type: Optional[str] = \"pil\",\n return_dict: bool = True,\n callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,\n callback_steps: Optional[int] = 1,\n **kwargs,\n ):\n r\"\"\"\n Function invoked when calling the pipeline for generation.\n\n Args:\n prompt (`str` or `List[str]`):\n The prompt or prompts to guide the image generation.\n image (`PIL.Image.Image`):\n `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will\n be masked out with `mask_image` and repainted according to `prompt`.\n mask_image (`PIL.Image.Image`):\n `Image`, or tensor representing an image batch, to mask `image`. 
White pixels in the mask will be\n repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted\n to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)\n instead of 3, so the expected shape would be `(B, H, W, 1)`.\n height (`int`, *optional*, defaults to 512):\n The height in pixels of the generated image.\n width (`int`, *optional*, defaults to 512):\n The width in pixels of the generated image.\n num_inference_steps (`int`, *optional*, defaults to 50):\n The number of denoising steps. More denoising steps usually lead to a higher quality image at the\n expense of slower inference.\n guidance_scale (`float`, *optional*, defaults to 7.5):\n Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).\n `guidance_scale` is defined as `w` of equation 2. of [Imagen\n Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >\n 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,\n usually at the expense of lower image quality.\n negative_prompt (`str` or `List[str]`, *optional*):\n The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored\n if `guidance_scale` is less than `1`).\n num_images_per_prompt (`int`, *optional*, defaults to 1):\n The number of images to generate per prompt.\n eta (`float`, *optional*, defaults to 0.0):\n Corresponds to parameter eta (\u03b7) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to\n [`schedulers.DDIMScheduler`], will be ignored for others.\n generator (`torch.Generator`, *optional*):\n A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation\n deterministic.\n latents (`torch.FloatTensor`, *optional*):\n Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image\n generation. Can be used to tweak the same generation with different prompts. If not provided, a latents\n tensor will ge generated by sampling using the supplied random `generator`.\n output_type (`str`, *optional*, defaults to `\"pil\"`):\n The output format of the generate image. Choose between\n [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.\n return_dict (`bool`, *optional*, defaults to `True`):\n Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a\n plain tuple.\n callback (`Callable`, *optional*):\n A function that will be called every `callback_steps` steps during inference. The function will be\n called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.\n callback_steps (`int`, *optional*, defaults to 1):\n The frequency at which the `callback` function will be called. If not specified, the callback will be\n called at every step.\n\n Returns:\n [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:\n [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.\n When returning a tuple, the first element is a list with the generated images, and the second element is a\n list of `bool`s denoting whether the corresponding generated image likely represents \"not-safe-for-work\"\n (nsfw) content, according to the `safety_checker`.\n \"\"\"\n\n # 1. Check inputs\n self.check_inputs(prompt, height, width, callback_steps)\n\n # 2. 
Define call parameters\n batch_size = 1 if isinstance(prompt, str) else len(prompt)\n device = self._execution_device\n # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)\n # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`\n # corresponds to doing no classifier free guidance.\n do_classifier_free_guidance = guidance_scale > 1.0\n\n # 3. Encode input prompt\n text_embeddings = self._encode_prompt(\n prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt\n )\n\n # 4. Preprocess mask and image\n if isinstance(image, PIL.Image.Image) and isinstance(mask_image, PIL.Image.Image):\n mask, masked_image = prepare_mask_and_masked_image(image, mask_image)\n\n # 5. set timesteps\n self.scheduler.set_timesteps(num_inference_steps, device=device)\n timesteps_tensor = self.scheduler.timesteps\n\n # 6. Prepare latent variables\n num_channels_latents = self.vae.config.latent_channels\n latents = self.prepare_latents(\n batch_size * num_images_per_prompt,\n num_channels_latents,\n height,\n width,\n text_embeddings.dtype,\n device,\n generator,\n latents,\n )\n\n # 7. Prepare mask latent variables\n mask, masked_image_latents = self.prepare_mask_latents(\n mask,\n masked_image,\n batch_size * num_images_per_prompt,\n height,\n width,\n text_embeddings.dtype,\n device,\n generator,\n do_classifier_free_guidance,\n )\n\n # 8. Check that sizes of mask, masked image and latents match\n num_channels_mask = mask.shape[1]\n num_channels_masked_image = masked_image_latents.shape[1]\n if num_channels_latents + num_channels_mask + num_channels_masked_image != self.unet.config.in_channels:\n raise ValueError(\n f\"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects\"\n f\" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +\"\n f\" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}\"\n f\" = {num_channels_latents+num_channels_masked_image+num_channels_mask}. Please verify the config of\"\n \" `pipeline.unet` or your `mask_image` or `image` input.\"\n )\n\n # 9. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline\n extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)\n\n # 10. Denoising loop\n for i, t in enumerate(self.progress_bar(timesteps_tensor)):\n # expand the latents if we are doing classifier free guidance\n latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents\n\n # concat latents, mask, masked_image_latents in the channel dimension\n latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)\n\n latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)\n\n # predict the noise residual\n noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample\n\n # perform guidance\n if do_classifier_free_guidance:\n noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)\n noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)\n\n # compute the previous noisy sample x_t -> x_t-1\n latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample\n\n # call the callback, if provided\n if callback is not None and i % callback_steps == 0:\n callback(i, t, latents)\n\n # 11. Post-processing\n image = self.decode_latents(latents)\n\n # 12. 
Run safety checker\n image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype)\n\n # 13. Convert to PIL\n if output_type == \"pil\":\n image = self.numpy_to_pil(image)\n\n if not return_dict:\n return (image, has_nsfw_concept)\n\n return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)\n", "path": "src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py"}]} |
gh_patches_debug_1073 | rasdani/github-patches | git_diff | scverse__scanpy-1948 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sphinx 4.1.0 doesn't like ScanpyConfig
Update:
Docs don't build with sphinx 4.1.0 due to an error triggered by `scanpydoc`. Sphinx will be pinned until this is solved (which is when this issue should be closed). It's not obvious to me at the moment whether sphinx or scanpydoc is at fault.
---------------
Trying to build the docs with Sphinx 4.1.0 fails with the following output:
<details>
<summary> </summary>
```sh
$ make html
Running Sphinx v4.1.0
loading intersphinx inventory from https://anndata.readthedocs.io/en/stable/objects.inv...
loading intersphinx inventory from https://bbknn.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://matplotlib.org/cycler/objects.inv...
loading intersphinx inventory from http://docs.h5py.org/en/stable/objects.inv...
loading intersphinx inventory from https://ipython.readthedocs.io/en/stable/objects.inv...
loading intersphinx inventory from https://leidenalg.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://louvain-igraph.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://matplotlib.org/objects.inv...
loading intersphinx inventory from https://networkx.github.io/documentation/networkx-1.10/objects.inv...
loading intersphinx inventory from https://docs.scipy.org/doc/numpy/objects.inv...
loading intersphinx inventory from https://pandas.pydata.org/pandas-docs/stable/objects.inv...
loading intersphinx inventory from https://docs.pytest.org/en/latest/objects.inv...
loading intersphinx inventory from https://docs.python.org/3/objects.inv...
loading intersphinx inventory from https://docs.scipy.org/doc/scipy/reference/objects.inv...
loading intersphinx inventory from https://seaborn.pydata.org/objects.inv...
loading intersphinx inventory from https://scikit-learn.org/stable/objects.inv...
loading intersphinx inventory from https://scanpy-tutorials.readthedocs.io/en/latest/objects.inv...
intersphinx inventory has moved: https://networkx.github.io/documentation/networkx-1.10/objects.inv -> https://networkx.org/documentation/networkx-1.10/objects.inv
intersphinx inventory has moved: https://docs.scipy.org/doc/numpy/objects.inv -> https://numpy.org/doc/stable/objects.inv
intersphinx inventory has moved: http://docs.h5py.org/en/stable/objects.inv -> https://docs.h5py.org/en/stable/objects.inv
[autosummary] generating autosummary for: _key_contributors.rst, api.rst, basic_usage.rst, community.rst, contributors.rst, dev/ci.rst, dev/code.rst, dev/documentation.rst, dev/external-tools.rst, dev/getting-set-up.rst, ..., release-notes/1.7.1.rst, release-notes/1.7.2.rst, release-notes/1.8.0.rst, release-notes/1.8.1.rst, release-notes/1.8.2.rst, release-notes/1.9.0.rst, release-notes/index.rst, release-notes/release-latest.rst, tutorials.rst, usage-principles.rst
Error in github_url('scanpy._settings.ScanpyConfig.N_PCS'):
Extension error (sphinx.ext.autosummary):
Handler <function process_generate_options at 0x139c4a940> for event 'builder-inited' threw an exception (exception: type object 'ScanpyConfig' has no attribute 'N_PCS')
make: *** [html] Error 2
```
</details>
However, I'm not entirely sure if this is Sphinx's fault, or our own. Currently the [N_PCS parameter isn't in the rendered documentation](https://scanpy.readthedocs.io/en/stable/generated/scanpy._settings.ScanpyConfig.html#scanpy._settings.ScanpyConfig). I think it should be, and am not sure why it's not showing up here.
To summarize:
* Previous versions of our doc builds didn't seem to be including attribute docstrings for `ScanpyConfig`.
* Sphinx 4.1.0 raises an error when it hits this attribute
--- END ISSUE ---
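As additional context (not from the original report), the `builder-inited` failure above follows a common Python pattern: a documentation tool resolves `Class.attribute` with `getattr` on the class object, which fails for attributes that only exist on instances. A minimal sketch with hypothetical names — this is not scanpy code — reproducing the same error shape:

```python
# Hypothetical stand-in for ScanpyConfig; N_PCS is documented as an
# attribute but is only assigned inside __init__, so it does not exist
# on the class object itself.
class ScanpyConfigLike:
    """Config object.

    Attributes
    ----------
    N_PCS
        Default number of principal components.
    """

    def __init__(self) -> None:
        self.N_PCS = 50


# autosummary-style lookup on the class (not an instance) fails:
getattr(ScanpyConfigLike, "N_PCS")
# AttributeError: type object 'ScanpyConfigLike' has no attribute 'N_PCS'
```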
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 import os
2 import sys
3 from pathlib import Path
4 from datetime import datetime
5
6 import matplotlib # noqa
7
8 # Don’t use tkinter agg when importing scanpy → … → matplotlib
9 matplotlib.use('agg')
10
11 HERE = Path(__file__).parent
12 sys.path[:0] = [str(HERE.parent), str(HERE / 'extensions')]
13 import scanpy # noqa
14
15 on_rtd = os.environ.get('READTHEDOCS') == 'True'
16
17 # -- General configuration ------------------------------------------------
18
19
20 nitpicky = True # Warn about broken links. This is here for a reason: Do not change.
21 needs_sphinx = '2.0' # Nicer param docs
22 suppress_warnings = ['ref.citation']
23
24 # General information
25 project = 'Scanpy'
26 author = scanpy.__author__
27 copyright = f'{datetime.now():%Y}, {author}.'
28 version = scanpy.__version__.replace('.dirty', '')
29 release = version
30
31 # default settings
32 templates_path = ['_templates']
33 source_suffix = '.rst'
34 master_doc = 'index'
35 default_role = 'literal'
36 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
37 pygments_style = 'sphinx'
38
39 extensions = [
40 'sphinx.ext.autodoc',
41 'sphinx.ext.intersphinx',
42 'sphinx.ext.doctest',
43 'sphinx.ext.coverage',
44 'sphinx.ext.mathjax',
45 'sphinx.ext.napoleon',
46 'sphinx.ext.autosummary',
47 # 'plot_generator',
48 'matplotlib.sphinxext.plot_directive',
49 'sphinx_autodoc_typehints', # needs to be after napoleon
50 # 'ipython_directive',
51 # 'ipython_console_highlighting',
52 'scanpydoc',
53 *[p.stem for p in (HERE / 'extensions').glob('*.py')],
54 ]
55
56 # Generate the API documentation when building
57 autosummary_generate = True
58 autodoc_member_order = 'bysource'
59 # autodoc_default_flags = ['members']
60 napoleon_google_docstring = False
61 napoleon_numpy_docstring = True
62 napoleon_include_init_with_doc = False
63 napoleon_use_rtype = True # having a separate entry generally helps readability
64 napoleon_use_param = True
65 napoleon_custom_sections = [('Params', 'Parameters')]
66 todo_include_todos = False
67 api_dir = HERE / 'api' # function_images
68
69 scanpy_tutorials_url = 'https://scanpy-tutorials.readthedocs.io/en/latest/'
70
71 intersphinx_mapping = dict(
72 anndata=('https://anndata.readthedocs.io/en/stable/', None),
73 bbknn=('https://bbknn.readthedocs.io/en/latest/', None),
74 cycler=('https://matplotlib.org/cycler/', None),
75 h5py=('http://docs.h5py.org/en/stable/', None),
76 ipython=('https://ipython.readthedocs.io/en/stable/', None),
77 leidenalg=('https://leidenalg.readthedocs.io/en/latest/', None),
78 louvain=('https://louvain-igraph.readthedocs.io/en/latest/', None),
79 matplotlib=('https://matplotlib.org/', None),
80 networkx=('https://networkx.github.io/documentation/networkx-1.10/', None),
81 numpy=('https://docs.scipy.org/doc/numpy/', None),
82 pandas=('https://pandas.pydata.org/pandas-docs/stable/', None),
83 pytest=('https://docs.pytest.org/en/latest/', None),
84 python=('https://docs.python.org/3', None),
85 scipy=('https://docs.scipy.org/doc/scipy/reference/', None),
86 seaborn=('https://seaborn.pydata.org/', None),
87 sklearn=('https://scikit-learn.org/stable/', None),
88 scanpy_tutorials=(scanpy_tutorials_url, None),
89 )
90
91
92 # -- Options for HTML output ----------------------------------------------
93
94
95 html_theme = 'scanpydoc'
96 html_theme_options = dict(
97 navigation_depth=4,
98 logo_only=True,
99 docsearch_index='scanpy',
100 docsearch_key='fa4304eb95d2134997e3729553a674b2',
101 )
102 html_context = dict(
103 display_github=True, # Integrate GitHub
104 github_user='theislab', # Username
105 github_repo='scanpy', # Repo name
106 github_version='master', # Version
107 conf_py_path='/docs/', # Path in the checkout to the docs root
108 )
109 html_static_path = ['_static']
110 html_show_sphinx = False
111 html_logo = '_static/img/Scanpy_Logo_BrightFG.svg'
112
113
114 def setup(app):
115 app.warningiserror = on_rtd
116
117
118 # -- Options for other output formats ------------------------------------------
119
120 htmlhelp_basename = f'{project}doc'
121 doc_title = f'{project} Documentation'
122 latex_documents = [(master_doc, f'{project}.tex', doc_title, author, 'manual')]
123 man_pages = [(master_doc, project, doc_title, [author], 1)]
124 texinfo_documents = [
125 (
126 master_doc,
127 project,
128 doc_title,
129 author,
130 project,
131 'One line description of project.',
132 'Miscellaneous',
133 )
134 ]
135
136
137 # -- Suppress link warnings ----------------------------------------------------
138
139 qualname_overrides = {
140 "sklearn.neighbors._dist_metrics.DistanceMetric": "sklearn.neighbors.DistanceMetric",
141 # If the docs are built with an old version of numpy, this will make it work:
142 "numpy.random.RandomState": "numpy.random.mtrand.RandomState",
143 "scanpy.plotting._matrixplot.MatrixPlot": "scanpy.pl.MatrixPlot",
144 "scanpy.plotting._dotplot.DotPlot": "scanpy.pl.DotPlot",
145 "scanpy.plotting._stacked_violin.StackedViolin": "scanpy.pl.StackedViolin",
146 "pandas.core.series.Series": "pandas.Series",
147 }
148
149 nitpick_ignore = [
150 # Will probably be documented
151 ('py:class', 'scanpy._settings.Verbosity'),
152 # Currently undocumented: https://github.com/mwaskom/seaborn/issues/1810
153 ('py:class', 'seaborn.ClusterGrid'),
154 # Won’t be documented
155 ('py:class', 'scanpy.plotting._utils._AxesSubplot'),
156 ('py:class', 'scanpy._utils.Empty'),
157 ('py:class', 'numpy.random.mtrand.RandomState'),
158 ]
159
160 # Options for plot examples
161
162 plot_include_source = True
163 plot_formats = [("png", 90)]
164 plot_html_show_formats = False
165 plot_html_show_source_link = False
166 plot_working_directory = HERE.parent # Project root
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -155,6 +155,9 @@
('py:class', 'scanpy.plotting._utils._AxesSubplot'),
('py:class', 'scanpy._utils.Empty'),
('py:class', 'numpy.random.mtrand.RandomState'),
+ # Will work once scipy 1.8 is released
+ ('py:class', 'scipy.sparse.base.spmatrix'),
+ ('py:class', 'scipy.sparse.csr.csr_matrix'),
]
# Options for plot examples
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -155,6 +155,9 @@\n ('py:class', 'scanpy.plotting._utils._AxesSubplot'),\n ('py:class', 'scanpy._utils.Empty'),\n ('py:class', 'numpy.random.mtrand.RandomState'),\n+ # Will work once scipy 1.8 is released\n+ ('py:class', 'scipy.sparse.base.spmatrix'),\n+ ('py:class', 'scipy.sparse.csr.csr_matrix'),\n ]\n \n # Options for plot examples\n", "issue": "Sphinx 4.1.0 doesn't like ScanpyConfig\nUpdate:\r\n\r\nDocs don't build with sphinx 4.1.0 due to a error triggered by `scanpydoc`. Sphinx will be pinned until this is solved (which is when this issue should be closed). It's not obvious to me at the moment whether sphinx or scanpydoc is at fault.\r\n\r\n---------------\r\n\r\nTrying to build the docs with Sphinx 4.1.0 fails with the following output:\r\n\r\n<details>\r\n<summary> </summary>\r\n\r\n```sh\r\n$ make html\r\nRunning Sphinx v4.1.0\r\nloading intersphinx inventory from https://anndata.readthedocs.io/en/stable/objects.inv...\r\nloading intersphinx inventory from https://bbknn.readthedocs.io/en/latest/objects.inv...\r\nloading intersphinx inventory from https://matplotlib.org/cycler/objects.inv...\r\nloading intersphinx inventory from http://docs.h5py.org/en/stable/objects.inv...\r\nloading intersphinx inventory from https://ipython.readthedocs.io/en/stable/objects.inv...\r\nloading intersphinx inventory from https://leidenalg.readthedocs.io/en/latest/objects.inv...\r\nloading intersphinx inventory from https://louvain-igraph.readthedocs.io/en/latest/objects.inv...\r\nloading intersphinx inventory from https://matplotlib.org/objects.inv...\r\nloading intersphinx inventory from https://networkx.github.io/documentation/networkx-1.10/objects.inv...\r\nloading intersphinx inventory from https://docs.scipy.org/doc/numpy/objects.inv...\r\nloading intersphinx inventory from https://pandas.pydata.org/pandas-docs/stable/objects.inv...\r\nloading intersphinx inventory from https://docs.pytest.org/en/latest/objects.inv...\r\nloading intersphinx inventory from https://docs.python.org/3/objects.inv...\r\nloading intersphinx inventory from https://docs.scipy.org/doc/scipy/reference/objects.inv...\r\nloading intersphinx inventory from https://seaborn.pydata.org/objects.inv...\r\nloading intersphinx inventory from https://scikit-learn.org/stable/objects.inv...\r\nloading intersphinx inventory from https://scanpy-tutorials.readthedocs.io/en/latest/objects.inv...\r\nintersphinx inventory has moved: https://networkx.github.io/documentation/networkx-1.10/objects.inv -> https://networkx.org/documentation/networkx-1.10/objects.inv\r\nintersphinx inventory has moved: https://docs.scipy.org/doc/numpy/objects.inv -> https://numpy.org/doc/stable/objects.inv\r\nintersphinx inventory has moved: http://docs.h5py.org/en/stable/objects.inv -> https://docs.h5py.org/en/stable/objects.inv\r\n[autosummary] generating autosummary for: _key_contributors.rst, api.rst, basic_usage.rst, community.rst, contributors.rst, dev/ci.rst, dev/code.rst, dev/documentation.rst, dev/external-tools.rst, dev/getting-set-up.rst, ..., release-notes/1.7.1.rst, release-notes/1.7.2.rst, release-notes/1.8.0.rst, release-notes/1.8.1.rst, release-notes/1.8.2.rst, release-notes/1.9.0.rst, release-notes/index.rst, release-notes/release-latest.rst, tutorials.rst, usage-principles.rst\r\nError in github_url('scanpy._settings.ScanpyConfig.N_PCS'):\r\n\r\nExtension error (sphinx.ext.autosummary):\r\nHandler <function process_generate_options at 
0x139c4a940> for event 'builder-inited' threw an exception (exception: type object 'ScanpyConfig' has no attribute 'N_PCS')\r\nmake: *** [html] Error 2\r\n```\r\n\r\n</details>\r\n\r\nHowever, I'm entirely sure if this is Sphinx's fault, or our own. Currently the [N_PCS parameter isn't in the rendered documentation](https://scanpy.readthedocs.io/en/stable/generated/scanpy._settings.ScanpyConfig.html#scanpy._settings.ScanpyConfig). I think it should be, and am not sure why it's not showing up here.\r\n\r\nTo summarize:\r\n\r\n* Previous versions of our doc builds didn't seem to be including attribute docstrings for `ScanpyConfig`.\r\n* Sphinx 4.1.0 raises an error when it hits this attribute\n", "before_files": [{"content": "import os\nimport sys\nfrom pathlib import Path\nfrom datetime import datetime\n\nimport matplotlib # noqa\n\n# Don\u2019t use tkinter agg when importing scanpy \u2192 \u2026 \u2192 matplotlib\nmatplotlib.use('agg')\n\nHERE = Path(__file__).parent\nsys.path[:0] = [str(HERE.parent), str(HERE / 'extensions')]\nimport scanpy # noqa\n\non_rtd = os.environ.get('READTHEDOCS') == 'True'\n\n# -- General configuration ------------------------------------------------\n\n\nnitpicky = True # Warn about broken links. This is here for a reason: Do not change.\nneeds_sphinx = '2.0' # Nicer param docs\nsuppress_warnings = ['ref.citation']\n\n# General information\nproject = 'Scanpy'\nauthor = scanpy.__author__\ncopyright = f'{datetime.now():%Y}, {author}.'\nversion = scanpy.__version__.replace('.dirty', '')\nrelease = version\n\n# default settings\ntemplates_path = ['_templates']\nsource_suffix = '.rst'\nmaster_doc = 'index'\ndefault_role = 'literal'\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\npygments_style = 'sphinx'\n\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.doctest',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.autosummary',\n # 'plot_generator',\n 'matplotlib.sphinxext.plot_directive',\n 'sphinx_autodoc_typehints', # needs to be after napoleon\n # 'ipython_directive',\n # 'ipython_console_highlighting',\n 'scanpydoc',\n *[p.stem for p in (HERE / 'extensions').glob('*.py')],\n]\n\n# Generate the API documentation when building\nautosummary_generate = True\nautodoc_member_order = 'bysource'\n# autodoc_default_flags = ['members']\nnapoleon_google_docstring = False\nnapoleon_numpy_docstring = True\nnapoleon_include_init_with_doc = False\nnapoleon_use_rtype = True # having a separate entry generally helps readability\nnapoleon_use_param = True\nnapoleon_custom_sections = [('Params', 'Parameters')]\ntodo_include_todos = False\napi_dir = HERE / 'api' # function_images\n\nscanpy_tutorials_url = 'https://scanpy-tutorials.readthedocs.io/en/latest/'\n\nintersphinx_mapping = dict(\n anndata=('https://anndata.readthedocs.io/en/stable/', None),\n bbknn=('https://bbknn.readthedocs.io/en/latest/', None),\n cycler=('https://matplotlib.org/cycler/', None),\n h5py=('http://docs.h5py.org/en/stable/', None),\n ipython=('https://ipython.readthedocs.io/en/stable/', None),\n leidenalg=('https://leidenalg.readthedocs.io/en/latest/', None),\n louvain=('https://louvain-igraph.readthedocs.io/en/latest/', None),\n matplotlib=('https://matplotlib.org/', None),\n networkx=('https://networkx.github.io/documentation/networkx-1.10/', None),\n numpy=('https://docs.scipy.org/doc/numpy/', None),\n pandas=('https://pandas.pydata.org/pandas-docs/stable/', None),\n pytest=('https://docs.pytest.org/en/latest/', None),\n 
python=('https://docs.python.org/3', None),\n scipy=('https://docs.scipy.org/doc/scipy/reference/', None),\n seaborn=('https://seaborn.pydata.org/', None),\n sklearn=('https://scikit-learn.org/stable/', None),\n scanpy_tutorials=(scanpy_tutorials_url, None),\n)\n\n\n# -- Options for HTML output ----------------------------------------------\n\n\nhtml_theme = 'scanpydoc'\nhtml_theme_options = dict(\n navigation_depth=4,\n logo_only=True,\n docsearch_index='scanpy',\n docsearch_key='fa4304eb95d2134997e3729553a674b2',\n)\nhtml_context = dict(\n display_github=True, # Integrate GitHub\n github_user='theislab', # Username\n github_repo='scanpy', # Repo name\n github_version='master', # Version\n conf_py_path='/docs/', # Path in the checkout to the docs root\n)\nhtml_static_path = ['_static']\nhtml_show_sphinx = False\nhtml_logo = '_static/img/Scanpy_Logo_BrightFG.svg'\n\n\ndef setup(app):\n app.warningiserror = on_rtd\n\n\n# -- Options for other output formats ------------------------------------------\n\nhtmlhelp_basename = f'{project}doc'\ndoc_title = f'{project} Documentation'\nlatex_documents = [(master_doc, f'{project}.tex', doc_title, author, 'manual')]\nman_pages = [(master_doc, project, doc_title, [author], 1)]\ntexinfo_documents = [\n (\n master_doc,\n project,\n doc_title,\n author,\n project,\n 'One line description of project.',\n 'Miscellaneous',\n )\n]\n\n\n# -- Suppress link warnings ----------------------------------------------------\n\nqualname_overrides = {\n \"sklearn.neighbors._dist_metrics.DistanceMetric\": \"sklearn.neighbors.DistanceMetric\",\n # If the docs are built with an old version of numpy, this will make it work:\n \"numpy.random.RandomState\": \"numpy.random.mtrand.RandomState\",\n \"scanpy.plotting._matrixplot.MatrixPlot\": \"scanpy.pl.MatrixPlot\",\n \"scanpy.plotting._dotplot.DotPlot\": \"scanpy.pl.DotPlot\",\n \"scanpy.plotting._stacked_violin.StackedViolin\": \"scanpy.pl.StackedViolin\",\n \"pandas.core.series.Series\": \"pandas.Series\",\n}\n\nnitpick_ignore = [\n # Will probably be documented\n ('py:class', 'scanpy._settings.Verbosity'),\n # Currently undocumented: https://github.com/mwaskom/seaborn/issues/1810\n ('py:class', 'seaborn.ClusterGrid'),\n # Won\u2019t be documented\n ('py:class', 'scanpy.plotting._utils._AxesSubplot'),\n ('py:class', 'scanpy._utils.Empty'),\n ('py:class', 'numpy.random.mtrand.RandomState'),\n]\n\n# Options for plot examples\n\nplot_include_source = True\nplot_formats = [(\"png\", 90)]\nplot_html_show_formats = False\nplot_html_show_source_link = False\nplot_working_directory = HERE.parent # Project root\n", "path": "docs/conf.py"}], "after_files": [{"content": "import os\nimport sys\nfrom pathlib import Path\nfrom datetime import datetime\n\nimport matplotlib # noqa\n\n# Don\u2019t use tkinter agg when importing scanpy \u2192 \u2026 \u2192 matplotlib\nmatplotlib.use('agg')\n\nHERE = Path(__file__).parent\nsys.path[:0] = [str(HERE.parent), str(HERE / 'extensions')]\nimport scanpy # noqa\n\non_rtd = os.environ.get('READTHEDOCS') == 'True'\n\n# -- General configuration ------------------------------------------------\n\n\nnitpicky = True # Warn about broken links. 
This is here for a reason: Do not change.\nneeds_sphinx = '2.0' # Nicer param docs\nsuppress_warnings = ['ref.citation']\n\n# General information\nproject = 'Scanpy'\nauthor = scanpy.__author__\ncopyright = f'{datetime.now():%Y}, {author}.'\nversion = scanpy.__version__.replace('.dirty', '')\nrelease = version\n\n# default settings\ntemplates_path = ['_templates']\nsource_suffix = '.rst'\nmaster_doc = 'index'\ndefault_role = 'literal'\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\npygments_style = 'sphinx'\n\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.doctest',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.autosummary',\n # 'plot_generator',\n 'matplotlib.sphinxext.plot_directive',\n 'sphinx_autodoc_typehints', # needs to be after napoleon\n # 'ipython_directive',\n # 'ipython_console_highlighting',\n 'scanpydoc',\n *[p.stem for p in (HERE / 'extensions').glob('*.py')],\n]\n\n# Generate the API documentation when building\nautosummary_generate = True\nautodoc_member_order = 'bysource'\n# autodoc_default_flags = ['members']\nnapoleon_google_docstring = False\nnapoleon_numpy_docstring = True\nnapoleon_include_init_with_doc = False\nnapoleon_use_rtype = True # having a separate entry generally helps readability\nnapoleon_use_param = True\nnapoleon_custom_sections = [('Params', 'Parameters')]\ntodo_include_todos = False\napi_dir = HERE / 'api' # function_images\n\nscanpy_tutorials_url = 'https://scanpy-tutorials.readthedocs.io/en/latest/'\n\nintersphinx_mapping = dict(\n anndata=('https://anndata.readthedocs.io/en/stable/', None),\n bbknn=('https://bbknn.readthedocs.io/en/latest/', None),\n cycler=('https://matplotlib.org/cycler/', None),\n h5py=('http://docs.h5py.org/en/stable/', None),\n ipython=('https://ipython.readthedocs.io/en/stable/', None),\n leidenalg=('https://leidenalg.readthedocs.io/en/latest/', None),\n louvain=('https://louvain-igraph.readthedocs.io/en/latest/', None),\n matplotlib=('https://matplotlib.org/', None),\n networkx=('https://networkx.github.io/documentation/networkx-1.10/', None),\n numpy=('https://docs.scipy.org/doc/numpy/', None),\n pandas=('https://pandas.pydata.org/pandas-docs/stable/', None),\n pytest=('https://docs.pytest.org/en/latest/', None),\n python=('https://docs.python.org/3', None),\n scipy=('https://docs.scipy.org/doc/scipy/reference/', None),\n seaborn=('https://seaborn.pydata.org/', None),\n sklearn=('https://scikit-learn.org/stable/', None),\n scanpy_tutorials=(scanpy_tutorials_url, None),\n)\n\n\n# -- Options for HTML output ----------------------------------------------\n\n\nhtml_theme = 'scanpydoc'\nhtml_theme_options = dict(\n navigation_depth=4,\n logo_only=True,\n docsearch_index='scanpy',\n docsearch_key='fa4304eb95d2134997e3729553a674b2',\n)\nhtml_context = dict(\n display_github=True, # Integrate GitHub\n github_user='theislab', # Username\n github_repo='scanpy', # Repo name\n github_version='master', # Version\n conf_py_path='/docs/', # Path in the checkout to the docs root\n)\nhtml_static_path = ['_static']\nhtml_show_sphinx = False\nhtml_logo = '_static/img/Scanpy_Logo_BrightFG.svg'\n\n\ndef setup(app):\n app.warningiserror = on_rtd\n\n\n# -- Options for other output formats ------------------------------------------\n\nhtmlhelp_basename = f'{project}doc'\ndoc_title = f'{project} Documentation'\nlatex_documents = [(master_doc, f'{project}.tex', doc_title, author, 'manual')]\nman_pages = [(master_doc, project, doc_title, [author], 
1)]\ntexinfo_documents = [\n (\n master_doc,\n project,\n doc_title,\n author,\n project,\n 'One line description of project.',\n 'Miscellaneous',\n )\n]\n\n\n# -- Suppress link warnings ----------------------------------------------------\n\nqualname_overrides = {\n \"sklearn.neighbors._dist_metrics.DistanceMetric\": \"sklearn.neighbors.DistanceMetric\",\n # If the docs are built with an old version of numpy, this will make it work:\n \"numpy.random.RandomState\": \"numpy.random.mtrand.RandomState\",\n \"scanpy.plotting._matrixplot.MatrixPlot\": \"scanpy.pl.MatrixPlot\",\n \"scanpy.plotting._dotplot.DotPlot\": \"scanpy.pl.DotPlot\",\n \"scanpy.plotting._stacked_violin.StackedViolin\": \"scanpy.pl.StackedViolin\",\n \"pandas.core.series.Series\": \"pandas.Series\",\n}\n\nnitpick_ignore = [\n # Will probably be documented\n ('py:class', 'scanpy._settings.Verbosity'),\n # Currently undocumented: https://github.com/mwaskom/seaborn/issues/1810\n ('py:class', 'seaborn.ClusterGrid'),\n # Won\u2019t be documented\n ('py:class', 'scanpy.plotting._utils._AxesSubplot'),\n ('py:class', 'scanpy._utils.Empty'),\n ('py:class', 'numpy.random.mtrand.RandomState'),\n # Will work once scipy 1.8 is released\n ('py:class', 'scipy.sparse.base.spmatrix'),\n ('py:class', 'scipy.sparse.csr.csr_matrix'),\n]\n\n# Options for plot examples\n\nplot_include_source = True\nplot_formats = [(\"png\", 90)]\nplot_html_show_formats = False\nplot_html_show_source_link = False\nplot_working_directory = HERE.parent # Project root\n", "path": "docs/conf.py"}]} |
gh_patches_debug_1074 | rasdani/github-patches | git_diff | aio-libs__aiohttp-7371 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
typo in payload.py class AsyncIterablePayload error message
### Describe the bug
https://github.com/aio-libs/aiohttp/blob/bf9d753edc928e7ecbc590c32603ebd3c1fc6282/aiohttp/payload.py#L419 has a typo in place of the intended `collections.abc.AsyncIterable`.
### To Reproduce
N/A
### Expected behavior
N/A
### Logs/tracebacks
```python-traceback
N/A
```
### Python Version
```console
$ python --version
Python 3.9.13
```
### aiohttp Version
```console
$ python -m pip show aiohttp
Version: 3.8.4
```
### multidict Version
```console
$ python -m pip show multidict
Version: 6.0.4
```
### yarl Version
```console
$ python -m pip show yarl
Version: 1.9.2
```
### OS
Windows 10
### Related component
Client
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct
--- END ISSUE ---
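As additional context, the misspelled message can be surfaced directly by constructing the payload with a value that is not async-iterable (a minimal sketch assuming aiohttp 3.8.x as reported; not part of the original issue):

```python
# bytes does not implement __aiter__, so the isinstance(value, AsyncIterable)
# check in AsyncIterablePayload.__init__ raises a TypeError containing the
# misspelled "collections.abc.AsyncIterablebe interface" text.
from aiohttp.payload import AsyncIterablePayload

try:
    AsyncIterablePayload(b"not an async iterable")
except TypeError as exc:
    print(exc)
```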
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aiohttp/payload.py`
Content:
```
1 import asyncio
2 import enum
3 import io
4 import json
5 import mimetypes
6 import os
7 import warnings
8 from abc import ABC, abstractmethod
9 from itertools import chain
10 from typing import (
11 IO,
12 TYPE_CHECKING,
13 Any,
14 ByteString,
15 Dict,
16 Iterable,
17 Optional,
18 TextIO,
19 Tuple,
20 Type,
21 Union,
22 )
23
24 from multidict import CIMultiDict
25
26 from . import hdrs
27 from .abc import AbstractStreamWriter
28 from .helpers import (
29 PY_36,
30 content_disposition_header,
31 guess_filename,
32 parse_mimetype,
33 sentinel,
34 )
35 from .streams import StreamReader
36 from .typedefs import Final, JSONEncoder, _CIMultiDict
37
38 __all__ = (
39 "PAYLOAD_REGISTRY",
40 "get_payload",
41 "payload_type",
42 "Payload",
43 "BytesPayload",
44 "StringPayload",
45 "IOBasePayload",
46 "BytesIOPayload",
47 "BufferedReaderPayload",
48 "TextIOPayload",
49 "StringIOPayload",
50 "JsonPayload",
51 "AsyncIterablePayload",
52 )
53
54 TOO_LARGE_BYTES_BODY: Final[int] = 2**20 # 1 MB
55
56 if TYPE_CHECKING: # pragma: no cover
57 from typing import List
58
59
60 class LookupError(Exception):
61 pass
62
63
64 class Order(str, enum.Enum):
65 normal = "normal"
66 try_first = "try_first"
67 try_last = "try_last"
68
69
70 def get_payload(data: Any, *args: Any, **kwargs: Any) -> "Payload":
71 return PAYLOAD_REGISTRY.get(data, *args, **kwargs)
72
73
74 def register_payload(
75 factory: Type["Payload"], type: Any, *, order: Order = Order.normal
76 ) -> None:
77 PAYLOAD_REGISTRY.register(factory, type, order=order)
78
79
80 class payload_type:
81 def __init__(self, type: Any, *, order: Order = Order.normal) -> None:
82 self.type = type
83 self.order = order
84
85 def __call__(self, factory: Type["Payload"]) -> Type["Payload"]:
86 register_payload(factory, self.type, order=self.order)
87 return factory
88
89
90 PayloadType = Type["Payload"]
91 _PayloadRegistryItem = Tuple[PayloadType, Any]
92
93
94 class PayloadRegistry:
95 """Payload registry.
96
97 note: we need zope.interface for more efficient adapter search
98 """
99
100 def __init__(self) -> None:
101 self._first: List[_PayloadRegistryItem] = []
102 self._normal: List[_PayloadRegistryItem] = []
103 self._last: List[_PayloadRegistryItem] = []
104
105 def get(
106 self,
107 data: Any,
108 *args: Any,
109 _CHAIN: "Type[chain[_PayloadRegistryItem]]" = chain,
110 **kwargs: Any,
111 ) -> "Payload":
112 if isinstance(data, Payload):
113 return data
114 for factory, type in _CHAIN(self._first, self._normal, self._last):
115 if isinstance(data, type):
116 return factory(data, *args, **kwargs)
117
118 raise LookupError()
119
120 def register(
121 self, factory: PayloadType, type: Any, *, order: Order = Order.normal
122 ) -> None:
123 if order is Order.try_first:
124 self._first.append((factory, type))
125 elif order is Order.normal:
126 self._normal.append((factory, type))
127 elif order is Order.try_last:
128 self._last.append((factory, type))
129 else:
130 raise ValueError(f"Unsupported order {order!r}")
131
132
133 class Payload(ABC):
134
135 _default_content_type: str = "application/octet-stream"
136 _size: Optional[int] = None
137
138 def __init__(
139 self,
140 value: Any,
141 headers: Optional[
142 Union[_CIMultiDict, Dict[str, str], Iterable[Tuple[str, str]]]
143 ] = None,
144 content_type: Optional[str] = sentinel,
145 filename: Optional[str] = None,
146 encoding: Optional[str] = None,
147 **kwargs: Any,
148 ) -> None:
149 self._encoding = encoding
150 self._filename = filename
151 self._headers: _CIMultiDict = CIMultiDict()
152 self._value = value
153 if content_type is not sentinel and content_type is not None:
154 self._headers[hdrs.CONTENT_TYPE] = content_type
155 elif self._filename is not None:
156 content_type = mimetypes.guess_type(self._filename)[0]
157 if content_type is None:
158 content_type = self._default_content_type
159 self._headers[hdrs.CONTENT_TYPE] = content_type
160 else:
161 self._headers[hdrs.CONTENT_TYPE] = self._default_content_type
162 self._headers.update(headers or {})
163
164 @property
165 def size(self) -> Optional[int]:
166 """Size of the payload."""
167 return self._size
168
169 @property
170 def filename(self) -> Optional[str]:
171 """Filename of the payload."""
172 return self._filename
173
174 @property
175 def headers(self) -> _CIMultiDict:
176 """Custom item headers"""
177 return self._headers
178
179 @property
180 def _binary_headers(self) -> bytes:
181 return (
182 "".join([k + ": " + v + "\r\n" for k, v in self.headers.items()]).encode(
183 "utf-8"
184 )
185 + b"\r\n"
186 )
187
188 @property
189 def encoding(self) -> Optional[str]:
190 """Payload encoding"""
191 return self._encoding
192
193 @property
194 def content_type(self) -> str:
195 """Content type"""
196 return self._headers[hdrs.CONTENT_TYPE]
197
198 def set_content_disposition(
199 self,
200 disptype: str,
201 quote_fields: bool = True,
202 _charset: str = "utf-8",
203 **params: Any,
204 ) -> None:
205 """Sets ``Content-Disposition`` header."""
206 self._headers[hdrs.CONTENT_DISPOSITION] = content_disposition_header(
207 disptype, quote_fields=quote_fields, _charset=_charset, **params
208 )
209
210 @abstractmethod
211 async def write(self, writer: AbstractStreamWriter) -> None:
212 """Write payload.
213
214 writer is an AbstractStreamWriter instance:
215 """
216
217
218 class BytesPayload(Payload):
219 def __init__(self, value: ByteString, *args: Any, **kwargs: Any) -> None:
220 if not isinstance(value, (bytes, bytearray, memoryview)):
221 raise TypeError(f"value argument must be byte-ish, not {type(value)!r}")
222
223 if "content_type" not in kwargs:
224 kwargs["content_type"] = "application/octet-stream"
225
226 super().__init__(value, *args, **kwargs)
227
228 if isinstance(value, memoryview):
229 self._size = value.nbytes
230 else:
231 self._size = len(value)
232
233 if self._size > TOO_LARGE_BYTES_BODY:
234 if PY_36:
235 kwargs = {"source": self}
236 else:
237 kwargs = {}
238 warnings.warn(
239 "Sending a large body directly with raw bytes might"
240 " lock the event loop. You should probably pass an "
241 "io.BytesIO object instead",
242 ResourceWarning,
243 **kwargs,
244 )
245
246 async def write(self, writer: AbstractStreamWriter) -> None:
247 await writer.write(self._value)
248
249
250 class StringPayload(BytesPayload):
251 def __init__(
252 self,
253 value: str,
254 *args: Any,
255 encoding: Optional[str] = None,
256 content_type: Optional[str] = None,
257 **kwargs: Any,
258 ) -> None:
259
260 if encoding is None:
261 if content_type is None:
262 real_encoding = "utf-8"
263 content_type = "text/plain; charset=utf-8"
264 else:
265 mimetype = parse_mimetype(content_type)
266 real_encoding = mimetype.parameters.get("charset", "utf-8")
267 else:
268 if content_type is None:
269 content_type = "text/plain; charset=%s" % encoding
270 real_encoding = encoding
271
272 super().__init__(
273 value.encode(real_encoding),
274 encoding=real_encoding,
275 content_type=content_type,
276 *args,
277 **kwargs,
278 )
279
280
281 class StringIOPayload(StringPayload):
282 def __init__(self, value: IO[str], *args: Any, **kwargs: Any) -> None:
283 super().__init__(value.read(), *args, **kwargs)
284
285
286 class IOBasePayload(Payload):
287 _value: IO[Any]
288
289 def __init__(
290 self, value: IO[Any], disposition: str = "attachment", *args: Any, **kwargs: Any
291 ) -> None:
292 if "filename" not in kwargs:
293 kwargs["filename"] = guess_filename(value)
294
295 super().__init__(value, *args, **kwargs)
296
297 if self._filename is not None and disposition is not None:
298 if hdrs.CONTENT_DISPOSITION not in self.headers:
299 self.set_content_disposition(disposition, filename=self._filename)
300
301 async def write(self, writer: AbstractStreamWriter) -> None:
302 loop = asyncio.get_event_loop()
303 try:
304 chunk = await loop.run_in_executor(None, self._value.read, 2**16)
305 while chunk:
306 await writer.write(chunk)
307 chunk = await loop.run_in_executor(None, self._value.read, 2**16)
308 finally:
309 await loop.run_in_executor(None, self._value.close)
310
311
312 class TextIOPayload(IOBasePayload):
313 _value: TextIO
314
315 def __init__(
316 self,
317 value: TextIO,
318 *args: Any,
319 encoding: Optional[str] = None,
320 content_type: Optional[str] = None,
321 **kwargs: Any,
322 ) -> None:
323
324 if encoding is None:
325 if content_type is None:
326 encoding = "utf-8"
327 content_type = "text/plain; charset=utf-8"
328 else:
329 mimetype = parse_mimetype(content_type)
330 encoding = mimetype.parameters.get("charset", "utf-8")
331 else:
332 if content_type is None:
333 content_type = "text/plain; charset=%s" % encoding
334
335 super().__init__(
336 value,
337 content_type=content_type,
338 encoding=encoding,
339 *args,
340 **kwargs,
341 )
342
343 @property
344 def size(self) -> Optional[int]:
345 try:
346 return os.fstat(self._value.fileno()).st_size - self._value.tell()
347 except OSError:
348 return None
349
350 async def write(self, writer: AbstractStreamWriter) -> None:
351 loop = asyncio.get_event_loop()
352 try:
353 chunk = await loop.run_in_executor(None, self._value.read, 2**16)
354 while chunk:
355 data = (
356 chunk.encode(encoding=self._encoding)
357 if self._encoding
358 else chunk.encode()
359 )
360 await writer.write(data)
361 chunk = await loop.run_in_executor(None, self._value.read, 2**16)
362 finally:
363 await loop.run_in_executor(None, self._value.close)
364
365
366 class BytesIOPayload(IOBasePayload):
367 @property
368 def size(self) -> int:
369 position = self._value.tell()
370 end = self._value.seek(0, os.SEEK_END)
371 self._value.seek(position)
372 return end - position
373
374
375 class BufferedReaderPayload(IOBasePayload):
376 @property
377 def size(self) -> Optional[int]:
378 try:
379 return os.fstat(self._value.fileno()).st_size - self._value.tell()
380 except OSError:
381 # data.fileno() is not supported, e.g.
382 # io.BufferedReader(io.BytesIO(b'data'))
383 return None
384
385
386 class JsonPayload(BytesPayload):
387 def __init__(
388 self,
389 value: Any,
390 encoding: str = "utf-8",
391 content_type: str = "application/json",
392 dumps: JSONEncoder = json.dumps,
393 *args: Any,
394 **kwargs: Any,
395 ) -> None:
396
397 super().__init__(
398 dumps(value).encode(encoding),
399 content_type=content_type,
400 encoding=encoding,
401 *args,
402 **kwargs,
403 )
404
405
406 if TYPE_CHECKING: # pragma: no cover
407 from typing import AsyncIterable, AsyncIterator
408
409 _AsyncIterator = AsyncIterator[bytes]
410 _AsyncIterable = AsyncIterable[bytes]
411 else:
412 from collections.abc import AsyncIterable, AsyncIterator
413
414 _AsyncIterator = AsyncIterator
415 _AsyncIterable = AsyncIterable
416
417
418 class AsyncIterablePayload(Payload):
419
420 _iter: Optional[_AsyncIterator] = None
421
422 def __init__(self, value: _AsyncIterable, *args: Any, **kwargs: Any) -> None:
423 if not isinstance(value, AsyncIterable):
424 raise TypeError(
425 "value argument must support "
426 "collections.abc.AsyncIterablebe interface, "
427 "got {!r}".format(type(value))
428 )
429
430 if "content_type" not in kwargs:
431 kwargs["content_type"] = "application/octet-stream"
432
433 super().__init__(value, *args, **kwargs)
434
435 self._iter = value.__aiter__()
436
437 async def write(self, writer: AbstractStreamWriter) -> None:
438 if self._iter:
439 try:
440 # iter is not None check prevents rare cases
441 # when the case iterable is used twice
442 while True:
443 chunk = await self._iter.__anext__()
444 await writer.write(chunk)
445 except StopAsyncIteration:
446 self._iter = None
447
448
449 class StreamReaderPayload(AsyncIterablePayload):
450 def __init__(self, value: StreamReader, *args: Any, **kwargs: Any) -> None:
451 super().__init__(value.iter_any(), *args, **kwargs)
452
453
454 PAYLOAD_REGISTRY = PayloadRegistry()
455 PAYLOAD_REGISTRY.register(BytesPayload, (bytes, bytearray, memoryview))
456 PAYLOAD_REGISTRY.register(StringPayload, str)
457 PAYLOAD_REGISTRY.register(StringIOPayload, io.StringIO)
458 PAYLOAD_REGISTRY.register(TextIOPayload, io.TextIOBase)
459 PAYLOAD_REGISTRY.register(BytesIOPayload, io.BytesIO)
460 PAYLOAD_REGISTRY.register(BufferedReaderPayload, (io.BufferedReader, io.BufferedRandom))
461 PAYLOAD_REGISTRY.register(IOBasePayload, io.IOBase)
462 PAYLOAD_REGISTRY.register(StreamReaderPayload, StreamReader)
463 # try_last for giving a chance to more specialized async interables like
464 # multidict.BodyPartReaderPayload override the default
465 PAYLOAD_REGISTRY.register(AsyncIterablePayload, AsyncIterable, order=Order.try_last)
466
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aiohttp/payload.py b/aiohttp/payload.py
--- a/aiohttp/payload.py
+++ b/aiohttp/payload.py
@@ -423,7 +423,7 @@
if not isinstance(value, AsyncIterable):
raise TypeError(
"value argument must support "
- "collections.abc.AsyncIterablebe interface, "
+ "collections.abc.AsyncIterable interface, "
"got {!r}".format(type(value))
)
| {"golden_diff": "diff --git a/aiohttp/payload.py b/aiohttp/payload.py\n--- a/aiohttp/payload.py\n+++ b/aiohttp/payload.py\n@@ -423,7 +423,7 @@\n if not isinstance(value, AsyncIterable):\n raise TypeError(\n \"value argument must support \"\n- \"collections.abc.AsyncIterablebe interface, \"\n+ \"collections.abc.AsyncIterable interface, \"\n \"got {!r}\".format(type(value))\n )\n", "issue": "typo in payload.py class AsyncIterablePayload error message\n### Describe the bug\n\nhttps://github.com/aio-libs/aiohttp/blob/bf9d753edc928e7ecbc590c32603ebd3c1fc6282/aiohttp/payload.py#L419 has a typo in place of the intended `collections.abc.AsyncIterable`.\n\n### To Reproduce\n\nN/A\n\n### Expected behavior\n\nN/A\n\n### Logs/tracebacks\n\n```python-traceback\nN/A\n```\n\n\n### Python Version\n\n```console\n$ python --version\r\nPython 3.9.13\n```\n\n\n### aiohttp Version\n\n```console\n$ python -m pip show aiohttp\r\nVersion: 3.8.4\n```\n\n\n### multidict Version\n\n```console\n$ python -m pip show multidict\r\nVersion: 6.0.4\n```\n\n\n### yarl Version\n\n```console\n$ python -m pip show yarl\r\nVersion: 1.9.2\n```\n\n\n### OS\n\nWindows 10\n\n### Related component\n\nClient\n\n### Additional context\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow the aio-libs Code of Conduct\n", "before_files": [{"content": "import asyncio\nimport enum\nimport io\nimport json\nimport mimetypes\nimport os\nimport warnings\nfrom abc import ABC, abstractmethod\nfrom itertools import chain\nfrom typing import (\n IO,\n TYPE_CHECKING,\n Any,\n ByteString,\n Dict,\n Iterable,\n Optional,\n TextIO,\n Tuple,\n Type,\n Union,\n)\n\nfrom multidict import CIMultiDict\n\nfrom . import hdrs\nfrom .abc import AbstractStreamWriter\nfrom .helpers import (\n PY_36,\n content_disposition_header,\n guess_filename,\n parse_mimetype,\n sentinel,\n)\nfrom .streams import StreamReader\nfrom .typedefs import Final, JSONEncoder, _CIMultiDict\n\n__all__ = (\n \"PAYLOAD_REGISTRY\",\n \"get_payload\",\n \"payload_type\",\n \"Payload\",\n \"BytesPayload\",\n \"StringPayload\",\n \"IOBasePayload\",\n \"BytesIOPayload\",\n \"BufferedReaderPayload\",\n \"TextIOPayload\",\n \"StringIOPayload\",\n \"JsonPayload\",\n \"AsyncIterablePayload\",\n)\n\nTOO_LARGE_BYTES_BODY: Final[int] = 2**20 # 1 MB\n\nif TYPE_CHECKING: # pragma: no cover\n from typing import List\n\n\nclass LookupError(Exception):\n pass\n\n\nclass Order(str, enum.Enum):\n normal = \"normal\"\n try_first = \"try_first\"\n try_last = \"try_last\"\n\n\ndef get_payload(data: Any, *args: Any, **kwargs: Any) -> \"Payload\":\n return PAYLOAD_REGISTRY.get(data, *args, **kwargs)\n\n\ndef register_payload(\n factory: Type[\"Payload\"], type: Any, *, order: Order = Order.normal\n) -> None:\n PAYLOAD_REGISTRY.register(factory, type, order=order)\n\n\nclass payload_type:\n def __init__(self, type: Any, *, order: Order = Order.normal) -> None:\n self.type = type\n self.order = order\n\n def __call__(self, factory: Type[\"Payload\"]) -> Type[\"Payload\"]:\n register_payload(factory, self.type, order=self.order)\n return factory\n\n\nPayloadType = Type[\"Payload\"]\n_PayloadRegistryItem = Tuple[PayloadType, Any]\n\n\nclass PayloadRegistry:\n \"\"\"Payload registry.\n\n note: we need zope.interface for more efficient adapter search\n \"\"\"\n\n def __init__(self) -> None:\n self._first: List[_PayloadRegistryItem] = []\n self._normal: List[_PayloadRegistryItem] = []\n self._last: List[_PayloadRegistryItem] = []\n\n def get(\n self,\n data: Any,\n *args: Any,\n _CHAIN: 
\"Type[chain[_PayloadRegistryItem]]\" = chain,\n **kwargs: Any,\n ) -> \"Payload\":\n if isinstance(data, Payload):\n return data\n for factory, type in _CHAIN(self._first, self._normal, self._last):\n if isinstance(data, type):\n return factory(data, *args, **kwargs)\n\n raise LookupError()\n\n def register(\n self, factory: PayloadType, type: Any, *, order: Order = Order.normal\n ) -> None:\n if order is Order.try_first:\n self._first.append((factory, type))\n elif order is Order.normal:\n self._normal.append((factory, type))\n elif order is Order.try_last:\n self._last.append((factory, type))\n else:\n raise ValueError(f\"Unsupported order {order!r}\")\n\n\nclass Payload(ABC):\n\n _default_content_type: str = \"application/octet-stream\"\n _size: Optional[int] = None\n\n def __init__(\n self,\n value: Any,\n headers: Optional[\n Union[_CIMultiDict, Dict[str, str], Iterable[Tuple[str, str]]]\n ] = None,\n content_type: Optional[str] = sentinel,\n filename: Optional[str] = None,\n encoding: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n self._encoding = encoding\n self._filename = filename\n self._headers: _CIMultiDict = CIMultiDict()\n self._value = value\n if content_type is not sentinel and content_type is not None:\n self._headers[hdrs.CONTENT_TYPE] = content_type\n elif self._filename is not None:\n content_type = mimetypes.guess_type(self._filename)[0]\n if content_type is None:\n content_type = self._default_content_type\n self._headers[hdrs.CONTENT_TYPE] = content_type\n else:\n self._headers[hdrs.CONTENT_TYPE] = self._default_content_type\n self._headers.update(headers or {})\n\n @property\n def size(self) -> Optional[int]:\n \"\"\"Size of the payload.\"\"\"\n return self._size\n\n @property\n def filename(self) -> Optional[str]:\n \"\"\"Filename of the payload.\"\"\"\n return self._filename\n\n @property\n def headers(self) -> _CIMultiDict:\n \"\"\"Custom item headers\"\"\"\n return self._headers\n\n @property\n def _binary_headers(self) -> bytes:\n return (\n \"\".join([k + \": \" + v + \"\\r\\n\" for k, v in self.headers.items()]).encode(\n \"utf-8\"\n )\n + b\"\\r\\n\"\n )\n\n @property\n def encoding(self) -> Optional[str]:\n \"\"\"Payload encoding\"\"\"\n return self._encoding\n\n @property\n def content_type(self) -> str:\n \"\"\"Content type\"\"\"\n return self._headers[hdrs.CONTENT_TYPE]\n\n def set_content_disposition(\n self,\n disptype: str,\n quote_fields: bool = True,\n _charset: str = \"utf-8\",\n **params: Any,\n ) -> None:\n \"\"\"Sets ``Content-Disposition`` header.\"\"\"\n self._headers[hdrs.CONTENT_DISPOSITION] = content_disposition_header(\n disptype, quote_fields=quote_fields, _charset=_charset, **params\n )\n\n @abstractmethod\n async def write(self, writer: AbstractStreamWriter) -> None:\n \"\"\"Write payload.\n\n writer is an AbstractStreamWriter instance:\n \"\"\"\n\n\nclass BytesPayload(Payload):\n def __init__(self, value: ByteString, *args: Any, **kwargs: Any) -> None:\n if not isinstance(value, (bytes, bytearray, memoryview)):\n raise TypeError(f\"value argument must be byte-ish, not {type(value)!r}\")\n\n if \"content_type\" not in kwargs:\n kwargs[\"content_type\"] = \"application/octet-stream\"\n\n super().__init__(value, *args, **kwargs)\n\n if isinstance(value, memoryview):\n self._size = value.nbytes\n else:\n self._size = len(value)\n\n if self._size > TOO_LARGE_BYTES_BODY:\n if PY_36:\n kwargs = {\"source\": self}\n else:\n kwargs = {}\n warnings.warn(\n \"Sending a large body directly with raw bytes might\"\n \" lock the event loop. 
You should probably pass an \"\n \"io.BytesIO object instead\",\n ResourceWarning,\n **kwargs,\n )\n\n async def write(self, writer: AbstractStreamWriter) -> None:\n await writer.write(self._value)\n\n\nclass StringPayload(BytesPayload):\n def __init__(\n self,\n value: str,\n *args: Any,\n encoding: Optional[str] = None,\n content_type: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n\n if encoding is None:\n if content_type is None:\n real_encoding = \"utf-8\"\n content_type = \"text/plain; charset=utf-8\"\n else:\n mimetype = parse_mimetype(content_type)\n real_encoding = mimetype.parameters.get(\"charset\", \"utf-8\")\n else:\n if content_type is None:\n content_type = \"text/plain; charset=%s\" % encoding\n real_encoding = encoding\n\n super().__init__(\n value.encode(real_encoding),\n encoding=real_encoding,\n content_type=content_type,\n *args,\n **kwargs,\n )\n\n\nclass StringIOPayload(StringPayload):\n def __init__(self, value: IO[str], *args: Any, **kwargs: Any) -> None:\n super().__init__(value.read(), *args, **kwargs)\n\n\nclass IOBasePayload(Payload):\n _value: IO[Any]\n\n def __init__(\n self, value: IO[Any], disposition: str = \"attachment\", *args: Any, **kwargs: Any\n ) -> None:\n if \"filename\" not in kwargs:\n kwargs[\"filename\"] = guess_filename(value)\n\n super().__init__(value, *args, **kwargs)\n\n if self._filename is not None and disposition is not None:\n if hdrs.CONTENT_DISPOSITION not in self.headers:\n self.set_content_disposition(disposition, filename=self._filename)\n\n async def write(self, writer: AbstractStreamWriter) -> None:\n loop = asyncio.get_event_loop()\n try:\n chunk = await loop.run_in_executor(None, self._value.read, 2**16)\n while chunk:\n await writer.write(chunk)\n chunk = await loop.run_in_executor(None, self._value.read, 2**16)\n finally:\n await loop.run_in_executor(None, self._value.close)\n\n\nclass TextIOPayload(IOBasePayload):\n _value: TextIO\n\n def __init__(\n self,\n value: TextIO,\n *args: Any,\n encoding: Optional[str] = None,\n content_type: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n\n if encoding is None:\n if content_type is None:\n encoding = \"utf-8\"\n content_type = \"text/plain; charset=utf-8\"\n else:\n mimetype = parse_mimetype(content_type)\n encoding = mimetype.parameters.get(\"charset\", \"utf-8\")\n else:\n if content_type is None:\n content_type = \"text/plain; charset=%s\" % encoding\n\n super().__init__(\n value,\n content_type=content_type,\n encoding=encoding,\n *args,\n **kwargs,\n )\n\n @property\n def size(self) -> Optional[int]:\n try:\n return os.fstat(self._value.fileno()).st_size - self._value.tell()\n except OSError:\n return None\n\n async def write(self, writer: AbstractStreamWriter) -> None:\n loop = asyncio.get_event_loop()\n try:\n chunk = await loop.run_in_executor(None, self._value.read, 2**16)\n while chunk:\n data = (\n chunk.encode(encoding=self._encoding)\n if self._encoding\n else chunk.encode()\n )\n await writer.write(data)\n chunk = await loop.run_in_executor(None, self._value.read, 2**16)\n finally:\n await loop.run_in_executor(None, self._value.close)\n\n\nclass BytesIOPayload(IOBasePayload):\n @property\n def size(self) -> int:\n position = self._value.tell()\n end = self._value.seek(0, os.SEEK_END)\n self._value.seek(position)\n return end - position\n\n\nclass BufferedReaderPayload(IOBasePayload):\n @property\n def size(self) -> Optional[int]:\n try:\n return os.fstat(self._value.fileno()).st_size - self._value.tell()\n except OSError:\n # data.fileno() is not 
supported, e.g.\n # io.BufferedReader(io.BytesIO(b'data'))\n return None\n\n\nclass JsonPayload(BytesPayload):\n def __init__(\n self,\n value: Any,\n encoding: str = \"utf-8\",\n content_type: str = \"application/json\",\n dumps: JSONEncoder = json.dumps,\n *args: Any,\n **kwargs: Any,\n ) -> None:\n\n super().__init__(\n dumps(value).encode(encoding),\n content_type=content_type,\n encoding=encoding,\n *args,\n **kwargs,\n )\n\n\nif TYPE_CHECKING: # pragma: no cover\n from typing import AsyncIterable, AsyncIterator\n\n _AsyncIterator = AsyncIterator[bytes]\n _AsyncIterable = AsyncIterable[bytes]\nelse:\n from collections.abc import AsyncIterable, AsyncIterator\n\n _AsyncIterator = AsyncIterator\n _AsyncIterable = AsyncIterable\n\n\nclass AsyncIterablePayload(Payload):\n\n _iter: Optional[_AsyncIterator] = None\n\n def __init__(self, value: _AsyncIterable, *args: Any, **kwargs: Any) -> None:\n if not isinstance(value, AsyncIterable):\n raise TypeError(\n \"value argument must support \"\n \"collections.abc.AsyncIterablebe interface, \"\n \"got {!r}\".format(type(value))\n )\n\n if \"content_type\" not in kwargs:\n kwargs[\"content_type\"] = \"application/octet-stream\"\n\n super().__init__(value, *args, **kwargs)\n\n self._iter = value.__aiter__()\n\n async def write(self, writer: AbstractStreamWriter) -> None:\n if self._iter:\n try:\n # iter is not None check prevents rare cases\n # when the case iterable is used twice\n while True:\n chunk = await self._iter.__anext__()\n await writer.write(chunk)\n except StopAsyncIteration:\n self._iter = None\n\n\nclass StreamReaderPayload(AsyncIterablePayload):\n def __init__(self, value: StreamReader, *args: Any, **kwargs: Any) -> None:\n super().__init__(value.iter_any(), *args, **kwargs)\n\n\nPAYLOAD_REGISTRY = PayloadRegistry()\nPAYLOAD_REGISTRY.register(BytesPayload, (bytes, bytearray, memoryview))\nPAYLOAD_REGISTRY.register(StringPayload, str)\nPAYLOAD_REGISTRY.register(StringIOPayload, io.StringIO)\nPAYLOAD_REGISTRY.register(TextIOPayload, io.TextIOBase)\nPAYLOAD_REGISTRY.register(BytesIOPayload, io.BytesIO)\nPAYLOAD_REGISTRY.register(BufferedReaderPayload, (io.BufferedReader, io.BufferedRandom))\nPAYLOAD_REGISTRY.register(IOBasePayload, io.IOBase)\nPAYLOAD_REGISTRY.register(StreamReaderPayload, StreamReader)\n# try_last for giving a chance to more specialized async interables like\n# multidict.BodyPartReaderPayload override the default\nPAYLOAD_REGISTRY.register(AsyncIterablePayload, AsyncIterable, order=Order.try_last)\n", "path": "aiohttp/payload.py"}], "after_files": [{"content": "import asyncio\nimport enum\nimport io\nimport json\nimport mimetypes\nimport os\nimport warnings\nfrom abc import ABC, abstractmethod\nfrom itertools import chain\nfrom typing import (\n IO,\n TYPE_CHECKING,\n Any,\n ByteString,\n Dict,\n Iterable,\n Optional,\n TextIO,\n Tuple,\n Type,\n Union,\n)\n\nfrom multidict import CIMultiDict\n\nfrom . 
import hdrs\nfrom .abc import AbstractStreamWriter\nfrom .helpers import (\n PY_36,\n content_disposition_header,\n guess_filename,\n parse_mimetype,\n sentinel,\n)\nfrom .streams import StreamReader\nfrom .typedefs import Final, JSONEncoder, _CIMultiDict\n\n__all__ = (\n \"PAYLOAD_REGISTRY\",\n \"get_payload\",\n \"payload_type\",\n \"Payload\",\n \"BytesPayload\",\n \"StringPayload\",\n \"IOBasePayload\",\n \"BytesIOPayload\",\n \"BufferedReaderPayload\",\n \"TextIOPayload\",\n \"StringIOPayload\",\n \"JsonPayload\",\n \"AsyncIterablePayload\",\n)\n\nTOO_LARGE_BYTES_BODY: Final[int] = 2**20 # 1 MB\n\nif TYPE_CHECKING: # pragma: no cover\n from typing import List\n\n\nclass LookupError(Exception):\n pass\n\n\nclass Order(str, enum.Enum):\n normal = \"normal\"\n try_first = \"try_first\"\n try_last = \"try_last\"\n\n\ndef get_payload(data: Any, *args: Any, **kwargs: Any) -> \"Payload\":\n return PAYLOAD_REGISTRY.get(data, *args, **kwargs)\n\n\ndef register_payload(\n factory: Type[\"Payload\"], type: Any, *, order: Order = Order.normal\n) -> None:\n PAYLOAD_REGISTRY.register(factory, type, order=order)\n\n\nclass payload_type:\n def __init__(self, type: Any, *, order: Order = Order.normal) -> None:\n self.type = type\n self.order = order\n\n def __call__(self, factory: Type[\"Payload\"]) -> Type[\"Payload\"]:\n register_payload(factory, self.type, order=self.order)\n return factory\n\n\nPayloadType = Type[\"Payload\"]\n_PayloadRegistryItem = Tuple[PayloadType, Any]\n\n\nclass PayloadRegistry:\n \"\"\"Payload registry.\n\n note: we need zope.interface for more efficient adapter search\n \"\"\"\n\n def __init__(self) -> None:\n self._first: List[_PayloadRegistryItem] = []\n self._normal: List[_PayloadRegistryItem] = []\n self._last: List[_PayloadRegistryItem] = []\n\n def get(\n self,\n data: Any,\n *args: Any,\n _CHAIN: \"Type[chain[_PayloadRegistryItem]]\" = chain,\n **kwargs: Any,\n ) -> \"Payload\":\n if isinstance(data, Payload):\n return data\n for factory, type in _CHAIN(self._first, self._normal, self._last):\n if isinstance(data, type):\n return factory(data, *args, **kwargs)\n\n raise LookupError()\n\n def register(\n self, factory: PayloadType, type: Any, *, order: Order = Order.normal\n ) -> None:\n if order is Order.try_first:\n self._first.append((factory, type))\n elif order is Order.normal:\n self._normal.append((factory, type))\n elif order is Order.try_last:\n self._last.append((factory, type))\n else:\n raise ValueError(f\"Unsupported order {order!r}\")\n\n\nclass Payload(ABC):\n\n _default_content_type: str = \"application/octet-stream\"\n _size: Optional[int] = None\n\n def __init__(\n self,\n value: Any,\n headers: Optional[\n Union[_CIMultiDict, Dict[str, str], Iterable[Tuple[str, str]]]\n ] = None,\n content_type: Optional[str] = sentinel,\n filename: Optional[str] = None,\n encoding: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n self._encoding = encoding\n self._filename = filename\n self._headers: _CIMultiDict = CIMultiDict()\n self._value = value\n if content_type is not sentinel and content_type is not None:\n self._headers[hdrs.CONTENT_TYPE] = content_type\n elif self._filename is not None:\n content_type = mimetypes.guess_type(self._filename)[0]\n if content_type is None:\n content_type = self._default_content_type\n self._headers[hdrs.CONTENT_TYPE] = content_type\n else:\n self._headers[hdrs.CONTENT_TYPE] = self._default_content_type\n self._headers.update(headers or {})\n\n @property\n def size(self) -> Optional[int]:\n \"\"\"Size of the 
payload.\"\"\"\n return self._size\n\n @property\n def filename(self) -> Optional[str]:\n \"\"\"Filename of the payload.\"\"\"\n return self._filename\n\n @property\n def headers(self) -> _CIMultiDict:\n \"\"\"Custom item headers\"\"\"\n return self._headers\n\n @property\n def _binary_headers(self) -> bytes:\n return (\n \"\".join([k + \": \" + v + \"\\r\\n\" for k, v in self.headers.items()]).encode(\n \"utf-8\"\n )\n + b\"\\r\\n\"\n )\n\n @property\n def encoding(self) -> Optional[str]:\n \"\"\"Payload encoding\"\"\"\n return self._encoding\n\n @property\n def content_type(self) -> str:\n \"\"\"Content type\"\"\"\n return self._headers[hdrs.CONTENT_TYPE]\n\n def set_content_disposition(\n self,\n disptype: str,\n quote_fields: bool = True,\n _charset: str = \"utf-8\",\n **params: Any,\n ) -> None:\n \"\"\"Sets ``Content-Disposition`` header.\"\"\"\n self._headers[hdrs.CONTENT_DISPOSITION] = content_disposition_header(\n disptype, quote_fields=quote_fields, _charset=_charset, **params\n )\n\n @abstractmethod\n async def write(self, writer: AbstractStreamWriter) -> None:\n \"\"\"Write payload.\n\n writer is an AbstractStreamWriter instance:\n \"\"\"\n\n\nclass BytesPayload(Payload):\n def __init__(self, value: ByteString, *args: Any, **kwargs: Any) -> None:\n if not isinstance(value, (bytes, bytearray, memoryview)):\n raise TypeError(f\"value argument must be byte-ish, not {type(value)!r}\")\n\n if \"content_type\" not in kwargs:\n kwargs[\"content_type\"] = \"application/octet-stream\"\n\n super().__init__(value, *args, **kwargs)\n\n if isinstance(value, memoryview):\n self._size = value.nbytes\n else:\n self._size = len(value)\n\n if self._size > TOO_LARGE_BYTES_BODY:\n if PY_36:\n kwargs = {\"source\": self}\n else:\n kwargs = {}\n warnings.warn(\n \"Sending a large body directly with raw bytes might\"\n \" lock the event loop. 
You should probably pass an \"\n \"io.BytesIO object instead\",\n ResourceWarning,\n **kwargs,\n )\n\n async def write(self, writer: AbstractStreamWriter) -> None:\n await writer.write(self._value)\n\n\nclass StringPayload(BytesPayload):\n def __init__(\n self,\n value: str,\n *args: Any,\n encoding: Optional[str] = None,\n content_type: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n\n if encoding is None:\n if content_type is None:\n real_encoding = \"utf-8\"\n content_type = \"text/plain; charset=utf-8\"\n else:\n mimetype = parse_mimetype(content_type)\n real_encoding = mimetype.parameters.get(\"charset\", \"utf-8\")\n else:\n if content_type is None:\n content_type = \"text/plain; charset=%s\" % encoding\n real_encoding = encoding\n\n super().__init__(\n value.encode(real_encoding),\n encoding=real_encoding,\n content_type=content_type,\n *args,\n **kwargs,\n )\n\n\nclass StringIOPayload(StringPayload):\n def __init__(self, value: IO[str], *args: Any, **kwargs: Any) -> None:\n super().__init__(value.read(), *args, **kwargs)\n\n\nclass IOBasePayload(Payload):\n _value: IO[Any]\n\n def __init__(\n self, value: IO[Any], disposition: str = \"attachment\", *args: Any, **kwargs: Any\n ) -> None:\n if \"filename\" not in kwargs:\n kwargs[\"filename\"] = guess_filename(value)\n\n super().__init__(value, *args, **kwargs)\n\n if self._filename is not None and disposition is not None:\n if hdrs.CONTENT_DISPOSITION not in self.headers:\n self.set_content_disposition(disposition, filename=self._filename)\n\n async def write(self, writer: AbstractStreamWriter) -> None:\n loop = asyncio.get_event_loop()\n try:\n chunk = await loop.run_in_executor(None, self._value.read, 2**16)\n while chunk:\n await writer.write(chunk)\n chunk = await loop.run_in_executor(None, self._value.read, 2**16)\n finally:\n await loop.run_in_executor(None, self._value.close)\n\n\nclass TextIOPayload(IOBasePayload):\n _value: TextIO\n\n def __init__(\n self,\n value: TextIO,\n *args: Any,\n encoding: Optional[str] = None,\n content_type: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n\n if encoding is None:\n if content_type is None:\n encoding = \"utf-8\"\n content_type = \"text/plain; charset=utf-8\"\n else:\n mimetype = parse_mimetype(content_type)\n encoding = mimetype.parameters.get(\"charset\", \"utf-8\")\n else:\n if content_type is None:\n content_type = \"text/plain; charset=%s\" % encoding\n\n super().__init__(\n value,\n content_type=content_type,\n encoding=encoding,\n *args,\n **kwargs,\n )\n\n @property\n def size(self) -> Optional[int]:\n try:\n return os.fstat(self._value.fileno()).st_size - self._value.tell()\n except OSError:\n return None\n\n async def write(self, writer: AbstractStreamWriter) -> None:\n loop = asyncio.get_event_loop()\n try:\n chunk = await loop.run_in_executor(None, self._value.read, 2**16)\n while chunk:\n data = (\n chunk.encode(encoding=self._encoding)\n if self._encoding\n else chunk.encode()\n )\n await writer.write(data)\n chunk = await loop.run_in_executor(None, self._value.read, 2**16)\n finally:\n await loop.run_in_executor(None, self._value.close)\n\n\nclass BytesIOPayload(IOBasePayload):\n @property\n def size(self) -> int:\n position = self._value.tell()\n end = self._value.seek(0, os.SEEK_END)\n self._value.seek(position)\n return end - position\n\n\nclass BufferedReaderPayload(IOBasePayload):\n @property\n def size(self) -> Optional[int]:\n try:\n return os.fstat(self._value.fileno()).st_size - self._value.tell()\n except OSError:\n # data.fileno() is not 
supported, e.g.\n # io.BufferedReader(io.BytesIO(b'data'))\n return None\n\n\nclass JsonPayload(BytesPayload):\n def __init__(\n self,\n value: Any,\n encoding: str = \"utf-8\",\n content_type: str = \"application/json\",\n dumps: JSONEncoder = json.dumps,\n *args: Any,\n **kwargs: Any,\n ) -> None:\n\n super().__init__(\n dumps(value).encode(encoding),\n content_type=content_type,\n encoding=encoding,\n *args,\n **kwargs,\n )\n\n\nif TYPE_CHECKING: # pragma: no cover\n from typing import AsyncIterable, AsyncIterator\n\n _AsyncIterator = AsyncIterator[bytes]\n _AsyncIterable = AsyncIterable[bytes]\nelse:\n from collections.abc import AsyncIterable, AsyncIterator\n\n _AsyncIterator = AsyncIterator\n _AsyncIterable = AsyncIterable\n\n\nclass AsyncIterablePayload(Payload):\n\n _iter: Optional[_AsyncIterator] = None\n\n def __init__(self, value: _AsyncIterable, *args: Any, **kwargs: Any) -> None:\n if not isinstance(value, AsyncIterable):\n raise TypeError(\n \"value argument must support \"\n \"collections.abc.AsyncIterable interface, \"\n \"got {!r}\".format(type(value))\n )\n\n if \"content_type\" not in kwargs:\n kwargs[\"content_type\"] = \"application/octet-stream\"\n\n super().__init__(value, *args, **kwargs)\n\n self._iter = value.__aiter__()\n\n async def write(self, writer: AbstractStreamWriter) -> None:\n if self._iter:\n try:\n # iter is not None check prevents rare cases\n # when the case iterable is used twice\n while True:\n chunk = await self._iter.__anext__()\n await writer.write(chunk)\n except StopAsyncIteration:\n self._iter = None\n\n\nclass StreamReaderPayload(AsyncIterablePayload):\n def __init__(self, value: StreamReader, *args: Any, **kwargs: Any) -> None:\n super().__init__(value.iter_any(), *args, **kwargs)\n\n\nPAYLOAD_REGISTRY = PayloadRegistry()\nPAYLOAD_REGISTRY.register(BytesPayload, (bytes, bytearray, memoryview))\nPAYLOAD_REGISTRY.register(StringPayload, str)\nPAYLOAD_REGISTRY.register(StringIOPayload, io.StringIO)\nPAYLOAD_REGISTRY.register(TextIOPayload, io.TextIOBase)\nPAYLOAD_REGISTRY.register(BytesIOPayload, io.BytesIO)\nPAYLOAD_REGISTRY.register(BufferedReaderPayload, (io.BufferedReader, io.BufferedRandom))\nPAYLOAD_REGISTRY.register(IOBasePayload, io.IOBase)\nPAYLOAD_REGISTRY.register(StreamReaderPayload, StreamReader)\n# try_last for giving a chance to more specialized async interables like\n# multidict.BodyPartReaderPayload override the default\nPAYLOAD_REGISTRY.register(AsyncIterablePayload, AsyncIterable, order=Order.try_last)\n", "path": "aiohttp/payload.py"}]} |
gh_patches_debug_1075 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-3879 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
users should not be able to embed video in their idea
**URL:** https://meinberlin-stage.liqd.net/mapideas/create/module/brainstorming-mit-kartenfunktion-36/
**user:** registered user
**expected behaviour:** should not be able to embed video
**behaviour:** is able to embed video in idea form
**important screensize:**
**device & browser:**
**Comment/Question:** we should not allow this also because it may look crap in frontend if also a picture has been uploaded. Don't know where this came from but it is not on prod.
Screenshot?


--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/config/settings/base.py`
Content:
```
1 """
2 Django settings for meinberlin project.
3
4 Generated by 'django-admin startproject' using Django 1.8.17.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.8/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/1.8/ref/settings/
11 """
12
13 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
14 import os
15
16 from django.utils.translation import ugettext_lazy as _
17
18 CONFIG_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
19 PROJECT_DIR = os.path.dirname(CONFIG_DIR)
20 BASE_DIR = os.path.dirname(PROJECT_DIR)
21
22 # General settings
23
24 CONTACT_EMAIL = '[email protected]'
25 SUPERVISOR_EMAIL = '[email protected]'
26 TRACKING_ENABLED = False
27
28 # Application definition
29
30 INSTALLED_APPS = (
31 'django.contrib.sites',
32 'django.contrib.admin',
33 'django.contrib.auth',
34 'django.contrib.contenttypes',
35 'django.contrib.sessions',
36 'django.contrib.messages',
37 'django.contrib.staticfiles',
38 'django.contrib.humanize',
39 'django.contrib.sitemaps',
40
41 'wagtail.contrib.forms',
42 'wagtail.contrib.redirects',
43 'wagtail.contrib.settings',
44 'wagtail.embeds',
45 'wagtail.sites',
46 'wagtail.users',
47 'wagtail.snippets',
48 'wagtail.documents',
49 'wagtail.images',
50 'wagtail.search',
51 'wagtail.admin',
52 'wagtail.core',
53 'wagtail.contrib.styleguide',
54
55 'taggit', # wagtail dependency
56 'widget_tweaks',
57 'rest_framework',
58 'allauth',
59 'allauth.account',
60 'allauth.socialaccount',
61 'rules.apps.AutodiscoverRulesConfig',
62 'easy_thumbnails',
63 'ckeditor',
64 'ckeditor_uploader',
65 'capture_tag',
66 'background_task',
67
68 'adhocracy4.actions',
69 'adhocracy4.administrative_districts',
70 'adhocracy4.categories',
71 'adhocracy4.ckeditor',
72 'adhocracy4.comments',
73 'adhocracy4.dashboard',
74 'adhocracy4.filters',
75 'adhocracy4.follows',
76 'adhocracy4.forms',
77 'adhocracy4.images',
78 'adhocracy4.labels',
79 'adhocracy4.maps',
80 'adhocracy4.modules',
81 'adhocracy4.organisations',
82 'adhocracy4.phases',
83 'adhocracy4.polls',
84 'adhocracy4.projects',
85 'adhocracy4.ratings',
86 'adhocracy4.reports',
87 'adhocracy4.rules',
88
89 # General components that define models or helpers
90 'meinberlin.apps.actions',
91 'meinberlin.apps.captcha',
92 'meinberlin.apps.cms',
93 'meinberlin.apps.contrib',
94 'meinberlin.apps.likes',
95 'meinberlin.apps.livequestions',
96 'meinberlin.apps.maps',
97 'meinberlin.apps.moderatorfeedback',
98 'meinberlin.apps.moderatorremark',
99 'meinberlin.apps.notifications',
100 'meinberlin.apps.organisations',
101 'meinberlin.apps.users',
102
103 # General apps containing views
104 'meinberlin.apps.account',
105 'meinberlin.apps.adminlog',
106 'meinberlin.apps.dashboard',
107 'meinberlin.apps.embed',
108 'meinberlin.apps.exports',
109 'meinberlin.apps.initiators',
110 'meinberlin.apps.newsletters',
111 'meinberlin.apps.offlineevents',
112 'meinberlin.apps.plans',
113 'meinberlin.apps.platformemails',
114
115 # Apps defining phases
116 'meinberlin.apps.activities',
117 'meinberlin.apps.bplan',
118 'meinberlin.apps.budgeting',
119 'meinberlin.apps.documents',
120 'meinberlin.apps.extprojects',
121 'meinberlin.apps.ideas',
122 'meinberlin.apps.kiezkasse',
123 'meinberlin.apps.mapideas',
124 'meinberlin.apps.maptopicprio',
125 'meinberlin.apps.projectcontainers',
126 'meinberlin.apps.topicprio',
127
128 # Apps overwriting and adding to a4
129 'meinberlin.apps.polls',
130 'meinberlin.apps.projects',
131 )
132
133 MIDDLEWARE = (
134 'django.middleware.security.SecurityMiddleware',
135 'whitenoise.middleware.WhiteNoiseMiddleware',
136 'django.middleware.clickjacking.XFrameOptionsMiddleware',
137 'django.middleware.csrf.CsrfViewMiddleware',
138 'csp.middleware.CSPMiddleware',
139 'django_cloudflare_push.middleware.push_middleware',
140 'django.contrib.sessions.middleware.SessionMiddleware',
141 'django.middleware.common.CommonMiddleware',
142 'django.contrib.auth.middleware.AuthenticationMiddleware',
143 'django.contrib.messages.middleware.MessageMiddleware',
144
145 'wagtail.contrib.redirects.middleware.RedirectMiddleware',
146
147 'meinberlin.apps.embed.middleware.AjaxPathMiddleware'
148 )
149
150 SITE_ID = 1
151
152 ROOT_URLCONF = 'meinberlin.config.urls'
153
154 LOCALE_PATHS = [os.path.join(BASE_DIR, 'locale')]
155
156 TEMPLATES = [
157 {
158 'BACKEND': 'django.template.backends.django.DjangoTemplates',
159 'DIRS': [
160 os.path.join(PROJECT_DIR, 'templates'),
161 ],
162 'APP_DIRS': True,
163 'OPTIONS': {
164 'context_processors': [
165 'django.template.context_processors.debug',
166 'django.template.context_processors.request',
167 'django.contrib.auth.context_processors.auth',
168 'django.contrib.messages.context_processors.messages',
169 'wagtail.contrib.settings.context_processors.settings',
170 ],
171 },
172 },
173 ]
174
175 WSGI_APPLICATION = 'meinberlin.config.wsgi.application'
176
177
178 # Database
179 # https://docs.djangoproject.com/en/1.8/ref/settings/#databases
180
181 DATABASES = {
182 'default': {
183 'ENGINE': 'django.db.backends.sqlite3',
184 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
185 'TEST': {
186 'NAME': os.path.join(BASE_DIR, 'test_db.sqlite3'),
187 }
188 }
189 }
190
191
192 # Internationalization
193 # https://docs.djangoproject.com/en/1.8/topics/i18n/
194
195 LANGUAGE_CODE = 'de-DE'
196
197 # The default language is used for emails and strings
198 # that are stored translated to the database.
199 DEFAULT_LANGUAGE = 'de'
200
201 TIME_ZONE = 'Europe/Berlin'
202
203 USE_I18N = True
204
205 USE_L10N = True
206
207 USE_TZ = True
208
209
210 # Static files (CSS, JavaScript, Images)
211 # https://docs.djangoproject.com/en/1.8/howto/static-files/
212
213 STATICFILES_DIRS = [
214 os.path.join(PROJECT_DIR, 'static'),
215 ]
216
217 STATIC_ROOT = os.path.join(BASE_DIR, 'static')
218 STATIC_URL = '/static/'
219
220 MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
221 MEDIA_URL = '/media/'
222
223 IMAGE_ALIASES = {
224 '*': {
225 'max_size': 5 * 10**6,
226 'fileformats': ('image/png', 'image/jpeg', 'image/gif')
227 },
228 'heroimage': {'min_resolution': (1500, 500)},
229 'tileimage': {'min_resolution': (500, 300)},
230 'logo': {'min_resolution': (200, 50)},
231 'avatar': {'min_resolution': (200, 200)},
232 'idea_image': {'min_resolution': (600, 400)},
233 'plan_image': {'min_resolution': (600, 400)},
234 }
235
236 THUMBNAIL_ALIASES = {
237 '': {
238 'heroimage': {'size': (1500, 500)},
239 'project_thumbnail': {'size': (520, 330)},
240 'logo': {'size': (160, 160), 'background': 'white'},
241 'item_image': {'size': (330, 0), 'crop': 'scale'},
242 'map_thumbnail': {'size': (200, 200), 'crop': 'smart'},
243 'project_tile': {'size': (500, 500)}
244 }
245 }
246
247 ALLOWED_UPLOAD_IMAGES = ('png', 'jpeg', 'gif')
248
249
250 # Wagtail settings
251
252 WAGTAIL_SITE_NAME = 'meinBerlin'
253 WAGTAILIMAGES_IMAGE_MODEL = 'meinberlin_cms.CustomImage'
254
255 # Base URL to use when referring to full URLs within the Wagtail admin backend -
256 # e.g. in notification emails. Don't include '/admin' or a trailing slash
257 BASE_URL = 'http://localhost:8000'
258
259 # Authentication
260
261 AUTH_USER_MODEL = 'meinberlin_users.User'
262
263 AUTHENTICATION_BACKENDS = (
264 'rules.permissions.ObjectPermissionBackend',
265 'django.contrib.auth.backends.ModelBackend',
266 'allauth.account.auth_backends.AuthenticationBackend',
267 )
268
269 ACCOUNT_ADAPTER = 'meinberlin.apps.users.adapters.AccountAdapter'
270 ACCOUNT_AUTHENTICATION_METHOD = 'username_email'
271 ACCOUNT_EMAIL_CONFIRMATION_EXPIRE_DAYS = 3
272 ACCOUNT_EMAIL_REQUIRED = True
273 ACCOUNT_EMAIL_VERIFICATION = 'mandatory'
274 ACCOUNT_FORMS = {'signup': 'meinberlin.apps.users.forms.TermsSignupForm'}
275 ACCOUNT_LOGIN_ATTEMPTS_LIMIT = 10
276 ACCOUNT_LOGIN_ATTEMPTS_TIMEOUT = 300 # seconds
277 ACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True
278 ACCOUNT_LOGIN_ON_PASSWORD_RESET = True
279 ACCOUNT_USERNAME_REQUIRED = True
280 SOCIALACCOUNT_AUTO_SIGNUP = False
281 SOCIALACCOUNT_EMAIL_VERIFICATION = 'none'
282 SOCIALACCOUNT_FORMS = {'signup': 'meinberlin.apps.users.forms.SocialTermsSignupForm'}
283 SOCIALACCOUNT_QUERY_EMAIL = True
284 SESSION_COOKIE_SAMESITE = None # This is currently needed for servicekonto account connection
285
286 LOGIN_URL = 'account_login'
287 LOGIN_REDIRECT_URL = '/'
288
289 EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
290
291 PASSWORD_HASHERS = [
292 'django.contrib.auth.hashers.PBKDF2PasswordHasher',
293 'django.contrib.auth.hashers.BCryptPasswordHasher', # a3
294 'meinberlin.apps.users.hashers.A2PasswordHasher',
295 ]
296
297 # captcha
298 CAPTCHA_URL = u'https://meinberlin-captcha.liqd.net/api.php'
299
300 # ckeditor
301
302 CKEDITOR_UPLOAD_PATH = 'uploads/'
303 CKEDITOR_RESTRICT_BY_USER = 'username'
304 CKEDITOR_ALLOW_NONIMAGE_FILES = True
305
306 CKEDITOR_CONFIGS = {
307 'default': {
308 'width': '100%',
309 'title': _('Rich text editor'),
310 'toolbar': 'Custom',
311 'toolbar_Custom': [
312 ['Bold', 'Italic', 'Underline'],
313 ['NumberedList', 'BulletedList'],
314 ['Link', 'Unlink'],
315 ['Embed', 'EmbedBase']
316 ],
317 'removePlugins': 'stylesheetparser',
318 'extraAllowedContent': 'iframe[*]',
319 'extraPlugins': ','.join(['embed', 'embedbase']),
320 },
321 'image-editor': {
322 'width': '100%',
323 'title': _('Rich text editor'),
324 'toolbar': 'Custom',
325 'toolbar_Custom': [
326 ['Bold', 'Italic', 'Underline'],
327 ['Image'],
328 ['NumberedList', 'BulletedList'],
329 ['Link', 'Unlink'],
330 ],
331 },
332 'collapsible-image-editor': {
333 'width': '100%',
334 'title': _('Rich text editor'),
335 'toolbar': 'Custom',
336 'toolbar_Custom': [
337 ['Bold', 'Italic', 'Underline'],
338 ['Image'],
339 ['NumberedList', 'BulletedList'],
340 ['Link', 'Unlink'],
341 ['CollapsibleItem'],
342 ['Embed', 'EmbedBase']
343 ],
344 'removePlugins': 'stylesheetparser',
345 'extraAllowedContent': 'iframe[*]; div[*]',
346 },
347 'video-editor': {
348 'width': '100%',
349 'title': _('Rich text editor'),
350 'toolbar': 'Custom',
351 'toolbar_Custom': [
352 ['Embed', 'EmbedBase']
353 ],
354 'removePlugins': 'stylesheetparser',
355 'extraAllowedContent': 'iframe[*]; div[*]',
356 }
357 }
358
359 BLEACH_LIST = {
360 'default': {
361 'tags': ['p', 'strong', 'em', 'u', 'ol', 'li', 'ul', 'a', 'img',
362 'iframe', 'div'],
363 'attributes': {
364 'a': ['href', 'rel', 'target'],
365 'img': ['src', 'alt', 'style'],
366 'div': ['class'],
367 'iframe': ['src', 'alt', 'style']
368 },
369 },
370 'image-editor': {
371 'tags': ['p', 'strong', 'em', 'u', 'ol', 'li', 'ul', 'a', 'img'],
372 'attributes': {
373 'a': ['href', 'rel', 'target'],
374 'img': ['src', 'alt', 'style']
375 },
376 'styles': [
377 'float',
378 'margin',
379 'padding',
380 'width',
381 'height',
382 'margin-bottom',
383 'margin-top',
384 'margin-left',
385 'margin-right',
386 ],
387 },
388 'collapsible-image-editor': {
389 'tags': ['p', 'strong', 'em', 'u', 'ol', 'li', 'ul', 'a', 'img',
390 'div', 'iframe'],
391 'attributes': {
392 'a': ['href', 'rel', 'target'],
393 'img': ['src', 'alt', 'style'],
394 'div': ['class'],
395 'iframe': ['src', 'alt', 'style']
396 },
397 'styles': [
398 'float',
399 'margin',
400 'padding',
401 'width',
402 'height',
403 'margin-bottom',
404 'margin-top',
405 'margin-left',
406 'margin-right',
407 ],
408 },
409 'video-editor': {
410 'tags': ['a', 'img', 'div', 'iframe'],
411 'attributes': {
412 'a': ['href', 'rel', 'target'],
413 'img': ['src', 'alt', 'style'],
414 'div': ['class'],
415 'iframe': ['src', 'alt', 'style']
416 }
417 }
418 }
419
420
421 # adhocracy4
422
423 A4_ORGANISATIONS_MODEL = 'meinberlin_organisations.Organisation'
424
425 A4_RATEABLES = (
426 ('a4comments', 'comment'),
427 ('meinberlin_ideas', 'idea'),
428 ('meinberlin_mapideas', 'mapidea'),
429 ('meinberlin_budgeting', 'proposal'),
430 ('meinberlin_kiezkasse', 'proposal'),
431 ('meinberlin_topicprio', 'topic'),
432 ('meinberlin_maptopicprio', 'maptopic'),
433 )
434
435 A4_COMMENTABLES = (
436 ('a4comments', 'comment'),
437 ('a4polls', 'poll'),
438 ('meinberlin_ideas', 'idea'),
439 ('meinberlin_mapideas', 'mapidea'),
440 ('meinberlin_budgeting', 'proposal'),
441 ('meinberlin_kiezkasse', 'proposal'),
442 ('meinberlin_topicprio', 'topic'),
443 ('meinberlin_maptopicprio', 'maptopic'),
444 ('meinberlin_documents', 'chapter'),
445 ('meinberlin_documents', 'paragraph'),
446 )
447
448 A4_REPORTABLES = (
449 ('a4comments', 'comment'),
450 ('meinberlin_ideas', 'idea'),
451 ('meinberlin_mapideas', 'mapidea'),
452 ('meinberlin_budgeting', 'proposal'),
453 ('meinberlin_kiezkasse', 'proposal'),
454 )
455
456 A4_ACTIONABLES = (
457 ('a4comments', 'comment'),
458 ('meinberlin_ideas', 'idea'),
459 ('meinberlin_mapideas', 'mapidea'),
460 ('meinberlin_budgeting', 'proposal'),
461 ('meinberlin_kiezkasse', 'proposal'),
462 )
463
464 A4_AUTO_FOLLOWABLES = (
465 # Disabled to keep current behaviour: the auto follow functionality did
466 # not work until 2018/03/21 due to a adhocracy4 bug
467 # ('a4comments', 'comment'),
468 # ('meinberlin_ideas', 'idea'),
469 # ('meinberlin_mapideas', 'mapidea'),
470 # ('meinberlin_budgeting', 'proposal'),
471 # ('meinberlin_kiezkasse', 'proposal'),
472 )
473
474 A4_CATEGORIZABLE = (
475 ('meinberlin_ideas', 'idea'),
476 ('meinberlin_mapideas', 'mapidea'),
477 ('meinberlin_budgeting', 'proposal'),
478 ('meinberlin_kiezkasse', 'proposal'),
479 ('meinberlin_topicprio', 'topic'),
480 ('meinberlin_maptopicprio', 'maptopic'),
481 )
482
483 A4_LABELS_ADDABLE = (
484 ('meinberlin_ideas', 'idea'),
485 ('meinberlin_mapideas', 'mapidea'),
486 ('meinberlin_budgeting', 'proposal'),
487 ('meinberlin_kiezkasse', 'proposal'),
488 ('meinberlin_topicprio', 'topic'),
489 ('meinberlin_maptopicprio', 'maptopic'),
490 )
491
492 A4_CATEGORY_ICONS = (
493 ('', _('Pin without icon')),
494 ('diamant', _('Diamond')),
495 ('dreieck_oben', _('Triangle up')),
496 ('dreieck_unten', _('Triangle down')),
497 ('ellipse', _('Ellipse')),
498 ('halbkreis', _('Semi circle')),
499 ('hexagon', _('Hexagon')),
500 ('parallelogramm', _('Rhomboid')),
501 ('pentagramm', _('Star')),
502 ('quadrat', _('Square')),
503 ('raute', _('Octothorpe')),
504 ('rechtecke', _('Rectangle')),
505 ('ring', _('Circle')),
506 ('rw_dreieck', _('Right triangle')),
507 ('zickzack', _('Zigzag'))
508 )
509
510 A4_USE_VECTORMAP = True
511 A4_MAP_BASEURL = 'https://maps.berlinonline.de/styles/klokantech-basic/style.json'
512 A4_OPENMAPTILES_TOKEN = '9aVUrssbx7PKNUKo3WtXY6MqETI6Q336u5D142QS'
513 A4_MAPBOX_TOKEN = ''
514
515 A4_PROJECT_TOPICS = (
516 ('ANT', _('Anti-discrimination')),
517 ('WOR', _('Work & economy')),
518 ('BUI', _('Building & living')),
519 ('EDU', _('Education & research')),
520 ('CHI', _('Children, youth & family')),
521 ('FIN', _('Finances')),
522 ('HEA', _('Health & sports')),
523 ('INT', _('Integration')),
524 ('CUL', _('Culture & leisure')),
525 ('NEI', _('Neighborhood & participation')),
526 ('URB', _('Urban development')),
527 ('ENV', _('Environment & public green space')),
528 ('TRA', _('Traffic'))
529 )
530
531 A4_MAP_ATTRIBUTION = '© <a href="http://openstreetmap.org/copyright">OpenStreetMap</a> contributors'
532 A4_MAP_BOUNDING_BOX = ([[52.3517, 13.8229], [52.6839, 12.9543]])
533
534 A4_DASHBOARD = {
535 'PROJECT_DASHBOARD_CLASS': 'meinberlin.apps.dashboard.TypedProjectDashboard',
536 'BLUEPRINTS': 'meinberlin.apps.dashboard.blueprints.blueprints'}
537
538 A4_ACTIONS_PHASE_ENDS_HOURS = 48
539
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/config/settings/base.py b/meinberlin/config/settings/base.py
--- a/meinberlin/config/settings/base.py
+++ b/meinberlin/config/settings/base.py
@@ -312,7 +312,6 @@
['Bold', 'Italic', 'Underline'],
['NumberedList', 'BulletedList'],
['Link', 'Unlink'],
- ['Embed', 'EmbedBase']
],
'removePlugins': 'stylesheetparser',
'extraAllowedContent': 'iframe[*]',
| {"golden_diff": "diff --git a/meinberlin/config/settings/base.py b/meinberlin/config/settings/base.py\n--- a/meinberlin/config/settings/base.py\n+++ b/meinberlin/config/settings/base.py\n@@ -312,7 +312,6 @@\n ['Bold', 'Italic', 'Underline'],\n ['NumberedList', 'BulletedList'],\n ['Link', 'Unlink'],\n- ['Embed', 'EmbedBase']\n ],\n 'removePlugins': 'stylesheetparser',\n 'extraAllowedContent': 'iframe[*]',\n", "issue": "users should not be able to embed video in their idea\n**URL:** https://meinberlin-stage.liqd.net/mapideas/create/module/brainstorming-mit-kartenfunktion-36/\r\n**user:** registered user\r\n**expected behaviour:** should not be able to embed video\r\n**behaviour:** is able to embed video in idea form\r\n**important screensize:**\r\n**device & browser:** \r\n**Comment/Question:** we should not allow this also because it may look crap in frontend if also a picture has been uploaded. Don't know where this came from but it is not on prod.\r\n\r\nScreenshot?\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nDjango settings for meinberlin project.\n\nGenerated by 'django-admin startproject' using Django 1.8.17.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.8/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.8/ref/settings/\n\"\"\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nimport os\n\nfrom django.utils.translation import ugettext_lazy as _\n\nCONFIG_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nPROJECT_DIR = os.path.dirname(CONFIG_DIR)\nBASE_DIR = os.path.dirname(PROJECT_DIR)\n\n# General settings\n\nCONTACT_EMAIL = '[email protected]'\nSUPERVISOR_EMAIL = '[email protected]'\nTRACKING_ENABLED = False\n\n# Application definition\n\nINSTALLED_APPS = (\n 'django.contrib.sites',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.humanize',\n 'django.contrib.sitemaps',\n\n 'wagtail.contrib.forms',\n 'wagtail.contrib.redirects',\n 'wagtail.contrib.settings',\n 'wagtail.embeds',\n 'wagtail.sites',\n 'wagtail.users',\n 'wagtail.snippets',\n 'wagtail.documents',\n 'wagtail.images',\n 'wagtail.search',\n 'wagtail.admin',\n 'wagtail.core',\n 'wagtail.contrib.styleguide',\n\n 'taggit', # wagtail dependency\n 'widget_tweaks',\n 'rest_framework',\n 'allauth',\n 'allauth.account',\n 'allauth.socialaccount',\n 'rules.apps.AutodiscoverRulesConfig',\n 'easy_thumbnails',\n 'ckeditor',\n 'ckeditor_uploader',\n 'capture_tag',\n 'background_task',\n\n 'adhocracy4.actions',\n 'adhocracy4.administrative_districts',\n 'adhocracy4.categories',\n 'adhocracy4.ckeditor',\n 'adhocracy4.comments',\n 'adhocracy4.dashboard',\n 'adhocracy4.filters',\n 'adhocracy4.follows',\n 'adhocracy4.forms',\n 'adhocracy4.images',\n 'adhocracy4.labels',\n 'adhocracy4.maps',\n 'adhocracy4.modules',\n 'adhocracy4.organisations',\n 'adhocracy4.phases',\n 'adhocracy4.polls',\n 'adhocracy4.projects',\n 'adhocracy4.ratings',\n 'adhocracy4.reports',\n 'adhocracy4.rules',\n\n # General components that define models or helpers\n 'meinberlin.apps.actions',\n 'meinberlin.apps.captcha',\n 'meinberlin.apps.cms',\n 'meinberlin.apps.contrib',\n 'meinberlin.apps.likes',\n 'meinberlin.apps.livequestions',\n 'meinberlin.apps.maps',\n 'meinberlin.apps.moderatorfeedback',\n 'meinberlin.apps.moderatorremark',\n 'meinberlin.apps.notifications',\n 
'meinberlin.apps.organisations',\n 'meinberlin.apps.users',\n\n # General apps containing views\n 'meinberlin.apps.account',\n 'meinberlin.apps.adminlog',\n 'meinberlin.apps.dashboard',\n 'meinberlin.apps.embed',\n 'meinberlin.apps.exports',\n 'meinberlin.apps.initiators',\n 'meinberlin.apps.newsletters',\n 'meinberlin.apps.offlineevents',\n 'meinberlin.apps.plans',\n 'meinberlin.apps.platformemails',\n\n # Apps defining phases\n 'meinberlin.apps.activities',\n 'meinberlin.apps.bplan',\n 'meinberlin.apps.budgeting',\n 'meinberlin.apps.documents',\n 'meinberlin.apps.extprojects',\n 'meinberlin.apps.ideas',\n 'meinberlin.apps.kiezkasse',\n 'meinberlin.apps.mapideas',\n 'meinberlin.apps.maptopicprio',\n 'meinberlin.apps.projectcontainers',\n 'meinberlin.apps.topicprio',\n\n # Apps overwriting and adding to a4\n 'meinberlin.apps.polls',\n 'meinberlin.apps.projects',\n)\n\nMIDDLEWARE = (\n 'django.middleware.security.SecurityMiddleware',\n 'whitenoise.middleware.WhiteNoiseMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'csp.middleware.CSPMiddleware',\n 'django_cloudflare_push.middleware.push_middleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n\n 'wagtail.contrib.redirects.middleware.RedirectMiddleware',\n\n 'meinberlin.apps.embed.middleware.AjaxPathMiddleware'\n)\n\nSITE_ID = 1\n\nROOT_URLCONF = 'meinberlin.config.urls'\n\nLOCALE_PATHS = [os.path.join(BASE_DIR, 'locale')]\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [\n os.path.join(PROJECT_DIR, 'templates'),\n ],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n 'wagtail.contrib.settings.context_processors.settings',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'meinberlin.config.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.8/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),\n 'TEST': {\n 'NAME': os.path.join(BASE_DIR, 'test_db.sqlite3'),\n }\n }\n}\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.8/topics/i18n/\n\nLANGUAGE_CODE = 'de-DE'\n\n# The default language is used for emails and strings\n# that are stored translated to the database.\nDEFAULT_LANGUAGE = 'de'\n\nTIME_ZONE = 'Europe/Berlin'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.8/howto/static-files/\n\nSTATICFILES_DIRS = [\n os.path.join(PROJECT_DIR, 'static'),\n]\n\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\nSTATIC_URL = '/static/'\n\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nMEDIA_URL = '/media/'\n\nIMAGE_ALIASES = {\n '*': {\n 'max_size': 5 * 10**6,\n 'fileformats': ('image/png', 'image/jpeg', 'image/gif')\n },\n 'heroimage': {'min_resolution': (1500, 500)},\n 'tileimage': {'min_resolution': (500, 300)},\n 'logo': {'min_resolution': (200, 50)},\n 'avatar': {'min_resolution': (200, 200)},\n 'idea_image': {'min_resolution': (600, 400)},\n 'plan_image': {'min_resolution': (600, 400)},\n}\n\nTHUMBNAIL_ALIASES = {\n '': {\n 
'heroimage': {'size': (1500, 500)},\n 'project_thumbnail': {'size': (520, 330)},\n 'logo': {'size': (160, 160), 'background': 'white'},\n 'item_image': {'size': (330, 0), 'crop': 'scale'},\n 'map_thumbnail': {'size': (200, 200), 'crop': 'smart'},\n 'project_tile': {'size': (500, 500)}\n }\n}\n\nALLOWED_UPLOAD_IMAGES = ('png', 'jpeg', 'gif')\n\n\n# Wagtail settings\n\nWAGTAIL_SITE_NAME = 'meinBerlin'\nWAGTAILIMAGES_IMAGE_MODEL = 'meinberlin_cms.CustomImage'\n\n# Base URL to use when referring to full URLs within the Wagtail admin backend -\n# e.g. in notification emails. Don't include '/admin' or a trailing slash\nBASE_URL = 'http://localhost:8000'\n\n# Authentication\n\nAUTH_USER_MODEL = 'meinberlin_users.User'\n\nAUTHENTICATION_BACKENDS = (\n 'rules.permissions.ObjectPermissionBackend',\n 'django.contrib.auth.backends.ModelBackend',\n 'allauth.account.auth_backends.AuthenticationBackend',\n)\n\nACCOUNT_ADAPTER = 'meinberlin.apps.users.adapters.AccountAdapter'\nACCOUNT_AUTHENTICATION_METHOD = 'username_email'\nACCOUNT_EMAIL_CONFIRMATION_EXPIRE_DAYS = 3\nACCOUNT_EMAIL_REQUIRED = True\nACCOUNT_EMAIL_VERIFICATION = 'mandatory'\nACCOUNT_FORMS = {'signup': 'meinberlin.apps.users.forms.TermsSignupForm'}\nACCOUNT_LOGIN_ATTEMPTS_LIMIT = 10\nACCOUNT_LOGIN_ATTEMPTS_TIMEOUT = 300 # seconds\nACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True\nACCOUNT_LOGIN_ON_PASSWORD_RESET = True\nACCOUNT_USERNAME_REQUIRED = True\nSOCIALACCOUNT_AUTO_SIGNUP = False\nSOCIALACCOUNT_EMAIL_VERIFICATION = 'none'\nSOCIALACCOUNT_FORMS = {'signup': 'meinberlin.apps.users.forms.SocialTermsSignupForm'}\nSOCIALACCOUNT_QUERY_EMAIL = True\nSESSION_COOKIE_SAMESITE = None # This is currently needed for servicekonto account connection\n\nLOGIN_URL = 'account_login'\nLOGIN_REDIRECT_URL = '/'\n\nEMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n\nPASSWORD_HASHERS = [\n 'django.contrib.auth.hashers.PBKDF2PasswordHasher',\n 'django.contrib.auth.hashers.BCryptPasswordHasher', # a3\n 'meinberlin.apps.users.hashers.A2PasswordHasher',\n]\n\n# captcha\nCAPTCHA_URL = u'https://meinberlin-captcha.liqd.net/api.php'\n\n# ckeditor\n\nCKEDITOR_UPLOAD_PATH = 'uploads/'\nCKEDITOR_RESTRICT_BY_USER = 'username'\nCKEDITOR_ALLOW_NONIMAGE_FILES = True\n\nCKEDITOR_CONFIGS = {\n 'default': {\n 'width': '100%',\n 'title': _('Rich text editor'),\n 'toolbar': 'Custom',\n 'toolbar_Custom': [\n ['Bold', 'Italic', 'Underline'],\n ['NumberedList', 'BulletedList'],\n ['Link', 'Unlink'],\n ['Embed', 'EmbedBase']\n ],\n 'removePlugins': 'stylesheetparser',\n 'extraAllowedContent': 'iframe[*]',\n 'extraPlugins': ','.join(['embed', 'embedbase']),\n },\n 'image-editor': {\n 'width': '100%',\n 'title': _('Rich text editor'),\n 'toolbar': 'Custom',\n 'toolbar_Custom': [\n ['Bold', 'Italic', 'Underline'],\n ['Image'],\n ['NumberedList', 'BulletedList'],\n ['Link', 'Unlink'],\n ],\n },\n 'collapsible-image-editor': {\n 'width': '100%',\n 'title': _('Rich text editor'),\n 'toolbar': 'Custom',\n 'toolbar_Custom': [\n ['Bold', 'Italic', 'Underline'],\n ['Image'],\n ['NumberedList', 'BulletedList'],\n ['Link', 'Unlink'],\n ['CollapsibleItem'],\n ['Embed', 'EmbedBase']\n ],\n 'removePlugins': 'stylesheetparser',\n 'extraAllowedContent': 'iframe[*]; div[*]',\n },\n 'video-editor': {\n 'width': '100%',\n 'title': _('Rich text editor'),\n 'toolbar': 'Custom',\n 'toolbar_Custom': [\n ['Embed', 'EmbedBase']\n ],\n 'removePlugins': 'stylesheetparser',\n 'extraAllowedContent': 'iframe[*]; div[*]',\n }\n}\n\nBLEACH_LIST = {\n 'default': {\n 'tags': ['p', 'strong', 'em', 'u', 
'ol', 'li', 'ul', 'a', 'img',\n 'iframe', 'div'],\n 'attributes': {\n 'a': ['href', 'rel', 'target'],\n 'img': ['src', 'alt', 'style'],\n 'div': ['class'],\n 'iframe': ['src', 'alt', 'style']\n },\n },\n 'image-editor': {\n 'tags': ['p', 'strong', 'em', 'u', 'ol', 'li', 'ul', 'a', 'img'],\n 'attributes': {\n 'a': ['href', 'rel', 'target'],\n 'img': ['src', 'alt', 'style']\n },\n 'styles': [\n 'float',\n 'margin',\n 'padding',\n 'width',\n 'height',\n 'margin-bottom',\n 'margin-top',\n 'margin-left',\n 'margin-right',\n ],\n },\n 'collapsible-image-editor': {\n 'tags': ['p', 'strong', 'em', 'u', 'ol', 'li', 'ul', 'a', 'img',\n 'div', 'iframe'],\n 'attributes': {\n 'a': ['href', 'rel', 'target'],\n 'img': ['src', 'alt', 'style'],\n 'div': ['class'],\n 'iframe': ['src', 'alt', 'style']\n },\n 'styles': [\n 'float',\n 'margin',\n 'padding',\n 'width',\n 'height',\n 'margin-bottom',\n 'margin-top',\n 'margin-left',\n 'margin-right',\n ],\n },\n 'video-editor': {\n 'tags': ['a', 'img', 'div', 'iframe'],\n 'attributes': {\n 'a': ['href', 'rel', 'target'],\n 'img': ['src', 'alt', 'style'],\n 'div': ['class'],\n 'iframe': ['src', 'alt', 'style']\n }\n }\n}\n\n\n# adhocracy4\n\nA4_ORGANISATIONS_MODEL = 'meinberlin_organisations.Organisation'\n\nA4_RATEABLES = (\n ('a4comments', 'comment'),\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n ('meinberlin_topicprio', 'topic'),\n ('meinberlin_maptopicprio', 'maptopic'),\n)\n\nA4_COMMENTABLES = (\n ('a4comments', 'comment'),\n ('a4polls', 'poll'),\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n ('meinberlin_topicprio', 'topic'),\n ('meinberlin_maptopicprio', 'maptopic'),\n ('meinberlin_documents', 'chapter'),\n ('meinberlin_documents', 'paragraph'),\n)\n\nA4_REPORTABLES = (\n ('a4comments', 'comment'),\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n)\n\nA4_ACTIONABLES = (\n ('a4comments', 'comment'),\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n)\n\nA4_AUTO_FOLLOWABLES = (\n # Disabled to keep current behaviour: the auto follow functionality did\n # not work until 2018/03/21 due to a adhocracy4 bug\n # ('a4comments', 'comment'),\n # ('meinberlin_ideas', 'idea'),\n # ('meinberlin_mapideas', 'mapidea'),\n # ('meinberlin_budgeting', 'proposal'),\n # ('meinberlin_kiezkasse', 'proposal'),\n)\n\nA4_CATEGORIZABLE = (\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n ('meinberlin_topicprio', 'topic'),\n ('meinberlin_maptopicprio', 'maptopic'),\n)\n\nA4_LABELS_ADDABLE = (\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n ('meinberlin_topicprio', 'topic'),\n ('meinberlin_maptopicprio', 'maptopic'),\n)\n\nA4_CATEGORY_ICONS = (\n ('', _('Pin without icon')),\n ('diamant', _('Diamond')),\n ('dreieck_oben', _('Triangle up')),\n ('dreieck_unten', _('Triangle down')),\n ('ellipse', _('Ellipse')),\n ('halbkreis', _('Semi circle')),\n ('hexagon', _('Hexagon')),\n ('parallelogramm', _('Rhomboid')),\n ('pentagramm', _('Star')),\n ('quadrat', _('Square')),\n ('raute', 
_('Octothorpe')),\n ('rechtecke', _('Rectangle')),\n ('ring', _('Circle')),\n ('rw_dreieck', _('Right triangle')),\n ('zickzack', _('Zigzag'))\n)\n\nA4_USE_VECTORMAP = True\nA4_MAP_BASEURL = 'https://maps.berlinonline.de/styles/klokantech-basic/style.json'\nA4_OPENMAPTILES_TOKEN = '9aVUrssbx7PKNUKo3WtXY6MqETI6Q336u5D142QS'\nA4_MAPBOX_TOKEN = ''\n\nA4_PROJECT_TOPICS = (\n ('ANT', _('Anti-discrimination')),\n ('WOR', _('Work & economy')),\n ('BUI', _('Building & living')),\n ('EDU', _('Education & research')),\n ('CHI', _('Children, youth & family')),\n ('FIN', _('Finances')),\n ('HEA', _('Health & sports')),\n ('INT', _('Integration')),\n ('CUL', _('Culture & leisure')),\n ('NEI', _('Neighborhood & participation')),\n ('URB', _('Urban development')),\n ('ENV', _('Environment & public green space')),\n ('TRA', _('Traffic'))\n)\n\nA4_MAP_ATTRIBUTION = '© <a href=\"http://openstreetmap.org/copyright\">OpenStreetMap</a> contributors'\nA4_MAP_BOUNDING_BOX = ([[52.3517, 13.8229], [52.6839, 12.9543]])\n\nA4_DASHBOARD = {\n 'PROJECT_DASHBOARD_CLASS': 'meinberlin.apps.dashboard.TypedProjectDashboard',\n 'BLUEPRINTS': 'meinberlin.apps.dashboard.blueprints.blueprints'}\n\nA4_ACTIONS_PHASE_ENDS_HOURS = 48\n", "path": "meinberlin/config/settings/base.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for meinberlin project.\n\nGenerated by 'django-admin startproject' using Django 1.8.17.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.8/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.8/ref/settings/\n\"\"\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nimport os\n\nfrom django.utils.translation import ugettext_lazy as _\n\nCONFIG_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nPROJECT_DIR = os.path.dirname(CONFIG_DIR)\nBASE_DIR = os.path.dirname(PROJECT_DIR)\n\n# General settings\n\nCONTACT_EMAIL = '[email protected]'\nSUPERVISOR_EMAIL = '[email protected]'\nTRACKING_ENABLED = False\n\n# Application definition\n\nINSTALLED_APPS = (\n 'django.contrib.sites',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.humanize',\n 'django.contrib.sitemaps',\n\n 'wagtail.contrib.forms',\n 'wagtail.contrib.redirects',\n 'wagtail.contrib.settings',\n 'wagtail.embeds',\n 'wagtail.sites',\n 'wagtail.users',\n 'wagtail.snippets',\n 'wagtail.documents',\n 'wagtail.images',\n 'wagtail.search',\n 'wagtail.admin',\n 'wagtail.core',\n 'wagtail.contrib.styleguide',\n\n 'taggit', # wagtail dependency\n 'widget_tweaks',\n 'rest_framework',\n 'allauth',\n 'allauth.account',\n 'allauth.socialaccount',\n 'rules.apps.AutodiscoverRulesConfig',\n 'easy_thumbnails',\n 'ckeditor',\n 'ckeditor_uploader',\n 'capture_tag',\n 'background_task',\n\n 'adhocracy4.actions',\n 'adhocracy4.administrative_districts',\n 'adhocracy4.categories',\n 'adhocracy4.ckeditor',\n 'adhocracy4.comments',\n 'adhocracy4.dashboard',\n 'adhocracy4.filters',\n 'adhocracy4.follows',\n 'adhocracy4.forms',\n 'adhocracy4.images',\n 'adhocracy4.labels',\n 'adhocracy4.maps',\n 'adhocracy4.modules',\n 'adhocracy4.organisations',\n 'adhocracy4.phases',\n 'adhocracy4.polls',\n 'adhocracy4.projects',\n 'adhocracy4.ratings',\n 'adhocracy4.reports',\n 'adhocracy4.rules',\n\n # General components that define models or helpers\n 'meinberlin.apps.actions',\n 'meinberlin.apps.captcha',\n 
'meinberlin.apps.cms',\n 'meinberlin.apps.contrib',\n 'meinberlin.apps.likes',\n 'meinberlin.apps.livequestions',\n 'meinberlin.apps.maps',\n 'meinberlin.apps.moderatorfeedback',\n 'meinberlin.apps.moderatorremark',\n 'meinberlin.apps.notifications',\n 'meinberlin.apps.organisations',\n 'meinberlin.apps.users',\n\n # General apps containing views\n 'meinberlin.apps.account',\n 'meinberlin.apps.adminlog',\n 'meinberlin.apps.dashboard',\n 'meinberlin.apps.embed',\n 'meinberlin.apps.exports',\n 'meinberlin.apps.initiators',\n 'meinberlin.apps.newsletters',\n 'meinberlin.apps.offlineevents',\n 'meinberlin.apps.plans',\n 'meinberlin.apps.platformemails',\n\n # Apps defining phases\n 'meinberlin.apps.activities',\n 'meinberlin.apps.bplan',\n 'meinberlin.apps.budgeting',\n 'meinberlin.apps.documents',\n 'meinberlin.apps.extprojects',\n 'meinberlin.apps.ideas',\n 'meinberlin.apps.kiezkasse',\n 'meinberlin.apps.mapideas',\n 'meinberlin.apps.maptopicprio',\n 'meinberlin.apps.projectcontainers',\n 'meinberlin.apps.topicprio',\n\n # Apps overwriting and adding to a4\n 'meinberlin.apps.polls',\n 'meinberlin.apps.projects',\n)\n\nMIDDLEWARE = (\n 'django.middleware.security.SecurityMiddleware',\n 'whitenoise.middleware.WhiteNoiseMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'csp.middleware.CSPMiddleware',\n 'django_cloudflare_push.middleware.push_middleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n\n 'wagtail.contrib.redirects.middleware.RedirectMiddleware',\n\n 'meinberlin.apps.embed.middleware.AjaxPathMiddleware'\n)\n\nSITE_ID = 1\n\nROOT_URLCONF = 'meinberlin.config.urls'\n\nLOCALE_PATHS = [os.path.join(BASE_DIR, 'locale')]\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [\n os.path.join(PROJECT_DIR, 'templates'),\n ],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n 'wagtail.contrib.settings.context_processors.settings',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'meinberlin.config.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.8/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),\n 'TEST': {\n 'NAME': os.path.join(BASE_DIR, 'test_db.sqlite3'),\n }\n }\n}\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.8/topics/i18n/\n\nLANGUAGE_CODE = 'de-DE'\n\n# The default language is used for emails and strings\n# that are stored translated to the database.\nDEFAULT_LANGUAGE = 'de'\n\nTIME_ZONE = 'Europe/Berlin'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.8/howto/static-files/\n\nSTATICFILES_DIRS = [\n os.path.join(PROJECT_DIR, 'static'),\n]\n\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\nSTATIC_URL = '/static/'\n\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nMEDIA_URL = '/media/'\n\nIMAGE_ALIASES = {\n '*': {\n 'max_size': 5 * 10**6,\n 'fileformats': ('image/png', 'image/jpeg', 'image/gif')\n },\n 'heroimage': {'min_resolution': (1500, 500)},\n 'tileimage': 
{'min_resolution': (500, 300)},\n 'logo': {'min_resolution': (200, 50)},\n 'avatar': {'min_resolution': (200, 200)},\n 'idea_image': {'min_resolution': (600, 400)},\n 'plan_image': {'min_resolution': (600, 400)},\n}\n\nTHUMBNAIL_ALIASES = {\n '': {\n 'heroimage': {'size': (1500, 500)},\n 'project_thumbnail': {'size': (520, 330)},\n 'logo': {'size': (160, 160), 'background': 'white'},\n 'item_image': {'size': (330, 0), 'crop': 'scale'},\n 'map_thumbnail': {'size': (200, 200), 'crop': 'smart'},\n 'project_tile': {'size': (500, 500)}\n }\n}\n\nALLOWED_UPLOAD_IMAGES = ('png', 'jpeg', 'gif')\n\n\n# Wagtail settings\n\nWAGTAIL_SITE_NAME = 'meinBerlin'\nWAGTAILIMAGES_IMAGE_MODEL = 'meinberlin_cms.CustomImage'\n\n# Base URL to use when referring to full URLs within the Wagtail admin backend -\n# e.g. in notification emails. Don't include '/admin' or a trailing slash\nBASE_URL = 'http://localhost:8000'\n\n# Authentication\n\nAUTH_USER_MODEL = 'meinberlin_users.User'\n\nAUTHENTICATION_BACKENDS = (\n 'rules.permissions.ObjectPermissionBackend',\n 'django.contrib.auth.backends.ModelBackend',\n 'allauth.account.auth_backends.AuthenticationBackend',\n)\n\nACCOUNT_ADAPTER = 'meinberlin.apps.users.adapters.AccountAdapter'\nACCOUNT_AUTHENTICATION_METHOD = 'username_email'\nACCOUNT_EMAIL_CONFIRMATION_EXPIRE_DAYS = 3\nACCOUNT_EMAIL_REQUIRED = True\nACCOUNT_EMAIL_VERIFICATION = 'mandatory'\nACCOUNT_FORMS = {'signup': 'meinberlin.apps.users.forms.TermsSignupForm'}\nACCOUNT_LOGIN_ATTEMPTS_LIMIT = 10\nACCOUNT_LOGIN_ATTEMPTS_TIMEOUT = 300 # seconds\nACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True\nACCOUNT_LOGIN_ON_PASSWORD_RESET = True\nACCOUNT_USERNAME_REQUIRED = True\nSOCIALACCOUNT_AUTO_SIGNUP = False\nSOCIALACCOUNT_EMAIL_VERIFICATION = 'none'\nSOCIALACCOUNT_FORMS = {'signup': 'meinberlin.apps.users.forms.SocialTermsSignupForm'}\nSOCIALACCOUNT_QUERY_EMAIL = True\nSESSION_COOKIE_SAMESITE = None # This is currently needed for servicekonto account connection\n\nLOGIN_URL = 'account_login'\nLOGIN_REDIRECT_URL = '/'\n\nEMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n\nPASSWORD_HASHERS = [\n 'django.contrib.auth.hashers.PBKDF2PasswordHasher',\n 'django.contrib.auth.hashers.BCryptPasswordHasher', # a3\n 'meinberlin.apps.users.hashers.A2PasswordHasher',\n]\n\n# captcha\nCAPTCHA_URL = u'https://meinberlin-captcha.liqd.net/api.php'\n\n# ckeditor\n\nCKEDITOR_UPLOAD_PATH = 'uploads/'\nCKEDITOR_RESTRICT_BY_USER = 'username'\nCKEDITOR_ALLOW_NONIMAGE_FILES = True\n\nCKEDITOR_CONFIGS = {\n 'default': {\n 'width': '100%',\n 'title': _('Rich text editor'),\n 'toolbar': 'Custom',\n 'toolbar_Custom': [\n ['Bold', 'Italic', 'Underline'],\n ['NumberedList', 'BulletedList'],\n ['Link', 'Unlink'],\n ],\n 'removePlugins': 'stylesheetparser',\n 'extraAllowedContent': 'iframe[*]',\n 'extraPlugins': ','.join(['embed', 'embedbase']),\n },\n 'image-editor': {\n 'width': '100%',\n 'title': _('Rich text editor'),\n 'toolbar': 'Custom',\n 'toolbar_Custom': [\n ['Bold', 'Italic', 'Underline'],\n ['Image'],\n ['NumberedList', 'BulletedList'],\n ['Link', 'Unlink'],\n ],\n },\n 'collapsible-image-editor': {\n 'width': '100%',\n 'title': _('Rich text editor'),\n 'toolbar': 'Custom',\n 'toolbar_Custom': [\n ['Bold', 'Italic', 'Underline'],\n ['Image'],\n ['NumberedList', 'BulletedList'],\n ['Link', 'Unlink'],\n ['CollapsibleItem'],\n ['Embed', 'EmbedBase']\n ],\n 'removePlugins': 'stylesheetparser',\n 'extraAllowedContent': 'iframe[*]; div[*]',\n },\n 'video-editor': {\n 'width': '100%',\n 'title': _('Rich text editor'),\n 'toolbar': 
'Custom',\n 'toolbar_Custom': [\n ['Embed', 'EmbedBase']\n ],\n 'removePlugins': 'stylesheetparser',\n 'extraAllowedContent': 'iframe[*]; div[*]',\n }\n}\n\nBLEACH_LIST = {\n 'default': {\n 'tags': ['p', 'strong', 'em', 'u', 'ol', 'li', 'ul', 'a', 'img',\n 'iframe', 'div'],\n 'attributes': {\n 'a': ['href', 'rel', 'target'],\n 'img': ['src', 'alt', 'style'],\n 'div': ['class'],\n 'iframe': ['src', 'alt', 'style']\n },\n },\n 'image-editor': {\n 'tags': ['p', 'strong', 'em', 'u', 'ol', 'li', 'ul', 'a', 'img'],\n 'attributes': {\n 'a': ['href', 'rel', 'target'],\n 'img': ['src', 'alt', 'style']\n },\n 'styles': [\n 'float',\n 'margin',\n 'padding',\n 'width',\n 'height',\n 'margin-bottom',\n 'margin-top',\n 'margin-left',\n 'margin-right',\n ],\n },\n 'collapsible-image-editor': {\n 'tags': ['p', 'strong', 'em', 'u', 'ol', 'li', 'ul', 'a', 'img',\n 'div', 'iframe'],\n 'attributes': {\n 'a': ['href', 'rel', 'target'],\n 'img': ['src', 'alt', 'style'],\n 'div': ['class'],\n 'iframe': ['src', 'alt', 'style']\n },\n 'styles': [\n 'float',\n 'margin',\n 'padding',\n 'width',\n 'height',\n 'margin-bottom',\n 'margin-top',\n 'margin-left',\n 'margin-right',\n ],\n },\n 'video-editor': {\n 'tags': ['a', 'img', 'div', 'iframe'],\n 'attributes': {\n 'a': ['href', 'rel', 'target'],\n 'img': ['src', 'alt', 'style'],\n 'div': ['class'],\n 'iframe': ['src', 'alt', 'style']\n }\n }\n}\n\n\n# adhocracy4\n\nA4_ORGANISATIONS_MODEL = 'meinberlin_organisations.Organisation'\n\nA4_RATEABLES = (\n ('a4comments', 'comment'),\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n ('meinberlin_topicprio', 'topic'),\n ('meinberlin_maptopicprio', 'maptopic'),\n)\n\nA4_COMMENTABLES = (\n ('a4comments', 'comment'),\n ('a4polls', 'poll'),\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n ('meinberlin_topicprio', 'topic'),\n ('meinberlin_maptopicprio', 'maptopic'),\n ('meinberlin_documents', 'chapter'),\n ('meinberlin_documents', 'paragraph'),\n)\n\nA4_REPORTABLES = (\n ('a4comments', 'comment'),\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n)\n\nA4_ACTIONABLES = (\n ('a4comments', 'comment'),\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n)\n\nA4_AUTO_FOLLOWABLES = (\n # Disabled to keep current behaviour: the auto follow functionality did\n # not work until 2018/03/21 due to a adhocracy4 bug\n # ('a4comments', 'comment'),\n # ('meinberlin_ideas', 'idea'),\n # ('meinberlin_mapideas', 'mapidea'),\n # ('meinberlin_budgeting', 'proposal'),\n # ('meinberlin_kiezkasse', 'proposal'),\n)\n\nA4_CATEGORIZABLE = (\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n ('meinberlin_topicprio', 'topic'),\n ('meinberlin_maptopicprio', 'maptopic'),\n)\n\nA4_LABELS_ADDABLE = (\n ('meinberlin_ideas', 'idea'),\n ('meinberlin_mapideas', 'mapidea'),\n ('meinberlin_budgeting', 'proposal'),\n ('meinberlin_kiezkasse', 'proposal'),\n ('meinberlin_topicprio', 'topic'),\n ('meinberlin_maptopicprio', 'maptopic'),\n)\n\nA4_CATEGORY_ICONS = (\n ('', _('Pin without icon')),\n ('diamant', _('Diamond')),\n ('dreieck_oben', _('Triangle up')),\n 
('dreieck_unten', _('Triangle down')),\n ('ellipse', _('Ellipse')),\n ('halbkreis', _('Semi circle')),\n ('hexagon', _('Hexagon')),\n ('parallelogramm', _('Rhomboid')),\n ('pentagramm', _('Star')),\n ('quadrat', _('Square')),\n ('raute', _('Octothorpe')),\n ('rechtecke', _('Rectangle')),\n ('ring', _('Circle')),\n ('rw_dreieck', _('Right triangle')),\n ('zickzack', _('Zigzag'))\n)\n\nA4_USE_VECTORMAP = True\nA4_MAP_BASEURL = 'https://maps.berlinonline.de/styles/klokantech-basic/style.json'\nA4_OPENMAPTILES_TOKEN = '9aVUrssbx7PKNUKo3WtXY6MqETI6Q336u5D142QS'\nA4_MAPBOX_TOKEN = ''\n\nA4_PROJECT_TOPICS = (\n ('ANT', _('Anti-discrimination')),\n ('WOR', _('Work & economy')),\n ('BUI', _('Building & living')),\n ('EDU', _('Education & research')),\n ('CHI', _('Children, youth & family')),\n ('FIN', _('Finances')),\n ('HEA', _('Health & sports')),\n ('INT', _('Integration')),\n ('CUL', _('Culture & leisure')),\n ('NEI', _('Neighborhood & participation')),\n ('URB', _('Urban development')),\n ('ENV', _('Environment & public green space')),\n ('TRA', _('Traffic'))\n)\n\nA4_MAP_ATTRIBUTION = '© <a href=\"http://openstreetmap.org/copyright\">OpenStreetMap</a> contributors'\nA4_MAP_BOUNDING_BOX = ([[52.3517, 13.8229], [52.6839, 12.9543]])\n\nA4_DASHBOARD = {\n 'PROJECT_DASHBOARD_CLASS': 'meinberlin.apps.dashboard.TypedProjectDashboard',\n 'BLUEPRINTS': 'meinberlin.apps.dashboard.blueprints.blueprints'}\n\nA4_ACTIONS_PHASE_ENDS_HOURS = 48\n", "path": "meinberlin/config/settings/base.py"}]} |
gh_patches_debug_1076 | rasdani/github-patches | git_diff | joke2k__faker-1432 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
gmail.co.uk isn't a valid free email domain in the UK
* Faker version: 6.6.2
* OS: macOS 11.2.3
When generating a free email address, I got a result with the domain `gmail.co.uk`. From the source code, this list of free UK email domains was copied from the PHP version of Faker, which is now archived.
According to [this Google support thread](https://support.google.com/mail/thread/4572636?hl=en) (albeit not necessarily from someone with the authority to speak on behalf of Google), there is no such domain.
### Steps to reproduce
1. Configure Faker with the `en_UK` locale.
1. Generate free emails by calling `fake.free_email()` repeatedly
1. Observe that some of them end in `gmail.co.uk`
### Expected behavior
Email addresses should not have `gmail.co.uk` as a domain.
### Actual behavior
As a replacement, maybe include Hotmail's successor, `outlook.com`? It's not UK specific, but I don't know anything about the state of free UK email providers.
gmail.co.uk isn't a valid free email domain in the UK
* Faker version: 6.6.2
* OS: macOS 11.2.3
When generating a free email address, I got a result with the domain `gmail.co.uk`. From the source code, this list of free UK email domains was copied from the PHP version of Faker, which is now archived.
According to [this Google support thread](https://support.google.com/mail/thread/4572636?hl=en) (albeit not necessarily from someone with the authority to speak on behalf of Google), there is no such domain.
### Steps to reproduce
1. Configure Faker with the `en_UK` locale.
1. Generate free emails by calling `fake.free_email()` repeatedly
1. Observe that some of them end in `gmail.co.uk`
### Expected behavior
Email addresses should not have `gmail.co.uk` as a domain.
### Actual behavior
As a replacement, maybe include Hotmail's successor, `outlook.com`? It's not UK specific, but I don't know anything about the state of free UK email providers.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/internet/en_GB/__init__.py`
Content:
```
1 from .. import Provider as InternetProvider
2
3
4 class Provider(InternetProvider):
5 # Data taken from
6 # https://github.com/fzaninotto/Faker/blob/master/src/Faker/Provider/en_GB/Internet.php
7
8 free_email_domains = (
9 'gmail.com',
10 'yahoo.com',
11 'hotmail.com',
12 'gmail.co.uk',
13 'yahoo.co.uk',
14 'hotmail.co.uk',
15 )
16
17 tlds = ('com', 'com', 'com', 'com', 'com', 'com', 'biz', 'info', 'net', 'org', 'co.uk')
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/internet/en_GB/__init__.py b/faker/providers/internet/en_GB/__init__.py
--- a/faker/providers/internet/en_GB/__init__.py
+++ b/faker/providers/internet/en_GB/__init__.py
@@ -9,9 +9,9 @@
'gmail.com',
'yahoo.com',
'hotmail.com',
- 'gmail.co.uk',
'yahoo.co.uk',
'hotmail.co.uk',
+ 'outlook.com',
)
tlds = ('com', 'com', 'com', 'com', 'com', 'com', 'biz', 'info', 'net', 'org', 'co.uk')
| {"golden_diff": "diff --git a/faker/providers/internet/en_GB/__init__.py b/faker/providers/internet/en_GB/__init__.py\n--- a/faker/providers/internet/en_GB/__init__.py\n+++ b/faker/providers/internet/en_GB/__init__.py\n@@ -9,9 +9,9 @@\n 'gmail.com',\n 'yahoo.com',\n 'hotmail.com',\n- 'gmail.co.uk',\n 'yahoo.co.uk',\n 'hotmail.co.uk',\n+ 'outlook.com',\n )\n \n tlds = ('com', 'com', 'com', 'com', 'com', 'com', 'biz', 'info', 'net', 'org', 'co.uk')\n", "issue": "gmail.co.uk isn't a valid free email domain in the UK\n* Faker version: 6.6.2\r\n* OS: macOS 11.2.3\r\n\r\nWhen generating a free email address, I got a result with the domain `gmail.co.uk`. From the source code, this list of free UK email domains was copied from the PHP version of Faker, which is now archived. \r\n\r\nAccording to [this Google support thread](https://support.google.com/mail/thread/4572636?hl=en) (albeit not necessarily from someone with the authority to speak on behalf of Google), there is no such domain.\r\n\r\n### Steps to reproduce\r\n\r\n1. Configure Faker with the `en_UK` locale.\r\n1. Generate free emails by calling `fake.free_email()` repeatedly\r\n1. Observe that some of them end in `gmail.co.uk`\r\n\r\n### Expected behavior\r\nEmail addresses should not have `gmail.co.uk` as a domain.\r\n\r\n### Actual behavior\r\nAs a replacement, maybe include Hotmail's successor, `outlook.com`? It's not UK specific, but I don't know anything about the state of free UK email providers.\ngmail.co.uk isn't a valid free email domain in the UK\n* Faker version: 6.6.2\r\n* OS: macOS 11.2.3\r\n\r\nWhen generating a free email address, I got a result with the domain `gmail.co.uk`. From the source code, this list of free UK email domains was copied from the PHP version of Faker, which is now archived. \r\n\r\nAccording to [this Google support thread](https://support.google.com/mail/thread/4572636?hl=en) (albeit not necessarily from someone with the authority to speak on behalf of Google), there is no such domain.\r\n\r\n### Steps to reproduce\r\n\r\n1. Configure Faker with the `en_UK` locale.\r\n1. Generate free emails by calling `fake.free_email()` repeatedly\r\n1. Observe that some of them end in `gmail.co.uk`\r\n\r\n### Expected behavior\r\nEmail addresses should not have `gmail.co.uk` as a domain.\r\n\r\n### Actual behavior\r\nAs a replacement, maybe include Hotmail's successor, `outlook.com`? It's not UK specific, but I don't know anything about the state of free UK email providers.\n", "before_files": [{"content": "from .. import Provider as InternetProvider\n\n\nclass Provider(InternetProvider):\n # Data taken from\n # https://github.com/fzaninotto/Faker/blob/master/src/Faker/Provider/en_GB/Internet.php\n\n free_email_domains = (\n 'gmail.com',\n 'yahoo.com',\n 'hotmail.com',\n 'gmail.co.uk',\n 'yahoo.co.uk',\n 'hotmail.co.uk',\n )\n\n tlds = ('com', 'com', 'com', 'com', 'com', 'com', 'biz', 'info', 'net', 'org', 'co.uk')\n", "path": "faker/providers/internet/en_GB/__init__.py"}], "after_files": [{"content": "from .. import Provider as InternetProvider\n\n\nclass Provider(InternetProvider):\n # Data taken from\n # https://github.com/fzaninotto/Faker/blob/master/src/Faker/Provider/en_GB/Internet.php\n\n free_email_domains = (\n 'gmail.com',\n 'yahoo.com',\n 'hotmail.com',\n 'yahoo.co.uk',\n 'hotmail.co.uk',\n 'outlook.com',\n )\n\n tlds = ('com', 'com', 'com', 'com', 'com', 'com', 'biz', 'info', 'net', 'org', 'co.uk')\n", "path": "faker/providers/internet/en_GB/__init__.py"}]} |
gh_patches_debug_1077 | rasdani/github-patches | git_diff | deepchecks__deepchecks-440 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] unnecessary warnings in integrity suite
Scenario:
When I simply run the example from the readme.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deepchecks/utils/features.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 # pylint: disable=inconsistent-quotes
12 """Utils module containing feature importance calculations."""
13 import typing as t
14 from warnings import warn
15 from functools import lru_cache
16
17 import numpy as np
18 import pandas as pd
19 from pandas.core.dtypes.common import is_float_dtype
20 from sklearn.inspection import permutation_importance
21 from sklearn.pipeline import Pipeline
22
23 from deepchecks import base
24 from deepchecks import errors
25 from deepchecks.utils import validation
26 from deepchecks.utils.typing import Hashable
27 from deepchecks.utils.model import get_model_of_pipeline
28
29
30 __all__ = [
31 'calculate_feature_importance',
32 'calculate_feature_importance_or_none',
33 'column_importance_sorter_dict',
34 'column_importance_sorter_df',
35 'infer_categorical_features',
36 'is_categorical'
37 ]
38
39
40 _NUMBER_OF_FEATURES_LIMIT: int = 200
41
42
43 def set_number_of_features_limit(limit: int):
44 """Set number of features limit to calculate features importance.
45
46 Args:
47 limit (int): limit value
48 """
49 global _NUMBER_OF_FEATURES_LIMIT
50 _NUMBER_OF_FEATURES_LIMIT = limit
51
52
53 def get_number_of_features_limit() -> int:
54 """Get number of features limit to calculate features importance."""
55 return _NUMBER_OF_FEATURES_LIMIT
56
57
58 def calculate_feature_importance_or_none(
59 model: t.Any,
60 dataset: t.Union['base.Dataset', pd.DataFrame],
61 force_permutation: bool = False,
62 permutation_kwargs: t.Optional[t.Dict[str, t.Any]] = None
63 ) -> t.Optional[pd.Series]:
64 """Calculate features effect on the label or None if the input is incorrect.
65
66 Args:
67 model (Any):
68 a fitted model
69 dataset (Union[Dataset, pandas.DataFrame]):
70 dataset used to fit the model
71 force_permutation (bool, default False):
72 force permutation importance calculation
73 permutation_kwargs (Optional[Dict[str, Any]], defaultNone):
74 kwargs for permutation importance calculation
75
76 Returns:
77 Optional[pandas.Series]:
78 features importance normalized to 0-1 indexed by feature names
79 or None if the input is incorrect
80 """
81 try:
82 # calculate feature importance if dataset has label and the model is fitted on it
83 return calculate_feature_importance(
84 model=model,
85 dataset=dataset,
86 force_permutation=force_permutation,
87 permutation_kwargs=permutation_kwargs
88 )
89 except (errors.DeepchecksValueError, errors.NumberOfFeaturesLimitError) as error:
90 # DeepchecksValueError:
91 # if model validation failed;
92 # if it was not possible to calculate features importance;
93 # NumberOfFeaturesLimitError:
94 # if the number of features limit were exceeded;
95 warn(f'Features importance was not calculated:\n{str(error)}')
96
97
98 def calculate_feature_importance(
99 model: t.Any,
100 dataset: t.Union['base.Dataset', pd.DataFrame],
101 force_permutation: bool = False,
102 permutation_kwargs: t.Dict[str, t.Any] = None
103 ) -> pd.Series:
104 """Calculate features effect on the label.
105
106 Args:
107 model (Any):
108 a fitted model
109 dataset (Union[Dataset, pandas.DataFrame]):
110 dataset used to fit the model
111 force_permutation (bool, default False):
112 force permutation importance calculation
113 permutation_kwargs (Optional[Dict[str, Any]], defaultNone):
114 kwargs for permutation importance calculation
115
116 Returns:
117 pandas.Series: feature importance normalized to 0-1 indexed by feature names
118
119 Raises:
120 NotFittedError:
121 Call 'fit' with appropriate arguments before using this estimator;
122 DeepchecksValueError:
123 if model validation failed;
124 if it was not possible to calculate features importance;
125 NumberOfFeaturesLimitError:
126 if the number of features limit were exceeded;
127 """
128 # TODO: maybe it is better to split it into two functions, one for dataframe instances
129 # second for dataset instances
130 permutation_kwargs = permutation_kwargs or {}
131 permutation_kwargs['random_state'] = permutation_kwargs.get('random_state') or 42
132 validation.validate_model(dataset, model)
133
134 if isinstance(dataset, base.Dataset) and force_permutation is True:
135 if len(dataset.features) > _NUMBER_OF_FEATURES_LIMIT:
136 raise errors.NumberOfFeaturesLimitError(
137 f"Dataset contains more than {_NUMBER_OF_FEATURES_LIMIT} of features, "
138 "therefore features importance is not calculated. If you want to "
139 "change this behaviour please use :function:`deepchecks.utils.features.set_number_of_features_limit`"
140 )
141 return _calc_importance(model, dataset, **permutation_kwargs).fillna(0)
142
143 feature_importances = _built_in_importance(model, dataset)
144
145 # if _built_in_importance was calculated and returned None,
146 # check if pipeline and / or attempt permutation importance
147 if feature_importances is None and isinstance(model, Pipeline):
148 internal_estimator = get_model_of_pipeline(model)
149 if internal_estimator is not None:
150 try:
151 feature_importances = _built_in_importance(internal_estimator, dataset)
152 except ValueError:
153 # in case pipeline had an encoder
154 pass
155
156 if feature_importances is not None:
157 return feature_importances.fillna(0)
158 elif isinstance(dataset, base.Dataset):
159 return _calc_importance(model, dataset, **permutation_kwargs).fillna(0)
160 else:
161 raise errors.DeepchecksValueError(
162 "Was not able to calculate features importance" # FIXME: better message
163 )
164
165
166 def _built_in_importance(
167 model: t.Any,
168 dataset: t.Union['base.Dataset', pd.DataFrame],
169 ) -> t.Optional[pd.Series]:
170 """Get feature importance member if present in model."""
171 features = dataset.features if isinstance(dataset, base.Dataset) else dataset.columns
172
173 if hasattr(model, 'feature_importances_'): # Ensembles
174 normalized_feature_importance_values = model.feature_importances_ / model.feature_importances_.sum()
175 return pd.Series(normalized_feature_importance_values, index=features)
176
177 if hasattr(model, 'coef_'): # Linear models
178 coef = np.abs(model.coef_.flatten())
179 coef = coef / coef.sum()
180 return pd.Series(coef, index=features)
181
182
183 @lru_cache(maxsize=32)
184 def _calc_importance(
185 model: t.Any,
186 dataset: 'base.Dataset',
187 n_repeats: int = 30,
188 mask_high_variance_features: bool = False,
189 random_state: int = 42,
190 n_samples: int = 10000,
191 ) -> pd.Series:
192 """Calculate permutation feature importance. Return nonzero value only when std doesn't mask signal.
193
194 Args:
195 model (Any): A fitted model
196 dataset (Dataset): dataset used to fit the model
197 n_repeats (int): Number of times to permute a feature
198 mask_high_variance_features (bool): If true, features for which calculated permutation importance values
199 varied greatly would be returned has having 0 feature importance
200 random_state (int): Random seed for permutation importance calculation.
201 n_samples (int): The number of samples to draw from X to compute feature importance
202 in each repeat (without replacement).
203 Returns:
204 pd.Series of feature importance normalized to 0-1 indexed by feature names
205 """
206 dataset.validate_label()
207
208 n_samples = min(n_samples, dataset.n_samples)
209 dataset_sample_idx = dataset.label_col.sample(n_samples, random_state=random_state).index
210
211 r = permutation_importance(
212 model,
213 dataset.features_columns.loc[dataset_sample_idx, :],
214 dataset.label_col.loc[dataset_sample_idx],
215 n_repeats=n_repeats,
216 random_state=random_state,
217 n_jobs=-1
218 )
219
220 significance_mask = (
221 r.importances_mean - r.importances_std > 0
222 if mask_high_variance_features
223 else r.importances_mean > 0
224 )
225
226 feature_importances = r.importances_mean * significance_mask
227 total = feature_importances.sum()
228
229 if total != 0:
230 feature_importances = feature_importances / total
231
232 return pd.Series(feature_importances, index=dataset.features)
233
234
235 def get_importance(name: str, feature_importances: pd.Series, ds: 'base.Dataset') -> int:
236 """Return importance based on feature importance or label/date/index first."""
237 if name in feature_importances.keys():
238 return feature_importances[name]
239 if name in [ds.label_name, ds.datetime_name, ds.index_name]:
240 return 1
241 return 0
242
243
244 def column_importance_sorter_dict(
245 cols_dict: t.Dict[Hashable, t.Any],
246 dataset: 'base.Dataset',
247 feature_importances: t.Optional[pd.Series] = None,
248 n_top: int = 10
249 ) -> t.Dict:
250 """Return the dict of columns sorted and limited by feature importance.
251
252 Args:
253 cols_dict (Dict[Hashable, t.Any]):
254 dict where columns are the keys
255 dataset (Dataset):
256 dataset used to fit the model
257 feature_importances (pd.Series):
258 feature importance normalized to 0-1 indexed by feature names
259 n_top_columns (int):
260 amount of columns to show ordered by feature importance (date, index, label are first);
261 is used only if model was specified
262
263 Returns:
264 Dict[Hashable, Any]: the dict of columns sorted and limited by feature importance.
265 """
266 if feature_importances is not None:
267 key = lambda name: get_importance(name[0], feature_importances, dataset)
268 cols_dict = dict(sorted(cols_dict.items(), key=key, reverse=True))
269 if n_top:
270 return dict(list(cols_dict.items())[:n_top])
271 return cols_dict
272
273
274 def column_importance_sorter_df(
275 df: pd.DataFrame,
276 ds: 'base.Dataset',
277 feature_importances: pd.Series,
278 n_top: int = 10,
279 col: t.Optional[Hashable] = None
280 ) -> pd.DataFrame:
281 """Return the dataframe of of columns sorted and limited by feature importance.
282
283 Args:
284 df (DataFrame): DataFrame to sort
285 ds (Dataset): dataset used to fit the model
286 feature_importances (pd.Series): feature importance normalized to 0-1 indexed by feature names
287 n_top (int): amount of columns to show ordered by feature importance (date, index, label are first)
288 col (Optional[Hashable]): name of column to sort the dataframe by
289 Returns:
290 pd.DataFrame: the dataframe sorted and limited by feature importance.
291
292 """
293 if feature_importances is not None:
294 key = lambda column: [get_importance(name, feature_importances, ds) for name in column]
295 if col:
296 df = df.sort_values(by=[col], key=key, ascending=False)
297 df = df.sort_index(key=key, ascending=False)
298 if n_top:
299 return df.head(n_top)
300 return df
301
302
303 def infer_categorical_features(
304 df: pd.DataFrame,
305 max_categorical_ratio: float = 0.01,
306 max_categories: int = 30,
307 max_float_categories: int = 5,
308 columns: t.Optional[t.List[Hashable]] = None,
309 ) -> t.List[Hashable]:
310 """Infers which features are categorical by checking types and number of unique values.
311
312 Arguments:
313 df (DataFrame): dataframe for which to infer categorical features
314
315 Returns:
316 List[hashable]: list of categorical features
317 """
318 categorical_dtypes = df.select_dtypes(include='category')
319
320 if len(categorical_dtypes.columns) > 0:
321 return list(categorical_dtypes.columns)
322
323 if columns is not None:
324 dataframe_columns = validation.ensure_hashable_or_mutable_sequence(columns)
325 else:
326 dataframe_columns = df.columns
327
328 return [
329 column
330 for column in dataframe_columns
331 if is_categorical(
332 t.cast(pd.Series, df[column]),
333 max_categorical_ratio,
334 max_categories,
335 max_float_categories
336 )
337 ]
338
339
340 def is_categorical(
341 column: pd.Series,
342 max_categorical_ratio: float = 0.01,
343 max_categories: int = 30,
344 max_float_categories: int = 5
345 ) -> bool:
346 """Check if uniques are few enough to count as categorical.
347
348 Args:
349 column (Series):
350 The name of the column in the dataframe
351
352 Returns:
353 bool: True if is categorical according to input numbers
354 """
355 n_unique = column.nunique(dropna=True)
356 n_samples = len(column.dropna())
357
358 if is_float_dtype(column):
359 return n_unique <= max_float_categories
360
361 return n_unique / n_samples < max_categorical_ratio and n_unique <= max_categories
362
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/deepchecks/utils/features.py b/deepchecks/utils/features.py
--- a/deepchecks/utils/features.py
+++ b/deepchecks/utils/features.py
@@ -79,6 +79,8 @@
or None if the input is incorrect
"""
try:
+ if model is None:
+ return None
# calculate feature importance if dataset has label and the model is fitted on it
return calculate_feature_importance(
model=model,
| {"golden_diff": "diff --git a/deepchecks/utils/features.py b/deepchecks/utils/features.py\n--- a/deepchecks/utils/features.py\n+++ b/deepchecks/utils/features.py\n@@ -79,6 +79,8 @@\n or None if the input is incorrect\n \"\"\"\n try:\n+ if model is None:\n+ return None\n # calculate feature importance if dataset has label and the model is fitted on it\n return calculate_feature_importance(\n model=model,\n", "issue": "[BUG] unnecessary warnings in integrity suite\nScenario:\r\nWhen I simply run the example from the readme.\r\n\r\n\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n# pylint: disable=inconsistent-quotes\n\"\"\"Utils module containing feature importance calculations.\"\"\"\nimport typing as t\nfrom warnings import warn\nfrom functools import lru_cache\n\nimport numpy as np\nimport pandas as pd\nfrom pandas.core.dtypes.common import is_float_dtype\nfrom sklearn.inspection import permutation_importance\nfrom sklearn.pipeline import Pipeline\n\nfrom deepchecks import base\nfrom deepchecks import errors\nfrom deepchecks.utils import validation\nfrom deepchecks.utils.typing import Hashable\nfrom deepchecks.utils.model import get_model_of_pipeline\n\n\n__all__ = [\n 'calculate_feature_importance',\n 'calculate_feature_importance_or_none',\n 'column_importance_sorter_dict',\n 'column_importance_sorter_df',\n 'infer_categorical_features',\n 'is_categorical'\n]\n\n\n_NUMBER_OF_FEATURES_LIMIT: int = 200\n\n\ndef set_number_of_features_limit(limit: int):\n \"\"\"Set number of features limit to calculate features importance.\n\n Args:\n limit (int): limit value\n \"\"\"\n global _NUMBER_OF_FEATURES_LIMIT\n _NUMBER_OF_FEATURES_LIMIT = limit\n\n\ndef get_number_of_features_limit() -> int:\n \"\"\"Get number of features limit to calculate features importance.\"\"\"\n return _NUMBER_OF_FEATURES_LIMIT\n\n\ndef calculate_feature_importance_or_none(\n model: t.Any,\n dataset: t.Union['base.Dataset', pd.DataFrame],\n force_permutation: bool = False,\n permutation_kwargs: t.Optional[t.Dict[str, t.Any]] = None\n) -> t.Optional[pd.Series]:\n \"\"\"Calculate features effect on the label or None if the input is incorrect.\n\n Args:\n model (Any):\n a fitted model\n dataset (Union[Dataset, pandas.DataFrame]):\n dataset used to fit the model\n force_permutation (bool, default False):\n force permutation importance calculation\n permutation_kwargs (Optional[Dict[str, Any]], defaultNone):\n kwargs for permutation importance calculation\n\n Returns:\n Optional[pandas.Series]:\n features importance normalized to 0-1 indexed by feature names\n or None if the input is incorrect\n \"\"\"\n try:\n # calculate feature importance if dataset has label and the model is fitted on it\n return calculate_feature_importance(\n model=model,\n dataset=dataset,\n force_permutation=force_permutation,\n permutation_kwargs=permutation_kwargs\n )\n except (errors.DeepchecksValueError, errors.NumberOfFeaturesLimitError) as error:\n # DeepchecksValueError:\n # if model validation failed;\n # if it was not possible to calculate 
features importance;\n # NumberOfFeaturesLimitError:\n # if the number of features limit were exceeded;\n warn(f'Features importance was not calculated:\\n{str(error)}')\n\n\ndef calculate_feature_importance(\n model: t.Any,\n dataset: t.Union['base.Dataset', pd.DataFrame],\n force_permutation: bool = False,\n permutation_kwargs: t.Dict[str, t.Any] = None\n) -> pd.Series:\n \"\"\"Calculate features effect on the label.\n\n Args:\n model (Any):\n a fitted model\n dataset (Union[Dataset, pandas.DataFrame]):\n dataset used to fit the model\n force_permutation (bool, default False):\n force permutation importance calculation\n permutation_kwargs (Optional[Dict[str, Any]], defaultNone):\n kwargs for permutation importance calculation\n\n Returns:\n pandas.Series: feature importance normalized to 0-1 indexed by feature names\n\n Raises:\n NotFittedError:\n Call 'fit' with appropriate arguments before using this estimator;\n DeepchecksValueError:\n if model validation failed;\n if it was not possible to calculate features importance;\n NumberOfFeaturesLimitError:\n if the number of features limit were exceeded;\n \"\"\"\n # TODO: maybe it is better to split it into two functions, one for dataframe instances\n # second for dataset instances\n permutation_kwargs = permutation_kwargs or {}\n permutation_kwargs['random_state'] = permutation_kwargs.get('random_state') or 42\n validation.validate_model(dataset, model)\n\n if isinstance(dataset, base.Dataset) and force_permutation is True:\n if len(dataset.features) > _NUMBER_OF_FEATURES_LIMIT:\n raise errors.NumberOfFeaturesLimitError(\n f\"Dataset contains more than {_NUMBER_OF_FEATURES_LIMIT} of features, \"\n \"therefore features importance is not calculated. If you want to \"\n \"change this behaviour please use :function:`deepchecks.utils.features.set_number_of_features_limit`\"\n )\n return _calc_importance(model, dataset, **permutation_kwargs).fillna(0)\n\n feature_importances = _built_in_importance(model, dataset)\n\n # if _built_in_importance was calculated and returned None,\n # check if pipeline and / or attempt permutation importance\n if feature_importances is None and isinstance(model, Pipeline):\n internal_estimator = get_model_of_pipeline(model)\n if internal_estimator is not None:\n try:\n feature_importances = _built_in_importance(internal_estimator, dataset)\n except ValueError:\n # in case pipeline had an encoder\n pass\n\n if feature_importances is not None:\n return feature_importances.fillna(0)\n elif isinstance(dataset, base.Dataset):\n return _calc_importance(model, dataset, **permutation_kwargs).fillna(0)\n else:\n raise errors.DeepchecksValueError(\n \"Was not able to calculate features importance\" # FIXME: better message\n )\n\n\ndef _built_in_importance(\n model: t.Any,\n dataset: t.Union['base.Dataset', pd.DataFrame],\n) -> t.Optional[pd.Series]:\n \"\"\"Get feature importance member if present in model.\"\"\"\n features = dataset.features if isinstance(dataset, base.Dataset) else dataset.columns\n\n if hasattr(model, 'feature_importances_'): # Ensembles\n normalized_feature_importance_values = model.feature_importances_ / model.feature_importances_.sum()\n return pd.Series(normalized_feature_importance_values, index=features)\n\n if hasattr(model, 'coef_'): # Linear models\n coef = np.abs(model.coef_.flatten())\n coef = coef / coef.sum()\n return pd.Series(coef, index=features)\n\n\n@lru_cache(maxsize=32)\ndef _calc_importance(\n model: t.Any,\n dataset: 'base.Dataset',\n n_repeats: int = 30,\n 
mask_high_variance_features: bool = False,\n random_state: int = 42,\n n_samples: int = 10000,\n) -> pd.Series:\n \"\"\"Calculate permutation feature importance. Return nonzero value only when std doesn't mask signal.\n\n Args:\n model (Any): A fitted model\n dataset (Dataset): dataset used to fit the model\n n_repeats (int): Number of times to permute a feature\n mask_high_variance_features (bool): If true, features for which calculated permutation importance values\n varied greatly would be returned has having 0 feature importance\n random_state (int): Random seed for permutation importance calculation.\n n_samples (int): The number of samples to draw from X to compute feature importance\n in each repeat (without replacement).\n Returns:\n pd.Series of feature importance normalized to 0-1 indexed by feature names\n \"\"\"\n dataset.validate_label()\n\n n_samples = min(n_samples, dataset.n_samples)\n dataset_sample_idx = dataset.label_col.sample(n_samples, random_state=random_state).index\n\n r = permutation_importance(\n model,\n dataset.features_columns.loc[dataset_sample_idx, :],\n dataset.label_col.loc[dataset_sample_idx],\n n_repeats=n_repeats,\n random_state=random_state,\n n_jobs=-1\n )\n\n significance_mask = (\n r.importances_mean - r.importances_std > 0\n if mask_high_variance_features\n else r.importances_mean > 0\n )\n\n feature_importances = r.importances_mean * significance_mask\n total = feature_importances.sum()\n\n if total != 0:\n feature_importances = feature_importances / total\n\n return pd.Series(feature_importances, index=dataset.features)\n\n\ndef get_importance(name: str, feature_importances: pd.Series, ds: 'base.Dataset') -> int:\n \"\"\"Return importance based on feature importance or label/date/index first.\"\"\"\n if name in feature_importances.keys():\n return feature_importances[name]\n if name in [ds.label_name, ds.datetime_name, ds.index_name]:\n return 1\n return 0\n\n\ndef column_importance_sorter_dict(\n cols_dict: t.Dict[Hashable, t.Any],\n dataset: 'base.Dataset',\n feature_importances: t.Optional[pd.Series] = None,\n n_top: int = 10\n) -> t.Dict:\n \"\"\"Return the dict of columns sorted and limited by feature importance.\n\n Args:\n cols_dict (Dict[Hashable, t.Any]):\n dict where columns are the keys\n dataset (Dataset):\n dataset used to fit the model\n feature_importances (pd.Series):\n feature importance normalized to 0-1 indexed by feature names\n n_top_columns (int):\n amount of columns to show ordered by feature importance (date, index, label are first);\n is used only if model was specified\n\n Returns:\n Dict[Hashable, Any]: the dict of columns sorted and limited by feature importance.\n \"\"\"\n if feature_importances is not None:\n key = lambda name: get_importance(name[0], feature_importances, dataset)\n cols_dict = dict(sorted(cols_dict.items(), key=key, reverse=True))\n if n_top:\n return dict(list(cols_dict.items())[:n_top])\n return cols_dict\n\n\ndef column_importance_sorter_df(\n df: pd.DataFrame,\n ds: 'base.Dataset',\n feature_importances: pd.Series,\n n_top: int = 10,\n col: t.Optional[Hashable] = None\n) -> pd.DataFrame:\n \"\"\"Return the dataframe of of columns sorted and limited by feature importance.\n\n Args:\n df (DataFrame): DataFrame to sort\n ds (Dataset): dataset used to fit the model\n feature_importances (pd.Series): feature importance normalized to 0-1 indexed by feature names\n n_top (int): amount of columns to show ordered by feature importance (date, index, label are first)\n col (Optional[Hashable]): name of 
column to sort the dataframe by\n Returns:\n pd.DataFrame: the dataframe sorted and limited by feature importance.\n\n \"\"\"\n if feature_importances is not None:\n key = lambda column: [get_importance(name, feature_importances, ds) for name in column]\n if col:\n df = df.sort_values(by=[col], key=key, ascending=False)\n df = df.sort_index(key=key, ascending=False)\n if n_top:\n return df.head(n_top)\n return df\n\n\ndef infer_categorical_features(\n df: pd.DataFrame,\n max_categorical_ratio: float = 0.01,\n max_categories: int = 30,\n max_float_categories: int = 5,\n columns: t.Optional[t.List[Hashable]] = None,\n) -> t.List[Hashable]:\n \"\"\"Infers which features are categorical by checking types and number of unique values.\n\n Arguments:\n df (DataFrame): dataframe for which to infer categorical features\n\n Returns:\n List[hashable]: list of categorical features\n \"\"\"\n categorical_dtypes = df.select_dtypes(include='category')\n\n if len(categorical_dtypes.columns) > 0:\n return list(categorical_dtypes.columns)\n\n if columns is not None:\n dataframe_columns = validation.ensure_hashable_or_mutable_sequence(columns)\n else:\n dataframe_columns = df.columns\n\n return [\n column\n for column in dataframe_columns\n if is_categorical(\n t.cast(pd.Series, df[column]),\n max_categorical_ratio,\n max_categories,\n max_float_categories\n )\n ]\n\n\ndef is_categorical(\n column: pd.Series,\n max_categorical_ratio: float = 0.01,\n max_categories: int = 30,\n max_float_categories: int = 5\n) -> bool:\n \"\"\"Check if uniques are few enough to count as categorical.\n\n Args:\n column (Series):\n The name of the column in the dataframe\n\n Returns:\n bool: True if is categorical according to input numbers\n \"\"\"\n n_unique = column.nunique(dropna=True)\n n_samples = len(column.dropna())\n\n if is_float_dtype(column):\n return n_unique <= max_float_categories\n\n return n_unique / n_samples < max_categorical_ratio and n_unique <= max_categories\n", "path": "deepchecks/utils/features.py"}], "after_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n# pylint: disable=inconsistent-quotes\n\"\"\"Utils module containing feature importance calculations.\"\"\"\nimport typing as t\nfrom warnings import warn\nfrom functools import lru_cache\n\nimport numpy as np\nimport pandas as pd\nfrom pandas.core.dtypes.common import is_float_dtype\nfrom sklearn.inspection import permutation_importance\nfrom sklearn.pipeline import Pipeline\n\nfrom deepchecks import base\nfrom deepchecks import errors\nfrom deepchecks.utils import validation\nfrom deepchecks.utils.typing import Hashable\nfrom deepchecks.utils.model import get_model_of_pipeline\n\n\n__all__ = [\n 'calculate_feature_importance',\n 'calculate_feature_importance_or_none',\n 'column_importance_sorter_dict',\n 'column_importance_sorter_df',\n 'infer_categorical_features',\n 'is_categorical'\n]\n\n\n_NUMBER_OF_FEATURES_LIMIT: int = 200\n\n\ndef set_number_of_features_limit(limit: int):\n \"\"\"Set number of features limit to calculate features importance.\n\n Args:\n limit (int): limit value\n \"\"\"\n global _NUMBER_OF_FEATURES_LIMIT\n _NUMBER_OF_FEATURES_LIMIT = limit\n\n\ndef get_number_of_features_limit() -> int:\n \"\"\"Get number of features limit to calculate features importance.\"\"\"\n return _NUMBER_OF_FEATURES_LIMIT\n\n\ndef calculate_feature_importance_or_none(\n model: t.Any,\n dataset: t.Union['base.Dataset', pd.DataFrame],\n force_permutation: bool = False,\n permutation_kwargs: t.Optional[t.Dict[str, t.Any]] = None\n) -> t.Optional[pd.Series]:\n \"\"\"Calculate features effect on the label or None if the input is incorrect.\n\n Args:\n model (Any):\n a fitted model\n dataset (Union[Dataset, pandas.DataFrame]):\n dataset used to fit the model\n force_permutation (bool, default False):\n force permutation importance calculation\n permutation_kwargs (Optional[Dict[str, Any]], defaultNone):\n kwargs for permutation importance calculation\n\n Returns:\n Optional[pandas.Series]:\n features importance normalized to 0-1 indexed by feature names\n or None if the input is incorrect\n \"\"\"\n try:\n if model is None:\n return None\n # calculate feature importance if dataset has label and the model is fitted on it\n return calculate_feature_importance(\n model=model,\n dataset=dataset,\n force_permutation=force_permutation,\n permutation_kwargs=permutation_kwargs\n )\n except (errors.DeepchecksValueError, errors.NumberOfFeaturesLimitError) as error:\n # DeepchecksValueError:\n # if model validation failed;\n # if it was not possible to calculate features importance;\n # NumberOfFeaturesLimitError:\n # if the number of features limit were exceeded;\n warn(f'Features importance was not calculated:\\n{str(error)}')\n\n\ndef calculate_feature_importance(\n model: t.Any,\n dataset: t.Union['base.Dataset', pd.DataFrame],\n force_permutation: bool = False,\n permutation_kwargs: t.Dict[str, t.Any] = None\n) -> pd.Series:\n \"\"\"Calculate features effect on the label.\n\n Args:\n model (Any):\n a fitted model\n dataset (Union[Dataset, pandas.DataFrame]):\n dataset used to fit the model\n force_permutation (bool, default False):\n force permutation importance calculation\n permutation_kwargs (Optional[Dict[str, Any]], defaultNone):\n kwargs for permutation importance calculation\n\n Returns:\n pandas.Series: feature importance normalized to 0-1 indexed by feature names\n\n Raises:\n NotFittedError:\n Call 'fit' with appropriate arguments before using this estimator;\n 
DeepchecksValueError:\n if model validation failed;\n if it was not possible to calculate features importance;\n NumberOfFeaturesLimitError:\n if the number of features limit were exceeded;\n \"\"\"\n # TODO: maybe it is better to split it into two functions, one for dataframe instances\n # second for dataset instances\n permutation_kwargs = permutation_kwargs or {}\n permutation_kwargs['random_state'] = permutation_kwargs.get('random_state') or 42\n validation.validate_model(dataset, model)\n\n if isinstance(dataset, base.Dataset) and force_permutation is True:\n if len(dataset.features) > _NUMBER_OF_FEATURES_LIMIT:\n raise errors.NumberOfFeaturesLimitError(\n f\"Dataset contains more than {_NUMBER_OF_FEATURES_LIMIT} of features, \"\n \"therefore features importance is not calculated. If you want to \"\n \"change this behaviour please use :function:`deepchecks.utils.features.set_number_of_features_limit`\"\n )\n return _calc_importance(model, dataset, **permutation_kwargs).fillna(0)\n\n feature_importances = _built_in_importance(model, dataset)\n\n # if _built_in_importance was calculated and returned None,\n # check if pipeline and / or attempt permutation importance\n if feature_importances is None and isinstance(model, Pipeline):\n internal_estimator = get_model_of_pipeline(model)\n if internal_estimator is not None:\n try:\n feature_importances = _built_in_importance(internal_estimator, dataset)\n except ValueError:\n # in case pipeline had an encoder\n pass\n\n if feature_importances is not None:\n return feature_importances.fillna(0)\n elif isinstance(dataset, base.Dataset):\n return _calc_importance(model, dataset, **permutation_kwargs).fillna(0)\n else:\n raise errors.DeepchecksValueError(\n \"Was not able to calculate features importance\" # FIXME: better message\n )\n\n\ndef _built_in_importance(\n model: t.Any,\n dataset: t.Union['base.Dataset', pd.DataFrame],\n) -> t.Optional[pd.Series]:\n \"\"\"Get feature importance member if present in model.\"\"\"\n features = dataset.features if isinstance(dataset, base.Dataset) else dataset.columns\n\n if hasattr(model, 'feature_importances_'): # Ensembles\n normalized_feature_importance_values = model.feature_importances_ / model.feature_importances_.sum()\n return pd.Series(normalized_feature_importance_values, index=features)\n\n if hasattr(model, 'coef_'): # Linear models\n coef = np.abs(model.coef_.flatten())\n coef = coef / coef.sum()\n return pd.Series(coef, index=features)\n\n\n@lru_cache(maxsize=32)\ndef _calc_importance(\n model: t.Any,\n dataset: 'base.Dataset',\n n_repeats: int = 30,\n mask_high_variance_features: bool = False,\n random_state: int = 42,\n n_samples: int = 10000,\n) -> pd.Series:\n \"\"\"Calculate permutation feature importance. 
Return nonzero value only when std doesn't mask signal.\n\n Args:\n model (Any): A fitted model\n dataset (Dataset): dataset used to fit the model\n n_repeats (int): Number of times to permute a feature\n mask_high_variance_features (bool): If true, features for which calculated permutation importance values\n varied greatly would be returned has having 0 feature importance\n random_state (int): Random seed for permutation importance calculation.\n n_samples (int): The number of samples to draw from X to compute feature importance\n in each repeat (without replacement).\n Returns:\n pd.Series of feature importance normalized to 0-1 indexed by feature names\n \"\"\"\n dataset.validate_label()\n\n n_samples = min(n_samples, dataset.n_samples)\n dataset_sample_idx = dataset.label_col.sample(n_samples, random_state=random_state).index\n\n r = permutation_importance(\n model,\n dataset.features_columns.loc[dataset_sample_idx, :],\n dataset.label_col.loc[dataset_sample_idx],\n n_repeats=n_repeats,\n random_state=random_state,\n n_jobs=-1\n )\n\n significance_mask = (\n r.importances_mean - r.importances_std > 0\n if mask_high_variance_features\n else r.importances_mean > 0\n )\n\n feature_importances = r.importances_mean * significance_mask\n total = feature_importances.sum()\n\n if total != 0:\n feature_importances = feature_importances / total\n\n return pd.Series(feature_importances, index=dataset.features)\n\n\ndef get_importance(name: str, feature_importances: pd.Series, ds: 'base.Dataset') -> int:\n \"\"\"Return importance based on feature importance or label/date/index first.\"\"\"\n if name in feature_importances.keys():\n return feature_importances[name]\n if name in [ds.label_name, ds.datetime_name, ds.index_name]:\n return 1\n return 0\n\n\ndef column_importance_sorter_dict(\n cols_dict: t.Dict[Hashable, t.Any],\n dataset: 'base.Dataset',\n feature_importances: t.Optional[pd.Series] = None,\n n_top: int = 10\n) -> t.Dict:\n \"\"\"Return the dict of columns sorted and limited by feature importance.\n\n Args:\n cols_dict (Dict[Hashable, t.Any]):\n dict where columns are the keys\n dataset (Dataset):\n dataset used to fit the model\n feature_importances (pd.Series):\n feature importance normalized to 0-1 indexed by feature names\n n_top_columns (int):\n amount of columns to show ordered by feature importance (date, index, label are first);\n is used only if model was specified\n\n Returns:\n Dict[Hashable, Any]: the dict of columns sorted and limited by feature importance.\n \"\"\"\n if feature_importances is not None:\n key = lambda name: get_importance(name[0], feature_importances, dataset)\n cols_dict = dict(sorted(cols_dict.items(), key=key, reverse=True))\n if n_top:\n return dict(list(cols_dict.items())[:n_top])\n return cols_dict\n\n\ndef column_importance_sorter_df(\n df: pd.DataFrame,\n ds: 'base.Dataset',\n feature_importances: pd.Series,\n n_top: int = 10,\n col: t.Optional[Hashable] = None\n) -> pd.DataFrame:\n \"\"\"Return the dataframe of of columns sorted and limited by feature importance.\n\n Args:\n df (DataFrame): DataFrame to sort\n ds (Dataset): dataset used to fit the model\n feature_importances (pd.Series): feature importance normalized to 0-1 indexed by feature names\n n_top (int): amount of columns to show ordered by feature importance (date, index, label are first)\n col (Optional[Hashable]): name of column to sort the dataframe by\n Returns:\n pd.DataFrame: the dataframe sorted and limited by feature importance.\n\n \"\"\"\n if feature_importances is not None:\n 
key = lambda column: [get_importance(name, feature_importances, ds) for name in column]\n if col:\n df = df.sort_values(by=[col], key=key, ascending=False)\n df = df.sort_index(key=key, ascending=False)\n if n_top:\n return df.head(n_top)\n return df\n\n\ndef infer_categorical_features(\n df: pd.DataFrame,\n max_categorical_ratio: float = 0.01,\n max_categories: int = 30,\n max_float_categories: int = 5,\n columns: t.Optional[t.List[Hashable]] = None,\n) -> t.List[Hashable]:\n \"\"\"Infers which features are categorical by checking types and number of unique values.\n\n Arguments:\n df (DataFrame): dataframe for which to infer categorical features\n\n Returns:\n List[hashable]: list of categorical features\n \"\"\"\n categorical_dtypes = df.select_dtypes(include='category')\n\n if len(categorical_dtypes.columns) > 0:\n return list(categorical_dtypes.columns)\n\n if columns is not None:\n dataframe_columns = validation.ensure_hashable_or_mutable_sequence(columns)\n else:\n dataframe_columns = df.columns\n\n return [\n column\n for column in dataframe_columns\n if is_categorical(\n t.cast(pd.Series, df[column]),\n max_categorical_ratio,\n max_categories,\n max_float_categories\n )\n ]\n\n\ndef is_categorical(\n column: pd.Series,\n max_categorical_ratio: float = 0.01,\n max_categories: int = 30,\n max_float_categories: int = 5\n) -> bool:\n \"\"\"Check if uniques are few enough to count as categorical.\n\n Args:\n column (Series):\n The name of the column in the dataframe\n\n Returns:\n bool: True if is categorical according to input numbers\n \"\"\"\n n_unique = column.nunique(dropna=True)\n n_samples = len(column.dropna())\n\n if is_float_dtype(column):\n return n_unique <= max_float_categories\n\n return n_unique / n_samples < max_categorical_ratio and n_unique <= max_categories\n", "path": "deepchecks/utils/features.py"}]} |
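A minimal usage sketch of the patched helper, assuming the `if model is None: return None` guard from the diff above is applied; the throwaway DataFrame stands in for a real dataset and is purely illustrative:

```
import pandas as pd
from deepchecks.utils.features import calculate_feature_importance_or_none

# With no model supplied, the patched helper returns None quietly instead of
# failing model validation and emitting the warning shown in the issue.
result = calculate_feature_importance_or_none(model=None,
                                              dataset=pd.DataFrame({'a': [1, 2, 3]}))
assert result is None
```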
gh_patches_debug_1078 | rasdani/github-patches | git_diff | magenta__magenta-841 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
execfile() was removed from Python 3
https://github.com/tensorflow/magenta/blob/master/magenta/tools/pip/setup.py#L23
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `magenta/tools/pip/setup.py`
Content:
```
1 # Copyright 2016 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """A setuptools based setup module for magenta."""
15
16 from setuptools import find_packages
17 from setuptools import setup
18
19 # Bit of a hack to parse the version string stored in version.py without
20 # executing __init__.py, which will end up requiring a bunch of dependencies to
21 # execute (e.g., tensorflow, pretty_midi, etc.).
22 # Makes the __version__ variable available.
23 execfile('magenta/version.py')
24
25
26 REQUIRED_PACKAGES = [
27 'IPython',
28 'Pillow >= 3.4.2',
29 'bokeh >= 0.12.0',
30 'futures',
31 'intervaltree >= 2.1.0',
32 'matplotlib >= 1.5.3',
33 'mido == 1.2.6',
34 'numpy >= 1.11.0',
35 'pandas >= 0.18.1',
36 'pretty_midi >= 0.2.6',
37 'python-rtmidi',
38 'scipy >= 0.18.1',
39 'tensorflow >= 1.1.0',
40 'wheel',
41 ]
42
43 CONSOLE_SCRIPTS = [
44 'magenta.interfaces.midi.magenta_midi',
45 'magenta.interfaces.midi.midi_clock',
46 'magenta.models.drums_rnn.drums_rnn_create_dataset',
47 'magenta.models.drums_rnn.drums_rnn_generate',
48 'magenta.models.drums_rnn.drums_rnn_train',
49 'magenta.models.image_stylization.image_stylization_create_dataset',
50 'magenta.models.image_stylization.image_stylization_evaluate',
51 'magenta.models.image_stylization.image_stylization_finetune',
52 'magenta.models.image_stylization.image_stylization_train',
53 'magenta.models.image_stylization.image_stylization_transform',
54 'magenta.models.improv_rnn.improv_rnn_create_dataset',
55 'magenta.models.improv_rnn.improv_rnn_generate',
56 'magenta.models.improv_rnn.improv_rnn_train',
57 'magenta.models.melody_rnn.melody_rnn_create_dataset',
58 'magenta.models.melody_rnn.melody_rnn_generate',
59 'magenta.models.melody_rnn.melody_rnn_train',
60 'magenta.models.nsynth.wavenet.nsynth_generate',
61 'magenta.models.nsynth.wavenet.nsynth_save_embeddings',
62 'magenta.models.performance_rnn.performance_rnn_create_dataset',
63 'magenta.models.performance_rnn.performance_rnn_generate',
64 'magenta.models.performance_rnn.performance_rnn_train',
65 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_create_dataset',
66 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_generate',
67 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_train',
68 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',
69 'magenta.models.polyphony_rnn.polyphony_rnn_generate',
70 'magenta.models.polyphony_rnn.polyphony_rnn_train',
71 'magenta.models.rl_tuner.rl_tuner_train',
72 'magenta.models.sketch_rnn.sketch_rnn_train',
73 'magenta.scripts.convert_dir_to_note_sequences',
74 ]
75
76 setup(
77 name='magenta',
78 version=__version__, # pylint: disable=undefined-variable
79 description='Use machine learning to create art and music',
80 long_description='',
81 url='https://magenta.tensorflow.org/',
82 author='Google Inc.',
83 author_email='[email protected]',
84 license='Apache 2',
85 # PyPI package information.
86 classifiers=[
87 'Development Status :: 4 - Beta',
88 'Intended Audience :: Developers',
89 'Intended Audience :: Education',
90 'Intended Audience :: Science/Research',
91 'License :: OSI Approved :: Apache Software License',
92 'Programming Language :: Python :: 2.7',
93 'Programming Language :: Python :: 3',
94 'Topic :: Scientific/Engineering :: Mathematics',
95 'Topic :: Software Development :: Libraries :: Python Modules',
96 'Topic :: Software Development :: Libraries',
97 ],
98 keywords='tensorflow machine learning magenta music art',
99
100 packages=find_packages(),
101 install_requires=REQUIRED_PACKAGES,
102 entry_points={
103 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in
104 ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],
105 },
106
107 include_package_data=True,
108 package_data={
109 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],
110 },
111 )
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/magenta/tools/pip/setup.py b/magenta/tools/pip/setup.py
--- a/magenta/tools/pip/setup.py
+++ b/magenta/tools/pip/setup.py
@@ -20,7 +20,8 @@
# executing __init__.py, which will end up requiring a bunch of dependencies to
# execute (e.g., tensorflow, pretty_midi, etc.).
# Makes the __version__ variable available.
-execfile('magenta/version.py')
+with open('magenta/version.py') as in_file:
+ exec(in_file.read())
REQUIRED_PACKAGES = [
| {"golden_diff": "diff --git a/magenta/tools/pip/setup.py b/magenta/tools/pip/setup.py\n--- a/magenta/tools/pip/setup.py\n+++ b/magenta/tools/pip/setup.py\n@@ -20,7 +20,8 @@\n # executing __init__.py, which will end up requiring a bunch of dependencies to\n # execute (e.g., tensorflow, pretty_midi, etc.).\n # Makes the __version__ variable available.\n-execfile('magenta/version.py')\n+with open('magenta/version.py') as in_file:\n+ exec(in_file.read())\n \n \n REQUIRED_PACKAGES = [\n", "issue": "execfile() was removed from Python 3\nhttps://github.com/tensorflow/magenta/blob/master/magenta/tools/pip/setup.py#L23\n", "before_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A setuptools based setup module for magenta.\"\"\"\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n# Bit of a hack to parse the version string stored in version.py without\n# executing __init__.py, which will end up requiring a bunch of dependencies to\n# execute (e.g., tensorflow, pretty_midi, etc.).\n# Makes the __version__ variable available.\nexecfile('magenta/version.py')\n\n\nREQUIRED_PACKAGES = [\n 'IPython',\n 'Pillow >= 3.4.2',\n 'bokeh >= 0.12.0',\n 'futures',\n 'intervaltree >= 2.1.0',\n 'matplotlib >= 1.5.3',\n 'mido == 1.2.6',\n 'numpy >= 1.11.0',\n 'pandas >= 0.18.1',\n 'pretty_midi >= 0.2.6',\n 'python-rtmidi',\n 'scipy >= 0.18.1',\n 'tensorflow >= 1.1.0',\n 'wheel',\n]\n\nCONSOLE_SCRIPTS = [\n 'magenta.interfaces.midi.magenta_midi',\n 'magenta.interfaces.midi.midi_clock',\n 'magenta.models.drums_rnn.drums_rnn_create_dataset',\n 'magenta.models.drums_rnn.drums_rnn_generate',\n 'magenta.models.drums_rnn.drums_rnn_train',\n 'magenta.models.image_stylization.image_stylization_create_dataset',\n 'magenta.models.image_stylization.image_stylization_evaluate',\n 'magenta.models.image_stylization.image_stylization_finetune',\n 'magenta.models.image_stylization.image_stylization_train',\n 'magenta.models.image_stylization.image_stylization_transform',\n 'magenta.models.improv_rnn.improv_rnn_create_dataset',\n 'magenta.models.improv_rnn.improv_rnn_generate',\n 'magenta.models.improv_rnn.improv_rnn_train',\n 'magenta.models.melody_rnn.melody_rnn_create_dataset',\n 'magenta.models.melody_rnn.melody_rnn_generate',\n 'magenta.models.melody_rnn.melody_rnn_train',\n 'magenta.models.nsynth.wavenet.nsynth_generate',\n 'magenta.models.nsynth.wavenet.nsynth_save_embeddings',\n 'magenta.models.performance_rnn.performance_rnn_create_dataset',\n 'magenta.models.performance_rnn.performance_rnn_generate',\n 'magenta.models.performance_rnn.performance_rnn_train',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_create_dataset',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_generate',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_train',\n 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',\n 'magenta.models.polyphony_rnn.polyphony_rnn_generate',\n 
'magenta.models.polyphony_rnn.polyphony_rnn_train',\n 'magenta.models.rl_tuner.rl_tuner_train',\n 'magenta.models.sketch_rnn.sketch_rnn_train',\n 'magenta.scripts.convert_dir_to_note_sequences',\n]\n\nsetup(\n name='magenta',\n version=__version__, # pylint: disable=undefined-variable\n description='Use machine learning to create art and music',\n long_description='',\n url='https://magenta.tensorflow.org/',\n author='Google Inc.',\n author_email='[email protected]',\n license='Apache 2',\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='tensorflow machine learning magenta music art',\n\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n entry_points={\n 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in\n ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],\n },\n\n include_package_data=True,\n package_data={\n 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],\n },\n)\n", "path": "magenta/tools/pip/setup.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A setuptools based setup module for magenta.\"\"\"\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n# Bit of a hack to parse the version string stored in version.py without\n# executing __init__.py, which will end up requiring a bunch of dependencies to\n# execute (e.g., tensorflow, pretty_midi, etc.).\n# Makes the __version__ variable available.\nwith open('magenta/version.py') as in_file:\n exec(in_file.read())\n\n\nREQUIRED_PACKAGES = [\n 'IPython',\n 'Pillow >= 3.4.2',\n 'bokeh >= 0.12.0',\n 'futures',\n 'intervaltree >= 2.1.0',\n 'matplotlib >= 1.5.3',\n 'mido == 1.2.6',\n 'numpy >= 1.11.0',\n 'pandas >= 0.18.1',\n 'pretty_midi >= 0.2.6',\n 'python-rtmidi',\n 'scipy >= 0.18.1',\n 'tensorflow >= 1.1.0',\n 'wheel',\n]\n\nCONSOLE_SCRIPTS = [\n 'magenta.interfaces.midi.magenta_midi',\n 'magenta.interfaces.midi.midi_clock',\n 'magenta.models.drums_rnn.drums_rnn_create_dataset',\n 'magenta.models.drums_rnn.drums_rnn_generate',\n 'magenta.models.drums_rnn.drums_rnn_train',\n 'magenta.models.image_stylization.image_stylization_create_dataset',\n 'magenta.models.image_stylization.image_stylization_evaluate',\n 'magenta.models.image_stylization.image_stylization_finetune',\n 'magenta.models.image_stylization.image_stylization_train',\n 'magenta.models.image_stylization.image_stylization_transform',\n 'magenta.models.improv_rnn.improv_rnn_create_dataset',\n 'magenta.models.improv_rnn.improv_rnn_generate',\n 
'magenta.models.improv_rnn.improv_rnn_train',\n 'magenta.models.melody_rnn.melody_rnn_create_dataset',\n 'magenta.models.melody_rnn.melody_rnn_generate',\n 'magenta.models.melody_rnn.melody_rnn_train',\n 'magenta.models.nsynth.wavenet.nsynth_generate',\n 'magenta.models.nsynth.wavenet.nsynth_save_embeddings',\n 'magenta.models.performance_rnn.performance_rnn_create_dataset',\n 'magenta.models.performance_rnn.performance_rnn_generate',\n 'magenta.models.performance_rnn.performance_rnn_train',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_create_dataset',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_generate',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_train',\n 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',\n 'magenta.models.polyphony_rnn.polyphony_rnn_generate',\n 'magenta.models.polyphony_rnn.polyphony_rnn_train',\n 'magenta.models.rl_tuner.rl_tuner_train',\n 'magenta.models.sketch_rnn.sketch_rnn_train',\n 'magenta.scripts.convert_dir_to_note_sequences',\n]\n\nsetup(\n name='magenta',\n version=__version__, # pylint: disable=undefined-variable\n description='Use machine learning to create art and music',\n long_description='',\n url='https://magenta.tensorflow.org/',\n author='Google Inc.',\n author_email='[email protected]',\n license='Apache 2',\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='tensorflow machine learning magenta music art',\n\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n entry_points={\n 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in\n ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],\n },\n\n include_package_data=True,\n package_data={\n 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],\n },\n)\n", "path": "magenta/tools/pip/setup.py"}]} |
gh_patches_debug_1079 | rasdani/github-patches | git_diff | celery__kombu-878 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ValueError: Socket not connected
Hello,
the following error happens sometimes when publishing:
```
File "/foo/bar/lib/python2.7/site-packages/kombu/messaging.py", line 181, in publish
exchange_name, declare,
File "/foo/bar/lib/python2.7/site-packages/kombu/connection.py", line 506, in _ensured
self.collect()
File "/foo/bar/lib/python2.7/site-packages/kombu/connection.py", line 350, in collect
gc_transport(self._connection)
File "/foo/bar/lib/python2.7/site-packages/kombu/transport/librabbitmq.py", line 148, in _collect
os.close(connection.fileno())
ValueError: Socket not connected
```
kombu==4.1.0
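For context, the traceback ends in `Transport._collect` in `kombu/transport/librabbitmq.py`, which does a best-effort close of the connection's file descriptor while a dead connection is being collected. Below is a minimal sketch of a close that also tolerates an already-disconnected socket; treating `ValueError` the same way as `OSError` is an assumption drawn from the traceback above (librabbitmq appears to raise `ValueError: Socket not connected` once the underlying socket is gone), not a verified behaviour of every librabbitmq version:

```python
import os


def close_connection_fd(connection):
    """Best-effort close of a (possibly already dead) AMQP connection's fd."""
    try:
        os.close(connection.fileno())
    except (OSError, ValueError):
        # OSError: the fd is already closed or invalid.
        # ValueError: the socket is not connected, so there is no fd to close.
        # Either way there is nothing left to release, so cleanup continues.
        pass
```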
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kombu/transport/librabbitmq.py`
Content:
```
1 """`librabbitmq`_ transport.
2
3 .. _`librabbitmq`: https://pypi.python.org/librabbitmq/
4 """
5 from __future__ import absolute_import, unicode_literals
6
7 import os
8 import socket
9 import warnings
10
11 import librabbitmq as amqp
12 from librabbitmq import ChannelError, ConnectionError
13
14 from kombu.five import items, values
15 from kombu.utils.amq_manager import get_manager
16 from kombu.utils.text import version_string_as_tuple
17
18 from . import base
19 from .base import to_rabbitmq_queue_arguments
20
21 W_VERSION = """
22 librabbitmq version too old to detect RabbitMQ version information
23 so make sure you are using librabbitmq 1.5 when using rabbitmq > 3.3
24 """
25 DEFAULT_PORT = 5672
26 DEFAULT_SSL_PORT = 5671
27
28 NO_SSL_ERROR = """\
29 ssl not supported by librabbitmq, please use pyamqp:// or stunnel\
30 """
31
32
33 class Message(base.Message):
34 """AMQP Message (librabbitmq)."""
35
36 def __init__(self, channel, props, info, body):
37 super(Message, self).__init__(
38 channel=channel,
39 body=body,
40 delivery_info=info,
41 properties=props,
42 delivery_tag=info.get('delivery_tag'),
43 content_type=props.get('content_type'),
44 content_encoding=props.get('content_encoding'),
45 headers=props.get('headers'))
46
47
48 class Channel(amqp.Channel, base.StdChannel):
49 """AMQP Channel (librabbitmq)."""
50
51 Message = Message
52
53 def prepare_message(self, body, priority=None,
54 content_type=None, content_encoding=None,
55 headers=None, properties=None):
56 """Encapsulate data into a AMQP message."""
57 properties = properties if properties is not None else {}
58 properties.update({'content_type': content_type,
59 'content_encoding': content_encoding,
60 'headers': headers,
61 'priority': priority})
62 return body, properties
63
64 def prepare_queue_arguments(self, arguments, **kwargs):
65 arguments = to_rabbitmq_queue_arguments(arguments, **kwargs)
66 return {k.encode('utf8'): v for k, v in items(arguments)}
67
68
69 class Connection(amqp.Connection):
70 """AMQP Connection (librabbitmq)."""
71
72 Channel = Channel
73 Message = Message
74
75
76 class Transport(base.Transport):
77 """AMQP Transport (librabbitmq)."""
78
79 Connection = Connection
80
81 default_port = DEFAULT_PORT
82 default_ssl_port = DEFAULT_SSL_PORT
83
84 connection_errors = (
85 base.Transport.connection_errors + (
86 ConnectionError, socket.error, IOError, OSError)
87 )
88 channel_errors = (
89 base.Transport.channel_errors + (ChannelError,)
90 )
91 driver_type = 'amqp'
92 driver_name = 'librabbitmq'
93
94 implements = base.Transport.implements.extend(
95 asynchronous=True,
96 heartbeats=False,
97 )
98
99 def __init__(self, client, **kwargs):
100 self.client = client
101 self.default_port = kwargs.get('default_port') or self.default_port
102 self.default_ssl_port = (kwargs.get('default_ssl_port') or
103 self.default_ssl_port)
104 self.__reader = None
105
106 def driver_version(self):
107 return amqp.__version__
108
109 def create_channel(self, connection):
110 return connection.channel()
111
112 def drain_events(self, connection, **kwargs):
113 return connection.drain_events(**kwargs)
114
115 def establish_connection(self):
116 """Establish connection to the AMQP broker."""
117 conninfo = self.client
118 for name, default_value in items(self.default_connection_params):
119 if not getattr(conninfo, name, None):
120 setattr(conninfo, name, default_value)
121 if conninfo.ssl:
122 raise NotImplementedError(NO_SSL_ERROR)
123 opts = dict({
124 'host': conninfo.host,
125 'userid': conninfo.userid,
126 'password': conninfo.password,
127 'virtual_host': conninfo.virtual_host,
128 'login_method': conninfo.login_method,
129 'insist': conninfo.insist,
130 'ssl': conninfo.ssl,
131 'connect_timeout': conninfo.connect_timeout,
132 }, **conninfo.transport_options or {})
133 conn = self.Connection(**opts)
134 conn.client = self.client
135 self.client.drain_events = conn.drain_events
136 return conn
137
138 def close_connection(self, connection):
139 """Close the AMQP broker connection."""
140 self.client.drain_events = None
141 connection.close()
142
143 def _collect(self, connection):
144 if connection is not None:
145 for channel in values(connection.channels):
146 channel.connection = None
147 try:
148 os.close(connection.fileno())
149 except OSError:
150 pass
151 connection.channels.clear()
152 connection.callbacks.clear()
153 self.client.drain_events = None
154 self.client = None
155
156 def verify_connection(self, connection):
157 return connection.connected
158
159 def register_with_event_loop(self, connection, loop):
160 loop.add_reader(
161 connection.fileno(), self.on_readable, connection, loop,
162 )
163
164 def get_manager(self, *args, **kwargs):
165 return get_manager(self.client, *args, **kwargs)
166
167 def qos_semantics_matches_spec(self, connection):
168 try:
169 props = connection.server_properties
170 except AttributeError:
171 warnings.warn(UserWarning(W_VERSION))
172 else:
173 if props.get('product') == 'RabbitMQ':
174 return version_string_as_tuple(props['version']) < (3, 3)
175 return True
176
177 @property
178 def default_connection_params(self):
179 return {
180 'userid': 'guest',
181 'password': 'guest',
182 'port': (self.default_ssl_port if self.client.ssl
183 else self.default_port),
184 'hostname': 'localhost',
185 'login_method': 'AMQPLAIN',
186 }
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kombu/transport/librabbitmq.py b/kombu/transport/librabbitmq.py
--- a/kombu/transport/librabbitmq.py
+++ b/kombu/transport/librabbitmq.py
@@ -146,7 +146,7 @@
channel.connection = None
try:
os.close(connection.fileno())
- except OSError:
+ except (OSError, ValueError):
pass
connection.channels.clear()
connection.callbacks.clear()
| {"golden_diff": "diff --git a/kombu/transport/librabbitmq.py b/kombu/transport/librabbitmq.py\n--- a/kombu/transport/librabbitmq.py\n+++ b/kombu/transport/librabbitmq.py\n@@ -146,7 +146,7 @@\n channel.connection = None\n try:\n os.close(connection.fileno())\n- except OSError:\n+ except (OSError, ValueError):\n pass\n connection.channels.clear()\n connection.callbacks.clear()\n", "issue": "ValueError: Socket not connected\nHello,\r\nthe following error happens sometimes when publishing :\r\n\r\n```\r\n File \"/foo/bar/lib/python2.7/site-packages/kombu/messaging.py\", line 181, in publish\r\n exchange_name, declare,\r\n File \"/foo/bar/lib/python2.7/site-packages/kombu/connection.py\", line 506, in _ensured\r\n self.collect()\r\n File \"/foo/bar/lib/python2.7/site-packages/kombu/connection.py\", line 350, in collect\r\n gc_transport(self._connection)\r\n File \"/foo/bar/lib/python2.7/site-packages/kombu/transport/librabbitmq.py\", line 148, in _collect\r\n os.close(connection.fileno())\r\nValueError: Socket not connected\r\n\r\n```\r\n\r\nkombu==4.1.0\n", "before_files": [{"content": "\"\"\"`librabbitmq`_ transport.\n\n.. _`librabbitmq`: https://pypi.python.org/librabbitmq/\n\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport os\nimport socket\nimport warnings\n\nimport librabbitmq as amqp\nfrom librabbitmq import ChannelError, ConnectionError\n\nfrom kombu.five import items, values\nfrom kombu.utils.amq_manager import get_manager\nfrom kombu.utils.text import version_string_as_tuple\n\nfrom . import base\nfrom .base import to_rabbitmq_queue_arguments\n\nW_VERSION = \"\"\"\n librabbitmq version too old to detect RabbitMQ version information\n so make sure you are using librabbitmq 1.5 when using rabbitmq > 3.3\n\"\"\"\nDEFAULT_PORT = 5672\nDEFAULT_SSL_PORT = 5671\n\nNO_SSL_ERROR = \"\"\"\\\nssl not supported by librabbitmq, please use pyamqp:// or stunnel\\\n\"\"\"\n\n\nclass Message(base.Message):\n \"\"\"AMQP Message (librabbitmq).\"\"\"\n\n def __init__(self, channel, props, info, body):\n super(Message, self).__init__(\n channel=channel,\n body=body,\n delivery_info=info,\n properties=props,\n delivery_tag=info.get('delivery_tag'),\n content_type=props.get('content_type'),\n content_encoding=props.get('content_encoding'),\n headers=props.get('headers'))\n\n\nclass Channel(amqp.Channel, base.StdChannel):\n \"\"\"AMQP Channel (librabbitmq).\"\"\"\n\n Message = Message\n\n def prepare_message(self, body, priority=None,\n content_type=None, content_encoding=None,\n headers=None, properties=None):\n \"\"\"Encapsulate data into a AMQP message.\"\"\"\n properties = properties if properties is not None else {}\n properties.update({'content_type': content_type,\n 'content_encoding': content_encoding,\n 'headers': headers,\n 'priority': priority})\n return body, properties\n\n def prepare_queue_arguments(self, arguments, **kwargs):\n arguments = to_rabbitmq_queue_arguments(arguments, **kwargs)\n return {k.encode('utf8'): v for k, v in items(arguments)}\n\n\nclass Connection(amqp.Connection):\n \"\"\"AMQP Connection (librabbitmq).\"\"\"\n\n Channel = Channel\n Message = Message\n\n\nclass Transport(base.Transport):\n \"\"\"AMQP Transport (librabbitmq).\"\"\"\n\n Connection = Connection\n\n default_port = DEFAULT_PORT\n default_ssl_port = DEFAULT_SSL_PORT\n\n connection_errors = (\n base.Transport.connection_errors + (\n ConnectionError, socket.error, IOError, OSError)\n )\n channel_errors = (\n base.Transport.channel_errors + (ChannelError,)\n )\n driver_type = 'amqp'\n 
driver_name = 'librabbitmq'\n\n implements = base.Transport.implements.extend(\n asynchronous=True,\n heartbeats=False,\n )\n\n def __init__(self, client, **kwargs):\n self.client = client\n self.default_port = kwargs.get('default_port') or self.default_port\n self.default_ssl_port = (kwargs.get('default_ssl_port') or\n self.default_ssl_port)\n self.__reader = None\n\n def driver_version(self):\n return amqp.__version__\n\n def create_channel(self, connection):\n return connection.channel()\n\n def drain_events(self, connection, **kwargs):\n return connection.drain_events(**kwargs)\n\n def establish_connection(self):\n \"\"\"Establish connection to the AMQP broker.\"\"\"\n conninfo = self.client\n for name, default_value in items(self.default_connection_params):\n if not getattr(conninfo, name, None):\n setattr(conninfo, name, default_value)\n if conninfo.ssl:\n raise NotImplementedError(NO_SSL_ERROR)\n opts = dict({\n 'host': conninfo.host,\n 'userid': conninfo.userid,\n 'password': conninfo.password,\n 'virtual_host': conninfo.virtual_host,\n 'login_method': conninfo.login_method,\n 'insist': conninfo.insist,\n 'ssl': conninfo.ssl,\n 'connect_timeout': conninfo.connect_timeout,\n }, **conninfo.transport_options or {})\n conn = self.Connection(**opts)\n conn.client = self.client\n self.client.drain_events = conn.drain_events\n return conn\n\n def close_connection(self, connection):\n \"\"\"Close the AMQP broker connection.\"\"\"\n self.client.drain_events = None\n connection.close()\n\n def _collect(self, connection):\n if connection is not None:\n for channel in values(connection.channels):\n channel.connection = None\n try:\n os.close(connection.fileno())\n except OSError:\n pass\n connection.channels.clear()\n connection.callbacks.clear()\n self.client.drain_events = None\n self.client = None\n\n def verify_connection(self, connection):\n return connection.connected\n\n def register_with_event_loop(self, connection, loop):\n loop.add_reader(\n connection.fileno(), self.on_readable, connection, loop,\n )\n\n def get_manager(self, *args, **kwargs):\n return get_manager(self.client, *args, **kwargs)\n\n def qos_semantics_matches_spec(self, connection):\n try:\n props = connection.server_properties\n except AttributeError:\n warnings.warn(UserWarning(W_VERSION))\n else:\n if props.get('product') == 'RabbitMQ':\n return version_string_as_tuple(props['version']) < (3, 3)\n return True\n\n @property\n def default_connection_params(self):\n return {\n 'userid': 'guest',\n 'password': 'guest',\n 'port': (self.default_ssl_port if self.client.ssl\n else self.default_port),\n 'hostname': 'localhost',\n 'login_method': 'AMQPLAIN',\n }\n", "path": "kombu/transport/librabbitmq.py"}], "after_files": [{"content": "\"\"\"`librabbitmq`_ transport.\n\n.. _`librabbitmq`: https://pypi.python.org/librabbitmq/\n\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport os\nimport socket\nimport warnings\n\nimport librabbitmq as amqp\nfrom librabbitmq import ChannelError, ConnectionError\n\nfrom kombu.five import items, values\nfrom kombu.utils.amq_manager import get_manager\nfrom kombu.utils.text import version_string_as_tuple\n\nfrom . 
import base\nfrom .base import to_rabbitmq_queue_arguments\n\nW_VERSION = \"\"\"\n librabbitmq version too old to detect RabbitMQ version information\n so make sure you are using librabbitmq 1.5 when using rabbitmq > 3.3\n\"\"\"\nDEFAULT_PORT = 5672\nDEFAULT_SSL_PORT = 5671\n\nNO_SSL_ERROR = \"\"\"\\\nssl not supported by librabbitmq, please use pyamqp:// or stunnel\\\n\"\"\"\n\n\nclass Message(base.Message):\n \"\"\"AMQP Message (librabbitmq).\"\"\"\n\n def __init__(self, channel, props, info, body):\n super(Message, self).__init__(\n channel=channel,\n body=body,\n delivery_info=info,\n properties=props,\n delivery_tag=info.get('delivery_tag'),\n content_type=props.get('content_type'),\n content_encoding=props.get('content_encoding'),\n headers=props.get('headers'))\n\n\nclass Channel(amqp.Channel, base.StdChannel):\n \"\"\"AMQP Channel (librabbitmq).\"\"\"\n\n Message = Message\n\n def prepare_message(self, body, priority=None,\n content_type=None, content_encoding=None,\n headers=None, properties=None):\n \"\"\"Encapsulate data into a AMQP message.\"\"\"\n properties = properties if properties is not None else {}\n properties.update({'content_type': content_type,\n 'content_encoding': content_encoding,\n 'headers': headers,\n 'priority': priority})\n return body, properties\n\n def prepare_queue_arguments(self, arguments, **kwargs):\n arguments = to_rabbitmq_queue_arguments(arguments, **kwargs)\n return {k.encode('utf8'): v for k, v in items(arguments)}\n\n\nclass Connection(amqp.Connection):\n \"\"\"AMQP Connection (librabbitmq).\"\"\"\n\n Channel = Channel\n Message = Message\n\n\nclass Transport(base.Transport):\n \"\"\"AMQP Transport (librabbitmq).\"\"\"\n\n Connection = Connection\n\n default_port = DEFAULT_PORT\n default_ssl_port = DEFAULT_SSL_PORT\n\n connection_errors = (\n base.Transport.connection_errors + (\n ConnectionError, socket.error, IOError, OSError)\n )\n channel_errors = (\n base.Transport.channel_errors + (ChannelError,)\n )\n driver_type = 'amqp'\n driver_name = 'librabbitmq'\n\n implements = base.Transport.implements.extend(\n asynchronous=True,\n heartbeats=False,\n )\n\n def __init__(self, client, **kwargs):\n self.client = client\n self.default_port = kwargs.get('default_port') or self.default_port\n self.default_ssl_port = (kwargs.get('default_ssl_port') or\n self.default_ssl_port)\n self.__reader = None\n\n def driver_version(self):\n return amqp.__version__\n\n def create_channel(self, connection):\n return connection.channel()\n\n def drain_events(self, connection, **kwargs):\n return connection.drain_events(**kwargs)\n\n def establish_connection(self):\n \"\"\"Establish connection to the AMQP broker.\"\"\"\n conninfo = self.client\n for name, default_value in items(self.default_connection_params):\n if not getattr(conninfo, name, None):\n setattr(conninfo, name, default_value)\n if conninfo.ssl:\n raise NotImplementedError(NO_SSL_ERROR)\n opts = dict({\n 'host': conninfo.host,\n 'userid': conninfo.userid,\n 'password': conninfo.password,\n 'virtual_host': conninfo.virtual_host,\n 'login_method': conninfo.login_method,\n 'insist': conninfo.insist,\n 'ssl': conninfo.ssl,\n 'connect_timeout': conninfo.connect_timeout,\n }, **conninfo.transport_options or {})\n conn = self.Connection(**opts)\n conn.client = self.client\n self.client.drain_events = conn.drain_events\n return conn\n\n def close_connection(self, connection):\n \"\"\"Close the AMQP broker connection.\"\"\"\n self.client.drain_events = None\n connection.close()\n\n def _collect(self, 
connection):\n if connection is not None:\n for channel in values(connection.channels):\n channel.connection = None\n try:\n os.close(connection.fileno())\n except (OSError, ValueError):\n pass\n connection.channels.clear()\n connection.callbacks.clear()\n self.client.drain_events = None\n self.client = None\n\n def verify_connection(self, connection):\n return connection.connected\n\n def register_with_event_loop(self, connection, loop):\n loop.add_reader(\n connection.fileno(), self.on_readable, connection, loop,\n )\n\n def get_manager(self, *args, **kwargs):\n return get_manager(self.client, *args, **kwargs)\n\n def qos_semantics_matches_spec(self, connection):\n try:\n props = connection.server_properties\n except AttributeError:\n warnings.warn(UserWarning(W_VERSION))\n else:\n if props.get('product') == 'RabbitMQ':\n return version_string_as_tuple(props['version']) < (3, 3)\n return True\n\n @property\n def default_connection_params(self):\n return {\n 'userid': 'guest',\n 'password': 'guest',\n 'port': (self.default_ssl_port if self.client.ssl\n else self.default_port),\n 'hostname': 'localhost',\n 'login_method': 'AMQPLAIN',\n }\n", "path": "kombu/transport/librabbitmq.py"}]} |
gh_patches_debug_1080 | rasdani/github-patches | git_diff | DDMAL__CantusDB-900 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
we need to re-add a restart policy to docker-compose.yml
A recent change to docker-compose.yml removed the `restart: always` policy we added to our containers a couple of weeks ago. We should re-instate this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/widgets.py`
Content:
```
1 from django.forms.widgets import TextInput, Select, Textarea, CheckboxInput
2 from django.utils.safestring import mark_safe
3
4 class TextInputWidget(TextInput):
5 def __init__(self):
6 self.attrs = {"class": "form-control form-control-sm"}
7
8
9 class SelectWidget(Select):
10 """
11 not used, this widget does work, but we cannot order the choices by name
12 """
13
14 def __init__(self):
15 attrs = {"class": "form-control custom-select custom-select-sm"}
16 super().__init__(attrs=attrs)
17 # super().choices = choices
18 # self.choices = super().choices
19
20
21 class TextAreaWidget(Textarea):
22 def __init__(self):
23 self.attrs = {"class": "form-control", "rows": "3"}
24
25
26 class VolpianoAreaWidget(Textarea):
27 def __init__(self):
28 self.attrs = {
29 "class": "form-control",
30 "rows": "1.5",
31 "style": "font-family: Volpiano; font-size: xx-large",
32 }
33
34
35 class VolpianoInputWidget(TextInput):
36 def __init__(self):
37 self.attrs = {
38 "class": "form-control form-control-sm",
39 "style": "font-family: Volpiano; font-size: xx-large",
40 }
41
42
43 class CheckboxWidget(CheckboxInput):
44 pass
45
46
47 class AdminTextAreaWidget(Textarea):
48 def __init__(self):
49 self.attrs = {"class": "form-control", "rows": 10, "cols": 75}
50
51 def render(self, name, value, attrs=None, renderer=None):
52 return super().render(name, value, attrs=self.attrs) + mark_safe(
53 '<span style="color: red; font-weight: bold;"> * </span>'
54 )
55
56
57 class AdminTextInputWidget(TextInputWidget):
58 def render(self, name, value, attrs=None, renderer=None):
59 return super().render(name, value) + mark_safe(
60 '<span style="color: red; font-weight: bold;"> * </span>'
61 )
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/main_app/widgets.py b/django/cantusdb_project/main_app/widgets.py
--- a/django/cantusdb_project/main_app/widgets.py
+++ b/django/cantusdb_project/main_app/widgets.py
@@ -1,6 +1,7 @@
from django.forms.widgets import TextInput, Select, Textarea, CheckboxInput
from django.utils.safestring import mark_safe
+
class TextInputWidget(TextInput):
def __init__(self):
self.attrs = {"class": "form-control form-control-sm"}
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/widgets.py b/django/cantusdb_project/main_app/widgets.py\n--- a/django/cantusdb_project/main_app/widgets.py\n+++ b/django/cantusdb_project/main_app/widgets.py\n@@ -1,6 +1,7 @@\n from django.forms.widgets import TextInput, Select, Textarea, CheckboxInput\n from django.utils.safestring import mark_safe\n \n+\n class TextInputWidget(TextInput):\n def __init__(self):\n self.attrs = {\"class\": \"form-control form-control-sm\"}\n", "issue": "we need to re-add a restart policy to docker-compose.yml\nA recent change to docker-compose.yml removed the `restart: always` policy we added to our containers a couple of weeks ago. We should re-instate this.\n", "before_files": [{"content": "from django.forms.widgets import TextInput, Select, Textarea, CheckboxInput\nfrom django.utils.safestring import mark_safe\n\nclass TextInputWidget(TextInput):\n def __init__(self):\n self.attrs = {\"class\": \"form-control form-control-sm\"}\n\n\nclass SelectWidget(Select):\n \"\"\"\n not used, this widget does work, but we cannot order the choices by name\n \"\"\"\n\n def __init__(self):\n attrs = {\"class\": \"form-control custom-select custom-select-sm\"}\n super().__init__(attrs=attrs)\n # super().choices = choices\n # self.choices = super().choices\n\n\nclass TextAreaWidget(Textarea):\n def __init__(self):\n self.attrs = {\"class\": \"form-control\", \"rows\": \"3\"}\n\n\nclass VolpianoAreaWidget(Textarea):\n def __init__(self):\n self.attrs = {\n \"class\": \"form-control\",\n \"rows\": \"1.5\",\n \"style\": \"font-family: Volpiano; font-size: xx-large\",\n }\n\n\nclass VolpianoInputWidget(TextInput):\n def __init__(self):\n self.attrs = {\n \"class\": \"form-control form-control-sm\",\n \"style\": \"font-family: Volpiano; font-size: xx-large\",\n }\n\n\nclass CheckboxWidget(CheckboxInput):\n pass\n\n\nclass AdminTextAreaWidget(Textarea):\n def __init__(self):\n self.attrs = {\"class\": \"form-control\", \"rows\": 10, \"cols\": 75}\n\n def render(self, name, value, attrs=None, renderer=None):\n return super().render(name, value, attrs=self.attrs) + mark_safe(\n '<span style=\"color: red; font-weight: bold;\"> * </span>'\n )\n\n\nclass AdminTextInputWidget(TextInputWidget):\n def render(self, name, value, attrs=None, renderer=None):\n return super().render(name, value) + mark_safe(\n '<span style=\"color: red; font-weight: bold;\"> * </span>'\n )\n", "path": "django/cantusdb_project/main_app/widgets.py"}], "after_files": [{"content": "from django.forms.widgets import TextInput, Select, Textarea, CheckboxInput\nfrom django.utils.safestring import mark_safe\n\n\nclass TextInputWidget(TextInput):\n def __init__(self):\n self.attrs = {\"class\": \"form-control form-control-sm\"}\n\n\nclass SelectWidget(Select):\n \"\"\"\n not used, this widget does work, but we cannot order the choices by name\n \"\"\"\n\n def __init__(self):\n attrs = {\"class\": \"form-control custom-select custom-select-sm\"}\n super().__init__(attrs=attrs)\n # super().choices = choices\n # self.choices = super().choices\n\n\nclass TextAreaWidget(Textarea):\n def __init__(self):\n self.attrs = {\"class\": \"form-control\", \"rows\": \"3\"}\n\n\nclass VolpianoAreaWidget(Textarea):\n def __init__(self):\n self.attrs = {\n \"class\": \"form-control\",\n \"rows\": \"1.5\",\n \"style\": \"font-family: Volpiano; font-size: xx-large\",\n }\n\n\nclass VolpianoInputWidget(TextInput):\n def __init__(self):\n self.attrs = {\n \"class\": \"form-control form-control-sm\",\n \"style\": 
\"font-family: Volpiano; font-size: xx-large\",\n }\n\n\nclass CheckboxWidget(CheckboxInput):\n pass\n\n\nclass AdminTextAreaWidget(Textarea):\n def __init__(self):\n self.attrs = {\"class\": \"form-control\", \"rows\": 10, \"cols\": 75}\n\n def render(self, name, value, attrs=None, renderer=None):\n return super().render(name, value, attrs=self.attrs) + mark_safe(\n '<span style=\"color: red; font-weight: bold;\"> * </span>'\n )\n\n\nclass AdminTextInputWidget(TextInputWidget):\n def render(self, name, value, attrs=None, renderer=None):\n return super().render(name, value) + mark_safe(\n '<span style=\"color: red; font-weight: bold;\"> * </span>'\n )\n", "path": "django/cantusdb_project/main_app/widgets.py"}]} |
gh_patches_debug_1081 | rasdani/github-patches | git_diff | pex-tool__pex-2286 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`venv create` no longer includes `--sources-directory` contents when all files are nested
It seems like there was a regression from 2.1.148 -> 2.1.149 with the behaviour of `venv create` with a `--pex-repository` that was created with `--sources-directory`: those sources aren't included in the final venv.
Reproducer:
```shell
cd $(mktemp -d)
# create our dummy file
mkdir -p source_files/foo
touch source_files/foo/bar.py # NB.1
# touch source_files/qux.py # NB.2
for version in v2.1.148 v2.1.149; do
curl -s -L https://github.com/pantsbuild/pex/releases/download/$version/pex > pex-$version
chmod +x pex-$version
# NB.3
./pex-$version --output-file=repository-$version.pex --sources-directory=source_files
# NB.4
PEX_SCRIPT=pex3 ./pex-$version venv create --dest-dir=dest-$version --pex-repository=repository-$version.pex --layout=flat
# what was included?
tree dest-$version
done
```
Running that shows that the contents of the `dest-...` directories depend on the version (the `bar.py` file is missing when using v2.1.149), when they should be identical:
```
dest-v2.1.148
└── foo
└── bar.py
1 directory, 1 file
dest-v2.1.149
0 directories, 0 files
```
Ablation studies:
- uncommenting `NB.2` line (to have two files) passes ✅ (both versions have both `foo/bar.py` and `qux.py`)
- _replacing_ the `NB.1` with `NB.2` (to just `qux.py` at the top level) passes ✅
- _always_ using v2.1.148 on line `NB.3` (create the pex) and v2.1.149 on line `NB.4` (create the venv) passes ✅
- v2.1.149 for `NB.3` and v2.1.148 for `NB.4` fails ❌
- I think third-party dependencies work okay, but haven't confirmed in this reduced setting
- This reproduces without `--layout`, but the output is simpler with `--layout=flat`
(First observed in https://github.com/pantsbuild/pants/pull/20149.)
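One way to make the nesting dependence concrete: in `pex/util.py`, `CacheHelper.pex_code_hash` passes `dir_filter=is_pyc_dir` to `hashing.dir_hash`, while the neighbouring `dir_hash` and `zip_hash` helpers pass the negated predicate `lambda d: not is_pyc_dir(d)`. Assuming `dir_filter` selects which directories are *included* in the hash (an assumption based on how the other two helpers use it, not on reading `pex.hashing`), the un-negated form keeps only `__pycache__`-style directories and skips ordinary source directories such as `foo/`, which only matters when every source file is nested inside one. The toy sketch below just illustrates the inverted predicate and does not use Pex internals:

```python
import os


def is_pyc_dir(dir_path):
    # Simplified, hypothetical stand-in for pex.common.is_pyc_dir.
    return os.path.basename(dir_path) == "__pycache__"


dirs = ["foo", "__pycache__"]

kept_unnegated = [d for d in dirs if is_pyc_dir(d)]    # dir_filter=is_pyc_dir
kept_negated = [d for d in dirs if not is_pyc_dir(d)]  # dir_filter=lambda d: not is_pyc_dir(d)

print(kept_unnegated)  # ['__pycache__'] -> 'foo' (and foo/bar.py under it) would be skipped
print(kept_negated)    # ['foo']         -> source directories participate in the hash
```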
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/util.py`
Content:
```
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import absolute_import
5
6 import contextlib
7 import hashlib
8 import importlib
9 import os
10 import shutil
11 import tempfile
12 from hashlib import sha1
13 from site import makepath # type: ignore[attr-defined]
14
15 from pex import hashing
16 from pex.common import is_pyc_dir, is_pyc_file, safe_mkdir, safe_mkdtemp
17 from pex.compatibility import ( # type: ignore[attr-defined] # `exec_function` is defined dynamically
18 PY2,
19 exec_function,
20 )
21 from pex.orderedset import OrderedSet
22 from pex.typing import TYPE_CHECKING
23
24 if TYPE_CHECKING:
25 from typing import IO, Any, Callable, Iterator, Optional, Text
26
27 from pex.hashing import Hasher
28
29
30 class DistributionHelper(object):
31 # TODO(#584: This appears unused, but clients might still use it. We cannot remove until we
32 # have a deprecation policy.
33 @classmethod
34 def access_zipped_assets(cls, static_module_name, static_path, dir_location=None):
35 # type: (str, str, Optional[str]) -> str
36 """Create a copy of static resource files as we can't serve them from within the pex file.
37
38 :param static_module_name: Module name containing module to cache in a tempdir
39 :param static_path: Module name, for example 'serverset'
40 :param dir_location: create a new temporary directory inside, or None to have one created
41 :returns temp_dir: Temporary directory with the zipped assets inside
42 """
43 if dir_location is None:
44 temp_dir = safe_mkdtemp()
45 else:
46 temp_dir = dir_location
47
48 module = importlib.import_module(static_module_name)
49 # N.B.: This handles namespace packages new and old.
50 paths = OrderedSet(os.path.realpath(d) for d in getattr(module, "__path__", []))
51 if module.__file__:
52 # And this handles old-style __init__.py packages.
53 paths.add(os.path.realpath(module.__file__))
54
55 safe_mkdir(temp_dir)
56 for path in paths:
57 resource_dir = os.path.realpath(os.path.join(path, static_path))
58 if os.path.isdir(resource_dir):
59 for root, dirs, files in os.walk(resource_dir):
60 for d in dirs:
61 safe_mkdir(
62 os.path.join(
63 temp_dir, os.path.relpath(os.path.join(root, d), resource_dir)
64 )
65 )
66 for f in files:
67 src = os.path.join(root, f)
68 shutil.copy(src, os.path.join(temp_dir, os.path.relpath(src, resource_dir)))
69 return temp_dir
70
71
72 class CacheHelper(object):
73 @classmethod
74 def hash(cls, path, digest=None, hasher=sha1):
75 # type: (Text, Optional[Hasher], Callable[[], Hasher]) -> str
76 """Return the digest of a single file in a memory-efficient manner."""
77 if digest is None:
78 digest = hasher()
79 hashing.file_hash(path, digest)
80 return digest.hexdigest()
81
82 @classmethod
83 def pex_code_hash(cls, directory):
84 # type: (str) -> str
85 """Return a reproducible hash of the contents of a loose PEX; excluding all `.pyc` files."""
86 digest = hashlib.sha1()
87 hashing.dir_hash(
88 directory=directory,
89 digest=digest,
90 dir_filter=is_pyc_dir,
91 file_filter=lambda file_path: not is_pyc_file(file_path)
92 and not file_path.startswith("."),
93 )
94 return digest.hexdigest()
95
96 @classmethod
97 def dir_hash(cls, directory, digest=None, hasher=sha1):
98 # type: (str, Optional[Hasher], Callable[[], Hasher]) -> str
99 """Return a reproducible hash of the contents of a directory; excluding all `.pyc` files."""
100 if digest is None:
101 digest = hasher()
102 hashing.dir_hash(
103 directory=directory,
104 digest=digest,
105 dir_filter=lambda d: not is_pyc_dir(d),
106 file_filter=lambda f: not is_pyc_file(f),
107 )
108 return digest.hexdigest()
109
110 @classmethod
111 def zip_hash(
112 cls,
113 zip_path, # type: str
114 relpath=None, # type: Optional[str]
115 ):
116 # type: (...) -> str
117 """Return a reproducible hash of the contents of a zip; excluding all `.pyc` files."""
118 digest = hashlib.sha1()
119 hashing.zip_hash(
120 zip_path=zip_path,
121 digest=digest,
122 relpath=relpath,
123 dir_filter=lambda d: not is_pyc_dir(d),
124 file_filter=lambda f: not is_pyc_file(f),
125 )
126 return digest.hexdigest()
127
128
129 @contextlib.contextmanager
130 def named_temporary_file(**kwargs):
131 # type: (**Any) -> Iterator[IO]
132 """Due to a bug in python (https://bugs.python.org/issue14243), we need this to be able to use
133 the temporary file without deleting it."""
134 assert "delete" not in kwargs
135 kwargs["delete"] = False
136 fp = tempfile.NamedTemporaryFile(**kwargs)
137 try:
138 with fp:
139 yield fp
140 finally:
141 os.remove(fp.name)
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/util.py b/pex/util.py
--- a/pex/util.py
+++ b/pex/util.py
@@ -87,7 +87,7 @@
hashing.dir_hash(
directory=directory,
digest=digest,
- dir_filter=is_pyc_dir,
+ dir_filter=lambda d: not is_pyc_dir(d),
file_filter=lambda file_path: not is_pyc_file(file_path)
and not file_path.startswith("."),
)
| {"golden_diff": "diff --git a/pex/util.py b/pex/util.py\n--- a/pex/util.py\n+++ b/pex/util.py\n@@ -87,7 +87,7 @@\n hashing.dir_hash(\n directory=directory,\n digest=digest,\n- dir_filter=is_pyc_dir,\n+ dir_filter=lambda d: not is_pyc_dir(d),\n file_filter=lambda file_path: not is_pyc_file(file_path)\n and not file_path.startswith(\".\"),\n )\n", "issue": "`venv create` no longer includes `--sources-directory` contents when all files are nested\nIt seems like there was a regression from 2.1.148 -> 2.1.149 with the behaviour of `venv create` with a `--pex-repository` that was created with `--sources-directory`: those sources aren't included in the final venv.\r\n\r\nReproducer:\r\n\r\n```shell\r\ncd $(mktemp -d)\r\n\r\n# create our dummy file\r\nmkdir -p source_files/foo\r\ntouch source_files/foo/bar.py # NB.1\r\n# touch source_files/qux.py # NB.2\r\n\r\nfor version in v2.1.148 v2.1.149; do\r\n curl -s -L https://github.com/pantsbuild/pex/releases/download/$version/pex > pex-$version\r\n chmod +x pex-$version\r\n\r\n # NB.3\r\n ./pex-$version --output-file=repository-$version.pex --sources-directory=source_files\r\n\r\n # NB.4\r\n PEX_SCRIPT=pex3 ./pex-$version venv create --dest-dir=dest-$version --pex-repository=repository-$version.pex --layout=flat\r\n\r\n # what was included?\r\n tree dest-$version\r\ndone\r\n```\r\n\r\nRunning that shows that the contents of the `dest-...` directory depends on the version, without the `bar.py` file when using v2.1.149, but should be the same:\r\n\r\n```\r\ndest-v2.1.148\r\n\u2514\u2500\u2500 foo\r\n \u2514\u2500\u2500 bar.py\r\n\r\n1 directory, 1 file\r\ndest-v2.1.149\r\n\r\n0 directories, 0 files\r\n```\r\n\r\nAblative studies:\r\n\r\n- uncommenting `NB.2` line (to have two files) passes \u2705 (both versions have both `foo/bar.py` and `qux.py`)\r\n- _replacing_ the `NB.1` with `NB.2` (to just `qux.py` at the top level) passes \u2705 \r\n- _always_ using v2.1.148 on line `NB.3` (create the pex) and v2.1.149 on line `NB.4` (create the venv) passes \u2705 \r\n- v2.1.149 for `NB.3` and v2.1.148 for `NB.4` fails \u274c \r\n- I think third-party dependencies work okay, but haven't confirmed in this reduced setting\r\n- This reproduces without `--layout`, but the output is simpler with `--layout=flat`\r\n\r\n(First observed in https://github.com/pantsbuild/pants/pull/20149.)\n", "before_files": [{"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import absolute_import\n\nimport contextlib\nimport hashlib\nimport importlib\nimport os\nimport shutil\nimport tempfile\nfrom hashlib import sha1\nfrom site import makepath # type: ignore[attr-defined]\n\nfrom pex import hashing\nfrom pex.common import is_pyc_dir, is_pyc_file, safe_mkdir, safe_mkdtemp\nfrom pex.compatibility import ( # type: ignore[attr-defined] # `exec_function` is defined dynamically\n PY2,\n exec_function,\n)\nfrom pex.orderedset import OrderedSet\nfrom pex.typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import IO, Any, Callable, Iterator, Optional, Text\n\n from pex.hashing import Hasher\n\n\nclass DistributionHelper(object):\n # TODO(#584: This appears unused, but clients might still use it. 
We cannot remove until we\n # have a deprecation policy.\n @classmethod\n def access_zipped_assets(cls, static_module_name, static_path, dir_location=None):\n # type: (str, str, Optional[str]) -> str\n \"\"\"Create a copy of static resource files as we can't serve them from within the pex file.\n\n :param static_module_name: Module name containing module to cache in a tempdir\n :param static_path: Module name, for example 'serverset'\n :param dir_location: create a new temporary directory inside, or None to have one created\n :returns temp_dir: Temporary directory with the zipped assets inside\n \"\"\"\n if dir_location is None:\n temp_dir = safe_mkdtemp()\n else:\n temp_dir = dir_location\n\n module = importlib.import_module(static_module_name)\n # N.B.: This handles namespace packages new and old.\n paths = OrderedSet(os.path.realpath(d) for d in getattr(module, \"__path__\", []))\n if module.__file__:\n # And this handles old-style __init__.py packages.\n paths.add(os.path.realpath(module.__file__))\n\n safe_mkdir(temp_dir)\n for path in paths:\n resource_dir = os.path.realpath(os.path.join(path, static_path))\n if os.path.isdir(resource_dir):\n for root, dirs, files in os.walk(resource_dir):\n for d in dirs:\n safe_mkdir(\n os.path.join(\n temp_dir, os.path.relpath(os.path.join(root, d), resource_dir)\n )\n )\n for f in files:\n src = os.path.join(root, f)\n shutil.copy(src, os.path.join(temp_dir, os.path.relpath(src, resource_dir)))\n return temp_dir\n\n\nclass CacheHelper(object):\n @classmethod\n def hash(cls, path, digest=None, hasher=sha1):\n # type: (Text, Optional[Hasher], Callable[[], Hasher]) -> str\n \"\"\"Return the digest of a single file in a memory-efficient manner.\"\"\"\n if digest is None:\n digest = hasher()\n hashing.file_hash(path, digest)\n return digest.hexdigest()\n\n @classmethod\n def pex_code_hash(cls, directory):\n # type: (str) -> str\n \"\"\"Return a reproducible hash of the contents of a loose PEX; excluding all `.pyc` files.\"\"\"\n digest = hashlib.sha1()\n hashing.dir_hash(\n directory=directory,\n digest=digest,\n dir_filter=is_pyc_dir,\n file_filter=lambda file_path: not is_pyc_file(file_path)\n and not file_path.startswith(\".\"),\n )\n return digest.hexdigest()\n\n @classmethod\n def dir_hash(cls, directory, digest=None, hasher=sha1):\n # type: (str, Optional[Hasher], Callable[[], Hasher]) -> str\n \"\"\"Return a reproducible hash of the contents of a directory; excluding all `.pyc` files.\"\"\"\n if digest is None:\n digest = hasher()\n hashing.dir_hash(\n directory=directory,\n digest=digest,\n dir_filter=lambda d: not is_pyc_dir(d),\n file_filter=lambda f: not is_pyc_file(f),\n )\n return digest.hexdigest()\n\n @classmethod\n def zip_hash(\n cls,\n zip_path, # type: str\n relpath=None, # type: Optional[str]\n ):\n # type: (...) 
-> str\n \"\"\"Return a reproducible hash of the contents of a zip; excluding all `.pyc` files.\"\"\"\n digest = hashlib.sha1()\n hashing.zip_hash(\n zip_path=zip_path,\n digest=digest,\n relpath=relpath,\n dir_filter=lambda d: not is_pyc_dir(d),\n file_filter=lambda f: not is_pyc_file(f),\n )\n return digest.hexdigest()\n\n\[email protected]\ndef named_temporary_file(**kwargs):\n # type: (**Any) -> Iterator[IO]\n \"\"\"Due to a bug in python (https://bugs.python.org/issue14243), we need this to be able to use\n the temporary file without deleting it.\"\"\"\n assert \"delete\" not in kwargs\n kwargs[\"delete\"] = False\n fp = tempfile.NamedTemporaryFile(**kwargs)\n try:\n with fp:\n yield fp\n finally:\n os.remove(fp.name)\n", "path": "pex/util.py"}], "after_files": [{"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import absolute_import\n\nimport contextlib\nimport hashlib\nimport importlib\nimport os\nimport shutil\nimport tempfile\nfrom hashlib import sha1\nfrom site import makepath # type: ignore[attr-defined]\n\nfrom pex import hashing\nfrom pex.common import is_pyc_dir, is_pyc_file, safe_mkdir, safe_mkdtemp\nfrom pex.compatibility import ( # type: ignore[attr-defined] # `exec_function` is defined dynamically\n PY2,\n exec_function,\n)\nfrom pex.orderedset import OrderedSet\nfrom pex.typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import IO, Any, Callable, Iterator, Optional, Text\n\n from pex.hashing import Hasher\n\n\nclass DistributionHelper(object):\n # TODO(#584: This appears unused, but clients might still use it. We cannot remove until we\n # have a deprecation policy.\n @classmethod\n def access_zipped_assets(cls, static_module_name, static_path, dir_location=None):\n # type: (str, str, Optional[str]) -> str\n \"\"\"Create a copy of static resource files as we can't serve them from within the pex file.\n\n :param static_module_name: Module name containing module to cache in a tempdir\n :param static_path: Module name, for example 'serverset'\n :param dir_location: create a new temporary directory inside, or None to have one created\n :returns temp_dir: Temporary directory with the zipped assets inside\n \"\"\"\n if dir_location is None:\n temp_dir = safe_mkdtemp()\n else:\n temp_dir = dir_location\n\n module = importlib.import_module(static_module_name)\n # N.B.: This handles namespace packages new and old.\n paths = OrderedSet(os.path.realpath(d) for d in getattr(module, \"__path__\", []))\n if module.__file__:\n # And this handles old-style __init__.py packages.\n paths.add(os.path.realpath(module.__file__))\n\n safe_mkdir(temp_dir)\n for path in paths:\n resource_dir = os.path.realpath(os.path.join(path, static_path))\n if os.path.isdir(resource_dir):\n for root, dirs, files in os.walk(resource_dir):\n for d in dirs:\n safe_mkdir(\n os.path.join(\n temp_dir, os.path.relpath(os.path.join(root, d), resource_dir)\n )\n )\n for f in files:\n src = os.path.join(root, f)\n shutil.copy(src, os.path.join(temp_dir, os.path.relpath(src, resource_dir)))\n return temp_dir\n\n\nclass CacheHelper(object):\n @classmethod\n def hash(cls, path, digest=None, hasher=sha1):\n # type: (Text, Optional[Hasher], Callable[[], Hasher]) -> str\n \"\"\"Return the digest of a single file in a memory-efficient manner.\"\"\"\n if digest is None:\n digest = hasher()\n hashing.file_hash(path, digest)\n return digest.hexdigest()\n\n @classmethod\n def pex_code_hash(cls, 
directory):\n # type: (str) -> str\n \"\"\"Return a reproducible hash of the contents of a loose PEX; excluding all `.pyc` files.\"\"\"\n digest = hashlib.sha1()\n hashing.dir_hash(\n directory=directory,\n digest=digest,\n dir_filter=lambda d: not is_pyc_dir(d),\n file_filter=lambda file_path: not is_pyc_file(file_path)\n and not file_path.startswith(\".\"),\n )\n return digest.hexdigest()\n\n @classmethod\n def dir_hash(cls, directory, digest=None, hasher=sha1):\n # type: (str, Optional[Hasher], Callable[[], Hasher]) -> str\n \"\"\"Return a reproducible hash of the contents of a directory; excluding all `.pyc` files.\"\"\"\n if digest is None:\n digest = hasher()\n hashing.dir_hash(\n directory=directory,\n digest=digest,\n dir_filter=lambda d: not is_pyc_dir(d),\n file_filter=lambda f: not is_pyc_file(f),\n )\n return digest.hexdigest()\n\n @classmethod\n def zip_hash(\n cls,\n zip_path, # type: str\n relpath=None, # type: Optional[str]\n ):\n # type: (...) -> str\n \"\"\"Return a reproducible hash of the contents of a zip; excluding all `.pyc` files.\"\"\"\n digest = hashlib.sha1()\n hashing.zip_hash(\n zip_path=zip_path,\n digest=digest,\n relpath=relpath,\n dir_filter=lambda d: not is_pyc_dir(d),\n file_filter=lambda f: not is_pyc_file(f),\n )\n return digest.hexdigest()\n\n\[email protected]\ndef named_temporary_file(**kwargs):\n # type: (**Any) -> Iterator[IO]\n \"\"\"Due to a bug in python (https://bugs.python.org/issue14243), we need this to be able to use\n the temporary file without deleting it.\"\"\"\n assert \"delete\" not in kwargs\n kwargs[\"delete\"] = False\n fp = tempfile.NamedTemporaryFile(**kwargs)\n try:\n with fp:\n yield fp\n finally:\n os.remove(fp.name)\n", "path": "pex/util.py"}]} |
gh_patches_debug_1082 | rasdani/github-patches | git_diff | pymodbus-dev__pymodbus-945 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AsyncioModbusSerialClient TypeError Coroutine
### Versions
* Python: 3.9
* OS: Ubuntu 20.04
* Pymodbus: `3.0.0dev4`
* Modbus Hardware (if used):
### Pymodbus Specific
* Server: None
* Client: rtu - async
### Description
When I try `3.0.0dev4` and the latest commit as of today, I get a `TypeError` in `serial.py` saying that the variable `coro` is not a coroutine. I am trying to create `AsyncModbusSerialClient(schedulers.ASYNC_IO, port=connPort, baudrate=connSpeed, method=connMethod, timeout=commTimeout)` in an existing running loop.
I don't think the coroutine was created correctly. What do you think?
Old:
`future = asyncio.run_coroutine_threadsafe(coro, loop=loop)`
Proposed:
` future = asyncio.run_coroutine_threadsafe(coro(), loop=loop)`
"""Create asyncio based asynchronous serial clients.
:param port: Serial port
:param framer: Modbus Framer
:param kwargs: Serial port options
:return: asyncio event loop and serial client
"""
try:
loop = kwargs.pop("loop", None) or asyncio.get_running_loop()
except RuntimeError:
loop = asyncio.new_event_loop()
proto_cls = kwargs.get("proto_cls") or ModbusClientProtocol
client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)
coro = client.connect
if not loop.is_running():
loop.run_until_complete(coro())
else: # loop is not asyncio.get_event_loop():
future = asyncio.run_coroutine_threadsafe(coro, loop=loop) <- `Fails here`
future.result()
return loop, client
```
``` py
def async_io_factory(port=None, framer=None, **kwargs):
"""Create asyncio based asynchronous serial clients.
:param port: Serial port
:param framer: Modbus Framer
:param kwargs: Serial port options
:return: asyncio event loop and serial client
"""
try:
loop = kwargs.pop("loop", None) or asyncio.get_running_loop()
except RuntimeError:
loop = asyncio.new_event_loop()
proto_cls = kwargs.get("proto_cls") or ModbusClientProtocol
client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)
coro = client.connect
if not loop.is_running():
loop.run_until_complete(coro())
else: # loop is not asyncio.get_event_loop():
future = asyncio.run_coroutine_threadsafe(coro, loop=loop) <- `Fails here`
future.result()
return loop, client
```
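The proposed change matches how `asyncio.run_coroutine_threadsafe` is meant to be used: it takes a coroutine *object*, so the async function has to be called first. A minimal, self-contained illustration, independent of pymodbus (the `connect` coroutine below is invented for the example), might look like this:

```python
import asyncio
import threading


async def connect():
    await asyncio.sleep(0)
    return "connected"


loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

coro = connect  # the bare function, as in the failing code path

# asyncio.run_coroutine_threadsafe(coro, loop)           # passing the function raises TypeError
future = asyncio.run_coroutine_threadsafe(coro(), loop)  # calling it first yields a coroutine object
print(future.result(timeout=5))

loop.call_soon_threadsafe(loop.stop)
```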
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pymodbus/client/asynchronous/factory/serial.py`
Content:
```
1 """Factory to create asynchronous serial clients based on twisted/asyncio."""
2 # pylint: disable=missing-type-doc
3 import logging
4 import asyncio
5
6 from pymodbus.client.asynchronous import schedulers
7 from pymodbus.client.asynchronous.thread import EventLoopThread
8 from pymodbus.client.asynchronous.async_io import (
9 ModbusClientProtocol,
10 AsyncioModbusSerialClient,
11 )
12 from pymodbus.factory import ClientDecoder
13
14
15 _logger = logging.getLogger(__name__)
16
17
18 def reactor_factory(port, framer, **kwargs):
19 """Create twisted serial asynchronous client.
20
21 :param port: Serial port
22 :param framer: Modbus Framer
23 :param kwargs:
24 :return: event_loop_thread and twisted serial client
25 """
26 from twisted.internet import reactor # pylint: disable=import-outside-toplevel
27 from twisted.internet.serialport import ( # pylint: disable=import-outside-toplevel
28 SerialPort,
29 )
30 from twisted.internet.protocol import ( # pylint: disable=import-outside-toplevel
31 ClientFactory,
32 )
33
34 class SerialClientFactory(ClientFactory):
35 """Define serial client factory."""
36
37 def __init__(self, framer, proto_cls):
38 """Remember things necessary for building a protocols."""
39 self.proto_cls = proto_cls
40 self.framer = framer
41
42 def buildProtocol(self): # pylint: disable=arguments-differ
43 """Create a protocol and start the reading cycle-"""
44 proto = self.proto_cls(self.framer)
45 proto.factory = self
46 return proto
47
48 class SerialModbusClient(SerialPort): # pylint: disable=abstract-method
49 """Define serial client."""
50
51 def __init__(self, framer, *args, **kwargs):
52 """Initialize the client and start listening on the serial port.
53
54 :param factory: The factory to build clients with
55 """
56 self.decoder = ClientDecoder()
57 proto_cls = kwargs.pop("proto_cls", None)
58 proto = SerialClientFactory(framer, proto_cls).buildProtocol()
59 SerialPort.__init__(self, proto, *args, **kwargs)
60
61 proto = EventLoopThread(
62 "reactor",
63 reactor.run, # pylint: disable=no-member
64 reactor.stop, # pylint: disable=no-member
65 installSignalHandlers=0,
66 )
67 ser_client = SerialModbusClient(framer, port, reactor, **kwargs)
68
69 return proto, ser_client
70
71
72 def async_io_factory(port=None, framer=None, **kwargs):
73 """Create asyncio based asynchronous serial clients.
74
75 :param port: Serial port
76 :param framer: Modbus Framer
77 :param kwargs: Serial port options
78 :return: asyncio event loop and serial client
79 """
80 try:
81 loop = kwargs.pop("loop", None) or asyncio.get_running_loop()
82 except RuntimeError:
83 loop = asyncio.new_event_loop()
84
85 proto_cls = kwargs.get("proto_cls") or ModbusClientProtocol
86
87 client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)
88 coro = client.connect
89 if not loop.is_running():
90 loop.run_until_complete(coro())
91 else: # loop is not asyncio.get_event_loop():
92 future = asyncio.run_coroutine_threadsafe(coro, loop=loop)
93 future.result()
94
95 return loop, client
96
97
98 def get_factory(scheduler):
99 """Get protocol factory based on the backend scheduler being used.
100
101 :param scheduler: REACTOR/ASYNC_IO
102 :return:
103 :raises Exception: Failure
104 """
105 if scheduler == schedulers.REACTOR:
106 return reactor_factory
107 if scheduler == schedulers.ASYNC_IO:
108 return async_io_factory
109
110 txt = f"Allowed Schedulers: {schedulers.REACTOR}, {schedulers.ASYNC_IO}"
111 _logger.warning(txt)
112 txt = f'Invalid Scheduler "{scheduler}"'
113 raise Exception(txt)
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pymodbus/client/asynchronous/factory/serial.py b/pymodbus/client/asynchronous/factory/serial.py
--- a/pymodbus/client/asynchronous/factory/serial.py
+++ b/pymodbus/client/asynchronous/factory/serial.py
@@ -89,7 +89,7 @@
if not loop.is_running():
loop.run_until_complete(coro())
else: # loop is not asyncio.get_event_loop():
- future = asyncio.run_coroutine_threadsafe(coro, loop=loop)
+ future = asyncio.run_coroutine_threadsafe(coro(), loop=loop)
future.result()
return loop, client
| {"golden_diff": "diff --git a/pymodbus/client/asynchronous/factory/serial.py b/pymodbus/client/asynchronous/factory/serial.py\n--- a/pymodbus/client/asynchronous/factory/serial.py\n+++ b/pymodbus/client/asynchronous/factory/serial.py\n@@ -89,7 +89,7 @@\n if not loop.is_running():\n loop.run_until_complete(coro())\n else: # loop is not asyncio.get_event_loop():\n- future = asyncio.run_coroutine_threadsafe(coro, loop=loop)\n+ future = asyncio.run_coroutine_threadsafe(coro(), loop=loop)\n future.result()\n \n return loop, client\n", "issue": "AsyncioModbusSerialClient TypeError Coroutine\n### Versions\r\n\r\n* Python: 3.9\r\n* OS: Ubuntu 20.04\r\n* Pymodbus: `3.0.0dev4`\r\n* Modbus Hardware (if used): \r\n\r\n### Pymodbus Specific\r\n* Server: None\r\n* Client: rtu - async\r\n\r\n### Description\r\n\r\nWhen I try `3.0.0dev4` and the latest commit as of today. I am getting a type error that variable `coro` is not a coroutine in file `serial.py`. I am trying to create `AsyncModbusSerialClient(schedulers.ASYNC_IO, port=connPort, baudrate=connSpeed, method=connMethod, timeout=commTimeout)` in an existing running loop.\r\n\r\nI don't think the coroutine was created correctly. What do you think?\r\n\r\nOld:\r\n`future = asyncio.run_coroutine_threadsafe(coro, loop=loop)` \r\n\r\nProposed:\r\n` future = asyncio.run_coroutine_threadsafe(coro(), loop=loop)`\r\n \"\"\"Create asyncio based asynchronous serial clients.\r\n :param port: Serial port\r\n :param framer: Modbus Framer\r\n :param kwargs: Serial port options\r\n :return: asyncio event loop and serial client\r\n \"\"\"\r\n try:\r\n loop = kwargs.pop(\"loop\", None) or asyncio.get_running_loop()\r\n except RuntimeError:\r\n loop = asyncio.new_event_loop()\r\n\r\n proto_cls = kwargs.get(\"proto_cls\") or ModbusClientProtocol\r\n\r\n client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)\r\n coro = client.connect\r\n if not loop.is_running():\r\n loop.run_until_complete(coro())\r\n else: # loop is not asyncio.get_event_loop():\r\n future = asyncio.run_coroutine_threadsafe(coro, loop=loop) <- `Fails here`\r\n future.result()\r\n\r\n return loop, client\r\n```\r\n``` py\r\ndef async_io_factory(port=None, framer=None, **kwargs):\r\n \"\"\"Create asyncio based asynchronous serial clients.\r\n :param port: Serial port\r\n :param framer: Modbus Framer\r\n :param kwargs: Serial port options\r\n :return: asyncio event loop and serial client\r\n \"\"\"\r\n try:\r\n loop = kwargs.pop(\"loop\", None) or asyncio.get_running_loop()\r\n except RuntimeError:\r\n loop = asyncio.new_event_loop()\r\n\r\n proto_cls = kwargs.get(\"proto_cls\") or ModbusClientProtocol\r\n\r\n client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)\r\n coro = client.connect\r\n if not loop.is_running():\r\n loop.run_until_complete(coro())\r\n else: # loop is not asyncio.get_event_loop():\r\n future = asyncio.run_coroutine_threadsafe(coro, loop=loop) <- `Fails here`\r\n future.result()\r\n\r\n return loop, client\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Factory to create asynchronous serial clients based on twisted/asyncio.\"\"\"\n# pylint: disable=missing-type-doc\nimport logging\nimport asyncio\n\nfrom pymodbus.client.asynchronous import schedulers\nfrom pymodbus.client.asynchronous.thread import EventLoopThread\nfrom pymodbus.client.asynchronous.async_io import (\n ModbusClientProtocol,\n AsyncioModbusSerialClient,\n)\nfrom pymodbus.factory import ClientDecoder\n\n\n_logger = logging.getLogger(__name__)\n\n\ndef reactor_factory(port, 
framer, **kwargs):\n \"\"\"Create twisted serial asynchronous client.\n\n :param port: Serial port\n :param framer: Modbus Framer\n :param kwargs:\n :return: event_loop_thread and twisted serial client\n \"\"\"\n from twisted.internet import reactor # pylint: disable=import-outside-toplevel\n from twisted.internet.serialport import ( # pylint: disable=import-outside-toplevel\n SerialPort,\n )\n from twisted.internet.protocol import ( # pylint: disable=import-outside-toplevel\n ClientFactory,\n )\n\n class SerialClientFactory(ClientFactory):\n \"\"\"Define serial client factory.\"\"\"\n\n def __init__(self, framer, proto_cls):\n \"\"\"Remember things necessary for building a protocols.\"\"\"\n self.proto_cls = proto_cls\n self.framer = framer\n\n def buildProtocol(self): # pylint: disable=arguments-differ\n \"\"\"Create a protocol and start the reading cycle-\"\"\"\n proto = self.proto_cls(self.framer)\n proto.factory = self\n return proto\n\n class SerialModbusClient(SerialPort): # pylint: disable=abstract-method\n \"\"\"Define serial client.\"\"\"\n\n def __init__(self, framer, *args, **kwargs):\n \"\"\"Initialize the client and start listening on the serial port.\n\n :param factory: The factory to build clients with\n \"\"\"\n self.decoder = ClientDecoder()\n proto_cls = kwargs.pop(\"proto_cls\", None)\n proto = SerialClientFactory(framer, proto_cls).buildProtocol()\n SerialPort.__init__(self, proto, *args, **kwargs)\n\n proto = EventLoopThread(\n \"reactor\",\n reactor.run, # pylint: disable=no-member\n reactor.stop, # pylint: disable=no-member\n installSignalHandlers=0,\n )\n ser_client = SerialModbusClient(framer, port, reactor, **kwargs)\n\n return proto, ser_client\n\n\ndef async_io_factory(port=None, framer=None, **kwargs):\n \"\"\"Create asyncio based asynchronous serial clients.\n\n :param port: Serial port\n :param framer: Modbus Framer\n :param kwargs: Serial port options\n :return: asyncio event loop and serial client\n \"\"\"\n try:\n loop = kwargs.pop(\"loop\", None) or asyncio.get_running_loop()\n except RuntimeError:\n loop = asyncio.new_event_loop()\n\n proto_cls = kwargs.get(\"proto_cls\") or ModbusClientProtocol\n\n client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)\n coro = client.connect\n if not loop.is_running():\n loop.run_until_complete(coro())\n else: # loop is not asyncio.get_event_loop():\n future = asyncio.run_coroutine_threadsafe(coro, loop=loop)\n future.result()\n\n return loop, client\n\n\ndef get_factory(scheduler):\n \"\"\"Get protocol factory based on the backend scheduler being used.\n\n :param scheduler: REACTOR/ASYNC_IO\n :return:\n :raises Exception: Failure\n \"\"\"\n if scheduler == schedulers.REACTOR:\n return reactor_factory\n if scheduler == schedulers.ASYNC_IO:\n return async_io_factory\n\n txt = f\"Allowed Schedulers: {schedulers.REACTOR}, {schedulers.ASYNC_IO}\"\n _logger.warning(txt)\n txt = f'Invalid Scheduler \"{scheduler}\"'\n raise Exception(txt)\n", "path": "pymodbus/client/asynchronous/factory/serial.py"}], "after_files": [{"content": "\"\"\"Factory to create asynchronous serial clients based on twisted/asyncio.\"\"\"\n# pylint: disable=missing-type-doc\nimport logging\nimport asyncio\n\nfrom pymodbus.client.asynchronous import schedulers\nfrom pymodbus.client.asynchronous.thread import EventLoopThread\nfrom pymodbus.client.asynchronous.async_io import (\n ModbusClientProtocol,\n AsyncioModbusSerialClient,\n)\nfrom pymodbus.factory import ClientDecoder\n\n\n_logger = logging.getLogger(__name__)\n\n\ndef 
reactor_factory(port, framer, **kwargs):\n \"\"\"Create twisted serial asynchronous client.\n\n :param port: Serial port\n :param framer: Modbus Framer\n :param kwargs:\n :return: event_loop_thread and twisted serial client\n \"\"\"\n from twisted.internet import reactor # pylint: disable=import-outside-toplevel\n from twisted.internet.serialport import ( # pylint: disable=import-outside-toplevel\n SerialPort,\n )\n from twisted.internet.protocol import ( # pylint: disable=import-outside-toplevel\n ClientFactory,\n )\n\n class SerialClientFactory(ClientFactory):\n \"\"\"Define serial client factory.\"\"\"\n\n def __init__(self, framer, proto_cls):\n \"\"\"Remember things necessary for building a protocols.\"\"\"\n self.proto_cls = proto_cls\n self.framer = framer\n\n def buildProtocol(self): # pylint: disable=arguments-differ\n \"\"\"Create a protocol and start the reading cycle-\"\"\"\n proto = self.proto_cls(self.framer)\n proto.factory = self\n return proto\n\n class SerialModbusClient(SerialPort): # pylint: disable=abstract-method\n \"\"\"Define serial client.\"\"\"\n\n def __init__(self, framer, *args, **kwargs):\n \"\"\"Initialize the client and start listening on the serial port.\n\n :param factory: The factory to build clients with\n \"\"\"\n self.decoder = ClientDecoder()\n proto_cls = kwargs.pop(\"proto_cls\", None)\n proto = SerialClientFactory(framer, proto_cls).buildProtocol()\n SerialPort.__init__(self, proto, *args, **kwargs)\n\n proto = EventLoopThread(\n \"reactor\",\n reactor.run, # pylint: disable=no-member\n reactor.stop, # pylint: disable=no-member\n installSignalHandlers=0,\n )\n ser_client = SerialModbusClient(framer, port, reactor, **kwargs)\n\n return proto, ser_client\n\n\ndef async_io_factory(port=None, framer=None, **kwargs):\n \"\"\"Create asyncio based asynchronous serial clients.\n\n :param port: Serial port\n :param framer: Modbus Framer\n :param kwargs: Serial port options\n :return: asyncio event loop and serial client\n \"\"\"\n try:\n loop = kwargs.pop(\"loop\", None) or asyncio.get_running_loop()\n except RuntimeError:\n loop = asyncio.new_event_loop()\n\n proto_cls = kwargs.get(\"proto_cls\") or ModbusClientProtocol\n\n client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)\n coro = client.connect\n if not loop.is_running():\n loop.run_until_complete(coro())\n else: # loop is not asyncio.get_event_loop():\n future = asyncio.run_coroutine_threadsafe(coro(), loop=loop)\n future.result()\n\n return loop, client\n\n\ndef get_factory(scheduler):\n \"\"\"Get protocol factory based on the backend scheduler being used.\n\n :param scheduler: REACTOR/ASYNC_IO\n :return:\n :raises Exception: Failure\n \"\"\"\n if scheduler == schedulers.REACTOR:\n return reactor_factory\n if scheduler == schedulers.ASYNC_IO:\n return async_io_factory\n\n txt = f\"Allowed Schedulers: {schedulers.REACTOR}, {schedulers.ASYNC_IO}\"\n _logger.warning(txt)\n txt = f'Invalid Scheduler \"{scheduler}\"'\n raise Exception(txt)\n", "path": "pymodbus/client/asynchronous/factory/serial.py"}]} |
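The golden diff in the record above amounts to calling the coroutine function before handing it to `asyncio.run_coroutine_threadsafe`, which requires a coroutine object rather than a bare callable. A minimal, self-contained sketch of that distinction (the `connect` function and the background-thread loop are illustrative stand-ins, not pymodbus code):

```python
import asyncio
import threading

async def connect():
    # Stand-in for a client's asynchronous connect method.
    await asyncio.sleep(0)
    return "connected"

# Run a loop in a background thread so run_coroutine_threadsafe has a live target.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

coro = connect
# asyncio.run_coroutine_threadsafe(coro, loop) would raise TypeError here,
# because coro is still a function, not a coroutine object.
future = asyncio.run_coroutine_threadsafe(coro(), loop)  # call it first
print(future.result())
loop.call_soon_threadsafe(loop.stop)
```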
gh_patches_debug_1083 | rasdani/github-patches | git_diff | scikit-hep__awkward-895 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typo in `identifier.py`
https://github.com/scikit-hep/awkward-1.0/blob/a0ec3bcacacc81a47fe61a1d99b0bc512a8bb3cf/src/awkward/_v2/identifier.py#L30
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/awkward/_v2/identifier.py`
Content:
```
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
2
3 from __future__ import absolute_import
4
5 import awkward as ak
6
7 np = ak.nplike.NumpyMetadata.instance()
8
9
10 class Identifier(object):
11 _numrefs = 0
12
13 @staticmethod
14 def newref():
15 out = Identifier._numrefs
16 Identifier._numrefs += 1
17 return out
18
19 def __init__(self, ref, fieldloc, data):
20 self._ref = ref
21 self._fieldloc = fieldloc
22 if not isinstance(fieldloc, dict) or not all(
23 isinstance(k, int) and isinstance(v, str) for k, v in fieldloc.items()
24 ):
25 raise TypeError("Identifier fieldloc must be a dict of int -> str")
26 self._nplike = ak.nplike.of(data)
27
28 self._data = self._nplike.asarray(data, order="C")
29 if len(self._data.shape) != 2:
30 raise TypeError("Identifer data must be 2-dimensional")
31
32 # TypeError for unsupported types?
33 self._T = self._data.dtype
34 if self._T not in (np.dtype(np.int32), np.dtype(np.int64)):
35 raise TypeError("Identifier data must be int32, int64")
36
37 @classmethod
38 # cpp takes width, length?
39 def zeros(cls, ref, fieldloc, length, width, nplike, dtype):
40 return Identifier(ref, fieldloc, nplike.zeros((length, width), dtype=dtype))
41
42 @classmethod
43 def empty(cls, ref, fieldloc, length, width, nplike, dtype):
44 return Identifier(ref, fieldloc, nplike.empty((length, width), dtype=dtype))
45
46 @property
47 def ref(self):
48 return self._ref
49
50 @property
51 def filedloc(self):
52 return self._fieldloc
53
54 @property
55 def data(self):
56 return self._data
57
58 @property
59 def nplike(self):
60 return self._nplike
61
62 def __len__(self):
63 return len(self._data)
64
65 def width(self):
66 return self._data.shape[1]
67
68 def to64(self):
69 return Identifier(self._ref, self._fieldloc, self._data.astype(np.int64))
70
71 def __getitem__(self, where):
72 return self._data[where]
73
74 def __copy__(self):
75 return Identifier(self._ref, self._fieldloc, self._data.copy())
76
77 def __repr__(self):
78 return self._repr("", "", "")
79
80 def _repr(self, indent, pre, post):
81 out = [indent, pre, "<Identifier ref=" + repr(str(self._ref)) + " fieldloc="]
82 out.append(repr(str(self._fieldloc)))
83 out.append(" length=")
84 out.append(repr(str(len(self._data))))
85 out.append(" width=")
86 out.append(repr(str(self._data.shape[1])))
87 out.append(" at=")
88 out.append(repr(hex(self._data.ctypes.data)))
89 out.append(">\n")
90 out.append(indent + " ")
91 out.append(
92 self._nplike.array_str(self._data, max_line_width=30).replace(
93 "\n", "\n" + indent + " "
94 )
95 )
96 out.append("\n")
97 out.append(indent)
98 out.append("</Identifier>")
99 out.append(post)
100 return "".join(out)
101
102 def convert_to(self, nplike):
103 return Identifier(self._ref, self._fieldloc, nplike.asarray(self._data))
104
105 def referentially_equal(self, other):
106 return (
107 self._ref == other._ref
108 and self._fieldloc == other._fieldloc
109 and self._data.ctypes.data == other._data.ctypes.data
110 and self._data.shape == other._data.shape
111 and self._data.strides == other._data.strides
112 and self._data.dtype == other._data.dtype
113 )
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/awkward/_v2/identifier.py b/src/awkward/_v2/identifier.py
--- a/src/awkward/_v2/identifier.py
+++ b/src/awkward/_v2/identifier.py
@@ -27,7 +27,7 @@
self._data = self._nplike.asarray(data, order="C")
if len(self._data.shape) != 2:
- raise TypeError("Identifer data must be 2-dimensional")
+ raise TypeError("Identifier data must be 2-dimensional")
# TypeError for unsupported types?
self._T = self._data.dtype
| {"golden_diff": "diff --git a/src/awkward/_v2/identifier.py b/src/awkward/_v2/identifier.py\n--- a/src/awkward/_v2/identifier.py\n+++ b/src/awkward/_v2/identifier.py\n@@ -27,7 +27,7 @@\n \n self._data = self._nplike.asarray(data, order=\"C\")\n if len(self._data.shape) != 2:\n- raise TypeError(\"Identifer data must be 2-dimensional\")\n+ raise TypeError(\"Identifier data must be 2-dimensional\")\n \n # TypeError for unsupported types?\n self._T = self._data.dtype\n", "issue": "Typo in `identifier.py`\nhttps://github.com/scikit-hep/awkward-1.0/blob/a0ec3bcacacc81a47fe61a1d99b0bc512a8bb3cf/src/awkward/_v2/identifier.py#L30\n", "before_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\nfrom __future__ import absolute_import\n\nimport awkward as ak\n\nnp = ak.nplike.NumpyMetadata.instance()\n\n\nclass Identifier(object):\n _numrefs = 0\n\n @staticmethod\n def newref():\n out = Identifier._numrefs\n Identifier._numrefs += 1\n return out\n\n def __init__(self, ref, fieldloc, data):\n self._ref = ref\n self._fieldloc = fieldloc\n if not isinstance(fieldloc, dict) or not all(\n isinstance(k, int) and isinstance(v, str) for k, v in fieldloc.items()\n ):\n raise TypeError(\"Identifier fieldloc must be a dict of int -> str\")\n self._nplike = ak.nplike.of(data)\n\n self._data = self._nplike.asarray(data, order=\"C\")\n if len(self._data.shape) != 2:\n raise TypeError(\"Identifer data must be 2-dimensional\")\n\n # TypeError for unsupported types?\n self._T = self._data.dtype\n if self._T not in (np.dtype(np.int32), np.dtype(np.int64)):\n raise TypeError(\"Identifier data must be int32, int64\")\n\n @classmethod\n # cpp takes width, length?\n def zeros(cls, ref, fieldloc, length, width, nplike, dtype):\n return Identifier(ref, fieldloc, nplike.zeros((length, width), dtype=dtype))\n\n @classmethod\n def empty(cls, ref, fieldloc, length, width, nplike, dtype):\n return Identifier(ref, fieldloc, nplike.empty((length, width), dtype=dtype))\n\n @property\n def ref(self):\n return self._ref\n\n @property\n def filedloc(self):\n return self._fieldloc\n\n @property\n def data(self):\n return self._data\n\n @property\n def nplike(self):\n return self._nplike\n\n def __len__(self):\n return len(self._data)\n\n def width(self):\n return self._data.shape[1]\n\n def to64(self):\n return Identifier(self._ref, self._fieldloc, self._data.astype(np.int64))\n\n def __getitem__(self, where):\n return self._data[where]\n\n def __copy__(self):\n return Identifier(self._ref, self._fieldloc, self._data.copy())\n\n def __repr__(self):\n return self._repr(\"\", \"\", \"\")\n\n def _repr(self, indent, pre, post):\n out = [indent, pre, \"<Identifier ref=\" + repr(str(self._ref)) + \" fieldloc=\"]\n out.append(repr(str(self._fieldloc)))\n out.append(\" length=\")\n out.append(repr(str(len(self._data))))\n out.append(\" width=\")\n out.append(repr(str(self._data.shape[1])))\n out.append(\" at=\")\n out.append(repr(hex(self._data.ctypes.data)))\n out.append(\">\\n\")\n out.append(indent + \" \")\n out.append(\n self._nplike.array_str(self._data, max_line_width=30).replace(\n \"\\n\", \"\\n\" + indent + \" \"\n )\n )\n out.append(\"\\n\")\n out.append(indent)\n out.append(\"</Identifier>\")\n out.append(post)\n return \"\".join(out)\n\n def convert_to(self, nplike):\n return Identifier(self._ref, self._fieldloc, nplike.asarray(self._data))\n\n def referentially_equal(self, other):\n return (\n self._ref == other._ref\n and self._fieldloc == other._fieldloc\n and 
self._data.ctypes.data == other._data.ctypes.data\n and self._data.shape == other._data.shape\n and self._data.strides == other._data.strides\n and self._data.dtype == other._data.dtype\n )\n", "path": "src/awkward/_v2/identifier.py"}], "after_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\nfrom __future__ import absolute_import\n\nimport awkward as ak\n\nnp = ak.nplike.NumpyMetadata.instance()\n\n\nclass Identifier(object):\n _numrefs = 0\n\n @staticmethod\n def newref():\n out = Identifier._numrefs\n Identifier._numrefs += 1\n return out\n\n def __init__(self, ref, fieldloc, data):\n self._ref = ref\n self._fieldloc = fieldloc\n if not isinstance(fieldloc, dict) or not all(\n isinstance(k, int) and isinstance(v, str) for k, v in fieldloc.items()\n ):\n raise TypeError(\"Identifier fieldloc must be a dict of int -> str\")\n self._nplike = ak.nplike.of(data)\n\n self._data = self._nplike.asarray(data, order=\"C\")\n if len(self._data.shape) != 2:\n raise TypeError(\"Identifier data must be 2-dimensional\")\n\n # TypeError for unsupported types?\n self._T = self._data.dtype\n if self._T not in (np.dtype(np.int32), np.dtype(np.int64)):\n raise TypeError(\"Identifier data must be int32, int64\")\n\n @classmethod\n # cpp takes width, length?\n def zeros(cls, ref, fieldloc, length, width, nplike, dtype):\n return Identifier(ref, fieldloc, nplike.zeros((length, width), dtype=dtype))\n\n @classmethod\n def empty(cls, ref, fieldloc, length, width, nplike, dtype):\n return Identifier(ref, fieldloc, nplike.empty((length, width), dtype=dtype))\n\n @property\n def ref(self):\n return self._ref\n\n @property\n def filedloc(self):\n return self._fieldloc\n\n @property\n def data(self):\n return self._data\n\n @property\n def nplike(self):\n return self._nplike\n\n def __len__(self):\n return len(self._data)\n\n def width(self):\n return self._data.shape[1]\n\n def to64(self):\n return Identifier(self._ref, self._fieldloc, self._data.astype(np.int64))\n\n def __getitem__(self, where):\n return self._data[where]\n\n def __copy__(self):\n return Identifier(self._ref, self._fieldloc, self._data.copy())\n\n def __repr__(self):\n return self._repr(\"\", \"\", \"\")\n\n def _repr(self, indent, pre, post):\n out = [indent, pre, \"<Identifier ref=\" + repr(str(self._ref)) + \" fieldloc=\"]\n out.append(repr(str(self._fieldloc)))\n out.append(\" length=\")\n out.append(repr(str(len(self._data))))\n out.append(\" width=\")\n out.append(repr(str(self._data.shape[1])))\n out.append(\" at=\")\n out.append(repr(hex(self._data.ctypes.data)))\n out.append(\">\\n\")\n out.append(indent + \" \")\n out.append(\n self._nplike.array_str(self._data, max_line_width=30).replace(\n \"\\n\", \"\\n\" + indent + \" \"\n )\n )\n out.append(\"\\n\")\n out.append(indent)\n out.append(\"</Identifier>\")\n out.append(post)\n return \"\".join(out)\n\n def convert_to(self, nplike):\n return Identifier(self._ref, self._fieldloc, nplike.asarray(self._data))\n\n def referentially_equal(self, other):\n return (\n self._ref == other._ref\n and self._fieldloc == other._fieldloc\n and self._data.ctypes.data == other._data.ctypes.data\n and self._data.shape == other._data.shape\n and self._data.strides == other._data.strides\n and self._data.dtype == other._data.dtype\n )\n", "path": "src/awkward/_v2/identifier.py"}]} |
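The record above fixes nothing but a spelling mistake in an error message, yet the code path it touches is easy to exercise: `Identifier.__init__` rejects anything other than 2-dimensional int32/int64 data with the `TypeError` whose text the patch corrects. A small usage sketch, assuming a release of awkward that still ships the `awkward._v2.identifier` module shown in the record:

```python
import numpy as np
from awkward._v2.identifier import Identifier  # module path taken from the record

# A 1-dimensional array trips the shape check in Identifier.__init__.
try:
    Identifier(ref=0, fieldloc={}, data=np.zeros(4, dtype=np.int64))
except TypeError as err:
    print(err)  # "Identifier data must be 2-dimensional" once the patch is applied

# 2-dimensional int64 data passes validation.
ident = Identifier(ref=0, fieldloc={}, data=np.zeros((3, 2), dtype=np.int64))
print(len(ident), ident.width())  # 3 2
```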
gh_patches_debug_1084 | rasdani/github-patches | git_diff | qtile__qtile-2534 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Configurations test notification crashes xfce4-notifyd
# Issue description
The notification for a valid or invalid config file displays once and then
crashes xfce4-notifyd. I am not sure whose fault this is, but on qtile stable it
was working.
# Qtile version
```sh
$ qtile --version
0.17.1.dev315+g67f97604
```
# Error Logs
qtile
```python
2021-06-09 17:58:30,020 ERROR libqtile base.py:on_done():L559 Failed to reschedule.
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/libqtile/widget/base.py", line 551, in on_done
self.update(result)
File "/usr/lib/python3.9/site-packages/libqtile/widget/base.py", line 462, in update
old_width = self.layout.width
File "/usr/lib/python3.9/site-packages/libqtile/drawer.py", line 80, in width
return self.layout.get_pixel_size()[0]
File "/usr/lib/python3.9/site-packages/libqtile/pangocffi.py", line 135, in get_pixel_size
pango.pango_layout_get_pixel_size(self._pointer, width, height)
TypeError: initializer for ctype 'PangoLayout *' must be a cdata pointer, not NoneType
2021-06-09 17:58:30,022 WARNING libqtile lifecycle.py:_atexit():L34 Restarting Qtile with os.execv(...)
```
***
xfce4-notifyd
***
```sh
$ /usr/lib/xfce4/notifyd/xfce4-notifyd
(xfce4-notifyd:1110745): GLib-CRITICAL **: 18:04:02.329: g_key_file_set_string: assertion 'string != NULL' failed
(xfce4-notifyd:1110745): GLib-CRITICAL **: 18:04:07.415: g_key_file_set_string: assertion 'string != NULL' failed
[1] 1110745 segmentation fault (core dumped) /usr/lib/xfce4/notifyd/xfce4-notifyd
```
On version 0.17.0 this problem didn't happen. It works OK with dunst, though. I didn't test more versions or notification daemons.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/utils.py`
Content:
```
1 # Copyright (c) 2008, Aldo Cortesi. All rights reserved.
2 # Copyright (c) 2020, Matt Colligan. All rights reserved.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a copy
5 # of this software and associated documentation files (the "Software"), to deal
6 # in the Software without restriction, including without limitation the rights
7 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
8 # copies of the Software, and to permit persons to whom the Software is
9 # furnished to do so, subject to the following conditions:
10 #
11 # The above copyright notice and this permission notice shall be included in
12 # all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
17 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
20 # SOFTWARE.
21
22 import asyncio
23 import glob
24 import importlib
25 import os
26 import traceback
27 from collections import defaultdict
28 from collections.abc import Sequence
29 from random import randint
30 from shutil import which
31 from typing import Tuple, Union
32
33 try:
34 from dbus_next import Message, Variant # type: ignore
35 from dbus_next.aio import MessageBus # type: ignore
36 from dbus_next.constants import BusType, MessageType # type: ignore
37 has_dbus = True
38 except ImportError:
39 has_dbus = False
40
41 from libqtile.log_utils import logger
42
43
44 class QtileError(Exception):
45 pass
46
47
48 def lget(o, v):
49 try:
50 return o[v]
51 except (IndexError, TypeError):
52 return None
53
54
55 def shuffle_up(lst):
56 if len(lst) > 1:
57 c = lst[-1]
58 lst.remove(c)
59 lst.insert(0, c)
60
61
62 def shuffle_down(lst):
63 if len(lst) > 1:
64 c = lst[0]
65 lst.remove(c)
66 lst.append(c)
67
68
69 ColorType = Union[str, Tuple[int, int, int], Tuple[int, int, int, float]]
70
71
72 def rgb(x):
73 """
74 Returns a valid RGBA tuple.
75
76 Here are some valid specifcations:
77 #ff0000
78 with alpha: #ff000080
79 ff0000
80 with alpha: ff0000.5
81 (255, 0, 0)
82 with alpha: (255, 0, 0, 0.5)
83 """
84 if isinstance(x, (tuple, list)):
85 if len(x) == 4:
86 alpha = x[3]
87 else:
88 alpha = 1
89 return (x[0] / 255.0, x[1] / 255.0, x[2] / 255.0, alpha)
90 elif isinstance(x, str):
91 if x.startswith("#"):
92 x = x[1:]
93 if "." in x:
94 x, alpha = x.split(".")
95 alpha = float("0." + alpha)
96 else:
97 alpha = 1
98 if len(x) not in (6, 8):
99 raise ValueError("RGB specifier must be 6 or 8 characters long.")
100 vals = [int(i, 16) for i in (x[0:2], x[2:4], x[4:6])]
101 if len(x) == 8:
102 alpha = int(x[6:8], 16) / 255.0
103 vals.append(alpha)
104 return rgb(vals)
105 raise ValueError("Invalid RGB specifier.")
106
107
108 def hex(x):
109 r, g, b, _ = rgb(x)
110 return '#%02x%02x%02x' % (int(r * 255), int(g * 255), int(b * 255))
111
112
113 def scrub_to_utf8(text):
114 if not text:
115 return ""
116 elif isinstance(text, str):
117 return text
118 else:
119 return text.decode("utf-8", "ignore")
120
121
122 def get_cache_dir():
123 """
124 Returns the cache directory and create if it doesn't exists
125 """
126
127 cache_directory = os.path.expandvars('$XDG_CACHE_HOME')
128 if cache_directory == '$XDG_CACHE_HOME':
129 # if variable wasn't set
130 cache_directory = os.path.expanduser("~/.cache")
131 cache_directory = os.path.join(cache_directory, 'qtile')
132 if not os.path.exists(cache_directory):
133 os.makedirs(cache_directory)
134 return cache_directory
135
136
137 def describe_attributes(obj, attrs, func=lambda x: x):
138 """
139 Helper for __repr__ functions to list attributes with truthy values only
140 (or values that return a truthy value by func)
141 """
142
143 pairs = []
144
145 for attr in attrs:
146 value = getattr(obj, attr, None)
147 if func(value):
148 pairs.append('%s=%s' % (attr, value))
149
150 return ', '.join(pairs)
151
152
153 def import_class(module_path, class_name, fallback=None):
154 """Import a class safely
155
156 Try to import the class module, and if it fails because of an ImporError
157 it logs on WARNING, and logs the traceback on DEBUG level
158 """
159 try:
160 module = importlib.import_module(module_path, __package__)
161 return getattr(module, class_name)
162 except ImportError as error:
163 logger.warning("Unmet dependencies for '%s.%s': %s", module_path,
164 class_name, error)
165 if fallback:
166 logger.debug("%s", traceback.format_exc())
167 return fallback(module_path, class_name)
168 raise
169
170
171 def lazify_imports(registry, package, fallback=None):
172 """Leverage PEP 562 to make imports lazy in an __init__.py
173
174 The registry must be a dictionary with the items to import as keys and the
175 modules they belong to as a value.
176 """
177 __all__ = tuple(registry.keys())
178
179 def __dir__():
180 return __all__
181
182 def __getattr__(name):
183 if name not in registry:
184 raise AttributeError
185 module_path = "{}.{}".format(package, registry[name])
186 return import_class(module_path, name, fallback=fallback)
187
188 return __all__, __dir__, __getattr__
189
190
191 def send_notification(title, message, urgent=False, timeout=10000, id=None):
192 """
193 Send a notification.
194
195 The id argument, if passed, requests the notification server to replace a visible
196 notification with the same ID. An ID is returned for each call; this would then be
197 passed when calling this function again to replace that notification. See:
198 https://developer.gnome.org/notification-spec/
199 """
200 if not has_dbus:
201 logger.warning(
202 "dbus-next is not installed. Unable to send notifications."
203 )
204 return -1
205
206 id = randint(10, 1000) if id is None else id
207 urgency = 2 if urgent else 1
208
209 try:
210 loop = asyncio.get_running_loop()
211 except RuntimeError:
212 logger.warning("Eventloop has not started. Cannot send notification.")
213 else:
214 loop.create_task(_notify(title, message, urgency, timeout, id))
215
216 return id
217
218
219 async def _notify(title, message, urgency, timeout, id):
220 notification = ["qtile", # Application name
221 id, # id
222 "", # icon
223 title, # summary
224 message, # body
225 [""], # actions
226 {"urgency": Variant("y", urgency)}, # hints
227 timeout] # timeout
228
229 bus, msg = await _send_dbus_message(True,
230 MessageType.METHOD_CALL,
231 "org.freedesktop.Notifications",
232 "org.freedesktop.Notifications",
233 "/org/freedesktop/Notifications",
234 "Notify",
235 "susssasa{sv}i",
236 notification)
237
238 if msg.message_type == MessageType.ERROR:
239 logger.warning("Unable to send notification. "
240 "Is a notification server running?")
241
242 # a new bus connection is made each time a notification is sent so
243 # we disconnect when the notification is done
244 bus.disconnect()
245
246
247 def guess_terminal(preference=None):
248 """Try to guess terminal."""
249 test_terminals = []
250 if isinstance(preference, str):
251 test_terminals += [preference]
252 elif isinstance(preference, Sequence):
253 test_terminals += list(preference)
254 test_terminals += [
255 'roxterm',
256 'sakura',
257 'hyper',
258 'alacritty',
259 'terminator',
260 'termite',
261 'gnome-terminal',
262 'konsole',
263 'xfce4-terminal',
264 'lxterminal',
265 'mate-terminal',
266 'kitty',
267 'yakuake',
268 'tilda',
269 'guake',
270 'eterm',
271 'st',
272 'urxvt',
273 'xterm',
274 'x-terminal-emulator',
275 ]
276
277 for terminal in test_terminals:
278 logger.debug('Guessing terminal: {}'.format(terminal))
279 if not which(terminal, os.X_OK):
280 continue
281
282 logger.info('Terminal found: {}'.format(terminal))
283 return terminal
284
285 logger.error('Default terminal has not been found.')
286
287
288 def scan_files(dirpath, *names):
289 """
290 Search a folder recursively for files matching those passed as arguments, with
291 globbing. Returns a dict with keys equal to entries in names, and values a list of
292 matching paths. E.g.:
293
294 >>> scan_files('/wallpapers', '*.png', '*.jpg')
295 defaultdict(<class 'list'>, {'*.png': ['/wallpapers/w1.png'], '*.jpg':
296 ['/wallpapers/w2.jpg', '/wallpapers/w3.jpg']})
297
298 """
299 files = defaultdict(list)
300
301 for name in names:
302 found = glob.glob(os.path.join(dirpath, '**', name), recursive=True)
303 files[name].extend(found)
304
305 return files
306
307
308 async def _send_dbus_message(session_bus, message_type, destination, interface,
309 path, member, signature, body):
310 """
311 Private method to send messages to dbus via dbus_next.
312
313 Returns a tuple of the bus object and message response.
314 """
315 if session_bus:
316 bus_type = BusType.SESSION
317 else:
318 bus_type = BusType.SYSTEM
319
320 if isinstance(body, str):
321 body = [body]
322
323 bus = await MessageBus(bus_type=bus_type).connect()
324
325 msg = await bus.call(
326 Message(message_type=message_type,
327 destination=destination,
328 interface=interface,
329 path=path,
330 member=member,
331 signature=signature,
332 body=body))
333
334 return bus, msg
335
336
337 async def add_signal_receiver(callback, session_bus=False, signal_name=None,
338 dbus_interface=None, bus_name=None, path=None):
339 """
340 Helper function which aims to recreate python-dbus's add_signal_receiver
341 method in dbus_next with asyncio calls.
342
343 Returns True if subscription is successful.
344 """
345 if not has_dbus:
346 logger.warning(
347 "dbus-next is not installed. "
348 "Unable to subscribe to signals"
349 )
350 return False
351
352 match_args = {
353 "type": "signal",
354 "sender": bus_name,
355 "member": signal_name,
356 "path": path,
357 "interface": dbus_interface
358 }
359
360 rule = ",".join("{}='{}'".format(k, v)
361 for k, v in match_args.items() if v)
362
363 bus, msg = await _send_dbus_message(session_bus,
364 MessageType.METHOD_CALL,
365 "org.freedesktop.DBus",
366 "org.freedesktop.DBus",
367 "/org/freedesktop/DBus",
368 "AddMatch",
369 "s",
370 rule)
371
372 # Check if message sent successfully
373 if msg.message_type == MessageType.METHOD_RETURN:
374 bus.add_message_handler(callback)
375 return True
376
377 else:
378 return False
379
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libqtile/utils.py b/libqtile/utils.py
--- a/libqtile/utils.py
+++ b/libqtile/utils.py
@@ -222,7 +222,7 @@
"", # icon
title, # summary
message, # body
- [""], # actions
+ [], # actions
{"urgency": Variant("y", urgency)}, # hints
timeout] # timeout
| {"golden_diff": "diff --git a/libqtile/utils.py b/libqtile/utils.py\n--- a/libqtile/utils.py\n+++ b/libqtile/utils.py\n@@ -222,7 +222,7 @@\n \"\", # icon\n title, # summary\n message, # body\n- [\"\"], # actions\n+ [], # actions\n {\"urgency\": Variant(\"y\", urgency)}, # hints\n timeout] # timeout\n", "issue": "[BUG] Configurations test notification crashes xfce4-notifyd\n# Issue description\r\nThe notification for valid or invalid config file is displaying once and then\r\ncrashing xfce4notifyd. I am not sure whose fault this is but on qtile stable it\r\nwas working.\r\n\r\n# Qtile version\r\n```sh\r\n$ qtile --version\r\n0.17.1.dev315+g67f97604\r\n```\r\n\r\n# Error Logs\r\n\r\n\r\nqtile\r\n```python\r\n2021-06-09 17:58:30,020 ERROR libqtile base.py:on_done():L559 Failed to reschedule.\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.9/site-packages/libqtile/widget/base.py\", line 551, in on_done\r\n self.update(result)\r\n File \"/usr/lib/python3.9/site-packages/libqtile/widget/base.py\", line 462, in update\r\n old_width = self.layout.width\r\n File \"/usr/lib/python3.9/site-packages/libqtile/drawer.py\", line 80, in width\r\n return self.layout.get_pixel_size()[0]\r\n File \"/usr/lib/python3.9/site-packages/libqtile/pangocffi.py\", line 135, in get_pixel_size\r\n pango.pango_layout_get_pixel_size(self._pointer, width, height)\r\nTypeError: initializer for ctype 'PangoLayout *' must be a cdata pointer, not NoneType\r\n2021-06-09 17:58:30,022 WARNING libqtile lifecycle.py:_atexit():L34 Restarting Qtile with os.execv(...)\r\n```\r\n\r\n***\r\nxfce4notifyd\r\n***\r\n\r\n```sh\r\n$ /usr/lib/xfce4/notifyd/xfce4-notifyd \r\n\r\n(xfce4-notifyd:1110745): GLib-CRITICAL **: 18:04:02.329: g_key_file_set_string: assertion 'string != NULL' failed\r\n\r\n(xfce4-notifyd:1110745): GLib-CRITICAL **: 18:04:07.415: g_key_file_set_string: assertion 'string != NULL' failed\r\n[1] 1110745 segmentation fault (core dumped) /usr/lib/xfce4/notifyd/xfce4-notifyd\r\n```\r\n\r\nOn version 0.17.0 this problem didn't happen. It works Ok with dunst though. I didn't test more versions or notifier daemons.\r\n\n", "before_files": [{"content": "# Copyright (c) 2008, Aldo Cortesi. All rights reserved.\n# Copyright (c) 2020, Matt Colligan. All rights reserved.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport asyncio\nimport glob\nimport importlib\nimport os\nimport traceback\nfrom collections import defaultdict\nfrom collections.abc import Sequence\nfrom random import randint\nfrom shutil import which\nfrom typing import Tuple, Union\n\ntry:\n from dbus_next import Message, Variant # type: ignore\n from dbus_next.aio import MessageBus # type: ignore\n from dbus_next.constants import BusType, MessageType # type: ignore\n has_dbus = True\nexcept ImportError:\n has_dbus = False\n\nfrom libqtile.log_utils import logger\n\n\nclass QtileError(Exception):\n pass\n\n\ndef lget(o, v):\n try:\n return o[v]\n except (IndexError, TypeError):\n return None\n\n\ndef shuffle_up(lst):\n if len(lst) > 1:\n c = lst[-1]\n lst.remove(c)\n lst.insert(0, c)\n\n\ndef shuffle_down(lst):\n if len(lst) > 1:\n c = lst[0]\n lst.remove(c)\n lst.append(c)\n\n\nColorType = Union[str, Tuple[int, int, int], Tuple[int, int, int, float]]\n\n\ndef rgb(x):\n \"\"\"\n Returns a valid RGBA tuple.\n\n Here are some valid specifcations:\n #ff0000\n with alpha: #ff000080\n ff0000\n with alpha: ff0000.5\n (255, 0, 0)\n with alpha: (255, 0, 0, 0.5)\n \"\"\"\n if isinstance(x, (tuple, list)):\n if len(x) == 4:\n alpha = x[3]\n else:\n alpha = 1\n return (x[0] / 255.0, x[1] / 255.0, x[2] / 255.0, alpha)\n elif isinstance(x, str):\n if x.startswith(\"#\"):\n x = x[1:]\n if \".\" in x:\n x, alpha = x.split(\".\")\n alpha = float(\"0.\" + alpha)\n else:\n alpha = 1\n if len(x) not in (6, 8):\n raise ValueError(\"RGB specifier must be 6 or 8 characters long.\")\n vals = [int(i, 16) for i in (x[0:2], x[2:4], x[4:6])]\n if len(x) == 8:\n alpha = int(x[6:8], 16) / 255.0\n vals.append(alpha)\n return rgb(vals)\n raise ValueError(\"Invalid RGB specifier.\")\n\n\ndef hex(x):\n r, g, b, _ = rgb(x)\n return '#%02x%02x%02x' % (int(r * 255), int(g * 255), int(b * 255))\n\n\ndef scrub_to_utf8(text):\n if not text:\n return \"\"\n elif isinstance(text, str):\n return text\n else:\n return text.decode(\"utf-8\", \"ignore\")\n\n\ndef get_cache_dir():\n \"\"\"\n Returns the cache directory and create if it doesn't exists\n \"\"\"\n\n cache_directory = os.path.expandvars('$XDG_CACHE_HOME')\n if cache_directory == '$XDG_CACHE_HOME':\n # if variable wasn't set\n cache_directory = os.path.expanduser(\"~/.cache\")\n cache_directory = os.path.join(cache_directory, 'qtile')\n if not os.path.exists(cache_directory):\n os.makedirs(cache_directory)\n return cache_directory\n\n\ndef describe_attributes(obj, attrs, func=lambda x: x):\n \"\"\"\n Helper for __repr__ functions to list attributes with truthy values only\n (or values that return a truthy value by func)\n \"\"\"\n\n pairs = []\n\n for attr in attrs:\n value = getattr(obj, attr, None)\n if func(value):\n pairs.append('%s=%s' % (attr, value))\n\n return ', '.join(pairs)\n\n\ndef import_class(module_path, class_name, fallback=None):\n \"\"\"Import a class safely\n\n Try to import the class module, and if it fails because of an ImporError\n it logs on WARNING, and logs the traceback on DEBUG level\n \"\"\"\n try:\n module = importlib.import_module(module_path, __package__)\n return getattr(module, class_name)\n except ImportError as error:\n logger.warning(\"Unmet dependencies for '%s.%s': %s\", module_path,\n class_name, error)\n if 
fallback:\n logger.debug(\"%s\", traceback.format_exc())\n return fallback(module_path, class_name)\n raise\n\n\ndef lazify_imports(registry, package, fallback=None):\n \"\"\"Leverage PEP 562 to make imports lazy in an __init__.py\n\n The registry must be a dictionary with the items to import as keys and the\n modules they belong to as a value.\n \"\"\"\n __all__ = tuple(registry.keys())\n\n def __dir__():\n return __all__\n\n def __getattr__(name):\n if name not in registry:\n raise AttributeError\n module_path = \"{}.{}\".format(package, registry[name])\n return import_class(module_path, name, fallback=fallback)\n\n return __all__, __dir__, __getattr__\n\n\ndef send_notification(title, message, urgent=False, timeout=10000, id=None):\n \"\"\"\n Send a notification.\n\n The id argument, if passed, requests the notification server to replace a visible\n notification with the same ID. An ID is returned for each call; this would then be\n passed when calling this function again to replace that notification. See:\n https://developer.gnome.org/notification-spec/\n \"\"\"\n if not has_dbus:\n logger.warning(\n \"dbus-next is not installed. Unable to send notifications.\"\n )\n return -1\n\n id = randint(10, 1000) if id is None else id\n urgency = 2 if urgent else 1\n\n try:\n loop = asyncio.get_running_loop()\n except RuntimeError:\n logger.warning(\"Eventloop has not started. Cannot send notification.\")\n else:\n loop.create_task(_notify(title, message, urgency, timeout, id))\n\n return id\n\n\nasync def _notify(title, message, urgency, timeout, id):\n notification = [\"qtile\", # Application name\n id, # id\n \"\", # icon\n title, # summary\n message, # body\n [\"\"], # actions\n {\"urgency\": Variant(\"y\", urgency)}, # hints\n timeout] # timeout\n\n bus, msg = await _send_dbus_message(True,\n MessageType.METHOD_CALL,\n \"org.freedesktop.Notifications\",\n \"org.freedesktop.Notifications\",\n \"/org/freedesktop/Notifications\",\n \"Notify\",\n \"susssasa{sv}i\",\n notification)\n\n if msg.message_type == MessageType.ERROR:\n logger.warning(\"Unable to send notification. \"\n \"Is a notification server running?\")\n\n # a new bus connection is made each time a notification is sent so\n # we disconnect when the notification is done\n bus.disconnect()\n\n\ndef guess_terminal(preference=None):\n \"\"\"Try to guess terminal.\"\"\"\n test_terminals = []\n if isinstance(preference, str):\n test_terminals += [preference]\n elif isinstance(preference, Sequence):\n test_terminals += list(preference)\n test_terminals += [\n 'roxterm',\n 'sakura',\n 'hyper',\n 'alacritty',\n 'terminator',\n 'termite',\n 'gnome-terminal',\n 'konsole',\n 'xfce4-terminal',\n 'lxterminal',\n 'mate-terminal',\n 'kitty',\n 'yakuake',\n 'tilda',\n 'guake',\n 'eterm',\n 'st',\n 'urxvt',\n 'xterm',\n 'x-terminal-emulator',\n ]\n\n for terminal in test_terminals:\n logger.debug('Guessing terminal: {}'.format(terminal))\n if not which(terminal, os.X_OK):\n continue\n\n logger.info('Terminal found: {}'.format(terminal))\n return terminal\n\n logger.error('Default terminal has not been found.')\n\n\ndef scan_files(dirpath, *names):\n \"\"\"\n Search a folder recursively for files matching those passed as arguments, with\n globbing. Returns a dict with keys equal to entries in names, and values a list of\n matching paths. 
E.g.:\n\n >>> scan_files('/wallpapers', '*.png', '*.jpg')\n defaultdict(<class 'list'>, {'*.png': ['/wallpapers/w1.png'], '*.jpg':\n ['/wallpapers/w2.jpg', '/wallpapers/w3.jpg']})\n\n \"\"\"\n files = defaultdict(list)\n\n for name in names:\n found = glob.glob(os.path.join(dirpath, '**', name), recursive=True)\n files[name].extend(found)\n\n return files\n\n\nasync def _send_dbus_message(session_bus, message_type, destination, interface,\n path, member, signature, body):\n \"\"\"\n Private method to send messages to dbus via dbus_next.\n\n Returns a tuple of the bus object and message response.\n \"\"\"\n if session_bus:\n bus_type = BusType.SESSION\n else:\n bus_type = BusType.SYSTEM\n\n if isinstance(body, str):\n body = [body]\n\n bus = await MessageBus(bus_type=bus_type).connect()\n\n msg = await bus.call(\n Message(message_type=message_type,\n destination=destination,\n interface=interface,\n path=path,\n member=member,\n signature=signature,\n body=body))\n\n return bus, msg\n\n\nasync def add_signal_receiver(callback, session_bus=False, signal_name=None,\n dbus_interface=None, bus_name=None, path=None):\n \"\"\"\n Helper function which aims to recreate python-dbus's add_signal_receiver\n method in dbus_next with asyncio calls.\n\n Returns True if subscription is successful.\n \"\"\"\n if not has_dbus:\n logger.warning(\n \"dbus-next is not installed. \"\n \"Unable to subscribe to signals\"\n )\n return False\n\n match_args = {\n \"type\": \"signal\",\n \"sender\": bus_name,\n \"member\": signal_name,\n \"path\": path,\n \"interface\": dbus_interface\n }\n\n rule = \",\".join(\"{}='{}'\".format(k, v)\n for k, v in match_args.items() if v)\n\n bus, msg = await _send_dbus_message(session_bus,\n MessageType.METHOD_CALL,\n \"org.freedesktop.DBus\",\n \"org.freedesktop.DBus\",\n \"/org/freedesktop/DBus\",\n \"AddMatch\",\n \"s\",\n rule)\n\n # Check if message sent successfully\n if msg.message_type == MessageType.METHOD_RETURN:\n bus.add_message_handler(callback)\n return True\n\n else:\n return False\n", "path": "libqtile/utils.py"}], "after_files": [{"content": "# Copyright (c) 2008, Aldo Cortesi. All rights reserved.\n# Copyright (c) 2020, Matt Colligan. All rights reserved.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport asyncio\nimport glob\nimport importlib\nimport os\nimport traceback\nfrom collections import defaultdict\nfrom collections.abc import Sequence\nfrom random import randint\nfrom shutil import which\nfrom typing import Tuple, Union\n\ntry:\n from dbus_next import Message, Variant # type: ignore\n from dbus_next.aio import MessageBus # type: ignore\n from dbus_next.constants import BusType, MessageType # type: ignore\n has_dbus = True\nexcept ImportError:\n has_dbus = False\n\nfrom libqtile.log_utils import logger\n\n\nclass QtileError(Exception):\n pass\n\n\ndef lget(o, v):\n try:\n return o[v]\n except (IndexError, TypeError):\n return None\n\n\ndef shuffle_up(lst):\n if len(lst) > 1:\n c = lst[-1]\n lst.remove(c)\n lst.insert(0, c)\n\n\ndef shuffle_down(lst):\n if len(lst) > 1:\n c = lst[0]\n lst.remove(c)\n lst.append(c)\n\n\nColorType = Union[str, Tuple[int, int, int], Tuple[int, int, int, float]]\n\n\ndef rgb(x):\n \"\"\"\n Returns a valid RGBA tuple.\n\n Here are some valid specifcations:\n #ff0000\n with alpha: #ff000080\n ff0000\n with alpha: ff0000.5\n (255, 0, 0)\n with alpha: (255, 0, 0, 0.5)\n \"\"\"\n if isinstance(x, (tuple, list)):\n if len(x) == 4:\n alpha = x[3]\n else:\n alpha = 1\n return (x[0] / 255.0, x[1] / 255.0, x[2] / 255.0, alpha)\n elif isinstance(x, str):\n if x.startswith(\"#\"):\n x = x[1:]\n if \".\" in x:\n x, alpha = x.split(\".\")\n alpha = float(\"0.\" + alpha)\n else:\n alpha = 1\n if len(x) not in (6, 8):\n raise ValueError(\"RGB specifier must be 6 or 8 characters long.\")\n vals = [int(i, 16) for i in (x[0:2], x[2:4], x[4:6])]\n if len(x) == 8:\n alpha = int(x[6:8], 16) / 255.0\n vals.append(alpha)\n return rgb(vals)\n raise ValueError(\"Invalid RGB specifier.\")\n\n\ndef hex(x):\n r, g, b, _ = rgb(x)\n return '#%02x%02x%02x' % (int(r * 255), int(g * 255), int(b * 255))\n\n\ndef scrub_to_utf8(text):\n if not text:\n return \"\"\n elif isinstance(text, str):\n return text\n else:\n return text.decode(\"utf-8\", \"ignore\")\n\n\ndef get_cache_dir():\n \"\"\"\n Returns the cache directory and create if it doesn't exists\n \"\"\"\n\n cache_directory = os.path.expandvars('$XDG_CACHE_HOME')\n if cache_directory == '$XDG_CACHE_HOME':\n # if variable wasn't set\n cache_directory = os.path.expanduser(\"~/.cache\")\n cache_directory = os.path.join(cache_directory, 'qtile')\n if not os.path.exists(cache_directory):\n os.makedirs(cache_directory)\n return cache_directory\n\n\ndef describe_attributes(obj, attrs, func=lambda x: x):\n \"\"\"\n Helper for __repr__ functions to list attributes with truthy values only\n (or values that return a truthy value by func)\n \"\"\"\n\n pairs = []\n\n for attr in attrs:\n value = getattr(obj, attr, None)\n if func(value):\n pairs.append('%s=%s' % (attr, value))\n\n return ', '.join(pairs)\n\n\ndef import_class(module_path, class_name, fallback=None):\n \"\"\"Import a class safely\n\n Try to import the class module, and if it fails because of an ImporError\n it logs on WARNING, and logs the traceback on DEBUG level\n \"\"\"\n try:\n module = importlib.import_module(module_path, __package__)\n return getattr(module, class_name)\n except ImportError as error:\n logger.warning(\"Unmet dependencies for '%s.%s': %s\", module_path,\n class_name, error)\n if 
fallback:\n logger.debug(\"%s\", traceback.format_exc())\n return fallback(module_path, class_name)\n raise\n\n\ndef lazify_imports(registry, package, fallback=None):\n \"\"\"Leverage PEP 562 to make imports lazy in an __init__.py\n\n The registry must be a dictionary with the items to import as keys and the\n modules they belong to as a value.\n \"\"\"\n __all__ = tuple(registry.keys())\n\n def __dir__():\n return __all__\n\n def __getattr__(name):\n if name not in registry:\n raise AttributeError\n module_path = \"{}.{}\".format(package, registry[name])\n return import_class(module_path, name, fallback=fallback)\n\n return __all__, __dir__, __getattr__\n\n\ndef send_notification(title, message, urgent=False, timeout=10000, id=None):\n \"\"\"\n Send a notification.\n\n The id argument, if passed, requests the notification server to replace a visible\n notification with the same ID. An ID is returned for each call; this would then be\n passed when calling this function again to replace that notification. See:\n https://developer.gnome.org/notification-spec/\n \"\"\"\n if not has_dbus:\n logger.warning(\n \"dbus-next is not installed. Unable to send notifications.\"\n )\n return -1\n\n id = randint(10, 1000) if id is None else id\n urgency = 2 if urgent else 1\n\n try:\n loop = asyncio.get_running_loop()\n except RuntimeError:\n logger.warning(\"Eventloop has not started. Cannot send notification.\")\n else:\n loop.create_task(_notify(title, message, urgency, timeout, id))\n\n return id\n\n\nasync def _notify(title, message, urgency, timeout, id):\n notification = [\"qtile\", # Application name\n id, # id\n \"\", # icon\n title, # summary\n message, # body\n [], # actions\n {\"urgency\": Variant(\"y\", urgency)}, # hints\n timeout] # timeout\n\n bus, msg = await _send_dbus_message(True,\n MessageType.METHOD_CALL,\n \"org.freedesktop.Notifications\",\n \"org.freedesktop.Notifications\",\n \"/org/freedesktop/Notifications\",\n \"Notify\",\n \"susssasa{sv}i\",\n notification)\n\n if msg.message_type == MessageType.ERROR:\n logger.warning(\"Unable to send notification. \"\n \"Is a notification server running?\")\n\n # a new bus connection is made each time a notification is sent so\n # we disconnect when the notification is done\n bus.disconnect()\n\n\ndef guess_terminal(preference=None):\n \"\"\"Try to guess terminal.\"\"\"\n test_terminals = []\n if isinstance(preference, str):\n test_terminals += [preference]\n elif isinstance(preference, Sequence):\n test_terminals += list(preference)\n test_terminals += [\n 'roxterm',\n 'sakura',\n 'hyper',\n 'alacritty',\n 'terminator',\n 'termite',\n 'gnome-terminal',\n 'konsole',\n 'xfce4-terminal',\n 'lxterminal',\n 'mate-terminal',\n 'kitty',\n 'yakuake',\n 'tilda',\n 'guake',\n 'eterm',\n 'st',\n 'urxvt',\n 'xterm',\n 'x-terminal-emulator',\n ]\n\n for terminal in test_terminals:\n logger.debug('Guessing terminal: {}'.format(terminal))\n if not which(terminal, os.X_OK):\n continue\n\n logger.info('Terminal found: {}'.format(terminal))\n return terminal\n\n logger.error('Default terminal has not been found.')\n\n\ndef scan_files(dirpath, *names):\n \"\"\"\n Search a folder recursively for files matching those passed as arguments, with\n globbing. Returns a dict with keys equal to entries in names, and values a list of\n matching paths. 
E.g.:\n\n >>> scan_files('/wallpapers', '*.png', '*.jpg')\n defaultdict(<class 'list'>, {'*.png': ['/wallpapers/w1.png'], '*.jpg':\n ['/wallpapers/w2.jpg', '/wallpapers/w3.jpg']})\n\n \"\"\"\n files = defaultdict(list)\n\n for name in names:\n found = glob.glob(os.path.join(dirpath, '**', name), recursive=True)\n files[name].extend(found)\n\n return files\n\n\nasync def _send_dbus_message(session_bus, message_type, destination, interface,\n path, member, signature, body):\n \"\"\"\n Private method to send messages to dbus via dbus_next.\n\n Returns a tuple of the bus object and message response.\n \"\"\"\n if session_bus:\n bus_type = BusType.SESSION\n else:\n bus_type = BusType.SYSTEM\n\n if isinstance(body, str):\n body = [body]\n\n bus = await MessageBus(bus_type=bus_type).connect()\n\n msg = await bus.call(\n Message(message_type=message_type,\n destination=destination,\n interface=interface,\n path=path,\n member=member,\n signature=signature,\n body=body))\n\n return bus, msg\n\n\nasync def add_signal_receiver(callback, session_bus=False, signal_name=None,\n dbus_interface=None, bus_name=None, path=None):\n \"\"\"\n Helper function which aims to recreate python-dbus's add_signal_receiver\n method in dbus_next with asyncio calls.\n\n Returns True if subscription is successful.\n \"\"\"\n if not has_dbus:\n logger.warning(\n \"dbus-next is not installed. \"\n \"Unable to subscribe to signals\"\n )\n return False\n\n match_args = {\n \"type\": \"signal\",\n \"sender\": bus_name,\n \"member\": signal_name,\n \"path\": path,\n \"interface\": dbus_interface\n }\n\n rule = \",\".join(\"{}='{}'\".format(k, v)\n for k, v in match_args.items() if v)\n\n bus, msg = await _send_dbus_message(session_bus,\n MessageType.METHOD_CALL,\n \"org.freedesktop.DBus\",\n \"org.freedesktop.DBus\",\n \"/org/freedesktop/DBus\",\n \"AddMatch\",\n \"s\",\n rule)\n\n # Check if message sent successfully\n if msg.message_type == MessageType.METHOD_RETURN:\n bus.add_message_handler(callback)\n return True\n\n else:\n return False\n", "path": "libqtile/utils.py"}]} |
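The qtile fix above is tiny: the `actions` field of the D-Bus `Notify` call becomes an empty list instead of `[""]`. The Desktop Notifications spec reads the actions array as (identifier, label) pairs, so a single empty string is malformed, and xfce4-notifyd apparently segfaulted on it while dunst tolerated it. A stripped-down sketch of the corrected call, built from the same dbus-next pieces the record imports (the application name, notification id, and timeout are arbitrary example values):

```python
import asyncio

from dbus_next import Message, Variant
from dbus_next.aio import MessageBus
from dbus_next.constants import MessageType

async def notify(title, body):
    bus = await MessageBus().connect()  # session bus by default
    msg = await bus.call(Message(
        message_type=MessageType.METHOD_CALL,
        destination="org.freedesktop.Notifications",
        interface="org.freedesktop.Notifications",
        path="/org/freedesktop/Notifications",
        member="Notify",
        signature="susssasa{sv}i",
        body=["qtile", 42, "", title, body,
              [],                            # actions: empty list, not [""]
              {"urgency": Variant("y", 1)},  # hints
              10000],                        # timeout in ms
    ))
    bus.disconnect()
    return msg

asyncio.run(notify("Config check", "Configuration loaded successfully."))
```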
gh_patches_debug_1085 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-1881 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OK failing since at least 2017-07-29
OK has been failing since 2017-07-29
Based on automated runs it appears that OK has not run successfully in 2 days (2017-07-29).
```
02:10:09 INFO pupa: save person Roger Thompson as person_1e8475c0-74f6-11e7-858b-0242ac110005.json
02:10:09 INFO pupa: save membership 1e8475c0-74f6-11e7-858b-0242ac110005 membership in ~{"classification": "upper"} as membership_1e84796c-74f6-11e7-858b-0242ac110005.json
02:10:09 INFO pupa: save membership 1e8475c0-74f6-11e7-858b-0242ac110005 membership in ~{"classification": "party", "name": "Republican"} as membership_1e847b92-74f6-11e7-858b-0242ac110005.json
02:10:09 INFO scrapelib: GET - http://oksenate.gov/Senators/biographies/treat_bio.html
02:10:10 INFO pupa: save person Greg Treat as person_1f1d4a98-74f6-11e7-858b-0242ac110005.json
02:10:10 INFO pupa: save membership 1f1d4a98-74f6-11e7-858b-0242ac110005 membership in ~{"classification": "upper"} as membership_1f1d4e58-74f6-11e7-858b-0242ac110005.json
02:10:10 INFO pupa: save membership 1f1d4a98-74f6-11e7-858b-0242ac110005 membership in ~{"classification": "party", "name": "Republican"} as membership_1f1d5074-74f6-11e7-858b-0242ac110005.json
02:10:10 INFO scrapelib: GET - http://oksenate.gov/Senators/biographies/yen_bio.html
02:10:11 INFO pupa: save membership 1fb7ab60-74f6-11e7-858b-0242ac110005 membership in ~{"classification": "upper"} as membership_1fb7b04c-74f6-11e7-858b-0242ac110005.json
02:10:11 INFO pupa: save person Ervin Yen as person_1fb7ab60-74f6-11e7-858b-0242ac110005.json
02:10:11 INFO pupa: save membership 1fb7ab60-74f6-11e7-858b-0242ac110005 membership in ~{"classification": "party", "name": "Republican"} as membership_1fb7b326-74f6-11e7-858b-0242ac110005.json
02:10:11 INFO pupa: GET via curl subprocess: https://www.okhouse.gov/Members/Default.aspx
02:10:12 INFO pupa: GET via curl subprocess: https://www.okhouse.gov/District.aspx?District=33
02:10:12 INFO pupa: save person Greg Babinec as person_2113c200-74f6-11e7-858b-0242ac110005.json
02:10:12 WARNING pupa: validation of Person 2113c200-74f6-11e7-858b-0242ac110005 failed: 1 validation errors:
Value None for field '<obj>.image' is not of type string
no pupa_settings on path, using defaults
ok (scrape, import)
bills: {}
people: {}
committees: {}
Value None for field '<obj>.image' is not of type string
File "/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py", line 175, in validate
validator.validate(self.as_dict(), schema)
File "/opt/openstates/venv-pupa/lib/python3.5/site-packages/validictory/validator.py", line 620, in validate
raise MultipleValidationError(self._errors)
validictory.validator.MultipleValidationError: 1 validation errors:
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/openstates/venv-pupa//bin/pupa", line 11, in <module>
load_entry_point('pupa', 'console_scripts', 'pupa')()
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py", line 67, in main
subcommands[args.subcommand].handle(args, other)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py", line 104, in do_scrape
self.save_object(obj)
File "/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py", line 89, in save_object
raise ve
File "/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py", line 85, in save_object
obj.validate()
File "/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py", line 178, in validate
self.__class__.__name__, self._id, ve)
pupa.exceptions.ScrapeValueError: validation of Person 2113c200-74f6-11e7-858b-0242ac110005 failed: 1 validation errors:
Value None for field '<obj>.image' is not of type string
```
Visit http://bobsled.openstates.org for more info.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/ok/people.py`
Content:
```
1 import re
2 import lxml
3 from pupa.scrape import Person, Scraper
4 from openstates.utils import LXMLMixin, validate_email_address
5 from .utils import LXMLMixinOK
6
7
8 class OKPersonScraper(Scraper, LXMLMixin, LXMLMixinOK):
9
10 _parties = {'R': 'Republican', 'D': 'Democratic', 'I': 'Independent'}
11
12 def _scrub(self, text):
13 """Squish whitespace and kill \xa0."""
14 return re.sub(r'[\s\xa0]+', ' ', text)
15
16 def _clean_office_info(self, office_info):
17 office_info = list(map(self._scrub, office_info.itertext()))
18 # Throw away anything after any email address, phone number, or
19 # address lines.
20 while office_info:
21 last = office_info[-1]
22 if ('@' not in last
23 and ', OK' not in last
24 and not re.search(r'[\d\-\(\) ]{7,}', last)):
25 office_info.pop()
26 else:
27 break
28 return office_info
29
30 def _extract_phone(self, office_info):
31 phone = None
32
33 for line in office_info:
34 phone_match = re.search(r'''(\(\d{3}\) \d{3}-\d{4}|
35 \d{3}.\d{3}.\d{4})''', line)
36 if phone_match is not None:
37 phone = phone_match.group(1).strip()
38
39 return phone
40
41 def _extract_email(self, doc):
42 xpath = '//div[@class="districtheadleft"]' \
43 + '/b[contains(text(), "Email:")]' \
44 + '/../following-sibling::div' \
45 + '/script/text()'
46 script = doc.xpath(xpath)
47 if not script:
48 return ''
49 script = script[0]
50 line = filter(
51 lambda line: '+ "@" +' in line,
52 script.split('\r\n'))[0]
53 parts = re.findall(r'"(.+?)"', line)
54
55 email = ''.join(parts)
56
57 return email if validate_email_address(email) else ''
58
59 def scrape(self, chamber=None):
60 term = self.jurisdiction.legislative_sessions[-1]['identifier']
61 chambers = [chamber] if chamber is not None else ['upper', 'lower']
62 for chamber in chambers:
63 yield from getattr(self, 'scrape_' + chamber + '_chamber')(term)
64
65 def scrape_lower_chamber(self, term):
66 url = "https://www.okhouse.gov/Members/Default.aspx"
67 page = self.curl_lxmlize(url)
68
69 legislator_nodes = self.get_nodes(
70 page,
71 '//table[@id="ctl00_ContentPlaceHolder1_RadGrid1_ctl00"]/tbody/tr')
72
73 for legislator_node in legislator_nodes:
74 name_node = self.get_node(
75 legislator_node,
76 './/td[1]/a')
77
78 if name_node is not None:
79 name_text = name_node.text.strip()
80
81 # Handle seats with no current representative
82 if re.search(r'District \d+', name_text):
83 continue
84
85 last_name, delimiter, first_name = name_text.partition(',')
86
87 if last_name is not None and first_name is not None:
88 first_name = first_name.strip()
89 last_name = last_name.strip()
90 name = ' '.join([first_name, last_name])
91 else:
92 raise ValueError('Unable to parse name: {}'.format(
93 name_text))
94
95 if name.startswith('House District'):
96 continue
97
98 district_node = self.get_node(
99 legislator_node,
100 './/td[3]')
101
102 if district_node is not None:
103 district = district_node.text.strip()
104
105 party_node = self.get_node(
106 legislator_node,
107 './/td[4]')
108
109 if party_node is not None:
110 party_text = party_node.text.strip()
111
112 party = self._parties[party_text]
113
114 legislator_url = 'https://www.okhouse.gov/District.aspx?District=' + district
115 legislator_page = self.curl_lxmlize(legislator_url)
116
117 photo_url = self.get_node(
118 legislator_page,
119 '//a[@id="ctl00_ContentPlaceHolder1_imgHiRes"]/@href')
120
121 person = Person(primary_org='lower',
122 district=district,
123 name=name,
124 party=party,
125 image=photo_url)
126 person.extras['_scraped_name'] = name_text
127 person.add_link(legislator_url)
128 person.add_source(url)
129 person.add_source(legislator_url)
130
131 # Scrape offices.
132 self.scrape_lower_offices(legislator_page, person)
133
134 yield person
135
136 def scrape_lower_offices(self, doc, person):
137
138 # Capitol offices:
139 xpath = '//*[contains(text(), "Capitol Address")]'
140 for bold in doc.xpath(xpath):
141
142 # Get the address.
143 address_div = next(bold.getparent().itersiblings())
144
145 # Get the room number.
146 xpath = '//*[contains(@id, "CapitolRoom")]/text()'
147 room = address_div.xpath(xpath)
148 if room:
149 parts = map(self._scrub, list(address_div.itertext()))
150 parts = [x.strip() for x in parts if x.strip()]
151 phone = parts.pop()
152 parts = [parts[0], 'Room ' + room[0], parts[-1]]
153 address = '\n'.join(parts)
154 else:
155 address = None
156 phone = None
157
158 if not phone:
159 phone = None
160
161 # Get the email address, extracted from a series of JS
162 # "document.write" lines.
163 email = self._extract_email(doc)
164 if email:
165 person.add_contact_detail(type='email', value=email,
166 note='Capitol Office')
167 person.extras['email'] = email
168 if phone:
169 person.add_contact_detail(type='voice', value=str(phone),
170 note='Capitol Office')
171 if address:
172 person.add_contact_detail(type='address', value=address,
173 note='Capitol Office')
174
175 # District offices only have address, no other information
176 district_address = doc.xpath('//span[@id="ctl00_Content'
177 'PlaceHolder1_lblDistrictAddress"]/text()')
178 if district_address:
179 district_city_state, = doc.xpath('//span[@id="ctl00_Content'
180 'PlaceHolder1_lblDistrictCity"]/text()')
181 district_address = "{}\n{}".format(district_address[0], district_city_state)
182 if district_address:
183 person.add_contact_detail(type='address', value=district_address,
184 note='District Office')
185
186 def scrape_upper_chamber(self, term):
187 url = "http://oksenate.gov/Senators/Default.aspx"
188 html = self.get(url).text
189 doc = lxml.html.fromstring(html)
190 doc.make_links_absolute(url)
191
192 for a in doc.xpath('//table[@summary]')[0]. \
193 xpath('.//td//a[contains(@href, "biographies")]'):
194 tail = a.xpath('..')[0].tail
195 if tail:
196 district = tail.split()[1]
197 else:
198 district = a.xpath('../../span')[1].text.split()[1]
199
200 if a.text is None or a.text.strip() == 'Vacant':
201 self.warning("District {} appears to be empty".format(district))
202 continue
203 else:
204 match = re.match(r'(.+) \(([A-Z])\)', a.text.strip())
205 name, party = match.group(1), self._parties[match.group(2)]
206
207 url = a.get('href')
208
209 person = Person(primary_org='upper',
210 district=district,
211 name=name.strip(),
212 party=party,
213 )
214 person.add_link(url)
215 person.add_source(url)
216 self.scrape_upper_offices(person, url)
217 yield person
218
219 def scrape_upper_offices(self, person, url):
220 url = url.replace('aspx', 'html')
221 html = self.get(url).text
222 person.add_source(url)
223 doc = lxml.html.fromstring(html)
224 doc.make_links_absolute(url)
225
226 try:
227 xpath = '//h3[contains(., "Office")]'
228 for table in doc.xpath(xpath)[0].itersiblings():
229 if table.tag == 'table':
230 break
231 except IndexError:
232 self.warning('invalid bio page for %s', person)
233 return
234 col1, col2 = table.xpath('tr[2]/td')
235 lxml.etree.strip_tags(col1, 'sup')
236 lxml.etree.strip_tags(col2, 'sup')
237
238 capitol_office_info = self._clean_office_info(col1)
239
240 # Set email on the leg object.
241 if capitol_office_info:
242 if '@' in capitol_office_info[-1]:
243 email = capitol_office_info.pop()
244 person.extras['email'] = email
245 else:
246 email = None
247
248 capitol_phone = self._extract_phone(capitol_office_info)
249
250 capitol_address_lines = map(
251 lambda line: line.strip(),
252 filter(
253 lambda string: re.search(r', OK|Lincoln Blvd|Room \d', string),
254 capitol_office_info))
255
256 if email:
257 person.add_contact_detail(type='email', value=email,
258 note='Capitol Office')
259 if capitol_phone:
260 person.add_contact_detail(type='voice', value=str(capitol_phone),
261 note='Capitol Office')
262
263 capitol_address = '\n'.join(capitol_address_lines)
264 if capitol_address:
265 person.add_contact_detail(type='address', value=capitol_address,
266 note='Capitol Office')
267
268 district_office_info = self._clean_office_info(col2)
269 # This probably isn't a valid district office at less than two lines.
270 if len(district_office_info) < 2:
271 return
272
273 district_address_lines = []
274 for line in district_office_info:
275 district_address_lines.append(line.strip())
276 if 'OK' in line:
277 break
278
279 if 'OK' in district_address_lines[-1]:
280 district_address = '\n'.join(filter(lambda line: line,
281 district_address_lines))
282 else:
283 district_address = None
284 # self.logger.debug(district_address)
285
286 district_phone = self._extract_phone(district_office_info)
287
288 if capitol_phone:
289 person.add_contact_detail(type='voice', value=str(district_phone),
290 note='District Office')
291 if capitol_address_lines:
292 person.add_contact_detail(type='address', value=district_address,
293 note='District Office')
294
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openstates/ok/people.py b/openstates/ok/people.py
--- a/openstates/ok/people.py
+++ b/openstates/ok/people.py
@@ -111,7 +111,7 @@
party = self._parties[party_text]
- legislator_url = 'https://www.okhouse.gov/District.aspx?District=' + district
+ legislator_url = 'https://www.okhouse.gov/Members/District.aspx?District=' + district
legislator_page = self.curl_lxmlize(legislator_url)
photo_url = self.get_node(
| {"golden_diff": "diff --git a/openstates/ok/people.py b/openstates/ok/people.py\n--- a/openstates/ok/people.py\n+++ b/openstates/ok/people.py\n@@ -111,7 +111,7 @@\n \n party = self._parties[party_text]\n \n- legislator_url = 'https://www.okhouse.gov/District.aspx?District=' + district\n+ legislator_url = 'https://www.okhouse.gov/Members/District.aspx?District=' + district\n legislator_page = self.curl_lxmlize(legislator_url)\n \n photo_url = self.get_node(\n", "issue": "OK failing since at least 2017-07-29\nOK has been failing since 2017-07-29\n\nBased on automated runs it appears that OK has not run successfully in 2 days (2017-07-29).\n\n\n```\n 02:10:09 INFO pupa: save person Roger Thompson as person_1e8475c0-74f6-11e7-858b-0242ac110005.json\n02:10:09 INFO pupa: save membership 1e8475c0-74f6-11e7-858b-0242ac110005 membership in ~{\"classification\": \"upper\"} as membership_1e84796c-74f6-11e7-858b-0242ac110005.json\n02:10:09 INFO pupa: save membership 1e8475c0-74f6-11e7-858b-0242ac110005 membership in ~{\"classification\": \"party\", \"name\": \"Republican\"} as membership_1e847b92-74f6-11e7-858b-0242ac110005.json\n02:10:09 INFO scrapelib: GET - http://oksenate.gov/Senators/biographies/treat_bio.html\n02:10:10 INFO pupa: save person Greg Treat as person_1f1d4a98-74f6-11e7-858b-0242ac110005.json\n02:10:10 INFO pupa: save membership 1f1d4a98-74f6-11e7-858b-0242ac110005 membership in ~{\"classification\": \"upper\"} as membership_1f1d4e58-74f6-11e7-858b-0242ac110005.json\n02:10:10 INFO pupa: save membership 1f1d4a98-74f6-11e7-858b-0242ac110005 membership in ~{\"classification\": \"party\", \"name\": \"Republican\"} as membership_1f1d5074-74f6-11e7-858b-0242ac110005.json\n02:10:10 INFO scrapelib: GET - http://oksenate.gov/Senators/biographies/yen_bio.html\n02:10:11 INFO pupa: save membership 1fb7ab60-74f6-11e7-858b-0242ac110005 membership in ~{\"classification\": \"upper\"} as membership_1fb7b04c-74f6-11e7-858b-0242ac110005.json\n02:10:11 INFO pupa: save person Ervin Yen as person_1fb7ab60-74f6-11e7-858b-0242ac110005.json\n02:10:11 INFO pupa: save membership 1fb7ab60-74f6-11e7-858b-0242ac110005 membership in ~{\"classification\": \"party\", \"name\": \"Republican\"} as membership_1fb7b326-74f6-11e7-858b-0242ac110005.json\n02:10:11 INFO pupa: GET via curl subprocess: https://www.okhouse.gov/Members/Default.aspx\n02:10:12 INFO pupa: GET via curl subprocess: https://www.okhouse.gov/District.aspx?District=33\n02:10:12 INFO pupa: save person Greg Babinec as person_2113c200-74f6-11e7-858b-0242ac110005.json\n02:10:12 WARNING pupa: validation of Person 2113c200-74f6-11e7-858b-0242ac110005 failed: 1 validation errors:\nValue None for field '<obj>.image' is not of type string\nno pupa_settings on path, using defaults\nok (scrape, import)\n bills: {}\n people: {}\n committees: {}\nValue None for field '<obj>.image' is not of type string\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py\", line 175, in validate\n validator.validate(self.as_dict(), schema)\n File \"/opt/openstates/venv-pupa/lib/python3.5/site-packages/validictory/validator.py\", line 620, in validate\n raise MultipleValidationError(self._errors)\nvalidictory.validator.MultipleValidationError: 1 validation errors:\nTraceback (most recent call last):\nDuring handling of the above exception, another exception occurred:\nTraceback (most recent call last):\n File \"/opt/openstates/venv-pupa//bin/pupa\", line 11, in <module>\n load_entry_point('pupa', 'console_scripts', 'pupa')()\n File 
\"/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py\", line 67, in main\n subcommands[args.subcommand].handle(args, other)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 260, in handle\n return self.do_handle(args, other, juris)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 305, in do_handle\n report['scrape'] = self.do_scrape(juris, args, scrapers)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 173, in do_scrape\n report[scraper_name] = scraper.do_scrape(**scrape_args)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py\", line 104, in do_scrape\n self.save_object(obj)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py\", line 89, in save_object\n raise ve\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py\", line 85, in save_object\n obj.validate()\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py\", line 178, in validate\n self.__class__.__name__, self._id, ve)\npupa.exceptions.ScrapeValueError: validation of Person 2113c200-74f6-11e7-858b-0242ac110005 failed: 1 validation errors:\nValue None for field '<obj>.image' is not of type string\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "import re\nimport lxml\nfrom pupa.scrape import Person, Scraper\nfrom openstates.utils import LXMLMixin, validate_email_address\nfrom .utils import LXMLMixinOK\n\n\nclass OKPersonScraper(Scraper, LXMLMixin, LXMLMixinOK):\n\n _parties = {'R': 'Republican', 'D': 'Democratic', 'I': 'Independent'}\n\n def _scrub(self, text):\n \"\"\"Squish whitespace and kill \\xa0.\"\"\"\n return re.sub(r'[\\s\\xa0]+', ' ', text)\n\n def _clean_office_info(self, office_info):\n office_info = list(map(self._scrub, office_info.itertext()))\n # Throw away anything after any email address, phone number, or\n # address lines.\n while office_info:\n last = office_info[-1]\n if ('@' not in last\n and ', OK' not in last\n and not re.search(r'[\\d\\-\\(\\) ]{7,}', last)):\n office_info.pop()\n else:\n break\n return office_info\n\n def _extract_phone(self, office_info):\n phone = None\n\n for line in office_info:\n phone_match = re.search(r'''(\\(\\d{3}\\) \\d{3}-\\d{4}|\n \\d{3}.\\d{3}.\\d{4})''', line)\n if phone_match is not None:\n phone = phone_match.group(1).strip()\n\n return phone\n\n def _extract_email(self, doc):\n xpath = '//div[@class=\"districtheadleft\"]' \\\n + '/b[contains(text(), \"Email:\")]' \\\n + '/../following-sibling::div' \\\n + '/script/text()'\n script = doc.xpath(xpath)\n if not script:\n return ''\n script = script[0]\n line = filter(\n lambda line: '+ \"@\" +' in line,\n script.split('\\r\\n'))[0]\n parts = re.findall(r'\"(.+?)\"', line)\n\n email = ''.join(parts)\n\n return email if validate_email_address(email) else ''\n\n def scrape(self, chamber=None):\n term = self.jurisdiction.legislative_sessions[-1]['identifier']\n chambers = [chamber] if chamber is not None else ['upper', 'lower']\n for chamber in chambers:\n yield from getattr(self, 'scrape_' + chamber + '_chamber')(term)\n\n def scrape_lower_chamber(self, term):\n url = \"https://www.okhouse.gov/Members/Default.aspx\"\n page = self.curl_lxmlize(url)\n\n legislator_nodes = self.get_nodes(\n page,\n '//table[@id=\"ctl00_ContentPlaceHolder1_RadGrid1_ctl00\"]/tbody/tr')\n\n for legislator_node in legislator_nodes:\n name_node = self.get_node(\n legislator_node,\n './/td[1]/a')\n\n if name_node is not None:\n name_text = name_node.text.strip()\n\n # 
Handle seats with no current representative\n if re.search(r'District \\d+', name_text):\n continue\n\n last_name, delimiter, first_name = name_text.partition(',')\n\n if last_name is not None and first_name is not None:\n first_name = first_name.strip()\n last_name = last_name.strip()\n name = ' '.join([first_name, last_name])\n else:\n raise ValueError('Unable to parse name: {}'.format(\n name_text))\n\n if name.startswith('House District'):\n continue\n\n district_node = self.get_node(\n legislator_node,\n './/td[3]')\n\n if district_node is not None:\n district = district_node.text.strip()\n\n party_node = self.get_node(\n legislator_node,\n './/td[4]')\n\n if party_node is not None:\n party_text = party_node.text.strip()\n\n party = self._parties[party_text]\n\n legislator_url = 'https://www.okhouse.gov/District.aspx?District=' + district\n legislator_page = self.curl_lxmlize(legislator_url)\n\n photo_url = self.get_node(\n legislator_page,\n '//a[@id=\"ctl00_ContentPlaceHolder1_imgHiRes\"]/@href')\n\n person = Person(primary_org='lower',\n district=district,\n name=name,\n party=party,\n image=photo_url)\n person.extras['_scraped_name'] = name_text\n person.add_link(legislator_url)\n person.add_source(url)\n person.add_source(legislator_url)\n\n # Scrape offices.\n self.scrape_lower_offices(legislator_page, person)\n\n yield person\n\n def scrape_lower_offices(self, doc, person):\n\n # Capitol offices:\n xpath = '//*[contains(text(), \"Capitol Address\")]'\n for bold in doc.xpath(xpath):\n\n # Get the address.\n address_div = next(bold.getparent().itersiblings())\n\n # Get the room number.\n xpath = '//*[contains(@id, \"CapitolRoom\")]/text()'\n room = address_div.xpath(xpath)\n if room:\n parts = map(self._scrub, list(address_div.itertext()))\n parts = [x.strip() for x in parts if x.strip()]\n phone = parts.pop()\n parts = [parts[0], 'Room ' + room[0], parts[-1]]\n address = '\\n'.join(parts)\n else:\n address = None\n phone = None\n\n if not phone:\n phone = None\n\n # Get the email address, extracted from a series of JS\n # \"document.write\" lines.\n email = self._extract_email(doc)\n if email:\n person.add_contact_detail(type='email', value=email,\n note='Capitol Office')\n person.extras['email'] = email\n if phone:\n person.add_contact_detail(type='voice', value=str(phone),\n note='Capitol Office')\n if address:\n person.add_contact_detail(type='address', value=address,\n note='Capitol Office')\n\n # District offices only have address, no other information\n district_address = doc.xpath('//span[@id=\"ctl00_Content'\n 'PlaceHolder1_lblDistrictAddress\"]/text()')\n if district_address:\n district_city_state, = doc.xpath('//span[@id=\"ctl00_Content'\n 'PlaceHolder1_lblDistrictCity\"]/text()')\n district_address = \"{}\\n{}\".format(district_address[0], district_city_state)\n if district_address:\n person.add_contact_detail(type='address', value=district_address,\n note='District Office')\n\n def scrape_upper_chamber(self, term):\n url = \"http://oksenate.gov/Senators/Default.aspx\"\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n for a in doc.xpath('//table[@summary]')[0]. 
\\\n xpath('.//td//a[contains(@href, \"biographies\")]'):\n tail = a.xpath('..')[0].tail\n if tail:\n district = tail.split()[1]\n else:\n district = a.xpath('../../span')[1].text.split()[1]\n\n if a.text is None or a.text.strip() == 'Vacant':\n self.warning(\"District {} appears to be empty\".format(district))\n continue\n else:\n match = re.match(r'(.+) \\(([A-Z])\\)', a.text.strip())\n name, party = match.group(1), self._parties[match.group(2)]\n\n url = a.get('href')\n\n person = Person(primary_org='upper',\n district=district,\n name=name.strip(),\n party=party,\n )\n person.add_link(url)\n person.add_source(url)\n self.scrape_upper_offices(person, url)\n yield person\n\n def scrape_upper_offices(self, person, url):\n url = url.replace('aspx', 'html')\n html = self.get(url).text\n person.add_source(url)\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n try:\n xpath = '//h3[contains(., \"Office\")]'\n for table in doc.xpath(xpath)[0].itersiblings():\n if table.tag == 'table':\n break\n except IndexError:\n self.warning('invalid bio page for %s', person)\n return\n col1, col2 = table.xpath('tr[2]/td')\n lxml.etree.strip_tags(col1, 'sup')\n lxml.etree.strip_tags(col2, 'sup')\n\n capitol_office_info = self._clean_office_info(col1)\n\n # Set email on the leg object.\n if capitol_office_info:\n if '@' in capitol_office_info[-1]:\n email = capitol_office_info.pop()\n person.extras['email'] = email\n else:\n email = None\n\n capitol_phone = self._extract_phone(capitol_office_info)\n\n capitol_address_lines = map(\n lambda line: line.strip(),\n filter(\n lambda string: re.search(r', OK|Lincoln Blvd|Room \\d', string),\n capitol_office_info))\n\n if email:\n person.add_contact_detail(type='email', value=email,\n note='Capitol Office')\n if capitol_phone:\n person.add_contact_detail(type='voice', value=str(capitol_phone),\n note='Capitol Office')\n\n capitol_address = '\\n'.join(capitol_address_lines)\n if capitol_address:\n person.add_contact_detail(type='address', value=capitol_address,\n note='Capitol Office')\n\n district_office_info = self._clean_office_info(col2)\n # This probably isn't a valid district office at less than two lines.\n if len(district_office_info) < 2:\n return\n\n district_address_lines = []\n for line in district_office_info:\n district_address_lines.append(line.strip())\n if 'OK' in line:\n break\n\n if 'OK' in district_address_lines[-1]:\n district_address = '\\n'.join(filter(lambda line: line,\n district_address_lines))\n else:\n district_address = None\n # self.logger.debug(district_address)\n\n district_phone = self._extract_phone(district_office_info)\n\n if capitol_phone:\n person.add_contact_detail(type='voice', value=str(district_phone),\n note='District Office')\n if capitol_address_lines:\n person.add_contact_detail(type='address', value=district_address,\n note='District Office')\n", "path": "openstates/ok/people.py"}], "after_files": [{"content": "import re\nimport lxml\nfrom pupa.scrape import Person, Scraper\nfrom openstates.utils import LXMLMixin, validate_email_address\nfrom .utils import LXMLMixinOK\n\n\nclass OKPersonScraper(Scraper, LXMLMixin, LXMLMixinOK):\n\n _parties = {'R': 'Republican', 'D': 'Democratic', 'I': 'Independent'}\n\n def _scrub(self, text):\n \"\"\"Squish whitespace and kill \\xa0.\"\"\"\n return re.sub(r'[\\s\\xa0]+', ' ', text)\n\n def _clean_office_info(self, office_info):\n office_info = list(map(self._scrub, office_info.itertext()))\n # Throw away anything after any email address, phone number, or\n # address 
lines.\n while office_info:\n last = office_info[-1]\n if ('@' not in last\n and ', OK' not in last\n and not re.search(r'[\\d\\-\\(\\) ]{7,}', last)):\n office_info.pop()\n else:\n break\n return office_info\n\n def _extract_phone(self, office_info):\n phone = None\n\n for line in office_info:\n phone_match = re.search(r'''(\\(\\d{3}\\) \\d{3}-\\d{4}|\n \\d{3}.\\d{3}.\\d{4})''', line)\n if phone_match is not None:\n phone = phone_match.group(1).strip()\n\n return phone\n\n def _extract_email(self, doc):\n xpath = '//div[@class=\"districtheadleft\"]' \\\n + '/b[contains(text(), \"Email:\")]' \\\n + '/../following-sibling::div' \\\n + '/script/text()'\n script = doc.xpath(xpath)\n if not script:\n return ''\n script = script[0]\n line = filter(\n lambda line: '+ \"@\" +' in line,\n script.split('\\r\\n'))[0]\n parts = re.findall(r'\"(.+?)\"', line)\n\n email = ''.join(parts)\n\n return email if validate_email_address(email) else ''\n\n def scrape(self, chamber=None):\n term = self.jurisdiction.legislative_sessions[-1]['identifier']\n chambers = [chamber] if chamber is not None else ['upper', 'lower']\n for chamber in chambers:\n yield from getattr(self, 'scrape_' + chamber + '_chamber')(term)\n\n def scrape_lower_chamber(self, term):\n url = \"https://www.okhouse.gov/Members/Default.aspx\"\n page = self.curl_lxmlize(url)\n\n legislator_nodes = self.get_nodes(\n page,\n '//table[@id=\"ctl00_ContentPlaceHolder1_RadGrid1_ctl00\"]/tbody/tr')\n\n for legislator_node in legislator_nodes:\n name_node = self.get_node(\n legislator_node,\n './/td[1]/a')\n\n if name_node is not None:\n name_text = name_node.text.strip()\n\n # Handle seats with no current representative\n if re.search(r'District \\d+', name_text):\n continue\n\n last_name, delimiter, first_name = name_text.partition(',')\n\n if last_name is not None and first_name is not None:\n first_name = first_name.strip()\n last_name = last_name.strip()\n name = ' '.join([first_name, last_name])\n else:\n raise ValueError('Unable to parse name: {}'.format(\n name_text))\n\n if name.startswith('House District'):\n continue\n\n district_node = self.get_node(\n legislator_node,\n './/td[3]')\n\n if district_node is not None:\n district = district_node.text.strip()\n\n party_node = self.get_node(\n legislator_node,\n './/td[4]')\n\n if party_node is not None:\n party_text = party_node.text.strip()\n\n party = self._parties[party_text]\n\n legislator_url = 'https://www.okhouse.gov/Members/District.aspx?District=' + district\n legislator_page = self.curl_lxmlize(legislator_url)\n\n photo_url = self.get_node(\n legislator_page,\n '//a[@id=\"ctl00_ContentPlaceHolder1_imgHiRes\"]/@href')\n\n person = Person(primary_org='lower',\n district=district,\n name=name,\n party=party,\n image=photo_url)\n person.extras['_scraped_name'] = name_text\n person.add_link(legislator_url)\n person.add_source(url)\n person.add_source(legislator_url)\n\n # Scrape offices.\n self.scrape_lower_offices(legislator_page, person)\n\n yield person\n\n def scrape_lower_offices(self, doc, person):\n\n # Capitol offices:\n xpath = '//*[contains(text(), \"Capitol Address\")]'\n for bold in doc.xpath(xpath):\n\n # Get the address.\n address_div = next(bold.getparent().itersiblings())\n\n # Get the room number.\n xpath = '//*[contains(@id, \"CapitolRoom\")]/text()'\n room = address_div.xpath(xpath)\n if room:\n parts = map(self._scrub, list(address_div.itertext()))\n parts = [x.strip() for x in parts if x.strip()]\n phone = parts.pop()\n parts = [parts[0], 'Room ' + room[0], 
parts[-1]]\n address = '\\n'.join(parts)\n else:\n address = None\n phone = None\n\n if not phone:\n phone = None\n\n # Get the email address, extracted from a series of JS\n # \"document.write\" lines.\n email = self._extract_email(doc)\n if email:\n person.add_contact_detail(type='email', value=email,\n note='Capitol Office')\n person.extras['email'] = email\n if phone:\n person.add_contact_detail(type='voice', value=str(phone),\n note='Capitol Office')\n if address:\n person.add_contact_detail(type='address', value=address,\n note='Capitol Office')\n\n # District offices only have address, no other information\n district_address = doc.xpath('//span[@id=\"ctl00_Content'\n 'PlaceHolder1_lblDistrictAddress\"]/text()')\n if district_address:\n district_city_state, = doc.xpath('//span[@id=\"ctl00_Content'\n 'PlaceHolder1_lblDistrictCity\"]/text()')\n district_address = \"{}\\n{}\".format(district_address[0], district_city_state)\n if district_address:\n person.add_contact_detail(type='address', value=district_address,\n note='District Office')\n\n def scrape_upper_chamber(self, term):\n url = \"http://oksenate.gov/Senators/Default.aspx\"\n html = self.get(url).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n for a in doc.xpath('//table[@summary]')[0]. \\\n xpath('.//td//a[contains(@href, \"biographies\")]'):\n tail = a.xpath('..')[0].tail\n if tail:\n district = tail.split()[1]\n else:\n district = a.xpath('../../span')[1].text.split()[1]\n\n if a.text is None or a.text.strip() == 'Vacant':\n self.warning(\"District {} appears to be empty\".format(district))\n continue\n else:\n match = re.match(r'(.+) \\(([A-Z])\\)', a.text.strip())\n name, party = match.group(1), self._parties[match.group(2)]\n\n url = a.get('href')\n\n person = Person(primary_org='upper',\n district=district,\n name=name.strip(),\n party=party,\n )\n person.add_link(url)\n person.add_source(url)\n self.scrape_upper_offices(person, url)\n yield person\n\n def scrape_upper_offices(self, person, url):\n url = url.replace('aspx', 'html')\n html = self.get(url).text\n person.add_source(url)\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute(url)\n\n try:\n xpath = '//h3[contains(., \"Office\")]'\n for table in doc.xpath(xpath)[0].itersiblings():\n if table.tag == 'table':\n break\n except IndexError:\n self.warning('invalid bio page for %s', person)\n return\n col1, col2 = table.xpath('tr[2]/td')\n lxml.etree.strip_tags(col1, 'sup')\n lxml.etree.strip_tags(col2, 'sup')\n\n capitol_office_info = self._clean_office_info(col1)\n\n # Set email on the leg object.\n if capitol_office_info:\n if '@' in capitol_office_info[-1]:\n email = capitol_office_info.pop()\n person.extras['email'] = email\n else:\n email = None\n\n capitol_phone = self._extract_phone(capitol_office_info)\n\n capitol_address_lines = map(\n lambda line: line.strip(),\n filter(\n lambda string: re.search(r', OK|Lincoln Blvd|Room \\d', string),\n capitol_office_info))\n\n if email:\n person.add_contact_detail(type='email', value=email,\n note='Capitol Office')\n if capitol_phone:\n person.add_contact_detail(type='voice', value=str(capitol_phone),\n note='Capitol Office')\n\n capitol_address = '\\n'.join(capitol_address_lines)\n if capitol_address:\n person.add_contact_detail(type='address', value=capitol_address,\n note='Capitol Office')\n\n district_office_info = self._clean_office_info(col2)\n # This probably isn't a valid district office at less than two lines.\n if len(district_office_info) < 2:\n return\n\n 
district_address_lines = []\n for line in district_office_info:\n district_address_lines.append(line.strip())\n if 'OK' in line:\n break\n\n if 'OK' in district_address_lines[-1]:\n district_address = '\\n'.join(filter(lambda line: line,\n district_address_lines))\n else:\n district_address = None\n # self.logger.debug(district_address)\n\n district_phone = self._extract_phone(district_office_info)\n\n if capitol_phone:\n person.add_contact_detail(type='voice', value=str(district_phone),\n note='District Office')\n if capitol_address_lines:\n person.add_contact_detail(type='address', value=district_address,\n note='District Office')\n", "path": "openstates/ok/people.py"}]} |
gh_patches_debug_1086 | rasdani/github-patches | git_diff | hydroshare__hydroshare-2260 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rename userInfo/ API endpoint to user/
Placeholder ticket
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hs_rest_api/urls.py`
Content:
```
1 from django.conf.urls import patterns, url
2 from hs_core import views
3 from hs_file_types import views as file_type_views
4
5 from rest_framework_swagger.views import get_swagger_view
6
7 schema_view = get_swagger_view(title='Hydroshare API')
8
9 urlpatterns = patterns(
10 '',
11
12 # Swagger Docs View
13 url(r'^$', schema_view),
14
15 # resource API
16 url(r'^resource/types/$', views.resource_rest_api.ResourceTypes.as_view(),
17 name='list_resource_types'),
18
19 # DEPRECATED: use from above instead
20 url(r'^resourceTypes/$', views.resource_rest_api.ResourceTypes.as_view(),
21 name='DEPRECATED_list_resource_types'),
22
23 # DEPRECATED: use GET /resource/ instead
24 url(r'^resourceList/$', views.resource_rest_api.ResourceList.as_view(),
25 name='DEPRECATED_list_resources'),
26
27 url(r'^resource/$', views.resource_rest_api.ResourceListCreate.as_view(),
28 name='list_create_resource'),
29
30 # Public endpoint for resource flags
31 url(r'^resource/(?P<pk>[0-9a-f-]+)/flag/$', views.set_resource_flag_public,
32 name='public_set_resource_flag'),
33
34 url(r'^resource/(?P<pk>[0-9a-f-]+)/$',
35 views.resource_rest_api.ResourceReadUpdateDelete.as_view(),
36 name='get_update_delete_resource'),
37
38 # Create new version of a resource
39 url(r'^resource/(?P<pk>[0-9a-f-]+)/version/$', views.create_new_version_resource_public,
40 name='new_version_resource_public'),
41
42 # public copy resource endpoint
43 url(r'^resource/(?P<pk>[0-9a-f-]+)/copy/$',
44 views.copy_resource_public, name='copy_resource_public'),
45
46 # DEPRECATED: use form above instead
47 url(r'^resource/accessRules/(?P<pk>[0-9a-f-]+)/$',
48 views.resource_rest_api.AccessRulesUpdate.as_view(),
49 name='DEPRECATED_update_access_rules'),
50
51 url(r'^resource/(?P<pk>[0-9a-f-]+)/sysmeta/$',
52 views.resource_rest_api.SystemMetadataRetrieve.as_view(),
53 name='get_system_metadata'),
54
55 # DEPRECATED: use from above instead
56 url(r'^sysmeta/(?P<pk>[0-9a-f-]+)/$',
57 views.resource_rest_api.SystemMetadataRetrieve.as_view(),
58 name='DEPRECATED_get_system_metadata'),
59
60 url(r'^resource/(?P<pk>[0-9a-f-]+)/scimeta/$',
61 views.resource_rest_api.ScienceMetadataRetrieveUpdate.as_view(),
62 name='get_update_science_metadata'),
63
64 # Resource metadata editing
65 url(r'^resource/(?P<pk>[0-9a-f-]+)/scimeta/elements/$',
66 views.resource_metadata_rest_api.MetadataElementsRetrieveUpdate.as_view(),
67 name='get_update_science_metadata_elements'),
68
69 # Update key-value metadata
70 url(r'^resource/(?P<pk>[0-9a-f-]+)/scimeta/custom/$',
71 views.update_key_value_metadata_public,
72 name='update_custom_metadata'),
73
74 # DEPRECATED: use from above instead
75 url(r'^scimeta/(?P<pk>[0-9a-f-]+)/$',
76 views.resource_rest_api.ScienceMetadataRetrieveUpdate.as_view(),
77 name='DEPRECATED_get_update_science_metadata'),
78
79 url(r'^resource/(?P<pk>[A-z0-9]+)/map/$',
80 views.resource_rest_api.ResourceMapRetrieve.as_view(),
81 name='get_resource_map'),
82
83 # Patterns are now checked in the view class.
84 url(r'^resource/(?P<pk>[0-9a-f-]+)/files/(?P<pathname>.+)/$',
85 views.resource_rest_api.ResourceFileCRUD.as_view(),
86 name='get_update_delete_resource_file'),
87
88 url(r'^resource/(?P<pk>[0-9a-f-]+)/files/$',
89 views.resource_rest_api.ResourceFileListCreate.as_view(),
90 name='list_create_resource_file'),
91
92 url(r'^resource/(?P<pk>[0-9a-f-]+)/folders/(?P<pathname>.*)/$',
93 views.resource_folder_rest_api.ResourceFolders.as_view(),
94 name='list_manipulate_folders'),
95
96 # public unzip endpoint
97 url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/unzip/(?P<pathname>.*)/$',
98 views.resource_folder_hierarchy.data_store_folder_unzip_public),
99
100 # public zip folder endpoint
101 url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/zip/$',
102 views.resource_folder_hierarchy.data_store_folder_zip_public),
103
104 # public move or rename
105 url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/move-or-rename/$',
106 views.resource_folder_hierarchy.data_store_file_or_folder_move_or_rename_public),
107
108 url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/set-file-type/(?P<file_path>.*)/'
109 r'(?P<hs_file_type>[A-z]+)/$',
110 file_type_views.set_file_type_public,
111 name="set_file_type_public"),
112
113 # DEPRECATED: use form above instead. Added unused POST for simplicity
114 url(r'^resource/(?P<pk>[0-9a-f-]+)/file_list/$',
115 views.resource_rest_api.ResourceFileListCreate.as_view(),
116 name='DEPRECATED_get_resource_file_list'),
117
118 url(r'^taskstatus/(?P<task_id>[A-z0-9\-]+)/$',
119 views.resource_rest_api.CheckTaskStatus.as_view(),
120 name='get_task_status'),
121
122 url(r'^userInfo/$',
123 views.user_rest_api.UserInfo.as_view(), name='get_logged_in_user_info'),
124
125 # Resource Access
126 url(r'^resource/(?P<pk>[0-9a-f-]+)/access/$',
127 views.resource_access_api.ResourceAccessUpdateDelete.as_view(),
128 name='get_update_delete_resource_access'),
129 )
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hs_rest_api/urls.py b/hs_rest_api/urls.py
--- a/hs_rest_api/urls.py
+++ b/hs_rest_api/urls.py
@@ -119,6 +119,9 @@
views.resource_rest_api.CheckTaskStatus.as_view(),
name='get_task_status'),
+ url(r'^user/$',
+ views.user_rest_api.UserInfo.as_view(), name='get_logged_in_user_info'),
+
url(r'^userInfo/$',
views.user_rest_api.UserInfo.as_view(), name='get_logged_in_user_info'),
| {"golden_diff": "diff --git a/hs_rest_api/urls.py b/hs_rest_api/urls.py\n--- a/hs_rest_api/urls.py\n+++ b/hs_rest_api/urls.py\n@@ -119,6 +119,9 @@\n views.resource_rest_api.CheckTaskStatus.as_view(),\n name='get_task_status'),\n \n+ url(r'^user/$',\n+ views.user_rest_api.UserInfo.as_view(), name='get_logged_in_user_info'),\n+\n url(r'^userInfo/$',\n views.user_rest_api.UserInfo.as_view(), name='get_logged_in_user_info'),\n", "issue": "Rename userInfo/ API endpoint to user/\nPlaceholder ticket\n", "before_files": [{"content": "from django.conf.urls import patterns, url\nfrom hs_core import views\nfrom hs_file_types import views as file_type_views\n\nfrom rest_framework_swagger.views import get_swagger_view\n\nschema_view = get_swagger_view(title='Hydroshare API')\n\nurlpatterns = patterns(\n '',\n\n # Swagger Docs View\n url(r'^$', schema_view),\n\n # resource API\n url(r'^resource/types/$', views.resource_rest_api.ResourceTypes.as_view(),\n name='list_resource_types'),\n\n # DEPRECATED: use from above instead\n url(r'^resourceTypes/$', views.resource_rest_api.ResourceTypes.as_view(),\n name='DEPRECATED_list_resource_types'),\n\n # DEPRECATED: use GET /resource/ instead\n url(r'^resourceList/$', views.resource_rest_api.ResourceList.as_view(),\n name='DEPRECATED_list_resources'),\n\n url(r'^resource/$', views.resource_rest_api.ResourceListCreate.as_view(),\n name='list_create_resource'),\n\n # Public endpoint for resource flags\n url(r'^resource/(?P<pk>[0-9a-f-]+)/flag/$', views.set_resource_flag_public,\n name='public_set_resource_flag'),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/$',\n views.resource_rest_api.ResourceReadUpdateDelete.as_view(),\n name='get_update_delete_resource'),\n\n # Create new version of a resource\n url(r'^resource/(?P<pk>[0-9a-f-]+)/version/$', views.create_new_version_resource_public,\n name='new_version_resource_public'),\n\n # public copy resource endpoint\n url(r'^resource/(?P<pk>[0-9a-f-]+)/copy/$',\n views.copy_resource_public, name='copy_resource_public'),\n\n # DEPRECATED: use form above instead\n url(r'^resource/accessRules/(?P<pk>[0-9a-f-]+)/$',\n views.resource_rest_api.AccessRulesUpdate.as_view(),\n name='DEPRECATED_update_access_rules'),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/sysmeta/$',\n views.resource_rest_api.SystemMetadataRetrieve.as_view(),\n name='get_system_metadata'),\n\n # DEPRECATED: use from above instead\n url(r'^sysmeta/(?P<pk>[0-9a-f-]+)/$',\n views.resource_rest_api.SystemMetadataRetrieve.as_view(),\n name='DEPRECATED_get_system_metadata'),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/scimeta/$',\n views.resource_rest_api.ScienceMetadataRetrieveUpdate.as_view(),\n name='get_update_science_metadata'),\n\n # Resource metadata editing\n url(r'^resource/(?P<pk>[0-9a-f-]+)/scimeta/elements/$',\n views.resource_metadata_rest_api.MetadataElementsRetrieveUpdate.as_view(),\n name='get_update_science_metadata_elements'),\n\n # Update key-value metadata\n url(r'^resource/(?P<pk>[0-9a-f-]+)/scimeta/custom/$',\n views.update_key_value_metadata_public,\n name='update_custom_metadata'),\n\n # DEPRECATED: use from above instead\n url(r'^scimeta/(?P<pk>[0-9a-f-]+)/$',\n views.resource_rest_api.ScienceMetadataRetrieveUpdate.as_view(),\n name='DEPRECATED_get_update_science_metadata'),\n\n url(r'^resource/(?P<pk>[A-z0-9]+)/map/$',\n views.resource_rest_api.ResourceMapRetrieve.as_view(),\n name='get_resource_map'),\n\n # Patterns are now checked in the view class.\n url(r'^resource/(?P<pk>[0-9a-f-]+)/files/(?P<pathname>.+)/$',\n 
views.resource_rest_api.ResourceFileCRUD.as_view(),\n name='get_update_delete_resource_file'),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/files/$',\n views.resource_rest_api.ResourceFileListCreate.as_view(),\n name='list_create_resource_file'),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/folders/(?P<pathname>.*)/$',\n views.resource_folder_rest_api.ResourceFolders.as_view(),\n name='list_manipulate_folders'),\n\n # public unzip endpoint\n url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/unzip/(?P<pathname>.*)/$',\n views.resource_folder_hierarchy.data_store_folder_unzip_public),\n\n # public zip folder endpoint\n url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/zip/$',\n views.resource_folder_hierarchy.data_store_folder_zip_public),\n\n # public move or rename\n url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/move-or-rename/$',\n views.resource_folder_hierarchy.data_store_file_or_folder_move_or_rename_public),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/set-file-type/(?P<file_path>.*)/'\n r'(?P<hs_file_type>[A-z]+)/$',\n file_type_views.set_file_type_public,\n name=\"set_file_type_public\"),\n\n # DEPRECATED: use form above instead. Added unused POST for simplicity\n url(r'^resource/(?P<pk>[0-9a-f-]+)/file_list/$',\n views.resource_rest_api.ResourceFileListCreate.as_view(),\n name='DEPRECATED_get_resource_file_list'),\n\n url(r'^taskstatus/(?P<task_id>[A-z0-9\\-]+)/$',\n views.resource_rest_api.CheckTaskStatus.as_view(),\n name='get_task_status'),\n\n url(r'^userInfo/$',\n views.user_rest_api.UserInfo.as_view(), name='get_logged_in_user_info'),\n\n # Resource Access\n url(r'^resource/(?P<pk>[0-9a-f-]+)/access/$',\n views.resource_access_api.ResourceAccessUpdateDelete.as_view(),\n name='get_update_delete_resource_access'),\n)\n", "path": "hs_rest_api/urls.py"}], "after_files": [{"content": "from django.conf.urls import patterns, url\nfrom hs_core import views\nfrom hs_file_types import views as file_type_views\n\nfrom rest_framework_swagger.views import get_swagger_view\n\nschema_view = get_swagger_view(title='Hydroshare API')\n\nurlpatterns = patterns(\n '',\n\n # Swagger Docs View\n url(r'^$', schema_view),\n\n # resource API\n url(r'^resource/types/$', views.resource_rest_api.ResourceTypes.as_view(),\n name='list_resource_types'),\n\n # DEPRECATED: use from above instead\n url(r'^resourceTypes/$', views.resource_rest_api.ResourceTypes.as_view(),\n name='DEPRECATED_list_resource_types'),\n\n # DEPRECATED: use GET /resource/ instead\n url(r'^resourceList/$', views.resource_rest_api.ResourceList.as_view(),\n name='DEPRECATED_list_resources'),\n\n url(r'^resource/$', views.resource_rest_api.ResourceListCreate.as_view(),\n name='list_create_resource'),\n\n # Public endpoint for resource flags\n url(r'^resource/(?P<pk>[0-9a-f-]+)/flag/$', views.set_resource_flag_public,\n name='public_set_resource_flag'),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/$',\n views.resource_rest_api.ResourceReadUpdateDelete.as_view(),\n name='get_update_delete_resource'),\n\n # Create new version of a resource\n url(r'^resource/(?P<pk>[0-9a-f-]+)/version/$', views.create_new_version_resource_public,\n name='new_version_resource_public'),\n\n # public copy resource endpoint\n url(r'^resource/(?P<pk>[0-9a-f-]+)/copy/$',\n views.copy_resource_public, name='copy_resource_public'),\n\n # DEPRECATED: use form above instead\n url(r'^resource/accessRules/(?P<pk>[0-9a-f-]+)/$',\n views.resource_rest_api.AccessRulesUpdate.as_view(),\n name='DEPRECATED_update_access_rules'),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/sysmeta/$',\n 
views.resource_rest_api.SystemMetadataRetrieve.as_view(),\n name='get_system_metadata'),\n\n # DEPRECATED: use from above instead\n url(r'^sysmeta/(?P<pk>[0-9a-f-]+)/$',\n views.resource_rest_api.SystemMetadataRetrieve.as_view(),\n name='DEPRECATED_get_system_metadata'),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/scimeta/$',\n views.resource_rest_api.ScienceMetadataRetrieveUpdate.as_view(),\n name='get_update_science_metadata'),\n\n # Resource metadata editing\n url(r'^resource/(?P<pk>[0-9a-f-]+)/scimeta/elements/$',\n views.resource_metadata_rest_api.MetadataElementsRetrieveUpdate.as_view(),\n name='get_update_science_metadata_elements'),\n\n # Update key-value metadata\n url(r'^resource/(?P<pk>[0-9a-f-]+)/scimeta/custom/$',\n views.update_key_value_metadata_public,\n name='update_custom_metadata'),\n\n # DEPRECATED: use from above instead\n url(r'^scimeta/(?P<pk>[0-9a-f-]+)/$',\n views.resource_rest_api.ScienceMetadataRetrieveUpdate.as_view(),\n name='DEPRECATED_get_update_science_metadata'),\n\n url(r'^resource/(?P<pk>[A-z0-9]+)/map/$',\n views.resource_rest_api.ResourceMapRetrieve.as_view(),\n name='get_resource_map'),\n\n # Patterns are now checked in the view class.\n url(r'^resource/(?P<pk>[0-9a-f-]+)/files/(?P<pathname>.+)/$',\n views.resource_rest_api.ResourceFileCRUD.as_view(),\n name='get_update_delete_resource_file'),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/files/$',\n views.resource_rest_api.ResourceFileListCreate.as_view(),\n name='list_create_resource_file'),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/folders/(?P<pathname>.*)/$',\n views.resource_folder_rest_api.ResourceFolders.as_view(),\n name='list_manipulate_folders'),\n\n # public unzip endpoint\n url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/unzip/(?P<pathname>.*)/$',\n views.resource_folder_hierarchy.data_store_folder_unzip_public),\n\n # public zip folder endpoint\n url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/zip/$',\n views.resource_folder_hierarchy.data_store_folder_zip_public),\n\n # public move or rename\n url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/move-or-rename/$',\n views.resource_folder_hierarchy.data_store_file_or_folder_move_or_rename_public),\n\n url(r'^resource/(?P<pk>[0-9a-f-]+)/functions/set-file-type/(?P<file_path>.*)/'\n r'(?P<hs_file_type>[A-z]+)/$',\n file_type_views.set_file_type_public,\n name=\"set_file_type_public\"),\n\n # DEPRECATED: use form above instead. Added unused POST for simplicity\n url(r'^resource/(?P<pk>[0-9a-f-]+)/file_list/$',\n views.resource_rest_api.ResourceFileListCreate.as_view(),\n name='DEPRECATED_get_resource_file_list'),\n\n url(r'^taskstatus/(?P<task_id>[A-z0-9\\-]+)/$',\n views.resource_rest_api.CheckTaskStatus.as_view(),\n name='get_task_status'),\n\n url(r'^user/$',\n views.user_rest_api.UserInfo.as_view(), name='get_logged_in_user_info'),\n\n url(r'^userInfo/$',\n views.user_rest_api.UserInfo.as_view(), name='get_logged_in_user_info'),\n\n # Resource Access\n url(r'^resource/(?P<pk>[0-9a-f-]+)/access/$',\n views.resource_access_api.ResourceAccessUpdateDelete.as_view(),\n name='get_update_delete_resource_access'),\n)\n", "path": "hs_rest_api/urls.py"}]} |
gh_patches_debug_1087 | rasdani/github-patches | git_diff | jazzband__pip-tools-12 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support Python versions lower than 2.7, too
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 """
2 pip-tools keeps your pinned dependencies fresh.
3 """
4 import sys
5 from setuptools import setup
6
7
8 setup(
9 name='pip-tools',
10 version='0.2',
11 url='https://github.com/nvie/pip-tools/',
12 license='BSD',
13 author='Vincent Driessen',
14 author_email='[email protected]',
15 description=__doc__,
16 #packages=[],
17 scripts=['bin/pip-review', 'bin/pip-dump'],
18 #include_package_data=True,
19 zip_safe=False,
20 platforms='any',
21 #install_requires=[],
22 classifiers=[
23 # As from http://pypi.python.org/pypi?%3Aaction=list_classifiers
24 #'Development Status :: 1 - Planning',
25 #'Development Status :: 2 - Pre-Alpha',
26 #'Development Status :: 3 - Alpha',
27 'Development Status :: 4 - Beta',
28 #'Development Status :: 5 - Production/Stable',
29 #'Development Status :: 6 - Mature',
30 #'Development Status :: 7 - Inactive',
31 'Intended Audience :: Developers',
32 'Intended Audience :: System Administrators',
33 'License :: OSI Approved :: BSD License',
34 'Operating System :: OS Independent',
35 'Topic :: System :: Systems Administration',
36 ]
37 )
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,7 @@
#include_package_data=True,
zip_safe=False,
platforms='any',
- #install_requires=[],
+ install_requires=['argparse==1.2.1'], # needed for python 2.6
classifiers=[
# As from http://pypi.python.org/pypi?%3Aaction=list_classifiers
#'Development Status :: 1 - Planning',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,7 +18,7 @@\n #include_package_data=True,\n zip_safe=False,\n platforms='any',\n- #install_requires=[],\n+ install_requires=['argparse==1.2.1'], # needed for python 2.6\n classifiers=[\n # As from http://pypi.python.org/pypi?%3Aaction=list_classifiers\n #'Development Status :: 1 - Planning',\n", "issue": "Support Python versions lower than 2.7, too\n\n", "before_files": [{"content": "\"\"\"\npip-tools keeps your pinned dependencies fresh.\n\"\"\"\nimport sys\nfrom setuptools import setup\n\n\nsetup(\n name='pip-tools',\n version='0.2',\n url='https://github.com/nvie/pip-tools/',\n license='BSD',\n author='Vincent Driessen',\n author_email='[email protected]',\n description=__doc__,\n #packages=[],\n scripts=['bin/pip-review', 'bin/pip-dump'],\n #include_package_data=True,\n zip_safe=False,\n platforms='any',\n #install_requires=[],\n classifiers=[\n # As from http://pypi.python.org/pypi?%3Aaction=list_classifiers\n #'Development Status :: 1 - Planning',\n #'Development Status :: 2 - Pre-Alpha',\n #'Development Status :: 3 - Alpha',\n 'Development Status :: 4 - Beta',\n #'Development Status :: 5 - Production/Stable',\n #'Development Status :: 6 - Mature',\n #'Development Status :: 7 - Inactive',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Topic :: System :: Systems Administration',\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"\npip-tools keeps your pinned dependencies fresh.\n\"\"\"\nimport sys\nfrom setuptools import setup\n\n\nsetup(\n name='pip-tools',\n version='0.2',\n url='https://github.com/nvie/pip-tools/',\n license='BSD',\n author='Vincent Driessen',\n author_email='[email protected]',\n description=__doc__,\n #packages=[],\n scripts=['bin/pip-review', 'bin/pip-dump'],\n #include_package_data=True,\n zip_safe=False,\n platforms='any',\n install_requires=['argparse==1.2.1'], # needed for python 2.6\n classifiers=[\n # As from http://pypi.python.org/pypi?%3Aaction=list_classifiers\n #'Development Status :: 1 - Planning',\n #'Development Status :: 2 - Pre-Alpha',\n #'Development Status :: 3 - Alpha',\n 'Development Status :: 4 - Beta',\n #'Development Status :: 5 - Production/Stable',\n #'Development Status :: 6 - Mature',\n #'Development Status :: 7 - Inactive',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Topic :: System :: Systems Administration',\n ]\n)\n", "path": "setup.py"}]} |
gh_patches_debug_1088 | rasdani/github-patches | git_diff | ansible-collections__community.vmware-414 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
vmware_guest_custom_attributes module crashes when trying to set a VirtualMachine attribute with the same name as an existing HostSystem attribute
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When running a task with `vmware_guest_custom_attributes`, if any of the attribute names already exists as a HostSystem custom attribute, the module crashes with an unhandled exception.
```
pyVmomi.VmomiSupport.InvalidArgument: (vmodl.fault.InvalidArgument) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = 'A specified parameter was not correct: entity',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) [],
invalidProperty = u'entity'
}
```
The crash happens because the module finds the HostSystem attribute first and then calls `self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)` with the key of an attribute of the wrong managed object type.
The issue is that the line https://github.com/ansible-collections/community.vmware/blob/a92ccb0a07cc833e22b13cb838d0696b16ebf64d/plugins/modules/vmware_guest_custom_attributes.py#L202 does no explicit filtering for VirtualMachine custom attributes, so if the loop's first match is a HostSystem attribute, the function returns the wrong field definition.
This would work if the `check_exists` function were something like:
```
def check_exists(self, field):
for x in self.custom_field_mgr:
if x.name == field and x.managedObjectType == vim.VirtualMachine:
return x
return False
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest_custom_attributes
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.13
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/user1/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/user1/virtualenvs/ansible-2.9/lib/python2.7/site-packages/ansible
executable location = /home/user1/virtualenvs/ansible-2.9/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/home/user1/ansible.aso/ansible.cfg) =
ANSIBLE_SSH_RETRIES(/home/user1/ansible.aso/ansible.cfg) = 2
CACHE_PLUGIN(/home/user1/ansible.aso/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/user1/ansible.aso/ansible.cfg) = inventory/.cache
CACHE_PLUGIN_TIMEOUT(/home/user1/ansible.aso/ansible.cfg) = 86400
DEFAULT_GATHERING(/home/user1/ansible.aso/ansible.cfg) = smart
DEFAULT_LOG_PATH(/home/user1/ansible.aso/ansible.cfg) = /home/user1/ansible.aso/ansible.log
DEFAULT_MANAGED_STR(/home/user1/ansible.aso/ansible.cfg) = Managed by Ansible - DO NOT MODIFY
DEFAULT_TIMEOUT(/home/user1/ansible.aso/ansible.cfg) = 30
HOST_KEY_CHECKING(/home/user1/ansible.aso/ansible.cfg) = False
INVENTORY_CACHE_ENABLED(/home/user1/ansible.aso/ansible.cfg) = True
INVENTORY_CACHE_PLUGIN(/home/user1/ansible.aso/ansible.cfg) = jsonfile
INVENTORY_CACHE_PLUGIN_CONNECTION(/home/user1/ansible.aso/ansible.cfg) = inventory/.cache
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
`CentOS Linux release 7.6.1810 (Core)`
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
With the following playbook:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Write VM custom attributes"
hosts: all
gather_facts: false
tasks:
- name: Add virtual machine custom attributes
vmware_guest_custom_attributes:
hostname: "{{ vm_vcenter_host | default(lookup('env', 'VMWARE_HOST')) }}"
username: "{{ vm_vcenter_user | default(lookup('env', 'VMWARE_USER')) }}"
password: "{{ vm_vcenter_pass | default(lookup('env', 'VMWARE_PASSWORD')) }}"
name: "{{ inventory_hostname }}"
validate_certs: no
state: present
attributes:
- name: "Department"
value: "{{ custom_attribute_department | default('undefined') }}"
delegate_to: localhost
register: attributes
```
vcenter has the following Custom Attributes:
```
(vim.CustomFieldsManager.FieldDef) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
key = 630,
name = 'Department',
type = str,
managedObjectType = vim.HostSystem,
fieldDefPrivileges = <unset>,
fieldInstancePrivileges = <unset>
}
(vim.CustomFieldsManager.FieldDef) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
key = 1044,
name = 'Department',
type = str,
managedObjectType = vim.VirtualMachine,
fieldDefPrivileges = <unset>,
fieldInstancePrivileges = <unset>
}
```
and run as:
`ansible-playbook -i inventory/vm_inventory_testvm.ini playbook_vcenter_custom_annotations.yml -l testvm02 -D --flush-cache -vvv`
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Should create / update the VM custom attribute
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Crashes with exception:
<!--- Paste verbatim command output between quotes -->
```paste below
pyVmomi.VmomiSupport.InvalidArgument: (vmodl.fault.InvalidArgument) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = 'A specified parameter was not correct: entity',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) [],
invalidProperty = u'entity'
}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/modules/vmware_guest_custom_attributes.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright, (c) 2018, Ansible Project
5 # Copyright, (c) 2018, Abhijeet Kasurde <[email protected]>
6 #
7 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
8
9 from __future__ import absolute_import, division, print_function
10 __metaclass__ = type
11
12
13 DOCUMENTATION = '''
14 ---
15 module: vmware_guest_custom_attributes
16 short_description: Manage custom attributes from VMware for the given virtual machine
17 description:
18 - This module can be used to add, remove and update custom attributes for the given virtual machine.
19 author:
20 - Jimmy Conner (@cigamit)
21 - Abhijeet Kasurde (@Akasurde)
22 notes:
23 - Tested on vSphere 6.5
24 requirements:
25 - "python >= 2.6"
26 - PyVmomi
27 options:
28 name:
29 description:
30 - Name of the virtual machine to work with.
31 - This is required parameter, if C(uuid) or C(moid) is not supplied.
32 type: str
33 state:
34 description:
35 - The action to take.
36 - If set to C(present), then custom attribute is added or updated.
37 - If set to C(absent), then custom attribute is removed.
38 default: 'present'
39 choices: ['present', 'absent']
40 type: str
41 uuid:
42 description:
43 - UUID of the virtual machine to manage if known. This is VMware's unique identifier.
44 - This is required parameter, if C(name) or C(moid) is not supplied.
45 type: str
46 moid:
47 description:
48 - Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.
49 - This is required if C(name) or C(uuid) is not supplied.
50 type: str
51 use_instance_uuid:
52 description:
53 - Whether to use the VMware instance UUID rather than the BIOS UUID.
54 default: no
55 type: bool
56 folder:
57 description:
58 - Absolute path to find an existing guest.
59 - This is required parameter, if C(name) is supplied and multiple virtual machines with same name are found.
60 type: str
61 datacenter:
62 description:
63 - Datacenter name where the virtual machine is located in.
64 type: str
65 attributes:
66 description:
67 - A list of name and value of custom attributes that needs to be manage.
68 - Value of custom attribute is not required and will be ignored, if C(state) is set to C(absent).
69 suboptions:
70 name:
71 description:
72 - Name of the attribute.
73 type: str
74 required: True
75 value:
76 description:
77 - Value of the attribute.
78 type: str
79 default: ''
80 default: []
81 type: list
82 elements: dict
83 extends_documentation_fragment:
84 - community.vmware.vmware.documentation
85
86 '''
87
88 EXAMPLES = '''
89 - name: Add virtual machine custom attributes
90 community.vmware.vmware_guest_custom_attributes:
91 hostname: "{{ vcenter_hostname }}"
92 username: "{{ vcenter_username }}"
93 password: "{{ vcenter_password }}"
94 uuid: 421e4592-c069-924d-ce20-7e7533fab926
95 state: present
96 attributes:
97 - name: MyAttribute
98 value: MyValue
99 delegate_to: localhost
100 register: attributes
101
102 - name: Add multiple virtual machine custom attributes
103 community.vmware.vmware_guest_custom_attributes:
104 hostname: "{{ vcenter_hostname }}"
105 username: "{{ vcenter_username }}"
106 password: "{{ vcenter_password }}"
107 uuid: 421e4592-c069-924d-ce20-7e7533fab926
108 state: present
109 attributes:
110 - name: MyAttribute
111 value: MyValue
112 - name: MyAttribute2
113 value: MyValue2
114 delegate_to: localhost
115 register: attributes
116
117 - name: Remove virtual machine Attribute
118 community.vmware.vmware_guest_custom_attributes:
119 hostname: "{{ vcenter_hostname }}"
120 username: "{{ vcenter_username }}"
121 password: "{{ vcenter_password }}"
122 uuid: 421e4592-c069-924d-ce20-7e7533fab926
123 state: absent
124 attributes:
125 - name: MyAttribute
126 delegate_to: localhost
127 register: attributes
128
129 - name: Remove virtual machine Attribute using Virtual Machine MoID
130 community.vmware.vmware_guest_custom_attributes:
131 hostname: "{{ vcenter_hostname }}"
132 username: "{{ vcenter_username }}"
133 password: "{{ vcenter_password }}"
134 moid: vm-42
135 state: absent
136 attributes:
137 - name: MyAttribute
138 delegate_to: localhost
139 register: attributes
140 '''
141
142 RETURN = """
143 custom_attributes:
144 description: metadata about the virtual machine attributes
145 returned: always
146 type: dict
147 sample: {
148 "mycustom": "my_custom_value",
149 "mycustom_2": "my_custom_value_2",
150 "sample_1": "sample_1_value",
151 "sample_2": "sample_2_value",
152 "sample_3": "sample_3_value"
153 }
154 """
155
156 try:
157 from pyVmomi import vim
158 except ImportError:
159 pass
160
161 from ansible.module_utils.basic import AnsibleModule
162 from ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec
163
164
165 class VmAttributeManager(PyVmomi):
166 def __init__(self, module):
167 super(VmAttributeManager, self).__init__(module)
168
169 def set_custom_field(self, vm, user_fields):
170 result_fields = dict()
171 change_list = list()
172 changed = False
173
174 for field in user_fields:
175 field_key = self.check_exists(field['name'])
176 found = False
177 field_value = field.get('value', '')
178
179 for k, v in [(x.name, v.value) for x in self.custom_field_mgr for v in vm.customValue if x.key == v.key]:
180 if k == field['name']:
181 found = True
182 if v != field_value:
183 if not self.module.check_mode:
184 self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)
185 result_fields[k] = field_value
186 change_list.append(True)
187 if not found and field_value != "":
188 if not field_key and not self.module.check_mode:
189 field_key = self.content.customFieldsManager.AddFieldDefinition(name=field['name'], moType=vim.VirtualMachine)
190 change_list.append(True)
191 if not self.module.check_mode:
192 self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)
193 result_fields[field['name']] = field_value
194
195 if any(change_list):
196 changed = True
197
198 return {'changed': changed, 'failed': False, 'custom_attributes': result_fields}
199
200 def check_exists(self, field):
201 for x in self.custom_field_mgr:
202 if x.name == field:
203 return x
204 return False
205
206
207 def main():
208 argument_spec = vmware_argument_spec()
209 argument_spec.update(
210 datacenter=dict(type='str'),
211 name=dict(type='str'),
212 folder=dict(type='str'),
213 uuid=dict(type='str'),
214 moid=dict(type='str'),
215 use_instance_uuid=dict(type='bool', default=False),
216 state=dict(type='str', default='present',
217 choices=['absent', 'present']),
218 attributes=dict(
219 type='list',
220 default=[],
221 elements='dict',
222 options=dict(
223 name=dict(type='str', required=True),
224 value=dict(type='str', default=''),
225 )
226 ),
227 )
228
229 module = AnsibleModule(
230 argument_spec=argument_spec,
231 supports_check_mode=True,
232 required_one_of=[
233 ['name', 'uuid', 'moid']
234 ],
235 )
236
237 if module.params.get('folder'):
238 # FindByInventoryPath() does not require an absolute path
239 # so we should leave the input folder path unmodified
240 module.params['folder'] = module.params['folder'].rstrip('/')
241
242 pyv = VmAttributeManager(module)
243 results = {'changed': False, 'failed': False, 'instance': dict()}
244
245 # Check if the virtual machine exists before continuing
246 vm = pyv.get_vm()
247
248 if vm:
249 # virtual machine already exists
250 if module.params['state'] == "present":
251 results = pyv.set_custom_field(vm, module.params['attributes'])
252 elif module.params['state'] == "absent":
253 results = pyv.set_custom_field(vm, module.params['attributes'])
254 module.exit_json(**results)
255 else:
256 # virtual machine does not exists
257 vm_id = (module.params.get('name') or module.params.get('uuid') or module.params.get('moid'))
258 module.fail_json(msg="Unable to manage custom attributes for non-existing"
259 " virtual machine %s" % vm_id)
260
261
262 if __name__ == '__main__':
263 main()
264
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/modules/vmware_guest_custom_attributes.py b/plugins/modules/vmware_guest_custom_attributes.py
--- a/plugins/modules/vmware_guest_custom_attributes.py
+++ b/plugins/modules/vmware_guest_custom_attributes.py
@@ -199,7 +199,8 @@
def check_exists(self, field):
for x in self.custom_field_mgr:
- if x.name == field:
+ # The custom attribute should be either global (managedObjectType == None) or VM specific
+ if x.managedObjectType in (None, vim.VirtualMachine) and x.name == field:
return x
return False
| {"golden_diff": "diff --git a/plugins/modules/vmware_guest_custom_attributes.py b/plugins/modules/vmware_guest_custom_attributes.py\n--- a/plugins/modules/vmware_guest_custom_attributes.py\n+++ b/plugins/modules/vmware_guest_custom_attributes.py\n@@ -199,7 +199,8 @@\n \n def check_exists(self, field):\n for x in self.custom_field_mgr:\n- if x.name == field:\n+ # The custom attribute should be either global (managedObjectType == None) or VM specific\n+ if x.managedObjectType in (None, vim.VirtualMachine) and x.name == field:\n return x\n return False\n", "issue": "vmware_guest_custom_attributes module crashes when trying to set a VirtualMachine attribute with the same name as an existing HostSystem attribute\n<!--- Verify first that your issue is not already reported on GitHub -->\r\n<!--- Also test if the latest release and devel branch are affected too -->\r\n<!--- Complete *all* sections as described, this form is processed automatically -->\r\n\r\n##### SUMMARY\r\nWhen running a task with `vmware_guest_custom_attributes`, if the name of any attribute already exists as a HostSystem attribute, the module will crash with an unhandled exception.\r\n\r\n```\r\npyVmomi.VmomiSupport.InvalidArgument: (vmodl.fault.InvalidArgument) {\r\n dynamicType = <unset>,\r\n dynamicProperty = (vmodl.DynamicProperty) [],\r\n msg = 'A specified parameter was not correct: entity',\r\n faultCause = <unset>,\r\n faultMessage = (vmodl.LocalizableMessage) [],\r\n invalidProperty = u'entity'\r\n}\r\n```\r\n\r\nThe crash is due to the module finding the HostSystem attribute and trying to do a `self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)` with the key of the wrong type of attribute.\r\n\r\nThe issue happens because in the line https://github.com/ansible-collections/community.vmware/blob/a92ccb0a07cc833e22b13cb838d0696b16ebf64d/plugins/modules/vmware_guest_custom_attributes.py#L202 there is no explicit filtering for VirtualMachine custom attributes. 
If the cycle's first match is a HostSystem attribute, the function will return the wrong type.\r\n\r\nThis would work if the `check_exists` function were something like:\r\n\r\n```\r\n def check_exists(self, field):\r\n for x in self.custom_field_mgr:\r\n if x.name == field and x.managedObjectType == vim.VirtualMachine:\r\n return x\r\n return False\r\n```\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\nvmware_guest_custom_attributes\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \"ansible --version\" between quotes -->\r\n```\r\nansible 2.9.13\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/home/user1/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/user1/virtualenvs/ansible-2.9/lib/python2.7/site-packages/ansible\r\n executable location = /home/user1/virtualenvs/ansible-2.9/bin/ansible\r\n python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]\r\n\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- Paste verbatim output from \"ansible-config dump --only-changed\" between quotes -->\r\n```paste below\r\nANSIBLE_SSH_ARGS(/home/user1/ansible.aso/ansible.cfg) =\r\nANSIBLE_SSH_RETRIES(/home/user1/ansible.aso/ansible.cfg) = 2\r\nCACHE_PLUGIN(/home/user1/ansible.aso/ansible.cfg) = jsonfile\r\nCACHE_PLUGIN_CONNECTION(/home/user1/ansible.aso/ansible.cfg) = inventory/.cache\r\nCACHE_PLUGIN_TIMEOUT(/home/user1/ansible.aso/ansible.cfg) = 86400\r\nDEFAULT_GATHERING(/home/user1/ansible.aso/ansible.cfg) = smart\r\nDEFAULT_LOG_PATH(/home/user1/ansible.aso/ansible.cfg) = /home/user1/ansible.aso/ansible.log\r\nDEFAULT_MANAGED_STR(/home/user1/ansible.aso/ansible.cfg) = Managed by Ansible - DO NOT MODIFY\r\nDEFAULT_TIMEOUT(/home/user1/ansible.aso/ansible.cfg) = 30\r\nHOST_KEY_CHECKING(/home/user1/ansible.aso/ansible.cfg) = False\r\nINVENTORY_CACHE_ENABLED(/home/user1/ansible.aso/ansible.cfg) = True\r\nINVENTORY_CACHE_PLUGIN(/home/user1/ansible.aso/ansible.cfg) = jsonfile\r\nINVENTORY_CACHE_PLUGIN_CONNECTION(/home/user1/ansible.aso/ansible.cfg) = inventory/.cache\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. 
-->\r\n`CentOS Linux release 7.6.1810 (Core)`\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->\r\n\r\nWith the following playbook:\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: \"Write VM custom attributes\"\r\n hosts: all\r\n gather_facts: false\r\n\r\n tasks:\r\n - name: Add virtual machine custom attributes\r\n vmware_guest_custom_attributes:\r\n hostname: \"{{ vm_vcenter_host | default(lookup('env', 'VMWARE_HOST')) }}\"\r\n username: \"{{ vm_vcenter_user | default(lookup('env', 'VMWARE_USER')) }}\"\r\n password: \"{{ vm_vcenter_pass | default(lookup('env', 'VMWARE_PASSWORD')) }}\"\r\n name: \"{{ inventory_hostname }}\"\r\n validate_certs: no\r\n state: present\r\n attributes:\r\n - name: \"Department\"\r\n value: \"{{ custom_attribute_department | default('undefined') }}\"\r\n delegate_to: localhost\r\n register: attributes\r\n\r\n```\r\nvcenter has the following Custom Attributes:\r\n\r\n```\r\n(vim.CustomFieldsManager.FieldDef) {\r\n dynamicType = <unset>,\r\n dynamicProperty = (vmodl.DynamicProperty) [],\r\n key = 630,\r\n name = 'Department',\r\n type = str,\r\n managedObjectType = vim.HostSystem,\r\n fieldDefPrivileges = <unset>,\r\n fieldInstancePrivileges = <unset>\r\n}\r\n\r\n(vim.CustomFieldsManager.FieldDef) {\r\n dynamicType = <unset>,\r\n dynamicProperty = (vmodl.DynamicProperty) [],\r\n key = 1044,\r\n name = 'Department',\r\n type = str,\r\n managedObjectType = vim.VirtualMachine,\r\n fieldDefPrivileges = <unset>,\r\n fieldInstancePrivileges = <unset>\r\n}\r\n```\r\n\r\nand run as:\r\n\r\n`ansible-playbook -i inventory/vm_inventory_testvm.ini playbook_vcenter_custom_annotations.yml -l testvm02 -D --flush-cache -vvv`\r\n\r\n<!--- HINT: You can paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- Describe what you expected to happen when running the steps above -->\r\n\r\nShould create / update the VM custom attribute\r\n\r\n\r\n##### ACTUAL RESULTS\r\n<!--- Describe what actually happened. 
If possible run with extra verbosity (-vvvv) -->\r\n\r\nCrashes with exception:\r\n\r\n<!--- Paste verbatim command output between quotes -->\r\n```paste below\r\npyVmomi.VmomiSupport.InvalidArgument: (vmodl.fault.InvalidArgument) {\r\n dynamicType = <unset>,\r\n dynamicProperty = (vmodl.DynamicProperty) [],\r\n msg = 'A specified parameter was not correct: entity',\r\n faultCause = <unset>,\r\n faultMessage = (vmodl.LocalizableMessage) [],\r\n invalidProperty = u'entity'\r\n}\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright, (c) 2018, Ansible Project\n# Copyright, (c) 2018, Abhijeet Kasurde <[email protected]>\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: vmware_guest_custom_attributes\nshort_description: Manage custom attributes from VMware for the given virtual machine\ndescription:\n - This module can be used to add, remove and update custom attributes for the given virtual machine.\nauthor:\n - Jimmy Conner (@cigamit)\n - Abhijeet Kasurde (@Akasurde)\nnotes:\n - Tested on vSphere 6.5\nrequirements:\n - \"python >= 2.6\"\n - PyVmomi\noptions:\n name:\n description:\n - Name of the virtual machine to work with.\n - This is required parameter, if C(uuid) or C(moid) is not supplied.\n type: str\n state:\n description:\n - The action to take.\n - If set to C(present), then custom attribute is added or updated.\n - If set to C(absent), then custom attribute is removed.\n default: 'present'\n choices: ['present', 'absent']\n type: str\n uuid:\n description:\n - UUID of the virtual machine to manage if known. This is VMware's unique identifier.\n - This is required parameter, if C(name) or C(moid) is not supplied.\n type: str\n moid:\n description:\n - Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.\n - This is required if C(name) or C(uuid) is not supplied.\n type: str\n use_instance_uuid:\n description:\n - Whether to use the VMware instance UUID rather than the BIOS UUID.\n default: no\n type: bool\n folder:\n description:\n - Absolute path to find an existing guest.\n - This is required parameter, if C(name) is supplied and multiple virtual machines with same name are found.\n type: str\n datacenter:\n description:\n - Datacenter name where the virtual machine is located in.\n type: str\n attributes:\n description:\n - A list of name and value of custom attributes that needs to be manage.\n - Value of custom attribute is not required and will be ignored, if C(state) is set to C(absent).\n suboptions:\n name:\n description:\n - Name of the attribute.\n type: str\n required: True\n value:\n description:\n - Value of the attribute.\n type: str\n default: ''\n default: []\n type: list\n elements: dict\nextends_documentation_fragment:\n- community.vmware.vmware.documentation\n\n'''\n\nEXAMPLES = '''\n- name: Add virtual machine custom attributes\n community.vmware.vmware_guest_custom_attributes:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n uuid: 421e4592-c069-924d-ce20-7e7533fab926\n state: present\n attributes:\n - name: MyAttribute\n value: MyValue\n delegate_to: localhost\n register: attributes\n\n- name: Add multiple virtual machine custom attributes\n community.vmware.vmware_guest_custom_attributes:\n hostname: 
\"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n uuid: 421e4592-c069-924d-ce20-7e7533fab926\n state: present\n attributes:\n - name: MyAttribute\n value: MyValue\n - name: MyAttribute2\n value: MyValue2\n delegate_to: localhost\n register: attributes\n\n- name: Remove virtual machine Attribute\n community.vmware.vmware_guest_custom_attributes:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n uuid: 421e4592-c069-924d-ce20-7e7533fab926\n state: absent\n attributes:\n - name: MyAttribute\n delegate_to: localhost\n register: attributes\n\n- name: Remove virtual machine Attribute using Virtual Machine MoID\n community.vmware.vmware_guest_custom_attributes:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n moid: vm-42\n state: absent\n attributes:\n - name: MyAttribute\n delegate_to: localhost\n register: attributes\n'''\n\nRETURN = \"\"\"\ncustom_attributes:\n description: metadata about the virtual machine attributes\n returned: always\n type: dict\n sample: {\n \"mycustom\": \"my_custom_value\",\n \"mycustom_2\": \"my_custom_value_2\",\n \"sample_1\": \"sample_1_value\",\n \"sample_2\": \"sample_2_value\",\n \"sample_3\": \"sample_3_value\"\n }\n\"\"\"\n\ntry:\n from pyVmomi import vim\nexcept ImportError:\n pass\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec\n\n\nclass VmAttributeManager(PyVmomi):\n def __init__(self, module):\n super(VmAttributeManager, self).__init__(module)\n\n def set_custom_field(self, vm, user_fields):\n result_fields = dict()\n change_list = list()\n changed = False\n\n for field in user_fields:\n field_key = self.check_exists(field['name'])\n found = False\n field_value = field.get('value', '')\n\n for k, v in [(x.name, v.value) for x in self.custom_field_mgr for v in vm.customValue if x.key == v.key]:\n if k == field['name']:\n found = True\n if v != field_value:\n if not self.module.check_mode:\n self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)\n result_fields[k] = field_value\n change_list.append(True)\n if not found and field_value != \"\":\n if not field_key and not self.module.check_mode:\n field_key = self.content.customFieldsManager.AddFieldDefinition(name=field['name'], moType=vim.VirtualMachine)\n change_list.append(True)\n if not self.module.check_mode:\n self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)\n result_fields[field['name']] = field_value\n\n if any(change_list):\n changed = True\n\n return {'changed': changed, 'failed': False, 'custom_attributes': result_fields}\n\n def check_exists(self, field):\n for x in self.custom_field_mgr:\n if x.name == field:\n return x\n return False\n\n\ndef main():\n argument_spec = vmware_argument_spec()\n argument_spec.update(\n datacenter=dict(type='str'),\n name=dict(type='str'),\n folder=dict(type='str'),\n uuid=dict(type='str'),\n moid=dict(type='str'),\n use_instance_uuid=dict(type='bool', default=False),\n state=dict(type='str', default='present',\n choices=['absent', 'present']),\n attributes=dict(\n type='list',\n default=[],\n elements='dict',\n options=dict(\n name=dict(type='str', required=True),\n value=dict(type='str', default=''),\n )\n ),\n )\n\n module = AnsibleModule(\n argument_spec=argument_spec,\n 
supports_check_mode=True,\n required_one_of=[\n ['name', 'uuid', 'moid']\n ],\n )\n\n if module.params.get('folder'):\n # FindByInventoryPath() does not require an absolute path\n # so we should leave the input folder path unmodified\n module.params['folder'] = module.params['folder'].rstrip('/')\n\n pyv = VmAttributeManager(module)\n results = {'changed': False, 'failed': False, 'instance': dict()}\n\n # Check if the virtual machine exists before continuing\n vm = pyv.get_vm()\n\n if vm:\n # virtual machine already exists\n if module.params['state'] == \"present\":\n results = pyv.set_custom_field(vm, module.params['attributes'])\n elif module.params['state'] == \"absent\":\n results = pyv.set_custom_field(vm, module.params['attributes'])\n module.exit_json(**results)\n else:\n # virtual machine does not exists\n vm_id = (module.params.get('name') or module.params.get('uuid') or module.params.get('moid'))\n module.fail_json(msg=\"Unable to manage custom attributes for non-existing\"\n \" virtual machine %s\" % vm_id)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/vmware_guest_custom_attributes.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright, (c) 2018, Ansible Project\n# Copyright, (c) 2018, Abhijeet Kasurde <[email protected]>\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: vmware_guest_custom_attributes\nshort_description: Manage custom attributes from VMware for the given virtual machine\ndescription:\n - This module can be used to add, remove and update custom attributes for the given virtual machine.\nauthor:\n - Jimmy Conner (@cigamit)\n - Abhijeet Kasurde (@Akasurde)\nnotes:\n - Tested on vSphere 6.5\nrequirements:\n - \"python >= 2.6\"\n - PyVmomi\noptions:\n name:\n description:\n - Name of the virtual machine to work with.\n - This is required parameter, if C(uuid) or C(moid) is not supplied.\n type: str\n state:\n description:\n - The action to take.\n - If set to C(present), then custom attribute is added or updated.\n - If set to C(absent), then custom attribute is removed.\n default: 'present'\n choices: ['present', 'absent']\n type: str\n uuid:\n description:\n - UUID of the virtual machine to manage if known. 
This is VMware's unique identifier.\n - This is required parameter, if C(name) or C(moid) is not supplied.\n type: str\n moid:\n description:\n - Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.\n - This is required if C(name) or C(uuid) is not supplied.\n type: str\n use_instance_uuid:\n description:\n - Whether to use the VMware instance UUID rather than the BIOS UUID.\n default: no\n type: bool\n folder:\n description:\n - Absolute path to find an existing guest.\n - This is required parameter, if C(name) is supplied and multiple virtual machines with same name are found.\n type: str\n datacenter:\n description:\n - Datacenter name where the virtual machine is located in.\n type: str\n attributes:\n description:\n - A list of name and value of custom attributes that needs to be manage.\n - Value of custom attribute is not required and will be ignored, if C(state) is set to C(absent).\n suboptions:\n name:\n description:\n - Name of the attribute.\n type: str\n required: True\n value:\n description:\n - Value of the attribute.\n type: str\n default: ''\n default: []\n type: list\n elements: dict\nextends_documentation_fragment:\n- community.vmware.vmware.documentation\n\n'''\n\nEXAMPLES = '''\n- name: Add virtual machine custom attributes\n community.vmware.vmware_guest_custom_attributes:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n uuid: 421e4592-c069-924d-ce20-7e7533fab926\n state: present\n attributes:\n - name: MyAttribute\n value: MyValue\n delegate_to: localhost\n register: attributes\n\n- name: Add multiple virtual machine custom attributes\n community.vmware.vmware_guest_custom_attributes:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n uuid: 421e4592-c069-924d-ce20-7e7533fab926\n state: present\n attributes:\n - name: MyAttribute\n value: MyValue\n - name: MyAttribute2\n value: MyValue2\n delegate_to: localhost\n register: attributes\n\n- name: Remove virtual machine Attribute\n community.vmware.vmware_guest_custom_attributes:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n uuid: 421e4592-c069-924d-ce20-7e7533fab926\n state: absent\n attributes:\n - name: MyAttribute\n delegate_to: localhost\n register: attributes\n\n- name: Remove virtual machine Attribute using Virtual Machine MoID\n community.vmware.vmware_guest_custom_attributes:\n hostname: \"{{ vcenter_hostname }}\"\n username: \"{{ vcenter_username }}\"\n password: \"{{ vcenter_password }}\"\n moid: vm-42\n state: absent\n attributes:\n - name: MyAttribute\n delegate_to: localhost\n register: attributes\n'''\n\nRETURN = \"\"\"\ncustom_attributes:\n description: metadata about the virtual machine attributes\n returned: always\n type: dict\n sample: {\n \"mycustom\": \"my_custom_value\",\n \"mycustom_2\": \"my_custom_value_2\",\n \"sample_1\": \"sample_1_value\",\n \"sample_2\": \"sample_2_value\",\n \"sample_3\": \"sample_3_value\"\n }\n\"\"\"\n\ntry:\n from pyVmomi import vim\nexcept ImportError:\n pass\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible_collections.community.vmware.plugins.module_utils.vmware import PyVmomi, vmware_argument_spec\n\n\nclass VmAttributeManager(PyVmomi):\n def __init__(self, module):\n super(VmAttributeManager, self).__init__(module)\n\n def set_custom_field(self, vm, user_fields):\n 
result_fields = dict()\n change_list = list()\n changed = False\n\n for field in user_fields:\n field_key = self.check_exists(field['name'])\n found = False\n field_value = field.get('value', '')\n\n for k, v in [(x.name, v.value) for x in self.custom_field_mgr for v in vm.customValue if x.key == v.key]:\n if k == field['name']:\n found = True\n if v != field_value:\n if not self.module.check_mode:\n self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)\n result_fields[k] = field_value\n change_list.append(True)\n if not found and field_value != \"\":\n if not field_key and not self.module.check_mode:\n field_key = self.content.customFieldsManager.AddFieldDefinition(name=field['name'], moType=vim.VirtualMachine)\n change_list.append(True)\n if not self.module.check_mode:\n self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)\n result_fields[field['name']] = field_value\n\n if any(change_list):\n changed = True\n\n return {'changed': changed, 'failed': False, 'custom_attributes': result_fields}\n\n def check_exists(self, field):\n for x in self.custom_field_mgr:\n # The custom attribute should be either global (managedObjectType == None) or VM specific\n if x.managedObjectType in (None, vim.VirtualMachine) and x.name == field:\n return x\n return False\n\n\ndef main():\n argument_spec = vmware_argument_spec()\n argument_spec.update(\n datacenter=dict(type='str'),\n name=dict(type='str'),\n folder=dict(type='str'),\n uuid=dict(type='str'),\n moid=dict(type='str'),\n use_instance_uuid=dict(type='bool', default=False),\n state=dict(type='str', default='present',\n choices=['absent', 'present']),\n attributes=dict(\n type='list',\n default=[],\n elements='dict',\n options=dict(\n name=dict(type='str', required=True),\n value=dict(type='str', default=''),\n )\n ),\n )\n\n module = AnsibleModule(\n argument_spec=argument_spec,\n supports_check_mode=True,\n required_one_of=[\n ['name', 'uuid', 'moid']\n ],\n )\n\n if module.params.get('folder'):\n # FindByInventoryPath() does not require an absolute path\n # so we should leave the input folder path unmodified\n module.params['folder'] = module.params['folder'].rstrip('/')\n\n pyv = VmAttributeManager(module)\n results = {'changed': False, 'failed': False, 'instance': dict()}\n\n # Check if the virtual machine exists before continuing\n vm = pyv.get_vm()\n\n if vm:\n # virtual machine already exists\n if module.params['state'] == \"present\":\n results = pyv.set_custom_field(vm, module.params['attributes'])\n elif module.params['state'] == \"absent\":\n results = pyv.set_custom_field(vm, module.params['attributes'])\n module.exit_json(**results)\n else:\n # virtual machine does not exists\n vm_id = (module.params.get('name') or module.params.get('uuid') or module.params.get('moid'))\n module.fail_json(msg=\"Unable to manage custom attributes for non-existing\"\n \" virtual machine %s\" % vm_id)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/vmware_guest_custom_attributes.py"}]} |
gh_patches_debug_1089 | rasdani/github-patches | git_diff | pypa__pipenv-5778 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Requirements output different since 2023.7.1 causing pip install issues
### Issue description
The output of `pipenv requirements --hash` has changed slightly in `2023.7.1` (#5757) and `pip` appears to be sensitive to it in some scenarios, causing `pip` to be unable to install the package(s) from the generated requirements.txt.
Snippet of requirements.txt generated with `2023.6.26`
```
pyzip==0.2.0 ; python_version >= '3.1' --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298
```
Snippet of requirements.txt generated with `2023.7.1` - The hash is now before the marker
```
pyzip==0.2.0 --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298; python_version >= '3.1'
```
### Expected result
- `2023.7.1` generates a requirements.txt as per `2023.6.26`
### Actual result
- `2023.7.1` generates a slightly different requirements.txt
### Steps to replicate
Pip successfully installs the package with the `2023.6.26` requirements.txt:
```
$ pipenv run pip --version
pip 23.1.2
$ cat requirements_2023.6.26.txt
pyzip==0.2.0 ; python_version >= '3.1' --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298
$ pipenv run pip install -r requirements_2023.6.26.txt -t test_dir
Collecting pyzip==0.2.0 (from -r requirements_2023.6.26.txt (line 1))
Using cached pyzip-0.2.0-py3-none-any.whl
Installing collected packages: pyzip
Successfully installed pyzip-0.2.0
```
Pip fails to install the package with the `2023.7.1` requirements.txt, thinking there is a hash mismatch even though it displays two identical shas:
```
$ pipenv run pip --version
pip 23.1.2
$ cat requirements_2023.7.1.txt
pyzip==0.2.0 --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298; python_version >= '3.1'
$ pipenv run pip install -r requirements_2023.7.1.txt -t test_dir
Collecting pyzip==0.2.0 (from -r requirements_2023.7.1.txt (line 1))
Using cached pyzip-0.2.0-py3-none-any.whl
WARNING: The hashes of the source archive found in cache entry don't match, ignoring cached built wheel and re-downloading source.
Using cached pyzip-0.2.0.tar.gz (6.3 kB)
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
pyzip==0.2.0 from https://files.pythonhosted.org/packages/40/72/e29470ecfb5f2bc8cdd2a1b8a6aa14af8d44aa08fe5efa407cd991ce2c64/pyzip-0.2.0.tar.gz (from -r requirements_2023.7.1.txt (line 1)):
Expected sha256 c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298;
Got c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298
```
I will raise a PR with a fix for consideration.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pipenv/routines/requirements.py`
Content:
```
1 import re
2 import sys
3
4 from pipenv.utils.dependencies import get_lockfile_section_using_pipfile_category
5 from pipenv.vendor import click
6
7
8 def requirements_from_deps(deps, include_hashes=True, include_markers=True):
9 pip_packages = []
10
11 for package_name, package_info in deps.items():
12 # Handling git repositories
13 if "git" in package_info:
14 git = package_info["git"]
15 ref = package_info.get("ref", "")
16 extras = (
17 "[{}]".format(",".join(package_info.get("extras", [])))
18 if "extras" in package_info
19 else ""
20 )
21 pip_package = f"{package_name}{extras} @ git+{git}@{ref}"
22 else:
23 # Handling packages with hashes and markers
24 version = package_info.get("version", "").replace("==", "")
25 hashes = (
26 " --hash={}".format(" --hash=".join(package_info["hashes"]))
27 if include_hashes and "hashes" in package_info
28 else ""
29 )
30 markers = (
31 "; {}".format(package_info["markers"])
32 if include_markers and "markers" in package_info
33 else ""
34 )
35 pip_package = f"{package_name}=={version}{hashes}{markers}"
36
37 # Append to the list
38 pip_packages.append(pip_package)
39
40 # pip_packages contains the pip-installable lines
41 return pip_packages
42
43
44 def generate_requirements(
45 project,
46 dev=False,
47 dev_only=False,
48 include_hashes=False,
49 include_markers=True,
50 categories="",
51 ):
52 lockfile = project.load_lockfile(expand_env_vars=False)
53
54 for i, package_index in enumerate(lockfile["_meta"]["sources"]):
55 prefix = "-i" if i == 0 else "--extra-index-url"
56 click.echo(" ".join([prefix, package_index["url"]]))
57
58 deps = {}
59 categories_list = re.split(r", *| ", categories) if categories else []
60
61 if categories_list:
62 for category in categories_list:
63 category = get_lockfile_section_using_pipfile_category(category.strip())
64 deps.update(lockfile.get(category, {}))
65 else:
66 if dev or dev_only:
67 deps.update(lockfile["develop"])
68 if not dev_only:
69 deps.update(lockfile["default"])
70
71 pip_installable_lines = requirements_from_deps(
72 deps, include_hashes=include_hashes, include_markers=include_markers
73 )
74
75 for line in pip_installable_lines:
76 click.echo(line)
77
78 sys.exit(0)
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pipenv/routines/requirements.py b/pipenv/routines/requirements.py
--- a/pipenv/routines/requirements.py
+++ b/pipenv/routines/requirements.py
@@ -32,7 +32,7 @@
if include_markers and "markers" in package_info
else ""
)
- pip_package = f"{package_name}=={version}{hashes}{markers}"
+ pip_package = f"{package_name}=={version}{markers}{hashes}"
# Append to the list
pip_packages.append(pip_package)
| {"golden_diff": "diff --git a/pipenv/routines/requirements.py b/pipenv/routines/requirements.py\n--- a/pipenv/routines/requirements.py\n+++ b/pipenv/routines/requirements.py\n@@ -32,7 +32,7 @@\n if include_markers and \"markers\" in package_info\n else \"\"\n )\n- pip_package = f\"{package_name}=={version}{hashes}{markers}\"\n+ pip_package = f\"{package_name}=={version}{markers}{hashes}\"\n \n # Append to the list\n pip_packages.append(pip_package)\n", "issue": "Requirements output different since 2023.7.1 causing pip install issues\n### Issue description\r\n\r\nThe output of `pipenv requirements --hash` has changed slightly in `2023.7.1` (#5757) and `pip` appears to be sensitive to it in some scenarios, causing `pip` to be unable to install the package(s) from the generated requirements.txt.\r\n\r\nSnippet of requirements.txt generated with `2023.6.26`\r\n\r\n```\r\npyzip==0.2.0 ; python_version >= '3.1' --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298\r\n```\r\n\r\nSnippet of requirements.txt generated with `2023.7.1` - The hash is now before the marker\r\n\r\n```\r\npyzip==0.2.0 --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298; python_version >= '3.1'\r\n```\r\n\r\n### Expected result\r\n\r\n- `2023.7.1` generates a requirements.txt as per `2023.6.26`\r\n\r\n### Actual result\r\n\r\n- `2023.7.1` generates a slightly different requirements.txt\r\n\r\n### Steps to replicate\r\nPip successfully installs the package with the `2023.6.26` requirements.txt:\r\n\r\n```\r\n$ pipenv run pip --version\r\npip 23.1.2\r\n\r\n$ cat requirements_2023.6.26.txt\r\npyzip==0.2.0 ; python_version >= '3.1' --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298\r\n\r\n$ pipenv run pip install -r requirements_2023.6.26.txt -t test_dir\r\nCollecting pyzip==0.2.0 (from -r requirements_2023.6.26.txt (line 1))\r\n Using cached pyzip-0.2.0-py3-none-any.whl\r\nInstalling collected packages: pyzip\r\nSuccessfully installed pyzip-0.2.0\r\n```\r\n\r\nPip fails to install the package with the `2023.7.3` requirements.txt, thinking there is a hash mismatch even though it displays two identical shas:\r\n\r\n```\r\n$ pipenv run pip --version\r\npip 23.1.2\r\n\r\n$ cat requirements_2023.7.1.txt\r\npyzip==0.2.0 --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298; python_version >= '3.1'\r\n\r\n$ pipenv run pip install -r requirements_2023.7.1.txt -t test_dir\r\nCollecting pyzip==0.2.0 (from -r requirements_2023.7.1.txt (line 1))\r\n Using cached pyzip-0.2.0-py3-none-any.whl\r\n WARNING: The hashes of the source archive found in cache entry don't match, ignoring cached built wheel and re-downloading source.\r\n Using cached pyzip-0.2.0.tar.gz (6.3 kB)\r\nERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. 
Otherwise, examine the package contents carefully; someone may have tampered with them.\r\n pyzip==0.2.0 from https://files.pythonhosted.org/packages/40/72/e29470ecfb5f2bc8cdd2a1b8a6aa14af8d44aa08fe5efa407cd991ce2c64/pyzip-0.2.0.tar.gz (from -r requirements_2023.7.1.txt (line 1)):\r\n Expected sha256 c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298;\r\n Got c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298\r\n```\r\n\r\nI will raise a PR with a fix for consideration.\n", "before_files": [{"content": "import re\nimport sys\n\nfrom pipenv.utils.dependencies import get_lockfile_section_using_pipfile_category\nfrom pipenv.vendor import click\n\n\ndef requirements_from_deps(deps, include_hashes=True, include_markers=True):\n pip_packages = []\n\n for package_name, package_info in deps.items():\n # Handling git repositories\n if \"git\" in package_info:\n git = package_info[\"git\"]\n ref = package_info.get(\"ref\", \"\")\n extras = (\n \"[{}]\".format(\",\".join(package_info.get(\"extras\", [])))\n if \"extras\" in package_info\n else \"\"\n )\n pip_package = f\"{package_name}{extras} @ git+{git}@{ref}\"\n else:\n # Handling packages with hashes and markers\n version = package_info.get(\"version\", \"\").replace(\"==\", \"\")\n hashes = (\n \" --hash={}\".format(\" --hash=\".join(package_info[\"hashes\"]))\n if include_hashes and \"hashes\" in package_info\n else \"\"\n )\n markers = (\n \"; {}\".format(package_info[\"markers\"])\n if include_markers and \"markers\" in package_info\n else \"\"\n )\n pip_package = f\"{package_name}=={version}{hashes}{markers}\"\n\n # Append to the list\n pip_packages.append(pip_package)\n\n # pip_packages contains the pip-installable lines\n return pip_packages\n\n\ndef generate_requirements(\n project,\n dev=False,\n dev_only=False,\n include_hashes=False,\n include_markers=True,\n categories=\"\",\n):\n lockfile = project.load_lockfile(expand_env_vars=False)\n\n for i, package_index in enumerate(lockfile[\"_meta\"][\"sources\"]):\n prefix = \"-i\" if i == 0 else \"--extra-index-url\"\n click.echo(\" \".join([prefix, package_index[\"url\"]]))\n\n deps = {}\n categories_list = re.split(r\", *| \", categories) if categories else []\n\n if categories_list:\n for category in categories_list:\n category = get_lockfile_section_using_pipfile_category(category.strip())\n deps.update(lockfile.get(category, {}))\n else:\n if dev or dev_only:\n deps.update(lockfile[\"develop\"])\n if not dev_only:\n deps.update(lockfile[\"default\"])\n\n pip_installable_lines = requirements_from_deps(\n deps, include_hashes=include_hashes, include_markers=include_markers\n )\n\n for line in pip_installable_lines:\n click.echo(line)\n\n sys.exit(0)\n", "path": "pipenv/routines/requirements.py"}], "after_files": [{"content": "import re\nimport sys\n\nfrom pipenv.utils.dependencies import get_lockfile_section_using_pipfile_category\nfrom pipenv.vendor import click\n\n\ndef requirements_from_deps(deps, include_hashes=True, include_markers=True):\n pip_packages = []\n\n for package_name, package_info in deps.items():\n # Handling git repositories\n if \"git\" in package_info:\n git = package_info[\"git\"]\n ref = package_info.get(\"ref\", \"\")\n extras = (\n \"[{}]\".format(\",\".join(package_info.get(\"extras\", [])))\n if \"extras\" in package_info\n else \"\"\n )\n pip_package = f\"{package_name}{extras} @ git+{git}@{ref}\"\n else:\n # Handling packages with hashes and markers\n version = package_info.get(\"version\", \"\").replace(\"==\", \"\")\n hashes = 
(\n \" --hash={}\".format(\" --hash=\".join(package_info[\"hashes\"]))\n if include_hashes and \"hashes\" in package_info\n else \"\"\n )\n markers = (\n \"; {}\".format(package_info[\"markers\"])\n if include_markers and \"markers\" in package_info\n else \"\"\n )\n pip_package = f\"{package_name}=={version}{markers}{hashes}\"\n\n # Append to the list\n pip_packages.append(pip_package)\n\n # pip_packages contains the pip-installable lines\n return pip_packages\n\n\ndef generate_requirements(\n project,\n dev=False,\n dev_only=False,\n include_hashes=False,\n include_markers=True,\n categories=\"\",\n):\n lockfile = project.load_lockfile(expand_env_vars=False)\n\n for i, package_index in enumerate(lockfile[\"_meta\"][\"sources\"]):\n prefix = \"-i\" if i == 0 else \"--extra-index-url\"\n click.echo(\" \".join([prefix, package_index[\"url\"]]))\n\n deps = {}\n categories_list = re.split(r\", *| \", categories) if categories else []\n\n if categories_list:\n for category in categories_list:\n category = get_lockfile_section_using_pipfile_category(category.strip())\n deps.update(lockfile.get(category, {}))\n else:\n if dev or dev_only:\n deps.update(lockfile[\"develop\"])\n if not dev_only:\n deps.update(lockfile[\"default\"])\n\n pip_installable_lines = requirements_from_deps(\n deps, include_hashes=include_hashes, include_markers=include_markers\n )\n\n for line in pip_installable_lines:\n click.echo(line)\n\n sys.exit(0)\n", "path": "pipenv/routines/requirements.py"}]} |
gh_patches_debug_1090 | rasdani/github-patches | git_diff | streamlink__streamlink-5911 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.showroom: streamlink unable to download any live streams from showroom.com
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
streamlink 6.7.2
### Description
On 2024.03.29, showroom.com made some changes to their site.
When I try to use streamlink to record a showroom url that is online, e.g. https://www.showroom-live.com/r/48_KOJIMA_AIKO
> streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o testing.ts
the expected behavior is that it should return this:
> [cli][info] Found matching plugin showroom for URL https://www.showroom-live.com/r/48_KOJIMA_AIKO
[utils.l10n][debug] Language code: en_US
[cli][info] Available streams: 144p (worst), 360p (best)
[cli][info] Opening stream: 360p (hls)
[cli][info] Writing output to D:\testing.ts
[cli][debug] Checking file output
[stream.hls][debug] Reloading playlist
[cli][debug] Pre-buffering 8192 bytes
[stream.hls][debug] First Sequence: 1; Last Sequence: 4
[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 3; End Sequence: None
[stream.hls][debug] Adding segment 3 to queue
[stream.hls][debug] Adding segment 4 to queue
However, when I tried recording a showroom stream on 2024.03.29, I got an error stating that the stream is restricted.
> L:\>streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o sample.ts
[session][debug] Loading plugin: showroom
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.10.6
[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022
[cli][debug] Streamlink: 6.7.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2022.6.15
[cli][debug] exceptiongroup: 1.2.0
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.16.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.2
[cli][debug] trio: 0.25.0
[cli][debug] trio-websocket: 0.11.1
[cli][debug] typing-extensions: 4.5.0
[cli][debug] urllib3: 1.26.12
[cli][debug] websocket-client: 1.4.0
[cli][debug] Arguments:
[cli][debug] url=https://www.showroom-live.com/r/48_KOJIMA_AIKO
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --output=sample.ts
[cli][info] Found matching plugin showroom for URL https://www.showroom-live.com/r/48_KOJIMA_AIKO
[plugins.showroom][debug] Room ID: 270117
[plugins.showroom][error] This stream is restricted
error: No playable streams found on this URL: https://www.showroom-live.com/r/48_KOJIMA_AIKO
- I tried downloading 12 different showroom live streams, but received the same error for all of them.
- I tried changing my IP address using a VPN to a Japan/Hong Kong/Singapore/Germany/USA IP, but the same problem persists.
- Next, I tried to locate the m3u8 address of the showroom stream using the stream detector addon (Firefox) and used the .m3u8 address directly in streamlink:
> streamlink --loglevel debug https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8 best -o testing.ts
Streamlink was able to work as normal and download successfully:
> D:\>streamlink --loglevel debug https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8 best -o testing.ts
> [session][debug] Loading plugin: hls
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.10.6
[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022
[cli][debug] Streamlink: 6.7.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2022.6.15
[cli][debug] exceptiongroup: 1.2.0
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.16.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.2
[cli][debug] trio: 0.25.0
[cli][debug] trio-websocket: 0.11.1
[cli][debug] typing-extensions: 4.5.0
[cli][debug] urllib3: 1.26.12
[cli][debug] websocket-client: 1.4.0
[cli][debug] Arguments:
[cli][debug] url=https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --output=testing.ts
[cli][info] Found matching plugin hls for URL https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8
[plugins.hls][debug] URL=https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8; params={}
[utils.l10n][debug] Language code: en_US
[cli][info] Available streams: live (worst, best)
[cli][info] Opening stream: live (hls)
[cli][info] Writing output to
D:\testing.ts
[cli][debug] Checking file output
[stream.hls][debug] Reloading playlist
[cli][debug] Pre-buffering 8192 bytes
[stream.hls][debug] First Sequence: 8904; Last Sequence: 8906
[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 8904; End Sequence: None
[stream.hls][debug] Adding segment 8904 to queue
[stream.hls][debug] Adding segment 8905 to queue
[stream.hls][debug] Adding segment 8906 to queue
[stream.hls][debug] Writing segment 8904 to output
[stream.hls][debug] Segment 8904 complete
[cli][debug] Writing stream to output
[download] Written 538.66 KiB to L:\testing.ts (0s) [stream.hls][debug] Writing segment 8905 to output
[stream.hls][debug] Segment 8905 complete
[download] Written 1.17 MiB to L:\testing.ts (0s) [stream.hls][debug] Writing segment 8906 to output
[stream.hls][debug] Segment 8906 complete
[download] Written 1.73 MiB to L:\testing.ts (1s) [stream.hls][debug] Reloading playlist
I was thinking that this might be a streamlink plugin issue and not Showroom disabling their API, because I tried testing with a Japanese GUI ffmpeg-based showroom downloader called ショールーム録画っち (https://www.skypower.xyz/showroom_rokugatch.html). I was able to download streams successfully by just entering the showroom url.
### Debug log
```text
L:\>streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o sample.ts
[session][debug] Loading plugin: showroom
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.10.6
[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022
[cli][debug] Streamlink: 6.7.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2022.6.15
[cli][debug] exceptiongroup: 1.2.0
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.16.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.2
[cli][debug] trio: 0.25.0
[cli][debug] trio-websocket: 0.11.1
[cli][debug] typing-extensions: 4.5.0
[cli][debug] urllib3: 1.26.12
[cli][debug] websocket-client: 1.4.0
[cli][debug] Arguments:
[cli][debug] url=https://www.showroom-live.com/r/48_KOJIMA_AIKO
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --output=sample.ts
[cli][info] Found matching plugin showroom for URL https://www.showroom-live.com/r/48_KOJIMA_AIKO
[plugins.showroom][debug] Room ID: 270117
[plugins.showroom][error] This stream is restricted
error: No playable streams found on this URL: https://www.showroom-live.com/r/48_KOJIMA_AIKO
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/showroom.py`
Content:
```
1 """
2 $description Japanese live-streaming service used primarily by Japanese idols & voice actors and their fans.
3 $url showroom-live.com
4 $type live
5 $metadata title
6 """
7
8 import logging
9 import re
10 from urllib.parse import parse_qsl, urlparse
11
12 from streamlink.plugin import Plugin, pluginmatcher
13 from streamlink.plugin.api import validate
14 from streamlink.stream.hls import HLSStream
15
16
17 log = logging.getLogger(__name__)
18
19
20 @pluginmatcher(re.compile(
21 r"https?://(?:\w+\.)?showroom-live\.com/",
22 ))
23 class Showroom(Plugin):
24 LIVE_STATUS = 2
25
26 def __init__(self, *args, **kwargs):
27 super().__init__(*args, **kwargs)
28 self.session.set_option("hls-playlist-reload-time", "segment")
29
30 def _get_streams(self):
31 room_id = self.session.http.get(
32 self.url,
33 schema=validate.Schema(
34 validate.parse_html(),
35 validate.xml_xpath_string(".//nav//a[contains(@href,'/room/profile?')]/@href"),
36 validate.none_or_all(
37 validate.transform(lambda _url_profile: dict(parse_qsl(urlparse(_url_profile).query))),
38 validate.get("room_id"),
39 ),
40 ),
41 )
42 if not room_id:
43 return
44
45 log.debug(f"Room ID: {room_id}")
46
47 live_status, self.title = self.session.http.get(
48 "https://www.showroom-live.com/api/live/live_info",
49 params={
50 "room_id": room_id,
51 },
52 schema=validate.Schema(
53 validate.parse_json(),
54 {
55 "live_status": int,
56 "room_name": str,
57 },
58 validate.union_get(
59 "live_status",
60 "room_name",
61 ),
62 ),
63 )
64 if live_status != self.LIVE_STATUS:
65 log.info("This stream is currently offline")
66 return
67
68 url = self.session.http.get(
69 "https://www.showroom-live.com/api/live/streaming_url",
70 params={
71 "room_id": room_id,
72 "abr_available": 1,
73 },
74 schema=validate.Schema(
75 validate.parse_json(),
76 {"streaming_url_list": [{
77 "type": str,
78 "url": validate.url(),
79 }]},
80 validate.get("streaming_url_list"),
81 validate.filter(lambda p: p["type"] == "hls_all"),
82 validate.get((0, "url")),
83 ),
84 )
85
86 res = self.session.http.get(url, acceptable_status=(200, 403, 404))
87 if res.headers["Content-Type"] != "application/x-mpegURL":
88 log.error("This stream is restricted")
89 return
90
91 return HLSStream.parse_variant_playlist(self.session, url)
92
93
94 __plugin__ = Showroom
95
```
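In the listing above, the "This stream is restricted" message can only be produced by the exact-match Content-Type test near the end of `_get_streams` (listing lines 86-89); the debug log in the issue shows the room id being resolved, so everything before that point succeeded. HLS playlists are served under more than one MIME type in the wild (RFC 8216 registers `application/vnd.apple.mpegurl`, while `application/x-mpegURL` is an older but still common spelling), so a strict string equality test is a plausible point of failure. A small sketch of a more tolerant check, assuming the playlist is in fact being returned:

```python
# Sketch: accept the MIME types commonly used for HLS playlists instead of a
# single exact string. In the plugin this would replace the equality test on
# res.headers["Content-Type"] inside _get_streams().
HLS_PLAYLIST_TYPES = ("application/x-mpegURL", "application/vnd.apple.mpegurl")

def looks_like_hls_playlist(content_type):
    return content_type in HLS_PLAYLIST_TYPES

print(looks_like_hls_playlist("application/vnd.apple.mpegurl"))  # True
print(looks_like_hls_playlist("text/html"))                      # False
```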
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/showroom.py b/src/streamlink/plugins/showroom.py
--- a/src/streamlink/plugins/showroom.py
+++ b/src/streamlink/plugins/showroom.py
@@ -84,7 +84,7 @@
)
res = self.session.http.get(url, acceptable_status=(200, 403, 404))
- if res.headers["Content-Type"] != "application/x-mpegURL":
+ if res.headers["Content-Type"] not in ("application/x-mpegURL", "application/vnd.apple.mpegurl"):
log.error("This stream is restricted")
return
| {"golden_diff": "diff --git a/src/streamlink/plugins/showroom.py b/src/streamlink/plugins/showroom.py\n--- a/src/streamlink/plugins/showroom.py\n+++ b/src/streamlink/plugins/showroom.py\n@@ -84,7 +84,7 @@\n )\n \n res = self.session.http.get(url, acceptable_status=(200, 403, 404))\n- if res.headers[\"Content-Type\"] != \"application/x-mpegURL\":\n+ if res.headers[\"Content-Type\"] not in (\"application/x-mpegURL\", \"application/vnd.apple.mpegurl\"):\n log.error(\"This stream is restricted\")\n return\n", "issue": "plugins.showroom: streamlink unable to download any live streams from showroom.com\n### Checklist\r\n\r\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\r\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\r\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\r\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\r\n\r\n### Streamlink version\r\n\r\nstreamlink 6.7.2\r\n\r\n### Description\r\n\r\nOn 2024.03.29, showroom.com made some changes to their site.\r\n\r\nWhen I try to use streamlink to record a showroom url that is online, for eg. https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n\r\n> streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o testing.ts\r\n\r\nthe expected behavior is that it should return this:\r\n\r\n> [cli][info] Found matching plugin showroom for URL https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n [utils.l10n][debug] Language code: en_US\r\n [cli][info] Available streams: 144p (worst), 360p (best)\r\n [cli][info] Opening stream: 360p (hls)\r\n [cli][info] Writing output to D:\\testing.ts\r\n [cli][debug] Checking file output\r\n [stream.hls][debug] Reloading playlist\r\n [cli][debug] Pre-buffering 8192 bytes\r\n [stream.hls][debug] First Sequence: 1; Last Sequence: 4\r\n [stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 3; End Sequence: None\r\n [stream.hls][debug] Adding segment 3 to queue\r\n [stream.hls][debug] Adding segment 4 to queue\r\n\r\nHowever, when I tried recording a showroom stream on 2024.03.29, I got an error stating that the stream is restricted.\r\n\r\n> L:\\>streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o sample.ts\r\n[session][debug] Loading plugin: showroom\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.10.6\r\n[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022\r\n[cli][debug] Streamlink: 6.7.2\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2022.6.15\r\n[cli][debug] exceptiongroup: 1.2.0\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.1\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.16.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.2\r\n[cli][debug] trio: 0.25.0\r\n[cli][debug] trio-websocket: 0.11.1\r\n[cli][debug] typing-extensions: 4.5.0\r\n[cli][debug] urllib3: 1.26.12\r\n[cli][debug] websocket-client: 1.4.0\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --output=sample.ts\r\n[cli][info] Found matching plugin showroom for URL 
https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n[plugins.showroom][debug] Room ID: 270117\r\n[plugins.showroom][error] This stream is restricted\r\nerror: No playable streams found on this URL: https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n\r\n- I tried downloading 12 different showroom live streams, but received the same error for all of them.\r\n- I tried changing my IP address using a VPN to a Japan/Hong Kong/Singapore/Germany/USA IP, but the same problem persist.\r\n- Next, I tried to locate the m3u8 address of the showroom stream using stream detector addon (Firefox) and use the .m3u8 address directly in streamlink:\r\n\r\n> streamlink --loglevel debug https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8 best -o testing.ts\r\n\r\nStreamlink was able to work as normal and download successfully:\r\n\r\n> D:\\>streamlink --loglevel debug https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8 best -o testing.ts\r\n\r\n> [session][debug] Loading plugin: hls\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.10.6\r\n[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022\r\n[cli][debug] Streamlink: 6.7.2\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2022.6.15\r\n[cli][debug] exceptiongroup: 1.2.0\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.1\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.16.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.2\r\n[cli][debug] trio: 0.25.0\r\n[cli][debug] trio-websocket: 0.11.1\r\n[cli][debug] typing-extensions: 4.5.0\r\n[cli][debug] urllib3: 1.26.12\r\n[cli][debug] websocket-client: 1.4.0\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --output=testing.ts\r\n[cli][info] Found matching plugin hls for URL https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8\r\n[plugins.hls][debug] URL=https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8; params={}\r\n[utils.l10n][debug] Language code: en_US\r\n[cli][info] Available streams: live (worst, best)\r\n[cli][info] Opening stream: live (hls)\r\n[cli][info] Writing output to\r\nD:\\testing.ts\r\n[cli][debug] Checking file output\r\n[stream.hls][debug] Reloading playlist\r\n[cli][debug] Pre-buffering 8192 bytes\r\n[stream.hls][debug] First Sequence: 8904; Last Sequence: 8906\r\n[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 8904; End Sequence: None\r\n[stream.hls][debug] Adding segment 8904 to queue\r\n[stream.hls][debug] Adding segment 8905 to queue\r\n[stream.hls][debug] Adding segment 8906 to queue\r\n[stream.hls][debug] Writing segment 8904 to output\r\n[stream.hls][debug] Segment 8904 complete\r\n[cli][debug] Writing stream to output\r\n[download] Written 538.66 KiB to L:\\testing.ts (0s) [stream.hls][debug] Writing segment 8905 to output\r\n[stream.hls][debug] Segment 8905 complete\r\n[download] Written 1.17 MiB to L:\\testing.ts (0s) [stream.hls][debug] Writing segment 8906 to output\r\n[stream.hls][debug] Segment 8906 complete\r\n[download] Written 1.73 MiB to L:\\testing.ts (1s) [stream.hls][debug] Reloading playlist\r\n\r\nI was thinking that this might 
be a streamlink plugin issue and not Showroom disabling their API, because I tried testing with a Japanese GUI ffmpeg based showroom downloader, called \u30b7\u30e7\u30fc\u30eb\u30fc\u30e0\u9332\u753b\u3063\u3061 (https://www.skypower.xyz/showroom_rokugatch.html). I was able to download streams successfully by just entering the showroom url.\r\n\r\n\r\n\r\n\r\n\r\n### Debug log\r\n\r\n```text\r\nL:\\>streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o sample.ts\r\n[session][debug] Loading plugin: showroom\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.10.6\r\n[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022\r\n[cli][debug] Streamlink: 6.7.2\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2022.6.15\r\n[cli][debug] exceptiongroup: 1.2.0\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.1\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.16.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.2\r\n[cli][debug] trio: 0.25.0\r\n[cli][debug] trio-websocket: 0.11.1\r\n[cli][debug] typing-extensions: 4.5.0\r\n[cli][debug] urllib3: 1.26.12\r\n[cli][debug] websocket-client: 1.4.0\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --output=sample.ts\r\n[cli][info] Found matching plugin showroom for URL https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n[plugins.showroom][debug] Room ID: 270117\r\n[plugins.showroom][error] This stream is restricted\r\nerror: No playable streams found on this URL: https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\n$description Japanese live-streaming service used primarily by Japanese idols & voice actors and their fans.\n$url showroom-live.com\n$type live\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import parse_qsl, urlparse\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:\\w+\\.)?showroom-live\\.com/\",\n))\nclass Showroom(Plugin):\n LIVE_STATUS = 2\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.session.set_option(\"hls-playlist-reload-time\", \"segment\")\n\n def _get_streams(self):\n room_id = self.session.http.get(\n self.url,\n schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//nav//a[contains(@href,'/room/profile?')]/@href\"),\n validate.none_or_all(\n validate.transform(lambda _url_profile: dict(parse_qsl(urlparse(_url_profile).query))),\n validate.get(\"room_id\"),\n ),\n ),\n )\n if not room_id:\n return\n\n log.debug(f\"Room ID: {room_id}\")\n\n live_status, self.title = self.session.http.get(\n \"https://www.showroom-live.com/api/live/live_info\",\n params={\n \"room_id\": room_id,\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"live_status\": int,\n \"room_name\": str,\n },\n validate.union_get(\n \"live_status\",\n \"room_name\",\n ),\n ),\n )\n if live_status != self.LIVE_STATUS:\n log.info(\"This stream is currently offline\")\n return\n\n url = self.session.http.get(\n \"https://www.showroom-live.com/api/live/streaming_url\",\n params={\n \"room_id\": room_id,\n \"abr_available\": 1,\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\"streaming_url_list\": [{\n \"type\": str,\n \"url\": 
validate.url(),\n }]},\n validate.get(\"streaming_url_list\"),\n validate.filter(lambda p: p[\"type\"] == \"hls_all\"),\n validate.get((0, \"url\")),\n ),\n )\n\n res = self.session.http.get(url, acceptable_status=(200, 403, 404))\n if res.headers[\"Content-Type\"] != \"application/x-mpegURL\":\n log.error(\"This stream is restricted\")\n return\n\n return HLSStream.parse_variant_playlist(self.session, url)\n\n\n__plugin__ = Showroom\n", "path": "src/streamlink/plugins/showroom.py"}], "after_files": [{"content": "\"\"\"\n$description Japanese live-streaming service used primarily by Japanese idols & voice actors and their fans.\n$url showroom-live.com\n$type live\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import parse_qsl, urlparse\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:\\w+\\.)?showroom-live\\.com/\",\n))\nclass Showroom(Plugin):\n LIVE_STATUS = 2\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.session.set_option(\"hls-playlist-reload-time\", \"segment\")\n\n def _get_streams(self):\n room_id = self.session.http.get(\n self.url,\n schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//nav//a[contains(@href,'/room/profile?')]/@href\"),\n validate.none_or_all(\n validate.transform(lambda _url_profile: dict(parse_qsl(urlparse(_url_profile).query))),\n validate.get(\"room_id\"),\n ),\n ),\n )\n if not room_id:\n return\n\n log.debug(f\"Room ID: {room_id}\")\n\n live_status, self.title = self.session.http.get(\n \"https://www.showroom-live.com/api/live/live_info\",\n params={\n \"room_id\": room_id,\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"live_status\": int,\n \"room_name\": str,\n },\n validate.union_get(\n \"live_status\",\n \"room_name\",\n ),\n ),\n )\n if live_status != self.LIVE_STATUS:\n log.info(\"This stream is currently offline\")\n return\n\n url = self.session.http.get(\n \"https://www.showroom-live.com/api/live/streaming_url\",\n params={\n \"room_id\": room_id,\n \"abr_available\": 1,\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\"streaming_url_list\": [{\n \"type\": str,\n \"url\": validate.url(),\n }]},\n validate.get(\"streaming_url_list\"),\n validate.filter(lambda p: p[\"type\"] == \"hls_all\"),\n validate.get((0, \"url\")),\n ),\n )\n\n res = self.session.http.get(url, acceptable_status=(200, 403, 404))\n if res.headers[\"Content-Type\"] not in (\"application/x-mpegURL\", \"application/vnd.apple.mpegurl\"):\n log.error(\"This stream is restricted\")\n return\n\n return HLSStream.parse_variant_playlist(self.session, url)\n\n\n__plugin__ = Showroom\n", "path": "src/streamlink/plugins/showroom.py"}]} |
gh_patches_debug_1091 | rasdani/github-patches | git_diff | optuna__optuna-50 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`pfnopt.minimize` fails under `storage=None` (default)
```python
import pfnopt


def obj(client):
    x = client.sample_uniform('x', 0.1, 0.2)
    return x


def main():
    pfnopt.minimize(obj, n_trials=2)


if __name__ == '__main__':
    main()
```
```
AttributeError: 'NoneType' object has no attribute 'get_study_uuid_from_id'
```
--- END ISSUE ---
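The traceback points at the default path: `minimize` is declared with `storage=None` and passes that value straight to `create_new_study`, which immediately calls a method on it (both functions are in `pfnopt/study.py`, shown below). A stripped-down sketch of that call path, with the bodies condensed to the one line that matters, reproduces the same error without the library:

```python
# Stand-ins for the two functions in pfnopt/study.py; only the failing line is kept.
def create_new_study(storage, sampler=None, pruner=None):
    # With storage=None, looking up the method on None is what raises:
    # AttributeError: 'NoneType' object has no attribute 'get_study_uuid_from_id'
    return storage.get_study_uuid_from_id(storage.create_new_study_id())


def minimize(func, n_trials=None, storage=None):
    return create_new_study(storage=storage)


minimize(lambda client: 0.0)
```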
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pfnopt/study.py`
Content:
```
1 import datetime
2 import multiprocessing
3 import multiprocessing.pool
4 from typing import Any # NOQA
5 from typing import Callable # NOQA
6 from typing import Dict # NOQA
7 from typing import Iterable # NOQA
8 from typing import List # NOQA
9 from typing import Optional # NOQA
10
11 from pfnopt import client as client_module
12 from pfnopt import pruners
13 from pfnopt import samplers
14 from pfnopt import storages
15 from pfnopt import trial # NOQA
16
17 ObjectiveFuncType = Callable[[client_module.BaseClient], float]
18
19
20 class Study(object):
21
22 def __init__(
23 self,
24 study_uuid, # type: str
25 storage, # type: storages.BaseStorage
26 sampler=None, # type: samplers.BaseSampler
27 pruner=None, # type: pruners.BasePruner
28 ):
29 # type: (...) -> None
30
31 self.study_uuid = study_uuid
32 self.storage = storage
33 self.sampler = sampler or samplers.TPESampler()
34 self.pruner = pruner or pruners.MedianPruner()
35
36 self.study_id = storage.get_study_id_from_uuid(study_uuid)
37
38 @property
39 def best_params(self):
40 # type: () -> Dict[str, Any]
41
42 return self.best_trial.params
43
44 @property
45 def best_value(self):
46 # type: () -> float
47
48 return self.best_trial.value
49
50 @property
51 def best_trial(self):
52 # type: () -> trial.Trial
53
54 return self.storage.get_best_trial(self.study_id)
55
56 @property
57 def trials(self):
58 # type: () -> List[trial.Trial]
59
60 return self.storage.get_all_trials(self.study_id)
61
62 def run(self, func, n_trials=None, timeout_seconds=None, n_jobs=1):
63 # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None
64
65 if n_jobs == 1:
66 self._run_sequential(func, n_trials, timeout_seconds)
67 else:
68 self._run_parallel(func, n_trials, timeout_seconds, n_jobs)
69
70 def _run_sequential(self, func, n_trials, timeout_seconds):
71 # type: (ObjectiveFuncType, Optional[int], Optional[float]) -> None
72
73 i_trial = 0
74 time_start = datetime.datetime.now()
75 while True:
76 if n_trials is not None:
77 if i_trial >= n_trials:
78 break
79 i_trial += 1
80
81 if timeout_seconds is not None:
82 elapsed_seconds = (datetime.datetime.now() - time_start).total_seconds()
83 if elapsed_seconds >= timeout_seconds:
84 break
85
86 trial_id = self.storage.create_new_trial_id(self.study_id)
87 client = client_module.LocalClient(self, trial_id)
88 result = func(client)
89 client.complete(result)
90
91 def _run_parallel(self, func, n_trials, timeout_seconds, n_jobs):
92 # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None
93
94 if isinstance(self.storage, storages.RDBStorage):
95 raise TypeError('Parallel run with RDBStorage is not supported.')
96
97 if n_jobs == -1:
98 n_jobs = multiprocessing.cpu_count()
99
100 pool = multiprocessing.pool.ThreadPool(n_jobs) # type: ignore
101
102 def f(_):
103 trial_id = self.storage.create_new_trial_id(self.study_id)
104 client = client_module.LocalClient(self, trial_id)
105 result = func(client)
106 client.complete(result)
107
108 self.start_datetime = datetime.datetime.now()
109
110 if n_trials is not None:
111 ite = range(n_trials) # type: Iterable[int]
112 else:
113 ite = iter(int, 1) # Infinite iterator
114
115 imap_ite = pool.imap(f, ite, chunksize=1)
116 while True:
117 if timeout_seconds is None:
118 to = None
119 else:
120 elapsed_timedelta = datetime.datetime.now() - self.start_datetime
121 elapsed_seconds = elapsed_timedelta.total_seconds()
122 to = (timeout_seconds - elapsed_seconds)
123
124 try:
125 imap_ite.next(timeout=to) # type: ignore
126 except (StopIteration, multiprocessing.TimeoutError): # type: ignore
127 break
128
129 pool.terminate()
130
131
132 def minimize(
133 func, # type: ObjectiveFuncType
134 n_trials=None, # type: Optional[int]
135 timeout_seconds=None, # type: Optional[float]
136 n_jobs=1, # type: int
137 storage=None, # type: storages.BaseStorage
138 sampler=None, # type: samplers.BaseSampler
139 pruner=None, # type: pruners.BasePruner
140 study=None, # type: Study
141 ):
142 # type: (...) -> Study
143
144 study = study or create_new_study(storage=storage, sampler=sampler, pruner=pruner)
145 study.run(func, n_trials, timeout_seconds, n_jobs)
146 return study
147
148
149 # TODO(akiba): implement me
150 def maximize():
151 raise NotImplementedError
152
153
154 def create_new_study(storage, sampler=None, pruner=None):
155 # type: (storages.BaseStorage, samplers.BaseSampler, pruners.BasePruner) -> Study
156 study_uuid = storage.get_study_uuid_from_id(storage.create_new_study_id())
157 return Study(study_uuid=study_uuid, storage=storage, sampler=sampler, pruner=pruner)
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pfnopt/study.py b/pfnopt/study.py
--- a/pfnopt/study.py
+++ b/pfnopt/study.py
@@ -140,7 +140,7 @@
study=None, # type: Study
):
# type: (...) -> Study
-
+ storage = storage or storages.InMemoryStorage()
study = study or create_new_study(storage=storage, sampler=sampler, pruner=pruner)
study.run(func, n_trials, timeout_seconds, n_jobs)
return study
| {"golden_diff": "diff --git a/pfnopt/study.py b/pfnopt/study.py\n--- a/pfnopt/study.py\n+++ b/pfnopt/study.py\n@@ -140,7 +140,7 @@\n study=None, # type: Study\n ):\n # type: (...) -> Study\n-\n+ storage = storage or storages.InMemoryStorage()\n study = study or create_new_study(storage=storage, sampler=sampler, pruner=pruner)\n study.run(func, n_trials, timeout_seconds, n_jobs)\n return study\n", "issue": "`pfnopt.minimize` fails under `strorage=None` (default)\n```python\r\nimport pfnopt\r\n\r\n\r\ndef obj(client):\r\n x = client.sample_uniform('x', 0.1, 0.2)\r\n return x\r\n\r\n\r\ndef main():\r\n pfnopt.minimize(obj, n_trials=2)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\n```\r\nAttributeError: 'NoneType' object has no attribute 'get_study_uuid_from_id'\r\n```\n", "before_files": [{"content": "import datetime\nimport multiprocessing\nimport multiprocessing.pool\nfrom typing import Any # NOQA\nfrom typing import Callable # NOQA\nfrom typing import Dict # NOQA\nfrom typing import Iterable # NOQA\nfrom typing import List # NOQA\nfrom typing import Optional # NOQA\n\nfrom pfnopt import client as client_module\nfrom pfnopt import pruners\nfrom pfnopt import samplers\nfrom pfnopt import storages\nfrom pfnopt import trial # NOQA\n\nObjectiveFuncType = Callable[[client_module.BaseClient], float]\n\n\nclass Study(object):\n\n def __init__(\n self,\n study_uuid, # type: str\n storage, # type: storages.BaseStorage\n sampler=None, # type: samplers.BaseSampler\n pruner=None, # type: pruners.BasePruner\n ):\n # type: (...) -> None\n\n self.study_uuid = study_uuid\n self.storage = storage\n self.sampler = sampler or samplers.TPESampler()\n self.pruner = pruner or pruners.MedianPruner()\n\n self.study_id = storage.get_study_id_from_uuid(study_uuid)\n\n @property\n def best_params(self):\n # type: () -> Dict[str, Any]\n\n return self.best_trial.params\n\n @property\n def best_value(self):\n # type: () -> float\n\n return self.best_trial.value\n\n @property\n def best_trial(self):\n # type: () -> trial.Trial\n\n return self.storage.get_best_trial(self.study_id)\n\n @property\n def trials(self):\n # type: () -> List[trial.Trial]\n\n return self.storage.get_all_trials(self.study_id)\n\n def run(self, func, n_trials=None, timeout_seconds=None, n_jobs=1):\n # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None\n\n if n_jobs == 1:\n self._run_sequential(func, n_trials, timeout_seconds)\n else:\n self._run_parallel(func, n_trials, timeout_seconds, n_jobs)\n\n def _run_sequential(self, func, n_trials, timeout_seconds):\n # type: (ObjectiveFuncType, Optional[int], Optional[float]) -> None\n\n i_trial = 0\n time_start = datetime.datetime.now()\n while True:\n if n_trials is not None:\n if i_trial >= n_trials:\n break\n i_trial += 1\n\n if timeout_seconds is not None:\n elapsed_seconds = (datetime.datetime.now() - time_start).total_seconds()\n if elapsed_seconds >= timeout_seconds:\n break\n\n trial_id = self.storage.create_new_trial_id(self.study_id)\n client = client_module.LocalClient(self, trial_id)\n result = func(client)\n client.complete(result)\n\n def _run_parallel(self, func, n_trials, timeout_seconds, n_jobs):\n # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None\n\n if isinstance(self.storage, storages.RDBStorage):\n raise TypeError('Parallel run with RDBStorage is not supported.')\n\n if n_jobs == -1:\n n_jobs = multiprocessing.cpu_count()\n\n pool = multiprocessing.pool.ThreadPool(n_jobs) # type: ignore\n\n def f(_):\n trial_id = 
self.storage.create_new_trial_id(self.study_id)\n client = client_module.LocalClient(self, trial_id)\n result = func(client)\n client.complete(result)\n\n self.start_datetime = datetime.datetime.now()\n\n if n_trials is not None:\n ite = range(n_trials) # type: Iterable[int]\n else:\n ite = iter(int, 1) # Infinite iterator\n\n imap_ite = pool.imap(f, ite, chunksize=1)\n while True:\n if timeout_seconds is None:\n to = None\n else:\n elapsed_timedelta = datetime.datetime.now() - self.start_datetime\n elapsed_seconds = elapsed_timedelta.total_seconds()\n to = (timeout_seconds - elapsed_seconds)\n\n try:\n imap_ite.next(timeout=to) # type: ignore\n except (StopIteration, multiprocessing.TimeoutError): # type: ignore\n break\n\n pool.terminate()\n\n\ndef minimize(\n func, # type: ObjectiveFuncType\n n_trials=None, # type: Optional[int]\n timeout_seconds=None, # type: Optional[float]\n n_jobs=1, # type: int\n storage=None, # type: storages.BaseStorage\n sampler=None, # type: samplers.BaseSampler\n pruner=None, # type: pruners.BasePruner\n study=None, # type: Study\n):\n # type: (...) -> Study\n\n study = study or create_new_study(storage=storage, sampler=sampler, pruner=pruner)\n study.run(func, n_trials, timeout_seconds, n_jobs)\n return study\n\n\n# TODO(akiba): implement me\ndef maximize():\n raise NotImplementedError\n\n\ndef create_new_study(storage, sampler=None, pruner=None):\n # type: (storages.BaseStorage, samplers.BaseSampler, pruners.BasePruner) -> Study\n study_uuid = storage.get_study_uuid_from_id(storage.create_new_study_id())\n return Study(study_uuid=study_uuid, storage=storage, sampler=sampler, pruner=pruner)\n", "path": "pfnopt/study.py"}], "after_files": [{"content": "import datetime\nimport multiprocessing\nimport multiprocessing.pool\nfrom typing import Any # NOQA\nfrom typing import Callable # NOQA\nfrom typing import Dict # NOQA\nfrom typing import Iterable # NOQA\nfrom typing import List # NOQA\nfrom typing import Optional # NOQA\n\nfrom pfnopt import client as client_module\nfrom pfnopt import pruners\nfrom pfnopt import samplers\nfrom pfnopt import storages\nfrom pfnopt import trial # NOQA\n\nObjectiveFuncType = Callable[[client_module.BaseClient], float]\n\n\nclass Study(object):\n\n def __init__(\n self,\n study_uuid, # type: str\n storage, # type: storages.BaseStorage\n sampler=None, # type: samplers.BaseSampler\n pruner=None, # type: pruners.BasePruner\n ):\n # type: (...) 
-> None\n\n self.study_uuid = study_uuid\n self.storage = storage\n self.sampler = sampler or samplers.TPESampler()\n self.pruner = pruner or pruners.MedianPruner()\n\n self.study_id = storage.get_study_id_from_uuid(study_uuid)\n\n @property\n def best_params(self):\n # type: () -> Dict[str, Any]\n\n return self.best_trial.params\n\n @property\n def best_value(self):\n # type: () -> float\n\n return self.best_trial.value\n\n @property\n def best_trial(self):\n # type: () -> trial.Trial\n\n return self.storage.get_best_trial(self.study_id)\n\n @property\n def trials(self):\n # type: () -> List[trial.Trial]\n\n return self.storage.get_all_trials(self.study_id)\n\n def run(self, func, n_trials=None, timeout_seconds=None, n_jobs=1):\n # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None\n\n if n_jobs == 1:\n self._run_sequential(func, n_trials, timeout_seconds)\n else:\n self._run_parallel(func, n_trials, timeout_seconds, n_jobs)\n\n def _run_sequential(self, func, n_trials, timeout_seconds):\n # type: (ObjectiveFuncType, Optional[int], Optional[float]) -> None\n\n i_trial = 0\n time_start = datetime.datetime.now()\n while True:\n if n_trials is not None:\n if i_trial >= n_trials:\n break\n i_trial += 1\n\n if timeout_seconds is not None:\n elapsed_seconds = (datetime.datetime.now() - time_start).total_seconds()\n if elapsed_seconds >= timeout_seconds:\n break\n\n trial_id = self.storage.create_new_trial_id(self.study_id)\n client = client_module.LocalClient(self, trial_id)\n result = func(client)\n client.complete(result)\n\n def _run_parallel(self, func, n_trials, timeout_seconds, n_jobs):\n # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None\n\n if isinstance(self.storage, storages.RDBStorage):\n raise TypeError('Parallel run with RDBStorage is not supported.')\n\n if n_jobs == -1:\n n_jobs = multiprocessing.cpu_count()\n\n pool = multiprocessing.pool.ThreadPool(n_jobs) # type: ignore\n\n def f(_):\n trial_id = self.storage.create_new_trial_id(self.study_id)\n client = client_module.LocalClient(self, trial_id)\n result = func(client)\n client.complete(result)\n\n self.start_datetime = datetime.datetime.now()\n\n if n_trials is not None:\n ite = range(n_trials) # type: Iterable[int]\n else:\n ite = iter(int, 1) # Infinite iterator\n\n imap_ite = pool.imap(f, ite, chunksize=1)\n while True:\n if timeout_seconds is None:\n to = None\n else:\n elapsed_timedelta = datetime.datetime.now() - self.start_datetime\n elapsed_seconds = elapsed_timedelta.total_seconds()\n to = (timeout_seconds - elapsed_seconds)\n\n try:\n imap_ite.next(timeout=to) # type: ignore\n except (StopIteration, multiprocessing.TimeoutError): # type: ignore\n break\n\n pool.terminate()\n\n\ndef minimize(\n func, # type: ObjectiveFuncType\n n_trials=None, # type: Optional[int]\n timeout_seconds=None, # type: Optional[float]\n n_jobs=1, # type: int\n storage=None, # type: storages.BaseStorage\n sampler=None, # type: samplers.BaseSampler\n pruner=None, # type: pruners.BasePruner\n study=None, # type: Study\n):\n # type: (...) 
-> Study\n storage = storage or storages.InMemoryStorage()\n study = study or create_new_study(storage=storage, sampler=sampler, pruner=pruner)\n study.run(func, n_trials, timeout_seconds, n_jobs)\n return study\n\n\n# TODO(akiba): implement me\ndef maximize():\n raise NotImplementedError\n\n\ndef create_new_study(storage, sampler=None, pruner=None):\n # type: (storages.BaseStorage, samplers.BaseSampler, pruners.BasePruner) -> Study\n study_uuid = storage.get_study_uuid_from_id(storage.create_new_study_id())\n return Study(study_uuid=study_uuid, storage=storage, sampler=sampler, pruner=pruner)\n", "path": "pfnopt/study.py"}]} |
gh_patches_debug_1092 | rasdani/github-patches | git_diff | mkdocs__mkdocs-130 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update requirements
While working with Markdown extensions (c.f. #74), I noticed that mkdocs' setup.py has its dependencies [pinned to specific patch versions](https://github.com/tomchristie/mkdocs/blob/master/setup.py#L18):
```
install_requires = [
'Jinja2==2.7.1',
'Markdown==2.3.1',
'PyYAML==3.10',
'watchdog==0.7.0',
'ghp-import==0.4.1'
]
```
Since these dependencies are slightly out of date (e.g., [Jinja2 is at 2.7.3](https://pypi.python.org/pypi/Jinja2) and [Markdown is at 2.4.1](https://pypi.python.org/pypi/Markdown)), it's hard to use mkdocs on a system with other software. Perhaps it's a shame that Python doesn't have npm-like dependency management, but that's the way it is: you'll get a setuptools error when trying to run mkdocs if any other package upgrades Jinja to a bugfix release.
How would the developers feel about loosening these version requirements? An idiomatic approach is to [just use `>=`](https://github.com/mitsuhiko/flask/blob/master/setup.py#L99).
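For illustration, the same list with the pins relaxed to lower bounds would look something like this (the bounds are just the versions currently pinned; whether any upper bounds are still wanted, e.g. for Markdown, is a separate question):

```python
install_requires = [
    'Jinja2>=2.7.1',
    'Markdown>=2.3.1',
    'PyYAML>=3.10',
    'watchdog>=0.7.0',
    'ghp-import>=0.4.1'
]
```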
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5 from setuptools import setup
6 import re
7 import os
8 import sys
9
10
11 name = 'mkdocs'
12 package = 'mkdocs'
13 description = 'In progress.'
14 url = 'http://www.mkdocs.org'
15 author = 'Tom Christie'
16 author_email = '[email protected]'
17 license = 'BSD'
18 install_requires = [
19 'Jinja2==2.7.1',
20 'Markdown==2.3.1',
21 'PyYAML==3.10',
22 'watchdog==0.7.0',
23 'ghp-import==0.4.1'
24 ]
25
26 long_description = """Work in progress."""
27
28
29 def get_version(package):
30 """
31 Return package version as listed in `__version__` in `init.py`.
32 """
33 init_py = open(os.path.join(package, '__init__.py')).read()
34 return re.search("^__version__ = ['\"]([^'\"]+)['\"]", init_py, re.MULTILINE).group(1)
35
36
37 def get_packages(package):
38 """
39 Return root package and all sub-packages.
40 """
41 return [dirpath
42 for dirpath, dirnames, filenames in os.walk(package)
43 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
44
45
46 def get_package_data(package):
47 """
48 Return all files under the root package, that are not in a
49 package themselves.
50 """
51 walk = [(dirpath.replace(package + os.sep, '', 1), filenames)
52 for dirpath, dirnames, filenames in os.walk(package)
53 if not os.path.exists(os.path.join(dirpath, '__init__.py'))]
54
55 filepaths = []
56 for base, filenames in walk:
57 filepaths.extend([os.path.join(base, filename)
58 for filename in filenames])
59 return {package: filepaths}
60
61
62 if sys.argv[-1] == 'publish':
63 os.system("python setup.py sdist upload")
64 args = {'version': get_version(package)}
65 print("You probably want to also tag the version now:")
66 print(" git tag -a %(version)s -m 'version %(version)s'" % args)
67 print(" git push --tags")
68 sys.exit()
69
70
71 setup(
72 name=name,
73 version=get_version(package),
74 url=url,
75 license=license,
76 description=description,
77 long_description=long_description,
78 author=author,
79 author_email=author_email,
80 packages=get_packages(package),
81 package_data=get_package_data(package),
82 install_requires=install_requires,
83 entry_points={
84 'console_scripts': [
85 'mkdocs = mkdocs.main:run_main',
86 ],
87 },
88 classifiers=[
89 'Development Status :: 5 - Production/Stable',
90 'Environment :: Console',
91 'Environment :: Web Environment',
92 'Intended Audience :: Developers',
93 'License :: OSI Approved :: BSD License',
94 'Operating System :: OS Independent',
95 'Programming Language :: Python',
96 'Programming Language :: Python :: 2',
97 'Programming Language :: Python :: 2.6',
98 'Programming Language :: Python :: 2.7',
99 'Programming Language :: Python :: 3',
100 'Programming Language :: Python :: 3.3',
101 'Programming Language :: Python :: 3.4',
102 'Topic :: Documentation',
103 'Topic :: Text Processing',
104 ]
105 )
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,11 +16,11 @@
author_email = '[email protected]'
license = 'BSD'
install_requires = [
- 'Jinja2==2.7.1',
- 'Markdown==2.3.1',
- 'PyYAML==3.10',
- 'watchdog==0.7.0',
- 'ghp-import==0.4.1'
+ 'Jinja2>=2.7.1',
+ 'Markdown>=2.3.1,<2.5',
+ 'PyYAML>=3.10',
+ 'watchdog>=0.7.0',
+ 'ghp-import>=0.4.1'
]
long_description = """Work in progress."""
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,11 +16,11 @@\n author_email = '[email protected]'\n license = 'BSD'\n install_requires = [\n- 'Jinja2==2.7.1',\n- 'Markdown==2.3.1',\n- 'PyYAML==3.10',\n- 'watchdog==0.7.0',\n- 'ghp-import==0.4.1'\n+ 'Jinja2>=2.7.1',\n+ 'Markdown>=2.3.1,<2.5',\n+ 'PyYAML>=3.10',\n+ 'watchdog>=0.7.0',\n+ 'ghp-import>=0.4.1'\n ]\n \n long_description = \"\"\"Work in progress.\"\"\"\n", "issue": "Update requirements\nWhile working with Markdown extensions (c.f. #74), I noticed that mkdocs' setup.py has its dependencies [pinned to specific patch versions](https://github.com/tomchristie/mkdocs/blob/master/setup.py#L18):\n\n```\ninstall_requires = [\n 'Jinja2==2.7.1',\n 'Markdown==2.3.1',\n 'PyYAML==3.10',\n 'watchdog==0.7.0',\n 'ghp-import==0.4.1'\n]\n```\n\nSince these dependencies are slightly out of date (e.g., [Jinja2 is at 2.7.3](https://pypi.python.org/pypi/Jinja2) and [Markdown is at 2.4.1](https://pypi.python.org/pypi/Markdown)), it's hard to use mkdocs on a system with other software. Perhaps it's a shame that Python doesn't have npm-like dependency management, but that's the way it is\u2014you'll get a setuptools when trying to run mkdocs error if any other package upgrades Jinja to a bugfix release.\n\nHow would the developers feel about loosening these version requirements? An idiomatic approach is to [just use `>=`](https://github.com/mitsuhiko/flask/blob/master/setup.py#L99).\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nname = 'mkdocs'\npackage = 'mkdocs'\ndescription = 'In progress.'\nurl = 'http://www.mkdocs.org'\nauthor = 'Tom Christie'\nauthor_email = '[email protected]'\nlicense = 'BSD'\ninstall_requires = [\n 'Jinja2==2.7.1',\n 'Markdown==2.3.1',\n 'PyYAML==3.10',\n 'watchdog==0.7.0',\n 'ghp-import==0.4.1'\n]\n\nlong_description = \"\"\"Work in progress.\"\"\"\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"^__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py, re.MULTILINE).group(1)\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\n\nif sys.argv[-1] == 'publish':\n os.system(\"python setup.py sdist upload\")\n args = {'version': get_version(package)}\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a %(version)s -m 'version %(version)s'\" % args)\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=name,\n version=get_version(package),\n url=url,\n license=license,\n description=description,\n long_description=long_description,\n author=author,\n author_email=author_email,\n packages=get_packages(package),\n 
package_data=get_package_data(package),\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.main:run_main',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nname = 'mkdocs'\npackage = 'mkdocs'\ndescription = 'In progress.'\nurl = 'http://www.mkdocs.org'\nauthor = 'Tom Christie'\nauthor_email = '[email protected]'\nlicense = 'BSD'\ninstall_requires = [\n 'Jinja2>=2.7.1',\n 'Markdown>=2.3.1,<2.5',\n 'PyYAML>=3.10',\n 'watchdog>=0.7.0',\n 'ghp-import>=0.4.1'\n]\n\nlong_description = \"\"\"Work in progress.\"\"\"\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"^__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py, re.MULTILINE).group(1)\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\n\nif sys.argv[-1] == 'publish':\n os.system(\"python setup.py sdist upload\")\n args = {'version': get_version(package)}\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a %(version)s -m 'version %(version)s'\" % args)\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=name,\n version=get_version(package),\n url=url,\n license=license,\n description=description,\n long_description=long_description,\n author=author,\n author_email=author_email,\n packages=get_packages(package),\n package_data=get_package_data(package),\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.main:run_main',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic 
:: Documentation',\n 'Topic :: Text Processing',\n ]\n)\n", "path": "setup.py"}]} |
gh_patches_debug_1093 | rasdani/github-patches | git_diff | ray-project__ray-9141 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tune][rllib] Windows: FileExistsError when running rllib tune job
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### What is the problem?
My script (available below) runs fine on Linux but does not fully run yet on Windows.
I can't tell exactly what went wrong, but the two major errors I see are:
`TypeError: len() of unsized object`
`FileExistsError: [WinError 183] Cannot create a file when that file already exists`
Full error log:
https://gist.github.com/juliusfrost/acbca090259610a176847e7026dd6d30
### Reproduction
Run on Windows OS
Install PyTorch
Install the latest `ray` and `rllib` versions
Install `atari-py` if necessary
Download `train_a2c.py` from https://gist.github.com/juliusfrost/61b8be67d33b9bc9ab1faf7ada9d2ae3
Run `python train_a2c.py BreakoutNoFrameskip-v4`
If you don't have a GPU, add `--gpus 0`
--- END ISSUE ---
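A note for reading the checkpointing code below: `[WinError 183]` is the Windows form of `FileExistsError`, and it is what `os.rename` raises on Windows when the destination already exists; POSIX `rename` silently replaces the target, which is why the same code can run cleanly on Linux. `os.replace` overwrites the destination on both platforms. A minimal sketch of the write-temp-then-swap pattern with the cross-platform call (this mirrors what `TrialRunner.checkpoint` does in the listing below, but is not necessarily the exact fix that was applied):

```python
import json
import os

state = {"iteration": 0}
tmp_file = "experiment_state.json.tmp"
final_file = "experiment_state.json"

with open(tmp_file, "w") as f:
    json.dump(state, f)

# os.rename(tmp_file, final_file) raises FileExistsError on Windows once
# final_file already exists (e.g. on the second checkpoint of a run);
# os.replace overwrites the destination on Windows and POSIX alike.
os.replace(tmp_file, final_file)
```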
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/tune/trial_runner.py`
Content:
```
1 import click
2 from datetime import datetime
3 import json
4 import logging
5 import os
6 import time
7 import traceback
8 import types
9
10 import ray.cloudpickle as cloudpickle
11 from ray.tune import TuneError
12 from ray.tune.stopper import NoopStopper
13 from ray.tune.progress_reporter import trial_progress_str
14 from ray.tune.ray_trial_executor import RayTrialExecutor
15 from ray.tune.result import (TIME_THIS_ITER_S, RESULT_DUPLICATE,
16 SHOULD_CHECKPOINT)
17 from ray.tune.syncer import get_cloud_syncer
18 from ray.tune.trial import Checkpoint, Trial
19 from ray.tune.schedulers import FIFOScheduler, TrialScheduler
20 from ray.tune.suggest import BasicVariantGenerator
21 from ray.tune.utils import warn_if_slow, flatten_dict
22 from ray.tune.web_server import TuneServer
23 from ray.utils import binary_to_hex, hex_to_binary
24
25 MAX_DEBUG_TRIALS = 20
26
27 logger = logging.getLogger(__name__)
28
29
30 def _find_newest_ckpt(ckpt_dir):
31 """Returns path to most recently modified checkpoint."""
32 full_paths = [
33 os.path.join(ckpt_dir, fname) for fname in os.listdir(ckpt_dir)
34 if fname.startswith("experiment_state") and fname.endswith(".json")
35 ]
36 return max(full_paths)
37
38
39 class _TuneFunctionEncoder(json.JSONEncoder):
40 def default(self, obj):
41 if isinstance(obj, types.FunctionType):
42 return self._to_cloudpickle(obj)
43 try:
44 return super(_TuneFunctionEncoder, self).default(obj)
45 except Exception:
46 logger.debug("Unable to encode. Falling back to cloudpickle.")
47 return self._to_cloudpickle(obj)
48
49 def _to_cloudpickle(self, obj):
50 return {
51 "_type": "CLOUDPICKLE_FALLBACK",
52 "value": binary_to_hex(cloudpickle.dumps(obj))
53 }
54
55
56 class _TuneFunctionDecoder(json.JSONDecoder):
57 def __init__(self, *args, **kwargs):
58 json.JSONDecoder.__init__(
59 self, object_hook=self.object_hook, *args, **kwargs)
60
61 def object_hook(self, obj):
62 if obj.get("_type") == "CLOUDPICKLE_FALLBACK":
63 return self._from_cloudpickle(obj)
64 return obj
65
66 def _from_cloudpickle(self, obj):
67 return cloudpickle.loads(hex_to_binary(obj["value"]))
68
69
70 class TrialRunner:
71 """A TrialRunner implements the event loop for scheduling trials on Ray.
72
73 .. code-block: python
74
75 runner = TrialRunner()
76 runner.add_trial(Trial(...))
77 runner.add_trial(Trial(...))
78 while not runner.is_finished():
79 runner.step()
80 print(runner.debug_string())
81
82 The main job of TrialRunner is scheduling trials to efficiently use cluster
83 resources, without overloading the cluster.
84
85 While Ray itself provides resource management for tasks and actors, this is
86 not sufficient when scheduling trials that may instantiate multiple actors.
87 This is because if insufficient resources are available, concurrent trials
88 could deadlock waiting for new resources to become available. Furthermore,
89 oversubscribing the cluster could degrade training performance, leading to
90 misleading benchmark results.
91
92 Args:
93 search_alg (SearchAlgorithm): SearchAlgorithm for generating
94 Trial objects.
95 scheduler (TrialScheduler): Defaults to FIFOScheduler.
96 launch_web_server (bool): Flag for starting TuneServer
97 local_checkpoint_dir (str): Path where
98 global checkpoints are stored and restored from.
99 remote_checkpoint_dir (str): Remote path where
100 global checkpoints are stored and restored from. Used
101 if `resume` == REMOTE.
102 stopper: Custom class for stopping whole experiments. See
103 ``Stopper``.
104 resume (str|False): see `tune.py:run`.
105 sync_to_cloud (func|str): See `tune.py:run`.
106 server_port (int): Port number for launching TuneServer.
107 fail_fast (bool): Finishes as soon as a trial fails if True.
108 verbose (bool): Flag for verbosity. If False, trial results
109 will not be output.
110 checkpoint_period (int): Trial runner checkpoint periodicity in
111 seconds. Defaults to 10.
112 trial_executor (TrialExecutor): Defaults to RayTrialExecutor.
113 """
114
115 CKPT_FILE_TMPL = "experiment_state-{}.json"
116 VALID_RESUME_TYPES = [True, "LOCAL", "REMOTE", "PROMPT"]
117
118 def __init__(self,
119 search_alg=None,
120 scheduler=None,
121 launch_web_server=False,
122 local_checkpoint_dir=None,
123 remote_checkpoint_dir=None,
124 sync_to_cloud=None,
125 stopper=None,
126 resume=False,
127 server_port=TuneServer.DEFAULT_PORT,
128 fail_fast=False,
129 verbose=True,
130 checkpoint_period=10,
131 trial_executor=None):
132 self._search_alg = search_alg or BasicVariantGenerator()
133 self._scheduler_alg = scheduler or FIFOScheduler()
134 self.trial_executor = trial_executor or RayTrialExecutor()
135
136 # For debugging, it may be useful to halt trials after some time has
137 # elapsed. TODO(ekl) consider exposing this in the API.
138 self._global_time_limit = float(
139 os.environ.get("TRIALRUNNER_WALLTIME_LIMIT", float("inf")))
140 self._total_time = 0
141 self._iteration = 0
142 self._has_errored = False
143 self._fail_fast = fail_fast
144 self._verbose = verbose
145
146 self._server = None
147 self._server_port = server_port
148 if launch_web_server:
149 self._server = TuneServer(self, self._server_port)
150
151 self._trials = []
152 self._cached_trial_decisions = {}
153 self._stop_queue = []
154 self._should_stop_experiment = False # used by TuneServer
155 self._local_checkpoint_dir = local_checkpoint_dir
156
157 if self._local_checkpoint_dir:
158 os.makedirs(self._local_checkpoint_dir, exist_ok=True)
159
160 self._remote_checkpoint_dir = remote_checkpoint_dir
161 self._syncer = get_cloud_syncer(local_checkpoint_dir,
162 remote_checkpoint_dir, sync_to_cloud)
163 self._stopper = stopper or NoopStopper()
164 self._resumed = False
165
166 if self._validate_resume(resume_type=resume):
167 try:
168 self.resume()
169 logger.info("Resuming trial.")
170 self._resumed = True
171 except Exception:
172 logger.exception(
173 "Runner restore failed. Restarting experiment.")
174 else:
175 logger.debug("Starting a new experiment.")
176
177 self._start_time = time.time()
178 self._last_checkpoint_time = -float("inf")
179 self._checkpoint_period = checkpoint_period
180 self._session_str = datetime.fromtimestamp(
181 self._start_time).strftime("%Y-%m-%d_%H-%M-%S")
182 self.checkpoint_file = None
183 if self._local_checkpoint_dir:
184 self.checkpoint_file = os.path.join(
185 self._local_checkpoint_dir,
186 TrialRunner.CKPT_FILE_TMPL.format(self._session_str))
187
188 @property
189 def scheduler_alg(self):
190 return self._scheduler_alg
191
192 def _validate_resume(self, resume_type):
193 """Checks whether to resume experiment.
194
195 Args:
196 resume_type: One of True, "REMOTE", "LOCAL", "PROMPT".
197 """
198 if not resume_type:
199 return False
200 assert resume_type in self.VALID_RESUME_TYPES, (
201 "resume_type {} is not one of {}".format(resume_type,
202 self.VALID_RESUME_TYPES))
203 # Not clear if we need this assertion, since we should always have a
204 # local checkpoint dir.
205 assert self._local_checkpoint_dir or self._remote_checkpoint_dir
206 if resume_type in [True, "LOCAL", "PROMPT"]:
207 if not self.checkpoint_exists(self._local_checkpoint_dir):
208 raise ValueError("Called resume when no checkpoint exists "
209 "in local directory.")
210 elif resume_type == "PROMPT":
211 if click.confirm("Resume from local directory?"):
212 return True
213
214 if resume_type in ["REMOTE", "PROMPT"]:
215 if resume_type == "PROMPT" and not click.confirm(
216 "Try downloading from remote directory?"):
217 return False
218 if not self._remote_checkpoint_dir:
219 raise ValueError(
220 "Called resume from remote without remote directory.")
221
222 # Try syncing down the upload directory.
223 logger.info("Downloading from %s", self._remote_checkpoint_dir)
224 # TODO(ujvl): Note that this syncs down the entire directory,
225 # which may also contain trial checkpoints. We should selectively
226 # sync the necessary files instead.
227 self._syncer.sync_down_if_needed()
228 self._syncer.wait()
229
230 if not self.checkpoint_exists(self._local_checkpoint_dir):
231 raise ValueError("Called resume when no checkpoint exists "
232 "in remote or local directory.")
233 return True
234
235 @classmethod
236 def checkpoint_exists(cls, directory):
237 if not os.path.exists(directory):
238 return False
239 return any(
240 (fname.startswith("experiment_state") and fname.endswith(".json"))
241 for fname in os.listdir(directory))
242
243 def add_experiment(self, experiment):
244 if not self._resumed:
245 self._search_alg.add_configurations([experiment])
246 else:
247 logger.info("TrialRunner resumed, ignoring new add_experiment.")
248
249 def checkpoint(self, force=False):
250 """Saves execution state to `self._local_checkpoint_dir`.
251
252 Overwrites the current session checkpoint, which starts when self
253 is instantiated. Throttle depends on self._checkpoint_period.
254
255 Args:
256 force (bool): Forces a checkpoint despite checkpoint_period.
257 """
258 if not self._local_checkpoint_dir:
259 return
260 now = time.time()
261 if now - self._last_checkpoint_time < self._checkpoint_period and (
262 not force):
263 return
264 self._last_checkpoint_time = now
265 runner_state = {
266 "checkpoints": list(
267 self.trial_executor.get_checkpoints().values()),
268 "runner_data": self.__getstate__(),
269 "stats": {
270 "start_time": self._start_time,
271 "timestamp": self._last_checkpoint_time
272 }
273 }
274 tmp_file_name = os.path.join(self._local_checkpoint_dir,
275 ".tmp_checkpoint")
276 with open(tmp_file_name, "w") as f:
277 json.dump(runner_state, f, indent=2, cls=_TuneFunctionEncoder)
278
279 os.rename(tmp_file_name, self.checkpoint_file)
280 if force:
281 self._syncer.sync_up()
282 else:
283 self._syncer.sync_up_if_needed()
284 return self._local_checkpoint_dir
285
286 def resume(self):
287 """Resumes all checkpointed trials from previous run.
288
289 Requires user to manually re-register their objects. Also stops
290 all ongoing trials.
291 """
292 newest_ckpt_path = _find_newest_ckpt(self._local_checkpoint_dir)
293 with open(newest_ckpt_path, "r") as f:
294 runner_state = json.load(f, cls=_TuneFunctionDecoder)
295 self.checkpoint_file = newest_ckpt_path
296
297 logger.warning("".join([
298 "Attempting to resume experiment from {}. ".format(
299 self._local_checkpoint_dir), "This feature is experimental, "
300 "and may not work with all search algorithms. ",
301 "This will ignore any new changes to the specification."
302 ]))
303
304 self.__setstate__(runner_state["runner_data"])
305
306 trials = []
307 for trial_cp in runner_state["checkpoints"]:
308 new_trial = Trial(trial_cp["trainable_name"])
309 new_trial.__setstate__(trial_cp)
310 trials += [new_trial]
311 for trial in sorted(
312 trials, key=lambda t: t.last_update_time, reverse=True):
313 self.add_trial(trial)
314
315 def is_finished(self):
316 """Returns whether all trials have finished running."""
317 if self._total_time > self._global_time_limit:
318 logger.warning("Exceeded global time limit {} / {}".format(
319 self._total_time, self._global_time_limit))
320 return True
321
322 trials_done = all(trial.is_finished() for trial in self._trials)
323 return trials_done and self._search_alg.is_finished()
324
325 def step(self):
326 """Runs one step of the trial event loop.
327
328 Callers should typically run this method repeatedly in a loop. They
329 may inspect or modify the runner's state in between calls to step().
330 """
331 if self.is_finished():
332 raise TuneError("Called step when all trials finished?")
333 with warn_if_slow("on_step_begin"):
334 self.trial_executor.on_step_begin(self)
335 next_trial = self._get_next_trial() # blocking
336 if next_trial is not None:
337 with warn_if_slow("start_trial"):
338 self.trial_executor.start_trial(next_trial)
339 elif self.trial_executor.get_running_trials():
340 self._process_events() # blocking
341 else:
342 self.trial_executor.on_no_available_trials(self)
343
344 self._stop_experiment_if_needed()
345
346 try:
347 with warn_if_slow("experiment_checkpoint"):
348 self.checkpoint()
349 except Exception:
350 logger.exception("Trial Runner checkpointing failed.")
351 self._iteration += 1
352
353 if self._server:
354 with warn_if_slow("server"):
355 self._process_stop_requests()
356
357 if self.is_finished():
358 self._server.shutdown()
359 with warn_if_slow("on_step_end"):
360 self.trial_executor.on_step_end(self)
361
362 def get_trial(self, tid):
363 trial = [t for t in self._trials if t.trial_id == tid]
364 return trial[0] if trial else None
365
366 def get_trials(self):
367 """Returns the list of trials managed by this TrialRunner.
368
369 Note that the caller usually should not mutate trial state directly.
370 """
371 return self._trials
372
373 def add_trial(self, trial):
374 """Adds a new trial to this TrialRunner.
375
376 Trials may be added at any time.
377
378 Args:
379 trial (Trial): Trial to queue.
380 """
381 trial.set_verbose(self._verbose)
382 self._trials.append(trial)
383 with warn_if_slow("scheduler.on_trial_add"):
384 self._scheduler_alg.on_trial_add(self, trial)
385 self.trial_executor.try_checkpoint_metadata(trial)
386
387 def debug_string(self, delim="\n"):
388 result_keys = [
389 list(t.last_result) for t in self.get_trials() if t.last_result
390 ]
391 metrics = set().union(*result_keys)
392 messages = [
393 self._scheduler_alg.debug_string(),
394 self.trial_executor.debug_string(),
395 trial_progress_str(self.get_trials(), metrics),
396 ]
397 return delim.join(messages)
398
399 def has_resources(self, resources):
400 """Returns whether this runner has at least the specified resources."""
401 return self.trial_executor.has_resources(resources)
402
403 def _stop_experiment_if_needed(self):
404 """Stops all trials."""
405 fail_fast = self._fail_fast and self._has_errored
406 if (self._stopper.stop_all() or fail_fast
407 or self._should_stop_experiment):
408 self._search_alg.set_finished()
409 [
410 self.trial_executor.stop_trial(t) for t in self._trials
411 if t.status is not Trial.ERROR
412 ]
413
414 def _get_next_trial(self):
415 """Replenishes queue.
416
417 Blocks if all trials queued have finished, but search algorithm is
418 still not finished.
419 """
420 trials_done = all(trial.is_finished() for trial in self._trials)
421 wait_for_trial = trials_done and not self._search_alg.is_finished()
422 self._update_trial_queue(blocking=wait_for_trial)
423 with warn_if_slow("choose_trial_to_run"):
424 trial = self._scheduler_alg.choose_trial_to_run(self)
425 return trial
426
427 def _process_events(self):
428 failed_trial = self.trial_executor.get_next_failed_trial()
429 if failed_trial:
430 error_msg = (
431 "{} (IP: {}) detected as stale. This is likely because the "
432 "node was lost").format(failed_trial, failed_trial.node_ip)
433 logger.info(error_msg)
434 with warn_if_slow("process_failed_trial"):
435 self._process_trial_failure(failed_trial, error_msg=error_msg)
436 else:
437 # TODO(ujvl): Consider combining get_next_available_trial and
438 # fetch_result functionality so that we don't timeout on fetch.
439 trial = self.trial_executor.get_next_available_trial() # blocking
440 if trial.is_restoring:
441 with warn_if_slow("process_trial_restore"):
442 self._process_trial_restore(trial)
443 elif trial.is_saving:
444 with warn_if_slow("process_trial_save") as profile:
445 self._process_trial_save(trial)
446 if profile.too_slow and trial.sync_on_checkpoint:
447 # TODO(ujvl): Suggest using DurableTrainable once
448 # API has converged.
449 logger.warning(
450 "Consider turning off forced head-worker trial "
451 "checkpoint syncs by setting sync_on_checkpoint=False"
452 ". Note that this may result in faulty trial "
453 "restoration if a failure occurs while the checkpoint "
454 "is being synced from the worker to the head node.")
455 else:
456 with warn_if_slow("process_trial"):
457 self._process_trial(trial)
458
459 def _process_trial(self, trial):
460 """Processes a trial result.
461
462 Fetches the trial's latest result and makes a scheduling decision
463 regarding its next action. If a checkpoint is taken, the decided
464 action is cached and acted on only after the checkpoint is later
465 processed (see `_process_trial_save`). Otherwise the decision is
466 acted on immediately.
467
468 Args:
469 trial (Trial): Trial with a result ready to be processed.
470 """
471 try:
472 result = self.trial_executor.fetch_result(trial)
473
474 is_duplicate = RESULT_DUPLICATE in result
475 force_checkpoint = result.get(SHOULD_CHECKPOINT, False)
476 # TrialScheduler and SearchAlgorithm still receive a
477 # notification because there may be special handling for
478 # the `on_trial_complete` hook.
479 if is_duplicate:
480 logger.debug("Trial finished without logging 'done'.")
481 result = trial.last_result
482 result.update(done=True)
483
484 self._total_time += result.get(TIME_THIS_ITER_S, 0)
485
486 flat_result = flatten_dict(result)
487 if self._stopper(trial.trial_id,
488 result) or trial.should_stop(flat_result):
489 # Hook into scheduler
490 self._scheduler_alg.on_trial_complete(self, trial, flat_result)
491 self._search_alg.on_trial_complete(
492 trial.trial_id, result=flat_result)
493 decision = TrialScheduler.STOP
494 else:
495 with warn_if_slow("scheduler.on_trial_result"):
496 decision = self._scheduler_alg.on_trial_result(
497 self, trial, flat_result)
498 with warn_if_slow("search_alg.on_trial_result"):
499 self._search_alg.on_trial_result(trial.trial_id,
500 flat_result)
501 if decision == TrialScheduler.STOP:
502 with warn_if_slow("search_alg.on_trial_complete"):
503 self._search_alg.on_trial_complete(
504 trial.trial_id, result=flat_result)
505
506 if not is_duplicate:
507 trial.update_last_result(
508 result, terminate=(decision == TrialScheduler.STOP))
509
510 # Checkpoints to disk. This should be checked even if
511 # the scheduler decision is STOP or PAUSE. Note that
512 # PAUSE only checkpoints to memory and does not update
513 # the global checkpoint state.
514 self._checkpoint_trial_if_needed(trial, force=force_checkpoint)
515
516 if trial.is_saving:
517 # Cache decision to execute on after the save is processed.
518 # This prevents changing the trial's state or kicking off
519 # another training step prematurely.
520 self._cached_trial_decisions[trial.trial_id] = decision
521 else:
522 self._execute_action(trial, decision)
523 except Exception:
524 logger.exception("Trial %s: Error processing event.", trial)
525 self._process_trial_failure(trial, traceback.format_exc())
526
527 def _process_trial_save(self, trial):
528 """Processes a trial save.
529
530 Acts on the decision cached during the last `_process_trial` call.
531
532 Args:
533 trial (Trial): Trial being saved.
534 """
535 logger.debug("Trial %s: Processing trial save.", trial)
536 checkpoint_value = None
537
538 try:
539 checkpoint_value = self.trial_executor.fetch_result(trial)
540 except Exception:
541 logger.exception("Trial %s: Error processing result.", trial)
542 self._process_trial_failure(trial, traceback.format_exc())
543
544 if checkpoint_value:
545 try:
546 trial.saving_to.value = checkpoint_value
547 trial.on_checkpoint(trial.saving_to)
548 self.trial_executor.try_checkpoint_metadata(trial)
549 except Exception:
550 logger.exception("Trial %s: Error handling checkpoint %s",
551 trial, checkpoint_value)
552
553 trial.saving_to = None
554 decision = self._cached_trial_decisions.pop(trial.trial_id, None)
555 if decision and checkpoint_value:
556 self._execute_action(trial, decision)
557
558 def _process_trial_restore(self, trial):
559 """Processes a trial restore.
560
561 Args:
562 trial (Trial): Trial being restored.
563 """
564 logger.debug("Trial %s: Processing trial restore.", trial)
565 try:
566 self.trial_executor.fetch_result(trial)
567 trial.on_restore()
568 logger.debug("Trial %s: Restore processed successfully", trial)
569 self.trial_executor.set_status(trial, Trial.RUNNING)
570 self.trial_executor.continue_training(trial)
571 except Exception:
572 logger.exception("Trial %s: Error processing restore.", trial)
573 self._process_trial_failure(trial, traceback.format_exc())
574
575 def _process_trial_failure(self, trial, error_msg):
576 """Handle trial failure.
577
578 Attempt trial recovery if possible, clean up state otherwise.
579
580 Args:
581 trial (Trial): Failed trial.
582 error_msg (str): Error message prior to invoking this method.
583 """
584 self._has_errored = True
585 if trial.status == Trial.RUNNING:
586 if trial.should_recover():
587 self._try_recover(trial, error_msg)
588 else:
589 self._scheduler_alg.on_trial_error(self, trial)
590 self._search_alg.on_trial_complete(trial.trial_id, error=True)
591 self.trial_executor.stop_trial(
592 trial, error=True, error_msg=error_msg)
593
594 def _execute_action(self, trial, decision):
595 """Executes action based on decision.
596
597 Args:
598 trial (Trial): Trial to act on.
599 decision (str): Scheduling decision to undertake.
600 """
601 if decision == TrialScheduler.CONTINUE:
602 self.trial_executor.continue_training(trial)
603 elif decision == TrialScheduler.PAUSE:
604 self.trial_executor.pause_trial(trial)
605 elif decision == TrialScheduler.STOP:
606 self.trial_executor.export_trial_if_needed(trial)
607 self.trial_executor.stop_trial(trial)
608 else:
609 raise ValueError("Invalid decision: {}".format(decision))
610
611 def _checkpoint_trial_if_needed(self, trial, force=False):
612 """Checkpoints trial based off trial.last_result."""
613 if trial.should_checkpoint() or force:
614 # Save trial runtime if possible.
615 if trial.runner:
616 self.trial_executor.save(trial, storage=Checkpoint.PERSISTENT)
617
618 def _try_recover(self, trial, error_msg):
619 """Tries to recover trial.
620
621 Notifies SearchAlgorithm and Scheduler if failure to recover.
622
623 Args:
624 trial (Trial): Trial to recover.
625 error_msg (str): Error message from prior to invoking this method.
626 """
627 if trial.is_restoring:
628 # Restore was unsuccessful, try again without checkpoint.
629 trial.clear_checkpoint()
630 self.trial_executor.stop_trial(
631 trial,
632 error=error_msg is not None,
633 error_msg=error_msg,
634 stop_logger=False)
635 trial.result_logger.flush()
636 if self.trial_executor.has_resources(trial.resources):
637 logger.info(
638 "Trial %s: Attempting to restore "
639 "trial state from last checkpoint.", trial)
640 self.trial_executor.start_trial(trial)
641 if trial.status == Trial.ERROR:
642 logger.exception(
643 "Trial %s: Error restoring trial from checkpoint, abort.",
644 trial)
645 self._scheduler_alg.on_trial_error(self, trial)
646 self._search_alg.on_trial_complete(trial.trial_id, error=True)
647 else:
648 logger.debug("Trial %s: Restore dispatched correctly.", trial)
649 else:
650 logger.debug("Trial %s: Notifying Scheduler and requeueing.",
651 trial)
652 self._requeue_trial(trial)
653
654 def _requeue_trial(self, trial):
655 """Notification to TrialScheduler and requeue trial.
656
657 This does not notify the SearchAlgorithm because the function
658 evaluation is still in progress.
659
660 """
661 self._scheduler_alg.on_trial_error(self, trial)
662 self.trial_executor.set_status(trial, Trial.PENDING)
663
664 # TODO(rliaw): Right now, this pushes the trial to the end of queue
665 # because restoration can be expensive. However, this is not
666 # ideal since it just hides the issue - a better fix would
667 # be to use an actor table to detect the IP of the Trainable
668 # and rsync the files there.
669 # See https://github.com/ray-project/ray/issues/5168
670 self._trials.pop(self._trials.index(trial))
671 self._trials.append(trial)
672
673 with warn_if_slow("scheduler.on_trial_add"):
674 self._scheduler_alg.on_trial_add(self, trial)
675
676 def _update_trial_queue(self, blocking=False, timeout=600):
677 """Adds next trials to queue if possible.
678
679 Note that the timeout is currently unexposed to the user.
680
681 Args:
682 blocking (bool): Blocks until either a trial is available
683 or is_finished (timeout or search algorithm finishes).
684 timeout (int): Seconds before blocking times out.
685 """
686 trials = self._search_alg.next_trials()
687 if blocking and not trials:
688 start = time.time()
689 # Checking `is_finished` instead of _search_alg.is_finished
690 # is fine because blocking only occurs if all trials are
691 # finished and search_algorithm is not yet finished
692 while (not trials and not self.is_finished()
693 and time.time() - start < timeout):
694 logger.info("Blocking for next trial...")
695 trials = self._search_alg.next_trials()
696 time.sleep(1)
697
698 for trial in trials:
699 self.add_trial(trial)
700
701 def request_stop_trial(self, trial):
702 self._stop_queue.append(trial)
703
704 def request_stop_experiment(self):
705 self._should_stop_experiment = True
706
707 def _process_stop_requests(self):
708 while self._stop_queue:
709 t = self._stop_queue.pop()
710 self.stop_trial(t)
711
712 def stop_trial(self, trial):
713 """Stops trial.
714
715 Trials may be stopped at any time. If trial is in state PENDING
716 or PAUSED, calls `on_trial_remove` for scheduler and
717         `on_trial_complete()` for search_alg.
718 Otherwise waits for result for the trial and calls
719 `on_trial_complete` for scheduler and search_alg if RUNNING.
720 """
721 error = False
722 error_msg = None
723
724 if trial.status in [Trial.ERROR, Trial.TERMINATED]:
725 return
726 elif trial.status in [Trial.PENDING, Trial.PAUSED]:
727 self._scheduler_alg.on_trial_remove(self, trial)
728 self._search_alg.on_trial_complete(trial.trial_id)
729 elif trial.status is Trial.RUNNING:
730 try:
731 result = self.trial_executor.fetch_result(trial)
732 trial.update_last_result(result, terminate=True)
733 self._scheduler_alg.on_trial_complete(self, trial, result)
734 self._search_alg.on_trial_complete(
735 trial.trial_id, result=result)
736 except Exception:
737 error_msg = traceback.format_exc()
738 logger.exception("Error processing event.")
739 self._scheduler_alg.on_trial_error(self, trial)
740 self._search_alg.on_trial_complete(trial.trial_id, error=True)
741 error = True
742
743 self.trial_executor.stop_trial(trial, error=error, error_msg=error_msg)
744
745 def cleanup_trials(self):
746 self.trial_executor.cleanup()
747
748 def __getstate__(self):
749 """Gets state for trial.
750
751         Note that this is not used as a pickling override, as it
752         does not have all fields.
753 """
754 state = self.__dict__.copy()
755 for k in [
756 "_trials",
757 "_stop_queue",
758 "_server",
759 "_search_alg",
760 "_scheduler_alg",
761 "trial_executor",
762 "_syncer",
763 ]:
764 del state[k]
765 state["launch_web_server"] = bool(self._server)
766 return state
767
768 def __setstate__(self, state):
769 launch_web_server = state.pop("launch_web_server")
770
771         # Use session_str from previous checkpoint if it does not exist
772 session_str = state.pop("_session_str")
773 self.__dict__.setdefault("_session_str", session_str)
774         # Use start_time from previous checkpoint if it does not exist
775 start_time = state.pop("_start_time")
776 self.__dict__.setdefault("_start_time", start_time)
777
778 self.__dict__.update(state)
779 if launch_web_server:
780 self._server = TuneServer(self, self._server_port)
781
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/ray/tune/trial_runner.py b/python/ray/tune/trial_runner.py
--- a/python/ray/tune/trial_runner.py
+++ b/python/ray/tune/trial_runner.py
@@ -276,7 +276,7 @@
with open(tmp_file_name, "w") as f:
json.dump(runner_state, f, indent=2, cls=_TuneFunctionEncoder)
- os.rename(tmp_file_name, self.checkpoint_file)
+ os.replace(tmp_file_name, self.checkpoint_file)
if force:
self._syncer.sync_up()
else:
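
For context on why this one-line patch resolves the reported `FileExistsError`: on Windows, `os.rename` raises an error when the destination path already exists, while `os.replace` (available since Python 3.3) silently overwrites the destination on both POSIX and Windows. The sketch below is illustrative only and uses hypothetical temporary paths as stand-ins for the runner's `.tmp_checkpoint` and `experiment_state-*.json` files; it is not part of the dataset fields above.

```python
# Minimal sketch of the rename/replace difference on an existing destination.
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, ".tmp_checkpoint")
    dst = os.path.join(d, "experiment_state-2020-01-01_00-00-00.json")
    for path in (src, dst):
        with open(path, "w") as f:
            f.write("{}")
    # os.rename(src, dst) would raise FileExistsError here on Windows,
    # because dst already exists; os.replace succeeds on all platforms.
    os.replace(src, dst)
    assert not os.path.exists(src) and os.path.exists(dst)
```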
| {"golden_diff": "diff --git a/python/ray/tune/trial_runner.py b/python/ray/tune/trial_runner.py\n--- a/python/ray/tune/trial_runner.py\n+++ b/python/ray/tune/trial_runner.py\n@@ -276,7 +276,7 @@\n with open(tmp_file_name, \"w\") as f:\n json.dump(runner_state, f, indent=2, cls=_TuneFunctionEncoder)\n \n- os.rename(tmp_file_name, self.checkpoint_file)\n+ os.replace(tmp_file_name, self.checkpoint_file)\n if force:\n self._syncer.sync_up()\n else:\n", "issue": "[tune][rllib] Windows: FileExistsError when running rllib tune job\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\nMy script (available below) runs fine on Linux but does not fully run yet on Windows.\r\nI can't tell exactly what went wrong, but the two major errors I see is:\r\n`TypeError: len() of unsized object`\r\n`FileExistsError: [WinError 183] Cannot create a file when that file already exists`\r\n\r\nFull error log:\r\nhttps://gist.github.com/juliusfrost/acbca090259610a176847e7026dd6d30\r\n\r\n### Reproduction\r\nRun on Windows OS\r\nInstall pytorch\r\nInstall the latest `ray` and `rllib` versions\r\nInstall `atari-py` if necessary\r\nDownload `train_a2c.py` from https://gist.github.com/juliusfrost/61b8be67d33b9bc9ab1faf7ada9d2ae3\r\nRun `python train_a2c.py BreakoutNoFrameskip-v4`\r\nif you don't have a gpu add `--gpus 0`\r\n\n", "before_files": [{"content": "import click\nfrom datetime import datetime\nimport json\nimport logging\nimport os\nimport time\nimport traceback\nimport types\n\nimport ray.cloudpickle as cloudpickle\nfrom ray.tune import TuneError\nfrom ray.tune.stopper import NoopStopper\nfrom ray.tune.progress_reporter import trial_progress_str\nfrom ray.tune.ray_trial_executor import RayTrialExecutor\nfrom ray.tune.result import (TIME_THIS_ITER_S, RESULT_DUPLICATE,\n SHOULD_CHECKPOINT)\nfrom ray.tune.syncer import get_cloud_syncer\nfrom ray.tune.trial import Checkpoint, Trial\nfrom ray.tune.schedulers import FIFOScheduler, TrialScheduler\nfrom ray.tune.suggest import BasicVariantGenerator\nfrom ray.tune.utils import warn_if_slow, flatten_dict\nfrom ray.tune.web_server import TuneServer\nfrom ray.utils import binary_to_hex, hex_to_binary\n\nMAX_DEBUG_TRIALS = 20\n\nlogger = logging.getLogger(__name__)\n\n\ndef _find_newest_ckpt(ckpt_dir):\n \"\"\"Returns path to most recently modified checkpoint.\"\"\"\n full_paths = [\n os.path.join(ckpt_dir, fname) for fname in os.listdir(ckpt_dir)\n if fname.startswith(\"experiment_state\") and fname.endswith(\".json\")\n ]\n return max(full_paths)\n\n\nclass _TuneFunctionEncoder(json.JSONEncoder):\n def default(self, obj):\n if isinstance(obj, types.FunctionType):\n return self._to_cloudpickle(obj)\n try:\n return super(_TuneFunctionEncoder, self).default(obj)\n except Exception:\n logger.debug(\"Unable to encode. 
Falling back to cloudpickle.\")\n return self._to_cloudpickle(obj)\n\n def _to_cloudpickle(self, obj):\n return {\n \"_type\": \"CLOUDPICKLE_FALLBACK\",\n \"value\": binary_to_hex(cloudpickle.dumps(obj))\n }\n\n\nclass _TuneFunctionDecoder(json.JSONDecoder):\n def __init__(self, *args, **kwargs):\n json.JSONDecoder.__init__(\n self, object_hook=self.object_hook, *args, **kwargs)\n\n def object_hook(self, obj):\n if obj.get(\"_type\") == \"CLOUDPICKLE_FALLBACK\":\n return self._from_cloudpickle(obj)\n return obj\n\n def _from_cloudpickle(self, obj):\n return cloudpickle.loads(hex_to_binary(obj[\"value\"]))\n\n\nclass TrialRunner:\n \"\"\"A TrialRunner implements the event loop for scheduling trials on Ray.\n\n .. code-block: python\n\n runner = TrialRunner()\n runner.add_trial(Trial(...))\n runner.add_trial(Trial(...))\n while not runner.is_finished():\n runner.step()\n print(runner.debug_string())\n\n The main job of TrialRunner is scheduling trials to efficiently use cluster\n resources, without overloading the cluster.\n\n While Ray itself provides resource management for tasks and actors, this is\n not sufficient when scheduling trials that may instantiate multiple actors.\n This is because if insufficient resources are available, concurrent trials\n could deadlock waiting for new resources to become available. Furthermore,\n oversubscribing the cluster could degrade training performance, leading to\n misleading benchmark results.\n\n Args:\n search_alg (SearchAlgorithm): SearchAlgorithm for generating\n Trial objects.\n scheduler (TrialScheduler): Defaults to FIFOScheduler.\n launch_web_server (bool): Flag for starting TuneServer\n local_checkpoint_dir (str): Path where\n global checkpoints are stored and restored from.\n remote_checkpoint_dir (str): Remote path where\n global checkpoints are stored and restored from. Used\n if `resume` == REMOTE.\n stopper: Custom class for stopping whole experiments. See\n ``Stopper``.\n resume (str|False): see `tune.py:run`.\n sync_to_cloud (func|str): See `tune.py:run`.\n server_port (int): Port number for launching TuneServer.\n fail_fast (bool): Finishes as soon as a trial fails if True.\n verbose (bool): Flag for verbosity. If False, trial results\n will not be output.\n checkpoint_period (int): Trial runner checkpoint periodicity in\n seconds. Defaults to 10.\n trial_executor (TrialExecutor): Defaults to RayTrialExecutor.\n \"\"\"\n\n CKPT_FILE_TMPL = \"experiment_state-{}.json\"\n VALID_RESUME_TYPES = [True, \"LOCAL\", \"REMOTE\", \"PROMPT\"]\n\n def __init__(self,\n search_alg=None,\n scheduler=None,\n launch_web_server=False,\n local_checkpoint_dir=None,\n remote_checkpoint_dir=None,\n sync_to_cloud=None,\n stopper=None,\n resume=False,\n server_port=TuneServer.DEFAULT_PORT,\n fail_fast=False,\n verbose=True,\n checkpoint_period=10,\n trial_executor=None):\n self._search_alg = search_alg or BasicVariantGenerator()\n self._scheduler_alg = scheduler or FIFOScheduler()\n self.trial_executor = trial_executor or RayTrialExecutor()\n\n # For debugging, it may be useful to halt trials after some time has\n # elapsed. 
TODO(ekl) consider exposing this in the API.\n self._global_time_limit = float(\n os.environ.get(\"TRIALRUNNER_WALLTIME_LIMIT\", float(\"inf\")))\n self._total_time = 0\n self._iteration = 0\n self._has_errored = False\n self._fail_fast = fail_fast\n self._verbose = verbose\n\n self._server = None\n self._server_port = server_port\n if launch_web_server:\n self._server = TuneServer(self, self._server_port)\n\n self._trials = []\n self._cached_trial_decisions = {}\n self._stop_queue = []\n self._should_stop_experiment = False # used by TuneServer\n self._local_checkpoint_dir = local_checkpoint_dir\n\n if self._local_checkpoint_dir:\n os.makedirs(self._local_checkpoint_dir, exist_ok=True)\n\n self._remote_checkpoint_dir = remote_checkpoint_dir\n self._syncer = get_cloud_syncer(local_checkpoint_dir,\n remote_checkpoint_dir, sync_to_cloud)\n self._stopper = stopper or NoopStopper()\n self._resumed = False\n\n if self._validate_resume(resume_type=resume):\n try:\n self.resume()\n logger.info(\"Resuming trial.\")\n self._resumed = True\n except Exception:\n logger.exception(\n \"Runner restore failed. Restarting experiment.\")\n else:\n logger.debug(\"Starting a new experiment.\")\n\n self._start_time = time.time()\n self._last_checkpoint_time = -float(\"inf\")\n self._checkpoint_period = checkpoint_period\n self._session_str = datetime.fromtimestamp(\n self._start_time).strftime(\"%Y-%m-%d_%H-%M-%S\")\n self.checkpoint_file = None\n if self._local_checkpoint_dir:\n self.checkpoint_file = os.path.join(\n self._local_checkpoint_dir,\n TrialRunner.CKPT_FILE_TMPL.format(self._session_str))\n\n @property\n def scheduler_alg(self):\n return self._scheduler_alg\n\n def _validate_resume(self, resume_type):\n \"\"\"Checks whether to resume experiment.\n\n Args:\n resume_type: One of True, \"REMOTE\", \"LOCAL\", \"PROMPT\".\n \"\"\"\n if not resume_type:\n return False\n assert resume_type in self.VALID_RESUME_TYPES, (\n \"resume_type {} is not one of {}\".format(resume_type,\n self.VALID_RESUME_TYPES))\n # Not clear if we need this assertion, since we should always have a\n # local checkpoint dir.\n assert self._local_checkpoint_dir or self._remote_checkpoint_dir\n if resume_type in [True, \"LOCAL\", \"PROMPT\"]:\n if not self.checkpoint_exists(self._local_checkpoint_dir):\n raise ValueError(\"Called resume when no checkpoint exists \"\n \"in local directory.\")\n elif resume_type == \"PROMPT\":\n if click.confirm(\"Resume from local directory?\"):\n return True\n\n if resume_type in [\"REMOTE\", \"PROMPT\"]:\n if resume_type == \"PROMPT\" and not click.confirm(\n \"Try downloading from remote directory?\"):\n return False\n if not self._remote_checkpoint_dir:\n raise ValueError(\n \"Called resume from remote without remote directory.\")\n\n # Try syncing down the upload directory.\n logger.info(\"Downloading from %s\", self._remote_checkpoint_dir)\n # TODO(ujvl): Note that this syncs down the entire directory,\n # which may also contain trial checkpoints. 
We should selectively\n # sync the necessary files instead.\n self._syncer.sync_down_if_needed()\n self._syncer.wait()\n\n if not self.checkpoint_exists(self._local_checkpoint_dir):\n raise ValueError(\"Called resume when no checkpoint exists \"\n \"in remote or local directory.\")\n return True\n\n @classmethod\n def checkpoint_exists(cls, directory):\n if not os.path.exists(directory):\n return False\n return any(\n (fname.startswith(\"experiment_state\") and fname.endswith(\".json\"))\n for fname in os.listdir(directory))\n\n def add_experiment(self, experiment):\n if not self._resumed:\n self._search_alg.add_configurations([experiment])\n else:\n logger.info(\"TrialRunner resumed, ignoring new add_experiment.\")\n\n def checkpoint(self, force=False):\n \"\"\"Saves execution state to `self._local_checkpoint_dir`.\n\n Overwrites the current session checkpoint, which starts when self\n is instantiated. Throttle depends on self._checkpoint_period.\n\n Args:\n force (bool): Forces a checkpoint despite checkpoint_period.\n \"\"\"\n if not self._local_checkpoint_dir:\n return\n now = time.time()\n if now - self._last_checkpoint_time < self._checkpoint_period and (\n not force):\n return\n self._last_checkpoint_time = now\n runner_state = {\n \"checkpoints\": list(\n self.trial_executor.get_checkpoints().values()),\n \"runner_data\": self.__getstate__(),\n \"stats\": {\n \"start_time\": self._start_time,\n \"timestamp\": self._last_checkpoint_time\n }\n }\n tmp_file_name = os.path.join(self._local_checkpoint_dir,\n \".tmp_checkpoint\")\n with open(tmp_file_name, \"w\") as f:\n json.dump(runner_state, f, indent=2, cls=_TuneFunctionEncoder)\n\n os.rename(tmp_file_name, self.checkpoint_file)\n if force:\n self._syncer.sync_up()\n else:\n self._syncer.sync_up_if_needed()\n return self._local_checkpoint_dir\n\n def resume(self):\n \"\"\"Resumes all checkpointed trials from previous run.\n\n Requires user to manually re-register their objects. Also stops\n all ongoing trials.\n \"\"\"\n newest_ckpt_path = _find_newest_ckpt(self._local_checkpoint_dir)\n with open(newest_ckpt_path, \"r\") as f:\n runner_state = json.load(f, cls=_TuneFunctionDecoder)\n self.checkpoint_file = newest_ckpt_path\n\n logger.warning(\"\".join([\n \"Attempting to resume experiment from {}. \".format(\n self._local_checkpoint_dir), \"This feature is experimental, \"\n \"and may not work with all search algorithms. \",\n \"This will ignore any new changes to the specification.\"\n ]))\n\n self.__setstate__(runner_state[\"runner_data\"])\n\n trials = []\n for trial_cp in runner_state[\"checkpoints\"]:\n new_trial = Trial(trial_cp[\"trainable_name\"])\n new_trial.__setstate__(trial_cp)\n trials += [new_trial]\n for trial in sorted(\n trials, key=lambda t: t.last_update_time, reverse=True):\n self.add_trial(trial)\n\n def is_finished(self):\n \"\"\"Returns whether all trials have finished running.\"\"\"\n if self._total_time > self._global_time_limit:\n logger.warning(\"Exceeded global time limit {} / {}\".format(\n self._total_time, self._global_time_limit))\n return True\n\n trials_done = all(trial.is_finished() for trial in self._trials)\n return trials_done and self._search_alg.is_finished()\n\n def step(self):\n \"\"\"Runs one step of the trial event loop.\n\n Callers should typically run this method repeatedly in a loop. 
They\n may inspect or modify the runner's state in between calls to step().\n \"\"\"\n if self.is_finished():\n raise TuneError(\"Called step when all trials finished?\")\n with warn_if_slow(\"on_step_begin\"):\n self.trial_executor.on_step_begin(self)\n next_trial = self._get_next_trial() # blocking\n if next_trial is not None:\n with warn_if_slow(\"start_trial\"):\n self.trial_executor.start_trial(next_trial)\n elif self.trial_executor.get_running_trials():\n self._process_events() # blocking\n else:\n self.trial_executor.on_no_available_trials(self)\n\n self._stop_experiment_if_needed()\n\n try:\n with warn_if_slow(\"experiment_checkpoint\"):\n self.checkpoint()\n except Exception:\n logger.exception(\"Trial Runner checkpointing failed.\")\n self._iteration += 1\n\n if self._server:\n with warn_if_slow(\"server\"):\n self._process_stop_requests()\n\n if self.is_finished():\n self._server.shutdown()\n with warn_if_slow(\"on_step_end\"):\n self.trial_executor.on_step_end(self)\n\n def get_trial(self, tid):\n trial = [t for t in self._trials if t.trial_id == tid]\n return trial[0] if trial else None\n\n def get_trials(self):\n \"\"\"Returns the list of trials managed by this TrialRunner.\n\n Note that the caller usually should not mutate trial state directly.\n \"\"\"\n return self._trials\n\n def add_trial(self, trial):\n \"\"\"Adds a new trial to this TrialRunner.\n\n Trials may be added at any time.\n\n Args:\n trial (Trial): Trial to queue.\n \"\"\"\n trial.set_verbose(self._verbose)\n self._trials.append(trial)\n with warn_if_slow(\"scheduler.on_trial_add\"):\n self._scheduler_alg.on_trial_add(self, trial)\n self.trial_executor.try_checkpoint_metadata(trial)\n\n def debug_string(self, delim=\"\\n\"):\n result_keys = [\n list(t.last_result) for t in self.get_trials() if t.last_result\n ]\n metrics = set().union(*result_keys)\n messages = [\n self._scheduler_alg.debug_string(),\n self.trial_executor.debug_string(),\n trial_progress_str(self.get_trials(), metrics),\n ]\n return delim.join(messages)\n\n def has_resources(self, resources):\n \"\"\"Returns whether this runner has at least the specified resources.\"\"\"\n return self.trial_executor.has_resources(resources)\n\n def _stop_experiment_if_needed(self):\n \"\"\"Stops all trials.\"\"\"\n fail_fast = self._fail_fast and self._has_errored\n if (self._stopper.stop_all() or fail_fast\n or self._should_stop_experiment):\n self._search_alg.set_finished()\n [\n self.trial_executor.stop_trial(t) for t in self._trials\n if t.status is not Trial.ERROR\n ]\n\n def _get_next_trial(self):\n \"\"\"Replenishes queue.\n\n Blocks if all trials queued have finished, but search algorithm is\n still not finished.\n \"\"\"\n trials_done = all(trial.is_finished() for trial in self._trials)\n wait_for_trial = trials_done and not self._search_alg.is_finished()\n self._update_trial_queue(blocking=wait_for_trial)\n with warn_if_slow(\"choose_trial_to_run\"):\n trial = self._scheduler_alg.choose_trial_to_run(self)\n return trial\n\n def _process_events(self):\n failed_trial = self.trial_executor.get_next_failed_trial()\n if failed_trial:\n error_msg = (\n \"{} (IP: {}) detected as stale. 
This is likely because the \"\n \"node was lost\").format(failed_trial, failed_trial.node_ip)\n logger.info(error_msg)\n with warn_if_slow(\"process_failed_trial\"):\n self._process_trial_failure(failed_trial, error_msg=error_msg)\n else:\n # TODO(ujvl): Consider combining get_next_available_trial and\n # fetch_result functionality so that we don't timeout on fetch.\n trial = self.trial_executor.get_next_available_trial() # blocking\n if trial.is_restoring:\n with warn_if_slow(\"process_trial_restore\"):\n self._process_trial_restore(trial)\n elif trial.is_saving:\n with warn_if_slow(\"process_trial_save\") as profile:\n self._process_trial_save(trial)\n if profile.too_slow and trial.sync_on_checkpoint:\n # TODO(ujvl): Suggest using DurableTrainable once\n # API has converged.\n logger.warning(\n \"Consider turning off forced head-worker trial \"\n \"checkpoint syncs by setting sync_on_checkpoint=False\"\n \". Note that this may result in faulty trial \"\n \"restoration if a failure occurs while the checkpoint \"\n \"is being synced from the worker to the head node.\")\n else:\n with warn_if_slow(\"process_trial\"):\n self._process_trial(trial)\n\n def _process_trial(self, trial):\n \"\"\"Processes a trial result.\n\n Fetches the trial's latest result and makes a scheduling decision\n regarding its next action. If a checkpoint is taken, the decided\n action is cached and acted on only after the checkpoint is later\n processed (see `_process_trial_save`). Otherwise the decision is\n acted on immediately.\n\n Args:\n trial (Trial): Trial with a result ready to be processed.\n \"\"\"\n try:\n result = self.trial_executor.fetch_result(trial)\n\n is_duplicate = RESULT_DUPLICATE in result\n force_checkpoint = result.get(SHOULD_CHECKPOINT, False)\n # TrialScheduler and SearchAlgorithm still receive a\n # notification because there may be special handling for\n # the `on_trial_complete` hook.\n if is_duplicate:\n logger.debug(\"Trial finished without logging 'done'.\")\n result = trial.last_result\n result.update(done=True)\n\n self._total_time += result.get(TIME_THIS_ITER_S, 0)\n\n flat_result = flatten_dict(result)\n if self._stopper(trial.trial_id,\n result) or trial.should_stop(flat_result):\n # Hook into scheduler\n self._scheduler_alg.on_trial_complete(self, trial, flat_result)\n self._search_alg.on_trial_complete(\n trial.trial_id, result=flat_result)\n decision = TrialScheduler.STOP\n else:\n with warn_if_slow(\"scheduler.on_trial_result\"):\n decision = self._scheduler_alg.on_trial_result(\n self, trial, flat_result)\n with warn_if_slow(\"search_alg.on_trial_result\"):\n self._search_alg.on_trial_result(trial.trial_id,\n flat_result)\n if decision == TrialScheduler.STOP:\n with warn_if_slow(\"search_alg.on_trial_complete\"):\n self._search_alg.on_trial_complete(\n trial.trial_id, result=flat_result)\n\n if not is_duplicate:\n trial.update_last_result(\n result, terminate=(decision == TrialScheduler.STOP))\n\n # Checkpoints to disk. This should be checked even if\n # the scheduler decision is STOP or PAUSE. 
Note that\n # PAUSE only checkpoints to memory and does not update\n # the global checkpoint state.\n self._checkpoint_trial_if_needed(trial, force=force_checkpoint)\n\n if trial.is_saving:\n # Cache decision to execute on after the save is processed.\n # This prevents changing the trial's state or kicking off\n # another training step prematurely.\n self._cached_trial_decisions[trial.trial_id] = decision\n else:\n self._execute_action(trial, decision)\n except Exception:\n logger.exception(\"Trial %s: Error processing event.\", trial)\n self._process_trial_failure(trial, traceback.format_exc())\n\n def _process_trial_save(self, trial):\n \"\"\"Processes a trial save.\n\n Acts on the decision cached during the last `_process_trial` call.\n\n Args:\n trial (Trial): Trial being saved.\n \"\"\"\n logger.debug(\"Trial %s: Processing trial save.\", trial)\n checkpoint_value = None\n\n try:\n checkpoint_value = self.trial_executor.fetch_result(trial)\n except Exception:\n logger.exception(\"Trial %s: Error processing result.\", trial)\n self._process_trial_failure(trial, traceback.format_exc())\n\n if checkpoint_value:\n try:\n trial.saving_to.value = checkpoint_value\n trial.on_checkpoint(trial.saving_to)\n self.trial_executor.try_checkpoint_metadata(trial)\n except Exception:\n logger.exception(\"Trial %s: Error handling checkpoint %s\",\n trial, checkpoint_value)\n\n trial.saving_to = None\n decision = self._cached_trial_decisions.pop(trial.trial_id, None)\n if decision and checkpoint_value:\n self._execute_action(trial, decision)\n\n def _process_trial_restore(self, trial):\n \"\"\"Processes a trial restore.\n\n Args:\n trial (Trial): Trial being restored.\n \"\"\"\n logger.debug(\"Trial %s: Processing trial restore.\", trial)\n try:\n self.trial_executor.fetch_result(trial)\n trial.on_restore()\n logger.debug(\"Trial %s: Restore processed successfully\", trial)\n self.trial_executor.set_status(trial, Trial.RUNNING)\n self.trial_executor.continue_training(trial)\n except Exception:\n logger.exception(\"Trial %s: Error processing restore.\", trial)\n self._process_trial_failure(trial, traceback.format_exc())\n\n def _process_trial_failure(self, trial, error_msg):\n \"\"\"Handle trial failure.\n\n Attempt trial recovery if possible, clean up state otherwise.\n\n Args:\n trial (Trial): Failed trial.\n error_msg (str): Error message prior to invoking this method.\n \"\"\"\n self._has_errored = True\n if trial.status == Trial.RUNNING:\n if trial.should_recover():\n self._try_recover(trial, error_msg)\n else:\n self._scheduler_alg.on_trial_error(self, trial)\n self._search_alg.on_trial_complete(trial.trial_id, error=True)\n self.trial_executor.stop_trial(\n trial, error=True, error_msg=error_msg)\n\n def _execute_action(self, trial, decision):\n \"\"\"Executes action based on decision.\n\n Args:\n trial (Trial): Trial to act on.\n decision (str): Scheduling decision to undertake.\n \"\"\"\n if decision == TrialScheduler.CONTINUE:\n self.trial_executor.continue_training(trial)\n elif decision == TrialScheduler.PAUSE:\n self.trial_executor.pause_trial(trial)\n elif decision == TrialScheduler.STOP:\n self.trial_executor.export_trial_if_needed(trial)\n self.trial_executor.stop_trial(trial)\n else:\n raise ValueError(\"Invalid decision: {}\".format(decision))\n\n def _checkpoint_trial_if_needed(self, trial, force=False):\n \"\"\"Checkpoints trial based off trial.last_result.\"\"\"\n if trial.should_checkpoint() or force:\n # Save trial runtime if possible.\n if trial.runner:\n 
self.trial_executor.save(trial, storage=Checkpoint.PERSISTENT)\n\n def _try_recover(self, trial, error_msg):\n \"\"\"Tries to recover trial.\n\n Notifies SearchAlgorithm and Scheduler if failure to recover.\n\n Args:\n trial (Trial): Trial to recover.\n error_msg (str): Error message from prior to invoking this method.\n \"\"\"\n if trial.is_restoring:\n # Restore was unsuccessful, try again without checkpoint.\n trial.clear_checkpoint()\n self.trial_executor.stop_trial(\n trial,\n error=error_msg is not None,\n error_msg=error_msg,\n stop_logger=False)\n trial.result_logger.flush()\n if self.trial_executor.has_resources(trial.resources):\n logger.info(\n \"Trial %s: Attempting to restore \"\n \"trial state from last checkpoint.\", trial)\n self.trial_executor.start_trial(trial)\n if trial.status == Trial.ERROR:\n logger.exception(\n \"Trial %s: Error restoring trial from checkpoint, abort.\",\n trial)\n self._scheduler_alg.on_trial_error(self, trial)\n self._search_alg.on_trial_complete(trial.trial_id, error=True)\n else:\n logger.debug(\"Trial %s: Restore dispatched correctly.\", trial)\n else:\n logger.debug(\"Trial %s: Notifying Scheduler and requeueing.\",\n trial)\n self._requeue_trial(trial)\n\n def _requeue_trial(self, trial):\n \"\"\"Notification to TrialScheduler and requeue trial.\n\n This does not notify the SearchAlgorithm because the function\n evaluation is still in progress.\n\n \"\"\"\n self._scheduler_alg.on_trial_error(self, trial)\n self.trial_executor.set_status(trial, Trial.PENDING)\n\n # TODO(rliaw): Right now, this pushes the trial to the end of queue\n # because restoration can be expensive. However, this is not\n # ideal since it just hides the issue - a better fix would\n # be to use an actor table to detect the IP of the Trainable\n # and rsync the files there.\n # See https://github.com/ray-project/ray/issues/5168\n self._trials.pop(self._trials.index(trial))\n self._trials.append(trial)\n\n with warn_if_slow(\"scheduler.on_trial_add\"):\n self._scheduler_alg.on_trial_add(self, trial)\n\n def _update_trial_queue(self, blocking=False, timeout=600):\n \"\"\"Adds next trials to queue if possible.\n\n Note that the timeout is currently unexposed to the user.\n\n Args:\n blocking (bool): Blocks until either a trial is available\n or is_finished (timeout or search algorithm finishes).\n timeout (int): Seconds before blocking times out.\n \"\"\"\n trials = self._search_alg.next_trials()\n if blocking and not trials:\n start = time.time()\n # Checking `is_finished` instead of _search_alg.is_finished\n # is fine because blocking only occurs if all trials are\n # finished and search_algorithm is not yet finished\n while (not trials and not self.is_finished()\n and time.time() - start < timeout):\n logger.info(\"Blocking for next trial...\")\n trials = self._search_alg.next_trials()\n time.sleep(1)\n\n for trial in trials:\n self.add_trial(trial)\n\n def request_stop_trial(self, trial):\n self._stop_queue.append(trial)\n\n def request_stop_experiment(self):\n self._should_stop_experiment = True\n\n def _process_stop_requests(self):\n while self._stop_queue:\n t = self._stop_queue.pop()\n self.stop_trial(t)\n\n def stop_trial(self, trial):\n \"\"\"Stops trial.\n\n Trials may be stopped at any time. 
If trial is in state PENDING\n or PAUSED, calls `on_trial_remove` for scheduler and\n `on_trial_complete() for search_alg.\n Otherwise waits for result for the trial and calls\n `on_trial_complete` for scheduler and search_alg if RUNNING.\n \"\"\"\n error = False\n error_msg = None\n\n if trial.status in [Trial.ERROR, Trial.TERMINATED]:\n return\n elif trial.status in [Trial.PENDING, Trial.PAUSED]:\n self._scheduler_alg.on_trial_remove(self, trial)\n self._search_alg.on_trial_complete(trial.trial_id)\n elif trial.status is Trial.RUNNING:\n try:\n result = self.trial_executor.fetch_result(trial)\n trial.update_last_result(result, terminate=True)\n self._scheduler_alg.on_trial_complete(self, trial, result)\n self._search_alg.on_trial_complete(\n trial.trial_id, result=result)\n except Exception:\n error_msg = traceback.format_exc()\n logger.exception(\"Error processing event.\")\n self._scheduler_alg.on_trial_error(self, trial)\n self._search_alg.on_trial_complete(trial.trial_id, error=True)\n error = True\n\n self.trial_executor.stop_trial(trial, error=error, error_msg=error_msg)\n\n def cleanup_trials(self):\n self.trial_executor.cleanup()\n\n def __getstate__(self):\n \"\"\"Gets state for trial.\n\n Note that this is not used as a pickling override as\n does not have all fields.\n \"\"\"\n state = self.__dict__.copy()\n for k in [\n \"_trials\",\n \"_stop_queue\",\n \"_server\",\n \"_search_alg\",\n \"_scheduler_alg\",\n \"trial_executor\",\n \"_syncer\",\n ]:\n del state[k]\n state[\"launch_web_server\"] = bool(self._server)\n return state\n\n def __setstate__(self, state):\n launch_web_server = state.pop(\"launch_web_server\")\n\n # Use session_str from previous checkpoint if does not exist\n session_str = state.pop(\"_session_str\")\n self.__dict__.setdefault(\"_session_str\", session_str)\n # Use start_time from previous checkpoint if does not exist\n start_time = state.pop(\"_start_time\")\n self.__dict__.setdefault(\"_start_time\", start_time)\n\n self.__dict__.update(state)\n if launch_web_server:\n self._server = TuneServer(self, self._server_port)\n", "path": "python/ray/tune/trial_runner.py"}], "after_files": [{"content": "import click\nfrom datetime import datetime\nimport json\nimport logging\nimport os\nimport time\nimport traceback\nimport types\n\nimport ray.cloudpickle as cloudpickle\nfrom ray.tune import TuneError\nfrom ray.tune.stopper import NoopStopper\nfrom ray.tune.progress_reporter import trial_progress_str\nfrom ray.tune.ray_trial_executor import RayTrialExecutor\nfrom ray.tune.result import (TIME_THIS_ITER_S, RESULT_DUPLICATE,\n SHOULD_CHECKPOINT)\nfrom ray.tune.syncer import get_cloud_syncer\nfrom ray.tune.trial import Checkpoint, Trial\nfrom ray.tune.schedulers import FIFOScheduler, TrialScheduler\nfrom ray.tune.suggest import BasicVariantGenerator\nfrom ray.tune.utils import warn_if_slow, flatten_dict\nfrom ray.tune.web_server import TuneServer\nfrom ray.utils import binary_to_hex, hex_to_binary\n\nMAX_DEBUG_TRIALS = 20\n\nlogger = logging.getLogger(__name__)\n\n\ndef _find_newest_ckpt(ckpt_dir):\n \"\"\"Returns path to most recently modified checkpoint.\"\"\"\n full_paths = [\n os.path.join(ckpt_dir, fname) for fname in os.listdir(ckpt_dir)\n if fname.startswith(\"experiment_state\") and fname.endswith(\".json\")\n ]\n return max(full_paths)\n\n\nclass _TuneFunctionEncoder(json.JSONEncoder):\n def default(self, obj):\n if isinstance(obj, types.FunctionType):\n return self._to_cloudpickle(obj)\n try:\n return super(_TuneFunctionEncoder, self).default(obj)\n 
except Exception:\n logger.debug(\"Unable to encode. Falling back to cloudpickle.\")\n return self._to_cloudpickle(obj)\n\n def _to_cloudpickle(self, obj):\n return {\n \"_type\": \"CLOUDPICKLE_FALLBACK\",\n \"value\": binary_to_hex(cloudpickle.dumps(obj))\n }\n\n\nclass _TuneFunctionDecoder(json.JSONDecoder):\n def __init__(self, *args, **kwargs):\n json.JSONDecoder.__init__(\n self, object_hook=self.object_hook, *args, **kwargs)\n\n def object_hook(self, obj):\n if obj.get(\"_type\") == \"CLOUDPICKLE_FALLBACK\":\n return self._from_cloudpickle(obj)\n return obj\n\n def _from_cloudpickle(self, obj):\n return cloudpickle.loads(hex_to_binary(obj[\"value\"]))\n\n\nclass TrialRunner:\n \"\"\"A TrialRunner implements the event loop for scheduling trials on Ray.\n\n .. code-block: python\n\n runner = TrialRunner()\n runner.add_trial(Trial(...))\n runner.add_trial(Trial(...))\n while not runner.is_finished():\n runner.step()\n print(runner.debug_string())\n\n The main job of TrialRunner is scheduling trials to efficiently use cluster\n resources, without overloading the cluster.\n\n While Ray itself provides resource management for tasks and actors, this is\n not sufficient when scheduling trials that may instantiate multiple actors.\n This is because if insufficient resources are available, concurrent trials\n could deadlock waiting for new resources to become available. Furthermore,\n oversubscribing the cluster could degrade training performance, leading to\n misleading benchmark results.\n\n Args:\n search_alg (SearchAlgorithm): SearchAlgorithm for generating\n Trial objects.\n scheduler (TrialScheduler): Defaults to FIFOScheduler.\n launch_web_server (bool): Flag for starting TuneServer\n local_checkpoint_dir (str): Path where\n global checkpoints are stored and restored from.\n remote_checkpoint_dir (str): Remote path where\n global checkpoints are stored and restored from. Used\n if `resume` == REMOTE.\n stopper: Custom class for stopping whole experiments. See\n ``Stopper``.\n resume (str|False): see `tune.py:run`.\n sync_to_cloud (func|str): See `tune.py:run`.\n server_port (int): Port number for launching TuneServer.\n fail_fast (bool): Finishes as soon as a trial fails if True.\n verbose (bool): Flag for verbosity. If False, trial results\n will not be output.\n checkpoint_period (int): Trial runner checkpoint periodicity in\n seconds. Defaults to 10.\n trial_executor (TrialExecutor): Defaults to RayTrialExecutor.\n \"\"\"\n\n CKPT_FILE_TMPL = \"experiment_state-{}.json\"\n VALID_RESUME_TYPES = [True, \"LOCAL\", \"REMOTE\", \"PROMPT\"]\n\n def __init__(self,\n search_alg=None,\n scheduler=None,\n launch_web_server=False,\n local_checkpoint_dir=None,\n remote_checkpoint_dir=None,\n sync_to_cloud=None,\n stopper=None,\n resume=False,\n server_port=TuneServer.DEFAULT_PORT,\n fail_fast=False,\n verbose=True,\n checkpoint_period=10,\n trial_executor=None):\n self._search_alg = search_alg or BasicVariantGenerator()\n self._scheduler_alg = scheduler or FIFOScheduler()\n self.trial_executor = trial_executor or RayTrialExecutor()\n\n # For debugging, it may be useful to halt trials after some time has\n # elapsed. 
TODO(ekl) consider exposing this in the API.\n self._global_time_limit = float(\n os.environ.get(\"TRIALRUNNER_WALLTIME_LIMIT\", float(\"inf\")))\n self._total_time = 0\n self._iteration = 0\n self._has_errored = False\n self._fail_fast = fail_fast\n self._verbose = verbose\n\n self._server = None\n self._server_port = server_port\n if launch_web_server:\n self._server = TuneServer(self, self._server_port)\n\n self._trials = []\n self._cached_trial_decisions = {}\n self._stop_queue = []\n self._should_stop_experiment = False # used by TuneServer\n self._local_checkpoint_dir = local_checkpoint_dir\n\n if self._local_checkpoint_dir:\n os.makedirs(self._local_checkpoint_dir, exist_ok=True)\n\n self._remote_checkpoint_dir = remote_checkpoint_dir\n self._syncer = get_cloud_syncer(local_checkpoint_dir,\n remote_checkpoint_dir, sync_to_cloud)\n self._stopper = stopper or NoopStopper()\n self._resumed = False\n\n if self._validate_resume(resume_type=resume):\n try:\n self.resume()\n logger.info(\"Resuming trial.\")\n self._resumed = True\n except Exception:\n logger.exception(\n \"Runner restore failed. Restarting experiment.\")\n else:\n logger.debug(\"Starting a new experiment.\")\n\n self._start_time = time.time()\n self._last_checkpoint_time = -float(\"inf\")\n self._checkpoint_period = checkpoint_period\n self._session_str = datetime.fromtimestamp(\n self._start_time).strftime(\"%Y-%m-%d_%H-%M-%S\")\n self.checkpoint_file = None\n if self._local_checkpoint_dir:\n self.checkpoint_file = os.path.join(\n self._local_checkpoint_dir,\n TrialRunner.CKPT_FILE_TMPL.format(self._session_str))\n\n @property\n def scheduler_alg(self):\n return self._scheduler_alg\n\n def _validate_resume(self, resume_type):\n \"\"\"Checks whether to resume experiment.\n\n Args:\n resume_type: One of True, \"REMOTE\", \"LOCAL\", \"PROMPT\".\n \"\"\"\n if not resume_type:\n return False\n assert resume_type in self.VALID_RESUME_TYPES, (\n \"resume_type {} is not one of {}\".format(resume_type,\n self.VALID_RESUME_TYPES))\n # Not clear if we need this assertion, since we should always have a\n # local checkpoint dir.\n assert self._local_checkpoint_dir or self._remote_checkpoint_dir\n if resume_type in [True, \"LOCAL\", \"PROMPT\"]:\n if not self.checkpoint_exists(self._local_checkpoint_dir):\n raise ValueError(\"Called resume when no checkpoint exists \"\n \"in local directory.\")\n elif resume_type == \"PROMPT\":\n if click.confirm(\"Resume from local directory?\"):\n return True\n\n if resume_type in [\"REMOTE\", \"PROMPT\"]:\n if resume_type == \"PROMPT\" and not click.confirm(\n \"Try downloading from remote directory?\"):\n return False\n if not self._remote_checkpoint_dir:\n raise ValueError(\n \"Called resume from remote without remote directory.\")\n\n # Try syncing down the upload directory.\n logger.info(\"Downloading from %s\", self._remote_checkpoint_dir)\n # TODO(ujvl): Note that this syncs down the entire directory,\n # which may also contain trial checkpoints. 
We should selectively\n # sync the necessary files instead.\n self._syncer.sync_down_if_needed()\n self._syncer.wait()\n\n if not self.checkpoint_exists(self._local_checkpoint_dir):\n raise ValueError(\"Called resume when no checkpoint exists \"\n \"in remote or local directory.\")\n return True\n\n @classmethod\n def checkpoint_exists(cls, directory):\n if not os.path.exists(directory):\n return False\n return any(\n (fname.startswith(\"experiment_state\") and fname.endswith(\".json\"))\n for fname in os.listdir(directory))\n\n def add_experiment(self, experiment):\n if not self._resumed:\n self._search_alg.add_configurations([experiment])\n else:\n logger.info(\"TrialRunner resumed, ignoring new add_experiment.\")\n\n def checkpoint(self, force=False):\n \"\"\"Saves execution state to `self._local_checkpoint_dir`.\n\n Overwrites the current session checkpoint, which starts when self\n is instantiated. Throttle depends on self._checkpoint_period.\n\n Args:\n force (bool): Forces a checkpoint despite checkpoint_period.\n \"\"\"\n if not self._local_checkpoint_dir:\n return\n now = time.time()\n if now - self._last_checkpoint_time < self._checkpoint_period and (\n not force):\n return\n self._last_checkpoint_time = now\n runner_state = {\n \"checkpoints\": list(\n self.trial_executor.get_checkpoints().values()),\n \"runner_data\": self.__getstate__(),\n \"stats\": {\n \"start_time\": self._start_time,\n \"timestamp\": self._last_checkpoint_time\n }\n }\n tmp_file_name = os.path.join(self._local_checkpoint_dir,\n \".tmp_checkpoint\")\n with open(tmp_file_name, \"w\") as f:\n json.dump(runner_state, f, indent=2, cls=_TuneFunctionEncoder)\n\n os.replace(tmp_file_name, self.checkpoint_file)\n if force:\n self._syncer.sync_up()\n else:\n self._syncer.sync_up_if_needed()\n return self._local_checkpoint_dir\n\n def resume(self):\n \"\"\"Resumes all checkpointed trials from previous run.\n\n Requires user to manually re-register their objects. Also stops\n all ongoing trials.\n \"\"\"\n newest_ckpt_path = _find_newest_ckpt(self._local_checkpoint_dir)\n with open(newest_ckpt_path, \"r\") as f:\n runner_state = json.load(f, cls=_TuneFunctionDecoder)\n self.checkpoint_file = newest_ckpt_path\n\n logger.warning(\"\".join([\n \"Attempting to resume experiment from {}. \".format(\n self._local_checkpoint_dir), \"This feature is experimental, \"\n \"and may not work with all search algorithms. \",\n \"This will ignore any new changes to the specification.\"\n ]))\n\n self.__setstate__(runner_state[\"runner_data\"])\n\n trials = []\n for trial_cp in runner_state[\"checkpoints\"]:\n new_trial = Trial(trial_cp[\"trainable_name\"])\n new_trial.__setstate__(trial_cp)\n trials += [new_trial]\n for trial in sorted(\n trials, key=lambda t: t.last_update_time, reverse=True):\n self.add_trial(trial)\n\n def is_finished(self):\n \"\"\"Returns whether all trials have finished running.\"\"\"\n if self._total_time > self._global_time_limit:\n logger.warning(\"Exceeded global time limit {} / {}\".format(\n self._total_time, self._global_time_limit))\n return True\n\n trials_done = all(trial.is_finished() for trial in self._trials)\n return trials_done and self._search_alg.is_finished()\n\n def step(self):\n \"\"\"Runs one step of the trial event loop.\n\n Callers should typically run this method repeatedly in a loop. 
They\n may inspect or modify the runner's state in between calls to step().\n \"\"\"\n if self.is_finished():\n raise TuneError(\"Called step when all trials finished?\")\n with warn_if_slow(\"on_step_begin\"):\n self.trial_executor.on_step_begin(self)\n next_trial = self._get_next_trial() # blocking\n if next_trial is not None:\n with warn_if_slow(\"start_trial\"):\n self.trial_executor.start_trial(next_trial)\n elif self.trial_executor.get_running_trials():\n self._process_events() # blocking\n else:\n self.trial_executor.on_no_available_trials(self)\n\n self._stop_experiment_if_needed()\n\n try:\n with warn_if_slow(\"experiment_checkpoint\"):\n self.checkpoint()\n except Exception:\n logger.exception(\"Trial Runner checkpointing failed.\")\n self._iteration += 1\n\n if self._server:\n with warn_if_slow(\"server\"):\n self._process_stop_requests()\n\n if self.is_finished():\n self._server.shutdown()\n with warn_if_slow(\"on_step_end\"):\n self.trial_executor.on_step_end(self)\n\n def get_trial(self, tid):\n trial = [t for t in self._trials if t.trial_id == tid]\n return trial[0] if trial else None\n\n def get_trials(self):\n \"\"\"Returns the list of trials managed by this TrialRunner.\n\n Note that the caller usually should not mutate trial state directly.\n \"\"\"\n return self._trials\n\n def add_trial(self, trial):\n \"\"\"Adds a new trial to this TrialRunner.\n\n Trials may be added at any time.\n\n Args:\n trial (Trial): Trial to queue.\n \"\"\"\n trial.set_verbose(self._verbose)\n self._trials.append(trial)\n with warn_if_slow(\"scheduler.on_trial_add\"):\n self._scheduler_alg.on_trial_add(self, trial)\n self.trial_executor.try_checkpoint_metadata(trial)\n\n def debug_string(self, delim=\"\\n\"):\n result_keys = [\n list(t.last_result) for t in self.get_trials() if t.last_result\n ]\n metrics = set().union(*result_keys)\n messages = [\n self._scheduler_alg.debug_string(),\n self.trial_executor.debug_string(),\n trial_progress_str(self.get_trials(), metrics),\n ]\n return delim.join(messages)\n\n def has_resources(self, resources):\n \"\"\"Returns whether this runner has at least the specified resources.\"\"\"\n return self.trial_executor.has_resources(resources)\n\n def _stop_experiment_if_needed(self):\n \"\"\"Stops all trials.\"\"\"\n fail_fast = self._fail_fast and self._has_errored\n if (self._stopper.stop_all() or fail_fast\n or self._should_stop_experiment):\n self._search_alg.set_finished()\n [\n self.trial_executor.stop_trial(t) for t in self._trials\n if t.status is not Trial.ERROR\n ]\n\n def _get_next_trial(self):\n \"\"\"Replenishes queue.\n\n Blocks if all trials queued have finished, but search algorithm is\n still not finished.\n \"\"\"\n trials_done = all(trial.is_finished() for trial in self._trials)\n wait_for_trial = trials_done and not self._search_alg.is_finished()\n self._update_trial_queue(blocking=wait_for_trial)\n with warn_if_slow(\"choose_trial_to_run\"):\n trial = self._scheduler_alg.choose_trial_to_run(self)\n return trial\n\n def _process_events(self):\n failed_trial = self.trial_executor.get_next_failed_trial()\n if failed_trial:\n error_msg = (\n \"{} (IP: {}) detected as stale. 
This is likely because the \"\n \"node was lost\").format(failed_trial, failed_trial.node_ip)\n logger.info(error_msg)\n with warn_if_slow(\"process_failed_trial\"):\n self._process_trial_failure(failed_trial, error_msg=error_msg)\n else:\n # TODO(ujvl): Consider combining get_next_available_trial and\n # fetch_result functionality so that we don't timeout on fetch.\n trial = self.trial_executor.get_next_available_trial() # blocking\n if trial.is_restoring:\n with warn_if_slow(\"process_trial_restore\"):\n self._process_trial_restore(trial)\n elif trial.is_saving:\n with warn_if_slow(\"process_trial_save\") as profile:\n self._process_trial_save(trial)\n if profile.too_slow and trial.sync_on_checkpoint:\n # TODO(ujvl): Suggest using DurableTrainable once\n # API has converged.\n logger.warning(\n \"Consider turning off forced head-worker trial \"\n \"checkpoint syncs by setting sync_on_checkpoint=False\"\n \". Note that this may result in faulty trial \"\n \"restoration if a failure occurs while the checkpoint \"\n \"is being synced from the worker to the head node.\")\n else:\n with warn_if_slow(\"process_trial\"):\n self._process_trial(trial)\n\n def _process_trial(self, trial):\n \"\"\"Processes a trial result.\n\n Fetches the trial's latest result and makes a scheduling decision\n regarding its next action. If a checkpoint is taken, the decided\n action is cached and acted on only after the checkpoint is later\n processed (see `_process_trial_save`). Otherwise the decision is\n acted on immediately.\n\n Args:\n trial (Trial): Trial with a result ready to be processed.\n \"\"\"\n try:\n result = self.trial_executor.fetch_result(trial)\n\n is_duplicate = RESULT_DUPLICATE in result\n force_checkpoint = result.get(SHOULD_CHECKPOINT, False)\n # TrialScheduler and SearchAlgorithm still receive a\n # notification because there may be special handling for\n # the `on_trial_complete` hook.\n if is_duplicate:\n logger.debug(\"Trial finished without logging 'done'.\")\n result = trial.last_result\n result.update(done=True)\n\n self._total_time += result.get(TIME_THIS_ITER_S, 0)\n\n flat_result = flatten_dict(result)\n if self._stopper(trial.trial_id,\n result) or trial.should_stop(flat_result):\n # Hook into scheduler\n self._scheduler_alg.on_trial_complete(self, trial, flat_result)\n self._search_alg.on_trial_complete(\n trial.trial_id, result=flat_result)\n decision = TrialScheduler.STOP\n else:\n with warn_if_slow(\"scheduler.on_trial_result\"):\n decision = self._scheduler_alg.on_trial_result(\n self, trial, flat_result)\n with warn_if_slow(\"search_alg.on_trial_result\"):\n self._search_alg.on_trial_result(trial.trial_id,\n flat_result)\n if decision == TrialScheduler.STOP:\n with warn_if_slow(\"search_alg.on_trial_complete\"):\n self._search_alg.on_trial_complete(\n trial.trial_id, result=flat_result)\n\n if not is_duplicate:\n trial.update_last_result(\n result, terminate=(decision == TrialScheduler.STOP))\n\n # Checkpoints to disk. This should be checked even if\n # the scheduler decision is STOP or PAUSE. 
Note that\n # PAUSE only checkpoints to memory and does not update\n # the global checkpoint state.\n self._checkpoint_trial_if_needed(trial, force=force_checkpoint)\n\n if trial.is_saving:\n # Cache decision to execute on after the save is processed.\n # This prevents changing the trial's state or kicking off\n # another training step prematurely.\n self._cached_trial_decisions[trial.trial_id] = decision\n else:\n self._execute_action(trial, decision)\n except Exception:\n logger.exception(\"Trial %s: Error processing event.\", trial)\n self._process_trial_failure(trial, traceback.format_exc())\n\n def _process_trial_save(self, trial):\n \"\"\"Processes a trial save.\n\n Acts on the decision cached during the last `_process_trial` call.\n\n Args:\n trial (Trial): Trial being saved.\n \"\"\"\n logger.debug(\"Trial %s: Processing trial save.\", trial)\n checkpoint_value = None\n\n try:\n checkpoint_value = self.trial_executor.fetch_result(trial)\n except Exception:\n logger.exception(\"Trial %s: Error processing result.\", trial)\n self._process_trial_failure(trial, traceback.format_exc())\n\n if checkpoint_value:\n try:\n trial.saving_to.value = checkpoint_value\n trial.on_checkpoint(trial.saving_to)\n self.trial_executor.try_checkpoint_metadata(trial)\n except Exception:\n logger.exception(\"Trial %s: Error handling checkpoint %s\",\n trial, checkpoint_value)\n\n trial.saving_to = None\n decision = self._cached_trial_decisions.pop(trial.trial_id, None)\n if decision and checkpoint_value:\n self._execute_action(trial, decision)\n\n def _process_trial_restore(self, trial):\n \"\"\"Processes a trial restore.\n\n Args:\n trial (Trial): Trial being restored.\n \"\"\"\n logger.debug(\"Trial %s: Processing trial restore.\", trial)\n try:\n self.trial_executor.fetch_result(trial)\n trial.on_restore()\n logger.debug(\"Trial %s: Restore processed successfully\", trial)\n self.trial_executor.set_status(trial, Trial.RUNNING)\n self.trial_executor.continue_training(trial)\n except Exception:\n logger.exception(\"Trial %s: Error processing restore.\", trial)\n self._process_trial_failure(trial, traceback.format_exc())\n\n def _process_trial_failure(self, trial, error_msg):\n \"\"\"Handle trial failure.\n\n Attempt trial recovery if possible, clean up state otherwise.\n\n Args:\n trial (Trial): Failed trial.\n error_msg (str): Error message prior to invoking this method.\n \"\"\"\n self._has_errored = True\n if trial.status == Trial.RUNNING:\n if trial.should_recover():\n self._try_recover(trial, error_msg)\n else:\n self._scheduler_alg.on_trial_error(self, trial)\n self._search_alg.on_trial_complete(trial.trial_id, error=True)\n self.trial_executor.stop_trial(\n trial, error=True, error_msg=error_msg)\n\n def _execute_action(self, trial, decision):\n \"\"\"Executes action based on decision.\n\n Args:\n trial (Trial): Trial to act on.\n decision (str): Scheduling decision to undertake.\n \"\"\"\n if decision == TrialScheduler.CONTINUE:\n self.trial_executor.continue_training(trial)\n elif decision == TrialScheduler.PAUSE:\n self.trial_executor.pause_trial(trial)\n elif decision == TrialScheduler.STOP:\n self.trial_executor.export_trial_if_needed(trial)\n self.trial_executor.stop_trial(trial)\n else:\n raise ValueError(\"Invalid decision: {}\".format(decision))\n\n def _checkpoint_trial_if_needed(self, trial, force=False):\n \"\"\"Checkpoints trial based off trial.last_result.\"\"\"\n if trial.should_checkpoint() or force:\n # Save trial runtime if possible.\n if trial.runner:\n 
self.trial_executor.save(trial, storage=Checkpoint.PERSISTENT)\n\n def _try_recover(self, trial, error_msg):\n \"\"\"Tries to recover trial.\n\n Notifies SearchAlgorithm and Scheduler if failure to recover.\n\n Args:\n trial (Trial): Trial to recover.\n error_msg (str): Error message from prior to invoking this method.\n \"\"\"\n if trial.is_restoring:\n # Restore was unsuccessful, try again without checkpoint.\n trial.clear_checkpoint()\n self.trial_executor.stop_trial(\n trial,\n error=error_msg is not None,\n error_msg=error_msg,\n stop_logger=False)\n trial.result_logger.flush()\n if self.trial_executor.has_resources(trial.resources):\n logger.info(\n \"Trial %s: Attempting to restore \"\n \"trial state from last checkpoint.\", trial)\n self.trial_executor.start_trial(trial)\n if trial.status == Trial.ERROR:\n logger.exception(\n \"Trial %s: Error restoring trial from checkpoint, abort.\",\n trial)\n self._scheduler_alg.on_trial_error(self, trial)\n self._search_alg.on_trial_complete(trial.trial_id, error=True)\n else:\n logger.debug(\"Trial %s: Restore dispatched correctly.\", trial)\n else:\n logger.debug(\"Trial %s: Notifying Scheduler and requeueing.\",\n trial)\n self._requeue_trial(trial)\n\n def _requeue_trial(self, trial):\n \"\"\"Notification to TrialScheduler and requeue trial.\n\n This does not notify the SearchAlgorithm because the function\n evaluation is still in progress.\n\n \"\"\"\n self._scheduler_alg.on_trial_error(self, trial)\n self.trial_executor.set_status(trial, Trial.PENDING)\n\n # TODO(rliaw): Right now, this pushes the trial to the end of queue\n # because restoration can be expensive. However, this is not\n # ideal since it just hides the issue - a better fix would\n # be to use an actor table to detect the IP of the Trainable\n # and rsync the files there.\n # See https://github.com/ray-project/ray/issues/5168\n self._trials.pop(self._trials.index(trial))\n self._trials.append(trial)\n\n with warn_if_slow(\"scheduler.on_trial_add\"):\n self._scheduler_alg.on_trial_add(self, trial)\n\n def _update_trial_queue(self, blocking=False, timeout=600):\n \"\"\"Adds next trials to queue if possible.\n\n Note that the timeout is currently unexposed to the user.\n\n Args:\n blocking (bool): Blocks until either a trial is available\n or is_finished (timeout or search algorithm finishes).\n timeout (int): Seconds before blocking times out.\n \"\"\"\n trials = self._search_alg.next_trials()\n if blocking and not trials:\n start = time.time()\n # Checking `is_finished` instead of _search_alg.is_finished\n # is fine because blocking only occurs if all trials are\n # finished and search_algorithm is not yet finished\n while (not trials and not self.is_finished()\n and time.time() - start < timeout):\n logger.info(\"Blocking for next trial...\")\n trials = self._search_alg.next_trials()\n time.sleep(1)\n\n for trial in trials:\n self.add_trial(trial)\n\n def request_stop_trial(self, trial):\n self._stop_queue.append(trial)\n\n def request_stop_experiment(self):\n self._should_stop_experiment = True\n\n def _process_stop_requests(self):\n while self._stop_queue:\n t = self._stop_queue.pop()\n self.stop_trial(t)\n\n def stop_trial(self, trial):\n \"\"\"Stops trial.\n\n Trials may be stopped at any time. 
If trial is in state PENDING\n or PAUSED, calls `on_trial_remove` for scheduler and\n `on_trial_complete() for search_alg.\n Otherwise waits for result for the trial and calls\n `on_trial_complete` for scheduler and search_alg if RUNNING.\n \"\"\"\n error = False\n error_msg = None\n\n if trial.status in [Trial.ERROR, Trial.TERMINATED]:\n return\n elif trial.status in [Trial.PENDING, Trial.PAUSED]:\n self._scheduler_alg.on_trial_remove(self, trial)\n self._search_alg.on_trial_complete(trial.trial_id)\n elif trial.status is Trial.RUNNING:\n try:\n result = self.trial_executor.fetch_result(trial)\n trial.update_last_result(result, terminate=True)\n self._scheduler_alg.on_trial_complete(self, trial, result)\n self._search_alg.on_trial_complete(\n trial.trial_id, result=result)\n except Exception:\n error_msg = traceback.format_exc()\n logger.exception(\"Error processing event.\")\n self._scheduler_alg.on_trial_error(self, trial)\n self._search_alg.on_trial_complete(trial.trial_id, error=True)\n error = True\n\n self.trial_executor.stop_trial(trial, error=error, error_msg=error_msg)\n\n def cleanup_trials(self):\n self.trial_executor.cleanup()\n\n def __getstate__(self):\n \"\"\"Gets state for trial.\n\n Note that this is not used as a pickling override as\n does not have all fields.\n \"\"\"\n state = self.__dict__.copy()\n for k in [\n \"_trials\",\n \"_stop_queue\",\n \"_server\",\n \"_search_alg\",\n \"_scheduler_alg\",\n \"trial_executor\",\n \"_syncer\",\n ]:\n del state[k]\n state[\"launch_web_server\"] = bool(self._server)\n return state\n\n def __setstate__(self, state):\n launch_web_server = state.pop(\"launch_web_server\")\n\n # Use session_str from previous checkpoint if does not exist\n session_str = state.pop(\"_session_str\")\n self.__dict__.setdefault(\"_session_str\", session_str)\n # Use start_time from previous checkpoint if does not exist\n start_time = state.pop(\"_start_time\")\n self.__dict__.setdefault(\"_start_time\", start_time)\n\n self.__dict__.update(state)\n if launch_web_server:\n self._server = TuneServer(self, self._server_port)\n", "path": "python/ray/tune/trial_runner.py"}]} |
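The `trial_runner.py` source embedded in the record above decides whether an experiment can be resumed by looking for `experiment_state*.json` files in the local checkpoint directory. A minimal standalone sketch of that lookup, using a placeholder results path rather than anything from the record, might look like this:

```python
import os

def experiment_checkpoint_exists(directory: str) -> bool:
    # Mirrors TrialRunner.checkpoint_exists: a resumable experiment is any
    # file named experiment_state*.json inside the local checkpoint dir.
    if not os.path.exists(directory):
        return False
    return any(
        fname.startswith("experiment_state") and fname.endswith(".json")
        for fname in os.listdir(directory)
    )

# "~/ray_results/my_experiment" is a placeholder path, not taken from the record.
if experiment_checkpoint_exists(os.path.expanduser("~/ray_results/my_experiment")):
    print("Found a resumable experiment checkpoint.")
```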
gh_patches_debug_1094 | rasdani/github-patches | git_diff | interlegis__sapl-1749 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Format line breaks in the crud's text fields
The line breaks (`\n`) in TextField fields, it seems, are being displayed on the crud's read screens.
For example, in the `observacao` field of `DocumentoAdministrativo`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sapl/crispy_layout_mixin.py`
Content:
```
1 from math import ceil
2
3 import rtyaml
4 from crispy_forms.bootstrap import FormActions
5 from crispy_forms.helper import FormHelper
6 from crispy_forms.layout import HTML, Div, Fieldset, Layout, Submit
7 from django import template
8 from django.core.urlresolvers import reverse, reverse_lazy
9 from django.utils import formats
10 from django.utils.translation import ugettext as _
11
12
13 def heads_and_tails(list_of_lists):
14 for alist in list_of_lists:
15 yield alist[0], alist[1:]
16
17
18 def to_column(name_span):
19 fieldname, span = name_span
20 return Div(fieldname, css_class='col-md-%d' % span)
21
22
23 def to_row(names_spans):
24 return Div(*map(to_column, names_spans), css_class='row-fluid')
25
26
27 def to_fieldsets(fields):
28 for field in fields:
29 if isinstance(field, list):
30 legend, row_specs = field[0], field[1:]
31 rows = [to_row(name_span_list) for name_span_list in row_specs]
32 yield Fieldset(legend, *rows)
33 else:
34 yield field
35
36
37 def form_actions(more=[Div(css_class='clearfix')],
38 label=_('Salvar'), name='salvar', css_class='pull-right', disabled=True):
39
40 if disabled:
41 doubleclick = 'this.form.submit();this.disabled=true;'
42 else:
43 doubleclick = 'return true;'
44
45 return FormActions(
46 Submit(name, label, css_class=css_class,
47 # para impedir resubmissão do form
48 onclick=doubleclick),
49 *more)
50
51
52 class SaplFormLayout(Layout):
53
54 def __init__(self, *fields, cancel_label=_('Cancelar'),
55 save_label=_('Salvar'), actions=None):
56
57 buttons = actions
58 if not buttons:
59 buttons = form_actions(label=save_label, more=[
60 HTML('<a href="{{ view.cancel_url }}"'
61 ' class="btn btn-inverse">%s</a>' % cancel_label)
62 if cancel_label else None])
63
64 _fields = list(to_fieldsets(fields))
65 if buttons:
66 _fields += [to_row([(buttons, 12)])]
67 super(SaplFormLayout, self).__init__(*_fields)
68
69
70 def get_field_display(obj, fieldname):
71 field = ''
72 try:
73 field = obj._meta.get_field(fieldname)
74 except Exception as e:
75 """ nos casos que o fieldname não é um field_model,
76 ele pode ser um aggregate, annotate, um property, um manager,
77 ou mesmo uma método no model.
78 """
79 value = getattr(obj, fieldname)
80 try:
81 verbose_name = value.model._meta.verbose_name
82 except AttributeError:
83 verbose_name = ''
84
85 else:
86 verbose_name = str(field.verbose_name)\
87 if hasattr(field, 'verbose_name') else ''
88
89 if hasattr(field, 'choices') and field.choices:
90 value = getattr(obj, 'get_%s_display' % fieldname)()
91 else:
92 value = getattr(obj, fieldname)
93
94 str_type_from_value = str(type(value))
95 str_type_from_field = str(type(field))
96
97 if value is None:
98 display = ''
99 elif '.date' in str_type_from_value:
100 display = formats.date_format(value, "SHORT_DATE_FORMAT")
101 elif 'bool' in str_type_from_value:
102 display = _('Sim') if value else _('Não')
103 elif 'ImageFieldFile' in str(type(value)):
104 if value:
105 display = '<img src="{}" />'.format(value.url)
106 else:
107 display = ''
108 elif 'FieldFile' in str_type_from_value:
109 if value:
110 display = '<a href="{}">{}</a>'.format(
111 value.url,
112 value.name.split('/')[-1:][0])
113 else:
114 display = ''
115 elif 'ManyRelatedManager' in str_type_from_value\
116 or 'RelatedManager' in str_type_from_value\
117 or 'GenericRelatedObjectManager' in str_type_from_value:
118 display = '<ul>'
119 for v in value.all():
120 display += '<li>%s</li>' % str(v)
121 display += '</ul>'
122 if not verbose_name:
123 if hasattr(field, 'related_model'):
124 verbose_name = str(
125 field.related_model._meta.verbose_name_plural)
126 elif hasattr(field, 'model'):
127 verbose_name = str(field.model._meta.verbose_name_plural)
128 elif 'GenericForeignKey' in str_type_from_field:
129 display = '<a href="{}">{}</a>'.format(
130 reverse(
131 '%s:%s_detail' % (
132 value._meta.app_config.name, obj.content_type.model),
133 args=(value.id,)),
134 value)
135 else:
136 display = str(value)
137 return verbose_name, display
138
139
140 class CrispyLayoutFormMixin:
141
142 @property
143 def layout_key(self):
144 if hasattr(super(CrispyLayoutFormMixin, self), 'layout_key'):
145 return super(CrispyLayoutFormMixin, self).layout_key
146 else:
147 return self.model.__name__
148
149 @property
150 def layout_key_set(self):
151 if hasattr(super(CrispyLayoutFormMixin, self), 'layout_key_set'):
152 return super(CrispyLayoutFormMixin, self).layout_key_set
153 else:
154 obj = self.crud if hasattr(self, 'crud') else self
155 return getattr(obj.model,
156 obj.model_set).field.model.__name__
157
158 def get_layout(self):
159 yaml_layout = '%s/layouts.yaml' % self.model._meta.app_config.label
160 return read_layout_from_yaml(yaml_layout, self.layout_key)
161
162 def get_layout_set(self):
163 obj = self.crud if hasattr(self, 'crud') else self
164 yaml_layout = '%s/layouts.yaml' % getattr(
165 obj.model, obj.model_set).field.model._meta.app_config.label
166 return read_layout_from_yaml(yaml_layout, self.layout_key_set)
167
168 @property
169 def fields(self):
170 if hasattr(self, 'form_class') and self.form_class:
171 return None
172 else:
173 '''Returns all fields in the layout'''
174 return [fieldname for legend_rows in self.get_layout()
175 for row in legend_rows[1:]
176 for fieldname, span in row]
177
178 def get_form(self, form_class=None):
179 try:
180 form = super(CrispyLayoutFormMixin, self).get_form(form_class)
181 except AttributeError:
182 # simply return None if there is no get_form on super
183 pass
184 else:
185 if self.layout_key:
186 form.helper = FormHelper()
187 form.helper.layout = SaplFormLayout(*self.get_layout())
188 return form
189
190 @property
191 def list_field_names(self):
192 '''The list of field names to display on table
193
194 This base implementation returns the field names
195 in the first fieldset of the layout.
196 '''
197 obj = self.crud if hasattr(self, 'crud') else self
198 if hasattr(obj, 'list_field_names') and obj.list_field_names:
199 return obj.list_field_names
200 rows = self.get_layout()[0][1:]
201 return [fieldname for row in rows for fieldname, __ in row]
202
203 @property
204 def list_field_names_set(self):
205 '''The list of field names to display on table
206
207 This base implementation returns the field names
208 in the first fieldset of the layout.
209 '''
210 rows = self.get_layout_set()[0][1:]
211 return [fieldname for row in rows for fieldname, __ in row]
212
213 def get_column(self, fieldname, span):
214 obj = self.get_object()
215
216 func = None
217 if '|' in fieldname:
218 fieldname, func = tuple(fieldname.split('|'))
219
220 if func:
221 verbose_name, text = getattr(self, func)(obj, fieldname)
222 else:
223 verbose_name, text = get_field_display(obj, fieldname)
224
225 return {
226 'id': fieldname,
227 'span': span,
228 'verbose_name': verbose_name,
229 'text': text,
230 }
231
232 def fk_urlize_for_detail(self, obj, fieldname):
233
234 field = obj._meta.get_field(fieldname)
235 value = getattr(obj, fieldname)
236
237 display = '<a href="{}">{}</a>'.format(
238 reverse(
239 '%s:%s_detail' % (
240 value._meta.app_config.name, value._meta.model_name),
241 args=(value.id,)),
242 value)
243
244 return field.verbose_name, display
245
246 def m2m_urlize_for_detail(self, obj, fieldname):
247
248 manager, fieldname = tuple(fieldname.split('__'))
249
250 manager = getattr(obj, manager)
251
252 verbose_name = manager.model._meta.verbose_name
253 display = ''
254 for item in manager.all():
255 obj_m2m = getattr(item, fieldname)
256
257 if obj == obj_m2m:
258 continue
259
260 verbose_name = item._meta.get_field(fieldname).verbose_name
261
262 display += '<li><a href="{}">{}</a></li>'.format(
263 reverse(
264 '%s:%s_detail' % (
265 obj_m2m._meta.app_config.name, obj_m2m._meta.model_name),
266 args=(obj_m2m.id,)),
267 obj_m2m)
268
269 display += ''
270
271 if display:
272 display = '<ul>%s</ul>' % display
273 else:
274 verbose_name = ''
275
276 return verbose_name, display
277
278 @property
279 def layout_display(self):
280
281 return [
282 {'legend': legend,
283 'rows': [[self.get_column(fieldname, span)
284 for fieldname, span in row]
285 for row in rows]
286 } for legend, rows in heads_and_tails(self.get_layout())]
287
288
289 def read_yaml_from_file(yaml_layout):
290 # TODO cache this at application level
291 t = template.loader.get_template(yaml_layout)
292 # aqui é importante converter para str pois, dependendo do ambiente,
293 # o rtyaml pode usar yaml.CSafeLoader, que exige str ou stream
294 rendered = str(t.render())
295 return rtyaml.load(rendered)
296
297
298 def read_layout_from_yaml(yaml_layout, key):
299 # TODO cache this at application level
300 yaml = read_yaml_from_file(yaml_layout)
301 base = yaml[key]
302
303 def line_to_namespans(line):
304 split = [cell.split(':') for cell in line.split()]
305 namespans = [[s[0], int(s[1]) if len(s) > 1 else 0] for s in split]
306 remaining = 12 - sum(s for n, s in namespans)
307 nondefined = [ns for ns in namespans if not ns[1]]
308 while nondefined:
309 span = ceil(remaining / len(nondefined))
310 namespan = nondefined.pop(0)
311 namespan[1] = span
312 remaining = remaining - span
313 return list(map(tuple, namespans))
314
315 return [[legend] + [line_to_namespans(l) for l in lines]
316 for legend, lines in base.items()]
317
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sapl/crispy_layout_mixin.py b/sapl/crispy_layout_mixin.py
--- a/sapl/crispy_layout_mixin.py
+++ b/sapl/crispy_layout_mixin.py
@@ -132,6 +132,8 @@
value._meta.app_config.name, obj.content_type.model),
args=(value.id,)),
value)
+ elif 'TextField' in str_type_from_field:
+ display = value.replace('\n', '<br/>')
else:
display = str(value)
return verbose_name, display
| {"golden_diff": "diff --git a/sapl/crispy_layout_mixin.py b/sapl/crispy_layout_mixin.py\n--- a/sapl/crispy_layout_mixin.py\n+++ b/sapl/crispy_layout_mixin.py\n@@ -132,6 +132,8 @@\n value._meta.app_config.name, obj.content_type.model),\n args=(value.id,)),\n value)\n+ elif 'TextField' in str_type_from_field:\n+ display = value.replace('\\n', '<br/>')\n else:\n display = str(value)\n return verbose_name, display\n", "issue": "Formatar mudan\u00e7a de linha nos campos text do crud\nAs mudan\u00e7as de linha `\\n` dos campos TextField, ao que parece, est\u00e3o sendo exibidas nas telas de leitura do crud.\r\n\r\nPor exemplo no campo `observacao` de `DocumentoAdministrativo`.\n", "before_files": [{"content": "from math import ceil\n\nimport rtyaml\nfrom crispy_forms.bootstrap import FormActions\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import HTML, Div, Fieldset, Layout, Submit\nfrom django import template\nfrom django.core.urlresolvers import reverse, reverse_lazy\nfrom django.utils import formats\nfrom django.utils.translation import ugettext as _\n\n\ndef heads_and_tails(list_of_lists):\n for alist in list_of_lists:\n yield alist[0], alist[1:]\n\n\ndef to_column(name_span):\n fieldname, span = name_span\n return Div(fieldname, css_class='col-md-%d' % span)\n\n\ndef to_row(names_spans):\n return Div(*map(to_column, names_spans), css_class='row-fluid')\n\n\ndef to_fieldsets(fields):\n for field in fields:\n if isinstance(field, list):\n legend, row_specs = field[0], field[1:]\n rows = [to_row(name_span_list) for name_span_list in row_specs]\n yield Fieldset(legend, *rows)\n else:\n yield field\n\n\ndef form_actions(more=[Div(css_class='clearfix')],\n label=_('Salvar'), name='salvar', css_class='pull-right', disabled=True):\n\n if disabled:\n doubleclick = 'this.form.submit();this.disabled=true;'\n else:\n doubleclick = 'return true;'\n\n return FormActions(\n Submit(name, label, css_class=css_class,\n # para impedir resubmiss\u00e3o do form\n onclick=doubleclick),\n *more)\n\n\nclass SaplFormLayout(Layout):\n\n def __init__(self, *fields, cancel_label=_('Cancelar'),\n save_label=_('Salvar'), actions=None):\n\n buttons = actions\n if not buttons:\n buttons = form_actions(label=save_label, more=[\n HTML('<a href=\"{{ view.cancel_url }}\"'\n ' class=\"btn btn-inverse\">%s</a>' % cancel_label)\n if cancel_label else None])\n\n _fields = list(to_fieldsets(fields))\n if buttons:\n _fields += [to_row([(buttons, 12)])]\n super(SaplFormLayout, self).__init__(*_fields)\n\n\ndef get_field_display(obj, fieldname):\n field = ''\n try:\n field = obj._meta.get_field(fieldname)\n except Exception as e:\n \"\"\" nos casos que o fieldname n\u00e3o \u00e9 um field_model,\n ele pode ser um aggregate, annotate, um property, um manager,\n ou mesmo uma m\u00e9todo no model.\n \"\"\"\n value = getattr(obj, fieldname)\n try:\n verbose_name = value.model._meta.verbose_name\n except AttributeError:\n verbose_name = ''\n\n else:\n verbose_name = str(field.verbose_name)\\\n if hasattr(field, 'verbose_name') else ''\n\n if hasattr(field, 'choices') and field.choices:\n value = getattr(obj, 'get_%s_display' % fieldname)()\n else:\n value = getattr(obj, fieldname)\n\n str_type_from_value = str(type(value))\n str_type_from_field = str(type(field))\n\n if value is None:\n display = ''\n elif '.date' in str_type_from_value:\n display = formats.date_format(value, \"SHORT_DATE_FORMAT\")\n elif 'bool' in str_type_from_value:\n display = _('Sim') if value else _('N\u00e3o')\n elif 'ImageFieldFile' in 
str(type(value)):\n if value:\n display = '<img src=\"{}\" />'.format(value.url)\n else:\n display = ''\n elif 'FieldFile' in str_type_from_value:\n if value:\n display = '<a href=\"{}\">{}</a>'.format(\n value.url,\n value.name.split('/')[-1:][0])\n else:\n display = ''\n elif 'ManyRelatedManager' in str_type_from_value\\\n or 'RelatedManager' in str_type_from_value\\\n or 'GenericRelatedObjectManager' in str_type_from_value:\n display = '<ul>'\n for v in value.all():\n display += '<li>%s</li>' % str(v)\n display += '</ul>'\n if not verbose_name:\n if hasattr(field, 'related_model'):\n verbose_name = str(\n field.related_model._meta.verbose_name_plural)\n elif hasattr(field, 'model'):\n verbose_name = str(field.model._meta.verbose_name_plural)\n elif 'GenericForeignKey' in str_type_from_field:\n display = '<a href=\"{}\">{}</a>'.format(\n reverse(\n '%s:%s_detail' % (\n value._meta.app_config.name, obj.content_type.model),\n args=(value.id,)),\n value)\n else:\n display = str(value)\n return verbose_name, display\n\n\nclass CrispyLayoutFormMixin:\n\n @property\n def layout_key(self):\n if hasattr(super(CrispyLayoutFormMixin, self), 'layout_key'):\n return super(CrispyLayoutFormMixin, self).layout_key\n else:\n return self.model.__name__\n\n @property\n def layout_key_set(self):\n if hasattr(super(CrispyLayoutFormMixin, self), 'layout_key_set'):\n return super(CrispyLayoutFormMixin, self).layout_key_set\n else:\n obj = self.crud if hasattr(self, 'crud') else self\n return getattr(obj.model,\n obj.model_set).field.model.__name__\n\n def get_layout(self):\n yaml_layout = '%s/layouts.yaml' % self.model._meta.app_config.label\n return read_layout_from_yaml(yaml_layout, self.layout_key)\n\n def get_layout_set(self):\n obj = self.crud if hasattr(self, 'crud') else self\n yaml_layout = '%s/layouts.yaml' % getattr(\n obj.model, obj.model_set).field.model._meta.app_config.label\n return read_layout_from_yaml(yaml_layout, self.layout_key_set)\n\n @property\n def fields(self):\n if hasattr(self, 'form_class') and self.form_class:\n return None\n else:\n '''Returns all fields in the layout'''\n return [fieldname for legend_rows in self.get_layout()\n for row in legend_rows[1:]\n for fieldname, span in row]\n\n def get_form(self, form_class=None):\n try:\n form = super(CrispyLayoutFormMixin, self).get_form(form_class)\n except AttributeError:\n # simply return None if there is no get_form on super\n pass\n else:\n if self.layout_key:\n form.helper = FormHelper()\n form.helper.layout = SaplFormLayout(*self.get_layout())\n return form\n\n @property\n def list_field_names(self):\n '''The list of field names to display on table\n\n This base implementation returns the field names\n in the first fieldset of the layout.\n '''\n obj = self.crud if hasattr(self, 'crud') else self\n if hasattr(obj, 'list_field_names') and obj.list_field_names:\n return obj.list_field_names\n rows = self.get_layout()[0][1:]\n return [fieldname for row in rows for fieldname, __ in row]\n\n @property\n def list_field_names_set(self):\n '''The list of field names to display on table\n\n This base implementation returns the field names\n in the first fieldset of the layout.\n '''\n rows = self.get_layout_set()[0][1:]\n return [fieldname for row in rows for fieldname, __ in row]\n\n def get_column(self, fieldname, span):\n obj = self.get_object()\n\n func = None\n if '|' in fieldname:\n fieldname, func = tuple(fieldname.split('|'))\n\n if func:\n verbose_name, text = getattr(self, func)(obj, fieldname)\n else:\n verbose_name, text = 
get_field_display(obj, fieldname)\n\n return {\n 'id': fieldname,\n 'span': span,\n 'verbose_name': verbose_name,\n 'text': text,\n }\n\n def fk_urlize_for_detail(self, obj, fieldname):\n\n field = obj._meta.get_field(fieldname)\n value = getattr(obj, fieldname)\n\n display = '<a href=\"{}\">{}</a>'.format(\n reverse(\n '%s:%s_detail' % (\n value._meta.app_config.name, value._meta.model_name),\n args=(value.id,)),\n value)\n\n return field.verbose_name, display\n\n def m2m_urlize_for_detail(self, obj, fieldname):\n\n manager, fieldname = tuple(fieldname.split('__'))\n\n manager = getattr(obj, manager)\n\n verbose_name = manager.model._meta.verbose_name\n display = ''\n for item in manager.all():\n obj_m2m = getattr(item, fieldname)\n\n if obj == obj_m2m:\n continue\n\n verbose_name = item._meta.get_field(fieldname).verbose_name\n\n display += '<li><a href=\"{}\">{}</a></li>'.format(\n reverse(\n '%s:%s_detail' % (\n obj_m2m._meta.app_config.name, obj_m2m._meta.model_name),\n args=(obj_m2m.id,)),\n obj_m2m)\n\n display += ''\n\n if display:\n display = '<ul>%s</ul>' % display\n else:\n verbose_name = ''\n\n return verbose_name, display\n\n @property\n def layout_display(self):\n\n return [\n {'legend': legend,\n 'rows': [[self.get_column(fieldname, span)\n for fieldname, span in row]\n for row in rows]\n } for legend, rows in heads_and_tails(self.get_layout())]\n\n\ndef read_yaml_from_file(yaml_layout):\n # TODO cache this at application level\n t = template.loader.get_template(yaml_layout)\n # aqui \u00e9 importante converter para str pois, dependendo do ambiente,\n # o rtyaml pode usar yaml.CSafeLoader, que exige str ou stream\n rendered = str(t.render())\n return rtyaml.load(rendered)\n\n\ndef read_layout_from_yaml(yaml_layout, key):\n # TODO cache this at application level\n yaml = read_yaml_from_file(yaml_layout)\n base = yaml[key]\n\n def line_to_namespans(line):\n split = [cell.split(':') for cell in line.split()]\n namespans = [[s[0], int(s[1]) if len(s) > 1 else 0] for s in split]\n remaining = 12 - sum(s for n, s in namespans)\n nondefined = [ns for ns in namespans if not ns[1]]\n while nondefined:\n span = ceil(remaining / len(nondefined))\n namespan = nondefined.pop(0)\n namespan[1] = span\n remaining = remaining - span\n return list(map(tuple, namespans))\n\n return [[legend] + [line_to_namespans(l) for l in lines]\n for legend, lines in base.items()]\n", "path": "sapl/crispy_layout_mixin.py"}], "after_files": [{"content": "from math import ceil\n\nimport rtyaml\nfrom crispy_forms.bootstrap import FormActions\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import HTML, Div, Fieldset, Layout, Submit\nfrom django import template\nfrom django.core.urlresolvers import reverse, reverse_lazy\nfrom django.utils import formats\nfrom django.utils.translation import ugettext as _\n\n\ndef heads_and_tails(list_of_lists):\n for alist in list_of_lists:\n yield alist[0], alist[1:]\n\n\ndef to_column(name_span):\n fieldname, span = name_span\n return Div(fieldname, css_class='col-md-%d' % span)\n\n\ndef to_row(names_spans):\n return Div(*map(to_column, names_spans), css_class='row-fluid')\n\n\ndef to_fieldsets(fields):\n for field in fields:\n if isinstance(field, list):\n legend, row_specs = field[0], field[1:]\n rows = [to_row(name_span_list) for name_span_list in row_specs]\n yield Fieldset(legend, *rows)\n else:\n yield field\n\n\ndef form_actions(more=[Div(css_class='clearfix')],\n label=_('Salvar'), name='salvar', css_class='pull-right', disabled=True):\n\n if 
disabled:\n doubleclick = 'this.form.submit();this.disabled=true;'\n else:\n doubleclick = 'return true;'\n\n return FormActions(\n Submit(name, label, css_class=css_class,\n # para impedir resubmiss\u00e3o do form\n onclick=doubleclick),\n *more)\n\n\nclass SaplFormLayout(Layout):\n\n def __init__(self, *fields, cancel_label=_('Cancelar'),\n save_label=_('Salvar'), actions=None):\n\n buttons = actions\n if not buttons:\n buttons = form_actions(label=save_label, more=[\n HTML('<a href=\"{{ view.cancel_url }}\"'\n ' class=\"btn btn-inverse\">%s</a>' % cancel_label)\n if cancel_label else None])\n\n _fields = list(to_fieldsets(fields))\n if buttons:\n _fields += [to_row([(buttons, 12)])]\n super(SaplFormLayout, self).__init__(*_fields)\n\n\ndef get_field_display(obj, fieldname):\n field = ''\n try:\n field = obj._meta.get_field(fieldname)\n except Exception as e:\n \"\"\" nos casos que o fieldname n\u00e3o \u00e9 um field_model,\n ele pode ser um aggregate, annotate, um property, um manager,\n ou mesmo uma m\u00e9todo no model.\n \"\"\"\n value = getattr(obj, fieldname)\n try:\n verbose_name = value.model._meta.verbose_name\n except AttributeError:\n verbose_name = ''\n\n else:\n verbose_name = str(field.verbose_name)\\\n if hasattr(field, 'verbose_name') else ''\n\n if hasattr(field, 'choices') and field.choices:\n value = getattr(obj, 'get_%s_display' % fieldname)()\n else:\n value = getattr(obj, fieldname)\n\n str_type_from_value = str(type(value))\n str_type_from_field = str(type(field))\n\n if value is None:\n display = ''\n elif '.date' in str_type_from_value:\n display = formats.date_format(value, \"SHORT_DATE_FORMAT\")\n elif 'bool' in str_type_from_value:\n display = _('Sim') if value else _('N\u00e3o')\n elif 'ImageFieldFile' in str(type(value)):\n if value:\n display = '<img src=\"{}\" />'.format(value.url)\n else:\n display = ''\n elif 'FieldFile' in str_type_from_value:\n if value:\n display = '<a href=\"{}\">{}</a>'.format(\n value.url,\n value.name.split('/')[-1:][0])\n else:\n display = ''\n elif 'ManyRelatedManager' in str_type_from_value\\\n or 'RelatedManager' in str_type_from_value\\\n or 'GenericRelatedObjectManager' in str_type_from_value:\n display = '<ul>'\n for v in value.all():\n display += '<li>%s</li>' % str(v)\n display += '</ul>'\n if not verbose_name:\n if hasattr(field, 'related_model'):\n verbose_name = str(\n field.related_model._meta.verbose_name_plural)\n elif hasattr(field, 'model'):\n verbose_name = str(field.model._meta.verbose_name_plural)\n elif 'GenericForeignKey' in str_type_from_field:\n display = '<a href=\"{}\">{}</a>'.format(\n reverse(\n '%s:%s_detail' % (\n value._meta.app_config.name, obj.content_type.model),\n args=(value.id,)),\n value)\n elif 'TextField' in str_type_from_field:\n display = value.replace('\\n', '<br/>')\n else:\n display = str(value)\n return verbose_name, display\n\n\nclass CrispyLayoutFormMixin:\n\n @property\n def layout_key(self):\n if hasattr(super(CrispyLayoutFormMixin, self), 'layout_key'):\n return super(CrispyLayoutFormMixin, self).layout_key\n else:\n return self.model.__name__\n\n @property\n def layout_key_set(self):\n if hasattr(super(CrispyLayoutFormMixin, self), 'layout_key_set'):\n return super(CrispyLayoutFormMixin, self).layout_key_set\n else:\n obj = self.crud if hasattr(self, 'crud') else self\n return getattr(obj.model,\n obj.model_set).field.model.__name__\n\n def get_layout(self):\n yaml_layout = '%s/layouts.yaml' % self.model._meta.app_config.label\n return read_layout_from_yaml(yaml_layout, 
self.layout_key)\n\n def get_layout_set(self):\n obj = self.crud if hasattr(self, 'crud') else self\n yaml_layout = '%s/layouts.yaml' % getattr(\n obj.model, obj.model_set).field.model._meta.app_config.label\n return read_layout_from_yaml(yaml_layout, self.layout_key_set)\n\n @property\n def fields(self):\n if hasattr(self, 'form_class') and self.form_class:\n return None\n else:\n '''Returns all fields in the layout'''\n return [fieldname for legend_rows in self.get_layout()\n for row in legend_rows[1:]\n for fieldname, span in row]\n\n def get_form(self, form_class=None):\n try:\n form = super(CrispyLayoutFormMixin, self).get_form(form_class)\n except AttributeError:\n # simply return None if there is no get_form on super\n pass\n else:\n if self.layout_key:\n form.helper = FormHelper()\n form.helper.layout = SaplFormLayout(*self.get_layout())\n return form\n\n @property\n def list_field_names(self):\n '''The list of field names to display on table\n\n This base implementation returns the field names\n in the first fieldset of the layout.\n '''\n obj = self.crud if hasattr(self, 'crud') else self\n if hasattr(obj, 'list_field_names') and obj.list_field_names:\n return obj.list_field_names\n rows = self.get_layout()[0][1:]\n return [fieldname for row in rows for fieldname, __ in row]\n\n @property\n def list_field_names_set(self):\n '''The list of field names to display on table\n\n This base implementation returns the field names\n in the first fieldset of the layout.\n '''\n rows = self.get_layout_set()[0][1:]\n return [fieldname for row in rows for fieldname, __ in row]\n\n def get_column(self, fieldname, span):\n obj = self.get_object()\n\n func = None\n if '|' in fieldname:\n fieldname, func = tuple(fieldname.split('|'))\n\n if func:\n verbose_name, text = getattr(self, func)(obj, fieldname)\n else:\n verbose_name, text = get_field_display(obj, fieldname)\n\n return {\n 'id': fieldname,\n 'span': span,\n 'verbose_name': verbose_name,\n 'text': text,\n }\n\n def fk_urlize_for_detail(self, obj, fieldname):\n\n field = obj._meta.get_field(fieldname)\n value = getattr(obj, fieldname)\n\n display = '<a href=\"{}\">{}</a>'.format(\n reverse(\n '%s:%s_detail' % (\n value._meta.app_config.name, value._meta.model_name),\n args=(value.id,)),\n value)\n\n return field.verbose_name, display\n\n def m2m_urlize_for_detail(self, obj, fieldname):\n\n manager, fieldname = tuple(fieldname.split('__'))\n\n manager = getattr(obj, manager)\n\n verbose_name = manager.model._meta.verbose_name\n display = ''\n for item in manager.all():\n obj_m2m = getattr(item, fieldname)\n\n if obj == obj_m2m:\n continue\n\n verbose_name = item._meta.get_field(fieldname).verbose_name\n\n display += '<li><a href=\"{}\">{}</a></li>'.format(\n reverse(\n '%s:%s_detail' % (\n obj_m2m._meta.app_config.name, obj_m2m._meta.model_name),\n args=(obj_m2m.id,)),\n obj_m2m)\n\n display += ''\n\n if display:\n display = '<ul>%s</ul>' % display\n else:\n verbose_name = ''\n\n return verbose_name, display\n\n @property\n def layout_display(self):\n\n return [\n {'legend': legend,\n 'rows': [[self.get_column(fieldname, span)\n for fieldname, span in row]\n for row in rows]\n } for legend, rows in heads_and_tails(self.get_layout())]\n\n\ndef read_yaml_from_file(yaml_layout):\n # TODO cache this at application level\n t = template.loader.get_template(yaml_layout)\n # aqui \u00e9 importante converter para str pois, dependendo do ambiente,\n # o rtyaml pode usar yaml.CSafeLoader, que exige str ou stream\n rendered = str(t.render())\n return 
rtyaml.load(rendered)\n\n\ndef read_layout_from_yaml(yaml_layout, key):\n # TODO cache this at application level\n yaml = read_yaml_from_file(yaml_layout)\n base = yaml[key]\n\n def line_to_namespans(line):\n split = [cell.split(':') for cell in line.split()]\n namespans = [[s[0], int(s[1]) if len(s) > 1 else 0] for s in split]\n remaining = 12 - sum(s for n, s in namespans)\n nondefined = [ns for ns in namespans if not ns[1]]\n while nondefined:\n span = ceil(remaining / len(nondefined))\n namespan = nondefined.pop(0)\n namespan[1] = span\n remaining = remaining - span\n return list(map(tuple, namespans))\n\n return [[legend] + [line_to_namespans(l) for l in lines]\n for legend, lines in base.items()]\n", "path": "sapl/crispy_layout_mixin.py"}]} |
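The golden diff for this record adds a `TextField` branch to `get_field_display` that rewrites `\n` as `<br/>` before the value is rendered on the detail screens. A minimal standalone illustration of that transformation follows; the helper name and the sample `observacao` text are invented for the example and are not taken from the SAPL codebase.

```python
def render_text_field(value: str) -> str:
    # Same transformation the patch applies to TextField values:
    # newlines become HTML line breaks before display.
    return value.replace("\n", "<br/>")

# Placeholder content standing in for a DocumentoAdministrativo.observacao value.
observacao = "Primeira linha\nSegunda linha"
print(render_text_field(observacao))  # -> Primeira linha<br/>Segunda linha
```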
gh_patches_debug_1095 | rasdani/github-patches | git_diff | graspologic-org__graspologic-491 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add tutorial for MASE
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `graspologic/embed/mase.py`
Content:
```
1 # Copyright (c) Microsoft Corporation and contributors.
2 # Licensed under the MIT License.
3
4 import numpy as np
5 from sklearn.utils.validation import check_is_fitted
6
7 from ..utils import import_graph, is_almost_symmetric
8 from .base import BaseEmbedMulti
9 from .svd import select_dimension, selectSVD
10
11
12 class MultipleASE(BaseEmbedMulti):
13 r"""
14 Multiple Adjacency Spectral Embedding (MASE) embeds arbitrary number of input
15 graphs with matched vertex sets.
16
17 For a population of undirected graphs, MASE assumes that the population of graphs
18 is sampled from :math:`VR^{(i)}V^T` where :math:`V \in \mathbb{R}^{n\times d}` and
19 :math:`R^{(i)} \in \mathbb{R}^{d\times d}`. Score matrices, :math:`R^{(i)}`, are
20 allowed to vary for each graph, but are symmetric. All graphs share a common a
21 latent position matrix :math:`V`.
22
23 For a population of directed graphs, MASE assumes that the population is sampled
24 from :math:`UR^{(i)}V^T` where :math:`U \in \mathbb{R}^{n\times d_1}`,
25 :math:`V \in \mathbb{R}^{n\times d_2}`, and
26 :math:`R^{(i)} \in \mathbb{R}^{d_1\times d_2}`. In this case, score matrices
27 :math:`R^{(i)}` can be assymetric and non-square, but all graphs still share a
28 common latent position matrices :math:`U` and :math:`V`.
29
30 Parameters
31 ----------
32 n_components : int or None, default = None
33 Desired dimensionality of output data. If "full",
34 ``n_components`` must be ``<= min(X.shape)``. Otherwise, ``n_components`` must be
35 ``< min(X.shape)``. If None, then optimal dimensions will be chosen by
36 :func:`~graspologic.embed.select_dimension` using ``n_elbows`` argument.
37
38 n_elbows : int, optional, default: 2
39 If ``n_components`` is None, then compute the optimal embedding dimension using
40 :func:`~graspologic.embed.select_dimension`. Otherwise, ignored.
41
42 algorithm : {'randomized' (default), 'full', 'truncated'}, optional
43 SVD solver to use:
44
45 - 'randomized'
46 Computes randomized svd using
47 :func:`sklearn.utils.extmath.randomized_svd`
48 - 'full'
49 Computes full svd using :func:`scipy.linalg.svd`
50 - 'truncated'
51 Computes truncated svd using :func:`scipy.sparse.linalg.svds`
52
53 n_iter : int, optional (default = 5)
54 Number of iterations for randomized SVD solver. Not used by 'full' or
55 'truncated'. The default is larger than the default in randomized_svd
56 to handle sparse matrices that may have large slowly decaying spectrum.
57
58 scaled : bool, optional (default=True)
59 Whether to scale individual eigenvectors with eigenvalues in first embedding
60 stage.
61
62 diag_aug : bool, optional (default = True)
63 Whether to replace the main diagonal of each adjacency matrices with
64 a vector corresponding to the degree (or sum of edge weights for a
65 weighted network) before embedding.
66
67 concat : bool, optional (default False)
68 If graph(s) are directed, whether to concatenate each graph's left and right (out and in) latent positions
69 along axis 1.
70
71
72 Attributes
73 ----------
74 n_graphs_ : int
75 Number of graphs
76
77 n_vertices_ : int
78 Number of vertices in each graph
79
80 latent_left_ : array, shape (n_samples, n_components)
81 Estimated left latent positions of the graph.
82
83 latent_right_ : array, shape (n_samples, n_components), or None
84 Estimated right latent positions of the graph. Only computed when the an input
85 graph is directed, or adjacency matrix is assymetric. Otherwise, None.
86
87 scores_ : array, shape (n_samples, n_components, n_components)
88 Estimated :math:`\hat{R}` matrices for each input graph.
89
90 singular_values_ : array, shape (n_components) OR length 2 tuple of arrays
91 If input graph is undirected, equal to the singular values of the concatenated
92 adjacency spectral embeddings. If input graph is directed, :attr:`singular_values_`
93 is a tuple of length 2, where :attr:`singular_values_[0]` corresponds to
94 the singular values of the concatenated left adjacency spectral embeddings,
95 and :attr:`singular_values_[1]` corresponds to
96 the singular values of the concatenated right adjacency spectral embeddings.
97
98 Notes
99 -----
100 When an input graph is directed, ``n_components`` of :attr:`latent_left_` may not be equal
101 to ``n_components`` of :attr:`latent_right_`.
102 """
103
104 def __init__(
105 self,
106 n_components=None,
107 n_elbows=2,
108 algorithm="randomized",
109 n_iter=5,
110 scaled=True,
111 diag_aug=True,
112 concat=False,
113 ):
114 if not isinstance(scaled, bool):
115 msg = "scaled must be a boolean, not {}".format(scaled)
116 raise TypeError(msg)
117
118 super().__init__(
119 n_components=n_components,
120 n_elbows=n_elbows,
121 algorithm=algorithm,
122 n_iter=n_iter,
123 diag_aug=diag_aug,
124 concat=concat,
125 )
126 self.scaled = scaled
127
128 def _reduce_dim(self, graphs):
129 # first embed into log2(n_vertices) for each graph
130 n_components = int(np.ceil(np.log2(np.min(self.n_vertices_))))
131
132 # embed individual graphs
133 embeddings = [
134 selectSVD(
135 graph,
136 n_components=n_components,
137 algorithm=self.algorithm,
138 n_iter=self.n_iter,
139 )
140 for graph in graphs
141 ]
142 Us, Ds, Vs = zip(*embeddings)
143
144 # Choose the best embedding dimension for each graphs
145 if self.n_components is None:
146 embedding_dimensions = []
147 for D in Ds:
148 elbows, _ = select_dimension(D, n_elbows=self.n_elbows)
149 embedding_dimensions.append(elbows[-1])
150
151 # Choose the max of all of best embedding dimension of all graphs
152 best_dimension = int(np.ceil(np.max(embedding_dimensions)))
153 else:
154 best_dimension = self.n_components
155
156 if not self.scaled:
157 Us = np.hstack([U[:, :best_dimension] for U in Us])
158 Vs = np.hstack([V.T[:, :best_dimension] for V in Vs])
159 else:
160 # Equivalent to ASE
161 Us = np.hstack(
162 [
163 U[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))
164 for U, D in zip(Us, Ds)
165 ]
166 )
167 Vs = np.hstack(
168 [
169 V.T[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))
170 for V, D in zip(Vs, Ds)
171 ]
172 )
173
174 # Second SVD for vertices
175 # The notation is slightly different than the paper
176 Uhat, sing_vals_left, _ = selectSVD(
177 Us,
178 n_components=self.n_components,
179 n_elbows=self.n_elbows,
180 algorithm=self.algorithm,
181 n_iter=self.n_iter,
182 )
183
184 Vhat, sing_vals_right, _ = selectSVD(
185 Vs,
186 n_components=self.n_components,
187 n_elbows=self.n_elbows,
188 algorithm=self.algorithm,
189 n_iter=self.n_iter,
190 )
191 return Uhat, Vhat, sing_vals_left, sing_vals_right
192
193 def fit(self, graphs, y=None):
194 """
195 Fit the model with graphs.
196
197 Parameters
198 ----------
199 graphs : list of nx.Graph or ndarray, or ndarray
200 If list of nx.Graph, each Graph must contain same number of nodes.
201 If list of ndarray, each array must have shape (n_vertices, n_vertices).
202 If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).
203
204 Returns
205 -------
206 self : object
207 Returns an instance of self.
208 """
209 graphs = self._check_input_graphs(graphs)
210
211 # Check if undirected
212 undirected = all(is_almost_symmetric(g) for g in graphs)
213
214 # Diag augment
215 if self.diag_aug:
216 graphs = self._diag_aug(graphs)
217
218 # embed
219 Uhat, Vhat, sing_vals_left, sing_vals_right = self._reduce_dim(graphs)
220 self.latent_left_ = Uhat
221 if not undirected:
222 self.latent_right_ = Vhat
223 self.scores_ = Uhat.T @ graphs @ Vhat
224 self.singular_values_ = (sing_vals_left, sing_vals_right)
225 else:
226 self.latent_right_ = None
227 self.scores_ = Uhat.T @ graphs @ Uhat
228 self.singular_values_ = sing_vals_left
229
230 return self
231
232 def fit_transform(self, graphs, y=None):
233 """
234 Fit the model with graphs and apply the embedding on graphs.
235 n_components is either automatically determined or based on user input.
236
237 Parameters
238 ----------
239 graphs : list of nx.Graph or ndarray, or ndarray
240 If list of nx.Graph, each Graph must contain same number of nodes.
241 If list of ndarray, each array must have shape (n_vertices, n_vertices).
242 If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).
243
244 Returns
245 -------
246 out : np.ndarray or length 2 tuple of np.ndarray.
247 If input graphs were symmetric shape (n_vertices, n_components).
248 If graphs were directed and ``concat`` is False, returns tuple of two arrays (same shape as above).
249 The first corresponds to the left latent positions, and the second to the right latent positions.
250 When ``concat`` is True left and right (out and in) latent positions are concatenated along axis 1.
251 """
252 return self._fit_transform(graphs)
253
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/graspologic/embed/mase.py b/graspologic/embed/mase.py
--- a/graspologic/embed/mase.py
+++ b/graspologic/embed/mase.py
@@ -27,6 +27,8 @@
:math:`R^{(i)}` can be assymetric and non-square, but all graphs still share a
common latent position matrices :math:`U` and :math:`V`.
+ Read more in the :ref:`tutorials <embed_tutorials>`
+
Parameters
----------
n_components : int or None, default = None
| {"golden_diff": "diff --git a/graspologic/embed/mase.py b/graspologic/embed/mase.py\n--- a/graspologic/embed/mase.py\n+++ b/graspologic/embed/mase.py\n@@ -27,6 +27,8 @@\n :math:`R^{(i)}` can be assymetric and non-square, but all graphs still share a\n common latent position matrices :math:`U` and :math:`V`.\n \n+ Read more in the :ref:`tutorials <embed_tutorials>`\n+\n Parameters\n ----------\n n_components : int or None, default = None\n", "issue": "Add tutorial for MASE\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation and contributors.\n# Licensed under the MIT License.\n\nimport numpy as np\nfrom sklearn.utils.validation import check_is_fitted\n\nfrom ..utils import import_graph, is_almost_symmetric\nfrom .base import BaseEmbedMulti\nfrom .svd import select_dimension, selectSVD\n\n\nclass MultipleASE(BaseEmbedMulti):\n r\"\"\"\n Multiple Adjacency Spectral Embedding (MASE) embeds arbitrary number of input\n graphs with matched vertex sets.\n\n For a population of undirected graphs, MASE assumes that the population of graphs\n is sampled from :math:`VR^{(i)}V^T` where :math:`V \\in \\mathbb{R}^{n\\times d}` and\n :math:`R^{(i)} \\in \\mathbb{R}^{d\\times d}`. Score matrices, :math:`R^{(i)}`, are\n allowed to vary for each graph, but are symmetric. All graphs share a common a\n latent position matrix :math:`V`.\n\n For a population of directed graphs, MASE assumes that the population is sampled\n from :math:`UR^{(i)}V^T` where :math:`U \\in \\mathbb{R}^{n\\times d_1}`,\n :math:`V \\in \\mathbb{R}^{n\\times d_2}`, and\n :math:`R^{(i)} \\in \\mathbb{R}^{d_1\\times d_2}`. In this case, score matrices\n :math:`R^{(i)}` can be assymetric and non-square, but all graphs still share a\n common latent position matrices :math:`U` and :math:`V`.\n\n Parameters\n ----------\n n_components : int or None, default = None\n Desired dimensionality of output data. If \"full\",\n ``n_components`` must be ``<= min(X.shape)``. Otherwise, ``n_components`` must be\n ``< min(X.shape)``. If None, then optimal dimensions will be chosen by\n :func:`~graspologic.embed.select_dimension` using ``n_elbows`` argument.\n\n n_elbows : int, optional, default: 2\n If ``n_components`` is None, then compute the optimal embedding dimension using\n :func:`~graspologic.embed.select_dimension`. Otherwise, ignored.\n\n algorithm : {'randomized' (default), 'full', 'truncated'}, optional\n SVD solver to use:\n\n - 'randomized'\n Computes randomized svd using\n :func:`sklearn.utils.extmath.randomized_svd`\n - 'full'\n Computes full svd using :func:`scipy.linalg.svd`\n - 'truncated'\n Computes truncated svd using :func:`scipy.sparse.linalg.svds`\n\n n_iter : int, optional (default = 5)\n Number of iterations for randomized SVD solver. Not used by 'full' or\n 'truncated'. 
The default is larger than the default in randomized_svd\n to handle sparse matrices that may have large slowly decaying spectrum.\n\n scaled : bool, optional (default=True)\n Whether to scale individual eigenvectors with eigenvalues in first embedding\n stage.\n\n diag_aug : bool, optional (default = True)\n Whether to replace the main diagonal of each adjacency matrices with\n a vector corresponding to the degree (or sum of edge weights for a\n weighted network) before embedding.\n\n concat : bool, optional (default False)\n If graph(s) are directed, whether to concatenate each graph's left and right (out and in) latent positions\n along axis 1.\n\n\n Attributes\n ----------\n n_graphs_ : int\n Number of graphs\n\n n_vertices_ : int\n Number of vertices in each graph\n\n latent_left_ : array, shape (n_samples, n_components)\n Estimated left latent positions of the graph.\n\n latent_right_ : array, shape (n_samples, n_components), or None\n Estimated right latent positions of the graph. Only computed when the an input\n graph is directed, or adjacency matrix is assymetric. Otherwise, None.\n\n scores_ : array, shape (n_samples, n_components, n_components)\n Estimated :math:`\\hat{R}` matrices for each input graph.\n\n singular_values_ : array, shape (n_components) OR length 2 tuple of arrays\n If input graph is undirected, equal to the singular values of the concatenated\n adjacency spectral embeddings. If input graph is directed, :attr:`singular_values_`\n is a tuple of length 2, where :attr:`singular_values_[0]` corresponds to\n the singular values of the concatenated left adjacency spectral embeddings,\n and :attr:`singular_values_[1]` corresponds to\n the singular values of the concatenated right adjacency spectral embeddings.\n\n Notes\n -----\n When an input graph is directed, ``n_components`` of :attr:`latent_left_` may not be equal\n to ``n_components`` of :attr:`latent_right_`.\n \"\"\"\n\n def __init__(\n self,\n n_components=None,\n n_elbows=2,\n algorithm=\"randomized\",\n n_iter=5,\n scaled=True,\n diag_aug=True,\n concat=False,\n ):\n if not isinstance(scaled, bool):\n msg = \"scaled must be a boolean, not {}\".format(scaled)\n raise TypeError(msg)\n\n super().__init__(\n n_components=n_components,\n n_elbows=n_elbows,\n algorithm=algorithm,\n n_iter=n_iter,\n diag_aug=diag_aug,\n concat=concat,\n )\n self.scaled = scaled\n\n def _reduce_dim(self, graphs):\n # first embed into log2(n_vertices) for each graph\n n_components = int(np.ceil(np.log2(np.min(self.n_vertices_))))\n\n # embed individual graphs\n embeddings = [\n selectSVD(\n graph,\n n_components=n_components,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n for graph in graphs\n ]\n Us, Ds, Vs = zip(*embeddings)\n\n # Choose the best embedding dimension for each graphs\n if self.n_components is None:\n embedding_dimensions = []\n for D in Ds:\n elbows, _ = select_dimension(D, n_elbows=self.n_elbows)\n embedding_dimensions.append(elbows[-1])\n\n # Choose the max of all of best embedding dimension of all graphs\n best_dimension = int(np.ceil(np.max(embedding_dimensions)))\n else:\n best_dimension = self.n_components\n\n if not self.scaled:\n Us = np.hstack([U[:, :best_dimension] for U in Us])\n Vs = np.hstack([V.T[:, :best_dimension] for V in Vs])\n else:\n # Equivalent to ASE\n Us = np.hstack(\n [\n U[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))\n for U, D in zip(Us, Ds)\n ]\n )\n Vs = np.hstack(\n [\n V.T[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))\n for V, D in zip(Vs, Ds)\n ]\n 
)\n\n # Second SVD for vertices\n # The notation is slightly different than the paper\n Uhat, sing_vals_left, _ = selectSVD(\n Us,\n n_components=self.n_components,\n n_elbows=self.n_elbows,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n\n Vhat, sing_vals_right, _ = selectSVD(\n Vs,\n n_components=self.n_components,\n n_elbows=self.n_elbows,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n return Uhat, Vhat, sing_vals_left, sing_vals_right\n\n def fit(self, graphs, y=None):\n \"\"\"\n Fit the model with graphs.\n\n Parameters\n ----------\n graphs : list of nx.Graph or ndarray, or ndarray\n If list of nx.Graph, each Graph must contain same number of nodes.\n If list of ndarray, each array must have shape (n_vertices, n_vertices).\n If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).\n\n Returns\n -------\n self : object\n Returns an instance of self.\n \"\"\"\n graphs = self._check_input_graphs(graphs)\n\n # Check if undirected\n undirected = all(is_almost_symmetric(g) for g in graphs)\n\n # Diag augment\n if self.diag_aug:\n graphs = self._diag_aug(graphs)\n\n # embed\n Uhat, Vhat, sing_vals_left, sing_vals_right = self._reduce_dim(graphs)\n self.latent_left_ = Uhat\n if not undirected:\n self.latent_right_ = Vhat\n self.scores_ = Uhat.T @ graphs @ Vhat\n self.singular_values_ = (sing_vals_left, sing_vals_right)\n else:\n self.latent_right_ = None\n self.scores_ = Uhat.T @ graphs @ Uhat\n self.singular_values_ = sing_vals_left\n\n return self\n\n def fit_transform(self, graphs, y=None):\n \"\"\"\n Fit the model with graphs and apply the embedding on graphs.\n n_components is either automatically determined or based on user input.\n\n Parameters\n ----------\n graphs : list of nx.Graph or ndarray, or ndarray\n If list of nx.Graph, each Graph must contain same number of nodes.\n If list of ndarray, each array must have shape (n_vertices, n_vertices).\n If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).\n\n Returns\n -------\n out : np.ndarray or length 2 tuple of np.ndarray.\n If input graphs were symmetric shape (n_vertices, n_components).\n If graphs were directed and ``concat`` is False, returns tuple of two arrays (same shape as above).\n The first corresponds to the left latent positions, and the second to the right latent positions.\n When ``concat`` is True left and right (out and in) latent positions are concatenated along axis 1.\n \"\"\"\n return self._fit_transform(graphs)\n", "path": "graspologic/embed/mase.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation and contributors.\n# Licensed under the MIT License.\n\nimport numpy as np\nfrom sklearn.utils.validation import check_is_fitted\n\nfrom ..utils import import_graph, is_almost_symmetric\nfrom .base import BaseEmbedMulti\nfrom .svd import select_dimension, selectSVD\n\n\nclass MultipleASE(BaseEmbedMulti):\n r\"\"\"\n Multiple Adjacency Spectral Embedding (MASE) embeds arbitrary number of input\n graphs with matched vertex sets.\n\n For a population of undirected graphs, MASE assumes that the population of graphs\n is sampled from :math:`VR^{(i)}V^T` where :math:`V \\in \\mathbb{R}^{n\\times d}` and\n :math:`R^{(i)} \\in \\mathbb{R}^{d\\times d}`. Score matrices, :math:`R^{(i)}`, are\n allowed to vary for each graph, but are symmetric. 
All graphs share a common a\n latent position matrix :math:`V`.\n\n For a population of directed graphs, MASE assumes that the population is sampled\n from :math:`UR^{(i)}V^T` where :math:`U \\in \\mathbb{R}^{n\\times d_1}`,\n :math:`V \\in \\mathbb{R}^{n\\times d_2}`, and\n :math:`R^{(i)} \\in \\mathbb{R}^{d_1\\times d_2}`. In this case, score matrices\n :math:`R^{(i)}` can be assymetric and non-square, but all graphs still share a\n common latent position matrices :math:`U` and :math:`V`.\n\n Read more in the :ref:`tutorials <embed_tutorials>`\n\n Parameters\n ----------\n n_components : int or None, default = None\n Desired dimensionality of output data. If \"full\",\n ``n_components`` must be ``<= min(X.shape)``. Otherwise, ``n_components`` must be\n ``< min(X.shape)``. If None, then optimal dimensions will be chosen by\n :func:`~graspologic.embed.select_dimension` using ``n_elbows`` argument.\n\n n_elbows : int, optional, default: 2\n If ``n_components`` is None, then compute the optimal embedding dimension using\n :func:`~graspologic.embed.select_dimension`. Otherwise, ignored.\n\n algorithm : {'randomized' (default), 'full', 'truncated'}, optional\n SVD solver to use:\n\n - 'randomized'\n Computes randomized svd using\n :func:`sklearn.utils.extmath.randomized_svd`\n - 'full'\n Computes full svd using :func:`scipy.linalg.svd`\n - 'truncated'\n Computes truncated svd using :func:`scipy.sparse.linalg.svds`\n\n n_iter : int, optional (default = 5)\n Number of iterations for randomized SVD solver. Not used by 'full' or\n 'truncated'. The default is larger than the default in randomized_svd\n to handle sparse matrices that may have large slowly decaying spectrum.\n\n scaled : bool, optional (default=True)\n Whether to scale individual eigenvectors with eigenvalues in first embedding\n stage.\n\n diag_aug : bool, optional (default = True)\n Whether to replace the main diagonal of each adjacency matrices with\n a vector corresponding to the degree (or sum of edge weights for a\n weighted network) before embedding.\n\n concat : bool, optional (default False)\n If graph(s) are directed, whether to concatenate each graph's left and right (out and in) latent positions\n along axis 1.\n\n\n Attributes\n ----------\n n_graphs_ : int\n Number of graphs\n\n n_vertices_ : int\n Number of vertices in each graph\n\n latent_left_ : array, shape (n_samples, n_components)\n Estimated left latent positions of the graph.\n\n latent_right_ : array, shape (n_samples, n_components), or None\n Estimated right latent positions of the graph. Only computed when the an input\n graph is directed, or adjacency matrix is assymetric. Otherwise, None.\n\n scores_ : array, shape (n_samples, n_components, n_components)\n Estimated :math:`\\hat{R}` matrices for each input graph.\n\n singular_values_ : array, shape (n_components) OR length 2 tuple of arrays\n If input graph is undirected, equal to the singular values of the concatenated\n adjacency spectral embeddings. 
If input graph is directed, :attr:`singular_values_`\n is a tuple of length 2, where :attr:`singular_values_[0]` corresponds to\n the singular values of the concatenated left adjacency spectral embeddings,\n and :attr:`singular_values_[1]` corresponds to\n the singular values of the concatenated right adjacency spectral embeddings.\n\n Notes\n -----\n When an input graph is directed, ``n_components`` of :attr:`latent_left_` may not be equal\n to ``n_components`` of :attr:`latent_right_`.\n \"\"\"\n\n def __init__(\n self,\n n_components=None,\n n_elbows=2,\n algorithm=\"randomized\",\n n_iter=5,\n scaled=True,\n diag_aug=True,\n concat=False,\n ):\n if not isinstance(scaled, bool):\n msg = \"scaled must be a boolean, not {}\".format(scaled)\n raise TypeError(msg)\n\n super().__init__(\n n_components=n_components,\n n_elbows=n_elbows,\n algorithm=algorithm,\n n_iter=n_iter,\n diag_aug=diag_aug,\n concat=concat,\n )\n self.scaled = scaled\n\n def _reduce_dim(self, graphs):\n # first embed into log2(n_vertices) for each graph\n n_components = int(np.ceil(np.log2(np.min(self.n_vertices_))))\n\n # embed individual graphs\n embeddings = [\n selectSVD(\n graph,\n n_components=n_components,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n for graph in graphs\n ]\n Us, Ds, Vs = zip(*embeddings)\n\n # Choose the best embedding dimension for each graphs\n if self.n_components is None:\n embedding_dimensions = []\n for D in Ds:\n elbows, _ = select_dimension(D, n_elbows=self.n_elbows)\n embedding_dimensions.append(elbows[-1])\n\n # Choose the max of all of best embedding dimension of all graphs\n best_dimension = int(np.ceil(np.max(embedding_dimensions)))\n else:\n best_dimension = self.n_components\n\n if not self.scaled:\n Us = np.hstack([U[:, :best_dimension] for U in Us])\n Vs = np.hstack([V.T[:, :best_dimension] for V in Vs])\n else:\n # Equivalent to ASE\n Us = np.hstack(\n [\n U[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))\n for U, D in zip(Us, Ds)\n ]\n )\n Vs = np.hstack(\n [\n V.T[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))\n for V, D in zip(Vs, Ds)\n ]\n )\n\n # Second SVD for vertices\n # The notation is slightly different than the paper\n Uhat, sing_vals_left, _ = selectSVD(\n Us,\n n_components=self.n_components,\n n_elbows=self.n_elbows,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n\n Vhat, sing_vals_right, _ = selectSVD(\n Vs,\n n_components=self.n_components,\n n_elbows=self.n_elbows,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n return Uhat, Vhat, sing_vals_left, sing_vals_right\n\n def fit(self, graphs, y=None):\n \"\"\"\n Fit the model with graphs.\n\n Parameters\n ----------\n graphs : list of nx.Graph or ndarray, or ndarray\n If list of nx.Graph, each Graph must contain same number of nodes.\n If list of ndarray, each array must have shape (n_vertices, n_vertices).\n If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).\n\n Returns\n -------\n self : object\n Returns an instance of self.\n \"\"\"\n graphs = self._check_input_graphs(graphs)\n\n # Check if undirected\n undirected = all(is_almost_symmetric(g) for g in graphs)\n\n # Diag augment\n if self.diag_aug:\n graphs = self._diag_aug(graphs)\n\n # embed\n Uhat, Vhat, sing_vals_left, sing_vals_right = self._reduce_dim(graphs)\n self.latent_left_ = Uhat\n if not undirected:\n self.latent_right_ = Vhat\n self.scores_ = Uhat.T @ graphs @ Vhat\n self.singular_values_ = (sing_vals_left, sing_vals_right)\n else:\n self.latent_right_ = None\n self.scores_ = 
Uhat.T @ graphs @ Uhat\n self.singular_values_ = sing_vals_left\n\n return self\n\n def fit_transform(self, graphs, y=None):\n \"\"\"\n Fit the model with graphs and apply the embedding on graphs.\n n_components is either automatically determined or based on user input.\n\n Parameters\n ----------\n graphs : list of nx.Graph or ndarray, or ndarray\n If list of nx.Graph, each Graph must contain same number of nodes.\n If list of ndarray, each array must have shape (n_vertices, n_vertices).\n If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).\n\n Returns\n -------\n out : np.ndarray or length 2 tuple of np.ndarray.\n If input graphs were symmetric shape (n_vertices, n_components).\n If graphs were directed and ``concat`` is False, returns tuple of two arrays (same shape as above).\n The first corresponds to the left latent positions, and the second to the right latent positions.\n When ``concat`` is True left and right (out and in) latent positions are concatenated along axis 1.\n \"\"\"\n return self._fit_transform(graphs)\n", "path": "graspologic/embed/mase.py"}]} |
gh_patches_debug_1096 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-1757 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update Success Message on "Your contact information" page
### Issue description
On "Your contact information page" in the Domain Management pages we have explainer text that says: Updating your contact information here will update the contact information for all domains in your account.
However, in the success message we say: "Your contact information for this domain has been updated." This is misleading.
### Acceptance criteria
- [x] Change the success text to say "Your contact information for all your domains has been updated."
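A minimal sketch of the one-line change this criterion implies, based on the `DomainYourContactInformationView.form_valid` method in `src/registrar/views/domain.py` shown below (the class, method, and message framework are taken from that listing; this is illustrative only, not the final patch):

```python
# src/registrar/views/domain.py (sketch of the intended change)
from django.contrib import messages


class DomainYourContactInformationView(DomainFormBaseView):
    def form_valid(self, form):
        """The form is valid, call setter in model."""
        # Post to DB using values from the form
        form.save()

        # The contact record is shared across all of the user's domains,
        # so the confirmation message should say so.
        messages.success(self.request, "Your contact information for all your domains has been updated.")

        # superclass has the redirect
        return super().form_valid(form)
```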
### Additional context
_No response_
### Links to other issues
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/registrar/views/domain.py`
Content:
```
1 """Views for a single Domain.
2
3 Authorization is handled by the `DomainPermissionView`. To ensure that only
4 authorized users can see information on a domain, every view here should
5 inherit from `DomainPermissionView` (or DomainInvitationPermissionDeleteView).
6 """
7
8 import logging
9
10 from django.contrib import messages
11 from django.contrib.messages.views import SuccessMessageMixin
12 from django.db import IntegrityError
13 from django.http import HttpResponseRedirect
14 from django.shortcuts import redirect
15 from django.urls import reverse
16 from django.views.generic.edit import FormMixin
17
18 from registrar.models import (
19 Domain,
20 DomainInvitation,
21 User,
22 UserDomainRole,
23 )
24 from registrar.models.public_contact import PublicContact
25 from registrar.utility.enums import DefaultEmail
26 from registrar.utility.errors import (
27 GenericError,
28 GenericErrorCodes,
29 NameserverError,
30 NameserverErrorCodes as nsErrorCodes,
31 DsDataError,
32 DsDataErrorCodes,
33 SecurityEmailError,
34 SecurityEmailErrorCodes,
35 )
36 from registrar.models.utility.contact_error import ContactError
37 from registrar.views.utility.permission_views import UserDomainRolePermissionDeleteView
38
39 from ..forms import (
40 ContactForm,
41 AuthorizingOfficialContactForm,
42 DomainOrgNameAddressForm,
43 DomainAddUserForm,
44 DomainSecurityEmailForm,
45 NameserverFormset,
46 DomainDnssecForm,
47 DomainDsdataFormset,
48 DomainDsdataForm,
49 )
50
51 from epplibwrapper import (
52 common,
53 extensions,
54 RegistryError,
55 )
56
57 from ..utility.email import send_templated_email, EmailSendingError
58 from .utility import DomainPermissionView, DomainInvitationPermissionDeleteView
59
60
61 logger = logging.getLogger(__name__)
62
63
64 class DomainBaseView(DomainPermissionView):
65 """
66 Base View for the Domain. Handles getting and setting the domain
67 in session cache on GETs. Also provides methods for getting
68 and setting the domain in cache
69 """
70
71 def get(self, request, *args, **kwargs):
72 self._get_domain(request)
73 context = self.get_context_data(object=self.object)
74 return self.render_to_response(context)
75
76 def _get_domain(self, request):
77 """
78 get domain from session cache or from db and set
79 to self.object
80 set session to self for downstream functions to
81 update session cache
82 """
83 self.session = request.session
84 # domain:private_key is the session key to use for
85 # caching the domain in the session
86 domain_pk = "domain:" + str(self.kwargs.get("pk"))
87 cached_domain = self.session.get(domain_pk)
88
89 if cached_domain:
90 self.object = cached_domain
91 else:
92 self.object = self.get_object()
93 self._update_session_with_domain()
94
95 def _update_session_with_domain(self):
96 """
97 update domain in the session cache
98 """
99 domain_pk = "domain:" + str(self.kwargs.get("pk"))
100 self.session[domain_pk] = self.object
101
102
103 class DomainFormBaseView(DomainBaseView, FormMixin):
104 """
105 Form Base View for the Domain. Handles getting and setting
106 domain in cache when dealing with domain forms. Provides
107 implementations of post, form_valid and form_invalid.
108 """
109
110 def post(self, request, *args, **kwargs):
111 """Form submission posts to this view.
112
113 This post method harmonizes using DomainBaseView and FormMixin
114 """
115 self._get_domain(request)
116 form = self.get_form()
117 if form.is_valid():
118 return self.form_valid(form)
119 else:
120 return self.form_invalid(form)
121
122 def form_valid(self, form):
123 # updates session cache with domain
124 self._update_session_with_domain()
125
126 # superclass has the redirect
127 return super().form_valid(form)
128
129 def form_invalid(self, form):
130 # updates session cache with domain
131 self._update_session_with_domain()
132
133 # superclass has the redirect
134 return super().form_invalid(form)
135
136
137 class DomainView(DomainBaseView):
138
139 """Domain detail overview page."""
140
141 template_name = "domain_detail.html"
142
143 def get_context_data(self, **kwargs):
144 context = super().get_context_data(**kwargs)
145
146 default_emails = [DefaultEmail.PUBLIC_CONTACT_DEFAULT.value, DefaultEmail.LEGACY_DEFAULT.value]
147
148 context["hidden_security_emails"] = default_emails
149
150 security_email = self.object.get_security_email()
151 if security_email is None or security_email in default_emails:
152 context["security_email"] = None
153 return context
154 context["security_email"] = security_email
155 return context
156
157 def in_editable_state(self, pk):
158 """Override in_editable_state from DomainPermission
159 Allow detail page to be viewable"""
160
161 requested_domain = None
162 if Domain.objects.filter(id=pk).exists():
163 requested_domain = Domain.objects.get(id=pk)
164
165 # return true if the domain exists, this will allow the detail page to load
166 if requested_domain:
167 return True
168 return False
169
170 def _get_domain(self, request):
171 """
172 override get_domain for this view so that domain overview
173 always resets the cache for the domain object
174 """
175 self.session = request.session
176 self.object = self.get_object()
177 self._update_session_with_domain()
178
179
180 class DomainOrgNameAddressView(DomainFormBaseView):
181 """Organization name and mailing address view"""
182
183 model = Domain
184 template_name = "domain_org_name_address.html"
185 context_object_name = "domain"
186 form_class = DomainOrgNameAddressForm
187
188 def get_form_kwargs(self, *args, **kwargs):
189 """Add domain_info.organization_name instance to make a bound form."""
190 form_kwargs = super().get_form_kwargs(*args, **kwargs)
191 form_kwargs["instance"] = self.object.domain_info
192 return form_kwargs
193
194 def get_success_url(self):
195 """Redirect to the overview page for the domain."""
196 return reverse("domain-org-name-address", kwargs={"pk": self.object.pk})
197
198 def form_valid(self, form):
199 """The form is valid, save the organization name and mailing address."""
200 form.save()
201
202 messages.success(self.request, "The organization information for this domain has been updated.")
203
204 # superclass has the redirect
205 return super().form_valid(form)
206
207
208 class DomainAuthorizingOfficialView(DomainFormBaseView):
209 """Domain authorizing official editing view."""
210
211 model = Domain
212 template_name = "domain_authorizing_official.html"
213 context_object_name = "domain"
214 form_class = AuthorizingOfficialContactForm
215
216 def get_form_kwargs(self, *args, **kwargs):
217 """Add domain_info.authorizing_official instance to make a bound form."""
218 form_kwargs = super().get_form_kwargs(*args, **kwargs)
219 form_kwargs["instance"] = self.object.domain_info.authorizing_official
220 return form_kwargs
221
222 def get_success_url(self):
223 """Redirect to the overview page for the domain."""
224 return reverse("domain-authorizing-official", kwargs={"pk": self.object.pk})
225
226 def form_valid(self, form):
227 """The form is valid, save the authorizing official."""
228 # Set the domain information in the form so that it can be accessible
229 # to associate a new Contact as authorizing official, if new Contact is needed
230 # in the save() method
231 form.set_domain_info(self.object.domain_info)
232 form.save()
233
234 messages.success(self.request, "The authorizing official for this domain has been updated.")
235
236 # superclass has the redirect
237 return super().form_valid(form)
238
239
240 class DomainDNSView(DomainBaseView):
241 """DNS Information View."""
242
243 template_name = "domain_dns.html"
244
245
246 class DomainNameserversView(DomainFormBaseView):
247 """Domain nameserver editing view."""
248
249 template_name = "domain_nameservers.html"
250 form_class = NameserverFormset
251 model = Domain
252
253 def get_initial(self):
254 """The initial value for the form (which is a formset here)."""
255 nameservers = self.object.nameservers
256 initial_data = []
257
258 if nameservers is not None:
259 # Add existing nameservers as initial data
260 initial_data.extend({"server": name, "ip": ",".join(ip)} for name, ip in nameservers)
261
262         # Ensure at least 2 fields, filled or empty
263 while len(initial_data) < 2:
264 initial_data.append({})
265
266 return initial_data
267
268 def get_success_url(self):
269 """Redirect to the nameservers page for the domain."""
270 return reverse("domain-dns-nameservers", kwargs={"pk": self.object.pk})
271
272 def get_context_data(self, **kwargs):
273 """Adjust context from FormMixin for formsets."""
274 context = super().get_context_data(**kwargs)
275 # use "formset" instead of "form" for the key
276 context["formset"] = context.pop("form")
277 return context
278
279 def get_form(self, **kwargs):
280 """Override the labels and required fields every time we get a formset."""
281 formset = super().get_form(**kwargs)
282
283 for i, form in enumerate(formset):
284 form.fields["server"].label += f" {i+1}"
285 if i < 2:
286 form.fields["server"].required = True
287 else:
288 form.fields["server"].required = False
289 form.fields["server"].label += " (optional)"
290 form.fields["domain"].initial = self.object.name
291 return formset
292
293 def post(self, request, *args, **kwargs):
294 """Form submission posts to this view.
295
296 This post method harmonizes using DomainBaseView and FormMixin
297 """
298 self._get_domain(request)
299 formset = self.get_form()
300
301 if "btn-cancel-click" in request.POST:
302 url = self.get_success_url()
303 return HttpResponseRedirect(url)
304
305 if formset.is_valid():
306 return self.form_valid(formset)
307 else:
308 return self.form_invalid(formset)
309
310 def form_valid(self, formset):
311 """The formset is valid, perform something with it."""
312
313 self.request.session["nameservers_form_domain"] = self.object
314
315 # Set the nameservers from the formset
316 nameservers = []
317 for form in formset:
318 try:
319 ip_string = form.cleaned_data["ip"]
320 # ip_string will be None or a string of IP addresses
321 # comma-separated
322 ip_list = []
323 if ip_string:
324 # Split the string into a list using a comma as the delimiter
325 ip_list = ip_string.split(",")
326
327 as_tuple = (
328 form.cleaned_data["server"],
329 ip_list,
330 )
331 nameservers.append(as_tuple)
332 except KeyError:
333 # no server information in this field, skip it
334 pass
335
336 try:
337 self.object.nameservers = nameservers
338 except NameserverError as Err:
339             # NameserverErrors *should* be caught in form; if reached here,
340 # there was an uncaught error in submission (through EPP)
341 messages.error(self.request, NameserverError(code=nsErrorCodes.BAD_DATA))
342 logger.error(f"Nameservers error: {Err}")
343 # TODO: registry is not throwing an error when no connection
344 except RegistryError as Err:
345 if Err.is_connection_error():
346 messages.error(
347 self.request,
348 GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),
349 )
350 logger.error(f"Registry connection error: {Err}")
351 else:
352 messages.error(self.request, NameserverError(code=nsErrorCodes.BAD_DATA))
353 logger.error(f"Registry error: {Err}")
354 else:
355 messages.success(
356 self.request,
357 "The name servers for this domain have been updated. "
358 "Note that DNS changes could take anywhere from a few minutes to "
359 "48 hours to propagate across the internet.",
360 )
361
362 # superclass has the redirect
363 return super().form_valid(formset)
364
365
366 class DomainDNSSECView(DomainFormBaseView):
367 """Domain DNSSEC editing view."""
368
369 template_name = "domain_dnssec.html"
370 form_class = DomainDnssecForm
371
372 def get_context_data(self, **kwargs):
373 """The initial value for the form (which is a formset here)."""
374 context = super().get_context_data(**kwargs)
375
376 has_dnssec_records = self.object.dnssecdata is not None
377
378 # Create HTML for the modal button
379 modal_button = (
380 '<button type="submit" '
381 'class="usa-button usa-button--secondary" '
382 'name="disable_dnssec">Confirm</button>'
383 )
384
385 context["modal_button"] = modal_button
386 context["has_dnssec_records"] = has_dnssec_records
387 context["dnssec_enabled"] = self.request.session.pop("dnssec_enabled", False)
388
389 return context
390
391 def get_success_url(self):
392 """Redirect to the DNSSEC page for the domain."""
393 return reverse("domain-dns-dnssec", kwargs={"pk": self.object.pk})
394
395 def post(self, request, *args, **kwargs):
396 """Form submission posts to this view."""
397 self._get_domain(request)
398 form = self.get_form()
399 if form.is_valid():
400 if "disable_dnssec" in request.POST:
401 try:
402 self.object.dnssecdata = {}
403 except RegistryError as err:
404 errmsg = "Error removing existing DNSSEC record(s)."
405 logger.error(errmsg + ": " + err)
406 messages.error(self.request, errmsg)
407
408 return self.form_valid(form)
409
410
411 class DomainDsDataView(DomainFormBaseView):
412 """Domain DNSSEC ds data editing view."""
413
414 template_name = "domain_dsdata.html"
415 form_class = DomainDsdataFormset
416 form = DomainDsdataForm
417
418 def get_initial(self):
419 """The initial value for the form (which is a formset here)."""
420 dnssecdata: extensions.DNSSECExtension = self.object.dnssecdata
421 initial_data = []
422
423 if dnssecdata is not None and dnssecdata.dsData is not None:
424 # Add existing nameservers as initial data
425 initial_data.extend(
426 {
427 "key_tag": record.keyTag,
428 "algorithm": record.alg,
429 "digest_type": record.digestType,
430 "digest": record.digest,
431 }
432 for record in dnssecdata.dsData
433 )
434
435 # Ensure at least 1 record, filled or empty
436 while len(initial_data) == 0:
437 initial_data.append({})
438
439 return initial_data
440
441 def get_success_url(self):
442 """Redirect to the DS data page for the domain."""
443 return reverse("domain-dns-dnssec-dsdata", kwargs={"pk": self.object.pk})
444
445 def get_context_data(self, **kwargs):
446 """Adjust context from FormMixin for formsets."""
447 context = super().get_context_data(**kwargs)
448 # use "formset" instead of "form" for the key
449 context["formset"] = context.pop("form")
450
451 return context
452
453 def post(self, request, *args, **kwargs):
454 """Formset submission posts to this view."""
455 self._get_domain(request)
456 formset = self.get_form()
457 override = False
458
459 # This is called by the form cancel button,
460 # and also by the modal's X and cancel buttons
461 if "btn-cancel-click" in request.POST:
462 url = self.get_success_url()
463 return HttpResponseRedirect(url)
464
465 # This is called by the Disable DNSSEC modal to override
466 if "disable-override-click" in request.POST:
467 override = True
468
469 # This is called when all DNSSEC data has been deleted and the
470 # Save button is pressed
471 if len(formset) == 0 and formset.initial != [{}] and override is False:
472 # trigger the modal
473 # get context data from super() rather than self
474 # to preserve the context["form"]
475 context = super().get_context_data(form=formset)
476 context["trigger_modal"] = True
477 # Create HTML for the modal button
478 modal_button = (
479 '<button type="submit" '
480 'class="usa-button usa-button--secondary" '
481 'name="disable-override-click">Remove all DS data</button>'
482 )
483
484 # context to back out of a broken form on all fields delete
485 context["modal_button"] = modal_button
486 return self.render_to_response(context)
487
488 if formset.is_valid() or override:
489 return self.form_valid(formset)
490 else:
491 return self.form_invalid(formset)
492
493 def form_valid(self, formset, **kwargs):
494 """The formset is valid, perform something with it."""
495
496 # Set the dnssecdata from the formset
497 dnssecdata = extensions.DNSSECExtension()
498
499 for form in formset:
500 try:
501 # if 'delete' not in form.cleaned_data
502 # or form.cleaned_data['delete'] == False:
503 dsrecord = {
504 "keyTag": form.cleaned_data["key_tag"],
505 "alg": int(form.cleaned_data["algorithm"]),
506 "digestType": int(form.cleaned_data["digest_type"]),
507 "digest": form.cleaned_data["digest"],
508 }
509 if dnssecdata.dsData is None:
510 dnssecdata.dsData = []
511 dnssecdata.dsData.append(common.DSData(**dsrecord))
512 except KeyError:
513 # no cleaned_data provided for this form, but passed
514 # as valid; this can happen if form has been added but
515 # not been interacted with; in that case, want to ignore
516 pass
517 try:
518 self.object.dnssecdata = dnssecdata
519 except RegistryError as err:
520 if err.is_connection_error():
521 messages.error(
522 self.request,
523 GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),
524 )
525 logger.error(f"Registry connection error: {err}")
526 else:
527 messages.error(self.request, DsDataError(code=DsDataErrorCodes.BAD_DATA))
528 logger.error(f"Registry error: {err}")
529 return self.form_invalid(formset)
530 else:
531 messages.success(self.request, "The DS data records for this domain have been updated.")
532 # superclass has the redirect
533 return super().form_valid(formset)
534
535
536 class DomainYourContactInformationView(DomainFormBaseView):
537 """Domain your contact information editing view."""
538
539 template_name = "domain_your_contact_information.html"
540 form_class = ContactForm
541
542 def get_form_kwargs(self, *args, **kwargs):
543         """Add the logged-in user's contact instance to make a bound form."""
544 form_kwargs = super().get_form_kwargs(*args, **kwargs)
545 form_kwargs["instance"] = self.request.user.contact
546 return form_kwargs
547
548 def get_success_url(self):
549 """Redirect to the your contact information for the domain."""
550 return reverse("domain-your-contact-information", kwargs={"pk": self.object.pk})
551
552 def form_valid(self, form):
553 """The form is valid, call setter in model."""
554
555 # Post to DB using values from the form
556 form.save()
557
558 messages.success(self.request, "Your contact information has been updated.")
559
560 # superclass has the redirect
561 return super().form_valid(form)
562
563
564 class DomainSecurityEmailView(DomainFormBaseView):
565 """Domain security email editing view."""
566
567 template_name = "domain_security_email.html"
568 form_class = DomainSecurityEmailForm
569
570 def get_initial(self):
571 """The initial value for the form."""
572 initial = super().get_initial()
573 security_contact = self.object.security_contact
574
575 invalid_emails = [DefaultEmail.PUBLIC_CONTACT_DEFAULT.value, DefaultEmail.LEGACY_DEFAULT.value]
576 if security_contact is None or security_contact.email in invalid_emails:
577 initial["security_email"] = None
578 return initial
579 initial["security_email"] = security_contact.email
580 return initial
581
582 def get_success_url(self):
583 """Redirect to the security email page for the domain."""
584 return reverse("domain-security-email", kwargs={"pk": self.object.pk})
585
586 def form_valid(self, form):
587 """The form is valid, call setter in model."""
588
589 # Set the security email from the form
590 new_email: str = form.cleaned_data.get("security_email", "")
591
592 # If we pass nothing for the sec email, set to the default
593 if new_email is None or new_email.strip() == "":
594 new_email = PublicContact.get_default_security().email
595
596 contact = self.object.security_contact
597
598 # If no default is created for security_contact,
599 # then we cannot connect to the registry.
600 if contact is None:
601 messages.error(
602 self.request,
603 GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),
604 )
605 return redirect(self.get_success_url())
606
607 contact.email = new_email
608
609 try:
610 contact.save()
611 except RegistryError as Err:
612 if Err.is_connection_error():
613 messages.error(
614 self.request,
615 GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),
616 )
617 logger.error(f"Registry connection error: {Err}")
618 else:
619 messages.error(self.request, SecurityEmailError(code=SecurityEmailErrorCodes.BAD_DATA))
620 logger.error(f"Registry error: {Err}")
621 except ContactError as Err:
622 messages.error(self.request, SecurityEmailError(code=SecurityEmailErrorCodes.BAD_DATA))
623 logger.error(f"Generic registry error: {Err}")
624 else:
625 messages.success(self.request, "The security email for this domain has been updated.")
626
627 # superclass has the redirect
628 return redirect(self.get_success_url())
629
630
631 class DomainUsersView(DomainBaseView):
632 """Domain managers page in the domain details."""
633
634 template_name = "domain_users.html"
635
636 def get_context_data(self, **kwargs):
637 """The initial value for the form (which is a formset here)."""
638 context = super().get_context_data(**kwargs)
639
640 # Add conditionals to the context (such as "can_delete_users")
641 context = self._add_booleans_to_context(context)
642
643 # Add modal buttons to the context (such as for delete)
644 context = self._add_modal_buttons_to_context(context)
645
646 # Get the email of the current user
647 context["current_user_email"] = self.request.user.email
648
649 return context
650
651 def _add_booleans_to_context(self, context):
652 # Determine if the current user can delete managers
653 domain_pk = None
654 can_delete_users = False
655
656 if self.kwargs is not None and "pk" in self.kwargs:
657 domain_pk = self.kwargs["pk"]
658 # Prevent the end user from deleting themselves as a manager if they are the
659 # only manager that exists on a domain.
660 can_delete_users = UserDomainRole.objects.filter(domain__id=domain_pk).count() > 1
661
662 context["can_delete_users"] = can_delete_users
663 return context
664
665 def _add_modal_buttons_to_context(self, context):
666 """Adds modal buttons (and their HTML) to the context"""
667 # Create HTML for the modal button
668 modal_button = (
669 '<button type="submit" '
670 'class="usa-button usa-button--secondary" '
671 'name="delete_domain_manager">Yes, remove domain manager</button>'
672 )
673 context["modal_button"] = modal_button
674
675 # Create HTML for the modal button when deleting yourself
676 modal_button_self = (
677 '<button type="submit" '
678 'class="usa-button usa-button--secondary" '
679 'name="delete_domain_manager_self">Yes, remove myself</button>'
680 )
681 context["modal_button_self"] = modal_button_self
682
683 return context
684
685
686 class DomainAddUserView(DomainFormBaseView):
687 """Inside of a domain's user management, a form for adding users.
688
689 Multiple inheritance is used here for permissions, form handling, and
690 details of the individual domain.
691 """
692
693 template_name = "domain_add_user.html"
694 form_class = DomainAddUserForm
695
696 def get_success_url(self):
697 return reverse("domain-users", kwargs={"pk": self.object.pk})
698
699 def _domain_abs_url(self):
700 """Get an absolute URL for this domain."""
701 return self.request.build_absolute_uri(reverse("domain", kwargs={"pk": self.object.id}))
702
703 def _send_domain_invitation_email(self, email: str, requestor: User, add_success=True):
704 """Performs the sending of the domain invitation email,
705 does not make a domain information object
706 email: string- email to send to
707 add_success: bool- default True indicates:
708 adding a success message to the view if the email sending succeeds"""
709
710 # Set a default email address to send to for staff
711 requestor_email = "[email protected]"
712
713 # Check if the email requestor has a valid email address
714 if not requestor.is_staff and requestor.email is not None and requestor.email.strip() != "":
715 requestor_email = requestor.email
716 elif not requestor.is_staff:
717 messages.error(self.request, "Can't send invitation email. No email is associated with your account.")
718 logger.error(
719 f"Can't send email to '{email}' on domain '{self.object}'."
720 f"No email exists for the requestor '{requestor.username}'.",
721 exc_info=True,
722 )
723 return None
724
725 try:
726 send_templated_email(
727 "emails/domain_invitation.txt",
728 "emails/domain_invitation_subject.txt",
729 to_address=email,
730 context={
731 "domain_url": self._domain_abs_url(),
732 "domain": self.object,
733 "requestor_email": requestor_email,
734 },
735 )
736 except EmailSendingError:
737 messages.warning(self.request, "Could not send email invitation.")
738 logger.warn(
739                 "Could not send email invitation to %s for domain %s",
740 email,
741 self.object,
742 exc_info=True,
743 )
744 else:
745 if add_success:
746 messages.success(self.request, f"{email} has been invited to this domain.")
747
748 def _make_invitation(self, email_address: str, requestor: User):
749 """Make a Domain invitation for this email and redirect with a message."""
750 invitation, created = DomainInvitation.objects.get_or_create(email=email_address, domain=self.object)
751 if not created:
752 # that invitation already existed
753 messages.warning(
754 self.request,
755 f"{email_address} has already been invited to this domain.",
756 )
757 else:
758 self._send_domain_invitation_email(email=email_address, requestor=requestor)
759 return redirect(self.get_success_url())
760
761 def form_valid(self, form):
762 """Add the specified user on this domain."""
763 requested_email = form.cleaned_data["email"]
764 requestor = self.request.user
765 # look up a user with that email
766 try:
767 requested_user = User.objects.get(email=requested_email)
768 except User.DoesNotExist:
769 # no matching user, go make an invitation
770 return self._make_invitation(requested_email, requestor)
771 else:
772 # if user already exists then just send an email
773 self._send_domain_invitation_email(requested_email, requestor, add_success=False)
774
775 try:
776 UserDomainRole.objects.create(
777 user=requested_user,
778 domain=self.object,
779 role=UserDomainRole.Roles.MANAGER,
780 )
781 except IntegrityError:
782 # User already has the desired role! Do nothing??
783 pass
784
785 messages.success(self.request, f"Added user {requested_email}.")
786
787 return redirect(self.get_success_url())
788
789
790 class DomainInvitationDeleteView(DomainInvitationPermissionDeleteView, SuccessMessageMixin):
791 object: DomainInvitation # workaround for type mismatch in DeleteView
792
793 def get_success_url(self):
794 return reverse("domain-users", kwargs={"pk": self.object.domain.id})
795
796 def get_success_message(self, cleaned_data):
797 return f"Successfully canceled invitation for {self.object.email}."
798
799
800 class DomainDeleteUserView(UserDomainRolePermissionDeleteView):
801 """Inside of a domain's user management, a form for deleting users."""
802
803 object: UserDomainRole # workaround for type mismatch in DeleteView
804
805 def get_object(self, queryset=None):
806 """Custom get_object definition to grab a UserDomainRole object from a domain_id and user_id"""
807 domain_id = self.kwargs.get("pk")
808 user_id = self.kwargs.get("user_pk")
809 return UserDomainRole.objects.get(domain=domain_id, user=user_id)
810
811 def get_success_url(self):
812 """Refreshes the page after a delete is successful"""
813 return reverse("domain-users", kwargs={"pk": self.object.domain.id})
814
815 def get_success_message(self, delete_self=False):
816 """Returns confirmation content for the deletion event"""
817
818 # Grab the text representation of the user we want to delete
819 email_or_name = self.object.user.email
820 if email_or_name is None or email_or_name.strip() == "":
821 email_or_name = self.object.user
822
823 # If the user is deleting themselves, return a specific message.
824 # If not, return something more generic.
825 if delete_self:
826 message = f"You are no longer managing the domain {self.object.domain}."
827 else:
828 message = f"Removed {email_or_name} as a manager for this domain."
829
830 return message
831
832 def form_valid(self, form):
833 """Delete the specified user on this domain."""
834
835 # Delete the object
836 super().form_valid(form)
837
838 # Is the user deleting themselves? If so, display a different message
839 delete_self = self.request.user == self.object.user
840
841 # Add a success message
842 messages.success(self.request, self.get_success_message(delete_self))
843 return redirect(self.get_success_url())
844
845 def post(self, request, *args, **kwargs):
846 """Custom post implementation to redirect to home in the event that the user deletes themselves"""
847 response = super().post(request, *args, **kwargs)
848
849 # If the user is deleting themselves, redirect to home
850 delete_self = self.request.user == self.object.user
851 if delete_self:
852 return redirect(reverse("home"))
853
854 return response
855
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/registrar/views/domain.py b/src/registrar/views/domain.py
--- a/src/registrar/views/domain.py
+++ b/src/registrar/views/domain.py
@@ -555,7 +555,7 @@
# Post to DB using values from the form
form.save()
- messages.success(self.request, "Your contact information has been updated.")
+ messages.success(self.request, "Your contact information for all your domains has been updated.")
# superclass has the redirect
return super().form_valid(form)
| {"golden_diff": "diff --git a/src/registrar/views/domain.py b/src/registrar/views/domain.py\n--- a/src/registrar/views/domain.py\n+++ b/src/registrar/views/domain.py\n@@ -555,7 +555,7 @@\n # Post to DB using values from the form\n form.save()\n \n- messages.success(self.request, \"Your contact information has been updated.\")\n+ messages.success(self.request, \"Your contact information for all your domains has been updated.\")\n \n # superclass has the redirect\n return super().form_valid(form)\n", "issue": "Update Success Message on \"Your contact information\" page\n### Issue description\n\nOn \"Your contact information page\" in the Domain Management pages we have explainer text that says: Updating your contact information here will update the contact information for all domains in your account.\r\n\r\n However, in the success message we say: \"Your contact information for this domain has been updated.\" This is misleading.\r\n\r\n\n\n### Acceptance criteria\n\n- [x] Change the success text to say \"Your contact information for all your domains has been updated.\"\n\n### Additional context\n\n_No response_\n\n### Links to other issues\n\n_No response_\n", "before_files": [{"content": "\"\"\"Views for a single Domain.\n\nAuthorization is handled by the `DomainPermissionView`. To ensure that only\nauthorized users can see information on a domain, every view here should\ninherit from `DomainPermissionView` (or DomainInvitationPermissionDeleteView).\n\"\"\"\n\nimport logging\n\nfrom django.contrib import messages\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.db import IntegrityError\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.views.generic.edit import FormMixin\n\nfrom registrar.models import (\n Domain,\n DomainInvitation,\n User,\n UserDomainRole,\n)\nfrom registrar.models.public_contact import PublicContact\nfrom registrar.utility.enums import DefaultEmail\nfrom registrar.utility.errors import (\n GenericError,\n GenericErrorCodes,\n NameserverError,\n NameserverErrorCodes as nsErrorCodes,\n DsDataError,\n DsDataErrorCodes,\n SecurityEmailError,\n SecurityEmailErrorCodes,\n)\nfrom registrar.models.utility.contact_error import ContactError\nfrom registrar.views.utility.permission_views import UserDomainRolePermissionDeleteView\n\nfrom ..forms import (\n ContactForm,\n AuthorizingOfficialContactForm,\n DomainOrgNameAddressForm,\n DomainAddUserForm,\n DomainSecurityEmailForm,\n NameserverFormset,\n DomainDnssecForm,\n DomainDsdataFormset,\n DomainDsdataForm,\n)\n\nfrom epplibwrapper import (\n common,\n extensions,\n RegistryError,\n)\n\nfrom ..utility.email import send_templated_email, EmailSendingError\nfrom .utility import DomainPermissionView, DomainInvitationPermissionDeleteView\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass DomainBaseView(DomainPermissionView):\n \"\"\"\n Base View for the Domain. Handles getting and setting the domain\n in session cache on GETs. 
Also provides methods for getting\n and setting the domain in cache\n \"\"\"\n\n def get(self, request, *args, **kwargs):\n self._get_domain(request)\n context = self.get_context_data(object=self.object)\n return self.render_to_response(context)\n\n def _get_domain(self, request):\n \"\"\"\n get domain from session cache or from db and set\n to self.object\n set session to self for downstream functions to\n update session cache\n \"\"\"\n self.session = request.session\n # domain:private_key is the session key to use for\n # caching the domain in the session\n domain_pk = \"domain:\" + str(self.kwargs.get(\"pk\"))\n cached_domain = self.session.get(domain_pk)\n\n if cached_domain:\n self.object = cached_domain\n else:\n self.object = self.get_object()\n self._update_session_with_domain()\n\n def _update_session_with_domain(self):\n \"\"\"\n update domain in the session cache\n \"\"\"\n domain_pk = \"domain:\" + str(self.kwargs.get(\"pk\"))\n self.session[domain_pk] = self.object\n\n\nclass DomainFormBaseView(DomainBaseView, FormMixin):\n \"\"\"\n Form Base View for the Domain. Handles getting and setting\n domain in cache when dealing with domain forms. Provides\n implementations of post, form_valid and form_invalid.\n \"\"\"\n\n def post(self, request, *args, **kwargs):\n \"\"\"Form submission posts to this view.\n\n This post method harmonizes using DomainBaseView and FormMixin\n \"\"\"\n self._get_domain(request)\n form = self.get_form()\n if form.is_valid():\n return self.form_valid(form)\n else:\n return self.form_invalid(form)\n\n def form_valid(self, form):\n # updates session cache with domain\n self._update_session_with_domain()\n\n # superclass has the redirect\n return super().form_valid(form)\n\n def form_invalid(self, form):\n # updates session cache with domain\n self._update_session_with_domain()\n\n # superclass has the redirect\n return super().form_invalid(form)\n\n\nclass DomainView(DomainBaseView):\n\n \"\"\"Domain detail overview page.\"\"\"\n\n template_name = \"domain_detail.html\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n default_emails = [DefaultEmail.PUBLIC_CONTACT_DEFAULT.value, DefaultEmail.LEGACY_DEFAULT.value]\n\n context[\"hidden_security_emails\"] = default_emails\n\n security_email = self.object.get_security_email()\n if security_email is None or security_email in default_emails:\n context[\"security_email\"] = None\n return context\n context[\"security_email\"] = security_email\n return context\n\n def in_editable_state(self, pk):\n \"\"\"Override in_editable_state from DomainPermission\n Allow detail page to be viewable\"\"\"\n\n requested_domain = None\n if Domain.objects.filter(id=pk).exists():\n requested_domain = Domain.objects.get(id=pk)\n\n # return true if the domain exists, this will allow the detail page to load\n if requested_domain:\n return True\n return False\n\n def _get_domain(self, request):\n \"\"\"\n override get_domain for this view so that domain overview\n always resets the cache for the domain object\n \"\"\"\n self.session = request.session\n self.object = self.get_object()\n self._update_session_with_domain()\n\n\nclass DomainOrgNameAddressView(DomainFormBaseView):\n \"\"\"Organization name and mailing address view\"\"\"\n\n model = Domain\n template_name = \"domain_org_name_address.html\"\n context_object_name = \"domain\"\n form_class = DomainOrgNameAddressForm\n\n def get_form_kwargs(self, *args, **kwargs):\n \"\"\"Add domain_info.organization_name instance to make a bound 
form.\"\"\"\n form_kwargs = super().get_form_kwargs(*args, **kwargs)\n form_kwargs[\"instance\"] = self.object.domain_info\n return form_kwargs\n\n def get_success_url(self):\n \"\"\"Redirect to the overview page for the domain.\"\"\"\n return reverse(\"domain-org-name-address\", kwargs={\"pk\": self.object.pk})\n\n def form_valid(self, form):\n \"\"\"The form is valid, save the organization name and mailing address.\"\"\"\n form.save()\n\n messages.success(self.request, \"The organization information for this domain has been updated.\")\n\n # superclass has the redirect\n return super().form_valid(form)\n\n\nclass DomainAuthorizingOfficialView(DomainFormBaseView):\n \"\"\"Domain authorizing official editing view.\"\"\"\n\n model = Domain\n template_name = \"domain_authorizing_official.html\"\n context_object_name = \"domain\"\n form_class = AuthorizingOfficialContactForm\n\n def get_form_kwargs(self, *args, **kwargs):\n \"\"\"Add domain_info.authorizing_official instance to make a bound form.\"\"\"\n form_kwargs = super().get_form_kwargs(*args, **kwargs)\n form_kwargs[\"instance\"] = self.object.domain_info.authorizing_official\n return form_kwargs\n\n def get_success_url(self):\n \"\"\"Redirect to the overview page for the domain.\"\"\"\n return reverse(\"domain-authorizing-official\", kwargs={\"pk\": self.object.pk})\n\n def form_valid(self, form):\n \"\"\"The form is valid, save the authorizing official.\"\"\"\n # Set the domain information in the form so that it can be accessible\n # to associate a new Contact as authorizing official, if new Contact is needed\n # in the save() method\n form.set_domain_info(self.object.domain_info)\n form.save()\n\n messages.success(self.request, \"The authorizing official for this domain has been updated.\")\n\n # superclass has the redirect\n return super().form_valid(form)\n\n\nclass DomainDNSView(DomainBaseView):\n \"\"\"DNS Information View.\"\"\"\n\n template_name = \"domain_dns.html\"\n\n\nclass DomainNameserversView(DomainFormBaseView):\n \"\"\"Domain nameserver editing view.\"\"\"\n\n template_name = \"domain_nameservers.html\"\n form_class = NameserverFormset\n model = Domain\n\n def get_initial(self):\n \"\"\"The initial value for the form (which is a formset here).\"\"\"\n nameservers = self.object.nameservers\n initial_data = []\n\n if nameservers is not None:\n # Add existing nameservers as initial data\n initial_data.extend({\"server\": name, \"ip\": \",\".join(ip)} for name, ip in nameservers)\n\n # Ensure at least 3 fields, filled or empty\n while len(initial_data) < 2:\n initial_data.append({})\n\n return initial_data\n\n def get_success_url(self):\n \"\"\"Redirect to the nameservers page for the domain.\"\"\"\n return reverse(\"domain-dns-nameservers\", kwargs={\"pk\": self.object.pk})\n\n def get_context_data(self, **kwargs):\n \"\"\"Adjust context from FormMixin for formsets.\"\"\"\n context = super().get_context_data(**kwargs)\n # use \"formset\" instead of \"form\" for the key\n context[\"formset\"] = context.pop(\"form\")\n return context\n\n def get_form(self, **kwargs):\n \"\"\"Override the labels and required fields every time we get a formset.\"\"\"\n formset = super().get_form(**kwargs)\n\n for i, form in enumerate(formset):\n form.fields[\"server\"].label += f\" {i+1}\"\n if i < 2:\n form.fields[\"server\"].required = True\n else:\n form.fields[\"server\"].required = False\n form.fields[\"server\"].label += \" (optional)\"\n form.fields[\"domain\"].initial = self.object.name\n return formset\n\n def post(self, request, 
*args, **kwargs):\n \"\"\"Form submission posts to this view.\n\n This post method harmonizes using DomainBaseView and FormMixin\n \"\"\"\n self._get_domain(request)\n formset = self.get_form()\n\n if \"btn-cancel-click\" in request.POST:\n url = self.get_success_url()\n return HttpResponseRedirect(url)\n\n if formset.is_valid():\n return self.form_valid(formset)\n else:\n return self.form_invalid(formset)\n\n def form_valid(self, formset):\n \"\"\"The formset is valid, perform something with it.\"\"\"\n\n self.request.session[\"nameservers_form_domain\"] = self.object\n\n # Set the nameservers from the formset\n nameservers = []\n for form in formset:\n try:\n ip_string = form.cleaned_data[\"ip\"]\n # ip_string will be None or a string of IP addresses\n # comma-separated\n ip_list = []\n if ip_string:\n # Split the string into a list using a comma as the delimiter\n ip_list = ip_string.split(\",\")\n\n as_tuple = (\n form.cleaned_data[\"server\"],\n ip_list,\n )\n nameservers.append(as_tuple)\n except KeyError:\n # no server information in this field, skip it\n pass\n\n try:\n self.object.nameservers = nameservers\n except NameserverError as Err:\n # NamserverErrors *should* be caught in form; if reached here,\n # there was an uncaught error in submission (through EPP)\n messages.error(self.request, NameserverError(code=nsErrorCodes.BAD_DATA))\n logger.error(f\"Nameservers error: {Err}\")\n # TODO: registry is not throwing an error when no connection\n except RegistryError as Err:\n if Err.is_connection_error():\n messages.error(\n self.request,\n GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),\n )\n logger.error(f\"Registry connection error: {Err}\")\n else:\n messages.error(self.request, NameserverError(code=nsErrorCodes.BAD_DATA))\n logger.error(f\"Registry error: {Err}\")\n else:\n messages.success(\n self.request,\n \"The name servers for this domain have been updated. 
\"\n \"Note that DNS changes could take anywhere from a few minutes to \"\n \"48 hours to propagate across the internet.\",\n )\n\n # superclass has the redirect\n return super().form_valid(formset)\n\n\nclass DomainDNSSECView(DomainFormBaseView):\n \"\"\"Domain DNSSEC editing view.\"\"\"\n\n template_name = \"domain_dnssec.html\"\n form_class = DomainDnssecForm\n\n def get_context_data(self, **kwargs):\n \"\"\"The initial value for the form (which is a formset here).\"\"\"\n context = super().get_context_data(**kwargs)\n\n has_dnssec_records = self.object.dnssecdata is not None\n\n # Create HTML for the modal button\n modal_button = (\n '<button type=\"submit\" '\n 'class=\"usa-button usa-button--secondary\" '\n 'name=\"disable_dnssec\">Confirm</button>'\n )\n\n context[\"modal_button\"] = modal_button\n context[\"has_dnssec_records\"] = has_dnssec_records\n context[\"dnssec_enabled\"] = self.request.session.pop(\"dnssec_enabled\", False)\n\n return context\n\n def get_success_url(self):\n \"\"\"Redirect to the DNSSEC page for the domain.\"\"\"\n return reverse(\"domain-dns-dnssec\", kwargs={\"pk\": self.object.pk})\n\n def post(self, request, *args, **kwargs):\n \"\"\"Form submission posts to this view.\"\"\"\n self._get_domain(request)\n form = self.get_form()\n if form.is_valid():\n if \"disable_dnssec\" in request.POST:\n try:\n self.object.dnssecdata = {}\n except RegistryError as err:\n errmsg = \"Error removing existing DNSSEC record(s).\"\n logger.error(errmsg + \": \" + err)\n messages.error(self.request, errmsg)\n\n return self.form_valid(form)\n\n\nclass DomainDsDataView(DomainFormBaseView):\n \"\"\"Domain DNSSEC ds data editing view.\"\"\"\n\n template_name = \"domain_dsdata.html\"\n form_class = DomainDsdataFormset\n form = DomainDsdataForm\n\n def get_initial(self):\n \"\"\"The initial value for the form (which is a formset here).\"\"\"\n dnssecdata: extensions.DNSSECExtension = self.object.dnssecdata\n initial_data = []\n\n if dnssecdata is not None and dnssecdata.dsData is not None:\n # Add existing nameservers as initial data\n initial_data.extend(\n {\n \"key_tag\": record.keyTag,\n \"algorithm\": record.alg,\n \"digest_type\": record.digestType,\n \"digest\": record.digest,\n }\n for record in dnssecdata.dsData\n )\n\n # Ensure at least 1 record, filled or empty\n while len(initial_data) == 0:\n initial_data.append({})\n\n return initial_data\n\n def get_success_url(self):\n \"\"\"Redirect to the DS data page for the domain.\"\"\"\n return reverse(\"domain-dns-dnssec-dsdata\", kwargs={\"pk\": self.object.pk})\n\n def get_context_data(self, **kwargs):\n \"\"\"Adjust context from FormMixin for formsets.\"\"\"\n context = super().get_context_data(**kwargs)\n # use \"formset\" instead of \"form\" for the key\n context[\"formset\"] = context.pop(\"form\")\n\n return context\n\n def post(self, request, *args, **kwargs):\n \"\"\"Formset submission posts to this view.\"\"\"\n self._get_domain(request)\n formset = self.get_form()\n override = False\n\n # This is called by the form cancel button,\n # and also by the modal's X and cancel buttons\n if \"btn-cancel-click\" in request.POST:\n url = self.get_success_url()\n return HttpResponseRedirect(url)\n\n # This is called by the Disable DNSSEC modal to override\n if \"disable-override-click\" in request.POST:\n override = True\n\n # This is called when all DNSSEC data has been deleted and the\n # Save button is pressed\n if len(formset) == 0 and formset.initial != [{}] and override is False:\n # trigger the modal\n # get context 
data from super() rather than self\n # to preserve the context[\"form\"]\n context = super().get_context_data(form=formset)\n context[\"trigger_modal\"] = True\n # Create HTML for the modal button\n modal_button = (\n '<button type=\"submit\" '\n 'class=\"usa-button usa-button--secondary\" '\n 'name=\"disable-override-click\">Remove all DS data</button>'\n )\n\n # context to back out of a broken form on all fields delete\n context[\"modal_button\"] = modal_button\n return self.render_to_response(context)\n\n if formset.is_valid() or override:\n return self.form_valid(formset)\n else:\n return self.form_invalid(formset)\n\n def form_valid(self, formset, **kwargs):\n \"\"\"The formset is valid, perform something with it.\"\"\"\n\n # Set the dnssecdata from the formset\n dnssecdata = extensions.DNSSECExtension()\n\n for form in formset:\n try:\n # if 'delete' not in form.cleaned_data\n # or form.cleaned_data['delete'] == False:\n dsrecord = {\n \"keyTag\": form.cleaned_data[\"key_tag\"],\n \"alg\": int(form.cleaned_data[\"algorithm\"]),\n \"digestType\": int(form.cleaned_data[\"digest_type\"]),\n \"digest\": form.cleaned_data[\"digest\"],\n }\n if dnssecdata.dsData is None:\n dnssecdata.dsData = []\n dnssecdata.dsData.append(common.DSData(**dsrecord))\n except KeyError:\n # no cleaned_data provided for this form, but passed\n # as valid; this can happen if form has been added but\n # not been interacted with; in that case, want to ignore\n pass\n try:\n self.object.dnssecdata = dnssecdata\n except RegistryError as err:\n if err.is_connection_error():\n messages.error(\n self.request,\n GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),\n )\n logger.error(f\"Registry connection error: {err}\")\n else:\n messages.error(self.request, DsDataError(code=DsDataErrorCodes.BAD_DATA))\n logger.error(f\"Registry error: {err}\")\n return self.form_invalid(formset)\n else:\n messages.success(self.request, \"The DS data records for this domain have been updated.\")\n # superclass has the redirect\n return super().form_valid(formset)\n\n\nclass DomainYourContactInformationView(DomainFormBaseView):\n \"\"\"Domain your contact information editing view.\"\"\"\n\n template_name = \"domain_your_contact_information.html\"\n form_class = ContactForm\n\n def get_form_kwargs(self, *args, **kwargs):\n \"\"\"Add domain_info.submitter instance to make a bound form.\"\"\"\n form_kwargs = super().get_form_kwargs(*args, **kwargs)\n form_kwargs[\"instance\"] = self.request.user.contact\n return form_kwargs\n\n def get_success_url(self):\n \"\"\"Redirect to the your contact information for the domain.\"\"\"\n return reverse(\"domain-your-contact-information\", kwargs={\"pk\": self.object.pk})\n\n def form_valid(self, form):\n \"\"\"The form is valid, call setter in model.\"\"\"\n\n # Post to DB using values from the form\n form.save()\n\n messages.success(self.request, \"Your contact information has been updated.\")\n\n # superclass has the redirect\n return super().form_valid(form)\n\n\nclass DomainSecurityEmailView(DomainFormBaseView):\n \"\"\"Domain security email editing view.\"\"\"\n\n template_name = \"domain_security_email.html\"\n form_class = DomainSecurityEmailForm\n\n def get_initial(self):\n \"\"\"The initial value for the form.\"\"\"\n initial = super().get_initial()\n security_contact = self.object.security_contact\n\n invalid_emails = [DefaultEmail.PUBLIC_CONTACT_DEFAULT.value, DefaultEmail.LEGACY_DEFAULT.value]\n if security_contact is None or security_contact.email in invalid_emails:\n 
initial[\"security_email\"] = None\n return initial\n initial[\"security_email\"] = security_contact.email\n return initial\n\n def get_success_url(self):\n \"\"\"Redirect to the security email page for the domain.\"\"\"\n return reverse(\"domain-security-email\", kwargs={\"pk\": self.object.pk})\n\n def form_valid(self, form):\n \"\"\"The form is valid, call setter in model.\"\"\"\n\n # Set the security email from the form\n new_email: str = form.cleaned_data.get(\"security_email\", \"\")\n\n # If we pass nothing for the sec email, set to the default\n if new_email is None or new_email.strip() == \"\":\n new_email = PublicContact.get_default_security().email\n\n contact = self.object.security_contact\n\n # If no default is created for security_contact,\n # then we cannot connect to the registry.\n if contact is None:\n messages.error(\n self.request,\n GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),\n )\n return redirect(self.get_success_url())\n\n contact.email = new_email\n\n try:\n contact.save()\n except RegistryError as Err:\n if Err.is_connection_error():\n messages.error(\n self.request,\n GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),\n )\n logger.error(f\"Registry connection error: {Err}\")\n else:\n messages.error(self.request, SecurityEmailError(code=SecurityEmailErrorCodes.BAD_DATA))\n logger.error(f\"Registry error: {Err}\")\n except ContactError as Err:\n messages.error(self.request, SecurityEmailError(code=SecurityEmailErrorCodes.BAD_DATA))\n logger.error(f\"Generic registry error: {Err}\")\n else:\n messages.success(self.request, \"The security email for this domain has been updated.\")\n\n # superclass has the redirect\n return redirect(self.get_success_url())\n\n\nclass DomainUsersView(DomainBaseView):\n \"\"\"Domain managers page in the domain details.\"\"\"\n\n template_name = \"domain_users.html\"\n\n def get_context_data(self, **kwargs):\n \"\"\"The initial value for the form (which is a formset here).\"\"\"\n context = super().get_context_data(**kwargs)\n\n # Add conditionals to the context (such as \"can_delete_users\")\n context = self._add_booleans_to_context(context)\n\n # Add modal buttons to the context (such as for delete)\n context = self._add_modal_buttons_to_context(context)\n\n # Get the email of the current user\n context[\"current_user_email\"] = self.request.user.email\n\n return context\n\n def _add_booleans_to_context(self, context):\n # Determine if the current user can delete managers\n domain_pk = None\n can_delete_users = False\n\n if self.kwargs is not None and \"pk\" in self.kwargs:\n domain_pk = self.kwargs[\"pk\"]\n # Prevent the end user from deleting themselves as a manager if they are the\n # only manager that exists on a domain.\n can_delete_users = UserDomainRole.objects.filter(domain__id=domain_pk).count() > 1\n\n context[\"can_delete_users\"] = can_delete_users\n return context\n\n def _add_modal_buttons_to_context(self, context):\n \"\"\"Adds modal buttons (and their HTML) to the context\"\"\"\n # Create HTML for the modal button\n modal_button = (\n '<button type=\"submit\" '\n 'class=\"usa-button usa-button--secondary\" '\n 'name=\"delete_domain_manager\">Yes, remove domain manager</button>'\n )\n context[\"modal_button\"] = modal_button\n\n # Create HTML for the modal button when deleting yourself\n modal_button_self = (\n '<button type=\"submit\" '\n 'class=\"usa-button usa-button--secondary\" '\n 'name=\"delete_domain_manager_self\">Yes, remove myself</button>'\n )\n context[\"modal_button_self\"] = 
modal_button_self\n\n return context\n\n\nclass DomainAddUserView(DomainFormBaseView):\n \"\"\"Inside of a domain's user management, a form for adding users.\n\n Multiple inheritance is used here for permissions, form handling, and\n details of the individual domain.\n \"\"\"\n\n template_name = \"domain_add_user.html\"\n form_class = DomainAddUserForm\n\n def get_success_url(self):\n return reverse(\"domain-users\", kwargs={\"pk\": self.object.pk})\n\n def _domain_abs_url(self):\n \"\"\"Get an absolute URL for this domain.\"\"\"\n return self.request.build_absolute_uri(reverse(\"domain\", kwargs={\"pk\": self.object.id}))\n\n def _send_domain_invitation_email(self, email: str, requestor: User, add_success=True):\n \"\"\"Performs the sending of the domain invitation email,\n does not make a domain information object\n email: string- email to send to\n add_success: bool- default True indicates:\n adding a success message to the view if the email sending succeeds\"\"\"\n\n # Set a default email address to send to for staff\n requestor_email = \"[email protected]\"\n\n # Check if the email requestor has a valid email address\n if not requestor.is_staff and requestor.email is not None and requestor.email.strip() != \"\":\n requestor_email = requestor.email\n elif not requestor.is_staff:\n messages.error(self.request, \"Can't send invitation email. No email is associated with your account.\")\n logger.error(\n f\"Can't send email to '{email}' on domain '{self.object}'.\"\n f\"No email exists for the requestor '{requestor.username}'.\",\n exc_info=True,\n )\n return None\n\n try:\n send_templated_email(\n \"emails/domain_invitation.txt\",\n \"emails/domain_invitation_subject.txt\",\n to_address=email,\n context={\n \"domain_url\": self._domain_abs_url(),\n \"domain\": self.object,\n \"requestor_email\": requestor_email,\n },\n )\n except EmailSendingError:\n messages.warning(self.request, \"Could not send email invitation.\")\n logger.warn(\n \"Could not sent email invitation to %s for domain %s\",\n email,\n self.object,\n exc_info=True,\n )\n else:\n if add_success:\n messages.success(self.request, f\"{email} has been invited to this domain.\")\n\n def _make_invitation(self, email_address: str, requestor: User):\n \"\"\"Make a Domain invitation for this email and redirect with a message.\"\"\"\n invitation, created = DomainInvitation.objects.get_or_create(email=email_address, domain=self.object)\n if not created:\n # that invitation already existed\n messages.warning(\n self.request,\n f\"{email_address} has already been invited to this domain.\",\n )\n else:\n self._send_domain_invitation_email(email=email_address, requestor=requestor)\n return redirect(self.get_success_url())\n\n def form_valid(self, form):\n \"\"\"Add the specified user on this domain.\"\"\"\n requested_email = form.cleaned_data[\"email\"]\n requestor = self.request.user\n # look up a user with that email\n try:\n requested_user = User.objects.get(email=requested_email)\n except User.DoesNotExist:\n # no matching user, go make an invitation\n return self._make_invitation(requested_email, requestor)\n else:\n # if user already exists then just send an email\n self._send_domain_invitation_email(requested_email, requestor, add_success=False)\n\n try:\n UserDomainRole.objects.create(\n user=requested_user,\n domain=self.object,\n role=UserDomainRole.Roles.MANAGER,\n )\n except IntegrityError:\n # User already has the desired role! 
Do nothing??\n pass\n\n messages.success(self.request, f\"Added user {requested_email}.\")\n\n return redirect(self.get_success_url())\n\n\nclass DomainInvitationDeleteView(DomainInvitationPermissionDeleteView, SuccessMessageMixin):\n object: DomainInvitation # workaround for type mismatch in DeleteView\n\n def get_success_url(self):\n return reverse(\"domain-users\", kwargs={\"pk\": self.object.domain.id})\n\n def get_success_message(self, cleaned_data):\n return f\"Successfully canceled invitation for {self.object.email}.\"\n\n\nclass DomainDeleteUserView(UserDomainRolePermissionDeleteView):\n \"\"\"Inside of a domain's user management, a form for deleting users.\"\"\"\n\n object: UserDomainRole # workaround for type mismatch in DeleteView\n\n def get_object(self, queryset=None):\n \"\"\"Custom get_object definition to grab a UserDomainRole object from a domain_id and user_id\"\"\"\n domain_id = self.kwargs.get(\"pk\")\n user_id = self.kwargs.get(\"user_pk\")\n return UserDomainRole.objects.get(domain=domain_id, user=user_id)\n\n def get_success_url(self):\n \"\"\"Refreshes the page after a delete is successful\"\"\"\n return reverse(\"domain-users\", kwargs={\"pk\": self.object.domain.id})\n\n def get_success_message(self, delete_self=False):\n \"\"\"Returns confirmation content for the deletion event\"\"\"\n\n # Grab the text representation of the user we want to delete\n email_or_name = self.object.user.email\n if email_or_name is None or email_or_name.strip() == \"\":\n email_or_name = self.object.user\n\n # If the user is deleting themselves, return a specific message.\n # If not, return something more generic.\n if delete_self:\n message = f\"You are no longer managing the domain {self.object.domain}.\"\n else:\n message = f\"Removed {email_or_name} as a manager for this domain.\"\n\n return message\n\n def form_valid(self, form):\n \"\"\"Delete the specified user on this domain.\"\"\"\n\n # Delete the object\n super().form_valid(form)\n\n # Is the user deleting themselves? If so, display a different message\n delete_self = self.request.user == self.object.user\n\n # Add a success message\n messages.success(self.request, self.get_success_message(delete_self))\n return redirect(self.get_success_url())\n\n def post(self, request, *args, **kwargs):\n \"\"\"Custom post implementation to redirect to home in the event that the user deletes themselves\"\"\"\n response = super().post(request, *args, **kwargs)\n\n # If the user is deleting themselves, redirect to home\n delete_self = self.request.user == self.object.user\n if delete_self:\n return redirect(reverse(\"home\"))\n\n return response\n", "path": "src/registrar/views/domain.py"}], "after_files": [{"content": "\"\"\"Views for a single Domain.\n\nAuthorization is handled by the `DomainPermissionView`. 
To ensure that only\nauthorized users can see information on a domain, every view here should\ninherit from `DomainPermissionView` (or DomainInvitationPermissionDeleteView).\n\"\"\"\n\nimport logging\n\nfrom django.contrib import messages\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.db import IntegrityError\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.views.generic.edit import FormMixin\n\nfrom registrar.models import (\n Domain,\n DomainInvitation,\n User,\n UserDomainRole,\n)\nfrom registrar.models.public_contact import PublicContact\nfrom registrar.utility.enums import DefaultEmail\nfrom registrar.utility.errors import (\n GenericError,\n GenericErrorCodes,\n NameserverError,\n NameserverErrorCodes as nsErrorCodes,\n DsDataError,\n DsDataErrorCodes,\n SecurityEmailError,\n SecurityEmailErrorCodes,\n)\nfrom registrar.models.utility.contact_error import ContactError\nfrom registrar.views.utility.permission_views import UserDomainRolePermissionDeleteView\n\nfrom ..forms import (\n ContactForm,\n AuthorizingOfficialContactForm,\n DomainOrgNameAddressForm,\n DomainAddUserForm,\n DomainSecurityEmailForm,\n NameserverFormset,\n DomainDnssecForm,\n DomainDsdataFormset,\n DomainDsdataForm,\n)\n\nfrom epplibwrapper import (\n common,\n extensions,\n RegistryError,\n)\n\nfrom ..utility.email import send_templated_email, EmailSendingError\nfrom .utility import DomainPermissionView, DomainInvitationPermissionDeleteView\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass DomainBaseView(DomainPermissionView):\n \"\"\"\n Base View for the Domain. Handles getting and setting the domain\n in session cache on GETs. Also provides methods for getting\n and setting the domain in cache\n \"\"\"\n\n def get(self, request, *args, **kwargs):\n self._get_domain(request)\n context = self.get_context_data(object=self.object)\n return self.render_to_response(context)\n\n def _get_domain(self, request):\n \"\"\"\n get domain from session cache or from db and set\n to self.object\n set session to self for downstream functions to\n update session cache\n \"\"\"\n self.session = request.session\n # domain:private_key is the session key to use for\n # caching the domain in the session\n domain_pk = \"domain:\" + str(self.kwargs.get(\"pk\"))\n cached_domain = self.session.get(domain_pk)\n\n if cached_domain:\n self.object = cached_domain\n else:\n self.object = self.get_object()\n self._update_session_with_domain()\n\n def _update_session_with_domain(self):\n \"\"\"\n update domain in the session cache\n \"\"\"\n domain_pk = \"domain:\" + str(self.kwargs.get(\"pk\"))\n self.session[domain_pk] = self.object\n\n\nclass DomainFormBaseView(DomainBaseView, FormMixin):\n \"\"\"\n Form Base View for the Domain. Handles getting and setting\n domain in cache when dealing with domain forms. 
Provides\n implementations of post, form_valid and form_invalid.\n \"\"\"\n\n def post(self, request, *args, **kwargs):\n \"\"\"Form submission posts to this view.\n\n This post method harmonizes using DomainBaseView and FormMixin\n \"\"\"\n self._get_domain(request)\n form = self.get_form()\n if form.is_valid():\n return self.form_valid(form)\n else:\n return self.form_invalid(form)\n\n def form_valid(self, form):\n # updates session cache with domain\n self._update_session_with_domain()\n\n # superclass has the redirect\n return super().form_valid(form)\n\n def form_invalid(self, form):\n # updates session cache with domain\n self._update_session_with_domain()\n\n # superclass has the redirect\n return super().form_invalid(form)\n\n\nclass DomainView(DomainBaseView):\n\n \"\"\"Domain detail overview page.\"\"\"\n\n template_name = \"domain_detail.html\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n default_emails = [DefaultEmail.PUBLIC_CONTACT_DEFAULT.value, DefaultEmail.LEGACY_DEFAULT.value]\n\n context[\"hidden_security_emails\"] = default_emails\n\n security_email = self.object.get_security_email()\n if security_email is None or security_email in default_emails:\n context[\"security_email\"] = None\n return context\n context[\"security_email\"] = security_email\n return context\n\n def in_editable_state(self, pk):\n \"\"\"Override in_editable_state from DomainPermission\n Allow detail page to be viewable\"\"\"\n\n requested_domain = None\n if Domain.objects.filter(id=pk).exists():\n requested_domain = Domain.objects.get(id=pk)\n\n # return true if the domain exists, this will allow the detail page to load\n if requested_domain:\n return True\n return False\n\n def _get_domain(self, request):\n \"\"\"\n override get_domain for this view so that domain overview\n always resets the cache for the domain object\n \"\"\"\n self.session = request.session\n self.object = self.get_object()\n self._update_session_with_domain()\n\n\nclass DomainOrgNameAddressView(DomainFormBaseView):\n \"\"\"Organization name and mailing address view\"\"\"\n\n model = Domain\n template_name = \"domain_org_name_address.html\"\n context_object_name = \"domain\"\n form_class = DomainOrgNameAddressForm\n\n def get_form_kwargs(self, *args, **kwargs):\n \"\"\"Add domain_info.organization_name instance to make a bound form.\"\"\"\n form_kwargs = super().get_form_kwargs(*args, **kwargs)\n form_kwargs[\"instance\"] = self.object.domain_info\n return form_kwargs\n\n def get_success_url(self):\n \"\"\"Redirect to the overview page for the domain.\"\"\"\n return reverse(\"domain-org-name-address\", kwargs={\"pk\": self.object.pk})\n\n def form_valid(self, form):\n \"\"\"The form is valid, save the organization name and mailing address.\"\"\"\n form.save()\n\n messages.success(self.request, \"The organization information for this domain has been updated.\")\n\n # superclass has the redirect\n return super().form_valid(form)\n\n\nclass DomainAuthorizingOfficialView(DomainFormBaseView):\n \"\"\"Domain authorizing official editing view.\"\"\"\n\n model = Domain\n template_name = \"domain_authorizing_official.html\"\n context_object_name = \"domain\"\n form_class = AuthorizingOfficialContactForm\n\n def get_form_kwargs(self, *args, **kwargs):\n \"\"\"Add domain_info.authorizing_official instance to make a bound form.\"\"\"\n form_kwargs = super().get_form_kwargs(*args, **kwargs)\n form_kwargs[\"instance\"] = self.object.domain_info.authorizing_official\n return form_kwargs\n\n def 
get_success_url(self):\n \"\"\"Redirect to the overview page for the domain.\"\"\"\n return reverse(\"domain-authorizing-official\", kwargs={\"pk\": self.object.pk})\n\n def form_valid(self, form):\n \"\"\"The form is valid, save the authorizing official.\"\"\"\n # Set the domain information in the form so that it can be accessible\n # to associate a new Contact as authorizing official, if new Contact is needed\n # in the save() method\n form.set_domain_info(self.object.domain_info)\n form.save()\n\n messages.success(self.request, \"The authorizing official for this domain has been updated.\")\n\n # superclass has the redirect\n return super().form_valid(form)\n\n\nclass DomainDNSView(DomainBaseView):\n \"\"\"DNS Information View.\"\"\"\n\n template_name = \"domain_dns.html\"\n\n\nclass DomainNameserversView(DomainFormBaseView):\n \"\"\"Domain nameserver editing view.\"\"\"\n\n template_name = \"domain_nameservers.html\"\n form_class = NameserverFormset\n model = Domain\n\n def get_initial(self):\n \"\"\"The initial value for the form (which is a formset here).\"\"\"\n nameservers = self.object.nameservers\n initial_data = []\n\n if nameservers is not None:\n # Add existing nameservers as initial data\n initial_data.extend({\"server\": name, \"ip\": \",\".join(ip)} for name, ip in nameservers)\n\n # Ensure at least 3 fields, filled or empty\n while len(initial_data) < 2:\n initial_data.append({})\n\n return initial_data\n\n def get_success_url(self):\n \"\"\"Redirect to the nameservers page for the domain.\"\"\"\n return reverse(\"domain-dns-nameservers\", kwargs={\"pk\": self.object.pk})\n\n def get_context_data(self, **kwargs):\n \"\"\"Adjust context from FormMixin for formsets.\"\"\"\n context = super().get_context_data(**kwargs)\n # use \"formset\" instead of \"form\" for the key\n context[\"formset\"] = context.pop(\"form\")\n return context\n\n def get_form(self, **kwargs):\n \"\"\"Override the labels and required fields every time we get a formset.\"\"\"\n formset = super().get_form(**kwargs)\n\n for i, form in enumerate(formset):\n form.fields[\"server\"].label += f\" {i+1}\"\n if i < 2:\n form.fields[\"server\"].required = True\n else:\n form.fields[\"server\"].required = False\n form.fields[\"server\"].label += \" (optional)\"\n form.fields[\"domain\"].initial = self.object.name\n return formset\n\n def post(self, request, *args, **kwargs):\n \"\"\"Form submission posts to this view.\n\n This post method harmonizes using DomainBaseView and FormMixin\n \"\"\"\n self._get_domain(request)\n formset = self.get_form()\n\n if \"btn-cancel-click\" in request.POST:\n url = self.get_success_url()\n return HttpResponseRedirect(url)\n\n if formset.is_valid():\n return self.form_valid(formset)\n else:\n return self.form_invalid(formset)\n\n def form_valid(self, formset):\n \"\"\"The formset is valid, perform something with it.\"\"\"\n\n self.request.session[\"nameservers_form_domain\"] = self.object\n\n # Set the nameservers from the formset\n nameservers = []\n for form in formset:\n try:\n ip_string = form.cleaned_data[\"ip\"]\n # ip_string will be None or a string of IP addresses\n # comma-separated\n ip_list = []\n if ip_string:\n # Split the string into a list using a comma as the delimiter\n ip_list = ip_string.split(\",\")\n\n as_tuple = (\n form.cleaned_data[\"server\"],\n ip_list,\n )\n nameservers.append(as_tuple)\n except KeyError:\n # no server information in this field, skip it\n pass\n\n try:\n self.object.nameservers = nameservers\n except NameserverError as Err:\n # 
NamserverErrors *should* be caught in form; if reached here,\n # there was an uncaught error in submission (through EPP)\n messages.error(self.request, NameserverError(code=nsErrorCodes.BAD_DATA))\n logger.error(f\"Nameservers error: {Err}\")\n # TODO: registry is not throwing an error when no connection\n except RegistryError as Err:\n if Err.is_connection_error():\n messages.error(\n self.request,\n GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),\n )\n logger.error(f\"Registry connection error: {Err}\")\n else:\n messages.error(self.request, NameserverError(code=nsErrorCodes.BAD_DATA))\n logger.error(f\"Registry error: {Err}\")\n else:\n messages.success(\n self.request,\n \"The name servers for this domain have been updated. \"\n \"Note that DNS changes could take anywhere from a few minutes to \"\n \"48 hours to propagate across the internet.\",\n )\n\n # superclass has the redirect\n return super().form_valid(formset)\n\n\nclass DomainDNSSECView(DomainFormBaseView):\n \"\"\"Domain DNSSEC editing view.\"\"\"\n\n template_name = \"domain_dnssec.html\"\n form_class = DomainDnssecForm\n\n def get_context_data(self, **kwargs):\n \"\"\"The initial value for the form (which is a formset here).\"\"\"\n context = super().get_context_data(**kwargs)\n\n has_dnssec_records = self.object.dnssecdata is not None\n\n # Create HTML for the modal button\n modal_button = (\n '<button type=\"submit\" '\n 'class=\"usa-button usa-button--secondary\" '\n 'name=\"disable_dnssec\">Confirm</button>'\n )\n\n context[\"modal_button\"] = modal_button\n context[\"has_dnssec_records\"] = has_dnssec_records\n context[\"dnssec_enabled\"] = self.request.session.pop(\"dnssec_enabled\", False)\n\n return context\n\n def get_success_url(self):\n \"\"\"Redirect to the DNSSEC page for the domain.\"\"\"\n return reverse(\"domain-dns-dnssec\", kwargs={\"pk\": self.object.pk})\n\n def post(self, request, *args, **kwargs):\n \"\"\"Form submission posts to this view.\"\"\"\n self._get_domain(request)\n form = self.get_form()\n if form.is_valid():\n if \"disable_dnssec\" in request.POST:\n try:\n self.object.dnssecdata = {}\n except RegistryError as err:\n errmsg = \"Error removing existing DNSSEC record(s).\"\n logger.error(errmsg + \": \" + err)\n messages.error(self.request, errmsg)\n\n return self.form_valid(form)\n\n\nclass DomainDsDataView(DomainFormBaseView):\n \"\"\"Domain DNSSEC ds data editing view.\"\"\"\n\n template_name = \"domain_dsdata.html\"\n form_class = DomainDsdataFormset\n form = DomainDsdataForm\n\n def get_initial(self):\n \"\"\"The initial value for the form (which is a formset here).\"\"\"\n dnssecdata: extensions.DNSSECExtension = self.object.dnssecdata\n initial_data = []\n\n if dnssecdata is not None and dnssecdata.dsData is not None:\n # Add existing nameservers as initial data\n initial_data.extend(\n {\n \"key_tag\": record.keyTag,\n \"algorithm\": record.alg,\n \"digest_type\": record.digestType,\n \"digest\": record.digest,\n }\n for record in dnssecdata.dsData\n )\n\n # Ensure at least 1 record, filled or empty\n while len(initial_data) == 0:\n initial_data.append({})\n\n return initial_data\n\n def get_success_url(self):\n \"\"\"Redirect to the DS data page for the domain.\"\"\"\n return reverse(\"domain-dns-dnssec-dsdata\", kwargs={\"pk\": self.object.pk})\n\n def get_context_data(self, **kwargs):\n \"\"\"Adjust context from FormMixin for formsets.\"\"\"\n context = super().get_context_data(**kwargs)\n # use \"formset\" instead of \"form\" for the key\n context[\"formset\"] = 
context.pop(\"form\")\n\n return context\n\n def post(self, request, *args, **kwargs):\n \"\"\"Formset submission posts to this view.\"\"\"\n self._get_domain(request)\n formset = self.get_form()\n override = False\n\n # This is called by the form cancel button,\n # and also by the modal's X and cancel buttons\n if \"btn-cancel-click\" in request.POST:\n url = self.get_success_url()\n return HttpResponseRedirect(url)\n\n # This is called by the Disable DNSSEC modal to override\n if \"disable-override-click\" in request.POST:\n override = True\n\n # This is called when all DNSSEC data has been deleted and the\n # Save button is pressed\n if len(formset) == 0 and formset.initial != [{}] and override is False:\n # trigger the modal\n # get context data from super() rather than self\n # to preserve the context[\"form\"]\n context = super().get_context_data(form=formset)\n context[\"trigger_modal\"] = True\n # Create HTML for the modal button\n modal_button = (\n '<button type=\"submit\" '\n 'class=\"usa-button usa-button--secondary\" '\n 'name=\"disable-override-click\">Remove all DS data</button>'\n )\n\n # context to back out of a broken form on all fields delete\n context[\"modal_button\"] = modal_button\n return self.render_to_response(context)\n\n if formset.is_valid() or override:\n return self.form_valid(formset)\n else:\n return self.form_invalid(formset)\n\n def form_valid(self, formset, **kwargs):\n \"\"\"The formset is valid, perform something with it.\"\"\"\n\n # Set the dnssecdata from the formset\n dnssecdata = extensions.DNSSECExtension()\n\n for form in formset:\n try:\n # if 'delete' not in form.cleaned_data\n # or form.cleaned_data['delete'] == False:\n dsrecord = {\n \"keyTag\": form.cleaned_data[\"key_tag\"],\n \"alg\": int(form.cleaned_data[\"algorithm\"]),\n \"digestType\": int(form.cleaned_data[\"digest_type\"]),\n \"digest\": form.cleaned_data[\"digest\"],\n }\n if dnssecdata.dsData is None:\n dnssecdata.dsData = []\n dnssecdata.dsData.append(common.DSData(**dsrecord))\n except KeyError:\n # no cleaned_data provided for this form, but passed\n # as valid; this can happen if form has been added but\n # not been interacted with; in that case, want to ignore\n pass\n try:\n self.object.dnssecdata = dnssecdata\n except RegistryError as err:\n if err.is_connection_error():\n messages.error(\n self.request,\n GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),\n )\n logger.error(f\"Registry connection error: {err}\")\n else:\n messages.error(self.request, DsDataError(code=DsDataErrorCodes.BAD_DATA))\n logger.error(f\"Registry error: {err}\")\n return self.form_invalid(formset)\n else:\n messages.success(self.request, \"The DS data records for this domain have been updated.\")\n # superclass has the redirect\n return super().form_valid(formset)\n\n\nclass DomainYourContactInformationView(DomainFormBaseView):\n \"\"\"Domain your contact information editing view.\"\"\"\n\n template_name = \"domain_your_contact_information.html\"\n form_class = ContactForm\n\n def get_form_kwargs(self, *args, **kwargs):\n \"\"\"Add domain_info.submitter instance to make a bound form.\"\"\"\n form_kwargs = super().get_form_kwargs(*args, **kwargs)\n form_kwargs[\"instance\"] = self.request.user.contact\n return form_kwargs\n\n def get_success_url(self):\n \"\"\"Redirect to the your contact information for the domain.\"\"\"\n return reverse(\"domain-your-contact-information\", kwargs={\"pk\": self.object.pk})\n\n def form_valid(self, form):\n \"\"\"The form is valid, call setter in 
model.\"\"\"\n\n # Post to DB using values from the form\n form.save()\n\n messages.success(self.request, \"Your contact information for all your domains has been updated.\")\n\n # superclass has the redirect\n return super().form_valid(form)\n\n\nclass DomainSecurityEmailView(DomainFormBaseView):\n \"\"\"Domain security email editing view.\"\"\"\n\n template_name = \"domain_security_email.html\"\n form_class = DomainSecurityEmailForm\n\n def get_initial(self):\n \"\"\"The initial value for the form.\"\"\"\n initial = super().get_initial()\n security_contact = self.object.security_contact\n\n invalid_emails = [DefaultEmail.PUBLIC_CONTACT_DEFAULT.value, DefaultEmail.LEGACY_DEFAULT.value]\n if security_contact is None or security_contact.email in invalid_emails:\n initial[\"security_email\"] = None\n return initial\n initial[\"security_email\"] = security_contact.email\n return initial\n\n def get_success_url(self):\n \"\"\"Redirect to the security email page for the domain.\"\"\"\n return reverse(\"domain-security-email\", kwargs={\"pk\": self.object.pk})\n\n def form_valid(self, form):\n \"\"\"The form is valid, call setter in model.\"\"\"\n\n # Set the security email from the form\n new_email: str = form.cleaned_data.get(\"security_email\", \"\")\n\n # If we pass nothing for the sec email, set to the default\n if new_email is None or new_email.strip() == \"\":\n new_email = PublicContact.get_default_security().email\n\n contact = self.object.security_contact\n\n # If no default is created for security_contact,\n # then we cannot connect to the registry.\n if contact is None:\n messages.error(\n self.request,\n GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),\n )\n return redirect(self.get_success_url())\n\n contact.email = new_email\n\n try:\n contact.save()\n except RegistryError as Err:\n if Err.is_connection_error():\n messages.error(\n self.request,\n GenericError(code=GenericErrorCodes.CANNOT_CONTACT_REGISTRY),\n )\n logger.error(f\"Registry connection error: {Err}\")\n else:\n messages.error(self.request, SecurityEmailError(code=SecurityEmailErrorCodes.BAD_DATA))\n logger.error(f\"Registry error: {Err}\")\n except ContactError as Err:\n messages.error(self.request, SecurityEmailError(code=SecurityEmailErrorCodes.BAD_DATA))\n logger.error(f\"Generic registry error: {Err}\")\n else:\n messages.success(self.request, \"The security email for this domain has been updated.\")\n\n # superclass has the redirect\n return redirect(self.get_success_url())\n\n\nclass DomainUsersView(DomainBaseView):\n \"\"\"Domain managers page in the domain details.\"\"\"\n\n template_name = \"domain_users.html\"\n\n def get_context_data(self, **kwargs):\n \"\"\"The initial value for the form (which is a formset here).\"\"\"\n context = super().get_context_data(**kwargs)\n\n # Add conditionals to the context (such as \"can_delete_users\")\n context = self._add_booleans_to_context(context)\n\n # Add modal buttons to the context (such as for delete)\n context = self._add_modal_buttons_to_context(context)\n\n # Get the email of the current user\n context[\"current_user_email\"] = self.request.user.email\n\n return context\n\n def _add_booleans_to_context(self, context):\n # Determine if the current user can delete managers\n domain_pk = None\n can_delete_users = False\n\n if self.kwargs is not None and \"pk\" in self.kwargs:\n domain_pk = self.kwargs[\"pk\"]\n # Prevent the end user from deleting themselves as a manager if they are the\n # only manager that exists on a domain.\n can_delete_users = 
UserDomainRole.objects.filter(domain__id=domain_pk).count() > 1\n\n context[\"can_delete_users\"] = can_delete_users\n return context\n\n def _add_modal_buttons_to_context(self, context):\n \"\"\"Adds modal buttons (and their HTML) to the context\"\"\"\n # Create HTML for the modal button\n modal_button = (\n '<button type=\"submit\" '\n 'class=\"usa-button usa-button--secondary\" '\n 'name=\"delete_domain_manager\">Yes, remove domain manager</button>'\n )\n context[\"modal_button\"] = modal_button\n\n # Create HTML for the modal button when deleting yourself\n modal_button_self = (\n '<button type=\"submit\" '\n 'class=\"usa-button usa-button--secondary\" '\n 'name=\"delete_domain_manager_self\">Yes, remove myself</button>'\n )\n context[\"modal_button_self\"] = modal_button_self\n\n return context\n\n\nclass DomainAddUserView(DomainFormBaseView):\n \"\"\"Inside of a domain's user management, a form for adding users.\n\n Multiple inheritance is used here for permissions, form handling, and\n details of the individual domain.\n \"\"\"\n\n template_name = \"domain_add_user.html\"\n form_class = DomainAddUserForm\n\n def get_success_url(self):\n return reverse(\"domain-users\", kwargs={\"pk\": self.object.pk})\n\n def _domain_abs_url(self):\n \"\"\"Get an absolute URL for this domain.\"\"\"\n return self.request.build_absolute_uri(reverse(\"domain\", kwargs={\"pk\": self.object.id}))\n\n def _send_domain_invitation_email(self, email: str, requestor: User, add_success=True):\n \"\"\"Performs the sending of the domain invitation email,\n does not make a domain information object\n email: string- email to send to\n add_success: bool- default True indicates:\n adding a success message to the view if the email sending succeeds\"\"\"\n\n # Set a default email address to send to for staff\n requestor_email = \"[email protected]\"\n\n # Check if the email requestor has a valid email address\n if not requestor.is_staff and requestor.email is not None and requestor.email.strip() != \"\":\n requestor_email = requestor.email\n elif not requestor.is_staff:\n messages.error(self.request, \"Can't send invitation email. 
No email is associated with your account.\")\n logger.error(\n f\"Can't send email to '{email}' on domain '{self.object}'.\"\n f\"No email exists for the requestor '{requestor.username}'.\",\n exc_info=True,\n )\n return None\n\n try:\n send_templated_email(\n \"emails/domain_invitation.txt\",\n \"emails/domain_invitation_subject.txt\",\n to_address=email,\n context={\n \"domain_url\": self._domain_abs_url(),\n \"domain\": self.object,\n \"requestor_email\": requestor_email,\n },\n )\n except EmailSendingError:\n messages.warning(self.request, \"Could not send email invitation.\")\n logger.warn(\n \"Could not sent email invitation to %s for domain %s\",\n email,\n self.object,\n exc_info=True,\n )\n else:\n if add_success:\n messages.success(self.request, f\"{email} has been invited to this domain.\")\n\n def _make_invitation(self, email_address: str, requestor: User):\n \"\"\"Make a Domain invitation for this email and redirect with a message.\"\"\"\n invitation, created = DomainInvitation.objects.get_or_create(email=email_address, domain=self.object)\n if not created:\n # that invitation already existed\n messages.warning(\n self.request,\n f\"{email_address} has already been invited to this domain.\",\n )\n else:\n self._send_domain_invitation_email(email=email_address, requestor=requestor)\n return redirect(self.get_success_url())\n\n def form_valid(self, form):\n \"\"\"Add the specified user on this domain.\"\"\"\n requested_email = form.cleaned_data[\"email\"]\n requestor = self.request.user\n # look up a user with that email\n try:\n requested_user = User.objects.get(email=requested_email)\n except User.DoesNotExist:\n # no matching user, go make an invitation\n return self._make_invitation(requested_email, requestor)\n else:\n # if user already exists then just send an email\n self._send_domain_invitation_email(requested_email, requestor, add_success=False)\n\n try:\n UserDomainRole.objects.create(\n user=requested_user,\n domain=self.object,\n role=UserDomainRole.Roles.MANAGER,\n )\n except IntegrityError:\n # User already has the desired role! 
Do nothing??\n pass\n\n messages.success(self.request, f\"Added user {requested_email}.\")\n\n return redirect(self.get_success_url())\n\n\nclass DomainInvitationDeleteView(DomainInvitationPermissionDeleteView, SuccessMessageMixin):\n object: DomainInvitation # workaround for type mismatch in DeleteView\n\n def get_success_url(self):\n return reverse(\"domain-users\", kwargs={\"pk\": self.object.domain.id})\n\n def get_success_message(self, cleaned_data):\n return f\"Successfully canceled invitation for {self.object.email}.\"\n\n\nclass DomainDeleteUserView(UserDomainRolePermissionDeleteView):\n \"\"\"Inside of a domain's user management, a form for deleting users.\"\"\"\n\n object: UserDomainRole # workaround for type mismatch in DeleteView\n\n def get_object(self, queryset=None):\n \"\"\"Custom get_object definition to grab a UserDomainRole object from a domain_id and user_id\"\"\"\n domain_id = self.kwargs.get(\"pk\")\n user_id = self.kwargs.get(\"user_pk\")\n return UserDomainRole.objects.get(domain=domain_id, user=user_id)\n\n def get_success_url(self):\n \"\"\"Refreshes the page after a delete is successful\"\"\"\n return reverse(\"domain-users\", kwargs={\"pk\": self.object.domain.id})\n\n def get_success_message(self, delete_self=False):\n \"\"\"Returns confirmation content for the deletion event\"\"\"\n\n # Grab the text representation of the user we want to delete\n email_or_name = self.object.user.email\n if email_or_name is None or email_or_name.strip() == \"\":\n email_or_name = self.object.user\n\n # If the user is deleting themselves, return a specific message.\n # If not, return something more generic.\n if delete_self:\n message = f\"You are no longer managing the domain {self.object.domain}.\"\n else:\n message = f\"Removed {email_or_name} as a manager for this domain.\"\n\n return message\n\n def form_valid(self, form):\n \"\"\"Delete the specified user on this domain.\"\"\"\n\n # Delete the object\n super().form_valid(form)\n\n # Is the user deleting themselves? If so, display a different message\n delete_self = self.request.user == self.object.user\n\n # Add a success message\n messages.success(self.request, self.get_success_message(delete_self))\n return redirect(self.get_success_url())\n\n def post(self, request, *args, **kwargs):\n \"\"\"Custom post implementation to redirect to home in the event that the user deletes themselves\"\"\"\n response = super().post(request, *args, **kwargs)\n\n # If the user is deleting themselves, redirect to home\n delete_self = self.request.user == self.object.user\n if delete_self:\n return redirect(reverse(\"home\"))\n\n return response\n", "path": "src/registrar/views/domain.py"}]} |
gh_patches_debug_1097 | rasdani/github-patches | git_diff | ray-project__ray-1413 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Worker dies when passed pandas DataFrame.
### System information
- **Ray version**: 0.3.0
- **Python version**: 3.6.0
- **Exact command to reproduce**:
```python
import pandas as pd
import ray
pd.__version__ # '0.19.2'
ray.init()
df = pd.DataFrame(data={'col1': [1, 2, 3, 4], 'col2': [3, 4, 5, 6]})
@ray.remote
def f(x):
pass
f.remote(df)
```
The last line causes the following error to be printed in the background.
```
A worker died or was killed while executing a task.
```
cc @devin-petersohn
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/dataframe/__init__.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 from .dataframe import DataFrame
6 from .dataframe import from_pandas
7 from .dataframe import to_pandas
8 from .series import Series
9 import ray
10 import pandas as pd
11
12 __all__ = ["DataFrame", "from_pandas", "to_pandas", "Series"]
13
14 ray.register_custom_serializer(pd.DataFrame, use_pickle=True)
15 ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/ray/dataframe/__init__.py b/python/ray/dataframe/__init__.py
--- a/python/ray/dataframe/__init__.py
+++ b/python/ray/dataframe/__init__.py
@@ -6,10 +6,5 @@
from .dataframe import from_pandas
from .dataframe import to_pandas
from .series import Series
-import ray
-import pandas as pd
__all__ = ["DataFrame", "from_pandas", "to_pandas", "Series"]
-
-ray.register_custom_serializer(pd.DataFrame, use_pickle=True)
-ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)
| {"golden_diff": "diff --git a/python/ray/dataframe/__init__.py b/python/ray/dataframe/__init__.py\n--- a/python/ray/dataframe/__init__.py\n+++ b/python/ray/dataframe/__init__.py\n@@ -6,10 +6,5 @@\n from .dataframe import from_pandas\n from .dataframe import to_pandas\n from .series import Series\n-import ray\n-import pandas as pd\n \n __all__ = [\"DataFrame\", \"from_pandas\", \"to_pandas\", \"Series\"]\n-\n-ray.register_custom_serializer(pd.DataFrame, use_pickle=True)\n-ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)\n", "issue": "Worker dies when passed pandas DataFrame.\n### System information\r\n- **Ray version**: 0.3.0\r\n- **Python version**: 3.6.0\r\n- **Exact command to reproduce**:\r\n\r\n```python\r\nimport pandas as pd\r\nimport ray\r\n\r\npd.__version__ # '0.19.2'\r\n\r\nray.init()\r\n\r\ndf = pd.DataFrame(data={'col1': [1, 2, 3, 4], 'col2': [3, 4, 5, 6]})\r\n\r\[email protected]\r\ndef f(x):\r\n pass\r\n\r\nf.remote(df)\r\n```\r\n\r\nThe last line causes the following error to be printed in the background.\r\n\r\n```\r\nA worker died or was killed while executing a task.\r\n```\r\n\r\ncc @devin-petersohn\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom .dataframe import DataFrame\nfrom .dataframe import from_pandas\nfrom .dataframe import to_pandas\nfrom .series import Series\nimport ray\nimport pandas as pd\n\n__all__ = [\"DataFrame\", \"from_pandas\", \"to_pandas\", \"Series\"]\n\nray.register_custom_serializer(pd.DataFrame, use_pickle=True)\nray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)\n", "path": "python/ray/dataframe/__init__.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom .dataframe import DataFrame\nfrom .dataframe import from_pandas\nfrom .dataframe import to_pandas\nfrom .series import Series\n\n__all__ = [\"DataFrame\", \"from_pandas\", \"to_pandas\", \"Series\"]\n", "path": "python/ray/dataframe/__init__.py"}]} |
gh_patches_debug_1098 | rasdani/github-patches | git_diff | Gallopsled__pwntools-109 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Creating multiple rop objects is veeery noisy
I get this A LOT:
```
[*] Found gadgets for './ropasaurusrex-85a84f36f81e11f720b1cf5ea0d1fb0d5a603c0d' in cache '/tmp/pwntools-rop-cache/ropasaurusrex-85a84f36f81e11f720b1cf5ea0d1fb0d5a603c0d-c5bb68949dcc3264cd3a560c05d0b566-0x8048000'
```
--- END ISSUE ---
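The repeated message comes from `ROP.__cache_load` in the listing below, which calls `log.info` every time a cached gadget file is found — once per `ROP(...)` construction. One way the chatter could be reduced is to announce each cache file only once per process; the sketch below illustrates that idea only. The `_cache_announced` attribute is an invented name for this illustration, and this is not necessarily the change that landed upstream.

```python
class ROP(object):
    # ...existing methods from the listing below...

    # Assumption for illustration: remember which cache files have already
    # been reported, so repeated ROP() construction stays quiet.
    _cache_announced = set()

    def __cache_load(self, elf):
        filename = self.__get_cachefile_name(elf)

        if not os.path.exists(filename):
            return None

        # Report each cache file at most once per process.
        if filename not in ROP._cache_announced:
            ROP._cache_announced.add(filename)
            log.info("Found gadgets for %r in cache %r" % (elf.file.name, filename))

        return eval(file(filename).read())
```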
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwnlib/rop.py`
Content:
```
1 """Return Oriented Programming
2 """
3 import hashlib, os, sys, tempfile, re
4
5 from . import context, log, elf
6 from .util import packing, lists
7
8 try:
9 import ropgadget
10 __ok = True
11 except ImportError:
12 __ok = False
13
14 class ROP(object):
15 """Class which simplifies the generation of ROP-chains.
16
17 Example:
18
19 .. code-block:: python
20
21 elf = ELF('ropasaurusrex')
22 rop = ROP(elf)
23 rop.read(0, elf.bss(0x80))
24 rop.dump()
25 # ['0x0000: 0x80482fc (read)',
26 # '0x0004: 0xdeadbeef',
27 # '0x0008: 0x0',
28 # '0x000c: 0x80496a8']
29 str(rop)
30 # '\\xfc\\x82\\x04\\x08\\xef\\xbe\\xad\\xde\\x00\\x00\\x00\\x00\\xa8\\x96\\x04\\x08'
31 """
32 def __init__(self, elfs, base = None):
33 """
34 Args:
35 elfs(list): List of pwnlib.elf.ELF objects for mining
36 """
37 # Permit singular ROP(elf) vs ROP([elf])
38 if isinstance(elfs, elf.ELF):
39 elfs = [elfs]
40 elif isinstance(elfs, (str, unicode)):
41 elfs = [elf.ELF(elfs)]
42
43 self.elfs = elfs
44 self._chain = []
45 self.base = base
46 self.align = max(e.elfclass for e in elfs)/8
47 self.migrated = False
48 self.__load()
49
50 def resolve(self, resolvable):
51 """Resolves a symbol to an address
52
53 Args:
54 resolvable(str,int): Thing to convert into an address
55
56 Returns:
57 int containing address of 'resolvable', or None
58 """
59 if isinstance(resolvable, str):
60 for elf in self.elfs:
61 if resolvable in elf.symbols:
62 return elf.symbols[resolvable]
63 if isinstance(resolvable, (int,long)):
64 return resolvable
65 return None
66
67 def unresolve(self, value):
68 """Inverts 'resolve'. Given an address, it attempts to find a symbol
69 for it in the loaded ELF files. If none is found, it searches all
70 known gadgets, and returns the disassembly
71
72 Args:
73 value(int): Address to look up
74
75 Returns:
76 String containing the symbol name for the address, disassembly for a gadget
77 (if there's one at that address), or an empty string.
78 """
79 for elf in self.elfs:
80 for name, addr in elf.symbols.items():
81 if addr == value:
82 return name
83
84 if value in self.gadgets:
85 return '; '.join(self.gadgets[value]['insns'])
86 return ''
87
88 def _output_struct(self, value, output):
89 next_index = len(output)
90
91 if isinstance(value, (int, long)):
92 return value
93 elif isinstance(value, (unicode, str)):
94 if isinstance(value, unicode):
95 value = value.encode('utf8')
96
97 while True:
98 value += '\x00'
99 if len(value) % self.align == 0:
100 break
101
102 output.append([value])
103 return (next_index,)
104 elif isinstance(value, (tuple, list)):
105 l = []
106 output.append(l)
107 for v in value:
108 l.append(self._output_struct(v, output))
109 return (next_index,)
110 else:
111 log.error("ROP: Cannot flatten value %r" % value)
112
113 def _build_x86(self):
114 # Stage 1:
115 # Convert every call in self._chain from a (addr, args) tuple
116 # into a (addr, pivot, args, pad) tuple.
117 #
118 # Stage 2:
119 # Micro-optimizations for the last call in the chain.
120 #
121 # Stage 3:
122 # Convert into a [[str/ints/refs]], where
123 # refs are references to one of the first lists and will be turned
124 # into pointers outside this function. Refs are represented as
125 # length-1 tuples.
126
127 if not self._chain:
128 return []
129
130 # Stage 1
131 chain = []
132 for addr, args in self._chain:
133 if not args:
134 chain.append([addr, [], [], 0])
135 else:
136 need = (1+len(args)) * self.align
137 best_pivot = None
138 best_size = None
139
140 for size, pivot in sorted(self.pivots.items()):
141 if size >= need:
142 best_pivot = pivot
143 best_size = size
144 break
145
146 if best_pivot == None:
147 log.error("Could not find gadget to clean up stack for call %r %r" % (addr, args))
148
149 chain.append([addr, [best_pivot], args, best_size/4 - len(args) - 1])
150
151 # Stage 2
152 # If the last call has arguments, there is no need
153 # to fix up the stack up for those arguments
154 if chain[-1][2]:
155 chain[-1][1] = [0xdeadbeef]
156 chain[-1][3] = 0
157
158 # If the last call does not have any arguments, there is no
159 # need to fix up the stack for the second-to-last call.
160 # We can put the last call as the pivot address for
161 # the second-to-last call.
162 if len(chain) > 1 and not chain[-1][2] and chain[-2][2]:
163 # This optimization does not work if a raw string is on the stack
164 if not isinstance(chain[-1][0], (str, unicode)):
165 chain[-2][1] = [chain[-1][0]]
166 chain[-2][3] = 0
167 chain.pop()
168
169 # Stage 3
170 outrop = []
171 output = [outrop]
172
173 for addr, pivot, args, pad in chain:
174 outrop.append(addr)
175 outrop.extend(pivot)
176 for arg in args:
177 outrop.append(self._output_struct(arg, output))
178 for _ in range(pad):
179 outrop.append('$$$$')
180
181 return output
182
183 def build(self, base = None):
184 """Build the ROP chain into a list (addr, int/string, bool), where the
185 last value is True iff the value was an internal reference.
186
187 It is guaranteed that the individual parts are next to each other.
188
189 If there is no base available, then the returned addresses are indexed from 0.
190
191 Args:
192 base(int): The base address to build the rop-chain from. Defaults to
193 self.base.
194 """
195
196 if base == None:
197 base = self.base
198
199 # Use the architecture specific builder to get a [[str/ints/refs]]
200 meth = '_build_' + self.elfs[0].get_machine_arch()
201 if not hasattr(self, meth):
202 log.error("Cannot build rop for architecture %r" % self.elfs[0].get_machine_arch())
203 rop = getattr(self, meth)()
204
205 # Stage 1
206 # Generate a dictionary {ref_id: addr}.
207 addrs = {}
208 if base != None:
209 addr = base
210 for i, l in enumerate(rop):
211 addrs[i] = addr
212 for v in l:
213 if isinstance(v, (int, long, tuple)):
214 addr += self.align
215 else:
216 addr += len(v)
217
218 # Stage 2:
219 # Convert into [(addr, int/string, bool)]
220 addr = base or 0
221 out = []
222 for l in rop:
223 for v in l:
224 if isinstance(v, (int, long)):
225 out.append((addr, v, False))
226 addr += self.align
227 elif isinstance(v, str):
228 out.append((addr, v, False))
229 addr += len(v)
230 elif isinstance(v, tuple):
231 if v[0] in addrs:
232 out.append((addr, addrs[v[0]], True))
233 addr += self.align
234 elif base != None:
235 log.bug("ROP: References unknown structure index")
236 else:
237 log.error("ROP: Cannot use structures without a base address")
238 else:
239 log.bug("ROP: Unexpected value: %r" % v)
240
241 return out
242
243 def chain(self):
244 """Build the ROP chain
245
246 Returns:
247 str containging raw ROP bytes
248 """
249
250 return packing.flat(
251 [value for addr, value, was_ref in self.build()],
252 word_size = 8*self.align
253 )
254
255 def dump(self):
256 """Dump the ROP chain in an easy-to-read manner"""
257 result = []
258
259 rop = self.build(self.base or 0)
260 addrs = [addr for addr, value, was_ref in rop]
261 for addr, value, was_ref in rop:
262 if isinstance(value, str):
263 line = "0x%04x: %16r" % (addr, value.rstrip('\x00'))
264 elif isinstance(value, (int, long)):
265 if was_ref:
266 line = "0x%04x: %#16x (%+d)" % (
267 addr,
268 value,
269 value - addr
270 )
271 else:
272 ref = self.unresolve(value)
273 line = "0x%04x: %#16x%s" % (
274 addr,
275 value,
276 (' (%s)' % ref) if ref else ''
277 )
278 else:
279 log.bug("ROP: ROP.build returned an unexpected value %r" % value)
280
281 result.append(line)
282
283 return result
284
285 def call(self, resolvable, arguments=()):
286 """Add a call to the ROP chain
287
288 Args:
289 resolvable(str,int): Value which can be looked up via 'resolve',
290 or is already an integer.
291 arguments(list): List of arguments which can be passed to pack().
292 Alternately, if a base address is set, arbitrarily nested
293 structures of strings or integers can be provided.
294 """
295 if self.migrated:
296 log.error("Cannot append to a migrated chain")
297
298 addr = self.resolve(resolvable)
299
300 if addr is None:
301 log.error("Could not resolve %r" % resolvable)
302
303 self._chain.append((addr, arguments))
304
305 def raw(self, value):
306 """Adds a raw integer or string to the ROP chain.
307
308 If your architecture requires aligned values, then make
309 sure that any given string is aligned!
310
311 Args:
312 data(int/str): The raw value to put onto the rop chain.
313 """
314
315 if self.migrated:
316 log.error("Cannot append to a migrated chain")
317
318 self._chain.append((value, ()))
319
320 def migrate(self, next_base):
321 """Explicitly set $sp, by using a ``leave; ret`` gadget"""
322
323 if isinstance(next_base, ROP):
324 next_base = self.base
325
326 pop_sp = self.rsp or self.esp
327 pop_bp = self.rbp or self.ebp
328 leave = self.leave
329
330 if pop_sp and len(pop_sp[1]['regs']) == 1:
331 self.raw(pop_sp[0])
332 self.raw(next_base)
333 elif pop_bp and leave and len(pop_bp[1]['regs']) == 1:
334 self.raw(pop_bp[0])
335 self.raw(next_base-4)
336 self.raw(leave[0])
337 else:
338 log.error("Cannot find the gadgets to migrate")
339
340 self.migrated = True
341
342 def __str__(self):
343 """Returns: Raw bytes of the ROP chain"""
344 return self.chain()
345
346 def __get_cachefile_name(self, elf):
347 basename = os.path.basename(elf.file.name)
348 md5sum = hashlib.md5(elf.get_data()).hexdigest()
349
350 filename = "%s-%s-%#x" % (basename, md5sum, elf.address)
351
352 cachedir = os.path.join(tempfile.gettempdir(), 'pwntools-rop-cache')
353
354 if not os.path.exists(cachedir):
355 os.mkdir(cachedir)
356
357 return os.path.join(cachedir, filename)
358
359 def __cache_load(self, elf):
360 filename = self.__get_cachefile_name(elf)
361
362 if os.path.exists(filename):
363 log.info("Found gadgets for %r in cache %r" % (elf.file.name,filename))
364 return eval(file(filename).read())
365
366 def __cache_save(self, elf, data):
367 file(self.__get_cachefile_name(elf),'w+').write(repr(data))
368
369 def __load(self):
370 """Load all ROP gadgets for the selected ELF files"""
371 #
372 # We accept only instructions that look like these.
373 #
374 # - leave
375 # - pop reg
376 # - add $sp, value
377 # - ret
378 #
379 # Currently, ROPgadget does not detect multi-byte "C2" ret.
380 # https://github.com/JonathanSalwan/ROPgadget/issues/53
381 #
382
383 pop = re.compile(r'^pop (.*)')
384 add = re.compile(r'^add .sp, (\S+)$')
385 ret = re.compile(r'^ret$')
386 leave = re.compile(r'^leave$')
387
388 #
389 # Validation routine
390 #
391 # >>> valid('pop eax')
392 # True
393 # >>> valid('add rax, 0x24')
394 # False
395 # >>> valid('add esp, 0x24')
396 # True
397 #
398 valid = lambda insn: any(map(lambda pattern: pattern.match(insn), [pop,add,ret,leave]))
399
400 #
401 # Currently, ropgadget.args.Args() doesn't take any arguments, and pulls
402 # only from sys.argv. Preserve it through this call. We also
403 # monkey-patch sys.stdout to suppress output from ropgadget.
404 #
405 argv = sys.argv
406 stdout = sys.stdout
407 class Wrapper:
408 def __init__(self, fd):
409 self._fd = fd
410 def write(self, s):
411 pass
412 def __getattr__(self, k):
413 return self._fd.__getattribute__(k)
414 sys.stdout = Wrapper(sys.stdout)
415 gadgets = {}
416 try:
417 for elf in self.elfs:
418 cache = self.__cache_load(elf)
419 if cache:
420 gadgets.update(cache)
421 continue
422
423 log.info("Loading gadgets for %r @ %#x" % (elf.path, elf.address))
424 sys.argv = ['ropgadget', '--binary', elf.path, '--only', 'add|pop|leave|ret', '--nojop', '--nosys']
425 args = ropgadget.args.Args().getArgs()
426 core = ropgadget.core.Core(args)
427 core.do_binary(elf.path)
428 core.do_load(0)
429
430 elf_gadgets = {}
431 for gadget in core._Core__gadgets:
432
433 address = gadget['vaddr'] - elf.load_addr + elf.address
434 insns = [g.strip() for g in gadget['gadget'].split(';')]
435
436 if all(map(valid, insns)):
437 elf_gadgets[address] = insns
438 self.__cache_save(elf, elf_gadgets)
439 gadgets.update(elf_gadgets)
440 finally:
441 sys.argv = argv
442 sys.stdout = stdout
443
444
445 #
446 # For each gadget we decided to keep, find out how much it moves the stack,
447 # and log which registers it modifies.
448 #
449 self.gadgets = {}
450 self.pivots = {}
451
452 frame_regs = ['ebp','esp'] if self.align == 4 else ['rbp','rsp']
453
454 for addr,insns in gadgets.items():
455 sp_move = 0
456 regs = []
457 for insn in insns:
458 if pop.match(insn):
459 regs.append(pop.match(insn).group(1))
460 sp_move += self.align
461 elif add.match(insn):
462 sp_move += int(add.match(insn).group(1), 16)
463 elif ret.match(insn):
464 sp_move += self.align
465 elif leave.match(insn):
466 #
467 # HACK: Since this modifies ESP directly, this should
468 # never be returned as a 'normal' ROP gadget that
469 # simply 'increments' the stack.
470 #
471 # As such, the 'move' is set to a very large value,
472 # to prevent .search() from returning it unless $sp
473 # is specified as a register.
474 #
475 sp_move += 9999999999
476 regs += frame_regs
477
478 # Permit duplicates, because blacklisting bytes in the gadget
479 # addresses may result in us needing the dupes.
480 self.gadgets[addr] = {'insns': insns, 'regs': regs, 'move': sp_move}
481
482 # Don't use 'pop esp' for pivots
483 if not set(['rsp','esp']) & set(regs):
484 self.pivots[sp_move] = addr
485
486 #
487 # HACK: Set up a special '.leave' helper. This is so that
488 # I don't have to rewrite __getattr__ to support this.
489 #
490 leave = self.search(regs = frame_regs, order = 'regs')
491 if leave[1]['regs'] != frame_regs:
492 leave = None
493 self.leave = leave
494
495 def __repr__(self):
496 return "ROP(%r)" % self.elfs
497
498 def search(self, move = 0, regs = [], order = 'size'):
499 """Search for a gadget which matches the specified criteria.
500
501 Args:
502 move(int): Minimum number of bytes by which the stack
503 pointer is adjusted.
504 regs(list): Minimum list of registers which are popped off the
505 stack.
506 order(str): Either the string 'size' or 'regs'. Decides how to
507                 order multiple gadgets that fulfill the requirements.
508
509 The search will try to minimize the number of bytes popped more than
510 requested, the number of registers touched besides the requested and
511 the address.
512
513 If ``order == 'size'``, then gadgets are compared lexicographically
514 by ``(total_moves, total_regs, addr)``, otherwise by ``(total_regs, total_moves, addr)``.
515
516 Returns:
517 A tuple of (address, info) in the same format as self.gadgets.items().
518 """
519
520 regs = set(regs)
521
522 # Search for an exact match, save the closest match
523 closest = None
524 closest_val = (float('inf'), float('inf'), float('inf'))
525 for a,i in self.gadgets.items():
526 cur_regs = set(i['regs'])
527 if regs == cur_regs and move == i['move']:
528 return (a, i)
529
530 if not (regs.issubset(cur_regs) and move <= i['move']):
531 continue
532
533 if order == 'size':
534 cur = (i['move'], len(i['regs']), a)
535 else:
536 cur = (len(i['regs']), i['move'], a)
537
538 if cur < closest_val:
539 closest = (a, i)
540 closest_val = cur
541
542 return closest
543
544 def __getattr__(self, attr):
545         """Helper to make finding ROP gadgets easier.
546
547 Also provides a shorthand for .call():
548 ```
549 rop.function(args) ==> rop.call(function, args)
550 ```
551
552 >>> elf=ELF('/bin/bash')
553 >>> rop=ROP([elf])
554 >>> rop.rdi == rop.search(regs=['rdi'], order = 'regs')
555 True
556 >>> rop.r13_r14_r15_rbp == rop.search(regs=['r13','r14','r15','rbp'], order = 'regs')
557 True
558 >>> rop.ret == rop.search(move=rop.align)
559 True
560 >>> rop.ret_8 == rop.search(move=8)
561 True
562 >>> rop.ret != None
563 True
564 """
565 bad_attrs = [
566 'trait_names', # ipython tab-complete
567 'download', # frequent typo
568 'upload', # frequent typo
569 ]
570
571 if attr in self.__dict__ \
572 or attr in bad_attrs \
573 or attr.startswith('_'):
574 raise AttributeError('ROP instance has no attribute %r' % attr)
575
576 #
577 # Check for 'ret' or 'ret_X'
578 #
579 if attr.startswith('ret'):
580 count = 4
581 if '_' in attr:
582 count = int(attr.split('_')[1])
583
584 return self.search(move=count)
585
586 #
587 # Check for a '_'-delimited list of registers
588 #
589 x86_suffixes = ['ax', 'bx', 'cx', 'dx', 'bp', 'sp', 'di', 'si',
590 'r8', 'r9', '10', '11', '12', '13', '14', '15']
591 if all(map(lambda x: x[-2:] in x86_suffixes, attr.split('_'))):
592 return self.search(regs = attr.split('_'), order = 'regs')
593
594 #
595 # Otherwise, assume it's a rop.call() shorthand
596 #
597 def call(*args):
598 return self.call(attr,args)
599 return call
600
601
602 if not __ok:
603 def ROP(*args, **kwargs):
604 log.error("ROP is not supported without installing libcapstone. See http://www.capstone-engine.org/download.html")
605
```
--- END FILES ---
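For orientation, here is a small usage sketch of the `ROP` helpers defined in the file above. The binary path, the symbol name, and the availability of the gadgets are assumptions made purely for illustration.

```py
# Rough sketch only -- './target' and the gadgets it contains are hypothetical.
from pwnlib.elf import ELF
from pwnlib.rop import ROP

elf = ELF('./target')            # hypothetical binary exporting a 'read' symbol
rop = ROP(elf)

gadget = rop.search(regs=['ebx'], order='regs')   # (addr, info) or None
rop.call('read', (0, elf.bss(0x80)))              # equivalent to rop.read(0, ...)
rop.raw(0xdeadbeef)                               # push a raw value onto the chain

print('\n'.join(rop.dump()))     # human-readable listing of the chain
payload = rop.chain()            # packed bytes, built via ROP.build()
```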
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwnlib/rop.py b/pwnlib/rop.py
--- a/pwnlib/rop.py
+++ b/pwnlib/rop.py
@@ -360,7 +360,7 @@
filename = self.__get_cachefile_name(elf)
if os.path.exists(filename):
- log.info("Found gadgets for %r in cache %r" % (elf.file.name,filename))
+ log.info("Found cached gadgets for %r" % (elf.file.name))
return eval(file(filename).read())
def __cache_save(self, elf, data):
| {"golden_diff": "diff --git a/pwnlib/rop.py b/pwnlib/rop.py\n--- a/pwnlib/rop.py\n+++ b/pwnlib/rop.py\n@@ -360,7 +360,7 @@\n filename = self.__get_cachefile_name(elf)\n \n if os.path.exists(filename):\n- log.info(\"Found gadgets for %r in cache %r\" % (elf.file.name,filename))\n+ log.info(\"Found cached gadgets for %r\" % (elf.file.name))\n return eval(file(filename).read())\n \n def __cache_save(self, elf, data):\n", "issue": "Creating multiple rop objects is veeery noisy\nI get this A LOT:\n\n```\n[*] Found gadgets for './ropasaurusrex-85a84f36f81e11f720b1cf5ea0d1fb0d5a603c0d' in cache '/tmp/pwntoo ls-rop-cache/ropasaurusrex-85a84f36f81e11f720b1cf5ea0d1fb0d5a603c0d c5bb68949dcc3264cd3a560c05d0b5 66-0x8048000'\n```\n\n", "before_files": [{"content": "\"\"\"Return Oriented Programming\n\"\"\"\nimport hashlib, os, sys, tempfile, re\n\nfrom . import context, log, elf\nfrom .util import packing, lists\n\ntry:\n import ropgadget\n __ok = True\nexcept ImportError:\n __ok = False\n\nclass ROP(object):\n \"\"\"Class which simplifies the generation of ROP-chains.\n\n Example:\n\n .. code-block:: python\n\n elf = ELF('ropasaurusrex')\n rop = ROP(elf)\n rop.read(0, elf.bss(0x80))\n rop.dump()\n # ['0x0000: 0x80482fc (read)',\n # '0x0004: 0xdeadbeef',\n # '0x0008: 0x0',\n # '0x000c: 0x80496a8']\n str(rop)\n # '\\\\xfc\\\\x82\\\\x04\\\\x08\\\\xef\\\\xbe\\\\xad\\\\xde\\\\x00\\\\x00\\\\x00\\\\x00\\\\xa8\\\\x96\\\\x04\\\\x08'\n \"\"\"\n def __init__(self, elfs, base = None):\n \"\"\"\n Args:\n elfs(list): List of pwnlib.elf.ELF objects for mining\n \"\"\"\n # Permit singular ROP(elf) vs ROP([elf])\n if isinstance(elfs, elf.ELF):\n elfs = [elfs]\n elif isinstance(elfs, (str, unicode)):\n elfs = [elf.ELF(elfs)]\n\n self.elfs = elfs\n self._chain = []\n self.base = base\n self.align = max(e.elfclass for e in elfs)/8\n self.migrated = False\n self.__load()\n\n def resolve(self, resolvable):\n \"\"\"Resolves a symbol to an address\n\n Args:\n resolvable(str,int): Thing to convert into an address\n\n Returns:\n int containing address of 'resolvable', or None\n \"\"\"\n if isinstance(resolvable, str):\n for elf in self.elfs:\n if resolvable in elf.symbols:\n return elf.symbols[resolvable]\n if isinstance(resolvable, (int,long)):\n return resolvable\n return None\n\n def unresolve(self, value):\n \"\"\"Inverts 'resolve'. Given an address, it attempts to find a symbol\n for it in the loaded ELF files. 
If none is found, it searches all\n known gadgets, and returns the disassembly\n\n Args:\n value(int): Address to look up\n\n Returns:\n String containing the symbol name for the address, disassembly for a gadget\n (if there's one at that address), or an empty string.\n \"\"\"\n for elf in self.elfs:\n for name, addr in elf.symbols.items():\n if addr == value:\n return name\n\n if value in self.gadgets:\n return '; '.join(self.gadgets[value]['insns'])\n return ''\n\n def _output_struct(self, value, output):\n next_index = len(output)\n\n if isinstance(value, (int, long)):\n return value\n elif isinstance(value, (unicode, str)):\n if isinstance(value, unicode):\n value = value.encode('utf8')\n\n while True:\n value += '\\x00'\n if len(value) % self.align == 0:\n break\n\n output.append([value])\n return (next_index,)\n elif isinstance(value, (tuple, list)):\n l = []\n output.append(l)\n for v in value:\n l.append(self._output_struct(v, output))\n return (next_index,)\n else:\n log.error(\"ROP: Cannot flatten value %r\" % value)\n\n def _build_x86(self):\n # Stage 1:\n # Convert every call in self._chain from a (addr, args) tuple\n # into a (addr, pivot, args, pad) tuple.\n #\n # Stage 2:\n # Micro-optimizations for the last call in the chain.\n #\n # Stage 3:\n # Convert into a [[str/ints/refs]], where\n # refs are references to one of the first lists and will be turned\n # into pointers outside this function. Refs are represented as\n # length-1 tuples.\n\n if not self._chain:\n return []\n\n # Stage 1\n chain = []\n for addr, args in self._chain:\n if not args:\n chain.append([addr, [], [], 0])\n else:\n need = (1+len(args)) * self.align\n best_pivot = None\n best_size = None\n\n for size, pivot in sorted(self.pivots.items()):\n if size >= need:\n best_pivot = pivot\n best_size = size\n break\n\n if best_pivot == None:\n log.error(\"Could not find gadget to clean up stack for call %r %r\" % (addr, args))\n\n chain.append([addr, [best_pivot], args, best_size/4 - len(args) - 1])\n\n # Stage 2\n # If the last call has arguments, there is no need\n # to fix up the stack up for those arguments\n if chain[-1][2]:\n chain[-1][1] = [0xdeadbeef]\n chain[-1][3] = 0\n\n # If the last call does not have any arguments, there is no\n # need to fix up the stack for the second-to-last call.\n # We can put the last call as the pivot address for\n # the second-to-last call.\n if len(chain) > 1 and not chain[-1][2] and chain[-2][2]:\n # This optimization does not work if a raw string is on the stack\n if not isinstance(chain[-1][0], (str, unicode)):\n chain[-2][1] = [chain[-1][0]]\n chain[-2][3] = 0\n chain.pop()\n\n # Stage 3\n outrop = []\n output = [outrop]\n\n for addr, pivot, args, pad in chain:\n outrop.append(addr)\n outrop.extend(pivot)\n for arg in args:\n outrop.append(self._output_struct(arg, output))\n for _ in range(pad):\n outrop.append('$$$$')\n\n return output\n\n def build(self, base = None):\n \"\"\"Build the ROP chain into a list (addr, int/string, bool), where the\n last value is True iff the value was an internal reference.\n\n It is guaranteed that the individual parts are next to each other.\n\n If there is no base available, then the returned addresses are indexed from 0.\n\n Args:\n base(int): The base address to build the rop-chain from. 
Defaults to\n self.base.\n \"\"\"\n\n if base == None:\n base = self.base\n\n # Use the architecture specific builder to get a [[str/ints/refs]]\n meth = '_build_' + self.elfs[0].get_machine_arch()\n if not hasattr(self, meth):\n log.error(\"Cannot build rop for architecture %r\" % self.elfs[0].get_machine_arch())\n rop = getattr(self, meth)()\n\n # Stage 1\n # Generate a dictionary {ref_id: addr}.\n addrs = {}\n if base != None:\n addr = base\n for i, l in enumerate(rop):\n addrs[i] = addr\n for v in l:\n if isinstance(v, (int, long, tuple)):\n addr += self.align\n else:\n addr += len(v)\n\n # Stage 2:\n # Convert into [(addr, int/string, bool)]\n addr = base or 0\n out = []\n for l in rop:\n for v in l:\n if isinstance(v, (int, long)):\n out.append((addr, v, False))\n addr += self.align\n elif isinstance(v, str):\n out.append((addr, v, False))\n addr += len(v)\n elif isinstance(v, tuple):\n if v[0] in addrs:\n out.append((addr, addrs[v[0]], True))\n addr += self.align\n elif base != None:\n log.bug(\"ROP: References unknown structure index\")\n else:\n log.error(\"ROP: Cannot use structures without a base address\")\n else:\n log.bug(\"ROP: Unexpected value: %r\" % v)\n\n return out\n\n def chain(self):\n \"\"\"Build the ROP chain\n\n Returns:\n str containging raw ROP bytes\n \"\"\"\n\n return packing.flat(\n [value for addr, value, was_ref in self.build()],\n word_size = 8*self.align\n )\n\n def dump(self):\n \"\"\"Dump the ROP chain in an easy-to-read manner\"\"\"\n result = []\n\n rop = self.build(self.base or 0)\n addrs = [addr for addr, value, was_ref in rop]\n for addr, value, was_ref in rop:\n if isinstance(value, str):\n line = \"0x%04x: %16r\" % (addr, value.rstrip('\\x00'))\n elif isinstance(value, (int, long)):\n if was_ref:\n line = \"0x%04x: %#16x (%+d)\" % (\n addr,\n value,\n value - addr\n )\n else:\n ref = self.unresolve(value)\n line = \"0x%04x: %#16x%s\" % (\n addr,\n value,\n (' (%s)' % ref) if ref else ''\n )\n else:\n log.bug(\"ROP: ROP.build returned an unexpected value %r\" % value)\n\n result.append(line)\n\n return result\n\n def call(self, resolvable, arguments=()):\n \"\"\"Add a call to the ROP chain\n\n Args:\n resolvable(str,int): Value which can be looked up via 'resolve',\n or is already an integer.\n arguments(list): List of arguments which can be passed to pack().\n Alternately, if a base address is set, arbitrarily nested\n structures of strings or integers can be provided.\n \"\"\"\n if self.migrated:\n log.error(\"Cannot append to a migrated chain\")\n\n addr = self.resolve(resolvable)\n\n if addr is None:\n log.error(\"Could not resolve %r\" % resolvable)\n\n self._chain.append((addr, arguments))\n\n def raw(self, value):\n \"\"\"Adds a raw integer or string to the ROP chain.\n\n If your architecture requires aligned values, then make\n sure that any given string is aligned!\n\n Args:\n data(int/str): The raw value to put onto the rop chain.\n \"\"\"\n\n if self.migrated:\n log.error(\"Cannot append to a migrated chain\")\n\n self._chain.append((value, ()))\n\n def migrate(self, next_base):\n \"\"\"Explicitly set $sp, by using a ``leave; ret`` gadget\"\"\"\n\n if isinstance(next_base, ROP):\n next_base = self.base\n\n pop_sp = self.rsp or self.esp\n pop_bp = self.rbp or self.ebp\n leave = self.leave\n\n if pop_sp and len(pop_sp[1]['regs']) == 1:\n self.raw(pop_sp[0])\n self.raw(next_base)\n elif pop_bp and leave and len(pop_bp[1]['regs']) == 1:\n self.raw(pop_bp[0])\n self.raw(next_base-4)\n self.raw(leave[0])\n else:\n log.error(\"Cannot find the 
gadgets to migrate\")\n\n self.migrated = True\n\n def __str__(self):\n \"\"\"Returns: Raw bytes of the ROP chain\"\"\"\n return self.chain()\n\n def __get_cachefile_name(self, elf):\n basename = os.path.basename(elf.file.name)\n md5sum = hashlib.md5(elf.get_data()).hexdigest()\n\n filename = \"%s-%s-%#x\" % (basename, md5sum, elf.address)\n\n cachedir = os.path.join(tempfile.gettempdir(), 'pwntools-rop-cache')\n\n if not os.path.exists(cachedir):\n os.mkdir(cachedir)\n\n return os.path.join(cachedir, filename)\n\n def __cache_load(self, elf):\n filename = self.__get_cachefile_name(elf)\n\n if os.path.exists(filename):\n log.info(\"Found gadgets for %r in cache %r\" % (elf.file.name,filename))\n return eval(file(filename).read())\n\n def __cache_save(self, elf, data):\n file(self.__get_cachefile_name(elf),'w+').write(repr(data))\n\n def __load(self):\n \"\"\"Load all ROP gadgets for the selected ELF files\"\"\"\n #\n # We accept only instructions that look like these.\n #\n # - leave\n # - pop reg\n # - add $sp, value\n # - ret\n #\n # Currently, ROPgadget does not detect multi-byte \"C2\" ret.\n # https://github.com/JonathanSalwan/ROPgadget/issues/53\n #\n\n pop = re.compile(r'^pop (.*)')\n add = re.compile(r'^add .sp, (\\S+)$')\n ret = re.compile(r'^ret$')\n leave = re.compile(r'^leave$')\n\n #\n # Validation routine\n #\n # >>> valid('pop eax')\n # True\n # >>> valid('add rax, 0x24')\n # False\n # >>> valid('add esp, 0x24')\n # True\n #\n valid = lambda insn: any(map(lambda pattern: pattern.match(insn), [pop,add,ret,leave]))\n\n #\n # Currently, ropgadget.args.Args() doesn't take any arguments, and pulls\n # only from sys.argv. Preserve it through this call. We also\n # monkey-patch sys.stdout to suppress output from ropgadget.\n #\n argv = sys.argv\n stdout = sys.stdout\n class Wrapper:\n def __init__(self, fd):\n self._fd = fd\n def write(self, s):\n pass\n def __getattr__(self, k):\n return self._fd.__getattribute__(k)\n sys.stdout = Wrapper(sys.stdout)\n gadgets = {}\n try:\n for elf in self.elfs:\n cache = self.__cache_load(elf)\n if cache:\n gadgets.update(cache)\n continue\n\n log.info(\"Loading gadgets for %r @ %#x\" % (elf.path, elf.address))\n sys.argv = ['ropgadget', '--binary', elf.path, '--only', 'add|pop|leave|ret', '--nojop', '--nosys']\n args = ropgadget.args.Args().getArgs()\n core = ropgadget.core.Core(args)\n core.do_binary(elf.path)\n core.do_load(0)\n\n elf_gadgets = {}\n for gadget in core._Core__gadgets:\n\n address = gadget['vaddr'] - elf.load_addr + elf.address\n insns = [g.strip() for g in gadget['gadget'].split(';')]\n\n if all(map(valid, insns)):\n elf_gadgets[address] = insns\n self.__cache_save(elf, elf_gadgets)\n gadgets.update(elf_gadgets)\n finally:\n sys.argv = argv\n sys.stdout = stdout\n\n\n #\n # For each gadget we decided to keep, find out how much it moves the stack,\n # and log which registers it modifies.\n #\n self.gadgets = {}\n self.pivots = {}\n\n frame_regs = ['ebp','esp'] if self.align == 4 else ['rbp','rsp']\n\n for addr,insns in gadgets.items():\n sp_move = 0\n regs = []\n for insn in insns:\n if pop.match(insn):\n regs.append(pop.match(insn).group(1))\n sp_move += self.align\n elif add.match(insn):\n sp_move += int(add.match(insn).group(1), 16)\n elif ret.match(insn):\n sp_move += self.align\n elif leave.match(insn):\n #\n # HACK: Since this modifies ESP directly, this should\n # never be returned as a 'normal' ROP gadget that\n # simply 'increments' the stack.\n #\n # As such, the 'move' is set to a very large value,\n # to prevent 
.search() from returning it unless $sp\n # is specified as a register.\n #\n sp_move += 9999999999\n regs += frame_regs\n\n # Permit duplicates, because blacklisting bytes in the gadget\n # addresses may result in us needing the dupes.\n self.gadgets[addr] = {'insns': insns, 'regs': regs, 'move': sp_move}\n\n # Don't use 'pop esp' for pivots\n if not set(['rsp','esp']) & set(regs):\n self.pivots[sp_move] = addr\n\n #\n # HACK: Set up a special '.leave' helper. This is so that\n # I don't have to rewrite __getattr__ to support this.\n #\n leave = self.search(regs = frame_regs, order = 'regs')\n if leave[1]['regs'] != frame_regs:\n leave = None\n self.leave = leave\n\n def __repr__(self):\n return \"ROP(%r)\" % self.elfs\n\n def search(self, move = 0, regs = [], order = 'size'):\n \"\"\"Search for a gadget which matches the specified criteria.\n\n Args:\n move(int): Minimum number of bytes by which the stack\n pointer is adjusted.\n regs(list): Minimum list of registers which are popped off the\n stack.\n order(str): Either the string 'size' or 'regs'. Decides how to\n order multiple gadgets the fulfill the requirements.\n\n The search will try to minimize the number of bytes popped more than\n requested, the number of registers touched besides the requested and\n the address.\n\n If ``order == 'size'``, then gadgets are compared lexicographically\n by ``(total_moves, total_regs, addr)``, otherwise by ``(total_regs, total_moves, addr)``.\n\n Returns:\n A tuple of (address, info) in the same format as self.gadgets.items().\n \"\"\"\n\n regs = set(regs)\n\n # Search for an exact match, save the closest match\n closest = None\n closest_val = (float('inf'), float('inf'), float('inf'))\n for a,i in self.gadgets.items():\n cur_regs = set(i['regs'])\n if regs == cur_regs and move == i['move']:\n return (a, i)\n\n if not (regs.issubset(cur_regs) and move <= i['move']):\n continue\n\n if order == 'size':\n cur = (i['move'], len(i['regs']), a)\n else:\n cur = (len(i['regs']), i['move'], a)\n\n if cur < closest_val:\n closest = (a, i)\n closest_val = cur\n\n return closest\n\n def __getattr__(self, attr):\n \"\"\"Helper to make finding ROP gadets easier.\n\n Also provides a shorthand for .call():\n ```\n rop.function(args) ==> rop.call(function, args)\n ```\n\n >>> elf=ELF('/bin/bash')\n >>> rop=ROP([elf])\n >>> rop.rdi == rop.search(regs=['rdi'], order = 'regs')\n True\n >>> rop.r13_r14_r15_rbp == rop.search(regs=['r13','r14','r15','rbp'], order = 'regs')\n True\n >>> rop.ret == rop.search(move=rop.align)\n True\n >>> rop.ret_8 == rop.search(move=8)\n True\n >>> rop.ret != None\n True\n \"\"\"\n bad_attrs = [\n 'trait_names', # ipython tab-complete\n 'download', # frequent typo\n 'upload', # frequent typo\n ]\n\n if attr in self.__dict__ \\\n or attr in bad_attrs \\\n or attr.startswith('_'):\n raise AttributeError('ROP instance has no attribute %r' % attr)\n\n #\n # Check for 'ret' or 'ret_X'\n #\n if attr.startswith('ret'):\n count = 4\n if '_' in attr:\n count = int(attr.split('_')[1])\n\n return self.search(move=count)\n\n #\n # Check for a '_'-delimited list of registers\n #\n x86_suffixes = ['ax', 'bx', 'cx', 'dx', 'bp', 'sp', 'di', 'si',\n 'r8', 'r9', '10', '11', '12', '13', '14', '15']\n if all(map(lambda x: x[-2:] in x86_suffixes, attr.split('_'))):\n return self.search(regs = attr.split('_'), order = 'regs')\n\n #\n # Otherwise, assume it's a rop.call() shorthand\n #\n def call(*args):\n return self.call(attr,args)\n return call\n\n\nif not __ok:\n def ROP(*args, **kwargs):\n log.error(\"ROP 
is not supported without installing libcapstone. See http://www.capstone-engine.org/download.html\")\n", "path": "pwnlib/rop.py"}], "after_files": [{"content": "\"\"\"Return Oriented Programming\n\"\"\"\nimport hashlib, os, sys, tempfile, re\n\nfrom . import context, log, elf\nfrom .util import packing, lists\n\ntry:\n import ropgadget\n __ok = True\nexcept ImportError:\n __ok = False\n\nclass ROP(object):\n \"\"\"Class which simplifies the generation of ROP-chains.\n\n Example:\n\n .. code-block:: python\n\n elf = ELF('ropasaurusrex')\n rop = ROP(elf)\n rop.read(0, elf.bss(0x80))\n rop.dump()\n # ['0x0000: 0x80482fc (read)',\n # '0x0004: 0xdeadbeef',\n # '0x0008: 0x0',\n # '0x000c: 0x80496a8']\n str(rop)\n # '\\\\xfc\\\\x82\\\\x04\\\\x08\\\\xef\\\\xbe\\\\xad\\\\xde\\\\x00\\\\x00\\\\x00\\\\x00\\\\xa8\\\\x96\\\\x04\\\\x08'\n \"\"\"\n def __init__(self, elfs, base = None):\n \"\"\"\n Args:\n elfs(list): List of pwnlib.elf.ELF objects for mining\n \"\"\"\n # Permit singular ROP(elf) vs ROP([elf])\n if isinstance(elfs, elf.ELF):\n elfs = [elfs]\n elif isinstance(elfs, (str, unicode)):\n elfs = [elf.ELF(elfs)]\n\n self.elfs = elfs\n self._chain = []\n self.base = base\n self.align = max(e.elfclass for e in elfs)/8\n self.migrated = False\n self.__load()\n\n def resolve(self, resolvable):\n \"\"\"Resolves a symbol to an address\n\n Args:\n resolvable(str,int): Thing to convert into an address\n\n Returns:\n int containing address of 'resolvable', or None\n \"\"\"\n if isinstance(resolvable, str):\n for elf in self.elfs:\n if resolvable in elf.symbols:\n return elf.symbols[resolvable]\n if isinstance(resolvable, (int,long)):\n return resolvable\n return None\n\n def unresolve(self, value):\n \"\"\"Inverts 'resolve'. Given an address, it attempts to find a symbol\n for it in the loaded ELF files. If none is found, it searches all\n known gadgets, and returns the disassembly\n\n Args:\n value(int): Address to look up\n\n Returns:\n String containing the symbol name for the address, disassembly for a gadget\n (if there's one at that address), or an empty string.\n \"\"\"\n for elf in self.elfs:\n for name, addr in elf.symbols.items():\n if addr == value:\n return name\n\n if value in self.gadgets:\n return '; '.join(self.gadgets[value]['insns'])\n return ''\n\n def _output_struct(self, value, output):\n next_index = len(output)\n\n if isinstance(value, (int, long)):\n return value\n elif isinstance(value, (unicode, str)):\n if isinstance(value, unicode):\n value = value.encode('utf8')\n\n while True:\n value += '\\x00'\n if len(value) % self.align == 0:\n break\n\n output.append([value])\n return (next_index,)\n elif isinstance(value, (tuple, list)):\n l = []\n output.append(l)\n for v in value:\n l.append(self._output_struct(v, output))\n return (next_index,)\n else:\n log.error(\"ROP: Cannot flatten value %r\" % value)\n\n def _build_x86(self):\n # Stage 1:\n # Convert every call in self._chain from a (addr, args) tuple\n # into a (addr, pivot, args, pad) tuple.\n #\n # Stage 2:\n # Micro-optimizations for the last call in the chain.\n #\n # Stage 3:\n # Convert into a [[str/ints/refs]], where\n # refs are references to one of the first lists and will be turned\n # into pointers outside this function. 
Refs are represented as\n # length-1 tuples.\n\n if not self._chain:\n return []\n\n # Stage 1\n chain = []\n for addr, args in self._chain:\n if not args:\n chain.append([addr, [], [], 0])\n else:\n need = (1+len(args)) * self.align\n best_pivot = None\n best_size = None\n\n for size, pivot in sorted(self.pivots.items()):\n if size >= need:\n best_pivot = pivot\n best_size = size\n break\n\n if best_pivot == None:\n log.error(\"Could not find gadget to clean up stack for call %r %r\" % (addr, args))\n\n chain.append([addr, [best_pivot], args, best_size/4 - len(args) - 1])\n\n # Stage 2\n # If the last call has arguments, there is no need\n # to fix up the stack up for those arguments\n if chain[-1][2]:\n chain[-1][1] = [0xdeadbeef]\n chain[-1][3] = 0\n\n # If the last call does not have any arguments, there is no\n # need to fix up the stack for the second-to-last call.\n # We can put the last call as the pivot address for\n # the second-to-last call.\n if len(chain) > 1 and not chain[-1][2] and chain[-2][2]:\n # This optimization does not work if a raw string is on the stack\n if not isinstance(chain[-1][0], (str, unicode)):\n chain[-2][1] = [chain[-1][0]]\n chain[-2][3] = 0\n chain.pop()\n\n # Stage 3\n outrop = []\n output = [outrop]\n\n for addr, pivot, args, pad in chain:\n outrop.append(addr)\n outrop.extend(pivot)\n for arg in args:\n outrop.append(self._output_struct(arg, output))\n for _ in range(pad):\n outrop.append('$$$$')\n\n return output\n\n def build(self, base = None):\n \"\"\"Build the ROP chain into a list (addr, int/string, bool), where the\n last value is True iff the value was an internal reference.\n\n It is guaranteed that the individual parts are next to each other.\n\n If there is no base available, then the returned addresses are indexed from 0.\n\n Args:\n base(int): The base address to build the rop-chain from. 
Defaults to\n self.base.\n \"\"\"\n\n if base == None:\n base = self.base\n\n # Use the architecture specific builder to get a [[str/ints/refs]]\n meth = '_build_' + self.elfs[0].get_machine_arch()\n if not hasattr(self, meth):\n log.error(\"Cannot build rop for architecture %r\" % self.elfs[0].get_machine_arch())\n rop = getattr(self, meth)()\n\n # Stage 1\n # Generate a dictionary {ref_id: addr}.\n addrs = {}\n if base != None:\n addr = base\n for i, l in enumerate(rop):\n addrs[i] = addr\n for v in l:\n if isinstance(v, (int, long, tuple)):\n addr += self.align\n else:\n addr += len(v)\n\n # Stage 2:\n # Convert into [(addr, int/string, bool)]\n addr = base or 0\n out = []\n for l in rop:\n for v in l:\n if isinstance(v, (int, long)):\n out.append((addr, v, False))\n addr += self.align\n elif isinstance(v, str):\n out.append((addr, v, False))\n addr += len(v)\n elif isinstance(v, tuple):\n if v[0] in addrs:\n out.append((addr, addrs[v[0]], True))\n addr += self.align\n elif base != None:\n log.bug(\"ROP: References unknown structure index\")\n else:\n log.error(\"ROP: Cannot use structures without a base address\")\n else:\n log.bug(\"ROP: Unexpected value: %r\" % v)\n\n return out\n\n def chain(self):\n \"\"\"Build the ROP chain\n\n Returns:\n str containging raw ROP bytes\n \"\"\"\n\n return packing.flat(\n [value for addr, value, was_ref in self.build()],\n word_size = 8*self.align\n )\n\n def dump(self):\n \"\"\"Dump the ROP chain in an easy-to-read manner\"\"\"\n result = []\n\n rop = self.build(self.base or 0)\n addrs = [addr for addr, value, was_ref in rop]\n for addr, value, was_ref in rop:\n if isinstance(value, str):\n line = \"0x%04x: %16r\" % (addr, value.rstrip('\\x00'))\n elif isinstance(value, (int, long)):\n if was_ref:\n line = \"0x%04x: %#16x (%+d)\" % (\n addr,\n value,\n value - addr\n )\n else:\n ref = self.unresolve(value)\n line = \"0x%04x: %#16x%s\" % (\n addr,\n value,\n (' (%s)' % ref) if ref else ''\n )\n else:\n log.bug(\"ROP: ROP.build returned an unexpected value %r\" % value)\n\n result.append(line)\n\n return result\n\n def call(self, resolvable, arguments=()):\n \"\"\"Add a call to the ROP chain\n\n Args:\n resolvable(str,int): Value which can be looked up via 'resolve',\n or is already an integer.\n arguments(list): List of arguments which can be passed to pack().\n Alternately, if a base address is set, arbitrarily nested\n structures of strings or integers can be provided.\n \"\"\"\n if self.migrated:\n log.error(\"Cannot append to a migrated chain\")\n\n addr = self.resolve(resolvable)\n\n if addr is None:\n log.error(\"Could not resolve %r\" % resolvable)\n\n self._chain.append((addr, arguments))\n\n def raw(self, value):\n \"\"\"Adds a raw integer or string to the ROP chain.\n\n If your architecture requires aligned values, then make\n sure that any given string is aligned!\n\n Args:\n data(int/str): The raw value to put onto the rop chain.\n \"\"\"\n\n if self.migrated:\n log.error(\"Cannot append to a migrated chain\")\n\n self._chain.append((value, ()))\n\n def migrate(self, next_base):\n \"\"\"Explicitly set $sp, by using a ``leave; ret`` gadget\"\"\"\n\n if isinstance(next_base, ROP):\n next_base = self.base\n\n pop_sp = self.rsp or self.esp\n pop_bp = self.rbp or self.ebp\n leave = self.leave\n\n if pop_sp and len(pop_sp[1]['regs']) == 1:\n self.raw(pop_sp[0])\n self.raw(next_base)\n elif pop_bp and leave and len(pop_bp[1]['regs']) == 1:\n self.raw(pop_bp[0])\n self.raw(next_base-4)\n self.raw(leave[0])\n else:\n log.error(\"Cannot find the 
gadgets to migrate\")\n\n self.migrated = True\n\n def __str__(self):\n \"\"\"Returns: Raw bytes of the ROP chain\"\"\"\n return self.chain()\n\n def __get_cachefile_name(self, elf):\n basename = os.path.basename(elf.file.name)\n md5sum = hashlib.md5(elf.get_data()).hexdigest()\n\n filename = \"%s-%s-%#x\" % (basename, md5sum, elf.address)\n\n cachedir = os.path.join(tempfile.gettempdir(), 'pwntools-rop-cache')\n\n if not os.path.exists(cachedir):\n os.mkdir(cachedir)\n\n return os.path.join(cachedir, filename)\n\n def __cache_load(self, elf):\n filename = self.__get_cachefile_name(elf)\n\n if os.path.exists(filename):\n log.info(\"Found cached gadgets for %r\" % (elf.file.name))\n return eval(file(filename).read())\n\n def __cache_save(self, elf, data):\n file(self.__get_cachefile_name(elf),'w+').write(repr(data))\n\n def __load(self):\n \"\"\"Load all ROP gadgets for the selected ELF files\"\"\"\n #\n # We accept only instructions that look like these.\n #\n # - leave\n # - pop reg\n # - add $sp, value\n # - ret\n #\n # Currently, ROPgadget does not detect multi-byte \"C2\" ret.\n # https://github.com/JonathanSalwan/ROPgadget/issues/53\n #\n\n pop = re.compile(r'^pop (.*)')\n add = re.compile(r'^add .sp, (\\S+)$')\n ret = re.compile(r'^ret$')\n leave = re.compile(r'^leave$')\n\n #\n # Validation routine\n #\n # >>> valid('pop eax')\n # True\n # >>> valid('add rax, 0x24')\n # False\n # >>> valid('add esp, 0x24')\n # True\n #\n valid = lambda insn: any(map(lambda pattern: pattern.match(insn), [pop,add,ret,leave]))\n\n #\n # Currently, ropgadget.args.Args() doesn't take any arguments, and pulls\n # only from sys.argv. Preserve it through this call. We also\n # monkey-patch sys.stdout to suppress output from ropgadget.\n #\n argv = sys.argv\n stdout = sys.stdout\n class Wrapper:\n def __init__(self, fd):\n self._fd = fd\n def write(self, s):\n pass\n def __getattr__(self, k):\n return self._fd.__getattribute__(k)\n sys.stdout = Wrapper(sys.stdout)\n gadgets = {}\n try:\n for elf in self.elfs:\n cache = self.__cache_load(elf)\n if cache:\n gadgets.update(cache)\n continue\n\n log.info(\"Loading gadgets for %r @ %#x\" % (elf.path, elf.address))\n sys.argv = ['ropgadget', '--binary', elf.path, '--only', 'add|pop|leave|ret', '--nojop', '--nosys']\n args = ropgadget.args.Args().getArgs()\n core = ropgadget.core.Core(args)\n core.do_binary(elf.path)\n core.do_load(0)\n\n elf_gadgets = {}\n for gadget in core._Core__gadgets:\n\n address = gadget['vaddr'] - elf.load_addr + elf.address\n insns = [g.strip() for g in gadget['gadget'].split(';')]\n\n if all(map(valid, insns)):\n elf_gadgets[address] = insns\n self.__cache_save(elf, elf_gadgets)\n gadgets.update(elf_gadgets)\n finally:\n sys.argv = argv\n sys.stdout = stdout\n\n\n #\n # For each gadget we decided to keep, find out how much it moves the stack,\n # and log which registers it modifies.\n #\n self.gadgets = {}\n self.pivots = {}\n\n frame_regs = ['ebp','esp'] if self.align == 4 else ['rbp','rsp']\n\n for addr,insns in gadgets.items():\n sp_move = 0\n regs = []\n for insn in insns:\n if pop.match(insn):\n regs.append(pop.match(insn).group(1))\n sp_move += self.align\n elif add.match(insn):\n sp_move += int(add.match(insn).group(1), 16)\n elif ret.match(insn):\n sp_move += self.align\n elif leave.match(insn):\n #\n # HACK: Since this modifies ESP directly, this should\n # never be returned as a 'normal' ROP gadget that\n # simply 'increments' the stack.\n #\n # As such, the 'move' is set to a very large value,\n # to prevent .search() from 
returning it unless $sp\n # is specified as a register.\n #\n sp_move += 9999999999\n regs += frame_regs\n\n # Permit duplicates, because blacklisting bytes in the gadget\n # addresses may result in us needing the dupes.\n self.gadgets[addr] = {'insns': insns, 'regs': regs, 'move': sp_move}\n\n # Don't use 'pop esp' for pivots\n if not set(['rsp','esp']) & set(regs):\n self.pivots[sp_move] = addr\n\n #\n # HACK: Set up a special '.leave' helper. This is so that\n # I don't have to rewrite __getattr__ to support this.\n #\n leave = self.search(regs = frame_regs, order = 'regs')\n if leave[1]['regs'] != frame_regs:\n leave = None\n self.leave = leave\n\n def __repr__(self):\n return \"ROP(%r)\" % self.elfs\n\n def search(self, move = 0, regs = [], order = 'size'):\n \"\"\"Search for a gadget which matches the specified criteria.\n\n Args:\n move(int): Minimum number of bytes by which the stack\n pointer is adjusted.\n regs(list): Minimum list of registers which are popped off the\n stack.\n order(str): Either the string 'size' or 'regs'. Decides how to\n order multiple gadgets the fulfill the requirements.\n\n The search will try to minimize the number of bytes popped more than\n requested, the number of registers touched besides the requested and\n the address.\n\n If ``order == 'size'``, then gadgets are compared lexicographically\n by ``(total_moves, total_regs, addr)``, otherwise by ``(total_regs, total_moves, addr)``.\n\n Returns:\n A tuple of (address, info) in the same format as self.gadgets.items().\n \"\"\"\n\n regs = set(regs)\n\n # Search for an exact match, save the closest match\n closest = None\n closest_val = (float('inf'), float('inf'), float('inf'))\n for a,i in self.gadgets.items():\n cur_regs = set(i['regs'])\n if regs == cur_regs and move == i['move']:\n return (a, i)\n\n if not (regs.issubset(cur_regs) and move <= i['move']):\n continue\n\n if order == 'size':\n cur = (i['move'], len(i['regs']), a)\n else:\n cur = (len(i['regs']), i['move'], a)\n\n if cur < closest_val:\n closest = (a, i)\n closest_val = cur\n\n return closest\n\n def __getattr__(self, attr):\n \"\"\"Helper to make finding ROP gadets easier.\n\n Also provides a shorthand for .call():\n ```\n rop.function(args) ==> rop.call(function, args)\n ```\n\n >>> elf=ELF('/bin/bash')\n >>> rop=ROP([elf])\n >>> rop.rdi == rop.search(regs=['rdi'], order = 'regs')\n True\n >>> rop.r13_r14_r15_rbp == rop.search(regs=['r13','r14','r15','rbp'], order = 'regs')\n True\n >>> rop.ret == rop.search(move=rop.align)\n True\n >>> rop.ret_8 == rop.search(move=8)\n True\n >>> rop.ret != None\n True\n \"\"\"\n bad_attrs = [\n 'trait_names', # ipython tab-complete\n 'download', # frequent typo\n 'upload', # frequent typo\n ]\n\n if attr in self.__dict__ \\\n or attr in bad_attrs \\\n or attr.startswith('_'):\n raise AttributeError('ROP instance has no attribute %r' % attr)\n\n #\n # Check for 'ret' or 'ret_X'\n #\n if attr.startswith('ret'):\n count = 4\n if '_' in attr:\n count = int(attr.split('_')[1])\n\n return self.search(move=count)\n\n #\n # Check for a '_'-delimited list of registers\n #\n x86_suffixes = ['ax', 'bx', 'cx', 'dx', 'bp', 'sp', 'di', 'si',\n 'r8', 'r9', '10', '11', '12', '13', '14', '15']\n if all(map(lambda x: x[-2:] in x86_suffixes, attr.split('_'))):\n return self.search(regs = attr.split('_'), order = 'regs')\n\n #\n # Otherwise, assume it's a rop.call() shorthand\n #\n def call(*args):\n return self.call(attr,args)\n return call\n\n\nif not __ok:\n def ROP(*args, **kwargs):\n log.error(\"ROP is not 
supported without installing libcapstone. See http://www.capstone-engine.org/download.html\")\n", "path": "pwnlib/rop.py"}]} |
gh_patches_debug_1099 | rasdani/github-patches | git_diff | pyro-ppl__pyro-1903 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot delete constrained parameter [bug]
### **Issue Description**
Deleting a constrained parameter throws an error.
In the function `param_store.__delitem__()`, the line
`unconstrained_value = constrained_value.unconstrained()`
throws
`AttributeError: 'Tensor' object has no attribute 'unconstrained'`
### **Environment**
OS: Windows 8.1
Python Version: 3.6.8
Pytorch Version: 1.1.0
Pyro Version: 0.3.3
This error was also present using Pytorch 1.0 and Pyro 0.3.1.
### **Code Snippet**
```py
import torch
import pyro
from torch.distributions import constraints
param_store = pyro.get_param_store()
a = pyro.param('a', torch.ones(3))
print(param_store.keys()) #dict_keys(['a'])
param_store.__delitem__('a') #Works fine
print(param_store.keys()) #dict_keys([])
b = pyro.param('b', torch.ones(3), constraint=constraints.positive)
print(param_store.keys()) #dict_keys(['b'])
param_store.__delitem__('b') #AttributeError: 'Tensor' object has no attribute 'unconstrained'
```
--- END ISSUE ---
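As a hedged aside, the following minimal sketch illustrates why only the constrained parameter trips the error, going by the `ParamStoreDict` code quoted below; the tensor names are illustrative and not part of the original report.

```py
# Illustration only, not the fix: ParamStoreDict keeps the *unconstrained*
# tensor in self._params, while __getitem__ attaches `.unconstrained`
# (a weakref) only to the constrained tensor it hands back.
import torch
from torch.distributions import constraints, transform_to

unconstrained = torch.ones(3, requires_grad=True)

# constraints.real maps through the identity transform, so the "constrained"
# view is the very same tensor object -- which is why deleting 'a' works.
assert transform_to(constraints.real)(unconstrained) is unconstrained

# constraints.positive goes through exp and yields a *new* tensor; the tensor
# kept in self._params never gains an `.unconstrained` attribute, so
# __delitem__'s call to constrained_value.unconstrained() raises AttributeError.
positive_view = transform_to(constraints.positive)(unconstrained)
assert positive_view is not unconstrained
assert not hasattr(unconstrained, 'unconstrained')
```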
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyro/params/param_store.py`
Content:
```
1 from __future__ import absolute_import, division, print_function
2
3 import re
4 import warnings
5 import weakref
6
7 import torch
8 from torch.distributions import constraints, transform_to
9
10
11 class ParamStoreDict(object):
12 """
13 Global store for parameters in Pyro. This is basically a key-value store.
14 The typical user interacts with the ParamStore primarily through the
15 primitive `pyro.param`.
16
17 See `Intro Part II <http://pyro.ai/examples/intro_part_ii.html>`_ for further discussion
18 and `SVI Part I <http://pyro.ai/examples/svi_part_i.html>`_ for some examples.
19
20 Some things to bear in mind when using parameters in Pyro:
21
22 - parameters must be assigned unique names
23 - the `init_tensor` argument to `pyro.param` is only used the first time that a given (named)
24 parameter is registered with Pyro.
25 - for this reason, a user may need to use the `clear()` method if working in a REPL in order to
26 get the desired behavior. this method can also be invoked with `pyro.clear_param_store()`.
27 - the internal name of a parameter within a PyTorch `nn.Module` that has been registered with
28 Pyro is prepended with the Pyro name of the module. so nothing prevents the user from having
29 two different modules each of which contains a parameter named `weight`. by contrast, a user
30 can only have one top-level parameter named `weight` (outside of any module).
31 - parameters can be saved and loaded from disk using `save` and `load`.
32 """
33
34 # -------------------------------------------------------------------------------
35 # New dict-like interface
36
37 def __init__(self):
38 """
39 initialize ParamStore data structures
40 """
41 self._params = {} # dictionary from param name to param
42 self._param_to_name = {} # dictionary from unconstrained param to param name
43 self._constraints = {} # dictionary from param name to constraint object
44
45 def clear(self):
46 """
47 Clear the ParamStore
48 """
49 self._params = {}
50 self._param_to_name = {}
51 self._constraints = {}
52
53 def items(self):
54 """
55 Iterate over ``(name, constrained_param)`` pairs.
56 """
57 for name in self._params:
58 yield name, self[name]
59
60 def keys(self):
61 """
62 Iterate over param names.
63 """
64 return self._params.keys()
65
66 def values(self):
67 """
68 Iterate over constrained parameter values.
69 """
70 for name, constrained_param in self.items():
71 yield constrained_param
72
73 def __bool__(self):
74 return bool(self._params)
75
76 def __len__(self):
77 return len(self._params)
78
79 def __contains__(self, name):
80 return name in self._params
81
82 def __iter__(self):
83 """
84 Iterate over param names.
85 """
86 return iter(self.keys())
87
88 def __delitem__(self, name):
89 """
90 Remove a parameter from the param store.
91 """
92 constrained_value = self._params.pop(name)
93 unconstrained_value = constrained_value.unconstrained()
94 self._param_to_name.pop(unconstrained_value)
95 self._constraints.pop(name)
96
97 def __getitem__(self, name):
98 """
99 Get the constrained value of a named parameter.
100 """
101 unconstrained_value = self._params[name]
102
103 # compute the constrained value
104 constraint = self._constraints[name]
105 constrained_value = transform_to(constraint)(unconstrained_value)
106 constrained_value.unconstrained = weakref.ref(unconstrained_value)
107
108 return constrained_value
109
110 def __setitem__(self, name, new_constrained_value):
111 """
112 Set the constrained value of an existing parameter, or the value of a
113 new unconstrained parameter. To declare a new parameter with
114 constraint, use :meth:`setdefault`.
115 """
116 # store constraint, defaulting to unconstrained
117 constraint = self._constraints.setdefault(name, constraints.real)
118
119 # compute the unconstrained value
120 with torch.no_grad():
121 # FIXME should we .detach() the new_constrained_value?
122 unconstrained_value = transform_to(constraint).inv(new_constrained_value)
123 unconstrained_value = unconstrained_value.contiguous()
124 unconstrained_value.requires_grad_(True)
125
126 # store a bidirectional mapping between name and unconstrained tensor
127 self._params[name] = unconstrained_value
128 self._param_to_name[unconstrained_value] = name
129
130 def setdefault(self, name, init_constrained_value, constraint=constraints.real):
131 """
132         Retrieve a constrained parameter value from the param store if it exists, otherwise
133 set the initial value. Note that this is a little fancier than
134 :meth:`dict.setdefault`.
135
136 If the parameter already exists, ``init_constrained_tensor`` will be ignored. To avoid
137 expensive creation of ``init_constrained_tensor`` you can wrap it in a ``lambda`` that
138 will only be evaluated if the parameter does not already exist::
139
140 param_store.get("foo", lambda: (0.001 * torch.randn(1000, 1000)).exp(),
141 constraint=constraints.positive)
142
143 :param str name: parameter name
144 :param init_constrained_value: initial constrained value
145 :type init_constrained_value: torch.Tensor or callable returning a torch.Tensor
146 :param constraint: torch constraint object
147 :type constraint: torch.distributions.constraints.Constraint
148 :returns: constrained parameter value
149 :rtype: torch.Tensor
150 """
151 if name not in self._params:
152 # set the constraint
153 self._constraints[name] = constraint
154
155 # evaluate the lazy value
156 if callable(init_constrained_value):
157 init_constrained_value = init_constrained_value()
158
159 # set the initial value
160 self[name] = init_constrained_value
161
162 # get the param, which is guaranteed to exist
163 return self[name]
164
165 # -------------------------------------------------------------------------------
166 # Old non-dict interface
167
168 def named_parameters(self):
169 """
170 Returns an iterator over ``(name, unconstrained_value)`` tuples for
171 each parameter in the ParamStore.
172 """
173 return self._params.items()
174
175 def get_all_param_names(self):
176 warnings.warn("ParamStore.get_all_param_names() is deprecated; use .keys() instead.",
177 DeprecationWarning)
178 return self.keys()
179
180 def replace_param(self, param_name, new_param, old_param):
181 warnings.warn("ParamStore.replace_param() is deprecated; use .__setitem__() instead.",
182 DeprecationWarning)
183 assert self._params[param_name] is old_param.unconstrained()
184 self[param_name] = new_param
185
186 def get_param(self, name, init_tensor=None, constraint=constraints.real, event_dim=None):
187 """
188 Get parameter from its name. If it does not yet exist in the
189 ParamStore, it will be created and stored.
190 The Pyro primitive `pyro.param` dispatches to this method.
191
192 :param name: parameter name
193 :type name: str
194 :param init_tensor: initial tensor
195 :type init_tensor: torch.Tensor
196 :param constraint: torch constraint
197 :type constraint: torch.distributions.constraints.Constraint
198 :param int event_dim: (ignored)
199 :returns: parameter
200 :rtype: torch.Tensor
201 """
202 if init_tensor is None:
203 return self[name]
204 else:
205 return self.setdefault(name, init_tensor, constraint)
206
207 def match(self, name):
208 """
209 Get all parameters that match regex. The parameter must exist.
210
211 :param name: regular expression
212 :type name: str
213 :returns: dict with key param name and value torch Tensor
214 """
215 pattern = re.compile(name)
216 return {name: self[name] for name in self if pattern.match(name)}
217
218 def param_name(self, p):
219 """
220 Get parameter name from parameter
221
222 :param p: parameter
223 :returns: parameter name
224 """
225 return self._param_to_name.get(p)
226
227 def get_state(self):
228 """
229 Get the ParamStore state.
230 """
231 state = {
232 'params': self._params,
233 'constraints': self._constraints,
234 }
235 return state
236
237 def set_state(self, state):
238 """
239 Set the ParamStore state using state from a previous get_state() call
240 """
241 assert isinstance(state, dict), "malformed ParamStore state"
242 assert set(state.keys()) == set(['params', 'constraints']), \
243 "malformed ParamStore keys {}".format(state.keys())
244
245 for param_name, param in state['params'].items():
246 self._params[param_name] = param
247 self._param_to_name[param] = param_name
248
249 for param_name, constraint in state['constraints'].items():
250 if isinstance(constraint, type(constraints.real)):
251 # Work around lack of hash & equality comparison on constraints.
252 constraint = constraints.real
253 self._constraints[param_name] = constraint
254
255 def save(self, filename):
256 """
257 Save parameters to disk
258
259 :param filename: file name to save to
260 :type filename: str
261 """
262 with open(filename, "wb") as output_file:
263 torch.save(self.get_state(), output_file)
264
265 def load(self, filename, map_location=None):
266 """
267 Loads parameters from disk
268
269 .. note::
270
271 If using :meth:`pyro.module` on parameters loaded from
272 disk, be sure to set the ``update_module_params`` flag::
273
274 pyro.get_param_store().load('saved_params.save')
275 pyro.module('module', nn, update_module_params=True)
276
277 :param filename: file name to load from
278 :type filename: str
279 :param map_location: specifies how to remap storage locations
280 :type map_location: function, torch.device, string or a dict
281 """
282 with open(filename, "rb") as input_file:
283 state = torch.load(input_file, map_location)
284 self.set_state(state)
285
286
287 # used to create fully-formed param names, e.g. mymodule$$$mysubmodule.weight
288 _MODULE_NAMESPACE_DIVIDER = "$$$"
289
290
291 def param_with_module_name(pyro_name, param_name):
292 return _MODULE_NAMESPACE_DIVIDER.join([pyro_name, param_name])
293
294
295 def module_from_param_with_module_name(param_name):
296 return param_name.split(_MODULE_NAMESPACE_DIVIDER)[0]
297
298
299 def user_param_name(param_name):
300 if _MODULE_NAMESPACE_DIVIDER in param_name:
301 return param_name.split(_MODULE_NAMESPACE_DIVIDER)[1]
302 return param_name
303
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyro/params/param_store.py b/pyro/params/param_store.py
--- a/pyro/params/param_store.py
+++ b/pyro/params/param_store.py
@@ -89,8 +89,7 @@
"""
Remove a parameter from the param store.
"""
- constrained_value = self._params.pop(name)
- unconstrained_value = constrained_value.unconstrained()
+ unconstrained_value = self._params.pop(name)
self._param_to_name.pop(unconstrained_value)
self._constraints.pop(name)
| {"golden_diff": "diff --git a/pyro/params/param_store.py b/pyro/params/param_store.py\n--- a/pyro/params/param_store.py\n+++ b/pyro/params/param_store.py\n@@ -89,8 +89,7 @@\n \"\"\"\n Remove a parameter from the param store.\n \"\"\"\n- constrained_value = self._params.pop(name)\n- unconstrained_value = constrained_value.unconstrained()\n+ unconstrained_value = self._params.pop(name)\n self._param_to_name.pop(unconstrained_value)\n self._constraints.pop(name)\n", "issue": "Cannot delete constrained parameter [bug]\n### **Issue Description**\r\nDeleting a constrained parameter throws an error.\r\nIn the function `param_store.__delitem__()`, the line \r\n`unconstrained_value = constrained_value.unconstrained()`\r\nthrows\r\n`AttributeError: 'Tensor' object has no attribute 'unconstrained'`\r\n\r\n### **Environment**\r\nOS: Windows 8.1\r\nPython Version: 3.6.8\r\nPytorch Version: 1.1.0\r\nPyro Version: 0.3.3\r\n\r\nThis error was also present using Pytorch 1.0 and Pyro 0.3.1.\r\n### **Code Snippet**\r\n```py\r\nimport torch\r\nimport pyro\r\nfrom torch.distributions import constraints\r\n\r\nparam_store = pyro.get_param_store()\r\na = pyro.param('a', torch.ones(3))\r\nprint(param_store.keys()) #dict_keys(['a'])\r\nparam_store.__delitem__('a') #Works fine\r\nprint(param_store.keys()) #dict_keys([])\r\nb = pyro.param('b', torch.ones(3), constraint=constraints.positive)\r\nprint(param_store.keys()) #dict_keys(['b'])\r\nparam_store.__delitem__('b') #AttributeError: 'Tensor' object has no attribute 'unconstrained'\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport re\nimport warnings\nimport weakref\n\nimport torch\nfrom torch.distributions import constraints, transform_to\n\n\nclass ParamStoreDict(object):\n \"\"\"\n Global store for parameters in Pyro. This is basically a key-value store.\n The typical user interacts with the ParamStore primarily through the\n primitive `pyro.param`.\n\n See `Intro Part II <http://pyro.ai/examples/intro_part_ii.html>`_ for further discussion\n and `SVI Part I <http://pyro.ai/examples/svi_part_i.html>`_ for some examples.\n\n Some things to bear in mind when using parameters in Pyro:\n\n - parameters must be assigned unique names\n - the `init_tensor` argument to `pyro.param` is only used the first time that a given (named)\n parameter is registered with Pyro.\n - for this reason, a user may need to use the `clear()` method if working in a REPL in order to\n get the desired behavior. this method can also be invoked with `pyro.clear_param_store()`.\n - the internal name of a parameter within a PyTorch `nn.Module` that has been registered with\n Pyro is prepended with the Pyro name of the module. so nothing prevents the user from having\n two different modules each of which contains a parameter named `weight`. 
by contrast, a user\n can only have one top-level parameter named `weight` (outside of any module).\n - parameters can be saved and loaded from disk using `save` and `load`.\n \"\"\"\n\n # -------------------------------------------------------------------------------\n # New dict-like interface\n\n def __init__(self):\n \"\"\"\n initialize ParamStore data structures\n \"\"\"\n self._params = {} # dictionary from param name to param\n self._param_to_name = {} # dictionary from unconstrained param to param name\n self._constraints = {} # dictionary from param name to constraint object\n\n def clear(self):\n \"\"\"\n Clear the ParamStore\n \"\"\"\n self._params = {}\n self._param_to_name = {}\n self._constraints = {}\n\n def items(self):\n \"\"\"\n Iterate over ``(name, constrained_param)`` pairs.\n \"\"\"\n for name in self._params:\n yield name, self[name]\n\n def keys(self):\n \"\"\"\n Iterate over param names.\n \"\"\"\n return self._params.keys()\n\n def values(self):\n \"\"\"\n Iterate over constrained parameter values.\n \"\"\"\n for name, constrained_param in self.items():\n yield constrained_param\n\n def __bool__(self):\n return bool(self._params)\n\n def __len__(self):\n return len(self._params)\n\n def __contains__(self, name):\n return name in self._params\n\n def __iter__(self):\n \"\"\"\n Iterate over param names.\n \"\"\"\n return iter(self.keys())\n\n def __delitem__(self, name):\n \"\"\"\n Remove a parameter from the param store.\n \"\"\"\n constrained_value = self._params.pop(name)\n unconstrained_value = constrained_value.unconstrained()\n self._param_to_name.pop(unconstrained_value)\n self._constraints.pop(name)\n\n def __getitem__(self, name):\n \"\"\"\n Get the constrained value of a named parameter.\n \"\"\"\n unconstrained_value = self._params[name]\n\n # compute the constrained value\n constraint = self._constraints[name]\n constrained_value = transform_to(constraint)(unconstrained_value)\n constrained_value.unconstrained = weakref.ref(unconstrained_value)\n\n return constrained_value\n\n def __setitem__(self, name, new_constrained_value):\n \"\"\"\n Set the constrained value of an existing parameter, or the value of a\n new unconstrained parameter. To declare a new parameter with\n constraint, use :meth:`setdefault`.\n \"\"\"\n # store constraint, defaulting to unconstrained\n constraint = self._constraints.setdefault(name, constraints.real)\n\n # compute the unconstrained value\n with torch.no_grad():\n # FIXME should we .detach() the new_constrained_value?\n unconstrained_value = transform_to(constraint).inv(new_constrained_value)\n unconstrained_value = unconstrained_value.contiguous()\n unconstrained_value.requires_grad_(True)\n\n # store a bidirectional mapping between name and unconstrained tensor\n self._params[name] = unconstrained_value\n self._param_to_name[unconstrained_value] = name\n\n def setdefault(self, name, init_constrained_value, constraint=constraints.real):\n \"\"\"\n Retrieve a constrained parameter value from the if it exists, otherwise\n set the initial value. Note that this is a little fancier than\n :meth:`dict.setdefault`.\n\n If the parameter already exists, ``init_constrained_tensor`` will be ignored. 
To avoid\n expensive creation of ``init_constrained_tensor`` you can wrap it in a ``lambda`` that\n will only be evaluated if the parameter does not already exist::\n\n param_store.get(\"foo\", lambda: (0.001 * torch.randn(1000, 1000)).exp(),\n constraint=constraints.positive)\n\n :param str name: parameter name\n :param init_constrained_value: initial constrained value\n :type init_constrained_value: torch.Tensor or callable returning a torch.Tensor\n :param constraint: torch constraint object\n :type constraint: torch.distributions.constraints.Constraint\n :returns: constrained parameter value\n :rtype: torch.Tensor\n \"\"\"\n if name not in self._params:\n # set the constraint\n self._constraints[name] = constraint\n\n # evaluate the lazy value\n if callable(init_constrained_value):\n init_constrained_value = init_constrained_value()\n\n # set the initial value\n self[name] = init_constrained_value\n\n # get the param, which is guaranteed to exist\n return self[name]\n\n # -------------------------------------------------------------------------------\n # Old non-dict interface\n\n def named_parameters(self):\n \"\"\"\n Returns an iterator over ``(name, unconstrained_value)`` tuples for\n each parameter in the ParamStore.\n \"\"\"\n return self._params.items()\n\n def get_all_param_names(self):\n warnings.warn(\"ParamStore.get_all_param_names() is deprecated; use .keys() instead.\",\n DeprecationWarning)\n return self.keys()\n\n def replace_param(self, param_name, new_param, old_param):\n warnings.warn(\"ParamStore.replace_param() is deprecated; use .__setitem__() instead.\",\n DeprecationWarning)\n assert self._params[param_name] is old_param.unconstrained()\n self[param_name] = new_param\n\n def get_param(self, name, init_tensor=None, constraint=constraints.real, event_dim=None):\n \"\"\"\n Get parameter from its name. If it does not yet exist in the\n ParamStore, it will be created and stored.\n The Pyro primitive `pyro.param` dispatches to this method.\n\n :param name: parameter name\n :type name: str\n :param init_tensor: initial tensor\n :type init_tensor: torch.Tensor\n :param constraint: torch constraint\n :type constraint: torch.distributions.constraints.Constraint\n :param int event_dim: (ignored)\n :returns: parameter\n :rtype: torch.Tensor\n \"\"\"\n if init_tensor is None:\n return self[name]\n else:\n return self.setdefault(name, init_tensor, constraint)\n\n def match(self, name):\n \"\"\"\n Get all parameters that match regex. 
The parameter must exist.\n\n :param name: regular expression\n :type name: str\n :returns: dict with key param name and value torch Tensor\n \"\"\"\n pattern = re.compile(name)\n return {name: self[name] for name in self if pattern.match(name)}\n\n def param_name(self, p):\n \"\"\"\n Get parameter name from parameter\n\n :param p: parameter\n :returns: parameter name\n \"\"\"\n return self._param_to_name.get(p)\n\n def get_state(self):\n \"\"\"\n Get the ParamStore state.\n \"\"\"\n state = {\n 'params': self._params,\n 'constraints': self._constraints,\n }\n return state\n\n def set_state(self, state):\n \"\"\"\n Set the ParamStore state using state from a previous get_state() call\n \"\"\"\n assert isinstance(state, dict), \"malformed ParamStore state\"\n assert set(state.keys()) == set(['params', 'constraints']), \\\n \"malformed ParamStore keys {}\".format(state.keys())\n\n for param_name, param in state['params'].items():\n self._params[param_name] = param\n self._param_to_name[param] = param_name\n\n for param_name, constraint in state['constraints'].items():\n if isinstance(constraint, type(constraints.real)):\n # Work around lack of hash & equality comparison on constraints.\n constraint = constraints.real\n self._constraints[param_name] = constraint\n\n def save(self, filename):\n \"\"\"\n Save parameters to disk\n\n :param filename: file name to save to\n :type filename: str\n \"\"\"\n with open(filename, \"wb\") as output_file:\n torch.save(self.get_state(), output_file)\n\n def load(self, filename, map_location=None):\n \"\"\"\n Loads parameters from disk\n\n .. note::\n\n If using :meth:`pyro.module` on parameters loaded from\n disk, be sure to set the ``update_module_params`` flag::\n\n pyro.get_param_store().load('saved_params.save')\n pyro.module('module', nn, update_module_params=True)\n\n :param filename: file name to load from\n :type filename: str\n :param map_location: specifies how to remap storage locations\n :type map_location: function, torch.device, string or a dict\n \"\"\"\n with open(filename, \"rb\") as input_file:\n state = torch.load(input_file, map_location)\n self.set_state(state)\n\n\n# used to create fully-formed param names, e.g. mymodule$$$mysubmodule.weight\n_MODULE_NAMESPACE_DIVIDER = \"$$$\"\n\n\ndef param_with_module_name(pyro_name, param_name):\n return _MODULE_NAMESPACE_DIVIDER.join([pyro_name, param_name])\n\n\ndef module_from_param_with_module_name(param_name):\n return param_name.split(_MODULE_NAMESPACE_DIVIDER)[0]\n\n\ndef user_param_name(param_name):\n if _MODULE_NAMESPACE_DIVIDER in param_name:\n return param_name.split(_MODULE_NAMESPACE_DIVIDER)[1]\n return param_name\n", "path": "pyro/params/param_store.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport re\nimport warnings\nimport weakref\n\nimport torch\nfrom torch.distributions import constraints, transform_to\n\n\nclass ParamStoreDict(object):\n \"\"\"\n Global store for parameters in Pyro. 
This is basically a key-value store.\n The typical user interacts with the ParamStore primarily through the\n primitive `pyro.param`.\n\n See `Intro Part II <http://pyro.ai/examples/intro_part_ii.html>`_ for further discussion\n and `SVI Part I <http://pyro.ai/examples/svi_part_i.html>`_ for some examples.\n\n Some things to bear in mind when using parameters in Pyro:\n\n - parameters must be assigned unique names\n - the `init_tensor` argument to `pyro.param` is only used the first time that a given (named)\n parameter is registered with Pyro.\n - for this reason, a user may need to use the `clear()` method if working in a REPL in order to\n get the desired behavior. this method can also be invoked with `pyro.clear_param_store()`.\n - the internal name of a parameter within a PyTorch `nn.Module` that has been registered with\n Pyro is prepended with the Pyro name of the module. so nothing prevents the user from having\n two different modules each of which contains a parameter named `weight`. by contrast, a user\n can only have one top-level parameter named `weight` (outside of any module).\n - parameters can be saved and loaded from disk using `save` and `load`.\n \"\"\"\n\n # -------------------------------------------------------------------------------\n # New dict-like interface\n\n def __init__(self):\n \"\"\"\n initialize ParamStore data structures\n \"\"\"\n self._params = {} # dictionary from param name to param\n self._param_to_name = {} # dictionary from unconstrained param to param name\n self._constraints = {} # dictionary from param name to constraint object\n\n def clear(self):\n \"\"\"\n Clear the ParamStore\n \"\"\"\n self._params = {}\n self._param_to_name = {}\n self._constraints = {}\n\n def items(self):\n \"\"\"\n Iterate over ``(name, constrained_param)`` pairs.\n \"\"\"\n for name in self._params:\n yield name, self[name]\n\n def keys(self):\n \"\"\"\n Iterate over param names.\n \"\"\"\n return self._params.keys()\n\n def values(self):\n \"\"\"\n Iterate over constrained parameter values.\n \"\"\"\n for name, constrained_param in self.items():\n yield constrained_param\n\n def __bool__(self):\n return bool(self._params)\n\n def __len__(self):\n return len(self._params)\n\n def __contains__(self, name):\n return name in self._params\n\n def __iter__(self):\n \"\"\"\n Iterate over param names.\n \"\"\"\n return iter(self.keys())\n\n def __delitem__(self, name):\n \"\"\"\n Remove a parameter from the param store.\n \"\"\"\n unconstrained_value = self._params.pop(name)\n self._param_to_name.pop(unconstrained_value)\n self._constraints.pop(name)\n\n def __getitem__(self, name):\n \"\"\"\n Get the constrained value of a named parameter.\n \"\"\"\n unconstrained_value = self._params[name]\n\n # compute the constrained value\n constraint = self._constraints[name]\n constrained_value = transform_to(constraint)(unconstrained_value)\n constrained_value.unconstrained = weakref.ref(unconstrained_value)\n\n return constrained_value\n\n def __setitem__(self, name, new_constrained_value):\n \"\"\"\n Set the constrained value of an existing parameter, or the value of a\n new unconstrained parameter. 
To declare a new parameter with\n constraint, use :meth:`setdefault`.\n \"\"\"\n # store constraint, defaulting to unconstrained\n constraint = self._constraints.setdefault(name, constraints.real)\n\n # compute the unconstrained value\n with torch.no_grad():\n # FIXME should we .detach() the new_constrained_value?\n unconstrained_value = transform_to(constraint).inv(new_constrained_value)\n unconstrained_value = unconstrained_value.contiguous()\n unconstrained_value.requires_grad_(True)\n\n # store a bidirectional mapping between name and unconstrained tensor\n self._params[name] = unconstrained_value\n self._param_to_name[unconstrained_value] = name\n\n def setdefault(self, name, init_constrained_value, constraint=constraints.real):\n \"\"\"\n Retrieve a constrained parameter value from the if it exists, otherwise\n set the initial value. Note that this is a little fancier than\n :meth:`dict.setdefault`.\n\n If the parameter already exists, ``init_constrained_tensor`` will be ignored. To avoid\n expensive creation of ``init_constrained_tensor`` you can wrap it in a ``lambda`` that\n will only be evaluated if the parameter does not already exist::\n\n param_store.get(\"foo\", lambda: (0.001 * torch.randn(1000, 1000)).exp(),\n constraint=constraints.positive)\n\n :param str name: parameter name\n :param init_constrained_value: initial constrained value\n :type init_constrained_value: torch.Tensor or callable returning a torch.Tensor\n :param constraint: torch constraint object\n :type constraint: torch.distributions.constraints.Constraint\n :returns: constrained parameter value\n :rtype: torch.Tensor\n \"\"\"\n if name not in self._params:\n # set the constraint\n self._constraints[name] = constraint\n\n # evaluate the lazy value\n if callable(init_constrained_value):\n init_constrained_value = init_constrained_value()\n\n # set the initial value\n self[name] = init_constrained_value\n\n # get the param, which is guaranteed to exist\n return self[name]\n\n # -------------------------------------------------------------------------------\n # Old non-dict interface\n\n def named_parameters(self):\n \"\"\"\n Returns an iterator over ``(name, unconstrained_value)`` tuples for\n each parameter in the ParamStore.\n \"\"\"\n return self._params.items()\n\n def get_all_param_names(self):\n warnings.warn(\"ParamStore.get_all_param_names() is deprecated; use .keys() instead.\",\n DeprecationWarning)\n return self.keys()\n\n def replace_param(self, param_name, new_param, old_param):\n warnings.warn(\"ParamStore.replace_param() is deprecated; use .__setitem__() instead.\",\n DeprecationWarning)\n assert self._params[param_name] is old_param.unconstrained()\n self[param_name] = new_param\n\n def get_param(self, name, init_tensor=None, constraint=constraints.real, event_dim=None):\n \"\"\"\n Get parameter from its name. If it does not yet exist in the\n ParamStore, it will be created and stored.\n The Pyro primitive `pyro.param` dispatches to this method.\n\n :param name: parameter name\n :type name: str\n :param init_tensor: initial tensor\n :type init_tensor: torch.Tensor\n :param constraint: torch constraint\n :type constraint: torch.distributions.constraints.Constraint\n :param int event_dim: (ignored)\n :returns: parameter\n :rtype: torch.Tensor\n \"\"\"\n if init_tensor is None:\n return self[name]\n else:\n return self.setdefault(name, init_tensor, constraint)\n\n def match(self, name):\n \"\"\"\n Get all parameters that match regex. 
The parameter must exist.\n\n :param name: regular expression\n :type name: str\n :returns: dict with key param name and value torch Tensor\n \"\"\"\n pattern = re.compile(name)\n return {name: self[name] for name in self if pattern.match(name)}\n\n def param_name(self, p):\n \"\"\"\n Get parameter name from parameter\n\n :param p: parameter\n :returns: parameter name\n \"\"\"\n return self._param_to_name.get(p)\n\n def get_state(self):\n \"\"\"\n Get the ParamStore state.\n \"\"\"\n state = {\n 'params': self._params,\n 'constraints': self._constraints,\n }\n return state\n\n def set_state(self, state):\n \"\"\"\n Set the ParamStore state using state from a previous get_state() call\n \"\"\"\n assert isinstance(state, dict), \"malformed ParamStore state\"\n assert set(state.keys()) == set(['params', 'constraints']), \\\n \"malformed ParamStore keys {}\".format(state.keys())\n\n for param_name, param in state['params'].items():\n self._params[param_name] = param\n self._param_to_name[param] = param_name\n\n for param_name, constraint in state['constraints'].items():\n if isinstance(constraint, type(constraints.real)):\n # Work around lack of hash & equality comparison on constraints.\n constraint = constraints.real\n self._constraints[param_name] = constraint\n\n def save(self, filename):\n \"\"\"\n Save parameters to disk\n\n :param filename: file name to save to\n :type filename: str\n \"\"\"\n with open(filename, \"wb\") as output_file:\n torch.save(self.get_state(), output_file)\n\n def load(self, filename, map_location=None):\n \"\"\"\n Loads parameters from disk\n\n .. note::\n\n If using :meth:`pyro.module` on parameters loaded from\n disk, be sure to set the ``update_module_params`` flag::\n\n pyro.get_param_store().load('saved_params.save')\n pyro.module('module', nn, update_module_params=True)\n\n :param filename: file name to load from\n :type filename: str\n :param map_location: specifies how to remap storage locations\n :type map_location: function, torch.device, string or a dict\n \"\"\"\n with open(filename, \"rb\") as input_file:\n state = torch.load(input_file, map_location)\n self.set_state(state)\n\n\n# used to create fully-formed param names, e.g. mymodule$$$mysubmodule.weight\n_MODULE_NAMESPACE_DIVIDER = \"$$$\"\n\n\ndef param_with_module_name(pyro_name, param_name):\n return _MODULE_NAMESPACE_DIVIDER.join([pyro_name, param_name])\n\n\ndef module_from_param_with_module_name(param_name):\n return param_name.split(_MODULE_NAMESPACE_DIVIDER)[0]\n\n\ndef user_param_name(param_name):\n if _MODULE_NAMESPACE_DIVIDER in param_name:\n return param_name.split(_MODULE_NAMESPACE_DIVIDER)[1]\n return param_name\n", "path": "pyro/params/param_store.py"}]} |