problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64 271-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_21518
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-modules-extras-2806
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
os_project requires python-shade 1.8.1 or higher
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
os_project
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ubuntu 16.04
##### SUMMARY
<!--- Explain the problem briefly -->
With ansible 2.1.1.0 the os_project module requires python-shade 1.8 or higher. But ubuntu ships python-shade 1.5, so this module does not work anymore. The os_project module is working correctly with ansible 2.1.0.0
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Create OpenStack project
os_project:
cloud: my_cloud
name: projectname
description: Test Project
domain_id: default
endpoint_type: internal
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
A new project in OpenStack
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_g7eGSB/ansible_module_os_project.py", line 222, in <module>
main()
File "/tmp/ansible_g7eGSB/ansible_module_os_project.py", line 183, in main
project = cloud.get_project(name, domain_id=domain)
TypeError: get_project() got an unexpected keyword argument 'domain_id'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cloud/openstack/os_project.py`
Content:
```
1 #!/usr/bin/python
2 # Copyright (c) 2015 IBM Corporation
3 #
4 # This module is free software: you can redistribute it and/or modify
5 # it under the terms of the GNU General Public License as published by
6 # the Free Software Foundation, either version 3 of the License, or
7 # (at your option) any later version.
8 #
9 # This software is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU General Public License for more details.
13 #
14 # You should have received a copy of the GNU General Public License
15 # along with this software. If not, see <http://www.gnu.org/licenses/>.
16
17
18 try:
19 import shade
20 HAS_SHADE = True
21 except ImportError:
22 HAS_SHADE = False
23
24 DOCUMENTATION = '''
25 ---
26 module: os_project
27 short_description: Manage OpenStack Projects
28 extends_documentation_fragment: openstack
29 version_added: "2.0"
30 author: "Alberto Gireud (@agireud)"
31 description:
32 - Manage OpenStack Projects. Projects can be created,
33 updated or deleted using this module. A project will be updated
34 if I(name) matches an existing project and I(state) is present.
35 The value for I(name) cannot be updated without deleting and
36 re-creating the project.
37 options:
38 name:
39 description:
40 - Name for the project
41 required: true
42 description:
43 description:
44 - Description for the project
45 required: false
46 default: None
47 domain_id:
48 description:
49 - Domain id to create the project in if the cloud supports domains
50 required: false
51 default: None
52 aliases: ['domain']
53 enabled:
54 description:
55 - Is the project enabled
56 required: false
57 default: True
58 state:
59 description:
60 - Should the resource be present or absent.
61 choices: [present, absent]
62 default: present
63 requirements:
64 - "python >= 2.6"
65 - "shade"
66 '''
67
68 EXAMPLES = '''
69 # Create a project
70 - os_project:
71 cloud: mycloud
72 state: present
73 name: demoproject
74 description: demodescription
75 domain_id: demoid
76 enabled: True
77
78 # Delete a project
79 - os_project:
80 cloud: mycloud
81 state: absent
82 name: demoproject
83 '''
84
85
86 RETURN = '''
87 project:
88 description: Dictionary describing the project.
89 returned: On success when I(state) is 'present'
90 type: dictionary
91 contains:
92 id:
93 description: Project ID
94 type: string
95 sample: "f59382db809c43139982ca4189404650"
96 name:
97 description: Project name
98 type: string
99 sample: "demoproject"
100 description:
101 description: Project description
102 type: string
103 sample: "demodescription"
104 enabled:
105 description: Boolean to indicate if project is enabled
106 type: bool
107 sample: True
108 '''
109
110 def _needs_update(module, project):
111 keys = ('description', 'enabled')
112 for key in keys:
113 if module.params[key] is not None and module.params[key] != project.get(key):
114 return True
115
116 return False
117
118 def _system_state_change(module, project):
119 state = module.params['state']
120 if state == 'present':
121 if project is None:
122 changed = True
123 else:
124 if _needs_update(module, project):
125 changed = True
126 else:
127 changed = False
128
129 elif state == 'absent':
130 if project is None:
131 changed=False
132 else:
133 changed=True
134
135 return changed;
136
137 def main():
138
139 argument_spec = openstack_full_argument_spec(
140 name=dict(required=True),
141 description=dict(required=False, default=None),
142 domain_id=dict(required=False, default=None, aliases=['domain']),
143 enabled=dict(default=True, type='bool'),
144 state=dict(default='present', choices=['absent', 'present'])
145 )
146
147 module_kwargs = openstack_module_kwargs()
148 module = AnsibleModule(
149 argument_spec,
150 supports_check_mode=True,
151 **module_kwargs
152 )
153
154 if not HAS_SHADE:
155 module.fail_json(msg='shade is required for this module')
156
157 name = module.params['name']
158 description = module.params['description']
159 domain = module.params.pop('domain_id')
160 enabled = module.params['enabled']
161 state = module.params['state']
162
163 try:
164 if domain:
165 opcloud = shade.operator_cloud(**module.params)
166 try:
167 # We assume admin is passing domain id
168 dom = opcloud.get_domain(domain)['id']
169 domain = dom
170 except:
171 # If we fail, maybe admin is passing a domain name.
172 # Note that domains have unique names, just like id.
173 try:
174 dom = opcloud.search_domains(filters={'name': domain})[0]['id']
175 domain = dom
176 except:
177 # Ok, let's hope the user is non-admin and passing a sane id
178 pass
179
180 cloud = shade.openstack_cloud(**module.params)
181
182 if domain:
183 project = cloud.get_project(name, domain_id=domain)
184 else:
185 project = cloud.get_project(name)
186
187 if module.check_mode:
188 module.exit_json(changed=_system_state_change(module, project))
189
190 if state == 'present':
191 if project is None:
192 project = cloud.create_project(
193 name=name, description=description,
194 domain_id=domain,
195 enabled=enabled)
196 changed = True
197 else:
198 if _needs_update(module, project):
199 project = cloud.update_project(
200 project['id'], description=description,
201 enabled=enabled)
202 changed = True
203 else:
204 changed = False
205 module.exit_json(changed=changed, project=project)
206
207 elif state == 'absent':
208 if project is None:
209 changed=False
210 else:
211 cloud.delete_project(project['id'])
212 changed=True
213 module.exit_json(changed=changed)
214
215 except shade.OpenStackCloudException as e:
216 module.fail_json(msg=e.message, extra_data=e.extra_data)
217
218 from ansible.module_utils.basic import *
219 from ansible.module_utils.openstack import *
220
221 if __name__ == '__main__':
222 main()
223
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cloud/openstack/os_project.py b/cloud/openstack/os_project.py
--- a/cloud/openstack/os_project.py
+++ b/cloud/openstack/os_project.py
@@ -21,6 +21,8 @@
except ImportError:
HAS_SHADE = False
+from distutils.version import StrictVersion
+
DOCUMENTATION = '''
---
module: os_project
@@ -46,7 +48,8 @@
default: None
domain_id:
description:
- - Domain id to create the project in if the cloud supports domains
+ - Domain id to create the project in if the cloud supports domains.
+ The domain_id parameter requires shade >= 1.8.0
required: false
default: None
aliases: ['domain']
@@ -160,6 +163,9 @@
enabled = module.params['enabled']
state = module.params['state']
+ if domain and StrictVersion(shade.__version__) < StrictVersion('1.8.0'):
+ module.fail_json(msg="The domain argument requires shade >=1.8.0")
+
try:
if domain:
opcloud = shade.operator_cloud(**module.params)
|
{"golden_diff": "diff --git a/cloud/openstack/os_project.py b/cloud/openstack/os_project.py\n--- a/cloud/openstack/os_project.py\n+++ b/cloud/openstack/os_project.py\n@@ -21,6 +21,8 @@\n except ImportError:\n HAS_SHADE = False\n \n+from distutils.version import StrictVersion\n+\n DOCUMENTATION = '''\n ---\n module: os_project\n@@ -46,7 +48,8 @@\n default: None\n domain_id:\n description:\n- - Domain id to create the project in if the cloud supports domains\n+ - Domain id to create the project in if the cloud supports domains.\n+ The domain_id parameter requires shade >= 1.8.0\n required: false\n default: None\n aliases: ['domain']\n@@ -160,6 +163,9 @@\n enabled = module.params['enabled']\n state = module.params['state']\n \n+ if domain and StrictVersion(shade.__version__) < StrictVersion('1.8.0'):\n+ module.fail_json(msg=\"The domain argument requires shade >=1.8.0\")\n+\n try:\n if domain:\n opcloud = shade.operator_cloud(**module.params)\n", "issue": "os_project requires python-shade 1.8.1 or higher\n<!--- Verify first that your issue/request is not already reported in GitHub -->\n##### ISSUE TYPE\n\n<!--- Pick one below and delete the rest: -->\n- Bug Report\n##### COMPONENT NAME\n\n<!--- Name of the plugin/module/task -->\n\nos_project\n##### ANSIBLE VERSION\n\n<!--- Paste verbatim output from \u201cansible --version\u201d between quotes below -->\n\n```\nansible 2.1.1.0\n```\n##### OS / ENVIRONMENT\n\n<!---\nMention the OS you are running Ansible from, and the OS you are\nmanaging, or say \u201cN/A\u201d for anything that is not platform-specific.\n-->\n\nUbuntu 16.04\n##### SUMMARY\n\n<!--- Explain the problem briefly -->\n\nWith ansible 2.1.1.0 the os_project module requires python-shade 1.8 or higher. But ubuntu ships python-shade 1.5, so this module does not work anymore. The os_project module is working correctly with ansible 2.1.0.0\n##### STEPS TO REPRODUCE\n\n<!---\nFor bugs, show exactly how to reproduce the problem.\nFor new features, show how the feature would be used.\n-->\n\n<!--- Paste example playbooks or commands between quotes below -->\n\n```\n- name: Create OpenStack project\n os_project:\n cloud: my_cloud\n name: projectname\n description: Test Project\n domain_id: default\n endpoint_type: internal\n```\n\n<!--- You can also paste gist.github.com links for larger files -->\n##### EXPECTED RESULTS\n\n<!--- What did you expect to happen when running the steps above? -->\n\nA new project in OpenStack\n##### ACTUAL RESULTS\n\n<!--- What actually happened? If possible run with high verbosity (-vvvv) -->\n\n<!--- Paste verbatim command output between quotes below -->\n\n```\nAn exception occurred during task execution. 
The full traceback is:\nTraceback (most recent call last):\n File \"/tmp/ansible_g7eGSB/ansible_module_os_project.py\", line 222, in <module>\n main()\n File \"/tmp/ansible_g7eGSB/ansible_module_os_project.py\", line 183, in main\n project = cloud.get_project(name, domain_id=domain)\nTypeError: get_project() got an unexpected keyword argument 'domain_id'\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n# Copyright (c) 2015 IBM Corporation\n#\n# This module is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This software is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this software. If not, see <http://www.gnu.org/licenses/>.\n\n\ntry:\n import shade\n HAS_SHADE = True\nexcept ImportError:\n HAS_SHADE = False\n\nDOCUMENTATION = '''\n---\nmodule: os_project\nshort_description: Manage OpenStack Projects\nextends_documentation_fragment: openstack\nversion_added: \"2.0\"\nauthor: \"Alberto Gireud (@agireud)\"\ndescription:\n - Manage OpenStack Projects. Projects can be created,\n updated or deleted using this module. A project will be updated\n if I(name) matches an existing project and I(state) is present.\n The value for I(name) cannot be updated without deleting and\n re-creating the project.\noptions:\n name:\n description:\n - Name for the project\n required: true\n description:\n description:\n - Description for the project\n required: false\n default: None\n domain_id:\n description:\n - Domain id to create the project in if the cloud supports domains\n required: false\n default: None\n aliases: ['domain']\n enabled:\n description:\n - Is the project enabled\n required: false\n default: True\n state:\n description:\n - Should the resource be present or absent.\n choices: [present, absent]\n default: present\nrequirements:\n - \"python >= 2.6\"\n - \"shade\"\n'''\n\nEXAMPLES = '''\n# Create a project\n- os_project:\n cloud: mycloud\n state: present\n name: demoproject\n description: demodescription\n domain_id: demoid\n enabled: True\n\n# Delete a project\n- os_project:\n cloud: mycloud\n state: absent\n name: demoproject\n'''\n\n\nRETURN = '''\nproject:\n description: Dictionary describing the project.\n returned: On success when I(state) is 'present'\n type: dictionary\n contains:\n id:\n description: Project ID\n type: string\n sample: \"f59382db809c43139982ca4189404650\"\n name:\n description: Project name\n type: string\n sample: \"demoproject\"\n description:\n description: Project description\n type: string\n sample: \"demodescription\"\n enabled:\n description: Boolean to indicate if project is enabled\n type: bool\n sample: True\n'''\n\ndef _needs_update(module, project):\n keys = ('description', 'enabled')\n for key in keys:\n if module.params[key] is not None and module.params[key] != project.get(key):\n return True\n\n return False\n\ndef _system_state_change(module, project):\n state = module.params['state']\n if state == 'present':\n if project is None:\n changed = True\n else:\n if _needs_update(module, project):\n changed = True\n else:\n changed = False\n\n elif state == 'absent':\n if project is None:\n 
changed=False\n else:\n changed=True\n\n return changed;\n\ndef main():\n\n argument_spec = openstack_full_argument_spec(\n name=dict(required=True),\n description=dict(required=False, default=None),\n domain_id=dict(required=False, default=None, aliases=['domain']),\n enabled=dict(default=True, type='bool'),\n state=dict(default='present', choices=['absent', 'present'])\n )\n\n module_kwargs = openstack_module_kwargs()\n module = AnsibleModule(\n argument_spec,\n supports_check_mode=True,\n **module_kwargs\n )\n\n if not HAS_SHADE:\n module.fail_json(msg='shade is required for this module')\n\n name = module.params['name']\n description = module.params['description']\n domain = module.params.pop('domain_id')\n enabled = module.params['enabled']\n state = module.params['state']\n\n try:\n if domain:\n opcloud = shade.operator_cloud(**module.params)\n try:\n # We assume admin is passing domain id\n dom = opcloud.get_domain(domain)['id']\n domain = dom\n except:\n # If we fail, maybe admin is passing a domain name.\n # Note that domains have unique names, just like id.\n try:\n dom = opcloud.search_domains(filters={'name': domain})[0]['id']\n domain = dom\n except:\n # Ok, let's hope the user is non-admin and passing a sane id\n pass\n\n cloud = shade.openstack_cloud(**module.params)\n\n if domain:\n project = cloud.get_project(name, domain_id=domain)\n else:\n project = cloud.get_project(name)\n\n if module.check_mode:\n module.exit_json(changed=_system_state_change(module, project))\n\n if state == 'present':\n if project is None:\n project = cloud.create_project(\n name=name, description=description,\n domain_id=domain,\n enabled=enabled)\n changed = True\n else:\n if _needs_update(module, project):\n project = cloud.update_project(\n project['id'], description=description,\n enabled=enabled)\n changed = True\n else:\n changed = False\n module.exit_json(changed=changed, project=project)\n\n elif state == 'absent':\n if project is None:\n changed=False\n else:\n cloud.delete_project(project['id'])\n changed=True\n module.exit_json(changed=changed)\n\n except shade.OpenStackCloudException as e:\n module.fail_json(msg=e.message, extra_data=e.extra_data)\n\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.openstack import *\n\nif __name__ == '__main__':\n main()\n", "path": "cloud/openstack/os_project.py"}], "after_files": [{"content": "#!/usr/bin/python\n# Copyright (c) 2015 IBM Corporation\n#\n# This module is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This software is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this software. If not, see <http://www.gnu.org/licenses/>.\n\n\ntry:\n import shade\n HAS_SHADE = True\nexcept ImportError:\n HAS_SHADE = False\n\nfrom distutils.version import StrictVersion\n\nDOCUMENTATION = '''\n---\nmodule: os_project\nshort_description: Manage OpenStack Projects\nextends_documentation_fragment: openstack\nversion_added: \"2.0\"\nauthor: \"Alberto Gireud (@agireud)\"\ndescription:\n - Manage OpenStack Projects. Projects can be created,\n updated or deleted using this module. 
A project will be updated\n if I(name) matches an existing project and I(state) is present.\n The value for I(name) cannot be updated without deleting and\n re-creating the project.\noptions:\n name:\n description:\n - Name for the project\n required: true\n description:\n description:\n - Description for the project\n required: false\n default: None\n domain_id:\n description:\n - Domain id to create the project in if the cloud supports domains.\n The domain_id parameter requires shade >= 1.8.0\n required: false\n default: None\n aliases: ['domain']\n enabled:\n description:\n - Is the project enabled\n required: false\n default: True\n state:\n description:\n - Should the resource be present or absent.\n choices: [present, absent]\n default: present\nrequirements:\n - \"python >= 2.6\"\n - \"shade\"\n'''\n\nEXAMPLES = '''\n# Create a project\n- os_project:\n cloud: mycloud\n state: present\n name: demoproject\n description: demodescription\n domain_id: demoid\n enabled: True\n\n# Delete a project\n- os_project:\n cloud: mycloud\n state: absent\n name: demoproject\n'''\n\n\nRETURN = '''\nproject:\n description: Dictionary describing the project.\n returned: On success when I(state) is 'present'\n type: dictionary\n contains:\n id:\n description: Project ID\n type: string\n sample: \"f59382db809c43139982ca4189404650\"\n name:\n description: Project name\n type: string\n sample: \"demoproject\"\n description:\n description: Project description\n type: string\n sample: \"demodescription\"\n enabled:\n description: Boolean to indicate if project is enabled\n type: bool\n sample: True\n'''\n\ndef _needs_update(module, project):\n keys = ('description', 'enabled')\n for key in keys:\n if module.params[key] is not None and module.params[key] != project.get(key):\n return True\n\n return False\n\ndef _system_state_change(module, project):\n state = module.params['state']\n if state == 'present':\n if project is None:\n changed = True\n else:\n if _needs_update(module, project):\n changed = True\n else:\n changed = False\n\n elif state == 'absent':\n if project is None:\n changed=False\n else:\n changed=True\n\n return changed;\n\ndef main():\n\n argument_spec = openstack_full_argument_spec(\n name=dict(required=True),\n description=dict(required=False, default=None),\n domain_id=dict(required=False, default=None, aliases=['domain']),\n enabled=dict(default=True, type='bool'),\n state=dict(default='present', choices=['absent', 'present'])\n )\n\n module_kwargs = openstack_module_kwargs()\n module = AnsibleModule(\n argument_spec,\n supports_check_mode=True,\n **module_kwargs\n )\n\n if not HAS_SHADE:\n module.fail_json(msg='shade is required for this module')\n\n name = module.params['name']\n description = module.params['description']\n domain = module.params.pop('domain_id')\n enabled = module.params['enabled']\n state = module.params['state']\n\n if domain and StrictVersion(shade.__version__) < StrictVersion('1.8.0'):\n module.fail_json(msg=\"The domain argument requires shade >=1.8.0\")\n\n try:\n if domain:\n opcloud = shade.operator_cloud(**module.params)\n try:\n # We assume admin is passing domain id\n dom = opcloud.get_domain(domain)['id']\n domain = dom\n except:\n # If we fail, maybe admin is passing a domain name.\n # Note that domains have unique names, just like id.\n try:\n dom = opcloud.search_domains(filters={'name': domain})[0]['id']\n domain = dom\n except:\n # Ok, let's hope the user is non-admin and passing a sane id\n pass\n\n cloud = 
shade.openstack_cloud(**module.params)\n\n if domain:\n project = cloud.get_project(name, domain_id=domain)\n else:\n project = cloud.get_project(name)\n\n if module.check_mode:\n module.exit_json(changed=_system_state_change(module, project))\n\n if state == 'present':\n if project is None:\n project = cloud.create_project(\n name=name, description=description,\n domain_id=domain,\n enabled=enabled)\n changed = True\n else:\n if _needs_update(module, project):\n project = cloud.update_project(\n project['id'], description=description,\n enabled=enabled)\n changed = True\n else:\n changed = False\n module.exit_json(changed=changed, project=project)\n\n elif state == 'absent':\n if project is None:\n changed=False\n else:\n cloud.delete_project(project['id'])\n changed=True\n module.exit_json(changed=changed)\n\n except shade.OpenStackCloudException as e:\n module.fail_json(msg=e.message, extra_data=e.extra_data)\n\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.openstack import *\n\nif __name__ == '__main__':\n main()\n", "path": "cloud/openstack/os_project.py"}]}
| 2,704 | 257 |
gh_patches_debug_8956
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-11147
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sync naming convention and style in NLP datasets
Also fixes a possible bug with np.load()/f.close() pair not being exception-safe.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras/datasets/boston_housing.py`
Content:
```
1 """Boston housing price regression dataset.
2 """
3 from __future__ import absolute_import
4 from __future__ import division
5 from __future__ import print_function
6
7 from ..utils.data_utils import get_file
8 import numpy as np
9
10
11 def load_data(path='boston_housing.npz', test_split=0.2, seed=113):
12 """Loads the Boston Housing dataset.
13
14 # Arguments
15 path: path where to cache the dataset locally
16 (relative to ~/.keras/datasets).
17 test_split: fraction of the data to reserve as test set.
18 seed: Random seed for shuffling the data
19 before computing the test split.
20
21 # Returns
22 Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
23 """
24 assert 0 <= test_split < 1
25 path = get_file(path,
26 origin='https://s3.amazonaws.com/keras-datasets/boston_housing.npz',
27 file_hash='f553886a1f8d56431e820c5b82552d9d95cfcb96d1e678153f8839538947dff5')
28 f = np.load(path)
29 x = f['x']
30 y = f['y']
31 f.close()
32
33 np.random.seed(seed)
34 indices = np.arange(len(x))
35 np.random.shuffle(indices)
36 x = x[indices]
37 y = y[indices]
38
39 x_train = np.array(x[:int(len(x) * (1 - test_split))])
40 y_train = np.array(y[:int(len(x) * (1 - test_split))])
41 x_test = np.array(x[int(len(x) * (1 - test_split)):])
42 y_test = np.array(y[int(len(x) * (1 - test_split)):])
43 return (x_train, y_train), (x_test, y_test)
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/keras/datasets/boston_housing.py b/keras/datasets/boston_housing.py
--- a/keras/datasets/boston_housing.py
+++ b/keras/datasets/boston_housing.py
@@ -25,10 +25,9 @@
path = get_file(path,
origin='https://s3.amazonaws.com/keras-datasets/boston_housing.npz',
file_hash='f553886a1f8d56431e820c5b82552d9d95cfcb96d1e678153f8839538947dff5')
- f = np.load(path)
- x = f['x']
- y = f['y']
- f.close()
+ with np.load(path) as f:
+ x = f['x']
+ y = f['y']
np.random.seed(seed)
indices = np.arange(len(x))
|
{"golden_diff": "diff --git a/keras/datasets/boston_housing.py b/keras/datasets/boston_housing.py\n--- a/keras/datasets/boston_housing.py\n+++ b/keras/datasets/boston_housing.py\n@@ -25,10 +25,9 @@\n path = get_file(path,\n origin='https://s3.amazonaws.com/keras-datasets/boston_housing.npz',\n file_hash='f553886a1f8d56431e820c5b82552d9d95cfcb96d1e678153f8839538947dff5')\n- f = np.load(path)\n- x = f['x']\n- y = f['y']\n- f.close()\n+ with np.load(path) as f:\n+ x = f['x']\n+ y = f['y']\n \n np.random.seed(seed)\n indices = np.arange(len(x))\n", "issue": "Sync naming convention and style in NLP datasets\nAlso fixes a possible bug with np.load()/f.close() pair not being exception-safe.\n", "before_files": [{"content": "\"\"\"Boston housing price regression dataset.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom ..utils.data_utils import get_file\nimport numpy as np\n\n\ndef load_data(path='boston_housing.npz', test_split=0.2, seed=113):\n \"\"\"Loads the Boston Housing dataset.\n\n # Arguments\n path: path where to cache the dataset locally\n (relative to ~/.keras/datasets).\n test_split: fraction of the data to reserve as test set.\n seed: Random seed for shuffling the data\n before computing the test split.\n\n # Returns\n Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.\n \"\"\"\n assert 0 <= test_split < 1\n path = get_file(path,\n origin='https://s3.amazonaws.com/keras-datasets/boston_housing.npz',\n file_hash='f553886a1f8d56431e820c5b82552d9d95cfcb96d1e678153f8839538947dff5')\n f = np.load(path)\n x = f['x']\n y = f['y']\n f.close()\n\n np.random.seed(seed)\n indices = np.arange(len(x))\n np.random.shuffle(indices)\n x = x[indices]\n y = y[indices]\n\n x_train = np.array(x[:int(len(x) * (1 - test_split))])\n y_train = np.array(y[:int(len(x) * (1 - test_split))])\n x_test = np.array(x[int(len(x) * (1 - test_split)):])\n y_test = np.array(y[int(len(x) * (1 - test_split)):])\n return (x_train, y_train), (x_test, y_test)\n", "path": "keras/datasets/boston_housing.py"}], "after_files": [{"content": "\"\"\"Boston housing price regression dataset.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom ..utils.data_utils import get_file\nimport numpy as np\n\n\ndef load_data(path='boston_housing.npz', test_split=0.2, seed=113):\n \"\"\"Loads the Boston Housing dataset.\n\n # Arguments\n path: path where to cache the dataset locally\n (relative to ~/.keras/datasets).\n test_split: fraction of the data to reserve as test set.\n seed: Random seed for shuffling the data\n before computing the test split.\n\n # Returns\n Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.\n \"\"\"\n assert 0 <= test_split < 1\n path = get_file(path,\n origin='https://s3.amazonaws.com/keras-datasets/boston_housing.npz',\n file_hash='f553886a1f8d56431e820c5b82552d9d95cfcb96d1e678153f8839538947dff5')\n with np.load(path) as f:\n x = f['x']\n y = f['y']\n\n np.random.seed(seed)\n indices = np.arange(len(x))\n np.random.shuffle(indices)\n x = x[indices]\n y = y[indices]\n\n x_train = np.array(x[:int(len(x) * (1 - test_split))])\n y_train = np.array(y[:int(len(x) * (1 - test_split))])\n x_test = np.array(x[int(len(x) * (1 - test_split)):])\n y_test = np.array(y[int(len(x) * (1 - test_split)):])\n return (x_train, y_train), (x_test, y_test)\n", "path": "keras/datasets/boston_housing.py"}]}
| 797 | 228 |
gh_patches_debug_159
|
rasdani/github-patches
|
git_diff
|
uccser__cs-unplugged-54
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Bootstrap 4 SCSS
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/config/settings.py`
Content:
```
1 """
2 Django settings for csunplugged project.
3
4 Generated by 'django-admin startproject' using Django 1.10.3.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.10/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/1.10/ref/settings/
11 """
12
13 import os
14 from config.settings_secret import *
15
16 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
17 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
18
19 # nasty hard coding
20 SETTINGS_PATH = os.path.dirname(os.path.dirname(__file__))
21
22
23 # Quick-start development settings - unsuitable for production
24 # See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/
25
26 # SECURITY WARNING: keep the secret key used in production secret!
27 SECRET_KEY = 'l@@)w&&%&u37+sjz^lsx^+29y_333oid3ygxzucar^8o(axo*f'
28
29 # SECURITY WARNING: don't run with debug turned on in production!
30 DEBUG = True
31
32 ALLOWED_HOSTS = []
33
34
35 # Application definition
36
37 INSTALLED_APPS = [
38 'general.apps.GeneralConfig',
39 'topics.apps.TopicsConfig',
40 'resources.apps.ResourcesConfig',
41 'django.contrib.admin',
42 'django.contrib.auth',
43 'django.contrib.contenttypes',
44 'django.contrib.sessions',
45 'django.contrib.messages',
46 'django.contrib.staticfiles',
47 ]
48
49 MIDDLEWARE = [
50 'django.middleware.security.SecurityMiddleware',
51 'django.contrib.sessions.middleware.SessionMiddleware',
52 'django.middleware.locale.LocaleMiddleware',
53 'django.middleware.common.CommonMiddleware',
54 'django.middleware.csrf.CsrfViewMiddleware',
55 'django.contrib.auth.middleware.AuthenticationMiddleware',
56 'django.contrib.messages.middleware.MessageMiddleware',
57 'django.middleware.clickjacking.XFrameOptionsMiddleware',
58 ]
59
60 ROOT_URLCONF = 'config.urls'
61
62 TEMPLATES = [
63 {
64 'BACKEND': 'django.template.backends.django.DjangoTemplates',
65 'DIRS': [
66 os.path.join(SETTINGS_PATH, 'templates'),
67 os.path.join(SETTINGS_PATH, 'resources/content/')
68 ],
69 'APP_DIRS': True,
70 'OPTIONS': {
71 'context_processors': [
72 'django.template.context_processors.debug',
73 'django.template.context_processors.request',
74 'django.contrib.auth.context_processors.auth',
75 'django.contrib.messages.context_processors.messages',
76 ],
77 },
78 },
79 ]
80
81 WSGI_APPLICATION = 'config.wsgi.application'
82
83
84 # Database
85 # https://docs.djangoproject.com/en/1.10/ref/settings/#databases
86 # Database values are stored in `settings_secret.py`
87 # A template of this file is available as `settings_secret_template.py`
88
89
90 # Password validation
91 # https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators
92
93 AUTH_PASSWORD_VALIDATORS = [
94 {
95 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
96 },
97 {
98 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
99 },
100 {
101 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
102 },
103 {
104 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
105 },
106 ]
107
108
109 # Internationalization
110 # https://docs.djangoproject.com/en/1.10/topics/i18n/
111
112 LANGUAGE_CODE = 'en-us'
113
114 TIME_ZONE = 'UTC'
115
116 USE_I18N = True
117
118 USE_L10N = True
119
120 USE_TZ = True
121
122 LOCALE_PATHS = ['locale']
123
124 # Static files (CSS, JavaScript, Images)
125 # https://docs.djangoproject.com/en/1.10/howto/static-files/
126
127 STATIC_URL = '/static/'
128 STATICFILES_DIRS = (
129 os.path.join(BASE_DIR, 'static'),
130 )
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/csunplugged/config/settings.py b/csunplugged/config/settings.py
--- a/csunplugged/config/settings.py
+++ b/csunplugged/config/settings.py
@@ -126,5 +126,5 @@
STATIC_URL = '/static/'
STATICFILES_DIRS = (
- os.path.join(BASE_DIR, 'static'),
+ os.path.join(BASE_DIR, 'build'),
)
|
{"golden_diff": "diff --git a/csunplugged/config/settings.py b/csunplugged/config/settings.py\n--- a/csunplugged/config/settings.py\n+++ b/csunplugged/config/settings.py\n@@ -126,5 +126,5 @@\n \n STATIC_URL = '/static/'\n STATICFILES_DIRS = (\n- os.path.join(BASE_DIR, 'static'),\n+ os.path.join(BASE_DIR, 'build'),\n )\n", "issue": "Add Bootstrap 4 SCSS\n\n", "before_files": [{"content": "\"\"\"\nDjango settings for csunplugged project.\n\nGenerated by 'django-admin startproject' using Django 1.10.3.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.10/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.10/ref/settings/\n\"\"\"\n\nimport os\nfrom config.settings_secret import *\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n# nasty hard coding\nSETTINGS_PATH = os.path.dirname(os.path.dirname(__file__))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = 'l@@)w&&%&u37+sjz^lsx^+29y_333oid3ygxzucar^8o(axo*f'\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = []\n\n\n# Application definition\n\nINSTALLED_APPS = [\n 'general.apps.GeneralConfig',\n 'topics.apps.TopicsConfig',\n 'resources.apps.ResourcesConfig',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'config.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [\n os.path.join(SETTINGS_PATH, 'templates'),\n os.path.join(SETTINGS_PATH, 'resources/content/')\n ],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'config.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.10/ref/settings/#databases\n# Database values are stored in `settings_secret.py`\n# A template of this file is available as `settings_secret_template.py`\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.10/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 
'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\nLOCALE_PATHS = ['locale']\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.10/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'static'),\n )\n", "path": "csunplugged/config/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for csunplugged project.\n\nGenerated by 'django-admin startproject' using Django 1.10.3.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.10/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.10/ref/settings/\n\"\"\"\n\nimport os\nfrom config.settings_secret import *\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n# nasty hard coding\nSETTINGS_PATH = os.path.dirname(os.path.dirname(__file__))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = 'l@@)w&&%&u37+sjz^lsx^+29y_333oid3ygxzucar^8o(axo*f'\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = []\n\n\n# Application definition\n\nINSTALLED_APPS = [\n 'general.apps.GeneralConfig',\n 'topics.apps.TopicsConfig',\n 'resources.apps.ResourcesConfig',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'config.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [\n os.path.join(SETTINGS_PATH, 'templates'),\n os.path.join(SETTINGS_PATH, 'resources/content/')\n ],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'config.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.10/ref/settings/#databases\n# Database values are stored in `settings_secret.py`\n# A template of this file is available as `settings_secret_template.py`\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.10/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = 
True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\nLOCALE_PATHS = ['locale']\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.10/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'build'),\n )\n", "path": "csunplugged/config/settings.py"}]}
| 1,380 | 90 |
gh_patches_debug_40627
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-801
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Memory leak due to cyclic references in Trace
@karalets noticed memory growth when using Pyro.
We tracked the number of non-garbage-collected objects during SVI inference. We found that objects grow over iterations but are occasionally collected. This evidence points to a cyclic reference somewhere. Among the amassing objects were `Variable`s and Pyro `Trace` objects, suggesting that the cyclic reference is either in Pyro internals or in the networkx `DiGraph` object from which `Trace` inherits.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyro/poutine/trace.py`
Content:
```
1 from __future__ import absolute_import, division, print_function
2
3 import collections
4 import warnings
5
6 import networkx
7 from torch.autograd import Variable
8
9 from pyro.distributions.util import scale_tensor
10 from pyro.util import is_nan, is_inf
11
12
13 def _warn_if_nan(name, value):
14 if isinstance(value, Variable):
15 value = value.data[0]
16 if is_nan(value):
17 warnings.warn("Encountered NAN log_pdf at site '{}'".format(name))
18 if is_inf(value) and value > 0:
19 warnings.warn("Encountered +inf log_pdf at site '{}'".format(name))
20 # Note that -inf log_pdf is fine: it is merely a zero-probability event.
21
22
23 class Trace(networkx.DiGraph):
24 """
25 Execution trace data structure
26 """
27
28 node_dict_factory = collections.OrderedDict
29
30 def __init__(self, *args, **kwargs):
31 """
32 :param string graph_type: string specifying the kind of trace graph to construct
33
34 Constructor. Currently identical to networkx.``DiGraph(\*args, \**kwargs)``,
35 except for storing the graph_type attribute
36 """
37 graph_type = kwargs.pop("graph_type", "flat")
38 assert graph_type in ("flat", "dense"), \
39 "{} not a valid graph type".format(graph_type)
40 self.graph_type = graph_type
41 super(Trace, self).__init__(*args, **kwargs)
42
43 def add_node(self, site_name, *args, **kwargs):
44 """
45 :param string site_name: the name of the site to be added
46
47 Adds a site to the trace.
48
49 Identical to super(Trace, self).add_node,
50 but raises an error when attempting to add a duplicate node
51 instead of silently overwriting.
52 """
53 # XXX should do more validation than this
54 if kwargs["type"] != "param":
55 assert site_name not in self, \
56 "site {} already in trace".format(site_name)
57
58 # XXX should copy in case site gets mutated, or dont bother?
59 super(Trace, self).add_node(site_name, *args, **kwargs.copy())
60
61 def copy(self):
62 """
63 Makes a shallow copy of self with nodes and edges preserved.
64 Identical to super(Trace, self).copy(), but preserves the type
65 and the self.graph_type attribute
66 """
67 trace = super(Trace, self).copy()
68 trace.graph_type = self.graph_type
69 trace.__class__ = Trace
70 return trace
71
72 def log_pdf(self, site_filter=lambda name, site: True):
73 """
74 Compute the local and overall log-probabilities of the trace.
75
76 The local computation is memoized.
77
78 :returns: total log probability.
79 :rtype: torch.autograd.Variable
80 """
81 log_p = 0.0
82 for name, site in self.nodes.items():
83 if site["type"] == "sample" and site_filter(name, site):
84 try:
85 site_log_p = site["log_pdf"]
86 except KeyError:
87 args, kwargs = site["args"], site["kwargs"]
88 site_log_p = site["fn"].log_prob(site["value"], *args, **kwargs)
89 site_log_p = scale_tensor(site_log_p, site["scale"]).sum()
90 site["log_pdf"] = site_log_p
91 _warn_if_nan(name, site_log_p)
92 log_p += site_log_p
93 return log_p
94
95 # XXX This only makes sense when all tensors have compatible shape.
96 def batch_log_pdf(self, site_filter=lambda name, site: True):
97 """
98 Compute the batched local and overall log-probabilities of the trace.
99
100 The local computation is memoized, and also stores the local `.log_pdf()`.
101 """
102 log_p = 0.0
103 for name, site in self.nodes.items():
104 if site["type"] == "sample" and site_filter(name, site):
105 try:
106 site_log_p = site["batch_log_pdf"]
107 except KeyError:
108 args, kwargs = site["args"], site["kwargs"]
109 site_log_p = site["fn"].log_prob(site["value"], *args, **kwargs)
110 site_log_p = scale_tensor(site_log_p, site["scale"])
111 site["batch_log_pdf"] = site_log_p
112 site["log_pdf"] = site_log_p.sum()
113 _warn_if_nan(name, site["log_pdf"])
114 # Here log_p may be broadcast to a larger tensor:
115 log_p = log_p + site_log_p
116 return log_p
117
118 def compute_batch_log_pdf(self, site_filter=lambda name, site: True):
119 """
120 Compute the batched local log-probabilities at each site of the trace.
121
122 The local computation is memoized, and also stores the local `.log_pdf()`.
123 """
124 for name, site in self.nodes.items():
125 if site["type"] == "sample" and site_filter(name, site):
126 try:
127 site["batch_log_pdf"]
128 except KeyError:
129 args, kwargs = site["args"], site["kwargs"]
130 site_log_p = site["fn"].log_prob(site["value"], *args, **kwargs)
131 site_log_p = scale_tensor(site_log_p, site["scale"])
132 site["batch_log_pdf"] = site_log_p
133 site["log_pdf"] = site_log_p.sum()
134 _warn_if_nan(name, site["log_pdf"])
135
136 def compute_score_parts(self):
137 """
138 Compute the batched local score parts at each site of the trace.
139 """
140 for name, site in self.nodes.items():
141 if site["type"] == "sample" and "score_parts" not in site:
142 # Note that ScoreParts overloads the multiplication operator
143 # to correctly scale each of its three parts.
144 value = site["fn"].score_parts(site["value"], *site["args"], **site["kwargs"]) * site["scale"]
145 site["score_parts"] = value
146 site["batch_log_pdf"] = value[0]
147 site["log_pdf"] = value[0].sum()
148 _warn_if_nan(name, site["log_pdf"])
149
150 @property
151 def observation_nodes(self):
152 """
153 Gets a list of names of observe sites
154 """
155 return [name for name, node in self.nodes.items()
156 if node["type"] == "sample" and
157 node["is_observed"]]
158
159 @property
160 def stochastic_nodes(self):
161 """
162 Gets a list of names of sample sites
163 """
164 return [name for name, node in self.nodes.items()
165 if node["type"] == "sample" and
166 not node["is_observed"]]
167
168 @property
169 def reparameterized_nodes(self):
170 """
171 Gets a list of names of sample sites whose stochastic functions
172 are reparameterizable primitive distributions
173 """
174 return [name for name, node in self.nodes.items()
175 if node["type"] == "sample" and
176 not node["is_observed"] and
177 getattr(node["fn"], "reparameterized", False)]
178
179 @property
180 def nonreparam_stochastic_nodes(self):
181 """
182 Gets a list of names of sample sites whose stochastic functions
183 are not reparameterizable primitive distributions
184 """
185 return list(set(self.stochastic_nodes) - set(self.reparameterized_nodes))
186
187 def iter_stochastic_nodes(self):
188 """
189 Returns an iterator over stochastic nodes in the trace.
190 """
191 for name, node in self.nodes.items():
192 if node["type"] == "sample" and not node["is_observed"]:
193 yield name, node
194
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyro/poutine/trace.py b/pyro/poutine/trace.py
--- a/pyro/poutine/trace.py
+++ b/pyro/poutine/trace.py
@@ -20,13 +20,15 @@
# Note that -inf log_pdf is fine: it is merely a zero-probability event.
-class Trace(networkx.DiGraph):
+class DiGraph(networkx.DiGraph):
+ node_dict_factory = collections.OrderedDict
+
+
+class Trace(object):
"""
Execution trace data structure
"""
- node_dict_factory = collections.OrderedDict
-
def __init__(self, *args, **kwargs):
"""
:param string graph_type: string specifying the kind of trace graph to construct
@@ -34,12 +36,59 @@
Constructor. Currently identical to networkx.``DiGraph(\*args, \**kwargs)``,
except for storing the graph_type attribute
"""
+ self._graph = DiGraph(*args, **kwargs)
graph_type = kwargs.pop("graph_type", "flat")
assert graph_type in ("flat", "dense"), \
"{} not a valid graph type".format(graph_type)
self.graph_type = graph_type
super(Trace, self).__init__(*args, **kwargs)
+ def __del__(self):
+ # Work around cyclic reference bugs in networkx.DiGraph
+ # See https://github.com/uber/pyro/issues/798
+ self._graph.__dict__.clear()
+
+ @property
+ def nodes(self):
+ return self._graph.nodes
+
+ @property
+ def edges(self):
+ return self._graph.edges
+
+ @property
+ def graph(self):
+ return self._graph.graph
+
+ @property
+ def remove_node(self):
+ return self._graph.remove_node
+
+ @property
+ def add_edge(self):
+ return self._graph.add_edge
+
+ @property
+ def is_directed(self):
+ return self._graph.is_directed
+
+ @property
+ def in_degree(self):
+ return self._graph.in_degree
+
+ @property
+ def successors(self):
+ return self._graph.successors
+
+ def __contains__(self, site_name):
+ return site_name in self._graph
+
+ def __iter__(self):
+ return iter(self._graph)
+
+ def __len__(self):
+ return len(self._graph)
+
def add_node(self, site_name, *args, **kwargs):
"""
:param string site_name: the name of the site to be added
@@ -56,7 +105,7 @@
"site {} already in trace".format(site_name)
# XXX should copy in case site gets mutated, or dont bother?
- super(Trace, self).add_node(site_name, *args, **kwargs.copy())
+ self._graph.add_node(site_name, *args, **kwargs)
def copy(self):
"""
@@ -64,9 +113,9 @@
Identical to super(Trace, self).copy(), but preserves the type
and the self.graph_type attribute
"""
- trace = super(Trace, self).copy()
+ trace = Trace()
+ trace._graph = self._graph.copy()
trace.graph_type = self.graph_type
- trace.__class__ = Trace
return trace
def log_pdf(self, site_filter=lambda name, site: True):
|
{"golden_diff": "diff --git a/pyro/poutine/trace.py b/pyro/poutine/trace.py\n--- a/pyro/poutine/trace.py\n+++ b/pyro/poutine/trace.py\n@@ -20,13 +20,15 @@\n # Note that -inf log_pdf is fine: it is merely a zero-probability event.\n \n \n-class Trace(networkx.DiGraph):\n+class DiGraph(networkx.DiGraph):\n+ node_dict_factory = collections.OrderedDict\n+\n+\n+class Trace(object):\n \"\"\"\n Execution trace data structure\n \"\"\"\n \n- node_dict_factory = collections.OrderedDict\n-\n def __init__(self, *args, **kwargs):\n \"\"\"\n :param string graph_type: string specifying the kind of trace graph to construct\n@@ -34,12 +36,59 @@\n Constructor. Currently identical to networkx.``DiGraph(\\*args, \\**kwargs)``,\n except for storing the graph_type attribute\n \"\"\"\n+ self._graph = DiGraph(*args, **kwargs)\n graph_type = kwargs.pop(\"graph_type\", \"flat\")\n assert graph_type in (\"flat\", \"dense\"), \\\n \"{} not a valid graph type\".format(graph_type)\n self.graph_type = graph_type\n super(Trace, self).__init__(*args, **kwargs)\n \n+ def __del__(self):\n+ # Work around cyclic reference bugs in networkx.DiGraph\n+ # See https://github.com/uber/pyro/issues/798\n+ self._graph.__dict__.clear()\n+\n+ @property\n+ def nodes(self):\n+ return self._graph.nodes\n+\n+ @property\n+ def edges(self):\n+ return self._graph.edges\n+\n+ @property\n+ def graph(self):\n+ return self._graph.graph\n+\n+ @property\n+ def remove_node(self):\n+ return self._graph.remove_node\n+\n+ @property\n+ def add_edge(self):\n+ return self._graph.add_edge\n+\n+ @property\n+ def is_directed(self):\n+ return self._graph.is_directed\n+\n+ @property\n+ def in_degree(self):\n+ return self._graph.in_degree\n+\n+ @property\n+ def successors(self):\n+ return self._graph.successors\n+\n+ def __contains__(self, site_name):\n+ return site_name in self._graph\n+\n+ def __iter__(self):\n+ return iter(self._graph)\n+\n+ def __len__(self):\n+ return len(self._graph)\n+\n def add_node(self, site_name, *args, **kwargs):\n \"\"\"\n :param string site_name: the name of the site to be added\n@@ -56,7 +105,7 @@\n \"site {} already in trace\".format(site_name)\n \n # XXX should copy in case site gets mutated, or dont bother?\n- super(Trace, self).add_node(site_name, *args, **kwargs.copy())\n+ self._graph.add_node(site_name, *args, **kwargs)\n \n def copy(self):\n \"\"\"\n@@ -64,9 +113,9 @@\n Identical to super(Trace, self).copy(), but preserves the type\n and the self.graph_type attribute\n \"\"\"\n- trace = super(Trace, self).copy()\n+ trace = Trace()\n+ trace._graph = self._graph.copy()\n trace.graph_type = self.graph_type\n- trace.__class__ = Trace\n return trace\n \n def log_pdf(self, site_filter=lambda name, site: True):\n", "issue": "Memory leak due to cyclic references in Trace\n@karalets noticed memory growth when using Pyro.\r\n\r\nWe tracked the number of non-garbage-collected objects during SVI inference. We found that objects grow over iterations but are occasionally collected. This evidence points to a cyclic reference somewhere. 
Among the amassing objects were `Variable`s and Pyro `Trace` objects, suggesting that the cyclic reference is either in Pyro internals or in the networkx `DiGraph` object from which `Trace` inherits.\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport collections\nimport warnings\n\nimport networkx\nfrom torch.autograd import Variable\n\nfrom pyro.distributions.util import scale_tensor\nfrom pyro.util import is_nan, is_inf\n\n\ndef _warn_if_nan(name, value):\n if isinstance(value, Variable):\n value = value.data[0]\n if is_nan(value):\n warnings.warn(\"Encountered NAN log_pdf at site '{}'\".format(name))\n if is_inf(value) and value > 0:\n warnings.warn(\"Encountered +inf log_pdf at site '{}'\".format(name))\n # Note that -inf log_pdf is fine: it is merely a zero-probability event.\n\n\nclass Trace(networkx.DiGraph):\n \"\"\"\n Execution trace data structure\n \"\"\"\n\n node_dict_factory = collections.OrderedDict\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n :param string graph_type: string specifying the kind of trace graph to construct\n\n Constructor. Currently identical to networkx.``DiGraph(\\*args, \\**kwargs)``,\n except for storing the graph_type attribute\n \"\"\"\n graph_type = kwargs.pop(\"graph_type\", \"flat\")\n assert graph_type in (\"flat\", \"dense\"), \\\n \"{} not a valid graph type\".format(graph_type)\n self.graph_type = graph_type\n super(Trace, self).__init__(*args, **kwargs)\n\n def add_node(self, site_name, *args, **kwargs):\n \"\"\"\n :param string site_name: the name of the site to be added\n\n Adds a site to the trace.\n\n Identical to super(Trace, self).add_node,\n but raises an error when attempting to add a duplicate node\n instead of silently overwriting.\n \"\"\"\n # XXX should do more validation than this\n if kwargs[\"type\"] != \"param\":\n assert site_name not in self, \\\n \"site {} already in trace\".format(site_name)\n\n # XXX should copy in case site gets mutated, or dont bother?\n super(Trace, self).add_node(site_name, *args, **kwargs.copy())\n\n def copy(self):\n \"\"\"\n Makes a shallow copy of self with nodes and edges preserved.\n Identical to super(Trace, self).copy(), but preserves the type\n and the self.graph_type attribute\n \"\"\"\n trace = super(Trace, self).copy()\n trace.graph_type = self.graph_type\n trace.__class__ = Trace\n return trace\n\n def log_pdf(self, site_filter=lambda name, site: True):\n \"\"\"\n Compute the local and overall log-probabilities of the trace.\n\n The local computation is memoized.\n\n :returns: total log probability.\n :rtype: torch.autograd.Variable\n \"\"\"\n log_p = 0.0\n for name, site in self.nodes.items():\n if site[\"type\"] == \"sample\" and site_filter(name, site):\n try:\n site_log_p = site[\"log_pdf\"]\n except KeyError:\n args, kwargs = site[\"args\"], site[\"kwargs\"]\n site_log_p = site[\"fn\"].log_prob(site[\"value\"], *args, **kwargs)\n site_log_p = scale_tensor(site_log_p, site[\"scale\"]).sum()\n site[\"log_pdf\"] = site_log_p\n _warn_if_nan(name, site_log_p)\n log_p += site_log_p\n return log_p\n\n # XXX This only makes sense when all tensors have compatible shape.\n def batch_log_pdf(self, site_filter=lambda name, site: True):\n \"\"\"\n Compute the batched local and overall log-probabilities of the trace.\n\n The local computation is memoized, and also stores the local `.log_pdf()`.\n \"\"\"\n log_p = 0.0\n for name, site in self.nodes.items():\n if site[\"type\"] == \"sample\" and site_filter(name, site):\n try:\n 
site_log_p = site[\"batch_log_pdf\"]\n except KeyError:\n args, kwargs = site[\"args\"], site[\"kwargs\"]\n site_log_p = site[\"fn\"].log_prob(site[\"value\"], *args, **kwargs)\n site_log_p = scale_tensor(site_log_p, site[\"scale\"])\n site[\"batch_log_pdf\"] = site_log_p\n site[\"log_pdf\"] = site_log_p.sum()\n _warn_if_nan(name, site[\"log_pdf\"])\n # Here log_p may be broadcast to a larger tensor:\n log_p = log_p + site_log_p\n return log_p\n\n def compute_batch_log_pdf(self, site_filter=lambda name, site: True):\n \"\"\"\n Compute the batched local log-probabilities at each site of the trace.\n\n The local computation is memoized, and also stores the local `.log_pdf()`.\n \"\"\"\n for name, site in self.nodes.items():\n if site[\"type\"] == \"sample\" and site_filter(name, site):\n try:\n site[\"batch_log_pdf\"]\n except KeyError:\n args, kwargs = site[\"args\"], site[\"kwargs\"]\n site_log_p = site[\"fn\"].log_prob(site[\"value\"], *args, **kwargs)\n site_log_p = scale_tensor(site_log_p, site[\"scale\"])\n site[\"batch_log_pdf\"] = site_log_p\n site[\"log_pdf\"] = site_log_p.sum()\n _warn_if_nan(name, site[\"log_pdf\"])\n\n def compute_score_parts(self):\n \"\"\"\n Compute the batched local score parts at each site of the trace.\n \"\"\"\n for name, site in self.nodes.items():\n if site[\"type\"] == \"sample\" and \"score_parts\" not in site:\n # Note that ScoreParts overloads the multiplication operator\n # to correctly scale each of its three parts.\n value = site[\"fn\"].score_parts(site[\"value\"], *site[\"args\"], **site[\"kwargs\"]) * site[\"scale\"]\n site[\"score_parts\"] = value\n site[\"batch_log_pdf\"] = value[0]\n site[\"log_pdf\"] = value[0].sum()\n _warn_if_nan(name, site[\"log_pdf\"])\n\n @property\n def observation_nodes(self):\n \"\"\"\n Gets a list of names of observe sites\n \"\"\"\n return [name for name, node in self.nodes.items()\n if node[\"type\"] == \"sample\" and\n node[\"is_observed\"]]\n\n @property\n def stochastic_nodes(self):\n \"\"\"\n Gets a list of names of sample sites\n \"\"\"\n return [name for name, node in self.nodes.items()\n if node[\"type\"] == \"sample\" and\n not node[\"is_observed\"]]\n\n @property\n def reparameterized_nodes(self):\n \"\"\"\n Gets a list of names of sample sites whose stochastic functions\n are reparameterizable primitive distributions\n \"\"\"\n return [name for name, node in self.nodes.items()\n if node[\"type\"] == \"sample\" and\n not node[\"is_observed\"] and\n getattr(node[\"fn\"], \"reparameterized\", False)]\n\n @property\n def nonreparam_stochastic_nodes(self):\n \"\"\"\n Gets a list of names of sample sites whose stochastic functions\n are not reparameterizable primitive distributions\n \"\"\"\n return list(set(self.stochastic_nodes) - set(self.reparameterized_nodes))\n\n def iter_stochastic_nodes(self):\n \"\"\"\n Returns an iterator over stochastic nodes in the trace.\n \"\"\"\n for name, node in self.nodes.items():\n if node[\"type\"] == \"sample\" and not node[\"is_observed\"]:\n yield name, node\n", "path": "pyro/poutine/trace.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport collections\nimport warnings\n\nimport networkx\nfrom torch.autograd import Variable\n\nfrom pyro.distributions.util import scale_tensor\nfrom pyro.util import is_nan, is_inf\n\n\ndef _warn_if_nan(name, value):\n if isinstance(value, Variable):\n value = value.data[0]\n if is_nan(value):\n warnings.warn(\"Encountered NAN log_pdf at site '{}'\".format(name))\n if is_inf(value) 
and value > 0:\n warnings.warn(\"Encountered +inf log_pdf at site '{}'\".format(name))\n # Note that -inf log_pdf is fine: it is merely a zero-probability event.\n\n\nclass DiGraph(networkx.DiGraph):\n node_dict_factory = collections.OrderedDict\n\n\nclass Trace(object):\n \"\"\"\n Execution trace data structure\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n :param string graph_type: string specifying the kind of trace graph to construct\n\n Constructor. Currently identical to networkx.``DiGraph(\\*args, \\**kwargs)``,\n except for storing the graph_type attribute\n \"\"\"\n self._graph = DiGraph(*args, **kwargs)\n graph_type = kwargs.pop(\"graph_type\", \"flat\")\n assert graph_type in (\"flat\", \"dense\"), \\\n \"{} not a valid graph type\".format(graph_type)\n self.graph_type = graph_type\n super(Trace, self).__init__(*args, **kwargs)\n\n def __del__(self):\n # Work around cyclic reference bugs in networkx.DiGraph\n # See https://github.com/uber/pyro/issues/798\n self._graph.__dict__.clear()\n\n @property\n def nodes(self):\n return self._graph.nodes\n\n @property\n def edges(self):\n return self._graph.edges\n\n @property\n def graph(self):\n return self._graph.graph\n\n @property\n def remove_node(self):\n return self._graph.remove_node\n\n @property\n def add_edge(self):\n return self._graph.add_edge\n\n @property\n def is_directed(self):\n return self._graph.is_directed\n\n @property\n def in_degree(self):\n return self._graph.in_degree\n\n @property\n def successors(self):\n return self._graph.successors\n\n def __contains__(self, site_name):\n return site_name in self._graph\n\n def __iter__(self):\n return iter(self._graph)\n\n def __len__(self):\n return len(self._graph)\n\n def add_node(self, site_name, *args, **kwargs):\n \"\"\"\n :param string site_name: the name of the site to be added\n\n Adds a site to the trace.\n\n Identical to super(Trace, self).add_node,\n but raises an error when attempting to add a duplicate node\n instead of silently overwriting.\n \"\"\"\n # XXX should do more validation than this\n if kwargs[\"type\"] != \"param\":\n assert site_name not in self, \\\n \"site {} already in trace\".format(site_name)\n\n # XXX should copy in case site gets mutated, or dont bother?\n self._graph.add_node(site_name, *args, **kwargs)\n\n def copy(self):\n \"\"\"\n Makes a shallow copy of self with nodes and edges preserved.\n Identical to super(Trace, self).copy(), but preserves the type\n and the self.graph_type attribute\n \"\"\"\n trace = Trace()\n trace._graph = self._graph.copy()\n trace.graph_type = self.graph_type\n return trace\n\n def log_pdf(self, site_filter=lambda name, site: True):\n \"\"\"\n Compute the local and overall log-probabilities of the trace.\n\n The local computation is memoized.\n\n :returns: total log probability.\n :rtype: torch.autograd.Variable\n \"\"\"\n log_p = 0.0\n for name, site in self.nodes.items():\n if site[\"type\"] == \"sample\" and site_filter(name, site):\n try:\n site_log_p = site[\"log_pdf\"]\n except KeyError:\n args, kwargs = site[\"args\"], site[\"kwargs\"]\n site_log_p = site[\"fn\"].log_prob(site[\"value\"], *args, **kwargs)\n site_log_p = scale_tensor(site_log_p, site[\"scale\"]).sum()\n site[\"log_pdf\"] = site_log_p\n _warn_if_nan(name, site_log_p)\n log_p += site_log_p\n return log_p\n\n # XXX This only makes sense when all tensors have compatible shape.\n def batch_log_pdf(self, site_filter=lambda name, site: True):\n \"\"\"\n Compute the batched local and overall log-probabilities of the trace.\n\n 
The local computation is memoized, and also stores the local `.log_pdf()`.\n \"\"\"\n log_p = 0.0\n for name, site in self.nodes.items():\n if site[\"type\"] == \"sample\" and site_filter(name, site):\n try:\n site_log_p = site[\"batch_log_pdf\"]\n except KeyError:\n args, kwargs = site[\"args\"], site[\"kwargs\"]\n site_log_p = site[\"fn\"].log_prob(site[\"value\"], *args, **kwargs)\n site_log_p = scale_tensor(site_log_p, site[\"scale\"])\n site[\"batch_log_pdf\"] = site_log_p\n site[\"log_pdf\"] = site_log_p.sum()\n _warn_if_nan(name, site[\"log_pdf\"])\n # Here log_p may be broadcast to a larger tensor:\n log_p = log_p + site_log_p\n return log_p\n\n def compute_batch_log_pdf(self, site_filter=lambda name, site: True):\n \"\"\"\n Compute the batched local log-probabilities at each site of the trace.\n\n The local computation is memoized, and also stores the local `.log_pdf()`.\n \"\"\"\n for name, site in self.nodes.items():\n if site[\"type\"] == \"sample\" and site_filter(name, site):\n try:\n site[\"batch_log_pdf\"]\n except KeyError:\n args, kwargs = site[\"args\"], site[\"kwargs\"]\n site_log_p = site[\"fn\"].log_prob(site[\"value\"], *args, **kwargs)\n site_log_p = scale_tensor(site_log_p, site[\"scale\"])\n site[\"batch_log_pdf\"] = site_log_p\n site[\"log_pdf\"] = site_log_p.sum()\n _warn_if_nan(name, site[\"log_pdf\"])\n\n def compute_score_parts(self):\n \"\"\"\n Compute the batched local score parts at each site of the trace.\n \"\"\"\n for name, site in self.nodes.items():\n if site[\"type\"] == \"sample\" and \"score_parts\" not in site:\n # Note that ScoreParts overloads the multiplication operator\n # to correctly scale each of its three parts.\n value = site[\"fn\"].score_parts(site[\"value\"], *site[\"args\"], **site[\"kwargs\"]) * site[\"scale\"]\n site[\"score_parts\"] = value\n site[\"batch_log_pdf\"] = value[0]\n site[\"log_pdf\"] = value[0].sum()\n _warn_if_nan(name, site[\"log_pdf\"])\n\n @property\n def observation_nodes(self):\n \"\"\"\n Gets a list of names of observe sites\n \"\"\"\n return [name for name, node in self.nodes.items()\n if node[\"type\"] == \"sample\" and\n node[\"is_observed\"]]\n\n @property\n def stochastic_nodes(self):\n \"\"\"\n Gets a list of names of sample sites\n \"\"\"\n return [name for name, node in self.nodes.items()\n if node[\"type\"] == \"sample\" and\n not node[\"is_observed\"]]\n\n @property\n def reparameterized_nodes(self):\n \"\"\"\n Gets a list of names of sample sites whose stochastic functions\n are reparameterizable primitive distributions\n \"\"\"\n return [name for name, node in self.nodes.items()\n if node[\"type\"] == \"sample\" and\n not node[\"is_observed\"] and\n getattr(node[\"fn\"], \"reparameterized\", False)]\n\n @property\n def nonreparam_stochastic_nodes(self):\n \"\"\"\n Gets a list of names of sample sites whose stochastic functions\n are not reparameterizable primitive distributions\n \"\"\"\n return list(set(self.stochastic_nodes) - set(self.reparameterized_nodes))\n\n def iter_stochastic_nodes(self):\n \"\"\"\n Returns an iterator over stochastic nodes in the trace.\n \"\"\"\n for name, node in self.nodes.items():\n if node[\"type\"] == \"sample\" and not node[\"is_observed\"]:\n yield name, node\n", "path": "pyro/poutine/trace.py"}]}
| 2,476 | 787 |
gh_patches_debug_20201
|
rasdani/github-patches
|
git_diff
|
translate__pootle-4492
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add test if TP uses a proper checker
This commit https://github.com/translate/pootle/commit/1d6ef1c987f2ee421b678fb9ac36e16175e4f364 fixed very hidden bug, let's add a test for it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytest_pootle/fixtures/models/translation_project.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 import pytest
11
12
13 def _require_tp(language, project):
14 """Helper to get/create a new translation project."""
15 from pootle_translationproject.models import create_translation_project
16
17 return create_translation_project(language, project)
18
19
20 def _require_tp_with_obsolete_dir(language, project):
21 """Helper to get/create a translation project in obsolete state."""
22 from pootle_translationproject.models import create_translation_project
23
24 tp = create_translation_project(language, project)
25 tp.directory.makeobsolete()
26
27 return tp
28
29
30 @pytest.fixture
31 def afrikaans_tutorial(afrikaans, tutorial):
32 """Require Afrikaans Tutorial."""
33 return _require_tp(afrikaans, tutorial)
34
35
36 @pytest.fixture
37 def arabic_tutorial_obsolete(arabic, tutorial):
38 """Require Arabic Tutorial in obsolete state."""
39 return _require_tp_with_obsolete_dir(arabic, tutorial)
40
41
42 @pytest.fixture
43 def english_tutorial(english, tutorial):
44 """Require English Tutorial."""
45 return _require_tp(english, tutorial)
46
47
48 @pytest.fixture
49 def french_tutorial(french, tutorial):
50 """Require French Tutorial."""
51 return _require_tp(french, tutorial)
52
53
54 @pytest.fixture
55 def spanish_tutorial(spanish, tutorial):
56 """Require Spanish Tutorial."""
57 return _require_tp(spanish, tutorial)
58
59
60 @pytest.fixture
61 def italian_tutorial(italian, tutorial):
62 """Require Italian Tutorial."""
63 return _require_tp(italian, tutorial)
64
65
66 @pytest.fixture
67 def russian_tutorial(russian, tutorial):
68 """Require Russian Tutorial."""
69 return _require_tp(russian, tutorial)
70
71
72 @pytest.fixture
73 def afrikaans_vfolder_test(afrikaans, vfolder_test):
74 """Require Afrikaans Virtual Folder Test."""
75 return _require_tp(afrikaans, vfolder_test)
76
77
78 @pytest.fixture
79 def templates_tutorial(templates, tutorial):
80 """Require Template Tutorial."""
81 return _require_tp(templates, tutorial)
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pytest_pootle/fixtures/models/translation_project.py b/pytest_pootle/fixtures/models/translation_project.py
--- a/pytest_pootle/fixtures/models/translation_project.py
+++ b/pytest_pootle/fixtures/models/translation_project.py
@@ -7,6 +7,8 @@
# or later license. See the LICENSE file for a copy of the license and the
# AUTHORS file for copyright and authorship information.
+import shutil
+
import pytest
@@ -79,3 +81,25 @@
def templates_tutorial(templates, tutorial):
"""Require Template Tutorial."""
return _require_tp(templates, tutorial)
+
+
+def get_project_checkers():
+ from translate.filters import checks
+
+ return ['standard'] + list(checks.projectcheckers.keys())
+
+
[email protected](params=get_project_checkers())
+def tp_checker_tests(request, english):
+ from pytest_pootle.factories import ProjectFactory
+
+ checker_name = request.param
+ project = ProjectFactory(
+ checkstyle=checker_name,
+ source_language=english)
+
+ def _remove_project_directory():
+ shutil.rmtree(project.get_real_path())
+ request.addfinalizer(_remove_project_directory)
+
+ return (checker_name, project)
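The diff only adds the parametrised fixture; a test that consumes it is still needed to close the issue. A possible shape for such a test, written against the helpers visible in this record (the `english` fixture and `create_translation_project`), is sketched below; the decorator and the final assertion are assumptions, since the attribute that exposes the checker on a TranslationProject is not shown here.

```python
# Hypothetical test, not part of the patch.
import pytest


@pytest.mark.django_db
def test_tp_checker(tp_checker_tests, english):
    from pootle_translationproject.models import create_translation_project

    checker_name, project = tp_checker_tests
    tp = create_translation_project(english, project)

    # The project was created with checkstyle=checker_name, so at minimum the
    # TP must see that setting; a fuller test would also inspect the quality
    # checker object the TP actually uses (the thing the linked commit fixed).
    assert tp.project.checkstyle == checker_name
```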
|
{"golden_diff": "diff --git a/pytest_pootle/fixtures/models/translation_project.py b/pytest_pootle/fixtures/models/translation_project.py\n--- a/pytest_pootle/fixtures/models/translation_project.py\n+++ b/pytest_pootle/fixtures/models/translation_project.py\n@@ -7,6 +7,8 @@\n # or later license. See the LICENSE file for a copy of the license and the\n # AUTHORS file for copyright and authorship information.\n \n+import shutil\n+\n import pytest\n \n \n@@ -79,3 +81,25 @@\n def templates_tutorial(templates, tutorial):\n \"\"\"Require Template Tutorial.\"\"\"\n return _require_tp(templates, tutorial)\n+\n+\n+def get_project_checkers():\n+ from translate.filters import checks\n+\n+ return ['standard'] + list(checks.projectcheckers.keys())\n+\n+\[email protected](params=get_project_checkers())\n+def tp_checker_tests(request, english):\n+ from pytest_pootle.factories import ProjectFactory\n+\n+ checker_name = request.param\n+ project = ProjectFactory(\n+ checkstyle=checker_name,\n+ source_language=english)\n+\n+ def _remove_project_directory():\n+ shutil.rmtree(project.get_real_path())\n+ request.addfinalizer(_remove_project_directory)\n+\n+ return (checker_name, project)\n", "issue": "Add test if TP uses a proper checker\nThis commit https://github.com/translate/pootle/commit/1d6ef1c987f2ee421b678fb9ac36e16175e4f364 fixed very hidden bug, let's add a test for it.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport pytest\n\n\ndef _require_tp(language, project):\n \"\"\"Helper to get/create a new translation project.\"\"\"\n from pootle_translationproject.models import create_translation_project\n\n return create_translation_project(language, project)\n\n\ndef _require_tp_with_obsolete_dir(language, project):\n \"\"\"Helper to get/create a translation project in obsolete state.\"\"\"\n from pootle_translationproject.models import create_translation_project\n\n tp = create_translation_project(language, project)\n tp.directory.makeobsolete()\n\n return tp\n\n\[email protected]\ndef afrikaans_tutorial(afrikaans, tutorial):\n \"\"\"Require Afrikaans Tutorial.\"\"\"\n return _require_tp(afrikaans, tutorial)\n\n\[email protected]\ndef arabic_tutorial_obsolete(arabic, tutorial):\n \"\"\"Require Arabic Tutorial in obsolete state.\"\"\"\n return _require_tp_with_obsolete_dir(arabic, tutorial)\n\n\[email protected]\ndef english_tutorial(english, tutorial):\n \"\"\"Require English Tutorial.\"\"\"\n return _require_tp(english, tutorial)\n\n\[email protected]\ndef french_tutorial(french, tutorial):\n \"\"\"Require French Tutorial.\"\"\"\n return _require_tp(french, tutorial)\n\n\[email protected]\ndef spanish_tutorial(spanish, tutorial):\n \"\"\"Require Spanish Tutorial.\"\"\"\n return _require_tp(spanish, tutorial)\n\n\[email protected]\ndef italian_tutorial(italian, tutorial):\n \"\"\"Require Italian Tutorial.\"\"\"\n return _require_tp(italian, tutorial)\n\n\[email protected]\ndef russian_tutorial(russian, tutorial):\n \"\"\"Require Russian Tutorial.\"\"\"\n return _require_tp(russian, tutorial)\n\n\[email protected]\ndef afrikaans_vfolder_test(afrikaans, vfolder_test):\n \"\"\"Require Afrikaans Virtual Folder Test.\"\"\"\n return _require_tp(afrikaans, vfolder_test)\n\n\[email protected]\ndef 
templates_tutorial(templates, tutorial):\n \"\"\"Require Template Tutorial.\"\"\"\n return _require_tp(templates, tutorial)\n", "path": "pytest_pootle/fixtures/models/translation_project.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport shutil\n\nimport pytest\n\n\ndef _require_tp(language, project):\n \"\"\"Helper to get/create a new translation project.\"\"\"\n from pootle_translationproject.models import create_translation_project\n\n return create_translation_project(language, project)\n\n\ndef _require_tp_with_obsolete_dir(language, project):\n \"\"\"Helper to get/create a translation project in obsolete state.\"\"\"\n from pootle_translationproject.models import create_translation_project\n\n tp = create_translation_project(language, project)\n tp.directory.makeobsolete()\n\n return tp\n\n\[email protected]\ndef afrikaans_tutorial(afrikaans, tutorial):\n \"\"\"Require Afrikaans Tutorial.\"\"\"\n return _require_tp(afrikaans, tutorial)\n\n\[email protected]\ndef arabic_tutorial_obsolete(arabic, tutorial):\n \"\"\"Require Arabic Tutorial in obsolete state.\"\"\"\n return _require_tp_with_obsolete_dir(arabic, tutorial)\n\n\[email protected]\ndef english_tutorial(english, tutorial):\n \"\"\"Require English Tutorial.\"\"\"\n return _require_tp(english, tutorial)\n\n\[email protected]\ndef french_tutorial(french, tutorial):\n \"\"\"Require French Tutorial.\"\"\"\n return _require_tp(french, tutorial)\n\n\[email protected]\ndef spanish_tutorial(spanish, tutorial):\n \"\"\"Require Spanish Tutorial.\"\"\"\n return _require_tp(spanish, tutorial)\n\n\[email protected]\ndef italian_tutorial(italian, tutorial):\n \"\"\"Require Italian Tutorial.\"\"\"\n return _require_tp(italian, tutorial)\n\n\[email protected]\ndef russian_tutorial(russian, tutorial):\n \"\"\"Require Russian Tutorial.\"\"\"\n return _require_tp(russian, tutorial)\n\n\[email protected]\ndef afrikaans_vfolder_test(afrikaans, vfolder_test):\n \"\"\"Require Afrikaans Virtual Folder Test.\"\"\"\n return _require_tp(afrikaans, vfolder_test)\n\n\[email protected]\ndef templates_tutorial(templates, tutorial):\n \"\"\"Require Template Tutorial.\"\"\"\n return _require_tp(templates, tutorial)\n\n\ndef get_project_checkers():\n from translate.filters import checks\n\n return ['standard'] + list(checks.projectcheckers.keys())\n\n\[email protected](params=get_project_checkers())\ndef tp_checker_tests(request, english):\n from pytest_pootle.factories import ProjectFactory\n\n checker_name = request.param\n project = ProjectFactory(\n checkstyle=checker_name,\n source_language=english)\n\n def _remove_project_directory():\n shutil.rmtree(project.get_real_path())\n request.addfinalizer(_remove_project_directory)\n\n return (checker_name, project)\n", "path": "pytest_pootle/fixtures/models/translation_project.py"}]}
| 991 | 279 |
gh_patches_debug_5364
|
rasdani/github-patches
|
git_diff
|
cleanlab__cleanlab-404
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
color_sentence fails in tutorial notebook
<!-- Briefly summarize the issue. -->
In the notebook, `display_issues` highlights all token issues with a call to `color_sentence`:
https://github.com/cleanlab/cleanlab/blob/1a239922fe195d2a6104d6dc3552d53da16380ce/docs/source/tutorials/token_classification.ipynb?short_path=2ebceca#L369-L379
One of the examples trips everything up with the following error:
```
missing ), unterminated subpattern at position 2
```
# Stack trace
From [failed CI job](https://github.com/cleanlab/cleanlab/actions/runs/2996555945):
<details><summary> Click to toggle stack trace</summary>
```bash
---------------------------------------------------------------------------
error Traceback (most recent call last)
Input In [12], in <module>
----> 1 display_issues(issues,given_words,pred_probs=pred_probs,given_labels=labels,
2 exclude=[(0,1),(1,0)],class_names=merged_entities)
File ~/work/cleanlab/cleanlab/cleanlab/token_classification/summary.py:81, in display_issues(issues, given_words, pred_probs, given_labels, exclude, class_names, top)
78 given = class_names[given]
80 shown += 1
---> 81 print("Sentence %d, token %d: \n%s" % (i, j, color_sentence(sentence,word)))
82 if given_labels and not pred_probs:
83 print("Given label: %s\n" % str(given))
File ~/work/cleanlab/cleanlab/cleanlab/internal/token_classification_utils.py:175, in color_sentence(sentence, word)
158 """
159 Searches for a given token in the sentence and returns the sentence where the given token is colored red
160
(...)
172
173 """
174 colored_word = colored(word, "red")
--> 175 colored_sentence, number_of_substitions = re.subn(
176 r"\b{}\b".format(word),colored_word,sentence
177 )
178 if number_of_substitions == 0:
179 # Use basic string manipulation if regex fails
180 colored_sentence = sentence.replace(word, colored_word)
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/re.py:221, in subn(pattern, repl, string, count, flags)
212 def subn(pattern, repl, string, count=0, flags=0):
213 """Return a 2-tuple containing (new_string, number).
214 new_string is the string obtained by replacing the leftmost
215 non-overlapping occurrences of the pattern in the source
(...)
219 If it is a callable, it's passed the Match object and must
220 return a replacement string to be used."""
--> 221 return _compile(pattern,flags).subn(repl, string, count)
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/re.py:304, in _compile(pattern, flags)
302 if not sre_compile.isstring(pattern):
303 raise TypeError("first argument must be string or compiled pattern")
--> 304 p = sre_compile.compile(pattern,flags)
305 if not (flags & DEBUG):
306 if len(_cache) >= _MAXCACHE:
307 # Drop the oldest item
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_compile.py:764, in compile(p, flags)
762 if isstring(p):
763 pattern = p
--> 764 p = sre_parse.parse(p,flags)
765 else:
766 pattern = None
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_parse.py:948, in parse(str, flags, state)
945 state.str = str
947 try:
--> 948 p = _parse_sub(source,state,flags&SRE_FLAG_VERBOSE,0)
949 except Verbose:
950 # the VERBOSE flag was switched on inside the pattern. to be
951 # on the safe side, we'll parse the whole thing again...
952 state = State()
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_parse.py:443, in _parse_sub(source, state, verbose, nested)
441 start = source.tell()
442 while True:
--> 443 itemsappend(_parse(source,state,verbose,nested+1,
444 notnestedandnotitems))
445 if not sourcematch("|"):
446 break
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_parse.py:836, in _parse(source, state, verbose, nested, first)
834 p = _parse_sub(source, state, sub_verbose, nested + 1)
835 if not source.match(")"):
--> 836 raise source.error("missing ), unterminated subpattern",
837 source.tell() - start)
838 if group is not None:
839 state.closegroup(group, p)
error: missing ), unterminated subpattern at position 2
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cleanlab/internal/token_classification_utils.py`
Content:
```
1 import re
2 import string
3 import numpy as np
4 from termcolor import colored
5 from typing import List, Optional, Callable, Tuple
6
7
8 def get_sentence(words: List[str]) -> str:
9 """
10 Get sentence formed by a list of words with minor processing for readability
11
12 Parameters
13 ----------
14 words: List[str]
15 list of word-level tokens
16
17 Returns
18 ----------
19 sentence: string
20 sentence formed by list of word-level tokens
21
22 """
23 sentence = ""
24 for word in words:
25 if word not in string.punctuation or word in ["-", "("]:
26 word = " " + word
27 sentence += word
28 sentence = sentence.replace(" '", "'").replace("( ", "(").strip()
29 return sentence
30
31
32 def filter_sentence(
33 sentences: List[str],
34 condition: Optional[Callable[[str], bool]] = None,
35 ) -> Tuple[List[str], List[bool]]:
36 """
37 Filter sentence based on some condition, and returns filter mask
38
39 Parameters
40 ----------
41 sentences: List[str]
42 list of sentences
43
44 condition: Optional[Callable[[str], bool]]
45 sentence filtering condition
46
47 Returns
48 ---------
49 sentences: List[str]
50 list of sentences filtered
51
52 mask: List[bool]
53 boolean mask such that `mask[i] == True` if the i'th sentence is included in the
54 filtered sentence, otherwise `mask[i] == False`
55
56 """
57 if not condition:
58 condition = lambda sentence: len(sentence) > 1 and "#" not in sentence
59 mask = list(map(condition, sentences))
60 sentences = [sentence for m, sentence in zip(mask, sentences) if m]
61 return sentences, mask
62
63
64 def process_token(token: str, replace: List[Tuple[str, str]] = [("#", "")]) -> str:
65 """
66 Replaces special characters in the tokens
67
68 Parameters
69 ----------
70 token: str
71 token which potentially contains special characters
72
73 replace: List[Tuple[str, str]]
74 list of tuples `(s1, s2)`, where all occurances of s1 are replaced by s2
75
76 Returns
77 ---------
78 processed_token: str
79 processed token whose special character has been replaced
80
81 Note
82 ----
83 Only applies to characters in the original input token.
84 """
85 replace_dict = {re.escape(k): v for (k, v) in replace}
86 pattern = "|".join(replace_dict.keys())
87 compiled_pattern = re.compile(pattern)
88 replacement = lambda match: replace_dict[re.escape(match.group(0))]
89 processed_token = compiled_pattern.sub(replacement, token)
90 return processed_token
91
92
93 def mapping(entities: List[int], maps: List[int]) -> List[int]:
94 """
95 Map a list of entities to its corresponding entities
96
97 Parameters
98 ----------
99 entities: List[int]
100 a list of given entities
101
102 maps: List[int]
103 a list of mapped entities, such that the i'th indexed token should be mapped to `maps[i]`
104
105 Returns
106 ---------
107 mapped_entities: List[int]
108 a list of mapped entities
109
110 Examples
111 --------
112 >>> unique_identities = [0, 1, 2, 3, 4] # ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]
113 >>> maps = [0, 1, 1, 2, 2] # ["O", "PER", "PER", "LOC", "LOC"]
114 >>> mapping(unique_identities, maps)
115 [0, 1, 1, 2, 2] # ["O", "PER", "PER", "LOC", "LOC"]
116 >>> mapping([0, 0, 4, 4, 3, 4, 0, 2], maps)
117 [0, 0, 2, 2, 2, 2, 0, 1] # ["O", "O", "LOC", "LOC", "LOC", "LOC", "O", "PER"]
118 """
119 f = lambda x: maps[x]
120 return list(map(f, entities))
121
122
123 def merge_probs(probs: np.ndarray, maps: List[int]) -> np.ndarray:
124 """
125 Merges model-predictive probabilities with desired mapping
126
127 Parameters
128 ----------
129 probs:
130 np.array of shape `(N, K)`, where N is the number of tokens, and K is the number of classes for the model
131
132 maps: List[int]
133 a list of mapped index, such that the probability of the token being in the i'th class is mapped to the
134 `maps[i]` index. If `maps[i] == -1`, the i'th column of `probs` is ignored. If `np.any(maps == -1)`, the
135 returned probability is re-normalized.
136
137 Returns
138 ---------
139 probs_merged:
140 np.array of shape `(N, K')`, where K' is the number of new classes. Probablities are merged and
141 re-normalized if necessary.
142
143 """
144 old_classes = probs.shape[1]
145 map_size = np.max(maps) + 1
146 probs_merged = np.zeros([len(probs), map_size], dtype=probs.dtype.type)
147
148 for i in range(old_classes):
149 if maps[i] >= 0:
150 probs_merged[:, maps[i]] += probs[:, i]
151 if -1 in maps:
152 row_sums = probs_merged.sum(axis=1)
153 probs_merged /= row_sums[:, np.newaxis]
154 return probs_merged
155
156
157 def color_sentence(sentence: str, word: str) -> str:
158 """
159 Searches for a given token in the sentence and returns the sentence where the given token is colored red
160
161 Parameters
162 ----------
163 sentence:
164 a sentence where the word is searched
165
166 word:
167 keyword to find in `sentence`. Assumes the word exists in the sentence.
168 Returns
169 ---------
170 colored_sentence:
171 `sentence` where the every occurance of the word is colored red, using `termcolor.colored`
172
173 """
174 colored_word = colored(word, "red")
175 colored_sentence, number_of_substitions = re.subn(
176 r"\b{}\b".format(word), colored_word, sentence
177 )
178 if number_of_substitions == 0:
179 # Use basic string manipulation if regex fails
180 colored_sentence = sentence.replace(word, colored_word)
181 return colored_sentence
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cleanlab/internal/token_classification_utils.py b/cleanlab/internal/token_classification_utils.py
--- a/cleanlab/internal/token_classification_utils.py
+++ b/cleanlab/internal/token_classification_utils.py
@@ -173,7 +173,7 @@
"""
colored_word = colored(word, "red")
colored_sentence, number_of_substitions = re.subn(
- r"\b{}\b".format(word), colored_word, sentence
+ r"\b{}\b".format(re.escape(word)), colored_word, sentence
)
if number_of_substitions == 0:
# Use basic string manipulation if regex fails
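The one-line fix above escapes the token before interpolating it into the pattern. A standalone repro of the failure mode, assuming a token that starts with "(" (consistent with "position 2" in the traceback), is:

```python
import re

word = "("  # assumed example token containing a regex metacharacter

try:
    re.subn(r"\b{}\b".format(word), "X", "a ( b")
except re.error as exc:
    print(exc)  # missing ), unterminated subpattern at position 2

# With re.escape, as in the patch, the pattern compiles; it may simply not
# match (\b needs an adjacent word character), in which case color_sentence
# falls back to the plain str.replace branch.
print(re.subn(r"\b{}\b".format(re.escape(word)), "X", "a ( b"))
```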
|
{"golden_diff": "diff --git a/cleanlab/internal/token_classification_utils.py b/cleanlab/internal/token_classification_utils.py\n--- a/cleanlab/internal/token_classification_utils.py\n+++ b/cleanlab/internal/token_classification_utils.py\n@@ -173,7 +173,7 @@\n \"\"\"\n colored_word = colored(word, \"red\")\n colored_sentence, number_of_substitions = re.subn(\n- r\"\\b{}\\b\".format(word), colored_word, sentence\n+ r\"\\b{}\\b\".format(re.escape(word)), colored_word, sentence\n )\n if number_of_substitions == 0:\n # Use basic string manipulation if regex fails\n", "issue": "color_sentence fails in tutorial notebook\n<!-- Briefly summarize the issue. -->\r\n\r\nIn the notebook, `display_issues` highlights all token issues with a call to `color_sentence`:\r\n\r\nhttps://github.com/cleanlab/cleanlab/blob/1a239922fe195d2a6104d6dc3552d53da16380ce/docs/source/tutorials/token_classification.ipynb?short_path=2ebceca#L369-L379\r\n\r\nOne of the examples trips everything up with the following error:\r\n```\r\nmissing ), unterminated subpattern at position 2\r\n```\r\n\r\n\r\n# Stack trace\r\n\r\nFrom [failed CI job](https://github.com/cleanlab/cleanlab/actions/runs/2996555945):\r\n<details><summary> Click to toggle stack trace</summary>\r\n\r\n```bash\r\n---------------------------------------------------------------------------\r\nerror Traceback (most recent call last)\r\nInput In [12], in <module>\r\n----> 1 display_issues(issues,given_words,pred_probs=pred_probs,given_labels=labels,\r\n 2 exclude=[(0,1),(1,0)],class_names=merged_entities)\r\n\r\nFile ~/work/cleanlab/cleanlab/cleanlab/token_classification/summary.py:81, in display_issues(issues, given_words, pred_probs, given_labels, exclude, class_names, top)\r\n 78 given = class_names[given]\r\n 80 shown += 1\r\n---> 81 print(\"Sentence %d, token %d: \\n%s\" % (i, j, color_sentence(sentence,word)))\r\n 82 if given_labels and not pred_probs:\r\n 83 print(\"Given label: %s\\n\" % str(given))\r\n\r\nFile ~/work/cleanlab/cleanlab/cleanlab/internal/token_classification_utils.py:175, in color_sentence(sentence, word)\r\n 158 \"\"\"\r\n 159 Searches for a given token in the sentence and returns the sentence where the given token is colored red\r\n 160 \r\n (...)\r\n 172 \r\n 173 \"\"\"\r\n 174 colored_word = colored(word, \"red\")\r\n--> 175 colored_sentence, number_of_substitions = re.subn(\r\n 176 r\"\\b{}\\b\".format(word),colored_word,sentence\r\n 177 )\r\n 178 if number_of_substitions == 0:\r\n 179 # Use basic string manipulation if regex fails\r\n 180 colored_sentence = sentence.replace(word, colored_word)\r\n\r\nFile /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/re.py:221, in subn(pattern, repl, string, count, flags)\r\n 212 def subn(pattern, repl, string, count=0, flags=0):\r\n 213 \"\"\"Return a 2-tuple containing (new_string, number).\r\n 214 new_string is the string obtained by replacing the leftmost\r\n 215 non-overlapping occurrences of the pattern in the source\r\n (...)\r\n 219 If it is a callable, it's passed the Match object and must\r\n 220 return a replacement string to be used.\"\"\"\r\n--> 221 return _compile(pattern,flags).subn(repl, string, count)\r\n\r\nFile /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/re.py:304, in _compile(pattern, flags)\r\n 302 if not sre_compile.isstring(pattern):\r\n 303 raise TypeError(\"first argument must be string or compiled pattern\")\r\n--> 304 p = sre_compile.compile(pattern,flags)\r\n 305 if not (flags & DEBUG):\r\n 306 if len(_cache) >= _MAXCACHE:\r\n 307 # Drop the oldest 
item\r\n\r\nFile /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_compile.py:764, in compile(p, flags)\r\n 762 if isstring(p):\r\n 763 pattern = p\r\n--> 764 p = sre_parse.parse(p,flags)\r\n 765 else:\r\n 766 pattern = None\r\n\r\nFile /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_parse.py:948, in parse(str, flags, state)\r\n 945 state.str = str\r\n 947 try:\r\n--> 948 p = _parse_sub(source,state,flags&SRE_FLAG_VERBOSE,0)\r\n 949 except Verbose:\r\n 950 # the VERBOSE flag was switched on inside the pattern. to be\r\n 951 # on the safe side, we'll parse the whole thing again...\r\n 952 state = State()\r\n\r\nFile /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_parse.py:443, in _parse_sub(source, state, verbose, nested)\r\n 441 start = source.tell()\r\n 442 while True:\r\n--> 443 itemsappend(_parse(source,state,verbose,nested+1,\r\n 444 notnestedandnotitems))\r\n 445 if not sourcematch(\"|\"):\r\n 446 break\r\n\r\nFile /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_parse.py:836, in _parse(source, state, verbose, nested, first)\r\n 834 p = _parse_sub(source, state, sub_verbose, nested + 1)\r\n 835 if not source.match(\")\"):\r\n--> 836 raise source.error(\"missing ), unterminated subpattern\",\r\n 837 source.tell() - start)\r\n 838 if group is not None:\r\n 839 state.closegroup(group, p)\r\n\r\nerror: missing ), unterminated subpattern at position 2\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "import re\nimport string\nimport numpy as np\nfrom termcolor import colored\nfrom typing import List, Optional, Callable, Tuple\n\n\ndef get_sentence(words: List[str]) -> str:\n \"\"\"\n Get sentence formed by a list of words with minor processing for readability\n\n Parameters\n ----------\n words: List[str]\n list of word-level tokens\n\n Returns\n ----------\n sentence: string\n sentence formed by list of word-level tokens\n\n \"\"\"\n sentence = \"\"\n for word in words:\n if word not in string.punctuation or word in [\"-\", \"(\"]:\n word = \" \" + word\n sentence += word\n sentence = sentence.replace(\" '\", \"'\").replace(\"( \", \"(\").strip()\n return sentence\n\n\ndef filter_sentence(\n sentences: List[str],\n condition: Optional[Callable[[str], bool]] = None,\n) -> Tuple[List[str], List[bool]]:\n \"\"\"\n Filter sentence based on some condition, and returns filter mask\n\n Parameters\n ----------\n sentences: List[str]\n list of sentences\n\n condition: Optional[Callable[[str], bool]]\n sentence filtering condition\n\n Returns\n ---------\n sentences: List[str]\n list of sentences filtered\n\n mask: List[bool]\n boolean mask such that `mask[i] == True` if the i'th sentence is included in the\n filtered sentence, otherwise `mask[i] == False`\n\n \"\"\"\n if not condition:\n condition = lambda sentence: len(sentence) > 1 and \"#\" not in sentence\n mask = list(map(condition, sentences))\n sentences = [sentence for m, sentence in zip(mask, sentences) if m]\n return sentences, mask\n\n\ndef process_token(token: str, replace: List[Tuple[str, str]] = [(\"#\", \"\")]) -> str:\n \"\"\"\n Replaces special characters in the tokens\n\n Parameters\n ----------\n token: str\n token which potentially contains special characters\n\n replace: List[Tuple[str, str]]\n list of tuples `(s1, s2)`, where all occurances of s1 are replaced by s2\n\n Returns\n ---------\n processed_token: str\n processed token whose special character has been replaced\n\n Note\n ----\n Only applies to characters in the original input token.\n \"\"\"\n replace_dict = 
{re.escape(k): v for (k, v) in replace}\n pattern = \"|\".join(replace_dict.keys())\n compiled_pattern = re.compile(pattern)\n replacement = lambda match: replace_dict[re.escape(match.group(0))]\n processed_token = compiled_pattern.sub(replacement, token)\n return processed_token\n\n\ndef mapping(entities: List[int], maps: List[int]) -> List[int]:\n \"\"\"\n Map a list of entities to its corresponding entities\n\n Parameters\n ----------\n entities: List[int]\n a list of given entities\n\n maps: List[int]\n a list of mapped entities, such that the i'th indexed token should be mapped to `maps[i]`\n\n Returns\n ---------\n mapped_entities: List[int]\n a list of mapped entities\n\n Examples\n --------\n >>> unique_identities = [0, 1, 2, 3, 4] # [\"O\", \"B-PER\", \"I-PER\", \"B-LOC\", \"I-LOC\"]\n >>> maps = [0, 1, 1, 2, 2] # [\"O\", \"PER\", \"PER\", \"LOC\", \"LOC\"]\n >>> mapping(unique_identities, maps)\n [0, 1, 1, 2, 2] # [\"O\", \"PER\", \"PER\", \"LOC\", \"LOC\"]\n >>> mapping([0, 0, 4, 4, 3, 4, 0, 2], maps)\n [0, 0, 2, 2, 2, 2, 0, 1] # [\"O\", \"O\", \"LOC\", \"LOC\", \"LOC\", \"LOC\", \"O\", \"PER\"]\n \"\"\"\n f = lambda x: maps[x]\n return list(map(f, entities))\n\n\ndef merge_probs(probs: np.ndarray, maps: List[int]) -> np.ndarray:\n \"\"\"\n Merges model-predictive probabilities with desired mapping\n\n Parameters\n ----------\n probs:\n np.array of shape `(N, K)`, where N is the number of tokens, and K is the number of classes for the model\n\n maps: List[int]\n a list of mapped index, such that the probability of the token being in the i'th class is mapped to the\n `maps[i]` index. If `maps[i] == -1`, the i'th column of `probs` is ignored. If `np.any(maps == -1)`, the\n returned probability is re-normalized.\n\n Returns\n ---------\n probs_merged:\n np.array of shape `(N, K')`, where K' is the number of new classes. Probablities are merged and\n re-normalized if necessary.\n\n \"\"\"\n old_classes = probs.shape[1]\n map_size = np.max(maps) + 1\n probs_merged = np.zeros([len(probs), map_size], dtype=probs.dtype.type)\n\n for i in range(old_classes):\n if maps[i] >= 0:\n probs_merged[:, maps[i]] += probs[:, i]\n if -1 in maps:\n row_sums = probs_merged.sum(axis=1)\n probs_merged /= row_sums[:, np.newaxis]\n return probs_merged\n\n\ndef color_sentence(sentence: str, word: str) -> str:\n \"\"\"\n Searches for a given token in the sentence and returns the sentence where the given token is colored red\n\n Parameters\n ----------\n sentence:\n a sentence where the word is searched\n\n word:\n keyword to find in `sentence`. 
Assumes the word exists in the sentence.\n Returns\n ---------\n colored_sentence:\n `sentence` where the every occurance of the word is colored red, using `termcolor.colored`\n\n \"\"\"\n colored_word = colored(word, \"red\")\n colored_sentence, number_of_substitions = re.subn(\n r\"\\b{}\\b\".format(word), colored_word, sentence\n )\n if number_of_substitions == 0:\n # Use basic string manipulation if regex fails\n colored_sentence = sentence.replace(word, colored_word)\n return colored_sentence\n", "path": "cleanlab/internal/token_classification_utils.py"}], "after_files": [{"content": "import re\nimport string\nimport numpy as np\nfrom termcolor import colored\nfrom typing import List, Optional, Callable, Tuple\n\n\ndef get_sentence(words: List[str]) -> str:\n \"\"\"\n Get sentence formed by a list of words with minor processing for readability\n\n Parameters\n ----------\n words: List[str]\n list of word-level tokens\n\n Returns\n ----------\n sentence: string\n sentence formed by list of word-level tokens\n\n \"\"\"\n sentence = \"\"\n for word in words:\n if word not in string.punctuation or word in [\"-\", \"(\"]:\n word = \" \" + word\n sentence += word\n sentence = sentence.replace(\" '\", \"'\").replace(\"( \", \"(\").strip()\n return sentence\n\n\ndef filter_sentence(\n sentences: List[str],\n condition: Optional[Callable[[str], bool]] = None,\n) -> Tuple[List[str], List[bool]]:\n \"\"\"\n Filter sentence based on some condition, and returns filter mask\n\n Parameters\n ----------\n sentences: List[str]\n list of sentences\n\n condition: Optional[Callable[[str], bool]]\n sentence filtering condition\n\n Returns\n ---------\n sentences: List[str]\n list of sentences filtered\n\n mask: List[bool]\n boolean mask such that `mask[i] == True` if the i'th sentence is included in the\n filtered sentence, otherwise `mask[i] == False`\n\n \"\"\"\n if not condition:\n condition = lambda sentence: len(sentence) > 1 and \"#\" not in sentence\n mask = list(map(condition, sentences))\n sentences = [sentence for m, sentence in zip(mask, sentences) if m]\n return sentences, mask\n\n\ndef process_token(token: str, replace: List[Tuple[str, str]] = [(\"#\", \"\")]) -> str:\n \"\"\"\n Replaces special characters in the tokens\n\n Parameters\n ----------\n token: str\n token which potentially contains special characters\n\n replace: List[Tuple[str, str]]\n list of tuples `(s1, s2)`, where all occurances of s1 are replaced by s2\n\n Returns\n ---------\n processed_token: str\n processed token whose special character has been replaced\n\n Note\n ----\n Only applies to characters in the original input token.\n \"\"\"\n replace_dict = {re.escape(k): v for (k, v) in replace}\n pattern = \"|\".join(replace_dict.keys())\n compiled_pattern = re.compile(pattern)\n replacement = lambda match: replace_dict[re.escape(match.group(0))]\n processed_token = compiled_pattern.sub(replacement, token)\n return processed_token\n\n\ndef mapping(entities: List[int], maps: List[int]) -> List[int]:\n \"\"\"\n Map a list of entities to its corresponding entities\n\n Parameters\n ----------\n entities: List[int]\n a list of given entities\n\n maps: List[int]\n a list of mapped entities, such that the i'th indexed token should be mapped to `maps[i]`\n\n Returns\n ---------\n mapped_entities: List[int]\n a list of mapped entities\n\n Examples\n --------\n >>> unique_identities = [0, 1, 2, 3, 4] # [\"O\", \"B-PER\", \"I-PER\", \"B-LOC\", \"I-LOC\"]\n >>> maps = [0, 1, 1, 2, 2] # [\"O\", \"PER\", \"PER\", \"LOC\", \"LOC\"]\n >>> 
mapping(unique_identities, maps)\n [0, 1, 1, 2, 2] # [\"O\", \"PER\", \"PER\", \"LOC\", \"LOC\"]\n >>> mapping([0, 0, 4, 4, 3, 4, 0, 2], maps)\n [0, 0, 2, 2, 2, 2, 0, 1] # [\"O\", \"O\", \"LOC\", \"LOC\", \"LOC\", \"LOC\", \"O\", \"PER\"]\n \"\"\"\n f = lambda x: maps[x]\n return list(map(f, entities))\n\n\ndef merge_probs(probs: np.ndarray, maps: List[int]) -> np.ndarray:\n \"\"\"\n Merges model-predictive probabilities with desired mapping\n\n Parameters\n ----------\n probs:\n np.array of shape `(N, K)`, where N is the number of tokens, and K is the number of classes for the model\n\n maps: List[int]\n a list of mapped index, such that the probability of the token being in the i'th class is mapped to the\n `maps[i]` index. If `maps[i] == -1`, the i'th column of `probs` is ignored. If `np.any(maps == -1)`, the\n returned probability is re-normalized.\n\n Returns\n ---------\n probs_merged:\n np.array of shape `(N, K')`, where K' is the number of new classes. Probablities are merged and\n re-normalized if necessary.\n\n \"\"\"\n old_classes = probs.shape[1]\n map_size = np.max(maps) + 1\n probs_merged = np.zeros([len(probs), map_size], dtype=probs.dtype.type)\n\n for i in range(old_classes):\n if maps[i] >= 0:\n probs_merged[:, maps[i]] += probs[:, i]\n if -1 in maps:\n row_sums = probs_merged.sum(axis=1)\n probs_merged /= row_sums[:, np.newaxis]\n return probs_merged\n\n\ndef color_sentence(sentence: str, word: str) -> str:\n \"\"\"\n Searches for a given token in the sentence and returns the sentence where the given token is colored red\n\n Parameters\n ----------\n sentence:\n a sentence where the word is searched\n\n word:\n keyword to find in `sentence`. Assumes the word exists in the sentence.\n Returns\n ---------\n colored_sentence:\n `sentence` where the every occurance of the word is colored red, using `termcolor.colored`\n\n \"\"\"\n colored_word = colored(word, \"red\")\n colored_sentence, number_of_substitions = re.subn(\n r\"\\b{}\\b\".format(re.escape(word)), colored_word, sentence\n )\n if number_of_substitions == 0:\n # Use basic string manipulation if regex fails\n colored_sentence = sentence.replace(word, colored_word)\n return colored_sentence\n", "path": "cleanlab/internal/token_classification_utils.py"}]}
| 3,479 | 141 |
gh_patches_debug_24147
|
rasdani/github-patches
|
git_diff
|
UTNkar__moore-310
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Raw HTML content type
<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->
### Description
There is currently no content type for raw HTML that can be used for the Jexpo. A special content type could be created for Jexpo as it works right now but since the way Jexpo is initialized can change (according to themselves), they recommend using a raw HTML.
There should be a content type for raw HTML.
<!-- Please select the appropriate "topic category"/blue and "issue type"/yellow label -->
Forms are missing form introduction.
### Description
Creating a new form in Wagtail lets you specify an introduction to the form, which isn't displayed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/home/models/web_page.py`
Content:
```
1 from __future__ import absolute_import, unicode_literals
2 from django.db import models
3 from django.utils.translation import ugettext_lazy as _
4 from wagtail.admin.edit_handlers import FieldPanel, StreamFieldPanel, \
5 TabbedInterface, ObjectList
6 from wagtail.core.fields import StreamField
7 from wagtail.core.models import Page
8 from blocks.models import WAGTAIL_STATIC_BLOCKTYPES
9 from google.models import GoogleFormBlock, GoogleDriveBlock, \
10 GoogleCalendarBlock
11 from news.models import LatestNewsBlock
12 from utils.translation import TranslatedField
13
14
15 class WebPage(Page):
16 # ---- General Page information ------
17
18 title_sv = models.CharField(max_length=255)
19 translated_title = TranslatedField('title', 'title_sv')
20
21 body_en = StreamField(
22 WAGTAIL_STATIC_BLOCKTYPES + [
23 ('google_calendar', GoogleCalendarBlock()),
24 ('google_drive', GoogleDriveBlock()),
25 ('google_form', GoogleFormBlock()),
26 ('news', LatestNewsBlock()),
27 ],
28 blank=True,
29 )
30 body_sv = StreamField(
31 WAGTAIL_STATIC_BLOCKTYPES + [
32 ('google_calendar', GoogleCalendarBlock()),
33 ('google_drive', GoogleDriveBlock()),
34 ('google_form', GoogleFormBlock()),
35 ('news', LatestNewsBlock()),
36 ],
37 blank=True,
38 )
39 body = TranslatedField('body_en', 'body_sv')
40
41 content_panels_en = Page.content_panels + [
42 StreamFieldPanel('body_en'),
43 ]
44
45 content_panels_sv = [
46 FieldPanel('title_sv', classname="full title"),
47 StreamFieldPanel('body_sv'),
48 ]
49
50 edit_handler = TabbedInterface([
51 ObjectList(content_panels_en, heading=_('English')),
52 ObjectList(content_panels_sv, heading=_('Swedish')),
53 ObjectList(Page.promote_panels, heading=_('Promote')),
54 ObjectList(Page.settings_panels, heading=_('Settings')),
55 ])
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/home/models/web_page.py b/src/home/models/web_page.py
--- a/src/home/models/web_page.py
+++ b/src/home/models/web_page.py
@@ -5,6 +5,7 @@
TabbedInterface, ObjectList
from wagtail.core.fields import StreamField
from wagtail.core.models import Page
+from wagtail.core.blocks import RawHTMLBlock
from blocks.models import WAGTAIL_STATIC_BLOCKTYPES
from google.models import GoogleFormBlock, GoogleDriveBlock, \
GoogleCalendarBlock
@@ -24,6 +25,7 @@
('google_drive', GoogleDriveBlock()),
('google_form', GoogleFormBlock()),
('news', LatestNewsBlock()),
+ ('html', RawHTMLBlock(group="Basic")),
],
blank=True,
)
@@ -33,6 +35,7 @@
('google_drive', GoogleDriveBlock()),
('google_form', GoogleFormBlock()),
('news', LatestNewsBlock()),
+ ('html', RawHTMLBlock(group="Basic")),
],
blank=True,
)
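Beyond the two StreamField definitions touched above, a schema migration for the `home` app would be expected to accompany this change, since StreamField block lists are serialised into migrations. As a rough sketch of what the new block accepts and emits (illustration only; assumes Wagtail 2.x to match the `wagtail.core` imports, and the markup is an invented placeholder rather than the real Jexpo snippet):

```python
from wagtail.core.blocks import RawHTMLBlock

block = RawHTMLBlock(group="Basic")
value = block.to_python('<div id="jexpo-widget"></div>')  # placeholder markup
print(block.render(value))  # rendered verbatim, so whatever editors paste is trusted as-is
```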
|
{"golden_diff": "diff --git a/src/home/models/web_page.py b/src/home/models/web_page.py\n--- a/src/home/models/web_page.py\n+++ b/src/home/models/web_page.py\n@@ -5,6 +5,7 @@\n TabbedInterface, ObjectList\n from wagtail.core.fields import StreamField\n from wagtail.core.models import Page\n+from wagtail.core.blocks import RawHTMLBlock\n from blocks.models import WAGTAIL_STATIC_BLOCKTYPES\n from google.models import GoogleFormBlock, GoogleDriveBlock, \\\n GoogleCalendarBlock\n@@ -24,6 +25,7 @@\n ('google_drive', GoogleDriveBlock()),\n ('google_form', GoogleFormBlock()),\n ('news', LatestNewsBlock()),\n+ ('html', RawHTMLBlock(group=\"Basic\")),\n ],\n blank=True,\n )\n@@ -33,6 +35,7 @@\n ('google_drive', GoogleDriveBlock()),\n ('google_form', GoogleFormBlock()),\n ('news', LatestNewsBlock()),\n+ ('html', RawHTMLBlock(group=\"Basic\")),\n ],\n blank=True,\n )\n", "issue": "Raw HTML content type\n<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->\r\n\r\n### Description\r\nThere is currently no content type for raw HTML that can be used for the Jexpo. A special content type could be created for Jexpo as it works right now but since the way Jexpo is initialized can change (according to themselves), they recommend using a raw HTML. \r\n\r\nThere should be a content type for raw HTML.\r\n\r\n<!-- Please select the appropriate \"topic category\"/blue and \"issue type\"/yellow label -->\r\n\nForms are missing form introduction.\n### Description\r\n\r\nCreating a new form in Wagtail lets you specify an introduction to the form, which isn't displayed.\r\n\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom wagtail.admin.edit_handlers import FieldPanel, StreamFieldPanel, \\\n TabbedInterface, ObjectList\nfrom wagtail.core.fields import StreamField\nfrom wagtail.core.models import Page\nfrom blocks.models import WAGTAIL_STATIC_BLOCKTYPES\nfrom google.models import GoogleFormBlock, GoogleDriveBlock, \\\n GoogleCalendarBlock\nfrom news.models import LatestNewsBlock\nfrom utils.translation import TranslatedField\n\n\nclass WebPage(Page):\n # ---- General Page information ------\n\n title_sv = models.CharField(max_length=255)\n translated_title = TranslatedField('title', 'title_sv')\n\n body_en = StreamField(\n WAGTAIL_STATIC_BLOCKTYPES + [\n ('google_calendar', GoogleCalendarBlock()),\n ('google_drive', GoogleDriveBlock()),\n ('google_form', GoogleFormBlock()),\n ('news', LatestNewsBlock()),\n ],\n blank=True,\n )\n body_sv = StreamField(\n WAGTAIL_STATIC_BLOCKTYPES + [\n ('google_calendar', GoogleCalendarBlock()),\n ('google_drive', GoogleDriveBlock()),\n ('google_form', GoogleFormBlock()),\n ('news', LatestNewsBlock()),\n ],\n blank=True,\n )\n body = TranslatedField('body_en', 'body_sv')\n\n content_panels_en = Page.content_panels + [\n StreamFieldPanel('body_en'),\n ]\n\n content_panels_sv = [\n FieldPanel('title_sv', classname=\"full title\"),\n StreamFieldPanel('body_sv'),\n ]\n\n edit_handler = TabbedInterface([\n ObjectList(content_panels_en, heading=_('English')),\n ObjectList(content_panels_sv, heading=_('Swedish')),\n ObjectList(Page.promote_panels, heading=_('Promote')),\n ObjectList(Page.settings_panels, heading=_('Settings')),\n ])\n", "path": "src/home/models/web_page.py"}], "after_files": [{"content": "from __future__ import absolute_import, unicode_literals\nfrom django.db import 
models\nfrom django.utils.translation import ugettext_lazy as _\nfrom wagtail.admin.edit_handlers import FieldPanel, StreamFieldPanel, \\\n TabbedInterface, ObjectList\nfrom wagtail.core.fields import StreamField\nfrom wagtail.core.models import Page\nfrom wagtail.core.blocks import RawHTMLBlock\nfrom blocks.models import WAGTAIL_STATIC_BLOCKTYPES\nfrom google.models import GoogleFormBlock, GoogleDriveBlock, \\\n GoogleCalendarBlock\nfrom news.models import LatestNewsBlock\nfrom utils.translation import TranslatedField\n\n\nclass WebPage(Page):\n # ---- General Page information ------\n\n title_sv = models.CharField(max_length=255)\n translated_title = TranslatedField('title', 'title_sv')\n\n body_en = StreamField(\n WAGTAIL_STATIC_BLOCKTYPES + [\n ('google_calendar', GoogleCalendarBlock()),\n ('google_drive', GoogleDriveBlock()),\n ('google_form', GoogleFormBlock()),\n ('news', LatestNewsBlock()),\n ('html', RawHTMLBlock(group=\"Basic\")),\n ],\n blank=True,\n )\n body_sv = StreamField(\n WAGTAIL_STATIC_BLOCKTYPES + [\n ('google_calendar', GoogleCalendarBlock()),\n ('google_drive', GoogleDriveBlock()),\n ('google_form', GoogleFormBlock()),\n ('news', LatestNewsBlock()),\n ('html', RawHTMLBlock(group=\"Basic\")),\n ],\n blank=True,\n )\n body = TranslatedField('body_en', 'body_sv')\n\n content_panels_en = Page.content_panels + [\n StreamFieldPanel('body_en'),\n ]\n\n content_panels_sv = [\n FieldPanel('title_sv', classname=\"full title\"),\n StreamFieldPanel('body_sv'),\n ]\n\n edit_handler = TabbedInterface([\n ObjectList(content_panels_en, heading=_('English')),\n ObjectList(content_panels_sv, heading=_('Swedish')),\n ObjectList(Page.promote_panels, heading=_('Promote')),\n ObjectList(Page.settings_panels, heading=_('Settings')),\n ])\n", "path": "src/home/models/web_page.py"}]}
| 919 | 227 |
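For reference, the fix in this row registers Wagtail's built-in `RawHTMLBlock` as an extra StreamField block type. A minimal sketch of that pattern, assuming Wagtail 2.x-style imports as in the row's own code; `MyPage` and `body` are illustrative names, not the project's real ones:

```python
# Sketch only: exposing a raw-HTML block type in a Wagtail StreamField,
# mirroring the ('html', RawHTMLBlock(group="Basic")) entry added by the diff.
from wagtail.core.blocks import RawHTMLBlock
from wagtail.core.fields import StreamField
from wagtail.core.models import Page


class MyPage(Page):
    body = StreamField(
        [
            # group="Basic" places the block in the "Basic" section of the editor's chooser
            ("html", RawHTMLBlock(group="Basic")),
        ],
        blank=True,
    )
```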
gh_patches_debug_14785
|
rasdani/github-patches
|
git_diff
|
translate__pootle-6584
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Team page is very slow to load
not sure of the cause, but new team page is loading v slowly
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_language/views.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.contrib import messages
10 from django.http import Http404
11 from django.shortcuts import redirect, render
12 from django.urls import reverse
13 from django.utils.functional import cached_property
14 from django.utils.safestring import mark_safe
15
16 from pootle.core.browser import make_project_item
17 from pootle.core.decorators import (
18 get_object_or_404, get_path_obj, permission_required)
19 from pootle.core.exceptions import Http400
20 from pootle.core.views import PootleBrowseView, PootleTranslateView
21 from pootle.core.views.admin import PootleFormView
22 from pootle.core.views.decorators import requires_permission, set_permissions
23 from pootle.core.views.formtable import Formtable
24 from pootle.core.views.mixins import PootleJSONMixin
25 from pootle.i18n import formatter
26 from pootle.i18n.gettext import ugettext_lazy as _, ungettext_lazy
27 from pootle_misc.util import cmp_by_last_activity
28 from pootle_store.constants import STATES_MAP
29
30 from .apps import PootleLanguageConfig
31 from .forms import (
32 LanguageSpecialCharsForm, LanguageSuggestionAdminForm,
33 LanguageTeamAdminForm, LanguageTeamNewMemberSearchForm)
34 from .models import Language
35
36
37 class LanguageMixin(object):
38 ns = "pootle.language"
39 sw_version = PootleLanguageConfig.version
40 model = Language
41 browse_url_path = "pootle-language-browse"
42 translate_url_path = "pootle-language-translate"
43 template_extends = 'languages/base.html'
44
45 @property
46 def language(self):
47 return self.object
48
49 @property
50 def permission_context(self):
51 return self.get_object().directory
52
53 @property
54 def url_kwargs(self):
55 return {"language_code": self.object.code}
56
57 def get_object(self):
58 return self.object
59
60 @cached_property
61 def object(self):
62 lang = Language.get_canonical(self.kwargs["language_code"])
63 if lang is None:
64 raise Http404
65 return lang
66
67 def get(self, *args, **kwargs):
68 self.object = self.get_object()
69 if self.object.code != kwargs["language_code"]:
70 return redirect(
71 self.url_pattern_name,
72 self.object.code,
73 permanent=True)
74 return super(LanguageMixin, self).get(*args, **kwargs)
75
76
77 class LanguageBrowseView(LanguageMixin, PootleBrowseView):
78 url_pattern_name = "pootle-language-browse"
79 view_name = "language"
80
81 @cached_property
82 def object_children(self):
83 items = [make_project_item(tp)
84 for tp in self.object.get_children_for_user(self.request.user)]
85 items = self.add_child_stats(items)
86 items.sort(cmp_by_last_activity)
87 return items
88
89 @property
90 def language(self):
91 return {
92 'code': self.object.code,
93 'name': self.object.name}
94
95 def get(self, *args, **kwargs):
96 response = super(LanguageBrowseView, self).get(*args, **kwargs)
97 response.set_cookie('pootle-language', self.object.code)
98 return response
99
100
101 class LanguageTranslateView(LanguageMixin, PootleTranslateView):
102 url_pattern_name = "pootle-language-translate"
103
104
105 @get_path_obj
106 @permission_required('administrate')
107 def language_characters_admin(request, language):
108 form = LanguageSpecialCharsForm(request.POST
109 if request.method == 'POST'
110 else None,
111 instance=language)
112 if form.is_valid():
113 form.save()
114 return redirect('pootle-language-browse', language.code)
115
116 ctx = {
117 'page': 'admin-characters',
118
119 'browse_url': reverse('pootle-language-browse', kwargs={
120 'language_code': language.code,
121 }),
122 'translate_url': reverse('pootle-language-translate', kwargs={
123 'language_code': language.code,
124 }),
125
126 'language': language,
127 'directory': language.directory,
128 'form': form,
129 }
130
131 return render(request, 'languages/admin/characters.html', ctx)
132
133
134 class SuggestionFormtable(Formtable):
135 row_field = "suggestions"
136 filters_template = "languages/admin/includes/suggestions_header.html"
137
138 @property
139 def messages(self):
140 return self.kwargs.get("messages", [])
141
142
143 class SuggestionDisplay(object):
144
145 def __init__(self, suggestion):
146 self.__suggestion__ = suggestion
147
148 @property
149 def unit(self):
150 return self.__suggestion__.unit.source_f
151
152 @property
153 def project(self):
154 tp = self.__suggestion__.unit.store.translation_project
155 return mark_safe(
156 "<a href='%s'>%s</a>"
157 % (tp.get_absolute_url(),
158 tp.project.code))
159
160 @property
161 def unit_state(self):
162 return STATES_MAP[self.__suggestion__.unit.state]
163
164 @property
165 def unit_link(self):
166 return mark_safe(
167 "<a href='%s'>#%s</a>"
168 % (self.__suggestion__.unit.get_translate_url(),
169 self.__suggestion__.unit.id))
170
171 def __getattr__(self, k):
172 try:
173 return getattr(self.__suggestion__, k)
174 except AttributeError:
175 return self.__getattribute__(k)
176
177
178 class PootleLanguageAdminFormView(PootleFormView):
179
180 @property
181 def permission_context(self):
182 return self.language.directory
183
184 @set_permissions
185 @requires_permission("administrate")
186 def dispatch(self, request, *args, **kwargs):
187 # get funky with the request 8/
188 return super(
189 PootleLanguageAdminFormView, self).dispatch(request, *args, **kwargs)
190
191 @cached_property
192 def language(self):
193 return get_object_or_404(
194 Language.objects.select_related("directory"),
195 code=self.kwargs["language_code"])
196
197 def get_form_kwargs(self):
198 kwargs = super(PootleLanguageAdminFormView, self).get_form_kwargs()
199 kwargs["language"] = self.language
200 return kwargs
201
202 @property
203 def success_kwargs(self):
204 return dict(language_code=self.language.code)
205
206
207 class LanguageSuggestionAdminView(PootleLanguageAdminFormView):
208 template_name = 'languages/admin/language_team_suggestions.html'
209 form_class = LanguageSuggestionAdminForm
210 success_url_pattern = "pootle-language-admin-suggestions"
211 formtable_columns = (
212 _("Unit"),
213 _("State"),
214 _("Source"),
215 _("Suggestion"),
216 _("Suggested by"),
217 _("Suggested at"),
218 _("Project"))
219
220 @property
221 def default_form_kwargs(self):
222 return dict(
223 page_no=1,
224 results_per_page=10)
225
226 def add_success_message(self, form):
227 count = (
228 form.fields["suggestions"].queryset.count()
229 if form.cleaned_data["select_all"]
230 else len(form.cleaned_data["suggestions"]))
231 reject_and_notify = (
232 form.cleaned_data["actions"] == "reject"
233 and form.cleaned_data["comment"])
234 accept_and_notify = (
235 form.cleaned_data["actions"] == "accept"
236 and form.cleaned_data["comment"])
237 if reject_and_notify:
238 message = ungettext_lazy(
239 "Rejected %s suggestion with comment. User will be notified",
240 "Rejected %s suggestions with comment. Users will be notified",
241 count, count)
242 elif accept_and_notify:
243 message = ungettext_lazy(
244 "Accepted %s suggestion with comment. User will be notified",
245 "Accepted %s suggestions with comment. Users will be notified",
246 count, count)
247 elif form.cleaned_data["actions"] == "reject":
248 message = ungettext_lazy(
249 "Rejected %s suggestion",
250 "Rejected %s suggestions",
251 count, count)
252 else:
253 message = ungettext_lazy(
254 "Accepted %s suggestion",
255 "Accepted %s suggestions",
256 count, count)
257 messages.success(self.request, message)
258
259 def get_context_data(self, **kwargs):
260 context = super(
261 LanguageSuggestionAdminView, self).get_context_data(**kwargs)
262 context["page"] = "admin-suggestions"
263 context["language"] = self.language
264 form = context["form"]
265 form.is_valid()
266 batch = form.batch()
267 form.fields["suggestions"].choices = [
268 (item.id, SuggestionDisplay(item))
269 for item in
270 batch.object_list]
271 context["formtable"] = SuggestionFormtable(
272 form,
273 columns=self.formtable_columns,
274 page=batch,
275 messages=messages.get_messages(self.request))
276 return context
277
278 def get_form_kwargs(self):
279 kwargs = super(LanguageSuggestionAdminView, self).get_form_kwargs()
280 if not self.request.POST:
281 kwargs["data"] = self.default_form_kwargs
282 kwargs["user"] = self.request.user
283 return kwargs
284
285
286 class LanguageTeamAdminFormView(PootleLanguageAdminFormView):
287 form_class = LanguageTeamAdminForm
288 template_name = "languages/admin/language_team.html"
289 success_url_pattern = "pootle-language-admin-team"
290
291 def get_context_data(self, **kwargs):
292 context = super(LanguageTeamAdminFormView, self).get_context_data(**kwargs)
293 form = context["form"]
294 context["tps"] = self.language.translationproject_set.exclude(
295 project__disabled=True)
296 stats = self.language.data_tool.get_stats(
297 include_children=False,
298 user=self.request.user)
299 keys = ("total", "critical", "incomplete", "translated", "fuzzy",
300 "untranslated")
301 for k in keys:
302 if k in stats:
303 stats[k + "_display"] = formatter.number(stats[k])
304 context["stats"] = stats
305 context["suggestions"] = form.language_team.suggestions
306 suggestions_count = 0
307 if context["suggestions"]:
308 suggestions_count = context["suggestions"].count()
309 context["suggestions_display"] = formatter.number(suggestions_count)
310 context["language"] = self.language
311 context["page"] = "admin-team"
312 context["browse_url"] = reverse(
313 "pootle-language-browse",
314 kwargs=dict(language_code=self.language.code))
315 context["translate_url"] = reverse(
316 "pootle-language-translate",
317 kwargs=dict(language_code=self.language.code))
318 return context
319
320
321 class LanguageTeamAdminNewMembersJSON(PootleJSONMixin, PootleLanguageAdminFormView):
322 form_class = LanguageTeamNewMemberSearchForm
323
324 def get_context_data(self, **kwargs):
325 context = super(
326 LanguageTeamAdminNewMembersJSON, self).get_context_data(**kwargs)
327 form = context["form"]
328 return (
329 dict(items=form.search())
330 if form.is_valid()
331 else dict(items=[]))
332
333 def get_form_kwargs(self):
334 kwargs = super(LanguageTeamAdminNewMembersJSON, self).get_form_kwargs()
335 kwargs["data"] = self.request.POST
336 return kwargs
337
338 def form_valid(self, form):
339 return self.render_to_response(
340 self.get_context_data(form=form))
341
342 def form_invalid(self, form):
343 raise Http400(form.errors)
344
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pootle/apps/pootle_language/views.py b/pootle/apps/pootle_language/views.py
--- a/pootle/apps/pootle_language/views.py
+++ b/pootle/apps/pootle_language/views.py
@@ -302,11 +302,9 @@
if k in stats:
stats[k + "_display"] = formatter.number(stats[k])
context["stats"] = stats
- context["suggestions"] = form.language_team.suggestions
- suggestions_count = 0
- if context["suggestions"]:
- suggestions_count = context["suggestions"].count()
- context["suggestions_display"] = formatter.number(suggestions_count)
+ context["suggestions"] = form.language_team.suggestions.count()
+ context["suggestions_display"] = formatter.number(
+ context["suggestions"])
context["language"] = self.language
context["page"] = "admin-team"
context["browse_url"] = reverse(
|
{"golden_diff": "diff --git a/pootle/apps/pootle_language/views.py b/pootle/apps/pootle_language/views.py\n--- a/pootle/apps/pootle_language/views.py\n+++ b/pootle/apps/pootle_language/views.py\n@@ -302,11 +302,9 @@\n if k in stats:\n stats[k + \"_display\"] = formatter.number(stats[k])\n context[\"stats\"] = stats\n- context[\"suggestions\"] = form.language_team.suggestions\n- suggestions_count = 0\n- if context[\"suggestions\"]:\n- suggestions_count = context[\"suggestions\"].count()\n- context[\"suggestions_display\"] = formatter.number(suggestions_count)\n+ context[\"suggestions\"] = form.language_team.suggestions.count()\n+ context[\"suggestions_display\"] = formatter.number(\n+ context[\"suggestions\"])\n context[\"language\"] = self.language\n context[\"page\"] = \"admin-team\"\n context[\"browse_url\"] = reverse(\n", "issue": "Team page is very slow to load\nnot sure of the cause, but new team page is loading v slowly\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.contrib import messages\nfrom django.http import Http404\nfrom django.shortcuts import redirect, render\nfrom django.urls import reverse\nfrom django.utils.functional import cached_property\nfrom django.utils.safestring import mark_safe\n\nfrom pootle.core.browser import make_project_item\nfrom pootle.core.decorators import (\n get_object_or_404, get_path_obj, permission_required)\nfrom pootle.core.exceptions import Http400\nfrom pootle.core.views import PootleBrowseView, PootleTranslateView\nfrom pootle.core.views.admin import PootleFormView\nfrom pootle.core.views.decorators import requires_permission, set_permissions\nfrom pootle.core.views.formtable import Formtable\nfrom pootle.core.views.mixins import PootleJSONMixin\nfrom pootle.i18n import formatter\nfrom pootle.i18n.gettext import ugettext_lazy as _, ungettext_lazy\nfrom pootle_misc.util import cmp_by_last_activity\nfrom pootle_store.constants import STATES_MAP\n\nfrom .apps import PootleLanguageConfig\nfrom .forms import (\n LanguageSpecialCharsForm, LanguageSuggestionAdminForm,\n LanguageTeamAdminForm, LanguageTeamNewMemberSearchForm)\nfrom .models import Language\n\n\nclass LanguageMixin(object):\n ns = \"pootle.language\"\n sw_version = PootleLanguageConfig.version\n model = Language\n browse_url_path = \"pootle-language-browse\"\n translate_url_path = \"pootle-language-translate\"\n template_extends = 'languages/base.html'\n\n @property\n def language(self):\n return self.object\n\n @property\n def permission_context(self):\n return self.get_object().directory\n\n @property\n def url_kwargs(self):\n return {\"language_code\": self.object.code}\n\n def get_object(self):\n return self.object\n\n @cached_property\n def object(self):\n lang = Language.get_canonical(self.kwargs[\"language_code\"])\n if lang is None:\n raise Http404\n return lang\n\n def get(self, *args, **kwargs):\n self.object = self.get_object()\n if self.object.code != kwargs[\"language_code\"]:\n return redirect(\n self.url_pattern_name,\n self.object.code,\n permanent=True)\n return super(LanguageMixin, self).get(*args, **kwargs)\n\n\nclass LanguageBrowseView(LanguageMixin, PootleBrowseView):\n url_pattern_name = \"pootle-language-browse\"\n view_name = \"language\"\n\n @cached_property\n def 
object_children(self):\n items = [make_project_item(tp)\n for tp in self.object.get_children_for_user(self.request.user)]\n items = self.add_child_stats(items)\n items.sort(cmp_by_last_activity)\n return items\n\n @property\n def language(self):\n return {\n 'code': self.object.code,\n 'name': self.object.name}\n\n def get(self, *args, **kwargs):\n response = super(LanguageBrowseView, self).get(*args, **kwargs)\n response.set_cookie('pootle-language', self.object.code)\n return response\n\n\nclass LanguageTranslateView(LanguageMixin, PootleTranslateView):\n url_pattern_name = \"pootle-language-translate\"\n\n\n@get_path_obj\n@permission_required('administrate')\ndef language_characters_admin(request, language):\n form = LanguageSpecialCharsForm(request.POST\n if request.method == 'POST'\n else None,\n instance=language)\n if form.is_valid():\n form.save()\n return redirect('pootle-language-browse', language.code)\n\n ctx = {\n 'page': 'admin-characters',\n\n 'browse_url': reverse('pootle-language-browse', kwargs={\n 'language_code': language.code,\n }),\n 'translate_url': reverse('pootle-language-translate', kwargs={\n 'language_code': language.code,\n }),\n\n 'language': language,\n 'directory': language.directory,\n 'form': form,\n }\n\n return render(request, 'languages/admin/characters.html', ctx)\n\n\nclass SuggestionFormtable(Formtable):\n row_field = \"suggestions\"\n filters_template = \"languages/admin/includes/suggestions_header.html\"\n\n @property\n def messages(self):\n return self.kwargs.get(\"messages\", [])\n\n\nclass SuggestionDisplay(object):\n\n def __init__(self, suggestion):\n self.__suggestion__ = suggestion\n\n @property\n def unit(self):\n return self.__suggestion__.unit.source_f\n\n @property\n def project(self):\n tp = self.__suggestion__.unit.store.translation_project\n return mark_safe(\n \"<a href='%s'>%s</a>\"\n % (tp.get_absolute_url(),\n tp.project.code))\n\n @property\n def unit_state(self):\n return STATES_MAP[self.__suggestion__.unit.state]\n\n @property\n def unit_link(self):\n return mark_safe(\n \"<a href='%s'>#%s</a>\"\n % (self.__suggestion__.unit.get_translate_url(),\n self.__suggestion__.unit.id))\n\n def __getattr__(self, k):\n try:\n return getattr(self.__suggestion__, k)\n except AttributeError:\n return self.__getattribute__(k)\n\n\nclass PootleLanguageAdminFormView(PootleFormView):\n\n @property\n def permission_context(self):\n return self.language.directory\n\n @set_permissions\n @requires_permission(\"administrate\")\n def dispatch(self, request, *args, **kwargs):\n # get funky with the request 8/\n return super(\n PootleLanguageAdminFormView, self).dispatch(request, *args, **kwargs)\n\n @cached_property\n def language(self):\n return get_object_or_404(\n Language.objects.select_related(\"directory\"),\n code=self.kwargs[\"language_code\"])\n\n def get_form_kwargs(self):\n kwargs = super(PootleLanguageAdminFormView, self).get_form_kwargs()\n kwargs[\"language\"] = self.language\n return kwargs\n\n @property\n def success_kwargs(self):\n return dict(language_code=self.language.code)\n\n\nclass LanguageSuggestionAdminView(PootleLanguageAdminFormView):\n template_name = 'languages/admin/language_team_suggestions.html'\n form_class = LanguageSuggestionAdminForm\n success_url_pattern = \"pootle-language-admin-suggestions\"\n formtable_columns = (\n _(\"Unit\"),\n _(\"State\"),\n _(\"Source\"),\n _(\"Suggestion\"),\n _(\"Suggested by\"),\n _(\"Suggested at\"),\n _(\"Project\"))\n\n @property\n def default_form_kwargs(self):\n return dict(\n 
page_no=1,\n results_per_page=10)\n\n def add_success_message(self, form):\n count = (\n form.fields[\"suggestions\"].queryset.count()\n if form.cleaned_data[\"select_all\"]\n else len(form.cleaned_data[\"suggestions\"]))\n reject_and_notify = (\n form.cleaned_data[\"actions\"] == \"reject\"\n and form.cleaned_data[\"comment\"])\n accept_and_notify = (\n form.cleaned_data[\"actions\"] == \"accept\"\n and form.cleaned_data[\"comment\"])\n if reject_and_notify:\n message = ungettext_lazy(\n \"Rejected %s suggestion with comment. User will be notified\",\n \"Rejected %s suggestions with comment. Users will be notified\",\n count, count)\n elif accept_and_notify:\n message = ungettext_lazy(\n \"Accepted %s suggestion with comment. User will be notified\",\n \"Accepted %s suggestions with comment. Users will be notified\",\n count, count)\n elif form.cleaned_data[\"actions\"] == \"reject\":\n message = ungettext_lazy(\n \"Rejected %s suggestion\",\n \"Rejected %s suggestions\",\n count, count)\n else:\n message = ungettext_lazy(\n \"Accepted %s suggestion\",\n \"Accepted %s suggestions\",\n count, count)\n messages.success(self.request, message)\n\n def get_context_data(self, **kwargs):\n context = super(\n LanguageSuggestionAdminView, self).get_context_data(**kwargs)\n context[\"page\"] = \"admin-suggestions\"\n context[\"language\"] = self.language\n form = context[\"form\"]\n form.is_valid()\n batch = form.batch()\n form.fields[\"suggestions\"].choices = [\n (item.id, SuggestionDisplay(item))\n for item in\n batch.object_list]\n context[\"formtable\"] = SuggestionFormtable(\n form,\n columns=self.formtable_columns,\n page=batch,\n messages=messages.get_messages(self.request))\n return context\n\n def get_form_kwargs(self):\n kwargs = super(LanguageSuggestionAdminView, self).get_form_kwargs()\n if not self.request.POST:\n kwargs[\"data\"] = self.default_form_kwargs\n kwargs[\"user\"] = self.request.user\n return kwargs\n\n\nclass LanguageTeamAdminFormView(PootleLanguageAdminFormView):\n form_class = LanguageTeamAdminForm\n template_name = \"languages/admin/language_team.html\"\n success_url_pattern = \"pootle-language-admin-team\"\n\n def get_context_data(self, **kwargs):\n context = super(LanguageTeamAdminFormView, self).get_context_data(**kwargs)\n form = context[\"form\"]\n context[\"tps\"] = self.language.translationproject_set.exclude(\n project__disabled=True)\n stats = self.language.data_tool.get_stats(\n include_children=False,\n user=self.request.user)\n keys = (\"total\", \"critical\", \"incomplete\", \"translated\", \"fuzzy\",\n \"untranslated\")\n for k in keys:\n if k in stats:\n stats[k + \"_display\"] = formatter.number(stats[k])\n context[\"stats\"] = stats\n context[\"suggestions\"] = form.language_team.suggestions\n suggestions_count = 0\n if context[\"suggestions\"]:\n suggestions_count = context[\"suggestions\"].count()\n context[\"suggestions_display\"] = formatter.number(suggestions_count)\n context[\"language\"] = self.language\n context[\"page\"] = \"admin-team\"\n context[\"browse_url\"] = reverse(\n \"pootle-language-browse\",\n kwargs=dict(language_code=self.language.code))\n context[\"translate_url\"] = reverse(\n \"pootle-language-translate\",\n kwargs=dict(language_code=self.language.code))\n return context\n\n\nclass LanguageTeamAdminNewMembersJSON(PootleJSONMixin, PootleLanguageAdminFormView):\n form_class = LanguageTeamNewMemberSearchForm\n\n def get_context_data(self, **kwargs):\n context = super(\n LanguageTeamAdminNewMembersJSON, 
self).get_context_data(**kwargs)\n form = context[\"form\"]\n return (\n dict(items=form.search())\n if form.is_valid()\n else dict(items=[]))\n\n def get_form_kwargs(self):\n kwargs = super(LanguageTeamAdminNewMembersJSON, self).get_form_kwargs()\n kwargs[\"data\"] = self.request.POST\n return kwargs\n\n def form_valid(self, form):\n return self.render_to_response(\n self.get_context_data(form=form))\n\n def form_invalid(self, form):\n raise Http400(form.errors)\n", "path": "pootle/apps/pootle_language/views.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.contrib import messages\nfrom django.http import Http404\nfrom django.shortcuts import redirect, render\nfrom django.urls import reverse\nfrom django.utils.functional import cached_property\nfrom django.utils.safestring import mark_safe\n\nfrom pootle.core.browser import make_project_item\nfrom pootle.core.decorators import (\n get_object_or_404, get_path_obj, permission_required)\nfrom pootle.core.exceptions import Http400\nfrom pootle.core.views import PootleBrowseView, PootleTranslateView\nfrom pootle.core.views.admin import PootleFormView\nfrom pootle.core.views.decorators import requires_permission, set_permissions\nfrom pootle.core.views.formtable import Formtable\nfrom pootle.core.views.mixins import PootleJSONMixin\nfrom pootle.i18n import formatter\nfrom pootle.i18n.gettext import ugettext_lazy as _, ungettext_lazy\nfrom pootle_misc.util import cmp_by_last_activity\nfrom pootle_store.constants import STATES_MAP\n\nfrom .apps import PootleLanguageConfig\nfrom .forms import (\n LanguageSpecialCharsForm, LanguageSuggestionAdminForm,\n LanguageTeamAdminForm, LanguageTeamNewMemberSearchForm)\nfrom .models import Language\n\n\nclass LanguageMixin(object):\n ns = \"pootle.language\"\n sw_version = PootleLanguageConfig.version\n model = Language\n browse_url_path = \"pootle-language-browse\"\n translate_url_path = \"pootle-language-translate\"\n template_extends = 'languages/base.html'\n\n @property\n def language(self):\n return self.object\n\n @property\n def permission_context(self):\n return self.get_object().directory\n\n @property\n def url_kwargs(self):\n return {\"language_code\": self.object.code}\n\n def get_object(self):\n return self.object\n\n @cached_property\n def object(self):\n lang = Language.get_canonical(self.kwargs[\"language_code\"])\n if lang is None:\n raise Http404\n return lang\n\n def get(self, *args, **kwargs):\n self.object = self.get_object()\n if self.object.code != kwargs[\"language_code\"]:\n return redirect(\n self.url_pattern_name,\n self.object.code,\n permanent=True)\n return super(LanguageMixin, self).get(*args, **kwargs)\n\n\nclass LanguageBrowseView(LanguageMixin, PootleBrowseView):\n url_pattern_name = \"pootle-language-browse\"\n view_name = \"language\"\n\n @cached_property\n def object_children(self):\n items = [make_project_item(tp)\n for tp in self.object.get_children_for_user(self.request.user)]\n items = self.add_child_stats(items)\n items.sort(cmp_by_last_activity)\n return items\n\n @property\n def language(self):\n return {\n 'code': self.object.code,\n 'name': self.object.name}\n\n def get(self, *args, **kwargs):\n response = super(LanguageBrowseView, self).get(*args, **kwargs)\n 
response.set_cookie('pootle-language', self.object.code)\n return response\n\n\nclass LanguageTranslateView(LanguageMixin, PootleTranslateView):\n url_pattern_name = \"pootle-language-translate\"\n\n\n@get_path_obj\n@permission_required('administrate')\ndef language_characters_admin(request, language):\n form = LanguageSpecialCharsForm(request.POST\n if request.method == 'POST'\n else None,\n instance=language)\n if form.is_valid():\n form.save()\n return redirect('pootle-language-browse', language.code)\n\n ctx = {\n 'page': 'admin-characters',\n\n 'browse_url': reverse('pootle-language-browse', kwargs={\n 'language_code': language.code,\n }),\n 'translate_url': reverse('pootle-language-translate', kwargs={\n 'language_code': language.code,\n }),\n\n 'language': language,\n 'directory': language.directory,\n 'form': form,\n }\n\n return render(request, 'languages/admin/characters.html', ctx)\n\n\nclass SuggestionFormtable(Formtable):\n row_field = \"suggestions\"\n filters_template = \"languages/admin/includes/suggestions_header.html\"\n\n @property\n def messages(self):\n return self.kwargs.get(\"messages\", [])\n\n\nclass SuggestionDisplay(object):\n\n def __init__(self, suggestion):\n self.__suggestion__ = suggestion\n\n @property\n def unit(self):\n return self.__suggestion__.unit.source_f\n\n @property\n def project(self):\n tp = self.__suggestion__.unit.store.translation_project\n return mark_safe(\n \"<a href='%s'>%s</a>\"\n % (tp.get_absolute_url(),\n tp.project.code))\n\n @property\n def unit_state(self):\n return STATES_MAP[self.__suggestion__.unit.state]\n\n @property\n def unit_link(self):\n return mark_safe(\n \"<a href='%s'>#%s</a>\"\n % (self.__suggestion__.unit.get_translate_url(),\n self.__suggestion__.unit.id))\n\n def __getattr__(self, k):\n try:\n return getattr(self.__suggestion__, k)\n except AttributeError:\n return self.__getattribute__(k)\n\n\nclass PootleLanguageAdminFormView(PootleFormView):\n\n @property\n def permission_context(self):\n return self.language.directory\n\n @set_permissions\n @requires_permission(\"administrate\")\n def dispatch(self, request, *args, **kwargs):\n # get funky with the request 8/\n return super(\n PootleLanguageAdminFormView, self).dispatch(request, *args, **kwargs)\n\n @cached_property\n def language(self):\n return get_object_or_404(\n Language.objects.select_related(\"directory\"),\n code=self.kwargs[\"language_code\"])\n\n def get_form_kwargs(self):\n kwargs = super(PootleLanguageAdminFormView, self).get_form_kwargs()\n kwargs[\"language\"] = self.language\n return kwargs\n\n @property\n def success_kwargs(self):\n return dict(language_code=self.language.code)\n\n\nclass LanguageSuggestionAdminView(PootleLanguageAdminFormView):\n template_name = 'languages/admin/language_team_suggestions.html'\n form_class = LanguageSuggestionAdminForm\n success_url_pattern = \"pootle-language-admin-suggestions\"\n formtable_columns = (\n _(\"Unit\"),\n _(\"State\"),\n _(\"Source\"),\n _(\"Suggestion\"),\n _(\"Suggested by\"),\n _(\"Suggested at\"),\n _(\"Project\"))\n\n @property\n def default_form_kwargs(self):\n return dict(\n page_no=1,\n results_per_page=10)\n\n def add_success_message(self, form):\n count = (\n form.fields[\"suggestions\"].queryset.count()\n if form.cleaned_data[\"select_all\"]\n else len(form.cleaned_data[\"suggestions\"]))\n reject_and_notify = (\n form.cleaned_data[\"actions\"] == \"reject\"\n and form.cleaned_data[\"comment\"])\n accept_and_notify = (\n form.cleaned_data[\"actions\"] == \"accept\"\n and 
form.cleaned_data[\"comment\"])\n if reject_and_notify:\n message = ungettext_lazy(\n \"Rejected %s suggestion with comment. User will be notified\",\n \"Rejected %s suggestions with comment. Users will be notified\",\n count, count)\n elif accept_and_notify:\n message = ungettext_lazy(\n \"Accepted %s suggestion with comment. User will be notified\",\n \"Accepted %s suggestions with comment. Users will be notified\",\n count, count)\n elif form.cleaned_data[\"actions\"] == \"reject\":\n message = ungettext_lazy(\n \"Rejected %s suggestion\",\n \"Rejected %s suggestions\",\n count, count)\n else:\n message = ungettext_lazy(\n \"Accepted %s suggestion\",\n \"Accepted %s suggestions\",\n count, count)\n messages.success(self.request, message)\n\n def get_context_data(self, **kwargs):\n context = super(\n LanguageSuggestionAdminView, self).get_context_data(**kwargs)\n context[\"page\"] = \"admin-suggestions\"\n context[\"language\"] = self.language\n form = context[\"form\"]\n form.is_valid()\n batch = form.batch()\n form.fields[\"suggestions\"].choices = [\n (item.id, SuggestionDisplay(item))\n for item in\n batch.object_list]\n context[\"formtable\"] = SuggestionFormtable(\n form,\n columns=self.formtable_columns,\n page=batch,\n messages=messages.get_messages(self.request))\n return context\n\n def get_form_kwargs(self):\n kwargs = super(LanguageSuggestionAdminView, self).get_form_kwargs()\n if not self.request.POST:\n kwargs[\"data\"] = self.default_form_kwargs\n kwargs[\"user\"] = self.request.user\n return kwargs\n\n\nclass LanguageTeamAdminFormView(PootleLanguageAdminFormView):\n form_class = LanguageTeamAdminForm\n template_name = \"languages/admin/language_team.html\"\n success_url_pattern = \"pootle-language-admin-team\"\n\n def get_context_data(self, **kwargs):\n context = super(LanguageTeamAdminFormView, self).get_context_data(**kwargs)\n form = context[\"form\"]\n context[\"tps\"] = self.language.translationproject_set.exclude(\n project__disabled=True)\n stats = self.language.data_tool.get_stats(\n include_children=False,\n user=self.request.user)\n keys = (\"total\", \"critical\", \"incomplete\", \"translated\", \"fuzzy\",\n \"untranslated\")\n for k in keys:\n if k in stats:\n stats[k + \"_display\"] = formatter.number(stats[k])\n context[\"stats\"] = stats\n context[\"suggestions\"] = form.language_team.suggestions.count()\n context[\"suggestions_display\"] = formatter.number(\n context[\"suggestions\"])\n context[\"language\"] = self.language\n context[\"page\"] = \"admin-team\"\n context[\"browse_url\"] = reverse(\n \"pootle-language-browse\",\n kwargs=dict(language_code=self.language.code))\n context[\"translate_url\"] = reverse(\n \"pootle-language-translate\",\n kwargs=dict(language_code=self.language.code))\n return context\n\n\nclass LanguageTeamAdminNewMembersJSON(PootleJSONMixin, PootleLanguageAdminFormView):\n form_class = LanguageTeamNewMemberSearchForm\n\n def get_context_data(self, **kwargs):\n context = super(\n LanguageTeamAdminNewMembersJSON, self).get_context_data(**kwargs)\n form = context[\"form\"]\n return (\n dict(items=form.search())\n if form.is_valid()\n else dict(items=[]))\n\n def get_form_kwargs(self):\n kwargs = super(LanguageTeamAdminNewMembersJSON, self).get_form_kwargs()\n kwargs[\"data\"] = self.request.POST\n return kwargs\n\n def form_valid(self, form):\n return self.render_to_response(\n self.get_context_data(form=form))\n\n def form_invalid(self, form):\n raise Http400(form.errors)\n", "path": "pootle/apps/pootle_language/views.py"}]}
| 3,674 | 213 |
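The slowdown addressed by this row's patch comes from putting the whole `suggestions` queryset into the template context and testing its truthiness, which makes Django fetch every row; the fix keeps only the count. A minimal sketch of the pattern, with illustrative names rather than the real Pootle API:

```python
# Sketch: count rows in the database instead of materializing the queryset.
# `team.suggestions` stands in for any related Django queryset.
def slow_context(team):
    suggestions = team.suggestions
    count = 0
    if suggestions:                  # truthiness check evaluates the queryset -> all rows load
        count = suggestions.count()
    return {"suggestions": suggestions, "count": count}


def fast_context(team):
    count = team.suggestions.count()  # single COUNT(*) query, no rows transferred
    return {"suggestions": count, "count": count}
```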
gh_patches_debug_17224
|
rasdani/github-patches
|
git_diff
|
cobbler__cobbler-626
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'get-loader' needs syslinux updating.
I've confirmed that the version of syslinux (3.61) currently in get-loader will not let you deploy vmware 5.1 on random machine.
It errors with "fatal error: 10 (out of resources)" when loading tools.t00.
Using the pxelinux.0 and menu.c32 binaries from the syslinux-3.86.tar.gz build on kernel.org fixes it, and lets it work.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cobbler/action_dlcontent.py`
Content:
```
1 """
2 Downloads bootloader content for all arches for when the user doesn't want to supply their own.
3
4 Copyright 2009, Red Hat, Inc and Others
5 Michael DeHaan <michael.dehaan AT gmail>
6
7 This program is free software; you can redistribute it and/or modify
8 it under the terms of the GNU General Public License as published by
9 the Free Software Foundation; either version 2 of the License, or
10 (at your option) any later version.
11
12 This program is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 GNU General Public License for more details.
16
17 You should have received a copy of the GNU General Public License
18 along with this program; if not, write to the Free Software
19 Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
20 02110-1301 USA
21 """
22
23 import os
24 import urlgrabber
25 import clogger
26
27 class ContentDownloader:
28
29 def __init__(self,config,logger=None):
30 """
31 Constructor
32 """
33 self.config = config
34 self.settings = config.settings()
35 if logger is None:
36 logger = clogger.Logger()
37 self.logger = logger
38
39
40 def run(self,force=False):
41 """
42 Download bootloader content for all of the latest bootloaders, since the user
43 has chosen to not supply their own. You may ask "why not get this from yum", though
44 Fedora has no IA64 repo, for instance, and we also want this to be able to work on Debian and
45 further do not want folks to have to install a cross compiler. For those that don't like this approach
46 they can still source their cross-arch bootloader content manually.
47 """
48
49 content_server = "http://www.cobblerd.org/loaders"
50 dest = "/var/lib/cobbler/loaders"
51
52 files = (
53 ( "%s/README" % content_server, "%s/README" % dest ),
54 ( "%s/COPYING.elilo" % content_server, "%s/COPYING.elilo" % dest ),
55 ( "%s/COPYING.yaboot" % content_server, "%s/COPYING.yaboot" % dest),
56 ( "%s/COPYING.syslinux" % content_server, "%s/COPYING.syslinux" % dest),
57 ( "%s/elilo-3.8-ia64.efi" % content_server, "%s/elilo-ia64.efi" % dest ),
58 ( "%s/yaboot-1.3.14-12" % content_server, "%s/yaboot" % dest),
59 ( "%s/pxelinux.0-4.02" % content_server, "%s/pxelinux.0" % dest),
60 ( "%s/menu.c32-4.02" % content_server, "%s/menu.c32" % dest),
61 ( "%s/grub-0.97-x86.efi" % content_server, "%s/grub-x86.efi" % dest),
62 ( "%s/grub-0.97-x86_64.efi" % content_server, "%s/grub-x86_64.efi" % dest),
63 )
64
65 proxies = {}
66 if os.environ.has_key("HTTP_PROXY"):
67 proxies['http'] = os.environ["HTTP_PROXY"]
68
69 if os.environ.has_key("HTTPS_PROXY"):
70 proxies['https'] = os.environ["HTTPS_PROXY"]
71
72 if os.environ.has_key("FTP_PROXY"):
73 proxies['ftp'] = os.environ["FTP_PROXY"]
74
75 if len(proxies) == 0:
76 proxies = None
77
78 for src,dst in files:
79 if os.path.exists(dst) and not force:
80 self.logger.info("path %s already exists, not overwriting existing content, use --force if you wish to update" % dst)
81 continue
82 self.logger.info("downloading %s to %s" % (src,dst))
83 urlgrabber.grabber.urlgrab(src, filename=dst, proxies=proxies)
84
85 return True
86
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cobbler/action_dlcontent.py b/cobbler/action_dlcontent.py
--- a/cobbler/action_dlcontent.py
+++ b/cobbler/action_dlcontent.py
@@ -56,8 +56,8 @@
( "%s/COPYING.syslinux" % content_server, "%s/COPYING.syslinux" % dest),
( "%s/elilo-3.8-ia64.efi" % content_server, "%s/elilo-ia64.efi" % dest ),
( "%s/yaboot-1.3.14-12" % content_server, "%s/yaboot" % dest),
- ( "%s/pxelinux.0-4.02" % content_server, "%s/pxelinux.0" % dest),
- ( "%s/menu.c32-4.02" % content_server, "%s/menu.c32" % dest),
+ ( "%s/pxelinux.0-3.86" % content_server, "%s/pxelinux.0" % dest),
+ ( "%s/menu.c32-3.86" % content_server, "%s/menu.c32" % dest),
( "%s/grub-0.97-x86.efi" % content_server, "%s/grub-x86.efi" % dest),
( "%s/grub-0.97-x86_64.efi" % content_server, "%s/grub-x86_64.efi" % dest),
)
|
{"golden_diff": "diff --git a/cobbler/action_dlcontent.py b/cobbler/action_dlcontent.py\n--- a/cobbler/action_dlcontent.py\n+++ b/cobbler/action_dlcontent.py\n@@ -56,8 +56,8 @@\n ( \"%s/COPYING.syslinux\" % content_server, \"%s/COPYING.syslinux\" % dest),\n ( \"%s/elilo-3.8-ia64.efi\" % content_server, \"%s/elilo-ia64.efi\" % dest ),\n ( \"%s/yaboot-1.3.14-12\" % content_server, \"%s/yaboot\" % dest),\n- ( \"%s/pxelinux.0-4.02\" % content_server, \"%s/pxelinux.0\" % dest),\n- ( \"%s/menu.c32-4.02\" % content_server, \"%s/menu.c32\" % dest),\n+ ( \"%s/pxelinux.0-3.86\" % content_server, \"%s/pxelinux.0\" % dest),\n+ ( \"%s/menu.c32-3.86\" % content_server, \"%s/menu.c32\" % dest),\n ( \"%s/grub-0.97-x86.efi\" % content_server, \"%s/grub-x86.efi\" % dest),\n ( \"%s/grub-0.97-x86_64.efi\" % content_server, \"%s/grub-x86_64.efi\" % dest),\n )\n", "issue": "'get-loader' needs syslinux updating.\nI've confirmed that the version of syslinux (3.61) currently in get-loader will not let you deploy vmware 5.1 on random machine. \n\nIt errors with \"fatal error: 10 (out of resources)\" when loading tools.t00.\n\nUsing the pxelinux.0 and menu.c32 binaries from the syslinux-3.86.tar.gz build on kernel.org fixes it, and lets it work. \n\n", "before_files": [{"content": "\"\"\"\nDownloads bootloader content for all arches for when the user doesn't want to supply their own.\n\nCopyright 2009, Red Hat, Inc and Others\nMichael DeHaan <michael.dehaan AT gmail>\n\nThis program is free software; you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation; either version 2 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program; if not, write to the Free Software\nFoundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA\n02110-1301 USA\n\"\"\"\n\nimport os\nimport urlgrabber\nimport clogger\n\nclass ContentDownloader:\n\n def __init__(self,config,logger=None):\n \"\"\"\n Constructor\n \"\"\"\n self.config = config\n self.settings = config.settings()\n if logger is None:\n logger = clogger.Logger()\n self.logger = logger\n\n\n def run(self,force=False):\n \"\"\"\n Download bootloader content for all of the latest bootloaders, since the user\n has chosen to not supply their own. You may ask \"why not get this from yum\", though\n Fedora has no IA64 repo, for instance, and we also want this to be able to work on Debian and\n further do not want folks to have to install a cross compiler. 
For those that don't like this approach\n they can still source their cross-arch bootloader content manually.\n \"\"\"\n\n content_server = \"http://www.cobblerd.org/loaders\"\n dest = \"/var/lib/cobbler/loaders\"\n\n files = (\n ( \"%s/README\" % content_server, \"%s/README\" % dest ),\n ( \"%s/COPYING.elilo\" % content_server, \"%s/COPYING.elilo\" % dest ),\n ( \"%s/COPYING.yaboot\" % content_server, \"%s/COPYING.yaboot\" % dest),\n ( \"%s/COPYING.syslinux\" % content_server, \"%s/COPYING.syslinux\" % dest),\n ( \"%s/elilo-3.8-ia64.efi\" % content_server, \"%s/elilo-ia64.efi\" % dest ),\n ( \"%s/yaboot-1.3.14-12\" % content_server, \"%s/yaboot\" % dest),\n ( \"%s/pxelinux.0-4.02\" % content_server, \"%s/pxelinux.0\" % dest),\n ( \"%s/menu.c32-4.02\" % content_server, \"%s/menu.c32\" % dest),\n ( \"%s/grub-0.97-x86.efi\" % content_server, \"%s/grub-x86.efi\" % dest),\n ( \"%s/grub-0.97-x86_64.efi\" % content_server, \"%s/grub-x86_64.efi\" % dest),\n )\n\n proxies = {}\n if os.environ.has_key(\"HTTP_PROXY\"):\n proxies['http'] = os.environ[\"HTTP_PROXY\"]\n\n if os.environ.has_key(\"HTTPS_PROXY\"):\n proxies['https'] = os.environ[\"HTTPS_PROXY\"]\n\n if os.environ.has_key(\"FTP_PROXY\"):\n proxies['ftp'] = os.environ[\"FTP_PROXY\"]\n\n if len(proxies) == 0:\n proxies = None\n\n for src,dst in files:\n if os.path.exists(dst) and not force:\n self.logger.info(\"path %s already exists, not overwriting existing content, use --force if you wish to update\" % dst)\n continue\n self.logger.info(\"downloading %s to %s\" % (src,dst))\n urlgrabber.grabber.urlgrab(src, filename=dst, proxies=proxies)\n\n return True\n\n", "path": "cobbler/action_dlcontent.py"}], "after_files": [{"content": "\"\"\"\nDownloads bootloader content for all arches for when the user doesn't want to supply their own.\n\nCopyright 2009, Red Hat, Inc and Others\nMichael DeHaan <michael.dehaan AT gmail>\n\nThis program is free software; you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation; either version 2 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program; if not, write to the Free Software\nFoundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA\n02110-1301 USA\n\"\"\"\n\nimport os\nimport urlgrabber\nimport clogger\n\nclass ContentDownloader:\n\n def __init__(self,config,logger=None):\n \"\"\"\n Constructor\n \"\"\"\n self.config = config\n self.settings = config.settings()\n if logger is None:\n logger = clogger.Logger()\n self.logger = logger\n\n\n def run(self,force=False):\n \"\"\"\n Download bootloader content for all of the latest bootloaders, since the user\n has chosen to not supply their own. You may ask \"why not get this from yum\", though\n Fedora has no IA64 repo, for instance, and we also want this to be able to work on Debian and\n further do not want folks to have to install a cross compiler. 
For those that don't like this approach\n they can still source their cross-arch bootloader content manually.\n \"\"\"\n\n content_server = \"http://www.cobblerd.org/loaders\"\n dest = \"/var/lib/cobbler/loaders\"\n\n files = (\n ( \"%s/README\" % content_server, \"%s/README\" % dest ),\n ( \"%s/COPYING.elilo\" % content_server, \"%s/COPYING.elilo\" % dest ),\n ( \"%s/COPYING.yaboot\" % content_server, \"%s/COPYING.yaboot\" % dest),\n ( \"%s/COPYING.syslinux\" % content_server, \"%s/COPYING.syslinux\" % dest),\n ( \"%s/elilo-3.8-ia64.efi\" % content_server, \"%s/elilo-ia64.efi\" % dest ),\n ( \"%s/yaboot-1.3.14-12\" % content_server, \"%s/yaboot\" % dest),\n ( \"%s/pxelinux.0-3.86\" % content_server, \"%s/pxelinux.0\" % dest),\n ( \"%s/menu.c32-3.86\" % content_server, \"%s/menu.c32\" % dest),\n ( \"%s/grub-0.97-x86.efi\" % content_server, \"%s/grub-x86.efi\" % dest),\n ( \"%s/grub-0.97-x86_64.efi\" % content_server, \"%s/grub-x86_64.efi\" % dest),\n )\n\n proxies = {}\n if os.environ.has_key(\"HTTP_PROXY\"):\n proxies['http'] = os.environ[\"HTTP_PROXY\"]\n\n if os.environ.has_key(\"HTTPS_PROXY\"):\n proxies['https'] = os.environ[\"HTTPS_PROXY\"]\n\n if os.environ.has_key(\"FTP_PROXY\"):\n proxies['ftp'] = os.environ[\"FTP_PROXY\"]\n\n if len(proxies) == 0:\n proxies = None\n\n for src,dst in files:\n if os.path.exists(dst) and not force:\n self.logger.info(\"path %s already exists, not overwriting existing content, use --force if you wish to update\" % dst)\n continue\n self.logger.info(\"downloading %s to %s\" % (src,dst))\n urlgrabber.grabber.urlgrab(src, filename=dst, proxies=proxies)\n\n return True\n\n", "path": "cobbler/action_dlcontent.py"}]}
| 1,444 | 343 |
gh_patches_debug_11180
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-4761
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Have tests generate a certificate on-the-fly
Unless we want to be doing https://github.com/scrapy/scrapy/pull/4650 every year, we should look into making tests generate a fresh certificate at run time.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conftest.py`
Content:
```
1 from pathlib import Path
2
3 import pytest
4
5
6 def _py_files(folder):
7 return (str(p) for p in Path(folder).rglob('*.py'))
8
9
10 collect_ignore = [
11 # not a test, but looks like a test
12 "scrapy/utils/testsite.py",
13 # contains scripts to be run by tests/test_crawler.py::CrawlerProcessSubprocess
14 *_py_files("tests/CrawlerProcess"),
15 # contains scripts to be run by tests/test_crawler.py::CrawlerRunnerSubprocess
16 *_py_files("tests/CrawlerRunner"),
17 # Py36-only parts of respective tests
18 *_py_files("tests/py36"),
19 ]
20
21 for line in open('tests/ignores.txt'):
22 file_path = line.strip()
23 if file_path and file_path[0] != '#':
24 collect_ignore.append(file_path)
25
26
27 @pytest.fixture()
28 def chdir(tmpdir):
29 """Change to pytest-provided temporary directory"""
30 tmpdir.chdir()
31
32
33 def pytest_collection_modifyitems(session, config, items):
34 # Avoid executing tests when executing `--flake8` flag (pytest-flake8)
35 try:
36 from pytest_flake8 import Flake8Item
37 if config.getoption('--flake8'):
38 items[:] = [item for item in items if isinstance(item, Flake8Item)]
39 except ImportError:
40 pass
41
42
43 @pytest.fixture(scope='class')
44 def reactor_pytest(request):
45 if not request.cls:
46 # doctests
47 return
48 request.cls.reactor_pytest = request.config.getoption("--reactor")
49 return request.cls.reactor_pytest
50
51
52 @pytest.fixture(autouse=True)
53 def only_asyncio(request, reactor_pytest):
54 if request.node.get_closest_marker('only_asyncio') and reactor_pytest != 'asyncio':
55 pytest.skip('This test is only run with --reactor=asyncio')
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -2,6 +2,8 @@
import pytest
+from tests.keys import generate_keys
+
def _py_files(folder):
return (str(p) for p in Path(folder).rglob('*.py'))
@@ -53,3 +55,7 @@
def only_asyncio(request, reactor_pytest):
if request.node.get_closest_marker('only_asyncio') and reactor_pytest != 'asyncio':
pytest.skip('This test is only run with --reactor=asyncio')
+
+
+# Generate localhost certificate files, needed by some tests
+generate_keys()
|
{"golden_diff": "diff --git a/conftest.py b/conftest.py\n--- a/conftest.py\n+++ b/conftest.py\n@@ -2,6 +2,8 @@\n \n import pytest\n \n+from tests.keys import generate_keys\n+\n \n def _py_files(folder):\n return (str(p) for p in Path(folder).rglob('*.py'))\n@@ -53,3 +55,7 @@\n def only_asyncio(request, reactor_pytest):\n if request.node.get_closest_marker('only_asyncio') and reactor_pytest != 'asyncio':\n pytest.skip('This test is only run with --reactor=asyncio')\n+\n+\n+# Generate localhost certificate files, needed by some tests\n+generate_keys()\n", "issue": "Have tests generate a certificate on-the-fly\nUnless we want to be doing https://github.com/scrapy/scrapy/pull/4650 every year, we should look into making tests generate a fresh certificate at run time.\n", "before_files": [{"content": "from pathlib import Path\n\nimport pytest\n\n\ndef _py_files(folder):\n return (str(p) for p in Path(folder).rglob('*.py'))\n\n\ncollect_ignore = [\n # not a test, but looks like a test\n \"scrapy/utils/testsite.py\",\n # contains scripts to be run by tests/test_crawler.py::CrawlerProcessSubprocess\n *_py_files(\"tests/CrawlerProcess\"),\n # contains scripts to be run by tests/test_crawler.py::CrawlerRunnerSubprocess\n *_py_files(\"tests/CrawlerRunner\"),\n # Py36-only parts of respective tests\n *_py_files(\"tests/py36\"),\n]\n\nfor line in open('tests/ignores.txt'):\n file_path = line.strip()\n if file_path and file_path[0] != '#':\n collect_ignore.append(file_path)\n\n\[email protected]()\ndef chdir(tmpdir):\n \"\"\"Change to pytest-provided temporary directory\"\"\"\n tmpdir.chdir()\n\n\ndef pytest_collection_modifyitems(session, config, items):\n # Avoid executing tests when executing `--flake8` flag (pytest-flake8)\n try:\n from pytest_flake8 import Flake8Item\n if config.getoption('--flake8'):\n items[:] = [item for item in items if isinstance(item, Flake8Item)]\n except ImportError:\n pass\n\n\[email protected](scope='class')\ndef reactor_pytest(request):\n if not request.cls:\n # doctests\n return\n request.cls.reactor_pytest = request.config.getoption(\"--reactor\")\n return request.cls.reactor_pytest\n\n\[email protected](autouse=True)\ndef only_asyncio(request, reactor_pytest):\n if request.node.get_closest_marker('only_asyncio') and reactor_pytest != 'asyncio':\n pytest.skip('This test is only run with --reactor=asyncio')\n", "path": "conftest.py"}], "after_files": [{"content": "from pathlib import Path\n\nimport pytest\n\nfrom tests.keys import generate_keys\n\n\ndef _py_files(folder):\n return (str(p) for p in Path(folder).rglob('*.py'))\n\n\ncollect_ignore = [\n # not a test, but looks like a test\n \"scrapy/utils/testsite.py\",\n # contains scripts to be run by tests/test_crawler.py::CrawlerProcessSubprocess\n *_py_files(\"tests/CrawlerProcess\"),\n # contains scripts to be run by tests/test_crawler.py::CrawlerRunnerSubprocess\n *_py_files(\"tests/CrawlerRunner\"),\n # Py36-only parts of respective tests\n *_py_files(\"tests/py36\"),\n]\n\nfor line in open('tests/ignores.txt'):\n file_path = line.strip()\n if file_path and file_path[0] != '#':\n collect_ignore.append(file_path)\n\n\[email protected]()\ndef chdir(tmpdir):\n \"\"\"Change to pytest-provided temporary directory\"\"\"\n tmpdir.chdir()\n\n\ndef pytest_collection_modifyitems(session, config, items):\n # Avoid executing tests when executing `--flake8` flag (pytest-flake8)\n try:\n from pytest_flake8 import Flake8Item\n if config.getoption('--flake8'):\n items[:] = [item for item in items if isinstance(item, Flake8Item)]\n 
except ImportError:\n pass\n\n\[email protected](scope='class')\ndef reactor_pytest(request):\n if not request.cls:\n # doctests\n return\n request.cls.reactor_pytest = request.config.getoption(\"--reactor\")\n return request.cls.reactor_pytest\n\n\[email protected](autouse=True)\ndef only_asyncio(request, reactor_pytest):\n if request.node.get_closest_marker('only_asyncio') and reactor_pytest != 'asyncio':\n pytest.skip('This test is only run with --reactor=asyncio')\n\n\n# Generate localhost certificate files, needed by some tests\ngenerate_keys()\n", "path": "conftest.py"}]}
| 812 | 154 |
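The conftest.py patch in this row relies on a `generate_keys()` helper from the test suite whose body is not shown here. As a rough, hypothetical illustration of generating a self-signed localhost certificate at test start-up — one possible approach using the `cryptography` package, not necessarily what Scrapy's `tests/keys.py` actually does:

```python
# Sketch only: create a throwaway self-signed certificate for localhost.
# File names and the one-year validity period are arbitrary choices.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def generate_keys(cert_path="localhost.crt", key_path="localhost.key"):
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("localhost")]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )
    with open(key_path, "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open(cert_path, "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
```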
gh_patches_debug_25712
|
rasdani/github-patches
|
git_diff
|
cisagov__manage.get.gov-829
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add field for tribe locality on Tribal Government Question Page
### Story
As a Registrant Applicant, I want a way to specify the locality of my tribe so that I can give analysts more specific information to complete their review of my request.

### Acceptance Criteria
- [x] Content is drafted for the question
- [x] Content is approved
- [x] Design prototype demonstrates the look and feel for approval
- [x] Implement the field in the registrar
A new field is added to the page that allows the user to specify the locality of their tribe
### Additional Context
_No response_
### Issue Links
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/registrar/templatetags/custom_filters.py`
Content:
```
1 from django import template
2 import re
3
4 register = template.Library()
5
6
7 @register.filter(name="extract_value")
8 def extract_value(html_input):
9 match = re.search(r'value="([^"]*)"', html_input)
10 if match:
11 return match.group(1)
12 return ""
13
14
15 @register.filter
16 def extract_a_text(value):
17 # Use regex to extract the text within the <a> tag
18 pattern = r"<a\b[^>]*>(.*?)</a>"
19 match = re.search(pattern, value)
20 if match:
21 extracted_text = match.group(1)
22 else:
23 extracted_text = ""
24
25 return extracted_text
26
```
Path: `src/registrar/views/utility/mixins.py`
Content:
```
1 """Permissions-related mixin classes."""
2
3 from django.contrib.auth.mixins import PermissionRequiredMixin
4
5 from registrar.models import UserDomainRole, DomainApplication, DomainInvitation
6
7
8 class PermissionsLoginMixin(PermissionRequiredMixin):
9
10 """Mixin that redirects to login page if not logged in, otherwise 403."""
11
12 def handle_no_permission(self):
13 self.raise_exception = self.request.user.is_authenticated
14 return super().handle_no_permission()
15
16
17 class DomainPermission(PermissionsLoginMixin):
18
19 """Does the logged-in user have access to this domain?"""
20
21 def has_permission(self):
22 """Check if this user has access to this domain.
23
24 The user is in self.request.user and the domain needs to be looked
25 up from the domain's primary key in self.kwargs["pk"]
26 """
27 if not self.request.user.is_authenticated:
28 return False
29
30 # user needs to have a role on the domain
31 if not UserDomainRole.objects.filter(
32 user=self.request.user, domain__id=self.kwargs["pk"]
33 ).exists():
34 return False
35
36 # if we need to check more about the nature of role, do it here.
37 return True
38
39
40 class DomainApplicationPermission(PermissionsLoginMixin):
41
42 """Does the logged-in user have access to this domain application?"""
43
44 def has_permission(self):
45 """Check if this user has access to this domain application.
46
47 The user is in self.request.user and the domain needs to be looked
48 up from the domain's primary key in self.kwargs["pk"]
49 """
50 if not self.request.user.is_authenticated:
51 return False
52
53 # user needs to be the creator of the application
54 # this query is empty if there isn't a domain application with this
55 # id and this user as creator
56 if not DomainApplication.objects.filter(
57 creator=self.request.user, id=self.kwargs["pk"]
58 ).exists():
59 return False
60
61 return True
62
63
64 class DomainInvitationPermission(PermissionsLoginMixin):
65
66 """Does the logged-in user have access to this domain invitation?
67
68 A user has access to a domain invitation if they have a role on the
69 associated domain.
70 """
71
72 def has_permission(self):
73 """Check if this user has a role on the domain of this invitation."""
74 if not self.request.user.is_authenticated:
75 return False
76
77 if not DomainInvitation.objects.filter(
78 id=self.kwargs["pk"], domain__permissions__user=self.request.user
79 ).exists():
80 return False
81
82 return True
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/registrar/templatetags/custom_filters.py b/src/registrar/templatetags/custom_filters.py
--- a/src/registrar/templatetags/custom_filters.py
+++ b/src/registrar/templatetags/custom_filters.py
@@ -23,3 +23,20 @@
extracted_text = ""
return extracted_text
+
+
[email protected]
+def find_index(haystack, needle):
+ try:
+ return haystack.index(needle)
+ except ValueError:
+ return -1
+
+
[email protected]
+def slice_after(value, substring):
+ index = value.find(substring)
+ if index != -1:
+ result = value[index + len(substring) :]
+ return result
+ return value
diff --git a/src/registrar/views/utility/mixins.py b/src/registrar/views/utility/mixins.py
--- a/src/registrar/views/utility/mixins.py
+++ b/src/registrar/views/utility/mixins.py
@@ -24,6 +24,12 @@
The user is in self.request.user and the domain needs to be looked
up from the domain's primary key in self.kwargs["pk"]
"""
+
+ # ticket 806
+ # if self.request.user is staff or admin and
+ # domain.application__status = 'approved' or 'rejected' or 'action needed'
+ # return True
+
if not self.request.user.is_authenticated:
return False
@@ -33,6 +39,10 @@
).exists():
return False
+ # ticket 796
+ # if domain.application__status != 'approved'
+ # return false
+
# if we need to check more about the nature of role, do it here.
return True
|
{"golden_diff": "diff --git a/src/registrar/templatetags/custom_filters.py b/src/registrar/templatetags/custom_filters.py\n--- a/src/registrar/templatetags/custom_filters.py\n+++ b/src/registrar/templatetags/custom_filters.py\n@@ -23,3 +23,20 @@\n extracted_text = \"\"\n \n return extracted_text\n+\n+\[email protected]\n+def find_index(haystack, needle):\n+ try:\n+ return haystack.index(needle)\n+ except ValueError:\n+ return -1\n+\n+\[email protected]\n+def slice_after(value, substring):\n+ index = value.find(substring)\n+ if index != -1:\n+ result = value[index + len(substring) :]\n+ return result\n+ return value\ndiff --git a/src/registrar/views/utility/mixins.py b/src/registrar/views/utility/mixins.py\n--- a/src/registrar/views/utility/mixins.py\n+++ b/src/registrar/views/utility/mixins.py\n@@ -24,6 +24,12 @@\n The user is in self.request.user and the domain needs to be looked\n up from the domain's primary key in self.kwargs[\"pk\"]\n \"\"\"\n+\n+ # ticket 806\n+ # if self.request.user is staff or admin and\n+ # domain.application__status = 'approved' or 'rejected' or 'action needed'\n+ # return True\n+\n if not self.request.user.is_authenticated:\n return False\n \n@@ -33,6 +39,10 @@\n ).exists():\n return False\n \n+ # ticket 796\n+ # if domain.application__status != 'approved'\n+ # return false\n+\n # if we need to check more about the nature of role, do it here.\n return True\n", "issue": "Add field for tribe locality on Tribal Goverment Question Page \n### Story\r\n\r\nAs a Registrant Applicant I want a way to specify the locality of my tribe so that I can give analysts more specific information to complete their review of my request\r\n\r\n\r\n\r\n\r\n### Acceptance Criteria\r\n\r\n- [x] Content is drafted for the question\r\n- [x] Content is approved\r\n- [x] Design prototype demonstrates the look and feel for approval\r\n- [x] Implement the field in the registrar\r\n\r\nA new field is added to the page that allows the user to specify the locality of their tribe\r\n\r\n### Additional Context\r\n\r\n_No response_\r\n\r\n### Issue Links\r\n\r\n_No response_\n", "before_files": [{"content": "from django import template\nimport re\n\nregister = template.Library()\n\n\[email protected](name=\"extract_value\")\ndef extract_value(html_input):\n match = re.search(r'value=\"([^\"]*)\"', html_input)\n if match:\n return match.group(1)\n return \"\"\n\n\[email protected]\ndef extract_a_text(value):\n # Use regex to extract the text within the <a> tag\n pattern = r\"<a\\b[^>]*>(.*?)</a>\"\n match = re.search(pattern, value)\n if match:\n extracted_text = match.group(1)\n else:\n extracted_text = \"\"\n\n return extracted_text\n", "path": "src/registrar/templatetags/custom_filters.py"}, {"content": "\"\"\"Permissions-related mixin classes.\"\"\"\n\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\n\nfrom registrar.models import UserDomainRole, DomainApplication, DomainInvitation\n\n\nclass PermissionsLoginMixin(PermissionRequiredMixin):\n\n \"\"\"Mixin that redirects to login page if not logged in, otherwise 403.\"\"\"\n\n def handle_no_permission(self):\n self.raise_exception = self.request.user.is_authenticated\n return super().handle_no_permission()\n\n\nclass DomainPermission(PermissionsLoginMixin):\n\n \"\"\"Does the logged-in user have access to this domain?\"\"\"\n\n def has_permission(self):\n \"\"\"Check if this user has access to this domain.\n\n The user is in self.request.user and the domain needs to be looked\n up from the domain's primary key in 
self.kwargs[\"pk\"]\n \"\"\"\n if not self.request.user.is_authenticated:\n return False\n\n # user needs to have a role on the domain\n if not UserDomainRole.objects.filter(\n user=self.request.user, domain__id=self.kwargs[\"pk\"]\n ).exists():\n return False\n\n # if we need to check more about the nature of role, do it here.\n return True\n\n\nclass DomainApplicationPermission(PermissionsLoginMixin):\n\n \"\"\"Does the logged-in user have access to this domain application?\"\"\"\n\n def has_permission(self):\n \"\"\"Check if this user has access to this domain application.\n\n The user is in self.request.user and the domain needs to be looked\n up from the domain's primary key in self.kwargs[\"pk\"]\n \"\"\"\n if not self.request.user.is_authenticated:\n return False\n\n # user needs to be the creator of the application\n # this query is empty if there isn't a domain application with this\n # id and this user as creator\n if not DomainApplication.objects.filter(\n creator=self.request.user, id=self.kwargs[\"pk\"]\n ).exists():\n return False\n\n return True\n\n\nclass DomainInvitationPermission(PermissionsLoginMixin):\n\n \"\"\"Does the logged-in user have access to this domain invitation?\n\n A user has access to a domain invitation if they have a role on the\n associated domain.\n \"\"\"\n\n def has_permission(self):\n \"\"\"Check if this user has a role on the domain of this invitation.\"\"\"\n if not self.request.user.is_authenticated:\n return False\n\n if not DomainInvitation.objects.filter(\n id=self.kwargs[\"pk\"], domain__permissions__user=self.request.user\n ).exists():\n return False\n\n return True\n", "path": "src/registrar/views/utility/mixins.py"}], "after_files": [{"content": "from django import template\nimport re\n\nregister = template.Library()\n\n\[email protected](name=\"extract_value\")\ndef extract_value(html_input):\n match = re.search(r'value=\"([^\"]*)\"', html_input)\n if match:\n return match.group(1)\n return \"\"\n\n\[email protected]\ndef extract_a_text(value):\n # Use regex to extract the text within the <a> tag\n pattern = r\"<a\\b[^>]*>(.*?)</a>\"\n match = re.search(pattern, value)\n if match:\n extracted_text = match.group(1)\n else:\n extracted_text = \"\"\n\n return extracted_text\n\n\[email protected]\ndef find_index(haystack, needle):\n try:\n return haystack.index(needle)\n except ValueError:\n return -1\n\n\[email protected]\ndef slice_after(value, substring):\n index = value.find(substring)\n if index != -1:\n result = value[index + len(substring) :]\n return result\n return value\n", "path": "src/registrar/templatetags/custom_filters.py"}, {"content": "\"\"\"Permissions-related mixin classes.\"\"\"\n\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\n\nfrom registrar.models import UserDomainRole, DomainApplication, DomainInvitation\n\n\nclass PermissionsLoginMixin(PermissionRequiredMixin):\n\n \"\"\"Mixin that redirects to login page if not logged in, otherwise 403.\"\"\"\n\n def handle_no_permission(self):\n self.raise_exception = self.request.user.is_authenticated\n return super().handle_no_permission()\n\n\nclass DomainPermission(PermissionsLoginMixin):\n\n \"\"\"Does the logged-in user have access to this domain?\"\"\"\n\n def has_permission(self):\n \"\"\"Check if this user has access to this domain.\n\n The user is in self.request.user and the domain needs to be looked\n up from the domain's primary key in self.kwargs[\"pk\"]\n \"\"\"\n\n # ticket 806\n # if self.request.user is staff or admin and\n # 
domain.application__status = 'approved' or 'rejected' or 'action needed'\n # return True\n\n if not self.request.user.is_authenticated:\n return False\n\n # user needs to have a role on the domain\n if not UserDomainRole.objects.filter(\n user=self.request.user, domain__id=self.kwargs[\"pk\"]\n ).exists():\n return False\n\n # ticket 796\n # if domain.application__status != 'approved'\n # return false\n\n # if we need to check more about the nature of role, do it here.\n return True\n\n\nclass DomainApplicationPermission(PermissionsLoginMixin):\n\n \"\"\"Does the logged-in user have access to this domain application?\"\"\"\n\n def has_permission(self):\n \"\"\"Check if this user has access to this domain application.\n\n The user is in self.request.user and the domain needs to be looked\n up from the domain's primary key in self.kwargs[\"pk\"]\n \"\"\"\n if not self.request.user.is_authenticated:\n return False\n\n # user needs to be the creator of the application\n # this query is empty if there isn't a domain application with this\n # id and this user as creator\n if not DomainApplication.objects.filter(\n creator=self.request.user, id=self.kwargs[\"pk\"]\n ).exists():\n return False\n\n return True\n\n\nclass DomainInvitationPermission(PermissionsLoginMixin):\n\n \"\"\"Does the logged-in user have access to this domain invitation?\n\n A user has access to a domain invitation if they have a role on the\n associated domain.\n \"\"\"\n\n def has_permission(self):\n \"\"\"Check if this user has a role on the domain of this invitation.\"\"\"\n if not self.request.user.is_authenticated:\n return False\n\n if not DomainInvitation.objects.filter(\n id=self.kwargs[\"pk\"], domain__permissions__user=self.request.user\n ).exists():\n return False\n\n return True\n", "path": "src/registrar/views/utility/mixins.py"}]}
| 1,351 | 403 |
gh_patches_debug_13470
|
rasdani/github-patches
|
git_diff
|
buildbot__buildbot-2346
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Log scroll bug
* [x] I have created a file in the `master/buildbot/newsfragment` directory (and read the `README.txt` in that directory)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/changes/bitbucket.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15 import time
16 from datetime import datetime
17
18 from twisted.internet import defer
19 from twisted.internet import reactor
20 from twisted.python import log
21 from twisted.web import client
22
23 from buildbot.changes import base
24 from buildbot.util import ascii2unicode
25 from buildbot.util import datetime2epoch
26 from buildbot.util import deferredLocked
27 from buildbot.util import epoch2datetime
28 from buildbot.util import json
29
30
31 class BitbucketPullrequestPoller(base.PollingChangeSource):
32
33 compare_attrs = ("owner", "slug", "branch",
34 "pollInterval", "useTimestamps",
35 "category", "project", "pollAtLaunch")
36
37 db_class_name = 'BitbucketPullrequestPoller'
38
39 def __init__(self, owner, slug,
40 branch=None,
41 pollInterval=10 * 60,
42 useTimestamps=True,
43 category=None,
44 project='',
45 pullrequest_filter=True,
46 encoding='utf-8',
47 pollAtLaunch=False
48 ):
49
50 self.owner = owner
51 self.slug = slug
52 self.branch = branch
53 base.PollingChangeSource.__init__(
54 self, name='/'.join([owner, slug]), pollInterval=pollInterval, pollAtLaunch=pollAtLaunch)
55 self.encoding = encoding
56
57 if hasattr(pullrequest_filter, '__call__'):
58 self.pullrequest_filter = pullrequest_filter
59 else:
60 self.pullrequest_filter = (lambda _: pullrequest_filter)
61
62 self.lastChange = time.time()
63 self.lastPoll = time.time()
64 self.useTimestamps = useTimestamps
65 self.category = category if callable(
66 category) else ascii2unicode(category)
67 self.project = ascii2unicode(project)
68 self.initLock = defer.DeferredLock()
69
70 def describe(self):
71 return "BitbucketPullrequestPoller watching the "\
72 "Bitbucket repository %s/%s, branch: %s" % (
73 self.owner, self.slug, self.branch)
74
75 @deferredLocked('initLock')
76 def poll(self):
77 d = self._getChanges()
78 d.addCallback(self._processChanges)
79 d.addErrback(self._processChangesFailure)
80 return d
81
82 def _getChanges(self):
83 self.lastPoll = time.time()
84 log.msg("BitbucketPullrequestPoller: polling "
85 "Bitbucket repository %s/%s, branch: %s" % (self.owner, self.slug, self.branch))
86 url = "https://bitbucket.org/api/2.0/repositories/%s/%s/pullrequests" % (
87 self.owner, self.slug)
88 return client.getPage(url, timeout=self.pollInterval)
89
90 @defer.inlineCallbacks
91 def _processChanges(self, page):
92 result = json.loads(page, encoding=self.encoding)
93 for pr in result['values']:
94 branch = pr['source']['branch']['name']
95 nr = int(pr['id'])
96 # Note that this is a short hash. The full length hash can be accessed via the
97 # commit api resource but we want to avoid requesting multiple pages as long as
98 # we are not sure that the pull request is new or updated.
99 revision = pr['source']['commit']['hash']
100
101 # check branch
102 if not self.branch or branch in self.branch:
103 current = yield self._getCurrentRev(nr)
104
105 if not current or current != revision:
106 # parse pull request api page (required for the filter)
107 page = yield client.getPage(str(pr['links']['self']['href']))
108 pr_json = json.loads(page, encoding=self.encoding)
109
110 # filter pull requests by user function
111 if not self.pullrequest_filter(pr_json):
112 log.msg('pull request does not match filter')
113 continue
114
115 # access additional information
116 author = pr['author']['display_name']
117 prlink = pr['links']['html']['href']
118 # Get time updated time. Note that the timezone offset is
119 # ignored.
120 if self.useTimestamps:
121 updated = datetime.strptime(
122 pr['updated_on'].split('.')[0],
123 '%Y-%m-%dT%H:%M:%S')
124 else:
125 updated = epoch2datetime(reactor.seconds())
126 title = pr['title']
127 # parse commit api page
128 page = yield client.getPage(str(pr['source']['commit']['links']['self']['href']))
129 commit_json = json.loads(page, encoding=self.encoding)
130 # use the full-length hash from now on
131 revision = commit_json['hash']
132 revlink = commit_json['links']['html']['href']
133 # parse repo api page
134 page = yield client.getPage(str(pr['source']['repository']['links']['self']['href']))
135 repo_json = json.loads(page, encoding=self.encoding)
136 repo = repo_json['links']['html']['href']
137
138 # update database
139 yield self._setCurrentRev(nr, revision)
140 # emit the change
141 yield self.master.data.updates.addChange(
142 author=ascii2unicode(author),
143 revision=ascii2unicode(revision),
144 revlink=ascii2unicode(revlink),
145 comments=u'pull-request #%d: %s\n%s' % (
146 nr, title, prlink),
147 when_timestamp=datetime2epoch(updated),
148 branch=self.branch,
149 category=self.category,
150 project=self.project,
151 repository=ascii2unicode(repo),
152 src=u'bitbucket',
153 )
154
155 def _processChangesFailure(self, f):
156 log.msg('BitbucketPullrequestPoller: json api poll failed')
157 log.err(f)
158 # eat the failure to continue along the defered chain - we still want
159 # to catch up
160 return None
161
162 def _getCurrentRev(self, pr_id):
163 # Return a deferred datetime object for the given pull request number
164 # or None.
165 d = self._getStateObjectId()
166
167 @d.addCallback
168 def oid_callback(oid):
169 current = self.master.db.state.getState(
170 oid, 'pull_request%d' % pr_id, None)
171
172 @current.addCallback
173 def result_callback(result):
174 return result
175 return current
176 return d
177
178 def _setCurrentRev(self, pr_id, rev):
179 # Set the datetime entry for a specifed pull request.
180 d = self._getStateObjectId()
181
182 @d.addCallback
183 def oid_callback(oid):
184 return self.master.db.state.setState(oid, 'pull_request%d' % pr_id, rev)
185
186 return d
187
188 def _getStateObjectId(self):
189 # Return a deferred for object id in state db.
190 return self.master.db.state.getObjectId(
191 '%s/%s#%s' % (self.owner, self.slug, self.branch), self.db_class_name)
192
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/master/buildbot/changes/bitbucket.py b/master/buildbot/changes/bitbucket.py
--- a/master/buildbot/changes/bitbucket.py
+++ b/master/buildbot/changes/bitbucket.py
@@ -102,7 +102,8 @@
if not self.branch or branch in self.branch:
current = yield self._getCurrentRev(nr)
- if not current or current != revision:
+ # compare _short_ hashes to check if the PR has been updated
+ if not current or current[0:12] != revision[0:12]:
# parse pull request api page (required for the filter)
page = yield client.getPage(str(pr['links']['self']['href']))
pr_json = json.loads(page, encoding=self.encoding)
|
{"golden_diff": "diff --git a/master/buildbot/changes/bitbucket.py b/master/buildbot/changes/bitbucket.py\n--- a/master/buildbot/changes/bitbucket.py\n+++ b/master/buildbot/changes/bitbucket.py\n@@ -102,7 +102,8 @@\n if not self.branch or branch in self.branch:\n current = yield self._getCurrentRev(nr)\n \n- if not current or current != revision:\n+ # compare _short_ hashes to check if the PR has been updated\n+ if not current or current[0:12] != revision[0:12]:\n # parse pull request api page (required for the filter)\n page = yield client.getPage(str(pr['links']['self']['href']))\n pr_json = json.loads(page, encoding=self.encoding)\n", "issue": "Log scroll bug\n\r\n* [x] I have created a file in the `master/buildbot/newsfragment` directory (and read the `README.txt` in that directory)\r\n\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\nimport time\nfrom datetime import datetime\n\nfrom twisted.internet import defer\nfrom twisted.internet import reactor\nfrom twisted.python import log\nfrom twisted.web import client\n\nfrom buildbot.changes import base\nfrom buildbot.util import ascii2unicode\nfrom buildbot.util import datetime2epoch\nfrom buildbot.util import deferredLocked\nfrom buildbot.util import epoch2datetime\nfrom buildbot.util import json\n\n\nclass BitbucketPullrequestPoller(base.PollingChangeSource):\n\n compare_attrs = (\"owner\", \"slug\", \"branch\",\n \"pollInterval\", \"useTimestamps\",\n \"category\", \"project\", \"pollAtLaunch\")\n\n db_class_name = 'BitbucketPullrequestPoller'\n\n def __init__(self, owner, slug,\n branch=None,\n pollInterval=10 * 60,\n useTimestamps=True,\n category=None,\n project='',\n pullrequest_filter=True,\n encoding='utf-8',\n pollAtLaunch=False\n ):\n\n self.owner = owner\n self.slug = slug\n self.branch = branch\n base.PollingChangeSource.__init__(\n self, name='/'.join([owner, slug]), pollInterval=pollInterval, pollAtLaunch=pollAtLaunch)\n self.encoding = encoding\n\n if hasattr(pullrequest_filter, '__call__'):\n self.pullrequest_filter = pullrequest_filter\n else:\n self.pullrequest_filter = (lambda _: pullrequest_filter)\n\n self.lastChange = time.time()\n self.lastPoll = time.time()\n self.useTimestamps = useTimestamps\n self.category = category if callable(\n category) else ascii2unicode(category)\n self.project = ascii2unicode(project)\n self.initLock = defer.DeferredLock()\n\n def describe(self):\n return \"BitbucketPullrequestPoller watching the \"\\\n \"Bitbucket repository %s/%s, branch: %s\" % (\n self.owner, self.slug, self.branch)\n\n @deferredLocked('initLock')\n def poll(self):\n d = self._getChanges()\n d.addCallback(self._processChanges)\n d.addErrback(self._processChangesFailure)\n return d\n\n def _getChanges(self):\n self.lastPoll = time.time()\n log.msg(\"BitbucketPullrequestPoller: polling \"\n \"Bitbucket repository %s/%s, branch: 
%s\" % (self.owner, self.slug, self.branch))\n url = \"https://bitbucket.org/api/2.0/repositories/%s/%s/pullrequests\" % (\n self.owner, self.slug)\n return client.getPage(url, timeout=self.pollInterval)\n\n @defer.inlineCallbacks\n def _processChanges(self, page):\n result = json.loads(page, encoding=self.encoding)\n for pr in result['values']:\n branch = pr['source']['branch']['name']\n nr = int(pr['id'])\n # Note that this is a short hash. The full length hash can be accessed via the\n # commit api resource but we want to avoid requesting multiple pages as long as\n # we are not sure that the pull request is new or updated.\n revision = pr['source']['commit']['hash']\n\n # check branch\n if not self.branch or branch in self.branch:\n current = yield self._getCurrentRev(nr)\n\n if not current or current != revision:\n # parse pull request api page (required for the filter)\n page = yield client.getPage(str(pr['links']['self']['href']))\n pr_json = json.loads(page, encoding=self.encoding)\n\n # filter pull requests by user function\n if not self.pullrequest_filter(pr_json):\n log.msg('pull request does not match filter')\n continue\n\n # access additional information\n author = pr['author']['display_name']\n prlink = pr['links']['html']['href']\n # Get time updated time. Note that the timezone offset is\n # ignored.\n if self.useTimestamps:\n updated = datetime.strptime(\n pr['updated_on'].split('.')[0],\n '%Y-%m-%dT%H:%M:%S')\n else:\n updated = epoch2datetime(reactor.seconds())\n title = pr['title']\n # parse commit api page\n page = yield client.getPage(str(pr['source']['commit']['links']['self']['href']))\n commit_json = json.loads(page, encoding=self.encoding)\n # use the full-length hash from now on\n revision = commit_json['hash']\n revlink = commit_json['links']['html']['href']\n # parse repo api page\n page = yield client.getPage(str(pr['source']['repository']['links']['self']['href']))\n repo_json = json.loads(page, encoding=self.encoding)\n repo = repo_json['links']['html']['href']\n\n # update database\n yield self._setCurrentRev(nr, revision)\n # emit the change\n yield self.master.data.updates.addChange(\n author=ascii2unicode(author),\n revision=ascii2unicode(revision),\n revlink=ascii2unicode(revlink),\n comments=u'pull-request #%d: %s\\n%s' % (\n nr, title, prlink),\n when_timestamp=datetime2epoch(updated),\n branch=self.branch,\n category=self.category,\n project=self.project,\n repository=ascii2unicode(repo),\n src=u'bitbucket',\n )\n\n def _processChangesFailure(self, f):\n log.msg('BitbucketPullrequestPoller: json api poll failed')\n log.err(f)\n # eat the failure to continue along the defered chain - we still want\n # to catch up\n return None\n\n def _getCurrentRev(self, pr_id):\n # Return a deferred datetime object for the given pull request number\n # or None.\n d = self._getStateObjectId()\n\n @d.addCallback\n def oid_callback(oid):\n current = self.master.db.state.getState(\n oid, 'pull_request%d' % pr_id, None)\n\n @current.addCallback\n def result_callback(result):\n return result\n return current\n return d\n\n def _setCurrentRev(self, pr_id, rev):\n # Set the datetime entry for a specifed pull request.\n d = self._getStateObjectId()\n\n @d.addCallback\n def oid_callback(oid):\n return self.master.db.state.setState(oid, 'pull_request%d' % pr_id, rev)\n\n return d\n\n def _getStateObjectId(self):\n # Return a deferred for object id in state db.\n return self.master.db.state.getObjectId(\n '%s/%s#%s' % (self.owner, self.slug, self.branch), 
self.db_class_name)\n", "path": "master/buildbot/changes/bitbucket.py"}], "after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\nimport time\nfrom datetime import datetime\n\nfrom twisted.internet import defer\nfrom twisted.internet import reactor\nfrom twisted.python import log\nfrom twisted.web import client\n\nfrom buildbot.changes import base\nfrom buildbot.util import ascii2unicode\nfrom buildbot.util import datetime2epoch\nfrom buildbot.util import deferredLocked\nfrom buildbot.util import epoch2datetime\nfrom buildbot.util import json\n\n\nclass BitbucketPullrequestPoller(base.PollingChangeSource):\n\n compare_attrs = (\"owner\", \"slug\", \"branch\",\n \"pollInterval\", \"useTimestamps\",\n \"category\", \"project\", \"pollAtLaunch\")\n\n db_class_name = 'BitbucketPullrequestPoller'\n\n def __init__(self, owner, slug,\n branch=None,\n pollInterval=10 * 60,\n useTimestamps=True,\n category=None,\n project='',\n pullrequest_filter=True,\n encoding='utf-8',\n pollAtLaunch=False\n ):\n\n self.owner = owner\n self.slug = slug\n self.branch = branch\n base.PollingChangeSource.__init__(\n self, name='/'.join([owner, slug]), pollInterval=pollInterval, pollAtLaunch=pollAtLaunch)\n self.encoding = encoding\n\n if hasattr(pullrequest_filter, '__call__'):\n self.pullrequest_filter = pullrequest_filter\n else:\n self.pullrequest_filter = (lambda _: pullrequest_filter)\n\n self.lastChange = time.time()\n self.lastPoll = time.time()\n self.useTimestamps = useTimestamps\n self.category = category if callable(\n category) else ascii2unicode(category)\n self.project = ascii2unicode(project)\n self.initLock = defer.DeferredLock()\n\n def describe(self):\n return \"BitbucketPullrequestPoller watching the \"\\\n \"Bitbucket repository %s/%s, branch: %s\" % (\n self.owner, self.slug, self.branch)\n\n @deferredLocked('initLock')\n def poll(self):\n d = self._getChanges()\n d.addCallback(self._processChanges)\n d.addErrback(self._processChangesFailure)\n return d\n\n def _getChanges(self):\n self.lastPoll = time.time()\n log.msg(\"BitbucketPullrequestPoller: polling \"\n \"Bitbucket repository %s/%s, branch: %s\" % (self.owner, self.slug, self.branch))\n url = \"https://bitbucket.org/api/2.0/repositories/%s/%s/pullrequests\" % (\n self.owner, self.slug)\n return client.getPage(url, timeout=self.pollInterval)\n\n @defer.inlineCallbacks\n def _processChanges(self, page):\n result = json.loads(page, encoding=self.encoding)\n for pr in result['values']:\n branch = pr['source']['branch']['name']\n nr = int(pr['id'])\n # Note that this is a short hash. 
The full length hash can be accessed via the\n # commit api resource but we want to avoid requesting multiple pages as long as\n # we are not sure that the pull request is new or updated.\n revision = pr['source']['commit']['hash']\n\n # check branch\n if not self.branch or branch in self.branch:\n current = yield self._getCurrentRev(nr)\n\n # compare _short_ hashes to check if the PR has been updated\n if not current or current[0:12] != revision[0:12]:\n # parse pull request api page (required for the filter)\n page = yield client.getPage(str(pr['links']['self']['href']))\n pr_json = json.loads(page, encoding=self.encoding)\n\n # filter pull requests by user function\n if not self.pullrequest_filter(pr_json):\n log.msg('pull request does not match filter')\n continue\n\n # access additional information\n author = pr['author']['display_name']\n prlink = pr['links']['html']['href']\n # Get time updated time. Note that the timezone offset is\n # ignored.\n if self.useTimestamps:\n updated = datetime.strptime(\n pr['updated_on'].split('.')[0],\n '%Y-%m-%dT%H:%M:%S')\n else:\n updated = epoch2datetime(reactor.seconds())\n title = pr['title']\n # parse commit api page\n page = yield client.getPage(str(pr['source']['commit']['links']['self']['href']))\n commit_json = json.loads(page, encoding=self.encoding)\n # use the full-length hash from now on\n revision = commit_json['hash']\n revlink = commit_json['links']['html']['href']\n # parse repo api page\n page = yield client.getPage(str(pr['source']['repository']['links']['self']['href']))\n repo_json = json.loads(page, encoding=self.encoding)\n repo = repo_json['links']['html']['href']\n\n # update database\n yield self._setCurrentRev(nr, revision)\n # emit the change\n yield self.master.data.updates.addChange(\n author=ascii2unicode(author),\n revision=ascii2unicode(revision),\n revlink=ascii2unicode(revlink),\n comments=u'pull-request #%d: %s\\n%s' % (\n nr, title, prlink),\n when_timestamp=datetime2epoch(updated),\n branch=self.branch,\n category=self.category,\n project=self.project,\n repository=ascii2unicode(repo),\n src=u'bitbucket',\n )\n\n def _processChangesFailure(self, f):\n log.msg('BitbucketPullrequestPoller: json api poll failed')\n log.err(f)\n # eat the failure to continue along the defered chain - we still want\n # to catch up\n return None\n\n def _getCurrentRev(self, pr_id):\n # Return a deferred datetime object for the given pull request number\n # or None.\n d = self._getStateObjectId()\n\n @d.addCallback\n def oid_callback(oid):\n current = self.master.db.state.getState(\n oid, 'pull_request%d' % pr_id, None)\n\n @current.addCallback\n def result_callback(result):\n return result\n return current\n return d\n\n def _setCurrentRev(self, pr_id, rev):\n # Set the datetime entry for a specifed pull request.\n d = self._getStateObjectId()\n\n @d.addCallback\n def oid_callback(oid):\n return self.master.db.state.setState(oid, 'pull_request%d' % pr_id, rev)\n\n return d\n\n def _getStateObjectId(self):\n # Return a deferred for object id in state db.\n return self.master.db.state.getObjectId(\n '%s/%s#%s' % (self.owner, self.slug, self.branch), self.db_class_name)\n", "path": "master/buildbot/changes/bitbucket.py"}]}
| 2,365 | 168 |
gh_patches_debug_26088
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-368
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
remove raven dependency
because we don't use it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/settings.py`
Content:
```
1 # Django settings for evap project.
2
3 # automatically determine SITE_ROOT, used for absolute paths below
4 import os.path
5 SITE_ROOT = os.path.dirname(os.path.realpath(__file__))
6
7 DEBUG = True
8 TEMPLATE_DEBUG = DEBUG
9
10 ADMINS = (
11 # ('Your Name', '[email protected]'),
12 )
13
14 MANAGERS = ADMINS
15
16 DATABASES = {
17 'default': {
18 'ENGINE': 'django.db.backends.sqlite3', # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
19 'NAME': os.path.join(SITE_ROOT, 'database.sqlite3'), # Or path to database file if using sqlite3.
20 'USER': '', # Not used with sqlite3.
21 'PASSWORD': '', # Not used with sqlite3.
22 'HOST': '', # Set to empty string for localhost. Not used with sqlite3.
23 'PORT': '', # Set to empty string for default. Not used with sqlite3.
24 }
25 }
26
27 CACHES = {
28 'default': {
29 # 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
30 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
31 }
32 }
33
34 # config for feedback links
35 FEEDBACK_EMAIL = "webmaster@localhost"
36 TRACKER_URL = "https://github.com/fsr-itse/EvaP"
37
38 # config for mail system
39 DEFAULT_FROM_EMAIL = "webmaster@localhost"
40 REPLY_TO_EMAIL = DEFAULT_FROM_EMAIL
41 if DEBUG:
42 EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
43
44 # key authentication settings
45 LOGIN_KEY_VALIDITY = 210 # days, so roughly 7 months
46
47 # minimum answers needed for publishing
48 MIN_ANSWER_COUNT = 2
49 MIN_ANSWER_PERCENTAGE = 0.2
50
51 # the final total grade will be calculated by the following formula (GP = GRADE_PERCENTAGE, CP = CONTRIBUTION_PERCENTAGE):
52 # final_likert = CP * likert_answers_about_persons + (1-CP) * likert_answers_about_courses
53 # final_grade = CP * grade_answers_about_persons + (1-CP) * grade_answers_about_courses
54 # final = GP * final_grade + (1-GP) * final_likert
55 GRADE_PERCENTAGE = 0.8
56 CONTRIBUTION_PERCENTAGE = 0.5
57
58 # days before end date to send reminder
59 REMIND_X_DAYS_AHEAD_OF_END_DATE = 2
60
61 # email domains for the internal users of the hosting institution used to
62 # figure out who can login with username and password and who needs a login key
63 INSTITUTION_EMAIL_DOMAINS = ["hpi.uni-potsdam.de", "student.hpi.uni-potsdam.de"]
64
65 # Local time zone for this installation. Choices can be found here:
66 # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
67 # although not all choices may be available on all operating systems.
68 # On Unix systems, a value of None will cause Django to use the same
69 # timezone as the operating system.
70 # If running in a Windows environment this must be set to the same as your
71 # system time zone.
72 TIME_ZONE = 'Europe/Berlin'
73
74 # Language code for this installation. All choices can be found here:
75 # http://www.i18nguy.com/unicode/language-identifiers.html
76 LANGUAGE_CODE = 'en'
77
78 LANGUAGES = (
79 ('en', "English"),
80 ('de', "Deutsch"),
81 )
82
83 SITE_ID = 1
84
85 # If you set this to False, Django will make some optimizations so as not
86 # to load the internationalization machinery.
87 USE_I18N = True
88
89 # If you set this to False, Django will not format dates, numbers and
90 # calendars according to the current locale
91 USE_L10N = True
92
93 # Additional locations of fixture files
94 FIXTURE_DIRS = (
95 os.path.join(SITE_ROOT, "evaluation/fixtures"),
96 )
97
98 # Locale paths
99 LOCALE_PATHS = (
100 os.path.join(SITE_ROOT, "locale"),
101 )
102
103 # Absolute filesystem path to the directory that will hold user-uploaded files.
104 # Example: "/home/media/media.lawrence.com/media/"
105 MEDIA_ROOT = os.path.join(SITE_ROOT, "upload")
106
107 # URL that handles the media served from MEDIA_ROOT. Make sure to use a
108 # trailing slash.
109 # Examples: "http://media.lawrence.com/media/", "http://example.com/media/"
110 MEDIA_URL = '/media/'
111
112 # Absolute path to the directory static files should be collected to.
113 # Don't put anything in this directory yourself; store your static files
114 # in apps' "static/" subdirectories and in STATICFILES_DIRS.
115 # Example: "/home/media/media.lawrence.com/static/"
116 STATIC_ROOT = os.path.join(SITE_ROOT, "staticfiles")
117
118 # URL prefix for static files.
119 # Example: "http://media.lawrence.com/static/"
120 STATIC_URL = '/static/'
121
122 # URL prefix for admin static files -- CSS, JavaScript and images.
123 # Make sure to use a trailing slash.
124 # Examples: "http://foo.com/static/admin/", "/static/admin/".
125 ADMIN_MEDIA_PREFIX = '/static/admin/'
126
127 # Additional locations of static files
128 STATICFILES_DIRS = (
129 # Put strings here, like "/home/html/static" or "C:/www/django/static".
130 # Always use forward slashes, even on Windows.
131 # Don't forget to use absolute paths, not relative paths.
132 os.path.join(SITE_ROOT, "static"),
133 )
134
135 # List of finder classes that know how to find static files in
136 # various locations.
137 STATICFILES_FINDERS = (
138 'django.contrib.staticfiles.finders.FileSystemFinder',
139 'django.contrib.staticfiles.finders.AppDirectoriesFinder',
140 # 'django.contrib.staticfiles.finders.DefaultStorageFinder',
141 )
142
143 # Make this unique, and don't share it with anybody.
144 SECRET_KEY = 'k9-)vh3c_dtm6bpi7j(!*s_^91v0!ekjt_#o&0i$e22tnn^-vb'
145
146 # List of callables that know how to import templates from various sources.
147 TEMPLATE_LOADERS = (
148 'django.template.loaders.filesystem.Loader',
149 'django.template.loaders.app_directories.Loader',
150 # 'django.template.loaders.eggs.Loader',
151 )
152
153 TEMPLATE_CONTEXT_PROCESSORS = (
154 "django.contrib.auth.context_processors.auth",
155 "django.core.context_processors.debug",
156 "django.core.context_processors.i18n",
157 "django.core.context_processors.media",
158 "django.core.context_processors.static",
159 "django.core.context_processors.request",
160 "django.contrib.messages.context_processors.messages",
161 )
162
163 MIDDLEWARE_CLASSES = (
164 'django.middleware.common.CommonMiddleware',
165 'django.contrib.sessions.middleware.SessionMiddleware',
166 'django.middleware.locale.LocaleMiddleware',
167 'django.middleware.csrf.CsrfViewMiddleware',
168 'django.contrib.auth.middleware.AuthenticationMiddleware',
169 'django.contrib.messages.middleware.MessageMiddleware',
170 'evap.evaluation.auth.RequestAuthMiddleware',
171 'evap.evaluation.403.Django403Middleware',
172 )
173
174 AUTHENTICATION_BACKENDS = (
175 'evap.evaluation.auth.RequestAuthUserBackend',
176 # 'django_auth_kerberos.backends.KrbBackend',
177 'django.contrib.auth.backends.ModelBackend',
178 )
179
180 # kerberos realm and service
181 #KRB5_REALM = 'EXAMPLE.COM'
182 #KRB5_SERVICE = '[email protected]'
183
184 # redirect url after login
185 LOGIN_REDIRECT_URL = '/'
186
187 LOGIN_URL = "/"
188
189 ROOT_URLCONF = 'evap.urls'
190
191 TEMPLATE_DIRS = (
192 # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
193 # Always use forward slashes, even on Windows.
194 # Don't forget to use absolute paths, not relative paths.
195 os.path.join(SITE_ROOT, "templates"),
196 )
197
198 INSTALLED_APPS = (
199 'django.contrib.auth',
200 'django.contrib.contenttypes',
201 'django.contrib.sessions',
202 'django.contrib.sites',
203 'django.contrib.messages',
204 'django.contrib.staticfiles',
205 'django.contrib.admin',
206 'south',
207 'widget_tweaks',
208 'evap.evaluation',
209 'evap.fsr',
210 'evap.results',
211 'evap.student',
212 'evap.contributor',
213 # 'django_auth_kerberos',
214 )
215 if not DEBUG:
216 INSTALLED_APPS += (
217 'raven.contrib.django.raven_compat',
218 )
219
220 RAVEN_CONFIG = {
221 'dsn': 'http://public:[email protected]/1',
222 }
223
224 # A sample logging configuration. The only tangible logging
225 # performed by this configuration is to send an email to
226 # the site admins on every HTTP 500 error.
227 # See http://docs.djangoproject.com/en/dev/topics/logging for
228 # more details on how to customize your logging configuration.
229 LOGGING = {
230 'version': 1,
231 'disable_existing_loggers': False,
232 'filters': {
233 'require_debug_false': {
234 '()': 'django.utils.log.RequireDebugFalse'
235 }
236 },
237 'handlers': {
238 'sentry': {
239 'level': 'ERROR',
240 'class': 'raven.contrib.django.handlers.SentryHandler',
241 },
242 'mail_admins': {
243 'level': 'ERROR',
244 'filters': ['require_debug_false'],
245 'class': 'django.utils.log.AdminEmailHandler'
246 },
247 'console': {
248 'level': 'INFO',
249 'class': 'logging.StreamHandler'
250 }
251 },
252 'loggers': {
253 'django.request': {
254 'handlers': ['mail_admins'],
255 'level': 'ERROR',
256 'propagate': True,
257 },
258 'evap.evaluation.management.commands.import_evaj': {
259 'handlers': ['console'],
260 'level': 'INFO'
261 },
262 'raven': {
263 'level': 'DEBUG',
264 'handlers': ['console'],
265 'propagate': False,
266 },
267 'sentry.errors': {
268 'level': 'DEBUG',
269 'handlers': ['console'],
270 'propagate': False,
271 }
272 }
273 }
274
275 # Create a localsettings.py if you want to override settings per machine
276 # or user, e.g. for development or different settings in deployments using
277 # multiple servers.
278 _LOCAL_SETTINGS_FILENAME = os.path.join(SITE_ROOT, "localsettings.py")
279 if os.path.exists(_LOCAL_SETTINGS_FILENAME):
280 execfile(_LOCAL_SETTINGS_FILENAME)
281 del _LOCAL_SETTINGS_FILENAME
282
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/evap/settings.py b/evap/settings.py
--- a/evap/settings.py
+++ b/evap/settings.py
@@ -212,14 +212,6 @@
'evap.contributor',
# 'django_auth_kerberos',
)
-if not DEBUG:
- INSTALLED_APPS += (
- 'raven.contrib.django.raven_compat',
- )
-
-RAVEN_CONFIG = {
- 'dsn': 'http://public:[email protected]/1',
-}
# A sample logging configuration. The only tangible logging
# performed by this configuration is to send an email to
@@ -235,10 +227,6 @@
}
},
'handlers': {
- 'sentry': {
- 'level': 'ERROR',
- 'class': 'raven.contrib.django.handlers.SentryHandler',
- },
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
@@ -258,16 +246,6 @@
'evap.evaluation.management.commands.import_evaj': {
'handlers': ['console'],
'level': 'INFO'
- },
- 'raven': {
- 'level': 'DEBUG',
- 'handlers': ['console'],
- 'propagate': False,
- },
- 'sentry.errors': {
- 'level': 'DEBUG',
- 'handlers': ['console'],
- 'propagate': False,
}
}
}
|
{"golden_diff": "diff --git a/evap/settings.py b/evap/settings.py\n--- a/evap/settings.py\n+++ b/evap/settings.py\n@@ -212,14 +212,6 @@\n 'evap.contributor',\n # 'django_auth_kerberos',\n )\n-if not DEBUG:\n- INSTALLED_APPS += (\n- 'raven.contrib.django.raven_compat',\n- )\n-\n-RAVEN_CONFIG = {\n- 'dsn': 'http://public:[email protected]/1',\n-}\n \n # A sample logging configuration. The only tangible logging\n # performed by this configuration is to send an email to\n@@ -235,10 +227,6 @@\n }\n },\n 'handlers': {\n- 'sentry': {\n- 'level': 'ERROR',\n- 'class': 'raven.contrib.django.handlers.SentryHandler',\n- },\n 'mail_admins': {\n 'level': 'ERROR',\n 'filters': ['require_debug_false'],\n@@ -258,16 +246,6 @@\n 'evap.evaluation.management.commands.import_evaj': {\n 'handlers': ['console'],\n 'level': 'INFO'\n- },\n- 'raven': {\n- 'level': 'DEBUG',\n- 'handlers': ['console'],\n- 'propagate': False,\n- },\n- 'sentry.errors': {\n- 'level': 'DEBUG',\n- 'handlers': ['console'],\n- 'propagate': False,\n }\n }\n }\n", "issue": "remove raven dependency\nbecause we don't use it.\n\n", "before_files": [{"content": "# Django settings for evap project.\n\n# automatically determine SITE_ROOT, used for absolute paths below\nimport os.path\nSITE_ROOT = os.path.dirname(os.path.realpath(__file__))\n\nDEBUG = True\nTEMPLATE_DEBUG = DEBUG\n\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\nMANAGERS = ADMINS\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3', # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.\n 'NAME': os.path.join(SITE_ROOT, 'database.sqlite3'), # Or path to database file if using sqlite3.\n 'USER': '', # Not used with sqlite3.\n 'PASSWORD': '', # Not used with sqlite3.\n 'HOST': '', # Set to empty string for localhost. Not used with sqlite3.\n 'PORT': '', # Set to empty string for default. Not used with sqlite3.\n }\n}\n\nCACHES = {\n 'default': {\n # 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n }\n}\n\n# config for feedback links\nFEEDBACK_EMAIL = \"webmaster@localhost\"\nTRACKER_URL = \"https://github.com/fsr-itse/EvaP\"\n\n# config for mail system\nDEFAULT_FROM_EMAIL = \"webmaster@localhost\"\nREPLY_TO_EMAIL = DEFAULT_FROM_EMAIL\nif DEBUG:\n EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n\n# key authentication settings\nLOGIN_KEY_VALIDITY = 210 # days, so roughly 7 months\n\n# minimum answers needed for publishing\nMIN_ANSWER_COUNT = 2\nMIN_ANSWER_PERCENTAGE = 0.2\n\n# the final total grade will be calculated by the following formula (GP = GRADE_PERCENTAGE, CP = CONTRIBUTION_PERCENTAGE):\n# final_likert = CP * likert_answers_about_persons + (1-CP) * likert_answers_about_courses\n# final_grade = CP * grade_answers_about_persons + (1-CP) * grade_answers_about_courses\n# final = GP * final_grade + (1-GP) * final_likert\nGRADE_PERCENTAGE = 0.8\nCONTRIBUTION_PERCENTAGE = 0.5\n\n# days before end date to send reminder\nREMIND_X_DAYS_AHEAD_OF_END_DATE = 2\n\n# email domains for the internal users of the hosting institution used to\n# figure out who can login with username and password and who needs a login key\nINSTITUTION_EMAIL_DOMAINS = [\"hpi.uni-potsdam.de\", \"student.hpi.uni-potsdam.de\"]\n\n# Local time zone for this installation. 
Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# On Unix systems, a value of None will cause Django to use the same\n# timezone as the operating system.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'Europe/Berlin'\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\nLANGUAGE_CODE = 'en'\n\nLANGUAGES = (\n ('en', \"English\"),\n ('de', \"Deutsch\"),\n)\n\nSITE_ID = 1\n\n# If you set this to False, Django will make some optimizations so as not\n# to load the internationalization machinery.\nUSE_I18N = True\n\n# If you set this to False, Django will not format dates, numbers and\n# calendars according to the current locale\nUSE_L10N = True\n\n# Additional locations of fixture files\nFIXTURE_DIRS = (\n os.path.join(SITE_ROOT, \"evaluation/fixtures\"),\n)\n\n# Locale paths\nLOCALE_PATHS = (\n os.path.join(SITE_ROOT, \"locale\"),\n)\n\n# Absolute filesystem path to the directory that will hold user-uploaded files.\n# Example: \"/home/media/media.lawrence.com/media/\"\nMEDIA_ROOT = os.path.join(SITE_ROOT, \"upload\")\n\n# URL that handles the media served from MEDIA_ROOT. Make sure to use a\n# trailing slash.\n# Examples: \"http://media.lawrence.com/media/\", \"http://example.com/media/\"\nMEDIA_URL = '/media/'\n\n# Absolute path to the directory static files should be collected to.\n# Don't put anything in this directory yourself; store your static files\n# in apps' \"static/\" subdirectories and in STATICFILES_DIRS.\n# Example: \"/home/media/media.lawrence.com/static/\"\nSTATIC_ROOT = os.path.join(SITE_ROOT, \"staticfiles\")\n\n# URL prefix for static files.\n# Example: \"http://media.lawrence.com/static/\"\nSTATIC_URL = '/static/'\n\n# URL prefix for admin static files -- CSS, JavaScript and images.\n# Make sure to use a trailing slash.\n# Examples: \"http://foo.com/static/admin/\", \"/static/admin/\".\nADMIN_MEDIA_PREFIX = '/static/admin/'\n\n# Additional locations of static files\nSTATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n os.path.join(SITE_ROOT, \"static\"),\n)\n\n# List of finder classes that know how to find static files in\n# various locations.\nSTATICFILES_FINDERS = (\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n# 'django.contrib.staticfiles.finders.DefaultStorageFinder',\n)\n\n# Make this unique, and don't share it with anybody.\nSECRET_KEY = 'k9-)vh3c_dtm6bpi7j(!*s_^91v0!ekjt_#o&0i$e22tnn^-vb'\n\n# List of callables that know how to import templates from various sources.\nTEMPLATE_LOADERS = (\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n# 'django.template.loaders.eggs.Loader',\n)\n\nTEMPLATE_CONTEXT_PROCESSORS = (\n \"django.contrib.auth.context_processors.auth\",\n \"django.core.context_processors.debug\",\n \"django.core.context_processors.i18n\",\n \"django.core.context_processors.media\",\n \"django.core.context_processors.static\",\n \"django.core.context_processors.request\",\n \"django.contrib.messages.context_processors.messages\",\n)\n\nMIDDLEWARE_CLASSES = (\n 'django.middleware.common.CommonMiddleware',\n 
'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'evap.evaluation.auth.RequestAuthMiddleware',\n 'evap.evaluation.403.Django403Middleware',\n)\n\nAUTHENTICATION_BACKENDS = (\n 'evap.evaluation.auth.RequestAuthUserBackend',\n# 'django_auth_kerberos.backends.KrbBackend',\n 'django.contrib.auth.backends.ModelBackend',\n)\n\n# kerberos realm and service\n#KRB5_REALM = 'EXAMPLE.COM'\n#KRB5_SERVICE = '[email protected]'\n\n# redirect url after login\nLOGIN_REDIRECT_URL = '/'\n\nLOGIN_URL = \"/\"\n\nROOT_URLCONF = 'evap.urls'\n\nTEMPLATE_DIRS = (\n # Put strings here, like \"/home/html/django_templates\" or \"C:/www/django/templates\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n os.path.join(SITE_ROOT, \"templates\"),\n)\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.sites',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.admin',\n 'south',\n 'widget_tweaks',\n 'evap.evaluation',\n 'evap.fsr',\n 'evap.results',\n 'evap.student',\n 'evap.contributor',\n# 'django_auth_kerberos',\n)\nif not DEBUG:\n INSTALLED_APPS += (\n 'raven.contrib.django.raven_compat',\n )\n\nRAVEN_CONFIG = {\n 'dsn': 'http://public:[email protected]/1',\n}\n\n# A sample logging configuration. The only tangible logging\n# performed by this configuration is to send an email to\n# the site admins on every HTTP 500 error.\n# See http://docs.djangoproject.com/en/dev/topics/logging for\n# more details on how to customize your logging configuration.\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse'\n }\n },\n 'handlers': {\n 'sentry': {\n 'level': 'ERROR',\n 'class': 'raven.contrib.django.handlers.SentryHandler',\n },\n 'mail_admins': {\n 'level': 'ERROR',\n 'filters': ['require_debug_false'],\n 'class': 'django.utils.log.AdminEmailHandler'\n },\n 'console': {\n 'level': 'INFO',\n 'class': 'logging.StreamHandler'\n }\n },\n 'loggers': {\n 'django.request': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': True,\n },\n 'evap.evaluation.management.commands.import_evaj': {\n 'handlers': ['console'],\n 'level': 'INFO'\n },\n 'raven': {\n 'level': 'DEBUG',\n 'handlers': ['console'],\n 'propagate': False,\n },\n 'sentry.errors': {\n 'level': 'DEBUG',\n 'handlers': ['console'],\n 'propagate': False,\n }\n }\n}\n\n# Create a localsettings.py if you want to override settings per machine\n# or user, e.g. 
for development or different settings in deployments using\n# multiple servers.\n_LOCAL_SETTINGS_FILENAME = os.path.join(SITE_ROOT, \"localsettings.py\")\nif os.path.exists(_LOCAL_SETTINGS_FILENAME):\n execfile(_LOCAL_SETTINGS_FILENAME)\ndel _LOCAL_SETTINGS_FILENAME\n", "path": "evap/settings.py"}], "after_files": [{"content": "# Django settings for evap project.\n\n# automatically determine SITE_ROOT, used for absolute paths below\nimport os.path\nSITE_ROOT = os.path.dirname(os.path.realpath(__file__))\n\nDEBUG = True\nTEMPLATE_DEBUG = DEBUG\n\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\nMANAGERS = ADMINS\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3', # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.\n 'NAME': os.path.join(SITE_ROOT, 'database.sqlite3'), # Or path to database file if using sqlite3.\n 'USER': '', # Not used with sqlite3.\n 'PASSWORD': '', # Not used with sqlite3.\n 'HOST': '', # Set to empty string for localhost. Not used with sqlite3.\n 'PORT': '', # Set to empty string for default. Not used with sqlite3.\n }\n}\n\nCACHES = {\n 'default': {\n # 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n }\n}\n\n# config for feedback links\nFEEDBACK_EMAIL = \"webmaster@localhost\"\nTRACKER_URL = \"https://github.com/fsr-itse/EvaP\"\n\n# config for mail system\nDEFAULT_FROM_EMAIL = \"webmaster@localhost\"\nREPLY_TO_EMAIL = DEFAULT_FROM_EMAIL\nif DEBUG:\n EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n\n# key authentication settings\nLOGIN_KEY_VALIDITY = 210 # days, so roughly 7 months\n\n# minimum answers needed for publishing\nMIN_ANSWER_COUNT = 2\nMIN_ANSWER_PERCENTAGE = 0.2\n\n# the final total grade will be calculated by the following formula (GP = GRADE_PERCENTAGE, CP = CONTRIBUTION_PERCENTAGE):\n# final_likert = CP * likert_answers_about_persons + (1-CP) * likert_answers_about_courses\n# final_grade = CP * grade_answers_about_persons + (1-CP) * grade_answers_about_courses\n# final = GP * final_grade + (1-GP) * final_likert\nGRADE_PERCENTAGE = 0.8\nCONTRIBUTION_PERCENTAGE = 0.5\n\n# days before end date to send reminder\nREMIND_X_DAYS_AHEAD_OF_END_DATE = 2\n\n# email domains for the internal users of the hosting institution used to\n# figure out who can login with username and password and who needs a login key\nINSTITUTION_EMAIL_DOMAINS = [\"hpi.uni-potsdam.de\", \"student.hpi.uni-potsdam.de\"]\n\n# Local time zone for this installation. Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# On Unix systems, a value of None will cause Django to use the same\n# timezone as the operating system.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'Europe/Berlin'\n\n# Language code for this installation. 
All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\nLANGUAGE_CODE = 'en'\n\nLANGUAGES = (\n ('en', \"English\"),\n ('de', \"Deutsch\"),\n)\n\nSITE_ID = 1\n\n# If you set this to False, Django will make some optimizations so as not\n# to load the internationalization machinery.\nUSE_I18N = True\n\n# If you set this to False, Django will not format dates, numbers and\n# calendars according to the current locale\nUSE_L10N = True\n\n# Additional locations of fixture files\nFIXTURE_DIRS = (\n os.path.join(SITE_ROOT, \"evaluation/fixtures\"),\n)\n\n# Locale paths\nLOCALE_PATHS = (\n os.path.join(SITE_ROOT, \"locale\"),\n)\n\n# Absolute filesystem path to the directory that will hold user-uploaded files.\n# Example: \"/home/media/media.lawrence.com/media/\"\nMEDIA_ROOT = os.path.join(SITE_ROOT, \"upload\")\n\n# URL that handles the media served from MEDIA_ROOT. Make sure to use a\n# trailing slash.\n# Examples: \"http://media.lawrence.com/media/\", \"http://example.com/media/\"\nMEDIA_URL = '/media/'\n\n# Absolute path to the directory static files should be collected to.\n# Don't put anything in this directory yourself; store your static files\n# in apps' \"static/\" subdirectories and in STATICFILES_DIRS.\n# Example: \"/home/media/media.lawrence.com/static/\"\nSTATIC_ROOT = os.path.join(SITE_ROOT, \"staticfiles\")\n\n# URL prefix for static files.\n# Example: \"http://media.lawrence.com/static/\"\nSTATIC_URL = '/static/'\n\n# URL prefix for admin static files -- CSS, JavaScript and images.\n# Make sure to use a trailing slash.\n# Examples: \"http://foo.com/static/admin/\", \"/static/admin/\".\nADMIN_MEDIA_PREFIX = '/static/admin/'\n\n# Additional locations of static files\nSTATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n os.path.join(SITE_ROOT, \"static\"),\n)\n\n# List of finder classes that know how to find static files in\n# various locations.\nSTATICFILES_FINDERS = (\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n# 'django.contrib.staticfiles.finders.DefaultStorageFinder',\n)\n\n# Make this unique, and don't share it with anybody.\nSECRET_KEY = 'k9-)vh3c_dtm6bpi7j(!*s_^91v0!ekjt_#o&0i$e22tnn^-vb'\n\n# List of callables that know how to import templates from various sources.\nTEMPLATE_LOADERS = (\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n# 'django.template.loaders.eggs.Loader',\n)\n\nTEMPLATE_CONTEXT_PROCESSORS = (\n \"django.contrib.auth.context_processors.auth\",\n \"django.core.context_processors.debug\",\n \"django.core.context_processors.i18n\",\n \"django.core.context_processors.media\",\n \"django.core.context_processors.static\",\n \"django.core.context_processors.request\",\n \"django.contrib.messages.context_processors.messages\",\n)\n\nMIDDLEWARE_CLASSES = (\n 'django.middleware.common.CommonMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'evap.evaluation.auth.RequestAuthMiddleware',\n 'evap.evaluation.403.Django403Middleware',\n)\n\nAUTHENTICATION_BACKENDS = (\n 'evap.evaluation.auth.RequestAuthUserBackend',\n# 
'django_auth_kerberos.backends.KrbBackend',\n 'django.contrib.auth.backends.ModelBackend',\n)\n\n# kerberos realm and service\n#KRB5_REALM = 'EXAMPLE.COM'\n#KRB5_SERVICE = '[email protected]'\n\n# redirect url after login\nLOGIN_REDIRECT_URL = '/'\n\nLOGIN_URL = \"/\"\n\nROOT_URLCONF = 'evap.urls'\n\nTEMPLATE_DIRS = (\n # Put strings here, like \"/home/html/django_templates\" or \"C:/www/django/templates\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n os.path.join(SITE_ROOT, \"templates\"),\n)\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.sites',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.admin',\n 'south',\n 'widget_tweaks',\n 'evap.evaluation',\n 'evap.fsr',\n 'evap.results',\n 'evap.student',\n 'evap.contributor',\n# 'django_auth_kerberos',\n)\n\n# A sample logging configuration. The only tangible logging\n# performed by this configuration is to send an email to\n# the site admins on every HTTP 500 error.\n# See http://docs.djangoproject.com/en/dev/topics/logging for\n# more details on how to customize your logging configuration.\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse'\n }\n },\n 'handlers': {\n 'mail_admins': {\n 'level': 'ERROR',\n 'filters': ['require_debug_false'],\n 'class': 'django.utils.log.AdminEmailHandler'\n },\n 'console': {\n 'level': 'INFO',\n 'class': 'logging.StreamHandler'\n }\n },\n 'loggers': {\n 'django.request': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': True,\n },\n 'evap.evaluation.management.commands.import_evaj': {\n 'handlers': ['console'],\n 'level': 'INFO'\n }\n }\n}\n\n# Create a localsettings.py if you want to override settings per machine\n# or user, e.g. for development or different settings in deployments using\n# multiple servers.\n_LOCAL_SETTINGS_FILENAME = os.path.join(SITE_ROOT, \"localsettings.py\")\nif os.path.exists(_LOCAL_SETTINGS_FILENAME):\n execfile(_LOCAL_SETTINGS_FILENAME)\ndel _LOCAL_SETTINGS_FILENAME\n", "path": "evap/settings.py"}]}
| 3,247 | 336 |
gh_patches_debug_39426 | rasdani/github-patches | git_diff | lisa-lab__pylearn2-1451 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improving mapping.nest error message
If you pass mapping.nest something that is a list rather than a tuple, you get a completely unrelated error message:
```
File "/usr/local/google/home/goodfellow/pylearn2/pylearn2/utils/data_specs.py", line 273, in nest
assert self.n_unique_specs == 1
AssertionError
```
--- END ISSUE ---
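A minimal reproduction sketch of the failure described above (the spaces, source names, and array shapes are illustrative assumptions, not taken from the original report):

```python
import numpy as np

from pylearn2.space import CompositeSpace, VectorSpace
from pylearn2.utils.data_specs import DataSpecsMapping

# Two elementary spaces with two named sources -> n_unique_specs == 2.
space = CompositeSpace([VectorSpace(dim=3), VectorSpace(dim=5)])
source = ('features', 'targets')
mapping = DataSpecsMapping((space, source))

x = np.zeros((2, 3))
y = np.zeros((2, 5))

mapping.nest((x, y))   # tuple: returns the nested (x, y) as expected
mapping.nest([x, y])   # list: isinstance(flat, tuple) is False, so nest()
                       # falls into the scalar branch and trips
                       # "assert self.n_unique_specs == 1" instead of
                       # reporting anything about the argument type
```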
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pylearn2/utils/data_specs.py`
Content:
```
1 """
2 Utilities for working with data format specifications.
3
4 See :ref:`data_specs` for a high level overview of the relevant concepts.
5 """
6 from pylearn2.space import CompositeSpace, NullSpace, Space
7 from pylearn2.utils import safe_zip
8
9
10 class DataSpecsMapping(object):
11 """
12 Converts between nested tuples and non-redundant flattened ones.
13
14 The mapping is built from data specifications, provided as a
15 (space, sources) pair, where space can be a composite space (possibly
16 of other composite spaces), and sources is a tuple of string identifiers
17 or other sources. Both space and sources must have the same structure.
18
19 Parameters
20 ----------
21 data_specs : WRITEME
22 WRITEME
23
24 Attributes
25 ----------
26 specs_to_index : dict
27 Maps one elementary (not composite) data_specs pair to its
28 index in the flattened space. Not sure if this one should
29 be a member, or passed as a parameter to _fill_mapping. It
30 might be us
31 """
32 #might be useful to get the index of one data_specs later
33 #but if it is not, then we should remove it.
34 def __init__(self, data_specs):
35 self.specs_to_index = {}
36
37 # Size of the flattened space
38 self.n_unique_specs = 0
39
40 # Builds the mapping
41 space, source = data_specs
42 assert isinstance(space, Space), 'Given space: ' + str(space) + \
43 ' was not a instance of Space.'
44 self.spec_mapping = self._fill_mapping(space, source)
45
46 def _fill_mapping(self, space, source):
47 """
48 Builds a nested tuple of integers representing the mapping
49
50 Parameters
51 ----------
52 space : WRITEME
53 source : WRITEME
54
55 Returns
56 -------
57 WRITEME
58 """
59 if isinstance(space, NullSpace):
60 # This Space does not contain any data, and should not
61 # be mapped to anything
62 assert source == ''
63 return None
64
65 elif not isinstance(space, CompositeSpace):
66 # Space is a simple Space, source should be a simple source
67 if isinstance(source, tuple):
68 source, = source
69
70 # If (space, source) has not already been seen, insert it.
71 # We need both the space and the source to match.
72 if (space, source) in self.specs_to_index:
73 spec_index = self.specs_to_index[(space, source)]
74 else:
75 spec_index = self.n_unique_specs
76 self.specs_to_index[(space, source)] = spec_index
77 self.n_unique_specs += 1
78
79 return spec_index
80
81 else:
82 # Recursively fill the mapping, and return it
83 spec_mapping = tuple(
84 self._fill_mapping(sub_space, sub_source)
85 for sub_space, sub_source in safe_zip(
86 space.components, source))
87
88 return spec_mapping
89
90 def _fill_flat(self, nested, mapping, rval):
91 """
92 Auxiliary recursive function used by self.flatten
93
94 Parameters
95 ----------
96 nested : WRITEME
97 mapping : WRITEME
98 rval : WRITEME
99
100 Returns
101 -------
102 WRITEME
103 """
104 if isinstance(nested, CompositeSpace):
105 nested = tuple(nested.components)
106
107 if mapping is None:
108 # The corresponding Space was a NullSpace, which does
109 # not correspond to actual data, so nested should evaluate
110 # to False, and should not be included in the flattened version
111 if not isinstance(nested, NullSpace):
112 assert not nested, ("The following element is mapped to "
113 "NullSpace, so it should evaluate to False (for instance, "
114 "None, an empty string or an empty tuple), but is %s"
115 % nested)
116 return
117
118 if isinstance(mapping, int):
119 # "nested" should actually be a single element
120 idx = mapping
121 if isinstance(nested, tuple):
122 if len(nested) != 1:
123 raise ValueError("When mapping is an int, we expect "
124 "nested to be a single element. But mapping is "
125 + str(mapping) + " and nested is a tuple of "
126 "length " + str(len(nested)))
127 nested, = nested
128
129 if rval[idx] is None:
130 rval[idx] = nested
131 else:
132 assert rval[idx] == nested, ("This mapping was built "
133 "with the same element occurring more than once "
134 "in the nested representation, but current nested "
135 "sequence has different values (%s and %s) at "
136 "these positions." % (rval[idx], nested))
137 else:
138 for sub_nested, sub_mapping in safe_zip(nested, mapping):
139 self._fill_flat(sub_nested, sub_mapping, rval)
140
141 def flatten(self, nested, return_tuple=False):
142 """
143 Iterate jointly through nested and spec_mapping, returns a flat tuple.
144
145 The integer in spec_mapping corresponding to each element in nested
146 represents the index of that element in the returned sequence.
147 If the original data_specs had duplicate elements at different places,
148 then "nested" also have to have equal elements at these positions.
149 "nested" can be a nested tuple, or composite space. If it is a
150 composite space, a flattened composite space will be returned.
151
152 If `return_tuple` is True, a tuple is always returned (tuple of
153 non-composite Spaces if nested is a Space, empty tuple if all
154 Spaces are NullSpaces, length-1 tuple if there is only one
155 non-composite Space, etc.).
156
157 Parameters
158 ----------
159 nested : WRITEME
160 return_tuple : WRITEME
161
162 Returns
163 -------
164 WRITEME
165 """
166 # Initialize the flatten returned value with Nones
167 rval = [None] * self.n_unique_specs
168
169 # Fill rval with the auxiliary function
170 self._fill_flat(nested, self.spec_mapping, rval)
171
172 assert None not in rval, ("This mapping is invalid, as it did not "
173 "contain all numbers from 0 to %i (or None was in nested), "
174 "nested: %s" % (self.n_unique_specs - 1, nested))
175
176 if return_tuple:
177 return tuple(rval)
178
179 # else, return something close to the type of nested
180 if len(rval) == 1:
181 return rval[0]
182 if isinstance(nested, tuple):
183 return tuple(rval)
184 elif isinstance(nested, Space):
185 return CompositeSpace(rval)
186
187 def _make_nested_tuple(self, flat, mapping):
188 """
189 Auxiliary recursive function used by self.nest
190
191 Parameters
192 ----------
193 flat : WRITEME
194 mapping : WRITEME
195
196 Returns
197 -------
198 WRITEME
199 """
200 if mapping is None:
201 # The corresponding space was a NullSpace,
202 # and there is no corresponding value in flat,
203 # we use None as a placeholder
204 return None
205 if isinstance(mapping, int):
206 # We are at a leaf of the tree
207 idx = mapping
208 if isinstance(flat, tuple):
209 assert 0 <= idx < len(flat)
210 return flat[idx]
211 else:
212 assert idx == 0
213 return flat
214 else:
215 return tuple(
216 self._make_nested_tuple(flat, sub_mapping)
217 for sub_mapping in mapping)
218
219 def _make_nested_space(self, flat, mapping):
220 """
221 Auxiliary recursive function used by self.nest
222
223 Parameters
224 ----------
225 flat : WRITEME
226 mapping : WRITEME
227
228 Returns
229 -------
230 WRITEME
231 """
232 if isinstance(mapping, int):
233 # We are at a leaf of the tree
234 idx = mapping
235 if isinstance(flat, CompositeSpace):
236 assert 0 <= idx < len(flat.components)
237 return flat.components[idx]
238 else:
239 assert idx == 0
240 return flat
241 else:
242 return CompositeSpace([
243 self._make_nested_space(flat, sub_mapping)
244 for sub_mapping in mapping])
245
246 def nest(self, flat):
247 """
248 Iterate through spec_mapping, building a nested tuple from "flat".
249
250 The length of "flat" should be equal to self.n_unique_specs.
251
252 Parameters
253 ----------
254 flat : Space or tuple
255 WRITEME
256
257 Returns
258 -------
259 WRITEME
260 """
261 if isinstance(flat, Space):
262 if isinstance(flat, CompositeSpace):
263 assert len(flat.components) == self.n_unique_specs
264 else:
265 assert self.n_unique_specs == 1
266 return self._make_nested_space(flat, self.spec_mapping)
267 else:
268 if isinstance(flat, tuple):
269 assert len(flat) == self.n_unique_specs
270 else:
271 # flat is not iterable, this is valid only if spec_mapping
272 # contains only 0's, that is, when self.n_unique_specs == 1
273 assert self.n_unique_specs == 1
274 return self._make_nested_tuple(flat, self.spec_mapping)
275
276
277 def is_flat_space(space):
278 """
279 Returns True for elementary Spaces and non-nested CompositeSpaces
280
281 Parameters
282 ----------
283 space : WRITEME
284
285 Returns
286 -------
287 WRITEME
288 """
289 if isinstance(space, CompositeSpace):
290 for sub_space in space.components:
291 if isinstance(sub_space, CompositeSpace):
292 return False
293 elif not isinstance(space, Space):
294 raise TypeError("space is not a Space: %s (%s)"
295 % (space, type(space)))
296 return True
297
298
299 def is_flat_source(source):
300 """
301 Returns True for a string or a non-nested tuple of strings
302
303 Parameters
304 ----------
305 source : WRITEME
306
307 Returns
308 -------
309 WRITEME
310 """
311 if isinstance(source, (tuple, list)):
312 for sub_source in source:
313 if isinstance(sub_source, (tuple, list)):
314 return False
315 elif not isinstance(source, str):
316 raise TypeError("source should be a string or a non-nested tuple/list "
317 "of strings: %s" % source)
318 return True
319
320
321 def is_flat_specs(data_specs):
322 """
323 .. todo::
324
325 WRITEME
326 """
327 return is_flat_space(data_specs[0]) and is_flat_source(data_specs[1])
328
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pylearn2/utils/data_specs.py b/pylearn2/utils/data_specs.py
--- a/pylearn2/utils/data_specs.py
+++ b/pylearn2/utils/data_specs.py
@@ -3,6 +3,7 @@
See :ref:`data_specs` for a high level overview of the relevant concepts.
"""
+from collections import Sized
from pylearn2.space import CompositeSpace, NullSpace, Space
from pylearn2.utils import safe_zip
@@ -64,7 +65,7 @@
elif not isinstance(space, CompositeSpace):
# Space is a simple Space, source should be a simple source
- if isinstance(source, tuple):
+ if isinstance(source, (tuple, list)):
source, = source
# If (space, source) has not already been seen, insert it.
@@ -118,7 +119,7 @@
if isinstance(mapping, int):
# "nested" should actually be a single element
idx = mapping
- if isinstance(nested, tuple):
+ if isinstance(nested, (tuple, list)):
if len(nested) != 1:
raise ValueError("When mapping is an int, we expect "
"nested to be a single element. But mapping is "
@@ -179,7 +180,7 @@
# else, return something close to the type of nested
if len(rval) == 1:
return rval[0]
- if isinstance(nested, tuple):
+ if isinstance(nested, (tuple, list)):
return tuple(rval)
elif isinstance(nested, Space):
return CompositeSpace(rval)
@@ -205,7 +206,7 @@
if isinstance(mapping, int):
# We are at a leaf of the tree
idx = mapping
- if isinstance(flat, tuple):
+ if isinstance(flat, (tuple, list)):
assert 0 <= idx < len(flat)
return flat[idx]
else:
@@ -265,7 +266,7 @@
assert self.n_unique_specs == 1
return self._make_nested_space(flat, self.spec_mapping)
else:
- if isinstance(flat, tuple):
+ if isinstance(flat, (list, tuple)):
assert len(flat) == self.n_unique_specs
else:
# flat is not iterable, this is valid only if spec_mapping
|
{"golden_diff": "diff --git a/pylearn2/utils/data_specs.py b/pylearn2/utils/data_specs.py\n--- a/pylearn2/utils/data_specs.py\n+++ b/pylearn2/utils/data_specs.py\n@@ -3,6 +3,7 @@\n \n See :ref:`data_specs` for a high level overview of the relevant concepts.\n \"\"\"\n+from collections import Sized\n from pylearn2.space import CompositeSpace, NullSpace, Space\n from pylearn2.utils import safe_zip\n \n@@ -64,7 +65,7 @@\n \n elif not isinstance(space, CompositeSpace):\n # Space is a simple Space, source should be a simple source\n- if isinstance(source, tuple):\n+ if isinstance(source, (tuple, list)):\n source, = source\n \n # If (space, source) has not already been seen, insert it.\n@@ -118,7 +119,7 @@\n if isinstance(mapping, int):\n # \"nested\" should actually be a single element\n idx = mapping\n- if isinstance(nested, tuple):\n+ if isinstance(nested, (tuple, list)):\n if len(nested) != 1:\n raise ValueError(\"When mapping is an int, we expect \"\n \"nested to be a single element. But mapping is \"\n@@ -179,7 +180,7 @@\n # else, return something close to the type of nested\n if len(rval) == 1:\n return rval[0]\n- if isinstance(nested, tuple):\n+ if isinstance(nested, (tuple, list)):\n return tuple(rval)\n elif isinstance(nested, Space):\n return CompositeSpace(rval)\n@@ -205,7 +206,7 @@\n if isinstance(mapping, int):\n # We are at a leaf of the tree\n idx = mapping\n- if isinstance(flat, tuple):\n+ if isinstance(flat, (tuple, list)):\n assert 0 <= idx < len(flat)\n return flat[idx]\n else:\n@@ -265,7 +266,7 @@\n assert self.n_unique_specs == 1\n return self._make_nested_space(flat, self.spec_mapping)\n else:\n- if isinstance(flat, tuple):\n+ if isinstance(flat, (list, tuple)):\n assert len(flat) == self.n_unique_specs\n else:\n # flat is not iterable, this is valid only if spec_mapping\n", "issue": "Improving mapping.nest error message\nIf you pass mapping.nest something that is a list rather than a tuple, you get a completely unrelated error message:\n\n```\n File \"/usr/local/google/home/goodfellow/pylearn2/pylearn2/utils/data_specs.py\", line 273, in nest\n assert self.n_unique_specs == 1\nAssertionError\n```\n\n", "before_files": [{"content": "\"\"\"\nUtilities for working with data format specifications.\n\nSee :ref:`data_specs` for a high level overview of the relevant concepts.\n\"\"\"\nfrom pylearn2.space import CompositeSpace, NullSpace, Space\nfrom pylearn2.utils import safe_zip\n\n\nclass DataSpecsMapping(object):\n \"\"\"\n Converts between nested tuples and non-redundant flattened ones.\n\n The mapping is built from data specifications, provided as a\n (space, sources) pair, where space can be a composite space (possibly\n of other composite spaces), and sources is a tuple of string identifiers\n or other sources. Both space and sources must have the same structure.\n\n Parameters\n ----------\n data_specs : WRITEME\n WRITEME\n\n Attributes\n ----------\n specs_to_index : dict\n Maps one elementary (not composite) data_specs pair to its\n index in the flattened space. Not sure if this one should\n be a member, or passed as a parameter to _fill_mapping. 
It\n might be us\n \"\"\"\n #might be useful to get the index of one data_specs later\n #but if it is not, then we should remove it.\n def __init__(self, data_specs):\n self.specs_to_index = {}\n\n # Size of the flattened space\n self.n_unique_specs = 0\n\n # Builds the mapping\n space, source = data_specs\n assert isinstance(space, Space), 'Given space: ' + str(space) + \\\n ' was not a instance of Space.'\n self.spec_mapping = self._fill_mapping(space, source)\n\n def _fill_mapping(self, space, source):\n \"\"\"\n Builds a nested tuple of integers representing the mapping\n\n Parameters\n ----------\n space : WRITEME\n source : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if isinstance(space, NullSpace):\n # This Space does not contain any data, and should not\n # be mapped to anything\n assert source == ''\n return None\n\n elif not isinstance(space, CompositeSpace):\n # Space is a simple Space, source should be a simple source\n if isinstance(source, tuple):\n source, = source\n\n # If (space, source) has not already been seen, insert it.\n # We need both the space and the source to match.\n if (space, source) in self.specs_to_index:\n spec_index = self.specs_to_index[(space, source)]\n else:\n spec_index = self.n_unique_specs\n self.specs_to_index[(space, source)] = spec_index\n self.n_unique_specs += 1\n\n return spec_index\n\n else:\n # Recursively fill the mapping, and return it\n spec_mapping = tuple(\n self._fill_mapping(sub_space, sub_source)\n for sub_space, sub_source in safe_zip(\n space.components, source))\n\n return spec_mapping\n\n def _fill_flat(self, nested, mapping, rval):\n \"\"\"\n Auxiliary recursive function used by self.flatten\n\n Parameters\n ----------\n nested : WRITEME\n mapping : WRITEME\n rval : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if isinstance(nested, CompositeSpace):\n nested = tuple(nested.components)\n\n if mapping is None:\n # The corresponding Space was a NullSpace, which does\n # not correspond to actual data, so nested should evaluate\n # to False, and should not be included in the flattened version\n if not isinstance(nested, NullSpace):\n assert not nested, (\"The following element is mapped to \"\n \"NullSpace, so it should evaluate to False (for instance, \"\n \"None, an empty string or an empty tuple), but is %s\"\n % nested)\n return\n\n if isinstance(mapping, int):\n # \"nested\" should actually be a single element\n idx = mapping\n if isinstance(nested, tuple):\n if len(nested) != 1:\n raise ValueError(\"When mapping is an int, we expect \"\n \"nested to be a single element. 
But mapping is \"\n + str(mapping) + \" and nested is a tuple of \"\n \"length \" + str(len(nested)))\n nested, = nested\n\n if rval[idx] is None:\n rval[idx] = nested\n else:\n assert rval[idx] == nested, (\"This mapping was built \"\n \"with the same element occurring more than once \"\n \"in the nested representation, but current nested \"\n \"sequence has different values (%s and %s) at \"\n \"these positions.\" % (rval[idx], nested))\n else:\n for sub_nested, sub_mapping in safe_zip(nested, mapping):\n self._fill_flat(sub_nested, sub_mapping, rval)\n\n def flatten(self, nested, return_tuple=False):\n \"\"\"\n Iterate jointly through nested and spec_mapping, returns a flat tuple.\n\n The integer in spec_mapping corresponding to each element in nested\n represents the index of that element in the returned sequence.\n If the original data_specs had duplicate elements at different places,\n then \"nested\" also have to have equal elements at these positions.\n \"nested\" can be a nested tuple, or composite space. If it is a\n composite space, a flattened composite space will be returned.\n\n If `return_tuple` is True, a tuple is always returned (tuple of\n non-composite Spaces if nested is a Space, empty tuple if all\n Spaces are NullSpaces, length-1 tuple if there is only one\n non-composite Space, etc.).\n\n Parameters\n ----------\n nested : WRITEME\n return_tuple : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n # Initialize the flatten returned value with Nones\n rval = [None] * self.n_unique_specs\n\n # Fill rval with the auxiliary function\n self._fill_flat(nested, self.spec_mapping, rval)\n\n assert None not in rval, (\"This mapping is invalid, as it did not \"\n \"contain all numbers from 0 to %i (or None was in nested), \"\n \"nested: %s\" % (self.n_unique_specs - 1, nested))\n\n if return_tuple:\n return tuple(rval)\n\n # else, return something close to the type of nested\n if len(rval) == 1:\n return rval[0]\n if isinstance(nested, tuple):\n return tuple(rval)\n elif isinstance(nested, Space):\n return CompositeSpace(rval)\n\n def _make_nested_tuple(self, flat, mapping):\n \"\"\"\n Auxiliary recursive function used by self.nest\n\n Parameters\n ----------\n flat : WRITEME\n mapping : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if mapping is None:\n # The corresponding space was a NullSpace,\n # and there is no corresponding value in flat,\n # we use None as a placeholder\n return None\n if isinstance(mapping, int):\n # We are at a leaf of the tree\n idx = mapping\n if isinstance(flat, tuple):\n assert 0 <= idx < len(flat)\n return flat[idx]\n else:\n assert idx == 0\n return flat\n else:\n return tuple(\n self._make_nested_tuple(flat, sub_mapping)\n for sub_mapping in mapping)\n\n def _make_nested_space(self, flat, mapping):\n \"\"\"\n Auxiliary recursive function used by self.nest\n\n Parameters\n ----------\n flat : WRITEME\n mapping : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if isinstance(mapping, int):\n # We are at a leaf of the tree\n idx = mapping\n if isinstance(flat, CompositeSpace):\n assert 0 <= idx < len(flat.components)\n return flat.components[idx]\n else:\n assert idx == 0\n return flat\n else:\n return CompositeSpace([\n self._make_nested_space(flat, sub_mapping)\n for sub_mapping in mapping])\n\n def nest(self, flat):\n \"\"\"\n Iterate through spec_mapping, building a nested tuple from \"flat\".\n\n The length of \"flat\" should be equal to self.n_unique_specs.\n\n Parameters\n ----------\n flat : Space or tuple\n WRITEME\n\n Returns\n 
-------\n WRITEME\n \"\"\"\n if isinstance(flat, Space):\n if isinstance(flat, CompositeSpace):\n assert len(flat.components) == self.n_unique_specs\n else:\n assert self.n_unique_specs == 1\n return self._make_nested_space(flat, self.spec_mapping)\n else:\n if isinstance(flat, tuple):\n assert len(flat) == self.n_unique_specs\n else:\n # flat is not iterable, this is valid only if spec_mapping\n # contains only 0's, that is, when self.n_unique_specs == 1\n assert self.n_unique_specs == 1\n return self._make_nested_tuple(flat, self.spec_mapping)\n\n\ndef is_flat_space(space):\n \"\"\"\n Returns True for elementary Spaces and non-nested CompositeSpaces\n\n Parameters\n ----------\n space : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if isinstance(space, CompositeSpace):\n for sub_space in space.components:\n if isinstance(sub_space, CompositeSpace):\n return False\n elif not isinstance(space, Space):\n raise TypeError(\"space is not a Space: %s (%s)\"\n % (space, type(space)))\n return True\n\n\ndef is_flat_source(source):\n \"\"\"\n Returns True for a string or a non-nested tuple of strings\n\n Parameters\n ----------\n source : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if isinstance(source, (tuple, list)):\n for sub_source in source:\n if isinstance(sub_source, (tuple, list)):\n return False\n elif not isinstance(source, str):\n raise TypeError(\"source should be a string or a non-nested tuple/list \"\n \"of strings: %s\" % source)\n return True\n\n\ndef is_flat_specs(data_specs):\n \"\"\"\n .. todo::\n\n WRITEME\n \"\"\"\n return is_flat_space(data_specs[0]) and is_flat_source(data_specs[1])\n", "path": "pylearn2/utils/data_specs.py"}], "after_files": [{"content": "\"\"\"\nUtilities for working with data format specifications.\n\nSee :ref:`data_specs` for a high level overview of the relevant concepts.\n\"\"\"\nfrom collections import Sized\nfrom pylearn2.space import CompositeSpace, NullSpace, Space\nfrom pylearn2.utils import safe_zip\n\n\nclass DataSpecsMapping(object):\n \"\"\"\n Converts between nested tuples and non-redundant flattened ones.\n\n The mapping is built from data specifications, provided as a\n (space, sources) pair, where space can be a composite space (possibly\n of other composite spaces), and sources is a tuple of string identifiers\n or other sources. Both space and sources must have the same structure.\n\n Parameters\n ----------\n data_specs : WRITEME\n WRITEME\n\n Attributes\n ----------\n specs_to_index : dict\n Maps one elementary (not composite) data_specs pair to its\n index in the flattened space. Not sure if this one should\n be a member, or passed as a parameter to _fill_mapping. 
It\n might be us\n \"\"\"\n #might be useful to get the index of one data_specs later\n #but if it is not, then we should remove it.\n def __init__(self, data_specs):\n self.specs_to_index = {}\n\n # Size of the flattened space\n self.n_unique_specs = 0\n\n # Builds the mapping\n space, source = data_specs\n assert isinstance(space, Space), 'Given space: ' + str(space) + \\\n ' was not a instance of Space.'\n self.spec_mapping = self._fill_mapping(space, source)\n\n def _fill_mapping(self, space, source):\n \"\"\"\n Builds a nested tuple of integers representing the mapping\n\n Parameters\n ----------\n space : WRITEME\n source : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if isinstance(space, NullSpace):\n # This Space does not contain any data, and should not\n # be mapped to anything\n assert source == ''\n return None\n\n elif not isinstance(space, CompositeSpace):\n # Space is a simple Space, source should be a simple source\n if isinstance(source, (tuple, list)):\n source, = source\n\n # If (space, source) has not already been seen, insert it.\n # We need both the space and the source to match.\n if (space, source) in self.specs_to_index:\n spec_index = self.specs_to_index[(space, source)]\n else:\n spec_index = self.n_unique_specs\n self.specs_to_index[(space, source)] = spec_index\n self.n_unique_specs += 1\n\n return spec_index\n\n else:\n # Recursively fill the mapping, and return it\n spec_mapping = tuple(\n self._fill_mapping(sub_space, sub_source)\n for sub_space, sub_source in safe_zip(\n space.components, source))\n\n return spec_mapping\n\n def _fill_flat(self, nested, mapping, rval):\n \"\"\"\n Auxiliary recursive function used by self.flatten\n\n Parameters\n ----------\n nested : WRITEME\n mapping : WRITEME\n rval : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if isinstance(nested, CompositeSpace):\n nested = tuple(nested.components)\n\n if mapping is None:\n # The corresponding Space was a NullSpace, which does\n # not correspond to actual data, so nested should evaluate\n # to False, and should not be included in the flattened version\n if not isinstance(nested, NullSpace):\n assert not nested, (\"The following element is mapped to \"\n \"NullSpace, so it should evaluate to False (for instance, \"\n \"None, an empty string or an empty tuple), but is %s\"\n % nested)\n return\n\n if isinstance(mapping, int):\n # \"nested\" should actually be a single element\n idx = mapping\n if isinstance(nested, (tuple, list)):\n if len(nested) != 1:\n raise ValueError(\"When mapping is an int, we expect \"\n \"nested to be a single element. 
But mapping is \"\n + str(mapping) + \" and nested is a tuple of \"\n \"length \" + str(len(nested)))\n nested, = nested\n\n if rval[idx] is None:\n rval[idx] = nested\n else:\n assert rval[idx] == nested, (\"This mapping was built \"\n \"with the same element occurring more than once \"\n \"in the nested representation, but current nested \"\n \"sequence has different values (%s and %s) at \"\n \"these positions.\" % (rval[idx], nested))\n else:\n for sub_nested, sub_mapping in safe_zip(nested, mapping):\n self._fill_flat(sub_nested, sub_mapping, rval)\n\n def flatten(self, nested, return_tuple=False):\n \"\"\"\n Iterate jointly through nested and spec_mapping, returns a flat tuple.\n\n The integer in spec_mapping corresponding to each element in nested\n represents the index of that element in the returned sequence.\n If the original data_specs had duplicate elements at different places,\n then \"nested\" also have to have equal elements at these positions.\n \"nested\" can be a nested tuple, or composite space. If it is a\n composite space, a flattened composite space will be returned.\n\n If `return_tuple` is True, a tuple is always returned (tuple of\n non-composite Spaces if nested is a Space, empty tuple if all\n Spaces are NullSpaces, length-1 tuple if there is only one\n non-composite Space, etc.).\n\n Parameters\n ----------\n nested : WRITEME\n return_tuple : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n # Initialize the flatten returned value with Nones\n rval = [None] * self.n_unique_specs\n\n # Fill rval with the auxiliary function\n self._fill_flat(nested, self.spec_mapping, rval)\n\n assert None not in rval, (\"This mapping is invalid, as it did not \"\n \"contain all numbers from 0 to %i (or None was in nested), \"\n \"nested: %s\" % (self.n_unique_specs - 1, nested))\n\n if return_tuple:\n return tuple(rval)\n\n # else, return something close to the type of nested\n if len(rval) == 1:\n return rval[0]\n if isinstance(nested, (tuple, list)):\n return tuple(rval)\n elif isinstance(nested, Space):\n return CompositeSpace(rval)\n\n def _make_nested_tuple(self, flat, mapping):\n \"\"\"\n Auxiliary recursive function used by self.nest\n\n Parameters\n ----------\n flat : WRITEME\n mapping : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if mapping is None:\n # The corresponding space was a NullSpace,\n # and there is no corresponding value in flat,\n # we use None as a placeholder\n return None\n if isinstance(mapping, int):\n # We are at a leaf of the tree\n idx = mapping\n if isinstance(flat, (tuple, list)):\n assert 0 <= idx < len(flat)\n return flat[idx]\n else:\n assert idx == 0\n return flat\n else:\n return tuple(\n self._make_nested_tuple(flat, sub_mapping)\n for sub_mapping in mapping)\n\n def _make_nested_space(self, flat, mapping):\n \"\"\"\n Auxiliary recursive function used by self.nest\n\n Parameters\n ----------\n flat : WRITEME\n mapping : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if isinstance(mapping, int):\n # We are at a leaf of the tree\n idx = mapping\n if isinstance(flat, CompositeSpace):\n assert 0 <= idx < len(flat.components)\n return flat.components[idx]\n else:\n assert idx == 0\n return flat\n else:\n return CompositeSpace([\n self._make_nested_space(flat, sub_mapping)\n for sub_mapping in mapping])\n\n def nest(self, flat):\n \"\"\"\n Iterate through spec_mapping, building a nested tuple from \"flat\".\n\n The length of \"flat\" should be equal to self.n_unique_specs.\n\n Parameters\n ----------\n flat : Space or tuple\n 
WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if isinstance(flat, Space):\n if isinstance(flat, CompositeSpace):\n assert len(flat.components) == self.n_unique_specs\n else:\n assert self.n_unique_specs == 1\n return self._make_nested_space(flat, self.spec_mapping)\n else:\n if isinstance(flat, (list, tuple)):\n assert len(flat) == self.n_unique_specs\n else:\n # flat is not iterable, this is valid only if spec_mapping\n # contains only 0's, that is, when self.n_unique_specs == 1\n assert self.n_unique_specs == 1\n return self._make_nested_tuple(flat, self.spec_mapping)\n\n\ndef is_flat_space(space):\n \"\"\"\n Returns True for elementary Spaces and non-nested CompositeSpaces\n\n Parameters\n ----------\n space : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if isinstance(space, CompositeSpace):\n for sub_space in space.components:\n if isinstance(sub_space, CompositeSpace):\n return False\n elif not isinstance(space, Space):\n raise TypeError(\"space is not a Space: %s (%s)\"\n % (space, type(space)))\n return True\n\n\ndef is_flat_source(source):\n \"\"\"\n Returns True for a string or a non-nested tuple of strings\n\n Parameters\n ----------\n source : WRITEME\n\n Returns\n -------\n WRITEME\n \"\"\"\n if isinstance(source, (tuple, list)):\n for sub_source in source:\n if isinstance(sub_source, (tuple, list)):\n return False\n elif not isinstance(source, str):\n raise TypeError(\"source should be a string or a non-nested tuple/list \"\n \"of strings: %s\" % source)\n return True\n\n\ndef is_flat_specs(data_specs):\n \"\"\"\n .. todo::\n\n WRITEME\n \"\"\"\n return is_flat_space(data_specs[0]) and is_flat_source(data_specs[1])\n", "path": "pylearn2/utils/data_specs.py"}]}
| 3,469 | 532 |
gh_patches_debug_31078 | rasdani/github-patches | git_diff | weecology__retriever-157 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Choose button missing on Database Connection page for Windows & OS X
When selecting a database connection that has an associated file (i.e., sqlite and Access) the GUI should allow you to navigate to the file by clicking on the `Choose` button. This works fine on Ubuntu:

But this button is missing on the current builds on Windows and OS X. The Database Connection page also generally looks narrower on Windows and OS X.
From Windows:

--- END ISSUE ---
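In the connect_wizard.py listing below, FitToPage() is called in the wizard constructor, while the per-engine fields (including the Choose button for file-based engines) are only created later in ConnectPage.Draw(). One plausible reading of the symptom, offered here as an assumption rather than a confirmed diagnosis, is that the page size fixed by FitToPage() is too narrow on Windows and OS X to fit that later-added row. A small sketch of the sizing pattern involved, assuming classic wxPython with the wx.wizard module used in the file:

```python
import wx
import wx.wizard

app = wx.App(False)
wiz = wx.wizard.Wizard(None, -1, "Demo")
page = wx.wizard.WizardPageSimple(wiz)
sizer = wx.BoxSizer(wx.VERTICAL)
page.SetSizer(sizer)

# The wizard's page area is sized here, while the page is still empty.
wiz.FitToPage(page)

# Controls created afterwards (in the real file, inside an
# EVT_WIZARD_PAGE_CHANGING handler) do not enlarge the wizard, so a row
# like [label | text field | "Choose..."] can extend past the visible width.
row = wx.BoxSizer(wx.HORIZONTAL)
row.Add(wx.StaticText(page, -1, "file: "))
row.Add(wx.TextCtrl(page, -1, "", size=(200, -1)))
row.Add(wx.Button(page, -1, "Choose..."))
sizer.Add(row)
```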
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/connect_wizard.py`
Content:
```
1 """Connection setup wizard.
2
3 """
4
5 import os
6 import sys
7 import wx
8 import wx.html
9 import wx.wizard
10 from retriever.lib.models import Engine
11 from retriever.lib.tools import get_saved_connection, save_connection
12 from retriever.app.controls import *
13 from retriever.app.images import icon
14
15 from retriever import VERSION
16
17
18 class ConnectWizard(wx.wizard.Wizard):
19 def __init__(self, lists, engine_list, selected=None):
20 wx.wizard.Wizard.__init__(self, None, -1, "EcoData Retriever")
21
22 #self.SetIcon(icon.GetIcon())
23
24 welcome = """<h2>Connection Wizard</h2>
25
26 <p>The EcoData Retriever downloads raw data files, stores them on your
27 computer, and imports the data into your own local database.</p>
28 <p>To begin, you'll need to set up your own database. Once you've done
29 that, this wizard will walk you through the steps to connect to your
30 database.</p>
31 <p>Supported database systems currently include:</p>
32 <ul>"""
33
34 for db in engine_list:
35 welcome += "<li>" + db.name + "</li>"
36
37 welcome += "</ul>"
38
39 self.pages = []
40 self.lists = lists
41 self.engine_list = engine_list
42 self.selected = selected
43
44 self.pages.append(TitledPage(self, "", ""))
45
46 self.pages.append(ChooseDbPage(self, "Database",
47 "What kind of database are you using?"))
48
49 self.pages.append(ConnectPage(self,
50 "Connection Info",
51 ""))
52
53 self.pages.append(ConfirmPage(self, "Connection", ""))
54
55 (self.TITLE, self.CHOOSEDB,
56 self.CONNECTION, self.CONFIRM) = [self.pages[i]
57 for i in range(len(self.pages))]
58
59 self.TITLE.welcome = HtmlWindow(self.TITLE)
60 self.TITLE.welcome.SetSize((450,400))
61 self.TITLE.welcome.SetHtml(welcome)
62 self.TITLE.sizer.Add(self.TITLE.welcome, 1, wx.EXPAND)
63
64 self.CHOOSEDB.Bind(wx.wizard.EVT_WIZARD_PAGE_CHANGING, self.CONNECTION.Draw)
65 self.CONNECTION.Bind(wx.wizard.EVT_WIZARD_PAGE_CHANGING, self.CONFIRM.Draw)
66 self.CONFIRM.Bind(wx.wizard.EVT_WIZARD_PAGE_CHANGING, self.CONNECTION.Draw)
67
68 for i in range(len(self.pages) - 1):
69 wx.wizard.WizardPageSimple_Chain(self.pages[i], self.pages[i + 1])
70
71 for page in self.pages:
72 self.FitToPage(page)
73
74
75 def check_for_updates(self):
76 reload(retriever)
77
78
79 class ChooseDbPage(TitledPage):
80 def __init__(self, parent, title, label):
81 TitledPage.__init__(self, parent, title, label)
82 engine_list = parent.engine_list
83
84 dblist = ListBox(self, -1,
85 choices=[db.name for db in engine_list],
86 style=wx.LB_SINGLE,
87 size=(-1,150))
88 self.dblist = dblist
89 if parent.selected:
90 index = 0
91 for i in range(len(engine_list)):
92 if engine_list[i].name == parent.selected:
93 index = i
94 self.dblist.SetSelection(index)
95 else:
96 self.dblist.SetSelection(0)
97 self.sizer.Add(self.dblist, -1, wx.EXPAND)
98
99 def dirbtn_click(self, evt):
100 dialog = wx.DirDialog(None, message="Choose a directory to " +
101 "download your data files.")
102 if dialog.ShowModal() == wx.ID_OK:
103 self.raw_data_dir.SetValue(dialog.GetPath())
104 else:
105 pass
106 dialog.Destroy()
107
108
109 class ConnectPage(TitledPage):
110 """The connection info page."""
111 def __init__(self, parent, title, label):
112 TitledPage.__init__(self, parent, title, label)
113 self.option = dict()
114 self.sel = None
115 self.fields = wx.BoxSizer(wx.VERTICAL)
116 self.parent = parent
117
118 def Draw(self, evt):
119 """When the page is drawn, it may need to update its fields if
120 the selected database has changed."""
121
122 if not evt.GetDirection():
123 btn = self.parent.FindWindowById(wx.ID_FORWARD)
124 if btn: btn.Enable()
125
126 if len(self.Parent.CHOOSEDB.dblist.GetStringSelection()) == 0 and evt.Direction:
127 evt.Veto()
128 else:
129 if self.sel != self.Parent.CHOOSEDB.dblist.GetStringSelection():
130 self.sel = self.Parent.CHOOSEDB.dblist.GetStringSelection()
131 self.engine = Engine()
132 for db in self.Parent.engine_list:
133 if db.name == self.sel:
134 self.engine = db
135 self.fields.Clear(True)
136 self.fields = wx.BoxSizer(wx.VERTICAL)
137 if self.engine.instructions:
138 self.fields.Add(StaticText(self, -1, '\n' + self.engine.instructions + '\n\n'))
139 self.fieldset = dict()
140 self.option = dict()
141 saved_opts = get_saved_connection(self.engine.name)
142 for opt in self.engine.required_opts:
143 if opt[0] in saved_opts.keys():
144 default = saved_opts[opt[0]]
145 else:
146 default = opt[2]
147 self.fieldset[opt[0]] = wx.BoxSizer(wx.HORIZONTAL)
148 label = StaticText(self, -1, opt[0] + ": ",
149 size=wx.Size(90,35))
150 if opt[0] == "password":
151 txt = TextCtrl(self, -1,
152 str(default),
153 size=wx.Size(200,-1),
154 style=wx.TE_PASSWORD)
155 else:
156 txt = TextCtrl(self, -1, str(default),
157 size=wx.Size(200,-1))
158 self.option[opt[0]] = txt
159 self.fieldset[opt[0]].AddMany([label,
160 self.option[opt[0]]])
161 if opt[0] == "file":
162 file_opt = opt
163 def open_file_dialog(evt):
164 filter = ""
165 if file_opt[3]:
166 filter = file_opt[3] + "|"
167 filter += "All files (*.*)|*.*"
168 dialog = wx.FileDialog(None, style = wx.OPEN,
169 wildcard = filter)
170 if dialog.ShowModal() == wx.ID_OK:
171 self.option[file_opt[0]].SetValue(dialog.GetPath())
172 self.browse = wx.Button(self, -1, "Choose...")
173 self.fieldset[file_opt[0]].Add(self.browse)
174 self.browse.Bind(wx.EVT_BUTTON, open_file_dialog)
175 self.fieldset[opt[0]].Layout()
176 self.fields.Add(self.fieldset[opt[0]])
177 self.sizer.Add(self.fields)
178 self.sizer.Layout()
179
180
181 class ConfirmPage(TitledPage):
182 """The final confirmation page."""
183 def __init__(self, parent, title, label):
184 TitledPage.__init__(self, parent, title, label)
185 self.fields = wx.BoxSizer(wx.VERTICAL)
186 self.parent = parent
187
188 def Draw(self, evt):
189 if not evt.GetDirection(): return
190
191 self.fields.Clear(True)
192 self.fields = wx.BoxSizer(wx.VERTICAL)
193
194 self.values_dict = dict()
195 connect = self.parent.CONNECTION
196 for key in connect.option.keys():
197 self.values_dict[key] = str(connect.option[key].Value)
198
199 try:
200 connect.engine.opts = self.values_dict
201 connect.engine.connect(force_reconnect=True)
202 message = '''<p><b>Success!</b>Your connection has been saved.</p>
203 <p>Click Finish to continue.</p>'''
204 save_connection(connect.engine.name, self.values_dict)
205
206 except Exception as e:
207 message = message = '''<p><b>Error.</b>There was a problem with your
208 connection:</p><p>%s</p>
209 <p>Click Back to try again, or Cancel.</p>''' % e
210
211 btn = self.parent.FindWindowById(wx.ID_FORWARD)
212 if btn: btn.Disable()
213
214 self.message = HtmlWindow(self)
215 self.message.SetSize((450,400))
216 self.message.SetHtml(message)
217 self.fields.Add(self.message, 1, wx.EXPAND)
218 self.sizer.Add(self.fields)
219
220 self.sizer.Layout()
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/connect_wizard.py b/app/connect_wizard.py
--- a/app/connect_wizard.py
+++ b/app/connect_wizard.py
@@ -95,6 +95,14 @@
else:
self.dblist.SetSelection(0)
self.sizer.Add(self.dblist, -1, wx.EXPAND)
+
+ #TODO: This is a hack to force the wizard to be large enough to not hide
+ #the Choose button on the Connect Page when the engine has a file
+ #attribute. This should be fixed properly by figuring out how to make wx
+ #work with the fact that the connect page has varying fields depending
+ #on the engine.
+ longspace = StaticText(self, -1, "", wx.Size(375, -1))
+ self.sizer.Add(longspace, -1)
def dirbtn_click(self, evt):
dialog = wx.DirDialog(None, message="Choose a directory to " +
@@ -147,14 +155,8 @@
self.fieldset[opt[0]] = wx.BoxSizer(wx.HORIZONTAL)
label = StaticText(self, -1, opt[0] + ": ",
size=wx.Size(90,35))
- if opt[0] == "password":
- txt = TextCtrl(self, -1,
- str(default),
- size=wx.Size(200,-1),
- style=wx.TE_PASSWORD)
- else:
- txt = TextCtrl(self, -1, str(default),
- size=wx.Size(200,-1))
+ style = wx.TE_PASSWORD if opt[0] == "password" else 0
+ txt = TextCtrl(self, -1, str(default), size=wx.Size(200,-1), style=style)
self.option[opt[0]] = txt
self.fieldset[opt[0]].AddMany([label,
self.option[opt[0]]])
|
{"golden_diff": "diff --git a/app/connect_wizard.py b/app/connect_wizard.py\n--- a/app/connect_wizard.py\n+++ b/app/connect_wizard.py\n@@ -95,6 +95,14 @@\n else:\n self.dblist.SetSelection(0)\n self.sizer.Add(self.dblist, -1, wx.EXPAND)\n+\n+ #TODO: This is a hack to force the wizard to be large enough to not hide\n+ #the Choose button on the Connect Page when the engine has a file\n+ #attribute. This should be fixed properly by figuring out how to make wx\n+ #work with the fact that the connect page has varying fields depending\n+ #on the engine.\n+ longspace = StaticText(self, -1, \"\", wx.Size(375, -1))\n+ self.sizer.Add(longspace, -1)\n \n def dirbtn_click(self, evt):\n dialog = wx.DirDialog(None, message=\"Choose a directory to \" +\n@@ -147,14 +155,8 @@\n self.fieldset[opt[0]] = wx.BoxSizer(wx.HORIZONTAL)\n label = StaticText(self, -1, opt[0] + \": \", \n size=wx.Size(90,35))\n- if opt[0] == \"password\":\n- txt = TextCtrl(self, -1, \n- str(default), \n- size=wx.Size(200,-1), \n- style=wx.TE_PASSWORD)\n- else:\n- txt = TextCtrl(self, -1, str(default),\n- size=wx.Size(200,-1))\n+ style = wx.TE_PASSWORD if opt[0] == \"password\" else 0\n+ txt = TextCtrl(self, -1, str(default), size=wx.Size(200,-1), style=style)\n self.option[opt[0]] = txt\n self.fieldset[opt[0]].AddMany([label, \n self.option[opt[0]]])\n", "issue": "Choose button missing on Database Connection page for Windows & OS X\nWhen selecting a database connection that has an associated file (i.e., sqlite and Access) the GUI should allow you to navigate to the file by clicking on the `Choose` button. This works fine on Ubuntu:\n\n\n\nBut this button is missing on the current builds on Windows and OS X. The Database Connection page also generally looks narrower on Windows and OS X.\n\nFrom Windows:\n\n\n", "before_files": [{"content": "\"\"\"Connection setup wizard.\n\n\"\"\"\n\nimport os\nimport sys\nimport wx\nimport wx.html\nimport wx.wizard\nfrom retriever.lib.models import Engine\nfrom retriever.lib.tools import get_saved_connection, save_connection\nfrom retriever.app.controls import *\nfrom retriever.app.images import icon\n\nfrom retriever import VERSION\n\n\nclass ConnectWizard(wx.wizard.Wizard):\n def __init__(self, lists, engine_list, selected=None):\n wx.wizard.Wizard.__init__(self, None, -1, \"EcoData Retriever\")\n \n #self.SetIcon(icon.GetIcon())\n \n welcome = \"\"\"<h2>Connection Wizard</h2>\n \n <p>The EcoData Retriever downloads raw data files, stores them on your\n computer, and imports the data into your own local database.</p>\n <p>To begin, you'll need to set up your own database. 
Once you've done\n that, this wizard will walk you through the steps to connect to your\n database.</p>\n <p>Supported database systems currently include:</p>\n <ul>\"\"\"\n \n for db in engine_list:\n welcome += \"<li>\" + db.name + \"</li>\" \n \n welcome += \"</ul>\" \n \n self.pages = []\n self.lists = lists\n self.engine_list = engine_list\n self.selected = selected\n \n self.pages.append(TitledPage(self, \"\", \"\"))\n \n self.pages.append(ChooseDbPage(self, \"Database\", \n \"What kind of database are you using?\"))\n \n self.pages.append(ConnectPage(self, \n \"Connection Info\", \n \"\"))\n \n self.pages.append(ConfirmPage(self, \"Connection\", \"\"))\n\n (self.TITLE, self.CHOOSEDB, \n self.CONNECTION, self.CONFIRM) = [self.pages[i] \n for i in range(len(self.pages))]\n \n self.TITLE.welcome = HtmlWindow(self.TITLE)\n self.TITLE.welcome.SetSize((450,400))\n self.TITLE.welcome.SetHtml(welcome)\n self.TITLE.sizer.Add(self.TITLE.welcome, 1, wx.EXPAND)\n \n self.CHOOSEDB.Bind(wx.wizard.EVT_WIZARD_PAGE_CHANGING, self.CONNECTION.Draw)\n self.CONNECTION.Bind(wx.wizard.EVT_WIZARD_PAGE_CHANGING, self.CONFIRM.Draw)\n self.CONFIRM.Bind(wx.wizard.EVT_WIZARD_PAGE_CHANGING, self.CONNECTION.Draw)\n \n for i in range(len(self.pages) - 1):\n wx.wizard.WizardPageSimple_Chain(self.pages[i], self.pages[i + 1])\n \n for page in self.pages:\n self.FitToPage(page)\n \n \n def check_for_updates(self):\n reload(retriever)\n\n\nclass ChooseDbPage(TitledPage):\n def __init__(self, parent, title, label):\n TitledPage.__init__(self, parent, title, label)\n engine_list = parent.engine_list\n \n dblist = ListBox(self, -1, \n choices=[db.name for db in engine_list], \n style=wx.LB_SINGLE,\n size=(-1,150))\n self.dblist = dblist\n if parent.selected:\n index = 0\n for i in range(len(engine_list)):\n if engine_list[i].name == parent.selected:\n index = i \n self.dblist.SetSelection(index)\n else:\n self.dblist.SetSelection(0)\n self.sizer.Add(self.dblist, -1, wx.EXPAND)\n \n def dirbtn_click(self, evt):\n dialog = wx.DirDialog(None, message=\"Choose a directory to \" +\n \"download your data files.\") \n if dialog.ShowModal() == wx.ID_OK: \n self.raw_data_dir.SetValue(dialog.GetPath()) \n else:\n pass\n dialog.Destroy()\n \n\nclass ConnectPage(TitledPage):\n \"\"\"The connection info page.\"\"\"\n def __init__(self, parent, title, label):\n TitledPage.__init__(self, parent, title, label)\n self.option = dict()\n self.sel = None\n self.fields = wx.BoxSizer(wx.VERTICAL)\n self.parent = parent\n \n def Draw(self, evt):\n \"\"\"When the page is drawn, it may need to update its fields if \n the selected database has changed.\"\"\"\n \n if not evt.GetDirection():\n btn = self.parent.FindWindowById(wx.ID_FORWARD) \n if btn: btn.Enable()\n \n if len(self.Parent.CHOOSEDB.dblist.GetStringSelection()) == 0 and evt.Direction:\n evt.Veto() \n else:\n if self.sel != self.Parent.CHOOSEDB.dblist.GetStringSelection():\n self.sel = self.Parent.CHOOSEDB.dblist.GetStringSelection()\n self.engine = Engine()\n for db in self.Parent.engine_list:\n if db.name == self.sel:\n self.engine = db\n self.fields.Clear(True) \n self.fields = wx.BoxSizer(wx.VERTICAL)\n if self.engine.instructions:\n self.fields.Add(StaticText(self, -1, '\\n' + self.engine.instructions + '\\n\\n'))\n self.fieldset = dict()\n self.option = dict()\n saved_opts = get_saved_connection(self.engine.name)\n for opt in self.engine.required_opts:\n if opt[0] in saved_opts.keys():\n default = saved_opts[opt[0]]\n else:\n default = opt[2]\n self.fieldset[opt[0]] = 
wx.BoxSizer(wx.HORIZONTAL)\n label = StaticText(self, -1, opt[0] + \": \", \n size=wx.Size(90,35))\n if opt[0] == \"password\":\n txt = TextCtrl(self, -1, \n str(default), \n size=wx.Size(200,-1), \n style=wx.TE_PASSWORD)\n else:\n txt = TextCtrl(self, -1, str(default),\n size=wx.Size(200,-1))\n self.option[opt[0]] = txt\n self.fieldset[opt[0]].AddMany([label, \n self.option[opt[0]]])\n if opt[0] == \"file\":\n file_opt = opt\n def open_file_dialog(evt):\n filter = \"\"\n if file_opt[3]:\n filter = file_opt[3] + \"|\"\n filter += \"All files (*.*)|*.*\" \n dialog = wx.FileDialog(None, style = wx.OPEN,\n wildcard = filter)\n if dialog.ShowModal() == wx.ID_OK:\n self.option[file_opt[0]].SetValue(dialog.GetPath())\n self.browse = wx.Button(self, -1, \"Choose...\")\n self.fieldset[file_opt[0]].Add(self.browse)\n self.browse.Bind(wx.EVT_BUTTON, open_file_dialog) \n self.fieldset[opt[0]].Layout()\n self.fields.Add(self.fieldset[opt[0]])\n self.sizer.Add(self.fields)\n self.sizer.Layout()\n \n \nclass ConfirmPage(TitledPage):\n \"\"\"The final confirmation page.\"\"\"\n def __init__(self, parent, title, label):\n TitledPage.__init__(self, parent, title, label)\n self.fields = wx.BoxSizer(wx.VERTICAL)\n self.parent = parent\n \n def Draw(self, evt):\n if not evt.GetDirection(): return\n \n self.fields.Clear(True)\n self.fields = wx.BoxSizer(wx.VERTICAL)\n \n self.values_dict = dict()\n connect = self.parent.CONNECTION\n for key in connect.option.keys():\n self.values_dict[key] = str(connect.option[key].Value)\n \n try:\n connect.engine.opts = self.values_dict\n connect.engine.connect(force_reconnect=True)\n message = '''<p><b>Success!</b>Your connection has been saved.</p>\n <p>Click Finish to continue.</p>'''\n save_connection(connect.engine.name, self.values_dict)\n \n except Exception as e:\n message = message = '''<p><b>Error.</b>There was a problem with your\n connection:</p><p>%s</p>\n <p>Click Back to try again, or Cancel.</p>''' % e\n \n btn = self.parent.FindWindowById(wx.ID_FORWARD) \n if btn: btn.Disable()\n \n self.message = HtmlWindow(self)\n self.message.SetSize((450,400))\n self.message.SetHtml(message)\n self.fields.Add(self.message, 1, wx.EXPAND)\n self.sizer.Add(self.fields)\n \n self.sizer.Layout()\n", "path": "app/connect_wizard.py"}], "after_files": [{"content": "\"\"\"Connection setup wizard.\n\n\"\"\"\n\nimport os\nimport sys\nimport wx\nimport wx.html\nimport wx.wizard\nfrom retriever.lib.models import Engine\nfrom retriever.lib.tools import get_saved_connection, save_connection\nfrom retriever.app.controls import *\nfrom retriever.app.images import icon\n\nfrom retriever import VERSION\n\n\nclass ConnectWizard(wx.wizard.Wizard):\n def __init__(self, lists, engine_list, selected=None):\n wx.wizard.Wizard.__init__(self, None, -1, \"EcoData Retriever\")\n \n #self.SetIcon(icon.GetIcon())\n \n welcome = \"\"\"<h2>Connection Wizard</h2>\n \n <p>The EcoData Retriever downloads raw data files, stores them on your\n computer, and imports the data into your own local database.</p>\n <p>To begin, you'll need to set up your own database. 
Once you've done\n that, this wizard will walk you through the steps to connect to your\n database.</p>\n <p>Supported database systems currently include:</p>\n <ul>\"\"\"\n \n for db in engine_list:\n welcome += \"<li>\" + db.name + \"</li>\" \n \n welcome += \"</ul>\" \n \n self.pages = []\n self.lists = lists\n self.engine_list = engine_list\n self.selected = selected\n \n self.pages.append(TitledPage(self, \"\", \"\"))\n \n self.pages.append(ChooseDbPage(self, \"Database\", \n \"What kind of database are you using?\"))\n \n self.pages.append(ConnectPage(self, \n \"Connection Info\", \n \"\"))\n \n self.pages.append(ConfirmPage(self, \"Connection\", \"\"))\n\n (self.TITLE, self.CHOOSEDB, \n self.CONNECTION, self.CONFIRM) = [self.pages[i] \n for i in range(len(self.pages))]\n \n self.TITLE.welcome = HtmlWindow(self.TITLE)\n self.TITLE.welcome.SetSize((450,400))\n self.TITLE.welcome.SetHtml(welcome)\n self.TITLE.sizer.Add(self.TITLE.welcome, 1, wx.EXPAND)\n \n self.CHOOSEDB.Bind(wx.wizard.EVT_WIZARD_PAGE_CHANGING, self.CONNECTION.Draw)\n self.CONNECTION.Bind(wx.wizard.EVT_WIZARD_PAGE_CHANGING, self.CONFIRM.Draw)\n self.CONFIRM.Bind(wx.wizard.EVT_WIZARD_PAGE_CHANGING, self.CONNECTION.Draw)\n \n for i in range(len(self.pages) - 1):\n wx.wizard.WizardPageSimple_Chain(self.pages[i], self.pages[i + 1])\n \n for page in self.pages:\n self.FitToPage(page)\n \n \n def check_for_updates(self):\n reload(retriever)\n\n\nclass ChooseDbPage(TitledPage):\n def __init__(self, parent, title, label):\n TitledPage.__init__(self, parent, title, label)\n engine_list = parent.engine_list\n \n dblist = ListBox(self, -1, \n choices=[db.name for db in engine_list], \n style=wx.LB_SINGLE,\n size=(-1,150))\n self.dblist = dblist\n if parent.selected:\n index = 0\n for i in range(len(engine_list)):\n if engine_list[i].name == parent.selected:\n index = i \n self.dblist.SetSelection(index)\n else:\n self.dblist.SetSelection(0)\n self.sizer.Add(self.dblist, -1, wx.EXPAND)\n\n #TODO: This is a hack to force the wizard to be large enough to not hide\n #the Choose button on the Connect Page when the engine has a file\n #attribute. 
This should be fixed properly by figuring out how to make wx\n #work with the fact that the connect page has varying fields depending\n #on the engine.\n longspace = StaticText(self, -1, \"\", wx.Size(375, -1))\n self.sizer.Add(longspace, -1)\n \n def dirbtn_click(self, evt):\n dialog = wx.DirDialog(None, message=\"Choose a directory to \" +\n \"download your data files.\") \n if dialog.ShowModal() == wx.ID_OK: \n self.raw_data_dir.SetValue(dialog.GetPath()) \n else:\n pass\n dialog.Destroy()\n \n\nclass ConnectPage(TitledPage):\n \"\"\"The connection info page.\"\"\"\n def __init__(self, parent, title, label):\n TitledPage.__init__(self, parent, title, label)\n self.option = dict()\n self.sel = None\n self.fields = wx.BoxSizer(wx.VERTICAL)\n self.parent = parent\n \n def Draw(self, evt):\n \"\"\"When the page is drawn, it may need to update its fields if \n the selected database has changed.\"\"\"\n \n if not evt.GetDirection():\n btn = self.parent.FindWindowById(wx.ID_FORWARD) \n if btn: btn.Enable()\n \n if len(self.Parent.CHOOSEDB.dblist.GetStringSelection()) == 0 and evt.Direction:\n evt.Veto() \n else:\n if self.sel != self.Parent.CHOOSEDB.dblist.GetStringSelection():\n self.sel = self.Parent.CHOOSEDB.dblist.GetStringSelection()\n self.engine = Engine()\n for db in self.Parent.engine_list:\n if db.name == self.sel:\n self.engine = db\n self.fields.Clear(True) \n self.fields = wx.BoxSizer(wx.VERTICAL)\n if self.engine.instructions:\n self.fields.Add(StaticText(self, -1, '\\n' + self.engine.instructions + '\\n\\n'))\n self.fieldset = dict()\n self.option = dict()\n saved_opts = get_saved_connection(self.engine.name)\n for opt in self.engine.required_opts:\n if opt[0] in saved_opts.keys():\n default = saved_opts[opt[0]]\n else:\n default = opt[2]\n self.fieldset[opt[0]] = wx.BoxSizer(wx.HORIZONTAL)\n label = StaticText(self, -1, opt[0] + \": \", \n size=wx.Size(90,35))\n style = wx.TE_PASSWORD if opt[0] == \"password\" else 0\n txt = TextCtrl(self, -1, str(default), size=wx.Size(200,-1), style=style)\n self.option[opt[0]] = txt\n self.fieldset[opt[0]].AddMany([label, \n self.option[opt[0]]])\n if opt[0] == \"file\":\n file_opt = opt\n def open_file_dialog(evt):\n filter = \"\"\n if file_opt[3]:\n filter = file_opt[3] + \"|\"\n filter += \"All files (*.*)|*.*\" \n dialog = wx.FileDialog(None, style = wx.OPEN,\n wildcard = filter)\n if dialog.ShowModal() == wx.ID_OK:\n self.option[file_opt[0]].SetValue(dialog.GetPath())\n self.browse = wx.Button(self, -1, \"Choose...\")\n self.fieldset[file_opt[0]].Add(self.browse)\n self.browse.Bind(wx.EVT_BUTTON, open_file_dialog) \n self.fieldset[opt[0]].Layout()\n self.fields.Add(self.fieldset[opt[0]])\n self.sizer.Add(self.fields)\n self.sizer.Layout()\n \n \nclass ConfirmPage(TitledPage):\n \"\"\"The final confirmation page.\"\"\"\n def __init__(self, parent, title, label):\n TitledPage.__init__(self, parent, title, label)\n self.fields = wx.BoxSizer(wx.VERTICAL)\n self.parent = parent\n \n def Draw(self, evt):\n if not evt.GetDirection(): return\n \n self.fields.Clear(True)\n self.fields = wx.BoxSizer(wx.VERTICAL)\n \n self.values_dict = dict()\n connect = self.parent.CONNECTION\n for key in connect.option.keys():\n self.values_dict[key] = str(connect.option[key].Value)\n \n try:\n connect.engine.opts = self.values_dict\n connect.engine.connect(force_reconnect=True)\n message = '''<p><b>Success!</b>Your connection has been saved.</p>\n <p>Click Finish to continue.</p>'''\n save_connection(connect.engine.name, self.values_dict)\n \n except Exception 
as e:\n message = message = '''<p><b>Error.</b>There was a problem with your\n connection:</p><p>%s</p>\n <p>Click Back to try again, or Cancel.</p>''' % e\n \n btn = self.parent.FindWindowById(wx.ID_FORWARD) \n if btn: btn.Disable()\n \n self.message = HtmlWindow(self)\n self.message.SetSize((450,400))\n self.message.SetHtml(message)\n self.fields.Add(self.message, 1, wx.EXPAND)\n self.sizer.Add(self.fields)\n \n self.sizer.Layout()\n", "path": "app/connect_wizard.py"}]}
| 2,879 | 434 |
gh_patches_debug_12955
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-2052
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
find_updates: Unexpected error (local variable 'info' referenced before assignment)
### Description
An error is logged; errors shouldn't happen.
### Reproduction steps
Seen in log channel, unknown, run the bot for long enough???
### Expected behavior
No error
### Logs
```
[2021-03-31 22:00:24,531] sopel.tools.jobs ERROR - Error while processing job: local variable 'info' referenced before assignment
[2021-03-31 22:00:24,538] sopel.bot ERROR - Unexpected error (local variable 'info' referenced before assignment)
Traceback (most recent call last):
File "/srv/sopelbots/devvenv/lib/python3.7/site-packages/sopel/tools/jobs.py", line 191, in _call
job.execute(self.manager)
File "/srv/sopelbots/devvenv/lib/python3.7/site-packages/sopel/tools/jobs.py", line 463, in execute
return self._handler(manager)
File "/srv/sopelbots/devvenv/lib/python3.7/site-packages/sopel/modules/find_updates.py", line 86, in check_version
latest = info['version']
UnboundLocalError: local variable 'info' referenced before assignment
```
### Environment
- Sopel `.version`: https://github.com/sopel-irc/sopel/commit/96c55aff852bf40bca56de49b2bc30378bf1c819
- Sopel installed via: pip/wheel
- Python version: 3.7.3
- Operating system: Debian 10.9
- IRCd `/version`: freenode
- Relevant plugins: find_updates
--- END ISSUE ---
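The traceback reduces to a control-flow gap: `success` is set to ``True`` once the HTTP request returns, the JSON decode then fails and is swallowed, and because `success` is never reset the early-return guard is skipped and a name that was never bound gets read. A minimal sketch of that failure mode (simplified; not the plugin code itself):

```python
import json

def check_version_sketch(raw="not json"):
    # Simplified reproduction of the reported bug; not the plugin code itself.
    success = True               # the HTTP request returned, so success was set
    try:
        info = json.loads(raw)   # decoding fails here with ValueError
    except ValueError:
        pass                     # the failure is counted, but `success` stays True
    if not success:              # guard is skipped because success is still True...
        return None
    return info["version"]       # ...and `info` was never bound -> UnboundLocalError
```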
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/find_updates.py`
Content:
```
1 # coding=utf-8
2 """
3 find_updates.py - Sopel Update Check Plugin
4 This is separated from version.py, so that it can be easily overridden by
5 distribution packagers, and they can check their repositories rather than the
6 Sopel website.
7 Copyright 2014, Elsie Powell, embolalia.com
8 Licensed under the Eiffel Forum License 2.
9
10 https://sopel.chat
11 """
12 from __future__ import absolute_import, division, print_function, unicode_literals
13
14 import requests
15
16 from sopel import (
17 __version__ as current_version,
18 _version_info,
19 plugin,
20 tools,
21 version_info,
22 )
23
24
25 wait_time = 24 * 60 * 60 # check once per day
26 version_url = 'https://sopel.chat/latest.json'
27 stable_message = (
28 'A new Sopel version, {}, is available; I am running {}. Please update '
29 'me. Full release notes at {}'
30 )
31 unstable_message = (
32 'A new pre-release version, {}, is available; I am running {}. Please '
33 'update me.{}'
34 )
35
36
37 @plugin.event(tools.events.RPL_LUSERCLIENT)
38 def startup_version_check(bot, trigger):
39 if not bot.memory.get('update_startup_check_run', False):
40 bot.memory['update_startup_check_run'] = True
41 check_version(bot)
42
43
44 def _check_succeeded(bot):
45 bot.memory['update_failures'] = 0
46
47
48 def _check_failed(bot):
49 bot.memory['update_failures'] = 1 + bot.memory.get('update_failures', 0)
50
51
52 @plugin.interval(wait_time)
53 def check_version(bot):
54 version = version_info
55 success = False
56
57 try:
58 r = requests.get(version_url, timeout=(5, 5))
59 except requests.exceptions.RequestException:
60 _check_failed(bot)
61 else:
62 success = True
63
64 try:
65 if success:
66 info = r.json()
67 except ValueError:
68 # TODO: use JSONDecodeError when dropping Pythons < 3.5
69 _check_failed(bot)
70
71 if not success and bot.memory.get('update_failures', 0) > 4:
72 bot.say(
73 "[update] I haven't been able to check for updates in a while. "
74 "Please verify that {} is working and I can reach it."
75 .format(version_url), bot.config.core.owner)
76 bot.say(
77 "[update] If this issue persists, please alert the Sopel dev team "
78 "in #sopel on freenode, or open a GitHub issue: "
79 "https://github.com/sopel-irc/sopel/issues",
80 bot.config.core.owner)
81 return
82
83 _check_succeeded(bot)
84
85 if version.releaselevel == 'final':
86 latest = info['version']
87 notes = info['release_notes']
88 message = stable_message
89 else:
90 latest = info['unstable']
91 notes = info.get('unstable_notes', '')
92 if notes:
93 notes = ' Full release notes at ' + notes
94 message = unstable_message
95 latest_version = _version_info(latest)
96
97 if version < latest_version:
98 msg = message.format(latest, current_version, notes)
99 bot.say('[update] ' + msg, bot.config.core.owner)
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sopel/modules/find_updates.py b/sopel/modules/find_updates.py
--- a/sopel/modules/find_updates.py
+++ b/sopel/modules/find_updates.py
@@ -67,8 +67,14 @@
except ValueError:
# TODO: use JSONDecodeError when dropping Pythons < 3.5
_check_failed(bot)
+ success = False
- if not success and bot.memory.get('update_failures', 0) > 4:
+ if not success:
+ if bot.memory.get('update_failures', 0) <= 4:
+ # not enough failures to worry; silently ignore this one
+ return
+
+ # too many failures to ignore; notify owner
bot.say(
"[update] I haven't been able to check for updates in a while. "
"Please verify that {} is working and I can reach it."
|
{"golden_diff": "diff --git a/sopel/modules/find_updates.py b/sopel/modules/find_updates.py\n--- a/sopel/modules/find_updates.py\n+++ b/sopel/modules/find_updates.py\n@@ -67,8 +67,14 @@\n except ValueError:\n # TODO: use JSONDecodeError when dropping Pythons < 3.5\n _check_failed(bot)\n+ success = False\n \n- if not success and bot.memory.get('update_failures', 0) > 4:\n+ if not success:\n+ if bot.memory.get('update_failures', 0) <= 4:\n+ # not enough failures to worry; silently ignore this one\n+ return\n+\n+ # too many failures to ignore; notify owner\n bot.say(\n \"[update] I haven't been able to check for updates in a while. \"\n \"Please verify that {} is working and I can reach it.\"\n", "issue": "find_updates: Unexpected error (local variable 'info' referenced before assignment)\n### Description\r\nAn error is logged, errors shouldn't happen.\r\n\r\n### Reproduction steps\r\nSeen in log channel, unknown, run the bot for long enough???\r\n\r\n### Expected behavior\r\nNo error\r\n\r\n### Logs\r\n```\r\n[2021-03-31 22:00:24,531] sopel.tools.jobs ERROR - Error while processing job: local variable 'info' referenced before assignment\r\n[2021-03-31 22:00:24,538] sopel.bot ERROR - Unexpected error (local variable 'info' referenced before assignment)\r\nTraceback (most recent call last):\r\n File \"/srv/sopelbots/devvenv/lib/python3.7/site-packages/sopel/tools/jobs.py\", line 191, in _call\r\n job.execute(self.manager)\r\n File \"/srv/sopelbots/devvenv/lib/python3.7/site-packages/sopel/tools/jobs.py\", line 463, in execute\r\n return self._handler(manager)\r\n File \"/srv/sopelbots/devvenv/lib/python3.7/site-packages/sopel/modules/find_updates.py\", line 86, in check_version\r\n latest = info['version']\r\nUnboundLocalError: local variable 'info' referenced before assignment\r\n\r\n```\r\n\r\n### Environment\r\n- Sopel `.version`: https://github.com/sopel-irc/sopel/commit/96c55aff852bf40bca56de49b2bc30378bf1c819\r\n- Sopel installed via: pip/wheel\r\n- Python version: 3.7.3\r\n- Operating system: Debian 10.9\r\n- IRCd `/version`: freenode\r\n- Relevant plugins: find_updates\r\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nfind_updates.py - Sopel Update Check Plugin\nThis is separated from version.py, so that it can be easily overridden by\ndistribution packagers, and they can check their repositories rather than the\nSopel website.\nCopyright 2014, Elsie Powell, embolalia.com\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport requests\n\nfrom sopel import (\n __version__ as current_version,\n _version_info,\n plugin,\n tools,\n version_info,\n)\n\n\nwait_time = 24 * 60 * 60 # check once per day\nversion_url = 'https://sopel.chat/latest.json'\nstable_message = (\n 'A new Sopel version, {}, is available; I am running {}. Please update '\n 'me. Full release notes at {}'\n)\nunstable_message = (\n 'A new pre-release version, {}, is available; I am running {}. 
Please '\n 'update me.{}'\n)\n\n\[email protected](tools.events.RPL_LUSERCLIENT)\ndef startup_version_check(bot, trigger):\n if not bot.memory.get('update_startup_check_run', False):\n bot.memory['update_startup_check_run'] = True\n check_version(bot)\n\n\ndef _check_succeeded(bot):\n bot.memory['update_failures'] = 0\n\n\ndef _check_failed(bot):\n bot.memory['update_failures'] = 1 + bot.memory.get('update_failures', 0)\n\n\[email protected](wait_time)\ndef check_version(bot):\n version = version_info\n success = False\n\n try:\n r = requests.get(version_url, timeout=(5, 5))\n except requests.exceptions.RequestException:\n _check_failed(bot)\n else:\n success = True\n\n try:\n if success:\n info = r.json()\n except ValueError:\n # TODO: use JSONDecodeError when dropping Pythons < 3.5\n _check_failed(bot)\n\n if not success and bot.memory.get('update_failures', 0) > 4:\n bot.say(\n \"[update] I haven't been able to check for updates in a while. \"\n \"Please verify that {} is working and I can reach it.\"\n .format(version_url), bot.config.core.owner)\n bot.say(\n \"[update] If this issue persists, please alert the Sopel dev team \"\n \"in #sopel on freenode, or open a GitHub issue: \"\n \"https://github.com/sopel-irc/sopel/issues\",\n bot.config.core.owner)\n return\n\n _check_succeeded(bot)\n\n if version.releaselevel == 'final':\n latest = info['version']\n notes = info['release_notes']\n message = stable_message\n else:\n latest = info['unstable']\n notes = info.get('unstable_notes', '')\n if notes:\n notes = ' Full release notes at ' + notes\n message = unstable_message\n latest_version = _version_info(latest)\n\n if version < latest_version:\n msg = message.format(latest, current_version, notes)\n bot.say('[update] ' + msg, bot.config.core.owner)\n", "path": "sopel/modules/find_updates.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"\nfind_updates.py - Sopel Update Check Plugin\nThis is separated from version.py, so that it can be easily overridden by\ndistribution packagers, and they can check their repositories rather than the\nSopel website.\nCopyright 2014, Elsie Powell, embolalia.com\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport requests\n\nfrom sopel import (\n __version__ as current_version,\n _version_info,\n plugin,\n tools,\n version_info,\n)\n\n\nwait_time = 24 * 60 * 60 # check once per day\nversion_url = 'https://sopel.chat/latest.json'\nstable_message = (\n 'A new Sopel version, {}, is available; I am running {}. Please update '\n 'me. Full release notes at {}'\n)\nunstable_message = (\n 'A new pre-release version, {}, is available; I am running {}. 
Please '\n 'update me.{}'\n)\n\n\[email protected](tools.events.RPL_LUSERCLIENT)\ndef startup_version_check(bot, trigger):\n if not bot.memory.get('update_startup_check_run', False):\n bot.memory['update_startup_check_run'] = True\n check_version(bot)\n\n\ndef _check_succeeded(bot):\n bot.memory['update_failures'] = 0\n\n\ndef _check_failed(bot):\n bot.memory['update_failures'] = 1 + bot.memory.get('update_failures', 0)\n\n\[email protected](wait_time)\ndef check_version(bot):\n version = version_info\n success = False\n\n try:\n r = requests.get(version_url, timeout=(5, 5))\n except requests.exceptions.RequestException:\n _check_failed(bot)\n else:\n success = True\n\n try:\n if success:\n info = r.json()\n except ValueError:\n # TODO: use JSONDecodeError when dropping Pythons < 3.5\n _check_failed(bot)\n success = False\n\n if not success:\n if bot.memory.get('update_failures', 0) <= 4:\n # not enough failures to worry; silently ignore this one\n return\n\n # too many failures to ignore; notify owner\n bot.say(\n \"[update] I haven't been able to check for updates in a while. \"\n \"Please verify that {} is working and I can reach it.\"\n .format(version_url), bot.config.core.owner)\n bot.say(\n \"[update] If this issue persists, please alert the Sopel dev team \"\n \"in #sopel on freenode, or open a GitHub issue: \"\n \"https://github.com/sopel-irc/sopel/issues\",\n bot.config.core.owner)\n return\n\n _check_succeeded(bot)\n\n if version.releaselevel == 'final':\n latest = info['version']\n notes = info['release_notes']\n message = stable_message\n else:\n latest = info['unstable']\n notes = info.get('unstable_notes', '')\n if notes:\n notes = ' Full release notes at ' + notes\n message = unstable_message\n latest_version = _version_info(latest)\n\n if version < latest_version:\n msg = message.format(latest, current_version, notes)\n bot.say('[update] ' + msg, bot.config.core.owner)\n", "path": "sopel/modules/find_updates.py"}]}
| 1,579 | 200 |
gh_patches_debug_32345
|
rasdani/github-patches
|
git_diff
|
optuna__optuna-1103
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`plot_intermediate_values` example code does not contain intermediate values.
The example code snippet embedded in the documentation for [`plot_intermediate_values`](https://github.com/optuna/optuna/blob/master/optuna/visualization/intermediate_values.py) doesn't call `Trial.report` and thus does not contain intermediate values. The example should be updated. See also https://github.com/optuna/optuna/pull/1003#issuecomment-599359742.
Current documentation with empty plot: https://optuna.readthedocs.io/en/latest/reference/visualization.html#optuna.visualization.plot_intermediate_values
--- END ISSUE ---
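The fix direction is to give the documentation example an objective that actually produces intermediate values, i.e. one that calls `trial.report()` on every step and optionally prunes. A condensed sketch of such an objective (illustrative; the patched example further down follows the same shape):

```python
import optuna

def objective(trial):
    # Objective that reports an intermediate value every step, so the plot
    # has something to show (condensed from the patched example).
    lr = trial.suggest_loguniform("lr", 1e-5, 1e-1)
    x = 3.0
    for step in range(128):
        y = (x - 2) ** 2
        trial.report(y, step=step)                 # record intermediate value
        if trial.should_prune():
            raise optuna.exceptions.TrialPruned()
        x -= (2 * x - 4) * lr                      # gradient step on (x - 2)**2
    return y

study = optuna.create_study()
study.optimize(objective, n_trials=16)
optuna.visualization.plot_intermediate_values(study)
```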
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/visualization/intermediate_values.py`
Content:
```
1 from optuna.logging import get_logger
2 from optuna.trial import TrialState
3 from optuna import type_checking
4 from optuna.visualization.utils import _check_plotly_availability
5 from optuna.visualization.utils import is_available
6
7 if type_checking.TYPE_CHECKING:
8 from optuna.study import Study # NOQA
9
10 if is_available():
11 from optuna.visualization.plotly_imports import go
12
13 logger = get_logger(__name__)
14
15
16 def plot_intermediate_values(study):
17 # type: (Study) -> go.Figure
18 """Plot intermediate values of all trials in a study.
19
20 Example:
21
22 The following code snippet shows how to plot intermediate values.
23
24 .. testcode::
25
26 import optuna
27
28 def objective(trial):
29 x = trial.suggest_uniform('x', -100, 100)
30 y = trial.suggest_categorical('y', [-1, 0, 1])
31 return x ** 2 + y
32
33 study = optuna.create_study()
34 study.optimize(objective, n_trials=10)
35
36 optuna.visualization.plot_intermediate_values(study)
37
38 .. raw:: html
39
40 <iframe src="../_static/plot_intermediate_values.html"
41 width="100%" height="500px" frameborder="0">
42 </iframe>
43
44 Args:
45 study:
46 A :class:`~optuna.study.Study` object whose trials are plotted for their intermediate
47 values.
48
49 Returns:
50 A :class:`plotly.graph_objs.Figure` object.
51 """
52
53 _check_plotly_availability()
54 return _get_intermediate_plot(study)
55
56
57 def _get_intermediate_plot(study):
58 # type: (Study) -> go.Figure
59
60 layout = go.Layout(
61 title="Intermediate Values Plot",
62 xaxis={"title": "Step"},
63 yaxis={"title": "Intermediate Value"},
64 showlegend=False,
65 )
66
67 target_state = [TrialState.PRUNED, TrialState.COMPLETE, TrialState.RUNNING]
68 trials = [trial for trial in study.trials if trial.state in target_state]
69
70 if len(trials) == 0:
71 logger.warning("Study instance does not contain trials.")
72 return go.Figure(data=[], layout=layout)
73
74 traces = []
75 for trial in trials:
76 if trial.intermediate_values:
77 sorted_intermediate_values = sorted(trial.intermediate_values.items())
78 trace = go.Scatter(
79 x=tuple((x for x, _ in sorted_intermediate_values)),
80 y=tuple((y for _, y in sorted_intermediate_values)),
81 mode="lines+markers",
82 marker={"maxdisplayed": 10},
83 name="Trial{}".format(trial.number),
84 )
85 traces.append(trace)
86
87 if not traces:
88 logger.warning(
89 "You need to set up the pruning feature to utilize `plot_intermediate_values()`"
90 )
91 return go.Figure(data=[], layout=layout)
92
93 figure = go.Figure(data=traces, layout=layout)
94
95 return figure
96
```
Path: `docs/source/scripts/plot_intermediate_values.py`
Content:
```
1 import os
2
3 import plotly
4
5 import optuna
6
7
8 def objective(trial):
9 x = trial.suggest_uniform("x", -100, 100)
10 y = trial.suggest_categorical("y", [-1, 0, 1])
11 return x ** 2 + y
12
13
14 def main():
15 sampler = optuna.samplers.TPESampler(seed=10)
16 study = optuna.create_study(sampler=sampler)
17 study.optimize(objective, n_trials=10)
18
19 fig = optuna.visualization.plot_intermediate_values(study)
20 fig_html = plotly.offline.plot(fig, output_type="div", include_plotlyjs="cdn", auto_open=False)
21
22 fig_dir = "../plotly_figures"
23 os.makedirs(fig_dir, exist_ok=True)
24 with open(os.path.join(fig_dir, "plot_intermediate_values.html"), "w") as f:
25 f.write(fig_html)
26
27
28 if __name__ == "__main__":
29 main()
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/source/scripts/plot_intermediate_values.py b/docs/source/scripts/plot_intermediate_values.py
--- a/docs/source/scripts/plot_intermediate_values.py
+++ b/docs/source/scripts/plot_intermediate_values.py
@@ -5,16 +5,35 @@
import optuna
+def f(x):
+ return (x - 2) ** 2
+
+
+def df(x):
+ return 2 * x - 4
+
+
def objective(trial):
- x = trial.suggest_uniform("x", -100, 100)
- y = trial.suggest_categorical("y", [-1, 0, 1])
- return x ** 2 + y
+ lr = trial.suggest_loguniform("lr", 1e-5, 1e-1)
+
+ x = 3
+ for step in range(128):
+ y = f(x)
+
+ trial.report(y, step=step)
+ if trial.should_prune():
+ raise optuna.exceptions.TrialPruned()
+
+ gy = df(x)
+ x -= gy * lr
+
+ return y
def main():
sampler = optuna.samplers.TPESampler(seed=10)
study = optuna.create_study(sampler=sampler)
- study.optimize(objective, n_trials=10)
+ study.optimize(objective, n_trials=16)
fig = optuna.visualization.plot_intermediate_values(study)
fig_html = plotly.offline.plot(fig, output_type="div", include_plotlyjs="cdn", auto_open=False)
diff --git a/optuna/visualization/intermediate_values.py b/optuna/visualization/intermediate_values.py
--- a/optuna/visualization/intermediate_values.py
+++ b/optuna/visualization/intermediate_values.py
@@ -25,13 +25,30 @@
import optuna
+ def f(x):
+ return (x - 2) ** 2
+
+ def df(x):
+ return 2 * x - 4
+
def objective(trial):
- x = trial.suggest_uniform('x', -100, 100)
- y = trial.suggest_categorical('y', [-1, 0, 1])
- return x ** 2 + y
+ lr = trial.suggest_loguniform("lr", 1e-5, 1e-1)
+
+ x = 3
+ for step in range(128):
+ y = f(x)
+
+ trial.report(y, step=step)
+ if trial.should_prune():
+ raise optuna.exceptions.TrialPruned()
+
+ gy = df(x)
+ x -= gy * lr
+
+ return y
study = optuna.create_study()
- study.optimize(objective, n_trials=10)
+ study.optimize(objective, n_trials=16)
optuna.visualization.plot_intermediate_values(study)
|
{"golden_diff": "diff --git a/docs/source/scripts/plot_intermediate_values.py b/docs/source/scripts/plot_intermediate_values.py\n--- a/docs/source/scripts/plot_intermediate_values.py\n+++ b/docs/source/scripts/plot_intermediate_values.py\n@@ -5,16 +5,35 @@\n import optuna\n \n \n+def f(x):\n+ return (x - 2) ** 2\n+\n+\n+def df(x):\n+ return 2 * x - 4\n+\n+\n def objective(trial):\n- x = trial.suggest_uniform(\"x\", -100, 100)\n- y = trial.suggest_categorical(\"y\", [-1, 0, 1])\n- return x ** 2 + y\n+ lr = trial.suggest_loguniform(\"lr\", 1e-5, 1e-1)\n+\n+ x = 3\n+ for step in range(128):\n+ y = f(x)\n+\n+ trial.report(y, step=step)\n+ if trial.should_prune():\n+ raise optuna.exceptions.TrialPruned()\n+\n+ gy = df(x)\n+ x -= gy * lr\n+\n+ return y\n \n \n def main():\n sampler = optuna.samplers.TPESampler(seed=10)\n study = optuna.create_study(sampler=sampler)\n- study.optimize(objective, n_trials=10)\n+ study.optimize(objective, n_trials=16)\n \n fig = optuna.visualization.plot_intermediate_values(study)\n fig_html = plotly.offline.plot(fig, output_type=\"div\", include_plotlyjs=\"cdn\", auto_open=False)\ndiff --git a/optuna/visualization/intermediate_values.py b/optuna/visualization/intermediate_values.py\n--- a/optuna/visualization/intermediate_values.py\n+++ b/optuna/visualization/intermediate_values.py\n@@ -25,13 +25,30 @@\n \n import optuna\n \n+ def f(x):\n+ return (x - 2) ** 2\n+\n+ def df(x):\n+ return 2 * x - 4\n+\n def objective(trial):\n- x = trial.suggest_uniform('x', -100, 100)\n- y = trial.suggest_categorical('y', [-1, 0, 1])\n- return x ** 2 + y\n+ lr = trial.suggest_loguniform(\"lr\", 1e-5, 1e-1)\n+\n+ x = 3\n+ for step in range(128):\n+ y = f(x)\n+\n+ trial.report(y, step=step)\n+ if trial.should_prune():\n+ raise optuna.exceptions.TrialPruned()\n+\n+ gy = df(x)\n+ x -= gy * lr\n+\n+ return y\n \n study = optuna.create_study()\n- study.optimize(objective, n_trials=10)\n+ study.optimize(objective, n_trials=16)\n \n optuna.visualization.plot_intermediate_values(study)\n", "issue": "`plot_intermediate_values` example code does not contain intermediate values.\nThe example code snippet embedded in the documentation for [`plot_intermediate_values`](https://github.com/optuna/optuna/blob/master/optuna/visualization/intermediate_values.py) doesn't call `Trial.report` and thus does not contain intermediate values. The example should be updated. See also https://github.com/optuna/optuna/pull/1003#issuecomment-599359742. \r\n\r\nCurrent documentation with empty plot: https://optuna.readthedocs.io/en/latest/reference/visualization.html#optuna.visualization.plot_intermediate_values\n", "before_files": [{"content": "from optuna.logging import get_logger\nfrom optuna.trial import TrialState\nfrom optuna import type_checking\nfrom optuna.visualization.utils import _check_plotly_availability\nfrom optuna.visualization.utils import is_available\n\nif type_checking.TYPE_CHECKING:\n from optuna.study import Study # NOQA\n\nif is_available():\n from optuna.visualization.plotly_imports import go\n\nlogger = get_logger(__name__)\n\n\ndef plot_intermediate_values(study):\n # type: (Study) -> go.Figure\n \"\"\"Plot intermediate values of all trials in a study.\n\n Example:\n\n The following code snippet shows how to plot intermediate values.\n\n .. 
testcode::\n\n import optuna\n\n def objective(trial):\n x = trial.suggest_uniform('x', -100, 100)\n y = trial.suggest_categorical('y', [-1, 0, 1])\n return x ** 2 + y\n\n study = optuna.create_study()\n study.optimize(objective, n_trials=10)\n\n optuna.visualization.plot_intermediate_values(study)\n\n .. raw:: html\n\n <iframe src=\"../_static/plot_intermediate_values.html\"\n width=\"100%\" height=\"500px\" frameborder=\"0\">\n </iframe>\n\n Args:\n study:\n A :class:`~optuna.study.Study` object whose trials are plotted for their intermediate\n values.\n\n Returns:\n A :class:`plotly.graph_objs.Figure` object.\n \"\"\"\n\n _check_plotly_availability()\n return _get_intermediate_plot(study)\n\n\ndef _get_intermediate_plot(study):\n # type: (Study) -> go.Figure\n\n layout = go.Layout(\n title=\"Intermediate Values Plot\",\n xaxis={\"title\": \"Step\"},\n yaxis={\"title\": \"Intermediate Value\"},\n showlegend=False,\n )\n\n target_state = [TrialState.PRUNED, TrialState.COMPLETE, TrialState.RUNNING]\n trials = [trial for trial in study.trials if trial.state in target_state]\n\n if len(trials) == 0:\n logger.warning(\"Study instance does not contain trials.\")\n return go.Figure(data=[], layout=layout)\n\n traces = []\n for trial in trials:\n if trial.intermediate_values:\n sorted_intermediate_values = sorted(trial.intermediate_values.items())\n trace = go.Scatter(\n x=tuple((x for x, _ in sorted_intermediate_values)),\n y=tuple((y for _, y in sorted_intermediate_values)),\n mode=\"lines+markers\",\n marker={\"maxdisplayed\": 10},\n name=\"Trial{}\".format(trial.number),\n )\n traces.append(trace)\n\n if not traces:\n logger.warning(\n \"You need to set up the pruning feature to utilize `plot_intermediate_values()`\"\n )\n return go.Figure(data=[], layout=layout)\n\n figure = go.Figure(data=traces, layout=layout)\n\n return figure\n", "path": "optuna/visualization/intermediate_values.py"}, {"content": "import os\n\nimport plotly\n\nimport optuna\n\n\ndef objective(trial):\n x = trial.suggest_uniform(\"x\", -100, 100)\n y = trial.suggest_categorical(\"y\", [-1, 0, 1])\n return x ** 2 + y\n\n\ndef main():\n sampler = optuna.samplers.TPESampler(seed=10)\n study = optuna.create_study(sampler=sampler)\n study.optimize(objective, n_trials=10)\n\n fig = optuna.visualization.plot_intermediate_values(study)\n fig_html = plotly.offline.plot(fig, output_type=\"div\", include_plotlyjs=\"cdn\", auto_open=False)\n\n fig_dir = \"../plotly_figures\"\n os.makedirs(fig_dir, exist_ok=True)\n with open(os.path.join(fig_dir, \"plot_intermediate_values.html\"), \"w\") as f:\n f.write(fig_html)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "docs/source/scripts/plot_intermediate_values.py"}], "after_files": [{"content": "from optuna.logging import get_logger\nfrom optuna.structs import TrialState\nfrom optuna import type_checking\nfrom optuna.visualization.utils import _check_plotly_availability\nfrom optuna.visualization.utils import is_available\n\nif type_checking.TYPE_CHECKING:\n from optuna.study import Study # NOQA\n\nif is_available():\n from optuna.visualization.plotly_imports import go\n\nlogger = get_logger(__name__)\n\n\ndef plot_intermediate_values(study):\n # type: (Study) -> go.Figure\n \"\"\"Plot intermediate values of all trials in a study.\n\n Example:\n\n The following code snippet shows how to plot intermediate values.\n\n .. 
testcode::\n\n import optuna\n\n def f(x):\n return (x - 2) ** 2\n\n def df(x):\n return 2 * x - 4\n\n def objective(trial):\n lr = trial.suggest_loguniform(\"lr\", 1e-5, 1e-1)\n\n x = 3\n for step in range(128):\n y = f(x)\n\n trial.report(y, step=step)\n if trial.should_prune():\n raise optuna.exceptions.TrialPruned()\n\n gy = df(x)\n x -= gy * lr\n\n return y\n\n study = optuna.create_study()\n study.optimize(objective, n_trials=16)\n\n optuna.visualization.plot_intermediate_values(study)\n\n .. raw:: html\n\n <iframe src=\"../_static/plot_intermediate_values.html\"\n width=\"100%\" height=\"500px\" frameborder=\"0\">\n </iframe>\n\n Args:\n study:\n A :class:`~optuna.study.Study` object whose trials are plotted for their intermediate\n values.\n\n Returns:\n A :class:`plotly.graph_objs.Figure` object.\n \"\"\"\n\n _check_plotly_availability()\n return _get_intermediate_plot(study)\n\n\ndef _get_intermediate_plot(study):\n # type: (Study) -> go.Figure\n\n layout = go.Layout(\n title=\"Intermediate Values Plot\",\n xaxis={\"title\": \"Step\"},\n yaxis={\"title\": \"Intermediate Value\"},\n showlegend=False,\n )\n\n target_state = [TrialState.PRUNED, TrialState.COMPLETE, TrialState.RUNNING]\n trials = [trial for trial in study.trials if trial.state in target_state]\n\n if len(trials) == 0:\n logger.warning(\"Study instance does not contain trials.\")\n return go.Figure(data=[], layout=layout)\n\n traces = []\n for trial in trials:\n if trial.intermediate_values:\n sorted_intermediate_values = sorted(trial.intermediate_values.items())\n trace = go.Scatter(\n x=tuple((x for x, _ in sorted_intermediate_values)),\n y=tuple((y for _, y in sorted_intermediate_values)),\n mode=\"lines+markers\",\n marker={\"maxdisplayed\": 10},\n name=\"Trial{}\".format(trial.number),\n )\n traces.append(trace)\n\n if not traces:\n logger.warning(\n \"You need to set up the pruning feature to utilize `plot_intermediate_values()`\"\n )\n return go.Figure(data=[], layout=layout)\n\n figure = go.Figure(data=traces, layout=layout)\n\n return figure\n", "path": "optuna/visualization/intermediate_values.py"}, {"content": "import os\n\nimport plotly\n\nimport optuna\n\n\ndef f(x):\n return (x - 2) ** 2\n\n\ndef df(x):\n return 2 * x - 4\n\n\ndef objective(trial):\n lr = trial.suggest_loguniform(\"lr\", 1e-5, 1e-1)\n\n x = 3\n for step in range(128):\n y = f(x)\n\n trial.report(y, step=step)\n if trial.should_prune():\n raise optuna.exceptions.TrialPruned()\n\n gy = df(x)\n x -= gy * lr\n\n return y\n\n\ndef main():\n sampler = optuna.samplers.TPESampler(seed=10)\n study = optuna.create_study(sampler=sampler)\n study.optimize(objective, n_trials=16)\n\n fig = optuna.visualization.plot_intermediate_values(study)\n fig_html = plotly.offline.plot(fig, output_type=\"div\", include_plotlyjs=\"cdn\", auto_open=False)\n\n fig_dir = \"../plotly_figures\"\n os.makedirs(fig_dir, exist_ok=True)\n with open(os.path.join(fig_dir, \"plot_intermediate_values.html\"), \"w\") as f:\n f.write(fig_html)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "docs/source/scripts/plot_intermediate_values.py"}]}
| 1,525 | 671 |
gh_patches_debug_4254
|
rasdani/github-patches
|
git_diff
|
facebookresearch__Mephisto-594
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NameError: name 'Blueprint' is not defined
Hi!
I wanted to try the project from the "quickstart" but I hit
>NameError: name 'Blueprint' is not defined
After running these steps:
```bash
pip install -e .
mkdir ~/data
mephisto config core.main_data_directory ~/data
mephisto check
# OK
cd examples/simple_static_task
python static_test_script.py
```
Any idea? Seems to be related to Flask.
--- END ISSUE ---
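The traceback points at a plain missing import: `assert_task_args` in the static HTML blueprint delegates to `Blueprint.assert_task_args(...)`, but the module never brings `Blueprint` into scope, so the name lookup fails at runtime. The one-line remedy (matching the patch further down) is:

```python
# The blueprint module calls Blueprint.assert_task_args(...) without importing
# the name; adding the missing import resolves the NameError.
from mephisto.abstractions.blueprint import Blueprint
```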
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mephisto/abstractions/blueprints/static_html_task/static_html_blueprint.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 from mephisto.abstractions.blueprints.abstract.static_task.static_blueprint import (
8 StaticBlueprint,
9 StaticBlueprintArgs,
10 )
11 from dataclasses import dataclass, field
12 from omegaconf import MISSING, DictConfig
13 from mephisto.abstractions.blueprints.static_html_task.static_html_task_builder import (
14 StaticHTMLTaskBuilder,
15 )
16 from mephisto.operations.registry import register_mephisto_abstraction
17
18 import os
19 import time
20 import csv
21 import types
22
23 from typing import ClassVar, List, Type, Any, Dict, Iterable, Optional, TYPE_CHECKING
24
25 if TYPE_CHECKING:
26 from mephisto.data_model.task_run import TaskRun
27 from mephisto.data_model.blueprint import (
28 AgentState,
29 TaskRunner,
30 TaskBuilder,
31 SharedTaskState,
32 )
33 from mephisto.data_model.assignment import Assignment
34 from mephisto.data_model.agent import OnboardingAgent
35 from mephisto.data_model.worker import Worker
36
37 BLUEPRINT_TYPE = "static_task"
38
39
40 @dataclass
41 class StaticHTMLBlueprintArgs(StaticBlueprintArgs):
42 """
43 Adds required options for StaticBlueprints.
44 task_source points to the file intending to be deployed for this task
45 data_csv has the data to be deployed for this task.
46 """
47
48 _blueprint_type: str = BLUEPRINT_TYPE
49 _group: str = field(
50 default="StaticBlueprint",
51 metadata={
52 "help": (
53 "Tasks launched from static blueprints need a "
54 "source html file to display to workers, as well as a csv "
55 "containing values that will be inserted into templates in "
56 "the html. "
57 )
58 },
59 )
60 task_source: str = field(
61 default=MISSING,
62 metadata={
63 "help": "Path to source HTML file for the task being run",
64 "required": True,
65 },
66 )
67 preview_source: Optional[str] = field(
68 default=MISSING,
69 metadata={"help": "Optional path to source HTML file to preview the task"},
70 )
71 onboarding_source: Optional[str] = field(
72 default=MISSING,
73 metadata={"help": "Optional path to source HTML file to onboarding the task"},
74 )
75
76
77 @register_mephisto_abstraction()
78 class StaticHTMLBlueprint(StaticBlueprint):
79 """Blueprint for a task that runs off of a built react javascript bundle"""
80
81 TaskBuilderClass = StaticHTMLTaskBuilder
82 ArgsClass = StaticHTMLBlueprintArgs
83 BLUEPRINT_TYPE = BLUEPRINT_TYPE
84
85 def __init__(
86 self, task_run: "TaskRun", args: "DictConfig", shared_state: "SharedTaskState"
87 ):
88 super().__init__(task_run, args, shared_state)
89 self.html_file = os.path.expanduser(args.blueprint.task_source)
90 if not os.path.exists(self.html_file):
91 raise FileNotFoundError(
92 f"Specified html file {self.html_file} was not found from {os.getcwd()}"
93 )
94
95 self.onboarding_html_file = args.blueprint.get("onboarding_source", None)
96 if self.onboarding_html_file is not None:
97 self.onboarding_html_file = os.path.expanduser(self.onboarding_html_file)
98 if not os.path.exists(self.onboarding_html_file):
99 raise FileNotFoundError(
100 f"Specified onboarding html file {self.onboarding_html_file} was not found from {os.getcwd()}"
101 )
102
103 task_file_name = os.path.basename(self.html_file)
104 for entry in self._initialization_data_dicts:
105 entry["html"] = task_file_name
106
107 @classmethod
108 def assert_task_args(cls, args: DictConfig, shared_state: "SharedTaskState"):
109 """Ensure that the data can be properly loaded"""
110 Blueprint.assert_task_args(args, shared_state)
111 blue_args = args.blueprint
112 if isinstance(shared_state.static_task_data, types.GeneratorType):
113 raise AssertionError("You can't launch an HTML static task on a generator")
114 if blue_args.get("data_csv", None) is not None:
115 csv_file = os.path.expanduser(blue_args.data_csv)
116 assert os.path.exists(
117 csv_file
118 ), f"Provided csv file {csv_file} doesn't exist"
119 elif blue_args.get("data_json", None) is not None:
120 json_file = os.path.expanduser(blue_args.data_json)
121 assert os.path.exists(
122 json_file
123 ), f"Provided JSON file {json_file} doesn't exist"
124 elif blue_args.get("data_jsonl", None) is not None:
125 jsonl_file = os.path.expanduser(blue_args.data_jsonl)
126 assert os.path.exists(
127 jsonl_file
128 ), f"Provided JSON-L file {jsonl_file} doesn't exist"
129 elif shared_state.static_task_data is not None:
130 assert (
131 len(shared_state.static_task_data) > 0
132 ), "Length of data dict provided was 0"
133 else:
134 raise AssertionError(
135 "Must provide one of a data csv, json, json-L, or a list of tasks"
136 )
137
138 if blue_args.get("onboarding_qualification", None) is not None:
139 assert blue_args.get("onboarding_source", None) is not None, (
140 "Must use onboarding html with an onboarding qualification to "
141 "use onboarding."
142 )
143 assert shared_state.validate_onboarding is not None, (
144 "Must use an onboarding validation function to use onboarding "
145 "with static tasks."
146 )
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mephisto/abstractions/blueprints/static_html_task/static_html_blueprint.py b/mephisto/abstractions/blueprints/static_html_task/static_html_blueprint.py
--- a/mephisto/abstractions/blueprints/static_html_task/static_html_blueprint.py
+++ b/mephisto/abstractions/blueprints/static_html_task/static_html_blueprint.py
@@ -10,6 +10,7 @@
)
from dataclasses import dataclass, field
from omegaconf import MISSING, DictConfig
+from mephisto.abstractions.blueprint import Blueprint
from mephisto.abstractions.blueprints.static_html_task.static_html_task_builder import (
StaticHTMLTaskBuilder,
)
|
{"golden_diff": "diff --git a/mephisto/abstractions/blueprints/static_html_task/static_html_blueprint.py b/mephisto/abstractions/blueprints/static_html_task/static_html_blueprint.py\n--- a/mephisto/abstractions/blueprints/static_html_task/static_html_blueprint.py\n+++ b/mephisto/abstractions/blueprints/static_html_task/static_html_blueprint.py\n@@ -10,6 +10,7 @@\n )\n from dataclasses import dataclass, field\n from omegaconf import MISSING, DictConfig\n+from mephisto.abstractions.blueprint import Blueprint\n from mephisto.abstractions.blueprints.static_html_task.static_html_task_builder import (\n StaticHTMLTaskBuilder,\n )\n", "issue": "NameError: name 'Blueprint' is not defined\nHi!\r\n\r\nI wanted to try the project from the \"quickstart\" but I hit\r\n\r\n>NameError: name 'Blueprint' is not defined\r\n\r\nAfter running these steps:\r\n\r\n```bash\r\npip install -e .\r\nmkdir ~/data\r\nmephisto config core.main_data_directory ~/data\r\nmephisto check\r\n# OK\r\ncd examples/simple_static_task\r\npython static_test_script.py\r\n```\r\n\r\nAny idea? Seems to be related to Flask.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom mephisto.abstractions.blueprints.abstract.static_task.static_blueprint import (\n StaticBlueprint,\n StaticBlueprintArgs,\n)\nfrom dataclasses import dataclass, field\nfrom omegaconf import MISSING, DictConfig\nfrom mephisto.abstractions.blueprints.static_html_task.static_html_task_builder import (\n StaticHTMLTaskBuilder,\n)\nfrom mephisto.operations.registry import register_mephisto_abstraction\n\nimport os\nimport time\nimport csv\nimport types\n\nfrom typing import ClassVar, List, Type, Any, Dict, Iterable, Optional, TYPE_CHECKING\n\nif TYPE_CHECKING:\n from mephisto.data_model.task_run import TaskRun\n from mephisto.data_model.blueprint import (\n AgentState,\n TaskRunner,\n TaskBuilder,\n SharedTaskState,\n )\n from mephisto.data_model.assignment import Assignment\n from mephisto.data_model.agent import OnboardingAgent\n from mephisto.data_model.worker import Worker\n\nBLUEPRINT_TYPE = \"static_task\"\n\n\n@dataclass\nclass StaticHTMLBlueprintArgs(StaticBlueprintArgs):\n \"\"\"\n Adds required options for StaticBlueprints.\n task_source points to the file intending to be deployed for this task\n data_csv has the data to be deployed for this task.\n \"\"\"\n\n _blueprint_type: str = BLUEPRINT_TYPE\n _group: str = field(\n default=\"StaticBlueprint\",\n metadata={\n \"help\": (\n \"Tasks launched from static blueprints need a \"\n \"source html file to display to workers, as well as a csv \"\n \"containing values that will be inserted into templates in \"\n \"the html. 
\"\n )\n },\n )\n task_source: str = field(\n default=MISSING,\n metadata={\n \"help\": \"Path to source HTML file for the task being run\",\n \"required\": True,\n },\n )\n preview_source: Optional[str] = field(\n default=MISSING,\n metadata={\"help\": \"Optional path to source HTML file to preview the task\"},\n )\n onboarding_source: Optional[str] = field(\n default=MISSING,\n metadata={\"help\": \"Optional path to source HTML file to onboarding the task\"},\n )\n\n\n@register_mephisto_abstraction()\nclass StaticHTMLBlueprint(StaticBlueprint):\n \"\"\"Blueprint for a task that runs off of a built react javascript bundle\"\"\"\n\n TaskBuilderClass = StaticHTMLTaskBuilder\n ArgsClass = StaticHTMLBlueprintArgs\n BLUEPRINT_TYPE = BLUEPRINT_TYPE\n\n def __init__(\n self, task_run: \"TaskRun\", args: \"DictConfig\", shared_state: \"SharedTaskState\"\n ):\n super().__init__(task_run, args, shared_state)\n self.html_file = os.path.expanduser(args.blueprint.task_source)\n if not os.path.exists(self.html_file):\n raise FileNotFoundError(\n f\"Specified html file {self.html_file} was not found from {os.getcwd()}\"\n )\n\n self.onboarding_html_file = args.blueprint.get(\"onboarding_source\", None)\n if self.onboarding_html_file is not None:\n self.onboarding_html_file = os.path.expanduser(self.onboarding_html_file)\n if not os.path.exists(self.onboarding_html_file):\n raise FileNotFoundError(\n f\"Specified onboarding html file {self.onboarding_html_file} was not found from {os.getcwd()}\"\n )\n\n task_file_name = os.path.basename(self.html_file)\n for entry in self._initialization_data_dicts:\n entry[\"html\"] = task_file_name\n\n @classmethod\n def assert_task_args(cls, args: DictConfig, shared_state: \"SharedTaskState\"):\n \"\"\"Ensure that the data can be properly loaded\"\"\"\n Blueprint.assert_task_args(args, shared_state)\n blue_args = args.blueprint\n if isinstance(shared_state.static_task_data, types.GeneratorType):\n raise AssertionError(\"You can't launch an HTML static task on a generator\")\n if blue_args.get(\"data_csv\", None) is not None:\n csv_file = os.path.expanduser(blue_args.data_csv)\n assert os.path.exists(\n csv_file\n ), f\"Provided csv file {csv_file} doesn't exist\"\n elif blue_args.get(\"data_json\", None) is not None:\n json_file = os.path.expanduser(blue_args.data_json)\n assert os.path.exists(\n json_file\n ), f\"Provided JSON file {json_file} doesn't exist\"\n elif blue_args.get(\"data_jsonl\", None) is not None:\n jsonl_file = os.path.expanduser(blue_args.data_jsonl)\n assert os.path.exists(\n jsonl_file\n ), f\"Provided JSON-L file {jsonl_file} doesn't exist\"\n elif shared_state.static_task_data is not None:\n assert (\n len(shared_state.static_task_data) > 0\n ), \"Length of data dict provided was 0\"\n else:\n raise AssertionError(\n \"Must provide one of a data csv, json, json-L, or a list of tasks\"\n )\n\n if blue_args.get(\"onboarding_qualification\", None) is not None:\n assert blue_args.get(\"onboarding_source\", None) is not None, (\n \"Must use onboarding html with an onboarding qualification to \"\n \"use onboarding.\"\n )\n assert shared_state.validate_onboarding is not None, (\n \"Must use an onboarding validation function to use onboarding \"\n \"with static tasks.\"\n )\n", "path": "mephisto/abstractions/blueprints/static_html_task/static_html_blueprint.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom mephisto.abstractions.blueprints.abstract.static_task.static_blueprint import (\n StaticBlueprint,\n StaticBlueprintArgs,\n)\nfrom dataclasses import dataclass, field\nfrom omegaconf import MISSING, DictConfig\nfrom mephisto.abstractions.blueprint import Blueprint\nfrom mephisto.abstractions.blueprints.static_html_task.static_html_task_builder import (\n StaticHTMLTaskBuilder,\n)\nfrom mephisto.operations.registry import register_mephisto_abstraction\n\nimport os\nimport time\nimport csv\nimport types\n\nfrom typing import ClassVar, List, Type, Any, Dict, Iterable, Optional, TYPE_CHECKING\n\nif TYPE_CHECKING:\n from mephisto.data_model.task_run import TaskRun\n from mephisto.data_model.blueprint import (\n AgentState,\n TaskRunner,\n TaskBuilder,\n SharedTaskState,\n )\n from mephisto.data_model.assignment import Assignment\n from mephisto.data_model.agent import OnboardingAgent\n from mephisto.data_model.worker import Worker\n\nBLUEPRINT_TYPE = \"static_task\"\n\n\n@dataclass\nclass StaticHTMLBlueprintArgs(StaticBlueprintArgs):\n \"\"\"\n Adds required options for StaticBlueprints.\n task_source points to the file intending to be deployed for this task\n data_csv has the data to be deployed for this task.\n \"\"\"\n\n _blueprint_type: str = BLUEPRINT_TYPE\n _group: str = field(\n default=\"StaticBlueprint\",\n metadata={\n \"help\": (\n \"Tasks launched from static blueprints need a \"\n \"source html file to display to workers, as well as a csv \"\n \"containing values that will be inserted into templates in \"\n \"the html. \"\n )\n },\n )\n task_source: str = field(\n default=MISSING,\n metadata={\n \"help\": \"Path to source HTML file for the task being run\",\n \"required\": True,\n },\n )\n preview_source: Optional[str] = field(\n default=MISSING,\n metadata={\"help\": \"Optional path to source HTML file to preview the task\"},\n )\n onboarding_source: Optional[str] = field(\n default=MISSING,\n metadata={\"help\": \"Optional path to source HTML file to onboarding the task\"},\n )\n\n\n@register_mephisto_abstraction()\nclass StaticHTMLBlueprint(StaticBlueprint):\n \"\"\"Blueprint for a task that runs off of a built react javascript bundle\"\"\"\n\n TaskBuilderClass = StaticHTMLTaskBuilder\n ArgsClass = StaticHTMLBlueprintArgs\n BLUEPRINT_TYPE = BLUEPRINT_TYPE\n\n def __init__(\n self, task_run: \"TaskRun\", args: \"DictConfig\", shared_state: \"SharedTaskState\"\n ):\n super().__init__(task_run, args, shared_state)\n self.html_file = os.path.expanduser(args.blueprint.task_source)\n if not os.path.exists(self.html_file):\n raise FileNotFoundError(\n f\"Specified html file {self.html_file} was not found from {os.getcwd()}\"\n )\n\n self.onboarding_html_file = args.blueprint.get(\"onboarding_source\", None)\n if self.onboarding_html_file is not None:\n self.onboarding_html_file = os.path.expanduser(self.onboarding_html_file)\n if not os.path.exists(self.onboarding_html_file):\n raise FileNotFoundError(\n f\"Specified onboarding html file {self.onboarding_html_file} was not found from {os.getcwd()}\"\n )\n\n task_file_name = os.path.basename(self.html_file)\n for entry in self._initialization_data_dicts:\n entry[\"html\"] = task_file_name\n\n @classmethod\n def assert_task_args(cls, args: DictConfig, shared_state: \"SharedTaskState\"):\n \"\"\"Ensure that the data can be properly loaded\"\"\"\n Blueprint.assert_task_args(args, 
shared_state)\n blue_args = args.blueprint\n if isinstance(shared_state.static_task_data, types.GeneratorType):\n raise AssertionError(\"You can't launch an HTML static task on a generator\")\n if blue_args.get(\"data_csv\", None) is not None:\n csv_file = os.path.expanduser(blue_args.data_csv)\n assert os.path.exists(\n csv_file\n ), f\"Provided csv file {csv_file} doesn't exist\"\n elif blue_args.get(\"data_json\", None) is not None:\n json_file = os.path.expanduser(blue_args.data_json)\n assert os.path.exists(\n json_file\n ), f\"Provided JSON file {json_file} doesn't exist\"\n elif blue_args.get(\"data_jsonl\", None) is not None:\n jsonl_file = os.path.expanduser(blue_args.data_jsonl)\n assert os.path.exists(\n jsonl_file\n ), f\"Provided JSON-L file {jsonl_file} doesn't exist\"\n elif shared_state.static_task_data is not None:\n assert (\n len(shared_state.static_task_data) > 0\n ), \"Length of data dict provided was 0\"\n else:\n raise AssertionError(\n \"Must provide one of a data csv, json, json-L, or a list of tasks\"\n )\n\n if blue_args.get(\"onboarding_qualification\", None) is not None:\n assert blue_args.get(\"onboarding_source\", None) is not None, (\n \"Must use onboarding html with an onboarding qualification to \"\n \"use onboarding.\"\n )\n assert shared_state.validate_onboarding is not None, (\n \"Must use an onboarding validation function to use onboarding \"\n \"with static tasks.\"\n )\n", "path": "mephisto/abstractions/blueprints/static_html_task/static_html_blueprint.py"}]}
| 1,932 | 145 |
gh_patches_debug_41489
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-4333
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WidgetBox doesn't support padding
### The issue:
The WidgetBox widget doesn't support the padding attribute.
Right now I can use padding inside the children widgets to space them.
But the `text_closed` and `text_open` can't be padded.
It would be nice to have a `spacing` and a `padding` attribute:
the `spacing` attribute could be used to space the widgets inside the WidgetBox, and the `padding` attribute could be used for padding `text_open` and `text_closed`.
### Required:
- [X] I have searched past issues to see if this bug has already been reported.
--- END ISSUE ---
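What is being asked for, sketched as a hypothetical config (neither `padding` nor `spacing` exists on WidgetBox in the code below; today only the child widgets' own `padding` can be used):

```python
from libqtile import widget

# Hypothetical configuration illustrating the request: `padding` around the
# open/close label and `spacing` between the boxed widgets. Neither option
# exists on WidgetBox yet; the current workaround is padding on the children.
widget.WidgetBox(
    text_closed="[<]",
    text_open="[>]",
    padding=8,    # requested: pad text_open / text_closed
    spacing=4,    # requested: space widgets inside the box
    widgets=[
        widget.TextBox(text="This widget is in the box", padding=6),  # today's workaround
        widget.Memory(),
    ],
)
```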
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/widget/widgetbox.py`
Content:
```
1 # Copyright (c) 2020 elParaguayo
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
19 # SOFTWARE.
20 from __future__ import annotations
21
22 from typing import TYPE_CHECKING
23
24 from libqtile import bar
25 from libqtile.command.base import expose_command
26 from libqtile.log_utils import logger
27 from libqtile.widget import Systray, base
28
29 if TYPE_CHECKING:
30 from typing import Any
31
32
33 class WidgetBox(base._Widget):
34 """A widget to declutter your bar.
35
36 WidgetBox is a widget that hides widgets by default but shows them when
37 the box is opened.
38
39 Widgets that are hidden will still update etc. as if they were on the main
40 bar.
41
42 Button clicks are passed to widgets when they are visible so callbacks will
43 work.
44
45 Widgets in the box also remain accessible via command interfaces.
46
47 Widgets can only be added to the box via the configuration file. The widget
48 is configured by adding widgets to the "widgets" parameter as follows::
49
50 widget.WidgetBox(widgets=[
51 widget.TextBox(text="This widget is in the box"),
52 widget.Memory()
53 ]
54 ),
55 """
56
57 orientations = base.ORIENTATION_HORIZONTAL
58 defaults: list[tuple[str, Any, str]] = [
59 ("font", "sans", "Text font"),
60 ("fontsize", None, "Font pixel size. Calculated if None."),
61 ("fontshadow", None, "font shadow color, default is None(no shadow)"),
62 ("foreground", "#ffffff", "Foreground colour."),
63 (
64 "close_button_location",
65 "left",
66 "Location of close button when box open ('left' or 'right')",
67 ),
68 ("text_closed", "[<]", "Text when box is closed"),
69 ("text_open", "[>]", "Text when box is open"),
70 ("widgets", list(), "A list of widgets to include in the box"),
71 ("start_opened", False, "Spawn the box opened"),
72 ]
73
74 def __init__(self, _widgets: list[base._Widget] | None = None, **config):
75 base._Widget.__init__(self, bar.CALCULATED, **config)
76 self.add_defaults(WidgetBox.defaults)
77 self.box_is_open = False
78 self.add_callbacks({"Button1": self.toggle})
79
80 if _widgets:
81 logger.warning(
82 "The use of a positional argument in WidgetBox is deprecated. "
83 "Please update your config to use widgets=[...]."
84 )
85 self.widgets = _widgets
86
87 self.close_button_location: str
88 if self.close_button_location not in ["left", "right"]:
89 val = self.close_button_location
90 logger.warning("Invalid value for 'close_button_location': %s", val)
91 self.close_button_location = "left"
92
93 def _configure(self, qtile, bar):
94 base._Widget._configure(self, qtile, bar)
95
96 self.layout = self.drawer.textlayout(
97 self.text_open if self.box_is_open else self.text_closed,
98 self.foreground,
99 self.font,
100 self.fontsize,
101 self.fontshadow,
102 markup=False,
103 )
104
105 if self.configured:
106 return
107
108 for idx, w in enumerate(self.widgets):
109 if w.configured:
110 w = w.create_mirror()
111 self.widgets[idx] = w
112 self.qtile.register_widget(w)
113 w._configure(self.qtile, self.bar)
114 w.offsety = self.bar.border_width[0]
115
116 # In case the widget is mirrored, we need to draw it once so the
117 # mirror can copy the surface but draw it off screen
118 w.offsetx = self.bar.width
119 self.qtile.call_soon(w.draw)
120
121 # Setting the configured flag for widgets was moved to Bar._configure so we need to
122 # set it here.
123 w.configured = True
124
125 # Disable drawing of the widget's contents
126 for w in self.widgets:
127 w.drawer.disable()
128
129 # We're being cautious: `box_is_open` should never be True here...
130 if self.start_opened and not self.box_is_open:
131 self.qtile.call_soon(self.toggle)
132
133 def calculate_length(self):
134 return self.layout.width
135
136 def set_box_label(self):
137 self.layout.text = self.text_open if self.box_is_open else self.text_closed
138
139 def toggle_widgets(self):
140 for widget in self.widgets:
141 try:
142 self.bar.widgets.remove(widget)
143 # Override drawer.drawer with a no-op
144 widget.drawer.disable()
145
146 # Systray widget needs some additional steps to hide as the icons
147 # are separate _Window instances.
148 # Systray unhides icons when it draws so we only need to hide them.
149 if isinstance(widget, Systray):
150 for icon in widget.tray_icons:
151 icon.hide()
152
153 except ValueError:
154 continue
155
156 index = self.bar.widgets.index(self)
157
158 if self.close_button_location == "left":
159 index += 1
160
161 if self.box_is_open:
162 # Need to reverse list as widgets get added in front of eachother.
163 for widget in self.widgets[::-1]:
164 # enable drawing again
165 widget.drawer.enable()
166 self.bar.widgets.insert(index, widget)
167
168 def draw(self):
169 self.drawer.clear(self.background or self.bar.background)
170
171 self.layout.draw(0, int(self.bar.height / 2.0 - self.layout.height / 2.0) + 1)
172
173 self.drawer.draw(offsetx=self.offsetx, offsety=self.offsety, width=self.width)
174
175 @expose_command()
176 def toggle(self):
177 """Toggle box state"""
178 self.box_is_open = not self.box_is_open
179 self.toggle_widgets()
180 self.set_box_label()
181 self.bar.draw()
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/libqtile/widget/widgetbox.py b/libqtile/widget/widgetbox.py
--- a/libqtile/widget/widgetbox.py
+++ b/libqtile/widget/widgetbox.py
@@ -21,16 +21,16 @@
from typing import TYPE_CHECKING
-from libqtile import bar
from libqtile.command.base import expose_command
from libqtile.log_utils import logger
+from libqtile.pangocffi import markup_escape_text
from libqtile.widget import Systray, base
if TYPE_CHECKING:
from typing import Any
-class WidgetBox(base._Widget):
+class WidgetBox(base._TextBox):
"""A widget to declutter your bar.
WidgetBox is a widget that hides widgets by default but shows them when
@@ -56,10 +56,6 @@
orientations = base.ORIENTATION_HORIZONTAL
defaults: list[tuple[str, Any, str]] = [
- ("font", "sans", "Text font"),
- ("fontsize", None, "Font pixel size. Calculated if None."),
- ("fontshadow", None, "font shadow color, default is None(no shadow)"),
- ("foreground", "#ffffff", "Foreground colour."),
(
"close_button_location",
"left",
@@ -72,7 +68,7 @@
]
def __init__(self, _widgets: list[base._Widget] | None = None, **config):
- base._Widget.__init__(self, bar.CALCULATED, **config)
+ base._TextBox.__init__(self, **config)
self.add_defaults(WidgetBox.defaults)
self.box_is_open = False
self.add_callbacks({"Button1": self.toggle})
@@ -91,16 +87,9 @@
self.close_button_location = "left"
def _configure(self, qtile, bar):
- base._Widget._configure(self, qtile, bar)
+ base._TextBox._configure(self, qtile, bar)
- self.layout = self.drawer.textlayout(
- self.text_open if self.box_is_open else self.text_closed,
- self.foreground,
- self.font,
- self.fontsize,
- self.fontshadow,
- markup=False,
- )
+ self.text = markup_escape_text(self.text_open if self.box_is_open else self.text_closed)
if self.configured:
return
@@ -130,11 +119,8 @@
if self.start_opened and not self.box_is_open:
self.qtile.call_soon(self.toggle)
- def calculate_length(self):
- return self.layout.width
-
def set_box_label(self):
- self.layout.text = self.text_open if self.box_is_open else self.text_closed
+ self.text = markup_escape_text(self.text_open if self.box_is_open else self.text_closed)
def toggle_widgets(self):
for widget in self.widgets:
@@ -165,13 +151,6 @@
widget.drawer.enable()
self.bar.widgets.insert(index, widget)
- def draw(self):
- self.drawer.clear(self.background or self.bar.background)
-
- self.layout.draw(0, int(self.bar.height / 2.0 - self.layout.height / 2.0) + 1)
-
- self.drawer.draw(offsetx=self.offsetx, offsety=self.offsety, width=self.width)
-
@expose_command()
def toggle(self):
"""Toggle box state"""
|
{"golden_diff": "diff --git a/libqtile/widget/widgetbox.py b/libqtile/widget/widgetbox.py\n--- a/libqtile/widget/widgetbox.py\n+++ b/libqtile/widget/widgetbox.py\n@@ -21,16 +21,16 @@\n \n from typing import TYPE_CHECKING\n \n-from libqtile import bar\n from libqtile.command.base import expose_command\n from libqtile.log_utils import logger\n+from libqtile.pangocffi import markup_escape_text\n from libqtile.widget import Systray, base\n \n if TYPE_CHECKING:\n from typing import Any\n \n \n-class WidgetBox(base._Widget):\n+class WidgetBox(base._TextBox):\n \"\"\"A widget to declutter your bar.\n \n WidgetBox is a widget that hides widgets by default but shows them when\n@@ -56,10 +56,6 @@\n \n orientations = base.ORIENTATION_HORIZONTAL\n defaults: list[tuple[str, Any, str]] = [\n- (\"font\", \"sans\", \"Text font\"),\n- (\"fontsize\", None, \"Font pixel size. Calculated if None.\"),\n- (\"fontshadow\", None, \"font shadow color, default is None(no shadow)\"),\n- (\"foreground\", \"#ffffff\", \"Foreground colour.\"),\n (\n \"close_button_location\",\n \"left\",\n@@ -72,7 +68,7 @@\n ]\n \n def __init__(self, _widgets: list[base._Widget] | None = None, **config):\n- base._Widget.__init__(self, bar.CALCULATED, **config)\n+ base._TextBox.__init__(self, **config)\n self.add_defaults(WidgetBox.defaults)\n self.box_is_open = False\n self.add_callbacks({\"Button1\": self.toggle})\n@@ -91,16 +87,9 @@\n self.close_button_location = \"left\"\n \n def _configure(self, qtile, bar):\n- base._Widget._configure(self, qtile, bar)\n+ base._TextBox._configure(self, qtile, bar)\n \n- self.layout = self.drawer.textlayout(\n- self.text_open if self.box_is_open else self.text_closed,\n- self.foreground,\n- self.font,\n- self.fontsize,\n- self.fontshadow,\n- markup=False,\n- )\n+ self.text = markup_escape_text(self.text_open if self.box_is_open else self.text_closed)\n \n if self.configured:\n return\n@@ -130,11 +119,8 @@\n if self.start_opened and not self.box_is_open:\n self.qtile.call_soon(self.toggle)\n \n- def calculate_length(self):\n- return self.layout.width\n-\n def set_box_label(self):\n- self.layout.text = self.text_open if self.box_is_open else self.text_closed\n+ self.text = markup_escape_text(self.text_open if self.box_is_open else self.text_closed)\n \n def toggle_widgets(self):\n for widget in self.widgets:\n@@ -165,13 +151,6 @@\n widget.drawer.enable()\n self.bar.widgets.insert(index, widget)\n \n- def draw(self):\n- self.drawer.clear(self.background or self.bar.background)\n-\n- self.layout.draw(0, int(self.bar.height / 2.0 - self.layout.height / 2.0) + 1)\n-\n- self.drawer.draw(offsetx=self.offsetx, offsety=self.offsety, width=self.width)\n-\n @expose_command()\n def toggle(self):\n \"\"\"Toggle box state\"\"\"\n", "issue": "WidgetBox doesn't support padding\n### The issue:\n\nThe WidgetBox widget doesn't support the padding attribute. 
\r\n\r\nRight Now I can use padding inside children widgets to space them.\r\nBut the `text_closed` and `text_open`can't be padded\r\n\r\nIt would be nice to have a `spacing` and a `padding` attribute\r\nthe spacing attribute could be used to space the widgets inside the widgetBox and padding attribute could be used for padding `text_open` and `text_closed`\n\n### Required:\n\n- [X] I have searched past issues to see if this bug has already been reported.\n", "before_files": [{"content": "# Copyright (c) 2020 elParaguayo\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom libqtile import bar\nfrom libqtile.command.base import expose_command\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import Systray, base\n\nif TYPE_CHECKING:\n from typing import Any\n\n\nclass WidgetBox(base._Widget):\n \"\"\"A widget to declutter your bar.\n\n WidgetBox is a widget that hides widgets by default but shows them when\n the box is opened.\n\n Widgets that are hidden will still update etc. as if they were on the main\n bar.\n\n Button clicks are passed to widgets when they are visible so callbacks will\n work.\n\n Widgets in the box also remain accessible via command interfaces.\n\n Widgets can only be added to the box via the configuration file. The widget\n is configured by adding widgets to the \"widgets\" parameter as follows::\n\n widget.WidgetBox(widgets=[\n widget.TextBox(text=\"This widget is in the box\"),\n widget.Memory()\n ]\n ),\n \"\"\"\n\n orientations = base.ORIENTATION_HORIZONTAL\n defaults: list[tuple[str, Any, str]] = [\n (\"font\", \"sans\", \"Text font\"),\n (\"fontsize\", None, \"Font pixel size. 
Calculated if None.\"),\n (\"fontshadow\", None, \"font shadow color, default is None(no shadow)\"),\n (\"foreground\", \"#ffffff\", \"Foreground colour.\"),\n (\n \"close_button_location\",\n \"left\",\n \"Location of close button when box open ('left' or 'right')\",\n ),\n (\"text_closed\", \"[<]\", \"Text when box is closed\"),\n (\"text_open\", \"[>]\", \"Text when box is open\"),\n (\"widgets\", list(), \"A list of widgets to include in the box\"),\n (\"start_opened\", False, \"Spawn the box opened\"),\n ]\n\n def __init__(self, _widgets: list[base._Widget] | None = None, **config):\n base._Widget.__init__(self, bar.CALCULATED, **config)\n self.add_defaults(WidgetBox.defaults)\n self.box_is_open = False\n self.add_callbacks({\"Button1\": self.toggle})\n\n if _widgets:\n logger.warning(\n \"The use of a positional argument in WidgetBox is deprecated. \"\n \"Please update your config to use widgets=[...].\"\n )\n self.widgets = _widgets\n\n self.close_button_location: str\n if self.close_button_location not in [\"left\", \"right\"]:\n val = self.close_button_location\n logger.warning(\"Invalid value for 'close_button_location': %s\", val)\n self.close_button_location = \"left\"\n\n def _configure(self, qtile, bar):\n base._Widget._configure(self, qtile, bar)\n\n self.layout = self.drawer.textlayout(\n self.text_open if self.box_is_open else self.text_closed,\n self.foreground,\n self.font,\n self.fontsize,\n self.fontshadow,\n markup=False,\n )\n\n if self.configured:\n return\n\n for idx, w in enumerate(self.widgets):\n if w.configured:\n w = w.create_mirror()\n self.widgets[idx] = w\n self.qtile.register_widget(w)\n w._configure(self.qtile, self.bar)\n w.offsety = self.bar.border_width[0]\n\n # In case the widget is mirrored, we need to draw it once so the\n # mirror can copy the surface but draw it off screen\n w.offsetx = self.bar.width\n self.qtile.call_soon(w.draw)\n\n # Setting the configured flag for widgets was moved to Bar._configure so we need to\n # set it here.\n w.configured = True\n\n # Disable drawing of the widget's contents\n for w in self.widgets:\n w.drawer.disable()\n\n # We're being cautious: `box_is_open` should never be True here...\n if self.start_opened and not self.box_is_open:\n self.qtile.call_soon(self.toggle)\n\n def calculate_length(self):\n return self.layout.width\n\n def set_box_label(self):\n self.layout.text = self.text_open if self.box_is_open else self.text_closed\n\n def toggle_widgets(self):\n for widget in self.widgets:\n try:\n self.bar.widgets.remove(widget)\n # Override drawer.drawer with a no-op\n widget.drawer.disable()\n\n # Systray widget needs some additional steps to hide as the icons\n # are separate _Window instances.\n # Systray unhides icons when it draws so we only need to hide them.\n if isinstance(widget, Systray):\n for icon in widget.tray_icons:\n icon.hide()\n\n except ValueError:\n continue\n\n index = self.bar.widgets.index(self)\n\n if self.close_button_location == \"left\":\n index += 1\n\n if self.box_is_open:\n # Need to reverse list as widgets get added in front of eachother.\n for widget in self.widgets[::-1]:\n # enable drawing again\n widget.drawer.enable()\n self.bar.widgets.insert(index, widget)\n\n def draw(self):\n self.drawer.clear(self.background or self.bar.background)\n\n self.layout.draw(0, int(self.bar.height / 2.0 - self.layout.height / 2.0) + 1)\n\n self.drawer.draw(offsetx=self.offsetx, offsety=self.offsety, width=self.width)\n\n @expose_command()\n def toggle(self):\n \"\"\"Toggle box state\"\"\"\n 
self.box_is_open = not self.box_is_open\n self.toggle_widgets()\n self.set_box_label()\n self.bar.draw()\n", "path": "libqtile/widget/widgetbox.py"}], "after_files": [{"content": "# Copyright (c) 2020 elParaguayo\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom libqtile.command.base import expose_command\nfrom libqtile.log_utils import logger\nfrom libqtile.pangocffi import markup_escape_text\nfrom libqtile.widget import Systray, base\n\nif TYPE_CHECKING:\n from typing import Any\n\n\nclass WidgetBox(base._TextBox):\n \"\"\"A widget to declutter your bar.\n\n WidgetBox is a widget that hides widgets by default but shows them when\n the box is opened.\n\n Widgets that are hidden will still update etc. as if they were on the main\n bar.\n\n Button clicks are passed to widgets when they are visible so callbacks will\n work.\n\n Widgets in the box also remain accessible via command interfaces.\n\n Widgets can only be added to the box via the configuration file. The widget\n is configured by adding widgets to the \"widgets\" parameter as follows::\n\n widget.WidgetBox(widgets=[\n widget.TextBox(text=\"This widget is in the box\"),\n widget.Memory()\n ]\n ),\n \"\"\"\n\n orientations = base.ORIENTATION_HORIZONTAL\n defaults: list[tuple[str, Any, str]] = [\n (\n \"close_button_location\",\n \"left\",\n \"Location of close button when box open ('left' or 'right')\",\n ),\n (\"text_closed\", \"[<]\", \"Text when box is closed\"),\n (\"text_open\", \"[>]\", \"Text when box is open\"),\n (\"widgets\", list(), \"A list of widgets to include in the box\"),\n (\"start_opened\", False, \"Spawn the box opened\"),\n ]\n\n def __init__(self, _widgets: list[base._Widget] | None = None, **config):\n base._TextBox.__init__(self, **config)\n self.add_defaults(WidgetBox.defaults)\n self.box_is_open = False\n self.add_callbacks({\"Button1\": self.toggle})\n\n if _widgets:\n logger.warning(\n \"The use of a positional argument in WidgetBox is deprecated. 
\"\n \"Please update your config to use widgets=[...].\"\n )\n self.widgets = _widgets\n\n self.close_button_location: str\n if self.close_button_location not in [\"left\", \"right\"]:\n val = self.close_button_location\n logger.warning(\"Invalid value for 'close_button_location': %s\", val)\n self.close_button_location = \"left\"\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n\n self.text = markup_escape_text(self.text_open if self.box_is_open else self.text_closed)\n\n if self.configured:\n return\n\n for idx, w in enumerate(self.widgets):\n if w.configured:\n w = w.create_mirror()\n self.widgets[idx] = w\n self.qtile.register_widget(w)\n w._configure(self.qtile, self.bar)\n w.offsety = self.bar.border_width[0]\n\n # In case the widget is mirrored, we need to draw it once so the\n # mirror can copy the surface but draw it off screen\n w.offsetx = self.bar.width\n self.qtile.call_soon(w.draw)\n\n # Setting the configured flag for widgets was moved to Bar._configure so we need to\n # set it here.\n w.configured = True\n\n # Disable drawing of the widget's contents\n for w in self.widgets:\n w.drawer.disable()\n\n # We're being cautious: `box_is_open` should never be True here...\n if self.start_opened and not self.box_is_open:\n self.qtile.call_soon(self.toggle)\n\n def set_box_label(self):\n self.text = markup_escape_text(self.text_open if self.box_is_open else self.text_closed)\n\n def toggle_widgets(self):\n for widget in self.widgets:\n try:\n self.bar.widgets.remove(widget)\n # Override drawer.drawer with a no-op\n widget.drawer.disable()\n\n # Systray widget needs some additional steps to hide as the icons\n # are separate _Window instances.\n # Systray unhides icons when it draws so we only need to hide them.\n if isinstance(widget, Systray):\n for icon in widget.tray_icons:\n icon.hide()\n\n except ValueError:\n continue\n\n index = self.bar.widgets.index(self)\n\n if self.close_button_location == \"left\":\n index += 1\n\n if self.box_is_open:\n # Need to reverse list as widgets get added in front of eachother.\n for widget in self.widgets[::-1]:\n # enable drawing again\n widget.drawer.enable()\n self.bar.widgets.insert(index, widget)\n\n @expose_command()\n def toggle(self):\n \"\"\"Toggle box state\"\"\"\n self.box_is_open = not self.box_is_open\n self.toggle_widgets()\n self.set_box_label()\n self.bar.draw()\n", "path": "libqtile/widget/widgetbox.py"}]}
| 2,274 | 762 |
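As context for the WidgetBox change above: once the widget derives from `base._TextBox`, it picks up the shared text-widget defaults, so options such as `padding` can be set on the box label itself. A minimal config sketch, assuming a qtile release that includes this patch and that `padding` is among the inherited `_TextBox` defaults:

```python
# Sketch of a qtile config fragment; the WidgetBox padding argument relies on the
# base._TextBox defaults that the patch above makes available on the box label.
from libqtile import bar, widget
from libqtile.config import Screen

screens = [
    Screen(
        top=bar.Bar(
            [
                widget.WidgetBox(
                    text_closed="[<]",
                    text_open="[>]",
                    padding=10,  # pads the open/close label itself
                    widgets=[
                        widget.TextBox(text="hidden by default", padding=5),
                        widget.Memory(padding=5),
                    ],
                ),
                widget.Clock(format="%H:%M"),
            ],
            24,
        )
    )
]
```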
gh_patches_debug_31256 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-3529
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ModuleNotFoundError: No module named 'tools'
I would like to test the result of training, so I ran the following:
(base) zhangshen@zhangshen-X550JX:~/mmdetection$ python tools/test.py configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth --out./result/result_100/pkl --eval bbox
but I got:
Traceback (most recent call last):
File "tools/test.py", line 9, in <module>
from tools.fuse_conv_bn import fuse_module
ModuleNotFoundError: No module named 'tools'
How can I solve this problem?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/fuse_conv_bn.py`
Content:
```
1 import argparse
2
3 import torch
4 import torch.nn as nn
5 from mmcv.runner import save_checkpoint
6
7 from mmdet.apis import init_detector
8
9
10 def fuse_conv_bn(conv, bn):
11 """During inference, the functionary of batch norm layers is turned off but
12 only the mean and var alone channels are used, which exposes the chance to
13 fuse it with the preceding conv layers to save computations and simplify
14 network structures."""
15 conv_w = conv.weight
16 conv_b = conv.bias if conv.bias is not None else torch.zeros_like(
17 bn.running_mean)
18
19 factor = bn.weight / torch.sqrt(bn.running_var + bn.eps)
20 conv.weight = nn.Parameter(conv_w *
21 factor.reshape([conv.out_channels, 1, 1, 1]))
22 conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias)
23 return conv
24
25
26 def fuse_module(m):
27 last_conv = None
28 last_conv_name = None
29
30 for name, child in m.named_children():
31 if isinstance(child, (nn.BatchNorm2d, nn.SyncBatchNorm)):
32 if last_conv is None: # only fuse BN that is after Conv
33 continue
34 fused_conv = fuse_conv_bn(last_conv, child)
35 m._modules[last_conv_name] = fused_conv
36 # To reduce changes, set BN as Identity instead of deleting it.
37 m._modules[name] = nn.Identity()
38 last_conv = None
39 elif isinstance(child, nn.Conv2d):
40 last_conv = child
41 last_conv_name = name
42 else:
43 fuse_module(child)
44 return m
45
46
47 def parse_args():
48 parser = argparse.ArgumentParser(
49 description='fuse Conv and BN layers in a model')
50 parser.add_argument('config', help='config file path')
51 parser.add_argument('checkpoint', help='checkpoint file path')
52 parser.add_argument('out', help='output path of the converted model')
53 args = parser.parse_args()
54 return args
55
56
57 def main():
58 args = parse_args()
59 # build the model from a config file and a checkpoint file
60 model = init_detector(args.config, args.checkpoint)
61 # fuse conv and bn layers of the model
62 fused_model = fuse_module(model)
63 save_checkpoint(fused_model, args.out)
64
65
66 if __name__ == '__main__':
67 main()
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tools/fuse_conv_bn.py b/tools/fuse_conv_bn.py
deleted file mode 100644
--- a/tools/fuse_conv_bn.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import argparse
-
-import torch
-import torch.nn as nn
-from mmcv.runner import save_checkpoint
-
-from mmdet.apis import init_detector
-
-
-def fuse_conv_bn(conv, bn):
- """During inference, the functionary of batch norm layers is turned off but
- only the mean and var alone channels are used, which exposes the chance to
- fuse it with the preceding conv layers to save computations and simplify
- network structures."""
- conv_w = conv.weight
- conv_b = conv.bias if conv.bias is not None else torch.zeros_like(
- bn.running_mean)
-
- factor = bn.weight / torch.sqrt(bn.running_var + bn.eps)
- conv.weight = nn.Parameter(conv_w *
- factor.reshape([conv.out_channels, 1, 1, 1]))
- conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias)
- return conv
-
-
-def fuse_module(m):
- last_conv = None
- last_conv_name = None
-
- for name, child in m.named_children():
- if isinstance(child, (nn.BatchNorm2d, nn.SyncBatchNorm)):
- if last_conv is None: # only fuse BN that is after Conv
- continue
- fused_conv = fuse_conv_bn(last_conv, child)
- m._modules[last_conv_name] = fused_conv
- # To reduce changes, set BN as Identity instead of deleting it.
- m._modules[name] = nn.Identity()
- last_conv = None
- elif isinstance(child, nn.Conv2d):
- last_conv = child
- last_conv_name = name
- else:
- fuse_module(child)
- return m
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description='fuse Conv and BN layers in a model')
- parser.add_argument('config', help='config file path')
- parser.add_argument('checkpoint', help='checkpoint file path')
- parser.add_argument('out', help='output path of the converted model')
- args = parser.parse_args()
- return args
-
-
-def main():
- args = parse_args()
- # build the model from a config file and a checkpoint file
- model = init_detector(args.config, args.checkpoint)
- # fuse conv and bn layers of the model
- fused_model = fuse_module(model)
- save_checkpoint(fused_model, args.out)
-
-
-if __name__ == '__main__':
- main()
|
{"golden_diff": "diff --git a/tools/fuse_conv_bn.py b/tools/fuse_conv_bn.py\ndeleted file mode 100644\n--- a/tools/fuse_conv_bn.py\n+++ /dev/null\n@@ -1,67 +0,0 @@\n-import argparse\n-\n-import torch\n-import torch.nn as nn\n-from mmcv.runner import save_checkpoint\n-\n-from mmdet.apis import init_detector\n-\n-\n-def fuse_conv_bn(conv, bn):\n- \"\"\"During inference, the functionary of batch norm layers is turned off but\n- only the mean and var alone channels are used, which exposes the chance to\n- fuse it with the preceding conv layers to save computations and simplify\n- network structures.\"\"\"\n- conv_w = conv.weight\n- conv_b = conv.bias if conv.bias is not None else torch.zeros_like(\n- bn.running_mean)\n-\n- factor = bn.weight / torch.sqrt(bn.running_var + bn.eps)\n- conv.weight = nn.Parameter(conv_w *\n- factor.reshape([conv.out_channels, 1, 1, 1]))\n- conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias)\n- return conv\n-\n-\n-def fuse_module(m):\n- last_conv = None\n- last_conv_name = None\n-\n- for name, child in m.named_children():\n- if isinstance(child, (nn.BatchNorm2d, nn.SyncBatchNorm)):\n- if last_conv is None: # only fuse BN that is after Conv\n- continue\n- fused_conv = fuse_conv_bn(last_conv, child)\n- m._modules[last_conv_name] = fused_conv\n- # To reduce changes, set BN as Identity instead of deleting it.\n- m._modules[name] = nn.Identity()\n- last_conv = None\n- elif isinstance(child, nn.Conv2d):\n- last_conv = child\n- last_conv_name = name\n- else:\n- fuse_module(child)\n- return m\n-\n-\n-def parse_args():\n- parser = argparse.ArgumentParser(\n- description='fuse Conv and BN layers in a model')\n- parser.add_argument('config', help='config file path')\n- parser.add_argument('checkpoint', help='checkpoint file path')\n- parser.add_argument('out', help='output path of the converted model')\n- args = parser.parse_args()\n- return args\n-\n-\n-def main():\n- args = parse_args()\n- # build the model from a config file and a checkpoint file\n- model = init_detector(args.config, args.checkpoint)\n- # fuse conv and bn layers of the model\n- fused_model = fuse_module(model)\n- save_checkpoint(fused_model, args.out)\n-\n-\n-if __name__ == '__main__':\n- main()\n", "issue": "ModuleNotFoundError: No module named 'tools'\n i would like to test the result of training, so i write the next:\r\n(base) zhangshen@zhangshen-X550JX:~/mmdetection$ python tools/test.py configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth --out./result/result_100/pkl --eval bbox\r\n\r\nbut i got:\r\nTraceback (most recent call last):\r\n File \"tools/test.py\", line 9, in <module>\r\n from tools.fuse_conv_bn import fuse_module\r\nModuleNotFoundError: No module named 'tools'\r\n\r\nhow can i solve this problem?\n", "before_files": [{"content": "import argparse\n\nimport torch\nimport torch.nn as nn\nfrom mmcv.runner import save_checkpoint\n\nfrom mmdet.apis import init_detector\n\n\ndef fuse_conv_bn(conv, bn):\n \"\"\"During inference, the functionary of batch norm layers is turned off but\n only the mean and var alone channels are used, which exposes the chance to\n fuse it with the preceding conv layers to save computations and simplify\n network structures.\"\"\"\n conv_w = conv.weight\n conv_b = conv.bias if conv.bias is not None else torch.zeros_like(\n bn.running_mean)\n\n factor = bn.weight / torch.sqrt(bn.running_var + bn.eps)\n conv.weight = nn.Parameter(conv_w *\n factor.reshape([conv.out_channels, 1, 1, 1]))\n 
conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias)\n return conv\n\n\ndef fuse_module(m):\n last_conv = None\n last_conv_name = None\n\n for name, child in m.named_children():\n if isinstance(child, (nn.BatchNorm2d, nn.SyncBatchNorm)):\n if last_conv is None: # only fuse BN that is after Conv\n continue\n fused_conv = fuse_conv_bn(last_conv, child)\n m._modules[last_conv_name] = fused_conv\n # To reduce changes, set BN as Identity instead of deleting it.\n m._modules[name] = nn.Identity()\n last_conv = None\n elif isinstance(child, nn.Conv2d):\n last_conv = child\n last_conv_name = name\n else:\n fuse_module(child)\n return m\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(\n description='fuse Conv and BN layers in a model')\n parser.add_argument('config', help='config file path')\n parser.add_argument('checkpoint', help='checkpoint file path')\n parser.add_argument('out', help='output path of the converted model')\n args = parser.parse_args()\n return args\n\n\ndef main():\n args = parse_args()\n # build the model from a config file and a checkpoint file\n model = init_detector(args.config, args.checkpoint)\n # fuse conv and bn layers of the model\n fused_model = fuse_module(model)\n save_checkpoint(fused_model, args.out)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/fuse_conv_bn.py"}], "after_files": [{"content": null, "path": "tools/fuse_conv_bn.py"}]}
| 1,065 | 600 |
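A quick numerical check of the Conv+BN fusion identity that `fuse_conv_bn` above is built on may be useful context. This is a self-contained sketch that only assumes PyTorch is installed; it does not touch the mmdetection/mmcv tooling the issue is about:

```python
# Verifies that folding a BatchNorm layer into the preceding conv reproduces the
# original conv -> BN output at inference time (BN in eval() mode, running stats used).
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
bn = nn.BatchNorm2d(8).eval()
# Give the BN layer non-trivial running statistics so the check is meaningful.
bn.running_mean.uniform_(-1.0, 1.0)
bn.running_var.uniform_(0.5, 2.0)

x = torch.randn(1, 3, 16, 16)

with torch.no_grad():
    reference = bn(conv(x))

    # Fold the BN affine transform into new conv weights, mirroring fuse_conv_bn().
    factor = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=True)
    fused.weight.copy_(conv.weight * factor.reshape(8, 1, 1, 1))
    fused.bias.copy_((torch.zeros(8) - bn.running_mean) * factor + bn.bias)

    print(torch.allclose(reference, fused(x), atol=1e-5))  # expected: True
```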
gh_patches_debug_18733 | rasdani/github-patches | git_diff | scikit-hep__pyhf-1715
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove dependency on HEPData from docstring tests
### Summary
With PRs #1697, #1704, and #1711 there have been attempts to remove any dependency on HEPData infrastructure for testing. There is still a dependency in the [`pyhf.contrib.utils.download` docs](https://pyhf.readthedocs.io/en/v0.6.3/_generated/pyhf.contrib.utils.download.html) as it demos downloading from HEPData in the docstring example. This can fail with
```pytb
____________________ [doctest] pyhf.contrib.utils.download _____________________
028 directory at the path given.
029
030 Example:
031
032 >>> from pyhf.contrib.utils import download
033 >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods")
034 >>> import os
035 >>> sorted(os.listdir("1Lbb-likelihoods"))
036 ['BkgOnly.json', 'README.md', 'patchset.json']
037 >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods.tar.gz", compress=True)
UNEXPECTED EXCEPTION: InvalidArchive('https://doi.org/10.17182/hepdata.90607.v3/r3 gives a response code of 500.\nThere is either something temporarily wrong with the archive host or https://doi.org/10.17182/hepdata.90607.v3/r3 is an invalid URL.')
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run
exec(compile(example.source, filename, "single",
File "<doctest pyhf.contrib.utils.download[4]>", line 1, in <module>
File "/home/runner/work/pyhf/pyhf/src/pyhf/contrib/utils.py", line 73, in download
raise exceptions.InvalidArchive(
pyhf.exceptions.InvalidArchive: https://doi.org/10.17182/hepdata.90607.v3/r3 gives a response code of 500.
There is either something temporarily wrong with the archive host or https://doi.org/10.17182/hepdata.90607.v3/r3 is an invalid URL.
/home/runner/work/pyhf/pyhf/src/pyhf/contrib/utils.py:37: UnexpectedException
```
### OS / Environment
```console
All platforms
```
### Steps to Reproduce
Run the CI and get unlucky
### File Upload (optional)
_No response_
### Expected Results
The CI to run the doctests without error and to be able to not have dependencies on HEPData uptime.
### Actual Results
```pytb
____________________ [doctest] pyhf.contrib.utils.download _____________________
028 directory at the path given.
029
030 Example:
031
032 >>> from pyhf.contrib.utils import download
033 >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods")
034 >>> import os
035 >>> sorted(os.listdir("1Lbb-likelihoods"))
036 ['BkgOnly.json', 'README.md', 'patchset.json']
037 >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods.tar.gz", compress=True)
UNEXPECTED EXCEPTION: InvalidArchive('https://doi.org/10.17182/hepdata.90607.v3/r3 gives a response code of 500.\nThere is either something temporarily wrong with the archive host or https://doi.org/10.17182/hepdata.90607.v3/r3 is an invalid URL.')
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py", line 1334, in __run
exec(compile(example.source, filename, "single",
File "<doctest pyhf.contrib.utils.download[4]>", line 1, in <module>
File "/home/runner/work/pyhf/pyhf/src/pyhf/contrib/utils.py", line 73, in download
raise exceptions.InvalidArchive(
pyhf.exceptions.InvalidArchive: https://doi.org/10.17182/hepdata.90607.v3/r3 gives a response code of 500.
There is either something temporarily wrong with the archive host or https://doi.org/10.17182/hepdata.90607.v3/r3 is an invalid URL.
/home/runner/work/pyhf/pyhf/src/pyhf/contrib/utils.py:37: UnexpectedException
```
### pyhf Version
```console
pyhf, version 0.6.3
```
### Code of Conduct
- [X] I agree to follow the Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyhf/contrib/utils.py`
Content:
```
1 """Helper utilities for common tasks."""
2
3 import logging
4 import tarfile
5 import zipfile
6 from io import BytesIO
7 from pathlib import Path
8 from shutil import rmtree
9 from urllib.parse import urlparse
10
11 from pyhf import exceptions
12
13 log = logging.getLogger(__name__)
14
15 __all__ = ["download"]
16
17
18 def __dir__():
19 return __all__
20
21
22 try:
23 import requests
24
25 def download(archive_url, output_directory, force=False, compress=False):
26 """
27 Download the patchset archive from the remote URL and extract it in a
28 directory at the path given.
29
30 Example:
31
32 >>> from pyhf.contrib.utils import download
33 >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods")
34 >>> import os
35 >>> sorted(os.listdir("1Lbb-likelihoods"))
36 ['BkgOnly.json', 'README.md', 'patchset.json']
37 >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods.tar.gz", compress=True)
38 >>> import glob
39 >>> glob.glob("1Lbb-likelihoods.tar.gz")
40 ['1Lbb-likelihoods.tar.gz']
41
42 Args:
43 archive_url (:obj:`str`): The URL of the :class:`~pyhf.patchset.PatchSet` archive to download.
44 output_directory (:obj:`str`): Name of the directory to unpack the archive into.
45 force (:obj:`bool`): Force download from non-approved host. Default is ``False``.
46 compress (:obj:`bool`): Keep the archive in a compressed ``tar.gz`` form. Default is ``False``.
47
48 Raises:
49 :class:`~pyhf.exceptions.InvalidArchiveHost`: if the provided archive host name is not known to be valid
50 """
51 if not force:
52 valid_hosts = ["www.hepdata.net", "doi.org"]
53 netloc = urlparse(archive_url).netloc
54 if netloc not in valid_hosts:
55 raise exceptions.InvalidArchiveHost(
56 f"{netloc} is not an approved archive host: {', '.join(str(host) for host in valid_hosts)}\n"
57 + "To download an archive from this host use the --force option."
58 )
59
60 # c.f. https://github.com/scikit-hep/pyhf/issues/1491
61 # > Use content negotiation at the landing page for the resource that
62 # > the DOI resolves to. DataCite content negotiation is forwarding all
63 # > requests with unknown content types to the URL registered in the
64 # > handle system.
65 # c.f. https://blog.datacite.org/changes-to-doi-content-negotiation/
66 # The HEPData landing page for the resource file can check if the Accept
67 # request HTTP header matches the content type of the resource file and
68 # return the content directly if so.
69 with requests.get(
70 archive_url, headers={"Accept": "application/x-tar, application/zip"}
71 ) as response:
72 if response.status_code != 200:
73 raise exceptions.InvalidArchive(
74 f"{archive_url} gives a response code of {response.status_code}.\n"
75 + "There is either something temporarily wrong with the archive host"
76 + f" or {archive_url} is an invalid URL."
77 )
78
79 if compress:
80 with open(output_directory, "wb") as archive:
81 archive.write(response.content)
82 else:
83 # Support for file-like objects for tarfile.is_tarfile was added
84 # in Python 3.9, so as pyhf is currently Python 3.7+ then can't
85 # do tarfile.is_tarfile(BytesIO(response.content)).
86 # Instead, just use a 'try except' block to determine if the
87 # archive is a valid tarfile.
88 # TODO: Simplify after pyhf is Python 3.9+ only
89 try:
90 # Use transparent compression to allow for .tar or .tar.gz
91 with tarfile.open(
92 mode="r:*", fileobj=BytesIO(response.content)
93 ) as archive:
94 archive.extractall(output_directory)
95 except tarfile.ReadError:
96 if not zipfile.is_zipfile(BytesIO(response.content)):
97 raise exceptions.InvalidArchive(
98 f"The archive downloaded from {archive_url} is not a tarfile"
99 + " or a zipfile and so can not be opened as one."
100 )
101
102 output_directory = Path(output_directory)
103 if output_directory.exists():
104 rmtree(output_directory)
105 with zipfile.ZipFile(BytesIO(response.content)) as archive:
106 archive.extractall(output_directory)
107
108 # zipfile.ZipFile.extractall extracts to a directory
109 # below a target directory, so to match the extraction
110 # path of tarfile.TarFile.extractall move the extracted
111 # directory to a temporary path and then replace the
112 # output directory target with the contents at the
113 # temporary path.
114 # The directory is moved instead of being extracted one
115 # directory up and then renamed as the name of the
116 # zipfile directory is set at zipfile creation time and
117 # isn't knowable in advance.
118 child_path = [child for child in output_directory.iterdir()][0]
119 _tmp_path = output_directory.parent.joinpath(
120 Path(output_directory.name + "__tmp__")
121 )
122 child_path.replace(_tmp_path)
123 # the zipfile could contain remnant __MACOSX directories
124 # from creation time
125 rmtree(output_directory)
126 _tmp_path.replace(output_directory)
127
128
129 except ModuleNotFoundError:
130 log.error(
131 "\nInstallation of the contrib extra is required to use pyhf.contrib.utils.download"
132 + "\nPlease install with: python -m pip install pyhf[contrib]\n",
133 exc_info=True,
134 )
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pyhf/contrib/utils.py b/src/pyhf/contrib/utils.py
--- a/src/pyhf/contrib/utils.py
+++ b/src/pyhf/contrib/utils.py
@@ -30,13 +30,13 @@
Example:
>>> from pyhf.contrib.utils import download
- >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods")
+ >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods") # doctest: +SKIP
>>> import os
- >>> sorted(os.listdir("1Lbb-likelihoods"))
+ >>> sorted(os.listdir("1Lbb-likelihoods")) # doctest: +SKIP
['BkgOnly.json', 'README.md', 'patchset.json']
- >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods.tar.gz", compress=True)
+ >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods.tar.gz", compress=True) # doctest: +SKIP
>>> import glob
- >>> glob.glob("1Lbb-likelihoods.tar.gz")
+ >>> glob.glob("1Lbb-likelihoods.tar.gz") # doctest: +SKIP
['1Lbb-likelihoods.tar.gz']
Args:
|
{"golden_diff": "diff --git a/src/pyhf/contrib/utils.py b/src/pyhf/contrib/utils.py\n--- a/src/pyhf/contrib/utils.py\n+++ b/src/pyhf/contrib/utils.py\n@@ -30,13 +30,13 @@\n Example:\n \n >>> from pyhf.contrib.utils import download\n- >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods\")\n+ >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods\") # doctest: +SKIP\n >>> import os\n- >>> sorted(os.listdir(\"1Lbb-likelihoods\"))\n+ >>> sorted(os.listdir(\"1Lbb-likelihoods\")) # doctest: +SKIP\n ['BkgOnly.json', 'README.md', 'patchset.json']\n- >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods.tar.gz\", compress=True)\n+ >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods.tar.gz\", compress=True) # doctest: +SKIP\n >>> import glob\n- >>> glob.glob(\"1Lbb-likelihoods.tar.gz\")\n+ >>> glob.glob(\"1Lbb-likelihoods.tar.gz\") # doctest: +SKIP\n ['1Lbb-likelihoods.tar.gz']\n \n Args:\n", "issue": "Remove dependency on HEPData from docstring tests\n### Summary\r\n\r\nWith PRs #1697, #1704, and #1711 there have been attempts to remove any dependency on HEPData infrastructure for testing. There is still a dependency in the [`pyhf.contrib.utils.download` docs](https://pyhf.readthedocs.io/en/v0.6.3/_generated/pyhf.contrib.utils.download.html) as it demos downloading from HEPData in the docstring example. This can fail with \r\n\r\n```pytb\r\n____________________ [doctest] pyhf.contrib.utils.download _____________________\r\n028 directory at the path given.\r\n029 \r\n030 Example:\r\n031 \r\n032 >>> from pyhf.contrib.utils import download\r\n033 >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods\")\r\n034 >>> import os\r\n035 >>> sorted(os.listdir(\"1Lbb-likelihoods\"))\r\n036 ['BkgOnly.json', 'README.md', 'patchset.json']\r\n037 >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods.tar.gz\", compress=True)\r\nUNEXPECTED EXCEPTION: InvalidArchive('https://doi.org/10.17182/hepdata.90607.v3/r3 gives a response code of 500.\\nThere is either something temporarily wrong with the archive host or https://doi.org/10.17182/hepdata.90607.v3/r3 is an invalid URL.')\r\nTraceback (most recent call last):\r\n File \"/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py\", line 1334, in __run\r\n exec(compile(example.source, filename, \"single\",\r\n File \"<doctest pyhf.contrib.utils.download[4]>\", line 1, in <module>\r\n File \"/home/runner/work/pyhf/pyhf/src/pyhf/contrib/utils.py\", line 73, in download\r\n raise exceptions.InvalidArchive(\r\npyhf.exceptions.InvalidArchive: https://doi.org/10.17182/hepdata.90607.v3/r3 gives a response code of 500.\r\nThere is either something temporarily wrong with the archive host or https://doi.org/10.17182/hepdata.90607.v3/r3 is an invalid URL.\r\n/home/runner/work/pyhf/pyhf/src/pyhf/contrib/utils.py:37: UnexpectedException\r\n```\r\n\r\n### OS / Environment\r\n\r\n```console\r\nAll platforms\r\n```\r\n\r\n\r\n### Steps to Reproduce\r\n\r\nRun the CI and get unlucky\r\n\r\n### File Upload (optional)\r\n\r\n_No response_\r\n\r\n### Expected Results\r\n\r\nThe CI to run the doctests without error and to be able to not have dependencies on HEPData uptime.\r\n\r\n### Actual Results\r\n\r\n```pytb\r\n____________________ [doctest] pyhf.contrib.utils.download _____________________\r\n028 directory at the path given.\r\n029 \r\n030 Example:\r\n031 \r\n032 >>> from pyhf.contrib.utils import 
download\r\n033 >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods\")\r\n034 >>> import os\r\n035 >>> sorted(os.listdir(\"1Lbb-likelihoods\"))\r\n036 ['BkgOnly.json', 'README.md', 'patchset.json']\r\n037 >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods.tar.gz\", compress=True)\r\nUNEXPECTED EXCEPTION: InvalidArchive('https://doi.org/10.17182/hepdata.90607.v3/r3 gives a response code of 500.\\nThere is either something temporarily wrong with the archive host or https://doi.org/10.17182/hepdata.90607.v3/r3 is an invalid URL.')\r\nTraceback (most recent call last):\r\n File \"/opt/hostedtoolcache/Python/3.9.9/x64/lib/python3.9/doctest.py\", line 1334, in __run\r\n exec(compile(example.source, filename, \"single\",\r\n File \"<doctest pyhf.contrib.utils.download[4]>\", line 1, in <module>\r\n File \"/home/runner/work/pyhf/pyhf/src/pyhf/contrib/utils.py\", line 73, in download\r\n raise exceptions.InvalidArchive(\r\npyhf.exceptions.InvalidArchive: https://doi.org/10.17182/hepdata.90607.v3/r3 gives a response code of 500.\r\nThere is either something temporarily wrong with the archive host or https://doi.org/10.17182/hepdata.90607.v3/r3 is an invalid URL.\r\n/home/runner/work/pyhf/pyhf/src/pyhf/contrib/utils.py:37: UnexpectedException\r\n```\r\n\r\n\r\n### pyhf Version\r\n\r\n```console\r\npyhf, version 0.6.3\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Code of Conduct\n", "before_files": [{"content": "\"\"\"Helper utilities for common tasks.\"\"\"\n\nimport logging\nimport tarfile\nimport zipfile\nfrom io import BytesIO\nfrom pathlib import Path\nfrom shutil import rmtree\nfrom urllib.parse import urlparse\n\nfrom pyhf import exceptions\n\nlog = logging.getLogger(__name__)\n\n__all__ = [\"download\"]\n\n\ndef __dir__():\n return __all__\n\n\ntry:\n import requests\n\n def download(archive_url, output_directory, force=False, compress=False):\n \"\"\"\n Download the patchset archive from the remote URL and extract it in a\n directory at the path given.\n\n Example:\n\n >>> from pyhf.contrib.utils import download\n >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods\")\n >>> import os\n >>> sorted(os.listdir(\"1Lbb-likelihoods\"))\n ['BkgOnly.json', 'README.md', 'patchset.json']\n >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods.tar.gz\", compress=True)\n >>> import glob\n >>> glob.glob(\"1Lbb-likelihoods.tar.gz\")\n ['1Lbb-likelihoods.tar.gz']\n\n Args:\n archive_url (:obj:`str`): The URL of the :class:`~pyhf.patchset.PatchSet` archive to download.\n output_directory (:obj:`str`): Name of the directory to unpack the archive into.\n force (:obj:`bool`): Force download from non-approved host. Default is ``False``.\n compress (:obj:`bool`): Keep the archive in a compressed ``tar.gz`` form. Default is ``False``.\n\n Raises:\n :class:`~pyhf.exceptions.InvalidArchiveHost`: if the provided archive host name is not known to be valid\n \"\"\"\n if not force:\n valid_hosts = [\"www.hepdata.net\", \"doi.org\"]\n netloc = urlparse(archive_url).netloc\n if netloc not in valid_hosts:\n raise exceptions.InvalidArchiveHost(\n f\"{netloc} is not an approved archive host: {', '.join(str(host) for host in valid_hosts)}\\n\"\n + \"To download an archive from this host use the --force option.\"\n )\n\n # c.f. https://github.com/scikit-hep/pyhf/issues/1491\n # > Use content negotiation at the landing page for the resource that\n # > the DOI resolves to. 
DataCite content negotiation is forwarding all\n # > requests with unknown content types to the URL registered in the\n # > handle system.\n # c.f. https://blog.datacite.org/changes-to-doi-content-negotiation/\n # The HEPData landing page for the resource file can check if the Accept\n # request HTTP header matches the content type of the resource file and\n # return the content directly if so.\n with requests.get(\n archive_url, headers={\"Accept\": \"application/x-tar, application/zip\"}\n ) as response:\n if response.status_code != 200:\n raise exceptions.InvalidArchive(\n f\"{archive_url} gives a response code of {response.status_code}.\\n\"\n + \"There is either something temporarily wrong with the archive host\"\n + f\" or {archive_url} is an invalid URL.\"\n )\n\n if compress:\n with open(output_directory, \"wb\") as archive:\n archive.write(response.content)\n else:\n # Support for file-like objects for tarfile.is_tarfile was added\n # in Python 3.9, so as pyhf is currently Python 3.7+ then can't\n # do tarfile.is_tarfile(BytesIO(response.content)).\n # Instead, just use a 'try except' block to determine if the\n # archive is a valid tarfile.\n # TODO: Simplify after pyhf is Python 3.9+ only\n try:\n # Use transparent compression to allow for .tar or .tar.gz\n with tarfile.open(\n mode=\"r:*\", fileobj=BytesIO(response.content)\n ) as archive:\n archive.extractall(output_directory)\n except tarfile.ReadError:\n if not zipfile.is_zipfile(BytesIO(response.content)):\n raise exceptions.InvalidArchive(\n f\"The archive downloaded from {archive_url} is not a tarfile\"\n + \" or a zipfile and so can not be opened as one.\"\n )\n\n output_directory = Path(output_directory)\n if output_directory.exists():\n rmtree(output_directory)\n with zipfile.ZipFile(BytesIO(response.content)) as archive:\n archive.extractall(output_directory)\n\n # zipfile.ZipFile.extractall extracts to a directory\n # below a target directory, so to match the extraction\n # path of tarfile.TarFile.extractall move the extracted\n # directory to a temporary path and then replace the\n # output directory target with the contents at the\n # temporary path.\n # The directory is moved instead of being extracted one\n # directory up and then renamed as the name of the\n # zipfile directory is set at zipfile creation time and\n # isn't knowable in advance.\n child_path = [child for child in output_directory.iterdir()][0]\n _tmp_path = output_directory.parent.joinpath(\n Path(output_directory.name + \"__tmp__\")\n )\n child_path.replace(_tmp_path)\n # the zipfile could contain remnant __MACOSX directories\n # from creation time\n rmtree(output_directory)\n _tmp_path.replace(output_directory)\n\n\nexcept ModuleNotFoundError:\n log.error(\n \"\\nInstallation of the contrib extra is required to use pyhf.contrib.utils.download\"\n + \"\\nPlease install with: python -m pip install pyhf[contrib]\\n\",\n exc_info=True,\n )\n", "path": "src/pyhf/contrib/utils.py"}], "after_files": [{"content": "\"\"\"Helper utilities for common tasks.\"\"\"\n\nimport logging\nimport tarfile\nimport zipfile\nfrom io import BytesIO\nfrom pathlib import Path\nfrom shutil import rmtree\nfrom urllib.parse import urlparse\n\nfrom pyhf import exceptions\n\nlog = logging.getLogger(__name__)\n\n__all__ = [\"download\"]\n\n\ndef __dir__():\n return __all__\n\n\ntry:\n import requests\n\n def download(archive_url, output_directory, force=False, compress=False):\n \"\"\"\n Download the patchset archive from the remote URL and extract it in a\n directory at the 
path given.\n\n Example:\n\n >>> from pyhf.contrib.utils import download\n >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods\") # doctest: +SKIP\n >>> import os\n >>> sorted(os.listdir(\"1Lbb-likelihoods\")) # doctest: +SKIP\n ['BkgOnly.json', 'README.md', 'patchset.json']\n >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods.tar.gz\", compress=True) # doctest: +SKIP\n >>> import glob\n >>> glob.glob(\"1Lbb-likelihoods.tar.gz\") # doctest: +SKIP\n ['1Lbb-likelihoods.tar.gz']\n\n Args:\n archive_url (:obj:`str`): The URL of the :class:`~pyhf.patchset.PatchSet` archive to download.\n output_directory (:obj:`str`): Name of the directory to unpack the archive into.\n force (:obj:`bool`): Force download from non-approved host. Default is ``False``.\n compress (:obj:`bool`): Keep the archive in a compressed ``tar.gz`` form. Default is ``False``.\n\n Raises:\n :class:`~pyhf.exceptions.InvalidArchiveHost`: if the provided archive host name is not known to be valid\n \"\"\"\n if not force:\n valid_hosts = [\"www.hepdata.net\", \"doi.org\"]\n netloc = urlparse(archive_url).netloc\n if netloc not in valid_hosts:\n raise exceptions.InvalidArchiveHost(\n f\"{netloc} is not an approved archive host: {', '.join(str(host) for host in valid_hosts)}\\n\"\n + \"To download an archive from this host use the --force option.\"\n )\n\n # c.f. https://github.com/scikit-hep/pyhf/issues/1491\n # > Use content negotiation at the landing page for the resource that\n # > the DOI resolves to. DataCite content negotiation is forwarding all\n # > requests with unknown content types to the URL registered in the\n # > handle system.\n # c.f. https://blog.datacite.org/changes-to-doi-content-negotiation/\n # The HEPData landing page for the resource file can check if the Accept\n # request HTTP header matches the content type of the resource file and\n # return the content directly if so.\n with requests.get(\n archive_url, headers={\"Accept\": \"application/x-tar, application/zip\"}\n ) as response:\n if response.status_code != 200:\n raise exceptions.InvalidArchive(\n f\"{archive_url} gives a response code of {response.status_code}.\\n\"\n + \"There is either something temporarily wrong with the archive host\"\n + f\" or {archive_url} is an invalid URL.\"\n )\n\n if compress:\n with open(output_directory, \"wb\") as archive:\n archive.write(response.content)\n else:\n # Support for file-like objects for tarfile.is_tarfile was added\n # in Python 3.9, so as pyhf is currently Python 3.7+ then can't\n # do tarfile.is_tarfile(BytesIO(response.content)).\n # Instead, just use a 'try except' block to determine if the\n # archive is a valid tarfile.\n # TODO: Simplify after pyhf is Python 3.9+ only\n try:\n # Use transparent compression to allow for .tar or .tar.gz\n with tarfile.open(\n mode=\"r:*\", fileobj=BytesIO(response.content)\n ) as archive:\n archive.extractall(output_directory)\n except tarfile.ReadError:\n if not zipfile.is_zipfile(BytesIO(response.content)):\n raise exceptions.InvalidArchive(\n f\"The archive downloaded from {archive_url} is not a tarfile\"\n + \" or a zipfile and so can not be opened as one.\"\n )\n\n output_directory = Path(output_directory)\n if output_directory.exists():\n rmtree(output_directory)\n with zipfile.ZipFile(BytesIO(response.content)) as archive:\n archive.extractall(output_directory)\n\n # zipfile.ZipFile.extractall extracts to a directory\n # below a target directory, so to match the extraction\n # path of 
tarfile.TarFile.extractall move the extracted\n # directory to a temporary path and then replace the\n # output directory target with the contents at the\n # temporary path.\n # The directory is moved instead of being extracted one\n # directory up and then renamed as the name of the\n # zipfile directory is set at zipfile creation time and\n # isn't knowable in advance.\n child_path = [child for child in output_directory.iterdir()][0]\n _tmp_path = output_directory.parent.joinpath(\n Path(output_directory.name + \"__tmp__\")\n )\n child_path.replace(_tmp_path)\n # the zipfile could contain remnant __MACOSX directories\n # from creation time\n rmtree(output_directory)\n _tmp_path.replace(output_directory)\n\n\nexcept ModuleNotFoundError:\n log.error(\n \"\\nInstallation of the contrib extra is required to use pyhf.contrib.utils.download\"\n + \"\\nPlease install with: python -m pip install pyhf[contrib]\\n\",\n exc_info=True,\n )\n", "path": "src/pyhf/contrib/utils.py"}]}
| 3,019 | 365 |
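The tar-versus-zip fallback described in the pyhf record just above boils down to a small standalone check. The sketch below is illustrative only (the function name and return values are made up); the real code additionally extracts the archive and normalizes the output directory:

```python
import tarfile
import zipfile
from io import BytesIO


def classify_archive(payload: bytes) -> str:
    """Best-effort guess whether raw bytes are a tarball or a zipfile."""
    try:
        # transparent compression: accepts .tar as well as .tar.gz
        with tarfile.open(mode="r:*", fileobj=BytesIO(payload)):
            return "tar"
    except tarfile.ReadError:
        return "zip" if zipfile.is_zipfile(BytesIO(payload)) else "unknown"
```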
gh_patches_debug_21808
|
rasdani/github-patches
|
git_diff
|
nonebot__nonebot2-1720
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feature: replace the toml reading mechanism
Python 3.11 already ships standard-library support for reading toml configuration. https://docs.python.org/3/library/tomllib.html
--- END ISSUE ---
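For orientation, the compatibility pattern the issue asks for usually looks like the sketch below — an illustration only, not the project's actual patch; it assumes the third-party `tomli` backport is installed on interpreters older than 3.11:

```python
try:
    import tomllib  # standard library since Python 3.11
except ModuleNotFoundError:
    import tomli as tomllib  # drop-in backport with the same load()/loads() API


def read_nonebot_table(path: str) -> dict:
    # tomllib.load expects a binary file object
    with open(path, "rb") as f:
        data = tomllib.load(f)
    return data.get("tool", {}).get("nonebot", {})
```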
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nonebot/plugin/load.py`
Content:
```
1 """本模块定义插件加载接口。
2
3 FrontMatter:
4 sidebar_position: 1
5 description: nonebot.plugin.load 模块
6 """
7 import json
8 from pathlib import Path
9 from types import ModuleType
10 from typing import Set, Union, Iterable, Optional
11
12 import tomlkit
13
14 from nonebot.utils import path_to_module_name
15
16 from .plugin import Plugin
17 from .manager import PluginManager
18 from . import _managers, get_plugin, _module_name_to_plugin_name
19
20
21 def load_plugin(module_path: Union[str, Path]) -> Optional[Plugin]:
22 """加载单个插件,可以是本地插件或是通过 `pip` 安装的插件。
23
24 参数:
25 module_path: 插件名称 `path.to.your.plugin` 或插件路径 `pathlib.Path(path/to/your/plugin)`
26 """
27 module_path = (
28 path_to_module_name(module_path)
29 if isinstance(module_path, Path)
30 else module_path
31 )
32 manager = PluginManager([module_path])
33 _managers.append(manager)
34 return manager.load_plugin(module_path)
35
36
37 def load_plugins(*plugin_dir: str) -> Set[Plugin]:
38 """导入文件夹下多个插件,以 `_` 开头的插件不会被导入!
39
40 参数:
41 plugin_dir: 文件夹路径
42 """
43 manager = PluginManager(search_path=plugin_dir)
44 _managers.append(manager)
45 return manager.load_all_plugins()
46
47
48 def load_all_plugins(
49 module_path: Iterable[str], plugin_dir: Iterable[str]
50 ) -> Set[Plugin]:
51 """导入指定列表中的插件以及指定目录下多个插件,以 `_` 开头的插件不会被导入!
52
53 参数:
54 module_path: 指定插件集合
55 plugin_dir: 指定文件夹路径集合
56 """
57 manager = PluginManager(module_path, plugin_dir)
58 _managers.append(manager)
59 return manager.load_all_plugins()
60
61
62 def load_from_json(file_path: str, encoding: str = "utf-8") -> Set[Plugin]:
63 """导入指定 json 文件中的 `plugins` 以及 `plugin_dirs` 下多个插件,以 `_` 开头的插件不会被导入!
64
65 参数:
66 file_path: 指定 json 文件路径
67 encoding: 指定 json 文件编码
68
69 用法:
70 ```json title=plugins.json
71 {
72 "plugins": ["some_plugin"],
73 "plugin_dirs": ["some_dir"]
74 }
75 ```
76
77 ```python
78 nonebot.load_from_json("plugins.json")
79 ```
80 """
81 with open(file_path, "r", encoding=encoding) as f:
82 data = json.load(f)
83 if not isinstance(data, dict):
84 raise TypeError("json file must contains a dict!")
85 plugins = data.get("plugins")
86 plugin_dirs = data.get("plugin_dirs")
87 assert isinstance(plugins, list), "plugins must be a list of plugin name"
88 assert isinstance(plugin_dirs, list), "plugin_dirs must be a list of directories"
89 return load_all_plugins(set(plugins), set(plugin_dirs))
90
91
92 def load_from_toml(file_path: str, encoding: str = "utf-8") -> Set[Plugin]:
93 """导入指定 toml 文件 `[tool.nonebot]` 中的 `plugins` 以及 `plugin_dirs` 下多个插件,以 `_` 开头的插件不会被导入!
94
95 参数:
96 file_path: 指定 toml 文件路径
97 encoding: 指定 toml 文件编码
98
99 用法:
100 ```toml title=pyproject.toml
101 [tool.nonebot]
102 plugins = ["some_plugin"]
103 plugin_dirs = ["some_dir"]
104 ```
105
106 ```python
107 nonebot.load_from_toml("pyproject.toml")
108 ```
109 """
110 with open(file_path, "r", encoding=encoding) as f:
111 data = tomlkit.parse(f.read()) # type: ignore
112
113 nonebot_data = data.get("tool", {}).get("nonebot")
114 if nonebot_data is None:
115 raise ValueError("Cannot find '[tool.nonebot]' in given toml file!")
116 if not isinstance(nonebot_data, dict):
117 raise TypeError("'[tool.nonebot]' must be a Table!")
118 plugins = nonebot_data.get("plugins", [])
119 plugin_dirs = nonebot_data.get("plugin_dirs", [])
120 assert isinstance(plugins, list), "plugins must be a list of plugin name"
121 assert isinstance(plugin_dirs, list), "plugin_dirs must be a list of directories"
122 return load_all_plugins(plugins, plugin_dirs)
123
124
125 def load_builtin_plugin(name: str) -> Optional[Plugin]:
126 """导入 NoneBot 内置插件。
127
128 参数:
129 name: 插件名称
130 """
131 return load_plugin(f"nonebot.plugins.{name}")
132
133
134 def load_builtin_plugins(*plugins: str) -> Set[Plugin]:
135 """导入多个 NoneBot 内置插件。
136
137 参数:
138 plugins: 插件名称列表
139 """
140 return load_all_plugins([f"nonebot.plugins.{p}" for p in plugins], [])
141
142
143 def _find_manager_by_name(name: str) -> Optional[PluginManager]:
144 for manager in reversed(_managers):
145 if name in manager.plugins or name in manager.searched_plugins:
146 return manager
147
148
149 def require(name: str) -> ModuleType:
150 """获取一个插件的导出内容。
151
152 如果为 `load_plugins` 文件夹导入的插件,则为文件(夹)名。
153
154 参数:
155 name: 插件名,即 {ref}`nonebot.plugin.plugin.Plugin.name`。
156
157 异常:
158 RuntimeError: 插件无法加载
159 """
160 plugin = get_plugin(_module_name_to_plugin_name(name))
161 if not plugin:
162 if manager := _find_manager_by_name(name):
163 plugin = manager.load_plugin(name)
164 else:
165 plugin = load_plugin(name)
166 if not plugin:
167 raise RuntimeError(f'Cannot load plugin "{name}"!')
168 return plugin.module
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nonebot/plugin/load.py b/nonebot/plugin/load.py
--- a/nonebot/plugin/load.py
+++ b/nonebot/plugin/load.py
@@ -9,14 +9,17 @@
from types import ModuleType
from typing import Set, Union, Iterable, Optional
-import tomlkit
-
from nonebot.utils import path_to_module_name
from .plugin import Plugin
from .manager import PluginManager
from . import _managers, get_plugin, _module_name_to_plugin_name
+try:
+ import tomllib # pyright: reportMissingImports=false
+except ModuleNotFoundError:
+ import tomli as tomllib
+
def load_plugin(module_path: Union[str, Path]) -> Optional[Plugin]:
"""加载单个插件,可以是本地插件或是通过 `pip` 安装的插件。
@@ -108,7 +111,7 @@
```
"""
with open(file_path, "r", encoding=encoding) as f:
- data = tomlkit.parse(f.read()) # type: ignore
+ data = tomllib.loads(f.read())
nonebot_data = data.get("tool", {}).get("nonebot")
if nonebot_data is None:
|
{"golden_diff": "diff --git a/nonebot/plugin/load.py b/nonebot/plugin/load.py\n--- a/nonebot/plugin/load.py\n+++ b/nonebot/plugin/load.py\n@@ -9,14 +9,17 @@\n from types import ModuleType\n from typing import Set, Union, Iterable, Optional\n \n-import tomlkit\n-\n from nonebot.utils import path_to_module_name\n \n from .plugin import Plugin\n from .manager import PluginManager\n from . import _managers, get_plugin, _module_name_to_plugin_name\n \n+try:\n+ import tomllib # pyright: reportMissingImports=false\n+except ModuleNotFoundError:\n+ import tomli as tomllib\n+\n \n def load_plugin(module_path: Union[str, Path]) -> Optional[Plugin]:\n \"\"\"\u52a0\u8f7d\u5355\u4e2a\u63d2\u4ef6\uff0c\u53ef\u4ee5\u662f\u672c\u5730\u63d2\u4ef6\u6216\u662f\u901a\u8fc7 `pip` \u5b89\u88c5\u7684\u63d2\u4ef6\u3002\n@@ -108,7 +111,7 @@\n ```\n \"\"\"\n with open(file_path, \"r\", encoding=encoding) as f:\n- data = tomlkit.parse(f.read()) # type: ignore\n+ data = tomllib.loads(f.read())\n \n nonebot_data = data.get(\"tool\", {}).get(\"nonebot\")\n if nonebot_data is None:\n", "issue": "Feature: \u66ff\u6362 toml \u8bfb\u53d6\u65b9\u5f0f\n\u5728 python 3.11 \u4e2d\u5df2\u7ecf\u652f\u6301\u4e86\u8bfb\u53d6 toml \u914d\u7f6e\u3002https://docs.python.org/3/library/tomllib.html\n", "before_files": [{"content": "\"\"\"\u672c\u6a21\u5757\u5b9a\u4e49\u63d2\u4ef6\u52a0\u8f7d\u63a5\u53e3\u3002\n\nFrontMatter:\n sidebar_position: 1\n description: nonebot.plugin.load \u6a21\u5757\n\"\"\"\nimport json\nfrom pathlib import Path\nfrom types import ModuleType\nfrom typing import Set, Union, Iterable, Optional\n\nimport tomlkit\n\nfrom nonebot.utils import path_to_module_name\n\nfrom .plugin import Plugin\nfrom .manager import PluginManager\nfrom . import _managers, get_plugin, _module_name_to_plugin_name\n\n\ndef load_plugin(module_path: Union[str, Path]) -> Optional[Plugin]:\n \"\"\"\u52a0\u8f7d\u5355\u4e2a\u63d2\u4ef6\uff0c\u53ef\u4ee5\u662f\u672c\u5730\u63d2\u4ef6\u6216\u662f\u901a\u8fc7 `pip` \u5b89\u88c5\u7684\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n module_path: \u63d2\u4ef6\u540d\u79f0 `path.to.your.plugin` \u6216\u63d2\u4ef6\u8def\u5f84 `pathlib.Path(path/to/your/plugin)`\n \"\"\"\n module_path = (\n path_to_module_name(module_path)\n if isinstance(module_path, Path)\n else module_path\n )\n manager = PluginManager([module_path])\n _managers.append(manager)\n return manager.load_plugin(module_path)\n\n\ndef load_plugins(*plugin_dir: str) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6587\u4ef6\u5939\u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n plugin_dir: \u6587\u4ef6\u5939\u8def\u5f84\n \"\"\"\n manager = PluginManager(search_path=plugin_dir)\n _managers.append(manager)\n return manager.load_all_plugins()\n\n\ndef load_all_plugins(\n module_path: Iterable[str], plugin_dir: Iterable[str]\n) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6307\u5b9a\u5217\u8868\u4e2d\u7684\u63d2\u4ef6\u4ee5\u53ca\u6307\u5b9a\u76ee\u5f55\u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n module_path: \u6307\u5b9a\u63d2\u4ef6\u96c6\u5408\n plugin_dir: \u6307\u5b9a\u6587\u4ef6\u5939\u8def\u5f84\u96c6\u5408\n \"\"\"\n manager = PluginManager(module_path, plugin_dir)\n _managers.append(manager)\n return manager.load_all_plugins()\n\n\ndef load_from_json(file_path: str, encoding: str = \"utf-8\") -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6307\u5b9a json \u6587\u4ef6\u4e2d\u7684 `plugins` \u4ee5\u53ca 
`plugin_dirs` \u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n file_path: \u6307\u5b9a json \u6587\u4ef6\u8def\u5f84\n encoding: \u6307\u5b9a json \u6587\u4ef6\u7f16\u7801\n\n \u7528\u6cd5:\n ```json title=plugins.json\n {\n \"plugins\": [\"some_plugin\"],\n \"plugin_dirs\": [\"some_dir\"]\n }\n ```\n\n ```python\n nonebot.load_from_json(\"plugins.json\")\n ```\n \"\"\"\n with open(file_path, \"r\", encoding=encoding) as f:\n data = json.load(f)\n if not isinstance(data, dict):\n raise TypeError(\"json file must contains a dict!\")\n plugins = data.get(\"plugins\")\n plugin_dirs = data.get(\"plugin_dirs\")\n assert isinstance(plugins, list), \"plugins must be a list of plugin name\"\n assert isinstance(plugin_dirs, list), \"plugin_dirs must be a list of directories\"\n return load_all_plugins(set(plugins), set(plugin_dirs))\n\n\ndef load_from_toml(file_path: str, encoding: str = \"utf-8\") -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6307\u5b9a toml \u6587\u4ef6 `[tool.nonebot]` \u4e2d\u7684 `plugins` \u4ee5\u53ca `plugin_dirs` \u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n file_path: \u6307\u5b9a toml \u6587\u4ef6\u8def\u5f84\n encoding: \u6307\u5b9a toml \u6587\u4ef6\u7f16\u7801\n\n \u7528\u6cd5:\n ```toml title=pyproject.toml\n [tool.nonebot]\n plugins = [\"some_plugin\"]\n plugin_dirs = [\"some_dir\"]\n ```\n\n ```python\n nonebot.load_from_toml(\"pyproject.toml\")\n ```\n \"\"\"\n with open(file_path, \"r\", encoding=encoding) as f:\n data = tomlkit.parse(f.read()) # type: ignore\n\n nonebot_data = data.get(\"tool\", {}).get(\"nonebot\")\n if nonebot_data is None:\n raise ValueError(\"Cannot find '[tool.nonebot]' in given toml file!\")\n if not isinstance(nonebot_data, dict):\n raise TypeError(\"'[tool.nonebot]' must be a Table!\")\n plugins = nonebot_data.get(\"plugins\", [])\n plugin_dirs = nonebot_data.get(\"plugin_dirs\", [])\n assert isinstance(plugins, list), \"plugins must be a list of plugin name\"\n assert isinstance(plugin_dirs, list), \"plugin_dirs must be a list of directories\"\n return load_all_plugins(plugins, plugin_dirs)\n\n\ndef load_builtin_plugin(name: str) -> Optional[Plugin]:\n \"\"\"\u5bfc\u5165 NoneBot \u5185\u7f6e\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n name: \u63d2\u4ef6\u540d\u79f0\n \"\"\"\n return load_plugin(f\"nonebot.plugins.{name}\")\n\n\ndef load_builtin_plugins(*plugins: str) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u591a\u4e2a NoneBot \u5185\u7f6e\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n plugins: \u63d2\u4ef6\u540d\u79f0\u5217\u8868\n \"\"\"\n return load_all_plugins([f\"nonebot.plugins.{p}\" for p in plugins], [])\n\n\ndef _find_manager_by_name(name: str) -> Optional[PluginManager]:\n for manager in reversed(_managers):\n if name in manager.plugins or name in manager.searched_plugins:\n return manager\n\n\ndef require(name: str) -> ModuleType:\n \"\"\"\u83b7\u53d6\u4e00\u4e2a\u63d2\u4ef6\u7684\u5bfc\u51fa\u5185\u5bb9\u3002\n\n \u5982\u679c\u4e3a `load_plugins` \u6587\u4ef6\u5939\u5bfc\u5165\u7684\u63d2\u4ef6\uff0c\u5219\u4e3a\u6587\u4ef6(\u5939)\u540d\u3002\n\n \u53c2\u6570:\n name: \u63d2\u4ef6\u540d\uff0c\u5373 {ref}`nonebot.plugin.plugin.Plugin.name`\u3002\n\n \u5f02\u5e38:\n RuntimeError: \u63d2\u4ef6\u65e0\u6cd5\u52a0\u8f7d\n \"\"\"\n plugin = get_plugin(_module_name_to_plugin_name(name))\n if not plugin:\n if manager := _find_manager_by_name(name):\n plugin = manager.load_plugin(name)\n else:\n 
plugin = load_plugin(name)\n if not plugin:\n raise RuntimeError(f'Cannot load plugin \"{name}\"!')\n return plugin.module\n", "path": "nonebot/plugin/load.py"}], "after_files": [{"content": "\"\"\"\u672c\u6a21\u5757\u5b9a\u4e49\u63d2\u4ef6\u52a0\u8f7d\u63a5\u53e3\u3002\n\nFrontMatter:\n sidebar_position: 1\n description: nonebot.plugin.load \u6a21\u5757\n\"\"\"\nimport json\nfrom pathlib import Path\nfrom types import ModuleType\nfrom typing import Set, Union, Iterable, Optional\n\nfrom nonebot.utils import path_to_module_name\n\nfrom .plugin import Plugin\nfrom .manager import PluginManager\nfrom . import _managers, get_plugin, _module_name_to_plugin_name\n\ntry:\n import tomllib # pyright: reportMissingImports=false\nexcept ModuleNotFoundError:\n import tomli as tomllib\n\n\ndef load_plugin(module_path: Union[str, Path]) -> Optional[Plugin]:\n \"\"\"\u52a0\u8f7d\u5355\u4e2a\u63d2\u4ef6\uff0c\u53ef\u4ee5\u662f\u672c\u5730\u63d2\u4ef6\u6216\u662f\u901a\u8fc7 `pip` \u5b89\u88c5\u7684\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n module_path: \u63d2\u4ef6\u540d\u79f0 `path.to.your.plugin` \u6216\u63d2\u4ef6\u8def\u5f84 `pathlib.Path(path/to/your/plugin)`\n \"\"\"\n module_path = (\n path_to_module_name(module_path)\n if isinstance(module_path, Path)\n else module_path\n )\n manager = PluginManager([module_path])\n _managers.append(manager)\n return manager.load_plugin(module_path)\n\n\ndef load_plugins(*plugin_dir: str) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6587\u4ef6\u5939\u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n plugin_dir: \u6587\u4ef6\u5939\u8def\u5f84\n \"\"\"\n manager = PluginManager(search_path=plugin_dir)\n _managers.append(manager)\n return manager.load_all_plugins()\n\n\ndef load_all_plugins(\n module_path: Iterable[str], plugin_dir: Iterable[str]\n) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6307\u5b9a\u5217\u8868\u4e2d\u7684\u63d2\u4ef6\u4ee5\u53ca\u6307\u5b9a\u76ee\u5f55\u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n module_path: \u6307\u5b9a\u63d2\u4ef6\u96c6\u5408\n plugin_dir: \u6307\u5b9a\u6587\u4ef6\u5939\u8def\u5f84\u96c6\u5408\n \"\"\"\n manager = PluginManager(module_path, plugin_dir)\n _managers.append(manager)\n return manager.load_all_plugins()\n\n\ndef load_from_json(file_path: str, encoding: str = \"utf-8\") -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6307\u5b9a json \u6587\u4ef6\u4e2d\u7684 `plugins` \u4ee5\u53ca `plugin_dirs` \u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n file_path: \u6307\u5b9a json \u6587\u4ef6\u8def\u5f84\n encoding: \u6307\u5b9a json \u6587\u4ef6\u7f16\u7801\n\n \u7528\u6cd5:\n ```json title=plugins.json\n {\n \"plugins\": [\"some_plugin\"],\n \"plugin_dirs\": [\"some_dir\"]\n }\n ```\n\n ```python\n nonebot.load_from_json(\"plugins.json\")\n ```\n \"\"\"\n with open(file_path, \"r\", encoding=encoding) as f:\n data = json.load(f)\n if not isinstance(data, dict):\n raise TypeError(\"json file must contains a dict!\")\n plugins = data.get(\"plugins\")\n plugin_dirs = data.get(\"plugin_dirs\")\n assert isinstance(plugins, list), \"plugins must be a list of plugin name\"\n assert isinstance(plugin_dirs, list), \"plugin_dirs must be a list of directories\"\n return load_all_plugins(set(plugins), set(plugin_dirs))\n\n\ndef load_from_toml(file_path: str, encoding: str = \"utf-8\") -> Set[Plugin]:\n 
\"\"\"\u5bfc\u5165\u6307\u5b9a toml \u6587\u4ef6 `[tool.nonebot]` \u4e2d\u7684 `plugins` \u4ee5\u53ca `plugin_dirs` \u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n file_path: \u6307\u5b9a toml \u6587\u4ef6\u8def\u5f84\n encoding: \u6307\u5b9a toml \u6587\u4ef6\u7f16\u7801\n\n \u7528\u6cd5:\n ```toml title=pyproject.toml\n [tool.nonebot]\n plugins = [\"some_plugin\"]\n plugin_dirs = [\"some_dir\"]\n ```\n\n ```python\n nonebot.load_from_toml(\"pyproject.toml\")\n ```\n \"\"\"\n with open(file_path, \"r\", encoding=encoding) as f:\n data = tomllib.loads(f.read())\n\n nonebot_data = data.get(\"tool\", {}).get(\"nonebot\")\n if nonebot_data is None:\n raise ValueError(\"Cannot find '[tool.nonebot]' in given toml file!\")\n if not isinstance(nonebot_data, dict):\n raise TypeError(\"'[tool.nonebot]' must be a Table!\")\n plugins = nonebot_data.get(\"plugins\", [])\n plugin_dirs = nonebot_data.get(\"plugin_dirs\", [])\n assert isinstance(plugins, list), \"plugins must be a list of plugin name\"\n assert isinstance(plugin_dirs, list), \"plugin_dirs must be a list of directories\"\n return load_all_plugins(plugins, plugin_dirs)\n\n\ndef load_builtin_plugin(name: str) -> Optional[Plugin]:\n \"\"\"\u5bfc\u5165 NoneBot \u5185\u7f6e\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n name: \u63d2\u4ef6\u540d\u79f0\n \"\"\"\n return load_plugin(f\"nonebot.plugins.{name}\")\n\n\ndef load_builtin_plugins(*plugins: str) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u591a\u4e2a NoneBot \u5185\u7f6e\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n plugins: \u63d2\u4ef6\u540d\u79f0\u5217\u8868\n \"\"\"\n return load_all_plugins([f\"nonebot.plugins.{p}\" for p in plugins], [])\n\n\ndef _find_manager_by_name(name: str) -> Optional[PluginManager]:\n for manager in reversed(_managers):\n if name in manager.plugins or name in manager.searched_plugins:\n return manager\n\n\ndef require(name: str) -> ModuleType:\n \"\"\"\u83b7\u53d6\u4e00\u4e2a\u63d2\u4ef6\u7684\u5bfc\u51fa\u5185\u5bb9\u3002\n\n \u5982\u679c\u4e3a `load_plugins` \u6587\u4ef6\u5939\u5bfc\u5165\u7684\u63d2\u4ef6\uff0c\u5219\u4e3a\u6587\u4ef6(\u5939)\u540d\u3002\n\n \u53c2\u6570:\n name: \u63d2\u4ef6\u540d\uff0c\u5373 {ref}`nonebot.plugin.plugin.Plugin.name`\u3002\n\n \u5f02\u5e38:\n RuntimeError: \u63d2\u4ef6\u65e0\u6cd5\u52a0\u8f7d\n \"\"\"\n plugin = get_plugin(_module_name_to_plugin_name(name))\n if not plugin:\n if manager := _find_manager_by_name(name):\n plugin = manager.load_plugin(name)\n else:\n plugin = load_plugin(name)\n if not plugin:\n raise RuntimeError(f'Cannot load plugin \"{name}\"!')\n return plugin.module\n", "path": "nonebot/plugin/load.py"}]}
| 2,020 | 272 |
gh_patches_debug_44433
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-1281
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change 'name' parameter to job constructors -> 'job_name'
See: https://github.com/GoogleCloudPlatform/gcloud-python/pull/1263/files#r46738307
--- END ISSUE ---
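As a rough illustration of what the rename means for callers (`client`, the job ids, and the table objects below are made-up placeholders), positional call sites keep working and only keyword-style call sites change:

```python
# positional call sites are unaffected by the rename
job = client.copy_table('copy-2015-12-07', destination_table, source_table)

# keyword call sites switch from name=... to job_name=...
job = client.run_async_query(job_name='query-2015-12-07', query='SELECT 1')
```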
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gcloud/bigquery/client.py`
Content:
```
1 # Copyright 2015 Google Inc. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Client for interacting with the Google BigQuery API."""
16
17
18 from gcloud.client import JSONClient
19 from gcloud.bigquery.connection import Connection
20 from gcloud.bigquery.dataset import Dataset
21 from gcloud.bigquery.job import CopyJob
22 from gcloud.bigquery.job import ExtractTableToStorageJob
23 from gcloud.bigquery.job import LoadTableFromStorageJob
24 from gcloud.bigquery.job import RunAsyncQueryJob
25 from gcloud.bigquery.job import RunSyncQueryJob
26
27
28 class Client(JSONClient):
29 """Client to bundle configuration needed for API requests.
30
31 :type project: string
32 :param project: the project which the client acts on behalf of. Will be
33 passed when creating a dataset / job. If not passed,
34 falls back to the default inferred from the environment.
35
36 :type credentials: :class:`oauth2client.client.OAuth2Credentials` or
37 :class:`NoneType`
38 :param credentials: The OAuth2 Credentials to use for the connection
39 owned by this client. If not passed (and if no ``http``
40 object is passed), falls back to the default inferred
41 from the environment.
42
43 :type http: :class:`httplib2.Http` or class that defines ``request()``.
44 :param http: An optional HTTP object to make requests. If not passed, an
45 ``http`` object is created that is bound to the
46 ``credentials`` for the current object.
47 """
48
49 _connection_class = Connection
50
51 def list_datasets(self, include_all=False, max_results=None,
52 page_token=None):
53 """List datasets for the project associated with this client.
54
55 See:
56 https://cloud.google.com/bigquery/docs/reference/v2/datasets/list
57
58 :type include_all: boolean
59 :param include_all: True if results include hidden datasets.
60
61 :type max_results: int
62 :param max_results: maximum number of datasets to return, If not
63 passed, defaults to a value set by the API.
64
65 :type page_token: string
66 :param page_token: opaque marker for the next "page" of datasets. If
67 not passed, the API will return the first page of
68 datasets.
69
70 :rtype: tuple, (list, str)
71 :returns: list of :class:`gcloud.bigquery.dataset.Dataset`, plus a
72 "next page token" string: if the token is not None,
73 indicates that more datasets can be retrieved with another
74 call (pass that value as ``page_token``).
75 """
76 params = {}
77
78 if include_all:
79 params['all'] = True
80
81 if max_results is not None:
82 params['maxResults'] = max_results
83
84 if page_token is not None:
85 params['pageToken'] = page_token
86
87 path = '/projects/%s/datasets' % (self.project,)
88 resp = self.connection.api_request(method='GET', path=path,
89 query_params=params)
90 datasets = [Dataset.from_api_repr(resource, self)
91 for resource in resp['datasets']]
92 return datasets, resp.get('nextPageToken')
93
94 def dataset(self, name):
95 """Construct a dataset bound to this client.
96
97 :type name: string
98 :param name: Name of the dataset.
99
100 :rtype: :class:`gcloud.bigquery.dataset.Dataset`
101 :returns: a new ``Dataset`` instance
102 """
103 return Dataset(name, client=self)
104
105 def load_table_from_storage(self, name, destination, *source_uris):
106 """Construct a job for loading data into a table from CloudStorage.
107
108 :type name: string
109 :param name: Name of the job.
110
111 :type destination: :class:`gcloud.bigquery.table.Table`
112 :param destination: Table into which data is to be loaded.
113
114 :type source_uris: sequence of string
115 :param source_uris: URIs of data files to be loaded.
116
117 :rtype: :class:`gcloud.bigquery.job.LoadTableFromStorageJob`
118 :returns: a new ``LoadTableFromStorageJob`` instance
119 """
120 return LoadTableFromStorageJob(name, destination, source_uris,
121 client=self)
122
123 def copy_table(self, name, destination, *sources):
124 """Construct a job for copying one or more tables into another table.
125
126 :type name: string
127 :param name: Name of the job.
128
129 :type destination: :class:`gcloud.bigquery.table.Table`
130 :param destination: Table into which data is to be copied.
131
132 :type sources: sequence of :class:`gcloud.bigquery.table.Table`
133 :param sources: tables to be copied.
134
135 :rtype: :class:`gcloud.bigquery.job.CopyJob`
136 :returns: a new ``CopyJob`` instance
137 """
138 return CopyJob(name, destination, sources, client=self)
139
140 def extract_table_to_storage(self, name, source, *destination_uris):
141 """Construct a job for extracting a table into Cloud Storage files.
142
143 :type name: string
144 :param name: Name of the job.
145
146 :type source: :class:`gcloud.bigquery.table.Table`
147 :param source: table to be extracted.
148
149 :type destination_uris: sequence of string
150 :param destination_uris: URIs of CloudStorage file(s) into which
151 table data is to be extracted.
152
153 :rtype: :class:`gcloud.bigquery.job.ExtractTableToStorageJob`
154 :returns: a new ``ExtractTableToStorageJob`` instance
155 """
156 return ExtractTableToStorageJob(name, source, destination_uris,
157 client=self)
158
159 def run_async_query(self, name, query):
160 """Construct a job for running a SQL query asynchronously.
161
162 :type name: string
163 :param name: Name of the job.
164
165 :type query: string
166 :param query: SQL query to be executed
167
168 :rtype: :class:`gcloud.bigquery.job.RunAsyncQueryJob`
169 :returns: a new ``RunAsyncQueryJob`` instance
170 """
171 return RunAsyncQueryJob(name, query, client=self)
172
173 def run_sync_query(self, query):
174 """Construct a job for running a SQL query synchronously.
175
176 :type query: string
177 :param query: SQL query to be executed
178
179 :rtype: :class:`gcloud.bigquery.job.RunSyncQueryJob`
180 :returns: a new ``RunSyncQueryJob`` instance
181 """
182 return RunSyncQueryJob(query, client=self)
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gcloud/bigquery/client.py b/gcloud/bigquery/client.py
--- a/gcloud/bigquery/client.py
+++ b/gcloud/bigquery/client.py
@@ -91,22 +91,22 @@
for resource in resp['datasets']]
return datasets, resp.get('nextPageToken')
- def dataset(self, name):
+ def dataset(self, dataset_name):
"""Construct a dataset bound to this client.
- :type name: string
- :param name: Name of the dataset.
+ :type dataset_name: string
+ :param dataset_name: Name of the dataset.
:rtype: :class:`gcloud.bigquery.dataset.Dataset`
:returns: a new ``Dataset`` instance
"""
- return Dataset(name, client=self)
+ return Dataset(dataset_name, client=self)
- def load_table_from_storage(self, name, destination, *source_uris):
+ def load_table_from_storage(self, job_name, destination, *source_uris):
"""Construct a job for loading data into a table from CloudStorage.
- :type name: string
- :param name: Name of the job.
+ :type job_name: string
+ :param job_name: Name of the job.
:type destination: :class:`gcloud.bigquery.table.Table`
:param destination: Table into which data is to be loaded.
@@ -117,14 +117,14 @@
:rtype: :class:`gcloud.bigquery.job.LoadTableFromStorageJob`
:returns: a new ``LoadTableFromStorageJob`` instance
"""
- return LoadTableFromStorageJob(name, destination, source_uris,
+ return LoadTableFromStorageJob(job_name, destination, source_uris,
client=self)
- def copy_table(self, name, destination, *sources):
+ def copy_table(self, job_name, destination, *sources):
"""Construct a job for copying one or more tables into another table.
- :type name: string
- :param name: Name of the job.
+ :type job_name: string
+ :param job_name: Name of the job.
:type destination: :class:`gcloud.bigquery.table.Table`
:param destination: Table into which data is to be copied.
@@ -135,13 +135,13 @@
:rtype: :class:`gcloud.bigquery.job.CopyJob`
:returns: a new ``CopyJob`` instance
"""
- return CopyJob(name, destination, sources, client=self)
+ return CopyJob(job_name, destination, sources, client=self)
- def extract_table_to_storage(self, name, source, *destination_uris):
+ def extract_table_to_storage(self, job_name, source, *destination_uris):
"""Construct a job for extracting a table into Cloud Storage files.
- :type name: string
- :param name: Name of the job.
+ :type job_name: string
+ :param job_name: Name of the job.
:type source: :class:`gcloud.bigquery.table.Table`
:param source: table to be extracted.
@@ -153,14 +153,14 @@
:rtype: :class:`gcloud.bigquery.job.ExtractTableToStorageJob`
:returns: a new ``ExtractTableToStorageJob`` instance
"""
- return ExtractTableToStorageJob(name, source, destination_uris,
+ return ExtractTableToStorageJob(job_name, source, destination_uris,
client=self)
- def run_async_query(self, name, query):
+ def run_async_query(self, job_name, query):
"""Construct a job for running a SQL query asynchronously.
- :type name: string
- :param name: Name of the job.
+ :type job_name: string
+ :param job_name: Name of the job.
:type query: string
:param query: SQL query to be executed
@@ -168,7 +168,7 @@
:rtype: :class:`gcloud.bigquery.job.RunAsyncQueryJob`
:returns: a new ``RunAsyncQueryJob`` instance
"""
- return RunAsyncQueryJob(name, query, client=self)
+ return RunAsyncQueryJob(job_name, query, client=self)
def run_sync_query(self, query):
"""Construct a job for running a SQL query synchronously.
|
{"golden_diff": "diff --git a/gcloud/bigquery/client.py b/gcloud/bigquery/client.py\n--- a/gcloud/bigquery/client.py\n+++ b/gcloud/bigquery/client.py\n@@ -91,22 +91,22 @@\n for resource in resp['datasets']]\n return datasets, resp.get('nextPageToken')\n \n- def dataset(self, name):\n+ def dataset(self, dataset_name):\n \"\"\"Construct a dataset bound to this client.\n \n- :type name: string\n- :param name: Name of the dataset.\n+ :type dataset_name: string\n+ :param dataset_name: Name of the dataset.\n \n :rtype: :class:`gcloud.bigquery.dataset.Dataset`\n :returns: a new ``Dataset`` instance\n \"\"\"\n- return Dataset(name, client=self)\n+ return Dataset(dataset_name, client=self)\n \n- def load_table_from_storage(self, name, destination, *source_uris):\n+ def load_table_from_storage(self, job_name, destination, *source_uris):\n \"\"\"Construct a job for loading data into a table from CloudStorage.\n \n- :type name: string\n- :param name: Name of the job.\n+ :type job_name: string\n+ :param job_name: Name of the job.\n \n :type destination: :class:`gcloud.bigquery.table.Table`\n :param destination: Table into which data is to be loaded.\n@@ -117,14 +117,14 @@\n :rtype: :class:`gcloud.bigquery.job.LoadTableFromStorageJob`\n :returns: a new ``LoadTableFromStorageJob`` instance\n \"\"\"\n- return LoadTableFromStorageJob(name, destination, source_uris,\n+ return LoadTableFromStorageJob(job_name, destination, source_uris,\n client=self)\n \n- def copy_table(self, name, destination, *sources):\n+ def copy_table(self, job_name, destination, *sources):\n \"\"\"Construct a job for copying one or more tables into another table.\n \n- :type name: string\n- :param name: Name of the job.\n+ :type job_name: string\n+ :param job_name: Name of the job.\n \n :type destination: :class:`gcloud.bigquery.table.Table`\n :param destination: Table into which data is to be copied.\n@@ -135,13 +135,13 @@\n :rtype: :class:`gcloud.bigquery.job.CopyJob`\n :returns: a new ``CopyJob`` instance\n \"\"\"\n- return CopyJob(name, destination, sources, client=self)\n+ return CopyJob(job_name, destination, sources, client=self)\n \n- def extract_table_to_storage(self, name, source, *destination_uris):\n+ def extract_table_to_storage(self, job_name, source, *destination_uris):\n \"\"\"Construct a job for extracting a table into Cloud Storage files.\n \n- :type name: string\n- :param name: Name of the job.\n+ :type job_name: string\n+ :param job_name: Name of the job.\n \n :type source: :class:`gcloud.bigquery.table.Table`\n :param source: table to be extracted.\n@@ -153,14 +153,14 @@\n :rtype: :class:`gcloud.bigquery.job.ExtractTableToStorageJob`\n :returns: a new ``ExtractTableToStorageJob`` instance\n \"\"\"\n- return ExtractTableToStorageJob(name, source, destination_uris,\n+ return ExtractTableToStorageJob(job_name, source, destination_uris,\n client=self)\n \n- def run_async_query(self, name, query):\n+ def run_async_query(self, job_name, query):\n \"\"\"Construct a job for running a SQL query asynchronously.\n \n- :type name: string\n- :param name: Name of the job.\n+ :type job_name: string\n+ :param job_name: Name of the job.\n \n :type query: string\n :param query: SQL query to be executed\n@@ -168,7 +168,7 @@\n :rtype: :class:`gcloud.bigquery.job.RunAsyncQueryJob`\n :returns: a new ``RunAsyncQueryJob`` instance\n \"\"\"\n- return RunAsyncQueryJob(name, query, client=self)\n+ return RunAsyncQueryJob(job_name, query, client=self)\n \n def run_sync_query(self, query):\n \"\"\"Construct a job for running a SQL query 
synchronously.\n", "issue": "Change 'name' parameter to job constructors -> 'job_name'\nSee: https://github.com/GoogleCloudPlatform/gcloud-python/pull/1263/files#r46738307\n\n", "before_files": [{"content": "# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Client for interacting with the Google BigQuery API.\"\"\"\n\n\nfrom gcloud.client import JSONClient\nfrom gcloud.bigquery.connection import Connection\nfrom gcloud.bigquery.dataset import Dataset\nfrom gcloud.bigquery.job import CopyJob\nfrom gcloud.bigquery.job import ExtractTableToStorageJob\nfrom gcloud.bigquery.job import LoadTableFromStorageJob\nfrom gcloud.bigquery.job import RunAsyncQueryJob\nfrom gcloud.bigquery.job import RunSyncQueryJob\n\n\nclass Client(JSONClient):\n \"\"\"Client to bundle configuration needed for API requests.\n\n :type project: string\n :param project: the project which the client acts on behalf of. Will be\n passed when creating a dataset / job. If not passed,\n falls back to the default inferred from the environment.\n\n :type credentials: :class:`oauth2client.client.OAuth2Credentials` or\n :class:`NoneType`\n :param credentials: The OAuth2 Credentials to use for the connection\n owned by this client. If not passed (and if no ``http``\n object is passed), falls back to the default inferred\n from the environment.\n\n :type http: :class:`httplib2.Http` or class that defines ``request()``.\n :param http: An optional HTTP object to make requests. If not passed, an\n ``http`` object is created that is bound to the\n ``credentials`` for the current object.\n \"\"\"\n\n _connection_class = Connection\n\n def list_datasets(self, include_all=False, max_results=None,\n page_token=None):\n \"\"\"List datasets for the project associated with this client.\n\n See:\n https://cloud.google.com/bigquery/docs/reference/v2/datasets/list\n\n :type include_all: boolean\n :param include_all: True if results include hidden datasets.\n\n :type max_results: int\n :param max_results: maximum number of datasets to return, If not\n passed, defaults to a value set by the API.\n\n :type page_token: string\n :param page_token: opaque marker for the next \"page\" of datasets. 
If\n not passed, the API will return the first page of\n datasets.\n\n :rtype: tuple, (list, str)\n :returns: list of :class:`gcloud.bigquery.dataset.Dataset`, plus a\n \"next page token\" string: if the token is not None,\n indicates that more datasets can be retrieved with another\n call (pass that value as ``page_token``).\n \"\"\"\n params = {}\n\n if include_all:\n params['all'] = True\n\n if max_results is not None:\n params['maxResults'] = max_results\n\n if page_token is not None:\n params['pageToken'] = page_token\n\n path = '/projects/%s/datasets' % (self.project,)\n resp = self.connection.api_request(method='GET', path=path,\n query_params=params)\n datasets = [Dataset.from_api_repr(resource, self)\n for resource in resp['datasets']]\n return datasets, resp.get('nextPageToken')\n\n def dataset(self, name):\n \"\"\"Construct a dataset bound to this client.\n\n :type name: string\n :param name: Name of the dataset.\n\n :rtype: :class:`gcloud.bigquery.dataset.Dataset`\n :returns: a new ``Dataset`` instance\n \"\"\"\n return Dataset(name, client=self)\n\n def load_table_from_storage(self, name, destination, *source_uris):\n \"\"\"Construct a job for loading data into a table from CloudStorage.\n\n :type name: string\n :param name: Name of the job.\n\n :type destination: :class:`gcloud.bigquery.table.Table`\n :param destination: Table into which data is to be loaded.\n\n :type source_uris: sequence of string\n :param source_uris: URIs of data files to be loaded.\n\n :rtype: :class:`gcloud.bigquery.job.LoadTableFromStorageJob`\n :returns: a new ``LoadTableFromStorageJob`` instance\n \"\"\"\n return LoadTableFromStorageJob(name, destination, source_uris,\n client=self)\n\n def copy_table(self, name, destination, *sources):\n \"\"\"Construct a job for copying one or more tables into another table.\n\n :type name: string\n :param name: Name of the job.\n\n :type destination: :class:`gcloud.bigquery.table.Table`\n :param destination: Table into which data is to be copied.\n\n :type sources: sequence of :class:`gcloud.bigquery.table.Table`\n :param sources: tables to be copied.\n\n :rtype: :class:`gcloud.bigquery.job.CopyJob`\n :returns: a new ``CopyJob`` instance\n \"\"\"\n return CopyJob(name, destination, sources, client=self)\n\n def extract_table_to_storage(self, name, source, *destination_uris):\n \"\"\"Construct a job for extracting a table into Cloud Storage files.\n\n :type name: string\n :param name: Name of the job.\n\n :type source: :class:`gcloud.bigquery.table.Table`\n :param source: table to be extracted.\n\n :type destination_uris: sequence of string\n :param destination_uris: URIs of CloudStorage file(s) into which\n table data is to be extracted.\n\n :rtype: :class:`gcloud.bigquery.job.ExtractTableToStorageJob`\n :returns: a new ``ExtractTableToStorageJob`` instance\n \"\"\"\n return ExtractTableToStorageJob(name, source, destination_uris,\n client=self)\n\n def run_async_query(self, name, query):\n \"\"\"Construct a job for running a SQL query asynchronously.\n\n :type name: string\n :param name: Name of the job.\n\n :type query: string\n :param query: SQL query to be executed\n\n :rtype: :class:`gcloud.bigquery.job.RunAsyncQueryJob`\n :returns: a new ``RunAsyncQueryJob`` instance\n \"\"\"\n return RunAsyncQueryJob(name, query, client=self)\n\n def run_sync_query(self, query):\n \"\"\"Construct a job for running a SQL query synchronously.\n\n :type query: string\n :param query: SQL query to be executed\n\n :rtype: :class:`gcloud.bigquery.job.RunSyncQueryJob`\n :returns: 
a new ``RunSyncQueryJob`` instance\n \"\"\"\n return RunSyncQueryJob(query, client=self)\n", "path": "gcloud/bigquery/client.py"}], "after_files": [{"content": "# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Client for interacting with the Google BigQuery API.\"\"\"\n\n\nfrom gcloud.client import JSONClient\nfrom gcloud.bigquery.connection import Connection\nfrom gcloud.bigquery.dataset import Dataset\nfrom gcloud.bigquery.job import CopyJob\nfrom gcloud.bigquery.job import ExtractTableToStorageJob\nfrom gcloud.bigquery.job import LoadTableFromStorageJob\nfrom gcloud.bigquery.job import RunAsyncQueryJob\nfrom gcloud.bigquery.job import RunSyncQueryJob\n\n\nclass Client(JSONClient):\n \"\"\"Client to bundle configuration needed for API requests.\n\n :type project: string\n :param project: the project which the client acts on behalf of. Will be\n passed when creating a dataset / job. If not passed,\n falls back to the default inferred from the environment.\n\n :type credentials: :class:`oauth2client.client.OAuth2Credentials` or\n :class:`NoneType`\n :param credentials: The OAuth2 Credentials to use for the connection\n owned by this client. If not passed (and if no ``http``\n object is passed), falls back to the default inferred\n from the environment.\n\n :type http: :class:`httplib2.Http` or class that defines ``request()``.\n :param http: An optional HTTP object to make requests. If not passed, an\n ``http`` object is created that is bound to the\n ``credentials`` for the current object.\n \"\"\"\n\n _connection_class = Connection\n\n def list_datasets(self, include_all=False, max_results=None,\n page_token=None):\n \"\"\"List datasets for the project associated with this client.\n\n See:\n https://cloud.google.com/bigquery/docs/reference/v2/datasets/list\n\n :type include_all: boolean\n :param include_all: True if results include hidden datasets.\n\n :type max_results: int\n :param max_results: maximum number of datasets to return, If not\n passed, defaults to a value set by the API.\n\n :type page_token: string\n :param page_token: opaque marker for the next \"page\" of datasets. 
If\n not passed, the API will return the first page of\n datasets.\n\n :rtype: tuple, (list, str)\n :returns: list of :class:`gcloud.bigquery.dataset.Dataset`, plus a\n \"next page token\" string: if the token is not None,\n indicates that more datasets can be retrieved with another\n call (pass that value as ``page_token``).\n \"\"\"\n params = {}\n\n if include_all:\n params['all'] = True\n\n if max_results is not None:\n params['maxResults'] = max_results\n\n if page_token is not None:\n params['pageToken'] = page_token\n\n path = '/projects/%s/datasets' % (self.project,)\n resp = self.connection.api_request(method='GET', path=path,\n query_params=params)\n datasets = [Dataset.from_api_repr(resource, self)\n for resource in resp['datasets']]\n return datasets, resp.get('nextPageToken')\n\n def dataset(self, dataset_name):\n \"\"\"Construct a dataset bound to this client.\n\n :type dataset_name: string\n :param dataset_name: Name of the dataset.\n\n :rtype: :class:`gcloud.bigquery.dataset.Dataset`\n :returns: a new ``Dataset`` instance\n \"\"\"\n return Dataset(dataset_name, client=self)\n\n def load_table_from_storage(self, job_name, destination, *source_uris):\n \"\"\"Construct a job for loading data into a table from CloudStorage.\n\n :type job_name: string\n :param job_name: Name of the job.\n\n :type destination: :class:`gcloud.bigquery.table.Table`\n :param destination: Table into which data is to be loaded.\n\n :type source_uris: sequence of string\n :param source_uris: URIs of data files to be loaded.\n\n :rtype: :class:`gcloud.bigquery.job.LoadTableFromStorageJob`\n :returns: a new ``LoadTableFromStorageJob`` instance\n \"\"\"\n return LoadTableFromStorageJob(job_name, destination, source_uris,\n client=self)\n\n def copy_table(self, job_name, destination, *sources):\n \"\"\"Construct a job for copying one or more tables into another table.\n\n :type job_name: string\n :param job_name: Name of the job.\n\n :type destination: :class:`gcloud.bigquery.table.Table`\n :param destination: Table into which data is to be copied.\n\n :type sources: sequence of :class:`gcloud.bigquery.table.Table`\n :param sources: tables to be copied.\n\n :rtype: :class:`gcloud.bigquery.job.CopyJob`\n :returns: a new ``CopyJob`` instance\n \"\"\"\n return CopyJob(job_name, destination, sources, client=self)\n\n def extract_table_to_storage(self, job_name, source, *destination_uris):\n \"\"\"Construct a job for extracting a table into Cloud Storage files.\n\n :type job_name: string\n :param job_name: Name of the job.\n\n :type source: :class:`gcloud.bigquery.table.Table`\n :param source: table to be extracted.\n\n :type destination_uris: sequence of string\n :param destination_uris: URIs of CloudStorage file(s) into which\n table data is to be extracted.\n\n :rtype: :class:`gcloud.bigquery.job.ExtractTableToStorageJob`\n :returns: a new ``ExtractTableToStorageJob`` instance\n \"\"\"\n return ExtractTableToStorageJob(job_name, source, destination_uris,\n client=self)\n\n def run_async_query(self, job_name, query):\n \"\"\"Construct a job for running a SQL query asynchronously.\n\n :type job_name: string\n :param job_name: Name of the job.\n\n :type query: string\n :param query: SQL query to be executed\n\n :rtype: :class:`gcloud.bigquery.job.RunAsyncQueryJob`\n :returns: a new ``RunAsyncQueryJob`` instance\n \"\"\"\n return RunAsyncQueryJob(job_name, query, client=self)\n\n def run_sync_query(self, query):\n \"\"\"Construct a job for running a SQL query synchronously.\n\n :type query: string\n :param 
query: SQL query to be executed\n\n :rtype: :class:`gcloud.bigquery.job.RunSyncQueryJob`\n :returns: a new ``RunSyncQueryJob`` instance\n \"\"\"\n return RunSyncQueryJob(query, client=self)\n", "path": "gcloud/bigquery/client.py"}]}
| 2,296 | 992 |
gh_patches_debug_25092
|
rasdani/github-patches
|
git_diff
|
jupyterhub__jupyterhub-1413
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fractional memory / CPU limits / guarantees fail
**How to reproduce the issue**
Set memory limit (or guarantee, or cpu limit / guarantee) to a non-integral spec:
```python
c.Spawner.mem_limit = "1.5G"
```
**What you expected to happen**
(In supported spawners) memory limit is set to 1.5 gigabytes of RAM
**What actually happens**
JupyterHub refuses to start, with:
```
[E 2017-04-18 05:39:02.270 JupyterHub app:1527]
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/jupyterhub/app.py", line 1524, in launch_instance_async
yield self.initialize(argv)
File "/usr/local/lib/python3.4/dist-packages/jupyterhub/app.py", line 1315, in initialize
yield self.init_spawners()
File "/usr/local/lib/python3.4/dist-packages/jupyterhub/app.py", line 1084, in init_spawners
self.users[orm_user.id] = user = User(orm_user, self.tornado_settings)
File "/usr/local/lib/python3.4/dist-packages/jupyterhub/user.py", line 128, in __init__
config=self.settings.get('config'),
File "/usr/local/lib/python3.4/dist-packages/kubespawner/spawner.py", line 29, in __init__
super().__init__(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/jupyterhub/spawner.py", line 345, in __init__
super(Spawner, self).__init__(**kwargs)
File "/usr/local/lib/python3.4/dist-packages/traitlets/config/configurable.py", line 84, in __init__
self.config = config
File "/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py", line 585, in __set__
self.set(obj, value)
File "/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py", line 574, in set
obj._notify_trait(self.name, old_value, new_value)
File "/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py", line 1139, in _notify_trait
type='change',
File "/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py", line 1176, in notify_change
c(change)
File "/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py", line 819, in compatible_observer
return func(self, change)
File "/usr/local/lib/python3.4/dist-packages/traitlets/config/configurable.py", line 186, in _config_changed
self._load_config(change.new, traits=traits, section_names=section_names)
File "/usr/local/lib/python3.4/dist-packages/traitlets/config/configurable.py", line 153, in _load_config
setattr(self, name, deepcopy(config_value))
File "/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py", line 585, in __set__
self.set(obj, value)
File "/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py", line 559, in set
new_value = self._validate(obj, value)
File "/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py", line 591, in _validate
value = self.validate(obj, value)
File "/usr/local/lib/python3.4/dist-packages/jupyterhub/traitlets.py", line 71, in validate
return int(num) * ByteSpecification.UNIT_SUFFIXES[suffix]
ValueError: invalid literal for int() with base 10: '1.5'
```
**Share what version of JupyterHub you are using**
0.72.
--- END ISSUE ---
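The behaviour the report asks for can be sketched independently of traitlets — accept plain numbers as byte counts and parse fractional suffixed strings with `float` instead of `int`. This is an illustration of the parsing idea only, not JupyterHub's actual patch:

```python
UNIT_SUFFIXES = {'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3, 'T': 1024 ** 4}


def parse_byte_spec(value):
    """Turn 1024, '512M' or '1.5G' into an integer number of bytes."""
    if isinstance(value, (int, float)):
        return int(value)
    num, suffix = value[:-1], value[-1]
    if suffix not in UNIT_SUFFIXES:
        raise ValueError('%r is not a valid memory specification' % (value,))
    return int(float(num) * UNIT_SUFFIXES[suffix])


assert parse_byte_spec('1.5G') == int(1.5 * 1024 ** 3)
```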
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/traitlets.py`
Content:
```
1 """
2 Traitlets that are used in JupyterHub
3 """
4 # Copyright (c) Jupyter Development Team.
5 # Distributed under the terms of the Modified BSD License.
6
7 from traitlets import List, Unicode, Integer, TraitError
8
9
10 class URLPrefix(Unicode):
11 def validate(self, obj, value):
12 u = super().validate(obj, value)
13 if not u.startswith('/'):
14 u = '/' + u
15 if not u.endswith('/'):
16 u = u + '/'
17 return u
18
19
20 class Command(List):
21 """Traitlet for a command that should be a list of strings,
22 but allows it to be specified as a single string.
23 """
24 def __init__(self, default_value=None, **kwargs):
25 kwargs.setdefault('minlen', 1)
26 if isinstance(default_value, str):
27 default_value = [default_value]
28 super().__init__(Unicode(), default_value, **kwargs)
29
30 def validate(self, obj, value):
31 if isinstance(value, str):
32 value = [value]
33 return super().validate(obj, value)
34
35
36 class ByteSpecification(Integer):
37 """
38 Allow easily specifying bytes in units of 1024 with suffixes
39
40 Suffixes allowed are:
41 - K -> Kilobyte
42 - M -> Megabyte
43 - G -> Gigabyte
44 - T -> Terabyte
45 """
46
47 UNIT_SUFFIXES = {
48 'K': 1024,
49 'M': 1024 * 1024,
50 'G': 1024 * 1024 * 1024,
51 'T': 1024 * 1024 * 1024 * 1024
52 }
53
54 # Default to allowing None as a value
55 allow_none = True
56
57 def validate(self, obj, value):
58 """
59 Validate that the passed in value is a valid memory specification
60
61 It could either be a pure int, when it is taken as a byte value.
62 If it has one of the suffixes, it is converted into the appropriate
63 pure byte value.
64 """
65 if isinstance(value, int):
66 return value
67 num = value[:-1]
68 suffix = value[-1]
69 if not num.isdigit() and suffix not in ByteSpecification.UNIT_SUFFIXES:
70 raise TraitError('{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(val=value))
71 else:
72 return int(num) * ByteSpecification.UNIT_SUFFIXES[suffix]
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/jupyterhub/traitlets.py b/jupyterhub/traitlets.py
--- a/jupyterhub/traitlets.py
+++ b/jupyterhub/traitlets.py
@@ -48,7 +48,7 @@
'K': 1024,
'M': 1024 * 1024,
'G': 1024 * 1024 * 1024,
- 'T': 1024 * 1024 * 1024 * 1024
+ 'T': 1024 * 1024 * 1024 * 1024,
}
# Default to allowing None as a value
@@ -62,11 +62,15 @@
If it has one of the suffixes, it is converted into the appropriate
pure byte value.
"""
- if isinstance(value, int):
- return value
- num = value[:-1]
+ if isinstance(value, (int, float)):
+ return int(value)
+
+ try:
+ num = float(value[:-1])
+ except ValueError:
+ raise TraitError('{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(val=value))
suffix = value[-1]
- if not num.isdigit() and suffix not in ByteSpecification.UNIT_SUFFIXES:
+ if suffix not in self.UNIT_SUFFIXES:
raise TraitError('{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(val=value))
else:
- return int(num) * ByteSpecification.UNIT_SUFFIXES[suffix]
+ return int(float(num) * self.UNIT_SUFFIXES[suffix])
|
{"golden_diff": "diff --git a/jupyterhub/traitlets.py b/jupyterhub/traitlets.py\n--- a/jupyterhub/traitlets.py\n+++ b/jupyterhub/traitlets.py\n@@ -48,7 +48,7 @@\n 'K': 1024,\n 'M': 1024 * 1024,\n 'G': 1024 * 1024 * 1024,\n- 'T': 1024 * 1024 * 1024 * 1024\n+ 'T': 1024 * 1024 * 1024 * 1024,\n }\n \n # Default to allowing None as a value\n@@ -62,11 +62,15 @@\n If it has one of the suffixes, it is converted into the appropriate\n pure byte value.\n \"\"\"\n- if isinstance(value, int):\n- return value\n- num = value[:-1]\n+ if isinstance(value, (int, float)):\n+ return int(value)\n+\n+ try:\n+ num = float(value[:-1])\n+ except ValueError:\n+ raise TraitError('{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(val=value))\n suffix = value[-1]\n- if not num.isdigit() and suffix not in ByteSpecification.UNIT_SUFFIXES:\n+ if suffix not in self.UNIT_SUFFIXES:\n raise TraitError('{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(val=value))\n else:\n- return int(num) * ByteSpecification.UNIT_SUFFIXES[suffix]\n+ return int(float(num) * self.UNIT_SUFFIXES[suffix])\n", "issue": "Fractional memory / CPU limits / guarantees fail\n**How to reproduce the issue**\r\n\r\nSet memory limit (or guarantee, or cpu limit / guarantee) to a non-integral spec:\r\n\r\n```python\r\nc.Spawner.mem_limit = \"1.5G\"\r\n```\r\n\r\n**What you expected to happen**\r\n\r\n(In supported spawners) memory limit is set to 1.5 gigabytes of RAM\r\n\r\n**What actually happens**\r\n\r\nJupyterHub refuses to start, with:\r\n\r\n```\r\n[E 2017-04-18 05:39:02.270 JupyterHub app:1527]\r\n Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.4/dist-packages/jupyterhub/app.py\", line 1524, in launch_instance_async\r\n yield self.initialize(argv)\r\n File \"/usr/local/lib/python3.4/dist-packages/jupyterhub/app.py\", line 1315, in initialize\r\n yield self.init_spawners()\r\n File \"/usr/local/lib/python3.4/dist-packages/jupyterhub/app.py\", line 1084, in init_spawners\r\n self.users[orm_user.id] = user = User(orm_user, self.tornado_settings)\r\n File \"/usr/local/lib/python3.4/dist-packages/jupyterhub/user.py\", line 128, in __init__\r\n config=self.settings.get('config'),\r\n File \"/usr/local/lib/python3.4/dist-packages/kubespawner/spawner.py\", line 29, in __init__\r\n super().__init__(*args, **kwargs)\r\n File \"/usr/local/lib/python3.4/dist-packages/jupyterhub/spawner.py\", line 345, in __init__\r\n super(Spawner, self).__init__(**kwargs)\r\n File \"/usr/local/lib/python3.4/dist-packages/traitlets/config/configurable.py\", line 84, in __init__\r\n self.config = config\r\n File \"/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py\", line 585, in __set__\r\n self.set(obj, value)\r\n File \"/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py\", line 574, in set\r\n obj._notify_trait(self.name, old_value, new_value)\r\n File \"/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py\", line 1139, in _notify_trait\r\n type='change',\r\n File \"/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py\", line 1176, in notify_change\r\n c(change)\r\n File \"/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py\", line 819, in compatible_observer\r\n return func(self, change)\r\n File \"/usr/local/lib/python3.4/dist-packages/traitlets/config/configurable.py\", line 186, in _config_changed\r\n self._load_config(change.new, traits=traits, section_names=section_names)\r\n File 
\"/usr/local/lib/python3.4/dist-packages/traitlets/config/configurable.py\", line 153, in _load_config\r\n setattr(self, name, deepcopy(config_value))\r\n File \"/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py\", line 585, in __set__\r\n self.set(obj, value)\r\n File \"/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py\", line 559, in set\r\n new_value = self._validate(obj, value)\r\n File \"/usr/local/lib/python3.4/dist-packages/traitlets/traitlets.py\", line 591, in _validate\r\n value = self.validate(obj, value)\r\n File \"/usr/local/lib/python3.4/dist-packages/jupyterhub/traitlets.py\", line 71, in validate\r\n return int(num) * ByteSpecification.UNIT_SUFFIXES[suffix]\r\n ValueError: invalid literal for int() with base 10: '1.5'\r\n```\r\n\r\n**Share what version of JupyterHub you are using**\r\n\r\n0.72.\n", "before_files": [{"content": "\"\"\"\nTraitlets that are used in JupyterHub\n\"\"\"\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom traitlets import List, Unicode, Integer, TraitError\n\n\nclass URLPrefix(Unicode):\n def validate(self, obj, value):\n u = super().validate(obj, value)\n if not u.startswith('/'):\n u = '/' + u\n if not u.endswith('/'):\n u = u + '/'\n return u\n\n\nclass Command(List):\n \"\"\"Traitlet for a command that should be a list of strings,\n but allows it to be specified as a single string.\n \"\"\"\n def __init__(self, default_value=None, **kwargs):\n kwargs.setdefault('minlen', 1)\n if isinstance(default_value, str):\n default_value = [default_value]\n super().__init__(Unicode(), default_value, **kwargs)\n\n def validate(self, obj, value):\n if isinstance(value, str):\n value = [value]\n return super().validate(obj, value)\n\n\nclass ByteSpecification(Integer):\n \"\"\"\n Allow easily specifying bytes in units of 1024 with suffixes\n\n Suffixes allowed are:\n - K -> Kilobyte\n - M -> Megabyte\n - G -> Gigabyte\n - T -> Terabyte\n \"\"\"\n\n UNIT_SUFFIXES = {\n 'K': 1024,\n 'M': 1024 * 1024,\n 'G': 1024 * 1024 * 1024,\n 'T': 1024 * 1024 * 1024 * 1024\n }\n\n # Default to allowing None as a value\n allow_none = True\n\n def validate(self, obj, value):\n \"\"\"\n Validate that the passed in value is a valid memory specification\n\n It could either be a pure int, when it is taken as a byte value.\n If it has one of the suffixes, it is converted into the appropriate\n pure byte value.\n \"\"\"\n if isinstance(value, int):\n return value\n num = value[:-1]\n suffix = value[-1]\n if not num.isdigit() and suffix not in ByteSpecification.UNIT_SUFFIXES:\n raise TraitError('{val} is not a valid memory specification. 
Must be an int or a string with suffix K, M, G, T'.format(val=value))\n else:\n return int(num) * ByteSpecification.UNIT_SUFFIXES[suffix]\n", "path": "jupyterhub/traitlets.py"}], "after_files": [{"content": "\"\"\"\nTraitlets that are used in JupyterHub\n\"\"\"\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom traitlets import List, Unicode, Integer, TraitError\n\n\nclass URLPrefix(Unicode):\n def validate(self, obj, value):\n u = super().validate(obj, value)\n if not u.startswith('/'):\n u = '/' + u\n if not u.endswith('/'):\n u = u + '/'\n return u\n\n\nclass Command(List):\n \"\"\"Traitlet for a command that should be a list of strings,\n but allows it to be specified as a single string.\n \"\"\"\n def __init__(self, default_value=None, **kwargs):\n kwargs.setdefault('minlen', 1)\n if isinstance(default_value, str):\n default_value = [default_value]\n super().__init__(Unicode(), default_value, **kwargs)\n\n def validate(self, obj, value):\n if isinstance(value, str):\n value = [value]\n return super().validate(obj, value)\n\n\nclass ByteSpecification(Integer):\n \"\"\"\n Allow easily specifying bytes in units of 1024 with suffixes\n\n Suffixes allowed are:\n - K -> Kilobyte\n - M -> Megabyte\n - G -> Gigabyte\n - T -> Terabyte\n \"\"\"\n\n UNIT_SUFFIXES = {\n 'K': 1024,\n 'M': 1024 * 1024,\n 'G': 1024 * 1024 * 1024,\n 'T': 1024 * 1024 * 1024 * 1024,\n }\n\n # Default to allowing None as a value\n allow_none = True\n\n def validate(self, obj, value):\n \"\"\"\n Validate that the passed in value is a valid memory specification\n\n It could either be a pure int, when it is taken as a byte value.\n If it has one of the suffixes, it is converted into the appropriate\n pure byte value.\n \"\"\"\n if isinstance(value, (int, float)):\n return int(value)\n\n try:\n num = float(value[:-1])\n except ValueError:\n raise TraitError('{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(val=value))\n suffix = value[-1]\n if suffix not in self.UNIT_SUFFIXES:\n raise TraitError('{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(val=value))\n else:\n return int(float(num) * self.UNIT_SUFFIXES[suffix])\n", "path": "jupyterhub/traitlets.py"}]}
| 1,874 | 413 |
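The patched `validate` above works for fractional specifications such as `"1.5G"` because the numeric part is parsed with `float` before the 1024-based multiplier is applied, and the suffix check no longer depends on `isdigit()`. A minimal standalone sketch of that parsing logic, reusing the same suffix table as the trait (the function name is illustrative, not part of JupyterHub):

```python
UNIT_SUFFIXES = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3, "T": 1024 ** 4}

def parse_byte_spec(value):
    # Plain numbers pass straight through as byte counts.
    if isinstance(value, (int, float)):
        return int(value)
    try:
        num = float(value[:-1])  # fractional values such as "1.5" are accepted
    except ValueError:
        raise ValueError(f"{value!r} is not a valid memory specification")
    suffix = value[-1]
    if suffix not in UNIT_SUFFIXES:
        raise ValueError(f"{value!r} is not a valid memory specification")
    return int(num * UNIT_SUFFIXES[suffix])

assert parse_byte_spec("1.5G") == int(1.5 * 1024 ** 3)
assert parse_byte_spec(512) == 512
```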
gh_patches_debug_59179
|
rasdani/github-patches
|
git_diff
|
TheAlgorithms__Python-1943
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Same name for an attribute and a function
Hi, I'm new to programming and I'm not sure if it's a problem, but the code(
Python/data_structures/queue/queue_on_list.py) have the same name for an attribute and a function.
```
class Queue:
def __init__(self):
self.entries = []
self.length = 0
self.front = 0
def front(self):
return self.entries[0]
```
When executed it gives me the error:
TypeError: 'int' object is not callable
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `data_structures/queue/queue_on_list.py`
Content:
```
1 """Queue represented by a Python list"""
2
3
4 class Queue:
5 def __init__(self):
6 self.entries = []
7 self.length = 0
8 self.front = 0
9
10 def __str__(self):
11 printed = "<" + str(self.entries)[1:-1] + ">"
12 return printed
13
14 """Enqueues {@code item}
15 @param item
16 item to enqueue"""
17
18 def put(self, item):
19 self.entries.append(item)
20 self.length = self.length + 1
21
22 """Dequeues {@code item}
23 @requirement: |self.length| > 0
24 @return dequeued
25 item that was dequeued"""
26
27 def get(self):
28 self.length = self.length - 1
29 dequeued = self.entries[self.front]
30 # self.front-=1
31 # self.entries = self.entries[self.front:]
32 self.entries = self.entries[1:]
33 return dequeued
34
35 """Rotates the queue {@code rotation} times
36 @param rotation
37 number of times to rotate queue"""
38
39 def rotate(self, rotation):
40 for i in range(rotation):
41 self.put(self.get())
42
43 """Enqueues {@code item}
44 @return item at front of self.entries"""
45
46 def front(self):
47 return self.entries[0]
48
49 """Returns the length of this.entries"""
50
51 def size(self):
52 return self.length
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/data_structures/queue/queue_on_list.py b/data_structures/queue/queue_on_list.py
--- a/data_structures/queue/queue_on_list.py
+++ b/data_structures/queue/queue_on_list.py
@@ -43,7 +43,7 @@
"""Enqueues {@code item}
@return item at front of self.entries"""
- def front(self):
+ def get_front(self):
return self.entries[0]
"""Returns the length of this.entries"""
|
{"golden_diff": "diff --git a/data_structures/queue/queue_on_list.py b/data_structures/queue/queue_on_list.py\n--- a/data_structures/queue/queue_on_list.py\n+++ b/data_structures/queue/queue_on_list.py\n@@ -43,7 +43,7 @@\n \"\"\"Enqueues {@code item}\r\n @return item at front of self.entries\"\"\"\r\n \r\n- def front(self):\r\n+ def get_front(self):\r\n return self.entries[0]\r\n \r\n \"\"\"Returns the length of this.entries\"\"\"\n", "issue": "Same name for an attribute and a function\nHi, I'm new to programming and I'm not sure if it's a problem, but the code(\r\nPython/data_structures/queue/queue_on_list.py) have the same name for an attribute and a function.\r\n```\r\nclass Queue:\r\n def __init__(self):\r\n self.entries = []\r\n self.length = 0\r\n self.front = 0\r\n\r\n def front(self):\r\n return self.entries[0]\r\n```\r\nWhen executed it gives me the error:\r\nTypeError: 'int' object is not callable\n", "before_files": [{"content": "\"\"\"Queue represented by a Python list\"\"\"\r\n\r\n\r\nclass Queue:\r\n def __init__(self):\r\n self.entries = []\r\n self.length = 0\r\n self.front = 0\r\n\r\n def __str__(self):\r\n printed = \"<\" + str(self.entries)[1:-1] + \">\"\r\n return printed\r\n\r\n \"\"\"Enqueues {@code item}\r\n @param item\r\n item to enqueue\"\"\"\r\n\r\n def put(self, item):\r\n self.entries.append(item)\r\n self.length = self.length + 1\r\n\r\n \"\"\"Dequeues {@code item}\r\n @requirement: |self.length| > 0\r\n @return dequeued\r\n item that was dequeued\"\"\"\r\n\r\n def get(self):\r\n self.length = self.length - 1\r\n dequeued = self.entries[self.front]\r\n # self.front-=1\r\n # self.entries = self.entries[self.front:]\r\n self.entries = self.entries[1:]\r\n return dequeued\r\n\r\n \"\"\"Rotates the queue {@code rotation} times\r\n @param rotation\r\n number of times to rotate queue\"\"\"\r\n\r\n def rotate(self, rotation):\r\n for i in range(rotation):\r\n self.put(self.get())\r\n\r\n \"\"\"Enqueues {@code item}\r\n @return item at front of self.entries\"\"\"\r\n\r\n def front(self):\r\n return self.entries[0]\r\n\r\n \"\"\"Returns the length of this.entries\"\"\"\r\n\r\n def size(self):\r\n return self.length\r\n", "path": "data_structures/queue/queue_on_list.py"}], "after_files": [{"content": "\"\"\"Queue represented by a Python list\"\"\"\r\n\r\n\r\nclass Queue:\r\n def __init__(self):\r\n self.entries = []\r\n self.length = 0\r\n self.front = 0\r\n\r\n def __str__(self):\r\n printed = \"<\" + str(self.entries)[1:-1] + \">\"\r\n return printed\r\n\r\n \"\"\"Enqueues {@code item}\r\n @param item\r\n item to enqueue\"\"\"\r\n\r\n def put(self, item):\r\n self.entries.append(item)\r\n self.length = self.length + 1\r\n\r\n \"\"\"Dequeues {@code item}\r\n @requirement: |self.length| > 0\r\n @return dequeued\r\n item that was dequeued\"\"\"\r\n\r\n def get(self):\r\n self.length = self.length - 1\r\n dequeued = self.entries[self.front]\r\n # self.front-=1\r\n # self.entries = self.entries[self.front:]\r\n self.entries = self.entries[1:]\r\n return dequeued\r\n\r\n \"\"\"Rotates the queue {@code rotation} times\r\n @param rotation\r\n number of times to rotate queue\"\"\"\r\n\r\n def rotate(self, rotation):\r\n for i in range(rotation):\r\n self.put(self.get())\r\n\r\n \"\"\"Enqueues {@code item}\r\n @return item at front of self.entries\"\"\"\r\n\r\n def get_front(self):\r\n return self.entries[0]\r\n\r\n \"\"\"Returns the length of this.entries\"\"\"\r\n\r\n def size(self):\r\n return self.length\r\n", "path": "data_structures/queue/queue_on_list.py"}]}
| 782 | 113 |
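The rename in the patch above matters because `self.front = 0` in `__init__` binds an integer attribute with the same name as the method, and instance attributes shadow class-level functions during lookup. A small sketch of the failure mode (the class name is illustrative):

```python
class BrokenQueue:
    def __init__(self):
        self.entries = []
        self.front = 0  # instance attribute shadows the method below

    def front(self):
        return self.entries[0]

q = BrokenQueue()
q.entries.append("a")
try:
    q.front()  # looks up the integer 0, not the method
except TypeError as exc:
    print(exc)  # 'int' object is not callable
```

Renaming the accessor to `get_front()`, as the diff does, removes the collision without changing the queue's behaviour.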
gh_patches_debug_64575
|
rasdani/github-patches
|
git_diff
|
encode__starlette-801
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dangerous example regex for CORS Middleware?
Looking at the docs for CORS Middlware here: https://www.starlette.io/middleware/#corsmiddleware , under the `allow_origin_regex` attribute, the example value is `https://.*\.example\.org`.
However, based on the handler code for this at https://github.com/encode/starlette/blob/ab86530eddfcf56e0f7e5ca56f6ab69c15594a7d/starlette/middleware/cors.py#L90 , it appears `https://www.example.org.hacker.com` would pass as a valid origin, right?
It seems like the example should be `https://.*\.example\.org$`, yes?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/middleware/cors.py`
Content:
```
1 import functools
2 import re
3 import typing
4
5 from starlette.datastructures import Headers, MutableHeaders
6 from starlette.responses import PlainTextResponse, Response
7 from starlette.types import ASGIApp, Message, Receive, Scope, Send
8
9 ALL_METHODS = ("DELETE", "GET", "OPTIONS", "PATCH", "POST", "PUT")
10
11
12 class CORSMiddleware:
13 def __init__(
14 self,
15 app: ASGIApp,
16 allow_origins: typing.Sequence[str] = (),
17 allow_methods: typing.Sequence[str] = ("GET",),
18 allow_headers: typing.Sequence[str] = (),
19 allow_credentials: bool = False,
20 allow_origin_regex: str = None,
21 expose_headers: typing.Sequence[str] = (),
22 max_age: int = 600,
23 ) -> None:
24
25 if "*" in allow_methods:
26 allow_methods = ALL_METHODS
27
28 compiled_allow_origin_regex = None
29 if allow_origin_regex is not None:
30 compiled_allow_origin_regex = re.compile(allow_origin_regex)
31
32 simple_headers = {}
33 if "*" in allow_origins:
34 simple_headers["Access-Control-Allow-Origin"] = "*"
35 if allow_credentials:
36 simple_headers["Access-Control-Allow-Credentials"] = "true"
37 if expose_headers:
38 simple_headers["Access-Control-Expose-Headers"] = ", ".join(expose_headers)
39
40 preflight_headers = {}
41 if "*" in allow_origins:
42 preflight_headers["Access-Control-Allow-Origin"] = "*"
43 else:
44 preflight_headers["Vary"] = "Origin"
45 preflight_headers.update(
46 {
47 "Access-Control-Allow-Methods": ", ".join(allow_methods),
48 "Access-Control-Max-Age": str(max_age),
49 }
50 )
51 if allow_headers and "*" not in allow_headers:
52 preflight_headers["Access-Control-Allow-Headers"] = ", ".join(allow_headers)
53 if allow_credentials:
54 preflight_headers["Access-Control-Allow-Credentials"] = "true"
55
56 self.app = app
57 self.allow_origins = allow_origins
58 self.allow_methods = allow_methods
59 self.allow_headers = [h.lower() for h in allow_headers]
60 self.allow_all_origins = "*" in allow_origins
61 self.allow_all_headers = "*" in allow_headers
62 self.allow_origin_regex = compiled_allow_origin_regex
63 self.simple_headers = simple_headers
64 self.preflight_headers = preflight_headers
65
66 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
67 if scope["type"] != "http": # pragma: no cover
68 await self.app(scope, receive, send)
69 return
70
71 method = scope["method"]
72 headers = Headers(scope=scope)
73 origin = headers.get("origin")
74
75 if origin is None:
76 await self.app(scope, receive, send)
77 return
78
79 if method == "OPTIONS" and "access-control-request-method" in headers:
80 response = self.preflight_response(request_headers=headers)
81 await response(scope, receive, send)
82 return
83
84 await self.simple_response(scope, receive, send, request_headers=headers)
85
86 def is_allowed_origin(self, origin: str) -> bool:
87 if self.allow_all_origins:
88 return True
89
90 if self.allow_origin_regex is not None and self.allow_origin_regex.match(
91 origin
92 ):
93 return True
94
95 return origin in self.allow_origins
96
97 def preflight_response(self, request_headers: Headers) -> Response:
98 requested_origin = request_headers["origin"]
99 requested_method = request_headers["access-control-request-method"]
100 requested_headers = request_headers.get("access-control-request-headers")
101
102 headers = dict(self.preflight_headers)
103 failures = []
104
105 if self.is_allowed_origin(origin=requested_origin):
106 if not self.allow_all_origins:
107 # If self.allow_all_origins is True, then the "Access-Control-Allow-Origin"
108 # header is already set to "*".
109 # If we only allow specific origins, then we have to mirror back
110 # the Origin header in the response.
111 headers["Access-Control-Allow-Origin"] = requested_origin
112 else:
113 failures.append("origin")
114
115 if requested_method not in self.allow_methods:
116 failures.append("method")
117
118 # If we allow all headers, then we have to mirror back any requested
119 # headers in the response.
120 if self.allow_all_headers and requested_headers is not None:
121 headers["Access-Control-Allow-Headers"] = requested_headers
122 elif requested_headers is not None:
123 for header in [h.lower() for h in requested_headers.split(",")]:
124 if header.strip() not in self.allow_headers:
125 failures.append("headers")
126
127 # We don't strictly need to use 400 responses here, since its up to
128 # the browser to enforce the CORS policy, but its more informative
129 # if we do.
130 if failures:
131 failure_text = "Disallowed CORS " + ", ".join(failures)
132 return PlainTextResponse(failure_text, status_code=400, headers=headers)
133
134 return PlainTextResponse("OK", status_code=200, headers=headers)
135
136 async def simple_response(
137 self, scope: Scope, receive: Receive, send: Send, request_headers: Headers
138 ) -> None:
139 send = functools.partial(self.send, send=send, request_headers=request_headers)
140 await self.app(scope, receive, send)
141
142 async def send(
143 self, message: Message, send: Send, request_headers: Headers
144 ) -> None:
145 if message["type"] != "http.response.start":
146 await send(message)
147 return
148
149 message.setdefault("headers", [])
150 headers = MutableHeaders(scope=message)
151 headers.update(self.simple_headers)
152 origin = request_headers["Origin"]
153 has_cookie = "cookie" in request_headers
154
155 # If request includes any cookie headers, then we must respond
156 # with the specific origin instead of '*'.
157 if self.allow_all_origins and has_cookie:
158 headers["Access-Control-Allow-Origin"] = origin
159
160 # If we only allow specific origins, then we have to mirror back
161 # the Origin header in the response.
162 elif not self.allow_all_origins and self.is_allowed_origin(origin=origin):
163 headers["Access-Control-Allow-Origin"] = origin
164 headers.add_vary_header("Origin")
165 await send(message)
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/starlette/middleware/cors.py b/starlette/middleware/cors.py
--- a/starlette/middleware/cors.py
+++ b/starlette/middleware/cors.py
@@ -87,7 +87,7 @@
if self.allow_all_origins:
return True
- if self.allow_origin_regex is not None and self.allow_origin_regex.match(
+ if self.allow_origin_regex is not None and self.allow_origin_regex.fullmatch(
origin
):
return True
|
{"golden_diff": "diff --git a/starlette/middleware/cors.py b/starlette/middleware/cors.py\n--- a/starlette/middleware/cors.py\n+++ b/starlette/middleware/cors.py\n@@ -87,7 +87,7 @@\n if self.allow_all_origins:\n return True\n \n- if self.allow_origin_regex is not None and self.allow_origin_regex.match(\n+ if self.allow_origin_regex is not None and self.allow_origin_regex.fullmatch(\n origin\n ):\n return True\n", "issue": "Dangerous example regex for CORS Middleware?\nLooking at the docs for CORS Middlware here: https://www.starlette.io/middleware/#corsmiddleware , under the `allow_origin_regex` attribute, the example value is `https://.*\\.example\\.org`.\r\n\r\nHowever, based on the handler code for this at https://github.com/encode/starlette/blob/ab86530eddfcf56e0f7e5ca56f6ab69c15594a7d/starlette/middleware/cors.py#L90 , it appears `https://www.example.org.hacker.com` would pass as a valid origin, right?\r\n\r\nIt seems like the example should be `https://.*\\.example\\.org$`, yes?\n", "before_files": [{"content": "import functools\nimport re\nimport typing\n\nfrom starlette.datastructures import Headers, MutableHeaders\nfrom starlette.responses import PlainTextResponse, Response\nfrom starlette.types import ASGIApp, Message, Receive, Scope, Send\n\nALL_METHODS = (\"DELETE\", \"GET\", \"OPTIONS\", \"PATCH\", \"POST\", \"PUT\")\n\n\nclass CORSMiddleware:\n def __init__(\n self,\n app: ASGIApp,\n allow_origins: typing.Sequence[str] = (),\n allow_methods: typing.Sequence[str] = (\"GET\",),\n allow_headers: typing.Sequence[str] = (),\n allow_credentials: bool = False,\n allow_origin_regex: str = None,\n expose_headers: typing.Sequence[str] = (),\n max_age: int = 600,\n ) -> None:\n\n if \"*\" in allow_methods:\n allow_methods = ALL_METHODS\n\n compiled_allow_origin_regex = None\n if allow_origin_regex is not None:\n compiled_allow_origin_regex = re.compile(allow_origin_regex)\n\n simple_headers = {}\n if \"*\" in allow_origins:\n simple_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n if allow_credentials:\n simple_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n if expose_headers:\n simple_headers[\"Access-Control-Expose-Headers\"] = \", \".join(expose_headers)\n\n preflight_headers = {}\n if \"*\" in allow_origins:\n preflight_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n else:\n preflight_headers[\"Vary\"] = \"Origin\"\n preflight_headers.update(\n {\n \"Access-Control-Allow-Methods\": \", \".join(allow_methods),\n \"Access-Control-Max-Age\": str(max_age),\n }\n )\n if allow_headers and \"*\" not in allow_headers:\n preflight_headers[\"Access-Control-Allow-Headers\"] = \", \".join(allow_headers)\n if allow_credentials:\n preflight_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n\n self.app = app\n self.allow_origins = allow_origins\n self.allow_methods = allow_methods\n self.allow_headers = [h.lower() for h in allow_headers]\n self.allow_all_origins = \"*\" in allow_origins\n self.allow_all_headers = \"*\" in allow_headers\n self.allow_origin_regex = compiled_allow_origin_regex\n self.simple_headers = simple_headers\n self.preflight_headers = preflight_headers\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\": # pragma: no cover\n await self.app(scope, receive, send)\n return\n\n method = scope[\"method\"]\n headers = Headers(scope=scope)\n origin = headers.get(\"origin\")\n\n if origin is None:\n await self.app(scope, receive, send)\n return\n\n if method == \"OPTIONS\" and 
\"access-control-request-method\" in headers:\n response = self.preflight_response(request_headers=headers)\n await response(scope, receive, send)\n return\n\n await self.simple_response(scope, receive, send, request_headers=headers)\n\n def is_allowed_origin(self, origin: str) -> bool:\n if self.allow_all_origins:\n return True\n\n if self.allow_origin_regex is not None and self.allow_origin_regex.match(\n origin\n ):\n return True\n\n return origin in self.allow_origins\n\n def preflight_response(self, request_headers: Headers) -> Response:\n requested_origin = request_headers[\"origin\"]\n requested_method = request_headers[\"access-control-request-method\"]\n requested_headers = request_headers.get(\"access-control-request-headers\")\n\n headers = dict(self.preflight_headers)\n failures = []\n\n if self.is_allowed_origin(origin=requested_origin):\n if not self.allow_all_origins:\n # If self.allow_all_origins is True, then the \"Access-Control-Allow-Origin\"\n # header is already set to \"*\".\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n headers[\"Access-Control-Allow-Origin\"] = requested_origin\n else:\n failures.append(\"origin\")\n\n if requested_method not in self.allow_methods:\n failures.append(\"method\")\n\n # If we allow all headers, then we have to mirror back any requested\n # headers in the response.\n if self.allow_all_headers and requested_headers is not None:\n headers[\"Access-Control-Allow-Headers\"] = requested_headers\n elif requested_headers is not None:\n for header in [h.lower() for h in requested_headers.split(\",\")]:\n if header.strip() not in self.allow_headers:\n failures.append(\"headers\")\n\n # We don't strictly need to use 400 responses here, since its up to\n # the browser to enforce the CORS policy, but its more informative\n # if we do.\n if failures:\n failure_text = \"Disallowed CORS \" + \", \".join(failures)\n return PlainTextResponse(failure_text, status_code=400, headers=headers)\n\n return PlainTextResponse(\"OK\", status_code=200, headers=headers)\n\n async def simple_response(\n self, scope: Scope, receive: Receive, send: Send, request_headers: Headers\n ) -> None:\n send = functools.partial(self.send, send=send, request_headers=request_headers)\n await self.app(scope, receive, send)\n\n async def send(\n self, message: Message, send: Send, request_headers: Headers\n ) -> None:\n if message[\"type\"] != \"http.response.start\":\n await send(message)\n return\n\n message.setdefault(\"headers\", [])\n headers = MutableHeaders(scope=message)\n headers.update(self.simple_headers)\n origin = request_headers[\"Origin\"]\n has_cookie = \"cookie\" in request_headers\n\n # If request includes any cookie headers, then we must respond\n # with the specific origin instead of '*'.\n if self.allow_all_origins and has_cookie:\n headers[\"Access-Control-Allow-Origin\"] = origin\n\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n elif not self.allow_all_origins and self.is_allowed_origin(origin=origin):\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers.add_vary_header(\"Origin\")\n await send(message)\n", "path": "starlette/middleware/cors.py"}], "after_files": [{"content": "import functools\nimport re\nimport typing\n\nfrom starlette.datastructures import Headers, MutableHeaders\nfrom starlette.responses import PlainTextResponse, Response\nfrom starlette.types import ASGIApp, Message, Receive, Scope, Send\n\nALL_METHODS = 
(\"DELETE\", \"GET\", \"OPTIONS\", \"PATCH\", \"POST\", \"PUT\")\n\n\nclass CORSMiddleware:\n def __init__(\n self,\n app: ASGIApp,\n allow_origins: typing.Sequence[str] = (),\n allow_methods: typing.Sequence[str] = (\"GET\",),\n allow_headers: typing.Sequence[str] = (),\n allow_credentials: bool = False,\n allow_origin_regex: str = None,\n expose_headers: typing.Sequence[str] = (),\n max_age: int = 600,\n ) -> None:\n\n if \"*\" in allow_methods:\n allow_methods = ALL_METHODS\n\n compiled_allow_origin_regex = None\n if allow_origin_regex is not None:\n compiled_allow_origin_regex = re.compile(allow_origin_regex)\n\n simple_headers = {}\n if \"*\" in allow_origins:\n simple_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n if allow_credentials:\n simple_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n if expose_headers:\n simple_headers[\"Access-Control-Expose-Headers\"] = \", \".join(expose_headers)\n\n preflight_headers = {}\n if \"*\" in allow_origins:\n preflight_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n else:\n preflight_headers[\"Vary\"] = \"Origin\"\n preflight_headers.update(\n {\n \"Access-Control-Allow-Methods\": \", \".join(allow_methods),\n \"Access-Control-Max-Age\": str(max_age),\n }\n )\n if allow_headers and \"*\" not in allow_headers:\n preflight_headers[\"Access-Control-Allow-Headers\"] = \", \".join(allow_headers)\n if allow_credentials:\n preflight_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n\n self.app = app\n self.allow_origins = allow_origins\n self.allow_methods = allow_methods\n self.allow_headers = [h.lower() for h in allow_headers]\n self.allow_all_origins = \"*\" in allow_origins\n self.allow_all_headers = \"*\" in allow_headers\n self.allow_origin_regex = compiled_allow_origin_regex\n self.simple_headers = simple_headers\n self.preflight_headers = preflight_headers\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\": # pragma: no cover\n await self.app(scope, receive, send)\n return\n\n method = scope[\"method\"]\n headers = Headers(scope=scope)\n origin = headers.get(\"origin\")\n\n if origin is None:\n await self.app(scope, receive, send)\n return\n\n if method == \"OPTIONS\" and \"access-control-request-method\" in headers:\n response = self.preflight_response(request_headers=headers)\n await response(scope, receive, send)\n return\n\n await self.simple_response(scope, receive, send, request_headers=headers)\n\n def is_allowed_origin(self, origin: str) -> bool:\n if self.allow_all_origins:\n return True\n\n if self.allow_origin_regex is not None and self.allow_origin_regex.fullmatch(\n origin\n ):\n return True\n\n return origin in self.allow_origins\n\n def preflight_response(self, request_headers: Headers) -> Response:\n requested_origin = request_headers[\"origin\"]\n requested_method = request_headers[\"access-control-request-method\"]\n requested_headers = request_headers.get(\"access-control-request-headers\")\n\n headers = dict(self.preflight_headers)\n failures = []\n\n if self.is_allowed_origin(origin=requested_origin):\n if not self.allow_all_origins:\n # If self.allow_all_origins is True, then the \"Access-Control-Allow-Origin\"\n # header is already set to \"*\".\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n headers[\"Access-Control-Allow-Origin\"] = requested_origin\n else:\n failures.append(\"origin\")\n\n if requested_method not in self.allow_methods:\n failures.append(\"method\")\n\n # 
If we allow all headers, then we have to mirror back any requested\n # headers in the response.\n if self.allow_all_headers and requested_headers is not None:\n headers[\"Access-Control-Allow-Headers\"] = requested_headers\n elif requested_headers is not None:\n for header in [h.lower() for h in requested_headers.split(\",\")]:\n if header.strip() not in self.allow_headers:\n failures.append(\"headers\")\n\n # We don't strictly need to use 400 responses here, since its up to\n # the browser to enforce the CORS policy, but its more informative\n # if we do.\n if failures:\n failure_text = \"Disallowed CORS \" + \", \".join(failures)\n return PlainTextResponse(failure_text, status_code=400, headers=headers)\n\n return PlainTextResponse(\"OK\", status_code=200, headers=headers)\n\n async def simple_response(\n self, scope: Scope, receive: Receive, send: Send, request_headers: Headers\n ) -> None:\n send = functools.partial(self.send, send=send, request_headers=request_headers)\n await self.app(scope, receive, send)\n\n async def send(\n self, message: Message, send: Send, request_headers: Headers\n ) -> None:\n if message[\"type\"] != \"http.response.start\":\n await send(message)\n return\n\n message.setdefault(\"headers\", [])\n headers = MutableHeaders(scope=message)\n headers.update(self.simple_headers)\n origin = request_headers[\"Origin\"]\n has_cookie = \"cookie\" in request_headers\n\n # If request includes any cookie headers, then we must respond\n # with the specific origin instead of '*'.\n if self.allow_all_origins and has_cookie:\n headers[\"Access-Control-Allow-Origin\"] = origin\n\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n elif not self.allow_all_origins and self.is_allowed_origin(origin=origin):\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers.add_vary_header(\"Origin\")\n await send(message)\n", "path": "starlette/middleware/cors.py"}]}
| 2,175 | 108 |
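The `match` to `fullmatch` change above is what closes the bypass described in the issue: `re.match` only anchors at the start of the string, so an unanchored pattern accepts any origin that merely begins with an allowed prefix. A short sketch of the difference, using the regex from the CORS docs example:

```python
import re

pattern = re.compile(r"https://.*\.example\.org")

evil = "https://www.example.org.hacker.com"
good = "https://www.example.org"

print(bool(pattern.match(evil)))      # True  -> allowed by the old check
print(bool(pattern.fullmatch(evil)))  # False -> rejected after the patch
print(bool(pattern.fullmatch(good)))  # True  -> legitimate origin still allowed
```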
gh_patches_debug_31878
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-4817
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Vision: Single feature functions generated by cloud vision client library does not support parameter max_results
As specified in [the gRPC reference](https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.AnnotateImageRequest), AnnotateImageRequest message have three fields, _image_, _features[]_ and _image_context_, in which the _features[]_ field determines what feature user is request (_type_) and max number of returned results (_max_results_). The code for generating single-feature functions (for example, _face_detection()_), however, does not allow user to specify _max_results_:
```
feature_value = {'type': enum.__dict__[feature]}
def inner(self, image, options=None, **kwargs):
request = dict(
image=image,
features=[feature_value],
**kwargs
)
return self.annotate_image(request, options=options)
```
Reported in https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1173
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vision/google/cloud/vision_helpers/decorators.py`
Content:
```
1 # Copyright 2017, Google LLC All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16
17
18 def add_single_feature_methods(cls):
19 """Custom decorator intended for :class:`~vision.helpers.VisionHelpers`.
20
21 This metaclass adds a `{feature}` method for every feature
22 defined on the Feature enum.
23 """
24 # Sanity check: This only makes sense if we are building the GAPIC
25 # subclass and have enums already attached.
26 if not hasattr(cls, 'enums'):
27 return cls
28
29 # Iterate over the Feature.Type enum and add get a list of
30 # features which will receive single-feature detection methods.
31 features = [k for k in cls.enums.Feature.Type.__dict__.keys()
32 if k.replace('_', '').isalpha() and k.upper() == k]
33
34 # Add each single-feature method to the class.
35 for feature in features:
36 # Sanity check: Do not make a method for the falsy feature.
37 if feature == 'TYPE_UNSPECIFIED':
38 continue
39
40 # Assign the appropriate metadata to the function.
41 detect = _create_single_feature_method(feature, cls.enums.Feature.Type)
42
43 # Assign a qualified name to the function, and perform module
44 # replacement on the docstring.
45 detect.__qualname__ = '{cls}.{name}'.format(
46 cls=cls.__name__,
47 name=detect.__name__,
48 )
49 detect.__doc__ = detect.__doc__.format(
50 module=cls.__module__,
51 )
52
53 # Place the function on the class being created.
54 setattr(cls, detect.__name__, detect)
55
56 # Done; return the class.
57 return cls
58
59
60 def _create_single_feature_method(feature, enum):
61 """Return a function that will detect a single feature.
62
63 Args:
64 feature (str): A specific feature defined as an attribute on
65 :class:`~enums.Feature.Type`.
66 enum (class): The :class:`~enums.Feature.Type` class.
67
68 Returns:
69 function: A helper function to detect just that feature.
70 """
71 # Define the function properties.
72 fx_name = feature.lower()
73 if 'detection' in fx_name:
74 fx_doc = 'Perform {0}.'.format(fx_name.replace('_', ' '))
75 else:
76 fx_doc = 'Return {desc} information.'.format(
77 desc=fx_name.replace('_', ' '),
78 )
79
80 # Provide a complete docstring with argument and return value
81 # information.
82 fx_doc += """
83
84 Args:
85 image (:class:`~.{module}.types.Image`): The image to analyze.
86 options (:class:`google.gax.CallOptions`): Overrides the
87 default settings for this call, e.g, timeout, retries, etc.
88 kwargs (dict): Additional properties to be set on the
89 :class:`~.{module}.types.AnnotateImageRequest`.
90
91 Returns:
92 :class:`~.{module}.types.AnnotateImageResponse`: The API response.
93 """
94
95 # Get the actual feature value to send.
96 feature_value = {'type': enum.__dict__[feature]}
97
98 # Define the function to be returned.
99 def inner(self, image, retry=None, timeout=None, **kwargs):
100 """Return a single feature annotation for the given image.
101
102 Intended for use with functools.partial, to create the particular
103 single-feature methods.
104 """
105 request = dict(
106 image=image,
107 features=[feature_value],
108 **kwargs
109 )
110 return self.annotate_image(request, retry=retry, timeout=timeout)
111
112 # Set the appropriate function metadata.
113 inner.__name__ = fx_name
114 inner.__doc__ = fx_doc
115
116 # Return the final function.
117 return inner
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vision/google/cloud/vision_helpers/decorators.py b/vision/google/cloud/vision_helpers/decorators.py
--- a/vision/google/cloud/vision_helpers/decorators.py
+++ b/vision/google/cloud/vision_helpers/decorators.py
@@ -83,8 +83,11 @@
Args:
image (:class:`~.{module}.types.Image`): The image to analyze.
- options (:class:`google.gax.CallOptions`): Overrides the
- default settings for this call, e.g, timeout, retries, etc.
+ max_results (int):
+ Number of results to return, does not apply for
+ TEXT_DETECTION, DOCUMENT_TEXT_DETECTION, or CROP_HINTS.
+ retry (int): Number of retries to do before giving up.
+ timeout (int): Number of seconds before timing out.
kwargs (dict): Additional properties to be set on the
:class:`~.{module}.types.AnnotateImageRequest`.
@@ -96,18 +99,23 @@
feature_value = {'type': enum.__dict__[feature]}
# Define the function to be returned.
- def inner(self, image, retry=None, timeout=None, **kwargs):
+ def inner(self, image, max_results=None,
+ retry=None, timeout=None, **kwargs):
"""Return a single feature annotation for the given image.
Intended for use with functools.partial, to create the particular
single-feature methods.
"""
+ copied_features = feature_value.copy()
+ if max_results is not None:
+ copied_features['max_results'] = max_results
request = dict(
image=image,
- features=[feature_value],
+ features=[copied_features],
**kwargs
)
- return self.annotate_image(request, retry=retry, timeout=timeout)
+ response = self.annotate_image(request, retry=retry, timeout=timeout)
+ return response
# Set the appropriate function metadata.
inner.__name__ = fx_name
|
{"golden_diff": "diff --git a/vision/google/cloud/vision_helpers/decorators.py b/vision/google/cloud/vision_helpers/decorators.py\n--- a/vision/google/cloud/vision_helpers/decorators.py\n+++ b/vision/google/cloud/vision_helpers/decorators.py\n@@ -83,8 +83,11 @@\n \n Args:\n image (:class:`~.{module}.types.Image`): The image to analyze.\n- options (:class:`google.gax.CallOptions`): Overrides the\n- default settings for this call, e.g, timeout, retries, etc.\n+ max_results (int):\n+ Number of results to return, does not apply for\n+ TEXT_DETECTION, DOCUMENT_TEXT_DETECTION, or CROP_HINTS.\n+ retry (int): Number of retries to do before giving up.\n+ timeout (int): Number of seconds before timing out.\n kwargs (dict): Additional properties to be set on the\n :class:`~.{module}.types.AnnotateImageRequest`.\n \n@@ -96,18 +99,23 @@\n feature_value = {'type': enum.__dict__[feature]}\n \n # Define the function to be returned.\n- def inner(self, image, retry=None, timeout=None, **kwargs):\n+ def inner(self, image, max_results=None,\n+ retry=None, timeout=None, **kwargs):\n \"\"\"Return a single feature annotation for the given image.\n \n Intended for use with functools.partial, to create the particular\n single-feature methods.\n \"\"\"\n+ copied_features = feature_value.copy()\n+ if max_results is not None:\n+ copied_features['max_results'] = max_results\n request = dict(\n image=image,\n- features=[feature_value],\n+ features=[copied_features],\n **kwargs\n )\n- return self.annotate_image(request, retry=retry, timeout=timeout)\n+ response = self.annotate_image(request, retry=retry, timeout=timeout)\n+ return response\n \n # Set the appropriate function metadata.\n inner.__name__ = fx_name\n", "issue": "Vision: Single feature functions generated by cloud vision client library does not support parameter max_results\nAs specified in [the gRPC reference](https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.AnnotateImageRequest), AnnotateImageRequest message have three fields, _image_, _features[]_ and _image_context_, in which the _features[]_ field determines what feature user is request (_type_) and max number of returned results (_max_results_). 
The code for generating single-feature functions (for example, _face_detection()_), however, does not allow user to specify _max_results_:\r\n\r\n```\r\nfeature_value = {'type': enum.__dict__[feature]}\r\ndef inner(self, image, options=None, **kwargs):\r\n request = dict(\r\n image=image,\r\n features=[feature_value],\r\n **kwargs\r\n )\r\n return self.annotate_image(request, options=options)\r\n```\r\n\r\nReported in https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1173\n", "before_files": [{"content": "# Copyright 2017, Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\n\ndef add_single_feature_methods(cls):\n \"\"\"Custom decorator intended for :class:`~vision.helpers.VisionHelpers`.\n\n This metaclass adds a `{feature}` method for every feature\n defined on the Feature enum.\n \"\"\"\n # Sanity check: This only makes sense if we are building the GAPIC\n # subclass and have enums already attached.\n if not hasattr(cls, 'enums'):\n return cls\n\n # Iterate over the Feature.Type enum and add get a list of\n # features which will receive single-feature detection methods.\n features = [k for k in cls.enums.Feature.Type.__dict__.keys()\n if k.replace('_', '').isalpha() and k.upper() == k]\n\n # Add each single-feature method to the class.\n for feature in features:\n # Sanity check: Do not make a method for the falsy feature.\n if feature == 'TYPE_UNSPECIFIED':\n continue\n\n # Assign the appropriate metadata to the function.\n detect = _create_single_feature_method(feature, cls.enums.Feature.Type)\n\n # Assign a qualified name to the function, and perform module\n # replacement on the docstring.\n detect.__qualname__ = '{cls}.{name}'.format(\n cls=cls.__name__,\n name=detect.__name__,\n )\n detect.__doc__ = detect.__doc__.format(\n module=cls.__module__,\n )\n\n # Place the function on the class being created.\n setattr(cls, detect.__name__, detect)\n\n # Done; return the class.\n return cls\n\n\ndef _create_single_feature_method(feature, enum):\n \"\"\"Return a function that will detect a single feature.\n\n Args:\n feature (str): A specific feature defined as an attribute on\n :class:`~enums.Feature.Type`.\n enum (class): The :class:`~enums.Feature.Type` class.\n\n Returns:\n function: A helper function to detect just that feature.\n \"\"\"\n # Define the function properties.\n fx_name = feature.lower()\n if 'detection' in fx_name:\n fx_doc = 'Perform {0}.'.format(fx_name.replace('_', ' '))\n else:\n fx_doc = 'Return {desc} information.'.format(\n desc=fx_name.replace('_', ' '),\n )\n\n # Provide a complete docstring with argument and return value\n # information.\n fx_doc += \"\"\"\n\n Args:\n image (:class:`~.{module}.types.Image`): The image to analyze.\n options (:class:`google.gax.CallOptions`): Overrides the\n default settings for this call, e.g, timeout, retries, etc.\n kwargs (dict): Additional properties to be set on the\n :class:`~.{module}.types.AnnotateImageRequest`.\n\n Returns:\n 
:class:`~.{module}.types.AnnotateImageResponse`: The API response.\n \"\"\"\n\n # Get the actual feature value to send.\n feature_value = {'type': enum.__dict__[feature]}\n\n # Define the function to be returned.\n def inner(self, image, retry=None, timeout=None, **kwargs):\n \"\"\"Return a single feature annotation for the given image.\n\n Intended for use with functools.partial, to create the particular\n single-feature methods.\n \"\"\"\n request = dict(\n image=image,\n features=[feature_value],\n **kwargs\n )\n return self.annotate_image(request, retry=retry, timeout=timeout)\n\n # Set the appropriate function metadata.\n inner.__name__ = fx_name\n inner.__doc__ = fx_doc\n\n # Return the final function.\n return inner\n", "path": "vision/google/cloud/vision_helpers/decorators.py"}], "after_files": [{"content": "# Copyright 2017, Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\n\ndef add_single_feature_methods(cls):\n \"\"\"Custom decorator intended for :class:`~vision.helpers.VisionHelpers`.\n\n This metaclass adds a `{feature}` method for every feature\n defined on the Feature enum.\n \"\"\"\n # Sanity check: This only makes sense if we are building the GAPIC\n # subclass and have enums already attached.\n if not hasattr(cls, 'enums'):\n return cls\n\n # Iterate over the Feature.Type enum and add get a list of\n # features which will receive single-feature detection methods.\n features = [k for k in cls.enums.Feature.Type.__dict__.keys()\n if k.replace('_', '').isalpha() and k.upper() == k]\n\n # Add each single-feature method to the class.\n for feature in features:\n # Sanity check: Do not make a method for the falsy feature.\n if feature == 'TYPE_UNSPECIFIED':\n continue\n\n # Assign the appropriate metadata to the function.\n detect = _create_single_feature_method(feature, cls.enums.Feature.Type)\n\n # Assign a qualified name to the function, and perform module\n # replacement on the docstring.\n detect.__qualname__ = '{cls}.{name}'.format(\n cls=cls.__name__,\n name=detect.__name__,\n )\n detect.__doc__ = detect.__doc__.format(\n module=cls.__module__,\n )\n\n # Place the function on the class being created.\n setattr(cls, detect.__name__, detect)\n\n # Done; return the class.\n return cls\n\n\ndef _create_single_feature_method(feature, enum):\n \"\"\"Return a function that will detect a single feature.\n\n Args:\n feature (str): A specific feature defined as an attribute on\n :class:`~enums.Feature.Type`.\n enum (class): The :class:`~enums.Feature.Type` class.\n\n Returns:\n function: A helper function to detect just that feature.\n \"\"\"\n # Define the function properties.\n fx_name = feature.lower()\n if 'detection' in fx_name:\n fx_doc = 'Perform {0}.'.format(fx_name.replace('_', ' '))\n else:\n fx_doc = 'Return {desc} information.'.format(\n desc=fx_name.replace('_', ' '),\n )\n\n # Provide a complete docstring with argument and return value\n # information.\n fx_doc += \"\"\"\n\n Args:\n image 
(:class:`~.{module}.types.Image`): The image to analyze.\n max_results (int):\n Number of results to return, does not apply for\n TEXT_DETECTION, DOCUMENT_TEXT_DETECTION, or CROP_HINTS.\n retry (int): Number of retries to do before giving up.\n timeout (int): Number of seconds before timing out.\n kwargs (dict): Additional properties to be set on the\n :class:`~.{module}.types.AnnotateImageRequest`.\n\n Returns:\n :class:`~.{module}.types.AnnotateImageResponse`: The API response.\n \"\"\"\n\n # Get the actual feature value to send.\n feature_value = {'type': enum.__dict__[feature]}\n\n # Define the function to be returned.\n def inner(self, image, max_results=None,\n retry=None, timeout=None, **kwargs):\n \"\"\"Return a single feature annotation for the given image.\n\n Intended for use with functools.partial, to create the particular\n single-feature methods.\n \"\"\"\n copied_features = feature_value.copy()\n if max_results is not None:\n copied_features['max_results'] = max_results\n request = dict(\n image=image,\n features=[copied_features],\n **kwargs\n )\n response = self.annotate_image(request, retry=retry, timeout=timeout)\n return response\n\n # Set the appropriate function metadata.\n inner.__name__ = fx_name\n inner.__doc__ = fx_doc\n\n # Return the final function.\n return inner\n", "path": "vision/google/cloud/vision_helpers/decorators.py"}]}
| 1,639 | 442 |
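After the patch above, each generated single-feature helper copies the feature dict and only attaches `max_results` when the caller passes one, so the request keeps the `image` / `features[]` shape described in the issue. A stripped-down sketch of that request construction (the enum value and function name are stand-ins, not the real client API):

```python
FEATURE_VALUE = {"type": 1}  # stand-in for a Feature.Type enum member

def build_request(image, max_results=None, **kwargs):
    feature = FEATURE_VALUE.copy()  # avoid mutating the shared default
    if max_results is not None:
        feature["max_results"] = max_results
    return dict(image=image, features=[feature], **kwargs)

print(build_request("image-bytes"))
# {'image': 'image-bytes', 'features': [{'type': 1}]}
print(build_request("image-bytes", max_results=5))
# {'image': 'image-bytes', 'features': [{'type': 1, 'max_results': 5}]}
```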
gh_patches_debug_64716
|
rasdani/github-patches
|
git_diff
|
Lightning-Universe__lightning-flash-1509
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Semantic Segmentation target masks broken >0.7.5
## 🐛 Bug
The switch to albumentation in newer releases of lightning-flash seem to have broken transformation of segmentation targets.
This is what I expect masks to look like (screenshot showing below code sample running on 0.7.5):

This is what it looks like on the latest release (0.8.1):

### To Reproduce
Run the below sample with lightning-flash=0.7.5 and lightning-flash=0.8.1 and compare behavior.
#### Code sample
```py
import torch
import flash
from flash.core.data.utils import download_data
from flash.image import SemanticSegmentation, SemanticSegmentationData
import matplotlib.pyplot as plt
import numpy as np
# 1. Create the DataModule
# The data was generated with the CARLA self-driving simulator as part of the Kaggle Lyft Udacity Challenge.
# More info here: https://www.kaggle.com/kumaresanmanickavelu/lyft-udacity-challenge
# download_data(
# "https://github.com/ongchinkiat/LyftPerceptionChallenge/releases/download/v0.1/carla-capture-20180513A.zip",
# "./data",
# )
datamodule = SemanticSegmentationData.from_folders(
train_folder="data/CameraRGB",
train_target_folder="data/CameraSeg",
val_split=0.1,
transform_kwargs=dict(image_size=(256, 256)),
num_classes=21,
batch_size=4,
)
# 2. Build the task
model = SemanticSegmentation(
backbone="mobilenetv3_large_100",
head="fpn",
num_classes=datamodule.num_classes,
)
n = 3
fig, axarr = plt.subplots(ncols=2, nrows=n, figsize=(8, 4*n))
for batch in datamodule.train_dataloader():
print(batch.keys())
for i in range(n):
segm = batch['target'][i]
print(segm.shape)
img = np.rollaxis(batch['input'][i].numpy(), 0, 3)
axarr[i, 0].imshow(img)
axarr[i, 1].imshow(segm)
break
```
### Environment
- OS: Ubuntu WSL2
- Python version: 3.10.8
- GPU model: RTX 3080
- CUDA Version: 11.6
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/flash/image/segmentation/input.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import os
15 from typing import Any, Dict, List, Optional, Tuple, Union
16
17 import numpy as np
18
19 from flash.core.data.io.input import DataKeys, Input
20 from flash.core.data.utilities.loading import IMG_EXTENSIONS, NP_EXTENSIONS, load_image
21 from flash.core.data.utilities.paths import PATH_TYPE, filter_valid_files
22 from flash.core.data.utilities.samples import to_samples
23 from flash.core.integrations.fiftyone.utils import FiftyOneLabelUtilities
24 from flash.core.utilities.imports import _FIFTYONE_AVAILABLE, lazy_import
25 from flash.image.data import ImageFilesInput, ImageNumpyInput, ImageTensorInput
26 from flash.image.segmentation.output import SegmentationLabelsOutput
27
28 if _FIFTYONE_AVAILABLE:
29 fo = lazy_import("fiftyone")
30 SampleCollection = "fiftyone.core.collections.SampleCollection"
31 else:
32 fo = None
33 SampleCollection = None
34
35
36 class SemanticSegmentationInput(Input):
37 num_classes: int
38 labels_map: Dict[int, Tuple[int, int, int]]
39
40 def load_labels_map(
41 self, num_classes: Optional[int] = None, labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None
42 ) -> None:
43 if num_classes is not None:
44 self.num_classes = num_classes
45 labels_map = labels_map or SegmentationLabelsOutput.create_random_labels_map(num_classes)
46
47 if labels_map is not None:
48 self.labels_map = labels_map
49
50
51 class SemanticSegmentationTensorInput(SemanticSegmentationInput, ImageTensorInput):
52 def load_data(
53 self,
54 tensor: Any,
55 masks: Any = None,
56 num_classes: Optional[int] = None,
57 labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,
58 ) -> List[Dict[str, Any]]:
59 self.load_labels_map(num_classes, labels_map)
60 return to_samples(tensor, masks)
61
62 def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:
63 if DataKeys.TARGET in sample:
64 sample[DataKeys.TARGET] = sample[DataKeys.TARGET].numpy()
65 return super().load_sample(sample)
66
67
68 class SemanticSegmentationNumpyInput(SemanticSegmentationInput, ImageNumpyInput):
69 def load_data(
70 self,
71 array: Any,
72 masks: Any = None,
73 num_classes: Optional[int] = None,
74 labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,
75 ) -> List[Dict[str, Any]]:
76 self.load_labels_map(num_classes, labels_map)
77 return to_samples(array, masks)
78
79
80 class SemanticSegmentationFilesInput(SemanticSegmentationInput, ImageFilesInput):
81 def load_data(
82 self,
83 files: Union[PATH_TYPE, List[PATH_TYPE]],
84 mask_files: Optional[Union[PATH_TYPE, List[PATH_TYPE]]] = None,
85 num_classes: Optional[int] = None,
86 labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,
87 ) -> List[Dict[str, Any]]:
88 self.load_labels_map(num_classes, labels_map)
89 if mask_files is None:
90 files = filter_valid_files(files, valid_extensions=IMG_EXTENSIONS + NP_EXTENSIONS)
91 else:
92 files, mask_files = filter_valid_files(files, mask_files, valid_extensions=IMG_EXTENSIONS + NP_EXTENSIONS)
93 return to_samples(files, mask_files)
94
95 def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:
96 if DataKeys.TARGET in sample:
97 sample[DataKeys.TARGET] = np.array(load_image(sample[DataKeys.TARGET])).transpose((2, 0, 1))[:, :, 0]
98 return super().load_sample(sample)
99
100
101 class SemanticSegmentationFolderInput(SemanticSegmentationFilesInput):
102 def load_data(
103 self,
104 folder: PATH_TYPE,
105 mask_folder: Optional[PATH_TYPE] = None,
106 num_classes: Optional[int] = None,
107 labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,
108 ) -> List[Dict[str, Any]]:
109 self.load_labels_map(num_classes, labels_map)
110 files = os.listdir(folder)
111 files.sort()
112 if mask_folder is not None:
113 mask_files = {os.path.splitext(file)[0]: file for file in os.listdir(mask_folder)}
114 file_names = [os.path.splitext(file)[0] for file in files]
115
116 if len(set(file_names) - mask_files.keys()) != 0:
117 raise ValueError(
118 f"Found inconsistent files in input folder: {folder} and mask folder: {mask_folder}. All input "
119 f"files must have a corresponding mask file with the same name."
120 )
121
122 files = [os.path.join(folder, file) for file in files]
123 mask_files = [os.path.join(mask_folder, mask_files[file_name]) for file_name in file_names]
124 return super().load_data(files, mask_files)
125 return super().load_data([os.path.join(folder, file) for file in files])
126
127
128 class SemanticSegmentationFiftyOneInput(SemanticSegmentationFilesInput):
129 label_field: str
130
131 def load_data(
132 self,
133 sample_collection: SampleCollection,
134 label_field: str = "ground_truth",
135 num_classes: Optional[int] = None,
136 labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,
137 ) -> List[Dict[str, Any]]:
138 self.load_labels_map(num_classes, labels_map)
139
140 self.label_field = label_field
141 label_utilities = FiftyOneLabelUtilities(label_field, fo.Segmentation)
142 label_utilities.validate(sample_collection)
143
144 self._fo_dataset_name = sample_collection.name
145 return to_samples(sample_collection.values("filepath"))
146
147 def predict_load_data(
148 self,
149 sample_collection: SampleCollection,
150 ) -> List[Dict[str, Any]]:
151 return to_samples(sample_collection.values("filepath"))
152
153 def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:
154 filepath = sample[DataKeys.INPUT]
155 sample = super().load_sample(sample)
156 if not self.predicting:
157 fo_dataset = fo.load_dataset(self._fo_dataset_name)
158 fo_sample = fo_dataset[filepath]
159 sample[DataKeys.TARGET] = fo_sample[self.label_field].mask
160 return sample
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/flash/image/segmentation/input.py b/src/flash/image/segmentation/input.py
--- a/src/flash/image/segmentation/input.py
+++ b/src/flash/image/segmentation/input.py
@@ -94,7 +94,7 @@
def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:
if DataKeys.TARGET in sample:
- sample[DataKeys.TARGET] = np.array(load_image(sample[DataKeys.TARGET])).transpose((2, 0, 1))[:, :, 0]
+ sample[DataKeys.TARGET] = np.array(load_image(sample[DataKeys.TARGET])).transpose((2, 0, 1))[0, :, :]
return super().load_sample(sample)
|
{"golden_diff": "diff --git a/src/flash/image/segmentation/input.py b/src/flash/image/segmentation/input.py\n--- a/src/flash/image/segmentation/input.py\n+++ b/src/flash/image/segmentation/input.py\n@@ -94,7 +94,7 @@\n \n def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:\n if DataKeys.TARGET in sample:\n- sample[DataKeys.TARGET] = np.array(load_image(sample[DataKeys.TARGET])).transpose((2, 0, 1))[:, :, 0]\n+ sample[DataKeys.TARGET] = np.array(load_image(sample[DataKeys.TARGET])).transpose((2, 0, 1))[0, :, :]\n return super().load_sample(sample)\n", "issue": "Semantic Segmentation target masks broken >0.7.5\n## \ud83d\udc1b Bug\r\n\r\nThe switch to albumentation in newer releases of lightning-flash seem to have broken transformation of segmentation targets.\r\n\r\nThis is what I expect masks to look like (screenshot showing below code sample running on 0.7.5):\r\n\r\n\r\n\r\nThis is what it looks like on the latest release (0.8.1):\r\n\r\n\r\n\r\n### To Reproduce\r\n\r\nRun the below sample with lightning-flash=0.7.5 and lightning-flash=0.8.1 and compare behavior. \r\n\r\n#### Code sample\r\n\r\n```py\r\nimport torch\r\n\r\nimport flash\r\nfrom flash.core.data.utils import download_data\r\nfrom flash.image import SemanticSegmentation, SemanticSegmentationData\r\n\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\n\r\n# 1. Create the DataModule\r\n# The data was generated with the CARLA self-driving simulator as part of the Kaggle Lyft Udacity Challenge.\r\n# More info here: https://www.kaggle.com/kumaresanmanickavelu/lyft-udacity-challenge\r\n# download_data(\r\n# \"https://github.com/ongchinkiat/LyftPerceptionChallenge/releases/download/v0.1/carla-capture-20180513A.zip\",\r\n# \"./data\",\r\n# )\r\n\r\ndatamodule = SemanticSegmentationData.from_folders(\r\n train_folder=\"data/CameraRGB\",\r\n train_target_folder=\"data/CameraSeg\",\r\n val_split=0.1,\r\n transform_kwargs=dict(image_size=(256, 256)),\r\n num_classes=21,\r\n batch_size=4,\r\n)\r\n\r\n# 2. 
Build the task\r\nmodel = SemanticSegmentation(\r\n backbone=\"mobilenetv3_large_100\",\r\n head=\"fpn\",\r\n num_classes=datamodule.num_classes,\r\n)\r\n\r\nn = 3\r\nfig, axarr = plt.subplots(ncols=2, nrows=n, figsize=(8, 4*n))\r\n\r\nfor batch in datamodule.train_dataloader():\r\n print(batch.keys())\r\n for i in range(n):\r\n segm = batch['target'][i]\r\n print(segm.shape)\r\n img = np.rollaxis(batch['input'][i].numpy(), 0, 3)\r\n axarr[i, 0].imshow(img)\r\n axarr[i, 1].imshow(segm)\r\n break\r\n```\r\n\r\n### Environment\r\n\r\n - OS: Ubuntu WSL2\r\n - Python version: 3.10.8\r\n - GPU model: RTX 3080\r\n - CUDA Version: 11.6\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\nfrom typing import Any, Dict, List, Optional, Tuple, Union\n\nimport numpy as np\n\nfrom flash.core.data.io.input import DataKeys, Input\nfrom flash.core.data.utilities.loading import IMG_EXTENSIONS, NP_EXTENSIONS, load_image\nfrom flash.core.data.utilities.paths import PATH_TYPE, filter_valid_files\nfrom flash.core.data.utilities.samples import to_samples\nfrom flash.core.integrations.fiftyone.utils import FiftyOneLabelUtilities\nfrom flash.core.utilities.imports import _FIFTYONE_AVAILABLE, lazy_import\nfrom flash.image.data import ImageFilesInput, ImageNumpyInput, ImageTensorInput\nfrom flash.image.segmentation.output import SegmentationLabelsOutput\n\nif _FIFTYONE_AVAILABLE:\n fo = lazy_import(\"fiftyone\")\n SampleCollection = \"fiftyone.core.collections.SampleCollection\"\nelse:\n fo = None\n SampleCollection = None\n\n\nclass SemanticSegmentationInput(Input):\n num_classes: int\n labels_map: Dict[int, Tuple[int, int, int]]\n\n def load_labels_map(\n self, num_classes: Optional[int] = None, labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None\n ) -> None:\n if num_classes is not None:\n self.num_classes = num_classes\n labels_map = labels_map or SegmentationLabelsOutput.create_random_labels_map(num_classes)\n\n if labels_map is not None:\n self.labels_map = labels_map\n\n\nclass SemanticSegmentationTensorInput(SemanticSegmentationInput, ImageTensorInput):\n def load_data(\n self,\n tensor: Any,\n masks: Any = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n return to_samples(tensor, masks)\n\n def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:\n if DataKeys.TARGET in sample:\n sample[DataKeys.TARGET] = sample[DataKeys.TARGET].numpy()\n return super().load_sample(sample)\n\n\nclass SemanticSegmentationNumpyInput(SemanticSegmentationInput, ImageNumpyInput):\n def load_data(\n self,\n array: Any,\n masks: Any = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n return to_samples(array, masks)\n\n\nclass 
SemanticSegmentationFilesInput(SemanticSegmentationInput, ImageFilesInput):\n def load_data(\n self,\n files: Union[PATH_TYPE, List[PATH_TYPE]],\n mask_files: Optional[Union[PATH_TYPE, List[PATH_TYPE]]] = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n if mask_files is None:\n files = filter_valid_files(files, valid_extensions=IMG_EXTENSIONS + NP_EXTENSIONS)\n else:\n files, mask_files = filter_valid_files(files, mask_files, valid_extensions=IMG_EXTENSIONS + NP_EXTENSIONS)\n return to_samples(files, mask_files)\n\n def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:\n if DataKeys.TARGET in sample:\n sample[DataKeys.TARGET] = np.array(load_image(sample[DataKeys.TARGET])).transpose((2, 0, 1))[:, :, 0]\n return super().load_sample(sample)\n\n\nclass SemanticSegmentationFolderInput(SemanticSegmentationFilesInput):\n def load_data(\n self,\n folder: PATH_TYPE,\n mask_folder: Optional[PATH_TYPE] = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n files = os.listdir(folder)\n files.sort()\n if mask_folder is not None:\n mask_files = {os.path.splitext(file)[0]: file for file in os.listdir(mask_folder)}\n file_names = [os.path.splitext(file)[0] for file in files]\n\n if len(set(file_names) - mask_files.keys()) != 0:\n raise ValueError(\n f\"Found inconsistent files in input folder: {folder} and mask folder: {mask_folder}. All input \"\n f\"files must have a corresponding mask file with the same name.\"\n )\n\n files = [os.path.join(folder, file) for file in files]\n mask_files = [os.path.join(mask_folder, mask_files[file_name]) for file_name in file_names]\n return super().load_data(files, mask_files)\n return super().load_data([os.path.join(folder, file) for file in files])\n\n\nclass SemanticSegmentationFiftyOneInput(SemanticSegmentationFilesInput):\n label_field: str\n\n def load_data(\n self,\n sample_collection: SampleCollection,\n label_field: str = \"ground_truth\",\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n\n self.label_field = label_field\n label_utilities = FiftyOneLabelUtilities(label_field, fo.Segmentation)\n label_utilities.validate(sample_collection)\n\n self._fo_dataset_name = sample_collection.name\n return to_samples(sample_collection.values(\"filepath\"))\n\n def predict_load_data(\n self,\n sample_collection: SampleCollection,\n ) -> List[Dict[str, Any]]:\n return to_samples(sample_collection.values(\"filepath\"))\n\n def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:\n filepath = sample[DataKeys.INPUT]\n sample = super().load_sample(sample)\n if not self.predicting:\n fo_dataset = fo.load_dataset(self._fo_dataset_name)\n fo_sample = fo_dataset[filepath]\n sample[DataKeys.TARGET] = fo_sample[self.label_field].mask\n return sample\n", "path": "src/flash/image/segmentation/input.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, 
software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\nfrom typing import Any, Dict, List, Optional, Tuple, Union\n\nimport numpy as np\n\nfrom flash.core.data.io.input import DataKeys, Input\nfrom flash.core.data.utilities.loading import IMG_EXTENSIONS, NP_EXTENSIONS, load_image\nfrom flash.core.data.utilities.paths import PATH_TYPE, filter_valid_files\nfrom flash.core.data.utilities.samples import to_samples\nfrom flash.core.integrations.fiftyone.utils import FiftyOneLabelUtilities\nfrom flash.core.utilities.imports import _FIFTYONE_AVAILABLE, lazy_import\nfrom flash.image.data import ImageFilesInput, ImageNumpyInput, ImageTensorInput\nfrom flash.image.segmentation.output import SegmentationLabelsOutput\n\nif _FIFTYONE_AVAILABLE:\n fo = lazy_import(\"fiftyone\")\n SampleCollection = \"fiftyone.core.collections.SampleCollection\"\nelse:\n fo = None\n SampleCollection = None\n\n\nclass SemanticSegmentationInput(Input):\n num_classes: int\n labels_map: Dict[int, Tuple[int, int, int]]\n\n def load_labels_map(\n self, num_classes: Optional[int] = None, labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None\n ) -> None:\n if num_classes is not None:\n self.num_classes = num_classes\n labels_map = labels_map or SegmentationLabelsOutput.create_random_labels_map(num_classes)\n\n if labels_map is not None:\n self.labels_map = labels_map\n\n\nclass SemanticSegmentationTensorInput(SemanticSegmentationInput, ImageTensorInput):\n def load_data(\n self,\n tensor: Any,\n masks: Any = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n return to_samples(tensor, masks)\n\n def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:\n if DataKeys.TARGET in sample:\n sample[DataKeys.TARGET] = sample[DataKeys.TARGET].numpy()\n return super().load_sample(sample)\n\n\nclass SemanticSegmentationNumpyInput(SemanticSegmentationInput, ImageNumpyInput):\n def load_data(\n self,\n array: Any,\n masks: Any = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n return to_samples(array, masks)\n\n\nclass SemanticSegmentationFilesInput(SemanticSegmentationInput, ImageFilesInput):\n def load_data(\n self,\n files: Union[PATH_TYPE, List[PATH_TYPE]],\n mask_files: Optional[Union[PATH_TYPE, List[PATH_TYPE]]] = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n if mask_files is None:\n files = filter_valid_files(files, valid_extensions=IMG_EXTENSIONS + NP_EXTENSIONS)\n else:\n files, mask_files = filter_valid_files(files, mask_files, valid_extensions=IMG_EXTENSIONS + NP_EXTENSIONS)\n return to_samples(files, mask_files)\n\n def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:\n if DataKeys.TARGET in sample:\n sample[DataKeys.TARGET] = np.array(load_image(sample[DataKeys.TARGET])).transpose((2, 0, 1))[0, :, :]\n return super().load_sample(sample)\n\n\nclass SemanticSegmentationFolderInput(SemanticSegmentationFilesInput):\n def load_data(\n self,\n folder: PATH_TYPE,\n mask_folder: 
Optional[PATH_TYPE] = None,\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n files = os.listdir(folder)\n files.sort()\n if mask_folder is not None:\n mask_files = {os.path.splitext(file)[0]: file for file in os.listdir(mask_folder)}\n file_names = [os.path.splitext(file)[0] for file in files]\n\n if len(set(file_names) - mask_files.keys()) != 0:\n raise ValueError(\n f\"Found inconsistent files in input folder: {folder} and mask folder: {mask_folder}. All input \"\n f\"files must have a corresponding mask file with the same name.\"\n )\n\n files = [os.path.join(folder, file) for file in files]\n mask_files = [os.path.join(mask_folder, mask_files[file_name]) for file_name in file_names]\n return super().load_data(files, mask_files)\n return super().load_data([os.path.join(folder, file) for file in files])\n\n\nclass SemanticSegmentationFiftyOneInput(SemanticSegmentationFilesInput):\n label_field: str\n\n def load_data(\n self,\n sample_collection: SampleCollection,\n label_field: str = \"ground_truth\",\n num_classes: Optional[int] = None,\n labels_map: Optional[Dict[int, Tuple[int, int, int]]] = None,\n ) -> List[Dict[str, Any]]:\n self.load_labels_map(num_classes, labels_map)\n\n self.label_field = label_field\n label_utilities = FiftyOneLabelUtilities(label_field, fo.Segmentation)\n label_utilities.validate(sample_collection)\n\n self._fo_dataset_name = sample_collection.name\n return to_samples(sample_collection.values(\"filepath\"))\n\n def predict_load_data(\n self,\n sample_collection: SampleCollection,\n ) -> List[Dict[str, Any]]:\n return to_samples(sample_collection.values(\"filepath\"))\n\n def load_sample(self, sample: Dict[str, Any]) -> Dict[str, Any]:\n filepath = sample[DataKeys.INPUT]\n sample = super().load_sample(sample)\n if not self.predicting:\n fo_dataset = fo.load_dataset(self._fo_dataset_name)\n fo_sample = fo_dataset[filepath]\n sample[DataKeys.TARGET] = fo_sample[self.label_field].mask\n return sample\n", "path": "src/flash/image/segmentation/input.py"}]}
| 2,848 | 168 |
gh_patches_debug_17724
|
rasdani/github-patches
|
git_diff
|
vyperlang__vyper-1154
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
2-clause if statement must have a zerovalent body error
### Version Information
* vyper Version: 0.1.0b5
* OS: win
* Python Version (python --version): 3.7.0
* Environment (output of `pip freeze`):
```
attrdict==2.0.0
base58==1.0.2
certifi==2018.11.29
chardet==3.0.4
codecov==2.0.15
coverage==4.5.2
cytoolz==0.9.0.1
ecdsa==0.13
eth-abi==1.3.0
eth-account==0.3.0
eth-hash==0.2.0
eth-keyfile==0.5.1
eth-keys==0.2.0b3
eth-rlp==0.1.2
eth-typing==2.0.0
eth-utils==1.4.0
hexbytes==0.1.0
idna==2.7
numpy==1.15.4
pandas==0.23.4
parsimonious==0.8.1
pycryptodome==3.7.2
python-dateutil==2.7.5
pytz==2018.7
requests==2.20.1
rlp==1.0.3
setuptools-scm==3.1.0
six==1.11.0
toolz==0.9.0
tronapi==3.0.2
urllib3==1.24.1
vyper==0.1.0b5
```
### What's your issue about?
Compiling the script fails with a cryptic "zerovalent" error when a function call is the only statement in an `if` body. Adding a `pass` statement after the function call makes the error go away. Likewise, replacing the function call with the body of the called function makes it go away.

Stripped-down example:
```
@private
def foo():
    pass

@public
def start():
if True:
self.foo()
# pass # uncomment to successfully compile
```
Error output:
```
Error compiling: error_reproduce.vy
Exception: 2-clause if statement must have a zerovalent body: [seq,
/* Internal Call: foo */
[seq_unchecked,
[add, pc, 6],
[goto, priv_3264763256],
jumpdest,
0]]
```
Compiled using `vyper error_reproduce.vy`
--- END ISSUE ---
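For context on the error above: the LLL sequence vyper emits for a private self-call ends with a literal returner value (0 when the callee returns nothing), so the node leaves one value on the stack, whereas a two-clause `if` (condition plus body) only accepts a body that leaves nothing. A minimal sketch of that valency rule follows, using plain Python lists as stand-ins for vyper's LLL nodes; the `valency` helper and the `priv_foo` label are illustrative assumptions, not the compiler's real implementation.
```python
# Simplified stand-ins for vyper's LLL lists (illustrative only, not the real LLLnode).
def valency(node):
    """Return 1 if the node leaves a value on the stack, else 0."""
    if isinstance(node, list):
        if node[0] == "pop":
            return 0                     # 'pop' discards whatever its argument yields
        if node[0] in ("seq", "seq_unchecked"):
            return valency(node[-1])     # only the final clause determines valency
        return 1
    return 1 if isinstance(node, int) else 0

# The private-call body ends with the returner literal 0, so it is one-valent:
call_body = ["seq_unchecked", ["goto", "priv_foo"], "jumpdest", 0]
print(valency(call_body))            # 1 -> "must have a zerovalent body"

# Wrapping the body in 'pop' discards that value and satisfies the check:
print(valency(["pop", call_body]))   # 0
```
This also suggests why appending a `pass` statement works around the error: the `if` body's final clause is then zerovalent, so the whole sequence is too.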
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vyper/parser/self_call.py`
Content:
```
1 from vyper.exceptions import (
2 ConstancyViolationException
3 )
4 from vyper.parser.lll_node import (
5 LLLnode
6 )
7 from vyper.parser.parser_utils import (
8 pack_arguments,
9 getpos
10 )
11 from vyper.signatures.function_signature import (
12 FunctionSignature
13 )
14 from vyper.types import (
15 BaseType,
16 ByteArrayType,
17 ListType,
18 TupleType,
19 ceil32,
20 get_size_of_type,
21 )
22
23
24 def call_lookup_specs(stmt_expr, context):
25 from vyper.parser.expr import Expr
26 method_name = stmt_expr.func.attr
27 expr_args = [Expr(arg, context).lll_node for arg in stmt_expr.args]
28 sig = FunctionSignature.lookup_sig(context.sigs, method_name, expr_args, stmt_expr, context)
29 return method_name, expr_args, sig
30
31
32 def make_call(stmt_expr, context):
33 method_name, _, sig = call_lookup_specs(stmt_expr, context)
34
35 if context.is_constant and not sig.const:
36 raise ConstancyViolationException(
37 "May not call non-constant function '%s' within a constant function." % (method_name),
38 getpos(stmt_expr)
39 )
40
41 if sig.private:
42 return call_self_private(stmt_expr, context, sig)
43 else:
44 return call_self_public(stmt_expr, context, sig)
45
46
47 def call_make_placeholder(stmt_expr, context, sig):
48 if sig.output_type is None:
49 return 0, 0, 0
50
51 output_placeholder = context.new_placeholder(typ=sig.output_type)
52 out_size = get_size_of_type(sig.output_type) * 32
53 returner = output_placeholder
54
55 if not sig.private and isinstance(sig.output_type, ByteArrayType):
56 returner = output_placeholder + 32
57
58 return output_placeholder, returner, out_size
59
60
61 def call_self_private(stmt_expr, context, sig):
62 # ** Private Call **
63 # Steps:
64 # (x) push current local variables
65 # (x) push arguments
66 # (x) push jumpdest (callback ptr)
67 # (x) jump to label
68 # (x) pop return values
69 # (x) pop local variables
70
71 method_name, expr_args, sig = call_lookup_specs(stmt_expr, context)
72 pre_init = []
73 pop_local_vars = []
74 push_local_vars = []
75 pop_return_values = []
76 push_args = []
77
78 # Push local variables.
79 if context.vars:
80 var_slots = [(v.pos, v.size) for name, v in context.vars.items()]
81 var_slots.sort(key=lambda x: x[0])
82 mem_from, mem_to = var_slots[0][0], var_slots[-1][0] + var_slots[-1][1] * 32
83 push_local_vars = [
84 ['mload', pos] for pos in range(mem_from, mem_to, 32)
85 ]
86 pop_local_vars = [
87 ['mstore', pos, 'pass'] for pos in reversed(range(mem_from, mem_to, 32))
88 ]
89
90 # Push Arguments
91 if expr_args:
92 inargs, inargsize, arg_pos = pack_arguments(sig, expr_args, context, return_placeholder=False, pos=getpos(stmt_expr))
93 push_args += [inargs] # copy arguments first, to not mess up the push/pop sequencing.
94 static_arg_count = len(expr_args) * 32
95 static_pos = arg_pos + static_arg_count
96 total_arg_size = ceil32(inargsize - 4)
97
98 if len(expr_args) * 32 != total_arg_size: # requires dynamic section.
99 ident = 'push_args_%d_%d_%d' % (sig.method_id, stmt_expr.lineno, stmt_expr.col_offset)
100 start_label = ident + '_start'
101 end_label = ident + '_end'
102 i_placeholder = context.new_placeholder(BaseType('uint256'))
103 push_args += [
104 ['mstore', i_placeholder, arg_pos + total_arg_size],
105 ['label', start_label],
106 ['if', ['lt', ['mload', i_placeholder], static_pos], ['goto', end_label]],
107 ['if_unchecked', ['ne', ['mload', ['mload', i_placeholder]], 0], ['mload', ['mload', i_placeholder]]],
108 ['mstore', i_placeholder, ['sub', ['mload', i_placeholder], 32]], # decrease i
109 ['goto', start_label],
110 ['label', end_label]
111 ]
112
113 # push static section
114 push_args += [
115 ['mload', pos] for pos in reversed(range(arg_pos, static_pos, 32))
116 ]
117
118 # Jump to function label.
119 jump_to_func = [
120 ['add', ['pc'], 6], # set callback pointer.
121 ['goto', 'priv_{}'.format(sig.method_id)],
122 ['jumpdest'],
123 ]
124
125 # Pop return values.
126 returner = [0]
127 if sig.output_type:
128 output_placeholder, returner, output_size = call_make_placeholder(stmt_expr, context, sig)
129 if output_size > 0:
130 dynamic_offsets = []
131 if isinstance(sig.output_type, (BaseType, ListType)):
132 pop_return_values = [
133 ['mstore', ['add', output_placeholder, pos], 'pass'] for pos in range(0, output_size, 32)
134 ]
135 elif isinstance(sig.output_type, ByteArrayType):
136 dynamic_offsets = [(0, sig.output_type)]
137 pop_return_values = [
138 ['pop', 'pass'],
139 ]
140 elif isinstance(sig.output_type, TupleType):
141 static_offset = 0
142 pop_return_values = []
143 for out_type in sig.output_type.members:
144 if isinstance(out_type, ByteArrayType):
145 pop_return_values.append(['mstore', ['add', output_placeholder, static_offset], 'pass'])
146 dynamic_offsets.append((['mload', ['add', output_placeholder, static_offset]], out_type))
147 else:
148 pop_return_values.append(['mstore', ['add', output_placeholder, static_offset], 'pass'])
149 static_offset += 32
150
151 # append dynamic unpacker.
152 dyn_idx = 0
153 for in_memory_offset, out_type in dynamic_offsets:
154 ident = "%d_%d_arg_%d" % (stmt_expr.lineno, stmt_expr.col_offset, dyn_idx)
155 dyn_idx += 1
156 start_label = 'dyn_unpack_start_' + ident
157 end_label = 'dyn_unpack_end_' + ident
158 i_placeholder = context.new_placeholder(typ=BaseType('uint256'))
159 begin_pos = ['add', output_placeholder, in_memory_offset]
160 # loop until length.
161 o = LLLnode.from_list(
162 ['seq_unchecked',
163 ['mstore', begin_pos, 'pass'], # get len
164 ['mstore', i_placeholder, 0],
165 ['label', start_label],
166 ['if', ['ge', ['mload', i_placeholder], ['ceil32', ['mload', begin_pos]]], ['goto', end_label]], # break
167 ['mstore', ['add', ['add', begin_pos, 32], ['mload', i_placeholder]], 'pass'], # pop into correct memory slot.
168 ['mstore', i_placeholder, ['add', 32, ['mload', i_placeholder]]], # increment i
169 ['goto', start_label],
170 ['label', end_label]],
171 typ=None, annotation='dynamic unpacker', pos=getpos(stmt_expr))
172 pop_return_values.append(o)
173
174 o = LLLnode.from_list(
175 ['seq_unchecked'] + pre_init + push_local_vars + push_args + jump_to_func + pop_return_values + pop_local_vars + [returner],
176 typ=sig.output_type, location='memory', pos=getpos(stmt_expr), annotation='Internal Call: %s' % method_name,
177 add_gas_estimate=sig.gas
178 )
179 o.gas += sig.gas
180 return o
181
182
183 def call_self_public(stmt_expr, context, sig):
184 # self.* style call to a public function.
185 method_name, expr_args, sig = call_lookup_specs(stmt_expr, context)
186 add_gas = sig.gas # gas of call
187 inargs, inargsize, _ = pack_arguments(sig, expr_args, context, pos=getpos(stmt_expr))
188 output_placeholder, returner, output_size = call_make_placeholder(stmt_expr, context, sig)
189 assert_call = [
190 'assert', ['call', ['gas'], ['address'], 0, inargs, inargsize, output_placeholder, output_size]
191 ]
192 if output_size > 0:
193 assert_call = ['seq', assert_call, returner]
194 o = LLLnode.from_list(
195 assert_call,
196 typ=sig.output_type, location='memory',
197 pos=getpos(stmt_expr), add_gas_estimate=add_gas, annotation='Internal Call: %s' % method_name)
198 o.gas += sig.gas
199 return o
200
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vyper/parser/self_call.py b/vyper/parser/self_call.py
--- a/vyper/parser/self_call.py
+++ b/vyper/parser/self_call.py
@@ -171,8 +171,21 @@
typ=None, annotation='dynamic unpacker', pos=getpos(stmt_expr))
pop_return_values.append(o)
+ call_body = (
+ ['seq_unchecked'] +
+ pre_init +
+ push_local_vars +
+ push_args +
+ jump_to_func +
+ pop_return_values +
+ pop_local_vars +
+ [returner]
+ )
+ # If we have no return, we need to pop off
+ pop_returner_call_body = ['pop', call_body] if sig.output_type is None else call_body
+
o = LLLnode.from_list(
- ['seq_unchecked'] + pre_init + push_local_vars + push_args + jump_to_func + pop_return_values + pop_local_vars + [returner],
+ pop_returner_call_body,
typ=sig.output_type, location='memory', pos=getpos(stmt_expr), annotation='Internal Call: %s' % method_name,
add_gas_estimate=sig.gas
)
|
{"golden_diff": "diff --git a/vyper/parser/self_call.py b/vyper/parser/self_call.py\n--- a/vyper/parser/self_call.py\n+++ b/vyper/parser/self_call.py\n@@ -171,8 +171,21 @@\n typ=None, annotation='dynamic unpacker', pos=getpos(stmt_expr))\n pop_return_values.append(o)\n \n+ call_body = (\n+ ['seq_unchecked'] +\n+ pre_init +\n+ push_local_vars +\n+ push_args +\n+ jump_to_func +\n+ pop_return_values +\n+ pop_local_vars +\n+ [returner]\n+ )\n+ # If we have no return, we need to pop off\n+ pop_returner_call_body = ['pop', call_body] if sig.output_type is None else call_body\n+\n o = LLLnode.from_list(\n- ['seq_unchecked'] + pre_init + push_local_vars + push_args + jump_to_func + pop_return_values + pop_local_vars + [returner],\n+ pop_returner_call_body,\n typ=sig.output_type, location='memory', pos=getpos(stmt_expr), annotation='Internal Call: %s' % method_name,\n add_gas_estimate=sig.gas\n )\n", "issue": "2-clause if statement must have a zerovalent body error\n### Version Information\r\n\r\n* vyper Version: 0.1.0b5\r\n* OS: win\r\n* Python Version (python --version): 3.7.0\r\n* Environment (output of `pip freeze`):\r\n\r\n```\r\nattrdict==2.0.0\r\nbase58==1.0.2\r\ncertifi==2018.11.29\r\nchardet==3.0.4\r\ncodecov==2.0.15\r\ncoverage==4.5.2\r\ncytoolz==0.9.0.1\r\necdsa==0.13\r\neth-abi==1.3.0\r\neth-account==0.3.0\r\neth-hash==0.2.0\r\neth-keyfile==0.5.1\r\neth-keys==0.2.0b3\r\neth-rlp==0.1.2\r\neth-typing==2.0.0\r\neth-utils==1.4.0\r\nhexbytes==0.1.0\r\nidna==2.7\r\nnumpy==1.15.4\r\npandas==0.23.4\r\nparsimonious==0.8.1\r\npycryptodome==3.7.2\r\npython-dateutil==2.7.5\r\npytz==2018.7\r\nrequests==2.20.1\r\nrlp==1.0.3\r\nsetuptools-scm==3.1.0\r\nsix==1.11.0\r\ntoolz==0.9.0\r\ntronapi==3.0.2\r\nurllib3==1.24.1\r\nvyper==0.1.0b5\r\n```\r\n\r\n### What's your issue about?\r\n\r\nScript returns cryptic \"zerovalent\" error when there is a function call after an if statement. Adding a pass statement after the function call makes it go away. 
Likewise, replacing the function call with the body of the function call makes it go away.\r\n\r\nStripped down example:\r\n```\r\n@private\r\ndef foo():\r\n pass\r\n\r\n@public\r\ndef start():\r\n if True:\r\n self.foo()\r\n # pass # uncomment to successfully compile\r\n```\r\n\r\nError output:\r\n```\r\nError compiling: error_reproduce.vy\r\nException: 2-clause if statement must have a zerovalent body: [seq,\r\n /* Internal Call: foo */\r\n [seq_unchecked,\r\n [add, pc, 6],\r\n [goto, priv_3264763256],\r\n jumpdest,\r\n 0]]\r\n```\r\n\r\nCompiled using `vyper error_reproduce.vy`\r\n\n", "before_files": [{"content": "from vyper.exceptions import (\n ConstancyViolationException\n)\nfrom vyper.parser.lll_node import (\n LLLnode\n)\nfrom vyper.parser.parser_utils import (\n pack_arguments,\n getpos\n)\nfrom vyper.signatures.function_signature import (\n FunctionSignature\n)\nfrom vyper.types import (\n BaseType,\n ByteArrayType,\n ListType,\n TupleType,\n ceil32,\n get_size_of_type,\n)\n\n\ndef call_lookup_specs(stmt_expr, context):\n from vyper.parser.expr import Expr\n method_name = stmt_expr.func.attr\n expr_args = [Expr(arg, context).lll_node for arg in stmt_expr.args]\n sig = FunctionSignature.lookup_sig(context.sigs, method_name, expr_args, stmt_expr, context)\n return method_name, expr_args, sig\n\n\ndef make_call(stmt_expr, context):\n method_name, _, sig = call_lookup_specs(stmt_expr, context)\n\n if context.is_constant and not sig.const:\n raise ConstancyViolationException(\n \"May not call non-constant function '%s' within a constant function.\" % (method_name),\n getpos(stmt_expr)\n )\n\n if sig.private:\n return call_self_private(stmt_expr, context, sig)\n else:\n return call_self_public(stmt_expr, context, sig)\n\n\ndef call_make_placeholder(stmt_expr, context, sig):\n if sig.output_type is None:\n return 0, 0, 0\n\n output_placeholder = context.new_placeholder(typ=sig.output_type)\n out_size = get_size_of_type(sig.output_type) * 32\n returner = output_placeholder\n\n if not sig.private and isinstance(sig.output_type, ByteArrayType):\n returner = output_placeholder + 32\n\n return output_placeholder, returner, out_size\n\n\ndef call_self_private(stmt_expr, context, sig):\n # ** Private Call **\n # Steps:\n # (x) push current local variables\n # (x) push arguments\n # (x) push jumpdest (callback ptr)\n # (x) jump to label\n # (x) pop return values\n # (x) pop local variables\n\n method_name, expr_args, sig = call_lookup_specs(stmt_expr, context)\n pre_init = []\n pop_local_vars = []\n push_local_vars = []\n pop_return_values = []\n push_args = []\n\n # Push local variables.\n if context.vars:\n var_slots = [(v.pos, v.size) for name, v in context.vars.items()]\n var_slots.sort(key=lambda x: x[0])\n mem_from, mem_to = var_slots[0][0], var_slots[-1][0] + var_slots[-1][1] * 32\n push_local_vars = [\n ['mload', pos] for pos in range(mem_from, mem_to, 32)\n ]\n pop_local_vars = [\n ['mstore', pos, 'pass'] for pos in reversed(range(mem_from, mem_to, 32))\n ]\n\n # Push Arguments\n if expr_args:\n inargs, inargsize, arg_pos = pack_arguments(sig, expr_args, context, return_placeholder=False, pos=getpos(stmt_expr))\n push_args += [inargs] # copy arguments first, to not mess up the push/pop sequencing.\n static_arg_count = len(expr_args) * 32\n static_pos = arg_pos + static_arg_count\n total_arg_size = ceil32(inargsize - 4)\n\n if len(expr_args) * 32 != total_arg_size: # requires dynamic section.\n ident = 'push_args_%d_%d_%d' % (sig.method_id, stmt_expr.lineno, stmt_expr.col_offset)\n 
start_label = ident + '_start'\n end_label = ident + '_end'\n i_placeholder = context.new_placeholder(BaseType('uint256'))\n push_args += [\n ['mstore', i_placeholder, arg_pos + total_arg_size],\n ['label', start_label],\n ['if', ['lt', ['mload', i_placeholder], static_pos], ['goto', end_label]],\n ['if_unchecked', ['ne', ['mload', ['mload', i_placeholder]], 0], ['mload', ['mload', i_placeholder]]],\n ['mstore', i_placeholder, ['sub', ['mload', i_placeholder], 32]], # decrease i\n ['goto', start_label],\n ['label', end_label]\n ]\n\n # push static section\n push_args += [\n ['mload', pos] for pos in reversed(range(arg_pos, static_pos, 32))\n ]\n\n # Jump to function label.\n jump_to_func = [\n ['add', ['pc'], 6], # set callback pointer.\n ['goto', 'priv_{}'.format(sig.method_id)],\n ['jumpdest'],\n ]\n\n # Pop return values.\n returner = [0]\n if sig.output_type:\n output_placeholder, returner, output_size = call_make_placeholder(stmt_expr, context, sig)\n if output_size > 0:\n dynamic_offsets = []\n if isinstance(sig.output_type, (BaseType, ListType)):\n pop_return_values = [\n ['mstore', ['add', output_placeholder, pos], 'pass'] for pos in range(0, output_size, 32)\n ]\n elif isinstance(sig.output_type, ByteArrayType):\n dynamic_offsets = [(0, sig.output_type)]\n pop_return_values = [\n ['pop', 'pass'],\n ]\n elif isinstance(sig.output_type, TupleType):\n static_offset = 0\n pop_return_values = []\n for out_type in sig.output_type.members:\n if isinstance(out_type, ByteArrayType):\n pop_return_values.append(['mstore', ['add', output_placeholder, static_offset], 'pass'])\n dynamic_offsets.append((['mload', ['add', output_placeholder, static_offset]], out_type))\n else:\n pop_return_values.append(['mstore', ['add', output_placeholder, static_offset], 'pass'])\n static_offset += 32\n\n # append dynamic unpacker.\n dyn_idx = 0\n for in_memory_offset, out_type in dynamic_offsets:\n ident = \"%d_%d_arg_%d\" % (stmt_expr.lineno, stmt_expr.col_offset, dyn_idx)\n dyn_idx += 1\n start_label = 'dyn_unpack_start_' + ident\n end_label = 'dyn_unpack_end_' + ident\n i_placeholder = context.new_placeholder(typ=BaseType('uint256'))\n begin_pos = ['add', output_placeholder, in_memory_offset]\n # loop until length.\n o = LLLnode.from_list(\n ['seq_unchecked',\n ['mstore', begin_pos, 'pass'], # get len\n ['mstore', i_placeholder, 0],\n ['label', start_label],\n ['if', ['ge', ['mload', i_placeholder], ['ceil32', ['mload', begin_pos]]], ['goto', end_label]], # break\n ['mstore', ['add', ['add', begin_pos, 32], ['mload', i_placeholder]], 'pass'], # pop into correct memory slot.\n ['mstore', i_placeholder, ['add', 32, ['mload', i_placeholder]]], # increment i\n ['goto', start_label],\n ['label', end_label]],\n typ=None, annotation='dynamic unpacker', pos=getpos(stmt_expr))\n pop_return_values.append(o)\n\n o = LLLnode.from_list(\n ['seq_unchecked'] + pre_init + push_local_vars + push_args + jump_to_func + pop_return_values + pop_local_vars + [returner],\n typ=sig.output_type, location='memory', pos=getpos(stmt_expr), annotation='Internal Call: %s' % method_name,\n add_gas_estimate=sig.gas\n )\n o.gas += sig.gas\n return o\n\n\ndef call_self_public(stmt_expr, context, sig):\n # self.* style call to a public function.\n method_name, expr_args, sig = call_lookup_specs(stmt_expr, context)\n add_gas = sig.gas # gas of call\n inargs, inargsize, _ = pack_arguments(sig, expr_args, context, pos=getpos(stmt_expr))\n output_placeholder, returner, output_size = call_make_placeholder(stmt_expr, context, sig)\n assert_call = 
[\n 'assert', ['call', ['gas'], ['address'], 0, inargs, inargsize, output_placeholder, output_size]\n ]\n if output_size > 0:\n assert_call = ['seq', assert_call, returner]\n o = LLLnode.from_list(\n assert_call,\n typ=sig.output_type, location='memory',\n pos=getpos(stmt_expr), add_gas_estimate=add_gas, annotation='Internal Call: %s' % method_name)\n o.gas += sig.gas\n return o\n", "path": "vyper/parser/self_call.py"}], "after_files": [{"content": "from vyper.exceptions import (\n ConstancyViolationException\n)\nfrom vyper.parser.lll_node import (\n LLLnode\n)\nfrom vyper.parser.parser_utils import (\n pack_arguments,\n getpos\n)\nfrom vyper.signatures.function_signature import (\n FunctionSignature\n)\nfrom vyper.types import (\n BaseType,\n ByteArrayType,\n ListType,\n TupleType,\n ceil32,\n get_size_of_type,\n)\n\n\ndef call_lookup_specs(stmt_expr, context):\n from vyper.parser.expr import Expr\n method_name = stmt_expr.func.attr\n expr_args = [Expr(arg, context).lll_node for arg in stmt_expr.args]\n sig = FunctionSignature.lookup_sig(context.sigs, method_name, expr_args, stmt_expr, context)\n return method_name, expr_args, sig\n\n\ndef make_call(stmt_expr, context):\n method_name, _, sig = call_lookup_specs(stmt_expr, context)\n\n if context.is_constant and not sig.const:\n raise ConstancyViolationException(\n \"May not call non-constant function '%s' within a constant function.\" % (method_name),\n getpos(stmt_expr)\n )\n\n if sig.private:\n return call_self_private(stmt_expr, context, sig)\n else:\n return call_self_public(stmt_expr, context, sig)\n\n\ndef call_make_placeholder(stmt_expr, context, sig):\n if sig.output_type is None:\n return 0, 0, 0\n\n output_placeholder = context.new_placeholder(typ=sig.output_type)\n out_size = get_size_of_type(sig.output_type) * 32\n returner = output_placeholder\n\n if not sig.private and isinstance(sig.output_type, ByteArrayType):\n returner = output_placeholder + 32\n\n return output_placeholder, returner, out_size\n\n\ndef call_self_private(stmt_expr, context, sig):\n # ** Private Call **\n # Steps:\n # (x) push current local variables\n # (x) push arguments\n # (x) push jumpdest (callback ptr)\n # (x) jump to label\n # (x) pop return values\n # (x) pop local variables\n\n method_name, expr_args, sig = call_lookup_specs(stmt_expr, context)\n pre_init = []\n pop_local_vars = []\n push_local_vars = []\n pop_return_values = []\n push_args = []\n\n # Push local variables.\n if context.vars:\n var_slots = [(v.pos, v.size) for name, v in context.vars.items()]\n var_slots.sort(key=lambda x: x[0])\n mem_from, mem_to = var_slots[0][0], var_slots[-1][0] + var_slots[-1][1] * 32\n push_local_vars = [\n ['mload', pos] for pos in range(mem_from, mem_to, 32)\n ]\n pop_local_vars = [\n ['mstore', pos, 'pass'] for pos in reversed(range(mem_from, mem_to, 32))\n ]\n\n # Push Arguments\n if expr_args:\n inargs, inargsize, arg_pos = pack_arguments(sig, expr_args, context, return_placeholder=False, pos=getpos(stmt_expr))\n push_args += [inargs] # copy arguments first, to not mess up the push/pop sequencing.\n static_arg_count = len(expr_args) * 32\n static_pos = arg_pos + static_arg_count\n total_arg_size = ceil32(inargsize - 4)\n\n if len(expr_args) * 32 != total_arg_size: # requires dynamic section.\n ident = 'push_args_%d_%d_%d' % (sig.method_id, stmt_expr.lineno, stmt_expr.col_offset)\n start_label = ident + '_start'\n end_label = ident + '_end'\n i_placeholder = context.new_placeholder(BaseType('uint256'))\n push_args += [\n ['mstore', i_placeholder, arg_pos 
+ total_arg_size],\n ['label', start_label],\n ['if', ['lt', ['mload', i_placeholder], static_pos], ['goto', end_label]],\n ['if_unchecked', ['ne', ['mload', ['mload', i_placeholder]], 0], ['mload', ['mload', i_placeholder]]],\n ['mstore', i_placeholder, ['sub', ['mload', i_placeholder], 32]], # decrease i\n ['goto', start_label],\n ['label', end_label]\n ]\n\n # push static section\n push_args += [\n ['mload', pos] for pos in reversed(range(arg_pos, static_pos, 32))\n ]\n\n # Jump to function label.\n jump_to_func = [\n ['add', ['pc'], 6], # set callback pointer.\n ['goto', 'priv_{}'.format(sig.method_id)],\n ['jumpdest'],\n ]\n\n # Pop return values.\n returner = [0]\n if sig.output_type:\n output_placeholder, returner, output_size = call_make_placeholder(stmt_expr, context, sig)\n if output_size > 0:\n dynamic_offsets = []\n if isinstance(sig.output_type, (BaseType, ListType)):\n pop_return_values = [\n ['mstore', ['add', output_placeholder, pos], 'pass'] for pos in range(0, output_size, 32)\n ]\n elif isinstance(sig.output_type, ByteArrayType):\n dynamic_offsets = [(0, sig.output_type)]\n pop_return_values = [\n ['pop', 'pass'],\n ]\n elif isinstance(sig.output_type, TupleType):\n static_offset = 0\n pop_return_values = []\n for out_type in sig.output_type.members:\n if isinstance(out_type, ByteArrayType):\n pop_return_values.append(['mstore', ['add', output_placeholder, static_offset], 'pass'])\n dynamic_offsets.append((['mload', ['add', output_placeholder, static_offset]], out_type))\n else:\n pop_return_values.append(['mstore', ['add', output_placeholder, static_offset], 'pass'])\n static_offset += 32\n\n # append dynamic unpacker.\n dyn_idx = 0\n for in_memory_offset, out_type in dynamic_offsets:\n ident = \"%d_%d_arg_%d\" % (stmt_expr.lineno, stmt_expr.col_offset, dyn_idx)\n dyn_idx += 1\n start_label = 'dyn_unpack_start_' + ident\n end_label = 'dyn_unpack_end_' + ident\n i_placeholder = context.new_placeholder(typ=BaseType('uint256'))\n begin_pos = ['add', output_placeholder, in_memory_offset]\n # loop until length.\n o = LLLnode.from_list(\n ['seq_unchecked',\n ['mstore', begin_pos, 'pass'], # get len\n ['mstore', i_placeholder, 0],\n ['label', start_label],\n ['if', ['ge', ['mload', i_placeholder], ['ceil32', ['mload', begin_pos]]], ['goto', end_label]], # break\n ['mstore', ['add', ['add', begin_pos, 32], ['mload', i_placeholder]], 'pass'], # pop into correct memory slot.\n ['mstore', i_placeholder, ['add', 32, ['mload', i_placeholder]]], # increment i\n ['goto', start_label],\n ['label', end_label]],\n typ=None, annotation='dynamic unpacker', pos=getpos(stmt_expr))\n pop_return_values.append(o)\n\n call_body = (\n ['seq_unchecked'] +\n pre_init +\n push_local_vars +\n push_args +\n jump_to_func +\n pop_return_values +\n pop_local_vars +\n [returner]\n )\n # If we have no return, we need to pop off\n pop_returner_call_body = ['pop', call_body] if sig.output_type is None else call_body\n\n o = LLLnode.from_list(\n pop_returner_call_body,\n typ=sig.output_type, location='memory', pos=getpos(stmt_expr), annotation='Internal Call: %s' % method_name,\n add_gas_estimate=sig.gas\n )\n o.gas += sig.gas\n return o\n\n\ndef call_self_public(stmt_expr, context, sig):\n # self.* style call to a public function.\n method_name, expr_args, sig = call_lookup_specs(stmt_expr, context)\n add_gas = sig.gas # gas of call\n inargs, inargsize, _ = pack_arguments(sig, expr_args, context, pos=getpos(stmt_expr))\n output_placeholder, returner, output_size = call_make_placeholder(stmt_expr, context, 
sig)\n assert_call = [\n 'assert', ['call', ['gas'], ['address'], 0, inargs, inargsize, output_placeholder, output_size]\n ]\n if output_size > 0:\n assert_call = ['seq', assert_call, returner]\n o = LLLnode.from_list(\n assert_call,\n typ=sig.output_type, location='memory',\n pos=getpos(stmt_expr), add_gas_estimate=add_gas, annotation='Internal Call: %s' % method_name)\n o.gas += sig.gas\n return o\n", "path": "vyper/parser/self_call.py"}]}
| 3,265 | 263 |
gh_patches_debug_10222
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-2859
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnboundLocalError when using VisualStudioBuildEnvironment
I keep getting the following Python error when using `VisualStudioBuildEnvironment` with Conan 1.3.1.
```python
env_build = VisualStudioBuildEnvironment(self)
UnboundLocalError: local variable 'flags' referenced before assignment
```
I think I found the problem in [conans/client/build/compiler_flags.py Line 99](https://github.com/conan-io/conan/blob/develop/conans/client/build/compiler_flags.py#L99)
It looks like
```python
elif not vs_toolset:
```
should just be an `else` or `flags` should be initialized to something in the
```python
if str(compiler) == 'Visual Studio':
```
block.
--- END ISSUE ---
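For context, here is a minimal standalone reproduction of the control flow described above; the `"v140"` value is only an assumed stand-in for any `vs_toolset` that is set but does not contain "clang".
```python
# Sketch of the buggy branch structure in build_type_flags (simplified copy for
# illustration, not the real conan function): with a non-clang toolset set,
# neither branch binds 'flags'.
def buggy_build_type_flags(build_type, vs_toolset=None):
    if vs_toolset and "clang" in str(vs_toolset):
        flags = {"Debug": ["-gline-tables-only", "-fno-inline", "-O0"]}.get(build_type, ["-O2", "-Ob2"])
    elif not vs_toolset:
        flags = {"Debug": ["-Zi", "-Ob0", "-Od"]}.get(build_type, [])
    return flags  # vs_toolset="v140" skips both branches, so 'flags' is unbound here

try:
    buggy_build_type_flags("Debug", vs_toolset="v140")
except UnboundLocalError as exc:
    print(exc)  # local variable 'flags' referenced before assignment
```
Turning the `elif not vs_toolset:` into a plain `else:` (or initializing `flags` before the branches) covers that remaining case.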
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/build/compiler_flags.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 # Visual Studio cl options reference:
6 # https://msdn.microsoft.com/en-us/library/610ecb4h.aspx
7 # "Options are specified by either a forward slash (/) or a dash (–)."
8 # Here we use "-" better than "/" that produces invalid escaped chars using AutoTools.
9 # -LIBPATH, -D, -I, -ZI and so on.
10
11 """
12
13 from conans.tools import unix_path
14
15
16 def rpath_flags(os_build, compiler, lib_paths):
17 if not os_build:
18 return []
19 if compiler in ("clang", "apple-clang", "gcc"):
20 rpath_separator = "," if os_build in ["Macos", "iOS", "watchOS", "tvOS"] else "="
21 return ['-Wl,-rpath%s"%s"' % (rpath_separator, x.replace("\\", "/"))
22 for x in lib_paths if x]
23 return []
24
25
26 def architecture_flag(compiler, arch):
27 """
28 returns flags specific to the target architecture and compiler
29 """
30 if not compiler or not arch:
31 return ""
32
33 if str(compiler) in ['gcc', 'apple-clang', 'clang', 'sun-cc']:
34 if str(arch) in ['x86_64', 'sparcv9']:
35 return '-m64'
36 elif str(arch) in ['x86', 'sparc']:
37 return '-m32'
38 return ""
39
40
41 def libcxx_define(compiler, libcxx):
42
43 if not compiler or not libcxx:
44 return ""
45
46 if str(compiler) in ['gcc', 'clang', 'apple-clang']:
47 if str(libcxx) == 'libstdc++':
48 return '_GLIBCXX_USE_CXX11_ABI=0'
49 elif str(libcxx) == 'libstdc++11':
50 return '_GLIBCXX_USE_CXX11_ABI=1'
51 return ""
52
53
54 def libcxx_flag(compiler, libcxx):
55 """
56 returns flag specific to the target C++ standard library
57 """
58 if not compiler or not libcxx:
59 return ""
60 if str(compiler) in ['clang', 'apple-clang']:
61 if str(libcxx) in ['libstdc++', 'libstdc++11']:
62 return '-stdlib=libstdc++'
63 elif str(libcxx) == 'libc++':
64 return '-stdlib=libc++'
65 elif str(compiler) == 'sun-cc':
66 return ({"libCstd": "-library=Cstd",
67 "libstdcxx": "-library=stdcxx4",
68 "libstlport": "-library=stlport4",
69 "libstdc++": "-library=stdcpp"}.get(libcxx, ""))
70 return ""
71
72
73 def pic_flag(compiler=None):
74 """
75 returns PIC (position independent code) flags, such as -fPIC
76 """
77 if not compiler or compiler == 'Visual Studio':
78 return ""
79 return '-fPIC'
80
81
82 def build_type_flags(compiler, build_type, vs_toolset=None):
83 """
84 returns flags specific to the build type (Debug, Release, etc.)
85 (-s, -g, /Zi, etc.)
86 """
87 if not compiler or not build_type:
88 return ""
89
90 # https://github.com/Kitware/CMake/blob/d7af8a34b67026feaee558433db3a835d6007e06/
91 # Modules/Platform/Windows-MSVC.cmake
92 if str(compiler) == 'Visual Studio':
93 if vs_toolset and "clang" in str(vs_toolset):
94 flags = {"Debug": ["-gline-tables-only", "-fno-inline", "-O0"],
95 "Release": ["-O2"],
96 "RelWithDebInfo": ["-gline-tables-only", "-O2", "-fno-inline"],
97 "MinSizeRel": []
98 }.get(build_type, ["-O2", "-Ob2"])
99 elif not vs_toolset:
100 flags = {"Debug": ["-Zi", "-Ob0", "-Od"],
101 "Release": ["-O2", "-Ob2"],
102 "RelWithDebInfo": ["-Zi", "-O2", "-Ob1"],
103 "MinSizeRel": ["-O1", "-Ob1"],
104 }.get(build_type, [])
105 return flags
106 else:
107 # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/
108 # Modules/Compiler/GNU.cmake
109 # clang include the gnu (overriding some things, but not build type) and apple clang
110 # overrides clang but it doesn't touch clang either
111 if str(compiler) in ["clang", "gcc", "apple-clang"]:
112 # FIXME: It is not clear that the "-s" is something related with the build type
113 # cmake is not adjusting it
114 # -s: Remove all symbol table and relocation information from the executable.
115 flags = {"Debug": ["-g"],
116 "Release": ["-O3", "-s"] if str(compiler) == "gcc" else ["-O3"],
117 "RelWithDebInfo": ["-O2", "-g"],
118 "MinSizeRel": ["-Os"],
119 }.get(build_type, [])
120 return flags
121 elif str(compiler) == "sun-cc":
122 # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/
123 # Modules/Compiler/SunPro-CXX.cmake
124 flags = {"Debug": ["-g"],
125 "Release": ["-xO3"],
126 "RelWithDebInfo": ["-xO2", "-g"],
127 "MinSizeRel": ["-xO2", "-xspace"],
128 }.get(build_type, [])
129 return flags
130
131 return ""
132
133
134 def build_type_define(build_type=None):
135 """
136 returns definitions specific to the build type (Debug, Release, etc.)
137 like DEBUG, _DEBUG, NDEBUG
138 """
139 return 'NDEBUG' if build_type == 'Release' else ""
140
141
142 def adjust_path(path, win_bash=False, subsystem=None, compiler=None):
143 """
144 adjusts path to be safely passed to the compiler command line
145 for Windows bash, ensures path is in format according to the subsystem
146 for path with spaces, places double quotes around it
147 converts slashes to backslashes, or vice versa
148 """
149 if str(compiler) == 'Visual Studio':
150 path = path.replace('/', '\\')
151 else:
152 path = path.replace('\\', '/')
153 if win_bash:
154 path = unix_path(path, subsystem)
155 return '"%s"' % path if ' ' in path else path
156
157
158 def sysroot_flag(sysroot, win_bash=False, subsystem=None, compiler=None):
159 if str(compiler) != 'Visual Studio' and sysroot:
160 sysroot = adjust_path(sysroot, win_bash=win_bash, subsystem=subsystem, compiler=compiler)
161 return '--sysroot=%s' % sysroot
162 return ""
163
164
165 def visual_runtime(runtime):
166 if runtime:
167 return "-%s" % runtime
168 return ""
169
170
171 def format_defines(defines):
172 return ["-D%s" % define for define in defines if define]
173
174
175 include_path_option = "-I"
176 visual_linker_option_separator = "-link" # Further options will apply to the linker
177
178
179 def format_include_paths(include_paths, win_bash=False, subsystem=None, compiler=None):
180 return ["%s%s" % (include_path_option, adjust_path(include_path, win_bash=win_bash,
181 subsystem=subsystem, compiler=compiler))
182 for include_path in include_paths if include_path]
183
184
185 def format_library_paths(library_paths, win_bash=False, subsystem=None, compiler=None):
186 pattern = "-LIBPATH:%s" if str(compiler) == 'Visual Studio' else "-L%s"
187 return [pattern % adjust_path(library_path, win_bash=win_bash,
188 subsystem=subsystem, compiler=compiler)
189 for library_path in library_paths if library_path]
190
191
192 def format_libraries(libraries, compiler=None):
193 pattern = "%s.lib" if str(compiler) == 'Visual Studio' else "-l%s"
194 return [pattern % library for library in libraries if library]
195
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conans/client/build/compiler_flags.py b/conans/client/build/compiler_flags.py
--- a/conans/client/build/compiler_flags.py
+++ b/conans/client/build/compiler_flags.py
@@ -96,7 +96,7 @@
"RelWithDebInfo": ["-gline-tables-only", "-O2", "-fno-inline"],
"MinSizeRel": []
}.get(build_type, ["-O2", "-Ob2"])
- elif not vs_toolset:
+ else:
flags = {"Debug": ["-Zi", "-Ob0", "-Od"],
"Release": ["-O2", "-Ob2"],
"RelWithDebInfo": ["-Zi", "-O2", "-Ob1"],
|
{"golden_diff": "diff --git a/conans/client/build/compiler_flags.py b/conans/client/build/compiler_flags.py\n--- a/conans/client/build/compiler_flags.py\n+++ b/conans/client/build/compiler_flags.py\n@@ -96,7 +96,7 @@\n \"RelWithDebInfo\": [\"-gline-tables-only\", \"-O2\", \"-fno-inline\"],\n \"MinSizeRel\": []\n }.get(build_type, [\"-O2\", \"-Ob2\"])\n- elif not vs_toolset:\n+ else:\n flags = {\"Debug\": [\"-Zi\", \"-Ob0\", \"-Od\"],\n \"Release\": [\"-O2\", \"-Ob2\"],\n \"RelWithDebInfo\": [\"-Zi\", \"-O2\", \"-Ob1\"],\n", "issue": "UnboundLocalError when using VisualStudioBuildEnvironment\nI keep getting the following python error when using `VisualStudioBuildEnvironment` with conan 1.3.1.\r\n```python\r\nenv_build = VisualStudioBuildEnvironment(self)\r\nUnboundLocalError: local variable 'flags' referenced before assignment\r\n```\r\nI think I found the problem in [conans/client/build/compiler_flags.py Line 99](https://github.com/conan-io/conan/blob/develop/conans/client/build/compiler_flags.py#L99)\r\n\r\nIt looks like \r\n```python\r\nelif not vs_toolset:\r\n```\r\nshould just be an `else` or `flags` should be initialized to something in the \r\n```python \r\nif str(compiler) == 'Visual Studio':\r\n``` \r\nblock.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\n # Visual Studio cl options reference:\n # https://msdn.microsoft.com/en-us/library/610ecb4h.aspx\n # \"Options are specified by either a forward slash (/) or a dash (\u2013).\"\n # Here we use \"-\" better than \"/\" that produces invalid escaped chars using AutoTools.\n # -LIBPATH, -D, -I, -ZI and so on.\n\n\"\"\"\n\nfrom conans.tools import unix_path\n\n\ndef rpath_flags(os_build, compiler, lib_paths):\n if not os_build:\n return []\n if compiler in (\"clang\", \"apple-clang\", \"gcc\"):\n rpath_separator = \",\" if os_build in [\"Macos\", \"iOS\", \"watchOS\", \"tvOS\"] else \"=\"\n return ['-Wl,-rpath%s\"%s\"' % (rpath_separator, x.replace(\"\\\\\", \"/\"))\n for x in lib_paths if x]\n return []\n\n\ndef architecture_flag(compiler, arch):\n \"\"\"\n returns flags specific to the target architecture and compiler\n \"\"\"\n if not compiler or not arch:\n return \"\"\n\n if str(compiler) in ['gcc', 'apple-clang', 'clang', 'sun-cc']:\n if str(arch) in ['x86_64', 'sparcv9']:\n return '-m64'\n elif str(arch) in ['x86', 'sparc']:\n return '-m32'\n return \"\"\n\n\ndef libcxx_define(compiler, libcxx):\n\n if not compiler or not libcxx:\n return \"\"\n\n if str(compiler) in ['gcc', 'clang', 'apple-clang']:\n if str(libcxx) == 'libstdc++':\n return '_GLIBCXX_USE_CXX11_ABI=0'\n elif str(libcxx) == 'libstdc++11':\n return '_GLIBCXX_USE_CXX11_ABI=1'\n return \"\"\n\n\ndef libcxx_flag(compiler, libcxx):\n \"\"\"\n returns flag specific to the target C++ standard library\n \"\"\"\n if not compiler or not libcxx:\n return \"\"\n if str(compiler) in ['clang', 'apple-clang']:\n if str(libcxx) in ['libstdc++', 'libstdc++11']:\n return '-stdlib=libstdc++'\n elif str(libcxx) == 'libc++':\n return '-stdlib=libc++'\n elif str(compiler) == 'sun-cc':\n return ({\"libCstd\": \"-library=Cstd\",\n \"libstdcxx\": \"-library=stdcxx4\",\n \"libstlport\": \"-library=stlport4\",\n \"libstdc++\": \"-library=stdcpp\"}.get(libcxx, \"\"))\n return \"\"\n\n\ndef pic_flag(compiler=None):\n \"\"\"\n returns PIC (position independent code) flags, such as -fPIC\n \"\"\"\n if not compiler or compiler == 'Visual Studio':\n return \"\"\n return '-fPIC'\n\n\ndef build_type_flags(compiler, build_type, vs_toolset=None):\n 
\"\"\"\n returns flags specific to the build type (Debug, Release, etc.)\n (-s, -g, /Zi, etc.)\n \"\"\"\n if not compiler or not build_type:\n return \"\"\n\n # https://github.com/Kitware/CMake/blob/d7af8a34b67026feaee558433db3a835d6007e06/\n # Modules/Platform/Windows-MSVC.cmake\n if str(compiler) == 'Visual Studio':\n if vs_toolset and \"clang\" in str(vs_toolset):\n flags = {\"Debug\": [\"-gline-tables-only\", \"-fno-inline\", \"-O0\"],\n \"Release\": [\"-O2\"],\n \"RelWithDebInfo\": [\"-gline-tables-only\", \"-O2\", \"-fno-inline\"],\n \"MinSizeRel\": []\n }.get(build_type, [\"-O2\", \"-Ob2\"])\n elif not vs_toolset:\n flags = {\"Debug\": [\"-Zi\", \"-Ob0\", \"-Od\"],\n \"Release\": [\"-O2\", \"-Ob2\"],\n \"RelWithDebInfo\": [\"-Zi\", \"-O2\", \"-Ob1\"],\n \"MinSizeRel\": [\"-O1\", \"-Ob1\"],\n }.get(build_type, [])\n return flags\n else:\n # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/\n # Modules/Compiler/GNU.cmake\n # clang include the gnu (overriding some things, but not build type) and apple clang\n # overrides clang but it doesn't touch clang either\n if str(compiler) in [\"clang\", \"gcc\", \"apple-clang\"]:\n # FIXME: It is not clear that the \"-s\" is something related with the build type\n # cmake is not adjusting it\n # -s: Remove all symbol table and relocation information from the executable.\n flags = {\"Debug\": [\"-g\"],\n \"Release\": [\"-O3\", \"-s\"] if str(compiler) == \"gcc\" else [\"-O3\"],\n \"RelWithDebInfo\": [\"-O2\", \"-g\"],\n \"MinSizeRel\": [\"-Os\"],\n }.get(build_type, [])\n return flags\n elif str(compiler) == \"sun-cc\":\n # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/\n # Modules/Compiler/SunPro-CXX.cmake\n flags = {\"Debug\": [\"-g\"],\n \"Release\": [\"-xO3\"],\n \"RelWithDebInfo\": [\"-xO2\", \"-g\"],\n \"MinSizeRel\": [\"-xO2\", \"-xspace\"],\n }.get(build_type, [])\n return flags\n\n return \"\"\n\n\ndef build_type_define(build_type=None):\n \"\"\"\n returns definitions specific to the build type (Debug, Release, etc.)\n like DEBUG, _DEBUG, NDEBUG\n \"\"\"\n return 'NDEBUG' if build_type == 'Release' else \"\"\n\n\ndef adjust_path(path, win_bash=False, subsystem=None, compiler=None):\n \"\"\"\n adjusts path to be safely passed to the compiler command line\n for Windows bash, ensures path is in format according to the subsystem\n for path with spaces, places double quotes around it\n converts slashes to backslashes, or vice versa\n \"\"\"\n if str(compiler) == 'Visual Studio':\n path = path.replace('/', '\\\\')\n else:\n path = path.replace('\\\\', '/')\n if win_bash:\n path = unix_path(path, subsystem)\n return '\"%s\"' % path if ' ' in path else path\n\n\ndef sysroot_flag(sysroot, win_bash=False, subsystem=None, compiler=None):\n if str(compiler) != 'Visual Studio' and sysroot:\n sysroot = adjust_path(sysroot, win_bash=win_bash, subsystem=subsystem, compiler=compiler)\n return '--sysroot=%s' % sysroot\n return \"\"\n\n\ndef visual_runtime(runtime):\n if runtime:\n return \"-%s\" % runtime\n return \"\"\n\n\ndef format_defines(defines):\n return [\"-D%s\" % define for define in defines if define]\n\n\ninclude_path_option = \"-I\"\nvisual_linker_option_separator = \"-link\" # Further options will apply to the linker\n\n\ndef format_include_paths(include_paths, win_bash=False, subsystem=None, compiler=None):\n return [\"%s%s\" % (include_path_option, adjust_path(include_path, win_bash=win_bash,\n subsystem=subsystem, compiler=compiler))\n for include_path in include_paths if 
include_path]\n\n\ndef format_library_paths(library_paths, win_bash=False, subsystem=None, compiler=None):\n pattern = \"-LIBPATH:%s\" if str(compiler) == 'Visual Studio' else \"-L%s\"\n return [pattern % adjust_path(library_path, win_bash=win_bash,\n subsystem=subsystem, compiler=compiler)\n for library_path in library_paths if library_path]\n\n\ndef format_libraries(libraries, compiler=None):\n pattern = \"%s.lib\" if str(compiler) == 'Visual Studio' else \"-l%s\"\n return [pattern % library for library in libraries if library]\n", "path": "conans/client/build/compiler_flags.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\n # Visual Studio cl options reference:\n # https://msdn.microsoft.com/en-us/library/610ecb4h.aspx\n # \"Options are specified by either a forward slash (/) or a dash (\u2013).\"\n # Here we use \"-\" better than \"/\" that produces invalid escaped chars using AutoTools.\n # -LIBPATH, -D, -I, -ZI and so on.\n\n\"\"\"\n\nfrom conans.tools import unix_path\n\n\ndef rpath_flags(os_build, compiler, lib_paths):\n if not os_build:\n return []\n if compiler in (\"clang\", \"apple-clang\", \"gcc\"):\n rpath_separator = \",\" if os_build in [\"Macos\", \"iOS\", \"watchOS\", \"tvOS\"] else \"=\"\n return ['-Wl,-rpath%s\"%s\"' % (rpath_separator, x.replace(\"\\\\\", \"/\"))\n for x in lib_paths if x]\n return []\n\n\ndef architecture_flag(compiler, arch):\n \"\"\"\n returns flags specific to the target architecture and compiler\n \"\"\"\n if not compiler or not arch:\n return \"\"\n\n if str(compiler) in ['gcc', 'apple-clang', 'clang', 'sun-cc']:\n if str(arch) in ['x86_64', 'sparcv9']:\n return '-m64'\n elif str(arch) in ['x86', 'sparc']:\n return '-m32'\n return \"\"\n\n\ndef libcxx_define(compiler, libcxx):\n\n if not compiler or not libcxx:\n return \"\"\n\n if str(compiler) in ['gcc', 'clang', 'apple-clang']:\n if str(libcxx) == 'libstdc++':\n return '_GLIBCXX_USE_CXX11_ABI=0'\n elif str(libcxx) == 'libstdc++11':\n return '_GLIBCXX_USE_CXX11_ABI=1'\n return \"\"\n\n\ndef libcxx_flag(compiler, libcxx):\n \"\"\"\n returns flag specific to the target C++ standard library\n \"\"\"\n if not compiler or not libcxx:\n return \"\"\n if str(compiler) in ['clang', 'apple-clang']:\n if str(libcxx) in ['libstdc++', 'libstdc++11']:\n return '-stdlib=libstdc++'\n elif str(libcxx) == 'libc++':\n return '-stdlib=libc++'\n elif str(compiler) == 'sun-cc':\n return ({\"libCstd\": \"-library=Cstd\",\n \"libstdcxx\": \"-library=stdcxx4\",\n \"libstlport\": \"-library=stlport4\",\n \"libstdc++\": \"-library=stdcpp\"}.get(libcxx, \"\"))\n return \"\"\n\n\ndef pic_flag(compiler=None):\n \"\"\"\n returns PIC (position independent code) flags, such as -fPIC\n \"\"\"\n if not compiler or compiler == 'Visual Studio':\n return \"\"\n return '-fPIC'\n\n\ndef build_type_flags(compiler, build_type, vs_toolset=None):\n \"\"\"\n returns flags specific to the build type (Debug, Release, etc.)\n (-s, -g, /Zi, etc.)\n \"\"\"\n if not compiler or not build_type:\n return \"\"\n\n # https://github.com/Kitware/CMake/blob/d7af8a34b67026feaee558433db3a835d6007e06/\n # Modules/Platform/Windows-MSVC.cmake\n if str(compiler) == 'Visual Studio':\n if vs_toolset and \"clang\" in str(vs_toolset):\n flags = {\"Debug\": [\"-gline-tables-only\", \"-fno-inline\", \"-O0\"],\n \"Release\": [\"-O2\"],\n \"RelWithDebInfo\": [\"-gline-tables-only\", \"-O2\", \"-fno-inline\"],\n \"MinSizeRel\": []\n }.get(build_type, [\"-O2\", \"-Ob2\"])\n else:\n flags = {\"Debug\": [\"-Zi\", \"-Ob0\", 
\"-Od\"],\n \"Release\": [\"-O2\", \"-Ob2\"],\n \"RelWithDebInfo\": [\"-Zi\", \"-O2\", \"-Ob1\"],\n \"MinSizeRel\": [\"-O1\", \"-Ob1\"],\n }.get(build_type, [])\n return flags\n else:\n # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/\n # Modules/Compiler/GNU.cmake\n # clang include the gnu (overriding some things, but not build type) and apple clang\n # overrides clang but it doesn't touch clang either\n if str(compiler) in [\"clang\", \"gcc\", \"apple-clang\"]:\n # FIXME: It is not clear that the \"-s\" is something related with the build type\n # cmake is not adjusting it\n # -s: Remove all symbol table and relocation information from the executable.\n flags = {\"Debug\": [\"-g\"],\n \"Release\": [\"-O3\", \"-s\"] if str(compiler) == \"gcc\" else [\"-O3\"],\n \"RelWithDebInfo\": [\"-O2\", \"-g\"],\n \"MinSizeRel\": [\"-Os\"],\n }.get(build_type, [])\n return flags\n elif str(compiler) == \"sun-cc\":\n # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/\n # Modules/Compiler/SunPro-CXX.cmake\n flags = {\"Debug\": [\"-g\"],\n \"Release\": [\"-xO3\"],\n \"RelWithDebInfo\": [\"-xO2\", \"-g\"],\n \"MinSizeRel\": [\"-xO2\", \"-xspace\"],\n }.get(build_type, [])\n return flags\n\n return \"\"\n\n\ndef build_type_define(build_type=None):\n \"\"\"\n returns definitions specific to the build type (Debug, Release, etc.)\n like DEBUG, _DEBUG, NDEBUG\n \"\"\"\n return 'NDEBUG' if build_type == 'Release' else \"\"\n\n\ndef adjust_path(path, win_bash=False, subsystem=None, compiler=None):\n \"\"\"\n adjusts path to be safely passed to the compiler command line\n for Windows bash, ensures path is in format according to the subsystem\n for path with spaces, places double quotes around it\n converts slashes to backslashes, or vice versa\n \"\"\"\n if str(compiler) == 'Visual Studio':\n path = path.replace('/', '\\\\')\n else:\n path = path.replace('\\\\', '/')\n if win_bash:\n path = unix_path(path, subsystem)\n return '\"%s\"' % path if ' ' in path else path\n\n\ndef sysroot_flag(sysroot, win_bash=False, subsystem=None, compiler=None):\n if str(compiler) != 'Visual Studio' and sysroot:\n sysroot = adjust_path(sysroot, win_bash=win_bash, subsystem=subsystem, compiler=compiler)\n return '--sysroot=%s' % sysroot\n return \"\"\n\n\ndef visual_runtime(runtime):\n if runtime:\n return \"-%s\" % runtime\n return \"\"\n\n\ndef format_defines(defines):\n return [\"-D%s\" % define for define in defines if define]\n\n\ninclude_path_option = \"-I\"\nvisual_linker_option_separator = \"-link\" # Further options will apply to the linker\n\n\ndef format_include_paths(include_paths, win_bash=False, subsystem=None, compiler=None):\n return [\"%s%s\" % (include_path_option, adjust_path(include_path, win_bash=win_bash,\n subsystem=subsystem, compiler=compiler))\n for include_path in include_paths if include_path]\n\n\ndef format_library_paths(library_paths, win_bash=False, subsystem=None, compiler=None):\n pattern = \"-LIBPATH:%s\" if str(compiler) == 'Visual Studio' else \"-L%s\"\n return [pattern % adjust_path(library_path, win_bash=win_bash,\n subsystem=subsystem, compiler=compiler)\n for library_path in library_paths if library_path]\n\n\ndef format_libraries(libraries, compiler=None):\n pattern = \"%s.lib\" if str(compiler) == 'Visual Studio' else \"-l%s\"\n return [pattern % library for library in libraries if library]\n", "path": "conans/client/build/compiler_flags.py"}]}
| 2,787 | 163 |
gh_patches_debug_20528
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-344
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GraphiQL explorer tries to use websockets and fails with a 500 error and “Connection reset by peer” (with Django)
There is a path to the Graph**i**QL explorer in `urls.py`:
```
from strawberry.django.views import GraphQLView
urlpatterns = [
...
path('graphql', GraphQLView.as_view(schema=schema)),
]
```
While there is an active browser tab open on `http://127.0.0.1:8000/graphql`, it continuously makes requests to `ws://127.0.0.1:8000/graphql` every 10 seconds, which causes the following errors on the backend for each request:
```
Internal Server Error: /api/
Traceback (most recent call last):
File "/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/views/generic/base.py", line 71, in view
return self.dispatch(request, *args, **kwargs)
File "/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/utils/decorators.py", line 43, in _wrapper
return bound_method(*args, **kwargs)
File "/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/Users/Max/Documents/env/py38/lib/python3.8/site-packages/strawberry/django/views.py", line 50, in dispatch
data = json.loads(request.body)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
[02/Jun/2020 00:22:45] "GET /api/ HTTP/1.1" 500 92779
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 57288)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socketserver.py", line 650, in process_request_thread
self.finish_request(request, client_address)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socketserver.py", line 360, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socketserver.py", line 720, in __init__
self.handle()
File "/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/core/servers/basehttp.py", line 174, in handle
self.handle_one_request()
File "/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/core/servers/basehttp.py", line 182, in handle_one_request
self.raw_requestline = self.rfile.readline(65537)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socket.py", line 669, in readinto
return self._sock.recv_into(b)
ConnectionResetError: [Errno 54] Connection reset by peer
```
**So, can it be disabled or fixed somehow? It's annoying, but everything works**
P.S. There is no such thing using Graphene
--- END ISSUE ---
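As an aside before the files: the `GraphQLView` quoted below already takes a `graphiql` flag, so the explorer page (and with it the periodic `ws://` requests from an open tab) can simply not be served. This is only a workaround sketch under that assumption; the schema import location is assumed, and it is not the fix applied further down.
```python
# Hypothetical urls.py: disable the bundled GraphiQL page entirely.
# GraphQLView (see strawberry/django/views.py below) defaults graphiql=True
# and raises Http404 for "text/html" requests when it is False.
from django.urls import path
from strawberry.django.views import GraphQLView

from .schema import schema  # assumed location of the strawberry schema

urlpatterns = [
    path("graphql", GraphQLView.as_view(schema=schema, graphiql=False)),
]
```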
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/flask/graphiql.py`
Content:
```
1 from os.path import abspath, dirname, join
2
3
4 def render_graphiql_page():
5 dir_path = abspath(join(dirname(__file__), ".."))
6 graphiql_html_file = f"{dir_path}/static/graphiql.html"
7
8 html_string = None
9 with open(graphiql_html_file, "r") as f:
10 html_string = f.read()
11 return html_string
12
```
Path: `strawberry/django/views.py`
Content:
```
1 import json
2 import os
3
4 from django.http import Http404, HttpResponseNotAllowed, JsonResponse
5 from django.http.response import HttpResponseBadRequest
6 from django.template import RequestContext, Template
7 from django.template.exceptions import TemplateDoesNotExist
8 from django.template.loader import render_to_string
9 from django.template.response import TemplateResponse
10 from django.utils.decorators import method_decorator
11 from django.views.decorators.csrf import csrf_exempt
12 from django.views.generic import View
13
14 import strawberry
15 from graphql import graphql_sync
16 from graphql.error import format_error as format_graphql_error
17 from graphql.type.schema import GraphQLSchema
18
19
20 class GraphQLView(View):
21 schema = None
22 graphiql = True
23
24 def __init__(self, schema=None, graphiql=True):
25 if not schema:
26 raise ValueError("You must pass in a schema to GraphQLView")
27
28 if not isinstance(schema, GraphQLSchema):
29 raise ValueError("You must pass in a valid schema to GraphQLView")
30
31 self.schema = schema
32 self.graphiql = graphiql
33
34 def get_root_value(self):
35 return None
36
37 @method_decorator(csrf_exempt)
38 def dispatch(self, request, *args, **kwargs):
39 if request.method.lower() not in ("get", "post"):
40 return HttpResponseNotAllowed(
41 ["GET", "POST"], "GraphQL only supports GET and POST requests."
42 )
43
44 if "text/html" in request.META.get("HTTP_ACCEPT", ""):
45 if not self.graphiql:
46 raise Http404("GraphiQL has been disabled")
47
48 return self._render_graphiql(request)
49
50 data = json.loads(request.body)
51
52 try:
53 query = data["query"]
54 variables = data.get("variables")
55 operation_name = data.get("operationName")
56 except KeyError:
57 return HttpResponseBadRequest("No GraphQL query found in the request")
58
59 context = {"request": request}
60
61 result = graphql_sync(
62 self.schema,
63 query,
64 root_value=self.get_root_value(),
65 variable_values=variables,
66 context_value=context,
67 operation_name=operation_name,
68 )
69
70 response_data = {"data": result.data}
71
72 if result.errors:
73 response_data["errors"] = [
74 format_graphql_error(err) for err in result.errors
75 ]
76
77 return JsonResponse(response_data)
78
79 def _render_graphiql(self, request, context=None):
80 try:
81 template = Template(render_to_string("graphql/graphiql.html"))
82 except TemplateDoesNotExist:
83 template = Template(
84 open(
85 os.path.join(
86 os.path.dirname(os.path.abspath(strawberry.__file__)),
87 "static/graphiql.html",
88 ),
89 "r",
90 ).read()
91 )
92
93 response = TemplateResponse(request=request, template=None, context=context)
94 response.content = template.render(RequestContext(request, context))
95
96 return response
97
```
Path: `strawberry/asgi/utils.py`
Content:
```
1 import pathlib
2
3
4 def get_graphiql_html() -> str:
5 here = pathlib.Path(__file__).parents[1]
6 path = here / "static/graphiql.html"
7
8 with open(path) as f:
9 template = f.read()
10
11 return template
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/strawberry/asgi/utils.py b/strawberry/asgi/utils.py
--- a/strawberry/asgi/utils.py
+++ b/strawberry/asgi/utils.py
@@ -8,4 +8,4 @@
with open(path) as f:
template = f.read()
- return template
+ return template.replace("{{ SUBSCRIPTION_ENABLED }}", "true")
diff --git a/strawberry/django/views.py b/strawberry/django/views.py
--- a/strawberry/django/views.py
+++ b/strawberry/django/views.py
@@ -90,6 +90,9 @@
).read()
)
+ context = context or {}
+ context.update({"SUBSCRIPTION_ENABLED": "false"})
+
response = TemplateResponse(request=request, template=None, context=context)
response.content = template.render(RequestContext(request, context))
diff --git a/strawberry/flask/graphiql.py b/strawberry/flask/graphiql.py
--- a/strawberry/flask/graphiql.py
+++ b/strawberry/flask/graphiql.py
@@ -6,6 +6,8 @@
graphiql_html_file = f"{dir_path}/static/graphiql.html"
html_string = None
+
with open(graphiql_html_file, "r") as f:
html_string = f.read()
- return html_string
+
+ return html_string.replace("{{ SUBSCRIPTION_ENABLED }}", "false")
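A quick sanity check of the Flask half of this patch (a sketch; it assumes the patched `render_graphiql_page` and that `static/graphiql.html` contains the `{{ SUBSCRIPTION_ENABLED }}` placeholder):
```python
# After the patch the raw placeholder must be gone from the served page, so the
# bundled GraphiQL script sees a boolean and skips the subscription websocket.
from strawberry.flask.graphiql import render_graphiql_page

html = render_graphiql_page()
assert "{{ SUBSCRIPTION_ENABLED }}" not in html
```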
|
{"golden_diff": "diff --git a/strawberry/asgi/utils.py b/strawberry/asgi/utils.py\n--- a/strawberry/asgi/utils.py\n+++ b/strawberry/asgi/utils.py\n@@ -8,4 +8,4 @@\n with open(path) as f:\n template = f.read()\n \n- return template\n+ return template.replace(\"{{ SUBSCRIPTION_ENABLED }}\", \"true\")\ndiff --git a/strawberry/django/views.py b/strawberry/django/views.py\n--- a/strawberry/django/views.py\n+++ b/strawberry/django/views.py\n@@ -90,6 +90,9 @@\n ).read()\n )\n \n+ context = context or {}\n+ context.update({\"SUBSCRIPTION_ENABLED\": \"false\"})\n+\n response = TemplateResponse(request=request, template=None, context=context)\n response.content = template.render(RequestContext(request, context))\n \ndiff --git a/strawberry/flask/graphiql.py b/strawberry/flask/graphiql.py\n--- a/strawberry/flask/graphiql.py\n+++ b/strawberry/flask/graphiql.py\n@@ -6,6 +6,8 @@\n graphiql_html_file = f\"{dir_path}/static/graphiql.html\"\n \n html_string = None\n+\n with open(graphiql_html_file, \"r\") as f:\n html_string = f.read()\n- return html_string\n+\n+ return html_string.replace(\"{{ SUBSCRIPTION_ENABLED }}\", \"false\")\n", "issue": "GraphiQl explorer tries to use websockets and falls with 500 error and \u201cConnection reset by peer\u201d (with Django)\nThere is a path to Graph**i**Ql explorer in `urls.py:`:\r\n\r\n```\r\nfrom strawberry.django.views import GraphQLView\r\n\r\nurlpatterns = [\r\n ...\r\n path('graphql', GraphQLView.as_view(schema=schema)),\r\n]\r\n```\r\nAnd while there is an active browser tab with `http://127.0.0.1:8000/graphql`, it continuously makes requests to `ws://127.0.0.1:8000/graphql` each 10 seconds, that causes next errors on back each request:\r\n\r\n```\r\nInternal Server Error: /api/\r\nTraceback (most recent call last):\r\n File \"/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/core/handlers/exception.py\", line 34, in inner\r\n response = get_response(request)\r\n File \"/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/core/handlers/base.py\", line 115, in _get_response\r\n response = self.process_exception_by_middleware(e, request)\r\n File \"/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/core/handlers/base.py\", line 113, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/views/generic/base.py\", line 71, in view\r\n return self.dispatch(request, *args, **kwargs)\r\n File \"/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/utils/decorators.py\", line 43, in _wrapper\r\n return bound_method(*args, **kwargs)\r\n File \"/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/views/decorators/csrf.py\", line 54, in wrapped_view\r\n return view_func(*args, **kwargs)\r\n File \"/Users/Max/Documents/env/py38/lib/python3.8/site-packages/strawberry/django/views.py\", line 50, in dispatch\r\n data = json.loads(request.body)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/__init__.py\", line 357, in loads\r\n return _default_decoder.decode(s)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 
column 1 (char 0)\r\n[02/Jun/2020 00:22:45] \"GET /api/ HTTP/1.1\" 500 92779\r\n----------------------------------------\r\nException happened during processing of request from ('127.0.0.1', 57288)\r\nTraceback (most recent call last):\r\n File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socketserver.py\", line 650, in process_request_thread\r\n self.finish_request(request, client_address)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socketserver.py\", line 360, in finish_request\r\n self.RequestHandlerClass(request, client_address, self)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socketserver.py\", line 720, in __init__\r\n self.handle()\r\n File \"/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/core/servers/basehttp.py\", line 174, in handle\r\n self.handle_one_request()\r\n File \"/Users/Max/Documents/env/py38/lib/python3.8/site-packages/django/core/servers/basehttp.py\", line 182, in handle_one_request\r\n self.raw_requestline = self.rfile.readline(65537)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socket.py\", line 669, in readinto\r\n return self._sock.recv_into(b)\r\nConnectionResetError: [Errno 54] Connection reset by peer\r\n```\r\n\r\n**So, can it be disabled of fixed somehow? It's annoying, but everything works**\r\n\r\nP.S. There is no such thing using Graphene\n", "before_files": [{"content": "from os.path import abspath, dirname, join\n\n\ndef render_graphiql_page():\n dir_path = abspath(join(dirname(__file__), \"..\"))\n graphiql_html_file = f\"{dir_path}/static/graphiql.html\"\n\n html_string = None\n with open(graphiql_html_file, \"r\") as f:\n html_string = f.read()\n return html_string\n", "path": "strawberry/flask/graphiql.py"}, {"content": "import json\nimport os\n\nfrom django.http import Http404, HttpResponseNotAllowed, JsonResponse\nfrom django.http.response import HttpResponseBadRequest\nfrom django.template import RequestContext, Template\nfrom django.template.exceptions import TemplateDoesNotExist\nfrom django.template.loader import render_to_string\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.generic import View\n\nimport strawberry\nfrom graphql import graphql_sync\nfrom graphql.error import format_error as format_graphql_error\nfrom graphql.type.schema import GraphQLSchema\n\n\nclass GraphQLView(View):\n schema = None\n graphiql = True\n\n def __init__(self, schema=None, graphiql=True):\n if not schema:\n raise ValueError(\"You must pass in a schema to GraphQLView\")\n\n if not isinstance(schema, GraphQLSchema):\n raise ValueError(\"You must pass in a valid schema to GraphQLView\")\n\n self.schema = schema\n self.graphiql = graphiql\n\n def get_root_value(self):\n return None\n\n @method_decorator(csrf_exempt)\n def dispatch(self, request, *args, **kwargs):\n if request.method.lower() not in (\"get\", \"post\"):\n return HttpResponseNotAllowed(\n [\"GET\", \"POST\"], \"GraphQL only supports GET and POST requests.\"\n )\n\n if \"text/html\" in request.META.get(\"HTTP_ACCEPT\", \"\"):\n if not self.graphiql:\n raise Http404(\"GraphiQL has been disabled\")\n\n return self._render_graphiql(request)\n\n data = json.loads(request.body)\n\n try:\n query = data[\"query\"]\n variables = data.get(\"variables\")\n operation_name = data.get(\"operationName\")\n except KeyError:\n return 
HttpResponseBadRequest(\"No GraphQL query found in the request\")\n\n context = {\"request\": request}\n\n result = graphql_sync(\n self.schema,\n query,\n root_value=self.get_root_value(),\n variable_values=variables,\n context_value=context,\n operation_name=operation_name,\n )\n\n response_data = {\"data\": result.data}\n\n if result.errors:\n response_data[\"errors\"] = [\n format_graphql_error(err) for err in result.errors\n ]\n\n return JsonResponse(response_data)\n\n def _render_graphiql(self, request, context=None):\n try:\n template = Template(render_to_string(\"graphql/graphiql.html\"))\n except TemplateDoesNotExist:\n template = Template(\n open(\n os.path.join(\n os.path.dirname(os.path.abspath(strawberry.__file__)),\n \"static/graphiql.html\",\n ),\n \"r\",\n ).read()\n )\n\n response = TemplateResponse(request=request, template=None, context=context)\n response.content = template.render(RequestContext(request, context))\n\n return response\n", "path": "strawberry/django/views.py"}, {"content": "import pathlib\n\n\ndef get_graphiql_html() -> str:\n here = pathlib.Path(__file__).parents[1]\n path = here / \"static/graphiql.html\"\n\n with open(path) as f:\n template = f.read()\n\n return template\n", "path": "strawberry/asgi/utils.py"}], "after_files": [{"content": "from os.path import abspath, dirname, join\n\n\ndef render_graphiql_page():\n dir_path = abspath(join(dirname(__file__), \"..\"))\n graphiql_html_file = f\"{dir_path}/static/graphiql.html\"\n\n html_string = None\n\n with open(graphiql_html_file, \"r\") as f:\n html_string = f.read()\n\n return html_string.replace(\"{{ SUBSCRIPTION_ENABLED }}\", \"false\")\n", "path": "strawberry/flask/graphiql.py"}, {"content": "import json\nimport os\n\nfrom django.http import Http404, HttpResponseNotAllowed, JsonResponse\nfrom django.http.response import HttpResponseBadRequest\nfrom django.template import RequestContext, Template\nfrom django.template.exceptions import TemplateDoesNotExist\nfrom django.template.loader import render_to_string\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.generic import View\n\nimport strawberry\nfrom graphql import graphql_sync\nfrom graphql.error import format_error as format_graphql_error\nfrom graphql.type.schema import GraphQLSchema\n\n\nclass GraphQLView(View):\n schema = None\n graphiql = True\n\n def __init__(self, schema=None, graphiql=True):\n if not schema:\n raise ValueError(\"You must pass in a schema to GraphQLView\")\n\n if not isinstance(schema, GraphQLSchema):\n raise ValueError(\"You must pass in a valid schema to GraphQLView\")\n\n self.schema = schema\n self.graphiql = graphiql\n\n def get_root_value(self):\n return None\n\n @method_decorator(csrf_exempt)\n def dispatch(self, request, *args, **kwargs):\n if request.method.lower() not in (\"get\", \"post\"):\n return HttpResponseNotAllowed(\n [\"GET\", \"POST\"], \"GraphQL only supports GET and POST requests.\"\n )\n\n if \"text/html\" in request.META.get(\"HTTP_ACCEPT\", \"\"):\n if not self.graphiql:\n raise Http404(\"GraphiQL has been disabled\")\n\n return self._render_graphiql(request)\n\n data = json.loads(request.body)\n\n try:\n query = data[\"query\"]\n variables = data.get(\"variables\")\n operation_name = data.get(\"operationName\")\n except KeyError:\n return HttpResponseBadRequest(\"No GraphQL query found in the request\")\n\n context = {\"request\": request}\n\n result = 
graphql_sync(\n self.schema,\n query,\n root_value=self.get_root_value(),\n variable_values=variables,\n context_value=context,\n operation_name=operation_name,\n )\n\n response_data = {\"data\": result.data}\n\n if result.errors:\n response_data[\"errors\"] = [\n format_graphql_error(err) for err in result.errors\n ]\n\n return JsonResponse(response_data)\n\n def _render_graphiql(self, request, context=None):\n try:\n template = Template(render_to_string(\"graphql/graphiql.html\"))\n except TemplateDoesNotExist:\n template = Template(\n open(\n os.path.join(\n os.path.dirname(os.path.abspath(strawberry.__file__)),\n \"static/graphiql.html\",\n ),\n \"r\",\n ).read()\n )\n\n context = context or {}\n context.update({\"SUBSCRIPTION_ENABLED\": \"false\"})\n\n response = TemplateResponse(request=request, template=None, context=context)\n response.content = template.render(RequestContext(request, context))\n\n return response\n", "path": "strawberry/django/views.py"}, {"content": "import pathlib\n\n\ndef get_graphiql_html() -> str:\n here = pathlib.Path(__file__).parents[1]\n path = here / \"static/graphiql.html\"\n\n with open(path) as f:\n template = f.read()\n\n return template.replace(\"{{ SUBSCRIPTION_ENABLED }}\", \"true\")\n", "path": "strawberry/asgi/utils.py"}]}
| 2,320 | 332 |
gh_patches_debug_25483
|
rasdani/github-patches
|
git_diff
|
aio-libs-abandoned__aioredis-py-355
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fixed ConnectionsPool.[pubsub_channels/pubsub_patterns]
Fixed a bug in `commands.pubsub.PubSubCommandsMixin.subscribe` (and `psubscribe`). The `ConnectionsPool.pubsub_channels` lookup ran before `ConnectionsPool._pubsub_conn` was created, and therefore `pubsub_channels` was resolved against a connection that did not exist yet.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aioredis/commands/pubsub.py`
Content:
```
1 import json
2
3 from aioredis.util import wait_make_dict
4
5
6 class PubSubCommandsMixin:
7 """Pub/Sub commands mixin.
8
9 For commands details see: http://redis.io/commands/#pubsub
10 """
11
12 def publish(self, channel, message):
13 """Post a message to channel."""
14 return self.execute(b'PUBLISH', channel, message)
15
16 def publish_json(self, channel, obj):
17 """Post a JSON-encoded message to channel."""
18 return self.publish(channel, json.dumps(obj))
19
20 def subscribe(self, channel, *channels):
21 """Switch connection to Pub/Sub mode and
22 subscribe to specified channels.
23
24 Arguments can be instances of :class:`~aioredis.Channel`.
25
26 Returns :func:`asyncio.gather()` coroutine which when done will return
27 a list of :class:`~aioredis.Channel` objects.
28 """
29 conn = self._pool_or_conn
30 return wait_return_channels(
31 conn.execute_pubsub(b'SUBSCRIBE', channel, *channels),
32 conn.pubsub_channels)
33
34 def unsubscribe(self, channel, *channels):
35 """Unsubscribe from specific channels.
36
37 Arguments can be instances of :class:`~aioredis.Channel`.
38 """
39 conn = self._pool_or_conn
40 return conn.execute_pubsub(b'UNSUBSCRIBE', channel, *channels)
41
42 def psubscribe(self, pattern, *patterns):
43 """Switch connection to Pub/Sub mode and
44 subscribe to specified patterns.
45
46 Arguments can be instances of :class:`~aioredis.Channel`.
47
48 Returns :func:`asyncio.gather()` coroutine which when done will return
49 a list of subscribed :class:`~aioredis.Channel` objects with
50 ``is_pattern`` property set to ``True``.
51 """
52 conn = self._pool_or_conn
53 return wait_return_channels(
54 conn.execute_pubsub(b'PSUBSCRIBE', pattern, *patterns),
55 conn.pubsub_patterns)
56
57 def punsubscribe(self, pattern, *patterns):
58 """Unsubscribe from specific patterns.
59
60 Arguments can be instances of :class:`~aioredis.Channel`.
61 """
62 conn = self._pool_or_conn
63 return conn.execute_pubsub(b'PUNSUBSCRIBE', pattern, *patterns)
64
65 def pubsub_channels(self, pattern=None):
66 """Lists the currently active channels."""
67 args = [b'PUBSUB', b'CHANNELS']
68 if pattern is not None:
69 args.append(pattern)
70 return self.execute(*args)
71
72 def pubsub_numsub(self, *channels):
73 """Returns the number of subscribers for the specified channels."""
74 return wait_make_dict(self.execute(
75 b'PUBSUB', b'NUMSUB', *channels))
76
77 def pubsub_numpat(self):
78 """Returns the number of subscriptions to patterns."""
79 return self.execute(b'PUBSUB', b'NUMPAT')
80
81 @property
82 def channels(self):
83 """Returns read-only channels dict.
84
85 See :attr:`~aioredis.RedisConnection.pubsub_channels`
86 """
87 return self._pool_or_conn.pubsub_channels
88
89 @property
90 def patterns(self):
91 """Returns read-only patterns dict.
92
93 See :attr:`~aioredis.RedisConnection.pubsub_patterns`
94 """
95 return self._pool_or_conn.pubsub_patterns
96
97 @property
98 def in_pubsub(self):
99 """Indicates that connection is in PUB/SUB mode.
100
101 Provides the number of subscribed channels.
102 """
103 return self._pool_or_conn.in_pubsub
104
105
106 async def wait_return_channels(fut, channels_dict):
107 return [channels_dict[name]
108 for cmd, name, count in await fut]
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/aioredis/commands/pubsub.py b/aioredis/commands/pubsub.py
--- a/aioredis/commands/pubsub.py
+++ b/aioredis/commands/pubsub.py
@@ -29,7 +29,7 @@
conn = self._pool_or_conn
return wait_return_channels(
conn.execute_pubsub(b'SUBSCRIBE', channel, *channels),
- conn.pubsub_channels)
+ conn, 'pubsub_channels')
def unsubscribe(self, channel, *channels):
"""Unsubscribe from specific channels.
@@ -52,7 +52,7 @@
conn = self._pool_or_conn
return wait_return_channels(
conn.execute_pubsub(b'PSUBSCRIBE', pattern, *patterns),
- conn.pubsub_patterns)
+ conn, 'pubsub_patterns')
def punsubscribe(self, pattern, *patterns):
"""Unsubscribe from specific patterns.
@@ -103,6 +103,7 @@
return self._pool_or_conn.in_pubsub
-async def wait_return_channels(fut, channels_dict):
- return [channels_dict[name]
- for cmd, name, count in await fut]
+async def wait_return_channels(fut, conn, field):
+ res = await fut
+ channels_dict = getattr(conn, field)
+ return [channels_dict[name] for cmd, name, count in res]
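To see what the deferred lookup buys (a sketch that assumes the aioredis 1.x high-level API and a local Redis; the point is only that subscribing on a fresh pool now resolves `Channel` objects after the dedicated pub/sub connection exists):
```python
import asyncio

import aioredis


async def main():
    # A fresh pool has no pub/sub connection until SUBSCRIBE is executed.
    redis = await aioredis.create_redis_pool("redis://localhost")

    # With the patch, the Channel objects are read from the connection only
    # after the SUBSCRIBE reply arrives, i.e. after _pubsub_conn is created.
    (news,) = await redis.subscribe("news")

    await redis.publish("news", "hello")
    print(await news.get())

    redis.close()
    await redis.wait_closed()


asyncio.run(main())
```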
|
{"golden_diff": "diff --git a/aioredis/commands/pubsub.py b/aioredis/commands/pubsub.py\n--- a/aioredis/commands/pubsub.py\n+++ b/aioredis/commands/pubsub.py\n@@ -29,7 +29,7 @@\n conn = self._pool_or_conn\n return wait_return_channels(\n conn.execute_pubsub(b'SUBSCRIBE', channel, *channels),\n- conn.pubsub_channels)\n+ conn, 'pubsub_channels')\n \n def unsubscribe(self, channel, *channels):\n \"\"\"Unsubscribe from specific channels.\n@@ -52,7 +52,7 @@\n conn = self._pool_or_conn\n return wait_return_channels(\n conn.execute_pubsub(b'PSUBSCRIBE', pattern, *patterns),\n- conn.pubsub_patterns)\n+ conn, 'pubsub_patterns')\n \n def punsubscribe(self, pattern, *patterns):\n \"\"\"Unsubscribe from specific patterns.\n@@ -103,6 +103,7 @@\n return self._pool_or_conn.in_pubsub\n \n \n-async def wait_return_channels(fut, channels_dict):\n- return [channels_dict[name]\n- for cmd, name, count in await fut]\n+async def wait_return_channels(fut, conn, field):\n+ res = await fut\n+ channels_dict = getattr(conn, field)\n+ return [channels_dict[name] for cmd, name, count in res]\n", "issue": "Fixed ConnectionsPool.[pubsub_channels/pubsub_patterns]\nFixed bug in `commands.pubsub.PubSubCommandsMixin.subscribe` (and psubscribe). `ConnectionsPool.pubsub_channles` code was called before `ConnectionsPool._pubsub_conn` creation, and therefore `pubsub_channles` \n", "before_files": [{"content": "import json\n\nfrom aioredis.util import wait_make_dict\n\n\nclass PubSubCommandsMixin:\n \"\"\"Pub/Sub commands mixin.\n\n For commands details see: http://redis.io/commands/#pubsub\n \"\"\"\n\n def publish(self, channel, message):\n \"\"\"Post a message to channel.\"\"\"\n return self.execute(b'PUBLISH', channel, message)\n\n def publish_json(self, channel, obj):\n \"\"\"Post a JSON-encoded message to channel.\"\"\"\n return self.publish(channel, json.dumps(obj))\n\n def subscribe(self, channel, *channels):\n \"\"\"Switch connection to Pub/Sub mode and\n subscribe to specified channels.\n\n Arguments can be instances of :class:`~aioredis.Channel`.\n\n Returns :func:`asyncio.gather()` coroutine which when done will return\n a list of :class:`~aioredis.Channel` objects.\n \"\"\"\n conn = self._pool_or_conn\n return wait_return_channels(\n conn.execute_pubsub(b'SUBSCRIBE', channel, *channels),\n conn.pubsub_channels)\n\n def unsubscribe(self, channel, *channels):\n \"\"\"Unsubscribe from specific channels.\n\n Arguments can be instances of :class:`~aioredis.Channel`.\n \"\"\"\n conn = self._pool_or_conn\n return conn.execute_pubsub(b'UNSUBSCRIBE', channel, *channels)\n\n def psubscribe(self, pattern, *patterns):\n \"\"\"Switch connection to Pub/Sub mode and\n subscribe to specified patterns.\n\n Arguments can be instances of :class:`~aioredis.Channel`.\n\n Returns :func:`asyncio.gather()` coroutine which when done will return\n a list of subscribed :class:`~aioredis.Channel` objects with\n ``is_pattern`` property set to ``True``.\n \"\"\"\n conn = self._pool_or_conn\n return wait_return_channels(\n conn.execute_pubsub(b'PSUBSCRIBE', pattern, *patterns),\n conn.pubsub_patterns)\n\n def punsubscribe(self, pattern, *patterns):\n \"\"\"Unsubscribe from specific patterns.\n\n Arguments can be instances of :class:`~aioredis.Channel`.\n \"\"\"\n conn = self._pool_or_conn\n return conn.execute_pubsub(b'PUNSUBSCRIBE', pattern, *patterns)\n\n def pubsub_channels(self, pattern=None):\n \"\"\"Lists the currently active channels.\"\"\"\n args = [b'PUBSUB', b'CHANNELS']\n if pattern is not None:\n args.append(pattern)\n return 
self.execute(*args)\n\n def pubsub_numsub(self, *channels):\n \"\"\"Returns the number of subscribers for the specified channels.\"\"\"\n return wait_make_dict(self.execute(\n b'PUBSUB', b'NUMSUB', *channels))\n\n def pubsub_numpat(self):\n \"\"\"Returns the number of subscriptions to patterns.\"\"\"\n return self.execute(b'PUBSUB', b'NUMPAT')\n\n @property\n def channels(self):\n \"\"\"Returns read-only channels dict.\n\n See :attr:`~aioredis.RedisConnection.pubsub_channels`\n \"\"\"\n return self._pool_or_conn.pubsub_channels\n\n @property\n def patterns(self):\n \"\"\"Returns read-only patterns dict.\n\n See :attr:`~aioredis.RedisConnection.pubsub_patterns`\n \"\"\"\n return self._pool_or_conn.pubsub_patterns\n\n @property\n def in_pubsub(self):\n \"\"\"Indicates that connection is in PUB/SUB mode.\n\n Provides the number of subscribed channels.\n \"\"\"\n return self._pool_or_conn.in_pubsub\n\n\nasync def wait_return_channels(fut, channels_dict):\n return [channels_dict[name]\n for cmd, name, count in await fut]\n", "path": "aioredis/commands/pubsub.py"}], "after_files": [{"content": "import json\n\nfrom aioredis.util import wait_make_dict\n\n\nclass PubSubCommandsMixin:\n \"\"\"Pub/Sub commands mixin.\n\n For commands details see: http://redis.io/commands/#pubsub\n \"\"\"\n\n def publish(self, channel, message):\n \"\"\"Post a message to channel.\"\"\"\n return self.execute(b'PUBLISH', channel, message)\n\n def publish_json(self, channel, obj):\n \"\"\"Post a JSON-encoded message to channel.\"\"\"\n return self.publish(channel, json.dumps(obj))\n\n def subscribe(self, channel, *channels):\n \"\"\"Switch connection to Pub/Sub mode and\n subscribe to specified channels.\n\n Arguments can be instances of :class:`~aioredis.Channel`.\n\n Returns :func:`asyncio.gather()` coroutine which when done will return\n a list of :class:`~aioredis.Channel` objects.\n \"\"\"\n conn = self._pool_or_conn\n return wait_return_channels(\n conn.execute_pubsub(b'SUBSCRIBE', channel, *channels),\n conn, 'pubsub_channels')\n\n def unsubscribe(self, channel, *channels):\n \"\"\"Unsubscribe from specific channels.\n\n Arguments can be instances of :class:`~aioredis.Channel`.\n \"\"\"\n conn = self._pool_or_conn\n return conn.execute_pubsub(b'UNSUBSCRIBE', channel, *channels)\n\n def psubscribe(self, pattern, *patterns):\n \"\"\"Switch connection to Pub/Sub mode and\n subscribe to specified patterns.\n\n Arguments can be instances of :class:`~aioredis.Channel`.\n\n Returns :func:`asyncio.gather()` coroutine which when done will return\n a list of subscribed :class:`~aioredis.Channel` objects with\n ``is_pattern`` property set to ``True``.\n \"\"\"\n conn = self._pool_or_conn\n return wait_return_channels(\n conn.execute_pubsub(b'PSUBSCRIBE', pattern, *patterns),\n conn, 'pubsub_patterns')\n\n def punsubscribe(self, pattern, *patterns):\n \"\"\"Unsubscribe from specific patterns.\n\n Arguments can be instances of :class:`~aioredis.Channel`.\n \"\"\"\n conn = self._pool_or_conn\n return conn.execute_pubsub(b'PUNSUBSCRIBE', pattern, *patterns)\n\n def pubsub_channels(self, pattern=None):\n \"\"\"Lists the currently active channels.\"\"\"\n args = [b'PUBSUB', b'CHANNELS']\n if pattern is not None:\n args.append(pattern)\n return self.execute(*args)\n\n def pubsub_numsub(self, *channels):\n \"\"\"Returns the number of subscribers for the specified channels.\"\"\"\n return wait_make_dict(self.execute(\n b'PUBSUB', b'NUMSUB', *channels))\n\n def pubsub_numpat(self):\n \"\"\"Returns the number of subscriptions to 
patterns.\"\"\"\n return self.execute(b'PUBSUB', b'NUMPAT')\n\n @property\n def channels(self):\n \"\"\"Returns read-only channels dict.\n\n See :attr:`~aioredis.RedisConnection.pubsub_channels`\n \"\"\"\n return self._pool_or_conn.pubsub_channels\n\n @property\n def patterns(self):\n \"\"\"Returns read-only patterns dict.\n\n See :attr:`~aioredis.RedisConnection.pubsub_patterns`\n \"\"\"\n return self._pool_or_conn.pubsub_patterns\n\n @property\n def in_pubsub(self):\n \"\"\"Indicates that connection is in PUB/SUB mode.\n\n Provides the number of subscribed channels.\n \"\"\"\n return self._pool_or_conn.in_pubsub\n\n\nasync def wait_return_channels(fut, conn, field):\n res = await fut\n channels_dict = getattr(conn, field)\n return [channels_dict[name] for cmd, name, count in res]\n", "path": "aioredis/commands/pubsub.py"}]}
| 1,325 | 309 |
gh_patches_debug_36407
|
rasdani/github-patches
|
git_diff
|
benoitc__gunicorn-1752
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tornado 5.0+ not compatible with gunicorn
gunicorn version: 19.7.1
tornado: 5.0
failed with:
```
$ ./bin/gunicorn -w 1 -t 300 -k tornado web:app -b 0.0.0.0:9003
[2018-03-20 14:17:29 +0000] [28600] [INFO] Starting gunicorn 19.7.1
[2018-03-20 14:17:29 +0000] [28600] [INFO] Listening at: http://0.0.0.0:9003 (28600)
[2018-03-20 14:17:29 +0000] [28600] [INFO] Using worker: tornado
[2018-03-20 14:17:29 +0000] [28605] [INFO] Booting worker with pid: 28605
[2018-03-20 14:17:29 +0000] [28605] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/arbiter.py", line 578, in spawn_worker
worker.init_process()
File "/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/workers/base.py", line 131, in init_process
self.run()
File "/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/workers/gtornado.py", line 70, in run
PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()
TypeError: __init__() got an unexpected keyword argument 'io_loop'
[2018-03-20 14:17:29 +0000] [28605] [INFO] Worker exiting (pid: 28605)
[2018-03-20 14:17:29 +0000] [28608] [INFO] Booting worker with pid: 28608
[2018-03-20 14:17:29 +0000] [28608] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/arbiter.py", line 578, in spawn_worker
worker.init_process()
File "/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/workers/base.py", line 131, in init_process
self.run()
File "/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/workers/gtornado.py", line 70, in run
PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()
TypeError: __init__() got an unexpected keyword argument 'io_loop'
```
According to the Tornado 5.0 release notes:
```
io_loop arguments to many Tornado functions have been removed. Use IOLoop.current() instead of passing IOLoop objects explicitly.
```
http://www.tornadoweb.org/en/stable/releases/v5.0.0.html#backwards-compatibility-notes
--- END ISSUE ---
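A minimal illustration of that backwards-incompatible change (standalone Tornado code, not gunicorn; the version gate mirrors what a fix needs in order to keep supporting Tornado 4):
```python
import tornado
from tornado.ioloop import IOLoop, PeriodicCallback


def tick():
    print("tick")


if tornado.version_info >= (5, 0, 0):
    # Tornado 5+: the callback binds implicitly to IOLoop.current().
    PeriodicCallback(tick, 1000).start()
else:
    # Tornado < 5 accepted an explicit io_loop, which gunicorn 19.7.1 passes.
    PeriodicCallback(tick, 1000, io_loop=IOLoop.current()).start()

IOLoop.current().start()
```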
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gunicorn/workers/gtornado.py`
Content:
```
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5
6 import copy
7 import os
8 import sys
9
10 try:
11 import tornado.web
12 except ImportError:
13 raise RuntimeError("You need tornado installed to use this worker.")
14 import tornado.httpserver
15 from tornado.ioloop import IOLoop, PeriodicCallback
16 from tornado.wsgi import WSGIContainer
17 from gunicorn.workers.base import Worker
18 from gunicorn import __version__ as gversion
19
20
21 class TornadoWorker(Worker):
22
23 @classmethod
24 def setup(cls):
25 web = sys.modules.pop("tornado.web")
26 old_clear = web.RequestHandler.clear
27
28 def clear(self):
29 old_clear(self)
30 if "Gunicorn" not in self._headers["Server"]:
31 self._headers["Server"] += " (Gunicorn/%s)" % gversion
32 web.RequestHandler.clear = clear
33 sys.modules["tornado.web"] = web
34
35 def handle_exit(self, sig, frame):
36 if self.alive:
37 super(TornadoWorker, self).handle_exit(sig, frame)
38
39 def handle_request(self):
40 self.nr += 1
41 if self.alive and self.nr >= self.max_requests:
42 self.log.info("Autorestarting worker after current request.")
43 self.alive = False
44
45 def watchdog(self):
46 if self.alive:
47 self.notify()
48
49 if self.ppid != os.getppid():
50 self.log.info("Parent changed, shutting down: %s", self)
51 self.alive = False
52
53 def heartbeat(self):
54 if not self.alive:
55 if self.server_alive:
56 if hasattr(self, 'server'):
57 try:
58 self.server.stop()
59 except Exception:
60 pass
61 self.server_alive = False
62 else:
63 if not self.ioloop._callbacks:
64 self.ioloop.stop()
65
66 def run(self):
67 self.ioloop = IOLoop.instance()
68 self.alive = True
69 self.server_alive = False
70 PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()
71 PeriodicCallback(self.heartbeat, 1000, io_loop=self.ioloop).start()
72
73 # Assume the app is a WSGI callable if its not an
74 # instance of tornado.web.Application or is an
75 # instance of tornado.wsgi.WSGIApplication
76 app = self.wsgi
77 if not isinstance(app, tornado.web.Application) or \
78 isinstance(app, tornado.wsgi.WSGIApplication):
79 app = WSGIContainer(app)
80
81 # Monkey-patching HTTPConnection.finish to count the
82 # number of requests being handled by Tornado. This
83 # will help gunicorn shutdown the worker if max_requests
84 # is exceeded.
85 httpserver = sys.modules["tornado.httpserver"]
86 if hasattr(httpserver, 'HTTPConnection'):
87 old_connection_finish = httpserver.HTTPConnection.finish
88
89 def finish(other):
90 self.handle_request()
91 old_connection_finish(other)
92 httpserver.HTTPConnection.finish = finish
93 sys.modules["tornado.httpserver"] = httpserver
94
95 server_class = tornado.httpserver.HTTPServer
96 else:
97
98 class _HTTPServer(tornado.httpserver.HTTPServer):
99
100 def on_close(instance, server_conn):
101 self.handle_request()
102 super(_HTTPServer, instance).on_close(server_conn)
103
104 server_class = _HTTPServer
105
106 if self.cfg.is_ssl:
107 _ssl_opt = copy.deepcopy(self.cfg.ssl_options)
108 # tornado refuses initialization if ssl_options contains following
109 # options
110 del _ssl_opt["do_handshake_on_connect"]
111 del _ssl_opt["suppress_ragged_eofs"]
112 server = server_class(app, io_loop=self.ioloop,
113 ssl_options=_ssl_opt)
114 else:
115 server = server_class(app, io_loop=self.ioloop)
116
117 self.server = server
118 self.server_alive = True
119
120 for s in self.sockets:
121 s.setblocking(0)
122 if hasattr(server, "add_socket"): # tornado > 2.0
123 server.add_socket(s)
124 elif hasattr(server, "_sockets"): # tornado 2.0
125 server._sockets[s.fileno()] = s
126
127 server.no_keep_alive = self.cfg.keepalive <= 0
128 server.start(num_processes=1)
129
130 self.ioloop.start()
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gunicorn/workers/gtornado.py b/gunicorn/workers/gtornado.py
--- a/gunicorn/workers/gtornado.py
+++ b/gunicorn/workers/gtornado.py
@@ -8,9 +8,10 @@
import sys
try:
- import tornado.web
+ import tornado
except ImportError:
raise RuntimeError("You need tornado installed to use this worker.")
+import tornado.web
import tornado.httpserver
from tornado.ioloop import IOLoop, PeriodicCallback
from tornado.wsgi import WSGIContainer
@@ -18,6 +19,11 @@
from gunicorn import __version__ as gversion
+# `io_loop` arguments to many Tornado functions have been removed in Tornado 5.0
+# <http://www.tornadoweb.org/en/stable/releases/v5.0.0.html#backwards-compatibility-notes>
+IOLOOP_PARAMETER_REMOVED = tornado.version_info >= (5, 0, 0)
+
+
class TornadoWorker(Worker):
@classmethod
@@ -67,8 +73,12 @@
self.ioloop = IOLoop.instance()
self.alive = True
self.server_alive = False
- PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()
- PeriodicCallback(self.heartbeat, 1000, io_loop=self.ioloop).start()
+ if IOLOOP_PARAMETER_REMOVED:
+ PeriodicCallback(self.watchdog, 1000).start()
+ PeriodicCallback(self.heartbeat, 1000).start()
+ else:
+ PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()
+ PeriodicCallback(self.heartbeat, 1000, io_loop=self.ioloop).start()
# Assume the app is a WSGI callable if its not an
# instance of tornado.web.Application or is an
@@ -109,10 +119,16 @@
# options
del _ssl_opt["do_handshake_on_connect"]
del _ssl_opt["suppress_ragged_eofs"]
- server = server_class(app, io_loop=self.ioloop,
- ssl_options=_ssl_opt)
+ if IOLOOP_PARAMETER_REMOVED:
+ server = server_class(app, ssl_options=_ssl_opt)
+ else:
+ server = server_class(app, io_loop=self.ioloop,
+ ssl_options=_ssl_opt)
else:
- server = server_class(app, io_loop=self.ioloop)
+ if IOLOOP_PARAMETER_REMOVED:
+ server = server_class(app)
+ else:
+ server = server_class(app, io_loop=self.ioloop)
self.server = server
self.server_alive = True
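For a quick smoke test of the patched worker (the file name, port, and handler below are assumptions; any WSGI callable works because the worker wraps non-`tornado.web.Application` apps in `WSGIContainer`):
```python
# web.py -- run with: gunicorn -w 1 -k tornado web:app -b 0.0.0.0:9003
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from the tornado worker\n"]
```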
|
{"golden_diff": "diff --git a/gunicorn/workers/gtornado.py b/gunicorn/workers/gtornado.py\n--- a/gunicorn/workers/gtornado.py\n+++ b/gunicorn/workers/gtornado.py\n@@ -8,9 +8,10 @@\n import sys\n \n try:\n- import tornado.web\n+ import tornado\n except ImportError:\n raise RuntimeError(\"You need tornado installed to use this worker.\")\n+import tornado.web\n import tornado.httpserver\n from tornado.ioloop import IOLoop, PeriodicCallback\n from tornado.wsgi import WSGIContainer\n@@ -18,6 +19,11 @@\n from gunicorn import __version__ as gversion\n \n \n+# `io_loop` arguments to many Tornado functions have been removed in Tornado 5.0\n+# <http://www.tornadoweb.org/en/stable/releases/v5.0.0.html#backwards-compatibility-notes>\n+IOLOOP_PARAMETER_REMOVED = tornado.version_info >= (5, 0, 0)\n+\n+\n class TornadoWorker(Worker):\n \n @classmethod\n@@ -67,8 +73,12 @@\n self.ioloop = IOLoop.instance()\n self.alive = True\n self.server_alive = False\n- PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()\n- PeriodicCallback(self.heartbeat, 1000, io_loop=self.ioloop).start()\n+ if IOLOOP_PARAMETER_REMOVED:\n+ PeriodicCallback(self.watchdog, 1000).start()\n+ PeriodicCallback(self.heartbeat, 1000).start()\n+ else:\n+ PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()\n+ PeriodicCallback(self.heartbeat, 1000, io_loop=self.ioloop).start()\n \n # Assume the app is a WSGI callable if its not an\n # instance of tornado.web.Application or is an\n@@ -109,10 +119,16 @@\n # options\n del _ssl_opt[\"do_handshake_on_connect\"]\n del _ssl_opt[\"suppress_ragged_eofs\"]\n- server = server_class(app, io_loop=self.ioloop,\n- ssl_options=_ssl_opt)\n+ if IOLOOP_PARAMETER_REMOVED:\n+ server = server_class(app, ssl_options=_ssl_opt)\n+ else:\n+ server = server_class(app, io_loop=self.ioloop,\n+ ssl_options=_ssl_opt)\n else:\n- server = server_class(app, io_loop=self.ioloop)\n+ if IOLOOP_PARAMETER_REMOVED:\n+ server = server_class(app)\n+ else:\n+ server = server_class(app, io_loop=self.ioloop)\n \n self.server = server\n self.server_alive = True\n", "issue": "tornado 5.0+ not compatible with gunicorn\ngunicorn version: 19.7.1\r\ntornado: 5.0\r\n\r\nfailed with:\r\n\r\n```\r\n$ ./bin/gunicorn -w 1 -t 300 -k tornado web:app -b 0.0.0.0:9003\r\n[2018-03-20 14:17:29 +0000] [28600] [INFO] Starting gunicorn 19.7.1\r\n[2018-03-20 14:17:29 +0000] [28600] [INFO] Listening at: http://0.0.0.0:9003 (28600)\r\n[2018-03-20 14:17:29 +0000] [28600] [INFO] Using worker: tornado\r\n[2018-03-20 14:17:29 +0000] [28605] [INFO] Booting worker with pid: 28605\r\n[2018-03-20 14:17:29 +0000] [28605] [ERROR] Exception in worker process\r\nTraceback (most recent call last):\r\n File \"/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/arbiter.py\", line 578, in spawn_worker\r\n worker.init_process()\r\n File \"/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/workers/base.py\", line 131, in init_process\r\n self.run()\r\n File \"/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/workers/gtornado.py\", line 70, in run\r\n PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()\r\nTypeError: __init__() got an unexpected keyword argument 'io_loop'\r\n[2018-03-20 14:17:29 +0000] [28605] [INFO] Worker exiting (pid: 28605)\r\n[2018-03-20 14:17:29 +0000] [28608] [INFO] Booting worker with pid: 28608\r\n[2018-03-20 14:17:29 +0000] [28608] [ERROR] Exception in 
worker process\r\nTraceback (most recent call last):\r\n File \"/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/arbiter.py\", line 578, in spawn_worker\r\n worker.init_process()\r\n File \"/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/workers/base.py\", line 131, in init_process\r\n self.run()\r\n File \"/data/data/pkgs/nginxgw/nginxgw-45-838b8953/.buildout/eggs/gunicorn-19.7.1-py2.py3-none-any.ovo/gunicorn/workers/gtornado.py\", line 70, in run\r\n PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()\r\nTypeError: __init__() got an unexpected keyword argument 'io_loop'\r\n```\r\naccording to the release note of Tornado 5.0:\r\n\r\n```\r\nio_loop arguments to many Tornado functions have been removed. Use IOLoop.current() instead of passing IOLoop objects explicitly.\r\n```\r\nhttp://www.tornadoweb.org/en/stable/releases/v5.0.0.html#backwards-compatibility-notes\n", "before_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport copy\nimport os\nimport sys\n\ntry:\n import tornado.web\nexcept ImportError:\n raise RuntimeError(\"You need tornado installed to use this worker.\")\nimport tornado.httpserver\nfrom tornado.ioloop import IOLoop, PeriodicCallback\nfrom tornado.wsgi import WSGIContainer\nfrom gunicorn.workers.base import Worker\nfrom gunicorn import __version__ as gversion\n\n\nclass TornadoWorker(Worker):\n\n @classmethod\n def setup(cls):\n web = sys.modules.pop(\"tornado.web\")\n old_clear = web.RequestHandler.clear\n\n def clear(self):\n old_clear(self)\n if \"Gunicorn\" not in self._headers[\"Server\"]:\n self._headers[\"Server\"] += \" (Gunicorn/%s)\" % gversion\n web.RequestHandler.clear = clear\n sys.modules[\"tornado.web\"] = web\n\n def handle_exit(self, sig, frame):\n if self.alive:\n super(TornadoWorker, self).handle_exit(sig, frame)\n\n def handle_request(self):\n self.nr += 1\n if self.alive and self.nr >= self.max_requests:\n self.log.info(\"Autorestarting worker after current request.\")\n self.alive = False\n\n def watchdog(self):\n if self.alive:\n self.notify()\n\n if self.ppid != os.getppid():\n self.log.info(\"Parent changed, shutting down: %s\", self)\n self.alive = False\n\n def heartbeat(self):\n if not self.alive:\n if self.server_alive:\n if hasattr(self, 'server'):\n try:\n self.server.stop()\n except Exception:\n pass\n self.server_alive = False\n else:\n if not self.ioloop._callbacks:\n self.ioloop.stop()\n\n def run(self):\n self.ioloop = IOLoop.instance()\n self.alive = True\n self.server_alive = False\n PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()\n PeriodicCallback(self.heartbeat, 1000, io_loop=self.ioloop).start()\n\n # Assume the app is a WSGI callable if its not an\n # instance of tornado.web.Application or is an\n # instance of tornado.wsgi.WSGIApplication\n app = self.wsgi\n if not isinstance(app, tornado.web.Application) or \\\n isinstance(app, tornado.wsgi.WSGIApplication):\n app = WSGIContainer(app)\n\n # Monkey-patching HTTPConnection.finish to count the\n # number of requests being handled by Tornado. 
This\n # will help gunicorn shutdown the worker if max_requests\n # is exceeded.\n httpserver = sys.modules[\"tornado.httpserver\"]\n if hasattr(httpserver, 'HTTPConnection'):\n old_connection_finish = httpserver.HTTPConnection.finish\n\n def finish(other):\n self.handle_request()\n old_connection_finish(other)\n httpserver.HTTPConnection.finish = finish\n sys.modules[\"tornado.httpserver\"] = httpserver\n\n server_class = tornado.httpserver.HTTPServer\n else:\n\n class _HTTPServer(tornado.httpserver.HTTPServer):\n\n def on_close(instance, server_conn):\n self.handle_request()\n super(_HTTPServer, instance).on_close(server_conn)\n\n server_class = _HTTPServer\n\n if self.cfg.is_ssl:\n _ssl_opt = copy.deepcopy(self.cfg.ssl_options)\n # tornado refuses initialization if ssl_options contains following\n # options\n del _ssl_opt[\"do_handshake_on_connect\"]\n del _ssl_opt[\"suppress_ragged_eofs\"]\n server = server_class(app, io_loop=self.ioloop,\n ssl_options=_ssl_opt)\n else:\n server = server_class(app, io_loop=self.ioloop)\n\n self.server = server\n self.server_alive = True\n\n for s in self.sockets:\n s.setblocking(0)\n if hasattr(server, \"add_socket\"): # tornado > 2.0\n server.add_socket(s)\n elif hasattr(server, \"_sockets\"): # tornado 2.0\n server._sockets[s.fileno()] = s\n\n server.no_keep_alive = self.cfg.keepalive <= 0\n server.start(num_processes=1)\n\n self.ioloop.start()\n", "path": "gunicorn/workers/gtornado.py"}], "after_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport copy\nimport os\nimport sys\n\ntry:\n import tornado\nexcept ImportError:\n raise RuntimeError(\"You need tornado installed to use this worker.\")\nimport tornado.web\nimport tornado.httpserver\nfrom tornado.ioloop import IOLoop, PeriodicCallback\nfrom tornado.wsgi import WSGIContainer\nfrom gunicorn.workers.base import Worker\nfrom gunicorn import __version__ as gversion\n\n\n# `io_loop` arguments to many Tornado functions have been removed in Tornado 5.0\n# <http://www.tornadoweb.org/en/stable/releases/v5.0.0.html#backwards-compatibility-notes>\nIOLOOP_PARAMETER_REMOVED = tornado.version_info >= (5, 0, 0)\n\n\nclass TornadoWorker(Worker):\n\n @classmethod\n def setup(cls):\n web = sys.modules.pop(\"tornado.web\")\n old_clear = web.RequestHandler.clear\n\n def clear(self):\n old_clear(self)\n if \"Gunicorn\" not in self._headers[\"Server\"]:\n self._headers[\"Server\"] += \" (Gunicorn/%s)\" % gversion\n web.RequestHandler.clear = clear\n sys.modules[\"tornado.web\"] = web\n\n def handle_exit(self, sig, frame):\n if self.alive:\n super(TornadoWorker, self).handle_exit(sig, frame)\n\n def handle_request(self):\n self.nr += 1\n if self.alive and self.nr >= self.max_requests:\n self.log.info(\"Autorestarting worker after current request.\")\n self.alive = False\n\n def watchdog(self):\n if self.alive:\n self.notify()\n\n if self.ppid != os.getppid():\n self.log.info(\"Parent changed, shutting down: %s\", self)\n self.alive = False\n\n def heartbeat(self):\n if not self.alive:\n if self.server_alive:\n if hasattr(self, 'server'):\n try:\n self.server.stop()\n except Exception:\n pass\n self.server_alive = False\n else:\n if not self.ioloop._callbacks:\n self.ioloop.stop()\n\n def run(self):\n self.ioloop = IOLoop.instance()\n self.alive = True\n self.server_alive = False\n if IOLOOP_PARAMETER_REMOVED:\n PeriodicCallback(self.watchdog, 1000).start()\n PeriodicCallback(self.heartbeat, 1000).start()\n 
else:\n PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()\n PeriodicCallback(self.heartbeat, 1000, io_loop=self.ioloop).start()\n\n # Assume the app is a WSGI callable if its not an\n # instance of tornado.web.Application or is an\n # instance of tornado.wsgi.WSGIApplication\n app = self.wsgi\n if not isinstance(app, tornado.web.Application) or \\\n isinstance(app, tornado.wsgi.WSGIApplication):\n app = WSGIContainer(app)\n\n # Monkey-patching HTTPConnection.finish to count the\n # number of requests being handled by Tornado. This\n # will help gunicorn shutdown the worker if max_requests\n # is exceeded.\n httpserver = sys.modules[\"tornado.httpserver\"]\n if hasattr(httpserver, 'HTTPConnection'):\n old_connection_finish = httpserver.HTTPConnection.finish\n\n def finish(other):\n self.handle_request()\n old_connection_finish(other)\n httpserver.HTTPConnection.finish = finish\n sys.modules[\"tornado.httpserver\"] = httpserver\n\n server_class = tornado.httpserver.HTTPServer\n else:\n\n class _HTTPServer(tornado.httpserver.HTTPServer):\n\n def on_close(instance, server_conn):\n self.handle_request()\n super(_HTTPServer, instance).on_close(server_conn)\n\n server_class = _HTTPServer\n\n if self.cfg.is_ssl:\n _ssl_opt = copy.deepcopy(self.cfg.ssl_options)\n # tornado refuses initialization if ssl_options contains following\n # options\n del _ssl_opt[\"do_handshake_on_connect\"]\n del _ssl_opt[\"suppress_ragged_eofs\"]\n if IOLOOP_PARAMETER_REMOVED:\n server = server_class(app, ssl_options=_ssl_opt)\n else:\n server = server_class(app, io_loop=self.ioloop,\n ssl_options=_ssl_opt)\n else:\n if IOLOOP_PARAMETER_REMOVED:\n server = server_class(app)\n else:\n server = server_class(app, io_loop=self.ioloop)\n\n self.server = server\n self.server_alive = True\n\n for s in self.sockets:\n s.setblocking(0)\n if hasattr(server, \"add_socket\"): # tornado > 2.0\n server.add_socket(s)\n elif hasattr(server, \"_sockets\"): # tornado 2.0\n server._sockets[s.fileno()] = s\n\n server.no_keep_alive = self.cfg.keepalive <= 0\n server.start(num_processes=1)\n\n self.ioloop.start()\n", "path": "gunicorn/workers/gtornado.py"}]}
| 2,528 | 624 |
gh_patches_debug_13881
|
rasdani/github-patches
|
git_diff
|
praw-dev__praw-939
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Provide "best" sort for front page (models.Front)
The new "best" sort for the front page isn't currently available via PRAW. See [this Reddit thread](https://www.reddit.com/r/redditdev/comments/8h8ijn/how_do_you_sort_best_via_the_api/).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `praw/models/front.py`
Content:
```
1 """Provide the Front class."""
2 from .listing.mixins import SubredditListingMixin
3
4
5 class Front(SubredditListingMixin):
6 """Front is a Listing class that represents the front page."""
7
8 def __init__(self, reddit):
9 """Initialize a Front instance."""
10 super(Front, self).__init__(reddit, None)
11 self._path = '/'
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/praw/models/front.py b/praw/models/front.py
--- a/praw/models/front.py
+++ b/praw/models/front.py
@@ -1,4 +1,6 @@
"""Provide the Front class."""
+from ..const import urljoin
+from .listing.generator import ListingGenerator
from .listing.mixins import SubredditListingMixin
@@ -9,3 +11,13 @@
"""Initialize a Front instance."""
super(Front, self).__init__(reddit, None)
self._path = '/'
+
+ def best(self, **generator_kwargs):
+ """Return a ListingGenerator for best items.
+
+ Additional keyword arguments are passed in the initialization of
+ :class:`.ListingGenerator`.
+
+ """
+ return ListingGenerator(self._reddit, urljoin(self._path, 'best'),
+ **generator_kwargs)
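For context, a minimal usage sketch of the `best()` listing this patch adds; the credentials and the `limit` value below are placeholder assumptions, not part of the patch:

```python
import praw

# Hypothetical credentials, shown only to make the sketch self-contained.
reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="example-script")

# The new listing mirrors the existing hot()/new() front-page generators.
for submission in reddit.front.best(limit=10):
    print(submission.title)
```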
|
{"golden_diff": "diff --git a/praw/models/front.py b/praw/models/front.py\n--- a/praw/models/front.py\n+++ b/praw/models/front.py\n@@ -1,4 +1,6 @@\n \"\"\"Provide the Front class.\"\"\"\n+from ..const import urljoin\n+from .listing.generator import ListingGenerator\n from .listing.mixins import SubredditListingMixin\n \n \n@@ -9,3 +11,13 @@\n \"\"\"Initialize a Front instance.\"\"\"\n super(Front, self).__init__(reddit, None)\n self._path = '/'\n+\n+ def best(self, **generator_kwargs):\n+ \"\"\"Return a ListingGenerator for best items.\n+\n+ Additional keyword arguments are passed in the initialization of\n+ :class:`.ListingGenerator`.\n+\n+ \"\"\"\n+ return ListingGenerator(self._reddit, urljoin(self._path, 'best'),\n+ **generator_kwargs)\n", "issue": "Provide \"best\" sort for front page (models.Front)\nThe new \"best\" sort for the front page isn't currently available via PRAW. See [this Reddit thread](https://www.reddit.com/r/redditdev/comments/8h8ijn/how_do_you_sort_best_via_the_api/).\n", "before_files": [{"content": "\"\"\"Provide the Front class.\"\"\"\nfrom .listing.mixins import SubredditListingMixin\n\n\nclass Front(SubredditListingMixin):\n \"\"\"Front is a Listing class that represents the front page.\"\"\"\n\n def __init__(self, reddit):\n \"\"\"Initialize a Front instance.\"\"\"\n super(Front, self).__init__(reddit, None)\n self._path = '/'\n", "path": "praw/models/front.py"}], "after_files": [{"content": "\"\"\"Provide the Front class.\"\"\"\nfrom ..const import urljoin\nfrom .listing.generator import ListingGenerator\nfrom .listing.mixins import SubredditListingMixin\n\n\nclass Front(SubredditListingMixin):\n \"\"\"Front is a Listing class that represents the front page.\"\"\"\n\n def __init__(self, reddit):\n \"\"\"Initialize a Front instance.\"\"\"\n super(Front, self).__init__(reddit, None)\n self._path = '/'\n\n def best(self, **generator_kwargs):\n \"\"\"Return a ListingGenerator for best items.\n\n Additional keyword arguments are passed in the initialization of\n :class:`.ListingGenerator`.\n\n \"\"\"\n return ListingGenerator(self._reddit, urljoin(self._path, 'best'),\n **generator_kwargs)\n", "path": "praw/models/front.py"}]}
| 411 | 187 |
gh_patches_debug_40979
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-599
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
(Event) Minimum length on event description and ingress
Possibly we should also limit the text to exclude words like TBA.
Events in ow4 demand some text in order to look good, so let's put a minimum requirement on length for the event texts. Making up a description of 200 characters should be no big deal.
I've really had it with "INFO: TBA"
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/events/admin.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from apps.events.models import Event
4 from apps.events.models import AttendanceEvent
5 from apps.events.models import Attendee
6 from apps.events.models import CompanyEvent
7 from apps.events.models import RuleBundle
8 from apps.events.models import FieldOfStudyRule
9 from apps.events.models import GradeRule
10 from apps.events.models import UserGroupRule
11
12 from apps.feedback.admin import FeedbackRelationInline
13
14 from django.contrib import admin
15
16
17 class AttendeeInline(admin.TabularInline):
18 model = Attendee
19 extra = 1
20
21
22 class CompanyInline(admin.TabularInline):
23 model = CompanyEvent
24 max_num = 20
25 extra = 0
26
27 class RuleBundleInline(admin.TabularInline):
28 model = RuleBundle
29 extra = 1
30 max_num = 20
31
32
33 class AttendanceEventAdmin(admin.ModelAdmin):
34 model = AttendanceEvent
35 inlines = (AttendeeInline, RuleBundleInline)
36
37 class AttendeeAdmin(admin.ModelAdmin):
38 model = Attendee
39 list_display = ('user', 'event')
40
41 class CompanyEventAdmin(admin.ModelAdmin):
42 model = CompanyEvent
43 inlines = (CompanyInline,)
44
45 class RuleBundleAdmin(admin.ModelAdmin):
46 model = RuleBundle
47
48 class FieldOfStudyRuleAdmin(admin.ModelAdmin):
49 model = FieldOfStudyRule
50
51 class GradeRuleAdmin(admin.ModelAdmin):
52 model = GradeRule
53
54 class UserGroupRuleAdmin(admin.ModelAdmin):
55 model = UserGroupRule
56
57 class AttendanceEventInline(admin.StackedInline):
58 model = AttendanceEvent
59 max_num = 1
60 extra = 0
61 filter_horizontal = ('rule_bundles',)
62
63
64 class EventAdmin(admin.ModelAdmin):
65 inlines = (AttendanceEventInline, FeedbackRelationInline, CompanyInline)
66 exclude = ("author", )
67
68 def save_model(self, request, obj, form, change):
69 if not change: # created
70 obj.author = request.user
71 obj.save()
72
73 def save_formset(self, request, form, formset, change):
74 instances = formset.save(commit=False)
75 for instance in instances:
76 instance.save()
77 formset.save_m2m()
78
79 admin.site.register(Event, EventAdmin)
80 admin.site.register(Attendee, AttendeeAdmin)
81 admin.site.register(AttendanceEvent, AttendanceEventAdmin)
82 admin.site.register(RuleBundle, RuleBundleAdmin)
83 admin.site.register(GradeRule, GradeRuleAdmin)
84 admin.site.register(UserGroupRule, UserGroupRuleAdmin)
85 admin.site.register(FieldOfStudyRule, FieldOfStudyRuleAdmin)
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/events/admin.py b/apps/events/admin.py
--- a/apps/events/admin.py
+++ b/apps/events/admin.py
@@ -1,5 +1,10 @@
# -*- coding: utf-8 -*-
+from django import forms
+from django.contrib import admin
+from django.core import validators
+from django.utils.translation import ugettext as _
+
from apps.events.models import Event
from apps.events.models import AttendanceEvent
from apps.events.models import Attendee
@@ -8,10 +13,8 @@
from apps.events.models import FieldOfStudyRule
from apps.events.models import GradeRule
from apps.events.models import UserGroupRule
-
from apps.feedback.admin import FeedbackRelationInline
-from django.contrib import admin
class AttendeeInline(admin.TabularInline):
@@ -24,6 +27,7 @@
max_num = 20
extra = 0
+
class RuleBundleInline(admin.TabularInline):
model = RuleBundle
extra = 1
@@ -34,32 +38,39 @@
model = AttendanceEvent
inlines = (AttendeeInline, RuleBundleInline)
+
class AttendeeAdmin(admin.ModelAdmin):
model = Attendee
list_display = ('user', 'event')
+
class CompanyEventAdmin(admin.ModelAdmin):
model = CompanyEvent
inlines = (CompanyInline,)
+
class RuleBundleAdmin(admin.ModelAdmin):
model = RuleBundle
-
+
+
class FieldOfStudyRuleAdmin(admin.ModelAdmin):
model = FieldOfStudyRule
+
class GradeRuleAdmin(admin.ModelAdmin):
model = GradeRule
+
class UserGroupRuleAdmin(admin.ModelAdmin):
model = UserGroupRule
+
class AttendanceEventInline(admin.StackedInline):
model = AttendanceEvent
max_num = 1
extra = 0
filter_horizontal = ('rule_bundles',)
-
+
class EventAdmin(admin.ModelAdmin):
inlines = (AttendanceEventInline, FeedbackRelationInline, CompanyInline)
@@ -76,6 +87,16 @@
instance.save()
formset.save_m2m()
+ def get_form(self, request, obj=None, **kwargs):
+ form = super(EventAdmin, self).get_form(request, obj, **kwargs)
+ form.base_fields['ingress_short'].validators=[validators.MinLengthValidator(75)]
+ form.base_fields['ingress'].validators=[validators.MinLengthValidator(100)]
+ form.base_fields['description'].validators=[
+ validators.MinLengthValidator(200),
+ validators.RegexValidator("^(?:(?!TBA).)*$", _("Beskrivelsen kan ikke inneholde 'TBA'."), "ulovlig"),
+ ]
+ return form
+
admin.site.register(Event, EventAdmin)
admin.site.register(Attendee, AttendeeAdmin)
admin.site.register(AttendanceEvent, AttendanceEventAdmin)
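A quick standalone sketch (not part of the patch) of how the regular expression used in `get_form` behaves; the sample strings are made up for illustration:

```python
import re

# Same pattern as the RegexValidator above: match only strings that never contain "TBA".
no_tba = re.compile(r"^(?:(?!TBA).)*$")

print(bool(no_tba.match("En utfyllende beskrivelse av arrangementet.")))  # True
print(bool(no_tba.match("INFO: TBA")))                                    # False, contains "TBA"
```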
|
{"golden_diff": "diff --git a/apps/events/admin.py b/apps/events/admin.py\n--- a/apps/events/admin.py\n+++ b/apps/events/admin.py\n@@ -1,5 +1,10 @@\n # -*- coding: utf-8 -*-\n \n+from django import forms\n+from django.contrib import admin\n+from django.core import validators\n+from django.utils.translation import ugettext as _\n+\n from apps.events.models import Event\n from apps.events.models import AttendanceEvent\n from apps.events.models import Attendee\n@@ -8,10 +13,8 @@\n from apps.events.models import FieldOfStudyRule\n from apps.events.models import GradeRule\n from apps.events.models import UserGroupRule\n-\n from apps.feedback.admin import FeedbackRelationInline\n \n-from django.contrib import admin\n \n \n class AttendeeInline(admin.TabularInline):\n@@ -24,6 +27,7 @@\n max_num = 20\n extra = 0\n \n+\n class RuleBundleInline(admin.TabularInline):\n model = RuleBundle\n extra = 1\n@@ -34,32 +38,39 @@\n model = AttendanceEvent\n inlines = (AttendeeInline, RuleBundleInline)\n \n+\n class AttendeeAdmin(admin.ModelAdmin):\n model = Attendee\n list_display = ('user', 'event')\n \n+\n class CompanyEventAdmin(admin.ModelAdmin):\n model = CompanyEvent\n inlines = (CompanyInline,)\n \n+\n class RuleBundleAdmin(admin.ModelAdmin):\n model = RuleBundle\n- \n+\n+\n class FieldOfStudyRuleAdmin(admin.ModelAdmin):\n model = FieldOfStudyRule\n \n+\n class GradeRuleAdmin(admin.ModelAdmin):\n model = GradeRule\n \n+\n class UserGroupRuleAdmin(admin.ModelAdmin):\n model = UserGroupRule\n \n+\n class AttendanceEventInline(admin.StackedInline):\n model = AttendanceEvent\n max_num = 1\n extra = 0\n filter_horizontal = ('rule_bundles',)\n- \n+\n \n class EventAdmin(admin.ModelAdmin):\n inlines = (AttendanceEventInline, FeedbackRelationInline, CompanyInline)\n@@ -76,6 +87,16 @@\n instance.save()\n formset.save_m2m()\n \n+ def get_form(self, request, obj=None, **kwargs):\n+ form = super(EventAdmin, self).get_form(request, obj, **kwargs)\n+ form.base_fields['ingress_short'].validators=[validators.MinLengthValidator(75)]\n+ form.base_fields['ingress'].validators=[validators.MinLengthValidator(100)]\n+ form.base_fields['description'].validators=[\n+ validators.MinLengthValidator(200),\n+ validators.RegexValidator(\"^(?:(?!TBA).)*$\", _(\"Beskrivelsen kan ikke inneholde 'TBA'.\"), \"ulovlig\"),\n+ ]\n+ return form\n+\n admin.site.register(Event, EventAdmin)\n admin.site.register(Attendee, AttendeeAdmin)\n admin.site.register(AttendanceEvent, AttendanceEventAdmin)\n", "issue": "(Event) Minimum length on event description and ingress\nPossibly we should also limit the text to exclude words like TBA.\n\nEvents in ow4 demand some text in order to look good, so let's put a minimum requirement on length for the event texts. 
Making up a description of 200 characters should be no big deal.\n\nI've really had it with \"INFO: TBA\"\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom apps.events.models import Event\nfrom apps.events.models import AttendanceEvent\nfrom apps.events.models import Attendee\nfrom apps.events.models import CompanyEvent\nfrom apps.events.models import RuleBundle\nfrom apps.events.models import FieldOfStudyRule\nfrom apps.events.models import GradeRule\nfrom apps.events.models import UserGroupRule\n\nfrom apps.feedback.admin import FeedbackRelationInline\n\nfrom django.contrib import admin\n\n\nclass AttendeeInline(admin.TabularInline):\n model = Attendee\n extra = 1\n\n\nclass CompanyInline(admin.TabularInline):\n model = CompanyEvent\n max_num = 20\n extra = 0\n\nclass RuleBundleInline(admin.TabularInline):\n model = RuleBundle\n extra = 1\n max_num = 20\n\n\nclass AttendanceEventAdmin(admin.ModelAdmin):\n model = AttendanceEvent\n inlines = (AttendeeInline, RuleBundleInline)\n\nclass AttendeeAdmin(admin.ModelAdmin):\n model = Attendee\n list_display = ('user', 'event')\n\nclass CompanyEventAdmin(admin.ModelAdmin):\n model = CompanyEvent\n inlines = (CompanyInline,)\n\nclass RuleBundleAdmin(admin.ModelAdmin):\n model = RuleBundle\n \nclass FieldOfStudyRuleAdmin(admin.ModelAdmin):\n model = FieldOfStudyRule\n\nclass GradeRuleAdmin(admin.ModelAdmin):\n model = GradeRule\n\nclass UserGroupRuleAdmin(admin.ModelAdmin):\n model = UserGroupRule\n\nclass AttendanceEventInline(admin.StackedInline):\n model = AttendanceEvent\n max_num = 1\n extra = 0\n filter_horizontal = ('rule_bundles',)\n \n\nclass EventAdmin(admin.ModelAdmin):\n inlines = (AttendanceEventInline, FeedbackRelationInline, CompanyInline)\n exclude = (\"author\", )\n\n def save_model(self, request, obj, form, change):\n if not change: # created\n obj.author = request.user\n obj.save()\n\n def save_formset(self, request, form, formset, change):\n instances = formset.save(commit=False)\n for instance in instances:\n instance.save()\n formset.save_m2m()\n\nadmin.site.register(Event, EventAdmin)\nadmin.site.register(Attendee, AttendeeAdmin)\nadmin.site.register(AttendanceEvent, AttendanceEventAdmin)\nadmin.site.register(RuleBundle, RuleBundleAdmin)\nadmin.site.register(GradeRule, GradeRuleAdmin)\nadmin.site.register(UserGroupRule, UserGroupRuleAdmin)\nadmin.site.register(FieldOfStudyRule, FieldOfStudyRuleAdmin)\n", "path": "apps/events/admin.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django import forms\nfrom django.contrib import admin\nfrom django.core import validators\nfrom django.utils.translation import ugettext as _\n\nfrom apps.events.models import Event\nfrom apps.events.models import AttendanceEvent\nfrom apps.events.models import Attendee\nfrom apps.events.models import CompanyEvent\nfrom apps.events.models import RuleBundle\nfrom apps.events.models import FieldOfStudyRule\nfrom apps.events.models import GradeRule\nfrom apps.events.models import UserGroupRule\nfrom apps.feedback.admin import FeedbackRelationInline\n\n\n\nclass AttendeeInline(admin.TabularInline):\n model = Attendee\n extra = 1\n\n\nclass CompanyInline(admin.TabularInline):\n model = CompanyEvent\n max_num = 20\n extra = 0\n\n\nclass RuleBundleInline(admin.TabularInline):\n model = RuleBundle\n extra = 1\n max_num = 20\n\n\nclass AttendanceEventAdmin(admin.ModelAdmin):\n model = AttendanceEvent\n inlines = (AttendeeInline, RuleBundleInline)\n\n\nclass AttendeeAdmin(admin.ModelAdmin):\n model = Attendee\n list_display 
= ('user', 'event')\n\n\nclass CompanyEventAdmin(admin.ModelAdmin):\n model = CompanyEvent\n inlines = (CompanyInline,)\n\n\nclass RuleBundleAdmin(admin.ModelAdmin):\n model = RuleBundle\n\n\nclass FieldOfStudyRuleAdmin(admin.ModelAdmin):\n model = FieldOfStudyRule\n\n\nclass GradeRuleAdmin(admin.ModelAdmin):\n model = GradeRule\n\n\nclass UserGroupRuleAdmin(admin.ModelAdmin):\n model = UserGroupRule\n\n\nclass AttendanceEventInline(admin.StackedInline):\n model = AttendanceEvent\n max_num = 1\n extra = 0\n filter_horizontal = ('rule_bundles',)\n\n\nclass EventAdmin(admin.ModelAdmin):\n inlines = (AttendanceEventInline, FeedbackRelationInline, CompanyInline)\n exclude = (\"author\", )\n\n def save_model(self, request, obj, form, change):\n if not change: # created\n obj.author = request.user\n obj.save()\n\n def save_formset(self, request, form, formset, change):\n instances = formset.save(commit=False)\n for instance in instances:\n instance.save()\n formset.save_m2m()\n\n def get_form(self, request, obj=None, **kwargs):\n form = super(EventAdmin, self).get_form(request, obj, **kwargs)\n form.base_fields['ingress_short'].validators=[validators.MinLengthValidator(75)]\n form.base_fields['ingress'].validators=[validators.MinLengthValidator(100)]\n form.base_fields['description'].validators=[\n validators.MinLengthValidator(200),\n validators.RegexValidator(\"^(?:(?!TBA).)*$\", _(\"Beskrivelsen kan ikke inneholde 'TBA'.\"), \"ulovlig\"),\n ]\n return form\n\nadmin.site.register(Event, EventAdmin)\nadmin.site.register(Attendee, AttendeeAdmin)\nadmin.site.register(AttendanceEvent, AttendanceEventAdmin)\nadmin.site.register(RuleBundle, RuleBundleAdmin)\nadmin.site.register(GradeRule, GradeRuleAdmin)\nadmin.site.register(UserGroupRule, UserGroupRuleAdmin)\nadmin.site.register(FieldOfStudyRule, FieldOfStudyRuleAdmin)\n", "path": "apps/events/admin.py"}]}
| 1,039 | 622 |
gh_patches_debug_14877
|
rasdani/github-patches
|
git_diff
|
ansible-collections__community.general-1958
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ZFS tries to change immutable attributes
##### SUMMARY
Module `zfs` tries to set immutable attributes like `volblocksize`. Seems to be a regression, see [ansible/ansible-modules-extras#769](https://github.com/ansible/ansible-modules-extras/issues/769) and [ansible/ansible-modules-extras#1378](https://github.com/ansible/ansible-modules-extras/pull/1378)
##### ISSUE TYPE
* Bug Report
##### COMPONENT NAME
zfs
##### ANSIBLE VERSION
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516]
```
##### CONFIGURATION
```
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /root/.ansible_vault_password.txt
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = [u'k8s', u'host_list', u'script', u'yaml', u'ini', u'constructed']
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Debian GNU/Linux 9.9 (Stretch)
##### STEPS TO REPRODUCE
```yaml
- name: "Create zvol"
zfs:
name: mypool/folder/myzvol
state: present
extra_zfs_properties:
volsize: '10G'
volblocksize: 128K
```
##### EXPECTED RESULTS
Module `zfs` should not try to set immutable attributes like `volblocksize` on existing filesystems/volumes.
##### ACTUAL RESULTS
```
fatal: [nodename]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"createparent": null,
"extra_zfs_properties": {
"volblocksize": "128K",
"volsize": "10G"
},
"name": "mypool/folder/myzvol",
"origin": null,
"state": "present"
}
},
"msg": "cannot set property for 'mypool/folder/myzvol: 'volblocksize' is readonly\n"
}
```
Similarly with `encryption`:
```yaml
extra_zfs_properties:
encryption: aes-256-gcm
keyformat: passphrase
keylocation: file:///tmp/zfs-pass
mountpoint: legacy
```
```
fatal: [barbican]: FAILED! => {"changed": false, "msg": "cannot set property for 'barbican/nixos': 'encryption' is readonly\n"}
```
cf https://github.com/ansible/ansible/issues/58759 https://github.com/ansible/ansible/issues/29422
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/modules/storage/zfs/zfs.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright: (c) 2013, Johan Wiren <[email protected]>
5 # Copyright: (c) 2017, Ansible Project
6 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
7
8 from __future__ import absolute_import, division, print_function
9 __metaclass__ = type
10
11 DOCUMENTATION = '''
12 ---
13 module: zfs
14 short_description: Manage zfs
15 description:
16 - Manages ZFS file systems, volumes, clones and snapshots
17 options:
18 name:
19 description:
20 - File system, snapshot or volume name e.g. C(rpool/myfs).
21 required: true
22 type: str
23 state:
24 description:
25 - Whether to create (C(present)), or remove (C(absent)) a
26 file system, snapshot or volume. All parents/children
27 will be created/destroyed as needed to reach the desired state.
28 choices: [ absent, present ]
29 required: true
30 type: str
31 origin:
32 description:
33 - Snapshot from which to create a clone.
34 type: str
35 extra_zfs_properties:
36 description:
37 - A dictionary of zfs properties to be set.
38 - See the zfs(8) man page for more information.
39 type: dict
40 author:
41 - Johan Wiren (@johanwiren)
42 '''
43
44 EXAMPLES = '''
45 - name: Create a new file system called myfs in pool rpool with the setuid property turned off
46 community.general.zfs:
47 name: rpool/myfs
48 state: present
49 extra_zfs_properties:
50 setuid: off
51
52 - name: Create a new volume called myvol in pool rpool.
53 community.general.zfs:
54 name: rpool/myvol
55 state: present
56 extra_zfs_properties:
57 volsize: 10M
58
59 - name: Create a snapshot of rpool/myfs file system.
60 community.general.zfs:
61 name: rpool/myfs@mysnapshot
62 state: present
63
64 - name: Create a new file system called myfs2 with snapdir enabled
65 community.general.zfs:
66 name: rpool/myfs2
67 state: present
68 extra_zfs_properties:
69 snapdir: enabled
70
71 - name: Create a new file system by cloning a snapshot
72 community.general.zfs:
73 name: rpool/cloned_fs
74 state: present
75 origin: rpool/myfs@mysnapshot
76
77 - name: Destroy a filesystem
78 community.general.zfs:
79 name: rpool/myfs
80 state: absent
81 '''
82
83 import os
84
85 from ansible.module_utils.basic import AnsibleModule
86
87
88 class Zfs(object):
89
90 def __init__(self, module, name, properties):
91 self.module = module
92 self.name = name
93 self.properties = properties
94 self.changed = False
95 self.zfs_cmd = module.get_bin_path('zfs', True)
96 self.zpool_cmd = module.get_bin_path('zpool', True)
97 self.pool = name.split('/')[0].split('@')[0]
98 self.is_solaris = os.uname()[0] == 'SunOS'
99 self.is_openzfs = self.check_openzfs()
100 self.enhanced_sharing = self.check_enhanced_sharing()
101
102 def check_openzfs(self):
103 cmd = [self.zpool_cmd]
104 cmd.extend(['get', 'version'])
105 cmd.append(self.pool)
106 (rc, out, err) = self.module.run_command(cmd, check_rc=True)
107 version = out.splitlines()[-1].split()[2]
108 if version == '-':
109 return True
110 if int(version) == 5000:
111 return True
112 return False
113
114 def check_enhanced_sharing(self):
115 if self.is_solaris and not self.is_openzfs:
116 cmd = [self.zpool_cmd]
117 cmd.extend(['get', 'version'])
118 cmd.append(self.pool)
119 (rc, out, err) = self.module.run_command(cmd, check_rc=True)
120 version = out.splitlines()[-1].split()[2]
121 if int(version) >= 34:
122 return True
123 return False
124
125 def exists(self):
126 cmd = [self.zfs_cmd, 'list', '-t', 'all', self.name]
127 (rc, out, err) = self.module.run_command(' '.join(cmd))
128 if rc == 0:
129 return True
130 else:
131 return False
132
133 def create(self):
134 if self.module.check_mode:
135 self.changed = True
136 return
137 properties = self.properties
138 origin = self.module.params.get('origin', None)
139 cmd = [self.zfs_cmd]
140
141 if "@" in self.name:
142 action = 'snapshot'
143 elif origin:
144 action = 'clone'
145 else:
146 action = 'create'
147
148 cmd.append(action)
149
150 if action in ['create', 'clone']:
151 cmd += ['-p']
152
153 if properties:
154 for prop, value in properties.items():
155 if prop == 'volsize':
156 cmd += ['-V', value]
157 elif prop == 'volblocksize':
158 cmd += ['-b', value]
159 else:
160 cmd += ['-o', '%s="%s"' % (prop, value)]
161 if origin and action == 'clone':
162 cmd.append(origin)
163 cmd.append(self.name)
164 (rc, out, err) = self.module.run_command(' '.join(cmd))
165 if rc == 0:
166 self.changed = True
167 else:
168 self.module.fail_json(msg=err)
169
170 def destroy(self):
171 if self.module.check_mode:
172 self.changed = True
173 return
174 cmd = [self.zfs_cmd, 'destroy', '-R', self.name]
175 (rc, out, err) = self.module.run_command(' '.join(cmd))
176 if rc == 0:
177 self.changed = True
178 else:
179 self.module.fail_json(msg=err)
180
181 def set_property(self, prop, value):
182 if self.module.check_mode:
183 self.changed = True
184 return
185 cmd = [self.zfs_cmd, 'set', prop + '=' + str(value), self.name]
186 (rc, out, err) = self.module.run_command(cmd)
187 if rc == 0:
188 self.changed = True
189 else:
190 self.module.fail_json(msg=err)
191
192 def set_properties_if_changed(self):
193 current_properties = self.get_current_properties()
194 for prop, value in self.properties.items():
195 if current_properties.get(prop, None) != value:
196 self.set_property(prop, value)
197
198 def get_current_properties(self):
199 cmd = [self.zfs_cmd, 'get', '-H']
200 if self.enhanced_sharing:
201 cmd += ['-e']
202 cmd += ['all', self.name]
203 rc, out, err = self.module.run_command(" ".join(cmd))
204 properties = dict()
205 for prop, value, source in [l.split('\t')[1:4] for l in out.splitlines()]:
206 if source == 'local':
207 properties[prop] = value
208 # Add alias for enhanced sharing properties
209 if self.enhanced_sharing:
210 properties['sharenfs'] = properties.get('share.nfs', None)
211 properties['sharesmb'] = properties.get('share.smb', None)
212 return properties
213
214
215 def main():
216
217 module = AnsibleModule(
218 argument_spec=dict(
219 name=dict(type='str', required=True),
220 state=dict(type='str', required=True, choices=['absent', 'present']),
221 origin=dict(type='str', default=None),
222 extra_zfs_properties=dict(type='dict', default={}),
223 ),
224 supports_check_mode=True,
225 )
226
227 state = module.params.get('state')
228 name = module.params.get('name')
229
230 if module.params.get('origin') and '@' in name:
231 module.fail_json(msg='cannot specify origin when operating on a snapshot')
232
233 # Reverse the boolification of zfs properties
234 for prop, value in module.params['extra_zfs_properties'].items():
235 if isinstance(value, bool):
236 if value is True:
237 module.params['extra_zfs_properties'][prop] = 'on'
238 else:
239 module.params['extra_zfs_properties'][prop] = 'off'
240 else:
241 module.params['extra_zfs_properties'][prop] = value
242
243 result = dict(
244 name=name,
245 state=state,
246 )
247
248 zfs = Zfs(module, name, module.params['extra_zfs_properties'])
249
250 if state == 'present':
251 if zfs.exists():
252 zfs.set_properties_if_changed()
253 else:
254 zfs.create()
255
256 elif state == 'absent':
257 if zfs.exists():
258 zfs.destroy()
259
260 result.update(zfs.properties)
261 result['changed'] = zfs.changed
262 module.exit_json(**result)
263
264
265 if __name__ == '__main__':
266 main()
267
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugins/modules/storage/zfs/zfs.py b/plugins/modules/storage/zfs/zfs.py
--- a/plugins/modules/storage/zfs/zfs.py
+++ b/plugins/modules/storage/zfs/zfs.py
@@ -203,7 +203,10 @@
rc, out, err = self.module.run_command(" ".join(cmd))
properties = dict()
for prop, value, source in [l.split('\t')[1:4] for l in out.splitlines()]:
- if source == 'local':
+ # include source '-' so that creation-only properties are not removed
+ # to avoids errors when the dataset already exists and the property is not changed
+ # this scenario is most likely when the same playbook is run more than once
+ if source == 'local' or source == '-':
properties[prop] = value
# Add alias for enhanced sharing properties
if self.enhanced_sharing:
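To see why accepting source '-' matters, here is a hedged sketch of the same tab-separated parsing with invented `zfs get -H all` output lines (the dataset name and property values are illustrative):

```python
# Illustrative `zfs get -H all mypool/folder/myzvol` lines: name, property, value, source.
sample_out = (
    "mypool/folder/myzvol\tvolsize\t10G\tlocal\n"
    "mypool/folder/myzvol\tvolblocksize\t128K\t-\n"      # creation-only property, source '-'
    "mypool/folder/myzvol\tcompression\toff\tdefault\n"  # default/inherited, still skipped
)

properties = {}
for prop, value, source in [line.split("\t")[1:4] for line in sample_out.splitlines()]:
    if source == "local" or source == "-":
        properties[prop] = value

# volblocksize now takes part in the comparison, so an unchanged value no longer
# triggers a read-only `zfs set` on the second playbook run.
print(properties)  # {'volsize': '10G', 'volblocksize': '128K'}
```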
|
{"golden_diff": "diff --git a/plugins/modules/storage/zfs/zfs.py b/plugins/modules/storage/zfs/zfs.py\n--- a/plugins/modules/storage/zfs/zfs.py\n+++ b/plugins/modules/storage/zfs/zfs.py\n@@ -203,7 +203,10 @@\n rc, out, err = self.module.run_command(\" \".join(cmd))\n properties = dict()\n for prop, value, source in [l.split('\\t')[1:4] for l in out.splitlines()]:\n- if source == 'local':\n+ # include source '-' so that creation-only properties are not removed\n+ # to avoids errors when the dataset already exists and the property is not changed\n+ # this scenario is most likely when the same playbook is run more than once\n+ if source == 'local' or source == '-':\n properties[prop] = value\n # Add alias for enhanced sharing properties\n if self.enhanced_sharing:\n", "issue": "ZFS tries to change immutable attributes\n##### SUMMARY\r\n\r\nModule `zfs` tries to set immutable attributes like `volblocksize`. Seems to be a regression, see [ansible/ansible-modules-extras#769](https://github.com/ansible/ansible-modules-extras/issues/769) and [ansible/ansible-modules-extras#1378](https://github.com/ansible/ansible-modules-extras/pull/1378)\r\n##### ISSUE TYPE\r\n\r\n * Bug Report\r\n\r\n\r\n##### COMPONENT NAME\r\n\r\nzfs\r\n##### ANSIBLE VERSION\r\n\r\n```\r\nansible 2.8.1\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n```\r\nANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True\r\nDEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /root/.ansible_vault_password.txt\r\nINVENTORY_ENABLED(/etc/ansible/ansible.cfg) = [u'k8s', u'host_list', u'script', u'yaml', u'ini', u'constructed']\r\nRETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n\r\nDebian GNU/Linux 9.9 (Stretch)\r\n##### STEPS TO REPRODUCE\r\n\r\n```yaml\r\n- name: \"Create zvol\"\r\n zfs:\r\n name: mypool/folder/myzvol\r\n state: present\r\n extra_zfs_properties:\r\n volsize: '10G'\r\n volblocksize: 128K\r\n```\r\n\r\n##### EXPECTED RESULTS\r\n\r\nModule `zfs` should not try to set immutable attributes like `volblocksize` on existing fileystems/volumes.\r\n##### ACTUAL RESULTS\r\n\r\n```\r\nfatal: [nodename]: FAILED! => {\r\n \"changed\": false,\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"createparent\": null,\r\n \"extra_zfs_properties\": {\r\n \"volblocksize\": \"128K\",\r\n \"volsize\": \"10G\"\r\n },\r\n \"name\": \"mypool/folder/myzvol\",\r\n \"origin\": null,\r\n \"state\": \"present\"\r\n }\r\n },\r\n \"msg\": \"cannot set property for 'mypool/folder/myzvol: 'volblocksize' is readonly\\n\"\r\n}\r\n```\r\n\r\n\r\nSimilarly with `encryption`:\r\n\r\n\r\n```yaml\r\n extra_zfs_properties:\r\n encryption: aes-256-gcm\r\n keyformat: passphrase\r\n keylocation: file:///tmp/zfs-pass\r\n mountpoint: legacy\r\n```\r\n\r\n```\r\nfatal: [barbican]: FAILED! 
=> {\"changed\": false, \"msg\": \"cannot set property for 'barbican/nixos': 'encryption' is readonly\\n\"}\r\n```\r\n\r\ncf https://github.com/ansible/ansible/issues/58759 https://github.com/ansible/ansible/issues/29422\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: (c) 2013, Johan Wiren <[email protected]>\n# Copyright: (c) 2017, Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\nDOCUMENTATION = '''\n---\nmodule: zfs\nshort_description: Manage zfs\ndescription:\n - Manages ZFS file systems, volumes, clones and snapshots\noptions:\n name:\n description:\n - File system, snapshot or volume name e.g. C(rpool/myfs).\n required: true\n type: str\n state:\n description:\n - Whether to create (C(present)), or remove (C(absent)) a\n file system, snapshot or volume. All parents/children\n will be created/destroyed as needed to reach the desired state.\n choices: [ absent, present ]\n required: true\n type: str\n origin:\n description:\n - Snapshot from which to create a clone.\n type: str\n extra_zfs_properties:\n description:\n - A dictionary of zfs properties to be set.\n - See the zfs(8) man page for more information.\n type: dict\nauthor:\n- Johan Wiren (@johanwiren)\n'''\n\nEXAMPLES = '''\n- name: Create a new file system called myfs in pool rpool with the setuid property turned off\n community.general.zfs:\n name: rpool/myfs\n state: present\n extra_zfs_properties:\n setuid: off\n\n- name: Create a new volume called myvol in pool rpool.\n community.general.zfs:\n name: rpool/myvol\n state: present\n extra_zfs_properties:\n volsize: 10M\n\n- name: Create a snapshot of rpool/myfs file system.\n community.general.zfs:\n name: rpool/myfs@mysnapshot\n state: present\n\n- name: Create a new file system called myfs2 with snapdir enabled\n community.general.zfs:\n name: rpool/myfs2\n state: present\n extra_zfs_properties:\n snapdir: enabled\n\n- name: Create a new file system by cloning a snapshot\n community.general.zfs:\n name: rpool/cloned_fs\n state: present\n origin: rpool/myfs@mysnapshot\n\n- name: Destroy a filesystem\n community.general.zfs:\n name: rpool/myfs\n state: absent\n'''\n\nimport os\n\nfrom ansible.module_utils.basic import AnsibleModule\n\n\nclass Zfs(object):\n\n def __init__(self, module, name, properties):\n self.module = module\n self.name = name\n self.properties = properties\n self.changed = False\n self.zfs_cmd = module.get_bin_path('zfs', True)\n self.zpool_cmd = module.get_bin_path('zpool', True)\n self.pool = name.split('/')[0].split('@')[0]\n self.is_solaris = os.uname()[0] == 'SunOS'\n self.is_openzfs = self.check_openzfs()\n self.enhanced_sharing = self.check_enhanced_sharing()\n\n def check_openzfs(self):\n cmd = [self.zpool_cmd]\n cmd.extend(['get', 'version'])\n cmd.append(self.pool)\n (rc, out, err) = self.module.run_command(cmd, check_rc=True)\n version = out.splitlines()[-1].split()[2]\n if version == '-':\n return True\n if int(version) == 5000:\n return True\n return False\n\n def check_enhanced_sharing(self):\n if self.is_solaris and not self.is_openzfs:\n cmd = [self.zpool_cmd]\n cmd.extend(['get', 'version'])\n cmd.append(self.pool)\n (rc, out, err) = self.module.run_command(cmd, check_rc=True)\n version = out.splitlines()[-1].split()[2]\n if int(version) >= 34:\n return True\n return False\n\n def exists(self):\n cmd = [self.zfs_cmd, 'list', '-t', 'all', 
self.name]\n (rc, out, err) = self.module.run_command(' '.join(cmd))\n if rc == 0:\n return True\n else:\n return False\n\n def create(self):\n if self.module.check_mode:\n self.changed = True\n return\n properties = self.properties\n origin = self.module.params.get('origin', None)\n cmd = [self.zfs_cmd]\n\n if \"@\" in self.name:\n action = 'snapshot'\n elif origin:\n action = 'clone'\n else:\n action = 'create'\n\n cmd.append(action)\n\n if action in ['create', 'clone']:\n cmd += ['-p']\n\n if properties:\n for prop, value in properties.items():\n if prop == 'volsize':\n cmd += ['-V', value]\n elif prop == 'volblocksize':\n cmd += ['-b', value]\n else:\n cmd += ['-o', '%s=\"%s\"' % (prop, value)]\n if origin and action == 'clone':\n cmd.append(origin)\n cmd.append(self.name)\n (rc, out, err) = self.module.run_command(' '.join(cmd))\n if rc == 0:\n self.changed = True\n else:\n self.module.fail_json(msg=err)\n\n def destroy(self):\n if self.module.check_mode:\n self.changed = True\n return\n cmd = [self.zfs_cmd, 'destroy', '-R', self.name]\n (rc, out, err) = self.module.run_command(' '.join(cmd))\n if rc == 0:\n self.changed = True\n else:\n self.module.fail_json(msg=err)\n\n def set_property(self, prop, value):\n if self.module.check_mode:\n self.changed = True\n return\n cmd = [self.zfs_cmd, 'set', prop + '=' + str(value), self.name]\n (rc, out, err) = self.module.run_command(cmd)\n if rc == 0:\n self.changed = True\n else:\n self.module.fail_json(msg=err)\n\n def set_properties_if_changed(self):\n current_properties = self.get_current_properties()\n for prop, value in self.properties.items():\n if current_properties.get(prop, None) != value:\n self.set_property(prop, value)\n\n def get_current_properties(self):\n cmd = [self.zfs_cmd, 'get', '-H']\n if self.enhanced_sharing:\n cmd += ['-e']\n cmd += ['all', self.name]\n rc, out, err = self.module.run_command(\" \".join(cmd))\n properties = dict()\n for prop, value, source in [l.split('\\t')[1:4] for l in out.splitlines()]:\n if source == 'local':\n properties[prop] = value\n # Add alias for enhanced sharing properties\n if self.enhanced_sharing:\n properties['sharenfs'] = properties.get('share.nfs', None)\n properties['sharesmb'] = properties.get('share.smb', None)\n return properties\n\n\ndef main():\n\n module = AnsibleModule(\n argument_spec=dict(\n name=dict(type='str', required=True),\n state=dict(type='str', required=True, choices=['absent', 'present']),\n origin=dict(type='str', default=None),\n extra_zfs_properties=dict(type='dict', default={}),\n ),\n supports_check_mode=True,\n )\n\n state = module.params.get('state')\n name = module.params.get('name')\n\n if module.params.get('origin') and '@' in name:\n module.fail_json(msg='cannot specify origin when operating on a snapshot')\n\n # Reverse the boolification of zfs properties\n for prop, value in module.params['extra_zfs_properties'].items():\n if isinstance(value, bool):\n if value is True:\n module.params['extra_zfs_properties'][prop] = 'on'\n else:\n module.params['extra_zfs_properties'][prop] = 'off'\n else:\n module.params['extra_zfs_properties'][prop] = value\n\n result = dict(\n name=name,\n state=state,\n )\n\n zfs = Zfs(module, name, module.params['extra_zfs_properties'])\n\n if state == 'present':\n if zfs.exists():\n zfs.set_properties_if_changed()\n else:\n zfs.create()\n\n elif state == 'absent':\n if zfs.exists():\n zfs.destroy()\n\n result.update(zfs.properties)\n result['changed'] = zfs.changed\n module.exit_json(**result)\n\n\nif __name__ == '__main__':\n 
main()\n", "path": "plugins/modules/storage/zfs/zfs.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: (c) 2013, Johan Wiren <[email protected]>\n# Copyright: (c) 2017, Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\nDOCUMENTATION = '''\n---\nmodule: zfs\nshort_description: Manage zfs\ndescription:\n - Manages ZFS file systems, volumes, clones and snapshots\noptions:\n name:\n description:\n - File system, snapshot or volume name e.g. C(rpool/myfs).\n required: true\n type: str\n state:\n description:\n - Whether to create (C(present)), or remove (C(absent)) a\n file system, snapshot or volume. All parents/children\n will be created/destroyed as needed to reach the desired state.\n choices: [ absent, present ]\n required: true\n type: str\n origin:\n description:\n - Snapshot from which to create a clone.\n type: str\n extra_zfs_properties:\n description:\n - A dictionary of zfs properties to be set.\n - See the zfs(8) man page for more information.\n type: dict\nauthor:\n- Johan Wiren (@johanwiren)\n'''\n\nEXAMPLES = '''\n- name: Create a new file system called myfs in pool rpool with the setuid property turned off\n community.general.zfs:\n name: rpool/myfs\n state: present\n extra_zfs_properties:\n setuid: off\n\n- name: Create a new volume called myvol in pool rpool.\n community.general.zfs:\n name: rpool/myvol\n state: present\n extra_zfs_properties:\n volsize: 10M\n\n- name: Create a snapshot of rpool/myfs file system.\n community.general.zfs:\n name: rpool/myfs@mysnapshot\n state: present\n\n- name: Create a new file system called myfs2 with snapdir enabled\n community.general.zfs:\n name: rpool/myfs2\n state: present\n extra_zfs_properties:\n snapdir: enabled\n\n- name: Create a new file system by cloning a snapshot\n community.general.zfs:\n name: rpool/cloned_fs\n state: present\n origin: rpool/myfs@mysnapshot\n\n- name: Destroy a filesystem\n community.general.zfs:\n name: rpool/myfs\n state: absent\n'''\n\nimport os\n\nfrom ansible.module_utils.basic import AnsibleModule\n\n\nclass Zfs(object):\n\n def __init__(self, module, name, properties):\n self.module = module\n self.name = name\n self.properties = properties\n self.changed = False\n self.zfs_cmd = module.get_bin_path('zfs', True)\n self.zpool_cmd = module.get_bin_path('zpool', True)\n self.pool = name.split('/')[0].split('@')[0]\n self.is_solaris = os.uname()[0] == 'SunOS'\n self.is_openzfs = self.check_openzfs()\n self.enhanced_sharing = self.check_enhanced_sharing()\n\n def check_openzfs(self):\n cmd = [self.zpool_cmd]\n cmd.extend(['get', 'version'])\n cmd.append(self.pool)\n (rc, out, err) = self.module.run_command(cmd, check_rc=True)\n version = out.splitlines()[-1].split()[2]\n if version == '-':\n return True\n if int(version) == 5000:\n return True\n return False\n\n def check_enhanced_sharing(self):\n if self.is_solaris and not self.is_openzfs:\n cmd = [self.zpool_cmd]\n cmd.extend(['get', 'version'])\n cmd.append(self.pool)\n (rc, out, err) = self.module.run_command(cmd, check_rc=True)\n version = out.splitlines()[-1].split()[2]\n if int(version) >= 34:\n return True\n return False\n\n def exists(self):\n cmd = [self.zfs_cmd, 'list', '-t', 'all', self.name]\n (rc, out, err) = self.module.run_command(' '.join(cmd))\n if rc == 0:\n return True\n else:\n return False\n\n def create(self):\n if self.module.check_mode:\n 
self.changed = True\n return\n properties = self.properties\n origin = self.module.params.get('origin', None)\n cmd = [self.zfs_cmd]\n\n if \"@\" in self.name:\n action = 'snapshot'\n elif origin:\n action = 'clone'\n else:\n action = 'create'\n\n cmd.append(action)\n\n if action in ['create', 'clone']:\n cmd += ['-p']\n\n if properties:\n for prop, value in properties.items():\n if prop == 'volsize':\n cmd += ['-V', value]\n elif prop == 'volblocksize':\n cmd += ['-b', value]\n else:\n cmd += ['-o', '%s=\"%s\"' % (prop, value)]\n if origin and action == 'clone':\n cmd.append(origin)\n cmd.append(self.name)\n (rc, out, err) = self.module.run_command(' '.join(cmd))\n if rc == 0:\n self.changed = True\n else:\n self.module.fail_json(msg=err)\n\n def destroy(self):\n if self.module.check_mode:\n self.changed = True\n return\n cmd = [self.zfs_cmd, 'destroy', '-R', self.name]\n (rc, out, err) = self.module.run_command(' '.join(cmd))\n if rc == 0:\n self.changed = True\n else:\n self.module.fail_json(msg=err)\n\n def set_property(self, prop, value):\n if self.module.check_mode:\n self.changed = True\n return\n cmd = [self.zfs_cmd, 'set', prop + '=' + str(value), self.name]\n (rc, out, err) = self.module.run_command(cmd)\n if rc == 0:\n self.changed = True\n else:\n self.module.fail_json(msg=err)\n\n def set_properties_if_changed(self):\n current_properties = self.get_current_properties()\n for prop, value in self.properties.items():\n if current_properties.get(prop, None) != value:\n self.set_property(prop, value)\n\n def get_current_properties(self):\n cmd = [self.zfs_cmd, 'get', '-H']\n if self.enhanced_sharing:\n cmd += ['-e']\n cmd += ['all', self.name]\n rc, out, err = self.module.run_command(\" \".join(cmd))\n properties = dict()\n for prop, value, source in [l.split('\\t')[1:4] for l in out.splitlines()]:\n # include source '-' so that creation-only properties are not removed\n # to avoids errors when the dataset already exists and the property is not changed\n # this scenario is most likely when the same playbook is run more than once\n if source == 'local' or source == '-':\n properties[prop] = value\n # Add alias for enhanced sharing properties\n if self.enhanced_sharing:\n properties['sharenfs'] = properties.get('share.nfs', None)\n properties['sharesmb'] = properties.get('share.smb', None)\n return properties\n\n\ndef main():\n\n module = AnsibleModule(\n argument_spec=dict(\n name=dict(type='str', required=True),\n state=dict(type='str', required=True, choices=['absent', 'present']),\n origin=dict(type='str', default=None),\n extra_zfs_properties=dict(type='dict', default={}),\n ),\n supports_check_mode=True,\n )\n\n state = module.params.get('state')\n name = module.params.get('name')\n\n if module.params.get('origin') and '@' in name:\n module.fail_json(msg='cannot specify origin when operating on a snapshot')\n\n # Reverse the boolification of zfs properties\n for prop, value in module.params['extra_zfs_properties'].items():\n if isinstance(value, bool):\n if value is True:\n module.params['extra_zfs_properties'][prop] = 'on'\n else:\n module.params['extra_zfs_properties'][prop] = 'off'\n else:\n module.params['extra_zfs_properties'][prop] = value\n\n result = dict(\n name=name,\n state=state,\n )\n\n zfs = Zfs(module, name, module.params['extra_zfs_properties'])\n\n if state == 'present':\n if zfs.exists():\n zfs.set_properties_if_changed()\n else:\n zfs.create()\n\n elif state == 'absent':\n if zfs.exists():\n zfs.destroy()\n\n result.update(zfs.properties)\n result['changed'] 
= zfs.changed\n module.exit_json(**result)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/storage/zfs/zfs.py"}]}
| 3,645 | 203 |
gh_patches_debug_22812
|
rasdani/github-patches
|
git_diff
|
redis__redis-py-2071
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add dialect support for RediSearch queries
**Version**: What redis-py and what redis version is the issue happening on?
RediSearch>=2.4.3
On RediSearch 2.4.3 we will introduce a new `DIALECT` command argument for queries (`FT.SEARCH`, `FT.AGGREGATE`). This will allow queries to be processed either by the current dialect (RediSearch 2.2.x) or by the new dialect introduced in RediSearch 2.4.2.
New dialect features:
1. parameterized queries
2. VSS
3. Improved syntax behavior as introduced at https://github.com/RediSearch/RediSearch/releases/tag/v2.4.2
--- END ISSUE ---
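A minimal sketch of how the requested dialect support might look from the caller's side. It assumes the chainable `dialect()` setter that the golden patch further below adds to `Query`, plus a RediSearch index named "idx" with a `title` field — both the index name and the field are made-up examples, not part of the issue.

```python
# Illustrative sketch only: Query.dialect() is the setter proposed in the patch
# below; the index "idx" and the @title field are hypothetical.
import redis
from redis.commands.search.query import Query

r = redis.Redis()

# Ask the server to parse this query under dialect 2 (RediSearch >= 2.4.3),
# which enables parameterized queries and the newer syntax behaviour.
q = Query("@title:(hello world)").dialect(2)
results = r.ft("idx").search(q)
```

A chainable setter keeps the new option consistent with the existing `Query` builders such as `verbatim()` and `slop()`.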
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redis/commands/search/query.py`
Content:
```
1 class Query:
2 """
3 Query is used to build complex queries that have more parameters than just
4 the query string. The query string is set in the constructor, and other
5 options have setter functions.
6
7 The setter functions return the query object, so they can be chained,
8 i.e. `Query("foo").verbatim().filter(...)` etc.
9 """
10
11 def __init__(self, query_string):
12 """
13 Create a new query object.
14 The query string is set in the constructor, and other options have
15 setter functions.
16 """
17
18 self._query_string = query_string
19 self._offset = 0
20 self._num = 10
21 self._no_content = False
22 self._no_stopwords = False
23 self._fields = None
24 self._verbatim = False
25 self._with_payloads = False
26 self._with_scores = False
27 self._scorer = False
28 self._filters = list()
29 self._ids = None
30 self._slop = -1
31 self._in_order = False
32 self._sortby = None
33 self._return_fields = []
34 self._summarize_fields = []
35 self._highlight_fields = []
36 self._language = None
37 self._expander = None
38
39 def query_string(self):
40 """Return the query string of this query only."""
41 return self._query_string
42
43 def limit_ids(self, *ids):
44 """Limit the results to a specific set of pre-known document
45 ids of any length."""
46 self._ids = ids
47 return self
48
49 def return_fields(self, *fields):
50 """Add fields to return fields."""
51 self._return_fields += fields
52 return self
53
54 def return_field(self, field, as_field=None):
55 """Add field to return fields (Optional: add 'AS' name
56 to the field)."""
57 self._return_fields.append(field)
58 if as_field is not None:
59 self._return_fields += ("AS", as_field)
60 return self
61
62 def _mk_field_list(self, fields):
63 if not fields:
64 return []
65 return [fields] if isinstance(fields, str) else list(fields)
66
67 def summarize(self, fields=None, context_len=None, num_frags=None, sep=None):
68 """
69 Return an abridged format of the field, containing only the segments of
70 the field which contain the matching term(s).
71
72 If `fields` is specified, then only the mentioned fields are
73 summarized; otherwise all results are summarized.
74
75 Server side defaults are used for each option (except `fields`)
76 if not specified
77
78 - **fields** List of fields to summarize. All fields are summarized
79 if not specified
80 - **context_len** Amount of context to include with each fragment
81 - **num_frags** Number of fragments per document
82 - **sep** Separator string to separate fragments
83 """
84 args = ["SUMMARIZE"]
85 fields = self._mk_field_list(fields)
86 if fields:
87 args += ["FIELDS", str(len(fields))] + fields
88
89 if context_len is not None:
90 args += ["LEN", str(context_len)]
91 if num_frags is not None:
92 args += ["FRAGS", str(num_frags)]
93 if sep is not None:
94 args += ["SEPARATOR", sep]
95
96 self._summarize_fields = args
97 return self
98
99 def highlight(self, fields=None, tags=None):
100 """
101 Apply specified markup to matched term(s) within the returned field(s).
102
103 - **fields** If specified then only those mentioned fields are
104 highlighted, otherwise all fields are highlighted
105 - **tags** A list of two strings to surround the match.
106 """
107 args = ["HIGHLIGHT"]
108 fields = self._mk_field_list(fields)
109 if fields:
110 args += ["FIELDS", str(len(fields))] + fields
111 if tags:
112 args += ["TAGS"] + list(tags)
113
114 self._highlight_fields = args
115 return self
116
117 def language(self, language):
118 """
119 Analyze the query as being in the specified language.
120
121 :param language: The language (e.g. `chinese` or `english`)
122 """
123 self._language = language
124 return self
125
126 def slop(self, slop):
127 """Allow a maximum of N intervening non matched terms between
128 phrase terms (0 means exact phrase).
129 """
130 self._slop = slop
131 return self
132
133 def in_order(self):
134 """
135 Match only documents where the query terms appear in
136 the same order in the document.
137 i.e. for the query "hello world", we do not match "world hello"
138 """
139 self._in_order = True
140 return self
141
142 def scorer(self, scorer):
143 """
144 Use a different scoring function to evaluate document relevance.
145 Default is `TFIDF`.
146
147 :param scorer: The scoring function to use
148 (e.g. `TFIDF.DOCNORM` or `BM25`)
149 """
150 self._scorer = scorer
151 return self
152
153 def get_args(self):
154 """Format the redis arguments for this query and return them."""
155 args = [self._query_string]
156 args += self._get_args_tags()
157 args += self._summarize_fields + self._highlight_fields
158 args += ["LIMIT", self._offset, self._num]
159 return args
160
161 def _get_args_tags(self):
162 args = []
163 if self._no_content:
164 args.append("NOCONTENT")
165 if self._fields:
166 args.append("INFIELDS")
167 args.append(len(self._fields))
168 args += self._fields
169 if self._verbatim:
170 args.append("VERBATIM")
171 if self._no_stopwords:
172 args.append("NOSTOPWORDS")
173 if self._filters:
174 for flt in self._filters:
175 if not isinstance(flt, Filter):
176 raise AttributeError("Did not receive a Filter object.")
177 args += flt.args
178 if self._with_payloads:
179 args.append("WITHPAYLOADS")
180 if self._scorer:
181 args += ["SCORER", self._scorer]
182 if self._with_scores:
183 args.append("WITHSCORES")
184 if self._ids:
185 args.append("INKEYS")
186 args.append(len(self._ids))
187 args += self._ids
188 if self._slop >= 0:
189 args += ["SLOP", self._slop]
190 if self._in_order:
191 args.append("INORDER")
192 if self._return_fields:
193 args.append("RETURN")
194 args.append(len(self._return_fields))
195 args += self._return_fields
196 if self._sortby:
197 if not isinstance(self._sortby, SortbyField):
198 raise AttributeError("Did not receive a SortByField.")
199 args.append("SORTBY")
200 args += self._sortby.args
201 if self._language:
202 args += ["LANGUAGE", self._language]
203 if self._expander:
204 args += ["EXPANDER", self._expander]
205
206 return args
207
208 def paging(self, offset, num):
209 """
210 Set the paging for the query (defaults to 0..10).
211
212 - **offset**: Paging offset for the results. Defaults to 0
213 - **num**: How many results do we want
214 """
215 self._offset = offset
216 self._num = num
217 return self
218
219 def verbatim(self):
220 """Set the query to be verbatim, i.e. use no query expansion
221 or stemming.
222 """
223 self._verbatim = True
224 return self
225
226 def no_content(self):
227 """Set the query to only return ids and not the document content."""
228 self._no_content = True
229 return self
230
231 def no_stopwords(self):
232 """
233 Prevent the query from being filtered for stopwords.
234 Only useful in very big queries that you are certain contain
235 no stopwords.
236 """
237 self._no_stopwords = True
238 return self
239
240 def with_payloads(self):
241 """Ask the engine to return document payloads."""
242 self._with_payloads = True
243 return self
244
245 def with_scores(self):
246 """Ask the engine to return document search scores."""
247 self._with_scores = True
248 return self
249
250 def limit_fields(self, *fields):
251 """
252 Limit the search to specific TEXT fields only.
253
254 - **fields**: A list of strings, case sensitive field names
255 from the defined schema.
256 """
257 self._fields = fields
258 return self
259
260 def add_filter(self, flt):
261 """
262 Add a numeric or geo filter to the query.
263 **Currently only one of each filter is supported by the engine**
264
265 - **flt**: A NumericFilter or GeoFilter object, used on a
266 corresponding field
267 """
268
269 self._filters.append(flt)
270 return self
271
272 def sort_by(self, field, asc=True):
273 """
274 Add a sortby field to the query.
275
276 - **field** - the name of the field to sort by
277 - **asc** - when `True`, sorting will be done in asceding order
278 """
279 self._sortby = SortbyField(field, asc)
280 return self
281
282 def expander(self, expander):
283 """
284 Add a expander field to the query.
285
286 - **expander** - the name of the expander
287 """
288 self._expander = expander
289 return self
290
291
292 class Filter:
293 def __init__(self, keyword, field, *args):
294 self.args = [keyword, field] + list(args)
295
296
297 class NumericFilter(Filter):
298 INF = "+inf"
299 NEG_INF = "-inf"
300
301 def __init__(self, field, minval, maxval, minExclusive=False, maxExclusive=False):
302 args = [
303 minval if not minExclusive else f"({minval}",
304 maxval if not maxExclusive else f"({maxval}",
305 ]
306
307 Filter.__init__(self, "FILTER", field, *args)
308
309
310 class GeoFilter(Filter):
311 METERS = "m"
312 KILOMETERS = "km"
313 FEET = "ft"
314 MILES = "mi"
315
316 def __init__(self, field, lon, lat, radius, unit=KILOMETERS):
317 Filter.__init__(self, "GEOFILTER", field, lon, lat, radius, unit)
318
319
320 class SortbyField:
321 def __init__(self, field, asc=True):
322 self.args = [field, "ASC" if asc else "DESC"]
323
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redis/commands/search/query.py b/redis/commands/search/query.py
--- a/redis/commands/search/query.py
+++ b/redis/commands/search/query.py
@@ -35,6 +35,7 @@
self._highlight_fields = []
self._language = None
self._expander = None
+ self._dialect = None
def query_string(self):
"""Return the query string of this query only."""
@@ -202,6 +203,8 @@
args += ["LANGUAGE", self._language]
if self._expander:
args += ["EXPANDER", self._expander]
+ if self._dialect:
+ args += ["DIALECT", self._dialect]
return args
@@ -288,6 +291,15 @@
self._expander = expander
return self
+ def dialect(self, dialect: int) -> "Query":
+ """
+ Add a dialect field to the query.
+
+ - **dialect** - dialect version to execute the query under
+ """
+ self._dialect = dialect
+ return self
+
class Filter:
def __init__(self, keyword, field, *args):
|
{"golden_diff": "diff --git a/redis/commands/search/query.py b/redis/commands/search/query.py\n--- a/redis/commands/search/query.py\n+++ b/redis/commands/search/query.py\n@@ -35,6 +35,7 @@\n self._highlight_fields = []\n self._language = None\n self._expander = None\n+ self._dialect = None\n \n def query_string(self):\n \"\"\"Return the query string of this query only.\"\"\"\n@@ -202,6 +203,8 @@\n args += [\"LANGUAGE\", self._language]\n if self._expander:\n args += [\"EXPANDER\", self._expander]\n+ if self._dialect:\n+ args += [\"DIALECT\", self._dialect]\n \n return args\n \n@@ -288,6 +291,15 @@\n self._expander = expander\n return self\n \n+ def dialect(self, dialect: int) -> \"Query\":\n+ \"\"\"\n+ Add a dialect field to the query.\n+\n+ - **dialect** - dialect version to execute the query under\n+ \"\"\"\n+ self._dialect = dialect\n+ return self\n+\n \n class Filter:\n def __init__(self, keyword, field, *args):\n", "issue": "Add dialect support for RediSearch queries\n\r\n**Version**: What redis-py and what redis version is the issue happening on?\r\nRediSearch>=2.4.3 \r\n\r\nOn RediSearch 2.4.3 we will introduce a new `DIALECT` command argument for queries (`FT.SEARCH`, `FT.AGGREGATE`) this will allow to process queries either by the current dialect (RediSearch 2.2.x) or the new dialect introduced on RediSearch 2.4.2.\r\nNew dialect features:\r\n1. parameterized queries \r\n2. VSS\r\n3. Improved syntax behavior as introduced at https://github.com/RediSearch/RediSearch/releases/tag/v2.4.2\n", "before_files": [{"content": "class Query:\n \"\"\"\n Query is used to build complex queries that have more parameters than just\n the query string. The query string is set in the constructor, and other\n options have setter functions.\n\n The setter functions return the query object, so they can be chained,\n i.e. 
`Query(\"foo\").verbatim().filter(...)` etc.\n \"\"\"\n\n def __init__(self, query_string):\n \"\"\"\n Create a new query object.\n The query string is set in the constructor, and other options have\n setter functions.\n \"\"\"\n\n self._query_string = query_string\n self._offset = 0\n self._num = 10\n self._no_content = False\n self._no_stopwords = False\n self._fields = None\n self._verbatim = False\n self._with_payloads = False\n self._with_scores = False\n self._scorer = False\n self._filters = list()\n self._ids = None\n self._slop = -1\n self._in_order = False\n self._sortby = None\n self._return_fields = []\n self._summarize_fields = []\n self._highlight_fields = []\n self._language = None\n self._expander = None\n\n def query_string(self):\n \"\"\"Return the query string of this query only.\"\"\"\n return self._query_string\n\n def limit_ids(self, *ids):\n \"\"\"Limit the results to a specific set of pre-known document\n ids of any length.\"\"\"\n self._ids = ids\n return self\n\n def return_fields(self, *fields):\n \"\"\"Add fields to return fields.\"\"\"\n self._return_fields += fields\n return self\n\n def return_field(self, field, as_field=None):\n \"\"\"Add field to return fields (Optional: add 'AS' name\n to the field).\"\"\"\n self._return_fields.append(field)\n if as_field is not None:\n self._return_fields += (\"AS\", as_field)\n return self\n\n def _mk_field_list(self, fields):\n if not fields:\n return []\n return [fields] if isinstance(fields, str) else list(fields)\n\n def summarize(self, fields=None, context_len=None, num_frags=None, sep=None):\n \"\"\"\n Return an abridged format of the field, containing only the segments of\n the field which contain the matching term(s).\n\n If `fields` is specified, then only the mentioned fields are\n summarized; otherwise all results are summarized.\n\n Server side defaults are used for each option (except `fields`)\n if not specified\n\n - **fields** List of fields to summarize. All fields are summarized\n if not specified\n - **context_len** Amount of context to include with each fragment\n - **num_frags** Number of fragments per document\n - **sep** Separator string to separate fragments\n \"\"\"\n args = [\"SUMMARIZE\"]\n fields = self._mk_field_list(fields)\n if fields:\n args += [\"FIELDS\", str(len(fields))] + fields\n\n if context_len is not None:\n args += [\"LEN\", str(context_len)]\n if num_frags is not None:\n args += [\"FRAGS\", str(num_frags)]\n if sep is not None:\n args += [\"SEPARATOR\", sep]\n\n self._summarize_fields = args\n return self\n\n def highlight(self, fields=None, tags=None):\n \"\"\"\n Apply specified markup to matched term(s) within the returned field(s).\n\n - **fields** If specified then only those mentioned fields are\n highlighted, otherwise all fields are highlighted\n - **tags** A list of two strings to surround the match.\n \"\"\"\n args = [\"HIGHLIGHT\"]\n fields = self._mk_field_list(fields)\n if fields:\n args += [\"FIELDS\", str(len(fields))] + fields\n if tags:\n args += [\"TAGS\"] + list(tags)\n\n self._highlight_fields = args\n return self\n\n def language(self, language):\n \"\"\"\n Analyze the query as being in the specified language.\n\n :param language: The language (e.g. 
`chinese` or `english`)\n \"\"\"\n self._language = language\n return self\n\n def slop(self, slop):\n \"\"\"Allow a maximum of N intervening non matched terms between\n phrase terms (0 means exact phrase).\n \"\"\"\n self._slop = slop\n return self\n\n def in_order(self):\n \"\"\"\n Match only documents where the query terms appear in\n the same order in the document.\n i.e. for the query \"hello world\", we do not match \"world hello\"\n \"\"\"\n self._in_order = True\n return self\n\n def scorer(self, scorer):\n \"\"\"\n Use a different scoring function to evaluate document relevance.\n Default is `TFIDF`.\n\n :param scorer: The scoring function to use\n (e.g. `TFIDF.DOCNORM` or `BM25`)\n \"\"\"\n self._scorer = scorer\n return self\n\n def get_args(self):\n \"\"\"Format the redis arguments for this query and return them.\"\"\"\n args = [self._query_string]\n args += self._get_args_tags()\n args += self._summarize_fields + self._highlight_fields\n args += [\"LIMIT\", self._offset, self._num]\n return args\n\n def _get_args_tags(self):\n args = []\n if self._no_content:\n args.append(\"NOCONTENT\")\n if self._fields:\n args.append(\"INFIELDS\")\n args.append(len(self._fields))\n args += self._fields\n if self._verbatim:\n args.append(\"VERBATIM\")\n if self._no_stopwords:\n args.append(\"NOSTOPWORDS\")\n if self._filters:\n for flt in self._filters:\n if not isinstance(flt, Filter):\n raise AttributeError(\"Did not receive a Filter object.\")\n args += flt.args\n if self._with_payloads:\n args.append(\"WITHPAYLOADS\")\n if self._scorer:\n args += [\"SCORER\", self._scorer]\n if self._with_scores:\n args.append(\"WITHSCORES\")\n if self._ids:\n args.append(\"INKEYS\")\n args.append(len(self._ids))\n args += self._ids\n if self._slop >= 0:\n args += [\"SLOP\", self._slop]\n if self._in_order:\n args.append(\"INORDER\")\n if self._return_fields:\n args.append(\"RETURN\")\n args.append(len(self._return_fields))\n args += self._return_fields\n if self._sortby:\n if not isinstance(self._sortby, SortbyField):\n raise AttributeError(\"Did not receive a SortByField.\")\n args.append(\"SORTBY\")\n args += self._sortby.args\n if self._language:\n args += [\"LANGUAGE\", self._language]\n if self._expander:\n args += [\"EXPANDER\", self._expander]\n\n return args\n\n def paging(self, offset, num):\n \"\"\"\n Set the paging for the query (defaults to 0..10).\n\n - **offset**: Paging offset for the results. Defaults to 0\n - **num**: How many results do we want\n \"\"\"\n self._offset = offset\n self._num = num\n return self\n\n def verbatim(self):\n \"\"\"Set the query to be verbatim, i.e. 
use no query expansion\n or stemming.\n \"\"\"\n self._verbatim = True\n return self\n\n def no_content(self):\n \"\"\"Set the query to only return ids and not the document content.\"\"\"\n self._no_content = True\n return self\n\n def no_stopwords(self):\n \"\"\"\n Prevent the query from being filtered for stopwords.\n Only useful in very big queries that you are certain contain\n no stopwords.\n \"\"\"\n self._no_stopwords = True\n return self\n\n def with_payloads(self):\n \"\"\"Ask the engine to return document payloads.\"\"\"\n self._with_payloads = True\n return self\n\n def with_scores(self):\n \"\"\"Ask the engine to return document search scores.\"\"\"\n self._with_scores = True\n return self\n\n def limit_fields(self, *fields):\n \"\"\"\n Limit the search to specific TEXT fields only.\n\n - **fields**: A list of strings, case sensitive field names\n from the defined schema.\n \"\"\"\n self._fields = fields\n return self\n\n def add_filter(self, flt):\n \"\"\"\n Add a numeric or geo filter to the query.\n **Currently only one of each filter is supported by the engine**\n\n - **flt**: A NumericFilter or GeoFilter object, used on a\n corresponding field\n \"\"\"\n\n self._filters.append(flt)\n return self\n\n def sort_by(self, field, asc=True):\n \"\"\"\n Add a sortby field to the query.\n\n - **field** - the name of the field to sort by\n - **asc** - when `True`, sorting will be done in asceding order\n \"\"\"\n self._sortby = SortbyField(field, asc)\n return self\n\n def expander(self, expander):\n \"\"\"\n Add a expander field to the query.\n\n - **expander** - the name of the expander\n \"\"\"\n self._expander = expander\n return self\n\n\nclass Filter:\n def __init__(self, keyword, field, *args):\n self.args = [keyword, field] + list(args)\n\n\nclass NumericFilter(Filter):\n INF = \"+inf\"\n NEG_INF = \"-inf\"\n\n def __init__(self, field, minval, maxval, minExclusive=False, maxExclusive=False):\n args = [\n minval if not minExclusive else f\"({minval}\",\n maxval if not maxExclusive else f\"({maxval}\",\n ]\n\n Filter.__init__(self, \"FILTER\", field, *args)\n\n\nclass GeoFilter(Filter):\n METERS = \"m\"\n KILOMETERS = \"km\"\n FEET = \"ft\"\n MILES = \"mi\"\n\n def __init__(self, field, lon, lat, radius, unit=KILOMETERS):\n Filter.__init__(self, \"GEOFILTER\", field, lon, lat, radius, unit)\n\n\nclass SortbyField:\n def __init__(self, field, asc=True):\n self.args = [field, \"ASC\" if asc else \"DESC\"]\n", "path": "redis/commands/search/query.py"}], "after_files": [{"content": "class Query:\n \"\"\"\n Query is used to build complex queries that have more parameters than just\n the query string. The query string is set in the constructor, and other\n options have setter functions.\n\n The setter functions return the query object, so they can be chained,\n i.e. 
`Query(\"foo\").verbatim().filter(...)` etc.\n \"\"\"\n\n def __init__(self, query_string):\n \"\"\"\n Create a new query object.\n The query string is set in the constructor, and other options have\n setter functions.\n \"\"\"\n\n self._query_string = query_string\n self._offset = 0\n self._num = 10\n self._no_content = False\n self._no_stopwords = False\n self._fields = None\n self._verbatim = False\n self._with_payloads = False\n self._with_scores = False\n self._scorer = False\n self._filters = list()\n self._ids = None\n self._slop = -1\n self._in_order = False\n self._sortby = None\n self._return_fields = []\n self._summarize_fields = []\n self._highlight_fields = []\n self._language = None\n self._expander = None\n self._dialect = None\n\n def query_string(self):\n \"\"\"Return the query string of this query only.\"\"\"\n return self._query_string\n\n def limit_ids(self, *ids):\n \"\"\"Limit the results to a specific set of pre-known document\n ids of any length.\"\"\"\n self._ids = ids\n return self\n\n def return_fields(self, *fields):\n \"\"\"Add fields to return fields.\"\"\"\n self._return_fields += fields\n return self\n\n def return_field(self, field, as_field=None):\n \"\"\"Add field to return fields (Optional: add 'AS' name\n to the field).\"\"\"\n self._return_fields.append(field)\n if as_field is not None:\n self._return_fields += (\"AS\", as_field)\n return self\n\n def _mk_field_list(self, fields):\n if not fields:\n return []\n return [fields] if isinstance(fields, str) else list(fields)\n\n def summarize(self, fields=None, context_len=None, num_frags=None, sep=None):\n \"\"\"\n Return an abridged format of the field, containing only the segments of\n the field which contain the matching term(s).\n\n If `fields` is specified, then only the mentioned fields are\n summarized; otherwise all results are summarized.\n\n Server side defaults are used for each option (except `fields`)\n if not specified\n\n - **fields** List of fields to summarize. All fields are summarized\n if not specified\n - **context_len** Amount of context to include with each fragment\n - **num_frags** Number of fragments per document\n - **sep** Separator string to separate fragments\n \"\"\"\n args = [\"SUMMARIZE\"]\n fields = self._mk_field_list(fields)\n if fields:\n args += [\"FIELDS\", str(len(fields))] + fields\n\n if context_len is not None:\n args += [\"LEN\", str(context_len)]\n if num_frags is not None:\n args += [\"FRAGS\", str(num_frags)]\n if sep is not None:\n args += [\"SEPARATOR\", sep]\n\n self._summarize_fields = args\n return self\n\n def highlight(self, fields=None, tags=None):\n \"\"\"\n Apply specified markup to matched term(s) within the returned field(s).\n\n - **fields** If specified then only those mentioned fields are\n highlighted, otherwise all fields are highlighted\n - **tags** A list of two strings to surround the match.\n \"\"\"\n args = [\"HIGHLIGHT\"]\n fields = self._mk_field_list(fields)\n if fields:\n args += [\"FIELDS\", str(len(fields))] + fields\n if tags:\n args += [\"TAGS\"] + list(tags)\n\n self._highlight_fields = args\n return self\n\n def language(self, language):\n \"\"\"\n Analyze the query as being in the specified language.\n\n :param language: The language (e.g. 
`chinese` or `english`)\n \"\"\"\n self._language = language\n return self\n\n def slop(self, slop):\n \"\"\"Allow a maximum of N intervening non matched terms between\n phrase terms (0 means exact phrase).\n \"\"\"\n self._slop = slop\n return self\n\n def in_order(self):\n \"\"\"\n Match only documents where the query terms appear in\n the same order in the document.\n i.e. for the query \"hello world\", we do not match \"world hello\"\n \"\"\"\n self._in_order = True\n return self\n\n def scorer(self, scorer):\n \"\"\"\n Use a different scoring function to evaluate document relevance.\n Default is `TFIDF`.\n\n :param scorer: The scoring function to use\n (e.g. `TFIDF.DOCNORM` or `BM25`)\n \"\"\"\n self._scorer = scorer\n return self\n\n def get_args(self):\n \"\"\"Format the redis arguments for this query and return them.\"\"\"\n args = [self._query_string]\n args += self._get_args_tags()\n args += self._summarize_fields + self._highlight_fields\n args += [\"LIMIT\", self._offset, self._num]\n return args\n\n def _get_args_tags(self):\n args = []\n if self._no_content:\n args.append(\"NOCONTENT\")\n if self._fields:\n args.append(\"INFIELDS\")\n args.append(len(self._fields))\n args += self._fields\n if self._verbatim:\n args.append(\"VERBATIM\")\n if self._no_stopwords:\n args.append(\"NOSTOPWORDS\")\n if self._filters:\n for flt in self._filters:\n if not isinstance(flt, Filter):\n raise AttributeError(\"Did not receive a Filter object.\")\n args += flt.args\n if self._with_payloads:\n args.append(\"WITHPAYLOADS\")\n if self._scorer:\n args += [\"SCORER\", self._scorer]\n if self._with_scores:\n args.append(\"WITHSCORES\")\n if self._ids:\n args.append(\"INKEYS\")\n args.append(len(self._ids))\n args += self._ids\n if self._slop >= 0:\n args += [\"SLOP\", self._slop]\n if self._in_order:\n args.append(\"INORDER\")\n if self._return_fields:\n args.append(\"RETURN\")\n args.append(len(self._return_fields))\n args += self._return_fields\n if self._sortby:\n if not isinstance(self._sortby, SortbyField):\n raise AttributeError(\"Did not receive a SortByField.\")\n args.append(\"SORTBY\")\n args += self._sortby.args\n if self._language:\n args += [\"LANGUAGE\", self._language]\n if self._expander:\n args += [\"EXPANDER\", self._expander]\n if self._dialect:\n args += [\"DIALECT\", self._dialect]\n\n return args\n\n def paging(self, offset, num):\n \"\"\"\n Set the paging for the query (defaults to 0..10).\n\n - **offset**: Paging offset for the results. Defaults to 0\n - **num**: How many results do we want\n \"\"\"\n self._offset = offset\n self._num = num\n return self\n\n def verbatim(self):\n \"\"\"Set the query to be verbatim, i.e. 
use no query expansion\n or stemming.\n \"\"\"\n self._verbatim = True\n return self\n\n def no_content(self):\n \"\"\"Set the query to only return ids and not the document content.\"\"\"\n self._no_content = True\n return self\n\n def no_stopwords(self):\n \"\"\"\n Prevent the query from being filtered for stopwords.\n Only useful in very big queries that you are certain contain\n no stopwords.\n \"\"\"\n self._no_stopwords = True\n return self\n\n def with_payloads(self):\n \"\"\"Ask the engine to return document payloads.\"\"\"\n self._with_payloads = True\n return self\n\n def with_scores(self):\n \"\"\"Ask the engine to return document search scores.\"\"\"\n self._with_scores = True\n return self\n\n def limit_fields(self, *fields):\n \"\"\"\n Limit the search to specific TEXT fields only.\n\n - **fields**: A list of strings, case sensitive field names\n from the defined schema.\n \"\"\"\n self._fields = fields\n return self\n\n def add_filter(self, flt):\n \"\"\"\n Add a numeric or geo filter to the query.\n **Currently only one of each filter is supported by the engine**\n\n - **flt**: A NumericFilter or GeoFilter object, used on a\n corresponding field\n \"\"\"\n\n self._filters.append(flt)\n return self\n\n def sort_by(self, field, asc=True):\n \"\"\"\n Add a sortby field to the query.\n\n - **field** - the name of the field to sort by\n - **asc** - when `True`, sorting will be done in asceding order\n \"\"\"\n self._sortby = SortbyField(field, asc)\n return self\n\n def expander(self, expander):\n \"\"\"\n Add a expander field to the query.\n\n - **expander** - the name of the expander\n \"\"\"\n self._expander = expander\n return self\n\n def dialect(self, dialect: int) -> \"Query\":\n \"\"\"\n Add a dialect field to the query.\n\n - **dialect** - dialect version to execute the query under\n \"\"\"\n self._dialect = dialect\n return self\n\n\nclass Filter:\n def __init__(self, keyword, field, *args):\n self.args = [keyword, field] + list(args)\n\n\nclass NumericFilter(Filter):\n INF = \"+inf\"\n NEG_INF = \"-inf\"\n\n def __init__(self, field, minval, maxval, minExclusive=False, maxExclusive=False):\n args = [\n minval if not minExclusive else f\"({minval}\",\n maxval if not maxExclusive else f\"({maxval}\",\n ]\n\n Filter.__init__(self, \"FILTER\", field, *args)\n\n\nclass GeoFilter(Filter):\n METERS = \"m\"\n KILOMETERS = \"km\"\n FEET = \"ft\"\n MILES = \"mi\"\n\n def __init__(self, field, lon, lat, radius, unit=KILOMETERS):\n Filter.__init__(self, \"GEOFILTER\", field, lon, lat, radius, unit)\n\n\nclass SortbyField:\n def __init__(self, field, asc=True):\n self.args = [field, \"ASC\" if asc else \"DESC\"]\n", "path": "redis/commands/search/query.py"}]}
| 3,592 | 279 |
gh_patches_debug_23645
|
rasdani/github-patches
|
git_diff
|
talonhub__community-659
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"million", "billion", "thousand" etc should not be valid numbers
They should require a prefix such as "one", e.g. "one million"
--- END ISSUE ---
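A small sketch of the behaviour the issue asks for: bare scale words may not start a spoken number, but remain valid after a multiplier. The word lists mirror those defined in `code/numbers.py` below; the real fix lives in the capture rules, so this is only an illustration of which words may lead a number phrase.

```python
# Minimal sketch, assuming word lists like the ones in code/numbers.py.
digits = "zero one two three four five six seven eight nine".split()
tens = "twenty thirty forty fifty sixty seventy eighty ninety".split()
scales = "hundred thousand million billion".split()

scale_words = set(scales)
number_words = set(digits) | set(tens) | scale_words

# Words allowed to *start* a number: everything except bare scale words.
leading_words = number_words - scale_words

assert "million" not in leading_words   # "million" alone is rejected
assert "one" in leading_words           # "one million" still parses
```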
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `code/numbers.py`
Content:
```
1 from talon import Context, Module, actions
2 from typing import List, Optional, Union, Iterator
3
4 mod = Module()
5 ctx = Context()
6
7 digits = "zero one two three four five six seven eight nine".split()
8 teens = "ten eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen".split()
9 tens = "twenty thirty forty fifty sixty seventy eighty ninety".split()
10 scales = "hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion".split()
11
12 digits_map = {n: i for i, n in enumerate(digits)}
13 digits_map["oh"] = 0
14 teens_map = {n: i + 10 for i, n in enumerate(teens)}
15 tens_map = {n: 10 * (i + 2) for i, n in enumerate(tens)}
16 scales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}
17 scales_map["hundred"] = 100
18
19 numbers_map = digits_map.copy()
20 numbers_map.update(teens_map)
21 numbers_map.update(tens_map)
22 numbers_map.update(scales_map)
23
24 def parse_number(l: List[str]) -> str:
25 """Parses a list of words into a number/digit string."""
26 l = list(scan_small_numbers(l))
27 for scale in scales:
28 l = parse_scale(scale, l)
29 return "".join(str(n) for n in l)
30
31 def scan_small_numbers(l: List[str]) -> Iterator[Union[str,int]]:
32 """
33 Takes a list of number words, yields a generator of mixed numbers & strings.
34 Translates small number terms (<100) into corresponding numbers.
35 Drops all occurrences of "and".
36 Smashes digits onto tens words, eg. ["twenty", "one"] -> [21].
37 But note that "ten" and "zero" are excluded, ie:
38 ["ten", "three"] -> [10, 3]
39 ["fifty", "zero"] -> [50, 0]
40 Does nothing to scale words ("hundred", "thousand", "million", etc).
41 """
42 # reversed so that repeated pop() visits in left-to-right order
43 l = [x for x in reversed(l) if x != "and"]
44 while l:
45 n = l.pop()
46 # fuse tens onto digits, eg. "twenty", "one" -> 21
47 if n in tens_map and l and digits_map.get(l[-1], 0) != 0:
48 d = l.pop()
49 yield numbers_map[n] + numbers_map[d]
50 # turn small number terms into corresponding numbers
51 elif n not in scales_map:
52 yield numbers_map[n]
53 else:
54 yield n
55
56 def parse_scale(scale: str, l: List[Union[str,int]]) -> List[Union[str,int]]:
57 """Parses a list of mixed numbers & strings for occurrences of the following
58 pattern:
59
60 <multiplier> <scale> <remainder>
61
62 where <scale> is a scale word like "hundred", "thousand", "million", etc and
63 multiplier and remainder are numbers or strings of numbers of the
64 appropriate size. For example:
65
66 parse_scale("hundred", [1, "hundred", 2]) -> [102]
67 parse_scale("thousand", [12, "thousand", 3, 45]) -> [12345]
68
69 We assume that all scales of lower magnitude have already been parsed; don't
70 call parse_scale("thousand") until you've called parse_scale("hundred").
71 """
72 scale_value = scales_map[scale]
73 scale_digits = len(str(scale_value))
74
75 # Split the list on the desired scale word, then parse from left to right.
76 left, *splits = split_list(scale, l)
77 for right in splits:
78 # (1) Figure out the multiplier by looking to the left of the scale
79 # word. We ignore non-integers because they are scale words that we
80 # haven't processed yet; this strategy means that "thousand hundred"
81 # gets parsed as 1,100 instead of 100,000, but "hundred thousand" is
82 # parsed correctly as 100,000.
83 before = 1 # default multiplier
84 if left and isinstance(left[-1], int) and left[-1] != 0:
85 before = left.pop()
86
87 # (2) Absorb numbers to the right, eg. in [1, "thousand", 1, 26], "1
88 # thousand" absorbs ["1", "26"] to make 1,126. We pull numbers off
89 # `right` until we fill up the desired number of digits.
90 after = ""
91 while right and isinstance(right[0], int):
92 next = after + str(right[0])
93 if len(next) >= scale_digits: break
94 after = next
95 right.pop(0)
96 after = int(after) if after else 0
97
98 # (3) Push the parsed number into place, append whatever was left
99 # unparsed, and continue.
100 left.append(before * scale_value + after)
101 left.extend(right)
102
103 return left
104
105 def split_list(value, l: list) -> Iterator:
106 """Splits a list by occurrences of a given value."""
107 start = 0
108 while True:
109 try: i = l.index(value, start)
110 except ValueError: break
111 yield l[start:i]
112 start = i+1
113 yield l[start:]
114
115
116 # # ---------- TESTS (uncomment to run) ----------
117 # def test_number(expected, string):
118 # print('testing:', string)
119 # l = list(scan_small_numbers(string.split()))
120 # print(" scan --->", l)
121 # for scale in scales:
122 # old = l
123 # l = parse_scale(scale, l)
124 # if scale in old: print(" parse -->", l)
125 # else: assert old == l, "parse_scale should do nothing if the scale does not occur in the list"
126 # result = "".join(str(n) for n in l)
127 # assert result == parse_number(string.split())
128 # assert str(expected) == result, f"parsing {string!r}, expected {expected}, got {result}"
129
130 # test_number(105000, "one hundred and five thousand")
131 # test_number(1000000, "one thousand thousand")
132 # test_number(1501000, "one million five hundred one thousand")
133 # test_number(1501106, "one million five hundred and one thousand one hundred and six")
134 # test_number(123, "one two three")
135 # test_number(123, "one twenty three")
136 # test_number(104, "ten four") # borderline, but valid in some dialects
137 # test_number(1066, "ten sixty six") # a common way of saying years
138 # test_number(1906, "nineteen oh six") # year
139 # test_number(2001, "twenty oh one") # year
140 # test_number(2020, "twenty twenty")
141 # test_number(1001, "one thousand one")
142 # test_number(1010, "one thousand ten")
143 # test_number(123456, "one hundred and twenty three thousand and four hundred and fifty six")
144 # test_number(123456, "one twenty three thousand four fifty six")
145
146 # ## failing (and somewhat debatable) tests from old numbers.py
147 # #test_number(10000011, "one million one one")
148 # #test_number(100001010, "one million ten ten")
149 # #test_number(1050006000, "one hundred thousand and five thousand and six thousand")
150
151
152 # ---------- CAPTURES ----------
153 alt_digits = "(" + ("|".join(digits_map.keys())) + ")"
154 alt_teens = "(" + ("|".join(teens_map.keys())) + ")"
155 alt_tens = "(" + ("|".join(tens_map.keys())) + ")"
156 alt_scales = "(" + ("|".join(scales_map.keys())) + ")"
157 number_word = "(" + "|".join(numbers_map.keys()) + ")"
158
159 # TODO: allow things like "double eight" for 88
160 @ctx.capture("digit_string", rule=f"({alt_digits} | {alt_teens} | {alt_tens})+")
161 def digit_string(m) -> str: return parse_number(list(m))
162
163 @ctx.capture("digits", rule="<digit_string>")
164 def digits(m) -> int:
165 """Parses a phrase representing a digit sequence, returning it as an integer."""
166 return int(m.digit_string)
167
168 @mod.capture(rule=f"{number_word}+ (and {number_word}+)*")
169 def number_string(m) -> str:
170 """Parses a number phrase, returning that number as a string."""
171 return parse_number(list(m))
172
173 @ctx.capture("number", rule="<user.number_string>")
174 def number(m) -> int:
175 """Parses a number phrase, returning it as an integer."""
176 return int(m.number_string)
177
178 @ctx.capture("number_signed", rule=f"[negative|minus] <number>")
179 def number_signed(m):
180 number = m[-1]
181 return -number if (m[0] in ["negative", "minus"]) else number
182
183 @ctx.capture(
184 "number_small", rule=f"({alt_digits} | {alt_teens} | {alt_tens} [{alt_digits}])"
185 )
186 def number_small(m): return int(parse_number(list(m)))
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/code/numbers.py b/code/numbers.py
--- a/code/numbers.py
+++ b/code/numbers.py
@@ -155,6 +155,10 @@
alt_tens = "(" + ("|".join(tens_map.keys())) + ")"
alt_scales = "(" + ("|".join(scales_map.keys())) + ")"
number_word = "(" + "|".join(numbers_map.keys()) + ")"
+# don't allow numbers to start with scale words like "hundred", "thousand", etc
+leading_words = numbers_map.keys() - scales_map.keys()
+leading_words -= {'oh', 'o'} # comment out to enable bare/initial "oh"
+number_word_leading = f"({'|'.join(leading_words)})"
# TODO: allow things like "double eight" for 88
@ctx.capture("digit_string", rule=f"({alt_digits} | {alt_teens} | {alt_tens})+")
@@ -165,7 +169,7 @@
"""Parses a phrase representing a digit sequence, returning it as an integer."""
return int(m.digit_string)
[email protected](rule=f"{number_word}+ (and {number_word}+)*")
[email protected](rule=f"{number_word_leading} ([and] {number_word})*")
def number_string(m) -> str:
"""Parses a number phrase, returning that number as a string."""
return parse_number(list(m))
|
{"golden_diff": "diff --git a/code/numbers.py b/code/numbers.py\n--- a/code/numbers.py\n+++ b/code/numbers.py\n@@ -155,6 +155,10 @@\n alt_tens = \"(\" + (\"|\".join(tens_map.keys())) + \")\"\n alt_scales = \"(\" + (\"|\".join(scales_map.keys())) + \")\"\n number_word = \"(\" + \"|\".join(numbers_map.keys()) + \")\"\n+# don't allow numbers to start with scale words like \"hundred\", \"thousand\", etc\n+leading_words = numbers_map.keys() - scales_map.keys()\n+leading_words -= {'oh', 'o'} # comment out to enable bare/initial \"oh\"\n+number_word_leading = f\"({'|'.join(leading_words)})\"\n \n # TODO: allow things like \"double eight\" for 88\n @ctx.capture(\"digit_string\", rule=f\"({alt_digits} | {alt_teens} | {alt_tens})+\")\n@@ -165,7 +169,7 @@\n \"\"\"Parses a phrase representing a digit sequence, returning it as an integer.\"\"\"\n return int(m.digit_string)\n \[email protected](rule=f\"{number_word}+ (and {number_word}+)*\")\[email protected](rule=f\"{number_word_leading} ([and] {number_word})*\")\n def number_string(m) -> str:\n \"\"\"Parses a number phrase, returning that number as a string.\"\"\"\n return parse_number(list(m))\n", "issue": "\"million\", \"billion\", \"thousand\" etc should not be valid numbers\nThey should require a prefix such as \"one\", eg \"one million\"\n", "before_files": [{"content": "from talon import Context, Module, actions\nfrom typing import List, Optional, Union, Iterator\n\nmod = Module()\nctx = Context()\n\ndigits = \"zero one two three four five six seven eight nine\".split()\nteens = \"ten eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen\".split()\ntens = \"twenty thirty forty fifty sixty seventy eighty ninety\".split()\nscales = \"hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion\".split()\n\ndigits_map = {n: i for i, n in enumerate(digits)}\ndigits_map[\"oh\"] = 0\nteens_map = {n: i + 10 for i, n in enumerate(teens)}\ntens_map = {n: 10 * (i + 2) for i, n in enumerate(tens)}\nscales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}\nscales_map[\"hundred\"] = 100\n\nnumbers_map = digits_map.copy()\nnumbers_map.update(teens_map)\nnumbers_map.update(tens_map)\nnumbers_map.update(scales_map)\n\ndef parse_number(l: List[str]) -> str:\n \"\"\"Parses a list of words into a number/digit string.\"\"\"\n l = list(scan_small_numbers(l))\n for scale in scales:\n l = parse_scale(scale, l)\n return \"\".join(str(n) for n in l)\n\ndef scan_small_numbers(l: List[str]) -> Iterator[Union[str,int]]:\n \"\"\"\n Takes a list of number words, yields a generator of mixed numbers & strings.\n Translates small number terms (<100) into corresponding numbers.\n Drops all occurrences of \"and\".\n Smashes digits onto tens words, eg. [\"twenty\", \"one\"] -> [21].\n But note that \"ten\" and \"zero\" are excluded, ie:\n [\"ten\", \"three\"] -> [10, 3]\n [\"fifty\", \"zero\"] -> [50, 0]\n Does nothing to scale words (\"hundred\", \"thousand\", \"million\", etc).\n \"\"\"\n # reversed so that repeated pop() visits in left-to-right order\n l = [x for x in reversed(l) if x != \"and\"]\n while l:\n n = l.pop()\n # fuse tens onto digits, eg. 
\"twenty\", \"one\" -> 21\n if n in tens_map and l and digits_map.get(l[-1], 0) != 0:\n d = l.pop()\n yield numbers_map[n] + numbers_map[d]\n # turn small number terms into corresponding numbers\n elif n not in scales_map:\n yield numbers_map[n]\n else:\n yield n\n\ndef parse_scale(scale: str, l: List[Union[str,int]]) -> List[Union[str,int]]:\n \"\"\"Parses a list of mixed numbers & strings for occurrences of the following\n pattern:\n\n <multiplier> <scale> <remainder>\n\n where <scale> is a scale word like \"hundred\", \"thousand\", \"million\", etc and\n multiplier and remainder are numbers or strings of numbers of the\n appropriate size. For example:\n\n parse_scale(\"hundred\", [1, \"hundred\", 2]) -> [102]\n parse_scale(\"thousand\", [12, \"thousand\", 3, 45]) -> [12345]\n\n We assume that all scales of lower magnitude have already been parsed; don't\n call parse_scale(\"thousand\") until you've called parse_scale(\"hundred\").\n \"\"\"\n scale_value = scales_map[scale]\n scale_digits = len(str(scale_value))\n\n # Split the list on the desired scale word, then parse from left to right.\n left, *splits = split_list(scale, l)\n for right in splits:\n # (1) Figure out the multiplier by looking to the left of the scale\n # word. We ignore non-integers because they are scale words that we\n # haven't processed yet; this strategy means that \"thousand hundred\"\n # gets parsed as 1,100 instead of 100,000, but \"hundred thousand\" is\n # parsed correctly as 100,000.\n before = 1 # default multiplier\n if left and isinstance(left[-1], int) and left[-1] != 0:\n before = left.pop()\n\n # (2) Absorb numbers to the right, eg. in [1, \"thousand\", 1, 26], \"1\n # thousand\" absorbs [\"1\", \"26\"] to make 1,126. We pull numbers off\n # `right` until we fill up the desired number of digits.\n after = \"\"\n while right and isinstance(right[0], int):\n next = after + str(right[0])\n if len(next) >= scale_digits: break\n after = next\n right.pop(0)\n after = int(after) if after else 0\n\n # (3) Push the parsed number into place, append whatever was left\n # unparsed, and continue.\n left.append(before * scale_value + after)\n left.extend(right)\n\n return left\n\ndef split_list(value, l: list) -> Iterator:\n \"\"\"Splits a list by occurrences of a given value.\"\"\"\n start = 0\n while True:\n try: i = l.index(value, start)\n except ValueError: break\n yield l[start:i]\n start = i+1\n yield l[start:]\n\n\f\n# # ---------- TESTS (uncomment to run) ----------\n# def test_number(expected, string):\n# print('testing:', string)\n# l = list(scan_small_numbers(string.split()))\n# print(\" scan --->\", l)\n# for scale in scales:\n# old = l\n# l = parse_scale(scale, l)\n# if scale in old: print(\" parse -->\", l)\n# else: assert old == l, \"parse_scale should do nothing if the scale does not occur in the list\"\n# result = \"\".join(str(n) for n in l)\n# assert result == parse_number(string.split())\n# assert str(expected) == result, f\"parsing {string!r}, expected {expected}, got {result}\"\n\n# test_number(105000, \"one hundred and five thousand\")\n# test_number(1000000, \"one thousand thousand\")\n# test_number(1501000, \"one million five hundred one thousand\")\n# test_number(1501106, \"one million five hundred and one thousand one hundred and six\")\n# test_number(123, \"one two three\")\n# test_number(123, \"one twenty three\")\n# test_number(104, \"ten four\") # borderline, but valid in some dialects\n# test_number(1066, \"ten sixty six\") # a common way of saying years\n# test_number(1906, 
\"nineteen oh six\") # year\n# test_number(2001, \"twenty oh one\") # year\n# test_number(2020, \"twenty twenty\")\n# test_number(1001, \"one thousand one\")\n# test_number(1010, \"one thousand ten\")\n# test_number(123456, \"one hundred and twenty three thousand and four hundred and fifty six\")\n# test_number(123456, \"one twenty three thousand four fifty six\")\n\n# ## failing (and somewhat debatable) tests from old numbers.py\n# #test_number(10000011, \"one million one one\")\n# #test_number(100001010, \"one million ten ten\")\n# #test_number(1050006000, \"one hundred thousand and five thousand and six thousand\")\n\n\f\n# ---------- CAPTURES ----------\nalt_digits = \"(\" + (\"|\".join(digits_map.keys())) + \")\"\nalt_teens = \"(\" + (\"|\".join(teens_map.keys())) + \")\"\nalt_tens = \"(\" + (\"|\".join(tens_map.keys())) + \")\"\nalt_scales = \"(\" + (\"|\".join(scales_map.keys())) + \")\"\nnumber_word = \"(\" + \"|\".join(numbers_map.keys()) + \")\"\n\n# TODO: allow things like \"double eight\" for 88\[email protected](\"digit_string\", rule=f\"({alt_digits} | {alt_teens} | {alt_tens})+\")\ndef digit_string(m) -> str: return parse_number(list(m))\n\[email protected](\"digits\", rule=\"<digit_string>\")\ndef digits(m) -> int:\n \"\"\"Parses a phrase representing a digit sequence, returning it as an integer.\"\"\"\n return int(m.digit_string)\n\[email protected](rule=f\"{number_word}+ (and {number_word}+)*\")\ndef number_string(m) -> str:\n \"\"\"Parses a number phrase, returning that number as a string.\"\"\"\n return parse_number(list(m))\n\[email protected](\"number\", rule=\"<user.number_string>\")\ndef number(m) -> int:\n \"\"\"Parses a number phrase, returning it as an integer.\"\"\"\n return int(m.number_string)\n\[email protected](\"number_signed\", rule=f\"[negative|minus] <number>\")\ndef number_signed(m):\n number = m[-1]\n return -number if (m[0] in [\"negative\", \"minus\"]) else number\n\[email protected](\n \"number_small\", rule=f\"({alt_digits} | {alt_teens} | {alt_tens} [{alt_digits}])\"\n)\ndef number_small(m): return int(parse_number(list(m)))\n", "path": "code/numbers.py"}], "after_files": [{"content": "from talon import Context, Module, actions\nfrom typing import List, Optional, Union, Iterator\n\nmod = Module()\nctx = Context()\n\ndigits = \"zero one two three four five six seven eight nine\".split()\nteens = \"ten eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen\".split()\ntens = \"twenty thirty forty fifty sixty seventy eighty ninety\".split()\nscales = \"hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion\".split()\n\ndigits_map = {n: i for i, n in enumerate(digits)}\ndigits_map[\"oh\"] = 0\nteens_map = {n: i + 10 for i, n in enumerate(teens)}\ntens_map = {n: 10 * (i + 2) for i, n in enumerate(tens)}\nscales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}\nscales_map[\"hundred\"] = 100\n\nnumbers_map = digits_map.copy()\nnumbers_map.update(teens_map)\nnumbers_map.update(tens_map)\nnumbers_map.update(scales_map)\n\ndef parse_number(l: List[str]) -> str:\n \"\"\"Parses a list of words into a number/digit string.\"\"\"\n l = list(scan_small_numbers(l))\n for scale in scales:\n l = parse_scale(scale, l)\n return \"\".join(str(n) for n in l)\n\ndef scan_small_numbers(l: List[str]) -> Iterator[Union[str,int]]:\n \"\"\"\n Takes a list of number words, yields a generator of mixed numbers & strings.\n Translates small number terms (<100) into corresponding 
numbers.\n Drops all occurrences of \"and\".\n Smashes digits onto tens words, eg. [\"twenty\", \"one\"] -> [21].\n But note that \"ten\" and \"zero\" are excluded, ie:\n [\"ten\", \"three\"] -> [10, 3]\n [\"fifty\", \"zero\"] -> [50, 0]\n Does nothing to scale words (\"hundred\", \"thousand\", \"million\", etc).\n \"\"\"\n # reversed so that repeated pop() visits in left-to-right order\n l = [x for x in reversed(l) if x != \"and\"]\n while l:\n n = l.pop()\n # fuse tens onto digits, eg. \"twenty\", \"one\" -> 21\n if n in tens_map and l and digits_map.get(l[-1], 0) != 0:\n d = l.pop()\n yield numbers_map[n] + numbers_map[d]\n # turn small number terms into corresponding numbers\n elif n not in scales_map:\n yield numbers_map[n]\n else:\n yield n\n\ndef parse_scale(scale: str, l: List[Union[str,int]]) -> List[Union[str,int]]:\n \"\"\"Parses a list of mixed numbers & strings for occurrences of the following\n pattern:\n\n <multiplier> <scale> <remainder>\n\n where <scale> is a scale word like \"hundred\", \"thousand\", \"million\", etc and\n multiplier and remainder are numbers or strings of numbers of the\n appropriate size. For example:\n\n parse_scale(\"hundred\", [1, \"hundred\", 2]) -> [102]\n parse_scale(\"thousand\", [12, \"thousand\", 3, 45]) -> [12345]\n\n We assume that all scales of lower magnitude have already been parsed; don't\n call parse_scale(\"thousand\") until you've called parse_scale(\"hundred\").\n \"\"\"\n scale_value = scales_map[scale]\n scale_digits = len(str(scale_value))\n\n # Split the list on the desired scale word, then parse from left to right.\n left, *splits = split_list(scale, l)\n for right in splits:\n # (1) Figure out the multiplier by looking to the left of the scale\n # word. We ignore non-integers because they are scale words that we\n # haven't processed yet; this strategy means that \"thousand hundred\"\n # gets parsed as 1,100 instead of 100,000, but \"hundred thousand\" is\n # parsed correctly as 100,000.\n before = 1 # default multiplier\n if left and isinstance(left[-1], int) and left[-1] != 0:\n before = left.pop()\n\n # (2) Absorb numbers to the right, eg. in [1, \"thousand\", 1, 26], \"1\n # thousand\" absorbs [\"1\", \"26\"] to make 1,126. 
We pull numbers off\n # `right` until we fill up the desired number of digits.\n after = \"\"\n while right and isinstance(right[0], int):\n next = after + str(right[0])\n if len(next) >= scale_digits: break\n after = next\n right.pop(0)\n after = int(after) if after else 0\n\n # (3) Push the parsed number into place, append whatever was left\n # unparsed, and continue.\n left.append(before * scale_value + after)\n left.extend(right)\n\n return left\n\ndef split_list(value, l: list) -> Iterator:\n \"\"\"Splits a list by occurrences of a given value.\"\"\"\n start = 0\n while True:\n try: i = l.index(value, start)\n except ValueError: break\n yield l[start:i]\n start = i+1\n yield l[start:]\n\n\f\n# # ---------- TESTS (uncomment to run) ----------\n# def test_number(expected, string):\n# print('testing:', string)\n# l = list(scan_small_numbers(string.split()))\n# print(\" scan --->\", l)\n# for scale in scales:\n# old = l\n# l = parse_scale(scale, l)\n# if scale in old: print(\" parse -->\", l)\n# else: assert old == l, \"parse_scale should do nothing if the scale does not occur in the list\"\n# result = \"\".join(str(n) for n in l)\n# assert result == parse_number(string.split())\n# assert str(expected) == result, f\"parsing {string!r}, expected {expected}, got {result}\"\n\n# test_number(105000, \"one hundred and five thousand\")\n# test_number(1000000, \"one thousand thousand\")\n# test_number(1501000, \"one million five hundred one thousand\")\n# test_number(1501106, \"one million five hundred and one thousand one hundred and six\")\n# test_number(123, \"one two three\")\n# test_number(123, \"one twenty three\")\n# test_number(104, \"ten four\") # borderline, but valid in some dialects\n# test_number(1066, \"ten sixty six\") # a common way of saying years\n# test_number(1906, \"nineteen oh six\") # year\n# test_number(2001, \"twenty oh one\") # year\n# test_number(2020, \"twenty twenty\")\n# test_number(1001, \"one thousand one\")\n# test_number(1010, \"one thousand ten\")\n# test_number(123456, \"one hundred and twenty three thousand and four hundred and fifty six\")\n# test_number(123456, \"one twenty three thousand four fifty six\")\n\n# ## failing (and somewhat debatable) tests from old numbers.py\n# #test_number(10000011, \"one million one one\")\n# #test_number(100001010, \"one million ten ten\")\n# #test_number(1050006000, \"one hundred thousand and five thousand and six thousand\")\n\n\f\n# ---------- CAPTURES ----------\nalt_digits = \"(\" + (\"|\".join(digits_map.keys())) + \")\"\nalt_teens = \"(\" + (\"|\".join(teens_map.keys())) + \")\"\nalt_tens = \"(\" + (\"|\".join(tens_map.keys())) + \")\"\nalt_scales = \"(\" + (\"|\".join(scales_map.keys())) + \")\"\nnumber_word = \"(\" + \"|\".join(numbers_map.keys()) + \")\"\n# don't allow numbers to start with scale words like \"hundred\", \"thousand\", etc\nleading_words = numbers_map.keys() - scales_map.keys()\nleading_words -= {'oh', 'o'} # comment out to enable bare/initial \"oh\"\nnumber_word_leading = f\"({'|'.join(leading_words)})\"\n\n# TODO: allow things like \"double eight\" for 88\[email protected](\"digit_string\", rule=f\"({alt_digits} | {alt_teens} | {alt_tens})+\")\ndef digit_string(m) -> str: return parse_number(list(m))\n\[email protected](\"digits\", rule=\"<digit_string>\")\ndef digits(m) -> int:\n \"\"\"Parses a phrase representing a digit sequence, returning it as an integer.\"\"\"\n return int(m.digit_string)\n\[email protected](rule=f\"{number_word_leading} ([and] {number_word})*\")\ndef number_string(m) -> 
str:\n \"\"\"Parses a number phrase, returning that number as a string.\"\"\"\n return parse_number(list(m))\n\[email protected](\"number\", rule=\"<user.number_string>\")\ndef number(m) -> int:\n \"\"\"Parses a number phrase, returning it as an integer.\"\"\"\n return int(m.number_string)\n\[email protected](\"number_signed\", rule=f\"[negative|minus] <number>\")\ndef number_signed(m):\n number = m[-1]\n return -number if (m[0] in [\"negative\", \"minus\"]) else number\n\[email protected](\n \"number_small\", rule=f\"({alt_digits} | {alt_teens} | {alt_tens} [{alt_digits}])\"\n)\ndef number_small(m): return int(parse_number(list(m)))\n", "path": "code/numbers.py"}]}
| 2,867 | 320 |
gh_patches_debug_11994
|
rasdani/github-patches
|
git_diff
|
oppia__oppia-2476
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update the "collections" section of the creator dashboard to use the newer summary tiles.
In the creator dashboard, the "collections" tab shows long horizontal cards. These are out of sync with the rest of the site, and should instead be replaced with the smaller collection summary tiles that are used in the library page.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/controllers/dashboard.py`
Content:
```
1 # Copyright 2014 The Oppia Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS-IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Controllers for the creator dashboard, notifications, and creating new
16 activities.
17 """
18
19 from core.controllers import base
20 from core.domain import collection_domain
21 from core.domain import collection_services
22 from core.domain import config_domain
23 from core.domain import exp_domain
24 from core.domain import exp_services
25 from core.domain import feedback_services
26 from core.domain import stats_services
27 from core.domain import subscription_services
28 from core.domain import summary_services
29 from core.domain import user_jobs_continuous
30 from core.domain import user_services
31 import feconf
32 import utils
33
34 EXPLORATION_ID_KEY = 'explorationId'
35 COLLECTION_ID_KEY = 'collectionId'
36
37 DEFAULT_TWITTER_SHARE_MESSAGE_DASHBOARD = config_domain.ConfigProperty(
38 'default_twitter_share_message_dashboard', {
39 'type': 'unicode',
40 },
41 'Default text for the Twitter share message for the dashboard',
42 default_value=(
43 'Check out this interactive lesson I created on Oppia - a free '
44 'platform for teaching and learning!'))
45
46
47 class NotificationsDashboardPage(base.BaseHandler):
48 """Page with notifications for the user."""
49
50 @base.require_user
51 def get(self):
52 if self.username in config_domain.BANNED_USERNAMES.value:
53 raise self.UnauthorizedUserException(
54 'You do not have the credentials to access this page.')
55 elif user_services.has_fully_registered(self.user_id):
56 self.values.update({
57 'meta_description': feconf.DASHBOARD_PAGE_DESCRIPTION,
58 'nav_mode': feconf.NAV_MODE_DASHBOARD,
59 })
60 self.render_template(
61 'dashboard/notifications_dashboard.html',
62 redirect_url_on_logout='/')
63 else:
64 self.redirect(utils.set_url_query_parameter(
65 feconf.SIGNUP_URL, 'return_url', '/notifications_dashboard'))
66
67
68 class NotificationsDashboardHandler(base.BaseHandler):
69 """Provides data for the user notifications dashboard."""
70
71 def get(self):
72 """Handles GET requests."""
73 if self.user_id is None:
74 raise self.PageNotFoundException
75
76 job_queued_msec, recent_notifications = (
77 user_jobs_continuous.DashboardRecentUpdatesAggregator.get_recent_notifications( # pylint: disable=line-too-long
78 self.user_id))
79
80 last_seen_msec = (
81 subscription_services.get_last_seen_notifications_msec(
82 self.user_id))
83
84 # Replace author_ids with their usernames.
85 author_ids = [
86 notification['author_id'] for notification in recent_notifications
87 if notification['author_id']]
88 author_usernames = user_services.get_usernames(author_ids)
89
90 author_id_to_username = {
91 None: '',
92 }
93 for ind, author_id in enumerate(author_ids):
94 author_id_to_username[author_id] = author_usernames[ind]
95 for notification in recent_notifications:
96 notification['author_username'] = (
97 author_id_to_username[notification['author_id']])
98 del notification['author_id']
99
100 subscription_services.record_user_has_seen_notifications(
101 self.user_id, job_queued_msec if job_queued_msec else 0.0)
102
103 self.values.update({
104 # This may be None if no job has ever run for this user.
105 'job_queued_msec': job_queued_msec,
106 # This may be None if this is the first time the user has seen
107 # the dashboard.
108 'last_seen_msec': last_seen_msec,
109 'recent_notifications': recent_notifications,
110 })
111 self.render_json(self.values)
112
113
114 class DashboardPage(base.BaseHandler):
115 """Page showing the user's creator dashboard."""
116
117 @base.require_user
118 def get(self):
119 if self.username in config_domain.BANNED_USERNAMES.value:
120 raise self.UnauthorizedUserException(
121 'You do not have the credentials to access this page.')
122 elif user_services.has_fully_registered(self.user_id):
123 self.values.update({
124 'nav_mode': feconf.NAV_MODE_DASHBOARD,
125 'allow_yaml_file_upload': feconf.ALLOW_YAML_FILE_UPLOAD,
126 'DEFAULT_TWITTER_SHARE_MESSAGE_DASHBOARD': (
127 DEFAULT_TWITTER_SHARE_MESSAGE_DASHBOARD.value)
128 })
129 self.render_template(
130 'dashboard/dashboard.html', redirect_url_on_logout='/')
131 else:
132 self.redirect(utils.set_url_query_parameter(
133 feconf.SIGNUP_URL, 'return_url', feconf.DASHBOARD_URL))
134
135
136 class DashboardHandler(base.BaseHandler):
137 """Provides data for the user's creator dashboard page."""
138
139 def get(self):
140 """Handles GET requests."""
141 if self.user_id is None:
142 raise self.PageNotFoundException
143
144 def _get_intro_card_color(category):
145 return (
146 feconf.CATEGORIES_TO_COLORS[category] if
147 category in feconf.CATEGORIES_TO_COLORS else
148 feconf.DEFAULT_COLOR)
149
150 def _round_average_ratings(rating):
151 return round(rating, feconf.AVERAGE_RATINGS_DASHBOARD_PRECISION)
152
153 exploration_ids_subscribed_to = (
154 subscription_services.get_exploration_ids_subscribed_to(
155 self.user_id))
156
157 subscribed_exploration_summaries = filter(None, (
158 exp_services.get_exploration_summaries_matching_ids(
159 exploration_ids_subscribed_to)))
160 subscribed_collection_summaries = filter(None, (
161 collection_services.get_collection_summaries_matching_ids(
162 subscription_services.get_collection_ids_subscribed_to(
163 self.user_id))))
164
165 exp_summary_list = summary_services.get_displayable_exp_summary_dicts(
166 subscribed_exploration_summaries)
167 collection_summary_list = []
168
169 feedback_thread_analytics = (
170 feedback_services.get_thread_analytics_multi(
171 exploration_ids_subscribed_to))
172
173 unresolved_answers_dict = (
174 stats_services.get_exps_unresolved_answers_count_for_default_rule(
175 exploration_ids_subscribed_to))
176
177 for ind, exploration in enumerate(exp_summary_list):
178 exploration.update(feedback_thread_analytics[ind].to_dict())
179 exploration.update({
180 'num_unresolved_answers': (
181 unresolved_answers_dict[exploration['id']]
182 if exploration['id'] in unresolved_answers_dict else 0
183 )
184 })
185
186 exp_summary_list = sorted(
187 exp_summary_list,
188 key=lambda x: (x['num_open_threads'], x['last_updated_msec']),
189 reverse=True)
190
191 if (self.username in
192 config_domain.WHITELISTED_COLLECTION_EDITOR_USERNAMES.value):
193 for collection_summary in subscribed_collection_summaries:
194 # TODO(sll): Reuse _get_displayable_collection_summary_dicts()
195 # in summary_services, instead of replicating it like this.
196 collection_summary_list.append({
197 'id': collection_summary.id,
198 'title': collection_summary.title,
199 'category': collection_summary.category,
200 'objective': collection_summary.objective,
201 'language_code': collection_summary.language_code,
202 'last_updated': utils.get_time_in_millisecs(
203 collection_summary.collection_model_last_updated),
204 'created_on': utils.get_time_in_millisecs(
205 collection_summary.collection_model_created_on),
206 'status': collection_summary.status,
207 'community_owned': collection_summary.community_owned,
208 'thumbnail_icon_url': (
209 utils.get_thumbnail_icon_url_for_category(
210 collection_summary.category)),
211 'thumbnail_bg_color': utils.get_hex_color_for_category(
212 collection_summary.category),
213 })
214
215 dashboard_stats = (
216 user_jobs_continuous.UserStatsAggregator.get_dashboard_stats(
217 self.user_id))
218 dashboard_stats.update({
219 'total_open_feedback': feedback_services.get_total_open_threads(
220 feedback_thread_analytics)
221 })
222 if dashboard_stats and dashboard_stats.get('average_ratings'):
223 dashboard_stats['average_ratings'] = (
224 _round_average_ratings(dashboard_stats['average_ratings']))
225
226 last_week_stats = (
227 user_services.get_last_week_dashboard_stats(self.user_id))
228 if last_week_stats and last_week_stats.get('average_ratings'):
229 last_week_stats['average_ratings'] = (
230 _round_average_ratings(last_week_stats['average_ratings']))
231
232 self.values.update({
233 'explorations_list': exp_summary_list,
234 'collections_list': collection_summary_list,
235 'dashboard_stats': dashboard_stats,
236 'last_week_stats': last_week_stats
237 })
238 self.render_json(self.values)
239
240
241 class NotificationsHandler(base.BaseHandler):
242 """Provides data about unseen notifications."""
243
244 def get(self):
245 """Handles GET requests."""
246 num_unseen_notifications = 0
247 if self.user_id and self.username:
248 last_seen_msec = (
249 subscription_services.get_last_seen_notifications_msec(
250 self.user_id))
251 _, recent_notifications = (
252 user_jobs_continuous.DashboardRecentUpdatesAggregator.get_recent_notifications( # pylint: disable=line-too-long
253 self.user_id))
254 for notification in recent_notifications:
255 if (notification['last_updated_ms'] > last_seen_msec and
256 notification['author_id'] != self.user_id):
257 num_unseen_notifications += 1
258
259 self.render_json({
260 'num_unseen_notifications': num_unseen_notifications,
261 })
262
263
264 class NewExploration(base.BaseHandler):
265 """Creates a new exploration."""
266
267 @base.require_fully_signed_up
268 def post(self):
269 """Handles POST requests."""
270 title = self.payload.get('title', feconf.DEFAULT_EXPLORATION_TITLE)
271
272 new_exploration_id = exp_services.get_new_exploration_id()
273 exploration = exp_domain.Exploration.create_default_exploration(
274 new_exploration_id, title=title)
275 exp_services.save_new_exploration(self.user_id, exploration)
276
277 self.render_json({
278 EXPLORATION_ID_KEY: new_exploration_id
279 })
280
281
282 class NewCollection(base.BaseHandler):
283 """Creates a new collection."""
284
285 @base.require_fully_signed_up
286 def post(self):
287 """Handles POST requests."""
288 new_collection_id = collection_services.get_new_collection_id()
289 collection = collection_domain.Collection.create_default_collection(
290 new_collection_id)
291 collection_services.save_new_collection(self.user_id, collection)
292
293 self.render_json({
294 COLLECTION_ID_KEY: new_collection_id
295 })
296
297
298 class UploadExploration(base.BaseHandler):
299 """Uploads a new exploration."""
300
301 @base.require_fully_signed_up
302 def post(self):
303 """Handles POST requests."""
304 yaml_content = self.request.get('yaml_file')
305
306 new_exploration_id = exp_services.get_new_exploration_id()
307 if feconf.ALLOW_YAML_FILE_UPLOAD:
308 exp_services.save_new_exploration_from_yaml_and_assets(
309 self.user_id, yaml_content, new_exploration_id, [])
310 self.render_json({
311 EXPLORATION_ID_KEY: new_exploration_id
312 })
313 else:
314 raise self.InvalidInputException(
315 'This server does not allow file uploads.')
316
317
318 class DashboardRedirectPage(base.BaseHandler):
319 """An page that redirects to the main Dashboard page."""
320
321 def get(self):
322 """Handles GET requests."""
323 self.redirect(feconf.DASHBOARD_URL)
324
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/controllers/dashboard.py b/core/controllers/dashboard.py
--- a/core/controllers/dashboard.py
+++ b/core/controllers/dashboard.py
@@ -204,6 +204,7 @@
'created_on': utils.get_time_in_millisecs(
collection_summary.collection_model_created_on),
'status': collection_summary.status,
+ 'node_count': collection_summary.node_count,
'community_owned': collection_summary.community_owned,
'thumbnail_icon_url': (
utils.get_thumbnail_icon_url_for_category(
|
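For context on why this one-line change is enough: the collection summary tiles used on the library page presumably display how many explorations a collection contains, so the dashboard handler has to expose `node_count` alongside the other summary fields — something the old long horizontal cards never needed. A rough sketch of the shape of one `collections_list` entry after the patch; all values are invented placeholders, and the claim that the tile directive reads `node_count` is an assumption inferred from the issue, not verified against the frontend code:

```python
# Illustrative only: one entry of collections_list as built in
# DashboardHandler.get() after the patch. Every value below is a
# made-up placeholder, not real Oppia data.
example_entry = {
    'id': 'sample_collection_id',
    'title': 'Sample Collection',
    'category': 'Mathematics',
    'objective': 'Learn fractions',
    'language_code': 'en',
    'status': 'public',
    'node_count': 3,              # newly exposed so the summary tile can show it
    'community_owned': False,
    'thumbnail_icon_url': '/subjects/Mathematics.svg',
    'thumbnail_bg_color': '#cd672b',
}
```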
{"golden_diff": "diff --git a/core/controllers/dashboard.py b/core/controllers/dashboard.py\n--- a/core/controllers/dashboard.py\n+++ b/core/controllers/dashboard.py\n@@ -204,6 +204,7 @@\n 'created_on': utils.get_time_in_millisecs(\n collection_summary.collection_model_created_on),\n 'status': collection_summary.status,\n+ 'node_count': collection_summary.node_count,\n 'community_owned': collection_summary.community_owned,\n 'thumbnail_icon_url': (\n utils.get_thumbnail_icon_url_for_category(\n", "issue": "Update the \"collections\" section of the creator dashboard to use the newer summary tiles.\nIn the creator dashboard, the \"collections\" tab shows long horizontal cards. These are out of sync with the rest of the site, and should instead be replaced with the smaller collection summary tiles that are used in the library page.\n\n", "before_files": [{"content": "# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Controllers for the creator dashboard, notifications, and creating new\nactivities.\n\"\"\"\n\nfrom core.controllers import base\nfrom core.domain import collection_domain\nfrom core.domain import collection_services\nfrom core.domain import config_domain\nfrom core.domain import exp_domain\nfrom core.domain import exp_services\nfrom core.domain import feedback_services\nfrom core.domain import stats_services\nfrom core.domain import subscription_services\nfrom core.domain import summary_services\nfrom core.domain import user_jobs_continuous\nfrom core.domain import user_services\nimport feconf\nimport utils\n\nEXPLORATION_ID_KEY = 'explorationId'\nCOLLECTION_ID_KEY = 'collectionId'\n\nDEFAULT_TWITTER_SHARE_MESSAGE_DASHBOARD = config_domain.ConfigProperty(\n 'default_twitter_share_message_dashboard', {\n 'type': 'unicode',\n },\n 'Default text for the Twitter share message for the dashboard',\n default_value=(\n 'Check out this interactive lesson I created on Oppia - a free '\n 'platform for teaching and learning!'))\n\n\nclass NotificationsDashboardPage(base.BaseHandler):\n \"\"\"Page with notifications for the user.\"\"\"\n\n @base.require_user\n def get(self):\n if self.username in config_domain.BANNED_USERNAMES.value:\n raise self.UnauthorizedUserException(\n 'You do not have the credentials to access this page.')\n elif user_services.has_fully_registered(self.user_id):\n self.values.update({\n 'meta_description': feconf.DASHBOARD_PAGE_DESCRIPTION,\n 'nav_mode': feconf.NAV_MODE_DASHBOARD,\n })\n self.render_template(\n 'dashboard/notifications_dashboard.html',\n redirect_url_on_logout='/')\n else:\n self.redirect(utils.set_url_query_parameter(\n feconf.SIGNUP_URL, 'return_url', '/notifications_dashboard'))\n\n\nclass NotificationsDashboardHandler(base.BaseHandler):\n \"\"\"Provides data for the user notifications dashboard.\"\"\"\n\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n if self.user_id is None:\n raise self.PageNotFoundException\n\n job_queued_msec, recent_notifications = (\n 
user_jobs_continuous.DashboardRecentUpdatesAggregator.get_recent_notifications( # pylint: disable=line-too-long\n self.user_id))\n\n last_seen_msec = (\n subscription_services.get_last_seen_notifications_msec(\n self.user_id))\n\n # Replace author_ids with their usernames.\n author_ids = [\n notification['author_id'] for notification in recent_notifications\n if notification['author_id']]\n author_usernames = user_services.get_usernames(author_ids)\n\n author_id_to_username = {\n None: '',\n }\n for ind, author_id in enumerate(author_ids):\n author_id_to_username[author_id] = author_usernames[ind]\n for notification in recent_notifications:\n notification['author_username'] = (\n author_id_to_username[notification['author_id']])\n del notification['author_id']\n\n subscription_services.record_user_has_seen_notifications(\n self.user_id, job_queued_msec if job_queued_msec else 0.0)\n\n self.values.update({\n # This may be None if no job has ever run for this user.\n 'job_queued_msec': job_queued_msec,\n # This may be None if this is the first time the user has seen\n # the dashboard.\n 'last_seen_msec': last_seen_msec,\n 'recent_notifications': recent_notifications,\n })\n self.render_json(self.values)\n\n\nclass DashboardPage(base.BaseHandler):\n \"\"\"Page showing the user's creator dashboard.\"\"\"\n\n @base.require_user\n def get(self):\n if self.username in config_domain.BANNED_USERNAMES.value:\n raise self.UnauthorizedUserException(\n 'You do not have the credentials to access this page.')\n elif user_services.has_fully_registered(self.user_id):\n self.values.update({\n 'nav_mode': feconf.NAV_MODE_DASHBOARD,\n 'allow_yaml_file_upload': feconf.ALLOW_YAML_FILE_UPLOAD,\n 'DEFAULT_TWITTER_SHARE_MESSAGE_DASHBOARD': (\n DEFAULT_TWITTER_SHARE_MESSAGE_DASHBOARD.value)\n })\n self.render_template(\n 'dashboard/dashboard.html', redirect_url_on_logout='/')\n else:\n self.redirect(utils.set_url_query_parameter(\n feconf.SIGNUP_URL, 'return_url', feconf.DASHBOARD_URL))\n\n\nclass DashboardHandler(base.BaseHandler):\n \"\"\"Provides data for the user's creator dashboard page.\"\"\"\n\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n if self.user_id is None:\n raise self.PageNotFoundException\n\n def _get_intro_card_color(category):\n return (\n feconf.CATEGORIES_TO_COLORS[category] if\n category in feconf.CATEGORIES_TO_COLORS else\n feconf.DEFAULT_COLOR)\n\n def _round_average_ratings(rating):\n return round(rating, feconf.AVERAGE_RATINGS_DASHBOARD_PRECISION)\n\n exploration_ids_subscribed_to = (\n subscription_services.get_exploration_ids_subscribed_to(\n self.user_id))\n\n subscribed_exploration_summaries = filter(None, (\n exp_services.get_exploration_summaries_matching_ids(\n exploration_ids_subscribed_to)))\n subscribed_collection_summaries = filter(None, (\n collection_services.get_collection_summaries_matching_ids(\n subscription_services.get_collection_ids_subscribed_to(\n self.user_id))))\n\n exp_summary_list = summary_services.get_displayable_exp_summary_dicts(\n subscribed_exploration_summaries)\n collection_summary_list = []\n\n feedback_thread_analytics = (\n feedback_services.get_thread_analytics_multi(\n exploration_ids_subscribed_to))\n\n unresolved_answers_dict = (\n stats_services.get_exps_unresolved_answers_count_for_default_rule(\n exploration_ids_subscribed_to))\n\n for ind, exploration in enumerate(exp_summary_list):\n exploration.update(feedback_thread_analytics[ind].to_dict())\n exploration.update({\n 'num_unresolved_answers': (\n 
unresolved_answers_dict[exploration['id']]\n if exploration['id'] in unresolved_answers_dict else 0\n )\n })\n\n exp_summary_list = sorted(\n exp_summary_list,\n key=lambda x: (x['num_open_threads'], x['last_updated_msec']),\n reverse=True)\n\n if (self.username in\n config_domain.WHITELISTED_COLLECTION_EDITOR_USERNAMES.value):\n for collection_summary in subscribed_collection_summaries:\n # TODO(sll): Reuse _get_displayable_collection_summary_dicts()\n # in summary_services, instead of replicating it like this.\n collection_summary_list.append({\n 'id': collection_summary.id,\n 'title': collection_summary.title,\n 'category': collection_summary.category,\n 'objective': collection_summary.objective,\n 'language_code': collection_summary.language_code,\n 'last_updated': utils.get_time_in_millisecs(\n collection_summary.collection_model_last_updated),\n 'created_on': utils.get_time_in_millisecs(\n collection_summary.collection_model_created_on),\n 'status': collection_summary.status,\n 'community_owned': collection_summary.community_owned,\n 'thumbnail_icon_url': (\n utils.get_thumbnail_icon_url_for_category(\n collection_summary.category)),\n 'thumbnail_bg_color': utils.get_hex_color_for_category(\n collection_summary.category),\n })\n\n dashboard_stats = (\n user_jobs_continuous.UserStatsAggregator.get_dashboard_stats(\n self.user_id))\n dashboard_stats.update({\n 'total_open_feedback': feedback_services.get_total_open_threads(\n feedback_thread_analytics)\n })\n if dashboard_stats and dashboard_stats.get('average_ratings'):\n dashboard_stats['average_ratings'] = (\n _round_average_ratings(dashboard_stats['average_ratings']))\n\n last_week_stats = (\n user_services.get_last_week_dashboard_stats(self.user_id))\n if last_week_stats and last_week_stats.get('average_ratings'):\n last_week_stats['average_ratings'] = (\n _round_average_ratings(last_week_stats['average_ratings']))\n\n self.values.update({\n 'explorations_list': exp_summary_list,\n 'collections_list': collection_summary_list,\n 'dashboard_stats': dashboard_stats,\n 'last_week_stats': last_week_stats\n })\n self.render_json(self.values)\n\n\nclass NotificationsHandler(base.BaseHandler):\n \"\"\"Provides data about unseen notifications.\"\"\"\n\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n num_unseen_notifications = 0\n if self.user_id and self.username:\n last_seen_msec = (\n subscription_services.get_last_seen_notifications_msec(\n self.user_id))\n _, recent_notifications = (\n user_jobs_continuous.DashboardRecentUpdatesAggregator.get_recent_notifications( # pylint: disable=line-too-long\n self.user_id))\n for notification in recent_notifications:\n if (notification['last_updated_ms'] > last_seen_msec and\n notification['author_id'] != self.user_id):\n num_unseen_notifications += 1\n\n self.render_json({\n 'num_unseen_notifications': num_unseen_notifications,\n })\n\n\nclass NewExploration(base.BaseHandler):\n \"\"\"Creates a new exploration.\"\"\"\n\n @base.require_fully_signed_up\n def post(self):\n \"\"\"Handles POST requests.\"\"\"\n title = self.payload.get('title', feconf.DEFAULT_EXPLORATION_TITLE)\n\n new_exploration_id = exp_services.get_new_exploration_id()\n exploration = exp_domain.Exploration.create_default_exploration(\n new_exploration_id, title=title)\n exp_services.save_new_exploration(self.user_id, exploration)\n\n self.render_json({\n EXPLORATION_ID_KEY: new_exploration_id\n })\n\n\nclass NewCollection(base.BaseHandler):\n \"\"\"Creates a new collection.\"\"\"\n\n @base.require_fully_signed_up\n def 
post(self):\n \"\"\"Handles POST requests.\"\"\"\n new_collection_id = collection_services.get_new_collection_id()\n collection = collection_domain.Collection.create_default_collection(\n new_collection_id)\n collection_services.save_new_collection(self.user_id, collection)\n\n self.render_json({\n COLLECTION_ID_KEY: new_collection_id\n })\n\n\nclass UploadExploration(base.BaseHandler):\n \"\"\"Uploads a new exploration.\"\"\"\n\n @base.require_fully_signed_up\n def post(self):\n \"\"\"Handles POST requests.\"\"\"\n yaml_content = self.request.get('yaml_file')\n\n new_exploration_id = exp_services.get_new_exploration_id()\n if feconf.ALLOW_YAML_FILE_UPLOAD:\n exp_services.save_new_exploration_from_yaml_and_assets(\n self.user_id, yaml_content, new_exploration_id, [])\n self.render_json({\n EXPLORATION_ID_KEY: new_exploration_id\n })\n else:\n raise self.InvalidInputException(\n 'This server does not allow file uploads.')\n\n\nclass DashboardRedirectPage(base.BaseHandler):\n \"\"\"An page that redirects to the main Dashboard page.\"\"\"\n\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n self.redirect(feconf.DASHBOARD_URL)\n", "path": "core/controllers/dashboard.py"}], "after_files": [{"content": "# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Controllers for the creator dashboard, notifications, and creating new\nactivities.\n\"\"\"\n\nfrom core.controllers import base\nfrom core.domain import collection_domain\nfrom core.domain import collection_services\nfrom core.domain import config_domain\nfrom core.domain import exp_domain\nfrom core.domain import exp_services\nfrom core.domain import feedback_services\nfrom core.domain import stats_services\nfrom core.domain import subscription_services\nfrom core.domain import summary_services\nfrom core.domain import user_jobs_continuous\nfrom core.domain import user_services\nimport feconf\nimport utils\n\nEXPLORATION_ID_KEY = 'explorationId'\nCOLLECTION_ID_KEY = 'collectionId'\n\nDEFAULT_TWITTER_SHARE_MESSAGE_DASHBOARD = config_domain.ConfigProperty(\n 'default_twitter_share_message_dashboard', {\n 'type': 'unicode',\n },\n 'Default text for the Twitter share message for the dashboard',\n default_value=(\n 'Check out this interactive lesson I created on Oppia - a free '\n 'platform for teaching and learning!'))\n\n\nclass NotificationsDashboardPage(base.BaseHandler):\n \"\"\"Page with notifications for the user.\"\"\"\n\n @base.require_user\n def get(self):\n if self.username in config_domain.BANNED_USERNAMES.value:\n raise self.UnauthorizedUserException(\n 'You do not have the credentials to access this page.')\n elif user_services.has_fully_registered(self.user_id):\n self.values.update({\n 'meta_description': feconf.DASHBOARD_PAGE_DESCRIPTION,\n 'nav_mode': feconf.NAV_MODE_DASHBOARD,\n })\n self.render_template(\n 'dashboard/notifications_dashboard.html',\n redirect_url_on_logout='/')\n else:\n self.redirect(utils.set_url_query_parameter(\n feconf.SIGNUP_URL, 
'return_url', '/notifications_dashboard'))\n\n\nclass NotificationsDashboardHandler(base.BaseHandler):\n \"\"\"Provides data for the user notifications dashboard.\"\"\"\n\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n if self.user_id is None:\n raise self.PageNotFoundException\n\n job_queued_msec, recent_notifications = (\n user_jobs_continuous.DashboardRecentUpdatesAggregator.get_recent_notifications( # pylint: disable=line-too-long\n self.user_id))\n\n last_seen_msec = (\n subscription_services.get_last_seen_notifications_msec(\n self.user_id))\n\n # Replace author_ids with their usernames.\n author_ids = [\n notification['author_id'] for notification in recent_notifications\n if notification['author_id']]\n author_usernames = user_services.get_usernames(author_ids)\n\n author_id_to_username = {\n None: '',\n }\n for ind, author_id in enumerate(author_ids):\n author_id_to_username[author_id] = author_usernames[ind]\n for notification in recent_notifications:\n notification['author_username'] = (\n author_id_to_username[notification['author_id']])\n del notification['author_id']\n\n subscription_services.record_user_has_seen_notifications(\n self.user_id, job_queued_msec if job_queued_msec else 0.0)\n\n self.values.update({\n # This may be None if no job has ever run for this user.\n 'job_queued_msec': job_queued_msec,\n # This may be None if this is the first time the user has seen\n # the dashboard.\n 'last_seen_msec': last_seen_msec,\n 'recent_notifications': recent_notifications,\n })\n self.render_json(self.values)\n\n\nclass DashboardPage(base.BaseHandler):\n \"\"\"Page showing the user's creator dashboard.\"\"\"\n\n @base.require_user\n def get(self):\n if self.username in config_domain.BANNED_USERNAMES.value:\n raise self.UnauthorizedUserException(\n 'You do not have the credentials to access this page.')\n elif user_services.has_fully_registered(self.user_id):\n self.values.update({\n 'nav_mode': feconf.NAV_MODE_DASHBOARD,\n 'allow_yaml_file_upload': feconf.ALLOW_YAML_FILE_UPLOAD,\n 'DEFAULT_TWITTER_SHARE_MESSAGE_DASHBOARD': (\n DEFAULT_TWITTER_SHARE_MESSAGE_DASHBOARD.value)\n })\n self.render_template(\n 'dashboard/dashboard.html', redirect_url_on_logout='/')\n else:\n self.redirect(utils.set_url_query_parameter(\n feconf.SIGNUP_URL, 'return_url', feconf.DASHBOARD_URL))\n\n\nclass DashboardHandler(base.BaseHandler):\n \"\"\"Provides data for the user's creator dashboard page.\"\"\"\n\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n if self.user_id is None:\n raise self.PageNotFoundException\n\n def _get_intro_card_color(category):\n return (\n feconf.CATEGORIES_TO_COLORS[category] if\n category in feconf.CATEGORIES_TO_COLORS else\n feconf.DEFAULT_COLOR)\n\n def _round_average_ratings(rating):\n return round(rating, feconf.AVERAGE_RATINGS_DASHBOARD_PRECISION)\n\n exploration_ids_subscribed_to = (\n subscription_services.get_exploration_ids_subscribed_to(\n self.user_id))\n\n subscribed_exploration_summaries = filter(None, (\n exp_services.get_exploration_summaries_matching_ids(\n exploration_ids_subscribed_to)))\n subscribed_collection_summaries = filter(None, (\n collection_services.get_collection_summaries_matching_ids(\n subscription_services.get_collection_ids_subscribed_to(\n self.user_id))))\n\n exp_summary_list = summary_services.get_displayable_exp_summary_dicts(\n subscribed_exploration_summaries)\n collection_summary_list = []\n\n feedback_thread_analytics = (\n feedback_services.get_thread_analytics_multi(\n exploration_ids_subscribed_to))\n\n 
unresolved_answers_dict = (\n stats_services.get_exps_unresolved_answers_count_for_default_rule(\n exploration_ids_subscribed_to))\n\n for ind, exploration in enumerate(exp_summary_list):\n exploration.update(feedback_thread_analytics[ind].to_dict())\n exploration.update({\n 'num_unresolved_answers': (\n unresolved_answers_dict[exploration['id']]\n if exploration['id'] in unresolved_answers_dict else 0\n )\n })\n\n exp_summary_list = sorted(\n exp_summary_list,\n key=lambda x: (x['num_open_threads'], x['last_updated_msec']),\n reverse=True)\n\n if (self.username in\n config_domain.WHITELISTED_COLLECTION_EDITOR_USERNAMES.value):\n for collection_summary in subscribed_collection_summaries:\n # TODO(sll): Reuse _get_displayable_collection_summary_dicts()\n # in summary_services, instead of replicating it like this.\n collection_summary_list.append({\n 'id': collection_summary.id,\n 'title': collection_summary.title,\n 'category': collection_summary.category,\n 'objective': collection_summary.objective,\n 'language_code': collection_summary.language_code,\n 'last_updated': utils.get_time_in_millisecs(\n collection_summary.collection_model_last_updated),\n 'created_on': utils.get_time_in_millisecs(\n collection_summary.collection_model_created_on),\n 'status': collection_summary.status,\n 'node_count': collection_summary.node_count,\n 'community_owned': collection_summary.community_owned,\n 'thumbnail_icon_url': (\n utils.get_thumbnail_icon_url_for_category(\n collection_summary.category)),\n 'thumbnail_bg_color': utils.get_hex_color_for_category(\n collection_summary.category),\n })\n\n dashboard_stats = (\n user_jobs_continuous.UserStatsAggregator.get_dashboard_stats(\n self.user_id))\n dashboard_stats.update({\n 'total_open_feedback': feedback_services.get_total_open_threads(\n feedback_thread_analytics)\n })\n if dashboard_stats and dashboard_stats.get('average_ratings'):\n dashboard_stats['average_ratings'] = (\n _round_average_ratings(dashboard_stats['average_ratings']))\n\n last_week_stats = (\n user_services.get_last_week_dashboard_stats(self.user_id))\n if last_week_stats and last_week_stats.get('average_ratings'):\n last_week_stats['average_ratings'] = (\n _round_average_ratings(last_week_stats['average_ratings']))\n\n self.values.update({\n 'explorations_list': exp_summary_list,\n 'collections_list': collection_summary_list,\n 'dashboard_stats': dashboard_stats,\n 'last_week_stats': last_week_stats\n })\n self.render_json(self.values)\n\n\nclass NotificationsHandler(base.BaseHandler):\n \"\"\"Provides data about unseen notifications.\"\"\"\n\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n num_unseen_notifications = 0\n if self.user_id and self.username:\n last_seen_msec = (\n subscription_services.get_last_seen_notifications_msec(\n self.user_id))\n _, recent_notifications = (\n user_jobs_continuous.DashboardRecentUpdatesAggregator.get_recent_notifications( # pylint: disable=line-too-long\n self.user_id))\n for notification in recent_notifications:\n if (notification['last_updated_ms'] > last_seen_msec and\n notification['author_id'] != self.user_id):\n num_unseen_notifications += 1\n\n self.render_json({\n 'num_unseen_notifications': num_unseen_notifications,\n })\n\n\nclass NewExploration(base.BaseHandler):\n \"\"\"Creates a new exploration.\"\"\"\n\n @base.require_fully_signed_up\n def post(self):\n \"\"\"Handles POST requests.\"\"\"\n title = self.payload.get('title', feconf.DEFAULT_EXPLORATION_TITLE)\n\n new_exploration_id = exp_services.get_new_exploration_id()\n 
exploration = exp_domain.Exploration.create_default_exploration(\n new_exploration_id, title=title)\n exp_services.save_new_exploration(self.user_id, exploration)\n\n self.render_json({\n EXPLORATION_ID_KEY: new_exploration_id\n })\n\n\nclass NewCollection(base.BaseHandler):\n \"\"\"Creates a new collection.\"\"\"\n\n @base.require_fully_signed_up\n def post(self):\n \"\"\"Handles POST requests.\"\"\"\n new_collection_id = collection_services.get_new_collection_id()\n collection = collection_domain.Collection.create_default_collection(\n new_collection_id)\n collection_services.save_new_collection(self.user_id, collection)\n\n self.render_json({\n COLLECTION_ID_KEY: new_collection_id\n })\n\n\nclass UploadExploration(base.BaseHandler):\n \"\"\"Uploads a new exploration.\"\"\"\n\n @base.require_fully_signed_up\n def post(self):\n \"\"\"Handles POST requests.\"\"\"\n yaml_content = self.request.get('yaml_file')\n\n new_exploration_id = exp_services.get_new_exploration_id()\n if feconf.ALLOW_YAML_FILE_UPLOAD:\n exp_services.save_new_exploration_from_yaml_and_assets(\n self.user_id, yaml_content, new_exploration_id, [])\n self.render_json({\n EXPLORATION_ID_KEY: new_exploration_id\n })\n else:\n raise self.InvalidInputException(\n 'This server does not allow file uploads.')\n\n\nclass DashboardRedirectPage(base.BaseHandler):\n \"\"\"An page that redirects to the main Dashboard page.\"\"\"\n\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n self.redirect(feconf.DASHBOARD_URL)\n", "path": "core/controllers/dashboard.py"}]}
| 3,592 | 108 |
gh_patches_debug_38924
|
rasdani/github-patches
|
git_diff
|
yt-dlp__yt-dlp-8032
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bild.de Extractor not working
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I'm running yt-dlp version **2023.07.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Germany
### Provide a description that is worded well enough to be understood
Today I tried downloading a video from bild.de; I have never really downloaded from there before. The download finishes almost immediately after it starts, but the resulting file contains nothing, just a black screen.
The m3u8 file I get out looks like this. I already tried updating; I'm on [email protected] now.
`#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-VERSION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
segment_0_playlist_0.ts
#EXTINF:6.0,
segment_0_playlist_1.ts
#EXTINF:6.0,
segment_0_playlist_2.ts
#EXTINF:6.0,
segment_0_playlist_3.ts
#EXTINF:6.0,
segment_0_playlist_4.ts
#EXTINF:6.0,
segment_0_playlist_5.ts
#EXTINF:6.0,
segment_0_playlist_6.ts
#EXTINF:6.0,
segment_0_playlist_7.ts
#EXTINF:6.0,
segment_0_playlist_8.ts
#EXTINF:6.0,
segment_0_playlist_9.ts
#EXTINF:6.0,
segment_0_playlist_10.ts
#EXTINF:3.96,
segment_0_playlist_11.ts
#EXT-X-ENDLIST
#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-VERSION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
segment_1_playlist_0.ts
#EXTINF:6.0,
segment_1_playlist_1.ts
#EXTINF:6.0,
segment_1_playlist_2.ts
#EXTINF:6.0,
segment_1_playlist_3.ts
#EXTINF:6.0,
segment_1_playlist_4.ts
#EXTINF:6.0,
segment_1_playlist_5.ts
#EXTINF:6.0,
segment_1_playlist_6.ts
#EXTINF:6.0,
segment_1_playlist_7.ts
#EXTINF:6.0,
segment_1_playlist_8.ts
#EXTINF:6.0,
segment_1_playlist_9.ts
#EXTINF:6.0,
segment_1_playlist_10.ts
#EXTINF:3.96,
segment_1_playlist_11.ts
#EXT-X-ENDLIST
#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-VERSION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
segment_2_playlist_0.ts
#EXTINF:6.0,
segment_2_playlist_1.ts
#EXTINF:6.0,
segment_2_playlist_2.ts
#EXTINF:6.0,
segment_2_playlist_3.ts
#EXTINF:6.0,
segment_2_playlist_4.ts
#EXTINF:6.0,
segment_2_playlist_5.ts
#EXTINF:6.0,
segment_2_playlist_6.ts
#EXTINF:6.0,
segment_2_playlist_7.ts
#EXTINF:6.0,
segment_2_playlist_8.ts
#EXTINF:6.0,
segment_2_playlist_9.ts
#EXTINF:6.0,
segment_2_playlist_10.ts
#EXTINF:3.96,
segment_2_playlist_11.ts
#EXT-X-ENDLIST
`
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp.exe -vU https://www.bild.de/video/clip/news-ausland/deftiger-abgang-vom-10m-turm-bademeister-sorgt-fuer-skandal-85158620.bild.html
[debug] Command-line config: ['-vU', 'https://www.bild.de/video/clip/news-ausland/deftiger-abgang-vom-10m-turm-bademeister-sorgt-fuer-skandal-85158620.bild.html']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] [7cccab79e] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 2023-06-11-git-09621fd7d9-essentials_build-www.gyan.dev (setts), ffprobe 2023-06-11-git-09621fd7d9-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.07.22, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3
[debug] Proxy map: {}
[debug] Loaded 1864 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Available version: [email protected], Current version: [email protected]
Current Build Hash: 71c3f327a60b9cbbaa27ff92ed08abe76e600010298e392b98a41e2cdac634f8
yt-dlp is up to date ([email protected])
[Bild] Extracting URL: https://www.bild.de/video/clip/news-ausland/deftiger-abgang-vom-10m-turm-bademeister-sorgt-fuer-skandal-85158620.bild.html
[Bild] 85158620: Downloading JSON metadata
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 85158620: Downloading 1 format(s): 0
[debug] Invoking hlsnative downloader on "https://hds.ak.token.bild.de/85158620,delivery=hls.m3u8"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 3
[download] Destination: Bademeister sorgt für Skandal [85158620].m3u8
[debug] File locking is not supported. Proceeding without locking
[download] 100% of 1.64KiB in 00:00:00 at 6.58KiB/s
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `yt_dlp/extractor/bild.py`
Content:
```
1 from .common import InfoExtractor
2 from ..utils import (
3 int_or_none,
4 unescapeHTML,
5 )
6
7
8 class BildIE(InfoExtractor):
9 _VALID_URL = r'https?://(?:www\.)?bild\.de/(?:[^/]+/)+(?P<display_id>[^/]+)-(?P<id>\d+)(?:,auto=true)?\.bild\.html'
10 IE_DESC = 'Bild.de'
11 _TEST = {
12 'url': 'http://www.bild.de/video/clip/apple-ipad-air/das-koennen-die-neuen-ipads-38184146.bild.html',
13 'md5': 'dd495cbd99f2413502a1713a1156ac8a',
14 'info_dict': {
15 'id': '38184146',
16 'ext': 'mp4',
17 'title': 'Das können die neuen iPads',
18 'description': 'md5:a4058c4fa2a804ab59c00d7244bbf62f',
19 'thumbnail': r're:^https?://.*\.jpg$',
20 'duration': 196,
21 }
22 }
23
24 def _real_extract(self, url):
25 video_id = self._match_id(url)
26
27 video_data = self._download_json(
28 url.split('.bild.html')[0] + ',view=json.bild.html', video_id)
29
30 return {
31 'id': video_id,
32 'title': unescapeHTML(video_data['title']).strip(),
33 'description': unescapeHTML(video_data.get('description')),
34 'url': video_data['clipList'][0]['srces'][0]['src'],
35 'thumbnail': video_data.get('poster'),
36 'duration': int_or_none(video_data.get('durationSec')),
37 }
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/yt_dlp/extractor/bild.py b/yt_dlp/extractor/bild.py
--- a/yt_dlp/extractor/bild.py
+++ b/yt_dlp/extractor/bild.py
@@ -1,6 +1,7 @@
from .common import InfoExtractor
from ..utils import (
int_or_none,
+ traverse_obj,
unescapeHTML,
)
@@ -8,7 +9,8 @@
class BildIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?bild\.de/(?:[^/]+/)+(?P<display_id>[^/]+)-(?P<id>\d+)(?:,auto=true)?\.bild\.html'
IE_DESC = 'Bild.de'
- _TEST = {
+ _TESTS = [{
+ 'note': 'static MP4 only',
'url': 'http://www.bild.de/video/clip/apple-ipad-air/das-koennen-die-neuen-ipads-38184146.bild.html',
'md5': 'dd495cbd99f2413502a1713a1156ac8a',
'info_dict': {
@@ -19,7 +21,19 @@
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 196,
}
- }
+ }, {
+ 'note': 'static MP4 and HLS',
+ 'url': 'https://www.bild.de/video/clip/news-ausland/deftiger-abgang-vom-10m-turm-bademeister-sorgt-fuer-skandal-85158620.bild.html',
+ 'md5': 'fb0ed4f09c495d4ba7ce2eee0bb90de1',
+ 'info_dict': {
+ 'id': '85158620',
+ 'ext': 'mp4',
+ 'title': 'Der Sprungturm-Skandal',
+ 'description': 'md5:709b543c24dc31bbbffee73bccda34ad',
+ 'thumbnail': r're:^https?://.*\.jpg$',
+ 'duration': 69,
+ }
+ }]
def _real_extract(self, url):
video_id = self._match_id(url)
@@ -27,11 +41,23 @@
video_data = self._download_json(
url.split('.bild.html')[0] + ',view=json.bild.html', video_id)
+ formats = []
+ for src in traverse_obj(video_data, ('clipList', 0, 'srces', lambda _, v: v['src'])):
+ src_type = src.get('type')
+ if src_type == 'application/x-mpegURL':
+ formats.extend(
+ self._extract_m3u8_formats(
+ src['src'], video_id, 'mp4', m3u8_id='hls', fatal=False))
+ elif src_type == 'video/mp4':
+ formats.append({'url': src['src'], 'format_id': 'http-mp4'})
+ else:
+ self.report_warning(f'Skipping unsupported format type: "{src_type}"')
+
return {
'id': video_id,
'title': unescapeHTML(video_data['title']).strip(),
'description': unescapeHTML(video_data.get('description')),
- 'url': video_data['clipList'][0]['srces'][0]['src'],
+ 'formats': formats,
'thumbnail': video_data.get('poster'),
'duration': int_or_none(video_data.get('durationSec')),
}
|
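The core of the fix is that the extractor no longer returns `srces[0]['src']` blindly (for this video that is an HLS manifest URL, which yt-dlp then treated as a plain media file and saved as a `.m3u8` text file); instead it iterates every source, expanding `application/x-mpegURL` entries into proper HLS formats and keeping `video/mp4` entries as direct HTTP formats. Below is a plain-Python emulation of that selection logic; the real code uses yt-dlp's `traverse_obj` helper and `_extract_m3u8_formats`, and the sample JSON is invented purely for illustration:

```python
# Made-up sample of the site's JSON; only the shape matters here.
video_data = {
    'clipList': [{
        'srces': [
            {'type': 'application/x-mpegURL',
             'src': 'https://example.invalid/85158620,delivery=hls.m3u8'},
            {'type': 'video/mp4',
             'src': 'https://example.invalid/85158620.mp4'},
        ],
    }],
}

formats = []
for src in video_data['clipList'][0]['srces']:
    if not src.get('src'):
        continue  # the extractor's traverse_obj lambda filter drops entries without 'src'
    if src.get('type') == 'application/x-mpegURL':
        # In the real extractor this expands into one format per HLS variant.
        formats.append({'url': src['src'], 'protocol': 'm3u8_native', 'format_id': 'hls'})
    elif src.get('type') == 'video/mp4':
        formats.append({'url': src['src'], 'format_id': 'http-mp4'})

print(formats)
```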
{"golden_diff": "diff --git a/yt_dlp/extractor/bild.py b/yt_dlp/extractor/bild.py\n--- a/yt_dlp/extractor/bild.py\n+++ b/yt_dlp/extractor/bild.py\n@@ -1,6 +1,7 @@\n from .common import InfoExtractor\n from ..utils import (\n int_or_none,\n+ traverse_obj,\n unescapeHTML,\n )\n \n@@ -8,7 +9,8 @@\n class BildIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?bild\\.de/(?:[^/]+/)+(?P<display_id>[^/]+)-(?P<id>\\d+)(?:,auto=true)?\\.bild\\.html'\n IE_DESC = 'Bild.de'\n- _TEST = {\n+ _TESTS = [{\n+ 'note': 'static MP4 only',\n 'url': 'http://www.bild.de/video/clip/apple-ipad-air/das-koennen-die-neuen-ipads-38184146.bild.html',\n 'md5': 'dd495cbd99f2413502a1713a1156ac8a',\n 'info_dict': {\n@@ -19,7 +21,19 @@\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'duration': 196,\n }\n- }\n+ }, {\n+ 'note': 'static MP4 and HLS',\n+ 'url': 'https://www.bild.de/video/clip/news-ausland/deftiger-abgang-vom-10m-turm-bademeister-sorgt-fuer-skandal-85158620.bild.html',\n+ 'md5': 'fb0ed4f09c495d4ba7ce2eee0bb90de1',\n+ 'info_dict': {\n+ 'id': '85158620',\n+ 'ext': 'mp4',\n+ 'title': 'Der Sprungturm-Skandal',\n+ 'description': 'md5:709b543c24dc31bbbffee73bccda34ad',\n+ 'thumbnail': r're:^https?://.*\\.jpg$',\n+ 'duration': 69,\n+ }\n+ }]\n \n def _real_extract(self, url):\n video_id = self._match_id(url)\n@@ -27,11 +41,23 @@\n video_data = self._download_json(\n url.split('.bild.html')[0] + ',view=json.bild.html', video_id)\n \n+ formats = []\n+ for src in traverse_obj(video_data, ('clipList', 0, 'srces', lambda _, v: v['src'])):\n+ src_type = src.get('type')\n+ if src_type == 'application/x-mpegURL':\n+ formats.extend(\n+ self._extract_m3u8_formats(\n+ src['src'], video_id, 'mp4', m3u8_id='hls', fatal=False))\n+ elif src_type == 'video/mp4':\n+ formats.append({'url': src['src'], 'format_id': 'http-mp4'})\n+ else:\n+ self.report_warning(f'Skipping unsupported format type: \"{src_type}\"')\n+\n return {\n 'id': video_id,\n 'title': unescapeHTML(video_data['title']).strip(),\n 'description': unescapeHTML(video_data.get('description')),\n- 'url': video_data['clipList'][0]['srces'][0]['src'],\n+ 'formats': formats,\n 'thumbnail': video_data.get('poster'),\n 'duration': int_or_none(video_data.get('durationSec')),\n }\n", "issue": "Bild.de Extractor not working\n### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting that yt-dlp is broken on a **supported** site\n- [X] I've verified that I'm running yt-dlp version **2023.07.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nGermany\n\n### Provide a description that is worded well enough to be understood\n\nToday i tried downloading a video from bild.de i have never realy downloaded from there. But after the download starts the file is done. The content of the file is nothing just a black screen. \r\nThe m3u8 file i get out looks like this. I already tried updating im on the [email protected] now.\r\n\r\n`#EXTM3U\r\n#EXT-X-PLAYLIST-TYPE:VOD\r\n#EXT-X-VERSION:5\r\n#EXT-X-MEDIA-SEQUENCE:0\r\n#EXT-X-TARGETDURATION:6\r\n#EXTINF:6.0,\r\nsegment_0_playlist_0.ts\r\n#EXTINF:6.0,\r\nsegment_0_playlist_1.ts\r\n#EXTINF:6.0,\r\nsegment_0_playlist_2.ts\r\n#EXTINF:6.0,\r\nsegment_0_playlist_3.ts\r\n#EXTINF:6.0,\r\nsegment_0_playlist_4.ts\r\n#EXTINF:6.0,\r\nsegment_0_playlist_5.ts\r\n#EXTINF:6.0,\r\nsegment_0_playlist_6.ts\r\n#EXTINF:6.0,\r\nsegment_0_playlist_7.ts\r\n#EXTINF:6.0,\r\nsegment_0_playlist_8.ts\r\n#EXTINF:6.0,\r\nsegment_0_playlist_9.ts\r\n#EXTINF:6.0,\r\nsegment_0_playlist_10.ts\r\n#EXTINF:3.96,\r\nsegment_0_playlist_11.ts\r\n#EXT-X-ENDLIST\r\n#EXTM3U\r\n#EXT-X-PLAYLIST-TYPE:VOD\r\n#EXT-X-VERSION:5\r\n#EXT-X-MEDIA-SEQUENCE:0\r\n#EXT-X-TARGETDURATION:6\r\n#EXTINF:6.0,\r\nsegment_1_playlist_0.ts\r\n#EXTINF:6.0,\r\nsegment_1_playlist_1.ts\r\n#EXTINF:6.0,\r\nsegment_1_playlist_2.ts\r\n#EXTINF:6.0,\r\nsegment_1_playlist_3.ts\r\n#EXTINF:6.0,\r\nsegment_1_playlist_4.ts\r\n#EXTINF:6.0,\r\nsegment_1_playlist_5.ts\r\n#EXTINF:6.0,\r\nsegment_1_playlist_6.ts\r\n#EXTINF:6.0,\r\nsegment_1_playlist_7.ts\r\n#EXTINF:6.0,\r\nsegment_1_playlist_8.ts\r\n#EXTINF:6.0,\r\nsegment_1_playlist_9.ts\r\n#EXTINF:6.0,\r\nsegment_1_playlist_10.ts\r\n#EXTINF:3.96,\r\nsegment_1_playlist_11.ts\r\n#EXT-X-ENDLIST\r\n#EXTM3U\r\n#EXT-X-PLAYLIST-TYPE:VOD\r\n#EXT-X-VERSION:5\r\n#EXT-X-MEDIA-SEQUENCE:0\r\n#EXT-X-TARGETDURATION:6\r\n#EXTINF:6.0,\r\nsegment_2_playlist_0.ts\r\n#EXTINF:6.0,\r\nsegment_2_playlist_1.ts\r\n#EXTINF:6.0,\r\nsegment_2_playlist_2.ts\r\n#EXTINF:6.0,\r\nsegment_2_playlist_3.ts\r\n#EXTINF:6.0,\r\nsegment_2_playlist_4.ts\r\n#EXTINF:6.0,\r\nsegment_2_playlist_5.ts\r\n#EXTINF:6.0,\r\nsegment_2_playlist_6.ts\r\n#EXTINF:6.0,\r\nsegment_2_playlist_7.ts\r\n#EXTINF:6.0,\r\nsegment_2_playlist_8.ts\r\n#EXTINF:6.0,\r\nsegment_2_playlist_9.ts\r\n#EXTINF:6.0,\r\nsegment_2_playlist_10.ts\r\n#EXTINF:3.96,\r\nsegment_2_playlist_11.ts\r\n#EXT-X-ENDLIST\r\n`\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\nyt-dlp.exe -vU https://www.bild.de/video/clip/news-ausland/deftiger-abgang-vom-10m-turm-bademeister-sorgt-fuer-skandal-85158620.bild.html\r\n[debug] Command-line config: ['-vU', 'https://www.bild.de/video/clip/news-ausland/deftiger-abgang-vom-10m-turm-bademeister-sorgt-fuer-skandal-85158620.bild.html']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version [email protected] [7cccab79e] 
(win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg 2023-06-11-git-09621fd7d9-essentials_build-www.gyan.dev (setts), ffprobe 2023-06-11-git-09621fd7d9-essentials_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.07.22, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1864 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest\r\nAvailable version: [email protected], Current version: [email protected]\r\nCurrent Build Hash: 71c3f327a60b9cbbaa27ff92ed08abe76e600010298e392b98a41e2cdac634f8\r\nyt-dlp is up to date ([email protected])\r\n[Bild] Extracting URL: https://www.bild.de/video/clip/news-ausland/deftiger-abgang-vom-10m-turm-bademeister-sorgt-fuer-skandal-85158620.bild.html\r\n[Bild] 85158620: Downloading JSON metadata\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] 85158620: Downloading 1 format(s): 0\r\n[debug] Invoking hlsnative downloader on \"https://hds.ak.token.bild.de/85158620,delivery=hls.m3u8\"\r\n[hlsnative] Downloading m3u8 manifest\r\n[hlsnative] Total fragments: 3\r\n[download] Destination: Bademeister sorgt f\u00fcr Skandal [85158620].m3u8\r\n[debug] File locking is not supported. Proceeding without locking\r\n[download] 100% of 1.64KiB in 00:00:00 at 6.58KiB/s\n```\n\n", "before_files": [{"content": "from .common import InfoExtractor\nfrom ..utils import (\n int_or_none,\n unescapeHTML,\n)\n\n\nclass BildIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?bild\\.de/(?:[^/]+/)+(?P<display_id>[^/]+)-(?P<id>\\d+)(?:,auto=true)?\\.bild\\.html'\n IE_DESC = 'Bild.de'\n _TEST = {\n 'url': 'http://www.bild.de/video/clip/apple-ipad-air/das-koennen-die-neuen-ipads-38184146.bild.html',\n 'md5': 'dd495cbd99f2413502a1713a1156ac8a',\n 'info_dict': {\n 'id': '38184146',\n 'ext': 'mp4',\n 'title': 'Das k\u00f6nnen die neuen iPads',\n 'description': 'md5:a4058c4fa2a804ab59c00d7244bbf62f',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'duration': 196,\n }\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n video_data = self._download_json(\n url.split('.bild.html')[0] + ',view=json.bild.html', video_id)\n\n return {\n 'id': video_id,\n 'title': unescapeHTML(video_data['title']).strip(),\n 'description': unescapeHTML(video_data.get('description')),\n 'url': video_data['clipList'][0]['srces'][0]['src'],\n 'thumbnail': video_data.get('poster'),\n 'duration': int_or_none(video_data.get('durationSec')),\n }\n", "path": "yt_dlp/extractor/bild.py"}], "after_files": [{"content": "from .common import InfoExtractor\nfrom ..utils import (\n int_or_none,\n traverse_obj,\n unescapeHTML,\n)\n\n\nclass BildIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?bild\\.de/(?:[^/]+/)+(?P<display_id>[^/]+)-(?P<id>\\d+)(?:,auto=true)?\\.bild\\.html'\n IE_DESC = 'Bild.de'\n _TESTS = [{\n 'note': 'static MP4 only',\n 'url': 'http://www.bild.de/video/clip/apple-ipad-air/das-koennen-die-neuen-ipads-38184146.bild.html',\n 'md5': 'dd495cbd99f2413502a1713a1156ac8a',\n 'info_dict': {\n 'id': '38184146',\n 'ext': 'mp4',\n 'title': 'Das k\u00f6nnen die neuen iPads',\n 'description': 'md5:a4058c4fa2a804ab59c00d7244bbf62f',\n 'thumbnail': 
r're:^https?://.*\\.jpg$',\n 'duration': 196,\n }\n }, {\n 'note': 'static MP4 and HLS',\n 'url': 'https://www.bild.de/video/clip/news-ausland/deftiger-abgang-vom-10m-turm-bademeister-sorgt-fuer-skandal-85158620.bild.html',\n 'md5': 'fb0ed4f09c495d4ba7ce2eee0bb90de1',\n 'info_dict': {\n 'id': '85158620',\n 'ext': 'mp4',\n 'title': 'Der Sprungturm-Skandal',\n 'description': 'md5:709b543c24dc31bbbffee73bccda34ad',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'duration': 69,\n }\n }]\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n video_data = self._download_json(\n url.split('.bild.html')[0] + ',view=json.bild.html', video_id)\n\n formats = []\n for src in traverse_obj(video_data, ('clipList', 0, 'srces', lambda _, v: v['src'])):\n src_type = src.get('type')\n if src_type == 'application/x-mpegURL':\n formats.extend(\n self._extract_m3u8_formats(\n src['src'], video_id, 'mp4', m3u8_id='hls', fatal=False))\n elif src_type == 'video/mp4':\n formats.append({'url': src['src'], 'format_id': 'http-mp4'})\n else:\n self.report_warning(f'Skipping unsupported format type: \"{src_type}\"')\n\n return {\n 'id': video_id,\n 'title': unescapeHTML(video_data['title']).strip(),\n 'description': unescapeHTML(video_data.get('description')),\n 'formats': formats,\n 'thumbnail': video_data.get('poster'),\n 'duration': int_or_none(video_data.get('durationSec')),\n }\n", "path": "yt_dlp/extractor/bild.py"}]}
| 2,984 | 838 |
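The patch recorded in the entry above turns the Bild extractor's single static MP4 URL into a proper formats list, dispatching on each source's MIME type and handing HLS manifests to `_extract_m3u8_formats`. Below is a minimal standalone sketch of that dispatch, using plain dicts with placeholder URLs instead of yt-dlp's `InfoExtractor` machinery; the HLS branch is stubbed, since the real call needs a live manifest.

```python
# Sketch of the per-source dispatch from the patched extractor. The input
# shape mirrors video_data["clipList"][0]["srces"]; URLs are placeholders.
def collect_formats(srces):
    formats = []
    for src in srces:
        src_type = src.get("type")
        if src_type == "application/x-mpegURL":
            # The real extractor calls self._extract_m3u8_formats(src["src"], ...)
            # here; this stub only records that an HLS manifest was seen.
            formats.append({"url": src["src"], "format_id": "hls", "protocol": "m3u8_native"})
        elif src_type == "video/mp4":
            formats.append({"url": src["src"], "format_id": "http-mp4"})
        else:
            print(f'Skipping unsupported format type: "{src_type}"')
    return formats


print(collect_formats([
    {"type": "video/mp4", "src": "https://example.com/clip.mp4"},
    {"type": "application/x-mpegURL", "src": "https://example.com/clip.m3u8"},
]))
```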
gh_patches_debug_30808
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-3008
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`unittest` framework raises `ResourceWarning`
## Description
<!-- A clear description of the bug -->
I'm using [`unittest`](https://docs.python.org/3/library/unittest.html) framework for testing purposes and then a task is running inside a `unittest.TestCase`, a `ResourceWarning` is raised.
## Expected Behavior
<!-- What did you expect to happen instead? -->
Run testcase without raising any warnings.
## Reproduction
<!-- A minimal example that exhibits the behavior. -->
`test.py` file:
```python
import typing
import unittest
from prefect.engine.task_runner import TaskRunner
from prefect.tasks.shell import ShellTask
class ShellTestCase(unittest.TestCase):
def test_shell_command(self) -> typing.NoReturn:
runner = TaskRunner(ShellTask('ls'))
runner.run()
self.assertTrue(True)
```
execution:
```bash
$ python -m unittest test.py
[2020-07-20 16:13:20] INFO - prefect.TaskRunner | Task 'ShellTask': Starting task run...
/home/psimakis/.local/share/virtualenvs/data-workflows-GfPV92cZ/lib/python3.7/site-packages/prefect/utilities/tasks.py:444: ResourceWarning: unclosed file <_io.BufferedReader name=9>
return run_method(self, *args, **kwargs)
ResourceWarning: Enable tracemalloc to get the object allocation traceback
[2020-07-20 16:13:21] INFO - prefect.TaskRunner | Task 'ShellTask': finished task run for task with final state: 'Success'
.
----------------------------------------------------------------------
Ran 1 test in 0.026s
```
## Environment
<!-- Any additional information about your environment
Optionally run `prefect diagnostics` from the command line and paste the information here. -->
```json
{
"config_overrides": {},
"env_vars": [
"PREFECT__CONTEXT__SECRETS__....",
"PREFECT__CONTEXT__SECRETS__...."
],
"system_information": {
"platform": "Linux-5.3.0-28-generic-x86_64-with-debian-buster-sid",
"prefect_version": "0.12.3",
"python_version": "3.7.3"
}
}
```
--- END ISSUE ---
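The warning text in the log above already hints at the quickest way to localize the leak: enabling `tracemalloc` makes the `ResourceWarning` report the allocation traceback of the unclosed pipe. A small invocation sketch, reusing the issue's own `test.py` and assuming it sits in the current directory:

```python
# Same reproduction as above, but with tracemalloc enabled so the
# ResourceWarning shows where the unclosed Popen pipe was allocated.
# Shell equivalent: python -X tracemalloc=10 -m unittest test.py
import subprocess
import sys

subprocess.run(
    [sys.executable, "-X", "tracemalloc=10", "-m", "unittest", "test.py"],
    check=False,
)
```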
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/tasks/shell.py`
Content:
```
1 import os
2 import tempfile
3 from subprocess import PIPE, STDOUT, Popen
4 from typing import Any
5
6 import prefect
7 from prefect.utilities.tasks import defaults_from_attrs
8
9
10 class ShellTask(prefect.Task):
11 """
12 Task for running arbitrary shell commands.
13
14 Args:
15 - command (string, optional): shell command to be executed; can also be
16 provided post-initialization by calling this task instance
17 - env (dict, optional): dictionary of environment variables to use for
18 the subprocess; can also be provided at runtime
19 - helper_script (str, optional): a string representing a shell script, which
20 will be executed prior to the `command` in the same process. Can be used to
21 change directories, define helper functions, etc. when re-using this Task
22 for different commands in a Flow
23 - shell (string, optional): shell to run the command with; defaults to "bash"
24 - return_all (bool, optional): boolean specifying whether this task
25 should return all lines of stdout as a list, or just the last line
26 as a string; defaults to `False`
27 - log_stderr (bool, optional): boolean specifying whether this task
28 should log the output from stderr in the case of a non-zero exit code;
29 defaults to `False`
30 - **kwargs: additional keyword arguments to pass to the Task constructor
31
32 Example:
33 ```python
34 from prefect import Flow
35 from prefect.tasks.shell import ShellTask
36
37 task = ShellTask(helper_script="cd ~")
38 with Flow("My Flow") as f:
39 # both tasks will be executed in home directory
40 contents = task(command='ls')
41 mv_file = task(command='mv .vimrc /.vimrc')
42
43 out = f.run()
44 ```
45 """
46
47 def __init__(
48 self,
49 command: str = None,
50 env: dict = None,
51 helper_script: str = None,
52 shell: str = "bash",
53 return_all: bool = False,
54 log_stderr: bool = False,
55 **kwargs: Any
56 ):
57 self.command = command
58 self.env = env
59 self.helper_script = helper_script
60 self.shell = shell
61 self.return_all = return_all
62 self.log_stderr = log_stderr
63 super().__init__(**kwargs)
64
65 @defaults_from_attrs("command", "env")
66 def run(self, command: str = None, env: dict = None) -> str:
67 """
68 Run the shell command.
69
70 Args:
71 - command (string): shell command to be executed; can also be
72 provided at task initialization. Any variables / functions defined in
73 `self.helper_script` will be available in the same process this command
74 runs in
75 - env (dict, optional): dictionary of environment variables to use for
76 the subprocess
77
78 Returns:
79 - stdout (string): if `return_all` is `False` (the default), only
80 the last line of stdout is returned, otherwise all lines are
81 returned, which is useful for passing result of shell command
82 to other downstream tasks. If there is no output, `None` is
83 returned.
84
85 Raises:
86 - prefect.engine.signals.FAIL: if command has an exit code other
87 than 0
88 """
89 if command is None:
90 raise TypeError("run() missing required argument: 'command'")
91
92 current_env = os.environ.copy()
93 current_env.update(env or {})
94 with tempfile.NamedTemporaryFile(prefix="prefect-") as tmp:
95 if self.helper_script:
96 tmp.write(self.helper_script.encode())
97 tmp.write("\n".encode())
98 tmp.write(command.encode())
99 tmp.flush()
100 sub_process = Popen(
101 [self.shell, tmp.name], stdout=PIPE, stderr=STDOUT, env=current_env
102 )
103 lines = []
104 line = None
105 for raw_line in iter(sub_process.stdout.readline, b""):
106 line = raw_line.decode("utf-8").rstrip()
107 if self.return_all:
108 lines.append(line)
109 else:
110 # if we're returning all, we don't log every line
111 self.logger.debug(line)
112 sub_process.wait()
113 if sub_process.returncode:
114 msg = "Command failed with exit code {}".format(sub_process.returncode,)
115 self.logger.error(msg)
116
117 if self.log_stderr:
118 self.logger.error("\n".join(lines))
119
120 raise prefect.engine.signals.FAIL(msg) from None # type: ignore
121 if self.return_all:
122 return lines
123 else:
124 return line
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/prefect/tasks/shell.py b/src/prefect/tasks/shell.py
--- a/src/prefect/tasks/shell.py
+++ b/src/prefect/tasks/shell.py
@@ -97,27 +97,29 @@
tmp.write("\n".encode())
tmp.write(command.encode())
tmp.flush()
- sub_process = Popen(
+ with Popen(
[self.shell, tmp.name], stdout=PIPE, stderr=STDOUT, env=current_env
- )
- lines = []
- line = None
- for raw_line in iter(sub_process.stdout.readline, b""):
- line = raw_line.decode("utf-8").rstrip()
- if self.return_all:
- lines.append(line)
- else:
- # if we're returning all, we don't log every line
- self.logger.debug(line)
- sub_process.wait()
- if sub_process.returncode:
- msg = "Command failed with exit code {}".format(sub_process.returncode,)
- self.logger.error(msg)
+ ) as sub_process:
+ lines = []
+ line = None
+ for raw_line in iter(sub_process.stdout.readline, b""):
+ line = raw_line.decode("utf-8").rstrip()
+ if self.return_all:
+ lines.append(line)
+ else:
+ # if we're returning all, we don't log every line
+ self.logger.debug(line)
+ sub_process.wait()
+ if sub_process.returncode:
+ msg = "Command failed with exit code {}".format(
+ sub_process.returncode,
+ )
+ self.logger.error(msg)
- if self.log_stderr:
- self.logger.error("\n".join(lines))
+ if self.log_stderr:
+ self.logger.error("\n".join(lines))
- raise prefect.engine.signals.FAIL(msg) from None # type: ignore
+ raise prefect.engine.signals.FAIL(msg) from None # type: ignore
if self.return_all:
return lines
else:
|
{"golden_diff": "diff --git a/src/prefect/tasks/shell.py b/src/prefect/tasks/shell.py\n--- a/src/prefect/tasks/shell.py\n+++ b/src/prefect/tasks/shell.py\n@@ -97,27 +97,29 @@\n tmp.write(\"\\n\".encode())\n tmp.write(command.encode())\n tmp.flush()\n- sub_process = Popen(\n+ with Popen(\n [self.shell, tmp.name], stdout=PIPE, stderr=STDOUT, env=current_env\n- )\n- lines = []\n- line = None\n- for raw_line in iter(sub_process.stdout.readline, b\"\"):\n- line = raw_line.decode(\"utf-8\").rstrip()\n- if self.return_all:\n- lines.append(line)\n- else:\n- # if we're returning all, we don't log every line\n- self.logger.debug(line)\n- sub_process.wait()\n- if sub_process.returncode:\n- msg = \"Command failed with exit code {}\".format(sub_process.returncode,)\n- self.logger.error(msg)\n+ ) as sub_process:\n+ lines = []\n+ line = None\n+ for raw_line in iter(sub_process.stdout.readline, b\"\"):\n+ line = raw_line.decode(\"utf-8\").rstrip()\n+ if self.return_all:\n+ lines.append(line)\n+ else:\n+ # if we're returning all, we don't log every line\n+ self.logger.debug(line)\n+ sub_process.wait()\n+ if sub_process.returncode:\n+ msg = \"Command failed with exit code {}\".format(\n+ sub_process.returncode,\n+ )\n+ self.logger.error(msg)\n \n- if self.log_stderr:\n- self.logger.error(\"\\n\".join(lines))\n+ if self.log_stderr:\n+ self.logger.error(\"\\n\".join(lines))\n \n- raise prefect.engine.signals.FAIL(msg) from None # type: ignore\n+ raise prefect.engine.signals.FAIL(msg) from None # type: ignore\n if self.return_all:\n return lines\n else:\n", "issue": "`unittest` framework raises `ResourceWarning`\n## Description\r\n<!-- A clear description of the bug -->\r\nI'm using [`unittest`](https://docs.python.org/3/library/unittest.html) framework for testing purposes and then a task is running inside a `unittest.TestCase`, a `ResourceWarning` is raised. \r\n\r\n\r\n## Expected Behavior\r\n<!-- What did you expect to happen instead? -->\r\nRun testcase without raising any warnings.\r\n\r\n\r\n\r\n\r\n## Reproduction\r\n<!-- A minimal example that exhibits the behavior. -->\r\n\r\n`test.py` file:\r\n\r\n```python\r\nimport typing\r\nimport unittest\r\n\r\nfrom prefect.engine.task_runner import TaskRunner\r\nfrom prefect.tasks.shell import ShellTask\r\n\r\n\r\nclass ShellTestCase(unittest.TestCase):\r\n def test_shell_command(self) -> typing.NoReturn:\r\n runner = TaskRunner(ShellTask('ls'))\r\n runner.run()\r\n self.assertTrue(True)\r\n```\r\n\r\nexecution:\r\n\r\n```bash\r\n$ python -m unittest test.py \r\n[2020-07-20 16:13:20] INFO - prefect.TaskRunner | Task 'ShellTask': Starting task run...\r\n/home/psimakis/.local/share/virtualenvs/data-workflows-GfPV92cZ/lib/python3.7/site-packages/prefect/utilities/tasks.py:444: ResourceWarning: unclosed file <_io.BufferedReader name=9>\r\n return run_method(self, *args, **kwargs)\r\nResourceWarning: Enable tracemalloc to get the object allocation traceback\r\n[2020-07-20 16:13:21] INFO - prefect.TaskRunner | Task 'ShellTask': finished task run for task with final state: 'Success'\r\n.\r\n----------------------------------------------------------------------\r\nRan 1 test in 0.026s\r\n```\r\n\r\n\r\n\r\n\r\n## Environment\r\n<!-- Any additional information about your environment\r\n\r\nOptionally run `prefect diagnostics` from the command line and paste the information here. 
-->\r\n```json\r\n{\r\n \"config_overrides\": {},\r\n \"env_vars\": [\r\n \"PREFECT__CONTEXT__SECRETS__....\",\r\n \"PREFECT__CONTEXT__SECRETS__....\"\r\n ],\r\n \"system_information\": {\r\n \"platform\": \"Linux-5.3.0-28-generic-x86_64-with-debian-buster-sid\",\r\n \"prefect_version\": \"0.12.3\",\r\n \"python_version\": \"3.7.3\"\r\n }\r\n}\r\n```\n", "before_files": [{"content": "import os\nimport tempfile\nfrom subprocess import PIPE, STDOUT, Popen\nfrom typing import Any\n\nimport prefect\nfrom prefect.utilities.tasks import defaults_from_attrs\n\n\nclass ShellTask(prefect.Task):\n \"\"\"\n Task for running arbitrary shell commands.\n\n Args:\n - command (string, optional): shell command to be executed; can also be\n provided post-initialization by calling this task instance\n - env (dict, optional): dictionary of environment variables to use for\n the subprocess; can also be provided at runtime\n - helper_script (str, optional): a string representing a shell script, which\n will be executed prior to the `command` in the same process. Can be used to\n change directories, define helper functions, etc. when re-using this Task\n for different commands in a Flow\n - shell (string, optional): shell to run the command with; defaults to \"bash\"\n - return_all (bool, optional): boolean specifying whether this task\n should return all lines of stdout as a list, or just the last line\n as a string; defaults to `False`\n - log_stderr (bool, optional): boolean specifying whether this task\n should log the output from stderr in the case of a non-zero exit code;\n defaults to `False`\n - **kwargs: additional keyword arguments to pass to the Task constructor\n\n Example:\n ```python\n from prefect import Flow\n from prefect.tasks.shell import ShellTask\n\n task = ShellTask(helper_script=\"cd ~\")\n with Flow(\"My Flow\") as f:\n # both tasks will be executed in home directory\n contents = task(command='ls')\n mv_file = task(command='mv .vimrc /.vimrc')\n\n out = f.run()\n ```\n \"\"\"\n\n def __init__(\n self,\n command: str = None,\n env: dict = None,\n helper_script: str = None,\n shell: str = \"bash\",\n return_all: bool = False,\n log_stderr: bool = False,\n **kwargs: Any\n ):\n self.command = command\n self.env = env\n self.helper_script = helper_script\n self.shell = shell\n self.return_all = return_all\n self.log_stderr = log_stderr\n super().__init__(**kwargs)\n\n @defaults_from_attrs(\"command\", \"env\")\n def run(self, command: str = None, env: dict = None) -> str:\n \"\"\"\n Run the shell command.\n\n Args:\n - command (string): shell command to be executed; can also be\n provided at task initialization. Any variables / functions defined in\n `self.helper_script` will be available in the same process this command\n runs in\n - env (dict, optional): dictionary of environment variables to use for\n the subprocess\n\n Returns:\n - stdout (string): if `return_all` is `False` (the default), only\n the last line of stdout is returned, otherwise all lines are\n returned, which is useful for passing result of shell command\n to other downstream tasks. 
If there is no output, `None` is\n returned.\n\n Raises:\n - prefect.engine.signals.FAIL: if command has an exit code other\n than 0\n \"\"\"\n if command is None:\n raise TypeError(\"run() missing required argument: 'command'\")\n\n current_env = os.environ.copy()\n current_env.update(env or {})\n with tempfile.NamedTemporaryFile(prefix=\"prefect-\") as tmp:\n if self.helper_script:\n tmp.write(self.helper_script.encode())\n tmp.write(\"\\n\".encode())\n tmp.write(command.encode())\n tmp.flush()\n sub_process = Popen(\n [self.shell, tmp.name], stdout=PIPE, stderr=STDOUT, env=current_env\n )\n lines = []\n line = None\n for raw_line in iter(sub_process.stdout.readline, b\"\"):\n line = raw_line.decode(\"utf-8\").rstrip()\n if self.return_all:\n lines.append(line)\n else:\n # if we're returning all, we don't log every line\n self.logger.debug(line)\n sub_process.wait()\n if sub_process.returncode:\n msg = \"Command failed with exit code {}\".format(sub_process.returncode,)\n self.logger.error(msg)\n\n if self.log_stderr:\n self.logger.error(\"\\n\".join(lines))\n\n raise prefect.engine.signals.FAIL(msg) from None # type: ignore\n if self.return_all:\n return lines\n else:\n return line\n", "path": "src/prefect/tasks/shell.py"}], "after_files": [{"content": "import os\nimport tempfile\nfrom subprocess import PIPE, STDOUT, Popen\nfrom typing import Any\n\nimport prefect\nfrom prefect.utilities.tasks import defaults_from_attrs\n\n\nclass ShellTask(prefect.Task):\n \"\"\"\n Task for running arbitrary shell commands.\n\n Args:\n - command (string, optional): shell command to be executed; can also be\n provided post-initialization by calling this task instance\n - env (dict, optional): dictionary of environment variables to use for\n the subprocess; can also be provided at runtime\n - helper_script (str, optional): a string representing a shell script, which\n will be executed prior to the `command` in the same process. Can be used to\n change directories, define helper functions, etc. when re-using this Task\n for different commands in a Flow\n - shell (string, optional): shell to run the command with; defaults to \"bash\"\n - return_all (bool, optional): boolean specifying whether this task\n should return all lines of stdout as a list, or just the last line\n as a string; defaults to `False`\n - log_stderr (bool, optional): boolean specifying whether this task\n should log the output from stderr in the case of a non-zero exit code;\n defaults to `False`\n - **kwargs: additional keyword arguments to pass to the Task constructor\n\n Example:\n ```python\n from prefect import Flow\n from prefect.tasks.shell import ShellTask\n\n task = ShellTask(helper_script=\"cd ~\")\n with Flow(\"My Flow\") as f:\n # both tasks will be executed in home directory\n contents = task(command='ls')\n mv_file = task(command='mv .vimrc /.vimrc')\n\n out = f.run()\n ```\n \"\"\"\n\n def __init__(\n self,\n command: str = None,\n env: dict = None,\n helper_script: str = None,\n shell: str = \"bash\",\n return_all: bool = False,\n log_stderr: bool = False,\n **kwargs: Any\n ):\n self.command = command\n self.env = env\n self.helper_script = helper_script\n self.shell = shell\n self.return_all = return_all\n self.log_stderr = log_stderr\n super().__init__(**kwargs)\n\n @defaults_from_attrs(\"command\", \"env\")\n def run(self, command: str = None, env: dict = None) -> str:\n \"\"\"\n Run the shell command.\n\n Args:\n - command (string): shell command to be executed; can also be\n provided at task initialization. 
Any variables / functions defined in\n `self.helper_script` will be available in the same process this command\n runs in\n - env (dict, optional): dictionary of environment variables to use for\n the subprocess\n\n Returns:\n - stdout (string): if `return_all` is `False` (the default), only\n the last line of stdout is returned, otherwise all lines are\n returned, which is useful for passing result of shell command\n to other downstream tasks. If there is no output, `None` is\n returned.\n\n Raises:\n - prefect.engine.signals.FAIL: if command has an exit code other\n than 0\n \"\"\"\n if command is None:\n raise TypeError(\"run() missing required argument: 'command'\")\n\n current_env = os.environ.copy()\n current_env.update(env or {})\n with tempfile.NamedTemporaryFile(prefix=\"prefect-\") as tmp:\n if self.helper_script:\n tmp.write(self.helper_script.encode())\n tmp.write(\"\\n\".encode())\n tmp.write(command.encode())\n tmp.flush()\n with Popen(\n [self.shell, tmp.name], stdout=PIPE, stderr=STDOUT, env=current_env\n ) as sub_process:\n lines = []\n line = None\n for raw_line in iter(sub_process.stdout.readline, b\"\"):\n line = raw_line.decode(\"utf-8\").rstrip()\n if self.return_all:\n lines.append(line)\n else:\n # if we're returning all, we don't log every line\n self.logger.debug(line)\n sub_process.wait()\n if sub_process.returncode:\n msg = \"Command failed with exit code {}\".format(\n sub_process.returncode,\n )\n self.logger.error(msg)\n\n if self.log_stderr:\n self.logger.error(\"\\n\".join(lines))\n\n raise prefect.engine.signals.FAIL(msg) from None # type: ignore\n if self.return_all:\n return lines\n else:\n return line\n", "path": "src/prefect/tasks/shell.py"}]}
| 2,012 | 447 |
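The golden diff in this entry reduces to one idiom: run `Popen` as a context manager so its stdout pipe is closed when the block exits, which is what makes the `ResourceWarning` go away. A self-contained version of that pattern follows, with a placeholder `echo` command standing in for the task's shell command (assumes a POSIX-style environment where `bash` is on the PATH).

```python
# Minimal form of the patched ShellTask.run() body: Popen as a context
# manager guarantees the pipe is closed even if reading raises.
from subprocess import PIPE, STDOUT, Popen

with Popen(["bash", "-c", "echo hello"], stdout=PIPE, stderr=STDOUT) as sub_process:
    for raw_line in iter(sub_process.stdout.readline, b""):
        print(raw_line.decode("utf-8").rstrip())
    sub_process.wait()
    if sub_process.returncode:
        raise RuntimeError(f"Command failed with exit code {sub_process.returncode}")
```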
gh_patches_debug_54725
|
rasdani/github-patches
|
git_diff
|
DDMAL__CantusDB-819
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Source Edit Page: sidebar: source navigation links not displayed
breaking this issue out from #483.
OldCantus - https://cantus.uwaterloo.ca/node/711311/edit:

NewCantus - http://206.12.88.113/edit-source/123653:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/views/source.py`
Content:
```
1 from django.views.generic import DetailView, ListView, CreateView, UpdateView
2 from django.db.models import Q, Prefetch
3 from main_app.models import Source, Provenance, Century
4 from main_app.forms import SourceCreateForm, SourceEditForm
5 from django.contrib import messages
6 from django.urls import reverse
7 from django.contrib.auth.mixins import LoginRequiredMixin
8 from django.http import HttpResponseRedirect
9 from django.contrib.auth.mixins import UserPassesTestMixin
10 from django.core.exceptions import PermissionDenied
11 from django.shortcuts import get_object_or_404
12 from main_app.views.chant import get_feast_selector_options
13
14
15 class SourceDetailView(DetailView):
16 model = Source
17 context_object_name = "source"
18 template_name = "source_detail.html"
19
20 def get_context_data(self, **kwargs):
21 source = self.get_object()
22 display_unpublished = self.request.user.is_authenticated
23 if (source.published is False) and (not display_unpublished):
24 raise PermissionDenied()
25
26 context = super().get_context_data(**kwargs)
27
28 if source.segment and source.segment.id == 4064:
29 # if this is a sequence source
30 context["sequences"] = source.sequence_set.order_by("s_sequence")
31 context["folios"] = (
32 source.sequence_set.values_list("folio", flat=True)
33 .distinct()
34 .order_by("folio")
35 )
36 else:
37 # if this is a chant source
38 folios = (
39 source.chant_set.values_list("folio", flat=True)
40 .distinct()
41 .order_by("folio")
42 )
43 context["folios"] = folios
44 # the options for the feast selector on the right, only chant sources have this
45 context["feasts_with_folios"] = get_feast_selector_options(source, folios)
46 return context
47
48
49 class SourceListView(ListView):
50 paginate_by = 100
51 context_object_name = "sources"
52 template_name = "source_list.html"
53
54 def get_context_data(self, **kwargs):
55 context = super().get_context_data(**kwargs)
56 context["provenances"] = (
57 Provenance.objects.all().order_by("name").values("id", "name")
58 )
59 context["centuries"] = (
60 Century.objects.all().order_by("name").values("id", "name")
61 )
62 return context
63
64 def get_queryset(self):
65 # use select_related() for foreign keys to reduce DB queries
66 queryset = Source.objects.select_related(
67 "rism_siglum", "segment", "provenance"
68 ).order_by("siglum")
69
70 display_unpublished = self.request.user.is_authenticated
71 if display_unpublished:
72 q_obj_filter = Q()
73 else:
74 q_obj_filter = Q(published=True)
75
76 if self.request.GET.get("century"):
77 century_name = Century.objects.get(id=self.request.GET.get("century")).name
78 q_obj_filter &= Q(century__name__icontains=century_name)
79
80 if self.request.GET.get("provenance"):
81 provenance_id = int(self.request.GET.get("provenance"))
82 q_obj_filter &= Q(provenance__id=provenance_id)
83 if self.request.GET.get("segment"):
84 segment_id = int(self.request.GET.get("segment"))
85 q_obj_filter &= Q(segment__id=segment_id)
86 if self.request.GET.get("fullSource") in ["true", "false"]:
87 full_source_str = self.request.GET.get("fullSource")
88 if full_source_str == "true":
89 full_source_q = Q(full_source=True) | Q(full_source=None)
90 q_obj_filter &= full_source_q
91 else:
92 q_obj_filter &= Q(full_source=False)
93
94 if self.request.GET.get("general"):
95 # Strip spaces at the beginning and end. Then make list of terms split on spaces
96 general_search_terms = self.request.GET.get("general").strip(" ").split(" ")
97 # We need a Q Object for each field we're gonna look into
98 title_q = Q()
99 siglum_q = Q()
100 rism_siglum_q = Q()
101 description_q = Q()
102 # it seems that old cantus don't look into title and provenance for the general search terms
103 # cantus.uwaterloo.ca/source/123901 this source cannot be found by searching its provenance 'Kremsmünster' in the general search field
104 # provenance_q = Q()
105 summary_q = Q()
106
107 # For each term, add it to the Q object of each field with an OR operation.
108 # We split the terms so that the words can be separated in the actual
109 # field, allowing for a more flexible search, and a field needs
110 # to match only one of the terms
111 for term in general_search_terms:
112 title_q |= Q(title__icontains=term)
113 siglum_q |= Q(siglum__icontains=term)
114 rism_siglum_q |= Q(rism_siglum__name__icontains=term) | Q(
115 rism_siglum__description__icontains=term
116 )
117 description_q |= Q(description__icontains=term)
118 summary_q |= Q(summary__icontains=term)
119 # provenance_q |= Q(provenance__name__icontains=term)
120 # All the Q objects are put together with OR.
121 # The end result is that at least one term has to match in at least one
122 # field
123 # general_search_q = (
124 # title_q | siglum_q | rism_siglum_q | description_q | provenance_q
125 # )
126 general_search_q = (
127 title_q | siglum_q | rism_siglum_q | description_q | summary_q
128 )
129 q_obj_filter &= general_search_q
130
131 # For the indexing notes search we follow the same procedure as above but with
132 # different fields
133 if self.request.GET.get("indexing"):
134 # Make list of terms split on spaces
135 indexing_search_terms = self.request.GET.get("indexing").split(" ")
136 # We need a Q Object for each field we're gonna look into
137 inventoried_by_q = Q()
138 full_text_entered_by_q = Q()
139 melodies_entered_by_q = Q()
140 proofreaders_q = Q()
141 other_editors_q = Q()
142 indexing_notes_q = Q()
143 # For each term, add it to the Q object of each field with an OR operation.
144 # We split the terms so that the words can be separated in the actual
145 # field, allowing for a more flexible search, and a field needs
146 # to match only one of the terms
147 for term in indexing_search_terms:
148 inventoried_by_q |= Q(inventoried_by__full_name__icontains=term)
149 full_text_entered_by_q |= Q(
150 full_text_entered_by__full_name__icontains=term
151 )
152 melodies_entered_by_q |= Q(
153 melodies_entered_by__full_name__icontains=term
154 )
155 proofreaders_q |= Q(proofreaders__full_name__icontains=term)
156 other_editors_q |= Q(other_editors__full_name__icontains=term)
157 indexing_notes_q |= Q(indexing_notes__icontains=term)
158 # All the Q objects are put together with OR.
159 # The end result is that at least one term has to match in at least one
160 # field
161 indexing_search_q = (
162 inventoried_by_q
163 | full_text_entered_by_q
164 | melodies_entered_by_q
165 | proofreaders_q
166 | other_editors_q
167 | indexing_notes_q
168 )
169 q_obj_filter &= indexing_search_q
170
171 return queryset.filter(q_obj_filter).prefetch_related(
172 Prefetch("century", queryset=Century.objects.all().order_by("id"))
173 )
174
175
176 class SourceCreateView(LoginRequiredMixin, UserPassesTestMixin, CreateView):
177 model = Source
178 template_name = "source_create_form.html"
179 form_class = SourceCreateForm
180
181 def test_func(self):
182 user = self.request.user
183 # checks if the user is allowed to create sources
184 is_authorized = user.groups.filter(
185 Q(name="project manager") | Q(name="editor") | Q(name="contributor")
186 ).exists()
187
188 if is_authorized:
189 return True
190 else:
191 return False
192
193 def get_success_url(self):
194 return reverse("source-create")
195
196 def form_valid(self, form):
197 form.instance.created_by = self.request.user
198 source = form.save()
199
200 # assign this source to the "current_editors"
201 current_editors = source.current_editors.all()
202
203 for editor in current_editors:
204 editor.sources_user_can_edit.add(source)
205
206 messages.success(
207 self.request,
208 "Source created successfully!",
209 )
210
211 return HttpResponseRedirect(self.get_success_url())
212
213
214 class SourceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):
215 template_name = "source_edit.html"
216 model = Source
217 form_class = SourceEditForm
218 pk_url_kwarg = "source_id"
219
220 def test_func(self):
221 user = self.request.user
222 source_id = self.kwargs.get(self.pk_url_kwarg)
223 source = get_object_or_404(Source, id=source_id)
224
225 assigned_to_source = user.sources_user_can_edit.filter(id=source_id)
226
227 # checks if the user is a project manager
228 is_project_manager = user.groups.filter(name="project manager").exists()
229 # checks if the user is an editor
230 is_editor = user.groups.filter(name="editor").exists()
231 # checks if the user is a contributor
232 is_contributor = user.groups.filter(name="contributor").exists()
233
234 if (
235 (is_project_manager)
236 or (is_editor and assigned_to_source)
237 or (is_editor and source.created_by == user)
238 or (is_contributor and source.created_by == user)
239 ):
240 return True
241 else:
242 return False
243
244 def form_valid(self, form):
245 form.instance.last_updated_by = self.request.user
246
247 # remove this source from the old "current_editors"
248 # assign this source to the new "current_editors"
249
250 old_current_editors = list(
251 Source.objects.get(id=form.instance.id).current_editors.all()
252 )
253 new_current_editors = form.cleaned_data["current_editors"]
254 source = form.save()
255
256 for old_editor in old_current_editors:
257 old_editor.sources_user_can_edit.remove(source)
258
259 for new_editor in new_current_editors:
260 new_editor.sources_user_can_edit.add(source)
261
262 return HttpResponseRedirect(self.get_success_url())
263
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django/cantusdb_project/main_app/views/source.py b/django/cantusdb_project/main_app/views/source.py
--- a/django/cantusdb_project/main_app/views/source.py
+++ b/django/cantusdb_project/main_app/views/source.py
@@ -211,7 +211,9 @@
return HttpResponseRedirect(self.get_success_url())
-class SourceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):
+class SourceEditView(
+ LoginRequiredMixin, UserPassesTestMixin, UpdateView, SourceDetailView
+):
template_name = "source_edit.html"
model = Source
form_class = SourceEditForm
|
{"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/source.py b/django/cantusdb_project/main_app/views/source.py\n--- a/django/cantusdb_project/main_app/views/source.py\n+++ b/django/cantusdb_project/main_app/views/source.py\n@@ -211,7 +211,9 @@\n return HttpResponseRedirect(self.get_success_url())\n \n \n-class SourceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):\n+class SourceEditView(\n+ LoginRequiredMixin, UserPassesTestMixin, UpdateView, SourceDetailView\n+):\n template_name = \"source_edit.html\"\n model = Source\n form_class = SourceEditForm\n", "issue": "Source Edit Page: sidebar: source navigation links not displayed\nbreaking this issue out from #483. \r\n\r\nOldCantus - https://cantus.uwaterloo.ca/node/711311/edit:\r\n\r\n\r\nNewCantus - http://206.12.88.113/edit-source/123653:\r\n\n", "before_files": [{"content": "from django.views.generic import DetailView, ListView, CreateView, UpdateView\nfrom django.db.models import Q, Prefetch\nfrom main_app.models import Source, Provenance, Century\nfrom main_app.forms import SourceCreateForm, SourceEditForm\nfrom django.contrib import messages\nfrom django.urls import reverse\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.http import HttpResponseRedirect\nfrom django.contrib.auth.mixins import UserPassesTestMixin\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404\nfrom main_app.views.chant import get_feast_selector_options\n\n\nclass SourceDetailView(DetailView):\n model = Source\n context_object_name = \"source\"\n template_name = \"source_detail.html\"\n\n def get_context_data(self, **kwargs):\n source = self.get_object()\n display_unpublished = self.request.user.is_authenticated\n if (source.published is False) and (not display_unpublished):\n raise PermissionDenied()\n\n context = super().get_context_data(**kwargs)\n\n if source.segment and source.segment.id == 4064:\n # if this is a sequence source\n context[\"sequences\"] = source.sequence_set.order_by(\"s_sequence\")\n context[\"folios\"] = (\n source.sequence_set.values_list(\"folio\", flat=True)\n .distinct()\n .order_by(\"folio\")\n )\n else:\n # if this is a chant source\n folios = (\n source.chant_set.values_list(\"folio\", flat=True)\n .distinct()\n .order_by(\"folio\")\n )\n context[\"folios\"] = folios\n # the options for the feast selector on the right, only chant sources have this\n context[\"feasts_with_folios\"] = get_feast_selector_options(source, folios)\n return context\n\n\nclass SourceListView(ListView):\n paginate_by = 100\n context_object_name = \"sources\"\n template_name = \"source_list.html\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"provenances\"] = (\n Provenance.objects.all().order_by(\"name\").values(\"id\", \"name\")\n )\n context[\"centuries\"] = (\n Century.objects.all().order_by(\"name\").values(\"id\", \"name\")\n )\n return context\n\n def get_queryset(self):\n # use select_related() for foreign keys to reduce DB queries\n queryset = Source.objects.select_related(\n \"rism_siglum\", \"segment\", \"provenance\"\n ).order_by(\"siglum\")\n\n display_unpublished = self.request.user.is_authenticated\n if display_unpublished:\n q_obj_filter = Q()\n else:\n q_obj_filter = Q(published=True)\n\n if self.request.GET.get(\"century\"):\n century_name = Century.objects.get(id=self.request.GET.get(\"century\")).name\n q_obj_filter &= Q(century__name__icontains=century_name)\n\n if 
self.request.GET.get(\"provenance\"):\n provenance_id = int(self.request.GET.get(\"provenance\"))\n q_obj_filter &= Q(provenance__id=provenance_id)\n if self.request.GET.get(\"segment\"):\n segment_id = int(self.request.GET.get(\"segment\"))\n q_obj_filter &= Q(segment__id=segment_id)\n if self.request.GET.get(\"fullSource\") in [\"true\", \"false\"]:\n full_source_str = self.request.GET.get(\"fullSource\")\n if full_source_str == \"true\":\n full_source_q = Q(full_source=True) | Q(full_source=None)\n q_obj_filter &= full_source_q\n else:\n q_obj_filter &= Q(full_source=False)\n\n if self.request.GET.get(\"general\"):\n # Strip spaces at the beginning and end. Then make list of terms split on spaces\n general_search_terms = self.request.GET.get(\"general\").strip(\" \").split(\" \")\n # We need a Q Object for each field we're gonna look into\n title_q = Q()\n siglum_q = Q()\n rism_siglum_q = Q()\n description_q = Q()\n # it seems that old cantus don't look into title and provenance for the general search terms\n # cantus.uwaterloo.ca/source/123901 this source cannot be found by searching its provenance 'Kremsm\u00fcnster' in the general search field\n # provenance_q = Q()\n summary_q = Q()\n\n # For each term, add it to the Q object of each field with an OR operation.\n # We split the terms so that the words can be separated in the actual\n # field, allowing for a more flexible search, and a field needs\n # to match only one of the terms\n for term in general_search_terms:\n title_q |= Q(title__icontains=term)\n siglum_q |= Q(siglum__icontains=term)\n rism_siglum_q |= Q(rism_siglum__name__icontains=term) | Q(\n rism_siglum__description__icontains=term\n )\n description_q |= Q(description__icontains=term)\n summary_q |= Q(summary__icontains=term)\n # provenance_q |= Q(provenance__name__icontains=term)\n # All the Q objects are put together with OR.\n # The end result is that at least one term has to match in at least one\n # field\n # general_search_q = (\n # title_q | siglum_q | rism_siglum_q | description_q | provenance_q\n # )\n general_search_q = (\n title_q | siglum_q | rism_siglum_q | description_q | summary_q\n )\n q_obj_filter &= general_search_q\n\n # For the indexing notes search we follow the same procedure as above but with\n # different fields\n if self.request.GET.get(\"indexing\"):\n # Make list of terms split on spaces\n indexing_search_terms = self.request.GET.get(\"indexing\").split(\" \")\n # We need a Q Object for each field we're gonna look into\n inventoried_by_q = Q()\n full_text_entered_by_q = Q()\n melodies_entered_by_q = Q()\n proofreaders_q = Q()\n other_editors_q = Q()\n indexing_notes_q = Q()\n # For each term, add it to the Q object of each field with an OR operation.\n # We split the terms so that the words can be separated in the actual\n # field, allowing for a more flexible search, and a field needs\n # to match only one of the terms\n for term in indexing_search_terms:\n inventoried_by_q |= Q(inventoried_by__full_name__icontains=term)\n full_text_entered_by_q |= Q(\n full_text_entered_by__full_name__icontains=term\n )\n melodies_entered_by_q |= Q(\n melodies_entered_by__full_name__icontains=term\n )\n proofreaders_q |= Q(proofreaders__full_name__icontains=term)\n other_editors_q |= Q(other_editors__full_name__icontains=term)\n indexing_notes_q |= Q(indexing_notes__icontains=term)\n # All the Q objects are put together with OR.\n # The end result is that at least one term has to match in at least one\n # field\n indexing_search_q = (\n inventoried_by_q\n | 
full_text_entered_by_q\n | melodies_entered_by_q\n | proofreaders_q\n | other_editors_q\n | indexing_notes_q\n )\n q_obj_filter &= indexing_search_q\n\n return queryset.filter(q_obj_filter).prefetch_related(\n Prefetch(\"century\", queryset=Century.objects.all().order_by(\"id\"))\n )\n\n\nclass SourceCreateView(LoginRequiredMixin, UserPassesTestMixin, CreateView):\n model = Source\n template_name = \"source_create_form.html\"\n form_class = SourceCreateForm\n\n def test_func(self):\n user = self.request.user\n # checks if the user is allowed to create sources\n is_authorized = user.groups.filter(\n Q(name=\"project manager\") | Q(name=\"editor\") | Q(name=\"contributor\")\n ).exists()\n\n if is_authorized:\n return True\n else:\n return False\n\n def get_success_url(self):\n return reverse(\"source-create\")\n\n def form_valid(self, form):\n form.instance.created_by = self.request.user\n source = form.save()\n\n # assign this source to the \"current_editors\"\n current_editors = source.current_editors.all()\n\n for editor in current_editors:\n editor.sources_user_can_edit.add(source)\n\n messages.success(\n self.request,\n \"Source created successfully!\",\n )\n\n return HttpResponseRedirect(self.get_success_url())\n\n\nclass SourceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):\n template_name = \"source_edit.html\"\n model = Source\n form_class = SourceEditForm\n pk_url_kwarg = \"source_id\"\n\n def test_func(self):\n user = self.request.user\n source_id = self.kwargs.get(self.pk_url_kwarg)\n source = get_object_or_404(Source, id=source_id)\n\n assigned_to_source = user.sources_user_can_edit.filter(id=source_id)\n\n # checks if the user is a project manager\n is_project_manager = user.groups.filter(name=\"project manager\").exists()\n # checks if the user is an editor\n is_editor = user.groups.filter(name=\"editor\").exists()\n # checks if the user is a contributor\n is_contributor = user.groups.filter(name=\"contributor\").exists()\n\n if (\n (is_project_manager)\n or (is_editor and assigned_to_source)\n or (is_editor and source.created_by == user)\n or (is_contributor and source.created_by == user)\n ):\n return True\n else:\n return False\n\n def form_valid(self, form):\n form.instance.last_updated_by = self.request.user\n\n # remove this source from the old \"current_editors\"\n # assign this source to the new \"current_editors\"\n\n old_current_editors = list(\n Source.objects.get(id=form.instance.id).current_editors.all()\n )\n new_current_editors = form.cleaned_data[\"current_editors\"]\n source = form.save()\n\n for old_editor in old_current_editors:\n old_editor.sources_user_can_edit.remove(source)\n\n for new_editor in new_current_editors:\n new_editor.sources_user_can_edit.add(source)\n\n return HttpResponseRedirect(self.get_success_url())\n", "path": "django/cantusdb_project/main_app/views/source.py"}], "after_files": [{"content": "from django.views.generic import DetailView, ListView, CreateView, UpdateView\nfrom django.db.models import Q, Prefetch\nfrom main_app.models import Source, Provenance, Century\nfrom main_app.forms import SourceCreateForm, SourceEditForm\nfrom django.contrib import messages\nfrom django.urls import reverse\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.http import HttpResponseRedirect\nfrom django.contrib.auth.mixins import UserPassesTestMixin\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404\nfrom main_app.views.chant import 
get_feast_selector_options\n\n\nclass SourceDetailView(DetailView):\n model = Source\n context_object_name = \"source\"\n template_name = \"source_detail.html\"\n\n def get_context_data(self, **kwargs):\n source = self.get_object()\n display_unpublished = self.request.user.is_authenticated\n if (source.published is False) and (not display_unpublished):\n raise PermissionDenied()\n\n context = super().get_context_data(**kwargs)\n\n if source.segment and source.segment.id == 4064:\n # if this is a sequence source\n context[\"sequences\"] = source.sequence_set.order_by(\"s_sequence\")\n context[\"folios\"] = (\n source.sequence_set.values_list(\"folio\", flat=True)\n .distinct()\n .order_by(\"folio\")\n )\n else:\n # if this is a chant source\n folios = (\n source.chant_set.values_list(\"folio\", flat=True)\n .distinct()\n .order_by(\"folio\")\n )\n context[\"folios\"] = folios\n # the options for the feast selector on the right, only chant sources have this\n context[\"feasts_with_folios\"] = get_feast_selector_options(source, folios)\n return context\n\n\nclass SourceListView(ListView):\n paginate_by = 100\n context_object_name = \"sources\"\n template_name = \"source_list.html\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"provenances\"] = (\n Provenance.objects.all().order_by(\"name\").values(\"id\", \"name\")\n )\n context[\"centuries\"] = (\n Century.objects.all().order_by(\"name\").values(\"id\", \"name\")\n )\n return context\n\n def get_queryset(self):\n # use select_related() for foreign keys to reduce DB queries\n queryset = Source.objects.select_related(\n \"rism_siglum\", \"segment\", \"provenance\"\n ).order_by(\"siglum\")\n\n display_unpublished = self.request.user.is_authenticated\n if display_unpublished:\n q_obj_filter = Q()\n else:\n q_obj_filter = Q(published=True)\n\n if self.request.GET.get(\"century\"):\n century_name = Century.objects.get(id=self.request.GET.get(\"century\")).name\n q_obj_filter &= Q(century__name__icontains=century_name)\n\n if self.request.GET.get(\"provenance\"):\n provenance_id = int(self.request.GET.get(\"provenance\"))\n q_obj_filter &= Q(provenance__id=provenance_id)\n if self.request.GET.get(\"segment\"):\n segment_id = int(self.request.GET.get(\"segment\"))\n q_obj_filter &= Q(segment__id=segment_id)\n if self.request.GET.get(\"fullSource\") in [\"true\", \"false\"]:\n full_source_str = self.request.GET.get(\"fullSource\")\n if full_source_str == \"true\":\n full_source_q = Q(full_source=True) | Q(full_source=None)\n q_obj_filter &= full_source_q\n else:\n q_obj_filter &= Q(full_source=False)\n\n if self.request.GET.get(\"general\"):\n # Strip spaces at the beginning and end. 
Then make list of terms split on spaces\n general_search_terms = self.request.GET.get(\"general\").strip(\" \").split(\" \")\n # We need a Q Object for each field we're gonna look into\n title_q = Q()\n siglum_q = Q()\n rism_siglum_q = Q()\n description_q = Q()\n # it seems that old cantus don't look into title and provenance for the general search terms\n # cantus.uwaterloo.ca/source/123901 this source cannot be found by searching its provenance 'Kremsm\u00fcnster' in the general search field\n # provenance_q = Q()\n summary_q = Q()\n\n # For each term, add it to the Q object of each field with an OR operation.\n # We split the terms so that the words can be separated in the actual\n # field, allowing for a more flexible search, and a field needs\n # to match only one of the terms\n for term in general_search_terms:\n title_q |= Q(title__icontains=term)\n siglum_q |= Q(siglum__icontains=term)\n rism_siglum_q |= Q(rism_siglum__name__icontains=term) | Q(\n rism_siglum__description__icontains=term\n )\n description_q |= Q(description__icontains=term)\n summary_q |= Q(summary__icontains=term)\n # provenance_q |= Q(provenance__name__icontains=term)\n # All the Q objects are put together with OR.\n # The end result is that at least one term has to match in at least one\n # field\n # general_search_q = (\n # title_q | siglum_q | rism_siglum_q | description_q | provenance_q\n # )\n general_search_q = (\n title_q | siglum_q | rism_siglum_q | description_q | summary_q\n )\n q_obj_filter &= general_search_q\n\n # For the indexing notes search we follow the same procedure as above but with\n # different fields\n if self.request.GET.get(\"indexing\"):\n # Make list of terms split on spaces\n indexing_search_terms = self.request.GET.get(\"indexing\").split(\" \")\n # We need a Q Object for each field we're gonna look into\n inventoried_by_q = Q()\n full_text_entered_by_q = Q()\n melodies_entered_by_q = Q()\n proofreaders_q = Q()\n other_editors_q = Q()\n indexing_notes_q = Q()\n # For each term, add it to the Q object of each field with an OR operation.\n # We split the terms so that the words can be separated in the actual\n # field, allowing for a more flexible search, and a field needs\n # to match only one of the terms\n for term in indexing_search_terms:\n inventoried_by_q |= Q(inventoried_by__full_name__icontains=term)\n full_text_entered_by_q |= Q(\n full_text_entered_by__full_name__icontains=term\n )\n melodies_entered_by_q |= Q(\n melodies_entered_by__full_name__icontains=term\n )\n proofreaders_q |= Q(proofreaders__full_name__icontains=term)\n other_editors_q |= Q(other_editors__full_name__icontains=term)\n indexing_notes_q |= Q(indexing_notes__icontains=term)\n # All the Q objects are put together with OR.\n # The end result is that at least one term has to match in at least one\n # field\n indexing_search_q = (\n inventoried_by_q\n | full_text_entered_by_q\n | melodies_entered_by_q\n | proofreaders_q\n | other_editors_q\n | indexing_notes_q\n )\n q_obj_filter &= indexing_search_q\n\n return queryset.filter(q_obj_filter).prefetch_related(\n Prefetch(\"century\", queryset=Century.objects.all().order_by(\"id\"))\n )\n\n\nclass SourceCreateView(LoginRequiredMixin, UserPassesTestMixin, CreateView):\n model = Source\n template_name = \"source_create_form.html\"\n form_class = SourceCreateForm\n\n def test_func(self):\n user = self.request.user\n # checks if the user is allowed to create sources\n is_authorized = user.groups.filter(\n Q(name=\"project manager\") | Q(name=\"editor\") | 
Q(name=\"contributor\")\n ).exists()\n\n if is_authorized:\n return True\n else:\n return False\n\n def get_success_url(self):\n return reverse(\"source-create\")\n\n def form_valid(self, form):\n form.instance.created_by = self.request.user\n source = form.save()\n\n # assign this source to the \"current_editors\"\n current_editors = source.current_editors.all()\n\n for editor in current_editors:\n editor.sources_user_can_edit.add(source)\n\n messages.success(\n self.request,\n \"Source created successfully!\",\n )\n\n return HttpResponseRedirect(self.get_success_url())\n\n\nclass SourceEditView(\n LoginRequiredMixin, UserPassesTestMixin, UpdateView, SourceDetailView\n):\n template_name = \"source_edit.html\"\n model = Source\n form_class = SourceEditForm\n pk_url_kwarg = \"source_id\"\n\n def test_func(self):\n user = self.request.user\n source_id = self.kwargs.get(self.pk_url_kwarg)\n source = get_object_or_404(Source, id=source_id)\n\n assigned_to_source = user.sources_user_can_edit.filter(id=source_id)\n\n # checks if the user is a project manager\n is_project_manager = user.groups.filter(name=\"project manager\").exists()\n # checks if the user is an editor\n is_editor = user.groups.filter(name=\"editor\").exists()\n # checks if the user is a contributor\n is_contributor = user.groups.filter(name=\"contributor\").exists()\n\n if (\n (is_project_manager)\n or (is_editor and assigned_to_source)\n or (is_editor and source.created_by == user)\n or (is_contributor and source.created_by == user)\n ):\n return True\n else:\n return False\n\n def form_valid(self, form):\n form.instance.last_updated_by = self.request.user\n\n # remove this source from the old \"current_editors\"\n # assign this source to the new \"current_editors\"\n\n old_current_editors = list(\n Source.objects.get(id=form.instance.id).current_editors.all()\n )\n new_current_editors = form.cleaned_data[\"current_editors\"]\n source = form.save()\n\n for old_editor in old_current_editors:\n old_editor.sources_user_can_edit.remove(source)\n\n for new_editor in new_current_editors:\n new_editor.sources_user_can_edit.add(source)\n\n return HttpResponseRedirect(self.get_success_url())\n", "path": "django/cantusdb_project/main_app/views/source.py"}]}
| 3,496 | 146 |
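The one-line base-class change in this entry works through Python's method resolution order: once `SourceDetailView` is added to the edit view's bases, the cooperative `super()` chain that Django's view mixins use reaches the detail view's `get_context_data`, which is where the sidebar's folio and feast context is built. A toy model of that cooperation, with stand-in classes and hypothetical sidebar values rather than the project's real views:

```python
# Toy MRO demonstration: with the detail view in the bases, both views
# contribute to the context dict via cooperative super() calls.
class ContextMixin:
    def get_context_data(self):
        return {}


class UpdateView(ContextMixin):
    def get_context_data(self):
        data = super().get_context_data()
        data["form"] = "source edit form"      # stand-in for the edit form
        return data


class SourceDetailView(ContextMixin):
    def get_context_data(self):
        data = super().get_context_data()
        data["folios"] = ["001r", "001v"]      # stand-in sidebar data
        return data


class SourceEditView(UpdateView, SourceDetailView):
    pass


print(SourceEditView().get_context_data())
# {'folios': ['001r', '001v'], 'form': 'source edit form'}
```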
gh_patches_debug_593
|
rasdani/github-patches
|
git_diff
|
projectmesa__mesa-1437
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
v1.1.0 Safford Release
Milestone: https://github.com/projectmesa/mesa/milestone/31
Highlighted changes:
- #1376 > 6x perf speedup for add/remove agent in `ContinuousSpace`
- #1391 correctness fix for `SimultaneousActivation` and `StagedActivation`
- #1399 make `self.running = True` optional. We need to tell existing users that initializing this is no longer necessary, and so, reducing the boilerplate code
- #1435 Allow user-specified local dir to be served by Tornado. Needed by Mesa-Geo
- #1413 Allow batch_run to take arbitrary parameters
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesa/__init__.py`
Content:
```
1 """
2 Mesa Agent-Based Modeling Framework
3
4 Core Objects: Model, and Agent.
5
6 """
7 import datetime
8
9 from mesa.model import Model
10 from mesa.agent import Agent
11
12 import mesa.time as time
13 import mesa.space as space
14 import mesa.flat.visualization as visualization
15 from mesa.datacollection import DataCollector
16 from mesa.batchrunner import batch_run # noqa
17
18 __all__ = [
19 "Model",
20 "Agent",
21 "time",
22 "space",
23 "visualization",
24 "DataCollector",
25 "batch_run",
26 ]
27
28 __title__ = "mesa"
29 __version__ = "1.0.0"
30 __license__ = "Apache 2.0"
31 __copyright__ = f"Copyright {datetime.date.today().year} Project Mesa Team"
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mesa/__init__.py b/mesa/__init__.py
--- a/mesa/__init__.py
+++ b/mesa/__init__.py
@@ -26,6 +26,6 @@
]
__title__ = "mesa"
-__version__ = "1.0.0"
+__version__ = "1.1.0"
__license__ = "Apache 2.0"
__copyright__ = f"Copyright {datetime.date.today().year} Project Mesa Team"
|
{"golden_diff": "diff --git a/mesa/__init__.py b/mesa/__init__.py\n--- a/mesa/__init__.py\n+++ b/mesa/__init__.py\n@@ -26,6 +26,6 @@\n ]\n \n __title__ = \"mesa\"\n-__version__ = \"1.0.0\"\n+__version__ = \"1.1.0\"\n __license__ = \"Apache 2.0\"\n __copyright__ = f\"Copyright {datetime.date.today().year} Project Mesa Team\"\n", "issue": "v1.1.0 Safford Release\nMilestone: https://github.com/projectmesa/mesa/milestone/31\r\n\r\nHighlighted changes:\r\n- #1376 > 6x perf speedup for add/remove agent in `ContinuousSpace`\r\n- #1391 correctness fix for `SimultaneousActivation` and `StagedActivation`\r\n- #1399 make `self.running = True` optional. We need to tell existing users that initializing this is no longer necessary, and so, reducing the boilerplate code\r\n- #1435 Allow user-specified local dir to be served by Tornado. Needed by Mesa-Geo\r\n- #1413 Allow batch_run to take arbitrary parameters\n", "before_files": [{"content": "\"\"\"\nMesa Agent-Based Modeling Framework\n\nCore Objects: Model, and Agent.\n\n\"\"\"\nimport datetime\n\nfrom mesa.model import Model\nfrom mesa.agent import Agent\n\nimport mesa.time as time\nimport mesa.space as space\nimport mesa.flat.visualization as visualization\nfrom mesa.datacollection import DataCollector\nfrom mesa.batchrunner import batch_run # noqa\n\n__all__ = [\n \"Model\",\n \"Agent\",\n \"time\",\n \"space\",\n \"visualization\",\n \"DataCollector\",\n \"batch_run\",\n]\n\n__title__ = \"mesa\"\n__version__ = \"1.0.0\"\n__license__ = \"Apache 2.0\"\n__copyright__ = f\"Copyright {datetime.date.today().year} Project Mesa Team\"\n", "path": "mesa/__init__.py"}], "after_files": [{"content": "\"\"\"\nMesa Agent-Based Modeling Framework\n\nCore Objects: Model, and Agent.\n\n\"\"\"\nimport datetime\n\nfrom mesa.model import Model\nfrom mesa.agent import Agent\n\nimport mesa.time as time\nimport mesa.space as space\nimport mesa.flat.visualization as visualization\nfrom mesa.datacollection import DataCollector\nfrom mesa.batchrunner import batch_run # noqa\n\n__all__ = [\n \"Model\",\n \"Agent\",\n \"time\",\n \"space\",\n \"visualization\",\n \"DataCollector\",\n \"batch_run\",\n]\n\n__title__ = \"mesa\"\n__version__ = \"1.1.0\"\n__license__ = \"Apache 2.0\"\n__copyright__ = f\"Copyright {datetime.date.today().year} Project Mesa Team\"\n", "path": "mesa/__init__.py"}]}
| 626 | 111 |
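The fix in this entry is a single version-string bump, but the rest of `mesa/__init__.py` consumes it indirectly: the `VERSION` tuple is derived from `__version__` through `int_or_str`, so the bump propagates without further edits. A quick self-contained check of that derivation:

```python
# Reproduces the VERSION derivation from mesa/__init__.py with the bumped string.
def int_or_str(value):
    try:
        return int(value)
    except ValueError:
        return value


__version__ = "1.1.0"
VERSION = tuple(map(int_or_str, __version__.split(".")))
print(VERSION)  # (1, 1, 0); a pre-release such as "4.1.0rc2" yields (4, 1, '0rc2')
```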
gh_patches_debug_31948
|
rasdani/github-patches
|
git_diff
|
redis__redis-py-1791
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
redis-py version attribute should be decoupled from the redis module
Following the conversation from https://github.com/redis/redis-py/issues/1625#issuecomment-991744836, it looks like importing the `redis` module prior to installation in `setup.py` for the `version` attribute is not ideal.
Currently there are two places where the module version is required.
- `setup.py` for module installation
- `redis/__init__.py` for module level `__version__` attribute
One way to fix this is to maintain a `version.py` file in the top-level directory and use it as the source of truth in both of the above places.
@chayim @hartwork What do you think? I can create a PR for this :)
--- END ISSUE ---
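The proposal in the issue, a single source of truth that `setup.py` can read without importing the not-yet-installed package, could look roughly like the sketch below. The file name `redis/version.py` and the regex-based reader are assumptions made for illustration, not the project's actual layout.

```python
# Hypothetical redis/version.py would contain a single assignment:
#     __version__ = "4.1.0rc2"
#
# Hypothetical helper for setup.py: pull the string out of that file with a
# regex instead of importing the redis package itself.
import re


def read_version(path="redis/version.py"):
    with open(path, encoding="utf-8") as fh:
        match = re.search(r'__version__\s*=\s*"([^"]+)"', fh.read())
    if match is None:
        raise RuntimeError(f"cannot find __version__ in {path}")
    return match.group(1)
```

`redis/__init__.py` could then re-export the same string (for example via `from redis.version import __version__`) so the module attribute and the packaging metadata stay in sync.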
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 from setuptools import find_packages, setup
3
4 import redis
5
6 setup(
7 name="redis",
8 description="Python client for Redis database and key-value store",
9 long_description=open("README.md").read().strip(),
10 long_description_content_type="text/markdown",
11 keywords=["Redis", "key-value store", "database"],
12 license="MIT",
13 version=redis.__version__,
14 packages=find_packages(
15 include=[
16 "redis",
17 "redis.commands",
18 "redis.commands.bf",
19 "redis.commands.json",
20 "redis.commands.search",
21 "redis.commands.timeseries",
22 "redis.commands.graph",
23 ]
24 ),
25 url="https://github.com/redis/redis-py",
26 author="Redis Inc.",
27 author_email="[email protected]",
28 python_requires=">=3.6",
29 setup_requires=[
30 "packaging>=21.3",
31 ],
32 install_requires=[
33 "deprecated>=1.2.3",
34 "packaging>=21.3",
35 ],
36 classifiers=[
37 "Development Status :: 5 - Production/Stable",
38 "Environment :: Console",
39 "Intended Audience :: Developers",
40 "License :: OSI Approved :: MIT License",
41 "Operating System :: OS Independent",
42 "Programming Language :: Python",
43 "Programming Language :: Python :: 3",
44 "Programming Language :: Python :: 3 :: Only",
45 "Programming Language :: Python :: 3.6",
46 "Programming Language :: Python :: 3.7",
47 "Programming Language :: Python :: 3.8",
48 "Programming Language :: Python :: 3.9",
49 "Programming Language :: Python :: 3.10",
50 "Programming Language :: Python :: Implementation :: CPython",
51 "Programming Language :: Python :: Implementation :: PyPy",
52 ],
53 extras_require={
54 "hiredis": ["hiredis>=1.0.0"],
55 },
56 )
57
```
Path: `redis/__init__.py`
Content:
```
1 from redis.client import Redis, StrictRedis
2 from redis.cluster import RedisCluster
3 from redis.connection import (
4 BlockingConnectionPool,
5 Connection,
6 ConnectionPool,
7 SSLConnection,
8 UnixDomainSocketConnection,
9 )
10 from redis.exceptions import (
11 AuthenticationError,
12 AuthenticationWrongNumberOfArgsError,
13 BusyLoadingError,
14 ChildDeadlockedError,
15 ConnectionError,
16 DataError,
17 InvalidResponse,
18 PubSubError,
19 ReadOnlyError,
20 RedisError,
21 ResponseError,
22 TimeoutError,
23 WatchError,
24 )
25 from redis.sentinel import (
26 Sentinel,
27 SentinelConnectionPool,
28 SentinelManagedConnection,
29 SentinelManagedSSLConnection,
30 )
31 from redis.utils import from_url
32
33
34 def int_or_str(value):
35 try:
36 return int(value)
37 except ValueError:
38 return value
39
40
41 __version__ = "4.1.0rc2"
42
43
44 VERSION = tuple(map(int_or_str, __version__.split(".")))
45
46 __all__ = [
47 "AuthenticationError",
48 "AuthenticationWrongNumberOfArgsError",
49 "BlockingConnectionPool",
50 "BusyLoadingError",
51 "ChildDeadlockedError",
52 "Connection",
53 "ConnectionError",
54 "ConnectionPool",
55 "DataError",
56 "from_url",
57 "InvalidResponse",
58 "PubSubError",
59 "ReadOnlyError",
60 "Redis",
61 "RedisCluster",
62 "RedisError",
63 "ResponseError",
64 "Sentinel",
65 "SentinelConnectionPool",
66 "SentinelManagedConnection",
67 "SentinelManagedSSLConnection",
68 "SSLConnection",
69 "StrictRedis",
70 "TimeoutError",
71 "UnixDomainSocketConnection",
72 "WatchError",
73 ]
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redis/__init__.py b/redis/__init__.py
--- a/redis/__init__.py
+++ b/redis/__init__.py
@@ -1,3 +1,10 @@
+import sys
+
+if sys.version_info >= (3, 8):
+ from importlib import metadata
+else:
+ import importlib_metadata as metadata
+
from redis.client import Redis, StrictRedis
from redis.cluster import RedisCluster
from redis.connection import (
@@ -38,7 +45,10 @@
return value
-__version__ = "4.1.0rc2"
+try:
+ __version__ = metadata.version("redis")
+except metadata.PackageNotFoundError:
+ __version__ = "99.99.99"
VERSION = tuple(map(int_or_str, __version__.split(".")))
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,8 +1,6 @@
#!/usr/bin/env python
from setuptools import find_packages, setup
-import redis
-
setup(
name="redis",
description="Python client for Redis database and key-value store",
@@ -10,7 +8,7 @@
long_description_content_type="text/markdown",
keywords=["Redis", "key-value store", "database"],
license="MIT",
- version=redis.__version__,
+ version="4.1.0rc2",
packages=find_packages(
include=[
"redis",
@@ -26,12 +24,10 @@
author="Redis Inc.",
author_email="[email protected]",
python_requires=">=3.6",
- setup_requires=[
- "packaging>=21.3",
- ],
install_requires=[
"deprecated>=1.2.3",
"packaging>=21.3",
+ 'importlib-metadata >= 1.0; python_version < "3.8"',
],
classifiers=[
"Development Status :: 5 - Production/Stable",
|
{"golden_diff": "diff --git a/redis/__init__.py b/redis/__init__.py\n--- a/redis/__init__.py\n+++ b/redis/__init__.py\n@@ -1,3 +1,10 @@\n+import sys\n+\n+if sys.version_info >= (3, 8):\n+ from importlib import metadata\n+else:\n+ import importlib_metadata as metadata\n+\n from redis.client import Redis, StrictRedis\n from redis.cluster import RedisCluster\n from redis.connection import (\n@@ -38,7 +45,10 @@\n return value\n \n \n-__version__ = \"4.1.0rc2\"\n+try:\n+ __version__ = metadata.version(\"redis\")\n+except metadata.PackageNotFoundError:\n+ __version__ = \"99.99.99\"\n \n \n VERSION = tuple(map(int_or_str, __version__.split(\".\")))\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,8 +1,6 @@\n #!/usr/bin/env python\n from setuptools import find_packages, setup\n \n-import redis\n-\n setup(\n name=\"redis\",\n description=\"Python client for Redis database and key-value store\",\n@@ -10,7 +8,7 @@\n long_description_content_type=\"text/markdown\",\n keywords=[\"Redis\", \"key-value store\", \"database\"],\n license=\"MIT\",\n- version=redis.__version__,\n+ version=\"4.1.0rc2\",\n packages=find_packages(\n include=[\n \"redis\",\n@@ -26,12 +24,10 @@\n author=\"Redis Inc.\",\n author_email=\"[email protected]\",\n python_requires=\">=3.6\",\n- setup_requires=[\n- \"packaging>=21.3\",\n- ],\n install_requires=[\n \"deprecated>=1.2.3\",\n \"packaging>=21.3\",\n+ 'importlib-metadata >= 1.0; python_version < \"3.8\"',\n ],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n", "issue": "redis-py version attribute should be decoupled from the redis module\nFollowing the conversation from https://github.com/redis/redis-py/issues/1625#issuecomment-991744836 looks like importing `redis` module prior to installation in `setup.py` for `version` attribute is not ideal.\r\n\r\nCurrently there are two places where module version is required.\r\n- `setup.py` for module installation\r\n- `redis/__init__.py` for module level `__version__` attribute\r\n\r\nOne way to fix this is to maintain a `version.py` file in top level directory and using that as source of truth in both the above places. \r\n\r\n@chayim @hartwork What do you think? 
I can create a PR for this :)\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import find_packages, setup\n\nimport redis\n\nsetup(\n name=\"redis\",\n description=\"Python client for Redis database and key-value store\",\n long_description=open(\"README.md\").read().strip(),\n long_description_content_type=\"text/markdown\",\n keywords=[\"Redis\", \"key-value store\", \"database\"],\n license=\"MIT\",\n version=redis.__version__,\n packages=find_packages(\n include=[\n \"redis\",\n \"redis.commands\",\n \"redis.commands.bf\",\n \"redis.commands.json\",\n \"redis.commands.search\",\n \"redis.commands.timeseries\",\n \"redis.commands.graph\",\n ]\n ),\n url=\"https://github.com/redis/redis-py\",\n author=\"Redis Inc.\",\n author_email=\"[email protected]\",\n python_requires=\">=3.6\",\n setup_requires=[\n \"packaging>=21.3\",\n ],\n install_requires=[\n \"deprecated>=1.2.3\",\n \"packaging>=21.3\",\n ],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n extras_require={\n \"hiredis\": [\"hiredis>=1.0.0\"],\n },\n)\n", "path": "setup.py"}, {"content": "from redis.client import Redis, StrictRedis\nfrom redis.cluster import RedisCluster\nfrom redis.connection import (\n BlockingConnectionPool,\n Connection,\n ConnectionPool,\n SSLConnection,\n UnixDomainSocketConnection,\n)\nfrom redis.exceptions import (\n AuthenticationError,\n AuthenticationWrongNumberOfArgsError,\n BusyLoadingError,\n ChildDeadlockedError,\n ConnectionError,\n DataError,\n InvalidResponse,\n PubSubError,\n ReadOnlyError,\n RedisError,\n ResponseError,\n TimeoutError,\n WatchError,\n)\nfrom redis.sentinel import (\n Sentinel,\n SentinelConnectionPool,\n SentinelManagedConnection,\n SentinelManagedSSLConnection,\n)\nfrom redis.utils import from_url\n\n\ndef int_or_str(value):\n try:\n return int(value)\n except ValueError:\n return value\n\n\n__version__ = \"4.1.0rc2\"\n\n\nVERSION = tuple(map(int_or_str, __version__.split(\".\")))\n\n__all__ = [\n \"AuthenticationError\",\n \"AuthenticationWrongNumberOfArgsError\",\n \"BlockingConnectionPool\",\n \"BusyLoadingError\",\n \"ChildDeadlockedError\",\n \"Connection\",\n \"ConnectionError\",\n \"ConnectionPool\",\n \"DataError\",\n \"from_url\",\n \"InvalidResponse\",\n \"PubSubError\",\n \"ReadOnlyError\",\n \"Redis\",\n \"RedisCluster\",\n \"RedisError\",\n \"ResponseError\",\n \"Sentinel\",\n \"SentinelConnectionPool\",\n \"SentinelManagedConnection\",\n \"SentinelManagedSSLConnection\",\n \"SSLConnection\",\n \"StrictRedis\",\n \"TimeoutError\",\n \"UnixDomainSocketConnection\",\n \"WatchError\",\n]\n", "path": "redis/__init__.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import find_packages, setup\n\nsetup(\n name=\"redis\",\n description=\"Python client for Redis database and key-value store\",\n long_description=open(\"README.md\").read().strip(),\n 
long_description_content_type=\"text/markdown\",\n keywords=[\"Redis\", \"key-value store\", \"database\"],\n license=\"MIT\",\n version=\"4.1.0rc2\",\n packages=find_packages(\n include=[\n \"redis\",\n \"redis.commands\",\n \"redis.commands.bf\",\n \"redis.commands.json\",\n \"redis.commands.search\",\n \"redis.commands.timeseries\",\n \"redis.commands.graph\",\n ]\n ),\n url=\"https://github.com/redis/redis-py\",\n author=\"Redis Inc.\",\n author_email=\"[email protected]\",\n python_requires=\">=3.6\",\n install_requires=[\n \"deprecated>=1.2.3\",\n \"packaging>=21.3\",\n 'importlib-metadata >= 1.0; python_version < \"3.8\"',\n ],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n extras_require={\n \"hiredis\": [\"hiredis>=1.0.0\"],\n },\n)\n", "path": "setup.py"}, {"content": "import sys\n\nif sys.version_info >= (3, 8):\n from importlib import metadata\nelse:\n import importlib_metadata as metadata\n\nfrom redis.client import Redis, StrictRedis\nfrom redis.cluster import RedisCluster\nfrom redis.connection import (\n BlockingConnectionPool,\n Connection,\n ConnectionPool,\n SSLConnection,\n UnixDomainSocketConnection,\n)\nfrom redis.exceptions import (\n AuthenticationError,\n AuthenticationWrongNumberOfArgsError,\n BusyLoadingError,\n ChildDeadlockedError,\n ConnectionError,\n DataError,\n InvalidResponse,\n PubSubError,\n ReadOnlyError,\n RedisError,\n ResponseError,\n TimeoutError,\n WatchError,\n)\nfrom redis.sentinel import (\n Sentinel,\n SentinelConnectionPool,\n SentinelManagedConnection,\n SentinelManagedSSLConnection,\n)\nfrom redis.utils import from_url\n\n\ndef int_or_str(value):\n try:\n return int(value)\n except ValueError:\n return value\n\n\ntry:\n __version__ = metadata.version(\"redis\")\nexcept metadata.PackageNotFoundError:\n __version__ = \"99.99.99\"\n\n\nVERSION = tuple(map(int_or_str, __version__.split(\".\")))\n\n__all__ = [\n \"AuthenticationError\",\n \"AuthenticationWrongNumberOfArgsError\",\n \"BlockingConnectionPool\",\n \"BusyLoadingError\",\n \"ChildDeadlockedError\",\n \"Connection\",\n \"ConnectionError\",\n \"ConnectionPool\",\n \"DataError\",\n \"from_url\",\n \"InvalidResponse\",\n \"PubSubError\",\n \"ReadOnlyError\",\n \"Redis\",\n \"RedisCluster\",\n \"RedisError\",\n \"ResponseError\",\n \"Sentinel\",\n \"SentinelConnectionPool\",\n \"SentinelManagedConnection\",\n \"SentinelManagedSSLConnection\",\n \"SSLConnection\",\n \"StrictRedis\",\n \"TimeoutError\",\n \"UnixDomainSocketConnection\",\n \"WatchError\",\n]\n", "path": "redis/__init__.py"}]}
| 1,434 | 447 |
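The version-resolution pattern applied in the diff above can be read on its own as a small sketch: the installed distribution's metadata becomes the single source of truth, and `setup.py` carries the literal version string instead of importing `redis`. The package name and fallback value below are simply the ones used in the diff; treat this as an illustrative sketch rather than the packaged code.

```python
import sys

if sys.version_info >= (3, 8):
    from importlib import metadata
else:
    # Backport package; this is why the diff adds importlib-metadata to
    # install_requires for Python < 3.8.
    import importlib_metadata as metadata

try:
    # Resolved from the installed distribution's metadata, which setuptools
    # records from the version= argument in setup.py.
    __version__ = metadata.version("redis")
except metadata.PackageNotFoundError:
    # Running from a source checkout that was never pip-installed.
    __version__ = "99.99.99"
```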
gh_patches_debug_22170
|
rasdani/github-patches
|
git_diff
|
borgbackup__borg-1743
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't mount on ubuntu. Missing dependencies ?
Trying to mount a backup archive on ubuntu 16.04 fails with:
`# borg mount ~/.backup/ /tmp/pqp/`
`Loading fuse support failed [ImportError: No module named 'llfuse']`
I have installed borgbackup from ubuntu repositories. Is there any missing dependencies on ubuntu packages ?
> `# dpkg -l | grep -i fuse`
> ii exfat-fuse 1.2.3-1 amd64 read and write exFAT driver for FUSE
> ii fuse 2.9.4-1ubuntu3.1 amd64 Filesystem in Userspace
> ii gvfs-fuse 1.28.2-1ubuntu1~16.04.1 amd64 userspace virtual filesystem - fuse server
> ii ifuse 1.1.2-0.1build3 amd64 FUSE module for iPhone and iPod Touch devices
> ii libfuse2:amd64 2.9.4-1ubuntu3.1 amd64 Filesystem in Userspace (library)
> ii ntfs-3g 1:2015.3.14AR.1-1build1 amd64 read/write NTFS driver for FUSE
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- encoding: utf-8 *-*
2 import os
3 import re
4 import sys
5 from glob import glob
6
7 from distutils.command.build import build
8 from distutils.core import Command
9
10 min_python = (3, 4)
11 my_python = sys.version_info
12
13 if my_python < min_python:
14 print("Borg requires Python %d.%d or later" % min_python)
15 sys.exit(1)
16
17 # Are we building on ReadTheDocs?
18 on_rtd = os.environ.get('READTHEDOCS')
19
20 # msgpack pure python data corruption was fixed in 0.4.6.
21 # Also, we might use some rather recent API features.
22 install_requires = ['msgpack-python>=0.4.6', ]
23
24 extras_require = {
25 # llfuse 0.40 (tested, proven, ok), needs FUSE version >= 2.8.0
26 # llfuse 0.41 (tested shortly, looks ok), needs FUSE version >= 2.8.0
27 # llfuse 0.41.1 (tested shortly, looks ok), needs FUSE version >= 2.8.0
28 # llfuse 0.42 (tested shortly, looks ok), needs FUSE version >= 2.8.0
29 # llfuse 1.0 (tested shortly, looks ok), needs FUSE version >= 2.8.0
30 # llfuse 2.0 will break API
31 'fuse': ['llfuse<2.0', ],
32 }
33
34 if sys.platform.startswith('freebsd'):
35 # llfuse was frequently broken / did not build on freebsd
36 # llfuse 0.41.1, 1.1 are ok
37 extras_require['fuse'] = ['llfuse <2.0, !=0.42.*, !=0.43, !=1.0', ]
38
39 from setuptools import setup, Extension
40 from setuptools.command.sdist import sdist
41
42
43 compress_source = 'borg/compress.pyx'
44 crypto_source = 'borg/crypto.pyx'
45 chunker_source = 'borg/chunker.pyx'
46 hashindex_source = 'borg/hashindex.pyx'
47 platform_linux_source = 'borg/platform_linux.pyx'
48 platform_darwin_source = 'borg/platform_darwin.pyx'
49 platform_freebsd_source = 'borg/platform_freebsd.pyx'
50
51 try:
52 from Cython.Distutils import build_ext
53 import Cython.Compiler.Main as cython_compiler
54
55 class Sdist(sdist):
56 def __init__(self, *args, **kwargs):
57 for src in glob('borg/*.pyx'):
58 cython_compiler.compile(src, cython_compiler.default_options)
59 super().__init__(*args, **kwargs)
60
61 def make_distribution(self):
62 self.filelist.extend([
63 'borg/compress.c',
64 'borg/crypto.c',
65 'borg/chunker.c', 'borg/_chunker.c',
66 'borg/hashindex.c', 'borg/_hashindex.c',
67 'borg/platform_linux.c',
68 'borg/platform_freebsd.c',
69 'borg/platform_darwin.c',
70 ])
71 super().make_distribution()
72
73 except ImportError:
74 class Sdist(sdist):
75 def __init__(self, *args, **kwargs):
76 raise Exception('Cython is required to run sdist')
77
78 compress_source = compress_source.replace('.pyx', '.c')
79 crypto_source = crypto_source.replace('.pyx', '.c')
80 chunker_source = chunker_source.replace('.pyx', '.c')
81 hashindex_source = hashindex_source.replace('.pyx', '.c')
82 platform_linux_source = platform_linux_source.replace('.pyx', '.c')
83 platform_freebsd_source = platform_freebsd_source.replace('.pyx', '.c')
84 platform_darwin_source = platform_darwin_source.replace('.pyx', '.c')
85 from distutils.command.build_ext import build_ext
86 if not on_rtd and not all(os.path.exists(path) for path in [
87 compress_source, crypto_source, chunker_source, hashindex_source,
88 platform_linux_source, platform_freebsd_source, platform_darwin_source]):
89 raise ImportError('The GIT version of Borg needs Cython. Install Cython or use a released version.')
90
91
92 def detect_openssl(prefixes):
93 for prefix in prefixes:
94 filename = os.path.join(prefix, 'include', 'openssl', 'evp.h')
95 if os.path.exists(filename):
96 with open(filename, 'r') as fd:
97 if 'PKCS5_PBKDF2_HMAC(' in fd.read():
98 return prefix
99
100
101 def detect_lz4(prefixes):
102 for prefix in prefixes:
103 filename = os.path.join(prefix, 'include', 'lz4.h')
104 if os.path.exists(filename):
105 with open(filename, 'r') as fd:
106 if 'LZ4_decompress_safe' in fd.read():
107 return prefix
108
109
110 include_dirs = []
111 library_dirs = []
112
113 possible_openssl_prefixes = ['/usr', '/usr/local', '/usr/local/opt/openssl', '/usr/local/ssl', '/usr/local/openssl', '/usr/local/borg', '/opt/local']
114 if os.environ.get('BORG_OPENSSL_PREFIX'):
115 possible_openssl_prefixes.insert(0, os.environ.get('BORG_OPENSSL_PREFIX'))
116 ssl_prefix = detect_openssl(possible_openssl_prefixes)
117 if not ssl_prefix:
118 raise Exception('Unable to find OpenSSL >= 1.0 headers. (Looked here: {})'.format(', '.join(possible_openssl_prefixes)))
119 include_dirs.append(os.path.join(ssl_prefix, 'include'))
120 library_dirs.append(os.path.join(ssl_prefix, 'lib'))
121
122
123 possible_lz4_prefixes = ['/usr', '/usr/local', '/usr/local/opt/lz4', '/usr/local/lz4', '/usr/local/borg', '/opt/local']
124 if os.environ.get('BORG_LZ4_PREFIX'):
125 possible_lz4_prefixes.insert(0, os.environ.get('BORG_LZ4_PREFIX'))
126 lz4_prefix = detect_lz4(possible_lz4_prefixes)
127 if lz4_prefix:
128 include_dirs.append(os.path.join(lz4_prefix, 'include'))
129 library_dirs.append(os.path.join(lz4_prefix, 'lib'))
130 elif not on_rtd:
131 raise Exception('Unable to find LZ4 headers. (Looked here: {})'.format(', '.join(possible_lz4_prefixes)))
132
133
134 with open('README.rst', 'r') as fd:
135 long_description = fd.read()
136
137
138 class build_usage(Command):
139 description = "generate usage for each command"
140
141 user_options = [
142 ('output=', 'O', 'output directory'),
143 ]
144
145 def initialize_options(self):
146 pass
147
148 def finalize_options(self):
149 pass
150
151 def run(self):
152 print('generating usage docs')
153 if not os.path.exists('docs/usage'):
154 os.mkdir('docs/usage')
155 # allows us to build docs without the C modules fully loaded during help generation
156 from borg.archiver import Archiver
157 parser = Archiver().build_parser(prog='borg')
158
159 self.generate_level("", parser, Archiver)
160
161 def generate_level(self, prefix, parser, Archiver):
162 is_subcommand = False
163 choices = {}
164 for action in parser._actions:
165 if action.choices is not None and 'SubParsersAction' in str(action.__class__):
166 is_subcommand = True
167 for cmd, parser in action.choices.items():
168 choices[prefix + cmd] = parser
169 if prefix and not choices:
170 return
171 print('found commands: %s' % list(choices.keys()))
172
173 for command, parser in choices.items():
174 print('generating help for %s' % command)
175
176 if self.generate_level(command + " ", parser, Archiver):
177 break
178
179 with open('docs/usage/%s.rst.inc' % command.replace(" ", "_"), 'w') as doc:
180 doc.write(".. IMPORTANT: this file is auto-generated from borg's built-in help, do not edit!\n\n")
181 if command == 'help':
182 for topic in Archiver.helptext:
183 params = {"topic": topic,
184 "underline": '~' * len('borg help ' + topic)}
185 doc.write(".. _borg_{topic}:\n\n".format(**params))
186 doc.write("borg help {topic}\n{underline}\n\n".format(**params))
187 doc.write(Archiver.helptext[topic])
188 else:
189 params = {"command": command,
190 "command_": command.replace(' ', '_'),
191 "underline": '-' * len('borg ' + command)}
192 doc.write(".. _borg_{command_}:\n\n".format(**params))
193 doc.write("borg {command}\n{underline}\n::\n\n".format(**params))
194 epilog = parser.epilog
195 parser.epilog = None
196 doc.write(re.sub("^", " ", parser.format_help(), flags=re.M))
197 doc.write("\nDescription\n~~~~~~~~~~~\n")
198 doc.write(epilog)
199 return is_subcommand
200
201
202 class build_api(Command):
203 description = "generate a basic api.rst file based on the modules available"
204
205 user_options = [
206 ('output=', 'O', 'output directory'),
207 ]
208
209 def initialize_options(self):
210 pass
211
212 def finalize_options(self):
213 pass
214
215 def run(self):
216 print("auto-generating API documentation")
217 with open("docs/api.rst", "w") as doc:
218 doc.write("""
219 API Documentation
220 =================
221 """)
222 for mod in glob('borg/*.py') + glob('borg/*.pyx'):
223 print("examining module %s" % mod)
224 mod = mod.replace('.pyx', '').replace('.py', '').replace('/', '.')
225 if "._" not in mod:
226 doc.write("""
227 .. automodule:: %s
228 :members:
229 :undoc-members:
230 """ % mod)
231
232
233 cmdclass = {
234 'build_ext': build_ext,
235 'build_api': build_api,
236 'build_usage': build_usage,
237 'sdist': Sdist
238 }
239
240 ext_modules = []
241 if not on_rtd:
242 ext_modules += [
243 Extension('borg.compress', [compress_source], libraries=['lz4'], include_dirs=include_dirs, library_dirs=library_dirs),
244 Extension('borg.crypto', [crypto_source], libraries=['crypto'], include_dirs=include_dirs, library_dirs=library_dirs),
245 Extension('borg.chunker', [chunker_source]),
246 Extension('borg.hashindex', [hashindex_source])
247 ]
248 if sys.platform == 'linux':
249 ext_modules.append(Extension('borg.platform_linux', [platform_linux_source], libraries=['acl']))
250 elif sys.platform.startswith('freebsd'):
251 ext_modules.append(Extension('borg.platform_freebsd', [platform_freebsd_source]))
252 elif sys.platform == 'darwin':
253 ext_modules.append(Extension('borg.platform_darwin', [platform_darwin_source]))
254
255 setup(
256 name='borgbackup',
257 use_scm_version={
258 'write_to': 'borg/_version.py',
259 },
260 author='The Borg Collective (see AUTHORS file)',
261 author_email='[email protected]',
262 url='https://borgbackup.readthedocs.io/',
263 description='Deduplicated, encrypted, authenticated and compressed backups',
264 long_description=long_description,
265 license='BSD',
266 platforms=['Linux', 'MacOS X', 'FreeBSD', 'OpenBSD', 'NetBSD', ],
267 classifiers=[
268 'Development Status :: 4 - Beta',
269 'Environment :: Console',
270 'Intended Audience :: System Administrators',
271 'License :: OSI Approved :: BSD License',
272 'Operating System :: POSIX :: BSD :: FreeBSD',
273 'Operating System :: POSIX :: BSD :: OpenBSD',
274 'Operating System :: POSIX :: BSD :: NetBSD',
275 'Operating System :: MacOS :: MacOS X',
276 'Operating System :: POSIX :: Linux',
277 'Programming Language :: Python',
278 'Programming Language :: Python :: 3',
279 'Programming Language :: Python :: 3.4',
280 'Programming Language :: Python :: 3.5',
281 'Topic :: Security :: Cryptography',
282 'Topic :: System :: Archiving :: Backup',
283 ],
284 packages=['borg', 'borg.testsuite', ],
285 entry_points={
286 'console_scripts': [
287 'borg = borg.archiver:main',
288 ]
289 },
290 cmdclass=cmdclass,
291 ext_modules=ext_modules,
292 setup_requires=['setuptools_scm>=1.7'],
293 install_requires=install_requires,
294 extras_require=extras_require,
295 )
296
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -21,12 +21,17 @@
# Also, we might use some rather recent API features.
install_requires = ['msgpack-python>=0.4.6', ]
+# note for package maintainers: if you package borgbackup for distribution,
+# please add llfuse as a *requirement* on all platforms that have a working
+# llfuse package. "borg mount" needs llfuse to work.
+# if you do not have llfuse, do not require it, most of borgbackup will work.
extras_require = {
# llfuse 0.40 (tested, proven, ok), needs FUSE version >= 2.8.0
# llfuse 0.41 (tested shortly, looks ok), needs FUSE version >= 2.8.0
# llfuse 0.41.1 (tested shortly, looks ok), needs FUSE version >= 2.8.0
# llfuse 0.42 (tested shortly, looks ok), needs FUSE version >= 2.8.0
# llfuse 1.0 (tested shortly, looks ok), needs FUSE version >= 2.8.0
+ # llfuse 1.1.1 (tested shortly, looks ok), needs FUSE version >= 2.8.0
# llfuse 2.0 will break API
'fuse': ['llfuse<2.0', ],
}
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -21,12 +21,17 @@\n # Also, we might use some rather recent API features.\n install_requires = ['msgpack-python>=0.4.6', ]\n \n+# note for package maintainers: if you package borgbackup for distribution,\n+# please add llfuse as a *requirement* on all platforms that have a working\n+# llfuse package. \"borg mount\" needs llfuse to work.\n+# if you do not have llfuse, do not require it, most of borgbackup will work.\n extras_require = {\n # llfuse 0.40 (tested, proven, ok), needs FUSE version >= 2.8.0\n # llfuse 0.41 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 0.41.1 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 0.42 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 1.0 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n+ # llfuse 1.1.1 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 2.0 will break API\n 'fuse': ['llfuse<2.0', ],\n }\n", "issue": "Can't mount on ubuntu. Missing dependencies ?\nTrying to mount a backup archive on ubuntu 16.04 fails with:\n`# borg mount ~/.backup/ /tmp/pqp/`\n`Loading fuse support failed [ImportError: No module named 'llfuse']`\n\nI have installed borgbackup from ubuntu repositories. Is there any missing dependencies on ubuntu packages ?\n\n> `# dpkg -l | grep -i fuse`\n> ii exfat-fuse 1.2.3-1 amd64 read and write exFAT driver for FUSE\n> ii fuse 2.9.4-1ubuntu3.1 amd64 Filesystem in Userspace\n> ii gvfs-fuse 1.28.2-1ubuntu1~16.04.1 amd64 userspace virtual filesystem - fuse server\n> ii ifuse 1.1.2-0.1build3 amd64 FUSE module for iPhone and iPod Touch devices\n> ii libfuse2:amd64 2.9.4-1ubuntu3.1 amd64 Filesystem in Userspace (library)\n> ii ntfs-3g 1:2015.3.14AR.1-1build1 amd64 read/write NTFS driver for FUSE\n\n", "before_files": [{"content": "# -*- encoding: utf-8 *-*\nimport os\nimport re\nimport sys\nfrom glob import glob\n\nfrom distutils.command.build import build\nfrom distutils.core import Command\n\nmin_python = (3, 4)\nmy_python = sys.version_info\n\nif my_python < min_python:\n print(\"Borg requires Python %d.%d or later\" % min_python)\n sys.exit(1)\n\n# Are we building on ReadTheDocs?\non_rtd = os.environ.get('READTHEDOCS')\n\n# msgpack pure python data corruption was fixed in 0.4.6.\n# Also, we might use some rather recent API features.\ninstall_requires = ['msgpack-python>=0.4.6', ]\n\nextras_require = {\n # llfuse 0.40 (tested, proven, ok), needs FUSE version >= 2.8.0\n # llfuse 0.41 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 0.41.1 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 0.42 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 1.0 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 2.0 will break API\n 'fuse': ['llfuse<2.0', ],\n}\n\nif sys.platform.startswith('freebsd'):\n # llfuse was frequently broken / did not build on freebsd\n # llfuse 0.41.1, 1.1 are ok\n extras_require['fuse'] = ['llfuse <2.0, !=0.42.*, !=0.43, !=1.0', ]\n\nfrom setuptools import setup, Extension\nfrom setuptools.command.sdist import sdist\n\n\ncompress_source = 'borg/compress.pyx'\ncrypto_source = 'borg/crypto.pyx'\nchunker_source = 'borg/chunker.pyx'\nhashindex_source = 'borg/hashindex.pyx'\nplatform_linux_source = 'borg/platform_linux.pyx'\nplatform_darwin_source = 'borg/platform_darwin.pyx'\nplatform_freebsd_source = 'borg/platform_freebsd.pyx'\n\ntry:\n from Cython.Distutils import build_ext\n import 
Cython.Compiler.Main as cython_compiler\n\n class Sdist(sdist):\n def __init__(self, *args, **kwargs):\n for src in glob('borg/*.pyx'):\n cython_compiler.compile(src, cython_compiler.default_options)\n super().__init__(*args, **kwargs)\n\n def make_distribution(self):\n self.filelist.extend([\n 'borg/compress.c',\n 'borg/crypto.c',\n 'borg/chunker.c', 'borg/_chunker.c',\n 'borg/hashindex.c', 'borg/_hashindex.c',\n 'borg/platform_linux.c',\n 'borg/platform_freebsd.c',\n 'borg/platform_darwin.c',\n ])\n super().make_distribution()\n\nexcept ImportError:\n class Sdist(sdist):\n def __init__(self, *args, **kwargs):\n raise Exception('Cython is required to run sdist')\n\n compress_source = compress_source.replace('.pyx', '.c')\n crypto_source = crypto_source.replace('.pyx', '.c')\n chunker_source = chunker_source.replace('.pyx', '.c')\n hashindex_source = hashindex_source.replace('.pyx', '.c')\n platform_linux_source = platform_linux_source.replace('.pyx', '.c')\n platform_freebsd_source = platform_freebsd_source.replace('.pyx', '.c')\n platform_darwin_source = platform_darwin_source.replace('.pyx', '.c')\n from distutils.command.build_ext import build_ext\n if not on_rtd and not all(os.path.exists(path) for path in [\n compress_source, crypto_source, chunker_source, hashindex_source,\n platform_linux_source, platform_freebsd_source, platform_darwin_source]):\n raise ImportError('The GIT version of Borg needs Cython. Install Cython or use a released version.')\n\n\ndef detect_openssl(prefixes):\n for prefix in prefixes:\n filename = os.path.join(prefix, 'include', 'openssl', 'evp.h')\n if os.path.exists(filename):\n with open(filename, 'r') as fd:\n if 'PKCS5_PBKDF2_HMAC(' in fd.read():\n return prefix\n\n\ndef detect_lz4(prefixes):\n for prefix in prefixes:\n filename = os.path.join(prefix, 'include', 'lz4.h')\n if os.path.exists(filename):\n with open(filename, 'r') as fd:\n if 'LZ4_decompress_safe' in fd.read():\n return prefix\n\n\ninclude_dirs = []\nlibrary_dirs = []\n\npossible_openssl_prefixes = ['/usr', '/usr/local', '/usr/local/opt/openssl', '/usr/local/ssl', '/usr/local/openssl', '/usr/local/borg', '/opt/local']\nif os.environ.get('BORG_OPENSSL_PREFIX'):\n possible_openssl_prefixes.insert(0, os.environ.get('BORG_OPENSSL_PREFIX'))\nssl_prefix = detect_openssl(possible_openssl_prefixes)\nif not ssl_prefix:\n raise Exception('Unable to find OpenSSL >= 1.0 headers. (Looked here: {})'.format(', '.join(possible_openssl_prefixes)))\ninclude_dirs.append(os.path.join(ssl_prefix, 'include'))\nlibrary_dirs.append(os.path.join(ssl_prefix, 'lib'))\n\n\npossible_lz4_prefixes = ['/usr', '/usr/local', '/usr/local/opt/lz4', '/usr/local/lz4', '/usr/local/borg', '/opt/local']\nif os.environ.get('BORG_LZ4_PREFIX'):\n possible_lz4_prefixes.insert(0, os.environ.get('BORG_LZ4_PREFIX'))\nlz4_prefix = detect_lz4(possible_lz4_prefixes)\nif lz4_prefix:\n include_dirs.append(os.path.join(lz4_prefix, 'include'))\n library_dirs.append(os.path.join(lz4_prefix, 'lib'))\nelif not on_rtd:\n raise Exception('Unable to find LZ4 headers. 
(Looked here: {})'.format(', '.join(possible_lz4_prefixes)))\n\n\nwith open('README.rst', 'r') as fd:\n long_description = fd.read()\n\n\nclass build_usage(Command):\n description = \"generate usage for each command\"\n\n user_options = [\n ('output=', 'O', 'output directory'),\n ]\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n print('generating usage docs')\n if not os.path.exists('docs/usage'):\n os.mkdir('docs/usage')\n # allows us to build docs without the C modules fully loaded during help generation\n from borg.archiver import Archiver\n parser = Archiver().build_parser(prog='borg')\n\n self.generate_level(\"\", parser, Archiver)\n\n def generate_level(self, prefix, parser, Archiver):\n is_subcommand = False\n choices = {}\n for action in parser._actions:\n if action.choices is not None and 'SubParsersAction' in str(action.__class__):\n is_subcommand = True\n for cmd, parser in action.choices.items():\n choices[prefix + cmd] = parser\n if prefix and not choices:\n return\n print('found commands: %s' % list(choices.keys()))\n\n for command, parser in choices.items():\n print('generating help for %s' % command)\n\n if self.generate_level(command + \" \", parser, Archiver):\n break\n\n with open('docs/usage/%s.rst.inc' % command.replace(\" \", \"_\"), 'w') as doc:\n doc.write(\".. IMPORTANT: this file is auto-generated from borg's built-in help, do not edit!\\n\\n\")\n if command == 'help':\n for topic in Archiver.helptext:\n params = {\"topic\": topic,\n \"underline\": '~' * len('borg help ' + topic)}\n doc.write(\".. _borg_{topic}:\\n\\n\".format(**params))\n doc.write(\"borg help {topic}\\n{underline}\\n\\n\".format(**params))\n doc.write(Archiver.helptext[topic])\n else:\n params = {\"command\": command,\n \"command_\": command.replace(' ', '_'),\n \"underline\": '-' * len('borg ' + command)}\n doc.write(\".. _borg_{command_}:\\n\\n\".format(**params))\n doc.write(\"borg {command}\\n{underline}\\n::\\n\\n\".format(**params))\n epilog = parser.epilog\n parser.epilog = None\n doc.write(re.sub(\"^\", \" \", parser.format_help(), flags=re.M))\n doc.write(\"\\nDescription\\n~~~~~~~~~~~\\n\")\n doc.write(epilog)\n return is_subcommand\n\n\nclass build_api(Command):\n description = \"generate a basic api.rst file based on the modules available\"\n\n user_options = [\n ('output=', 'O', 'output directory'),\n ]\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n print(\"auto-generating API documentation\")\n with open(\"docs/api.rst\", \"w\") as doc:\n doc.write(\"\"\"\nAPI Documentation\n=================\n\"\"\")\n for mod in glob('borg/*.py') + glob('borg/*.pyx'):\n print(\"examining module %s\" % mod)\n mod = mod.replace('.pyx', '').replace('.py', '').replace('/', '.')\n if \"._\" not in mod:\n doc.write(\"\"\"\n.. 
automodule:: %s\n :members:\n :undoc-members:\n\"\"\" % mod)\n\n\ncmdclass = {\n 'build_ext': build_ext,\n 'build_api': build_api,\n 'build_usage': build_usage,\n 'sdist': Sdist\n}\n\next_modules = []\nif not on_rtd:\n ext_modules += [\n Extension('borg.compress', [compress_source], libraries=['lz4'], include_dirs=include_dirs, library_dirs=library_dirs),\n Extension('borg.crypto', [crypto_source], libraries=['crypto'], include_dirs=include_dirs, library_dirs=library_dirs),\n Extension('borg.chunker', [chunker_source]),\n Extension('borg.hashindex', [hashindex_source])\n]\n if sys.platform == 'linux':\n ext_modules.append(Extension('borg.platform_linux', [platform_linux_source], libraries=['acl']))\n elif sys.platform.startswith('freebsd'):\n ext_modules.append(Extension('borg.platform_freebsd', [platform_freebsd_source]))\n elif sys.platform == 'darwin':\n ext_modules.append(Extension('borg.platform_darwin', [platform_darwin_source]))\n\nsetup(\n name='borgbackup',\n use_scm_version={\n 'write_to': 'borg/_version.py',\n },\n author='The Borg Collective (see AUTHORS file)',\n author_email='[email protected]',\n url='https://borgbackup.readthedocs.io/',\n description='Deduplicated, encrypted, authenticated and compressed backups',\n long_description=long_description,\n license='BSD',\n platforms=['Linux', 'MacOS X', 'FreeBSD', 'OpenBSD', 'NetBSD', ],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: POSIX :: BSD :: FreeBSD',\n 'Operating System :: POSIX :: BSD :: OpenBSD',\n 'Operating System :: POSIX :: BSD :: NetBSD',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Security :: Cryptography',\n 'Topic :: System :: Archiving :: Backup',\n ],\n packages=['borg', 'borg.testsuite', ],\n entry_points={\n 'console_scripts': [\n 'borg = borg.archiver:main',\n ]\n },\n cmdclass=cmdclass,\n ext_modules=ext_modules,\n setup_requires=['setuptools_scm>=1.7'],\n install_requires=install_requires,\n extras_require=extras_require,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- encoding: utf-8 *-*\nimport os\nimport re\nimport sys\nfrom glob import glob\n\nfrom distutils.command.build import build\nfrom distutils.core import Command\n\nmin_python = (3, 4)\nmy_python = sys.version_info\n\nif my_python < min_python:\n print(\"Borg requires Python %d.%d or later\" % min_python)\n sys.exit(1)\n\n# Are we building on ReadTheDocs?\non_rtd = os.environ.get('READTHEDOCS')\n\n# msgpack pure python data corruption was fixed in 0.4.6.\n# Also, we might use some rather recent API features.\ninstall_requires = ['msgpack-python>=0.4.6', ]\n\n# note for package maintainers: if you package borgbackup for distribution,\n# please add llfuse as a *requirement* on all platforms that have a working\n# llfuse package. 
\"borg mount\" needs llfuse to work.\n# if you do not have llfuse, do not require it, most of borgbackup will work.\nextras_require = {\n # llfuse 0.40 (tested, proven, ok), needs FUSE version >= 2.8.0\n # llfuse 0.41 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 0.41.1 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 0.42 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 1.0 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 1.1.1 (tested shortly, looks ok), needs FUSE version >= 2.8.0\n # llfuse 2.0 will break API\n 'fuse': ['llfuse<2.0', ],\n}\n\nif sys.platform.startswith('freebsd'):\n # llfuse was frequently broken / did not build on freebsd\n # llfuse 0.41.1, 1.1 are ok\n extras_require['fuse'] = ['llfuse <2.0, !=0.42.*, !=0.43, !=1.0', ]\n\nfrom setuptools import setup, Extension\nfrom setuptools.command.sdist import sdist\n\n\ncompress_source = 'borg/compress.pyx'\ncrypto_source = 'borg/crypto.pyx'\nchunker_source = 'borg/chunker.pyx'\nhashindex_source = 'borg/hashindex.pyx'\nplatform_linux_source = 'borg/platform_linux.pyx'\nplatform_darwin_source = 'borg/platform_darwin.pyx'\nplatform_freebsd_source = 'borg/platform_freebsd.pyx'\n\ntry:\n from Cython.Distutils import build_ext\n import Cython.Compiler.Main as cython_compiler\n\n class Sdist(sdist):\n def __init__(self, *args, **kwargs):\n for src in glob('borg/*.pyx'):\n cython_compiler.compile(src, cython_compiler.default_options)\n super().__init__(*args, **kwargs)\n\n def make_distribution(self):\n self.filelist.extend([\n 'borg/compress.c',\n 'borg/crypto.c',\n 'borg/chunker.c', 'borg/_chunker.c',\n 'borg/hashindex.c', 'borg/_hashindex.c',\n 'borg/platform_linux.c',\n 'borg/platform_freebsd.c',\n 'borg/platform_darwin.c',\n ])\n super().make_distribution()\n\nexcept ImportError:\n class Sdist(sdist):\n def __init__(self, *args, **kwargs):\n raise Exception('Cython is required to run sdist')\n\n compress_source = compress_source.replace('.pyx', '.c')\n crypto_source = crypto_source.replace('.pyx', '.c')\n chunker_source = chunker_source.replace('.pyx', '.c')\n hashindex_source = hashindex_source.replace('.pyx', '.c')\n platform_linux_source = platform_linux_source.replace('.pyx', '.c')\n platform_freebsd_source = platform_freebsd_source.replace('.pyx', '.c')\n platform_darwin_source = platform_darwin_source.replace('.pyx', '.c')\n from distutils.command.build_ext import build_ext\n if not on_rtd and not all(os.path.exists(path) for path in [\n compress_source, crypto_source, chunker_source, hashindex_source,\n platform_linux_source, platform_freebsd_source, platform_darwin_source]):\n raise ImportError('The GIT version of Borg needs Cython. 
Install Cython or use a released version.')\n\n\ndef detect_openssl(prefixes):\n for prefix in prefixes:\n filename = os.path.join(prefix, 'include', 'openssl', 'evp.h')\n if os.path.exists(filename):\n with open(filename, 'r') as fd:\n if 'PKCS5_PBKDF2_HMAC(' in fd.read():\n return prefix\n\n\ndef detect_lz4(prefixes):\n for prefix in prefixes:\n filename = os.path.join(prefix, 'include', 'lz4.h')\n if os.path.exists(filename):\n with open(filename, 'r') as fd:\n if 'LZ4_decompress_safe' in fd.read():\n return prefix\n\n\ninclude_dirs = []\nlibrary_dirs = []\n\npossible_openssl_prefixes = ['/usr', '/usr/local', '/usr/local/opt/openssl', '/usr/local/ssl', '/usr/local/openssl', '/usr/local/borg', '/opt/local']\nif os.environ.get('BORG_OPENSSL_PREFIX'):\n possible_openssl_prefixes.insert(0, os.environ.get('BORG_OPENSSL_PREFIX'))\nssl_prefix = detect_openssl(possible_openssl_prefixes)\nif not ssl_prefix:\n raise Exception('Unable to find OpenSSL >= 1.0 headers. (Looked here: {})'.format(', '.join(possible_openssl_prefixes)))\ninclude_dirs.append(os.path.join(ssl_prefix, 'include'))\nlibrary_dirs.append(os.path.join(ssl_prefix, 'lib'))\n\n\npossible_lz4_prefixes = ['/usr', '/usr/local', '/usr/local/opt/lz4', '/usr/local/lz4', '/usr/local/borg', '/opt/local']\nif os.environ.get('BORG_LZ4_PREFIX'):\n possible_lz4_prefixes.insert(0, os.environ.get('BORG_LZ4_PREFIX'))\nlz4_prefix = detect_lz4(possible_lz4_prefixes)\nif lz4_prefix:\n include_dirs.append(os.path.join(lz4_prefix, 'include'))\n library_dirs.append(os.path.join(lz4_prefix, 'lib'))\nelif not on_rtd:\n raise Exception('Unable to find LZ4 headers. (Looked here: {})'.format(', '.join(possible_lz4_prefixes)))\n\n\nwith open('README.rst', 'r') as fd:\n long_description = fd.read()\n\n\nclass build_usage(Command):\n description = \"generate usage for each command\"\n\n user_options = [\n ('output=', 'O', 'output directory'),\n ]\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n print('generating usage docs')\n if not os.path.exists('docs/usage'):\n os.mkdir('docs/usage')\n # allows us to build docs without the C modules fully loaded during help generation\n from borg.archiver import Archiver\n parser = Archiver().build_parser(prog='borg')\n\n self.generate_level(\"\", parser, Archiver)\n\n def generate_level(self, prefix, parser, Archiver):\n is_subcommand = False\n choices = {}\n for action in parser._actions:\n if action.choices is not None and 'SubParsersAction' in str(action.__class__):\n is_subcommand = True\n for cmd, parser in action.choices.items():\n choices[prefix + cmd] = parser\n if prefix and not choices:\n return\n print('found commands: %s' % list(choices.keys()))\n\n for command, parser in choices.items():\n print('generating help for %s' % command)\n\n if self.generate_level(command + \" \", parser, Archiver):\n break\n\n with open('docs/usage/%s.rst.inc' % command.replace(\" \", \"_\"), 'w') as doc:\n doc.write(\".. IMPORTANT: this file is auto-generated from borg's built-in help, do not edit!\\n\\n\")\n if command == 'help':\n for topic in Archiver.helptext:\n params = {\"topic\": topic,\n \"underline\": '~' * len('borg help ' + topic)}\n doc.write(\".. _borg_{topic}:\\n\\n\".format(**params))\n doc.write(\"borg help {topic}\\n{underline}\\n\\n\".format(**params))\n doc.write(Archiver.helptext[topic])\n else:\n params = {\"command\": command,\n \"command_\": command.replace(' ', '_'),\n \"underline\": '-' * len('borg ' + command)}\n doc.write(\".. 
_borg_{command_}:\\n\\n\".format(**params))\n doc.write(\"borg {command}\\n{underline}\\n::\\n\\n\".format(**params))\n epilog = parser.epilog\n parser.epilog = None\n doc.write(re.sub(\"^\", \" \", parser.format_help(), flags=re.M))\n doc.write(\"\\nDescription\\n~~~~~~~~~~~\\n\")\n doc.write(epilog)\n return is_subcommand\n\n\nclass build_api(Command):\n description = \"generate a basic api.rst file based on the modules available\"\n\n user_options = [\n ('output=', 'O', 'output directory'),\n ]\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n print(\"auto-generating API documentation\")\n with open(\"docs/api.rst\", \"w\") as doc:\n doc.write(\"\"\"\nAPI Documentation\n=================\n\"\"\")\n for mod in glob('borg/*.py') + glob('borg/*.pyx'):\n print(\"examining module %s\" % mod)\n mod = mod.replace('.pyx', '').replace('.py', '').replace('/', '.')\n if \"._\" not in mod:\n doc.write(\"\"\"\n.. automodule:: %s\n :members:\n :undoc-members:\n\"\"\" % mod)\n\n\ncmdclass = {\n 'build_ext': build_ext,\n 'build_api': build_api,\n 'build_usage': build_usage,\n 'sdist': Sdist\n}\n\next_modules = []\nif not on_rtd:\n ext_modules += [\n Extension('borg.compress', [compress_source], libraries=['lz4'], include_dirs=include_dirs, library_dirs=library_dirs),\n Extension('borg.crypto', [crypto_source], libraries=['crypto'], include_dirs=include_dirs, library_dirs=library_dirs),\n Extension('borg.chunker', [chunker_source]),\n Extension('borg.hashindex', [hashindex_source])\n]\n if sys.platform == 'linux':\n ext_modules.append(Extension('borg.platform_linux', [platform_linux_source], libraries=['acl']))\n elif sys.platform.startswith('freebsd'):\n ext_modules.append(Extension('borg.platform_freebsd', [platform_freebsd_source]))\n elif sys.platform == 'darwin':\n ext_modules.append(Extension('borg.platform_darwin', [platform_darwin_source]))\n\nsetup(\n name='borgbackup',\n use_scm_version={\n 'write_to': 'borg/_version.py',\n },\n author='The Borg Collective (see AUTHORS file)',\n author_email='[email protected]',\n url='https://borgbackup.readthedocs.io/',\n description='Deduplicated, encrypted, authenticated and compressed backups',\n long_description=long_description,\n license='BSD',\n platforms=['Linux', 'MacOS X', 'FreeBSD', 'OpenBSD', 'NetBSD', ],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: POSIX :: BSD :: FreeBSD',\n 'Operating System :: POSIX :: BSD :: OpenBSD',\n 'Operating System :: POSIX :: BSD :: NetBSD',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Security :: Cryptography',\n 'Topic :: System :: Archiving :: Backup',\n ],\n packages=['borg', 'borg.testsuite', ],\n entry_points={\n 'console_scripts': [\n 'borg = borg.archiver:main',\n ]\n },\n cmdclass=cmdclass,\n ext_modules=ext_modules,\n setup_requires=['setuptools_scm>=1.7'],\n install_requires=install_requires,\n extras_require=extras_require,\n)\n", "path": "setup.py"}]}
| 4,032 | 335 |
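The change above is documentation for packagers rather than new behaviour: llfuse remains an optional `fuse` extra, so a distribution package built without it still produces the runtime failure quoted in the issue. A minimal sketch of that optional-import pattern follows; the function name is invented and does not claim to match borg's actual module layout.

```python
def load_fuse_support():
    """Return llfuse if available, otherwise fail the way `borg mount` does."""
    try:
        # Present only when installed with the extra, e.g. `pip install 'borgbackup[fuse]'`,
        # or when the distribution package depends on a python-llfuse package.
        import llfuse
    except ImportError as exc:
        raise SystemExit("Loading fuse support failed [ImportError: %s]" % exc)
    return llfuse
```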
gh_patches_debug_370
|
rasdani/github-patches
|
git_diff
|
horovod__horovod-2262
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Horovod on Ray]Failed on example of tensorflow2_mnist_ray.py
**Environment:**
1. Framework: (TensorFlow1.15.0)
3. Horovod version:0.20.0
**Your question:**
Please ask your question here.
Hi @tgaddair @richardliaw I tried the example of Horovod on ray on CPU nodes, but failed.
The ray cluster is ok.
Using TensorFlow backend.
[2m[36m(pid=262081)[0m Using TensorFlow backend.
[2m[36m(pid=262071)[0m Using TensorFlow backend.
Traceback (most recent call last):
File "tensorflow2_mnist_ray.py", line 94, in <module>
executor.run(train, kwargs=dict(num_epochs=1))
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/horovod/ray/runner.py", line 426, in run
lambda w: fn(*args, **kwargs)) for worker in self.workers])
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/ray/worker.py", line 1516, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(RecursionError): [36mray::BaseHorovodWorker.execute()[39m (pid=262081, ip=10.0.0.131)
File "python/ray/_raylet.pyx", line 414, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 431, in ray._raylet.execute_task
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/ray/serialization.py", line 317, in deserialize_objects
self._deserialize_object(data, metadata, object_id))
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/ray/serialization.py", line 257, in _deserialize_object
return self._deserialize_msgpack_data(data, metadata)
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/ray/serialization.py", line 238, in _deserialize_msgpack_data
python_objects = self._deserialize_pickle5_data(pickle5_data)
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/ray/serialization.py", line 226, in _deserialize_pickle5_data
obj = pickle.loads(in_band)
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
.......
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
RecursionError: maximum recursion depth exceeded while calling a Python object
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/tensorflow2_mnist_ray.py`
Content:
```
1 import tensorflow as tf
2 import horovod.tensorflow.keras as hvd
3 import horovod.keras as hvd
4
5 import ray
6 from horovod.ray import RayExecutor
7
8
9 def train(num_epochs):
10 # Horovod: initialize Horovod.
11 hvd.init()
12
13 # Horovod: pin GPU to be used to process local rank (one GPU per process)
14 gpus = tf.config.experimental.list_physical_devices('GPU')
15 for gpu in gpus:
16 tf.config.experimental.set_memory_growth(gpu, True)
17 if gpus:
18 tf.config.experimental.set_visible_devices(
19 gpus[hvd.local_rank()], 'GPU')
20
21 (mnist_images, mnist_labels), _ = \
22 tf.keras.datasets.mnist.load_data(path='mnist-%d.npz' % hvd.rank())
23
24 dataset = tf.data.Dataset.from_tensor_slices(
25 (tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),
26 tf.cast(mnist_labels, tf.int64))
27 )
28 dataset = dataset.repeat().shuffle(10000).batch(128)
29
30 mnist_model = tf.keras.Sequential([
31 tf.keras.layers.Conv2D(32, [3, 3], activation='relu'),
32 tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),
33 tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
34 tf.keras.layers.Dropout(0.25),
35 tf.keras.layers.Flatten(),
36 tf.keras.layers.Dense(128, activation='relu'),
37 tf.keras.layers.Dropout(0.5),
38 tf.keras.layers.Dense(10, activation='softmax')
39 ])
40
41 # Horovod: adjust learning rate based on number of GPUs.
42 scaled_lr = 0.001 * hvd.size()
43 opt = tf.optimizers.Adam(scaled_lr)
44
45 # Horovod: add Horovod DistributedOptimizer.
46 opt = hvd.DistributedOptimizer(opt)
47
48 # Horovod: Specify `experimental_run_tf_function=False` to ensure TensorFlow
49 # uses hvd.DistributedOptimizer() to compute gradients.
50 mnist_model.compile(loss=tf.losses.SparseCategoricalCrossentropy(),
51 optimizer=opt,
52 metrics=['accuracy'],
53 experimental_run_tf_function=False)
54
55 callbacks = [
56 # Horovod: broadcast initial variable states from rank 0 to all other processes.
57 # This is necessary to ensure consistent initialization of all workers when
58 # training is started with random weights or restored from a checkpoint.
59 hvd.callbacks.BroadcastGlobalVariablesCallback(0),
60
61 # Horovod: average metrics among workers at the end of every epoch.
62 #
63 # Note: This callback must be in the list before the ReduceLROnPlateau,
64 # TensorBoard or other metrics-based callbacks.
65 hvd.callbacks.MetricAverageCallback(),
66
67 # Horovod: using `lr = 1.0 * hvd.size()` from the very beginning leads to worse final
68 # accuracy. Scale the learning rate `lr = 1.0` ---> `lr = 1.0 * hvd.size()` during
69 # the first three epochs. See https://arxiv.org/abs/1706.02677 for details.
70 hvd.callbacks.LearningRateWarmupCallback(
71 warmup_epochs=3, initial_lr=scaled_lr, verbose=1),
72 ]
73
74 # Horovod: save checkpoints only on worker 0 to prevent other workers from corrupting them.
75 if hvd.rank() == 0:
76 callbacks.append(tf.keras.callbacks.ModelCheckpoint(
77 './checkpoint-{epoch}.h5'))
78
79 # Horovod: write logs on worker 0.
80 verbose = 1 if hvd.rank() == 0 else 0
81
82 # Train the model.
83 # Horovod: adjust number of steps based on number of GPUs.
84 mnist_model.fit(dataset, steps_per_epoch=500 // hvd.size(),
85 callbacks=callbacks, epochs=num_epochs, verbose=verbose)
86
87
88 ray.init()
89 settings = RayExecutor.create_settings(timeout_s=30)
90 executor = RayExecutor(settings, num_hosts=1, num_slots=2, use_gpu=False)
91 executor.start()
92 executor.run(train, kwargs=dict(num_epochs=1))
93 executor.shutdown()
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/tensorflow2_mnist_ray.py b/examples/tensorflow2_mnist_ray.py
--- a/examples/tensorflow2_mnist_ray.py
+++ b/examples/tensorflow2_mnist_ray.py
@@ -1,6 +1,5 @@
 import tensorflow as tf
 import horovod.tensorflow.keras as hvd
-import horovod.keras as hvd
 
 import ray
 from horovod.ray import RayExecutor
|
{"golden_diff": "diff --git a/examples/tensorflow2_mnist_ray.py b/examples/tensorflow2_mnist_ray.py\n--- a/examples/tensorflow2_mnist_ray.py\n+++ b/examples/tensorflow2_mnist_ray.py\n@@ -1,6 +1,5 @@\n import tensorflow as tf\n import horovod.tensorflow.keras as hvd\n-import horovod.keras as hvd\n \n import ray\n from horovod.ray import RayExecutor\n", "issue": "[Horovod on Ray]Failed on example of tensorflow2_mnist_ray.py\n**Environment:**\r\n1. Framework: (TensorFlow1.15.0)\r\n3. Horovod version:0.20.0\r\n\r\n**Your question:**\r\nPlease ask your question here.\r\nHi @tgaddair @richardliaw I tried the example of Horovod on ray on CPU nodes, but failed.\r\nThe ray cluster is ok.\r\n\r\nUsing TensorFlow backend.\r\n\u001b[2m\u001b[36m(pid=262081)\u001b[0m Using TensorFlow backend.\r\n\u001b[2m\u001b[36m(pid=262071)\u001b[0m Using TensorFlow backend.\r\nTraceback (most recent call last):\r\n File \"tensorflow2_mnist_ray.py\", line 94, in <module>\r\n executor.run(train, kwargs=dict(num_epochs=1))\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/horovod/ray/runner.py\", line 426, in run\r\n lambda w: fn(*args, **kwargs)) for worker in self.workers])\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/ray/worker.py\", line 1516, in get\r\n raise value.as_instanceof_cause()\r\nray.exceptions.RayTaskError(RecursionError): \u001b[36mray::BaseHorovodWorker.execute()\u001b[39m (pid=262081, ip=10.0.0.131)\r\n File \"python/ray/_raylet.pyx\", line 414, in ray._raylet.execute_task\r\n File \"python/ray/_raylet.pyx\", line 431, in ray._raylet.execute_task\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/ray/serialization.py\", line 317, in deserialize_objects\r\n self._deserialize_object(data, metadata, object_id))\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/ray/serialization.py\", line 257, in _deserialize_object\r\n return self._deserialize_msgpack_data(data, metadata)\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/ray/serialization.py\", line 238, in _deserialize_msgpack_data\r\n python_objects = self._deserialize_pickle5_data(pickle5_data)\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/ray/serialization.py\", line 226, in _deserialize_pickle5_data\r\n obj = pickle.loads(in_band)\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 50, in __getattr__\r\n module = self._load()\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 44, in _load\r\n module = _importlib.import_module(self.__name__)\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 50, in __getattr__\r\n module = self._load()\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 44, in _load\r\n module = _importlib.import_module(self.__name__)\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 50, in __getattr__\r\n module = self._load()\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 44, in _load\r\n module = _importlib.import_module(self.__name__)\r\n\r\n .......\r\n \r\n\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 50, in __getattr__\r\n module = 
self._load()\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 44, in _load\r\n module = _importlib.import_module(self.__name__)\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 50, in __getattr__\r\n module = self._load()\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 44, in _load\r\n module = _importlib.import_module(self.__name__)\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 50, in __getattr__\r\n module = self._load()\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 44, in _load\r\n module = _importlib.import_module(self.__name__)\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 50, in __getattr__\r\n module = self._load()\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 44, in _load\r\n module = _importlib.import_module(self.__name__)\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 50, in __getattr__\r\n module = self._load()\r\n File \"/home/xianyang/sw/miniconda3/envs/test/lib/python3.7/site-packages/tensorflow/__init__.py\", line 44, in _load\r\n module = _importlib.import_module(self.__name__)\r\nRecursionError: maximum recursion depth exceeded while calling a Python object\r\n\r\n\n", "before_files": [{"content": "import tensorflow as tf\nimport horovod.tensorflow.keras as hvd\nimport horovod.keras as hvd\n\nimport ray\nfrom horovod.ray import RayExecutor\n\n\ndef train(num_epochs):\n # Horovod: initialize Horovod.\n hvd.init()\n\n # Horovod: pin GPU to be used to process local rank (one GPU per process)\n gpus = tf.config.experimental.list_physical_devices('GPU')\n for gpu in gpus:\n tf.config.experimental.set_memory_growth(gpu, True)\n if gpus:\n tf.config.experimental.set_visible_devices(\n gpus[hvd.local_rank()], 'GPU')\n\n (mnist_images, mnist_labels), _ = \\\n tf.keras.datasets.mnist.load_data(path='mnist-%d.npz' % hvd.rank())\n\n dataset = tf.data.Dataset.from_tensor_slices(\n (tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),\n tf.cast(mnist_labels, tf.int64))\n )\n dataset = dataset.repeat().shuffle(10000).batch(128)\n\n mnist_model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, [3, 3], activation='relu'),\n tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),\n tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),\n tf.keras.layers.Dropout(0.25),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dropout(0.5),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n\n # Horovod: adjust learning rate based on number of GPUs.\n scaled_lr = 0.001 * hvd.size()\n opt = tf.optimizers.Adam(scaled_lr)\n\n # Horovod: add Horovod DistributedOptimizer.\n opt = hvd.DistributedOptimizer(opt)\n\n # Horovod: Specify `experimental_run_tf_function=False` to ensure TensorFlow\n # uses hvd.DistributedOptimizer() to compute gradients.\n mnist_model.compile(loss=tf.losses.SparseCategoricalCrossentropy(),\n optimizer=opt,\n metrics=['accuracy'],\n experimental_run_tf_function=False)\n\n callbacks = [\n # Horovod: broadcast initial variable states from rank 0 to all other processes.\n # This is necessary to ensure consistent initialization of all workers 
when\n # training is started with random weights or restored from a checkpoint.\n hvd.callbacks.BroadcastGlobalVariablesCallback(0),\n\n # Horovod: average metrics among workers at the end of every epoch.\n #\n # Note: This callback must be in the list before the ReduceLROnPlateau,\n # TensorBoard or other metrics-based callbacks.\n hvd.callbacks.MetricAverageCallback(),\n\n # Horovod: using `lr = 1.0 * hvd.size()` from the very beginning leads to worse final\n # accuracy. Scale the learning rate `lr = 1.0` ---> `lr = 1.0 * hvd.size()` during\n # the first three epochs. See https://arxiv.org/abs/1706.02677 for details.\n hvd.callbacks.LearningRateWarmupCallback(\n warmup_epochs=3, initial_lr=scaled_lr, verbose=1),\n ]\n\n # Horovod: save checkpoints only on worker 0 to prevent other workers from corrupting them.\n if hvd.rank() == 0:\n callbacks.append(tf.keras.callbacks.ModelCheckpoint(\n './checkpoint-{epoch}.h5'))\n\n # Horovod: write logs on worker 0.\n verbose = 1 if hvd.rank() == 0 else 0\n\n # Train the model.\n # Horovod: adjust number of steps based on number of GPUs.\n mnist_model.fit(dataset, steps_per_epoch=500 // hvd.size(),\n callbacks=callbacks, epochs=num_epochs, verbose=verbose)\n\n\nray.init()\nsettings = RayExecutor.create_settings(timeout_s=30)\nexecutor = RayExecutor(settings, num_hosts=1, num_slots=2, use_gpu=False)\nexecutor.start()\nexecutor.run(train, kwargs=dict(num_epochs=1))\nexecutor.shutdown()\n", "path": "examples/tensorflow2_mnist_ray.py"}], "after_files": [{"content": "import tensorflow as tf\nimport horovod.tensorflow.keras as hvd\n\nimport ray\nfrom horovod.ray import RayExecutor\n\n\ndef train(num_epochs):\n # Horovod: initialize Horovod.\n hvd.init()\n\n # Horovod: pin GPU to be used to process local rank (one GPU per process)\n gpus = tf.config.experimental.list_physical_devices('GPU')\n for gpu in gpus:\n tf.config.experimental.set_memory_growth(gpu, True)\n if gpus:\n tf.config.experimental.set_visible_devices(\n gpus[hvd.local_rank()], 'GPU')\n\n (mnist_images, mnist_labels), _ = \\\n tf.keras.datasets.mnist.load_data(path='mnist-%d.npz' % hvd.rank())\n\n dataset = tf.data.Dataset.from_tensor_slices(\n (tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),\n tf.cast(mnist_labels, tf.int64))\n )\n dataset = dataset.repeat().shuffle(10000).batch(128)\n\n mnist_model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, [3, 3], activation='relu'),\n tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),\n tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),\n tf.keras.layers.Dropout(0.25),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dropout(0.5),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n\n # Horovod: adjust learning rate based on number of GPUs.\n scaled_lr = 0.001 * hvd.size()\n opt = tf.optimizers.Adam(scaled_lr)\n\n # Horovod: add Horovod DistributedOptimizer.\n opt = hvd.DistributedOptimizer(opt)\n\n # Horovod: Specify `experimental_run_tf_function=False` to ensure TensorFlow\n # uses hvd.DistributedOptimizer() to compute gradients.\n mnist_model.compile(loss=tf.losses.SparseCategoricalCrossentropy(),\n optimizer=opt,\n metrics=['accuracy'],\n experimental_run_tf_function=False)\n\n callbacks = [\n # Horovod: broadcast initial variable states from rank 0 to all other processes.\n # This is necessary to ensure consistent initialization of all workers when\n # training is started with random weights or restored from a checkpoint.\n 
hvd.callbacks.BroadcastGlobalVariablesCallback(0),\n\n # Horovod: average metrics among workers at the end of every epoch.\n #\n # Note: This callback must be in the list before the ReduceLROnPlateau,\n # TensorBoard or other metrics-based callbacks.\n hvd.callbacks.MetricAverageCallback(),\n\n # Horovod: using `lr = 1.0 * hvd.size()` from the very beginning leads to worse final\n # accuracy. Scale the learning rate `lr = 1.0` ---> `lr = 1.0 * hvd.size()` during\n # the first three epochs. See https://arxiv.org/abs/1706.02677 for details.\n hvd.callbacks.LearningRateWarmupCallback(\n warmup_epochs=3, initial_lr=scaled_lr, verbose=1),\n ]\n\n # Horovod: save checkpoints only on worker 0 to prevent other workers from corrupting them.\n if hvd.rank() == 0:\n callbacks.append(tf.keras.callbacks.ModelCheckpoint(\n './checkpoint-{epoch}.h5'))\n\n # Horovod: write logs on worker 0.\n verbose = 1 if hvd.rank() == 0 else 0\n\n # Train the model.\n # Horovod: adjust number of steps based on number of GPUs.\n mnist_model.fit(dataset, steps_per_epoch=500 // hvd.size(),\n callbacks=callbacks, epochs=num_epochs, verbose=verbose)\n\n\nray.init()\nsettings = RayExecutor.create_settings(timeout_s=30)\nexecutor = RayExecutor(settings, num_hosts=1, num_slots=2, use_gpu=False)\nexecutor.start()\nexecutor.run(train, kwargs=dict(num_epochs=1))\nexecutor.shutdown()\n", "path": "examples/tensorflow2_mnist_ray.py"}]}
| 2,774 | 96 |
gh_patches_debug_32341
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-3217
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
contrib_bots: Prevent runaway bots with rate limiting
It's possible to create a runaway bot if you have a bot send a message to certain stimuli that in turns becomes a stimulus for the bot to send another message and so on and so forth.
We can prevent that in `contrib_bots/run.py` by adding some logic to prevent runaway bots.
Right now RestrictedClient make self.send_message just be client.send_message, but we can instead have a wrapper like this:
```
def send_message(self, *args, **kwargs):
self.rate_limit()
self.client.send_message(*args, **kwargs)
```
And then have appropriate logic in `rate_limit()` and some state variable to make sure less than N messages have been sent in the last second. It might make sense to have a small class for rate limiting that RestrictedClient uses an instance of.
You can do a pretty naive rate limiting scheme where you just keep the last N timestamps in a Python list and truncate it off the front as new sends come in and the list grows to size N+1.
--- END ISSUE ---
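Before the code listings below, here is a minimal, illustrative sketch of the timestamp-list scheme the issue describes; the class names, the 5-messages-per-second figure, and the exit behaviour are assumptions for illustration only, not the repository's actual implementation (that appears in the patch further down).
```python
import sys
import time


class RateLimiter(object):
    """Illustrative sketch: allow at most `max_messages` sends per `interval` seconds."""

    def __init__(self, max_messages=5, interval=1.0):
        self.max_messages = max_messages
        self.interval = interval
        self.timestamps = []

    def is_legal(self):
        self.timestamps.append(time.time())
        if len(self.timestamps) > self.max_messages:
            # Truncate off the front so only the last N send times are kept.
            self.timestamps.pop(0)
            # Too fast if the last N sends all happened within the interval.
            return self.timestamps[-1] - self.timestamps[0] >= self.interval
        return True


class RestrictedClient(object):
    """Simplified wrapper for illustration; the real class also exposes profile data."""

    def __init__(self, client):
        self.client = client
        self.rate_limiter = RateLimiter()

    def send_message(self, *args, **kwargs):
        if not self.rate_limiter.is_legal():
            sys.exit('Message rate limit reached -- possible runaway bot.')
        self.client.send_message(*args, **kwargs)
```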
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `contrib_bots/run.py`
Content:
```
1 #!/usr/bin/env python
2 from __future__ import print_function
3
4 import importlib
5 import logging
6 import optparse
7 import os
8 import sys
9
10 our_dir = os.path.dirname(os.path.abspath(__file__))
11
12 # For dev setups, we can find the API in the repo itself.
13 if os.path.exists(os.path.join(our_dir, '../api/zulip')):
14 sys.path.insert(0, '../api')
15
16 from zulip import Client
17
18 class RestrictedClient(object):
19 def __init__(self, client):
20 # Only expose a subset of our Client's functionality
21 user_profile = client.get_profile()
22 self.send_message = client.send_message
23 try:
24 self.full_name = user_profile['full_name']
25 self.email = user_profile['email']
26 except KeyError:
27 logging.error('Cannot fetch user profile, make sure you have set'
28 ' up the zuliprc file correctly.')
29 sys.exit(1)
30
31 def get_lib_module(lib_fn):
32 lib_fn = os.path.abspath(lib_fn)
33 if not os.path.dirname(lib_fn).startswith(os.path.join(our_dir, 'lib')):
34 print('Sorry, we will only import code from contrib_bots/lib.')
35 sys.exit(1)
36
37 if not lib_fn.endswith('.py'):
38 print('Please use a .py extension for library files.')
39 sys.exit(1)
40
41 sys.path.append('lib')
42 base_lib_fn = os.path.basename(os.path.splitext(lib_fn)[0])
43 module_name = 'lib.' + base_lib_fn
44 module = importlib.import_module(module_name)
45 return module
46
47 def run_message_handler_for_bot(lib_module, quiet, config_file):
48 # Make sure you set up your ~/.zuliprc
49 client = Client(config_file=config_file)
50 restricted_client = RestrictedClient(client)
51
52 message_handler = lib_module.handler_class()
53
54 class StateHandler(object):
55 def __init__(self):
56 self.state = None
57
58 def set_state(self, state):
59 self.state = state
60
61 def get_state(self):
62 return self.state
63
64 state_handler = StateHandler()
65
66 if not quiet:
67 print(message_handler.usage())
68
69 def handle_message(message):
70 logging.info('waiting for next message')
71 if message_handler.triage_message(message=message,
72 client=restricted_client):
73 message_handler.handle_message(
74 message=message,
75 client=restricted_client,
76 state_handler=state_handler
77 )
78
79 logging.info('starting message handling...')
80 client.call_on_each_message(handle_message)
81
82 def run():
83 usage = '''
84 ./run.py <lib file>
85
86 Example: ./run.py lib/followup.py
87
88 (This program loads bot-related code from the
89 library code and then runs a message loop,
90 feeding messages to the library code to handle.)
91
92 Please make sure you have a current ~/.zuliprc
93 file with the credentials you want to use for
94 this bot.
95
96 See lib/readme.md for more context.
97 '''
98
99 parser = optparse.OptionParser(usage=usage)
100 parser.add_option('--quiet', '-q',
101 action='store_true',
102 help='Turn off logging output.')
103 parser.add_option('--config-file',
104 action='store',
105 help='(alternate config file to ~/.zuliprc)')
106 (options, args) = parser.parse_args()
107
108 if len(args) == 0:
109 print('You must specify a library!')
110 sys.exit(1)
111
112 lib_module = get_lib_module(lib_fn=args[0])
113
114 if not options.quiet:
115 logging.basicConfig(stream=sys.stdout, level=logging.INFO)
116
117 run_message_handler_for_bot(
118 lib_module=lib_module,
119 config_file=options.config_file,
120 quiet=options.quiet
121 )
122
123 if __name__ == '__main__':
124 run()
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/contrib_bots/run.py b/contrib_bots/run.py
--- a/contrib_bots/run.py
+++ b/contrib_bots/run.py
@@ -6,6 +6,7 @@
 import optparse
 import os
 import sys
+import time
 
 our_dir = os.path.dirname(os.path.abspath(__file__))
 
@@ -15,11 +16,27 @@
 
 from zulip import Client
 
+class RateLimit(object):
+    def __init__(self, message_limit, interval_limit):
+        self.message_limit = message_limit
+        self.interval_limit = interval_limit
+        self.message_list = []
+
+    def is_legal(self):
+        self.message_list.append(time.time())
+        if len(self.message_list) > self.message_limit:
+            self.message_list.pop(0)
+            time_diff = self.message_list[-1] - self.message_list[0]
+            return time_diff >= self.interval_limit
+        else:
+            return True
+
 class RestrictedClient(object):
     def __init__(self, client):
         # Only expose a subset of our Client's functionality
         user_profile = client.get_profile()
-        self.send_message = client.send_message
+        self.rate_limit = RateLimit(20, 5)
+        self.client = client
         try:
             self.full_name = user_profile['full_name']
             self.email = user_profile['email']
@@ -28,6 +45,15 @@
                           ' up the zuliprc file correctly.')
             sys.exit(1)
 
+    def send_message(self, *args, **kwargs):
+        if self.rate_limit.is_legal():
+            self.client.send_message(*args, **kwargs)
+        else:
+            logging.error('-----> !*!*!*MESSAGE RATE LIMIT REACHED, EXITING*!*!*! <-----\n'
+                          'Is your bot trapped in an infinite loop by reacting to'
+                          ' its own messages?')
+            sys.exit(1)
+
 def get_lib_module(lib_fn):
     lib_fn = os.path.abspath(lib_fn)
     if not os.path.dirname(lib_fn).startswith(os.path.join(our_dir, 'lib')):
|
{"golden_diff": "diff --git a/contrib_bots/run.py b/contrib_bots/run.py\n--- a/contrib_bots/run.py\n+++ b/contrib_bots/run.py\n@@ -6,6 +6,7 @@\n import optparse\n import os\n import sys\n+import time\n \n our_dir = os.path.dirname(os.path.abspath(__file__))\n \n@@ -15,11 +16,27 @@\n \n from zulip import Client\n \n+class RateLimit(object):\n+ def __init__(self, message_limit, interval_limit):\n+ self.message_limit = message_limit\n+ self.interval_limit = interval_limit\n+ self.message_list = []\n+\n+ def is_legal(self):\n+ self.message_list.append(time.time())\n+ if len(self.message_list) > self.message_limit:\n+ self.message_list.pop(0)\n+ time_diff = self.message_list[-1] - self.message_list[0]\n+ return time_diff >= self.interval_limit\n+ else:\n+ return True\n+\n class RestrictedClient(object):\n def __init__(self, client):\n # Only expose a subset of our Client's functionality\n user_profile = client.get_profile()\n- self.send_message = client.send_message\n+ self.rate_limit = RateLimit(20, 5)\n+ self.client = client\n try:\n self.full_name = user_profile['full_name']\n self.email = user_profile['email']\n@@ -28,6 +45,15 @@\n ' up the zuliprc file correctly.')\n sys.exit(1)\n \n+ def send_message(self, *args, **kwargs):\n+ if self.rate_limit.is_legal():\n+ self.client.send_message(*args, **kwargs)\n+ else:\n+ logging.error('-----> !*!*!*MESSAGE RATE LIMIT REACHED, EXITING*!*!*! <-----\\n'\n+ 'Is your bot trapped in an infinite loop by reacting to'\n+ ' its own messages?')\n+ sys.exit(1)\n+\n def get_lib_module(lib_fn):\n lib_fn = os.path.abspath(lib_fn)\n if not os.path.dirname(lib_fn).startswith(os.path.join(our_dir, 'lib')):\n", "issue": "contrib_bots: Prevent runaway bots with rate limiting\nIt's possible to create a runaway bot if you have a bot send a message to certain stimuli that in turns becomes a stimulus for the bot to send another message and so on and so forth.\r\n\r\nWe can prevent that in `contrib_bots/run.py` by adding some logic to prevent runaway bots.\r\n\r\nRight now RestrictedClient make self.send_message just be client.send_message, but we can instead have a wrapper like this:\r\n\r\n```\r\ndef send_message(self, *args, **kwargs):\r\n self.rate_limit()\r\n self.client.send_message(*args, **kwargs)\r\n```\r\n\r\nAnd then have appropriate logic in `rate_limit()` and some state variable to make sure less than N messages have been sent in the last second. 
It might make sense to have a small class for rate limiting that RestrictedClient uses an instance of.\r\n\r\nYou can do a pretty naive rate limiting scheme where you just keep the last N timestamps in a Python list and truncate it off the front as new sends come in and the list grows to size N+1.\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import print_function\n\nimport importlib\nimport logging\nimport optparse\nimport os\nimport sys\n\nour_dir = os.path.dirname(os.path.abspath(__file__))\n\n# For dev setups, we can find the API in the repo itself.\nif os.path.exists(os.path.join(our_dir, '../api/zulip')):\n sys.path.insert(0, '../api')\n\nfrom zulip import Client\n\nclass RestrictedClient(object):\n def __init__(self, client):\n # Only expose a subset of our Client's functionality\n user_profile = client.get_profile()\n self.send_message = client.send_message\n try:\n self.full_name = user_profile['full_name']\n self.email = user_profile['email']\n except KeyError:\n logging.error('Cannot fetch user profile, make sure you have set'\n ' up the zuliprc file correctly.')\n sys.exit(1)\n\ndef get_lib_module(lib_fn):\n lib_fn = os.path.abspath(lib_fn)\n if not os.path.dirname(lib_fn).startswith(os.path.join(our_dir, 'lib')):\n print('Sorry, we will only import code from contrib_bots/lib.')\n sys.exit(1)\n\n if not lib_fn.endswith('.py'):\n print('Please use a .py extension for library files.')\n sys.exit(1)\n\n sys.path.append('lib')\n base_lib_fn = os.path.basename(os.path.splitext(lib_fn)[0])\n module_name = 'lib.' + base_lib_fn\n module = importlib.import_module(module_name)\n return module\n\ndef run_message_handler_for_bot(lib_module, quiet, config_file):\n # Make sure you set up your ~/.zuliprc\n client = Client(config_file=config_file)\n restricted_client = RestrictedClient(client)\n\n message_handler = lib_module.handler_class()\n\n class StateHandler(object):\n def __init__(self):\n self.state = None\n\n def set_state(self, state):\n self.state = state\n\n def get_state(self):\n return self.state\n\n state_handler = StateHandler()\n\n if not quiet:\n print(message_handler.usage())\n\n def handle_message(message):\n logging.info('waiting for next message')\n if message_handler.triage_message(message=message,\n client=restricted_client):\n message_handler.handle_message(\n message=message,\n client=restricted_client,\n state_handler=state_handler\n )\n\n logging.info('starting message handling...')\n client.call_on_each_message(handle_message)\n\ndef run():\n usage = '''\n ./run.py <lib file>\n\n Example: ./run.py lib/followup.py\n\n (This program loads bot-related code from the\n library code and then runs a message loop,\n feeding messages to the library code to handle.)\n\n Please make sure you have a current ~/.zuliprc\n file with the credentials you want to use for\n this bot.\n\n See lib/readme.md for more context.\n '''\n\n parser = optparse.OptionParser(usage=usage)\n parser.add_option('--quiet', '-q',\n action='store_true',\n help='Turn off logging output.')\n parser.add_option('--config-file',\n action='store',\n help='(alternate config file to ~/.zuliprc)')\n (options, args) = parser.parse_args()\n\n if len(args) == 0:\n print('You must specify a library!')\n sys.exit(1)\n\n lib_module = get_lib_module(lib_fn=args[0])\n\n if not options.quiet:\n logging.basicConfig(stream=sys.stdout, level=logging.INFO)\n\n run_message_handler_for_bot(\n lib_module=lib_module,\n config_file=options.config_file,\n quiet=options.quiet\n )\n\nif __name__ == 
'__main__':\n run()\n", "path": "contrib_bots/run.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import print_function\n\nimport importlib\nimport logging\nimport optparse\nimport os\nimport sys\nimport time\n\nour_dir = os.path.dirname(os.path.abspath(__file__))\n\n# For dev setups, we can find the API in the repo itself.\nif os.path.exists(os.path.join(our_dir, '../api/zulip')):\n sys.path.insert(0, '../api')\n\nfrom zulip import Client\n\nclass RateLimit(object):\n def __init__(self, message_limit, interval_limit):\n self.message_limit = message_limit\n self.interval_limit = interval_limit\n self.message_list = []\n\n def is_legal(self):\n self.message_list.append(time.time())\n if len(self.message_list) > self.message_limit:\n self.message_list.pop(0)\n time_diff = self.message_list[-1] - self.message_list[0]\n return time_diff >= self.interval_limit\n else:\n return True\n\nclass RestrictedClient(object):\n def __init__(self, client):\n # Only expose a subset of our Client's functionality\n user_profile = client.get_profile()\n self.rate_limit = RateLimit(20, 5)\n self.client = client\n try:\n self.full_name = user_profile['full_name']\n self.email = user_profile['email']\n except KeyError:\n logging.error('Cannot fetch user profile, make sure you have set'\n ' up the zuliprc file correctly.')\n sys.exit(1)\n\n def send_message(self, *args, **kwargs):\n if self.rate_limit.is_legal():\n self.client.send_message(*args, **kwargs)\n else:\n logging.error('-----> !*!*!*MESSAGE RATE LIMIT REACHED, EXITING*!*!*! <-----\\n'\n 'Is your bot trapped in an infinite loop by reacting to'\n ' its own messages?')\n sys.exit(1)\n\ndef get_lib_module(lib_fn):\n lib_fn = os.path.abspath(lib_fn)\n if not os.path.dirname(lib_fn).startswith(os.path.join(our_dir, 'lib')):\n print('Sorry, we will only import code from contrib_bots/lib.')\n sys.exit(1)\n\n if not lib_fn.endswith('.py'):\n print('Please use a .py extension for library files.')\n sys.exit(1)\n\n sys.path.append('lib')\n base_lib_fn = os.path.basename(os.path.splitext(lib_fn)[0])\n module_name = 'lib.' 
+ base_lib_fn\n module = importlib.import_module(module_name)\n return module\n\ndef run_message_handler_for_bot(lib_module, quiet, config_file):\n # Make sure you set up your ~/.zuliprc\n client = Client(config_file=config_file)\n restricted_client = RestrictedClient(client)\n\n message_handler = lib_module.handler_class()\n\n class StateHandler(object):\n def __init__(self):\n self.state = None\n\n def set_state(self, state):\n self.state = state\n\n def get_state(self):\n return self.state\n\n state_handler = StateHandler()\n\n if not quiet:\n print(message_handler.usage())\n\n def handle_message(message):\n logging.info('waiting for next message')\n if message_handler.triage_message(message=message,\n client=restricted_client):\n message_handler.handle_message(\n message=message,\n client=restricted_client,\n state_handler=state_handler\n )\n\n logging.info('starting message handling...')\n client.call_on_each_message(handle_message)\n\ndef run():\n usage = '''\n ./run.py <lib file>\n\n Example: ./run.py lib/followup.py\n\n (This program loads bot-related code from the\n library code and then runs a message loop,\n feeding messages to the library code to handle.)\n\n Please make sure you have a current ~/.zuliprc\n file with the credentials you want to use for\n this bot.\n\n See lib/readme.md for more context.\n '''\n\n parser = optparse.OptionParser(usage=usage)\n parser.add_option('--quiet', '-q',\n action='store_true',\n help='Turn off logging output.')\n parser.add_option('--config-file',\n action='store',\n help='(alternate config file to ~/.zuliprc)')\n (options, args) = parser.parse_args()\n\n if len(args) == 0:\n print('You must specify a library!')\n sys.exit(1)\n\n lib_module = get_lib_module(lib_fn=args[0])\n\n if not options.quiet:\n logging.basicConfig(stream=sys.stdout, level=logging.INFO)\n\n run_message_handler_for_bot(\n lib_module=lib_module,\n config_file=options.config_file,\n quiet=options.quiet\n )\n\nif __name__ == '__main__':\n run()\n", "path": "contrib_bots/run.py"}]}
| 1,560 | 471 |
gh_patches_debug_36999
|
rasdani/github-patches
|
git_diff
|
googleapis__google-auth-library-python-560
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
_default always sets `credentials.quota_project_id` to None
#546 introduced a bug where the quota project id always gets set to `None`. If no `quota_project_id` is passed to `load_credentials_from_file(...)` or `default(...)`, the existing project should not be overridden
--- END ISSUE ---
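For orientation before the file listing: the desired behaviour is that `with_quota_project` is applied only when a quota project ID was actually supplied, so credentials that already carry a quota project are left untouched. A hedged sketch of that guard follows; the helper name is illustrative and not part of the library.
```python
def _maybe_apply_quota_project(credentials, quota_project_id=None):
    # Illustrative helper: only override the quota project when the caller
    # passed one explicitly; otherwise keep whatever the credentials have.
    if quota_project_id:
        return credentials.with_quota_project(quota_project_id)
    return credentials
```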
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `google/auth/_default.py`
Content:
```
1 # Copyright 2015 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Application default credentials.
16
17 Implements application default credentials and project ID detection.
18 """
19
20 import io
21 import json
22 import logging
23 import os
24 import warnings
25
26 import six
27
28 from google.auth import environment_vars
29 from google.auth import exceptions
30 import google.auth.transport._http_client
31
32 _LOGGER = logging.getLogger(__name__)
33
34 # Valid types accepted for file-based credentials.
35 _AUTHORIZED_USER_TYPE = "authorized_user"
36 _SERVICE_ACCOUNT_TYPE = "service_account"
37 _VALID_TYPES = (_AUTHORIZED_USER_TYPE, _SERVICE_ACCOUNT_TYPE)
38
39 # Help message when no credentials can be found.
40 _HELP_MESSAGE = """\
41 Could not automatically determine credentials. Please set {env} or \
42 explicitly create credentials and re-run the application. For more \
43 information, please see \
44 https://cloud.google.com/docs/authentication/getting-started
45 """.format(
46 env=environment_vars.CREDENTIALS
47 ).strip()
48
49 # Warning when using Cloud SDK user credentials
50 _CLOUD_SDK_CREDENTIALS_WARNING = """\
51 Your application has authenticated using end user credentials from Google \
52 Cloud SDK without a quota project. You might receive a "quota exceeded" \
53 or "API not enabled" error. We recommend you rerun \
54 `gcloud auth application-default login` and make sure a quota project is \
55 added. Or you can use service accounts instead. For more information \
56 about service accounts, see https://cloud.google.com/docs/authentication/"""
57
58
59 def _warn_about_problematic_credentials(credentials):
60 """Determines if the credentials are problematic.
61
62 Credentials from the Cloud SDK that are associated with Cloud SDK's project
63 are problematic because they may not have APIs enabled and have limited
64 quota. If this is the case, warn about it.
65 """
66 from google.auth import _cloud_sdk
67
68 if credentials.client_id == _cloud_sdk.CLOUD_SDK_CLIENT_ID:
69 warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
70
71
72 def load_credentials_from_file(filename, scopes=None, quota_project_id=None):
73 """Loads Google credentials from a file.
74
75 The credentials file must be a service account key or stored authorized
76 user credentials.
77
78 Args:
79 filename (str): The full path to the credentials file.
80 scopes (Optional[Sequence[str]]): The list of scopes for the credentials. If
81 specified, the credentials will automatically be scoped if
82 necessary
83 quota_project_id (Optional[str]): The project ID used for
84 quota and billing.
85
86 Returns:
87 Tuple[google.auth.credentials.Credentials, Optional[str]]: Loaded
88 credentials and the project ID. Authorized user credentials do not
89 have the project ID information.
90
91 Raises:
92 google.auth.exceptions.DefaultCredentialsError: if the file is in the
93 wrong format or is missing.
94 """
95 if not os.path.exists(filename):
96 raise exceptions.DefaultCredentialsError(
97 "File {} was not found.".format(filename)
98 )
99
100 with io.open(filename, "r") as file_obj:
101 try:
102 info = json.load(file_obj)
103 except ValueError as caught_exc:
104 new_exc = exceptions.DefaultCredentialsError(
105 "File {} is not a valid json file.".format(filename), caught_exc
106 )
107 six.raise_from(new_exc, caught_exc)
108
109 # The type key should indicate that the file is either a service account
110 # credentials file or an authorized user credentials file.
111 credential_type = info.get("type")
112
113 if credential_type == _AUTHORIZED_USER_TYPE:
114 from google.oauth2 import credentials
115
116 try:
117 credentials = credentials.Credentials.from_authorized_user_info(
118 info, scopes=scopes
119 ).with_quota_project(quota_project_id)
120 except ValueError as caught_exc:
121 msg = "Failed to load authorized user credentials from {}".format(filename)
122 new_exc = exceptions.DefaultCredentialsError(msg, caught_exc)
123 six.raise_from(new_exc, caught_exc)
124 if not credentials.quota_project_id:
125 _warn_about_problematic_credentials(credentials)
126 return credentials, None
127
128 elif credential_type == _SERVICE_ACCOUNT_TYPE:
129 from google.oauth2 import service_account
130
131 try:
132 credentials = service_account.Credentials.from_service_account_info(
133 info, scopes=scopes
134 ).with_quota_project(quota_project_id)
135 except ValueError as caught_exc:
136 msg = "Failed to load service account credentials from {}".format(filename)
137 new_exc = exceptions.DefaultCredentialsError(msg, caught_exc)
138 six.raise_from(new_exc, caught_exc)
139 return credentials, info.get("project_id")
140
141 else:
142 raise exceptions.DefaultCredentialsError(
143 "The file {file} does not have a valid type. "
144 "Type is {type}, expected one of {valid_types}.".format(
145 file=filename, type=credential_type, valid_types=_VALID_TYPES
146 )
147 )
148
149
150 def _get_gcloud_sdk_credentials():
151 """Gets the credentials and project ID from the Cloud SDK."""
152 from google.auth import _cloud_sdk
153
154 # Check if application default credentials exist.
155 credentials_filename = _cloud_sdk.get_application_default_credentials_path()
156
157 if not os.path.isfile(credentials_filename):
158 return None, None
159
160 credentials, project_id = load_credentials_from_file(credentials_filename)
161
162 if not project_id:
163 project_id = _cloud_sdk.get_project_id()
164
165 return credentials, project_id
166
167
168 def _get_explicit_environ_credentials():
169 """Gets credentials from the GOOGLE_APPLICATION_CREDENTIALS environment
170 variable."""
171 explicit_file = os.environ.get(environment_vars.CREDENTIALS)
172
173 if explicit_file is not None:
174 credentials, project_id = load_credentials_from_file(
175 os.environ[environment_vars.CREDENTIALS]
176 )
177
178 return credentials, project_id
179
180 else:
181 return None, None
182
183
184 def _get_gae_credentials():
185 """Gets Google App Engine App Identity credentials and project ID."""
186 # While this library is normally bundled with app_engine, there are
187 # some cases where it's not available, so we tolerate ImportError.
188 try:
189 import google.auth.app_engine as app_engine
190 except ImportError:
191 return None, None
192
193 try:
194 credentials = app_engine.Credentials()
195 project_id = app_engine.get_project_id()
196 return credentials, project_id
197 except EnvironmentError:
198 return None, None
199
200
201 def _get_gce_credentials(request=None):
202 """Gets credentials and project ID from the GCE Metadata Service."""
203 # Ping requires a transport, but we want application default credentials
204 # to require no arguments. So, we'll use the _http_client transport which
205 # uses http.client. This is only acceptable because the metadata server
206 # doesn't do SSL and never requires proxies.
207
208 # While this library is normally bundled with compute_engine, there are
209 # some cases where it's not available, so we tolerate ImportError.
210 try:
211 from google.auth import compute_engine
212 from google.auth.compute_engine import _metadata
213 except ImportError:
214 return None, None
215
216 if request is None:
217 request = google.auth.transport._http_client.Request()
218
219 if _metadata.ping(request=request):
220 # Get the project ID.
221 try:
222 project_id = _metadata.get_project_id(request=request)
223 except exceptions.TransportError:
224 project_id = None
225
226 return compute_engine.Credentials(), project_id
227 else:
228 return None, None
229
230
231 def default(scopes=None, request=None, quota_project_id=None):
232 """Gets the default credentials for the current environment.
233
234 `Application Default Credentials`_ provides an easy way to obtain
235 credentials to call Google APIs for server-to-server or local applications.
236 This function acquires credentials from the environment in the following
237 order:
238
239 1. If the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` is set
240 to the path of a valid service account JSON private key file, then it is
241 loaded and returned. The project ID returned is the project ID defined
242 in the service account file if available (some older files do not
243 contain project ID information).
244 2. If the `Google Cloud SDK`_ is installed and has application default
245 credentials set they are loaded and returned.
246
247 To enable application default credentials with the Cloud SDK run::
248
249 gcloud auth application-default login
250
251 If the Cloud SDK has an active project, the project ID is returned. The
252 active project can be set using::
253
254 gcloud config set project
255
256 3. If the application is running in the `App Engine standard environment`_
257 then the credentials and project ID from the `App Identity Service`_
258 are used.
259 4. If the application is running in `Compute Engine`_ or the
260 `App Engine flexible environment`_ then the credentials and project ID
261 are obtained from the `Metadata Service`_.
262 5. If no credentials are found,
263 :class:`~google.auth.exceptions.DefaultCredentialsError` will be raised.
264
265 .. _Application Default Credentials: https://developers.google.com\
266 /identity/protocols/application-default-credentials
267 .. _Google Cloud SDK: https://cloud.google.com/sdk
268 .. _App Engine standard environment: https://cloud.google.com/appengine
269 .. _App Identity Service: https://cloud.google.com/appengine/docs/python\
270 /appidentity/
271 .. _Compute Engine: https://cloud.google.com/compute
272 .. _App Engine flexible environment: https://cloud.google.com\
273 /appengine/flexible
274 .. _Metadata Service: https://cloud.google.com/compute/docs\
275 /storing-retrieving-metadata
276
277 Example::
278
279 import google.auth
280
281 credentials, project_id = google.auth.default()
282
283 Args:
284 scopes (Sequence[str]): The list of scopes for the credentials. If
285 specified, the credentials will automatically be scoped if
286 necessary.
287 request (google.auth.transport.Request): An object used to make
288 HTTP requests. This is used to detect whether the application
289 is running on Compute Engine. If not specified, then it will
290 use the standard library http client to make requests.
291 quota_project_id (Optional[str]): The project ID used for
292 quota and billing.
293 Returns:
294 Tuple[~google.auth.credentials.Credentials, Optional[str]]:
295 the current environment's credentials and project ID. Project ID
296 may be None, which indicates that the Project ID could not be
297 ascertained from the environment.
298
299 Raises:
300 ~google.auth.exceptions.DefaultCredentialsError:
301 If no credentials were found, or if the credentials found were
302 invalid.
303 """
304 from google.auth.credentials import with_scopes_if_required
305
306 explicit_project_id = os.environ.get(
307 environment_vars.PROJECT, os.environ.get(environment_vars.LEGACY_PROJECT)
308 )
309
310 checkers = (
311 _get_explicit_environ_credentials,
312 _get_gcloud_sdk_credentials,
313 _get_gae_credentials,
314 lambda: _get_gce_credentials(request),
315 )
316
317 for checker in checkers:
318 credentials, project_id = checker()
319 if credentials is not None:
320 credentials = with_scopes_if_required(
321 credentials, scopes
322 ).with_quota_project(quota_project_id)
323 effective_project_id = explicit_project_id or project_id
324 if not effective_project_id:
325 _LOGGER.warning(
326 "No project ID could be determined. Consider running "
327 "`gcloud config set project` or setting the %s "
328 "environment variable",
329 environment_vars.PROJECT,
330 )
331 return credentials, effective_project_id
332
333 raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
334
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/google/auth/_default.py b/google/auth/_default.py
--- a/google/auth/_default.py
+++ b/google/auth/_default.py
@@ -116,11 +116,13 @@
try:
credentials = credentials.Credentials.from_authorized_user_info(
info, scopes=scopes
- ).with_quota_project(quota_project_id)
+ )
except ValueError as caught_exc:
msg = "Failed to load authorized user credentials from {}".format(filename)
new_exc = exceptions.DefaultCredentialsError(msg, caught_exc)
six.raise_from(new_exc, caught_exc)
+ if quota_project_id:
+ credentials = credentials.with_quota_project(quota_project_id)
if not credentials.quota_project_id:
_warn_about_problematic_credentials(credentials)
return credentials, None
@@ -131,11 +133,13 @@
try:
credentials = service_account.Credentials.from_service_account_info(
info, scopes=scopes
- ).with_quota_project(quota_project_id)
+ )
except ValueError as caught_exc:
msg = "Failed to load service account credentials from {}".format(filename)
new_exc = exceptions.DefaultCredentialsError(msg, caught_exc)
six.raise_from(new_exc, caught_exc)
+ if quota_project_id:
+ credentials = credentials.with_quota_project(quota_project_id)
return credentials, info.get("project_id")
else:
@@ -317,9 +321,10 @@
for checker in checkers:
credentials, project_id = checker()
if credentials is not None:
- credentials = with_scopes_if_required(
- credentials, scopes
- ).with_quota_project(quota_project_id)
+ credentials = with_scopes_if_required(credentials, scopes)
+ if quota_project_id:
+ credentials = credentials.with_quota_project(quota_project_id)
+
effective_project_id = explicit_project_id or project_id
if not effective_project_id:
_LOGGER.warning(
|
{"golden_diff": "diff --git a/google/auth/_default.py b/google/auth/_default.py\n--- a/google/auth/_default.py\n+++ b/google/auth/_default.py\n@@ -116,11 +116,13 @@\n try:\n credentials = credentials.Credentials.from_authorized_user_info(\n info, scopes=scopes\n- ).with_quota_project(quota_project_id)\n+ )\n except ValueError as caught_exc:\n msg = \"Failed to load authorized user credentials from {}\".format(filename)\n new_exc = exceptions.DefaultCredentialsError(msg, caught_exc)\n six.raise_from(new_exc, caught_exc)\n+ if quota_project_id:\n+ credentials = credentials.with_quota_project(quota_project_id)\n if not credentials.quota_project_id:\n _warn_about_problematic_credentials(credentials)\n return credentials, None\n@@ -131,11 +133,13 @@\n try:\n credentials = service_account.Credentials.from_service_account_info(\n info, scopes=scopes\n- ).with_quota_project(quota_project_id)\n+ )\n except ValueError as caught_exc:\n msg = \"Failed to load service account credentials from {}\".format(filename)\n new_exc = exceptions.DefaultCredentialsError(msg, caught_exc)\n six.raise_from(new_exc, caught_exc)\n+ if quota_project_id:\n+ credentials = credentials.with_quota_project(quota_project_id)\n return credentials, info.get(\"project_id\")\n \n else:\n@@ -317,9 +321,10 @@\n for checker in checkers:\n credentials, project_id = checker()\n if credentials is not None:\n- credentials = with_scopes_if_required(\n- credentials, scopes\n- ).with_quota_project(quota_project_id)\n+ credentials = with_scopes_if_required(credentials, scopes)\n+ if quota_project_id:\n+ credentials = credentials.with_quota_project(quota_project_id)\n+\n effective_project_id = explicit_project_id or project_id\n if not effective_project_id:\n _LOGGER.warning(\n", "issue": "_default always sets `credentials.quota_project_id` to None\n#546 introduced a bug where the quota project id always gets set to `None`. If no `quota_project_id` is passed to `load_credentials_from_file(...)` or `default(...)`, the existing project should not be overridden\n", "before_files": [{"content": "# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Application default credentials.\n\nImplements application default credentials and project ID detection.\n\"\"\"\n\nimport io\nimport json\nimport logging\nimport os\nimport warnings\n\nimport six\n\nfrom google.auth import environment_vars\nfrom google.auth import exceptions\nimport google.auth.transport._http_client\n\n_LOGGER = logging.getLogger(__name__)\n\n# Valid types accepted for file-based credentials.\n_AUTHORIZED_USER_TYPE = \"authorized_user\"\n_SERVICE_ACCOUNT_TYPE = \"service_account\"\n_VALID_TYPES = (_AUTHORIZED_USER_TYPE, _SERVICE_ACCOUNT_TYPE)\n\n# Help message when no credentials can be found.\n_HELP_MESSAGE = \"\"\"\\\nCould not automatically determine credentials. Please set {env} or \\\nexplicitly create credentials and re-run the application. 
For more \\\ninformation, please see \\\nhttps://cloud.google.com/docs/authentication/getting-started\n\"\"\".format(\n env=environment_vars.CREDENTIALS\n).strip()\n\n# Warning when using Cloud SDK user credentials\n_CLOUD_SDK_CREDENTIALS_WARNING = \"\"\"\\\nYour application has authenticated using end user credentials from Google \\\nCloud SDK without a quota project. You might receive a \"quota exceeded\" \\\nor \"API not enabled\" error. We recommend you rerun \\\n`gcloud auth application-default login` and make sure a quota project is \\\nadded. Or you can use service accounts instead. For more information \\\nabout service accounts, see https://cloud.google.com/docs/authentication/\"\"\"\n\n\ndef _warn_about_problematic_credentials(credentials):\n \"\"\"Determines if the credentials are problematic.\n\n Credentials from the Cloud SDK that are associated with Cloud SDK's project\n are problematic because they may not have APIs enabled and have limited\n quota. If this is the case, warn about it.\n \"\"\"\n from google.auth import _cloud_sdk\n\n if credentials.client_id == _cloud_sdk.CLOUD_SDK_CLIENT_ID:\n warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)\n\n\ndef load_credentials_from_file(filename, scopes=None, quota_project_id=None):\n \"\"\"Loads Google credentials from a file.\n\n The credentials file must be a service account key or stored authorized\n user credentials.\n\n Args:\n filename (str): The full path to the credentials file.\n scopes (Optional[Sequence[str]]): The list of scopes for the credentials. If\n specified, the credentials will automatically be scoped if\n necessary\n quota_project_id (Optional[str]): The project ID used for\n quota and billing.\n\n Returns:\n Tuple[google.auth.credentials.Credentials, Optional[str]]: Loaded\n credentials and the project ID. 
Authorized user credentials do not\n have the project ID information.\n\n Raises:\n google.auth.exceptions.DefaultCredentialsError: if the file is in the\n wrong format or is missing.\n \"\"\"\n if not os.path.exists(filename):\n raise exceptions.DefaultCredentialsError(\n \"File {} was not found.\".format(filename)\n )\n\n with io.open(filename, \"r\") as file_obj:\n try:\n info = json.load(file_obj)\n except ValueError as caught_exc:\n new_exc = exceptions.DefaultCredentialsError(\n \"File {} is not a valid json file.\".format(filename), caught_exc\n )\n six.raise_from(new_exc, caught_exc)\n\n # The type key should indicate that the file is either a service account\n # credentials file or an authorized user credentials file.\n credential_type = info.get(\"type\")\n\n if credential_type == _AUTHORIZED_USER_TYPE:\n from google.oauth2 import credentials\n\n try:\n credentials = credentials.Credentials.from_authorized_user_info(\n info, scopes=scopes\n ).with_quota_project(quota_project_id)\n except ValueError as caught_exc:\n msg = \"Failed to load authorized user credentials from {}\".format(filename)\n new_exc = exceptions.DefaultCredentialsError(msg, caught_exc)\n six.raise_from(new_exc, caught_exc)\n if not credentials.quota_project_id:\n _warn_about_problematic_credentials(credentials)\n return credentials, None\n\n elif credential_type == _SERVICE_ACCOUNT_TYPE:\n from google.oauth2 import service_account\n\n try:\n credentials = service_account.Credentials.from_service_account_info(\n info, scopes=scopes\n ).with_quota_project(quota_project_id)\n except ValueError as caught_exc:\n msg = \"Failed to load service account credentials from {}\".format(filename)\n new_exc = exceptions.DefaultCredentialsError(msg, caught_exc)\n six.raise_from(new_exc, caught_exc)\n return credentials, info.get(\"project_id\")\n\n else:\n raise exceptions.DefaultCredentialsError(\n \"The file {file} does not have a valid type. 
\"\n \"Type is {type}, expected one of {valid_types}.\".format(\n file=filename, type=credential_type, valid_types=_VALID_TYPES\n )\n )\n\n\ndef _get_gcloud_sdk_credentials():\n \"\"\"Gets the credentials and project ID from the Cloud SDK.\"\"\"\n from google.auth import _cloud_sdk\n\n # Check if application default credentials exist.\n credentials_filename = _cloud_sdk.get_application_default_credentials_path()\n\n if not os.path.isfile(credentials_filename):\n return None, None\n\n credentials, project_id = load_credentials_from_file(credentials_filename)\n\n if not project_id:\n project_id = _cloud_sdk.get_project_id()\n\n return credentials, project_id\n\n\ndef _get_explicit_environ_credentials():\n \"\"\"Gets credentials from the GOOGLE_APPLICATION_CREDENTIALS environment\n variable.\"\"\"\n explicit_file = os.environ.get(environment_vars.CREDENTIALS)\n\n if explicit_file is not None:\n credentials, project_id = load_credentials_from_file(\n os.environ[environment_vars.CREDENTIALS]\n )\n\n return credentials, project_id\n\n else:\n return None, None\n\n\ndef _get_gae_credentials():\n \"\"\"Gets Google App Engine App Identity credentials and project ID.\"\"\"\n # While this library is normally bundled with app_engine, there are\n # some cases where it's not available, so we tolerate ImportError.\n try:\n import google.auth.app_engine as app_engine\n except ImportError:\n return None, None\n\n try:\n credentials = app_engine.Credentials()\n project_id = app_engine.get_project_id()\n return credentials, project_id\n except EnvironmentError:\n return None, None\n\n\ndef _get_gce_credentials(request=None):\n \"\"\"Gets credentials and project ID from the GCE Metadata Service.\"\"\"\n # Ping requires a transport, but we want application default credentials\n # to require no arguments. So, we'll use the _http_client transport which\n # uses http.client. This is only acceptable because the metadata server\n # doesn't do SSL and never requires proxies.\n\n # While this library is normally bundled with compute_engine, there are\n # some cases where it's not available, so we tolerate ImportError.\n try:\n from google.auth import compute_engine\n from google.auth.compute_engine import _metadata\n except ImportError:\n return None, None\n\n if request is None:\n request = google.auth.transport._http_client.Request()\n\n if _metadata.ping(request=request):\n # Get the project ID.\n try:\n project_id = _metadata.get_project_id(request=request)\n except exceptions.TransportError:\n project_id = None\n\n return compute_engine.Credentials(), project_id\n else:\n return None, None\n\n\ndef default(scopes=None, request=None, quota_project_id=None):\n \"\"\"Gets the default credentials for the current environment.\n\n `Application Default Credentials`_ provides an easy way to obtain\n credentials to call Google APIs for server-to-server or local applications.\n This function acquires credentials from the environment in the following\n order:\n\n 1. If the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` is set\n to the path of a valid service account JSON private key file, then it is\n loaded and returned. The project ID returned is the project ID defined\n in the service account file if available (some older files do not\n contain project ID information).\n 2. 
If the `Google Cloud SDK`_ is installed and has application default\n credentials set they are loaded and returned.\n\n To enable application default credentials with the Cloud SDK run::\n\n gcloud auth application-default login\n\n If the Cloud SDK has an active project, the project ID is returned. The\n active project can be set using::\n\n gcloud config set project\n\n 3. If the application is running in the `App Engine standard environment`_\n then the credentials and project ID from the `App Identity Service`_\n are used.\n 4. If the application is running in `Compute Engine`_ or the\n `App Engine flexible environment`_ then the credentials and project ID\n are obtained from the `Metadata Service`_.\n 5. If no credentials are found,\n :class:`~google.auth.exceptions.DefaultCredentialsError` will be raised.\n\n .. _Application Default Credentials: https://developers.google.com\\\n /identity/protocols/application-default-credentials\n .. _Google Cloud SDK: https://cloud.google.com/sdk\n .. _App Engine standard environment: https://cloud.google.com/appengine\n .. _App Identity Service: https://cloud.google.com/appengine/docs/python\\\n /appidentity/\n .. _Compute Engine: https://cloud.google.com/compute\n .. _App Engine flexible environment: https://cloud.google.com\\\n /appengine/flexible\n .. _Metadata Service: https://cloud.google.com/compute/docs\\\n /storing-retrieving-metadata\n\n Example::\n\n import google.auth\n\n credentials, project_id = google.auth.default()\n\n Args:\n scopes (Sequence[str]): The list of scopes for the credentials. If\n specified, the credentials will automatically be scoped if\n necessary.\n request (google.auth.transport.Request): An object used to make\n HTTP requests. This is used to detect whether the application\n is running on Compute Engine. If not specified, then it will\n use the standard library http client to make requests.\n quota_project_id (Optional[str]): The project ID used for\n quota and billing.\n Returns:\n Tuple[~google.auth.credentials.Credentials, Optional[str]]:\n the current environment's credentials and project ID. Project ID\n may be None, which indicates that the Project ID could not be\n ascertained from the environment.\n\n Raises:\n ~google.auth.exceptions.DefaultCredentialsError:\n If no credentials were found, or if the credentials found were\n invalid.\n \"\"\"\n from google.auth.credentials import with_scopes_if_required\n\n explicit_project_id = os.environ.get(\n environment_vars.PROJECT, os.environ.get(environment_vars.LEGACY_PROJECT)\n )\n\n checkers = (\n _get_explicit_environ_credentials,\n _get_gcloud_sdk_credentials,\n _get_gae_credentials,\n lambda: _get_gce_credentials(request),\n )\n\n for checker in checkers:\n credentials, project_id = checker()\n if credentials is not None:\n credentials = with_scopes_if_required(\n credentials, scopes\n ).with_quota_project(quota_project_id)\n effective_project_id = explicit_project_id or project_id\n if not effective_project_id:\n _LOGGER.warning(\n \"No project ID could be determined. 
Consider running \"\n \"`gcloud config set project` or setting the %s \"\n \"environment variable\",\n environment_vars.PROJECT,\n )\n return credentials, effective_project_id\n\n raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)\n", "path": "google/auth/_default.py"}], "after_files": [{"content": "# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Application default credentials.\n\nImplements application default credentials and project ID detection.\n\"\"\"\n\nimport io\nimport json\nimport logging\nimport os\nimport warnings\n\nimport six\n\nfrom google.auth import environment_vars\nfrom google.auth import exceptions\nimport google.auth.transport._http_client\n\n_LOGGER = logging.getLogger(__name__)\n\n# Valid types accepted for file-based credentials.\n_AUTHORIZED_USER_TYPE = \"authorized_user\"\n_SERVICE_ACCOUNT_TYPE = \"service_account\"\n_VALID_TYPES = (_AUTHORIZED_USER_TYPE, _SERVICE_ACCOUNT_TYPE)\n\n# Help message when no credentials can be found.\n_HELP_MESSAGE = \"\"\"\\\nCould not automatically determine credentials. Please set {env} or \\\nexplicitly create credentials and re-run the application. For more \\\ninformation, please see \\\nhttps://cloud.google.com/docs/authentication/getting-started\n\"\"\".format(\n env=environment_vars.CREDENTIALS\n).strip()\n\n# Warning when using Cloud SDK user credentials\n_CLOUD_SDK_CREDENTIALS_WARNING = \"\"\"\\\nYour application has authenticated using end user credentials from Google \\\nCloud SDK without a quota project. You might receive a \"quota exceeded\" \\\nor \"API not enabled\" error. We recommend you rerun \\\n`gcloud auth application-default login` and make sure a quota project is \\\nadded. Or you can use service accounts instead. For more information \\\nabout service accounts, see https://cloud.google.com/docs/authentication/\"\"\"\n\n\ndef _warn_about_problematic_credentials(credentials):\n \"\"\"Determines if the credentials are problematic.\n\n Credentials from the Cloud SDK that are associated with Cloud SDK's project\n are problematic because they may not have APIs enabled and have limited\n quota. If this is the case, warn about it.\n \"\"\"\n from google.auth import _cloud_sdk\n\n if credentials.client_id == _cloud_sdk.CLOUD_SDK_CLIENT_ID:\n warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)\n\n\ndef load_credentials_from_file(filename, scopes=None, quota_project_id=None):\n \"\"\"Loads Google credentials from a file.\n\n The credentials file must be a service account key or stored authorized\n user credentials.\n\n Args:\n filename (str): The full path to the credentials file.\n scopes (Optional[Sequence[str]]): The list of scopes for the credentials. If\n specified, the credentials will automatically be scoped if\n necessary\n quota_project_id (Optional[str]): The project ID used for\n quota and billing.\n\n Returns:\n Tuple[google.auth.credentials.Credentials, Optional[str]]: Loaded\n credentials and the project ID. 
Authorized user credentials do not\n have the project ID information.\n\n Raises:\n google.auth.exceptions.DefaultCredentialsError: if the file is in the\n wrong format or is missing.\n \"\"\"\n if not os.path.exists(filename):\n raise exceptions.DefaultCredentialsError(\n \"File {} was not found.\".format(filename)\n )\n\n with io.open(filename, \"r\") as file_obj:\n try:\n info = json.load(file_obj)\n except ValueError as caught_exc:\n new_exc = exceptions.DefaultCredentialsError(\n \"File {} is not a valid json file.\".format(filename), caught_exc\n )\n six.raise_from(new_exc, caught_exc)\n\n # The type key should indicate that the file is either a service account\n # credentials file or an authorized user credentials file.\n credential_type = info.get(\"type\")\n\n if credential_type == _AUTHORIZED_USER_TYPE:\n from google.oauth2 import credentials\n\n try:\n credentials = credentials.Credentials.from_authorized_user_info(\n info, scopes=scopes\n )\n except ValueError as caught_exc:\n msg = \"Failed to load authorized user credentials from {}\".format(filename)\n new_exc = exceptions.DefaultCredentialsError(msg, caught_exc)\n six.raise_from(new_exc, caught_exc)\n if quota_project_id:\n credentials = credentials.with_quota_project(quota_project_id)\n if not credentials.quota_project_id:\n _warn_about_problematic_credentials(credentials)\n return credentials, None\n\n elif credential_type == _SERVICE_ACCOUNT_TYPE:\n from google.oauth2 import service_account\n\n try:\n credentials = service_account.Credentials.from_service_account_info(\n info, scopes=scopes\n )\n except ValueError as caught_exc:\n msg = \"Failed to load service account credentials from {}\".format(filename)\n new_exc = exceptions.DefaultCredentialsError(msg, caught_exc)\n six.raise_from(new_exc, caught_exc)\n if quota_project_id:\n credentials = credentials.with_quota_project(quota_project_id)\n return credentials, info.get(\"project_id\")\n\n else:\n raise exceptions.DefaultCredentialsError(\n \"The file {file} does not have a valid type. 
\"\n \"Type is {type}, expected one of {valid_types}.\".format(\n file=filename, type=credential_type, valid_types=_VALID_TYPES\n )\n )\n\n\ndef _get_gcloud_sdk_credentials():\n \"\"\"Gets the credentials and project ID from the Cloud SDK.\"\"\"\n from google.auth import _cloud_sdk\n\n # Check if application default credentials exist.\n credentials_filename = _cloud_sdk.get_application_default_credentials_path()\n\n if not os.path.isfile(credentials_filename):\n return None, None\n\n credentials, project_id = load_credentials_from_file(credentials_filename)\n\n if not project_id:\n project_id = _cloud_sdk.get_project_id()\n\n return credentials, project_id\n\n\ndef _get_explicit_environ_credentials():\n \"\"\"Gets credentials from the GOOGLE_APPLICATION_CREDENTIALS environment\n variable.\"\"\"\n explicit_file = os.environ.get(environment_vars.CREDENTIALS)\n\n if explicit_file is not None:\n credentials, project_id = load_credentials_from_file(\n os.environ[environment_vars.CREDENTIALS]\n )\n\n return credentials, project_id\n\n else:\n return None, None\n\n\ndef _get_gae_credentials():\n \"\"\"Gets Google App Engine App Identity credentials and project ID.\"\"\"\n # While this library is normally bundled with app_engine, there are\n # some cases where it's not available, so we tolerate ImportError.\n try:\n import google.auth.app_engine as app_engine\n except ImportError:\n return None, None\n\n try:\n credentials = app_engine.Credentials()\n project_id = app_engine.get_project_id()\n return credentials, project_id\n except EnvironmentError:\n return None, None\n\n\ndef _get_gce_credentials(request=None):\n \"\"\"Gets credentials and project ID from the GCE Metadata Service.\"\"\"\n # Ping requires a transport, but we want application default credentials\n # to require no arguments. So, we'll use the _http_client transport which\n # uses http.client. This is only acceptable because the metadata server\n # doesn't do SSL and never requires proxies.\n\n # While this library is normally bundled with compute_engine, there are\n # some cases where it's not available, so we tolerate ImportError.\n try:\n from google.auth import compute_engine\n from google.auth.compute_engine import _metadata\n except ImportError:\n return None, None\n\n if request is None:\n request = google.auth.transport._http_client.Request()\n\n if _metadata.ping(request=request):\n # Get the project ID.\n try:\n project_id = _metadata.get_project_id(request=request)\n except exceptions.TransportError:\n project_id = None\n\n return compute_engine.Credentials(), project_id\n else:\n return None, None\n\n\ndef default(scopes=None, request=None, quota_project_id=None):\n \"\"\"Gets the default credentials for the current environment.\n\n `Application Default Credentials`_ provides an easy way to obtain\n credentials to call Google APIs for server-to-server or local applications.\n This function acquires credentials from the environment in the following\n order:\n\n 1. If the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` is set\n to the path of a valid service account JSON private key file, then it is\n loaded and returned. The project ID returned is the project ID defined\n in the service account file if available (some older files do not\n contain project ID information).\n 2. 
If the `Google Cloud SDK`_ is installed and has application default\n credentials set they are loaded and returned.\n\n To enable application default credentials with the Cloud SDK run::\n\n gcloud auth application-default login\n\n If the Cloud SDK has an active project, the project ID is returned. The\n active project can be set using::\n\n gcloud config set project\n\n 3. If the application is running in the `App Engine standard environment`_\n then the credentials and project ID from the `App Identity Service`_\n are used.\n 4. If the application is running in `Compute Engine`_ or the\n `App Engine flexible environment`_ then the credentials and project ID\n are obtained from the `Metadata Service`_.\n 5. If no credentials are found,\n :class:`~google.auth.exceptions.DefaultCredentialsError` will be raised.\n\n .. _Application Default Credentials: https://developers.google.com\\\n /identity/protocols/application-default-credentials\n .. _Google Cloud SDK: https://cloud.google.com/sdk\n .. _App Engine standard environment: https://cloud.google.com/appengine\n .. _App Identity Service: https://cloud.google.com/appengine/docs/python\\\n /appidentity/\n .. _Compute Engine: https://cloud.google.com/compute\n .. _App Engine flexible environment: https://cloud.google.com\\\n /appengine/flexible\n .. _Metadata Service: https://cloud.google.com/compute/docs\\\n /storing-retrieving-metadata\n\n Example::\n\n import google.auth\n\n credentials, project_id = google.auth.default()\n\n Args:\n scopes (Sequence[str]): The list of scopes for the credentials. If\n specified, the credentials will automatically be scoped if\n necessary.\n request (google.auth.transport.Request): An object used to make\n HTTP requests. This is used to detect whether the application\n is running on Compute Engine. If not specified, then it will\n use the standard library http client to make requests.\n quota_project_id (Optional[str]): The project ID used for\n quota and billing.\n Returns:\n Tuple[~google.auth.credentials.Credentials, Optional[str]]:\n the current environment's credentials and project ID. Project ID\n may be None, which indicates that the Project ID could not be\n ascertained from the environment.\n\n Raises:\n ~google.auth.exceptions.DefaultCredentialsError:\n If no credentials were found, or if the credentials found were\n invalid.\n \"\"\"\n from google.auth.credentials import with_scopes_if_required\n\n explicit_project_id = os.environ.get(\n environment_vars.PROJECT, os.environ.get(environment_vars.LEGACY_PROJECT)\n )\n\n checkers = (\n _get_explicit_environ_credentials,\n _get_gcloud_sdk_credentials,\n _get_gae_credentials,\n lambda: _get_gce_credentials(request),\n )\n\n for checker in checkers:\n credentials, project_id = checker()\n if credentials is not None:\n credentials = with_scopes_if_required(credentials, scopes)\n if quota_project_id:\n credentials = credentials.with_quota_project(quota_project_id)\n\n effective_project_id = explicit_project_id or project_id\n if not effective_project_id:\n _LOGGER.warning(\n \"No project ID could be determined. Consider running \"\n \"`gcloud config set project` or setting the %s \"\n \"environment variable\",\n environment_vars.PROJECT,\n )\n return credentials, effective_project_id\n\n raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)\n", "path": "google/auth/_default.py"}]}
| 3,786 | 426 |
gh_patches_debug_38222
|
rasdani/github-patches
|
git_diff
|
getredash__redash-1394
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Share access permissions for add/remove widgets
Hi @arikfr ,
in our PR #1113 we focused primarily on sharing access permissions for queries, and for dashboards we currently only allow modifying basic info of the dashboard. What is still missing is the ability to allow other users to add and remove widgets. Is that something you are planning to add, or should we give it a shot?
Another thing that came up - we're currently enforcing `require_admin_or_owner(...)` for managing access permissions (e.g., https://github.com/getredash/redash/blob/master/redash/handlers/permissions.py#L42). This is actually a very restrictive limitation, and we believe that anybody with permissions (owner, admin, other permitted users) should be able to add/remove users. For instance, if you consider person A creating a dashboard, then giving access to persons B and C who are actively maintaining that dashboard. Then, if person A leaves the company, B and C would not be able to make the required changes to add another person D. What do you think?
/cc @rohanpd
--- END ISSUE ---
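Editorial note (not part of the issue above): the patch shown further down replaces the owner/admin-only check with a shared modify-permission check, `require_object_modify_permission`. As a minimal sketch of what such a helper could look like — `has_permission` and `has_access` are assumed, illustrative method names, and the real implementation in `redash.permissions` may differ:

```python
# Hedged sketch only: `has_permission` and `has_access` are illustrative
# helper names, not taken from the redash code shown in this record.
from flask import abort


def require_object_modify_permission(obj, user):
    """Allow admins, the owner, and any explicitly granted user to modify."""
    if user.has_permission("admin"):
        return
    if obj.user_id == user.id:
        return
    if obj.has_access(user, access_type="modify"):
        return
    abort(403)
```

Under a model like this, persons B and C from the example in the issue keep their modify rights even after the original owner A leaves.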
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/handlers/widgets.py`
Content:
```
1 import json
2
3 from flask import request
4
5 from redash import models
6 from redash.permissions import require_permission, require_admin_or_owner, require_access, view_only
7 from redash.handlers.base import BaseResource
8
9
10 class WidgetListResource(BaseResource):
11 @require_permission('edit_dashboard')
12 def post(self):
13 widget_properties = request.get_json(force=True)
14 dashboard = models.Dashboard.get_by_id_and_org(widget_properties.pop('dashboard_id'), self.current_org)
15 require_admin_or_owner(dashboard.user_id)
16
17 widget_properties['options'] = json.dumps(widget_properties['options'])
18 widget_properties.pop('id', None)
19 widget_properties['dashboard'] = dashboard
20
21 visualization_id = widget_properties.pop('visualization_id')
22 if visualization_id:
23 visualization = models.Visualization.get_by_id_and_org(visualization_id, self.current_org)
24 require_access(visualization.query.groups, self.current_user, view_only)
25 else:
26 visualization = None
27
28 widget_properties['visualization'] = visualization
29
30 widget = models.Widget.create(**widget_properties)
31
32 layout = json.loads(widget.dashboard.layout)
33 new_row = True
34
35 if len(layout) == 0 or widget.width == 2:
36 layout.append([widget.id])
37 elif len(layout[-1]) == 1:
38 neighbour_widget = models.Widget.get(models.Widget.id == layout[-1][0])
39 if neighbour_widget.width == 1:
40 layout[-1].append(widget.id)
41 new_row = False
42 else:
43 layout.append([widget.id])
44 else:
45 layout.append([widget.id])
46
47 widget.dashboard.layout = json.dumps(layout)
48 widget.dashboard.save()
49
50 return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row}
51
52
53 class WidgetResource(BaseResource):
54 @require_permission('edit_dashboard')
55 def post(self, widget_id):
56 # This method currently handles Text Box widgets only.
57 widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)
58 require_admin_or_owner(widget.dashboard.user_id)
59 widget_properties = request.get_json(force=True)
60 widget.text = widget_properties['text']
61 widget.save()
62
63 return widget.to_dict()
64
65 @require_permission('edit_dashboard')
66 def delete(self, widget_id):
67 widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)
68 require_admin_or_owner(widget.dashboard.user_id)
69 widget.delete_instance()
70
71 return {'layout': widget.dashboard.layout}
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redash/handlers/widgets.py b/redash/handlers/widgets.py
--- a/redash/handlers/widgets.py
+++ b/redash/handlers/widgets.py
@@ -1,10 +1,11 @@
import json
from flask import request
-
from redash import models
-from redash.permissions import require_permission, require_admin_or_owner, require_access, view_only
from redash.handlers.base import BaseResource
+from redash.permissions import (require_access,
+ require_object_modify_permission,
+ require_permission, view_only)
class WidgetListResource(BaseResource):
@@ -12,7 +13,7 @@
def post(self):
widget_properties = request.get_json(force=True)
dashboard = models.Dashboard.get_by_id_and_org(widget_properties.pop('dashboard_id'), self.current_org)
- require_admin_or_owner(dashboard.user_id)
+ require_object_modify_permission(dashboard, self.current_user)
widget_properties['options'] = json.dumps(widget_properties['options'])
widget_properties.pop('id', None)
@@ -47,7 +48,7 @@
widget.dashboard.layout = json.dumps(layout)
widget.dashboard.save()
- return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row}
+ return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row, 'version': dashboard.version}
class WidgetResource(BaseResource):
@@ -55,7 +56,7 @@
def post(self, widget_id):
# This method currently handles Text Box widgets only.
widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)
- require_admin_or_owner(widget.dashboard.user_id)
+ require_object_modify_permission(widget.dashboard, self.current_user)
widget_properties = request.get_json(force=True)
widget.text = widget_properties['text']
widget.save()
@@ -65,7 +66,7 @@
@require_permission('edit_dashboard')
def delete(self, widget_id):
widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)
- require_admin_or_owner(widget.dashboard.user_id)
+ require_object_modify_permission(widget.dashboard, self.current_user)
widget.delete_instance()
- return {'layout': widget.dashboard.layout}
+ return {'layout': widget.dashboard.layout, 'version': widget.dashboard.version}
|
{"golden_diff": "diff --git a/redash/handlers/widgets.py b/redash/handlers/widgets.py\n--- a/redash/handlers/widgets.py\n+++ b/redash/handlers/widgets.py\n@@ -1,10 +1,11 @@\n import json\n \n from flask import request\n-\n from redash import models\n-from redash.permissions import require_permission, require_admin_or_owner, require_access, view_only\n from redash.handlers.base import BaseResource\n+from redash.permissions import (require_access,\n+ require_object_modify_permission,\n+ require_permission, view_only)\n \n \n class WidgetListResource(BaseResource):\n@@ -12,7 +13,7 @@\n def post(self):\n widget_properties = request.get_json(force=True)\n dashboard = models.Dashboard.get_by_id_and_org(widget_properties.pop('dashboard_id'), self.current_org)\n- require_admin_or_owner(dashboard.user_id)\n+ require_object_modify_permission(dashboard, self.current_user)\n \n widget_properties['options'] = json.dumps(widget_properties['options'])\n widget_properties.pop('id', None)\n@@ -47,7 +48,7 @@\n widget.dashboard.layout = json.dumps(layout)\n widget.dashboard.save()\n \n- return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row}\n+ return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row, 'version': dashboard.version}\n \n \n class WidgetResource(BaseResource):\n@@ -55,7 +56,7 @@\n def post(self, widget_id):\n # This method currently handles Text Box widgets only.\n widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)\n- require_admin_or_owner(widget.dashboard.user_id)\n+ require_object_modify_permission(widget.dashboard, self.current_user)\n widget_properties = request.get_json(force=True)\n widget.text = widget_properties['text']\n widget.save()\n@@ -65,7 +66,7 @@\n @require_permission('edit_dashboard')\n def delete(self, widget_id):\n widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)\n- require_admin_or_owner(widget.dashboard.user_id)\n+ require_object_modify_permission(widget.dashboard, self.current_user)\n widget.delete_instance()\n \n- return {'layout': widget.dashboard.layout}\n+ return {'layout': widget.dashboard.layout, 'version': widget.dashboard.version}\n", "issue": "Share access permissions for add/remove widgets\nHi @arikfr ,\r\n\r\nin our PR #1113 we focused primarily on sharing access permissions for queries, and for dashboards we currently only allow to modify basic info of the dashboard. What is still missing is ability to allow other users to add and remove widgets. Is that something you are planning to add, or should we give it a shot?\r\n\r\nAnother thing that came up - we're currently enforcing `require_admin_or_owner(...)` for managing access permissions (e.g., https://github.com/getredash/redash/blob/master/redash/handlers/permissions.py#L42). This is actually a very restrictive limitation, and we believe that anybody with permissions (owner, admin, other permitted users) should be able to add/remove users. For instance, if you consider person A creating a dashboard, then giving access to persons B and C who are actively maintaining that dashboard. Then, if person A leaves the company, B and C would not be able to make the required changes to add another person D. 
What do you think?\r\n\r\n/cc @rohanpd\n", "before_files": [{"content": "import json\n\nfrom flask import request\n\nfrom redash import models\nfrom redash.permissions import require_permission, require_admin_or_owner, require_access, view_only\nfrom redash.handlers.base import BaseResource\n\n\nclass WidgetListResource(BaseResource):\n @require_permission('edit_dashboard')\n def post(self):\n widget_properties = request.get_json(force=True)\n dashboard = models.Dashboard.get_by_id_and_org(widget_properties.pop('dashboard_id'), self.current_org)\n require_admin_or_owner(dashboard.user_id)\n\n widget_properties['options'] = json.dumps(widget_properties['options'])\n widget_properties.pop('id', None)\n widget_properties['dashboard'] = dashboard\n\n visualization_id = widget_properties.pop('visualization_id')\n if visualization_id:\n visualization = models.Visualization.get_by_id_and_org(visualization_id, self.current_org)\n require_access(visualization.query.groups, self.current_user, view_only)\n else:\n visualization = None\n\n widget_properties['visualization'] = visualization\n\n widget = models.Widget.create(**widget_properties)\n\n layout = json.loads(widget.dashboard.layout)\n new_row = True\n\n if len(layout) == 0 or widget.width == 2:\n layout.append([widget.id])\n elif len(layout[-1]) == 1:\n neighbour_widget = models.Widget.get(models.Widget.id == layout[-1][0])\n if neighbour_widget.width == 1:\n layout[-1].append(widget.id)\n new_row = False\n else:\n layout.append([widget.id])\n else:\n layout.append([widget.id])\n\n widget.dashboard.layout = json.dumps(layout)\n widget.dashboard.save()\n\n return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row}\n\n\nclass WidgetResource(BaseResource):\n @require_permission('edit_dashboard')\n def post(self, widget_id):\n # This method currently handles Text Box widgets only.\n widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)\n require_admin_or_owner(widget.dashboard.user_id)\n widget_properties = request.get_json(force=True)\n widget.text = widget_properties['text']\n widget.save()\n\n return widget.to_dict()\n\n @require_permission('edit_dashboard')\n def delete(self, widget_id):\n widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)\n require_admin_or_owner(widget.dashboard.user_id)\n widget.delete_instance()\n\n return {'layout': widget.dashboard.layout}\n", "path": "redash/handlers/widgets.py"}], "after_files": [{"content": "import json\n\nfrom flask import request\nfrom redash import models\nfrom redash.handlers.base import BaseResource\nfrom redash.permissions import (require_access,\n require_object_modify_permission,\n require_permission, view_only)\n\n\nclass WidgetListResource(BaseResource):\n @require_permission('edit_dashboard')\n def post(self):\n widget_properties = request.get_json(force=True)\n dashboard = models.Dashboard.get_by_id_and_org(widget_properties.pop('dashboard_id'), self.current_org)\n require_object_modify_permission(dashboard, self.current_user)\n\n widget_properties['options'] = json.dumps(widget_properties['options'])\n widget_properties.pop('id', None)\n widget_properties['dashboard'] = dashboard\n\n visualization_id = widget_properties.pop('visualization_id')\n if visualization_id:\n visualization = models.Visualization.get_by_id_and_org(visualization_id, self.current_org)\n require_access(visualization.query.groups, self.current_user, view_only)\n else:\n visualization = None\n\n widget_properties['visualization'] = visualization\n\n widget = 
models.Widget.create(**widget_properties)\n\n layout = json.loads(widget.dashboard.layout)\n new_row = True\n\n if len(layout) == 0 or widget.width == 2:\n layout.append([widget.id])\n elif len(layout[-1]) == 1:\n neighbour_widget = models.Widget.get(models.Widget.id == layout[-1][0])\n if neighbour_widget.width == 1:\n layout[-1].append(widget.id)\n new_row = False\n else:\n layout.append([widget.id])\n else:\n layout.append([widget.id])\n\n widget.dashboard.layout = json.dumps(layout)\n widget.dashboard.save()\n\n return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row, 'version': dashboard.version}\n\n\nclass WidgetResource(BaseResource):\n @require_permission('edit_dashboard')\n def post(self, widget_id):\n # This method currently handles Text Box widgets only.\n widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)\n require_object_modify_permission(widget.dashboard, self.current_user)\n widget_properties = request.get_json(force=True)\n widget.text = widget_properties['text']\n widget.save()\n\n return widget.to_dict()\n\n @require_permission('edit_dashboard')\n def delete(self, widget_id):\n widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)\n require_object_modify_permission(widget.dashboard, self.current_user)\n widget.delete_instance()\n\n return {'layout': widget.dashboard.layout, 'version': widget.dashboard.version}\n", "path": "redash/handlers/widgets.py"}]}
| 1,146 | 506 |
gh_patches_debug_7879
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-3875
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sources with a null value for last_updated will break the web UI
## Description
If a source is created without a `last_updated` field, it will cause the following error on the journalist index page.
```
172.17.0.1 - - [10/Oct/2018 18:49:40] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2295, in wsgi_app
response = self.handle_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1741, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/heartsucker/code/freedomofpress/securedrop/securedrop/journalist_app/main.py", line 79, in index
starred=starred)
File "/usr/local/lib/python2.7/dist-packages/flask/templating.py", line 135, in render_template
context, ctx.app)
File "/usr/local/lib/python2.7/dist-packages/flask/templating.py", line 117, in _render
rv = template.render(context)
File "/usr/local/lib/python2.7/dist-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/usr/local/lib/python2.7/dist-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/heartsucker/code/freedomofpress/securedrop/securedrop/journalist_templates/index.html", line 1, in top-level template code
{% extends "base.html" %}
File "/home/heartsucker/code/freedomofpress/securedrop/securedrop/journalist_templates/base.html", line 50, in top-level template code
{% block body %}{% endblock %}
File "/home/heartsucker/code/freedomofpress/securedrop/securedrop/journalist_templates/index.html", line 25, in block "body"
{% include '_source_row.html' %}
File "/home/heartsucker/code/freedomofpress/securedrop/securedrop/journalist_templates/_source_row.html", line 4, in top-level template code
<time class="date" title="{{ source.last_updated|rel_datetime_format }}" datetime="{{ source.last_updated|rel_datetime_format(fmt="%Y-%m-%d %H:%M:%S%Z") }}">{{ source.last_updated|rel_datetime_format(relative=True) }}</time>
File "/home/heartsucker/code/freedomofpress/securedrop/securedrop/template_filters.py", line 12, in rel_datetime_format
time = dates.format_timedelta(datetime.utcnow() - dt,
TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType'
```
The root cause of this is that the `sqlite` schema permits null values in the `last_updated` column.
```sql
CREATE TABLE sources (
id INTEGER NOT NULL,
uuid VARCHAR(36) NOT NULL,
filesystem_id VARCHAR(96),
journalist_designation VARCHAR(255) NOT NULL,
flagged BOOLEAN,
last_updated DATETIME,
-- ^ allows null :(
pending BOOLEAN,
interaction_count INTEGER NOT NULL,
PRIMARY KEY (id),
UNIQUE (uuid),
UNIQUE (filesystem_id),
CHECK (flagged IN (0, 1)),
CHECK (pending IN (0, 1))
);
```
## Steps to Reproduce
Manually create a source with `last_updated = null`.
## Expected Behavior
The app should recover gracefully or (better) disallow this entirely.
## Actual Behavior
The app blows up.
--- END ISSUE ---
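Editorial note, not part of the issue: the failure is plain `None` arithmetic in the template filter, so the app can either tolerate a missing timestamp when formatting or exclude never-updated sources from the index query. A minimal sketch of the defensive-filter option is below; the formatting is deliberately simplified (the traceback shows the real filter uses babel's `format_timedelta`).

```python
# Sketch of a None-tolerant variant of the failing filter; simplified
# output formatting, for illustration only.
from datetime import datetime
from typing import Optional


def rel_datetime_format(dt: Optional[datetime]) -> str:
    if dt is None:
        # A source created without last_updated no longer breaks the index.
        return "never"
    delta = datetime.utcnow() - dt
    return "{:.0f} minutes ago".format(delta.total_seconds() / 60)
```

The query-side alternative — filtering on `Source.last_updated.isnot(None)` — is the approach taken in the patch further down in this record.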
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/journalist_app/main.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import os
4
5 from datetime import datetime
6 from flask import (Blueprint, request, current_app, session, url_for, redirect,
7 render_template, g, flash, abort)
8 from flask_babel import gettext
9 from sqlalchemy.sql.expression import false
10
11 from db import db
12 from models import Source, SourceStar, Submission, Reply
13 from journalist_app.forms import ReplyForm
14 from journalist_app.utils import (validate_user, bulk_delete, download,
15 confirm_bulk_delete, get_source)
16
17
18 def make_blueprint(config):
19 view = Blueprint('main', __name__)
20
21 @view.route('/login', methods=('GET', 'POST'))
22 def login():
23 if request.method == 'POST':
24 user = validate_user(request.form['username'],
25 request.form['password'],
26 request.form['token'])
27 if user:
28 current_app.logger.info("'{}' logged in with the token {}"
29 .format(request.form['username'],
30 request.form['token']))
31
32 # Update access metadata
33 user.last_access = datetime.utcnow()
34 db.session.add(user)
35 db.session.commit()
36
37 session['uid'] = user.id
38 return redirect(url_for('main.index'))
39
40 return render_template("login.html")
41
42 @view.route('/logout')
43 def logout():
44 session.pop('uid', None)
45 session.pop('expires', None)
46 return redirect(url_for('main.index'))
47
48 @view.route('/org-logo')
49 def select_logo():
50 if os.path.exists(os.path.join(current_app.static_folder, 'i',
51 'custom_logo.png')):
52 return redirect(url_for('static', filename='i/custom_logo.png'))
53 else:
54 return redirect(url_for('static', filename='i/logo.png'))
55
56 @view.route('/')
57 def index():
58 unstarred = []
59 starred = []
60
61 # Long SQLAlchemy statements look best when formatted according to
62 # the Pocoo style guide, IMHO:
63 # http://www.pocoo.org/internal/styleguide/
64 sources = Source.query.filter_by(pending=False) \
65 .order_by(Source.last_updated.desc()) \
66 .all()
67 for source in sources:
68 star = SourceStar.query.filter_by(source_id=source.id).first()
69 if star and star.starred:
70 starred.append(source)
71 else:
72 unstarred.append(source)
73 source.num_unread = len(
74 Submission.query.filter_by(source_id=source.id,
75 downloaded=False).all())
76
77 return render_template('index.html',
78 unstarred=unstarred,
79 starred=starred)
80
81 @view.route('/reply', methods=('POST',))
82 def reply():
83 """Attempt to send a Reply from a Journalist to a Source. Empty
84 messages are rejected, and an informative error message is flashed
85 on the client. In the case of unexpected errors involving database
86 transactions (potentially caused by racing request threads that
87 modify the same the database object) logging is done in such a way
88 so as not to write potentially sensitive information to disk, and a
89 generic error message is flashed on the client.
90
91 Returns:
92 flask.Response: The user is redirected to the same Source
93 collection view, regardless if the Reply is created
94 successfully.
95 """
96 form = ReplyForm()
97 if not form.validate_on_submit():
98 for error in form.message.errors:
99 flash(error, "error")
100 return redirect(url_for('col.col', filesystem_id=g.filesystem_id))
101
102 g.source.interaction_count += 1
103 filename = "{0}-{1}-reply.gpg".format(g.source.interaction_count,
104 g.source.journalist_filename)
105 current_app.crypto_util.encrypt(
106 form.message.data,
107 [current_app.crypto_util.getkey(g.filesystem_id),
108 config.JOURNALIST_KEY],
109 output=current_app.storage.path(g.filesystem_id, filename),
110 )
111 reply = Reply(g.user, g.source, filename)
112
113 try:
114 db.session.add(reply)
115 db.session.commit()
116 except Exception as exc:
117 flash(gettext(
118 "An unexpected error occurred! Please "
119 "inform your administrator."), "error")
120 # We take a cautious approach to logging here because we're dealing
121 # with responses to sources. It's possible the exception message
122 # could contain information we don't want to write to disk.
123 current_app.logger.error(
124 "Reply from '{}' (ID {}) failed: {}!".format(g.user.username,
125 g.user.id,
126 exc.__class__))
127 else:
128 flash(gettext("Thanks. Your reply has been stored."),
129 "notification")
130 finally:
131 return redirect(url_for('col.col', filesystem_id=g.filesystem_id))
132
133 @view.route('/flag', methods=('POST',))
134 def flag():
135 g.source.flagged = True
136 db.session.commit()
137 return render_template('flag.html', filesystem_id=g.filesystem_id,
138 codename=g.source.journalist_designation)
139
140 @view.route('/bulk', methods=('POST',))
141 def bulk():
142 action = request.form['action']
143
144 doc_names_selected = request.form.getlist('doc_names_selected')
145 selected_docs = [doc for doc in g.source.collection
146 if doc.filename in doc_names_selected]
147 if selected_docs == []:
148 if action == 'download':
149 flash(gettext("No collections selected for download."),
150 "error")
151 elif action in ('delete', 'confirm_delete'):
152 flash(gettext("No collections selected for deletion."),
153 "error")
154 return redirect(url_for('col.col', filesystem_id=g.filesystem_id))
155
156 if action == 'download':
157 source = get_source(g.filesystem_id)
158 return download(source.journalist_filename, selected_docs)
159 elif action == 'delete':
160 return bulk_delete(g.filesystem_id, selected_docs)
161 elif action == 'confirm_delete':
162 return confirm_bulk_delete(g.filesystem_id, selected_docs)
163 else:
164 abort(400)
165
166 @view.route('/regenerate-code', methods=('POST',))
167 def regenerate_code():
168 original_journalist_designation = g.source.journalist_designation
169 g.source.journalist_designation = current_app.crypto_util.display_id()
170
171 for item in g.source.collection:
172 item.filename = current_app.storage.rename_submission(
173 g.filesystem_id,
174 item.filename,
175 g.source.journalist_filename)
176 db.session.commit()
177
178 flash(gettext(
179 "The source '{original_name}' has been renamed to '{new_name}'")
180 .format(original_name=original_journalist_designation,
181 new_name=g.source.journalist_designation),
182 "notification")
183 return redirect(url_for('col.col', filesystem_id=g.filesystem_id))
184
185 @view.route('/download_unread/<filesystem_id>')
186 def download_unread_filesystem_id(filesystem_id):
187 id = Source.query.filter(Source.filesystem_id == filesystem_id) \
188 .one().id
189 submissions = Submission.query.filter(
190 Submission.source_id == id,
191 Submission.downloaded == false()).all()
192 if submissions == []:
193 flash(gettext("No unread submissions for this source."))
194 return redirect(url_for('col.col', filesystem_id=filesystem_id))
195 source = get_source(filesystem_id)
196 return download(source.journalist_filename, submissions)
197
198 return view
199
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/securedrop/journalist_app/main.py b/securedrop/journalist_app/main.py
--- a/securedrop/journalist_app/main.py
+++ b/securedrop/journalist_app/main.py
@@ -62,6 +62,7 @@
# the Pocoo style guide, IMHO:
# http://www.pocoo.org/internal/styleguide/
sources = Source.query.filter_by(pending=False) \
+ .filter(Source.last_updated.isnot(None)) \
.order_by(Source.last_updated.desc()) \
.all()
for source in sources:
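Editorial note: the diff above only hides NULL rows from the index; the issue also suggests disallowing them outright. A hypothetical Alembic migration sketch for that stricter option follows — table and column names come from the schema quoted in the issue, and the migration itself is not part of this patch:

```python
# Hypothetical Alembic migration sketch -- not part of the applied patch.
import sqlalchemy as sa
from alembic import op


def upgrade():
    # Backfill rows that were created without a timestamp.
    op.execute("UPDATE sources SET last_updated = CURRENT_TIMESTAMP "
               "WHERE last_updated IS NULL")
    # Then forbid NULLs so the invariant holds going forward (batch mode
    # is needed for SQLite ALTER support).
    with op.batch_alter_table("sources") as batch_op:
        batch_op.alter_column("last_updated",
                              existing_type=sa.DateTime(),
                              nullable=False)
```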
|
{"golden_diff": "diff --git a/securedrop/journalist_app/main.py b/securedrop/journalist_app/main.py\n--- a/securedrop/journalist_app/main.py\n+++ b/securedrop/journalist_app/main.py\n@@ -62,6 +62,7 @@\n # the Pocoo style guide, IMHO:\n # http://www.pocoo.org/internal/styleguide/\n sources = Source.query.filter_by(pending=False) \\\n+ .filter(Source.last_updated.isnot(None)) \\\n .order_by(Source.last_updated.desc()) \\\n .all()\n for source in sources:\n", "issue": "Sources with a null value for last_updated will break the web UI\n## Description\r\n\r\nIf a source is created without a `last_updated` filed, it will cause the following error on the journalist index page.\r\n\r\n```\r\n172.17.0.1 - - [10/Oct/2018 18:49:40] \"GET / HTTP/1.1\" 500 -\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 2309, in __call__\r\n return self.wsgi_app(environ, start_response)\r\n File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 2295, in wsgi_app\r\n response = self.handle_exception(e)\r\n File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1741, in handle_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 2292, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1815, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1718, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1813, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1799, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"/home/heartsucker/code/freedomofpress/securedrop/securedrop/journalist_app/main.py\", line 79, in index\r\n starred=starred)\r\n File \"/usr/local/lib/python2.7/dist-packages/flask/templating.py\", line 135, in render_template\r\n context, ctx.app)\r\n File \"/usr/local/lib/python2.7/dist-packages/flask/templating.py\", line 117, in _render\r\n rv = template.render(context)\r\n File \"/usr/local/lib/python2.7/dist-packages/jinja2/environment.py\", line 1008, in render\r\n return self.environment.handle_exception(exc_info, True)\r\n File \"/usr/local/lib/python2.7/dist-packages/jinja2/environment.py\", line 780, in handle_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/home/heartsucker/code/freedomofpress/securedrop/securedrop/journalist_templates/index.html\", line 1, in top-level template code\r\n {% extends \"base.html\" %}\r\n File \"/home/heartsucker/code/freedomofpress/securedrop/securedrop/journalist_templates/base.html\", line 50, in top-level template code\r\n {% block body %}{% endblock %}\r\n File \"/home/heartsucker/code/freedomofpress/securedrop/securedrop/journalist_templates/index.html\", line 25, in block \"body\"\r\n {% include '_source_row.html' %}\r\n File \"/home/heartsucker/code/freedomofpress/securedrop/securedrop/journalist_templates/_source_row.html\", line 4, in top-level template code\r\n <time class=\"date\" title=\"{{ source.last_updated|rel_datetime_format }}\" datetime=\"{{ source.last_updated|rel_datetime_format(fmt=\"%Y-%m-%d %H:%M:%S%Z\") }}\">{{ source.last_updated|rel_datetime_format(relative=True) }}</time>\r\n File 
\"/home/heartsucker/code/freedomofpress/securedrop/securedrop/template_filters.py\", line 12, in rel_datetime_format\r\n time = dates.format_timedelta(datetime.utcnow() - dt,\r\nTypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType'\r\n```\r\n\r\nThe root cause of this is the schema in `sqlite` permits null values in the `last_updated` columns.\r\n\r\n```sql\r\nCREATE TABLE sources (\r\n\tid INTEGER NOT NULL, \r\n\tuuid VARCHAR(36) NOT NULL, \r\n\tfilesystem_id VARCHAR(96), \r\n\tjournalist_designation VARCHAR(255) NOT NULL, \r\n\tflagged BOOLEAN, \r\n\tlast_updated DATETIME,\r\n -- ^ allows null :(\r\n\tpending BOOLEAN, \r\n\tinteraction_count INTEGER NOT NULL, \r\n\tPRIMARY KEY (id), \r\n\tUNIQUE (uuid), \r\n\tUNIQUE (filesystem_id), \r\n\tCHECK (flagged IN (0, 1)), \r\n\tCHECK (pending IN (0, 1))\r\n);\r\n```\r\n\r\n## Steps to Reproduce\r\n\r\nManually create a source with `last_updated = null`.\r\n\r\n## Expected Behavior\r\n\r\nThe app should recover gracefully or (better) disallow this entirely.\r\n\r\n## Actual Behavior\r\n\r\nThe app blows up.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport os\n\nfrom datetime import datetime\nfrom flask import (Blueprint, request, current_app, session, url_for, redirect,\n render_template, g, flash, abort)\nfrom flask_babel import gettext\nfrom sqlalchemy.sql.expression import false\n\nfrom db import db\nfrom models import Source, SourceStar, Submission, Reply\nfrom journalist_app.forms import ReplyForm\nfrom journalist_app.utils import (validate_user, bulk_delete, download,\n confirm_bulk_delete, get_source)\n\n\ndef make_blueprint(config):\n view = Blueprint('main', __name__)\n\n @view.route('/login', methods=('GET', 'POST'))\n def login():\n if request.method == 'POST':\n user = validate_user(request.form['username'],\n request.form['password'],\n request.form['token'])\n if user:\n current_app.logger.info(\"'{}' logged in with the token {}\"\n .format(request.form['username'],\n request.form['token']))\n\n # Update access metadata\n user.last_access = datetime.utcnow()\n db.session.add(user)\n db.session.commit()\n\n session['uid'] = user.id\n return redirect(url_for('main.index'))\n\n return render_template(\"login.html\")\n\n @view.route('/logout')\n def logout():\n session.pop('uid', None)\n session.pop('expires', None)\n return redirect(url_for('main.index'))\n\n @view.route('/org-logo')\n def select_logo():\n if os.path.exists(os.path.join(current_app.static_folder, 'i',\n 'custom_logo.png')):\n return redirect(url_for('static', filename='i/custom_logo.png'))\n else:\n return redirect(url_for('static', filename='i/logo.png'))\n\n @view.route('/')\n def index():\n unstarred = []\n starred = []\n\n # Long SQLAlchemy statements look best when formatted according to\n # the Pocoo style guide, IMHO:\n # http://www.pocoo.org/internal/styleguide/\n sources = Source.query.filter_by(pending=False) \\\n .order_by(Source.last_updated.desc()) \\\n .all()\n for source in sources:\n star = SourceStar.query.filter_by(source_id=source.id).first()\n if star and star.starred:\n starred.append(source)\n else:\n unstarred.append(source)\n source.num_unread = len(\n Submission.query.filter_by(source_id=source.id,\n downloaded=False).all())\n\n return render_template('index.html',\n unstarred=unstarred,\n starred=starred)\n\n @view.route('/reply', methods=('POST',))\n def reply():\n \"\"\"Attempt to send a Reply from a Journalist to a Source. 
Empty\n messages are rejected, and an informative error message is flashed\n on the client. In the case of unexpected errors involving database\n transactions (potentially caused by racing request threads that\n modify the same the database object) logging is done in such a way\n so as not to write potentially sensitive information to disk, and a\n generic error message is flashed on the client.\n\n Returns:\n flask.Response: The user is redirected to the same Source\n collection view, regardless if the Reply is created\n successfully.\n \"\"\"\n form = ReplyForm()\n if not form.validate_on_submit():\n for error in form.message.errors:\n flash(error, \"error\")\n return redirect(url_for('col.col', filesystem_id=g.filesystem_id))\n\n g.source.interaction_count += 1\n filename = \"{0}-{1}-reply.gpg\".format(g.source.interaction_count,\n g.source.journalist_filename)\n current_app.crypto_util.encrypt(\n form.message.data,\n [current_app.crypto_util.getkey(g.filesystem_id),\n config.JOURNALIST_KEY],\n output=current_app.storage.path(g.filesystem_id, filename),\n )\n reply = Reply(g.user, g.source, filename)\n\n try:\n db.session.add(reply)\n db.session.commit()\n except Exception as exc:\n flash(gettext(\n \"An unexpected error occurred! Please \"\n \"inform your administrator.\"), \"error\")\n # We take a cautious approach to logging here because we're dealing\n # with responses to sources. It's possible the exception message\n # could contain information we don't want to write to disk.\n current_app.logger.error(\n \"Reply from '{}' (ID {}) failed: {}!\".format(g.user.username,\n g.user.id,\n exc.__class__))\n else:\n flash(gettext(\"Thanks. Your reply has been stored.\"),\n \"notification\")\n finally:\n return redirect(url_for('col.col', filesystem_id=g.filesystem_id))\n\n @view.route('/flag', methods=('POST',))\n def flag():\n g.source.flagged = True\n db.session.commit()\n return render_template('flag.html', filesystem_id=g.filesystem_id,\n codename=g.source.journalist_designation)\n\n @view.route('/bulk', methods=('POST',))\n def bulk():\n action = request.form['action']\n\n doc_names_selected = request.form.getlist('doc_names_selected')\n selected_docs = [doc for doc in g.source.collection\n if doc.filename in doc_names_selected]\n if selected_docs == []:\n if action == 'download':\n flash(gettext(\"No collections selected for download.\"),\n \"error\")\n elif action in ('delete', 'confirm_delete'):\n flash(gettext(\"No collections selected for deletion.\"),\n \"error\")\n return redirect(url_for('col.col', filesystem_id=g.filesystem_id))\n\n if action == 'download':\n source = get_source(g.filesystem_id)\n return download(source.journalist_filename, selected_docs)\n elif action == 'delete':\n return bulk_delete(g.filesystem_id, selected_docs)\n elif action == 'confirm_delete':\n return confirm_bulk_delete(g.filesystem_id, selected_docs)\n else:\n abort(400)\n\n @view.route('/regenerate-code', methods=('POST',))\n def regenerate_code():\n original_journalist_designation = g.source.journalist_designation\n g.source.journalist_designation = current_app.crypto_util.display_id()\n\n for item in g.source.collection:\n item.filename = current_app.storage.rename_submission(\n g.filesystem_id,\n item.filename,\n g.source.journalist_filename)\n db.session.commit()\n\n flash(gettext(\n \"The source '{original_name}' has been renamed to '{new_name}'\")\n .format(original_name=original_journalist_designation,\n new_name=g.source.journalist_designation),\n \"notification\")\n return 
redirect(url_for('col.col', filesystem_id=g.filesystem_id))\n\n @view.route('/download_unread/<filesystem_id>')\n def download_unread_filesystem_id(filesystem_id):\n id = Source.query.filter(Source.filesystem_id == filesystem_id) \\\n .one().id\n submissions = Submission.query.filter(\n Submission.source_id == id,\n Submission.downloaded == false()).all()\n if submissions == []:\n flash(gettext(\"No unread submissions for this source.\"))\n return redirect(url_for('col.col', filesystem_id=filesystem_id))\n source = get_source(filesystem_id)\n return download(source.journalist_filename, submissions)\n\n return view\n", "path": "securedrop/journalist_app/main.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport os\n\nfrom datetime import datetime\nfrom flask import (Blueprint, request, current_app, session, url_for, redirect,\n render_template, g, flash, abort)\nfrom flask_babel import gettext\nfrom sqlalchemy.sql.expression import false\n\nfrom db import db\nfrom models import Source, SourceStar, Submission, Reply\nfrom journalist_app.forms import ReplyForm\nfrom journalist_app.utils import (validate_user, bulk_delete, download,\n confirm_bulk_delete, get_source)\n\n\ndef make_blueprint(config):\n view = Blueprint('main', __name__)\n\n @view.route('/login', methods=('GET', 'POST'))\n def login():\n if request.method == 'POST':\n user = validate_user(request.form['username'],\n request.form['password'],\n request.form['token'])\n if user:\n current_app.logger.info(\"'{}' logged in with the token {}\"\n .format(request.form['username'],\n request.form['token']))\n\n # Update access metadata\n user.last_access = datetime.utcnow()\n db.session.add(user)\n db.session.commit()\n\n session['uid'] = user.id\n return redirect(url_for('main.index'))\n\n return render_template(\"login.html\")\n\n @view.route('/logout')\n def logout():\n session.pop('uid', None)\n session.pop('expires', None)\n return redirect(url_for('main.index'))\n\n @view.route('/org-logo')\n def select_logo():\n if os.path.exists(os.path.join(current_app.static_folder, 'i',\n 'custom_logo.png')):\n return redirect(url_for('static', filename='i/custom_logo.png'))\n else:\n return redirect(url_for('static', filename='i/logo.png'))\n\n @view.route('/')\n def index():\n unstarred = []\n starred = []\n\n # Long SQLAlchemy statements look best when formatted according to\n # the Pocoo style guide, IMHO:\n # http://www.pocoo.org/internal/styleguide/\n sources = Source.query.filter_by(pending=False) \\\n .filter(Source.last_updated.isnot(None)) \\\n .order_by(Source.last_updated.desc()) \\\n .all()\n for source in sources:\n star = SourceStar.query.filter_by(source_id=source.id).first()\n if star and star.starred:\n starred.append(source)\n else:\n unstarred.append(source)\n source.num_unread = len(\n Submission.query.filter_by(source_id=source.id,\n downloaded=False).all())\n\n return render_template('index.html',\n unstarred=unstarred,\n starred=starred)\n\n @view.route('/reply', methods=('POST',))\n def reply():\n \"\"\"Attempt to send a Reply from a Journalist to a Source. Empty\n messages are rejected, and an informative error message is flashed\n on the client. 
In the case of unexpected errors involving database\n transactions (potentially caused by racing request threads that\n modify the same the database object) logging is done in such a way\n so as not to write potentially sensitive information to disk, and a\n generic error message is flashed on the client.\n\n Returns:\n flask.Response: The user is redirected to the same Source\n collection view, regardless if the Reply is created\n successfully.\n \"\"\"\n form = ReplyForm()\n if not form.validate_on_submit():\n for error in form.message.errors:\n flash(error, \"error\")\n return redirect(url_for('col.col', filesystem_id=g.filesystem_id))\n\n g.source.interaction_count += 1\n filename = \"{0}-{1}-reply.gpg\".format(g.source.interaction_count,\n g.source.journalist_filename)\n current_app.crypto_util.encrypt(\n form.message.data,\n [current_app.crypto_util.getkey(g.filesystem_id),\n config.JOURNALIST_KEY],\n output=current_app.storage.path(g.filesystem_id, filename),\n )\n reply = Reply(g.user, g.source, filename)\n\n try:\n db.session.add(reply)\n db.session.commit()\n except Exception as exc:\n flash(gettext(\n \"An unexpected error occurred! Please \"\n \"inform your administrator.\"), \"error\")\n # We take a cautious approach to logging here because we're dealing\n # with responses to sources. It's possible the exception message\n # could contain information we don't want to write to disk.\n current_app.logger.error(\n \"Reply from '{}' (ID {}) failed: {}!\".format(g.user.username,\n g.user.id,\n exc.__class__))\n else:\n flash(gettext(\"Thanks. Your reply has been stored.\"),\n \"notification\")\n finally:\n return redirect(url_for('col.col', filesystem_id=g.filesystem_id))\n\n @view.route('/flag', methods=('POST',))\n def flag():\n g.source.flagged = True\n db.session.commit()\n return render_template('flag.html', filesystem_id=g.filesystem_id,\n codename=g.source.journalist_designation)\n\n @view.route('/bulk', methods=('POST',))\n def bulk():\n action = request.form['action']\n\n doc_names_selected = request.form.getlist('doc_names_selected')\n selected_docs = [doc for doc in g.source.collection\n if doc.filename in doc_names_selected]\n if selected_docs == []:\n if action == 'download':\n flash(gettext(\"No collections selected for download.\"),\n \"error\")\n elif action in ('delete', 'confirm_delete'):\n flash(gettext(\"No collections selected for deletion.\"),\n \"error\")\n return redirect(url_for('col.col', filesystem_id=g.filesystem_id))\n\n if action == 'download':\n source = get_source(g.filesystem_id)\n return download(source.journalist_filename, selected_docs)\n elif action == 'delete':\n return bulk_delete(g.filesystem_id, selected_docs)\n elif action == 'confirm_delete':\n return confirm_bulk_delete(g.filesystem_id, selected_docs)\n else:\n abort(400)\n\n @view.route('/regenerate-code', methods=('POST',))\n def regenerate_code():\n original_journalist_designation = g.source.journalist_designation\n g.source.journalist_designation = current_app.crypto_util.display_id()\n\n for item in g.source.collection:\n item.filename = current_app.storage.rename_submission(\n g.filesystem_id,\n item.filename,\n g.source.journalist_filename)\n db.session.commit()\n\n flash(gettext(\n \"The source '{original_name}' has been renamed to '{new_name}'\")\n .format(original_name=original_journalist_designation,\n new_name=g.source.journalist_designation),\n \"notification\")\n return redirect(url_for('col.col', filesystem_id=g.filesystem_id))\n\n 
@view.route('/download_unread/<filesystem_id>')\n def download_unread_filesystem_id(filesystem_id):\n id = Source.query.filter(Source.filesystem_id == filesystem_id) \\\n .one().id\n submissions = Submission.query.filter(\n Submission.source_id == id,\n Submission.downloaded == false()).all()\n if submissions == []:\n flash(gettext(\"No unread submissions for this source.\"))\n return redirect(url_for('col.col', filesystem_id=filesystem_id))\n source = get_source(filesystem_id)\n return download(source.journalist_filename, submissions)\n\n return view\n", "path": "securedrop/journalist_app/main.py"}]}
| 3,388 | 129 |
gh_patches_debug_23124
|
rasdani/github-patches
|
git_diff
|
encode__uvicorn-1825
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WSGI middleware should stream request body, rather than loading it all at once.
Hi! I'm deploying a Django app with uvicorn, running on k8s. Our containers were being killed, and I've found that when users upload large files, uvicorn increases memory usage, and slows to a crawl. Eventually causing an OOM.
I'm not sure where this is happening yet, and I suspect could be related to https://github.com/django/channels/pull/1251, but I did a bit more digging and I'm not totally sure. I've tried running uvicorn in wsgi mode, and completely removed channels from the django install, and I'm getting the same behavior. File uploads are being loaded into memory (rather than streaming to disk, as they should be), and upload speeds slow to a crawl. The same app running on gunicorn works fine.
The file I'm testing with is about 470mb.
--- END ISSUE ---
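For intuition, the growth described above comes from the middleware buffering every `http.request` chunk into an in-memory `io.BytesIO` before handing it to the WSGI app. A minimal sketch of the alternative the reporter hints at — spooling the body to disk past a size threshold — is shown below; it is illustrative only (the helper name `read_body_spooled` and the 1 MiB threshold are assumptions, not part of uvicorn):

```python
# Illustrative sketch, not uvicorn's actual code: spool the ASGI request body
# to a temporary file once it grows past a threshold instead of keeping it
# all in an in-memory BytesIO buffer.
import tempfile


async def read_body_spooled(receive, max_in_memory: int = 1024 * 1024):
    # SpooledTemporaryFile acts like BytesIO until max_size is exceeded,
    # then rolls over to a real file on disk.
    body = tempfile.SpooledTemporaryFile(max_size=max_in_memory)
    more_body = True
    while more_body:
        message = await receive()
        body.write(message.get("body", b""))
        more_body = message.get("more_body", False)
    body.seek(0)
    return body
```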
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `uvicorn/middleware/wsgi.py`
Content:
```
1 import asyncio
2 import concurrent.futures
3 import io
4 import sys
5 from collections import deque
6 from typing import TYPE_CHECKING, Deque, Iterable, Optional, Tuple
7
8 if TYPE_CHECKING:
9 from asgiref.typing import (
10 ASGIReceiveCallable,
11 ASGIReceiveEvent,
12 ASGISendCallable,
13 ASGISendEvent,
14 HTTPRequestEvent,
15 HTTPResponseBodyEvent,
16 HTTPResponseStartEvent,
17 HTTPScope,
18 )
19
20 from uvicorn._types import Environ, ExcInfo, StartResponse, WSGIApp
21
22
23 def build_environ(
24 scope: "HTTPScope", message: "ASGIReceiveEvent", body: io.BytesIO
25 ) -> Environ:
26 """
27 Builds a scope and request message into a WSGI environ object.
28 """
29 environ = {
30 "REQUEST_METHOD": scope["method"],
31 "SCRIPT_NAME": "",
32 "PATH_INFO": scope["path"].encode("utf8").decode("latin1"),
33 "QUERY_STRING": scope["query_string"].decode("ascii"),
34 "SERVER_PROTOCOL": "HTTP/%s" % scope["http_version"],
35 "wsgi.version": (1, 0),
36 "wsgi.url_scheme": scope.get("scheme", "http"),
37 "wsgi.input": body,
38 "wsgi.errors": sys.stdout,
39 "wsgi.multithread": True,
40 "wsgi.multiprocess": True,
41 "wsgi.run_once": False,
42 }
43
44 # Get server name and port - required in WSGI, not in ASGI
45 server = scope.get("server")
46 if server is None:
47 server = ("localhost", 80)
48 environ["SERVER_NAME"] = server[0]
49 environ["SERVER_PORT"] = server[1]
50
51 # Get client IP address
52 client = scope.get("client")
53 if client is not None:
54 environ["REMOTE_ADDR"] = client[0]
55
56 # Go through headers and make them into environ entries
57 for name, value in scope.get("headers", []):
58 name_str: str = name.decode("latin1")
59 if name_str == "content-length":
60 corrected_name = "CONTENT_LENGTH"
61 elif name_str == "content-type":
62 corrected_name = "CONTENT_TYPE"
63 else:
64 corrected_name = "HTTP_%s" % name_str.upper().replace("-", "_")
65 # HTTPbis say only ASCII chars are allowed in headers, but we latin1
66 # just in case
67 value_str: str = value.decode("latin1")
68 if corrected_name in environ:
69 corrected_name_environ = environ[corrected_name]
70 assert isinstance(corrected_name_environ, str)
71 value_str = corrected_name_environ + "," + value_str
72 environ[corrected_name] = value_str
73 return environ
74
75
76 class WSGIMiddleware:
77 def __init__(self, app: WSGIApp, workers: int = 10):
78 self.app = app
79 self.executor = concurrent.futures.ThreadPoolExecutor(max_workers=workers)
80
81 async def __call__(
82 self,
83 scope: "HTTPScope",
84 receive: "ASGIReceiveCallable",
85 send: "ASGISendCallable",
86 ) -> None:
87 assert scope["type"] == "http"
88 instance = WSGIResponder(self.app, self.executor, scope)
89 await instance(receive, send)
90
91
92 class WSGIResponder:
93 def __init__(
94 self,
95 app: WSGIApp,
96 executor: concurrent.futures.ThreadPoolExecutor,
97 scope: "HTTPScope",
98 ):
99 self.app = app
100 self.executor = executor
101 self.scope = scope
102 self.status = None
103 self.response_headers = None
104 self.send_event = asyncio.Event()
105 self.send_queue: Deque[Optional["ASGISendEvent"]] = deque()
106 self.loop: asyncio.AbstractEventLoop = asyncio.get_event_loop()
107 self.response_started = False
108 self.exc_info: Optional[ExcInfo] = None
109
110 async def __call__(
111 self, receive: "ASGIReceiveCallable", send: "ASGISendCallable"
112 ) -> None:
113 message: HTTPRequestEvent = await receive() # type: ignore[assignment]
114 body = io.BytesIO(message.get("body", b""))
115 more_body = message.get("more_body", False)
116 if more_body:
117 body.seek(0, io.SEEK_END)
118 while more_body:
119 body_message: "HTTPRequestEvent" = (
120 await receive() # type: ignore[assignment]
121 )
122 body.write(body_message.get("body", b""))
123 more_body = body_message.get("more_body", False)
124 body.seek(0)
125 environ = build_environ(self.scope, message, body)
126 self.loop = asyncio.get_event_loop()
127 wsgi = self.loop.run_in_executor(
128 self.executor, self.wsgi, environ, self.start_response
129 )
130 sender = self.loop.create_task(self.sender(send))
131 try:
132 await asyncio.wait_for(wsgi, None)
133 finally:
134 self.send_queue.append(None)
135 self.send_event.set()
136 await asyncio.wait_for(sender, None)
137 if self.exc_info is not None:
138 raise self.exc_info[0].with_traceback(self.exc_info[1], self.exc_info[2])
139
140 async def sender(self, send: "ASGISendCallable") -> None:
141 while True:
142 if self.send_queue:
143 message = self.send_queue.popleft()
144 if message is None:
145 return
146 await send(message)
147 else:
148 await self.send_event.wait()
149 self.send_event.clear()
150
151 def start_response(
152 self,
153 status: str,
154 response_headers: Iterable[Tuple[str, str]],
155 exc_info: Optional[ExcInfo] = None,
156 ) -> None:
157 self.exc_info = exc_info
158 if not self.response_started:
159 self.response_started = True
160 status_code_str, _ = status.split(" ", 1)
161 status_code = int(status_code_str)
162 headers = [
163 (name.encode("ascii"), value.encode("ascii"))
164 for name, value in response_headers
165 ]
166 http_response_start_event: HTTPResponseStartEvent = {
167 "type": "http.response.start",
168 "status": status_code,
169 "headers": headers,
170 }
171 self.send_queue.append(http_response_start_event)
172 self.loop.call_soon_threadsafe(self.send_event.set)
173
174 def wsgi(self, environ: Environ, start_response: StartResponse) -> None:
175 for chunk in self.app(environ, start_response): # type: ignore
176 response_body: HTTPResponseBodyEvent = {
177 "type": "http.response.body",
178 "body": chunk,
179 "more_body": True,
180 }
181 self.send_queue.append(response_body)
182 self.loop.call_soon_threadsafe(self.send_event.set)
183
184 empty_body: HTTPResponseBodyEvent = {
185 "type": "http.response.body",
186 "body": b"",
187 "more_body": False,
188 }
189 self.send_queue.append(empty_body)
190 self.loop.call_soon_threadsafe(self.send_event.set)
191
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/uvicorn/middleware/wsgi.py b/uvicorn/middleware/wsgi.py
--- a/uvicorn/middleware/wsgi.py
+++ b/uvicorn/middleware/wsgi.py
@@ -2,6 +2,7 @@
import concurrent.futures
import io
import sys
+import warnings
from collections import deque
from typing import TYPE_CHECKING, Deque, Iterable, Optional, Tuple
@@ -73,8 +74,13 @@
return environ
-class WSGIMiddleware:
+class _WSGIMiddleware:
def __init__(self, app: WSGIApp, workers: int = 10):
+ warnings.warn(
+ "Uvicorn's native WSGI implementation is deprecated, you "
+ "should switch to a2wsgi (`pip install a2wsgi`).",
+ DeprecationWarning,
+ )
self.app = app
self.executor = concurrent.futures.ThreadPoolExecutor(max_workers=workers)
@@ -188,3 +194,9 @@
}
self.send_queue.append(empty_body)
self.loop.call_soon_threadsafe(self.send_event.set)
+
+
+try:
+ from a2wsgi import WSGIMiddleware
+except ModuleNotFoundError:
+ WSGIMiddleware = _WSGIMiddleware # type: ignore[misc, assignment]
|
{"golden_diff": "diff --git a/uvicorn/middleware/wsgi.py b/uvicorn/middleware/wsgi.py\n--- a/uvicorn/middleware/wsgi.py\n+++ b/uvicorn/middleware/wsgi.py\n@@ -2,6 +2,7 @@\n import concurrent.futures\n import io\n import sys\n+import warnings\n from collections import deque\n from typing import TYPE_CHECKING, Deque, Iterable, Optional, Tuple\n \n@@ -73,8 +74,13 @@\n return environ\n \n \n-class WSGIMiddleware:\n+class _WSGIMiddleware:\n def __init__(self, app: WSGIApp, workers: int = 10):\n+ warnings.warn(\n+ \"Uvicorn's native WSGI implementation is deprecated, you \"\n+ \"should switch to a2wsgi (`pip install a2wsgi`).\",\n+ DeprecationWarning,\n+ )\n self.app = app\n self.executor = concurrent.futures.ThreadPoolExecutor(max_workers=workers)\n \n@@ -188,3 +194,9 @@\n }\n self.send_queue.append(empty_body)\n self.loop.call_soon_threadsafe(self.send_event.set)\n+\n+\n+try:\n+ from a2wsgi import WSGIMiddleware\n+except ModuleNotFoundError:\n+ WSGIMiddleware = _WSGIMiddleware # type: ignore[misc, assignment]\n", "issue": "WSGI middleware should stream request body, rather than loading it all at once.\nHi! I'm deploying a Django app with uvicorn, running on k8s. Our containers were being killed, and I've found that when users upload large files, uvicorn increases memory usage, and slows to a crawl. Eventually causing an OOM.\r\n\r\nI'm not sure where this is happening yet, and I suspect could be related to https://github.com/django/channels/pull/1251, but I did a bit more digging and I'm not totally sure. I've tried running uvicorn in wsgi mode, and completely removed channels from the django install, and I'm getting the same behavior. File uploads are being loaded into memory (rather than streaming to disk, as they should be), and upload speeds slow to a crawl. 
The same app running on gunicorn works fine.\r\n\r\nThe file I'm testing with is about 470mb.\n", "before_files": [{"content": "import asyncio\nimport concurrent.futures\nimport io\nimport sys\nfrom collections import deque\nfrom typing import TYPE_CHECKING, Deque, Iterable, Optional, Tuple\n\nif TYPE_CHECKING:\n from asgiref.typing import (\n ASGIReceiveCallable,\n ASGIReceiveEvent,\n ASGISendCallable,\n ASGISendEvent,\n HTTPRequestEvent,\n HTTPResponseBodyEvent,\n HTTPResponseStartEvent,\n HTTPScope,\n )\n\nfrom uvicorn._types import Environ, ExcInfo, StartResponse, WSGIApp\n\n\ndef build_environ(\n scope: \"HTTPScope\", message: \"ASGIReceiveEvent\", body: io.BytesIO\n) -> Environ:\n \"\"\"\n Builds a scope and request message into a WSGI environ object.\n \"\"\"\n environ = {\n \"REQUEST_METHOD\": scope[\"method\"],\n \"SCRIPT_NAME\": \"\",\n \"PATH_INFO\": scope[\"path\"].encode(\"utf8\").decode(\"latin1\"),\n \"QUERY_STRING\": scope[\"query_string\"].decode(\"ascii\"),\n \"SERVER_PROTOCOL\": \"HTTP/%s\" % scope[\"http_version\"],\n \"wsgi.version\": (1, 0),\n \"wsgi.url_scheme\": scope.get(\"scheme\", \"http\"),\n \"wsgi.input\": body,\n \"wsgi.errors\": sys.stdout,\n \"wsgi.multithread\": True,\n \"wsgi.multiprocess\": True,\n \"wsgi.run_once\": False,\n }\n\n # Get server name and port - required in WSGI, not in ASGI\n server = scope.get(\"server\")\n if server is None:\n server = (\"localhost\", 80)\n environ[\"SERVER_NAME\"] = server[0]\n environ[\"SERVER_PORT\"] = server[1]\n\n # Get client IP address\n client = scope.get(\"client\")\n if client is not None:\n environ[\"REMOTE_ADDR\"] = client[0]\n\n # Go through headers and make them into environ entries\n for name, value in scope.get(\"headers\", []):\n name_str: str = name.decode(\"latin1\")\n if name_str == \"content-length\":\n corrected_name = \"CONTENT_LENGTH\"\n elif name_str == \"content-type\":\n corrected_name = \"CONTENT_TYPE\"\n else:\n corrected_name = \"HTTP_%s\" % name_str.upper().replace(\"-\", \"_\")\n # HTTPbis say only ASCII chars are allowed in headers, but we latin1\n # just in case\n value_str: str = value.decode(\"latin1\")\n if corrected_name in environ:\n corrected_name_environ = environ[corrected_name]\n assert isinstance(corrected_name_environ, str)\n value_str = corrected_name_environ + \",\" + value_str\n environ[corrected_name] = value_str\n return environ\n\n\nclass WSGIMiddleware:\n def __init__(self, app: WSGIApp, workers: int = 10):\n self.app = app\n self.executor = concurrent.futures.ThreadPoolExecutor(max_workers=workers)\n\n async def __call__(\n self,\n scope: \"HTTPScope\",\n receive: \"ASGIReceiveCallable\",\n send: \"ASGISendCallable\",\n ) -> None:\n assert scope[\"type\"] == \"http\"\n instance = WSGIResponder(self.app, self.executor, scope)\n await instance(receive, send)\n\n\nclass WSGIResponder:\n def __init__(\n self,\n app: WSGIApp,\n executor: concurrent.futures.ThreadPoolExecutor,\n scope: \"HTTPScope\",\n ):\n self.app = app\n self.executor = executor\n self.scope = scope\n self.status = None\n self.response_headers = None\n self.send_event = asyncio.Event()\n self.send_queue: Deque[Optional[\"ASGISendEvent\"]] = deque()\n self.loop: asyncio.AbstractEventLoop = asyncio.get_event_loop()\n self.response_started = False\n self.exc_info: Optional[ExcInfo] = None\n\n async def __call__(\n self, receive: \"ASGIReceiveCallable\", send: \"ASGISendCallable\"\n ) -> None:\n message: HTTPRequestEvent = await receive() # type: ignore[assignment]\n body = io.BytesIO(message.get(\"body\", 
b\"\"))\n more_body = message.get(\"more_body\", False)\n if more_body:\n body.seek(0, io.SEEK_END)\n while more_body:\n body_message: \"HTTPRequestEvent\" = (\n await receive() # type: ignore[assignment]\n )\n body.write(body_message.get(\"body\", b\"\"))\n more_body = body_message.get(\"more_body\", False)\n body.seek(0)\n environ = build_environ(self.scope, message, body)\n self.loop = asyncio.get_event_loop()\n wsgi = self.loop.run_in_executor(\n self.executor, self.wsgi, environ, self.start_response\n )\n sender = self.loop.create_task(self.sender(send))\n try:\n await asyncio.wait_for(wsgi, None)\n finally:\n self.send_queue.append(None)\n self.send_event.set()\n await asyncio.wait_for(sender, None)\n if self.exc_info is not None:\n raise self.exc_info[0].with_traceback(self.exc_info[1], self.exc_info[2])\n\n async def sender(self, send: \"ASGISendCallable\") -> None:\n while True:\n if self.send_queue:\n message = self.send_queue.popleft()\n if message is None:\n return\n await send(message)\n else:\n await self.send_event.wait()\n self.send_event.clear()\n\n def start_response(\n self,\n status: str,\n response_headers: Iterable[Tuple[str, str]],\n exc_info: Optional[ExcInfo] = None,\n ) -> None:\n self.exc_info = exc_info\n if not self.response_started:\n self.response_started = True\n status_code_str, _ = status.split(\" \", 1)\n status_code = int(status_code_str)\n headers = [\n (name.encode(\"ascii\"), value.encode(\"ascii\"))\n for name, value in response_headers\n ]\n http_response_start_event: HTTPResponseStartEvent = {\n \"type\": \"http.response.start\",\n \"status\": status_code,\n \"headers\": headers,\n }\n self.send_queue.append(http_response_start_event)\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n def wsgi(self, environ: Environ, start_response: StartResponse) -> None:\n for chunk in self.app(environ, start_response): # type: ignore\n response_body: HTTPResponseBodyEvent = {\n \"type\": \"http.response.body\",\n \"body\": chunk,\n \"more_body\": True,\n }\n self.send_queue.append(response_body)\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n empty_body: HTTPResponseBodyEvent = {\n \"type\": \"http.response.body\",\n \"body\": b\"\",\n \"more_body\": False,\n }\n self.send_queue.append(empty_body)\n self.loop.call_soon_threadsafe(self.send_event.set)\n", "path": "uvicorn/middleware/wsgi.py"}], "after_files": [{"content": "import asyncio\nimport concurrent.futures\nimport io\nimport sys\nimport warnings\nfrom collections import deque\nfrom typing import TYPE_CHECKING, Deque, Iterable, Optional, Tuple\n\nif TYPE_CHECKING:\n from asgiref.typing import (\n ASGIReceiveCallable,\n ASGIReceiveEvent,\n ASGISendCallable,\n ASGISendEvent,\n HTTPRequestEvent,\n HTTPResponseBodyEvent,\n HTTPResponseStartEvent,\n HTTPScope,\n )\n\nfrom uvicorn._types import Environ, ExcInfo, StartResponse, WSGIApp\n\n\ndef build_environ(\n scope: \"HTTPScope\", message: \"ASGIReceiveEvent\", body: io.BytesIO\n) -> Environ:\n \"\"\"\n Builds a scope and request message into a WSGI environ object.\n \"\"\"\n environ = {\n \"REQUEST_METHOD\": scope[\"method\"],\n \"SCRIPT_NAME\": \"\",\n \"PATH_INFO\": scope[\"path\"].encode(\"utf8\").decode(\"latin1\"),\n \"QUERY_STRING\": scope[\"query_string\"].decode(\"ascii\"),\n \"SERVER_PROTOCOL\": \"HTTP/%s\" % scope[\"http_version\"],\n \"wsgi.version\": (1, 0),\n \"wsgi.url_scheme\": scope.get(\"scheme\", \"http\"),\n \"wsgi.input\": body,\n \"wsgi.errors\": sys.stdout,\n \"wsgi.multithread\": True,\n \"wsgi.multiprocess\": True,\n 
\"wsgi.run_once\": False,\n }\n\n # Get server name and port - required in WSGI, not in ASGI\n server = scope.get(\"server\")\n if server is None:\n server = (\"localhost\", 80)\n environ[\"SERVER_NAME\"] = server[0]\n environ[\"SERVER_PORT\"] = server[1]\n\n # Get client IP address\n client = scope.get(\"client\")\n if client is not None:\n environ[\"REMOTE_ADDR\"] = client[0]\n\n # Go through headers and make them into environ entries\n for name, value in scope.get(\"headers\", []):\n name_str: str = name.decode(\"latin1\")\n if name_str == \"content-length\":\n corrected_name = \"CONTENT_LENGTH\"\n elif name_str == \"content-type\":\n corrected_name = \"CONTENT_TYPE\"\n else:\n corrected_name = \"HTTP_%s\" % name_str.upper().replace(\"-\", \"_\")\n # HTTPbis say only ASCII chars are allowed in headers, but we latin1\n # just in case\n value_str: str = value.decode(\"latin1\")\n if corrected_name in environ:\n corrected_name_environ = environ[corrected_name]\n assert isinstance(corrected_name_environ, str)\n value_str = corrected_name_environ + \",\" + value_str\n environ[corrected_name] = value_str\n return environ\n\n\nclass _WSGIMiddleware:\n def __init__(self, app: WSGIApp, workers: int = 10):\n warnings.warn(\n \"Uvicorn's native WSGI implementation is deprecated, you \"\n \"should switch to a2wsgi (`pip install a2wsgi`).\",\n DeprecationWarning,\n )\n self.app = app\n self.executor = concurrent.futures.ThreadPoolExecutor(max_workers=workers)\n\n async def __call__(\n self,\n scope: \"HTTPScope\",\n receive: \"ASGIReceiveCallable\",\n send: \"ASGISendCallable\",\n ) -> None:\n assert scope[\"type\"] == \"http\"\n instance = WSGIResponder(self.app, self.executor, scope)\n await instance(receive, send)\n\n\nclass WSGIResponder:\n def __init__(\n self,\n app: WSGIApp,\n executor: concurrent.futures.ThreadPoolExecutor,\n scope: \"HTTPScope\",\n ):\n self.app = app\n self.executor = executor\n self.scope = scope\n self.status = None\n self.response_headers = None\n self.send_event = asyncio.Event()\n self.send_queue: Deque[Optional[\"ASGISendEvent\"]] = deque()\n self.loop: asyncio.AbstractEventLoop = asyncio.get_event_loop()\n self.response_started = False\n self.exc_info: Optional[ExcInfo] = None\n\n async def __call__(\n self, receive: \"ASGIReceiveCallable\", send: \"ASGISendCallable\"\n ) -> None:\n message: HTTPRequestEvent = await receive() # type: ignore[assignment]\n body = io.BytesIO(message.get(\"body\", b\"\"))\n more_body = message.get(\"more_body\", False)\n if more_body:\n body.seek(0, io.SEEK_END)\n while more_body:\n body_message: \"HTTPRequestEvent\" = (\n await receive() # type: ignore[assignment]\n )\n body.write(body_message.get(\"body\", b\"\"))\n more_body = body_message.get(\"more_body\", False)\n body.seek(0)\n environ = build_environ(self.scope, message, body)\n self.loop = asyncio.get_event_loop()\n wsgi = self.loop.run_in_executor(\n self.executor, self.wsgi, environ, self.start_response\n )\n sender = self.loop.create_task(self.sender(send))\n try:\n await asyncio.wait_for(wsgi, None)\n finally:\n self.send_queue.append(None)\n self.send_event.set()\n await asyncio.wait_for(sender, None)\n if self.exc_info is not None:\n raise self.exc_info[0].with_traceback(self.exc_info[1], self.exc_info[2])\n\n async def sender(self, send: \"ASGISendCallable\") -> None:\n while True:\n if self.send_queue:\n message = self.send_queue.popleft()\n if message is None:\n return\n await send(message)\n else:\n await self.send_event.wait()\n self.send_event.clear()\n\n def 
start_response(\n self,\n status: str,\n response_headers: Iterable[Tuple[str, str]],\n exc_info: Optional[ExcInfo] = None,\n ) -> None:\n self.exc_info = exc_info\n if not self.response_started:\n self.response_started = True\n status_code_str, _ = status.split(\" \", 1)\n status_code = int(status_code_str)\n headers = [\n (name.encode(\"ascii\"), value.encode(\"ascii\"))\n for name, value in response_headers\n ]\n http_response_start_event: HTTPResponseStartEvent = {\n \"type\": \"http.response.start\",\n \"status\": status_code,\n \"headers\": headers,\n }\n self.send_queue.append(http_response_start_event)\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n def wsgi(self, environ: Environ, start_response: StartResponse) -> None:\n for chunk in self.app(environ, start_response): # type: ignore\n response_body: HTTPResponseBodyEvent = {\n \"type\": \"http.response.body\",\n \"body\": chunk,\n \"more_body\": True,\n }\n self.send_queue.append(response_body)\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n empty_body: HTTPResponseBodyEvent = {\n \"type\": \"http.response.body\",\n \"body\": b\"\",\n \"more_body\": False,\n }\n self.send_queue.append(empty_body)\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n\ntry:\n from a2wsgi import WSGIMiddleware\nexcept ModuleNotFoundError:\n WSGIMiddleware = _WSGIMiddleware # type: ignore[misc, assignment]\n", "path": "uvicorn/middleware/wsgi.py"}]}
| 2,448 | 299 |
gh_patches_debug_25840
|
rasdani/github-patches
|
git_diff
|
lnbits__lnbits-690
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Split payments shares <1%
Trying to set a payment share <1% will result in 500 INTERNAL SERVER ERROR.
This should work as it would be very useful as a fee for hosting lnbits but 1% is too much.

--- END ISSUE ---
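The failure boils down to the field and column types: an integer percent constrained to `ge=1` cannot express a share such as 0.5%. A small reproduction of that boundary, using pydantic's `Field` in place of the FastAPI `Query` parameter for brevity (the model names here are invented):

```python
# Hypothetical reproduction: an int field with ge=1 rejects a 0.5% share,
# while a float field with ge=0.01 accepts it.
from pydantic import BaseModel, Field, ValidationError


class IntTarget(BaseModel):
    percent: int = Field(..., ge=1)


class FloatTarget(BaseModel):
    percent: float = Field(..., ge=0.01)


try:
    IntTarget(percent=0.5)
except ValidationError as exc:
    print("rejected:", exc.errors()[0]["loc"])

print("accepted:", FloatTarget(percent=0.5))
```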
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lnbits/extensions/splitpayments/migrations.py`
Content:
```
1 async def m001_initial(db):
2 """
3 Initial split payment table.
4 """
5 await db.execute(
6 """
7 CREATE TABLE splitpayments.targets (
8 wallet TEXT NOT NULL,
9 source TEXT NOT NULL,
10 percent INTEGER NOT NULL CHECK (percent >= 0 AND percent <= 100),
11 alias TEXT,
12
13 UNIQUE (source, wallet)
14 );
15 """
16 )
17
```
Path: `lnbits/extensions/splitpayments/models.py`
Content:
```
1 from typing import List, Optional
2
3 from fastapi.param_functions import Query
4 from pydantic import BaseModel
5
6
7 class Target(BaseModel):
8 wallet: str
9 source: str
10 percent: int
11 alias: Optional[str]
12
13
14 class TargetPutList(BaseModel):
15 wallet: str = Query(...)
16 alias: str = Query("")
17 percent: int = Query(..., ge=1)
18
19
20 class TargetPut(BaseModel):
21 __root__: List[TargetPutList]
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lnbits/extensions/splitpayments/migrations.py b/lnbits/extensions/splitpayments/migrations.py
--- a/lnbits/extensions/splitpayments/migrations.py
+++ b/lnbits/extensions/splitpayments/migrations.py
@@ -14,3 +14,41 @@
);
"""
)
+
+
+async def m002_float_percent(db):
+ """
+ Add float percent and migrates the existing data.
+ """
+ await db.execute("ALTER TABLE splitpayments.targets RENAME TO splitpayments_old")
+ await db.execute(
+ """
+ CREATE TABLE splitpayments.targets (
+ wallet TEXT NOT NULL,
+ source TEXT NOT NULL,
+ percent REAL NOT NULL CHECK (percent >= 0 AND percent <= 100),
+ alias TEXT,
+
+ UNIQUE (source, wallet)
+ );
+ """
+ )
+
+ for row in [
+ list(row)
+ for row in await db.fetchall("SELECT * FROM splitpayments.splitpayments_old")
+ ]:
+ await db.execute(
+ """
+ INSERT INTO splitpayments.targets (
+ wallet,
+ source,
+ percent,
+ alias
+ )
+ VALUES (?, ?, ?, ?)
+ """,
+ (row[0], row[1], row[2], row[3]),
+ )
+
+ await db.execute("DROP TABLE splitpayments.splitpayments_old")
diff --git a/lnbits/extensions/splitpayments/models.py b/lnbits/extensions/splitpayments/models.py
--- a/lnbits/extensions/splitpayments/models.py
+++ b/lnbits/extensions/splitpayments/models.py
@@ -7,14 +7,14 @@
class Target(BaseModel):
wallet: str
source: str
- percent: int
+ percent: float
alias: Optional[str]
class TargetPutList(BaseModel):
wallet: str = Query(...)
alias: str = Query("")
- percent: int = Query(..., ge=1)
+ percent: float = Query(..., ge=0.01)
class TargetPut(BaseModel):
|
{"golden_diff": "diff --git a/lnbits/extensions/splitpayments/migrations.py b/lnbits/extensions/splitpayments/migrations.py\n--- a/lnbits/extensions/splitpayments/migrations.py\n+++ b/lnbits/extensions/splitpayments/migrations.py\n@@ -14,3 +14,41 @@\n );\n \"\"\"\n )\n+\n+\n+async def m002_float_percent(db):\n+ \"\"\"\n+ Add float percent and migrates the existing data.\n+ \"\"\"\n+ await db.execute(\"ALTER TABLE splitpayments.targets RENAME TO splitpayments_old\")\n+ await db.execute(\n+ \"\"\"\n+ CREATE TABLE splitpayments.targets (\n+ wallet TEXT NOT NULL,\n+ source TEXT NOT NULL,\n+ percent REAL NOT NULL CHECK (percent >= 0 AND percent <= 100),\n+ alias TEXT,\n+\n+ UNIQUE (source, wallet)\n+ );\n+ \"\"\"\n+ )\n+\n+ for row in [\n+ list(row)\n+ for row in await db.fetchall(\"SELECT * FROM splitpayments.splitpayments_old\")\n+ ]:\n+ await db.execute(\n+ \"\"\"\n+ INSERT INTO splitpayments.targets (\n+ wallet,\n+ source,\n+ percent,\n+ alias\n+ )\n+ VALUES (?, ?, ?, ?)\n+ \"\"\",\n+ (row[0], row[1], row[2], row[3]),\n+ )\n+\n+ await db.execute(\"DROP TABLE splitpayments.splitpayments_old\")\ndiff --git a/lnbits/extensions/splitpayments/models.py b/lnbits/extensions/splitpayments/models.py\n--- a/lnbits/extensions/splitpayments/models.py\n+++ b/lnbits/extensions/splitpayments/models.py\n@@ -7,14 +7,14 @@\n class Target(BaseModel):\n wallet: str\n source: str\n- percent: int\n+ percent: float\n alias: Optional[str]\n \n \n class TargetPutList(BaseModel):\n wallet: str = Query(...)\n alias: str = Query(\"\")\n- percent: int = Query(..., ge=1)\n+ percent: float = Query(..., ge=0.01)\n \n \n class TargetPut(BaseModel):\n", "issue": "Split payments shares <1%\nTrying to set a payment share <1% will result in 500 INTERNAL SERVER ERROR.\r\nThis should work as it would be very useful as a fee for hosting lnbits but 1% is too much.\r\n\r\n\n", "before_files": [{"content": "async def m001_initial(db):\n \"\"\"\n Initial split payment table.\n \"\"\"\n await db.execute(\n \"\"\"\n CREATE TABLE splitpayments.targets (\n wallet TEXT NOT NULL,\n source TEXT NOT NULL,\n percent INTEGER NOT NULL CHECK (percent >= 0 AND percent <= 100),\n alias TEXT,\n\n UNIQUE (source, wallet)\n );\n \"\"\"\n )\n", "path": "lnbits/extensions/splitpayments/migrations.py"}, {"content": "from typing import List, Optional\n\nfrom fastapi.param_functions import Query\nfrom pydantic import BaseModel\n\n\nclass Target(BaseModel):\n wallet: str\n source: str\n percent: int\n alias: Optional[str]\n\n\nclass TargetPutList(BaseModel):\n wallet: str = Query(...)\n alias: str = Query(\"\")\n percent: int = Query(..., ge=1)\n\n\nclass TargetPut(BaseModel):\n __root__: List[TargetPutList]\n", "path": "lnbits/extensions/splitpayments/models.py"}], "after_files": [{"content": "async def m001_initial(db):\n \"\"\"\n Initial split payment table.\n \"\"\"\n await db.execute(\n \"\"\"\n CREATE TABLE splitpayments.targets (\n wallet TEXT NOT NULL,\n source TEXT NOT NULL,\n percent INTEGER NOT NULL CHECK (percent >= 0 AND percent <= 100),\n alias TEXT,\n\n UNIQUE (source, wallet)\n );\n \"\"\"\n )\n\n\nasync def m002_float_percent(db):\n \"\"\"\n Add float percent and migrates the existing data.\n \"\"\"\n await db.execute(\"ALTER TABLE splitpayments.targets RENAME TO splitpayments_old\")\n await db.execute(\n \"\"\"\n CREATE TABLE splitpayments.targets (\n wallet TEXT NOT NULL,\n source TEXT NOT NULL,\n percent REAL NOT NULL CHECK (percent >= 0 AND percent <= 100),\n alias TEXT,\n\n UNIQUE (source, wallet)\n );\n \"\"\"\n )\n\n for row in [\n 
list(row)\n for row in await db.fetchall(\"SELECT * FROM splitpayments.splitpayments_old\")\n ]:\n await db.execute(\n \"\"\"\n INSERT INTO splitpayments.targets (\n wallet,\n source,\n percent,\n alias\n )\n VALUES (?, ?, ?, ?)\n \"\"\",\n (row[0], row[1], row[2], row[3]),\n )\n\n await db.execute(\"DROP TABLE splitpayments.splitpayments_old\")\n", "path": "lnbits/extensions/splitpayments/migrations.py"}, {"content": "from typing import List, Optional\n\nfrom fastapi.param_functions import Query\nfrom pydantic import BaseModel\n\n\nclass Target(BaseModel):\n wallet: str\n source: str\n percent: float\n alias: Optional[str]\n\n\nclass TargetPutList(BaseModel):\n wallet: str = Query(...)\n alias: str = Query(\"\")\n percent: float = Query(..., ge=0.01)\n\n\nclass TargetPut(BaseModel):\n __root__: List[TargetPutList]\n", "path": "lnbits/extensions/splitpayments/models.py"}]}
| 643 | 464 |
gh_patches_debug_24470
|
rasdani/github-patches
|
git_diff
|
talonhub__community-889
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve VSCode draft editor robustness
Users can lose drafts if window switch is too slow. To address this issue, we propose the following:
- [x] Increase sleep timeout in https://github.com/knausj85/knausj_talon/blob/0d4ad8523b87c2fe10457b7fae7f2ba2f22ad735/draft_editor/draft_editor.py#L112
- [x] Keep last draft in memory, and have "draft submit" when outside of VSCode just submit the most recent draft. That way if initial draft submit doesn't work, user can just say "draft submit" again
- [x] While we're here, add "draft top", which selects from cursor to start of document
--- END ISSUE ---
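The second checklist item is the interesting one: if the window switch is slow, the paste can land nowhere, so the draft has to survive the first attempt. A hedged sketch of that idea, with plain functions standing in for Talon actions (`paste` below is a stand-in for `actions.user.paste`, and the function names are invented):

```python
# Sketch only: remember the most recent draft so a repeated "draft submit"
# outside the editor can paste it again instead of losing it.
from typing import Callable, Optional

last_draft: Optional[str] = None


def submit_draft(selected_text: str, paste: Callable[[str], None]) -> None:
    global last_draft
    last_draft = selected_text   # keep it before attempting the paste
    paste(selected_text)


def paste_last_draft(paste: Callable[[str], None]) -> None:
    # Safe to repeat even if the first submit never reached the target window.
    if last_draft:
        paste(last_draft)


submit_draft("hello world", print)   # stand-in paste: just prints
paste_last_draft(print)              # pastes the same draft again
```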
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `draft_editor/draft_editor.py`
Content:
```
1 from talon import Context, Module, actions, ui
2
3 mod = Module()
4 mod.tag("draft_editor_active", "Indicates whether the draft editor has been activated")
5 mod.tag(
6 "draft_editor_app_focused",
7 "Indicates that the draft editor app currently has focus",
8 )
9
10 ctx = Context()
11 tags: set[str] = set()
12
13
14 def add_tag(tag: str):
15 tags.add(tag)
16 ctx.tags = list(tags)
17
18
19 def remove_tag(tag: str):
20 tags.discard(tag)
21 ctx.tags = list(tags)
22
23
24 default_names = ["Visual Studio Code", "Code", "VSCodium", "Codium", "code-oss"]
25
26 setting_editor_names = mod.setting(
27 "draft_editor",
28 type=str,
29 default=None,
30 desc="List of application names to use for draft editor",
31 )
32
33
34 def get_editor_names():
35 names_csv = setting_editor_names.get()
36 return names_csv.split(", ") if names_csv else default_names
37
38
39 @mod.scope
40 def scope():
41 editor_names = get_editor_names()
42
43 for app in ui.apps(background=False):
44 if app.name in editor_names:
45 return {"draft_editor_running": True}
46
47 return {"draft_editor_running": False}
48
49
50 def handle_app_activate(app):
51 if app.name in get_editor_names():
52 add_tag("user.draft_editor_app_focused")
53 else:
54 remove_tag("user.draft_editor_app_focused")
55
56
57 ui.register("app_launch", scope.update)
58 ui.register("app_close", scope.update)
59 ui.register("app_activate", handle_app_activate)
60
61
62 original_window = None
63
64
65 @mod.action_class
66 class Actions:
67 def draft_editor_open():
68 """Open draft editor"""
69 global original_window
70 original_window = ui.active_window()
71 editor_app = get_editor_app()
72 selected_text = actions.edit.selected_text()
73 actions.user.switcher_focus_app(editor_app)
74 # Wait additional time for talon context to update.
75 actions.sleep("200ms")
76 actions.app.tab_open()
77 if selected_text != "":
78 actions.user.paste(selected_text)
79 add_tag("user.draft_editor_active")
80
81 def draft_editor_submit():
82 """Submit/save draft editor"""
83 close_editor(submit_draft=True)
84
85 def draft_editor_discard():
86 """Discard draft editor"""
87 close_editor(submit_draft=False)
88
89
90 def get_editor_app() -> ui.App:
91 editor_names = get_editor_names()
92
93 for app in ui.apps(background=False):
94 if app.name in editor_names:
95 return app
96
97 raise RuntimeError("Draft editor is not running")
98
99
100 def close_editor(submit_draft: bool):
101 remove_tag("user.draft_editor_active")
102 actions.edit.select_all()
103 selected_text = actions.edit.selected_text()
104 actions.edit.delete()
105 actions.app.tab_close()
106 actions.user.switcher_focus_window(original_window)
107 actions.sleep("200ms")
108 if submit_draft:
109 actions.user.paste(selected_text)
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/draft_editor/draft_editor.py b/draft_editor/draft_editor.py
--- a/draft_editor/draft_editor.py
+++ b/draft_editor/draft_editor.py
@@ -61,6 +61,8 @@
original_window = None
+last_draft = None
+
@mod.action_class
class Actions:
@@ -86,6 +88,11 @@
"""Discard draft editor"""
close_editor(submit_draft=False)
+ def draft_editor_paste_last():
+ """Paste last submitted draft"""
+ if last_draft:
+ actions.user.paste(last_draft)
+
def get_editor_app() -> ui.App:
editor_names = get_editor_names()
@@ -98,12 +105,14 @@
def close_editor(submit_draft: bool):
+ global last_draft
remove_tag("user.draft_editor_active")
actions.edit.select_all()
selected_text = actions.edit.selected_text()
actions.edit.delete()
actions.app.tab_close()
actions.user.switcher_focus_window(original_window)
- actions.sleep("200ms")
+ actions.sleep("300ms")
if submit_draft:
+ last_draft = selected_text
actions.user.paste(selected_text)
|
{"golden_diff": "diff --git a/draft_editor/draft_editor.py b/draft_editor/draft_editor.py\n--- a/draft_editor/draft_editor.py\n+++ b/draft_editor/draft_editor.py\n@@ -61,6 +61,8 @@\n \n original_window = None\n \n+last_draft = None\n+\n \n @mod.action_class\n class Actions:\n@@ -86,6 +88,11 @@\n \"\"\"Discard draft editor\"\"\"\n close_editor(submit_draft=False)\n \n+ def draft_editor_paste_last():\n+ \"\"\"Paste last submitted draft\"\"\"\n+ if last_draft:\n+ actions.user.paste(last_draft)\n+\n \n def get_editor_app() -> ui.App:\n editor_names = get_editor_names()\n@@ -98,12 +105,14 @@\n \n \n def close_editor(submit_draft: bool):\n+ global last_draft\n remove_tag(\"user.draft_editor_active\")\n actions.edit.select_all()\n selected_text = actions.edit.selected_text()\n actions.edit.delete()\n actions.app.tab_close()\n actions.user.switcher_focus_window(original_window)\n- actions.sleep(\"200ms\")\n+ actions.sleep(\"300ms\")\n if submit_draft:\n+ last_draft = selected_text\n actions.user.paste(selected_text)\n", "issue": "Improve VSCode draft editor robustness\nUsers can lose drafts if window switch is too slow. To address this issue, we propose the following:\r\n\r\n- [x] Increase sleep timeout in https://github.com/knausj85/knausj_talon/blob/0d4ad8523b87c2fe10457b7fae7f2ba2f22ad735/draft_editor/draft_editor.py#L112\r\n- [x] Keep last draft in memory, and have \"draft submit\" when outside of VSCode just submit the most recent draft. That way if initial draft submit doesn't work, user can just say \"draft submit\" again\r\n- [x] While we're here, add \"draft top\", which selects from cursor to start of document\n", "before_files": [{"content": "from talon import Context, Module, actions, ui\n\nmod = Module()\nmod.tag(\"draft_editor_active\", \"Indicates whether the draft editor has been activated\")\nmod.tag(\n \"draft_editor_app_focused\",\n \"Indicates that the draft editor app currently has focus\",\n)\n\nctx = Context()\ntags: set[str] = set()\n\n\ndef add_tag(tag: str):\n tags.add(tag)\n ctx.tags = list(tags)\n\n\ndef remove_tag(tag: str):\n tags.discard(tag)\n ctx.tags = list(tags)\n\n\ndefault_names = [\"Visual Studio Code\", \"Code\", \"VSCodium\", \"Codium\", \"code-oss\"]\n\nsetting_editor_names = mod.setting(\n \"draft_editor\",\n type=str,\n default=None,\n desc=\"List of application names to use for draft editor\",\n)\n\n\ndef get_editor_names():\n names_csv = setting_editor_names.get()\n return names_csv.split(\", \") if names_csv else default_names\n\n\[email protected]\ndef scope():\n editor_names = get_editor_names()\n\n for app in ui.apps(background=False):\n if app.name in editor_names:\n return {\"draft_editor_running\": True}\n\n return {\"draft_editor_running\": False}\n\n\ndef handle_app_activate(app):\n if app.name in get_editor_names():\n add_tag(\"user.draft_editor_app_focused\")\n else:\n remove_tag(\"user.draft_editor_app_focused\")\n\n\nui.register(\"app_launch\", scope.update)\nui.register(\"app_close\", scope.update)\nui.register(\"app_activate\", handle_app_activate)\n\n\noriginal_window = None\n\n\[email protected]_class\nclass Actions:\n def draft_editor_open():\n \"\"\"Open draft editor\"\"\"\n global original_window\n original_window = ui.active_window()\n editor_app = get_editor_app()\n selected_text = actions.edit.selected_text()\n actions.user.switcher_focus_app(editor_app)\n # Wait additional time for talon context to update.\n actions.sleep(\"200ms\")\n actions.app.tab_open()\n if selected_text != \"\":\n actions.user.paste(selected_text)\n 
add_tag(\"user.draft_editor_active\")\n\n def draft_editor_submit():\n \"\"\"Submit/save draft editor\"\"\"\n close_editor(submit_draft=True)\n\n def draft_editor_discard():\n \"\"\"Discard draft editor\"\"\"\n close_editor(submit_draft=False)\n\n\ndef get_editor_app() -> ui.App:\n editor_names = get_editor_names()\n\n for app in ui.apps(background=False):\n if app.name in editor_names:\n return app\n\n raise RuntimeError(\"Draft editor is not running\")\n\n\ndef close_editor(submit_draft: bool):\n remove_tag(\"user.draft_editor_active\")\n actions.edit.select_all()\n selected_text = actions.edit.selected_text()\n actions.edit.delete()\n actions.app.tab_close()\n actions.user.switcher_focus_window(original_window)\n actions.sleep(\"200ms\")\n if submit_draft:\n actions.user.paste(selected_text)\n", "path": "draft_editor/draft_editor.py"}], "after_files": [{"content": "from talon import Context, Module, actions, ui\n\nmod = Module()\nmod.tag(\"draft_editor_active\", \"Indicates whether the draft editor has been activated\")\nmod.tag(\n \"draft_editor_app_focused\",\n \"Indicates that the draft editor app currently has focus\",\n)\n\nctx = Context()\ntags: set[str] = set()\n\n\ndef add_tag(tag: str):\n tags.add(tag)\n ctx.tags = list(tags)\n\n\ndef remove_tag(tag: str):\n tags.discard(tag)\n ctx.tags = list(tags)\n\n\ndefault_names = [\"Visual Studio Code\", \"Code\", \"VSCodium\", \"Codium\", \"code-oss\"]\n\nsetting_editor_names = mod.setting(\n \"draft_editor\",\n type=str,\n default=None,\n desc=\"List of application names to use for draft editor\",\n)\n\n\ndef get_editor_names():\n names_csv = setting_editor_names.get()\n return names_csv.split(\", \") if names_csv else default_names\n\n\[email protected]\ndef scope():\n editor_names = get_editor_names()\n\n for app in ui.apps(background=False):\n if app.name in editor_names:\n return {\"draft_editor_running\": True}\n\n return {\"draft_editor_running\": False}\n\n\ndef handle_app_activate(app):\n if app.name in get_editor_names():\n add_tag(\"user.draft_editor_app_focused\")\n else:\n remove_tag(\"user.draft_editor_app_focused\")\n\n\nui.register(\"app_launch\", scope.update)\nui.register(\"app_close\", scope.update)\nui.register(\"app_activate\", handle_app_activate)\n\n\noriginal_window = None\n\nlast_draft = None\n\n\[email protected]_class\nclass Actions:\n def draft_editor_open():\n \"\"\"Open draft editor\"\"\"\n global original_window\n original_window = ui.active_window()\n editor_app = get_editor_app()\n selected_text = actions.edit.selected_text()\n actions.user.switcher_focus_app(editor_app)\n # Wait additional time for talon context to update.\n actions.sleep(\"200ms\")\n actions.app.tab_open()\n if selected_text != \"\":\n actions.user.paste(selected_text)\n add_tag(\"user.draft_editor_active\")\n\n def draft_editor_submit():\n \"\"\"Submit/save draft editor\"\"\"\n close_editor(submit_draft=True)\n\n def draft_editor_discard():\n \"\"\"Discard draft editor\"\"\"\n close_editor(submit_draft=False)\n\n def draft_editor_paste_last():\n \"\"\"Paste last submitted draft\"\"\"\n if last_draft:\n actions.user.paste(last_draft)\n\n\ndef get_editor_app() -> ui.App:\n editor_names = get_editor_names()\n\n for app in ui.apps(background=False):\n if app.name in editor_names:\n return app\n\n raise RuntimeError(\"Draft editor is not running\")\n\n\ndef close_editor(submit_draft: bool):\n global last_draft\n remove_tag(\"user.draft_editor_active\")\n actions.edit.select_all()\n selected_text = actions.edit.selected_text()\n 
actions.edit.delete()\n actions.app.tab_close()\n actions.user.switcher_focus_window(original_window)\n actions.sleep(\"300ms\")\n if submit_draft:\n last_draft = selected_text\n actions.user.paste(selected_text)\n", "path": "draft_editor/draft_editor.py"}]}
| 1,291 | 272 |
gh_patches_debug_5922
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-3130
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV2 cloudformation checks are not packaged into the whl file
**Describe the issue**
CKV2 cloudformation checks are not packaged into the whl file.
I don't see them in the list command or the policy index.
Checks are stored at: https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/graph_checks/aws
**Additional context**
A solution should be similar to https://github.com/bridgecrewio/checkov/pull/2255
The impact is that ckv2 cfn policies are not running
--- END ISSUE ---
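One quick way to confirm the symptom (and later the fix) is to inspect a built wheel and list which graph-check YAML files actually shipped; the wheel filename below is a placeholder for whatever artifact was built:

```python
# Check which graph-check YAML files made it into a built wheel.
# The wheel filename is a placeholder; substitute the artifact you built.
import zipfile

with zipfile.ZipFile("checkov-2.0.0-py3-none-any.whl") as whl:
    yaml_checks = sorted(
        name
        for name in whl.namelist()
        if "checks/graph_checks" in name and name.endswith(".yaml")
    )

for name in yaml_checks:
    print(name)  # cloudformation entries should appear once they are packaged
```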
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import logging
3 import os
4 from importlib import util
5 from os import path
6
7 import setuptools
8 from setuptools import setup
9
10 # read the contents of your README file
11 this_directory = path.abspath(path.dirname(__file__))
12 with open(path.join(this_directory, "README.md"), encoding="utf-8") as f:
13 long_description = f.read()
14
15 logger = logging.getLogger(__name__)
16 spec = util.spec_from_file_location(
17 "checkov.version", os.path.join("checkov", "version.py")
18 )
19 # noinspection PyUnresolvedReferences
20 mod = util.module_from_spec(spec)
21 spec.loader.exec_module(mod) # type: ignore
22 version = mod.version # type: ignore
23
24 setup(
25 extras_require={
26 "dev": [
27 "pytest==5.3.1",
28 "coverage==5.5",
29 "coverage-badge",
30 "GitPython==3.1.7",
31 "bandit",
32 "jsonschema",
33 ]
34 },
35 install_requires=[
36 "bc-python-hcl2==0.3.42",
37 "cloudsplaining>=0.4.1",
38 "deep_merge",
39 "tabulate",
40 "colorama",
41 "termcolor",
42 "junit-xml>=1.9",
43 "dpath>=1.5.0,<2",
44 "pyyaml>=5.4.1",
45 "boto3>=1.17",
46 "GitPython",
47 "jmespath",
48 "tqdm",
49 "update_checker",
50 "semantic_version",
51 "packaging",
52 "networkx",
53 "dockerfile-parse",
54 "docker",
55 "configargparse",
56 "argcomplete",
57 "detect-secrets",
58 "policyuniverse",
59 "typing-extensions>=4.1.0",
60 "cachetools",
61 "cyclonedx-python-lib>=2.4.0",
62 "click>=8.0.0",
63 "aiohttp",
64 "aiodns",
65 "aiomultiprocess",
66 "jsonpath_ng",
67 "jsonschema~=3.0",
68 "prettytable>=3.0.0",
69 "pycep-parser==0.3.7",
70 "charset-normalizer",
71 ],
72 license="Apache License 2.0",
73 name="checkov",
74 version=version,
75 python_requires=">=3.7",
76 description="Infrastructure as code static analysis",
77 author="bridgecrew",
78 author_email="[email protected]",
79 url="https://github.com/bridgecrewio/checkov",
80 packages=setuptools.find_packages(exclude=["tests*", "integration_tests*"]),
81 include_package_data=True,
82 package_dir={
83 "checkov.bicep.checks.graph_checks": "checkov/bicep/checks/graph_checks",
84 "checkov.terraform.checks.graph_checks": "checkov/terraform/checks/graph_checks",
85 },
86 package_data={
87 "checkov": ["py.typed"],
88 "checkov.bicep.checks.graph_checks": ["*.yaml"],
89 "checkov.common.util.templates": ["*.jinja2"],
90 "checkov.terraform.checks.graph_checks": [
91 "aws/*.yaml",
92 "gcp/*.yaml",
93 "azure/*.yaml",
94 ],
95 },
96 scripts=["bin/checkov", "bin/checkov.cmd"],
97 long_description=long_description,
98 long_description_content_type="text/markdown",
99 classifiers=[
100 "Environment :: Console",
101 "Intended Audience :: Developers",
102 "Intended Audience :: System Administrators",
103 "License :: OSI Approved :: Apache Software License",
104 "Programming Language :: Python :: 3 :: Only",
105 "Programming Language :: Python :: 3.7",
106 "Programming Language :: Python :: 3.8",
107 "Programming Language :: Python :: 3.9",
108 "Programming Language :: Python :: 3.10",
109 "Topic :: Security",
110 "Topic :: Software Development :: Build Tools",
111 "Typing :: Typed",
112 ],
113 )
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -81,6 +81,7 @@
include_package_data=True,
package_dir={
"checkov.bicep.checks.graph_checks": "checkov/bicep/checks/graph_checks",
+ "checkov.cloudformation.checks.graph_checks": "checkov/cloudformation/checks/graph_checks",
"checkov.terraform.checks.graph_checks": "checkov/terraform/checks/graph_checks",
},
package_data={
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -81,6 +81,7 @@\n include_package_data=True,\n package_dir={\n \"checkov.bicep.checks.graph_checks\": \"checkov/bicep/checks/graph_checks\",\n+ \"checkov.cloudformation.checks.graph_checks\": \"checkov/cloudformation/checks/graph_checks\",\n \"checkov.terraform.checks.graph_checks\": \"checkov/terraform/checks/graph_checks\",\n },\n package_data={\n", "issue": "CKV2 cloudformation checks are not packaged into the whl file \n**Describe the issue**\r\nCKV2 cloudformation checks are not packaged into the whl file.\r\nI don't see them in the list command or the policy index.\r\nChecks are stored at: https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/graph_checks/aws\r\n**Additional context**\r\nA solution should be similar to https://github.com/bridgecrewio/checkov/pull/2255\r\n\r\nThe impact is that ckv2 cfn policies are not running\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage==5.5\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\",\n \"jsonschema\",\n ]\n },\n install_requires=[\n \"bc-python-hcl2==0.3.42\",\n \"cloudsplaining>=0.4.1\",\n \"deep_merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml>=1.9\",\n \"dpath>=1.5.0,<2\",\n \"pyyaml>=5.4.1\",\n \"boto3>=1.17\",\n \"GitPython\",\n \"jmespath\",\n \"tqdm\",\n \"update_checker\",\n \"semantic_version\",\n \"packaging\",\n \"networkx\",\n \"dockerfile-parse\",\n \"docker\",\n \"configargparse\",\n \"argcomplete\",\n \"detect-secrets\",\n \"policyuniverse\",\n \"typing-extensions>=4.1.0\",\n \"cachetools\",\n \"cyclonedx-python-lib>=2.4.0\",\n \"click>=8.0.0\",\n \"aiohttp\",\n \"aiodns\",\n \"aiomultiprocess\",\n \"jsonpath_ng\",\n \"jsonschema~=3.0\",\n \"prettytable>=3.0.0\",\n \"pycep-parser==0.3.7\",\n \"charset-normalizer\",\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\", \"integration_tests*\"]),\n include_package_data=True,\n package_dir={\n \"checkov.bicep.checks.graph_checks\": \"checkov/bicep/checks/graph_checks\",\n \"checkov.terraform.checks.graph_checks\": \"checkov/terraform/checks/graph_checks\",\n },\n package_data={\n \"checkov\": [\"py.typed\"],\n \"checkov.bicep.checks.graph_checks\": [\"*.yaml\"],\n \"checkov.common.util.templates\": [\"*.jinja2\"],\n \"checkov.terraform.checks.graph_checks\": [\n \"aws/*.yaml\",\n \"gcp/*.yaml\",\n \"azure/*.yaml\",\n ],\n },\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n 
long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Security\",\n \"Topic :: Software Development :: Build Tools\",\n \"Typing :: Typed\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage==5.5\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\",\n \"jsonschema\",\n ]\n },\n install_requires=[\n \"bc-python-hcl2==0.3.42\",\n \"cloudsplaining>=0.4.1\",\n \"deep_merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml>=1.9\",\n \"dpath>=1.5.0,<2\",\n \"pyyaml>=5.4.1\",\n \"boto3>=1.17\",\n \"GitPython\",\n \"jmespath\",\n \"tqdm\",\n \"update_checker\",\n \"semantic_version\",\n \"packaging\",\n \"networkx\",\n \"dockerfile-parse\",\n \"docker\",\n \"configargparse\",\n \"argcomplete\",\n \"detect-secrets\",\n \"policyuniverse\",\n \"typing-extensions>=4.1.0\",\n \"cachetools\",\n \"cyclonedx-python-lib>=2.4.0\",\n \"click>=8.0.0\",\n \"aiohttp\",\n \"aiodns\",\n \"aiomultiprocess\",\n \"jsonpath_ng\",\n \"jsonschema~=3.0\",\n \"prettytable>=3.0.0\",\n \"pycep-parser==0.3.7\",\n \"charset-normalizer\",\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\", \"integration_tests*\"]),\n include_package_data=True,\n package_dir={\n \"checkov.bicep.checks.graph_checks\": \"checkov/bicep/checks/graph_checks\",\n \"checkov.cloudformation.checks.graph_checks\": \"checkov/cloudformation/checks/graph_checks\",\n \"checkov.terraform.checks.graph_checks\": \"checkov/terraform/checks/graph_checks\",\n },\n package_data={\n \"checkov\": [\"py.typed\"],\n \"checkov.bicep.checks.graph_checks\": [\"*.yaml\"],\n \"checkov.common.util.templates\": [\"*.jinja2\"],\n \"checkov.terraform.checks.graph_checks\": [\n \"aws/*.yaml\",\n \"gcp/*.yaml\",\n \"azure/*.yaml\",\n ],\n },\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3 :: Only\",\n 
\"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Security\",\n \"Topic :: Software Development :: Build Tools\",\n \"Typing :: Typed\",\n ],\n)\n", "path": "setup.py"}]}
| 1,471 | 114 |
gh_patches_debug_3899
|
rasdani/github-patches
|
git_diff
|
pydantic__pydantic-6809
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MaxLen annotation validator for custom sequences is incorrect
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
When using an `annotated_type.MaxLen` annotation with a custom sequence it rejects sequences that have precisely the maximum length when it should only reject if they are greater.
I'm pretty sure this is because [the validator function](https://github.com/pydantic/pydantic/blob/6ef959fa69d1ab1aedf2778b6c8f452e0fee37b6/pydantic/_internal/_validators.py#L266C27-L266C27) used for custom sequences currently checks for strict inequality rather than less or equal.
### Example Code
```Python
from typing import Annotated
from annotated_types import MaxLen
from pydantic import BaseModel
class MySequence(BaseModel):
inner: list[int]
def __len__(self):
return len(self.inner)
def __getitem__(self, __key: int) -> int:
return self.inner[__key]
class MyModel(BaseModel):
seq: Annotated[MySequence, MaxLen(3)]
MyModel.model_validate({"seq": {"inner": [1, 2, 3]}})
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.0.3
pydantic-core version: 2.3.0 release build profile
python version: 3.11.4 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 13:47:18) [MSC v.1916 64 bit (AMD64)]
platform: Windows-10-10.0.19045-SP0
optional deps. installed: ['typing-extensions'];
```
Selected Assignee: @davidhewitt
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pydantic/_internal/_validators.py`
Content:
```
1 """Validator functions for standard library types.
2
3 Import of this module is deferred since it contains imports of many standard library modules.
4 """
5
6 from __future__ import annotations as _annotations
7
8 import re
9 import typing
10 from ipaddress import IPv4Address, IPv4Interface, IPv4Network, IPv6Address, IPv6Interface, IPv6Network
11 from typing import Any
12
13 from pydantic_core import PydanticCustomError, core_schema
14 from pydantic_core._pydantic_core import PydanticKnownError
15
16
17 def sequence_validator(
18 __input_value: typing.Sequence[Any],
19 validator: core_schema.ValidatorFunctionWrapHandler,
20 ) -> typing.Sequence[Any]:
21 """Validator for `Sequence` types, isinstance(v, Sequence) has already been called."""
22 value_type = type(__input_value)
23
24 # We don't accept any plain string as a sequence
25 # Relevant issue: https://github.com/pydantic/pydantic/issues/5595
26 if issubclass(value_type, (str, bytes)):
27 raise PydanticCustomError(
28 'sequence_str',
29 "'{type_name}' instances are not allowed as a Sequence value",
30 {'type_name': value_type.__name__},
31 )
32
33 v_list = validator(__input_value)
34
35 # the rest of the logic is just re-creating the original type from `v_list`
36 if value_type == list:
37 return v_list
38 elif issubclass(value_type, range):
39 # return the list as we probably can't re-create the range
40 return v_list
41 else:
42 # best guess at how to re-create the original type, more custom construction logic might be required
43 return value_type(v_list) # type: ignore[call-arg]
44
45
46 def import_string(value: Any) -> Any:
47 if isinstance(value, str):
48 try:
49 return _import_string_logic(value)
50 except ImportError as e:
51 raise PydanticCustomError('import_error', 'Invalid python path: {error}', {'error': str(e)})
52 else:
53 # otherwise we just return the value and let the next validator do the rest of the work
54 return value
55
56
57 def _import_string_logic(dotted_path: str) -> Any:
58 """Inspired by uvicorn — dotted paths should include a colon before the final item if that item is not a module.
59 (This is necessary to distinguish between a submodule and an attribute when there is a conflict.).
60
61 If the dotted path does not include a colon and the final item is not a valid module, importing as an attribute
62 rather than a submodule will be attempted automatically.
63
64 So, for example, the following values of `dotted_path` result in the following returned values:
65 * 'collections': <module 'collections'>
66 * 'collections.abc': <module 'collections.abc'>
67 * 'collections.abc:Mapping': <class 'collections.abc.Mapping'>
68 * `collections.abc.Mapping`: <class 'collections.abc.Mapping'> (though this is a bit slower than the previous line)
69
70 An error will be raised under any of the following scenarios:
71 * `dotted_path` contains more than one colon (e.g., 'collections:abc:Mapping')
72 * the substring of `dotted_path` before the colon is not a valid module in the environment (e.g., '123:Mapping')
73 * the substring of `dotted_path` after the colon is not an attribute of the module (e.g., 'collections:abc123')
74 """
75 from importlib import import_module
76
77 components = dotted_path.strip().split(':')
78 if len(components) > 2:
79 raise ImportError(f"Import strings should have at most one ':'; received {dotted_path!r}")
80
81 module_path = components[0]
82 if not module_path:
83 raise ImportError(f'Import strings should have a nonempty module name; received {dotted_path!r}')
84
85 try:
86 module = import_module(module_path)
87 except ModuleNotFoundError as e:
88 if '.' in module_path:
89 # Check if it would be valid if the final item was separated from its module with a `:`
90 maybe_module_path, maybe_attribute = dotted_path.strip().rsplit('.', 1)
91 try:
92 return _import_string_logic(f'{maybe_module_path}:{maybe_attribute}')
93 except ImportError:
94 pass
95 raise ImportError(f'No module named {module_path!r}') from e
96 raise e
97
98 if len(components) > 1:
99 attribute = components[1]
100 try:
101 return getattr(module, attribute)
102 except AttributeError as e:
103 raise ImportError(f'cannot import name {attribute!r} from {module_path!r}') from e
104 else:
105 return module
106
107
108 def pattern_either_validator(__input_value: Any) -> typing.Pattern[Any]:
109 if isinstance(__input_value, typing.Pattern):
110 return __input_value # type: ignore
111 elif isinstance(__input_value, (str, bytes)):
112 # todo strict mode
113 return compile_pattern(__input_value) # type: ignore
114 else:
115 raise PydanticCustomError('pattern_type', 'Input should be a valid pattern')
116
117
118 def pattern_str_validator(__input_value: Any) -> typing.Pattern[str]:
119 if isinstance(__input_value, typing.Pattern):
120 if isinstance(__input_value.pattern, str):
121 return __input_value
122 else:
123 raise PydanticCustomError('pattern_str_type', 'Input should be a string pattern')
124 elif isinstance(__input_value, str):
125 return compile_pattern(__input_value)
126 elif isinstance(__input_value, bytes):
127 raise PydanticCustomError('pattern_str_type', 'Input should be a string pattern')
128 else:
129 raise PydanticCustomError('pattern_type', 'Input should be a valid pattern')
130
131
132 def pattern_bytes_validator(__input_value: Any) -> typing.Pattern[bytes]:
133 if isinstance(__input_value, typing.Pattern):
134 if isinstance(__input_value.pattern, bytes):
135 return __input_value
136 else:
137 raise PydanticCustomError('pattern_bytes_type', 'Input should be a bytes pattern')
138 elif isinstance(__input_value, bytes):
139 return compile_pattern(__input_value)
140 elif isinstance(__input_value, str):
141 raise PydanticCustomError('pattern_bytes_type', 'Input should be a bytes pattern')
142 else:
143 raise PydanticCustomError('pattern_type', 'Input should be a valid pattern')
144
145
146 PatternType = typing.TypeVar('PatternType', str, bytes)
147
148
149 def compile_pattern(pattern: PatternType) -> typing.Pattern[PatternType]:
150 try:
151 return re.compile(pattern)
152 except re.error:
153 raise PydanticCustomError('pattern_regex', 'Input should be a valid regular expression')
154
155
156 def ip_v4_address_validator(__input_value: Any) -> IPv4Address:
157 if isinstance(__input_value, IPv4Address):
158 return __input_value
159
160 try:
161 return IPv4Address(__input_value)
162 except ValueError:
163 raise PydanticCustomError('ip_v4_address', 'Input is not a valid IPv4 address')
164
165
166 def ip_v6_address_validator(__input_value: Any) -> IPv6Address:
167 if isinstance(__input_value, IPv6Address):
168 return __input_value
169
170 try:
171 return IPv6Address(__input_value)
172 except ValueError:
173 raise PydanticCustomError('ip_v6_address', 'Input is not a valid IPv6 address')
174
175
176 def ip_v4_network_validator(__input_value: Any) -> IPv4Network:
177 """Assume IPv4Network initialised with a default `strict` argument.
178
179 See more:
180 https://docs.python.org/library/ipaddress.html#ipaddress.IPv4Network
181 """
182 if isinstance(__input_value, IPv4Network):
183 return __input_value
184
185 try:
186 return IPv4Network(__input_value)
187 except ValueError:
188 raise PydanticCustomError('ip_v4_network', 'Input is not a valid IPv4 network')
189
190
191 def ip_v6_network_validator(__input_value: Any) -> IPv6Network:
192 """Assume IPv6Network initialised with a default `strict` argument.
193
194 See more:
195 https://docs.python.org/library/ipaddress.html#ipaddress.IPv6Network
196 """
197 if isinstance(__input_value, IPv6Network):
198 return __input_value
199
200 try:
201 return IPv6Network(__input_value)
202 except ValueError:
203 raise PydanticCustomError('ip_v6_network', 'Input is not a valid IPv6 network')
204
205
206 def ip_v4_interface_validator(__input_value: Any) -> IPv4Interface:
207 if isinstance(__input_value, IPv4Interface):
208 return __input_value
209
210 try:
211 return IPv4Interface(__input_value)
212 except ValueError:
213 raise PydanticCustomError('ip_v4_interface', 'Input is not a valid IPv4 interface')
214
215
216 def ip_v6_interface_validator(__input_value: Any) -> IPv6Interface:
217 if isinstance(__input_value, IPv6Interface):
218 return __input_value
219
220 try:
221 return IPv6Interface(__input_value)
222 except ValueError:
223 raise PydanticCustomError('ip_v6_interface', 'Input is not a valid IPv6 interface')
224
225
226 def greater_than_validator(x: Any, gt: Any) -> Any:
227 if not (x > gt):
228 raise PydanticKnownError('greater_than', {'gt': str(gt)})
229 return x
230
231
232 def greater_than_or_equal_validator(x: Any, ge: Any) -> Any:
233 if not (x >= ge):
234 raise PydanticKnownError('greater_than_equal', {'ge': str(ge)})
235 return x
236
237
238 def less_than_validator(x: Any, lt: Any) -> Any:
239 if not (x < lt):
240 raise PydanticKnownError('less_than', {'lt': str(lt)})
241 return x
242
243
244 def less_than_or_equal_validator(x: Any, le: Any) -> Any:
245 if not (x <= le):
246 raise PydanticKnownError('less_than_equal', {'le': str(le)})
247 return x
248
249
250 def multiple_of_validator(x: Any, multiple_of: Any) -> Any:
251 if not (x % multiple_of == 0):
252 raise PydanticKnownError('multiple_of', {'multiple_of': multiple_of})
253 return x
254
255
256 def min_length_validator(x: Any, min_length: Any) -> Any:
257 if not (len(x) >= min_length):
258 raise PydanticKnownError(
259 'too_short',
260 {'field_type': 'Value', 'min_length': min_length, 'actual_length': len(x)},
261 )
262 return x
263
264
265 def max_length_validator(x: Any, max_length: Any) -> Any:
266 if not (len(x) < max_length):
267 raise PydanticKnownError(
268 'too_long',
269 {'field_type': 'Value', 'max_length': max_length, 'actual_length': len(x)},
270 )
271 return x
272
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pydantic/_internal/_validators.py b/pydantic/_internal/_validators.py
--- a/pydantic/_internal/_validators.py
+++ b/pydantic/_internal/_validators.py
@@ -263,7 +263,7 @@
def max_length_validator(x: Any, max_length: Any) -> Any:
- if not (len(x) < max_length):
+ if len(x) > max_length:
raise PydanticKnownError(
'too_long',
{'field_type': 'Value', 'max_length': max_length, 'actual_length': len(x)},
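The behavioural difference is easy to check in isolation. Below is a minimal, dependency-free sketch of the corrected boundary semantics (the function name mirrors the patched validator, but the error type and message are simplified for illustration):
```python
def max_length_validator(x, max_length):
    # Inclusive upper bound: a value whose length equals max_length is accepted.
    if len(x) > max_length:
        raise ValueError(f"too long: length {len(x)} exceeds max_length {max_length}")
    return x

assert max_length_validator([1, 2, 3], 3) == [1, 2, 3]  # exactly max_length passes
try:
    max_length_validator([1, 2, 3, 4], 3)  # one past max_length is rejected
except ValueError:
    pass
```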
|
{"golden_diff": "diff --git a/pydantic/_internal/_validators.py b/pydantic/_internal/_validators.py\n--- a/pydantic/_internal/_validators.py\n+++ b/pydantic/_internal/_validators.py\n@@ -263,7 +263,7 @@\n \n \n def max_length_validator(x: Any, max_length: Any) -> Any:\n- if not (len(x) < max_length):\n+ if len(x) > max_length:\n raise PydanticKnownError(\n 'too_long',\n {'field_type': 'Value', 'max_length': max_length, 'actual_length': len(x)},\n", "issue": "MaxLen annotation validator for custom sequences is incorrect\n### Initial Checks\n\n- [X] I confirm that I'm using Pydantic V2\n\n### Description\n\nWhen using an `annotated_type.MaxLen` annotation with a custom sequence it rejects sequences that have precisely the maximum length when it should only reject if they are greater.\r\n\r\nI'm pretty sure this is because [the validator function](https://github.com/pydantic/pydantic/blob/6ef959fa69d1ab1aedf2778b6c8f452e0fee37b6/pydantic/_internal/_validators.py#L266C27-L266C27) used for custom sequences currently checks for strict inequality rather than less or equal.\n\n### Example Code\n\n```Python\nfrom typing import Annotated\r\nfrom annotated_types import MaxLen\r\n\r\nfrom pydantic import BaseModel\r\n\r\nclass MySequence(BaseModel):\r\n\r\n inner: list[int]\r\n\r\n def __len__(self):\r\n return len(self.inner)\r\n\r\n def __getitem__(self, __key: int) -> int:\r\n return self.inner[__key]\r\n\r\n\r\nclass MyModel(BaseModel):\r\n seq: Annotated[MySequence, MaxLen(3)]\r\n\r\n\r\nMyModel.model_validate({\"seq\": {\"inner\": [1, 2, 3]}})\n```\n\n\n### Python, Pydantic & OS Version\n\n```Text\npydantic version: 2.0.3\r\npydantic-core version: 2.3.0 release build profile\r\npython version: 3.11.4 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 13:47:18) [MSC v.1916 64 bit (AMD64)]\r\nplatform: Windows-10-10.0.19045-SP0\r\noptional deps. 
installed: ['typing-extensions'];\n```\n\n\nSelected Assignee: @davidhewitt\n", "before_files": [{"content": "\"\"\"Validator functions for standard library types.\n\nImport of this module is deferred since it contains imports of many standard library modules.\n\"\"\"\n\nfrom __future__ import annotations as _annotations\n\nimport re\nimport typing\nfrom ipaddress import IPv4Address, IPv4Interface, IPv4Network, IPv6Address, IPv6Interface, IPv6Network\nfrom typing import Any\n\nfrom pydantic_core import PydanticCustomError, core_schema\nfrom pydantic_core._pydantic_core import PydanticKnownError\n\n\ndef sequence_validator(\n __input_value: typing.Sequence[Any],\n validator: core_schema.ValidatorFunctionWrapHandler,\n) -> typing.Sequence[Any]:\n \"\"\"Validator for `Sequence` types, isinstance(v, Sequence) has already been called.\"\"\"\n value_type = type(__input_value)\n\n # We don't accept any plain string as a sequence\n # Relevant issue: https://github.com/pydantic/pydantic/issues/5595\n if issubclass(value_type, (str, bytes)):\n raise PydanticCustomError(\n 'sequence_str',\n \"'{type_name}' instances are not allowed as a Sequence value\",\n {'type_name': value_type.__name__},\n )\n\n v_list = validator(__input_value)\n\n # the rest of the logic is just re-creating the original type from `v_list`\n if value_type == list:\n return v_list\n elif issubclass(value_type, range):\n # return the list as we probably can't re-create the range\n return v_list\n else:\n # best guess at how to re-create the original type, more custom construction logic might be required\n return value_type(v_list) # type: ignore[call-arg]\n\n\ndef import_string(value: Any) -> Any:\n if isinstance(value, str):\n try:\n return _import_string_logic(value)\n except ImportError as e:\n raise PydanticCustomError('import_error', 'Invalid python path: {error}', {'error': str(e)})\n else:\n # otherwise we just return the value and let the next validator do the rest of the work\n return value\n\n\ndef _import_string_logic(dotted_path: str) -> Any:\n \"\"\"Inspired by uvicorn \u2014 dotted paths should include a colon before the final item if that item is not a module.\n (This is necessary to distinguish between a submodule and an attribute when there is a conflict.).\n\n If the dotted path does not include a colon and the final item is not a valid module, importing as an attribute\n rather than a submodule will be attempted automatically.\n\n So, for example, the following values of `dotted_path` result in the following returned values:\n * 'collections': <module 'collections'>\n * 'collections.abc': <module 'collections.abc'>\n * 'collections.abc:Mapping': <class 'collections.abc.Mapping'>\n * `collections.abc.Mapping`: <class 'collections.abc.Mapping'> (though this is a bit slower than the previous line)\n\n An error will be raised under any of the following scenarios:\n * `dotted_path` contains more than one colon (e.g., 'collections:abc:Mapping')\n * the substring of `dotted_path` before the colon is not a valid module in the environment (e.g., '123:Mapping')\n * the substring of `dotted_path` after the colon is not an attribute of the module (e.g., 'collections:abc123')\n \"\"\"\n from importlib import import_module\n\n components = dotted_path.strip().split(':')\n if len(components) > 2:\n raise ImportError(f\"Import strings should have at most one ':'; received {dotted_path!r}\")\n\n module_path = components[0]\n if not module_path:\n raise ImportError(f'Import strings should have a nonempty module name; received 
{dotted_path!r}')\n\n try:\n module = import_module(module_path)\n except ModuleNotFoundError as e:\n if '.' in module_path:\n # Check if it would be valid if the final item was separated from its module with a `:`\n maybe_module_path, maybe_attribute = dotted_path.strip().rsplit('.', 1)\n try:\n return _import_string_logic(f'{maybe_module_path}:{maybe_attribute}')\n except ImportError:\n pass\n raise ImportError(f'No module named {module_path!r}') from e\n raise e\n\n if len(components) > 1:\n attribute = components[1]\n try:\n return getattr(module, attribute)\n except AttributeError as e:\n raise ImportError(f'cannot import name {attribute!r} from {module_path!r}') from e\n else:\n return module\n\n\ndef pattern_either_validator(__input_value: Any) -> typing.Pattern[Any]:\n if isinstance(__input_value, typing.Pattern):\n return __input_value # type: ignore\n elif isinstance(__input_value, (str, bytes)):\n # todo strict mode\n return compile_pattern(__input_value) # type: ignore\n else:\n raise PydanticCustomError('pattern_type', 'Input should be a valid pattern')\n\n\ndef pattern_str_validator(__input_value: Any) -> typing.Pattern[str]:\n if isinstance(__input_value, typing.Pattern):\n if isinstance(__input_value.pattern, str):\n return __input_value\n else:\n raise PydanticCustomError('pattern_str_type', 'Input should be a string pattern')\n elif isinstance(__input_value, str):\n return compile_pattern(__input_value)\n elif isinstance(__input_value, bytes):\n raise PydanticCustomError('pattern_str_type', 'Input should be a string pattern')\n else:\n raise PydanticCustomError('pattern_type', 'Input should be a valid pattern')\n\n\ndef pattern_bytes_validator(__input_value: Any) -> typing.Pattern[bytes]:\n if isinstance(__input_value, typing.Pattern):\n if isinstance(__input_value.pattern, bytes):\n return __input_value\n else:\n raise PydanticCustomError('pattern_bytes_type', 'Input should be a bytes pattern')\n elif isinstance(__input_value, bytes):\n return compile_pattern(__input_value)\n elif isinstance(__input_value, str):\n raise PydanticCustomError('pattern_bytes_type', 'Input should be a bytes pattern')\n else:\n raise PydanticCustomError('pattern_type', 'Input should be a valid pattern')\n\n\nPatternType = typing.TypeVar('PatternType', str, bytes)\n\n\ndef compile_pattern(pattern: PatternType) -> typing.Pattern[PatternType]:\n try:\n return re.compile(pattern)\n except re.error:\n raise PydanticCustomError('pattern_regex', 'Input should be a valid regular expression')\n\n\ndef ip_v4_address_validator(__input_value: Any) -> IPv4Address:\n if isinstance(__input_value, IPv4Address):\n return __input_value\n\n try:\n return IPv4Address(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v4_address', 'Input is not a valid IPv4 address')\n\n\ndef ip_v6_address_validator(__input_value: Any) -> IPv6Address:\n if isinstance(__input_value, IPv6Address):\n return __input_value\n\n try:\n return IPv6Address(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v6_address', 'Input is not a valid IPv6 address')\n\n\ndef ip_v4_network_validator(__input_value: Any) -> IPv4Network:\n \"\"\"Assume IPv4Network initialised with a default `strict` argument.\n\n See more:\n https://docs.python.org/library/ipaddress.html#ipaddress.IPv4Network\n \"\"\"\n if isinstance(__input_value, IPv4Network):\n return __input_value\n\n try:\n return IPv4Network(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v4_network', 'Input is not a valid IPv4 
network')\n\n\ndef ip_v6_network_validator(__input_value: Any) -> IPv6Network:\n \"\"\"Assume IPv6Network initialised with a default `strict` argument.\n\n See more:\n https://docs.python.org/library/ipaddress.html#ipaddress.IPv6Network\n \"\"\"\n if isinstance(__input_value, IPv6Network):\n return __input_value\n\n try:\n return IPv6Network(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v6_network', 'Input is not a valid IPv6 network')\n\n\ndef ip_v4_interface_validator(__input_value: Any) -> IPv4Interface:\n if isinstance(__input_value, IPv4Interface):\n return __input_value\n\n try:\n return IPv4Interface(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v4_interface', 'Input is not a valid IPv4 interface')\n\n\ndef ip_v6_interface_validator(__input_value: Any) -> IPv6Interface:\n if isinstance(__input_value, IPv6Interface):\n return __input_value\n\n try:\n return IPv6Interface(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v6_interface', 'Input is not a valid IPv6 interface')\n\n\ndef greater_than_validator(x: Any, gt: Any) -> Any:\n if not (x > gt):\n raise PydanticKnownError('greater_than', {'gt': str(gt)})\n return x\n\n\ndef greater_than_or_equal_validator(x: Any, ge: Any) -> Any:\n if not (x >= ge):\n raise PydanticKnownError('greater_than_equal', {'ge': str(ge)})\n return x\n\n\ndef less_than_validator(x: Any, lt: Any) -> Any:\n if not (x < lt):\n raise PydanticKnownError('less_than', {'lt': str(lt)})\n return x\n\n\ndef less_than_or_equal_validator(x: Any, le: Any) -> Any:\n if not (x <= le):\n raise PydanticKnownError('less_than_equal', {'le': str(le)})\n return x\n\n\ndef multiple_of_validator(x: Any, multiple_of: Any) -> Any:\n if not (x % multiple_of == 0):\n raise PydanticKnownError('multiple_of', {'multiple_of': multiple_of})\n return x\n\n\ndef min_length_validator(x: Any, min_length: Any) -> Any:\n if not (len(x) >= min_length):\n raise PydanticKnownError(\n 'too_short',\n {'field_type': 'Value', 'min_length': min_length, 'actual_length': len(x)},\n )\n return x\n\n\ndef max_length_validator(x: Any, max_length: Any) -> Any:\n if not (len(x) < max_length):\n raise PydanticKnownError(\n 'too_long',\n {'field_type': 'Value', 'max_length': max_length, 'actual_length': len(x)},\n )\n return x\n", "path": "pydantic/_internal/_validators.py"}], "after_files": [{"content": "\"\"\"Validator functions for standard library types.\n\nImport of this module is deferred since it contains imports of many standard library modules.\n\"\"\"\n\nfrom __future__ import annotations as _annotations\n\nimport re\nimport typing\nfrom ipaddress import IPv4Address, IPv4Interface, IPv4Network, IPv6Address, IPv6Interface, IPv6Network\nfrom typing import Any\n\nfrom pydantic_core import PydanticCustomError, core_schema\nfrom pydantic_core._pydantic_core import PydanticKnownError\n\n\ndef sequence_validator(\n __input_value: typing.Sequence[Any],\n validator: core_schema.ValidatorFunctionWrapHandler,\n) -> typing.Sequence[Any]:\n \"\"\"Validator for `Sequence` types, isinstance(v, Sequence) has already been called.\"\"\"\n value_type = type(__input_value)\n\n # We don't accept any plain string as a sequence\n # Relevant issue: https://github.com/pydantic/pydantic/issues/5595\n if issubclass(value_type, (str, bytes)):\n raise PydanticCustomError(\n 'sequence_str',\n \"'{type_name}' instances are not allowed as a Sequence value\",\n {'type_name': value_type.__name__},\n )\n\n v_list = validator(__input_value)\n\n # the rest of the logic is just 
re-creating the original type from `v_list`\n if value_type == list:\n return v_list\n elif issubclass(value_type, range):\n # return the list as we probably can't re-create the range\n return v_list\n else:\n # best guess at how to re-create the original type, more custom construction logic might be required\n return value_type(v_list) # type: ignore[call-arg]\n\n\ndef import_string(value: Any) -> Any:\n if isinstance(value, str):\n try:\n return _import_string_logic(value)\n except ImportError as e:\n raise PydanticCustomError('import_error', 'Invalid python path: {error}', {'error': str(e)})\n else:\n # otherwise we just return the value and let the next validator do the rest of the work\n return value\n\n\ndef _import_string_logic(dotted_path: str) -> Any:\n \"\"\"Inspired by uvicorn \u2014 dotted paths should include a colon before the final item if that item is not a module.\n (This is necessary to distinguish between a submodule and an attribute when there is a conflict.).\n\n If the dotted path does not include a colon and the final item is not a valid module, importing as an attribute\n rather than a submodule will be attempted automatically.\n\n So, for example, the following values of `dotted_path` result in the following returned values:\n * 'collections': <module 'collections'>\n * 'collections.abc': <module 'collections.abc'>\n * 'collections.abc:Mapping': <class 'collections.abc.Mapping'>\n * `collections.abc.Mapping`: <class 'collections.abc.Mapping'> (though this is a bit slower than the previous line)\n\n An error will be raised under any of the following scenarios:\n * `dotted_path` contains more than one colon (e.g., 'collections:abc:Mapping')\n * the substring of `dotted_path` before the colon is not a valid module in the environment (e.g., '123:Mapping')\n * the substring of `dotted_path` after the colon is not an attribute of the module (e.g., 'collections:abc123')\n \"\"\"\n from importlib import import_module\n\n components = dotted_path.strip().split(':')\n if len(components) > 2:\n raise ImportError(f\"Import strings should have at most one ':'; received {dotted_path!r}\")\n\n module_path = components[0]\n if not module_path:\n raise ImportError(f'Import strings should have a nonempty module name; received {dotted_path!r}')\n\n try:\n module = import_module(module_path)\n except ModuleNotFoundError as e:\n if '.' 
in module_path:\n # Check if it would be valid if the final item was separated from its module with a `:`\n maybe_module_path, maybe_attribute = dotted_path.strip().rsplit('.', 1)\n try:\n return _import_string_logic(f'{maybe_module_path}:{maybe_attribute}')\n except ImportError:\n pass\n raise ImportError(f'No module named {module_path!r}') from e\n raise e\n\n if len(components) > 1:\n attribute = components[1]\n try:\n return getattr(module, attribute)\n except AttributeError as e:\n raise ImportError(f'cannot import name {attribute!r} from {module_path!r}') from e\n else:\n return module\n\n\ndef pattern_either_validator(__input_value: Any) -> typing.Pattern[Any]:\n if isinstance(__input_value, typing.Pattern):\n return __input_value # type: ignore\n elif isinstance(__input_value, (str, bytes)):\n # todo strict mode\n return compile_pattern(__input_value) # type: ignore\n else:\n raise PydanticCustomError('pattern_type', 'Input should be a valid pattern')\n\n\ndef pattern_str_validator(__input_value: Any) -> typing.Pattern[str]:\n if isinstance(__input_value, typing.Pattern):\n if isinstance(__input_value.pattern, str):\n return __input_value\n else:\n raise PydanticCustomError('pattern_str_type', 'Input should be a string pattern')\n elif isinstance(__input_value, str):\n return compile_pattern(__input_value)\n elif isinstance(__input_value, bytes):\n raise PydanticCustomError('pattern_str_type', 'Input should be a string pattern')\n else:\n raise PydanticCustomError('pattern_type', 'Input should be a valid pattern')\n\n\ndef pattern_bytes_validator(__input_value: Any) -> typing.Pattern[bytes]:\n if isinstance(__input_value, typing.Pattern):\n if isinstance(__input_value.pattern, bytes):\n return __input_value\n else:\n raise PydanticCustomError('pattern_bytes_type', 'Input should be a bytes pattern')\n elif isinstance(__input_value, bytes):\n return compile_pattern(__input_value)\n elif isinstance(__input_value, str):\n raise PydanticCustomError('pattern_bytes_type', 'Input should be a bytes pattern')\n else:\n raise PydanticCustomError('pattern_type', 'Input should be a valid pattern')\n\n\nPatternType = typing.TypeVar('PatternType', str, bytes)\n\n\ndef compile_pattern(pattern: PatternType) -> typing.Pattern[PatternType]:\n try:\n return re.compile(pattern)\n except re.error:\n raise PydanticCustomError('pattern_regex', 'Input should be a valid regular expression')\n\n\ndef ip_v4_address_validator(__input_value: Any) -> IPv4Address:\n if isinstance(__input_value, IPv4Address):\n return __input_value\n\n try:\n return IPv4Address(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v4_address', 'Input is not a valid IPv4 address')\n\n\ndef ip_v6_address_validator(__input_value: Any) -> IPv6Address:\n if isinstance(__input_value, IPv6Address):\n return __input_value\n\n try:\n return IPv6Address(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v6_address', 'Input is not a valid IPv6 address')\n\n\ndef ip_v4_network_validator(__input_value: Any) -> IPv4Network:\n \"\"\"Assume IPv4Network initialised with a default `strict` argument.\n\n See more:\n https://docs.python.org/library/ipaddress.html#ipaddress.IPv4Network\n \"\"\"\n if isinstance(__input_value, IPv4Network):\n return __input_value\n\n try:\n return IPv4Network(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v4_network', 'Input is not a valid IPv4 network')\n\n\ndef ip_v6_network_validator(__input_value: Any) -> IPv6Network:\n \"\"\"Assume IPv6Network initialised with a 
default `strict` argument.\n\n See more:\n https://docs.python.org/library/ipaddress.html#ipaddress.IPv6Network\n \"\"\"\n if isinstance(__input_value, IPv6Network):\n return __input_value\n\n try:\n return IPv6Network(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v6_network', 'Input is not a valid IPv6 network')\n\n\ndef ip_v4_interface_validator(__input_value: Any) -> IPv4Interface:\n if isinstance(__input_value, IPv4Interface):\n return __input_value\n\n try:\n return IPv4Interface(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v4_interface', 'Input is not a valid IPv4 interface')\n\n\ndef ip_v6_interface_validator(__input_value: Any) -> IPv6Interface:\n if isinstance(__input_value, IPv6Interface):\n return __input_value\n\n try:\n return IPv6Interface(__input_value)\n except ValueError:\n raise PydanticCustomError('ip_v6_interface', 'Input is not a valid IPv6 interface')\n\n\ndef greater_than_validator(x: Any, gt: Any) -> Any:\n if not (x > gt):\n raise PydanticKnownError('greater_than', {'gt': str(gt)})\n return x\n\n\ndef greater_than_or_equal_validator(x: Any, ge: Any) -> Any:\n if not (x >= ge):\n raise PydanticKnownError('greater_than_equal', {'ge': str(ge)})\n return x\n\n\ndef less_than_validator(x: Any, lt: Any) -> Any:\n if not (x < lt):\n raise PydanticKnownError('less_than', {'lt': str(lt)})\n return x\n\n\ndef less_than_or_equal_validator(x: Any, le: Any) -> Any:\n if not (x <= le):\n raise PydanticKnownError('less_than_equal', {'le': str(le)})\n return x\n\n\ndef multiple_of_validator(x: Any, multiple_of: Any) -> Any:\n if not (x % multiple_of == 0):\n raise PydanticKnownError('multiple_of', {'multiple_of': multiple_of})\n return x\n\n\ndef min_length_validator(x: Any, min_length: Any) -> Any:\n if not (len(x) >= min_length):\n raise PydanticKnownError(\n 'too_short',\n {'field_type': 'Value', 'min_length': min_length, 'actual_length': len(x)},\n )\n return x\n\n\ndef max_length_validator(x: Any, max_length: Any) -> Any:\n if len(x) > max_length:\n raise PydanticKnownError(\n 'too_long',\n {'field_type': 'Value', 'max_length': max_length, 'actual_length': len(x)},\n )\n return x\n", "path": "pydantic/_internal/_validators.py"}]}
| 3,762 | 132 |
gh_patches_debug_17623
|
rasdani/github-patches
|
git_diff
|
pypa__pip-5923
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix/5147
Suppress the output of the stack trace when `pip install` is called with an invalid requirement.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pip/_internal/req/constructors.py`
Content:
```
1 """Backing implementation for InstallRequirement's various constructors
2
3 The idea here is that these formed a major chunk of InstallRequirement's size
4 so, moving them and support code dedicated to them outside of that class
5 helps creates for better understandability for the rest of the code.
6
7 These are meant to be used elsewhere within pip to create instances of
8 InstallRequirement.
9 """
10
11 import logging
12 import os
13 import re
14 import traceback
15
16 from pip._vendor.packaging.markers import Marker
17 from pip._vendor.packaging.requirements import InvalidRequirement, Requirement
18 from pip._vendor.packaging.specifiers import Specifier
19 from pip._vendor.pkg_resources import RequirementParseError, parse_requirements
20
21 from pip._internal.download import (
22 is_archive_file, is_url, path_to_url, url_to_path,
23 )
24 from pip._internal.exceptions import InstallationError
25 from pip._internal.models.index import PyPI, TestPyPI
26 from pip._internal.models.link import Link
27 from pip._internal.req.req_install import InstallRequirement
28 from pip._internal.utils.misc import is_installable_dir
29 from pip._internal.vcs import vcs
30 from pip._internal.wheel import Wheel
31
32 __all__ = [
33 "install_req_from_editable", "install_req_from_line",
34 "parse_editable"
35 ]
36
37 logger = logging.getLogger(__name__)
38 operators = Specifier._operators.keys()
39
40
41 def _strip_extras(path):
42 m = re.match(r'^(.+)(\[[^\]]+\])$', path)
43 extras = None
44 if m:
45 path_no_extras = m.group(1)
46 extras = m.group(2)
47 else:
48 path_no_extras = path
49
50 return path_no_extras, extras
51
52
53 def parse_editable(editable_req):
54 """Parses an editable requirement into:
55 - a requirement name
56 - an URL
57 - extras
58 - editable options
59 Accepted requirements:
60 svn+http://blahblah@rev#egg=Foobar[baz]&subdirectory=version_subdir
61 .[some_extra]
62 """
63
64 url = editable_req
65
66 # If a file path is specified with extras, strip off the extras.
67 url_no_extras, extras = _strip_extras(url)
68
69 if os.path.isdir(url_no_extras):
70 if not os.path.exists(os.path.join(url_no_extras, 'setup.py')):
71 raise InstallationError(
72 "Directory %r is not installable. File 'setup.py' not found." %
73 url_no_extras
74 )
75 # Treating it as code that has already been checked out
76 url_no_extras = path_to_url(url_no_extras)
77
78 if url_no_extras.lower().startswith('file:'):
79 package_name = Link(url_no_extras).egg_fragment
80 if extras:
81 return (
82 package_name,
83 url_no_extras,
84 Requirement("placeholder" + extras.lower()).extras,
85 )
86 else:
87 return package_name, url_no_extras, None
88
89 for version_control in vcs:
90 if url.lower().startswith('%s:' % version_control):
91 url = '%s+%s' % (version_control, url)
92 break
93
94 if '+' not in url:
95 raise InstallationError(
96 '%s should either be a path to a local project or a VCS url '
97 'beginning with svn+, git+, hg+, or bzr+' %
98 editable_req
99 )
100
101 vc_type = url.split('+', 1)[0].lower()
102
103 if not vcs.get_backend(vc_type):
104 error_message = 'For --editable=%s only ' % editable_req + \
105 ', '.join([backend.name + '+URL' for backend in vcs.backends]) + \
106 ' is currently supported'
107 raise InstallationError(error_message)
108
109 package_name = Link(url).egg_fragment
110 if not package_name:
111 raise InstallationError(
112 "Could not detect requirement name for '%s', please specify one "
113 "with #egg=your_package_name" % editable_req
114 )
115 return package_name, url, None
116
117
118 def deduce_helpful_msg(req):
119 """Returns helpful msg in case requirements file does not exist,
120 or cannot be parsed.
121
122 :params req: Requirements file path
123 """
124 msg = ""
125 if os.path.exists(req):
126 msg = " It does exist."
127 # Try to parse and check if it is a requirements file.
128 try:
129 with open(req, 'r') as fp:
130 # parse first line only
131 next(parse_requirements(fp.read()))
132 msg += " The argument you provided " + \
133 "(%s) appears to be a" % (req) + \
134 " requirements file. If that is the" + \
135 " case, use the '-r' flag to install" + \
136 " the packages specified within it."
137 except RequirementParseError:
138 logger.debug("Cannot parse '%s' as requirements \
139 file" % (req), exc_info=1)
140 else:
141 msg += " File '%s' does not exist." % (req)
142 return msg
143
144
145 # ---- The actual constructors follow ----
146
147
148 def install_req_from_editable(
149 editable_req, comes_from=None, isolated=False, options=None,
150 wheel_cache=None, constraint=False
151 ):
152 name, url, extras_override = parse_editable(editable_req)
153 if url.startswith('file:'):
154 source_dir = url_to_path(url)
155 else:
156 source_dir = None
157
158 if name is not None:
159 try:
160 req = Requirement(name)
161 except InvalidRequirement:
162 raise InstallationError("Invalid requirement: '%s'" % name)
163 else:
164 req = None
165 return InstallRequirement(
166 req, comes_from, source_dir=source_dir,
167 editable=True,
168 link=Link(url),
169 constraint=constraint,
170 isolated=isolated,
171 options=options if options else {},
172 wheel_cache=wheel_cache,
173 extras=extras_override or (),
174 )
175
176
177 def install_req_from_line(
178 name, comes_from=None, isolated=False, options=None, wheel_cache=None,
179 constraint=False
180 ):
181 """Creates an InstallRequirement from a name, which might be a
182 requirement, directory containing 'setup.py', filename, or URL.
183 """
184 if is_url(name):
185 marker_sep = '; '
186 else:
187 marker_sep = ';'
188 if marker_sep in name:
189 name, markers = name.split(marker_sep, 1)
190 markers = markers.strip()
191 if not markers:
192 markers = None
193 else:
194 markers = Marker(markers)
195 else:
196 markers = None
197 name = name.strip()
198 req = None
199 path = os.path.normpath(os.path.abspath(name))
200 link = None
201 extras = None
202
203 if is_url(name):
204 link = Link(name)
205 else:
206 p, extras = _strip_extras(path)
207 looks_like_dir = os.path.isdir(p) and (
208 os.path.sep in name or
209 (os.path.altsep is not None and os.path.altsep in name) or
210 name.startswith('.')
211 )
212 if looks_like_dir:
213 if not is_installable_dir(p):
214 raise InstallationError(
215 "Directory %r is not installable. Neither 'setup.py' "
216 "nor 'pyproject.toml' found." % name
217 )
218 link = Link(path_to_url(p))
219 elif is_archive_file(p):
220 if not os.path.isfile(p):
221 logger.warning(
222 'Requirement %r looks like a filename, but the '
223 'file does not exist',
224 name
225 )
226 link = Link(path_to_url(p))
227
228 # it's a local file, dir, or url
229 if link:
230 # Handle relative file URLs
231 if link.scheme == 'file' and re.search(r'\.\./', link.url):
232 link = Link(
233 path_to_url(os.path.normpath(os.path.abspath(link.path))))
234 # wheel file
235 if link.is_wheel:
236 wheel = Wheel(link.filename) # can raise InvalidWheelFilename
237 req = "%s==%s" % (wheel.name, wheel.version)
238 else:
239 # set the req to the egg fragment. when it's not there, this
240 # will become an 'unnamed' requirement
241 req = link.egg_fragment
242
243 # a requirement specifier
244 else:
245 req = name
246
247 if extras:
248 extras = Requirement("placeholder" + extras.lower()).extras
249 else:
250 extras = ()
251 if req is not None:
252 try:
253 req = Requirement(req)
254 except InvalidRequirement:
255 if os.path.sep in req:
256 add_msg = "It looks like a path."
257 add_msg += deduce_helpful_msg(req)
258 elif '=' in req and not any(op in req for op in operators):
259 add_msg = "= is not a valid operator. Did you mean == ?"
260 else:
261 add_msg = traceback.format_exc()
262 raise InstallationError(
263 "Invalid requirement: '%s'\n%s" % (req, add_msg)
264 )
265
266 return InstallRequirement(
267 req, comes_from, link=link, markers=markers,
268 isolated=isolated,
269 options=options if options else {},
270 wheel_cache=wheel_cache,
271 constraint=constraint,
272 extras=extras,
273 )
274
275
276 def install_req_from_req(
277 req, comes_from=None, isolated=False, wheel_cache=None
278 ):
279 try:
280 req = Requirement(req)
281 except InvalidRequirement:
282 raise InstallationError("Invalid requirement: '%s'" % req)
283
284 domains_not_allowed = [
285 PyPI.file_storage_domain,
286 TestPyPI.file_storage_domain,
287 ]
288 if req.url and comes_from.link.netloc in domains_not_allowed:
289 # Explicitly disallow pypi packages that depend on external urls
290 raise InstallationError(
291 "Packages installed from PyPI cannot depend on packages "
292 "which are not also hosted on PyPI.\n"
293 "%s depends on %s " % (comes_from.name, req)
294 )
295
296 return InstallRequirement(
297 req, comes_from, isolated=isolated, wheel_cache=wheel_cache
298 )
299
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pip/_internal/req/constructors.py b/src/pip/_internal/req/constructors.py
--- a/src/pip/_internal/req/constructors.py
+++ b/src/pip/_internal/req/constructors.py
@@ -11,7 +11,6 @@
import logging
import os
import re
-import traceback
from pip._vendor.packaging.markers import Marker
from pip._vendor.packaging.requirements import InvalidRequirement, Requirement
@@ -258,7 +257,7 @@
elif '=' in req and not any(op in req for op in operators):
add_msg = "= is not a valid operator. Did you mean == ?"
else:
- add_msg = traceback.format_exc()
+ add_msg = ""
raise InstallationError(
"Invalid requirement: '%s'\n%s" % (req, add_msg)
)
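The intent of that change can be sketched with a short, self-contained snippet (the exception class below is a stand-in, not pip's real one): the parser failure is still surfaced, but as a concise message rather than an embedded stack trace.
```python
class InstallationError(Exception):
    """Stand-in for pip's InstallationError, used only in this sketch."""

def report_invalid_requirement(req, add_msg=""):
    # In the pre-patch code the fallback add_msg was traceback.format_exc(), so
    # users saw a full stack trace; after the patch the fallback is an empty
    # string, leaving only the concise reason (or a targeted hint such as
    # "= is not a valid operator. Did you mean == ?").
    raise InstallationError("Invalid requirement: '%s'\n%s" % (req, add_msg))
```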
|
{"golden_diff": "diff --git a/src/pip/_internal/req/constructors.py b/src/pip/_internal/req/constructors.py\n--- a/src/pip/_internal/req/constructors.py\n+++ b/src/pip/_internal/req/constructors.py\n@@ -11,7 +11,6 @@\n import logging\n import os\n import re\n-import traceback\n \n from pip._vendor.packaging.markers import Marker\n from pip._vendor.packaging.requirements import InvalidRequirement, Requirement\n@@ -258,7 +257,7 @@\n elif '=' in req and not any(op in req for op in operators):\n add_msg = \"= is not a valid operator. Did you mean == ?\"\n else:\n- add_msg = traceback.format_exc()\n+ add_msg = \"\"\n raise InstallationError(\n \"Invalid requirement: '%s'\\n%s\" % (req, add_msg)\n )\n", "issue": "Fix/5147\nSuppress the output of the stack trace when `pip install` is called with an invalid requirement.\r\n\n", "before_files": [{"content": "\"\"\"Backing implementation for InstallRequirement's various constructors\n\nThe idea here is that these formed a major chunk of InstallRequirement's size\nso, moving them and support code dedicated to them outside of that class\nhelps creates for better understandability for the rest of the code.\n\nThese are meant to be used elsewhere within pip to create instances of\nInstallRequirement.\n\"\"\"\n\nimport logging\nimport os\nimport re\nimport traceback\n\nfrom pip._vendor.packaging.markers import Marker\nfrom pip._vendor.packaging.requirements import InvalidRequirement, Requirement\nfrom pip._vendor.packaging.specifiers import Specifier\nfrom pip._vendor.pkg_resources import RequirementParseError, parse_requirements\n\nfrom pip._internal.download import (\n is_archive_file, is_url, path_to_url, url_to_path,\n)\nfrom pip._internal.exceptions import InstallationError\nfrom pip._internal.models.index import PyPI, TestPyPI\nfrom pip._internal.models.link import Link\nfrom pip._internal.req.req_install import InstallRequirement\nfrom pip._internal.utils.misc import is_installable_dir\nfrom pip._internal.vcs import vcs\nfrom pip._internal.wheel import Wheel\n\n__all__ = [\n \"install_req_from_editable\", \"install_req_from_line\",\n \"parse_editable\"\n]\n\nlogger = logging.getLogger(__name__)\noperators = Specifier._operators.keys()\n\n\ndef _strip_extras(path):\n m = re.match(r'^(.+)(\\[[^\\]]+\\])$', path)\n extras = None\n if m:\n path_no_extras = m.group(1)\n extras = m.group(2)\n else:\n path_no_extras = path\n\n return path_no_extras, extras\n\n\ndef parse_editable(editable_req):\n \"\"\"Parses an editable requirement into:\n - a requirement name\n - an URL\n - extras\n - editable options\n Accepted requirements:\n svn+http://blahblah@rev#egg=Foobar[baz]&subdirectory=version_subdir\n .[some_extra]\n \"\"\"\n\n url = editable_req\n\n # If a file path is specified with extras, strip off the extras.\n url_no_extras, extras = _strip_extras(url)\n\n if os.path.isdir(url_no_extras):\n if not os.path.exists(os.path.join(url_no_extras, 'setup.py')):\n raise InstallationError(\n \"Directory %r is not installable. 
File 'setup.py' not found.\" %\n url_no_extras\n )\n # Treating it as code that has already been checked out\n url_no_extras = path_to_url(url_no_extras)\n\n if url_no_extras.lower().startswith('file:'):\n package_name = Link(url_no_extras).egg_fragment\n if extras:\n return (\n package_name,\n url_no_extras,\n Requirement(\"placeholder\" + extras.lower()).extras,\n )\n else:\n return package_name, url_no_extras, None\n\n for version_control in vcs:\n if url.lower().startswith('%s:' % version_control):\n url = '%s+%s' % (version_control, url)\n break\n\n if '+' not in url:\n raise InstallationError(\n '%s should either be a path to a local project or a VCS url '\n 'beginning with svn+, git+, hg+, or bzr+' %\n editable_req\n )\n\n vc_type = url.split('+', 1)[0].lower()\n\n if not vcs.get_backend(vc_type):\n error_message = 'For --editable=%s only ' % editable_req + \\\n ', '.join([backend.name + '+URL' for backend in vcs.backends]) + \\\n ' is currently supported'\n raise InstallationError(error_message)\n\n package_name = Link(url).egg_fragment\n if not package_name:\n raise InstallationError(\n \"Could not detect requirement name for '%s', please specify one \"\n \"with #egg=your_package_name\" % editable_req\n )\n return package_name, url, None\n\n\ndef deduce_helpful_msg(req):\n \"\"\"Returns helpful msg in case requirements file does not exist,\n or cannot be parsed.\n\n :params req: Requirements file path\n \"\"\"\n msg = \"\"\n if os.path.exists(req):\n msg = \" It does exist.\"\n # Try to parse and check if it is a requirements file.\n try:\n with open(req, 'r') as fp:\n # parse first line only\n next(parse_requirements(fp.read()))\n msg += \" The argument you provided \" + \\\n \"(%s) appears to be a\" % (req) + \\\n \" requirements file. If that is the\" + \\\n \" case, use the '-r' flag to install\" + \\\n \" the packages specified within it.\"\n except RequirementParseError:\n logger.debug(\"Cannot parse '%s' as requirements \\\n file\" % (req), exc_info=1)\n else:\n msg += \" File '%s' does not exist.\" % (req)\n return msg\n\n\n# ---- The actual constructors follow ----\n\n\ndef install_req_from_editable(\n editable_req, comes_from=None, isolated=False, options=None,\n wheel_cache=None, constraint=False\n):\n name, url, extras_override = parse_editable(editable_req)\n if url.startswith('file:'):\n source_dir = url_to_path(url)\n else:\n source_dir = None\n\n if name is not None:\n try:\n req = Requirement(name)\n except InvalidRequirement:\n raise InstallationError(\"Invalid requirement: '%s'\" % name)\n else:\n req = None\n return InstallRequirement(\n req, comes_from, source_dir=source_dir,\n editable=True,\n link=Link(url),\n constraint=constraint,\n isolated=isolated,\n options=options if options else {},\n wheel_cache=wheel_cache,\n extras=extras_override or (),\n )\n\n\ndef install_req_from_line(\n name, comes_from=None, isolated=False, options=None, wheel_cache=None,\n constraint=False\n):\n \"\"\"Creates an InstallRequirement from a name, which might be a\n requirement, directory containing 'setup.py', filename, or URL.\n \"\"\"\n if is_url(name):\n marker_sep = '; '\n else:\n marker_sep = ';'\n if marker_sep in name:\n name, markers = name.split(marker_sep, 1)\n markers = markers.strip()\n if not markers:\n markers = None\n else:\n markers = Marker(markers)\n else:\n markers = None\n name = name.strip()\n req = None\n path = os.path.normpath(os.path.abspath(name))\n link = None\n extras = None\n\n if is_url(name):\n link = Link(name)\n else:\n p, extras = 
_strip_extras(path)\n looks_like_dir = os.path.isdir(p) and (\n os.path.sep in name or\n (os.path.altsep is not None and os.path.altsep in name) or\n name.startswith('.')\n )\n if looks_like_dir:\n if not is_installable_dir(p):\n raise InstallationError(\n \"Directory %r is not installable. Neither 'setup.py' \"\n \"nor 'pyproject.toml' found.\" % name\n )\n link = Link(path_to_url(p))\n elif is_archive_file(p):\n if not os.path.isfile(p):\n logger.warning(\n 'Requirement %r looks like a filename, but the '\n 'file does not exist',\n name\n )\n link = Link(path_to_url(p))\n\n # it's a local file, dir, or url\n if link:\n # Handle relative file URLs\n if link.scheme == 'file' and re.search(r'\\.\\./', link.url):\n link = Link(\n path_to_url(os.path.normpath(os.path.abspath(link.path))))\n # wheel file\n if link.is_wheel:\n wheel = Wheel(link.filename) # can raise InvalidWheelFilename\n req = \"%s==%s\" % (wheel.name, wheel.version)\n else:\n # set the req to the egg fragment. when it's not there, this\n # will become an 'unnamed' requirement\n req = link.egg_fragment\n\n # a requirement specifier\n else:\n req = name\n\n if extras:\n extras = Requirement(\"placeholder\" + extras.lower()).extras\n else:\n extras = ()\n if req is not None:\n try:\n req = Requirement(req)\n except InvalidRequirement:\n if os.path.sep in req:\n add_msg = \"It looks like a path.\"\n add_msg += deduce_helpful_msg(req)\n elif '=' in req and not any(op in req for op in operators):\n add_msg = \"= is not a valid operator. Did you mean == ?\"\n else:\n add_msg = traceback.format_exc()\n raise InstallationError(\n \"Invalid requirement: '%s'\\n%s\" % (req, add_msg)\n )\n\n return InstallRequirement(\n req, comes_from, link=link, markers=markers,\n isolated=isolated,\n options=options if options else {},\n wheel_cache=wheel_cache,\n constraint=constraint,\n extras=extras,\n )\n\n\ndef install_req_from_req(\n req, comes_from=None, isolated=False, wheel_cache=None\n):\n try:\n req = Requirement(req)\n except InvalidRequirement:\n raise InstallationError(\"Invalid requirement: '%s'\" % req)\n\n domains_not_allowed = [\n PyPI.file_storage_domain,\n TestPyPI.file_storage_domain,\n ]\n if req.url and comes_from.link.netloc in domains_not_allowed:\n # Explicitly disallow pypi packages that depend on external urls\n raise InstallationError(\n \"Packages installed from PyPI cannot depend on packages \"\n \"which are not also hosted on PyPI.\\n\"\n \"%s depends on %s \" % (comes_from.name, req)\n )\n\n return InstallRequirement(\n req, comes_from, isolated=isolated, wheel_cache=wheel_cache\n )\n", "path": "src/pip/_internal/req/constructors.py"}], "after_files": [{"content": "\"\"\"Backing implementation for InstallRequirement's various constructors\n\nThe idea here is that these formed a major chunk of InstallRequirement's size\nso, moving them and support code dedicated to them outside of that class\nhelps creates for better understandability for the rest of the code.\n\nThese are meant to be used elsewhere within pip to create instances of\nInstallRequirement.\n\"\"\"\n\nimport logging\nimport os\nimport re\n\nfrom pip._vendor.packaging.markers import Marker\nfrom pip._vendor.packaging.requirements import InvalidRequirement, Requirement\nfrom pip._vendor.packaging.specifiers import Specifier\nfrom pip._vendor.pkg_resources import RequirementParseError, parse_requirements\n\nfrom pip._internal.download import (\n is_archive_file, is_url, path_to_url, url_to_path,\n)\nfrom pip._internal.exceptions import InstallationError\nfrom 
pip._internal.models.index import PyPI, TestPyPI\nfrom pip._internal.models.link import Link\nfrom pip._internal.req.req_install import InstallRequirement\nfrom pip._internal.utils.misc import is_installable_dir\nfrom pip._internal.vcs import vcs\nfrom pip._internal.wheel import Wheel\n\n__all__ = [\n \"install_req_from_editable\", \"install_req_from_line\",\n \"parse_editable\"\n]\n\nlogger = logging.getLogger(__name__)\noperators = Specifier._operators.keys()\n\n\ndef _strip_extras(path):\n m = re.match(r'^(.+)(\\[[^\\]]+\\])$', path)\n extras = None\n if m:\n path_no_extras = m.group(1)\n extras = m.group(2)\n else:\n path_no_extras = path\n\n return path_no_extras, extras\n\n\ndef parse_editable(editable_req):\n \"\"\"Parses an editable requirement into:\n - a requirement name\n - an URL\n - extras\n - editable options\n Accepted requirements:\n svn+http://blahblah@rev#egg=Foobar[baz]&subdirectory=version_subdir\n .[some_extra]\n \"\"\"\n\n url = editable_req\n\n # If a file path is specified with extras, strip off the extras.\n url_no_extras, extras = _strip_extras(url)\n\n if os.path.isdir(url_no_extras):\n if not os.path.exists(os.path.join(url_no_extras, 'setup.py')):\n raise InstallationError(\n \"Directory %r is not installable. File 'setup.py' not found.\" %\n url_no_extras\n )\n # Treating it as code that has already been checked out\n url_no_extras = path_to_url(url_no_extras)\n\n if url_no_extras.lower().startswith('file:'):\n package_name = Link(url_no_extras).egg_fragment\n if extras:\n return (\n package_name,\n url_no_extras,\n Requirement(\"placeholder\" + extras.lower()).extras,\n )\n else:\n return package_name, url_no_extras, None\n\n for version_control in vcs:\n if url.lower().startswith('%s:' % version_control):\n url = '%s+%s' % (version_control, url)\n break\n\n if '+' not in url:\n raise InstallationError(\n '%s should either be a path to a local project or a VCS url '\n 'beginning with svn+, git+, hg+, or bzr+' %\n editable_req\n )\n\n vc_type = url.split('+', 1)[0].lower()\n\n if not vcs.get_backend(vc_type):\n error_message = 'For --editable=%s only ' % editable_req + \\\n ', '.join([backend.name + '+URL' for backend in vcs.backends]) + \\\n ' is currently supported'\n raise InstallationError(error_message)\n\n package_name = Link(url).egg_fragment\n if not package_name:\n raise InstallationError(\n \"Could not detect requirement name for '%s', please specify one \"\n \"with #egg=your_package_name\" % editable_req\n )\n return package_name, url, None\n\n\ndef deduce_helpful_msg(req):\n \"\"\"Returns helpful msg in case requirements file does not exist,\n or cannot be parsed.\n\n :params req: Requirements file path\n \"\"\"\n msg = \"\"\n if os.path.exists(req):\n msg = \" It does exist.\"\n # Try to parse and check if it is a requirements file.\n try:\n with open(req, 'r') as fp:\n # parse first line only\n next(parse_requirements(fp.read()))\n msg += \" The argument you provided \" + \\\n \"(%s) appears to be a\" % (req) + \\\n \" requirements file. 
If that is the\" + \\\n \" case, use the '-r' flag to install\" + \\\n \" the packages specified within it.\"\n except RequirementParseError:\n logger.debug(\"Cannot parse '%s' as requirements \\\n file\" % (req), exc_info=1)\n else:\n msg += \" File '%s' does not exist.\" % (req)\n return msg\n\n\n# ---- The actual constructors follow ----\n\n\ndef install_req_from_editable(\n editable_req, comes_from=None, isolated=False, options=None,\n wheel_cache=None, constraint=False\n):\n name, url, extras_override = parse_editable(editable_req)\n if url.startswith('file:'):\n source_dir = url_to_path(url)\n else:\n source_dir = None\n\n if name is not None:\n try:\n req = Requirement(name)\n except InvalidRequirement:\n raise InstallationError(\"Invalid requirement: '%s'\" % name)\n else:\n req = None\n return InstallRequirement(\n req, comes_from, source_dir=source_dir,\n editable=True,\n link=Link(url),\n constraint=constraint,\n isolated=isolated,\n options=options if options else {},\n wheel_cache=wheel_cache,\n extras=extras_override or (),\n )\n\n\ndef install_req_from_line(\n name, comes_from=None, isolated=False, options=None, wheel_cache=None,\n constraint=False\n):\n \"\"\"Creates an InstallRequirement from a name, which might be a\n requirement, directory containing 'setup.py', filename, or URL.\n \"\"\"\n if is_url(name):\n marker_sep = '; '\n else:\n marker_sep = ';'\n if marker_sep in name:\n name, markers = name.split(marker_sep, 1)\n markers = markers.strip()\n if not markers:\n markers = None\n else:\n markers = Marker(markers)\n else:\n markers = None\n name = name.strip()\n req = None\n path = os.path.normpath(os.path.abspath(name))\n link = None\n extras = None\n\n if is_url(name):\n link = Link(name)\n else:\n p, extras = _strip_extras(path)\n looks_like_dir = os.path.isdir(p) and (\n os.path.sep in name or\n (os.path.altsep is not None and os.path.altsep in name) or\n name.startswith('.')\n )\n if looks_like_dir:\n if not is_installable_dir(p):\n raise InstallationError(\n \"Directory %r is not installable. Neither 'setup.py' \"\n \"nor 'pyproject.toml' found.\" % name\n )\n link = Link(path_to_url(p))\n elif is_archive_file(p):\n if not os.path.isfile(p):\n logger.warning(\n 'Requirement %r looks like a filename, but the '\n 'file does not exist',\n name\n )\n link = Link(path_to_url(p))\n\n # it's a local file, dir, or url\n if link:\n # Handle relative file URLs\n if link.scheme == 'file' and re.search(r'\\.\\./', link.url):\n link = Link(\n path_to_url(os.path.normpath(os.path.abspath(link.path))))\n # wheel file\n if link.is_wheel:\n wheel = Wheel(link.filename) # can raise InvalidWheelFilename\n req = \"%s==%s\" % (wheel.name, wheel.version)\n else:\n # set the req to the egg fragment. when it's not there, this\n # will become an 'unnamed' requirement\n req = link.egg_fragment\n\n # a requirement specifier\n else:\n req = name\n\n if extras:\n extras = Requirement(\"placeholder\" + extras.lower()).extras\n else:\n extras = ()\n if req is not None:\n try:\n req = Requirement(req)\n except InvalidRequirement:\n if os.path.sep in req:\n add_msg = \"It looks like a path.\"\n add_msg += deduce_helpful_msg(req)\n elif '=' in req and not any(op in req for op in operators):\n add_msg = \"= is not a valid operator. 
Did you mean == ?\"\n else:\n add_msg = \"\"\n raise InstallationError(\n \"Invalid requirement: '%s'\\n%s\" % (req, add_msg)\n )\n\n return InstallRequirement(\n req, comes_from, link=link, markers=markers,\n isolated=isolated,\n options=options if options else {},\n wheel_cache=wheel_cache,\n constraint=constraint,\n extras=extras,\n )\n\n\ndef install_req_from_req(\n req, comes_from=None, isolated=False, wheel_cache=None\n):\n try:\n req = Requirement(req)\n except InvalidRequirement:\n raise InstallationError(\"Invalid requirement: '%s'\" % req)\n\n domains_not_allowed = [\n PyPI.file_storage_domain,\n TestPyPI.file_storage_domain,\n ]\n if req.url and comes_from.link.netloc in domains_not_allowed:\n # Explicitly disallow pypi packages that depend on external urls\n raise InstallationError(\n \"Packages installed from PyPI cannot depend on packages \"\n \"which are not also hosted on PyPI.\\n\"\n \"%s depends on %s \" % (comes_from.name, req)\n )\n\n return InstallRequirement(\n req, comes_from, isolated=isolated, wheel_cache=wheel_cache\n )\n", "path": "src/pip/_internal/req/constructors.py"}]}
| 3,244 | 195 |
gh_patches_debug_25485
|
rasdani/github-patches
|
git_diff
|
NVIDIA-Merlin__NVTabular-525
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FEA] apply joint TE on multiple columns after another op
**Is your feature request related to a problem? Please describe.**
We'd like to do the following pipeline with LambdaOp and Target Encoding. 
```
unixReviewTime = ColumnGroup(['unixReviewTime'])
unixReviewTime_toDatetime = unixReviewTime >> (lambda col: cudf.to_datetime(col, unit='s')) >> Rename(postfix="_toDatetime")
unixReviewTime_toDatetime_year = unixReviewTime_toDatetime >> (lambda col: col.dt.year) >> Rename(postfix="_year")
columns_1 = [[['userID'] + unixReviewTime_toDatetime_year]]
te_features = columns_1 >> nvt.ops.TargetEncoding("label", kfold=1, p_smooth=20)
```
However this does not work.
We also tried `columns_1 = ColumnGroup(['user_id']) + unixReviewTime_toDatetime_year`, but this does not give us the joint multi-column TE that we want.
**Describe the solution you'd like**
We want to be able to do joint multi-column TE on `userID` (from the input dataset) and `unixReviewTime_toDatetime_year` that is calculated from a lambda op. So our final output column should be `TE_userID_unixReviewTime_toDatetime_year_label`.
**Additional context**
We are using the new API and a conda environment with the latest NVT main branch.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nvtabular/column_group.py`
Content:
```
1 #
2 # Copyright (c) 2020, NVIDIA CORPORATION.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16 import collections.abc
17
18 from dask.core import flatten
19
20 from nvtabular.ops import LambdaOp, Operator
21
22
23 class ColumnGroup:
24 """A ColumnGroup is a group of columns that you want to apply the same transformations to.
25 ColumnGroup's can be transformed by shifting operators on to them, which returns a new
26 ColumnGroup with the transformations applied. This lets you define a graph of operations
27 that makes up your workflow
28
29 Parameters
30 ----------
31 columns: list of (str or tuple of str)
32 The columns to select from the input Dataset. The elements of this list are strings
33 indicating the column names in most cases, but can also be tuples of strings
34 for feature crosses.
35 """
36
37 def __init__(self, columns):
38 if isinstance(columns, str):
39 self.columns = [columns]
40 else:
41 self.columns = [_convert_col(col) for col in columns]
42 self.parents = []
43 self.children = []
44 self.op = None
45 self.kind = None
46 self.dependencies = None
47
48 def __rshift__(self, operator):
49 """Transforms this ColumnGroup by applying an Operator
50
51 Parameters
52 -----------
53 operators: Operator or callable
54
55 Returns
56 -------
57 ColumnGroup
58 """
59 if isinstance(operator, type) and issubclass(operator, Operator):
60 # handle case where an operator class is passed
61 operator = operator()
62 elif callable(operator):
63 # implicit lambdaop conversion.
64 operator = LambdaOp(operator)
65
66 if not isinstance(operator, Operator):
67 raise ValueError(f"Expected operator or callable, got {operator.__class__}")
68
69 child = ColumnGroup(operator.output_column_names(self.columns))
70 child.parents = [self]
71 self.children.append(child)
72 child.op = operator
73
74 dependencies = operator.dependencies()
75 if dependencies:
76 child.dependencies = set()
77 if not isinstance(dependencies, collections.abc.Sequence):
78 dependencies = [dependencies]
79
80 for dependency in dependencies:
81 if not isinstance(dependency, ColumnGroup):
82 dependency = ColumnGroup(dependency)
83 dependency.children.append(child)
84 child.parents.append(dependency)
85 child.dependencies.add(dependency)
86
87 return child
88
89 def __add__(self, other):
90 """Adds columns from this ColumnGroup with another to return a new ColumnGroup
91
92 Parameters
93 -----------
94 other: ColumnGroup or str or list of str
95
96 Returns
97 -------
98 ColumnGroup
99 """
100 if isinstance(other, str):
101 other = ColumnGroup([other])
102 elif isinstance(other, collections.abc.Sequence):
103 other = ColumnGroup(other)
104
105 # check if there are any columns with the same name in both column groups
106 overlap = set(self.columns).intersection(other.columns)
107 if overlap:
108 raise ValueError(f"duplicate column names found: {overlap}")
109
110 child = ColumnGroup(self.columns + other.columns)
111 child.parents = [self, other]
112 child.kind = "+"
113 self.children.append(child)
114 other.children.append(child)
115 return child
116
117 # handle the "column_name" + ColumnGroup case
118 __radd__ = __add__
119
120 def __sub__(self, other):
121 """Adds columns from this ColumnGroup with another to return a new ColumnGroup
122
123 Parameters
124 -----------
125 other: ColumnGroup or str or list of str
126
127 Returns
128 -------
129 ColumnGroup
130 """
131 if isinstance(other, ColumnGroup):
132 to_remove = set(other.columns)
133 elif isinstance(other, str):
134 to_remove = {other}
135 elif isinstance(other, collections.abc.Sequence):
136 to_remove = set(other)
137 else:
138 raise ValueError(f"Expected ColumnGroup, str, or list of str. Got {other.__class__}")
139 new_columns = [c for c in self.columns if c not in to_remove]
140 child = ColumnGroup(new_columns)
141 child.parents = [self]
142 self.children.append(child)
143 child.kind = f"- {list(to_remove)}"
144 return child
145
146 def __repr__(self):
147 output = " output" if not self.children else ""
148 return f"<ColumnGroup {self.label}{output}>"
149
150 @property
151 def flattened_columns(self):
152 return list(flatten(self.columns, container=tuple))
153
154 @property
155 def input_column_names(self):
156 """ returns the names of columns in the main chain """
157 dependencies = self.dependencies or set()
158 return [
159 col for parent in self.parents for col in parent.columns if parent not in dependencies
160 ]
161
162 @property
163 def label(self):
164 if self.op:
165 return str(self.op.__class__.__name__)
166 elif self.kind:
167 return self.kind
168 elif not self.parents:
169 return f"input cols=[{self._cols_repr}]"
170 else:
171 return "??"
172
173 @property
174 def _cols_repr(self):
175 cols = ", ".join(map(str, self.columns[:3]))
176 if len(self.columns) > 3:
177 cols += "..."
178 return cols
179
180 @property
181 def graph(self):
182 return _to_graphviz(self)
183
184
185 def iter_nodes(nodes):
186 queue = nodes[:]
187 while queue:
188 current = queue.pop()
189 yield current
190 # TODO: deduplicate nodes?
191 for parent in current.parents:
192 queue.append(parent)
193
194
195 def _to_graphviz(column_group):
196 """ converts a columngroup to a GraphViz DiGraph object useful for display in notebooks """
197 from graphviz import Digraph
198
199 column_group = _merge_add_nodes(column_group)
200 graph = Digraph()
201
202 # get all the nodes from parents of this columngroup
203 # and add edges between each of them
204 allnodes = list(set(iter_nodes([column_group])))
205 node_ids = {v: str(k) for k, v in enumerate(allnodes)}
206 for node, nodeid in node_ids.items():
207 graph.node(nodeid, node.label)
208 for parent in node.parents:
209 graph.edge(node_ids[parent], nodeid)
210
211 # add a single 'output' node representing the final state
212 output_node_id = str(len(allnodes))
213 graph.node(output_node_id, f"output cols=[{column_group._cols_repr}]")
214 graph.edge(node_ids[column_group], output_node_id)
215 return graph
216
217
218 def _merge_add_nodes(graph):
219 """ merges repeat '+' nodes, leading to nicer looking outputs """
220 # lets take a copy to avoid mutating the input
221 import copy
222
223 graph = copy.deepcopy(graph)
224
225 queue = [graph]
226 while queue:
227 current = queue.pop()
228 if current.kind == "+":
229 changed = True
230 while changed:
231 changed = False
232 parents = []
233 for i, parent in enumerate(current.parents):
234 if parent.kind == "+" and len(parent.children) == 1:
235 changed = True
236 # disconnect parent, point all the grandparents at current instead
237 parents.extend(parent.parents)
238 for grandparent in parent.parents:
239 grandparent.children = [
240 current if child == parent else child
241 for child in grandparent.children
242 ]
243 else:
244 parents.append(parent)
245 current.parents = parents
246
247 queue.extend(current.parents)
248
249 return graph
250
251
252 def _convert_col(col):
253 if isinstance(col, (str, tuple)):
254 return col
255 elif isinstance(col, list):
256 return tuple(col)
257 else:
258 raise ValueError("Invalid column value for ColumnGroup: %s", col)
259
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nvtabular/column_group.py b/nvtabular/column_group.py
--- a/nvtabular/column_group.py
+++ b/nvtabular/column_group.py
@@ -35,16 +35,36 @@
"""
def __init__(self, columns):
- if isinstance(columns, str):
- self.columns = [columns]
- else:
- self.columns = [_convert_col(col) for col in columns]
self.parents = []
self.children = []
self.op = None
self.kind = None
self.dependencies = None
+ if isinstance(columns, str):
+ columns = [columns]
+
+ # if any of the values we're passed are a columngroup
+ # we have to ourselves as a childnode in the graph.
+ if any(isinstance(col, ColumnGroup) for col in columns):
+ self.columns = []
+ self.kind = "[...]"
+ for col in columns:
+ if not isinstance(col, ColumnGroup):
+ col = ColumnGroup(col)
+ else:
+ # we can't handle nesting arbitrarily deep here
+ # only accept non-nested (str) columns here
+ if any(not isinstance(c, str) for c in col.columns):
+ raise ValueError("Can't handle more than 1 level of nested columns")
+
+ col.children.append(self)
+ self.parents.append(col)
+ self.columns.append(tuple(col.columns))
+
+ else:
+ self.columns = [_convert_col(col) for col in columns]
+
def __rshift__(self, operator):
"""Transforms this ColumnGroup by applying an Operator
|
{"golden_diff": "diff --git a/nvtabular/column_group.py b/nvtabular/column_group.py\n--- a/nvtabular/column_group.py\n+++ b/nvtabular/column_group.py\n@@ -35,16 +35,36 @@\n \"\"\"\n \n def __init__(self, columns):\n- if isinstance(columns, str):\n- self.columns = [columns]\n- else:\n- self.columns = [_convert_col(col) for col in columns]\n self.parents = []\n self.children = []\n self.op = None\n self.kind = None\n self.dependencies = None\n \n+ if isinstance(columns, str):\n+ columns = [columns]\n+\n+ # if any of the values we're passed are a columngroup\n+ # we have to ourselves as a childnode in the graph.\n+ if any(isinstance(col, ColumnGroup) for col in columns):\n+ self.columns = []\n+ self.kind = \"[...]\"\n+ for col in columns:\n+ if not isinstance(col, ColumnGroup):\n+ col = ColumnGroup(col)\n+ else:\n+ # we can't handle nesting arbitrarily deep here\n+ # only accept non-nested (str) columns here\n+ if any(not isinstance(c, str) for c in col.columns):\n+ raise ValueError(\"Can't handle more than 1 level of nested columns\")\n+\n+ col.children.append(self)\n+ self.parents.append(col)\n+ self.columns.append(tuple(col.columns))\n+\n+ else:\n+ self.columns = [_convert_col(col) for col in columns]\n+\n def __rshift__(self, operator):\n \"\"\"Transforms this ColumnGroup by applying an Operator\n", "issue": "[FEA] apply joint TE on multiple columns after another op\n**Is your feature request related to a problem? Please describe.**\r\nWe'd like to the the following pipeline with LambdaOp and Target Encoding. \r\n\r\n```\r\nunixReviewTime = ColumnGroup(['unixReviewTime'])\r\nunixReviewTime_toDatetime = unixReviewTime >> (lambda col: cudf.to_datetime(col, unit='s')) >> Rename(postfix=\"_toDatetime\")\r\nunixReviewTime_toDatetime_year = unixReviewTime_toDatetime >> (lambda col: col.dt.year) >> Rename(postfix=\"_year\")\r\n\r\ncolumns_1 = [[['userID'] + unixReviewTime_toDatetime_year]]\r\nte_features = columns_1 >> nvt.ops.TargetEncoding(\"label\", kfold=1, p_smooth=20)\r\n```\r\nHowever this does not work. \r\n\r\nWe also tried `columns_1 = ColumnGroup(['user_id']) + unixReviewTime_toDatetime_year` but this is not providing us with the joint multi-column TE as we want.\r\n\r\n**Describe the solution you'd like**\r\nWe want to be able to do joint multi-column TE on `userID` (from the input dataset) and `unixReviewTime_toDatetime_year` that is calculated from a lambda op. 
So our final output column should be `TE_userID_unixReviewTime_toDatetime_year_label`.\r\n\r\n**Additional context**\r\nWe are using new api and conda environment with the latest NVT main branch.\r\n\n", "before_files": [{"content": "#\n# Copyright (c) 2020, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport collections.abc\n\nfrom dask.core import flatten\n\nfrom nvtabular.ops import LambdaOp, Operator\n\n\nclass ColumnGroup:\n \"\"\"A ColumnGroup is a group of columns that you want to apply the same transformations to.\n ColumnGroup's can be transformed by shifting operators on to them, which returns a new\n ColumnGroup with the transformations applied. This lets you define a graph of operations\n that makes up your workflow\n\n Parameters\n ----------\n columns: list of (str or tuple of str)\n The columns to select from the input Dataset. The elements of this list are strings\n indicating the column names in most cases, but can also be tuples of strings\n for feature crosses.\n \"\"\"\n\n def __init__(self, columns):\n if isinstance(columns, str):\n self.columns = [columns]\n else:\n self.columns = [_convert_col(col) for col in columns]\n self.parents = []\n self.children = []\n self.op = None\n self.kind = None\n self.dependencies = None\n\n def __rshift__(self, operator):\n \"\"\"Transforms this ColumnGroup by applying an Operator\n\n Parameters\n -----------\n operators: Operator or callable\n\n Returns\n -------\n ColumnGroup\n \"\"\"\n if isinstance(operator, type) and issubclass(operator, Operator):\n # handle case where an operator class is passed\n operator = operator()\n elif callable(operator):\n # implicit lambdaop conversion.\n operator = LambdaOp(operator)\n\n if not isinstance(operator, Operator):\n raise ValueError(f\"Expected operator or callable, got {operator.__class__}\")\n\n child = ColumnGroup(operator.output_column_names(self.columns))\n child.parents = [self]\n self.children.append(child)\n child.op = operator\n\n dependencies = operator.dependencies()\n if dependencies:\n child.dependencies = set()\n if not isinstance(dependencies, collections.abc.Sequence):\n dependencies = [dependencies]\n\n for dependency in dependencies:\n if not isinstance(dependency, ColumnGroup):\n dependency = ColumnGroup(dependency)\n dependency.children.append(child)\n child.parents.append(dependency)\n child.dependencies.add(dependency)\n\n return child\n\n def __add__(self, other):\n \"\"\"Adds columns from this ColumnGroup with another to return a new ColumnGroup\n\n Parameters\n -----------\n other: ColumnGroup or str or list of str\n\n Returns\n -------\n ColumnGroup\n \"\"\"\n if isinstance(other, str):\n other = ColumnGroup([other])\n elif isinstance(other, collections.abc.Sequence):\n other = ColumnGroup(other)\n\n # check if there are any columns with the same name in both column groups\n overlap = set(self.columns).intersection(other.columns)\n if overlap:\n raise ValueError(f\"duplicate column names found: {overlap}\")\n\n child = 
ColumnGroup(self.columns + other.columns)\n child.parents = [self, other]\n child.kind = \"+\"\n self.children.append(child)\n other.children.append(child)\n return child\n\n # handle the \"column_name\" + ColumnGroup case\n __radd__ = __add__\n\n def __sub__(self, other):\n \"\"\"Adds columns from this ColumnGroup with another to return a new ColumnGroup\n\n Parameters\n -----------\n other: ColumnGroup or str or list of str\n\n Returns\n -------\n ColumnGroup\n \"\"\"\n if isinstance(other, ColumnGroup):\n to_remove = set(other.columns)\n elif isinstance(other, str):\n to_remove = {other}\n elif isinstance(other, collections.abc.Sequence):\n to_remove = set(other)\n else:\n raise ValueError(f\"Expected ColumnGroup, str, or list of str. Got {other.__class__}\")\n new_columns = [c for c in self.columns if c not in to_remove]\n child = ColumnGroup(new_columns)\n child.parents = [self]\n self.children.append(child)\n child.kind = f\"- {list(to_remove)}\"\n return child\n\n def __repr__(self):\n output = \" output\" if not self.children else \"\"\n return f\"<ColumnGroup {self.label}{output}>\"\n\n @property\n def flattened_columns(self):\n return list(flatten(self.columns, container=tuple))\n\n @property\n def input_column_names(self):\n \"\"\" returns the names of columns in the main chain \"\"\"\n dependencies = self.dependencies or set()\n return [\n col for parent in self.parents for col in parent.columns if parent not in dependencies\n ]\n\n @property\n def label(self):\n if self.op:\n return str(self.op.__class__.__name__)\n elif self.kind:\n return self.kind\n elif not self.parents:\n return f\"input cols=[{self._cols_repr}]\"\n else:\n return \"??\"\n\n @property\n def _cols_repr(self):\n cols = \", \".join(map(str, self.columns[:3]))\n if len(self.columns) > 3:\n cols += \"...\"\n return cols\n\n @property\n def graph(self):\n return _to_graphviz(self)\n\n\ndef iter_nodes(nodes):\n queue = nodes[:]\n while queue:\n current = queue.pop()\n yield current\n # TODO: deduplicate nodes?\n for parent in current.parents:\n queue.append(parent)\n\n\ndef _to_graphviz(column_group):\n \"\"\" converts a columngroup to a GraphViz DiGraph object useful for display in notebooks \"\"\"\n from graphviz import Digraph\n\n column_group = _merge_add_nodes(column_group)\n graph = Digraph()\n\n # get all the nodes from parents of this columngroup\n # and add edges between each of them\n allnodes = list(set(iter_nodes([column_group])))\n node_ids = {v: str(k) for k, v in enumerate(allnodes)}\n for node, nodeid in node_ids.items():\n graph.node(nodeid, node.label)\n for parent in node.parents:\n graph.edge(node_ids[parent], nodeid)\n\n # add a single 'output' node representing the final state\n output_node_id = str(len(allnodes))\n graph.node(output_node_id, f\"output cols=[{column_group._cols_repr}]\")\n graph.edge(node_ids[column_group], output_node_id)\n return graph\n\n\ndef _merge_add_nodes(graph):\n \"\"\" merges repeat '+' nodes, leading to nicer looking outputs \"\"\"\n # lets take a copy to avoid mutating the input\n import copy\n\n graph = copy.deepcopy(graph)\n\n queue = [graph]\n while queue:\n current = queue.pop()\n if current.kind == \"+\":\n changed = True\n while changed:\n changed = False\n parents = []\n for i, parent in enumerate(current.parents):\n if parent.kind == \"+\" and len(parent.children) == 1:\n changed = True\n # disconnect parent, point all the grandparents at current instead\n parents.extend(parent.parents)\n for grandparent in parent.parents:\n grandparent.children = [\n 
current if child == parent else child\n for child in grandparent.children\n ]\n else:\n parents.append(parent)\n current.parents = parents\n\n queue.extend(current.parents)\n\n return graph\n\n\ndef _convert_col(col):\n if isinstance(col, (str, tuple)):\n return col\n elif isinstance(col, list):\n return tuple(col)\n else:\n raise ValueError(\"Invalid column value for ColumnGroup: %s\", col)\n", "path": "nvtabular/column_group.py"}], "after_files": [{"content": "#\n# Copyright (c) 2020, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport collections.abc\n\nfrom dask.core import flatten\n\nfrom nvtabular.ops import LambdaOp, Operator\n\n\nclass ColumnGroup:\n \"\"\"A ColumnGroup is a group of columns that you want to apply the same transformations to.\n ColumnGroup's can be transformed by shifting operators on to them, which returns a new\n ColumnGroup with the transformations applied. This lets you define a graph of operations\n that makes up your workflow\n\n Parameters\n ----------\n columns: list of (str or tuple of str)\n The columns to select from the input Dataset. The elements of this list are strings\n indicating the column names in most cases, but can also be tuples of strings\n for feature crosses.\n \"\"\"\n\n def __init__(self, columns):\n self.parents = []\n self.children = []\n self.op = None\n self.kind = None\n self.dependencies = None\n\n if isinstance(columns, str):\n columns = [columns]\n\n # if any of the values we're passed are a columngroup\n # we have to ourselves as a childnode in the graph.\n if any(isinstance(col, ColumnGroup) for col in columns):\n self.columns = []\n self.kind = \"[...]\"\n for col in columns:\n if not isinstance(col, ColumnGroup):\n col = ColumnGroup(col)\n else:\n # we can't handle nesting arbitrarily deep here\n # only accept non-nested (str) columns here\n if any(not isinstance(c, str) for c in col.columns):\n raise ValueError(\"Can't handle more than 1 level of nested columns\")\n\n col.children.append(self)\n self.parents.append(col)\n self.columns.append(tuple(col.columns))\n\n else:\n self.columns = [_convert_col(col) for col in columns]\n\n def __rshift__(self, operator):\n \"\"\"Transforms this ColumnGroup by applying an Operator\n\n Parameters\n -----------\n operators: Operator or callable\n\n Returns\n -------\n ColumnGroup\n \"\"\"\n if isinstance(operator, type) and issubclass(operator, Operator):\n # handle case where an operator class is passed\n operator = operator()\n elif callable(operator):\n # implicit lambdaop conversion.\n operator = LambdaOp(operator)\n\n if not isinstance(operator, Operator):\n raise ValueError(f\"Expected operator or callable, got {operator.__class__}\")\n\n child = ColumnGroup(operator.output_column_names(self.columns))\n child.parents = [self]\n self.children.append(child)\n child.op = operator\n\n dependencies = operator.dependencies()\n if dependencies:\n child.dependencies = set()\n if not isinstance(dependencies, collections.abc.Sequence):\n dependencies = 
[dependencies]\n\n for dependency in dependencies:\n if not isinstance(dependency, ColumnGroup):\n dependency = ColumnGroup(dependency)\n dependency.children.append(child)\n child.parents.append(dependency)\n child.dependencies.add(dependency)\n\n return child\n\n def __add__(self, other):\n \"\"\"Adds columns from this ColumnGroup with another to return a new ColumnGroup\n\n Parameters\n -----------\n other: ColumnGroup or str or list of str\n\n Returns\n -------\n ColumnGroup\n \"\"\"\n if isinstance(other, str):\n other = ColumnGroup([other])\n elif isinstance(other, collections.abc.Sequence):\n other = ColumnGroup(other)\n\n # check if there are any columns with the same name in both column groups\n overlap = set(self.columns).intersection(other.columns)\n if overlap:\n raise ValueError(f\"duplicate column names found: {overlap}\")\n\n child = ColumnGroup(self.columns + other.columns)\n child.parents = [self, other]\n child.kind = \"+\"\n self.children.append(child)\n other.children.append(child)\n return child\n\n # handle the \"column_name\" + ColumnGroup case\n __radd__ = __add__\n\n def __sub__(self, other):\n \"\"\"Adds columns from this ColumnGroup with another to return a new ColumnGroup\n\n Parameters\n -----------\n other: ColumnGroup or str or list of str\n\n Returns\n -------\n ColumnGroup\n \"\"\"\n if isinstance(other, ColumnGroup):\n to_remove = set(other.columns)\n elif isinstance(other, str):\n to_remove = {other}\n elif isinstance(other, collections.abc.Sequence):\n to_remove = set(other)\n else:\n raise ValueError(f\"Expected ColumnGroup, str, or list of str. Got {other.__class__}\")\n new_columns = [c for c in self.columns if c not in to_remove]\n child = ColumnGroup(new_columns)\n child.parents = [self]\n self.children.append(child)\n child.kind = f\"- {list(to_remove)}\"\n return child\n\n def __repr__(self):\n output = \" output\" if not self.children else \"\"\n return f\"<ColumnGroup {self.label}{output}>\"\n\n @property\n def flattened_columns(self):\n return list(flatten(self.columns, container=tuple))\n\n @property\n def input_column_names(self):\n \"\"\" returns the names of columns in the main chain \"\"\"\n dependencies = self.dependencies or set()\n return [\n col for parent in self.parents for col in parent.columns if parent not in dependencies\n ]\n\n @property\n def label(self):\n if self.op:\n return str(self.op.__class__.__name__)\n elif self.kind:\n return self.kind\n elif not self.parents:\n return f\"input cols=[{self._cols_repr}]\"\n else:\n return \"??\"\n\n @property\n def _cols_repr(self):\n cols = \", \".join(map(str, self.columns[:3]))\n if len(self.columns) > 3:\n cols += \"...\"\n return cols\n\n @property\n def graph(self):\n return _to_graphviz(self)\n\n\ndef iter_nodes(nodes):\n queue = nodes[:]\n while queue:\n current = queue.pop()\n yield current\n # TODO: deduplicate nodes?\n for parent in current.parents:\n queue.append(parent)\n\n\ndef _to_graphviz(column_group):\n \"\"\" converts a columngroup to a GraphViz DiGraph object useful for display in notebooks \"\"\"\n from graphviz import Digraph\n\n column_group = _merge_add_nodes(column_group)\n graph = Digraph()\n\n # get all the nodes from parents of this columngroup\n # and add edges between each of them\n allnodes = list(set(iter_nodes([column_group])))\n node_ids = {v: str(k) for k, v in enumerate(allnodes)}\n for node, nodeid in node_ids.items():\n graph.node(nodeid, node.label)\n for parent in node.parents:\n graph.edge(node_ids[parent], nodeid)\n\n # add a single 'output' 
node representing the final state\n output_node_id = str(len(allnodes))\n graph.node(output_node_id, f\"output cols=[{column_group._cols_repr}]\")\n graph.edge(node_ids[column_group], output_node_id)\n return graph\n\n\ndef _merge_add_nodes(graph):\n \"\"\" merges repeat '+' nodes, leading to nicer looking outputs \"\"\"\n # lets take a copy to avoid mutating the input\n import copy\n\n graph = copy.deepcopy(graph)\n\n queue = [graph]\n while queue:\n current = queue.pop()\n if current.kind == \"+\":\n changed = True\n while changed:\n changed = False\n parents = []\n for i, parent in enumerate(current.parents):\n if parent.kind == \"+\" and len(parent.children) == 1:\n changed = True\n # disconnect parent, point all the grandparents at current instead\n parents.extend(parent.parents)\n for grandparent in parent.parents:\n grandparent.children = [\n current if child == parent else child\n for child in grandparent.children\n ]\n else:\n parents.append(parent)\n current.parents = parents\n\n queue.extend(current.parents)\n\n return graph\n\n\ndef _convert_col(col):\n if isinstance(col, (str, tuple)):\n return col\n elif isinstance(col, list):\n return tuple(col)\n else:\n raise ValueError(\"Invalid column value for ColumnGroup: %s\", col)\n", "path": "nvtabular/column_group.py"}]}
| 2,956 | 361 |
gh_patches_debug_59152
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-729
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
conn.close() can raise OpenSSL exception after idle timeout
When returning a connection to the pool, urllib3 calls `conn.close()`:
```
if conn and is_connection_dropped(conn):
log.info("Resetting dropped connection: %s" % self.host)
conn.close()
```
This can raise an exception under the following conditions:
- urllib3 is using pyOpenSSL and https
- Remote webserver is running Windows IIS (7.5 and 8 tested)
- KeepAlives are active (default for IIS)
- Connection drops after idle timeout
tcpdump shows that the remote IIS host sends a RST on idle timeout. This appears to be IIS-specific behaviour (Apache2 and haproxy both close the connection normally)
At that point nothing bad happens immediately. However, the connection remains in the pool. The next time you send a request to this host, it is reused; some data is sent down it; this then raises an `OpenSSL.SSL.SysCallError` exception. I am guessing that it is trying to send an SSL shutdown message down the already-closed socket.
More detail at https://github.com/zatosource/zato/issues/465
Proposed patch:
```
--- a/urllib3/connectionpool.py
+++ b/urllib3/connectionpool.py
@@ -239,11 +239,15 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
# If this is a persistent connection, check if it got disconnected
if conn and is_connection_dropped(conn):
log.info("Resetting dropped connection: %s" % self.host)
- conn.close()
- if getattr(conn, 'auto_open', 1) == 0:
- # This is a proxied connection that has been mutated by
- # httplib._tunnel() and cannot be reused (since it would
- # attempt to bypass the proxy)
+ try:
+ conn.close()
+ if getattr(conn, 'auto_open', 1) == 0:
+ # This is a proxied connection that has been mutated by
+ # httplib._tunnel() and cannot be reused (since it would
+ # attempt to bypass the proxy)
+ conn = None
+ except Exception as e:
+ log.info("Closing connection failed: %s, force new connection" % str(e))
conn = None
return conn or self._new_conn()
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `urllib3/contrib/pyopenssl.py`
Content:
```
1 '''SSL with SNI_-support for Python 2. Follow these instructions if you would
2 like to verify SSL certificates in Python 2. Note, the default libraries do
3 *not* do certificate checking; you need to do additional work to validate
4 certificates yourself.
5
6 This needs the following packages installed:
7
8 * pyOpenSSL (tested with 0.13)
9 * ndg-httpsclient (tested with 0.3.2)
10 * pyasn1 (tested with 0.1.6)
11
12 You can install them with the following command:
13
14 pip install pyopenssl ndg-httpsclient pyasn1
15
16 To activate certificate checking, call
17 :func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code
18 before you begin making HTTP requests. This can be done in a ``sitecustomize``
19 module, or at any other time before your application begins using ``urllib3``,
20 like this::
21
22 try:
23 import urllib3.contrib.pyopenssl
24 urllib3.contrib.pyopenssl.inject_into_urllib3()
25 except ImportError:
26 pass
27
28 Now you can use :mod:`urllib3` as you normally would, and it will support SNI
29 when the required modules are installed.
30
31 Activating this module also has the positive side effect of disabling SSL/TLS
32 compression in Python 2 (see `CRIME attack`_).
33
34 If you want to configure the default list of supported cipher suites, you can
35 set the ``urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST`` variable.
36
37 Module Variables
38 ----------------
39
40 :var DEFAULT_SSL_CIPHER_LIST: The list of supported SSL/TLS cipher suites.
41
42 .. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication
43 .. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit)
44
45 '''
46
47 try:
48 from ndg.httpsclient.ssl_peer_verification import SUBJ_ALT_NAME_SUPPORT
49 from ndg.httpsclient.subj_alt_name import SubjectAltName as BaseSubjectAltName
50 except SyntaxError as e:
51 raise ImportError(e)
52
53 import OpenSSL.SSL
54 from pyasn1.codec.der import decoder as der_decoder
55 from pyasn1.type import univ, constraint
56 from socket import _fileobject, timeout
57 import ssl
58 import select
59
60 from .. import connection
61 from .. import util
62
63 __all__ = ['inject_into_urllib3', 'extract_from_urllib3']
64
65 # SNI only *really* works if we can read the subjectAltName of certificates.
66 HAS_SNI = SUBJ_ALT_NAME_SUPPORT
67
68 # Map from urllib3 to PyOpenSSL compatible parameter-values.
69 _openssl_versions = {
70 ssl.PROTOCOL_SSLv23: OpenSSL.SSL.SSLv23_METHOD,
71 ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,
72 }
73
74 try:
75 _openssl_versions.update({ssl.PROTOCOL_SSLv3: OpenSSL.SSL.SSLv3_METHOD})
76 except AttributeError:
77 pass
78
79 _openssl_verify = {
80 ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE,
81 ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER,
82 ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER
83 + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT,
84 }
85
86 DEFAULT_SSL_CIPHER_LIST = util.ssl_.DEFAULT_CIPHERS
87
88 # OpenSSL will only write 16K at a time
89 SSL_WRITE_BLOCKSIZE = 16384
90
91 orig_util_HAS_SNI = util.HAS_SNI
92 orig_connection_ssl_wrap_socket = connection.ssl_wrap_socket
93
94
95 def inject_into_urllib3():
96 'Monkey-patch urllib3 with PyOpenSSL-backed SSL-support.'
97
98 connection.ssl_wrap_socket = ssl_wrap_socket
99 util.HAS_SNI = HAS_SNI
100
101
102 def extract_from_urllib3():
103 'Undo monkey-patching by :func:`inject_into_urllib3`.'
104
105 connection.ssl_wrap_socket = orig_connection_ssl_wrap_socket
106 util.HAS_SNI = orig_util_HAS_SNI
107
108
109 ### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.
110 class SubjectAltName(BaseSubjectAltName):
111 '''ASN.1 implementation for subjectAltNames support'''
112
113 # There is no limit to how many SAN certificates a certificate may have,
114 # however this needs to have some limit so we'll set an arbitrarily high
115 # limit.
116 sizeSpec = univ.SequenceOf.sizeSpec + \
117 constraint.ValueSizeConstraint(1, 1024)
118
119
120 ### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.
121 def get_subj_alt_name(peer_cert):
122 # Search through extensions
123 dns_name = []
124 if not SUBJ_ALT_NAME_SUPPORT:
125 return dns_name
126
127 general_names = SubjectAltName()
128 for i in range(peer_cert.get_extension_count()):
129 ext = peer_cert.get_extension(i)
130 ext_name = ext.get_short_name()
131 if ext_name != 'subjectAltName':
132 continue
133
134 # PyOpenSSL returns extension data in ASN.1 encoded form
135 ext_dat = ext.get_data()
136 decoded_dat = der_decoder.decode(ext_dat,
137 asn1Spec=general_names)
138
139 for name in decoded_dat:
140 if not isinstance(name, SubjectAltName):
141 continue
142 for entry in range(len(name)):
143 component = name.getComponentByPosition(entry)
144 if component.getName() != 'dNSName':
145 continue
146 dns_name.append(str(component.getComponent()))
147
148 return dns_name
149
150
151 class WrappedSocket(object):
152 '''API-compatibility wrapper for Python OpenSSL's Connection-class.
153
154 Note: _makefile_refs, _drop() and _reuse() are needed for the garbage
155 collector of pypy.
156 '''
157
158 def __init__(self, connection, socket, suppress_ragged_eofs=True):
159 self.connection = connection
160 self.socket = socket
161 self.suppress_ragged_eofs = suppress_ragged_eofs
162 self._makefile_refs = 0
163
164 def fileno(self):
165 return self.socket.fileno()
166
167 def makefile(self, mode, bufsize=-1):
168 self._makefile_refs += 1
169 return _fileobject(self, mode, bufsize, close=True)
170
171 def recv(self, *args, **kwargs):
172 try:
173 data = self.connection.recv(*args, **kwargs)
174 except OpenSSL.SSL.SysCallError as e:
175 if self.suppress_ragged_eofs and e.args == (-1, 'Unexpected EOF'):
176 return b''
177 else:
178 raise
179 except OpenSSL.SSL.ZeroReturnError as e:
180 if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN:
181 return b''
182 else:
183 raise
184 except OpenSSL.SSL.WantReadError:
185 rd, wd, ed = select.select(
186 [self.socket], [], [], self.socket.gettimeout())
187 if not rd:
188 raise timeout('The read operation timed out')
189 else:
190 return self.recv(*args, **kwargs)
191 else:
192 return data
193
194 def settimeout(self, timeout):
195 return self.socket.settimeout(timeout)
196
197 def _send_until_done(self, data):
198 while True:
199 try:
200 return self.connection.send(data)
201 except OpenSSL.SSL.WantWriteError:
202 _, wlist, _ = select.select([], [self.socket], [],
203 self.socket.gettimeout())
204 if not wlist:
205 raise timeout()
206 continue
207
208 def sendall(self, data):
209 total_sent = 0
210 while total_sent < len(data):
211 sent = self._send_until_done(data[total_sent:total_sent+SSL_WRITE_BLOCKSIZE])
212 total_sent += sent
213
214 def shutdown(self):
215 # FIXME rethrow compatible exceptions should we ever use this
216 self.connection.shutdown()
217
218 def close(self):
219 if self._makefile_refs < 1:
220 return self.connection.close()
221 else:
222 self._makefile_refs -= 1
223
224 def getpeercert(self, binary_form=False):
225 x509 = self.connection.get_peer_certificate()
226
227 if not x509:
228 return x509
229
230 if binary_form:
231 return OpenSSL.crypto.dump_certificate(
232 OpenSSL.crypto.FILETYPE_ASN1,
233 x509)
234
235 return {
236 'subject': (
237 (('commonName', x509.get_subject().CN),),
238 ),
239 'subjectAltName': [
240 ('DNS', value)
241 for value in get_subj_alt_name(x509)
242 ]
243 }
244
245 def _reuse(self):
246 self._makefile_refs += 1
247
248 def _drop(self):
249 if self._makefile_refs < 1:
250 self.close()
251 else:
252 self._makefile_refs -= 1
253
254
255 def _verify_callback(cnx, x509, err_no, err_depth, return_code):
256 return err_no == 0
257
258
259 def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
260 ca_certs=None, server_hostname=None,
261 ssl_version=None, ca_cert_dir=None):
262 ctx = OpenSSL.SSL.Context(_openssl_versions[ssl_version])
263 if certfile:
264 keyfile = keyfile or certfile # Match behaviour of the normal python ssl library
265 ctx.use_certificate_file(certfile)
266 if keyfile:
267 ctx.use_privatekey_file(keyfile)
268 if cert_reqs != ssl.CERT_NONE:
269 ctx.set_verify(_openssl_verify[cert_reqs], _verify_callback)
270 if ca_certs or ca_cert_dir:
271 try:
272 ctx.load_verify_locations(ca_certs, ca_cert_dir)
273 except OpenSSL.SSL.Error as e:
274 raise ssl.SSLError('bad ca_certs: %r' % ca_certs, e)
275 else:
276 ctx.set_default_verify_paths()
277
278 # Disable TLS compression to migitate CRIME attack (issue #309)
279 OP_NO_COMPRESSION = 0x20000
280 ctx.set_options(OP_NO_COMPRESSION)
281
282 # Set list of supported ciphersuites.
283 ctx.set_cipher_list(DEFAULT_SSL_CIPHER_LIST)
284
285 cnx = OpenSSL.SSL.Connection(ctx, sock)
286 cnx.set_tlsext_host_name(server_hostname)
287 cnx.set_connect_state()
288 while True:
289 try:
290 cnx.do_handshake()
291 except OpenSSL.SSL.WantReadError:
292 rd, _, _ = select.select([sock], [], [], sock.gettimeout())
293 if not rd:
294 raise timeout('select timed out')
295 continue
296 except OpenSSL.SSL.Error as e:
297 raise ssl.SSLError('bad handshake: %r' % e)
298 break
299
300 return WrappedSocket(cnx, sock)
301
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/urllib3/contrib/pyopenssl.py b/urllib3/contrib/pyopenssl.py
--- a/urllib3/contrib/pyopenssl.py
+++ b/urllib3/contrib/pyopenssl.py
@@ -217,7 +217,10 @@
def close(self):
if self._makefile_refs < 1:
- return self.connection.close()
+ try:
+ return self.connection.close()
+ except OpenSSL.SSL.Error:
+ return
else:
self._makefile_refs -= 1
|
{"golden_diff": "diff --git a/urllib3/contrib/pyopenssl.py b/urllib3/contrib/pyopenssl.py\n--- a/urllib3/contrib/pyopenssl.py\n+++ b/urllib3/contrib/pyopenssl.py\n@@ -217,7 +217,10 @@\n \n def close(self):\n if self._makefile_refs < 1:\n- return self.connection.close()\n+ try:\n+ return self.connection.close()\n+ except OpenSSL.SSL.Error:\n+ return\n else:\n self._makefile_refs -= 1\n", "issue": "conn.close() can raise OpenSSL exception after idle timeout\nWhen returning a connection to the pool, urllib3 calls `conn.close()`:\n\n```\n if conn and is_connection_dropped(conn):\n log.info(\"Resetting dropped connection: %s\" % self.host)\n conn.close()\n```\n\nThis can raise an exception under the following conditions:\n- urllib3 is using pyOpenSSL and https\n- Remote webserver is running Windows IIS (7.5 and 8 tested)\n- KeepAlives are active (default for IIS)\n- Connection drops after idle timeout\n\ntcpdump shows that the remote IIS host sends a RST on idle timeout. This appears to be IIS-specific behaviour (Apache2 and haproxy both close the connection normally)\n\nAt that point nothing bad happens immediately. However the connection remains in the pool. The next time you send a request to this host, it is reused; some data is sent down it; this then raises an `OpenSSL.SSL.SysCallError` exception. I am guessing that it it is trying to send an SSL shutdown message down the already-closed socket.\n\nMore detail at https://github.com/zatosource/zato/issues/465\n\nProposed patch:\n\n```\n--- a/urllib3/connectionpool.py\n+++ b/urllib3/connectionpool.py\n@@ -239,11 +239,15 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):\n # If this is a persistent connection, check if it got disconnected\n if conn and is_connection_dropped(conn):\n log.info(\"Resetting dropped connection: %s\" % self.host)\n- conn.close()\n- if getattr(conn, 'auto_open', 1) == 0:\n- # This is a proxied connection that has been mutated by\n- # httplib._tunnel() and cannot be reused (since it would\n- # attempt to bypass the proxy)\n+ try:\n+ conn.close()\n+ if getattr(conn, 'auto_open', 1) == 0:\n+ # This is a proxied connection that has been mutated by\n+ # httplib._tunnel() and cannot be reused (since it would\n+ # attempt to bypass the proxy)\n+ conn = None\n+ except Exception as e:\n+ log.info(\"Closing connection failed: %s, force new connection\" % str(e))\n conn = None\n\n return conn or self._new_conn()\n```\n\n", "before_files": [{"content": "'''SSL with SNI_-support for Python 2. Follow these instructions if you would\nlike to verify SSL certificates in Python 2. Note, the default libraries do\n*not* do certificate checking; you need to do additional work to validate\ncertificates yourself.\n\nThis needs the following packages installed:\n\n* pyOpenSSL (tested with 0.13)\n* ndg-httpsclient (tested with 0.3.2)\n* pyasn1 (tested with 0.1.6)\n\nYou can install them with the following command:\n\n pip install pyopenssl ndg-httpsclient pyasn1\n\nTo activate certificate checking, call\n:func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code\nbefore you begin making HTTP requests. 
This can be done in a ``sitecustomize``\nmodule, or at any other time before your application begins using ``urllib3``,\nlike this::\n\n try:\n import urllib3.contrib.pyopenssl\n urllib3.contrib.pyopenssl.inject_into_urllib3()\n except ImportError:\n pass\n\nNow you can use :mod:`urllib3` as you normally would, and it will support SNI\nwhen the required modules are installed.\n\nActivating this module also has the positive side effect of disabling SSL/TLS\ncompression in Python 2 (see `CRIME attack`_).\n\nIf you want to configure the default list of supported cipher suites, you can\nset the ``urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST`` variable.\n\nModule Variables\n----------------\n\n:var DEFAULT_SSL_CIPHER_LIST: The list of supported SSL/TLS cipher suites.\n\n.. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication\n.. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit)\n\n'''\n\ntry:\n from ndg.httpsclient.ssl_peer_verification import SUBJ_ALT_NAME_SUPPORT\n from ndg.httpsclient.subj_alt_name import SubjectAltName as BaseSubjectAltName\nexcept SyntaxError as e:\n raise ImportError(e)\n\nimport OpenSSL.SSL\nfrom pyasn1.codec.der import decoder as der_decoder\nfrom pyasn1.type import univ, constraint\nfrom socket import _fileobject, timeout\nimport ssl\nimport select\n\nfrom .. import connection\nfrom .. import util\n\n__all__ = ['inject_into_urllib3', 'extract_from_urllib3']\n\n# SNI only *really* works if we can read the subjectAltName of certificates.\nHAS_SNI = SUBJ_ALT_NAME_SUPPORT\n\n# Map from urllib3 to PyOpenSSL compatible parameter-values.\n_openssl_versions = {\n ssl.PROTOCOL_SSLv23: OpenSSL.SSL.SSLv23_METHOD,\n ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,\n}\n\ntry:\n _openssl_versions.update({ssl.PROTOCOL_SSLv3: OpenSSL.SSL.SSLv3_METHOD})\nexcept AttributeError:\n pass\n\n_openssl_verify = {\n ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE,\n ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER,\n ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER\n + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT,\n}\n\nDEFAULT_SSL_CIPHER_LIST = util.ssl_.DEFAULT_CIPHERS\n\n# OpenSSL will only write 16K at a time\nSSL_WRITE_BLOCKSIZE = 16384\n\norig_util_HAS_SNI = util.HAS_SNI\norig_connection_ssl_wrap_socket = connection.ssl_wrap_socket\n\n\ndef inject_into_urllib3():\n 'Monkey-patch urllib3 with PyOpenSSL-backed SSL-support.'\n\n connection.ssl_wrap_socket = ssl_wrap_socket\n util.HAS_SNI = HAS_SNI\n\n\ndef extract_from_urllib3():\n 'Undo monkey-patching by :func:`inject_into_urllib3`.'\n\n connection.ssl_wrap_socket = orig_connection_ssl_wrap_socket\n util.HAS_SNI = orig_util_HAS_SNI\n\n\n### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.\nclass SubjectAltName(BaseSubjectAltName):\n '''ASN.1 implementation for subjectAltNames support'''\n\n # There is no limit to how many SAN certificates a certificate may have,\n # however this needs to have some limit so we'll set an arbitrarily high\n # limit.\n sizeSpec = univ.SequenceOf.sizeSpec + \\\n constraint.ValueSizeConstraint(1, 1024)\n\n\n### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.\ndef get_subj_alt_name(peer_cert):\n # Search through extensions\n dns_name = []\n if not SUBJ_ALT_NAME_SUPPORT:\n return dns_name\n\n general_names = SubjectAltName()\n for i in range(peer_cert.get_extension_count()):\n ext = peer_cert.get_extension(i)\n ext_name = ext.get_short_name()\n if ext_name != 'subjectAltName':\n continue\n\n # PyOpenSSL returns extension data in ASN.1 encoded form\n ext_dat = 
ext.get_data()\n decoded_dat = der_decoder.decode(ext_dat,\n asn1Spec=general_names)\n\n for name in decoded_dat:\n if not isinstance(name, SubjectAltName):\n continue\n for entry in range(len(name)):\n component = name.getComponentByPosition(entry)\n if component.getName() != 'dNSName':\n continue\n dns_name.append(str(component.getComponent()))\n\n return dns_name\n\n\nclass WrappedSocket(object):\n '''API-compatibility wrapper for Python OpenSSL's Connection-class.\n\n Note: _makefile_refs, _drop() and _reuse() are needed for the garbage\n collector of pypy.\n '''\n\n def __init__(self, connection, socket, suppress_ragged_eofs=True):\n self.connection = connection\n self.socket = socket\n self.suppress_ragged_eofs = suppress_ragged_eofs\n self._makefile_refs = 0\n\n def fileno(self):\n return self.socket.fileno()\n\n def makefile(self, mode, bufsize=-1):\n self._makefile_refs += 1\n return _fileobject(self, mode, bufsize, close=True)\n\n def recv(self, *args, **kwargs):\n try:\n data = self.connection.recv(*args, **kwargs)\n except OpenSSL.SSL.SysCallError as e:\n if self.suppress_ragged_eofs and e.args == (-1, 'Unexpected EOF'):\n return b''\n else:\n raise\n except OpenSSL.SSL.ZeroReturnError as e:\n if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN:\n return b''\n else:\n raise\n except OpenSSL.SSL.WantReadError:\n rd, wd, ed = select.select(\n [self.socket], [], [], self.socket.gettimeout())\n if not rd:\n raise timeout('The read operation timed out')\n else:\n return self.recv(*args, **kwargs)\n else:\n return data\n\n def settimeout(self, timeout):\n return self.socket.settimeout(timeout)\n\n def _send_until_done(self, data):\n while True:\n try:\n return self.connection.send(data)\n except OpenSSL.SSL.WantWriteError:\n _, wlist, _ = select.select([], [self.socket], [],\n self.socket.gettimeout())\n if not wlist:\n raise timeout()\n continue\n\n def sendall(self, data):\n total_sent = 0\n while total_sent < len(data):\n sent = self._send_until_done(data[total_sent:total_sent+SSL_WRITE_BLOCKSIZE])\n total_sent += sent\n\n def shutdown(self):\n # FIXME rethrow compatible exceptions should we ever use this\n self.connection.shutdown()\n\n def close(self):\n if self._makefile_refs < 1:\n return self.connection.close()\n else:\n self._makefile_refs -= 1\n\n def getpeercert(self, binary_form=False):\n x509 = self.connection.get_peer_certificate()\n\n if not x509:\n return x509\n\n if binary_form:\n return OpenSSL.crypto.dump_certificate(\n OpenSSL.crypto.FILETYPE_ASN1,\n x509)\n\n return {\n 'subject': (\n (('commonName', x509.get_subject().CN),),\n ),\n 'subjectAltName': [\n ('DNS', value)\n for value in get_subj_alt_name(x509)\n ]\n }\n\n def _reuse(self):\n self._makefile_refs += 1\n\n def _drop(self):\n if self._makefile_refs < 1:\n self.close()\n else:\n self._makefile_refs -= 1\n\n\ndef _verify_callback(cnx, x509, err_no, err_depth, return_code):\n return err_no == 0\n\n\ndef ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,\n ca_certs=None, server_hostname=None,\n ssl_version=None, ca_cert_dir=None):\n ctx = OpenSSL.SSL.Context(_openssl_versions[ssl_version])\n if certfile:\n keyfile = keyfile or certfile # Match behaviour of the normal python ssl library\n ctx.use_certificate_file(certfile)\n if keyfile:\n ctx.use_privatekey_file(keyfile)\n if cert_reqs != ssl.CERT_NONE:\n ctx.set_verify(_openssl_verify[cert_reqs], _verify_callback)\n if ca_certs or ca_cert_dir:\n try:\n ctx.load_verify_locations(ca_certs, ca_cert_dir)\n except OpenSSL.SSL.Error 
as e:\n raise ssl.SSLError('bad ca_certs: %r' % ca_certs, e)\n else:\n ctx.set_default_verify_paths()\n\n # Disable TLS compression to migitate CRIME attack (issue #309)\n OP_NO_COMPRESSION = 0x20000\n ctx.set_options(OP_NO_COMPRESSION)\n\n # Set list of supported ciphersuites.\n ctx.set_cipher_list(DEFAULT_SSL_CIPHER_LIST)\n\n cnx = OpenSSL.SSL.Connection(ctx, sock)\n cnx.set_tlsext_host_name(server_hostname)\n cnx.set_connect_state()\n while True:\n try:\n cnx.do_handshake()\n except OpenSSL.SSL.WantReadError:\n rd, _, _ = select.select([sock], [], [], sock.gettimeout())\n if not rd:\n raise timeout('select timed out')\n continue\n except OpenSSL.SSL.Error as e:\n raise ssl.SSLError('bad handshake: %r' % e)\n break\n\n return WrappedSocket(cnx, sock)\n", "path": "urllib3/contrib/pyopenssl.py"}], "after_files": [{"content": "'''SSL with SNI_-support for Python 2. Follow these instructions if you would\nlike to verify SSL certificates in Python 2. Note, the default libraries do\n*not* do certificate checking; you need to do additional work to validate\ncertificates yourself.\n\nThis needs the following packages installed:\n\n* pyOpenSSL (tested with 0.13)\n* ndg-httpsclient (tested with 0.3.2)\n* pyasn1 (tested with 0.1.6)\n\nYou can install them with the following command:\n\n pip install pyopenssl ndg-httpsclient pyasn1\n\nTo activate certificate checking, call\n:func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code\nbefore you begin making HTTP requests. This can be done in a ``sitecustomize``\nmodule, or at any other time before your application begins using ``urllib3``,\nlike this::\n\n try:\n import urllib3.contrib.pyopenssl\n urllib3.contrib.pyopenssl.inject_into_urllib3()\n except ImportError:\n pass\n\nNow you can use :mod:`urllib3` as you normally would, and it will support SNI\nwhen the required modules are installed.\n\nActivating this module also has the positive side effect of disabling SSL/TLS\ncompression in Python 2 (see `CRIME attack`_).\n\nIf you want to configure the default list of supported cipher suites, you can\nset the ``urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST`` variable.\n\nModule Variables\n----------------\n\n:var DEFAULT_SSL_CIPHER_LIST: The list of supported SSL/TLS cipher suites.\n\n.. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication\n.. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit)\n\n'''\n\ntry:\n from ndg.httpsclient.ssl_peer_verification import SUBJ_ALT_NAME_SUPPORT\n from ndg.httpsclient.subj_alt_name import SubjectAltName as BaseSubjectAltName\nexcept SyntaxError as e:\n raise ImportError(e)\n\nimport OpenSSL.SSL\nfrom pyasn1.codec.der import decoder as der_decoder\nfrom pyasn1.type import univ, constraint\nfrom socket import _fileobject, timeout\nimport ssl\nimport select\n\nfrom .. import connection\nfrom .. 
import util\n\n__all__ = ['inject_into_urllib3', 'extract_from_urllib3']\n\n# SNI only *really* works if we can read the subjectAltName of certificates.\nHAS_SNI = SUBJ_ALT_NAME_SUPPORT\n\n# Map from urllib3 to PyOpenSSL compatible parameter-values.\n_openssl_versions = {\n ssl.PROTOCOL_SSLv23: OpenSSL.SSL.SSLv23_METHOD,\n ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,\n}\n\ntry:\n _openssl_versions.update({ssl.PROTOCOL_SSLv3: OpenSSL.SSL.SSLv3_METHOD})\nexcept AttributeError:\n pass\n\n_openssl_verify = {\n ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE,\n ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER,\n ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER\n + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT,\n}\n\nDEFAULT_SSL_CIPHER_LIST = util.ssl_.DEFAULT_CIPHERS\n\n# OpenSSL will only write 16K at a time\nSSL_WRITE_BLOCKSIZE = 16384\n\norig_util_HAS_SNI = util.HAS_SNI\norig_connection_ssl_wrap_socket = connection.ssl_wrap_socket\n\n\ndef inject_into_urllib3():\n 'Monkey-patch urllib3 with PyOpenSSL-backed SSL-support.'\n\n connection.ssl_wrap_socket = ssl_wrap_socket\n util.HAS_SNI = HAS_SNI\n\n\ndef extract_from_urllib3():\n 'Undo monkey-patching by :func:`inject_into_urllib3`.'\n\n connection.ssl_wrap_socket = orig_connection_ssl_wrap_socket\n util.HAS_SNI = orig_util_HAS_SNI\n\n\n### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.\nclass SubjectAltName(BaseSubjectAltName):\n '''ASN.1 implementation for subjectAltNames support'''\n\n # There is no limit to how many SAN certificates a certificate may have,\n # however this needs to have some limit so we'll set an arbitrarily high\n # limit.\n sizeSpec = univ.SequenceOf.sizeSpec + \\\n constraint.ValueSizeConstraint(1, 1024)\n\n\n### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.\ndef get_subj_alt_name(peer_cert):\n # Search through extensions\n dns_name = []\n if not SUBJ_ALT_NAME_SUPPORT:\n return dns_name\n\n general_names = SubjectAltName()\n for i in range(peer_cert.get_extension_count()):\n ext = peer_cert.get_extension(i)\n ext_name = ext.get_short_name()\n if ext_name != 'subjectAltName':\n continue\n\n # PyOpenSSL returns extension data in ASN.1 encoded form\n ext_dat = ext.get_data()\n decoded_dat = der_decoder.decode(ext_dat,\n asn1Spec=general_names)\n\n for name in decoded_dat:\n if not isinstance(name, SubjectAltName):\n continue\n for entry in range(len(name)):\n component = name.getComponentByPosition(entry)\n if component.getName() != 'dNSName':\n continue\n dns_name.append(str(component.getComponent()))\n\n return dns_name\n\n\nclass WrappedSocket(object):\n '''API-compatibility wrapper for Python OpenSSL's Connection-class.\n\n Note: _makefile_refs, _drop() and _reuse() are needed for the garbage\n collector of pypy.\n '''\n\n def __init__(self, connection, socket, suppress_ragged_eofs=True):\n self.connection = connection\n self.socket = socket\n self.suppress_ragged_eofs = suppress_ragged_eofs\n self._makefile_refs = 0\n\n def fileno(self):\n return self.socket.fileno()\n\n def makefile(self, mode, bufsize=-1):\n self._makefile_refs += 1\n return _fileobject(self, mode, bufsize, close=True)\n\n def recv(self, *args, **kwargs):\n try:\n data = self.connection.recv(*args, **kwargs)\n except OpenSSL.SSL.SysCallError as e:\n if self.suppress_ragged_eofs and e.args == (-1, 'Unexpected EOF'):\n return b''\n else:\n raise\n except OpenSSL.SSL.ZeroReturnError as e:\n if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN:\n return b''\n else:\n raise\n except OpenSSL.SSL.WantReadError:\n 
rd, wd, ed = select.select(\n [self.socket], [], [], self.socket.gettimeout())\n if not rd:\n raise timeout('The read operation timed out')\n else:\n return self.recv(*args, **kwargs)\n else:\n return data\n\n def settimeout(self, timeout):\n return self.socket.settimeout(timeout)\n\n def _send_until_done(self, data):\n while True:\n try:\n return self.connection.send(data)\n except OpenSSL.SSL.WantWriteError:\n _, wlist, _ = select.select([], [self.socket], [],\n self.socket.gettimeout())\n if not wlist:\n raise timeout()\n continue\n\n def sendall(self, data):\n total_sent = 0\n while total_sent < len(data):\n sent = self._send_until_done(data[total_sent:total_sent+SSL_WRITE_BLOCKSIZE])\n total_sent += sent\n\n def shutdown(self):\n # FIXME rethrow compatible exceptions should we ever use this\n self.connection.shutdown()\n\n def close(self):\n if self._makefile_refs < 1:\n try:\n return self.connection.close()\n except OpenSSL.SSL.Error:\n return\n else:\n self._makefile_refs -= 1\n\n def getpeercert(self, binary_form=False):\n x509 = self.connection.get_peer_certificate()\n\n if not x509:\n return x509\n\n if binary_form:\n return OpenSSL.crypto.dump_certificate(\n OpenSSL.crypto.FILETYPE_ASN1,\n x509)\n\n return {\n 'subject': (\n (('commonName', x509.get_subject().CN),),\n ),\n 'subjectAltName': [\n ('DNS', value)\n for value in get_subj_alt_name(x509)\n ]\n }\n\n def _reuse(self):\n self._makefile_refs += 1\n\n def _drop(self):\n if self._makefile_refs < 1:\n self.close()\n else:\n self._makefile_refs -= 1\n\n\ndef _verify_callback(cnx, x509, err_no, err_depth, return_code):\n return err_no == 0\n\n\ndef ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,\n ca_certs=None, server_hostname=None,\n ssl_version=None, ca_cert_dir=None):\n ctx = OpenSSL.SSL.Context(_openssl_versions[ssl_version])\n if certfile:\n keyfile = keyfile or certfile # Match behaviour of the normal python ssl library\n ctx.use_certificate_file(certfile)\n if keyfile:\n ctx.use_privatekey_file(keyfile)\n if cert_reqs != ssl.CERT_NONE:\n ctx.set_verify(_openssl_verify[cert_reqs], _verify_callback)\n if ca_certs or ca_cert_dir:\n try:\n ctx.load_verify_locations(ca_certs, ca_cert_dir)\n except OpenSSL.SSL.Error as e:\n raise ssl.SSLError('bad ca_certs: %r' % ca_certs, e)\n else:\n ctx.set_default_verify_paths()\n\n # Disable TLS compression to migitate CRIME attack (issue #309)\n OP_NO_COMPRESSION = 0x20000\n ctx.set_options(OP_NO_COMPRESSION)\n\n # Set list of supported ciphersuites.\n ctx.set_cipher_list(DEFAULT_SSL_CIPHER_LIST)\n\n cnx = OpenSSL.SSL.Connection(ctx, sock)\n cnx.set_tlsext_host_name(server_hostname)\n cnx.set_connect_state()\n while True:\n try:\n cnx.do_handshake()\n except OpenSSL.SSL.WantReadError:\n rd, _, _ = select.select([sock], [], [], sock.gettimeout())\n if not rd:\n raise timeout('select timed out')\n continue\n except OpenSSL.SSL.Error as e:\n raise ssl.SSLError('bad handshake: %r' % e)\n break\n\n return WrappedSocket(cnx, sock)\n", "path": "urllib3/contrib/pyopenssl.py"}]}
| 3,879 | 123 |
gh_patches_debug_17020
|
rasdani/github-patches
|
git_diff
|
pypa__pip-9779
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip 21.0.1 fails when run with warnings converted to errors
**Environment**
* pip version: 21.0.1
* Python version: 3.9.1
* OS: Windows
**Description**
With the latest version of packaging (vendored in 21.0.1) a DeprecationWarning is issued when parsing a "legacy version". If pip is run with warnings converted to errors, this causes a failure.
**Expected behavior**
No error
**How to Reproduce**
`py -wE -m pip --version`
Or to pinpoint it further,
```
py -wE
>>> from pip._vendor import pkg_resources
```
This does *not* happen with setuptools 52.0.0; it appears to be related to the version of setuptools (44.0.0) that we vendor.
**Output**
```
Traceback (most recent call last):
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\packaging\version.py", line 57, in parse
return Version(version)
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\packaging\version.py", line 298, in __init__
raise InvalidVersion("Invalid version: '{0}'".format(version))
pip._vendor.packaging.version.InvalidVersion: Invalid version: 'pip'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 3252, in <module>
def _initialize_master_working_set():
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 3235, in _call_aside
f(*args, **kwargs)
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 3264, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 574, in _build_master
ws = cls()
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 567, in __init__
self.add_entry(entry)
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 623, in add_entry
for dist in find_distributions(entry, True):
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2061, in find_on_path
path_item_entries = _by_version_descending(filtered)
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2034, in _by_version_descending
return sorted(names, key=_by_version, reverse=True)
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2032, in _by_version
return [packaging.version.parse(part) for part in parts]
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2032, in <listcomp>
return [packaging.version.parse(part) for part in parts]
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\packaging\version.py", line 59, in parse
return LegacyVersion(version)
File "C:\Users\Gustav\AppData\Local\Programs\Python\Python39\lib\site-packages\pip\_vendor\packaging\version.py", line 127, in __init__
warnings.warn(
DeprecationWarning: Creating a LegacyVersion has been deprecated and will be removed in the next major release
```
--- END ISSUE ---
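As background on the failure mode: the traceback above only occurs because the interpreter promotes warnings to errors, so the `DeprecationWarning` emitted for a `LegacyVersion` turns into an exception while pip's vendored `pkg_resources` is being imported. Below is a minimal, self-contained sketch of that interaction; it emits a local warning instead of importing pip, and the module pattern is an assumption mirroring the one the eventual fix uses.

```python
import warnings

# Promote every warning to an exception -- the runtime equivalent of starting
# Python with warnings configured as errors, as in the reproduction above.
warnings.simplefilter("error")

# A filter scoped by module can carve out an exemption; this pattern would match
# e.g. pip._vendor.packaging.version, where the LegacyVersion warning is raised.
warnings.filterwarnings(
    "ignore", category=DeprecationWarning, module=r".*packaging\.version"
)

# Warnings issued from any other module (such as this script) still raise:
try:
    warnings.warn("creating a LegacyVersion is deprecated", DeprecationWarning)
except DeprecationWarning as exc:
    print("raised:", exc)
```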
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pip/__main__.py`
Content:
```
1 import os
2 import sys
3
4 # Remove '' and current working directory from the first entry
5 # of sys.path, if present to avoid using current directory
6 # in pip commands check, freeze, install, list and show,
7 # when invoked as python -m pip <command>
8 if sys.path[0] in ("", os.getcwd()):
9 sys.path.pop(0)
10
11 # If we are running from a wheel, add the wheel to sys.path
12 # This allows the usage python pip-*.whl/pip install pip-*.whl
13 if __package__ == "":
14 # __file__ is pip-*.whl/pip/__main__.py
15 # first dirname call strips of '/__main__.py', second strips off '/pip'
16 # Resulting path is the name of the wheel itself
17 # Add that to sys.path so we can import pip
18 path = os.path.dirname(os.path.dirname(__file__))
19 sys.path.insert(0, path)
20
21 from pip._internal.cli.main import main as _main
22
23 if __name__ == "__main__":
24 sys.exit(_main())
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pip/__main__.py b/src/pip/__main__.py
--- a/src/pip/__main__.py
+++ b/src/pip/__main__.py
@@ -1,5 +1,6 @@
import os
import sys
+import warnings
# Remove '' and current working directory from the first entry
# of sys.path, if present to avoid using current directory
@@ -18,7 +19,13 @@
path = os.path.dirname(os.path.dirname(__file__))
sys.path.insert(0, path)
-from pip._internal.cli.main import main as _main
-
if __name__ == "__main__":
+ # Work around the error reported in #9540, pending a proper fix.
+ # Note: It is essential the warning filter is set *before* importing
+ # pip, as the deprecation happens at import time, not runtime.
+ warnings.filterwarnings(
+ "ignore", category=DeprecationWarning, module=".*packaging\\.version"
+ )
+ from pip._internal.cli.main import main as _main
+
sys.exit(_main())
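One rough way to exercise a workaround like this (a hypothetical smoke test, not part of the patch): invoke pip in a subprocess with every warning promoted to an error and assert that it still exits cleanly.

```python
import subprocess
import sys

# `-W error` turns every warning into an exception, so a zero exit code implies
# the vendored LegacyVersion DeprecationWarning is no longer fatal at import time.
result = subprocess.run(
    [sys.executable, "-W", "error", "-m", "pip", "--version"],
    capture_output=True,
    text=True,
)
assert result.returncode == 0, result.stderr
print(result.stdout.strip())
```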
|
{"golden_diff": "diff --git a/src/pip/__main__.py b/src/pip/__main__.py\n--- a/src/pip/__main__.py\n+++ b/src/pip/__main__.py\n@@ -1,5 +1,6 @@\n import os\n import sys\n+import warnings\n \n # Remove '' and current working directory from the first entry\n # of sys.path, if present to avoid using current directory\n@@ -18,7 +19,13 @@\n path = os.path.dirname(os.path.dirname(__file__))\n sys.path.insert(0, path)\n \n-from pip._internal.cli.main import main as _main\n-\n if __name__ == \"__main__\":\n+ # Work around the error reported in #9540, pending a proper fix.\n+ # Note: It is essential the warning filter is set *before* importing\n+ # pip, as the deprecation happens at import time, not runtime.\n+ warnings.filterwarnings(\n+ \"ignore\", category=DeprecationWarning, module=\".*packaging\\\\.version\"\n+ )\n+ from pip._internal.cli.main import main as _main\n+\n sys.exit(_main())\n", "issue": "pip 21.0.1 fails when run with warnings converted to errors\n**Environment**\r\n\r\n* pip version: 21.0.1\r\n* Python version: 3.9.1\r\n* OS: Windows\r\n\r\n**Description**\r\nWith the latest version of packaging (vendored in 21.0.1) a DeprecationWarning is issued when parsing a \"legacy version\". If pip is run with warnings converted to errors, this causes a failure.\r\n\r\n**Expected behavior**\r\nNo error\r\n\r\n**How to Reproduce**\r\n`py -wE -m pip --version`\r\n\r\nOr to pinpoint it further,\r\n\r\n```\r\npy -wE\r\n>>> from pip._vendor import pkg_resources\r\n```\r\n\r\nThis does *not* happen with setuptools 52.0.0, it appears to be related to the version of setuptools (44.0.0) that we vendor.\r\n\r\n**Output**\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\packaging\\version.py\", line 57, in parse\r\n return Version(version)\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\packaging\\version.py\", line 298, in __init__\r\n raise InvalidVersion(\"Invalid version: '{0}'\".format(version))\r\npip._vendor.packaging.version.InvalidVersion: Invalid version: 'pip'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\pkg_resources\\__init__.py\", line 3252, in <module>\r\n def _initialize_master_working_set():\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\pkg_resources\\__init__.py\", line 3235, in _call_aside\r\n f(*args, **kwargs)\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\pkg_resources\\__init__.py\", line 3264, in _initialize_master_working_set\r\n working_set = WorkingSet._build_master()\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\pkg_resources\\__init__.py\", line 574, in _build_master\r\n ws = cls()\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\pkg_resources\\__init__.py\", line 567, in __init__\r\n self.add_entry(entry)\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\pkg_resources\\__init__.py\", line 623, in add_entry\r\n for dist in find_distributions(entry, True):\r\n File 
\"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\pkg_resources\\__init__.py\", line 2061, in find_on_path\r\n path_item_entries = _by_version_descending(filtered)\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\pkg_resources\\__init__.py\", line 2034, in _by_version_descending\r\n return sorted(names, key=_by_version, reverse=True)\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\pkg_resources\\__init__.py\", line 2032, in _by_version\r\n return [packaging.version.parse(part) for part in parts]\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\pkg_resources\\__init__.py\", line 2032, in <listcomp>\r\n return [packaging.version.parse(part) for part in parts]\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\packaging\\version.py\", line 59, in parse\r\n return LegacyVersion(version)\r\n File \"C:\\Users\\Gustav\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\pip\\_vendor\\packaging\\version.py\", line 127, in __init__\r\n warnings.warn(\r\nDeprecationWarning: Creating a LegacyVersion has been deprecated and will be removed in the next major release\r\n```\r\n\n", "before_files": [{"content": "import os\nimport sys\n\n# Remove '' and current working directory from the first entry\n# of sys.path, if present to avoid using current directory\n# in pip commands check, freeze, install, list and show,\n# when invoked as python -m pip <command>\nif sys.path[0] in (\"\", os.getcwd()):\n sys.path.pop(0)\n\n# If we are running from a wheel, add the wheel to sys.path\n# This allows the usage python pip-*.whl/pip install pip-*.whl\nif __package__ == \"\":\n # __file__ is pip-*.whl/pip/__main__.py\n # first dirname call strips of '/__main__.py', second strips off '/pip'\n # Resulting path is the name of the wheel itself\n # Add that to sys.path so we can import pip\n path = os.path.dirname(os.path.dirname(__file__))\n sys.path.insert(0, path)\n\nfrom pip._internal.cli.main import main as _main\n\nif __name__ == \"__main__\":\n sys.exit(_main())\n", "path": "src/pip/__main__.py"}], "after_files": [{"content": "import os\nimport sys\nimport warnings\n\n# Remove '' and current working directory from the first entry\n# of sys.path, if present to avoid using current directory\n# in pip commands check, freeze, install, list and show,\n# when invoked as python -m pip <command>\nif sys.path[0] in (\"\", os.getcwd()):\n sys.path.pop(0)\n\n# If we are running from a wheel, add the wheel to sys.path\n# This allows the usage python pip-*.whl/pip install pip-*.whl\nif __package__ == \"\":\n # __file__ is pip-*.whl/pip/__main__.py\n # first dirname call strips of '/__main__.py', second strips off '/pip'\n # Resulting path is the name of the wheel itself\n # Add that to sys.path so we can import pip\n path = os.path.dirname(os.path.dirname(__file__))\n sys.path.insert(0, path)\n\nif __name__ == \"__main__\":\n # Work around the error reported in #9540, pending a proper fix.\n # Note: It is essential the warning filter is set *before* importing\n # pip, as the deprecation happens at import time, not runtime.\n warnings.filterwarnings(\n \"ignore\", category=DeprecationWarning, module=\".*packaging\\\\.version\"\n )\n from pip._internal.cli.main import main as _main\n\n sys.exit(_main())\n", "path": "src/pip/__main__.py"}]}
| 1,644 | 247 |
gh_patches_debug_33257
|
rasdani/github-patches
|
git_diff
|
localstack__localstack-2244
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect response content-type header from cloudwatch logs
When using the .NET AWSSDK connected to localstack and querying cloudwatch logs, e.g.:

```csharp
var logClient = new AmazonCloudWatchLogsClient(new AmazonCloudWatchLogsConfig()
{
    ServiceURL = "http://localhost:4586",
    UseHttp = true,
    AuthenticationRegion = "eu-central-1",
});
var logGroupName = @"/aws/lambda/f1";
var events = logClient.FilterLogEventsAsync(new FilterLogEventsRequest()
{
    LogGroupName = logGroupName,
}).GetAwaiter().GetResult();
```

The response is returned from the server but cannot be parsed by the SDK client code, because the response's content-type header is text/html when it should be application/x-amz-json-1.1. This was confirmed using Fiddler traces comparing the response from localstack to the response from AWS in the cloud. Can this be fixed easily?
--- END ISSUE ---
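The report is not specific to the .NET SDK; a rough boto3 equivalent of the snippet in the issue is sketched below. The endpoint, region, and dummy credentials are assumptions mirroring the C# configuration, and the sketch presumes LocalStack is running and does not enforce request signing.

```python
import boto3

logs = boto3.client(
    "logs",
    endpoint_url="http://localhost:4586",  # assumed CloudWatch Logs edge port
    region_name="eu-central-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# botocore tends to be more forgiving about the response content-type, which is
# likely why the mismatch surfaced with the stricter .NET SDK first.
events = logs.filter_log_events(logGroupName="/aws/lambda/f1")
print(events.get("events", []))
```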
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `localstack/services/logs/logs_listener.py`
Content:
```
1 import re
2 from requests.models import Request
3 from localstack.utils.common import to_str
4 from localstack.services.generic_proxy import ProxyListener
5
6
7 class ProxyListenerCloudWatchLogs(ProxyListener):
8
9 def forward_request(self, method, path, data, headers):
10 if method == 'POST' and path == '/':
11 if 'nextToken' in to_str(data or ''):
12 data = self._fix_next_token_request(data)
13 headers['content-length'] = str(len(data))
14 return Request(data=data, headers=headers, method=method)
15
16 return True
17
18 def return_response(self, method, path, data, headers, response):
19 if 'nextToken' in to_str(response.content or ''):
20 self._fix_next_token_response(response)
21 response.headers['content-length'] = str(len(response._content))
22
23 def _fix_next_token_request(self, data):
24 # Fix for https://github.com/localstack/localstack/issues/1527
25 pattern = r'"nextToken":\s*"([0-9]+)"'
26 replacement = r'"nextToken": \1'
27 return re.sub(pattern, replacement, to_str(data))
28
29 def _fix_next_token_response(self, response):
30 # Fix for https://github.com/localstack/localstack/issues/1527
31 pattern = r'"nextToken":\s*([0-9]+)'
32 replacement = r'"nextToken": "\1"'
33 response._content = re.sub(pattern, replacement, to_str(response.content))
34
35
36 # instantiate listener
37 UPDATE_LOGS = ProxyListenerCloudWatchLogs()
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/localstack/services/logs/logs_listener.py b/localstack/services/logs/logs_listener.py
--- a/localstack/services/logs/logs_listener.py
+++ b/localstack/services/logs/logs_listener.py
@@ -1,11 +1,11 @@
import re
from requests.models import Request
from localstack.utils.common import to_str
+from localstack.constants import APPLICATION_AMZ_JSON_1_1
from localstack.services.generic_proxy import ProxyListener
class ProxyListenerCloudWatchLogs(ProxyListener):
-
def forward_request(self, method, path, data, headers):
if method == 'POST' and path == '/':
if 'nextToken' in to_str(data or ''):
@@ -16,17 +16,22 @@
return True
def return_response(self, method, path, data, headers, response):
+ # Fix Incorrect response content-type header from cloudwatch logs #1343
+ response.headers['content-type'] = APPLICATION_AMZ_JSON_1_1
+
if 'nextToken' in to_str(response.content or ''):
self._fix_next_token_response(response)
response.headers['content-length'] = str(len(response._content))
- def _fix_next_token_request(self, data):
+ @staticmethod
+ def _fix_next_token_request(data):
# Fix for https://github.com/localstack/localstack/issues/1527
pattern = r'"nextToken":\s*"([0-9]+)"'
replacement = r'"nextToken": \1'
return re.sub(pattern, replacement, to_str(data))
- def _fix_next_token_response(self, response):
+ @staticmethod
+ def _fix_next_token_response(response):
# Fix for https://github.com/localstack/localstack/issues/1527
pattern = r'"nextToken":\s*([0-9]+)'
replacement = r'"nextToken": "\1"'
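To see the header the listener actually returns, one can also hit the service with a raw HTTP call. A minimal sketch follows; the port and the `X-Amz-Target` value are assumptions based on the issue and the CloudWatch Logs JSON protocol, and no request signing is attempted.

```python
import json
import requests

resp = requests.post(
    "http://localhost:4586/",
    data=json.dumps({"logGroupName": "/aws/lambda/f1"}),
    headers={
        "Content-Type": "application/x-amz-json-1.1",
        "X-Amz-Target": "Logs_20140328.FilterLogEvents",
    },
)

# Before the patch this printed text/html; with the fix it should report
# application/x-amz-json-1.1.
print(resp.headers.get("content-type"))
```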
|
{"golden_diff": "diff --git a/localstack/services/logs/logs_listener.py b/localstack/services/logs/logs_listener.py\n--- a/localstack/services/logs/logs_listener.py\n+++ b/localstack/services/logs/logs_listener.py\n@@ -1,11 +1,11 @@\n import re\n from requests.models import Request\n from localstack.utils.common import to_str\n+from localstack.constants import APPLICATION_AMZ_JSON_1_1\n from localstack.services.generic_proxy import ProxyListener\n \n \n class ProxyListenerCloudWatchLogs(ProxyListener):\n-\n def forward_request(self, method, path, data, headers):\n if method == 'POST' and path == '/':\n if 'nextToken' in to_str(data or ''):\n@@ -16,17 +16,22 @@\n return True\n \n def return_response(self, method, path, data, headers, response):\n+ # Fix Incorrect response content-type header from cloudwatch logs #1343\n+ response.headers['content-type'] = APPLICATION_AMZ_JSON_1_1\n+\n if 'nextToken' in to_str(response.content or ''):\n self._fix_next_token_response(response)\n response.headers['content-length'] = str(len(response._content))\n \n- def _fix_next_token_request(self, data):\n+ @staticmethod\n+ def _fix_next_token_request(data):\n # Fix for https://github.com/localstack/localstack/issues/1527\n pattern = r'\"nextToken\":\\s*\"([0-9]+)\"'\n replacement = r'\"nextToken\": \\1'\n return re.sub(pattern, replacement, to_str(data))\n \n- def _fix_next_token_response(self, response):\n+ @staticmethod\n+ def _fix_next_token_response(response):\n # Fix for https://github.com/localstack/localstack/issues/1527\n pattern = r'\"nextToken\":\\s*([0-9]+)'\n replacement = r'\"nextToken\": \"\\1\"'\n", "issue": "Incorrect response content-type header from cloudwatch logs\nWhen using the .NET AWSSDK connected to localstack and querying cloudwatch logs e.g. \r\n var logClient = new AmazonCloudWatchLogsClient(new AmazonCloudWatchLogsConfig()\r\n {\r\n ServiceURL = \"http://localhost:4586\",\r\n UseHttp = true,\r\n AuthenticationRegion = \"eu-central-1\",\r\n });\r\n var logGroupName = @\"/aws/lambda/f1\";\r\n var events = logClient.FilterLogEventsAsync(new FilterLogEventsRequest()\r\n {\r\n LogGroupName = logGroupName,\r\n }).GetAwaiter().GetResult();\r\n\r\nThe response is returned from the server but cannot be parsed by the SDK client code because the response's content-type header is text/html when it should be application/x-amz-json-1.1. This confirmed using Fiddler traces comparing the response from localstack to the response from AWS in the cloud. 
Can this be fixed easily?\n", "before_files": [{"content": "import re\nfrom requests.models import Request\nfrom localstack.utils.common import to_str\nfrom localstack.services.generic_proxy import ProxyListener\n\n\nclass ProxyListenerCloudWatchLogs(ProxyListener):\n\n def forward_request(self, method, path, data, headers):\n if method == 'POST' and path == '/':\n if 'nextToken' in to_str(data or ''):\n data = self._fix_next_token_request(data)\n headers['content-length'] = str(len(data))\n return Request(data=data, headers=headers, method=method)\n\n return True\n\n def return_response(self, method, path, data, headers, response):\n if 'nextToken' in to_str(response.content or ''):\n self._fix_next_token_response(response)\n response.headers['content-length'] = str(len(response._content))\n\n def _fix_next_token_request(self, data):\n # Fix for https://github.com/localstack/localstack/issues/1527\n pattern = r'\"nextToken\":\\s*\"([0-9]+)\"'\n replacement = r'\"nextToken\": \\1'\n return re.sub(pattern, replacement, to_str(data))\n\n def _fix_next_token_response(self, response):\n # Fix for https://github.com/localstack/localstack/issues/1527\n pattern = r'\"nextToken\":\\s*([0-9]+)'\n replacement = r'\"nextToken\": \"\\1\"'\n response._content = re.sub(pattern, replacement, to_str(response.content))\n\n\n# instantiate listener\nUPDATE_LOGS = ProxyListenerCloudWatchLogs()\n", "path": "localstack/services/logs/logs_listener.py"}], "after_files": [{"content": "import re\nfrom requests.models import Request\nfrom localstack.utils.common import to_str\nfrom localstack.constants import APPLICATION_AMZ_JSON_1_1\nfrom localstack.services.generic_proxy import ProxyListener\n\n\nclass ProxyListenerCloudWatchLogs(ProxyListener):\n def forward_request(self, method, path, data, headers):\n if method == 'POST' and path == '/':\n if 'nextToken' in to_str(data or ''):\n data = self._fix_next_token_request(data)\n headers['content-length'] = str(len(data))\n return Request(data=data, headers=headers, method=method)\n\n return True\n\n def return_response(self, method, path, data, headers, response):\n # Fix Incorrect response content-type header from cloudwatch logs #1343\n response.headers['content-type'] = APPLICATION_AMZ_JSON_1_1\n\n if 'nextToken' in to_str(response.content or ''):\n self._fix_next_token_response(response)\n response.headers['content-length'] = str(len(response._content))\n\n @staticmethod\n def _fix_next_token_request(data):\n # Fix for https://github.com/localstack/localstack/issues/1527\n pattern = r'\"nextToken\":\\s*\"([0-9]+)\"'\n replacement = r'\"nextToken\": \\1'\n return re.sub(pattern, replacement, to_str(data))\n\n @staticmethod\n def _fix_next_token_response(response):\n # Fix for https://github.com/localstack/localstack/issues/1527\n pattern = r'\"nextToken\":\\s*([0-9]+)'\n replacement = r'\"nextToken\": \"\\1\"'\n response._content = re.sub(pattern, replacement, to_str(response.content))\n\n\n# instantiate listener\nUPDATE_LOGS = ProxyListenerCloudWatchLogs()\n", "path": "localstack/services/logs/logs_listener.py"}]}
| 863 | 418 |
gh_patches_debug_33948
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-1279
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rename Epoch to Iterations when using epoch_length with max_epochs=1
## 🚀 Feature
Per the discussion in https://github.com/pytorch/ignite/issues/1263, we'd like to ensure that the progress bar uses the name "Iterations" instead of "Epoch" when the `epoch_length` is passed to the trainer for `max_epochs=1`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/contrib/handlers/tqdm_logger.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import warnings
3 from typing import Any, Mapping
4
5 import torch
6
7 from ignite.contrib.handlers.base_logger import BaseLogger, BaseOutputHandler
8 from ignite.engine import Engine, Events
9 from ignite.engine.events import CallableEventWithFilter
10
11
12 class _OutputHandler(BaseOutputHandler):
13 """Helper handler to log engine's output and/or metrics
14
15 Args:
16 description (str): progress bar description.
17 metric_names (list of str, optional): list of metric names to plot or a string "all" to plot all available
18 metrics.
19 output_transform (callable, optional): output transform function to prepare `engine.state.output` as a number.
20 For example, `output_transform = lambda output: output`
21 This function can also return a dictionary, e.g `{'loss': loss1, 'another_loss': loss2}` to label the plot
22 with corresponding keys.
23 closing_event_name: event's name on which the progress bar is closed. Valid events are from
24 :class:`~ignite.engine.events.Events` or any `event_name` added by
25 :meth:`~ignite.engine.engine.Engine.register_events`.
26
27 """
28
29 def __init__(
30 self, description, metric_names=None, output_transform=None, closing_event_name=Events.EPOCH_COMPLETED
31 ):
32 if metric_names is None and output_transform is None:
33 # This helps to avoid 'Either metric_names or output_transform should be defined' of BaseOutputHandler
34 metric_names = []
35 super(_OutputHandler, self).__init__(description, metric_names, output_transform, global_step_transform=None)
36 self.closing_event_name = closing_event_name
37
38 @staticmethod
39 def get_max_number_events(event_name, engine):
40 if event_name in (Events.ITERATION_STARTED, Events.ITERATION_COMPLETED):
41 return engine.state.epoch_length
42 if event_name in (Events.EPOCH_STARTED, Events.EPOCH_COMPLETED):
43 return engine.state.max_epochs
44 return 1
45
46 def __call__(self, engine, logger, event_name):
47
48 pbar_total = self.get_max_number_events(event_name, engine)
49 if logger.pbar is None:
50 logger._reset(pbar_total=pbar_total)
51
52 desc = self.tag
53 max_num_of_closing_events = self.get_max_number_events(self.closing_event_name, engine)
54 if max_num_of_closing_events > 1:
55 global_step = engine.state.get_event_attrib_value(self.closing_event_name)
56 desc += " [{}/{}]".format(global_step, max_num_of_closing_events)
57 logger.pbar.set_description(desc)
58
59 metrics = self._setup_output_metrics(engine)
60
61 rendered_metrics = {}
62 for key, value in metrics.items():
63 if isinstance(value, torch.Tensor):
64 if value.ndimension() == 0:
65 rendered_metrics[key] = value.item()
66 elif value.ndimension() == 1:
67 for i, v in enumerate(value):
68 k = "{}_{}".format(key, i)
69 rendered_metrics[k] = v.item()
70 else:
71 warnings.warn("ProgressBar can not log " "tensor with {} dimensions".format(value.ndimension()))
72 else:
73 rendered_metrics[key] = value
74
75 if rendered_metrics:
76 logger.pbar.set_postfix(**rendered_metrics)
77
78 global_step = engine.state.get_event_attrib_value(event_name)
79 if pbar_total is not None:
80 global_step = (global_step - 1) % pbar_total + 1
81 logger.pbar.update(global_step - logger.pbar.n)
82
83
84 class ProgressBar(BaseLogger):
85 """
86 TQDM progress bar handler to log training progress and computed metrics.
87
88 Args:
89 persist (bool, optional): set to ``True`` to persist the progress bar after completion (default = ``False``)
90 bar_format (str, optional): Specify a custom bar string formatting. May impact performance.
91 [default: '{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]'].
92 Set to ``None`` to use ``tqdm`` default bar formatting: '{l_bar}{bar}{r_bar}', where
93 l_bar='{desc}: {percentage:3.0f}%|' and
94 r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'. For more details on the
95 formatting, see `tqdm docs <https://tqdm.github.io/docs/tqdm/>`_.
96 **tqdm_kwargs: kwargs passed to tqdm progress bar.
97 By default, progress bar description displays "Epoch [5/10]" where 5 is the current epoch and 10 is the
98 number of epochs. If tqdm_kwargs defines `desc`, e.g. "Predictions", than the description is
99 "Predictions [5/10]" if number of epochs is more than one otherwise it is simply "Predictions".
100
101 Examples:
102
103 Simple progress bar
104
105 .. code-block:: python
106
107 trainer = create_supervised_trainer(model, optimizer, loss)
108
109 pbar = ProgressBar()
110 pbar.attach(trainer)
111
112 # Progress bar will looks like
113 # Epoch [2/50]: [64/128] 50%|█████ [06:17<12:34]
114
115 Log output to a file instead of stderr (tqdm's default output)
116
117 .. code-block:: python
118
119 trainer = create_supervised_trainer(model, optimizer, loss)
120
121 log_file = open("output.log", "w")
122 pbar = ProgressBar(file=log_file)
123 pbar.attach(trainer)
124
125 Attach metrics that already have been computed at :attr:`~ignite.engine.events.Events.ITERATION_COMPLETED`
126 (such as :class:`~ignite.metrics.RunningAverage`)
127
128 .. code-block:: python
129
130 trainer = create_supervised_trainer(model, optimizer, loss)
131
132 RunningAverage(output_transform=lambda x: x).attach(trainer, 'loss')
133
134 pbar = ProgressBar()
135 pbar.attach(trainer, ['loss'])
136
137 # Progress bar will looks like
138 # Epoch [2/50]: [64/128] 50%|█████ , loss=0.123 [06:17<12:34]
139
140 Directly attach the engine's output
141
142 .. code-block:: python
143
144 trainer = create_supervised_trainer(model, optimizer, loss)
145
146 pbar = ProgressBar()
147 pbar.attach(trainer, output_transform=lambda x: {'loss': x})
148
149 # Progress bar will looks like
150 # Epoch [2/50]: [64/128] 50%|█████ , loss=0.123 [06:17<12:34]
151
152 Note:
153 When adding attaching the progress bar to an engine, it is recommend that you replace
154 every print operation in the engine's handlers triggered every iteration with
155 ``pbar.log_message`` to guarantee the correct format of the stdout.
156
157 Note:
158 When using inside jupyter notebook, `ProgressBar` automatically uses `tqdm_notebook`. For correct rendering,
159 please install `ipywidgets <https://ipywidgets.readthedocs.io/en/stable/user_install.html#installation>`_.
160 Due to `tqdm notebook bugs <https://github.com/tqdm/tqdm/issues/594>`_, bar format may be needed to be set
161 to an empty string value.
162
163 """
164
165 _events_order = [
166 Events.STARTED,
167 Events.EPOCH_STARTED,
168 Events.ITERATION_STARTED,
169 Events.ITERATION_COMPLETED,
170 Events.EPOCH_COMPLETED,
171 Events.COMPLETED,
172 ]
173
174 def __init__(
175 self,
176 persist=False,
177 bar_format="{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]",
178 **tqdm_kwargs
179 ):
180
181 try:
182 from tqdm.autonotebook import tqdm
183 except ImportError:
184 raise RuntimeError(
185 "This contrib module requires tqdm to be installed. "
186 "Please install it with command: \n pip install tqdm"
187 )
188
189 self.pbar_cls = tqdm
190 self.pbar = None
191 self.persist = persist
192 self.bar_format = bar_format
193 self.tqdm_kwargs = tqdm_kwargs
194
195 def _reset(self, pbar_total):
196 self.pbar = self.pbar_cls(
197 total=pbar_total, leave=self.persist, bar_format=self.bar_format, initial=1, **self.tqdm_kwargs
198 )
199
200 def _close(self, engine):
201 if self.pbar is not None:
202 # https://github.com/tqdm/notebook.py#L240-L250
203 # issue #1115 : notebook backend of tqdm checks if n < total (error or KeyboardInterrupt)
204 # and the bar persists in 'danger' mode
205 if self.pbar.total is not None:
206 self.pbar.n = self.pbar.total
207 self.pbar.close()
208 self.pbar = None
209
210 @staticmethod
211 def _compare_lt(event1, event2):
212 i1 = ProgressBar._events_order.index(event1)
213 i2 = ProgressBar._events_order.index(event2)
214 return i1 < i2
215
216 def log_message(self, message):
217 """
218 Logs a message, preserving the progress bar correct output format.
219
220 Args:
221 message (str): string you wish to log.
222 """
223 from tqdm import tqdm
224
225 tqdm.write(message, file=self.tqdm_kwargs.get("file", None))
226
227 def attach(
228 self,
229 engine,
230 metric_names=None,
231 output_transform=None,
232 event_name=Events.ITERATION_COMPLETED,
233 closing_event_name=Events.EPOCH_COMPLETED,
234 ):
235 """
236 Attaches the progress bar to an engine object.
237
238 Args:
239 engine (Engine): engine object.
240 metric_names (list of str, optional): list of metric names to plot or a string "all" to plot all available
241 metrics.
242 output_transform (callable, optional): a function to select what you want to print from the engine's
243 output. This function may return either a dictionary with entries in the format of ``{name: value}``,
244 or a single scalar, which will be displayed with the default name `output`.
245 event_name: event's name on which the progress bar advances. Valid events are from
246 :class:`~ignite.engine.events.Events`.
247 closing_event_name: event's name on which the progress bar is closed. Valid events are from
248 :class:`~ignite.engine.events.Events`.
249
250 Note: accepted output value types are numbers, 0d and 1d torch tensors and strings
251
252 """
253 desc = self.tqdm_kwargs.get("desc", "Epoch")
254
255 if event_name not in engine._allowed_events:
256 raise ValueError("Logging event {} is not in allowed events for this engine".format(event_name.name))
257
258 if isinstance(closing_event_name, CallableEventWithFilter):
259 if closing_event_name.filter != CallableEventWithFilter.default_event_filter:
260 raise ValueError("Closing Event should not be a filtered event")
261
262 if not self._compare_lt(event_name, closing_event_name):
263 raise ValueError(
264 "Logging event {} should be called before closing event {}".format(event_name, closing_event_name)
265 )
266
267 log_handler = _OutputHandler(desc, metric_names, output_transform, closing_event_name=closing_event_name)
268
269 super(ProgressBar, self).attach(engine, log_handler, event_name)
270 engine.add_event_handler(closing_event_name, self._close)
271
272 def attach_opt_params_handler(self, engine: Engine, event_name: str, *args: Any, **kwargs: Mapping):
273 """Intentionally empty"""
274 pass
275
276 def _create_output_handler(self, *args, **kwargs):
277 return _OutputHandler(*args, **kwargs)
278
279 def _create_opt_params_handler(self, *args, **kwargs):
280 """Intentionally empty"""
281 pass
282
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/contrib/handlers/tqdm_logger.py b/ignite/contrib/handlers/tqdm_logger.py
--- a/ignite/contrib/handlers/tqdm_logger.py
+++ b/ignite/contrib/handlers/tqdm_logger.py
@@ -49,7 +49,10 @@
if logger.pbar is None:
logger._reset(pbar_total=pbar_total)
- desc = self.tag
+ max_epochs = engine.state.max_epochs
+ default_desc = "Iteration" if max_epochs == 1 else "Epoch"
+
+ desc = self.tag or default_desc
max_num_of_closing_events = self.get_max_number_events(self.closing_event_name, engine)
if max_num_of_closing_events > 1:
global_step = engine.state.get_event_attrib_value(self.closing_event_name)
@@ -95,7 +98,8 @@
formatting, see `tqdm docs <https://tqdm.github.io/docs/tqdm/>`_.
**tqdm_kwargs: kwargs passed to tqdm progress bar.
By default, progress bar description displays "Epoch [5/10]" where 5 is the current epoch and 10 is the
- number of epochs. If tqdm_kwargs defines `desc`, e.g. "Predictions", than the description is
+ number of epochs; however, if ``max_epochs`` are set to 1, the progress bar instead displays
+ "Iteration: [5/10]". If tqdm_kwargs defines `desc`, e.g. "Predictions", than the description is
"Predictions [5/10]" if number of epochs is more than one otherwise it is simply "Predictions".
Examples:
@@ -250,7 +254,7 @@
Note: accepted output value types are numbers, 0d and 1d torch tensors and strings
"""
- desc = self.tqdm_kwargs.get("desc", "Epoch")
+ desc = self.tqdm_kwargs.get("desc", None)
if event_name not in engine._allowed_events:
raise ValueError("Logging event {} is not in allowed events for this engine".format(event_name.name))
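A small usage sketch of the intended behaviour after this change (requires `tqdm`; the data and step function are placeholders, and the exact bar text is illustrative):

```python
from ignite.engine import Engine
from ignite.contrib.handlers import ProgressBar

def train_step(engine, batch):
    return batch  # dummy training step

trainer = Engine(train_step)
ProgressBar(persist=True).attach(trainer)

# max_epochs=1 with an explicit epoch_length: the bar is now labelled
# "Iteration" rather than the misleading "Epoch [1/1]".
trainer.run(range(100), max_epochs=1, epoch_length=100)

# With more than one epoch the label stays "Epoch [k/2]" as before.
trainer.run(range(100), max_epochs=2)
```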
|
{"golden_diff": "diff --git a/ignite/contrib/handlers/tqdm_logger.py b/ignite/contrib/handlers/tqdm_logger.py\n--- a/ignite/contrib/handlers/tqdm_logger.py\n+++ b/ignite/contrib/handlers/tqdm_logger.py\n@@ -49,7 +49,10 @@\n if logger.pbar is None:\n logger._reset(pbar_total=pbar_total)\n \n- desc = self.tag\n+ max_epochs = engine.state.max_epochs\n+ default_desc = \"Iteration\" if max_epochs == 1 else \"Epoch\"\n+\n+ desc = self.tag or default_desc\n max_num_of_closing_events = self.get_max_number_events(self.closing_event_name, engine)\n if max_num_of_closing_events > 1:\n global_step = engine.state.get_event_attrib_value(self.closing_event_name)\n@@ -95,7 +98,8 @@\n formatting, see `tqdm docs <https://tqdm.github.io/docs/tqdm/>`_.\n **tqdm_kwargs: kwargs passed to tqdm progress bar.\n By default, progress bar description displays \"Epoch [5/10]\" where 5 is the current epoch and 10 is the\n- number of epochs. If tqdm_kwargs defines `desc`, e.g. \"Predictions\", than the description is\n+ number of epochs; however, if ``max_epochs`` are set to 1, the progress bar instead displays\n+ \"Iteration: [5/10]\". If tqdm_kwargs defines `desc`, e.g. \"Predictions\", than the description is\n \"Predictions [5/10]\" if number of epochs is more than one otherwise it is simply \"Predictions\".\n \n Examples:\n@@ -250,7 +254,7 @@\n Note: accepted output value types are numbers, 0d and 1d torch tensors and strings\n \n \"\"\"\n- desc = self.tqdm_kwargs.get(\"desc\", \"Epoch\")\n+ desc = self.tqdm_kwargs.get(\"desc\", None)\n \n if event_name not in engine._allowed_events:\n raise ValueError(\"Logging event {} is not in allowed events for this engine\".format(event_name.name))\n", "issue": "Rename Epoch to Iterations when using epoch_length with max_epochs=1\n## \ud83d\ude80 Feature\r\n\r\nPer the discussion in https://github.com/pytorch/ignite/issues/1263, we'd like to ensure that the progress bar uses the name \"Iterations\" instead of \"Epoch\" when the `epoch_length` is passed to the trainer for `max_epochs=1`.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport warnings\nfrom typing import Any, Mapping\n\nimport torch\n\nfrom ignite.contrib.handlers.base_logger import BaseLogger, BaseOutputHandler\nfrom ignite.engine import Engine, Events\nfrom ignite.engine.events import CallableEventWithFilter\n\n\nclass _OutputHandler(BaseOutputHandler):\n \"\"\"Helper handler to log engine's output and/or metrics\n\n Args:\n description (str): progress bar description.\n metric_names (list of str, optional): list of metric names to plot or a string \"all\" to plot all available\n metrics.\n output_transform (callable, optional): output transform function to prepare `engine.state.output` as a number.\n For example, `output_transform = lambda output: output`\n This function can also return a dictionary, e.g `{'loss': loss1, 'another_loss': loss2}` to label the plot\n with corresponding keys.\n closing_event_name: event's name on which the progress bar is closed. 
Valid events are from\n :class:`~ignite.engine.events.Events` or any `event_name` added by\n :meth:`~ignite.engine.engine.Engine.register_events`.\n\n \"\"\"\n\n def __init__(\n self, description, metric_names=None, output_transform=None, closing_event_name=Events.EPOCH_COMPLETED\n ):\n if metric_names is None and output_transform is None:\n # This helps to avoid 'Either metric_names or output_transform should be defined' of BaseOutputHandler\n metric_names = []\n super(_OutputHandler, self).__init__(description, metric_names, output_transform, global_step_transform=None)\n self.closing_event_name = closing_event_name\n\n @staticmethod\n def get_max_number_events(event_name, engine):\n if event_name in (Events.ITERATION_STARTED, Events.ITERATION_COMPLETED):\n return engine.state.epoch_length\n if event_name in (Events.EPOCH_STARTED, Events.EPOCH_COMPLETED):\n return engine.state.max_epochs\n return 1\n\n def __call__(self, engine, logger, event_name):\n\n pbar_total = self.get_max_number_events(event_name, engine)\n if logger.pbar is None:\n logger._reset(pbar_total=pbar_total)\n\n desc = self.tag\n max_num_of_closing_events = self.get_max_number_events(self.closing_event_name, engine)\n if max_num_of_closing_events > 1:\n global_step = engine.state.get_event_attrib_value(self.closing_event_name)\n desc += \" [{}/{}]\".format(global_step, max_num_of_closing_events)\n logger.pbar.set_description(desc)\n\n metrics = self._setup_output_metrics(engine)\n\n rendered_metrics = {}\n for key, value in metrics.items():\n if isinstance(value, torch.Tensor):\n if value.ndimension() == 0:\n rendered_metrics[key] = value.item()\n elif value.ndimension() == 1:\n for i, v in enumerate(value):\n k = \"{}_{}\".format(key, i)\n rendered_metrics[k] = v.item()\n else:\n warnings.warn(\"ProgressBar can not log \" \"tensor with {} dimensions\".format(value.ndimension()))\n else:\n rendered_metrics[key] = value\n\n if rendered_metrics:\n logger.pbar.set_postfix(**rendered_metrics)\n\n global_step = engine.state.get_event_attrib_value(event_name)\n if pbar_total is not None:\n global_step = (global_step - 1) % pbar_total + 1\n logger.pbar.update(global_step - logger.pbar.n)\n\n\nclass ProgressBar(BaseLogger):\n \"\"\"\n TQDM progress bar handler to log training progress and computed metrics.\n\n Args:\n persist (bool, optional): set to ``True`` to persist the progress bar after completion (default = ``False``)\n bar_format (str, optional): Specify a custom bar string formatting. May impact performance.\n [default: '{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]'].\n Set to ``None`` to use ``tqdm`` default bar formatting: '{l_bar}{bar}{r_bar}', where\n l_bar='{desc}: {percentage:3.0f}%|' and\n r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'. For more details on the\n formatting, see `tqdm docs <https://tqdm.github.io/docs/tqdm/>`_.\n **tqdm_kwargs: kwargs passed to tqdm progress bar.\n By default, progress bar description displays \"Epoch [5/10]\" where 5 is the current epoch and 10 is the\n number of epochs. If tqdm_kwargs defines `desc`, e.g. \"Predictions\", than the description is\n \"Predictions [5/10]\" if number of epochs is more than one otherwise it is simply \"Predictions\".\n\n Examples:\n\n Simple progress bar\n\n .. 
code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n pbar = ProgressBar()\n pbar.attach(trainer)\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 [06:17<12:34]\n\n Log output to a file instead of stderr (tqdm's default output)\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n log_file = open(\"output.log\", \"w\")\n pbar = ProgressBar(file=log_file)\n pbar.attach(trainer)\n\n Attach metrics that already have been computed at :attr:`~ignite.engine.events.Events.ITERATION_COMPLETED`\n (such as :class:`~ignite.metrics.RunningAverage`)\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n RunningAverage(output_transform=lambda x: x).attach(trainer, 'loss')\n\n pbar = ProgressBar()\n pbar.attach(trainer, ['loss'])\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 , loss=0.123 [06:17<12:34]\n\n Directly attach the engine's output\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n pbar = ProgressBar()\n pbar.attach(trainer, output_transform=lambda x: {'loss': x})\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 , loss=0.123 [06:17<12:34]\n\n Note:\n When adding attaching the progress bar to an engine, it is recommend that you replace\n every print operation in the engine's handlers triggered every iteration with\n ``pbar.log_message`` to guarantee the correct format of the stdout.\n\n Note:\n When using inside jupyter notebook, `ProgressBar` automatically uses `tqdm_notebook`. For correct rendering,\n please install `ipywidgets <https://ipywidgets.readthedocs.io/en/stable/user_install.html#installation>`_.\n Due to `tqdm notebook bugs <https://github.com/tqdm/tqdm/issues/594>`_, bar format may be needed to be set\n to an empty string value.\n\n \"\"\"\n\n _events_order = [\n Events.STARTED,\n Events.EPOCH_STARTED,\n Events.ITERATION_STARTED,\n Events.ITERATION_COMPLETED,\n Events.EPOCH_COMPLETED,\n Events.COMPLETED,\n ]\n\n def __init__(\n self,\n persist=False,\n bar_format=\"{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]\",\n **tqdm_kwargs\n ):\n\n try:\n from tqdm.autonotebook import tqdm\n except ImportError:\n raise RuntimeError(\n \"This contrib module requires tqdm to be installed. 
\"\n \"Please install it with command: \\n pip install tqdm\"\n )\n\n self.pbar_cls = tqdm\n self.pbar = None\n self.persist = persist\n self.bar_format = bar_format\n self.tqdm_kwargs = tqdm_kwargs\n\n def _reset(self, pbar_total):\n self.pbar = self.pbar_cls(\n total=pbar_total, leave=self.persist, bar_format=self.bar_format, initial=1, **self.tqdm_kwargs\n )\n\n def _close(self, engine):\n if self.pbar is not None:\n # https://github.com/tqdm/notebook.py#L240-L250\n # issue #1115 : notebook backend of tqdm checks if n < total (error or KeyboardInterrupt)\n # and the bar persists in 'danger' mode\n if self.pbar.total is not None:\n self.pbar.n = self.pbar.total\n self.pbar.close()\n self.pbar = None\n\n @staticmethod\n def _compare_lt(event1, event2):\n i1 = ProgressBar._events_order.index(event1)\n i2 = ProgressBar._events_order.index(event2)\n return i1 < i2\n\n def log_message(self, message):\n \"\"\"\n Logs a message, preserving the progress bar correct output format.\n\n Args:\n message (str): string you wish to log.\n \"\"\"\n from tqdm import tqdm\n\n tqdm.write(message, file=self.tqdm_kwargs.get(\"file\", None))\n\n def attach(\n self,\n engine,\n metric_names=None,\n output_transform=None,\n event_name=Events.ITERATION_COMPLETED,\n closing_event_name=Events.EPOCH_COMPLETED,\n ):\n \"\"\"\n Attaches the progress bar to an engine object.\n\n Args:\n engine (Engine): engine object.\n metric_names (list of str, optional): list of metric names to plot or a string \"all\" to plot all available\n metrics.\n output_transform (callable, optional): a function to select what you want to print from the engine's\n output. This function may return either a dictionary with entries in the format of ``{name: value}``,\n or a single scalar, which will be displayed with the default name `output`.\n event_name: event's name on which the progress bar advances. Valid events are from\n :class:`~ignite.engine.events.Events`.\n closing_event_name: event's name on which the progress bar is closed. 
Valid events are from\n :class:`~ignite.engine.events.Events`.\n\n Note: accepted output value types are numbers, 0d and 1d torch tensors and strings\n\n \"\"\"\n desc = self.tqdm_kwargs.get(\"desc\", \"Epoch\")\n\n if event_name not in engine._allowed_events:\n raise ValueError(\"Logging event {} is not in allowed events for this engine\".format(event_name.name))\n\n if isinstance(closing_event_name, CallableEventWithFilter):\n if closing_event_name.filter != CallableEventWithFilter.default_event_filter:\n raise ValueError(\"Closing Event should not be a filtered event\")\n\n if not self._compare_lt(event_name, closing_event_name):\n raise ValueError(\n \"Logging event {} should be called before closing event {}\".format(event_name, closing_event_name)\n )\n\n log_handler = _OutputHandler(desc, metric_names, output_transform, closing_event_name=closing_event_name)\n\n super(ProgressBar, self).attach(engine, log_handler, event_name)\n engine.add_event_handler(closing_event_name, self._close)\n\n def attach_opt_params_handler(self, engine: Engine, event_name: str, *args: Any, **kwargs: Mapping):\n \"\"\"Intentionally empty\"\"\"\n pass\n\n def _create_output_handler(self, *args, **kwargs):\n return _OutputHandler(*args, **kwargs)\n\n def _create_opt_params_handler(self, *args, **kwargs):\n \"\"\"Intentionally empty\"\"\"\n pass\n", "path": "ignite/contrib/handlers/tqdm_logger.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport warnings\nfrom typing import Any, Mapping\n\nimport torch\n\nfrom ignite.contrib.handlers.base_logger import BaseLogger, BaseOutputHandler\nfrom ignite.engine import Engine, Events\nfrom ignite.engine.events import CallableEventWithFilter\n\n\nclass _OutputHandler(BaseOutputHandler):\n \"\"\"Helper handler to log engine's output and/or metrics\n\n Args:\n description (str): progress bar description.\n metric_names (list of str, optional): list of metric names to plot or a string \"all\" to plot all available\n metrics.\n output_transform (callable, optional): output transform function to prepare `engine.state.output` as a number.\n For example, `output_transform = lambda output: output`\n This function can also return a dictionary, e.g `{'loss': loss1, 'another_loss': loss2}` to label the plot\n with corresponding keys.\n closing_event_name: event's name on which the progress bar is closed. 
Valid events are from\n :class:`~ignite.engine.events.Events` or any `event_name` added by\n :meth:`~ignite.engine.engine.Engine.register_events`.\n\n \"\"\"\n\n def __init__(\n self, description, metric_names=None, output_transform=None, closing_event_name=Events.EPOCH_COMPLETED\n ):\n if metric_names is None and output_transform is None:\n # This helps to avoid 'Either metric_names or output_transform should be defined' of BaseOutputHandler\n metric_names = []\n super(_OutputHandler, self).__init__(description, metric_names, output_transform, global_step_transform=None)\n self.closing_event_name = closing_event_name\n\n @staticmethod\n def get_max_number_events(event_name, engine):\n if event_name in (Events.ITERATION_STARTED, Events.ITERATION_COMPLETED):\n return engine.state.epoch_length\n if event_name in (Events.EPOCH_STARTED, Events.EPOCH_COMPLETED):\n return engine.state.max_epochs\n return 1\n\n def __call__(self, engine, logger, event_name):\n\n pbar_total = self.get_max_number_events(event_name, engine)\n if logger.pbar is None:\n logger._reset(pbar_total=pbar_total)\n\n max_epochs = engine.state.max_epochs\n default_desc = \"Iteration\" if max_epochs == 1 else \"Epoch\"\n\n desc = self.tag or default_desc\n max_num_of_closing_events = self.get_max_number_events(self.closing_event_name, engine)\n if max_num_of_closing_events > 1:\n global_step = engine.state.get_event_attrib_value(self.closing_event_name)\n desc += \" [{}/{}]\".format(global_step, max_num_of_closing_events)\n logger.pbar.set_description(desc)\n\n metrics = self._setup_output_metrics(engine)\n\n rendered_metrics = {}\n for key, value in metrics.items():\n if isinstance(value, torch.Tensor):\n if value.ndimension() == 0:\n rendered_metrics[key] = value.item()\n elif value.ndimension() == 1:\n for i, v in enumerate(value):\n k = \"{}_{}\".format(key, i)\n rendered_metrics[k] = v.item()\n else:\n warnings.warn(\"ProgressBar can not log \" \"tensor with {} dimensions\".format(value.ndimension()))\n else:\n rendered_metrics[key] = value\n\n if rendered_metrics:\n logger.pbar.set_postfix(**rendered_metrics)\n\n global_step = engine.state.get_event_attrib_value(event_name)\n if pbar_total is not None:\n global_step = (global_step - 1) % pbar_total + 1\n logger.pbar.update(global_step - logger.pbar.n)\n\n\nclass ProgressBar(BaseLogger):\n \"\"\"\n TQDM progress bar handler to log training progress and computed metrics.\n\n Args:\n persist (bool, optional): set to ``True`` to persist the progress bar after completion (default = ``False``)\n bar_format (str, optional): Specify a custom bar string formatting. May impact performance.\n [default: '{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]'].\n Set to ``None`` to use ``tqdm`` default bar formatting: '{l_bar}{bar}{r_bar}', where\n l_bar='{desc}: {percentage:3.0f}%|' and\n r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'. For more details on the\n formatting, see `tqdm docs <https://tqdm.github.io/docs/tqdm/>`_.\n **tqdm_kwargs: kwargs passed to tqdm progress bar.\n By default, progress bar description displays \"Epoch [5/10]\" where 5 is the current epoch and 10 is the\n number of epochs; however, if ``max_epochs`` are set to 1, the progress bar instead displays\n \"Iteration: [5/10]\". If tqdm_kwargs defines `desc`, e.g. \"Predictions\", than the description is\n \"Predictions [5/10]\" if number of epochs is more than one otherwise it is simply \"Predictions\".\n\n Examples:\n\n Simple progress bar\n\n .. 
code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n pbar = ProgressBar()\n pbar.attach(trainer)\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 [06:17<12:34]\n\n Log output to a file instead of stderr (tqdm's default output)\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n log_file = open(\"output.log\", \"w\")\n pbar = ProgressBar(file=log_file)\n pbar.attach(trainer)\n\n Attach metrics that already have been computed at :attr:`~ignite.engine.events.Events.ITERATION_COMPLETED`\n (such as :class:`~ignite.metrics.RunningAverage`)\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n RunningAverage(output_transform=lambda x: x).attach(trainer, 'loss')\n\n pbar = ProgressBar()\n pbar.attach(trainer, ['loss'])\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 , loss=0.123 [06:17<12:34]\n\n Directly attach the engine's output\n\n .. code-block:: python\n\n trainer = create_supervised_trainer(model, optimizer, loss)\n\n pbar = ProgressBar()\n pbar.attach(trainer, output_transform=lambda x: {'loss': x})\n\n # Progress bar will looks like\n # Epoch [2/50]: [64/128] 50%|\u2588\u2588\u2588\u2588\u2588 , loss=0.123 [06:17<12:34]\n\n Note:\n When adding attaching the progress bar to an engine, it is recommend that you replace\n every print operation in the engine's handlers triggered every iteration with\n ``pbar.log_message`` to guarantee the correct format of the stdout.\n\n Note:\n When using inside jupyter notebook, `ProgressBar` automatically uses `tqdm_notebook`. For correct rendering,\n please install `ipywidgets <https://ipywidgets.readthedocs.io/en/stable/user_install.html#installation>`_.\n Due to `tqdm notebook bugs <https://github.com/tqdm/tqdm/issues/594>`_, bar format may be needed to be set\n to an empty string value.\n\n \"\"\"\n\n _events_order = [\n Events.STARTED,\n Events.EPOCH_STARTED,\n Events.ITERATION_STARTED,\n Events.ITERATION_COMPLETED,\n Events.EPOCH_COMPLETED,\n Events.COMPLETED,\n ]\n\n def __init__(\n self,\n persist=False,\n bar_format=\"{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]\",\n **tqdm_kwargs\n ):\n\n try:\n from tqdm.autonotebook import tqdm\n except ImportError:\n raise RuntimeError(\n \"This contrib module requires tqdm to be installed. 
\"\n \"Please install it with command: \\n pip install tqdm\"\n )\n\n self.pbar_cls = tqdm\n self.pbar = None\n self.persist = persist\n self.bar_format = bar_format\n self.tqdm_kwargs = tqdm_kwargs\n\n def _reset(self, pbar_total):\n self.pbar = self.pbar_cls(\n total=pbar_total, leave=self.persist, bar_format=self.bar_format, initial=1, **self.tqdm_kwargs\n )\n\n def _close(self, engine):\n if self.pbar is not None:\n # https://github.com/tqdm/notebook.py#L240-L250\n # issue #1115 : notebook backend of tqdm checks if n < total (error or KeyboardInterrupt)\n # and the bar persists in 'danger' mode\n if self.pbar.total is not None:\n self.pbar.n = self.pbar.total\n self.pbar.close()\n self.pbar = None\n\n @staticmethod\n def _compare_lt(event1, event2):\n i1 = ProgressBar._events_order.index(event1)\n i2 = ProgressBar._events_order.index(event2)\n return i1 < i2\n\n def log_message(self, message):\n \"\"\"\n Logs a message, preserving the progress bar correct output format.\n\n Args:\n message (str): string you wish to log.\n \"\"\"\n from tqdm import tqdm\n\n tqdm.write(message, file=self.tqdm_kwargs.get(\"file\", None))\n\n def attach(\n self,\n engine,\n metric_names=None,\n output_transform=None,\n event_name=Events.ITERATION_COMPLETED,\n closing_event_name=Events.EPOCH_COMPLETED,\n ):\n \"\"\"\n Attaches the progress bar to an engine object.\n\n Args:\n engine (Engine): engine object.\n metric_names (list of str, optional): list of metric names to plot or a string \"all\" to plot all available\n metrics.\n output_transform (callable, optional): a function to select what you want to print from the engine's\n output. This function may return either a dictionary with entries in the format of ``{name: value}``,\n or a single scalar, which will be displayed with the default name `output`.\n event_name: event's name on which the progress bar advances. Valid events are from\n :class:`~ignite.engine.events.Events`.\n closing_event_name: event's name on which the progress bar is closed. Valid events are from\n :class:`~ignite.engine.events.Events`.\n\n Note: accepted output value types are numbers, 0d and 1d torch tensors and strings\n\n \"\"\"\n desc = self.tqdm_kwargs.get(\"desc\", None)\n\n if event_name not in engine._allowed_events:\n raise ValueError(\"Logging event {} is not in allowed events for this engine\".format(event_name.name))\n\n if isinstance(closing_event_name, CallableEventWithFilter):\n if closing_event_name.filter != CallableEventWithFilter.default_event_filter:\n raise ValueError(\"Closing Event should not be a filtered event\")\n\n if not self._compare_lt(event_name, closing_event_name):\n raise ValueError(\n \"Logging event {} should be called before closing event {}\".format(event_name, closing_event_name)\n )\n\n log_handler = _OutputHandler(desc, metric_names, output_transform, closing_event_name=closing_event_name)\n\n super(ProgressBar, self).attach(engine, log_handler, event_name)\n engine.add_event_handler(closing_event_name, self._close)\n\n def attach_opt_params_handler(self, engine: Engine, event_name: str, *args: Any, **kwargs: Mapping):\n \"\"\"Intentionally empty\"\"\"\n pass\n\n def _create_output_handler(self, *args, **kwargs):\n return _OutputHandler(*args, **kwargs)\n\n def _create_opt_params_handler(self, *args, **kwargs):\n \"\"\"Intentionally empty\"\"\"\n pass\n", "path": "ignite/contrib/handlers/tqdm_logger.py"}]}
| 3,701 | 482 |
gh_patches_debug_8404 | rasdani/github-patches | git_diff | privacyidea__privacyidea-2032 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use information, which pyhsical token was used with a remote token
**Is your feature request related to a problem? Please describe.**
With a remote token it would be great to know, what actually was the physical token that led to a successful authentication.
This can help with rollout scenarios and assigning those physical tokens.
This is interesting if the remote token object points to a user on the remote system. The user can have several tokens. But it would be interesting to know, which token was actually used for authentication.
**Describe the solution you'd like**
The remote tokentype could store the information, which physical token was used for authentication. Returning it in the response does not work out well.
It can be stored in the tokeninfo.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `privacyidea/lib/tokens/remotetoken.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # privacyIDEA is a fork of LinOTP
4 # May 08, 2014 Cornelius Kölbel
5 # License: AGPLv3
6 # contact: http://www.privacyidea.org
7 #
8 # 2018-01-21 Cornelius Kölbel <[email protected]>
9 # Add tokenkind
10 # 2015-01-28 Rewrite for migration to flask
11 # Cornelius Kölbel <[email protected]>
12 #
13 #
14 # Copyright (C) 2010 - 2014 LSE Leading Security Experts GmbH
15 # License: LSE
16 # contact: http://www.linotp.org
17 # http://www.lsexperts.de
18 # [email protected]
19 #
20 # This code is free software; you can redistribute it and/or
21 # modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE
22 # License as published by the Free Software Foundation; either
23 # version 3 of the License, or any later version.
24 #
25 # This code is distributed in the hope that it will be useful,
26 # but WITHOUT ANY WARRANTY; without even the implied warranty of
27 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
28 # GNU AFFERO GENERAL PUBLIC LICENSE for more details.
29 #
30 # You should have received a copy of the GNU Affero General Public
31 # License along with this program. If not, see <http://www.gnu.org/licenses/>.
32 #
33 __doc__ = """This is the implementation of the remote token. The remote token
34 forwards an authentication request to another privacyidea server.
35
36 To do this it uses the parameters remote.server, remote.realm,
37 remote.resolver, remote.user or remote.serial.
38 The parameter remote.local_checkpin determines, whether the PIN should be
39 checked locally or remotely.
40
41 The code is tested in tests/test_lib_tokens_remote
42 """
43
44 import logging
45 import traceback
46 import requests
47 from privacyidea.lib.utils import is_true
48 from privacyidea.lib.decorators import check_token_locked
49 from privacyidea.lib.config import get_from_config
50 from privacyidea.api.lib.utils import getParam
51 from privacyidea.lib.log import log_with
52 from privacyidea.lib.policydecorators import challenge_response_allowed
53 from privacyidea.lib.tokenclass import TokenClass, TOKENKIND
54 from privacyidea.lib import _
55 from privacyidea.lib.policy import SCOPE, ACTION, GROUP
56
57 optional = True
58 required = False
59
60 log = logging.getLogger(__name__)
61
62 ###############################################
63
64
65 class RemoteTokenClass(TokenClass):
66 """
67 The Remote token forwards an authentication request to another privacyIDEA
68 server. The request can be forwarded to a user on the other server or to
69 a serial number on the other server. The PIN can be checked on the local
70 privacyIDEA server or on the remote server.
71
72 Using the Remote token you can assign one physical token to many
73 different users.
74 """
75
76 def __init__(self, db_token):
77 """
78 constructor - create a token class object with it's db token binding
79
80 :param aToken: the db bound token
81 """
82 TokenClass.__init__(self, db_token)
83 self.set_type(u"remote")
84 self.mode = ['authenticate', 'challenge']
85
86 @staticmethod
87 def get_class_type():
88 """
89 return the class type identifier
90 """
91 return "remote"
92
93 @staticmethod
94 def get_class_prefix():
95 """
96 return the token type prefix
97 """
98 return "PIRE"
99
100 @staticmethod
101 @log_with(log)
102 def get_class_info(key=None, ret='all'):
103 """
104 :param key: subsection identifier
105 :type key: string
106 :param ret: default return value, if nothing is found
107 :type ret: user defined
108 :return: subsection if key exists or user defined
109 :rtype: dict or string
110 """
111 res = {'type': 'remote',
112 'title': 'Remote Token',
113 'description': _('Remote Token: Forward authentication request '
114 'to another server.'),
115 'user': [],
116 # This tokentype is enrollable in the UI for...
117 'ui_enroll': ["admin"],
118 'policy': {
119 SCOPE.ENROLL: {
120 ACTION.MAXTOKENUSER: {
121 'type': 'int',
122 'desc': _("The user may only have this maximum number of remote tokens assigned."),
123 'group': GROUP.TOKEN
124 },
125 ACTION.MAXACTIVETOKENUSER: {
126 'type': 'int',
127 'desc': _(
128 "The user may only have this maximum number of active remote tokens assigned."),
129 'group': GROUP.TOKEN
130 }
131 }
132 },
133 }
134
135 if key:
136 ret = res.get(key, {})
137 else:
138 if ret == 'all':
139 ret = res
140 return ret
141
142 def update(self, param):
143 """
144 second phase of the init process - updates parameters
145
146 :param param: the request parameters
147 :return: - nothing -
148 """
149 # if another OTP length would be specified in /admin/init this would
150 # be overwritten by the parent class, which is ok.
151 self.set_otplen(6)
152 TokenClass.update(self, param)
153
154 remoteServer = getParam(param, "remote.server", required)
155 self.add_tokeninfo("remote.server", remoteServer)
156
157 val = getParam(param, "remote.local_checkpin", optional) or 0
158 self.add_tokeninfo("remote.local_checkpin", val)
159
160 for key in ["remote.serial", "remote.user", "remote.path",
161 "remote.realm", "remote.resolver"]:
162 val = getParam(param, key, optional)
163 if val is not None:
164 self.add_tokeninfo(key, val)
165
166 self.add_tokeninfo("tokenkind", TOKENKIND.VIRTUAL)
167
168 @property
169 def check_pin_local(self):
170 """
171 lookup if pin should be checked locally or on remote host
172
173 :return: bool
174 """
175 local_check = is_true(self.get_tokeninfo("remote.local_checkpin"))
176 log.debug(" local checking pin? {0!r}".format(local_check))
177
178 return local_check
179
180 @log_with(log)
181 @check_token_locked
182 def authenticate(self, passw, user=None, options=None):
183 """
184 do the authentication on base of password / otp and user and
185 options, the request parameters.
186
187 Here we contact the other privacyIDEA server to validate the OtpVal.
188
189 :param passw: the password / otp
190 :param user: the requesting user
191 :param options: the additional request parameters
192
193 :return: tuple of (success, otp_count - 0 or -1, reply)
194
195 """
196 res = False
197 otp_counter = -1
198 reply = None
199 otpval = passw
200
201 # should we check the pin localy?
202 if self.check_pin_local:
203 (_res, pin, otpval) = self.split_pin_pass(passw, user,
204 options=options)
205
206 if not TokenClass.check_pin(self, pin):
207 return False, otp_counter, {'message': "Wrong PIN"}
208
209 otp_count = self.check_otp(otpval, options=options)
210 if otp_count >= 0:
211 res = True
212 reply = {'message': 'matching 1 tokens',
213 'serial': self.get_serial(),
214 'type': self.get_tokentype()}
215 else:
216 reply = {'message': 'remote side denied access'}
217
218 return res, otp_count, reply
219
220 @check_token_locked
221 def check_otp(self, otpval, counter=None, window=None, options=None):
222 """
223 run the http request against the remote host
224
225 :param otpval: the OTP value
226 :param counter: The counter for counter based otp values
227 :type counter: int
228 :param window: a counter window
229 :type counter: int
230 :param options: additional token specific options
231 :type options: dict
232 :return: counter of the matching OTP value.
233 :rtype: int
234 """
235 otp_count = -1
236
237 remoteServer = self.get_tokeninfo("remote.server") or ""
238
239 # in preparation of the ability to relocate privacyidea urls,
240 # we introduce the remote url path
241 remotePath = self.get_tokeninfo("remote.path") or ""
242 remotePath = remotePath.strip()
243
244 remoteSerial = self.get_tokeninfo("remote.serial") or ""
245
246 remoteUser = self.get_tokeninfo("remote.user") or ""
247
248 remoteRealm = self.get_tokeninfo("remote.realm") or ""
249
250 remoteResolver = self.get_tokeninfo("remote.resolver") or ""
251
252 ssl_verify = get_from_config("remote.verify_ssl_certificate",
253 False, return_bool=True) or False
254
255 # here we also need to check for remote.user and so on....
256 log.debug("checking OTP len:%r remotely on server: %r,"
257 " serial: %r, user: %r" %
258 (len(otpval), remoteServer, remoteSerial, remoteUser))
259 params = {}
260
261 remotePath = remotePath or "/validate/check"
262 if remoteSerial:
263 params['serial'] = remoteSerial
264 elif remoteUser:
265 params['user'] = remoteUser
266 params['realm'] = remoteRealm
267 params['resolver'] = remoteResolver
268
269 else:
270 log.warning("The remote token does neither contain a "
271 "remote.serial nor a remote.user.")
272 return otp_count
273
274 params['pass'] = otpval
275 request_url = "{0!s}{1!s}".format(remoteServer, remotePath)
276
277 try:
278 r = requests.post(request_url, data=params, verify=ssl_verify)
279
280 if r.status_code == requests.codes.ok:
281 response = r.json()
282 result = response.get("result")
283 if result.get("value"):
284 otp_count = 1
285
286 except Exception as exx: # pragma: no cover
287 log.error("Error getting response from "
288 "remote Server (%r): %r" % (request_url, exx))
289 log.debug("{0!s}".format(traceback.format_exc()))
290
291 return otp_count
292
293 @log_with(log)
294 @challenge_response_allowed
295 def is_challenge_request(self, passw, user=None, options=None):
296 """
297 This method checks, if this is a request, that triggers a challenge.
298 It depends on the way, the pin is checked - either locally or remote
299
300 :param passw: password, which might be pin or pin+otp
301 :type passw: string
302 :param user: The user from the authentication request
303 :type user: User object
304 :param options: dictionary of additional request parameters
305 :type options: dict
306
307 :return: true or false
308 """
309
310 request_is_valid = False
311
312 if self.check_pin_local:
313 pin_match = self.check_pin(passw, user=user,
314 options=options)
315 if pin_match is True:
316 request_is_valid = True
317
318 return request_is_valid
319
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/privacyidea/lib/tokens/remotetoken.py b/privacyidea/lib/tokens/remotetoken.py
--- a/privacyidea/lib/tokens/remotetoken.py
+++ b/privacyidea/lib/tokens/remotetoken.py
@@ -282,6 +282,8 @@
result = response.get("result")
if result.get("value"):
otp_count = 1
+ # Add the serial of the used remote token in the tokeninfo parameters
+ self.add_tokeninfo("last_matching_remote_serial", response.get("detail", {}).get("serial"))
except Exception as exx: # pragma: no cover
log.error("Error getting response from "
|
{"golden_diff": "diff --git a/privacyidea/lib/tokens/remotetoken.py b/privacyidea/lib/tokens/remotetoken.py\n--- a/privacyidea/lib/tokens/remotetoken.py\n+++ b/privacyidea/lib/tokens/remotetoken.py\n@@ -282,6 +282,8 @@\n result = response.get(\"result\")\n if result.get(\"value\"):\n otp_count = 1\n+ # Add the serial of the used remote token in the tokeninfo parameters\n+ self.add_tokeninfo(\"last_matching_remote_serial\", response.get(\"detail\", {}).get(\"serial\"))\n \n except Exception as exx: # pragma: no cover\n log.error(\"Error getting response from \"\n", "issue": "Use information, which pyhsical token was used with a remote token\n**Is your feature request related to a problem? Please describe.**\r\n\r\nWith a remote token it would be great to know, what actually was the physical token that led to a successful authentication.\r\nThis can help with rollout scenarios and assigning those physical tokens.\r\n\r\nThis is interesting if the remote token object points to a user on the remote system. The user can have several tokens. But it would be interesting to know, which token was actually used for authentication.\r\n\r\n**Describe the solution you'd like**\r\n\r\nThe remote tokentype could store the information, which physical token was used for authentication. Returning it in the response does not work out well.\r\nIt can be stored in the tokeninfo.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# privacyIDEA is a fork of LinOTP\n# May 08, 2014 Cornelius K\u00f6lbel\n# License: AGPLv3\n# contact: http://www.privacyidea.org\n#\n# 2018-01-21 Cornelius K\u00f6lbel <[email protected]>\n# Add tokenkind\n# 2015-01-28 Rewrite for migration to flask\n# Cornelius K\u00f6lbel <[email protected]>\n#\n#\n# Copyright (C) 2010 - 2014 LSE Leading Security Experts GmbH\n# License: LSE\n# contact: http://www.linotp.org\n# http://www.lsexperts.de\n# [email protected]\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n__doc__ = \"\"\"This is the implementation of the remote token. 
The remote token\nforwards an authentication request to another privacyidea server.\n\nTo do this it uses the parameters remote.server, remote.realm,\nremote.resolver, remote.user or remote.serial.\nThe parameter remote.local_checkpin determines, whether the PIN should be\nchecked locally or remotely.\n\nThe code is tested in tests/test_lib_tokens_remote\n\"\"\"\n\nimport logging\nimport traceback\nimport requests\nfrom privacyidea.lib.utils import is_true\nfrom privacyidea.lib.decorators import check_token_locked\nfrom privacyidea.lib.config import get_from_config\nfrom privacyidea.api.lib.utils import getParam\nfrom privacyidea.lib.log import log_with\nfrom privacyidea.lib.policydecorators import challenge_response_allowed\nfrom privacyidea.lib.tokenclass import TokenClass, TOKENKIND\nfrom privacyidea.lib import _\nfrom privacyidea.lib.policy import SCOPE, ACTION, GROUP\n\noptional = True\nrequired = False\n\nlog = logging.getLogger(__name__)\n\n###############################################\n\n\nclass RemoteTokenClass(TokenClass):\n \"\"\"\n The Remote token forwards an authentication request to another privacyIDEA\n server. The request can be forwarded to a user on the other server or to\n a serial number on the other server. The PIN can be checked on the local\n privacyIDEA server or on the remote server.\n\n Using the Remote token you can assign one physical token to many\n different users.\n \"\"\"\n\n def __init__(self, db_token):\n \"\"\"\n constructor - create a token class object with it's db token binding\n\n :param aToken: the db bound token\n \"\"\"\n TokenClass.__init__(self, db_token)\n self.set_type(u\"remote\")\n self.mode = ['authenticate', 'challenge']\n\n @staticmethod\n def get_class_type():\n \"\"\"\n return the class type identifier\n \"\"\"\n return \"remote\"\n\n @staticmethod\n def get_class_prefix():\n \"\"\"\n return the token type prefix\n \"\"\"\n return \"PIRE\"\n\n @staticmethod\n @log_with(log)\n def get_class_info(key=None, ret='all'):\n \"\"\"\n :param key: subsection identifier\n :type key: string\n :param ret: default return value, if nothing is found\n :type ret: user defined\n :return: subsection if key exists or user defined\n :rtype: dict or string\n \"\"\"\n res = {'type': 'remote',\n 'title': 'Remote Token',\n 'description': _('Remote Token: Forward authentication request '\n 'to another server.'),\n 'user': [],\n # This tokentype is enrollable in the UI for...\n 'ui_enroll': [\"admin\"],\n 'policy': {\n SCOPE.ENROLL: {\n ACTION.MAXTOKENUSER: {\n 'type': 'int',\n 'desc': _(\"The user may only have this maximum number of remote tokens assigned.\"),\n 'group': GROUP.TOKEN\n },\n ACTION.MAXACTIVETOKENUSER: {\n 'type': 'int',\n 'desc': _(\n \"The user may only have this maximum number of active remote tokens assigned.\"),\n 'group': GROUP.TOKEN\n }\n }\n },\n }\n\n if key:\n ret = res.get(key, {})\n else:\n if ret == 'all':\n ret = res\n return ret\n\n def update(self, param):\n \"\"\"\n second phase of the init process - updates parameters\n\n :param param: the request parameters\n :return: - nothing -\n \"\"\"\n # if another OTP length would be specified in /admin/init this would\n # be overwritten by the parent class, which is ok.\n self.set_otplen(6)\n TokenClass.update(self, param)\n\n remoteServer = getParam(param, \"remote.server\", required)\n self.add_tokeninfo(\"remote.server\", remoteServer)\n\n val = getParam(param, \"remote.local_checkpin\", optional) or 0\n self.add_tokeninfo(\"remote.local_checkpin\", val)\n\n for key in 
[\"remote.serial\", \"remote.user\", \"remote.path\",\n \"remote.realm\", \"remote.resolver\"]:\n val = getParam(param, key, optional)\n if val is not None:\n self.add_tokeninfo(key, val)\n\n self.add_tokeninfo(\"tokenkind\", TOKENKIND.VIRTUAL)\n\n @property\n def check_pin_local(self):\n \"\"\"\n lookup if pin should be checked locally or on remote host\n\n :return: bool\n \"\"\"\n local_check = is_true(self.get_tokeninfo(\"remote.local_checkpin\"))\n log.debug(\" local checking pin? {0!r}\".format(local_check))\n\n return local_check\n\n @log_with(log)\n @check_token_locked\n def authenticate(self, passw, user=None, options=None):\n \"\"\"\n do the authentication on base of password / otp and user and\n options, the request parameters.\n\n Here we contact the other privacyIDEA server to validate the OtpVal.\n\n :param passw: the password / otp\n :param user: the requesting user\n :param options: the additional request parameters\n\n :return: tuple of (success, otp_count - 0 or -1, reply)\n\n \"\"\"\n res = False\n otp_counter = -1\n reply = None\n otpval = passw\n\n # should we check the pin localy?\n if self.check_pin_local:\n (_res, pin, otpval) = self.split_pin_pass(passw, user,\n options=options)\n\n if not TokenClass.check_pin(self, pin):\n return False, otp_counter, {'message': \"Wrong PIN\"}\n\n otp_count = self.check_otp(otpval, options=options)\n if otp_count >= 0:\n res = True\n reply = {'message': 'matching 1 tokens',\n 'serial': self.get_serial(),\n 'type': self.get_tokentype()}\n else:\n reply = {'message': 'remote side denied access'}\n\n return res, otp_count, reply\n\n @check_token_locked\n def check_otp(self, otpval, counter=None, window=None, options=None):\n \"\"\"\n run the http request against the remote host\n\n :param otpval: the OTP value\n :param counter: The counter for counter based otp values\n :type counter: int\n :param window: a counter window\n :type counter: int\n :param options: additional token specific options\n :type options: dict\n :return: counter of the matching OTP value.\n :rtype: int\n \"\"\"\n otp_count = -1\n\n remoteServer = self.get_tokeninfo(\"remote.server\") or \"\"\n\n # in preparation of the ability to relocate privacyidea urls,\n # we introduce the remote url path\n remotePath = self.get_tokeninfo(\"remote.path\") or \"\"\n remotePath = remotePath.strip()\n\n remoteSerial = self.get_tokeninfo(\"remote.serial\") or \"\"\n\n remoteUser = self.get_tokeninfo(\"remote.user\") or \"\"\n\n remoteRealm = self.get_tokeninfo(\"remote.realm\") or \"\"\n\n remoteResolver = self.get_tokeninfo(\"remote.resolver\") or \"\"\n\n ssl_verify = get_from_config(\"remote.verify_ssl_certificate\",\n False, return_bool=True) or False\n\n # here we also need to check for remote.user and so on....\n log.debug(\"checking OTP len:%r remotely on server: %r,\"\n \" serial: %r, user: %r\" %\n (len(otpval), remoteServer, remoteSerial, remoteUser))\n params = {}\n\n remotePath = remotePath or \"/validate/check\"\n if remoteSerial:\n params['serial'] = remoteSerial\n elif remoteUser:\n params['user'] = remoteUser\n params['realm'] = remoteRealm\n params['resolver'] = remoteResolver\n\n else:\n log.warning(\"The remote token does neither contain a \"\n \"remote.serial nor a remote.user.\")\n return otp_count\n\n params['pass'] = otpval\n request_url = \"{0!s}{1!s}\".format(remoteServer, remotePath)\n\n try:\n r = requests.post(request_url, data=params, verify=ssl_verify)\n\n if r.status_code == requests.codes.ok:\n response = r.json()\n result = 
response.get(\"result\")\n if result.get(\"value\"):\n otp_count = 1\n\n except Exception as exx: # pragma: no cover\n log.error(\"Error getting response from \"\n \"remote Server (%r): %r\" % (request_url, exx))\n log.debug(\"{0!s}\".format(traceback.format_exc()))\n\n return otp_count\n\n @log_with(log)\n @challenge_response_allowed\n def is_challenge_request(self, passw, user=None, options=None):\n \"\"\"\n This method checks, if this is a request, that triggers a challenge.\n It depends on the way, the pin is checked - either locally or remote\n\n :param passw: password, which might be pin or pin+otp\n :type passw: string\n :param user: The user from the authentication request\n :type user: User object\n :param options: dictionary of additional request parameters\n :type options: dict\n\n :return: true or false\n \"\"\"\n\n request_is_valid = False\n\n if self.check_pin_local:\n pin_match = self.check_pin(passw, user=user,\n options=options)\n if pin_match is True:\n request_is_valid = True\n\n return request_is_valid\n", "path": "privacyidea/lib/tokens/remotetoken.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# privacyIDEA is a fork of LinOTP\n# May 08, 2014 Cornelius K\u00f6lbel\n# License: AGPLv3\n# contact: http://www.privacyidea.org\n#\n# 2018-01-21 Cornelius K\u00f6lbel <[email protected]>\n# Add tokenkind\n# 2015-01-28 Rewrite for migration to flask\n# Cornelius K\u00f6lbel <[email protected]>\n#\n#\n# Copyright (C) 2010 - 2014 LSE Leading Security Experts GmbH\n# License: LSE\n# contact: http://www.linotp.org\n# http://www.lsexperts.de\n# [email protected]\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n__doc__ = \"\"\"This is the implementation of the remote token. The remote token\nforwards an authentication request to another privacyidea server.\n\nTo do this it uses the parameters remote.server, remote.realm,\nremote.resolver, remote.user or remote.serial.\nThe parameter remote.local_checkpin determines, whether the PIN should be\nchecked locally or remotely.\n\nThe code is tested in tests/test_lib_tokens_remote\n\"\"\"\n\nimport logging\nimport traceback\nimport requests\nfrom privacyidea.lib.utils import is_true\nfrom privacyidea.lib.decorators import check_token_locked\nfrom privacyidea.lib.config import get_from_config\nfrom privacyidea.api.lib.utils import getParam\nfrom privacyidea.lib.log import log_with\nfrom privacyidea.lib.policydecorators import challenge_response_allowed\nfrom privacyidea.lib.tokenclass import TokenClass, TOKENKIND\nfrom privacyidea.lib import _\nfrom privacyidea.lib.policy import SCOPE, ACTION, GROUP\n\noptional = True\nrequired = False\n\nlog = logging.getLogger(__name__)\n\n###############################################\n\n\nclass RemoteTokenClass(TokenClass):\n \"\"\"\n The Remote token forwards an authentication request to another privacyIDEA\n server. 
The request can be forwarded to a user on the other server or to\n a serial number on the other server. The PIN can be checked on the local\n privacyIDEA server or on the remote server.\n\n Using the Remote token you can assign one physical token to many\n different users.\n \"\"\"\n\n def __init__(self, db_token):\n \"\"\"\n constructor - create a token class object with it's db token binding\n\n :param aToken: the db bound token\n \"\"\"\n TokenClass.__init__(self, db_token)\n self.set_type(u\"remote\")\n self.mode = ['authenticate', 'challenge']\n\n @staticmethod\n def get_class_type():\n \"\"\"\n return the class type identifier\n \"\"\"\n return \"remote\"\n\n @staticmethod\n def get_class_prefix():\n \"\"\"\n return the token type prefix\n \"\"\"\n return \"PIRE\"\n\n @staticmethod\n @log_with(log)\n def get_class_info(key=None, ret='all'):\n \"\"\"\n :param key: subsection identifier\n :type key: string\n :param ret: default return value, if nothing is found\n :type ret: user defined\n :return: subsection if key exists or user defined\n :rtype: dict or string\n \"\"\"\n res = {'type': 'remote',\n 'title': 'Remote Token',\n 'description': _('Remote Token: Forward authentication request '\n 'to another server.'),\n 'user': [],\n # This tokentype is enrollable in the UI for...\n 'ui_enroll': [\"admin\"],\n 'policy': {\n SCOPE.ENROLL: {\n ACTION.MAXTOKENUSER: {\n 'type': 'int',\n 'desc': _(\"The user may only have this maximum number of remote tokens assigned.\"),\n 'group': GROUP.TOKEN\n },\n ACTION.MAXACTIVETOKENUSER: {\n 'type': 'int',\n 'desc': _(\n \"The user may only have this maximum number of active remote tokens assigned.\"),\n 'group': GROUP.TOKEN\n }\n }\n },\n }\n\n if key:\n ret = res.get(key, {})\n else:\n if ret == 'all':\n ret = res\n return ret\n\n def update(self, param):\n \"\"\"\n second phase of the init process - updates parameters\n\n :param param: the request parameters\n :return: - nothing -\n \"\"\"\n # if another OTP length would be specified in /admin/init this would\n # be overwritten by the parent class, which is ok.\n self.set_otplen(6)\n TokenClass.update(self, param)\n\n remoteServer = getParam(param, \"remote.server\", required)\n self.add_tokeninfo(\"remote.server\", remoteServer)\n\n val = getParam(param, \"remote.local_checkpin\", optional) or 0\n self.add_tokeninfo(\"remote.local_checkpin\", val)\n\n for key in [\"remote.serial\", \"remote.user\", \"remote.path\",\n \"remote.realm\", \"remote.resolver\"]:\n val = getParam(param, key, optional)\n if val is not None:\n self.add_tokeninfo(key, val)\n\n self.add_tokeninfo(\"tokenkind\", TOKENKIND.VIRTUAL)\n\n @property\n def check_pin_local(self):\n \"\"\"\n lookup if pin should be checked locally or on remote host\n\n :return: bool\n \"\"\"\n local_check = is_true(self.get_tokeninfo(\"remote.local_checkpin\"))\n log.debug(\" local checking pin? 
{0!r}\".format(local_check))\n\n return local_check\n\n @log_with(log)\n @check_token_locked\n def authenticate(self, passw, user=None, options=None):\n \"\"\"\n do the authentication on base of password / otp and user and\n options, the request parameters.\n\n Here we contact the other privacyIDEA server to validate the OtpVal.\n\n :param passw: the password / otp\n :param user: the requesting user\n :param options: the additional request parameters\n\n :return: tuple of (success, otp_count - 0 or -1, reply)\n\n \"\"\"\n res = False\n otp_counter = -1\n reply = None\n otpval = passw\n\n # should we check the pin localy?\n if self.check_pin_local:\n (_res, pin, otpval) = self.split_pin_pass(passw, user,\n options=options)\n\n if not TokenClass.check_pin(self, pin):\n return False, otp_counter, {'message': \"Wrong PIN\"}\n\n otp_count = self.check_otp(otpval, options=options)\n if otp_count >= 0:\n res = True\n reply = {'message': 'matching 1 tokens',\n 'serial': self.get_serial(),\n 'type': self.get_tokentype()}\n else:\n reply = {'message': 'remote side denied access'}\n\n return res, otp_count, reply\n\n @check_token_locked\n def check_otp(self, otpval, counter=None, window=None, options=None):\n \"\"\"\n run the http request against the remote host\n\n :param otpval: the OTP value\n :param counter: The counter for counter based otp values\n :type counter: int\n :param window: a counter window\n :type counter: int\n :param options: additional token specific options\n :type options: dict\n :return: counter of the matching OTP value.\n :rtype: int\n \"\"\"\n otp_count = -1\n\n remoteServer = self.get_tokeninfo(\"remote.server\") or \"\"\n\n # in preparation of the ability to relocate privacyidea urls,\n # we introduce the remote url path\n remotePath = self.get_tokeninfo(\"remote.path\") or \"\"\n remotePath = remotePath.strip()\n\n remoteSerial = self.get_tokeninfo(\"remote.serial\") or \"\"\n\n remoteUser = self.get_tokeninfo(\"remote.user\") or \"\"\n\n remoteRealm = self.get_tokeninfo(\"remote.realm\") or \"\"\n\n remoteResolver = self.get_tokeninfo(\"remote.resolver\") or \"\"\n\n ssl_verify = get_from_config(\"remote.verify_ssl_certificate\",\n False, return_bool=True) or False\n\n # here we also need to check for remote.user and so on....\n log.debug(\"checking OTP len:%r remotely on server: %r,\"\n \" serial: %r, user: %r\" %\n (len(otpval), remoteServer, remoteSerial, remoteUser))\n params = {}\n\n remotePath = remotePath or \"/validate/check\"\n if remoteSerial:\n params['serial'] = remoteSerial\n elif remoteUser:\n params['user'] = remoteUser\n params['realm'] = remoteRealm\n params['resolver'] = remoteResolver\n\n else:\n log.warning(\"The remote token does neither contain a \"\n \"remote.serial nor a remote.user.\")\n return otp_count\n\n params['pass'] = otpval\n request_url = \"{0!s}{1!s}\".format(remoteServer, remotePath)\n\n try:\n r = requests.post(request_url, data=params, verify=ssl_verify)\n\n if r.status_code == requests.codes.ok:\n response = r.json()\n result = response.get(\"result\")\n if result.get(\"value\"):\n otp_count = 1\n # Add the serial of the used remote token in the tokeninfo parameters\n self.add_tokeninfo(\"last_matching_remote_serial\", response.get(\"detail\", {}).get(\"serial\"))\n\n except Exception as exx: # pragma: no cover\n log.error(\"Error getting response from \"\n \"remote Server (%r): %r\" % (request_url, exx))\n log.debug(\"{0!s}\".format(traceback.format_exc()))\n\n return otp_count\n\n @log_with(log)\n @challenge_response_allowed\n def 
is_challenge_request(self, passw, user=None, options=None):\n \"\"\"\n This method checks, if this is a request, that triggers a challenge.\n It depends on the way, the pin is checked - either locally or remote\n\n :param passw: password, which might be pin or pin+otp\n :type passw: string\n :param user: The user from the authentication request\n :type user: User object\n :param options: dictionary of additional request parameters\n :type options: dict\n\n :return: true or false\n \"\"\"\n\n request_is_valid = False\n\n if self.check_pin_local:\n pin_match = self.check_pin(passw, user=user,\n options=options)\n if pin_match is True:\n request_is_valid = True\n\n return request_is_valid\n", "path": "privacyidea/lib/tokens/remotetoken.py"}]}
| 3,689 | 155 |
gh_patches_debug_37457 | rasdani/github-patches | git_diff | pypi__warehouse-1077 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TransportError: TransportError(500, 'search_phase_execution_exception', 'Result window is too lar...
https://app.getsentry.com/python-software-foundation/warehouse-production/issues/118467761/
```
TransportError: TransportError(500, 'search_phase_execution_exception', 'Result window is too large, from + size must be less than or equal to: [10000] but was [10640]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter.')
(18 additional frame(s) were not displayed)
...
File "warehouse/utils/http.py", line 31, in wrapped
return fn(context, request)
File "warehouse/cache/origin/__init__.py", line 81, in wrapped
return view(context, request)
File "warehouse/views.py", line 179, in search
url_maker=paginate_url_factory(request),
File "warehouse/utils/paginate.py", line 43, in ElasticsearchPage
return Page(*args, **kwargs)
File "warehouse/utils/paginate.py", line 26, in __getitem__
self.results = self.query[range].execute()
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/utils/paginate.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from paginate import Page
14
15
16 class _ElasticsearchWrapper:
17
18 def __init__(self, query):
19 self.query = query
20 self.results = None
21 self.best_guess = None
22
23 def __getitem__(self, range):
24 if self.results is not None:
25 raise RuntimeError("Cannot reslice after having already sliced.")
26 self.results = self.query[range].execute()
27
28 if hasattr(self.results, "suggest"):
29 suggestion = self.results.suggest.name_suggestion[0]
30 if suggestion.options:
31 self.best_guess = suggestion.options[0]
32
33 return list(self.results)
34
35 def __len__(self):
36 if self.results is None:
37 raise RuntimeError("Cannot get length until a slice.")
38 return self.results.hits.total
39
40
41 def ElasticsearchPage(*args, **kwargs): # noqa
42 kwargs.setdefault("wrapper_class", _ElasticsearchWrapper)
43 return Page(*args, **kwargs)
44
45
46 def paginate_url_factory(request, query_arg="page"):
47 def make_url(page):
48 query_seq = [
49 (k, v)
50 for k, vs in request.GET.dict_of_lists().items()
51 for v in vs
52 if k != query_arg
53 ]
54 query_seq += [(query_arg, page)]
55 return request.current_route_path(_query=query_seq)
56 return make_url
57
```
Path: `warehouse/views.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from pyramid.httpexceptions import (
14 HTTPException, HTTPSeeOther, HTTPMovedPermanently,
15 )
16 from pyramid.view import (
17 notfound_view_config, forbidden_view_config, view_config,
18 )
19 from sqlalchemy import func
20 from sqlalchemy.orm import aliased, joinedload
21
22 from warehouse.accounts import REDIRECT_FIELD_NAME
23 from warehouse.accounts.models import User
24 from warehouse.cache.origin import origin_cache
25 from warehouse.cache.http import cache_control
26 from warehouse.packaging.models import Project, Release, File
27 from warehouse.utils.row_counter import RowCount
28 from warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory
29
30
31 # We need to ensure that our HTTPException views do not require CSRF because
32 # they will be called in all different kinds of situations, not all of which
33 # will have valid CSRF tokens. For instance, they will be called on POST
34 # requests when the CSRF validator attempts to return a 400 response.
35 @view_config(context=HTTPException, require_csrf=False)
36 @notfound_view_config(append_slash=HTTPMovedPermanently, require_csrf=False)
37 def httpexception_view(exc, request):
38 return exc
39
40
41 @forbidden_view_config()
42 def forbidden(exc, request):
43 # If the forbidden error is because the user isn't logged in, then we'll
44 # redirect them to the log in page.
45 if request.authenticated_userid is None:
46 url = request.route_url(
47 "accounts.login",
48 _query={REDIRECT_FIELD_NAME: request.path_qs},
49 )
50 return HTTPSeeOther(url)
51
52 # If we've reached here, then the user is logged in and they are genuinely
53 # not allowed to access this page.
54 # TODO: Style the forbidden page.
55 return exc
56
57
58 @view_config(
59 route_name="robots.txt",
60 renderer="robots.txt",
61 decorator=[
62 cache_control(1 * 24 * 60 * 60), # 1 day
63 origin_cache(
64 1 * 24 * 60 * 60, # 1 day
65 stale_while_revalidate=6 * 60 * 60, # 6 hours
66 stale_if_error=1 * 24 * 60 * 60, # 1 day
67 ),
68 ],
69 )
70 def robotstxt(request):
71 request.response.content_type = "text/plain"
72 return {}
73
74
75 @view_config(
76 route_name="index",
77 renderer="index.html",
78 decorator=[
79 origin_cache(
80 1 * 60 * 60, # 1 hour
81 stale_while_revalidate=10 * 60, # 10 minutes
82 stale_if_error=1 * 24 * 60 * 60, # 1 day
83 keys=["all-projects"],
84 ),
85 ]
86 )
87 def index(request):
88 project_names = [
89 r[0] for r in (
90 request.db.query(File.name)
91 .group_by(File.name)
92 .order_by(func.sum(File.downloads).desc())
93 .limit(5)
94 .all())
95 ]
96 release_a = aliased(
97 Release,
98 request.db.query(Release)
99 .distinct(Release.name)
100 .filter(Release.name.in_(project_names))
101 .order_by(Release.name, Release._pypi_ordering.desc())
102 .subquery(),
103 )
104 top_projects = (
105 request.db.query(release_a)
106 .options(joinedload(release_a.project),
107 joinedload(release_a.uploader))
108 .order_by(func.array_idx(project_names, release_a.name))
109 .all()
110 )
111
112 latest_releases = (
113 request.db.query(Release)
114 .options(joinedload(Release.project),
115 joinedload(Release.uploader))
116 .order_by(Release.created.desc())
117 .limit(5)
118 .all()
119 )
120
121 counts = dict(
122 request.db.query(RowCount.table_name, RowCount.count)
123 .filter(
124 RowCount.table_name.in_([
125 Project.__tablename__,
126 Release.__tablename__,
127 File.__tablename__,
128 User.__tablename__,
129 ]))
130 .all()
131 )
132
133 return {
134 "latest_releases": latest_releases,
135 "top_projects": top_projects,
136 "num_projects": counts.get(Project.__tablename__, 0),
137 "num_releases": counts.get(Release.__tablename__, 0),
138 "num_files": counts.get(File.__tablename__, 0),
139 "num_users": counts.get(User.__tablename__, 0),
140 }
141
142
143 @view_config(
144 route_name="search",
145 renderer="search/results.html",
146 decorator=[
147 origin_cache(
148 1 * 60 * 60, # 1 hour
149 stale_while_revalidate=10 * 60, # 10 minutes
150 stale_if_error=1 * 24 * 60 * 60, # 1 day
151 keys=["all-projects"],
152 )
153 ],
154 )
155 def search(request):
156 if request.params.get("q"):
157 query = request.es.query(
158 "multi_match",
159 query=request.params["q"],
160 fields=[
161 "name^2", "version", "author", "author_email", "maintainer",
162 "maintainer_email", "home_page", "license", "summary",
163 "description", "keywords", "platform", "download_url",
164 ],
165 ).suggest(
166 name="name_suggestion",
167 text=request.params["q"],
168 term={"field": "name"}
169 )
170 else:
171 query = request.es.query()
172
173 if request.params.get("o"):
174 query = query.sort(request.params["o"])
175
176 page = ElasticsearchPage(
177 query,
178 page=int(request.params.get("page", 1)),
179 url_maker=paginate_url_factory(request),
180 )
181
182 return {
183 "page": page,
184 "term": request.params.get("q"),
185 "order": request.params.get("o"),
186 }
187
188
189 @view_config(
190 route_name="includes.current-user-indicator",
191 renderer="includes/current-user-indicator.html",
192 uses_session=True,
193 )
194 def current_user_indicator(request):
195 return {}
196
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/warehouse/utils/paginate.py b/warehouse/utils/paginate.py
--- a/warehouse/utils/paginate.py
+++ b/warehouse/utils/paginate.py
@@ -15,12 +15,30 @@
class _ElasticsearchWrapper:
+ max_results = 10000
+
def __init__(self, query):
self.query = query
self.results = None
self.best_guess = None
def __getitem__(self, range):
+ # If we're asking for a range that extends past our maximum results,
+ # then we need to clamp the start of our slice to our maximum results
+ # size, and make sure that the end of our slice >= to that to ensure a
+ # consistent slice.
+ if range.start > self.max_results:
+ range = slice(
+ self.max_results,
+ max(range.stop, self.max_results),
+ range.step,
+ )
+
+ # If we're being asked for a range that extends past our maximum result
+ # then we'll clamp it to the maximum result size and stop there.
+ if range.stop > self.max_results:
+ range = slice(range.start, self.max_results, range.step)
+
if self.results is not None:
raise RuntimeError("Cannot reslice after having already sliced.")
self.results = self.query[range].execute()
@@ -35,7 +53,7 @@
def __len__(self):
if self.results is None:
raise RuntimeError("Cannot get length until a slice.")
- return self.results.hits.total
+ return min(self.results.hits.total, self.max_results)
def ElasticsearchPage(*args, **kwargs): # noqa
diff --git a/warehouse/views.py b/warehouse/views.py
--- a/warehouse/views.py
+++ b/warehouse/views.py
@@ -11,7 +11,7 @@
# limitations under the License.
from pyramid.httpexceptions import (
- HTTPException, HTTPSeeOther, HTTPMovedPermanently,
+ HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,
)
from pyramid.view import (
notfound_view_config, forbidden_view_config, view_config,
@@ -173,12 +173,16 @@
if request.params.get("o"):
query = query.sort(request.params["o"])
+ page_num = int(request.params.get("page", 1))
page = ElasticsearchPage(
query,
- page=int(request.params.get("page", 1)),
+ page=page_num,
url_maker=paginate_url_factory(request),
)
+ if page_num > page.page_count:
+ raise HTTPNotFound
+
return {
"page": page,
"term": request.params.get("q"),
|
{"golden_diff": "diff --git a/warehouse/utils/paginate.py b/warehouse/utils/paginate.py\n--- a/warehouse/utils/paginate.py\n+++ b/warehouse/utils/paginate.py\n@@ -15,12 +15,30 @@\n \n class _ElasticsearchWrapper:\n \n+ max_results = 10000\n+\n def __init__(self, query):\n self.query = query\n self.results = None\n self.best_guess = None\n \n def __getitem__(self, range):\n+ # If we're asking for a range that extends past our maximum results,\n+ # then we need to clamp the start of our slice to our maximum results\n+ # size, and make sure that the end of our slice >= to that to ensure a\n+ # consistent slice.\n+ if range.start > self.max_results:\n+ range = slice(\n+ self.max_results,\n+ max(range.stop, self.max_results),\n+ range.step,\n+ )\n+\n+ # If we're being asked for a range that extends past our maximum result\n+ # then we'll clamp it to the maximum result size and stop there.\n+ if range.stop > self.max_results:\n+ range = slice(range.start, self.max_results, range.step)\n+\n if self.results is not None:\n raise RuntimeError(\"Cannot reslice after having already sliced.\")\n self.results = self.query[range].execute()\n@@ -35,7 +53,7 @@\n def __len__(self):\n if self.results is None:\n raise RuntimeError(\"Cannot get length until a slice.\")\n- return self.results.hits.total\n+ return min(self.results.hits.total, self.max_results)\n \n \n def ElasticsearchPage(*args, **kwargs): # noqa\ndiff --git a/warehouse/views.py b/warehouse/views.py\n--- a/warehouse/views.py\n+++ b/warehouse/views.py\n@@ -11,7 +11,7 @@\n # limitations under the License.\n \n from pyramid.httpexceptions import (\n- HTTPException, HTTPSeeOther, HTTPMovedPermanently,\n+ HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,\n )\n from pyramid.view import (\n notfound_view_config, forbidden_view_config, view_config,\n@@ -173,12 +173,16 @@\n if request.params.get(\"o\"):\n query = query.sort(request.params[\"o\"])\n \n+ page_num = int(request.params.get(\"page\", 1))\n page = ElasticsearchPage(\n query,\n- page=int(request.params.get(\"page\", 1)),\n+ page=page_num,\n url_maker=paginate_url_factory(request),\n )\n \n+ if page_num > page.page_count:\n+ raise HTTPNotFound\n+\n return {\n \"page\": page,\n \"term\": request.params.get(\"q\"),\n", "issue": "TransportError: TransportError(500, 'search_phase_execution_exception', 'Result window is too lar...\nhttps://app.getsentry.com/python-software-foundation/warehouse-production/issues/118467761/\n\n```\nTransportError: TransportError(500, 'search_phase_execution_exception', 'Result window is too large, from + size must be less than or equal to: [10000] but was [10640]. See the scroll api for a more efficient way to request large data sets. 
This limit can be set by changing the [index.max_result_window] index level parameter.')\n(18 additional frame(s) were not displayed)\n...\n File \"warehouse/utils/http.py\", line 31, in wrapped\n return fn(context, request)\n File \"warehouse/cache/origin/__init__.py\", line 81, in wrapped\n return view(context, request)\n File \"warehouse/views.py\", line 179, in search\n url_maker=paginate_url_factory(request),\n File \"warehouse/utils/paginate.py\", line 43, in ElasticsearchPage\n return Page(*args, **kwargs)\n File \"warehouse/utils/paginate.py\", line 26, in __getitem__\n self.results = self.query[range].execute()\n```\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom paginate import Page\n\n\nclass _ElasticsearchWrapper:\n\n def __init__(self, query):\n self.query = query\n self.results = None\n self.best_guess = None\n\n def __getitem__(self, range):\n if self.results is not None:\n raise RuntimeError(\"Cannot reslice after having already sliced.\")\n self.results = self.query[range].execute()\n\n if hasattr(self.results, \"suggest\"):\n suggestion = self.results.suggest.name_suggestion[0]\n if suggestion.options:\n self.best_guess = suggestion.options[0]\n\n return list(self.results)\n\n def __len__(self):\n if self.results is None:\n raise RuntimeError(\"Cannot get length until a slice.\")\n return self.results.hits.total\n\n\ndef ElasticsearchPage(*args, **kwargs): # noqa\n kwargs.setdefault(\"wrapper_class\", _ElasticsearchWrapper)\n return Page(*args, **kwargs)\n\n\ndef paginate_url_factory(request, query_arg=\"page\"):\n def make_url(page):\n query_seq = [\n (k, v)\n for k, vs in request.GET.dict_of_lists().items()\n for v in vs\n if k != query_arg\n ]\n query_seq += [(query_arg, page)]\n return request.current_route_path(_query=query_seq)\n return make_url\n", "path": "warehouse/utils/paginate.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pyramid.httpexceptions import (\n HTTPException, HTTPSeeOther, HTTPMovedPermanently,\n)\nfrom pyramid.view import (\n notfound_view_config, forbidden_view_config, view_config,\n)\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import aliased, joinedload\n\nfrom warehouse.accounts import REDIRECT_FIELD_NAME\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.cache.http import cache_control\nfrom warehouse.packaging.models import Project, Release, File\nfrom warehouse.utils.row_counter import RowCount\nfrom 
warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory\n\n\n# We need to ensure that our HTTPException views do not require CSRF because\n# they will be called in all different kinds of situations, not all of which\n# will have valid CSRF tokens. For instance, they will be called on POST\n# requests when the CSRF validator attempts to return a 400 response.\n@view_config(context=HTTPException, require_csrf=False)\n@notfound_view_config(append_slash=HTTPMovedPermanently, require_csrf=False)\ndef httpexception_view(exc, request):\n return exc\n\n\n@forbidden_view_config()\ndef forbidden(exc, request):\n # If the forbidden error is because the user isn't logged in, then we'll\n # redirect them to the log in page.\n if request.authenticated_userid is None:\n url = request.route_url(\n \"accounts.login\",\n _query={REDIRECT_FIELD_NAME: request.path_qs},\n )\n return HTTPSeeOther(url)\n\n # If we've reached here, then the user is logged in and they are genuinely\n # not allowed to access this page.\n # TODO: Style the forbidden page.\n return exc\n\n\n@view_config(\n route_name=\"robots.txt\",\n renderer=\"robots.txt\",\n decorator=[\n cache_control(1 * 24 * 60 * 60), # 1 day\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=6 * 60 * 60, # 6 hours\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef robotstxt(request):\n request.response.content_type = \"text/plain\"\n return {}\n\n\n@view_config(\n route_name=\"index\",\n renderer=\"index.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n ),\n ]\n)\ndef index(request):\n project_names = [\n r[0] for r in (\n request.db.query(File.name)\n .group_by(File.name)\n .order_by(func.sum(File.downloads).desc())\n .limit(5)\n .all())\n ]\n release_a = aliased(\n Release,\n request.db.query(Release)\n .distinct(Release.name)\n .filter(Release.name.in_(project_names))\n .order_by(Release.name, Release._pypi_ordering.desc())\n .subquery(),\n )\n top_projects = (\n request.db.query(release_a)\n .options(joinedload(release_a.project),\n joinedload(release_a.uploader))\n .order_by(func.array_idx(project_names, release_a.name))\n .all()\n )\n\n latest_releases = (\n request.db.query(Release)\n .options(joinedload(Release.project),\n joinedload(Release.uploader))\n .order_by(Release.created.desc())\n .limit(5)\n .all()\n )\n\n counts = dict(\n request.db.query(RowCount.table_name, RowCount.count)\n .filter(\n RowCount.table_name.in_([\n Project.__tablename__,\n Release.__tablename__,\n File.__tablename__,\n User.__tablename__,\n ]))\n .all()\n )\n\n return {\n \"latest_releases\": latest_releases,\n \"top_projects\": top_projects,\n \"num_projects\": counts.get(Project.__tablename__, 0),\n \"num_releases\": counts.get(Release.__tablename__, 0),\n \"num_files\": counts.get(File.__tablename__, 0),\n \"num_users\": counts.get(User.__tablename__, 0),\n }\n\n\n@view_config(\n route_name=\"search\",\n renderer=\"search/results.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n )\n ],\n)\ndef search(request):\n if request.params.get(\"q\"):\n query = request.es.query(\n \"multi_match\",\n query=request.params[\"q\"],\n fields=[\n \"name^2\", \"version\", \"author\", \"author_email\", \"maintainer\",\n \"maintainer_email\", \"home_page\", \"license\", \"summary\",\n 
\"description\", \"keywords\", \"platform\", \"download_url\",\n ],\n ).suggest(\n name=\"name_suggestion\",\n text=request.params[\"q\"],\n term={\"field\": \"name\"}\n )\n else:\n query = request.es.query()\n\n if request.params.get(\"o\"):\n query = query.sort(request.params[\"o\"])\n\n page = ElasticsearchPage(\n query,\n page=int(request.params.get(\"page\", 1)),\n url_maker=paginate_url_factory(request),\n )\n\n return {\n \"page\": page,\n \"term\": request.params.get(\"q\"),\n \"order\": request.params.get(\"o\"),\n }\n\n\n@view_config(\n route_name=\"includes.current-user-indicator\",\n renderer=\"includes/current-user-indicator.html\",\n uses_session=True,\n)\ndef current_user_indicator(request):\n return {}\n", "path": "warehouse/views.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom paginate import Page\n\n\nclass _ElasticsearchWrapper:\n\n max_results = 10000\n\n def __init__(self, query):\n self.query = query\n self.results = None\n self.best_guess = None\n\n def __getitem__(self, range):\n # If we're asking for a range that extends past our maximum results,\n # then we need to clamp the start of our slice to our maximum results\n # size, and make sure that the end of our slice >= to that to ensure a\n # consistent slice.\n if range.start > self.max_results:\n range = slice(\n self.max_results,\n max(range.stop, self.max_results),\n range.step,\n )\n\n # If we're being asked for a range that extends past our maximum result\n # then we'll clamp it to the maximum result size and stop there.\n if range.stop > self.max_results:\n range = slice(range.start, self.max_results, range.step)\n\n if self.results is not None:\n raise RuntimeError(\"Cannot reslice after having already sliced.\")\n self.results = self.query[range].execute()\n\n if hasattr(self.results, \"suggest\"):\n suggestion = self.results.suggest.name_suggestion[0]\n if suggestion.options:\n self.best_guess = suggestion.options[0]\n\n return list(self.results)\n\n def __len__(self):\n if self.results is None:\n raise RuntimeError(\"Cannot get length until a slice.\")\n return min(self.results.hits.total, self.max_results)\n\n\ndef ElasticsearchPage(*args, **kwargs): # noqa\n kwargs.setdefault(\"wrapper_class\", _ElasticsearchWrapper)\n return Page(*args, **kwargs)\n\n\ndef paginate_url_factory(request, query_arg=\"page\"):\n def make_url(page):\n query_seq = [\n (k, v)\n for k, vs in request.GET.dict_of_lists().items()\n for v in vs\n if k != query_arg\n ]\n query_seq += [(query_arg, page)]\n return request.current_route_path(_query=query_seq)\n return make_url\n", "path": "warehouse/utils/paginate.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" 
BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pyramid.httpexceptions import (\n HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,\n)\nfrom pyramid.view import (\n notfound_view_config, forbidden_view_config, view_config,\n)\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import aliased, joinedload\n\nfrom warehouse.accounts import REDIRECT_FIELD_NAME\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.cache.http import cache_control\nfrom warehouse.packaging.models import Project, Release, File\nfrom warehouse.utils.row_counter import RowCount\nfrom warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory\n\n\n# We need to ensure that our HTTPException views do not require CSRF because\n# they will be called in all different kinds of situations, not all of which\n# will have valid CSRF tokens. For instance, they will be called on POST\n# requests when the CSRF validator attempts to return a 400 response.\n@view_config(context=HTTPException, require_csrf=False)\n@notfound_view_config(append_slash=HTTPMovedPermanently, require_csrf=False)\ndef httpexception_view(exc, request):\n return exc\n\n\n@forbidden_view_config()\ndef forbidden(exc, request):\n # If the forbidden error is because the user isn't logged in, then we'll\n # redirect them to the log in page.\n if request.authenticated_userid is None:\n url = request.route_url(\n \"accounts.login\",\n _query={REDIRECT_FIELD_NAME: request.path_qs},\n )\n return HTTPSeeOther(url)\n\n # If we've reached here, then the user is logged in and they are genuinely\n # not allowed to access this page.\n # TODO: Style the forbidden page.\n return exc\n\n\n@view_config(\n route_name=\"robots.txt\",\n renderer=\"robots.txt\",\n decorator=[\n cache_control(1 * 24 * 60 * 60), # 1 day\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=6 * 60 * 60, # 6 hours\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef robotstxt(request):\n request.response.content_type = \"text/plain\"\n return {}\n\n\n@view_config(\n route_name=\"index\",\n renderer=\"index.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n ),\n ]\n)\ndef index(request):\n project_names = [\n r[0] for r in (\n request.db.query(File.name)\n .group_by(File.name)\n .order_by(func.sum(File.downloads).desc())\n .limit(5)\n .all())\n ]\n release_a = aliased(\n Release,\n request.db.query(Release)\n .distinct(Release.name)\n .filter(Release.name.in_(project_names))\n .order_by(Release.name, Release._pypi_ordering.desc())\n .subquery(),\n )\n top_projects = (\n request.db.query(release_a)\n .options(joinedload(release_a.project),\n joinedload(release_a.uploader))\n .order_by(func.array_idx(project_names, release_a.name))\n .all()\n )\n\n latest_releases = (\n request.db.query(Release)\n .options(joinedload(Release.project),\n joinedload(Release.uploader))\n .order_by(Release.created.desc())\n .limit(5)\n .all()\n )\n\n counts = dict(\n request.db.query(RowCount.table_name, RowCount.count)\n .filter(\n RowCount.table_name.in_([\n Project.__tablename__,\n Release.__tablename__,\n File.__tablename__,\n User.__tablename__,\n ]))\n .all()\n )\n\n return {\n \"latest_releases\": latest_releases,\n \"top_projects\": 
top_projects,\n \"num_projects\": counts.get(Project.__tablename__, 0),\n \"num_releases\": counts.get(Release.__tablename__, 0),\n \"num_files\": counts.get(File.__tablename__, 0),\n \"num_users\": counts.get(User.__tablename__, 0),\n }\n\n\n@view_config(\n route_name=\"search\",\n renderer=\"search/results.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n )\n ],\n)\ndef search(request):\n if request.params.get(\"q\"):\n query = request.es.query(\n \"multi_match\",\n query=request.params[\"q\"],\n fields=[\n \"name^2\", \"version\", \"author\", \"author_email\", \"maintainer\",\n \"maintainer_email\", \"home_page\", \"license\", \"summary\",\n \"description\", \"keywords\", \"platform\", \"download_url\",\n ],\n ).suggest(\n name=\"name_suggestion\",\n text=request.params[\"q\"],\n term={\"field\": \"name\"}\n )\n else:\n query = request.es.query()\n\n if request.params.get(\"o\"):\n query = query.sort(request.params[\"o\"])\n\n page_num = int(request.params.get(\"page\", 1))\n page = ElasticsearchPage(\n query,\n page=page_num,\n url_maker=paginate_url_factory(request),\n )\n\n if page_num > page.page_count:\n raise HTTPNotFound\n\n return {\n \"page\": page,\n \"term\": request.params.get(\"q\"),\n \"order\": request.params.get(\"o\"),\n }\n\n\n@view_config(\n route_name=\"includes.current-user-indicator\",\n renderer=\"includes/current-user-indicator.html\",\n uses_session=True,\n)\ndef current_user_indicator(request):\n return {}\n", "path": "warehouse/views.py"}]}
| 2,985 | 613 |
gh_patches_debug_19220
|
rasdani/github-patches
|
git_diff
|
napari__napari-3670
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Removing and readding a 3D image layer is broken
## 🐛 Bug
Re-adding a previously removed 3D image layer results in the following error and a failure to add the layer:
```python
ValueError: interpolation must be one of bessel, bicubic, bilinear, blackman, catrom, gaussian, hamming, hanning, hermite, kaiser, lanczos, mitchell, nearest, quadric, sinc, spline16, spline36
```
This only happens with *3D images*. 2D images are fine.
As appropriate, `git bisect` tells me that it was a regression introduced by *me*, over at #3333 :P
## To Reproduce
```python
import napari
import numpy as np
v = napari.Viewer(ndisplay=3)
img = np.random.rand(100, 100, 100)
il = napari.layers.Image(img)
v.add_layer(il)
# everything is fine so far
```
```python
v.layers.remove(il)
v.add_layer(il)
# error!
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/_vispy/layers/image.py`
Content:
```
1 import warnings
2
3 import numpy as np
4 from vispy.color import Colormap as VispyColormap
5 from vispy.scene.node import Node
6
7 from ...utils.translations import trans
8 from ..utils.gl import fix_data_dtype
9 from ..visuals.image import Image as ImageNode
10 from ..visuals.volume import Volume as VolumeNode
11 from .base import VispyBaseLayer
12
13
14 class ImageLayerNode:
15 def __init__(self, custom_node: Node = None):
16 self._custom_node = custom_node
17 self._image_node = ImageNode(None, method='auto')
18 self._volume_node = VolumeNode(
19 np.zeros((1, 1, 1), dtype=np.float32), clim=[0, 1]
20 )
21
22 def get_node(self, ndisplay: int) -> Node:
23
24 # Return custom node if we have one.
25 if self._custom_node is not None:
26 return self._custom_node
27
28 # Return Image or Volume node based on 2D or 3D.
29 if ndisplay == 2:
30 return self._image_node
31 return self._volume_node
32
33
34 class VispyImageLayer(VispyBaseLayer):
35 def __init__(self, layer, node=None):
36
37 # Use custom node from caller, or our standard image/volume nodes.
38 self._layer_node = ImageLayerNode(node)
39
40 # Default to 2D (image) node.
41 super().__init__(layer, self._layer_node.get_node(2))
42
43 self._array_like = True
44
45 self.layer.events.rendering.connect(self._on_rendering_change)
46 self.layer.events.interpolation.connect(self._on_interpolation_change)
47 self.layer.events.colormap.connect(self._on_colormap_change)
48 self.layer.events.contrast_limits.connect(
49 self._on_contrast_limits_change
50 )
51 self.layer.events.gamma.connect(self._on_gamma_change)
52 self.layer.events.iso_threshold.connect(self._on_iso_threshold_change)
53 self.layer.events.attenuation.connect(self._on_attenuation_change)
54 self.layer.experimental_slicing_plane.events.enabled.connect(
55 self._on_experimental_slicing_plane_enabled_change
56 )
57 self.layer.experimental_slicing_plane.events.position.connect(
58 self._on_experimental_slicing_plane_position_change
59 )
60 self.layer.experimental_slicing_plane.events.thickness.connect(
61 self._on_experimental_slicing_plane_thickness_change
62 )
63 self.layer.experimental_slicing_plane.events.normal.connect(
64 self._on_experimental_slicing_plane_normal_change
65 )
66
67 self.reset()
68 self._on_data_change()
69
70 def _on_display_change(self, data=None):
71
72 parent = self.node.parent
73 self.node.parent = None
74
75 self.node = self._layer_node.get_node(self.layer._ndisplay)
76
77 if data is None:
78 data = np.zeros((1,) * self.layer._ndisplay)
79
80 if self.layer._empty:
81 self.node.visible = False
82 else:
83 self.node.visible = self.layer.visible
84
85 if self.layer.loaded:
86 self.node.set_data(data)
87
88 self.node.parent = parent
89 self.node.order = self.order
90 self.reset()
91
92 def _on_data_change(self):
93 if not self.layer.loaded:
94 # Do nothing if we are not yet loaded. Calling astype below could
95 # be very expensive. Lets not do it until our data has been loaded.
96 return
97
98 self._set_node_data(self.node, self.layer._data_view)
99
100 def _set_node_data(self, node, data):
101 """Our self.layer._data_view has been updated, update our node."""
102
103 data = fix_data_dtype(data)
104
105 if self.layer._ndisplay == 3 and self.layer.ndim == 2:
106 data = np.expand_dims(data, axis=0)
107
108 # Check if data exceeds MAX_TEXTURE_SIZE and downsample
109 if self.MAX_TEXTURE_SIZE_2D is not None and self.layer._ndisplay == 2:
110 data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_2D)
111 elif (
112 self.MAX_TEXTURE_SIZE_3D is not None and self.layer._ndisplay == 3
113 ):
114 data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_3D)
115
116 # Check if ndisplay has changed current node type needs updating
117 if (
118 self.layer._ndisplay == 3 and not isinstance(node, VolumeNode)
119 ) or (self.layer._ndisplay == 2 and not isinstance(node, ImageNode)):
120 self._on_display_change(data)
121 else:
122 node.set_data(data)
123
124 if self.layer._empty:
125 node.visible = False
126 else:
127 node.visible = self.layer.visible
128
129 # Call to update order of translation values with new dims:
130 self._on_matrix_change()
131 node.update()
132
133 def _on_interpolation_change(self):
134 self.node.interpolation = self.layer.interpolation
135
136 def _on_rendering_change(self):
137 if isinstance(self.node, VolumeNode):
138 self.node.method = self.layer.rendering
139 self._on_attenuation_change()
140 self._on_iso_threshold_change()
141
142 def _on_colormap_change(self):
143 self.node.cmap = VispyColormap(*self.layer.colormap)
144
145 def _on_contrast_limits_change(self):
146 self.node.clim = self.layer.contrast_limits
147
148 def _on_gamma_change(self):
149 if len(self.node.shared_program.frag._set_items) > 0:
150 self.node.gamma = self.layer.gamma
151
152 def _on_iso_threshold_change(self):
153 if isinstance(self.node, VolumeNode):
154 self.node.threshold = self.layer.iso_threshold
155
156 def _on_attenuation_change(self):
157 if isinstance(self.node, VolumeNode):
158 self.node.attenuation = self.layer.attenuation
159
160 def _on_experimental_slicing_plane_enabled_change(self):
161 if isinstance(self.node, VolumeNode):
162 if self.layer.experimental_slicing_plane.enabled is True:
163 raycasting_mode = 'plane'
164 else:
165 raycasting_mode = 'volume'
166 self.node.raycasting_mode = raycasting_mode
167
168 def _on_experimental_slicing_plane_thickness_change(self):
169 if isinstance(self.node, VolumeNode):
170 self.node.plane_thickness = (
171 self.layer.experimental_slicing_plane.thickness
172 )
173
174 def _on_experimental_slicing_plane_position_change(self):
175 if isinstance(self.node, VolumeNode):
176 self.node.plane_position = (
177 self.layer.experimental_slicing_plane.position
178 )
179
180 def _on_experimental_slicing_plane_normal_change(self):
181 if isinstance(self.node, VolumeNode):
182 self.node.plane_normal = (
183 self.layer.experimental_slicing_plane.normal
184 )
185
186 def reset(self, event=None):
187 super().reset()
188 self._on_interpolation_change()
189 self._on_colormap_change()
190 self._on_contrast_limits_change()
191 self._on_gamma_change()
192 self._on_rendering_change()
193 self._on_experimental_slicing_plane_enabled_change()
194 self._on_experimental_slicing_plane_position_change()
195 self._on_experimental_slicing_plane_normal_change()
196 self._on_experimental_slicing_plane_thickness_change()
197
198 def downsample_texture(self, data, MAX_TEXTURE_SIZE):
199 """Downsample data based on maximum allowed texture size.
200
201 Parameters
202 ----------
203 data : array
204 Data to be downsampled if needed.
205 MAX_TEXTURE_SIZE : int
206 Maximum allowed texture size.
207
208 Returns
209 -------
210 data : array
211 Data that now fits inside texture.
212 """
213 if np.any(np.greater(data.shape, MAX_TEXTURE_SIZE)):
214 if self.layer.multiscale:
215 raise ValueError(
216 trans._(
217 "Shape of in dividual tiles in multiscale {shape} cannot exceed GL_MAX_TEXTURE_SIZE {texture_size}. Rendering is currently in {ndisplay}D mode.",
218 deferred=True,
219 shape=data.shape,
220 texture_size=MAX_TEXTURE_SIZE,
221 ndisplay=self.layer._ndisplay,
222 )
223 )
224 warnings.warn(
225 trans._(
226 "data shape {shape} exceeds GL_MAX_TEXTURE_SIZE {texture_size} in at least one axis and will be downsampled. Rendering is currently in {ndisplay}D mode.",
227 deferred=True,
228 shape=data.shape,
229 texture_size=MAX_TEXTURE_SIZE,
230 ndisplay=self.layer._ndisplay,
231 )
232 )
233 downsample = np.ceil(
234 np.divide(data.shape, MAX_TEXTURE_SIZE)
235 ).astype(int)
236 scale = np.ones(self.layer.ndim)
237 for i, d in enumerate(self.layer._dims_displayed):
238 scale[d] = downsample[i]
239 self.layer._transforms['tile2data'].scale = scale
240 self._on_matrix_change()
241 slices = tuple(slice(None, None, ds) for ds in downsample)
242 data = data[slices]
243 return data
244
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/napari/_vispy/layers/image.py b/napari/_vispy/layers/image.py
--- a/napari/_vispy/layers/image.py
+++ b/napari/_vispy/layers/image.py
@@ -64,18 +64,21 @@
self._on_experimental_slicing_plane_normal_change
)
+ # display_change is special (like data_change) because it requires a self.reset()
+ # this means that we have to call it manually. Also, it must be called before reset
+ # in order to set the appropriate node first
+ self._on_display_change()
self.reset()
self._on_data_change()
def _on_display_change(self, data=None):
-
parent = self.node.parent
self.node.parent = None
self.node = self._layer_node.get_node(self.layer._ndisplay)
if data is None:
- data = np.zeros((1,) * self.layer._ndisplay)
+ data = np.zeros((1,) * self.layer._ndisplay, dtype=np.float32)
if self.layer._empty:
self.node.visible = False
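The comment added by the patch captures the ordering constraint: `_on_display_change()` has to run before `reset()` so that the correct node (Volume for 3D, Image for 2D) is in place before interpolation and colormap settings are applied. A quick verification sketch, reusing the `v` and `il` objects from the reproduction in the issue above (not part of the patch itself):

```python
# Re-run the issue's reproduction against the patched code.
# Assumes `v` (napari.Viewer with ndisplay=3) and `il` (3D Image layer) from above.
v.layers.remove(il)
v.add_layer(il)  # with the fix, no interpolation ValueError is raised
```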
|
{"golden_diff": "diff --git a/napari/_vispy/layers/image.py b/napari/_vispy/layers/image.py\n--- a/napari/_vispy/layers/image.py\n+++ b/napari/_vispy/layers/image.py\n@@ -64,18 +64,21 @@\n self._on_experimental_slicing_plane_normal_change\n )\n \n+ # display_change is special (like data_change) because it requires a self.reset()\n+ # this means that we have to call it manually. Also, it must be called before reset\n+ # in order to set the appropriate node first\n+ self._on_display_change()\n self.reset()\n self._on_data_change()\n \n def _on_display_change(self, data=None):\n-\n parent = self.node.parent\n self.node.parent = None\n \n self.node = self._layer_node.get_node(self.layer._ndisplay)\n \n if data is None:\n- data = np.zeros((1,) * self.layer._ndisplay)\n+ data = np.zeros((1,) * self.layer._ndisplay, dtype=np.float32)\n \n if self.layer._empty:\n self.node.visible = False\n", "issue": "Removing and readding a 3D image layer is broken\n## \ud83d\udc1b Bug\r\n\r\nRe-adding a previously removed 3D image layer results in the following error and a failure to add the layer:\r\n\r\n```python\r\nValueError: interpolation must be one of bessel, bicubic, bilinear, blackman, catrom, gaussian, hamming, hanning, hermite, kaiser, lanczos, mitchell, nearest, quadric, sinc, spline16, spline36\r\n```\r\n\r\nThis only happens with *3D images*. 2D images are fine.\r\n\r\nAs appropriate, `git bisect` tells me that it was a regression introduced by *me*, over at #3333 :P\r\n\r\n## To Reproduce\r\n\r\n```python\r\nimport napari\r\nimport numpy as np\r\n\r\nv = napari.Viewer(ndisplay=3)\r\nimg = np.random.rand(100, 100, 100)\r\nil = napari.layers.Image(img)\r\nv.add_layer(il)\r\n# everything is fine so far\r\n```\r\n```python\r\nv.layers.remove(il)\r\nv.add_layer(il)\r\n# error!\r\n```\nRemoving and readding a 3D image layer is broken\n## \ud83d\udc1b Bug\r\n\r\nRe-adding a previously removed 3D image layer results in the following error and a failure to add the layer:\r\n\r\n```python\r\nValueError: interpolation must be one of bessel, bicubic, bilinear, blackman, catrom, gaussian, hamming, hanning, hermite, kaiser, lanczos, mitchell, nearest, quadric, sinc, spline16, spline36\r\n```\r\n\r\nThis only happens with *3D images*. 
2D images are fine.\r\n\r\nAs appropriate, `git bisect` tells me that it was a regression introduced by *me*, over at #3333 :P\r\n\r\n## To Reproduce\r\n\r\n```python\r\nimport napari\r\nimport numpy as np\r\n\r\nv = napari.Viewer(ndisplay=3)\r\nimg = np.random.rand(100, 100, 100)\r\nil = napari.layers.Image(img)\r\nv.add_layer(il)\r\n# everything is fine so far\r\n```\r\n```python\r\nv.layers.remove(il)\r\nv.add_layer(il)\r\n# error!\r\n```\n", "before_files": [{"content": "import warnings\n\nimport numpy as np\nfrom vispy.color import Colormap as VispyColormap\nfrom vispy.scene.node import Node\n\nfrom ...utils.translations import trans\nfrom ..utils.gl import fix_data_dtype\nfrom ..visuals.image import Image as ImageNode\nfrom ..visuals.volume import Volume as VolumeNode\nfrom .base import VispyBaseLayer\n\n\nclass ImageLayerNode:\n def __init__(self, custom_node: Node = None):\n self._custom_node = custom_node\n self._image_node = ImageNode(None, method='auto')\n self._volume_node = VolumeNode(\n np.zeros((1, 1, 1), dtype=np.float32), clim=[0, 1]\n )\n\n def get_node(self, ndisplay: int) -> Node:\n\n # Return custom node if we have one.\n if self._custom_node is not None:\n return self._custom_node\n\n # Return Image or Volume node based on 2D or 3D.\n if ndisplay == 2:\n return self._image_node\n return self._volume_node\n\n\nclass VispyImageLayer(VispyBaseLayer):\n def __init__(self, layer, node=None):\n\n # Use custom node from caller, or our standard image/volume nodes.\n self._layer_node = ImageLayerNode(node)\n\n # Default to 2D (image) node.\n super().__init__(layer, self._layer_node.get_node(2))\n\n self._array_like = True\n\n self.layer.events.rendering.connect(self._on_rendering_change)\n self.layer.events.interpolation.connect(self._on_interpolation_change)\n self.layer.events.colormap.connect(self._on_colormap_change)\n self.layer.events.contrast_limits.connect(\n self._on_contrast_limits_change\n )\n self.layer.events.gamma.connect(self._on_gamma_change)\n self.layer.events.iso_threshold.connect(self._on_iso_threshold_change)\n self.layer.events.attenuation.connect(self._on_attenuation_change)\n self.layer.experimental_slicing_plane.events.enabled.connect(\n self._on_experimental_slicing_plane_enabled_change\n )\n self.layer.experimental_slicing_plane.events.position.connect(\n self._on_experimental_slicing_plane_position_change\n )\n self.layer.experimental_slicing_plane.events.thickness.connect(\n self._on_experimental_slicing_plane_thickness_change\n )\n self.layer.experimental_slicing_plane.events.normal.connect(\n self._on_experimental_slicing_plane_normal_change\n )\n\n self.reset()\n self._on_data_change()\n\n def _on_display_change(self, data=None):\n\n parent = self.node.parent\n self.node.parent = None\n\n self.node = self._layer_node.get_node(self.layer._ndisplay)\n\n if data is None:\n data = np.zeros((1,) * self.layer._ndisplay)\n\n if self.layer._empty:\n self.node.visible = False\n else:\n self.node.visible = self.layer.visible\n\n if self.layer.loaded:\n self.node.set_data(data)\n\n self.node.parent = parent\n self.node.order = self.order\n self.reset()\n\n def _on_data_change(self):\n if not self.layer.loaded:\n # Do nothing if we are not yet loaded. Calling astype below could\n # be very expensive. 
Lets not do it until our data has been loaded.\n return\n\n self._set_node_data(self.node, self.layer._data_view)\n\n def _set_node_data(self, node, data):\n \"\"\"Our self.layer._data_view has been updated, update our node.\"\"\"\n\n data = fix_data_dtype(data)\n\n if self.layer._ndisplay == 3 and self.layer.ndim == 2:\n data = np.expand_dims(data, axis=0)\n\n # Check if data exceeds MAX_TEXTURE_SIZE and downsample\n if self.MAX_TEXTURE_SIZE_2D is not None and self.layer._ndisplay == 2:\n data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_2D)\n elif (\n self.MAX_TEXTURE_SIZE_3D is not None and self.layer._ndisplay == 3\n ):\n data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_3D)\n\n # Check if ndisplay has changed current node type needs updating\n if (\n self.layer._ndisplay == 3 and not isinstance(node, VolumeNode)\n ) or (self.layer._ndisplay == 2 and not isinstance(node, ImageNode)):\n self._on_display_change(data)\n else:\n node.set_data(data)\n\n if self.layer._empty:\n node.visible = False\n else:\n node.visible = self.layer.visible\n\n # Call to update order of translation values with new dims:\n self._on_matrix_change()\n node.update()\n\n def _on_interpolation_change(self):\n self.node.interpolation = self.layer.interpolation\n\n def _on_rendering_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.method = self.layer.rendering\n self._on_attenuation_change()\n self._on_iso_threshold_change()\n\n def _on_colormap_change(self):\n self.node.cmap = VispyColormap(*self.layer.colormap)\n\n def _on_contrast_limits_change(self):\n self.node.clim = self.layer.contrast_limits\n\n def _on_gamma_change(self):\n if len(self.node.shared_program.frag._set_items) > 0:\n self.node.gamma = self.layer.gamma\n\n def _on_iso_threshold_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.threshold = self.layer.iso_threshold\n\n def _on_attenuation_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.attenuation = self.layer.attenuation\n\n def _on_experimental_slicing_plane_enabled_change(self):\n if isinstance(self.node, VolumeNode):\n if self.layer.experimental_slicing_plane.enabled is True:\n raycasting_mode = 'plane'\n else:\n raycasting_mode = 'volume'\n self.node.raycasting_mode = raycasting_mode\n\n def _on_experimental_slicing_plane_thickness_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.plane_thickness = (\n self.layer.experimental_slicing_plane.thickness\n )\n\n def _on_experimental_slicing_plane_position_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.plane_position = (\n self.layer.experimental_slicing_plane.position\n )\n\n def _on_experimental_slicing_plane_normal_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.plane_normal = (\n self.layer.experimental_slicing_plane.normal\n )\n\n def reset(self, event=None):\n super().reset()\n self._on_interpolation_change()\n self._on_colormap_change()\n self._on_contrast_limits_change()\n self._on_gamma_change()\n self._on_rendering_change()\n self._on_experimental_slicing_plane_enabled_change()\n self._on_experimental_slicing_plane_position_change()\n self._on_experimental_slicing_plane_normal_change()\n self._on_experimental_slicing_plane_thickness_change()\n\n def downsample_texture(self, data, MAX_TEXTURE_SIZE):\n \"\"\"Downsample data based on maximum allowed texture size.\n\n Parameters\n ----------\n data : array\n Data to be downsampled if needed.\n MAX_TEXTURE_SIZE : int\n Maximum allowed texture size.\n\n Returns\n -------\n 
data : array\n Data that now fits inside texture.\n \"\"\"\n if np.any(np.greater(data.shape, MAX_TEXTURE_SIZE)):\n if self.layer.multiscale:\n raise ValueError(\n trans._(\n \"Shape of in dividual tiles in multiscale {shape} cannot exceed GL_MAX_TEXTURE_SIZE {texture_size}. Rendering is currently in {ndisplay}D mode.\",\n deferred=True,\n shape=data.shape,\n texture_size=MAX_TEXTURE_SIZE,\n ndisplay=self.layer._ndisplay,\n )\n )\n warnings.warn(\n trans._(\n \"data shape {shape} exceeds GL_MAX_TEXTURE_SIZE {texture_size} in at least one axis and will be downsampled. Rendering is currently in {ndisplay}D mode.\",\n deferred=True,\n shape=data.shape,\n texture_size=MAX_TEXTURE_SIZE,\n ndisplay=self.layer._ndisplay,\n )\n )\n downsample = np.ceil(\n np.divide(data.shape, MAX_TEXTURE_SIZE)\n ).astype(int)\n scale = np.ones(self.layer.ndim)\n for i, d in enumerate(self.layer._dims_displayed):\n scale[d] = downsample[i]\n self.layer._transforms['tile2data'].scale = scale\n self._on_matrix_change()\n slices = tuple(slice(None, None, ds) for ds in downsample)\n data = data[slices]\n return data\n", "path": "napari/_vispy/layers/image.py"}], "after_files": [{"content": "import warnings\n\nimport numpy as np\nfrom vispy.color import Colormap as VispyColormap\nfrom vispy.scene.node import Node\n\nfrom ...utils.translations import trans\nfrom ..utils.gl import fix_data_dtype\nfrom ..visuals.image import Image as ImageNode\nfrom ..visuals.volume import Volume as VolumeNode\nfrom .base import VispyBaseLayer\n\n\nclass ImageLayerNode:\n def __init__(self, custom_node: Node = None):\n self._custom_node = custom_node\n self._image_node = ImageNode(None, method='auto')\n self._volume_node = VolumeNode(\n np.zeros((1, 1, 1), dtype=np.float32), clim=[0, 1]\n )\n\n def get_node(self, ndisplay: int) -> Node:\n\n # Return custom node if we have one.\n if self._custom_node is not None:\n return self._custom_node\n\n # Return Image or Volume node based on 2D or 3D.\n if ndisplay == 2:\n return self._image_node\n return self._volume_node\n\n\nclass VispyImageLayer(VispyBaseLayer):\n def __init__(self, layer, node=None):\n\n # Use custom node from caller, or our standard image/volume nodes.\n self._layer_node = ImageLayerNode(node)\n\n # Default to 2D (image) node.\n super().__init__(layer, self._layer_node.get_node(2))\n\n self._array_like = True\n\n self.layer.events.rendering.connect(self._on_rendering_change)\n self.layer.events.interpolation.connect(self._on_interpolation_change)\n self.layer.events.colormap.connect(self._on_colormap_change)\n self.layer.events.contrast_limits.connect(\n self._on_contrast_limits_change\n )\n self.layer.events.gamma.connect(self._on_gamma_change)\n self.layer.events.iso_threshold.connect(self._on_iso_threshold_change)\n self.layer.events.attenuation.connect(self._on_attenuation_change)\n self.layer.experimental_slicing_plane.events.enabled.connect(\n self._on_experimental_slicing_plane_enabled_change\n )\n self.layer.experimental_slicing_plane.events.position.connect(\n self._on_experimental_slicing_plane_position_change\n )\n self.layer.experimental_slicing_plane.events.thickness.connect(\n self._on_experimental_slicing_plane_thickness_change\n )\n self.layer.experimental_slicing_plane.events.normal.connect(\n self._on_experimental_slicing_plane_normal_change\n )\n\n # display_change is special (like data_change) because it requires a self.reset()\n # this means that we have to call it manually. 
Also, it must be called before reset\n # in order to set the appropriate node first\n self._on_display_change()\n self.reset()\n self._on_data_change()\n\n def _on_display_change(self, data=None):\n parent = self.node.parent\n self.node.parent = None\n\n self.node = self._layer_node.get_node(self.layer._ndisplay)\n\n if data is None:\n data = np.zeros((1,) * self.layer._ndisplay, dtype=np.float32)\n\n if self.layer._empty:\n self.node.visible = False\n else:\n self.node.visible = self.layer.visible\n\n if self.layer.loaded:\n self.node.set_data(data)\n\n self.node.parent = parent\n self.node.order = self.order\n self.reset()\n\n def _on_data_change(self):\n if not self.layer.loaded:\n # Do nothing if we are not yet loaded. Calling astype below could\n # be very expensive. Lets not do it until our data has been loaded.\n return\n\n self._set_node_data(self.node, self.layer._data_view)\n\n def _set_node_data(self, node, data):\n \"\"\"Our self.layer._data_view has been updated, update our node.\"\"\"\n\n data = fix_data_dtype(data)\n\n if self.layer._ndisplay == 3 and self.layer.ndim == 2:\n data = np.expand_dims(data, axis=0)\n\n # Check if data exceeds MAX_TEXTURE_SIZE and downsample\n if self.MAX_TEXTURE_SIZE_2D is not None and self.layer._ndisplay == 2:\n data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_2D)\n elif (\n self.MAX_TEXTURE_SIZE_3D is not None and self.layer._ndisplay == 3\n ):\n data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_3D)\n\n # Check if ndisplay has changed current node type needs updating\n if (\n self.layer._ndisplay == 3 and not isinstance(node, VolumeNode)\n ) or (self.layer._ndisplay == 2 and not isinstance(node, ImageNode)):\n self._on_display_change(data)\n else:\n node.set_data(data)\n\n if self.layer._empty:\n node.visible = False\n else:\n node.visible = self.layer.visible\n\n # Call to update order of translation values with new dims:\n self._on_matrix_change()\n node.update()\n\n def _on_interpolation_change(self):\n self.node.interpolation = self.layer.interpolation\n\n def _on_rendering_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.method = self.layer.rendering\n self._on_attenuation_change()\n self._on_iso_threshold_change()\n\n def _on_colormap_change(self):\n self.node.cmap = VispyColormap(*self.layer.colormap)\n\n def _on_contrast_limits_change(self):\n self.node.clim = self.layer.contrast_limits\n\n def _on_gamma_change(self):\n if len(self.node.shared_program.frag._set_items) > 0:\n self.node.gamma = self.layer.gamma\n\n def _on_iso_threshold_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.threshold = self.layer.iso_threshold\n\n def _on_attenuation_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.attenuation = self.layer.attenuation\n\n def _on_experimental_slicing_plane_enabled_change(self):\n if isinstance(self.node, VolumeNode):\n if self.layer.experimental_slicing_plane.enabled is True:\n raycasting_mode = 'plane'\n else:\n raycasting_mode = 'volume'\n self.node.raycasting_mode = raycasting_mode\n\n def _on_experimental_slicing_plane_thickness_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.plane_thickness = (\n self.layer.experimental_slicing_plane.thickness\n )\n\n def _on_experimental_slicing_plane_position_change(self):\n if isinstance(self.node, VolumeNode):\n self.node.plane_position = (\n self.layer.experimental_slicing_plane.position\n )\n\n def _on_experimental_slicing_plane_normal_change(self):\n if isinstance(self.node, VolumeNode):\n 
self.node.plane_normal = (\n self.layer.experimental_slicing_plane.normal\n )\n\n def reset(self, event=None):\n super().reset()\n self._on_interpolation_change()\n self._on_colormap_change()\n self._on_contrast_limits_change()\n self._on_gamma_change()\n self._on_rendering_change()\n self._on_experimental_slicing_plane_enabled_change()\n self._on_experimental_slicing_plane_position_change()\n self._on_experimental_slicing_plane_normal_change()\n self._on_experimental_slicing_plane_thickness_change()\n\n def downsample_texture(self, data, MAX_TEXTURE_SIZE):\n \"\"\"Downsample data based on maximum allowed texture size.\n\n Parameters\n ----------\n data : array\n Data to be downsampled if needed.\n MAX_TEXTURE_SIZE : int\n Maximum allowed texture size.\n\n Returns\n -------\n data : array\n Data that now fits inside texture.\n \"\"\"\n if np.any(np.greater(data.shape, MAX_TEXTURE_SIZE)):\n if self.layer.multiscale:\n raise ValueError(\n trans._(\n \"Shape of in dividual tiles in multiscale {shape} cannot exceed GL_MAX_TEXTURE_SIZE {texture_size}. Rendering is currently in {ndisplay}D mode.\",\n deferred=True,\n shape=data.shape,\n texture_size=MAX_TEXTURE_SIZE,\n ndisplay=self.layer._ndisplay,\n )\n )\n warnings.warn(\n trans._(\n \"data shape {shape} exceeds GL_MAX_TEXTURE_SIZE {texture_size} in at least one axis and will be downsampled. Rendering is currently in {ndisplay}D mode.\",\n deferred=True,\n shape=data.shape,\n texture_size=MAX_TEXTURE_SIZE,\n ndisplay=self.layer._ndisplay,\n )\n )\n downsample = np.ceil(\n np.divide(data.shape, MAX_TEXTURE_SIZE)\n ).astype(int)\n scale = np.ones(self.layer.ndim)\n for i, d in enumerate(self.layer._dims_displayed):\n scale[d] = downsample[i]\n self.layer._transforms['tile2data'].scale = scale\n self._on_matrix_change()\n slices = tuple(slice(None, None, ds) for ds in downsample)\n data = data[slices]\n return data\n", "path": "napari/_vispy/layers/image.py"}]}
| 3,269 | 256 |
gh_patches_debug_28173
|
rasdani/github-patches
|
git_diff
|
CMSgov__bluebutton-web-server-5
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
When editing an application user can select capabilities not allowed by his groups
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/dot_ext/views/application.py`
Content:
```
1 from django.core.urlresolvers import reverse_lazy
2 from django.forms.models import modelform_factory
3 from django.views.generic import CreateView, DetailView, DeleteView, ListView, UpdateView
4
5 from braces.views import LoginRequiredMixin
6
7 from oauth2_provider.models import get_application_model
8
9
10 class ApplicationOwnerIsUserMixin(LoginRequiredMixin):
11 """
12 This mixin is used to provide an Application queryset filtered by the current request.user.
13 """
14 fields = '__all__'
15
16 def get_queryset(self):
17 return get_application_model().objects.filter(user=self.request.user)
18
19
20 class ApplicationRegistration(LoginRequiredMixin, CreateView):
21 """
22 View used to register a new Application for the request.user
23 """
24 template_name = "application_registration_form.html"
25
26 def get_form_class(self):
27 """
28 Returns the form class for the application model
29 """
30
31 mff = modelform_factory(
32 get_application_model(),
33 fields=('name', 'client_id', 'client_secret', 'client_type',
34 'authorization_grant_type', 'scope', 'redirect_uris', )
35 )
36 return mff
37
38
39
40 def form_valid(self, form):
41 form.instance.user = self.request.user
42 return super(ApplicationRegistration, self).form_valid(form)
43
44
45 class ApplicationDetail(ApplicationOwnerIsUserMixin, DetailView):
46 """
47 Detail view for an application instance owned by the request.user
48 """
49 context_object_name = 'application'
50 template_name = "application_detail.html"
51
52
53 class ApplicationList(ApplicationOwnerIsUserMixin, ListView):
54 """
55 List view for all the applications owned by the request.user
56 """
57 context_object_name = 'applications'
58 template_name = "application_list.html"
59
60
61 class ApplicationDelete(ApplicationOwnerIsUserMixin, DeleteView):
62 """
63 View used to delete an application owned by the request.user
64 """
65 context_object_name = 'application'
66 success_url = reverse_lazy('dote_list')
67 template_name = "application_confirm_delete.html"
68
69
70 class ApplicationUpdate(ApplicationOwnerIsUserMixin, UpdateView):
71 """
72 View used to update an application owned by the request.user
73 """
74 context_object_name = 'application'
75 template_name = "application_form.html"
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/dot_ext/views/application.py b/apps/dot_ext/views/application.py
--- a/apps/dot_ext/views/application.py
+++ b/apps/dot_ext/views/application.py
@@ -6,6 +6,8 @@
from oauth2_provider.models import get_application_model
+from ..forms import CustomRegisterApplicationForm
+
class ApplicationOwnerIsUserMixin(LoginRequiredMixin):
"""
@@ -27,15 +29,15 @@
"""
Returns the form class for the application model
"""
-
+
mff = modelform_factory(
get_application_model(),
fields=('name', 'client_id', 'client_secret', 'client_type',
'authorization_grant_type', 'scope', 'redirect_uris', )
)
return mff
-
-
+
+
def form_valid(self, form):
form.instance.user = self.request.user
@@ -73,3 +75,15 @@
"""
context_object_name = 'application'
template_name = "application_form.html"
+
+ fields = None
+ form_class = CustomRegisterApplicationForm
+
+ def get_form_kwargs(self):
+ """
+ Add `user` to kwargs because it is required by the constructor of
+ CustomRegisterApplicationForm class.
+ """
+ kwargs = super(ApplicationUpdate, self).get_form_kwargs()
+ kwargs['user'] = self.request.user
+ return kwargs
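The patch depends on `CustomRegisterApplicationForm` from `apps/dot_ext/forms.py`, which is not shown in the files above. A minimal sketch of what such a form is assumed to look like, given that the view passes the requesting user through `get_form_kwargs()` (the class body and field list here are assumptions, not the project's actual code):

```python
# Hypothetical sketch only; the real CustomRegisterApplicationForm lives in
# apps/dot_ext/forms.py and is not included in this record.
from django import forms

from oauth2_provider.models import get_application_model


class CustomRegisterApplicationForm(forms.ModelForm):
    class Meta:
        model = get_application_model()
        fields = ('name', 'client_id', 'client_secret', 'client_type',
                  'authorization_grant_type', 'scope', 'redirect_uris')

    def __init__(self, user, *args, **kwargs):
        # The view's get_form_kwargs() supplies the requesting user; the
        # assumed behaviour is to use it to limit the selectable
        # capabilities to those allowed by the user's groups.
        self.user = user
        super(CustomRegisterApplicationForm, self).__init__(*args, **kwargs)
```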
|
{"golden_diff": "diff --git a/apps/dot_ext/views/application.py b/apps/dot_ext/views/application.py\n--- a/apps/dot_ext/views/application.py\n+++ b/apps/dot_ext/views/application.py\n@@ -6,6 +6,8 @@\n \n from oauth2_provider.models import get_application_model\n \n+from ..forms import CustomRegisterApplicationForm\n+\n \n class ApplicationOwnerIsUserMixin(LoginRequiredMixin):\n \"\"\"\n@@ -27,15 +29,15 @@\n \"\"\"\n Returns the form class for the application model\n \"\"\"\n- \n+\n mff = modelform_factory(\n get_application_model(),\n fields=('name', 'client_id', 'client_secret', 'client_type',\n 'authorization_grant_type', 'scope', 'redirect_uris', )\n )\n return mff\n- \n- \n+\n+\n \n def form_valid(self, form):\n form.instance.user = self.request.user\n@@ -73,3 +75,15 @@\n \"\"\"\n context_object_name = 'application'\n template_name = \"application_form.html\"\n+\n+ fields = None\n+ form_class = CustomRegisterApplicationForm\n+\n+ def get_form_kwargs(self):\n+ \"\"\"\n+ Add `user` to kwargs because it is required by the constructor of\n+ CustomRegisterApplicationForm class.\n+ \"\"\"\n+ kwargs = super(ApplicationUpdate, self).get_form_kwargs()\n+ kwargs['user'] = self.request.user\n+ return kwargs\n", "issue": "When editing an application user can select capabilities not allowed by his groups\n\n", "before_files": [{"content": "from django.core.urlresolvers import reverse_lazy\nfrom django.forms.models import modelform_factory\nfrom django.views.generic import CreateView, DetailView, DeleteView, ListView, UpdateView\n\nfrom braces.views import LoginRequiredMixin\n\nfrom oauth2_provider.models import get_application_model\n\n\nclass ApplicationOwnerIsUserMixin(LoginRequiredMixin):\n \"\"\"\n This mixin is used to provide an Application queryset filtered by the current request.user.\n \"\"\"\n fields = '__all__'\n\n def get_queryset(self):\n return get_application_model().objects.filter(user=self.request.user)\n\n\nclass ApplicationRegistration(LoginRequiredMixin, CreateView):\n \"\"\"\n View used to register a new Application for the request.user\n \"\"\"\n template_name = \"application_registration_form.html\"\n\n def get_form_class(self):\n \"\"\"\n Returns the form class for the application model\n \"\"\"\n \n mff = modelform_factory(\n get_application_model(),\n fields=('name', 'client_id', 'client_secret', 'client_type',\n 'authorization_grant_type', 'scope', 'redirect_uris', )\n )\n return mff\n \n \n\n def form_valid(self, form):\n form.instance.user = self.request.user\n return super(ApplicationRegistration, self).form_valid(form)\n\n\nclass ApplicationDetail(ApplicationOwnerIsUserMixin, DetailView):\n \"\"\"\n Detail view for an application instance owned by the request.user\n \"\"\"\n context_object_name = 'application'\n template_name = \"application_detail.html\"\n\n\nclass ApplicationList(ApplicationOwnerIsUserMixin, ListView):\n \"\"\"\n List view for all the applications owned by the request.user\n \"\"\"\n context_object_name = 'applications'\n template_name = \"application_list.html\"\n\n\nclass ApplicationDelete(ApplicationOwnerIsUserMixin, DeleteView):\n \"\"\"\n View used to delete an application owned by the request.user\n \"\"\"\n context_object_name = 'application'\n success_url = reverse_lazy('dote_list')\n template_name = \"application_confirm_delete.html\"\n\n\nclass ApplicationUpdate(ApplicationOwnerIsUserMixin, UpdateView):\n \"\"\"\n View used to update an application owned by the request.user\n \"\"\"\n context_object_name = 'application'\n template_name = 
\"application_form.html\"\n", "path": "apps/dot_ext/views/application.py"}], "after_files": [{"content": "from django.core.urlresolvers import reverse_lazy\nfrom django.forms.models import modelform_factory\nfrom django.views.generic import CreateView, DetailView, DeleteView, ListView, UpdateView\n\nfrom braces.views import LoginRequiredMixin\n\nfrom oauth2_provider.models import get_application_model\n\nfrom ..forms import CustomRegisterApplicationForm\n\n\nclass ApplicationOwnerIsUserMixin(LoginRequiredMixin):\n \"\"\"\n This mixin is used to provide an Application queryset filtered by the current request.user.\n \"\"\"\n fields = '__all__'\n\n def get_queryset(self):\n return get_application_model().objects.filter(user=self.request.user)\n\n\nclass ApplicationRegistration(LoginRequiredMixin, CreateView):\n \"\"\"\n View used to register a new Application for the request.user\n \"\"\"\n template_name = \"application_registration_form.html\"\n\n def get_form_class(self):\n \"\"\"\n Returns the form class for the application model\n \"\"\"\n\n mff = modelform_factory(\n get_application_model(),\n fields=('name', 'client_id', 'client_secret', 'client_type',\n 'authorization_grant_type', 'scope', 'redirect_uris', )\n )\n return mff\n\n\n\n def form_valid(self, form):\n form.instance.user = self.request.user\n return super(ApplicationRegistration, self).form_valid(form)\n\n\nclass ApplicationDetail(ApplicationOwnerIsUserMixin, DetailView):\n \"\"\"\n Detail view for an application instance owned by the request.user\n \"\"\"\n context_object_name = 'application'\n template_name = \"application_detail.html\"\n\n\nclass ApplicationList(ApplicationOwnerIsUserMixin, ListView):\n \"\"\"\n List view for all the applications owned by the request.user\n \"\"\"\n context_object_name = 'applications'\n template_name = \"application_list.html\"\n\n\nclass ApplicationDelete(ApplicationOwnerIsUserMixin, DeleteView):\n \"\"\"\n View used to delete an application owned by the request.user\n \"\"\"\n context_object_name = 'application'\n success_url = reverse_lazy('dote_list')\n template_name = \"application_confirm_delete.html\"\n\n\nclass ApplicationUpdate(ApplicationOwnerIsUserMixin, UpdateView):\n \"\"\"\n View used to update an application owned by the request.user\n \"\"\"\n context_object_name = 'application'\n template_name = \"application_form.html\"\n\n fields = None\n form_class = CustomRegisterApplicationForm\n\n def get_form_kwargs(self):\n \"\"\"\n Add `user` to kwargs because it is required by the constructor of\n CustomRegisterApplicationForm class.\n \"\"\"\n kwargs = super(ApplicationUpdate, self).get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n", "path": "apps/dot_ext/views/application.py"}]}
| 885 | 312 |
gh_patches_debug_28544
|
rasdani/github-patches
|
git_diff
|
lk-geimfari__mimesis-778
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The matrix generators for provider Numbers()
# Feature request
<!-- Thanks for coming up with a proposal! -->
## Thesis
A matrix can be useful when working in Jupyter Notebook. So, it would be awesome to support matrix generators in the `Numbers()` provider.
I sketched a rough implementation example:
```python
import random
from typing import Tuple, Optional, Union, List
ValuesRange = Union[
List[int],
Tuple[int, int],
]
def matrix(m: int = 3, n: int = 3, values_range: Optional[ValuesRange] = None) -> list:
"""Creates an 𝑚×𝑛 matrix which has 𝑚 rows and 𝑛 columns.
:param m: Rows count.
:param n: Columns count.
:param values_range: Range of values of matrix items.
:return: Matrix 𝑚×𝑛.
"""
if values_range is None:
values_range = (-100, 100)
if len(values_range) > 2:
raise ValueError('Invalid items range! Usage example: items_range=(-10, 10)')
result = [
[random.randint(*values_range) for _ in range(m)]
for _ in range(n)
]
return result
matrix(m=5, n=4, values_range=[-50, 50])
```
Output:
```python
[[29, 7, 17, -24, -29],
[-26, 13, 29, -25, 6],
[-5, 16, 13, -45, 49],
[23, 20, -25, 14, -25]]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mimesis/providers/numbers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Provides data related to numbers."""
4
5 from typing import List, Union
6
7 from mimesis.providers.base import BaseProvider
8
9 __all__ = ['Numbers']
10
11
12 class Numbers(BaseProvider):
13 """Class for generating numbers."""
14
15 class Meta:
16 """Class for metadata."""
17
18 name = 'numbers'
19
20 def floats(self, start: float = 0, end: float = 1, n: int = 10,
21 rounding: int = 15) -> List[float]:
22 """Generate a list of random float numbers.
23
24 :param start: Start range.
25 :param end: End range.
26 :param n: Length of the list.
27 :param rounding: Max number of decimal digits.
28 :return: The list of floating-point numbers.
29 """
30 return [self.random.uniform(start, end, rounding) for _ in range(n)]
31
32 def integers(self, start: int = 0, end: int = 10,
33 n: int = 10) -> List[int]:
34 """Generate a list of random integers.
35
36 Integers can be negative or positive numbers.
37 .. note: You can use both positive and negative numbers.
38
39 :param start: Start.
40 :param end: End.
41 :param n: Length of list.
42 :return: List of integers.
43
44 :Example:
45 [-20, -19, -18, -17]
46 """
47 return self.random.randints(n, start, end)
48
49 def complexes(self, start_real: float = 0, end_real: float = 1,
50 start_imag: float = 0, end_imag: float = 1,
51 rounding_real: int = 15, rounding_imag: int = 15,
52 n: int = 10) -> List[complex]:
53 """Generate a list of random complex numbers.
54
55 :param start_real: Start real range.
56 :param end_real: End real range.
57 :param start_imag: Start imaginary range.
58 :param end_imag: End imaginary range.
59 :param rounding_real: Rounding real part.
60 :param rounding_imag: Roungind imaginary part.
61 :param n: Length of the list.
62 :return: A list of random complex numbers.
63 """
64 return [
65 complex(self.random.uniform(start_real, end_real, rounding_real),
66 self.random.uniform(start_imag, end_imag, rounding_imag))
67 for _ in range(n)]
68
69 @staticmethod
70 def primes(start: int = 1, end: int = 999) -> List[int]:
71 """Generate a list of prime numbers.
72
73 :param start: First value of range.
74 :param end: Last value of range.
75 :return: A list of prime numbers from start to end.
76 """
77 # TODO: It should generate random primes with passed length.
78 sieve_size = (end // 2 - 1) if end % 2 == 0 else (end // 2)
79 sieve = [True] * sieve_size
80
81 primes = [] # list of primes
82 # add 2 to the list if it's in the given range
83 if end >= 2:
84 primes.append(2)
85 for i in range(sieve_size):
86 if sieve[i]:
87 value_at_i = i * 2 + 3
88 primes.append(value_at_i)
89 for j in range(i, sieve_size, value_at_i):
90 sieve[j] = False
91
92 chop_index = 0
93 for i in range(len(primes)):
94 if primes[i] >= start:
95 chop_index = i
96 break
97 return primes[chop_index:]
98
99 def digit(self, to_bin: bool = False) -> Union[str, int]:
100 """Get a random digit.
101
102 :param to_bin: If True then convert to binary.
103 :return: Digit.
104
105 :Example:
106 4.
107 """
108 digit = self.random.randint(0, 9)
109
110 if to_bin:
111 return bin(digit)
112
113 return digit
114
115 def between(self, minimum: int = 1, maximum: int = 1000) -> int:
116 """Generate a random number between minimum and maximum.
117
118 :param minimum: Minimum of range.
119 :param maximum: Maximum of range.
120 :return: Number.
121 """
122 return self.random.randint(minimum, maximum)
123
```
Path: `mimesis/enums.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Implements enums for a lot of methods.
4
5 Enums from this module are used in a lot of methods.
6 You should always import enums from this module if you want
7 behavior for the methods that differ from the default behavior.
8
9 You should never use your own enums in methods because in this case,
10 there no guarantee that you will get the result which you actually expected.
11
12 Below you can see an example of usage enums in methods of data providers.
13
14 Example:
15 >>> from mimesis import Person
16 >>> from mimesis.enums import Gender
17 >>> person = Person()
18 >>> name = person.name(gender=Gender.FEMALE)
19 >>> name in person._data['names']['female']
20 True
21 """
22
23 from enum import Enum
24
25
26 class PortRange(Enum):
27 """Represents port ranges.
28
29 An argument for :meth:`~mimesis.Internet.port()`.
30 """
31
32 ALL = (1, 65535)
33 WELL_KNOWN = (1, 1023)
34 EPHEMERAL = (49152, 65535)
35 REGISTERED = (1024, 49151)
36
37
38 class Gender(Enum):
39 """Represents genders.
40
41 An argument for a lot of methods which
42 takes argument ``gender``.
43 """
44
45 FEMALE = 'female'
46 MALE = 'male'
47
48
49 class TitleType(Enum):
50 """Represents title types.
51
52 An argument for :meth:`~mimesis.Person.title()`.
53 """
54
55 TYPICAL = 'typical'
56 ACADEMIC = 'academic'
57
58
59 class CardType(Enum):
60 """Provides credit card types.
61
62 An argument for :meth:`~mimesis.Payment.credit_card_number()`.
63 """
64
65 MASTER_CARD = 'MasterCard'
66 VISA = 'Visa'
67 AMERICAN_EXPRESS = 'American Express'
68
69
70 class Algorithm(Enum):
71 """Provides algorithms which available."""
72
73 MD5 = 'md5'
74 SHA1 = 'sha1'
75 SHA224 = 'sha224'
76 SHA256 = 'sha256'
77 SHA384 = 'sha384'
78 SHA512 = 'sha512'
79
80
81 class TLDType(Enum):
82 """Provides top level domain types.
83
84 An argument for :meth:`~mimesis.Internet.top_level_domain()`.
85 """
86
87 CCTLD = 'cctld'
88 GTLD = 'gtld'
89 GEOTLD = 'geotld'
90 UTLD = 'utld'
91 STLD = 'stld'
92
93
94 class Layer(Enum):
95 """Provides network protocol layers.
96
97 An argument for :meth:`~mimesis.Internet.network_protocol()`.
98 """
99
100 APPLICATION = 'application'
101 DATA_LINK = 'data_link'
102 NETWORK = 'network'
103 PHYSICAL = 'physical'
104 PRESENTATION = 'presentation'
105 SESSION = 'session'
106 TRANSPORT = 'transport'
107
108
109 class FileType(Enum):
110 """Provides file types."""
111
112 SOURCE = 'source'
113 TEXT = 'text'
114 DATA = 'data'
115 AUDIO = 'audio'
116 VIDEO = 'video'
117 IMAGE = 'image'
118 EXECUTABLE = 'executable'
119 COMPRESSED = 'compressed'
120
121
122 class MimeType(Enum):
123 """Provides common mime types.
124
125 An argument for :meth:`~mimesis.File.mime_type()`.
126 """
127
128 APPLICATION = 'application'
129 AUDIO = 'audio'
130 IMAGE = 'image'
131 MESSAGE = 'message'
132 TEXT = 'text'
133 VIDEO = 'video'
134
135
136 class PrefixSign(Enum):
137 """Provides prefix signs.
138
139 An argument for :meth:`~mimesis.UnitSystem.prefix()``.
140 """
141
142 POSITIVE = 'positive'
143 NEGATIVE = 'negative'
144
145
146 class CountryCode(Enum):
147 """Provides types of country codes.
148
149 An argument for :meth:`~mimesis.Address.country_code()`.
150 """
151
152 A2 = 'a2'
153 A3 = 'a3'
154 NUMERIC = 'numeric'
155 IOC = 'ioc'
156 FIFA = 'fifa'
157
158
159 class ISBNFormat(Enum):
160 """Provides formats of ISBN.
161
162 An argument for :meth:`~mimesis.Code.isbn()`.
163 """
164
165 ISBN13 = 'isbn-13'
166 ISBN10 = 'isbn-10'
167
168
169 class EANFormat(Enum):
170 """Provides formats of EAN.
171
172 An argument for :meth:`~mimesis.Code.ean()`.
173 """
174
175 EAN8 = 'ean-8'
176 EAN13 = 'ean-13'
177
178
179 class SocialNetwork(Enum):
180 """Provides most popular social networks.
181
182 An argument for :meth:`~mimesis.Person.social_media_profile()``.
183 """
184
185 FACEBOOK = 'facebook'
186 TWITTER = 'twitter'
187 INSTAGRAM = 'instagram'
188 VK = 'vk'
189
190
191 class UnitName(Enum):
192 """Provide unit names.
193
194 An argument for :meth:`~mimesis.UnitSystem.unit()`.
195 """
196
197 MASS = ('gram', 'gr')
198 INFORMATION = ('byte', 'b')
199 THERMODYNAMIC_TEMPERATURE = ('kelvin', 'K')
200 AMOUNT_OF_SUBSTANCE = ('mole', 'mol')
201 ANGLE = ('radian', 'r')
202 SOLID_ANGLE = ('steradian', '㏛')
203 FREQUENCY = ('hertz', 'Hz')
204 FORCE = ('newton', 'N')
205 PRESSURE = ('pascal', 'P')
206 ENERGY = ('joule', 'J')
207 POWER = ('watt', 'W')
208 FLUX = ('watt', 'W')
209 ELECTRIC_CHARGE = ('coulomb', 'C')
210 VOLTAGE = ('volt', 'V')
211 ELECTRIC_CAPACITANCE = ('farad', 'F')
212 ELECTRIC_RESISTANCE = ('ohm', 'Ω')
213 ELECTRICAL_CONDUCTANCE = ('siemens', 'S')
214 MAGNETIC_FLUX = ('weber', 'Wb')
215 MAGNETIC_FLUX_DENSITY = ('tesla', 'T')
216 INDUCTANCE = ('henry', 'H')
217 TEMPERATURE = ('Celsius', '°C')
218 RADIOACTIVITY = ('becquerel', 'Bq')
219
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mimesis/enums.py b/mimesis/enums.py
--- a/mimesis/enums.py
+++ b/mimesis/enums.py
@@ -216,3 +216,14 @@
INDUCTANCE = ('henry', 'H')
TEMPERATURE = ('Celsius', '°C')
RADIOACTIVITY = ('becquerel', 'Bq')
+
+
+class NumTypes(Enum):
+ """Provide number types.
+
+ An argument for :meth:`~mimesis.Numbers.matrix()`.
+ """
+
+ FLOATS = 'floats'
+ INTEGERS = 'integers'
+ COMPLEXES = 'complexes'
diff --git a/mimesis/providers/numbers.py b/mimesis/providers/numbers.py
--- a/mimesis/providers/numbers.py
+++ b/mimesis/providers/numbers.py
@@ -2,8 +2,9 @@
"""Provides data related to numbers."""
-from typing import List, Union
+from typing import Callable, List, Union
+from mimesis.enums import NumTypes
from mimesis.providers.base import BaseProvider
__all__ = ['Numbers']
@@ -66,6 +67,21 @@
self.random.uniform(start_imag, end_imag, rounding_imag))
for _ in range(n)]
+ def matrix(self, m: int = 10, n: int = 10,
+ num_type: NumTypes = NumTypes.FLOATS, **kwargs) -> List[List]:
+ """Generate a m x n matrix with random numbers.
+
+ :param m: Number of rows.
+ :param n: Number of columns.
+ :param num_type: NumTypes enum object.
+ :param **kwargs: Other specific arguments.
+ :return: A matrix of random numbers.
+ """
+ key = self._validate_enum(num_type, NumTypes)
+ kwargs.update({'n': n})
+ method = getattr(self, key)
+ return [method(**kwargs) for _ in range(m)]
+
@staticmethod
def primes(start: int = 1, end: int = 999) -> List[int]:
"""Generate a list of prime numbers.
|
{"golden_diff": "diff --git a/mimesis/enums.py b/mimesis/enums.py\n--- a/mimesis/enums.py\n+++ b/mimesis/enums.py\n@@ -216,3 +216,14 @@\n INDUCTANCE = ('henry', 'H')\n TEMPERATURE = ('Celsius', '\u00b0C')\n RADIOACTIVITY = ('becquerel', 'Bq')\n+\n+\n+class NumTypes(Enum):\n+ \"\"\"Provide number types.\n+\n+ An argument for :meth:`~mimesis.Numbers.matrix()`.\n+ \"\"\"\n+\n+ FLOATS = 'floats'\n+ INTEGERS = 'integers'\n+ COMPLEXES = 'complexes'\ndiff --git a/mimesis/providers/numbers.py b/mimesis/providers/numbers.py\n--- a/mimesis/providers/numbers.py\n+++ b/mimesis/providers/numbers.py\n@@ -2,8 +2,9 @@\n \n \"\"\"Provides data related to numbers.\"\"\"\n \n-from typing import List, Union\n+from typing import Callable, List, Union\n \n+from mimesis.enums import NumTypes\n from mimesis.providers.base import BaseProvider\n \n __all__ = ['Numbers']\n@@ -66,6 +67,21 @@\n self.random.uniform(start_imag, end_imag, rounding_imag))\n for _ in range(n)]\n \n+ def matrix(self, m: int = 10, n: int = 10,\n+ num_type: NumTypes = NumTypes.FLOATS, **kwargs) -> List[List]:\n+ \"\"\"Generate a m x n matrix with random numbers.\n+\n+ :param m: Number of rows.\n+ :param n: Number of columns.\n+ :param num_type: NumTypes enum object.\n+ :param **kwargs: Other specific arguments.\n+ :return: A matrix of random numbers.\n+ \"\"\"\n+ key = self._validate_enum(num_type, NumTypes)\n+ kwargs.update({'n': n})\n+ method = getattr(self, key)\n+ return [method(**kwargs) for _ in range(m)]\n+\n @staticmethod\n def primes(start: int = 1, end: int = 999) -> List[int]:\n \"\"\"Generate a list of prime numbers.\n", "issue": "The matrix generators for provider Numbers()\n# Feature request\r\n\r\n<!-- Thanks for coming up with a proposal! -->\r\n\r\n## Thesis\r\nThe matrix can be useful in working in Jupyter Notebook. So, it would be awesome to support generators of matrix in provider `Numbers()`.\r\n\r\nI sketched a rough implementation example:\r\n\r\n```python\r\nimport random\r\n\r\nfrom typing import Tuple, Optional, Union, List\r\n\r\nValuesRange = Union[\r\n List[int],\r\n Tuple[int, int],\r\n]\r\n\r\n\r\ndef matrix(m: int = 3, n: int = 3, values_range: Optional[ValuesRange] = None) -> list:\r\n \"\"\"Creates an \ud835\udc5a\u00d7\ud835\udc5b matrix which has \ud835\udc5a rows and \ud835\udc5b columns.\r\n\r\n :param m: Rows count.\r\n :param n: Columns count.\r\n :param values_range: Range of values of matrix items.\r\n :return: Matrix \ud835\udc5a\u00d7\ud835\udc5b.\r\n \"\"\"\r\n if values_range is None:\r\n values_range = (-100, 100)\r\n\r\n if len(values_range) > 2:\r\n raise ValueError('Invalid items range! 
Usage example: items_range=(-10, 10)')\r\n\r\n result = [\r\n [random.randint(*values_range) for _ in range(m)]\r\n for _ in range(n)\r\n ]\r\n\r\n return result\r\n\r\nmatrix(m=5, n=4, values_range=[-50, 50])\r\n```\r\n\r\nOutput:\r\n```python\r\n[[29, 7, 17, -24, -29],\r\n [-26, 13, 29, -25, 6],\r\n [-5, 16, 13, -45, 49],\r\n [23, 20, -25, 14, -25]]\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Provides data related to numbers.\"\"\"\n\nfrom typing import List, Union\n\nfrom mimesis.providers.base import BaseProvider\n\n__all__ = ['Numbers']\n\n\nclass Numbers(BaseProvider):\n \"\"\"Class for generating numbers.\"\"\"\n\n class Meta:\n \"\"\"Class for metadata.\"\"\"\n\n name = 'numbers'\n\n def floats(self, start: float = 0, end: float = 1, n: int = 10,\n rounding: int = 15) -> List[float]:\n \"\"\"Generate a list of random float numbers.\n\n :param start: Start range.\n :param end: End range.\n :param n: Length of the list.\n :param rounding: Max number of decimal digits.\n :return: The list of floating-point numbers.\n \"\"\"\n return [self.random.uniform(start, end, rounding) for _ in range(n)]\n\n def integers(self, start: int = 0, end: int = 10,\n n: int = 10) -> List[int]:\n \"\"\"Generate a list of random integers.\n\n Integers can be negative or positive numbers.\n .. note: You can use both positive and negative numbers.\n\n :param start: Start.\n :param end: End.\n :param n: Length of list.\n :return: List of integers.\n\n :Example:\n [-20, -19, -18, -17]\n \"\"\"\n return self.random.randints(n, start, end)\n\n def complexes(self, start_real: float = 0, end_real: float = 1,\n start_imag: float = 0, end_imag: float = 1,\n rounding_real: int = 15, rounding_imag: int = 15,\n n: int = 10) -> List[complex]:\n \"\"\"Generate a list of random complex numbers.\n\n :param start_real: Start real range.\n :param end_real: End real range.\n :param start_imag: Start imaginary range.\n :param end_imag: End imaginary range.\n :param rounding_real: Rounding real part.\n :param rounding_imag: Roungind imaginary part.\n :param n: Length of the list.\n :return: A list of random complex numbers.\n \"\"\"\n return [\n complex(self.random.uniform(start_real, end_real, rounding_real),\n self.random.uniform(start_imag, end_imag, rounding_imag))\n for _ in range(n)]\n\n @staticmethod\n def primes(start: int = 1, end: int = 999) -> List[int]:\n \"\"\"Generate a list of prime numbers.\n\n :param start: First value of range.\n :param end: Last value of range.\n :return: A list of prime numbers from start to end.\n \"\"\"\n # TODO: It should generate random primes with passed length.\n sieve_size = (end // 2 - 1) if end % 2 == 0 else (end // 2)\n sieve = [True] * sieve_size\n\n primes = [] # list of primes\n # add 2 to the list if it's in the given range\n if end >= 2:\n primes.append(2)\n for i in range(sieve_size):\n if sieve[i]:\n value_at_i = i * 2 + 3\n primes.append(value_at_i)\n for j in range(i, sieve_size, value_at_i):\n sieve[j] = False\n\n chop_index = 0\n for i in range(len(primes)):\n if primes[i] >= start:\n chop_index = i\n break\n return primes[chop_index:]\n\n def digit(self, to_bin: bool = False) -> Union[str, int]:\n \"\"\"Get a random digit.\n\n :param to_bin: If True then convert to binary.\n :return: Digit.\n\n :Example:\n 4.\n \"\"\"\n digit = self.random.randint(0, 9)\n\n if to_bin:\n return bin(digit)\n\n return digit\n\n def between(self, minimum: int = 1, maximum: int = 1000) -> int:\n \"\"\"Generate a random number between minimum and maximum.\n\n 
:param minimum: Minimum of range.\n :param maximum: Maximum of range.\n :return: Number.\n \"\"\"\n return self.random.randint(minimum, maximum)\n", "path": "mimesis/providers/numbers.py"}, {"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Implements enums for a lot of methods.\n\nEnums from this module are used in a lot of methods.\nYou should always import enums from this module if you want\nbehavior for the methods that differ from the default behavior.\n\nYou should never use your own enums in methods because in this case,\nthere no guarantee that you will get the result which you actually expected.\n\nBelow you can see an example of usage enums in methods of data providers.\n\nExample:\n >>> from mimesis import Person\n >>> from mimesis.enums import Gender\n >>> person = Person()\n >>> name = person.name(gender=Gender.FEMALE)\n >>> name in person._data['names']['female']\n True\n\"\"\"\n\nfrom enum import Enum\n\n\nclass PortRange(Enum):\n \"\"\"Represents port ranges.\n\n An argument for :meth:`~mimesis.Internet.port()`.\n \"\"\"\n\n ALL = (1, 65535)\n WELL_KNOWN = (1, 1023)\n EPHEMERAL = (49152, 65535)\n REGISTERED = (1024, 49151)\n\n\nclass Gender(Enum):\n \"\"\"Represents genders.\n\n An argument for a lot of methods which\n takes argument ``gender``.\n \"\"\"\n\n FEMALE = 'female'\n MALE = 'male'\n\n\nclass TitleType(Enum):\n \"\"\"Represents title types.\n\n An argument for :meth:`~mimesis.Person.title()`.\n \"\"\"\n\n TYPICAL = 'typical'\n ACADEMIC = 'academic'\n\n\nclass CardType(Enum):\n \"\"\"Provides credit card types.\n\n An argument for :meth:`~mimesis.Payment.credit_card_number()`.\n \"\"\"\n\n MASTER_CARD = 'MasterCard'\n VISA = 'Visa'\n AMERICAN_EXPRESS = 'American Express'\n\n\nclass Algorithm(Enum):\n \"\"\"Provides algorithms which available.\"\"\"\n\n MD5 = 'md5'\n SHA1 = 'sha1'\n SHA224 = 'sha224'\n SHA256 = 'sha256'\n SHA384 = 'sha384'\n SHA512 = 'sha512'\n\n\nclass TLDType(Enum):\n \"\"\"Provides top level domain types.\n\n An argument for :meth:`~mimesis.Internet.top_level_domain()`.\n \"\"\"\n\n CCTLD = 'cctld'\n GTLD = 'gtld'\n GEOTLD = 'geotld'\n UTLD = 'utld'\n STLD = 'stld'\n\n\nclass Layer(Enum):\n \"\"\"Provides network protocol layers.\n\n An argument for :meth:`~mimesis.Internet.network_protocol()`.\n \"\"\"\n\n APPLICATION = 'application'\n DATA_LINK = 'data_link'\n NETWORK = 'network'\n PHYSICAL = 'physical'\n PRESENTATION = 'presentation'\n SESSION = 'session'\n TRANSPORT = 'transport'\n\n\nclass FileType(Enum):\n \"\"\"Provides file types.\"\"\"\n\n SOURCE = 'source'\n TEXT = 'text'\n DATA = 'data'\n AUDIO = 'audio'\n VIDEO = 'video'\n IMAGE = 'image'\n EXECUTABLE = 'executable'\n COMPRESSED = 'compressed'\n\n\nclass MimeType(Enum):\n \"\"\"Provides common mime types.\n\n An argument for :meth:`~mimesis.File.mime_type()`.\n \"\"\"\n\n APPLICATION = 'application'\n AUDIO = 'audio'\n IMAGE = 'image'\n MESSAGE = 'message'\n TEXT = 'text'\n VIDEO = 'video'\n\n\nclass PrefixSign(Enum):\n \"\"\"Provides prefix signs.\n\n An argument for :meth:`~mimesis.UnitSystem.prefix()``.\n \"\"\"\n\n POSITIVE = 'positive'\n NEGATIVE = 'negative'\n\n\nclass CountryCode(Enum):\n \"\"\"Provides types of country codes.\n\n An argument for :meth:`~mimesis.Address.country_code()`.\n \"\"\"\n\n A2 = 'a2'\n A3 = 'a3'\n NUMERIC = 'numeric'\n IOC = 'ioc'\n FIFA = 'fifa'\n\n\nclass ISBNFormat(Enum):\n \"\"\"Provides formats of ISBN.\n\n An argument for :meth:`~mimesis.Code.isbn()`.\n \"\"\"\n\n ISBN13 = 'isbn-13'\n ISBN10 = 'isbn-10'\n\n\nclass EANFormat(Enum):\n \"\"\"Provides 
formats of EAN.\n\n An argument for :meth:`~mimesis.Code.ean()`.\n \"\"\"\n\n EAN8 = 'ean-8'\n EAN13 = 'ean-13'\n\n\nclass SocialNetwork(Enum):\n \"\"\"Provides most popular social networks.\n\n An argument for :meth:`~mimesis.Person.social_media_profile()``.\n \"\"\"\n\n FACEBOOK = 'facebook'\n TWITTER = 'twitter'\n INSTAGRAM = 'instagram'\n VK = 'vk'\n\n\nclass UnitName(Enum):\n \"\"\"Provide unit names.\n\n An argument for :meth:`~mimesis.UnitSystem.unit()`.\n \"\"\"\n\n MASS = ('gram', 'gr')\n INFORMATION = ('byte', 'b')\n THERMODYNAMIC_TEMPERATURE = ('kelvin', 'K')\n AMOUNT_OF_SUBSTANCE = ('mole', 'mol')\n ANGLE = ('radian', 'r')\n SOLID_ANGLE = ('steradian', '\u33db')\n FREQUENCY = ('hertz', 'Hz')\n FORCE = ('newton', 'N')\n PRESSURE = ('pascal', 'P')\n ENERGY = ('joule', 'J')\n POWER = ('watt', 'W')\n FLUX = ('watt', 'W')\n ELECTRIC_CHARGE = ('coulomb', 'C')\n VOLTAGE = ('volt', 'V')\n ELECTRIC_CAPACITANCE = ('farad', 'F')\n ELECTRIC_RESISTANCE = ('ohm', '\u03a9')\n ELECTRICAL_CONDUCTANCE = ('siemens', 'S')\n MAGNETIC_FLUX = ('weber', 'Wb')\n MAGNETIC_FLUX_DENSITY = ('tesla', 'T')\n INDUCTANCE = ('henry', 'H')\n TEMPERATURE = ('Celsius', '\u00b0C')\n RADIOACTIVITY = ('becquerel', 'Bq')\n", "path": "mimesis/enums.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Provides data related to numbers.\"\"\"\n\nfrom typing import Callable, List, Union\n\nfrom mimesis.enums import NumTypes\nfrom mimesis.providers.base import BaseProvider\n\n__all__ = ['Numbers']\n\n\nclass Numbers(BaseProvider):\n \"\"\"Class for generating numbers.\"\"\"\n\n class Meta:\n \"\"\"Class for metadata.\"\"\"\n\n name = 'numbers'\n\n def floats(self, start: float = 0, end: float = 1, n: int = 10,\n rounding: int = 15) -> List[float]:\n \"\"\"Generate a list of random float numbers.\n\n :param start: Start range.\n :param end: End range.\n :param n: Length of the list.\n :param rounding: Max number of decimal digits.\n :return: The list of floating-point numbers.\n \"\"\"\n return [self.random.uniform(start, end, rounding) for _ in range(n)]\n\n def integers(self, start: int = 0, end: int = 10,\n n: int = 10) -> List[int]:\n \"\"\"Generate a list of random integers.\n\n Integers can be negative or positive numbers.\n .. 
note: You can use both positive and negative numbers.\n\n :param start: Start.\n :param end: End.\n :param n: Length of list.\n :return: List of integers.\n\n :Example:\n [-20, -19, -18, -17]\n \"\"\"\n return self.random.randints(n, start, end)\n\n def complexes(self, start_real: float = 0, end_real: float = 1,\n start_imag: float = 0, end_imag: float = 1,\n rounding_real: int = 15, rounding_imag: int = 15,\n n: int = 10) -> List[complex]:\n \"\"\"Generate a list of random complex numbers.\n\n :param start_real: Start real range.\n :param end_real: End real range.\n :param start_imag: Start imaginary range.\n :param end_imag: End imaginary range.\n :param rounding_real: Rounding real part.\n :param rounding_imag: Roungind imaginary part.\n :param n: Length of the list.\n :return: A list of random complex numbers.\n \"\"\"\n return [\n complex(self.random.uniform(start_real, end_real, rounding_real),\n self.random.uniform(start_imag, end_imag, rounding_imag))\n for _ in range(n)]\n\n def matrix(self, m: int = 10, n: int = 10,\n num_type: NumTypes = NumTypes.FLOATS, **kwargs) -> List[List]:\n \"\"\"Generate a m x n matrix with random numbers.\n\n :param m: Number of rows.\n :param n: Number of columns.\n :param num_type: NumTypes enum object.\n :param **kwargs: Other specific arguments.\n :return: A matrix of random numbers.\n \"\"\"\n key = self._validate_enum(num_type, NumTypes)\n kwargs.update({'n': n})\n method = getattr(self, key)\n return [method(**kwargs) for _ in range(m)]\n\n @staticmethod\n def primes(start: int = 1, end: int = 999) -> List[int]:\n \"\"\"Generate a list of prime numbers.\n\n :param start: First value of range.\n :param end: Last value of range.\n :return: A list of prime numbers from start to end.\n \"\"\"\n # TODO: It should generate random primes with passed length.\n sieve_size = (end // 2 - 1) if end % 2 == 0 else (end // 2)\n sieve = [True] * sieve_size\n\n primes = [] # list of primes\n # add 2 to the list if it's in the given range\n if end >= 2:\n primes.append(2)\n for i in range(sieve_size):\n if sieve[i]:\n value_at_i = i * 2 + 3\n primes.append(value_at_i)\n for j in range(i, sieve_size, value_at_i):\n sieve[j] = False\n\n chop_index = 0\n for i in range(len(primes)):\n if primes[i] >= start:\n chop_index = i\n break\n return primes[chop_index:]\n\n def digit(self, to_bin: bool = False) -> Union[str, int]:\n \"\"\"Get a random digit.\n\n :param to_bin: If True then convert to binary.\n :return: Digit.\n\n :Example:\n 4.\n \"\"\"\n digit = self.random.randint(0, 9)\n\n if to_bin:\n return bin(digit)\n\n return digit\n\n def between(self, minimum: int = 1, maximum: int = 1000) -> int:\n \"\"\"Generate a random number between minimum and maximum.\n\n :param minimum: Minimum of range.\n :param maximum: Maximum of range.\n :return: Number.\n \"\"\"\n return self.random.randint(minimum, maximum)\n", "path": "mimesis/providers/numbers.py"}, {"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Implements enums for a lot of methods.\n\nEnums from this module are used in a lot of methods.\nYou should always import enums from this module if you want\nbehavior for the methods that differ from the default behavior.\n\nYou should never use your own enums in methods because in this case,\nthere no guarantee that you will get the result which you actually expected.\n\nBelow you can see an example of usage enums in methods of data providers.\n\nExample:\n >>> from mimesis import Person\n >>> from mimesis.enums import Gender\n >>> person = Person()\n >>> name = 
person.name(gender=Gender.FEMALE)\n >>> name in person._data['names']['female']\n True\n\"\"\"\n\nfrom enum import Enum\n\n\nclass PortRange(Enum):\n \"\"\"Represents port ranges.\n\n An argument for :meth:`~mimesis.Internet.port()`.\n \"\"\"\n\n ALL = (1, 65535)\n WELL_KNOWN = (1, 1023)\n EPHEMERAL = (49152, 65535)\n REGISTERED = (1024, 49151)\n\n\nclass Gender(Enum):\n \"\"\"Represents genders.\n\n An argument for a lot of methods which\n takes argument ``gender``.\n \"\"\"\n\n FEMALE = 'female'\n MALE = 'male'\n\n\nclass TitleType(Enum):\n \"\"\"Represents title types.\n\n An argument for :meth:`~mimesis.Person.title()`.\n \"\"\"\n\n TYPICAL = 'typical'\n ACADEMIC = 'academic'\n\n\nclass CardType(Enum):\n \"\"\"Provides credit card types.\n\n An argument for :meth:`~mimesis.Payment.credit_card_number()`.\n \"\"\"\n\n MASTER_CARD = 'MasterCard'\n VISA = 'Visa'\n AMERICAN_EXPRESS = 'American Express'\n\n\nclass Algorithm(Enum):\n \"\"\"Provides algorithms which available.\"\"\"\n\n MD5 = 'md5'\n SHA1 = 'sha1'\n SHA224 = 'sha224'\n SHA256 = 'sha256'\n SHA384 = 'sha384'\n SHA512 = 'sha512'\n\n\nclass TLDType(Enum):\n \"\"\"Provides top level domain types.\n\n An argument for :meth:`~mimesis.Internet.top_level_domain()`.\n \"\"\"\n\n CCTLD = 'cctld'\n GTLD = 'gtld'\n GEOTLD = 'geotld'\n UTLD = 'utld'\n STLD = 'stld'\n\n\nclass Layer(Enum):\n \"\"\"Provides network protocol layers.\n\n An argument for :meth:`~mimesis.Internet.network_protocol()`.\n \"\"\"\n\n APPLICATION = 'application'\n DATA_LINK = 'data_link'\n NETWORK = 'network'\n PHYSICAL = 'physical'\n PRESENTATION = 'presentation'\n SESSION = 'session'\n TRANSPORT = 'transport'\n\n\nclass FileType(Enum):\n \"\"\"Provides file types.\"\"\"\n\n SOURCE = 'source'\n TEXT = 'text'\n DATA = 'data'\n AUDIO = 'audio'\n VIDEO = 'video'\n IMAGE = 'image'\n EXECUTABLE = 'executable'\n COMPRESSED = 'compressed'\n\n\nclass MimeType(Enum):\n \"\"\"Provides common mime types.\n\n An argument for :meth:`~mimesis.File.mime_type()`.\n \"\"\"\n\n APPLICATION = 'application'\n AUDIO = 'audio'\n IMAGE = 'image'\n MESSAGE = 'message'\n TEXT = 'text'\n VIDEO = 'video'\n\n\nclass PrefixSign(Enum):\n \"\"\"Provides prefix signs.\n\n An argument for :meth:`~mimesis.UnitSystem.prefix()``.\n \"\"\"\n\n POSITIVE = 'positive'\n NEGATIVE = 'negative'\n\n\nclass CountryCode(Enum):\n \"\"\"Provides types of country codes.\n\n An argument for :meth:`~mimesis.Address.country_code()`.\n \"\"\"\n\n A2 = 'a2'\n A3 = 'a3'\n NUMERIC = 'numeric'\n IOC = 'ioc'\n FIFA = 'fifa'\n\n\nclass ISBNFormat(Enum):\n \"\"\"Provides formats of ISBN.\n\n An argument for :meth:`~mimesis.Code.isbn()`.\n \"\"\"\n\n ISBN13 = 'isbn-13'\n ISBN10 = 'isbn-10'\n\n\nclass EANFormat(Enum):\n \"\"\"Provides formats of EAN.\n\n An argument for :meth:`~mimesis.Code.ean()`.\n \"\"\"\n\n EAN8 = 'ean-8'\n EAN13 = 'ean-13'\n\n\nclass SocialNetwork(Enum):\n \"\"\"Provides most popular social networks.\n\n An argument for :meth:`~mimesis.Person.social_media_profile()``.\n \"\"\"\n\n FACEBOOK = 'facebook'\n TWITTER = 'twitter'\n INSTAGRAM = 'instagram'\n VK = 'vk'\n\n\nclass UnitName(Enum):\n \"\"\"Provide unit names.\n\n An argument for :meth:`~mimesis.UnitSystem.unit()`.\n \"\"\"\n\n MASS = ('gram', 'gr')\n INFORMATION = ('byte', 'b')\n THERMODYNAMIC_TEMPERATURE = ('kelvin', 'K')\n AMOUNT_OF_SUBSTANCE = ('mole', 'mol')\n ANGLE = ('radian', 'r')\n SOLID_ANGLE = ('steradian', '\u33db')\n FREQUENCY = ('hertz', 'Hz')\n FORCE = ('newton', 'N')\n PRESSURE = ('pascal', 'P')\n ENERGY = ('joule', 'J')\n POWER = 
('watt', 'W')\n FLUX = ('watt', 'W')\n ELECTRIC_CHARGE = ('coulomb', 'C')\n VOLTAGE = ('volt', 'V')\n ELECTRIC_CAPACITANCE = ('farad', 'F')\n ELECTRIC_RESISTANCE = ('ohm', '\u03a9')\n ELECTRICAL_CONDUCTANCE = ('siemens', 'S')\n MAGNETIC_FLUX = ('weber', 'Wb')\n MAGNETIC_FLUX_DENSITY = ('tesla', 'T')\n INDUCTANCE = ('henry', 'H')\n TEMPERATURE = ('Celsius', '\u00b0C')\n RADIOACTIVITY = ('becquerel', 'Bq')\n\n\nclass NumTypes(Enum):\n \"\"\"Provide number types.\n\n An argument for :meth:`~mimesis.Numbers.matrix()`.\n \"\"\"\n\n FLOATS = 'floats'\n INTEGERS = 'integers'\n COMPLEXES = 'complexes'\n", "path": "mimesis/enums.py"}]}
| 3,816 | 488 |
gh_patches_debug_34316
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-6856
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug: Flaky test `tests/meltano/api/test_workers.py::TestUIAvailableWorker::test_open_browser`
### Meltano Version
N/A
### Python Version
NA
### Bug scope
API
### Operating System
N/A
### Description
From https://github.com/meltano/meltano/issues/6827
- https://github.com/meltano/meltano/actions/runs/3175819389/jobs/5174350171
- https://github.com/meltano/meltano/actions/runs/3159179840/jobs/5142085679
- https://github.com/meltano/meltano/actions/runs/3056794521/jobs/4931297918
- https://github.com/meltano/meltano/actions/runs/3182662282/jobs/5188896027
- https://github.com/meltano/meltano/actions/runs/3183540252/jobs/5190883701
- https://github.com/meltano/meltano/actions/runs/3184585185/jobs/5193156601
Of the recorded instances, we observe:
- They are all using `ubuntu-latest`
- They are all using Python 3.7
- They are using a variety of database backends
The error occurs on the assert on the last line of the following code block. We see that `requests_get` is always called 3 times, but `sleep` is called 300-500 times.
```python
@mock.patch("webbrowser.open")
@mock.patch("requests.get")
def test_open_browser(self, requests_get, webbrowser_open, subject):
error = mock.Mock(status_code=400)
ok = mock.Mock(status_code=200)
requests_get.side_effect = [error, error, ok]
with mock.patch("time.sleep") as sleep:
sleep.return_value = None
subject.run()
webbrowser_open.assert_called_with("http://localhost:5000")
assert requests_get.call_count == sleep.call_count
```
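
For context, the mocking pattern this test exercises can be reproduced in a minimal standalone sketch (plain `unittest.mock`, not meltano code): a polling loop consumes a `side_effect` list of two failing responses followed by a success, with `time.sleep` patched so the loop does not actually wait.

```python
# Minimal standalone sketch of the polling/mocking pattern (not meltano code).
import time
from unittest import mock

import requests


def poll_until_ok(url: str) -> None:
    """Poll `url` until it answers 200, sleeping once per attempt."""
    done = False
    while not done:
        response = requests.get(url)
        if response.status_code == 200:
            done = True
        time.sleep(2)  # one sleep per attempt, successful or not


error = mock.Mock(status_code=400)
ok = mock.Mock(status_code=200)

with mock.patch("requests.get", side_effect=[error, error, ok]) as requests_get, \
        mock.patch("time.sleep", return_value=None) as sleep:
    poll_until_ok("http://localhost:5000")

assert requests_get.call_count == 3
assert sleep.call_count == 3  # the equality the meltano test asserts
```

In this isolated form the two call counts match; the recorded failures show the patched `time.sleep` being hit far more often than `requests.get` during the real worker run.
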
### Code
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/meltano/api/workers/ui_available_worker.py`
Content:
```
1 from __future__ import annotations
2
3 import logging
4 import threading
5 import time
6 import traceback
7 import webbrowser
8
9 import click
10 import requests
11
12 from meltano.core.project_settings_service import ProjectSettingsService
13
14 logger = logging.getLogger(__name__)
15
16 SUCCESS_STATUS_CODE = 200
17
18
19 class UIAvailableWorker(threading.Thread):
20 def __init__(self, project, open_browser=False):
21 super().__init__()
22 self.project = project
23 self.open_browser = open_browser
24 self.settings_service = ProjectSettingsService(self.project)
25 self._terminate = False
26
27 def run(self):
28 url = f"http://localhost:{self.settings_service.get('ui.bind_port')}"
29 headers = {"Host": self.settings_service.get("ui.server_name")}
30
31 while not self._terminate:
32 try:
33 response = requests.get(url, headers=headers)
34 if response.status_code == SUCCESS_STATUS_CODE:
35 click.secho(f"Meltano UI is now available at {url}", fg="green")
36 if self.open_browser:
37 webbrowser.open(url)
38 self._terminate = True
39 except Exception:
40 logger.debug(
41 f"Exception encountered while trying to run Meltano UI:\n{traceback.format_exc()}"
42 )
43
44 time.sleep(2)
45
46 def stop(self):
47 self._terminate = True
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/meltano/api/workers/ui_available_worker.py b/src/meltano/api/workers/ui_available_worker.py
--- a/src/meltano/api/workers/ui_available_worker.py
+++ b/src/meltano/api/workers/ui_available_worker.py
@@ -1,10 +1,11 @@
+"""Meltano UI worker thread definition."""
+
from __future__ import annotations
import logging
import threading
import time
import traceback
-import webbrowser
import click
import requests
@@ -17,14 +18,21 @@
class UIAvailableWorker(threading.Thread):
- def __init__(self, project, open_browser=False):
+ """A thread subclass for Meltano UI workers."""
+
+ def __init__(self, project):
+ """Initialize the `UIAvailableWorker` thread.
+
+ Args:
+ project: The Meltano project.
+ """
super().__init__()
self.project = project
- self.open_browser = open_browser
self.settings_service = ProjectSettingsService(self.project)
self._terminate = False
- def run(self):
+ def run(self) -> None:
+ """Run the thread, and report when the Meltano UI becomes available."""
url = f"http://localhost:{self.settings_service.get('ui.bind_port')}"
headers = {"Host": self.settings_service.get("ui.server_name")}
@@ -33,8 +41,6 @@
response = requests.get(url, headers=headers)
if response.status_code == SUCCESS_STATUS_CODE:
click.secho(f"Meltano UI is now available at {url}", fg="green")
- if self.open_browser:
- webbrowser.open(url)
self._terminate = True
except Exception:
logger.debug(
@@ -44,4 +50,5 @@
time.sleep(2)
def stop(self):
+ """Stop the thread."""
self._terminate = True
|
{"golden_diff": "diff --git a/src/meltano/api/workers/ui_available_worker.py b/src/meltano/api/workers/ui_available_worker.py\n--- a/src/meltano/api/workers/ui_available_worker.py\n+++ b/src/meltano/api/workers/ui_available_worker.py\n@@ -1,10 +1,11 @@\n+\"\"\"Meltano UI worker thread definition.\"\"\"\n+\n from __future__ import annotations\n \n import logging\n import threading\n import time\n import traceback\n-import webbrowser\n \n import click\n import requests\n@@ -17,14 +18,21 @@\n \n \n class UIAvailableWorker(threading.Thread):\n- def __init__(self, project, open_browser=False):\n+ \"\"\"A thread subclass for Meltano UI workers.\"\"\"\n+\n+ def __init__(self, project):\n+ \"\"\"Initialize the `UIAvailableWorker` thread.\n+\n+ Args:\n+ project: The Meltano project.\n+ \"\"\"\n super().__init__()\n self.project = project\n- self.open_browser = open_browser\n self.settings_service = ProjectSettingsService(self.project)\n self._terminate = False\n \n- def run(self):\n+ def run(self) -> None:\n+ \"\"\"Run the thread, and report when the Meltano UI becomes available.\"\"\"\n url = f\"http://localhost:{self.settings_service.get('ui.bind_port')}\"\n headers = {\"Host\": self.settings_service.get(\"ui.server_name\")}\n \n@@ -33,8 +41,6 @@\n response = requests.get(url, headers=headers)\n if response.status_code == SUCCESS_STATUS_CODE:\n click.secho(f\"Meltano UI is now available at {url}\", fg=\"green\")\n- if self.open_browser:\n- webbrowser.open(url)\n self._terminate = True\n except Exception:\n logger.debug(\n@@ -44,4 +50,5 @@\n time.sleep(2)\n \n def stop(self):\n+ \"\"\"Stop the thread.\"\"\"\n self._terminate = True\n", "issue": "bug: Flaky test `tests/meltano/api/test_workers.py::TestUIAvailableWorker::test_open_browser`\n### Meltano Version\n\nN/A\n\n### Python Version\n\nNA\n\n### Bug scope\n\nAPI\n\n### Operating System\n\nN/A\n\n### Description\n\nFrom https://github.com/meltano/meltano/issues/6827\r\n\r\n- https://github.com/meltano/meltano/actions/runs/3175819389/jobs/5174350171\r\n- https://github.com/meltano/meltano/actions/runs/3159179840/jobs/5142085679\r\n- https://github.com/meltano/meltano/actions/runs/3056794521/jobs/4931297918\r\n- https://github.com/meltano/meltano/actions/runs/3182662282/jobs/5188896027\r\n- https://github.com/meltano/meltano/actions/runs/3183540252/jobs/5190883701\r\n- https://github.com/meltano/meltano/actions/runs/3184585185/jobs/5193156601\r\n\r\nOf the recorded instances, we observe:\r\n- They are all using `ubuntu-latest`\r\n- They are all using Python 3.7\r\n- They are using a variety of database backends\r\n\r\nThe error occurs on the assert on the last line of the following code block. 
We see that `requests_get` is always called 3 times, but `sleep` is called 300-500 times.\r\n\r\n```python\r\n @mock.patch(\"webbrowser.open\")\r\n @mock.patch(\"requests.get\")\r\n def test_open_browser(self, requests_get, webbrowser_open, subject):\r\n error = mock.Mock(status_code=400)\r\n ok = mock.Mock(status_code=200)\r\n requests_get.side_effect = [error, error, ok]\r\n with mock.patch(\"time.sleep\") as sleep:\r\n sleep.return_value = None\r\n subject.run()\r\n webbrowser_open.assert_called_with(\"http://localhost:5000\")\r\n assert requests_get.call_count == sleep.call_count\r\n```\n\n### Code\n\n_No response_\n", "before_files": [{"content": "from __future__ import annotations\n\nimport logging\nimport threading\nimport time\nimport traceback\nimport webbrowser\n\nimport click\nimport requests\n\nfrom meltano.core.project_settings_service import ProjectSettingsService\n\nlogger = logging.getLogger(__name__)\n\nSUCCESS_STATUS_CODE = 200\n\n\nclass UIAvailableWorker(threading.Thread):\n def __init__(self, project, open_browser=False):\n super().__init__()\n self.project = project\n self.open_browser = open_browser\n self.settings_service = ProjectSettingsService(self.project)\n self._terminate = False\n\n def run(self):\n url = f\"http://localhost:{self.settings_service.get('ui.bind_port')}\"\n headers = {\"Host\": self.settings_service.get(\"ui.server_name\")}\n\n while not self._terminate:\n try:\n response = requests.get(url, headers=headers)\n if response.status_code == SUCCESS_STATUS_CODE:\n click.secho(f\"Meltano UI is now available at {url}\", fg=\"green\")\n if self.open_browser:\n webbrowser.open(url)\n self._terminate = True\n except Exception:\n logger.debug(\n f\"Exception encountered while trying to run Meltano UI:\\n{traceback.format_exc()}\"\n )\n\n time.sleep(2)\n\n def stop(self):\n self._terminate = True\n", "path": "src/meltano/api/workers/ui_available_worker.py"}], "after_files": [{"content": "\"\"\"Meltano UI worker thread definition.\"\"\"\n\nfrom __future__ import annotations\n\nimport logging\nimport threading\nimport time\nimport traceback\n\nimport click\nimport requests\n\nfrom meltano.core.project_settings_service import ProjectSettingsService\n\nlogger = logging.getLogger(__name__)\n\nSUCCESS_STATUS_CODE = 200\n\n\nclass UIAvailableWorker(threading.Thread):\n \"\"\"A thread subclass for Meltano UI workers.\"\"\"\n\n def __init__(self, project):\n \"\"\"Initialize the `UIAvailableWorker` thread.\n\n Args:\n project: The Meltano project.\n \"\"\"\n super().__init__()\n self.project = project\n self.settings_service = ProjectSettingsService(self.project)\n self._terminate = False\n\n def run(self) -> None:\n \"\"\"Run the thread, and report when the Meltano UI becomes available.\"\"\"\n url = f\"http://localhost:{self.settings_service.get('ui.bind_port')}\"\n headers = {\"Host\": self.settings_service.get(\"ui.server_name\")}\n\n while not self._terminate:\n try:\n response = requests.get(url, headers=headers)\n if response.status_code == SUCCESS_STATUS_CODE:\n click.secho(f\"Meltano UI is now available at {url}\", fg=\"green\")\n self._terminate = True\n except Exception:\n logger.debug(\n f\"Exception encountered while trying to run Meltano UI:\\n{traceback.format_exc()}\"\n )\n\n time.sleep(2)\n\n def stop(self):\n \"\"\"Stop the thread.\"\"\"\n self._terminate = True\n", "path": "src/meltano/api/workers/ui_available_worker.py"}]}
| 1,159 | 423 |
gh_patches_debug_16273
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-4999
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CheckOV can't check bicep files with modules published in Azure container registry
**Describe the issue**
In a larger Bicep environment, we publish entire solutions to the Azure Container Registry. These solutions are called by users as modules in their Bicep deployments. Therefore, it is mandatory for us that CheckOV supports scanning of such environments.
In our view, these two solutions would sufficiently address the problem, with the first solution being our favorite:
1. CheckOV downloads the modules from the Azure container registry to enable scanning of the code.
2. CheckOV checks if the Azure container registry used is whitelisted and then scans only the available code. If the registry used is not whitelisted, CheckOV returns a hardfail.
The second option would be a workaround for us, as we only want users to use our registry. The modules that are published in this registry have been scanned by checkov before publishing.
In the current version of checkov (2.3.152), the scan of the bicep file fails with a parsing error.\r\n\r\n**Example Value**
**Example Value**
``` bicep
module modSampleSolution 'br:sampleregistry.azurecr.io/bicep/modules/company.department/sampleSolution:1.0.0' = {
name: 'sampleSolution-deploy'
params: {
parSampleParameter: 'Sample'
}
}
```
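
To make the second (allow-list) option concrete, here is a rough, purely hypothetical sketch of a registry check — the names and logic are invented for illustration and are not checkov's API:

```python
# Hypothetical sketch of option 2 (registry allow-list); nothing here is checkov API.
import re

ALLOWED_REGISTRIES = {"sampleregistry.azurecr.io"}

# Matches remote module sources such as
# 'br:sampleregistry.azurecr.io/bicep/modules/company.department/sampleSolution:1.0.0'
_BR_MODULE = re.compile(r"^br:(?P<registry>[^/]+)/")


def registry_is_allowed(module_source: str) -> bool:
    """Return False only for `br:` module sources outside the allow-list."""
    match = _BR_MODULE.match(module_source)
    if match is None:
        return True  # local path or template spec reference, nothing to check
    return match.group("registry") in ALLOWED_REGISTRIES


assert registry_is_allowed(
    "br:sampleregistry.azurecr.io/bicep/modules/company.department/sampleSolution:1.0.0"
)
assert not registry_is_allowed("br:evil.example.azurecr.io/bicep/modules/x:1.0.0")
assert registry_is_allowed("./modules/storage.bicep")  # local modules pass through
```
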
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import json
3 import logging
4 import os
5 from importlib import util
6 from os import path
7 from pathlib import Path
8
9 from setuptools import setup, find_packages
10 from setuptools.command.build_py import build_py
11
12
13 class PreBuildCommand(build_py):
14 """Pre-build command"""
15
16 def transform_graph_yaml_to_json(self) -> None:
17 """Transforms YAML graph checks to JSON and copies them to build/lib"""
18
19 import yaml # can't be top-level, because it needs to be first installed via 'setup_requires'
20
21 graph_check_paths = ("checkov/*/checks/graph_checks",)
22 build_path = Path(self.build_lib)
23 src_path = Path()
24
25 for graph_check_path in graph_check_paths:
26 for yaml_file in src_path.glob(f"{graph_check_path}/**/*.yaml"):
27 json_file = (build_path / yaml_file).with_suffix(".json")
28 self.mkpath(str(json_file.parent))
29 json_file.write_text(json.dumps(yaml.safe_load(yaml_file.read_text())))
30
31 def run(self) -> None:
32 self.execute(self.transform_graph_yaml_to_json, ())
33 build_py.run(self)
34
35
36 # read the contents of your README file
37 this_directory = path.abspath(path.dirname(__file__))
38 with open(path.join(this_directory, "README.md"), encoding="utf-8") as f:
39 long_description = f.read()
40
41 logger = logging.getLogger(__name__)
42 spec = util.spec_from_file_location(
43 "checkov.version", os.path.join("checkov", "version.py")
44 )
45 # noinspection PyUnresolvedReferences
46 mod = util.module_from_spec(spec)
47 spec.loader.exec_module(mod) # type: ignore
48 version = mod.version # type: ignore
49
50 setup(
51 cmdclass={
52 "build_py": PreBuildCommand,
53 },
54 setup_requires=[
55 "pyyaml",
56 ],
57 extras_require={
58 "dev": [
59 "pytest==5.3.1",
60 "coverage==5.5",
61 "coverage-badge",
62 "GitPython==3.1.7",
63 "bandit",
64 "jsonschema",
65 ]
66 },
67 install_requires=[
68 "bc-python-hcl2==0.3.51",
69 "bc-detect-secrets==1.4.24",
70 "bc-jsonpath-ng==1.5.9",
71 "deep-merge",
72 "tabulate",
73 "colorama",
74 "termcolor",
75 "junit-xml>=1.9",
76 "dpath==2.1.3",
77 "pyyaml>=5.4.1",
78 "boto3>=1.17",
79 "gitpython",
80 "jmespath",
81 "tqdm",
82 "update-checker",
83 "semantic-version",
84 "packaging",
85 "cloudsplaining>=0.4.3",
86 "networkx<2.7",
87 "igraph",
88 "dockerfile-parse",
89 "docker",
90 "configargparse",
91 "argcomplete",
92 "policyuniverse",
93 "typing-extensions>=4.1.0",
94 "importlib-metadata>=0.12",
95 "cachetools",
96 "cyclonedx-python-lib<4.0.0,>=2.4.0",
97 "packageurl-python",
98 "click>=8.0.0",
99 "aiohttp",
100 "aiodns",
101 "aiomultiprocess",
102 "jsonschema<5.0.0,>=4.6.0",
103 "prettytable>=3.0.0",
104 "pycep-parser==0.3.9",
105 "charset-normalizer",
106 "pyston-autoload==2.3.5; python_version < '3.11' and (sys_platform == 'linux' or sys_platform == 'darwin') and platform_machine == 'x86_64' and implementation_name == 'cpython'",
107 "pyston==2.3.5; python_version < '3.11' and (sys_platform == 'linux' or sys_platform == 'darwin') and platform_machine == 'x86_64' and implementation_name == 'cpython'",
108 "schema",
109 "requests>=2.27.0",
110 "yarl",
111 "openai",
112 ],
113 dependency_links=[], # keep it empty, needed for pipenv-setup
114 license="Apache License 2.0",
115 name="checkov",
116 version=version,
117 python_requires=">=3.7",
118 description="Infrastructure as code static analysis",
119 author="bridgecrew",
120 author_email="[email protected]",
121 url="https://github.com/bridgecrewio/checkov",
122 packages=find_packages(
123 exclude=[
124 "dogfood_tests*",
125 "flake8_plugins*",
126 "integration_tests*",
127 "performance_tests*",
128 "tests*",
129 ]
130 ),
131 include_package_data=True,
132 package_data={
133 "checkov": ["py.typed"],
134 "checkov.common.util.templates": ["*.jinja2"],
135 "checkov.ansible.checks.graph_checks": ["**/*.json"],
136 "checkov.arm.checks.graph_checks": ["**/*.json"],
137 "checkov.bicep.checks.graph_checks": ["**/*.json"],
138 "checkov.cloudformation.checks.graph_checks": ["**/*.json"],
139 "checkov.dockerfile.checks.graph_checks": ["**/*.json"],
140 "checkov.github_actions.checks.graph_checks": ["**/*.json"],
141 "checkov.kubernetes.checks.graph_checks": ["**/*.json"],
142 "checkov.terraform.checks.graph_checks": ["**/*.json"],
143 },
144 scripts=["bin/checkov", "bin/checkov.cmd"],
145 long_description=long_description,
146 long_description_content_type="text/markdown",
147 classifiers=[
148 "Environment :: Console",
149 "Intended Audience :: Developers",
150 "Intended Audience :: System Administrators",
151 "License :: OSI Approved :: Apache Software License",
152 "Programming Language :: Python :: 3 :: Only",
153 "Programming Language :: Python :: 3.7",
154 "Programming Language :: Python :: 3.8",
155 "Programming Language :: Python :: 3.9",
156 "Programming Language :: Python :: 3.10",
157 "Programming Language :: Python :: 3.11",
158 "Topic :: Security",
159 "Topic :: Software Development :: Build Tools",
160 "Typing :: Typed",
161 ],
162 )
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -101,7 +101,7 @@
"aiomultiprocess",
"jsonschema<5.0.0,>=4.6.0",
"prettytable>=3.0.0",
- "pycep-parser==0.3.9",
+ "pycep-parser==0.4.0",
"charset-normalizer",
"pyston-autoload==2.3.5; python_version < '3.11' and (sys_platform == 'linux' or sys_platform == 'darwin') and platform_machine == 'x86_64' and implementation_name == 'cpython'",
"pyston==2.3.5; python_version < '3.11' and (sys_platform == 'linux' or sys_platform == 'darwin') and platform_machine == 'x86_64' and implementation_name == 'cpython'",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -101,7 +101,7 @@\n \"aiomultiprocess\",\n \"jsonschema<5.0.0,>=4.6.0\",\n \"prettytable>=3.0.0\",\n- \"pycep-parser==0.3.9\",\n+ \"pycep-parser==0.4.0\",\n \"charset-normalizer\",\n \"pyston-autoload==2.3.5; python_version < '3.11' and (sys_platform == 'linux' or sys_platform == 'darwin') and platform_machine == 'x86_64' and implementation_name == 'cpython'\",\n \"pyston==2.3.5; python_version < '3.11' and (sys_platform == 'linux' or sys_platform == 'darwin') and platform_machine == 'x86_64' and implementation_name == 'cpython'\",\n", "issue": "CheckOV can't check bicep files with modules published in Azure container registry\n**Describe the issue**\r\nIn a larger Bicep environment, we publish entire solutions to the Azure Container Registry. These solutions are called by users as modules in their Bicep deployments. Therefore, it is mandatory for us that CheckOV supports scanning of such environments.\r\n\r\nIn our view, these two solutions would sufficiently address the problem, with the first solution being our favorite:\r\n\r\n1. CheckOV downloads the modules from the Azure container registry to enable scanning of the code.\r\n2. CheckOV checks if the Azure container registry used is whitelisted and then scans only the available code. If the registry used is not whitelisted, CheckOV returns a hardfail.\r\n\r\nThe second option would be a workaround for us, as we only want users to use our registry. The modules that are published in this registry have been scanned by checkov before publishing.\r\n\r\nIn the current version of checkov (2.3.152) the scan of the bicep file fails with a parsing error.\r\n\r\n**Example Value**\r\n``` bicep\r\nmodule modSampleSolution 'br:sampleregistry.azurecr.io/bicep/modules/company.department/sampleSolution:1.0.0' = {\r\n name: 'sampleSolution-deploy'\r\n params: {\r\n parSampleParameter: 'Sample'\r\n }\r\n}\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\nimport json\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\nfrom pathlib import Path\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.build_py import build_py\n\n\nclass PreBuildCommand(build_py):\n \"\"\"Pre-build command\"\"\"\n\n def transform_graph_yaml_to_json(self) -> None:\n \"\"\"Transforms YAML graph checks to JSON and copies them to build/lib\"\"\"\n\n import yaml # can't be top-level, because it needs to be first installed via 'setup_requires'\n\n graph_check_paths = (\"checkov/*/checks/graph_checks\",)\n build_path = Path(self.build_lib)\n src_path = Path()\n\n for graph_check_path in graph_check_paths:\n for yaml_file in src_path.glob(f\"{graph_check_path}/**/*.yaml\"):\n json_file = (build_path / yaml_file).with_suffix(\".json\")\n self.mkpath(str(json_file.parent))\n json_file.write_text(json.dumps(yaml.safe_load(yaml_file.read_text())))\n\n def run(self) -> None:\n self.execute(self.transform_graph_yaml_to_json, ())\n build_py.run(self)\n\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: 
ignore\n\nsetup(\n cmdclass={\n \"build_py\": PreBuildCommand,\n },\n setup_requires=[\n \"pyyaml\",\n ],\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage==5.5\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\",\n \"jsonschema\",\n ]\n },\n install_requires=[\n \"bc-python-hcl2==0.3.51\",\n \"bc-detect-secrets==1.4.24\",\n \"bc-jsonpath-ng==1.5.9\",\n \"deep-merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml>=1.9\",\n \"dpath==2.1.3\",\n \"pyyaml>=5.4.1\",\n \"boto3>=1.17\",\n \"gitpython\",\n \"jmespath\",\n \"tqdm\",\n \"update-checker\",\n \"semantic-version\",\n \"packaging\",\n \"cloudsplaining>=0.4.3\",\n \"networkx<2.7\",\n \"igraph\",\n \"dockerfile-parse\",\n \"docker\",\n \"configargparse\",\n \"argcomplete\",\n \"policyuniverse\",\n \"typing-extensions>=4.1.0\",\n \"importlib-metadata>=0.12\",\n \"cachetools\",\n \"cyclonedx-python-lib<4.0.0,>=2.4.0\",\n \"packageurl-python\",\n \"click>=8.0.0\",\n \"aiohttp\",\n \"aiodns\",\n \"aiomultiprocess\",\n \"jsonschema<5.0.0,>=4.6.0\",\n \"prettytable>=3.0.0\",\n \"pycep-parser==0.3.9\",\n \"charset-normalizer\",\n \"pyston-autoload==2.3.5; python_version < '3.11' and (sys_platform == 'linux' or sys_platform == 'darwin') and platform_machine == 'x86_64' and implementation_name == 'cpython'\",\n \"pyston==2.3.5; python_version < '3.11' and (sys_platform == 'linux' or sys_platform == 'darwin') and platform_machine == 'x86_64' and implementation_name == 'cpython'\",\n \"schema\",\n \"requests>=2.27.0\",\n \"yarl\",\n \"openai\",\n ],\n dependency_links=[], # keep it empty, needed for pipenv-setup\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=find_packages(\n exclude=[\n \"dogfood_tests*\",\n \"flake8_plugins*\",\n \"integration_tests*\",\n \"performance_tests*\",\n \"tests*\",\n ]\n ),\n include_package_data=True,\n package_data={\n \"checkov\": [\"py.typed\"],\n \"checkov.common.util.templates\": [\"*.jinja2\"],\n \"checkov.ansible.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.arm.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.bicep.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.cloudformation.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.dockerfile.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.github_actions.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.kubernetes.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.terraform.checks.graph_checks\": [\"**/*.json\"],\n },\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Topic :: Security\",\n \"Topic :: Software Development :: Build Tools\",\n \"Typing :: Typed\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport json\nimport logging\nimport os\nfrom importlib import util\nfrom 
os import path\nfrom pathlib import Path\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.build_py import build_py\n\n\nclass PreBuildCommand(build_py):\n \"\"\"Pre-build command\"\"\"\n\n def transform_graph_yaml_to_json(self) -> None:\n \"\"\"Transforms YAML graph checks to JSON and copies them to build/lib\"\"\"\n\n import yaml # can't be top-level, because it needs to be first installed via 'setup_requires'\n\n graph_check_paths = (\"checkov/*/checks/graph_checks\",)\n build_path = Path(self.build_lib)\n src_path = Path()\n\n for graph_check_path in graph_check_paths:\n for yaml_file in src_path.glob(f\"{graph_check_path}/**/*.yaml\"):\n json_file = (build_path / yaml_file).with_suffix(\".json\")\n self.mkpath(str(json_file.parent))\n json_file.write_text(json.dumps(yaml.safe_load(yaml_file.read_text())))\n\n def run(self) -> None:\n self.execute(self.transform_graph_yaml_to_json, ())\n build_py.run(self)\n\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n cmdclass={\n \"build_py\": PreBuildCommand,\n },\n setup_requires=[\n \"pyyaml\",\n ],\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage==5.5\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\",\n \"jsonschema\",\n ]\n },\n install_requires=[\n \"bc-python-hcl2==0.3.51\",\n \"bc-detect-secrets==1.4.24\",\n \"bc-jsonpath-ng==1.5.9\",\n \"deep-merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml>=1.9\",\n \"dpath==2.1.3\",\n \"pyyaml>=5.4.1\",\n \"boto3>=1.17\",\n \"gitpython\",\n \"jmespath\",\n \"tqdm\",\n \"update-checker\",\n \"semantic-version\",\n \"packaging\",\n \"cloudsplaining>=0.4.3\",\n \"networkx<2.7\",\n \"igraph\",\n \"dockerfile-parse\",\n \"docker\",\n \"configargparse\",\n \"argcomplete\",\n \"policyuniverse\",\n \"typing-extensions>=4.1.0\",\n \"importlib-metadata>=0.12\",\n \"cachetools\",\n \"cyclonedx-python-lib<4.0.0,>=2.4.0\",\n \"packageurl-python\",\n \"click>=8.0.0\",\n \"aiohttp\",\n \"aiodns\",\n \"aiomultiprocess\",\n \"jsonschema<5.0.0,>=4.6.0\",\n \"prettytable>=3.0.0\",\n \"pycep-parser==0.4.0\",\n \"charset-normalizer\",\n \"pyston-autoload==2.3.5; python_version < '3.11' and (sys_platform == 'linux' or sys_platform == 'darwin') and platform_machine == 'x86_64' and implementation_name == 'cpython'\",\n \"pyston==2.3.5; python_version < '3.11' and (sys_platform == 'linux' or sys_platform == 'darwin') and platform_machine == 'x86_64' and implementation_name == 'cpython'\",\n \"schema\",\n \"requests>=2.27.0\",\n \"yarl\",\n \"openai\",\n ],\n dependency_links=[], # keep it empty, needed for pipenv-setup\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=find_packages(\n exclude=[\n \"dogfood_tests*\",\n \"flake8_plugins*\",\n \"integration_tests*\",\n \"performance_tests*\",\n \"tests*\",\n ]\n ),\n include_package_data=True,\n package_data={\n 
\"checkov\": [\"py.typed\"],\n \"checkov.common.util.templates\": [\"*.jinja2\"],\n \"checkov.ansible.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.arm.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.bicep.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.cloudformation.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.dockerfile.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.github_actions.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.kubernetes.checks.graph_checks\": [\"**/*.json\"],\n \"checkov.terraform.checks.graph_checks\": [\"**/*.json\"],\n },\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Topic :: Security\",\n \"Topic :: Software Development :: Build Tools\",\n \"Typing :: Typed\",\n ],\n)\n", "path": "setup.py"}]}
| 2,302 | 216 |
gh_patches_debug_17586
|
rasdani/github-patches
|
git_diff
|
sktime__sktime-6143
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOC] Document the `int_label` option in `BaseSeriesAnnotator`
#### Describe the issue linked to the documentation
<!--
Tell us about the confusion introduced in the documentation.
-->
The `BaseSeriesAnnotator` class has the attribute `labels` which, according to the documentation, can be either `indicator` or `score`. However, the `HMM` class, which inherits from `BaseSeriesAnnotator`, uses `int_label` for the `labels` attribute; this option is not documented in `BaseSeriesAnnotator`.
The label `int_label` means that for observations `obs = [10, 11, 8, -1, -4, -2]`, the annotator should produce an output like `y = [0, 0, 0, 1, 1, 1]`.
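
For illustration only (plain Python, not a call into sktime's API), the three `labels` modes can be contrasted on a toy series; the `score` values below are arbitrary made-up magnitudes:

```python
# Toy illustration of the three `labels` modes; not sktime code.
obs = [10, 11, 8, -1, -4, -2]

# labels="indicator": one boolean per observation (here: "is the value negative?")
indicator = [x < 0 for x in obs]        # [False, False, False, True, True, True]

# labels="score": one float per observation (here: an arbitrary magnitude score)
score = [float(abs(x)) for x in obs]    # [10.0, 11.0, 8.0, 1.0, 4.0, 2.0]

# labels="int_label": one integer segment/state label per observation
int_label = [0, 0, 0, 1, 1, 1]

assert len(indicator) == len(score) == len(int_label) == len(obs)
```
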
#### Suggest a potential alternative/fix
<!--
Tell us how we could improve the documentation in this regard.
-->
Add proper documentation to `BaseSeriesAnnotator` for the `int_label` option of the `labels` parameter.
#### Other comments
There are other algorithms in the annotations module which produce integer labels, such as `GreedyGaussianSegmentation` (GSS), which do not inherit from `BaseSeriesAnnotator`. Perhaps we can change `GreedyGaussianSegmentation` to inherit from `BaseSeriesAnnotator` and use the `int_label` option.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/annotation/base/_base.py`
Content:
```
1 #!/usr/bin/env python3 -u
2 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
3 """Base class template for annotator base type for time series stream.
4
5 class name: BaseSeriesAnnotator
6
7 Scitype defining methods:
8 fitting - fit(self, X, Y=None)
9 annotating - predict(self, X)
10 updating (temporal) - update(self, X, Y=None)
11 update&annotate - update_predict(self, X)
12
13 Inspection methods:
14 hyper-parameter inspection - get_params()
15 fitted parameter inspection - get_fitted_params()
16
17 State:
18 fitted model/strategy - by convention, any attributes ending in "_"
19 fitted state flag - check_is_fitted()
20 """
21
22 __author__ = ["satya-pattnaik ", "fkiraly"]
23 __all__ = ["BaseSeriesAnnotator"]
24
25 from sktime.base import BaseEstimator
26 from sktime.utils.validation.annotation import check_fmt, check_labels
27 from sktime.utils.validation.series import check_series
28
29
30 class BaseSeriesAnnotator(BaseEstimator):
31 """Base series annotator.
32
33 Parameters
34 ----------
35 fmt : str {"dense", "sparse"}, optional (default="dense")
36 Annotation output format:
37 * If "sparse", a sub-series of labels for only the outliers in X is returned,
38 * If "dense", a series of labels for all values in X is returned.
39 labels : str {"indicator", "score"}, optional (default="indicator")
40 Annotation output labels:
41 * If "indicator", returned values are boolean, indicating whether a value is an
42 outlier,
43 * If "score", returned values are floats, giving the outlier score.
44
45 Notes
46 -----
47 Assumes "predict" data is temporal future of "fit"
48 Single time series in both, no meta-data.
49
50 The base series annotator specifies the methods and method
51 signatures that all annotators have to implement.
52
53 Specific implementations of these methods is deferred to concrete
54 annotators.
55 """
56
57 _tags = {
58 "object_type": "series-annotator", # type of object
59 "distribution_type": "None", # Tag to determine test in test_all_annotators
60 } # for unit test cases
61
62 def __init__(self, fmt="dense", labels="indicator"):
63 self.fmt = fmt
64 self.labels = labels
65
66 self._is_fitted = False
67
68 self._X = None
69 self._Y = None
70
71 super().__init__()
72
73 def fit(self, X, Y=None):
74 """Fit to training data.
75
76 Parameters
77 ----------
78 X : pd.DataFrame
79 Training data to fit model to (time series).
80 Y : pd.Series, optional
81 Ground truth annotations for training if annotator is supervised.
82
83 Returns
84 -------
85 self :
86 Reference to self.
87
88 Notes
89 -----
90 Creates fitted model that updates attributes ending in "_". Sets
91 _is_fitted flag to True.
92 """
93 check_labels(self.labels)
94 check_fmt(self.fmt)
95 X = check_series(X)
96
97 if Y is not None:
98 Y = check_series(Y)
99
100 self._X = X
101 self._Y = Y
102
103 # fkiraly: insert checks/conversions here, after PR #1012 I suggest
104
105 self._fit(X=X, Y=Y)
106
107 # this should happen last
108 self._is_fitted = True
109
110 return self
111
112 def predict(self, X):
113 """Create annotations on test/deployment data.
114
115 Parameters
116 ----------
117 X : pd.DataFrame
118 Data to annotate (time series).
119
120 Returns
121 -------
122 Y : pd.Series
123 Annotations for sequence X exact format depends on annotation type.
124 """
125 self.check_is_fitted()
126
127 X = check_series(X)
128
129 # fkiraly: insert checks/conversions here, after PR #1012 I suggest
130
131 Y = self._predict(X=X)
132
133 return Y
134
135 def predict_scores(self, X):
136 """Return scores for predicted annotations on test/deployment data.
137
138 Parameters
139 ----------
140 X : pd.DataFrame
141 Data to annotate (time series).
142
143 Returns
144 -------
145 Y : pd.Series
146 Scores for sequence X exact format depends on annotation type.
147 """
148 self.check_is_fitted()
149 X = check_series(X)
150 return self._predict_scores(X)
151
152 def update(self, X, Y=None):
153 """Update model with new data and optional ground truth annotations.
154
155 Parameters
156 ----------
157 X : pd.DataFrame
158 Training data to update model with (time series).
159 Y : pd.Series, optional
160 Ground truth annotations for training if annotator is supervised.
161
162 Returns
163 -------
164 self :
165 Reference to self.
166
167 Notes
168 -----
169 Updates fitted model that updates attributes ending in "_".
170 """
171 self.check_is_fitted()
172
173 X = check_series(X)
174
175 if Y is not None:
176 Y = check_series(Y)
177
178 self._X = X.combine_first(self._X)
179
180 if Y is not None:
181 self._Y = Y.combine_first(self._Y)
182
183 self._update(X=X, Y=Y)
184
185 return self
186
187 def update_predict(self, X):
188 """Update model with new data and create annotations for it.
189
190 Parameters
191 ----------
192 X : pd.DataFrame
193 Training data to update model with, time series.
194
195 Returns
196 -------
197 Y : pd.Series
198 Annotations for sequence X exact format depends on annotation type.
199
200 Notes
201 -----
202 Updates fitted model that updates attributes ending in "_".
203 """
204 X = check_series(X)
205
206 self.update(X=X)
207 Y = self.predict(X=X)
208
209 return Y
210
211 def fit_predict(self, X, Y=None):
212 """Fit to data, then predict it.
213
214 Fits model to X and Y with given annotation parameters
215 and returns the annotations made by the model.
216
217 Parameters
218 ----------
219 X : pd.DataFrame, pd.Series or np.ndarray
220 Data to be transformed
221 Y : pd.Series or np.ndarray, optional (default=None)
222 Target values of data to be predicted.
223
224 Returns
225 -------
226 self : pd.Series
227 Annotations for sequence X exact format depends on annotation type.
228 """
229 # Non-optimized default implementation; override when a better
230 # method is possible for a given algorithm.
231 return self.fit(X, Y).predict(X)
232
233 def _fit(self, X, Y=None):
234 """Fit to training data.
235
236 core logic
237
238 Parameters
239 ----------
240 X : pd.DataFrame
241 Training data to fit model to time series.
242 Y : pd.Series, optional
243 Ground truth annotations for training if annotator is supervised.
244
245 Returns
246 -------
247 self :
248 Reference to self.
249
250 Notes
251 -----
252 Updates fitted model that updates attributes ending in "_".
253 """
254 raise NotImplementedError("abstract method")
255
256 def _predict(self, X):
257 """Create annotations on test/deployment data.
258
259 core logic
260
261 Parameters
262 ----------
263 X : pd.DataFrame
264 Data to annotate, time series.
265
266 Returns
267 -------
268 Y : pd.Series
269 Annotations for sequence X exact format depends on annotation type.
270 """
271 raise NotImplementedError("abstract method")
272
273 def _predict_scores(self, X):
274 """Return scores for predicted annotations on test/deployment data.
275
276 core logic
277
278 Parameters
279 ----------
280 X : pd.DataFrame
281 Data to annotate, time series.
282
283 Returns
284 -------
285 Y : pd.Series
286 Annotations for sequence X exact format depends on annotation type.
287 """
288 raise NotImplementedError("abstract method")
289
290 def _update(self, X, Y=None):
291 """Update model with new data and optional ground truth annotations.
292
293 core logic
294
295 Parameters
296 ----------
297 X : pd.DataFrame
298 Training data to update model with time series
299 Y : pd.Series, optional
300 Ground truth annotations for training if annotator is supervised.
301
302 Returns
303 -------
304 self :
305 Reference to self.
306
307 Notes
308 -----
309 Updates fitted model that updates attributes ending in "_".
310 """
311 # default/fallback: re-fit to all data
312 self._fit(self._X, self._Y)
313
314 return self
315
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sktime/annotation/base/_base.py b/sktime/annotation/base/_base.py
--- a/sktime/annotation/base/_base.py
+++ b/sktime/annotation/base/_base.py
@@ -36,11 +36,13 @@
Annotation output format:
* If "sparse", a sub-series of labels for only the outliers in X is returned,
* If "dense", a series of labels for all values in X is returned.
- labels : str {"indicator", "score"}, optional (default="indicator")
+ labels : str {"indicator", "score", "int_label"}, optional (default="indicator")
Annotation output labels:
* If "indicator", returned values are boolean, indicating whether a value is an
outlier,
* If "score", returned values are floats, giving the outlier score.
+ * If "int_label", returned values are integers indicating which segment the
+ value belongs to.
Notes
-----
|
{"golden_diff": "diff --git a/sktime/annotation/base/_base.py b/sktime/annotation/base/_base.py\n--- a/sktime/annotation/base/_base.py\n+++ b/sktime/annotation/base/_base.py\n@@ -36,11 +36,13 @@\n Annotation output format:\n * If \"sparse\", a sub-series of labels for only the outliers in X is returned,\n * If \"dense\", a series of labels for all values in X is returned.\n- labels : str {\"indicator\", \"score\"}, optional (default=\"indicator\")\n+ labels : str {\"indicator\", \"score\", \"int_label\"}, optional (default=\"indicator\")\n Annotation output labels:\n * If \"indicator\", returned values are boolean, indicating whether a value is an\n outlier,\n * If \"score\", returned values are floats, giving the outlier score.\n+ * If \"int_label\", returned values are integers indicating which segment the\n+ value belongs to.\n \n Notes\n -----\n", "issue": "[DOC] Document the `int_label` option in `BaseSeriesAnnotator`\n#### Describe the issue linked to the documentation\r\n\r\n<!--\r\nTell us about the confusion introduced in the documentation.\r\n-->\r\n\r\nThe `BaseSeriesAnnotator` class has the attribute `labels` which, according to the documentation, can be either `indicator` or `score`. However, the `HMM` class which inherits from `BaseSeriesAnnotator` uses `int_label` for the `labels` attribute, this option is not documented in `BaseSeriesAnnotator`.\r\n\r\nThe label `int_label` means that for observations `obs = [10, 11, 8, -1. -4. -2]`, the annotator should produce an output like `y = [0, 0, 0, 1, 1, 1]`.\r\n\r\n#### Suggest a potential alternative/fix\r\n\r\n<!--\r\nTell us how we could improve the documentation in this regard.\r\n-->\r\n\r\nAdd proprer documentation to the `BaseSeriesAnnotator` for the `int_label` option for the `labels` parameter.\r\n\r\n#### Other comments\r\n\r\nThere are other algorithms in the annotations module which produce integer labels, such as `GreedyGaussianSegmentation` (GSS), which do not inherit from `BaseSeriesAnnotator`. 
Perhaps we can change `GreedyGaussianSegmentation` to inherit from `BaseSeriesAnnotator` and use the `int_label` option.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3 -u\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Base class template for annotator base type for time series stream.\n\n class name: BaseSeriesAnnotator\n\nScitype defining methods:\n fitting - fit(self, X, Y=None)\n annotating - predict(self, X)\n updating (temporal) - update(self, X, Y=None)\n update&annotate - update_predict(self, X)\n\nInspection methods:\n hyper-parameter inspection - get_params()\n fitted parameter inspection - get_fitted_params()\n\nState:\n fitted model/strategy - by convention, any attributes ending in \"_\"\n fitted state flag - check_is_fitted()\n\"\"\"\n\n__author__ = [\"satya-pattnaik \", \"fkiraly\"]\n__all__ = [\"BaseSeriesAnnotator\"]\n\nfrom sktime.base import BaseEstimator\nfrom sktime.utils.validation.annotation import check_fmt, check_labels\nfrom sktime.utils.validation.series import check_series\n\n\nclass BaseSeriesAnnotator(BaseEstimator):\n \"\"\"Base series annotator.\n\n Parameters\n ----------\n fmt : str {\"dense\", \"sparse\"}, optional (default=\"dense\")\n Annotation output format:\n * If \"sparse\", a sub-series of labels for only the outliers in X is returned,\n * If \"dense\", a series of labels for all values in X is returned.\n labels : str {\"indicator\", \"score\"}, optional (default=\"indicator\")\n Annotation output labels:\n * If \"indicator\", returned values are boolean, indicating whether a value is an\n outlier,\n * If \"score\", returned values are floats, giving the outlier score.\n\n Notes\n -----\n Assumes \"predict\" data is temporal future of \"fit\"\n Single time series in both, no meta-data.\n\n The base series annotator specifies the methods and method\n signatures that all annotators have to implement.\n\n Specific implementations of these methods is deferred to concrete\n annotators.\n \"\"\"\n\n _tags = {\n \"object_type\": \"series-annotator\", # type of object\n \"distribution_type\": \"None\", # Tag to determine test in test_all_annotators\n } # for unit test cases\n\n def __init__(self, fmt=\"dense\", labels=\"indicator\"):\n self.fmt = fmt\n self.labels = labels\n\n self._is_fitted = False\n\n self._X = None\n self._Y = None\n\n super().__init__()\n\n def fit(self, X, Y=None):\n \"\"\"Fit to training data.\n\n Parameters\n ----------\n X : pd.DataFrame\n Training data to fit model to (time series).\n Y : pd.Series, optional\n Ground truth annotations for training if annotator is supervised.\n\n Returns\n -------\n self :\n Reference to self.\n\n Notes\n -----\n Creates fitted model that updates attributes ending in \"_\". 
Sets\n _is_fitted flag to True.\n \"\"\"\n check_labels(self.labels)\n check_fmt(self.fmt)\n X = check_series(X)\n\n if Y is not None:\n Y = check_series(Y)\n\n self._X = X\n self._Y = Y\n\n # fkiraly: insert checks/conversions here, after PR #1012 I suggest\n\n self._fit(X=X, Y=Y)\n\n # this should happen last\n self._is_fitted = True\n\n return self\n\n def predict(self, X):\n \"\"\"Create annotations on test/deployment data.\n\n Parameters\n ----------\n X : pd.DataFrame\n Data to annotate (time series).\n\n Returns\n -------\n Y : pd.Series\n Annotations for sequence X exact format depends on annotation type.\n \"\"\"\n self.check_is_fitted()\n\n X = check_series(X)\n\n # fkiraly: insert checks/conversions here, after PR #1012 I suggest\n\n Y = self._predict(X=X)\n\n return Y\n\n def predict_scores(self, X):\n \"\"\"Return scores for predicted annotations on test/deployment data.\n\n Parameters\n ----------\n X : pd.DataFrame\n Data to annotate (time series).\n\n Returns\n -------\n Y : pd.Series\n Scores for sequence X exact format depends on annotation type.\n \"\"\"\n self.check_is_fitted()\n X = check_series(X)\n return self._predict_scores(X)\n\n def update(self, X, Y=None):\n \"\"\"Update model with new data and optional ground truth annotations.\n\n Parameters\n ----------\n X : pd.DataFrame\n Training data to update model with (time series).\n Y : pd.Series, optional\n Ground truth annotations for training if annotator is supervised.\n\n Returns\n -------\n self :\n Reference to self.\n\n Notes\n -----\n Updates fitted model that updates attributes ending in \"_\".\n \"\"\"\n self.check_is_fitted()\n\n X = check_series(X)\n\n if Y is not None:\n Y = check_series(Y)\n\n self._X = X.combine_first(self._X)\n\n if Y is not None:\n self._Y = Y.combine_first(self._Y)\n\n self._update(X=X, Y=Y)\n\n return self\n\n def update_predict(self, X):\n \"\"\"Update model with new data and create annotations for it.\n\n Parameters\n ----------\n X : pd.DataFrame\n Training data to update model with, time series.\n\n Returns\n -------\n Y : pd.Series\n Annotations for sequence X exact format depends on annotation type.\n\n Notes\n -----\n Updates fitted model that updates attributes ending in \"_\".\n \"\"\"\n X = check_series(X)\n\n self.update(X=X)\n Y = self.predict(X=X)\n\n return Y\n\n def fit_predict(self, X, Y=None):\n \"\"\"Fit to data, then predict it.\n\n Fits model to X and Y with given annotation parameters\n and returns the annotations made by the model.\n\n Parameters\n ----------\n X : pd.DataFrame, pd.Series or np.ndarray\n Data to be transformed\n Y : pd.Series or np.ndarray, optional (default=None)\n Target values of data to be predicted.\n\n Returns\n -------\n self : pd.Series\n Annotations for sequence X exact format depends on annotation type.\n \"\"\"\n # Non-optimized default implementation; override when a better\n # method is possible for a given algorithm.\n return self.fit(X, Y).predict(X)\n\n def _fit(self, X, Y=None):\n \"\"\"Fit to training data.\n\n core logic\n\n Parameters\n ----------\n X : pd.DataFrame\n Training data to fit model to time series.\n Y : pd.Series, optional\n Ground truth annotations for training if annotator is supervised.\n\n Returns\n -------\n self :\n Reference to self.\n\n Notes\n -----\n Updates fitted model that updates attributes ending in \"_\".\n \"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _predict(self, X):\n \"\"\"Create annotations on test/deployment data.\n\n core logic\n\n Parameters\n ----------\n X : 
pd.DataFrame\n Data to annotate, time series.\n\n Returns\n -------\n Y : pd.Series\n Annotations for sequence X exact format depends on annotation type.\n \"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _predict_scores(self, X):\n \"\"\"Return scores for predicted annotations on test/deployment data.\n\n core logic\n\n Parameters\n ----------\n X : pd.DataFrame\n Data to annotate, time series.\n\n Returns\n -------\n Y : pd.Series\n Annotations for sequence X exact format depends on annotation type.\n \"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _update(self, X, Y=None):\n \"\"\"Update model with new data and optional ground truth annotations.\n\n core logic\n\n Parameters\n ----------\n X : pd.DataFrame\n Training data to update model with time series\n Y : pd.Series, optional\n Ground truth annotations for training if annotator is supervised.\n\n Returns\n -------\n self :\n Reference to self.\n\n Notes\n -----\n Updates fitted model that updates attributes ending in \"_\".\n \"\"\"\n # default/fallback: re-fit to all data\n self._fit(self._X, self._Y)\n\n return self\n", "path": "sktime/annotation/base/_base.py"}], "after_files": [{"content": "#!/usr/bin/env python3 -u\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Base class template for annotator base type for time series stream.\n\n class name: BaseSeriesAnnotator\n\nScitype defining methods:\n fitting - fit(self, X, Y=None)\n annotating - predict(self, X)\n updating (temporal) - update(self, X, Y=None)\n update&annotate - update_predict(self, X)\n\nInspection methods:\n hyper-parameter inspection - get_params()\n fitted parameter inspection - get_fitted_params()\n\nState:\n fitted model/strategy - by convention, any attributes ending in \"_\"\n fitted state flag - check_is_fitted()\n\"\"\"\n\n__author__ = [\"satya-pattnaik \", \"fkiraly\"]\n__all__ = [\"BaseSeriesAnnotator\"]\n\nfrom sktime.base import BaseEstimator\nfrom sktime.utils.validation.annotation import check_fmt, check_labels\nfrom sktime.utils.validation.series import check_series\n\n\nclass BaseSeriesAnnotator(BaseEstimator):\n \"\"\"Base series annotator.\n\n Parameters\n ----------\n fmt : str {\"dense\", \"sparse\"}, optional (default=\"dense\")\n Annotation output format:\n * If \"sparse\", a sub-series of labels for only the outliers in X is returned,\n * If \"dense\", a series of labels for all values in X is returned.\n labels : str {\"indicator\", \"score\", \"int_label\"}, optional (default=\"indicator\")\n Annotation output labels:\n * If \"indicator\", returned values are boolean, indicating whether a value is an\n outlier,\n * If \"score\", returned values are floats, giving the outlier score.\n * If \"int_label\", returned values are integers indicating which segment the\n value belongs to.\n\n Notes\n -----\n Assumes \"predict\" data is temporal future of \"fit\"\n Single time series in both, no meta-data.\n\n The base series annotator specifies the methods and method\n signatures that all annotators have to implement.\n\n Specific implementations of these methods is deferred to concrete\n annotators.\n \"\"\"\n\n _tags = {\n \"object_type\": \"series-annotator\", # type of object\n \"distribution_type\": \"None\", # Tag to determine test in test_all_annotators\n } # for unit test cases\n\n def __init__(self, fmt=\"dense\", labels=\"indicator\"):\n self.fmt = fmt\n self.labels = labels\n\n self._is_fitted = False\n\n self._X = None\n self._Y = None\n\n super().__init__()\n\n def fit(self, 
X, Y=None):\n \"\"\"Fit to training data.\n\n Parameters\n ----------\n X : pd.DataFrame\n Training data to fit model to (time series).\n Y : pd.Series, optional\n Ground truth annotations for training if annotator is supervised.\n\n Returns\n -------\n self :\n Reference to self.\n\n Notes\n -----\n Creates fitted model that updates attributes ending in \"_\". Sets\n _is_fitted flag to True.\n \"\"\"\n check_labels(self.labels)\n check_fmt(self.fmt)\n X = check_series(X)\n\n if Y is not None:\n Y = check_series(Y)\n\n self._X = X\n self._Y = Y\n\n # fkiraly: insert checks/conversions here, after PR #1012 I suggest\n\n self._fit(X=X, Y=Y)\n\n # this should happen last\n self._is_fitted = True\n\n return self\n\n def predict(self, X):\n \"\"\"Create annotations on test/deployment data.\n\n Parameters\n ----------\n X : pd.DataFrame\n Data to annotate (time series).\n\n Returns\n -------\n Y : pd.Series\n Annotations for sequence X exact format depends on annotation type.\n \"\"\"\n self.check_is_fitted()\n\n X = check_series(X)\n\n # fkiraly: insert checks/conversions here, after PR #1012 I suggest\n\n Y = self._predict(X=X)\n\n return Y\n\n def predict_scores(self, X):\n \"\"\"Return scores for predicted annotations on test/deployment data.\n\n Parameters\n ----------\n X : pd.DataFrame\n Data to annotate (time series).\n\n Returns\n -------\n Y : pd.Series\n Scores for sequence X exact format depends on annotation type.\n \"\"\"\n self.check_is_fitted()\n X = check_series(X)\n return self._predict_scores(X)\n\n def update(self, X, Y=None):\n \"\"\"Update model with new data and optional ground truth annotations.\n\n Parameters\n ----------\n X : pd.DataFrame\n Training data to update model with (time series).\n Y : pd.Series, optional\n Ground truth annotations for training if annotator is supervised.\n\n Returns\n -------\n self :\n Reference to self.\n\n Notes\n -----\n Updates fitted model that updates attributes ending in \"_\".\n \"\"\"\n self.check_is_fitted()\n\n X = check_series(X)\n\n if Y is not None:\n Y = check_series(Y)\n\n self._X = X.combine_first(self._X)\n\n if Y is not None:\n self._Y = Y.combine_first(self._Y)\n\n self._update(X=X, Y=Y)\n\n return self\n\n def update_predict(self, X):\n \"\"\"Update model with new data and create annotations for it.\n\n Parameters\n ----------\n X : pd.DataFrame\n Training data to update model with, time series.\n\n Returns\n -------\n Y : pd.Series\n Annotations for sequence X exact format depends on annotation type.\n\n Notes\n -----\n Updates fitted model that updates attributes ending in \"_\".\n \"\"\"\n X = check_series(X)\n\n self.update(X=X)\n Y = self.predict(X=X)\n\n return Y\n\n def fit_predict(self, X, Y=None):\n \"\"\"Fit to data, then predict it.\n\n Fits model to X and Y with given annotation parameters\n and returns the annotations made by the model.\n\n Parameters\n ----------\n X : pd.DataFrame, pd.Series or np.ndarray\n Data to be transformed\n Y : pd.Series or np.ndarray, optional (default=None)\n Target values of data to be predicted.\n\n Returns\n -------\n self : pd.Series\n Annotations for sequence X exact format depends on annotation type.\n \"\"\"\n # Non-optimized default implementation; override when a better\n # method is possible for a given algorithm.\n return self.fit(X, Y).predict(X)\n\n def _fit(self, X, Y=None):\n \"\"\"Fit to training data.\n\n core logic\n\n Parameters\n ----------\n X : pd.DataFrame\n Training data to fit model to time series.\n Y : pd.Series, optional\n Ground truth annotations for 
training if annotator is supervised.\n\n Returns\n -------\n self :\n Reference to self.\n\n Notes\n -----\n Updates fitted model that updates attributes ending in \"_\".\n \"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _predict(self, X):\n \"\"\"Create annotations on test/deployment data.\n\n core logic\n\n Parameters\n ----------\n X : pd.DataFrame\n Data to annotate, time series.\n\n Returns\n -------\n Y : pd.Series\n Annotations for sequence X exact format depends on annotation type.\n \"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _predict_scores(self, X):\n \"\"\"Return scores for predicted annotations on test/deployment data.\n\n core logic\n\n Parameters\n ----------\n X : pd.DataFrame\n Data to annotate, time series.\n\n Returns\n -------\n Y : pd.Series\n Annotations for sequence X exact format depends on annotation type.\n \"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _update(self, X, Y=None):\n \"\"\"Update model with new data and optional ground truth annotations.\n\n core logic\n\n Parameters\n ----------\n X : pd.DataFrame\n Training data to update model with time series\n Y : pd.Series, optional\n Ground truth annotations for training if annotator is supervised.\n\n Returns\n -------\n self :\n Reference to self.\n\n Notes\n -----\n Updates fitted model that updates attributes ending in \"_\".\n \"\"\"\n # default/fallback: re-fit to all data\n self._fit(self._X, self._Y)\n\n return self\n", "path": "sktime/annotation/base/_base.py"}]}
| 3,224 | 214 |
gh_patches_debug_14584
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-14526
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Poor performance on the create/edit tenant field with a high number of tenants 
### Deployment Type
Self-hosted
### NetBox Version
v3.6.6
### Python Version
3.11
### Steps to Reproduce
* Create around 85k tenants in a single tenant group (see the seeding sketch after this list)
* Create around 150k prefixes linked with different tenants
* Create some sites linked with different tenants
* Then edit one of the prefixes and click on the tenant selector.
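A rough way to seed that data volume on a throwaway instance (hypothetical script; model import paths assumed from the NetBox `tenancy` app):
```python
# Hypothetical seeding script, run via `manage.py shell` / `manage.py nbshell` on a test instance.
from tenancy.models import Tenant, TenantGroup

group, _ = TenantGroup.objects.get_or_create(name="load-test", slug="load-test")
Tenant.objects.bulk_create(
    [Tenant(name=f"tenant-{i:05d}", slug=f"tenant-{i:05d}", group=group) for i in range(85_000)],
    batch_size=1_000,
)
```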
### Expected Behavior
A response time of one second or less would be reasonable.
Alternatively, the searchable select could be asynchronous, allowing the user to type search characters while the first request is still running in the background.
### Observed Behavior
With a Postgres 16 database, 16 GB of RAM and 4 CPUs, it takes 3.5 seconds to respond and let the user start typing search characters.
As far as I can trace through the debug process, I can see these kinds of SQL requests with long running times (>3 s):
```sql
SELECT
DISTINCT "tenancy_tenant"."id",
"tenancy_tenant"."created",
"tenancy_tenant"."last_updated",
"tenancy_tenant"."custom_field_data",
"tenancy_tenant"."description",
"tenancy_tenant"."comments",
"tenancy_tenant"."name",
"tenancy_tenant"."slug",
"tenancy_tenant"."group_id",
COALESCE(
(
SELECT
COUNT(*) AS "c"
FROM
"circuits_circuit" U0
WHERE
U0."tenant_id" = ("tenancy_tenant"."id")
GROUP BY
U0."tenant_id"
),
0
) AS "circuit_count",
COALESCE(
(
SELECT
COUNT(*) AS "c"
FROM
"dcim_device" U0
WHERE
U0."tenant_id" = ("tenancy_tenant"."id")
GROUP BY
U0."tenant_id"
),
0
) AS "device_count",
COALESCE(
(
SELECT
COUNT(*) AS "c"
FROM
"ipam_ipaddress" U0
WHERE
U0."tenant_id" = ("tenancy_tenant"."id")
GROUP BY
U0."tenant_id"
),
0
) AS "ipaddress_count",
COALESCE(
(
SELECT
COUNT(*) AS "c"
FROM
"ipam_prefix" U0
WHERE
U0."tenant_id" = ("tenancy_tenant"."id")
GROUP BY
U0."tenant_id"
),
0
) AS "prefix_count",
COALESCE(
(
SELECT
COUNT(*) AS "c"
FROM
"dcim_rack" U0
WHERE
U0."tenant_id" = ("tenancy_tenant"."id")
GROUP BY
U0."tenant_id"
),
0
) AS "rack_count",
COALESCE(
(
SELECT
COUNT(*) AS "c"
FROM
"dcim_site" U0
WHERE
U0."tenant_id" = ("tenancy_tenant"."id")
GROUP BY
U0."tenant_id"
),
0
) AS "site_count",
COALESCE(
(
SELECT
COUNT(*) AS "c"
FROM
"virtualization_virtualmachine" U0
WHERE
U0."tenant_id" = ("tenancy_tenant"."id")
GROUP BY
U0."tenant_id"
),
0
) AS "virtualmachine_count",
COALESCE(
(
SELECT
COUNT(*) AS "c"
FROM
"ipam_vlan" U0
WHERE
U0."tenant_id" = ("tenancy_tenant"."id")
GROUP BY
U0."tenant_id"
),
0
) AS "vlan_count",
COALESCE(
(
SELECT
COUNT(*) AS "c"
FROM
"ipam_vrf" U0
WHERE
U0."tenant_id" = ("tenancy_tenant"."id")
GROUP BY
U0."tenant_id"
),
0
) AS "vrf_count",
COALESCE(
(
SELECT
COUNT(*) AS "c"
FROM
"virtualization_cluster" U0
WHERE
U0."tenant_id" = ("tenancy_tenant"."id")
GROUP BY
U0."tenant_id"
),
0
) AS "cluster_count"
FROM
"tenancy_tenant"
WHERE
"tenancy_tenant"."group_id" IN (
SELECT
U0."id"
FROM
"tenancy_tenantgroup" U0
WHERE
U0."id" = 4
)
ORDER BY
"tenancy_tenant"."name" ASC
LIMIT
50
```
These requests run a COUNT(*) subquery for each kind of related object, so with 85k tenants each count is executed 85k times.
From my point of view, displaying a dynamic selector with a custom display field does not require counting every related object.
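As a rough sketch (hypothetical, not the actual NetBox queryset), a selector only needs a few fields per row, so something along these lines would avoid the per-tenant count subqueries entirely:
```python
# Hypothetical count-free queryset: only what a select widget needs to render its choices.
from tenancy.models import Tenant

choices = (
    Tenant.objects
    .filter(group_id=4)
    .only("id", "name", "slug")  # no COALESCE(COUNT(*)) annotations
    .order_by("name")[:50]
)
```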
To be more specific, on the PostgreSQL server the request runs against RAM-buffered data, with no hits on the system SSD; cache usage is around 30% and the queried objects are served from cache.
We already have a discussion about it on slack: https://netdev-community.slack.com/archives/C01P0FRSXRV/p1702476390699559
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/netbox/api/viewsets/mixins.py`
Content:
```
1 import logging
2
3 from django.contrib.contenttypes.models import ContentType
4 from django.core.exceptions import ObjectDoesNotExist
5 from django.db import transaction
6 from django.http import Http404
7 from rest_framework import status
8 from rest_framework.response import Response
9
10 from extras.models import ExportTemplate
11 from netbox.api.exceptions import SerializerNotFound
12 from netbox.api.serializers import BulkOperationSerializer
13 from netbox.constants import NESTED_SERIALIZER_PREFIX
14 from utilities.api import get_serializer_for_model
15
16 __all__ = (
17 'BriefModeMixin',
18 'BulkDestroyModelMixin',
19 'BulkUpdateModelMixin',
20 'CustomFieldsMixin',
21 'ExportTemplatesMixin',
22 'ObjectValidationMixin',
23 'SequentialBulkCreatesMixin',
24 )
25
26
27 class BriefModeMixin:
28 """
29 Enables brief mode support, so that the client can invoke a model's nested serializer by passing e.g.
30 GET /api/dcim/sites/?brief=True
31 """
32 brief = False
33 brief_prefetch_fields = []
34
35 def initialize_request(self, request, *args, **kwargs):
36 # Annotate whether brief mode is active
37 self.brief = request.method == 'GET' and request.GET.get('brief')
38
39 return super().initialize_request(request, *args, **kwargs)
40
41 def get_serializer_class(self):
42 logger = logging.getLogger(f'netbox.api.views.{self.__class__.__name__}')
43
44 # If using 'brief' mode, find and return the nested serializer for this model, if one exists
45 if self.brief:
46 logger.debug("Request is for 'brief' format; initializing nested serializer")
47 try:
48 return get_serializer_for_model(self.queryset.model, prefix=NESTED_SERIALIZER_PREFIX)
49 except SerializerNotFound:
50 logger.debug(
51 f"Nested serializer for {self.queryset.model} not found! Using serializer {self.serializer_class}"
52 )
53
54 return self.serializer_class
55
56 def get_queryset(self):
57 qs = super().get_queryset()
58
59 # If using brief mode, clear all prefetches from the queryset and append only brief_prefetch_fields (if any)
60 if self.brief:
61 return qs.prefetch_related(None).prefetch_related(*self.brief_prefetch_fields)
62
63 return qs
64
65
66 class CustomFieldsMixin:
67 """
68 For models which support custom fields, populate the `custom_fields` context.
69 """
70 def get_serializer_context(self):
71 context = super().get_serializer_context()
72
73 if hasattr(self.queryset.model, 'custom_fields'):
74 content_type = ContentType.objects.get_for_model(self.queryset.model)
75 context.update({
76 'custom_fields': content_type.custom_fields.all(),
77 })
78
79 return context
80
81
82 class ExportTemplatesMixin:
83 """
84 Enable ExportTemplate support for list views.
85 """
86 def list(self, request, *args, **kwargs):
87 if 'export' in request.GET:
88 content_type = ContentType.objects.get_for_model(self.get_serializer_class().Meta.model)
89 et = ExportTemplate.objects.filter(content_types=content_type, name=request.GET['export']).first()
90 if et is None:
91 raise Http404
92 queryset = self.filter_queryset(self.get_queryset())
93 return et.render_to_response(queryset)
94
95 return super().list(request, *args, **kwargs)
96
97
98 class SequentialBulkCreatesMixin:
99 """
100 Perform bulk creation of new objects sequentially, rather than all at once. This ensures that any validation
101 which depends on the evaluation of existing objects (such as checking for free space within a rack) functions
102 appropriately.
103 """
104 @transaction.atomic
105 def create(self, request, *args, **kwargs):
106 if not isinstance(request.data, list):
107 # Creating a single object
108 return super().create(request, *args, **kwargs)
109
110 return_data = []
111 for data in request.data:
112 serializer = self.get_serializer(data=data)
113 serializer.is_valid(raise_exception=True)
114 self.perform_create(serializer)
115 return_data.append(serializer.data)
116
117 headers = self.get_success_headers(serializer.data)
118
119 return Response(return_data, status=status.HTTP_201_CREATED, headers=headers)
120
121
122 class BulkUpdateModelMixin:
123 """
124 Support bulk modification of objects using the list endpoint for a model. Accepts a PATCH action with a list of one
125 or more JSON objects, each specifying the numeric ID of an object to be updated as well as the attributes to be set.
126 For example:
127
128 PATCH /api/dcim/sites/
129 [
130 {
131 "id": 123,
132 "name": "New name"
133 },
134 {
135 "id": 456,
136 "status": "planned"
137 }
138 ]
139 """
140 def bulk_update(self, request, *args, **kwargs):
141 partial = kwargs.pop('partial', False)
142 serializer = BulkOperationSerializer(data=request.data, many=True)
143 serializer.is_valid(raise_exception=True)
144 qs = self.get_queryset().filter(
145 pk__in=[o['id'] for o in serializer.data]
146 )
147
148 # Map update data by object ID
149 update_data = {
150 obj.pop('id'): obj for obj in request.data
151 }
152
153 data = self.perform_bulk_update(qs, update_data, partial=partial)
154
155 return Response(data, status=status.HTTP_200_OK)
156
157 def perform_bulk_update(self, objects, update_data, partial):
158 with transaction.atomic():
159 data_list = []
160 for obj in objects:
161 data = update_data.get(obj.id)
162 if hasattr(obj, 'snapshot'):
163 obj.snapshot()
164 serializer = self.get_serializer(obj, data=data, partial=partial)
165 serializer.is_valid(raise_exception=True)
166 self.perform_update(serializer)
167 data_list.append(serializer.data)
168
169 return data_list
170
171 def bulk_partial_update(self, request, *args, **kwargs):
172 kwargs['partial'] = True
173 return self.bulk_update(request, *args, **kwargs)
174
175
176 class BulkDestroyModelMixin:
177 """
178 Support bulk deletion of objects using the list endpoint for a model. Accepts a DELETE action with a list of one
179 or more JSON objects, each specifying the numeric ID of an object to be deleted. For example:
180
181 DELETE /api/dcim/sites/
182 [
183 {"id": 123},
184 {"id": 456}
185 ]
186 """
187 def bulk_destroy(self, request, *args, **kwargs):
188 serializer = BulkOperationSerializer(data=request.data, many=True)
189 serializer.is_valid(raise_exception=True)
190 qs = self.get_queryset().filter(
191 pk__in=[o['id'] for o in serializer.data]
192 )
193
194 self.perform_bulk_destroy(qs)
195
196 return Response(status=status.HTTP_204_NO_CONTENT)
197
198 def perform_bulk_destroy(self, objects):
199 with transaction.atomic():
200 for obj in objects:
201 if hasattr(obj, 'snapshot'):
202 obj.snapshot()
203 self.perform_destroy(obj)
204
205
206 class ObjectValidationMixin:
207
208 def _validate_objects(self, instance):
209 """
210 Check that the provided instance or list of instances are matched by the current queryset. This confirms that
211 any newly created or modified objects abide by the attributes granted by any applicable ObjectPermissions.
212 """
213 if type(instance) is list:
214 # Check that all instances are still included in the view's queryset
215 conforming_count = self.queryset.filter(pk__in=[obj.pk for obj in instance]).count()
216 if conforming_count != len(instance):
217 raise ObjectDoesNotExist
218 elif not self.queryset.filter(pk=instance.pk).exists():
219 raise ObjectDoesNotExist
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/netbox/netbox/api/viewsets/mixins.py b/netbox/netbox/api/viewsets/mixins.py
--- a/netbox/netbox/api/viewsets/mixins.py
+++ b/netbox/netbox/api/viewsets/mixins.py
@@ -56,8 +56,15 @@
def get_queryset(self):
qs = super().get_queryset()
- # If using brief mode, clear all prefetches from the queryset and append only brief_prefetch_fields (if any)
if self.brief:
+ serializer_class = self.get_serializer_class()
+
+ # Clear any annotations for fields not present on the nested serializer
+ for annotation in list(qs.query.annotations.keys()):
+ if annotation not in serializer_class().fields:
+ qs.query.annotations.pop(annotation)
+
+ # Clear any prefetches from the queryset and append only brief_prefetch_fields (if any)
return qs.prefetch_related(None).prefetch_related(*self.brief_prefetch_fields)
return qs
|
{"golden_diff": "diff --git a/netbox/netbox/api/viewsets/mixins.py b/netbox/netbox/api/viewsets/mixins.py\n--- a/netbox/netbox/api/viewsets/mixins.py\n+++ b/netbox/netbox/api/viewsets/mixins.py\n@@ -56,8 +56,15 @@\n def get_queryset(self):\n qs = super().get_queryset()\n \n- # If using brief mode, clear all prefetches from the queryset and append only brief_prefetch_fields (if any)\n if self.brief:\n+ serializer_class = self.get_serializer_class()\n+\n+ # Clear any annotations for fields not present on the nested serializer\n+ for annotation in list(qs.query.annotations.keys()):\n+ if annotation not in serializer_class().fields:\n+ qs.query.annotations.pop(annotation)\n+\n+ # Clear any prefetches from the queryset and append only brief_prefetch_fields (if any)\n return qs.prefetch_related(None).prefetch_related(*self.brief_prefetch_fields)\n \n return qs\n", "issue": "Poor performances on create/edit tenant field with high number of tenants \n### Deployment Type\r\n\r\nSelf-hosted\r\n\r\n### NetBox Version\r\n\r\nv3.6.6\r\n\r\n### Python Version\r\n\r\n3.11\r\n\r\n### Steps to Reproduce\r\n\r\n* Create around 85k tenant in a single tenant group\r\n* Create around 150k prefixes linked with differents tenants\r\n* Create some sites, linked with differents tenants\r\n* Then edit one of the prefix, and click on tenant selector.\r\n\r\n### Expected Behavior\r\n\r\nMaybe one second or less should be a good request timer\r\nOr maybe the searchable select should be async to allow user to enter characters while the first request is running in background.\r\n\r\n### Observed Behavior\r\n\r\nWith a postgres 16 database, and 16GB of ram and 4 cpus, It will take 3.5 seconds to respond and give the user the opportunity to enter some search characters.\r\n\r\nAs far i can go through the debug process, i can see theses kinds of SQL request with long running timer ( >3s):\r\n```sql\r\nSELECT \r\n DISTINCT \"tenancy_tenant\".\"id\", \r\n \"tenancy_tenant\".\"created\", \r\n \"tenancy_tenant\".\"last_updated\", \r\n \"tenancy_tenant\".\"custom_field_data\", \r\n \"tenancy_tenant\".\"description\", \r\n \"tenancy_tenant\".\"comments\", \r\n \"tenancy_tenant\".\"name\", \r\n \"tenancy_tenant\".\"slug\", \r\n \"tenancy_tenant\".\"group_id\", \r\n COALESCE(\r\n (\r\n SELECT \r\n COUNT(*) AS \"c\" \r\n FROM \r\n \"circuits_circuit\" U0 \r\n WHERE \r\n U0.\"tenant_id\" = (\"tenancy_tenant\".\"id\") \r\n GROUP BY \r\n U0.\"tenant_id\"\r\n ), \r\n 0\r\n ) AS \"circuit_count\", \r\n COALESCE(\r\n (\r\n SELECT \r\n COUNT(*) AS \"c\" \r\n FROM \r\n \"dcim_device\" U0 \r\n WHERE \r\n U0.\"tenant_id\" = (\"tenancy_tenant\".\"id\") \r\n GROUP BY \r\n U0.\"tenant_id\"\r\n ), \r\n 0\r\n ) AS \"device_count\", \r\n COALESCE(\r\n (\r\n SELECT \r\n COUNT(*) AS \"c\" \r\n FROM \r\n \"ipam_ipaddress\" U0 \r\n WHERE \r\n U0.\"tenant_id\" = (\"tenancy_tenant\".\"id\") \r\n GROUP BY \r\n U0.\"tenant_id\"\r\n ), \r\n 0\r\n ) AS \"ipaddress_count\", \r\n COALESCE(\r\n (\r\n SELECT \r\n COUNT(*) AS \"c\" \r\n FROM \r\n \"ipam_prefix\" U0 \r\n WHERE \r\n U0.\"tenant_id\" = (\"tenancy_tenant\".\"id\") \r\n GROUP BY \r\n U0.\"tenant_id\"\r\n ), \r\n 0\r\n ) AS \"prefix_count\", \r\n COALESCE(\r\n (\r\n SELECT \r\n COUNT(*) AS \"c\" \r\n FROM \r\n \"dcim_rack\" U0 \r\n WHERE \r\n U0.\"tenant_id\" = (\"tenancy_tenant\".\"id\") \r\n GROUP BY \r\n U0.\"tenant_id\"\r\n ), \r\n 0\r\n ) AS \"rack_count\", \r\n COALESCE(\r\n (\r\n SELECT \r\n COUNT(*) AS \"c\" \r\n FROM \r\n \"dcim_site\" U0 \r\n WHERE \r\n U0.\"tenant_id\" = 
(\"tenancy_tenant\".\"id\") \r\n GROUP BY \r\n U0.\"tenant_id\"\r\n ), \r\n 0\r\n ) AS \"site_count\", \r\n COALESCE(\r\n (\r\n SELECT \r\n COUNT(*) AS \"c\" \r\n FROM \r\n \"virtualization_virtualmachine\" U0 \r\n WHERE \r\n U0.\"tenant_id\" = (\"tenancy_tenant\".\"id\") \r\n GROUP BY \r\n U0.\"tenant_id\"\r\n ), \r\n 0\r\n ) AS \"virtualmachine_count\", \r\n COALESCE(\r\n (\r\n SELECT \r\n COUNT(*) AS \"c\" \r\n FROM \r\n \"ipam_vlan\" U0 \r\n WHERE \r\n U0.\"tenant_id\" = (\"tenancy_tenant\".\"id\") \r\n GROUP BY \r\n U0.\"tenant_id\"\r\n ), \r\n 0\r\n ) AS \"vlan_count\", \r\n COALESCE(\r\n (\r\n SELECT \r\n COUNT(*) AS \"c\" \r\n FROM \r\n \"ipam_vrf\" U0 \r\n WHERE \r\n U0.\"tenant_id\" = (\"tenancy_tenant\".\"id\") \r\n GROUP BY \r\n U0.\"tenant_id\"\r\n ), \r\n 0\r\n ) AS \"vrf_count\", \r\n COALESCE(\r\n (\r\n SELECT \r\n COUNT(*) AS \"c\" \r\n FROM \r\n \"virtualization_cluster\" U0 \r\n WHERE \r\n U0.\"tenant_id\" = (\"tenancy_tenant\".\"id\") \r\n GROUP BY \r\n U0.\"tenant_id\"\r\n ), \r\n 0\r\n ) AS \"cluster_count\" \r\nFROM \r\n \"tenancy_tenant\" \r\nWHERE \r\n \"tenancy_tenant\".\"group_id\" IN (\r\n SELECT \r\n U0.\"id\" \r\n FROM \r\n \"tenancy_tenantgroup\" U0 \r\n WHERE \r\n U0.\"id\" = 4\r\n ) \r\nORDER BY \r\n \"tenancy_tenant\".\"name\" ASC \r\nLIMIT \r\n 50\r\n```\r\n\r\nTheses request, for each kind of related objects making a count of them. So with 85k tenant, each count is executed 85k times.\r\nFrom my point of view, to display a dynamic selector with customer display field, it's not necessary to count every objects.\r\n\r\nTo be more specific on the pgsql server, the request is executed on RAM buffered datas. No hit on system SSD. The cache usage is around 30% and hit objects.\r\n\r\nWe already have a discussion about it on slack: https://netdev-community.slack.com/archives/C01P0FRSXRV/p1702476390699559\n", "before_files": [{"content": "import logging\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db import transaction\nfrom django.http import Http404\nfrom rest_framework import status\nfrom rest_framework.response import Response\n\nfrom extras.models import ExportTemplate\nfrom netbox.api.exceptions import SerializerNotFound\nfrom netbox.api.serializers import BulkOperationSerializer\nfrom netbox.constants import NESTED_SERIALIZER_PREFIX\nfrom utilities.api import get_serializer_for_model\n\n__all__ = (\n 'BriefModeMixin',\n 'BulkDestroyModelMixin',\n 'BulkUpdateModelMixin',\n 'CustomFieldsMixin',\n 'ExportTemplatesMixin',\n 'ObjectValidationMixin',\n 'SequentialBulkCreatesMixin',\n)\n\n\nclass BriefModeMixin:\n \"\"\"\n Enables brief mode support, so that the client can invoke a model's nested serializer by passing e.g.\n GET /api/dcim/sites/?brief=True\n \"\"\"\n brief = False\n brief_prefetch_fields = []\n\n def initialize_request(self, request, *args, **kwargs):\n # Annotate whether brief mode is active\n self.brief = request.method == 'GET' and request.GET.get('brief')\n\n return super().initialize_request(request, *args, **kwargs)\n\n def get_serializer_class(self):\n logger = logging.getLogger(f'netbox.api.views.{self.__class__.__name__}')\n\n # If using 'brief' mode, find and return the nested serializer for this model, if one exists\n if self.brief:\n logger.debug(\"Request is for 'brief' format; initializing nested serializer\")\n try:\n return get_serializer_for_model(self.queryset.model, prefix=NESTED_SERIALIZER_PREFIX)\n except SerializerNotFound:\n 
logger.debug(\n f\"Nested serializer for {self.queryset.model} not found! Using serializer {self.serializer_class}\"\n )\n\n return self.serializer_class\n\n def get_queryset(self):\n qs = super().get_queryset()\n\n # If using brief mode, clear all prefetches from the queryset and append only brief_prefetch_fields (if any)\n if self.brief:\n return qs.prefetch_related(None).prefetch_related(*self.brief_prefetch_fields)\n\n return qs\n\n\nclass CustomFieldsMixin:\n \"\"\"\n For models which support custom fields, populate the `custom_fields` context.\n \"\"\"\n def get_serializer_context(self):\n context = super().get_serializer_context()\n\n if hasattr(self.queryset.model, 'custom_fields'):\n content_type = ContentType.objects.get_for_model(self.queryset.model)\n context.update({\n 'custom_fields': content_type.custom_fields.all(),\n })\n\n return context\n\n\nclass ExportTemplatesMixin:\n \"\"\"\n Enable ExportTemplate support for list views.\n \"\"\"\n def list(self, request, *args, **kwargs):\n if 'export' in request.GET:\n content_type = ContentType.objects.get_for_model(self.get_serializer_class().Meta.model)\n et = ExportTemplate.objects.filter(content_types=content_type, name=request.GET['export']).first()\n if et is None:\n raise Http404\n queryset = self.filter_queryset(self.get_queryset())\n return et.render_to_response(queryset)\n\n return super().list(request, *args, **kwargs)\n\n\nclass SequentialBulkCreatesMixin:\n \"\"\"\n Perform bulk creation of new objects sequentially, rather than all at once. This ensures that any validation\n which depends on the evaluation of existing objects (such as checking for free space within a rack) functions\n appropriately.\n \"\"\"\n @transaction.atomic\n def create(self, request, *args, **kwargs):\n if not isinstance(request.data, list):\n # Creating a single object\n return super().create(request, *args, **kwargs)\n\n return_data = []\n for data in request.data:\n serializer = self.get_serializer(data=data)\n serializer.is_valid(raise_exception=True)\n self.perform_create(serializer)\n return_data.append(serializer.data)\n\n headers = self.get_success_headers(serializer.data)\n\n return Response(return_data, status=status.HTTP_201_CREATED, headers=headers)\n\n\nclass BulkUpdateModelMixin:\n \"\"\"\n Support bulk modification of objects using the list endpoint for a model. 
Accepts a PATCH action with a list of one\n or more JSON objects, each specifying the numeric ID of an object to be updated as well as the attributes to be set.\n For example:\n\n PATCH /api/dcim/sites/\n [\n {\n \"id\": 123,\n \"name\": \"New name\"\n },\n {\n \"id\": 456,\n \"status\": \"planned\"\n }\n ]\n \"\"\"\n def bulk_update(self, request, *args, **kwargs):\n partial = kwargs.pop('partial', False)\n serializer = BulkOperationSerializer(data=request.data, many=True)\n serializer.is_valid(raise_exception=True)\n qs = self.get_queryset().filter(\n pk__in=[o['id'] for o in serializer.data]\n )\n\n # Map update data by object ID\n update_data = {\n obj.pop('id'): obj for obj in request.data\n }\n\n data = self.perform_bulk_update(qs, update_data, partial=partial)\n\n return Response(data, status=status.HTTP_200_OK)\n\n def perform_bulk_update(self, objects, update_data, partial):\n with transaction.atomic():\n data_list = []\n for obj in objects:\n data = update_data.get(obj.id)\n if hasattr(obj, 'snapshot'):\n obj.snapshot()\n serializer = self.get_serializer(obj, data=data, partial=partial)\n serializer.is_valid(raise_exception=True)\n self.perform_update(serializer)\n data_list.append(serializer.data)\n\n return data_list\n\n def bulk_partial_update(self, request, *args, **kwargs):\n kwargs['partial'] = True\n return self.bulk_update(request, *args, **kwargs)\n\n\nclass BulkDestroyModelMixin:\n \"\"\"\n Support bulk deletion of objects using the list endpoint for a model. Accepts a DELETE action with a list of one\n or more JSON objects, each specifying the numeric ID of an object to be deleted. For example:\n\n DELETE /api/dcim/sites/\n [\n {\"id\": 123},\n {\"id\": 456}\n ]\n \"\"\"\n def bulk_destroy(self, request, *args, **kwargs):\n serializer = BulkOperationSerializer(data=request.data, many=True)\n serializer.is_valid(raise_exception=True)\n qs = self.get_queryset().filter(\n pk__in=[o['id'] for o in serializer.data]\n )\n\n self.perform_bulk_destroy(qs)\n\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n def perform_bulk_destroy(self, objects):\n with transaction.atomic():\n for obj in objects:\n if hasattr(obj, 'snapshot'):\n obj.snapshot()\n self.perform_destroy(obj)\n\n\nclass ObjectValidationMixin:\n\n def _validate_objects(self, instance):\n \"\"\"\n Check that the provided instance or list of instances are matched by the current queryset. 
This confirms that\n any newly created or modified objects abide by the attributes granted by any applicable ObjectPermissions.\n \"\"\"\n if type(instance) is list:\n # Check that all instances are still included in the view's queryset\n conforming_count = self.queryset.filter(pk__in=[obj.pk for obj in instance]).count()\n if conforming_count != len(instance):\n raise ObjectDoesNotExist\n elif not self.queryset.filter(pk=instance.pk).exists():\n raise ObjectDoesNotExist\n", "path": "netbox/netbox/api/viewsets/mixins.py"}], "after_files": [{"content": "import logging\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db import transaction\nfrom django.http import Http404\nfrom rest_framework import status\nfrom rest_framework.response import Response\n\nfrom extras.models import ExportTemplate\nfrom netbox.api.exceptions import SerializerNotFound\nfrom netbox.api.serializers import BulkOperationSerializer\nfrom netbox.constants import NESTED_SERIALIZER_PREFIX\nfrom utilities.api import get_serializer_for_model\n\n__all__ = (\n 'BriefModeMixin',\n 'BulkDestroyModelMixin',\n 'BulkUpdateModelMixin',\n 'CustomFieldsMixin',\n 'ExportTemplatesMixin',\n 'ObjectValidationMixin',\n 'SequentialBulkCreatesMixin',\n)\n\n\nclass BriefModeMixin:\n \"\"\"\n Enables brief mode support, so that the client can invoke a model's nested serializer by passing e.g.\n GET /api/dcim/sites/?brief=True\n \"\"\"\n brief = False\n brief_prefetch_fields = []\n\n def initialize_request(self, request, *args, **kwargs):\n # Annotate whether brief mode is active\n self.brief = request.method == 'GET' and request.GET.get('brief')\n\n return super().initialize_request(request, *args, **kwargs)\n\n def get_serializer_class(self):\n logger = logging.getLogger(f'netbox.api.views.{self.__class__.__name__}')\n\n # If using 'brief' mode, find and return the nested serializer for this model, if one exists\n if self.brief:\n logger.debug(\"Request is for 'brief' format; initializing nested serializer\")\n try:\n return get_serializer_for_model(self.queryset.model, prefix=NESTED_SERIALIZER_PREFIX)\n except SerializerNotFound:\n logger.debug(\n f\"Nested serializer for {self.queryset.model} not found! 
Using serializer {self.serializer_class}\"\n )\n\n return self.serializer_class\n\n def get_queryset(self):\n qs = super().get_queryset()\n\n if self.brief:\n serializer_class = self.get_serializer_class()\n\n # Clear any annotations for fields not present on the nested serializer\n for annotation in list(qs.query.annotations.keys()):\n if annotation not in serializer_class().fields:\n qs.query.annotations.pop(annotation)\n\n # Clear any prefetches from the queryset and append only brief_prefetch_fields (if any)\n return qs.prefetch_related(None).prefetch_related(*self.brief_prefetch_fields)\n\n return qs\n\n\nclass CustomFieldsMixin:\n \"\"\"\n For models which support custom fields, populate the `custom_fields` context.\n \"\"\"\n def get_serializer_context(self):\n context = super().get_serializer_context()\n\n if hasattr(self.queryset.model, 'custom_fields'):\n content_type = ContentType.objects.get_for_model(self.queryset.model)\n context.update({\n 'custom_fields': content_type.custom_fields.all(),\n })\n\n return context\n\n\nclass ExportTemplatesMixin:\n \"\"\"\n Enable ExportTemplate support for list views.\n \"\"\"\n def list(self, request, *args, **kwargs):\n if 'export' in request.GET:\n content_type = ContentType.objects.get_for_model(self.get_serializer_class().Meta.model)\n et = ExportTemplate.objects.filter(content_types=content_type, name=request.GET['export']).first()\n if et is None:\n raise Http404\n queryset = self.filter_queryset(self.get_queryset())\n return et.render_to_response(queryset)\n\n return super().list(request, *args, **kwargs)\n\n\nclass SequentialBulkCreatesMixin:\n \"\"\"\n Perform bulk creation of new objects sequentially, rather than all at once. This ensures that any validation\n which depends on the evaluation of existing objects (such as checking for free space within a rack) functions\n appropriately.\n \"\"\"\n @transaction.atomic\n def create(self, request, *args, **kwargs):\n if not isinstance(request.data, list):\n # Creating a single object\n return super().create(request, *args, **kwargs)\n\n return_data = []\n for data in request.data:\n serializer = self.get_serializer(data=data)\n serializer.is_valid(raise_exception=True)\n self.perform_create(serializer)\n return_data.append(serializer.data)\n\n headers = self.get_success_headers(serializer.data)\n\n return Response(return_data, status=status.HTTP_201_CREATED, headers=headers)\n\n\nclass BulkUpdateModelMixin:\n \"\"\"\n Support bulk modification of objects using the list endpoint for a model. 
Accepts a PATCH action with a list of one\n or more JSON objects, each specifying the numeric ID of an object to be updated as well as the attributes to be set.\n For example:\n\n PATCH /api/dcim/sites/\n [\n {\n \"id\": 123,\n \"name\": \"New name\"\n },\n {\n \"id\": 456,\n \"status\": \"planned\"\n }\n ]\n \"\"\"\n def bulk_update(self, request, *args, **kwargs):\n partial = kwargs.pop('partial', False)\n serializer = BulkOperationSerializer(data=request.data, many=True)\n serializer.is_valid(raise_exception=True)\n qs = self.get_queryset().filter(\n pk__in=[o['id'] for o in serializer.data]\n )\n\n # Map update data by object ID\n update_data = {\n obj.pop('id'): obj for obj in request.data\n }\n\n data = self.perform_bulk_update(qs, update_data, partial=partial)\n\n return Response(data, status=status.HTTP_200_OK)\n\n def perform_bulk_update(self, objects, update_data, partial):\n with transaction.atomic():\n data_list = []\n for obj in objects:\n data = update_data.get(obj.id)\n if hasattr(obj, 'snapshot'):\n obj.snapshot()\n serializer = self.get_serializer(obj, data=data, partial=partial)\n serializer.is_valid(raise_exception=True)\n self.perform_update(serializer)\n data_list.append(serializer.data)\n\n return data_list\n\n def bulk_partial_update(self, request, *args, **kwargs):\n kwargs['partial'] = True\n return self.bulk_update(request, *args, **kwargs)\n\n\nclass BulkDestroyModelMixin:\n \"\"\"\n Support bulk deletion of objects using the list endpoint for a model. Accepts a DELETE action with a list of one\n or more JSON objects, each specifying the numeric ID of an object to be deleted. For example:\n\n DELETE /api/dcim/sites/\n [\n {\"id\": 123},\n {\"id\": 456}\n ]\n \"\"\"\n def bulk_destroy(self, request, *args, **kwargs):\n serializer = BulkOperationSerializer(data=request.data, many=True)\n serializer.is_valid(raise_exception=True)\n qs = self.get_queryset().filter(\n pk__in=[o['id'] for o in serializer.data]\n )\n\n self.perform_bulk_destroy(qs)\n\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n def perform_bulk_destroy(self, objects):\n with transaction.atomic():\n for obj in objects:\n if hasattr(obj, 'snapshot'):\n obj.snapshot()\n self.perform_destroy(obj)\n\n\nclass ObjectValidationMixin:\n\n def _validate_objects(self, instance):\n \"\"\"\n Check that the provided instance or list of instances are matched by the current queryset. This confirms that\n any newly created or modified objects abide by the attributes granted by any applicable ObjectPermissions.\n \"\"\"\n if type(instance) is list:\n # Check that all instances are still included in the view's queryset\n conforming_count = self.queryset.filter(pk__in=[obj.pk for obj in instance]).count()\n if conforming_count != len(instance):\n raise ObjectDoesNotExist\n elif not self.queryset.filter(pk=instance.pk).exists():\n raise ObjectDoesNotExist\n", "path": "netbox/netbox/api/viewsets/mixins.py"}]}
| 3,747 | 219 |
gh_patches_debug_2595
|
rasdani/github-patches
|
git_diff
|
Qiskit__qiskit-6171
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
num_qubits() for DictStateFn is inefficient
To get the number of qubits, a list of all keys in the dictionary is constructed. But only the length of the first key is used. Constructing the entire list is wasteful.
https://github.com/Qiskit/qiskit-terra/blob/c3b2d7acb80fa89043e6f38efb501275ec296616/qiskit/opflow/state_fns/dict_state_fn.py#L82
This code should work:
```python
len(next(iter(self.primitive)))
```
`%timeit` shows that the latter is faster even when the dict contains only two keys.
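A minimal self-contained comparison along those lines (the dictionary contents and iteration count here are invented; absolute timings vary by machine):
```python
import timeit

# A two-key dict standing in for DictStateFn.primitive in the issue's scenario.
primitive = {"00": 0.5 + 0j, "11": 0.5 + 0j}

# Current approach: materializes the full key list just to index element 0.
t_list = timeit.timeit(lambda: len(list(primitive.keys())[0]), number=1_000_000)

# Proposed approach: pulls only the first key from the dict's iterator.
t_iter = timeit.timeit(lambda: len(next(iter(primitive))), number=1_000_000)

print(f"list(keys())[0]: {t_list:.3f}s   next(iter(...)): {t_iter:.3f}s")
```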
- **Qiskit Terra version**: 123d829ac, Feb 3 master
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/opflow/state_fns/dict_state_fn.py`
Content:
```
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2020, 2021.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """ DictStateFn Class """
14
15 import itertools
16 from typing import Dict, List, Optional, Set, Union, cast
17
18 import numpy as np
19 from scipy import sparse
20
21 from qiskit.circuit import ParameterExpression
22 from qiskit.opflow.exceptions import OpflowError
23 from qiskit.opflow.list_ops.list_op import ListOp
24 from qiskit.opflow.operator_base import OperatorBase
25 from qiskit.opflow.state_fns.state_fn import StateFn
26 from qiskit.opflow.state_fns.vector_state_fn import VectorStateFn
27 from qiskit.quantum_info import Statevector
28 from qiskit.result import Result
29 from qiskit.utils import algorithm_globals
30
31
32 class DictStateFn(StateFn):
33 """ A class for state functions and measurements which are defined by a lookup table,
34 stored in a dict.
35 """
36 primitive: Dict[str, complex]
37
38 # TODO allow normalization somehow?
39 def __init__(self,
40 primitive: Union[str, dict, Result] = None,
41 coeff: Union[complex, ParameterExpression] = 1.0,
42 is_measurement: bool = False) -> None:
43 """
44 Args:
45 primitive: The dict, single bitstring (if defining a basis sate), or Qiskit
46 Result, which defines the behavior of the underlying function.
47 coeff: A coefficient by which to multiply the state function.
48 is_measurement: Whether the StateFn is a measurement operator.
49
50 Raises:
51 TypeError: invalid parameters.
52 """
53 # If the initial density is a string, treat this as a density dict
54 # with only a single basis state.
55 if isinstance(primitive, str):
56 primitive = {primitive: 1}
57
58 # NOTE:
59 # 1) This is not the same as passing in the counts dict directly, as this will
60 # convert the shot numbers to
61 # probabilities, whereas passing in the counts dict will not.
62 # 2) This will extract counts for both shot and statevector simulations.
63 # To use the statevector,
64 # simply pass in the statevector.
65 # 3) This will only extract the first result.
66 if isinstance(primitive, Result):
67 counts = primitive.get_counts()
68 # NOTE: Need to square root to take correct Pauli measurements!
69 primitive = {bstr: (shots / sum(counts.values()))**.5 for
70 (bstr, shots) in counts.items()}
71
72 if not isinstance(primitive, dict):
73 raise TypeError(
74 'DictStateFn can only be instantiated with dict, '
75 'string, or Qiskit Result, not {}'.format(type(primitive)))
76
77 super().__init__(primitive, coeff=coeff, is_measurement=is_measurement)
78
79 def primitive_strings(self) -> Set[str]:
80 return {'Dict'}
81
82 @property
83 def num_qubits(self) -> int:
84 return len(list(self.primitive.keys())[0])
85
86 def add(self, other: OperatorBase) -> OperatorBase:
87 if not self.num_qubits == other.num_qubits:
88 raise ValueError(
89 'Sum over statefns with different numbers of qubits, {} and {}, is not well '
90 'defined'.format(self.num_qubits, other.num_qubits))
91
92 # Right now doesn't make sense to add a StateFn to a Measurement
93 if isinstance(other, DictStateFn) and self.is_measurement == other.is_measurement:
94 # TODO add compatibility with vector and Operator?
95 if self.primitive == other.primitive:
96 return DictStateFn(self.primitive,
97 coeff=self.coeff + other.coeff,
98 is_measurement=self.is_measurement)
99 else:
100 new_dict = {b: (v * self.coeff) + (other.primitive.get(b, 0) * other.coeff)
101 for (b, v) in self.primitive.items()}
102 new_dict.update({b: v * other.coeff for (b, v) in other.primitive.items()
103 if b not in self.primitive})
104 return DictStateFn(new_dict, is_measurement=self._is_measurement)
105 # pylint: disable=cyclic-import
106 from ..list_ops.summed_op import SummedOp
107 return SummedOp([self, other])
108
109 def adjoint(self) -> "DictStateFn":
110 return DictStateFn({b: np.conj(v) for (b, v) in self.primitive.items()},
111 coeff=self.coeff.conjugate(),
112 is_measurement=(not self.is_measurement))
113
114 def permute(self, permutation: List[int]) -> 'DictStateFn':
115 new_num_qubits = max(permutation) + 1
116 if self.num_qubits != len(permutation):
117 raise OpflowError("New index must be defined for each qubit of the operator.")
118
119 # helper function to permute the key
120 def perm(key):
121 list_key = ['0'] * new_num_qubits
122 for i, k in enumerate(permutation):
123 list_key[k] = key[i]
124 return ''.join(list_key)
125
126 new_dict = {perm(key): value for key, value in self.primitive.items()}
127 return DictStateFn(new_dict, coeff=self.coeff, is_measurement=self.is_measurement)
128
129 def _expand_dim(self, num_qubits: int) -> 'DictStateFn':
130 pad = '0'*num_qubits
131 new_dict = {key + pad: value for key, value in self.primitive.items()}
132 return DictStateFn(new_dict, coeff=self.coeff, is_measurement=self.is_measurement)
133
134 def tensor(self, other: OperatorBase) -> OperatorBase:
135 # Both dicts
136 if isinstance(other, DictStateFn):
137 new_dict = {k1 + k2: v1 * v2 for ((k1, v1,), (k2, v2)) in
138 itertools.product(self.primitive.items(), other.primitive.items())}
139 return StateFn(new_dict,
140 coeff=self.coeff * other.coeff,
141 is_measurement=self.is_measurement)
142 # pylint: disable=cyclic-import
143 from ..list_ops.tensored_op import TensoredOp
144 return TensoredOp([self, other])
145
146 def to_density_matrix(self, massive: bool = False) -> np.ndarray:
147 OperatorBase._check_massive('to_density_matrix', True, self.num_qubits, massive)
148 states = int(2 ** self.num_qubits)
149 return self.to_matrix(massive=massive) * np.eye(states) * self.coeff
150
151 def to_matrix(self, massive: bool = False) -> np.ndarray:
152 OperatorBase._check_massive('to_matrix', False, self.num_qubits, massive)
153 states = int(2 ** self.num_qubits)
154 probs = np.zeros(states) + 0.j
155 for k, v in self.primitive.items():
156 probs[int(k, 2)] = v
157 vec = probs * self.coeff
158
159 # Reshape for measurements so np.dot still works for composition.
160 return vec if not self.is_measurement else vec.reshape(1, -1)
161
162 def to_spmatrix(self) -> sparse.spmatrix:
163 """Same as to_matrix, but returns csr sparse matrix.
164
165 Returns:
166 CSR sparse matrix representation of the State function.
167
168 Raises:
169 ValueError: invalid parameters.
170 """
171
172 indices = [int(v, 2) for v in self.primitive.keys()]
173 vals = np.array(list(self.primitive.values())) * self.coeff
174 spvec = sparse.csr_matrix((vals, (np.zeros(len(indices), dtype=int), indices)),
175 shape=(1, 2**self.num_qubits))
176 return spvec if not self.is_measurement else spvec.transpose()
177
178 def to_spmatrix_op(self) -> OperatorBase:
179 """Convert this state function to a ``SparseVectorStateFn``."""
180 from .sparse_vector_state_fn import SparseVectorStateFn
181 return SparseVectorStateFn(self.to_spmatrix(), self.coeff, self.is_measurement)
182
183 def to_circuit_op(self) -> OperatorBase:
184 """Convert this state function to a ``CircuitStateFn``."""
185 from .circuit_state_fn import CircuitStateFn
186 csfn = CircuitStateFn.from_dict(self.primitive) * self.coeff
187 return csfn.adjoint() if self.is_measurement else csfn
188
189 def __str__(self) -> str:
190 prim_str = str(self.primitive)
191 if self.coeff == 1.0:
192 return "{}({})".format('DictStateFn' if not self.is_measurement
193 else 'DictMeasurement', prim_str)
194 else:
195 return "{}({}) * {}".format('DictStateFn' if not self.is_measurement
196 else 'DictMeasurement',
197 prim_str,
198 self.coeff)
199
200 # pylint: disable=too-many-return-statements
201 def eval(
202 self,
203 front: Optional[
204 Union[str, Dict[str, complex], np.ndarray, OperatorBase, Statevector]
205 ] = None,
206 ) -> Union[OperatorBase, complex]:
207 if front is None:
208 sparse_vector_state_fn = self.to_spmatrix_op().eval()
209 return sparse_vector_state_fn
210
211 if not self.is_measurement and isinstance(front, OperatorBase):
212 raise ValueError(
213 'Cannot compute overlap with StateFn or Operator if not Measurement. Try taking '
214 'sf.adjoint() first to convert to measurement.')
215
216 if isinstance(front, ListOp) and front.distributive:
217 return front.combo_fn([self.eval(front.coeff * front_elem)
218 for front_elem in front.oplist])
219
220 # For now, always do this. If it's not performant, we can be more granular.
221 if not isinstance(front, OperatorBase):
222 front = StateFn(front)
223
224 # pylint: disable=cyclic-import
225 from ..operator_globals import EVAL_SIG_DIGITS
226
227 # If the primitive is a lookup of bitstrings,
228 # we define all missing strings to have a function value of
229 # zero.
230 if isinstance(front, DictStateFn):
231 return np.round(
232 cast(float, sum([v * front.primitive.get(b, 0) for (b, v) in
233 self.primitive.items()]) * self.coeff * front.coeff),
234 decimals=EVAL_SIG_DIGITS)
235
236 # All remaining possibilities only apply when self.is_measurement is True
237
238 if isinstance(front, VectorStateFn):
239 # TODO does it need to be this way for measurement?
240 # return sum([v * front.primitive.data[int(b, 2)] *
241 # np.conj(front.primitive.data[int(b, 2)])
242 return np.round(
243 cast(float, sum([v * front.primitive.data[int(b, 2)] for (b, v) in
244 self.primitive.items()]) * self.coeff),
245 decimals=EVAL_SIG_DIGITS)
246
247 from .circuit_state_fn import CircuitStateFn
248 if isinstance(front, CircuitStateFn):
249 # Don't reimplement logic from CircuitStateFn
250 self_adjoint = cast(DictStateFn, self.adjoint())
251 return np.conj(front.adjoint().eval(self_adjoint.primitive)) * self.coeff
252
253 from .operator_state_fn import OperatorStateFn
254 if isinstance(front, OperatorStateFn):
255 return cast(Union[OperatorBase, float, complex], front.adjoint().eval(self.adjoint()))
256
257 # All other OperatorBases go here
258 self_adjoint = cast(DictStateFn, self.adjoint())
259 adjointed_eval = cast(OperatorBase, front.adjoint().eval(self_adjoint.primitive))
260 return adjointed_eval.adjoint() * self.coeff
261
262 def sample(self,
263 shots: int = 1024,
264 massive: bool = False,
265 reverse_endianness: bool = False) -> Dict[str, float]:
266 probs = np.square(np.abs(np.array(list(self.primitive.values()))))
267 unique, counts = np.unique(algorithm_globals.random.choice(list(self.primitive.keys()),
268 size=shots,
269 p=(probs / sum(probs))),
270 return_counts=True)
271 counts = dict(zip(unique, counts))
272 if reverse_endianness:
273 scaled_dict = {bstr[::-1]: (prob / shots) for (bstr, prob) in counts.items()}
274 else:
275 scaled_dict = {bstr: (prob / shots) for (bstr, prob) in counts.items()}
276 return dict(sorted(scaled_dict.items(), key=lambda x: x[1], reverse=True))
277
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qiskit/opflow/state_fns/dict_state_fn.py b/qiskit/opflow/state_fns/dict_state_fn.py
--- a/qiskit/opflow/state_fns/dict_state_fn.py
+++ b/qiskit/opflow/state_fns/dict_state_fn.py
@@ -81,7 +81,7 @@
@property
def num_qubits(self) -> int:
- return len(list(self.primitive.keys())[0])
+ return len(next(iter(self.primitive)))
def add(self, other: OperatorBase) -> OperatorBase:
if not self.num_qubits == other.num_qubits:
|
{"golden_diff": "diff --git a/qiskit/opflow/state_fns/dict_state_fn.py b/qiskit/opflow/state_fns/dict_state_fn.py\n--- a/qiskit/opflow/state_fns/dict_state_fn.py\n+++ b/qiskit/opflow/state_fns/dict_state_fn.py\n@@ -81,7 +81,7 @@\n \n @property\n def num_qubits(self) -> int:\n- return len(list(self.primitive.keys())[0])\n+ return len(next(iter(self.primitive)))\n \n def add(self, other: OperatorBase) -> OperatorBase:\n if not self.num_qubits == other.num_qubits:\n", "issue": "num_qubits() for DictStateFn is inefficient\nTo get the number of qubits, a list of all keys in the dictionary is constructed. But, only the length of the first key is used. Constructing the entire list is wasteful.\r\n\r\nhttps://github.com/Qiskit/qiskit-terra/blob/c3b2d7acb80fa89043e6f38efb501275ec296616/qiskit/opflow/state_fns/dict_state_fn.py#L82\r\n\r\nThis code should work:\r\n```python\r\nlen(next(iter(self.primitive)))\r\n```\r\n\r\n`%timeit` shows that the latter is faster even when the dict contains only two keys.\r\n\r\n- **Qiskit Terra version**: 123d829ac, Feb 3 master\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2020, 2021.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\" DictStateFn Class \"\"\"\n\nimport itertools\nfrom typing import Dict, List, Optional, Set, Union, cast\n\nimport numpy as np\nfrom scipy import sparse\n\nfrom qiskit.circuit import ParameterExpression\nfrom qiskit.opflow.exceptions import OpflowError\nfrom qiskit.opflow.list_ops.list_op import ListOp\nfrom qiskit.opflow.operator_base import OperatorBase\nfrom qiskit.opflow.state_fns.state_fn import StateFn\nfrom qiskit.opflow.state_fns.vector_state_fn import VectorStateFn\nfrom qiskit.quantum_info import Statevector\nfrom qiskit.result import Result\nfrom qiskit.utils import algorithm_globals\n\n\nclass DictStateFn(StateFn):\n \"\"\" A class for state functions and measurements which are defined by a lookup table,\n stored in a dict.\n \"\"\"\n primitive: Dict[str, complex]\n\n # TODO allow normalization somehow?\n def __init__(self,\n primitive: Union[str, dict, Result] = None,\n coeff: Union[complex, ParameterExpression] = 1.0,\n is_measurement: bool = False) -> None:\n \"\"\"\n Args:\n primitive: The dict, single bitstring (if defining a basis sate), or Qiskit\n Result, which defines the behavior of the underlying function.\n coeff: A coefficient by which to multiply the state function.\n is_measurement: Whether the StateFn is a measurement operator.\n\n Raises:\n TypeError: invalid parameters.\n \"\"\"\n # If the initial density is a string, treat this as a density dict\n # with only a single basis state.\n if isinstance(primitive, str):\n primitive = {primitive: 1}\n\n # NOTE:\n # 1) This is not the same as passing in the counts dict directly, as this will\n # convert the shot numbers to\n # probabilities, whereas passing in the counts dict will not.\n # 2) This will extract counts for both shot and statevector simulations.\n # To use the statevector,\n # simply pass in the statevector.\n # 3) This will only extract the first result.\n if isinstance(primitive, Result):\n counts = 
primitive.get_counts()\n # NOTE: Need to square root to take correct Pauli measurements!\n primitive = {bstr: (shots / sum(counts.values()))**.5 for\n (bstr, shots) in counts.items()}\n\n if not isinstance(primitive, dict):\n raise TypeError(\n 'DictStateFn can only be instantiated with dict, '\n 'string, or Qiskit Result, not {}'.format(type(primitive)))\n\n super().__init__(primitive, coeff=coeff, is_measurement=is_measurement)\n\n def primitive_strings(self) -> Set[str]:\n return {'Dict'}\n\n @property\n def num_qubits(self) -> int:\n return len(list(self.primitive.keys())[0])\n\n def add(self, other: OperatorBase) -> OperatorBase:\n if not self.num_qubits == other.num_qubits:\n raise ValueError(\n 'Sum over statefns with different numbers of qubits, {} and {}, is not well '\n 'defined'.format(self.num_qubits, other.num_qubits))\n\n # Right now doesn't make sense to add a StateFn to a Measurement\n if isinstance(other, DictStateFn) and self.is_measurement == other.is_measurement:\n # TODO add compatibility with vector and Operator?\n if self.primitive == other.primitive:\n return DictStateFn(self.primitive,\n coeff=self.coeff + other.coeff,\n is_measurement=self.is_measurement)\n else:\n new_dict = {b: (v * self.coeff) + (other.primitive.get(b, 0) * other.coeff)\n for (b, v) in self.primitive.items()}\n new_dict.update({b: v * other.coeff for (b, v) in other.primitive.items()\n if b not in self.primitive})\n return DictStateFn(new_dict, is_measurement=self._is_measurement)\n # pylint: disable=cyclic-import\n from ..list_ops.summed_op import SummedOp\n return SummedOp([self, other])\n\n def adjoint(self) -> \"DictStateFn\":\n return DictStateFn({b: np.conj(v) for (b, v) in self.primitive.items()},\n coeff=self.coeff.conjugate(),\n is_measurement=(not self.is_measurement))\n\n def permute(self, permutation: List[int]) -> 'DictStateFn':\n new_num_qubits = max(permutation) + 1\n if self.num_qubits != len(permutation):\n raise OpflowError(\"New index must be defined for each qubit of the operator.\")\n\n # helper function to permute the key\n def perm(key):\n list_key = ['0'] * new_num_qubits\n for i, k in enumerate(permutation):\n list_key[k] = key[i]\n return ''.join(list_key)\n\n new_dict = {perm(key): value for key, value in self.primitive.items()}\n return DictStateFn(new_dict, coeff=self.coeff, is_measurement=self.is_measurement)\n\n def _expand_dim(self, num_qubits: int) -> 'DictStateFn':\n pad = '0'*num_qubits\n new_dict = {key + pad: value for key, value in self.primitive.items()}\n return DictStateFn(new_dict, coeff=self.coeff, is_measurement=self.is_measurement)\n\n def tensor(self, other: OperatorBase) -> OperatorBase:\n # Both dicts\n if isinstance(other, DictStateFn):\n new_dict = {k1 + k2: v1 * v2 for ((k1, v1,), (k2, v2)) in\n itertools.product(self.primitive.items(), other.primitive.items())}\n return StateFn(new_dict,\n coeff=self.coeff * other.coeff,\n is_measurement=self.is_measurement)\n # pylint: disable=cyclic-import\n from ..list_ops.tensored_op import TensoredOp\n return TensoredOp([self, other])\n\n def to_density_matrix(self, massive: bool = False) -> np.ndarray:\n OperatorBase._check_massive('to_density_matrix', True, self.num_qubits, massive)\n states = int(2 ** self.num_qubits)\n return self.to_matrix(massive=massive) * np.eye(states) * self.coeff\n\n def to_matrix(self, massive: bool = False) -> np.ndarray:\n OperatorBase._check_massive('to_matrix', False, self.num_qubits, massive)\n states = int(2 ** self.num_qubits)\n probs = np.zeros(states) + 0.j\n for k, v 
in self.primitive.items():\n probs[int(k, 2)] = v\n vec = probs * self.coeff\n\n # Reshape for measurements so np.dot still works for composition.\n return vec if not self.is_measurement else vec.reshape(1, -1)\n\n def to_spmatrix(self) -> sparse.spmatrix:\n \"\"\"Same as to_matrix, but returns csr sparse matrix.\n\n Returns:\n CSR sparse matrix representation of the State function.\n\n Raises:\n ValueError: invalid parameters.\n \"\"\"\n\n indices = [int(v, 2) for v in self.primitive.keys()]\n vals = np.array(list(self.primitive.values())) * self.coeff\n spvec = sparse.csr_matrix((vals, (np.zeros(len(indices), dtype=int), indices)),\n shape=(1, 2**self.num_qubits))\n return spvec if not self.is_measurement else spvec.transpose()\n\n def to_spmatrix_op(self) -> OperatorBase:\n \"\"\"Convert this state function to a ``SparseVectorStateFn``.\"\"\"\n from .sparse_vector_state_fn import SparseVectorStateFn\n return SparseVectorStateFn(self.to_spmatrix(), self.coeff, self.is_measurement)\n\n def to_circuit_op(self) -> OperatorBase:\n \"\"\"Convert this state function to a ``CircuitStateFn``.\"\"\"\n from .circuit_state_fn import CircuitStateFn\n csfn = CircuitStateFn.from_dict(self.primitive) * self.coeff\n return csfn.adjoint() if self.is_measurement else csfn\n\n def __str__(self) -> str:\n prim_str = str(self.primitive)\n if self.coeff == 1.0:\n return \"{}({})\".format('DictStateFn' if not self.is_measurement\n else 'DictMeasurement', prim_str)\n else:\n return \"{}({}) * {}\".format('DictStateFn' if not self.is_measurement\n else 'DictMeasurement',\n prim_str,\n self.coeff)\n\n # pylint: disable=too-many-return-statements\n def eval(\n self,\n front: Optional[\n Union[str, Dict[str, complex], np.ndarray, OperatorBase, Statevector]\n ] = None,\n ) -> Union[OperatorBase, complex]:\n if front is None:\n sparse_vector_state_fn = self.to_spmatrix_op().eval()\n return sparse_vector_state_fn\n\n if not self.is_measurement and isinstance(front, OperatorBase):\n raise ValueError(\n 'Cannot compute overlap with StateFn or Operator if not Measurement. Try taking '\n 'sf.adjoint() first to convert to measurement.')\n\n if isinstance(front, ListOp) and front.distributive:\n return front.combo_fn([self.eval(front.coeff * front_elem)\n for front_elem in front.oplist])\n\n # For now, always do this. 
If it's not performant, we can be more granular.\n if not isinstance(front, OperatorBase):\n front = StateFn(front)\n\n # pylint: disable=cyclic-import\n from ..operator_globals import EVAL_SIG_DIGITS\n\n # If the primitive is a lookup of bitstrings,\n # we define all missing strings to have a function value of\n # zero.\n if isinstance(front, DictStateFn):\n return np.round(\n cast(float, sum([v * front.primitive.get(b, 0) for (b, v) in\n self.primitive.items()]) * self.coeff * front.coeff),\n decimals=EVAL_SIG_DIGITS)\n\n # All remaining possibilities only apply when self.is_measurement is True\n\n if isinstance(front, VectorStateFn):\n # TODO does it need to be this way for measurement?\n # return sum([v * front.primitive.data[int(b, 2)] *\n # np.conj(front.primitive.data[int(b, 2)])\n return np.round(\n cast(float, sum([v * front.primitive.data[int(b, 2)] for (b, v) in\n self.primitive.items()]) * self.coeff),\n decimals=EVAL_SIG_DIGITS)\n\n from .circuit_state_fn import CircuitStateFn\n if isinstance(front, CircuitStateFn):\n # Don't reimplement logic from CircuitStateFn\n self_adjoint = cast(DictStateFn, self.adjoint())\n return np.conj(front.adjoint().eval(self_adjoint.primitive)) * self.coeff\n\n from .operator_state_fn import OperatorStateFn\n if isinstance(front, OperatorStateFn):\n return cast(Union[OperatorBase, float, complex], front.adjoint().eval(self.adjoint()))\n\n # All other OperatorBases go here\n self_adjoint = cast(DictStateFn, self.adjoint())\n adjointed_eval = cast(OperatorBase, front.adjoint().eval(self_adjoint.primitive))\n return adjointed_eval.adjoint() * self.coeff\n\n def sample(self,\n shots: int = 1024,\n massive: bool = False,\n reverse_endianness: bool = False) -> Dict[str, float]:\n probs = np.square(np.abs(np.array(list(self.primitive.values()))))\n unique, counts = np.unique(algorithm_globals.random.choice(list(self.primitive.keys()),\n size=shots,\n p=(probs / sum(probs))),\n return_counts=True)\n counts = dict(zip(unique, counts))\n if reverse_endianness:\n scaled_dict = {bstr[::-1]: (prob / shots) for (bstr, prob) in counts.items()}\n else:\n scaled_dict = {bstr: (prob / shots) for (bstr, prob) in counts.items()}\n return dict(sorted(scaled_dict.items(), key=lambda x: x[1], reverse=True))\n", "path": "qiskit/opflow/state_fns/dict_state_fn.py"}], "after_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2020, 2021.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\" DictStateFn Class \"\"\"\n\nimport itertools\nfrom typing import Dict, List, Optional, Set, Union, cast\n\nimport numpy as np\nfrom scipy import sparse\n\nfrom qiskit.circuit import ParameterExpression\nfrom qiskit.opflow.exceptions import OpflowError\nfrom qiskit.opflow.list_ops.list_op import ListOp\nfrom qiskit.opflow.operator_base import OperatorBase\nfrom qiskit.opflow.state_fns.state_fn import StateFn\nfrom qiskit.opflow.state_fns.vector_state_fn import VectorStateFn\nfrom qiskit.quantum_info import Statevector\nfrom qiskit.result import Result\nfrom qiskit.utils import algorithm_globals\n\n\nclass DictStateFn(StateFn):\n \"\"\" A class for state functions and measurements which are defined by a lookup table,\n stored in a dict.\n \"\"\"\n primitive: Dict[str, complex]\n\n # TODO allow normalization somehow?\n def __init__(self,\n primitive: Union[str, dict, Result] = None,\n coeff: Union[complex, ParameterExpression] = 1.0,\n is_measurement: bool = False) -> None:\n \"\"\"\n Args:\n primitive: The dict, single bitstring (if defining a basis sate), or Qiskit\n Result, which defines the behavior of the underlying function.\n coeff: A coefficient by which to multiply the state function.\n is_measurement: Whether the StateFn is a measurement operator.\n\n Raises:\n TypeError: invalid parameters.\n \"\"\"\n # If the initial density is a string, treat this as a density dict\n # with only a single basis state.\n if isinstance(primitive, str):\n primitive = {primitive: 1}\n\n # NOTE:\n # 1) This is not the same as passing in the counts dict directly, as this will\n # convert the shot numbers to\n # probabilities, whereas passing in the counts dict will not.\n # 2) This will extract counts for both shot and statevector simulations.\n # To use the statevector,\n # simply pass in the statevector.\n # 3) This will only extract the first result.\n if isinstance(primitive, Result):\n counts = primitive.get_counts()\n # NOTE: Need to square root to take correct Pauli measurements!\n primitive = {bstr: (shots / sum(counts.values()))**.5 for\n (bstr, shots) in counts.items()}\n\n if not isinstance(primitive, dict):\n raise TypeError(\n 'DictStateFn can only be instantiated with dict, '\n 'string, or Qiskit Result, not {}'.format(type(primitive)))\n\n super().__init__(primitive, coeff=coeff, is_measurement=is_measurement)\n\n def primitive_strings(self) -> Set[str]:\n return {'Dict'}\n\n @property\n def num_qubits(self) -> int:\n return len(next(iter(self.primitive)))\n\n def add(self, other: OperatorBase) -> OperatorBase:\n if not self.num_qubits == other.num_qubits:\n raise ValueError(\n 'Sum over statefns with different numbers of qubits, {} and {}, is not well '\n 'defined'.format(self.num_qubits, other.num_qubits))\n\n # Right now doesn't make sense to add a StateFn to a Measurement\n if isinstance(other, DictStateFn) and self.is_measurement == other.is_measurement:\n # TODO add compatibility with vector and Operator?\n if self.primitive == other.primitive:\n return DictStateFn(self.primitive,\n coeff=self.coeff + other.coeff,\n is_measurement=self.is_measurement)\n else:\n new_dict = {b: (v * self.coeff) + (other.primitive.get(b, 0) 
* other.coeff)\n for (b, v) in self.primitive.items()}\n new_dict.update({b: v * other.coeff for (b, v) in other.primitive.items()\n if b not in self.primitive})\n return DictStateFn(new_dict, is_measurement=self._is_measurement)\n # pylint: disable=cyclic-import\n from ..list_ops.summed_op import SummedOp\n return SummedOp([self, other])\n\n def adjoint(self) -> \"DictStateFn\":\n return DictStateFn({b: np.conj(v) for (b, v) in self.primitive.items()},\n coeff=self.coeff.conjugate(),\n is_measurement=(not self.is_measurement))\n\n def permute(self, permutation: List[int]) -> 'DictStateFn':\n new_num_qubits = max(permutation) + 1\n if self.num_qubits != len(permutation):\n raise OpflowError(\"New index must be defined for each qubit of the operator.\")\n\n # helper function to permute the key\n def perm(key):\n list_key = ['0'] * new_num_qubits\n for i, k in enumerate(permutation):\n list_key[k] = key[i]\n return ''.join(list_key)\n\n new_dict = {perm(key): value for key, value in self.primitive.items()}\n return DictStateFn(new_dict, coeff=self.coeff, is_measurement=self.is_measurement)\n\n def _expand_dim(self, num_qubits: int) -> 'DictStateFn':\n pad = '0'*num_qubits\n new_dict = {key + pad: value for key, value in self.primitive.items()}\n return DictStateFn(new_dict, coeff=self.coeff, is_measurement=self.is_measurement)\n\n def tensor(self, other: OperatorBase) -> OperatorBase:\n # Both dicts\n if isinstance(other, DictStateFn):\n new_dict = {k1 + k2: v1 * v2 for ((k1, v1,), (k2, v2)) in\n itertools.product(self.primitive.items(), other.primitive.items())}\n return StateFn(new_dict,\n coeff=self.coeff * other.coeff,\n is_measurement=self.is_measurement)\n # pylint: disable=cyclic-import\n from ..list_ops.tensored_op import TensoredOp\n return TensoredOp([self, other])\n\n def to_density_matrix(self, massive: bool = False) -> np.ndarray:\n OperatorBase._check_massive('to_density_matrix', True, self.num_qubits, massive)\n states = int(2 ** self.num_qubits)\n return self.to_matrix(massive=massive) * np.eye(states) * self.coeff\n\n def to_matrix(self, massive: bool = False) -> np.ndarray:\n OperatorBase._check_massive('to_matrix', False, self.num_qubits, massive)\n states = int(2 ** self.num_qubits)\n probs = np.zeros(states) + 0.j\n for k, v in self.primitive.items():\n probs[int(k, 2)] = v\n vec = probs * self.coeff\n\n # Reshape for measurements so np.dot still works for composition.\n return vec if not self.is_measurement else vec.reshape(1, -1)\n\n def to_spmatrix(self) -> sparse.spmatrix:\n \"\"\"Same as to_matrix, but returns csr sparse matrix.\n\n Returns:\n CSR sparse matrix representation of the State function.\n\n Raises:\n ValueError: invalid parameters.\n \"\"\"\n\n indices = [int(v, 2) for v in self.primitive.keys()]\n vals = np.array(list(self.primitive.values())) * self.coeff\n spvec = sparse.csr_matrix((vals, (np.zeros(len(indices), dtype=int), indices)),\n shape=(1, 2**self.num_qubits))\n return spvec if not self.is_measurement else spvec.transpose()\n\n def to_spmatrix_op(self) -> OperatorBase:\n \"\"\"Convert this state function to a ``SparseVectorStateFn``.\"\"\"\n from .sparse_vector_state_fn import SparseVectorStateFn\n return SparseVectorStateFn(self.to_spmatrix(), self.coeff, self.is_measurement)\n\n def to_circuit_op(self) -> OperatorBase:\n \"\"\"Convert this state function to a ``CircuitStateFn``.\"\"\"\n from .circuit_state_fn import CircuitStateFn\n csfn = CircuitStateFn.from_dict(self.primitive) * self.coeff\n return csfn.adjoint() if self.is_measurement 
else csfn\n\n def __str__(self) -> str:\n prim_str = str(self.primitive)\n if self.coeff == 1.0:\n return \"{}({})\".format('DictStateFn' if not self.is_measurement\n else 'DictMeasurement', prim_str)\n else:\n return \"{}({}) * {}\".format('DictStateFn' if not self.is_measurement\n else 'DictMeasurement',\n prim_str,\n self.coeff)\n\n # pylint: disable=too-many-return-statements\n def eval(\n self,\n front: Optional[\n Union[str, Dict[str, complex], np.ndarray, OperatorBase, Statevector]\n ] = None,\n ) -> Union[OperatorBase, complex]:\n if front is None:\n sparse_vector_state_fn = self.to_spmatrix_op().eval()\n return sparse_vector_state_fn\n\n if not self.is_measurement and isinstance(front, OperatorBase):\n raise ValueError(\n 'Cannot compute overlap with StateFn or Operator if not Measurement. Try taking '\n 'sf.adjoint() first to convert to measurement.')\n\n if isinstance(front, ListOp) and front.distributive:\n return front.combo_fn([self.eval(front.coeff * front_elem)\n for front_elem in front.oplist])\n\n # For now, always do this. If it's not performant, we can be more granular.\n if not isinstance(front, OperatorBase):\n front = StateFn(front)\n\n # pylint: disable=cyclic-import\n from ..operator_globals import EVAL_SIG_DIGITS\n\n # If the primitive is a lookup of bitstrings,\n # we define all missing strings to have a function value of\n # zero.\n if isinstance(front, DictStateFn):\n return np.round(\n cast(float, sum([v * front.primitive.get(b, 0) for (b, v) in\n self.primitive.items()]) * self.coeff * front.coeff),\n decimals=EVAL_SIG_DIGITS)\n\n # All remaining possibilities only apply when self.is_measurement is True\n\n if isinstance(front, VectorStateFn):\n # TODO does it need to be this way for measurement?\n # return sum([v * front.primitive.data[int(b, 2)] *\n # np.conj(front.primitive.data[int(b, 2)])\n return np.round(\n cast(float, sum([v * front.primitive.data[int(b, 2)] for (b, v) in\n self.primitive.items()]) * self.coeff),\n decimals=EVAL_SIG_DIGITS)\n\n from .circuit_state_fn import CircuitStateFn\n if isinstance(front, CircuitStateFn):\n # Don't reimplement logic from CircuitStateFn\n self_adjoint = cast(DictStateFn, self.adjoint())\n return np.conj(front.adjoint().eval(self_adjoint.primitive)) * self.coeff\n\n from .operator_state_fn import OperatorStateFn\n if isinstance(front, OperatorStateFn):\n return cast(Union[OperatorBase, float, complex], front.adjoint().eval(self.adjoint()))\n\n # All other OperatorBases go here\n self_adjoint = cast(DictStateFn, self.adjoint())\n adjointed_eval = cast(OperatorBase, front.adjoint().eval(self_adjoint.primitive))\n return adjointed_eval.adjoint() * self.coeff\n\n def sample(self,\n shots: int = 1024,\n massive: bool = False,\n reverse_endianness: bool = False) -> Dict[str, float]:\n probs = np.square(np.abs(np.array(list(self.primitive.values()))))\n unique, counts = np.unique(algorithm_globals.random.choice(list(self.primitive.keys()),\n size=shots,\n p=(probs / sum(probs))),\n return_counts=True)\n counts = dict(zip(unique, counts))\n if reverse_endianness:\n scaled_dict = {bstr[::-1]: (prob / shots) for (bstr, prob) in counts.items()}\n else:\n scaled_dict = {bstr: (prob / shots) for (bstr, prob) in counts.items()}\n return dict(sorted(scaled_dict.items(), key=lambda x: x[1], reverse=True))\n", "path": "qiskit/opflow/state_fns/dict_state_fn.py"}]}
| 3,952 | 141 |
gh_patches_debug_33049
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-8258
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unify tax calculation at one place
Right now the system's tax calculation is distributed across different places in different files. There is also repetition of code, and some of this calculation happens in the frontend as well. This creates confusion when editing tax info in one place. There is also a `/calculate-amount` endpoint in the API for calculating `order_amount`. The idea is to unify tax calculation in one place, preferably the `/calculate-amount` endpoint in the API.
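As a rough illustration, a simplified standalone sketch of what one unified calculation could return (this is not the project's actual helper; the real `calculate_order_amount` shown further below additionally handles fees, donation tickets, and per-ticket discount codes):
```python
def calculate_amount_sketch(price, quantity, tax_rate, tax_included, discount_percent=0.0):
    """Simplified stand-in for a unified order-amount calculation."""
    discount = price * discount_percent / 100          # per-unit discount
    sub_total = (price - discount) * quantity
    if tax_included:
        tax = sub_total - sub_total / (1 + tax_rate / 100)
        total = sub_total
    else:
        tax = sub_total * tax_rate / 100
        total = sub_total + tax
    return {
        "sub_total": round(sub_total, 2),
        "discount": round(discount * quantity, 2),
        "tax": round(tax, 2),
        "total": round(total, 2),
    }

# Example: USD 20 ticket, 2 seats, 10% discount, 19% tax added on top.
print(calculate_amount_sketch(20.0, 2, 19.0, tax_included=False, discount_percent=10.0))
```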
PDF Invoice: Show Discounts in the invoice
The PDF invoice for ticket buyers does not show discount codes clearly. Please implement the following:
* Add the discount code into the description with "Discount code: samplecodehere, Discount: 10% or USD 2.00 etc."
* Show the discount in the item price list with the original price struck through (a small numeric sketch follows below)
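A small numeric sketch of the per-ticket figures such an invoice line needs (the ticket price, tax rate, and discount here are invented; the tax handling mirrors the patch further below):
```python
# Invented example: USD 20.00 ticket, 19% tax added on top, 10% discount code.
price, tax_rate, tax_included = 20.00, 19.0, False
discount_code, discount_percent = "samplecodehere", 10.0

ticket_tax = price - price / (1 + tax_rate / 100) if tax_included else price * tax_rate / 100
discount_amount = price * discount_percent / 100
discounted_tax = ticket_tax - ticket_tax * discount_percent / 100

# Description line requested by the issue.
print(f"Discount code: {discount_code}, Discount: {discount_percent:.0f}% or USD {discount_amount:.2f}")
# Item price list: the original price would be struck through next to the discounted price.
print(f"USD {price:.2f} (struck through) -> USD {price - discount_amount:.2f}, tax USD {discounted_tax:.2f}")
```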
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/helpers/order.py`
Content:
```
1 import logging
2 from datetime import datetime, timedelta, timezone
3
4 from flask import render_template
5 from flask_rest_jsonapi.exceptions import ObjectNotFound
6
7 from app.api.helpers.db import (
8 get_count,
9 safe_query_without_soft_deleted_entries,
10 save_to_db,
11 )
12 from app.api.helpers.errors import ConflictError, UnprocessableEntityError
13 from app.api.helpers.files import create_save_pdf
14 from app.api.helpers.mail import (
15 send_email_to_attendees,
16 send_order_purchase_organizer_email,
17 )
18 from app.api.helpers.notification import (
19 notify_ticket_purchase_attendee,
20 notify_ticket_purchase_organizer,
21 )
22 from app.api.helpers.storage import UPLOAD_PATHS
23 from app.models import db
24 from app.models.order import OrderTicket
25 from app.models.ticket import Ticket
26 from app.models.ticket_fee import TicketFees
27 from app.models.ticket_holder import TicketHolder
28 from app.models.setting import Setting
29 from app.settings import get_settings
30
31
32 def delete_related_attendees_for_order(order):
33 """
34 Delete the associated attendees of an order when it is cancelled/deleted/expired
35 :param order: Order whose attendees have to be deleted.
36 :return:
37 """
38 for ticket_holder in order.ticket_holders:
39 db.session.delete(ticket_holder)
40 try:
41 db.session.commit()
42 except Exception:
43 logging.exception('DB Exception!')
44 db.session.rollback()
45
46
47 def set_expiry_for_order(order, override=False):
48 """
49 Expire the order after the time slot(10 minutes) if the order is initializing.
50 Also expires the order if we want to expire an order regardless of the state and time.
51 :param order: Order to be expired.
52 :param override: flag to force expiry.
53 :return:
54 """
55 order_expiry_time = get_settings()['order_expiry_time']
56 if (
57 order
58 and not order.paid_via
59 and (
60 override
61 or (
62 order.status == 'initializing'
63 and (order.created_at + timedelta(minutes=order_expiry_time))
64 < datetime.now(timezone.utc)
65 )
66 )
67 ):
68 order.status = 'expired'
69 delete_related_attendees_for_order(order)
70 save_to_db(order)
71 return order
72
73
74 def create_pdf_tickets_for_holder(order):
75 """
76 Create tickets and invoices for the holders of an order.
77 :param order: The order for which to create tickets for.
78 """
79 if order.status == 'completed' or order.status == 'placed':
80 pdf = create_save_pdf(
81 render_template('pdf/ticket_purchaser.html', order=order),
82 UPLOAD_PATHS['pdf']['tickets_all'],
83 dir_path='/static/uploads/pdf/tickets/',
84 identifier=order.identifier,
85 extra_identifiers={'extra_identifier': order.identifier},
86 upload_dir='generated/tickets/',
87 )
88
89 order.tickets_pdf_url = pdf
90
91 for holder in order.ticket_holders:
92 # create attendee pdf for every ticket holder
93 pdf = create_save_pdf(
94 render_template('pdf/ticket_attendee.html', order=order, holder=holder),
95 UPLOAD_PATHS['pdf']['tickets_all'],
96 dir_path='/static/uploads/pdf/tickets/',
97 identifier=order.identifier,
98 extra_identifiers={'extra_identifier': holder.id},
99 upload_dir='generated/tickets/',
100 )
101 holder.pdf_url = pdf
102 save_to_db(holder)
103
104 admin_info = Setting.query.first()
105
106 # create order invoices pdf
107 order_tickets = OrderTicket.query.filter_by(order_id=order.id).all()
108
109 attendee = TicketHolder.query.filter_by(order_id=order.id).first()
110
111 tickets = []
112 for order_ticket in order_tickets:
113 ticket = dict(
114 id=order_ticket.ticket.id,
115 price=order_ticket.ticket.price,
116 quantity=order_ticket.quantity
117 )
118 tickets.append(ticket)
119
120 # calculate order amount using helper function
121 order_amount = calculate_order_amount(tickets, discount_code=order.discount_code)
122
123 create_save_pdf(
124 render_template(
125 'pdf/order_invoice.html',
126 order=order,
127 event=order.event,
128 tax=order.event.tax,
129 order_tickets=order_tickets,
130 attendee=attendee,
131 event_starts_at=order.event.starts_at_tz.strftime('%d %B %Y'),
132 created_at=order.created_at.strftime('%d %B %Y'),
133 admin_info=admin_info,
134 order_amount=order_amount
135 ),
136 UPLOAD_PATHS['pdf']['order'],
137 dir_path='/static/uploads/pdf/tickets/',
138 identifier=order.identifier,
139 upload_dir='generated/invoices/',
140 new_renderer=True,
141 )
142 save_to_db(order)
143
144
145 def create_onsite_attendees_for_order(data):
146 """
147 Creates on site ticket holders for an order and adds it into the request data.
148 :param data: data initially passed in the POST request for order.
149 :return:
150 """
151 on_site_tickets = data.get('on_site_tickets')
152
153 if not on_site_tickets:
154 raise UnprocessableEntityError(
155 {'pointer': 'data/attributes/on_site_tickets'}, 'on_site_tickets info missing'
156 )
157
158 data['ticket_holders'] = []
159
160 for on_site_ticket in on_site_tickets:
161 ticket_id = on_site_ticket['id']
162 quantity = int(on_site_ticket['quantity'])
163
164 ticket = safe_query_without_soft_deleted_entries(
165 Ticket, 'id', ticket_id, 'ticket_id'
166 )
167
168 ticket_sold_count = get_count(
169 db.session.query(TicketHolder.id).filter_by(
170 ticket_id=int(ticket.id), deleted_at=None
171 )
172 )
173
174 # Check if the ticket is already sold out or not.
175 if ticket_sold_count + quantity > ticket.quantity:
176 # delete the already created attendees.
177 for holder in data['ticket_holders']:
178 ticket_holder = (
179 db.session.query(TicketHolder).filter(id == int(holder)).one()
180 )
181 db.session.delete(ticket_holder)
182 try:
183 db.session.commit()
184 except Exception:
185 logging.exception('DB Exception!')
186 db.session.rollback()
187
188 raise ConflictError(
189 {'pointer': '/data/attributes/on_site_tickets'},
190 "Ticket with id: {} already sold out. You can buy at most {} tickets".format(
191 ticket_id, ticket.quantity - ticket_sold_count
192 ),
193 )
194
195 for _ in range(1, quantity):
196 ticket_holder = TicketHolder(
197 firstname='onsite',
198 lastname='attendee',
199 email='[email protected]',
200 ticket_id=ticket.id,
201 event_id=data.get('event'),
202 )
203 save_to_db(ticket_holder)
204 data['ticket_holders'].append(ticket_holder.id)
205
206 # delete from the data.
207 del data['on_site_tickets']
208
209
210 def calculate_order_amount(tickets, discount_code=None):
211 from app.api.helpers.ticketing import validate_discount_code, validate_tickets
212 from app.models.discount_code import DiscountCode
213
214 ticket_ids = {ticket['id'] for ticket in tickets}
215 ticket_map = {int(ticket['id']): ticket for ticket in tickets}
216 fetched_tickets = validate_tickets(ticket_ids)
217
218 if tickets and discount_code:
219 discount_code = validate_discount_code(discount_code, tickets=tickets)
220
221 event = tax = tax_included = fees = None
222 total_amount = total_tax = total_discount = 0.0
223 ticket_list = []
224 for ticket in fetched_tickets:
225 ticket_info = ticket_map[ticket.id]
226 discount_amount = 0.0
227 discount_data = None
228 ticket_fee = 0.0
229
230 quantity = ticket_info.get('quantity', 1) # Default to single ticket
231 if not event:
232 event = ticket.event
233
234 if event.deleted_at:
235 raise ObjectNotFound(
236 {'pointer': 'tickets/event'}, f'Event: {event.id} not found'
237 )
238
239 fees = TicketFees.query.filter_by(currency=event.payment_currency).first()
240
241 if not tax and event.tax:
242 tax = event.tax
243 tax_included = tax.is_tax_included_in_price
244
245 if ticket.type == 'donation':
246 price = ticket_info.get('price')
247 if not price or price > ticket.max_price or price < ticket.min_price:
248 raise UnprocessableEntityError(
249 {'pointer': 'tickets/price'},
250 f"Price for donation ticket should be present and within range "
251 f"{ticket.min_price} to {ticket.max_price}",
252 )
253 else:
254 price = ticket.price if ticket.type != 'free' else 0.0
255
256 if discount_code and ticket.type != 'free':
257 code = (
258 DiscountCode.query.with_parent(ticket)
259 .filter_by(id=discount_code.id)
260 .first()
261 )
262 if code:
263 if discount_code.id == code.id:
264 if code.type == 'amount':
265 discount_amount = min(code.value, price)
266 discount_percent = (discount_amount / price) * 100
267 else:
268 discount_amount = (price * code.value) / 100
269 discount_percent = code.value
270 discount_data = {
271 'code': discount_code.code,
272 'percent': round(discount_percent, 2),
273 'amount': round(discount_amount, 2),
274 'total': round(discount_amount * quantity, 2),
275 }
276
277 total_discount += round(discount_amount * quantity, 2)
278 if fees and not ticket.is_fee_absorbed:
279 ticket_fee = fees.service_fee * (price * quantity) / 100
280 if ticket_fee > fees.maximum_fee:
281 ticket_fee = fees.maximum_fee
282 sub_total = ticket_fee + (price - discount_amount) * quantity
283 total_amount = total_amount + sub_total
284 ticket_list.append(
285 {
286 'id': ticket.id,
287 'name': ticket.name,
288 'price': price,
289 'quantity': quantity,
290 'discount': discount_data,
291 'ticket_fee': round(ticket_fee, 2),
292 'sub_total': round(sub_total, 2),
293 }
294 )
295
296 sub_total = total_amount
297 tax_dict = None
298 if tax:
299 if tax_included:
300 total_tax = total_amount - total_amount / (1 + tax.rate / 100)
301 else:
302 total_tax = total_amount * tax.rate / 100
303 total_amount += total_tax
304 tax_dict = dict(
305 included=tax_included,
306 amount=round(total_tax, 2),
307 percent=tax.rate if tax else 0.0,
308 name=tax.name,
309 )
310
311 return dict(
312 tax=tax_dict,
313 sub_total=round(sub_total, 2),
314 total=round(total_amount, 2),
315 discount=round(total_discount, 2),
316 tickets=ticket_list,
317 )
318
319
320 def on_order_completed(order):
321 # send e-mail and notifications if the order status is completed
322 if not (order.status == 'completed' or order.status == 'placed'):
323 return
324
325 create_pdf_tickets_for_holder(order)
326
327 # send email and notifications.
328 send_email_to_attendees(order)
329 notify_ticket_purchase_attendee(order)
330
331 if order.payment_mode in ['free', 'bank', 'cheque', 'onsite']:
332 order.completed_at = datetime.utcnow()
333
334 organizer_set = set(
335 filter(
336 bool, order.event.organizers + order.event.coorganizers + [order.event.owner]
337 )
338 )
339 send_order_purchase_organizer_email(order, organizer_set)
340 notify_ticket_purchase_organizer(order)
341
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/api/helpers/order.py b/app/api/helpers/order.py
--- a/app/api/helpers/order.py
+++ b/app/api/helpers/order.py
@@ -222,6 +222,7 @@
total_amount = total_tax = total_discount = 0.0
ticket_list = []
for ticket in fetched_tickets:
+ ticket_tax = discounted_tax = 0.0
ticket_info = ticket_map[ticket.id]
discount_amount = 0.0
discount_data = None
@@ -253,6 +254,12 @@
else:
price = ticket.price if ticket.type != 'free' else 0.0
+ if tax:
+ if tax_included:
+ ticket_tax = price - price / (1 + tax.rate / 100)
+ else:
+ ticket_tax = price * tax.rate / 100
+
if discount_code and ticket.type != 'free':
code = (
DiscountCode.query.with_parent(ticket)
@@ -266,6 +273,8 @@
discount_percent = (discount_amount / price) * 100
else:
discount_amount = (price * code.value) / 100
+ if tax:
+ discounted_tax = ticket_tax - (ticket_tax * code.value / 100)
discount_percent = code.value
discount_data = {
'code': discount_code.code,
@@ -290,6 +299,8 @@
'discount': discount_data,
'ticket_fee': round(ticket_fee, 2),
'sub_total': round(sub_total, 2),
+ 'ticket_tax': round(ticket_tax, 2),
+ 'discounted_tax': round(discounted_tax, 2)
}
)
|
{"golden_diff": "diff --git a/app/api/helpers/order.py b/app/api/helpers/order.py\n--- a/app/api/helpers/order.py\n+++ b/app/api/helpers/order.py\n@@ -222,6 +222,7 @@\n total_amount = total_tax = total_discount = 0.0\n ticket_list = []\n for ticket in fetched_tickets:\n+ ticket_tax = discounted_tax = 0.0\n ticket_info = ticket_map[ticket.id]\n discount_amount = 0.0\n discount_data = None\n@@ -253,6 +254,12 @@\n else:\n price = ticket.price if ticket.type != 'free' else 0.0\n \n+ if tax:\n+ if tax_included:\n+ ticket_tax = price - price / (1 + tax.rate / 100)\n+ else:\n+ ticket_tax = price * tax.rate / 100\n+\n if discount_code and ticket.type != 'free':\n code = (\n DiscountCode.query.with_parent(ticket)\n@@ -266,6 +273,8 @@\n discount_percent = (discount_amount / price) * 100\n else:\n discount_amount = (price * code.value) / 100\n+ if tax:\n+ discounted_tax = ticket_tax - (ticket_tax * code.value / 100)\n discount_percent = code.value\n discount_data = {\n 'code': discount_code.code,\n@@ -290,6 +299,8 @@\n 'discount': discount_data,\n 'ticket_fee': round(ticket_fee, 2),\n 'sub_total': round(sub_total, 2),\n+ 'ticket_tax': round(ticket_tax, 2),\n+ 'discounted_tax': round(discounted_tax, 2)\n }\n )\n", "issue": "Unify tax calculation at one place\nRight now tax calculation of the system is distributed in different places in different files. There is also repetition of code and some of this calculation is in frontend also. This creates confusion while editing tax info in one place. Also there in `/calculate-amount` endpoint in API for calculating `order_amount`. The idea is to unify tax calculation at one place, preferably `/calculate-amount` endpoint in API\nPDF Invoice: Show Discounts in the invoice\nThe PDF invoice for ticket buyers does not show discount codes clearly. 
Please implement the following:\r\n* Add the discount code into the description with \"Discount code: samplecodehere, Discount: 10% or USD 2.00 etc.\"\r\n* Show the discount in the item price list with the original price strike through\n", "before_files": [{"content": "import logging\nfrom datetime import datetime, timedelta, timezone\n\nfrom flask import render_template\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.helpers.db import (\n get_count,\n safe_query_without_soft_deleted_entries,\n save_to_db,\n)\nfrom app.api.helpers.errors import ConflictError, UnprocessableEntityError\nfrom app.api.helpers.files import create_save_pdf\nfrom app.api.helpers.mail import (\n send_email_to_attendees,\n send_order_purchase_organizer_email,\n)\nfrom app.api.helpers.notification import (\n notify_ticket_purchase_attendee,\n notify_ticket_purchase_organizer,\n)\nfrom app.api.helpers.storage import UPLOAD_PATHS\nfrom app.models import db\nfrom app.models.order import OrderTicket\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_fee import TicketFees\nfrom app.models.ticket_holder import TicketHolder\nfrom app.models.setting import Setting\nfrom app.settings import get_settings\n\n\ndef delete_related_attendees_for_order(order):\n \"\"\"\n Delete the associated attendees of an order when it is cancelled/deleted/expired\n :param order: Order whose attendees have to be deleted.\n :return:\n \"\"\"\n for ticket_holder in order.ticket_holders:\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception:\n logging.exception('DB Exception!')\n db.session.rollback()\n\n\ndef set_expiry_for_order(order, override=False):\n \"\"\"\n Expire the order after the time slot(10 minutes) if the order is initializing.\n Also expires the order if we want to expire an order regardless of the state and time.\n :param order: Order to be expired.\n :param override: flag to force expiry.\n :return:\n \"\"\"\n order_expiry_time = get_settings()['order_expiry_time']\n if (\n order\n and not order.paid_via\n and (\n override\n or (\n order.status == 'initializing'\n and (order.created_at + timedelta(minutes=order_expiry_time))\n < datetime.now(timezone.utc)\n )\n )\n ):\n order.status = 'expired'\n delete_related_attendees_for_order(order)\n save_to_db(order)\n return order\n\n\ndef create_pdf_tickets_for_holder(order):\n \"\"\"\n Create tickets and invoices for the holders of an order.\n :param order: The order for which to create tickets for.\n \"\"\"\n if order.status == 'completed' or order.status == 'placed':\n pdf = create_save_pdf(\n render_template('pdf/ticket_purchaser.html', order=order),\n UPLOAD_PATHS['pdf']['tickets_all'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n extra_identifiers={'extra_identifier': order.identifier},\n upload_dir='generated/tickets/',\n )\n\n order.tickets_pdf_url = pdf\n\n for holder in order.ticket_holders:\n # create attendee pdf for every ticket holder\n pdf = create_save_pdf(\n render_template('pdf/ticket_attendee.html', order=order, holder=holder),\n UPLOAD_PATHS['pdf']['tickets_all'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n extra_identifiers={'extra_identifier': holder.id},\n upload_dir='generated/tickets/',\n )\n holder.pdf_url = pdf\n save_to_db(holder)\n \n admin_info = Setting.query.first()\n\n # create order invoices pdf\n order_tickets = OrderTicket.query.filter_by(order_id=order.id).all()\n\n attendee = 
TicketHolder.query.filter_by(order_id=order.id).first()\n\n tickets = []\n for order_ticket in order_tickets:\n ticket = dict(\n id=order_ticket.ticket.id,\n price=order_ticket.ticket.price,\n quantity=order_ticket.quantity\n )\n tickets.append(ticket)\n \n # calculate order amount using helper function\n order_amount = calculate_order_amount(tickets, discount_code=order.discount_code) \n\n create_save_pdf(\n render_template(\n 'pdf/order_invoice.html',\n order=order,\n event=order.event,\n tax=order.event.tax,\n order_tickets=order_tickets,\n attendee=attendee,\n event_starts_at=order.event.starts_at_tz.strftime('%d %B %Y'),\n created_at=order.created_at.strftime('%d %B %Y'),\n admin_info=admin_info,\n order_amount=order_amount\n ),\n UPLOAD_PATHS['pdf']['order'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n upload_dir='generated/invoices/',\n new_renderer=True,\n )\n save_to_db(order)\n\n\ndef create_onsite_attendees_for_order(data):\n \"\"\"\n Creates on site ticket holders for an order and adds it into the request data.\n :param data: data initially passed in the POST request for order.\n :return:\n \"\"\"\n on_site_tickets = data.get('on_site_tickets')\n\n if not on_site_tickets:\n raise UnprocessableEntityError(\n {'pointer': 'data/attributes/on_site_tickets'}, 'on_site_tickets info missing'\n )\n\n data['ticket_holders'] = []\n\n for on_site_ticket in on_site_tickets:\n ticket_id = on_site_ticket['id']\n quantity = int(on_site_ticket['quantity'])\n\n ticket = safe_query_without_soft_deleted_entries(\n Ticket, 'id', ticket_id, 'ticket_id'\n )\n\n ticket_sold_count = get_count(\n db.session.query(TicketHolder.id).filter_by(\n ticket_id=int(ticket.id), deleted_at=None\n )\n )\n\n # Check if the ticket is already sold out or not.\n if ticket_sold_count + quantity > ticket.quantity:\n # delete the already created attendees.\n for holder in data['ticket_holders']:\n ticket_holder = (\n db.session.query(TicketHolder).filter(id == int(holder)).one()\n )\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception:\n logging.exception('DB Exception!')\n db.session.rollback()\n\n raise ConflictError(\n {'pointer': '/data/attributes/on_site_tickets'},\n \"Ticket with id: {} already sold out. 
You can buy at most {} tickets\".format(\n ticket_id, ticket.quantity - ticket_sold_count\n ),\n )\n\n for _ in range(1, quantity):\n ticket_holder = TicketHolder(\n firstname='onsite',\n lastname='attendee',\n email='[email protected]',\n ticket_id=ticket.id,\n event_id=data.get('event'),\n )\n save_to_db(ticket_holder)\n data['ticket_holders'].append(ticket_holder.id)\n\n # delete from the data.\n del data['on_site_tickets']\n\n\ndef calculate_order_amount(tickets, discount_code=None):\n from app.api.helpers.ticketing import validate_discount_code, validate_tickets\n from app.models.discount_code import DiscountCode\n\n ticket_ids = {ticket['id'] for ticket in tickets}\n ticket_map = {int(ticket['id']): ticket for ticket in tickets}\n fetched_tickets = validate_tickets(ticket_ids)\n\n if tickets and discount_code:\n discount_code = validate_discount_code(discount_code, tickets=tickets)\n\n event = tax = tax_included = fees = None\n total_amount = total_tax = total_discount = 0.0\n ticket_list = []\n for ticket in fetched_tickets:\n ticket_info = ticket_map[ticket.id]\n discount_amount = 0.0\n discount_data = None\n ticket_fee = 0.0\n\n quantity = ticket_info.get('quantity', 1) # Default to single ticket\n if not event:\n event = ticket.event\n\n if event.deleted_at:\n raise ObjectNotFound(\n {'pointer': 'tickets/event'}, f'Event: {event.id} not found'\n )\n\n fees = TicketFees.query.filter_by(currency=event.payment_currency).first()\n\n if not tax and event.tax:\n tax = event.tax\n tax_included = tax.is_tax_included_in_price\n\n if ticket.type == 'donation':\n price = ticket_info.get('price')\n if not price or price > ticket.max_price or price < ticket.min_price:\n raise UnprocessableEntityError(\n {'pointer': 'tickets/price'},\n f\"Price for donation ticket should be present and within range \"\n f\"{ticket.min_price} to {ticket.max_price}\",\n )\n else:\n price = ticket.price if ticket.type != 'free' else 0.0\n\n if discount_code and ticket.type != 'free':\n code = (\n DiscountCode.query.with_parent(ticket)\n .filter_by(id=discount_code.id)\n .first()\n )\n if code:\n if discount_code.id == code.id:\n if code.type == 'amount':\n discount_amount = min(code.value, price)\n discount_percent = (discount_amount / price) * 100\n else:\n discount_amount = (price * code.value) / 100\n discount_percent = code.value\n discount_data = {\n 'code': discount_code.code,\n 'percent': round(discount_percent, 2),\n 'amount': round(discount_amount, 2),\n 'total': round(discount_amount * quantity, 2),\n }\n\n total_discount += round(discount_amount * quantity, 2)\n if fees and not ticket.is_fee_absorbed:\n ticket_fee = fees.service_fee * (price * quantity) / 100\n if ticket_fee > fees.maximum_fee:\n ticket_fee = fees.maximum_fee\n sub_total = ticket_fee + (price - discount_amount) * quantity\n total_amount = total_amount + sub_total\n ticket_list.append(\n {\n 'id': ticket.id,\n 'name': ticket.name,\n 'price': price,\n 'quantity': quantity,\n 'discount': discount_data,\n 'ticket_fee': round(ticket_fee, 2),\n 'sub_total': round(sub_total, 2),\n }\n )\n\n sub_total = total_amount\n tax_dict = None\n if tax:\n if tax_included:\n total_tax = total_amount - total_amount / (1 + tax.rate / 100)\n else:\n total_tax = total_amount * tax.rate / 100\n total_amount += total_tax\n tax_dict = dict(\n included=tax_included,\n amount=round(total_tax, 2),\n percent=tax.rate if tax else 0.0,\n name=tax.name,\n )\n\n return dict(\n tax=tax_dict,\n sub_total=round(sub_total, 2),\n total=round(total_amount, 2),\n 
discount=round(total_discount, 2),\n tickets=ticket_list,\n )\n\n\ndef on_order_completed(order):\n # send e-mail and notifications if the order status is completed\n if not (order.status == 'completed' or order.status == 'placed'):\n return\n\n create_pdf_tickets_for_holder(order)\n\n # send email and notifications.\n send_email_to_attendees(order)\n notify_ticket_purchase_attendee(order)\n\n if order.payment_mode in ['free', 'bank', 'cheque', 'onsite']:\n order.completed_at = datetime.utcnow()\n\n organizer_set = set(\n filter(\n bool, order.event.organizers + order.event.coorganizers + [order.event.owner]\n )\n )\n send_order_purchase_organizer_email(order, organizer_set)\n notify_ticket_purchase_organizer(order)\n", "path": "app/api/helpers/order.py"}], "after_files": [{"content": "import logging\nfrom datetime import datetime, timedelta, timezone\n\nfrom flask import render_template\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.helpers.db import (\n get_count,\n safe_query_without_soft_deleted_entries,\n save_to_db,\n)\nfrom app.api.helpers.errors import ConflictError, UnprocessableEntityError\nfrom app.api.helpers.files import create_save_pdf\nfrom app.api.helpers.mail import (\n send_email_to_attendees,\n send_order_purchase_organizer_email,\n)\nfrom app.api.helpers.notification import (\n notify_ticket_purchase_attendee,\n notify_ticket_purchase_organizer,\n)\nfrom app.api.helpers.storage import UPLOAD_PATHS\nfrom app.models import db\nfrom app.models.order import OrderTicket\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_fee import TicketFees\nfrom app.models.ticket_holder import TicketHolder\nfrom app.models.setting import Setting\nfrom app.settings import get_settings\n\n\ndef delete_related_attendees_for_order(order):\n \"\"\"\n Delete the associated attendees of an order when it is cancelled/deleted/expired\n :param order: Order whose attendees have to be deleted.\n :return:\n \"\"\"\n for ticket_holder in order.ticket_holders:\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception:\n logging.exception('DB Exception!')\n db.session.rollback()\n\n\ndef set_expiry_for_order(order, override=False):\n \"\"\"\n Expire the order after the time slot(10 minutes) if the order is initializing.\n Also expires the order if we want to expire an order regardless of the state and time.\n :param order: Order to be expired.\n :param override: flag to force expiry.\n :return:\n \"\"\"\n order_expiry_time = get_settings()['order_expiry_time']\n if (\n order\n and not order.paid_via\n and (\n override\n or (\n order.status == 'initializing'\n and (order.created_at + timedelta(minutes=order_expiry_time))\n < datetime.now(timezone.utc)\n )\n )\n ):\n order.status = 'expired'\n delete_related_attendees_for_order(order)\n save_to_db(order)\n return order\n\n\ndef create_pdf_tickets_for_holder(order):\n \"\"\"\n Create tickets and invoices for the holders of an order.\n :param order: The order for which to create tickets for.\n \"\"\"\n if order.status == 'completed' or order.status == 'placed':\n pdf = create_save_pdf(\n render_template('pdf/ticket_purchaser.html', order=order),\n UPLOAD_PATHS['pdf']['tickets_all'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n extra_identifiers={'extra_identifier': order.identifier},\n upload_dir='generated/tickets/',\n )\n\n order.tickets_pdf_url = pdf\n\n for holder in order.ticket_holders:\n # create attendee pdf for every ticket holder\n pdf = create_save_pdf(\n 
render_template('pdf/ticket_attendee.html', order=order, holder=holder),\n UPLOAD_PATHS['pdf']['tickets_all'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n extra_identifiers={'extra_identifier': holder.id},\n upload_dir='generated/tickets/',\n )\n holder.pdf_url = pdf\n save_to_db(holder)\n \n admin_info = Setting.query.first()\n\n # create order invoices pdf\n order_tickets = OrderTicket.query.filter_by(order_id=order.id).all()\n\n attendee = TicketHolder.query.filter_by(order_id=order.id).first()\n\n tickets = []\n for order_ticket in order_tickets:\n ticket = dict(\n id=order_ticket.ticket.id,\n price=order_ticket.ticket.price,\n quantity=order_ticket.quantity\n )\n tickets.append(ticket)\n \n # calculate order amount using helper function\n order_amount = calculate_order_amount(tickets, discount_code=order.discount_code) \n\n create_save_pdf(\n render_template(\n 'pdf/order_invoice.html',\n order=order,\n event=order.event,\n tax=order.event.tax,\n order_tickets=order_tickets,\n attendee=attendee,\n event_starts_at=order.event.starts_at_tz.strftime('%d %B %Y'),\n created_at=order.created_at.strftime('%d %B %Y'),\n admin_info=admin_info,\n order_amount=order_amount\n ),\n UPLOAD_PATHS['pdf']['order'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n upload_dir='generated/invoices/',\n new_renderer=True,\n )\n save_to_db(order)\n\n\ndef create_onsite_attendees_for_order(data):\n \"\"\"\n Creates on site ticket holders for an order and adds it into the request data.\n :param data: data initially passed in the POST request for order.\n :return:\n \"\"\"\n on_site_tickets = data.get('on_site_tickets')\n\n if not on_site_tickets:\n raise UnprocessableEntityError(\n {'pointer': 'data/attributes/on_site_tickets'}, 'on_site_tickets info missing'\n )\n\n data['ticket_holders'] = []\n\n for on_site_ticket in on_site_tickets:\n ticket_id = on_site_ticket['id']\n quantity = int(on_site_ticket['quantity'])\n\n ticket = safe_query_without_soft_deleted_entries(\n Ticket, 'id', ticket_id, 'ticket_id'\n )\n\n ticket_sold_count = get_count(\n db.session.query(TicketHolder.id).filter_by(\n ticket_id=int(ticket.id), deleted_at=None\n )\n )\n\n # Check if the ticket is already sold out or not.\n if ticket_sold_count + quantity > ticket.quantity:\n # delete the already created attendees.\n for holder in data['ticket_holders']:\n ticket_holder = (\n db.session.query(TicketHolder).filter(id == int(holder)).one()\n )\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception:\n logging.exception('DB Exception!')\n db.session.rollback()\n\n raise ConflictError(\n {'pointer': '/data/attributes/on_site_tickets'},\n \"Ticket with id: {} already sold out. 
You can buy at most {} tickets\".format(\n ticket_id, ticket.quantity - ticket_sold_count\n ),\n )\n\n for _ in range(1, quantity):\n ticket_holder = TicketHolder(\n firstname='onsite',\n lastname='attendee',\n email='[email protected]',\n ticket_id=ticket.id,\n event_id=data.get('event'),\n )\n save_to_db(ticket_holder)\n data['ticket_holders'].append(ticket_holder.id)\n\n # delete from the data.\n del data['on_site_tickets']\n\n\ndef calculate_order_amount(tickets, discount_code=None):\n from app.api.helpers.ticketing import validate_discount_code, validate_tickets\n from app.models.discount_code import DiscountCode\n\n ticket_ids = {ticket['id'] for ticket in tickets}\n ticket_map = {int(ticket['id']): ticket for ticket in tickets}\n fetched_tickets = validate_tickets(ticket_ids)\n\n if tickets and discount_code:\n discount_code = validate_discount_code(discount_code, tickets=tickets)\n\n event = tax = tax_included = fees = None\n total_amount = total_tax = total_discount = 0.0\n ticket_list = []\n for ticket in fetched_tickets:\n ticket_tax = discounted_tax = 0.0\n ticket_info = ticket_map[ticket.id]\n discount_amount = 0.0\n discount_data = None\n ticket_fee = 0.0\n\n quantity = ticket_info.get('quantity', 1) # Default to single ticket\n if not event:\n event = ticket.event\n\n if event.deleted_at:\n raise ObjectNotFound(\n {'pointer': 'tickets/event'}, f'Event: {event.id} not found'\n )\n\n fees = TicketFees.query.filter_by(currency=event.payment_currency).first()\n\n if not tax and event.tax:\n tax = event.tax\n tax_included = tax.is_tax_included_in_price\n\n if ticket.type == 'donation':\n price = ticket_info.get('price')\n if not price or price > ticket.max_price or price < ticket.min_price:\n raise UnprocessableEntityError(\n {'pointer': 'tickets/price'},\n f\"Price for donation ticket should be present and within range \"\n f\"{ticket.min_price} to {ticket.max_price}\",\n )\n else:\n price = ticket.price if ticket.type != 'free' else 0.0\n\n if tax:\n if tax_included:\n ticket_tax = price - price / (1 + tax.rate / 100)\n else:\n ticket_tax = price * tax.rate / 100\n\n if discount_code and ticket.type != 'free':\n code = (\n DiscountCode.query.with_parent(ticket)\n .filter_by(id=discount_code.id)\n .first()\n )\n if code:\n if discount_code.id == code.id:\n if code.type == 'amount':\n discount_amount = min(code.value, price)\n discount_percent = (discount_amount / price) * 100\n else:\n discount_amount = (price * code.value) / 100\n if tax:\n discounted_tax = ticket_tax - (ticket_tax * code.value / 100)\n discount_percent = code.value\n discount_data = {\n 'code': discount_code.code,\n 'percent': round(discount_percent, 2),\n 'amount': round(discount_amount, 2),\n 'total': round(discount_amount * quantity, 2),\n }\n\n total_discount += round(discount_amount * quantity, 2)\n if fees and not ticket.is_fee_absorbed:\n ticket_fee = fees.service_fee * (price * quantity) / 100\n if ticket_fee > fees.maximum_fee:\n ticket_fee = fees.maximum_fee\n sub_total = ticket_fee + (price - discount_amount) * quantity\n total_amount = total_amount + sub_total\n ticket_list.append(\n {\n 'id': ticket.id,\n 'name': ticket.name,\n 'price': price,\n 'quantity': quantity,\n 'discount': discount_data,\n 'ticket_fee': round(ticket_fee, 2),\n 'sub_total': round(sub_total, 2),\n 'ticket_tax': round(ticket_tax, 2),\n 'discounted_tax': round(discounted_tax, 2)\n }\n )\n\n sub_total = total_amount\n tax_dict = None\n if tax:\n if tax_included:\n total_tax = total_amount - total_amount / (1 + tax.rate / 100)\n 
else:\n total_tax = total_amount * tax.rate / 100\n total_amount += total_tax\n tax_dict = dict(\n included=tax_included,\n amount=round(total_tax, 2),\n percent=tax.rate if tax else 0.0,\n name=tax.name,\n )\n\n return dict(\n tax=tax_dict,\n sub_total=round(sub_total, 2),\n total=round(total_amount, 2),\n discount=round(total_discount, 2),\n tickets=ticket_list,\n )\n\n\ndef on_order_completed(order):\n # send e-mail and notifications if the order status is completed\n if not (order.status == 'completed' or order.status == 'placed'):\n return\n\n create_pdf_tickets_for_holder(order)\n\n # send email and notifications.\n send_email_to_attendees(order)\n notify_ticket_purchase_attendee(order)\n\n if order.payment_mode in ['free', 'bank', 'cheque', 'onsite']:\n order.completed_at = datetime.utcnow()\n\n organizer_set = set(\n filter(\n bool, order.event.organizers + order.event.coorganizers + [order.event.owner]\n )\n )\n send_order_purchase_organizer_email(order, organizer_set)\n notify_ticket_purchase_organizer(order)\n", "path": "app/api/helpers/order.py"}]}
| 3,791 | 394 |
gh_patches_debug_167
|
rasdani/github-patches
|
git_diff
|
jupyterhub__jupyterhub-1526
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jupyterhub 0.8.0 radio buttons unclickable or ugly due to form-control class
```
jupyterhub --version
0.8.0
```
I have some radio buttons in my spawner's `_option_form_default`:
```
return """<label for="type">Which type of instance do you want to launch?</label>
<table>
<tr>
<td><input type="radio" name="type" value="c4.8xlarge" checked="checked"></td>
<td> c4.8xlarge (36 CPU, 60GB RAM, $1.591/h)</td>
</tr>
<tr>
<td><input type="radio" name="type" value="r4.8xlarge"></td>
<td> r4.8xlarge (32 CPU, 244GB RAM, $2.341/h)</td>
</tr>
</table><br>
"""
```
In `0.8.0` version these are unclickable. Removing `form-control` class introduced [here](https://github.com/jupyterhub/jupyterhub/blob/master/share/jupyter/hub/templates/spawn.html) fixes the issue for me.
I also tried buttons like this:
```
<tr>
<td><label>
<input type="radio" name="type" value="c4.8xlarge">
c4.8xlarge (36 CPU, 60GB RAM, $1.591/h)
</label></td>
</tr>
```
These are clickable but look ugly with the `form-control` class.
Removing the `form-control` class makes them both clickable and pretty :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/_version.py`
Content:
```
1 """JupyterHub version info"""
2
3 # Copyright (c) Jupyter Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 version_info = (
7 0,
8 8,
9 2,
10 'dev',
11 )
12
13 __version__ = '.'.join(map(str, version_info))
14
15
16 def _check_version(hub_version, singleuser_version, log):
17 """Compare Hub and single-user server versions"""
18 if not hub_version:
19 log.warning("Hub has no version header, which means it is likely < 0.8. Expected %s", __version__)
20 return
21
22 if not singleuser_version:
23 log.warning("Single-user server has no version header, which means it is likely < 0.8. Expected %s", __version__)
24 return
25
26 # compare minor X.Y versions
27 if hub_version != singleuser_version:
28 from distutils.version import LooseVersion as V
29 hub_major_minor = V(hub_version).version[:2]
30 singleuser_major_minor = V(singleuser_version).version[:2]
31 extra = ""
32 if singleuser_major_minor == hub_major_minor:
33 # patch-level mismatch or lower, log difference at debug-level
34 # because this should be fine
35 log_method = log.debug
36 else:
37 # log warning-level for more significant mismatch, such as 0.8 vs 0.9, etc.
38 log_method = log.warning
39 extra = " This could cause failure to authenticate and result in redirect loops!"
40 log_method(
41 "jupyterhub version %s != jupyterhub-singleuser version %s." + extra,
42 hub_version,
43 singleuser_version,
44 )
45 else:
46 log.debug("jupyterhub and jupyterhub-singleuser both on version %s" % hub_version)
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/jupyterhub/_version.py b/jupyterhub/_version.py
--- a/jupyterhub/_version.py
+++ b/jupyterhub/_version.py
@@ -6,8 +6,8 @@
version_info = (
0,
8,
- 1,
- # 'dev',
+ 2,
+ 'dev',
)
__version__ = '.'.join(map(str, version_info))
|
{"golden_diff": "diff --git a/jupyterhub/_version.py b/jupyterhub/_version.py\n--- a/jupyterhub/_version.py\n+++ b/jupyterhub/_version.py\n@@ -6,8 +6,8 @@\n version_info = (\n 0,\n 8,\n- 1,\n- # 'dev',\n+ 2,\n+ 'dev',\n )\n \n __version__ = '.'.join(map(str, version_info))\n", "issue": "Jupyterhub 0.8.0 radio buttons unclickable or ugly due to form-control class\n```\r\njupyterhub --version\r\n0.8.0\r\n```\r\n\r\nI have some radio buttons in my spawner's `_option_form_default`:\r\n\r\n```\r\nreturn \"\"\"<label for=\"type\">Which type of instance do you want to launch?</label>\r\n <table>\r\n <tr>\r\n <td><input type=\"radio\" name=\"type\" value=\"c4.8xlarge\" checked=\"checked\"></td>\r\n <td> c4.8xlarge (36 CPU, 60GB RAM, $1.591/h)</td>\r\n </tr>\r\n <tr>\r\n <td><input type=\"radio\" name=\"type\" value=\"r4.8xlarge\"></td>\r\n <td> r4.8xlarge (32 CPU, 244GB RAM, $2.341/h)</td>\r\n </tr>\r\n </table><br>\r\n \"\"\"\r\n```\r\n\r\nIn `0.8.0` version these are unclickable. Removing `form-control` class introduced [here](https://github.com/jupyterhub/jupyterhub/blob/master/share/jupyter/hub/templates/spawn.html) fixes the issue for me. \r\n\r\nI also tried buttons like this:\r\n\r\n```\r\n <tr>\r\n <td><label>\r\n <input type=\"radio\" name=\"type\" value=\"c4.8xlarge\">\r\n c4.8xlarge (36 CPU, 60GB RAM, $1.591/h)\r\n </label></td>\r\n </tr>\r\n```\r\n\r\nThese are clickable but look ugly with the `form-control` class. \r\n\r\nRemoving the `form-control` class makes them both clickable and pretty :) \n", "before_files": [{"content": "\"\"\"JupyterHub version info\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nversion_info = (\n 0,\n 8,\n 2,\n 'dev',\n)\n\n__version__ = '.'.join(map(str, version_info))\n\n\ndef _check_version(hub_version, singleuser_version, log):\n \"\"\"Compare Hub and single-user server versions\"\"\"\n if not hub_version:\n log.warning(\"Hub has no version header, which means it is likely < 0.8. Expected %s\", __version__)\n return\n\n if not singleuser_version:\n log.warning(\"Single-user server has no version header, which means it is likely < 0.8. 
Expected %s\", __version__)\n return\n\n # compare minor X.Y versions\n if hub_version != singleuser_version:\n from distutils.version import LooseVersion as V\n hub_major_minor = V(hub_version).version[:2]\n singleuser_major_minor = V(singleuser_version).version[:2]\n extra = \"\"\n if singleuser_major_minor == hub_major_minor:\n # patch-level mismatch or lower, log difference at debug-level\n # because this should be fine\n log_method = log.debug\n else:\n # log warning-level for more significant mismatch, such as 0.8 vs 0.9, etc.\n log_method = log.warning\n extra = \" This could cause failure to authenticate and result in redirect loops!\"\n log_method(\n \"jupyterhub version %s != jupyterhub-singleuser version %s.\" + extra,\n hub_version,\n singleuser_version,\n )\n else:\n log.debug(\"jupyterhub and jupyterhub-singleuser both on version %s\" % hub_version)\n", "path": "jupyterhub/_version.py"}], "after_files": [{"content": "\"\"\"JupyterHub version info\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nversion_info = (\n 0,\n 8,\n 2,\n 'dev',\n)\n\n__version__ = '.'.join(map(str, version_info))\n\n\ndef _check_version(hub_version, singleuser_version, log):\n \"\"\"Compare Hub and single-user server versions\"\"\"\n if not hub_version:\n log.warning(\"Hub has no version header, which means it is likely < 0.8. Expected %s\", __version__)\n return\n\n if not singleuser_version:\n log.warning(\"Single-user server has no version header, which means it is likely < 0.8. Expected %s\", __version__)\n return\n\n # compare minor X.Y versions\n if hub_version != singleuser_version:\n from distutils.version import LooseVersion as V\n hub_major_minor = V(hub_version).version[:2]\n singleuser_major_minor = V(singleuser_version).version[:2]\n extra = \"\"\n if singleuser_major_minor == hub_major_minor:\n # patch-level mismatch or lower, log difference at debug-level\n # because this should be fine\n log_method = log.debug\n else:\n # log warning-level for more significant mismatch, such as 0.8 vs 0.9, etc.\n log_method = log.warning\n extra = \" This could cause failure to authenticate and result in redirect loops!\"\n log_method(\n \"jupyterhub version %s != jupyterhub-singleuser version %s.\" + extra,\n hub_version,\n singleuser_version,\n )\n else:\n log.debug(\"jupyterhub and jupyterhub-singleuser both on version %s\" % hub_version)\n", "path": "jupyterhub/_version.py"}]}
| 1,114 | 94 |
gh_patches_debug_28062
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-1072
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
opentelemetry-instrumentation-aiohttp-client
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instrumentation/opentelemetry-instrumentation-aiohttp-client/src/opentelemetry/instrumentation/aiohttp_client/__init__.py`
Content:
```
1 # Copyright 2020, OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 The opentelemetry-instrumentation-aiohttp-client package allows tracing HTTP
17 requests made by the aiohttp client library.
18
19 Usage
20 -----
21 Explicitly instrumenting a single client session:
22
23 .. code:: python
24
25 import aiohttp
26 from opentelemetry.instrumentation.aiohttp_client import create_trace_config
27 import yarl
28
29 def strip_query_params(url: yarl.URL) -> str:
30 return str(url.with_query(None))
31
32 async with aiohttp.ClientSession(trace_configs=[create_trace_config(
33 # Remove all query params from the URL attribute on the span.
34 url_filter=strip_query_params,
35 )]) as session:
36 async with session.get(url) as response:
37 await response.text()
38
39 Instrumenting all client sessions:
40
41 .. code:: python
42
43 import aiohttp
44 from opentelemetry.instrumentation.aiohttp_client import (
45 AioHttpClientInstrumentor
46 )
47
48 # Enable instrumentation
49 AioHttpClientInstrumentor().instrument()
50
51 # Create a session and make an HTTP get request
52 async with aiohttp.ClientSession() as session:
53 async with session.get(url) as response:
54 await response.text()
55
56 Configuration
57 -------------
58
59 Request/Response hooks
60 **********************
61
62 Utilize request/reponse hooks to execute custom logic to be performed before/after performing a request.
63
64 .. code-block:: python
65
66 def request_hook(span: Span, params: aiohttp.TraceRequestStartParams):
67 if span and span.is_recording():
68 span.set_attribute("custom_user_attribute_from_request_hook", "some-value")
69
70 def response_hook(span: Span, params: typing.Union[
71 aiohttp.TraceRequestEndParams,
72 aiohttp.TraceRequestExceptionParams,
73 ]):
74 if span and span.is_recording():
75 span.set_attribute("custom_user_attribute_from_response_hook", "some-value")
76
77 AioHttpClientInstrumentor().instrument(request_hook=request_hook, response_hook=response_hook)
78
79 API
80 ---
81 """
82
83 import types
84 import typing
85 from typing import Collection
86
87 import aiohttp
88 import wrapt
89 import yarl
90
91 from opentelemetry import context as context_api
92 from opentelemetry import trace
93 from opentelemetry.instrumentation.aiohttp_client.package import _instruments
94 from opentelemetry.instrumentation.aiohttp_client.version import __version__
95 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
96 from opentelemetry.instrumentation.utils import (
97 _SUPPRESS_INSTRUMENTATION_KEY,
98 http_status_to_status_code,
99 unwrap,
100 )
101 from opentelemetry.propagate import inject
102 from opentelemetry.semconv.trace import SpanAttributes
103 from opentelemetry.trace import Span, SpanKind, TracerProvider, get_tracer
104 from opentelemetry.trace.status import Status, StatusCode
105 from opentelemetry.util.http import remove_url_credentials
106
107 _UrlFilterT = typing.Optional[typing.Callable[[yarl.URL], str]]
108 _RequestHookT = typing.Optional[
109 typing.Callable[[Span, aiohttp.TraceRequestStartParams], None]
110 ]
111 _ResponseHookT = typing.Optional[
112 typing.Callable[
113 [
114 Span,
115 typing.Union[
116 aiohttp.TraceRequestEndParams,
117 aiohttp.TraceRequestExceptionParams,
118 ],
119 ],
120 None,
121 ]
122 ]
123
124
125 def create_trace_config(
126 url_filter: _UrlFilterT = None,
127 request_hook: _RequestHookT = None,
128 response_hook: _ResponseHookT = None,
129 tracer_provider: TracerProvider = None,
130 ) -> aiohttp.TraceConfig:
131 """Create an aiohttp-compatible trace configuration.
132
133 One span is created for the entire HTTP request, including initial
134 TCP/TLS setup if the connection doesn't exist.
135
136 By default the span name is set to the HTTP request method.
137
138 Example usage:
139
140 .. code:: python
141
142 import aiohttp
143 from opentelemetry.instrumentation.aiohttp_client import create_trace_config
144
145 async with aiohttp.ClientSession(trace_configs=[create_trace_config()]) as session:
146 async with session.get(url) as response:
147 await response.text()
148
149
150 :param url_filter: A callback to process the requested URL prior to adding
151 it as a span attribute. This can be useful to remove sensitive data
152 such as API keys or user personal information.
153
154 :param Callable request_hook: Optional callback that can modify span name and request params.
155 :param Callable response_hook: Optional callback that can modify span name and response params.
156 :param tracer_provider: optional TracerProvider from which to get a Tracer
157
158 :return: An object suitable for use with :py:class:`aiohttp.ClientSession`.
159 :rtype: :py:class:`aiohttp.TraceConfig`
160 """
161 # `aiohttp.TraceRequestStartParams` resolves to `aiohttp.tracing.TraceRequestStartParams`
162 # which doesn't exist in the aiohttp intersphinx inventory.
163 # Explicitly specify the type for the `request_hook` and `response_hook` param and rtype to work
164 # around this issue.
165
166 tracer = get_tracer(__name__, __version__, tracer_provider)
167
168 def _end_trace(trace_config_ctx: types.SimpleNamespace):
169 context_api.detach(trace_config_ctx.token)
170 trace_config_ctx.span.end()
171
172 async def on_request_start(
173 unused_session: aiohttp.ClientSession,
174 trace_config_ctx: types.SimpleNamespace,
175 params: aiohttp.TraceRequestStartParams,
176 ):
177 if context_api.get_value(_SUPPRESS_INSTRUMENTATION_KEY):
178 trace_config_ctx.span = None
179 return
180
181 http_method = params.method.upper()
182 request_span_name = f"HTTP {http_method}"
183
184 trace_config_ctx.span = trace_config_ctx.tracer.start_span(
185 request_span_name,
186 kind=SpanKind.CLIENT,
187 )
188
189 if callable(request_hook):
190 request_hook(trace_config_ctx.span, params)
191
192 if trace_config_ctx.span.is_recording():
193 attributes = {
194 SpanAttributes.HTTP_METHOD: http_method,
195 SpanAttributes.HTTP_URL: remove_url_credentials(
196 trace_config_ctx.url_filter(params.url)
197 )
198 if callable(trace_config_ctx.url_filter)
199 else remove_url_credentials(str(params.url)),
200 }
201 for key, value in attributes.items():
202 trace_config_ctx.span.set_attribute(key, value)
203
204 trace_config_ctx.token = context_api.attach(
205 trace.set_span_in_context(trace_config_ctx.span)
206 )
207
208 inject(params.headers)
209
210 async def on_request_end(
211 unused_session: aiohttp.ClientSession,
212 trace_config_ctx: types.SimpleNamespace,
213 params: aiohttp.TraceRequestEndParams,
214 ):
215 if trace_config_ctx.span is None:
216 return
217
218 if callable(response_hook):
219 response_hook(trace_config_ctx.span, params)
220
221 if trace_config_ctx.span.is_recording():
222 trace_config_ctx.span.set_status(
223 Status(http_status_to_status_code(int(params.response.status)))
224 )
225 trace_config_ctx.span.set_attribute(
226 SpanAttributes.HTTP_STATUS_CODE, params.response.status
227 )
228 _end_trace(trace_config_ctx)
229
230 async def on_request_exception(
231 unused_session: aiohttp.ClientSession,
232 trace_config_ctx: types.SimpleNamespace,
233 params: aiohttp.TraceRequestExceptionParams,
234 ):
235 if trace_config_ctx.span is None:
236 return
237
238 if callable(response_hook):
239 response_hook(trace_config_ctx.span, params)
240
241 if trace_config_ctx.span.is_recording() and params.exception:
242 trace_config_ctx.span.set_status(Status(StatusCode.ERROR))
243 trace_config_ctx.span.record_exception(params.exception)
244 _end_trace(trace_config_ctx)
245
246 def _trace_config_ctx_factory(**kwargs):
247 kwargs.setdefault("trace_request_ctx", {})
248 return types.SimpleNamespace(
249 tracer=tracer, url_filter=url_filter, **kwargs
250 )
251
252 trace_config = aiohttp.TraceConfig(
253 trace_config_ctx_factory=_trace_config_ctx_factory
254 )
255
256 trace_config.on_request_start.append(on_request_start)
257 trace_config.on_request_end.append(on_request_end)
258 trace_config.on_request_exception.append(on_request_exception)
259
260 return trace_config
261
262
263 def _instrument(
264 tracer_provider: TracerProvider = None,
265 url_filter: _UrlFilterT = None,
266 request_hook: _RequestHookT = None,
267 response_hook: _ResponseHookT = None,
268 ):
269 """Enables tracing of all ClientSessions
270
271 When a ClientSession gets created a TraceConfig is automatically added to
272 the session's trace_configs.
273 """
274 # pylint:disable=unused-argument
275 def instrumented_init(wrapped, instance, args, kwargs):
276 if context_api.get_value(_SUPPRESS_INSTRUMENTATION_KEY):
277 return wrapped(*args, **kwargs)
278
279 trace_configs = list(kwargs.get("trace_configs") or ())
280
281 trace_config = create_trace_config(
282 url_filter=url_filter,
283 request_hook=request_hook,
284 response_hook=response_hook,
285 tracer_provider=tracer_provider,
286 )
287 trace_config._is_instrumented_by_opentelemetry = True
288 trace_configs.append(trace_config)
289
290 kwargs["trace_configs"] = trace_configs
291 return wrapped(*args, **kwargs)
292
293 wrapt.wrap_function_wrapper(
294 aiohttp.ClientSession, "__init__", instrumented_init
295 )
296
297
298 def _uninstrument():
299 """Disables instrumenting for all newly created ClientSessions"""
300 unwrap(aiohttp.ClientSession, "__init__")
301
302
303 def _uninstrument_session(client_session: aiohttp.ClientSession):
304 """Disables instrumentation for the given ClientSession"""
305 # pylint: disable=protected-access
306 trace_configs = client_session._trace_configs
307 client_session._trace_configs = [
308 trace_config
309 for trace_config in trace_configs
310 if not hasattr(trace_config, "_is_instrumented_by_opentelemetry")
311 ]
312
313
314 class AioHttpClientInstrumentor(BaseInstrumentor):
315 """An instrumentor for aiohttp client sessions
316
317 See `BaseInstrumentor`
318 """
319
320 def instrumentation_dependencies(self) -> Collection[str]:
321 return _instruments
322
323 def _instrument(self, **kwargs):
324 """Instruments aiohttp ClientSession
325
326 Args:
327 **kwargs: Optional arguments
328 ``tracer_provider``: a TracerProvider, defaults to global
329 ``url_filter``: A callback to process the requested URL prior to adding
330 it as a span attribute. This can be useful to remove sensitive data
331 such as API keys or user personal information.
332 ``request_hook``: An optional callback that is invoked right after a span is created.
333 ``response_hook``: An optional callback which is invoked right before the span is finished processing a response.
334 """
335 _instrument(
336 tracer_provider=kwargs.get("tracer_provider"),
337 url_filter=kwargs.get("url_filter"),
338 request_hook=kwargs.get("request_hook"),
339 response_hook=kwargs.get("response_hook"),
340 )
341
342 def _uninstrument(self, **kwargs):
343 _uninstrument()
344
345 @staticmethod
346 def uninstrument_session(client_session: aiohttp.ClientSession):
347 """Disables instrumentation for the given session"""
348 _uninstrument_session(client_session)
349
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/instrumentation/opentelemetry-instrumentation-aiohttp-client/src/opentelemetry/instrumentation/aiohttp_client/__init__.py b/instrumentation/opentelemetry-instrumentation-aiohttp-client/src/opentelemetry/instrumentation/aiohttp_client/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-aiohttp-client/src/opentelemetry/instrumentation/aiohttp_client/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-aiohttp-client/src/opentelemetry/instrumentation/aiohttp_client/__init__.py
@@ -180,27 +180,24 @@
http_method = params.method.upper()
request_span_name = f"HTTP {http_method}"
+ request_url = (
+ remove_url_credentials(trace_config_ctx.url_filter(params.url))
+ if callable(trace_config_ctx.url_filter)
+ else remove_url_credentials(str(params.url))
+ )
+
+ span_attributes = {
+ SpanAttributes.HTTP_METHOD: http_method,
+ SpanAttributes.HTTP_URL: request_url,
+ }
trace_config_ctx.span = trace_config_ctx.tracer.start_span(
- request_span_name,
- kind=SpanKind.CLIENT,
+ request_span_name, kind=SpanKind.CLIENT, attributes=span_attributes
)
if callable(request_hook):
request_hook(trace_config_ctx.span, params)
- if trace_config_ctx.span.is_recording():
- attributes = {
- SpanAttributes.HTTP_METHOD: http_method,
- SpanAttributes.HTTP_URL: remove_url_credentials(
- trace_config_ctx.url_filter(params.url)
- )
- if callable(trace_config_ctx.url_filter)
- else remove_url_credentials(str(params.url)),
- }
- for key, value in attributes.items():
- trace_config_ctx.span.set_attribute(key, value)
-
trace_config_ctx.token = context_api.attach(
trace.set_span_in_context(trace_config_ctx.span)
)
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-aiohttp-client/src/opentelemetry/instrumentation/aiohttp_client/__init__.py b/instrumentation/opentelemetry-instrumentation-aiohttp-client/src/opentelemetry/instrumentation/aiohttp_client/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-aiohttp-client/src/opentelemetry/instrumentation/aiohttp_client/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-aiohttp-client/src/opentelemetry/instrumentation/aiohttp_client/__init__.py\n@@ -180,27 +180,24 @@\n \n http_method = params.method.upper()\n request_span_name = f\"HTTP {http_method}\"\n+ request_url = (\n+ remove_url_credentials(trace_config_ctx.url_filter(params.url))\n+ if callable(trace_config_ctx.url_filter)\n+ else remove_url_credentials(str(params.url))\n+ )\n+\n+ span_attributes = {\n+ SpanAttributes.HTTP_METHOD: http_method,\n+ SpanAttributes.HTTP_URL: request_url,\n+ }\n \n trace_config_ctx.span = trace_config_ctx.tracer.start_span(\n- request_span_name,\n- kind=SpanKind.CLIENT,\n+ request_span_name, kind=SpanKind.CLIENT, attributes=span_attributes\n )\n \n if callable(request_hook):\n request_hook(trace_config_ctx.span, params)\n \n- if trace_config_ctx.span.is_recording():\n- attributes = {\n- SpanAttributes.HTTP_METHOD: http_method,\n- SpanAttributes.HTTP_URL: remove_url_credentials(\n- trace_config_ctx.url_filter(params.url)\n- )\n- if callable(trace_config_ctx.url_filter)\n- else remove_url_credentials(str(params.url)),\n- }\n- for key, value in attributes.items():\n- trace_config_ctx.span.set_attribute(key, value)\n-\n trace_config_ctx.token = context_api.attach(\n trace.set_span_in_context(trace_config_ctx.span)\n )\n", "issue": "opentelemetry-instrumentation-aiohttp-client\n\n", "before_files": [{"content": "# Copyright 2020, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThe opentelemetry-instrumentation-aiohttp-client package allows tracing HTTP\nrequests made by the aiohttp client library.\n\nUsage\n-----\nExplicitly instrumenting a single client session:\n\n.. code:: python\n\n import aiohttp\n from opentelemetry.instrumentation.aiohttp_client import create_trace_config\n import yarl\n\n def strip_query_params(url: yarl.URL) -> str:\n return str(url.with_query(None))\n\n async with aiohttp.ClientSession(trace_configs=[create_trace_config(\n # Remove all query params from the URL attribute on the span.\n url_filter=strip_query_params,\n )]) as session:\n async with session.get(url) as response:\n await response.text()\n\nInstrumenting all client sessions:\n\n.. 
code:: python\n\n import aiohttp\n from opentelemetry.instrumentation.aiohttp_client import (\n AioHttpClientInstrumentor\n )\n\n # Enable instrumentation\n AioHttpClientInstrumentor().instrument()\n\n # Create a session and make an HTTP get request\n async with aiohttp.ClientSession() as session:\n async with session.get(url) as response:\n await response.text()\n\nConfiguration\n-------------\n\nRequest/Response hooks\n**********************\n\nUtilize request/reponse hooks to execute custom logic to be performed before/after performing a request.\n\n.. code-block:: python\n\n def request_hook(span: Span, params: aiohttp.TraceRequestStartParams):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_request_hook\", \"some-value\")\n\n def response_hook(span: Span, params: typing.Union[\n aiohttp.TraceRequestEndParams,\n aiohttp.TraceRequestExceptionParams,\n ]):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_response_hook\", \"some-value\")\n\n AioHttpClientInstrumentor().instrument(request_hook=request_hook, response_hook=response_hook)\n\nAPI\n---\n\"\"\"\n\nimport types\nimport typing\nfrom typing import Collection\n\nimport aiohttp\nimport wrapt\nimport yarl\n\nfrom opentelemetry import context as context_api\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.aiohttp_client.package import _instruments\nfrom opentelemetry.instrumentation.aiohttp_client.version import __version__\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.utils import (\n _SUPPRESS_INSTRUMENTATION_KEY,\n http_status_to_status_code,\n unwrap,\n)\nfrom opentelemetry.propagate import inject\nfrom opentelemetry.semconv.trace import SpanAttributes\nfrom opentelemetry.trace import Span, SpanKind, TracerProvider, get_tracer\nfrom opentelemetry.trace.status import Status, StatusCode\nfrom opentelemetry.util.http import remove_url_credentials\n\n_UrlFilterT = typing.Optional[typing.Callable[[yarl.URL], str]]\n_RequestHookT = typing.Optional[\n typing.Callable[[Span, aiohttp.TraceRequestStartParams], None]\n]\n_ResponseHookT = typing.Optional[\n typing.Callable[\n [\n Span,\n typing.Union[\n aiohttp.TraceRequestEndParams,\n aiohttp.TraceRequestExceptionParams,\n ],\n ],\n None,\n ]\n]\n\n\ndef create_trace_config(\n url_filter: _UrlFilterT = None,\n request_hook: _RequestHookT = None,\n response_hook: _ResponseHookT = None,\n tracer_provider: TracerProvider = None,\n) -> aiohttp.TraceConfig:\n \"\"\"Create an aiohttp-compatible trace configuration.\n\n One span is created for the entire HTTP request, including initial\n TCP/TLS setup if the connection doesn't exist.\n\n By default the span name is set to the HTTP request method.\n\n Example usage:\n\n .. code:: python\n\n import aiohttp\n from opentelemetry.instrumentation.aiohttp_client import create_trace_config\n\n async with aiohttp.ClientSession(trace_configs=[create_trace_config()]) as session:\n async with session.get(url) as response:\n await response.text()\n\n\n :param url_filter: A callback to process the requested URL prior to adding\n it as a span attribute. 
This can be useful to remove sensitive data\n such as API keys or user personal information.\n\n :param Callable request_hook: Optional callback that can modify span name and request params.\n :param Callable response_hook: Optional callback that can modify span name and response params.\n :param tracer_provider: optional TracerProvider from which to get a Tracer\n\n :return: An object suitable for use with :py:class:`aiohttp.ClientSession`.\n :rtype: :py:class:`aiohttp.TraceConfig`\n \"\"\"\n # `aiohttp.TraceRequestStartParams` resolves to `aiohttp.tracing.TraceRequestStartParams`\n # which doesn't exist in the aiohttp intersphinx inventory.\n # Explicitly specify the type for the `request_hook` and `response_hook` param and rtype to work\n # around this issue.\n\n tracer = get_tracer(__name__, __version__, tracer_provider)\n\n def _end_trace(trace_config_ctx: types.SimpleNamespace):\n context_api.detach(trace_config_ctx.token)\n trace_config_ctx.span.end()\n\n async def on_request_start(\n unused_session: aiohttp.ClientSession,\n trace_config_ctx: types.SimpleNamespace,\n params: aiohttp.TraceRequestStartParams,\n ):\n if context_api.get_value(_SUPPRESS_INSTRUMENTATION_KEY):\n trace_config_ctx.span = None\n return\n\n http_method = params.method.upper()\n request_span_name = f\"HTTP {http_method}\"\n\n trace_config_ctx.span = trace_config_ctx.tracer.start_span(\n request_span_name,\n kind=SpanKind.CLIENT,\n )\n\n if callable(request_hook):\n request_hook(trace_config_ctx.span, params)\n\n if trace_config_ctx.span.is_recording():\n attributes = {\n SpanAttributes.HTTP_METHOD: http_method,\n SpanAttributes.HTTP_URL: remove_url_credentials(\n trace_config_ctx.url_filter(params.url)\n )\n if callable(trace_config_ctx.url_filter)\n else remove_url_credentials(str(params.url)),\n }\n for key, value in attributes.items():\n trace_config_ctx.span.set_attribute(key, value)\n\n trace_config_ctx.token = context_api.attach(\n trace.set_span_in_context(trace_config_ctx.span)\n )\n\n inject(params.headers)\n\n async def on_request_end(\n unused_session: aiohttp.ClientSession,\n trace_config_ctx: types.SimpleNamespace,\n params: aiohttp.TraceRequestEndParams,\n ):\n if trace_config_ctx.span is None:\n return\n\n if callable(response_hook):\n response_hook(trace_config_ctx.span, params)\n\n if trace_config_ctx.span.is_recording():\n trace_config_ctx.span.set_status(\n Status(http_status_to_status_code(int(params.response.status)))\n )\n trace_config_ctx.span.set_attribute(\n SpanAttributes.HTTP_STATUS_CODE, params.response.status\n )\n _end_trace(trace_config_ctx)\n\n async def on_request_exception(\n unused_session: aiohttp.ClientSession,\n trace_config_ctx: types.SimpleNamespace,\n params: aiohttp.TraceRequestExceptionParams,\n ):\n if trace_config_ctx.span is None:\n return\n\n if callable(response_hook):\n response_hook(trace_config_ctx.span, params)\n\n if trace_config_ctx.span.is_recording() and params.exception:\n trace_config_ctx.span.set_status(Status(StatusCode.ERROR))\n trace_config_ctx.span.record_exception(params.exception)\n _end_trace(trace_config_ctx)\n\n def _trace_config_ctx_factory(**kwargs):\n kwargs.setdefault(\"trace_request_ctx\", {})\n return types.SimpleNamespace(\n tracer=tracer, url_filter=url_filter, **kwargs\n )\n\n trace_config = aiohttp.TraceConfig(\n trace_config_ctx_factory=_trace_config_ctx_factory\n )\n\n trace_config.on_request_start.append(on_request_start)\n trace_config.on_request_end.append(on_request_end)\n 
trace_config.on_request_exception.append(on_request_exception)\n\n return trace_config\n\n\ndef _instrument(\n tracer_provider: TracerProvider = None,\n url_filter: _UrlFilterT = None,\n request_hook: _RequestHookT = None,\n response_hook: _ResponseHookT = None,\n):\n \"\"\"Enables tracing of all ClientSessions\n\n When a ClientSession gets created a TraceConfig is automatically added to\n the session's trace_configs.\n \"\"\"\n # pylint:disable=unused-argument\n def instrumented_init(wrapped, instance, args, kwargs):\n if context_api.get_value(_SUPPRESS_INSTRUMENTATION_KEY):\n return wrapped(*args, **kwargs)\n\n trace_configs = list(kwargs.get(\"trace_configs\") or ())\n\n trace_config = create_trace_config(\n url_filter=url_filter,\n request_hook=request_hook,\n response_hook=response_hook,\n tracer_provider=tracer_provider,\n )\n trace_config._is_instrumented_by_opentelemetry = True\n trace_configs.append(trace_config)\n\n kwargs[\"trace_configs\"] = trace_configs\n return wrapped(*args, **kwargs)\n\n wrapt.wrap_function_wrapper(\n aiohttp.ClientSession, \"__init__\", instrumented_init\n )\n\n\ndef _uninstrument():\n \"\"\"Disables instrumenting for all newly created ClientSessions\"\"\"\n unwrap(aiohttp.ClientSession, \"__init__\")\n\n\ndef _uninstrument_session(client_session: aiohttp.ClientSession):\n \"\"\"Disables instrumentation for the given ClientSession\"\"\"\n # pylint: disable=protected-access\n trace_configs = client_session._trace_configs\n client_session._trace_configs = [\n trace_config\n for trace_config in trace_configs\n if not hasattr(trace_config, \"_is_instrumented_by_opentelemetry\")\n ]\n\n\nclass AioHttpClientInstrumentor(BaseInstrumentor):\n \"\"\"An instrumentor for aiohttp client sessions\n\n See `BaseInstrumentor`\n \"\"\"\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n \"\"\"Instruments aiohttp ClientSession\n\n Args:\n **kwargs: Optional arguments\n ``tracer_provider``: a TracerProvider, defaults to global\n ``url_filter``: A callback to process the requested URL prior to adding\n it as a span attribute. 
This can be useful to remove sensitive data\n such as API keys or user personal information.\n ``request_hook``: An optional callback that is invoked right after a span is created.\n ``response_hook``: An optional callback which is invoked right before the span is finished processing a response.\n \"\"\"\n _instrument(\n tracer_provider=kwargs.get(\"tracer_provider\"),\n url_filter=kwargs.get(\"url_filter\"),\n request_hook=kwargs.get(\"request_hook\"),\n response_hook=kwargs.get(\"response_hook\"),\n )\n\n def _uninstrument(self, **kwargs):\n _uninstrument()\n\n @staticmethod\n def uninstrument_session(client_session: aiohttp.ClientSession):\n \"\"\"Disables instrumentation for the given session\"\"\"\n _uninstrument_session(client_session)\n", "path": "instrumentation/opentelemetry-instrumentation-aiohttp-client/src/opentelemetry/instrumentation/aiohttp_client/__init__.py"}], "after_files": [{"content": "# Copyright 2020, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThe opentelemetry-instrumentation-aiohttp-client package allows tracing HTTP\nrequests made by the aiohttp client library.\n\nUsage\n-----\nExplicitly instrumenting a single client session:\n\n.. code:: python\n\n import aiohttp\n from opentelemetry.instrumentation.aiohttp_client import create_trace_config\n import yarl\n\n def strip_query_params(url: yarl.URL) -> str:\n return str(url.with_query(None))\n\n async with aiohttp.ClientSession(trace_configs=[create_trace_config(\n # Remove all query params from the URL attribute on the span.\n url_filter=strip_query_params,\n )]) as session:\n async with session.get(url) as response:\n await response.text()\n\nInstrumenting all client sessions:\n\n.. code:: python\n\n import aiohttp\n from opentelemetry.instrumentation.aiohttp_client import (\n AioHttpClientInstrumentor\n )\n\n # Enable instrumentation\n AioHttpClientInstrumentor().instrument()\n\n # Create a session and make an HTTP get request\n async with aiohttp.ClientSession() as session:\n async with session.get(url) as response:\n await response.text()\n\nConfiguration\n-------------\n\nRequest/Response hooks\n**********************\n\nUtilize request/reponse hooks to execute custom logic to be performed before/after performing a request.\n\n.. 
code-block:: python\n\n def request_hook(span: Span, params: aiohttp.TraceRequestStartParams):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_request_hook\", \"some-value\")\n\n def response_hook(span: Span, params: typing.Union[\n aiohttp.TraceRequestEndParams,\n aiohttp.TraceRequestExceptionParams,\n ]):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_response_hook\", \"some-value\")\n\n AioHttpClientInstrumentor().instrument(request_hook=request_hook, response_hook=response_hook)\n\nAPI\n---\n\"\"\"\n\nimport types\nimport typing\nfrom typing import Collection\n\nimport aiohttp\nimport wrapt\nimport yarl\n\nfrom opentelemetry import context as context_api\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.aiohttp_client.package import _instruments\nfrom opentelemetry.instrumentation.aiohttp_client.version import __version__\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.utils import (\n _SUPPRESS_INSTRUMENTATION_KEY,\n http_status_to_status_code,\n unwrap,\n)\nfrom opentelemetry.propagate import inject\nfrom opentelemetry.semconv.trace import SpanAttributes\nfrom opentelemetry.trace import Span, SpanKind, TracerProvider, get_tracer\nfrom opentelemetry.trace.status import Status, StatusCode\nfrom opentelemetry.util.http import remove_url_credentials\n\n_UrlFilterT = typing.Optional[typing.Callable[[yarl.URL], str]]\n_RequestHookT = typing.Optional[\n typing.Callable[[Span, aiohttp.TraceRequestStartParams], None]\n]\n_ResponseHookT = typing.Optional[\n typing.Callable[\n [\n Span,\n typing.Union[\n aiohttp.TraceRequestEndParams,\n aiohttp.TraceRequestExceptionParams,\n ],\n ],\n None,\n ]\n]\n\n\ndef create_trace_config(\n url_filter: _UrlFilterT = None,\n request_hook: _RequestHookT = None,\n response_hook: _ResponseHookT = None,\n tracer_provider: TracerProvider = None,\n) -> aiohttp.TraceConfig:\n \"\"\"Create an aiohttp-compatible trace configuration.\n\n One span is created for the entire HTTP request, including initial\n TCP/TLS setup if the connection doesn't exist.\n\n By default the span name is set to the HTTP request method.\n\n Example usage:\n\n .. code:: python\n\n import aiohttp\n from opentelemetry.instrumentation.aiohttp_client import create_trace_config\n\n async with aiohttp.ClientSession(trace_configs=[create_trace_config()]) as session:\n async with session.get(url) as response:\n await response.text()\n\n\n :param url_filter: A callback to process the requested URL prior to adding\n it as a span attribute. 
This can be useful to remove sensitive data\n such as API keys or user personal information.\n\n :param Callable request_hook: Optional callback that can modify span name and request params.\n :param Callable response_hook: Optional callback that can modify span name and response params.\n :param tracer_provider: optional TracerProvider from which to get a Tracer\n\n :return: An object suitable for use with :py:class:`aiohttp.ClientSession`.\n :rtype: :py:class:`aiohttp.TraceConfig`\n \"\"\"\n # `aiohttp.TraceRequestStartParams` resolves to `aiohttp.tracing.TraceRequestStartParams`\n # which doesn't exist in the aiohttp intersphinx inventory.\n # Explicitly specify the type for the `request_hook` and `response_hook` param and rtype to work\n # around this issue.\n\n tracer = get_tracer(__name__, __version__, tracer_provider)\n\n def _end_trace(trace_config_ctx: types.SimpleNamespace):\n context_api.detach(trace_config_ctx.token)\n trace_config_ctx.span.end()\n\n async def on_request_start(\n unused_session: aiohttp.ClientSession,\n trace_config_ctx: types.SimpleNamespace,\n params: aiohttp.TraceRequestStartParams,\n ):\n if context_api.get_value(_SUPPRESS_INSTRUMENTATION_KEY):\n trace_config_ctx.span = None\n return\n\n http_method = params.method.upper()\n request_span_name = f\"HTTP {http_method}\"\n request_url = (\n remove_url_credentials(trace_config_ctx.url_filter(params.url))\n if callable(trace_config_ctx.url_filter)\n else remove_url_credentials(str(params.url))\n )\n\n span_attributes = {\n SpanAttributes.HTTP_METHOD: http_method,\n SpanAttributes.HTTP_URL: request_url,\n }\n\n trace_config_ctx.span = trace_config_ctx.tracer.start_span(\n request_span_name, kind=SpanKind.CLIENT, attributes=span_attributes\n )\n\n if callable(request_hook):\n request_hook(trace_config_ctx.span, params)\n\n trace_config_ctx.token = context_api.attach(\n trace.set_span_in_context(trace_config_ctx.span)\n )\n\n inject(params.headers)\n\n async def on_request_end(\n unused_session: aiohttp.ClientSession,\n trace_config_ctx: types.SimpleNamespace,\n params: aiohttp.TraceRequestEndParams,\n ):\n if trace_config_ctx.span is None:\n return\n\n if callable(response_hook):\n response_hook(trace_config_ctx.span, params)\n\n if trace_config_ctx.span.is_recording():\n trace_config_ctx.span.set_status(\n Status(http_status_to_status_code(int(params.response.status)))\n )\n trace_config_ctx.span.set_attribute(\n SpanAttributes.HTTP_STATUS_CODE, params.response.status\n )\n _end_trace(trace_config_ctx)\n\n async def on_request_exception(\n unused_session: aiohttp.ClientSession,\n trace_config_ctx: types.SimpleNamespace,\n params: aiohttp.TraceRequestExceptionParams,\n ):\n if trace_config_ctx.span is None:\n return\n\n if callable(response_hook):\n response_hook(trace_config_ctx.span, params)\n\n if trace_config_ctx.span.is_recording() and params.exception:\n trace_config_ctx.span.set_status(Status(StatusCode.ERROR))\n trace_config_ctx.span.record_exception(params.exception)\n _end_trace(trace_config_ctx)\n\n def _trace_config_ctx_factory(**kwargs):\n kwargs.setdefault(\"trace_request_ctx\", {})\n return types.SimpleNamespace(\n tracer=tracer, url_filter=url_filter, **kwargs\n )\n\n trace_config = aiohttp.TraceConfig(\n trace_config_ctx_factory=_trace_config_ctx_factory\n )\n\n trace_config.on_request_start.append(on_request_start)\n trace_config.on_request_end.append(on_request_end)\n trace_config.on_request_exception.append(on_request_exception)\n\n return trace_config\n\n\ndef _instrument(\n tracer_provider: 
TracerProvider = None,\n url_filter: _UrlFilterT = None,\n request_hook: _RequestHookT = None,\n response_hook: _ResponseHookT = None,\n):\n \"\"\"Enables tracing of all ClientSessions\n\n When a ClientSession gets created a TraceConfig is automatically added to\n the session's trace_configs.\n \"\"\"\n # pylint:disable=unused-argument\n def instrumented_init(wrapped, instance, args, kwargs):\n if context_api.get_value(_SUPPRESS_INSTRUMENTATION_KEY):\n return wrapped(*args, **kwargs)\n\n trace_configs = list(kwargs.get(\"trace_configs\") or ())\n\n trace_config = create_trace_config(\n url_filter=url_filter,\n request_hook=request_hook,\n response_hook=response_hook,\n tracer_provider=tracer_provider,\n )\n trace_config._is_instrumented_by_opentelemetry = True\n trace_configs.append(trace_config)\n\n kwargs[\"trace_configs\"] = trace_configs\n return wrapped(*args, **kwargs)\n\n wrapt.wrap_function_wrapper(\n aiohttp.ClientSession, \"__init__\", instrumented_init\n )\n\n\ndef _uninstrument():\n \"\"\"Disables instrumenting for all newly created ClientSessions\"\"\"\n unwrap(aiohttp.ClientSession, \"__init__\")\n\n\ndef _uninstrument_session(client_session: aiohttp.ClientSession):\n \"\"\"Disables instrumentation for the given ClientSession\"\"\"\n # pylint: disable=protected-access\n trace_configs = client_session._trace_configs\n client_session._trace_configs = [\n trace_config\n for trace_config in trace_configs\n if not hasattr(trace_config, \"_is_instrumented_by_opentelemetry\")\n ]\n\n\nclass AioHttpClientInstrumentor(BaseInstrumentor):\n \"\"\"An instrumentor for aiohttp client sessions\n\n See `BaseInstrumentor`\n \"\"\"\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n \"\"\"Instruments aiohttp ClientSession\n\n Args:\n **kwargs: Optional arguments\n ``tracer_provider``: a TracerProvider, defaults to global\n ``url_filter``: A callback to process the requested URL prior to adding\n it as a span attribute. This can be useful to remove sensitive data\n such as API keys or user personal information.\n ``request_hook``: An optional callback that is invoked right after a span is created.\n ``response_hook``: An optional callback which is invoked right before the span is finished processing a response.\n \"\"\"\n _instrument(\n tracer_provider=kwargs.get(\"tracer_provider\"),\n url_filter=kwargs.get(\"url_filter\"),\n request_hook=kwargs.get(\"request_hook\"),\n response_hook=kwargs.get(\"response_hook\"),\n )\n\n def _uninstrument(self, **kwargs):\n _uninstrument()\n\n @staticmethod\n def uninstrument_session(client_session: aiohttp.ClientSession):\n \"\"\"Disables instrumentation for the given session\"\"\"\n _uninstrument_session(client_session)\n", "path": "instrumentation/opentelemetry-instrumentation-aiohttp-client/src/opentelemetry/instrumentation/aiohttp_client/__init__.py"}]}
| 3,715 | 418 |
gh_patches_debug_15740
|
rasdani/github-patches
|
git_diff
|
StackStorm__st2-3469
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Default values in pack config schema not being honored
If a pack has a schema that declares a default value for a specific property and the property is absent in the config instance (`/opt/stackstorm/configs/<pack>.yaml`), then when a Python runner attempts to read this value from the config, a `KeyError` is thrown instead of the default value being returned.
# Steps to reproduce
I've created a test pack that demonstrates this: https://github.com/nmaludy/stackstorm-testconfigdefault
```
st2 pack install https://github.com/nmaludy/stackstorm-testconfigdefault
st2 run testconfigdefault.get_config key='output'
```
# Output:
```
[root@stackstorm ~]# st2 run testconfigdefault.get_config key='output'
.
id: 593fdecfa814c06997fc8fd4
status: failed
parameters:
key: output
result:
exit_code: 1
result: None
stderr: "Traceback (most recent call last):
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/runners/python_action_wrapper.py", line 259, in <module>
obj.run()
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/runners/python_action_wrapper.py", line 155, in run
output = action.run(**self._parameters)
File "/opt/stackstorm/packs/testconfigdefault/actions/get_config.py", line 11, in run
return self.config[key]
KeyError: u'output'
"
stdout: ''
```
# Expected output:
I expect the default value of 'json' from the schema to be returned. Maybe I'm incorrect in my assumption?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `st2common/st2common/util/config_loader.py`
Content:
```
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import copy
17
18 import six
19
20 from oslo_config import cfg
21
22 from st2common import log as logging
23 from st2common.models.db.pack import ConfigDB
24 from st2common.persistence.pack import ConfigSchema
25 from st2common.persistence.pack import Config
26 from st2common.content import utils as content_utils
27 from st2common.util import jinja as jinja_utils
28 from st2common.util.templating import render_template_with_system_and_user_context
29 from st2common.util.config_parser import ContentPackConfigParser
30 from st2common.exceptions.db import StackStormDBObjectNotFoundError
31
32 __all__ = [
33 'ContentPackConfigLoader'
34 ]
35
36 LOG = logging.getLogger(__name__)
37
38
39 class ContentPackConfigLoader(object):
40 """
41 Class which loads and resolves all the config values and returns a dictionary of resolved values
42 which can be passed to the resource.
43
44 It loads and resolves values in the following order:
45
46 1. Static values from <pack path>/config.yaml file
47 2. Dynamic and or static values from /opt/stackstorm/configs/<pack name>.yaml file.
48
49 Values are merged from left to right which means values from "<pack name>.yaml" file have
50 precedence and override values from pack local config file.
51 """
52
53 def __init__(self, pack_name, user=None):
54 self.pack_name = pack_name
55 self.user = user or cfg.CONF.system_user.user
56
57 self.pack_path = content_utils.get_pack_base_path(pack_name=pack_name)
58 self._config_parser = ContentPackConfigParser(pack_name=pack_name)
59
60 def get_config(self):
61 result = {}
62
63 # 1. Retrieve values from pack local config.yaml file
64 config = self._config_parser.get_config()
65
66 if config:
67 config = config.config or {}
68 result.update(config)
69
70 # Retrieve corresponding ConfigDB and ConfigSchemaDB object
71 # Note: ConfigSchemaDB is optional right now. If it doesn't exist, we assume every value
72 # is of a type string
73 try:
74 config_db = Config.get_by_pack(value=self.pack_name)
75 except StackStormDBObjectNotFoundError:
76 # Corresponding pack config doesn't exist. We set config_db to an empty config so
77 # that the default values from config schema are still correctly applied even if
78 # pack doesn't contain a config.
79 config_db = ConfigDB(pack=self.pack_name, values={})
80
81 try:
82 config_schema_db = ConfigSchema.get_by_pack(value=self.pack_name)
83 except StackStormDBObjectNotFoundError:
84 config_schema_db = None
85
86 # 2. Retrieve values from "global" pack config file (if available) and resolve them if
87 # necessary
88 config = self._get_values_for_config(config_schema_db=config_schema_db,
89 config_db=config_db)
90 result.update(config)
91
92 return result
93
94 def _get_values_for_config(self, config_schema_db, config_db):
95 schema_values = getattr(config_schema_db, 'attributes', {})
96 config_values = getattr(config_db, 'values', {})
97
98 config = copy.deepcopy(config_values)
99
100 # Assign dynamic config values based on the values in the datastore
101 config = self._assign_dynamic_config_values(schema=schema_values, config=config)
102
103 # If config_schema is available we do a second pass and set default values for required
104 # items which values are not provided / available in the config itself
105 config = self._assign_default_values(schema=schema_values, config=config)
106 return config
107
108 def _assign_dynamic_config_values(self, schema, config, parent_keys=None):
109 """
110 Assign dynamic config value for a particular config item if the ite utilizes a Jinja
111 expression for dynamic config values.
112
113 Note: This method mutates config argument in place.
114
115 :rtype: ``dict``
116 """
117 parent_keys = parent_keys or []
118
119 for config_item_key, config_item_value in six.iteritems(config):
120 schema_item = schema.get(config_item_key, {})
121 is_dictionary = isinstance(config_item_value, dict)
122
123 # Inspect nested object properties
124 if is_dictionary:
125 parent_keys += [config_item_key]
126 self._assign_dynamic_config_values(schema=schema_item.get('properties', {}),
127 config=config[config_item_key],
128 parent_keys=parent_keys)
129 else:
130 is_jinja_expression = jinja_utils.is_jinja_expression(value=config_item_value)
131
132 if is_jinja_expression:
133 # Resolve / render the Jinja template expression
134 full_config_item_key = '.'.join(parent_keys + [config_item_key])
135 value = self._get_datastore_value_for_expression(key=full_config_item_key,
136 value=config_item_value,
137 config_schema_item=schema_item)
138
139 config[config_item_key] = value
140 else:
141 # Static value, no resolution needed
142 config[config_item_key] = config_item_value
143
144 return config
145
146 def _assign_default_values(self, schema, config):
147 """
148 Assign default values for particular config if default values are provided in the config
149 schema and a value is not specified in the config.
150
151 Note: This method mutates config argument in place.
152
153 :rtype: ``dict``
154 """
155 for schema_item_key, schema_item in six.iteritems(schema):
156 default_value = schema_item.get('default', None)
157 is_required = schema_item.get('required', False)
158 is_object = schema_item.get('type', None) == 'object'
159 has_properties = schema_item.get('properties', None)
160
161 if is_required and default_value and not config.get(schema_item_key, None):
162 config[schema_item_key] = default_value
163
164 # Inspect nested object properties
165 if is_object and has_properties:
166 if not config.get(schema_item_key, None):
167 config[schema_item_key] = {}
168
169 self._assign_default_values(schema=schema_item['properties'],
170 config=config[schema_item_key])
171
172 return config
173
174 def _get_datastore_value_for_expression(self, key, value, config_schema_item=None):
175 """
176 Retrieve datastore value by first resolving the datastore expression and then retrieving
177 the value from the datastore.
178
179 :param key: Full path to the config item key (e.g. "token" / "auth.settings.token", etc.)
180 """
181 from st2common.services.config import deserialize_key_value
182
183 config_schema_item = config_schema_item or {}
184 secret = config_schema_item.get('secret', False)
185
186 try:
187 value = render_template_with_system_and_user_context(value=value,
188 user=self.user)
189 except Exception as e:
190 # Throw a more user-friendly exception on failed render
191 exc_class = type(e)
192 original_msg = str(e)
193 msg = ('Failed to render dynamic configuration value for key "%s" with value '
194 '"%s" for pack "%s" config: %s ' % (key, value, self.pack_name, original_msg))
195 raise exc_class(msg)
196
197 if value:
198 # Deserialize the value
199 value = deserialize_key_value(value=value, secret=secret)
200 else:
201 value = None
202
203 return value
204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/st2common/st2common/util/config_loader.py b/st2common/st2common/util/config_loader.py
--- a/st2common/st2common/util/config_loader.py
+++ b/st2common/st2common/util/config_loader.py
@@ -154,11 +154,10 @@
"""
for schema_item_key, schema_item in six.iteritems(schema):
default_value = schema_item.get('default', None)
- is_required = schema_item.get('required', False)
is_object = schema_item.get('type', None) == 'object'
has_properties = schema_item.get('properties', None)
- if is_required and default_value and not config.get(schema_item_key, None):
+ if default_value and not config.get(schema_item_key, None):
config[schema_item_key] = default_value
# Inspect nested object properties
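To see why dropping the `is_required` guard resolves the reported `KeyError`, here is a minimal standalone sketch of the patched default-assignment step; nested objects are omitted, and the schema below is only modelled on the issue's test pack rather than taken from it:

```python
def assign_default_values(schema, config):
    """Apply a schema default whenever the config has no value for the key,
    whether or not the key is marked as required (the patched behaviour)."""
    for key, item in schema.items():
        default = item.get('default', None)
        if default and not config.get(key, None):
            config[key] = default
    return config


# Hypothetical schema/config mirroring the "testconfigdefault" pack from the issue
schema = {'output': {'type': 'string', 'default': 'json'}}
config = assign_default_values(schema, {})
print(config['output'])  # -> 'json' instead of raising KeyError
```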
|
{"golden_diff": "diff --git a/st2common/st2common/util/config_loader.py b/st2common/st2common/util/config_loader.py\n--- a/st2common/st2common/util/config_loader.py\n+++ b/st2common/st2common/util/config_loader.py\n@@ -154,11 +154,10 @@\n \"\"\"\n for schema_item_key, schema_item in six.iteritems(schema):\n default_value = schema_item.get('default', None)\n- is_required = schema_item.get('required', False)\n is_object = schema_item.get('type', None) == 'object'\n has_properties = schema_item.get('properties', None)\n \n- if is_required and default_value and not config.get(schema_item_key, None):\n+ if default_value and not config.get(schema_item_key, None):\n config[schema_item_key] = default_value\n \n # Inspect nested object properties\n", "issue": "Default values in pack config schema not being honored\nIf a pack has a schema that declares a default value for a specific property and the property is absent in the config instance (`/opt/stackstorm/configs/<pack>.yaml`), then a python runner attempts to read this value from the config a `KeyError` is thrown instead of the default value being returned.\r\n\r\n# Steps to reproduce\r\nI've created a test pack that demonstrates this: https://github.com/nmaludy/stackstorm-testconfigdefault\r\n```\r\nst2 pack install https://github.com/nmaludy/stackstorm-testconfigdefault\r\nst2 run testconfigdefault.get_config key='output'\r\n```\r\n\r\n# Output:\r\n```\r\n[root@stackstorm ~]# st2 run testconfigdefault.get_config key='output'\r\n.\r\nid: 593fdecfa814c06997fc8fd4\r\nstatus: failed\r\nparameters: \r\n key: output\r\nresult: \r\n exit_code: 1\r\n result: None\r\n stderr: \"Traceback (most recent call last):\r\n File \"/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/runners/python_action_wrapper.py\", line 259, in <module>\r\n obj.run()\r\n File \"/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/runners/python_action_wrapper.py\", line 155, in run\r\n output = action.run(**self._parameters)\r\n File \"/opt/stackstorm/packs/testconfigdefault/actions/get_config.py\", line 11, in run\r\n return self.config[key]\r\nKeyError: u'output'\r\n\"\r\n stdout: ''\r\n```\r\n\r\n# Expected output:\r\nI expect to have the default value of 'json' from the schema be returned. Maybe i'm incorrect in my assumption?\r\n\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\n\nimport six\n\nfrom oslo_config import cfg\n\nfrom st2common import log as logging\nfrom st2common.models.db.pack import ConfigDB\nfrom st2common.persistence.pack import ConfigSchema\nfrom st2common.persistence.pack import Config\nfrom st2common.content import utils as content_utils\nfrom st2common.util import jinja as jinja_utils\nfrom st2common.util.templating import render_template_with_system_and_user_context\nfrom st2common.util.config_parser import ContentPackConfigParser\nfrom st2common.exceptions.db import StackStormDBObjectNotFoundError\n\n__all__ = [\n 'ContentPackConfigLoader'\n]\n\nLOG = logging.getLogger(__name__)\n\n\nclass ContentPackConfigLoader(object):\n \"\"\"\n Class which loads and resolves all the config values and returns a dictionary of resolved values\n which can be passed to the resource.\n\n It loads and resolves values in the following order:\n\n 1. Static values from <pack path>/config.yaml file\n 2. Dynamic and or static values from /opt/stackstorm/configs/<pack name>.yaml file.\n\n Values are merged from left to right which means values from \"<pack name>.yaml\" file have\n precedence and override values from pack local config file.\n \"\"\"\n\n def __init__(self, pack_name, user=None):\n self.pack_name = pack_name\n self.user = user or cfg.CONF.system_user.user\n\n self.pack_path = content_utils.get_pack_base_path(pack_name=pack_name)\n self._config_parser = ContentPackConfigParser(pack_name=pack_name)\n\n def get_config(self):\n result = {}\n\n # 1. Retrieve values from pack local config.yaml file\n config = self._config_parser.get_config()\n\n if config:\n config = config.config or {}\n result.update(config)\n\n # Retrieve corresponding ConfigDB and ConfigSchemaDB object\n # Note: ConfigSchemaDB is optional right now. If it doesn't exist, we assume every value\n # is of a type string\n try:\n config_db = Config.get_by_pack(value=self.pack_name)\n except StackStormDBObjectNotFoundError:\n # Corresponding pack config doesn't exist. We set config_db to an empty config so\n # that the default values from config schema are still correctly applied even if\n # pack doesn't contain a config.\n config_db = ConfigDB(pack=self.pack_name, values={})\n\n try:\n config_schema_db = ConfigSchema.get_by_pack(value=self.pack_name)\n except StackStormDBObjectNotFoundError:\n config_schema_db = None\n\n # 2. 
Retrieve values from \"global\" pack config file (if available) and resolve them if\n # necessary\n config = self._get_values_for_config(config_schema_db=config_schema_db,\n config_db=config_db)\n result.update(config)\n\n return result\n\n def _get_values_for_config(self, config_schema_db, config_db):\n schema_values = getattr(config_schema_db, 'attributes', {})\n config_values = getattr(config_db, 'values', {})\n\n config = copy.deepcopy(config_values)\n\n # Assign dynamic config values based on the values in the datastore\n config = self._assign_dynamic_config_values(schema=schema_values, config=config)\n\n # If config_schema is available we do a second pass and set default values for required\n # items which values are not provided / available in the config itself\n config = self._assign_default_values(schema=schema_values, config=config)\n return config\n\n def _assign_dynamic_config_values(self, schema, config, parent_keys=None):\n \"\"\"\n Assign dynamic config value for a particular config item if the ite utilizes a Jinja\n expression for dynamic config values.\n\n Note: This method mutates config argument in place.\n\n :rtype: ``dict``\n \"\"\"\n parent_keys = parent_keys or []\n\n for config_item_key, config_item_value in six.iteritems(config):\n schema_item = schema.get(config_item_key, {})\n is_dictionary = isinstance(config_item_value, dict)\n\n # Inspect nested object properties\n if is_dictionary:\n parent_keys += [config_item_key]\n self._assign_dynamic_config_values(schema=schema_item.get('properties', {}),\n config=config[config_item_key],\n parent_keys=parent_keys)\n else:\n is_jinja_expression = jinja_utils.is_jinja_expression(value=config_item_value)\n\n if is_jinja_expression:\n # Resolve / render the Jinja template expression\n full_config_item_key = '.'.join(parent_keys + [config_item_key])\n value = self._get_datastore_value_for_expression(key=full_config_item_key,\n value=config_item_value,\n config_schema_item=schema_item)\n\n config[config_item_key] = value\n else:\n # Static value, no resolution needed\n config[config_item_key] = config_item_value\n\n return config\n\n def _assign_default_values(self, schema, config):\n \"\"\"\n Assign default values for particular config if default values are provided in the config\n schema and a value is not specified in the config.\n\n Note: This method mutates config argument in place.\n\n :rtype: ``dict``\n \"\"\"\n for schema_item_key, schema_item in six.iteritems(schema):\n default_value = schema_item.get('default', None)\n is_required = schema_item.get('required', False)\n is_object = schema_item.get('type', None) == 'object'\n has_properties = schema_item.get('properties', None)\n\n if is_required and default_value and not config.get(schema_item_key, None):\n config[schema_item_key] = default_value\n\n # Inspect nested object properties\n if is_object and has_properties:\n if not config.get(schema_item_key, None):\n config[schema_item_key] = {}\n\n self._assign_default_values(schema=schema_item['properties'],\n config=config[schema_item_key])\n\n return config\n\n def _get_datastore_value_for_expression(self, key, value, config_schema_item=None):\n \"\"\"\n Retrieve datastore value by first resolving the datastore expression and then retrieving\n the value from the datastore.\n\n :param key: Full path to the config item key (e.g. 
\"token\" / \"auth.settings.token\", etc.)\n \"\"\"\n from st2common.services.config import deserialize_key_value\n\n config_schema_item = config_schema_item or {}\n secret = config_schema_item.get('secret', False)\n\n try:\n value = render_template_with_system_and_user_context(value=value,\n user=self.user)\n except Exception as e:\n # Throw a more user-friendly exception on failed render\n exc_class = type(e)\n original_msg = str(e)\n msg = ('Failed to render dynamic configuration value for key \"%s\" with value '\n '\"%s\" for pack \"%s\" config: %s ' % (key, value, self.pack_name, original_msg))\n raise exc_class(msg)\n\n if value:\n # Deserialize the value\n value = deserialize_key_value(value=value, secret=secret)\n else:\n value = None\n\n return value\n", "path": "st2common/st2common/util/config_loader.py"}], "after_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\n\nimport six\n\nfrom oslo_config import cfg\n\nfrom st2common import log as logging\nfrom st2common.models.db.pack import ConfigDB\nfrom st2common.persistence.pack import ConfigSchema\nfrom st2common.persistence.pack import Config\nfrom st2common.content import utils as content_utils\nfrom st2common.util import jinja as jinja_utils\nfrom st2common.util.templating import render_template_with_system_and_user_context\nfrom st2common.util.config_parser import ContentPackConfigParser\nfrom st2common.exceptions.db import StackStormDBObjectNotFoundError\n\n__all__ = [\n 'ContentPackConfigLoader'\n]\n\nLOG = logging.getLogger(__name__)\n\n\nclass ContentPackConfigLoader(object):\n \"\"\"\n Class which loads and resolves all the config values and returns a dictionary of resolved values\n which can be passed to the resource.\n\n It loads and resolves values in the following order:\n\n 1. Static values from <pack path>/config.yaml file\n 2. Dynamic and or static values from /opt/stackstorm/configs/<pack name>.yaml file.\n\n Values are merged from left to right which means values from \"<pack name>.yaml\" file have\n precedence and override values from pack local config file.\n \"\"\"\n\n def __init__(self, pack_name, user=None):\n self.pack_name = pack_name\n self.user = user or cfg.CONF.system_user.user\n\n self.pack_path = content_utils.get_pack_base_path(pack_name=pack_name)\n self._config_parser = ContentPackConfigParser(pack_name=pack_name)\n\n def get_config(self):\n result = {}\n\n # 1. Retrieve values from pack local config.yaml file\n config = self._config_parser.get_config()\n\n if config:\n config = config.config or {}\n result.update(config)\n\n # Retrieve corresponding ConfigDB and ConfigSchemaDB object\n # Note: ConfigSchemaDB is optional right now. 
If it doesn't exist, we assume every value\n # is of a type string\n try:\n config_db = Config.get_by_pack(value=self.pack_name)\n except StackStormDBObjectNotFoundError:\n # Corresponding pack config doesn't exist. We set config_db to an empty config so\n # that the default values from config schema are still correctly applied even if\n # pack doesn't contain a config.\n config_db = ConfigDB(pack=self.pack_name, values={})\n\n try:\n config_schema_db = ConfigSchema.get_by_pack(value=self.pack_name)\n except StackStormDBObjectNotFoundError:\n config_schema_db = None\n\n # 2. Retrieve values from \"global\" pack config file (if available) and resolve them if\n # necessary\n config = self._get_values_for_config(config_schema_db=config_schema_db,\n config_db=config_db)\n result.update(config)\n\n return result\n\n def _get_values_for_config(self, config_schema_db, config_db):\n schema_values = getattr(config_schema_db, 'attributes', {})\n config_values = getattr(config_db, 'values', {})\n\n config = copy.deepcopy(config_values)\n\n # Assign dynamic config values based on the values in the datastore\n config = self._assign_dynamic_config_values(schema=schema_values, config=config)\n\n # If config_schema is available we do a second pass and set default values for required\n # items which values are not provided / available in the config itself\n config = self._assign_default_values(schema=schema_values, config=config)\n return config\n\n def _assign_dynamic_config_values(self, schema, config, parent_keys=None):\n \"\"\"\n Assign dynamic config value for a particular config item if the ite utilizes a Jinja\n expression for dynamic config values.\n\n Note: This method mutates config argument in place.\n\n :rtype: ``dict``\n \"\"\"\n parent_keys = parent_keys or []\n\n for config_item_key, config_item_value in six.iteritems(config):\n schema_item = schema.get(config_item_key, {})\n is_dictionary = isinstance(config_item_value, dict)\n\n # Inspect nested object properties\n if is_dictionary:\n parent_keys += [config_item_key]\n self._assign_dynamic_config_values(schema=schema_item.get('properties', {}),\n config=config[config_item_key],\n parent_keys=parent_keys)\n else:\n is_jinja_expression = jinja_utils.is_jinja_expression(value=config_item_value)\n\n if is_jinja_expression:\n # Resolve / render the Jinja template expression\n full_config_item_key = '.'.join(parent_keys + [config_item_key])\n value = self._get_datastore_value_for_expression(key=full_config_item_key,\n value=config_item_value,\n config_schema_item=schema_item)\n\n config[config_item_key] = value\n else:\n # Static value, no resolution needed\n config[config_item_key] = config_item_value\n\n return config\n\n def _assign_default_values(self, schema, config):\n \"\"\"\n Assign default values for particular config if default values are provided in the config\n schema and a value is not specified in the config.\n\n Note: This method mutates config argument in place.\n\n :rtype: ``dict``\n \"\"\"\n for schema_item_key, schema_item in six.iteritems(schema):\n default_value = schema_item.get('default', None)\n is_object = schema_item.get('type', None) == 'object'\n has_properties = schema_item.get('properties', None)\n\n if default_value and not config.get(schema_item_key, None):\n config[schema_item_key] = default_value\n\n # Inspect nested object properties\n if is_object and has_properties:\n if not config.get(schema_item_key, None):\n config[schema_item_key] = {}\n\n self._assign_default_values(schema=schema_item['properties'],\n 
config=config[schema_item_key])\n\n return config\n\n def _get_datastore_value_for_expression(self, key, value, config_schema_item=None):\n \"\"\"\n Retrieve datastore value by first resolving the datastore expression and then retrieving\n the value from the datastore.\n\n :param key: Full path to the config item key (e.g. \"token\" / \"auth.settings.token\", etc.)\n \"\"\"\n from st2common.services.config import deserialize_key_value\n\n config_schema_item = config_schema_item or {}\n secret = config_schema_item.get('secret', False)\n\n try:\n value = render_template_with_system_and_user_context(value=value,\n user=self.user)\n except Exception as e:\n # Throw a more user-friendly exception on failed render\n exc_class = type(e)\n original_msg = str(e)\n msg = ('Failed to render dynamic configuration value for key \"%s\" with value '\n '\"%s\" for pack \"%s\" config: %s ' % (key, value, self.pack_name, original_msg))\n raise exc_class(msg)\n\n if value:\n # Deserialize the value\n value = deserialize_key_value(value=value, secret=secret)\n else:\n value = None\n\n return value\n", "path": "st2common/st2common/util/config_loader.py"}]}
| 2,842 | 189 |
gh_patches_debug_13267
|
rasdani/github-patches
|
git_diff
|
PokemonGoF__PokemonGo-Bot-1025
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bot unnecessarily walks to center of Pokestops
This issue may also increase detectability since it's unlikely that users would walk to the exact center of every stop they visit.
Also, most stops are spinnable from the road or another more accessible place. Once we start following roads, this will need to be fixed.
### Expected Behavior
Bot should walk to anywhere within the range of the stop.
### Actual Behavior
The bot walks to the exact center of a stop.
### Steps to Reproduce
Run the bot and observe its path.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pokemongo_bot/cell_workers/move_to_fort_worker.py`
Content:
```
1 from utils import distance, format_dist
2 from pokemongo_bot.human_behaviour import sleep
3 from pokemongo_bot import logger
4 from pokemongo_bot.step_walker import StepWalker
5
6 class MoveToFortWorker(object):
7 def __init__(self, fort, bot):
8 self.bot = bot
9 self.fort = fort
10 self.api = bot.api
11 self.config = bot.config
12 self.navigator = bot.navigator
13 self.position = bot.position
14
15 def work(self):
16 lat = self.fort['latitude']
17 lng = self.fort['longitude']
18 fortID = self.fort['id']
19 unit = self.config.distance_unit # Unit to use when printing formatted distance
20
21 dist = distance(self.position[0], self.position[1], lat, lng)
22
23 # print('Found fort {} at distance {}m'.format(fortID, dist))
24 logger.log('Found fort {} at distance {}'.format(
25 fortID, format_dist(dist, unit)))
26
27 if dist > 10:
28 logger.log('Need to move closer to Pokestop')
29 position = (lat, lng, 0.0)
30
31 if self.config.walk > 0:
32 step_walker = StepWalker(
33 self.bot,
34 self.config.walk,
35 self.api._position_lat,
36 self.api._position_lng,
37 position[0],
38 position[1]
39 )
40
41 while True:
42 if step_walker.step():
43 break
44
45 else:
46 self.api.set_position(*position)
47
48 self.api.player_update(latitude=lat, longitude=lng)
49 response_dict = self.api.call()
50 logger.log('Arrived at Pokestop')
51 sleep(2)
52 return response_dict
53
54 return None
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pokemongo_bot/cell_workers/move_to_fort_worker.py b/pokemongo_bot/cell_workers/move_to_fort_worker.py
--- a/pokemongo_bot/cell_workers/move_to_fort_worker.py
+++ b/pokemongo_bot/cell_workers/move_to_fort_worker.py
@@ -1,4 +1,4 @@
-from utils import distance, format_dist
+from utils import distance, format_dist, i2f
from pokemongo_bot.human_behaviour import sleep
from pokemongo_bot import logger
from pokemongo_bot.step_walker import StepWalker
@@ -38,7 +38,7 @@
position[1]
)
- while True:
+ while distance(i2f(self.api._position_lat), i2f(self.api._position_lng), lat, lng) > 10:
if step_walker.step():
break
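As a rough illustration of what the new loop condition changes, the sketch below stops walking once the stop is within spin range instead of continuing to its exact centre. The coordinates, step size and 10 m radius are illustrative assumptions, and `distance_m` is only a stand-in for the bot's own distance helper:

```python
import math


def distance_m(a, b):
    # Crude equirectangular approximation; adequate at Pokestop scale
    dx = (b[1] - a[1]) * 111320 * math.cos(math.radians(a[0]))
    dy = (b[0] - a[0]) * 110540
    return math.hypot(dx, dy)


def walk_to_fort(position, fort, step_size_m=5, spin_range_m=10):
    """Step towards the fort, stopping as soon as it is within spin range."""
    while distance_m(position, fort) > spin_range_m:
        frac = min(1.0, step_size_m / distance_m(position, fort))
        position = (position[0] + (fort[0] - position[0]) * frac,
                    position[1] + (fort[1] - position[1]) * frac)
    return position


stop = (48.8584, 2.2945)
print(walk_to_fort((48.8588, 2.2950), stop))  # ends within ~10 m of the stop, not at its centre
```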
|
{"golden_diff": "diff --git a/pokemongo_bot/cell_workers/move_to_fort_worker.py b/pokemongo_bot/cell_workers/move_to_fort_worker.py\n--- a/pokemongo_bot/cell_workers/move_to_fort_worker.py\n+++ b/pokemongo_bot/cell_workers/move_to_fort_worker.py\n@@ -1,4 +1,4 @@\n-from utils import distance, format_dist\n+from utils import distance, format_dist, i2f\n from pokemongo_bot.human_behaviour import sleep\n from pokemongo_bot import logger\n from pokemongo_bot.step_walker import StepWalker\n@@ -38,7 +38,7 @@\n position[1]\n )\n \n- while True:\n+ while distance(i2f(self.api._position_lat), i2f(self.api._position_lng), lat, lng) > 10:\n if step_walker.step():\n break\n", "issue": "Bot unnecessarily walks to center of Pokestops\nThis issue may also increase detectability since it's unlikely that users would walk to the exact center of every stop they visit.\n\nAlso, most stops are spinnable from the road or other more accessible place. Once we start following roads, this will need fixed.\n### Expected Behavior\n\nBot should walk to anywhere within the range of the stop.\n### Actual Behavior\n\nBot walks to exactly the center of a stop.\n### Steps to Reproduce\n\nRun the bot and observe its path.\n\n", "before_files": [{"content": "from utils import distance, format_dist\nfrom pokemongo_bot.human_behaviour import sleep\nfrom pokemongo_bot import logger\nfrom pokemongo_bot.step_walker import StepWalker\n\nclass MoveToFortWorker(object):\n def __init__(self, fort, bot):\n self.bot = bot\n self.fort = fort\n self.api = bot.api\n self.config = bot.config\n self.navigator = bot.navigator\n self.position = bot.position\n\n def work(self):\n lat = self.fort['latitude']\n lng = self.fort['longitude']\n fortID = self.fort['id']\n unit = self.config.distance_unit # Unit to use when printing formatted distance\n\n dist = distance(self.position[0], self.position[1], lat, lng)\n\n # print('Found fort {} at distance {}m'.format(fortID, dist))\n logger.log('Found fort {} at distance {}'.format(\n fortID, format_dist(dist, unit)))\n\n if dist > 10:\n logger.log('Need to move closer to Pokestop')\n position = (lat, lng, 0.0)\n\n if self.config.walk > 0:\n step_walker = StepWalker(\n self.bot,\n self.config.walk,\n self.api._position_lat,\n self.api._position_lng,\n position[0],\n position[1]\n )\n\n while True:\n if step_walker.step():\n break\n\n else:\n self.api.set_position(*position)\n\n self.api.player_update(latitude=lat, longitude=lng)\n response_dict = self.api.call()\n logger.log('Arrived at Pokestop')\n sleep(2)\n return response_dict\n\n return None\n", "path": "pokemongo_bot/cell_workers/move_to_fort_worker.py"}], "after_files": [{"content": "from utils import distance, format_dist, i2f\nfrom pokemongo_bot.human_behaviour import sleep\nfrom pokemongo_bot import logger\nfrom pokemongo_bot.step_walker import StepWalker\n\nclass MoveToFortWorker(object):\n def __init__(self, fort, bot):\n self.bot = bot\n self.fort = fort\n self.api = bot.api\n self.config = bot.config\n self.navigator = bot.navigator\n self.position = bot.position\n\n def work(self):\n lat = self.fort['latitude']\n lng = self.fort['longitude']\n fortID = self.fort['id']\n unit = self.config.distance_unit # Unit to use when printing formatted distance\n\n dist = distance(self.position[0], self.position[1], lat, lng)\n\n # print('Found fort {} at distance {}m'.format(fortID, dist))\n logger.log('Found fort {} at distance {}'.format(\n fortID, format_dist(dist, unit)))\n\n if dist > 10:\n logger.log('Need to move closer to Pokestop')\n 
position = (lat, lng, 0.0)\n\n if self.config.walk > 0:\n step_walker = StepWalker(\n self.bot,\n self.config.walk,\n self.api._position_lat,\n self.api._position_lng,\n position[0],\n position[1]\n )\n\n while distance(i2f(self.api._position_lat), i2f(self.api._position_lng), lat, lng) > 10:\n if step_walker.step():\n break\n\n else:\n self.api.set_position(*position)\n\n self.api.player_update(latitude=lat, longitude=lng)\n response_dict = self.api.call()\n logger.log('Arrived at Pokestop')\n sleep(2)\n return response_dict\n\n return None\n", "path": "pokemongo_bot/cell_workers/move_to_fort_worker.py"}]}
| 849 | 201 |
gh_patches_debug_30713
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-7288
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Form media are not included in account editing with settings panels
### Issue Summary
I added some fields, including a `RichTextField`, to a custom `User` model. Then I added a `UserSettingsPanel(BaseSettingsPanel)` in a new tab, registered via `register_account_settings_panel`.
Everything worked, but I was unable to get the Draftail editor.
The bug is very close to #4466, but I am unsure of the proper way to fix it, due to the panel-based structure. Moreover, it seems that `form` is an undefined value in `wagtail/admin/templates/wagtailadmin/account/account.html`.
### Steps to Reproduce
1. Start a new project with `wagtail start myproject`
2. Customise the User model and account forms this way (a rough sketch of this kind of setup follows the list): https://forge.cliss21.org/cliss21/services_pour_tous/commit/4ab71c92b999fd3dc2872d491f952404c2a935f8
3. Go to `/admin/account/`
4. I locally overcame it this way: https://forge.cliss21.org/cliss21/services_pour_tous/commit/669bf290b0b58a6bc532cd17c2523d3b23afe554
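For reference, the customisation described in step 2 looks roughly like the sketch below; the panel, tab and `biography` field names are assumptions for illustration, and the actual code is in the linked commit:

```python
# wagtail_hooks.py
from django import forms
from django.contrib.auth import get_user_model

from wagtail.admin.views.account import BaseSettingsPanel, SettingsTab
from wagtail.core import hooks


class ExtraProfileForm(forms.ModelForm):
    class Meta:
        model = get_user_model()   # the custom User model
        fields = ['biography']     # assumed RichTextField; its form field brings the Draftail widget and media


extra_tab = SettingsTab('extra', 'Extra profile', order=300)


class ExtraProfilePanel(BaseSettingsPanel):
    name = 'extra_profile'
    title = 'Extra profile'
    tab = extra_tab
    order = 100
    form_class = ExtraProfileForm


@hooks.register('register_account_settings_panel')
def register_extra_profile_panel(request, user, profile):
    return ExtraProfilePanel(request, user, profile)
```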
I have confirmed that this issue can be reproduced as described on a fresh Wagtail project.
PS: a link to https://docs.wagtail.io/en/stable/reference/hooks.html#register-account-settings-panel could be added from https://docs.wagtail.io/en/stable/advanced_topics/customisation/custom_user_models.html.
### Technical details
```
$ python --version
Python 3.9.2
$ pip show django | grep Version
Version: 3.2.4
$ pip show wagtail | grep Version:
Version: 2.13.2
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/admin/views/account.py`
Content:
```
1 from collections import OrderedDict
2
3 from django.conf import settings
4 from django.contrib import messages
5 from django.contrib.auth import update_session_auth_hash
6 from django.contrib.auth import views as auth_views
7 from django.db import transaction
8 from django.http import Http404
9 from django.shortcuts import redirect
10 from django.template.loader import render_to_string
11 from django.template.response import TemplateResponse
12 from django.urls import reverse, reverse_lazy
13 from django.utils.translation import gettext as _
14 from django.utils.translation import gettext_lazy, override
15 from django.views.decorators.debug import sensitive_post_parameters
16
17 from wagtail.admin.forms.account import (
18 AvatarPreferencesForm, LocalePreferencesForm, NameEmailForm, NotificationPreferencesForm)
19 from wagtail.admin.forms.auth import LoginForm, PasswordChangeForm, PasswordResetForm
20 from wagtail.admin.localization import get_available_admin_languages, get_available_admin_time_zones
21 from wagtail.core import hooks
22 from wagtail.core.models import UserPagePermissionsProxy
23 from wagtail.users.models import UserProfile
24 from wagtail.utils.loading import get_custom_form
25
26
27 def get_user_login_form():
28 form_setting = 'WAGTAILADMIN_USER_LOGIN_FORM'
29 if hasattr(settings, form_setting):
30 return get_custom_form(form_setting)
31 else:
32 return LoginForm
33
34
35 # Helper functions to check password management settings to enable/disable views as appropriate.
36 # These are functions rather than class-level constants so that they can be overridden in tests
37 # by override_settings
38
39 def password_management_enabled():
40 return getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True)
41
42
43 def email_management_enabled():
44 return getattr(settings, 'WAGTAIL_EMAIL_MANAGEMENT_ENABLED', True)
45
46
47 def password_reset_enabled():
48 return getattr(settings, 'WAGTAIL_PASSWORD_RESET_ENABLED', password_management_enabled())
49
50
51 # Tabs
52
53
54 class SettingsTab:
55 def __init__(self, name, title, order=0):
56 self.name = name
57 self.title = title
58 self.order = order
59
60
61 profile_tab = SettingsTab('profile', gettext_lazy("Profile"), order=100)
62 notifications_tab = SettingsTab('notifications', gettext_lazy("Notifications"), order=200)
63
64
65 # Panels
66
67 class BaseSettingsPanel:
68 name = ''
69 title = ''
70 tab = profile_tab
71 help_text = None
72 template_name = 'wagtailadmin/account/settings_panels/base.html'
73 form_class = None
74 form_object = 'user'
75
76 def __init__(self, request, user, profile):
77 self.request = request
78 self.user = user
79 self.profile = profile
80
81 def is_active(self):
82 """
83 Returns True to display the panel.
84 """
85 return True
86
87 def get_form(self):
88 """
89 Returns an initialised form.
90 """
91 kwargs = {
92 'instance': self.profile if self.form_object == 'profile' else self.user,
93 'prefix': self.name
94 }
95
96 if self.request.method == 'POST':
97 return self.form_class(self.request.POST, self.request.FILES, **kwargs)
98 else:
99 return self.form_class(**kwargs)
100
101 def get_context_data(self):
102 """
103 Returns the template context to use when rendering the template.
104 """
105 return {
106 'form': self.get_form()
107 }
108
109 def render(self):
110 """
111 Renders the panel using the template specified in .template_name and context from .get_context_data()
112 """
113 return render_to_string(self.template_name, self.get_context_data(), request=self.request)
114
115
116 class NameEmailSettingsPanel(BaseSettingsPanel):
117 name = 'name_email'
118 title = gettext_lazy('Name and Email')
119 order = 100
120 form_class = NameEmailForm
121
122
123 class AvatarSettingsPanel(BaseSettingsPanel):
124 name = 'avatar'
125 title = gettext_lazy('Profile picture')
126 order = 300
127 template_name = 'wagtailadmin/account/settings_panels/avatar.html'
128 form_class = AvatarPreferencesForm
129 form_object = 'profile'
130
131
132 class NotificationsSettingsPanel(BaseSettingsPanel):
133 name = 'notifications'
134 title = gettext_lazy('Notifications')
135 tab = notifications_tab
136 order = 100
137 form_class = NotificationPreferencesForm
138 form_object = 'profile'
139
140 def is_active(self):
141 # Hide the panel if the user can't edit or publish pages
142 user_perms = UserPagePermissionsProxy(self.request.user)
143 if not user_perms.can_edit_pages() and not user_perms.can_publish_pages():
144 return False
145
146 # Hide the panel if there are no notification preferences
147 return self.get_form().fields
148
149
150 class LocaleSettingsPanel(BaseSettingsPanel):
151 name = 'locale'
152 title = gettext_lazy('Locale')
153 order = 400
154 form_class = LocalePreferencesForm
155 form_object = 'profile'
156
157 def is_active(self):
158 return len(get_available_admin_languages()) > 1 or len(get_available_admin_time_zones()) > 1
159
160
161 class ChangePasswordPanel(BaseSettingsPanel):
162 name = 'password'
163 title = gettext_lazy('Password')
164 order = 500
165 form_class = PasswordChangeForm
166
167 def is_active(self):
168 return password_management_enabled() and self.user.has_usable_password()
169
170 def get_form(self):
171 # Note: don't bind the form unless a field is specified
172 # This prevents the validation error from displaying if the user wishes to ignore this
173 bind_form = False
174 if self.request.method == 'POST':
175 bind_form = any([
176 self.request.POST.get(self.name + '-old_password'),
177 self.request.POST.get(self.name + '-new_password1'),
178 self.request.POST.get(self.name + '-new_password2'),
179 ])
180
181 if bind_form:
182 return self.form_class(self.user, self.request.POST, prefix=self.name)
183 else:
184 return self.form_class(self.user, prefix=self.name)
185
186
187 # Views
188
189 @sensitive_post_parameters()
190 def account(request):
191 # Fetch the user and profile objects once and pass into each panel
192 # We need to use the same instances for all forms so they don't overwrite each other
193 user = request.user
194 profile = UserProfile.get_for_user(user)
195
196 # Panels
197 panels = [
198 NameEmailSettingsPanel(request, user, profile),
199 AvatarSettingsPanel(request, user, profile),
200 NotificationsSettingsPanel(request, user, profile),
201 LocaleSettingsPanel(request, user, profile),
202 ChangePasswordPanel(request, user, profile),
203 ]
204 for fn in hooks.get_hooks('register_account_settings_panel'):
205 panel = fn(request, user, profile)
206 if panel and panel.is_active():
207 panels.append(panel)
208
209 panels = [panel for panel in panels if panel.is_active()]
210
211 # Get tabs and order them
212 tabs = list(set(panel.tab for panel in panels))
213 tabs.sort(key=lambda tab: tab.order)
214
215 # Get dict of tabs to ordered panels
216 panels_by_tab = OrderedDict([(tab, []) for tab in tabs])
217 for panel in panels:
218 panels_by_tab[panel.tab].append(panel)
219 for tab, tab_panels in panels_by_tab.items():
220 tab_panels.sort(key=lambda panel: panel.order)
221
222 if request.method == 'POST':
223 panel_forms = [panel.get_form() for panel in panels]
224
225 if all(form.is_valid() or not form.is_bound for form in panel_forms):
226 with transaction.atomic():
227 for form in panel_forms:
228 if form.is_bound:
229 form.save()
230
231 # Prevent a password change from logging this user out
232 update_session_auth_hash(request, user)
233
234 # Override the language when creating the success message
235 # If the user has changed their language in this request, the message should
236 # be in the new language, not the existing one
237 with override(profile.get_preferred_language()):
238 messages.success(request, _("Your account settings have been changed successfully!"))
239
240 return redirect('wagtailadmin_account')
241
242 # Menu items
243 menu_items = []
244 for fn in hooks.get_hooks('register_account_menu_item'):
245 item = fn(request)
246 if item:
247 menu_items.append(item)
248
249 return TemplateResponse(request, 'wagtailadmin/account/account.html', {
250 'panels_by_tab': panels_by_tab,
251 'menu_items': menu_items,
252 })
253
254
255 class PasswordResetEnabledViewMixin:
256 """
257 Class based view mixin that disables the view if password reset is disabled by one of the following settings:
258 - WAGTAIL_PASSWORD_RESET_ENABLED
259 - WAGTAIL_PASSWORD_MANAGEMENT_ENABLED
260 """
261 def dispatch(self, *args, **kwargs):
262 if not password_reset_enabled():
263 raise Http404
264
265 return super().dispatch(*args, **kwargs)
266
267
268 class PasswordResetView(PasswordResetEnabledViewMixin, auth_views.PasswordResetView):
269 template_name = 'wagtailadmin/account/password_reset/form.html'
270 email_template_name = 'wagtailadmin/account/password_reset/email.txt'
271 subject_template_name = 'wagtailadmin/account/password_reset/email_subject.txt'
272 form_class = PasswordResetForm
273 success_url = reverse_lazy('wagtailadmin_password_reset_done')
274
275
276 class PasswordResetDoneView(PasswordResetEnabledViewMixin, auth_views.PasswordResetDoneView):
277 template_name = 'wagtailadmin/account/password_reset/done.html'
278
279
280 class PasswordResetConfirmView(PasswordResetEnabledViewMixin, auth_views.PasswordResetConfirmView):
281 template_name = 'wagtailadmin/account/password_reset/confirm.html'
282 success_url = reverse_lazy('wagtailadmin_password_reset_complete')
283
284
285 class PasswordResetCompleteView(PasswordResetEnabledViewMixin, auth_views.PasswordResetCompleteView):
286 template_name = 'wagtailadmin/account/password_reset/complete.html'
287
288
289 class LoginView(auth_views.LoginView):
290 template_name = 'wagtailadmin/login.html'
291
292 def get_success_url(self):
293 return self.get_redirect_url() or reverse('wagtailadmin_home')
294
295 def get(self, *args, **kwargs):
296 # If user is already logged in, redirect them to the dashboard
297 if self.request.user.is_authenticated and self.request.user.has_perm('wagtailadmin.access_admin'):
298 return redirect(self.get_success_url())
299
300 return super().get(*args, **kwargs)
301
302 def get_form_class(self):
303 return get_user_login_form()
304
305 def get_context_data(self, **kwargs):
306 context = super().get_context_data(**kwargs)
307
308 context['show_password_reset'] = password_reset_enabled()
309
310 from django.contrib.auth import get_user_model
311 User = get_user_model()
312 context['username_field'] = User._meta.get_field(User.USERNAME_FIELD).verbose_name
313
314 return context
315
316
317 class LogoutView(auth_views.LogoutView):
318 next_page = 'wagtailadmin_login'
319
320 def dispatch(self, request, *args, **kwargs):
321 response = super().dispatch(request, *args, **kwargs)
322
323 messages.success(self.request, _('You have been successfully logged out.'))
324 # By default, logging out will generate a fresh sessionid cookie. We want to use the
325 # absence of sessionid as an indication that front-end pages are being viewed by a
326 # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.
327 response.delete_cookie(
328 settings.SESSION_COOKIE_NAME,
329 domain=settings.SESSION_COOKIE_DOMAIN,
330 path=settings.SESSION_COOKIE_PATH
331 )
332
333 # HACK: pretend that the session hasn't been modified, so that SessionMiddleware
334 # won't override the above and write a new cookie.
335 self.request.session.modified = False
336
337 return response
338
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/admin/views/account.py b/wagtail/admin/views/account.py
--- a/wagtail/admin/views/account.py
+++ b/wagtail/admin/views/account.py
@@ -5,6 +5,7 @@
from django.contrib.auth import update_session_auth_hash
from django.contrib.auth import views as auth_views
from django.db import transaction
+from django.forms import Media
from django.http import Http404
from django.shortcuts import redirect
from django.template.loader import render_to_string
@@ -219,8 +220,9 @@
for tab, tab_panels in panels_by_tab.items():
tab_panels.sort(key=lambda panel: panel.order)
+ panel_forms = [panel.get_form() for panel in panels]
+
if request.method == 'POST':
- panel_forms = [panel.get_form() for panel in panels]
if all(form.is_valid() or not form.is_bound for form in panel_forms):
with transaction.atomic():
@@ -239,6 +241,10 @@
return redirect('wagtailadmin_account')
+ media = Media()
+ for form in panel_forms:
+ media += form.media
+
# Menu items
menu_items = []
for fn in hooks.get_hooks('register_account_menu_item'):
@@ -249,6 +255,7 @@
return TemplateResponse(request, 'wagtailadmin/account/account.html', {
'panels_by_tab': panels_by_tab,
'menu_items': menu_items,
+ 'media': media,
})
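The fix relies on `django.forms.Media` objects being additive, so the media from every panel form can be merged into the single `media` object handed to the template. A minimal sketch, with asset paths made up for illustration:

```python
from django.forms import Media

draftail = Media(
    js=['wagtailadmin/js/draftail.js'],               # hypothetical assets
    css={'all': ['wagtailadmin/css/draftail.css']},
)
avatar = Media(js=['wagtailadmin/js/avatar-chooser.js'])

combined = Media()
for form_media in (draftail, avatar):
    combined += form_media

# `combined` now carries both scripts and the stylesheet; rendering it in the
# account template emits the script/link tags that Draftail needs to load.
```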
|
{"golden_diff": "diff --git a/wagtail/admin/views/account.py b/wagtail/admin/views/account.py\n--- a/wagtail/admin/views/account.py\n+++ b/wagtail/admin/views/account.py\n@@ -5,6 +5,7 @@\n from django.contrib.auth import update_session_auth_hash\n from django.contrib.auth import views as auth_views\n from django.db import transaction\n+from django.forms import Media\n from django.http import Http404\n from django.shortcuts import redirect\n from django.template.loader import render_to_string\n@@ -219,8 +220,9 @@\n for tab, tab_panels in panels_by_tab.items():\n tab_panels.sort(key=lambda panel: panel.order)\n \n+ panel_forms = [panel.get_form() for panel in panels]\n+\n if request.method == 'POST':\n- panel_forms = [panel.get_form() for panel in panels]\n \n if all(form.is_valid() or not form.is_bound for form in panel_forms):\n with transaction.atomic():\n@@ -239,6 +241,10 @@\n \n return redirect('wagtailadmin_account')\n \n+ media = Media()\n+ for form in panel_forms:\n+ media += form.media\n+\n # Menu items\n menu_items = []\n for fn in hooks.get_hooks('register_account_menu_item'):\n@@ -249,6 +255,7 @@\n return TemplateResponse(request, 'wagtailadmin/account/account.html', {\n 'panels_by_tab': panels_by_tab,\n 'menu_items': menu_items,\n+ 'media': media,\n })\n", "issue": "Form media are not included in account edition with settings panels\n### Issue Summary\r\n\r\nI added some fields including a `RichTextField` to a custom `User` model. Then I added a `UserSettingsPanel(BaseSettingsPanel)` in a new tab, registered via `register_account_settings_panel`.\r\n\r\nAll worked but I was unable to get the Draftail editor.\r\n\r\nThe bug is very close to #4466. But I am unsure what is the way to properly fix it, due to panels. Moreover it seems that `form` is an undefined value in `wagtail/admin/templates/wagtailadmin/account/account.html`\r\n\r\n### Steps to Reproduce\r\n\r\n1. Start a new project with `wagtail start myproject`\r\n2. Custom the User model and account forms this way: https://forge.cliss21.org/cliss21/services_pour_tous/commit/4ab71c92b999fd3dc2872d491f952404c2a935f8\r\n3. Go to `/admin/account/`\r\n3. 
I locally overcame it this way: https://forge.cliss21.org/cliss21/services_pour_tous/commit/669bf290b0b58a6bc532cd17c2523d3b23afe554\r\n\r\nI have confirmed that this issue can be reproduced as described on a fresh Wagtail project.\r\n\r\nPS: a link to https://docs.wagtail.io/en/stable/reference/hooks.html#register-account-settings-panel could be done from https://docs.wagtail.io/en/stable/advanced_topics/customisation/custom_user_models.html.\r\n\r\n### Technical details\r\n\r\n```\r\n$ python --version\r\nPython 3.9.2\r\n$ pip show django | grep Version\r\nVersion: 3.2.4\r\n$ pip show wagtail | grep Version:\r\nVersion: 2.13.2\r\n```\n", "before_files": [{"content": "from collections import OrderedDict\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth import update_session_auth_hash\nfrom django.contrib.auth import views as auth_views\nfrom django.db import transaction\nfrom django.http import Http404\nfrom django.shortcuts import redirect\nfrom django.template.loader import render_to_string\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse, reverse_lazy\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy, override\nfrom django.views.decorators.debug import sensitive_post_parameters\n\nfrom wagtail.admin.forms.account import (\n AvatarPreferencesForm, LocalePreferencesForm, NameEmailForm, NotificationPreferencesForm)\nfrom wagtail.admin.forms.auth import LoginForm, PasswordChangeForm, PasswordResetForm\nfrom wagtail.admin.localization import get_available_admin_languages, get_available_admin_time_zones\nfrom wagtail.core import hooks\nfrom wagtail.core.models import UserPagePermissionsProxy\nfrom wagtail.users.models import UserProfile\nfrom wagtail.utils.loading import get_custom_form\n\n\ndef get_user_login_form():\n form_setting = 'WAGTAILADMIN_USER_LOGIN_FORM'\n if hasattr(settings, form_setting):\n return get_custom_form(form_setting)\n else:\n return LoginForm\n\n\n# Helper functions to check password management settings to enable/disable views as appropriate.\n# These are functions rather than class-level constants so that they can be overridden in tests\n# by override_settings\n\ndef password_management_enabled():\n return getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True)\n\n\ndef email_management_enabled():\n return getattr(settings, 'WAGTAIL_EMAIL_MANAGEMENT_ENABLED', True)\n\n\ndef password_reset_enabled():\n return getattr(settings, 'WAGTAIL_PASSWORD_RESET_ENABLED', password_management_enabled())\n\n\n# Tabs\n\n\nclass SettingsTab:\n def __init__(self, name, title, order=0):\n self.name = name\n self.title = title\n self.order = order\n\n\nprofile_tab = SettingsTab('profile', gettext_lazy(\"Profile\"), order=100)\nnotifications_tab = SettingsTab('notifications', gettext_lazy(\"Notifications\"), order=200)\n\n\n# Panels\n\nclass BaseSettingsPanel:\n name = ''\n title = ''\n tab = profile_tab\n help_text = None\n template_name = 'wagtailadmin/account/settings_panels/base.html'\n form_class = None\n form_object = 'user'\n\n def __init__(self, request, user, profile):\n self.request = request\n self.user = user\n self.profile = profile\n\n def is_active(self):\n \"\"\"\n Returns True to display the panel.\n \"\"\"\n return True\n\n def get_form(self):\n \"\"\"\n Returns an initialised form.\n \"\"\"\n kwargs = {\n 'instance': self.profile if self.form_object == 'profile' else self.user,\n 'prefix': self.name\n }\n\n if 
self.request.method == 'POST':\n return self.form_class(self.request.POST, self.request.FILES, **kwargs)\n else:\n return self.form_class(**kwargs)\n\n def get_context_data(self):\n \"\"\"\n Returns the template context to use when rendering the template.\n \"\"\"\n return {\n 'form': self.get_form()\n }\n\n def render(self):\n \"\"\"\n Renders the panel using the template specified in .template_name and context from .get_context_data()\n \"\"\"\n return render_to_string(self.template_name, self.get_context_data(), request=self.request)\n\n\nclass NameEmailSettingsPanel(BaseSettingsPanel):\n name = 'name_email'\n title = gettext_lazy('Name and Email')\n order = 100\n form_class = NameEmailForm\n\n\nclass AvatarSettingsPanel(BaseSettingsPanel):\n name = 'avatar'\n title = gettext_lazy('Profile picture')\n order = 300\n template_name = 'wagtailadmin/account/settings_panels/avatar.html'\n form_class = AvatarPreferencesForm\n form_object = 'profile'\n\n\nclass NotificationsSettingsPanel(BaseSettingsPanel):\n name = 'notifications'\n title = gettext_lazy('Notifications')\n tab = notifications_tab\n order = 100\n form_class = NotificationPreferencesForm\n form_object = 'profile'\n\n def is_active(self):\n # Hide the panel if the user can't edit or publish pages\n user_perms = UserPagePermissionsProxy(self.request.user)\n if not user_perms.can_edit_pages() and not user_perms.can_publish_pages():\n return False\n\n # Hide the panel if there are no notification preferences\n return self.get_form().fields\n\n\nclass LocaleSettingsPanel(BaseSettingsPanel):\n name = 'locale'\n title = gettext_lazy('Locale')\n order = 400\n form_class = LocalePreferencesForm\n form_object = 'profile'\n\n def is_active(self):\n return len(get_available_admin_languages()) > 1 or len(get_available_admin_time_zones()) > 1\n\n\nclass ChangePasswordPanel(BaseSettingsPanel):\n name = 'password'\n title = gettext_lazy('Password')\n order = 500\n form_class = PasswordChangeForm\n\n def is_active(self):\n return password_management_enabled() and self.user.has_usable_password()\n\n def get_form(self):\n # Note: don't bind the form unless a field is specified\n # This prevents the validation error from displaying if the user wishes to ignore this\n bind_form = False\n if self.request.method == 'POST':\n bind_form = any([\n self.request.POST.get(self.name + '-old_password'),\n self.request.POST.get(self.name + '-new_password1'),\n self.request.POST.get(self.name + '-new_password2'),\n ])\n\n if bind_form:\n return self.form_class(self.user, self.request.POST, prefix=self.name)\n else:\n return self.form_class(self.user, prefix=self.name)\n\n\n# Views\n\n@sensitive_post_parameters()\ndef account(request):\n # Fetch the user and profile objects once and pass into each panel\n # We need to use the same instances for all forms so they don't overwrite each other\n user = request.user\n profile = UserProfile.get_for_user(user)\n\n # Panels\n panels = [\n NameEmailSettingsPanel(request, user, profile),\n AvatarSettingsPanel(request, user, profile),\n NotificationsSettingsPanel(request, user, profile),\n LocaleSettingsPanel(request, user, profile),\n ChangePasswordPanel(request, user, profile),\n ]\n for fn in hooks.get_hooks('register_account_settings_panel'):\n panel = fn(request, user, profile)\n if panel and panel.is_active():\n panels.append(panel)\n\n panels = [panel for panel in panels if panel.is_active()]\n\n # Get tabs and order them\n tabs = list(set(panel.tab for panel in panels))\n tabs.sort(key=lambda tab: tab.order)\n\n # 
Get dict of tabs to ordered panels\n panels_by_tab = OrderedDict([(tab, []) for tab in tabs])\n for panel in panels:\n panels_by_tab[panel.tab].append(panel)\n for tab, tab_panels in panels_by_tab.items():\n tab_panels.sort(key=lambda panel: panel.order)\n\n if request.method == 'POST':\n panel_forms = [panel.get_form() for panel in panels]\n\n if all(form.is_valid() or not form.is_bound for form in panel_forms):\n with transaction.atomic():\n for form in panel_forms:\n if form.is_bound:\n form.save()\n\n # Prevent a password change from logging this user out\n update_session_auth_hash(request, user)\n\n # Override the language when creating the success message\n # If the user has changed their language in this request, the message should\n # be in the new language, not the existing one\n with override(profile.get_preferred_language()):\n messages.success(request, _(\"Your account settings have been changed successfully!\"))\n\n return redirect('wagtailadmin_account')\n\n # Menu items\n menu_items = []\n for fn in hooks.get_hooks('register_account_menu_item'):\n item = fn(request)\n if item:\n menu_items.append(item)\n\n return TemplateResponse(request, 'wagtailadmin/account/account.html', {\n 'panels_by_tab': panels_by_tab,\n 'menu_items': menu_items,\n })\n\n\nclass PasswordResetEnabledViewMixin:\n \"\"\"\n Class based view mixin that disables the view if password reset is disabled by one of the following settings:\n - WAGTAIL_PASSWORD_RESET_ENABLED\n - WAGTAIL_PASSWORD_MANAGEMENT_ENABLED\n \"\"\"\n def dispatch(self, *args, **kwargs):\n if not password_reset_enabled():\n raise Http404\n\n return super().dispatch(*args, **kwargs)\n\n\nclass PasswordResetView(PasswordResetEnabledViewMixin, auth_views.PasswordResetView):\n template_name = 'wagtailadmin/account/password_reset/form.html'\n email_template_name = 'wagtailadmin/account/password_reset/email.txt'\n subject_template_name = 'wagtailadmin/account/password_reset/email_subject.txt'\n form_class = PasswordResetForm\n success_url = reverse_lazy('wagtailadmin_password_reset_done')\n\n\nclass PasswordResetDoneView(PasswordResetEnabledViewMixin, auth_views.PasswordResetDoneView):\n template_name = 'wagtailadmin/account/password_reset/done.html'\n\n\nclass PasswordResetConfirmView(PasswordResetEnabledViewMixin, auth_views.PasswordResetConfirmView):\n template_name = 'wagtailadmin/account/password_reset/confirm.html'\n success_url = reverse_lazy('wagtailadmin_password_reset_complete')\n\n\nclass PasswordResetCompleteView(PasswordResetEnabledViewMixin, auth_views.PasswordResetCompleteView):\n template_name = 'wagtailadmin/account/password_reset/complete.html'\n\n\nclass LoginView(auth_views.LoginView):\n template_name = 'wagtailadmin/login.html'\n\n def get_success_url(self):\n return self.get_redirect_url() or reverse('wagtailadmin_home')\n\n def get(self, *args, **kwargs):\n # If user is already logged in, redirect them to the dashboard\n if self.request.user.is_authenticated and self.request.user.has_perm('wagtailadmin.access_admin'):\n return redirect(self.get_success_url())\n\n return super().get(*args, **kwargs)\n\n def get_form_class(self):\n return get_user_login_form()\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n context['show_password_reset'] = password_reset_enabled()\n\n from django.contrib.auth import get_user_model\n User = get_user_model()\n context['username_field'] = User._meta.get_field(User.USERNAME_FIELD).verbose_name\n\n return context\n\n\nclass 
LogoutView(auth_views.LogoutView):\n next_page = 'wagtailadmin_login'\n\n def dispatch(self, request, *args, **kwargs):\n response = super().dispatch(request, *args, **kwargs)\n\n messages.success(self.request, _('You have been successfully logged out.'))\n # By default, logging out will generate a fresh sessionid cookie. We want to use the\n # absence of sessionid as an indication that front-end pages are being viewed by a\n # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.\n response.delete_cookie(\n settings.SESSION_COOKIE_NAME,\n domain=settings.SESSION_COOKIE_DOMAIN,\n path=settings.SESSION_COOKIE_PATH\n )\n\n # HACK: pretend that the session hasn't been modified, so that SessionMiddleware\n # won't override the above and write a new cookie.\n self.request.session.modified = False\n\n return response\n", "path": "wagtail/admin/views/account.py"}], "after_files": [{"content": "from collections import OrderedDict\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth import update_session_auth_hash\nfrom django.contrib.auth import views as auth_views\nfrom django.db import transaction\nfrom django.forms import Media\nfrom django.http import Http404\nfrom django.shortcuts import redirect\nfrom django.template.loader import render_to_string\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse, reverse_lazy\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy, override\nfrom django.views.decorators.debug import sensitive_post_parameters\n\nfrom wagtail.admin.forms.account import (\n AvatarPreferencesForm, LocalePreferencesForm, NameEmailForm, NotificationPreferencesForm)\nfrom wagtail.admin.forms.auth import LoginForm, PasswordChangeForm, PasswordResetForm\nfrom wagtail.admin.localization import get_available_admin_languages, get_available_admin_time_zones\nfrom wagtail.core import hooks\nfrom wagtail.core.models import UserPagePermissionsProxy\nfrom wagtail.users.models import UserProfile\nfrom wagtail.utils.loading import get_custom_form\n\n\ndef get_user_login_form():\n form_setting = 'WAGTAILADMIN_USER_LOGIN_FORM'\n if hasattr(settings, form_setting):\n return get_custom_form(form_setting)\n else:\n return LoginForm\n\n\n# Helper functions to check password management settings to enable/disable views as appropriate.\n# These are functions rather than class-level constants so that they can be overridden in tests\n# by override_settings\n\ndef password_management_enabled():\n return getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True)\n\n\ndef email_management_enabled():\n return getattr(settings, 'WAGTAIL_EMAIL_MANAGEMENT_ENABLED', True)\n\n\ndef password_reset_enabled():\n return getattr(settings, 'WAGTAIL_PASSWORD_RESET_ENABLED', password_management_enabled())\n\n\n# Tabs\n\n\nclass SettingsTab:\n def __init__(self, name, title, order=0):\n self.name = name\n self.title = title\n self.order = order\n\n\nprofile_tab = SettingsTab('profile', gettext_lazy(\"Profile\"), order=100)\nnotifications_tab = SettingsTab('notifications', gettext_lazy(\"Notifications\"), order=200)\n\n\n# Panels\n\nclass BaseSettingsPanel:\n name = ''\n title = ''\n tab = profile_tab\n help_text = None\n template_name = 'wagtailadmin/account/settings_panels/base.html'\n form_class = None\n form_object = 'user'\n\n def __init__(self, request, user, profile):\n self.request = request\n self.user = user\n self.profile = profile\n\n def 
is_active(self):\n \"\"\"\n Returns True to display the panel.\n \"\"\"\n return True\n\n def get_form(self):\n \"\"\"\n Returns an initialised form.\n \"\"\"\n kwargs = {\n 'instance': self.profile if self.form_object == 'profile' else self.user,\n 'prefix': self.name\n }\n\n if self.request.method == 'POST':\n return self.form_class(self.request.POST, self.request.FILES, **kwargs)\n else:\n return self.form_class(**kwargs)\n\n def get_context_data(self):\n \"\"\"\n Returns the template context to use when rendering the template.\n \"\"\"\n return {\n 'form': self.get_form()\n }\n\n def render(self):\n \"\"\"\n Renders the panel using the template specified in .template_name and context from .get_context_data()\n \"\"\"\n return render_to_string(self.template_name, self.get_context_data(), request=self.request)\n\n\nclass NameEmailSettingsPanel(BaseSettingsPanel):\n name = 'name_email'\n title = gettext_lazy('Name and Email')\n order = 100\n form_class = NameEmailForm\n\n\nclass AvatarSettingsPanel(BaseSettingsPanel):\n name = 'avatar'\n title = gettext_lazy('Profile picture')\n order = 300\n template_name = 'wagtailadmin/account/settings_panels/avatar.html'\n form_class = AvatarPreferencesForm\n form_object = 'profile'\n\n\nclass NotificationsSettingsPanel(BaseSettingsPanel):\n name = 'notifications'\n title = gettext_lazy('Notifications')\n tab = notifications_tab\n order = 100\n form_class = NotificationPreferencesForm\n form_object = 'profile'\n\n def is_active(self):\n # Hide the panel if the user can't edit or publish pages\n user_perms = UserPagePermissionsProxy(self.request.user)\n if not user_perms.can_edit_pages() and not user_perms.can_publish_pages():\n return False\n\n # Hide the panel if there are no notification preferences\n return self.get_form().fields\n\n\nclass LocaleSettingsPanel(BaseSettingsPanel):\n name = 'locale'\n title = gettext_lazy('Locale')\n order = 400\n form_class = LocalePreferencesForm\n form_object = 'profile'\n\n def is_active(self):\n return len(get_available_admin_languages()) > 1 or len(get_available_admin_time_zones()) > 1\n\n\nclass ChangePasswordPanel(BaseSettingsPanel):\n name = 'password'\n title = gettext_lazy('Password')\n order = 500\n form_class = PasswordChangeForm\n\n def is_active(self):\n return password_management_enabled() and self.user.has_usable_password()\n\n def get_form(self):\n # Note: don't bind the form unless a field is specified\n # This prevents the validation error from displaying if the user wishes to ignore this\n bind_form = False\n if self.request.method == 'POST':\n bind_form = any([\n self.request.POST.get(self.name + '-old_password'),\n self.request.POST.get(self.name + '-new_password1'),\n self.request.POST.get(self.name + '-new_password2'),\n ])\n\n if bind_form:\n return self.form_class(self.user, self.request.POST, prefix=self.name)\n else:\n return self.form_class(self.user, prefix=self.name)\n\n\n# Views\n\n@sensitive_post_parameters()\ndef account(request):\n # Fetch the user and profile objects once and pass into each panel\n # We need to use the same instances for all forms so they don't overwrite each other\n user = request.user\n profile = UserProfile.get_for_user(user)\n\n # Panels\n panels = [\n NameEmailSettingsPanel(request, user, profile),\n AvatarSettingsPanel(request, user, profile),\n NotificationsSettingsPanel(request, user, profile),\n LocaleSettingsPanel(request, user, profile),\n ChangePasswordPanel(request, user, profile),\n ]\n for fn in hooks.get_hooks('register_account_settings_panel'):\n 
panel = fn(request, user, profile)\n if panel and panel.is_active():\n panels.append(panel)\n\n panels = [panel for panel in panels if panel.is_active()]\n\n # Get tabs and order them\n tabs = list(set(panel.tab for panel in panels))\n tabs.sort(key=lambda tab: tab.order)\n\n # Get dict of tabs to ordered panels\n panels_by_tab = OrderedDict([(tab, []) for tab in tabs])\n for panel in panels:\n panels_by_tab[panel.tab].append(panel)\n for tab, tab_panels in panels_by_tab.items():\n tab_panels.sort(key=lambda panel: panel.order)\n\n panel_forms = [panel.get_form() for panel in panels]\n\n if request.method == 'POST':\n\n if all(form.is_valid() or not form.is_bound for form in panel_forms):\n with transaction.atomic():\n for form in panel_forms:\n if form.is_bound:\n form.save()\n\n # Prevent a password change from logging this user out\n update_session_auth_hash(request, user)\n\n # Override the language when creating the success message\n # If the user has changed their language in this request, the message should\n # be in the new language, not the existing one\n with override(profile.get_preferred_language()):\n messages.success(request, _(\"Your account settings have been changed successfully!\"))\n\n return redirect('wagtailadmin_account')\n\n media = Media()\n for form in panel_forms:\n media += form.media\n\n # Menu items\n menu_items = []\n for fn in hooks.get_hooks('register_account_menu_item'):\n item = fn(request)\n if item:\n menu_items.append(item)\n\n return TemplateResponse(request, 'wagtailadmin/account/account.html', {\n 'panels_by_tab': panels_by_tab,\n 'menu_items': menu_items,\n 'media': media,\n })\n\n\nclass PasswordResetEnabledViewMixin:\n \"\"\"\n Class based view mixin that disables the view if password reset is disabled by one of the following settings:\n - WAGTAIL_PASSWORD_RESET_ENABLED\n - WAGTAIL_PASSWORD_MANAGEMENT_ENABLED\n \"\"\"\n def dispatch(self, *args, **kwargs):\n if not password_reset_enabled():\n raise Http404\n\n return super().dispatch(*args, **kwargs)\n\n\nclass PasswordResetView(PasswordResetEnabledViewMixin, auth_views.PasswordResetView):\n template_name = 'wagtailadmin/account/password_reset/form.html'\n email_template_name = 'wagtailadmin/account/password_reset/email.txt'\n subject_template_name = 'wagtailadmin/account/password_reset/email_subject.txt'\n form_class = PasswordResetForm\n success_url = reverse_lazy('wagtailadmin_password_reset_done')\n\n\nclass PasswordResetDoneView(PasswordResetEnabledViewMixin, auth_views.PasswordResetDoneView):\n template_name = 'wagtailadmin/account/password_reset/done.html'\n\n\nclass PasswordResetConfirmView(PasswordResetEnabledViewMixin, auth_views.PasswordResetConfirmView):\n template_name = 'wagtailadmin/account/password_reset/confirm.html'\n success_url = reverse_lazy('wagtailadmin_password_reset_complete')\n\n\nclass PasswordResetCompleteView(PasswordResetEnabledViewMixin, auth_views.PasswordResetCompleteView):\n template_name = 'wagtailadmin/account/password_reset/complete.html'\n\n\nclass LoginView(auth_views.LoginView):\n template_name = 'wagtailadmin/login.html'\n\n def get_success_url(self):\n return self.get_redirect_url() or reverse('wagtailadmin_home')\n\n def get(self, *args, **kwargs):\n # If user is already logged in, redirect them to the dashboard\n if self.request.user.is_authenticated and self.request.user.has_perm('wagtailadmin.access_admin'):\n return redirect(self.get_success_url())\n\n return super().get(*args, **kwargs)\n\n def get_form_class(self):\n return get_user_login_form()\n\n 
def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n context['show_password_reset'] = password_reset_enabled()\n\n from django.contrib.auth import get_user_model\n User = get_user_model()\n context['username_field'] = User._meta.get_field(User.USERNAME_FIELD).verbose_name\n\n return context\n\n\nclass LogoutView(auth_views.LogoutView):\n next_page = 'wagtailadmin_login'\n\n def dispatch(self, request, *args, **kwargs):\n response = super().dispatch(request, *args, **kwargs)\n\n messages.success(self.request, _('You have been successfully logged out.'))\n # By default, logging out will generate a fresh sessionid cookie. We want to use the\n # absence of sessionid as an indication that front-end pages are being viewed by a\n # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.\n response.delete_cookie(\n settings.SESSION_COOKIE_NAME,\n domain=settings.SESSION_COOKIE_DOMAIN,\n path=settings.SESSION_COOKIE_PATH\n )\n\n # HACK: pretend that the session hasn't been modified, so that SessionMiddleware\n # won't override the above and write a new cookie.\n self.request.session.modified = False\n\n return response\n", "path": "wagtail/admin/views/account.py"}]}
| 4,060 | 339 |
gh_patches_debug_5877
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-4354
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
South Korean electricity price is off by a factor of 1000x
Someone reached out to us on email with the following feedback:
Just want to mention that the electricity price in South Korea is probably off by a factor of 1000x.
I believe this is due to a different way of using the decimal point. Great Britain and the United States are two of the few places in the world that use a period to indicate the decimal place. Many other countries use a comma instead. The decimal separator is also called the radix character. Likewise, while the U.K. and U.S. use a comma to separate groups of thousands, many other countries use a period instead, and some countries separate thousands groups with a thin space.
--- END ISSUE ---
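The accepted fix further down in this record points to a unit mismatch rather than the radix character: the scraped SMP value appears to be quoted in KRW/kWh while the consumer expects KRW/MWh. A minimal sketch of that conversion, under the assumption that those are the units involved (the sample price is illustrative, not taken from this record):

```python
# Sketch only: assumes the KPX table reports SMP in KRW per kWh and the
# downstream consumer expects KRW per MWh. 1 MWh = 1000 kWh, hence the 1000x factor.
KWH_PER_MWH = 1000


def smp_to_krw_per_mwh(smp_krw_per_kwh: float) -> float:
    """Convert a System Marginal Price from KRW/kWh to KRW/MWh."""
    return smp_krw_per_kwh * KWH_PER_MWH


# Illustrative value: 105.46 KRW/kWh corresponds to 105,460 KRW/MWh.
print(smp_to_krw_per_mwh(105.46))
```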
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/KR.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import datetime
4 import json
5 import logging
6 import pprint
7 import re
8
9 import arrow
10 import pandas as pd
11 import requests
12 from bs4 import BeautifulSoup
13
14 from parsers.lib.config import refetch_frequency
15
16 TIMEZONE = "Asia/Seoul"
17 REAL_TIME_URL = "https://new.kpx.or.kr/powerinfoSubmain.es?mid=a10606030000"
18 PRICE_URL = "https://new.kpx.or.kr/smpInland.es?mid=a10606080100&device=pc"
19 LONG_TERM_PRODUCTION_URL = (
20 "https://new.kpx.or.kr/powerSource.es?mid=a10606030000&device=chart"
21 )
22
23 pp = pprint.PrettyPrinter(indent=4)
24
25 #### Classification of New & Renewable Energy Sources ####
26 # Source: https://cms.khnp.co.kr/eng/content/563/main.do?mnCd=EN040101
27 # New energy: Hydrogen, Fuel Cell, Coal liquefied or gasified energy, and vacuum residue gasified energy, etc.
28 # Renewable: Solar, Wind power, Water power, ocean energy, Geothermal, Bio energy, etc.
29
30 # src: https://stackoverflow.com/questions/3463930/how-to-round-the-minute-of-a-datetime-object
31 def time_floor(time, delta, epoch=None):
32 if epoch is None:
33 epoch = datetime.datetime(1970, 1, 1, tzinfo=time.tzinfo)
34 mod = (time - epoch) % delta
35 return time - mod
36
37
38 def extract_chart_data(html):
39 """
40 Extracts generation breakdown chart data from the source code of the page.
41 """
42 # Extract object with data
43 data_source = re.search(r"var ictArr = (\[\{.+\}\]);", html).group(1)
44 # Un-quoted keys ({key:"value"}) are valid JavaScript but not valid JSON (which requires {"key":"value"}).
45 # Will break if other keys than these are introduced. Alternatively, use a JSON5 library (JSON5 allows un-quoted keys)
46 data_source = re.sub(
47 r'"(localCoal|newRenewable|oil|once|gas|nuclearPower|coal|regDate|raisingWater|waterPower|seq)"',
48 r'"\1"',
49 data_source,
50 )
51 json_obj = json.loads(data_source)
52
53 timed_data = {}
54
55 for item in json_obj:
56 if item["regDate"] == "0":
57 break
58
59 date = datetime.datetime.strptime(item["regDate"], "%Y-%m-%d %H:%M")
60 date = arrow.get(date, TIMEZONE).datetime
61
62 timed_data[date] = {
63 "coal": round(float(item["coal"]) + float(item["localCoal"]), 5),
64 "gas": round(float(item["gas"]), 5),
65 "hydro": round(float(item["waterPower"]), 5),
66 "nuclear": round(float(item["nuclearPower"]), 5),
67 "oil": round(float(item["oil"]), 5),
68 "renewable": round(float(item["newRenewable"]), 5),
69 "pumpedHydro": round(float(item["raisingWater"]), 5),
70 }
71
72 return timed_data
73
74
75 @refetch_frequency(datetime.timedelta(minutes=5))
76 def fetch_consumption(
77 zone_key="KR",
78 session=None,
79 target_datetime=None,
80 logger=logging.getLogger(__name__),
81 ) -> dict:
82 """
83 Fetches consumption.
84 """
85
86 if target_datetime:
87 raise NotImplementedError("This parser is not yet able to parse past dates")
88
89 r = session or requests.session()
90 url = REAL_TIME_URL
91
92 response = r.get(url)
93 assert response.status_code == 200
94
95 soup = BeautifulSoup(response.text, "html.parser")
96 consumption_title = soup.find("th", string=re.compile(r"\s*현재부하\s*"))
97 consumption_val = float(
98 consumption_title.find_next_sibling().text.split()[0].replace(",", "")
99 )
100
101 consumption_date_list = soup.find("p", {"class": "info_top"}).text.split(" ")[:2]
102 consumption_date_list[0] = consumption_date_list[0].replace(".", "-").split("(")[0]
103 consumption_date = datetime.datetime.strptime(
104 " ".join(consumption_date_list), "%Y-%m-%d %H:%M"
105 )
106 consumption_date = arrow.get(consumption_date, TIMEZONE).datetime
107
108 data = {
109 "consumption": consumption_val,
110 "datetime": consumption_date,
111 "source": url,
112 "zoneKey": zone_key,
113 }
114
115 return data
116
117
118 @refetch_frequency(datetime.timedelta(hours=1))
119 def fetch_price(
120 zone_key="KR",
121 session=None,
122 target_datetime: datetime.datetime = None,
123 logger=logging.getLogger(__name__),
124 ):
125
126 first_available_date = time_floor(
127 arrow.now(TIMEZONE).shift(days=-6), datetime.timedelta(days=1)
128 ).shift(hours=1)
129
130 if target_datetime is not None and target_datetime < first_available_date:
131 raise NotImplementedError(
132 "This parser is not able to parse dates more than one week in the past."
133 )
134
135 if target_datetime is None:
136 target_datetime = arrow.now(TIMEZONE).datetime
137
138 r = session or requests.session()
139 url = PRICE_URL
140
141 response = r.get(url)
142 assert response.status_code == 200
143
144 all_data = []
145 table_prices = pd.read_html(response.text, header=0)[0]
146
147 for col_idx in range(1, table_prices.shape[1]):
148 for row_idx in range(24):
149
150 day = col_idx
151 hour = row_idx + 1
152
153 if hour == 24:
154 hour = 0
155 day += 1
156
157 arw_day = (
158 arrow.now(TIMEZONE)
159 .shift(days=-1 * (7 - day))
160 .replace(hour=hour, minute=0, second=0, microsecond=0)
161 )
162 price_value = table_prices.iloc[row_idx, col_idx]
163
164 data = {
165 "zoneKey": zone_key,
166 "datetime": arw_day.datetime,
167 "currency": "KRW",
168 "price": price_value,
169 "source": "new.kpx.or.kr",
170 }
171
172 all_data.append(data)
173
174 return all_data
175
176
177 def get_long_term_prod_data(
178 session=None, target_datetime: datetime.datetime = None
179 ) -> dict:
180 target_datetime_formatted_daily = target_datetime.strftime("%Y-%m-%d")
181
182 r = session or requests.session()
183
184 # CSRF token is needed to access the production data
185 r.get(LONG_TERM_PRODUCTION_URL)
186 cookies_dict = r.cookies.get_dict()
187
188 payload = {
189 "mid": "a10606030000",
190 "device": "chart",
191 "view_sdate": target_datetime_formatted_daily,
192 "view_edate": target_datetime_formatted_daily,
193 "_csrf": cookies_dict["XSRF-TOKEN"],
194 }
195
196 res = r.post(LONG_TERM_PRODUCTION_URL, payload)
197
198 assert res.status_code == 200
199
200 all_data = []
201
202 soup = BeautifulSoup(res.text, "html.parser")
203 table_rows = soup.find_all("tr")[1:]
204
205 for row in table_rows:
206
207 sanitized_date = [value[:-1] for value in row.find_all("td")[0].text.split(" ")]
208 curr_prod_datetime_string = (
209 "-".join(sanitized_date[:3]) + "T" + ":".join(sanitized_date[3:]) + ":00"
210 )
211 arw_datetime = arrow.get(
212 curr_prod_datetime_string, "YYYY-MM-DDTHH:mm:ss", tzinfo=TIMEZONE
213 ).datetime
214
215 data = {
216 "zoneKey": "KR",
217 "datetime": arw_datetime,
218 "capacity": {},
219 "production": {},
220 "storage": {},
221 "source": "https://new.kpx.or.kr",
222 }
223
224 row_values = row.find_all("td")
225 production_values = [
226 int("".join(value.text.split(","))) for value in row_values[1:]
227 ]
228
229 # order of production_values
230 # 0. other, 1. gas, 2. renewable, 3. coal, 4. nuclear
231 # other can be negative as well as positive due to pumped hydro
232
233 data["datetime"] = arw_datetime
234 data["production"]["unknown"] = production_values[0] + production_values[2]
235 data["production"]["gas"] = production_values[1]
236 data["production"]["coal"] = production_values[3]
237 data["production"]["nuclear"] = production_values[4]
238
239 all_data.append(data)
240
241 return all_data
242
243
244 def get_granular_real_time_prod_data(session=None) -> dict:
245 r0 = session or requests.session()
246 res_0 = r0.get(REAL_TIME_URL)
247 chart_data = extract_chart_data(res_0.text)
248
249 return chart_data
250
251
252 @refetch_frequency(datetime.timedelta(minutes=5))
253 def fetch_production(
254 zone_key="KR",
255 session=None,
256 target_datetime: datetime.datetime = None,
257 logger: logging.Logger = logging.getLogger(__name__),
258 ) -> dict:
259
260 if target_datetime is not None and target_datetime < arrow.get(
261 2021, 12, 22, 0, 0, 0, tzinfo=TIMEZONE
262 ):
263 raise NotImplementedError(
264 "This parser is not able to parse dates before 2021-12-22."
265 )
266
267 if target_datetime is None:
268 target_datetime = arrow.now(TIMEZONE).datetime
269
270 all_data = []
271
272 if target_datetime.date() == arrow.now(TIMEZONE).date():
273 chart_data = get_granular_real_time_prod_data(session=session)
274
275 for datetime_key, chart_data_values in chart_data.items():
276 data = {
277 "zoneKey": "KR",
278 "datetime": datetime_key,
279 "capacity": {},
280 "production": {},
281 "storage": {},
282 "source": "https://new.kpx.or.kr",
283 }
284
285 data["storage"]["hydro"] = chart_data_values["pumpedHydro"]
286
287 data["production"]["coal"] = chart_data_values["coal"]
288 data["production"]["gas"] = chart_data_values["gas"]
289 data["production"]["nuclear"] = chart_data_values["nuclear"]
290 data["production"]["oil"] = chart_data_values["oil"]
291 data["production"]["hydro"] = chart_data_values["hydro"]
292 data["production"]["unknown"] = chart_data_values["renewable"]
293
294 all_data.append(data)
295
296 else:
297 all_data = get_long_term_prod_data(
298 session=session, target_datetime=target_datetime
299 )
300
301 return all_data
302
303
304 if __name__ == "__main__":
305 # Testing datetime on specific date
306 target_datetime = arrow.get(2022, 2, 7, 16, 35, 0, tzinfo=TIMEZONE).datetime
307
308 print("fetch_production() ->")
309 # pp.pprint(fetch_production(target_datetime=target_datetime))
310 pp.pprint(fetch_production())
311
312 print("fetch_price() -> ")
313 # pp.pprint(fetch_price(target_datetime=target_datetime))
314 pp.pprint(fetch_price())
315
316 print("fetch_consumption() -> ")
317 pp.pprint(fetch_consumption())
318
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/KR.py b/parsers/KR.py
--- a/parsers/KR.py
+++ b/parsers/KR.py
@@ -159,7 +159,9 @@
.shift(days=-1 * (7 - day))
.replace(hour=hour, minute=0, second=0, microsecond=0)
)
- price_value = table_prices.iloc[row_idx, col_idx]
+ price_value = (
+ table_prices.iloc[row_idx, col_idx] * 1000
+ ) # Convert from Won/kWh to Won/MWh
data = {
"zoneKey": zone_key,
|
{"golden_diff": "diff --git a/parsers/KR.py b/parsers/KR.py\n--- a/parsers/KR.py\n+++ b/parsers/KR.py\n@@ -159,7 +159,9 @@\n .shift(days=-1 * (7 - day))\n .replace(hour=hour, minute=0, second=0, microsecond=0)\n )\n- price_value = table_prices.iloc[row_idx, col_idx]\n+ price_value = (\n+ table_prices.iloc[row_idx, col_idx] * 1000\n+ ) # Convert from Won/kWh to Won/MWh\n \n data = {\n \"zoneKey\": zone_key,\n", "issue": "South Korean electricity price is off by a factor of 1000x\nSomeone reached out to us on email with the following feedback:\r\n\r\nJust want to mention, that the electricity price in South Korea is probably off by factor of 1000x.\r\nI believe this is due to different way to use decimal point. Great Britain and the United States are two of the few places in the world that use a period to indicate the decimal place. Many other countries use a comma instead. The decimal separator is also called the radix character. Likewise, while the U.K. and U.S. use a comma to separate groups of thousands, many other countries use a period instead, and some countries separate thousands groups with a thin space.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport datetime\nimport json\nimport logging\nimport pprint\nimport re\n\nimport arrow\nimport pandas as pd\nimport requests\nfrom bs4 import BeautifulSoup\n\nfrom parsers.lib.config import refetch_frequency\n\nTIMEZONE = \"Asia/Seoul\"\nREAL_TIME_URL = \"https://new.kpx.or.kr/powerinfoSubmain.es?mid=a10606030000\"\nPRICE_URL = \"https://new.kpx.or.kr/smpInland.es?mid=a10606080100&device=pc\"\nLONG_TERM_PRODUCTION_URL = (\n \"https://new.kpx.or.kr/powerSource.es?mid=a10606030000&device=chart\"\n)\n\npp = pprint.PrettyPrinter(indent=4)\n\n#### Classification of New & Renewable Energy Sources ####\n# Source: https://cms.khnp.co.kr/eng/content/563/main.do?mnCd=EN040101\n# New energy: Hydrogen, Fuel Cell, Coal liquefied or gasified energy, and vacuum residue gasified energy, etc.\n# Renewable: Solar, Wind power, Water power, ocean energy, Geothermal, Bio energy, etc.\n\n# src: https://stackoverflow.com/questions/3463930/how-to-round-the-minute-of-a-datetime-object\ndef time_floor(time, delta, epoch=None):\n if epoch is None:\n epoch = datetime.datetime(1970, 1, 1, tzinfo=time.tzinfo)\n mod = (time - epoch) % delta\n return time - mod\n\n\ndef extract_chart_data(html):\n \"\"\"\n Extracts generation breakdown chart data from the source code of the page.\n \"\"\"\n # Extract object with data\n data_source = re.search(r\"var ictArr = (\\[\\{.+\\}\\]);\", html).group(1)\n # Un-quoted keys ({key:\"value\"}) are valid JavaScript but not valid JSON (which requires {\"key\":\"value\"}).\n # Will break if other keys than these are introduced. 
Alternatively, use a JSON5 library (JSON5 allows un-quoted keys)\n data_source = re.sub(\n r'\"(localCoal|newRenewable|oil|once|gas|nuclearPower|coal|regDate|raisingWater|waterPower|seq)\"',\n r'\"\\1\"',\n data_source,\n )\n json_obj = json.loads(data_source)\n\n timed_data = {}\n\n for item in json_obj:\n if item[\"regDate\"] == \"0\":\n break\n\n date = datetime.datetime.strptime(item[\"regDate\"], \"%Y-%m-%d %H:%M\")\n date = arrow.get(date, TIMEZONE).datetime\n\n timed_data[date] = {\n \"coal\": round(float(item[\"coal\"]) + float(item[\"localCoal\"]), 5),\n \"gas\": round(float(item[\"gas\"]), 5),\n \"hydro\": round(float(item[\"waterPower\"]), 5),\n \"nuclear\": round(float(item[\"nuclearPower\"]), 5),\n \"oil\": round(float(item[\"oil\"]), 5),\n \"renewable\": round(float(item[\"newRenewable\"]), 5),\n \"pumpedHydro\": round(float(item[\"raisingWater\"]), 5),\n }\n\n return timed_data\n\n\n@refetch_frequency(datetime.timedelta(minutes=5))\ndef fetch_consumption(\n zone_key=\"KR\",\n session=None,\n target_datetime=None,\n logger=logging.getLogger(__name__),\n) -> dict:\n \"\"\"\n Fetches consumption.\n \"\"\"\n\n if target_datetime:\n raise NotImplementedError(\"This parser is not yet able to parse past dates\")\n\n r = session or requests.session()\n url = REAL_TIME_URL\n\n response = r.get(url)\n assert response.status_code == 200\n\n soup = BeautifulSoup(response.text, \"html.parser\")\n consumption_title = soup.find(\"th\", string=re.compile(r\"\\s*\ud604\uc7ac\ubd80\ud558\\s*\"))\n consumption_val = float(\n consumption_title.find_next_sibling().text.split()[0].replace(\",\", \"\")\n )\n\n consumption_date_list = soup.find(\"p\", {\"class\": \"info_top\"}).text.split(\" \")[:2]\n consumption_date_list[0] = consumption_date_list[0].replace(\".\", \"-\").split(\"(\")[0]\n consumption_date = datetime.datetime.strptime(\n \" \".join(consumption_date_list), \"%Y-%m-%d %H:%M\"\n )\n consumption_date = arrow.get(consumption_date, TIMEZONE).datetime\n\n data = {\n \"consumption\": consumption_val,\n \"datetime\": consumption_date,\n \"source\": url,\n \"zoneKey\": zone_key,\n }\n\n return data\n\n\n@refetch_frequency(datetime.timedelta(hours=1))\ndef fetch_price(\n zone_key=\"KR\",\n session=None,\n target_datetime: datetime.datetime = None,\n logger=logging.getLogger(__name__),\n):\n\n first_available_date = time_floor(\n arrow.now(TIMEZONE).shift(days=-6), datetime.timedelta(days=1)\n ).shift(hours=1)\n\n if target_datetime is not None and target_datetime < first_available_date:\n raise NotImplementedError(\n \"This parser is not able to parse dates more than one week in the past.\"\n )\n\n if target_datetime is None:\n target_datetime = arrow.now(TIMEZONE).datetime\n\n r = session or requests.session()\n url = PRICE_URL\n\n response = r.get(url)\n assert response.status_code == 200\n\n all_data = []\n table_prices = pd.read_html(response.text, header=0)[0]\n\n for col_idx in range(1, table_prices.shape[1]):\n for row_idx in range(24):\n\n day = col_idx\n hour = row_idx + 1\n\n if hour == 24:\n hour = 0\n day += 1\n\n arw_day = (\n arrow.now(TIMEZONE)\n .shift(days=-1 * (7 - day))\n .replace(hour=hour, minute=0, second=0, microsecond=0)\n )\n price_value = table_prices.iloc[row_idx, col_idx]\n\n data = {\n \"zoneKey\": zone_key,\n \"datetime\": arw_day.datetime,\n \"currency\": \"KRW\",\n \"price\": price_value,\n \"source\": \"new.kpx.or.kr\",\n }\n\n all_data.append(data)\n\n return all_data\n\n\ndef get_long_term_prod_data(\n session=None, target_datetime: datetime.datetime = 
None\n) -> dict:\n target_datetime_formatted_daily = target_datetime.strftime(\"%Y-%m-%d\")\n\n r = session or requests.session()\n\n # CSRF token is needed to access the production data\n r.get(LONG_TERM_PRODUCTION_URL)\n cookies_dict = r.cookies.get_dict()\n\n payload = {\n \"mid\": \"a10606030000\",\n \"device\": \"chart\",\n \"view_sdate\": target_datetime_formatted_daily,\n \"view_edate\": target_datetime_formatted_daily,\n \"_csrf\": cookies_dict[\"XSRF-TOKEN\"],\n }\n\n res = r.post(LONG_TERM_PRODUCTION_URL, payload)\n\n assert res.status_code == 200\n\n all_data = []\n\n soup = BeautifulSoup(res.text, \"html.parser\")\n table_rows = soup.find_all(\"tr\")[1:]\n\n for row in table_rows:\n\n sanitized_date = [value[:-1] for value in row.find_all(\"td\")[0].text.split(\" \")]\n curr_prod_datetime_string = (\n \"-\".join(sanitized_date[:3]) + \"T\" + \":\".join(sanitized_date[3:]) + \":00\"\n )\n arw_datetime = arrow.get(\n curr_prod_datetime_string, \"YYYY-MM-DDTHH:mm:ss\", tzinfo=TIMEZONE\n ).datetime\n\n data = {\n \"zoneKey\": \"KR\",\n \"datetime\": arw_datetime,\n \"capacity\": {},\n \"production\": {},\n \"storage\": {},\n \"source\": \"https://new.kpx.or.kr\",\n }\n\n row_values = row.find_all(\"td\")\n production_values = [\n int(\"\".join(value.text.split(\",\"))) for value in row_values[1:]\n ]\n\n # order of production_values\n # 0. other, 1. gas, 2. renewable, 3. coal, 4. nuclear\n # other can be negative as well as positive due to pumped hydro\n\n data[\"datetime\"] = arw_datetime\n data[\"production\"][\"unknown\"] = production_values[0] + production_values[2]\n data[\"production\"][\"gas\"] = production_values[1]\n data[\"production\"][\"coal\"] = production_values[3]\n data[\"production\"][\"nuclear\"] = production_values[4]\n\n all_data.append(data)\n\n return all_data\n\n\ndef get_granular_real_time_prod_data(session=None) -> dict:\n r0 = session or requests.session()\n res_0 = r0.get(REAL_TIME_URL)\n chart_data = extract_chart_data(res_0.text)\n\n return chart_data\n\n\n@refetch_frequency(datetime.timedelta(minutes=5))\ndef fetch_production(\n zone_key=\"KR\",\n session=None,\n target_datetime: datetime.datetime = None,\n logger: logging.Logger = logging.getLogger(__name__),\n) -> dict:\n\n if target_datetime is not None and target_datetime < arrow.get(\n 2021, 12, 22, 0, 0, 0, tzinfo=TIMEZONE\n ):\n raise NotImplementedError(\n \"This parser is not able to parse dates before 2021-12-22.\"\n )\n\n if target_datetime is None:\n target_datetime = arrow.now(TIMEZONE).datetime\n\n all_data = []\n\n if target_datetime.date() == arrow.now(TIMEZONE).date():\n chart_data = get_granular_real_time_prod_data(session=session)\n\n for datetime_key, chart_data_values in chart_data.items():\n data = {\n \"zoneKey\": \"KR\",\n \"datetime\": datetime_key,\n \"capacity\": {},\n \"production\": {},\n \"storage\": {},\n \"source\": \"https://new.kpx.or.kr\",\n }\n\n data[\"storage\"][\"hydro\"] = chart_data_values[\"pumpedHydro\"]\n\n data[\"production\"][\"coal\"] = chart_data_values[\"coal\"]\n data[\"production\"][\"gas\"] = chart_data_values[\"gas\"]\n data[\"production\"][\"nuclear\"] = chart_data_values[\"nuclear\"]\n data[\"production\"][\"oil\"] = chart_data_values[\"oil\"]\n data[\"production\"][\"hydro\"] = chart_data_values[\"hydro\"]\n data[\"production\"][\"unknown\"] = chart_data_values[\"renewable\"]\n\n all_data.append(data)\n\n else:\n all_data = get_long_term_prod_data(\n session=session, target_datetime=target_datetime\n )\n\n return all_data\n\n\nif __name__ == 
\"__main__\":\n # Testing datetime on specific date\n target_datetime = arrow.get(2022, 2, 7, 16, 35, 0, tzinfo=TIMEZONE).datetime\n\n print(\"fetch_production() ->\")\n # pp.pprint(fetch_production(target_datetime=target_datetime))\n pp.pprint(fetch_production())\n\n print(\"fetch_price() -> \")\n # pp.pprint(fetch_price(target_datetime=target_datetime))\n pp.pprint(fetch_price())\n\n print(\"fetch_consumption() -> \")\n pp.pprint(fetch_consumption())\n", "path": "parsers/KR.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport datetime\nimport json\nimport logging\nimport pprint\nimport re\n\nimport arrow\nimport pandas as pd\nimport requests\nfrom bs4 import BeautifulSoup\n\nfrom parsers.lib.config import refetch_frequency\n\nTIMEZONE = \"Asia/Seoul\"\nREAL_TIME_URL = \"https://new.kpx.or.kr/powerinfoSubmain.es?mid=a10606030000\"\nPRICE_URL = \"https://new.kpx.or.kr/smpInland.es?mid=a10606080100&device=pc\"\nLONG_TERM_PRODUCTION_URL = (\n \"https://new.kpx.or.kr/powerSource.es?mid=a10606030000&device=chart\"\n)\n\npp = pprint.PrettyPrinter(indent=4)\n\n#### Classification of New & Renewable Energy Sources ####\n# Source: https://cms.khnp.co.kr/eng/content/563/main.do?mnCd=EN040101\n# New energy: Hydrogen, Fuel Cell, Coal liquefied or gasified energy, and vacuum residue gasified energy, etc.\n# Renewable: Solar, Wind power, Water power, ocean energy, Geothermal, Bio energy, etc.\n\n# src: https://stackoverflow.com/questions/3463930/how-to-round-the-minute-of-a-datetime-object\ndef time_floor(time, delta, epoch=None):\n if epoch is None:\n epoch = datetime.datetime(1970, 1, 1, tzinfo=time.tzinfo)\n mod = (time - epoch) % delta\n return time - mod\n\n\ndef extract_chart_data(html):\n \"\"\"\n Extracts generation breakdown chart data from the source code of the page.\n \"\"\"\n # Extract object with data\n data_source = re.search(r\"var ictArr = (\\[\\{.+\\}\\]);\", html).group(1)\n # Un-quoted keys ({key:\"value\"}) are valid JavaScript but not valid JSON (which requires {\"key\":\"value\"}).\n # Will break if other keys than these are introduced. 
Alternatively, use a JSON5 library (JSON5 allows un-quoted keys)\n data_source = re.sub(\n r'\"(localCoal|newRenewable|oil|once|gas|nuclearPower|coal|regDate|raisingWater|waterPower|seq)\"',\n r'\"\\1\"',\n data_source,\n )\n json_obj = json.loads(data_source)\n\n timed_data = {}\n\n for item in json_obj:\n if item[\"regDate\"] == \"0\":\n break\n\n date = datetime.datetime.strptime(item[\"regDate\"], \"%Y-%m-%d %H:%M\")\n date = arrow.get(date, TIMEZONE).datetime\n\n timed_data[date] = {\n \"coal\": round(float(item[\"coal\"]) + float(item[\"localCoal\"]), 5),\n \"gas\": round(float(item[\"gas\"]), 5),\n \"hydro\": round(float(item[\"waterPower\"]), 5),\n \"nuclear\": round(float(item[\"nuclearPower\"]), 5),\n \"oil\": round(float(item[\"oil\"]), 5),\n \"renewable\": round(float(item[\"newRenewable\"]), 5),\n \"pumpedHydro\": round(float(item[\"raisingWater\"]), 5),\n }\n\n return timed_data\n\n\n@refetch_frequency(datetime.timedelta(minutes=5))\ndef fetch_consumption(\n zone_key=\"KR\",\n session=None,\n target_datetime=None,\n logger=logging.getLogger(__name__),\n) -> dict:\n \"\"\"\n Fetches consumption.\n \"\"\"\n\n if target_datetime:\n raise NotImplementedError(\"This parser is not yet able to parse past dates\")\n\n r = session or requests.session()\n url = REAL_TIME_URL\n\n response = r.get(url)\n assert response.status_code == 200\n\n soup = BeautifulSoup(response.text, \"html.parser\")\n consumption_title = soup.find(\"th\", string=re.compile(r\"\\s*\ud604\uc7ac\ubd80\ud558\\s*\"))\n consumption_val = float(\n consumption_title.find_next_sibling().text.split()[0].replace(\",\", \"\")\n )\n\n consumption_date_list = soup.find(\"p\", {\"class\": \"info_top\"}).text.split(\" \")[:2]\n consumption_date_list[0] = consumption_date_list[0].replace(\".\", \"-\").split(\"(\")[0]\n consumption_date = datetime.datetime.strptime(\n \" \".join(consumption_date_list), \"%Y-%m-%d %H:%M\"\n )\n consumption_date = arrow.get(consumption_date, TIMEZONE).datetime\n\n data = {\n \"consumption\": consumption_val,\n \"datetime\": consumption_date,\n \"source\": url,\n \"zoneKey\": zone_key,\n }\n\n return data\n\n\n@refetch_frequency(datetime.timedelta(hours=1))\ndef fetch_price(\n zone_key=\"KR\",\n session=None,\n target_datetime: datetime.datetime = None,\n logger=logging.getLogger(__name__),\n):\n\n first_available_date = time_floor(\n arrow.now(TIMEZONE).shift(days=-6), datetime.timedelta(days=1)\n ).shift(hours=1)\n\n if target_datetime is not None and target_datetime < first_available_date:\n raise NotImplementedError(\n \"This parser is not able to parse dates more than one week in the past.\"\n )\n\n if target_datetime is None:\n target_datetime = arrow.now(TIMEZONE).datetime\n\n r = session or requests.session()\n url = PRICE_URL\n\n response = r.get(url)\n assert response.status_code == 200\n\n all_data = []\n table_prices = pd.read_html(response.text, header=0)[0]\n\n for col_idx in range(1, table_prices.shape[1]):\n for row_idx in range(24):\n\n day = col_idx\n hour = row_idx + 1\n\n if hour == 24:\n hour = 0\n day += 1\n\n arw_day = (\n arrow.now(TIMEZONE)\n .shift(days=-1 * (7 - day))\n .replace(hour=hour, minute=0, second=0, microsecond=0)\n )\n price_value = (\n table_prices.iloc[row_idx, col_idx] * 1000\n ) # Convert from Won/kWh to Won/MWh\n\n data = {\n \"zoneKey\": zone_key,\n \"datetime\": arw_day.datetime,\n \"currency\": \"KRW\",\n \"price\": price_value,\n \"source\": \"new.kpx.or.kr\",\n }\n\n all_data.append(data)\n\n return all_data\n\n\ndef get_long_term_prod_data(\n 
session=None, target_datetime: datetime.datetime = None\n) -> dict:\n target_datetime_formatted_daily = target_datetime.strftime(\"%Y-%m-%d\")\n\n r = session or requests.session()\n\n # CSRF token is needed to access the production data\n r.get(LONG_TERM_PRODUCTION_URL)\n cookies_dict = r.cookies.get_dict()\n\n payload = {\n \"mid\": \"a10606030000\",\n \"device\": \"chart\",\n \"view_sdate\": target_datetime_formatted_daily,\n \"view_edate\": target_datetime_formatted_daily,\n \"_csrf\": cookies_dict[\"XSRF-TOKEN\"],\n }\n\n res = r.post(LONG_TERM_PRODUCTION_URL, payload)\n\n assert res.status_code == 200\n\n all_data = []\n\n soup = BeautifulSoup(res.text, \"html.parser\")\n table_rows = soup.find_all(\"tr\")[1:]\n\n for row in table_rows:\n\n sanitized_date = [value[:-1] for value in row.find_all(\"td\")[0].text.split(\" \")]\n curr_prod_datetime_string = (\n \"-\".join(sanitized_date[:3]) + \"T\" + \":\".join(sanitized_date[3:]) + \":00\"\n )\n arw_datetime = arrow.get(\n curr_prod_datetime_string, \"YYYY-MM-DDTHH:mm:ss\", tzinfo=TIMEZONE\n ).datetime\n\n data = {\n \"zoneKey\": \"KR\",\n \"datetime\": arw_datetime,\n \"capacity\": {},\n \"production\": {},\n \"storage\": {},\n \"source\": \"https://new.kpx.or.kr\",\n }\n\n row_values = row.find_all(\"td\")\n production_values = [\n int(\"\".join(value.text.split(\",\"))) for value in row_values[1:]\n ]\n\n # order of production_values\n # 0. other, 1. gas, 2. renewable, 3. coal, 4. nuclear\n # other can be negative as well as positive due to pumped hydro\n\n data[\"datetime\"] = arw_datetime\n data[\"production\"][\"unknown\"] = production_values[0] + production_values[2]\n data[\"production\"][\"gas\"] = production_values[1]\n data[\"production\"][\"coal\"] = production_values[3]\n data[\"production\"][\"nuclear\"] = production_values[4]\n\n all_data.append(data)\n\n return all_data\n\n\ndef get_granular_real_time_prod_data(session=None) -> dict:\n r0 = session or requests.session()\n res_0 = r0.get(REAL_TIME_URL)\n chart_data = extract_chart_data(res_0.text)\n\n return chart_data\n\n\n@refetch_frequency(datetime.timedelta(minutes=5))\ndef fetch_production(\n zone_key=\"KR\",\n session=None,\n target_datetime: datetime.datetime = None,\n logger: logging.Logger = logging.getLogger(__name__),\n) -> dict:\n\n if target_datetime is not None and target_datetime < arrow.get(\n 2021, 12, 22, 0, 0, 0, tzinfo=TIMEZONE\n ):\n raise NotImplementedError(\n \"This parser is not able to parse dates before 2021-12-22.\"\n )\n\n if target_datetime is None:\n target_datetime = arrow.now(TIMEZONE).datetime\n\n all_data = []\n\n if target_datetime.date() == arrow.now(TIMEZONE).date():\n chart_data = get_granular_real_time_prod_data(session=session)\n\n for datetime_key, chart_data_values in chart_data.items():\n data = {\n \"zoneKey\": \"KR\",\n \"datetime\": datetime_key,\n \"capacity\": {},\n \"production\": {},\n \"storage\": {},\n \"source\": \"https://new.kpx.or.kr\",\n }\n\n data[\"storage\"][\"hydro\"] = chart_data_values[\"pumpedHydro\"]\n\n data[\"production\"][\"coal\"] = chart_data_values[\"coal\"]\n data[\"production\"][\"gas\"] = chart_data_values[\"gas\"]\n data[\"production\"][\"nuclear\"] = chart_data_values[\"nuclear\"]\n data[\"production\"][\"oil\"] = chart_data_values[\"oil\"]\n data[\"production\"][\"hydro\"] = chart_data_values[\"hydro\"]\n data[\"production\"][\"unknown\"] = chart_data_values[\"renewable\"]\n\n all_data.append(data)\n\n else:\n all_data = get_long_term_prod_data(\n session=session, 
target_datetime=target_datetime\n )\n\n return all_data\n\n\nif __name__ == \"__main__\":\n # Testing datetime on specific date\n target_datetime = arrow.get(2022, 2, 7, 16, 35, 0, tzinfo=TIMEZONE).datetime\n\n print(\"fetch_production() ->\")\n # pp.pprint(fetch_production(target_datetime=target_datetime))\n pp.pprint(fetch_production())\n\n print(\"fetch_price() -> \")\n # pp.pprint(fetch_price(target_datetime=target_datetime))\n pp.pprint(fetch_price())\n\n print(\"fetch_consumption() -> \")\n pp.pprint(fetch_consumption())\n", "path": "parsers/KR.py"}]}
| 3,778 | 145 |
gh_patches_debug_62609
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-4907
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py`
Content:
```
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3
4 from types import MethodType
5 from typing import Optional, Tuple
6
7 import torch
8 import torch.nn.functional as F
9 from transformers.models.llama.modeling_llama import (
10 LlamaRMSNorm,
11 LlamaAttention,
12 LlamaModel,
13 LlamaForCausalLM,
14 apply_rotary_pos_emb,
15 repeat_kv,
16 )
17
18 from colossalai.logging import get_dist_logger
19 from einops import rearrange
20
21 from flash_attn.bert_padding import pad_input, unpad_input
22 from flash_attn.flash_attn_interface import (
23 flash_attn_func,
24 flash_attn_varlen_kvpacked_func,
25 )
26 from flash_attn.ops.rms_norm import rms_norm
27
28
29 logger = get_dist_logger()
30
31
32 def _prepare_decoder_attention_mask(
33 self: LlamaModel,
34 attention_mask: torch.BoolTensor,
35 input_shape: torch.Size,
36 inputs_embeds: torch.Tensor,
37 past_key_values_length: int,
38 ) -> Optional[torch.Tensor]:
39 """
40 Decoder attetion mask
41 """
42 if past_key_values_length > 0 and attention_mask is not None:
43 attention_mask = torch.cat(
44 tensors=(
45 torch.full(
46 size=(input_shape[0], past_key_values_length),
47 fill_value=True,
48 dtype=attention_mask.dtype,
49 device=attention_mask.device,
50 ),
51 attention_mask,
52 ),
53 dim=-1,
54 ) # (bsz, past_key_values_length + q_len)
55 if attention_mask is not None and torch.all(attention_mask):
56 return None # Faster
57 return attention_mask
58
59
60 def attention_forward(
61 self: LlamaAttention,
62 hidden_states: torch.Tensor,
63 attention_mask: Optional[torch.Tensor] = None,
64 position_ids: Optional[torch.LongTensor] = None,
65 past_key_value: Optional[Tuple[torch.Tensor]] = None,
66 output_attentions: bool = False,
67 use_cache: bool = False,
68 ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
69 """
70 Re-define LLaMA-2 `LlamaAttention` forward method using flash-attention.
71 """
72 if output_attentions:
73 logger.warning(
74 "Argument `output_attentions` is not supported for flash-attention patched `LlamaAttention`, "
75 "return `None` instead."
76 )
77
78 bsz, q_len, _ = hidden_states.size()
79
80 if self.config.pretraining_tp > 1:
81 q_slicing, kv_slicing = (
82 dim // self.config.pretraining_tp
83 for dim in (
84 self.num_heads * self.head_dim,
85 self.num_key_value_heads * self.head_dim,
86 )
87 ) # `Tuple[int, int]`
88 q_slices, k_slices, v_slices = (
89 proj.weight.split(slicing, dim=0)
90 for proj, slicing in (
91 (self.q_proj, q_slicing),
92 (self.k_proj, kv_slicing),
93 (self.v_proj, kv_slicing),
94 )
95 ) # Tuple[Tuple[torch.Tensor], Tuple[torch.Tensor], Tuple[torch.Tensor]]
96 q, k, v = (
97 torch.cat(
98 [F.linear(hidden_states, slices[i]) for i in range(self.config.pretraining_tp)],
99 dim=-1,
100 )
101 for slices in (q_slices, k_slices, v_slices)
102 )
103 # `Tuple[torch.Tensor, torch.Tensor, torch.Tensor]` of shape:
104 # (bsz, q_len, num_heads * head_dim),
105 # (bsz, q_len, num_key_value_heads * head_dim),
106 # (bsz, q_len, num_key_value_heads * head_dim)
107 else:
108 q, k, v = (proj(hidden_states) for proj in (self.q_proj, self.k_proj, self.v_proj))
109 # `Tuple[torch.Tensor, torch.Tensor, torch.Tensor]` of shape:
110 # (bsz, q_len, num_heads * head_dim),
111 # (bsz, q_len, num_key_value_heads * head_dim),
112 # (bsz, q_len, num_key_value_heads * head_dim)
113
114 # (bsz, q_len, num_heads * head_dim) -> (bsz, num_heads, q_len, head_dim);
115 # (bsz, q_len, num_key_value_heads * head_dim) -> (bsz, num_key_value_heads, q_len, head_dim);
116 # (bsz, q_len, num_key_value_heads * head_dim) -> (bsz, num_key_value_heads, q_len, head_dim)
117 q, k, v = (
118 states.view(bsz, q_len, num_heads, self.head_dim).transpose(1, 2)
119 for states, num_heads in (
120 (q, self.num_heads),
121 (k, self.num_key_value_heads),
122 (v, self.num_key_value_heads),
123 )
124 )
125 kv_len = k.shape[-2] # initially, `kv_len` == `q_len`
126 past_kv_len = 0
127 if past_key_value is not None:
128 # if `past_key_value` is not None, `kv_len` > `q_len`.
129 past_kv_len = past_key_value[0].shape[-2]
130 kv_len += past_kv_len
131
132 # two `torch.Tensor` objs of shape (1, 1, kv_len, head_dim)
133 cos, sin = self.rotary_emb(v, seq_len=kv_len)
134 # (bsz, num_heads, q_len, head_dim), (bsz, num_key_value_heads, q_len, head_dim)
135 q, k = apply_rotary_pos_emb(q=q, k=k, cos=cos, sin=sin, position_ids=position_ids)
136 if past_key_value is not None:
137 # reuse k, v, self_attention
138 k = torch.cat([past_key_value[0], k], dim=2)
139 v = torch.cat([past_key_value[1], v], dim=2)
140
141 past_key_value = (k, v) if use_cache else None
142
143 # repeat k/v heads if n_kv_heads < n_heads
144 k = repeat_kv(hidden_states=k, n_rep=self.num_key_value_groups)
145 # (bsz, num_key_value_heads, q_len, head_dim) -> (bsz, num_heads, q_len, head_dim)
146 v = repeat_kv(hidden_states=v, n_rep=self.num_key_value_groups)
147 # (bsz, num_key_value_heads, q_len, head_dim) -> (bsz, num_heads, q_len, head_dim)
148
149 key_padding_mask = attention_mask
150 # (bsz, num_heads, q_len, head_dim) -> (bsz, q_len, num_heads, head_dim)
151 q, k, v = (states.transpose(1, 2) for states in (q, k, v))
152
153 if past_kv_len > 0:
154 q = torch.cat(
155 tensors=(
156 torch.full(
157 size=(bsz, past_kv_len, self.num_heads, self.head_dim),
158 fill_value=0.0,
159 dtype=q.dtype,
160 device=q.device,
161 ),
162 q,
163 ),
164 dim=1,
165 ) # (bsz, past_kv_len + q_len, num_heads, head_dim)
166
167 if key_padding_mask is None:
168 # (bsz, past_kv_len + q_len, num_heads, head_dim)
169 output = flash_attn_func(q=q, k=k, v=v, dropout_p=0.0, softmax_scale=None, causal=True) # (bsz, )
170 output = rearrange(output, pattern="... h d -> ... (h d)") # (bsz, past_kv_len + q_len, num_heads * head_dim)
171 else:
172 q, indices, cu_q_lens, max_q_len = unpad_input(hidden_states=q, attention_mask=key_padding_mask)
173 kv, _, cu_kv_lens, max_kv_len = unpad_input(
174 hidden_states=torch.stack(tensors=(k, v), dim=2),
175 attention_mask=key_padding_mask,
176 )
177 output_unpad = flash_attn_varlen_kvpacked_func(
178 q=q,
179 kv=kv,
180 cu_seqlens_q=cu_q_lens,
181 cu_seqlens_k=cu_kv_lens,
182 max_seqlen_q=max_q_len,
183 max_seqlen_k=max_kv_len,
184 dropout_p=0.0,
185 softmax_scale=None,
186 causal=True,
187 )
188 output = pad_input(
189 hidden_states=rearrange(output_unpad, pattern="nnz h d -> nnz (h d)"),
190 indices=indices,
191 batch=bsz,
192 seqlen=past_kv_len + q_len,
193 ) # (bsz, past_kv_len + q_len, num_heads * head_dim)
194
195 if past_kv_len > 0:
196 # Strip off the zero query outputs.
197 output = output[:, past_kv_len:, ...] # (bsz, q_len, num_heads * head_dim)
198 output = self.o_proj(output) # (bsz, q_len, hidden_size)
199 return output, None, past_key_value
200
201
202 def rms_norm_forward(self: LlamaRMSNorm, hidden_states: torch.Tensor) -> torch.Tensor:
203 """
204 Formard function for RMS Norm
205 """
206 return rms_norm(x=hidden_states, weight=self.weight, epsilon=self.variance_epsilon)
207
208
209 def replace_with_flash_attention(model: LlamaForCausalLM) -> None:
210 for name, module in model.named_modules():
211 if isinstance(module, LlamaAttention):
212 module.forward = MethodType(attention_forward, module)
213 if isinstance(module, LlamaModel):
214 module._prepare_decoder_attention_mask = MethodType(_prepare_decoder_attention_mask, module)
215 if isinstance(module, LlamaRMSNorm):
216 module.forward = MethodType(rms_norm_forward, module)
217
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py b/applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py
--- a/applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py
+++ b/applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py
@@ -65,6 +65,7 @@
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output_attentions: bool = False,
use_cache: bool = False,
+ **kwargs
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
"""
Re-define LLaMA-2 `LlamaAttention` forward method using flash-attention.
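A plausible motivation for the added `**kwargs`, sketched below: newer transformers releases pass extra keyword arguments into `LlamaAttention.forward`, and a monkey-patched forward with a fixed signature raises `TypeError` on them, while `**kwargs` silently absorbs them. The `padding_mask` name used in the sketch is an assumption about such an extra argument, not taken from this record.

```python
# Sketch of the failure mode the **kwargs change guards against. The
# padding_mask argument is hypothetical; it stands in for whatever extra
# keyword arguments the calling transformers version forwards.
def strict_forward(hidden_states, attention_mask=None, use_cache=False):
    return hidden_states


def tolerant_forward(hidden_states, attention_mask=None, use_cache=False, **kwargs):
    # Unknown keyword arguments are accepted and ignored.
    return hidden_states


try:
    strict_forward("h", attention_mask=None, padding_mask=None)
except TypeError as err:
    print(f"strict signature rejects extra kwargs: {err}")

print(tolerant_forward("h", attention_mask=None, padding_mask=None))
```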
|
{"golden_diff": "diff --git a/applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py b/applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py\n--- a/applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py\n+++ b/applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py\n@@ -65,6 +65,7 @@\n past_key_value: Optional[Tuple[torch.Tensor]] = None,\n output_attentions: bool = False,\n use_cache: bool = False,\n+ **kwargs\n ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:\n \"\"\"\n Re-define LLaMA-2 `LlamaAttention` forward method using flash-attention.\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\nfrom types import MethodType\nfrom typing import Optional, Tuple\n\nimport torch\nimport torch.nn.functional as F\nfrom transformers.models.llama.modeling_llama import (\n LlamaRMSNorm,\n LlamaAttention,\n LlamaModel,\n LlamaForCausalLM,\n apply_rotary_pos_emb,\n repeat_kv,\n)\n\nfrom colossalai.logging import get_dist_logger\nfrom einops import rearrange\n\nfrom flash_attn.bert_padding import pad_input, unpad_input\nfrom flash_attn.flash_attn_interface import (\n flash_attn_func,\n flash_attn_varlen_kvpacked_func,\n)\nfrom flash_attn.ops.rms_norm import rms_norm\n\n\nlogger = get_dist_logger()\n\n\ndef _prepare_decoder_attention_mask(\n self: LlamaModel,\n attention_mask: torch.BoolTensor,\n input_shape: torch.Size,\n inputs_embeds: torch.Tensor,\n past_key_values_length: int,\n) -> Optional[torch.Tensor]:\n \"\"\"\n Decoder attetion mask\n \"\"\"\n if past_key_values_length > 0 and attention_mask is not None:\n attention_mask = torch.cat(\n tensors=(\n torch.full(\n size=(input_shape[0], past_key_values_length),\n fill_value=True,\n dtype=attention_mask.dtype,\n device=attention_mask.device,\n ),\n attention_mask,\n ),\n dim=-1,\n ) # (bsz, past_key_values_length + q_len)\n if attention_mask is not None and torch.all(attention_mask):\n return None # Faster\n return attention_mask\n\n\ndef attention_forward(\n self: LlamaAttention,\n hidden_states: torch.Tensor,\n attention_mask: Optional[torch.Tensor] = None,\n position_ids: Optional[torch.LongTensor] = None,\n past_key_value: Optional[Tuple[torch.Tensor]] = None,\n output_attentions: bool = False,\n use_cache: bool = False,\n) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:\n \"\"\"\n Re-define LLaMA-2 `LlamaAttention` forward method using flash-attention.\n \"\"\"\n if output_attentions:\n logger.warning(\n \"Argument `output_attentions` is not supported for flash-attention patched `LlamaAttention`, \"\n \"return `None` instead.\"\n )\n\n bsz, q_len, _ = hidden_states.size()\n\n if self.config.pretraining_tp > 1:\n q_slicing, kv_slicing = (\n dim // self.config.pretraining_tp\n for dim in (\n self.num_heads * self.head_dim,\n self.num_key_value_heads * self.head_dim,\n )\n ) # `Tuple[int, int]`\n q_slices, k_slices, v_slices = (\n proj.weight.split(slicing, dim=0)\n for proj, slicing in (\n (self.q_proj, q_slicing),\n (self.k_proj, kv_slicing),\n (self.v_proj, kv_slicing),\n )\n ) # Tuple[Tuple[torch.Tensor], Tuple[torch.Tensor], Tuple[torch.Tensor]]\n q, k, v = (\n torch.cat(\n [F.linear(hidden_states, slices[i]) for i in range(self.config.pretraining_tp)],\n dim=-1,\n )\n for slices in (q_slices, k_slices, v_slices)\n )\n # `Tuple[torch.Tensor, torch.Tensor, 
torch.Tensor]` of shape:\n # (bsz, q_len, num_heads * head_dim),\n # (bsz, q_len, num_key_value_heads * head_dim),\n # (bsz, q_len, num_key_value_heads * head_dim)\n else:\n q, k, v = (proj(hidden_states) for proj in (self.q_proj, self.k_proj, self.v_proj))\n # `Tuple[torch.Tensor, torch.Tensor, torch.Tensor]` of shape:\n # (bsz, q_len, num_heads * head_dim),\n # (bsz, q_len, num_key_value_heads * head_dim),\n # (bsz, q_len, num_key_value_heads * head_dim)\n\n # (bsz, q_len, num_heads * head_dim) -> (bsz, num_heads, q_len, head_dim);\n # (bsz, q_len, num_key_value_heads * head_dim) -> (bsz, num_key_value_heads, q_len, head_dim);\n # (bsz, q_len, num_key_value_heads * head_dim) -> (bsz, num_key_value_heads, q_len, head_dim)\n q, k, v = (\n states.view(bsz, q_len, num_heads, self.head_dim).transpose(1, 2)\n for states, num_heads in (\n (q, self.num_heads),\n (k, self.num_key_value_heads),\n (v, self.num_key_value_heads),\n )\n )\n kv_len = k.shape[-2] # initially, `kv_len` == `q_len`\n past_kv_len = 0\n if past_key_value is not None:\n # if `past_key_value` is not None, `kv_len` > `q_len`.\n past_kv_len = past_key_value[0].shape[-2]\n kv_len += past_kv_len\n\n # two `torch.Tensor` objs of shape (1, 1, kv_len, head_dim)\n cos, sin = self.rotary_emb(v, seq_len=kv_len)\n # (bsz, num_heads, q_len, head_dim), (bsz, num_key_value_heads, q_len, head_dim)\n q, k = apply_rotary_pos_emb(q=q, k=k, cos=cos, sin=sin, position_ids=position_ids)\n if past_key_value is not None:\n # reuse k, v, self_attention\n k = torch.cat([past_key_value[0], k], dim=2)\n v = torch.cat([past_key_value[1], v], dim=2)\n\n past_key_value = (k, v) if use_cache else None\n\n # repeat k/v heads if n_kv_heads < n_heads\n k = repeat_kv(hidden_states=k, n_rep=self.num_key_value_groups)\n # (bsz, num_key_value_heads, q_len, head_dim) -> (bsz, num_heads, q_len, head_dim)\n v = repeat_kv(hidden_states=v, n_rep=self.num_key_value_groups)\n # (bsz, num_key_value_heads, q_len, head_dim) -> (bsz, num_heads, q_len, head_dim)\n\n key_padding_mask = attention_mask\n # (bsz, num_heads, q_len, head_dim) -> (bsz, q_len, num_heads, head_dim)\n q, k, v = (states.transpose(1, 2) for states in (q, k, v))\n\n if past_kv_len > 0:\n q = torch.cat(\n tensors=(\n torch.full(\n size=(bsz, past_kv_len, self.num_heads, self.head_dim),\n fill_value=0.0,\n dtype=q.dtype,\n device=q.device,\n ),\n q,\n ),\n dim=1,\n ) # (bsz, past_kv_len + q_len, num_heads, head_dim)\n\n if key_padding_mask is None:\n # (bsz, past_kv_len + q_len, num_heads, head_dim)\n output = flash_attn_func(q=q, k=k, v=v, dropout_p=0.0, softmax_scale=None, causal=True) # (bsz, )\n output = rearrange(output, pattern=\"... h d -> ... (h d)\") # (bsz, past_kv_len + q_len, num_heads * head_dim)\n else:\n q, indices, cu_q_lens, max_q_len = unpad_input(hidden_states=q, attention_mask=key_padding_mask)\n kv, _, cu_kv_lens, max_kv_len = unpad_input(\n hidden_states=torch.stack(tensors=(k, v), dim=2),\n attention_mask=key_padding_mask,\n )\n output_unpad = flash_attn_varlen_kvpacked_func(\n q=q,\n kv=kv,\n cu_seqlens_q=cu_q_lens,\n cu_seqlens_k=cu_kv_lens,\n max_seqlen_q=max_q_len,\n max_seqlen_k=max_kv_len,\n dropout_p=0.0,\n softmax_scale=None,\n causal=True,\n )\n output = pad_input(\n hidden_states=rearrange(output_unpad, pattern=\"nnz h d -> nnz (h d)\"),\n indices=indices,\n batch=bsz,\n seqlen=past_kv_len + q_len,\n ) # (bsz, past_kv_len + q_len, num_heads * head_dim)\n\n if past_kv_len > 0:\n # Strip off the zero query outputs.\n output = output[:, past_kv_len:, ...] 
# (bsz, q_len, num_heads * head_dim)\n output = self.o_proj(output) # (bsz, q_len, hidden_size)\n return output, None, past_key_value\n\n\ndef rms_norm_forward(self: LlamaRMSNorm, hidden_states: torch.Tensor) -> torch.Tensor:\n \"\"\"\n Formard function for RMS Norm\n \"\"\"\n return rms_norm(x=hidden_states, weight=self.weight, epsilon=self.variance_epsilon)\n\n\ndef replace_with_flash_attention(model: LlamaForCausalLM) -> None:\n for name, module in model.named_modules():\n if isinstance(module, LlamaAttention):\n module.forward = MethodType(attention_forward, module)\n if isinstance(module, LlamaModel):\n module._prepare_decoder_attention_mask = MethodType(_prepare_decoder_attention_mask, module)\n if isinstance(module, LlamaRMSNorm):\n module.forward = MethodType(rms_norm_forward, module)\n", "path": "applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\nfrom types import MethodType\nfrom typing import Optional, Tuple\n\nimport torch\nimport torch.nn.functional as F\nfrom transformers.models.llama.modeling_llama import (\n LlamaRMSNorm,\n LlamaAttention,\n LlamaModel,\n LlamaForCausalLM,\n apply_rotary_pos_emb,\n repeat_kv,\n)\n\nfrom colossalai.logging import get_dist_logger\nfrom einops import rearrange\n\nfrom flash_attn.bert_padding import pad_input, unpad_input\nfrom flash_attn.flash_attn_interface import (\n flash_attn_func,\n flash_attn_varlen_kvpacked_func,\n)\nfrom flash_attn.ops.rms_norm import rms_norm\n\n\nlogger = get_dist_logger()\n\n\ndef _prepare_decoder_attention_mask(\n self: LlamaModel,\n attention_mask: torch.BoolTensor,\n input_shape: torch.Size,\n inputs_embeds: torch.Tensor,\n past_key_values_length: int,\n) -> Optional[torch.Tensor]:\n \"\"\"\n Decoder attetion mask\n \"\"\"\n if past_key_values_length > 0 and attention_mask is not None:\n attention_mask = torch.cat(\n tensors=(\n torch.full(\n size=(input_shape[0], past_key_values_length),\n fill_value=True,\n dtype=attention_mask.dtype,\n device=attention_mask.device,\n ),\n attention_mask,\n ),\n dim=-1,\n ) # (bsz, past_key_values_length + q_len)\n if attention_mask is not None and torch.all(attention_mask):\n return None # Faster\n return attention_mask\n\n\ndef attention_forward(\n self: LlamaAttention,\n hidden_states: torch.Tensor,\n attention_mask: Optional[torch.Tensor] = None,\n position_ids: Optional[torch.LongTensor] = None,\n past_key_value: Optional[Tuple[torch.Tensor]] = None,\n output_attentions: bool = False,\n use_cache: bool = False,\n **kwargs\n) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:\n \"\"\"\n Re-define LLaMA-2 `LlamaAttention` forward method using flash-attention.\n \"\"\"\n if output_attentions:\n logger.warning(\n \"Argument `output_attentions` is not supported for flash-attention patched `LlamaAttention`, \"\n \"return `None` instead.\"\n )\n\n bsz, q_len, _ = hidden_states.size()\n\n if self.config.pretraining_tp > 1:\n q_slicing, kv_slicing = (\n dim // self.config.pretraining_tp\n for dim in (\n self.num_heads * self.head_dim,\n self.num_key_value_heads * self.head_dim,\n )\n ) # `Tuple[int, int]`\n q_slices, k_slices, v_slices = (\n proj.weight.split(slicing, dim=0)\n for proj, slicing in (\n (self.q_proj, q_slicing),\n (self.k_proj, kv_slicing),\n (self.v_proj, kv_slicing),\n )\n ) # Tuple[Tuple[torch.Tensor], Tuple[torch.Tensor], Tuple[torch.Tensor]]\n q, k, v = (\n torch.cat(\n [F.linear(hidden_states, slices[i]) for i in 
range(self.config.pretraining_tp)],\n dim=-1,\n )\n for slices in (q_slices, k_slices, v_slices)\n )\n # `Tuple[torch.Tensor, torch.Tensor, torch.Tensor]` of shape:\n # (bsz, q_len, num_heads * head_dim),\n # (bsz, q_len, num_key_value_heads * head_dim),\n # (bsz, q_len, num_key_value_heads * head_dim)\n else:\n q, k, v = (proj(hidden_states) for proj in (self.q_proj, self.k_proj, self.v_proj))\n # `Tuple[torch.Tensor, torch.Tensor, torch.Tensor]` of shape:\n # (bsz, q_len, num_heads * head_dim),\n # (bsz, q_len, num_key_value_heads * head_dim),\n # (bsz, q_len, num_key_value_heads * head_dim)\n\n # (bsz, q_len, num_heads * head_dim) -> (bsz, num_heads, q_len, head_dim);\n # (bsz, q_len, num_key_value_heads * head_dim) -> (bsz, num_key_value_heads, q_len, head_dim);\n # (bsz, q_len, num_key_value_heads * head_dim) -> (bsz, num_key_value_heads, q_len, head_dim)\n q, k, v = (\n states.view(bsz, q_len, num_heads, self.head_dim).transpose(1, 2)\n for states, num_heads in (\n (q, self.num_heads),\n (k, self.num_key_value_heads),\n (v, self.num_key_value_heads),\n )\n )\n kv_len = k.shape[-2] # initially, `kv_len` == `q_len`\n past_kv_len = 0\n if past_key_value is not None:\n # if `past_key_value` is not None, `kv_len` > `q_len`.\n past_kv_len = past_key_value[0].shape[-2]\n kv_len += past_kv_len\n\n # two `torch.Tensor` objs of shape (1, 1, kv_len, head_dim)\n cos, sin = self.rotary_emb(v, seq_len=kv_len)\n # (bsz, num_heads, q_len, head_dim), (bsz, num_key_value_heads, q_len, head_dim)\n q, k = apply_rotary_pos_emb(q=q, k=k, cos=cos, sin=sin, position_ids=position_ids)\n if past_key_value is not None:\n # reuse k, v, self_attention\n k = torch.cat([past_key_value[0], k], dim=2)\n v = torch.cat([past_key_value[1], v], dim=2)\n\n past_key_value = (k, v) if use_cache else None\n\n # repeat k/v heads if n_kv_heads < n_heads\n k = repeat_kv(hidden_states=k, n_rep=self.num_key_value_groups)\n # (bsz, num_key_value_heads, q_len, head_dim) -> (bsz, num_heads, q_len, head_dim)\n v = repeat_kv(hidden_states=v, n_rep=self.num_key_value_groups)\n # (bsz, num_key_value_heads, q_len, head_dim) -> (bsz, num_heads, q_len, head_dim)\n\n key_padding_mask = attention_mask\n # (bsz, num_heads, q_len, head_dim) -> (bsz, q_len, num_heads, head_dim)\n q, k, v = (states.transpose(1, 2) for states in (q, k, v))\n\n if past_kv_len > 0:\n q = torch.cat(\n tensors=(\n torch.full(\n size=(bsz, past_kv_len, self.num_heads, self.head_dim),\n fill_value=0.0,\n dtype=q.dtype,\n device=q.device,\n ),\n q,\n ),\n dim=1,\n ) # (bsz, past_kv_len + q_len, num_heads, head_dim)\n\n if key_padding_mask is None:\n # (bsz, past_kv_len + q_len, num_heads, head_dim)\n output = flash_attn_func(q=q, k=k, v=v, dropout_p=0.0, softmax_scale=None, causal=True) # (bsz, )\n output = rearrange(output, pattern=\"... h d -> ... 
(h d)\") # (bsz, past_kv_len + q_len, num_heads * head_dim)\n else:\n q, indices, cu_q_lens, max_q_len = unpad_input(hidden_states=q, attention_mask=key_padding_mask)\n kv, _, cu_kv_lens, max_kv_len = unpad_input(\n hidden_states=torch.stack(tensors=(k, v), dim=2),\n attention_mask=key_padding_mask,\n )\n output_unpad = flash_attn_varlen_kvpacked_func(\n q=q,\n kv=kv,\n cu_seqlens_q=cu_q_lens,\n cu_seqlens_k=cu_kv_lens,\n max_seqlen_q=max_q_len,\n max_seqlen_k=max_kv_len,\n dropout_p=0.0,\n softmax_scale=None,\n causal=True,\n )\n output = pad_input(\n hidden_states=rearrange(output_unpad, pattern=\"nnz h d -> nnz (h d)\"),\n indices=indices,\n batch=bsz,\n seqlen=past_kv_len + q_len,\n ) # (bsz, past_kv_len + q_len, num_heads * head_dim)\n\n if past_kv_len > 0:\n # Strip off the zero query outputs.\n output = output[:, past_kv_len:, ...] # (bsz, q_len, num_heads * head_dim)\n output = self.o_proj(output) # (bsz, q_len, hidden_size)\n return output, None, past_key_value\n\n\ndef rms_norm_forward(self: LlamaRMSNorm, hidden_states: torch.Tensor) -> torch.Tensor:\n \"\"\"\n Formard function for RMS Norm\n \"\"\"\n return rms_norm(x=hidden_states, weight=self.weight, epsilon=self.variance_epsilon)\n\n\ndef replace_with_flash_attention(model: LlamaForCausalLM) -> None:\n for name, module in model.named_modules():\n if isinstance(module, LlamaAttention):\n module.forward = MethodType(attention_forward, module)\n if isinstance(module, LlamaModel):\n module._prepare_decoder_attention_mask = MethodType(_prepare_decoder_attention_mask, module)\n if isinstance(module, LlamaRMSNorm):\n module.forward = MethodType(rms_norm_forward, module)\n", "path": "applications/Colossal-LLaMA-2/colossal_llama2/utils/flash_attention_patch.py"}]}
| 2,993 | 196 |
gh_patches_debug_15537 | rasdani/github-patches | git_diff | apluslms__a-plus-622 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
automatic notification for user about arriving on pages of a past course
A+ knows that a course has closed since course instances have opening and closing times. Any closed course instance could show a warning easily. Adding an automatic link to the newest course instance would be easy too.
[Teacher needs spring 2020](https://wiki.aalto.fi/display/EDIT/Teacher+needs+spring+2020): nr 12
To be allocated to one of the summer interns
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `course/views.py`
Content:
```
1 import datetime
2
3 import icalendar
4 from django.conf import settings
5 from django.contrib import messages
6 from django.core.exceptions import PermissionDenied
7 from django.http import Http404
8 from django.http.response import HttpResponse
9 from django.shortcuts import get_object_or_404, redirect
10 from django.utils import html
11 from django.utils import timezone
12 from django.utils.translation import ugettext_lazy as _
13
14 from authorization.permissions import ACCESS
15 from exercise.cache.hierarchy import NoSuchContent
16 from exercise.models import LearningObject
17 from lib.helpers import settings_text
18 from lib.viewbase import BaseTemplateView, BaseRedirectMixin, BaseFormView, BaseView
19 from userprofile.viewbase import UserProfileView
20 from .forms import GroupsForm, GroupSelectForm
21 from .models import CourseInstance, Enrollment
22 from .permissions import EnrollInfoVisiblePermission
23 from .renders import group_info_context
24 from .viewbase import CourseModuleBaseView, CourseInstanceMixin, EnrollableViewMixin
25
26
27 class HomeView(UserProfileView):
28 access_mode = ACCESS.ANONYMOUS
29 template_name = "course/index.html"
30
31 def get_common_objects(self):
32 super().get_common_objects()
33 self.welcome_text = settings_text('WELCOME_TEXT')
34 self.internal_user_label = settings_text('INTERNAL_USER_LABEL')
35 self.external_user_label = settings_text('EXTERNAL_USER_LABEL')
36 my_instances = []
37 all_instances = []
38 end_threshold = timezone.now() - datetime.timedelta(days=30)
39 user = self.request.user
40 is_logged_in = False
41
42 if user and user.is_authenticated:
43 is_logged_in = True
44 for instance in (CourseInstance.objects
45 .filter(course__teachers=user.userprofile,
46 ending_time__gte=end_threshold)
47 .all()):
48 my_instances.append(instance)
49
50 for instance in user.userprofile.assisting_courses.all().filter(ending_time__gte=end_threshold):
51 if instance not in my_instances:
52 my_instances.append(instance)
53
54 for instance in user.userprofile.enrolled.all().filter(ending_time__gte=end_threshold):
55 if instance not in my_instances:
56 my_instances.append(instance)
57
58 all_instances = CourseInstance.objects.get_visible(user).filter(ending_time__gte=end_threshold)
59 all_instances = [c for c in all_instances if c not in my_instances]
60
61 self.all_instances = all_instances
62 self.my_instances = my_instances
63 self.is_logged_in = is_logged_in
64
65 self.note("welcome_text",
66 "internal_user_label",
67 "external_user_label",
68 "my_instances",
69 "all_instances",
70 "is_logged_in",
71 )
72
73
74 class ArchiveView(UserProfileView):
75 access_mode = ACCESS.ANONYMOUS
76 template_name = "course/archive.html"
77
78 def get_common_objects(self):
79 super().get_common_objects()
80 self.instances = CourseInstance.objects.get_visible(self.request.user)
81 self.note("instances")
82
83 class InstanceView(EnrollableViewMixin, BaseTemplateView):
84 access_mode = ACCESS.STUDENT
85 # ACCESS.STUDENT requires users to log in, but the access mode is dropped
86 # in public courses. CourseVisiblePermission has more restrictions as well.
87 template_name = "course/course.html"
88
89 def handle_no_permission(self):
90 if self.request.user.is_authenticated \
91 and self.instance.view_content_to == CourseInstance.VIEW_ACCESS.ENROLLED:
92 # The course instance is visible to only enrolled students, so
93 # redirect the user to the enroll page instead of showing
94 # a 403 Forbidden error.
95 return redirect(self.instance.get_url('enroll'))
96 return super().handle_no_permission()
97
98 def get(self, request, *args, **kwargs):
99 # external LTI Tool Providers may return the user to the course instance view
100 # with a message given in GET query parameters
101 lti_error_msg = request.GET.get('lti_errormsg')
102 lti_msg = request.GET.get('lti_msg')
103 # message HTML is not escaped in the templates so escape it here
104 if lti_error_msg:
105 messages.error(request, html.escape(lti_error_msg))
106 elif lti_msg:
107 messages.info(request, html.escape(lti_msg))
108
109 return super().get(request, *args, **kwargs)
110
111
112 class Enroll(EnrollableViewMixin, BaseRedirectMixin, BaseTemplateView):
113 permission_classes = [EnrollInfoVisiblePermission]
114 course_permission_classes = []
115 template_name = "course/enroll.html"
116
117 def post(self, request, *args, **kwargs):
118
119 if self.is_student or not self.enrollable:
120 messages.error(self.request, _("You cannot enroll, or have already enrolled, in this course."))
121 raise PermissionDenied()
122
123 if not self.instance.is_enrollment_open():
124 messages.error(self.request, _("The enrollment is not open."))
125 raise PermissionDenied()
126
127 # Support enrollment questionnaires.
128 exercise = LearningObject.objects.find_enrollment_exercise(
129 self.instance, self.profile)
130 if exercise:
131 return self.redirect(exercise.get_absolute_url())
132
133 self.instance.enroll_student(self.request.user)
134 return self.redirect(self.instance.get_absolute_url())
135
136
137 class ModuleView(CourseModuleBaseView):
138 template_name = "course/module.html"
139
140 def get_common_objects(self):
141 super().get_common_objects()
142 self.now = timezone.now()
143 try:
144 self.children = self.content.flat_module(self.module)
145 cur, tree, prev, nex = self.content.find(self.module)
146 self.previous = prev
147 self.current = cur
148 self.next = nex
149 except NoSuchContent:
150 raise Http404
151 self.note('now', 'children', 'previous', 'current', 'next')
152
153
154 class CalendarExport(CourseInstanceMixin, BaseView):
155
156 def get(self, request, *args, **kwargs):
157 cal = icalendar.Calendar()
158 cal.add('prodid', '-// {} calendar //'.format(settings.BRAND_NAME))
159 cal.add('version', '2.0')
160 for module in self.instance.course_modules.all():
161 event = icalendar.Event()
162 event.add('summary', module.name)
163 event.add('dtstart',
164 module.closing_time - datetime.timedelta(hours=1))
165 event.add('dtend', module.closing_time)
166 event.add('dtstamp', module.closing_time)
167 event['uid'] = "module/" + str(module.id) + "/A+"
168 cal.add_component(event)
169
170 return HttpResponse(cal.to_ical(),
171 content_type="text/calendar; charset=utf-8")
172
173
174 class GroupsView(CourseInstanceMixin, BaseFormView):
175 access_mode = ACCESS.ENROLLED
176 template_name = "course/groups.html"
177 form_class = GroupsForm
178
179 def get_common_objects(self):
180 super().get_common_objects()
181 self.enrollment = self.instance.get_enrollment_for(self.request.user)
182 self.groups = list(self.profile.groups.filter(course_instance=self.instance))
183 self.note('enrollment','groups')
184
185 def get_form_kwargs(self):
186 kwargs = super().get_form_kwargs()
187 kwargs["profile"] = self.profile
188 kwargs["instance"] = self.instance
189 kwargs["content"] = self.content
190 return kwargs
191
192 def get_success_url(self):
193 return self.instance.get_url('groups')
194
195 def form_valid(self, form):
196 form.save()
197 messages.success(self.request, _("A new student group was created."))
198 return super().form_valid(form)
199
200
201 class GroupSelect(CourseInstanceMixin, BaseFormView):
202 access_mode = ACCESS.ENROLLED
203 form_class = GroupSelectForm
204 template_name = "course/_group_info.html"
205
206 def get_form_kwargs(self):
207 kwargs = super().get_form_kwargs()
208 kwargs["profile"] = self.profile
209 kwargs["instance"] = self.instance
210 return kwargs
211
212 def get_success_url(self):
213 return self.instance.get_absolute_url()
214
215 def get(self, request, *args, **kwargs):
216 return self.http_method_not_allowed(request, *args, **kwargs)
217
218 def form_invalid(self, form):
219 return HttpResponse('Invalid group selection')
220
221 def form_valid(self, form):
222 enrollment = form.save()
223 if self.request.is_ajax():
224 return self.render_to_response(self.get_context_data(
225 **group_info_context(enrollment.selected_group, self.profile)))
226 return super().form_valid(form)
227
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/course/views.py b/course/views.py
--- a/course/views.py
+++ b/course/views.py
@@ -106,8 +106,22 @@
elif lti_msg:
messages.info(request, html.escape(lti_msg))
- return super().get(request, *args, **kwargs)
+ later_instance = None
+
+ if self.instance.is_past:
+ try:
+ later_instance = (
+ CourseInstance.objects
+ .get_visible(request.user)
+ .filter(course=self.course, ending_time__gte=timezone.now())
+ .latest('starting_time')
+ )
+ except CourseInstance.DoesNotExist:
+ pass
+ kwargs['later_instance'] = later_instance
+
+ return super().get(request, *args, **kwargs)
class Enroll(EnrollableViewMixin, BaseRedirectMixin, BaseTemplateView):
permission_classes = [EnrollInfoVisiblePermission]
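
Editorial note on the patch above: the view change only exposes `later_instance` in the template context; the visible warning and the link to the newest course instance would come from a matching change in `course/course.html`, which is not among the files shown. The patch also reads `self.instance.is_past`, an attribute of `CourseInstance` that likewise does not appear in the files above. The sketch below shows the shape such a check is assumed to have — it is a hypothetical illustration, not code taken from the repository.

```python
# Hedged sketch only: the check the patch assumes behind CourseInstance.is_past
# (course/models.py is not shown above; the real model would use django.utils.timezone.now()).
from datetime import datetime, timezone


class CourseInstanceStandIn:
    """Minimal stand-in for the real Django model, for illustration only."""

    def __init__(self, ending_time: datetime):
        self.ending_time = ending_time

    @property
    def is_past(self) -> bool:
        """True once ending_time lies in the past, i.e. the instance has closed."""
        return self.ending_time < datetime.now(timezone.utc)
```

With `later_instance` exposed by the patched view, the course template can then test for it and render a warning banner that links to `later_instance.get_absolute_url`, pointing students at the newest instance.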
|
{"golden_diff": "diff --git a/course/views.py b/course/views.py\n--- a/course/views.py\n+++ b/course/views.py\n@@ -106,8 +106,22 @@\n elif lti_msg:\n messages.info(request, html.escape(lti_msg))\n \n- return super().get(request, *args, **kwargs)\n+ later_instance = None\n+\n+ if self.instance.is_past:\n+ try:\n+ later_instance = (\n+ CourseInstance.objects\n+ .get_visible(request.user)\n+ .filter(course=self.course, ending_time__gte=timezone.now())\n+ .latest('starting_time')\n+ )\n+ except CourseInstance.DoesNotExist:\n+ pass\n \n+ kwargs['later_instance'] = later_instance\n+\n+ return super().get(request, *args, **kwargs)\n \n class Enroll(EnrollableViewMixin, BaseRedirectMixin, BaseTemplateView):\n permission_classes = [EnrollInfoVisiblePermission]\n", "issue": "automatic notification for user about arriving on pages of a past course\nA+ knows that a course has closed since course instances have opening and closing times. Any closed course instance could show a warning easily. Adding an automatic link to the newest course instance would be easy too.\r\n\r\n[Teacher needs spring 2020](https://wiki.aalto.fi/display/EDIT/Teacher+needs+spring+2020): nr 12\r\n\r\nTo be allocated to one of the summer interns\n", "before_files": [{"content": "import datetime\n\nimport icalendar\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.core.exceptions import PermissionDenied\nfrom django.http import Http404\nfrom django.http.response import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils import html\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom authorization.permissions import ACCESS\nfrom exercise.cache.hierarchy import NoSuchContent\nfrom exercise.models import LearningObject\nfrom lib.helpers import settings_text\nfrom lib.viewbase import BaseTemplateView, BaseRedirectMixin, BaseFormView, BaseView\nfrom userprofile.viewbase import UserProfileView\nfrom .forms import GroupsForm, GroupSelectForm\nfrom .models import CourseInstance, Enrollment\nfrom .permissions import EnrollInfoVisiblePermission\nfrom .renders import group_info_context\nfrom .viewbase import CourseModuleBaseView, CourseInstanceMixin, EnrollableViewMixin\n\n\nclass HomeView(UserProfileView):\n access_mode = ACCESS.ANONYMOUS\n template_name = \"course/index.html\"\n\n def get_common_objects(self):\n super().get_common_objects()\n self.welcome_text = settings_text('WELCOME_TEXT')\n self.internal_user_label = settings_text('INTERNAL_USER_LABEL')\n self.external_user_label = settings_text('EXTERNAL_USER_LABEL')\n my_instances = []\n all_instances = []\n end_threshold = timezone.now() - datetime.timedelta(days=30)\n user = self.request.user\n is_logged_in = False\n\n if user and user.is_authenticated:\n is_logged_in = True\n for instance in (CourseInstance.objects\n .filter(course__teachers=user.userprofile,\n ending_time__gte=end_threshold)\n .all()):\n my_instances.append(instance)\n\n for instance in user.userprofile.assisting_courses.all().filter(ending_time__gte=end_threshold):\n if instance not in my_instances:\n my_instances.append(instance)\n \n for instance in user.userprofile.enrolled.all().filter(ending_time__gte=end_threshold):\n if instance not in my_instances:\n my_instances.append(instance)\n \n all_instances = CourseInstance.objects.get_visible(user).filter(ending_time__gte=end_threshold)\n all_instances = [c for c in all_instances if c not in my_instances]\n \n self.all_instances = 
all_instances\n self.my_instances = my_instances\n self.is_logged_in = is_logged_in\n\n self.note(\"welcome_text\", \n \"internal_user_label\", \n \"external_user_label\",\n \"my_instances\",\n \"all_instances\",\n \"is_logged_in\",\n )\n\n\nclass ArchiveView(UserProfileView):\n access_mode = ACCESS.ANONYMOUS\n template_name = \"course/archive.html\"\n\n def get_common_objects(self):\n super().get_common_objects()\n self.instances = CourseInstance.objects.get_visible(self.request.user)\n self.note(\"instances\")\n\nclass InstanceView(EnrollableViewMixin, BaseTemplateView):\n access_mode = ACCESS.STUDENT\n # ACCESS.STUDENT requires users to log in, but the access mode is dropped\n # in public courses. CourseVisiblePermission has more restrictions as well.\n template_name = \"course/course.html\"\n\n def handle_no_permission(self):\n if self.request.user.is_authenticated \\\n and self.instance.view_content_to == CourseInstance.VIEW_ACCESS.ENROLLED:\n # The course instance is visible to only enrolled students, so\n # redirect the user to the enroll page instead of showing\n # a 403 Forbidden error.\n return redirect(self.instance.get_url('enroll'))\n return super().handle_no_permission()\n\n def get(self, request, *args, **kwargs):\n # external LTI Tool Providers may return the user to the course instance view\n # with a message given in GET query parameters\n lti_error_msg = request.GET.get('lti_errormsg')\n lti_msg = request.GET.get('lti_msg')\n # message HTML is not escaped in the templates so escape it here\n if lti_error_msg:\n messages.error(request, html.escape(lti_error_msg))\n elif lti_msg:\n messages.info(request, html.escape(lti_msg))\n\n return super().get(request, *args, **kwargs)\n\n\nclass Enroll(EnrollableViewMixin, BaseRedirectMixin, BaseTemplateView):\n permission_classes = [EnrollInfoVisiblePermission]\n course_permission_classes = []\n template_name = \"course/enroll.html\"\n\n def post(self, request, *args, **kwargs):\n\n if self.is_student or not self.enrollable:\n messages.error(self.request, _(\"You cannot enroll, or have already enrolled, in this course.\"))\n raise PermissionDenied()\n\n if not self.instance.is_enrollment_open():\n messages.error(self.request, _(\"The enrollment is not open.\"))\n raise PermissionDenied()\n\n # Support enrollment questionnaires.\n exercise = LearningObject.objects.find_enrollment_exercise(\n self.instance, self.profile)\n if exercise:\n return self.redirect(exercise.get_absolute_url())\n\n self.instance.enroll_student(self.request.user)\n return self.redirect(self.instance.get_absolute_url())\n\n\nclass ModuleView(CourseModuleBaseView):\n template_name = \"course/module.html\"\n\n def get_common_objects(self):\n super().get_common_objects()\n self.now = timezone.now()\n try:\n self.children = self.content.flat_module(self.module)\n cur, tree, prev, nex = self.content.find(self.module)\n self.previous = prev\n self.current = cur\n self.next = nex\n except NoSuchContent:\n raise Http404\n self.note('now', 'children', 'previous', 'current', 'next')\n\n\nclass CalendarExport(CourseInstanceMixin, BaseView):\n\n def get(self, request, *args, **kwargs):\n cal = icalendar.Calendar()\n cal.add('prodid', '-// {} calendar //'.format(settings.BRAND_NAME))\n cal.add('version', '2.0')\n for module in self.instance.course_modules.all():\n event = icalendar.Event()\n event.add('summary', module.name)\n event.add('dtstart',\n module.closing_time - datetime.timedelta(hours=1))\n event.add('dtend', module.closing_time)\n event.add('dtstamp', 
module.closing_time)\n event['uid'] = \"module/\" + str(module.id) + \"/A+\"\n cal.add_component(event)\n\n return HttpResponse(cal.to_ical(),\n content_type=\"text/calendar; charset=utf-8\")\n\n\nclass GroupsView(CourseInstanceMixin, BaseFormView):\n access_mode = ACCESS.ENROLLED\n template_name = \"course/groups.html\"\n form_class = GroupsForm\n\n def get_common_objects(self):\n super().get_common_objects()\n self.enrollment = self.instance.get_enrollment_for(self.request.user)\n self.groups = list(self.profile.groups.filter(course_instance=self.instance))\n self.note('enrollment','groups')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"profile\"] = self.profile\n kwargs[\"instance\"] = self.instance\n kwargs[\"content\"] = self.content\n return kwargs\n\n def get_success_url(self):\n return self.instance.get_url('groups')\n\n def form_valid(self, form):\n form.save()\n messages.success(self.request, _(\"A new student group was created.\"))\n return super().form_valid(form)\n\n\nclass GroupSelect(CourseInstanceMixin, BaseFormView):\n access_mode = ACCESS.ENROLLED\n form_class = GroupSelectForm\n template_name = \"course/_group_info.html\"\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"profile\"] = self.profile\n kwargs[\"instance\"] = self.instance\n return kwargs\n\n def get_success_url(self):\n return self.instance.get_absolute_url()\n\n def get(self, request, *args, **kwargs):\n return self.http_method_not_allowed(request, *args, **kwargs)\n\n def form_invalid(self, form):\n return HttpResponse('Invalid group selection')\n\n def form_valid(self, form):\n enrollment = form.save()\n if self.request.is_ajax():\n return self.render_to_response(self.get_context_data(\n **group_info_context(enrollment.selected_group, self.profile)))\n return super().form_valid(form)\n", "path": "course/views.py"}], "after_files": [{"content": "import datetime\n\nimport icalendar\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.core.exceptions import PermissionDenied\nfrom django.http import Http404\nfrom django.http.response import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils import html\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom authorization.permissions import ACCESS\nfrom exercise.cache.hierarchy import NoSuchContent\nfrom exercise.models import LearningObject\nfrom lib.helpers import settings_text\nfrom lib.viewbase import BaseTemplateView, BaseRedirectMixin, BaseFormView, BaseView\nfrom userprofile.viewbase import UserProfileView\nfrom .forms import GroupsForm, GroupSelectForm\nfrom .models import CourseInstance, Enrollment\nfrom .permissions import EnrollInfoVisiblePermission\nfrom .renders import group_info_context\nfrom .viewbase import CourseModuleBaseView, CourseInstanceMixin, EnrollableViewMixin\n\n\nclass HomeView(UserProfileView):\n access_mode = ACCESS.ANONYMOUS\n template_name = \"course/index.html\"\n\n def get_common_objects(self):\n super().get_common_objects()\n self.welcome_text = settings_text('WELCOME_TEXT')\n self.internal_user_label = settings_text('INTERNAL_USER_LABEL')\n self.external_user_label = settings_text('EXTERNAL_USER_LABEL')\n my_instances = []\n all_instances = []\n end_threshold = timezone.now() - datetime.timedelta(days=30)\n user = self.request.user\n is_logged_in = False\n\n if user and user.is_authenticated:\n is_logged_in = True\n for instance in 
(CourseInstance.objects\n .filter(course__teachers=user.userprofile,\n ending_time__gte=end_threshold)\n .all()):\n my_instances.append(instance)\n\n for instance in user.userprofile.assisting_courses.all().filter(ending_time__gte=end_threshold):\n if instance not in my_instances:\n my_instances.append(instance)\n \n for instance in user.userprofile.enrolled.all().filter(ending_time__gte=end_threshold):\n if instance not in my_instances:\n my_instances.append(instance)\n \n all_instances = CourseInstance.objects.get_visible(user).filter(ending_time__gte=end_threshold)\n all_instances = [c for c in all_instances if c not in my_instances]\n \n self.all_instances = all_instances\n self.my_instances = my_instances\n self.is_logged_in = is_logged_in\n\n self.note(\"welcome_text\", \n \"internal_user_label\", \n \"external_user_label\",\n \"my_instances\",\n \"all_instances\",\n \"is_logged_in\",\n )\n\n\nclass ArchiveView(UserProfileView):\n access_mode = ACCESS.ANONYMOUS\n template_name = \"course/archive.html\"\n\n def get_common_objects(self):\n super().get_common_objects()\n self.instances = CourseInstance.objects.get_visible(self.request.user)\n self.note(\"instances\")\n\nclass InstanceView(EnrollableViewMixin, BaseTemplateView):\n access_mode = ACCESS.STUDENT\n # ACCESS.STUDENT requires users to log in, but the access mode is dropped\n # in public courses. CourseVisiblePermission has more restrictions as well.\n template_name = \"course/course.html\"\n\n def handle_no_permission(self):\n if self.request.user.is_authenticated \\\n and self.instance.view_content_to == CourseInstance.VIEW_ACCESS.ENROLLED:\n # The course instance is visible to only enrolled students, so\n # redirect the user to the enroll page instead of showing\n # a 403 Forbidden error.\n return redirect(self.instance.get_url('enroll'))\n return super().handle_no_permission()\n\n def get(self, request, *args, **kwargs):\n # external LTI Tool Providers may return the user to the course instance view\n # with a message given in GET query parameters\n lti_error_msg = request.GET.get('lti_errormsg')\n lti_msg = request.GET.get('lti_msg')\n # message HTML is not escaped in the templates so escape it here\n if lti_error_msg:\n messages.error(request, html.escape(lti_error_msg))\n elif lti_msg:\n messages.info(request, html.escape(lti_msg))\n\n later_instance = None\n\n if self.instance.is_past:\n try:\n later_instance = (\n CourseInstance.objects\n .get_visible(request.user)\n .filter(course=self.course, ending_time__gte=timezone.now())\n .latest('starting_time')\n )\n except CourseInstance.DoesNotExist:\n pass\n\n kwargs['later_instance'] = later_instance\n\n return super().get(request, *args, **kwargs)\n\nclass Enroll(EnrollableViewMixin, BaseRedirectMixin, BaseTemplateView):\n permission_classes = [EnrollInfoVisiblePermission]\n course_permission_classes = []\n template_name = \"course/enroll.html\"\n\n def post(self, request, *args, **kwargs):\n\n if self.is_student or not self.enrollable:\n messages.error(self.request, _(\"You cannot enroll, or have already enrolled, in this course.\"))\n raise PermissionDenied()\n\n if not self.instance.is_enrollment_open():\n messages.error(self.request, _(\"The enrollment is not open.\"))\n raise PermissionDenied()\n\n # Support enrollment questionnaires.\n exercise = LearningObject.objects.find_enrollment_exercise(\n self.instance, self.profile)\n if exercise:\n return self.redirect(exercise.get_absolute_url())\n\n self.instance.enroll_student(self.request.user)\n return 
self.redirect(self.instance.get_absolute_url())\n\n\nclass ModuleView(CourseModuleBaseView):\n template_name = \"course/module.html\"\n\n def get_common_objects(self):\n super().get_common_objects()\n self.now = timezone.now()\n try:\n self.children = self.content.flat_module(self.module)\n cur, tree, prev, nex = self.content.find(self.module)\n self.previous = prev\n self.current = cur\n self.next = nex\n except NoSuchContent:\n raise Http404\n self.note('now', 'children', 'previous', 'current', 'next')\n\n\nclass CalendarExport(CourseInstanceMixin, BaseView):\n\n def get(self, request, *args, **kwargs):\n cal = icalendar.Calendar()\n cal.add('prodid', '-// {} calendar //'.format(settings.BRAND_NAME))\n cal.add('version', '2.0')\n for module in self.instance.course_modules.all():\n event = icalendar.Event()\n event.add('summary', module.name)\n event.add('dtstart',\n module.closing_time - datetime.timedelta(hours=1))\n event.add('dtend', module.closing_time)\n event.add('dtstamp', module.closing_time)\n event['uid'] = \"module/\" + str(module.id) + \"/A+\"\n cal.add_component(event)\n\n return HttpResponse(cal.to_ical(),\n content_type=\"text/calendar; charset=utf-8\")\n\n\nclass GroupsView(CourseInstanceMixin, BaseFormView):\n access_mode = ACCESS.ENROLLED\n template_name = \"course/groups.html\"\n form_class = GroupsForm\n\n def get_common_objects(self):\n super().get_common_objects()\n self.enrollment = self.instance.get_enrollment_for(self.request.user)\n self.groups = list(self.profile.groups.filter(course_instance=self.instance))\n self.note('enrollment','groups')\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"profile\"] = self.profile\n kwargs[\"instance\"] = self.instance\n kwargs[\"content\"] = self.content\n return kwargs\n\n def get_success_url(self):\n return self.instance.get_url('groups')\n\n def form_valid(self, form):\n form.save()\n messages.success(self.request, _(\"A new student group was created.\"))\n return super().form_valid(form)\n\n\nclass GroupSelect(CourseInstanceMixin, BaseFormView):\n access_mode = ACCESS.ENROLLED\n form_class = GroupSelectForm\n template_name = \"course/_group_info.html\"\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"profile\"] = self.profile\n kwargs[\"instance\"] = self.instance\n return kwargs\n\n def get_success_url(self):\n return self.instance.get_absolute_url()\n\n def get(self, request, *args, **kwargs):\n return self.http_method_not_allowed(request, *args, **kwargs)\n\n def form_invalid(self, form):\n return HttpResponse('Invalid group selection')\n\n def form_valid(self, form):\n enrollment = form.save()\n if self.request.is_ajax():\n return self.render_to_response(self.get_context_data(\n **group_info_context(enrollment.selected_group, self.profile)))\n return super().form_valid(form)\n", "path": "course/views.py"}]}
| 2,711 | 204 |
gh_patches_debug_25235 | rasdani/github-patches | git_diff | sktime__sktime-5889 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Prediction fails for TransformedTargetForecaster with LTSFNLinearForecaster with a non-relative ForecastingHorizon
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
When constructing a TransformedTargetForecaster with a transformer and LTSFNLinearForecaster, prediction fails saying that a different fh has been provided even if that is not the case.
```
Traceback (most recent call last):
File "bug_report/tmp.py", line 29, in <module>
pipe.predict(fh=fh)
File "lib/python3.11/site-packages/sktime/forecasting/base/_base.py", line 431, in predict
y_pred = self._predict(fh=fh, X=X_inner)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/sktime/forecasting/compose/_pipeline.py", line 1032, in _predict
y_pred = self.forecaster_.predict(fh=fh, X=X)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/sktime/forecasting/base/_base.py", line 424, in predict
fh = self._check_fh(fh)
^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/sktime/forecasting/base/_base.py", line 1745, in _check_fh
raise ValueError(
ValueError: A different forecasting horizon `fh` has been provided from the one seen already in `fit`, in this instance of LTSFNLinearForecaster. If you want to change the forecasting horizon, please re-fit the forecaster. This is because fitting of the forecaster LTSFNLinearForecaster depends on `fh`.
```
**To Reproduce**
<!--
Add a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com
-->
```python
import numpy as np
import pandas as pd
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.ltsf import LTSFNLinearForecaster
from sktime.split import temporal_train_test_split
from sktime.transformations.series.boxcox import LogTransformer
y = pd.DataFrame(
np.random.random([100, 3]), index=pd.date_range("2023-01-01", periods=100)
)
y_train, y_test = temporal_train_test_split(y, test_size=30)
# TransformedTargetForecaster works with relative ForecastingHorizon
fh = ForecastingHorizon(np.arange(1, 31), is_relative=True)
pipe = LogTransformer() * LTSFNLinearForecaster(seq_len=20, pred_len=30)
pipe.fit(y_train, fh=fh)
pipe.predict()
# Forecaster works with non-relative ForecastingHorizon
fh = ForecastingHorizon(y_test.index, is_relative=False)
pipe = LTSFNLinearForecaster(seq_len=20, pred_len=30, batch_size=256)
pipe.fit(y_train, fh=fh)
pipe.predict()
# TransformedTargetForecaster fails with non-relative ForecastingHorizon
fh = ForecastingHorizon(y_test.index, is_relative=False)
pipe = LogTransformer() * LTSFNLinearForecaster(seq_len=20, pred_len=30)
pipe.fit(y_train, fh=fh)
pipe.predict()
# TransformedTargetForecaster fails with non-relative ForecastingHorizon even if ForecastingHorizon is provided to predict
fh = ForecastingHorizon(y_test.index, is_relative=False)
pipe = LogTransformer() * LTSFNLinearForecaster(seq_len=20, pred_len=30)
pipe.fit(y_train, fh=fh)
pipe.predict(fh=fh)
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
I would expect the TransformedTargetForecaster prediction to work with a non-relative ForecastingHorizon same as when using only the LTSFNLinearForecaster without needing to provide the ForecastingHorizon again.
**Additional context**
<!--
Add any other context about the problem here.
-->
I suspect that it has to do with the conversion of the ForecastingHorizon to relative if it's non-relative and then it thinks that it's not the same ForecastingHorizon anymore.
https://github.com/sktime/sktime/blob/d8574257d62a36930203bcb11f532623869fd7d7/sktime/forecasting/base/adapters/_pytorch.py#L64
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
System:
python: 3.11.7 (main, Dec 17 2023, 18:26:46) [GCC 11.4.0]
executable: /home/usr/.pyenv/versions/bug_report/bin/python
machine: Linux-6.5.0-15-generic-x86_64-with-glibc2.35
Python dependencies:
pip: 23.3.2
sktime: 0.26.0
sklearn: 1.4.0
skbase: 0.7.2
numpy: 1.26.3
scipy: 1.12.0
pandas: 2.1.4
matplotlib: 3.8.2
joblib: 1.3.2
numba: 0.59.0
statsmodels: 0.14.1
pmdarima: 2.0.4
statsforecast: 1.6.0
tsfresh: None
tslearn: None
torch: 2.2.0+cpu
tensorflow: None
tensorflow_probability: None
</details>
<!-- Thanks for contributing! -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/forecasting/base/adapters/_pytorch.py`
Content:
```
1 from abc import ABC
2
3 import numpy as np
4 import pandas as pd
5
6 from sktime.forecasting.base import BaseForecaster
7 from sktime.utils.validation._dependencies import _check_soft_dependencies
8
9 if _check_soft_dependencies("torch", severity="none"):
10 import torch
11
12
13 class BaseDeepNetworkPyTorch(BaseForecaster, ABC):
14 """Abstract base class for deep learning networks using torch.nn."""
15
16 _tags = {
17 "python_dependencies": "torch",
18 "y_inner_mtype": "pd.DataFrame",
19 "capability:insample": False,
20 "capability:pred_int:insample": False,
21 "scitype:y": "both",
22 "ignores-exogeneous-X": True,
23 }
24
25 def __init__(
26 self,
27 num_epochs=16,
28 batch_size=8,
29 in_channels=1,
30 individual=False,
31 criterion_kwargs=None,
32 optimizer=None,
33 optimizer_kwargs=None,
34 lr=0.001,
35 ):
36 self.num_epochs = num_epochs
37 self.batch_size = batch_size
38 self.in_channels = in_channels
39 self.individual = individual
40 self.criterion_kwargs = criterion_kwargs
41 self.optimizer = optimizer
42 self.optimizer_kwargs = optimizer_kwargs
43 self.lr = lr
44
45 super().__init__()
46
47 def _fit(self, y, fh, X=None):
48 """Fit the network.
49
50 Changes to state:
51 writes to self._network.state_dict
52
53 Parameters
54 ----------
55 X : iterable-style or map-style dataset
56 see (https://pytorch.org/docs/stable/data.html) for more information
57 """
58 from sktime.forecasting.base import ForecastingHorizon
59
60 # save fh and y for prediction later
61 if fh.is_relative:
62 self._fh = fh
63 else:
64 fh = fh.to_relative(self.cutoff)
65 self._fh = fh
66
67 self._y = y
68
69 if type(fh) is ForecastingHorizon:
70 self.network = self._build_network(fh._values[-1])
71 else:
72 self.network = self._build_network(fh)
73
74 if self.criterion:
75 if self.criterion in self.criterions.keys():
76 if self.criterion_kwargs:
77 self._criterion = self.criterions[self.criterion](
78 **self.criterion_kwargs
79 )
80 else:
81 self._criterion = self.criterions[self.criterion]()
82 else:
83 raise TypeError(
84 f"Please pass one of {self.criterions.keys()} for `criterion`."
85 )
86 else:
87 # default criterion
88 self._criterion = torch.nn.MSELoss()
89
90 if self.optimizer:
91 if self.optimizer in self.optimizers.keys():
92 if self.optimizer_kwargs:
93 self._optimizer = self.optimizers[self.optimizer](
94 self.network.parameters(), lr=self.lr, **self.optimizer_kwargs
95 )
96 else:
97 self._optimizer = self.optimizers[self.optimizer](
98 self.network.parameters(), lr=self.lr
99 )
100 else:
101 raise TypeError(
102 f"Please pass one of {self.optimizers.keys()} for `optimizer`."
103 )
104 else:
105 # default optimizer
106 self._optimizer = torch.optim.Adam(self.network.parameters(), lr=self.lr)
107
108 dataloader = self.build_pytorch_train_dataloader(y)
109 self.network.train()
110
111 for _ in range(self.num_epochs):
112 for x, y in dataloader:
113 y_pred = self.network(x)
114 loss = self._criterion(y_pred, y)
115 self._optimizer.zero_grad()
116 loss.backward()
117 self._optimizer.step()
118
119 def _predict(self, X=None, fh=None):
120 """Predict with fitted model."""
121 from torch import cat
122
123 if fh is None:
124 fh = self._fh
125
126 if max(fh._values) > self.network.pred_len or min(fh._values) < 0:
127 raise ValueError(
128 f"fh of {fh} passed to {self.__class__.__name__} is not "
129 "within `pred_len`. Please use a fh that aligns with the `pred_len` of "
130 "the forecaster."
131 )
132
133 if X is None:
134 dataloader = self.build_pytorch_pred_dataloader(self._y, fh)
135 else:
136 dataloader = self.build_pytorch_pred_dataloader(X, fh)
137
138 y_pred = []
139 for x, _ in dataloader:
140 y_pred.append(self.network(x).detach())
141 y_pred = cat(y_pred, dim=0).view(-1, y_pred[0].shape[-1]).numpy()
142 y_pred = y_pred[fh._values.values - 1]
143 y_pred = pd.DataFrame(
144 y_pred, columns=self._y.columns, index=fh.to_absolute_index(self.cutoff)
145 )
146
147 return y_pred
148
149 def build_pytorch_train_dataloader(self, y):
150 """Build PyTorch DataLoader for training."""
151 from torch.utils.data import DataLoader
152
153 if self.custom_dataset_train:
154 if hasattr(self.custom_dataset_train, "build_dataset") and callable(
155 self.custom_dataset_train.build_dataset
156 ):
157 self.custom_dataset_train.build_dataset(y)
158 dataset = self.custom_dataset_train
159 else:
160 raise NotImplementedError(
161 "Custom Dataset `build_dataset` method is not available. Please "
162 f"refer to the {self.__class__.__name__}.build_dataset "
163 "documentation."
164 )
165 else:
166 dataset = PyTorchTrainDataset(
167 y=y,
168 seq_len=self.network.seq_len,
169 fh=self._fh._values[-1],
170 )
171
172 return DataLoader(
173 dataset,
174 self.batch_size,
175 )
176
177 def build_pytorch_pred_dataloader(self, y, fh):
178 """Build PyTorch DataLoader for prediction."""
179 from torch.utils.data import DataLoader
180
181 if self.custom_dataset_pred:
182 if hasattr(self.custom_dataset_pred, "build_dataset") and callable(
183 self.custom_dataset_pred.build_dataset
184 ):
185 self.custom_dataset_train.build_dataset(y)
186 dataset = self.custom_dataset_train
187 else:
188 raise NotImplementedError(
189 "Custom Dataset `build_dataset` method is not available. Please"
190 f"refer to the {self.__class__.__name__}.build_dataset"
191 "documentation."
192 )
193 else:
194 dataset = PyTorchPredDataset(
195 y=y[-self.network.seq_len :],
196 seq_len=self.network.seq_len,
197 )
198
199 return DataLoader(
200 dataset,
201 self.batch_size,
202 )
203
204 def get_y_true(self, y):
205 """Get y_true values for validation."""
206 dataloader = self.build_pytorch_pred_dataloader(y)
207 y_true = [y.flatten().numpy() for _, y in dataloader]
208 return np.concatenate(y_true, axis=0)
209
210
211 class PyTorchTrainDataset:
212 """Dataset for use in sktime deep learning forecasters."""
213
214 def __init__(self, y, seq_len, fh):
215 self.y = y.values
216 self.seq_len = seq_len
217 self.fh = fh
218
219 def __len__(self):
220 """Return length of dataset."""
221 return len(self.y) - self.seq_len - self.fh + 1
222
223 def __getitem__(self, i):
224 """Return data point."""
225 from torch import from_numpy, tensor
226
227 return (
228 tensor(self.y[i : i + self.seq_len]).float(),
229 from_numpy(self.y[i + self.seq_len : i + self.seq_len + self.fh]).float(),
230 )
231
232
233 class PyTorchPredDataset:
234 """Dataset for use in sktime deep learning forecasters."""
235
236 def __init__(self, y, seq_len):
237 self.y = y.values
238 self.seq_len = seq_len
239
240 def __len__(self):
241 """Return length of dataset."""
242 return 1
243
244 def __getitem__(self, i):
245 """Return data point."""
246 from torch import from_numpy, tensor
247
248 return (
249 tensor(self.y[i : i + self.seq_len]).float(),
250 from_numpy(self.y[i + self.seq_len : i + self.seq_len]).float(),
251 )
252
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sktime/forecasting/base/adapters/_pytorch.py b/sktime/forecasting/base/adapters/_pytorch.py
--- a/sktime/forecasting/base/adapters/_pytorch.py
+++ b/sktime/forecasting/base/adapters/_pytorch.py
@@ -57,14 +57,7 @@
"""
from sktime.forecasting.base import ForecastingHorizon
- # save fh and y for prediction later
- if fh.is_relative:
- self._fh = fh
- else:
- fh = fh.to_relative(self.cutoff)
- self._fh = fh
-
- self._y = y
+ fh = fh.to_relative(self.cutoff)
if type(fh) is ForecastingHorizon:
self.network = self._build_network(fh._values[-1])
@@ -122,6 +115,7 @@
if fh is None:
fh = self._fh
+ fh = fh.to_relative(self.cutoff)
if max(fh._values) > self.network.pred_len or min(fh._values) < 0:
raise ValueError(
@@ -166,7 +160,7 @@
dataset = PyTorchTrainDataset(
y=y,
seq_len=self.network.seq_len,
- fh=self._fh._values[-1],
+ fh=self._fh.to_relative(self.cutoff)._values[-1],
)
return DataLoader(
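
Editorial note on the patch above: the base class already tracks the horizon seen in `fit` — that is what `_check_fh` compares against in the reported traceback — so the adapter no longer overwrites it with a pre-converted copy. Instead, the relative view is derived locally with `fh.to_relative(self.cutoff)` wherever it is actually needed (`_fit`, `_predict`, and the training-dataloader builder). Because the stored horizon now stays exactly what the pipeline re-passes at predict time, the equality check no longer fails for absolute horizons. The snippet below simply re-runs the failing case from the issue and is expected to pass with the patch applied (it needs `torch` installed, as in the reported environment); the expected output shape is illustrative.

```python
import numpy as np
import pandas as pd

from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.ltsf import LTSFNLinearForecaster
from sktime.split import temporal_train_test_split
from sktime.transformations.series.boxcox import LogTransformer

y = pd.DataFrame(
    np.random.random([100, 3]), index=pd.date_range("2023-01-01", periods=100)
)
y_train, y_test = temporal_train_test_split(y, test_size=30)

# Absolute (non-relative) horizon, exactly as in the bug report.
fh = ForecastingHorizon(y_test.index, is_relative=False)

pipe = LogTransformer() * LTSFNLinearForecaster(seq_len=20, pred_len=30)
pipe.fit(y_train, fh=fh)

# The pipeline re-passes the absolute fh to the inner forecaster; with the patch the
# stored horizon is identical to it, so no "different forecasting horizon" error is raised.
print(pipe.predict().shape)  # expected: (30, 3)
```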
|
{"golden_diff": "diff --git a/sktime/forecasting/base/adapters/_pytorch.py b/sktime/forecasting/base/adapters/_pytorch.py\n--- a/sktime/forecasting/base/adapters/_pytorch.py\n+++ b/sktime/forecasting/base/adapters/_pytorch.py\n@@ -57,14 +57,7 @@\n \"\"\"\n from sktime.forecasting.base import ForecastingHorizon\n \n- # save fh and y for prediction later\n- if fh.is_relative:\n- self._fh = fh\n- else:\n- fh = fh.to_relative(self.cutoff)\n- self._fh = fh\n-\n- self._y = y\n+ fh = fh.to_relative(self.cutoff)\n \n if type(fh) is ForecastingHorizon:\n self.network = self._build_network(fh._values[-1])\n@@ -122,6 +115,7 @@\n \n if fh is None:\n fh = self._fh\n+ fh = fh.to_relative(self.cutoff)\n \n if max(fh._values) > self.network.pred_len or min(fh._values) < 0:\n raise ValueError(\n@@ -166,7 +160,7 @@\n dataset = PyTorchTrainDataset(\n y=y,\n seq_len=self.network.seq_len,\n- fh=self._fh._values[-1],\n+ fh=self._fh.to_relative(self.cutoff)._values[-1],\n )\n \n return DataLoader(\n", "issue": "[BUG] Prediction fails for TransformedTargetForecaster with LTSFNLinearForecaster with a non-relative ForecastingHorizon\n**Describe the bug**\r\n<!--\r\nA clear and concise description of what the bug is.\r\n-->\r\nWhen constructing a TransformedTargetForecaster with a transformer and LTSFNLinearForecaster, prediction fails saying that a different fh has been provided even if that is not the case.\r\n```\r\nTraceback (most recent call last):\r\n File \"bug_report/tmp.py\", line 29, in <module>\r\n pipe.predict(fh=fh)\r\n File \"lib/python3.11/site-packages/sktime/forecasting/base/_base.py\", line 431, in predict\r\n y_pred = self._predict(fh=fh, X=X_inner)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"lib/python3.11/site-packages/sktime/forecasting/compose/_pipeline.py\", line 1032, in _predict\r\n y_pred = self.forecaster_.predict(fh=fh, X=X)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"lib/python3.11/site-packages/sktime/forecasting/base/_base.py\", line 424, in predict\r\n fh = self._check_fh(fh)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"lib/python3.11/site-packages/sktime/forecasting/base/_base.py\", line 1745, in _check_fh\r\n raise ValueError(\r\nValueError: A different forecasting horizon `fh` has been provided from the one seen already in `fit`, in this instance of LTSFNLinearForecaster. If you want to change the forecasting horizon, please re-fit the forecaster. This is because fitting of the forecaster LTSFNLinearForecaster depends on `fh`.\r\n```\r\n\r\n**To Reproduce**\r\n<!--\r\nAdd a Minimal, Complete, and Verifiable example (for more details, see e.g. 
https://stackoverflow.com/help/mcve\r\n\r\nIf the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com\r\n-->\r\n\r\n```python\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom sktime.forecasting.base import ForecastingHorizon\r\nfrom sktime.forecasting.ltsf import LTSFNLinearForecaster\r\nfrom sktime.split import temporal_train_test_split\r\nfrom sktime.transformations.series.boxcox import LogTransformer\r\n\r\ny = pd.DataFrame(\r\n np.random.random([100, 3]), index=pd.date_range(\"2023-01-01\", periods=100)\r\n)\r\ny_train, y_test = temporal_train_test_split(y, test_size=30)\r\n\r\n# TransformedTargetForecaster works with relative ForecastingHorizon\r\nfh = ForecastingHorizon(np.arange(1, 31), is_relative=True)\r\npipe = LogTransformer() * LTSFNLinearForecaster(seq_len=20, pred_len=30)\r\npipe.fit(y_train, fh=fh)\r\npipe.predict()\r\n\r\n# Forecaster works with non-relative ForecastingHorizon\r\nfh = ForecastingHorizon(y_test.index, is_relative=False)\r\npipe = LTSFNLinearForecaster(seq_len=20, pred_len=30, batch_size=256)\r\npipe.fit(y_train, fh=fh)\r\npipe.predict()\r\n\r\n# TransformedTargetForecaster fails with non-relative ForecastingHorizon\r\nfh = ForecastingHorizon(y_test.index, is_relative=False)\r\npipe = LogTransformer() * LTSFNLinearForecaster(seq_len=20, pred_len=30)\r\npipe.fit(y_train, fh=fh)\r\npipe.predict()\r\n\r\n# TransformedTargetForecaster fails with non-relative ForecastingHorizon even if ForecastingHorizon is provided to predict\r\nfh = ForecastingHorizon(y_test.index, is_relative=False)\r\npipe = LogTransformer() * LTSFNLinearForecaster(seq_len=20, pred_len=30)\r\npipe.fit(y_train, fh=fh)\r\npipe.predict(fh=fh)\r\n\r\n```\r\n\r\n**Expected behavior**\r\n<!--\r\nA clear and concise description of what you expected to happen.\r\n-->\r\nI would expect the TransformedTargetForecaster prediction to work with a non-relative ForecastingHorizon same as when using only the LTSFNLinearForecaster without needing to provide the ForecastingHorizon again.\r\n\r\n**Additional context**\r\n<!--\r\nAdd any other context about the problem here.\r\n-->\r\nI suspect that it has to do with the conversion of the ForecastingHorizon to relative if it's non-relative and then it thinks that it's not the same ForecastingHorizon anymore.\r\nhttps://github.com/sktime/sktime/blob/d8574257d62a36930203bcb11f532623869fd7d7/sktime/forecasting/base/adapters/_pytorch.py#L64\r\n\r\n**Versions**\r\n<details>\r\n\r\n<!--\r\nPlease run the following code snippet and paste the output here:\r\n\r\nfrom sktime import show_versions; show_versions()\r\n-->\r\n\r\nSystem:\r\n python: 3.11.7 (main, Dec 17 2023, 18:26:46) [GCC 11.4.0]\r\nexecutable: /home/usr/.pyenv/versions/bug_report/bin/python\r\n machine: Linux-6.5.0-15-generic-x86_64-with-glibc2.35\r\n\r\nPython dependencies:\r\n pip: 23.3.2\r\n sktime: 0.26.0\r\n sklearn: 1.4.0\r\n skbase: 0.7.2\r\n numpy: 1.26.3\r\n scipy: 1.12.0\r\n pandas: 2.1.4\r\n matplotlib: 3.8.2\r\n joblib: 1.3.2\r\n numba: 0.59.0\r\n statsmodels: 0.14.1\r\n pmdarima: 2.0.4\r\nstatsforecast: 1.6.0\r\n tsfresh: None\r\n tslearn: None\r\n torch: 2.2.0+cpu\r\n tensorflow: None\r\ntensorflow_probability: None\r\n\r\n\r\n</details>\r\n\r\n\r\n\r\n<!-- Thanks for contributing! 
-->\r\n\n", "before_files": [{"content": "from abc import ABC\n\nimport numpy as np\nimport pandas as pd\n\nfrom sktime.forecasting.base import BaseForecaster\nfrom sktime.utils.validation._dependencies import _check_soft_dependencies\n\nif _check_soft_dependencies(\"torch\", severity=\"none\"):\n import torch\n\n\nclass BaseDeepNetworkPyTorch(BaseForecaster, ABC):\n \"\"\"Abstract base class for deep learning networks using torch.nn.\"\"\"\n\n _tags = {\n \"python_dependencies\": \"torch\",\n \"y_inner_mtype\": \"pd.DataFrame\",\n \"capability:insample\": False,\n \"capability:pred_int:insample\": False,\n \"scitype:y\": \"both\",\n \"ignores-exogeneous-X\": True,\n }\n\n def __init__(\n self,\n num_epochs=16,\n batch_size=8,\n in_channels=1,\n individual=False,\n criterion_kwargs=None,\n optimizer=None,\n optimizer_kwargs=None,\n lr=0.001,\n ):\n self.num_epochs = num_epochs\n self.batch_size = batch_size\n self.in_channels = in_channels\n self.individual = individual\n self.criterion_kwargs = criterion_kwargs\n self.optimizer = optimizer\n self.optimizer_kwargs = optimizer_kwargs\n self.lr = lr\n\n super().__init__()\n\n def _fit(self, y, fh, X=None):\n \"\"\"Fit the network.\n\n Changes to state:\n writes to self._network.state_dict\n\n Parameters\n ----------\n X : iterable-style or map-style dataset\n see (https://pytorch.org/docs/stable/data.html) for more information\n \"\"\"\n from sktime.forecasting.base import ForecastingHorizon\n\n # save fh and y for prediction later\n if fh.is_relative:\n self._fh = fh\n else:\n fh = fh.to_relative(self.cutoff)\n self._fh = fh\n\n self._y = y\n\n if type(fh) is ForecastingHorizon:\n self.network = self._build_network(fh._values[-1])\n else:\n self.network = self._build_network(fh)\n\n if self.criterion:\n if self.criterion in self.criterions.keys():\n if self.criterion_kwargs:\n self._criterion = self.criterions[self.criterion](\n **self.criterion_kwargs\n )\n else:\n self._criterion = self.criterions[self.criterion]()\n else:\n raise TypeError(\n f\"Please pass one of {self.criterions.keys()} for `criterion`.\"\n )\n else:\n # default criterion\n self._criterion = torch.nn.MSELoss()\n\n if self.optimizer:\n if self.optimizer in self.optimizers.keys():\n if self.optimizer_kwargs:\n self._optimizer = self.optimizers[self.optimizer](\n self.network.parameters(), lr=self.lr, **self.optimizer_kwargs\n )\n else:\n self._optimizer = self.optimizers[self.optimizer](\n self.network.parameters(), lr=self.lr\n )\n else:\n raise TypeError(\n f\"Please pass one of {self.optimizers.keys()} for `optimizer`.\"\n )\n else:\n # default optimizer\n self._optimizer = torch.optim.Adam(self.network.parameters(), lr=self.lr)\n\n dataloader = self.build_pytorch_train_dataloader(y)\n self.network.train()\n\n for _ in range(self.num_epochs):\n for x, y in dataloader:\n y_pred = self.network(x)\n loss = self._criterion(y_pred, y)\n self._optimizer.zero_grad()\n loss.backward()\n self._optimizer.step()\n\n def _predict(self, X=None, fh=None):\n \"\"\"Predict with fitted model.\"\"\"\n from torch import cat\n\n if fh is None:\n fh = self._fh\n\n if max(fh._values) > self.network.pred_len or min(fh._values) < 0:\n raise ValueError(\n f\"fh of {fh} passed to {self.__class__.__name__} is not \"\n \"within `pred_len`. 
Please use a fh that aligns with the `pred_len` of \"\n \"the forecaster.\"\n )\n\n if X is None:\n dataloader = self.build_pytorch_pred_dataloader(self._y, fh)\n else:\n dataloader = self.build_pytorch_pred_dataloader(X, fh)\n\n y_pred = []\n for x, _ in dataloader:\n y_pred.append(self.network(x).detach())\n y_pred = cat(y_pred, dim=0).view(-1, y_pred[0].shape[-1]).numpy()\n y_pred = y_pred[fh._values.values - 1]\n y_pred = pd.DataFrame(\n y_pred, columns=self._y.columns, index=fh.to_absolute_index(self.cutoff)\n )\n\n return y_pred\n\n def build_pytorch_train_dataloader(self, y):\n \"\"\"Build PyTorch DataLoader for training.\"\"\"\n from torch.utils.data import DataLoader\n\n if self.custom_dataset_train:\n if hasattr(self.custom_dataset_train, \"build_dataset\") and callable(\n self.custom_dataset_train.build_dataset\n ):\n self.custom_dataset_train.build_dataset(y)\n dataset = self.custom_dataset_train\n else:\n raise NotImplementedError(\n \"Custom Dataset `build_dataset` method is not available. Please \"\n f\"refer to the {self.__class__.__name__}.build_dataset \"\n \"documentation.\"\n )\n else:\n dataset = PyTorchTrainDataset(\n y=y,\n seq_len=self.network.seq_len,\n fh=self._fh._values[-1],\n )\n\n return DataLoader(\n dataset,\n self.batch_size,\n )\n\n def build_pytorch_pred_dataloader(self, y, fh):\n \"\"\"Build PyTorch DataLoader for prediction.\"\"\"\n from torch.utils.data import DataLoader\n\n if self.custom_dataset_pred:\n if hasattr(self.custom_dataset_pred, \"build_dataset\") and callable(\n self.custom_dataset_pred.build_dataset\n ):\n self.custom_dataset_train.build_dataset(y)\n dataset = self.custom_dataset_train\n else:\n raise NotImplementedError(\n \"Custom Dataset `build_dataset` method is not available. Please\"\n f\"refer to the {self.__class__.__name__}.build_dataset\"\n \"documentation.\"\n )\n else:\n dataset = PyTorchPredDataset(\n y=y[-self.network.seq_len :],\n seq_len=self.network.seq_len,\n )\n\n return DataLoader(\n dataset,\n self.batch_size,\n )\n\n def get_y_true(self, y):\n \"\"\"Get y_true values for validation.\"\"\"\n dataloader = self.build_pytorch_pred_dataloader(y)\n y_true = [y.flatten().numpy() for _, y in dataloader]\n return np.concatenate(y_true, axis=0)\n\n\nclass PyTorchTrainDataset:\n \"\"\"Dataset for use in sktime deep learning forecasters.\"\"\"\n\n def __init__(self, y, seq_len, fh):\n self.y = y.values\n self.seq_len = seq_len\n self.fh = fh\n\n def __len__(self):\n \"\"\"Return length of dataset.\"\"\"\n return len(self.y) - self.seq_len - self.fh + 1\n\n def __getitem__(self, i):\n \"\"\"Return data point.\"\"\"\n from torch import from_numpy, tensor\n\n return (\n tensor(self.y[i : i + self.seq_len]).float(),\n from_numpy(self.y[i + self.seq_len : i + self.seq_len + self.fh]).float(),\n )\n\n\nclass PyTorchPredDataset:\n \"\"\"Dataset for use in sktime deep learning forecasters.\"\"\"\n\n def __init__(self, y, seq_len):\n self.y = y.values\n self.seq_len = seq_len\n\n def __len__(self):\n \"\"\"Return length of dataset.\"\"\"\n return 1\n\n def __getitem__(self, i):\n \"\"\"Return data point.\"\"\"\n from torch import from_numpy, tensor\n\n return (\n tensor(self.y[i : i + self.seq_len]).float(),\n from_numpy(self.y[i + self.seq_len : i + self.seq_len]).float(),\n )\n", "path": "sktime/forecasting/base/adapters/_pytorch.py"}], "after_files": [{"content": "from abc import ABC\n\nimport numpy as np\nimport pandas as pd\n\nfrom sktime.forecasting.base import BaseForecaster\nfrom sktime.utils.validation._dependencies import 
_check_soft_dependencies\n\nif _check_soft_dependencies(\"torch\", severity=\"none\"):\n import torch\n\n\nclass BaseDeepNetworkPyTorch(BaseForecaster, ABC):\n \"\"\"Abstract base class for deep learning networks using torch.nn.\"\"\"\n\n _tags = {\n \"python_dependencies\": \"torch\",\n \"y_inner_mtype\": \"pd.DataFrame\",\n \"capability:insample\": False,\n \"capability:pred_int:insample\": False,\n \"scitype:y\": \"both\",\n \"ignores-exogeneous-X\": True,\n }\n\n def __init__(\n self,\n num_epochs=16,\n batch_size=8,\n in_channels=1,\n individual=False,\n criterion_kwargs=None,\n optimizer=None,\n optimizer_kwargs=None,\n lr=0.001,\n ):\n self.num_epochs = num_epochs\n self.batch_size = batch_size\n self.in_channels = in_channels\n self.individual = individual\n self.criterion_kwargs = criterion_kwargs\n self.optimizer = optimizer\n self.optimizer_kwargs = optimizer_kwargs\n self.lr = lr\n\n super().__init__()\n\n def _fit(self, y, fh, X=None):\n \"\"\"Fit the network.\n\n Changes to state:\n writes to self._network.state_dict\n\n Parameters\n ----------\n X : iterable-style or map-style dataset\n see (https://pytorch.org/docs/stable/data.html) for more information\n \"\"\"\n from sktime.forecasting.base import ForecastingHorizon\n\n fh = fh.to_relative(self.cutoff)\n\n if type(fh) is ForecastingHorizon:\n self.network = self._build_network(fh._values[-1])\n else:\n self.network = self._build_network(fh)\n\n if self.criterion:\n if self.criterion in self.criterions.keys():\n if self.criterion_kwargs:\n self._criterion = self.criterions[self.criterion](\n **self.criterion_kwargs\n )\n else:\n self._criterion = self.criterions[self.criterion]()\n else:\n raise TypeError(\n f\"Please pass one of {self.criterions.keys()} for `criterion`.\"\n )\n else:\n # default criterion\n self._criterion = torch.nn.MSELoss()\n\n if self.optimizer:\n if self.optimizer in self.optimizers.keys():\n if self.optimizer_kwargs:\n self._optimizer = self.optimizers[self.optimizer](\n self.network.parameters(), lr=self.lr, **self.optimizer_kwargs\n )\n else:\n self._optimizer = self.optimizers[self.optimizer](\n self.network.parameters(), lr=self.lr\n )\n else:\n raise TypeError(\n f\"Please pass one of {self.optimizers.keys()} for `optimizer`.\"\n )\n else:\n # default optimizer\n self._optimizer = torch.optim.Adam(self.network.parameters(), lr=self.lr)\n\n dataloader = self.build_pytorch_train_dataloader(y)\n self.network.train()\n\n for _ in range(self.num_epochs):\n for x, y in dataloader:\n y_pred = self.network(x)\n loss = self._criterion(y_pred, y)\n self._optimizer.zero_grad()\n loss.backward()\n self._optimizer.step()\n\n def _predict(self, X=None, fh=None):\n \"\"\"Predict with fitted model.\"\"\"\n from torch import cat\n\n if fh is None:\n fh = self._fh\n fh = fh.to_relative(self.cutoff)\n\n if max(fh._values) > self.network.pred_len or min(fh._values) < 0:\n raise ValueError(\n f\"fh of {fh} passed to {self.__class__.__name__} is not \"\n \"within `pred_len`. 
Please use a fh that aligns with the `pred_len` of \"\n \"the forecaster.\"\n )\n\n if X is None:\n dataloader = self.build_pytorch_pred_dataloader(self._y, fh)\n else:\n dataloader = self.build_pytorch_pred_dataloader(X, fh)\n\n y_pred = []\n for x, _ in dataloader:\n y_pred.append(self.network(x).detach())\n y_pred = cat(y_pred, dim=0).view(-1, y_pred[0].shape[-1]).numpy()\n y_pred = y_pred[fh._values.values - 1]\n y_pred = pd.DataFrame(\n y_pred, columns=self._y.columns, index=fh.to_absolute_index(self.cutoff)\n )\n\n return y_pred\n\n def build_pytorch_train_dataloader(self, y):\n \"\"\"Build PyTorch DataLoader for training.\"\"\"\n from torch.utils.data import DataLoader\n\n if self.custom_dataset_train:\n if hasattr(self.custom_dataset_train, \"build_dataset\") and callable(\n self.custom_dataset_train.build_dataset\n ):\n self.custom_dataset_train.build_dataset(y)\n dataset = self.custom_dataset_train\n else:\n raise NotImplementedError(\n \"Custom Dataset `build_dataset` method is not available. Please \"\n f\"refer to the {self.__class__.__name__}.build_dataset \"\n \"documentation.\"\n )\n else:\n dataset = PyTorchTrainDataset(\n y=y,\n seq_len=self.network.seq_len,\n fh=self._fh.to_relative(self.cutoff)._values[-1],\n )\n\n return DataLoader(\n dataset,\n self.batch_size,\n )\n\n def build_pytorch_pred_dataloader(self, y, fh):\n \"\"\"Build PyTorch DataLoader for prediction.\"\"\"\n from torch.utils.data import DataLoader\n\n if self.custom_dataset_pred:\n if hasattr(self.custom_dataset_pred, \"build_dataset\") and callable(\n self.custom_dataset_pred.build_dataset\n ):\n self.custom_dataset_train.build_dataset(y)\n dataset = self.custom_dataset_train\n else:\n raise NotImplementedError(\n \"Custom Dataset `build_dataset` method is not available. Please\"\n f\"refer to the {self.__class__.__name__}.build_dataset\"\n \"documentation.\"\n )\n else:\n dataset = PyTorchPredDataset(\n y=y[-self.network.seq_len :],\n seq_len=self.network.seq_len,\n )\n\n return DataLoader(\n dataset,\n self.batch_size,\n )\n\n def get_y_true(self, y):\n \"\"\"Get y_true values for validation.\"\"\"\n dataloader = self.build_pytorch_pred_dataloader(y)\n y_true = [y.flatten().numpy() for _, y in dataloader]\n return np.concatenate(y_true, axis=0)\n\n\nclass PyTorchTrainDataset:\n \"\"\"Dataset for use in sktime deep learning forecasters.\"\"\"\n\n def __init__(self, y, seq_len, fh):\n self.y = y.values\n self.seq_len = seq_len\n self.fh = fh\n\n def __len__(self):\n \"\"\"Return length of dataset.\"\"\"\n return len(self.y) - self.seq_len - self.fh + 1\n\n def __getitem__(self, i):\n \"\"\"Return data point.\"\"\"\n from torch import from_numpy, tensor\n\n return (\n tensor(self.y[i : i + self.seq_len]).float(),\n from_numpy(self.y[i + self.seq_len : i + self.seq_len + self.fh]).float(),\n )\n\n\nclass PyTorchPredDataset:\n \"\"\"Dataset for use in sktime deep learning forecasters.\"\"\"\n\n def __init__(self, y, seq_len):\n self.y = y.values\n self.seq_len = seq_len\n\n def __len__(self):\n \"\"\"Return length of dataset.\"\"\"\n return 1\n\n def __getitem__(self, i):\n \"\"\"Return data point.\"\"\"\n from torch import from_numpy, tensor\n\n return (\n tensor(self.y[i : i + self.seq_len]).float(),\n from_numpy(self.y[i + self.seq_len : i + self.seq_len]).float(),\n )\n", "path": "sktime/forecasting/base/adapters/_pytorch.py"}]}
| 4,004 | 322 |
gh_patches_debug_43042
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-17008
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support `parametrize(...)` assignments in the `__defaults__` macro
**Is your feature request related to a problem? Please describe.**
The `__defaults__` BUILD file construct helps eliminate the need for boilerplate target field assignments across an entire
directory tree. However, it does not support assigning the result of the `parametrize` macro to a field.
```
# BUILD
__defaults__({
python_sources: dict(resolve=parametrize(my_resolves())), # InvalidFieldTypeException: expects string, not Parametrize
})
```
**Describe the solution you'd like**
Support `parametrize` in this context in the natural way, i.e. interpret applicable targets as having the field assigned to
`parametrize(...)`.
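For illustration, the requested behavior amounts to what one would otherwise write on each target by hand; the resolve names below are hypothetical and only the shape of the BUILD code matters:
```
# BUILD — hand-written equivalent of the requested default (hypothetical resolves)
python_sources(
    name="lib",
    resolve=parametrize("py39-lock", "py311-lock"),
)

# Desired: a single declaration at the directory root instead
__defaults__({
    python_sources: dict(resolve=parametrize("py39-lock", "py311-lock")),
})
```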
**Describe alternatives you've considered**
Modifying `tailor` to populate this field on a per-directory basis might work.
**Additional context**
This is very useful when a single Python subproject is intended to run under several lockfiles.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/engine/internals/defaults.py`
Content:
```
1 # Copyright 2022 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3 """The `BuildFileDefaultsParserState.set_defaults` is used by the pants.engine.internals.Parser,
4 exposed as the `__defaults__` BUILD file symbol.
5
6 When parsing a BUILD (from the rule `pants.engine.internals.build_files.parse_address_family`) the
7 defaults from the closest parent BUILD file is passed as input to the parser, and the new defaults
8 resulting after the BUILD file have been parsed is returned in the `AddressFamily`.
9
10 These defaults are then applied when creating the `TargetAdaptor` targets by the `Registrar` in the
11 parser.
12 """
13 from __future__ import annotations
14
15 from dataclasses import dataclass
16 from typing import Any, Iterable, Mapping, Tuple, Union
17
18 from pants.engine.addresses import Address
19 from pants.engine.target import (
20 Field,
21 ImmutableValue,
22 InvalidFieldException,
23 RegisteredTargetTypes,
24 Target,
25 TargetGenerator,
26 )
27 from pants.engine.unions import UnionMembership
28 from pants.util.frozendict import FrozenDict
29
30 SetDefaultsValueT = Mapping[str, Any]
31 SetDefaultsKeyT = Union[str, Tuple[str, ...]]
32 SetDefaultsT = Mapping[SetDefaultsKeyT, SetDefaultsValueT]
33
34
35 class BuildFileDefaults(FrozenDict[str, FrozenDict[str, ImmutableValue]]):
36 """Map target types to default field values."""
37
38
39 @dataclass
40 class BuildFileDefaultsParserState:
41 address: Address
42 defaults: dict[str, Mapping[str, Any]]
43 registered_target_types: RegisteredTargetTypes
44 union_membership: UnionMembership
45
46 @classmethod
47 def create(
48 cls,
49 path: str,
50 defaults: BuildFileDefaults,
51 registered_target_types: RegisteredTargetTypes,
52 union_membership: UnionMembership,
53 ) -> BuildFileDefaultsParserState:
54 return cls(
55 address=Address(path, generated_name="__defaults__"),
56 defaults=dict(defaults),
57 registered_target_types=registered_target_types,
58 union_membership=union_membership,
59 )
60
61 def get_frozen_defaults(self) -> BuildFileDefaults:
62 types = self.registered_target_types.aliases_to_types
63 return BuildFileDefaults(
64 {
65 target_alias: FrozenDict(
66 {
67 field_type.alias: field_type.compute_value(
68 raw_value=default, address=self.address
69 )
70 for field_alias, default in fields.items()
71 for field_type in self._target_type_field_types(types[target_alias])
72 if field_alias in (field_type.alias, field_type.deprecated_alias)
73 }
74 )
75 for target_alias, fields in self.defaults.items()
76 }
77 )
78
79 def get(self, target_alias: str) -> Mapping[str, Any]:
80 # Used by `pants.engine.internals.parser.Parser._generate_symbols.Registrar.__call__`
81 return self.defaults.get(target_alias, {})
82
83 def set_defaults(
84 self,
85 *args: SetDefaultsT,
86 all: SetDefaultsValueT | None = None,
87 extend: bool = False,
88 **kwargs,
89 ) -> None:
90 defaults: dict[str, dict[str, Any]] = (
91 {} if not extend else {k: dict(v) for k, v in self.defaults.items()}
92 )
93
94 if all is not None:
95 self._process_defaults(
96 defaults,
97 {tuple(self.registered_target_types.aliases): all},
98 ignore_unknown_fields=True,
99 )
100
101 for arg in args:
102 self._process_defaults(defaults, arg)
103
104 # Update with new defaults, dropping targets without any default values.
105 for tgt, default in defaults.items():
106 if not default:
107 self.defaults.pop(tgt, None)
108 else:
109 self.defaults[tgt] = default
110
111 def _target_type_field_types(self, target_type: type[Target]) -> tuple[type[Field], ...]:
112 return (
113 *target_type.class_field_types(self.union_membership),
114 *(target_type.moved_fields if issubclass(target_type, TargetGenerator) else ()),
115 )
116
117 def _process_defaults(
118 self,
119 defaults: dict[str, dict[str, Any]],
120 targets_defaults: SetDefaultsT,
121 ignore_unknown_fields: bool = False,
122 ):
123 if not isinstance(targets_defaults, dict):
124 raise ValueError(
125 f"Expected dictionary mapping targets to default field values for {self.address} "
126 f"but got: {type(targets_defaults).__name__}."
127 )
128
129 types = self.registered_target_types.aliases_to_types
130 for target, default in targets_defaults.items():
131 if not isinstance(default, dict):
132 raise ValueError(
133 f"Invalid default field values in {self.address} for target type {target}, "
134 f"must be an `dict` but was {default!r} with type `{type(default).__name__}`."
135 )
136
137 targets: Iterable[str]
138 targets = target if isinstance(target, tuple) else (target,)
139 for target_alias in map(str, targets):
140 if target_alias in types:
141 target_type = types[target_alias]
142 else:
143 raise ValueError(f"Unrecognized target type {target_alias} in {self.address}.")
144
145 # Copy default dict if we may mutate it.
146 raw_values = dict(default) if ignore_unknown_fields else default
147
148 # Validate that field exists on target
149 valid_field_aliases = set(
150 target_type._get_field_aliases_to_field_types(
151 self._target_type_field_types(target_type)
152 ).keys()
153 )
154
155 for field_alias in default.keys():
156 if field_alias not in valid_field_aliases:
157 if ignore_unknown_fields:
158 del raw_values[field_alias]
159 else:
160 raise InvalidFieldException(
161 f"Unrecognized field `{field_alias}` for target {target_type.alias}. "
162 f"Valid fields are: {', '.join(sorted(valid_field_aliases))}.",
163 )
164
165 # TODO: support parametrization ? --needs special care due to Parametrize object not
166 # being hashable, and thus not acceptable in a FrozenDict instance.
167
168 # Merge all provided defaults for this call.
169 defaults.setdefault(target_type.alias, {}).update(raw_values)
170
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/pants/engine/internals/defaults.py b/src/python/pants/engine/internals/defaults.py
--- a/src/python/pants/engine/internals/defaults.py
+++ b/src/python/pants/engine/internals/defaults.py
@@ -13,9 +13,10 @@
from __future__ import annotations
from dataclasses import dataclass
-from typing import Any, Iterable, Mapping, Tuple, Union
+from typing import Any, Callable, Iterable, Mapping, Tuple, Union
from pants.engine.addresses import Address
+from pants.engine.internals.parametrize import Parametrize
from pants.engine.target import (
Field,
ImmutableValue,
@@ -26,6 +27,7 @@
)
from pants.engine.unions import UnionMembership
from pants.util.frozendict import FrozenDict
+from pants.util.meta import frozen_after_init
SetDefaultsValueT = Mapping[str, Any]
SetDefaultsKeyT = Union[str, Tuple[str, ...]]
@@ -36,6 +38,36 @@
"""Map target types to default field values."""
+@frozen_after_init
+@dataclass(unsafe_hash=True)
+class ParametrizeDefault(Parametrize):
+ """A frozen version of `Parametrize` for defaults.
+
+ This is needed since all defaults must be hashable, which the `Parametrize` class is not nor can
+ it be as it may get unhashable data as input and is unaware of the field type it is being
+ applied to.
+ """
+
+ args: tuple[str, ...]
+ kwargs: FrozenDict[str, ImmutableValue] # type: ignore[assignment]
+
+ def __init__(self, *args: str, **kwargs: ImmutableValue) -> None:
+ self.args = args
+ self.kwargs = FrozenDict(kwargs)
+
+ @classmethod
+ def create(
+ cls, freeze: Callable[[Any], ImmutableValue], parametrize: Parametrize
+ ) -> ParametrizeDefault:
+ return cls(
+ *map(freeze, parametrize.args),
+ **{kw: freeze(arg) for kw, arg in parametrize.kwargs.items()},
+ )
+
+ def __repr__(self) -> str:
+ return super().__repr__()
+
+
@dataclass
class BuildFileDefaultsParserState:
address: Address
@@ -58,15 +90,25 @@
union_membership=union_membership,
)
+ def _freeze_field_value(self, field_type: type[Field], value: Any) -> ImmutableValue:
+ if isinstance(value, ParametrizeDefault):
+ return value
+ elif isinstance(value, Parametrize):
+
+ def freeze(v: Any) -> ImmutableValue:
+ return self._freeze_field_value(field_type, v)
+
+ return ParametrizeDefault.create(freeze, value)
+ else:
+ return field_type.compute_value(raw_value=value, address=self.address)
+
def get_frozen_defaults(self) -> BuildFileDefaults:
types = self.registered_target_types.aliases_to_types
return BuildFileDefaults(
{
target_alias: FrozenDict(
{
- field_type.alias: field_type.compute_value(
- raw_value=default, address=self.address
- )
+ field_type.alias: self._freeze_field_value(field_type, default)
for field_alias, default in fields.items()
for field_type in self._target_type_field_types(types[target_alias])
if field_alias in (field_type.alias, field_type.deprecated_alias)
@@ -162,8 +204,5 @@
f"Valid fields are: {', '.join(sorted(valid_field_aliases))}.",
)
- # TODO: support parametrization ? --needs special care due to Parametrize object not
- # being hashable, and thus not acceptable in a FrozenDict instance.
-
# Merge all provided defaults for this call.
defaults.setdefault(target_type.alias, {}).update(raw_values)
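The heart of the patch is the hashable `ParametrizeDefault`. The sketch below shows how a `parametrize(...)` default would be frozen by it; the resolve names are made up and the identity `freeze` stands in for `Field.compute_value`, so this is illustrative rather than a real Pants session:

```python
# Sketch only — exercises the ParametrizeDefault added by this patch, outside a real Pants run.
from pants.engine.internals.defaults import ParametrizeDefault
from pants.engine.internals.parametrize import Parametrize


def freeze(value):
    # Stand-in for field_type.compute_value(raw_value=value, address=...).
    return value


raw = Parametrize("py39-lock", "py311-lock")     # hypothetical resolve names
frozen = ParametrizeDefault.create(freeze, raw)  # frozen args/kwargs, safe inside a FrozenDict
assert frozen.args == ("py39-lock", "py311-lock")
```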
|
{"golden_diff": "diff --git a/src/python/pants/engine/internals/defaults.py b/src/python/pants/engine/internals/defaults.py\n--- a/src/python/pants/engine/internals/defaults.py\n+++ b/src/python/pants/engine/internals/defaults.py\n@@ -13,9 +13,10 @@\n from __future__ import annotations\n \n from dataclasses import dataclass\n-from typing import Any, Iterable, Mapping, Tuple, Union\n+from typing import Any, Callable, Iterable, Mapping, Tuple, Union\n \n from pants.engine.addresses import Address\n+from pants.engine.internals.parametrize import Parametrize\n from pants.engine.target import (\n Field,\n ImmutableValue,\n@@ -26,6 +27,7 @@\n )\n from pants.engine.unions import UnionMembership\n from pants.util.frozendict import FrozenDict\n+from pants.util.meta import frozen_after_init\n \n SetDefaultsValueT = Mapping[str, Any]\n SetDefaultsKeyT = Union[str, Tuple[str, ...]]\n@@ -36,6 +38,36 @@\n \"\"\"Map target types to default field values.\"\"\"\n \n \n+@frozen_after_init\n+@dataclass(unsafe_hash=True)\n+class ParametrizeDefault(Parametrize):\n+ \"\"\"A frozen version of `Parametrize` for defaults.\n+\n+ This is needed since all defaults must be hashable, which the `Parametrize` class is not nor can\n+ it be as it may get unhashable data as input and is unaware of the field type it is being\n+ applied to.\n+ \"\"\"\n+\n+ args: tuple[str, ...]\n+ kwargs: FrozenDict[str, ImmutableValue] # type: ignore[assignment]\n+\n+ def __init__(self, *args: str, **kwargs: ImmutableValue) -> None:\n+ self.args = args\n+ self.kwargs = FrozenDict(kwargs)\n+\n+ @classmethod\n+ def create(\n+ cls, freeze: Callable[[Any], ImmutableValue], parametrize: Parametrize\n+ ) -> ParametrizeDefault:\n+ return cls(\n+ *map(freeze, parametrize.args),\n+ **{kw: freeze(arg) for kw, arg in parametrize.kwargs.items()},\n+ )\n+\n+ def __repr__(self) -> str:\n+ return super().__repr__()\n+\n+\n @dataclass\n class BuildFileDefaultsParserState:\n address: Address\n@@ -58,15 +90,25 @@\n union_membership=union_membership,\n )\n \n+ def _freeze_field_value(self, field_type: type[Field], value: Any) -> ImmutableValue:\n+ if isinstance(value, ParametrizeDefault):\n+ return value\n+ elif isinstance(value, Parametrize):\n+\n+ def freeze(v: Any) -> ImmutableValue:\n+ return self._freeze_field_value(field_type, v)\n+\n+ return ParametrizeDefault.create(freeze, value)\n+ else:\n+ return field_type.compute_value(raw_value=value, address=self.address)\n+\n def get_frozen_defaults(self) -> BuildFileDefaults:\n types = self.registered_target_types.aliases_to_types\n return BuildFileDefaults(\n {\n target_alias: FrozenDict(\n {\n- field_type.alias: field_type.compute_value(\n- raw_value=default, address=self.address\n- )\n+ field_type.alias: self._freeze_field_value(field_type, default)\n for field_alias, default in fields.items()\n for field_type in self._target_type_field_types(types[target_alias])\n if field_alias in (field_type.alias, field_type.deprecated_alias)\n@@ -162,8 +204,5 @@\n f\"Valid fields are: {', '.join(sorted(valid_field_aliases))}.\",\n )\n \n- # TODO: support parametrization ? --needs special care due to Parametrize object not\n- # being hashable, and thus not acceptable in a FrozenDict instance.\n-\n # Merge all provided defaults for this call.\n defaults.setdefault(target_type.alias, {}).update(raw_values)\n", "issue": "Support `parametrize(...)` assignments in the `__defaults__` macro\n**Is your feature request related to a problem? 
Please describe.**\r\n\r\nThe `__defaults__` BUILD file construct helps eliminate the need for boilerplate target field assignments across an entire\r\ndirectory tree. However, it does not support assigning the result of the `parametrize` macro to a field.\r\n\r\n```\r\n# BUILD\r\n\r\n__defaults__({\r\n python_sources: dict(resolve=parametrize(my_resolves())), # InvalidFieldTypeException: expects string, not Parametrize\r\n})\r\n```\r\n\r\n**Describe the solution you'd like**\r\n\r\nSupport `parametrize` in this context in the natural way, i.e. interpret applicable targets as having the field assigned to\r\n`parametrize(...)`.\r\n\r\n**Describe alternatives you've considered**\r\n\r\nModifying `tailor` to populate this field on a per-directory basis might work.\r\n\r\n**Additional context**\r\n\r\nThis is very useful when a single Python subproject is intended to run under several lockfiles.\n", "before_files": [{"content": "# Copyright 2022 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\"\"\"The `BuildFileDefaultsParserState.set_defaults` is used by the pants.engine.internals.Parser,\nexposed as the `__defaults__` BUILD file symbol.\n\nWhen parsing a BUILD (from the rule `pants.engine.internals.build_files.parse_address_family`) the\ndefaults from the closest parent BUILD file is passed as input to the parser, and the new defaults\nresulting after the BUILD file have been parsed is returned in the `AddressFamily`.\n\nThese defaults are then applied when creating the `TargetAdaptor` targets by the `Registrar` in the\nparser.\n\"\"\"\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import Any, Iterable, Mapping, Tuple, Union\n\nfrom pants.engine.addresses import Address\nfrom pants.engine.target import (\n Field,\n ImmutableValue,\n InvalidFieldException,\n RegisteredTargetTypes,\n Target,\n TargetGenerator,\n)\nfrom pants.engine.unions import UnionMembership\nfrom pants.util.frozendict import FrozenDict\n\nSetDefaultsValueT = Mapping[str, Any]\nSetDefaultsKeyT = Union[str, Tuple[str, ...]]\nSetDefaultsT = Mapping[SetDefaultsKeyT, SetDefaultsValueT]\n\n\nclass BuildFileDefaults(FrozenDict[str, FrozenDict[str, ImmutableValue]]):\n \"\"\"Map target types to default field values.\"\"\"\n\n\n@dataclass\nclass BuildFileDefaultsParserState:\n address: Address\n defaults: dict[str, Mapping[str, Any]]\n registered_target_types: RegisteredTargetTypes\n union_membership: UnionMembership\n\n @classmethod\n def create(\n cls,\n path: str,\n defaults: BuildFileDefaults,\n registered_target_types: RegisteredTargetTypes,\n union_membership: UnionMembership,\n ) -> BuildFileDefaultsParserState:\n return cls(\n address=Address(path, generated_name=\"__defaults__\"),\n defaults=dict(defaults),\n registered_target_types=registered_target_types,\n union_membership=union_membership,\n )\n\n def get_frozen_defaults(self) -> BuildFileDefaults:\n types = self.registered_target_types.aliases_to_types\n return BuildFileDefaults(\n {\n target_alias: FrozenDict(\n {\n field_type.alias: field_type.compute_value(\n raw_value=default, address=self.address\n )\n for field_alias, default in fields.items()\n for field_type in self._target_type_field_types(types[target_alias])\n if field_alias in (field_type.alias, field_type.deprecated_alias)\n }\n )\n for target_alias, fields in self.defaults.items()\n }\n )\n\n def get(self, target_alias: str) -> Mapping[str, Any]:\n # Used by 
`pants.engine.internals.parser.Parser._generate_symbols.Registrar.__call__`\n return self.defaults.get(target_alias, {})\n\n def set_defaults(\n self,\n *args: SetDefaultsT,\n all: SetDefaultsValueT | None = None,\n extend: bool = False,\n **kwargs,\n ) -> None:\n defaults: dict[str, dict[str, Any]] = (\n {} if not extend else {k: dict(v) for k, v in self.defaults.items()}\n )\n\n if all is not None:\n self._process_defaults(\n defaults,\n {tuple(self.registered_target_types.aliases): all},\n ignore_unknown_fields=True,\n )\n\n for arg in args:\n self._process_defaults(defaults, arg)\n\n # Update with new defaults, dropping targets without any default values.\n for tgt, default in defaults.items():\n if not default:\n self.defaults.pop(tgt, None)\n else:\n self.defaults[tgt] = default\n\n def _target_type_field_types(self, target_type: type[Target]) -> tuple[type[Field], ...]:\n return (\n *target_type.class_field_types(self.union_membership),\n *(target_type.moved_fields if issubclass(target_type, TargetGenerator) else ()),\n )\n\n def _process_defaults(\n self,\n defaults: dict[str, dict[str, Any]],\n targets_defaults: SetDefaultsT,\n ignore_unknown_fields: bool = False,\n ):\n if not isinstance(targets_defaults, dict):\n raise ValueError(\n f\"Expected dictionary mapping targets to default field values for {self.address} \"\n f\"but got: {type(targets_defaults).__name__}.\"\n )\n\n types = self.registered_target_types.aliases_to_types\n for target, default in targets_defaults.items():\n if not isinstance(default, dict):\n raise ValueError(\n f\"Invalid default field values in {self.address} for target type {target}, \"\n f\"must be an `dict` but was {default!r} with type `{type(default).__name__}`.\"\n )\n\n targets: Iterable[str]\n targets = target if isinstance(target, tuple) else (target,)\n for target_alias in map(str, targets):\n if target_alias in types:\n target_type = types[target_alias]\n else:\n raise ValueError(f\"Unrecognized target type {target_alias} in {self.address}.\")\n\n # Copy default dict if we may mutate it.\n raw_values = dict(default) if ignore_unknown_fields else default\n\n # Validate that field exists on target\n valid_field_aliases = set(\n target_type._get_field_aliases_to_field_types(\n self._target_type_field_types(target_type)\n ).keys()\n )\n\n for field_alias in default.keys():\n if field_alias not in valid_field_aliases:\n if ignore_unknown_fields:\n del raw_values[field_alias]\n else:\n raise InvalidFieldException(\n f\"Unrecognized field `{field_alias}` for target {target_type.alias}. \"\n f\"Valid fields are: {', '.join(sorted(valid_field_aliases))}.\",\n )\n\n # TODO: support parametrization ? 
--needs special care due to Parametrize object not\n # being hashable, and thus not acceptable in a FrozenDict instance.\n\n # Merge all provided defaults for this call.\n defaults.setdefault(target_type.alias, {}).update(raw_values)\n", "path": "src/python/pants/engine/internals/defaults.py"}], "after_files": [{"content": "# Copyright 2022 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\"\"\"The `BuildFileDefaultsParserState.set_defaults` is used by the pants.engine.internals.Parser,\nexposed as the `__defaults__` BUILD file symbol.\n\nWhen parsing a BUILD (from the rule `pants.engine.internals.build_files.parse_address_family`) the\ndefaults from the closest parent BUILD file is passed as input to the parser, and the new defaults\nresulting after the BUILD file have been parsed is returned in the `AddressFamily`.\n\nThese defaults are then applied when creating the `TargetAdaptor` targets by the `Registrar` in the\nparser.\n\"\"\"\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import Any, Callable, Iterable, Mapping, Tuple, Union\n\nfrom pants.engine.addresses import Address\nfrom pants.engine.internals.parametrize import Parametrize\nfrom pants.engine.target import (\n Field,\n ImmutableValue,\n InvalidFieldException,\n RegisteredTargetTypes,\n Target,\n TargetGenerator,\n)\nfrom pants.engine.unions import UnionMembership\nfrom pants.util.frozendict import FrozenDict\nfrom pants.util.meta import frozen_after_init\n\nSetDefaultsValueT = Mapping[str, Any]\nSetDefaultsKeyT = Union[str, Tuple[str, ...]]\nSetDefaultsT = Mapping[SetDefaultsKeyT, SetDefaultsValueT]\n\n\nclass BuildFileDefaults(FrozenDict[str, FrozenDict[str, ImmutableValue]]):\n \"\"\"Map target types to default field values.\"\"\"\n\n\n@frozen_after_init\n@dataclass(unsafe_hash=True)\nclass ParametrizeDefault(Parametrize):\n \"\"\"A frozen version of `Parametrize` for defaults.\n\n This is needed since all defaults must be hashable, which the `Parametrize` class is not nor can\n it be as it may get unhashable data as input and is unaware of the field type it is being\n applied to.\n \"\"\"\n\n args: tuple[str, ...]\n kwargs: FrozenDict[str, ImmutableValue] # type: ignore[assignment]\n\n def __init__(self, *args: str, **kwargs: ImmutableValue) -> None:\n self.args = args\n self.kwargs = FrozenDict(kwargs)\n\n @classmethod\n def create(\n cls, freeze: Callable[[Any], ImmutableValue], parametrize: Parametrize\n ) -> ParametrizeDefault:\n return cls(\n *map(freeze, parametrize.args),\n **{kw: freeze(arg) for kw, arg in parametrize.kwargs.items()},\n )\n\n def __repr__(self) -> str:\n return super().__repr__()\n\n\n@dataclass\nclass BuildFileDefaultsParserState:\n address: Address\n defaults: dict[str, Mapping[str, Any]]\n registered_target_types: RegisteredTargetTypes\n union_membership: UnionMembership\n\n @classmethod\n def create(\n cls,\n path: str,\n defaults: BuildFileDefaults,\n registered_target_types: RegisteredTargetTypes,\n union_membership: UnionMembership,\n ) -> BuildFileDefaultsParserState:\n return cls(\n address=Address(path, generated_name=\"__defaults__\"),\n defaults=dict(defaults),\n registered_target_types=registered_target_types,\n union_membership=union_membership,\n )\n\n def _freeze_field_value(self, field_type: type[Field], value: Any) -> ImmutableValue:\n if isinstance(value, ParametrizeDefault):\n return value\n elif isinstance(value, Parametrize):\n\n def freeze(v: Any) -> ImmutableValue:\n 
return self._freeze_field_value(field_type, v)\n\n return ParametrizeDefault.create(freeze, value)\n else:\n return field_type.compute_value(raw_value=value, address=self.address)\n\n def get_frozen_defaults(self) -> BuildFileDefaults:\n types = self.registered_target_types.aliases_to_types\n return BuildFileDefaults(\n {\n target_alias: FrozenDict(\n {\n field_type.alias: self._freeze_field_value(field_type, default)\n for field_alias, default in fields.items()\n for field_type in self._target_type_field_types(types[target_alias])\n if field_alias in (field_type.alias, field_type.deprecated_alias)\n }\n )\n for target_alias, fields in self.defaults.items()\n }\n )\n\n def get(self, target_alias: str) -> Mapping[str, Any]:\n # Used by `pants.engine.internals.parser.Parser._generate_symbols.Registrar.__call__`\n return self.defaults.get(target_alias, {})\n\n def set_defaults(\n self,\n *args: SetDefaultsT,\n all: SetDefaultsValueT | None = None,\n extend: bool = False,\n **kwargs,\n ) -> None:\n defaults: dict[str, dict[str, Any]] = (\n {} if not extend else {k: dict(v) for k, v in self.defaults.items()}\n )\n\n if all is not None:\n self._process_defaults(\n defaults,\n {tuple(self.registered_target_types.aliases): all},\n ignore_unknown_fields=True,\n )\n\n for arg in args:\n self._process_defaults(defaults, arg)\n\n # Update with new defaults, dropping targets without any default values.\n for tgt, default in defaults.items():\n if not default:\n self.defaults.pop(tgt, None)\n else:\n self.defaults[tgt] = default\n\n def _target_type_field_types(self, target_type: type[Target]) -> tuple[type[Field], ...]:\n return (\n *target_type.class_field_types(self.union_membership),\n *(target_type.moved_fields if issubclass(target_type, TargetGenerator) else ()),\n )\n\n def _process_defaults(\n self,\n defaults: dict[str, dict[str, Any]],\n targets_defaults: SetDefaultsT,\n ignore_unknown_fields: bool = False,\n ):\n if not isinstance(targets_defaults, dict):\n raise ValueError(\n f\"Expected dictionary mapping targets to default field values for {self.address} \"\n f\"but got: {type(targets_defaults).__name__}.\"\n )\n\n types = self.registered_target_types.aliases_to_types\n for target, default in targets_defaults.items():\n if not isinstance(default, dict):\n raise ValueError(\n f\"Invalid default field values in {self.address} for target type {target}, \"\n f\"must be an `dict` but was {default!r} with type `{type(default).__name__}`.\"\n )\n\n targets: Iterable[str]\n targets = target if isinstance(target, tuple) else (target,)\n for target_alias in map(str, targets):\n if target_alias in types:\n target_type = types[target_alias]\n else:\n raise ValueError(f\"Unrecognized target type {target_alias} in {self.address}.\")\n\n # Copy default dict if we may mutate it.\n raw_values = dict(default) if ignore_unknown_fields else default\n\n # Validate that field exists on target\n valid_field_aliases = set(\n target_type._get_field_aliases_to_field_types(\n self._target_type_field_types(target_type)\n ).keys()\n )\n\n for field_alias in default.keys():\n if field_alias not in valid_field_aliases:\n if ignore_unknown_fields:\n del raw_values[field_alias]\n else:\n raise InvalidFieldException(\n f\"Unrecognized field `{field_alias}` for target {target_type.alias}. 
\"\n f\"Valid fields are: {', '.join(sorted(valid_field_aliases))}.\",\n )\n\n # Merge all provided defaults for this call.\n defaults.setdefault(target_type.alias, {}).update(raw_values)\n", "path": "src/python/pants/engine/internals/defaults.py"}]}
| 2,174 | 860 |
gh_patches_debug_48993
|
rasdani/github-patches
|
git_diff
|
googleapis__google-api-python-client-1030
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
googleapiclient.discovery.build fails with module 'six.moves' has no attribute 'collections_abc' since version 1.12.0
#### Environment details
- OS type and version: 18.04.1-Ubuntu
- Python version: Python 3.6.9
- pip version: `pip --version` pip 9.0.1
- `google-api-python-client` version: `pip show google-api-python-client`: Version: 1.12.0
#### Code example
googleapiclient.discovery.build() fails with message: module 'six.moves' has no attribute 'collections_abc'
We only see this problem with google-api-python-client 1.12.0; version 1.11.0 is fine.
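A minimal reproduction sketch (the API name and key are placeholders; per the report, simply reaching `build()` is enough):
```python
# Reproduction sketch — any discovery-based build() call exercises the failing import path.
from googleapiclient.discovery import build

# With google-api-python-client 1.12.0 and an older six that predates six.moves.collections_abc,
# this raises: AttributeError: module 'six.moves' has no attribute 'collections_abc'
service = build("drive", "v3", developerKey="PLACEHOLDER_KEY")
```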
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Setup script for Google API Python client.
16
17 Also installs included versions of third party libraries, if those libraries
18 are not already installed.
19 """
20 from __future__ import print_function
21
22 import sys
23
24 if sys.version_info < (2, 7):
25 print("google-api-python-client requires python version >= 2.7.", file=sys.stderr)
26 sys.exit(1)
27 if (3, 1) <= sys.version_info < (3, 4):
28 print("google-api-python-client requires python3 version >= 3.4.", file=sys.stderr)
29 sys.exit(1)
30
31 import io
32 import os
33 from setuptools import setup
34
35 packages = ["apiclient", "googleapiclient", "googleapiclient/discovery_cache"]
36
37 install_requires = [
38 # NOTE: Apache Beam tests depend on this library and cannot
39 # currently upgrade their httplib2 version.
40 # Please see https://github.com/googleapis/google-api-python-client/pull/841
41 "httplib2>=0.9.2,<1dev",
42 "google-auth>=1.16.0",
43 "google-auth-httplib2>=0.0.3",
44 "google-api-core>=1.21.0,<2dev",
45 "six>=1.6.1,<2dev",
46 "uritemplate>=3.0.0,<4dev",
47 ]
48
49 package_root = os.path.abspath(os.path.dirname(__file__))
50
51 readme_filename = os.path.join(package_root, "README.md")
52 with io.open(readme_filename, encoding="utf-8") as readme_file:
53 readme = readme_file.read()
54
55 version = "1.12.0"
56
57 setup(
58 name="google-api-python-client",
59 version=version,
60 description="Google API Client Library for Python",
61 long_description=readme,
62 long_description_content_type='text/markdown',
63 author="Google LLC",
64 author_email="[email protected]",
65 url="https://github.com/googleapis/google-api-python-client/",
66 install_requires=install_requires,
67 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
68 packages=packages,
69 package_data={},
70 license="Apache 2.0",
71 keywords="google api client",
72 classifiers=[
73 "Programming Language :: Python :: 2",
74 "Programming Language :: Python :: 2.7",
75 "Programming Language :: Python :: 3",
76 "Programming Language :: Python :: 3.5",
77 "Programming Language :: Python :: 3.6",
78 "Programming Language :: Python :: 3.7",
79 "Development Status :: 5 - Production/Stable",
80 "Intended Audience :: Developers",
81 "License :: OSI Approved :: Apache Software License",
82 "Operating System :: OS Independent",
83 "Topic :: Internet :: WWW/HTTP",
84 ],
85 )
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -42,7 +42,7 @@
"google-auth>=1.16.0",
"google-auth-httplib2>=0.0.3",
"google-api-core>=1.21.0,<2dev",
- "six>=1.6.1,<2dev",
+ "six>=1.13.0,<2dev",
"uritemplate>=3.0.0,<4dev",
]
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -42,7 +42,7 @@\n \"google-auth>=1.16.0\",\n \"google-auth-httplib2>=0.0.3\",\n \"google-api-core>=1.21.0,<2dev\",\n- \"six>=1.6.1,<2dev\",\n+ \"six>=1.13.0,<2dev\",\n \"uritemplate>=3.0.0,<4dev\",\n ]\n", "issue": "googleapiclient.discovery.build fails with module 'six.moves' has no attribute 'collections_abc' since version 1.12.0\n#### Environment details\r\n\r\n - OS type and version: 18.04.1-Ubuntu\r\n - Python version: Python 3.6.9\r\n - pip version: `pip --version` pip 9.0.1\r\n - `google-api-python-client` version: `pip show google-api-python-client`: Version: 1.12.0\r\n\r\n#### Code example\r\ngoogleapiclient.discovery.build() fails with message: module 'six.moves' has no attribute 'collections_abc'\r\n\r\nWe only see this problem with google-api-python-client 1.12.0. 1.11.0 is fine.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (2, 7):\n print(\"google-api-python-client requires python version >= 2.7.\", file=sys.stderr)\n sys.exit(1)\nif (3, 1) <= sys.version_info < (3, 4):\n print(\"google-api-python-client requires python3 version >= 3.4.\", file=sys.stderr)\n sys.exit(1)\n\nimport io\nimport os\nfrom setuptools import setup\n\npackages = [\"apiclient\", \"googleapiclient\", \"googleapiclient/discovery_cache\"]\n\ninstall_requires = [\n # NOTE: Apache Beam tests depend on this library and cannot\n # currently upgrade their httplib2 version.\n # Please see https://github.com/googleapis/google-api-python-client/pull/841\n \"httplib2>=0.9.2,<1dev\",\n \"google-auth>=1.16.0\",\n \"google-auth-httplib2>=0.0.3\",\n \"google-api-core>=1.21.0,<2dev\",\n \"six>=1.6.1,<2dev\",\n \"uritemplate>=3.0.0,<4dev\",\n]\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.md\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = \"1.12.0\"\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=readme,\n long_description_content_type='text/markdown',\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n url=\"https://github.com/googleapis/google-api-python-client/\",\n install_requires=install_requires,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n packages=packages,\n package_data={},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming 
Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (2, 7):\n print(\"google-api-python-client requires python version >= 2.7.\", file=sys.stderr)\n sys.exit(1)\nif (3, 1) <= sys.version_info < (3, 4):\n print(\"google-api-python-client requires python3 version >= 3.4.\", file=sys.stderr)\n sys.exit(1)\n\nimport io\nimport os\nfrom setuptools import setup\n\npackages = [\"apiclient\", \"googleapiclient\", \"googleapiclient/discovery_cache\"]\n\ninstall_requires = [\n # NOTE: Apache Beam tests depend on this library and cannot\n # currently upgrade their httplib2 version.\n # Please see https://github.com/googleapis/google-api-python-client/pull/841\n \"httplib2>=0.9.2,<1dev\",\n \"google-auth>=1.16.0\",\n \"google-auth-httplib2>=0.0.3\",\n \"google-api-core>=1.21.0,<2dev\",\n \"six>=1.13.0,<2dev\",\n \"uritemplate>=3.0.0,<4dev\",\n]\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.md\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = \"1.12.0\"\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=readme,\n long_description_content_type='text/markdown',\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n url=\"https://github.com/googleapis/google-api-python-client/\",\n install_requires=install_requires,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n packages=packages,\n package_data={},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}]}
| 1,344 | 120 |
gh_patches_debug_9665
|
rasdani/github-patches
|
git_diff
|
great-expectations__great_expectations-2958
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
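A small illustration of the two styles (variable names are arbitrary); with the `__future__` import in place, plain `/` already gives true division on Python 2:
```python
from __future__ import division

x, y = 3, 4
clean = x / y        # 0.75 — reads like ordinary arithmetic
noisy = 1. * x / y   # 0.75 as well, but the coercion clutters every call site
```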
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `great_expectations/rule_based_profiler/profiler.py`
Content:
```
1 import uuid
2 from typing import Dict, List, Optional, Union
3
4 import great_expectations.exceptions as ge_exceptions
5 from great_expectations import DataContext
6 from great_expectations.core import ExpectationConfiguration, ExpectationSuite
7 from great_expectations.data_context.util import instantiate_class_from_config
8 from great_expectations.rule_based_profiler.domain_builder.domain_builder import (
9 DomainBuilder,
10 )
11 from great_expectations.rule_based_profiler.expectation_configuration_builder.expectation_configuration_builder import (
12 ExpectationConfigurationBuilder,
13 )
14 from great_expectations.rule_based_profiler.parameter_builder.parameter_builder import (
15 ParameterBuilder,
16 )
17 from great_expectations.rule_based_profiler.parameter_builder.parameter_container import (
18 ParameterContainer,
19 build_parameter_container_for_variables,
20 )
21 from great_expectations.rule_based_profiler.rule.rule import Rule
22
23
24 class Profiler:
25 """
26 Profiler object serves to profile, or automatically evaluate a set of rules, upon a given
27 batch / multiple batches of data.
28 """
29
30 def __init__(
31 self,
32 *,
33 profiler_config: Optional[Dict[str, Dict[str, Dict]]] = None,
34 data_context: Optional[DataContext] = None,
35 ):
36 """
37 Create a new Profiler using configured rules.
38 For a rule or an item in a rule configuration, instantiates the following if
39 available: a domain builder, a parameter builder, and a configuration builder.
40 These will be used to define profiler computation patterns.
41
42 Args:
43 variables_configs: Variables from a profiler configuration
44 rules_configs: Rule configuration as a dictionary
45 data_context: DataContext object that defines a full runtime environment (data access, etc.)
46 """
47 self._data_context = data_context
48 self._rules = []
49
50 rules_configs: Dict[str, Dict] = profiler_config.get("rules", {})
51 rule_name: str
52 rule_config: dict
53
54 for rule_name, rule_config in rules_configs.items():
55 domain_builder_config: dict = rule_config.get("domain_builder")
56
57 if domain_builder_config is None:
58 raise ge_exceptions.ProfilerConfigurationError(
59 message=f'Invalid rule "{rule_name}": no domain_builder found.'
60 )
61
62 domain_builder: DomainBuilder = instantiate_class_from_config(
63 config=domain_builder_config,
64 runtime_environment={"data_context": data_context},
65 config_defaults={
66 "module_name": "great_expectations.rule_based_profiler.domain_builder"
67 },
68 )
69
70 parameter_builders: List[ParameterBuilder] = []
71
72 parameter_builder_configs: dict = rule_config.get("parameter_builders")
73
74 if parameter_builder_configs:
75 parameter_builder_config: dict
76 for parameter_builder_config in parameter_builder_configs:
77 parameter_builders.append(
78 instantiate_class_from_config(
79 config=parameter_builder_config,
80 runtime_environment={"data_context": data_context},
81 config_defaults={
82 "module_name": "great_expectations.rule_based_profiler.parameter_builder"
83 },
84 )
85 )
86
87 expectation_configuration_builders: List[
88 ExpectationConfigurationBuilder
89 ] = []
90
91 expectation_configuration_builder_configs: dict = rule_config.get(
92 "expectation_configuration_builders"
93 )
94
95 if expectation_configuration_builder_configs:
96 expectation_configuration_builder_config: dict
97 for (
98 expectation_configuration_builder_config
99 ) in expectation_configuration_builder_configs:
100 expectation_configuration_builders.append(
101 instantiate_class_from_config(
102 config=expectation_configuration_builder_config,
103 runtime_environment={},
104 config_defaults={
105 "class_name": "DefaultExpectationConfigurationBuilder",
106 "module_name": "great_expectations.rule_based_profiler.expectation_configuration_builder",
107 },
108 )
109 )
110
111 variables_configs: Dict[str, Dict] = profiler_config.get("variables", {})
112 variables: Optional[ParameterContainer] = None
113
114 if variables_configs:
115 variables = build_parameter_container_for_variables(
116 variables_configs=variables_configs
117 )
118
119 self._rules.append(
120 Rule(
121 name=rule_name,
122 domain_builder=domain_builder,
123 parameter_builders=parameter_builders,
124 expectation_configuration_builders=expectation_configuration_builders,
125 variables=variables,
126 )
127 )
128
129 def profile(
130 self,
131 *,
132 expectation_suite_name: Optional[str] = None,
133 ) -> ExpectationSuite:
134 """
135 Args:
136 :param expectation_suite_name: A name for returned Expectation suite.
137 :return: Set of rule evaluation results in the form of an ExpectationSuite
138 """
139 if expectation_suite_name is None:
140 expectation_suite_name = (
141 f"tmp_suite_{self.__class__.__name__}_{str(uuid.uuid4())[:8]}"
142 )
143
144 expectation_suite: ExpectationSuite = ExpectationSuite(
145 expectation_suite_name=expectation_suite_name
146 )
147
148 rule: Rule
149 for rule in self._rules:
150 expectation_configurations: List[ExpectationConfiguration] = rule.generate()
151 expectation_configuration: ExpectationConfiguration
152 for expectation_configuration in expectation_configurations:
153 expectation_suite.add_expectation(
154 expectation_configuration=expectation_configuration
155 )
156
157 return expectation_suite
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/great_expectations/rule_based_profiler/profiler.py b/great_expectations/rule_based_profiler/profiler.py
--- a/great_expectations/rule_based_profiler/profiler.py
+++ b/great_expectations/rule_based_profiler/profiler.py
@@ -40,8 +40,7 @@
These will be used to define profiler computation patterns.
Args:
- variables_configs: Variables from a profiler configuration
- rules_configs: Rule configuration as a dictionary
+ profiler_config: Variables and Rules configuration as a dictionary
data_context: DataContext object that defines a full runtime environment (data access, etc.)
"""
self._data_context = data_context
|
{"golden_diff": "diff --git a/great_expectations/rule_based_profiler/profiler.py b/great_expectations/rule_based_profiler/profiler.py\n--- a/great_expectations/rule_based_profiler/profiler.py\n+++ b/great_expectations/rule_based_profiler/profiler.py\n@@ -40,8 +40,7 @@\n These will be used to define profiler computation patterns.\n \n Args:\n- variables_configs: Variables from a profiler configuration\n- rules_configs: Rule configuration as a dictionary\n+ profiler_config: Variables and Rules configuration as a dictionary\n data_context: DataContext object that defines a full runtime environment (data access, etc.)\n \"\"\"\n self._data_context = data_context\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "import uuid\nfrom typing import Dict, List, Optional, Union\n\nimport great_expectations.exceptions as ge_exceptions\nfrom great_expectations import DataContext\nfrom great_expectations.core import ExpectationConfiguration, ExpectationSuite\nfrom great_expectations.data_context.util import instantiate_class_from_config\nfrom great_expectations.rule_based_profiler.domain_builder.domain_builder import (\n DomainBuilder,\n)\nfrom great_expectations.rule_based_profiler.expectation_configuration_builder.expectation_configuration_builder import (\n ExpectationConfigurationBuilder,\n)\nfrom great_expectations.rule_based_profiler.parameter_builder.parameter_builder import (\n ParameterBuilder,\n)\nfrom great_expectations.rule_based_profiler.parameter_builder.parameter_container import (\n ParameterContainer,\n build_parameter_container_for_variables,\n)\nfrom great_expectations.rule_based_profiler.rule.rule import Rule\n\n\nclass Profiler:\n \"\"\"\n Profiler object serves to profile, or automatically evaluate a set of rules, upon a given\n batch / multiple batches of data.\n \"\"\"\n\n def __init__(\n self,\n *,\n profiler_config: Optional[Dict[str, Dict[str, Dict]]] = None,\n data_context: Optional[DataContext] = None,\n ):\n \"\"\"\n Create a new Profiler using configured rules.\n For a rule or an item in a rule configuration, instantiates the following if\n available: a domain builder, a parameter builder, and a configuration builder.\n These will be used to define profiler computation patterns.\n\n Args:\n variables_configs: Variables from a profiler configuration\n rules_configs: Rule configuration as a dictionary\n data_context: DataContext object that defines a full runtime environment (data access, etc.)\n \"\"\"\n self._data_context = data_context\n self._rules = []\n\n rules_configs: Dict[str, Dict] = profiler_config.get(\"rules\", {})\n rule_name: str\n rule_config: dict\n\n for rule_name, rule_config in rules_configs.items():\n domain_builder_config: dict = rule_config.get(\"domain_builder\")\n\n if domain_builder_config is None:\n raise ge_exceptions.ProfilerConfigurationError(\n message=f'Invalid rule \"{rule_name}\": no domain_builder found.'\n )\n\n domain_builder: DomainBuilder = instantiate_class_from_config(\n config=domain_builder_config,\n runtime_environment={\"data_context\": data_context},\n config_defaults={\n \"module_name\": \"great_expectations.rule_based_profiler.domain_builder\"\n },\n )\n\n parameter_builders: List[ParameterBuilder] = []\n\n parameter_builder_configs: dict = rule_config.get(\"parameter_builders\")\n\n if parameter_builder_configs:\n parameter_builder_config: dict\n for parameter_builder_config in parameter_builder_configs:\n 
parameter_builders.append(\n instantiate_class_from_config(\n config=parameter_builder_config,\n runtime_environment={\"data_context\": data_context},\n config_defaults={\n \"module_name\": \"great_expectations.rule_based_profiler.parameter_builder\"\n },\n )\n )\n\n expectation_configuration_builders: List[\n ExpectationConfigurationBuilder\n ] = []\n\n expectation_configuration_builder_configs: dict = rule_config.get(\n \"expectation_configuration_builders\"\n )\n\n if expectation_configuration_builder_configs:\n expectation_configuration_builder_config: dict\n for (\n expectation_configuration_builder_config\n ) in expectation_configuration_builder_configs:\n expectation_configuration_builders.append(\n instantiate_class_from_config(\n config=expectation_configuration_builder_config,\n runtime_environment={},\n config_defaults={\n \"class_name\": \"DefaultExpectationConfigurationBuilder\",\n \"module_name\": \"great_expectations.rule_based_profiler.expectation_configuration_builder\",\n },\n )\n )\n\n variables_configs: Dict[str, Dict] = profiler_config.get(\"variables\", {})\n variables: Optional[ParameterContainer] = None\n\n if variables_configs:\n variables = build_parameter_container_for_variables(\n variables_configs=variables_configs\n )\n\n self._rules.append(\n Rule(\n name=rule_name,\n domain_builder=domain_builder,\n parameter_builders=parameter_builders,\n expectation_configuration_builders=expectation_configuration_builders,\n variables=variables,\n )\n )\n\n def profile(\n self,\n *,\n expectation_suite_name: Optional[str] = None,\n ) -> ExpectationSuite:\n \"\"\"\n Args:\n :param expectation_suite_name: A name for returned Expectation suite.\n :return: Set of rule evaluation results in the form of an ExpectationSuite\n \"\"\"\n if expectation_suite_name is None:\n expectation_suite_name = (\n f\"tmp_suite_{self.__class__.__name__}_{str(uuid.uuid4())[:8]}\"\n )\n\n expectation_suite: ExpectationSuite = ExpectationSuite(\n expectation_suite_name=expectation_suite_name\n )\n\n rule: Rule\n for rule in self._rules:\n expectation_configurations: List[ExpectationConfiguration] = rule.generate()\n expectation_configuration: ExpectationConfiguration\n for expectation_configuration in expectation_configurations:\n expectation_suite.add_expectation(\n expectation_configuration=expectation_configuration\n )\n\n return expectation_suite\n", "path": "great_expectations/rule_based_profiler/profiler.py"}], "after_files": [{"content": "import uuid\nfrom typing import Dict, List, Optional, Union\n\nimport great_expectations.exceptions as ge_exceptions\nfrom great_expectations import DataContext\nfrom great_expectations.core import ExpectationConfiguration, ExpectationSuite\nfrom great_expectations.data_context.util import instantiate_class_from_config\nfrom great_expectations.rule_based_profiler.domain_builder.domain_builder import (\n DomainBuilder,\n)\nfrom great_expectations.rule_based_profiler.expectation_configuration_builder.expectation_configuration_builder import (\n ExpectationConfigurationBuilder,\n)\nfrom great_expectations.rule_based_profiler.parameter_builder.parameter_builder import (\n ParameterBuilder,\n)\nfrom great_expectations.rule_based_profiler.parameter_builder.parameter_container import (\n ParameterContainer,\n build_parameter_container_for_variables,\n)\nfrom great_expectations.rule_based_profiler.rule.rule import Rule\n\n\nclass Profiler:\n \"\"\"\n Profiler object serves to profile, or automatically evaluate a set of rules, upon a given\n batch / multiple 
batches of data.\n \"\"\"\n\n def __init__(\n self,\n *,\n profiler_config: Optional[Dict[str, Dict[str, Dict]]] = None,\n data_context: Optional[DataContext] = None,\n ):\n \"\"\"\n Create a new Profiler using configured rules.\n For a rule or an item in a rule configuration, instantiates the following if\n available: a domain builder, a parameter builder, and a configuration builder.\n These will be used to define profiler computation patterns.\n\n Args:\n profiler_config: Variables and Rules configuration as a dictionary\n data_context: DataContext object that defines a full runtime environment (data access, etc.)\n \"\"\"\n self._data_context = data_context\n self._rules = []\n\n rules_configs: Dict[str, Dict] = profiler_config.get(\"rules\", {})\n rule_name: str\n rule_config: dict\n\n for rule_name, rule_config in rules_configs.items():\n domain_builder_config: dict = rule_config.get(\"domain_builder\")\n\n if domain_builder_config is None:\n raise ge_exceptions.ProfilerConfigurationError(\n message=f'Invalid rule \"{rule_name}\": no domain_builder found.'\n )\n\n domain_builder: DomainBuilder = instantiate_class_from_config(\n config=domain_builder_config,\n runtime_environment={\"data_context\": data_context},\n config_defaults={\n \"module_name\": \"great_expectations.rule_based_profiler.domain_builder\"\n },\n )\n\n parameter_builders: List[ParameterBuilder] = []\n\n parameter_builder_configs: dict = rule_config.get(\"parameter_builders\")\n\n if parameter_builder_configs:\n parameter_builder_config: dict\n for parameter_builder_config in parameter_builder_configs:\n parameter_builders.append(\n instantiate_class_from_config(\n config=parameter_builder_config,\n runtime_environment={\"data_context\": data_context},\n config_defaults={\n \"module_name\": \"great_expectations.rule_based_profiler.parameter_builder\"\n },\n )\n )\n\n expectation_configuration_builders: List[\n ExpectationConfigurationBuilder\n ] = []\n\n expectation_configuration_builder_configs: dict = rule_config.get(\n \"expectation_configuration_builders\"\n )\n\n if expectation_configuration_builder_configs:\n expectation_configuration_builder_config: dict\n for (\n expectation_configuration_builder_config\n ) in expectation_configuration_builder_configs:\n expectation_configuration_builders.append(\n instantiate_class_from_config(\n config=expectation_configuration_builder_config,\n runtime_environment={},\n config_defaults={\n \"class_name\": \"DefaultExpectationConfigurationBuilder\",\n \"module_name\": \"great_expectations.rule_based_profiler.expectation_configuration_builder\",\n },\n )\n )\n\n variables_configs: Dict[str, Dict] = profiler_config.get(\"variables\", {})\n variables: Optional[ParameterContainer] = None\n\n if variables_configs:\n variables = build_parameter_container_for_variables(\n variables_configs=variables_configs\n )\n\n self._rules.append(\n Rule(\n name=rule_name,\n domain_builder=domain_builder,\n parameter_builders=parameter_builders,\n expectation_configuration_builders=expectation_configuration_builders,\n variables=variables,\n )\n )\n\n def profile(\n self,\n *,\n expectation_suite_name: Optional[str] = None,\n ) -> ExpectationSuite:\n \"\"\"\n Args:\n :param expectation_suite_name: A name for returned Expectation suite.\n :return: Set of rule evaluation results in the form of an ExpectationSuite\n \"\"\"\n if expectation_suite_name is None:\n expectation_suite_name = (\n f\"tmp_suite_{self.__class__.__name__}_{str(uuid.uuid4())[:8]}\"\n )\n\n expectation_suite: ExpectationSuite 
= ExpectationSuite(\n expectation_suite_name=expectation_suite_name\n )\n\n rule: Rule\n for rule in self._rules:\n expectation_configurations: List[ExpectationConfiguration] = rule.generate()\n expectation_configuration: ExpectationConfiguration\n for expectation_configuration in expectation_configurations:\n expectation_suite.add_expectation(\n expectation_configuration=expectation_configuration\n )\n\n return expectation_suite\n", "path": "great_expectations/rule_based_profiler/profiler.py"}]}
| 1,720 | 151 |
gh_patches_debug_23216
|
rasdani/github-patches
|
git_diff
|
apache__tvm-3462
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug][AutoTVM] Bug introduced by #3413
After #3413, elements in `in_heap` are tuples, no longer values. However, the subsequent code still pops value from `in_heap`, which causes key not found error.
https://github.com/dmlc/tvm/pull/3413/files#diff-1b305247d46fd804148ffbe2ceb5eb04R86.
Similarly, in the following code, `x` is also a tuple, cannot directly compare with 0. I'm not sure whether we should use `x[0]` or` x[1]` in this case.
https://github.com/dmlc/tvm/pull/3413/files#diff-1b305247d46fd804148ffbe2ceb5eb04R131
@merrymercy Could you take a look at this issue?
--- END ISSUE ---
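The mismatch described above is easy to reproduce in isolation. The sketch below uses made-up scores and indices (0.9, 42, num=3 — none of them come from an actual AutoTVM run) and only mirrors the trimmed `heapq` bookkeeping from `find_maximums`: heap items are `(score, point)` tuples, so the membership set has to track the point element `x[1]` rather than the whole tuple.
```python
import heapq

num = 3
heap_items = [(float('-inf'), -1 - i) for i in range(num)]   # (score, point) pairs
heapq.heapify(heap_items)

# Current bookkeeping: the set holds whole tuples, not points.
in_heap = set()
in_heap.update([x for x in heap_items])
pop = heapq.heapreplace(heap_items, (0.9, 42))   # made-up score/point
print(pop[1] in in_heap)   # False -> the later in_heap.remove(pop[1]) raises KeyError

# Bookkeeping on the point element only:
heap_items = [(float('-inf'), -1 - i) for i in range(num)]
heapq.heapify(heap_items)
in_heap = {x[1] for x in heap_items}
pop = heapq.heapreplace(heap_items, (0.9, 42))
in_heap.remove(pop[1])     # fine: pop[1] is the point that just left the heap
in_heap.add(42)

# The same tuple/value mix-up affects the final filter: each x is a
# (score, point) tuple, so the comparison has to be x[0] >= 0, not x >= 0.
```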
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/tvm/autotvm/tuner/sa_model_optimizer.py`
Content:
```
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 # pylint: disable=consider-using-enumerate, invalid-name
18 """
19 Cost model optimizer based on simulated annealing
20 """
21
22 import heapq
23 import logging
24 import time
25
26 import numpy as np
27
28 from ..util import sample_ints
29 from .model_based_tuner import ModelOptimizer, knob2point, point2knob
30
31 logger = logging.getLogger('autotvm')
32
33 class SimulatedAnnealingOptimizer(ModelOptimizer):
34 """parallel simulated annealing optimization algorithm
35
36 Parameters
37 ----------
38 task: Task
39 The tuning task
40 n_iter: int
41 The number of iterations of simulated annealing
42 temp: float or Array of float
43 If is a single float, then use a constant temperature.
44 If is an Array, then perform linear cooling from temp[0] to temp[1]
45 early_stop: int, optional
46 Stop iteration if the optimal set do not change in `early_stop` rounds
47 log_interval: int, optional
48 Print log every `log_interval` iterations
49 """
50 def __init__(self, task, n_iter=500, temp=(1, 0), persistent=True, parallel_size=128,
51 early_stop=50, log_interval=50):
52 super(SimulatedAnnealingOptimizer, self).__init__()
53
54 self.task = task
55 self.dims = [len(x) for x in self.task.config_space.space_map.values()]
56
57 self.n_iter = n_iter
58 self.temp = temp
59 self.persistent = persistent
60 self.parallel_size = min(parallel_size, len(self.task.config_space))
61 self.early_stop = early_stop or 1e9
62 self.log_interval = log_interval
63 self.points = None
64
65 def find_maximums(self, model, num, exclusive):
66 tic = time.time()
67 temp, n_iter, early_stop, log_interval = \
68 self.temp, self.n_iter, self.early_stop, self.log_interval
69
70 if self.persistent and self.points is not None:
71 points = self.points
72 else:
73 points = np.array(sample_ints(0, len(self.task.config_space), self.parallel_size))
74
75 scores = model.predict(points)
76
77 # build heap and insert initial points
78 heap_items = [(float('-inf'), - 1 - i) for i in range(num)]
79 heapq.heapify(heap_items)
80 in_heap = set(exclusive)
81 in_heap.update([x for x in heap_items])
82
83 for s, p in zip(scores, points):
84 if s > heap_items[0][0] and p not in in_heap:
85 pop = heapq.heapreplace(heap_items, (s, p))
86 in_heap.remove(pop[1])
87 in_heap.add(p)
88
89 k = 0
90 k_last_modify = 0
91
92 if isinstance(temp, (tuple, list, np.ndarray)):
93 t = temp[0]
94 cool = 1.0 * (temp[0] - temp[1]) / (n_iter + 1)
95 else:
96 t = temp
97 cool = 0
98
99 while k < n_iter and k < k_last_modify + early_stop:
100 new_points = np.empty_like(points)
101 for i, p in enumerate(points):
102 new_points[i] = random_walk(p, self.dims)
103
104 new_scores = model.predict(new_points)
105
106 ac_prob = np.exp(np.minimum((new_scores - scores) / (t + 1e-5), 1))
107 ac_index = np.random.random(len(ac_prob)) < ac_prob
108
109 points[ac_index] = new_points[ac_index]
110 scores[ac_index] = new_scores[ac_index]
111
112 for s, p in zip(new_scores, new_points):
113 if s > heap_items[0][0] and p not in in_heap:
114 pop = heapq.heapreplace(heap_items, (s, p))
115 in_heap.remove(pop[1])
116 in_heap.add(p)
117 k_last_modify = k
118
119 k += 1
120 t -= cool
121
122 if log_interval and k % log_interval == 0:
123 t_str = "%.2f" % t
124 logger.debug("SA iter: %d\tlast_update: %d\tmax-0: %.2f\tmax-1: %.2f\ttemp: %s\t"
125 "elapsed: %.2f",
126 k, k_last_modify, heap_items[0][0],
127 np.max([v for v, _ in heap_items]), t_str,
128 time.time() - tic)
129
130 heap_items.sort(key=lambda item: -item[0])
131 heap_items = [x for x in heap_items if x >= 0]
132 logger.debug("SA iter: %d\tlast_update: %d\tmax-0: %.2f\tmax-1: %.2f\telapsed: %.2f",
133 k, k_last_modify, heap_items[-1][0], heap_items[0][0], time.time() - tic)
134 logger.debug("SA Maximums: %s", heap_items)
135
136 if self.persistent:
137 self.points = points
138
139 return [x[1] for x in heap_items]
140
141 def random_walk(p, dims):
142 """random walk as local transition
143
144 Parameters
145 ----------
146 p: int
147 index of the ConfigEntity
148 dims: Array of int
149 sizes of each dimension
150
151 Returns
152 -------
153 new_p: int
154 new neighborhood index
155 """
156 # transform to knob form
157 old = point2knob(p, dims)
158 new = list(old)
159
160 # mutate
161 while new == old:
162 from_i = np.random.randint(len(old))
163 to_v = np.random.randint(dims[from_i])
164 new[from_i] = to_v
165
166 # transform to index form
167 return knob2point(new, dims)
168
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/tvm/autotvm/tuner/sa_model_optimizer.py b/python/tvm/autotvm/tuner/sa_model_optimizer.py
--- a/python/tvm/autotvm/tuner/sa_model_optimizer.py
+++ b/python/tvm/autotvm/tuner/sa_model_optimizer.py
@@ -78,7 +78,7 @@
heap_items = [(float('-inf'), - 1 - i) for i in range(num)]
heapq.heapify(heap_items)
in_heap = set(exclusive)
- in_heap.update([x for x in heap_items])
+ in_heap.update([x[1] for x in heap_items])
for s, p in zip(scores, points):
if s > heap_items[0][0] and p not in in_heap:
@@ -128,9 +128,9 @@
time.time() - tic)
heap_items.sort(key=lambda item: -item[0])
- heap_items = [x for x in heap_items if x >= 0]
- logger.debug("SA iter: %d\tlast_update: %d\tmax-0: %.2f\tmax-1: %.2f\telapsed: %.2f",
- k, k_last_modify, heap_items[-1][0], heap_items[0][0], time.time() - tic)
+ heap_items = [x for x in heap_items if x[0] >= 0]
+ logger.debug("SA iter: %d\tlast_update: %d\telapsed: %.2f",
+ k, k_last_modify, time.time() - tic)
logger.debug("SA Maximums: %s", heap_items)
if self.persistent:
|
{"golden_diff": "diff --git a/python/tvm/autotvm/tuner/sa_model_optimizer.py b/python/tvm/autotvm/tuner/sa_model_optimizer.py\n--- a/python/tvm/autotvm/tuner/sa_model_optimizer.py\n+++ b/python/tvm/autotvm/tuner/sa_model_optimizer.py\n@@ -78,7 +78,7 @@\n heap_items = [(float('-inf'), - 1 - i) for i in range(num)]\n heapq.heapify(heap_items)\n in_heap = set(exclusive)\n- in_heap.update([x for x in heap_items])\n+ in_heap.update([x[1] for x in heap_items])\n \n for s, p in zip(scores, points):\n if s > heap_items[0][0] and p not in in_heap:\n@@ -128,9 +128,9 @@\n time.time() - tic)\n \n heap_items.sort(key=lambda item: -item[0])\n- heap_items = [x for x in heap_items if x >= 0]\n- logger.debug(\"SA iter: %d\\tlast_update: %d\\tmax-0: %.2f\\tmax-1: %.2f\\telapsed: %.2f\",\n- k, k_last_modify, heap_items[-1][0], heap_items[0][0], time.time() - tic)\n+ heap_items = [x for x in heap_items if x[0] >= 0]\n+ logger.debug(\"SA iter: %d\\tlast_update: %d\\telapsed: %.2f\",\n+ k, k_last_modify, time.time() - tic)\n logger.debug(\"SA Maximums: %s\", heap_items)\n \n if self.persistent:\n", "issue": "[Bug][AutoTVM] Bug introduced by #3413\nAfter #3413, elements in `in_heap` are tuples, no longer values. However, the subsequent code still pops value from `in_heap`, which causes key not found error.\r\nhttps://github.com/dmlc/tvm/pull/3413/files#diff-1b305247d46fd804148ffbe2ceb5eb04R86.\r\n\r\nSimilarly, in the following code, `x` is also a tuple, cannot directly compare with 0. I'm not sure whether we should use `x[0]` or` x[1]` in this case.\r\nhttps://github.com/dmlc/tvm/pull/3413/files#diff-1b305247d46fd804148ffbe2ceb5eb04R131\r\n\r\n@merrymercy Could you take a look at this issue?\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=consider-using-enumerate, invalid-name\n\"\"\"\nCost model optimizer based on simulated annealing\n\"\"\"\n\nimport heapq\nimport logging\nimport time\n\nimport numpy as np\n\nfrom ..util import sample_ints\nfrom .model_based_tuner import ModelOptimizer, knob2point, point2knob\n\nlogger = logging.getLogger('autotvm')\n\nclass SimulatedAnnealingOptimizer(ModelOptimizer):\n \"\"\"parallel simulated annealing optimization algorithm\n\n Parameters\n ----------\n task: Task\n The tuning task\n n_iter: int\n The number of iterations of simulated annealing\n temp: float or Array of float\n If is a single float, then use a constant temperature.\n If is an Array, then perform linear cooling from temp[0] to temp[1]\n early_stop: int, optional\n Stop iteration if the optimal set do not change in `early_stop` rounds\n log_interval: int, optional\n Print log every `log_interval` iterations\n \"\"\"\n def __init__(self, task, n_iter=500, temp=(1, 0), persistent=True, parallel_size=128,\n early_stop=50, log_interval=50):\n super(SimulatedAnnealingOptimizer, self).__init__()\n\n self.task = task\n self.dims = [len(x) for x in self.task.config_space.space_map.values()]\n\n self.n_iter = n_iter\n self.temp = temp\n self.persistent = persistent\n self.parallel_size = min(parallel_size, len(self.task.config_space))\n self.early_stop = early_stop or 1e9\n self.log_interval = log_interval\n self.points = None\n\n def find_maximums(self, model, num, exclusive):\n tic = time.time()\n temp, n_iter, early_stop, log_interval = \\\n self.temp, self.n_iter, self.early_stop, self.log_interval\n\n if self.persistent and self.points is not None:\n points = self.points\n else:\n points = np.array(sample_ints(0, len(self.task.config_space), self.parallel_size))\n\n scores = model.predict(points)\n\n # build heap and insert initial points\n heap_items = [(float('-inf'), - 1 - i) for i in range(num)]\n heapq.heapify(heap_items)\n in_heap = set(exclusive)\n in_heap.update([x for x in heap_items])\n\n for s, p in zip(scores, points):\n if s > heap_items[0][0] and p not in in_heap:\n pop = heapq.heapreplace(heap_items, (s, p))\n in_heap.remove(pop[1])\n in_heap.add(p)\n\n k = 0\n k_last_modify = 0\n\n if isinstance(temp, (tuple, list, np.ndarray)):\n t = temp[0]\n cool = 1.0 * (temp[0] - temp[1]) / (n_iter + 1)\n else:\n t = temp\n cool = 0\n\n while k < n_iter and k < k_last_modify + early_stop:\n new_points = np.empty_like(points)\n for i, p in enumerate(points):\n new_points[i] = random_walk(p, self.dims)\n\n new_scores = model.predict(new_points)\n\n ac_prob = np.exp(np.minimum((new_scores - scores) / (t + 1e-5), 1))\n ac_index = np.random.random(len(ac_prob)) < ac_prob\n\n points[ac_index] = new_points[ac_index]\n scores[ac_index] = new_scores[ac_index]\n\n for s, p in zip(new_scores, new_points):\n if s > heap_items[0][0] and p not in in_heap:\n pop = heapq.heapreplace(heap_items, (s, p))\n in_heap.remove(pop[1])\n in_heap.add(p)\n k_last_modify = k\n\n k += 1\n t -= cool\n\n if log_interval and k % log_interval == 0:\n t_str = \"%.2f\" % t\n logger.debug(\"SA iter: %d\\tlast_update: %d\\tmax-0: %.2f\\tmax-1: %.2f\\ttemp: %s\\t\"\n \"elapsed: %.2f\",\n k, k_last_modify, heap_items[0][0],\n np.max([v for v, _ in heap_items]), t_str,\n time.time() - tic)\n\n heap_items.sort(key=lambda item: -item[0])\n heap_items = [x for x in heap_items if x >= 0]\n logger.debug(\"SA iter: %d\\tlast_update: 
%d\\tmax-0: %.2f\\tmax-1: %.2f\\telapsed: %.2f\",\n k, k_last_modify, heap_items[-1][0], heap_items[0][0], time.time() - tic)\n logger.debug(\"SA Maximums: %s\", heap_items)\n\n if self.persistent:\n self.points = points\n\n return [x[1] for x in heap_items]\n\ndef random_walk(p, dims):\n \"\"\"random walk as local transition\n\n Parameters\n ----------\n p: int\n index of the ConfigEntity\n dims: Array of int\n sizes of each dimension\n\n Returns\n -------\n new_p: int\n new neighborhood index\n \"\"\"\n # transform to knob form\n old = point2knob(p, dims)\n new = list(old)\n\n # mutate\n while new == old:\n from_i = np.random.randint(len(old))\n to_v = np.random.randint(dims[from_i])\n new[from_i] = to_v\n\n # transform to index form\n return knob2point(new, dims)\n", "path": "python/tvm/autotvm/tuner/sa_model_optimizer.py"}], "after_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=consider-using-enumerate, invalid-name\n\"\"\"\nCost model optimizer based on simulated annealing\n\"\"\"\n\nimport heapq\nimport logging\nimport time\n\nimport numpy as np\n\nfrom ..util import sample_ints\nfrom .model_based_tuner import ModelOptimizer, knob2point, point2knob\n\nlogger = logging.getLogger('autotvm')\n\nclass SimulatedAnnealingOptimizer(ModelOptimizer):\n \"\"\"parallel simulated annealing optimization algorithm\n\n Parameters\n ----------\n task: Task\n The tuning task\n n_iter: int\n The number of iterations of simulated annealing\n temp: float or Array of float\n If is a single float, then use a constant temperature.\n If is an Array, then perform linear cooling from temp[0] to temp[1]\n early_stop: int, optional\n Stop iteration if the optimal set do not change in `early_stop` rounds\n log_interval: int, optional\n Print log every `log_interval` iterations\n \"\"\"\n def __init__(self, task, n_iter=500, temp=(1, 0), persistent=True, parallel_size=128,\n early_stop=50, log_interval=50):\n super(SimulatedAnnealingOptimizer, self).__init__()\n\n self.task = task\n self.dims = [len(x) for x in self.task.config_space.space_map.values()]\n\n self.n_iter = n_iter\n self.temp = temp\n self.persistent = persistent\n self.parallel_size = min(parallel_size, len(self.task.config_space))\n self.early_stop = early_stop or 1e9\n self.log_interval = log_interval\n self.points = None\n\n def find_maximums(self, model, num, exclusive):\n tic = time.time()\n temp, n_iter, early_stop, log_interval = \\\n self.temp, self.n_iter, self.early_stop, self.log_interval\n\n if self.persistent and self.points is not None:\n points = self.points\n else:\n points = np.array(sample_ints(0, len(self.task.config_space), self.parallel_size))\n\n scores = model.predict(points)\n\n # build heap and insert initial points\n heap_items = [(float('-inf'), 
- 1 - i) for i in range(num)]\n heapq.heapify(heap_items)\n in_heap = set(exclusive)\n in_heap.update([x[1] for x in heap_items])\n\n for s, p in zip(scores, points):\n if s > heap_items[0][0] and p not in in_heap:\n pop = heapq.heapreplace(heap_items, (s, p))\n in_heap.remove(pop[1])\n in_heap.add(p)\n\n k = 0\n k_last_modify = 0\n\n if isinstance(temp, (tuple, list, np.ndarray)):\n t = temp[0]\n cool = 1.0 * (temp[0] - temp[1]) / (n_iter + 1)\n else:\n t = temp\n cool = 0\n\n while k < n_iter and k < k_last_modify + early_stop:\n new_points = np.empty_like(points)\n for i, p in enumerate(points):\n new_points[i] = random_walk(p, self.dims)\n\n new_scores = model.predict(new_points)\n\n ac_prob = np.exp(np.minimum((new_scores - scores) / (t + 1e-5), 1))\n ac_index = np.random.random(len(ac_prob)) < ac_prob\n\n points[ac_index] = new_points[ac_index]\n scores[ac_index] = new_scores[ac_index]\n\n for s, p in zip(new_scores, new_points):\n if s > heap_items[0][0] and p not in in_heap:\n pop = heapq.heapreplace(heap_items, (s, p))\n in_heap.remove(pop[1])\n in_heap.add(p)\n k_last_modify = k\n\n k += 1\n t -= cool\n\n if log_interval and k % log_interval == 0:\n t_str = \"%.2f\" % t\n logger.debug(\"SA iter: %d\\tlast_update: %d\\tmax-0: %.2f\\tmax-1: %.2f\\ttemp: %s\\t\"\n \"elapsed: %.2f\",\n k, k_last_modify, heap_items[0][0],\n np.max([v for v, _ in heap_items]), t_str,\n time.time() - tic)\n\n heap_items.sort(key=lambda item: -item[0])\n heap_items = [x for x in heap_items if x[0] >= 0]\n logger.debug(\"SA iter: %d\\tlast_update: %d\\telapsed: %.2f\",\n k, k_last_modify, time.time() - tic)\n logger.debug(\"SA Maximums: %s\", heap_items)\n\n if self.persistent:\n self.points = points\n\n return [x[1] for x in heap_items]\n\ndef random_walk(p, dims):\n \"\"\"random walk as local transition\n\n Parameters\n ----------\n p: int\n index of the ConfigEntity\n dims: Array of int\n sizes of each dimension\n\n Returns\n -------\n new_p: int\n new neighborhood index\n \"\"\"\n # transform to knob form\n old = point2knob(p, dims)\n new = list(old)\n\n # mutate\n while new == old:\n from_i = np.random.randint(len(old))\n to_v = np.random.randint(dims[from_i])\n new[from_i] = to_v\n\n # transform to index form\n return knob2point(new, dims)\n", "path": "python/tvm/autotvm/tuner/sa_model_optimizer.py"}]}
| 2,316 | 373 |
gh_patches_debug_10820
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmcv-172
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
shape check of impad function
In this [line](https://github.com/open-mmlab/mmcv/blob/8829ff0dfd9b9c9484cbce17f5cecd1128b61723/mmcv/image/transforms/geometry.py#L182):
Instead of `for i in range(len(shape) - 1):
assert shape[i] >= img.shape[i]`
Should it be
`for i in range(len(shape)):
assert shape[i] >= img.shape[i]`
to ensure all input shape dimensions equal or less than pad shape?
--- END ISSUE ---
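A small standalone sketch shows what the off-by-one range means in practice. The array shapes are made up for illustration and `impad_buggy` is a trimmed stand-in for the current `impad`, not the real mmcv function: because the loop stops one short, the last dimension is never validated, so a pad shape with too few channels passes the check and only fails later at the copy.
```python
import numpy as np

def impad_buggy(img, shape, pad_val=0):
    # trimmed copy of the current impad() logic; only the shape check and copy are kept
    for i in range(len(shape) - 1):          # last dimension is never checked
        assert shape[i] >= img.shape[i]
    pad = np.empty(shape, dtype=img.dtype)
    pad[...] = pad_val
    pad[:img.shape[0], :img.shape[1], ...] = img
    return pad

img = np.zeros((10, 20, 3), dtype=np.uint8)     # made-up image: 10x20, 3 channels

impad_buggy(img, (16, 32, 3))                   # ok: padded copy as expected

try:
    impad_buggy(img, (16, 32, 1))               # channel dim too small, assert is skipped
except ValueError as err:
    print("fails late, at the copy, not at the shape check:", err)

# With `for i in range(len(shape))` the second call would instead fail fast
# with an AssertionError at the shape check.
```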
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmcv/image/transforms/geometry.py`
Content:
```
1 # Copyright (c) Open-MMLab. All rights reserved.
2 from __future__ import division
3
4 import cv2
5 import numpy as np
6
7
8 def imflip(img, direction='horizontal'):
9 """Flip an image horizontally or vertically.
10
11 Args:
12 img (ndarray): Image to be flipped.
13 direction (str): The flip direction, either "horizontal" or "vertical".
14
15 Returns:
16 ndarray: The flipped image.
17 """
18 assert direction in ['horizontal', 'vertical']
19 if direction == 'horizontal':
20 return np.flip(img, axis=1)
21 else:
22 return np.flip(img, axis=0)
23
24
25 def imrotate(img,
26 angle,
27 center=None,
28 scale=1.0,
29 border_value=0,
30 auto_bound=False):
31 """Rotate an image.
32
33 Args:
34 img (ndarray): Image to be rotated.
35 angle (float): Rotation angle in degrees, positive values mean
36 clockwise rotation.
37 center (tuple): Center of the rotation in the source image, by default
38 it is the center of the image.
39 scale (float): Isotropic scale factor.
40 border_value (int): Border value.
41 auto_bound (bool): Whether to adjust the image size to cover the whole
42 rotated image.
43
44 Returns:
45 ndarray: The rotated image.
46 """
47 if center is not None and auto_bound:
48 raise ValueError('`auto_bound` conflicts with `center`')
49 h, w = img.shape[:2]
50 if center is None:
51 center = ((w - 1) * 0.5, (h - 1) * 0.5)
52 assert isinstance(center, tuple)
53
54 matrix = cv2.getRotationMatrix2D(center, -angle, scale)
55 if auto_bound:
56 cos = np.abs(matrix[0, 0])
57 sin = np.abs(matrix[0, 1])
58 new_w = h * sin + w * cos
59 new_h = h * cos + w * sin
60 matrix[0, 2] += (new_w - w) * 0.5
61 matrix[1, 2] += (new_h - h) * 0.5
62 w = int(np.round(new_w))
63 h = int(np.round(new_h))
64 rotated = cv2.warpAffine(img, matrix, (w, h), borderValue=border_value)
65 return rotated
66
67
68 def bbox_clip(bboxes, img_shape):
69 """Clip bboxes to fit the image shape.
70
71 Args:
72 bboxes (ndarray): Shape (..., 4*k)
73 img_shape (tuple): (height, width) of the image.
74
75 Returns:
76 ndarray: Clipped bboxes.
77 """
78 assert bboxes.shape[-1] % 4 == 0
79 clipped_bboxes = np.empty_like(bboxes, dtype=bboxes.dtype)
80 clipped_bboxes[..., 0::2] = np.maximum(
81 np.minimum(bboxes[..., 0::2], img_shape[1] - 1), 0)
82 clipped_bboxes[..., 1::2] = np.maximum(
83 np.minimum(bboxes[..., 1::2], img_shape[0] - 1), 0)
84 return clipped_bboxes
85
86
87 def bbox_scaling(bboxes, scale, clip_shape=None):
88 """Scaling bboxes w.r.t the box center.
89
90 Args:
91 bboxes (ndarray): Shape(..., 4).
92 scale (float): Scaling factor.
93 clip_shape (tuple, optional): If specified, bboxes that exceed the
94 boundary will be clipped according to the given shape (h, w).
95
96 Returns:
97 ndarray: Scaled bboxes.
98 """
99 if float(scale) == 1.0:
100 scaled_bboxes = bboxes.copy()
101 else:
102 w = bboxes[..., 2] - bboxes[..., 0] + 1
103 h = bboxes[..., 3] - bboxes[..., 1] + 1
104 dw = (w * (scale - 1)) * 0.5
105 dh = (h * (scale - 1)) * 0.5
106 scaled_bboxes = bboxes + np.stack((-dw, -dh, dw, dh), axis=-1)
107 if clip_shape is not None:
108 return bbox_clip(scaled_bboxes, clip_shape)
109 else:
110 return scaled_bboxes
111
112
113 def imcrop(img, bboxes, scale=1.0, pad_fill=None):
114 """Crop image patches.
115
116 3 steps: scale the bboxes -> clip bboxes -> crop and pad.
117
118 Args:
119 img (ndarray): Image to be cropped.
120 bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes.
121 scale (float, optional): Scale ratio of bboxes, the default value
122 1.0 means no padding.
123 pad_fill (number or list): Value to be filled for padding, None for
124 no padding.
125
126 Returns:
127 list or ndarray: The cropped image patches.
128 """
129 chn = 1 if img.ndim == 2 else img.shape[2]
130 if pad_fill is not None:
131 if isinstance(pad_fill, (int, float)):
132 pad_fill = [pad_fill for _ in range(chn)]
133 assert len(pad_fill) == chn
134
135 _bboxes = bboxes[None, ...] if bboxes.ndim == 1 else bboxes
136 scaled_bboxes = bbox_scaling(_bboxes, scale).astype(np.int32)
137 clipped_bbox = bbox_clip(scaled_bboxes, img.shape)
138
139 patches = []
140 for i in range(clipped_bbox.shape[0]):
141 x1, y1, x2, y2 = tuple(clipped_bbox[i, :])
142 if pad_fill is None:
143 patch = img[y1:y2 + 1, x1:x2 + 1, ...]
144 else:
145 _x1, _y1, _x2, _y2 = tuple(scaled_bboxes[i, :])
146 if chn == 1:
147 patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1)
148 else:
149 patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1, chn)
150 patch = np.array(
151 pad_fill, dtype=img.dtype) * np.ones(
152 patch_shape, dtype=img.dtype)
153 x_start = 0 if _x1 >= 0 else -_x1
154 y_start = 0 if _y1 >= 0 else -_y1
155 w = x2 - x1 + 1
156 h = y2 - y1 + 1
157 patch[y_start:y_start + h, x_start:x_start + w,
158 ...] = img[y1:y1 + h, x1:x1 + w, ...]
159 patches.append(patch)
160
161 if bboxes.ndim == 1:
162 return patches[0]
163 else:
164 return patches
165
166
167 def impad(img, shape, pad_val=0):
168 """Pad an image to a certain shape.
169
170 Args:
171 img (ndarray): Image to be padded.
172 shape (tuple): Expected padding shape.
173 pad_val (number or sequence): Values to be filled in padding areas.
174
175 Returns:
176 ndarray: The padded image.
177 """
178 if not isinstance(pad_val, (int, float)):
179 assert len(pad_val) == img.shape[-1]
180 if len(shape) < len(img.shape):
181 shape = shape + (img.shape[-1], )
182 assert len(shape) == len(img.shape)
183 for i in range(len(shape) - 1):
184 assert shape[i] >= img.shape[i]
185 pad = np.empty(shape, dtype=img.dtype)
186 pad[...] = pad_val
187 pad[:img.shape[0], :img.shape[1], ...] = img
188 return pad
189
190
191 def impad_to_multiple(img, divisor, pad_val=0):
192 """Pad an image to ensure each edge to be multiple to some number.
193
194 Args:
195 img (ndarray): Image to be padded.
196 divisor (int): Padded image edges will be multiple to divisor.
197 pad_val (number or sequence): Same as :func:`impad`.
198
199 Returns:
200 ndarray: The padded image.
201 """
202 pad_h = int(np.ceil(img.shape[0] / divisor)) * divisor
203 pad_w = int(np.ceil(img.shape[1] / divisor)) * divisor
204 return impad(img, (pad_h, pad_w), pad_val)
205
```
Path: `mmcv/version.py`
Content:
```
1 # Copyright (c) Open-MMLab. All rights reserved.
2 __version__ = '0.2.15'
3
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmcv/image/transforms/geometry.py b/mmcv/image/transforms/geometry.py
--- a/mmcv/image/transforms/geometry.py
+++ b/mmcv/image/transforms/geometry.py
@@ -180,7 +180,7 @@
if len(shape) < len(img.shape):
shape = shape + (img.shape[-1], )
assert len(shape) == len(img.shape)
- for i in range(len(shape) - 1):
+ for i in range(len(shape)):
assert shape[i] >= img.shape[i]
pad = np.empty(shape, dtype=img.dtype)
pad[...] = pad_val
diff --git a/mmcv/version.py b/mmcv/version.py
--- a/mmcv/version.py
+++ b/mmcv/version.py
@@ -1,2 +1,2 @@
# Copyright (c) Open-MMLab. All rights reserved.
-__version__ = '0.2.15'
+__version__ = '0.2.16'
|
{"golden_diff": "diff --git a/mmcv/image/transforms/geometry.py b/mmcv/image/transforms/geometry.py\n--- a/mmcv/image/transforms/geometry.py\n+++ b/mmcv/image/transforms/geometry.py\n@@ -180,7 +180,7 @@\n if len(shape) < len(img.shape):\n shape = shape + (img.shape[-1], )\n assert len(shape) == len(img.shape)\n- for i in range(len(shape) - 1):\n+ for i in range(len(shape)):\n assert shape[i] >= img.shape[i]\n pad = np.empty(shape, dtype=img.dtype)\n pad[...] = pad_val\ndiff --git a/mmcv/version.py b/mmcv/version.py\n--- a/mmcv/version.py\n+++ b/mmcv/version.py\n@@ -1,2 +1,2 @@\n # Copyright (c) Open-MMLab. All rights reserved.\n-__version__ = '0.2.15'\n+__version__ = '0.2.16'\n", "issue": "shape check of impad function\nIn this [line](https://github.com/open-mmlab/mmcv/blob/8829ff0dfd9b9c9484cbce17f5cecd1128b61723/mmcv/image/transforms/geometry.py#L182):\r\nInstead of `for i in range(len(shape) - 1):\r\n assert shape[i] >= img.shape[i]`\r\nShould it be\r\n`for i in range(len(shape)):\r\n assert shape[i] >= img.shape[i]`\r\nto ensure all input shape dimensions equal or less than pad shape?\n", "before_files": [{"content": "# Copyright (c) Open-MMLab. All rights reserved.\nfrom __future__ import division\n\nimport cv2\nimport numpy as np\n\n\ndef imflip(img, direction='horizontal'):\n \"\"\"Flip an image horizontally or vertically.\n\n Args:\n img (ndarray): Image to be flipped.\n direction (str): The flip direction, either \"horizontal\" or \"vertical\".\n\n Returns:\n ndarray: The flipped image.\n \"\"\"\n assert direction in ['horizontal', 'vertical']\n if direction == 'horizontal':\n return np.flip(img, axis=1)\n else:\n return np.flip(img, axis=0)\n\n\ndef imrotate(img,\n angle,\n center=None,\n scale=1.0,\n border_value=0,\n auto_bound=False):\n \"\"\"Rotate an image.\n\n Args:\n img (ndarray): Image to be rotated.\n angle (float): Rotation angle in degrees, positive values mean\n clockwise rotation.\n center (tuple): Center of the rotation in the source image, by default\n it is the center of the image.\n scale (float): Isotropic scale factor.\n border_value (int): Border value.\n auto_bound (bool): Whether to adjust the image size to cover the whole\n rotated image.\n\n Returns:\n ndarray: The rotated image.\n \"\"\"\n if center is not None and auto_bound:\n raise ValueError('`auto_bound` conflicts with `center`')\n h, w = img.shape[:2]\n if center is None:\n center = ((w - 1) * 0.5, (h - 1) * 0.5)\n assert isinstance(center, tuple)\n\n matrix = cv2.getRotationMatrix2D(center, -angle, scale)\n if auto_bound:\n cos = np.abs(matrix[0, 0])\n sin = np.abs(matrix[0, 1])\n new_w = h * sin + w * cos\n new_h = h * cos + w * sin\n matrix[0, 2] += (new_w - w) * 0.5\n matrix[1, 2] += (new_h - h) * 0.5\n w = int(np.round(new_w))\n h = int(np.round(new_h))\n rotated = cv2.warpAffine(img, matrix, (w, h), borderValue=border_value)\n return rotated\n\n\ndef bbox_clip(bboxes, img_shape):\n \"\"\"Clip bboxes to fit the image shape.\n\n Args:\n bboxes (ndarray): Shape (..., 4*k)\n img_shape (tuple): (height, width) of the image.\n\n Returns:\n ndarray: Clipped bboxes.\n \"\"\"\n assert bboxes.shape[-1] % 4 == 0\n clipped_bboxes = np.empty_like(bboxes, dtype=bboxes.dtype)\n clipped_bboxes[..., 0::2] = np.maximum(\n np.minimum(bboxes[..., 0::2], img_shape[1] - 1), 0)\n clipped_bboxes[..., 1::2] = np.maximum(\n np.minimum(bboxes[..., 1::2], img_shape[0] - 1), 0)\n return clipped_bboxes\n\n\ndef bbox_scaling(bboxes, scale, clip_shape=None):\n \"\"\"Scaling bboxes w.r.t the box center.\n\n Args:\n bboxes 
(ndarray): Shape(..., 4).\n scale (float): Scaling factor.\n clip_shape (tuple, optional): If specified, bboxes that exceed the\n boundary will be clipped according to the given shape (h, w).\n\n Returns:\n ndarray: Scaled bboxes.\n \"\"\"\n if float(scale) == 1.0:\n scaled_bboxes = bboxes.copy()\n else:\n w = bboxes[..., 2] - bboxes[..., 0] + 1\n h = bboxes[..., 3] - bboxes[..., 1] + 1\n dw = (w * (scale - 1)) * 0.5\n dh = (h * (scale - 1)) * 0.5\n scaled_bboxes = bboxes + np.stack((-dw, -dh, dw, dh), axis=-1)\n if clip_shape is not None:\n return bbox_clip(scaled_bboxes, clip_shape)\n else:\n return scaled_bboxes\n\n\ndef imcrop(img, bboxes, scale=1.0, pad_fill=None):\n \"\"\"Crop image patches.\n\n 3 steps: scale the bboxes -> clip bboxes -> crop and pad.\n\n Args:\n img (ndarray): Image to be cropped.\n bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes.\n scale (float, optional): Scale ratio of bboxes, the default value\n 1.0 means no padding.\n pad_fill (number or list): Value to be filled for padding, None for\n no padding.\n\n Returns:\n list or ndarray: The cropped image patches.\n \"\"\"\n chn = 1 if img.ndim == 2 else img.shape[2]\n if pad_fill is not None:\n if isinstance(pad_fill, (int, float)):\n pad_fill = [pad_fill for _ in range(chn)]\n assert len(pad_fill) == chn\n\n _bboxes = bboxes[None, ...] if bboxes.ndim == 1 else bboxes\n scaled_bboxes = bbox_scaling(_bboxes, scale).astype(np.int32)\n clipped_bbox = bbox_clip(scaled_bboxes, img.shape)\n\n patches = []\n for i in range(clipped_bbox.shape[0]):\n x1, y1, x2, y2 = tuple(clipped_bbox[i, :])\n if pad_fill is None:\n patch = img[y1:y2 + 1, x1:x2 + 1, ...]\n else:\n _x1, _y1, _x2, _y2 = tuple(scaled_bboxes[i, :])\n if chn == 1:\n patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1)\n else:\n patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1, chn)\n patch = np.array(\n pad_fill, dtype=img.dtype) * np.ones(\n patch_shape, dtype=img.dtype)\n x_start = 0 if _x1 >= 0 else -_x1\n y_start = 0 if _y1 >= 0 else -_y1\n w = x2 - x1 + 1\n h = y2 - y1 + 1\n patch[y_start:y_start + h, x_start:x_start + w,\n ...] = img[y1:y1 + h, x1:x1 + w, ...]\n patches.append(patch)\n\n if bboxes.ndim == 1:\n return patches[0]\n else:\n return patches\n\n\ndef impad(img, shape, pad_val=0):\n \"\"\"Pad an image to a certain shape.\n\n Args:\n img (ndarray): Image to be padded.\n shape (tuple): Expected padding shape.\n pad_val (number or sequence): Values to be filled in padding areas.\n\n Returns:\n ndarray: The padded image.\n \"\"\"\n if not isinstance(pad_val, (int, float)):\n assert len(pad_val) == img.shape[-1]\n if len(shape) < len(img.shape):\n shape = shape + (img.shape[-1], )\n assert len(shape) == len(img.shape)\n for i in range(len(shape) - 1):\n assert shape[i] >= img.shape[i]\n pad = np.empty(shape, dtype=img.dtype)\n pad[...] = pad_val\n pad[:img.shape[0], :img.shape[1], ...] = img\n return pad\n\n\ndef impad_to_multiple(img, divisor, pad_val=0):\n \"\"\"Pad an image to ensure each edge to be multiple to some number.\n\n Args:\n img (ndarray): Image to be padded.\n divisor (int): Padded image edges will be multiple to divisor.\n pad_val (number or sequence): Same as :func:`impad`.\n\n Returns:\n ndarray: The padded image.\n \"\"\"\n pad_h = int(np.ceil(img.shape[0] / divisor)) * divisor\n pad_w = int(np.ceil(img.shape[1] / divisor)) * divisor\n return impad(img, (pad_h, pad_w), pad_val)\n", "path": "mmcv/image/transforms/geometry.py"}, {"content": "# Copyright (c) Open-MMLab. 
All rights reserved.\n__version__ = '0.2.15'\n", "path": "mmcv/version.py"}], "after_files": [{"content": "# Copyright (c) Open-MMLab. All rights reserved.\nfrom __future__ import division\n\nimport cv2\nimport numpy as np\n\n\ndef imflip(img, direction='horizontal'):\n \"\"\"Flip an image horizontally or vertically.\n\n Args:\n img (ndarray): Image to be flipped.\n direction (str): The flip direction, either \"horizontal\" or \"vertical\".\n\n Returns:\n ndarray: The flipped image.\n \"\"\"\n assert direction in ['horizontal', 'vertical']\n if direction == 'horizontal':\n return np.flip(img, axis=1)\n else:\n return np.flip(img, axis=0)\n\n\ndef imrotate(img,\n angle,\n center=None,\n scale=1.0,\n border_value=0,\n auto_bound=False):\n \"\"\"Rotate an image.\n\n Args:\n img (ndarray): Image to be rotated.\n angle (float): Rotation angle in degrees, positive values mean\n clockwise rotation.\n center (tuple): Center of the rotation in the source image, by default\n it is the center of the image.\n scale (float): Isotropic scale factor.\n border_value (int): Border value.\n auto_bound (bool): Whether to adjust the image size to cover the whole\n rotated image.\n\n Returns:\n ndarray: The rotated image.\n \"\"\"\n if center is not None and auto_bound:\n raise ValueError('`auto_bound` conflicts with `center`')\n h, w = img.shape[:2]\n if center is None:\n center = ((w - 1) * 0.5, (h - 1) * 0.5)\n assert isinstance(center, tuple)\n\n matrix = cv2.getRotationMatrix2D(center, -angle, scale)\n if auto_bound:\n cos = np.abs(matrix[0, 0])\n sin = np.abs(matrix[0, 1])\n new_w = h * sin + w * cos\n new_h = h * cos + w * sin\n matrix[0, 2] += (new_w - w) * 0.5\n matrix[1, 2] += (new_h - h) * 0.5\n w = int(np.round(new_w))\n h = int(np.round(new_h))\n rotated = cv2.warpAffine(img, matrix, (w, h), borderValue=border_value)\n return rotated\n\n\ndef bbox_clip(bboxes, img_shape):\n \"\"\"Clip bboxes to fit the image shape.\n\n Args:\n bboxes (ndarray): Shape (..., 4*k)\n img_shape (tuple): (height, width) of the image.\n\n Returns:\n ndarray: Clipped bboxes.\n \"\"\"\n assert bboxes.shape[-1] % 4 == 0\n clipped_bboxes = np.empty_like(bboxes, dtype=bboxes.dtype)\n clipped_bboxes[..., 0::2] = np.maximum(\n np.minimum(bboxes[..., 0::2], img_shape[1] - 1), 0)\n clipped_bboxes[..., 1::2] = np.maximum(\n np.minimum(bboxes[..., 1::2], img_shape[0] - 1), 0)\n return clipped_bboxes\n\n\ndef bbox_scaling(bboxes, scale, clip_shape=None):\n \"\"\"Scaling bboxes w.r.t the box center.\n\n Args:\n bboxes (ndarray): Shape(..., 4).\n scale (float): Scaling factor.\n clip_shape (tuple, optional): If specified, bboxes that exceed the\n boundary will be clipped according to the given shape (h, w).\n\n Returns:\n ndarray: Scaled bboxes.\n \"\"\"\n if float(scale) == 1.0:\n scaled_bboxes = bboxes.copy()\n else:\n w = bboxes[..., 2] - bboxes[..., 0] + 1\n h = bboxes[..., 3] - bboxes[..., 1] + 1\n dw = (w * (scale - 1)) * 0.5\n dh = (h * (scale - 1)) * 0.5\n scaled_bboxes = bboxes + np.stack((-dw, -dh, dw, dh), axis=-1)\n if clip_shape is not None:\n return bbox_clip(scaled_bboxes, clip_shape)\n else:\n return scaled_bboxes\n\n\ndef imcrop(img, bboxes, scale=1.0, pad_fill=None):\n \"\"\"Crop image patches.\n\n 3 steps: scale the bboxes -> clip bboxes -> crop and pad.\n\n Args:\n img (ndarray): Image to be cropped.\n bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes.\n scale (float, optional): Scale ratio of bboxes, the default value\n 1.0 means no padding.\n pad_fill (number or list): Value to be filled for 
padding, None for\n no padding.\n\n Returns:\n list or ndarray: The cropped image patches.\n \"\"\"\n chn = 1 if img.ndim == 2 else img.shape[2]\n if pad_fill is not None:\n if isinstance(pad_fill, (int, float)):\n pad_fill = [pad_fill for _ in range(chn)]\n assert len(pad_fill) == chn\n\n _bboxes = bboxes[None, ...] if bboxes.ndim == 1 else bboxes\n scaled_bboxes = bbox_scaling(_bboxes, scale).astype(np.int32)\n clipped_bbox = bbox_clip(scaled_bboxes, img.shape)\n\n patches = []\n for i in range(clipped_bbox.shape[0]):\n x1, y1, x2, y2 = tuple(clipped_bbox[i, :])\n if pad_fill is None:\n patch = img[y1:y2 + 1, x1:x2 + 1, ...]\n else:\n _x1, _y1, _x2, _y2 = tuple(scaled_bboxes[i, :])\n if chn == 1:\n patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1)\n else:\n patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1, chn)\n patch = np.array(\n pad_fill, dtype=img.dtype) * np.ones(\n patch_shape, dtype=img.dtype)\n x_start = 0 if _x1 >= 0 else -_x1\n y_start = 0 if _y1 >= 0 else -_y1\n w = x2 - x1 + 1\n h = y2 - y1 + 1\n patch[y_start:y_start + h, x_start:x_start + w,\n ...] = img[y1:y1 + h, x1:x1 + w, ...]\n patches.append(patch)\n\n if bboxes.ndim == 1:\n return patches[0]\n else:\n return patches\n\n\ndef impad(img, shape, pad_val=0):\n \"\"\"Pad an image to a certain shape.\n\n Args:\n img (ndarray): Image to be padded.\n shape (tuple): Expected padding shape.\n pad_val (number or sequence): Values to be filled in padding areas.\n\n Returns:\n ndarray: The padded image.\n \"\"\"\n if not isinstance(pad_val, (int, float)):\n assert len(pad_val) == img.shape[-1]\n if len(shape) < len(img.shape):\n shape = shape + (img.shape[-1], )\n assert len(shape) == len(img.shape)\n for i in range(len(shape)):\n assert shape[i] >= img.shape[i]\n pad = np.empty(shape, dtype=img.dtype)\n pad[...] = pad_val\n pad[:img.shape[0], :img.shape[1], ...] = img\n return pad\n\n\ndef impad_to_multiple(img, divisor, pad_val=0):\n \"\"\"Pad an image to ensure each edge to be multiple to some number.\n\n Args:\n img (ndarray): Image to be padded.\n divisor (int): Padded image edges will be multiple to divisor.\n pad_val (number or sequence): Same as :func:`impad`.\n\n Returns:\n ndarray: The padded image.\n \"\"\"\n pad_h = int(np.ceil(img.shape[0] / divisor)) * divisor\n pad_w = int(np.ceil(img.shape[1] / divisor)) * divisor\n return impad(img, (pad_h, pad_w), pad_val)\n", "path": "mmcv/image/transforms/geometry.py"}, {"content": "# Copyright (c) Open-MMLab. All rights reserved.\n__version__ = '0.2.16'\n", "path": "mmcv/version.py"}]}
| 2,817 | 227 |
gh_patches_debug_567
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-891
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.3
On the docket:
+ [x] Error eagerly if an interpreter binary doesn't exist #886
+ [x] The pip-powered resolve in pex 2 will re-tokenize --find-links pages on each transitive requirement #887
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '2.1.2'
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '2.1.2'
+__version__ = '2.1.3'
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '2.1.2'\n+__version__ = '2.1.3'\n", "issue": "Release 2.1.3\nOn the docket:\r\n+ [x] Error eagerly if an interpreter binary doesn't exist #886 \r\n+ [x] The pip-powered resolve in pex 2 will re-tokenize --find-links pages on each transitive requirement #887 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '2.1.2'\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '2.1.3'\n", "path": "pex/version.py"}]}
| 368 | 94 |
gh_patches_debug_11717
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-23087
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ios_config multiline banner timeout
<!---
This looks to match issue 5318 which is listed as resolved but I am having the same issue. I have used ^ as the delimiter in the src file as well as the default @.
-->
##### ISSUE TYPE
<!---
- Bug Report
-->
##### COMPONENT NAME
```
<!---
ios_config
-->
```
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
adminlen@len-nm-1:~/ansible$ ansible --version
ansible 2.3.0
config file = /home/adminlen/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
-->
##### CONFIGURATION
<!---
```
adminlen@len-nm-1:~/ansible$ cat ansible.cfg
[defaults]
transport = paramiko
hostfile = ./myhosts
host_key_checking = True
timeout = 5
```
-->
##### OS / ENVIRONMENT
<!---
```
I am running from ubuntu server 16.04.
i am managing Cisco IOS 15.5.3
```
-->
##### SUMMARY
<!---
```
The playbook times out when applying a multiline banner, either login or motd.
```
-->
##### STEPS TO REPRODUCE
<!---
```
Here is my playbook and src file:
ansible$ cat router-login-banner.yml
---
-hosts: testsitehsrp1
gather_facts: no
connection: local
tasks:
name: Set motd and alias commands for dsl
ios_config:
username: adminlen
save: yes
src: /home/adminlen/ansible/router-login-banner-test
adminlen@len-nm-1:~/ansible$ cat router-login-banner-test
banner login @
test login
banner
@
-->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: testsitehsrp1
gather_facts: no
connection: local
tasks:
- name: Set motd and alias commands for dsl
ios_config:
username: adminlen
save: yes
src: /home/adminlen/ansible/router-login-banner-test
```
##### EXPECTED RESULTS
<!---
```
Here is what happens when the banner is contained on a single line. This is what I expect to happen with the multiline banner
---
- hosts: testsitehsrp1
gather_facts: no
connection: local
tasks:
- name: Set motd and alias commands for dsl
ios_config:
username: adminlen
save: yes
src: /home/adminlen/ansible/router-login-banner-test-1line
adminlen@len-nm-1:~/ansible$ cat router-login-banner-test-1line
banner login @ test 1 line banner @
adminlen@len-nm-1:~/ansible$ ansible-playbook router-login-banner-2.yml --ask-pass
SSH password:
PLAY [testsitehsrp1] ******************************************************************************************************************
TASK [Set motd and alias commands for dsl] ********************************************************************************************
[WARNING]: argument username has been deprecated and will be removed in a future version
changed: [testsitehsrp1]
PLAY RECAP ****************************************************************************************************************************
testsitehsrp1 : ok=1 changed=1 unreachable=0 failed=0
```
-->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
adminlen@len-nm-1:~/ansible$ ansible-playbook -vvv router-login-banner.yml --ask-pass
Using /home/adminlen/ansible/ansible.cfg as config file
SSH password:
PLAYBOOK: router-login-banner.yml *******************************************************************************
1 plays in router-login-banner.yml
PLAY [testsitehsrp1] ********************************************************************************************
META: ran handlers
TASK [Set motd and alias commands for dsl] **********************************************************************
task path: /home/adminlen/ansible/router-login-banner.yml:7
Using module file /usr/local/lib/python2.7/dist-packages/ansible-2.3.0-py2.7.egg/ansible/modules/network/ios/ios_config.py
<10.100.80.2> ESTABLISH LOCAL CONNECTION FOR USER: adminlen
<10.100.80.2> EXEC /bin/sh -c 'echo ~ && sleep 0'
<10.100.80.2> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534" && echo ansible-tmp-1487961451.38-133078886647534="echo /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534" ) && sleep 0'
<10.100.80.2> PUT /tmp/tmpIbJY6x TO /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534/ios_config.py
<10.100.80.2> EXEC /bin/sh -c 'chmod u+x /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534/ /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534/ios_config.py && sleep 0'
<10.100.80.2> EXEC /bin/sh -c '/usr/bin/python /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534/ios_config.py; rm -rf "/home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534/" > /dev/null 2>&1 && sleep 0'
fatal: [testsitehsrp1]: FAILED! => {
"changed": false,
"command": "banner login @",
"failed": true,
"invocation": {
"module_args": {
"after": null,
"auth_pass": null,
"authorize": null,
"backup": false,
"before": null,
"config": null,
"defaults": false,
"force": false,
"host": null,
"lines": null,
"match": "line",
"multiline_delimiter": "@",
"parents": null,
"password": null,
"port": null,
"provider": null,
"replace": "line",
"save": true,
"src": "banner login @ \n\ntest login\nbanner\n\n@\n\n",
"ssh_keyfile": null,
"timeout": null,
"username": "adminlen"
}
},
"msg": "timeout trying to send command: banner login @",
"rc": 1
}
to retry, use: --limit @/home/adminlen/ansible/router-login-banner.retry
PLAY RECAP ******************************************************************************************************
testsitehsrp1 : ok=0 changed=0 unreachable=0 failed=1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/modules/network/ios/ios_banner.py`
Content:
```
1 #!/usr/bin/python
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17 #
18
19 ANSIBLE_METADATA = {'metadata_version': '1.0',
20 'status': ['preview'],
21 'supported_by': 'community'}
22
23
24 DOCUMENTATION = """
25 ---
26 module: ios_banner
27 version_added: "2.3"
28 author: "Ricardo Carrillo Cruz (@rcarrillocruz)"
29 short_description: Manage multiline banners on Cisco IOS devices
30 description:
31 - This will configure both login and motd banners on remote devices
32 running Cisco IOS. It allows playbooks to add or remote
33 banner text from the active running configuration.
34 extends_documentation_fragment: ios
35 options:
36 banner:
37 description:
38 - Specifies which banner that should be
39 configured on the remote device.
40 required: true
41 default: null
42 choices: ['login', 'banner']
43 text:
44 description:
45 - The banner text that should be
46 present in the remote device running configuration. This argument
47 accepts a multiline string. Requires I(state=present).
48 default: null
49 state:
50 description:
51 - Specifies whether or not the configuration is
52 present in the current devices active running configuration.
53 default: present
54 choices: ['present', 'absent']
55 """
56
57 EXAMPLES = """
58 - name: configure the login banner
59 ios_banner:
60 banner: login
61 text: |
62 this is my login banner
63 that contains a multiline
64 string
65 state: present
66
67 - name: remove the motd banner
68 ios_banner:
69 banner: motd
70 state: absent
71 """
72
73 RETURN = """
74 commands:
75 description: The list of configuration mode commands to send to the device
76 returned: always
77 type: list
78 sample:
79 - banner login
80 - this is my login banner
81 - that contains a multiline
82 - string
83 """
84 from ansible.module_utils.basic import AnsibleModule
85 from ansible.module_utils.connection import exec_command
86 from ansible.module_utils.ios import load_config, run_commands
87 from ansible.module_utils.ios import ios_argument_spec, check_args
88 import re
89
90 def map_obj_to_commands(updates, module):
91 commands = list()
92 want, have = updates
93 state = module.params['state']
94
95 if state == 'absent' and have['text']:
96 commands.append('no banner %s' % module.params['banner'])
97
98 elif state == 'present':
99 if want['text'] and (want['text'] != have.get('text')):
100 banner_cmd = 'banner %s' % module.params['banner']
101 banner_cmd += ' @\n'
102 banner_cmd += want['text'].strip()
103 banner_cmd += '\n@'
104 commands.append(banner_cmd)
105
106 return commands
107
108 def map_config_to_obj(module):
109 rc, out, err = exec_command(module, 'show banner %s' % module.params['banner'])
110 if rc == 0:
111 output = out
112 else:
113 rc, out, err = exec_command(module,
114 'show running-config | begin banner %s'
115 % module.params['banner'])
116 output = re.search('\^C(.*)\^C', out, re.S).group(1).strip()
117 obj = {'banner': module.params['banner'], 'state': 'absent'}
118 if output:
119 obj['text'] = output
120 obj['state'] = 'present'
121 return obj
122
123 def map_params_to_obj(module):
124 text = module.params['text']
125 if text:
126 text = str(text).strip()
127
128 return {
129 'banner': module.params['banner'],
130 'text': text,
131 'state': module.params['state']
132 }
133
134 def main():
135 """ main entry point for module execution
136 """
137 argument_spec = dict(
138 banner=dict(required=True, choices=['login', 'motd']),
139 text=dict(),
140 state=dict(default='present', choices=['present', 'absent'])
141 )
142
143 argument_spec.update(ios_argument_spec)
144
145 required_if = [('state', 'present', ('text',))]
146
147 module = AnsibleModule(argument_spec=argument_spec,
148 required_if=required_if,
149 supports_check_mode=True)
150
151 warnings = list()
152 check_args(module, warnings)
153
154 result = {'changed': False}
155 if warnings:
156 result['warnings'] = warnings
157 want = map_params_to_obj(module)
158 have = map_config_to_obj(module)
159
160 commands = map_obj_to_commands((want, have), module)
161 result['commands'] = commands
162
163 if commands:
164 if not module.check_mode:
165 response = load_config(module, commands)
166
167 result['changed'] = True
168
169 module.exit_json(**result)
170
171 if __name__ == '__main__':
172 main()
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/ansible/modules/network/ios/ios_banner.py b/lib/ansible/modules/network/ios/ios_banner.py
--- a/lib/ansible/modules/network/ios/ios_banner.py
+++ b/lib/ansible/modules/network/ios/ios_banner.py
@@ -113,7 +113,10 @@
rc, out, err = exec_command(module,
'show running-config | begin banner %s'
% module.params['banner'])
- output = re.search('\^C(.*)\^C', out, re.S).group(1).strip()
+ if out:
+ output = re.search('\^C(.*)\^C', out, re.S).group(1).strip()
+ else:
+ output = None
obj = {'banner': module.params['banner'], 'state': 'absent'}
if output:
obj['text'] = output
|
{"golden_diff": "diff --git a/lib/ansible/modules/network/ios/ios_banner.py b/lib/ansible/modules/network/ios/ios_banner.py\n--- a/lib/ansible/modules/network/ios/ios_banner.py\n+++ b/lib/ansible/modules/network/ios/ios_banner.py\n@@ -113,7 +113,10 @@\n rc, out, err = exec_command(module,\n 'show running-config | begin banner %s'\n % module.params['banner'])\n- output = re.search('\\^C(.*)\\^C', out, re.S).group(1).strip()\n+ if out:\n+ output = re.search('\\^C(.*)\\^C', out, re.S).group(1).strip()\n+ else:\n+ output = None\n obj = {'banner': module.params['banner'], 'state': 'absent'}\n if output:\n obj['text'] = output\n", "issue": "ios_config multiline banner timeout\n<!---\r\nThis looks to match issue 5318 which is listed as resolved but I am having the same issue. I have used ^ as the deliminator in the src file as well as the default @.\r\n-->\r\n\r\n##### ISSUE TYPE\r\n<!---\r\n - Bug Report\r\n-->\r\n##### COMPONENT NAME\r\n```\r\n<!---\r\nios_config\r\n-->\r\n```\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \u201cansible --version\u201d between quotes below -->\r\n```\r\nadminlen@len-nm-1:~/ansible$ ansible --version\r\nansible 2.3.0\r\n config file = /home/adminlen/ansible/ansible.cfg\r\n configured module search path = Default w/o overrides\r\n```\r\n-->\r\n\r\n##### CONFIGURATION\r\n<!---\r\n```\r\nadminlen@len-nm-1:~/ansible$ cat ansible.cfg\r\n[defaults]\r\ntransport = paramiko\r\nhostfile = ./myhosts\r\nhost_key_checking = True\r\ntimeout = 5\r\n```\r\n-->\r\n\r\n##### OS / ENVIRONMENT\r\n<!---\r\n```\r\nI am running from ubuntu server 16.04.\r\ni am managing Cisco IOS 15.5.3\r\n```\r\n-->\r\n\r\n##### SUMMARY\r\n<!---\r\n```\r\nThe playbook timeouts when applying multiline banner, either login or motd.\r\n```\r\n-->\r\n\r\n##### STEPS TO REPRODUCE\r\n<!---\r\n```\r\nHere is my playbook and src file:\r\nansible$ cat router-login-banner.yml\r\n---\r\n-hosts: testsitehsrp1\r\ngather_facts: no\r\nconnection: local\r\n\r\ntasks:\r\n name: Set motd and alias commands for dsl\r\n ios_config:\r\n username: adminlen\r\n save: yes\r\n src: /home/adminlen/ansible/router-login-banner-test\r\n\r\nadminlen@len-nm-1:~/ansible$ cat router-login-banner-test\r\nbanner login @\r\n\r\ntest login\r\nbanner\r\n\r\n@\r\n-->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n---\r\n- hosts: testsitehsrp1\r\n gather_facts: no\r\n connection: local\r\n\r\n tasks:\r\n - name: Set motd and alias commands for dsl\r\n ios_config:\r\n username: adminlen\r\n save: yes\r\n src: /home/adminlen/ansible/router-login-banner-test\r\n\r\n```\r\n\r\n##### EXPECTED RESULTS\r\n<!--- \r\n```\r\nHere is what happens when the banner is contained on a single line. 
This is what I expect to happen with the multiline banner\r\n\r\n---\r\n- hosts: testsitehsrp1\r\n gather_facts: no\r\n connection: local\r\n\r\n tasks:\r\n - name: Set motd and alias commands for dsl\r\n ios_config:\r\n username: adminlen\r\n save: yes\r\n src: /home/adminlen/ansible/router-login-banner-test-1line\r\n\r\nadminlen@len-nm-1:~/ansible$ cat router-login-banner-test-1line\r\nbanner login @ test 1 line banner @\r\n\r\nadminlen@len-nm-1:~/ansible$ ansible-playbook router-login-banner-2.yml --ask-pass\r\nSSH password:\r\n\r\nPLAY [testsitehsrp1] ******************************************************************************************************************\r\n\r\nTASK [Set motd and alias commands for dsl] ********************************************************************************************\r\n [WARNING]: argument username has been deprecated and will be removed in a future version\r\n\r\nchanged: [testsitehsrp1]\r\n\r\nPLAY RECAP ****************************************************************************************************************************\r\ntestsitehsrp1 : ok=1 changed=1 unreachable=0 failed=0\r\n\r\n```\r\n -->\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->\r\n\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\nadminlen@len-nm-1:~/ansible$ ansible-playbook -vvv router-login-banner.yml --ask-pass\r\nUsing /home/adminlen/ansible/ansible.cfg as config file\r\nSSH password:\r\n\r\nPLAYBOOK: router-login-banner.yml *******************************************************************************\r\n1 plays in router-login-banner.yml\r\n\r\nPLAY [testsitehsrp1] ********************************************************************************************\r\nMETA: ran handlers\r\n\r\nTASK [Set motd and alias commands for dsl] **********************************************************************\r\ntask path: /home/adminlen/ansible/router-login-banner.yml:7\r\nUsing module file /usr/local/lib/python2.7/dist-packages/ansible-2.3.0-py2.7.egg/ansible/modules/network/ios/ios_config.py\r\n<10.100.80.2> ESTABLISH LOCAL CONNECTION FOR USER: adminlen\r\n<10.100.80.2> EXEC /bin/sh -c 'echo ~ && sleep 0'\r\n<10.100.80.2> EXEC /bin/sh -c '( umask 77 && mkdir -p \"echo /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534\" && echo ansible-tmp-1487961451.38-133078886647534=\"echo /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534\" ) && sleep 0'\r\n<10.100.80.2> PUT /tmp/tmpIbJY6x TO /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534/ios_config.py\r\n<10.100.80.2> EXEC /bin/sh -c 'chmod u+x /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534/ /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534/ios_config.py && sleep 0'\r\n<10.100.80.2> EXEC /bin/sh -c '/usr/bin/python /home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534/ios_config.py; rm -rf \"/home/adminlen/.ansible/tmp/ansible-tmp-1487961451.38-133078886647534/\" > /dev/null 2>&1 && sleep 0'\r\nfatal: [testsitehsrp1]: FAILED! 
=> {\r\n\"changed\": false,\r\n\"command\": \"banner login @\",\r\n\"failed\": true,\r\n\"invocation\": {\r\n\"module_args\": {\r\n\"after\": null,\r\n\"auth_pass\": null,\r\n\"authorize\": null,\r\n\"backup\": false,\r\n\"before\": null,\r\n\"config\": null,\r\n\"defaults\": false,\r\n\"force\": false,\r\n\"host\": null,\r\n\"lines\": null,\r\n\"match\": \"line\",\r\n\"multiline_delimiter\": \"@\",\r\n\"parents\": null,\r\n\"password\": null,\r\n\"port\": null,\r\n\"provider\": null,\r\n\"replace\": \"line\",\r\n\"save\": true,\r\n\"src\": \"banner login @ \\n\\ntest login\\nbanner\\n\\n@\\n\\n\",\r\n\"ssh_keyfile\": null,\r\n\"timeout\": null,\r\n\"username\": \"adminlen\"\r\n}\r\n},\r\n\"msg\": \"timeout trying to send command: banner login @\",\r\n\"rc\": 1\r\n}\r\nto retry, use: --limit @/home/adminlen/ansible/router-login-banner.retry\r\n\r\nPLAY RECAP ******************************************************************************************************\r\ntestsitehsrp1 : ok=0 changed=0 unreachable=0 failed=1\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nANSIBLE_METADATA = {'metadata_version': '1.0',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\n\nDOCUMENTATION = \"\"\"\n---\nmodule: ios_banner\nversion_added: \"2.3\"\nauthor: \"Ricardo Carrillo Cruz (@rcarrillocruz)\"\nshort_description: Manage multiline banners on Cisco IOS devices\ndescription:\n - This will configure both login and motd banners on remote devices\n running Cisco IOS. It allows playbooks to add or remote\n banner text from the active running configuration.\nextends_documentation_fragment: ios\noptions:\n banner:\n description:\n - Specifies which banner that should be\n configured on the remote device.\n required: true\n default: null\n choices: ['login', 'banner']\n text:\n description:\n - The banner text that should be\n present in the remote device running configuration. This argument\n accepts a multiline string. 
Requires I(state=present).\n default: null\n state:\n description:\n - Specifies whether or not the configuration is\n present in the current devices active running configuration.\n default: present\n choices: ['present', 'absent']\n\"\"\"\n\nEXAMPLES = \"\"\"\n- name: configure the login banner\n ios_banner:\n banner: login\n text: |\n this is my login banner\n that contains a multiline\n string\n state: present\n\n- name: remove the motd banner\n ios_banner:\n banner: motd\n state: absent\n\"\"\"\n\nRETURN = \"\"\"\ncommands:\n description: The list of configuration mode commands to send to the device\n returned: always\n type: list\n sample:\n - banner login\n - this is my login banner\n - that contains a multiline\n - string\n\"\"\"\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils.connection import exec_command\nfrom ansible.module_utils.ios import load_config, run_commands\nfrom ansible.module_utils.ios import ios_argument_spec, check_args\nimport re\n\ndef map_obj_to_commands(updates, module):\n commands = list()\n want, have = updates\n state = module.params['state']\n\n if state == 'absent' and have['text']:\n commands.append('no banner %s' % module.params['banner'])\n\n elif state == 'present':\n if want['text'] and (want['text'] != have.get('text')):\n banner_cmd = 'banner %s' % module.params['banner']\n banner_cmd += ' @\\n'\n banner_cmd += want['text'].strip()\n banner_cmd += '\\n@'\n commands.append(banner_cmd)\n\n return commands\n\ndef map_config_to_obj(module):\n rc, out, err = exec_command(module, 'show banner %s' % module.params['banner'])\n if rc == 0:\n output = out\n else:\n rc, out, err = exec_command(module,\n 'show running-config | begin banner %s'\n % module.params['banner'])\n output = re.search('\\^C(.*)\\^C', out, re.S).group(1).strip()\n obj = {'banner': module.params['banner'], 'state': 'absent'}\n if output:\n obj['text'] = output\n obj['state'] = 'present'\n return obj\n\ndef map_params_to_obj(module):\n text = module.params['text']\n if text:\n text = str(text).strip()\n\n return {\n 'banner': module.params['banner'],\n 'text': text,\n 'state': module.params['state']\n }\n\ndef main():\n \"\"\" main entry point for module execution\n \"\"\"\n argument_spec = dict(\n banner=dict(required=True, choices=['login', 'motd']),\n text=dict(),\n state=dict(default='present', choices=['present', 'absent'])\n )\n\n argument_spec.update(ios_argument_spec)\n\n required_if = [('state', 'present', ('text',))]\n\n module = AnsibleModule(argument_spec=argument_spec,\n required_if=required_if,\n supports_check_mode=True)\n\n warnings = list()\n check_args(module, warnings)\n\n result = {'changed': False}\n if warnings:\n result['warnings'] = warnings\n want = map_params_to_obj(module)\n have = map_config_to_obj(module)\n\n commands = map_obj_to_commands((want, have), module)\n result['commands'] = commands\n\n if commands:\n if not module.check_mode:\n response = load_config(module, commands)\n\n result['changed'] = True\n\n module.exit_json(**result)\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/network/ios/ios_banner.py"}], "after_files": [{"content": "#!/usr/bin/python\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# 
but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nANSIBLE_METADATA = {'metadata_version': '1.0',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\n\nDOCUMENTATION = \"\"\"\n---\nmodule: ios_banner\nversion_added: \"2.3\"\nauthor: \"Ricardo Carrillo Cruz (@rcarrillocruz)\"\nshort_description: Manage multiline banners on Cisco IOS devices\ndescription:\n - This will configure both login and motd banners on remote devices\n running Cisco IOS. It allows playbooks to add or remote\n banner text from the active running configuration.\nextends_documentation_fragment: ios\noptions:\n banner:\n description:\n - Specifies which banner that should be\n configured on the remote device.\n required: true\n default: null\n choices: ['login', 'banner']\n text:\n description:\n - The banner text that should be\n present in the remote device running configuration. This argument\n accepts a multiline string. Requires I(state=present).\n default: null\n state:\n description:\n - Specifies whether or not the configuration is\n present in the current devices active running configuration.\n default: present\n choices: ['present', 'absent']\n\"\"\"\n\nEXAMPLES = \"\"\"\n- name: configure the login banner\n ios_banner:\n banner: login\n text: |\n this is my login banner\n that contains a multiline\n string\n state: present\n\n- name: remove the motd banner\n ios_banner:\n banner: motd\n state: absent\n\"\"\"\n\nRETURN = \"\"\"\ncommands:\n description: The list of configuration mode commands to send to the device\n returned: always\n type: list\n sample:\n - banner login\n - this is my login banner\n - that contains a multiline\n - string\n\"\"\"\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils.connection import exec_command\nfrom ansible.module_utils.ios import load_config, run_commands\nfrom ansible.module_utils.ios import ios_argument_spec, check_args\nimport re\n\ndef map_obj_to_commands(updates, module):\n commands = list()\n want, have = updates\n state = module.params['state']\n\n if state == 'absent' and have['text']:\n commands.append('no banner %s' % module.params['banner'])\n\n elif state == 'present':\n if want['text'] and (want['text'] != have.get('text')):\n banner_cmd = 'banner %s' % module.params['banner']\n banner_cmd += ' @\\n'\n banner_cmd += want['text'].strip()\n banner_cmd += '\\n@'\n commands.append(banner_cmd)\n\n return commands\n\ndef map_config_to_obj(module):\n rc, out, err = exec_command(module, 'show banner %s' % module.params['banner'])\n if rc == 0:\n output = out\n else:\n rc, out, err = exec_command(module,\n 'show running-config | begin banner %s'\n % module.params['banner'])\n if out:\n output = re.search('\\^C(.*)\\^C', out, re.S).group(1).strip()\n else:\n output = None\n obj = {'banner': module.params['banner'], 'state': 'absent'}\n if output:\n obj['text'] = output\n obj['state'] = 'present'\n return obj\n\ndef map_params_to_obj(module):\n text = module.params['text']\n if text:\n text = str(text).strip()\n\n return {\n 'banner': module.params['banner'],\n 'text': text,\n 'state': module.params['state']\n }\n\ndef main():\n \"\"\" main entry point for module execution\n \"\"\"\n argument_spec = dict(\n banner=dict(required=True, choices=['login', 'motd']),\n 
text=dict(),\n state=dict(default='present', choices=['present', 'absent'])\n )\n\n argument_spec.update(ios_argument_spec)\n\n required_if = [('state', 'present', ('text',))]\n\n module = AnsibleModule(argument_spec=argument_spec,\n required_if=required_if,\n supports_check_mode=True)\n\n warnings = list()\n check_args(module, warnings)\n\n result = {'changed': False}\n if warnings:\n result['warnings'] = warnings\n want = map_params_to_obj(module)\n have = map_config_to_obj(module)\n\n commands = map_obj_to_commands((want, have), module)\n result['commands'] = commands\n\n if commands:\n if not module.check_mode:\n response = load_config(module, commands)\n\n result['changed'] = True\n\n module.exit_json(**result)\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/network/ios/ios_banner.py"}]}
| 3,525 | 197 |
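
The golden diff above hardens `map_config_to_obj` by only running the regex when the fallback `show running-config | begin banner ...` command actually returned output: on a device with no banner configured, `out` is empty, `re.search(...)` returns `None`, and the unguarded `.group(1)` raises an `AttributeError`. A minimal standalone sketch of that defensive pattern is below; the function name and the extra match-is-`None` check are illustrative additions, not part of the module itself.

```python
import re

def extract_banner_text(cli_output):
    """Pull the text between ^C markers from 'show running-config' output."""
    # An unconfigured banner yields empty output; bail out before regexing.
    if not cli_output:
        return None
    match = re.search(r'\^C(.*)\^C', cli_output, re.S)
    # Also tolerate output that does not contain the ^C delimiters at all.
    return match.group(1).strip() if match else None

print(extract_banner_text("^C\nwelcome\n^C"))  # -> "welcome"
print(extract_banner_text(""))                 # -> None instead of AttributeError
```
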
gh_patches_debug_24149
|
rasdani/github-patches
|
git_diff
|
lightly-ai__lightly-482
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
when pip version older than newest version, it calls API on every command rather than caching the information
When I use an older pip version, I see multiple lines of output like this when I run `lightly-magic`
```
...Python/3.8/lib/python/site-packages/lightly/api/version_checking.py:57: Warning: You are using lightly version 1.1.17. There is a newer version of the package available. For compatability reasons, please upgrade your current version: pip install lightly==1.1.18
warnings.warn(Warning(warning))
```
Also tracking the connections it makes, it calls the API for **every** images I want to upload. So the pip does not cache the information that it is an outdated version. This is no bueno
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightly/__init__.py`
Content:
```
1 """Lightly is a computer vision framework for self-supervised learning.
2
3 With Lightly you can train deep learning models using
4 self-supervision. This means, that you don't require
5 any labels to train a model. Lightly has been built
6 to help you understand and work with large unlabeled datasets.
7 It is built on top of PyTorch and therefore fully compatible
8 with other frameworks such as Fast.ai.
9
10 The framework is structured into the following modules:
11
12 - **api**:
13
14 The lightly.api module handles communication with the Lightly web-app.
15
16 - **cli**:
17
18 The lightly.cli module provides a command-line interface for training
19 self-supervised models and embedding images. Furthermore, the command-line
20 tool can be used to upload and download images from/to the Lightly web-app.
21
22 - **core**:
23
24 The lightly.core module offers one-liners for simple self-supervised learning.
25
26 - **data**:
27
28 The lightly.data module provides a dataset wrapper and collate functions. The
29 collate functions are in charge of the data augmentations which are crucial for
30 self-supervised learning.
31
32 - **embedding**:
33
34 The lightly.embedding module combines the self-supervised models with a dataloader,
35 optimizer, and loss function to provide a simple pytorch-lightning trainable.
36
37 - **loss**:
38
39 The lightly.loss module contains implementations of popular self-supervised training
40 loss functions.
41
42 - **models**:
43
44 The lightly.models module holds the implementation of the ResNet as well as self-
45 supervised methods. Currently implements:
46
47 - SimCLR
48
49 - MoCo
50
51 - SimSiam
52
53 - Barlow Twins
54
55 - BYOL
56
57 - NNCLR
58
59 - **transforms**:
60
61 The lightly.transforms module implements custom data transforms. Currently implements:
62
63 - Gaussian Blur
64
65 - Random Rotation
66
67 - Random Solarization
68
69 - **utils**:
70
71 The lightly.utils package provides global utility methods.
72 The io module contains utility to save and load embeddings in a format which is
73 understood by the Lightly library.
74
75 """
76
77 # Copyright (c) 2020. Lightly AG and its affiliates.
78 # All Rights Reserved
79
80 __name__ = 'lightly'
81 __version__ = '1.1.18'
82
83
84 try:
85 # See (https://github.com/PyTorchLightning/pytorch-lightning)
86 # This variable is injected in the __builtins__ by the build
87 # process. It used to enable importing subpackages of skimage when
88 # the binaries are not built
89 __LIGHTLY_SETUP__
90 except NameError:
91 __LIGHTLY_SETUP__ = False
92
93
94 if __LIGHTLY_SETUP__:
95 # setting up lightly
96 msg = f'Partial import of {__name__}=={__version__} during build process.'
97 print(msg)
98 else:
99 # see if prefetch_generator is available
100 try:
101 import prefetch_generator
102 except ImportError:
103 _prefetch_generator_available = False
104 else:
105 _prefetch_generator_available = True
106
107 def _is_prefetch_generator_available():
108 return _prefetch_generator_available
109
110 from lightly.core import *
111 from lightly import active_learning
112 from lightly import api
113 from lightly import data
114 from lightly import embedding
115 from lightly import loss
116 from lightly import models
117 from lightly import openapi_generated
118 from lightly import transforms
119 from lightly import utils
120
121
122 # check for latest version
123 from lightly.api.version_checking import get_latest_version
124 from lightly.api.version_checking import version_compare
125 from lightly.api.version_checking import pretty_print_latest_version
126
127 latest_version = get_latest_version(__version__)
128 if latest_version is not None:
129 if version_compare(__version__, latest_version) < 0:
130 # local version is behind latest version
131 pretty_print_latest_version(latest_version)
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lightly/__init__.py b/lightly/__init__.py
--- a/lightly/__init__.py
+++ b/lightly/__init__.py
@@ -80,6 +80,7 @@
__name__ = 'lightly'
__version__ = '1.1.18'
+from multiprocessing import current_process
try:
# See (https://github.com/PyTorchLightning/pytorch-lightning)
@@ -118,14 +119,15 @@
from lightly import transforms
from lightly import utils
+ if current_process().name == 'MainProcess':
+ # check for latest version
+ from lightly.api.version_checking import get_latest_version
+ from lightly.api.version_checking import version_compare
+ from lightly.api.version_checking import pretty_print_latest_version
- # check for latest version
- from lightly.api.version_checking import get_latest_version
- from lightly.api.version_checking import version_compare
- from lightly.api.version_checking import pretty_print_latest_version
+ latest_version = get_latest_version(__version__)
+ if latest_version is not None:
+ if version_compare(__version__, latest_version) < 0:
+ # local version is behind latest version
+ pretty_print_latest_version(latest_version)
- latest_version = get_latest_version(__version__)
- if latest_version is not None:
- if version_compare(__version__, latest_version) < 0:
- # local version is behind latest version
- pretty_print_latest_version(latest_version)
|
{"golden_diff": "diff --git a/lightly/__init__.py b/lightly/__init__.py\n--- a/lightly/__init__.py\n+++ b/lightly/__init__.py\n@@ -80,6 +80,7 @@\n __name__ = 'lightly'\n __version__ = '1.1.18'\n \n+from multiprocessing import current_process\n \n try:\n # See (https://github.com/PyTorchLightning/pytorch-lightning)\n@@ -118,14 +119,15 @@\n from lightly import transforms\n from lightly import utils\n \n+ if current_process().name == 'MainProcess':\n+ # check for latest version\n+ from lightly.api.version_checking import get_latest_version\n+ from lightly.api.version_checking import version_compare\n+ from lightly.api.version_checking import pretty_print_latest_version\n \n- # check for latest version\n- from lightly.api.version_checking import get_latest_version\n- from lightly.api.version_checking import version_compare\n- from lightly.api.version_checking import pretty_print_latest_version\n+ latest_version = get_latest_version(__version__)\n+ if latest_version is not None:\n+ if version_compare(__version__, latest_version) < 0:\n+ # local version is behind latest version\n+ pretty_print_latest_version(latest_version)\n \n- latest_version = get_latest_version(__version__)\n- if latest_version is not None:\n- if version_compare(__version__, latest_version) < 0:\n- # local version is behind latest version\n- pretty_print_latest_version(latest_version)\n", "issue": "when pip version older than newest version, it calls API on every command rather than caching the information\nWhen I use an older pip version, I see multiple lines of output like this when I run `lightly-magic`\r\n```\r\n...Python/3.8/lib/python/site-packages/lightly/api/version_checking.py:57: Warning: You are using lightly version 1.1.17. There is a newer version of the package available. For compatability reasons, please upgrade your current version: pip install lightly==1.1.18\r\n warnings.warn(Warning(warning))\r\n```\r\n\r\nAlso tracking the connections it makes, it calls the API for **every** images I want to upload. So the pip does not cache the information that it is an outdated version. This is no bueno\r\n \r\n \n", "before_files": [{"content": "\"\"\"Lightly is a computer vision framework for self-supervised learning.\n\nWith Lightly you can train deep learning models using\nself-supervision. This means, that you don't require\nany labels to train a model. Lightly has been built\nto help you understand and work with large unlabeled datasets.\nIt is built on top of PyTorch and therefore fully compatible \nwith other frameworks such as Fast.ai.\n\nThe framework is structured into the following modules:\n\n- **api**: \n\n The lightly.api module handles communication with the Lightly web-app.\n\n- **cli**:\n\n The lightly.cli module provides a command-line interface for training \n self-supervised models and embedding images. Furthermore, the command-line\n tool can be used to upload and download images from/to the Lightly web-app.\n\n- **core**:\n\n The lightly.core module offers one-liners for simple self-supervised learning.\n\n- **data**:\n\n The lightly.data module provides a dataset wrapper and collate functions. 
The\n collate functions are in charge of the data augmentations which are crucial for\n self-supervised learning.\n\n- **embedding**:\n\n The lightly.embedding module combines the self-supervised models with a dataloader,\n optimizer, and loss function to provide a simple pytorch-lightning trainable.\n\n- **loss**:\n\n The lightly.loss module contains implementations of popular self-supervised training\n loss functions.\n\n- **models**:\n\n The lightly.models module holds the implementation of the ResNet as well as self-\n supervised methods. Currently implements:\n\n - SimCLR\n\n - MoCo\n\n - SimSiam\n\n - Barlow Twins\n\n - BYOL\n\n - NNCLR\n\n- **transforms**:\n\n The lightly.transforms module implements custom data transforms. Currently implements:\n\n - Gaussian Blur\n\n - Random Rotation\n\n - Random Solarization\n\n- **utils**:\n\n The lightly.utils package provides global utility methods.\n The io module contains utility to save and load embeddings in a format which is\n understood by the Lightly library.\n\n\"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\n\n__name__ = 'lightly'\n__version__ = '1.1.18'\n\n\ntry:\n # See (https://github.com/PyTorchLightning/pytorch-lightning)\n # This variable is injected in the __builtins__ by the build\n # process. It used to enable importing subpackages of skimage when\n # the binaries are not built\n __LIGHTLY_SETUP__\nexcept NameError:\n __LIGHTLY_SETUP__ = False\n\n\nif __LIGHTLY_SETUP__:\n # setting up lightly\n msg = f'Partial import of {__name__}=={__version__} during build process.' \n print(msg)\nelse:\n # see if prefetch_generator is available\n try:\n import prefetch_generator\n except ImportError:\n _prefetch_generator_available = False\n else:\n _prefetch_generator_available = True\n\n def _is_prefetch_generator_available():\n return _prefetch_generator_available\n\n from lightly.core import *\n from lightly import active_learning\n from lightly import api\n from lightly import data\n from lightly import embedding\n from lightly import loss\n from lightly import models\n from lightly import openapi_generated\n from lightly import transforms\n from lightly import utils\n\n\n # check for latest version\n from lightly.api.version_checking import get_latest_version\n from lightly.api.version_checking import version_compare\n from lightly.api.version_checking import pretty_print_latest_version\n\n latest_version = get_latest_version(__version__)\n if latest_version is not None:\n if version_compare(__version__, latest_version) < 0:\n # local version is behind latest version\n pretty_print_latest_version(latest_version)\n", "path": "lightly/__init__.py"}], "after_files": [{"content": "\"\"\"Lightly is a computer vision framework for self-supervised learning.\n\nWith Lightly you can train deep learning models using\nself-supervision. This means, that you don't require\nany labels to train a model. Lightly has been built\nto help you understand and work with large unlabeled datasets.\nIt is built on top of PyTorch and therefore fully compatible \nwith other frameworks such as Fast.ai.\n\nThe framework is structured into the following modules:\n\n- **api**: \n\n The lightly.api module handles communication with the Lightly web-app.\n\n- **cli**:\n\n The lightly.cli module provides a command-line interface for training \n self-supervised models and embedding images. 
Furthermore, the command-line\n tool can be used to upload and download images from/to the Lightly web-app.\n\n- **core**:\n\n The lightly.core module offers one-liners for simple self-supervised learning.\n\n- **data**:\n\n The lightly.data module provides a dataset wrapper and collate functions. The\n collate functions are in charge of the data augmentations which are crucial for\n self-supervised learning.\n\n- **embedding**:\n\n The lightly.embedding module combines the self-supervised models with a dataloader,\n optimizer, and loss function to provide a simple pytorch-lightning trainable.\n\n- **loss**:\n\n The lightly.loss module contains implementations of popular self-supervised training\n loss functions.\n\n- **models**:\n\n The lightly.models module holds the implementation of the ResNet as well as self-\n supervised methods. Currently implements:\n\n - SimCLR\n\n - MoCo\n\n - SimSiam\n\n - Barlow Twins\n\n - BYOL\n\n - NNCLR\n\n- **transforms**:\n\n The lightly.transforms module implements custom data transforms. Currently implements:\n\n - Gaussian Blur\n\n - Random Rotation\n\n - Random Solarization\n\n- **utils**:\n\n The lightly.utils package provides global utility methods.\n The io module contains utility to save and load embeddings in a format which is\n understood by the Lightly library.\n\n\"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\n\n__name__ = 'lightly'\n__version__ = '1.1.18'\n\nfrom multiprocessing import current_process\n\ntry:\n # See (https://github.com/PyTorchLightning/pytorch-lightning)\n # This variable is injected in the __builtins__ by the build\n # process. It used to enable importing subpackages of skimage when\n # the binaries are not built\n __LIGHTLY_SETUP__\nexcept NameError:\n __LIGHTLY_SETUP__ = False\n\n\nif __LIGHTLY_SETUP__:\n # setting up lightly\n msg = f'Partial import of {__name__}=={__version__} during build process.' \n print(msg)\nelse:\n # see if prefetch_generator is available\n try:\n import prefetch_generator\n except ImportError:\n _prefetch_generator_available = False\n else:\n _prefetch_generator_available = True\n\n def _is_prefetch_generator_available():\n return _prefetch_generator_available\n\n from lightly.core import *\n from lightly import active_learning\n from lightly import api\n from lightly import data\n from lightly import embedding\n from lightly import loss\n from lightly import models\n from lightly import openapi_generated\n from lightly import transforms\n from lightly import utils\n\n if current_process().name == 'MainProcess':\n # check for latest version\n from lightly.api.version_checking import get_latest_version\n from lightly.api.version_checking import version_compare\n from lightly.api.version_checking import pretty_print_latest_version\n\n latest_version = get_latest_version(__version__)\n if latest_version is not None:\n if version_compare(__version__, latest_version) < 0:\n # local version is behind latest version\n pretty_print_latest_version(latest_version)\n\n", "path": "lightly/__init__.py"}]}
| 1,562 | 341 |
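
The `current_process().name == 'MainProcess'` guard in the diff above is what stops the upgrade warning from being re-triggered by worker processes that re-import the package: only the parent interpreter performs the network lookup, so the check runs once per invocation rather than once per uploaded image. A minimal sketch of the same pattern in isolation is shown below; `check_latest_version` is a stand-in for the real `lightly.api` call, not its actual implementation.

```python
from multiprocessing import current_process

def check_latest_version():
    # Stand-in for the real lookup that contacts the remote API.
    print("querying for the latest released version...")

# Child processes get names like 'SpawnProcess-1' rather than 'MainProcess',
# so when they re-import this module the lookup is skipped there.
if current_process().name == "MainProcess":
    check_latest_version()
```
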
gh_patches_debug_25623
|
rasdani/github-patches
|
git_diff
|
StackStorm__st2-5468
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
An exception is raised when calling MockDatastoreService.set_value() with a ttl
## SUMMARY
When using `MockDatastoreService.set_value()` for unit testing (e.g. via `BaseActionTestCase`), a `ValueError` exception will be raised if `ttl` argument is specified.
### STACKSTORM VERSION
`st2 3.1.0, on Python 3.6.8`
### OS, environment, install method
Custom install, but not relevant to this case, see below.
## Steps to reproduce the problem
See [code](https://github.com/StackStorm/st2/blob/6d1809a4bb577e117baa00f249757284db9c6e76/st2tests/st2tests/mocks/datastore.py#L108) for `MockDatastoreService.set_value()`. If `ttl` argument is specified, `ValueError` will explicitly be raised.
```python
class MockDatastoreService(BaseDatastoreService):
# ...
def set_value(self, name, value, ttl=None, local=True, scope=SYSTEM_SCOPE, encrypt=False):
"""
Store a value in a dictionary which is local to this class.
"""
if ttl:
raise ValueError('MockDatastoreService.set_value doesn\'t support "ttl" argument')
# ...
```
## Expected Results
Both `action_service.set_value()` and `sensor_service.set_value()` support `ttl` argument. Although I understand that this argument cannot be honored in a mock situation, `MockDatastoreService` should be consistent with the behavior of the class it is mocking - perhaps fire a warning instead of an Exception?
## Actual Results
An exception is returned while running tests:
```python
ValueError: MockDatastoreService.set_value doesn't support "ttl" argument
```
It is not possible to test actions or sensors that use `xxx.set_value()` with a `ttl` argument.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `st2tests/st2tests/mocks/datastore.py`
Content:
```
1 # Copyright 2020 The StackStorm Authors.
2 # Copyright 2019 Extreme Networks, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 """
17 Mock classes for use in pack testing.
18 """
19
20 from __future__ import absolute_import
21 from st2common.constants.keyvalue import SYSTEM_SCOPE
22 from st2common.services.datastore import BaseDatastoreService
23 from st2client.models.keyvalue import KeyValuePair
24
25 __all__ = ["MockDatastoreService"]
26
27
28 class MockDatastoreService(BaseDatastoreService):
29 """
30 Mock DatastoreService for use in testing.
31 """
32
33 def __init__(self, logger, pack_name, class_name, api_username=None):
34 self._pack_name = pack_name
35 self._class_name = class_name
36 self._username = api_username or "admin"
37
38 # Holds mock KeyValuePair objects
39 # Key is a KeyValuePair name and value is the KeyValuePair object
40 self._datastore_items = {}
41
42 ##################################
43 # General methods
44 ##################################
45
46 def get_user_info(self):
47 """
48 Retrieve information about the current user which is authenticated against StackStorm and
49 used to perform other datastore operations via the API.
50
51 :rtype: ``dict``
52 """
53 result = {
54 "username": self._username,
55 "rbac": {"is_admin": True, "enabled": True, "roles": ["admin"]},
56 "authentication": {"method": "authentication token", "location": "header"},
57 }
58
59 return result
60
61 ##################################
62 # Methods for datastore management
63 ##################################
64
65 def list_values(self, local=True, prefix=None):
66 """
67 Return a list of all values stored in a dictionary which is local to this class.
68 """
69 key_prefix = self._get_full_key_prefix(local=local, prefix=prefix)
70
71 if not key_prefix:
72 return list(self._datastore_items.values())
73
74 result = []
75 for name, kvp in self._datastore_items.items():
76 if name.startswith(key_prefix):
77 result.append(kvp)
78
79 return result
80
81 def get_value(self, name, local=True, scope=SYSTEM_SCOPE, decrypt=False):
82 """
83 Return a particular value stored in a dictionary which is local to this class.
84 """
85 name = self._get_full_key_name(name=name, local=local)
86
87 if name not in self._datastore_items:
88 return None
89
90 kvp = self._datastore_items[name]
91 return kvp.value
92
93 def set_value(
94 self, name, value, ttl=None, local=True, scope=SYSTEM_SCOPE, encrypt=False
95 ):
96 """
97 Store a value in a dictionary which is local to this class.
98 """
99 if ttl:
100 raise ValueError(
101 'MockDatastoreService.set_value doesn\'t support "ttl" argument'
102 )
103
104 name = self._get_full_key_name(name=name, local=local)
105
106 instance = KeyValuePair()
107 instance.id = name
108 instance.name = name
109 instance.value = value
110
111 self._datastore_items[name] = instance
112 return True
113
114 def delete_value(self, name, local=True, scope=SYSTEM_SCOPE):
115 """
116 Delete a value from a dictionary which is local to this class.
117 """
118 name = self._get_full_key_name(name=name, local=local)
119
120 if name not in self._datastore_items:
121 return False
122
123 del self._datastore_items[name]
124 return True
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/st2tests/st2tests/mocks/datastore.py b/st2tests/st2tests/mocks/datastore.py
--- a/st2tests/st2tests/mocks/datastore.py
+++ b/st2tests/st2tests/mocks/datastore.py
@@ -34,6 +34,7 @@
self._pack_name = pack_name
self._class_name = class_name
self._username = api_username or "admin"
+ self._logger = logger
# Holds mock KeyValuePair objects
# Key is a KeyValuePair name and value is the KeyValuePair object
@@ -96,10 +97,6 @@
"""
Store a value in a dictionary which is local to this class.
"""
- if ttl:
- raise ValueError(
- 'MockDatastoreService.set_value doesn\'t support "ttl" argument'
- )
name = self._get_full_key_name(name=name, local=local)
@@ -107,6 +104,11 @@
instance.id = name
instance.name = name
instance.value = value
+ if ttl:
+ self._logger.warning(
+ "MockDatastoreService is not able to expire keys based on ttl."
+ )
+ instance.ttl = ttl
self._datastore_items[name] = instance
return True
|
{"golden_diff": "diff --git a/st2tests/st2tests/mocks/datastore.py b/st2tests/st2tests/mocks/datastore.py\n--- a/st2tests/st2tests/mocks/datastore.py\n+++ b/st2tests/st2tests/mocks/datastore.py\n@@ -34,6 +34,7 @@\n self._pack_name = pack_name\n self._class_name = class_name\n self._username = api_username or \"admin\"\n+ self._logger = logger\n \n # Holds mock KeyValuePair objects\n # Key is a KeyValuePair name and value is the KeyValuePair object\n@@ -96,10 +97,6 @@\n \"\"\"\n Store a value in a dictionary which is local to this class.\n \"\"\"\n- if ttl:\n- raise ValueError(\n- 'MockDatastoreService.set_value doesn\\'t support \"ttl\" argument'\n- )\n \n name = self._get_full_key_name(name=name, local=local)\n \n@@ -107,6 +104,11 @@\n instance.id = name\n instance.name = name\n instance.value = value\n+ if ttl:\n+ self._logger.warning(\n+ \"MockDatastoreService is not able to expire keys based on ttl.\"\n+ )\n+ instance.ttl = ttl\n \n self._datastore_items[name] = instance\n return True\n", "issue": "An exception is raised when calling MockDatastoreService.set_value() with a ttl\n## SUMMARY\r\n\r\nWhen using `MockDatastoreService.set_value()` for unit testing (e.g. via `BaseActionTestCase`), a `ValueError` exception will be raised if `ttl` argument is specified.\r\n\r\n\r\n### STACKSTORM VERSION\r\n\r\n`st2 3.1.0, on Python 3.6.8`\r\n\r\n### OS, environment, install method\r\n\r\nCustom install, but not relevant to this case, see below.\r\n\r\n## Steps to reproduce the problem\r\n\r\nSee [code](https://github.com/StackStorm/st2/blob/6d1809a4bb577e117baa00f249757284db9c6e76/st2tests/st2tests/mocks/datastore.py#L108) for `MockDatastoreService.set_value()`. If `ttl` argument is specified, `ValueError` will explicitly be raised.\r\n\r\n```python\r\nclass MockDatastoreService(BaseDatastoreService):\r\n # ...\r\n def set_value(self, name, value, ttl=None, local=True, scope=SYSTEM_SCOPE, encrypt=False):\r\n \"\"\"\r\n Store a value in a dictionary which is local to this class.\r\n \"\"\"\r\n if ttl:\r\n raise ValueError('MockDatastoreService.set_value doesn\\'t support \"ttl\" argument')\r\n # ...\r\n```\r\n\r\n\r\n## Expected Results\r\n\r\nBoth `action_service.set_value()` and `sensor_service.set_value()` support `ttl` argument. 
Although I understand that this argument cannot be honored in a mock situation, `MockDatastoreService` should be consistent with the behavior of the class it is mocking - perhaps fire a warning instead of an Exception?\r\n\r\n## Actual Results\r\n\r\nAn exception is returned while running tests:\r\n```python\r\nValueError: MockDatastoreService.set_value doesn't support \"ttl\" argument\r\n```\r\nIt is not possible to test actions or sensors that use `xxx.set_value()` with a `ttl` argument.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nMock classes for use in pack testing.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom st2common.constants.keyvalue import SYSTEM_SCOPE\nfrom st2common.services.datastore import BaseDatastoreService\nfrom st2client.models.keyvalue import KeyValuePair\n\n__all__ = [\"MockDatastoreService\"]\n\n\nclass MockDatastoreService(BaseDatastoreService):\n \"\"\"\n Mock DatastoreService for use in testing.\n \"\"\"\n\n def __init__(self, logger, pack_name, class_name, api_username=None):\n self._pack_name = pack_name\n self._class_name = class_name\n self._username = api_username or \"admin\"\n\n # Holds mock KeyValuePair objects\n # Key is a KeyValuePair name and value is the KeyValuePair object\n self._datastore_items = {}\n\n ##################################\n # General methods\n ##################################\n\n def get_user_info(self):\n \"\"\"\n Retrieve information about the current user which is authenticated against StackStorm and\n used to perform other datastore operations via the API.\n\n :rtype: ``dict``\n \"\"\"\n result = {\n \"username\": self._username,\n \"rbac\": {\"is_admin\": True, \"enabled\": True, \"roles\": [\"admin\"]},\n \"authentication\": {\"method\": \"authentication token\", \"location\": \"header\"},\n }\n\n return result\n\n ##################################\n # Methods for datastore management\n ##################################\n\n def list_values(self, local=True, prefix=None):\n \"\"\"\n Return a list of all values stored in a dictionary which is local to this class.\n \"\"\"\n key_prefix = self._get_full_key_prefix(local=local, prefix=prefix)\n\n if not key_prefix:\n return list(self._datastore_items.values())\n\n result = []\n for name, kvp in self._datastore_items.items():\n if name.startswith(key_prefix):\n result.append(kvp)\n\n return result\n\n def get_value(self, name, local=True, scope=SYSTEM_SCOPE, decrypt=False):\n \"\"\"\n Return a particular value stored in a dictionary which is local to this class.\n \"\"\"\n name = self._get_full_key_name(name=name, local=local)\n\n if name not in self._datastore_items:\n return None\n\n kvp = self._datastore_items[name]\n return kvp.value\n\n def set_value(\n self, name, value, ttl=None, local=True, scope=SYSTEM_SCOPE, encrypt=False\n ):\n \"\"\"\n Store a value in a dictionary which is local to this class.\n \"\"\"\n if ttl:\n raise 
ValueError(\n 'MockDatastoreService.set_value doesn\\'t support \"ttl\" argument'\n )\n\n name = self._get_full_key_name(name=name, local=local)\n\n instance = KeyValuePair()\n instance.id = name\n instance.name = name\n instance.value = value\n\n self._datastore_items[name] = instance\n return True\n\n def delete_value(self, name, local=True, scope=SYSTEM_SCOPE):\n \"\"\"\n Delete a value from a dictionary which is local to this class.\n \"\"\"\n name = self._get_full_key_name(name=name, local=local)\n\n if name not in self._datastore_items:\n return False\n\n del self._datastore_items[name]\n return True\n", "path": "st2tests/st2tests/mocks/datastore.py"}], "after_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nMock classes for use in pack testing.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom st2common.constants.keyvalue import SYSTEM_SCOPE\nfrom st2common.services.datastore import BaseDatastoreService\nfrom st2client.models.keyvalue import KeyValuePair\n\n__all__ = [\"MockDatastoreService\"]\n\n\nclass MockDatastoreService(BaseDatastoreService):\n \"\"\"\n Mock DatastoreService for use in testing.\n \"\"\"\n\n def __init__(self, logger, pack_name, class_name, api_username=None):\n self._pack_name = pack_name\n self._class_name = class_name\n self._username = api_username or \"admin\"\n self._logger = logger\n\n # Holds mock KeyValuePair objects\n # Key is a KeyValuePair name and value is the KeyValuePair object\n self._datastore_items = {}\n\n ##################################\n # General methods\n ##################################\n\n def get_user_info(self):\n \"\"\"\n Retrieve information about the current user which is authenticated against StackStorm and\n used to perform other datastore operations via the API.\n\n :rtype: ``dict``\n \"\"\"\n result = {\n \"username\": self._username,\n \"rbac\": {\"is_admin\": True, \"enabled\": True, \"roles\": [\"admin\"]},\n \"authentication\": {\"method\": \"authentication token\", \"location\": \"header\"},\n }\n\n return result\n\n ##################################\n # Methods for datastore management\n ##################################\n\n def list_values(self, local=True, prefix=None):\n \"\"\"\n Return a list of all values stored in a dictionary which is local to this class.\n \"\"\"\n key_prefix = self._get_full_key_prefix(local=local, prefix=prefix)\n\n if not key_prefix:\n return list(self._datastore_items.values())\n\n result = []\n for name, kvp in self._datastore_items.items():\n if name.startswith(key_prefix):\n result.append(kvp)\n\n return result\n\n def get_value(self, name, local=True, scope=SYSTEM_SCOPE, decrypt=False):\n \"\"\"\n Return a particular value stored in a dictionary which is local to this class.\n \"\"\"\n name = self._get_full_key_name(name=name, local=local)\n\n if name not in self._datastore_items:\n return None\n\n kvp = self._datastore_items[name]\n return kvp.value\n\n def 
set_value(\n self, name, value, ttl=None, local=True, scope=SYSTEM_SCOPE, encrypt=False\n ):\n \"\"\"\n Store a value in a dictionary which is local to this class.\n \"\"\"\n\n name = self._get_full_key_name(name=name, local=local)\n\n instance = KeyValuePair()\n instance.id = name\n instance.name = name\n instance.value = value\n if ttl:\n self._logger.warning(\n \"MockDatastoreService is not able to expire keys based on ttl.\"\n )\n instance.ttl = ttl\n\n self._datastore_items[name] = instance\n return True\n\n def delete_value(self, name, local=True, scope=SYSTEM_SCOPE):\n \"\"\"\n Delete a value from a dictionary which is local to this class.\n \"\"\"\n name = self._get_full_key_name(name=name, local=local)\n\n if name not in self._datastore_items:\n return False\n\n del self._datastore_items[name]\n return True\n", "path": "st2tests/st2tests/mocks/datastore.py"}]}
| 1,809 | 294 |
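As a side note on the StackStorm record above: the recorded fix replaces the hard `ValueError` with a logged warning so tests that pass `ttl` keep running. A minimal standalone sketch of that warn-instead-of-raise pattern (the class and names here are illustrative, not the actual `MockDatastoreService` API):

```python
import logging

logger = logging.getLogger(__name__)


class FakeDatastore:
    """Tiny in-memory stand-in that tolerates a ttl it cannot honour."""

    def __init__(self):
        self._items = {}

    def set_value(self, name, value, ttl=None):
        # An in-memory mock cannot really expire keys, so warn and carry on
        # instead of raising and breaking the caller's tests.
        if ttl:
            logger.warning("FakeDatastore cannot expire keys; ignoring ttl=%s", ttl)
        self._items[name] = (value, ttl)
        return True
```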
gh_patches_debug_30371
|
rasdani/github-patches
|
git_diff
|
Flexget__Flexget-2222
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot use variables with integer-only values
### Expected behaviour:
Variables should be able to handle integer-only values.
### Actual behaviour:
Configuration parsing error (when using ``'{? deluge.port ?}'``): ``Got `50586`, expected: integer``
Configuration parsing error (when using ``{? deluge.port ?}``):
```
while parsing a flow mapping
  in "<unicode string>", line 16, column 13:
      port: {? deluge.port ?}
            ^
expected ',' or '}', but got '?'
  in "<unicode string>", line 16, column 28:
      port: {? deluge.port ?}
                            ^
```
### Steps to reproduce:
- Step 1: Try to use the config below.
#### Config:
```
from_deluge:
host: '{? deluge.host ?}'
port: '{? deluge.port ?}'
```
#### Log:
See above.
### Additional information:
- Flexget Version: 2.10.24
- Python Version: 2.7.9
- Installation method: pip
- OS and version: macOS El Capitan 10.11.6
- Link to crash log: n/a
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/plugins/modify/variables.py`
Content:
```
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
3
4 import codecs
5 import logging
6 import os
7 from datetime import datetime
8
9 import yaml
10
11 from jinja2 import Environment, TemplateError
12
13 from sqlalchemy import Column
14 from sqlalchemy.sql.sqltypes import Unicode, DateTime, Integer
15
16 from flexget import db_schema
17 from flexget.config_schema import register_config_key
18 from flexget.event import event
19 from flexget.manager import Session
20 from flexget.plugin import PluginError
21 from flexget.utils.database import json_synonym
22
23 log = logging.getLogger('variables')
24
25 DB_VERSION = 0
26 Base = db_schema.versioned_base('variables', DB_VERSION)
27
28
29 class Variables(Base):
30 __tablename__ = 'variables'
31
32 id = Column(Integer, primary_key=True)
33 _variables = Column('variables', Unicode)
34 variables = json_synonym('_variables')
35 added = Column(DateTime, default=datetime.now)
36
37
38 def variables_from_file(config_base, filename):
39 variables_file = os.path.join(config_base, filename)
40 if not os.path.exists(variables_file):
41 raise PluginError('File %s does not exist!' % variables_file)
42 try:
43 with codecs.open(variables_file, 'rb', 'utf-8') as f:
44 variables_dict = yaml.safe_load(f.read())
45 except yaml.YAMLError as e:
46 raise PluginError('Invalid variables file: %s' % e)
47 return variables_dict or {}
48
49
50 def variables_from_db():
51 with Session() as session:
52 variables = session.query(Variables).first()
53 if variables:
54 return variables.variables
55 else:
56 return {}
57
58
59 def variables_to_db(variables_dict):
60 with Session() as session:
61 variables = session.query(Variables).first()
62 if not variables:
63 variables = Variables()
64 variables.variables = variables_dict
65 session.merge(variables)
66
67
68 @event('manager.before_config_validate')
69 def process_variables(config, manager):
70 """Render all string elements of the config against defined variables."""
71 env_params = {
72 'block_start_string': '^^disabled^^',
73 'block_end_string': '^^disabled^^',
74 'variable_start_string': '{?',
75 'variable_end_string': '?}'
76 }
77 if 'variables' not in config or config.get('variables') is False:
78 return
79 env = Environment(**env_params)
80 if isinstance(config['variables'], bool):
81 log.debug('trying to load variables from DB')
82 variables = variables_from_db()
83 else:
84 log.debug('trying to load variables from file')
85 variables = variables_from_file(manager.config_base, config['variables'])
86 log.debug('updating DB with variable file contents')
87 variables_to_db(variables)
88 env.globals = variables
89 _process(config, env)
90 return config
91
92
93 def _process(element, environment):
94 if isinstance(element, dict):
95 for k, v in element.items():
96 new_key = _process(k, environment)
97 if new_key:
98 element[new_key] = element.pop(k)
99 k = new_key
100 val = _process(element[k], environment)
101 if val:
102 element[k] = val
103 elif isinstance(element, list):
104 for i, v in enumerate(element):
105 val = _process(v, environment)
106 if val:
107 element[i] = val
108 elif isinstance(element, str) and '{?' in element:
109 try:
110 template = environment.from_string(element)
111 return template.render()
112 except (TemplateError, TypeError):
113 return None
114
115
116 variables_config_schema = {'type': ['string', 'boolean']}
117
118
119 @event('config.register')
120 def register_config():
121 register_config_key('variables', variables_config_schema)
122
```
Path: `flexget/_version.py`
Content:
```
1 """
2 Current FlexGet version.
3 This is contained in a separate file so that it can be easily read by setup.py, and easily edited and committed by
4 release scripts in continuous integration. Should (almost) never be set manually.
5
6 The version should always be set to the <next release version>.dev
7 The jenkins release job will automatically strip the .dev for release,
8 and update the version again for continued development.
9 """
10 __version__ = '2.15.2.dev'
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/flexget/_version.py b/flexget/_version.py
--- a/flexget/_version.py
+++ b/flexget/_version.py
@@ -7,4 +7,4 @@
The jenkins release job will automatically strip the .dev for release,
and update the version again for continued development.
"""
-__version__ = '2.15.2.dev'
+__version__ = '2.16.0.dev'
diff --git a/flexget/plugins/modify/variables.py b/flexget/plugins/modify/variables.py
--- a/flexget/plugins/modify/variables.py
+++ b/flexget/plugins/modify/variables.py
@@ -8,7 +8,8 @@
import yaml
-from jinja2 import Environment, TemplateError
+from jinja2 import TemplateError
+from jinja2.nativetypes import NativeEnvironment
from sqlalchemy import Column
from sqlalchemy.sql.sqltypes import Unicode, DateTime, Integer
@@ -76,10 +77,13 @@
}
if 'variables' not in config or config.get('variables') is False:
return
- env = Environment(**env_params)
+ env = NativeEnvironment(**env_params)
if isinstance(config['variables'], bool):
log.debug('trying to load variables from DB')
variables = variables_from_db()
+ elif isinstance(config['variables'], dict):
+ log.debug('loading variables from config')
+ variables = config['variables']
else:
log.debug('trying to load variables from file')
variables = variables_from_file(manager.config_base, config['variables'])
@@ -113,7 +117,7 @@
return None
-variables_config_schema = {'type': ['string', 'boolean']}
+variables_config_schema = {'type': ['string', 'boolean', 'object']}
@event('config.register')
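
The heart of the fix above is swapping Jinja2's standard `Environment` for `NativeEnvironment`, which returns native Python objects instead of coercing every rendered result to a string. A small illustration of the difference using the `deluge.port` value from the issue (assumes Jinja2 2.10 or newer, where `jinja2.nativetypes` exists):

```python
from jinja2 import Environment
from jinja2.nativetypes import NativeEnvironment

params = {'variable_start_string': '{?', 'variable_end_string': '?}'}
variables = {'deluge': {'port': 50586}}

plain = Environment(**params).from_string('{? deluge.port ?}')
native = NativeEnvironment(**params).from_string('{? deluge.port ?}')

print(repr(plain.render(**variables)))   # '50586' -> a string, fails the integer schema
print(repr(native.render(**variables)))  # 50586  -> stays an int, validates fine
```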
|
{"golden_diff": "diff --git a/flexget/_version.py b/flexget/_version.py\n--- a/flexget/_version.py\n+++ b/flexget/_version.py\n@@ -7,4 +7,4 @@\n The jenkins release job will automatically strip the .dev for release,\n and update the version again for continued development.\n \"\"\"\n-__version__ = '2.15.2.dev'\n+__version__ = '2.16.0.dev'\ndiff --git a/flexget/plugins/modify/variables.py b/flexget/plugins/modify/variables.py\n--- a/flexget/plugins/modify/variables.py\n+++ b/flexget/plugins/modify/variables.py\n@@ -8,7 +8,8 @@\n \n import yaml\n \n-from jinja2 import Environment, TemplateError\n+from jinja2 import TemplateError\n+from jinja2.nativetypes import NativeEnvironment\n \n from sqlalchemy import Column\n from sqlalchemy.sql.sqltypes import Unicode, DateTime, Integer\n@@ -76,10 +77,13 @@\n }\n if 'variables' not in config or config.get('variables') is False:\n return\n- env = Environment(**env_params)\n+ env = NativeEnvironment(**env_params)\n if isinstance(config['variables'], bool):\n log.debug('trying to load variables from DB')\n variables = variables_from_db()\n+ elif isinstance(config['variables'], dict):\n+ log.debug('loading variables from config')\n+ variables = config['variables']\n else:\n log.debug('trying to load variables from file')\n variables = variables_from_file(manager.config_base, config['variables'])\n@@ -113,7 +117,7 @@\n return None\n \n \n-variables_config_schema = {'type': ['string', 'boolean']}\n+variables_config_schema = {'type': ['string', 'boolean', 'object']}\n \n \n @event('config.register')\n", "issue": "Cannot use variables with integer-only values\n### Expected behaviour:\r\nVariables should be able to handle integer-only values.\r\n\r\n### Actual behaviour:\r\nConfiguration parsing error (when using ``'{? deluge.port ?}'``): ``Got `50586`, expected: integer``\r\nConfiguration parsing error (when using ``{? deluge.port ?}``):\r\n```\r\nwhile parsing a flow mapping in \"<unicode string>\", line 16, column 13: port: {? deluge.port ?} ^ expected ',' or '}', but got '?' in \"<unicode string>\", line 16, column 28: port: {? deluge.port ?} ^\r\n```\r\n\r\n### Steps to reproduce:\r\n- Step 1: Try to use below config.\r\n\r\n#### Config:\r\n```\r\n from_deluge:\r\n host: '{? deluge.host ?}'\r\n port: '{? 
deluge.port ?}'\r\n\r\n```\r\n \r\n#### Log:\r\nSee above.\r\n\r\n### Additional information:\r\n\r\n- Flexget Version: 2.10.24\r\n- Python Version: 2.7.9\r\n- Installation method: pip\r\n- OS and version: macOS El Capitan 10.11.6\r\n- Link to crash log: n/a\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\n\nimport codecs\nimport logging\nimport os\nfrom datetime import datetime\n\nimport yaml\n\nfrom jinja2 import Environment, TemplateError\n\nfrom sqlalchemy import Column\nfrom sqlalchemy.sql.sqltypes import Unicode, DateTime, Integer\n\nfrom flexget import db_schema\nfrom flexget.config_schema import register_config_key\nfrom flexget.event import event\nfrom flexget.manager import Session\nfrom flexget.plugin import PluginError\nfrom flexget.utils.database import json_synonym\n\nlog = logging.getLogger('variables')\n\nDB_VERSION = 0\nBase = db_schema.versioned_base('variables', DB_VERSION)\n\n\nclass Variables(Base):\n __tablename__ = 'variables'\n\n id = Column(Integer, primary_key=True)\n _variables = Column('variables', Unicode)\n variables = json_synonym('_variables')\n added = Column(DateTime, default=datetime.now)\n\n\ndef variables_from_file(config_base, filename):\n variables_file = os.path.join(config_base, filename)\n if not os.path.exists(variables_file):\n raise PluginError('File %s does not exist!' % variables_file)\n try:\n with codecs.open(variables_file, 'rb', 'utf-8') as f:\n variables_dict = yaml.safe_load(f.read())\n except yaml.YAMLError as e:\n raise PluginError('Invalid variables file: %s' % e)\n return variables_dict or {}\n\n\ndef variables_from_db():\n with Session() as session:\n variables = session.query(Variables).first()\n if variables:\n return variables.variables\n else:\n return {}\n\n\ndef variables_to_db(variables_dict):\n with Session() as session:\n variables = session.query(Variables).first()\n if not variables:\n variables = Variables()\n variables.variables = variables_dict\n session.merge(variables)\n\n\n@event('manager.before_config_validate')\ndef process_variables(config, manager):\n \"\"\"Render all string elements of the config against defined variables.\"\"\"\n env_params = {\n 'block_start_string': '^^disabled^^',\n 'block_end_string': '^^disabled^^',\n 'variable_start_string': '{?',\n 'variable_end_string': '?}'\n }\n if 'variables' not in config or config.get('variables') is False:\n return\n env = Environment(**env_params)\n if isinstance(config['variables'], bool):\n log.debug('trying to load variables from DB')\n variables = variables_from_db()\n else:\n log.debug('trying to load variables from file')\n variables = variables_from_file(manager.config_base, config['variables'])\n log.debug('updating DB with variable file contents')\n variables_to_db(variables)\n env.globals = variables\n _process(config, env)\n return config\n\n\ndef _process(element, environment):\n if isinstance(element, dict):\n for k, v in element.items():\n new_key = _process(k, environment)\n if new_key:\n element[new_key] = element.pop(k)\n k = new_key\n val = _process(element[k], environment)\n if val:\n element[k] = val\n elif isinstance(element, list):\n for i, v in enumerate(element):\n val = _process(v, environment)\n if val:\n element[i] = val\n elif isinstance(element, str) and '{?' 
in element:\n try:\n template = environment.from_string(element)\n return template.render()\n except (TemplateError, TypeError):\n return None\n\n\nvariables_config_schema = {'type': ['string', 'boolean']}\n\n\n@event('config.register')\ndef register_config():\n register_config_key('variables', variables_config_schema)\n", "path": "flexget/plugins/modify/variables.py"}, {"content": "\"\"\"\nCurrent FlexGet version.\nThis is contained in a separate file so that it can be easily read by setup.py, and easily edited and committed by\nrelease scripts in continuous integration. Should (almost) never be set manually.\n\nThe version should always be set to the <next release version>.dev\nThe jenkins release job will automatically strip the .dev for release,\nand update the version again for continued development.\n\"\"\"\n__version__ = '2.15.2.dev'\n", "path": "flexget/_version.py"}], "after_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\n\nimport codecs\nimport logging\nimport os\nfrom datetime import datetime\n\nimport yaml\n\nfrom jinja2 import TemplateError\nfrom jinja2.nativetypes import NativeEnvironment\n\nfrom sqlalchemy import Column\nfrom sqlalchemy.sql.sqltypes import Unicode, DateTime, Integer\n\nfrom flexget import db_schema\nfrom flexget.config_schema import register_config_key\nfrom flexget.event import event\nfrom flexget.manager import Session\nfrom flexget.plugin import PluginError\nfrom flexget.utils.database import json_synonym\n\nlog = logging.getLogger('variables')\n\nDB_VERSION = 0\nBase = db_schema.versioned_base('variables', DB_VERSION)\n\n\nclass Variables(Base):\n __tablename__ = 'variables'\n\n id = Column(Integer, primary_key=True)\n _variables = Column('variables', Unicode)\n variables = json_synonym('_variables')\n added = Column(DateTime, default=datetime.now)\n\n\ndef variables_from_file(config_base, filename):\n variables_file = os.path.join(config_base, filename)\n if not os.path.exists(variables_file):\n raise PluginError('File %s does not exist!' 
% variables_file)\n try:\n with codecs.open(variables_file, 'rb', 'utf-8') as f:\n variables_dict = yaml.safe_load(f.read())\n except yaml.YAMLError as e:\n raise PluginError('Invalid variables file: %s' % e)\n return variables_dict or {}\n\n\ndef variables_from_db():\n with Session() as session:\n variables = session.query(Variables).first()\n if variables:\n return variables.variables\n else:\n return {}\n\n\ndef variables_to_db(variables_dict):\n with Session() as session:\n variables = session.query(Variables).first()\n if not variables:\n variables = Variables()\n variables.variables = variables_dict\n session.merge(variables)\n\n\n@event('manager.before_config_validate')\ndef process_variables(config, manager):\n \"\"\"Render all string elements of the config against defined variables.\"\"\"\n env_params = {\n 'block_start_string': '^^disabled^^',\n 'block_end_string': '^^disabled^^',\n 'variable_start_string': '{?',\n 'variable_end_string': '?}'\n }\n if 'variables' not in config or config.get('variables') is False:\n return\n env = NativeEnvironment(**env_params)\n if isinstance(config['variables'], bool):\n log.debug('trying to load variables from DB')\n variables = variables_from_db()\n elif isinstance(config['variables'], dict):\n log.debug('loading variables from config')\n variables = config['variables']\n else:\n log.debug('trying to load variables from file')\n variables = variables_from_file(manager.config_base, config['variables'])\n log.debug('updating DB with variable file contents')\n variables_to_db(variables)\n env.globals = variables\n _process(config, env)\n return config\n\n\ndef _process(element, environment):\n if isinstance(element, dict):\n for k, v in element.items():\n new_key = _process(k, environment)\n if new_key:\n element[new_key] = element.pop(k)\n k = new_key\n val = _process(element[k], environment)\n if val:\n element[k] = val\n elif isinstance(element, list):\n for i, v in enumerate(element):\n val = _process(v, environment)\n if val:\n element[i] = val\n elif isinstance(element, str) and '{?' in element:\n try:\n template = environment.from_string(element)\n return template.render()\n except (TemplateError, TypeError):\n return None\n\n\nvariables_config_schema = {'type': ['string', 'boolean', 'object']}\n\n\n@event('config.register')\ndef register_config():\n register_config_key('variables', variables_config_schema)\n", "path": "flexget/plugins/modify/variables.py"}, {"content": "\"\"\"\nCurrent FlexGet version.\nThis is contained in a separate file so that it can be easily read by setup.py, and easily edited and committed by\nrelease scripts in continuous integration. Should (almost) never be set manually.\n\nThe version should always be set to the <next release version>.dev\nThe jenkins release job will automatically strip the .dev for release,\nand update the version again for continued development.\n\"\"\"\n__version__ = '2.16.0.dev'\n", "path": "flexget/_version.py"}]}
| 1,704 | 403 |
gh_patches_debug_5056
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-2808
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mitmproxy crashes when viewing response body in url-encoded mode
##### Steps to reproduce the problem:
1. Run mitmproxy.
2. `n` -> `Enter` -> `Enter` -> `m` -> `url-encoded`
_I am seeing:_
```
Traceback (most recent call last):
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/master.py", line 216, in run
self.loop.run()
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 278, in run
self._run()
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 376, in _run
self.event_loop.run()
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 682, in run
self._loop()
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 719, in _loop
self._watch_files[fd]()
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/raw_display.py", line 393, in <lambda>
event_loop, callback, self.get_available_raw_input())
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/raw_display.py", line 493, in parse_input
callback(processed, processed_codes)
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 403, in _update
self.process_input(keys)
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 503, in process_input
k = self._topmost_widget.keypress(self.screen_size, k)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py", line 308, in keypress
k = super().keypress(size, k)
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py", line 1128, in keypress
return self.body.keypress( (maxcol, remaining), key )
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py", line 44, in keypress
ret = super().keypress(size, key)
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py", line 1128, in keypress
return self.body.keypress( (maxcol, remaining), key )
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py", line 592, in keypress
*self.calculate_padding_filler(size, True)), key)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/overlay.py", line 130, in keypress
self.master.keymap.handle("global", key)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/keymap.py", line 130, in handle
return self.executor(b.command)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/commandexecutor.py", line 17, in __call__
ret = self.master.commands.call(cmd)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py", line 221, in call
return self.call_args(parts[0], parts[1:])
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py", line 212, in call_args
return self.commands[path].call(args)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py", line 101, in call
ret = self.func(*pargs)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py", line 251, in wrapper
return function(*args, **kwargs)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/consoleaddons.py", line 174, in nav_select
self.master.inject_key("m_select")
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/master.py", line 181, in inject_key
self.loop.process_input([key])
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 503, in process_input
k = self._topmost_widget.keypress(self.screen_size, k)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py", line 308, in keypress
k = super().keypress(size, k)
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py", line 1128, in keypress
return self.body.keypress( (maxcol, remaining), key )
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py", line 44, in keypress
ret = super().keypress(size, key)
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py", line 1128, in keypress
return self.body.keypress( (maxcol, remaining), key )
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py", line 592, in keypress
*self.calculate_padding_filler(size, True)), key)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/overlay.py", line 121, in keypress
signals.pop_view_state.send(self)
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/blinker/base.py", line 267, in send
for receiver in self.receivers_for(sender)]
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/blinker/base.py", line 267, in <listcomp>
for receiver in self.receivers_for(sender)]
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py", line 244, in pop
self.focus_changed()
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py", line 209, in focus_changed
i.call("focus_changed")
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py", line 118, in call
getattr(self.top_window(), name)(*args, **kwargs)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py", line 233, in focus_changed
self.body.focus_changed()
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py", line 58, in focus_changed
self.show()
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/tabs.py", line 73, in show
body = self.tabs[self.tab_offset][1](),
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py", line 86, in view_request
return self.conn_text(self.flow.request)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py", line 185, in conn_text
msg, body = self.content_view(viewmode, conn)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py", line 112, in content_view
return self._get_content_view(viewmode, limit, flow_modify_cache_invalidation)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py", line 132, in _get_content_view
for line in lines:
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/contentviews/__init__.py", line 85, in safe_to_print
for line in lines:
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/contentviews/base.py", line 52, in format_dict
max_key_len = max(len(k) for k in d.keys())
ValueError: max() arg is an empty sequence
```
##### Any other comments? What have you tried so far?
This issue is reproducible not only for a request/response without a body, but also for the `query` content type (and maybe some others). I'll continue digging.
##### System information
Mitmproxy: 3.0.0.dev53 (commit 3b5237c)
Python: 3.5.2
OpenSSL: OpenSSL 1.1.0g 2 Nov 2017
Platform: Linux-4.4.0-109-generic-x86_64-with-Ubuntu-16.04-xenial
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/contentviews/base.py`
Content:
```
1 # Default view cutoff *in lines*
2 import typing
3
4 VIEW_CUTOFF = 512
5
6 KEY_MAX = 30
7
8 TTextType = typing.Union[str, bytes] # FIXME: This should be either bytes or str ultimately.
9 TViewLine = typing.List[typing.Tuple[str, TTextType]]
10 TViewResult = typing.Tuple[str, typing.Iterator[TViewLine]]
11
12
13 class View:
14 name = None # type: str
15 prompt = None # type: typing.Tuple[str,str]
16 content_types = [] # type: typing.List[str]
17
18 def __call__(self, data: bytes, **metadata) -> TViewResult:
19 """
20 Transform raw data into human-readable output.
21
22 Args:
23 data: the data to decode/format.
24 metadata: optional keyword-only arguments for metadata. Implementations must not
25 rely on a given argument being present.
26
27 Returns:
28 A (description, content generator) tuple.
29
30 The content generator yields lists of (style, text) tuples, where each list represents
31 a single line. ``text`` is a unfiltered byte string which may need to be escaped,
32 depending on the used output.
33
34 Caveats:
35 The content generator must not yield tuples of tuples,
36 because urwid cannot process that. You have to yield a *list* of tuples per line.
37 """
38 raise NotImplementedError() # pragma: no cover
39
40
41 def format_dict(
42 d: typing.Mapping[TTextType, TTextType]
43 ) -> typing.Iterator[TViewLine]:
44 """
45 Helper function that transforms the given dictionary into a list of
46 [
47 ("key", key )
48 ("value", value)
49 ]
50 entries, where key is padded to a uniform width.
51 """
52 max_key_len = max(len(k) for k in d.keys())
53 max_key_len = min(max_key_len, KEY_MAX)
54 for key, value in d.items():
55 if isinstance(key, bytes):
56 key += b":"
57 else:
58 key += ":"
59 key = key.ljust(max_key_len + 2)
60 yield [
61 ("header", key),
62 ("text", value)
63 ]
64
65
66 def format_text(text: TTextType) -> typing.Iterator[TViewLine]:
67 """
68 Helper function that transforms bytes into the view output format.
69 """
70 for line in text.splitlines():
71 yield [("text", line)]
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitmproxy/contentviews/base.py b/mitmproxy/contentviews/base.py
--- a/mitmproxy/contentviews/base.py
+++ b/mitmproxy/contentviews/base.py
@@ -49,8 +49,9 @@
]
entries, where key is padded to a uniform width.
"""
- max_key_len = max(len(k) for k in d.keys())
- max_key_len = min(max_key_len, KEY_MAX)
+
+ max_key_len = max((len(k) for k in d.keys()), default=0)
+ max_key_len = min((max_key_len, KEY_MAX), default=0)
for key, value in d.items():
if isinstance(key, bytes):
key += b":"
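
The crash in the traceback comes down to calling `max()` on an empty sequence, and the patch above leans on the `default` keyword (available since Python 3.4) to make the empty case degrade gracefully. A quick illustration:

```python
d = {}  # e.g. a urlencoded body with no key/value pairs

try:
    max(len(k) for k in d.keys())
except ValueError as e:
    print(e)  # max() arg is an empty sequence

# With a default, an empty mapping simply yields 0 instead of raising:
print(max((len(k) for k in d.keys()), default=0))
```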
|
{"golden_diff": "diff --git a/mitmproxy/contentviews/base.py b/mitmproxy/contentviews/base.py\n--- a/mitmproxy/contentviews/base.py\n+++ b/mitmproxy/contentviews/base.py\n@@ -49,8 +49,9 @@\n ]\n entries, where key is padded to a uniform width.\n \"\"\"\n- max_key_len = max(len(k) for k in d.keys())\n- max_key_len = min(max_key_len, KEY_MAX)\n+\n+ max_key_len = max((len(k) for k in d.keys()), default=0)\n+ max_key_len = min((max_key_len, KEY_MAX), default=0)\n for key, value in d.items():\n if isinstance(key, bytes):\n key += b\":\"\n", "issue": "Mitmproxy crashes, when viewing response body in url-encoded mode\n##### Steps to reproduce the problem:\r\n\r\n1. Run mitmproxy.\r\n2. `n` -> `Enter` -> `Enter` -> `m` -> `url-encoded`\r\n\r\n_I am seeing:_\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/master.py\", line 216, in run\r\n self.loop.run()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 278, in run\r\n self._run()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 376, in _run\r\n self.event_loop.run()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 682, in run\r\n self._loop()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 719, in _loop\r\n self._watch_files[fd]()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/raw_display.py\", line 393, in <lambda>\r\n event_loop, callback, self.get_available_raw_input())\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/raw_display.py\", line 493, in parse_input\r\n callback(processed, processed_codes)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 403, in _update\r\n self.process_input(keys)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 503, in process_input\r\n k = self._topmost_widget.keypress(self.screen_size, k)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py\", line 308, in keypress\r\n k = super().keypress(size, k)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py\", line 1128, in keypress\r\n return self.body.keypress( (maxcol, remaining), key )\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py\", line 44, in keypress\r\n ret = super().keypress(size, key)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py\", line 1128, in keypress\r\n return self.body.keypress( (maxcol, remaining), key )\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py\", line 592, in keypress\r\n *self.calculate_padding_filler(size, True)), key)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/overlay.py\", line 130, in keypress\r\n self.master.keymap.handle(\"global\", key)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/keymap.py\", line 130, in handle\r\n return self.executor(b.command)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/commandexecutor.py\", line 17, in __call__\r\n ret = self.master.commands.call(cmd)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py\", line 221, in call\r\n return self.call_args(parts[0], parts[1:])\r\n File 
\"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py\", line 212, in call_args\r\n return self.commands[path].call(args)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py\", line 101, in call\r\n ret = self.func(*pargs)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py\", line 251, in wrapper\r\n return function(*args, **kwargs)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/consoleaddons.py\", line 174, in nav_select\r\n self.master.inject_key(\"m_select\")\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/master.py\", line 181, in inject_key\r\n self.loop.process_input([key])\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 503, in process_input\r\n k = self._topmost_widget.keypress(self.screen_size, k)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py\", line 308, in keypress\r\n k = super().keypress(size, k)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py\", line 1128, in keypress\r\n return self.body.keypress( (maxcol, remaining), key )\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py\", line 44, in keypress\r\n ret = super().keypress(size, key)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py\", line 1128, in keypress\r\n return self.body.keypress( (maxcol, remaining), key )\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py\", line 592, in keypress\r\n *self.calculate_padding_filler(size, True)), key)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/overlay.py\", line 121, in keypress\r\n signals.pop_view_state.send(self)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/blinker/base.py\", line 267, in send\r\n for receiver in self.receivers_for(sender)]\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/blinker/base.py\", line 267, in <listcomp>\r\n for receiver in self.receivers_for(sender)]\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py\", line 244, in pop\r\n self.focus_changed()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py\", line 209, in focus_changed\r\n i.call(\"focus_changed\")\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py\", line 118, in call\r\n getattr(self.top_window(), name)(*args, **kwargs)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py\", line 233, in focus_changed\r\n self.body.focus_changed()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py\", line 58, in focus_changed\r\n self.show()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/tabs.py\", line 73, in show\r\n body = self.tabs[self.tab_offset][1](),\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py\", line 86, in view_request\r\n return self.conn_text(self.flow.request)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py\", line 185, in conn_text\r\n msg, body = self.content_view(viewmode, conn)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py\", line 112, in content_view\r\n return self._get_content_view(viewmode, limit, flow_modify_cache_invalidation)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/flowview.py\", line 132, in _get_content_view\r\n for line in 
lines:\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/contentviews/__init__.py\", line 85, in safe_to_print\r\n for line in lines:\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/contentviews/base.py\", line 52, in format_dict\r\n max_key_len = max(len(k) for k in d.keys())\r\nValueError: max() arg is an empty sequence\r\n```\r\n\r\n##### Any other comments? What have you tried so far?\r\n\r\nThis issue is reproducible not only for request/response without body, but for `query` (maybe some others) content type as well. I continue digging.\r\n\r\n##### System information\r\n\r\nMitmproxy: 3.0.0.dev53 (commit 3b5237c) \r\nPython: 3.5.2\r\nOpenSSL: OpenSSL 1.1.0g 2 Nov 2017\r\nPlatform: Linux-4.4.0-109-generic-x86_64-with-Ubuntu-16.04-xenial\r\n\n", "before_files": [{"content": "# Default view cutoff *in lines*\nimport typing\n\nVIEW_CUTOFF = 512\n\nKEY_MAX = 30\n\nTTextType = typing.Union[str, bytes] # FIXME: This should be either bytes or str ultimately.\nTViewLine = typing.List[typing.Tuple[str, TTextType]]\nTViewResult = typing.Tuple[str, typing.Iterator[TViewLine]]\n\n\nclass View:\n name = None # type: str\n prompt = None # type: typing.Tuple[str,str]\n content_types = [] # type: typing.List[str]\n\n def __call__(self, data: bytes, **metadata) -> TViewResult:\n \"\"\"\n Transform raw data into human-readable output.\n\n Args:\n data: the data to decode/format.\n metadata: optional keyword-only arguments for metadata. Implementations must not\n rely on a given argument being present.\n\n Returns:\n A (description, content generator) tuple.\n\n The content generator yields lists of (style, text) tuples, where each list represents\n a single line. ``text`` is a unfiltered byte string which may need to be escaped,\n depending on the used output.\n\n Caveats:\n The content generator must not yield tuples of tuples,\n because urwid cannot process that. You have to yield a *list* of tuples per line.\n \"\"\"\n raise NotImplementedError() # pragma: no cover\n\n\ndef format_dict(\n d: typing.Mapping[TTextType, TTextType]\n) -> typing.Iterator[TViewLine]:\n \"\"\"\n Helper function that transforms the given dictionary into a list of\n [\n (\"key\", key )\n (\"value\", value)\n ]\n entries, where key is padded to a uniform width.\n \"\"\"\n max_key_len = max(len(k) for k in d.keys())\n max_key_len = min(max_key_len, KEY_MAX)\n for key, value in d.items():\n if isinstance(key, bytes):\n key += b\":\"\n else:\n key += \":\"\n key = key.ljust(max_key_len + 2)\n yield [\n (\"header\", key),\n (\"text\", value)\n ]\n\n\ndef format_text(text: TTextType) -> typing.Iterator[TViewLine]:\n \"\"\"\n Helper function that transforms bytes into the view output format.\n \"\"\"\n for line in text.splitlines():\n yield [(\"text\", line)]\n", "path": "mitmproxy/contentviews/base.py"}], "after_files": [{"content": "# Default view cutoff *in lines*\nimport typing\n\nVIEW_CUTOFF = 512\n\nKEY_MAX = 30\n\nTTextType = typing.Union[str, bytes] # FIXME: This should be either bytes or str ultimately.\nTViewLine = typing.List[typing.Tuple[str, TTextType]]\nTViewResult = typing.Tuple[str, typing.Iterator[TViewLine]]\n\n\nclass View:\n name = None # type: str\n prompt = None # type: typing.Tuple[str,str]\n content_types = [] # type: typing.List[str]\n\n def __call__(self, data: bytes, **metadata) -> TViewResult:\n \"\"\"\n Transform raw data into human-readable output.\n\n Args:\n data: the data to decode/format.\n metadata: optional keyword-only arguments for metadata. 
Implementations must not\n rely on a given argument being present.\n\n Returns:\n A (description, content generator) tuple.\n\n The content generator yields lists of (style, text) tuples, where each list represents\n a single line. ``text`` is a unfiltered byte string which may need to be escaped,\n depending on the used output.\n\n Caveats:\n The content generator must not yield tuples of tuples,\n because urwid cannot process that. You have to yield a *list* of tuples per line.\n \"\"\"\n raise NotImplementedError() # pragma: no cover\n\n\ndef format_dict(\n d: typing.Mapping[TTextType, TTextType]\n) -> typing.Iterator[TViewLine]:\n \"\"\"\n Helper function that transforms the given dictionary into a list of\n [\n (\"key\", key )\n (\"value\", value)\n ]\n entries, where key is padded to a uniform width.\n \"\"\"\n\n max_key_len = max((len(k) for k in d.keys()), default=0)\n max_key_len = min((max_key_len, KEY_MAX), default=0)\n for key, value in d.items():\n if isinstance(key, bytes):\n key += b\":\"\n else:\n key += \":\"\n key = key.ljust(max_key_len + 2)\n yield [\n (\"header\", key),\n (\"text\", value)\n ]\n\n\ndef format_text(text: TTextType) -> typing.Iterator[TViewLine]:\n \"\"\"\n Helper function that transforms bytes into the view output format.\n \"\"\"\n for line in text.splitlines():\n yield [(\"text\", line)]\n", "path": "mitmproxy/contentviews/base.py"}]}
| 3,087 | 161 |
gh_patches_debug_21179
|
rasdani/github-patches
|
git_diff
|
googleapis__google-api-python-client-903
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add a new mechanism to avoid calling the legacy memcache API
This API is only supported on the python27 runtime. Let's only try to import it in environments that actually support it.
The problem I'm running into is a bit esoteric, but here goes. We've built some GAE API shims to help move our apps to newer App Engine runtimes (like python3) that don't include these legacy APIs. Because of that, when this library tries to import google.appengine.api.memcache, it imports and uses our shim, and that isn't always desirable. Having some way to configure googleapiclient to not use this legacy API even if it is importable would be useful.
Despite this sorta niche use case, I figured I'd propose this change upstream since reducing reliance on a py2-only API shouldn't be too controversial these days.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `googleapiclient/discovery_cache/__init__.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Caching utility for the discovery document."""
16
17 from __future__ import absolute_import
18
19 import logging
20 import datetime
21
22
23 LOGGER = logging.getLogger(__name__)
24
25 DISCOVERY_DOC_MAX_AGE = 60 * 60 * 24 # 1 day
26
27
28 def autodetect():
29 """Detects an appropriate cache module and returns it.
30
31 Returns:
32 googleapiclient.discovery_cache.base.Cache, a cache object which
33 is auto detected, or None if no cache object is available.
34 """
35 try:
36 from google.appengine.api import memcache
37 from . import appengine_memcache
38
39 return appengine_memcache.cache
40 except Exception:
41 try:
42 from . import file_cache
43
44 return file_cache.cache
45 except Exception as e:
46 LOGGER.warning(e, exc_info=True)
47 return None
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/googleapiclient/discovery_cache/__init__.py b/googleapiclient/discovery_cache/__init__.py
--- a/googleapiclient/discovery_cache/__init__.py
+++ b/googleapiclient/discovery_cache/__init__.py
@@ -18,7 +18,7 @@
import logging
import datetime
-
+import os
LOGGER = logging.getLogger(__name__)
@@ -32,16 +32,18 @@
googleapiclient.discovery_cache.base.Cache, a cache object which
is auto detected, or None if no cache object is available.
"""
- try:
- from google.appengine.api import memcache
- from . import appengine_memcache
-
- return appengine_memcache.cache
- except Exception:
+ if 'APPENGINE_RUNTIME' in os.environ:
try:
- from . import file_cache
+ from google.appengine.api import memcache
+ from . import appengine_memcache
+
+ return appengine_memcache.cache
+ except Exception:
+ pass
+ try:
+ from . import file_cache
- return file_cache.cache
- except Exception as e:
- LOGGER.warning(e, exc_info=True)
- return None
+ return file_cache.cache
+ except Exception as e:
+ LOGGER.warning(e, exc_info=True)
+ return None
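
The essence of the change above is gating the legacy import behind the `APPENGINE_RUNTIME` environment variable, which (to my knowledge) only the first-generation python27 App Engine runtime sets, so the import is never even attempted on newer runtimes or against shims. A stripped-down, illustrative sketch of that pattern (return values here are placeholder strings, not the real cache objects):

```python
import os


def pick_cache():
    # Only first-generation App Engine runtimes expose the legacy memcache API,
    # so skip the import entirely everywhere else (python3 runtimes, shims, ...).
    if 'APPENGINE_RUNTIME' in os.environ:
        try:
            from google.appengine.api import memcache  # noqa: F401
            return 'appengine_memcache'
        except Exception:
            pass
    return 'file_cache'
```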
|
{"golden_diff": "diff --git a/googleapiclient/discovery_cache/__init__.py b/googleapiclient/discovery_cache/__init__.py\n--- a/googleapiclient/discovery_cache/__init__.py\n+++ b/googleapiclient/discovery_cache/__init__.py\n@@ -18,7 +18,7 @@\n \n import logging\n import datetime\n-\n+import os\n \n LOGGER = logging.getLogger(__name__)\n \n@@ -32,16 +32,18 @@\n googleapiclient.discovery_cache.base.Cache, a cache object which\n is auto detected, or None if no cache object is available.\n \"\"\"\n- try:\n- from google.appengine.api import memcache\n- from . import appengine_memcache\n-\n- return appengine_memcache.cache\n- except Exception:\n+ if 'APPENGINE_RUNTIME' in os.environ:\n try:\n- from . import file_cache\n+ from google.appengine.api import memcache\n+ from . import appengine_memcache\n+\n+ return appengine_memcache.cache\n+ except Exception:\n+ pass\n+ try:\n+ from . import file_cache\n \n- return file_cache.cache\n- except Exception as e:\n- LOGGER.warning(e, exc_info=True)\n- return None\n+ return file_cache.cache\n+ except Exception as e:\n+ LOGGER.warning(e, exc_info=True)\n+ return None\n", "issue": "Add a new mechanism to avoid calling the legacy memcache API\nThis API is only supported on the python27 runtime. Lets only try to import it in environments that actually support it.\r\n\r\nThe problem I'm running into is a bit esoteric, but here goes. We've built some GAE API shims to help move our apps to newer App Engine runtimes (like python3) that don't include these legacy APIs. Because of that, when this library tries to import google.appengine.api.memcache, it imports and uses our shim, and that isn't always desirable. Having some way to configure googleapiclient to not use this legacy API even if it is importable would be useful.\r\n\r\nDespite this sorta niche use case, I figured I'd propose this change upstream since reducing reliance on a py2-only API shouldn't be too controversial these days.\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Caching utility for the discovery document.\"\"\"\n\nfrom __future__ import absolute_import\n\nimport logging\nimport datetime\n\n\nLOGGER = logging.getLogger(__name__)\n\nDISCOVERY_DOC_MAX_AGE = 60 * 60 * 24 # 1 day\n\n\ndef autodetect():\n \"\"\"Detects an appropriate cache module and returns it.\n\n Returns:\n googleapiclient.discovery_cache.base.Cache, a cache object which\n is auto detected, or None if no cache object is available.\n \"\"\"\n try:\n from google.appengine.api import memcache\n from . import appengine_memcache\n\n return appengine_memcache.cache\n except Exception:\n try:\n from . import file_cache\n\n return file_cache.cache\n except Exception as e:\n LOGGER.warning(e, exc_info=True)\n return None\n", "path": "googleapiclient/discovery_cache/__init__.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Caching utility for the discovery document.\"\"\"\n\nfrom __future__ import absolute_import\n\nimport logging\nimport datetime\nimport os\n\nLOGGER = logging.getLogger(__name__)\n\nDISCOVERY_DOC_MAX_AGE = 60 * 60 * 24 # 1 day\n\n\ndef autodetect():\n \"\"\"Detects an appropriate cache module and returns it.\n\n Returns:\n googleapiclient.discovery_cache.base.Cache, a cache object which\n is auto detected, or None if no cache object is available.\n \"\"\"\n if 'APPENGINE_RUNTIME' in os.environ:\n try:\n from google.appengine.api import memcache\n from . import appengine_memcache\n\n return appengine_memcache.cache\n except Exception:\n pass\n try:\n from . import file_cache\n\n return file_cache.cache\n except Exception as e:\n LOGGER.warning(e, exc_info=True)\n return None\n", "path": "googleapiclient/discovery_cache/__init__.py"}]}
| 852 | 307 |
gh_patches_debug_26506
|
rasdani/github-patches
|
git_diff
|
Flexget__Flexget-1651
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unexpected crash in plugin discover: NoneType object has no attribute 'text'
### Expected behaviour:
List of new episodes are sent to remote deluge daemon
### Actual behaviour:
Unexpected crash has occurred.
BUG: Unhandled error in plugin discover: 'NoneType' object has no attribute 'text'
### Steps to reproduce:
I can reproduce it using my config, but I'm unable to figure out how to gather more detailed information (aside from setting flexget's log to debug).
This issue didn't start occurring until I updated flexget to the latest version. Since then, I've tried the latest version as well as the development version. I've even tried different versions of Python. At the time of this writing, I'm using:
flexget 2.9.7.dev
python 2.7.9
I'm pretty new to flexget, but I think the issue is related to this search string:
https://iptorrents.com/t?4=&5=&22=&q=Teen+Wolf+S06E08&qf=
#### Config:
http://pastebin.com/6sLVmUSe
#### Log:
http://pastebin.com/AY1pqZuP
### Additional information:
- Flexget Version: 2.9.7.dev
- Python Version: 2.7.9
- Installation method: https://flexget.com/GitHubInstall
- OS and version: running docker container phusion/baseimage
- Link to crash log: http://pastebin.com/cweVDm9S
<!---
Please verify that the following data is present before submitting your issue:
- Paste or link to a paste service (http://pastebin.com/ for example) of relevant config (preferably full config including templates if present. Remember to redact any personal information! Please make sure the paste does not expire, if possible.
- Paste or link to a paste service of debug level logs of relevant task/s. Use `flexget -L debug execute --tasks <Task_name>`
- Flexget version (Use `flexget -V` to get it).
- Full Python version (`2.7.11` for example). Run `python -V` to get it.
- Installation method (pip, git install, etc.)
- OS and version
- Attach crash log if available
--->
--- END ISSUE ---
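For context, the `'NoneType' object has no attribute 'text'` pattern almost always means a BeautifulSoup `find()` returned `None` because the expected element was missing from the page (for example, an empty result listing for that search URL). A generic, hedged sketch of the defensive check that avoids this class of crash; this is not the actual FlexGet patch:

```python
from bs4 import BeautifulSoup

html = '<table id="torrents"></table>'  # a results page with no hits
soup = BeautifulSoup(html, 'html.parser')

cell = soup.find('td', class_='ac')      # returns None when nothing matched
seeds = int(cell.text) if cell else 0    # guard before touching .text
print(seeds)
```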
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/plugins/sites/iptorrents.py`
Content:
```
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
3 from future.moves.urllib.parse import quote_plus
4
5 import re
6 import logging
7
8 from flexget import plugin
9 from flexget.config_schema import one_or_more
10 from flexget.entry import Entry
11 from flexget.event import event
12 from flexget.plugins.internal.urlrewriting import UrlRewritingError
13 from flexget.utils import requests
14 from flexget.utils.soup import get_soup
15 from flexget.utils.search import torrent_availability, normalize_unicode
16 from flexget.utils.tools import parse_filesize
17
18 log = logging.getLogger('iptorrents')
19
20 CATEGORIES = {
21
22 # Movies
23 'Movie-all': 72,
24 'Movie-3D': 87,
25 'Movie-480p': 77,
26 'Movie-BD-R': 89,
27 'Movie-BD-Rip': 90,
28 'Movie-DVD-R': 6,
29 'Movie-HD-Bluray': 48,
30 'Movie-Kids': 54,
31 'Movie-MP4': 62,
32 'Movie-Non-English': 38,
33 'Movie-Packs': 68,
34 'Movie-XviD': 17,
35
36 # TV
37 'TV-all': 73,
38 'TV-Sports': 55,
39 'TV-480p': 78,
40 'TV-MP4': 66,
41 'TV-Non-English': 82,
42 'TV-Packs': 65,
43 'TV-Packs-Non-English': 83,
44 'TV-SD-x264': 79,
45 'TV-x264': 5,
46 'TV-XVID': 4,
47 'TV-Web-DL': 22
48 }
49
50 BASE_URL = 'https://iptorrents.com'
51
52
53 class UrlRewriteIPTorrents(object):
54 """
55 IpTorrents urlrewriter and search plugin.
56
57 iptorrents:
58 rss_key: xxxxxxxxx (required)
59 uid: xxxxxxxx (required)
60 password: xxxxxxxx (required)
61 category: HD
62
63 Category is any combination of: all, Movie-3D, Movie-480p, Movie-3D,
64 Movie-480p, Movie-BD-R, Movie-BD-Rip, Movie-DVD-R,
65 Movie-HD-Bluray, Movie-Kids, Movie-MP4,
66 Movie-Non-English, Movie-Packs, Movie-XviD,
67
68 TV-all, TV-Sports, TV-480p, TV-MP4, TV-Non-English, TV-Packs,
69 TV-Packs-Non-English, TV-SD-x264, TV-x264, TV-XVID, TV-Web-DL
70 """
71
72 schema = {
73 'type': 'object',
74 'properties': {
75 'rss_key': {'type': 'string'},
76 'uid': {'oneOf': [
77 {'type': 'integer'},
78 {'type': 'string'}
79 ]},
80 'password': {'type': 'string'},
81 'category': one_or_more({
82 'oneOf': [
83 {'type': 'integer'},
84 {'type': 'string', 'enum': list(CATEGORIES)},
85 ]}),
86 },
87 'required': ['rss_key', 'uid', 'password'],
88 'additionalProperties': False
89 }
90
91 # urlrewriter API
92 def url_rewritable(self, task, entry):
93 url = entry['url']
94 if url.startswith(BASE_URL + '/download.php/'):
95 return False
96 if url.startswith(BASE_URL + '/'):
97 return True
98 return False
99
100 # urlrewriter API
101 def url_rewrite(self, task, entry):
102 if 'url' not in entry:
103 log.error("Didn't actually get a URL...")
104 else:
105 log.debug("Got the URL: %s" % entry['url'])
106 if entry['url'].startswith(BASE_URL + '/t?'):
107 # use search
108 results = self.search(task, entry)
109 if not results:
110 raise UrlRewritingError("No search results found")
111 # TODO: Search doesn't enforce close match to title, be more picky
112 entry['url'] = results[0]['url']
113
114 @plugin.internet(log)
115 def search(self, task, entry, config=None):
116 """
117 Search for name from iptorrents
118 """
119
120 categories = config.get('category', 'all')
121 # Make sure categories is a list
122 if not isinstance(categories, list):
123 categories = [categories]
124
125 # If there are any text categories, turn them into their id number
126 categories = [c if isinstance(c, int) else CATEGORIES[c]
127 for c in categories]
128 filter_url = '&'.join((str(c) + '=') for c in categories)
129
130 entries = set()
131
132 for search_string in entry.get('search_strings', [entry['title']]):
133 query = normalize_unicode(search_string)
134 query = quote_plus(query.encode('utf8'))
135
136 url = "{base_url}/t?{filter}&q={query}&qf=".format(base_url=BASE_URL, filter=filter_url, query=query)
137 log.debug('searching with url: %s' % url)
138 req = requests.get(url, cookies={'uid': str(config['uid']), 'pass': config['password']})
139
140 if '/u/' + str(config.get('uid')) not in req.content:
141 raise plugin.PluginError("Invalid cookies (user not logged in)...")
142
143 soup = get_soup(req.content, parser="html.parser")
144 torrents = soup.find('table', {'id': 'torrents'})
145
146 for torrent in torrents.findAll('a', href=re.compile('\.torrent$')):
147 entry = Entry()
148 entry['url'] = "{base}{link}?torrent_pass={key}".format(
149 base=BASE_URL, link=torrent['href'], key=config.get('rss_key'))
150 entry['title'] = torrent.findPrevious("a", attrs={'class': 't_title'}).text
151
152 seeders = torrent.findNext('td', {'class': 'ac t_seeders'}).text
153 leechers = torrent.findNext('td', {'class': 'ac t_leechers'}).text
154 entry['torrent_seeds'] = int(seeders)
155 entry['torrent_leeches'] = int(leechers)
156 entry['search_sort'] = torrent_availability(entry['torrent_seeds'],
157 entry['torrent_leeches'])
158
159 size = torrent.findNext(text=re.compile('^([\.\d]+) ([GMK]?)B$'))
160 size = re.search('^([\.\d]+) ([GMK]?)B$', size)
161
162 entry['content_size'] = parse_filesize(size.group(0))
163
164 entries.add(entry)
165
166 return entries
167
168
169 @event('plugin.register')
170 def register_plugin():
171 plugin.register(UrlRewriteIPTorrents, 'iptorrents', interfaces=['urlrewriter', 'search'], api_ver=2)
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| golden_diff:
diff --git a/flexget/plugins/sites/iptorrents.py b/flexget/plugins/sites/iptorrents.py
--- a/flexget/plugins/sites/iptorrents.py
+++ b/flexget/plugins/sites/iptorrents.py
@@ -137,7 +137,7 @@
log.debug('searching with url: %s' % url)
req = requests.get(url, cookies={'uid': str(config['uid']), 'pass': config['password']})
- if '/u/' + str(config.get('uid')) not in req.content:
+ if '/u/' + str(config['uid']) not in req.text:
raise plugin.PluginError("Invalid cookies (user not logged in)...")
soup = get_soup(req.content, parser="html.parser")
@@ -147,7 +147,7 @@
entry = Entry()
entry['url'] = "{base}{link}?torrent_pass={key}".format(
base=BASE_URL, link=torrent['href'], key=config.get('rss_key'))
- entry['title'] = torrent.findPrevious("a", attrs={'class': 't_title'}).text
+ entry['title'] = torrent.findPrevious('a', attrs={'class': 'b'}).text
seeders = torrent.findNext('td', {'class': 'ac t_seeders'}).text
leechers = torrent.findNext('td', {'class': 'ac t_leechers'}).text
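
The first hunk matters on Python 3, where `Response.content` is `bytes` but the search needle is `str`, so the membership test has to use `Response.text`. A standalone sketch of the difference follows (the HTML snippet and uid are invented for illustration, this is not Flexget code):

```python
# Why the login check must use req.text (str) rather than req.content (bytes)
# on Python 3. The HTML snippet and uid below are made up for illustration.
content = b"<a href='/u/12345'>profile</a>"   # what Response.content holds
text = content.decode("utf-8")                # roughly what Response.text holds

uid = 12345
try:
    "/u/" + str(uid) in content               # str needle, bytes haystack
except TypeError as exc:
    print("bytes haystack fails:", exc)

print("/u/" + str(uid) in text)               # True: str against str works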
| verification_info:
{"golden_diff": "diff --git a/flexget/plugins/sites/iptorrents.py b/flexget/plugins/sites/iptorrents.py\n--- a/flexget/plugins/sites/iptorrents.py\n+++ b/flexget/plugins/sites/iptorrents.py\n@@ -137,7 +137,7 @@\n log.debug('searching with url: %s' % url)\n req = requests.get(url, cookies={'uid': str(config['uid']), 'pass': config['password']})\n \n- if '/u/' + str(config.get('uid')) not in req.content:\n+ if '/u/' + str(config['uid']) not in req.text:\n raise plugin.PluginError(\"Invalid cookies (user not logged in)...\")\n \n soup = get_soup(req.content, parser=\"html.parser\")\n@@ -147,7 +147,7 @@\n entry = Entry()\n entry['url'] = \"{base}{link}?torrent_pass={key}\".format(\n base=BASE_URL, link=torrent['href'], key=config.get('rss_key'))\n- entry['title'] = torrent.findPrevious(\"a\", attrs={'class': 't_title'}).text\n+ entry['title'] = torrent.findPrevious('a', attrs={'class': 'b'}).text\n \n seeders = torrent.findNext('td', {'class': 'ac t_seeders'}).text\n leechers = torrent.findNext('td', {'class': 'ac t_leechers'}).text\n", "issue": "Unexpected crash in plugin discover: NoneType object has no attribute 'text'\n### Expected behaviour:\r\nList of new episodes are sent to remote deluge daemon\r\n### Actual behaviour:\r\nUnexpected crash has occurred. \r\nBUG: Unhandled error in plugin discover: 'NoneType' object has no attribute 'text'\r\n\r\n### Steps to reproduce:\r\nI can reproduce it using my config, but I'm unable to figure out how to gather more detailed information (aside from setting flexget's log to debug).\r\n\r\nThis issue didn't start occurring until I updated flexget to the latest version. Since then, I've tried the latest version as well as the development version. I've even tried different versions of Python. At the time of this writing, I'm using:\r\n\r\nflexget 2.9.7.dev\r\npython 2.7.9\r\n\r\nI'm pretty new at flexget, but I think the issue with related with this search string:\r\nhttps://iptorrents.com/t?4=&5=&22=&q=Teen+Wolf+S06E08&qf=\r\n\r\n\r\n#### Config:\r\nhttp://pastebin.com/6sLVmUSe\r\n\r\n#### Log:\r\nhttp://pastebin.com/AY1pqZuP\r\n\r\n### Additional information:\r\n\r\n- Flexget Version: 2.9.7.dev\r\n- Python Version: 2.7.9\r\n- Installation method: https://flexget.com/GitHubInstall\r\n- OS and version: running docker container phusion/baseimage\r\n- Link to crash log: http://pastebin.com/cweVDm9S\r\n\r\n<!---\r\nPlease verify that the following data is present before submitting your issue:\r\n\r\n- Paste or link to a paste service (http://pastebin.com/ for example) of relevant config (preferably full config including templates if present. Remember to redact any personal information! Please make sure the paste does not expire, if possible.\r\n- Paste or link to a paste service of debug level logs of relevant task/s. Use `flexget -L debug execute --tasks <Task_name>`\r\n- Flexget version (Use `flexget -V` to get it).\r\n- Full Python version (`2.7.11` for example). 
Run `python -V` to get it.\r\n- Installation method (pip, git install, etc.)\r\n- OS and version\r\n- Attach crash log if available\r\n--->\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\nfrom future.moves.urllib.parse import quote_plus\n\nimport re\nimport logging\n\nfrom flexget import plugin\nfrom flexget.config_schema import one_or_more\nfrom flexget.entry import Entry\nfrom flexget.event import event\nfrom flexget.plugins.internal.urlrewriting import UrlRewritingError\nfrom flexget.utils import requests\nfrom flexget.utils.soup import get_soup\nfrom flexget.utils.search import torrent_availability, normalize_unicode\nfrom flexget.utils.tools import parse_filesize\n\nlog = logging.getLogger('iptorrents')\n\nCATEGORIES = {\n\n # Movies\n 'Movie-all': 72,\n 'Movie-3D': 87,\n 'Movie-480p': 77,\n 'Movie-BD-R': 89,\n 'Movie-BD-Rip': 90,\n 'Movie-DVD-R': 6,\n 'Movie-HD-Bluray': 48,\n 'Movie-Kids': 54,\n 'Movie-MP4': 62,\n 'Movie-Non-English': 38,\n 'Movie-Packs': 68,\n 'Movie-XviD': 17,\n\n # TV\n 'TV-all': 73,\n 'TV-Sports': 55,\n 'TV-480p': 78,\n 'TV-MP4': 66,\n 'TV-Non-English': 82,\n 'TV-Packs': 65,\n 'TV-Packs-Non-English': 83,\n 'TV-SD-x264': 79,\n 'TV-x264': 5,\n 'TV-XVID': 4,\n 'TV-Web-DL': 22\n}\n\nBASE_URL = 'https://iptorrents.com'\n\n\nclass UrlRewriteIPTorrents(object):\n \"\"\"\n IpTorrents urlrewriter and search plugin.\n\n iptorrents:\n rss_key: xxxxxxxxx (required)\n uid: xxxxxxxx (required)\n password: xxxxxxxx (required)\n category: HD\n\n Category is any combination of: all, Movie-3D, Movie-480p, Movie-3D,\n Movie-480p, Movie-BD-R, Movie-BD-Rip, Movie-DVD-R,\n Movie-HD-Bluray, Movie-Kids, Movie-MP4,\n Movie-Non-English, Movie-Packs, Movie-XviD,\n\n TV-all, TV-Sports, TV-480p, TV-MP4, TV-Non-English, TV-Packs,\n TV-Packs-Non-English, TV-SD-x264, TV-x264, TV-XVID, TV-Web-DL\n \"\"\"\n\n schema = {\n 'type': 'object',\n 'properties': {\n 'rss_key': {'type': 'string'},\n 'uid': {'oneOf': [\n {'type': 'integer'},\n {'type': 'string'}\n ]},\n 'password': {'type': 'string'},\n 'category': one_or_more({\n 'oneOf': [\n {'type': 'integer'},\n {'type': 'string', 'enum': list(CATEGORIES)},\n ]}),\n },\n 'required': ['rss_key', 'uid', 'password'],\n 'additionalProperties': False\n }\n\n # urlrewriter API\n def url_rewritable(self, task, entry):\n url = entry['url']\n if url.startswith(BASE_URL + '/download.php/'):\n return False\n if url.startswith(BASE_URL + '/'):\n return True\n return False\n\n # urlrewriter API\n def url_rewrite(self, task, entry):\n if 'url' not in entry:\n log.error(\"Didn't actually get a URL...\")\n else:\n log.debug(\"Got the URL: %s\" % entry['url'])\n if entry['url'].startswith(BASE_URL + '/t?'):\n # use search\n results = self.search(task, entry)\n if not results:\n raise UrlRewritingError(\"No search results found\")\n # TODO: Search doesn't enforce close match to title, be more picky\n entry['url'] = results[0]['url']\n\n @plugin.internet(log)\n def search(self, task, entry, config=None):\n \"\"\"\n Search for name from iptorrents\n \"\"\"\n\n categories = config.get('category', 'all')\n # Make sure categories is a list\n if not isinstance(categories, list):\n categories = [categories]\n\n # If there are any text categories, turn them into their id number\n categories = [c if isinstance(c, int) else CATEGORIES[c]\n for c in categories]\n filter_url = '&'.join((str(c) + '=') for c in categories)\n\n entries = set()\n\n 
for search_string in entry.get('search_strings', [entry['title']]):\n query = normalize_unicode(search_string)\n query = quote_plus(query.encode('utf8'))\n\n url = \"{base_url}/t?{filter}&q={query}&qf=\".format(base_url=BASE_URL, filter=filter_url, query=query)\n log.debug('searching with url: %s' % url)\n req = requests.get(url, cookies={'uid': str(config['uid']), 'pass': config['password']})\n\n if '/u/' + str(config.get('uid')) not in req.content:\n raise plugin.PluginError(\"Invalid cookies (user not logged in)...\")\n\n soup = get_soup(req.content, parser=\"html.parser\")\n torrents = soup.find('table', {'id': 'torrents'})\n\n for torrent in torrents.findAll('a', href=re.compile('\\.torrent$')):\n entry = Entry()\n entry['url'] = \"{base}{link}?torrent_pass={key}\".format(\n base=BASE_URL, link=torrent['href'], key=config.get('rss_key'))\n entry['title'] = torrent.findPrevious(\"a\", attrs={'class': 't_title'}).text\n\n seeders = torrent.findNext('td', {'class': 'ac t_seeders'}).text\n leechers = torrent.findNext('td', {'class': 'ac t_leechers'}).text\n entry['torrent_seeds'] = int(seeders)\n entry['torrent_leeches'] = int(leechers)\n entry['search_sort'] = torrent_availability(entry['torrent_seeds'],\n entry['torrent_leeches'])\n\n size = torrent.findNext(text=re.compile('^([\\.\\d]+) ([GMK]?)B$'))\n size = re.search('^([\\.\\d]+) ([GMK]?)B$', size)\n\n entry['content_size'] = parse_filesize(size.group(0))\n\n entries.add(entry)\n\n return entries\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(UrlRewriteIPTorrents, 'iptorrents', interfaces=['urlrewriter', 'search'], api_ver=2)\n", "path": "flexget/plugins/sites/iptorrents.py"}], "after_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\nfrom future.moves.urllib.parse import quote_plus\n\nimport re\nimport logging\n\nfrom flexget import plugin\nfrom flexget.config_schema import one_or_more\nfrom flexget.entry import Entry\nfrom flexget.event import event\nfrom flexget.plugins.internal.urlrewriting import UrlRewritingError\nfrom flexget.utils import requests\nfrom flexget.utils.soup import get_soup\nfrom flexget.utils.search import torrent_availability, normalize_unicode\nfrom flexget.utils.tools import parse_filesize\n\nlog = logging.getLogger('iptorrents')\n\nCATEGORIES = {\n\n # Movies\n 'Movie-all': 72,\n 'Movie-3D': 87,\n 'Movie-480p': 77,\n 'Movie-BD-R': 89,\n 'Movie-BD-Rip': 90,\n 'Movie-DVD-R': 6,\n 'Movie-HD-Bluray': 48,\n 'Movie-Kids': 54,\n 'Movie-MP4': 62,\n 'Movie-Non-English': 38,\n 'Movie-Packs': 68,\n 'Movie-XviD': 17,\n\n # TV\n 'TV-all': 73,\n 'TV-Sports': 55,\n 'TV-480p': 78,\n 'TV-MP4': 66,\n 'TV-Non-English': 82,\n 'TV-Packs': 65,\n 'TV-Packs-Non-English': 83,\n 'TV-SD-x264': 79,\n 'TV-x264': 5,\n 'TV-XVID': 4,\n 'TV-Web-DL': 22\n}\n\nBASE_URL = 'https://iptorrents.com'\n\n\nclass UrlRewriteIPTorrents(object):\n \"\"\"\n IpTorrents urlrewriter and search plugin.\n\n iptorrents:\n rss_key: xxxxxxxxx (required)\n uid: xxxxxxxx (required)\n password: xxxxxxxx (required)\n category: HD\n\n Category is any combination of: all, Movie-3D, Movie-480p, Movie-3D,\n Movie-480p, Movie-BD-R, Movie-BD-Rip, Movie-DVD-R,\n Movie-HD-Bluray, Movie-Kids, Movie-MP4,\n Movie-Non-English, Movie-Packs, Movie-XviD,\n\n TV-all, TV-Sports, TV-480p, TV-MP4, TV-Non-English, TV-Packs,\n TV-Packs-Non-English, TV-SD-x264, TV-x264, TV-XVID, TV-Web-DL\n \"\"\"\n\n schema = {\n 'type': 'object',\n 
'properties': {\n 'rss_key': {'type': 'string'},\n 'uid': {'oneOf': [\n {'type': 'integer'},\n {'type': 'string'}\n ]},\n 'password': {'type': 'string'},\n 'category': one_or_more({\n 'oneOf': [\n {'type': 'integer'},\n {'type': 'string', 'enum': list(CATEGORIES)},\n ]}),\n },\n 'required': ['rss_key', 'uid', 'password'],\n 'additionalProperties': False\n }\n\n # urlrewriter API\n def url_rewritable(self, task, entry):\n url = entry['url']\n if url.startswith(BASE_URL + '/download.php/'):\n return False\n if url.startswith(BASE_URL + '/'):\n return True\n return False\n\n # urlrewriter API\n def url_rewrite(self, task, entry):\n if 'url' not in entry:\n log.error(\"Didn't actually get a URL...\")\n else:\n log.debug(\"Got the URL: %s\" % entry['url'])\n if entry['url'].startswith(BASE_URL + '/t?'):\n # use search\n results = self.search(task, entry)\n if not results:\n raise UrlRewritingError(\"No search results found\")\n # TODO: Search doesn't enforce close match to title, be more picky\n entry['url'] = results[0]['url']\n\n @plugin.internet(log)\n def search(self, task, entry, config=None):\n \"\"\"\n Search for name from iptorrents\n \"\"\"\n\n categories = config.get('category', 'all')\n # Make sure categories is a list\n if not isinstance(categories, list):\n categories = [categories]\n\n # If there are any text categories, turn them into their id number\n categories = [c if isinstance(c, int) else CATEGORIES[c]\n for c in categories]\n filter_url = '&'.join((str(c) + '=') for c in categories)\n\n entries = set()\n\n for search_string in entry.get('search_strings', [entry['title']]):\n query = normalize_unicode(search_string)\n query = quote_plus(query.encode('utf8'))\n\n url = \"{base_url}/t?{filter}&q={query}&qf=\".format(base_url=BASE_URL, filter=filter_url, query=query)\n log.debug('searching with url: %s' % url)\n req = requests.get(url, cookies={'uid': str(config['uid']), 'pass': config['password']})\n\n if '/u/' + str(config['uid']) not in req.text:\n raise plugin.PluginError(\"Invalid cookies (user not logged in)...\")\n\n soup = get_soup(req.content, parser=\"html.parser\")\n torrents = soup.find('table', {'id': 'torrents'})\n\n for torrent in torrents.findAll('a', href=re.compile('\\.torrent$')):\n entry = Entry()\n entry['url'] = \"{base}{link}?torrent_pass={key}\".format(\n base=BASE_URL, link=torrent['href'], key=config.get('rss_key'))\n entry['title'] = torrent.findPrevious('a', attrs={'class': 'b'}).text\n\n seeders = torrent.findNext('td', {'class': 'ac t_seeders'}).text\n leechers = torrent.findNext('td', {'class': 'ac t_leechers'}).text\n entry['torrent_seeds'] = int(seeders)\n entry['torrent_leeches'] = int(leechers)\n entry['search_sort'] = torrent_availability(entry['torrent_seeds'],\n entry['torrent_leeches'])\n\n size = torrent.findNext(text=re.compile('^([\\.\\d]+) ([GMK]?)B$'))\n size = re.search('^([\\.\\d]+) ([GMK]?)B$', size)\n\n entry['content_size'] = parse_filesize(size.group(0))\n\n entries.add(entry)\n\n return entries\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(UrlRewriteIPTorrents, 'iptorrents', interfaces=['urlrewriter', 'search'], api_ver=2)\n", "path": "flexget/plugins/sites/iptorrents.py"}]}
| num_tokens: 2,725 | num_tokens_diff: 315 |
problem_id: gh_patches_debug_25847 | source: rasdani/github-patches | task_type: git_diff | in_source_id: HypothesisWorks__hypothesis-604 | prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
We should document plans for future releases
In particular, the [ongoing development](https://hypothesis.readthedocs.io/en/latest/development.html) page should include:
- Hypothesis' policy for which versions of Python are supported. Also how and when we may decide to stop supporting older versions.
- A timeline for dropping Python 2 support - we keep mentioning 2020 but it's not documented anywhere. [python3statement.org](http://www.python3statement.org) has a fantastic [practicalities page](http://www.python3statement.org/practicalities/) on how to make this work, especially if we want to make a py3-only release before the EOL date.
- ~~Hypothesis' policy for which versions of *Hypothesis* are supported. Latest patch release only? Latest patch of last *n* minor versions? We don't need [diagrams](https://www.djangoproject.com/download/#supported-versions) or [extensive detail](https://docs.djangoproject.com/en/dev/internals/release-process/#supported-versions) like Django, but the basic approach is important information.~~
- Hypothesis' policy for deciding to release new major and minor versions of *Hypothesis*. Feature accumulation? Calendar schedule? Deprecated features becoming a burden on development? Probably related to #408.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2017 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import os
21 import sys
22
23 from setuptools import setup, find_packages
24
25
26 def local_file(name):
27 return os.path.relpath(os.path.join(os.path.dirname(__file__), name))
28
29
30 SOURCE = local_file('src')
31 README = local_file('README.rst')
32
33
34 # Assignment to placate pyflakes. The actual version is from the exec that
35 # follows.
36 __version__ = None
37
38 with open(local_file('src/hypothesis/version.py')) as o:
39 exec(o.read())
40
41 assert __version__ is not None
42
43
44 extras = {
45 'datetime': ['pytz'],
46 'fakefactory': ['Faker>=0.7.0,<=0.7.1'],
47 'django': ['pytz', 'django>=1.8,<2'],
48 'numpy': ['numpy>=1.9.0'],
49 'pytest': ['pytest>=2.8.0'],
50 }
51
52 extras['faker'] = extras['fakefactory']
53
54 extras['all'] = sorted(sum(extras.values(), []))
55
56 extras[":python_version == '2.7'"] = ['enum34']
57 extras[":python_version == '3.3'"] = ['enum34']
58
59 install_requires = []
60
61 if sys.version_info[0] < 3:
62 install_requires.append('enum34')
63
64 setup(
65 name='hypothesis',
66 version=__version__,
67 author='David R. MacIver',
68 author_email='[email protected]',
69 packages=find_packages(SOURCE),
70 package_dir={'': SOURCE},
71 url='https://github.com/HypothesisWorks/hypothesis-python',
72 license='MPL v2',
73 description='A library for property based testing',
74 zip_safe=False,
75 extras_require=extras,
76 install_requires=install_requires,
77 classifiers=[
78 'Development Status :: 5 - Production/Stable',
79 'Intended Audience :: Developers',
80 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',
81 'Operating System :: Unix',
82 'Operating System :: POSIX',
83 'Operating System :: Microsoft :: Windows',
84 'Programming Language :: Python',
85 'Programming Language :: Python :: 2.7',
86 'Programming Language :: Python :: 3',
87 'Programming Language :: Python :: 3.4',
88 'Programming Language :: Python :: 3.5',
89 'Programming Language :: Python :: 3.6',
90 'Programming Language :: Python :: Implementation :: CPython',
91 'Programming Language :: Python :: Implementation :: PyPy',
92 'Topic :: Software Development :: Testing',
93 ],
94 entry_points={
95 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],
96 },
97 long_description=open(README).read(),
98 )
99
```
Path: `docs/conf.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2017 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 # -*- coding: utf-8 -*-
19
20 from __future__ import division, print_function, absolute_import
21
22 # on_rtd is whether we are on readthedocs.org
23 import os
24 import sys
25
26 from hypothesis import __version__
27
28 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
29
30 sys.path.append(
31 os.path.join(os.path.dirname(__file__), '..', 'src')
32 )
33
34
35 autodoc_member_order = 'bysource'
36
37 extensions = [
38 'sphinx.ext.autodoc',
39 'sphinx.ext.doctest',
40 'sphinx.ext.viewcode',
41 'sphinx.ext.intersphinx',
42 ]
43
44 templates_path = ['_templates']
45
46 source_suffix = '.rst'
47
48 # The master toctree document.
49 master_doc = 'index'
50
51 # General information about the project.
52 project = u'Hypothesis'
53 copyright = u'2015, David R. MacIver'
54 author = u'David R. MacIver'
55
56 version = __version__
57 release = __version__
58
59 language = None
60
61 exclude_patterns = ['_build']
62
63 pygments_style = 'sphinx'
64
65 todo_include_todos = False
66
67 intersphinx_mapping = {
68 'python': ('http://docs.python.org/', None),
69 }
70
71 autodoc_mock_imports = ['numpy']
72
73 doctest_global_setup = '''
74 # Some standard imports
75 from hypothesis import *
76 from hypothesis.strategies import *
77 # Ensure that output (including from strategies) is deterministic
78 import random
79 random.seed(0)
80 '''
81
82
83 # -- Options for HTML output ----------------------------------------------
84
85 if not on_rtd: # only import and set the theme if we're building docs locally
86 import sphinx_rtd_theme
87 html_theme = 'sphinx_rtd_theme'
88 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
89
90 html_static_path = ['_static']
91
92 htmlhelp_basename = 'Hypothesisdoc'
93
94 # -- Options for LaTeX output ---------------------------------------------
95
96 latex_elements = {
97 }
98
99 latex_documents = [
100 (master_doc, 'Hypothesis.tex', u'Hypothesis Documentation',
101 u'David R. MacIver', 'manual'),
102 ]
103
104 man_pages = [
105 (master_doc, 'hypothesis', u'Hypothesis Documentation',
106 [author], 1)
107 ]
108
109 texinfo_documents = [
110 (master_doc, 'Hypothesis', u'Hypothesis Documentation',
111 author, 'Hypothesis', 'One line description of project.',
112 'Miscellaneous'),
113 ]
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| golden_diff:
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -37,6 +37,7 @@
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.doctest',
+ 'sphinx.ext.extlinks',
'sphinx.ext.viewcode',
'sphinx.ext.intersphinx',
]
@@ -79,6 +80,16 @@
random.seed(0)
'''
+# This config value must be a dictionary of external sites, mapping unique
+# short alias names to a base URL and a prefix.
+# See http://sphinx-doc.org/ext/extlinks.html
+extlinks = {
+ 'commit': ('https://github.com/HypothesisWorks/hypothesis-python/commit/%s', 'commit '),
+ 'gh-file': ('https://github.com/HypothesisWorks/hypothesis-python/blob/master/%s', ''),
+ 'gh-link': ('https://github.com/HypothesisWorks/hypothesis-python/%s', ''),
+ 'issue': ('https://github.com/HypothesisWorks/hypothesis-python/issues/%s', 'issue #'),
+ 'pull': ('https://github.com/HypothesisWorks/hypothesis-python/pulls/%s', 'pull request #'),
+}
# -- Options for HTML output ----------------------------------------------
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -74,6 +74,7 @@
zip_safe=False,
extras_require=extras,
install_requires=install_requires,
+ python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*',
classifiers=[
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
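
For context on the `extlinks` hunk: each entry maps a role name to a `(base_url_with_%s, caption_prefix)` pair, and Sphinx substitutes the role's target into both. A standalone sketch of that expansion (the `expand` helper is illustrative, not a Sphinx API):

```python
# How the extlinks entries from the patch turn a short role target into a URL
# and a link caption. Sphinx does this expansion itself; shown standalone here.
extlinks = {
    'issue': ('https://github.com/HypothesisWorks/hypothesis-python/issues/%s', 'issue #'),
    'pull': ('https://github.com/HypothesisWorks/hypothesis-python/pulls/%s', 'pull request #'),
}

def expand(role, target):
    base_url, caption_prefix = extlinks[role]
    return base_url % target, caption_prefix + target

print(expand('issue', '604'))    # what :issue:`604` resolves to in the docs
print(expand('pull', '2454'))    # what :pull:`2454` resolves to
```

The `python_requires` line in `setup.py` gives pip the metadata it needs to skip releases that no longer support an old interpreter, which is the mechanism the python3statement practicalities page recommends when planning a Python-3-only release.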
| verification_info:
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -37,6 +37,7 @@\n extensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n+ 'sphinx.ext.extlinks',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.intersphinx',\n ]\n@@ -79,6 +80,16 @@\n random.seed(0)\n '''\n \n+# This config value must be a dictionary of external sites, mapping unique\n+# short alias names to a base URL and a prefix.\n+# See http://sphinx-doc.org/ext/extlinks.html\n+extlinks = {\n+ 'commit': ('https://github.com/HypothesisWorks/hypothesis-python/commit/%s', 'commit '),\n+ 'gh-file': ('https://github.com/HypothesisWorks/hypothesis-python/blob/master/%s', ''),\n+ 'gh-link': ('https://github.com/HypothesisWorks/hypothesis-python/%s', ''),\n+ 'issue': ('https://github.com/HypothesisWorks/hypothesis-python/issues/%s', 'issue #'),\n+ 'pull': ('https://github.com/HypothesisWorks/hypothesis-python/pulls/%s', 'pull request #'),\n+}\n \n # -- Options for HTML output ----------------------------------------------\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -74,6 +74,7 @@\n zip_safe=False,\n extras_require=extras,\n install_requires=install_requires,\n+ python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n", "issue": "We should document plans for future releases\nIn particular, the [ongoing development](https://hypothesis.readthedocs.io/en/latest/development.html) page should include:\r\n\r\n- Hypothesis' policy for which versions of Python are supported. Also how and when we may decide to stop supporting older versions.\r\n\r\n- A timeline for dropping Python 2 support - we keep mentioning 2020 but it's not documented anywhere. [python3statement.org](http://www.python3statement.org) has a fantastic [practicalities page](http://www.python3statement.org/practicalities/) on how to make this work, especially if we want to make a py3-only release before the EOL date.\r\n\r\n- ~~Hypothesis' policy for which versions of *Hypothesis* are supported. Latest patch release only? Latest patch of last *n* minor versions? We don't need [diagrams](https://www.djangoproject.com/download/#supported-versions) or [extensive detail](https://docs.djangoproject.com/en/dev/internals/release-process/#supported-versions) like Django, but the basic approach is important information.~~\r\n\r\n- Hypothesis' policy for deciding to release new major and minor versions of *Hypothesis*. Feature accumulation? Calendar schedule? Deprecated features becoming a burden on development? Probably related to #408.\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2017 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\ndef local_file(name):\n return os.path.relpath(os.path.join(os.path.dirname(__file__), name))\n\n\nSOURCE = local_file('src')\nREADME = local_file('README.rst')\n\n\n# Assignment to placate pyflakes. The actual version is from the exec that\n# follows.\n__version__ = None\n\nwith open(local_file('src/hypothesis/version.py')) as o:\n exec(o.read())\n\nassert __version__ is not None\n\n\nextras = {\n 'datetime': ['pytz'],\n 'fakefactory': ['Faker>=0.7.0,<=0.7.1'],\n 'django': ['pytz', 'django>=1.8,<2'],\n 'numpy': ['numpy>=1.9.0'],\n 'pytest': ['pytest>=2.8.0'],\n}\n\nextras['faker'] = extras['fakefactory']\n\nextras['all'] = sorted(sum(extras.values(), []))\n\nextras[\":python_version == '2.7'\"] = ['enum34']\nextras[\":python_version == '3.3'\"] = ['enum34']\n\ninstall_requires = []\n\nif sys.version_info[0] < 3:\n install_requires.append('enum34')\n\nsetup(\n name='hypothesis',\n version=__version__,\n author='David R. MacIver',\n author_email='[email protected]',\n packages=find_packages(SOURCE),\n package_dir={'': SOURCE},\n url='https://github.com/HypothesisWorks/hypothesis-python',\n license='MPL v2',\n description='A library for property based testing',\n zip_safe=False,\n extras_require=extras,\n install_requires=install_requires,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',\n 'Operating System :: Unix',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development :: Testing',\n ],\n entry_points={\n 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],\n },\n long_description=open(README).read(),\n)\n", "path": "setup.py"}, {"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2017 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\n# -*- coding: utf-8 -*-\n\nfrom __future__ import division, print_function, absolute_import\n\n# on_rtd is whether we are on readthedocs.org\nimport os\nimport sys\n\nfrom hypothesis import __version__\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\nsys.path.append(\n os.path.join(os.path.dirname(__file__), '..', 'src')\n)\n\n\nautodoc_member_order = 'bysource'\n\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.intersphinx',\n]\n\ntemplates_path = ['_templates']\n\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Hypothesis'\ncopyright = u'2015, David R. MacIver'\nauthor = u'David R. MacIver'\n\nversion = __version__\nrelease = __version__\n\nlanguage = None\n\nexclude_patterns = ['_build']\n\npygments_style = 'sphinx'\n\ntodo_include_todos = False\n\nintersphinx_mapping = {\n 'python': ('http://docs.python.org/', None),\n}\n\nautodoc_mock_imports = ['numpy']\n\ndoctest_global_setup = '''\n# Some standard imports\nfrom hypothesis import *\nfrom hypothesis.strategies import *\n# Ensure that output (including from strategies) is deterministic\nimport random\nrandom.seed(0)\n'''\n\n\n# -- Options for HTML output ----------------------------------------------\n\nif not on_rtd: # only import and set the theme if we're building docs locally\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\nhtml_static_path = ['_static']\n\nhtmlhelp_basename = 'Hypothesisdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n}\n\nlatex_documents = [\n (master_doc, 'Hypothesis.tex', u'Hypothesis Documentation',\n u'David R. MacIver', 'manual'),\n]\n\nman_pages = [\n (master_doc, 'hypothesis', u'Hypothesis Documentation',\n [author], 1)\n]\n\ntexinfo_documents = [\n (master_doc, 'Hypothesis', u'Hypothesis Documentation',\n author, 'Hypothesis', 'One line description of project.',\n 'Miscellaneous'),\n]\n", "path": "docs/conf.py"}], "after_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2017 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\ndef local_file(name):\n return os.path.relpath(os.path.join(os.path.dirname(__file__), name))\n\n\nSOURCE = local_file('src')\nREADME = local_file('README.rst')\n\n\n# Assignment to placate pyflakes. 
The actual version is from the exec that\n# follows.\n__version__ = None\n\nwith open(local_file('src/hypothesis/version.py')) as o:\n exec(o.read())\n\nassert __version__ is not None\n\n\nextras = {\n 'datetime': ['pytz'],\n 'fakefactory': ['Faker>=0.7.0,<=0.7.1'],\n 'django': ['pytz', 'django>=1.8,<2'],\n 'numpy': ['numpy>=1.9.0'],\n 'pytest': ['pytest>=2.8.0'],\n}\n\nextras['faker'] = extras['fakefactory']\n\nextras['all'] = sorted(sum(extras.values(), []))\n\nextras[\":python_version == '2.7'\"] = ['enum34']\nextras[\":python_version == '3.3'\"] = ['enum34']\n\ninstall_requires = []\n\nif sys.version_info[0] < 3:\n install_requires.append('enum34')\n\nsetup(\n name='hypothesis',\n version=__version__,\n author='David R. MacIver',\n author_email='[email protected]',\n packages=find_packages(SOURCE),\n package_dir={'': SOURCE},\n url='https://github.com/HypothesisWorks/hypothesis-python',\n license='MPL v2',\n description='A library for property based testing',\n zip_safe=False,\n extras_require=extras,\n install_requires=install_requires,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',\n 'Operating System :: Unix',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development :: Testing',\n ],\n entry_points={\n 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],\n },\n long_description=open(README).read(),\n)\n", "path": "setup.py"}, {"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2017 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\n# -*- coding: utf-8 -*-\n\nfrom __future__ import division, print_function, absolute_import\n\n# on_rtd is whether we are on readthedocs.org\nimport os\nimport sys\n\nfrom hypothesis import __version__\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\nsys.path.append(\n os.path.join(os.path.dirname(__file__), '..', 'src')\n)\n\n\nautodoc_member_order = 'bysource'\n\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.extlinks',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.intersphinx',\n]\n\ntemplates_path = ['_templates']\n\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Hypothesis'\ncopyright = u'2015, David R. MacIver'\nauthor = u'David R. 
MacIver'\n\nversion = __version__\nrelease = __version__\n\nlanguage = None\n\nexclude_patterns = ['_build']\n\npygments_style = 'sphinx'\n\ntodo_include_todos = False\n\nintersphinx_mapping = {\n 'python': ('http://docs.python.org/', None),\n}\n\nautodoc_mock_imports = ['numpy']\n\ndoctest_global_setup = '''\n# Some standard imports\nfrom hypothesis import *\nfrom hypothesis.strategies import *\n# Ensure that output (including from strategies) is deterministic\nimport random\nrandom.seed(0)\n'''\n\n# This config value must be a dictionary of external sites, mapping unique\n# short alias names to a base URL and a prefix.\n# See http://sphinx-doc.org/ext/extlinks.html\nextlinks = {\n 'commit': ('https://github.com/HypothesisWorks/hypothesis-python/commit/%s', 'commit '),\n 'gh-file': ('https://github.com/HypothesisWorks/hypothesis-python/blob/master/%s', ''),\n 'gh-link': ('https://github.com/HypothesisWorks/hypothesis-python/%s', ''),\n 'issue': ('https://github.com/HypothesisWorks/hypothesis-python/issues/%s', 'issue #'),\n 'pull': ('https://github.com/HypothesisWorks/hypothesis-python/pulls/%s', 'pull request #'),\n}\n\n# -- Options for HTML output ----------------------------------------------\n\nif not on_rtd: # only import and set the theme if we're building docs locally\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\nhtml_static_path = ['_static']\n\nhtmlhelp_basename = 'Hypothesisdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n}\n\nlatex_documents = [\n (master_doc, 'Hypothesis.tex', u'Hypothesis Documentation',\n u'David R. MacIver', 'manual'),\n]\n\nman_pages = [\n (master_doc, 'hypothesis', u'Hypothesis Documentation',\n [author], 1)\n]\n\ntexinfo_documents = [\n (master_doc, 'Hypothesis', u'Hypothesis Documentation',\n author, 'Hypothesis', 'One line description of project.',\n 'Miscellaneous'),\n]\n", "path": "docs/conf.py"}]}
| num_tokens: 2,484 | num_tokens_diff: 388 |
problem_id: gh_patches_debug_7930 | source: rasdani/github-patches | task_type: git_diff | in_source_id: wagtail__wagtail-11396 | prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add ability for slugs to be auto incremented on a per model basis
### Is your proposal related to a problem?
I would like to be able to use auto incrementing page slugs in the case of a duplicate slug within the current parent page's URL path.
Example use case: An event calendar where I don't really care so much about the slugs, but there is a high potential for duplicates with the types of events that may be submitted.
The ability to generate 'nice' URLs is a positive selling point of Wagtail that we don't want to undermine but we do want to help users create pages easily.
### Describe the solution you'd like
`Page` models can have an `auto_increment_slug` meta flag, instead of a global Django setting.
When the value is `True`, pages created as children of that page (important: the slug behaviour will be determined by the parent model, not the page model itself) will have an extra step on form creation.
If a page is being created (not edited) and there is a value error because of a duplicate URL path, instead of failing the form submission it will automatically append an incremented number to the end of the URL. This incrementing number would need to parse the conflicting page path for the last number and then add to it.
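
A minimal sketch of that incrementing step might look like the following (illustrative only, not Wagtail code; the example slugs are invented):

```python
import re

def increment_slug(slug):
    """Turn 'spring-gig' into 'spring-gig-1', 'spring-gig-1' into 'spring-gig-2'.

    Illustrative only; note it would also bump 'blog-post-about-pi-being-3'
    to '...-4', which is exactly the first edge case listed below.
    """
    match = re.match(r"^(.*)-(\d+)$", slug)
    if match:
        return "%s-%d" % (match.group(1), int(match.group(2)) + 1)
    return slug + "-1"

taken = {"spring-gig", "spring-gig-1"}   # slugs already used under the parent
candidate = "spring-gig"
while candidate in taken:                # keep bumping until the slug is free
    candidate = increment_slug(candidate)
print(candidate)                         # spring-gig-2
```

The edge cases below are the reason this cannot simply be bolted on unconditionally.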
**Edge cases to consider**
* What if the path part is not really meant to be an incremental number (E.g. `path/to/blog-post-about-pi-being-3`)
* Confirmation that we want the flag on parent pages and not the page itself that is being created
* Supporting simple numbers only?
* What if the number is `0`, should we increment from `1`
* Should we provide the ability to set the start number `auto_increment_slug = 0` or `auto_increment_slug = 10` or just always assume it is 1 and the first page that contains the slug without any increment will be the zeroth item (without the `-0`)
**Requirements**
* Unit tests are a must
* Documentation for this feature to be added - maybe in https://docs.wagtail.org/en/latest/advanced_topics/customisation/page_editing_interface.html#customising-generated-forms & also page model reference
### Describe alternatives you've considered
* Do not try to solve this in an automated way and keep the page validation as is, users can decide on what the slug should be manually.
### Additional context
* An older PR has an approach (settings) to this problem - see https://github.com/wagtail/wagtail/pull/2454
> This proposed behaviour has come up again independently in #3477. We've now discussed this among the core team - while auto-rewriting of slugs is a behaviour we'd want to avoid in the default case (because the ability to generate 'nice' URLs is a positive selling point of Wagtail that we don't want to undermine...) we're agreed that this would be a worthwhile feature to support on a per-model basis (perhaps with a `auto_increment_slug` meta flag).
> _Originally posted by @gasman in https://github.com/wagtail/wagtail/issues/2454#issuecomment-290208940_ (March 2017)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/admin/views/pages/copy.py`
Content:
```
1 from django.shortcuts import redirect
2 from django.template.response import TemplateResponse
3 from django.utils.translation import gettext as _
4
5 from wagtail import hooks
6 from wagtail.actions.copy_page import CopyPageAction
7 from wagtail.actions.create_alias import CreatePageAliasAction
8 from wagtail.admin import messages
9 from wagtail.admin.auth import user_has_any_page_permission, user_passes_test
10 from wagtail.admin.forms.pages import CopyForm
11 from wagtail.admin.utils import get_valid_next_url_from_request
12 from wagtail.models import Page
13
14
15 @user_passes_test(user_has_any_page_permission)
16 def copy(request, page_id):
17 page = Page.objects.get(id=page_id)
18
19 # Parent page defaults to parent of source page
20 parent_page = page.get_parent()
21
22 # Check if the user has permission to publish subpages on the parent
23 can_publish = parent_page.permissions_for_user(request.user).can_publish_subpage()
24
25 # Create the form
26 form = CopyForm(
27 request.POST or None, user=request.user, page=page, can_publish=can_publish
28 )
29
30 next_url = get_valid_next_url_from_request(request)
31
32 for fn in hooks.get_hooks("before_copy_page"):
33 result = fn(request, page)
34 if hasattr(result, "status_code"):
35 return result
36
37 # Check if user is submitting
38 if request.method == "POST":
39 # Prefill parent_page in case the form is invalid (as prepopulated value for the form field,
40 # because ModelChoiceField seems to not fall back to the user given value)
41 parent_page = Page.objects.get(id=request.POST["new_parent_page"])
42
43 if form.is_valid():
44 # Receive the parent page (this should never be empty)
45 if form.cleaned_data["new_parent_page"]:
46 parent_page = form.cleaned_data["new_parent_page"]
47
48 # Re-check if the user has permission to publish subpages on the new parent
49 can_publish = parent_page.permissions_for_user(
50 request.user
51 ).can_publish_subpage()
52 keep_live = can_publish and form.cleaned_data.get("publish_copies")
53
54 # Copy the page
55 # Note that only users who can publish in the new parent page can create an alias.
56 # This is because alias pages must always match their original page's state.
57 if can_publish and form.cleaned_data.get("alias"):
58 action = CreatePageAliasAction(
59 page.specific,
60 recursive=form.cleaned_data.get("copy_subpages"),
61 parent=parent_page,
62 update_slug=form.cleaned_data["new_slug"],
63 user=request.user,
64 )
65 new_page = action.execute(skip_permission_checks=True)
66 else:
67 action = CopyPageAction(
68 page=page,
69 recursive=form.cleaned_data.get("copy_subpages"),
70 to=parent_page,
71 update_attrs={
72 "title": form.cleaned_data["new_title"],
73 "slug": form.cleaned_data["new_slug"],
74 },
75 keep_live=keep_live,
76 user=request.user,
77 )
78 new_page = action.execute()
79
80 # Give a success message back to the user
81 if form.cleaned_data.get("copy_subpages"):
82 messages.success(
83 request,
84 _("Page '%(page_title)s' and %(subpages_count)s subpages copied.")
85 % {
86 "page_title": page.specific_deferred.get_admin_display_title(),
87 "subpages_count": new_page.get_descendants().count(),
88 },
89 )
90 else:
91 messages.success(
92 request,
93 _("Page '%(page_title)s' copied.")
94 % {"page_title": page.specific_deferred.get_admin_display_title()},
95 )
96
97 for fn in hooks.get_hooks("after_copy_page"):
98 result = fn(request, page, new_page)
99 if hasattr(result, "status_code"):
100 return result
101
102 # Redirect to explore of parent page
103 if next_url:
104 return redirect(next_url)
105 return redirect("wagtailadmin_explore", parent_page.id)
106
107 return TemplateResponse(
108 request,
109 "wagtailadmin/pages/copy.html",
110 {
111 "page": page,
112 "form": form,
113 "next": next_url,
114 },
115 )
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| golden_diff:
diff --git a/wagtail/admin/views/pages/copy.py b/wagtail/admin/views/pages/copy.py
--- a/wagtail/admin/views/pages/copy.py
+++ b/wagtail/admin/views/pages/copy.py
@@ -22,8 +22,8 @@
# Check if the user has permission to publish subpages on the parent
can_publish = parent_page.permissions_for_user(request.user).can_publish_subpage()
- # Create the form
- form = CopyForm(
+ form_class = getattr(page.specific_class, "copy_form_class", CopyForm)
+ form = form_class(
request.POST or None, user=request.user, page=page, can_publish=can_publish
)
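
With that hook in place, a project can opt a page type into custom copy behaviour by pointing `copy_form_class` at a `CopyForm` subclass. A sketch follows; the `EventPage` model and its form are hypothetical names, not part of the patch:

```python
# Hypothetical opt-in: the patched view does
# getattr(page.specific_class, "copy_form_class", CopyForm), so a model only
# needs to expose the attribute.
from wagtail.admin.forms.pages import CopyForm
from wagtail.models import Page


class EventPageCopyForm(CopyForm):
    def clean(self):
        cleaned_data = super().clean()
        # e.g. auto-increment the proposed slug here instead of raising a
        # duplicate-slug validation error
        return cleaned_data


class EventPage(Page):
    copy_form_class = EventPageCopyForm
```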
| verification_info:
{"golden_diff": "diff --git a/wagtail/admin/views/pages/copy.py b/wagtail/admin/views/pages/copy.py\n--- a/wagtail/admin/views/pages/copy.py\n+++ b/wagtail/admin/views/pages/copy.py\n@@ -22,8 +22,8 @@\n # Check if the user has permission to publish subpages on the parent\n can_publish = parent_page.permissions_for_user(request.user).can_publish_subpage()\n \n- # Create the form\n- form = CopyForm(\n+ form_class = getattr(page.specific_class, \"copy_form_class\", CopyForm)\n+ form = form_class(\n request.POST or None, user=request.user, page=page, can_publish=can_publish\n )\n", "issue": "Add ability for slugs to be auto incremented on a per model basis\n### Is your proposal related to a problem?\r\n\r\nI would like to be able to use auto incrementing page slugs in the case of a duplicate slug within the current parent page's URL path.\r\n\r\nExample use case: An event calendar where I don't really care so much about the slugs, but there is a high potential for duplicates with the types of events that may be submitted.\r\n\r\nThe ability to generate 'nice' URLs is a positive selling point of Wagtail that we don't want to undermine but we do want to help users create pages easily.\r\n\r\n### Describe the solution you'd like\r\n\r\n`Page` models can have an `auto_increment_slug` meta flag, instead of a global Django setting.\r\n\r\nWhen the value is `True`, and pages are created as children of that page (important: the slug behaviour will be determined by parent models, not the page models itself) will have an extra step on form creation.\r\n\r\nIf a page is being created (not edited) and there is a value error because of a duplicate URL path, instead of failing the form submission it will automatically append an incremented number to the end of the URL. This incrementing number would need to parse the conflicting page path for the last number and then add to it.\r\n\r\n**Edge cases to consider**\r\n\r\n* What if the path part is not really meant to be an incremental number (E.g. `path/to/blog-post-about-pi-being-3`)\r\n* Confirmation that we want the flag on parent pages and not the page itself that is being created\r\n* Supporting simple numbers only?\r\n* What if the number is `0`, should we increment from `1`\r\n* Should we provide the ability to set the start number `auto_increment_slug = 0` or `auto_increment_slug = 10` or just always assume it is 1 and the first page that contains the slug without any increment will be the zeroth item (without the `-0`)\r\n\r\n\r\n**Requirements**\r\n\r\n* Unit tests are a must\r\n* Documentation for this feature to be added - maybe in https://docs.wagtail.org/en/latest/advanced_topics/customisation/page_editing_interface.html#customising-generated-forms & also page model reference\r\n\r\n\r\n### Describe alternatives you've considered\r\n\r\n* Do not try to solve this in an automated way and keep the page validation as is, users can decide on what the slug should be manually.\r\n\r\n### Additional context\r\n\r\n* An older PR has an approach (settings) to this problem - see https://github.com/wagtail/wagtail/pull/2454\r\n\r\n> This proposed behaviour has come up again independently in #3477. We've now discussed this among the core team - while auto-rewriting of slugs is a behaviour we'd want to avoid in the default case (because the ability to generate 'nice' URLs is a positive selling point of Wagtail that we don't want to undermine...) 
we're agreed that this would be a worthwhile feature to support on a per-model basis (perhaps with a `auto_increment_slug` meta flag).\r\n> _Originally posted by @gasman in https://github.com/wagtail/wagtail/issues/2454#issuecomment-290208940_ (March 2017)\r\n \r\n\nAdd ability for slugs to be auto incremented on a per model basis\n### Is your proposal related to a problem?\r\n\r\nI would like to be able to use auto incrementing page slugs in the case of a duplicate slug within the current parent page's URL path.\r\n\r\nExample use case: An event calendar where I don't really care so much about the slugs, but there is a high potential for duplicates with the types of events that may be submitted.\r\n\r\nThe ability to generate 'nice' URLs is a positive selling point of Wagtail that we don't want to undermine but we do want to help users create pages easily.\r\n\r\n### Describe the solution you'd like\r\n\r\n`Page` models can have an `auto_increment_slug` meta flag, instead of a global Django setting.\r\n\r\nWhen the value is `True`, and pages are created as children of that page (important: the slug behaviour will be determined by parent models, not the page models itself) will have an extra step on form creation.\r\n\r\nIf a page is being created (not edited) and there is a value error because of a duplicate URL path, instead of failing the form submission it will automatically append an incremented number to the end of the URL. This incrementing number would need to parse the conflicting page path for the last number and then add to it.\r\n\r\n**Edge cases to consider**\r\n\r\n* What if the path part is not really meant to be an incremental number (E.g. `path/to/blog-post-about-pi-being-3`)\r\n* Confirmation that we want the flag on parent pages and not the page itself that is being created\r\n* Supporting simple numbers only?\r\n* What if the number is `0`, should we increment from `1`\r\n* Should we provide the ability to set the start number `auto_increment_slug = 0` or `auto_increment_slug = 10` or just always assume it is 1 and the first page that contains the slug without any increment will be the zeroth item (without the `-0`)\r\n\r\n\r\n**Requirements**\r\n\r\n* Unit tests are a must\r\n* Documentation for this feature to be added - maybe in https://docs.wagtail.org/en/latest/advanced_topics/customisation/page_editing_interface.html#customising-generated-forms & also page model reference\r\n\r\n\r\n### Describe alternatives you've considered\r\n\r\n* Do not try to solve this in an automated way and keep the page validation as is, users can decide on what the slug should be manually.\r\n\r\n### Additional context\r\n\r\n* An older PR has an approach (settings) to this problem - see https://github.com/wagtail/wagtail/pull/2454\r\n\r\n> This proposed behaviour has come up again independently in #3477. We've now discussed this among the core team - while auto-rewriting of slugs is a behaviour we'd want to avoid in the default case (because the ability to generate 'nice' URLs is a positive selling point of Wagtail that we don't want to undermine...) 
we're agreed that this would be a worthwhile feature to support on a per-model basis (perhaps with a `auto_increment_slug` meta flag).\r\n> _Originally posted by @gasman in https://github.com/wagtail/wagtail/issues/2454#issuecomment-290208940_ (March 2017)\r\n \r\n\n", "before_files": [{"content": "from django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import gettext as _\n\nfrom wagtail import hooks\nfrom wagtail.actions.copy_page import CopyPageAction\nfrom wagtail.actions.create_alias import CreatePageAliasAction\nfrom wagtail.admin import messages\nfrom wagtail.admin.auth import user_has_any_page_permission, user_passes_test\nfrom wagtail.admin.forms.pages import CopyForm\nfrom wagtail.admin.utils import get_valid_next_url_from_request\nfrom wagtail.models import Page\n\n\n@user_passes_test(user_has_any_page_permission)\ndef copy(request, page_id):\n page = Page.objects.get(id=page_id)\n\n # Parent page defaults to parent of source page\n parent_page = page.get_parent()\n\n # Check if the user has permission to publish subpages on the parent\n can_publish = parent_page.permissions_for_user(request.user).can_publish_subpage()\n\n # Create the form\n form = CopyForm(\n request.POST or None, user=request.user, page=page, can_publish=can_publish\n )\n\n next_url = get_valid_next_url_from_request(request)\n\n for fn in hooks.get_hooks(\"before_copy_page\"):\n result = fn(request, page)\n if hasattr(result, \"status_code\"):\n return result\n\n # Check if user is submitting\n if request.method == \"POST\":\n # Prefill parent_page in case the form is invalid (as prepopulated value for the form field,\n # because ModelChoiceField seems to not fall back to the user given value)\n parent_page = Page.objects.get(id=request.POST[\"new_parent_page\"])\n\n if form.is_valid():\n # Receive the parent page (this should never be empty)\n if form.cleaned_data[\"new_parent_page\"]:\n parent_page = form.cleaned_data[\"new_parent_page\"]\n\n # Re-check if the user has permission to publish subpages on the new parent\n can_publish = parent_page.permissions_for_user(\n request.user\n ).can_publish_subpage()\n keep_live = can_publish and form.cleaned_data.get(\"publish_copies\")\n\n # Copy the page\n # Note that only users who can publish in the new parent page can create an alias.\n # This is because alias pages must always match their original page's state.\n if can_publish and form.cleaned_data.get(\"alias\"):\n action = CreatePageAliasAction(\n page.specific,\n recursive=form.cleaned_data.get(\"copy_subpages\"),\n parent=parent_page,\n update_slug=form.cleaned_data[\"new_slug\"],\n user=request.user,\n )\n new_page = action.execute(skip_permission_checks=True)\n else:\n action = CopyPageAction(\n page=page,\n recursive=form.cleaned_data.get(\"copy_subpages\"),\n to=parent_page,\n update_attrs={\n \"title\": form.cleaned_data[\"new_title\"],\n \"slug\": form.cleaned_data[\"new_slug\"],\n },\n keep_live=keep_live,\n user=request.user,\n )\n new_page = action.execute()\n\n # Give a success message back to the user\n if form.cleaned_data.get(\"copy_subpages\"):\n messages.success(\n request,\n _(\"Page '%(page_title)s' and %(subpages_count)s subpages copied.\")\n % {\n \"page_title\": page.specific_deferred.get_admin_display_title(),\n \"subpages_count\": new_page.get_descendants().count(),\n },\n )\n else:\n messages.success(\n request,\n _(\"Page '%(page_title)s' copied.\")\n % {\"page_title\": 
page.specific_deferred.get_admin_display_title()},\n )\n\n for fn in hooks.get_hooks(\"after_copy_page\"):\n result = fn(request, page, new_page)\n if hasattr(result, \"status_code\"):\n return result\n\n # Redirect to explore of parent page\n if next_url:\n return redirect(next_url)\n return redirect(\"wagtailadmin_explore\", parent_page.id)\n\n return TemplateResponse(\n request,\n \"wagtailadmin/pages/copy.html\",\n {\n \"page\": page,\n \"form\": form,\n \"next\": next_url,\n },\n )\n", "path": "wagtail/admin/views/pages/copy.py"}], "after_files": [{"content": "from django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import gettext as _\n\nfrom wagtail import hooks\nfrom wagtail.actions.copy_page import CopyPageAction\nfrom wagtail.actions.create_alias import CreatePageAliasAction\nfrom wagtail.admin import messages\nfrom wagtail.admin.auth import user_has_any_page_permission, user_passes_test\nfrom wagtail.admin.forms.pages import CopyForm\nfrom wagtail.admin.utils import get_valid_next_url_from_request\nfrom wagtail.models import Page\n\n\n@user_passes_test(user_has_any_page_permission)\ndef copy(request, page_id):\n page = Page.objects.get(id=page_id)\n\n # Parent page defaults to parent of source page\n parent_page = page.get_parent()\n\n # Check if the user has permission to publish subpages on the parent\n can_publish = parent_page.permissions_for_user(request.user).can_publish_subpage()\n\n form_class = getattr(page.specific_class, \"copy_form_class\", CopyForm)\n form = form_class(\n request.POST or None, user=request.user, page=page, can_publish=can_publish\n )\n\n next_url = get_valid_next_url_from_request(request)\n\n for fn in hooks.get_hooks(\"before_copy_page\"):\n result = fn(request, page)\n if hasattr(result, \"status_code\"):\n return result\n\n # Check if user is submitting\n if request.method == \"POST\":\n # Prefill parent_page in case the form is invalid (as prepopulated value for the form field,\n # because ModelChoiceField seems to not fall back to the user given value)\n parent_page = Page.objects.get(id=request.POST[\"new_parent_page\"])\n\n if form.is_valid():\n # Receive the parent page (this should never be empty)\n if form.cleaned_data[\"new_parent_page\"]:\n parent_page = form.cleaned_data[\"new_parent_page\"]\n\n # Re-check if the user has permission to publish subpages on the new parent\n can_publish = parent_page.permissions_for_user(\n request.user\n ).can_publish_subpage()\n keep_live = can_publish and form.cleaned_data.get(\"publish_copies\")\n\n # Copy the page\n # Note that only users who can publish in the new parent page can create an alias.\n # This is because alias pages must always match their original page's state.\n if can_publish and form.cleaned_data.get(\"alias\"):\n action = CreatePageAliasAction(\n page.specific,\n recursive=form.cleaned_data.get(\"copy_subpages\"),\n parent=parent_page,\n update_slug=form.cleaned_data[\"new_slug\"],\n user=request.user,\n )\n new_page = action.execute(skip_permission_checks=True)\n else:\n action = CopyPageAction(\n page=page,\n recursive=form.cleaned_data.get(\"copy_subpages\"),\n to=parent_page,\n update_attrs={\n \"title\": form.cleaned_data[\"new_title\"],\n \"slug\": form.cleaned_data[\"new_slug\"],\n },\n keep_live=keep_live,\n user=request.user,\n )\n new_page = action.execute()\n\n # Give a success message back to the user\n if form.cleaned_data.get(\"copy_subpages\"):\n messages.success(\n request,\n _(\"Page '%(page_title)s' 
and %(subpages_count)s subpages copied.\")\n % {\n \"page_title\": page.specific_deferred.get_admin_display_title(),\n \"subpages_count\": new_page.get_descendants().count(),\n },\n )\n else:\n messages.success(\n request,\n _(\"Page '%(page_title)s' copied.\")\n % {\"page_title\": page.specific_deferred.get_admin_display_title()},\n )\n\n for fn in hooks.get_hooks(\"after_copy_page\"):\n result = fn(request, page, new_page)\n if hasattr(result, \"status_code\"):\n return result\n\n # Redirect to explore of parent page\n if next_url:\n return redirect(next_url)\n return redirect(\"wagtailadmin_explore\", parent_page.id)\n\n return TemplateResponse(\n request,\n \"wagtailadmin/pages/copy.html\",\n {\n \"page\": page,\n \"form\": form,\n \"next\": next_url,\n },\n )\n", "path": "wagtail/admin/views/pages/copy.py"}]}
| 2,754 | 156 |
gh_patches_debug_30141
|
rasdani/github-patches
|
git_diff
|
SciTools__cartopy-442
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update srtm.py
prevent a bug when nx, ny fall on the sea
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/cartopy/io/srtm.py`
Content:
```
1 # (C) British Crown Copyright 2011 - 2012, Met Office
2 #
3 # This file is part of cartopy.
4 #
5 # cartopy is free software: you can redistribute it and/or modify it under
6 # the terms of the GNU Lesser General Public License as published by the
7 # Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # cartopy is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU Lesser General Public License for more details.
14 #
15 # You should have received a copy of the GNU Lesser General Public License
16 # along with cartopy. If not, see <http://www.gnu.org/licenses/>.
17
18
19 """
20 The Shuttle Radar Topography Mission (SRTM) is an international research
21 effort that obtained digital elevation models on a near-global scale from
22 56S to 60N, to generate the most complete high-resolution digital topographic
23 database of Earth prior to the release of the ASTER GDEM in 2009.
24
25 - Wikipedia (August 2012)
26
27 """
28 import json
29 import os
30
31 import numpy as np
32 import six
33
34 from cartopy import config
35 import cartopy.crs as ccrs
36 from cartopy.io import fh_getter, Downloader
37
38
39 def srtm(lon, lat):
40 """
41 Return (elevation, crs, extent) for the given longitude latitude.
42 Elevation is in meters.
43 """
44 fname = SRTM3_retrieve(lon, lat)
45 if fname is None:
46 raise ValueError('No srtm tile found for those coordinates.')
47 return read_SRTM3(fname)
48
49
50 def add_shading(elevation, azimuth, altitude):
51 """Adds shading to SRTM elevation data, using azimuth and altitude
52 of the sun.
53
54 :type elevation: numpy.ndarray
55 :param elevation: SRTM elevation data (in meters)
56 :type azimuth: float
57 :param azimuth: azimuth of the Sun (in degrees)
58 :type altitude: float
59 :param altitude: altitude of the Sun (in degrees)
60
61 :rtype: numpy.ndarray
62 :return: shaded SRTM relief map.
63 """
64 azimuth = np.deg2rad(azimuth)
65 altitude = np.deg2rad(altitude)
66 x, y = np.gradient(elevation)
67 slope = np.pi/2. - np.arctan(np.sqrt(x*x + y*y))
68 # -x here because of pixel orders in the SRTM tile
69 aspect = np.arctan2(-x, y)
70 shaded = np.sin(altitude) * np.sin(slope)\
71 + np.cos(altitude) * np.cos(slope)\
72 * np.cos((azimuth - np.pi/2.) - aspect)
73 return shaded
74
75
76 def fill_gaps(elevation, max_distance=10):
77 """Fills gaps in SRTM elevation data for which the distance from
78 missing pixel to nearest existing one is smaller than `max_distance`.
79
80 This function requires osgeo/gdal to work.
81
82 :type elevation: numpy.ndarray
83 :param elevation: SRTM elevation data (in meters)
84 :type max_distance: int
85 :param max_distance: maximal distance (in pixels) between a missing point
86 and the nearest valid one.
87
88 :rtype: numpy.ndarray
89 :return: SRTM elevation data with filled gaps..
90 """
91 # Lazily import osgeo - it is only an optional dependency for cartopy.
92 from osgeo import gdal
93 from osgeo import gdal_array
94
95 src_ds = gdal_array.OpenArray(elevation)
96 srcband = src_ds.GetRasterBand(1)
97 dstband = srcband
98 maskband = srcband
99 smoothing_iterations = 0
100 options = []
101 gdal.FillNodata(dstband, maskband,
102 max_distance, smoothing_iterations, options,
103 callback=None)
104 elevation = dstband.ReadAsArray()
105 return elevation
106
107
108 def srtm_composite(lon_min, lat_min, nx, ny):
109
110 # XXX nx and ny have got confused in the code (numpy array ordering?).
111 # However, the interface works well.
112
113 bottom_left_ll = (lon_min, lat_min)
114 shape = np.array([1201, 1201])
115 img = np.empty(shape * (nx, ny))
116
117 for i in range(nx):
118 for j in range(ny):
119 x_img_slice = slice(i * shape[0], (i + 1) * shape[0])
120 y_img_slice = slice(j * shape[1], (j + 1) * shape[1])
121
122 tile_img, crs, extent = srtm(bottom_left_ll[0] + j,
123 bottom_left_ll[1] + i)
124 img[x_img_slice, y_img_slice] = tile_img
125
126 extent = (bottom_left_ll[0], bottom_left_ll[0] + ny,
127 bottom_left_ll[1], bottom_left_ll[1] + nx)
128
129 return img, crs, extent
130
131
132 def read_SRTM3(fh):
133 fh, fname = fh_getter(fh, needs_filename=True)
134 if fname.endswith('.zip'):
135 from zipfile import ZipFile
136 zfh = ZipFile(fh, 'rb')
137 fh = zfh.open(os.path.basename(fname[:-4]), 'r')
138
139 elev = np.fromfile(fh, dtype=np.dtype('>i2'))
140 elev.shape = (1201, 1201)
141
142 fname = os.path.basename(fname)
143 y_dir, y, x_dir, x = fname[0], int(fname[1:3]), fname[3], int(fname[4:7])
144
145 if y_dir == 'S':
146 y *= -1
147
148 if x_dir == 'W':
149 x *= -1
150
151 # xxx extent may need to be wider by half a pixel
152 return elev[::-1, ...], ccrs.PlateCarree(), [x, x + 1, y, y + 1]
153
154
155 def SRTM3_retrieve(lon, lat):
156 x = '%s%03d' % ('E' if lon > 0 else 'W', abs(int(lon)))
157 y = '%s%02d' % ('N' if lat > 0 else 'S', abs(int(lat)))
158
159 srtm_downloader = Downloader.from_config(('SRTM', 'SRTM3'))
160 return srtm_downloader.path({'config': config, 'x': x, 'y': y})
161
162
163 class SRTM3Downloader(Downloader):
164 """
165 Provides a SRTM3 download mechanism.
166
167 """
168 FORMAT_KEYS = ('config', 'x', 'y')
169
170 _JSON_SRTM3_LOOKUP = os.path.join(os.path.dirname(__file__),
171 'srtm.json')
172 _SRTM3_LOOKUP_URL = json.load(open(_JSON_SRTM3_LOOKUP, 'r'))
173 """
174 The SRTM3 url lookup dictionary maps keys such as 'N43E043' to the url
175 of the file to download.
176
177 """
178
179 def __init__(self,
180 target_path_template,
181 pre_downloaded_path_template='',
182 ):
183 # adds some SRTM3 defaults to the __init__ of a Downloader
184 # namely, the URl is determined on the fly using the
185 # ``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary
186 Downloader.__init__(self, None,
187 target_path_template,
188 pre_downloaded_path_template)
189
190 def url(self, format_dict):
191 # override the url method, looking up the url from the
192 # ``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary
193 key = u'{y}{x}'.format(**format_dict)
194 url = SRTM3Downloader._SRTM3_LOOKUP_URL.get(key, None)
195 return url
196
197 def acquire_resource(self, target_path, format_dict):
198 from zipfile import ZipFile
199
200 target_dir = os.path.dirname(target_path)
201 if not os.path.isdir(target_dir):
202 os.makedirs(target_dir)
203
204 url = self.url(format_dict)
205
206 srtm_online = self._urlopen(url)
207 zfh = ZipFile(six.BytesIO(srtm_online.read()), 'r')
208
209 zip_member_path = u'{y}{x}.hgt'.format(**format_dict)
210 member = zfh.getinfo(zip_member_path)
211 with open(target_path, 'wb') as fh:
212 fh.write(zfh.open(member).read())
213
214 srtm_online.close()
215 zfh.close()
216
217 return target_path
218
219 @staticmethod
220 def _create_srtm3_dict():
221 """
222 Returns a dictionary mapping srtm filename to the URL of the file.
223
224 This is slow as it must query the SRTM server to identify the
225 continent from which the tile comes. Hence a json file with this
226 content exists in ``SRTM3Downloader._JSON_SRTM3_LOOKUP``.
227
228 The json file was created with::
229
230 import cartopy.io.srtm as srtm
231 import json
232 fh = open(srtm.SRTM3Downloader._JSON_SRTM3_LOOKUP, 'w')
233 json.dump(srtm.SRTM3Downloader._create_srtm3_dict(), fh)
234
235 """
236 # lazy imports. In most situations, these are not
237 # dependencies of cartopy.
238 if six.PY3:
239 from urllib.request import urlopen
240 else:
241 from urllib2 import urlopen
242 from BeautifulSoup import BeautifulSoup
243
244 files = {}
245
246 for continent in ['Australia', 'Africa', 'Eurasia', 'Islands',
247 'North_America', 'South_America']:
248
249 url = "http://dds.cr.usgs.gov/srtm/version2_1/SRTM3/%s" % continent
250 f = urlopen(url)
251 html = f.read()
252 soup = BeautifulSoup(html)
253
254 for link in soup('li'):
255 name = str(link.text)
256 if name != ' Parent Directory':
257 # remove the '.hgt.zip'
258 files[name[:-8]] = url + '/' + name
259 f.close()
260 return files
261
262 @classmethod
263 def default_downloader(cls):
264 """
265 Returns a typical downloader for this class. In general, this static
266 method is used to create the default configuration in cartopy.config
267
268 """
269 default_spec = ('SRTM', 'SRTM3', '{y}{x}.hgt')
270 target_path_template = os.path.join('{config[data_dir]}',
271 *default_spec)
272 pre_path_template = os.path.join('{config[pre_existing_data_dir]}',
273 *default_spec)
274 return cls(target_path_template=target_path_template,
275 pre_downloaded_path_template=pre_path_template)
276
277
278 # add a generic SRTM downloader to the config 'downloaders' section.
279 config['downloaders'].setdefault(('SRTM', 'SRTM3'),
280 SRTM3Downloader.default_downloader())
281
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/cartopy/io/srtm.py b/lib/cartopy/io/srtm.py
--- a/lib/cartopy/io/srtm.py
+++ b/lib/cartopy/io/srtm.py
@@ -119,14 +119,18 @@
x_img_slice = slice(i * shape[0], (i + 1) * shape[0])
y_img_slice = slice(j * shape[1], (j + 1) * shape[1])
- tile_img, crs, extent = srtm(bottom_left_ll[0] + j,
- bottom_left_ll[1] + i)
- img[x_img_slice, y_img_slice] = tile_img
+ try:
+ tile_img, _, _ = srtm(bottom_left_ll[0] + j,
+ bottom_left_ll[1] + i)
+ except ValueError:
+ img[x_img_slice, y_img_slice] = 0
+ else:
+ img[x_img_slice, y_img_slice] = tile_img
extent = (bottom_left_ll[0], bottom_left_ll[0] + ny,
bottom_left_ll[1], bottom_left_ll[1] + nx)
- return img, crs, extent
+ return img, ccrs.PlateCarree(), extent
def read_SRTM3(fh):
@@ -153,11 +157,22 @@
def SRTM3_retrieve(lon, lat):
+ """
+ Return the path of a .hgt file for the given SRTM location.
+
+ If no such .hgt file exists (because it is over the ocean)
+ None will be returned.
+
+ """
x = '%s%03d' % ('E' if lon > 0 else 'W', abs(int(lon)))
y = '%s%02d' % ('N' if lat > 0 else 'S', abs(int(lat)))
srtm_downloader = Downloader.from_config(('SRTM', 'SRTM3'))
- return srtm_downloader.path({'config': config, 'x': x, 'y': y})
+ params = {'config': config, 'x': x, 'y': y}
+ if srtm_downloader.url(params) is None:
+ return None
+ else:
+ return srtm_downloader.path({'config': config, 'x': x, 'y': y})
class SRTM3Downloader(Downloader):
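Editor's note: a hedged sketch of the behaviour the diff above aims for — after the patch, `SRTM3_retrieve` returns `None` over open water and `srtm_composite` zero-fills those tiles instead of raising. The coordinates below are illustrative assumptions, and running this would download SRTM tiles over the network:

```python
# Illustrative only; assumes network access and made-up coordinates.
import cartopy.io.srtm as srtm

# Open-ocean tile: no .hgt file exists, so the patched retrieve returns None
# instead of failing deep inside the downloader.
print(srtm.SRTM3_retrieve(lon=-30, lat=30))  # -> None

# Composite straddling a coastline: sea tiles are filled with zeros rather
# than aborting the whole composite with a ValueError.
elev, crs, extent = srtm.srtm_composite(lon_min=12, lat_min=44, nx=2, ny=2)
print(elev.shape, extent)
```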
|
{"golden_diff": "diff --git a/lib/cartopy/io/srtm.py b/lib/cartopy/io/srtm.py\n--- a/lib/cartopy/io/srtm.py\n+++ b/lib/cartopy/io/srtm.py\n@@ -119,14 +119,18 @@\n x_img_slice = slice(i * shape[0], (i + 1) * shape[0])\n y_img_slice = slice(j * shape[1], (j + 1) * shape[1])\n \n- tile_img, crs, extent = srtm(bottom_left_ll[0] + j,\n- bottom_left_ll[1] + i)\n- img[x_img_slice, y_img_slice] = tile_img\n+ try:\n+ tile_img, _, _ = srtm(bottom_left_ll[0] + j,\n+ bottom_left_ll[1] + i)\n+ except ValueError:\n+ img[x_img_slice, y_img_slice] = 0\n+ else:\n+ img[x_img_slice, y_img_slice] = tile_img\n \n extent = (bottom_left_ll[0], bottom_left_ll[0] + ny,\n bottom_left_ll[1], bottom_left_ll[1] + nx)\n \n- return img, crs, extent\n+ return img, ccrs.PlateCarree(), extent\n \n \n def read_SRTM3(fh):\n@@ -153,11 +157,22 @@\n \n \n def SRTM3_retrieve(lon, lat):\n+ \"\"\"\n+ Return the path of a .hgt file for the given SRTM location.\n+\n+ If no such .hgt file exists (because it is over the ocean)\n+ None will be returned.\n+\n+ \"\"\"\n x = '%s%03d' % ('E' if lon > 0 else 'W', abs(int(lon)))\n y = '%s%02d' % ('N' if lat > 0 else 'S', abs(int(lat)))\n \n srtm_downloader = Downloader.from_config(('SRTM', 'SRTM3'))\n- return srtm_downloader.path({'config': config, 'x': x, 'y': y})\n+ params = {'config': config, 'x': x, 'y': y}\n+ if srtm_downloader.url(params) is None:\n+ return None\n+ else:\n+ return srtm_downloader.path({'config': config, 'x': x, 'y': y})\n \n \n class SRTM3Downloader(Downloader):\n", "issue": "Update srtm.py\nprevent from bug when nx,ny fall on the sea\n\n", "before_files": [{"content": "# (C) British Crown Copyright 2011 - 2012, Met Office\n#\n# This file is part of cartopy.\n#\n# cartopy is free software: you can redistribute it and/or modify it under\n# the terms of the GNU Lesser General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# cartopy is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public License\n# along with cartopy. 
If not, see <http://www.gnu.org/licenses/>.\n\n\n\"\"\"\nThe Shuttle Radar Topography Mission (SRTM) is an international research\neffort that obtained digital elevation models on a near-global scale from\n56S to 60N, to generate the most complete high-resolution digital topographic\ndatabase of Earth prior to the release of the ASTER GDEM in 2009.\n\n - Wikipedia (August 2012)\n\n\"\"\"\nimport json\nimport os\n\nimport numpy as np\nimport six\n\nfrom cartopy import config\nimport cartopy.crs as ccrs\nfrom cartopy.io import fh_getter, Downloader\n\n\ndef srtm(lon, lat):\n \"\"\"\n Return (elevation, crs, extent) for the given longitude latitude.\n Elevation is in meters.\n \"\"\"\n fname = SRTM3_retrieve(lon, lat)\n if fname is None:\n raise ValueError('No srtm tile found for those coordinates.')\n return read_SRTM3(fname)\n\n\ndef add_shading(elevation, azimuth, altitude):\n \"\"\"Adds shading to SRTM elevation data, using azimuth and altitude\n of the sun.\n\n :type elevation: numpy.ndarray\n :param elevation: SRTM elevation data (in meters)\n :type azimuth: float\n :param azimuth: azimuth of the Sun (in degrees)\n :type altitude: float\n :param altitude: altitude of the Sun (in degrees)\n\n :rtype: numpy.ndarray\n :return: shaded SRTM relief map.\n \"\"\"\n azimuth = np.deg2rad(azimuth)\n altitude = np.deg2rad(altitude)\n x, y = np.gradient(elevation)\n slope = np.pi/2. - np.arctan(np.sqrt(x*x + y*y))\n # -x here because of pixel orders in the SRTM tile\n aspect = np.arctan2(-x, y)\n shaded = np.sin(altitude) * np.sin(slope)\\\n + np.cos(altitude) * np.cos(slope)\\\n * np.cos((azimuth - np.pi/2.) - aspect)\n return shaded\n\n\ndef fill_gaps(elevation, max_distance=10):\n \"\"\"Fills gaps in SRTM elevation data for which the distance from\n missing pixel to nearest existing one is smaller than `max_distance`.\n\n This function requires osgeo/gdal to work.\n\n :type elevation: numpy.ndarray\n :param elevation: SRTM elevation data (in meters)\n :type max_distance: int\n :param max_distance: maximal distance (in pixels) between a missing point\n and the nearest valid one.\n\n :rtype: numpy.ndarray\n :return: SRTM elevation data with filled gaps..\n \"\"\"\n # Lazily import osgeo - it is only an optional dependency for cartopy.\n from osgeo import gdal\n from osgeo import gdal_array\n\n src_ds = gdal_array.OpenArray(elevation)\n srcband = src_ds.GetRasterBand(1)\n dstband = srcband\n maskband = srcband\n smoothing_iterations = 0\n options = []\n gdal.FillNodata(dstband, maskband,\n max_distance, smoothing_iterations, options,\n callback=None)\n elevation = dstband.ReadAsArray()\n return elevation\n\n\ndef srtm_composite(lon_min, lat_min, nx, ny):\n\n # XXX nx and ny have got confused in the code (numpy array ordering?).\n # However, the interface works well.\n\n bottom_left_ll = (lon_min, lat_min)\n shape = np.array([1201, 1201])\n img = np.empty(shape * (nx, ny))\n\n for i in range(nx):\n for j in range(ny):\n x_img_slice = slice(i * shape[0], (i + 1) * shape[0])\n y_img_slice = slice(j * shape[1], (j + 1) * shape[1])\n\n tile_img, crs, extent = srtm(bottom_left_ll[0] + j,\n bottom_left_ll[1] + i)\n img[x_img_slice, y_img_slice] = tile_img\n\n extent = (bottom_left_ll[0], bottom_left_ll[0] + ny,\n bottom_left_ll[1], bottom_left_ll[1] + nx)\n\n return img, crs, extent\n\n\ndef read_SRTM3(fh):\n fh, fname = fh_getter(fh, needs_filename=True)\n if fname.endswith('.zip'):\n from zipfile import ZipFile\n zfh = ZipFile(fh, 'rb')\n fh = zfh.open(os.path.basename(fname[:-4]), 'r')\n\n elev = 
np.fromfile(fh, dtype=np.dtype('>i2'))\n elev.shape = (1201, 1201)\n\n fname = os.path.basename(fname)\n y_dir, y, x_dir, x = fname[0], int(fname[1:3]), fname[3], int(fname[4:7])\n\n if y_dir == 'S':\n y *= -1\n\n if x_dir == 'W':\n x *= -1\n\n # xxx extent may need to be wider by half a pixel\n return elev[::-1, ...], ccrs.PlateCarree(), [x, x + 1, y, y + 1]\n\n\ndef SRTM3_retrieve(lon, lat):\n x = '%s%03d' % ('E' if lon > 0 else 'W', abs(int(lon)))\n y = '%s%02d' % ('N' if lat > 0 else 'S', abs(int(lat)))\n\n srtm_downloader = Downloader.from_config(('SRTM', 'SRTM3'))\n return srtm_downloader.path({'config': config, 'x': x, 'y': y})\n\n\nclass SRTM3Downloader(Downloader):\n \"\"\"\n Provides a SRTM3 download mechanism.\n\n \"\"\"\n FORMAT_KEYS = ('config', 'x', 'y')\n\n _JSON_SRTM3_LOOKUP = os.path.join(os.path.dirname(__file__),\n 'srtm.json')\n _SRTM3_LOOKUP_URL = json.load(open(_JSON_SRTM3_LOOKUP, 'r'))\n \"\"\"\n The SRTM3 url lookup dictionary maps keys such as 'N43E043' to the url\n of the file to download.\n\n \"\"\"\n\n def __init__(self,\n target_path_template,\n pre_downloaded_path_template='',\n ):\n # adds some SRTM3 defaults to the __init__ of a Downloader\n # namely, the URl is determined on the fly using the\n # ``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary\n Downloader.__init__(self, None,\n target_path_template,\n pre_downloaded_path_template)\n\n def url(self, format_dict):\n # override the url method, looking up the url from the\n # ``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary\n key = u'{y}{x}'.format(**format_dict)\n url = SRTM3Downloader._SRTM3_LOOKUP_URL.get(key, None)\n return url\n\n def acquire_resource(self, target_path, format_dict):\n from zipfile import ZipFile\n\n target_dir = os.path.dirname(target_path)\n if not os.path.isdir(target_dir):\n os.makedirs(target_dir)\n\n url = self.url(format_dict)\n\n srtm_online = self._urlopen(url)\n zfh = ZipFile(six.BytesIO(srtm_online.read()), 'r')\n\n zip_member_path = u'{y}{x}.hgt'.format(**format_dict)\n member = zfh.getinfo(zip_member_path)\n with open(target_path, 'wb') as fh:\n fh.write(zfh.open(member).read())\n\n srtm_online.close()\n zfh.close()\n\n return target_path\n\n @staticmethod\n def _create_srtm3_dict():\n \"\"\"\n Returns a dictionary mapping srtm filename to the URL of the file.\n\n This is slow as it must query the SRTM server to identify the\n continent from which the tile comes. Hence a json file with this\n content exists in ``SRTM3Downloader._JSON_SRTM3_LOOKUP``.\n\n The json file was created with::\n\n import cartopy.io.srtm as srtm\n import json\n fh = open(srtm.SRTM3Downloader._JSON_SRTM3_LOOKUP, 'w')\n json.dump(srtm.SRTM3Downloader._create_srtm3_dict(), fh)\n\n \"\"\"\n # lazy imports. In most situations, these are not\n # dependencies of cartopy.\n if six.PY3:\n from urllib.request import urlopen\n else:\n from urllib2 import urlopen\n from BeautifulSoup import BeautifulSoup\n\n files = {}\n\n for continent in ['Australia', 'Africa', 'Eurasia', 'Islands',\n 'North_America', 'South_America']:\n\n url = \"http://dds.cr.usgs.gov/srtm/version2_1/SRTM3/%s\" % continent\n f = urlopen(url)\n html = f.read()\n soup = BeautifulSoup(html)\n\n for link in soup('li'):\n name = str(link.text)\n if name != ' Parent Directory':\n # remove the '.hgt.zip'\n files[name[:-8]] = url + '/' + name\n f.close()\n return files\n\n @classmethod\n def default_downloader(cls):\n \"\"\"\n Returns a typical downloader for this class. 
In general, this static\n method is used to create the default configuration in cartopy.config\n\n \"\"\"\n default_spec = ('SRTM', 'SRTM3', '{y}{x}.hgt')\n target_path_template = os.path.join('{config[data_dir]}',\n *default_spec)\n pre_path_template = os.path.join('{config[pre_existing_data_dir]}',\n *default_spec)\n return cls(target_path_template=target_path_template,\n pre_downloaded_path_template=pre_path_template)\n\n\n# add a generic SRTM downloader to the config 'downloaders' section.\nconfig['downloaders'].setdefault(('SRTM', 'SRTM3'),\n SRTM3Downloader.default_downloader())\n", "path": "lib/cartopy/io/srtm.py"}], "after_files": [{"content": "# (C) British Crown Copyright 2011 - 2012, Met Office\n#\n# This file is part of cartopy.\n#\n# cartopy is free software: you can redistribute it and/or modify it under\n# the terms of the GNU Lesser General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# cartopy is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public License\n# along with cartopy. If not, see <http://www.gnu.org/licenses/>.\n\n\n\"\"\"\nThe Shuttle Radar Topography Mission (SRTM) is an international research\neffort that obtained digital elevation models on a near-global scale from\n56S to 60N, to generate the most complete high-resolution digital topographic\ndatabase of Earth prior to the release of the ASTER GDEM in 2009.\n\n - Wikipedia (August 2012)\n\n\"\"\"\nimport json\nimport os\n\nimport numpy as np\nimport six\n\nfrom cartopy import config\nimport cartopy.crs as ccrs\nfrom cartopy.io import fh_getter, Downloader\n\n\ndef srtm(lon, lat):\n \"\"\"\n Return (elevation, crs, extent) for the given longitude latitude.\n Elevation is in meters.\n \"\"\"\n fname = SRTM3_retrieve(lon, lat)\n if fname is None:\n raise ValueError('No srtm tile found for those coordinates.')\n return read_SRTM3(fname)\n\n\ndef add_shading(elevation, azimuth, altitude):\n \"\"\"Adds shading to SRTM elevation data, using azimuth and altitude\n of the sun.\n\n :type elevation: numpy.ndarray\n :param elevation: SRTM elevation data (in meters)\n :type azimuth: float\n :param azimuth: azimuth of the Sun (in degrees)\n :type altitude: float\n :param altitude: altitude of the Sun (in degrees)\n\n :rtype: numpy.ndarray\n :return: shaded SRTM relief map.\n \"\"\"\n azimuth = np.deg2rad(azimuth)\n altitude = np.deg2rad(altitude)\n x, y = np.gradient(elevation)\n slope = np.pi/2. - np.arctan(np.sqrt(x*x + y*y))\n # -x here because of pixel orders in the SRTM tile\n aspect = np.arctan2(-x, y)\n shaded = np.sin(altitude) * np.sin(slope)\\\n + np.cos(altitude) * np.cos(slope)\\\n * np.cos((azimuth - np.pi/2.) 
- aspect)\n return shaded\n\n\ndef fill_gaps(elevation, max_distance=10):\n \"\"\"Fills gaps in SRTM elevation data for which the distance from\n missing pixel to nearest existing one is smaller than `max_distance`.\n\n This function requires osgeo/gdal to work.\n\n :type elevation: numpy.ndarray\n :param elevation: SRTM elevation data (in meters)\n :type max_distance: int\n :param max_distance: maximal distance (in pixels) between a missing point\n and the nearest valid one.\n\n :rtype: numpy.ndarray\n :return: SRTM elevation data with filled gaps..\n \"\"\"\n # Lazily import osgeo - it is only an optional dependency for cartopy.\n from osgeo import gdal\n from osgeo import gdal_array\n\n src_ds = gdal_array.OpenArray(elevation)\n srcband = src_ds.GetRasterBand(1)\n dstband = srcband\n maskband = srcband\n smoothing_iterations = 0\n options = []\n gdal.FillNodata(dstband, maskband,\n max_distance, smoothing_iterations, options,\n callback=None)\n elevation = dstband.ReadAsArray()\n return elevation\n\n\ndef srtm_composite(lon_min, lat_min, nx, ny):\n\n # XXX nx and ny have got confused in the code (numpy array ordering?).\n # However, the interface works well.\n\n bottom_left_ll = (lon_min, lat_min)\n shape = np.array([1201, 1201])\n img = np.empty(shape * (nx, ny))\n\n for i in range(nx):\n for j in range(ny):\n x_img_slice = slice(i * shape[0], (i + 1) * shape[0])\n y_img_slice = slice(j * shape[1], (j + 1) * shape[1])\n\n try:\n tile_img, _, _ = srtm(bottom_left_ll[0] + j,\n bottom_left_ll[1] + i)\n except ValueError:\n img[x_img_slice, y_img_slice] = 0\n else:\n img[x_img_slice, y_img_slice] = tile_img\n\n extent = (bottom_left_ll[0], bottom_left_ll[0] + ny,\n bottom_left_ll[1], bottom_left_ll[1] + nx)\n\n return img, ccrs.PlateCarree(), extent\n\n\ndef read_SRTM3(fh):\n fh, fname = fh_getter(fh, needs_filename=True)\n if fname.endswith('.zip'):\n from zipfile import ZipFile\n zfh = ZipFile(fh, 'rb')\n fh = zfh.open(os.path.basename(fname[:-4]), 'r')\n\n elev = np.fromfile(fh, dtype=np.dtype('>i2'))\n elev.shape = (1201, 1201)\n\n fname = os.path.basename(fname)\n y_dir, y, x_dir, x = fname[0], int(fname[1:3]), fname[3], int(fname[4:7])\n\n if y_dir == 'S':\n y *= -1\n\n if x_dir == 'W':\n x *= -1\n\n # xxx extent may need to be wider by half a pixel\n return elev[::-1, ...], ccrs.PlateCarree(), [x, x + 1, y, y + 1]\n\n\ndef SRTM3_retrieve(lon, lat):\n \"\"\"\n Return the path of a .hgt file for the given SRTM location.\n\n If no such .hgt file exists (because it is over the ocean)\n None will be returned.\n\n \"\"\"\n x = '%s%03d' % ('E' if lon > 0 else 'W', abs(int(lon)))\n y = '%s%02d' % ('N' if lat > 0 else 'S', abs(int(lat)))\n\n srtm_downloader = Downloader.from_config(('SRTM', 'SRTM3'))\n params = {'config': config, 'x': x, 'y': y}\n if srtm_downloader.url(params) is None:\n return None\n else:\n return srtm_downloader.path({'config': config, 'x': x, 'y': y})\n\n\nclass SRTM3Downloader(Downloader):\n \"\"\"\n Provides a SRTM3 download mechanism.\n\n \"\"\"\n FORMAT_KEYS = ('config', 'x', 'y')\n\n _JSON_SRTM3_LOOKUP = os.path.join(os.path.dirname(__file__),\n 'srtm.json')\n _SRTM3_LOOKUP_URL = json.load(open(_JSON_SRTM3_LOOKUP, 'r'))\n \"\"\"\n The SRTM3 url lookup dictionary maps keys such as 'N43E043' to the url\n of the file to download.\n\n \"\"\"\n\n def __init__(self,\n target_path_template,\n pre_downloaded_path_template='',\n ):\n # adds some SRTM3 defaults to the __init__ of a Downloader\n # namely, the URl is determined on the fly using the\n # 
``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary\n Downloader.__init__(self, None,\n target_path_template,\n pre_downloaded_path_template)\n\n def url(self, format_dict):\n # override the url method, looking up the url from the\n # ``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary\n key = u'{y}{x}'.format(**format_dict)\n url = SRTM3Downloader._SRTM3_LOOKUP_URL.get(key, None)\n return url\n\n def acquire_resource(self, target_path, format_dict):\n from zipfile import ZipFile\n\n target_dir = os.path.dirname(target_path)\n if not os.path.isdir(target_dir):\n os.makedirs(target_dir)\n\n url = self.url(format_dict)\n\n srtm_online = self._urlopen(url)\n zfh = ZipFile(six.BytesIO(srtm_online.read()), 'r')\n\n zip_member_path = u'{y}{x}.hgt'.format(**format_dict)\n member = zfh.getinfo(zip_member_path)\n with open(target_path, 'wb') as fh:\n fh.write(zfh.open(member).read())\n\n srtm_online.close()\n zfh.close()\n\n return target_path\n\n @staticmethod\n def _create_srtm3_dict():\n \"\"\"\n Returns a dictionary mapping srtm filename to the URL of the file.\n\n This is slow as it must query the SRTM server to identify the\n continent from which the tile comes. Hence a json file with this\n content exists in ``SRTM3Downloader._JSON_SRTM3_LOOKUP``.\n\n The json file was created with::\n\n import cartopy.io.srtm as srtm\n import json\n fh = open(srtm.SRTM3Downloader._JSON_SRTM3_LOOKUP, 'w')\n json.dump(srtm.SRTM3Downloader._create_srtm3_dict(), fh)\n\n \"\"\"\n # lazy imports. In most situations, these are not\n # dependencies of cartopy.\n if six.PY3:\n from urllib.request import urlopen\n else:\n from urllib2 import urlopen\n from BeautifulSoup import BeautifulSoup\n\n files = {}\n\n for continent in ['Australia', 'Africa', 'Eurasia', 'Islands',\n 'North_America', 'South_America']:\n\n url = \"http://dds.cr.usgs.gov/srtm/version2_1/SRTM3/%s\" % continent\n f = urlopen(url)\n html = f.read()\n soup = BeautifulSoup(html)\n\n for link in soup('li'):\n name = str(link.text)\n if name != ' Parent Directory':\n # remove the '.hgt.zip'\n files[name[:-8]] = url + '/' + name\n f.close()\n return files\n\n @classmethod\n def default_downloader(cls):\n \"\"\"\n Returns a typical downloader for this class. In general, this static\n method is used to create the default configuration in cartopy.config\n\n \"\"\"\n default_spec = ('SRTM', 'SRTM3', '{y}{x}.hgt')\n target_path_template = os.path.join('{config[data_dir]}',\n *default_spec)\n pre_path_template = os.path.join('{config[pre_existing_data_dir]}',\n *default_spec)\n return cls(target_path_template=target_path_template,\n pre_downloaded_path_template=pre_path_template)\n\n\n# add a generic SRTM downloader to the config 'downloaders' section.\nconfig['downloaders'].setdefault(('SRTM', 'SRTM3'),\n SRTM3Downloader.default_downloader())\n", "path": "lib/cartopy/io/srtm.py"}]}
| 3,467 | 556 |
gh_patches_debug_21037
|
rasdani/github-patches
|
git_diff
|
interactions-py__interactions.py-168
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ephemeral (hidden) Messages now support embeds
Whenever an embed is sent with `hidden=True`, a warning is printed: `Embed/File is not supported for hidden!`. But ephemeral messages now support embeds.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `discord_slash/context.py`
Content:
```
1 import typing
2 import asyncio
3 from warnings import warn
4
5 import discord
6 from contextlib import suppress
7 from discord.ext import commands
8 from . import http
9 from . import error
10 from . import model
11
12
13 class SlashContext:
14 """
15 Context of the slash command.\n
16 Kinda similar with discord.ext.commands.Context.
17
18 .. warning::
19 Do not manually init this model.
20
21 :ivar message: Message that invoked the slash command.
22 :ivar name: Name of the command.
23 :ivar args: List of processed arguments invoked with the command.
24 :ivar kwargs: Dictionary of processed arguments invoked with the command.
25 :ivar subcommand_name: Subcommand of the command.
26 :ivar subcommand_group: Subcommand group of the command.
27 :ivar interaction_id: Interaction ID of the command message.
28 :ivar command_id: ID of the command.
29 :ivar bot: discord.py client.
30 :ivar _http: :class:`.http.SlashCommandRequest` of the client.
31 :ivar _logger: Logger instance.
32 :ivar deferred: Whether the command is current deferred (loading state)
33 :ivar _deferred_hidden: Internal var to check that state stays the same
34 :ivar responded: Whether you have responded with a message to the interaction.
35 :ivar guild_id: Guild ID of the command message. If the command was invoked in DM, then it is ``None``
36 :ivar author_id: User ID representing author of the command message.
37 :ivar channel_id: Channel ID representing channel of the command message.
38 :ivar author: User or Member instance of the command invoke.
39 """
40
41 def __init__(self,
42 _http: http.SlashCommandRequest,
43 _json: dict,
44 _discord: typing.Union[discord.Client, commands.Bot],
45 logger):
46 self.__token = _json["token"]
47 self.message = None # Should be set later.
48 self.name = self.command = self.invoked_with = _json["data"]["name"]
49 self.args = []
50 self.kwargs = {}
51 self.subcommand_name = self.invoked_subcommand = self.subcommand_passed = None
52 self.subcommand_group = self.invoked_subcommand_group = self.subcommand_group_passed = None
53 self.interaction_id = _json["id"]
54 self.command_id = _json["data"]["id"]
55 self._http = _http
56 self.bot = _discord
57 self._logger = logger
58 self.deferred = False
59 self.responded = False
60 self._deferred_hidden = False # To check if the patch to the deferred response matches
61 self.guild_id = int(_json["guild_id"]) if "guild_id" in _json.keys() else None
62 self.author_id = int(_json["member"]["user"]["id"] if "member" in _json.keys() else _json["user"]["id"])
63 self.channel_id = int(_json["channel_id"])
64 if self.guild:
65 self.author = discord.Member(data=_json["member"], state=self.bot._connection, guild=self.guild)
66 elif self.guild_id:
67 self.author = discord.User(data=_json["member"]["user"], state=self.bot._connection)
68 else:
69 self.author = discord.User(data=_json["user"], state=self.bot._connection)
70
71 @property
72 def _deffered_hidden(self):
73 warn("`_deffered_hidden` as been renamed to `_deferred_hidden`.", DeprecationWarning, stacklevel=2)
74 return self._deferred_hidden
75
76 @_deffered_hidden.setter
77 def _deffered_hidden(self, value):
78 warn("`_deffered_hidden` as been renamed to `_deferred_hidden`.", DeprecationWarning, stacklevel=2)
79 self._deferred_hidden = value
80
81 @property
82 def deffered(self):
83 warn("`deffered` as been renamed to `deferred`.", DeprecationWarning, stacklevel=2)
84 return self.deferred
85
86 @deffered.setter
87 def deffered(self, value):
88 warn("`deffered` as been renamed to `deferred`.", DeprecationWarning, stacklevel=2)
89 self.deferred = value
90
91 @property
92 def guild(self) -> typing.Optional[discord.Guild]:
93 """
94 Guild instance of the command invoke. If the command was invoked in DM, then it is ``None``
95
96 :return: Optional[discord.Guild]
97 """
98 return self.bot.get_guild(self.guild_id) if self.guild_id else None
99
100 @property
101 def channel(self) -> typing.Optional[typing.Union[discord.TextChannel, discord.DMChannel]]:
102 """
103 Channel instance of the command invoke.
104
105 :return: Optional[Union[discord.abc.GuildChannel, discord.abc.PrivateChannel]]
106 """
107 return self.bot.get_channel(self.channel_id)
108
109 async def defer(self, hidden: bool = False):
110 """
111 'Defers' the response, showing a loading state to the user
112
113 :param hidden: Whether the deferred response should be ephemeral . Default ``False``.
114 """
115 if self.deferred or self.responded:
116 raise error.AlreadyResponded("You have already responded to this command!")
117 base = {"type": 5}
118 if hidden:
119 base["data"] = {"flags": 64}
120 self._deferred_hidden = True
121 await self._http.post_initial_response(base, self.interaction_id, self.__token)
122 self.deferred = True
123
124 async def send(self,
125 content: str = "", *,
126 embed: discord.Embed = None,
127 embeds: typing.List[discord.Embed] = None,
128 tts: bool = False,
129 file: discord.File = None,
130 files: typing.List[discord.File] = None,
131 allowed_mentions: discord.AllowedMentions = None,
132 hidden: bool = False,
133 delete_after: float = None) -> model.SlashMessage:
134 """
135 Sends response of the slash command.
136
137 .. note::
138 - Param ``hidden`` doesn't support embed and file.
139
140 .. warning::
141 - Since Release 1.0.9, this is completely changed. If you are migrating from older version, please make sure to fix the usage.
142 - You can't use both ``embed`` and ``embeds`` at the same time, also applies to ``file`` and ``files``.
143 - If you send files in the initial response, this will defer if it's not been deferred, and then PATCH with the message
144
145 :param content: Content of the response.
146 :type content: str
147 :param embed: Embed of the response.
148 :type embed: discord.Embed
149 :param embeds: Embeds of the response. Maximum 10.
150 :type embeds: List[discord.Embed]
151 :param tts: Whether to speak message using tts. Default ``False``.
152 :type tts: bool
153 :param file: File to send.
154 :type file: discord.File
155 :param files: Files to send.
156 :type files: List[discord.File]
157 :param allowed_mentions: AllowedMentions of the message.
158 :type allowed_mentions: discord.AllowedMentions
159 :param hidden: Whether the message is hidden, which means message content will only be seen to the author.
160 :type hidden: bool
161 :param delete_after: If provided, the number of seconds to wait in the background before deleting the message we just sent. If the deletion fails, then it is silently ignored.
162 :type delete_after: float
163 :return: Union[discord.Message, dict]
164 """
165 if embed and embeds:
166 raise error.IncorrectFormat("You can't use both `embed` and `embeds`!")
167 if embed:
168 embeds = [embed]
169 if embeds:
170 if not isinstance(embeds, list):
171 raise error.IncorrectFormat("Provide a list of embeds.")
172 elif len(embeds) > 10:
173 raise error.IncorrectFormat("Do not provide more than 10 embeds.")
174 if file and files:
175 raise error.IncorrectFormat("You can't use both `file` and `files`!")
176 if file:
177 files = [file]
178 if delete_after and hidden:
179 raise error.IncorrectFormat("You can't delete a hidden message!")
180
181 base = {
182 "content": content,
183 "tts": tts,
184 "embeds": [x.to_dict() for x in embeds] if embeds else [],
185 "allowed_mentions": allowed_mentions.to_dict() if allowed_mentions
186 else self.bot.allowed_mentions.to_dict() if self.bot.allowed_mentions else {}
187 }
188 if hidden:
189 if embeds or files:
190 self._logger.warning("Embed/File is not supported for `hidden`!")
191 base["flags"] = 64
192
193 initial_message = False
194 if not self.responded:
195 initial_message = True
196 if files and not self.deferred:
197 await self.defer(hidden=hidden)
198 if self.deferred:
199 if self._deferred_hidden != hidden:
200 self._logger.warning(
201 "Deferred response might not be what you set it to! (hidden / visible) "
202 "This is because it was deferred in a different state."
203 )
204 resp = await self._http.edit(base, self.__token, files=files)
205 self.deferred = False
206 else:
207 json_data = {
208 "type": 4,
209 "data": base
210 }
211 await self._http.post_initial_response(json_data, self.interaction_id, self.__token)
212 if not hidden:
213 resp = await self._http.edit({}, self.__token)
214 else:
215 resp = {}
216 self.responded = True
217 else:
218 resp = await self._http.post_followup(base, self.__token, files=files)
219 if files:
220 for file in files:
221 file.close()
222 if not hidden:
223 smsg = model.SlashMessage(state=self.bot._connection,
224 data=resp,
225 channel=self.channel or discord.Object(id=self.channel_id),
226 _http=self._http,
227 interaction_token=self.__token)
228 if delete_after:
229 self.bot.loop.create_task(smsg.delete(delay=delete_after))
230 if initial_message:
231 self.message = smsg
232 return smsg
233 else:
234 return resp
235
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/discord_slash/context.py b/discord_slash/context.py
--- a/discord_slash/context.py
+++ b/discord_slash/context.py
@@ -134,9 +134,6 @@
"""
Sends response of the slash command.
- .. note::
- - Param ``hidden`` doesn't support embed and file.
-
.. warning::
- Since Release 1.0.9, this is completely changed. If you are migrating from older version, please make sure to fix the usage.
- You can't use both ``embed`` and ``embeds`` at the same time, also applies to ``file`` and ``files``.
@@ -186,8 +183,6 @@
else self.bot.allowed_mentions.to_dict() if self.bot.allowed_mentions else {}
}
if hidden:
- if embeds or files:
- self._logger.warning("Embed/File is not supported for `hidden`!")
base["flags"] = 64
initial_message = False
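Editor's note: a short usage sketch of what removing the warning enables — an ephemeral (hidden) response carrying an embed. The bot wiring and command name are assumptions, not part of the patch:

```python
# Hypothetical command; only ctx.send(embed=..., hidden=True) is the point here.
import discord
from discord.ext import commands
from discord_slash import SlashCommand, SlashContext

bot = commands.Bot(command_prefix="!")
slash = SlashCommand(bot)

@slash.slash(name="secret")
async def secret(ctx: SlashContext):
    embed = discord.Embed(title="Only you can see this")
    # With the patch, this no longer logs "Embed/File is not supported for `hidden`!"
    await ctx.send(embed=embed, hidden=True)

# bot.run("TOKEN")  # token intentionally omitted
```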
|
{"golden_diff": "diff --git a/discord_slash/context.py b/discord_slash/context.py\n--- a/discord_slash/context.py\n+++ b/discord_slash/context.py\n@@ -134,9 +134,6 @@\n \"\"\"\n Sends response of the slash command.\n \n- .. note::\n- - Param ``hidden`` doesn't support embed and file.\n-\n .. warning::\n - Since Release 1.0.9, this is completely changed. If you are migrating from older version, please make sure to fix the usage.\n - You can't use both ``embed`` and ``embeds`` at the same time, also applies to ``file`` and ``files``.\n@@ -186,8 +183,6 @@\n else self.bot.allowed_mentions.to_dict() if self.bot.allowed_mentions else {}\n }\n if hidden:\n- if embeds or files:\n- self._logger.warning(\"Embed/File is not supported for `hidden`!\")\n base[\"flags\"] = 64\n \n initial_message = False\n", "issue": "Ephemeral (hidden) Messages now support embeds\nWhenever an embed is sent with the `hidden=True`, a warning is printed : `Embed/File is not supported for hidden!`. But ephemeral messages now support embeds.\n", "before_files": [{"content": "import typing\nimport asyncio\nfrom warnings import warn\n\nimport discord\nfrom contextlib import suppress\nfrom discord.ext import commands\nfrom . import http\nfrom . import error\nfrom . import model\n\n\nclass SlashContext:\n \"\"\"\n Context of the slash command.\\n\n Kinda similar with discord.ext.commands.Context.\n\n .. warning::\n Do not manually init this model.\n\n :ivar message: Message that invoked the slash command.\n :ivar name: Name of the command.\n :ivar args: List of processed arguments invoked with the command.\n :ivar kwargs: Dictionary of processed arguments invoked with the command.\n :ivar subcommand_name: Subcommand of the command.\n :ivar subcommand_group: Subcommand group of the command.\n :ivar interaction_id: Interaction ID of the command message.\n :ivar command_id: ID of the command.\n :ivar bot: discord.py client.\n :ivar _http: :class:`.http.SlashCommandRequest` of the client.\n :ivar _logger: Logger instance.\n :ivar deferred: Whether the command is current deferred (loading state)\n :ivar _deferred_hidden: Internal var to check that state stays the same\n :ivar responded: Whether you have responded with a message to the interaction.\n :ivar guild_id: Guild ID of the command message. 
If the command was invoked in DM, then it is ``None``\n :ivar author_id: User ID representing author of the command message.\n :ivar channel_id: Channel ID representing channel of the command message.\n :ivar author: User or Member instance of the command invoke.\n \"\"\"\n\n def __init__(self,\n _http: http.SlashCommandRequest,\n _json: dict,\n _discord: typing.Union[discord.Client, commands.Bot],\n logger):\n self.__token = _json[\"token\"]\n self.message = None # Should be set later.\n self.name = self.command = self.invoked_with = _json[\"data\"][\"name\"]\n self.args = []\n self.kwargs = {}\n self.subcommand_name = self.invoked_subcommand = self.subcommand_passed = None\n self.subcommand_group = self.invoked_subcommand_group = self.subcommand_group_passed = None\n self.interaction_id = _json[\"id\"]\n self.command_id = _json[\"data\"][\"id\"]\n self._http = _http\n self.bot = _discord\n self._logger = logger\n self.deferred = False\n self.responded = False\n self._deferred_hidden = False # To check if the patch to the deferred response matches\n self.guild_id = int(_json[\"guild_id\"]) if \"guild_id\" in _json.keys() else None\n self.author_id = int(_json[\"member\"][\"user\"][\"id\"] if \"member\" in _json.keys() else _json[\"user\"][\"id\"])\n self.channel_id = int(_json[\"channel_id\"])\n if self.guild:\n self.author = discord.Member(data=_json[\"member\"], state=self.bot._connection, guild=self.guild)\n elif self.guild_id:\n self.author = discord.User(data=_json[\"member\"][\"user\"], state=self.bot._connection)\n else:\n self.author = discord.User(data=_json[\"user\"], state=self.bot._connection)\n\n @property\n def _deffered_hidden(self):\n warn(\"`_deffered_hidden` as been renamed to `_deferred_hidden`.\", DeprecationWarning, stacklevel=2)\n return self._deferred_hidden\n\n @_deffered_hidden.setter\n def _deffered_hidden(self, value):\n warn(\"`_deffered_hidden` as been renamed to `_deferred_hidden`.\", DeprecationWarning, stacklevel=2)\n self._deferred_hidden = value\n\n @property\n def deffered(self):\n warn(\"`deffered` as been renamed to `deferred`.\", DeprecationWarning, stacklevel=2)\n return self.deferred\n\n @deffered.setter\n def deffered(self, value):\n warn(\"`deffered` as been renamed to `deferred`.\", DeprecationWarning, stacklevel=2)\n self.deferred = value\n\n @property\n def guild(self) -> typing.Optional[discord.Guild]:\n \"\"\"\n Guild instance of the command invoke. If the command was invoked in DM, then it is ``None``\n\n :return: Optional[discord.Guild]\n \"\"\"\n return self.bot.get_guild(self.guild_id) if self.guild_id else None\n\n @property\n def channel(self) -> typing.Optional[typing.Union[discord.TextChannel, discord.DMChannel]]:\n \"\"\"\n Channel instance of the command invoke.\n\n :return: Optional[Union[discord.abc.GuildChannel, discord.abc.PrivateChannel]]\n \"\"\"\n return self.bot.get_channel(self.channel_id)\n\n async def defer(self, hidden: bool = False):\n \"\"\"\n 'Defers' the response, showing a loading state to the user\n\n :param hidden: Whether the deferred response should be ephemeral . 
Default ``False``.\n \"\"\"\n if self.deferred or self.responded:\n raise error.AlreadyResponded(\"You have already responded to this command!\")\n base = {\"type\": 5}\n if hidden:\n base[\"data\"] = {\"flags\": 64}\n self._deferred_hidden = True\n await self._http.post_initial_response(base, self.interaction_id, self.__token)\n self.deferred = True\n\n async def send(self,\n content: str = \"\", *,\n embed: discord.Embed = None,\n embeds: typing.List[discord.Embed] = None,\n tts: bool = False,\n file: discord.File = None,\n files: typing.List[discord.File] = None,\n allowed_mentions: discord.AllowedMentions = None,\n hidden: bool = False,\n delete_after: float = None) -> model.SlashMessage:\n \"\"\"\n Sends response of the slash command.\n\n .. note::\n - Param ``hidden`` doesn't support embed and file.\n\n .. warning::\n - Since Release 1.0.9, this is completely changed. If you are migrating from older version, please make sure to fix the usage.\n - You can't use both ``embed`` and ``embeds`` at the same time, also applies to ``file`` and ``files``.\n - If you send files in the initial response, this will defer if it's not been deferred, and then PATCH with the message\n\n :param content: Content of the response.\n :type content: str\n :param embed: Embed of the response.\n :type embed: discord.Embed\n :param embeds: Embeds of the response. Maximum 10.\n :type embeds: List[discord.Embed]\n :param tts: Whether to speak message using tts. Default ``False``.\n :type tts: bool\n :param file: File to send.\n :type file: discord.File\n :param files: Files to send.\n :type files: List[discord.File]\n :param allowed_mentions: AllowedMentions of the message.\n :type allowed_mentions: discord.AllowedMentions\n :param hidden: Whether the message is hidden, which means message content will only be seen to the author.\n :type hidden: bool\n :param delete_after: If provided, the number of seconds to wait in the background before deleting the message we just sent. If the deletion fails, then it is silently ignored.\n :type delete_after: float\n :return: Union[discord.Message, dict]\n \"\"\"\n if embed and embeds:\n raise error.IncorrectFormat(\"You can't use both `embed` and `embeds`!\")\n if embed:\n embeds = [embed]\n if embeds:\n if not isinstance(embeds, list):\n raise error.IncorrectFormat(\"Provide a list of embeds.\")\n elif len(embeds) > 10:\n raise error.IncorrectFormat(\"Do not provide more than 10 embeds.\")\n if file and files:\n raise error.IncorrectFormat(\"You can't use both `file` and `files`!\")\n if file:\n files = [file]\n if delete_after and hidden:\n raise error.IncorrectFormat(\"You can't delete a hidden message!\")\n\n base = {\n \"content\": content,\n \"tts\": tts,\n \"embeds\": [x.to_dict() for x in embeds] if embeds else [],\n \"allowed_mentions\": allowed_mentions.to_dict() if allowed_mentions\n else self.bot.allowed_mentions.to_dict() if self.bot.allowed_mentions else {}\n }\n if hidden:\n if embeds or files:\n self._logger.warning(\"Embed/File is not supported for `hidden`!\")\n base[\"flags\"] = 64\n\n initial_message = False\n if not self.responded:\n initial_message = True\n if files and not self.deferred:\n await self.defer(hidden=hidden)\n if self.deferred:\n if self._deferred_hidden != hidden:\n self._logger.warning(\n \"Deferred response might not be what you set it to! 
(hidden / visible) \"\n \"This is because it was deferred in a different state.\"\n )\n resp = await self._http.edit(base, self.__token, files=files)\n self.deferred = False\n else:\n json_data = {\n \"type\": 4,\n \"data\": base\n }\n await self._http.post_initial_response(json_data, self.interaction_id, self.__token)\n if not hidden:\n resp = await self._http.edit({}, self.__token)\n else:\n resp = {}\n self.responded = True\n else:\n resp = await self._http.post_followup(base, self.__token, files=files)\n if files:\n for file in files:\n file.close()\n if not hidden:\n smsg = model.SlashMessage(state=self.bot._connection,\n data=resp,\n channel=self.channel or discord.Object(id=self.channel_id),\n _http=self._http,\n interaction_token=self.__token)\n if delete_after:\n self.bot.loop.create_task(smsg.delete(delay=delete_after))\n if initial_message:\n self.message = smsg\n return smsg\n else:\n return resp\n", "path": "discord_slash/context.py"}], "after_files": [{"content": "import typing\nimport asyncio\nfrom warnings import warn\n\nimport discord\nfrom contextlib import suppress\nfrom discord.ext import commands\nfrom . import http\nfrom . import error\nfrom . import model\n\n\nclass SlashContext:\n \"\"\"\n Context of the slash command.\\n\n Kinda similar with discord.ext.commands.Context.\n\n .. warning::\n Do not manually init this model.\n\n :ivar message: Message that invoked the slash command.\n :ivar name: Name of the command.\n :ivar args: List of processed arguments invoked with the command.\n :ivar kwargs: Dictionary of processed arguments invoked with the command.\n :ivar subcommand_name: Subcommand of the command.\n :ivar subcommand_group: Subcommand group of the command.\n :ivar interaction_id: Interaction ID of the command message.\n :ivar command_id: ID of the command.\n :ivar bot: discord.py client.\n :ivar _http: :class:`.http.SlashCommandRequest` of the client.\n :ivar _logger: Logger instance.\n :ivar deferred: Whether the command is current deferred (loading state)\n :ivar _deferred_hidden: Internal var to check that state stays the same\n :ivar responded: Whether you have responded with a message to the interaction.\n :ivar guild_id: Guild ID of the command message. 
If the command was invoked in DM, then it is ``None``\n :ivar author_id: User ID representing author of the command message.\n :ivar channel_id: Channel ID representing channel of the command message.\n :ivar author: User or Member instance of the command invoke.\n \"\"\"\n\n def __init__(self,\n _http: http.SlashCommandRequest,\n _json: dict,\n _discord: typing.Union[discord.Client, commands.Bot],\n logger):\n self.__token = _json[\"token\"]\n self.message = None # Should be set later.\n self.name = self.command = self.invoked_with = _json[\"data\"][\"name\"]\n self.args = []\n self.kwargs = {}\n self.subcommand_name = self.invoked_subcommand = self.subcommand_passed = None\n self.subcommand_group = self.invoked_subcommand_group = self.subcommand_group_passed = None\n self.interaction_id = _json[\"id\"]\n self.command_id = _json[\"data\"][\"id\"]\n self._http = _http\n self.bot = _discord\n self._logger = logger\n self.deferred = False\n self.responded = False\n self._deferred_hidden = False # To check if the patch to the deferred response matches\n self.guild_id = int(_json[\"guild_id\"]) if \"guild_id\" in _json.keys() else None\n self.author_id = int(_json[\"member\"][\"user\"][\"id\"] if \"member\" in _json.keys() else _json[\"user\"][\"id\"])\n self.channel_id = int(_json[\"channel_id\"])\n if self.guild:\n self.author = discord.Member(data=_json[\"member\"], state=self.bot._connection, guild=self.guild)\n elif self.guild_id:\n self.author = discord.User(data=_json[\"member\"][\"user\"], state=self.bot._connection)\n else:\n self.author = discord.User(data=_json[\"user\"], state=self.bot._connection)\n\n @property\n def _deffered_hidden(self):\n warn(\"`_deffered_hidden` as been renamed to `_deferred_hidden`.\", DeprecationWarning, stacklevel=2)\n return self._deferred_hidden\n\n @_deffered_hidden.setter\n def _deffered_hidden(self, value):\n warn(\"`_deffered_hidden` as been renamed to `_deferred_hidden`.\", DeprecationWarning, stacklevel=2)\n self._deferred_hidden = value\n\n @property\n def deffered(self):\n warn(\"`deffered` as been renamed to `deferred`.\", DeprecationWarning, stacklevel=2)\n return self.deferred\n\n @deffered.setter\n def deffered(self, value):\n warn(\"`deffered` as been renamed to `deferred`.\", DeprecationWarning, stacklevel=2)\n self.deferred = value\n\n @property\n def guild(self) -> typing.Optional[discord.Guild]:\n \"\"\"\n Guild instance of the command invoke. If the command was invoked in DM, then it is ``None``\n\n :return: Optional[discord.Guild]\n \"\"\"\n return self.bot.get_guild(self.guild_id) if self.guild_id else None\n\n @property\n def channel(self) -> typing.Optional[typing.Union[discord.TextChannel, discord.DMChannel]]:\n \"\"\"\n Channel instance of the command invoke.\n\n :return: Optional[Union[discord.abc.GuildChannel, discord.abc.PrivateChannel]]\n \"\"\"\n return self.bot.get_channel(self.channel_id)\n\n async def defer(self, hidden: bool = False):\n \"\"\"\n 'Defers' the response, showing a loading state to the user\n\n :param hidden: Whether the deferred response should be ephemeral . 
Default ``False``.\n \"\"\"\n if self.deferred or self.responded:\n raise error.AlreadyResponded(\"You have already responded to this command!\")\n base = {\"type\": 5}\n if hidden:\n base[\"data\"] = {\"flags\": 64}\n self._deferred_hidden = True\n await self._http.post_initial_response(base, self.interaction_id, self.__token)\n self.deferred = True\n\n async def send(self,\n content: str = \"\", *,\n embed: discord.Embed = None,\n embeds: typing.List[discord.Embed] = None,\n tts: bool = False,\n file: discord.File = None,\n files: typing.List[discord.File] = None,\n allowed_mentions: discord.AllowedMentions = None,\n hidden: bool = False,\n delete_after: float = None) -> model.SlashMessage:\n \"\"\"\n Sends response of the slash command.\n\n .. warning::\n - Since Release 1.0.9, this is completely changed. If you are migrating from older version, please make sure to fix the usage.\n - You can't use both ``embed`` and ``embeds`` at the same time, also applies to ``file`` and ``files``.\n - If you send files in the initial response, this will defer if it's not been deferred, and then PATCH with the message\n\n :param content: Content of the response.\n :type content: str\n :param embed: Embed of the response.\n :type embed: discord.Embed\n :param embeds: Embeds of the response. Maximum 10.\n :type embeds: List[discord.Embed]\n :param tts: Whether to speak message using tts. Default ``False``.\n :type tts: bool\n :param file: File to send.\n :type file: discord.File\n :param files: Files to send.\n :type files: List[discord.File]\n :param allowed_mentions: AllowedMentions of the message.\n :type allowed_mentions: discord.AllowedMentions\n :param hidden: Whether the message is hidden, which means message content will only be seen to the author.\n :type hidden: bool\n :param delete_after: If provided, the number of seconds to wait in the background before deleting the message we just sent. If the deletion fails, then it is silently ignored.\n :type delete_after: float\n :return: Union[discord.Message, dict]\n \"\"\"\n if embed and embeds:\n raise error.IncorrectFormat(\"You can't use both `embed` and `embeds`!\")\n if embed:\n embeds = [embed]\n if embeds:\n if not isinstance(embeds, list):\n raise error.IncorrectFormat(\"Provide a list of embeds.\")\n elif len(embeds) > 10:\n raise error.IncorrectFormat(\"Do not provide more than 10 embeds.\")\n if file and files:\n raise error.IncorrectFormat(\"You can't use both `file` and `files`!\")\n if file:\n files = [file]\n if delete_after and hidden:\n raise error.IncorrectFormat(\"You can't delete a hidden message!\")\n\n base = {\n \"content\": content,\n \"tts\": tts,\n \"embeds\": [x.to_dict() for x in embeds] if embeds else [],\n \"allowed_mentions\": allowed_mentions.to_dict() if allowed_mentions\n else self.bot.allowed_mentions.to_dict() if self.bot.allowed_mentions else {}\n }\n if hidden:\n base[\"flags\"] = 64\n\n initial_message = False\n if not self.responded:\n initial_message = True\n if files and not self.deferred:\n await self.defer(hidden=hidden)\n if self.deferred:\n if self._deferred_hidden != hidden:\n self._logger.warning(\n \"Deferred response might not be what you set it to! 
(hidden / visible) \"\n \"This is because it was deferred in a different state.\"\n )\n resp = await self._http.edit(base, self.__token, files=files)\n self.deferred = False\n else:\n json_data = {\n \"type\": 4,\n \"data\": base\n }\n await self._http.post_initial_response(json_data, self.interaction_id, self.__token)\n if not hidden:\n resp = await self._http.edit({}, self.__token)\n else:\n resp = {}\n self.responded = True\n else:\n resp = await self._http.post_followup(base, self.__token, files=files)\n if files:\n for file in files:\n file.close()\n if not hidden:\n smsg = model.SlashMessage(state=self.bot._connection,\n data=resp,\n channel=self.channel or discord.Object(id=self.channel_id),\n _http=self._http,\n interaction_token=self.__token)\n if delete_after:\n self.bot.loop.create_task(smsg.delete(delay=delete_after))\n if initial_message:\n self.message = smsg\n return smsg\n else:\n return resp\n", "path": "discord_slash/context.py"}]}
| 3,131 | 225 |
gh_patches_debug_4186 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-5182 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Not able to create a Mac executable if we have used PyAutoGUI in a Python script
I am facing an issue with PyInstaller when I try to build an executable on macOS from a Python script that uses the PyAutoGUI library (it works fine on Windows).
After debugging, I found that the issue only occurs when I use PyAutoGUI and PyInstaller together.
So I've tried this simple program, which does nothing but import PyAutoGUI:
demo.py
```
import pyautogui as pg
if __name__ == "__main__":
    print("Hello, World!")
```
It was giving me the below error if I just typed `pyinstaller demo.py --onefile`:
exit;
Traceback (most recent call last):
File "PyInstaller/hooks/rthooks/pyi_rth__tkinter.py", line 20, in <module>
FileNotFoundError: Tcl data directory "/var/folders/83/vlr93lcs4bg1r6ytzy1vvjy40000gn/T/_MEIyqvIJz/tcl" not found.
[57142] Failed to execute script pyi_rth__tkinter
So, I've tried this
pyinstaller --onefile --add-binary='/System/Library/Frameworks/Tk.framework/Tk':'tk' --add-binary='/System/Library/Frameworks/Tcl.framework/Tcl':'tcl' demo.py
Below is the stack trace of the error:
exit;
Traceback (most recent call last):
File "PyInstaller/loader/pyiboot01_bootstrap.py", line 144, in __init__
File "ctypes/__init__.py", line 348, in __init__
OSError: dlopen(/var/folders/83/vlr93lcs4bg1r6ytzy1vvjy40000gn/T/_MEIjmFz1e/Foundation, 6): no suitable image found. Did find:
/var/folders/83/vlr93lcs4bg1r6ytzy1vvjy40000gn/T/_MEIjmFz1e/Foundation: not a file
/private/var/folders/83/vlr93lcs4bg1r6ytzy1vvjy40000gn/T/_MEIjmFz1e/Foundation: not a file
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "demo.py", line 1, in <module>
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "/Users/admin/Documents/django_infoware/venv_demo/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 493, in exec_module
exec(bytecode, module.__dict__)
File "pyautogui/__init__.py", line 241, in <module>
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "/Users/admin/Documents/django_infoware/venv_demo/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 493, in exec_module
exec(bytecode, module.__dict__)
File "mouseinfo/__init__.py", line 100, in <module>
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "/Users/admin/Documents/django_infoware/venv_demo/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 493, in exec_module
exec(bytecode, module.__dict__)
File "rubicon/objc/__init__.py", line 15, in <module>
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "/Users/admin/Documents/django_infoware/venv_demo/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 493, in exec_module
exec(bytecode, module.__dict__)
File "rubicon/objc/runtime.py", line 81, in <module>
File "rubicon/objc/runtime.py", line 58, in load_library
File "PyInstaller/loader/pyiboot01_bootstrap.py", line 146, in __init__
__main__.PyInstallerImportError: Failed to load dynlib/dll '/var/folders/83/vlr93lcs4bg1r6ytzy1vvjy40000gn/T/_MEIjmFz1e/Foundation'. Most probably this dynlib/dll was not found when the application was frozen.
[57474] Failed to execute script demo
[Process completed]
It seems like there is some Windows dependency when I use PyAutoGUI, even though I am on a Mac.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/loader/pyiboot01_bootstrap.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2020, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12
13 ### Start bootstrap process
14 # Only python built-in modules can be used.
15
16 import sys
17 import pyimod03_importers
18
19 # Extend Python import machinery by adding PEP302 importers to sys.meta_path.
20 pyimod03_importers.install()
21
22
23 ### Bootstrap process is complete.
24 # We can use other python modules (e.g. os)
25
26
27 import os
28
29
30 # Let other python modules know that the code is running in frozen mode.
31 if not hasattr(sys, 'frozen'):
32 sys.frozen = True
33
34 # sys._MEIPASS is now set in the bootloader. Hooray.
35
36
37 # Python 3 C-API function Py_SetPath() resets sys.prefix to empty string.
38 # Python 2 was using PYTHONHOME for sys.prefix. Let's do the same for Python 3.
39 sys.prefix = sys._MEIPASS
40 sys.exec_prefix = sys.prefix
41
42
43 # Python 3.3+ defines also sys.base_prefix. Let's set them too.
44 sys.base_prefix = sys.prefix
45 sys.base_exec_prefix = sys.exec_prefix
46
47
48 # Some packages behaves differently when running inside virtual environment.
49 # E.g. IPython tries to append path VIRTUAL_ENV to sys.path.
50 # For the frozen app we want to prevent this behavior.
51 VIRTENV = 'VIRTUAL_ENV'
52 if VIRTENV in os.environ:
53 # On some platforms (e.g. AIX) 'os.unsetenv()' is not available and then
54 # deleting the var from os.environ does not delete it from the environment.
55 os.environ[VIRTENV] = ''
56 del os.environ[VIRTENV]
57
58
59 # Ensure sys.path contains absolute paths. Otherwise import of other python
60 # modules will fail when current working directory is changed by frozen
61 # application.
62 python_path = []
63 for pth in sys.path:
64 python_path.append(os.path.abspath(pth))
65 sys.path = python_path
66
67
68 # Implement workaround for prints in non-console mode. In non-console mode
69 # (with "pythonw"), print randomly fails with "[errno 9] Bad file descriptor"
70 # when the printed text is flushed (eg: buffer full); this is because the
71 # sys.stdout object is bound to an invalid file descriptor.
72 # Python 3000 has a fix for it (http://bugs.python.org/issue1415), but we
73 # feel that a workaround in PyInstaller is a good thing since most people
74 # found this problem for the first time with PyInstaller as they don't
75 # usually run their code with "pythonw" (and it's hard to debug anyway).
76 class NullWriter:
77 softspace = 0
78 encoding = 'UTF-8'
79
80 def write(*args):
81 pass
82
83 def flush(*args):
84 pass
85
86 # Some packages are checking if stdout/stderr is available.
87 # e.g. youtube-dl for details see #1883
88 def isatty(self):
89 return False
90
91
92 # sys.stdout/err is None in GUI mode on Windows.
93 if sys.stdout is None:
94 sys.stdout = NullWriter()
95 if sys.stderr is None:
96 sys.stderr = NullWriter()
97
98
99 # At least on Windows, Python seems to hook up the codecs on this
100 # import, so it's not enough to just package up all the encodings.
101 #
102 # It was also reported that without 'encodings' module the frozen executable
103 # will fail to load in some configurations:
104 #
105 # http://www.pyinstaller.org/ticket/651
106 #
107 # Import 'encodings' module in a run-time hook is not enough since some
108 # run-time hooks require this module and the order of running code from
109 # from run-time hooks is not defined.
110 try:
111 import encodings
112 except ImportError:
113 pass
114
115
116 # In the Python interpreter 'warnings' module is imported when 'sys.warnoptions'
117 # is not empty. Mimic this behavior in PyInstaller.
118 if sys.warnoptions:
119 import warnings
120
121 try:
122 import ctypes
123 import os
124 from ctypes import LibraryLoader, DEFAULT_MODE
125
126 def _frozen_name(name):
127 if name:
128 frozen_name = os.path.join(sys._MEIPASS, os.path.basename(name))
129 if os.path.exists(frozen_name):
130 name = frozen_name
131 return name
132
133 class PyInstallerImportError(OSError):
134 def __init__(self, name):
135 self.msg = ("Failed to load dynlib/dll %r. "
136 "Most probably this dynlib/dll was not found "
137 "when the application was frozen.") % name
138 self.args = (self.msg,)
139
140 class PyInstallerCDLL(ctypes.CDLL):
141 def __init__(self, name, *args, **kwargs):
142 name = _frozen_name(name)
143 try:
144 super(PyInstallerCDLL, self).__init__(name, *args, **kwargs)
145 except Exception as base_error:
146 raise PyInstallerImportError(name) from base_error
147
148 ctypes.CDLL = PyInstallerCDLL
149 ctypes.cdll = LibraryLoader(PyInstallerCDLL)
150
151 class PyInstallerPyDLL(ctypes.PyDLL):
152 def __init__(self, name, *args, **kwargs):
153 name = _frozen_name(name)
154 try:
155 super(PyInstallerPyDLL, self).__init__(name, *args, **kwargs)
156 except Exception as base_error:
157 raise PyInstallerImportError(name) from base_error
158
159 ctypes.PyDLL = PyInstallerPyDLL
160 ctypes.pydll = LibraryLoader(PyInstallerPyDLL)
161
162 if sys.platform.startswith('win'):
163 class PyInstallerWinDLL(ctypes.WinDLL):
164 def __init__(self, name,*args, **kwargs):
165 name = _frozen_name(name)
166 try:
167 super(PyInstallerWinDLL, self).__init__(name, *args, **kwargs)
168 except Exception as base_error:
169 raise PyInstallerImportError(name) from base_error
170
171 ctypes.WinDLL = PyInstallerWinDLL
172 ctypes.windll = LibraryLoader(PyInstallerWinDLL)
173
174 class PyInstallerOleDLL(ctypes.OleDLL):
175 def __init__(self, name,*args, **kwargs):
176 name = _frozen_name(name)
177 try:
178 super(PyInstallerOleDLL, self).__init__(name, *args, **kwargs)
179 except Exception as base_error:
180 raise PyInstallerImportError(name) from base_error
181
182 ctypes.OleDLL = PyInstallerOleDLL
183 ctypes.oledll = LibraryLoader(PyInstallerOleDLL)
184 except ImportError:
185 pass
186
187 # On Mac OS X insert sys._MEIPASS in the first position of the list of paths
188 # that ctypes uses to search for libraries.
189 #
190 # Note: 'ctypes' module will NOT be bundled with every app because code in this
191 # module is not scanned for module dependencies. It is safe to wrap
192 # 'ctypes' module into 'try/except ImportError' block.
193 if sys.platform.startswith('darwin'):
194 try:
195 from ctypes.macholib import dyld
196 dyld.DEFAULT_LIBRARY_FALLBACK.insert(0, sys._MEIPASS)
197 except ImportError:
198 # Do nothing when module 'ctypes' is not available.
199 pass
200
201
202 # Make .eggs and zipfiles available at runtime
203 d = "eggs"
204 d = os.path.join(sys._MEIPASS, d)
205 # Test if the 'eggs' directory exists. This allows to
206 # opportunistically including this script into the packaged exe, even
207 # if no eggs as found when packaging the program. (Which may be a
208 # use-case, see issue #653.
209 if os.path.isdir(d):
210 for fn in os.listdir(d):
211 sys.path.append(os.path.join(d, fn))
212
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/PyInstaller/loader/pyiboot01_bootstrap.py b/PyInstaller/loader/pyiboot01_bootstrap.py
--- a/PyInstaller/loader/pyiboot01_bootstrap.py
+++ b/PyInstaller/loader/pyiboot01_bootstrap.py
@@ -126,7 +126,7 @@
def _frozen_name(name):
if name:
frozen_name = os.path.join(sys._MEIPASS, os.path.basename(name))
- if os.path.exists(frozen_name):
+ if os.path.exists(frozen_name) and not os.path.isdir(frozen_name):
name = frozen_name
return name
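
The one-line guard above matters on macOS because ctypes resolves frameworks such as Foundation to paths like `/System/Library/Frameworks/Foundation.framework/Foundation`, whose basename is just `Foundation`, a name that can also exist as a directory inside the extracted onefile bundle. That is exactly the "not a file" failure in the traceback. A minimal sketch of the old and patched lookup (the temporary directory stands in for `sys._MEIPASS`; the `Foundation` directory is an assumption taken from the traceback):

```python
# Minimal sketch, assuming the extracted bundle contains a directory named "Foundation".
import os
import tempfile

meipass = tempfile.mkdtemp()                   # stands in for sys._MEIPASS
os.mkdir(os.path.join(meipass, "Foundation"))  # extracted directory, not a dylib

requested = "/System/Library/Frameworks/Foundation.framework/Foundation"

def frozen_name_old(name):
    # old behaviour: redirect whenever anything with that basename exists in the bundle
    candidate = os.path.join(meipass, os.path.basename(name))
    return candidate if os.path.exists(candidate) else name

def frozen_name_patched(name):
    # patched behaviour: only redirect when the candidate is a real file
    candidate = os.path.join(meipass, os.path.basename(name))
    if os.path.exists(candidate) and not os.path.isdir(candidate):
        return candidate
    return name

print(frozen_name_old(requested))      # points at the directory, so dlopen fails ("not a file")
print(frozen_name_patched(requested))  # falls back to the original system path
```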
|
{"golden_diff": "diff --git a/PyInstaller/loader/pyiboot01_bootstrap.py b/PyInstaller/loader/pyiboot01_bootstrap.py\n--- a/PyInstaller/loader/pyiboot01_bootstrap.py\n+++ b/PyInstaller/loader/pyiboot01_bootstrap.py\n@@ -126,7 +126,7 @@\n def _frozen_name(name):\n if name:\n frozen_name = os.path.join(sys._MEIPASS, os.path.basename(name))\n- if os.path.exists(frozen_name):\n+ if os.path.exists(frozen_name) and not os.path.isdir(frozen_name):\n name = frozen_name\n return name\n", "issue": "Not able to create mac executable if we have used PyautoGUI in python script\nI am facing issue in pyinstaller if I try to make executable file from mac when I use pyautogui library in Python script(Working fine on windows).\r\nAfter debugging I got to know that issue is when I use pyautogui and pyinstaller together.\r\nSo, I've tried this simple program which has just import of pyautogui\r\ndemo.py\r\n```\r\nimport pyautogui as pg\r\nif __name__ == \"__main__\":\r\n `print(\"Hello,` WOrld!\")\r\n```\r\n\r\nIt was giving me below error if I just type pyinstaller demo.py --onefile\r\nexit;\r\nTraceback (most recent call last):\r\n File \"PyInstaller/hooks/rthooks/pyi_rth__tkinter.py\", line 20, in <module>\r\nFileNotFoundError: Tcl data directory \"/var/folders/83/vlr93lcs4bg1r6ytzy1vvjy40000gn/T/_MEIyqvIJz/tcl\" not found.\r\n[57142] Failed to execute script pyi_rth__tkinter\r\n\r\nSo, I've tried this\r\npyinstaller --onefile --add-binary='/System/Library/Frameworks/Tk.framework/Tk':'tk' --add-binary='/System/Library/Frameworks/Tcl.framework/Tcl':'tcl' demo.py\r\n\r\nbelow is the stacktrace of error:\r\n\r\nexit;\r\nTraceback (most recent call last):\r\n File \"PyInstaller/loader/pyiboot01_bootstrap.py\", line 144, in __init__\r\n File \"ctypes/__init__.py\", line 348, in __init__\r\nOSError: dlopen(/var/folders/83/vlr93lcs4bg1r6ytzy1vvjy40000gn/T/_MEIjmFz1e/Foundation, 6): no suitable image found. 
Did find:\r\n\t/var/folders/83/vlr93lcs4bg1r6ytzy1vvjy40000gn/T/_MEIjmFz1e/Foundation: not a file\r\n\t/private/var/folders/83/vlr93lcs4bg1r6ytzy1vvjy40000gn/T/_MEIjmFz1e/Foundation: not a file\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"demo.py\", line 1, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 955, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 665, in _load_unlocked\r\n File \"/Users/admin/Documents/django_infoware/venv_demo/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py\", line 493, in exec_module\r\n exec(bytecode, module.__dict__)\r\n File \"pyautogui/__init__.py\", line 241, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 955, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 665, in _load_unlocked\r\n File \"/Users/admin/Documents/django_infoware/venv_demo/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py\", line 493, in exec_module\r\n exec(bytecode, module.__dict__)\r\n File \"mouseinfo/__init__.py\", line 100, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 955, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 665, in _load_unlocked\r\n File \"/Users/admin/Documents/django_infoware/venv_demo/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py\", line 493, in exec_module\r\n exec(bytecode, module.__dict__)\r\n File \"rubicon/objc/__init__.py\", line 15, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 955, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 665, in _load_unlocked\r\n File \"/Users/admin/Documents/django_infoware/venv_demo/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py\", line 493, in exec_module\r\n exec(bytecode, module.__dict__)\r\n File \"rubicon/objc/runtime.py\", line 81, in <module>\r\n File \"rubicon/objc/runtime.py\", line 58, in load_library\r\n File \"PyInstaller/loader/pyiboot01_bootstrap.py\", line 146, in __init__\r\n__main__.PyInstallerImportError: Failed to load dynlib/dll '/var/folders/83/vlr93lcs4bg1r6ytzy1vvjy40000gn/T/_MEIjmFz1e/Foundation'. Most probably this dynlib/dll was not found when the application was frozen.\r\n[57474] Failed to execute script demo\r\n\r\n[Process completed]\r\n\r\nIt seems like there is some windows dependency when I use pyautoGUI. 
Even I am on mac\r\n\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2020, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\n\n### Start bootstrap process\n# Only python built-in modules can be used.\n\nimport sys\nimport pyimod03_importers\n\n# Extend Python import machinery by adding PEP302 importers to sys.meta_path.\npyimod03_importers.install()\n\n\n### Bootstrap process is complete.\n# We can use other python modules (e.g. os)\n\n\nimport os\n\n\n# Let other python modules know that the code is running in frozen mode.\nif not hasattr(sys, 'frozen'):\n sys.frozen = True\n\n# sys._MEIPASS is now set in the bootloader. Hooray.\n\n\n# Python 3 C-API function Py_SetPath() resets sys.prefix to empty string.\n# Python 2 was using PYTHONHOME for sys.prefix. Let's do the same for Python 3.\nsys.prefix = sys._MEIPASS\nsys.exec_prefix = sys.prefix\n\n\n# Python 3.3+ defines also sys.base_prefix. Let's set them too.\nsys.base_prefix = sys.prefix\nsys.base_exec_prefix = sys.exec_prefix\n\n\n# Some packages behaves differently when running inside virtual environment.\n# E.g. IPython tries to append path VIRTUAL_ENV to sys.path.\n# For the frozen app we want to prevent this behavior.\nVIRTENV = 'VIRTUAL_ENV'\nif VIRTENV in os.environ:\n # On some platforms (e.g. AIX) 'os.unsetenv()' is not available and then\n # deleting the var from os.environ does not delete it from the environment.\n os.environ[VIRTENV] = ''\n del os.environ[VIRTENV]\n\n\n# Ensure sys.path contains absolute paths. Otherwise import of other python\n# modules will fail when current working directory is changed by frozen\n# application.\npython_path = []\nfor pth in sys.path:\n python_path.append(os.path.abspath(pth))\n sys.path = python_path\n\n\n# Implement workaround for prints in non-console mode. In non-console mode\n# (with \"pythonw\"), print randomly fails with \"[errno 9] Bad file descriptor\"\n# when the printed text is flushed (eg: buffer full); this is because the\n# sys.stdout object is bound to an invalid file descriptor.\n# Python 3000 has a fix for it (http://bugs.python.org/issue1415), but we\n# feel that a workaround in PyInstaller is a good thing since most people\n# found this problem for the first time with PyInstaller as they don't\n# usually run their code with \"pythonw\" (and it's hard to debug anyway).\nclass NullWriter:\n softspace = 0\n encoding = 'UTF-8'\n\n def write(*args):\n pass\n\n def flush(*args):\n pass\n\n # Some packages are checking if stdout/stderr is available.\n # e.g. 
youtube-dl for details see #1883\n def isatty(self):\n return False\n\n\n# sys.stdout/err is None in GUI mode on Windows.\nif sys.stdout is None:\n sys.stdout = NullWriter()\nif sys.stderr is None:\n sys.stderr = NullWriter()\n\n\n# At least on Windows, Python seems to hook up the codecs on this\n# import, so it's not enough to just package up all the encodings.\n#\n# It was also reported that without 'encodings' module the frozen executable\n# will fail to load in some configurations:\n#\n# http://www.pyinstaller.org/ticket/651\n#\n# Import 'encodings' module in a run-time hook is not enough since some\n# run-time hooks require this module and the order of running code from\n# from run-time hooks is not defined.\ntry:\n import encodings\nexcept ImportError:\n pass\n\n\n# In the Python interpreter 'warnings' module is imported when 'sys.warnoptions'\n# is not empty. Mimic this behavior in PyInstaller.\nif sys.warnoptions:\n import warnings\n\ntry:\n import ctypes\n import os\n from ctypes import LibraryLoader, DEFAULT_MODE\n\n def _frozen_name(name):\n if name:\n frozen_name = os.path.join(sys._MEIPASS, os.path.basename(name))\n if os.path.exists(frozen_name):\n name = frozen_name\n return name\n\n class PyInstallerImportError(OSError):\n def __init__(self, name):\n self.msg = (\"Failed to load dynlib/dll %r. \"\n \"Most probably this dynlib/dll was not found \"\n \"when the application was frozen.\") % name\n self.args = (self.msg,)\n\n class PyInstallerCDLL(ctypes.CDLL):\n def __init__(self, name, *args, **kwargs):\n name = _frozen_name(name)\n try:\n super(PyInstallerCDLL, self).__init__(name, *args, **kwargs)\n except Exception as base_error:\n raise PyInstallerImportError(name) from base_error\n\n ctypes.CDLL = PyInstallerCDLL\n ctypes.cdll = LibraryLoader(PyInstallerCDLL)\n\n class PyInstallerPyDLL(ctypes.PyDLL):\n def __init__(self, name, *args, **kwargs):\n name = _frozen_name(name)\n try:\n super(PyInstallerPyDLL, self).__init__(name, *args, **kwargs)\n except Exception as base_error:\n raise PyInstallerImportError(name) from base_error\n\n ctypes.PyDLL = PyInstallerPyDLL\n ctypes.pydll = LibraryLoader(PyInstallerPyDLL)\n\n if sys.platform.startswith('win'):\n class PyInstallerWinDLL(ctypes.WinDLL):\n def __init__(self, name,*args, **kwargs):\n name = _frozen_name(name)\n try:\n super(PyInstallerWinDLL, self).__init__(name, *args, **kwargs)\n except Exception as base_error:\n raise PyInstallerImportError(name) from base_error\n\n ctypes.WinDLL = PyInstallerWinDLL\n ctypes.windll = LibraryLoader(PyInstallerWinDLL)\n\n class PyInstallerOleDLL(ctypes.OleDLL):\n def __init__(self, name,*args, **kwargs):\n name = _frozen_name(name)\n try:\n super(PyInstallerOleDLL, self).__init__(name, *args, **kwargs)\n except Exception as base_error:\n raise PyInstallerImportError(name) from base_error\n\n ctypes.OleDLL = PyInstallerOleDLL\n ctypes.oledll = LibraryLoader(PyInstallerOleDLL)\nexcept ImportError:\n pass\n\n# On Mac OS X insert sys._MEIPASS in the first position of the list of paths\n# that ctypes uses to search for libraries.\n#\n# Note: 'ctypes' module will NOT be bundled with every app because code in this\n# module is not scanned for module dependencies. 
It is safe to wrap\n# 'ctypes' module into 'try/except ImportError' block.\nif sys.platform.startswith('darwin'):\n try:\n from ctypes.macholib import dyld\n dyld.DEFAULT_LIBRARY_FALLBACK.insert(0, sys._MEIPASS)\n except ImportError:\n # Do nothing when module 'ctypes' is not available.\n pass\n\n\n# Make .eggs and zipfiles available at runtime\nd = \"eggs\"\nd = os.path.join(sys._MEIPASS, d)\n# Test if the 'eggs' directory exists. This allows to\n# opportunistically including this script into the packaged exe, even\n# if no eggs as found when packaging the program. (Which may be a\n# use-case, see issue #653.\nif os.path.isdir(d):\n for fn in os.listdir(d):\n sys.path.append(os.path.join(d, fn))\n", "path": "PyInstaller/loader/pyiboot01_bootstrap.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2020, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\n\n### Start bootstrap process\n# Only python built-in modules can be used.\n\nimport sys\nimport pyimod03_importers\n\n# Extend Python import machinery by adding PEP302 importers to sys.meta_path.\npyimod03_importers.install()\n\n\n### Bootstrap process is complete.\n# We can use other python modules (e.g. os)\n\n\nimport os\n\n\n# Let other python modules know that the code is running in frozen mode.\nif not hasattr(sys, 'frozen'):\n sys.frozen = True\n\n# sys._MEIPASS is now set in the bootloader. Hooray.\n\n\n# Python 3 C-API function Py_SetPath() resets sys.prefix to empty string.\n# Python 2 was using PYTHONHOME for sys.prefix. Let's do the same for Python 3.\nsys.prefix = sys._MEIPASS\nsys.exec_prefix = sys.prefix\n\n\n# Python 3.3+ defines also sys.base_prefix. Let's set them too.\nsys.base_prefix = sys.prefix\nsys.base_exec_prefix = sys.exec_prefix\n\n\n# Some packages behaves differently when running inside virtual environment.\n# E.g. IPython tries to append path VIRTUAL_ENV to sys.path.\n# For the frozen app we want to prevent this behavior.\nVIRTENV = 'VIRTUAL_ENV'\nif VIRTENV in os.environ:\n # On some platforms (e.g. AIX) 'os.unsetenv()' is not available and then\n # deleting the var from os.environ does not delete it from the environment.\n os.environ[VIRTENV] = ''\n del os.environ[VIRTENV]\n\n\n# Ensure sys.path contains absolute paths. Otherwise import of other python\n# modules will fail when current working directory is changed by frozen\n# application.\npython_path = []\nfor pth in sys.path:\n python_path.append(os.path.abspath(pth))\n sys.path = python_path\n\n\n# Implement workaround for prints in non-console mode. 
In non-console mode\n# (with \"pythonw\"), print randomly fails with \"[errno 9] Bad file descriptor\"\n# when the printed text is flushed (eg: buffer full); this is because the\n# sys.stdout object is bound to an invalid file descriptor.\n# Python 3000 has a fix for it (http://bugs.python.org/issue1415), but we\n# feel that a workaround in PyInstaller is a good thing since most people\n# found this problem for the first time with PyInstaller as they don't\n# usually run their code with \"pythonw\" (and it's hard to debug anyway).\nclass NullWriter:\n softspace = 0\n encoding = 'UTF-8'\n\n def write(*args):\n pass\n\n def flush(*args):\n pass\n\n # Some packages are checking if stdout/stderr is available.\n # e.g. youtube-dl for details see #1883\n def isatty(self):\n return False\n\n\n# sys.stdout/err is None in GUI mode on Windows.\nif sys.stdout is None:\n sys.stdout = NullWriter()\nif sys.stderr is None:\n sys.stderr = NullWriter()\n\n\n# At least on Windows, Python seems to hook up the codecs on this\n# import, so it's not enough to just package up all the encodings.\n#\n# It was also reported that without 'encodings' module the frozen executable\n# will fail to load in some configurations:\n#\n# http://www.pyinstaller.org/ticket/651\n#\n# Import 'encodings' module in a run-time hook is not enough since some\n# run-time hooks require this module and the order of running code from\n# from run-time hooks is not defined.\ntry:\n import encodings\nexcept ImportError:\n pass\n\n\n# In the Python interpreter 'warnings' module is imported when 'sys.warnoptions'\n# is not empty. Mimic this behavior in PyInstaller.\nif sys.warnoptions:\n import warnings\n\ntry:\n import ctypes\n import os\n from ctypes import LibraryLoader, DEFAULT_MODE\n\n def _frozen_name(name):\n if name:\n frozen_name = os.path.join(sys._MEIPASS, os.path.basename(name))\n if os.path.exists(frozen_name) and not os.path.isdir(frozen_name):\n name = frozen_name\n return name\n\n class PyInstallerImportError(OSError):\n def __init__(self, name):\n self.msg = (\"Failed to load dynlib/dll %r. 
\"\n \"Most probably this dynlib/dll was not found \"\n \"when the application was frozen.\") % name\n self.args = (self.msg,)\n\n class PyInstallerCDLL(ctypes.CDLL):\n def __init__(self, name, *args, **kwargs):\n name = _frozen_name(name)\n try:\n super(PyInstallerCDLL, self).__init__(name, *args, **kwargs)\n except Exception as base_error:\n raise PyInstallerImportError(name) from base_error\n\n ctypes.CDLL = PyInstallerCDLL\n ctypes.cdll = LibraryLoader(PyInstallerCDLL)\n\n class PyInstallerPyDLL(ctypes.PyDLL):\n def __init__(self, name, *args, **kwargs):\n name = _frozen_name(name)\n try:\n super(PyInstallerPyDLL, self).__init__(name, *args, **kwargs)\n except Exception as base_error:\n raise PyInstallerImportError(name) from base_error\n\n ctypes.PyDLL = PyInstallerPyDLL\n ctypes.pydll = LibraryLoader(PyInstallerPyDLL)\n\n if sys.platform.startswith('win'):\n class PyInstallerWinDLL(ctypes.WinDLL):\n def __init__(self, name,*args, **kwargs):\n name = _frozen_name(name)\n try:\n super(PyInstallerWinDLL, self).__init__(name, *args, **kwargs)\n except Exception as base_error:\n raise PyInstallerImportError(name) from base_error\n\n ctypes.WinDLL = PyInstallerWinDLL\n ctypes.windll = LibraryLoader(PyInstallerWinDLL)\n\n class PyInstallerOleDLL(ctypes.OleDLL):\n def __init__(self, name,*args, **kwargs):\n name = _frozen_name(name)\n try:\n super(PyInstallerOleDLL, self).__init__(name, *args, **kwargs)\n except Exception as base_error:\n raise PyInstallerImportError(name) from base_error\n\n ctypes.OleDLL = PyInstallerOleDLL\n ctypes.oledll = LibraryLoader(PyInstallerOleDLL)\nexcept ImportError:\n pass\n\n# On Mac OS X insert sys._MEIPASS in the first position of the list of paths\n# that ctypes uses to search for libraries.\n#\n# Note: 'ctypes' module will NOT be bundled with every app because code in this\n# module is not scanned for module dependencies. It is safe to wrap\n# 'ctypes' module into 'try/except ImportError' block.\nif sys.platform.startswith('darwin'):\n try:\n from ctypes.macholib import dyld\n dyld.DEFAULT_LIBRARY_FALLBACK.insert(0, sys._MEIPASS)\n except ImportError:\n # Do nothing when module 'ctypes' is not available.\n pass\n\n\n# Make .eggs and zipfiles available at runtime\nd = \"eggs\"\nd = os.path.join(sys._MEIPASS, d)\n# Test if the 'eggs' directory exists. This allows to\n# opportunistically including this script into the packaged exe, even\n# if no eggs as found when packaging the program. (Which may be a\n# use-case, see issue #653.\nif os.path.isdir(d):\n for fn in os.listdir(d):\n sys.path.append(os.path.join(d, fn))\n", "path": "PyInstaller/loader/pyiboot01_bootstrap.py"}]}
| 3,813 | 139 |
gh_patches_debug_19492 | rasdani/github-patches | git_diff | zigpy__zha-device-handlers-664 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Device Support Request] Philips Button (ROM001) missing support for manufacturer-specific button events
**Is your feature request related to a problem? Please describe.**
I have just started migrating my devices from deconz to ZHA and had success with the hue dimmer remote event codes.
However, the Hue button, which also supports the 64512 cluster, is not configured for it.
**Describe the solution you'd like**
Add the 64512 (0xfc00) cluster to the hue button.
**Device signature - this can be acquired by removing the device from ZHA and pairing it again from the add devices screen. Be sure to add the entire content of the log panel after pairing the device to a code block below this line.**
Button:
```
{
"node_descriptor": "NodeDescriptor(byte1=2, byte2=64, mac_capability_flags=128, manufacturer_code=4107, maximum_buffer_size=82, maximum_incoming_transfer_size=128, server_mask=11264, maximum_outgoing_transfer_size=128, descriptor_capability_field=0)",
"endpoints": {
"1": {
"profile_id": 260,
"device_type": "0x0830",
"in_clusters": [
"0x0000",
"0x0001",
"0x0003",
"0x1000",
"0xfc00"
],
"out_clusters": [
"0x0000",
"0x0003",
"0x0004",
"0x0005",
"0x0006",
"0x0008",
"0x0019",
"0x1000"
]
}
},
"manufacturer": "Philips",
"model": "ROM001",
"class": "zhaquirks.philips.rom001.PhilipsROM001"
}
```
Dimmer:
```
{
"node_descriptor": "NodeDescriptor(byte1=2, byte2=64, mac_capability_flags=128, manufacturer_code=4107, maximum_buffer_size=89, maximum_incoming_transfer_size=63, server_mask=0, maximum_outgoing_transfer_size=63, descriptor_capability_field=0)",
"endpoints": {
"1": {
"profile_id": 49246,
"device_type": "0x0830",
"in_clusters": [
"0x0000"
],
"out_clusters": [
"0x0000",
"0x0003",
"0x0004",
"0x0005",
"0x0006",
"0x0008"
]
},
"2": {
"profile_id": 260,
"device_type": "0x000c",
"in_clusters": [
"0x0000",
"0x0001",
"0x0003",
"0x000f",
"0xfc00"
],
"out_clusters": [
"0x0019"
]
}
},
"manufacturer": "Philips",
"model": "RWL021",
"class": "zhaquirks.philips.rwl021.PhilipsRWL021"
}
```
**Additional context**
Add any other context or screenshots about the feature request here.
This should be a simple case of importing PhilipsRemoteCluster and applying it
https://github.com/zigpy/zha-device-handlers/blob/71d4dcb9c8f502dee7f73ac4bbf1593b916e794e/zhaquirks/philips/rwl020.py#L80
https://github.com/zigpy/zha-device-handlers/blob/71d4dcb9c8f502dee7f73ac4bbf1593b916e794e/zhaquirks/philips/rom001.py#L75
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zhaquirks/philips/rom001.py`
Content:
```
1 """Philips ROM001 device."""
2 from zigpy.profiles import zha
3 from zigpy.quirks import CustomDevice
4 from zigpy.zcl.clusters.general import (
5 Basic,
6 Groups,
7 Identify,
8 LevelControl,
9 OnOff,
10 Ota,
11 PowerConfiguration,
12 Scenes,
13 )
14 from zigpy.zcl.clusters.lightlink import LightLink
15
16 from ..const import (
17 COMMAND,
18 COMMAND_OFF_WITH_EFFECT,
19 COMMAND_ON,
20 DEVICE_TYPE,
21 ENDPOINTS,
22 INPUT_CLUSTERS,
23 OUTPUT_CLUSTERS,
24 PROFILE_ID,
25 SHORT_PRESS,
26 TURN_OFF,
27 TURN_ON,
28 )
29
30 DEVICE_SPECIFIC_UNKNOWN = 64512
31
32
33 class PhilipsROM001(CustomDevice):
34 """Philips ROM001 device."""
35
36 signature = {
37 # <SimpleDescriptor endpoint=1 profile=260 device_type=2096
38 # device_version=1
39 # input_clusters=[0, 1, 3, 64512, 4096]
40 # output_clusters=[25, 0, 3, 4, 6, 8, 5, 4096]>
41 ENDPOINTS: {
42 1: {
43 PROFILE_ID: zha.PROFILE_ID,
44 DEVICE_TYPE: zha.DeviceType.NON_COLOR_SCENE_CONTROLLER,
45 INPUT_CLUSTERS: [
46 Basic.cluster_id,
47 PowerConfiguration.cluster_id,
48 Identify.cluster_id,
49 DEVICE_SPECIFIC_UNKNOWN,
50 LightLink.cluster_id,
51 ],
52 OUTPUT_CLUSTERS: [
53 Ota.cluster_id,
54 Basic.cluster_id,
55 Identify.cluster_id,
56 Groups.cluster_id,
57 OnOff.cluster_id,
58 LevelControl.cluster_id,
59 Scenes.cluster_id,
60 LightLink.cluster_id,
61 ],
62 }
63 }
64 }
65
66 replacement = {
67 ENDPOINTS: {
68 1: {
69 PROFILE_ID: zha.PROFILE_ID,
70 DEVICE_TYPE: zha.DeviceType.NON_COLOR_SCENE_CONTROLLER,
71 INPUT_CLUSTERS: [
72 Basic.cluster_id,
73 PowerConfiguration.cluster_id,
74 Identify.cluster_id,
75 DEVICE_SPECIFIC_UNKNOWN,
76 LightLink.cluster_id,
77 ],
78 OUTPUT_CLUSTERS: [
79 Ota.cluster_id,
80 Basic.cluster_id,
81 Identify.cluster_id,
82 Groups.cluster_id,
83 OnOff.cluster_id,
84 LevelControl.cluster_id,
85 Scenes.cluster_id,
86 LightLink.cluster_id,
87 ],
88 }
89 }
90 }
91
92 device_automation_triggers = {
93 (SHORT_PRESS, TURN_ON): {COMMAND: COMMAND_ON},
94 (SHORT_PRESS, TURN_OFF): {COMMAND: COMMAND_OFF_WITH_EFFECT},
95 }
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zhaquirks/philips/rom001.py b/zhaquirks/philips/rom001.py
--- a/zhaquirks/philips/rom001.py
+++ b/zhaquirks/philips/rom001.py
@@ -13,6 +13,7 @@
)
from zigpy.zcl.clusters.lightlink import LightLink
+from . import PhilipsBasicCluster, PhilipsRemoteCluster
from ..const import (
COMMAND,
COMMAND_OFF_WITH_EFFECT,
@@ -69,10 +70,10 @@
PROFILE_ID: zha.PROFILE_ID,
DEVICE_TYPE: zha.DeviceType.NON_COLOR_SCENE_CONTROLLER,
INPUT_CLUSTERS: [
- Basic.cluster_id,
+ PhilipsBasicCluster,
PowerConfiguration.cluster_id,
Identify.cluster_id,
- DEVICE_SPECIFIC_UNKNOWN,
+ PhilipsRemoteCluster,
LightLink.cluster_id,
],
OUTPUT_CLUSTERS: [
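
For reference, 64512 is 0xFC00, the manufacturer-specific Philips cluster that carries the button-press frames; the patch above imports the shared `PhilipsBasicCluster` and `PhilipsRemoteCluster` from the `zhaquirks.philips` package and substitutes them for the raw ids in the quirk's `replacement` endpoint, the same way the dimmer quirks already do. A rough, hypothetical sketch of the shape of such a custom cluster (names and attributes here are illustrative, not the actual zhaquirks implementation):

```python
# Illustrative only: a minimal manufacturer-specific CustomCluster bound to 0xFC00.
# The real PhilipsRemoteCluster in zhaquirks additionally parses button-event commands.
from zigpy.quirks import CustomCluster


class IllustrativePhilipsRemoteCluster(CustomCluster):
    """Hypothetical stand-in for the Philips 0xFC00 remote cluster."""

    cluster_id = 0xFC00  # == 64512, the DEVICE_SPECIFIC_UNKNOWN id in rom001.py
    name = "PhilipsRemoteCluster"
    ep_attribute = "philips_remote_cluster"


assert IllustrativePhilipsRemoteCluster.cluster_id == 64512
```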
|
{"golden_diff": "diff --git a/zhaquirks/philips/rom001.py b/zhaquirks/philips/rom001.py\n--- a/zhaquirks/philips/rom001.py\n+++ b/zhaquirks/philips/rom001.py\n@@ -13,6 +13,7 @@\n )\n from zigpy.zcl.clusters.lightlink import LightLink\n \n+from . import PhilipsBasicCluster, PhilipsRemoteCluster\n from ..const import (\n COMMAND,\n COMMAND_OFF_WITH_EFFECT,\n@@ -69,10 +70,10 @@\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.NON_COLOR_SCENE_CONTROLLER,\n INPUT_CLUSTERS: [\n- Basic.cluster_id,\n+ PhilipsBasicCluster,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n- DEVICE_SPECIFIC_UNKNOWN,\n+ PhilipsRemoteCluster,\n LightLink.cluster_id,\n ],\n OUTPUT_CLUSTERS: [\n", "issue": "[Device Support Request] Phillips Button (ROM001) Missing support for manufactuer specific button events\n**Is your feature request related to a problem? Please describe.**\r\nI have just started migrating my devices from deconz to ZHA and had success with the hue dimmer remote event codes.\r\nHowever, the hue button which also supports the 64512 is not configured for it.\r\n\r\n**Describe the solution you'd like**\r\nAdd the 64512 (0xfc00) cluster to the hue button.\r\n\r\n**Device signature - this can be acquired by removing the device from ZHA and pairing it again from the add devices screen. Be sure to add the entire content of the log panel after pairing the device to a code block below this line.**\r\n\r\nButton:\r\n```\r\n{\r\n \"node_descriptor\": \"NodeDescriptor(byte1=2, byte2=64, mac_capability_flags=128, manufacturer_code=4107, maximum_buffer_size=82, maximum_incoming_transfer_size=128, server_mask=11264, maximum_outgoing_transfer_size=128, descriptor_capability_field=0)\",\r\n \"endpoints\": {\r\n \"1\": {\r\n \"profile_id\": 260,\r\n \"device_type\": \"0x0830\",\r\n \"in_clusters\": [\r\n \"0x0000\",\r\n \"0x0001\",\r\n \"0x0003\",\r\n \"0x1000\",\r\n \"0xfc00\"\r\n ],\r\n \"out_clusters\": [\r\n \"0x0000\",\r\n \"0x0003\",\r\n \"0x0004\",\r\n \"0x0005\",\r\n \"0x0006\",\r\n \"0x0008\",\r\n \"0x0019\",\r\n \"0x1000\"\r\n ]\r\n }\r\n },\r\n \"manufacturer\": \"Philips\",\r\n \"model\": \"ROM001\",\r\n \"class\": \"zhaquirks.philips.rom001.PhilipsROM001\"\r\n}\r\n```\r\n\r\nDimmer:\r\n```\r\n{\r\n \"node_descriptor\": \"NodeDescriptor(byte1=2, byte2=64, mac_capability_flags=128, manufacturer_code=4107, maximum_buffer_size=89, maximum_incoming_transfer_size=63, server_mask=0, maximum_outgoing_transfer_size=63, descriptor_capability_field=0)\",\r\n \"endpoints\": {\r\n \"1\": {\r\n \"profile_id\": 49246,\r\n \"device_type\": \"0x0830\",\r\n \"in_clusters\": [\r\n \"0x0000\"\r\n ],\r\n \"out_clusters\": [\r\n \"0x0000\",\r\n \"0x0003\",\r\n \"0x0004\",\r\n \"0x0005\",\r\n \"0x0006\",\r\n \"0x0008\"\r\n ]\r\n },\r\n \"2\": {\r\n \"profile_id\": 260,\r\n \"device_type\": \"0x000c\",\r\n \"in_clusters\": [\r\n \"0x0000\",\r\n \"0x0001\",\r\n \"0x0003\",\r\n \"0x000f\",\r\n \"0xfc00\"\r\n ],\r\n \"out_clusters\": [\r\n \"0x0019\"\r\n ]\r\n }\r\n },\r\n \"manufacturer\": \"Philips\",\r\n \"model\": \"RWL021\",\r\n \"class\": \"zhaquirks.philips.rwl021.PhilipsRWL021\"\r\n}\r\n```\r\n**Additional context**\r\nAdd any other context or screenshots about the feature request here.\r\n\r\nThis should be a simple case of importing PhilipsRemoteCluster and applying 
it\r\nhttps://github.com/zigpy/zha-device-handlers/blob/71d4dcb9c8f502dee7f73ac4bbf1593b916e794e/zhaquirks/philips/rwl020.py#L80\r\n\r\nhttps://github.com/zigpy/zha-device-handlers/blob/71d4dcb9c8f502dee7f73ac4bbf1593b916e794e/zhaquirks/philips/rom001.py#L75\r\n\n", "before_files": [{"content": "\"\"\"Philips ROM001 device.\"\"\"\nfrom zigpy.profiles import zha\nfrom zigpy.quirks import CustomDevice\nfrom zigpy.zcl.clusters.general import (\n Basic,\n Groups,\n Identify,\n LevelControl,\n OnOff,\n Ota,\n PowerConfiguration,\n Scenes,\n)\nfrom zigpy.zcl.clusters.lightlink import LightLink\n\nfrom ..const import (\n COMMAND,\n COMMAND_OFF_WITH_EFFECT,\n COMMAND_ON,\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n SHORT_PRESS,\n TURN_OFF,\n TURN_ON,\n)\n\nDEVICE_SPECIFIC_UNKNOWN = 64512\n\n\nclass PhilipsROM001(CustomDevice):\n \"\"\"Philips ROM001 device.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=2096\n # device_version=1\n # input_clusters=[0, 1, 3, 64512, 4096]\n # output_clusters=[25, 0, 3, 4, 6, 8, 5, 4096]>\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.NON_COLOR_SCENE_CONTROLLER,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n DEVICE_SPECIFIC_UNKNOWN,\n LightLink.cluster_id,\n ],\n OUTPUT_CLUSTERS: [\n Ota.cluster_id,\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n LightLink.cluster_id,\n ],\n }\n }\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.NON_COLOR_SCENE_CONTROLLER,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n DEVICE_SPECIFIC_UNKNOWN,\n LightLink.cluster_id,\n ],\n OUTPUT_CLUSTERS: [\n Ota.cluster_id,\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n LightLink.cluster_id,\n ],\n }\n }\n }\n\n device_automation_triggers = {\n (SHORT_PRESS, TURN_ON): {COMMAND: COMMAND_ON},\n (SHORT_PRESS, TURN_OFF): {COMMAND: COMMAND_OFF_WITH_EFFECT},\n }\n", "path": "zhaquirks/philips/rom001.py"}], "after_files": [{"content": "\"\"\"Philips ROM001 device.\"\"\"\nfrom zigpy.profiles import zha\nfrom zigpy.quirks import CustomDevice\nfrom zigpy.zcl.clusters.general import (\n Basic,\n Groups,\n Identify,\n LevelControl,\n OnOff,\n Ota,\n PowerConfiguration,\n Scenes,\n)\nfrom zigpy.zcl.clusters.lightlink import LightLink\n\nfrom . 
import PhilipsBasicCluster, PhilipsRemoteCluster\nfrom ..const import (\n COMMAND,\n COMMAND_OFF_WITH_EFFECT,\n COMMAND_ON,\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n SHORT_PRESS,\n TURN_OFF,\n TURN_ON,\n)\n\nDEVICE_SPECIFIC_UNKNOWN = 64512\n\n\nclass PhilipsROM001(CustomDevice):\n \"\"\"Philips ROM001 device.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=2096\n # device_version=1\n # input_clusters=[0, 1, 3, 64512, 4096]\n # output_clusters=[25, 0, 3, 4, 6, 8, 5, 4096]>\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.NON_COLOR_SCENE_CONTROLLER,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n DEVICE_SPECIFIC_UNKNOWN,\n LightLink.cluster_id,\n ],\n OUTPUT_CLUSTERS: [\n Ota.cluster_id,\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n LightLink.cluster_id,\n ],\n }\n }\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.NON_COLOR_SCENE_CONTROLLER,\n INPUT_CLUSTERS: [\n PhilipsBasicCluster,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n PhilipsRemoteCluster,\n LightLink.cluster_id,\n ],\n OUTPUT_CLUSTERS: [\n Ota.cluster_id,\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n LightLink.cluster_id,\n ],\n }\n }\n }\n\n device_automation_triggers = {\n (SHORT_PRESS, TURN_ON): {COMMAND: COMMAND_ON},\n (SHORT_PRESS, TURN_OFF): {COMMAND: COMMAND_OFF_WITH_EFFECT},\n }\n", "path": "zhaquirks/philips/rom001.py"}]}
| 1,988 | 212 |