problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64 271-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_14730 | rasdani/github-patches | git_diff | scrapy__scrapy-4775 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document link extractor usage outside CrawlSpider rules
https://docs.scrapy.org/en/latest/topics/link-extractors.html mentions that link extractors may be used outside `CrawlSpider`, but it does not go into detail on how to do that.
Also, there are broken references to `scrapy.link.Link`, we should provide reference documentation for that class.
--- END ISSUE ---
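A minimal sketch of the usage the issue asks to document: driving a `LinkExtractor` by hand from an ordinary `Spider` callback. The spider name and start URL below are placeholders; only `LinkExtractor.extract_links()` and the `Link` attributes (`url`, `text`, `fragment`, `nofollow`) come from Scrapy itself.
```python
import scrapy
from scrapy.linkextractors import LinkExtractor


class ManualLinkSpider(scrapy.Spider):
    # placeholder spider used only to illustrate manual link extraction
    name = "manual_links"
    start_urls = ["https://example.com"]

    def parse(self, response):
        extractor = LinkExtractor()
        for link in extractor.extract_links(response):
            # each item is a scrapy.link.Link with url/text/fragment/nofollow
            if not link.nofollow:
                yield response.follow(link.url, callback=self.parse)
```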
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/link.py`
Content:
```
1 """
2 This module defines the Link object used in Link extractors.
3
4 For actual link extractors implementation see scrapy.linkextractors, or
5 its documentation in: docs/topics/link-extractors.rst
6 """
7
8
9 class Link:
10 """Link objects represent an extracted link by the LinkExtractor."""
11
12 __slots__ = ['url', 'text', 'fragment', 'nofollow']
13
14 def __init__(self, url, text='', fragment='', nofollow=False):
15 if not isinstance(url, str):
16 got = url.__class__.__name__
17 raise TypeError(f"Link urls must be str objects, got {got}")
18 self.url = url
19 self.text = text
20 self.fragment = fragment
21 self.nofollow = nofollow
22
23 def __eq__(self, other):
24 return (
25 self.url == other.url
26 and self.text == other.text
27 and self.fragment == other.fragment
28 and self.nofollow == other.nofollow
29 )
30
31 def __hash__(self):
32 return hash(self.url) ^ hash(self.text) ^ hash(self.fragment) ^ hash(self.nofollow)
33
34 def __repr__(self):
35 return (
36 f'Link(url={self.url!r}, text={self.text!r}, '
37 f'fragment={self.fragment!r}, nofollow={self.nofollow!r})'
38 )
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/link.py b/scrapy/link.py
--- a/scrapy/link.py
+++ b/scrapy/link.py
@@ -7,7 +7,22 @@
class Link:
- """Link objects represent an extracted link by the LinkExtractor."""
+ """Link objects represent an extracted link by the LinkExtractor.
+
+ Using the anchor tag sample below to illustrate the parameters::
+
+ <a href="https://example.com/nofollow.html#foo" rel="nofollow">Dont follow this one</a>
+
+ :param url: the absolute url being linked to in the anchor tag.
+ From the sample, this is ``https://example.com/nofollow.html``.
+
+ :param text: the text in the anchor tag. From the sample, this is ``Dont follow this one``.
+
+ :param fragment: the part of the url after the hash symbol. From the sample, this is ``foo``.
+
+ :param nofollow: an indication of the presence or absence of a nofollow value in the ``rel`` attribute
+ of the anchor tag.
+ """
__slots__ = ['url', 'text', 'fragment', 'nofollow']
|
{"golden_diff": "diff --git a/scrapy/link.py b/scrapy/link.py\n--- a/scrapy/link.py\n+++ b/scrapy/link.py\n@@ -7,7 +7,22 @@\n \n \n class Link:\n- \"\"\"Link objects represent an extracted link by the LinkExtractor.\"\"\"\n+ \"\"\"Link objects represent an extracted link by the LinkExtractor.\n+\n+ Using the anchor tag sample below to illustrate the parameters::\n+\n+ <a href=\"https://example.com/nofollow.html#foo\" rel=\"nofollow\">Dont follow this one</a>\n+\n+ :param url: the absolute url being linked to in the anchor tag.\n+ From the sample, this is ``https://example.com/nofollow.html``.\n+\n+ :param text: the text in the anchor tag. From the sample, this is ``Dont follow this one``.\n+\n+ :param fragment: the part of the url after the hash symbol. From the sample, this is ``foo``.\n+\n+ :param nofollow: an indication of the presence or absence of a nofollow value in the ``rel`` attribute\n+ of the anchor tag.\n+ \"\"\"\n \n __slots__ = ['url', 'text', 'fragment', 'nofollow']\n", "issue": "Document link extractor usage outside CrawlSpider rules\nhttps://docs.scrapy.org/en/latest/topics/link-extractors.html mentions that link extractors may be used outside `CrawlSpider`, but it does not go into detail on how to do that.\r\n\r\nAlso, there are broken references to `scrapy.link.Link`, we should provide reference documentation for that class.\n", "before_files": [{"content": "\"\"\"\nThis module defines the Link object used in Link extractors.\n\nFor actual link extractors implementation see scrapy.linkextractors, or\nits documentation in: docs/topics/link-extractors.rst\n\"\"\"\n\n\nclass Link:\n \"\"\"Link objects represent an extracted link by the LinkExtractor.\"\"\"\n\n __slots__ = ['url', 'text', 'fragment', 'nofollow']\n\n def __init__(self, url, text='', fragment='', nofollow=False):\n if not isinstance(url, str):\n got = url.__class__.__name__\n raise TypeError(f\"Link urls must be str objects, got {got}\")\n self.url = url\n self.text = text\n self.fragment = fragment\n self.nofollow = nofollow\n\n def __eq__(self, other):\n return (\n self.url == other.url\n and self.text == other.text\n and self.fragment == other.fragment\n and self.nofollow == other.nofollow\n )\n\n def __hash__(self):\n return hash(self.url) ^ hash(self.text) ^ hash(self.fragment) ^ hash(self.nofollow)\n\n def __repr__(self):\n return (\n f'Link(url={self.url!r}, text={self.text!r}, '\n f'fragment={self.fragment!r}, nofollow={self.nofollow!r})'\n )\n", "path": "scrapy/link.py"}], "after_files": [{"content": "\"\"\"\nThis module defines the Link object used in Link extractors.\n\nFor actual link extractors implementation see scrapy.linkextractors, or\nits documentation in: docs/topics/link-extractors.rst\n\"\"\"\n\n\nclass Link:\n \"\"\"Link objects represent an extracted link by the LinkExtractor.\n\n Using the anchor tag sample below to illustrate the parameters::\n\n <a href=\"https://example.com/nofollow.html#foo\" rel=\"nofollow\">Dont follow this one</a>\n\n :param url: the absolute url being linked to in the anchor tag.\n From the sample, this is ``https://example.com/nofollow.html``.\n\n :param text: the text in the anchor tag. From the sample, this is ``Dont follow this one``.\n\n :param fragment: the part of the url after the hash symbol. 
From the sample, this is ``foo``.\n\n :param nofollow: an indication of the presence or absence of a nofollow value in the ``rel`` attribute\n of the anchor tag.\n \"\"\"\n\n __slots__ = ['url', 'text', 'fragment', 'nofollow']\n\n def __init__(self, url, text='', fragment='', nofollow=False):\n if not isinstance(url, str):\n got = url.__class__.__name__\n raise TypeError(\"Link urls must be str objects, got %s\" % got)\n self.url = url\n self.text = text\n self.fragment = fragment\n self.nofollow = nofollow\n\n def __eq__(self, other):\n return (\n self.url == other.url\n and self.text == other.text\n and self.fragment == other.fragment\n and self.nofollow == other.nofollow\n )\n\n def __hash__(self):\n return hash(self.url) ^ hash(self.text) ^ hash(self.fragment) ^ hash(self.nofollow)\n\n def __repr__(self):\n return (\n 'Link(url=%r, text=%r, fragment=%r, nofollow=%r)'\n % (self.url, self.text, self.fragment, self.nofollow)\n )\n", "path": "scrapy/link.py"}]}
| 690 | 262 |
gh_patches_debug_41130 | rasdani/github-patches | git_diff | ansible__ansible-modules-core-3623 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
junos_template module does not handle the `overwrite` action correctly
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
junos_template
##### ANSIBLE VERSION
```
ansible 2.2.0 (detached HEAD d98bd3551e) last updated 2016/05/06 18:06:05 (GMT +100)
lib/ansible/modules/core: (detached HEAD 1f5cf669dd) last updated 2016/05/06 18:06:07 (GMT +100)
lib/ansible/modules/extras: (detached HEAD 431591c2b4) last updated 2016/05/06 18:06:07 (GMT +100)
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The argument parsing for the `junos_template` module is flawed.
If `overwrite=true` is set, `action==merge` as `merge` defaults to `True`
Trying to set `merge=false` as well, gives `FAILED! => {"changed": false, "failed": true, "msg": "parameters are mutually exclusive: ('merge', 'overwrite')"}`
If you set `merge=false`, `action==replace`.
It's impossible to get `action==overwrite`
##### STEPS TO REPRODUCE
```
---
- name: Install configuration
junos_template: >
host='{{ ansible_host }}'
port={{ ansible_port }}
src='{{ junos_conf }}'
comment='configured by ansible'
timeout=120
overwrite=true
```
##### EXPECTED RESULTS
The config is replaced in it's entirety
##### ACTUAL RESULTS
The config is merged
```
<!--- Paste verbatim command output between quotes -->
```
--- END ISSUE ---
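The root cause is easier to see with the module's branch logic lifted into a standalone function (a paraphrase for illustration, not the module's real API): because `merge` defaults to true and is mutually exclusive with `overwrite`, the `'overwrite'` branch is unreachable.
```python
def resolve_action(merge: bool = True, overwrite: bool = False) -> str:
    # paraphrase of the decision logic in junos_template.py
    if merge:
        return "merge"
    elif overwrite:
        return "overwrite"
    return "replace"


resolve_action(overwrite=True)               # -> "merge", not "overwrite"
resolve_action(merge=False, overwrite=True)  # rejected by the module's mutually-exclusive check before this logic runs
```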
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `network/junos/junos_template.py`
Content:
```
1 #!/usr/bin/python
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17 #
18
19 DOCUMENTATION = """
20 ---
21 module: junos_template
22 version_added: "2.1"
23 author: "Peter Sprygada (@privateip)"
24 short_description: Manage configuration on remote devices running Junos
25 description:
26 - The M(junos_template) module will load a candidate configuration
27 from a template file onto a remote device running Junos. The
28 module will return the differences in configuration if the diff
29 option is specified on the Ansible command line
30 extends_documentation_fragment: junos
31 options:
32 src:
33 description:
34 - The path to the config source. The source can be either a
35 file with config or a template that will be merged during
36 runtime. By default the task will search for the source
37 file in role or playbook root folder in templates directory.
38 required: true
39 default: null
40 backup:
41 description:
42 - When this argument is configured true, the module will backup
43 the configuration from the node prior to making any changes.
44 The backup file will be written to backup_{{ hostname }} in
45 the root of the playbook directory.
46 required: false
47 default: false
48 choices: ["true", "false"]
49 confirm:
50 description:
51 - The C(confirm) argument will configure a time out value for
52 the commit to be confirmed before it is automatically
53 rolled back. If the C(confirm) argument is set to False, this
54 argument is silently ignored. If the value for this argument
55 is set to 0, the commit is confirmed immediately.
56 required: false
57 default: 0
58 comment:
59 description:
60 - The C(comment) argument specifies a text string to be used
61 when committing the configuration. If the C(confirm) argument
62 is set to False, this argument is silently ignored.
63 required: false
64 default: configured by junos_template
65 merge:
66 description:
67 - The C(merge) argument instructs the module to merge the contents
68 of C(src) with the configuration running on the remote device. If
69 both C(merge) and C(overwrite) are set to false, the configuration
70 is replaced.
71 required: false
72 default: true
73 overwrite:
74 description:
75 - The C(overwrite) argument will overwrite the entire configuration
76 on the remote device with the contents loaded from C(src). If
77 both C(merge) and C(overwrite) are set to false, the configuration
78 is replaced.
79 required: false
80 default: false
81 config_format:
82 description:
83 - The C(format) argument specifies the format of the configuration
84 template specified in C(src). If the format argument is not
85 specified, the module will attempt to infer the configuration
86 format based of file extension. Files that end in I(xml) will set
87 the format to xml. Files that end in I(set) will set the format
88 to set and all other files will default the format to text.
89 required: false
90 default: null
91 choices: ['text', 'xml', 'set']
92 requirements:
93 - junos-eznc
94 notes:
95 - This module requires the netconf system service be enabled on
96 the remote device being managed
97 """
98
99 EXAMPLES = """
100 - junos_template:
101 src: config.j2
102 comment: update system config
103
104 - name: replace config hierarchy
105 src: config.j2
106 replace: yes
107
108 - name: overwrite the config
109 src: config.j2
110 overwrite: yes
111 """
112
113 DEFAULT_COMMENT = 'configured by junos_template'
114
115 def main():
116
117 argument_spec = dict(
118 src=dict(required=True, type='path'),
119 confirm=dict(default=0, type='int'),
120 comment=dict(default=DEFAULT_COMMENT),
121 merge=dict(default=True, type='bool'),
122 overwrite=dict(default=False, type='bool'),
123 config_format=dict(choices=['text', 'set', 'xml']),
124 backup=dict(default=False, type='bool'),
125 transport=dict(default='netconf', choices=['netconf'])
126 )
127
128 mutually_exclusive = [('merge', 'overwrite')]
129
130 module = get_module(argument_spec=argument_spec,
131 mutually_exclusive=mutually_exclusive,
132 supports_check_mode=True)
133
134 comment = module.params['comment']
135 confirm = module.params['confirm']
136 commit = not module.check_mode
137
138 merge = module.params['merge']
139 overwrite = module.params['overwrite']
140
141 src = module.params['src']
142 fmt = module.params['config_format']
143
144 if overwrite and fmt == 'set':
145 module.fail_json(msg="overwrite cannot be used when format is "
146 "set per junos documentation")
147
148 if merge:
149 action = 'merge'
150 elif overwrite:
151 action = 'overwrite'
152 else:
153 action = 'replace'
154
155 results = dict(changed=False)
156 results['_backup'] = str(module.get_config()).strip()
157
158 diff = module.load_config(src, action=action, comment=comment,
159 format=fmt, commit=commit, confirm=confirm)
160
161 if diff:
162 results['changed'] = True
163 results['diff'] = dict(prepared=diff)
164
165 module.exit_json(**results)
166
167
168 from ansible.module_utils.basic import *
169 from ansible.module_utils.junos import *
170
171 if __name__ == '__main__':
172 main()
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/network/junos/junos_template.py b/network/junos/junos_template.py
--- a/network/junos/junos_template.py
+++ b/network/junos/junos_template.py
@@ -62,22 +62,13 @@
is set to False, this argument is silently ignored.
required: false
default: configured by junos_template
- merge:
+ action:
description:
- - The C(merge) argument instructs the module to merge the contents
- of C(src) with the configuration running on the remote device. If
- both C(merge) and C(overwrite) are set to false, the configuration
- is replaced.
+ - The C(action) argument specifies how the module will apply changes.
required: false
- default: true
- overwrite:
- description:
- - The C(overwrite) argument will overwrite the entire configuration
- on the remote device with the contents loaded from C(src). If
- both C(merge) and C(overwrite) are set to false, the configuration
- is replaced.
- required: false
- default: false
+ default: merge
+ choices: ['merge', 'overwrite', 'replace']
+ version_added: "2.2"
config_format:
description:
- The C(format) argument specifies the format of the configuration
@@ -103,11 +94,11 @@
- name: replace config hierarchy
src: config.j2
- replace: yes
+ action: replace
- name: overwrite the config
src: config.j2
- overwrite: yes
+ action: overwrite
"""
DEFAULT_COMMENT = 'configured by junos_template'
@@ -118,40 +109,28 @@
src=dict(required=True, type='path'),
confirm=dict(default=0, type='int'),
comment=dict(default=DEFAULT_COMMENT),
- merge=dict(default=True, type='bool'),
- overwrite=dict(default=False, type='bool'),
+ action=dict(default='merge', choices=['merge', 'overwrite', 'replace']),
config_format=dict(choices=['text', 'set', 'xml']),
backup=dict(default=False, type='bool'),
transport=dict(default='netconf', choices=['netconf'])
)
- mutually_exclusive = [('merge', 'overwrite')]
-
module = get_module(argument_spec=argument_spec,
- mutually_exclusive=mutually_exclusive,
supports_check_mode=True)
comment = module.params['comment']
confirm = module.params['confirm']
commit = not module.check_mode
- merge = module.params['merge']
- overwrite = module.params['overwrite']
+ action = module.params['action']
src = module.params['src']
fmt = module.params['config_format']
- if overwrite and fmt == 'set':
+ if action == 'overwrite' and fmt == 'set':
module.fail_json(msg="overwrite cannot be used when format is "
"set per junos documentation")
- if merge:
- action = 'merge'
- elif overwrite:
- action = 'overwrite'
- else:
- action = 'replace'
-
results = dict(changed=False)
results['_backup'] = str(module.get_config()).strip()
|
{"golden_diff": "diff --git a/network/junos/junos_template.py b/network/junos/junos_template.py\n--- a/network/junos/junos_template.py\n+++ b/network/junos/junos_template.py\n@@ -62,22 +62,13 @@\n is set to False, this argument is silently ignored.\n required: false\n default: configured by junos_template\n- merge:\n+ action:\n description:\n- - The C(merge) argument instructs the module to merge the contents\n- of C(src) with the configuration running on the remote device. If\n- both C(merge) and C(overwrite) are set to false, the configuration\n- is replaced.\n+ - The C(action) argument specifies how the module will apply changes.\n required: false\n- default: true\n- overwrite:\n- description:\n- - The C(overwrite) argument will overwrite the entire configuration\n- on the remote device with the contents loaded from C(src). If\n- both C(merge) and C(overwrite) are set to false, the configuration\n- is replaced.\n- required: false\n- default: false\n+ default: merge\n+ choices: ['merge', 'overwrite', 'replace']\n+ version_added: \"2.2\"\n config_format:\n description:\n - The C(format) argument specifies the format of the configuration\n@@ -103,11 +94,11 @@\n \n - name: replace config hierarchy\n src: config.j2\n- replace: yes\n+ action: replace\n \n - name: overwrite the config\n src: config.j2\n- overwrite: yes\n+ action: overwrite\n \"\"\"\n \n DEFAULT_COMMENT = 'configured by junos_template'\n@@ -118,40 +109,28 @@\n src=dict(required=True, type='path'),\n confirm=dict(default=0, type='int'),\n comment=dict(default=DEFAULT_COMMENT),\n- merge=dict(default=True, type='bool'),\n- overwrite=dict(default=False, type='bool'),\n+ action=dict(default='merge', choices=['merge', 'overwrite', 'replace']),\n config_format=dict(choices=['text', 'set', 'xml']),\n backup=dict(default=False, type='bool'),\n transport=dict(default='netconf', choices=['netconf'])\n )\n \n- mutually_exclusive = [('merge', 'overwrite')]\n-\n module = get_module(argument_spec=argument_spec,\n- mutually_exclusive=mutually_exclusive,\n supports_check_mode=True)\n \n comment = module.params['comment']\n confirm = module.params['confirm']\n commit = not module.check_mode\n \n- merge = module.params['merge']\n- overwrite = module.params['overwrite']\n+ action = module.params['action']\n \n src = module.params['src']\n fmt = module.params['config_format']\n \n- if overwrite and fmt == 'set':\n+ if action == 'overwrite' and fmt == 'set':\n module.fail_json(msg=\"overwrite cannot be used when format is \"\n \"set per junos documentation\")\n \n- if merge:\n- action = 'merge'\n- elif overwrite:\n- action = 'overwrite'\n- else:\n- action = 'replace'\n-\n results = dict(changed=False)\n results['_backup'] = str(module.get_config()).strip()\n", "issue": "junos_template module does not handle the `overwrite` action correctly\n<!--- Verify first that your issue/request is not already reported in GitHub -->\n##### ISSUE TYPE\n\n<!--- Pick one below and delete the rest: -->\n- Bug Report\n##### COMPONENT NAME\n\njunos_template\n##### ANSIBLE VERSION\n\n```\nansible 2.2.0 (detached HEAD d98bd3551e) last updated 2016/05/06 18:06:05 (GMT +100)\n lib/ansible/modules/core: (detached HEAD 1f5cf669dd) last updated 2016/05/06 18:06:07 (GMT +100)\n lib/ansible/modules/extras: (detached HEAD 431591c2b4) last updated 2016/05/06 18:06:07 (GMT +100)\n```\n##### OS / ENVIRONMENT\n\nN/A\n##### SUMMARY\n\nThe argument parsing for the `junos_template` module is flawed.\nIf `overwrite=true` is set, `action==merge` as `merge` defaults to `True`\nTrying to 
set `merge=false` as well, gives `FAILED! => {\"changed\": false, \"failed\": true, \"msg\": \"parameters are mutually exclusive: ('merge', 'overwrite')\"}`\n\nIf you set `merge=false`, `action==replace`.\n\nIt's impossible to get `action==overwrite`\n##### STEPS TO REPRODUCE\n\n```\n\n---\n- name: Install configuration\n junos_template: >\n host='{{ ansible_host }}'\n port={{ ansible_port }}\n src='{{ junos_conf }}'\n comment='configured by ansible'\n timeout=120\n overwrite=true\n\n```\n##### EXPECTED RESULTS\n\nThe config is replaced in it's entirety\n##### ACTUAL RESULTS\n\nThe config is merged\n\n```\n<!--- Paste verbatim command output between quotes -->\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nDOCUMENTATION = \"\"\"\n---\nmodule: junos_template\nversion_added: \"2.1\"\nauthor: \"Peter Sprygada (@privateip)\"\nshort_description: Manage configuration on remote devices running Junos\ndescription:\n - The M(junos_template) module will load a candidate configuration\n from a template file onto a remote device running Junos. The\n module will return the differences in configuration if the diff\n option is specified on the Ansible command line\nextends_documentation_fragment: junos\noptions:\n src:\n description:\n - The path to the config source. The source can be either a\n file with config or a template that will be merged during\n runtime. By default the task will search for the source\n file in role or playbook root folder in templates directory.\n required: true\n default: null\n backup:\n description:\n - When this argument is configured true, the module will backup\n the configuration from the node prior to making any changes.\n The backup file will be written to backup_{{ hostname }} in\n the root of the playbook directory.\n required: false\n default: false\n choices: [\"true\", \"false\"]\n confirm:\n description:\n - The C(confirm) argument will configure a time out value for\n the commit to be confirmed before it is automatically\n rolled back. If the C(confirm) argument is set to False, this\n argument is silently ignored. If the value for this argument\n is set to 0, the commit is confirmed immediately.\n required: false\n default: 0\n comment:\n description:\n - The C(comment) argument specifies a text string to be used\n when committing the configuration. If the C(confirm) argument\n is set to False, this argument is silently ignored.\n required: false\n default: configured by junos_template\n merge:\n description:\n - The C(merge) argument instructs the module to merge the contents\n of C(src) with the configuration running on the remote device. 
If\n both C(merge) and C(overwrite) are set to false, the configuration\n is replaced.\n required: false\n default: true\n overwrite:\n description:\n - The C(overwrite) argument will overwrite the entire configuration\n on the remote device with the contents loaded from C(src). If\n both C(merge) and C(overwrite) are set to false, the configuration\n is replaced.\n required: false\n default: false\n config_format:\n description:\n - The C(format) argument specifies the format of the configuration\n template specified in C(src). If the format argument is not\n specified, the module will attempt to infer the configuration\n format based of file extension. Files that end in I(xml) will set\n the format to xml. Files that end in I(set) will set the format\n to set and all other files will default the format to text.\n required: false\n default: null\n choices: ['text', 'xml', 'set']\nrequirements:\n - junos-eznc\nnotes:\n - This module requires the netconf system service be enabled on\n the remote device being managed\n\"\"\"\n\nEXAMPLES = \"\"\"\n- junos_template:\n src: config.j2\n comment: update system config\n\n- name: replace config hierarchy\n src: config.j2\n replace: yes\n\n- name: overwrite the config\n src: config.j2\n overwrite: yes\n\"\"\"\n\nDEFAULT_COMMENT = 'configured by junos_template'\n\ndef main():\n\n argument_spec = dict(\n src=dict(required=True, type='path'),\n confirm=dict(default=0, type='int'),\n comment=dict(default=DEFAULT_COMMENT),\n merge=dict(default=True, type='bool'),\n overwrite=dict(default=False, type='bool'),\n config_format=dict(choices=['text', 'set', 'xml']),\n backup=dict(default=False, type='bool'),\n transport=dict(default='netconf', choices=['netconf'])\n )\n\n mutually_exclusive = [('merge', 'overwrite')]\n\n module = get_module(argument_spec=argument_spec,\n mutually_exclusive=mutually_exclusive,\n supports_check_mode=True)\n\n comment = module.params['comment']\n confirm = module.params['confirm']\n commit = not module.check_mode\n\n merge = module.params['merge']\n overwrite = module.params['overwrite']\n\n src = module.params['src']\n fmt = module.params['config_format']\n\n if overwrite and fmt == 'set':\n module.fail_json(msg=\"overwrite cannot be used when format is \"\n \"set per junos documentation\")\n\n if merge:\n action = 'merge'\n elif overwrite:\n action = 'overwrite'\n else:\n action = 'replace'\n\n results = dict(changed=False)\n results['_backup'] = str(module.get_config()).strip()\n\n diff = module.load_config(src, action=action, comment=comment,\n format=fmt, commit=commit, confirm=confirm)\n\n if diff:\n results['changed'] = True\n results['diff'] = dict(prepared=diff)\n\n module.exit_json(**results)\n\n\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.junos import *\n\nif __name__ == '__main__':\n main()\n", "path": "network/junos/junos_template.py"}], "after_files": [{"content": "#!/usr/bin/python\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nDOCUMENTATION = \"\"\"\n---\nmodule: junos_template\nversion_added: \"2.1\"\nauthor: \"Peter Sprygada (@privateip)\"\nshort_description: Manage configuration on remote devices running Junos\ndescription:\n - The M(junos_template) module will load a candidate configuration\n from a template file onto a remote device running Junos. The\n module will return the differences in configuration if the diff\n option is specified on the Ansible command line\nextends_documentation_fragment: junos\noptions:\n src:\n description:\n - The path to the config source. The source can be either a\n file with config or a template that will be merged during\n runtime. By default the task will search for the source\n file in role or playbook root folder in templates directory.\n required: true\n default: null\n backup:\n description:\n - When this argument is configured true, the module will backup\n the configuration from the node prior to making any changes.\n The backup file will be written to backup_{{ hostname }} in\n the root of the playbook directory.\n required: false\n default: false\n choices: [\"true\", \"false\"]\n confirm:\n description:\n - The C(confirm) argument will configure a time out value for\n the commit to be confirmed before it is automatically\n rolled back. If the C(confirm) argument is set to False, this\n argument is silently ignored. If the value for this argument\n is set to 0, the commit is confirmed immediately.\n required: false\n default: 0\n comment:\n description:\n - The C(comment) argument specifies a text string to be used\n when committing the configuration. If the C(confirm) argument\n is set to False, this argument is silently ignored.\n required: false\n default: configured by junos_template\n action:\n description:\n - The C(action) argument specifies how the module will apply changes.\n required: false\n default: merge\n choices: ['merge', 'overwrite', 'replace']\n version_added: \"2.2\"\n config_format:\n description:\n - The C(format) argument specifies the format of the configuration\n template specified in C(src). If the format argument is not\n specified, the module will attempt to infer the configuration\n format based of file extension. Files that end in I(xml) will set\n the format to xml. 
Files that end in I(set) will set the format\n to set and all other files will default the format to text.\n required: false\n default: null\n choices: ['text', 'xml', 'set']\nrequirements:\n - junos-eznc\nnotes:\n - This module requires the netconf system service be enabled on\n the remote device being managed\n\"\"\"\n\nEXAMPLES = \"\"\"\n- junos_template:\n src: config.j2\n comment: update system config\n\n- name: replace config hierarchy\n src: config.j2\n action: replace\n\n- name: overwrite the config\n src: config.j2\n action: overwrite\n\"\"\"\n\nDEFAULT_COMMENT = 'configured by junos_template'\n\ndef main():\n\n argument_spec = dict(\n src=dict(required=True, type='path'),\n confirm=dict(default=0, type='int'),\n comment=dict(default=DEFAULT_COMMENT),\n action=dict(default='merge', choices=['merge', 'overwrite', 'replace']),\n config_format=dict(choices=['text', 'set', 'xml']),\n backup=dict(default=False, type='bool'),\n transport=dict(default='netconf', choices=['netconf'])\n )\n\n module = get_module(argument_spec=argument_spec,\n supports_check_mode=True)\n\n comment = module.params['comment']\n confirm = module.params['confirm']\n commit = not module.check_mode\n\n action = module.params['action']\n\n src = module.params['src']\n fmt = module.params['config_format']\n\n if action == 'overwrite' and fmt == 'set':\n module.fail_json(msg=\"overwrite cannot be used when format is \"\n \"set per junos documentation\")\n\n results = dict(changed=False)\n results['_backup'] = str(module.get_config()).strip()\n\n diff = module.load_config(src, action=action, comment=comment,\n format=fmt, commit=commit, confirm=confirm)\n\n if diff:\n results['changed'] = True\n results['diff'] = dict(prepared=diff)\n\n module.exit_json(**results)\n\n\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.junos import *\n\nif __name__ == '__main__':\n main()\n", "path": "network/junos/junos_template.py"}]}
| 2,418 | 727 |
gh_patches_debug_4975 | rasdani/github-patches | git_diff | keras-team__keras-19598 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
imdb.load_data function returns a python list instead of ndarray object
In Keras 3.2.1 the imdb.load_data function returns a Python list instead of a ndarray object as described in the function documentation.
In Keras 2.15 the train and test data are converted to a ndarray using the following statements before returning the tuple.
x_train, y_train = np.array(xs[:idx], dtype="object"), labels[:idx]
x_test, y_test = np.array(xs[idx:], dtype="object"), labels[idx:]
In Keras 3.2.1 the conversion is not applied, i.e.,
x_train, y_train = xs[:idx], labels[:idx]
x_test, y_test = xs[idx:], labels[idx:]
--- END ISSUE ---
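A small workaround sketch, assuming Keras 3.2.1 as reported: wrapping the returned lists reproduces the documented object-dtype arrays until the loader itself is fixed.
```python
import numpy as np
from keras.datasets import imdb

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=10000)

# In the affected release x_train/x_test come back as plain Python lists;
# this restores the documented ragged ndarray behaviour.
x_train = np.array(x_train, dtype="object")
x_test = np.array(x_test, dtype="object")
```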
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras/src/datasets/imdb.py`
Content:
```
1 """IMDB sentiment classification dataset."""
2
3 import json
4
5 import numpy as np
6
7 from keras.src.api_export import keras_export
8 from keras.src.utils.file_utils import get_file
9 from keras.src.utils.python_utils import remove_long_seq
10
11
12 @keras_export("keras.datasets.imdb.load_data")
13 def load_data(
14 path="imdb.npz",
15 num_words=None,
16 skip_top=0,
17 maxlen=None,
18 seed=113,
19 start_char=1,
20 oov_char=2,
21 index_from=3,
22 **kwargs,
23 ):
24 """Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/).
25
26 This is a dataset of 25,000 movies reviews from IMDB, labeled by sentiment
27 (positive/negative). Reviews have been preprocessed, and each review is
28 encoded as a list of word indexes (integers).
29 For convenience, words are indexed by overall frequency in the dataset,
30 so that for instance the integer "3" encodes the 3rd most frequent word in
31 the data. This allows for quick filtering operations such as:
32 "only consider the top 10,000 most
33 common words, but eliminate the top 20 most common words".
34
35 As a convention, "0" does not stand for a specific word, but instead is used
36 to encode the pad token.
37
38 Args:
39 path: where to cache the data (relative to `~/.keras/dataset`).
40 num_words: integer or None. Words are
41 ranked by how often they occur (in the training set) and only
42 the `num_words` most frequent words are kept. Any less frequent word
43 will appear as `oov_char` value in the sequence data. If None,
44 all words are kept. Defaults to `None`.
45 skip_top: skip the top N most frequently occurring words
46 (which may not be informative). These words will appear as
47 `oov_char` value in the dataset. When 0, no words are
48 skipped. Defaults to `0`.
49 maxlen: int or None. Maximum sequence length.
50 Any longer sequence will be truncated. None, means no truncation.
51 Defaults to `None`.
52 seed: int. Seed for reproducible data shuffling.
53 start_char: int. The start of a sequence will be marked with this
54 character. 0 is usually the padding character. Defaults to `1`.
55 oov_char: int. The out-of-vocabulary character.
56 Words that were cut out because of the `num_words` or
57 `skip_top` limits will be replaced with this character.
58 index_from: int. Index actual words with this index and higher.
59
60 Returns:
61 Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
62
63 **`x_train`, `x_test`**: lists of sequences, which are lists of indexes
64 (integers). If the num_words argument was specific, the maximum
65 possible index value is `num_words - 1`. If the `maxlen` argument was
66 specified, the largest possible sequence length is `maxlen`.
67
68 **`y_train`, `y_test`**: lists of integer labels (1 or 0).
69
70 **Note**: The 'out of vocabulary' character is only used for
71 words that were present in the training set but are not included
72 because they're not making the `num_words` cut here.
73 Words that were not seen in the training set but are in the test set
74 have simply been skipped.
75 """
76 origin_folder = (
77 "https://storage.googleapis.com/tensorflow/tf-keras-datasets/"
78 )
79 path = get_file(
80 fname=path,
81 origin=origin_folder + "imdb.npz",
82 file_hash=( # noqa: E501
83 "69664113be75683a8fe16e3ed0ab59fda8886cb3cd7ada244f7d9544e4676b9f"
84 ),
85 )
86 with np.load(path, allow_pickle=True) as f:
87 x_train, labels_train = f["x_train"], f["y_train"]
88 x_test, labels_test = f["x_test"], f["y_test"]
89
90 rng = np.random.RandomState(seed)
91 indices = np.arange(len(x_train))
92 rng.shuffle(indices)
93 x_train = x_train[indices]
94 labels_train = labels_train[indices]
95
96 indices = np.arange(len(x_test))
97 rng.shuffle(indices)
98 x_test = x_test[indices]
99 labels_test = labels_test[indices]
100
101 if start_char is not None:
102 x_train = [[start_char] + [w + index_from for w in x] for x in x_train]
103 x_test = [[start_char] + [w + index_from for w in x] for x in x_test]
104 elif index_from:
105 x_train = [[w + index_from for w in x] for x in x_train]
106 x_test = [[w + index_from for w in x] for x in x_test]
107 else:
108 x_train = [[w for w in x] for x in x_train]
109 x_test = [[w for w in x] for x in x_test]
110
111 if maxlen:
112 x_train, labels_train = remove_long_seq(maxlen, x_train, labels_train)
113 x_test, labels_test = remove_long_seq(maxlen, x_test, labels_test)
114 if not x_train or not x_test:
115 raise ValueError(
116 "After filtering for sequences shorter than maxlen="
117 f"{str(maxlen)}, no sequence was kept. Increase maxlen."
118 )
119
120 xs = x_train + x_test
121 labels = np.concatenate([labels_train, labels_test])
122
123 if not num_words:
124 num_words = max(max(x) for x in xs)
125
126 # by convention, use 2 as OOV word
127 # reserve 'index_from' (=3 by default) characters:
128 # 0 (padding), 1 (start), 2 (OOV)
129 if oov_char is not None:
130 xs = [
131 [w if (skip_top <= w < num_words) else oov_char for w in x]
132 for x in xs
133 ]
134 else:
135 xs = [[w for w in x if skip_top <= w < num_words] for x in xs]
136
137 idx = len(x_train)
138 x_train, y_train = xs[:idx], labels[:idx]
139 x_test, y_test = xs[idx:], labels[idx:]
140 return (x_train, y_train), (x_test, y_test)
141
142
143 @keras_export("keras.datasets.imdb.get_word_index")
144 def get_word_index(path="imdb_word_index.json"):
145 """Retrieves a dict mapping words to their index in the IMDB dataset.
146
147 Args:
148 path: where to cache the data (relative to `~/.keras/dataset`).
149
150 Returns:
151 The word index dictionary. Keys are word strings, values are their
152 index.
153
154 Example:
155
156 ```python
157 # Use the default parameters to keras.datasets.imdb.load_data
158 start_char = 1
159 oov_char = 2
160 index_from = 3
161 # Retrieve the training sequences.
162 (x_train, _), _ = keras.datasets.imdb.load_data(
163 start_char=start_char, oov_char=oov_char, index_from=index_from
164 )
165 # Retrieve the word index file mapping words to indices
166 word_index = keras.datasets.imdb.get_word_index()
167 # Reverse the word index to obtain a dict mapping indices to words
168 # And add `index_from` to indices to sync with `x_train`
169 inverted_word_index = dict(
170 (i + index_from, word) for (word, i) in word_index.items()
171 )
172 # Update `inverted_word_index` to include `start_char` and `oov_char`
173 inverted_word_index[start_char] = "[START]"
174 inverted_word_index[oov_char] = "[OOV]"
175 # Decode the first sequence in the dataset
176 decoded_sequence = " ".join(inverted_word_index[i] for i in x_train[0])
177 ```
178 """
179 origin_folder = (
180 "https://storage.googleapis.com/tensorflow/tf-keras-datasets/"
181 )
182 path = get_file(
183 fname=path,
184 origin=origin_folder + "imdb_word_index.json",
185 file_hash="bfafd718b763782e994055a2d397834f",
186 )
187 with open(path) as f:
188 return json.load(f)
189
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/keras/src/datasets/imdb.py b/keras/src/datasets/imdb.py
--- a/keras/src/datasets/imdb.py
+++ b/keras/src/datasets/imdb.py
@@ -135,8 +135,8 @@
xs = [[w for w in x if skip_top <= w < num_words] for x in xs]
idx = len(x_train)
- x_train, y_train = xs[:idx], labels[:idx]
- x_test, y_test = xs[idx:], labels[idx:]
+ x_train, y_train = np.array(xs[:idx], dtype="object"), labels[:idx]
+ x_test, y_test = np.array(xs[idx:], dtype="object"), labels[idx:]
return (x_train, y_train), (x_test, y_test)
|
{"golden_diff": "diff --git a/keras/src/datasets/imdb.py b/keras/src/datasets/imdb.py\n--- a/keras/src/datasets/imdb.py\n+++ b/keras/src/datasets/imdb.py\n@@ -135,8 +135,8 @@\n xs = [[w for w in x if skip_top <= w < num_words] for x in xs]\n \n idx = len(x_train)\n- x_train, y_train = xs[:idx], labels[:idx]\n- x_test, y_test = xs[idx:], labels[idx:]\n+ x_train, y_train = np.array(xs[:idx], dtype=\"object\"), labels[:idx]\n+ x_test, y_test = np.array(xs[idx:], dtype=\"object\"), labels[idx:]\n return (x_train, y_train), (x_test, y_test)\n", "issue": "imdb.load_data function returns a python list instead of ndarray object\nIn Keras 3.2.1 the imdb.load_data function returns a Python list instead of a ndarray object as described in the function documentation. \r\nIn Keras 2.15 the train and test data are converted to a ndarray using the following statements before returning the tuple.\r\nx_train, y_train = np.array(xs[:idx], dtype=\"object\"), labels[:idx]\r\nx_test, y_test = np.array(xs[idx:], dtype=\"object\"), labels[idx:]\r\n\r\nIn Keras 3.2.1 the conversion is not applied, i.e., \r\nx_train, y_train = xs[:idx], labels[:idx]\r\nx_test, y_test = xs[idx:], labels[idx:]\r\n\n", "before_files": [{"content": "\"\"\"IMDB sentiment classification dataset.\"\"\"\n\nimport json\n\nimport numpy as np\n\nfrom keras.src.api_export import keras_export\nfrom keras.src.utils.file_utils import get_file\nfrom keras.src.utils.python_utils import remove_long_seq\n\n\n@keras_export(\"keras.datasets.imdb.load_data\")\ndef load_data(\n path=\"imdb.npz\",\n num_words=None,\n skip_top=0,\n maxlen=None,\n seed=113,\n start_char=1,\n oov_char=2,\n index_from=3,\n **kwargs,\n):\n \"\"\"Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/).\n\n This is a dataset of 25,000 movies reviews from IMDB, labeled by sentiment\n (positive/negative). Reviews have been preprocessed, and each review is\n encoded as a list of word indexes (integers).\n For convenience, words are indexed by overall frequency in the dataset,\n so that for instance the integer \"3\" encodes the 3rd most frequent word in\n the data. This allows for quick filtering operations such as:\n \"only consider the top 10,000 most\n common words, but eliminate the top 20 most common words\".\n\n As a convention, \"0\" does not stand for a specific word, but instead is used\n to encode the pad token.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n num_words: integer or None. Words are\n ranked by how often they occur (in the training set) and only\n the `num_words` most frequent words are kept. Any less frequent word\n will appear as `oov_char` value in the sequence data. If None,\n all words are kept. Defaults to `None`.\n skip_top: skip the top N most frequently occurring words\n (which may not be informative). These words will appear as\n `oov_char` value in the dataset. When 0, no words are\n skipped. Defaults to `0`.\n maxlen: int or None. Maximum sequence length.\n Any longer sequence will be truncated. None, means no truncation.\n Defaults to `None`.\n seed: int. Seed for reproducible data shuffling.\n start_char: int. The start of a sequence will be marked with this\n character. 0 is usually the padding character. Defaults to `1`.\n oov_char: int. The out-of-vocabulary character.\n Words that were cut out because of the `num_words` or\n `skip_top` limits will be replaced with this character.\n index_from: int. 
Index actual words with this index and higher.\n\n Returns:\n Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **`x_train`, `x_test`**: lists of sequences, which are lists of indexes\n (integers). If the num_words argument was specific, the maximum\n possible index value is `num_words - 1`. If the `maxlen` argument was\n specified, the largest possible sequence length is `maxlen`.\n\n **`y_train`, `y_test`**: lists of integer labels (1 or 0).\n\n **Note**: The 'out of vocabulary' character is only used for\n words that were present in the training set but are not included\n because they're not making the `num_words` cut here.\n Words that were not seen in the training set but are in the test set\n have simply been skipped.\n \"\"\"\n origin_folder = (\n \"https://storage.googleapis.com/tensorflow/tf-keras-datasets/\"\n )\n path = get_file(\n fname=path,\n origin=origin_folder + \"imdb.npz\",\n file_hash=( # noqa: E501\n \"69664113be75683a8fe16e3ed0ab59fda8886cb3cd7ada244f7d9544e4676b9f\"\n ),\n )\n with np.load(path, allow_pickle=True) as f:\n x_train, labels_train = f[\"x_train\"], f[\"y_train\"]\n x_test, labels_test = f[\"x_test\"], f[\"y_test\"]\n\n rng = np.random.RandomState(seed)\n indices = np.arange(len(x_train))\n rng.shuffle(indices)\n x_train = x_train[indices]\n labels_train = labels_train[indices]\n\n indices = np.arange(len(x_test))\n rng.shuffle(indices)\n x_test = x_test[indices]\n labels_test = labels_test[indices]\n\n if start_char is not None:\n x_train = [[start_char] + [w + index_from for w in x] for x in x_train]\n x_test = [[start_char] + [w + index_from for w in x] for x in x_test]\n elif index_from:\n x_train = [[w + index_from for w in x] for x in x_train]\n x_test = [[w + index_from for w in x] for x in x_test]\n else:\n x_train = [[w for w in x] for x in x_train]\n x_test = [[w for w in x] for x in x_test]\n\n if maxlen:\n x_train, labels_train = remove_long_seq(maxlen, x_train, labels_train)\n x_test, labels_test = remove_long_seq(maxlen, x_test, labels_test)\n if not x_train or not x_test:\n raise ValueError(\n \"After filtering for sequences shorter than maxlen=\"\n f\"{str(maxlen)}, no sequence was kept. Increase maxlen.\"\n )\n\n xs = x_train + x_test\n labels = np.concatenate([labels_train, labels_test])\n\n if not num_words:\n num_words = max(max(x) for x in xs)\n\n # by convention, use 2 as OOV word\n # reserve 'index_from' (=3 by default) characters:\n # 0 (padding), 1 (start), 2 (OOV)\n if oov_char is not None:\n xs = [\n [w if (skip_top <= w < num_words) else oov_char for w in x]\n for x in xs\n ]\n else:\n xs = [[w for w in x if skip_top <= w < num_words] for x in xs]\n\n idx = len(x_train)\n x_train, y_train = xs[:idx], labels[:idx]\n x_test, y_test = xs[idx:], labels[idx:]\n return (x_train, y_train), (x_test, y_test)\n\n\n@keras_export(\"keras.datasets.imdb.get_word_index\")\ndef get_word_index(path=\"imdb_word_index.json\"):\n \"\"\"Retrieves a dict mapping words to their index in the IMDB dataset.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n\n Returns:\n The word index dictionary. 
Keys are word strings, values are their\n index.\n\n Example:\n\n ```python\n # Use the default parameters to keras.datasets.imdb.load_data\n start_char = 1\n oov_char = 2\n index_from = 3\n # Retrieve the training sequences.\n (x_train, _), _ = keras.datasets.imdb.load_data(\n start_char=start_char, oov_char=oov_char, index_from=index_from\n )\n # Retrieve the word index file mapping words to indices\n word_index = keras.datasets.imdb.get_word_index()\n # Reverse the word index to obtain a dict mapping indices to words\n # And add `index_from` to indices to sync with `x_train`\n inverted_word_index = dict(\n (i + index_from, word) for (word, i) in word_index.items()\n )\n # Update `inverted_word_index` to include `start_char` and `oov_char`\n inverted_word_index[start_char] = \"[START]\"\n inverted_word_index[oov_char] = \"[OOV]\"\n # Decode the first sequence in the dataset\n decoded_sequence = \" \".join(inverted_word_index[i] for i in x_train[0])\n ```\n \"\"\"\n origin_folder = (\n \"https://storage.googleapis.com/tensorflow/tf-keras-datasets/\"\n )\n path = get_file(\n fname=path,\n origin=origin_folder + \"imdb_word_index.json\",\n file_hash=\"bfafd718b763782e994055a2d397834f\",\n )\n with open(path) as f:\n return json.load(f)\n", "path": "keras/src/datasets/imdb.py"}], "after_files": [{"content": "\"\"\"IMDB sentiment classification dataset.\"\"\"\n\nimport json\n\nimport numpy as np\n\nfrom keras.src.api_export import keras_export\nfrom keras.src.utils.file_utils import get_file\nfrom keras.src.utils.python_utils import remove_long_seq\n\n\n@keras_export(\"keras.datasets.imdb.load_data\")\ndef load_data(\n path=\"imdb.npz\",\n num_words=None,\n skip_top=0,\n maxlen=None,\n seed=113,\n start_char=1,\n oov_char=2,\n index_from=3,\n **kwargs,\n):\n \"\"\"Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/).\n\n This is a dataset of 25,000 movies reviews from IMDB, labeled by sentiment\n (positive/negative). Reviews have been preprocessed, and each review is\n encoded as a list of word indexes (integers).\n For convenience, words are indexed by overall frequency in the dataset,\n so that for instance the integer \"3\" encodes the 3rd most frequent word in\n the data. This allows for quick filtering operations such as:\n \"only consider the top 10,000 most\n common words, but eliminate the top 20 most common words\".\n\n As a convention, \"0\" does not stand for a specific word, but instead is used\n to encode the pad token.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n num_words: integer or None. Words are\n ranked by how often they occur (in the training set) and only\n the `num_words` most frequent words are kept. Any less frequent word\n will appear as `oov_char` value in the sequence data. If None,\n all words are kept. Defaults to `None`.\n skip_top: skip the top N most frequently occurring words\n (which may not be informative). These words will appear as\n `oov_char` value in the dataset. When 0, no words are\n skipped. Defaults to `0`.\n maxlen: int or None. Maximum sequence length.\n Any longer sequence will be truncated. None, means no truncation.\n Defaults to `None`.\n seed: int. Seed for reproducible data shuffling.\n start_char: int. The start of a sequence will be marked with this\n character. 0 is usually the padding character. Defaults to `1`.\n oov_char: int. 
The out-of-vocabulary character.\n Words that were cut out because of the `num_words` or\n `skip_top` limits will be replaced with this character.\n index_from: int. Index actual words with this index and higher.\n\n Returns:\n Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **`x_train`, `x_test`**: lists of sequences, which are lists of indexes\n (integers). If the num_words argument was specific, the maximum\n possible index value is `num_words - 1`. If the `maxlen` argument was\n specified, the largest possible sequence length is `maxlen`.\n\n **`y_train`, `y_test`**: lists of integer labels (1 or 0).\n\n **Note**: The 'out of vocabulary' character is only used for\n words that were present in the training set but are not included\n because they're not making the `num_words` cut here.\n Words that were not seen in the training set but are in the test set\n have simply been skipped.\n \"\"\"\n origin_folder = (\n \"https://storage.googleapis.com/tensorflow/tf-keras-datasets/\"\n )\n path = get_file(\n fname=path,\n origin=origin_folder + \"imdb.npz\",\n file_hash=( # noqa: E501\n \"69664113be75683a8fe16e3ed0ab59fda8886cb3cd7ada244f7d9544e4676b9f\"\n ),\n )\n with np.load(path, allow_pickle=True) as f:\n x_train, labels_train = f[\"x_train\"], f[\"y_train\"]\n x_test, labels_test = f[\"x_test\"], f[\"y_test\"]\n\n rng = np.random.RandomState(seed)\n indices = np.arange(len(x_train))\n rng.shuffle(indices)\n x_train = x_train[indices]\n labels_train = labels_train[indices]\n\n indices = np.arange(len(x_test))\n rng.shuffle(indices)\n x_test = x_test[indices]\n labels_test = labels_test[indices]\n\n if start_char is not None:\n x_train = [[start_char] + [w + index_from for w in x] for x in x_train]\n x_test = [[start_char] + [w + index_from for w in x] for x in x_test]\n elif index_from:\n x_train = [[w + index_from for w in x] for x in x_train]\n x_test = [[w + index_from for w in x] for x in x_test]\n else:\n x_train = [[w for w in x] for x in x_train]\n x_test = [[w for w in x] for x in x_test]\n\n if maxlen:\n x_train, labels_train = remove_long_seq(maxlen, x_train, labels_train)\n x_test, labels_test = remove_long_seq(maxlen, x_test, labels_test)\n if not x_train or not x_test:\n raise ValueError(\n \"After filtering for sequences shorter than maxlen=\"\n f\"{str(maxlen)}, no sequence was kept. Increase maxlen.\"\n )\n\n xs = x_train + x_test\n labels = np.concatenate([labels_train, labels_test])\n\n if not num_words:\n num_words = max(max(x) for x in xs)\n\n # by convention, use 2 as OOV word\n # reserve 'index_from' (=3 by default) characters:\n # 0 (padding), 1 (start), 2 (OOV)\n if oov_char is not None:\n xs = [\n [w if (skip_top <= w < num_words) else oov_char for w in x]\n for x in xs\n ]\n else:\n xs = [[w for w in x if skip_top <= w < num_words] for x in xs]\n\n idx = len(x_train)\n x_train, y_train = np.array(xs[:idx], dtype=\"object\"), labels[:idx]\n x_test, y_test = np.array(xs[idx:], dtype=\"object\"), labels[idx:]\n return (x_train, y_train), (x_test, y_test)\n\n\n@keras_export(\"keras.datasets.imdb.get_word_index\")\ndef get_word_index(path=\"imdb_word_index.json\"):\n \"\"\"Retrieves a dict mapping words to their index in the IMDB dataset.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n\n Returns:\n The word index dictionary. 
Keys are word strings, values are their\n index.\n\n Example:\n\n ```python\n # Use the default parameters to keras.datasets.imdb.load_data\n start_char = 1\n oov_char = 2\n index_from = 3\n # Retrieve the training sequences.\n (x_train, _), _ = keras.datasets.imdb.load_data(\n start_char=start_char, oov_char=oov_char, index_from=index_from\n )\n # Retrieve the word index file mapping words to indices\n word_index = keras.datasets.imdb.get_word_index()\n # Reverse the word index to obtain a dict mapping indices to words\n # And add `index_from` to indices to sync with `x_train`\n inverted_word_index = dict(\n (i + index_from, word) for (word, i) in word_index.items()\n )\n # Update `inverted_word_index` to include `start_char` and `oov_char`\n inverted_word_index[start_char] = \"[START]\"\n inverted_word_index[oov_char] = \"[OOV]\"\n # Decode the first sequence in the dataset\n decoded_sequence = \" \".join(inverted_word_index[i] for i in x_train[0])\n ```\n \"\"\"\n origin_folder = (\n \"https://storage.googleapis.com/tensorflow/tf-keras-datasets/\"\n )\n path = get_file(\n fname=path,\n origin=origin_folder + \"imdb_word_index.json\",\n file_hash=\"bfafd718b763782e994055a2d397834f\",\n )\n with open(path) as f:\n return json.load(f)\n", "path": "keras/src/datasets/imdb.py"}]}
| 2,781 | 181 |
gh_patches_debug_34049
|
rasdani/github-patches
|
git_diff
|
huggingface__text-generation-inference-1022
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
missing 2 required positional arguments: 'top_n_tokens' and 'top_n_tokens_tensor' for the galactica model
### System Info
v1.0.3
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
having a local galactica model in a folder called `facebook-galactica-30b-gptq` won't be detected since it will fail this check https://github.com/huggingface/text-generation-inference/blob/main/server/text_generation_server/models/__init__.py#L88
I suggest making it check `if "galactica" in model_id` instead.
### Expected behavior
expected to detect a galactica model
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `server/text_generation_server/models/galactica.py`
Content:
```
1 import re
2 import torch
3 import torch.distributed
4
5 from typing import List, Optional, Type
6
7 from transformers import (
8 AutoTokenizer,
9 AutoConfig,
10 PreTrainedTokenizerBase,
11 )
12 from text_generation_server.models import CausalLM
13 from text_generation_server.models.causal_lm import CausalLMBatch
14 from text_generation_server.pb import generate_pb2
15 from text_generation_server.models.custom_modeling.opt_modeling import OPTForCausalLM
16 from text_generation_server.utils import (
17 NextTokenChooser,
18 StoppingCriteria,
19 initialize_torch_distributed,
20 weight_files,
21 Weights,
22 )
23
24 # CREDIT: Papers with code => https://github.com/paperswithcode/galai/blob/main/galai/utils.py
25
26 # we split individual characters inside special tokens like [START_DNA]
27 CUSTOM_SEQ_RE = re.compile(r"(\[START_(DNA|SMILES|I_SMILES|AMINO)])(.*?)(\[END_\2])")
28
29 # token added to implement a custom sequence tokenization. This token is added at
30 # corpus cleaning step and removed in pretokenization. The digits are added to increase the chance
31 # that they do not occur in the corpus. The digits are escaped so that the token does not appear
32 # literally in the source code in case we ever include it in the training data.
33 SPLIT_MARKER = f"SPL{1}T-TH{1}S-Pl3A5E"
34
35
36 def _insert_split_marker(m: re.Match):
37 """
38 Applies split marker based on a regex match of special tokens such as
39 [START_DNA].
40 Parameters
41 ----------
42 n : str
43 Input text to split
44 Returns
45 ----------
46 str - the text with the split token added
47 """
48 start_token, _, sequence, end_token = m.groups()
49 sequence = re.sub(r"(.)", rf"{SPLIT_MARKER}\1", sequence, flags=re.DOTALL)
50 return f"{start_token}{sequence}{SPLIT_MARKER}{end_token}"
51
52
53 def escape_custom_split_sequence(text):
54 """
55 Applies custom splitting to the text for GALILEO's tokenization
56 Parameters
57 ----------
58 text : str
59 Input text to split
60 Returns
61 ----------
62 str - the text with the split token added
63 """
64 return CUSTOM_SEQ_RE.sub(_insert_split_marker, text)
65
66
67 # END CREDIT
68
69
70 class GalacticaCausalLMBatch(CausalLMBatch):
71 @classmethod
72 def from_pb(
73 cls,
74 pb: generate_pb2.Batch,
75 tokenizer: PreTrainedTokenizerBase,
76 dtype: torch.dtype,
77 device: torch.device,
78 ) -> "GalacticaCausalLMBatch":
79 inputs = []
80 next_token_choosers = []
81 stopping_criterias = []
82 prefix_offsets = []
83 read_offsets = []
84 requests_idx_mapping = {}
85
86 # Parse batch
87 max_truncation = 0
88 padding_right_offset = 0
89 max_decode_tokens = 0
90 for i, r in enumerate(pb.requests):
91 requests_idx_mapping[r.id] = i
92 # Add escape_custom_split_sequence to the CausalLMBatch logic
93 inputs.append(escape_custom_split_sequence(r.inputs))
94 next_token_choosers.append(NextTokenChooser.from_pb(r.parameters, device))
95 stopping_criteria = StoppingCriteria.from_pb(
96 r.stopping_parameters, tokenizer
97 )
98 stopping_criterias.append(stopping_criteria)
99 max_truncation = max(max_truncation, r.truncate)
100 max_decode_tokens += stopping_criteria.max_new_tokens
101 padding_right_offset = max(
102 padding_right_offset, stopping_criteria.max_new_tokens
103 )
104
105 tokenized_inputs = tokenizer(
106 inputs,
107 return_tensors="pt",
108 padding=True,
109 return_token_type_ids=False,
110 truncation=True,
111 max_length=max_truncation,
112 ).to(device)
113 for _ in pb.requests:
114 input_len = tokenized_inputs["input_ids"].shape[1]
115 prefix_offsets.append(0)
116 read_offsets.append(input_len)
117
118 input_lengths = tokenized_inputs["attention_mask"].sum(1)
119 max_input_length = input_lengths.max()
120
121 input_ids = tokenized_inputs["input_ids"]
122 # Allocate maximum attention_mask
123 attention_mask = input_ids.new_zeros(
124 (pb.size, max_input_length + padding_right_offset)
125 )
126 # Copy tokenizer attention_mask into fully allocated attention_mask
127 attention_mask[:, :max_input_length] = tokenized_inputs["attention_mask"]
128
129 position_ids = tokenized_inputs["attention_mask"].long().cumsum(-1) - 1
130 position_ids.masked_fill_(tokenized_inputs["attention_mask"] == 0, 1)
131 all_input_ids = tokenized_inputs["input_ids"].T.split(1, dim=1)
132
133 max_tokens = len(inputs) * max_input_length + max_decode_tokens
134
135 return cls(
136 batch_id=pb.id,
137 requests=pb.requests,
138 requests_idx_mapping=requests_idx_mapping,
139 input_ids=input_ids,
140 attention_mask=attention_mask,
141 position_ids=position_ids,
142 past_key_values=None,
143 all_input_ids=list(all_input_ids),
144 input_lengths=input_lengths.tolist(),
145 prefix_offsets=prefix_offsets,
146 read_offsets=read_offsets,
147 next_token_choosers=next_token_choosers,
148 stopping_criterias=stopping_criterias,
149 max_input_length=max_input_length.item(),
150 padding_right_offset=padding_right_offset,
151 max_tokens=max_tokens,
152 )
153
154
155 class GalacticaSharded(CausalLM):
156 def __init__(
157 self,
158 model_id: str,
159 revision: Optional[str] = None,
160 quantize: Optional[str] = None,
161 dtype: Optional[torch.dtype] = None,
162 trust_remote_code: bool = False,
163 ):
164 self.process_group, rank, world_size = initialize_torch_distributed()
165 if torch.cuda.is_available():
166 device = torch.device(f"cuda:{rank}")
167 dtype = torch.float16 if dtype is None else dtype
168 else:
169 device = torch.device("cpu")
170 dtype = torch.float32
171
172 tokenizer = AutoTokenizer.from_pretrained(
173 model_id,
174 revision=revision,
175 padding_side="left",
176 truncation_side="left",
177 trust_remote_code=trust_remote_code,
178 )
179
180 config = AutoConfig.from_pretrained(
181 model_id,
182 revision=revision,
183 tp_parallel=True,
184 trust_remote_code=trust_remote_code,
185 )
186 config.quantize = quantize
187 tokenizer.pad_token_id = config.pad_token_id
188
189 torch.distributed.barrier(group=self.process_group)
190 filenames = weight_files(model_id, revision=revision, extension=".safetensors")
191 weights = Weights(
192 filenames, device=device, dtype=dtype, process_group=self.process_group
193 )
194 if config.quantize == "gptq":
195 weights._set_gptq_params(model_id)
196
197 model = OPTForCausalLM(config, weights)
198
199 torch.distributed.barrier(group=self.process_group)
200 super(CausalLM, self).__init__(
201 model=model,
202 tokenizer=tokenizer,
203 requires_padding=True,
204 dtype=dtype,
205 device=device,
206 rank=rank,
207 world_size=world_size,
208 )
209
210 @property
211 def batch_type(self) -> Type[CausalLMBatch]:
212 return GalacticaCausalLMBatch
213
214 def decode(self, generated_ids: List[int]) -> str:
215 # Do not skip special tokens as they are used for custom parsing rules of the generated text
216 return self.tokenizer.decode(
217 generated_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False
218 )
219
220 def forward(
221 self, input_ids, attention_mask, position_ids, past_key_values: Optional = None
222 ):
223 outputs = self.model.forward(
224 input_ids=input_ids,
225 attention_mask=attention_mask,
226 past_key_values=past_key_values,
227 use_cache=True,
228 )
229 return outputs.logits, outputs.past_key_values
230
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/server/text_generation_server/models/galactica.py b/server/text_generation_server/models/galactica.py
--- a/server/text_generation_server/models/galactica.py
+++ b/server/text_generation_server/models/galactica.py
@@ -80,6 +80,7 @@
next_token_choosers = []
stopping_criterias = []
prefix_offsets = []
+ top_n_tokens = []
read_offsets = []
requests_idx_mapping = {}
@@ -96,6 +97,7 @@
r.stopping_parameters, tokenizer
)
stopping_criterias.append(stopping_criteria)
+ top_n_tokens.append(r.top_n_tokens)
max_truncation = max(max_truncation, r.truncate)
max_decode_tokens += stopping_criteria.max_new_tokens
padding_right_offset = max(
@@ -129,6 +131,9 @@
position_ids = tokenized_inputs["attention_mask"].long().cumsum(-1) - 1
position_ids.masked_fill_(tokenized_inputs["attention_mask"] == 0, 1)
all_input_ids = tokenized_inputs["input_ids"].T.split(1, dim=1)
+ top_n_tokens_tensor = torch.tensor(
+ top_n_tokens, device=device, dtype=torch.int64
+ )
max_tokens = len(inputs) * max_input_length + max_decode_tokens
@@ -146,6 +151,8 @@
read_offsets=read_offsets,
next_token_choosers=next_token_choosers,
stopping_criterias=stopping_criterias,
+ top_n_tokens=top_n_tokens,
+ top_n_tokens_tensor=top_n_tokens_tensor,
max_input_length=max_input_length.item(),
padding_right_offset=padding_right_offset,
max_tokens=max_tokens,
|
{"golden_diff": "diff --git a/server/text_generation_server/models/galactica.py b/server/text_generation_server/models/galactica.py\n--- a/server/text_generation_server/models/galactica.py\n+++ b/server/text_generation_server/models/galactica.py\n@@ -80,6 +80,7 @@\n next_token_choosers = []\n stopping_criterias = []\n prefix_offsets = []\n+ top_n_tokens = []\n read_offsets = []\n requests_idx_mapping = {}\n \n@@ -96,6 +97,7 @@\n r.stopping_parameters, tokenizer\n )\n stopping_criterias.append(stopping_criteria)\n+ top_n_tokens.append(r.top_n_tokens)\n max_truncation = max(max_truncation, r.truncate)\n max_decode_tokens += stopping_criteria.max_new_tokens\n padding_right_offset = max(\n@@ -129,6 +131,9 @@\n position_ids = tokenized_inputs[\"attention_mask\"].long().cumsum(-1) - 1\n position_ids.masked_fill_(tokenized_inputs[\"attention_mask\"] == 0, 1)\n all_input_ids = tokenized_inputs[\"input_ids\"].T.split(1, dim=1)\n+ top_n_tokens_tensor = torch.tensor(\n+ top_n_tokens, device=device, dtype=torch.int64\n+ )\n \n max_tokens = len(inputs) * max_input_length + max_decode_tokens\n \n@@ -146,6 +151,8 @@\n read_offsets=read_offsets,\n next_token_choosers=next_token_choosers,\n stopping_criterias=stopping_criterias,\n+ top_n_tokens=top_n_tokens,\n+ top_n_tokens_tensor=top_n_tokens_tensor,\n max_input_length=max_input_length.item(),\n padding_right_offset=padding_right_offset,\n max_tokens=max_tokens,\n", "issue": "missing 2 required positional arguments: 'top_n_tokens' and 'top_n_tokens_tensor' for the galactica model\n### System Info\r\n\r\nv1.0.3\r\n\r\n### Information\r\n\r\n- [ ] Docker\r\n- [ ] The CLI directly\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported command\r\n- [ ] My own modifications\r\n\r\n### Reproduction\r\n\r\nhaving a local galactica model in a folder called `facebook-galactica-30b-gptq` won't be detected since it will fail this check https://github.com/huggingface/text-generation-inference/blob/main/server/text_generation_server/models/__init__.py#L88\r\n\r\nI suggest making it check `if \"galactica\" in in model_id` instead.\r\n\r\n### Expected behavior\r\n\r\nexpected to detect a galactica model\n", "before_files": [{"content": "import re\nimport torch\nimport torch.distributed\n\nfrom typing import List, Optional, Type\n\nfrom transformers import (\n AutoTokenizer,\n AutoConfig,\n PreTrainedTokenizerBase,\n)\nfrom text_generation_server.models import CausalLM\nfrom text_generation_server.models.causal_lm import CausalLMBatch\nfrom text_generation_server.pb import generate_pb2\nfrom text_generation_server.models.custom_modeling.opt_modeling import OPTForCausalLM\nfrom text_generation_server.utils import (\n NextTokenChooser,\n StoppingCriteria,\n initialize_torch_distributed,\n weight_files,\n Weights,\n)\n\n# CREDIT: Papers with code => https://github.com/paperswithcode/galai/blob/main/galai/utils.py\n\n# we split individual characters inside special tokens like [START_DNA]\nCUSTOM_SEQ_RE = re.compile(r\"(\\[START_(DNA|SMILES|I_SMILES|AMINO)])(.*?)(\\[END_\\2])\")\n\n# token added to implement a custom sequence tokenization. This token is added at\n# corpus cleaning step and removed in pretokenization. The digits are added to increase the chance\n# that they do not occur in the corpus. 
The digits are escaped so that the token does not appear\n# literally in the source code in case we ever include it in the training data.\nSPLIT_MARKER = f\"SPL{1}T-TH{1}S-Pl3A5E\"\n\n\ndef _insert_split_marker(m: re.Match):\n \"\"\"\n Applies split marker based on a regex match of special tokens such as\n [START_DNA].\n Parameters\n ----------\n n : str\n Input text to split\n Returns\n ----------\n str - the text with the split token added\n \"\"\"\n start_token, _, sequence, end_token = m.groups()\n sequence = re.sub(r\"(.)\", rf\"{SPLIT_MARKER}\\1\", sequence, flags=re.DOTALL)\n return f\"{start_token}{sequence}{SPLIT_MARKER}{end_token}\"\n\n\ndef escape_custom_split_sequence(text):\n \"\"\"\n Applies custom splitting to the text for GALILEO's tokenization\n Parameters\n ----------\n text : str\n Input text to split\n Returns\n ----------\n str - the text with the split token added\n \"\"\"\n return CUSTOM_SEQ_RE.sub(_insert_split_marker, text)\n\n\n# END CREDIT\n\n\nclass GalacticaCausalLMBatch(CausalLMBatch):\n @classmethod\n def from_pb(\n cls,\n pb: generate_pb2.Batch,\n tokenizer: PreTrainedTokenizerBase,\n dtype: torch.dtype,\n device: torch.device,\n ) -> \"GalacticaCausalLMBatch\":\n inputs = []\n next_token_choosers = []\n stopping_criterias = []\n prefix_offsets = []\n read_offsets = []\n requests_idx_mapping = {}\n\n # Parse batch\n max_truncation = 0\n padding_right_offset = 0\n max_decode_tokens = 0\n for i, r in enumerate(pb.requests):\n requests_idx_mapping[r.id] = i\n # Add escape_custom_split_sequence to the CausalLMBatch logic\n inputs.append(escape_custom_split_sequence(r.inputs))\n next_token_choosers.append(NextTokenChooser.from_pb(r.parameters, device))\n stopping_criteria = StoppingCriteria.from_pb(\n r.stopping_parameters, tokenizer\n )\n stopping_criterias.append(stopping_criteria)\n max_truncation = max(max_truncation, r.truncate)\n max_decode_tokens += stopping_criteria.max_new_tokens\n padding_right_offset = max(\n padding_right_offset, stopping_criteria.max_new_tokens\n )\n\n tokenized_inputs = tokenizer(\n inputs,\n return_tensors=\"pt\",\n padding=True,\n return_token_type_ids=False,\n truncation=True,\n max_length=max_truncation,\n ).to(device)\n for _ in pb.requests:\n input_len = tokenized_inputs[\"input_ids\"].shape[1]\n prefix_offsets.append(0)\n read_offsets.append(input_len)\n\n input_lengths = tokenized_inputs[\"attention_mask\"].sum(1)\n max_input_length = input_lengths.max()\n\n input_ids = tokenized_inputs[\"input_ids\"]\n # Allocate maximum attention_mask\n attention_mask = input_ids.new_zeros(\n (pb.size, max_input_length + padding_right_offset)\n )\n # Copy tokenizer attention_mask into fully allocated attention_mask\n attention_mask[:, :max_input_length] = tokenized_inputs[\"attention_mask\"]\n\n position_ids = tokenized_inputs[\"attention_mask\"].long().cumsum(-1) - 1\n position_ids.masked_fill_(tokenized_inputs[\"attention_mask\"] == 0, 1)\n all_input_ids = tokenized_inputs[\"input_ids\"].T.split(1, dim=1)\n\n max_tokens = len(inputs) * max_input_length + max_decode_tokens\n\n return cls(\n batch_id=pb.id,\n requests=pb.requests,\n requests_idx_mapping=requests_idx_mapping,\n input_ids=input_ids,\n attention_mask=attention_mask,\n position_ids=position_ids,\n past_key_values=None,\n all_input_ids=list(all_input_ids),\n input_lengths=input_lengths.tolist(),\n prefix_offsets=prefix_offsets,\n read_offsets=read_offsets,\n next_token_choosers=next_token_choosers,\n stopping_criterias=stopping_criterias,\n 
max_input_length=max_input_length.item(),\n padding_right_offset=padding_right_offset,\n max_tokens=max_tokens,\n )\n\n\nclass GalacticaSharded(CausalLM):\n def __init__(\n self,\n model_id: str,\n revision: Optional[str] = None,\n quantize: Optional[str] = None,\n dtype: Optional[torch.dtype] = None,\n trust_remote_code: bool = False,\n ):\n self.process_group, rank, world_size = initialize_torch_distributed()\n if torch.cuda.is_available():\n device = torch.device(f\"cuda:{rank}\")\n dtype = torch.float16 if dtype is None else dtype\n else:\n device = torch.device(\"cpu\")\n dtype = torch.float32\n\n tokenizer = AutoTokenizer.from_pretrained(\n model_id,\n revision=revision,\n padding_side=\"left\",\n truncation_side=\"left\",\n trust_remote_code=trust_remote_code,\n )\n\n config = AutoConfig.from_pretrained(\n model_id,\n revision=revision,\n tp_parallel=True,\n trust_remote_code=trust_remote_code,\n )\n config.quantize = quantize\n tokenizer.pad_token_id = config.pad_token_id\n\n torch.distributed.barrier(group=self.process_group)\n filenames = weight_files(model_id, revision=revision, extension=\".safetensors\")\n weights = Weights(\n filenames, device=device, dtype=dtype, process_group=self.process_group\n )\n if config.quantize == \"gptq\":\n weights._set_gptq_params(model_id)\n\n model = OPTForCausalLM(config, weights)\n\n torch.distributed.barrier(group=self.process_group)\n super(CausalLM, self).__init__(\n model=model,\n tokenizer=tokenizer,\n requires_padding=True,\n dtype=dtype,\n device=device,\n rank=rank,\n world_size=world_size,\n )\n\n @property\n def batch_type(self) -> Type[CausalLMBatch]:\n return GalacticaCausalLMBatch\n\n def decode(self, generated_ids: List[int]) -> str:\n # Do not skip special tokens as they are used for custom parsing rules of the generated text\n return self.tokenizer.decode(\n generated_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False\n )\n\n def forward(\n self, input_ids, attention_mask, position_ids, past_key_values: Optional = None\n ):\n outputs = self.model.forward(\n input_ids=input_ids,\n attention_mask=attention_mask,\n past_key_values=past_key_values,\n use_cache=True,\n )\n return outputs.logits, outputs.past_key_values\n", "path": "server/text_generation_server/models/galactica.py"}], "after_files": [{"content": "import re\nimport torch\nimport torch.distributed\n\nfrom typing import List, Optional, Type\n\nfrom transformers import (\n AutoTokenizer,\n AutoConfig,\n PreTrainedTokenizerBase,\n)\nfrom text_generation_server.models import CausalLM\nfrom text_generation_server.models.causal_lm import CausalLMBatch\nfrom text_generation_server.pb import generate_pb2\nfrom text_generation_server.models.custom_modeling.opt_modeling import OPTForCausalLM\nfrom text_generation_server.utils import (\n NextTokenChooser,\n StoppingCriteria,\n initialize_torch_distributed,\n weight_files,\n Weights,\n)\n\n# CREDIT: Papers with code => https://github.com/paperswithcode/galai/blob/main/galai/utils.py\n\n# we split individual characters inside special tokens like [START_DNA]\nCUSTOM_SEQ_RE = re.compile(r\"(\\[START_(DNA|SMILES|I_SMILES|AMINO)])(.*?)(\\[END_\\2])\")\n\n# token added to implement a custom sequence tokenization. This token is added at\n# corpus cleaning step and removed in pretokenization. The digits are added to increase the chance\n# that they do not occur in the corpus. 
The digits are escaped so that the token does not appear\n# literally in the source code in case we ever include it in the training data.\nSPLIT_MARKER = f\"SPL{1}T-TH{1}S-Pl3A5E\"\n\n\ndef _insert_split_marker(m: re.Match):\n \"\"\"\n Applies split marker based on a regex match of special tokens such as\n [START_DNA].\n Parameters\n ----------\n n : str\n Input text to split\n Returns\n ----------\n str - the text with the split token added\n \"\"\"\n start_token, _, sequence, end_token = m.groups()\n sequence = re.sub(r\"(.)\", rf\"{SPLIT_MARKER}\\1\", sequence, flags=re.DOTALL)\n return f\"{start_token}{sequence}{SPLIT_MARKER}{end_token}\"\n\n\ndef escape_custom_split_sequence(text):\n \"\"\"\n Applies custom splitting to the text for GALILEO's tokenization\n Parameters\n ----------\n text : str\n Input text to split\n Returns\n ----------\n str - the text with the split token added\n \"\"\"\n return CUSTOM_SEQ_RE.sub(_insert_split_marker, text)\n\n\n# END CREDIT\n\n\nclass GalacticaCausalLMBatch(CausalLMBatch):\n @classmethod\n def from_pb(\n cls,\n pb: generate_pb2.Batch,\n tokenizer: PreTrainedTokenizerBase,\n dtype: torch.dtype,\n device: torch.device,\n ) -> \"GalacticaCausalLMBatch\":\n inputs = []\n next_token_choosers = []\n stopping_criterias = []\n prefix_offsets = []\n top_n_tokens = []\n read_offsets = []\n requests_idx_mapping = {}\n\n # Parse batch\n max_truncation = 0\n padding_right_offset = 0\n max_decode_tokens = 0\n for i, r in enumerate(pb.requests):\n requests_idx_mapping[r.id] = i\n # Add escape_custom_split_sequence to the CausalLMBatch logic\n inputs.append(escape_custom_split_sequence(r.inputs))\n next_token_choosers.append(NextTokenChooser.from_pb(r.parameters, device))\n stopping_criteria = StoppingCriteria.from_pb(\n r.stopping_parameters, tokenizer\n )\n stopping_criterias.append(stopping_criteria)\n top_n_tokens.append(r.top_n_tokens)\n max_truncation = max(max_truncation, r.truncate)\n max_decode_tokens += stopping_criteria.max_new_tokens\n padding_right_offset = max(\n padding_right_offset, stopping_criteria.max_new_tokens\n )\n\n tokenized_inputs = tokenizer(\n inputs,\n return_tensors=\"pt\",\n padding=True,\n return_token_type_ids=False,\n truncation=True,\n max_length=max_truncation,\n ).to(device)\n for _ in pb.requests:\n input_len = tokenized_inputs[\"input_ids\"].shape[1]\n prefix_offsets.append(0)\n read_offsets.append(input_len)\n\n input_lengths = tokenized_inputs[\"attention_mask\"].sum(1)\n max_input_length = input_lengths.max()\n\n input_ids = tokenized_inputs[\"input_ids\"]\n # Allocate maximum attention_mask\n attention_mask = input_ids.new_zeros(\n (pb.size, max_input_length + padding_right_offset)\n )\n # Copy tokenizer attention_mask into fully allocated attention_mask\n attention_mask[:, :max_input_length] = tokenized_inputs[\"attention_mask\"]\n\n position_ids = tokenized_inputs[\"attention_mask\"].long().cumsum(-1) - 1\n position_ids.masked_fill_(tokenized_inputs[\"attention_mask\"] == 0, 1)\n all_input_ids = tokenized_inputs[\"input_ids\"].T.split(1, dim=1)\n top_n_tokens_tensor = torch.tensor(\n top_n_tokens, device=device, dtype=torch.int64\n )\n\n max_tokens = len(inputs) * max_input_length + max_decode_tokens\n\n return cls(\n batch_id=pb.id,\n requests=pb.requests,\n requests_idx_mapping=requests_idx_mapping,\n input_ids=input_ids,\n attention_mask=attention_mask,\n position_ids=position_ids,\n past_key_values=None,\n all_input_ids=list(all_input_ids),\n input_lengths=input_lengths.tolist(),\n prefix_offsets=prefix_offsets,\n 
read_offsets=read_offsets,\n next_token_choosers=next_token_choosers,\n stopping_criterias=stopping_criterias,\n top_n_tokens=top_n_tokens,\n top_n_tokens_tensor=top_n_tokens_tensor,\n max_input_length=max_input_length.item(),\n padding_right_offset=padding_right_offset,\n max_tokens=max_tokens,\n )\n\n\nclass GalacticaSharded(CausalLM):\n def __init__(\n self,\n model_id: str,\n revision: Optional[str] = None,\n quantize: Optional[str] = None,\n dtype: Optional[torch.dtype] = None,\n trust_remote_code: bool = False,\n ):\n self.process_group, rank, world_size = initialize_torch_distributed()\n if torch.cuda.is_available():\n device = torch.device(f\"cuda:{rank}\")\n dtype = torch.float16 if dtype is None else dtype\n else:\n device = torch.device(\"cpu\")\n dtype = torch.float32\n\n tokenizer = AutoTokenizer.from_pretrained(\n model_id,\n revision=revision,\n padding_side=\"left\",\n truncation_side=\"left\",\n trust_remote_code=trust_remote_code,\n )\n\n config = AutoConfig.from_pretrained(\n model_id,\n revision=revision,\n tp_parallel=True,\n trust_remote_code=trust_remote_code,\n )\n config.quantize = quantize\n tokenizer.pad_token_id = config.pad_token_id\n\n torch.distributed.barrier(group=self.process_group)\n filenames = weight_files(model_id, revision=revision, extension=\".safetensors\")\n weights = Weights(\n filenames, device=device, dtype=dtype, process_group=self.process_group\n )\n if config.quantize == \"gptq\":\n weights._set_gptq_params(model_id)\n\n model = OPTForCausalLM(config, weights)\n\n torch.distributed.barrier(group=self.process_group)\n super(CausalLM, self).__init__(\n model=model,\n tokenizer=tokenizer,\n requires_padding=True,\n dtype=dtype,\n device=device,\n rank=rank,\n world_size=world_size,\n )\n\n @property\n def batch_type(self) -> Type[CausalLMBatch]:\n return GalacticaCausalLMBatch\n\n def decode(self, generated_ids: List[int]) -> str:\n # Do not skip special tokens as they are used for custom parsing rules of the generated text\n return self.tokenizer.decode(\n generated_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False\n )\n\n def forward(\n self, input_ids, attention_mask, position_ids, past_key_values: Optional = None\n ):\n outputs = self.model.forward(\n input_ids=input_ids,\n attention_mask=attention_mask,\n past_key_values=past_key_values,\n use_cache=True,\n )\n return outputs.logits, outputs.past_key_values\n", "path": "server/text_generation_server/models/galactica.py"}]}
| 2,746 | 395 |
gh_patches_debug_10940
|
rasdani/github-patches
|
git_diff
|
ciudadanointeligente__votainteligente-portal-electoral-697
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Proposal] When it is published, it is not automatically sent to the candidates.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `popular_proposal/models.py`
Content:
```
1 # coding=utf-8
2 from __future__ import unicode_literals
3
4 from django.db import models
5 from picklefield.fields import PickledObjectField
6 from django.contrib.auth.models import User
7 from djchoices import DjangoChoices, ChoiceItem
8 from votainteligente.send_mails import send_mail
9 from django.utils.encoding import python_2_unicode_compatible
10 from django.contrib.sites.models import Site
11 from autoslug import AutoSlugField
12 from django.core.urlresolvers import reverse
13 from backend_citizen.models import Organization
14 from votainteligente.open_graph import OGPMixin
15 from elections.models import Candidate, Area
16 from django.db.models import Count
17 from django.utils.translation import ugettext_lazy as _
18 from django.conf import settings
19 from django.core.mail import mail_admins
20
21
22 class NeedingModerationManager(models.Manager):
23 def get_queryset(self):
24 qs = super(NeedingModerationManager, self).get_queryset()
25 qs = qs.filter(status=ProposalTemporaryData.Statuses.InOurSide)
26 return qs
27
28
29 class ProposalCreationMixin(object):
30 def determine_kwargs(self, **kwargs):
31 model = kwargs.pop('model_class', self.__class__)
32 for f in model._meta.fields:
33 if f.name in kwargs['data'].keys():
34 kwargs[f.name] = kwargs['data'].pop(f.name)
35 return kwargs
36
37
38 @python_2_unicode_compatible
39 class ProposalTemporaryData(models.Model, ProposalCreationMixin):
40 class Statuses(DjangoChoices):
41 InOurSide = ChoiceItem('in_our_side')
42 InTheirSide = ChoiceItem('in_their_side')
43 Rejected = ChoiceItem('rejected')
44 Accepted = ChoiceItem('accepted')
45 proposer = models.ForeignKey(User, related_name='temporary_proposals')
46 area = models.ForeignKey(Area, related_name='temporary_proposals', null=True, blank=True)
47 join_advocacy_url = models.URLField(null=True, blank=True)
48 data = PickledObjectField()
49 rejected = models.BooleanField(default=False)
50 rejected_reason = models.TextField(null=True,
51 blank=True)
52 organization = models.ForeignKey(Organization,
53 related_name='temporary_proposals',
54 null=True,
55 blank=True,
56 default=None)
57 comments = PickledObjectField()
58 status = models.CharField(max_length=16,
59 choices=Statuses.choices,
60 validators=[Statuses.validator],
61 default=Statuses.InOurSide)
62 overall_comments = models.CharField(max_length=512,
63 blank=True,
64 null=True,
65 default="")
66 created = models.DateTimeField(auto_now_add=True,
67 blank=True,
68 null=True)
69 updated = models.DateTimeField(auto_now=True,
70 blank=True,
71 null=True)
72
73 needing_moderation = NeedingModerationManager()
74 objects = models.Manager()
75
76 def save(self, *args, **kwargs):
77 creating = self.id is None
78 if not self.comments:
79 self.comments = {}
80 for key in self.data.keys():
81 if key not in self.comments.keys():
82 self.comments[key] = ''
83 return super(ProposalTemporaryData, self).save(*args, **kwargs)
84
85 def notify_new(self):
86 site = Site.objects.get_current()
87 mail_context = {
88 'area': self.area,
89 'temporary_data': self,
90 'site': site,
91 }
92 if self.proposer.email:
93 send_mail(mail_context, 'new_temporary_proposal',
94 to=[self.proposer.email])
95
96 def create_proposal(self, moderator=None):
97 self.status = ProposalTemporaryData.Statuses.Accepted
98 self.save()
99 title = self.get_title()
100 clasification = self.data.get('clasification', '')
101 org_id = self.data.pop('organization', None)
102
103 creation_kwargs = self.determine_kwargs(title=title,
104 clasification=clasification,
105 area=self.area,
106 proposer=self.proposer,
107 data=self.data,
108 temporary=self)
109 popular_proposal = PopularProposal(**creation_kwargs)
110 if org_id:
111 enrollment = self.proposer.enrollments.get(organization__id=org_id)
112 popular_proposal.organization = enrollment.organization
113 popular_proposal.save()
114 site = Site.objects.get_current()
115 mail_context = {
116 'area': self.area,
117 'temporary_data': self,
118 'moderator': moderator,
119 'site': site,
120 }
121 send_mail(mail_context, 'popular_proposal_accepted', to=[self.proposer.email])
122 return popular_proposal
123
124 def reject(self, reason, moderator=None):
125 self.rejected_reason = reason
126 self.status = ProposalTemporaryData.Statuses.Rejected
127 self.save()
128 site = Site.objects.get_current()
129 mail_context = {
130 'area': self.area,
131 'temporary_data': self,
132 'moderator': moderator,
133 'site': site,
134 }
135 send_mail(mail_context, 'popular_proposal_rejected',
136 to=[self.proposer.email])
137
138 def get_title(self):
139 return self.data.get('title', u'')
140
141 def __str__(self):
142 return self.get_title()
143
144 class ProposalsOrderedManager(models.Manager):
145 def by_likers(self, *args, **kwargs):
146 qs = self.get_queryset()
147 qs = qs.annotate(num_likers=Count('likers')).order_by('-num_likers')
148 return qs
149
150
151 @python_2_unicode_compatible
152 class PopularProposal(models.Model, OGPMixin):
153 title = models.CharField(max_length=255, default='')
154 slug = AutoSlugField(populate_from='title', unique=True)
155 proposer = models.ForeignKey(User, related_name='proposals')
156 area = models.ForeignKey(Area, related_name='proposals', null=True, blank=True)
157 join_advocacy_url = models.URLField(null=True, blank=True)
158 data = PickledObjectField()
159 created = models.DateTimeField(auto_now_add=True)
160 updated = models.DateTimeField(auto_now_add=True)
161 temporary = models.OneToOneField(ProposalTemporaryData,
162 related_name='created_proposal',
163 blank=True,
164 null=True,
165 default=None)
166 likers = models.ManyToManyField(User, through='ProposalLike')
167 organization = models.ForeignKey(Organization,
168 related_name='popular_proposals',
169 null=True)
170 background = models.TextField(null=True, blank=True, help_text=_(u"Antecedentes sobre tu propuesta"))
171 contact_details = models.TextField(null=True,
172 blank=True,
173 help_text=_(u'¿Cómo te puede contactar un candidato?'))
174 document = models.FileField(upload_to='uploads/proposal/backgrounds/%Y/%m/%d/',
175 help_text=_(u'¿Tienes algún documento para complementar tu propuesta?'),
176 null=True,
177 blank=True)
178 image = models.ImageField(upload_to='proposals/image/',
179 max_length=512,
180 null=True,
181 blank=True)
182 clasification = models.CharField(blank=True, null=True, max_length=255)
183 for_all_areas = models.BooleanField(default=False)
184
185 ogp_enabled = True
186
187 ordered = ProposalsOrderedManager()
188 objects = models.Manager()
189
190 class Meta:
191 ordering = ['for_all_areas', '-created']
192
193 def __str__(self):
194 return self.title
195
196 def get_absolute_url(self):
197 return reverse('popular_proposals:detail', kwargs={'slug': self.slug})
198
199 def save(self, *args, **kwargs):
200 creating = self.pk is None
201 super(PopularProposal, self).save(*args, **kwargs)
202 if self.pk is not None and creating:
203 self.notify_candidates_of_new()
204
205 def notify_candidates_of_new(self):
206 if not (settings.NOTIFY_CANDIDATES and settings.NOTIFY_CANDIDATES_OF_NEW_PROPOSAL):
207 return
208 template = 'notification_for_candidates_of_new_proposal'
209 context = {'proposal': self}
210 area = Area.objects.get(id=self.area.id)
211 for election in area.elections.all():
212 for candidate in election.candidates.all():
213 for contact in candidate.contacts.all():
214 context.update({'candidate': candidate})
215 send_mail(context,
216 template,
217 to=[contact.mail])
218
219 class ProposalLike(models.Model):
220 user = models.ForeignKey(User)
221 proposal = models.ForeignKey(PopularProposal)
222 created = models.DateTimeField(auto_now_add=True)
223 updated = models.DateTimeField(auto_now_add=True)
224
225 def save(self, *args, **kwargs):
226 super(ProposalLike, self).save(*args, **kwargs)
227 created = self.pk is not None
228 if created:
229 self.numerical_notification()
230
231 def numerical_notification(self):
232 the_number = ProposalLike.objects.filter(proposal=self.proposal).count()
233 if the_number in settings.WHEN_TO_NOTIFY:
234 from popular_proposal.subscriptions import YouAreAHeroNotification, ManyCitizensSupportingNotification
235 notifier = YouAreAHeroNotification(proposal=self.proposal,
236 number=the_number)
237 notifier.notify()
238 notifier = ManyCitizensSupportingNotification(proposal=self.proposal,
239 number=the_number)
240 notifier.notify()
241
242
243 class Commitment(models.Model):
244 proposal = models.ForeignKey(PopularProposal,
245 related_name='commitments')
246 candidate = models.ForeignKey(Candidate,
247 related_name='commitments')
248 detail = models.CharField(max_length=12288,
249 null=True,
250 blank=True)
251 commited = models.NullBooleanField(default=None)
252
253 def save(self, *args, **kwargs):
254 instance = super(Commitment, self).save(*args, **kwargs)
255 from popular_proposal.subscriptions import notification_trigger
256 notification_trigger('new-commitment',
257 proposal=self.proposal,
258 commitment=self)
259 return instance
260
261 def get_absolute_url(self):
262 url = reverse('popular_proposals:commitment', kwargs={'candidate_slug': self.candidate.id,
263 'proposal_slug': self.proposal.slug})
264 return url
265
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/popular_proposal/models.py b/popular_proposal/models.py
--- a/popular_proposal/models.py
+++ b/popular_proposal/models.py
@@ -196,12 +196,6 @@
def get_absolute_url(self):
return reverse('popular_proposals:detail', kwargs={'slug': self.slug})
- def save(self, *args, **kwargs):
- creating = self.pk is None
- super(PopularProposal, self).save(*args, **kwargs)
- if self.pk is not None and creating:
- self.notify_candidates_of_new()
-
def notify_candidates_of_new(self):
if not (settings.NOTIFY_CANDIDATES and settings.NOTIFY_CANDIDATES_OF_NEW_PROPOSAL):
return
|
{"golden_diff": "diff --git a/popular_proposal/models.py b/popular_proposal/models.py\n--- a/popular_proposal/models.py\n+++ b/popular_proposal/models.py\n@@ -196,12 +196,6 @@\n def get_absolute_url(self):\n return reverse('popular_proposals:detail', kwargs={'slug': self.slug})\n \n- def save(self, *args, **kwargs):\n- creating = self.pk is None\n- super(PopularProposal, self).save(*args, **kwargs)\n- if self.pk is not None and creating:\n- self.notify_candidates_of_new()\n-\n def notify_candidates_of_new(self):\n if not (settings.NOTIFY_CANDIDATES and settings.NOTIFY_CANDIDATES_OF_NEW_PROPOSAL):\n return\n", "issue": "[Propuesta] Al momento de ser publicada no se env\u00ed\u00e1 autom\u00e1ticamente a los candidatos.\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import unicode_literals\n\nfrom django.db import models\nfrom picklefield.fields import PickledObjectField\nfrom django.contrib.auth.models import User\nfrom djchoices import DjangoChoices, ChoiceItem\nfrom votainteligente.send_mails import send_mail\nfrom django.utils.encoding import python_2_unicode_compatible\nfrom django.contrib.sites.models import Site\nfrom autoslug import AutoSlugField\nfrom django.core.urlresolvers import reverse\nfrom backend_citizen.models import Organization\nfrom votainteligente.open_graph import OGPMixin\nfrom elections.models import Candidate, Area\nfrom django.db.models import Count\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.conf import settings\nfrom django.core.mail import mail_admins\n\n\nclass NeedingModerationManager(models.Manager):\n def get_queryset(self):\n qs = super(NeedingModerationManager, self).get_queryset()\n qs = qs.filter(status=ProposalTemporaryData.Statuses.InOurSide)\n return qs\n\n\nclass ProposalCreationMixin(object):\n def determine_kwargs(self, **kwargs):\n model = kwargs.pop('model_class', self.__class__)\n for f in model._meta.fields:\n if f.name in kwargs['data'].keys():\n kwargs[f.name] = kwargs['data'].pop(f.name)\n return kwargs\n\n\n@python_2_unicode_compatible\nclass ProposalTemporaryData(models.Model, ProposalCreationMixin):\n class Statuses(DjangoChoices):\n InOurSide = ChoiceItem('in_our_side')\n InTheirSide = ChoiceItem('in_their_side')\n Rejected = ChoiceItem('rejected')\n Accepted = ChoiceItem('accepted')\n proposer = models.ForeignKey(User, related_name='temporary_proposals')\n area = models.ForeignKey(Area, related_name='temporary_proposals', null=True, blank=True)\n join_advocacy_url = models.URLField(null=True, blank=True)\n data = PickledObjectField()\n rejected = models.BooleanField(default=False)\n rejected_reason = models.TextField(null=True,\n blank=True)\n organization = models.ForeignKey(Organization,\n related_name='temporary_proposals',\n null=True,\n blank=True,\n default=None)\n comments = PickledObjectField()\n status = models.CharField(max_length=16,\n choices=Statuses.choices,\n validators=[Statuses.validator],\n default=Statuses.InOurSide)\n overall_comments = models.CharField(max_length=512,\n blank=True,\n null=True,\n default=\"\")\n created = models.DateTimeField(auto_now_add=True,\n blank=True,\n null=True)\n updated = models.DateTimeField(auto_now=True,\n blank=True,\n null=True)\n\n needing_moderation = NeedingModerationManager()\n objects = models.Manager()\n\n def save(self, *args, **kwargs):\n creating = self.id is None\n if not self.comments:\n self.comments = {}\n for key in self.data.keys():\n if key not in self.comments.keys():\n self.comments[key] = ''\n return 
super(ProposalTemporaryData, self).save(*args, **kwargs)\n\n def notify_new(self):\n site = Site.objects.get_current()\n mail_context = {\n 'area': self.area,\n 'temporary_data': self,\n 'site': site,\n }\n if self.proposer.email:\n send_mail(mail_context, 'new_temporary_proposal',\n to=[self.proposer.email])\n\n def create_proposal(self, moderator=None):\n self.status = ProposalTemporaryData.Statuses.Accepted\n self.save()\n title = self.get_title()\n clasification = self.data.get('clasification', '')\n org_id = self.data.pop('organization', None)\n\n creation_kwargs = self.determine_kwargs(title=title,\n clasification=clasification,\n area=self.area,\n proposer=self.proposer,\n data=self.data,\n temporary=self)\n popular_proposal = PopularProposal(**creation_kwargs)\n if org_id:\n enrollment = self.proposer.enrollments.get(organization__id=org_id)\n popular_proposal.organization = enrollment.organization\n popular_proposal.save()\n site = Site.objects.get_current()\n mail_context = {\n 'area': self.area,\n 'temporary_data': self,\n 'moderator': moderator,\n 'site': site,\n }\n send_mail(mail_context, 'popular_proposal_accepted', to=[self.proposer.email])\n return popular_proposal\n\n def reject(self, reason, moderator=None):\n self.rejected_reason = reason\n self.status = ProposalTemporaryData.Statuses.Rejected\n self.save()\n site = Site.objects.get_current()\n mail_context = {\n 'area': self.area,\n 'temporary_data': self,\n 'moderator': moderator,\n 'site': site,\n }\n send_mail(mail_context, 'popular_proposal_rejected',\n to=[self.proposer.email])\n\n def get_title(self):\n return self.data.get('title', u'')\n\n def __str__(self):\n return self.get_title()\n\nclass ProposalsOrderedManager(models.Manager):\n def by_likers(self, *args, **kwargs):\n qs = self.get_queryset()\n qs = qs.annotate(num_likers=Count('likers')).order_by('-num_likers')\n return qs\n\n\n@python_2_unicode_compatible\nclass PopularProposal(models.Model, OGPMixin):\n title = models.CharField(max_length=255, default='')\n slug = AutoSlugField(populate_from='title', unique=True)\n proposer = models.ForeignKey(User, related_name='proposals')\n area = models.ForeignKey(Area, related_name='proposals', null=True, blank=True)\n join_advocacy_url = models.URLField(null=True, blank=True)\n data = PickledObjectField()\n created = models.DateTimeField(auto_now_add=True)\n updated = models.DateTimeField(auto_now_add=True)\n temporary = models.OneToOneField(ProposalTemporaryData,\n related_name='created_proposal',\n blank=True,\n null=True,\n default=None)\n likers = models.ManyToManyField(User, through='ProposalLike')\n organization = models.ForeignKey(Organization,\n related_name='popular_proposals',\n null=True)\n background = models.TextField(null=True, blank=True, help_text=_(u\"Antecedentes sobre tu propuesta\"))\n contact_details = models.TextField(null=True,\n blank=True,\n help_text=_(u'\u00bfC\u00f3mo te puede contactar un candidato?'))\n document = models.FileField(upload_to='uploads/proposal/backgrounds/%Y/%m/%d/',\n help_text=_(u'\u00bfTienes alg\u00fan documento para complementar tu propuesta?'),\n null=True,\n blank=True)\n image = models.ImageField(upload_to='proposals/image/',\n max_length=512,\n null=True,\n blank=True)\n clasification = models.CharField(blank=True, null=True, max_length=255)\n for_all_areas = models.BooleanField(default=False)\n\n ogp_enabled = True\n\n ordered = ProposalsOrderedManager()\n objects = models.Manager()\n\n class Meta:\n ordering = ['for_all_areas', '-created']\n\n def 
__str__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse('popular_proposals:detail', kwargs={'slug': self.slug})\n\n def save(self, *args, **kwargs):\n creating = self.pk is None\n super(PopularProposal, self).save(*args, **kwargs)\n if self.pk is not None and creating:\n self.notify_candidates_of_new()\n\n def notify_candidates_of_new(self):\n if not (settings.NOTIFY_CANDIDATES and settings.NOTIFY_CANDIDATES_OF_NEW_PROPOSAL):\n return\n template = 'notification_for_candidates_of_new_proposal'\n context = {'proposal': self}\n area = Area.objects.get(id=self.area.id)\n for election in area.elections.all():\n for candidate in election.candidates.all():\n for contact in candidate.contacts.all():\n context.update({'candidate': candidate})\n send_mail(context,\n template,\n to=[contact.mail])\n\nclass ProposalLike(models.Model):\n user = models.ForeignKey(User)\n proposal = models.ForeignKey(PopularProposal)\n created = models.DateTimeField(auto_now_add=True)\n updated = models.DateTimeField(auto_now_add=True)\n\n def save(self, *args, **kwargs):\n super(ProposalLike, self).save(*args, **kwargs)\n created = self.pk is not None\n if created:\n self.numerical_notification()\n\n def numerical_notification(self):\n the_number = ProposalLike.objects.filter(proposal=self.proposal).count()\n if the_number in settings.WHEN_TO_NOTIFY:\n from popular_proposal.subscriptions import YouAreAHeroNotification, ManyCitizensSupportingNotification\n notifier = YouAreAHeroNotification(proposal=self.proposal,\n number=the_number)\n notifier.notify()\n notifier = ManyCitizensSupportingNotification(proposal=self.proposal,\n number=the_number)\n notifier.notify()\n\n\nclass Commitment(models.Model):\n proposal = models.ForeignKey(PopularProposal,\n related_name='commitments')\n candidate = models.ForeignKey(Candidate,\n related_name='commitments')\n detail = models.CharField(max_length=12288,\n null=True,\n blank=True)\n commited = models.NullBooleanField(default=None)\n\n def save(self, *args, **kwargs):\n instance = super(Commitment, self).save(*args, **kwargs)\n from popular_proposal.subscriptions import notification_trigger\n notification_trigger('new-commitment',\n proposal=self.proposal,\n commitment=self)\n return instance\n\n def get_absolute_url(self):\n url = reverse('popular_proposals:commitment', kwargs={'candidate_slug': self.candidate.id,\n 'proposal_slug': self.proposal.slug})\n return url\n", "path": "popular_proposal/models.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import unicode_literals\n\nfrom django.db import models\nfrom picklefield.fields import PickledObjectField\nfrom django.contrib.auth.models import User\nfrom djchoices import DjangoChoices, ChoiceItem\nfrom votainteligente.send_mails import send_mail\nfrom django.utils.encoding import python_2_unicode_compatible\nfrom django.contrib.sites.models import Site\nfrom autoslug import AutoSlugField\nfrom django.core.urlresolvers import reverse\nfrom backend_citizen.models import Organization\nfrom votainteligente.open_graph import OGPMixin\nfrom elections.models import Candidate, Area\nfrom django.db.models import Count\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.conf import settings\nfrom django.core.mail import mail_admins\n\n\nclass NeedingModerationManager(models.Manager):\n def get_queryset(self):\n qs = super(NeedingModerationManager, self).get_queryset()\n qs = qs.filter(status=ProposalTemporaryData.Statuses.InOurSide)\n return qs\n\n\nclass 
ProposalCreationMixin(object):\n def determine_kwargs(self, **kwargs):\n model = kwargs.pop('model_class', self.__class__)\n for f in model._meta.fields:\n if f.name in kwargs['data'].keys():\n kwargs[f.name] = kwargs['data'].pop(f.name)\n return kwargs\n\n\n@python_2_unicode_compatible\nclass ProposalTemporaryData(models.Model, ProposalCreationMixin):\n class Statuses(DjangoChoices):\n InOurSide = ChoiceItem('in_our_side')\n InTheirSide = ChoiceItem('in_their_side')\n Rejected = ChoiceItem('rejected')\n Accepted = ChoiceItem('accepted')\n proposer = models.ForeignKey(User, related_name='temporary_proposals')\n area = models.ForeignKey(Area, related_name='temporary_proposals', null=True, blank=True)\n join_advocacy_url = models.URLField(null=True, blank=True)\n data = PickledObjectField()\n rejected = models.BooleanField(default=False)\n rejected_reason = models.TextField(null=True,\n blank=True)\n organization = models.ForeignKey(Organization,\n related_name='temporary_proposals',\n null=True,\n blank=True,\n default=None)\n comments = PickledObjectField()\n status = models.CharField(max_length=16,\n choices=Statuses.choices,\n validators=[Statuses.validator],\n default=Statuses.InOurSide)\n overall_comments = models.CharField(max_length=512,\n blank=True,\n null=True,\n default=\"\")\n created = models.DateTimeField(auto_now_add=True,\n blank=True,\n null=True)\n updated = models.DateTimeField(auto_now=True,\n blank=True,\n null=True)\n\n needing_moderation = NeedingModerationManager()\n objects = models.Manager()\n\n def save(self, *args, **kwargs):\n creating = self.id is None\n if not self.comments:\n self.comments = {}\n for key in self.data.keys():\n if key not in self.comments.keys():\n self.comments[key] = ''\n return super(ProposalTemporaryData, self).save(*args, **kwargs)\n\n def notify_new(self):\n site = Site.objects.get_current()\n mail_context = {\n 'area': self.area,\n 'temporary_data': self,\n 'site': site,\n }\n if self.proposer.email:\n send_mail(mail_context, 'new_temporary_proposal',\n to=[self.proposer.email])\n\n def create_proposal(self, moderator=None):\n self.status = ProposalTemporaryData.Statuses.Accepted\n self.save()\n title = self.get_title()\n clasification = self.data.get('clasification', '')\n org_id = self.data.pop('organization', None)\n\n creation_kwargs = self.determine_kwargs(title=title,\n clasification=clasification,\n area=self.area,\n proposer=self.proposer,\n data=self.data,\n temporary=self)\n popular_proposal = PopularProposal(**creation_kwargs)\n if org_id:\n enrollment = self.proposer.enrollments.get(organization__id=org_id)\n popular_proposal.organization = enrollment.organization\n popular_proposal.save()\n site = Site.objects.get_current()\n mail_context = {\n 'area': self.area,\n 'temporary_data': self,\n 'moderator': moderator,\n 'site': site,\n }\n send_mail(mail_context, 'popular_proposal_accepted', to=[self.proposer.email])\n return popular_proposal\n\n def reject(self, reason, moderator=None):\n self.rejected_reason = reason\n self.status = ProposalTemporaryData.Statuses.Rejected\n self.save()\n site = Site.objects.get_current()\n mail_context = {\n 'area': self.area,\n 'temporary_data': self,\n 'moderator': moderator,\n 'site': site,\n }\n send_mail(mail_context, 'popular_proposal_rejected',\n to=[self.proposer.email])\n\n def get_title(self):\n return self.data.get('title', u'')\n\n def __str__(self):\n return self.get_title()\n\nclass ProposalsOrderedManager(models.Manager):\n def by_likers(self, *args, **kwargs):\n qs = 
self.get_queryset()\n qs = qs.annotate(num_likers=Count('likers')).order_by('-num_likers')\n return qs\n\n\n@python_2_unicode_compatible\nclass PopularProposal(models.Model, OGPMixin):\n title = models.CharField(max_length=255, default='')\n slug = AutoSlugField(populate_from='title', unique=True)\n proposer = models.ForeignKey(User, related_name='proposals')\n area = models.ForeignKey(Area, related_name='proposals', null=True, blank=True)\n join_advocacy_url = models.URLField(null=True, blank=True)\n data = PickledObjectField()\n created = models.DateTimeField(auto_now_add=True)\n updated = models.DateTimeField(auto_now_add=True)\n temporary = models.OneToOneField(ProposalTemporaryData,\n related_name='created_proposal',\n blank=True,\n null=True,\n default=None)\n likers = models.ManyToManyField(User, through='ProposalLike')\n organization = models.ForeignKey(Organization,\n related_name='popular_proposals',\n null=True)\n background = models.TextField(null=True, blank=True, help_text=_(u\"Antecedentes sobre tu propuesta\"))\n contact_details = models.TextField(null=True,\n blank=True,\n help_text=_(u'\u00bfC\u00f3mo te puede contactar un candidato?'))\n document = models.FileField(upload_to='uploads/proposal/backgrounds/%Y/%m/%d/',\n help_text=_(u'\u00bfTienes alg\u00fan documento para complementar tu propuesta?'),\n null=True,\n blank=True)\n image = models.ImageField(upload_to='proposals/image/',\n max_length=512,\n null=True,\n blank=True)\n clasification = models.CharField(blank=True, null=True, max_length=255)\n for_all_areas = models.BooleanField(default=False)\n\n ogp_enabled = True\n\n ordered = ProposalsOrderedManager()\n objects = models.Manager()\n\n class Meta:\n ordering = ['for_all_areas', '-created']\n\n def __str__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse('popular_proposals:detail', kwargs={'slug': self.slug})\n\n def notify_candidates_of_new(self):\n if not (settings.NOTIFY_CANDIDATES and settings.NOTIFY_CANDIDATES_OF_NEW_PROPOSAL):\n return\n template = 'notification_for_candidates_of_new_proposal'\n context = {'proposal': self}\n area = Area.objects.get(id=self.area.id)\n for election in area.elections.all():\n for candidate in election.candidates.all():\n for contact in candidate.contacts.all():\n context.update({'candidate': candidate})\n send_mail(context,\n template,\n to=[contact.mail])\n\nclass ProposalLike(models.Model):\n user = models.ForeignKey(User)\n proposal = models.ForeignKey(PopularProposal)\n created = models.DateTimeField(auto_now_add=True)\n updated = models.DateTimeField(auto_now_add=True)\n\n def save(self, *args, **kwargs):\n super(ProposalLike, self).save(*args, **kwargs)\n created = self.pk is not None\n if created:\n self.numerical_notification()\n\n def numerical_notification(self):\n the_number = ProposalLike.objects.filter(proposal=self.proposal).count()\n if the_number in settings.WHEN_TO_NOTIFY:\n from popular_proposal.subscriptions import YouAreAHeroNotification, ManyCitizensSupportingNotification\n notifier = YouAreAHeroNotification(proposal=self.proposal,\n number=the_number)\n notifier.notify()\n notifier = ManyCitizensSupportingNotification(proposal=self.proposal,\n number=the_number)\n notifier.notify()\n\n\nclass Commitment(models.Model):\n proposal = models.ForeignKey(PopularProposal,\n related_name='commitments')\n candidate = models.ForeignKey(Candidate,\n related_name='commitments')\n detail = models.CharField(max_length=12288,\n null=True,\n blank=True)\n commited = 
models.NullBooleanField(default=None)\n\n def save(self, *args, **kwargs):\n instance = super(Commitment, self).save(*args, **kwargs)\n from popular_proposal.subscriptions import notification_trigger\n notification_trigger('new-commitment',\n proposal=self.proposal,\n commitment=self)\n return instance\n\n def get_absolute_url(self):\n url = reverse('popular_proposals:commitment', kwargs={'candidate_slug': self.candidate.id,\n 'proposal_slug': self.proposal.slug})\n return url\n", "path": "popular_proposal/models.py"}]}
| 3,065 | 170 |
gh_patches_debug_38824
|
rasdani/github-patches
|
git_diff
|
liqd__a4-opin-399
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use Generic Relation everywhere where generic fks are used
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/documents/serializers.py`
Content:
```
1 from rest_framework import serializers
2
3 from .models import Document, Paragraph
4
5
6 class ParagraphSerializer(serializers.Serializer):
7 id = serializers.IntegerField(required=False)
8 name = serializers.CharField(
9 required=False,
10 max_length=Paragraph._meta.get_field('name').max_length
11 )
12 weight = serializers.IntegerField()
13 text = serializers.CharField()
14
15
16 class DocumentSerializer(serializers.ModelSerializer):
17 paragraphs = ParagraphSerializer(many=True, partial=True)
18
19 class Meta:
20 model = Document
21 exclude = ('creator',)
22
23 def create(self, validated_data):
24 paragraphs = validated_data.pop('paragraphs')
25 user = self.context['request'].user
26 document = Document.objects.create(creator=user, **validated_data)
27
28 for paragraph in paragraphs:
29 Paragraph.objects.create(document=document, **paragraph)
30
31 return document
32
33 def update(self, instance, validated_data):
34 instance.name = validated_data['name']
35 instance.save()
36 paragraphs = validated_data.pop('paragraphs')
37
38 paragraph_ids = [item['id'] for item in paragraphs if 'id' in item]
39 instance.paragraphs.exclude(id__in=paragraph_ids).delete()
40
41 for paragraph in paragraphs:
42 paragraph['document'] = instance
43 if 'id' in paragraph:
44 instance.paragraphs.filter(id=paragraph['id'])\
45 .update(**paragraph)
46 else:
47 instance.paragraphs.create(**paragraph)
48
49 return instance
50
```
Path: `euth/documents/models.py`
Content:
```
1 from ckeditor.fields import RichTextField
2 from django.contrib.contenttypes.models import ContentType
3 from django.core.exceptions import ObjectDoesNotExist, ValidationError
4 from django.db import models
5 from django.utils.functional import cached_property
6 from django.utils.translation import ugettext_lazy as _
7
8 from contrib.transforms import html_transforms
9 from euth.comments import models as comment_models
10 from euth.contrib import base_models
11 from euth.modules import models as module_models
12
13
14 class Document(module_models.Item):
15 name = models.CharField(max_length=120)
16
17 def __str__(self):
18 return "{}_document_{}".format(str(self.module), self.pk)
19
20 def clean(self, *args, **kwargs):
21 if not self.pk:
22 try:
23 Document.objects.get(module=self.module)
24 raise ValidationError(
25 _('Document for that module already exists'))
26 except ObjectDoesNotExist:
27 super().clean(*args, **kwargs)
28 super().clean(*args, **kwargs)
29
30 @cached_property
31 def paragraphs_sorted(self):
32 return self.paragraphs.all().order_by('weight')
33
34 @cached_property
35 def comments(self):
36 contenttype = ContentType.objects.get_for_model(self)
37 pk = self.id
38 comments = comment_models.Comment.objects.all().filter(
39 content_type=contenttype, object_pk=pk)
40 return comments
41
42
43 class Paragraph(base_models.TimeStampedModel):
44 name = models.CharField(max_length=120, blank=True)
45 text = RichTextField()
46 weight = models.PositiveIntegerField()
47 document = models.ForeignKey(Document,
48 on_delete=models.CASCADE,
49 related_name='paragraphs')
50
51 class Meta:
52 ordering = ('weight',)
53
54 def __str__(self):
55 return "{}_paragraph_{}".format(str(self.document), self.weight)
56
57 def save(self, *args, **kwargs):
58 self.text = html_transforms.clean_html_field(
59 self.text)
60 super().save(*args, **kwargs)
61
62 @cached_property
63 def comments(self):
64 contenttype = ContentType.objects.get_for_model(self)
65 pk = self.id
66 comments = comment_models.Comment.objects.all().filter(
67 content_type=contenttype, object_pk=pk)
68 return comments
69
70 @cached_property
71 def project(self):
72 return self.document.project
73
```
Path: `euth/documents/phases.py`
Content:
```
1 from django.utils.translation import ugettext_lazy as _
2
3 from euth import phases
4
5 from . import apps, models, views
6
7
8 class CreateDocumentPhase(phases.PhaseContent):
9 """
10 Allows no interaction for participants, only
11 creation for moderators.
12 """
13 app = apps.DocumentConfig.label
14 phase = 'create_document'
15 view = views.DocumentCreateView
16 weight = 30
17
18 name = _('Create document phase')
19 module_name = _('commenting text')
20 description = _('Create text for the project.')
21
22 features = {}
23
24
25 phases.content.register(CreateDocumentPhase())
26
27
28 class CommentPhase(phases.PhaseContent):
29 """
30 Allows only commenting of paragraphs.
31 """
32 app = apps.DocumentConfig.label
33 phase = 'comment'
34 view = views.DocumentDetailView
35 weight = 40
36
37 name = _('Comment phase')
38 module_name = _('commenting text')
39 description = _('Collect comments for the text.')
40
41 features = {
42 'comment': (models.Paragraph,),
43 }
44
45
46 phases.content.register(CommentPhase())
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/euth/documents/models.py b/euth/documents/models.py
--- a/euth/documents/models.py
+++ b/euth/documents/models.py
@@ -1,5 +1,5 @@
from ckeditor.fields import RichTextField
-from django.contrib.contenttypes.models import ContentType
+from django.contrib.contenttypes.fields import GenericRelation
from django.core.exceptions import ObjectDoesNotExist, ValidationError
from django.db import models
from django.utils.functional import cached_property
@@ -27,18 +27,6 @@
super().clean(*args, **kwargs)
super().clean(*args, **kwargs)
- @cached_property
- def paragraphs_sorted(self):
- return self.paragraphs.all().order_by('weight')
-
- @cached_property
- def comments(self):
- contenttype = ContentType.objects.get_for_model(self)
- pk = self.id
- comments = comment_models.Comment.objects.all().filter(
- content_type=contenttype, object_pk=pk)
- return comments
-
class Paragraph(base_models.TimeStampedModel):
name = models.CharField(max_length=120, blank=True)
@@ -47,6 +35,9 @@
document = models.ForeignKey(Document,
on_delete=models.CASCADE,
related_name='paragraphs')
+ comments = GenericRelation(comment_models.Comment,
+ related_query_name='paragraph',
+ object_id_field='object_pk')
class Meta:
ordering = ('weight',)
@@ -59,14 +50,6 @@
self.text)
super().save(*args, **kwargs)
- @cached_property
- def comments(self):
- contenttype = ContentType.objects.get_for_model(self)
- pk = self.id
- comments = comment_models.Comment.objects.all().filter(
- content_type=contenttype, object_pk=pk)
- return comments
-
@cached_property
def project(self):
return self.document.project
diff --git a/euth/documents/phases.py b/euth/documents/phases.py
--- a/euth/documents/phases.py
+++ b/euth/documents/phases.py
@@ -39,7 +39,7 @@
description = _('Collect comments for the text.')
features = {
- 'comment': (models.Paragraph,),
+ 'comment': (models.Paragraph, models.Document),
}
diff --git a/euth/documents/serializers.py b/euth/documents/serializers.py
--- a/euth/documents/serializers.py
+++ b/euth/documents/serializers.py
@@ -7,6 +7,7 @@
id = serializers.IntegerField(required=False)
name = serializers.CharField(
required=False,
+ allow_blank=True,
max_length=Paragraph._meta.get_field('name').max_length
)
weight = serializers.IntegerField()
|
{"golden_diff": "diff --git a/euth/documents/models.py b/euth/documents/models.py\n--- a/euth/documents/models.py\n+++ b/euth/documents/models.py\n@@ -1,5 +1,5 @@\n from ckeditor.fields import RichTextField\n-from django.contrib.contenttypes.models import ContentType\n+from django.contrib.contenttypes.fields import GenericRelation\n from django.core.exceptions import ObjectDoesNotExist, ValidationError\n from django.db import models\n from django.utils.functional import cached_property\n@@ -27,18 +27,6 @@\n super().clean(*args, **kwargs)\n super().clean(*args, **kwargs)\n \n- @cached_property\n- def paragraphs_sorted(self):\n- return self.paragraphs.all().order_by('weight')\n-\n- @cached_property\n- def comments(self):\n- contenttype = ContentType.objects.get_for_model(self)\n- pk = self.id\n- comments = comment_models.Comment.objects.all().filter(\n- content_type=contenttype, object_pk=pk)\n- return comments\n-\n \n class Paragraph(base_models.TimeStampedModel):\n name = models.CharField(max_length=120, blank=True)\n@@ -47,6 +35,9 @@\n document = models.ForeignKey(Document,\n on_delete=models.CASCADE,\n related_name='paragraphs')\n+ comments = GenericRelation(comment_models.Comment,\n+ related_query_name='paragraph',\n+ object_id_field='object_pk')\n \n class Meta:\n ordering = ('weight',)\n@@ -59,14 +50,6 @@\n self.text)\n super().save(*args, **kwargs)\n \n- @cached_property\n- def comments(self):\n- contenttype = ContentType.objects.get_for_model(self)\n- pk = self.id\n- comments = comment_models.Comment.objects.all().filter(\n- content_type=contenttype, object_pk=pk)\n- return comments\n-\n @cached_property\n def project(self):\n return self.document.project\ndiff --git a/euth/documents/phases.py b/euth/documents/phases.py\n--- a/euth/documents/phases.py\n+++ b/euth/documents/phases.py\n@@ -39,7 +39,7 @@\n description = _('Collect comments for the text.')\n \n features = {\n- 'comment': (models.Paragraph,),\n+ 'comment': (models.Paragraph, models.Document),\n }\n \n \ndiff --git a/euth/documents/serializers.py b/euth/documents/serializers.py\n--- a/euth/documents/serializers.py\n+++ b/euth/documents/serializers.py\n@@ -7,6 +7,7 @@\n id = serializers.IntegerField(required=False)\n name = serializers.CharField(\n required=False,\n+ allow_blank=True,\n max_length=Paragraph._meta.get_field('name').max_length\n )\n weight = serializers.IntegerField()\n", "issue": "Use Generic Relation everywhere where generic fks are used\n\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom .models import Document, Paragraph\n\n\nclass ParagraphSerializer(serializers.Serializer):\n id = serializers.IntegerField(required=False)\n name = serializers.CharField(\n required=False,\n max_length=Paragraph._meta.get_field('name').max_length\n )\n weight = serializers.IntegerField()\n text = serializers.CharField()\n\n\nclass DocumentSerializer(serializers.ModelSerializer):\n paragraphs = ParagraphSerializer(many=True, partial=True)\n\n class Meta:\n model = Document\n exclude = ('creator',)\n\n def create(self, validated_data):\n paragraphs = validated_data.pop('paragraphs')\n user = self.context['request'].user\n document = Document.objects.create(creator=user, **validated_data)\n\n for paragraph in paragraphs:\n Paragraph.objects.create(document=document, **paragraph)\n\n return document\n\n def update(self, instance, validated_data):\n instance.name = validated_data['name']\n instance.save()\n paragraphs = validated_data.pop('paragraphs')\n\n paragraph_ids = [item['id'] for item in 
paragraphs if 'id' in item]\n instance.paragraphs.exclude(id__in=paragraph_ids).delete()\n\n for paragraph in paragraphs:\n paragraph['document'] = instance\n if 'id' in paragraph:\n instance.paragraphs.filter(id=paragraph['id'])\\\n .update(**paragraph)\n else:\n instance.paragraphs.create(**paragraph)\n\n return instance\n", "path": "euth/documents/serializers.py"}, {"content": "from ckeditor.fields import RichTextField\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.db import models\nfrom django.utils.functional import cached_property\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom contrib.transforms import html_transforms\nfrom euth.comments import models as comment_models\nfrom euth.contrib import base_models\nfrom euth.modules import models as module_models\n\n\nclass Document(module_models.Item):\n name = models.CharField(max_length=120)\n\n def __str__(self):\n return \"{}_document_{}\".format(str(self.module), self.pk)\n\n def clean(self, *args, **kwargs):\n if not self.pk:\n try:\n Document.objects.get(module=self.module)\n raise ValidationError(\n _('Document for that module already exists'))\n except ObjectDoesNotExist:\n super().clean(*args, **kwargs)\n super().clean(*args, **kwargs)\n\n @cached_property\n def paragraphs_sorted(self):\n return self.paragraphs.all().order_by('weight')\n\n @cached_property\n def comments(self):\n contenttype = ContentType.objects.get_for_model(self)\n pk = self.id\n comments = comment_models.Comment.objects.all().filter(\n content_type=contenttype, object_pk=pk)\n return comments\n\n\nclass Paragraph(base_models.TimeStampedModel):\n name = models.CharField(max_length=120, blank=True)\n text = RichTextField()\n weight = models.PositiveIntegerField()\n document = models.ForeignKey(Document,\n on_delete=models.CASCADE,\n related_name='paragraphs')\n\n class Meta:\n ordering = ('weight',)\n\n def __str__(self):\n return \"{}_paragraph_{}\".format(str(self.document), self.weight)\n\n def save(self, *args, **kwargs):\n self.text = html_transforms.clean_html_field(\n self.text)\n super().save(*args, **kwargs)\n\n @cached_property\n def comments(self):\n contenttype = ContentType.objects.get_for_model(self)\n pk = self.id\n comments = comment_models.Comment.objects.all().filter(\n content_type=contenttype, object_pk=pk)\n return comments\n\n @cached_property\n def project(self):\n return self.document.project\n", "path": "euth/documents/models.py"}, {"content": "from django.utils.translation import ugettext_lazy as _\n\nfrom euth import phases\n\nfrom . 
import apps, models, views\n\n\nclass CreateDocumentPhase(phases.PhaseContent):\n \"\"\"\n Allows no interaction for participants, only\n creation for moderators.\n \"\"\"\n app = apps.DocumentConfig.label\n phase = 'create_document'\n view = views.DocumentCreateView\n weight = 30\n\n name = _('Create document phase')\n module_name = _('commenting text')\n description = _('Create text for the project.')\n\n features = {}\n\n\nphases.content.register(CreateDocumentPhase())\n\n\nclass CommentPhase(phases.PhaseContent):\n \"\"\"\n Allows only commenting of paragraphs.\n \"\"\"\n app = apps.DocumentConfig.label\n phase = 'comment'\n view = views.DocumentDetailView\n weight = 40\n\n name = _('Comment phase')\n module_name = _('commenting text')\n description = _('Collect comments for the text.')\n\n features = {\n 'comment': (models.Paragraph,),\n }\n\n\nphases.content.register(CommentPhase())\n", "path": "euth/documents/phases.py"}], "after_files": [{"content": "from rest_framework import serializers\n\nfrom .models import Document, Paragraph\n\n\nclass ParagraphSerializer(serializers.Serializer):\n id = serializers.IntegerField(required=False)\n name = serializers.CharField(\n required=False,\n allow_blank=True,\n max_length=Paragraph._meta.get_field('name').max_length\n )\n weight = serializers.IntegerField()\n text = serializers.CharField()\n\n\nclass DocumentSerializer(serializers.ModelSerializer):\n paragraphs = ParagraphSerializer(many=True, partial=True)\n\n class Meta:\n model = Document\n exclude = ('creator',)\n\n def create(self, validated_data):\n paragraphs = validated_data.pop('paragraphs')\n user = self.context['request'].user\n document = Document.objects.create(creator=user, **validated_data)\n\n for paragraph in paragraphs:\n Paragraph.objects.create(document=document, **paragraph)\n\n return document\n\n def update(self, instance, validated_data):\n instance.name = validated_data['name']\n instance.save()\n paragraphs = validated_data.pop('paragraphs')\n\n paragraph_ids = [item['id'] for item in paragraphs if 'id' in item]\n instance.paragraphs.exclude(id__in=paragraph_ids).delete()\n\n for paragraph in paragraphs:\n paragraph['document'] = instance\n if 'id' in paragraph:\n instance.paragraphs.filter(id=paragraph['id'])\\\n .update(**paragraph)\n else:\n instance.paragraphs.create(**paragraph)\n\n return instance\n", "path": "euth/documents/serializers.py"}, {"content": "from ckeditor.fields import RichTextField\nfrom django.contrib.contenttypes.fields import GenericRelation\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.db import models\nfrom django.utils.functional import cached_property\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom contrib.transforms import html_transforms\nfrom euth.comments import models as comment_models\nfrom euth.contrib import base_models\nfrom euth.modules import models as module_models\n\n\nclass Document(module_models.Item):\n name = models.CharField(max_length=120)\n\n def __str__(self):\n return \"{}_document_{}\".format(str(self.module), self.pk)\n\n def clean(self, *args, **kwargs):\n if not self.pk:\n try:\n Document.objects.get(module=self.module)\n raise ValidationError(\n _('Document for that module already exists'))\n except ObjectDoesNotExist:\n super().clean(*args, **kwargs)\n super().clean(*args, **kwargs)\n\n\nclass Paragraph(base_models.TimeStampedModel):\n name = models.CharField(max_length=120, blank=True)\n text = RichTextField()\n weight = models.PositiveIntegerField()\n 
document = models.ForeignKey(Document,\n on_delete=models.CASCADE,\n related_name='paragraphs')\n comments = GenericRelation(comment_models.Comment,\n related_query_name='paragraph',\n object_id_field='object_pk')\n\n class Meta:\n ordering = ('weight',)\n\n def __str__(self):\n return \"{}_paragraph_{}\".format(str(self.document), self.weight)\n\n def save(self, *args, **kwargs):\n self.text = html_transforms.clean_html_field(\n self.text)\n super().save(*args, **kwargs)\n\n @cached_property\n def project(self):\n return self.document.project\n", "path": "euth/documents/models.py"}, {"content": "from django.utils.translation import ugettext_lazy as _\n\nfrom euth import phases\n\nfrom . import apps, models, views\n\n\nclass CreateDocumentPhase(phases.PhaseContent):\n \"\"\"\n Allows no interaction for participants, only\n creation for moderators.\n \"\"\"\n app = apps.DocumentConfig.label\n phase = 'create_document'\n view = views.DocumentCreateView\n weight = 30\n\n name = _('Create document phase')\n module_name = _('commenting text')\n description = _('Create text for the project.')\n\n features = {}\n\n\nphases.content.register(CreateDocumentPhase())\n\n\nclass CommentPhase(phases.PhaseContent):\n \"\"\"\n Allows only commenting of paragraphs.\n \"\"\"\n app = apps.DocumentConfig.label\n phase = 'comment'\n view = views.DocumentDetailView\n weight = 40\n\n name = _('Comment phase')\n module_name = _('commenting text')\n description = _('Collect comments for the text.')\n\n features = {\n 'comment': (models.Paragraph, models.Document),\n }\n\n\nphases.content.register(CommentPhase())\n", "path": "euth/documents/phases.py"}]}
| 1,624 | 603 |
gh_patches_debug_22070
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-2290
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Invite existing users gives an IntegrityError
See http://sentry.support.akvo-ops.org/rsr/live/group/797/.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/views/employment.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Akvo RSR is covered by the GNU Affero General Public License.
3
4 See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6 """
7
8 from django.contrib.auth.models import Group
9 from rest_framework.decorators import api_view, permission_classes
10 from rest_framework.exceptions import PermissionDenied
11 from rest_framework.permissions import IsAuthenticated
12 from rest_framework.response import Response
13 from akvo.rsr.models import Employment
14 from ..serializers import EmploymentSerializer
15 from ..viewsets import BaseRSRViewSet
16
17
18 class EmploymentViewSet(BaseRSRViewSet):
19
20 """Employment resource."""
21
22 queryset = Employment.objects.select_related('organisation')
23 serializer_class = EmploymentSerializer
24
25
26 @api_view(['POST'])
27 @permission_classes((IsAuthenticated, ))
28 def approve_employment(request, pk=None):
29 employment = Employment.objects.get(pk=pk)
30 user = request.user
31
32 if not user.has_perm('rsr.change_employment', employment):
33 raise PermissionDenied
34
35 employment.approve(user)
36
37 return Response({'status': 'employment approved'})
38
39
40 @api_view(['POST'])
41 @permission_classes((IsAuthenticated, ))
42 def set_group(request, pk=None, group_id=None):
43 employment = Employment.objects.get(pk=pk)
44 group = Group.objects.get(pk=group_id)
45 user = request.user
46
47 if not user.has_perm('rsr.change_employment', employment):
48 raise PermissionDenied
49
50 employment.group = group
51 employment.save()
52
53 return Response({'status': 'group set'})
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/akvo/rest/views/employment.py b/akvo/rest/views/employment.py
--- a/akvo/rest/views/employment.py
+++ b/akvo/rest/views/employment.py
@@ -6,10 +6,12 @@
"""
from django.contrib.auth.models import Group
+from django.db import IntegrityError
from rest_framework.decorators import api_view, permission_classes
from rest_framework.exceptions import PermissionDenied
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
+from rest_framework import status
from akvo.rsr.models import Employment
from ..serializers import EmploymentSerializer
from ..viewsets import BaseRSRViewSet
@@ -48,6 +50,10 @@
raise PermissionDenied
employment.group = group
- employment.save()
+ try:
+ employment.save()
+ except IntegrityError:
+ return Response({'status': 'group not set', 'error': 'Employment already exists.'},
+ status=status.HTTP_400_BAD_REQUEST)
return Response({'status': 'group set'})
|
{"golden_diff": "diff --git a/akvo/rest/views/employment.py b/akvo/rest/views/employment.py\n--- a/akvo/rest/views/employment.py\n+++ b/akvo/rest/views/employment.py\n@@ -6,10 +6,12 @@\n \"\"\"\n \n from django.contrib.auth.models import Group\n+from django.db import IntegrityError\n from rest_framework.decorators import api_view, permission_classes\n from rest_framework.exceptions import PermissionDenied\n from rest_framework.permissions import IsAuthenticated\n from rest_framework.response import Response\n+from rest_framework import status\n from akvo.rsr.models import Employment\n from ..serializers import EmploymentSerializer\n from ..viewsets import BaseRSRViewSet\n@@ -48,6 +50,10 @@\n raise PermissionDenied\n \n employment.group = group\n- employment.save()\n+ try:\n+ employment.save()\n+ except IntegrityError:\n+ return Response({'status': 'group not set', 'error': 'Employment already exists.'},\n+ status=status.HTTP_400_BAD_REQUEST)\n \n return Response({'status': 'group set'})\n", "issue": "Invite existing users gives an IntegrityError\nSee http://sentry.support.akvo-ops.org/rsr/live/group/797/.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom django.contrib.auth.models import Group\nfrom rest_framework.decorators import api_view, permission_classes\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom akvo.rsr.models import Employment\nfrom ..serializers import EmploymentSerializer\nfrom ..viewsets import BaseRSRViewSet\n\n\nclass EmploymentViewSet(BaseRSRViewSet):\n\n \"\"\"Employment resource.\"\"\"\n\n queryset = Employment.objects.select_related('organisation')\n serializer_class = EmploymentSerializer\n\n\n@api_view(['POST'])\n@permission_classes((IsAuthenticated, ))\ndef approve_employment(request, pk=None):\n employment = Employment.objects.get(pk=pk)\n user = request.user\n\n if not user.has_perm('rsr.change_employment', employment):\n raise PermissionDenied\n\n employment.approve(user)\n\n return Response({'status': 'employment approved'})\n\n\n@api_view(['POST'])\n@permission_classes((IsAuthenticated, ))\ndef set_group(request, pk=None, group_id=None):\n employment = Employment.objects.get(pk=pk)\n group = Group.objects.get(pk=group_id)\n user = request.user\n\n if not user.has_perm('rsr.change_employment', employment):\n raise PermissionDenied\n\n employment.group = group\n employment.save()\n\n return Response({'status': 'group set'})\n", "path": "akvo/rest/views/employment.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom django.contrib.auth.models import Group\nfrom django.db import IntegrityError\nfrom rest_framework.decorators import api_view, permission_classes\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework import status\nfrom akvo.rsr.models import 
Employment\nfrom ..serializers import EmploymentSerializer\nfrom ..viewsets import BaseRSRViewSet\n\n\nclass EmploymentViewSet(BaseRSRViewSet):\n\n \"\"\"Employment resource.\"\"\"\n\n queryset = Employment.objects.select_related('organisation')\n serializer_class = EmploymentSerializer\n\n\n@api_view(['POST'])\n@permission_classes((IsAuthenticated, ))\ndef approve_employment(request, pk=None):\n employment = Employment.objects.get(pk=pk)\n user = request.user\n\n if not user.has_perm('rsr.change_employment', employment):\n raise PermissionDenied\n\n employment.approve(user)\n\n return Response({'status': 'employment approved'})\n\n\n@api_view(['POST'])\n@permission_classes((IsAuthenticated, ))\ndef set_group(request, pk=None, group_id=None):\n employment = Employment.objects.get(pk=pk)\n group = Group.objects.get(pk=group_id)\n user = request.user\n\n if not user.has_perm('rsr.change_employment', employment):\n raise PermissionDenied\n\n employment.group = group\n try:\n employment.save()\n except IntegrityError:\n return Response({'status': 'group not set', 'error': 'Employment already exists.'},\n status=status.HTTP_400_BAD_REQUEST)\n\n return Response({'status': 'group set'})\n", "path": "akvo/rest/views/employment.py"}]}
| 737 | 231 |
gh_patches_debug_19254
|
rasdani/github-patches
|
git_diff
|
sktime__sktime-2529
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] ThetaForecaster fails to give prediction intevals for pd.DataFrames
**Describe the bug**
@fkiraly , yesterday you mentioned that pd.Series should not appear in the code. I feel that this is somehow connected.
I noticed that `ThetaForecaster` is unable to generate the prediction intervals when it's inside a `ForecastingPipeline`. It turns out that the problem is not with the pipeline but with the input being forced to be a `pd.DataFrame`.
The error i'm getting is:
```python
ValueError: Unable to coerce to Series, length must be 1: given 36
```
The problem is connected with the addition of `y_pred` with `sem` in `_predict_quantiles`, in theta.py.
I'd offer a work around but I'm sure a temporal change of data type is not the best solution;-).
Btw, is it expected that `sem` is a pd.Float64Index instance?
Here's a toy example:
```python
pd.DataFrame([1,2])+pd.Float64Index([.1,.2])
```
<!--
-->
**To Reproduce**
<!--
Add a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com
-->
```python
import pandas as pd
from sktime.forecasting.theta import ThetaForecaster
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.datasets import load_airline
y = load_airline()
y_train, y_test = temporal_train_test_split(y)
fp = ThetaForecaster(sp=12)
fp.fit(pd.DataFrame(y_train))
fp.predict_quantiles(fh=y_test.index)
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
```python
import pandas as pd
from sktime.forecasting.theta import ThetaForecaster
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.datasets import load_airline
y = load_airline()
y_train, y_test = temporal_train_test_split(y)
fp = ThetaForecaster(sp=12)
fp.fit(y_train)
fp.predict_quantiles(fh=y_test.index)
```
**Additional context**
<!--
Add any other context about the problem here.
-->
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
System:
python: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 05:59:45) [MSC v.1929 64 bit (AMD64)]
executable: C:\Users\Mi\miniconda3\envs\sktime-dev\python.exe
machine: Windows-10-10.0.22000-SP0
Python dependencies:
pip: 22.0.4
setuptools: 62.1.0
sklearn: 1.0.2
sktime: 0.11.2
statsmodels: 0.13.2
numpy: 1.22.3
scipy: 1.8.0
pandas: 1.4.2
matplotlib: 3.5.1
joblib: 1.1.0
numba: 0.53.0
pmdarima: 1.8.5
tsfresh: None
</details>
<!-- Thanks for contributing! -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/forecasting/theta.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Theta forecaster."""
4
5 __all__ = ["ThetaForecaster"]
6 __author__ = ["big-o", "mloning", "kejsitake", "fkiraly"]
7
8 from warnings import warn
9
10 import numpy as np
11 import pandas as pd
12 from scipy.stats import norm
13
14 from sktime.forecasting.exp_smoothing import ExponentialSmoothing
15 from sktime.transformations.series.detrend import Deseasonalizer
16 from sktime.utils.slope_and_trend import _fit_trend
17 from sktime.utils.validation.forecasting import check_sp
18
19
20 class ThetaForecaster(ExponentialSmoothing):
21 """Theta method for forecasting.
22
23 The theta method as defined in [1]_ is equivalent to simple exponential
24 smoothing (SES) with drift (as demonstrated in [2]_).
25
26 The series is tested for seasonality using the test outlined in A&N. If
27 deemed seasonal, the series is seasonally adjusted using a classical
28 multiplicative decomposition before applying the theta method. The
29 resulting forecasts are then reseasonalised.
30
31 In cases where SES results in a constant forecast, the theta forecaster
32 will revert to predicting the SES constant plus a linear trend derived
33 from the training data.
34
35 Prediction intervals are computed using the underlying state space model.
36
37 Parameters
38 ----------
39 initial_level : float, optional
40 The alpha value of the simple exponential smoothing, if the value is
41 set then
42 this will be used, otherwise it will be estimated from the data.
43 deseasonalize : bool, optional (default=True)
44 If True, data is seasonally adjusted.
45 sp : int, optional (default=1)
46 The number of observations that constitute a seasonal period for a
47 multiplicative deseasonaliser, which is used if seasonality is
48 detected in the
49 training data. Ignored if a deseasonaliser transformer is provided.
50 Default is
51 1 (no seasonality).
52
53 Attributes
54 ----------
55 initial_level_ : float
56 The estimated alpha value of the SES fit.
57 drift_ : float
58 The estimated drift of the fitted model.
59 se_ : float
60 The standard error of the predictions. Used to calculate prediction
61 intervals.
62
63 References
64 ----------
65 .. [1] Assimakopoulos, V. and Nikolopoulos, K. The theta model: a
66 decomposition approach to forecasting. International Journal of
67 Forecasting 16, 521-530, 2000.
68 https://www.sciencedirect.com/science/article/pii/S0169207000000662
69
70 .. [2] `Hyndman, Rob J., and Billah, Baki. Unmasking the Theta method.
71 International J. Forecasting, 19, 287-290, 2003.
72 https://www.sciencedirect.com/science/article/pii/S0169207001001431
73
74 Examples
75 --------
76 >>> from sktime.datasets import load_airline
77 >>> from sktime.forecasting.theta import ThetaForecaster
78 >>> y = load_airline()
79 >>> forecaster = ThetaForecaster(sp=12)
80 >>> forecaster.fit(y)
81 ThetaForecaster(...)
82 >>> y_pred = forecaster.predict(fh=[1,2,3])
83 """
84
85 _fitted_param_names = ("initial_level", "smoothing_level")
86 _tags = {
87 "scitype:y": "univariate",
88 "ignores-exogeneous-X": True,
89 "capability:pred_int": True,
90 "requires-fh-in-fit": False,
91 "handles-missing-data": False,
92 }
93
94 def __init__(self, initial_level=None, deseasonalize=True, sp=1):
95
96 self.sp = sp
97 self.deseasonalize = deseasonalize
98 self.deseasonalizer_ = None
99 self.trend_ = None
100 self.initial_level_ = None
101 self.drift_ = None
102 self.se_ = None
103 super(ThetaForecaster, self).__init__(initial_level=initial_level, sp=sp)
104
105 def _fit(self, y, X=None, fh=None):
106 """Fit to training data.
107
108 Parameters
109 ----------
110 y : pd.Series
111 Target time series to which to fit the forecaster.
112 fh : int, list or np.array, optional (default=None)
113 The forecasters horizon with the steps ahead to to predict.
114 X : pd.DataFrame, optional (default=None)
115 Exogenous variables are ignored
116
117 Returns
118 -------
119 self : returns an instance of self.
120 """
121 sp = check_sp(self.sp)
122 if sp > 1 and not self.deseasonalize:
123 warn("`sp` is ignored when `deseasonalise`=False")
124
125 if self.deseasonalize:
126 self.deseasonalizer_ = Deseasonalizer(sp=self.sp, model="multiplicative")
127 y = self.deseasonalizer_.fit_transform(y)
128
129 self.initialization_method = "known" if self.initial_level else "estimated"
130 # fit exponential smoothing forecaster
131 # find theta lines: Theta lines are just SES + drift
132 super(ThetaForecaster, self)._fit(y, fh=fh)
133 self.initial_level_ = self._fitted_forecaster.params["smoothing_level"]
134
135 # compute and store historical residual standard error
136 self.sigma_ = np.sqrt(self._fitted_forecaster.sse / (len(y) - 1))
137
138 # compute trend
139 self.trend_ = self._compute_trend(y)
140
141 return self
142
143 def _predict(self, fh, X=None):
144 """Make forecasts.
145
146 Parameters
147 ----------
148 fh : array-like
149 The forecasters horizon with the steps ahead to to predict.
150 Default is
151 one-step ahead forecast, i.e. np.array([1]).
152
153 Returns
154 -------
155 y_pred : pandas.Series
156 Returns series of predicted values.
157 """
158 y_pred = super(ThetaForecaster, self)._predict(fh, X)
159
160 # Add drift.
161 drift = self._compute_drift()
162 y_pred += drift
163
164 if self.deseasonalize:
165 y_pred = self.deseasonalizer_.inverse_transform(y_pred)
166
167 return y_pred
168
169 @staticmethod
170 def _compute_trend(y):
171 # Trend calculated through least squares regression.
172 coefs = _fit_trend(y.values.reshape(1, -1), order=1)
173 return coefs[0, 0] / 2
174
175 def _compute_drift(self):
176 fh = self.fh.to_relative(self.cutoff)
177 if np.isclose(self.initial_level_, 0.0):
178 # SES was constant, so revert to simple trend
179 drift = self.trend_ * fh
180 else:
181 # Calculate drift from SES parameters
182 n_timepoints = len(self._y)
183 drift = self.trend_ * (
184 fh
185 + (1 - (1 - self.initial_level_) ** n_timepoints) / self.initial_level_
186 )
187
188 return drift
189
190 def _predict_quantiles(self, fh, X=None, alpha=None):
191 """Compute/return prediction quantiles for a forecast.
192
193 private _predict_quantiles containing the core logic,
194 called from predict_quantiles and predict_interval
195
196 Parameters
197 ----------
198 fh : int, list, np.array or ForecastingHorizon
199 Forecasting horizon
200 X : pd.DataFrame, optional (default=None)
201 Exogenous time series
202 alpha : list of float, optional (default=[0.5])
203 A list of probabilities at which quantile forecasts are computed.
204
205 Returns
206 -------
207 quantiles : pd.DataFrame
208 Column has multi-index: first level is variable name from y in fit,
209 second level being the values of alpha passed to the function.
210 Row index is fh. Entries are quantile forecasts, for var in col index,
211 at quantile probability in second col index, for the row index.
212 """
213 # prepare return data frame
214 index = pd.MultiIndex.from_product([["Quantiles"], alpha])
215 pred_quantiles = pd.DataFrame(columns=index)
216
217 sem = self.sigma_ * np.sqrt(
218 self.fh.to_relative(self.cutoff) * self.initial_level_ ** 2 + 1
219 )
220
221 y_pred = super(ThetaForecaster, self).predict(fh, X)
222
223 # we assume normal additive noise with sem variance
224 for a in alpha:
225 pred_quantiles[("Quantiles", a)] = y_pred + norm.ppf(a) * sem
226 # todo: should this not increase with the horizon?
227 # i.e., sth like norm.ppf(a) * sem * fh.to_absolute(cutoff) ?
228 # I've just refactored this so will leave it for now
229
230 return pred_quantiles
231
232 def _update(self, y, X=None, update_params=True):
233 super(ThetaForecaster, self)._update(
234 y, X, update_params=False
235 ) # use custom update_params routine
236 if update_params:
237 if self.deseasonalize:
238 y = self.deseasonalizer_.transform(self._y) # use updated y
239 self.initial_level_ = self._fitted_forecaster.params["smoothing_level"]
240 self.trend_ = self._compute_trend(y)
241 return self
242
243
244 def _zscore(level: float, two_tailed: bool = True) -> float:
245 """Calculate a z-score from a confidence level.
246
247 Parameters
248 ----------
249 level : float
250 A confidence level, in the open interval (0, 1).
251 two_tailed : bool (default=True)
252 If True, return the two-tailed z score.
253
254 Returns
255 -------
256 z : float
257 The z score.
258 """
259 alpha = 1 - level
260 if two_tailed:
261 alpha /= 2
262 return -norm.ppf(alpha)
263
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sktime/forecasting/theta.py b/sktime/forecasting/theta.py
--- a/sktime/forecasting/theta.py
+++ b/sktime/forecasting/theta.py
@@ -149,6 +149,8 @@
The forecasters horizon with the steps ahead to to predict.
Default is
one-step ahead forecast, i.e. np.array([1]).
+ X : pd.DataFrame, optional (default=None)
+ Exogenous time series
Returns
-------
@@ -215,10 +217,10 @@
pred_quantiles = pd.DataFrame(columns=index)
sem = self.sigma_ * np.sqrt(
- self.fh.to_relative(self.cutoff) * self.initial_level_ ** 2 + 1
+ self.fh.to_relative(self.cutoff) * self.initial_level_**2 + 1
)
- y_pred = super(ThetaForecaster, self).predict(fh, X)
+ y_pred = super(ThetaForecaster, self)._predict(fh, X)
# we assume normal additive noise with sem variance
for a in alpha:
|
{"golden_diff": "diff --git a/sktime/forecasting/theta.py b/sktime/forecasting/theta.py\n--- a/sktime/forecasting/theta.py\n+++ b/sktime/forecasting/theta.py\n@@ -149,6 +149,8 @@\n The forecasters horizon with the steps ahead to to predict.\n Default is\n one-step ahead forecast, i.e. np.array([1]).\n+ X : pd.DataFrame, optional (default=None)\n+ Exogenous time series\n \n Returns\n -------\n@@ -215,10 +217,10 @@\n pred_quantiles = pd.DataFrame(columns=index)\n \n sem = self.sigma_ * np.sqrt(\n- self.fh.to_relative(self.cutoff) * self.initial_level_ ** 2 + 1\n+ self.fh.to_relative(self.cutoff) * self.initial_level_**2 + 1\n )\n \n- y_pred = super(ThetaForecaster, self).predict(fh, X)\n+ y_pred = super(ThetaForecaster, self)._predict(fh, X)\n \n # we assume normal additive noise with sem variance\n for a in alpha:\n", "issue": "[BUG] ThetaForecaster fails to give prediction intevals for pd.DataFrames\n**Describe the bug**\r\n@fkiraly , yesterday you mentioned that pd.Series should not appear in the code. I feel that this is somehow connected.\r\n\r\nI noticed that `ThetaForecaster` is unable to generate the prediction intervals when it's inside a `ForecastingPipeline`. It turns out that the problem is not with the pipeline but with the input being forced to be a `pd.DataFrame`.\r\nThe error i'm getting is:\r\n```python\r\nValueError: Unable to coerce to Series, length must be 1: given 36\r\n```\r\n\r\nThe problem is connected with the addition of `y_pred` with `sem` in `_predict_quantiles`, in theta.py.\r\nI'd offer a work around but I'm sure a temporal change of data type is not the best solution;-).\r\n\r\nBtw, is it expected that `sem` is a pd.Float64Index instance?\r\n\r\nHere's a toy example:\r\n```python\r\npd.DataFrame([1,2])+pd.Float64Index([.1,.2])\r\n```\r\n<!--\r\n\r\n\r\n-->\r\n\r\n**To Reproduce**\r\n<!--\r\nAdd a Minimal, Complete, and Verifiable example (for more details, see e.g. 
https://stackoverflow.com/help/mcve\r\n\r\nIf the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com\r\n-->\r\n\r\n```python\r\nimport pandas as pd\r\nfrom sktime.forecasting.theta import ThetaForecaster\r\nfrom sktime.forecasting.model_selection import temporal_train_test_split\r\nfrom sktime.datasets import load_airline\r\ny = load_airline()\r\ny_train, y_test = temporal_train_test_split(y)\r\n\r\nfp = ThetaForecaster(sp=12)\r\nfp.fit(pd.DataFrame(y_train))\r\nfp.predict_quantiles(fh=y_test.index)\r\n```\r\n\r\n**Expected behavior**\r\n<!--\r\nA clear and concise description of what you expected to happen.\r\n-->\r\n```python\r\nimport pandas as pd\r\nfrom sktime.forecasting.theta import ThetaForecaster\r\nfrom sktime.forecasting.model_selection import temporal_train_test_split\r\nfrom sktime.datasets import load_airline\r\ny = load_airline()\r\ny_train, y_test = temporal_train_test_split(y)\r\n\r\nfp = ThetaForecaster(sp=12)\r\nfp.fit(y_train)\r\nfp.predict_quantiles(fh=y_test.index)\r\n```\r\n**Additional context**\r\n<!--\r\nAdd any other context about the problem here.\r\n-->\r\n\r\n**Versions**\r\n<details>\r\n\r\n<!--\r\nPlease run the following code snippet and paste the output here:\r\n \r\nfrom sktime import show_versions; show_versions()\r\n-->\r\n\r\nSystem:\r\n python: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 05:59:45) [MSC v.1929 64 bit (AMD64)]\r\nexecutable: C:\\Users\\Mi\\miniconda3\\envs\\sktime-dev\\python.exe\r\n machine: Windows-10-10.0.22000-SP0\r\n\r\nPython dependencies:\r\n pip: 22.0.4\r\n setuptools: 62.1.0\r\n sklearn: 1.0.2\r\n sktime: 0.11.2\r\n statsmodels: 0.13.2\r\n numpy: 1.22.3\r\n scipy: 1.8.0\r\n pandas: 1.4.2\r\n matplotlib: 3.5.1\r\n joblib: 1.1.0\r\n numba: 0.53.0\r\n pmdarima: 1.8.5\r\n tsfresh: None\r\n\r\n</details>\r\n\r\n<!-- Thanks for contributing! -->\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Theta forecaster.\"\"\"\n\n__all__ = [\"ThetaForecaster\"]\n__author__ = [\"big-o\", \"mloning\", \"kejsitake\", \"fkiraly\"]\n\nfrom warnings import warn\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import norm\n\nfrom sktime.forecasting.exp_smoothing import ExponentialSmoothing\nfrom sktime.transformations.series.detrend import Deseasonalizer\nfrom sktime.utils.slope_and_trend import _fit_trend\nfrom sktime.utils.validation.forecasting import check_sp\n\n\nclass ThetaForecaster(ExponentialSmoothing):\n \"\"\"Theta method for forecasting.\n\n The theta method as defined in [1]_ is equivalent to simple exponential\n smoothing (SES) with drift (as demonstrated in [2]_).\n\n The series is tested for seasonality using the test outlined in A&N. If\n deemed seasonal, the series is seasonally adjusted using a classical\n multiplicative decomposition before applying the theta method. 
The\n resulting forecasts are then reseasonalised.\n\n In cases where SES results in a constant forecast, the theta forecaster\n will revert to predicting the SES constant plus a linear trend derived\n from the training data.\n\n Prediction intervals are computed using the underlying state space model.\n\n Parameters\n ----------\n initial_level : float, optional\n The alpha value of the simple exponential smoothing, if the value is\n set then\n this will be used, otherwise it will be estimated from the data.\n deseasonalize : bool, optional (default=True)\n If True, data is seasonally adjusted.\n sp : int, optional (default=1)\n The number of observations that constitute a seasonal period for a\n multiplicative deseasonaliser, which is used if seasonality is\n detected in the\n training data. Ignored if a deseasonaliser transformer is provided.\n Default is\n 1 (no seasonality).\n\n Attributes\n ----------\n initial_level_ : float\n The estimated alpha value of the SES fit.\n drift_ : float\n The estimated drift of the fitted model.\n se_ : float\n The standard error of the predictions. Used to calculate prediction\n intervals.\n\n References\n ----------\n .. [1] Assimakopoulos, V. and Nikolopoulos, K. The theta model: a\n decomposition approach to forecasting. International Journal of\n Forecasting 16, 521-530, 2000.\n https://www.sciencedirect.com/science/article/pii/S0169207000000662\n\n .. [2] `Hyndman, Rob J., and Billah, Baki. Unmasking the Theta method.\n International J. Forecasting, 19, 287-290, 2003.\n https://www.sciencedirect.com/science/article/pii/S0169207001001431\n\n Examples\n --------\n >>> from sktime.datasets import load_airline\n >>> from sktime.forecasting.theta import ThetaForecaster\n >>> y = load_airline()\n >>> forecaster = ThetaForecaster(sp=12)\n >>> forecaster.fit(y)\n ThetaForecaster(...)\n >>> y_pred = forecaster.predict(fh=[1,2,3])\n \"\"\"\n\n _fitted_param_names = (\"initial_level\", \"smoothing_level\")\n _tags = {\n \"scitype:y\": \"univariate\",\n \"ignores-exogeneous-X\": True,\n \"capability:pred_int\": True,\n \"requires-fh-in-fit\": False,\n \"handles-missing-data\": False,\n }\n\n def __init__(self, initial_level=None, deseasonalize=True, sp=1):\n\n self.sp = sp\n self.deseasonalize = deseasonalize\n self.deseasonalizer_ = None\n self.trend_ = None\n self.initial_level_ = None\n self.drift_ = None\n self.se_ = None\n super(ThetaForecaster, self).__init__(initial_level=initial_level, sp=sp)\n\n def _fit(self, y, X=None, fh=None):\n \"\"\"Fit to training data.\n\n Parameters\n ----------\n y : pd.Series\n Target time series to which to fit the forecaster.\n fh : int, list or np.array, optional (default=None)\n The forecasters horizon with the steps ahead to to predict.\n X : pd.DataFrame, optional (default=None)\n Exogenous variables are ignored\n\n Returns\n -------\n self : returns an instance of self.\n \"\"\"\n sp = check_sp(self.sp)\n if sp > 1 and not self.deseasonalize:\n warn(\"`sp` is ignored when `deseasonalise`=False\")\n\n if self.deseasonalize:\n self.deseasonalizer_ = Deseasonalizer(sp=self.sp, model=\"multiplicative\")\n y = self.deseasonalizer_.fit_transform(y)\n\n self.initialization_method = \"known\" if self.initial_level else \"estimated\"\n # fit exponential smoothing forecaster\n # find theta lines: Theta lines are just SES + drift\n super(ThetaForecaster, self)._fit(y, fh=fh)\n self.initial_level_ = self._fitted_forecaster.params[\"smoothing_level\"]\n\n # compute and store historical residual standard error\n self.sigma_ = 
np.sqrt(self._fitted_forecaster.sse / (len(y) - 1))\n\n # compute trend\n self.trend_ = self._compute_trend(y)\n\n return self\n\n def _predict(self, fh, X=None):\n \"\"\"Make forecasts.\n\n Parameters\n ----------\n fh : array-like\n The forecasters horizon with the steps ahead to to predict.\n Default is\n one-step ahead forecast, i.e. np.array([1]).\n\n Returns\n -------\n y_pred : pandas.Series\n Returns series of predicted values.\n \"\"\"\n y_pred = super(ThetaForecaster, self)._predict(fh, X)\n\n # Add drift.\n drift = self._compute_drift()\n y_pred += drift\n\n if self.deseasonalize:\n y_pred = self.deseasonalizer_.inverse_transform(y_pred)\n\n return y_pred\n\n @staticmethod\n def _compute_trend(y):\n # Trend calculated through least squares regression.\n coefs = _fit_trend(y.values.reshape(1, -1), order=1)\n return coefs[0, 0] / 2\n\n def _compute_drift(self):\n fh = self.fh.to_relative(self.cutoff)\n if np.isclose(self.initial_level_, 0.0):\n # SES was constant, so revert to simple trend\n drift = self.trend_ * fh\n else:\n # Calculate drift from SES parameters\n n_timepoints = len(self._y)\n drift = self.trend_ * (\n fh\n + (1 - (1 - self.initial_level_) ** n_timepoints) / self.initial_level_\n )\n\n return drift\n\n def _predict_quantiles(self, fh, X=None, alpha=None):\n \"\"\"Compute/return prediction quantiles for a forecast.\n\n private _predict_quantiles containing the core logic,\n called from predict_quantiles and predict_interval\n\n Parameters\n ----------\n fh : int, list, np.array or ForecastingHorizon\n Forecasting horizon\n X : pd.DataFrame, optional (default=None)\n Exogenous time series\n alpha : list of float, optional (default=[0.5])\n A list of probabilities at which quantile forecasts are computed.\n\n Returns\n -------\n quantiles : pd.DataFrame\n Column has multi-index: first level is variable name from y in fit,\n second level being the values of alpha passed to the function.\n Row index is fh. 
Entries are quantile forecasts, for var in col index,\n at quantile probability in second col index, for the row index.\n \"\"\"\n # prepare return data frame\n index = pd.MultiIndex.from_product([[\"Quantiles\"], alpha])\n pred_quantiles = pd.DataFrame(columns=index)\n\n sem = self.sigma_ * np.sqrt(\n self.fh.to_relative(self.cutoff) * self.initial_level_ ** 2 + 1\n )\n\n y_pred = super(ThetaForecaster, self).predict(fh, X)\n\n # we assume normal additive noise with sem variance\n for a in alpha:\n pred_quantiles[(\"Quantiles\", a)] = y_pred + norm.ppf(a) * sem\n # todo: should this not increase with the horizon?\n # i.e., sth like norm.ppf(a) * sem * fh.to_absolute(cutoff) ?\n # I've just refactored this so will leave it for now\n\n return pred_quantiles\n\n def _update(self, y, X=None, update_params=True):\n super(ThetaForecaster, self)._update(\n y, X, update_params=False\n ) # use custom update_params routine\n if update_params:\n if self.deseasonalize:\n y = self.deseasonalizer_.transform(self._y) # use updated y\n self.initial_level_ = self._fitted_forecaster.params[\"smoothing_level\"]\n self.trend_ = self._compute_trend(y)\n return self\n\n\ndef _zscore(level: float, two_tailed: bool = True) -> float:\n \"\"\"Calculate a z-score from a confidence level.\n\n Parameters\n ----------\n level : float\n A confidence level, in the open interval (0, 1).\n two_tailed : bool (default=True)\n If True, return the two-tailed z score.\n\n Returns\n -------\n z : float\n The z score.\n \"\"\"\n alpha = 1 - level\n if two_tailed:\n alpha /= 2\n return -norm.ppf(alpha)\n", "path": "sktime/forecasting/theta.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Theta forecaster.\"\"\"\n\n__all__ = [\"ThetaForecaster\"]\n__author__ = [\"big-o\", \"mloning\", \"kejsitake\", \"fkiraly\"]\n\nfrom warnings import warn\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import norm\n\nfrom sktime.forecasting.exp_smoothing import ExponentialSmoothing\nfrom sktime.transformations.series.detrend import Deseasonalizer\nfrom sktime.utils.slope_and_trend import _fit_trend\nfrom sktime.utils.validation.forecasting import check_sp\n\n\nclass ThetaForecaster(ExponentialSmoothing):\n \"\"\"Theta method for forecasting.\n\n The theta method as defined in [1]_ is equivalent to simple exponential\n smoothing (SES) with drift (as demonstrated in [2]_).\n\n The series is tested for seasonality using the test outlined in A&N. If\n deemed seasonal, the series is seasonally adjusted using a classical\n multiplicative decomposition before applying the theta method. The\n resulting forecasts are then reseasonalised.\n\n In cases where SES results in a constant forecast, the theta forecaster\n will revert to predicting the SES constant plus a linear trend derived\n from the training data.\n\n Prediction intervals are computed using the underlying state space model.\n\n Parameters\n ----------\n initial_level : float, optional\n The alpha value of the simple exponential smoothing, if the value is\n set then\n this will be used, otherwise it will be estimated from the data.\n deseasonalize : bool, optional (default=True)\n If True, data is seasonally adjusted.\n sp : int, optional (default=1)\n The number of observations that constitute a seasonal period for a\n multiplicative deseasonaliser, which is used if seasonality is\n detected in the\n training data. 
Ignored if a deseasonaliser transformer is provided.\n Default is\n 1 (no seasonality).\n\n Attributes\n ----------\n initial_level_ : float\n The estimated alpha value of the SES fit.\n drift_ : float\n The estimated drift of the fitted model.\n se_ : float\n The standard error of the predictions. Used to calculate prediction\n intervals.\n\n References\n ----------\n .. [1] Assimakopoulos, V. and Nikolopoulos, K. The theta model: a\n decomposition approach to forecasting. International Journal of\n Forecasting 16, 521-530, 2000.\n https://www.sciencedirect.com/science/article/pii/S0169207000000662\n\n .. [2] `Hyndman, Rob J., and Billah, Baki. Unmasking the Theta method.\n International J. Forecasting, 19, 287-290, 2003.\n https://www.sciencedirect.com/science/article/pii/S0169207001001431\n\n Examples\n --------\n >>> from sktime.datasets import load_airline\n >>> from sktime.forecasting.theta import ThetaForecaster\n >>> y = load_airline()\n >>> forecaster = ThetaForecaster(sp=12)\n >>> forecaster.fit(y)\n ThetaForecaster(...)\n >>> y_pred = forecaster.predict(fh=[1,2,3])\n \"\"\"\n\n _fitted_param_names = (\"initial_level\", \"smoothing_level\")\n _tags = {\n \"scitype:y\": \"univariate\",\n \"ignores-exogeneous-X\": True,\n \"capability:pred_int\": True,\n \"requires-fh-in-fit\": False,\n \"handles-missing-data\": False,\n }\n\n def __init__(self, initial_level=None, deseasonalize=True, sp=1):\n\n self.sp = sp\n self.deseasonalize = deseasonalize\n self.deseasonalizer_ = None\n self.trend_ = None\n self.initial_level_ = None\n self.drift_ = None\n self.se_ = None\n super(ThetaForecaster, self).__init__(initial_level=initial_level, sp=sp)\n\n def _fit(self, y, X=None, fh=None):\n \"\"\"Fit to training data.\n\n Parameters\n ----------\n y : pd.Series\n Target time series to which to fit the forecaster.\n fh : int, list or np.array, optional (default=None)\n The forecasters horizon with the steps ahead to to predict.\n X : pd.DataFrame, optional (default=None)\n Exogenous variables are ignored\n\n Returns\n -------\n self : returns an instance of self.\n \"\"\"\n sp = check_sp(self.sp)\n if sp > 1 and not self.deseasonalize:\n warn(\"`sp` is ignored when `deseasonalise`=False\")\n\n if self.deseasonalize:\n self.deseasonalizer_ = Deseasonalizer(sp=self.sp, model=\"multiplicative\")\n y = self.deseasonalizer_.fit_transform(y)\n\n self.initialization_method = \"known\" if self.initial_level else \"estimated\"\n # fit exponential smoothing forecaster\n # find theta lines: Theta lines are just SES + drift\n super(ThetaForecaster, self)._fit(y, fh=fh)\n self.initial_level_ = self._fitted_forecaster.params[\"smoothing_level\"]\n\n # compute and store historical residual standard error\n self.sigma_ = np.sqrt(self._fitted_forecaster.sse / (len(y) - 1))\n\n # compute trend\n self.trend_ = self._compute_trend(y)\n\n return self\n\n def _predict(self, fh, X=None):\n \"\"\"Make forecasts.\n\n Parameters\n ----------\n fh : array-like\n The forecasters horizon with the steps ahead to to predict.\n Default is\n one-step ahead forecast, i.e. 
np.array([1]).\n X : pd.DataFrame, optional (default=None)\n Exogenous time series\n\n Returns\n -------\n y_pred : pandas.Series\n Returns series of predicted values.\n \"\"\"\n y_pred = super(ThetaForecaster, self)._predict(fh, X)\n\n # Add drift.\n drift = self._compute_drift()\n y_pred += drift\n\n if self.deseasonalize:\n y_pred = self.deseasonalizer_.inverse_transform(y_pred)\n\n return y_pred\n\n @staticmethod\n def _compute_trend(y):\n # Trend calculated through least squares regression.\n coefs = _fit_trend(y.values.reshape(1, -1), order=1)\n return coefs[0, 0] / 2\n\n def _compute_drift(self):\n fh = self.fh.to_relative(self.cutoff)\n if np.isclose(self.initial_level_, 0.0):\n # SES was constant, so revert to simple trend\n drift = self.trend_ * fh\n else:\n # Calculate drift from SES parameters\n n_timepoints = len(self._y)\n drift = self.trend_ * (\n fh\n + (1 - (1 - self.initial_level_) ** n_timepoints) / self.initial_level_\n )\n\n return drift\n\n def _predict_quantiles(self, fh, X=None, alpha=None):\n \"\"\"Compute/return prediction quantiles for a forecast.\n\n private _predict_quantiles containing the core logic,\n called from predict_quantiles and predict_interval\n\n Parameters\n ----------\n fh : int, list, np.array or ForecastingHorizon\n Forecasting horizon\n X : pd.DataFrame, optional (default=None)\n Exogenous time series\n alpha : list of float, optional (default=[0.5])\n A list of probabilities at which quantile forecasts are computed.\n\n Returns\n -------\n quantiles : pd.DataFrame\n Column has multi-index: first level is variable name from y in fit,\n second level being the values of alpha passed to the function.\n Row index is fh. Entries are quantile forecasts, for var in col index,\n at quantile probability in second col index, for the row index.\n \"\"\"\n # prepare return data frame\n index = pd.MultiIndex.from_product([[\"Quantiles\"], alpha])\n pred_quantiles = pd.DataFrame(columns=index)\n\n sem = self.sigma_ * np.sqrt(\n self.fh.to_relative(self.cutoff) * self.initial_level_**2 + 1\n )\n\n y_pred = super(ThetaForecaster, self)._predict(fh, X)\n\n # we assume normal additive noise with sem variance\n for a in alpha:\n pred_quantiles[(\"Quantiles\", a)] = y_pred + norm.ppf(a) * sem\n # todo: should this not increase with the horizon?\n # i.e., sth like norm.ppf(a) * sem * fh.to_absolute(cutoff) ?\n # I've just refactored this so will leave it for now\n\n return pred_quantiles\n\n def _update(self, y, X=None, update_params=True):\n super(ThetaForecaster, self)._update(\n y, X, update_params=False\n ) # use custom update_params routine\n if update_params:\n if self.deseasonalize:\n y = self.deseasonalizer_.transform(self._y) # use updated y\n self.initial_level_ = self._fitted_forecaster.params[\"smoothing_level\"]\n self.trend_ = self._compute_trend(y)\n return self\n\n\ndef _zscore(level: float, two_tailed: bool = True) -> float:\n \"\"\"Calculate a z-score from a confidence level.\n\n Parameters\n ----------\n level : float\n A confidence level, in the open interval (0, 1).\n two_tailed : bool (default=True)\n If True, return the two-tailed z score.\n\n Returns\n -------\n z : float\n The z score.\n \"\"\"\n alpha = 1 - level\n if two_tailed:\n alpha /= 2\n return -norm.ppf(alpha)\n", "path": "sktime/forecasting/theta.py"}]}
| 3,953 | 256 |
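Aside on the ThetaForecaster record that ends above: the `_compute_drift` code quoted in its after-files blob applies `trend_ * (fh + (1 - (1 - alpha) ** n_timepoints) / alpha)` whenever the SES smoothing level is non-zero, and falls back to a plain linear trend otherwise. The snippet below is a minimal standalone sketch of that formula with made-up inputs (the trend, alpha and n_timepoints values are illustrative, not taken from the dataset); it is not sktime code.

```python
import numpy as np

def compute_drift(trend, alpha, n_timepoints, fh):
    # Mirrors the branch structure of ThetaForecaster._compute_drift quoted above:
    # a plain linear trend when alpha ~ 0, otherwise the SES-adjusted drift.
    fh = np.asarray(fh, dtype=float)
    if np.isclose(alpha, 0.0):
        return trend * fh
    return trend * (fh + (1 - (1 - alpha) ** n_timepoints) / alpha)

# Illustrative values only: a 3-step horizon and 144 observed points.
print(compute_drift(trend=0.5, alpha=0.2, n_timepoints=144, fh=[1, 2, 3]))
```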
gh_patches_debug_26325
|
rasdani/github-patches
|
git_diff
|
getredash__redash-6652
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
For self-hosted instance, Google sheet connection test failed, but can't find any logs, how to trouble shooting such cases


When test connections timeout, can't find any logs on worker/scheduler/server docker logs.
even make sure the log level is debug, can't find any related logs. how to trouble shooting this
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/query_runner/google_spreadsheets.py`
Content:
```
1 import logging
2 import re
3 from base64 import b64decode
4
5 from dateutil import parser
6 from requests import Session
7 from xlsxwriter.utility import xl_col_to_name
8
9 from redash.query_runner import (
10 TYPE_BOOLEAN,
11 TYPE_DATETIME,
12 TYPE_FLOAT,
13 TYPE_INTEGER,
14 TYPE_STRING,
15 BaseQueryRunner,
16 guess_type,
17 register,
18 )
19 from redash.utils import json_dumps, json_loads
20
21 logger = logging.getLogger(__name__)
22
23 try:
24 import google.auth
25 import gspread
26 from google.oauth2.service_account import Credentials
27 from gspread.exceptions import APIError
28 from gspread.exceptions import WorksheetNotFound as GSWorksheetNotFound
29
30 enabled = True
31 except ImportError:
32 enabled = False
33
34
35 def _load_key(filename):
36 with open(filename, "rb") as f:
37 return json_loads(f.read())
38
39
40 def _get_columns_and_column_names(row):
41 column_names = []
42 columns = []
43 duplicate_counter = 1
44
45 for i, column_name in enumerate(row):
46 if not column_name:
47 column_name = "column_{}".format(xl_col_to_name(i))
48
49 if column_name in column_names:
50 column_name = "{}{}".format(column_name, duplicate_counter)
51 duplicate_counter += 1
52
53 column_names.append(column_name)
54 columns.append({"name": column_name, "friendly_name": column_name, "type": TYPE_STRING})
55
56 return columns, column_names
57
58
59 def _value_eval_list(row_values, col_types):
60 value_list = []
61 raw_values = zip(col_types, row_values)
62 for typ, rval in raw_values:
63 try:
64 if rval is None or rval == "":
65 val = None
66 elif typ == TYPE_BOOLEAN:
67 val = True if str(rval).lower() == "true" else False
68 elif typ == TYPE_DATETIME:
69 val = parser.parse(rval)
70 elif typ == TYPE_FLOAT:
71 val = float(rval)
72 elif typ == TYPE_INTEGER:
73 val = int(rval)
74 else:
75 # for TYPE_STRING and default
76 val = str(rval)
77 value_list.append(val)
78 except (ValueError, OverflowError):
79 value_list.append(rval)
80 return value_list
81
82
83 HEADER_INDEX = 0
84
85
86 class WorksheetNotFoundError(Exception):
87 def __init__(self, worksheet_num, worksheet_count):
88 message = "Worksheet number {} not found. Spreadsheet has {} worksheets. Note that the worksheet count is zero based.".format(
89 worksheet_num, worksheet_count
90 )
91 super(WorksheetNotFoundError, self).__init__(message)
92
93
94 class WorksheetNotFoundByTitleError(Exception):
95 def __init__(self, worksheet_title):
96 message = "Worksheet title '{}' not found.".format(worksheet_title)
97 super(WorksheetNotFoundByTitleError, self).__init__(message)
98
99
100 def parse_query(query):
101 values = query.split("|")
102 key = values[0] # key of the spreadsheet
103 worksheet_num_or_title = 0 # A default value for when a number of inputs is invalid
104 if len(values) == 2:
105 s = values[1].strip()
106 if len(s) > 0:
107 if re.match(r"^\"(.*?)\"$", s):
108 # A string quoted by " means a title of worksheet
109 worksheet_num_or_title = s[1:-1]
110 else:
111 # if spreadsheet contains more than one worksheet - this is the number of it
112 worksheet_num_or_title = int(s)
113
114 return key, worksheet_num_or_title
115
116
117 def parse_worksheet(worksheet):
118 if not worksheet:
119 return {"columns": [], "rows": []}
120
121 columns, column_names = _get_columns_and_column_names(worksheet[HEADER_INDEX])
122
123 if len(worksheet) > 1:
124 for j, value in enumerate(worksheet[HEADER_INDEX + 1]):
125 columns[j]["type"] = guess_type(value)
126
127 column_types = [c["type"] for c in columns]
128 rows = [dict(zip(column_names, _value_eval_list(row, column_types))) for row in worksheet[HEADER_INDEX + 1 :]]
129 data = {"columns": columns, "rows": rows}
130
131 return data
132
133
134 def parse_spreadsheet(spreadsheet, worksheet_num_or_title):
135 worksheet = None
136 if isinstance(worksheet_num_or_title, int):
137 worksheet = spreadsheet.get_worksheet_by_index(worksheet_num_or_title)
138 if worksheet is None:
139 worksheet_count = len(spreadsheet.worksheets())
140 raise WorksheetNotFoundError(worksheet_num_or_title, worksheet_count)
141 elif isinstance(worksheet_num_or_title, str):
142 worksheet = spreadsheet.get_worksheet_by_title(worksheet_num_or_title)
143 if worksheet is None:
144 raise WorksheetNotFoundByTitleError(worksheet_num_or_title)
145
146 worksheet_values = worksheet.get_all_values()
147
148 return parse_worksheet(worksheet_values)
149
150
151 def is_url_key(key):
152 return key.startswith("https://")
153
154
155 def parse_api_error(error):
156 error_data = error.response.json()
157
158 if "error" in error_data and "message" in error_data["error"]:
159 message = error_data["error"]["message"]
160 else:
161 message = str(error)
162
163 return message
164
165
166 class SpreadsheetWrapper:
167 def __init__(self, spreadsheet):
168 self.spreadsheet = spreadsheet
169
170 def worksheets(self):
171 return self.spreadsheet.worksheets()
172
173 def get_worksheet_by_index(self, index):
174 return self.spreadsheet.get_worksheet(index)
175
176 def get_worksheet_by_title(self, title):
177 try:
178 return self.spreadsheet.worksheet(title)
179 except GSWorksheetNotFound:
180 return None
181
182
183 class TimeoutSession(Session):
184 def request(self, *args, **kwargs):
185 kwargs.setdefault("timeout", 300)
186 return super(TimeoutSession, self).request(*args, **kwargs)
187
188
189 class GoogleSpreadsheet(BaseQueryRunner):
190 should_annotate_query = False
191
192 def __init__(self, configuration):
193 super(GoogleSpreadsheet, self).__init__(configuration)
194 self.syntax = "custom"
195
196 @classmethod
197 def name(cls):
198 return "Google Sheets"
199
200 @classmethod
201 def type(cls):
202 return "google_spreadsheets"
203
204 @classmethod
205 def enabled(cls):
206 return enabled
207
208 @classmethod
209 def configuration_schema(cls):
210 return {
211 "type": "object",
212 "properties": {"jsonKeyFile": {"type": "string", "title": "JSON Key File (ADC is used if omitted)"}},
213 "required": [],
214 "secret": ["jsonKeyFile"],
215 }
216
217 def _get_spreadsheet_service(self):
218 scopes = ["https://spreadsheets.google.com/feeds"]
219
220 try:
221 key = json_loads(b64decode(self.configuration["jsonKeyFile"]))
222 creds = Credentials.from_service_account_info(key, scopes=scopes)
223 except KeyError:
224 creds = google.auth.default(scopes=scopes)[0]
225
226 timeout_session = Session()
227 timeout_session.requests_session = TimeoutSession()
228 spreadsheetservice = gspread.Client(auth=creds, session=timeout_session)
229 spreadsheetservice.login()
230 return spreadsheetservice
231
232 def test_connection(self):
233 service = self._get_spreadsheet_service()
234 test_spreadsheet_key = "1S0mld7LMbUad8LYlo13Os9f7eNjw57MqVC0YiCd1Jis"
235 try:
236 service.open_by_key(test_spreadsheet_key).worksheets()
237 except APIError as e:
238 message = parse_api_error(e)
239 raise Exception(message)
240
241 def run_query(self, query, user):
242 logger.debug("Spreadsheet is about to execute query: %s", query)
243 key, worksheet_num_or_title = parse_query(query)
244
245 try:
246 spreadsheet_service = self._get_spreadsheet_service()
247
248 if is_url_key(key):
249 spreadsheet = spreadsheet_service.open_by_url(key)
250 else:
251 spreadsheet = spreadsheet_service.open_by_key(key)
252
253 data = parse_spreadsheet(SpreadsheetWrapper(spreadsheet), worksheet_num_or_title)
254
255 return json_dumps(data), None
256 except gspread.SpreadsheetNotFound:
257 return (
258 None,
259 "Spreadsheet ({}) not found. Make sure you used correct id.".format(key),
260 )
261 except APIError as e:
262 return None, parse_api_error(e)
263
264
265 register(GoogleSpreadsheet)
266
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redash/query_runner/google_spreadsheets.py b/redash/query_runner/google_spreadsheets.py
--- a/redash/query_runner/google_spreadsheets.py
+++ b/redash/query_runner/google_spreadsheets.py
@@ -23,6 +23,7 @@
try:
import google.auth
import gspread
+ from google.auth.exceptions import GoogleAuthError
from google.oauth2.service_account import Credentials
from gspread.exceptions import APIError
from gspread.exceptions import WorksheetNotFound as GSWorksheetNotFound
@@ -230,13 +231,17 @@
return spreadsheetservice
def test_connection(self):
- service = self._get_spreadsheet_service()
test_spreadsheet_key = "1S0mld7LMbUad8LYlo13Os9f7eNjw57MqVC0YiCd1Jis"
try:
+ service = self._get_spreadsheet_service()
service.open_by_key(test_spreadsheet_key).worksheets()
except APIError as e:
+ logger.exception(e)
message = parse_api_error(e)
raise Exception(message)
+ except GoogleAuthError as e:
+ logger.exception(e)
+ raise Exception(str(e))
def run_query(self, query, user):
logger.debug("Spreadsheet is about to execute query: %s", query)
|
{"golden_diff": "diff --git a/redash/query_runner/google_spreadsheets.py b/redash/query_runner/google_spreadsheets.py\n--- a/redash/query_runner/google_spreadsheets.py\n+++ b/redash/query_runner/google_spreadsheets.py\n@@ -23,6 +23,7 @@\n try:\n import google.auth\n import gspread\n+ from google.auth.exceptions import GoogleAuthError\n from google.oauth2.service_account import Credentials\n from gspread.exceptions import APIError\n from gspread.exceptions import WorksheetNotFound as GSWorksheetNotFound\n@@ -230,13 +231,17 @@\n return spreadsheetservice\n \n def test_connection(self):\n- service = self._get_spreadsheet_service()\n test_spreadsheet_key = \"1S0mld7LMbUad8LYlo13Os9f7eNjw57MqVC0YiCd1Jis\"\n try:\n+ service = self._get_spreadsheet_service()\n service.open_by_key(test_spreadsheet_key).worksheets()\n except APIError as e:\n+ logger.exception(e)\n message = parse_api_error(e)\n raise Exception(message)\n+ except GoogleAuthError as e:\n+ logger.exception(e)\n+ raise Exception(str(e))\n \n def run_query(self, query, user):\n logger.debug(\"Spreadsheet is about to execute query: %s\", query)\n", "issue": "For self-hosted instance, Google sheet connection test failed, but can't find any logs, how to trouble shooting such cases\n\r\n\r\n\r\n\r\nWhen test connections timeout, can't find any logs on worker/scheduler/server docker logs.\r\neven make sure the log level is debug, can't find any related logs. how to trouble shooting this\n", "before_files": [{"content": "import logging\nimport re\nfrom base64 import b64decode\n\nfrom dateutil import parser\nfrom requests import Session\nfrom xlsxwriter.utility import xl_col_to_name\n\nfrom redash.query_runner import (\n TYPE_BOOLEAN,\n TYPE_DATETIME,\n TYPE_FLOAT,\n TYPE_INTEGER,\n TYPE_STRING,\n BaseQueryRunner,\n guess_type,\n register,\n)\nfrom redash.utils import json_dumps, json_loads\n\nlogger = logging.getLogger(__name__)\n\ntry:\n import google.auth\n import gspread\n from google.oauth2.service_account import Credentials\n from gspread.exceptions import APIError\n from gspread.exceptions import WorksheetNotFound as GSWorksheetNotFound\n\n enabled = True\nexcept ImportError:\n enabled = False\n\n\ndef _load_key(filename):\n with open(filename, \"rb\") as f:\n return json_loads(f.read())\n\n\ndef _get_columns_and_column_names(row):\n column_names = []\n columns = []\n duplicate_counter = 1\n\n for i, column_name in enumerate(row):\n if not column_name:\n column_name = \"column_{}\".format(xl_col_to_name(i))\n\n if column_name in column_names:\n column_name = \"{}{}\".format(column_name, duplicate_counter)\n duplicate_counter += 1\n\n column_names.append(column_name)\n columns.append({\"name\": column_name, \"friendly_name\": column_name, \"type\": TYPE_STRING})\n\n return columns, column_names\n\n\ndef _value_eval_list(row_values, col_types):\n value_list = []\n raw_values = zip(col_types, row_values)\n for typ, rval in raw_values:\n try:\n if rval is None or rval == \"\":\n val = None\n elif typ == TYPE_BOOLEAN:\n val = True if str(rval).lower() == \"true\" else False\n elif typ == TYPE_DATETIME:\n val = parser.parse(rval)\n elif typ == TYPE_FLOAT:\n val = float(rval)\n elif typ == TYPE_INTEGER:\n val = int(rval)\n else:\n # for TYPE_STRING and default\n val = str(rval)\n value_list.append(val)\n except (ValueError, OverflowError):\n value_list.append(rval)\n return value_list\n\n\nHEADER_INDEX = 0\n\n\nclass WorksheetNotFoundError(Exception):\n def __init__(self, worksheet_num, worksheet_count):\n message = \"Worksheet number {} not 
found. Spreadsheet has {} worksheets. Note that the worksheet count is zero based.\".format(\n worksheet_num, worksheet_count\n )\n super(WorksheetNotFoundError, self).__init__(message)\n\n\nclass WorksheetNotFoundByTitleError(Exception):\n def __init__(self, worksheet_title):\n message = \"Worksheet title '{}' not found.\".format(worksheet_title)\n super(WorksheetNotFoundByTitleError, self).__init__(message)\n\n\ndef parse_query(query):\n values = query.split(\"|\")\n key = values[0] # key of the spreadsheet\n worksheet_num_or_title = 0 # A default value for when a number of inputs is invalid\n if len(values) == 2:\n s = values[1].strip()\n if len(s) > 0:\n if re.match(r\"^\\\"(.*?)\\\"$\", s):\n # A string quoted by \" means a title of worksheet\n worksheet_num_or_title = s[1:-1]\n else:\n # if spreadsheet contains more than one worksheet - this is the number of it\n worksheet_num_or_title = int(s)\n\n return key, worksheet_num_or_title\n\n\ndef parse_worksheet(worksheet):\n if not worksheet:\n return {\"columns\": [], \"rows\": []}\n\n columns, column_names = _get_columns_and_column_names(worksheet[HEADER_INDEX])\n\n if len(worksheet) > 1:\n for j, value in enumerate(worksheet[HEADER_INDEX + 1]):\n columns[j][\"type\"] = guess_type(value)\n\n column_types = [c[\"type\"] for c in columns]\n rows = [dict(zip(column_names, _value_eval_list(row, column_types))) for row in worksheet[HEADER_INDEX + 1 :]]\n data = {\"columns\": columns, \"rows\": rows}\n\n return data\n\n\ndef parse_spreadsheet(spreadsheet, worksheet_num_or_title):\n worksheet = None\n if isinstance(worksheet_num_or_title, int):\n worksheet = spreadsheet.get_worksheet_by_index(worksheet_num_or_title)\n if worksheet is None:\n worksheet_count = len(spreadsheet.worksheets())\n raise WorksheetNotFoundError(worksheet_num_or_title, worksheet_count)\n elif isinstance(worksheet_num_or_title, str):\n worksheet = spreadsheet.get_worksheet_by_title(worksheet_num_or_title)\n if worksheet is None:\n raise WorksheetNotFoundByTitleError(worksheet_num_or_title)\n\n worksheet_values = worksheet.get_all_values()\n\n return parse_worksheet(worksheet_values)\n\n\ndef is_url_key(key):\n return key.startswith(\"https://\")\n\n\ndef parse_api_error(error):\n error_data = error.response.json()\n\n if \"error\" in error_data and \"message\" in error_data[\"error\"]:\n message = error_data[\"error\"][\"message\"]\n else:\n message = str(error)\n\n return message\n\n\nclass SpreadsheetWrapper:\n def __init__(self, spreadsheet):\n self.spreadsheet = spreadsheet\n\n def worksheets(self):\n return self.spreadsheet.worksheets()\n\n def get_worksheet_by_index(self, index):\n return self.spreadsheet.get_worksheet(index)\n\n def get_worksheet_by_title(self, title):\n try:\n return self.spreadsheet.worksheet(title)\n except GSWorksheetNotFound:\n return None\n\n\nclass TimeoutSession(Session):\n def request(self, *args, **kwargs):\n kwargs.setdefault(\"timeout\", 300)\n return super(TimeoutSession, self).request(*args, **kwargs)\n\n\nclass GoogleSpreadsheet(BaseQueryRunner):\n should_annotate_query = False\n\n def __init__(self, configuration):\n super(GoogleSpreadsheet, self).__init__(configuration)\n self.syntax = \"custom\"\n\n @classmethod\n def name(cls):\n return \"Google Sheets\"\n\n @classmethod\n def type(cls):\n return \"google_spreadsheets\"\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def configuration_schema(cls):\n return {\n \"type\": \"object\",\n \"properties\": {\"jsonKeyFile\": {\"type\": \"string\", \"title\": 
\"JSON Key File (ADC is used if omitted)\"}},\n \"required\": [],\n \"secret\": [\"jsonKeyFile\"],\n }\n\n def _get_spreadsheet_service(self):\n scopes = [\"https://spreadsheets.google.com/feeds\"]\n\n try:\n key = json_loads(b64decode(self.configuration[\"jsonKeyFile\"]))\n creds = Credentials.from_service_account_info(key, scopes=scopes)\n except KeyError:\n creds = google.auth.default(scopes=scopes)[0]\n\n timeout_session = Session()\n timeout_session.requests_session = TimeoutSession()\n spreadsheetservice = gspread.Client(auth=creds, session=timeout_session)\n spreadsheetservice.login()\n return spreadsheetservice\n\n def test_connection(self):\n service = self._get_spreadsheet_service()\n test_spreadsheet_key = \"1S0mld7LMbUad8LYlo13Os9f7eNjw57MqVC0YiCd1Jis\"\n try:\n service.open_by_key(test_spreadsheet_key).worksheets()\n except APIError as e:\n message = parse_api_error(e)\n raise Exception(message)\n\n def run_query(self, query, user):\n logger.debug(\"Spreadsheet is about to execute query: %s\", query)\n key, worksheet_num_or_title = parse_query(query)\n\n try:\n spreadsheet_service = self._get_spreadsheet_service()\n\n if is_url_key(key):\n spreadsheet = spreadsheet_service.open_by_url(key)\n else:\n spreadsheet = spreadsheet_service.open_by_key(key)\n\n data = parse_spreadsheet(SpreadsheetWrapper(spreadsheet), worksheet_num_or_title)\n\n return json_dumps(data), None\n except gspread.SpreadsheetNotFound:\n return (\n None,\n \"Spreadsheet ({}) not found. Make sure you used correct id.\".format(key),\n )\n except APIError as e:\n return None, parse_api_error(e)\n\n\nregister(GoogleSpreadsheet)\n", "path": "redash/query_runner/google_spreadsheets.py"}], "after_files": [{"content": "import logging\nimport re\nfrom base64 import b64decode\n\nfrom dateutil import parser\nfrom requests import Session\nfrom xlsxwriter.utility import xl_col_to_name\n\nfrom redash.query_runner import (\n TYPE_BOOLEAN,\n TYPE_DATETIME,\n TYPE_FLOAT,\n TYPE_INTEGER,\n TYPE_STRING,\n BaseQueryRunner,\n guess_type,\n register,\n)\nfrom redash.utils import json_dumps, json_loads\n\nlogger = logging.getLogger(__name__)\n\ntry:\n import google.auth\n import gspread\n from google.auth.exceptions import GoogleAuthError\n from google.oauth2.service_account import Credentials\n from gspread.exceptions import APIError\n from gspread.exceptions import WorksheetNotFound as GSWorksheetNotFound\n\n enabled = True\nexcept ImportError:\n enabled = False\n\n\ndef _load_key(filename):\n with open(filename, \"rb\") as f:\n return json_loads(f.read())\n\n\ndef _get_columns_and_column_names(row):\n column_names = []\n columns = []\n duplicate_counter = 1\n\n for i, column_name in enumerate(row):\n if not column_name:\n column_name = \"column_{}\".format(xl_col_to_name(i))\n\n if column_name in column_names:\n column_name = \"{}{}\".format(column_name, duplicate_counter)\n duplicate_counter += 1\n\n column_names.append(column_name)\n columns.append({\"name\": column_name, \"friendly_name\": column_name, \"type\": TYPE_STRING})\n\n return columns, column_names\n\n\ndef _value_eval_list(row_values, col_types):\n value_list = []\n raw_values = zip(col_types, row_values)\n for typ, rval in raw_values:\n try:\n if rval is None or rval == \"\":\n val = None\n elif typ == TYPE_BOOLEAN:\n val = True if str(rval).lower() == \"true\" else False\n elif typ == TYPE_DATETIME:\n val = parser.parse(rval)\n elif typ == TYPE_FLOAT:\n val = float(rval)\n elif typ == TYPE_INTEGER:\n val = int(rval)\n else:\n # for TYPE_STRING and default\n val 
= str(rval)\n value_list.append(val)\n except (ValueError, OverflowError):\n value_list.append(rval)\n return value_list\n\n\nHEADER_INDEX = 0\n\n\nclass WorksheetNotFoundError(Exception):\n def __init__(self, worksheet_num, worksheet_count):\n message = \"Worksheet number {} not found. Spreadsheet has {} worksheets. Note that the worksheet count is zero based.\".format(\n worksheet_num, worksheet_count\n )\n super(WorksheetNotFoundError, self).__init__(message)\n\n\nclass WorksheetNotFoundByTitleError(Exception):\n def __init__(self, worksheet_title):\n message = \"Worksheet title '{}' not found.\".format(worksheet_title)\n super(WorksheetNotFoundByTitleError, self).__init__(message)\n\n\ndef parse_query(query):\n values = query.split(\"|\")\n key = values[0] # key of the spreadsheet\n worksheet_num_or_title = 0 # A default value for when a number of inputs is invalid\n if len(values) == 2:\n s = values[1].strip()\n if len(s) > 0:\n if re.match(r\"^\\\"(.*?)\\\"$\", s):\n # A string quoted by \" means a title of worksheet\n worksheet_num_or_title = s[1:-1]\n else:\n # if spreadsheet contains more than one worksheet - this is the number of it\n worksheet_num_or_title = int(s)\n\n return key, worksheet_num_or_title\n\n\ndef parse_worksheet(worksheet):\n if not worksheet:\n return {\"columns\": [], \"rows\": []}\n\n columns, column_names = _get_columns_and_column_names(worksheet[HEADER_INDEX])\n\n if len(worksheet) > 1:\n for j, value in enumerate(worksheet[HEADER_INDEX + 1]):\n columns[j][\"type\"] = guess_type(value)\n\n column_types = [c[\"type\"] for c in columns]\n rows = [dict(zip(column_names, _value_eval_list(row, column_types))) for row in worksheet[HEADER_INDEX + 1 :]]\n data = {\"columns\": columns, \"rows\": rows}\n\n return data\n\n\ndef parse_spreadsheet(spreadsheet, worksheet_num_or_title):\n worksheet = None\n if isinstance(worksheet_num_or_title, int):\n worksheet = spreadsheet.get_worksheet_by_index(worksheet_num_or_title)\n if worksheet is None:\n worksheet_count = len(spreadsheet.worksheets())\n raise WorksheetNotFoundError(worksheet_num_or_title, worksheet_count)\n elif isinstance(worksheet_num_or_title, str):\n worksheet = spreadsheet.get_worksheet_by_title(worksheet_num_or_title)\n if worksheet is None:\n raise WorksheetNotFoundByTitleError(worksheet_num_or_title)\n\n worksheet_values = worksheet.get_all_values()\n\n return parse_worksheet(worksheet_values)\n\n\ndef is_url_key(key):\n return key.startswith(\"https://\")\n\n\ndef parse_api_error(error):\n error_data = error.response.json()\n\n if \"error\" in error_data and \"message\" in error_data[\"error\"]:\n message = error_data[\"error\"][\"message\"]\n else:\n message = str(error)\n\n return message\n\n\nclass SpreadsheetWrapper:\n def __init__(self, spreadsheet):\n self.spreadsheet = spreadsheet\n\n def worksheets(self):\n return self.spreadsheet.worksheets()\n\n def get_worksheet_by_index(self, index):\n return self.spreadsheet.get_worksheet(index)\n\n def get_worksheet_by_title(self, title):\n try:\n return self.spreadsheet.worksheet(title)\n except GSWorksheetNotFound:\n return None\n\n\nclass TimeoutSession(Session):\n def request(self, *args, **kwargs):\n kwargs.setdefault(\"timeout\", 300)\n return super(TimeoutSession, self).request(*args, **kwargs)\n\n\nclass GoogleSpreadsheet(BaseQueryRunner):\n should_annotate_query = False\n\n def __init__(self, configuration):\n super(GoogleSpreadsheet, self).__init__(configuration)\n self.syntax = \"custom\"\n\n @classmethod\n def name(cls):\n return \"Google 
Sheets\"\n\n @classmethod\n def type(cls):\n return \"google_spreadsheets\"\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def configuration_schema(cls):\n return {\n \"type\": \"object\",\n \"properties\": {\"jsonKeyFile\": {\"type\": \"string\", \"title\": \"JSON Key File (ADC is used if omitted)\"}},\n \"required\": [],\n \"secret\": [\"jsonKeyFile\"],\n }\n\n def _get_spreadsheet_service(self):\n scopes = [\"https://spreadsheets.google.com/feeds\"]\n\n try:\n key = json_loads(b64decode(self.configuration[\"jsonKeyFile\"]))\n creds = Credentials.from_service_account_info(key, scopes=scopes)\n except KeyError:\n creds = google.auth.default(scopes=scopes)[0]\n\n timeout_session = Session()\n timeout_session.requests_session = TimeoutSession()\n spreadsheetservice = gspread.Client(auth=creds, session=timeout_session)\n spreadsheetservice.login()\n return spreadsheetservice\n\n def test_connection(self):\n test_spreadsheet_key = \"1S0mld7LMbUad8LYlo13Os9f7eNjw57MqVC0YiCd1Jis\"\n try:\n service = self._get_spreadsheet_service()\n service.open_by_key(test_spreadsheet_key).worksheets()\n except APIError as e:\n logger.exception(e)\n message = parse_api_error(e)\n raise Exception(message)\n except GoogleAuthError as e:\n logger.exception(e)\n raise Exception(str(e))\n\n def run_query(self, query, user):\n logger.debug(\"Spreadsheet is about to execute query: %s\", query)\n key, worksheet_num_or_title = parse_query(query)\n\n try:\n spreadsheet_service = self._get_spreadsheet_service()\n\n if is_url_key(key):\n spreadsheet = spreadsheet_service.open_by_url(key)\n else:\n spreadsheet = spreadsheet_service.open_by_key(key)\n\n data = parse_spreadsheet(SpreadsheetWrapper(spreadsheet), worksheet_num_or_title)\n\n return json_dumps(data), None\n except gspread.SpreadsheetNotFound:\n return (\n None,\n \"Spreadsheet ({}) not found. Make sure you used correct id.\".format(key),\n )\n except APIError as e:\n return None, parse_api_error(e)\n\n\nregister(GoogleSpreadsheet)\n", "path": "redash/query_runner/google_spreadsheets.py"}]}
| 2,961 | 304 |
gh_patches_debug_22466
|
rasdani/github-patches
|
git_diff
|
encode__starlette-1262
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
test_custom_middleware[trio] fails on 1-core-VM
### Checklist
<!-- Please make sure you check all these items before submitting your bug report. -->
- [X] The bug is reproducible against the latest release and/or `master`.
- [ ] There are no similar issues or pull requests to fix it yet.
### Describe the bug
While working on [reproducible builds](https://reproducible-builds.org/) for [openSUSE](https://en.opensuse.org/openSUSE:Reproducible_Builds), I found that our `python-starlette-0.16.0` package failed 1 test when running in a 1-core-VM (usually this happens due to differences in scheduling/timing)
### To reproduce
maybe run tests as `taskset 1 pytest`
or on Debian or openSUSE run
```
osc checkout openSUSE:Factory/python-starlette && cd $_
osc build --vm-type=kvm --noservice --clean -j1 standard
```
<!-- Provide a *minimal* example with steps to reproduce the bug locally.
NOTE: try to keep any external dependencies *at an absolute minimum*
(middleware, servers, proxies, certificates...).
In other words, remove anything that doesn't make the bug go away.
-->
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
tests should pass
### Actual behavior
`test_custom_middleware[trio]` fails
<!-- A clear and concise description of what actually happens. -->
### Debugging material
```
=================================== FAILURES ===================================
_________________________ test_custom_middleware[trio] _________________________
test_client_factory = functools.partial(<class 'starlette.testclient.TestClient'>, backend='trio', backend_options={})
def test_custom_middleware(test_client_factory):
client = test_client_factory(app)
response = client.get("/")
assert response.headers["Custom-Header"] == "Example"
with pytest.raises(Exception):
> response = client.get("/exc")
tests/middleware/test_base.py:56:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.9/site-packages/requests/sessions.py:555: in get
return self.request('GET', url, **kwargs)
starlette/testclient.py:468: in request
return super().request(
/usr/lib/python3.9/site-packages/requests/sessions.py:542: in request
resp = self.send(prep, **send_kwargs)
/usr/lib/python3.9/site-packages/requests/sessions.py:655: in send
r = adapter.send(request, **kwargs)
starlette/testclient.py:266: in send
raise exc
starlette/testclient.py:263: in send
portal.call(self.app, scope, receive, send)
/usr/lib/python3.9/site-packages/anyio/from_thread.py:229: in call
return self.start_task_soon(func, *args).result()
/usr/lib64/python3.9/concurrent/futures/_base.py:445: in result
return self.__get_result()
/usr/lib64/python3.9/concurrent/futures/_base.py:390: in __get_result
raise self._exception
/usr/lib/python3.9/site-packages/anyio/from_thread.py:176: in _call_func
retval = await retval
starlette/applications.py:112: in __call__
await self.middleware_stack(scope, receive, send)
starlette/middleware/errors.py:159: in __call__
await self.app(scope, receive, _send)
starlette/middleware/base.py:57: in __call__
task_group.cancel_scope.cancel()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <anyio._backends._trio.TaskGroup object at 0x7f1d7c50a5b0>
exc_type = <class 'RuntimeError'>
exc_val = RuntimeError('No response returned.')
exc_tb = <traceback object at 0x7f1d7c5294c0>
async def __aexit__(self, exc_type: Optional[Type[BaseException]],
exc_val: Optional[BaseException],
exc_tb: Optional[TracebackType]) -> Optional[bool]:
try:
return await self._nursery_manager.__aexit__(exc_type, exc_val, exc_tb)
except trio.MultiError as exc:
> raise ExceptionGroup(exc.exceptions) from None
E anyio._backends._trio.ExceptionGroup: 2 exceptions were raised in the task group:
E ----------------------------
E Traceback (most recent call last):
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py", line 30, in coro
E await self.app(scope, request.receive, send_stream.send)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/exceptions.py", line 82, in __call__
E raise exc
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/exceptions.py", line 71, in __call__
E await self.app(scope, receive, sender)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/routing.py", line 656, in __call__
E await route.handle(scope, receive, send)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/routing.py", line 259, in handle
E await self.app(scope, receive, send)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/routing.py", line 63, in app
E response = await run_in_threadpool(func, request)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/concurrency.py", line 39, in run_in_threadpool
E return await anyio.to_thread.run_sync(func, *args)
E File "/usr/lib/python3.9/site-packages/anyio/to_thread.py", line 28, in run_sync
E return await get_asynclib().run_sync_in_worker_thread(func, *args, cancellable=cancellable,
E File "/usr/lib/python3.9/site-packages/anyio/_backends/_trio.py", line 170, in run_sync_in_worker_thread
E return await run_sync(wrapper, cancellable=cancellable, limiter=limiter)
E File "/usr/lib/python3.9/site-packages/trio/_threads.py", line 205, in to_thread_run_sync
E return await trio.lowlevel.wait_task_rescheduled(abort)
E File "/usr/lib/python3.9/site-packages/trio/_core/_traps.py", line 166, in wait_task_rescheduled
E return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap()
E File "/usr/lib/python3.9/site-packages/outcome/_sync.py", line 111, in unwrap
E raise captured_error
E File "/usr/lib/python3.9/site-packages/trio/_threads.py", line 155, in do_release_then_return_result
E return result.unwrap()
E File "/usr/lib/python3.9/site-packages/outcome/_sync.py", line 111, in unwrap
E raise captured_error
E File "/usr/lib/python3.9/site-packages/trio/_threads.py", line 168, in worker_fn
E ret = sync_fn(*args)
E File "/usr/lib/python3.9/site-packages/anyio/_backends/_trio.py", line 168, in wrapper
E return func(*args)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/tests/middleware/test_base.py", line 28, in exc
E raise Exception()
E Exception
E ----------------------------
E Traceback (most recent call last):
E File "/usr/lib/python3.9/site-packages/anyio/streams/memory.py", line 78, in receive
E return self.receive_nowait()
E File "/usr/lib/python3.9/site-packages/anyio/streams/memory.py", line 73, in receive_nowait
E raise WouldBlock
E anyio.WouldBlock
E
E During handling of the above exception, another exception occurred:
E
E Traceback (most recent call last):
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py", line 35, in call_next
E message = await recv_stream.receive()
E File "/usr/lib/python3.9/site-packages/anyio/streams/memory.py", line 98, in receive
E raise EndOfStream
E anyio.EndOfStream
E
E During handling of the above exception, another exception occurred:
E
E Traceback (most recent call last):
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py", line 55, in __call__
E response = await self.dispatch_func(request, call_next)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/tests/middleware/test_base.py", line 12, in dispatch
E response = await call_next(request)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py", line 37, in call_next
E raise RuntimeError("No response returned.")
E RuntimeError: No response returned.
/usr/lib/python3.9/site-packages/anyio/_backends/_trio.py:141: ExceptionGroup
=========================== short test summary info ============================
SKIPPED [3] tests/conftest.py:14: Trio not supported (yet!)
=================== 1 failed, 464 passed, 3 skipped in 4.95s ===================
```
<!-- Any tracebacks, screenshots, etc. that can help understanding the problem.
NOTE:
- Please list tracebacks in full (don't truncate them).
- Consider using `<details>` to make tracebacks/logs collapsible if they're very large (see https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d).
-->
### Environment
- OS: Linux = openSUSE Tumbleweed 20210726
- Python version: 3.9
- Starlette version: 0.16.0
### Additional context
This bug was found while working on [reproducible builds for openSUSE](https://en.opensuse.org/openSUSE:Reproducible_Builds).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/middleware/base.py`
Content:
```
1 import typing
2
3 import anyio
4
5 from starlette.requests import Request
6 from starlette.responses import Response, StreamingResponse
7 from starlette.types import ASGIApp, Receive, Scope, Send
8
9 RequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]
10 DispatchFunction = typing.Callable[
11 [Request, RequestResponseEndpoint], typing.Awaitable[Response]
12 ]
13
14
15 class BaseHTTPMiddleware:
16 def __init__(self, app: ASGIApp, dispatch: DispatchFunction = None) -> None:
17 self.app = app
18 self.dispatch_func = self.dispatch if dispatch is None else dispatch
19
20 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
21 if scope["type"] != "http":
22 await self.app(scope, receive, send)
23 return
24
25 async def call_next(request: Request) -> Response:
26 send_stream, recv_stream = anyio.create_memory_object_stream()
27
28 async def coro() -> None:
29 async with send_stream:
30 await self.app(scope, request.receive, send_stream.send)
31
32 task_group.start_soon(coro)
33
34 try:
35 message = await recv_stream.receive()
36 except anyio.EndOfStream:
37 raise RuntimeError("No response returned.")
38
39 assert message["type"] == "http.response.start"
40
41 async def body_stream() -> typing.AsyncGenerator[bytes, None]:
42 async with recv_stream:
43 async for message in recv_stream:
44 assert message["type"] == "http.response.body"
45 yield message.get("body", b"")
46
47 response = StreamingResponse(
48 status_code=message["status"], content=body_stream()
49 )
50 response.raw_headers = message["headers"]
51 return response
52
53 async with anyio.create_task_group() as task_group:
54 request = Request(scope, receive=receive)
55 response = await self.dispatch_func(request, call_next)
56 await response(scope, receive, send)
57 task_group.cancel_scope.cancel()
58
59 async def dispatch(
60 self, request: Request, call_next: RequestResponseEndpoint
61 ) -> Response:
62 raise NotImplementedError() # pragma: no cover
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py
--- a/starlette/middleware/base.py
+++ b/starlette/middleware/base.py
@@ -23,17 +23,25 @@
return
async def call_next(request: Request) -> Response:
+ app_exc: typing.Optional[Exception] = None
send_stream, recv_stream = anyio.create_memory_object_stream()
async def coro() -> None:
+ nonlocal app_exc
+
async with send_stream:
- await self.app(scope, request.receive, send_stream.send)
+ try:
+ await self.app(scope, request.receive, send_stream.send)
+ except Exception as exc:
+ app_exc = exc
task_group.start_soon(coro)
try:
message = await recv_stream.receive()
except anyio.EndOfStream:
+ if app_exc is not None:
+ raise app_exc
raise RuntimeError("No response returned.")
assert message["type"] == "http.response.start"
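The patch above captures any exception raised by the wrapped app inside the task-group coroutine (`app_exc`) and re-raises it once the response stream ends, instead of masking it behind `RuntimeError("No response returned.")`. Here is a small self-contained sketch of that capture-and-reraise pattern, assuming only that `anyio` is installed; the `RuntimeError("boom")` stands in for the downstream ASGI app failing.

```python
import anyio


async def main():
    app_exc = None                                  # captured exception, as in the patch
    send_stream, recv_stream = anyio.create_memory_object_stream()

    async def coro():
        nonlocal app_exc
        async with send_stream:                     # closing the stream ends the receiver
            try:
                raise RuntimeError("boom")          # stand-in for the wrapped app raising
            except Exception as exc:
                app_exc = exc                       # keep it instead of crashing the task group

    async with anyio.create_task_group() as tg:
        tg.start_soon(coro)
        try:
            await recv_stream.receive()
        except anyio.EndOfStream:
            if app_exc is not None:
                raise app_exc                       # surface the real error
            raise RuntimeError("No response returned.")


try:
    anyio.run(main)
except RuntimeError as exc:
    print("propagated from the app:", exc)
```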
|
{"golden_diff": "diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py\n--- a/starlette/middleware/base.py\n+++ b/starlette/middleware/base.py\n@@ -23,17 +23,25 @@\n return\n \n async def call_next(request: Request) -> Response:\n+ app_exc: typing.Optional[Exception] = None\n send_stream, recv_stream = anyio.create_memory_object_stream()\n \n async def coro() -> None:\n+ nonlocal app_exc\n+\n async with send_stream:\n- await self.app(scope, request.receive, send_stream.send)\n+ try:\n+ await self.app(scope, request.receive, send_stream.send)\n+ except Exception as exc:\n+ app_exc = exc\n \n task_group.start_soon(coro)\n \n try:\n message = await recv_stream.receive()\n except anyio.EndOfStream:\n+ if app_exc is not None:\n+ raise app_exc\n raise RuntimeError(\"No response returned.\")\n \n assert message[\"type\"] == \"http.response.start\"\n", "issue": "test_custom_middleware[trio] fails on 1-core-VM\n### Checklist\r\n\r\n<!-- Please make sure you check all these items before submitting your bug report. -->\r\n\r\n- [X] The bug is reproducible against the latest release and/or `master`.\r\n- [ ] There are no similar issues or pull requests to fix it yet.\r\n\r\n### Describe the bug\r\n\r\nWhile working on [reproducible builds](https://reproducible-builds.org/) for [openSUSE](https://en.opensuse.org/openSUSE:Reproducible_Builds), I found that our `python-starlette-0.16.0` package failed 1 test when running in a 1-core-VM (usually this happens due to differences in scheduling/timing)\r\n\r\n\r\n\r\n### To reproduce\r\n\r\nmaybe run tests as `taskset 1 pytest`\r\n\r\nor on Debian or openSUSE run\r\n```\r\nosc checkout openSUSE:Factory/python-starlette && cd $_\r\nosc build --vm-type=kvm --noservice --clean -j1 standard\r\n```\r\n\r\n<!-- Provide a *minimal* example with steps to reproduce the bug locally.\r\n\r\nNOTE: try to keep any external dependencies *at an absolute minimum*\r\n(middleware, servers, proxies, certificates...).\r\nIn other words, remove anything that doesn't make the bug go away.\r\n-->\r\n\r\n### Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\ntests should pass\r\n\r\n### Actual behavior\r\n\r\n`test_custom_middleware[trio]` fails\r\n\r\n<!-- A clear and concise description of what actually happens. 
-->\r\n\r\n### Debugging material\r\n\r\n```\r\n =================================== FAILURES ===================================\r\n _________________________ test_custom_middleware[trio] _________________________\r\n \r\n test_client_factory = functools.partial(<class 'starlette.testclient.TestClient'>, backend='trio', backend_options={})\r\n \r\n def test_custom_middleware(test_client_factory):\r\n client = test_client_factory(app)\r\n response = client.get(\"/\")\r\n assert response.headers[\"Custom-Header\"] == \"Example\"\r\n \r\n with pytest.raises(Exception):\r\n > response = client.get(\"/exc\")\r\n \r\n tests/middleware/test_base.py:56: \r\n _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n /usr/lib/python3.9/site-packages/requests/sessions.py:555: in get\r\n return self.request('GET', url, **kwargs)\r\n starlette/testclient.py:468: in request\r\n return super().request(\r\n /usr/lib/python3.9/site-packages/requests/sessions.py:542: in request\r\n resp = self.send(prep, **send_kwargs)\r\n /usr/lib/python3.9/site-packages/requests/sessions.py:655: in send\r\n r = adapter.send(request, **kwargs)\r\n starlette/testclient.py:266: in send\r\n raise exc\r\n starlette/testclient.py:263: in send\r\n portal.call(self.app, scope, receive, send)\r\n /usr/lib/python3.9/site-packages/anyio/from_thread.py:229: in call\r\n return self.start_task_soon(func, *args).result()\r\n /usr/lib64/python3.9/concurrent/futures/_base.py:445: in result\r\n return self.__get_result()\r\n /usr/lib64/python3.9/concurrent/futures/_base.py:390: in __get_result\r\n raise self._exception\r\n /usr/lib/python3.9/site-packages/anyio/from_thread.py:176: in _call_func\r\n retval = await retval\r\n starlette/applications.py:112: in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n starlette/middleware/errors.py:159: in __call__\r\n await self.app(scope, receive, _send)\r\n starlette/middleware/base.py:57: in __call__\r\n task_group.cancel_scope.cancel()\r\n _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n \r\n self = <anyio._backends._trio.TaskGroup object at 0x7f1d7c50a5b0>\r\n exc_type = <class 'RuntimeError'>\r\n exc_val = RuntimeError('No response returned.')\r\n exc_tb = <traceback object at 0x7f1d7c5294c0>\r\n \r\n async def __aexit__(self, exc_type: Optional[Type[BaseException]],\r\n exc_val: Optional[BaseException],\r\n exc_tb: Optional[TracebackType]) -> Optional[bool]:\r\n try:\r\n return await self._nursery_manager.__aexit__(exc_type, exc_val, exc_tb)\r\n except trio.MultiError as exc:\r\n > raise ExceptionGroup(exc.exceptions) from None\r\n E anyio._backends._trio.ExceptionGroup: 2 exceptions were raised in the task group:\r\n E ----------------------------\r\n E Traceback (most recent call last):\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py\", line 30, in coro\r\n E await self.app(scope, request.receive, send_stream.send)\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/exceptions.py\", line 82, in __call__\r\n E raise exc\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/exceptions.py\", line 71, in __call__\r\n E await self.app(scope, receive, sender)\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/routing.py\", line 656, in __call__\r\n E await route.handle(scope, receive, send)\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/routing.py\", line 259, in handle\r\n E await self.app(scope, receive, 
send)\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/routing.py\", line 63, in app\r\n E response = await run_in_threadpool(func, request)\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/concurrency.py\", line 39, in run_in_threadpool\r\n E return await anyio.to_thread.run_sync(func, *args)\r\n E File \"/usr/lib/python3.9/site-packages/anyio/to_thread.py\", line 28, in run_sync\r\n E return await get_asynclib().run_sync_in_worker_thread(func, *args, cancellable=cancellable,\r\n E File \"/usr/lib/python3.9/site-packages/anyio/_backends/_trio.py\", line 170, in run_sync_in_worker_thread\r\n E return await run_sync(wrapper, cancellable=cancellable, limiter=limiter)\r\n E File \"/usr/lib/python3.9/site-packages/trio/_threads.py\", line 205, in to_thread_run_sync\r\n E return await trio.lowlevel.wait_task_rescheduled(abort)\r\n E File \"/usr/lib/python3.9/site-packages/trio/_core/_traps.py\", line 166, in wait_task_rescheduled\r\n E return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap()\r\n E File \"/usr/lib/python3.9/site-packages/outcome/_sync.py\", line 111, in unwrap\r\n E raise captured_error\r\n E File \"/usr/lib/python3.9/site-packages/trio/_threads.py\", line 155, in do_release_then_return_result\r\n E return result.unwrap()\r\n E File \"/usr/lib/python3.9/site-packages/outcome/_sync.py\", line 111, in unwrap\r\n E raise captured_error\r\n E File \"/usr/lib/python3.9/site-packages/trio/_threads.py\", line 168, in worker_fn\r\n E ret = sync_fn(*args)\r\n E File \"/usr/lib/python3.9/site-packages/anyio/_backends/_trio.py\", line 168, in wrapper\r\n E return func(*args)\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/tests/middleware/test_base.py\", line 28, in exc\r\n E raise Exception()\r\n E Exception\r\n E ----------------------------\r\n E Traceback (most recent call last):\r\n E File \"/usr/lib/python3.9/site-packages/anyio/streams/memory.py\", line 78, in receive\r\n E return self.receive_nowait()\r\n E File \"/usr/lib/python3.9/site-packages/anyio/streams/memory.py\", line 73, in receive_nowait\r\n E raise WouldBlock\r\n E anyio.WouldBlock\r\n E \r\n E During handling of the above exception, another exception occurred:\r\n E \r\n E Traceback (most recent call last):\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py\", line 35, in call_next\r\n E message = await recv_stream.receive()\r\n E File \"/usr/lib/python3.9/site-packages/anyio/streams/memory.py\", line 98, in receive\r\n E raise EndOfStream\r\n E anyio.EndOfStream\r\n E \r\n E During handling of the above exception, another exception occurred:\r\n E \r\n E Traceback (most recent call last):\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py\", line 55, in __call__\r\n E response = await self.dispatch_func(request, call_next)\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/tests/middleware/test_base.py\", line 12, in dispatch\r\n E response = await call_next(request)\r\n E File \"/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py\", line 37, in call_next\r\n E raise RuntimeError(\"No response returned.\")\r\n E RuntimeError: No response returned.\r\n \r\n /usr/lib/python3.9/site-packages/anyio/_backends/_trio.py:141: ExceptionGroup\r\n =========================== short test summary info ============================\r\n SKIPPED [3] tests/conftest.py:14: Trio not supported (yet!)\r\n =================== 1 failed, 464 passed, 3 skipped in 4.95s 
===================\r\n```\r\n\r\n<!-- Any tracebacks, screenshots, etc. that can help understanding the problem.\r\n\r\nNOTE:\r\n- Please list tracebacks in full (don't truncate them).\r\n- Consider using `<details>` to make tracebacks/logs collapsible if they're very large (see https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d).\r\n-->\r\n\r\n### Environment\r\n\r\n- OS: Linux = openSUSE Tumbleweed 20210726 \r\n- Python version: 3.9\r\n- Starlette version: 0.16.0\r\n\r\n### Additional context\r\n\r\nThis bug was found while working on [reproducible builds for openSUSE](https://en.opensuse.org/openSUSE:Reproducible_Builds).\n", "before_files": [{"content": "import typing\n\nimport anyio\n\nfrom starlette.requests import Request\nfrom starlette.responses import Response, StreamingResponse\nfrom starlette.types import ASGIApp, Receive, Scope, Send\n\nRequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]\nDispatchFunction = typing.Callable[\n [Request, RequestResponseEndpoint], typing.Awaitable[Response]\n]\n\n\nclass BaseHTTPMiddleware:\n def __init__(self, app: ASGIApp, dispatch: DispatchFunction = None) -> None:\n self.app = app\n self.dispatch_func = self.dispatch if dispatch is None else dispatch\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n async def call_next(request: Request) -> Response:\n send_stream, recv_stream = anyio.create_memory_object_stream()\n\n async def coro() -> None:\n async with send_stream:\n await self.app(scope, request.receive, send_stream.send)\n\n task_group.start_soon(coro)\n\n try:\n message = await recv_stream.receive()\n except anyio.EndOfStream:\n raise RuntimeError(\"No response returned.\")\n\n assert message[\"type\"] == \"http.response.start\"\n\n async def body_stream() -> typing.AsyncGenerator[bytes, None]:\n async with recv_stream:\n async for message in recv_stream:\n assert message[\"type\"] == \"http.response.body\"\n yield message.get(\"body\", b\"\")\n\n response = StreamingResponse(\n status_code=message[\"status\"], content=body_stream()\n )\n response.raw_headers = message[\"headers\"]\n return response\n\n async with anyio.create_task_group() as task_group:\n request = Request(scope, receive=receive)\n response = await self.dispatch_func(request, call_next)\n await response(scope, receive, send)\n task_group.cancel_scope.cancel()\n\n async def dispatch(\n self, request: Request, call_next: RequestResponseEndpoint\n ) -> Response:\n raise NotImplementedError() # pragma: no cover\n", "path": "starlette/middleware/base.py"}], "after_files": [{"content": "import typing\n\nimport anyio\n\nfrom starlette.requests import Request\nfrom starlette.responses import Response, StreamingResponse\nfrom starlette.types import ASGIApp, Receive, Scope, Send\n\nRequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]\nDispatchFunction = typing.Callable[\n [Request, RequestResponseEndpoint], typing.Awaitable[Response]\n]\n\n\nclass BaseHTTPMiddleware:\n def __init__(self, app: ASGIApp, dispatch: DispatchFunction = None) -> None:\n self.app = app\n self.dispatch_func = self.dispatch if dispatch is None else dispatch\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n async def call_next(request: Request) -> Response:\n app_exc: typing.Optional[Exception] = 
None\n send_stream, recv_stream = anyio.create_memory_object_stream()\n\n async def coro() -> None:\n nonlocal app_exc\n\n async with send_stream:\n try:\n await self.app(scope, request.receive, send_stream.send)\n except Exception as exc:\n app_exc = exc\n\n task_group.start_soon(coro)\n\n try:\n message = await recv_stream.receive()\n except anyio.EndOfStream:\n if app_exc is not None:\n raise app_exc\n raise RuntimeError(\"No response returned.\")\n\n assert message[\"type\"] == \"http.response.start\"\n\n async def body_stream() -> typing.AsyncGenerator[bytes, None]:\n async with recv_stream:\n async for message in recv_stream:\n assert message[\"type\"] == \"http.response.body\"\n yield message.get(\"body\", b\"\")\n\n response = StreamingResponse(\n status_code=message[\"status\"], content=body_stream()\n )\n response.raw_headers = message[\"headers\"]\n return response\n\n async with anyio.create_task_group() as task_group:\n request = Request(scope, receive=receive)\n response = await self.dispatch_func(request, call_next)\n await response(scope, receive, send)\n task_group.cancel_scope.cancel()\n\n async def dispatch(\n self, request: Request, call_next: RequestResponseEndpoint\n ) -> Response:\n raise NotImplementedError() # pragma: no cover\n", "path": "starlette/middleware/base.py"}]}
| 3,365 | 227 |
gh_patches_debug_3886
|
rasdani/github-patches
|
git_diff
|
nilearn__nilearn-1169
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dropping scikit-learn dependency < 0.14.1
It makes easier to set base for decoder object in nilearn which actually requires lot of backports for versions. #1148
I don't have great justifications though. Let me know if I miss something important in nilearn which we need to take into account dropping 0.13.
FYI: https://packages.debian.org/jessie/python-sklearn
Discussions are welcome.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nilearn/version.py`
Content:
```
1 # *- encoding: utf-8 -*-
2 """
3 nilearn version, required package versions, and utilities for checking
4 """
5 # Author: Loïc Estève, Ben Cipollini
6 # License: simplified BSD
7
8 # PEP0440 compatible formatted version, see:
9 # https://www.python.org/dev/peps/pep-0440/
10 #
11 # Generic release markers:
12 # X.Y
13 # X.Y.Z # For bugfix releases
14 #
15 # Admissible pre-release markers:
16 # X.YaN # Alpha release
17 # X.YbN # Beta release
18 # X.YrcN # Release Candidate
19 # X.Y # Final release
20 #
21 # Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
22 # 'X.Y.dev0' is the canonical version of 'X.Y.dev'
23 #
24 __version__ = '0.2.5'
25
26 _NILEARN_INSTALL_MSG = 'See %s for installation information.' % (
27 'http://nilearn.github.io/introduction.html#installation')
28
29 # This is a tuple to preserve order, so that dependencies are checked
30 # in some meaningful order (more => less 'core'). We avoid using
31 # collections.OrderedDict to preserve Python 2.6 compatibility.
32 REQUIRED_MODULE_METADATA = (
33 ('numpy', {
34 'min_version': '1.6.1',
35 'required_at_installation': True,
36 'install_info': _NILEARN_INSTALL_MSG}),
37 ('scipy', {
38 'min_version': '0.9.0',
39 'required_at_installation': True,
40 'install_info': _NILEARN_INSTALL_MSG}),
41 ('sklearn', {
42 'min_version': '0.13',
43 'required_at_installation': True,
44 'install_info': _NILEARN_INSTALL_MSG}),
45 ('nibabel', {
46 'min_version': '1.1.0',
47 'required_at_installation': False}))
48
49 OPTIONAL_MATPLOTLIB_MIN_VERSION = '1.1.1'
50
51
52 def _import_module_with_version_check(
53 module_name,
54 minimum_version,
55 install_info=None):
56 """Check that module is installed with a recent enough version
57 """
58 from distutils.version import LooseVersion
59
60 try:
61 module = __import__(module_name)
62 except ImportError as exc:
63 user_friendly_info = ('Module "{0}" could not be found. {1}').format(
64 module_name,
65 install_info or 'Please install it properly to use nilearn.')
66 exc.args += (user_friendly_info,)
67 raise
68
69 # Avoid choking on modules with no __version__ attribute
70 module_version = getattr(module, '__version__', '0.0.0')
71
72 version_too_old = (not LooseVersion(module_version) >=
73 LooseVersion(minimum_version))
74
75 if version_too_old:
76 message = (
77 'A {module_name} version of at least {minimum_version} '
78 'is required to use nilearn. {module_version} was found. '
79 'Please upgrade {module_name}').format(
80 module_name=module_name,
81 minimum_version=minimum_version,
82 module_version=module_version)
83
84 raise ImportError(message)
85
86 return module
87
88
89 def _check_module_dependencies(is_nilearn_installing=False):
90 """Throw an exception if nilearn dependencies are not installed.
91
92 Parameters
93 ----------
94 is_nilearn_installing: boolean
95 if True, only error on missing packages that cannot be auto-installed.
96 if False, error on any missing package.
97
98 Throws
99 -------
100 ImportError
101 """
102
103 for (module_name, module_metadata) in REQUIRED_MODULE_METADATA:
104 if not (is_nilearn_installing and
105 not module_metadata['required_at_installation']):
106 # Skip check only when installing and it's a module that
107 # will be auto-installed.
108 _import_module_with_version_check(
109 module_name=module_name,
110 minimum_version=module_metadata['min_version'],
111 install_info=module_metadata.get('install_info'))
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/nilearn/version.py b/nilearn/version.py
--- a/nilearn/version.py
+++ b/nilearn/version.py
@@ -39,7 +39,7 @@
'required_at_installation': True,
'install_info': _NILEARN_INSTALL_MSG}),
('sklearn', {
- 'min_version': '0.13',
+ 'min_version': '0.14.1',
'required_at_installation': True,
'install_info': _NILEARN_INSTALL_MSG}),
('nibabel', {
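
To make the effect of the bumped floor concrete, here is a minimal, self-contained sketch of the version comparison that `_import_module_with_version_check` in `nilearn/version.py` performs once the sklearn minimum is 0.14.1. The helper name and the sample version strings below are illustrative assumptions, not taken from the patch above.

```python
from distutils.version import LooseVersion

def meets_minimum(installed, minimum="0.14.1"):
    # Mirrors the comparison used above: the installed version must be
    # greater than or equal to the required floor, otherwise nilearn
    # raises an ImportError asking the user to upgrade.
    return LooseVersion(installed) >= LooseVersion(minimum)

# Hypothetical installed sklearn versions (assumed, for illustration only):
print(meets_minimum("0.13.1"))  # False -> would trigger the upgrade ImportError
print(meets_minimum("0.14.1"))  # True  -> import proceeds
print(meets_minimum("0.15.2"))  # True
```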
verification_info:
{"golden_diff": "diff --git a/nilearn/version.py b/nilearn/version.py\n--- a/nilearn/version.py\n+++ b/nilearn/version.py\n@@ -39,7 +39,7 @@\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('sklearn', {\n- 'min_version': '0.13',\n+ 'min_version': '0.14.1',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('nibabel', {\n", "issue": "Dropping scikit-learn dependency < 0.14.1\nIt makes easier to set base for decoder object in nilearn which actually requires lot of backports for versions. #1148 \n\nI don't have great justifications though. Let me know if I miss something important in nilearn which we need to take into account dropping 0.13.\n\nFYI: https://packages.debian.org/jessie/python-sklearn\n\nDiscussions are welcome.\n\n", "before_files": [{"content": "# *- encoding: utf-8 -*-\n\"\"\"\nnilearn version, required package versions, and utilities for checking\n\"\"\"\n# Author: Lo\u00efc Est\u00e8ve, Ben Cipollini\n# License: simplified BSD\n\n# PEP0440 compatible formatted version, see:\n# https://www.python.org/dev/peps/pep-0440/\n#\n# Generic release markers:\n# X.Y\n# X.Y.Z # For bugfix releases\n#\n# Admissible pre-release markers:\n# X.YaN # Alpha release\n# X.YbN # Beta release\n# X.YrcN # Release Candidate\n# X.Y # Final release\n#\n# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.\n# 'X.Y.dev0' is the canonical version of 'X.Y.dev'\n#\n__version__ = '0.2.5'\n\n_NILEARN_INSTALL_MSG = 'See %s for installation information.' % (\n 'http://nilearn.github.io/introduction.html#installation')\n\n# This is a tuple to preserve order, so that dependencies are checked\n# in some meaningful order (more => less 'core'). We avoid using\n# collections.OrderedDict to preserve Python 2.6 compatibility.\nREQUIRED_MODULE_METADATA = (\n ('numpy', {\n 'min_version': '1.6.1',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('scipy', {\n 'min_version': '0.9.0',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('sklearn', {\n 'min_version': '0.13',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('nibabel', {\n 'min_version': '1.1.0',\n 'required_at_installation': False}))\n\nOPTIONAL_MATPLOTLIB_MIN_VERSION = '1.1.1'\n\n\ndef _import_module_with_version_check(\n module_name,\n minimum_version,\n install_info=None):\n \"\"\"Check that module is installed with a recent enough version\n \"\"\"\n from distutils.version import LooseVersion\n\n try:\n module = __import__(module_name)\n except ImportError as exc:\n user_friendly_info = ('Module \"{0}\" could not be found. {1}').format(\n module_name,\n install_info or 'Please install it properly to use nilearn.')\n exc.args += (user_friendly_info,)\n raise\n\n # Avoid choking on modules with no __version__ attribute\n module_version = getattr(module, '__version__', '0.0.0')\n\n version_too_old = (not LooseVersion(module_version) >=\n LooseVersion(minimum_version))\n\n if version_too_old:\n message = (\n 'A {module_name} version of at least {minimum_version} '\n 'is required to use nilearn. {module_version} was found. 
'\n 'Please upgrade {module_name}').format(\n module_name=module_name,\n minimum_version=minimum_version,\n module_version=module_version)\n\n raise ImportError(message)\n\n return module\n\n\ndef _check_module_dependencies(is_nilearn_installing=False):\n \"\"\"Throw an exception if nilearn dependencies are not installed.\n\n Parameters\n ----------\n is_nilearn_installing: boolean\n if True, only error on missing packages that cannot be auto-installed.\n if False, error on any missing package.\n\n Throws\n -------\n ImportError\n \"\"\"\n\n for (module_name, module_metadata) in REQUIRED_MODULE_METADATA:\n if not (is_nilearn_installing and\n not module_metadata['required_at_installation']):\n # Skip check only when installing and it's a module that\n # will be auto-installed.\n _import_module_with_version_check(\n module_name=module_name,\n minimum_version=module_metadata['min_version'],\n install_info=module_metadata.get('install_info'))\n", "path": "nilearn/version.py"}], "after_files": [{"content": "# *- encoding: utf-8 -*-\n\"\"\"\nnilearn version, required package versions, and utilities for checking\n\"\"\"\n# Author: Lo\u00efc Est\u00e8ve, Ben Cipollini\n# License: simplified BSD\n\n# PEP0440 compatible formatted version, see:\n# https://www.python.org/dev/peps/pep-0440/\n#\n# Generic release markers:\n# X.Y\n# X.Y.Z # For bugfix releases\n#\n# Admissible pre-release markers:\n# X.YaN # Alpha release\n# X.YbN # Beta release\n# X.YrcN # Release Candidate\n# X.Y # Final release\n#\n# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.\n# 'X.Y.dev0' is the canonical version of 'X.Y.dev'\n#\n__version__ = '0.2.5'\n\n_NILEARN_INSTALL_MSG = 'See %s for installation information.' % (\n 'http://nilearn.github.io/introduction.html#installation')\n\n# This is a tuple to preserve order, so that dependencies are checked\n# in some meaningful order (more => less 'core'). We avoid using\n# collections.OrderedDict to preserve Python 2.6 compatibility.\nREQUIRED_MODULE_METADATA = (\n ('numpy', {\n 'min_version': '1.6.1',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('scipy', {\n 'min_version': '0.9.0',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('sklearn', {\n 'min_version': '0.14.1',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('nibabel', {\n 'min_version': '1.1.0',\n 'required_at_installation': False}))\n\nOPTIONAL_MATPLOTLIB_MIN_VERSION = '1.1.1'\n\n\ndef _import_module_with_version_check(\n module_name,\n minimum_version,\n install_info=None):\n \"\"\"Check that module is installed with a recent enough version\n \"\"\"\n from distutils.version import LooseVersion\n\n try:\n module = __import__(module_name)\n except ImportError as exc:\n user_friendly_info = ('Module \"{0}\" could not be found. {1}').format(\n module_name,\n install_info or 'Please install it properly to use nilearn.')\n exc.args += (user_friendly_info,)\n raise\n\n # Avoid choking on modules with no __version__ attribute\n module_version = getattr(module, '__version__', '0.0.0')\n\n version_too_old = (not LooseVersion(module_version) >=\n LooseVersion(minimum_version))\n\n if version_too_old:\n message = (\n 'A {module_name} version of at least {minimum_version} '\n 'is required to use nilearn. {module_version} was found. 
'\n 'Please upgrade {module_name}').format(\n module_name=module_name,\n minimum_version=minimum_version,\n module_version=module_version)\n\n raise ImportError(message)\n\n return module\n\n\ndef _check_module_dependencies(is_nilearn_installing=False):\n \"\"\"Throw an exception if nilearn dependencies are not installed.\n\n Parameters\n ----------\n is_nilearn_installing: boolean\n if True, only error on missing packages that cannot be auto-installed.\n if False, error on any missing package.\n\n Throws\n -------\n ImportError\n \"\"\"\n\n for (module_name, module_metadata) in REQUIRED_MODULE_METADATA:\n if not (is_nilearn_installing and\n not module_metadata['required_at_installation']):\n # Skip check only when installing and it's a module that\n # will be auto-installed.\n _import_module_with_version_check(\n module_name=module_name,\n minimum_version=module_metadata['min_version'],\n install_info=module_metadata.get('install_info'))\n", "path": "nilearn/version.py"}]}
num_tokens: 1,468 | num_tokens_diff: 125

problem_id: gh_patches_debug_49191 | source: rasdani/github-patches | task_type: git_diff | in_source_id: plone__Products.CMFPlone-3529

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow to migrate to Volto after updating a site to Plone 6
When Plone is updated to 6 the upgrade-view (@@plone-upgrade) should display a link to the migration to Volto.
See https://github.com/plone/plone.volto/issues/55 for the migration itself.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/browser/admin.py`
Content:
```
1 from AccessControl import getSecurityManager
2 from AccessControl.Permissions import view as View
3 from OFS.interfaces import IApplication
4 from Products.CMFCore.permissions import ManagePortal
5 from Products.CMFPlone.factory import _DEFAULT_PROFILE
6 from Products.CMFPlone.factory import addPloneSite
7 from plone.base.interfaces import INonInstallable
8 from plone.base.interfaces import IPloneSiteRoot
9 from Products.CMFPlone.utils import get_installer
10 from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
11 from Products.GenericSetup import BASE, EXTENSION
12 from Products.GenericSetup import profile_registry
13 from Products.GenericSetup.upgrade import normalize_version
14 from ZPublisher.BaseRequest import DefaultPublishTraverse
15 from collections import OrderedDict
16 from plone.i18n.locales.interfaces import IContentLanguageAvailability
17 from plone.keyring.interfaces import IKeyManager
18 from plone.protect.authenticator import check as checkCSRF
19 from plone.protect.interfaces import IDisableCSRFProtection
20 from urllib import parse
21 from ZODB.broken import Broken
22 from zope.component import adapts
23 from zope.component import getAllUtilitiesRegisteredFor
24 from zope.component import getUtility
25 from zope.component import queryMultiAdapter
26 from zope.component import queryUtility
27 from zope.i18n.interfaces import IUserPreferredLanguages
28 from zope.i18n.locales import locales, LoadLocaleError
29 from zope.interface import Interface
30 from zope.interface import alsoProvides
31 from zope.publisher.browser import BrowserView
32 from zope.publisher.interfaces import IRequest
33 from zope.schema.interfaces import IVocabularyFactory
34
35 import logging
36 import pkg_resources
37
38
39 try:
40 pkg_resources.get_distribution("plone.volto")
41 HAS_VOLTO = True
42 except pkg_resources.DistributionNotFound:
43 HAS_VOLTO = False
44 LOGGER = logging.getLogger('Products.CMFPlone')
45
46
47 class AppTraverser(DefaultPublishTraverse):
48 adapts(IApplication, IRequest)
49
50 def publishTraverse(self, request, name):
51 if name == 'index_html':
52 view = queryMultiAdapter(
53 (self.context, request), Interface, 'plone-overview')
54 if view is not None:
55 return view
56 return DefaultPublishTraverse.publishTraverse(self, request, name)
57
58
59 class Overview(BrowserView):
60 has_volto = HAS_VOLTO
61
62 def sites(self, root=None):
63 if root is None:
64 root = self.context
65
66 result = []
67 secman = getSecurityManager()
68 candidates = (
69 obj for obj in root.values() if not isinstance(obj, Broken)
70 )
71 for obj in candidates:
72 if obj.meta_type == 'Folder':
73 result = result + self.sites(obj)
74 elif IPloneSiteRoot.providedBy(obj):
75 if secman.checkPermission(View, obj):
76 result.append(obj)
77 elif obj.getId() in getattr(root, '_mount_points', {}):
78 result.extend(self.sites(root=obj))
79 return result
80
81 def outdated(self, obj):
82 # Try to pick the portal_migration as an attribute
83 # (Plone 5 unmigrated site root) or as an item
84 mig = (
85 getattr(obj, "portal_migration", None)
86 or obj.get('portal_migration', None)
87 )
88 if mig is not None:
89 return mig.needUpgrading()
90 return False
91
92 def can_manage(self):
93 secman = getSecurityManager()
94 return secman.checkPermission(ManagePortal, self.context)
95
96 def upgrade_url(self, site, can_manage=None):
97 if can_manage is None:
98 can_manage = self.can_manage()
99 if can_manage:
100 return site.absolute_url() + '/@@plone-upgrade'
101 else:
102 return self.context.absolute_url() + '/@@plone-root-login'
103
104
105 class RootLoginRedirect(BrowserView):
106 """ @@plone-root-login
107
108 This view of the Zope root forces authentication via the root
109 acl_users and then redirects elsewhere.
110 """
111
112 def __call__(self, came_from=None):
113 if came_from is not None:
114 # see if this is a relative url or an absolute
115 if len(parse.urlparse(came_from)[1]) == 0:
116 # No host specified, so url is relative. Get an absolute url.
117 # Note: '\\domain.org' is not recognised as host,
118 # which is good.
119 came_from = parse.urljoin(
120 self.context.absolute_url() + '/', came_from,
121 )
122 elif not came_from.startswith(self.context.absolute_url()):
123 # Note: we cannot use portal_url.isURLInPortal here, because we
124 # are not in a Plone portal, but in the Zope root.
125 came_from = None
126 if came_from is None:
127 came_from = self.context.absolute_url()
128 self.request.response.redirect(came_from)
129
130
131 class RootLogout(BrowserView):
132 """ @@plone-root-logout """
133
134 logout = ViewPageTemplateFile('templates/plone-admin-logged-out.pt')
135
136 def __call__(self):
137 response = self.request.response
138 realm = response.realm
139 response.setStatus(401)
140 response.setHeader('WWW-Authenticate', 'basic realm="%s"' % realm, 1)
141 response.setBody(self.logout())
142 return
143
144
145 class FrontPage(BrowserView):
146
147 index = ViewPageTemplateFile('templates/plone-frontpage.pt')
148
149
150 class AddPloneSite(BrowserView):
151
152 # Profiles that are installed by default,
153 # but can be removed later.
154 default_extension_profiles = (
155 'plone.app.caching:default',
156 'plonetheme.barceloneta:default',
157 )
158 # Let's have a separate list for Volto.
159 volto_default_extension_profiles = (
160 'plone.app.caching:default',
161 # We could choose to not install Barceloneta:
162 'plonetheme.barceloneta:default',
163 'plone.volto:default',
164 'plone.volto:default-homepage'
165 )
166
167 def profiles(self):
168 base_profiles = []
169 extension_profiles = []
170 if HAS_VOLTO and not self.request.get('classic'):
171 selected_extension_profiles = self.volto_default_extension_profiles
172 else:
173 selected_extension_profiles = self.default_extension_profiles
174
175 # profiles available for install/uninstall, but hidden at the time
176 # the Plone site is created
177 not_installable = [
178 'Products.CMFPlacefulWorkflow:CMFPlacefulWorkflow',
179 ]
180 utils = getAllUtilitiesRegisteredFor(INonInstallable)
181 for util in utils:
182 not_installable.extend(util.getNonInstallableProfiles())
183
184 for info in profile_registry.listProfileInfo():
185 if info.get('type') == EXTENSION and \
186 info.get('for') in (IPloneSiteRoot, None):
187 profile_id = info.get('id')
188 if profile_id not in not_installable:
189 if profile_id in selected_extension_profiles:
190 info['selected'] = 'selected'
191 extension_profiles.append(info)
192
193 def _key(v):
194 # Make sure implicitly selected items come first
195 selected = v.get('selected') and 'automatic' or 'manual'
196 return '{}-{}'.format(selected, v.get('title', ''))
197 extension_profiles.sort(key=_key)
198
199 for info in profile_registry.listProfileInfo():
200 if info.get('type') == BASE and \
201 info.get('for') in (IPloneSiteRoot, None):
202 base_profiles.append(info)
203
204 return dict(
205 base=tuple(base_profiles),
206 default=_DEFAULT_PROFILE,
207 extensions=tuple(extension_profiles),
208 )
209
210 def browser_language(self):
211 language = 'en'
212 pl = IUserPreferredLanguages(self.request)
213 if pl is not None:
214 languages = pl.getPreferredLanguages()
215 for httplang in languages:
216 parts = (httplang.split('-') + [None, None])[:3]
217 if parts[0] == parts[1]:
218 # Avoid creating a country code for simple languages codes
219 parts = [parts[0], None, None]
220 try:
221 locale = locales.getLocale(*parts)
222 language = locale.getLocaleID().replace('_', '-').lower()
223 break
224 except LoadLocaleError:
225 # Just try the next combination
226 pass
227 return language
228
229 def grouped_languages(self, default='en'):
230 util = queryUtility(IContentLanguageAvailability)
231 available = util.getLanguages(combined=True)
232 languages = dict(util.getLanguageListing())
233
234 # Group country specific versions by language
235 grouped = OrderedDict()
236 for langcode, data in available.items():
237 lang = langcode.split('-')[0]
238 language = languages.get(lang, lang) # Label
239
240 struct = grouped.get(lang, {'label': language, 'languages': []})
241
242 langs = struct['languages']
243 langs.append({
244 'langcode': langcode,
245 'label': data.get('native', data.get('name')),
246 })
247
248 grouped[lang] = struct
249
250 # Sort list by language, next by country
251 data = sorted(grouped.values(), key=lambda k: k['label'])
252 for item in data:
253 item['languages'] = sorted(
254 item['languages'], key=lambda k: k['label'].lower())
255 return data
256
257 def timezones(self):
258 tz_vocab = getUtility(
259 IVocabularyFactory,
260 'plone.app.vocabularies.CommonTimezones'
261 )(self.context)
262
263 grouped = OrderedDict()
264 tz_values = [it.value for it in tz_vocab]
265 for value in tz_values:
266 splitted = value.split('/')
267 group = splitted.pop(0)
268 label = '/'.join(splitted)
269
270 entries = grouped.get(group, [])
271 entries.append({'label': label or group, 'value': value})
272 grouped[group] = entries
273
274 return grouped
275
276 def __call__(self):
277 context = self.context
278 form = self.request.form
279 submitted = form.get('form.submitted', False)
280 if submitted:
281 site_id = form.get('site_id', 'Plone')
282
283 # CSRF protect. DO NOT use auto CSRF protection for adding a site
284 alsoProvides(self.request, IDisableCSRFProtection)
285
286 # check if keyring is installed on root, disable CSRF protection
287 # if it is because it is not installed until a plone site
288 # is created
289 if queryUtility(IKeyManager) is None:
290 LOGGER.info('CSRF protection disabled on initial site '
291 'creation')
292 else:
293 # we have a keymanager, check csrf protection manually now
294 checkCSRF(self.request)
295 site = addPloneSite(
296 context, site_id,
297 title=form.get('title', ''),
298 profile_id=form.get('profile_id', _DEFAULT_PROFILE),
299 extension_ids=form.get('extension_ids', ()),
300 setup_content=form.get('setup_content', False),
301 default_language=form.get('default_language', 'en'),
302 portal_timezone=form.get('portal_timezone', 'UTC')
303 )
304 self.request.response.redirect(site.absolute_url())
305 return ''
306
307 return self.index()
308
309
310 class Upgrade(BrowserView):
311
312 def upgrades(self):
313 pm = getattr(self.context, 'portal_migration')
314 return pm.listUpgrades()
315
316 def versions(self):
317 pm = getattr(self.context, 'portal_migration')
318 result = {}
319 result['instance'] = pm.getInstanceVersion()
320 result['fs'] = pm.getFileSystemVersion()
321 result['equal'] = result['instance'] == result['fs']
322 instance_version = normalize_version(result['instance'])
323 fs_version = normalize_version(result['fs'])
324 result['instance_gt'] = instance_version > fs_version
325 result['instance_lt'] = instance_version < fs_version
326 result['corelist'] = pm.coreVersions()
327 return result
328
329 def __call__(self):
330 form = self.request.form
331 submitted = form.get('form.submitted', False)
332 if submitted:
333 # CSRF protect. DO NOT use auto CSRF protection for upgrading sites
334 alsoProvides(self.request, IDisableCSRFProtection)
335
336 pm = getattr(self.context, 'portal_migration')
337 report = pm.upgrade(
338 REQUEST=self.request,
339 dry_run=form.get('dry_run', False),
340 )
341 return self.index(
342 report=report,
343 )
344
345 return self.index()
346
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/Products/CMFPlone/browser/admin.py b/Products/CMFPlone/browser/admin.py
--- a/Products/CMFPlone/browser/admin.py
+++ b/Products/CMFPlone/browser/admin.py
@@ -343,3 +343,16 @@
)
return self.index()
+
+ def can_migrate_to_volto(self):
+ if not HAS_VOLTO:
+ return False
+ pm = getattr(self.context, 'portal_migration')
+ if pm.getInstanceVersion() < "6005":
+ return False
+ try:
+ from plone.volto.browser import migrate_to_volto
+ except ImportError:
+ return False
+ installer = get_installer(self.context, self.request)
+ return not installer.is_product_installed("plone.volto")
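
As a rough sketch of how the new `can_migrate_to_volto` guard is meant to be consumed, the snippet below distills its four checks into plain Python so the decision flow is easy to follow. The function name, argument names, and sample values are assumptions for illustration; only the checks themselves come from the patch above.

```python
def should_offer_volto_migration(has_volto: bool,
                                 instance_version: str,
                                 migrate_view_importable: bool,
                                 volto_already_installed: bool) -> bool:
    # 1) plone.volto must be available on the instance at all.
    if not has_volto:
        return False
    # 2) the site must already be on a Plone 6 profile (>= 6005);
    #    the patch compares the version strings directly.
    if instance_version < "6005":
        return False
    # 3) the migrate_to_volto browser view must be importable.
    if not migrate_view_importable:
        return False
    # 4) only offer the link while plone.volto is not yet installed in the site.
    return not volto_already_installed

# Hypothetical site states (assumed values):
print(should_offer_volto_migration(True, "6005", True, False))  # True  -> show the link
print(should_offer_volto_migration(True, "5221", True, False))  # False -> still a Plone 5 profile
```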
verification_info:
{"golden_diff": "diff --git a/Products/CMFPlone/browser/admin.py b/Products/CMFPlone/browser/admin.py\n--- a/Products/CMFPlone/browser/admin.py\n+++ b/Products/CMFPlone/browser/admin.py\n@@ -343,3 +343,16 @@\n )\n \n return self.index()\n+\n+ def can_migrate_to_volto(self):\n+ if not HAS_VOLTO:\n+ return False\n+ pm = getattr(self.context, 'portal_migration')\n+ if pm.getInstanceVersion() < \"6005\":\n+ return False\n+ try:\n+ from plone.volto.browser import migrate_to_volto\n+ except ImportError:\n+ return False\n+ installer = get_installer(self.context, self.request)\n+ return not installer.is_product_installed(\"plone.volto\")\n", "issue": "Allow to migrate to Volto after updating a site to Plone 6\nWhen Plone is updated to 6 the upgrade-view (@@plone-upgrade) should display a link to the migration to Volto.\r\nSee https://github.com/plone/plone.volto/issues/55 for the migration itself.\n", "before_files": [{"content": "from AccessControl import getSecurityManager\nfrom AccessControl.Permissions import view as View\nfrom OFS.interfaces import IApplication\nfrom Products.CMFCore.permissions import ManagePortal\nfrom Products.CMFPlone.factory import _DEFAULT_PROFILE\nfrom Products.CMFPlone.factory import addPloneSite\nfrom plone.base.interfaces import INonInstallable\nfrom plone.base.interfaces import IPloneSiteRoot\nfrom Products.CMFPlone.utils import get_installer\nfrom Products.Five.browser.pagetemplatefile import ViewPageTemplateFile\nfrom Products.GenericSetup import BASE, EXTENSION\nfrom Products.GenericSetup import profile_registry\nfrom Products.GenericSetup.upgrade import normalize_version\nfrom ZPublisher.BaseRequest import DefaultPublishTraverse\nfrom collections import OrderedDict\nfrom plone.i18n.locales.interfaces import IContentLanguageAvailability\nfrom plone.keyring.interfaces import IKeyManager\nfrom plone.protect.authenticator import check as checkCSRF\nfrom plone.protect.interfaces import IDisableCSRFProtection\nfrom urllib import parse\nfrom ZODB.broken import Broken\nfrom zope.component import adapts\nfrom zope.component import getAllUtilitiesRegisteredFor\nfrom zope.component import getUtility\nfrom zope.component import queryMultiAdapter\nfrom zope.component import queryUtility\nfrom zope.i18n.interfaces import IUserPreferredLanguages\nfrom zope.i18n.locales import locales, LoadLocaleError\nfrom zope.interface import Interface\nfrom zope.interface import alsoProvides\nfrom zope.publisher.browser import BrowserView\nfrom zope.publisher.interfaces import IRequest\nfrom zope.schema.interfaces import IVocabularyFactory\n\nimport logging\nimport pkg_resources\n\n\ntry:\n pkg_resources.get_distribution(\"plone.volto\")\n HAS_VOLTO = True\nexcept pkg_resources.DistributionNotFound:\n HAS_VOLTO = False\nLOGGER = logging.getLogger('Products.CMFPlone')\n\n\nclass AppTraverser(DefaultPublishTraverse):\n adapts(IApplication, IRequest)\n\n def publishTraverse(self, request, name):\n if name == 'index_html':\n view = queryMultiAdapter(\n (self.context, request), Interface, 'plone-overview')\n if view is not None:\n return view\n return DefaultPublishTraverse.publishTraverse(self, request, name)\n\n\nclass Overview(BrowserView):\n has_volto = HAS_VOLTO\n\n def sites(self, root=None):\n if root is None:\n root = self.context\n\n result = []\n secman = getSecurityManager()\n candidates = (\n obj for obj in root.values() if not isinstance(obj, Broken)\n )\n for obj in candidates:\n if obj.meta_type == 'Folder':\n result = result + self.sites(obj)\n elif 
IPloneSiteRoot.providedBy(obj):\n if secman.checkPermission(View, obj):\n result.append(obj)\n elif obj.getId() in getattr(root, '_mount_points', {}):\n result.extend(self.sites(root=obj))\n return result\n\n def outdated(self, obj):\n # Try to pick the portal_migration as an attribute\n # (Plone 5 unmigrated site root) or as an item\n mig = (\n getattr(obj, \"portal_migration\", None)\n or obj.get('portal_migration', None)\n )\n if mig is not None:\n return mig.needUpgrading()\n return False\n\n def can_manage(self):\n secman = getSecurityManager()\n return secman.checkPermission(ManagePortal, self.context)\n\n def upgrade_url(self, site, can_manage=None):\n if can_manage is None:\n can_manage = self.can_manage()\n if can_manage:\n return site.absolute_url() + '/@@plone-upgrade'\n else:\n return self.context.absolute_url() + '/@@plone-root-login'\n\n\nclass RootLoginRedirect(BrowserView):\n \"\"\" @@plone-root-login\n\n This view of the Zope root forces authentication via the root\n acl_users and then redirects elsewhere.\n \"\"\"\n\n def __call__(self, came_from=None):\n if came_from is not None:\n # see if this is a relative url or an absolute\n if len(parse.urlparse(came_from)[1]) == 0:\n # No host specified, so url is relative. Get an absolute url.\n # Note: '\\\\domain.org' is not recognised as host,\n # which is good.\n came_from = parse.urljoin(\n self.context.absolute_url() + '/', came_from,\n )\n elif not came_from.startswith(self.context.absolute_url()):\n # Note: we cannot use portal_url.isURLInPortal here, because we\n # are not in a Plone portal, but in the Zope root.\n came_from = None\n if came_from is None:\n came_from = self.context.absolute_url()\n self.request.response.redirect(came_from)\n\n\nclass RootLogout(BrowserView):\n \"\"\" @@plone-root-logout \"\"\"\n\n logout = ViewPageTemplateFile('templates/plone-admin-logged-out.pt')\n\n def __call__(self):\n response = self.request.response\n realm = response.realm\n response.setStatus(401)\n response.setHeader('WWW-Authenticate', 'basic realm=\"%s\"' % realm, 1)\n response.setBody(self.logout())\n return\n\n\nclass FrontPage(BrowserView):\n\n index = ViewPageTemplateFile('templates/plone-frontpage.pt')\n\n\nclass AddPloneSite(BrowserView):\n\n # Profiles that are installed by default,\n # but can be removed later.\n default_extension_profiles = (\n 'plone.app.caching:default',\n 'plonetheme.barceloneta:default',\n )\n # Let's have a separate list for Volto.\n volto_default_extension_profiles = (\n 'plone.app.caching:default',\n # We could choose to not install Barceloneta:\n 'plonetheme.barceloneta:default',\n 'plone.volto:default',\n 'plone.volto:default-homepage'\n )\n\n def profiles(self):\n base_profiles = []\n extension_profiles = []\n if HAS_VOLTO and not self.request.get('classic'):\n selected_extension_profiles = self.volto_default_extension_profiles\n else:\n selected_extension_profiles = self.default_extension_profiles\n\n # profiles available for install/uninstall, but hidden at the time\n # the Plone site is created\n not_installable = [\n 'Products.CMFPlacefulWorkflow:CMFPlacefulWorkflow',\n ]\n utils = getAllUtilitiesRegisteredFor(INonInstallable)\n for util in utils:\n not_installable.extend(util.getNonInstallableProfiles())\n\n for info in profile_registry.listProfileInfo():\n if info.get('type') == EXTENSION and \\\n info.get('for') in (IPloneSiteRoot, None):\n profile_id = info.get('id')\n if profile_id not in not_installable:\n if profile_id in selected_extension_profiles:\n info['selected'] = 
'selected'\n extension_profiles.append(info)\n\n def _key(v):\n # Make sure implicitly selected items come first\n selected = v.get('selected') and 'automatic' or 'manual'\n return '{}-{}'.format(selected, v.get('title', ''))\n extension_profiles.sort(key=_key)\n\n for info in profile_registry.listProfileInfo():\n if info.get('type') == BASE and \\\n info.get('for') in (IPloneSiteRoot, None):\n base_profiles.append(info)\n\n return dict(\n base=tuple(base_profiles),\n default=_DEFAULT_PROFILE,\n extensions=tuple(extension_profiles),\n )\n\n def browser_language(self):\n language = 'en'\n pl = IUserPreferredLanguages(self.request)\n if pl is not None:\n languages = pl.getPreferredLanguages()\n for httplang in languages:\n parts = (httplang.split('-') + [None, None])[:3]\n if parts[0] == parts[1]:\n # Avoid creating a country code for simple languages codes\n parts = [parts[0], None, None]\n try:\n locale = locales.getLocale(*parts)\n language = locale.getLocaleID().replace('_', '-').lower()\n break\n except LoadLocaleError:\n # Just try the next combination\n pass\n return language\n\n def grouped_languages(self, default='en'):\n util = queryUtility(IContentLanguageAvailability)\n available = util.getLanguages(combined=True)\n languages = dict(util.getLanguageListing())\n\n # Group country specific versions by language\n grouped = OrderedDict()\n for langcode, data in available.items():\n lang = langcode.split('-')[0]\n language = languages.get(lang, lang) # Label\n\n struct = grouped.get(lang, {'label': language, 'languages': []})\n\n langs = struct['languages']\n langs.append({\n 'langcode': langcode,\n 'label': data.get('native', data.get('name')),\n })\n\n grouped[lang] = struct\n\n # Sort list by language, next by country\n data = sorted(grouped.values(), key=lambda k: k['label'])\n for item in data:\n item['languages'] = sorted(\n item['languages'], key=lambda k: k['label'].lower())\n return data\n\n def timezones(self):\n tz_vocab = getUtility(\n IVocabularyFactory,\n 'plone.app.vocabularies.CommonTimezones'\n )(self.context)\n\n grouped = OrderedDict()\n tz_values = [it.value for it in tz_vocab]\n for value in tz_values:\n splitted = value.split('/')\n group = splitted.pop(0)\n label = '/'.join(splitted)\n\n entries = grouped.get(group, [])\n entries.append({'label': label or group, 'value': value})\n grouped[group] = entries\n\n return grouped\n\n def __call__(self):\n context = self.context\n form = self.request.form\n submitted = form.get('form.submitted', False)\n if submitted:\n site_id = form.get('site_id', 'Plone')\n\n # CSRF protect. 
DO NOT use auto CSRF protection for adding a site\n alsoProvides(self.request, IDisableCSRFProtection)\n\n # check if keyring is installed on root, disable CSRF protection\n # if it is because it is not installed until a plone site\n # is created\n if queryUtility(IKeyManager) is None:\n LOGGER.info('CSRF protection disabled on initial site '\n 'creation')\n else:\n # we have a keymanager, check csrf protection manually now\n checkCSRF(self.request)\n site = addPloneSite(\n context, site_id,\n title=form.get('title', ''),\n profile_id=form.get('profile_id', _DEFAULT_PROFILE),\n extension_ids=form.get('extension_ids', ()),\n setup_content=form.get('setup_content', False),\n default_language=form.get('default_language', 'en'),\n portal_timezone=form.get('portal_timezone', 'UTC')\n )\n self.request.response.redirect(site.absolute_url())\n return ''\n\n return self.index()\n\n\nclass Upgrade(BrowserView):\n\n def upgrades(self):\n pm = getattr(self.context, 'portal_migration')\n return pm.listUpgrades()\n\n def versions(self):\n pm = getattr(self.context, 'portal_migration')\n result = {}\n result['instance'] = pm.getInstanceVersion()\n result['fs'] = pm.getFileSystemVersion()\n result['equal'] = result['instance'] == result['fs']\n instance_version = normalize_version(result['instance'])\n fs_version = normalize_version(result['fs'])\n result['instance_gt'] = instance_version > fs_version\n result['instance_lt'] = instance_version < fs_version\n result['corelist'] = pm.coreVersions()\n return result\n\n def __call__(self):\n form = self.request.form\n submitted = form.get('form.submitted', False)\n if submitted:\n # CSRF protect. DO NOT use auto CSRF protection for upgrading sites\n alsoProvides(self.request, IDisableCSRFProtection)\n\n pm = getattr(self.context, 'portal_migration')\n report = pm.upgrade(\n REQUEST=self.request,\n dry_run=form.get('dry_run', False),\n )\n return self.index(\n report=report,\n )\n\n return self.index()\n", "path": "Products/CMFPlone/browser/admin.py"}], "after_files": [{"content": "from AccessControl import getSecurityManager\nfrom AccessControl.Permissions import view as View\nfrom OFS.interfaces import IApplication\nfrom Products.CMFCore.permissions import ManagePortal\nfrom Products.CMFPlone.factory import _DEFAULT_PROFILE\nfrom Products.CMFPlone.factory import addPloneSite\nfrom plone.base.interfaces import INonInstallable\nfrom plone.base.interfaces import IPloneSiteRoot\nfrom Products.CMFPlone.utils import get_installer\nfrom Products.Five.browser.pagetemplatefile import ViewPageTemplateFile\nfrom Products.GenericSetup import BASE, EXTENSION\nfrom Products.GenericSetup import profile_registry\nfrom Products.GenericSetup.upgrade import normalize_version\nfrom ZPublisher.BaseRequest import DefaultPublishTraverse\nfrom collections import OrderedDict\nfrom plone.i18n.locales.interfaces import IContentLanguageAvailability\nfrom plone.keyring.interfaces import IKeyManager\nfrom plone.protect.authenticator import check as checkCSRF\nfrom plone.protect.interfaces import IDisableCSRFProtection\nfrom urllib import parse\nfrom ZODB.broken import Broken\nfrom zope.component import adapts\nfrom zope.component import getAllUtilitiesRegisteredFor\nfrom zope.component import getUtility\nfrom zope.component import queryMultiAdapter\nfrom zope.component import queryUtility\nfrom zope.i18n.interfaces import IUserPreferredLanguages\nfrom zope.i18n.locales import locales, LoadLocaleError\nfrom zope.interface import Interface\nfrom zope.interface import alsoProvides\nfrom 
zope.publisher.browser import BrowserView\nfrom zope.publisher.interfaces import IRequest\nfrom zope.schema.interfaces import IVocabularyFactory\n\nimport logging\nimport pkg_resources\n\n\ntry:\n pkg_resources.get_distribution(\"plone.volto\")\n HAS_VOLTO = True\nexcept pkg_resources.DistributionNotFound:\n HAS_VOLTO = False\nLOGGER = logging.getLogger('Products.CMFPlone')\n\n\nclass AppTraverser(DefaultPublishTraverse):\n adapts(IApplication, IRequest)\n\n def publishTraverse(self, request, name):\n if name == 'index_html':\n view = queryMultiAdapter(\n (self.context, request), Interface, 'plone-overview')\n if view is not None:\n return view\n return DefaultPublishTraverse.publishTraverse(self, request, name)\n\n\nclass Overview(BrowserView):\n has_volto = HAS_VOLTO\n\n def sites(self, root=None):\n if root is None:\n root = self.context\n\n result = []\n secman = getSecurityManager()\n candidates = (\n obj for obj in root.values() if not isinstance(obj, Broken)\n )\n for obj in candidates:\n if obj.meta_type == 'Folder':\n result = result + self.sites(obj)\n elif IPloneSiteRoot.providedBy(obj):\n if secman.checkPermission(View, obj):\n result.append(obj)\n elif obj.getId() in getattr(root, '_mount_points', {}):\n result.extend(self.sites(root=obj))\n return result\n\n def outdated(self, obj):\n # Try to pick the portal_migration as an attribute\n # (Plone 5 unmigrated site root) or as an item\n mig = (\n getattr(obj, \"portal_migration\", None)\n or obj.get('portal_migration', None)\n )\n if mig is not None:\n return mig.needUpgrading()\n return False\n\n def can_manage(self):\n secman = getSecurityManager()\n return secman.checkPermission(ManagePortal, self.context)\n\n def upgrade_url(self, site, can_manage=None):\n if can_manage is None:\n can_manage = self.can_manage()\n if can_manage:\n return site.absolute_url() + '/@@plone-upgrade'\n else:\n return self.context.absolute_url() + '/@@plone-root-login'\n\n\nclass RootLoginRedirect(BrowserView):\n \"\"\" @@plone-root-login\n\n This view of the Zope root forces authentication via the root\n acl_users and then redirects elsewhere.\n \"\"\"\n\n def __call__(self, came_from=None):\n if came_from is not None:\n # see if this is a relative url or an absolute\n if len(parse.urlparse(came_from)[1]) == 0:\n # No host specified, so url is relative. 
Get an absolute url.\n # Note: '\\\\domain.org' is not recognised as host,\n # which is good.\n came_from = parse.urljoin(\n self.context.absolute_url() + '/', came_from,\n )\n elif not came_from.startswith(self.context.absolute_url()):\n # Note: we cannot use portal_url.isURLInPortal here, because we\n # are not in a Plone portal, but in the Zope root.\n came_from = None\n if came_from is None:\n came_from = self.context.absolute_url()\n self.request.response.redirect(came_from)\n\n\nclass RootLogout(BrowserView):\n \"\"\" @@plone-root-logout \"\"\"\n\n logout = ViewPageTemplateFile('templates/plone-admin-logged-out.pt')\n\n def __call__(self):\n response = self.request.response\n realm = response.realm\n response.setStatus(401)\n response.setHeader('WWW-Authenticate', 'basic realm=\"%s\"' % realm, 1)\n response.setBody(self.logout())\n return\n\n\nclass FrontPage(BrowserView):\n\n index = ViewPageTemplateFile('templates/plone-frontpage.pt')\n\n\nclass AddPloneSite(BrowserView):\n\n # Profiles that are installed by default,\n # but can be removed later.\n default_extension_profiles = (\n 'plone.app.caching:default',\n 'plonetheme.barceloneta:default',\n )\n # Let's have a separate list for Volto.\n volto_default_extension_profiles = (\n 'plone.app.caching:default',\n # We could choose to not install Barceloneta:\n 'plonetheme.barceloneta:default',\n 'plone.volto:default',\n 'plone.volto:default-homepage'\n )\n\n def profiles(self):\n base_profiles = []\n extension_profiles = []\n if HAS_VOLTO and not self.request.get('classic'):\n selected_extension_profiles = self.volto_default_extension_profiles\n else:\n selected_extension_profiles = self.default_extension_profiles\n\n # profiles available for install/uninstall, but hidden at the time\n # the Plone site is created\n not_installable = [\n 'Products.CMFPlacefulWorkflow:CMFPlacefulWorkflow',\n ]\n utils = getAllUtilitiesRegisteredFor(INonInstallable)\n for util in utils:\n not_installable.extend(util.getNonInstallableProfiles())\n\n for info in profile_registry.listProfileInfo():\n if info.get('type') == EXTENSION and \\\n info.get('for') in (IPloneSiteRoot, None):\n profile_id = info.get('id')\n if profile_id not in not_installable:\n if profile_id in selected_extension_profiles:\n info['selected'] = 'selected'\n extension_profiles.append(info)\n\n def _key(v):\n # Make sure implicitly selected items come first\n selected = v.get('selected') and 'automatic' or 'manual'\n return '{}-{}'.format(selected, v.get('title', ''))\n extension_profiles.sort(key=_key)\n\n for info in profile_registry.listProfileInfo():\n if info.get('type') == BASE and \\\n info.get('for') in (IPloneSiteRoot, None):\n base_profiles.append(info)\n\n return dict(\n base=tuple(base_profiles),\n default=_DEFAULT_PROFILE,\n extensions=tuple(extension_profiles),\n )\n\n def browser_language(self):\n language = 'en'\n pl = IUserPreferredLanguages(self.request)\n if pl is not None:\n languages = pl.getPreferredLanguages()\n for httplang in languages:\n parts = (httplang.split('-') + [None, None])[:3]\n if parts[0] == parts[1]:\n # Avoid creating a country code for simple languages codes\n parts = [parts[0], None, None]\n try:\n locale = locales.getLocale(*parts)\n language = locale.getLocaleID().replace('_', '-').lower()\n break\n except LoadLocaleError:\n # Just try the next combination\n pass\n return language\n\n def grouped_languages(self, default='en'):\n util = queryUtility(IContentLanguageAvailability)\n available = util.getLanguages(combined=True)\n languages = 
dict(util.getLanguageListing())\n\n # Group country specific versions by language\n grouped = OrderedDict()\n for langcode, data in available.items():\n lang = langcode.split('-')[0]\n language = languages.get(lang, lang) # Label\n\n struct = grouped.get(lang, {'label': language, 'languages': []})\n\n langs = struct['languages']\n langs.append({\n 'langcode': langcode,\n 'label': data.get('native', data.get('name')),\n })\n\n grouped[lang] = struct\n\n # Sort list by language, next by country\n data = sorted(grouped.values(), key=lambda k: k['label'])\n for item in data:\n item['languages'] = sorted(\n item['languages'], key=lambda k: k['label'].lower())\n return data\n\n def timezones(self):\n tz_vocab = getUtility(\n IVocabularyFactory,\n 'plone.app.vocabularies.CommonTimezones'\n )(self.context)\n\n grouped = OrderedDict()\n tz_values = [it.value for it in tz_vocab]\n for value in tz_values:\n splitted = value.split('/')\n group = splitted.pop(0)\n label = '/'.join(splitted)\n\n entries = grouped.get(group, [])\n entries.append({'label': label or group, 'value': value})\n grouped[group] = entries\n\n return grouped\n\n def __call__(self):\n context = self.context\n form = self.request.form\n submitted = form.get('form.submitted', False)\n if submitted:\n site_id = form.get('site_id', 'Plone')\n\n # CSRF protect. DO NOT use auto CSRF protection for adding a site\n alsoProvides(self.request, IDisableCSRFProtection)\n\n # check if keyring is installed on root, disable CSRF protection\n # if it is because it is not installed until a plone site\n # is created\n if queryUtility(IKeyManager) is None:\n LOGGER.info('CSRF protection disabled on initial site '\n 'creation')\n else:\n # we have a keymanager, check csrf protection manually now\n checkCSRF(self.request)\n site = addPloneSite(\n context, site_id,\n title=form.get('title', ''),\n profile_id=form.get('profile_id', _DEFAULT_PROFILE),\n extension_ids=form.get('extension_ids', ()),\n setup_content=form.get('setup_content', False),\n default_language=form.get('default_language', 'en'),\n portal_timezone=form.get('portal_timezone', 'UTC')\n )\n self.request.response.redirect(site.absolute_url())\n return ''\n\n return self.index()\n\n\nclass Upgrade(BrowserView):\n\n def upgrades(self):\n pm = getattr(self.context, 'portal_migration')\n return pm.listUpgrades()\n\n def versions(self):\n pm = getattr(self.context, 'portal_migration')\n result = {}\n result['instance'] = pm.getInstanceVersion()\n result['fs'] = pm.getFileSystemVersion()\n result['equal'] = result['instance'] == result['fs']\n instance_version = normalize_version(result['instance'])\n fs_version = normalize_version(result['fs'])\n result['instance_gt'] = instance_version > fs_version\n result['instance_lt'] = instance_version < fs_version\n result['corelist'] = pm.coreVersions()\n return result\n\n def __call__(self):\n form = self.request.form\n submitted = form.get('form.submitted', False)\n if submitted:\n # CSRF protect. 
DO NOT use auto CSRF protection for upgrading sites\n alsoProvides(self.request, IDisableCSRFProtection)\n\n pm = getattr(self.context, 'portal_migration')\n report = pm.upgrade(\n REQUEST=self.request,\n dry_run=form.get('dry_run', False),\n )\n return self.index(\n report=report,\n )\n\n return self.index()\n\n def can_migrate_to_volto(self):\n if not HAS_VOLTO:\n return False\n pm = getattr(self.context, 'portal_migration')\n if pm.getInstanceVersion() < \"6005\":\n return False\n try:\n from plone.volto.browser import migrate_to_volto\n except ImportError:\n return False\n installer = get_installer(self.context, self.request)\n return not installer.is_product_installed(\"plone.volto\")\n", "path": "Products/CMFPlone/browser/admin.py"}]}
num_tokens: 3,905 | num_tokens_diff: 186

problem_id: gh_patches_debug_19787 | source: rasdani/github-patches | task_type: git_diff | in_source_id: digitalfabrik__integreat-cms-538

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pages endpoint: skip pages if parent is not visible
If a parent page in the page tree is in any state that makes it invisible in the API, we do not want the children to appear in the pages endpoint. This should allow for easy deactivating of "full chapters".
Pages endpoint: skip pages if parent is not visible
If a parent page in the page tree is in any state that makes it invisible in the API, we do not want the children to appear in the pages endpoint. This should allow for easy deactivating of "full chapters".
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/api/v3/pages.py`
Content:
```
1 from django.http import JsonResponse
2
3 from cms.models import Region
4
5
6 def transform_page(page_translation):
7 if page_translation.page.parent:
8 parent = {
9 "id": page_translation.page.parent.id,
10 "url": page_translation.page.parent.get_translation(
11 page_translation.language.code
12 ).permalink,
13 "path": page_translation.page.parent.get_translation(
14 page_translation.language.code
15 ).slug,
16 }
17 else:
18 parent = None
19 return {
20 "id": page_translation.id,
21 "url": page_translation.permalink,
22 "path": page_translation.slug,
23 "title": page_translation.title,
24 "modified_gmt": page_translation.last_updated,
25 "excerpt": page_translation.text,
26 "content": page_translation.combined_text,
27 "parent": parent,
28 "order": page_translation.page.lft, # use left edge indicator of mptt model for order
29 "available_languages": page_translation.available_languages,
30 "thumbnail": None,
31 "hash": None,
32 }
33
34
35 # pylint: disable=unused-argument
36 def pages(request, region_slug, language_code):
37 region = Region.get_current_region(request)
38 result = []
39 for page in region.pages.all():
40 page_translation = page.get_public_translation(language_code)
41 if page_translation:
42 result.append(transform_page(page_translation))
43 return JsonResponse(
44 result, safe=False
45 ) # Turn off Safe-Mode to allow serializing arrays
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/src/api/v3/pages.py b/src/api/v3/pages.py
--- a/src/api/v3/pages.py
+++ b/src/api/v3/pages.py
@@ -36,10 +36,20 @@
def pages(request, region_slug, language_code):
region = Region.get_current_region(request)
result = []
- for page in region.pages.all():
+ for page in region.pages.filter(archived=False, parent=None): # get main level
page_translation = page.get_public_translation(language_code)
if page_translation:
result.append(transform_page(page_translation))
+ result = get_children(page, language_code, result)
return JsonResponse(
result, safe=False
) # Turn off Safe-Mode to allow serializing arrays
+
+
+def get_children(parent, language_code, result):
+ for page in parent.children.filter(archived=False):
+ page_translation = page.get_public_translation(language_code)
+ if page_translation:
+ result.append(transform_page(page_translation))
+ result = get_children(page, language_code, result)
+ return result
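
To see the intended pruning behaviour of the patched endpoint at a glance, here is a small self-contained sketch with dummy pages in place of the Django models. The tree, the `Page` tuple, and the helper name are invented for illustration; the traversal rule mirrors the patched `pages`/`get_children` pair: a page is emitted only if it is not archived and has a public translation, and its children are visited only when the page itself was emitted.

```python
from collections import namedtuple

# Stand-in for a cms page: slug, archived flag, whether a public translation exists, children.
Page = namedtuple("Page", "slug archived visible children")

def visible_slugs(page):
    # An archived page or a page without a public translation hides its whole subtree.
    if page.archived or not page.visible:
        return []
    result = [page.slug]
    for child in page.children:
        result.extend(visible_slugs(child))
    return result

# Hypothetical tree: "chapter-1" has no public translation, so its leaf disappears too.
tree = Page("welcome", False, True, [
    Page("chapter-1", False, False, [
        Page("chapter-1/leaf", False, True, []),
    ]),
    Page("chapter-2", False, True, [
        Page("chapter-2/leaf", False, True, []),
    ]),
])

print(visible_slugs(tree))  # ['welcome', 'chapter-2', 'chapter-2/leaf']
```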
verification_info:
{"golden_diff": "diff --git a/src/api/v3/pages.py b/src/api/v3/pages.py\n--- a/src/api/v3/pages.py\n+++ b/src/api/v3/pages.py\n@@ -36,10 +36,20 @@\n def pages(request, region_slug, language_code):\n region = Region.get_current_region(request)\n result = []\n- for page in region.pages.all():\n+ for page in region.pages.filter(archived=False, parent=None): # get main level\n page_translation = page.get_public_translation(language_code)\n if page_translation:\n result.append(transform_page(page_translation))\n+ result = get_children(page, language_code, result)\n return JsonResponse(\n result, safe=False\n ) # Turn off Safe-Mode to allow serializing arrays\n+\n+\n+def get_children(parent, language_code, result):\n+ for page in parent.children.filter(archived=False):\n+ page_translation = page.get_public_translation(language_code)\n+ if page_translation:\n+ result.append(transform_page(page_translation))\n+ result = get_children(page, language_code, result)\n+ return result\n", "issue": "Pages endpoint: skip pages if parent is not visible\nIf a parent page in the page tree is in any state that makes it invisible in the API, we do not want the children to appear in the pages endpoint. This should allow for easy deactivating of \"full chapters\". \nPages endpoint: skip pages if parent is not visible\nIf a parent page in the page tree is in any state that makes it invisible in the API, we do not want the children to appear in the pages endpoint. This should allow for easy deactivating of \"full chapters\". \n", "before_files": [{"content": "from django.http import JsonResponse\n\nfrom cms.models import Region\n\n\ndef transform_page(page_translation):\n if page_translation.page.parent:\n parent = {\n \"id\": page_translation.page.parent.id,\n \"url\": page_translation.page.parent.get_translation(\n page_translation.language.code\n ).permalink,\n \"path\": page_translation.page.parent.get_translation(\n page_translation.language.code\n ).slug,\n }\n else:\n parent = None\n return {\n \"id\": page_translation.id,\n \"url\": page_translation.permalink,\n \"path\": page_translation.slug,\n \"title\": page_translation.title,\n \"modified_gmt\": page_translation.last_updated,\n \"excerpt\": page_translation.text,\n \"content\": page_translation.combined_text,\n \"parent\": parent,\n \"order\": page_translation.page.lft, # use left edge indicator of mptt model for order\n \"available_languages\": page_translation.available_languages,\n \"thumbnail\": None,\n \"hash\": None,\n }\n\n\n# pylint: disable=unused-argument\ndef pages(request, region_slug, language_code):\n region = Region.get_current_region(request)\n result = []\n for page in region.pages.all():\n page_translation = page.get_public_translation(language_code)\n if page_translation:\n result.append(transform_page(page_translation))\n return JsonResponse(\n result, safe=False\n ) # Turn off Safe-Mode to allow serializing arrays\n", "path": "src/api/v3/pages.py"}], "after_files": [{"content": "from django.http import JsonResponse\n\nfrom cms.models import Region\n\n\ndef transform_page(page_translation):\n if page_translation.page.parent:\n parent = {\n \"id\": page_translation.page.parent.id,\n \"url\": page_translation.page.parent.get_translation(\n page_translation.language.code\n ).permalink,\n \"path\": page_translation.page.parent.get_translation(\n page_translation.language.code\n ).slug,\n }\n else:\n parent = None\n return {\n \"id\": page_translation.id,\n \"url\": page_translation.permalink,\n \"path\": page_translation.slug,\n 
\"title\": page_translation.title,\n \"modified_gmt\": page_translation.last_updated,\n \"excerpt\": page_translation.text,\n \"content\": page_translation.combined_text,\n \"parent\": parent,\n \"order\": page_translation.page.lft, # use left edge indicator of mptt model for order\n \"available_languages\": page_translation.available_languages,\n \"thumbnail\": None,\n \"hash\": None,\n }\n\n\n# pylint: disable=unused-argument\ndef pages(request, region_slug, language_code):\n region = Region.get_current_region(request)\n result = []\n for page in region.pages.filter(archived=False, parent=None): # get main level\n page_translation = page.get_public_translation(language_code)\n if page_translation:\n result.append(transform_page(page_translation))\n result = get_children(page, language_code, result)\n return JsonResponse(\n result, safe=False\n ) # Turn off Safe-Mode to allow serializing arrays\n\n\ndef get_children(parent, language_code, result):\n for page in parent.children.filter(archived=False):\n page_translation = page.get_public_translation(language_code)\n if page_translation:\n result.append(transform_page(page_translation))\n result = get_children(page, language_code, result)\n return result\n", "path": "src/api/v3/pages.py"}]}
num_tokens: 764 | num_tokens_diff: 237

problem_id: gh_patches_debug_45013 | source: rasdani/github-patches | task_type: git_diff | in_source_id: OpenMined__PySyft-5169

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix all darglint docs warnings
## Description
Currently there are 419 warnings:
```
$ flake8 src tests | wc -l
419
```
We can progressively improve this with multiple PRs.
If you are interested, please run the checker on the `src` and `tests` folders and try to fix the warnings until there are none.
Some will require adding docs that don't exist while others are just fixes.
This will likely require multiple PRs so if you are interested open a PR with a few fixes and we will go from there.
## Definition of Done
All docstring warnings are fixed and CI has the darglint checker turned on to fail the build.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/syft/core/common/object.py`
Content:
```
1 # stdlib
2 from typing import Any
3 from typing import Optional
4
5 # third party
6 from google.protobuf.reflection import GeneratedProtocolMessageType
7
8 # syft relative
9 from ...proto.core.common.common_object_pb2 import ObjectWithID as ObjectWithID_PB
10 from ...util import validate_type
11 from ..common.serde.deserialize import _deserialize
12 from ..common.serde.serializable import Serializable
13 from .uid import UID
14
15
16 class ObjectWithID(Serializable):
17 """This object is the superclass for nearly all Syft objects. Subclassing
18 from this object will cause an object to be initialized with a unique id
19 using the process specified in the UID class.
20
21 .. note::
22 At the time of writing, the only class in Syft which doesn't have an ID
23 of some kind is the Client class because it's job is to point to another
24 object (which has an ID).
25
26 .. note::
27 Be aware of performance choices in this class because it is used so
28 heavily across the entire codebase. Assume every method is going to
29 be called thousands of times during the working day of an average
30 data scientist using syft (and millions of times in the context of a
31 machine learning job).
32
33 """
34
35 def __init__(self, id: Optional[UID] = None):
36 """This initializer only exists to set the id attribute, which is the
37 primary purpose of this class. It also sets the 'as_wrapper' flag
38 for the 'Serializable' superclass.
39
40 :param id: an override which can be used to set an ID for this object
41 manually. This is probably only used for deserialization.
42 :type id: UID
43
44 """
45
46 if id is None:
47 id = UID()
48
49 self._id: UID = id
50
51 # while this class is never used as a simple wrapper,
52 # it's possible that sub-classes of this class will be.
53 super().__init__()
54
55 @property
56 def id(self) -> UID:
57 """We reveal ObjectWithID.id as a property to discourage users and
58 developers of Syft from modifying .id attributes after an object
59 has been initialized.
60
61 :return: returns the unique id of the object
62 :rtype: UID
63 """
64 return self._id
65
66 def __eq__(self, other: Any) -> bool:
67 """Checks to see if two ObjectWithIDs are actually the same object.
68
69 This checks to see whether this ObjectWithIDs is equal to another by
70 comparing whether they have the same .id objects. These objects
71 come with their own __eq__ function which we assume to be correct.
72
73 :param other: this is the other ObjectWithIDs to be compared with
74 :type other: Any (note this must be Any or __eq__ fails on other types)
75 :return: returns True/False based on whether the objects are the same
76 :rtype: bool
77 """
78
79 try:
80 return self.id == other.id
81 except Exception:
82 return False
83
84 def __repr__(self) -> str:
85 """Returns a human-readable version of the ObjectWithID
86
87 Return a human-readable representation of the ObjectWithID with brackets
88 so that it can be easily spotted when nested inside of the human-
89 readable representations of other objects."""
90
91 no_dash = str(self.id.value).replace("-", "")
92 return f"<{type(self).__name__}: {no_dash}>"
93
94 def repr_short(self) -> str:
95 """Returns a SHORT human-readable version of SpecificLocation
96
97 Return a SHORT human-readable version of the ID which
98 makes it print nicer when embedded (often alongside other
99 UID objects) within other object __repr__ methods."""
100
101 return f"<{type(self).__name__}:{self.id.repr_short()}>"
102
103 def _object2proto(self) -> ObjectWithID_PB:
104 """Returns a protobuf serialization of self.
105
106 As a requirement of all objects which inherit from Serializable,
107 this method transforms the current object into the corresponding
108 Protobuf object so that it can be further serialized.
109
110 :return: returns a protobuf object
111 :rtype: ObjectWithID_PB
112
113 .. note::
114 This method is purely an internal method. Please use object.serialize() or one of
115 the other public serialization methods if you wish to serialize an
116 object.
117 """
118 return ObjectWithID_PB(id=self.id.serialize())
119
120 @staticmethod
121 def _proto2object(proto: ObjectWithID_PB) -> "ObjectWithID":
122 """Creates a ObjectWithID from a protobuf
123
124 As a requirement of all objects which inherit from Serializable,
125 this method transforms a protobuf object into an instance of this class.
126
127 :return: returns an instance of ObjectWithID
128 :rtype: ObjectWithID
129
130 .. note::
131 This method is purely an internal method. Please use syft.deserialize()
132 if you wish to deserialize an object.
133 """
134 _id = validate_type(_object=_deserialize(proto.id), _type=UID, optional=True)
135 return ObjectWithID(id=_id)
136
137 @staticmethod
138 def get_protobuf_schema() -> GeneratedProtocolMessageType:
139 """Return the type of protobuf object which stores a class of this type
140
141 As a part of serialization and deserialization, we need the ability to
142 lookup the protobuf object type directly from the object type. This
143 static method allows us to do this.
144
145 Importantly, this method is also used to create the reverse lookup ability within
146 the metaclass of Serializable. In the metaclass, it calls this method and then
147 it takes whatever type is returned from this method and adds an attribute to it
148 with the type of this class attached to it. See the MetaSerializable class for details.
149
150 :return: the type of protobuf object which corresponds to this class.
151 :rtype: GeneratedProtocolMessageType
152
153 """
154
155 return ObjectWithID_PB
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/syft/core/common/object.py b/src/syft/core/common/object.py
--- a/src/syft/core/common/object.py
+++ b/src/syft/core/common/object.py
@@ -37,9 +37,8 @@
primary purpose of this class. It also sets the 'as_wrapper' flag
for the 'Serializable' superclass.
- :param id: an override which can be used to set an ID for this object
- manually. This is probably only used for deserialization.
- :type id: UID
+ Args:
+ id: an override which can be used to set an ID for this object
"""
@@ -58,8 +57,8 @@
developers of Syft from modifying .id attributes after an object
has been initialized.
- :return: returns the unique id of the object
- :rtype: UID
+ Returns:
+ returns the unique id of the object
"""
return self._id
@@ -70,10 +69,11 @@
comparing whether they have the same .id objects. These objects
come with their own __eq__ function which we assume to be correct.
- :param other: this is the other ObjectWithIDs to be compared with
- :type other: Any (note this must be Any or __eq__ fails on other types)
- :return: returns True/False based on whether the objects are the same
- :rtype: bool
+ Args:
+ other: this is the other ObjectWithIDs to be compared with
+
+ Returns:
+ True/False based on whether the objects are the same
"""
try:
@@ -82,33 +82,39 @@
return False
def __repr__(self) -> str:
- """Returns a human-readable version of the ObjectWithID
-
+ """
Return a human-readable representation of the ObjectWithID with brackets
so that it can be easily spotted when nested inside of the human-
- readable representations of other objects."""
+ readable representations of other objects.
+
+ Returns:
+ a human-readable version of the ObjectWithID
+
+ """
no_dash = str(self.id.value).replace("-", "")
return f"<{type(self).__name__}: {no_dash}>"
def repr_short(self) -> str:
- """Returns a SHORT human-readable version of SpecificLocation
-
+ """
Return a SHORT human-readable version of the ID which
makes it print nicer when embedded (often alongside other
- UID objects) within other object __repr__ methods."""
+ UID objects) within other object __repr__ methods.
+
+ Returns:
+ a SHORT human-readable version of SpecificLocation
+ """
return f"<{type(self).__name__}:{self.id.repr_short()}>"
def _object2proto(self) -> ObjectWithID_PB:
- """Returns a protobuf serialization of self.
-
+ """
As a requirement of all objects which inherit from Serializable,
this method transforms the current object into the corresponding
Protobuf object so that it can be further serialized.
- :return: returns a protobuf object
- :rtype: ObjectWithID_PB
+ Returns:
+ a protobuf object that is the serialization of self.
.. note::
This method is purely an internal method. Please use object.serialize() or one of
@@ -124,8 +130,11 @@
As a requirement of all objects which inherit from Serializable,
this method transforms a protobuf object into an instance of this class.
- :return: returns an instance of ObjectWithID
- :rtype: ObjectWithID
+ Args:
+ proto: a protobuf object that we wish to convert to instance of this class
+
+ Returns:
+ an instance of ObjectWithID
.. note::
This method is purely an internal method. Please use syft.deserialize()
@@ -147,8 +156,8 @@
it takes whatever type is returned from this method and adds an attribute to it
with the type of this class attached to it. See the MetaSerializable class for details.
- :return: the type of protobuf object which corresponds to this class.
- :rtype: GeneratedProtocolMessageType
+ Returns:
+ the type of protobuf object which corresponds to this class.
"""
|
{"golden_diff": "diff --git a/src/syft/core/common/object.py b/src/syft/core/common/object.py\n--- a/src/syft/core/common/object.py\n+++ b/src/syft/core/common/object.py\n@@ -37,9 +37,8 @@\n primary purpose of this class. It also sets the 'as_wrapper' flag\n for the 'Serializable' superclass.\n \n- :param id: an override which can be used to set an ID for this object\n- manually. This is probably only used for deserialization.\n- :type id: UID\n+ Args:\n+ id: an override which can be used to set an ID for this object\n \n \"\"\"\n \n@@ -58,8 +57,8 @@\n developers of Syft from modifying .id attributes after an object\n has been initialized.\n \n- :return: returns the unique id of the object\n- :rtype: UID\n+ Returns:\n+ returns the unique id of the object\n \"\"\"\n return self._id\n \n@@ -70,10 +69,11 @@\n comparing whether they have the same .id objects. These objects\n come with their own __eq__ function which we assume to be correct.\n \n- :param other: this is the other ObjectWithIDs to be compared with\n- :type other: Any (note this must be Any or __eq__ fails on other types)\n- :return: returns True/False based on whether the objects are the same\n- :rtype: bool\n+ Args:\n+ other: this is the other ObjectWithIDs to be compared with\n+\n+ Returns:\n+ True/False based on whether the objects are the same\n \"\"\"\n \n try:\n@@ -82,33 +82,39 @@\n return False\n \n def __repr__(self) -> str:\n- \"\"\"Returns a human-readable version of the ObjectWithID\n-\n+ \"\"\"\n Return a human-readable representation of the ObjectWithID with brackets\n so that it can be easily spotted when nested inside of the human-\n- readable representations of other objects.\"\"\"\n+ readable representations of other objects.\n+\n+ Returns:\n+ a human-readable version of the ObjectWithID\n+\n+ \"\"\"\n \n no_dash = str(self.id.value).replace(\"-\", \"\")\n return f\"<{type(self).__name__}: {no_dash}>\"\n \n def repr_short(self) -> str:\n- \"\"\"Returns a SHORT human-readable version of SpecificLocation\n-\n+ \"\"\"\n Return a SHORT human-readable version of the ID which\n makes it print nicer when embedded (often alongside other\n- UID objects) within other object __repr__ methods.\"\"\"\n+ UID objects) within other object __repr__ methods.\n+\n+ Returns:\n+ a SHORT human-readable version of SpecificLocation\n+ \"\"\"\n \n return f\"<{type(self).__name__}:{self.id.repr_short()}>\"\n \n def _object2proto(self) -> ObjectWithID_PB:\n- \"\"\"Returns a protobuf serialization of self.\n-\n+ \"\"\"\n As a requirement of all objects which inherit from Serializable,\n this method transforms the current object into the corresponding\n Protobuf object so that it can be further serialized.\n \n- :return: returns a protobuf object\n- :rtype: ObjectWithID_PB\n+ Returns:\n+ a protobuf object that is the serialization of self.\n \n .. note::\n This method is purely an internal method. Please use object.serialize() or one of\n@@ -124,8 +130,11 @@\n As a requirement of all objects which inherit from Serializable,\n this method transforms a protobuf object into an instance of this class.\n \n- :return: returns an instance of ObjectWithID\n- :rtype: ObjectWithID\n+ Args:\n+ proto: a protobuf object that we wish to convert to instance of this class\n+\n+ Returns:\n+ an instance of ObjectWithID\n \n .. note::\n This method is purely an internal method. Please use syft.deserialize()\n@@ -147,8 +156,8 @@\n it takes whatever type is returned from this method and adds an attribute to it\n with the type of this class attached to it. 
See the MetaSerializable class for details.\n \n- :return: the type of protobuf object which corresponds to this class.\n- :rtype: GeneratedProtocolMessageType\n+ Returns:\n+ the type of protobuf object which corresponds to this class.\n \n \"\"\"\n", "issue": "Fix all darglint docs warnings\n## Description\r\nCurrently there are 419 warnings:\r\n```\r\n$ flake8 src tests | wc -l\r\n419\r\n```\r\n\r\nWe can progressively improve this with multiple PRs.\r\nIf you are interested, please run the checker on the `src` and `tests` folders and try to fix the warnings until there are none.\r\nSome will require adding docs that don't exist while others are just fixes.\r\n\r\nThis will likely require multiple PRs so if you are interested open a PR with a few fixes and we will go from there.\r\n\r\n## Definition of Done\r\nAll docstring warnings are fixed and CI has the darglint checker turned on to fail the build.\n", "before_files": [{"content": "# stdlib\nfrom typing import Any\nfrom typing import Optional\n\n# third party\nfrom google.protobuf.reflection import GeneratedProtocolMessageType\n\n# syft relative\nfrom ...proto.core.common.common_object_pb2 import ObjectWithID as ObjectWithID_PB\nfrom ...util import validate_type\nfrom ..common.serde.deserialize import _deserialize\nfrom ..common.serde.serializable import Serializable\nfrom .uid import UID\n\n\nclass ObjectWithID(Serializable):\n \"\"\"This object is the superclass for nearly all Syft objects. Subclassing\n from this object will cause an object to be initialized with a unique id\n using the process specified in the UID class.\n\n .. note::\n At the time of writing, the only class in Syft which doesn't have an ID\n of some kind is the Client class because it's job is to point to another\n object (which has an ID).\n\n .. note::\n Be aware of performance choices in this class because it is used so\n heavily across the entire codebase. Assume every method is going to\n be called thousands of times during the working day of an average\n data scientist using syft (and millions of times in the context of a\n machine learning job).\n\n \"\"\"\n\n def __init__(self, id: Optional[UID] = None):\n \"\"\"This initializer only exists to set the id attribute, which is the\n primary purpose of this class. It also sets the 'as_wrapper' flag\n for the 'Serializable' superclass.\n\n :param id: an override which can be used to set an ID for this object\n manually. This is probably only used for deserialization.\n :type id: UID\n\n \"\"\"\n\n if id is None:\n id = UID()\n\n self._id: UID = id\n\n # while this class is never used as a simple wrapper,\n # it's possible that sub-classes of this class will be.\n super().__init__()\n\n @property\n def id(self) -> UID:\n \"\"\"We reveal ObjectWithID.id as a property to discourage users and\n developers of Syft from modifying .id attributes after an object\n has been initialized.\n\n :return: returns the unique id of the object\n :rtype: UID\n \"\"\"\n return self._id\n\n def __eq__(self, other: Any) -> bool:\n \"\"\"Checks to see if two ObjectWithIDs are actually the same object.\n\n This checks to see whether this ObjectWithIDs is equal to another by\n comparing whether they have the same .id objects. 
These objects\n come with their own __eq__ function which we assume to be correct.\n\n :param other: this is the other ObjectWithIDs to be compared with\n :type other: Any (note this must be Any or __eq__ fails on other types)\n :return: returns True/False based on whether the objects are the same\n :rtype: bool\n \"\"\"\n\n try:\n return self.id == other.id\n except Exception:\n return False\n\n def __repr__(self) -> str:\n \"\"\"Returns a human-readable version of the ObjectWithID\n\n Return a human-readable representation of the ObjectWithID with brackets\n so that it can be easily spotted when nested inside of the human-\n readable representations of other objects.\"\"\"\n\n no_dash = str(self.id.value).replace(\"-\", \"\")\n return f\"<{type(self).__name__}: {no_dash}>\"\n\n def repr_short(self) -> str:\n \"\"\"Returns a SHORT human-readable version of SpecificLocation\n\n Return a SHORT human-readable version of the ID which\n makes it print nicer when embedded (often alongside other\n UID objects) within other object __repr__ methods.\"\"\"\n\n return f\"<{type(self).__name__}:{self.id.repr_short()}>\"\n\n def _object2proto(self) -> ObjectWithID_PB:\n \"\"\"Returns a protobuf serialization of self.\n\n As a requirement of all objects which inherit from Serializable,\n this method transforms the current object into the corresponding\n Protobuf object so that it can be further serialized.\n\n :return: returns a protobuf object\n :rtype: ObjectWithID_PB\n\n .. note::\n This method is purely an internal method. Please use object.serialize() or one of\n the other public serialization methods if you wish to serialize an\n object.\n \"\"\"\n return ObjectWithID_PB(id=self.id.serialize())\n\n @staticmethod\n def _proto2object(proto: ObjectWithID_PB) -> \"ObjectWithID\":\n \"\"\"Creates a ObjectWithID from a protobuf\n\n As a requirement of all objects which inherit from Serializable,\n this method transforms a protobuf object into an instance of this class.\n\n :return: returns an instance of ObjectWithID\n :rtype: ObjectWithID\n\n .. note::\n This method is purely an internal method. Please use syft.deserialize()\n if you wish to deserialize an object.\n \"\"\"\n _id = validate_type(_object=_deserialize(proto.id), _type=UID, optional=True)\n return ObjectWithID(id=_id)\n\n @staticmethod\n def get_protobuf_schema() -> GeneratedProtocolMessageType:\n \"\"\"Return the type of protobuf object which stores a class of this type\n\n As a part of serialization and deserialization, we need the ability to\n lookup the protobuf object type directly from the object type. This\n static method allows us to do this.\n\n Importantly, this method is also used to create the reverse lookup ability within\n the metaclass of Serializable. In the metaclass, it calls this method and then\n it takes whatever type is returned from this method and adds an attribute to it\n with the type of this class attached to it. 
See the MetaSerializable class for details.\n\n :return: the type of protobuf object which corresponds to this class.\n :rtype: GeneratedProtocolMessageType\n\n \"\"\"\n\n return ObjectWithID_PB\n", "path": "src/syft/core/common/object.py"}], "after_files": [{"content": "# stdlib\nfrom typing import Any\nfrom typing import Optional\n\n# third party\nfrom google.protobuf.reflection import GeneratedProtocolMessageType\n\n# syft relative\nfrom ...proto.core.common.common_object_pb2 import ObjectWithID as ObjectWithID_PB\nfrom ...util import validate_type\nfrom ..common.serde.deserialize import _deserialize\nfrom ..common.serde.serializable import Serializable\nfrom .uid import UID\n\n\nclass ObjectWithID(Serializable):\n \"\"\"This object is the superclass for nearly all Syft objects. Subclassing\n from this object will cause an object to be initialized with a unique id\n using the process specified in the UID class.\n\n .. note::\n At the time of writing, the only class in Syft which doesn't have an ID\n of some kind is the Client class because it's job is to point to another\n object (which has an ID).\n\n .. note::\n Be aware of performance choices in this class because it is used so\n heavily across the entire codebase. Assume every method is going to\n be called thousands of times during the working day of an average\n data scientist using syft (and millions of times in the context of a\n machine learning job).\n\n \"\"\"\n\n def __init__(self, id: Optional[UID] = None):\n \"\"\"This initializer only exists to set the id attribute, which is the\n primary purpose of this class. It also sets the 'as_wrapper' flag\n for the 'Serializable' superclass.\n\n Args:\n id: an override which can be used to set an ID for this object\n\n \"\"\"\n\n if id is None:\n id = UID()\n\n self._id: UID = id\n\n # while this class is never used as a simple wrapper,\n # it's possible that sub-classes of this class will be.\n super().__init__()\n\n @property\n def id(self) -> UID:\n \"\"\"We reveal ObjectWithID.id as a property to discourage users and\n developers of Syft from modifying .id attributes after an object\n has been initialized.\n\n Returns:\n returns the unique id of the object\n \"\"\"\n return self._id\n\n def __eq__(self, other: Any) -> bool:\n \"\"\"Checks to see if two ObjectWithIDs are actually the same object.\n\n This checks to see whether this ObjectWithIDs is equal to another by\n comparing whether they have the same .id objects. 
These objects\n come with their own __eq__ function which we assume to be correct.\n\n Args:\n other: this is the other ObjectWithIDs to be compared with\n\n Returns:\n True/False based on whether the objects are the same\n \"\"\"\n\n try:\n return self.id == other.id\n except Exception:\n return False\n\n def __repr__(self) -> str:\n \"\"\"\n Return a human-readable representation of the ObjectWithID with brackets\n so that it can be easily spotted when nested inside of the human-\n readable representations of other objects.\n\n Returns:\n a human-readable version of the ObjectWithID\n\n \"\"\"\n\n no_dash = str(self.id.value).replace(\"-\", \"\")\n return f\"<{type(self).__name__}: {no_dash}>\"\n\n def repr_short(self) -> str:\n \"\"\"\n Return a SHORT human-readable version of the ID which\n makes it print nicer when embedded (often alongside other\n UID objects) within other object __repr__ methods.\n\n Returns:\n a SHORT human-readable version of SpecificLocation\n \"\"\"\n\n return f\"<{type(self).__name__}:{self.id.repr_short()}>\"\n\n def _object2proto(self) -> ObjectWithID_PB:\n \"\"\"\n As a requirement of all objects which inherit from Serializable,\n this method transforms the current object into the corresponding\n Protobuf object so that it can be further serialized.\n\n Returns:\n a protobuf object that is the serialization of self.\n\n .. note::\n This method is purely an internal method. Please use object.serialize() or one of\n the other public serialization methods if you wish to serialize an\n object.\n \"\"\"\n return ObjectWithID_PB(id=self.id.serialize())\n\n @staticmethod\n def _proto2object(proto: ObjectWithID_PB) -> \"ObjectWithID\":\n \"\"\"Creates a ObjectWithID from a protobuf\n\n As a requirement of all objects which inherit from Serializable,\n this method transforms a protobuf object into an instance of this class.\n\n Args:\n proto: a protobuf object that we wish to convert to instance of this class\n\n Returns:\n an instance of ObjectWithID\n\n .. note::\n This method is purely an internal method. Please use syft.deserialize()\n if you wish to deserialize an object.\n \"\"\"\n _id = validate_type(_object=_deserialize(proto.id), _type=UID, optional=True)\n return ObjectWithID(id=_id)\n\n @staticmethod\n def get_protobuf_schema() -> GeneratedProtocolMessageType:\n \"\"\"Return the type of protobuf object which stores a class of this type\n\n As a part of serialization and deserialization, we need the ability to\n lookup the protobuf object type directly from the object type. This\n static method allows us to do this.\n\n Importantly, this method is also used to create the reverse lookup ability within\n the metaclass of Serializable. In the metaclass, it calls this method and then\n it takes whatever type is returned from this method and adds an attribute to it\n with the type of this class attached to it. See the MetaSerializable class for details.\n\n Returns:\n the type of protobuf object which corresponds to this class.\n\n \"\"\"\n\n return ObjectWithID_PB\n", "path": "src/syft/core/common/object.py"}]}
| 2,052 | 973 |
gh_patches_debug_18483
|
rasdani/github-patches
|
git_diff
|
openai__openai-python-148
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Encoding Error on Windows with WandB
When syncing with `openai wandb sync`, I get a character encoding issue on one of the run files. This happens specifically on Windows which often struggles with the default encoding on a file with `open(filename)`.
One solution is `artifact.new_file(filename, "w", encoding="utf-8")` on line 279 of the `wandb_logger.py` which solves the problem locally for me. Alternatively, using a default of `"utf-8"` in the `artifact.new_file` function should work too, but might have other unintended side effects.
Here is the output:
```
wandb: ERROR Failed to open the provided file (UnicodeEncodeError: 'charmap' codec can't encode character '\u03bc' in position 205764: character maps to <undefined>). Please provide the proper encoding.
Traceback (most recent call last):
File "C:\Users\miniconda3\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\kaiser\miniconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\venv\Scripts\openai.exe\__main__.py", line 7, in <module>
sys.exit(main())
File "C:\venv\lib\site-packages\openai\_openai_scripts.py", line 63, in main
args.func(args)
File "C:\venv\lib\site-packages\openai\cli.py", line 586, in sync
resp = openai.wandb_logger.WandbLogger.sync(
File "C:\venv\lib\site-packages\openai\wandb_logger.py", line 74, in sync
fine_tune_logged = [
File "C:\venv\lib\site-packages\openai\wandb_logger.py", line 75, in <listcomp>
cls._log_fine_tune(
File "C:\venv\lib\site-packages\openai\wandb_logger.py", line 172, in _log_fine_tune
cls._log_artifacts(fine_tune, project, entity)
File "C:\venv\lib\site-packages\openai\wandb_logger.py", line 236, in _log_artifacts
cls._log_artifact_inputs(file, prefix, artifact_type, project, entity)
File "C:\venv\lib\site-packages\openai\wandb_logger.py", line 280, in _log_artifact_inputs
f.write(file_content)
File "C:\Users\miniconda3\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u03bc' in position 205764: character maps to <undefined>
```
--- END ISSUE ---
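The failure above is an encoding-default problem: on Windows, text files opened without an explicit encoding use the active code page (cp1252 in the traceback), which cannot represent characters such as U+03BC. A minimal sketch of the failure and the fix using the builtin `open` — the patch later in this record applies the same `encoding="utf-8"` argument to `artifact.new_file` — looks like this (the filename and content are invented for the example):

```python
# Sketch of the cp1252 failure mode and the utf-8 fix; the path and
# content are invented for the example.
content = "dose in \u03bcg"  # U+03BC has no mapping in cp1252

# Without an explicit encoding, Windows falls back to the locale code
# page, and the write below can raise UnicodeEncodeError:
# with open("fine_tune_details.json", "w") as f:
#     f.write(content)

# Passing the encoding explicitly makes the write platform-independent:
with open("fine_tune_details.json", "w", encoding="utf-8") as f:
    f.write(content)
```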
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openai/wandb_logger.py`
Content:
```
1 try:
2 import wandb
3
4 WANDB_AVAILABLE = True
5 except:
6 WANDB_AVAILABLE = False
7
8
9 if WANDB_AVAILABLE:
10 import datetime
11 import io
12 import json
13 import re
14 from pathlib import Path
15
16 import numpy as np
17 import pandas as pd
18
19 from openai import File, FineTune
20
21
22 class WandbLogger:
23 """
24 Log fine-tunes to [Weights & Biases](https://wandb.me/openai-docs)
25 """
26
27 if not WANDB_AVAILABLE:
28 print("Logging requires wandb to be installed. Run `pip install wandb`.")
29 else:
30 _wandb_api = None
31 _logged_in = False
32
33 @classmethod
34 def sync(
35 cls,
36 id=None,
37 n_fine_tunes=None,
38 project="GPT-3",
39 entity=None,
40 force=False,
41 **kwargs_wandb_init,
42 ):
43 """
44 Sync fine-tunes to Weights & Biases.
45 :param id: The id of the fine-tune (optional)
46 :param n_fine_tunes: Number of most recent fine-tunes to log when an id is not provided. By default, every fine-tune is synced.
47 :param project: Name of the project where you're sending runs. By default, it is "GPT-3".
48 :param entity: Username or team name where you're sending runs. By default, your default entity is used, which is usually your username.
49 :param force: Forces logging and overwrite existing wandb run of the same fine-tune.
50 """
51
52 if not WANDB_AVAILABLE:
53 return
54
55 if id:
56 fine_tune = FineTune.retrieve(id=id)
57 fine_tune.pop("events", None)
58 fine_tunes = [fine_tune]
59
60 else:
61 # get list of fine_tune to log
62 fine_tunes = FineTune.list()
63 if not fine_tunes or fine_tunes.get("data") is None:
64 print("No fine-tune has been retrieved")
65 return
66 fine_tunes = fine_tunes["data"][
67 -n_fine_tunes if n_fine_tunes is not None else None :
68 ]
69
70 # log starting from oldest fine_tune
71 show_individual_warnings = (
72 False if id is None and n_fine_tunes is None else True
73 )
74 fine_tune_logged = [
75 cls._log_fine_tune(
76 fine_tune,
77 project,
78 entity,
79 force,
80 show_individual_warnings,
81 **kwargs_wandb_init,
82 )
83 for fine_tune in fine_tunes
84 ]
85
86 if not show_individual_warnings and not any(fine_tune_logged):
87 print("No new successful fine-tunes were found")
88
89 return "🎉 wandb sync completed successfully"
90
91 @classmethod
92 def _log_fine_tune(
93 cls,
94 fine_tune,
95 project,
96 entity,
97 force,
98 show_individual_warnings,
99 **kwargs_wandb_init,
100 ):
101 fine_tune_id = fine_tune.get("id")
102 status = fine_tune.get("status")
103
104 # check run completed successfully
105 if status != "succeeded":
106 if show_individual_warnings:
107 print(
108 f'Fine-tune {fine_tune_id} has the status "{status}" and will not be logged'
109 )
110 return
111
112 # check results are present
113 try:
114 results_id = fine_tune["result_files"][0]["id"]
115 results = File.download(id=results_id).decode("utf-8")
116 except:
117 if show_individual_warnings:
118 print(f"Fine-tune {fine_tune_id} has no results and will not be logged")
119 return
120
121 # check run has not been logged already
122 run_path = f"{project}/{fine_tune_id}"
123 if entity is not None:
124 run_path = f"{entity}/{run_path}"
125 wandb_run = cls._get_wandb_run(run_path)
126 if wandb_run:
127 wandb_status = wandb_run.summary.get("status")
128 if show_individual_warnings:
129 if wandb_status == "succeeded":
130 print(
131 f"Fine-tune {fine_tune_id} has already been logged successfully at {wandb_run.url}"
132 )
133 if not force:
134 print(
135 'Use "--force" in the CLI or "force=True" in python if you want to overwrite previous run'
136 )
137 else:
138 print(
139 f"A run for fine-tune {fine_tune_id} was previously created but didn't end successfully"
140 )
141 if wandb_status != "succeeded" or force:
142 print(
143 f"A new wandb run will be created for fine-tune {fine_tune_id} and previous run will be overwritten"
144 )
145 if wandb_status == "succeeded" and not force:
146 return
147
148 # start a wandb run
149 wandb.init(
150 job_type="fine-tune",
151 config=cls._get_config(fine_tune),
152 project=project,
153 entity=entity,
154 name=fine_tune_id,
155 id=fine_tune_id,
156 **kwargs_wandb_init,
157 )
158
159 # log results
160 df_results = pd.read_csv(io.StringIO(results))
161 for _, row in df_results.iterrows():
162 metrics = {k: v for k, v in row.items() if not np.isnan(v)}
163 step = metrics.pop("step")
164 if step is not None:
165 step = int(step)
166 wandb.log(metrics, step=step)
167 fine_tuned_model = fine_tune.get("fine_tuned_model")
168 if fine_tuned_model is not None:
169 wandb.summary["fine_tuned_model"] = fine_tuned_model
170
171 # training/validation files and fine-tune details
172 cls._log_artifacts(fine_tune, project, entity)
173
174 # mark run as complete
175 wandb.summary["status"] = "succeeded"
176
177 wandb.finish()
178 return True
179
180 @classmethod
181 def _ensure_logged_in(cls):
182 if not cls._logged_in:
183 if wandb.login():
184 cls._logged_in = True
185 else:
186 raise Exception("You need to log in to wandb")
187
188 @classmethod
189 def _get_wandb_run(cls, run_path):
190 cls._ensure_logged_in()
191 try:
192 if cls._wandb_api is None:
193 cls._wandb_api = wandb.Api()
194 return cls._wandb_api.run(run_path)
195 except Exception:
196 return None
197
198 @classmethod
199 def _get_wandb_artifact(cls, artifact_path):
200 cls._ensure_logged_in()
201 try:
202 if cls._wandb_api is None:
203 cls._wandb_api = wandb.Api()
204 return cls._wandb_api.artifact(artifact_path)
205 except Exception:
206 return None
207
208 @classmethod
209 def _get_config(cls, fine_tune):
210 config = dict(fine_tune)
211 for key in ("training_files", "validation_files", "result_files"):
212 if config.get(key) and len(config[key]):
213 config[key] = config[key][0]
214 if config.get("created_at"):
215 config["created_at"] = datetime.datetime.fromtimestamp(config["created_at"])
216 return config
217
218 @classmethod
219 def _log_artifacts(cls, fine_tune, project, entity):
220 # training/validation files
221 training_file = (
222 fine_tune["training_files"][0]
223 if fine_tune.get("training_files") and len(fine_tune["training_files"])
224 else None
225 )
226 validation_file = (
227 fine_tune["validation_files"][0]
228 if fine_tune.get("validation_files") and len(fine_tune["validation_files"])
229 else None
230 )
231 for file, prefix, artifact_type in (
232 (training_file, "train", "training_files"),
233 (validation_file, "valid", "validation_files"),
234 ):
235 if file is not None:
236 cls._log_artifact_inputs(file, prefix, artifact_type, project, entity)
237
238 # fine-tune details
239 fine_tune_id = fine_tune.get("id")
240 artifact = wandb.Artifact(
241 "fine_tune_details",
242 type="fine_tune_details",
243 metadata=fine_tune,
244 )
245 with artifact.new_file("fine_tune_details.json") as f:
246 json.dump(fine_tune, f, indent=2)
247 wandb.run.log_artifact(
248 artifact,
249 aliases=["latest", fine_tune_id],
250 )
251
252 @classmethod
253 def _log_artifact_inputs(cls, file, prefix, artifact_type, project, entity):
254 file_id = file["id"]
255 filename = Path(file["filename"]).name
256 stem = Path(file["filename"]).stem
257
258 # get input artifact
259 artifact_name = f"{prefix}-{filename}"
260 # sanitize name to valid wandb artifact name
261 artifact_name = re.sub(r"[^a-zA-Z0-9_\-.]", "_", artifact_name)
262 artifact_alias = file_id
263 artifact_path = f"{project}/{artifact_name}:{artifact_alias}"
264 if entity is not None:
265 artifact_path = f"{entity}/{artifact_path}"
266 artifact = cls._get_wandb_artifact(artifact_path)
267
268 # create artifact if file not already logged previously
269 if artifact is None:
270 # get file content
271 try:
272 file_content = File.download(id=file_id).decode("utf-8")
273 except:
274 print(
275 f"File {file_id} could not be retrieved. Make sure you are allowed to download training/validation files"
276 )
277 return
278 artifact = wandb.Artifact(artifact_name, type=artifact_type, metadata=file)
279 with artifact.new_file(filename, mode="w") as f:
280 f.write(file_content)
281
282 # create a Table
283 try:
284 table, n_items = cls._make_table(file_content)
285 artifact.add(table, stem)
286 wandb.config.update({f"n_{prefix}": n_items})
287 artifact.metadata["items"] = n_items
288 except:
289 print(f"File {file_id} could not be read as a valid JSON file")
290 else:
291 # log number of items
292 wandb.config.update({f"n_{prefix}": artifact.metadata.get("items")})
293
294 wandb.run.use_artifact(artifact, aliases=["latest", artifact_alias])
295
296 @classmethod
297 def _make_table(cls, file_content):
298 df = pd.read_json(io.StringIO(file_content), orient="records", lines=True)
299 return wandb.Table(dataframe=df), len(df)
300
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openai/wandb_logger.py b/openai/wandb_logger.py
--- a/openai/wandb_logger.py
+++ b/openai/wandb_logger.py
@@ -242,7 +242,9 @@
type="fine_tune_details",
metadata=fine_tune,
)
- with artifact.new_file("fine_tune_details.json") as f:
+ with artifact.new_file(
+ "fine_tune_details.json", mode="w", encoding="utf-8"
+ ) as f:
json.dump(fine_tune, f, indent=2)
wandb.run.log_artifact(
artifact,
@@ -276,7 +278,7 @@
)
return
artifact = wandb.Artifact(artifact_name, type=artifact_type, metadata=file)
- with artifact.new_file(filename, mode="w") as f:
+ with artifact.new_file(filename, mode="w", encoding="utf-8") as f:
f.write(file_content)
# create a Table
|
{"golden_diff": "diff --git a/openai/wandb_logger.py b/openai/wandb_logger.py\n--- a/openai/wandb_logger.py\n+++ b/openai/wandb_logger.py\n@@ -242,7 +242,9 @@\n type=\"fine_tune_details\",\n metadata=fine_tune,\n )\n- with artifact.new_file(\"fine_tune_details.json\") as f:\n+ with artifact.new_file(\n+ \"fine_tune_details.json\", mode=\"w\", encoding=\"utf-8\"\n+ ) as f:\n json.dump(fine_tune, f, indent=2)\n wandb.run.log_artifact(\n artifact,\n@@ -276,7 +278,7 @@\n )\n return\n artifact = wandb.Artifact(artifact_name, type=artifact_type, metadata=file)\n- with artifact.new_file(filename, mode=\"w\") as f:\n+ with artifact.new_file(filename, mode=\"w\", encoding=\"utf-8\") as f:\n f.write(file_content)\n \n # create a Table\n", "issue": "Encoding Error on Windows with WandB\nWhen syncing with `openai wandb sync`, I get a character encoding issue on one of the run files. This happens specifically on Windows which often struggles with the default encoding on a file with `open(filename)`.\r\n\r\nOne solution is `artifact.new_file(filename, \"w\", encoding=\"utf-8\")` on line 279 of the `wandb_logger.py` which solves the problem locally for me. Alternatively, using a default of `\"utf-8\"` in the `artifact.new_file` function should work too, but might have other unintended side effects.\r\n\r\nHere is the output:\r\n```\r\nwandb: ERROR Failed to open the provided file (UnicodeEncodeError: 'charmap' codec can't encode character '\\u03bc' in position 205764: character maps to <undefined>). Please provide the proper encoding.\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\miniconda3\\lib\\runpy.py\", line 197, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\kaiser\\miniconda3\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\venv\\Scripts\\openai.exe\\__main__.py\", line 7, in <module>\r\n sys.exit(main())\r\n File \"C:\\venv\\lib\\site-packages\\openai\\_openai_scripts.py\", line 63, in main\r\n args.func(args)\r\n File \"C:\\venv\\lib\\site-packages\\openai\\cli.py\", line 586, in sync\r\n resp = openai.wandb_logger.WandbLogger.sync(\r\n File \"C:\\venv\\lib\\site-packages\\openai\\wandb_logger.py\", line 74, in sync\r\n fine_tune_logged = [\r\n File \"C:\\venv\\lib\\site-packages\\openai\\wandb_logger.py\", line 75, in <listcomp>\r\n cls._log_fine_tune(\r\n File \"C:\\venv\\lib\\site-packages\\openai\\wandb_logger.py\", line 172, in _log_fine_tune\r\n cls._log_artifacts(fine_tune, project, entity)\r\n File \"C:\\venv\\lib\\site-packages\\openai\\wandb_logger.py\", line 236, in _log_artifacts\r\n cls._log_artifact_inputs(file, prefix, artifact_type, project, entity)\r\n File \"C:\\venv\\lib\\site-packages\\openai\\wandb_logger.py\", line 280, in _log_artifact_inputs\r\n f.write(file_content)\r\n File \"C:\\Users\\miniconda3\\lib\\encodings\\cp1252.py\", line 19, in encode\r\n return codecs.charmap_encode(input,self.errors,encoding_table)[0]\r\nUnicodeEncodeError: 'charmap' codec can't encode character '\\u03bc' in position 205764: character maps to <undefined>\r\n```\n", "before_files": [{"content": "try:\n import wandb\n\n WANDB_AVAILABLE = True\nexcept:\n WANDB_AVAILABLE = False\n\n\nif WANDB_AVAILABLE:\n import datetime\n import io\n import json\n import re\n from pathlib import Path\n\n import numpy as np\n import pandas as pd\n\n from openai import File, FineTune\n\n\nclass WandbLogger:\n \"\"\"\n Log fine-tunes to [Weights & Biases](https://wandb.me/openai-docs)\n \"\"\"\n\n if not 
WANDB_AVAILABLE:\n print(\"Logging requires wandb to be installed. Run `pip install wandb`.\")\n else:\n _wandb_api = None\n _logged_in = False\n\n @classmethod\n def sync(\n cls,\n id=None,\n n_fine_tunes=None,\n project=\"GPT-3\",\n entity=None,\n force=False,\n **kwargs_wandb_init,\n ):\n \"\"\"\n Sync fine-tunes to Weights & Biases.\n :param id: The id of the fine-tune (optional)\n :param n_fine_tunes: Number of most recent fine-tunes to log when an id is not provided. By default, every fine-tune is synced.\n :param project: Name of the project where you're sending runs. By default, it is \"GPT-3\".\n :param entity: Username or team name where you're sending runs. By default, your default entity is used, which is usually your username.\n :param force: Forces logging and overwrite existing wandb run of the same fine-tune.\n \"\"\"\n\n if not WANDB_AVAILABLE:\n return\n\n if id:\n fine_tune = FineTune.retrieve(id=id)\n fine_tune.pop(\"events\", None)\n fine_tunes = [fine_tune]\n\n else:\n # get list of fine_tune to log\n fine_tunes = FineTune.list()\n if not fine_tunes or fine_tunes.get(\"data\") is None:\n print(\"No fine-tune has been retrieved\")\n return\n fine_tunes = fine_tunes[\"data\"][\n -n_fine_tunes if n_fine_tunes is not None else None :\n ]\n\n # log starting from oldest fine_tune\n show_individual_warnings = (\n False if id is None and n_fine_tunes is None else True\n )\n fine_tune_logged = [\n cls._log_fine_tune(\n fine_tune,\n project,\n entity,\n force,\n show_individual_warnings,\n **kwargs_wandb_init,\n )\n for fine_tune in fine_tunes\n ]\n\n if not show_individual_warnings and not any(fine_tune_logged):\n print(\"No new successful fine-tunes were found\")\n\n return \"\ud83c\udf89 wandb sync completed successfully\"\n\n @classmethod\n def _log_fine_tune(\n cls,\n fine_tune,\n project,\n entity,\n force,\n show_individual_warnings,\n **kwargs_wandb_init,\n ):\n fine_tune_id = fine_tune.get(\"id\")\n status = fine_tune.get(\"status\")\n\n # check run completed successfully\n if status != \"succeeded\":\n if show_individual_warnings:\n print(\n f'Fine-tune {fine_tune_id} has the status \"{status}\" and will not be logged'\n )\n return\n\n # check results are present\n try:\n results_id = fine_tune[\"result_files\"][0][\"id\"]\n results = File.download(id=results_id).decode(\"utf-8\")\n except:\n if show_individual_warnings:\n print(f\"Fine-tune {fine_tune_id} has no results and will not be logged\")\n return\n\n # check run has not been logged already\n run_path = f\"{project}/{fine_tune_id}\"\n if entity is not None:\n run_path = f\"{entity}/{run_path}\"\n wandb_run = cls._get_wandb_run(run_path)\n if wandb_run:\n wandb_status = wandb_run.summary.get(\"status\")\n if show_individual_warnings:\n if wandb_status == \"succeeded\":\n print(\n f\"Fine-tune {fine_tune_id} has already been logged successfully at {wandb_run.url}\"\n )\n if not force:\n print(\n 'Use \"--force\" in the CLI or \"force=True\" in python if you want to overwrite previous run'\n )\n else:\n print(\n f\"A run for fine-tune {fine_tune_id} was previously created but didn't end successfully\"\n )\n if wandb_status != \"succeeded\" or force:\n print(\n f\"A new wandb run will be created for fine-tune {fine_tune_id} and previous run will be overwritten\"\n )\n if wandb_status == \"succeeded\" and not force:\n return\n\n # start a wandb run\n wandb.init(\n job_type=\"fine-tune\",\n config=cls._get_config(fine_tune),\n project=project,\n entity=entity,\n name=fine_tune_id,\n id=fine_tune_id,\n 
**kwargs_wandb_init,\n )\n\n # log results\n df_results = pd.read_csv(io.StringIO(results))\n for _, row in df_results.iterrows():\n metrics = {k: v for k, v in row.items() if not np.isnan(v)}\n step = metrics.pop(\"step\")\n if step is not None:\n step = int(step)\n wandb.log(metrics, step=step)\n fine_tuned_model = fine_tune.get(\"fine_tuned_model\")\n if fine_tuned_model is not None:\n wandb.summary[\"fine_tuned_model\"] = fine_tuned_model\n\n # training/validation files and fine-tune details\n cls._log_artifacts(fine_tune, project, entity)\n\n # mark run as complete\n wandb.summary[\"status\"] = \"succeeded\"\n\n wandb.finish()\n return True\n\n @classmethod\n def _ensure_logged_in(cls):\n if not cls._logged_in:\n if wandb.login():\n cls._logged_in = True\n else:\n raise Exception(\"You need to log in to wandb\")\n\n @classmethod\n def _get_wandb_run(cls, run_path):\n cls._ensure_logged_in()\n try:\n if cls._wandb_api is None:\n cls._wandb_api = wandb.Api()\n return cls._wandb_api.run(run_path)\n except Exception:\n return None\n\n @classmethod\n def _get_wandb_artifact(cls, artifact_path):\n cls._ensure_logged_in()\n try:\n if cls._wandb_api is None:\n cls._wandb_api = wandb.Api()\n return cls._wandb_api.artifact(artifact_path)\n except Exception:\n return None\n\n @classmethod\n def _get_config(cls, fine_tune):\n config = dict(fine_tune)\n for key in (\"training_files\", \"validation_files\", \"result_files\"):\n if config.get(key) and len(config[key]):\n config[key] = config[key][0]\n if config.get(\"created_at\"):\n config[\"created_at\"] = datetime.datetime.fromtimestamp(config[\"created_at\"])\n return config\n\n @classmethod\n def _log_artifacts(cls, fine_tune, project, entity):\n # training/validation files\n training_file = (\n fine_tune[\"training_files\"][0]\n if fine_tune.get(\"training_files\") and len(fine_tune[\"training_files\"])\n else None\n )\n validation_file = (\n fine_tune[\"validation_files\"][0]\n if fine_tune.get(\"validation_files\") and len(fine_tune[\"validation_files\"])\n else None\n )\n for file, prefix, artifact_type in (\n (training_file, \"train\", \"training_files\"),\n (validation_file, \"valid\", \"validation_files\"),\n ):\n if file is not None:\n cls._log_artifact_inputs(file, prefix, artifact_type, project, entity)\n\n # fine-tune details\n fine_tune_id = fine_tune.get(\"id\")\n artifact = wandb.Artifact(\n \"fine_tune_details\",\n type=\"fine_tune_details\",\n metadata=fine_tune,\n )\n with artifact.new_file(\"fine_tune_details.json\") as f:\n json.dump(fine_tune, f, indent=2)\n wandb.run.log_artifact(\n artifact,\n aliases=[\"latest\", fine_tune_id],\n )\n\n @classmethod\n def _log_artifact_inputs(cls, file, prefix, artifact_type, project, entity):\n file_id = file[\"id\"]\n filename = Path(file[\"filename\"]).name\n stem = Path(file[\"filename\"]).stem\n\n # get input artifact\n artifact_name = f\"{prefix}-{filename}\"\n # sanitize name to valid wandb artifact name\n artifact_name = re.sub(r\"[^a-zA-Z0-9_\\-.]\", \"_\", artifact_name)\n artifact_alias = file_id\n artifact_path = f\"{project}/{artifact_name}:{artifact_alias}\"\n if entity is not None:\n artifact_path = f\"{entity}/{artifact_path}\"\n artifact = cls._get_wandb_artifact(artifact_path)\n\n # create artifact if file not already logged previously\n if artifact is None:\n # get file content\n try:\n file_content = File.download(id=file_id).decode(\"utf-8\")\n except:\n print(\n f\"File {file_id} could not be retrieved. 
Make sure you are allowed to download training/validation files\"\n )\n return\n artifact = wandb.Artifact(artifact_name, type=artifact_type, metadata=file)\n with artifact.new_file(filename, mode=\"w\") as f:\n f.write(file_content)\n\n # create a Table\n try:\n table, n_items = cls._make_table(file_content)\n artifact.add(table, stem)\n wandb.config.update({f\"n_{prefix}\": n_items})\n artifact.metadata[\"items\"] = n_items\n except:\n print(f\"File {file_id} could not be read as a valid JSON file\")\n else:\n # log number of items\n wandb.config.update({f\"n_{prefix}\": artifact.metadata.get(\"items\")})\n\n wandb.run.use_artifact(artifact, aliases=[\"latest\", artifact_alias])\n\n @classmethod\n def _make_table(cls, file_content):\n df = pd.read_json(io.StringIO(file_content), orient=\"records\", lines=True)\n return wandb.Table(dataframe=df), len(df)\n", "path": "openai/wandb_logger.py"}], "after_files": [{"content": "try:\n import wandb\n\n WANDB_AVAILABLE = True\nexcept:\n WANDB_AVAILABLE = False\n\n\nif WANDB_AVAILABLE:\n import datetime\n import io\n import json\n import re\n from pathlib import Path\n\n import numpy as np\n import pandas as pd\n\n from openai import File, FineTune\n\n\nclass WandbLogger:\n \"\"\"\n Log fine-tunes to [Weights & Biases](https://wandb.me/openai-docs)\n \"\"\"\n\n if not WANDB_AVAILABLE:\n print(\"Logging requires wandb to be installed. Run `pip install wandb`.\")\n else:\n _wandb_api = None\n _logged_in = False\n\n @classmethod\n def sync(\n cls,\n id=None,\n n_fine_tunes=None,\n project=\"GPT-3\",\n entity=None,\n force=False,\n **kwargs_wandb_init,\n ):\n \"\"\"\n Sync fine-tunes to Weights & Biases.\n :param id: The id of the fine-tune (optional)\n :param n_fine_tunes: Number of most recent fine-tunes to log when an id is not provided. By default, every fine-tune is synced.\n :param project: Name of the project where you're sending runs. By default, it is \"GPT-3\".\n :param entity: Username or team name where you're sending runs. 
By default, your default entity is used, which is usually your username.\n :param force: Forces logging and overwrite existing wandb run of the same fine-tune.\n \"\"\"\n\n if not WANDB_AVAILABLE:\n return\n\n if id:\n fine_tune = FineTune.retrieve(id=id)\n fine_tune.pop(\"events\", None)\n fine_tunes = [fine_tune]\n\n else:\n # get list of fine_tune to log\n fine_tunes = FineTune.list()\n if not fine_tunes or fine_tunes.get(\"data\") is None:\n print(\"No fine-tune has been retrieved\")\n return\n fine_tunes = fine_tunes[\"data\"][\n -n_fine_tunes if n_fine_tunes is not None else None :\n ]\n\n # log starting from oldest fine_tune\n show_individual_warnings = (\n False if id is None and n_fine_tunes is None else True\n )\n fine_tune_logged = [\n cls._log_fine_tune(\n fine_tune,\n project,\n entity,\n force,\n show_individual_warnings,\n **kwargs_wandb_init,\n )\n for fine_tune in fine_tunes\n ]\n\n if not show_individual_warnings and not any(fine_tune_logged):\n print(\"No new successful fine-tunes were found\")\n\n return \"\ud83c\udf89 wandb sync completed successfully\"\n\n @classmethod\n def _log_fine_tune(\n cls,\n fine_tune,\n project,\n entity,\n force,\n show_individual_warnings,\n **kwargs_wandb_init,\n ):\n fine_tune_id = fine_tune.get(\"id\")\n status = fine_tune.get(\"status\")\n\n # check run completed successfully\n if status != \"succeeded\":\n if show_individual_warnings:\n print(\n f'Fine-tune {fine_tune_id} has the status \"{status}\" and will not be logged'\n )\n return\n\n # check results are present\n try:\n results_id = fine_tune[\"result_files\"][0][\"id\"]\n results = File.download(id=results_id).decode(\"utf-8\")\n except:\n if show_individual_warnings:\n print(f\"Fine-tune {fine_tune_id} has no results and will not be logged\")\n return\n\n # check run has not been logged already\n run_path = f\"{project}/{fine_tune_id}\"\n if entity is not None:\n run_path = f\"{entity}/{run_path}\"\n wandb_run = cls._get_wandb_run(run_path)\n if wandb_run:\n wandb_status = wandb_run.summary.get(\"status\")\n if show_individual_warnings:\n if wandb_status == \"succeeded\":\n print(\n f\"Fine-tune {fine_tune_id} has already been logged successfully at {wandb_run.url}\"\n )\n if not force:\n print(\n 'Use \"--force\" in the CLI or \"force=True\" in python if you want to overwrite previous run'\n )\n else:\n print(\n f\"A run for fine-tune {fine_tune_id} was previously created but didn't end successfully\"\n )\n if wandb_status != \"succeeded\" or force:\n print(\n f\"A new wandb run will be created for fine-tune {fine_tune_id} and previous run will be overwritten\"\n )\n if wandb_status == \"succeeded\" and not force:\n return\n\n # start a wandb run\n wandb.init(\n job_type=\"fine-tune\",\n config=cls._get_config(fine_tune),\n project=project,\n entity=entity,\n name=fine_tune_id,\n id=fine_tune_id,\n **kwargs_wandb_init,\n )\n\n # log results\n df_results = pd.read_csv(io.StringIO(results))\n for _, row in df_results.iterrows():\n metrics = {k: v for k, v in row.items() if not np.isnan(v)}\n step = metrics.pop(\"step\")\n if step is not None:\n step = int(step)\n wandb.log(metrics, step=step)\n fine_tuned_model = fine_tune.get(\"fine_tuned_model\")\n if fine_tuned_model is not None:\n wandb.summary[\"fine_tuned_model\"] = fine_tuned_model\n\n # training/validation files and fine-tune details\n cls._log_artifacts(fine_tune, project, entity)\n\n # mark run as complete\n wandb.summary[\"status\"] = \"succeeded\"\n\n wandb.finish()\n return True\n\n @classmethod\n def 
_ensure_logged_in(cls):\n if not cls._logged_in:\n if wandb.login():\n cls._logged_in = True\n else:\n raise Exception(\"You need to log in to wandb\")\n\n @classmethod\n def _get_wandb_run(cls, run_path):\n cls._ensure_logged_in()\n try:\n if cls._wandb_api is None:\n cls._wandb_api = wandb.Api()\n return cls._wandb_api.run(run_path)\n except Exception:\n return None\n\n @classmethod\n def _get_wandb_artifact(cls, artifact_path):\n cls._ensure_logged_in()\n try:\n if cls._wandb_api is None:\n cls._wandb_api = wandb.Api()\n return cls._wandb_api.artifact(artifact_path)\n except Exception:\n return None\n\n @classmethod\n def _get_config(cls, fine_tune):\n config = dict(fine_tune)\n for key in (\"training_files\", \"validation_files\", \"result_files\"):\n if config.get(key) and len(config[key]):\n config[key] = config[key][0]\n if config.get(\"created_at\"):\n config[\"created_at\"] = datetime.datetime.fromtimestamp(config[\"created_at\"])\n return config\n\n @classmethod\n def _log_artifacts(cls, fine_tune, project, entity):\n # training/validation files\n training_file = (\n fine_tune[\"training_files\"][0]\n if fine_tune.get(\"training_files\") and len(fine_tune[\"training_files\"])\n else None\n )\n validation_file = (\n fine_tune[\"validation_files\"][0]\n if fine_tune.get(\"validation_files\") and len(fine_tune[\"validation_files\"])\n else None\n )\n for file, prefix, artifact_type in (\n (training_file, \"train\", \"training_files\"),\n (validation_file, \"valid\", \"validation_files\"),\n ):\n if file is not None:\n cls._log_artifact_inputs(file, prefix, artifact_type, project, entity)\n\n # fine-tune details\n fine_tune_id = fine_tune.get(\"id\")\n artifact = wandb.Artifact(\n \"fine_tune_details\",\n type=\"fine_tune_details\",\n metadata=fine_tune,\n )\n with artifact.new_file(\n \"fine_tune_details.json\", mode=\"w\", encoding=\"utf-8\"\n ) as f:\n json.dump(fine_tune, f, indent=2)\n wandb.run.log_artifact(\n artifact,\n aliases=[\"latest\", fine_tune_id],\n )\n\n @classmethod\n def _log_artifact_inputs(cls, file, prefix, artifact_type, project, entity):\n file_id = file[\"id\"]\n filename = Path(file[\"filename\"]).name\n stem = Path(file[\"filename\"]).stem\n\n # get input artifact\n artifact_name = f\"{prefix}-{filename}\"\n # sanitize name to valid wandb artifact name\n artifact_name = re.sub(r\"[^a-zA-Z0-9_\\-.]\", \"_\", artifact_name)\n artifact_alias = file_id\n artifact_path = f\"{project}/{artifact_name}:{artifact_alias}\"\n if entity is not None:\n artifact_path = f\"{entity}/{artifact_path}\"\n artifact = cls._get_wandb_artifact(artifact_path)\n\n # create artifact if file not already logged previously\n if artifact is None:\n # get file content\n try:\n file_content = File.download(id=file_id).decode(\"utf-8\")\n except:\n print(\n f\"File {file_id} could not be retrieved. 
Make sure you are allowed to download training/validation files\"\n )\n return\n artifact = wandb.Artifact(artifact_name, type=artifact_type, metadata=file)\n with artifact.new_file(filename, mode=\"w\", encoding=\"utf-8\") as f:\n f.write(file_content)\n\n # create a Table\n try:\n table, n_items = cls._make_table(file_content)\n artifact.add(table, stem)\n wandb.config.update({f\"n_{prefix}\": n_items})\n artifact.metadata[\"items\"] = n_items\n except:\n print(f\"File {file_id} could not be read as a valid JSON file\")\n else:\n # log number of items\n wandb.config.update({f\"n_{prefix}\": artifact.metadata.get(\"items\")})\n\n wandb.run.use_artifact(artifact, aliases=[\"latest\", artifact_alias])\n\n @classmethod\n def _make_table(cls, file_content):\n df = pd.read_json(io.StringIO(file_content), orient=\"records\", lines=True)\n return wandb.Table(dataframe=df), len(df)\n", "path": "openai/wandb_logger.py"}]}
| 4,050 | 231 |
gh_patches_debug_15105
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-2355
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fix: elasticsearch `bulk` API produces lists
# Description
Fixes an issue seen in production where the elasticsearch bulk API would end up sending a list through this code path and not a dictionary.
For example:
```
File \"/code/platform_be/core/logic/ingest.py\", line 378, in update_source_doc_version_history
bulk(es_client, bulk_es_updates)
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\", line 521, in bulk
for ok, item in streaming_bulk(
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\", line 436, in streaming_bulk
for data, (ok, info) in zip(
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\", line 339, in _process_bulk_chunk
resp = client.bulk(*args, operations=bulk_actions, **kwargs) # type: ignore[arg-type]
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/utils.py\", line 414, in wrapped
return api(*args, **kwargs)
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/__init__.py\", line 704, in bulk
return self.perform_request( # type: ignore[return-value]
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/_base.py\", line 285, in perform_request
meta, resp_body = self.transport.perform_request(
File \"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/__init__.py\", line 242, in wrapper
attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(
File \"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/utils.py\", line 58, in sanitize_body
flatten_body = _flatten_dict(body)
File \"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/utils.py\", line 31, in _flatten_dict
for k, v in d.items():
AttributeError: 'list' object has no attribute 'items'"
```
## Type of change
Please delete options that are not relevant.
- [x] Bug fix (non-breaking change which fixes an issue)
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
- [x] Unit tests
# Does This PR Require a Core Repo Change?
- [x] No.
# Checklist:
See [contributing.md](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/CONTRIBUTING.md) for styleguide, changelog guidelines, and more.
- [ ] Followed the style guidelines of this project
- [ ] Changelogs have been updated
- [ ] Unit tests have been added
- [ ] Documentation has been updated
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This library allows tracing HTTP elasticsearch made by the
17 `elasticsearch <https://elasticsearch-py.readthedocs.io/en/master/>`_ library.
18
19 Usage
20 -----
21
22 .. code-block:: python
23
24 from opentelemetry.instrumentation.elasticsearch import ElasticsearchInstrumentor
25 import elasticsearch
26
27
28 # instrument elasticsearch
29 ElasticsearchInstrumentor().instrument()
30
31 # Using elasticsearch as normal now will automatically generate spans
32 es = elasticsearch.Elasticsearch()
33 es.index(index='my-index', doc_type='my-type', id=1, body={'my': 'data', 'timestamp': datetime.now()})
34 es.get(index='my-index', doc_type='my-type', id=1)
35
36 Elasticsearch instrumentation prefixes operation names with the string "Elasticsearch". This
37 can be changed to a different string by either setting the OTEL_PYTHON_ELASTICSEARCH_NAME_PREFIX
38 environment variable or by passing the prefix as an argument to the instrumentor. For example,
39
40
41 .. code-block:: python
42
43 ElasticsearchInstrumentor("my-custom-prefix").instrument()
44
45 The instrument() method accepts the following keyword args:
46 tracer_provider (TracerProvider) - an optional tracer provider
47 request_hook (Callable) - a function with extra user-defined logic to be performed before performing the request
48 this function signature is:
49 def request_hook(span: Span, method: str, url: str, kwargs)
50
51 response_hook (Callable) - a function with extra user-defined logic to be performed after performing the request
52 this function signature is:
53 def response_hook(span: Span, response: dict)
54
55 for example:
56
57 .. code: python
58
59 from opentelemetry.instrumentation.elasticsearch import ElasticsearchInstrumentor
60 import elasticsearch
61
62 def request_hook(span, method, url, kwargs):
63 if span and span.is_recording():
64 span.set_attribute("custom_user_attribute_from_request_hook", "some-value")
65
66 def response_hook(span, response):
67 if span and span.is_recording():
68 span.set_attribute("custom_user_attribute_from_response_hook", "some-value")
69
70 # instrument elasticsearch with request and response hooks
71 ElasticsearchInstrumentor().instrument(request_hook=request_hook, response_hook=response_hook)
72
73 # Using elasticsearch as normal now will automatically generate spans,
74 # including user custom attributes added from the hooks
75 es = elasticsearch.Elasticsearch()
76 es.index(index='my-index', doc_type='my-type', id=1, body={'my': 'data', 'timestamp': datetime.now()})
77 es.get(index='my-index', doc_type='my-type', id=1)
78
79 API
80 ---
81 """
82
83 import re
84 from logging import getLogger
85 from os import environ
86 from typing import Collection
87
88 import elasticsearch
89 import elasticsearch.exceptions
90 from wrapt import wrap_function_wrapper as _wrap
91
92 from opentelemetry.instrumentation.elasticsearch.package import _instruments
93 from opentelemetry.instrumentation.elasticsearch.version import __version__
94 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
95 from opentelemetry.instrumentation.utils import unwrap
96 from opentelemetry.semconv.trace import SpanAttributes
97 from opentelemetry.trace import SpanKind, get_tracer
98
99 from .utils import sanitize_body
100
101 # Split of elasticsearch and elastic_transport in 8.0.0+
102 # https://www.elastic.co/guide/en/elasticsearch/client/python-api/master/release-notes.html#rn-8-0-0
103 es_transport_split = elasticsearch.VERSION[0] > 7
104 if es_transport_split:
105 import elastic_transport
106
107 logger = getLogger(__name__)
108
109
110 # Values to add as tags from the actual
111 # payload returned by Elasticsearch, if any.
112 _ATTRIBUTES_FROM_RESULT = [
113 "found",
114 "timed_out",
115 "took",
116 ]
117
118 _DEFAULT_OP_NAME = "request"
119
120
121 class ElasticsearchInstrumentor(BaseInstrumentor):
122 """An instrumentor for elasticsearch
123 See `BaseInstrumentor`
124 """
125
126 def __init__(self, span_name_prefix=None):
127 if not span_name_prefix:
128 span_name_prefix = environ.get(
129 "OTEL_PYTHON_ELASTICSEARCH_NAME_PREFIX",
130 "Elasticsearch",
131 )
132 self._span_name_prefix = span_name_prefix.strip()
133 super().__init__()
134
135 def instrumentation_dependencies(self) -> Collection[str]:
136 return _instruments
137
138 def _instrument(self, **kwargs):
139 """
140 Instruments Elasticsearch module
141 """
142 tracer_provider = kwargs.get("tracer_provider")
143 tracer = get_tracer(
144 __name__,
145 __version__,
146 tracer_provider,
147 schema_url="https://opentelemetry.io/schemas/1.11.0",
148 )
149 request_hook = kwargs.get("request_hook")
150 response_hook = kwargs.get("response_hook")
151 if es_transport_split:
152 _wrap(
153 elastic_transport,
154 "Transport.perform_request",
155 _wrap_perform_request(
156 tracer,
157 self._span_name_prefix,
158 request_hook,
159 response_hook,
160 ),
161 )
162 else:
163 _wrap(
164 elasticsearch,
165 "Transport.perform_request",
166 _wrap_perform_request(
167 tracer,
168 self._span_name_prefix,
169 request_hook,
170 response_hook,
171 ),
172 )
173
174 def _uninstrument(self, **kwargs):
175 # pylint: disable=no-member
176 unwrap(elasticsearch.Transport, "perform_request")
177
178
179 _regex_doc_url = re.compile(r"/_doc/([^/]+)")
180
181 # search api https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html
182 _regex_search_url = re.compile(r"/([^/]+)/_search[/]?")
183
184
185 def _wrap_perform_request(
186 tracer,
187 span_name_prefix,
188 request_hook=None,
189 response_hook=None,
190 ):
191 # pylint: disable=R0912,R0914
192 def wrapper(wrapped, _, args, kwargs):
193 method = url = None
194 try:
195 method, url, *_ = args
196 except IndexError:
197 logger.warning(
198 "expected perform_request to receive two positional arguments. "
199 "Got %d",
200 len(args),
201 )
202
203 op_name = span_name_prefix + (url or method or _DEFAULT_OP_NAME)
204
205 doc_id = None
206 search_target = None
207
208 if url:
209 # TODO: This regex-based solution avoids creating an unbounded number of span names, but should be replaced by instrumenting individual Elasticsearch methods instead of Transport.perform_request()
210 # A limitation of the regex is that only the '_doc' mapping type is supported. Mapping types are deprecated since Elasticsearch 7
211 # https://github.com/open-telemetry/opentelemetry-python-contrib/issues/708
212 match = _regex_doc_url.search(url)
213 if match is not None:
214 # Remove the full document ID from the URL
215 doc_span = match.span()
216 op_name = (
217 span_name_prefix
218 + url[: doc_span[0]]
219 + "/_doc/:id"
220 + url[doc_span[1] :]
221 )
222 # Put the document ID in attributes
223 doc_id = match.group(1)
224 match = _regex_search_url.search(url)
225 if match is not None:
226 op_name = span_name_prefix + "/<target>/_search"
227 search_target = match.group(1)
228
229 params = kwargs.get("params", {})
230 body = kwargs.get("body", None)
231
232 with tracer.start_as_current_span(
233 op_name,
234 kind=SpanKind.CLIENT,
235 ) as span:
236 if callable(request_hook):
237 request_hook(span, method, url, kwargs)
238
239 if span.is_recording():
240 attributes = {
241 SpanAttributes.DB_SYSTEM: "elasticsearch",
242 }
243 if url:
244 attributes["elasticsearch.url"] = url
245 if method:
246 attributes["elasticsearch.method"] = method
247 if body:
248 attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(
249 body
250 )
251 if params:
252 attributes["elasticsearch.params"] = str(params)
253 if doc_id:
254 attributes["elasticsearch.id"] = doc_id
255 if search_target:
256 attributes["elasticsearch.target"] = search_target
257 for key, value in attributes.items():
258 span.set_attribute(key, value)
259
260 rv = wrapped(*args, **kwargs)
261 if isinstance(rv, dict) and span.is_recording():
262 for member in _ATTRIBUTES_FROM_RESULT:
263 if member in rv:
264 span.set_attribute(
265 f"elasticsearch.{member}",
266 str(rv[member]),
267 )
268
269 if callable(response_hook):
270 response_hook(span, rv)
271 return rv
272
273 return wrapper
274
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py b/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py
@@ -245,9 +245,11 @@
if method:
attributes["elasticsearch.method"] = method
if body:
- attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(
- body
- )
+ # Don't set db.statement for bulk requests, as it can be very large
+ if isinstance(body, dict):
+ attributes[
+ SpanAttributes.DB_STATEMENT
+ ] = sanitize_body(body)
if params:
attributes["elasticsearch.params"] = str(params)
if doc_id:
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py b/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py\n@@ -245,9 +245,11 @@\n if method:\n attributes[\"elasticsearch.method\"] = method\n if body:\n- attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(\n- body\n- )\n+ # Don't set db.statement for bulk requests, as it can be very large\n+ if isinstance(body, dict):\n+ attributes[\n+ SpanAttributes.DB_STATEMENT\n+ ] = sanitize_body(body)\n if params:\n attributes[\"elasticsearch.params\"] = str(params)\n if doc_id:\n", "issue": "fix: elasticsearch `bulk` API produces lists\n# Description\r\n\r\nFixes an issue seen in production where the elasticsearch bulk API would end up sending a list through this code path and not a dictionary.\r\n\r\nFor example:\r\n\r\n```\r\nFile \\\"/code/platform_be/core/logic/ingest.py\\\", line 378, in update_source_doc_version_history\r\n bulk(es_client, bulk_es_updates)\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\\\", line 521, in bulk\r\n for ok, item in streaming_bulk(\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\\\", line 436, in streaming_bulk\r\n for data, (ok, info) in zip(\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\\\", line 339, in _process_bulk_chunk\r\n resp = client.bulk(*args, operations=bulk_actions, **kwargs) # type: ignore[arg-type]\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/utils.py\\\", line 414, in wrapped\r\n return api(*args, **kwargs)\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/__init__.py\\\", line 704, in bulk\r\n return self.perform_request( # type: ignore[return-value]\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/_base.py\\\", line 285, in perform_request\r\n meta, resp_body = self.transport.perform_request(\r\n File \\\"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/__init__.py\\\", line 242, in wrapper\r\n attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(\r\n File \\\"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/utils.py\\\", line 58, in sanitize_body\r\n flatten_body = _flatten_dict(body)\r\n File \\\"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/utils.py\\\", line 31, in _flatten_dict\r\n for k, v in d.items():\r\n AttributeError: 'list' object has no attribute 'items'\"\r\n```\r\n\r\n## Type of change\r\n\r\nPlease delete options that are not relevant.\r\n\r\n- [x] Bug fix (non-breaking change which fixes an issue)\r\n\r\n# How Has This Been Tested?\r\n\r\nPlease describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. 
Please also list any relevant details for your test configuration\r\n\r\n- [x] Unit tests\r\n\r\n# Does This PR Require a Core Repo Change?\r\n\r\n- [x] No.\r\n\r\n# Checklist:\r\n\r\nSee [contributing.md](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/CONTRIBUTING.md) for styleguide, changelog guidelines, and more.\r\n\r\n- [ ] Followed the style guidelines of this project\r\n- [ ] Changelogs have been updated\r\n- [ ] Unit tests have been added\r\n- [ ] Documentation has been updated\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis library allows tracing HTTP elasticsearch made by the\n`elasticsearch <https://elasticsearch-py.readthedocs.io/en/master/>`_ library.\n\nUsage\n-----\n\n.. code-block:: python\n\n from opentelemetry.instrumentation.elasticsearch import ElasticsearchInstrumentor\n import elasticsearch\n\n\n # instrument elasticsearch\n ElasticsearchInstrumentor().instrument()\n\n # Using elasticsearch as normal now will automatically generate spans\n es = elasticsearch.Elasticsearch()\n es.index(index='my-index', doc_type='my-type', id=1, body={'my': 'data', 'timestamp': datetime.now()})\n es.get(index='my-index', doc_type='my-type', id=1)\n\nElasticsearch instrumentation prefixes operation names with the string \"Elasticsearch\". This\ncan be changed to a different string by either setting the OTEL_PYTHON_ELASTICSEARCH_NAME_PREFIX\nenvironment variable or by passing the prefix as an argument to the instrumentor. For example,\n\n\n.. code-block:: python\n\n ElasticsearchInstrumentor(\"my-custom-prefix\").instrument()\n\nThe instrument() method accepts the following keyword args:\ntracer_provider (TracerProvider) - an optional tracer provider\nrequest_hook (Callable) - a function with extra user-defined logic to be performed before performing the request\nthis function signature is:\ndef request_hook(span: Span, method: str, url: str, kwargs)\n\nresponse_hook (Callable) - a function with extra user-defined logic to be performed after performing the request\nthis function signature is:\ndef response_hook(span: Span, response: dict)\n\nfor example:\n\n.. 
code: python\n\n from opentelemetry.instrumentation.elasticsearch import ElasticsearchInstrumentor\n import elasticsearch\n\n def request_hook(span, method, url, kwargs):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_request_hook\", \"some-value\")\n\n def response_hook(span, response):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_response_hook\", \"some-value\")\n\n # instrument elasticsearch with request and response hooks\n ElasticsearchInstrumentor().instrument(request_hook=request_hook, response_hook=response_hook)\n\n # Using elasticsearch as normal now will automatically generate spans,\n # including user custom attributes added from the hooks\n es = elasticsearch.Elasticsearch()\n es.index(index='my-index', doc_type='my-type', id=1, body={'my': 'data', 'timestamp': datetime.now()})\n es.get(index='my-index', doc_type='my-type', id=1)\n\nAPI\n---\n\"\"\"\n\nimport re\nfrom logging import getLogger\nfrom os import environ\nfrom typing import Collection\n\nimport elasticsearch\nimport elasticsearch.exceptions\nfrom wrapt import wrap_function_wrapper as _wrap\n\nfrom opentelemetry.instrumentation.elasticsearch.package import _instruments\nfrom opentelemetry.instrumentation.elasticsearch.version import __version__\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.utils import unwrap\nfrom opentelemetry.semconv.trace import SpanAttributes\nfrom opentelemetry.trace import SpanKind, get_tracer\n\nfrom .utils import sanitize_body\n\n# Split of elasticsearch and elastic_transport in 8.0.0+\n# https://www.elastic.co/guide/en/elasticsearch/client/python-api/master/release-notes.html#rn-8-0-0\nes_transport_split = elasticsearch.VERSION[0] > 7\nif es_transport_split:\n import elastic_transport\n\nlogger = getLogger(__name__)\n\n\n# Values to add as tags from the actual\n# payload returned by Elasticsearch, if any.\n_ATTRIBUTES_FROM_RESULT = [\n \"found\",\n \"timed_out\",\n \"took\",\n]\n\n_DEFAULT_OP_NAME = \"request\"\n\n\nclass ElasticsearchInstrumentor(BaseInstrumentor):\n \"\"\"An instrumentor for elasticsearch\n See `BaseInstrumentor`\n \"\"\"\n\n def __init__(self, span_name_prefix=None):\n if not span_name_prefix:\n span_name_prefix = environ.get(\n \"OTEL_PYTHON_ELASTICSEARCH_NAME_PREFIX\",\n \"Elasticsearch\",\n )\n self._span_name_prefix = span_name_prefix.strip()\n super().__init__()\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n \"\"\"\n Instruments Elasticsearch module\n \"\"\"\n tracer_provider = kwargs.get(\"tracer_provider\")\n tracer = get_tracer(\n __name__,\n __version__,\n tracer_provider,\n schema_url=\"https://opentelemetry.io/schemas/1.11.0\",\n )\n request_hook = kwargs.get(\"request_hook\")\n response_hook = kwargs.get(\"response_hook\")\n if es_transport_split:\n _wrap(\n elastic_transport,\n \"Transport.perform_request\",\n _wrap_perform_request(\n tracer,\n self._span_name_prefix,\n request_hook,\n response_hook,\n ),\n )\n else:\n _wrap(\n elasticsearch,\n \"Transport.perform_request\",\n _wrap_perform_request(\n tracer,\n self._span_name_prefix,\n request_hook,\n response_hook,\n ),\n )\n\n def _uninstrument(self, **kwargs):\n # pylint: disable=no-member\n unwrap(elasticsearch.Transport, \"perform_request\")\n\n\n_regex_doc_url = re.compile(r\"/_doc/([^/]+)\")\n\n# search api 
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html\n_regex_search_url = re.compile(r\"/([^/]+)/_search[/]?\")\n\n\ndef _wrap_perform_request(\n tracer,\n span_name_prefix,\n request_hook=None,\n response_hook=None,\n):\n # pylint: disable=R0912,R0914\n def wrapper(wrapped, _, args, kwargs):\n method = url = None\n try:\n method, url, *_ = args\n except IndexError:\n logger.warning(\n \"expected perform_request to receive two positional arguments. \"\n \"Got %d\",\n len(args),\n )\n\n op_name = span_name_prefix + (url or method or _DEFAULT_OP_NAME)\n\n doc_id = None\n search_target = None\n\n if url:\n # TODO: This regex-based solution avoids creating an unbounded number of span names, but should be replaced by instrumenting individual Elasticsearch methods instead of Transport.perform_request()\n # A limitation of the regex is that only the '_doc' mapping type is supported. Mapping types are deprecated since Elasticsearch 7\n # https://github.com/open-telemetry/opentelemetry-python-contrib/issues/708\n match = _regex_doc_url.search(url)\n if match is not None:\n # Remove the full document ID from the URL\n doc_span = match.span()\n op_name = (\n span_name_prefix\n + url[: doc_span[0]]\n + \"/_doc/:id\"\n + url[doc_span[1] :]\n )\n # Put the document ID in attributes\n doc_id = match.group(1)\n match = _regex_search_url.search(url)\n if match is not None:\n op_name = span_name_prefix + \"/<target>/_search\"\n search_target = match.group(1)\n\n params = kwargs.get(\"params\", {})\n body = kwargs.get(\"body\", None)\n\n with tracer.start_as_current_span(\n op_name,\n kind=SpanKind.CLIENT,\n ) as span:\n if callable(request_hook):\n request_hook(span, method, url, kwargs)\n\n if span.is_recording():\n attributes = {\n SpanAttributes.DB_SYSTEM: \"elasticsearch\",\n }\n if url:\n attributes[\"elasticsearch.url\"] = url\n if method:\n attributes[\"elasticsearch.method\"] = method\n if body:\n attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(\n body\n )\n if params:\n attributes[\"elasticsearch.params\"] = str(params)\n if doc_id:\n attributes[\"elasticsearch.id\"] = doc_id\n if search_target:\n attributes[\"elasticsearch.target\"] = search_target\n for key, value in attributes.items():\n span.set_attribute(key, value)\n\n rv = wrapped(*args, **kwargs)\n if isinstance(rv, dict) and span.is_recording():\n for member in _ATTRIBUTES_FROM_RESULT:\n if member in rv:\n span.set_attribute(\n f\"elasticsearch.{member}\",\n str(rv[member]),\n )\n\n if callable(response_hook):\n response_hook(span, rv)\n return rv\n\n return wrapper\n", "path": "instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis library allows tracing HTTP elasticsearch made by the\n`elasticsearch <https://elasticsearch-py.readthedocs.io/en/master/>`_ library.\n\nUsage\n-----\n\n.. 
code-block:: python\n\n from opentelemetry.instrumentation.elasticsearch import ElasticsearchInstrumentor\n import elasticsearch\n\n\n # instrument elasticsearch\n ElasticsearchInstrumentor().instrument()\n\n # Using elasticsearch as normal now will automatically generate spans\n es = elasticsearch.Elasticsearch()\n es.index(index='my-index', doc_type='my-type', id=1, body={'my': 'data', 'timestamp': datetime.now()})\n es.get(index='my-index', doc_type='my-type', id=1)\n\nElasticsearch instrumentation prefixes operation names with the string \"Elasticsearch\". This\ncan be changed to a different string by either setting the OTEL_PYTHON_ELASTICSEARCH_NAME_PREFIX\nenvironment variable or by passing the prefix as an argument to the instrumentor. For example,\n\n\n.. code-block:: python\n\n ElasticsearchInstrumentor(\"my-custom-prefix\").instrument()\n\nThe instrument() method accepts the following keyword args:\ntracer_provider (TracerProvider) - an optional tracer provider\nrequest_hook (Callable) - a function with extra user-defined logic to be performed before performing the request\nthis function signature is:\ndef request_hook(span: Span, method: str, url: str, kwargs)\n\nresponse_hook (Callable) - a function with extra user-defined logic to be performed after performing the request\nthis function signature is:\ndef response_hook(span: Span, response: dict)\n\nfor example:\n\n.. code: python\n\n from opentelemetry.instrumentation.elasticsearch import ElasticsearchInstrumentor\n import elasticsearch\n\n def request_hook(span, method, url, kwargs):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_request_hook\", \"some-value\")\n\n def response_hook(span, response):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_response_hook\", \"some-value\")\n\n # instrument elasticsearch with request and response hooks\n ElasticsearchInstrumentor().instrument(request_hook=request_hook, response_hook=response_hook)\n\n # Using elasticsearch as normal now will automatically generate spans,\n # including user custom attributes added from the hooks\n es = elasticsearch.Elasticsearch()\n es.index(index='my-index', doc_type='my-type', id=1, body={'my': 'data', 'timestamp': datetime.now()})\n es.get(index='my-index', doc_type='my-type', id=1)\n\nAPI\n---\n\"\"\"\n\nimport re\nfrom logging import getLogger\nfrom os import environ\nfrom typing import Collection\n\nimport elasticsearch\nimport elasticsearch.exceptions\nfrom wrapt import wrap_function_wrapper as _wrap\n\nfrom opentelemetry.instrumentation.elasticsearch.package import _instruments\nfrom opentelemetry.instrumentation.elasticsearch.version import __version__\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.utils import unwrap\nfrom opentelemetry.semconv.trace import SpanAttributes\nfrom opentelemetry.trace import SpanKind, get_tracer\n\nfrom .utils import sanitize_body\n\n# Split of elasticsearch and elastic_transport in 8.0.0+\n# https://www.elastic.co/guide/en/elasticsearch/client/python-api/master/release-notes.html#rn-8-0-0\nes_transport_split = elasticsearch.VERSION[0] > 7\nif es_transport_split:\n import elastic_transport\n\nlogger = getLogger(__name__)\n\n\n# Values to add as tags from the actual\n# payload returned by Elasticsearch, if any.\n_ATTRIBUTES_FROM_RESULT = [\n \"found\",\n \"timed_out\",\n \"took\",\n]\n\n_DEFAULT_OP_NAME = \"request\"\n\n\nclass ElasticsearchInstrumentor(BaseInstrumentor):\n 
\"\"\"An instrumentor for elasticsearch\n See `BaseInstrumentor`\n \"\"\"\n\n def __init__(self, span_name_prefix=None):\n if not span_name_prefix:\n span_name_prefix = environ.get(\n \"OTEL_PYTHON_ELASTICSEARCH_NAME_PREFIX\",\n \"Elasticsearch\",\n )\n self._span_name_prefix = span_name_prefix.strip()\n super().__init__()\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n \"\"\"\n Instruments Elasticsearch module\n \"\"\"\n tracer_provider = kwargs.get(\"tracer_provider\")\n tracer = get_tracer(\n __name__,\n __version__,\n tracer_provider,\n schema_url=\"https://opentelemetry.io/schemas/1.11.0\",\n )\n request_hook = kwargs.get(\"request_hook\")\n response_hook = kwargs.get(\"response_hook\")\n if es_transport_split:\n _wrap(\n elastic_transport,\n \"Transport.perform_request\",\n _wrap_perform_request(\n tracer,\n self._span_name_prefix,\n request_hook,\n response_hook,\n ),\n )\n else:\n _wrap(\n elasticsearch,\n \"Transport.perform_request\",\n _wrap_perform_request(\n tracer,\n self._span_name_prefix,\n request_hook,\n response_hook,\n ),\n )\n\n def _uninstrument(self, **kwargs):\n # pylint: disable=no-member\n unwrap(elasticsearch.Transport, \"perform_request\")\n\n\n_regex_doc_url = re.compile(r\"/_doc/([^/]+)\")\n\n# search api https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html\n_regex_search_url = re.compile(r\"/([^/]+)/_search[/]?\")\n\n\ndef _wrap_perform_request(\n tracer,\n span_name_prefix,\n request_hook=None,\n response_hook=None,\n):\n # pylint: disable=R0912,R0914\n def wrapper(wrapped, _, args, kwargs):\n method = url = None\n try:\n method, url, *_ = args\n except IndexError:\n logger.warning(\n \"expected perform_request to receive two positional arguments. \"\n \"Got %d\",\n len(args),\n )\n\n op_name = span_name_prefix + (url or method or _DEFAULT_OP_NAME)\n\n doc_id = None\n search_target = None\n\n if url:\n # TODO: This regex-based solution avoids creating an unbounded number of span names, but should be replaced by instrumenting individual Elasticsearch methods instead of Transport.perform_request()\n # A limitation of the regex is that only the '_doc' mapping type is supported. 
Mapping types are deprecated since Elasticsearch 7\n # https://github.com/open-telemetry/opentelemetry-python-contrib/issues/708\n match = _regex_doc_url.search(url)\n if match is not None:\n # Remove the full document ID from the URL\n doc_span = match.span()\n op_name = (\n span_name_prefix\n + url[: doc_span[0]]\n + \"/_doc/:id\"\n + url[doc_span[1] :]\n )\n # Put the document ID in attributes\n doc_id = match.group(1)\n match = _regex_search_url.search(url)\n if match is not None:\n op_name = span_name_prefix + \"/<target>/_search\"\n search_target = match.group(1)\n\n params = kwargs.get(\"params\", {})\n body = kwargs.get(\"body\", None)\n\n with tracer.start_as_current_span(\n op_name,\n kind=SpanKind.CLIENT,\n ) as span:\n if callable(request_hook):\n request_hook(span, method, url, kwargs)\n\n if span.is_recording():\n attributes = {\n SpanAttributes.DB_SYSTEM: \"elasticsearch\",\n }\n if url:\n attributes[\"elasticsearch.url\"] = url\n if method:\n attributes[\"elasticsearch.method\"] = method\n if body:\n # Don't set db.statement for bulk requests, as it can be very large\n if isinstance(body, dict):\n attributes[\n SpanAttributes.DB_STATEMENT\n ] = sanitize_body(body)\n if params:\n attributes[\"elasticsearch.params\"] = str(params)\n if doc_id:\n attributes[\"elasticsearch.id\"] = doc_id\n if search_target:\n attributes[\"elasticsearch.target\"] = search_target\n for key, value in attributes.items():\n span.set_attribute(key, value)\n\n rv = wrapped(*args, **kwargs)\n if isinstance(rv, dict) and span.is_recording():\n for member in _ATTRIBUTES_FROM_RESULT:\n if member in rv:\n span.set_attribute(\n f\"elasticsearch.{member}\",\n str(rv[member]),\n )\n\n if callable(response_hook):\n response_hook(span, rv)\n return rv\n\n return wrapper\n", "path": "instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py"}]}
| 3,614 | 230 |
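For context on the fix above: the patch does not teach `sanitize_body` about lists; it simply skips the `db.statement` span attribute whenever the request body is not a dict, which is exactly what the `bulk` helper produces and which, per the patch's own comment, can be very large anyway. The sketch below restates that guard as a standalone function for illustration only; the attribute key is written as a literal string rather than `SpanAttributes.DB_STATEMENT`, and plain `str` stands in for the library's `sanitize_body`.

```python
# Standalone sketch of the guard added by the patch (illustration only, not
# the instrumented library code): dict bodies are sanitized into the
# db.statement attribute, bulk-style list bodies are skipped entirely.
def set_db_statement(attributes, body, sanitize=str):
    if body and isinstance(body, dict):
        attributes["db.statement"] = sanitize(body)
    return attributes

print(set_db_statement({}, {"query": {"match_all": {}}}))
# {'db.statement': "{'query': {'match_all': {}}}"}
print(set_db_statement({}, [{"index": {"_id": 1}}, {"doc": {"field": "value"}}]))
# {} : list body from the bulk helper, so the attribute is not set
```

Skipping rather than flattening keeps span size bounded, since a bulk payload can contain an arbitrary number of action lines.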
gh_patches_debug_15390
|
rasdani/github-patches
|
git_diff
|
pyodide__pyodide-1138
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Nested attribute access in JS->Python type conversion
Currently the following code fails,
```js
>>> from js import window
>>> window.URL.createObjectURL
Error: Traceback (most recent call last):
File "/lib/python3.7/site-packages/pyodide.py", line 45, in eval_code
return eval(compile(expr, '<eval>', mode='eval'), ns, ns)
File "<eval>", line 1, in <module>
AttributeError: 'JsBoundMethod' object has no attribute 'createObjectURL'
```
(while `window.URL.createObjectURL` is a valid JS object) because nested attributes (i.e. attribute of an attribute) don't seem to be supported. It would have been nice to make it work, though I have not looked at how difficult that would be.
from js import fetch treats fetch as a free function
`fetch` is a member function of `window`.
However, using `from js import fetch` doesn't realize that and leads to the error:
`TypeError: 'fetch' called on an object that does not implement interface Window.`
For Reproducing the Error:
```
%%py
from js import document, Request, fetch, URL
img_tag = document.createElement('img')
req = Request.new('https://i.ibb.co/3f4yJQS/face4.jpg')
def func(response):
return response.blob()
def func2(blob):
objURL = URL.createObjectURL(blob)
img_tag.src = objURL
fetch(req).then(func).then(func2)
document.body.appendChild(img_tag)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyodide-py/pyodide/_core.py`
Content:
```
1 # type: ignore
2 import platform
3
4 if platform.system() == "Emscripten":
5 from _pyodide_core import JsProxy, JsBoundMethod, JsException
6 else:
7 # Can add shims here if we are so inclined.
8 class JsException(Exception):
9 """
10 A wrapper around a Javascript Error to allow the Error to be thrown in Python.
11 """
12
13 # Defined in jsproxy.c
14
15 class JsProxy:
16 """A proxy to make a Javascript object behave like a Python object"""
17
18 # Defined in jsproxy.c
19
20 class JsBoundMethod:
21 """A proxy to make it possible to call Javascript bound methods from Python."""
22
23 # Defined in jsproxy.c
24
25
26 __all__ = [JsProxy, JsBoundMethod, JsException]
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pyodide-py/pyodide/_core.py b/src/pyodide-py/pyodide/_core.py
--- a/src/pyodide-py/pyodide/_core.py
+++ b/src/pyodide-py/pyodide/_core.py
@@ -2,7 +2,7 @@
import platform
if platform.system() == "Emscripten":
- from _pyodide_core import JsProxy, JsBoundMethod, JsException
+ from _pyodide_core import JsProxy, JsMethod, JsException
else:
# Can add shims here if we are so inclined.
class JsException(Exception):
@@ -17,10 +17,10 @@
# Defined in jsproxy.c
- class JsBoundMethod:
+ class JsMethod:
"""A proxy to make it possible to call Javascript bound methods from Python."""
# Defined in jsproxy.c
-__all__ = [JsProxy, JsBoundMethod, JsException]
+__all__ = [JsProxy, JsMethod, JsException]
|
{"golden_diff": "diff --git a/src/pyodide-py/pyodide/_core.py b/src/pyodide-py/pyodide/_core.py\n--- a/src/pyodide-py/pyodide/_core.py\n+++ b/src/pyodide-py/pyodide/_core.py\n@@ -2,7 +2,7 @@\n import platform\n \n if platform.system() == \"Emscripten\":\n- from _pyodide_core import JsProxy, JsBoundMethod, JsException\n+ from _pyodide_core import JsProxy, JsMethod, JsException\n else:\n # Can add shims here if we are so inclined.\n class JsException(Exception):\n@@ -17,10 +17,10 @@\n \n # Defined in jsproxy.c\n \n- class JsBoundMethod:\n+ class JsMethod:\n \"\"\"A proxy to make it possible to call Javascript bound methods from Python.\"\"\"\n \n # Defined in jsproxy.c\n \n \n-__all__ = [JsProxy, JsBoundMethod, JsException]\n+__all__ = [JsProxy, JsMethod, JsException]\n", "issue": "Nested attribute access in JS->Python type conversion\nCurrently the following code fails,\r\n```js\r\n>>> from js import window\r\n>>> window.URL.createObjectURL\r\nError: Traceback (most recent call last):\r\n File \"/lib/python3.7/site-packages/pyodide.py\", line 45, in eval_code\r\n return eval(compile(expr, '<eval>', mode='eval'), ns, ns)\r\n File \"<eval>\", line 1, in <module>\r\nAttributeError: 'JsBoundMethod' object has no attribute 'createObjectURL'\r\n```\r\n(while `window.URL.createObjectURL` is a valid JS object) because nested attributes (i.e. attribute of an attribute) don't seem to be supported. It would have been nice to make it work, though I have not looked at how difficult that would be.\nfrom js import fetch treats fetch as a free function\n`fetch` is a member function of `window`.\r\nHowever, using `from js import fetch` doesn't realize that and leads to the error:\r\n\r\n`TypeError: 'fetch' called on an object that does not implement interface Window.`\r\n\r\nFor Reproducing the Error:\r\n```\r\n%%py\r\n\r\nfrom js import document, Request, fetch, URL\r\nimg_tag = document.createElement('img')\r\nreq = Request.new('https://i.ibb.co/3f4yJQS/face4.jpg')\r\n\r\ndef func(response):\r\n return response.blob()\r\n\r\ndef func2(blob):\r\n objURL = URL.createObjectURL(blob)\r\n img_tag.src = objURL\r\n\r\nfetch(req).then(func).then(func2)\r\n\r\ndocument.body.appendChild(img_tag)\r\n```\n", "before_files": [{"content": "# type: ignore\nimport platform\n\nif platform.system() == \"Emscripten\":\n from _pyodide_core import JsProxy, JsBoundMethod, JsException\nelse:\n # Can add shims here if we are so inclined.\n class JsException(Exception):\n \"\"\"\n A wrapper around a Javascript Error to allow the Error to be thrown in Python.\n \"\"\"\n\n # Defined in jsproxy.c\n\n class JsProxy:\n \"\"\"A proxy to make a Javascript object behave like a Python object\"\"\"\n\n # Defined in jsproxy.c\n\n class JsBoundMethod:\n \"\"\"A proxy to make it possible to call Javascript bound methods from Python.\"\"\"\n\n # Defined in jsproxy.c\n\n\n__all__ = [JsProxy, JsBoundMethod, JsException]\n", "path": "src/pyodide-py/pyodide/_core.py"}], "after_files": [{"content": "# type: ignore\nimport platform\n\nif platform.system() == \"Emscripten\":\n from _pyodide_core import JsProxy, JsMethod, JsException\nelse:\n # Can add shims here if we are so inclined.\n class JsException(Exception):\n \"\"\"\n A wrapper around a Javascript Error to allow the Error to be thrown in Python.\n \"\"\"\n\n # Defined in jsproxy.c\n\n class JsProxy:\n \"\"\"A proxy to make a Javascript object behave like a Python object\"\"\"\n\n # Defined in jsproxy.c\n\n class JsMethod:\n \"\"\"A proxy to make it possible to call Javascript bound methods 
from Python.\"\"\"\n\n # Defined in jsproxy.c\n\n\n__all__ = [JsProxy, JsMethod, JsException]\n", "path": "src/pyodide-py/pyodide/_core.py"}]}
| 811 | 235 |
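On the fix above: the golden diff shown here only renames the Python-side shim class (`JsBoundMethod` becomes `JsMethod`); the behavioural change for nested attribute access and method binding lives in `jsproxy.c`, which is not included in this record. The snippet below only restates the usage from the issue that the change targets. It is a sketch that runs inside a Pyodide (browser) session, where the `js` module proxies the JavaScript global scope, not in plain CPython, and the URL it fetches is a placeholder.

```python
# Runs only inside a Pyodide session; `js` proxies the browser's global scope.
from js import document, window

# Nested attribute access on a JS proxy, which previously raised
# AttributeError because the first lookup returned a bound-method proxy:
create_object_url = window.URL.createObjectURL

# Accessing fetch through its owning object keeps the correct `this`,
# which a bare `from js import fetch` did not guarantee:
response_promise = window.fetch("https://example.com/data.json")  # placeholder URL
```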
gh_patches_debug_5221
|
rasdani/github-patches
|
git_diff
|
saulpw__visidata-1011
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
vds: cannot export sheets containing a date column
**Small description**
It's not possible to save a sheet to a .vds file if said sheet contains a date column.
This results in the error below.
**Expected result**
It should just work(tm).
**Actual result with screenshot**
```stacktrace
Traceback (most recent call last):
File "/nix/store/srkr2wnwq95ylmgiadh28p3jiaadl5yw-visidata-2.4/lib/python3.8/site-packages/visidata/threads.py", line 215, in _toplevelTryFunc
t.status = func(*args, **kwargs)
File "/nix/store/srkr2wnwq95ylmgiadh28p3jiaadl5yw-visidata-2.4/lib/python3.8/site-packages/visidata/loaders/vds.py", line 32, in save_vds
fp.write(json.dumps(d)+NL)
File "/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type date is not JSON serializable
```
**Steps to reproduce with sample data and a .vd**
❯ cat testsheet.vd
```log
sheet col row longname input keystrokes comment
visidata_menu sheets-stack S open Sheets Stack: join or jump between the active sheets on the current stack
sheets キsheets add-row a append a blank row
sheets name キ edit-cell testsheet e edit contents of current cell
sheets キtestsheet open-row ^J open sheet referenced in current row
testsheet 0 rename-col testcol ^ edit name of current column
testsheet testcol type-date @ set type of current column to date
testsheet add-row a append a blank row
testsheet testcol 0 edit-cell 2021-06-14 e edit contents of current cell
testsheet save-all test.vds g^S save all sheets to given file or directory)
```
**Additional context**
Problem is present on v2.4 and on the develop branch (commit 3350d9fd8c9e64ebf409deae4b31085d12efeb7f)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `visidata/loaders/vds.py`
Content:
```
1 'Custom VisiData save format'
2
3 import json
4 from visidata import *
5
6 NL='\n'
7
8 @VisiData.api
9 def open_vds(vd, p):
10 return VdsIndexSheet(p.name, source=p)
11
12
13 @VisiData.api
14 def save_vds(vd, p, *sheets):
15 'Save in custom VisiData format, preserving columns and their attributes.'
16
17 with p.open_text(mode='w') as fp:
18 for vs in sheets:
19 # class and attrs for vs
20 d = { 'name': vs.name, }
21 fp.write('#'+json.dumps(d)+NL)
22
23 # class and attrs for each column in vs
24 for col in vs.visibleCols:
25 d = col.__getstate__()
26 d['col'] = type(col).__name__
27 fp.write('#'+json.dumps(d)+NL)
28
29 with Progress(gerund='saving'):
30 for row in vs.iterdispvals(*vs.visibleCols, format=False):
31 d = {col.name:val for col, val in row.items()}
32 fp.write(json.dumps(d)+NL)
33
34
35 class VdsIndexSheet(IndexSheet):
36 def iterload(self):
37 vs = None
38 with self.source.open_text() as fp:
39 line = fp.readline()
40 while line:
41 if line.startswith('#{'):
42 d = json.loads(line[1:])
43 if 'col' not in d:
44 vs = VdsSheet(d.pop('name'), columns=[], source=self.source, source_fpos=fp.tell())
45 yield vs
46 line = fp.readline()
47
48
49 class VdsSheet(Sheet):
50 def newRow(self):
51 return {} # rowdef: dict
52
53 def iterload(self):
54 self.colnames = {}
55 self.columns = []
56
57 with self.source.open_text() as fp:
58 fp.seek(self.source_fpos)
59
60 # consume all metadata, create columns
61 line = fp.readline()
62 while line and line.startswith('#{'):
63 d = json.loads(line[1:])
64 if 'col' not in d:
65 raise Exception(d)
66 classname = d.pop('col')
67 if classname == 'Column':
68 classname = 'ItemColumn'
69 d['expr'] = d['name']
70
71 c = globals()[classname](d.pop('name'))
72 self.colnames[c.name] = c
73 self.addColumn(c)
74 for k, v in d.items():
75 setattr(c, k, v)
76
77 line = fp.readline()
78
79 while line and not line.startswith('#{'):
80 d = json.loads(line)
81 yield d
82 line = fp.readline()
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/visidata/loaders/vds.py b/visidata/loaders/vds.py
--- a/visidata/loaders/vds.py
+++ b/visidata/loaders/vds.py
@@ -29,7 +29,7 @@
with Progress(gerund='saving'):
for row in vs.iterdispvals(*vs.visibleCols, format=False):
d = {col.name:val for col, val in row.items()}
- fp.write(json.dumps(d)+NL)
+ fp.write(json.dumps(d, default=str)+NL)
class VdsIndexSheet(IndexSheet):
|
{"golden_diff": "diff --git a/visidata/loaders/vds.py b/visidata/loaders/vds.py\n--- a/visidata/loaders/vds.py\n+++ b/visidata/loaders/vds.py\n@@ -29,7 +29,7 @@\n with Progress(gerund='saving'):\n for row in vs.iterdispvals(*vs.visibleCols, format=False):\n d = {col.name:val for col, val in row.items()}\n- fp.write(json.dumps(d)+NL)\n+ fp.write(json.dumps(d, default=str)+NL)\n \n \n class VdsIndexSheet(IndexSheet):\n", "issue": "vds: cannot export sheets containing a date column\n**Small description**\r\n\r\nIt's not possible to save a sheet to a .vds file if said sheet contains a date column.\r\nThis results in the error below.\r\n\r\n**Expected result**\r\n\r\nIt should just work(tm).\r\n\r\n**Actual result with screenshot**\r\n\r\n```stacktrace\r\nTraceback (most recent call last):\r\n File \"/nix/store/srkr2wnwq95ylmgiadh28p3jiaadl5yw-visidata-2.4/lib/python3.8/site-packages/visidata/threads.py\", line 215, in _toplevelTryFunc\r\n t.status = func(*args, **kwargs)\r\n File \"/nix/store/srkr2wnwq95ylmgiadh28p3jiaadl5yw-visidata-2.4/lib/python3.8/site-packages/visidata/loaders/vds.py\", line 32, in save_vds\r\n fp.write(json.dumps(d)+NL)\r\n File \"/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/__init__.py\", line 231, in dumps\r\n return _default_encoder.encode(obj)\r\n File \"/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py\", line 199, in encode\r\n chunks = self.iterencode(o, _one_shot=True)\r\n File \"/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py\", line 257, in iterencode\r\n return _iterencode(o, 0)\r\n File \"/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type date is not JSON serializable\r\n```\r\n\r\n**Steps to reproduce with sample data and a .vd**\r\n\r\n\u276f cat testsheet.vd\r\n```log\r\nsheet\tcol\trow\tlongname\tinput\tkeystrokes\tcomment\r\nvisidata_menu\t\t\tsheets-stack\t\tS\topen Sheets Stack: join or jump between the active sheets on the current stack\r\nsheets\t\t\u30adsheets\tadd-row\t\ta\tappend a blank row\r\nsheets\tname\t\u30ad\tedit-cell\ttestsheet\te\tedit contents of current cell\r\nsheets\t\t\u30adtestsheet\topen-row\t\t^J\topen sheet referenced in current row\r\ntestsheet\t0\t\trename-col\ttestcol\t^\tedit name of current column\r\ntestsheet\ttestcol\t\ttype-date\t\t@\tset type of current column to date\r\ntestsheet\t\t\tadd-row\t\ta\tappend a blank row\r\ntestsheet\ttestcol\t0\tedit-cell\t2021-06-14\te\tedit contents of current cell\r\ntestsheet\t\t\tsave-all\ttest.vds\tg^S\tsave all sheets to given file or directory)\r\n```\r\n\r\n\r\n**Additional context**\r\n\r\nProblem is present on v2.4 and on the develop branch (commit 3350d9fd8c9e64ebf409deae4b31085d12efeb7f)\n", "before_files": [{"content": "'Custom VisiData save format'\n\nimport json\nfrom visidata import *\n\nNL='\\n'\n\[email protected]\ndef open_vds(vd, p):\n return VdsIndexSheet(p.name, source=p)\n\n\[email protected]\ndef save_vds(vd, p, *sheets):\n 'Save in custom VisiData format, preserving columns and their attributes.'\n\n with p.open_text(mode='w') as fp:\n for vs in sheets:\n # class and attrs for vs\n d = { 'name': vs.name, }\n fp.write('#'+json.dumps(d)+NL)\n\n # class and attrs for each column in vs\n for col in vs.visibleCols:\n d = col.__getstate__()\n d['col'] = type(col).__name__\n fp.write('#'+json.dumps(d)+NL)\n\n 
with Progress(gerund='saving'):\n for row in vs.iterdispvals(*vs.visibleCols, format=False):\n d = {col.name:val for col, val in row.items()}\n fp.write(json.dumps(d)+NL)\n\n\nclass VdsIndexSheet(IndexSheet):\n def iterload(self):\n vs = None\n with self.source.open_text() as fp:\n line = fp.readline()\n while line:\n if line.startswith('#{'):\n d = json.loads(line[1:])\n if 'col' not in d:\n vs = VdsSheet(d.pop('name'), columns=[], source=self.source, source_fpos=fp.tell())\n yield vs\n line = fp.readline()\n\n\nclass VdsSheet(Sheet):\n def newRow(self):\n return {} # rowdef: dict\n\n def iterload(self):\n self.colnames = {}\n self.columns = []\n\n with self.source.open_text() as fp:\n fp.seek(self.source_fpos)\n\n # consume all metadata, create columns\n line = fp.readline()\n while line and line.startswith('#{'):\n d = json.loads(line[1:])\n if 'col' not in d:\n raise Exception(d)\n classname = d.pop('col')\n if classname == 'Column':\n classname = 'ItemColumn'\n d['expr'] = d['name']\n\n c = globals()[classname](d.pop('name'))\n self.colnames[c.name] = c\n self.addColumn(c)\n for k, v in d.items():\n setattr(c, k, v)\n\n line = fp.readline()\n\n while line and not line.startswith('#{'):\n d = json.loads(line)\n yield d\n line = fp.readline()\n", "path": "visidata/loaders/vds.py"}], "after_files": [{"content": "'Custom VisiData save format'\n\nimport json\nfrom visidata import *\n\nNL='\\n'\n\[email protected]\ndef open_vds(vd, p):\n return VdsIndexSheet(p.name, source=p)\n\n\[email protected]\ndef save_vds(vd, p, *sheets):\n 'Save in custom VisiData format, preserving columns and their attributes.'\n\n with p.open_text(mode='w') as fp:\n for vs in sheets:\n # class and attrs for vs\n d = { 'name': vs.name, }\n fp.write('#'+json.dumps(d)+NL)\n\n # class and attrs for each column in vs\n for col in vs.visibleCols:\n d = col.__getstate__()\n d['col'] = type(col).__name__\n fp.write('#'+json.dumps(d)+NL)\n\n with Progress(gerund='saving'):\n for row in vs.iterdispvals(*vs.visibleCols, format=False):\n d = {col.name:val for col, val in row.items()}\n fp.write(json.dumps(d, default=str)+NL)\n\n\nclass VdsIndexSheet(IndexSheet):\n def iterload(self):\n vs = None\n with self.source.open_text() as fp:\n line = fp.readline()\n while line:\n if line.startswith('#{'):\n d = json.loads(line[1:])\n if 'col' not in d:\n vs = VdsSheet(d.pop('name'), columns=[], source=self.source, source_fpos=fp.tell())\n yield vs\n line = fp.readline()\n\n\nclass VdsSheet(Sheet):\n def newRow(self):\n return {} # rowdef: dict\n\n def iterload(self):\n self.colnames = {}\n self.columns = []\n\n with self.source.open_text() as fp:\n fp.seek(self.source_fpos)\n\n # consume all metadata, create columns\n line = fp.readline()\n while line and line.startswith('#{'):\n d = json.loads(line[1:])\n if 'col' not in d:\n raise Exception(d)\n classname = d.pop('col')\n if classname == 'Column':\n classname = 'ItemColumn'\n d['expr'] = d['name']\n\n c = globals()[classname](d.pop('name'))\n self.colnames[c.name] = c\n self.addColumn(c)\n for k, v in d.items():\n setattr(c, k, v)\n\n line = fp.readline()\n\n while line and not line.startswith('#{'):\n d = json.loads(line)\n yield d\n line = fp.readline()\n", "path": "visidata/loaders/vds.py"}]}
| 1,750 | 129 |
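On the fix above: `json.dumps()` calls the callable passed as `default=` for any object it cannot encode natively, and `str()` on a `datetime.date` yields its ISO form, so the one-line change is enough to make date cells serializable. A self-contained illustration, with the date value taken from the reproduction log in the issue:

```python
import json
from datetime import date

# rowdef used by save_vds: a plain dict of column name -> displayed value
row = {"testcol": date(2021, 6, 14), "other": 42}

# json.dumps(row) would raise:
#   TypeError: Object of type date is not JSON serializable
print(json.dumps(row, default=str))
# {"testcol": "2021-06-14", "other": 42}
```

`default=str` also keeps the writer generic: any future non-JSON-native cell type degrades to its string form instead of aborting the save.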
gh_patches_debug_34383
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-11310
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Keras documentation: multi_gpu_model examples do not show properly on the home page keras.io
Some examples of usage of `multi_gpu_model` appear on the documentation of the function in the [source code](https://github.com/keras-team/keras/blob/master/keras/utils/multi_gpu_utils.py). However they do not display correctly on the [Keras home page](https://keras.io/utils/):
```Example 1 - Training models with weights merge on CPU
$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$0
Example 2 - Training models with weights merge on CPU using cpu_relocation
$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$1
Example 3 - Training models with weights merge on GPU (recommended for NV-link)
$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$2```
Keras documentation of multi_gpu_model: example 2 can be misleading
In the Keras documentation for `multi_gpu_model`, it is stated:
> To save the multi-gpu model, use .save(fname) or .save_weights(fname) with the template model (the argument you passed to multi_gpu_model), rather than the model returned by multi_gpu_model.
However in example 2 the template model is overwritten by the multi-gpu model:
```python
..
# Not needed to change the device scope for model definition:
model = Xception(weights=None, ..)
try:
model = multi_gpu_model(model, cpu_relocation=True)
print("Training using multiple GPUs..")
except:
print("Training using single GPU or CPU..")
model.compile(..)
..
```
This means that in this example it would not be possible to save the weights of the template model. I suggest rewriting to something like:
```python
..
# Not needed to change the device scope for model definition:
model = Xception(weights=None, ..)
try:
parallel_model = multi_gpu_model(model, cpu_relocation=True)
print("Training using multiple GPUs..")
except ValueError:
parallel_model = model
print("Training using single GPU or CPU..")
parallel_model.compile(..)
..
```
(I take this opportunity to except only a specific error)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras/utils/multi_gpu_utils.py`
Content:
```
1 """Multi-GPU training utilities.
2 """
3 from __future__ import absolute_import
4 from __future__ import division
5 from __future__ import print_function
6
7 from ..layers.merge import concatenate
8 from .. import backend as K
9 from ..layers.core import Lambda
10 from ..engine.training import Model
11 from ..models import clone_model
12 from ..utils.generic_utils import to_list
13
14
15 def _get_available_devices():
16 return [x.name for x in K.get_session().list_devices()]
17
18
19 def _normalize_device_name(name):
20 name = '/' + ':'.join(name.lower().replace('/', '').split(':')[-2:])
21 return name
22
23
24 def multi_gpu_model(model, gpus=None, cpu_merge=True, cpu_relocation=False):
25 """Replicates a model on different GPUs.
26
27 Specifically, this function implements single-machine
28 multi-GPU data parallelism. It works in the following way:
29
30 - Divide the model's input(s) into multiple sub-batches.
31 - Apply a model copy on each sub-batch. Every model copy
32 is executed on a dedicated GPU.
33 - Concatenate the results (on CPU) into one big batch.
34
35 E.g. if your `batch_size` is 64 and you use `gpus=2`,
36 then we will divide the input into 2 sub-batches of 32 samples,
37 process each sub-batch on one GPU, then return the full
38 batch of 64 processed samples.
39
40 This induces quasi-linear speedup on up to 8 GPUs.
41
42 This function is only available with the TensorFlow backend
43 for the time being.
44
45 # Arguments
46 model: A Keras model instance. To avoid OOM errors,
47 this model could have been built on CPU, for instance
48 (see usage example below).
49 gpus: Integer >= 2 or list of integers, number of GPUs or
50 list of GPU IDs on which to create model replicas.
51 cpu_merge: A boolean value to identify whether to force
52 merging model weights under the scope of the CPU or not.
53 cpu_relocation: A boolean value to identify whether to
54 create the model's weights under the scope of the CPU.
55 If the model is not defined under any preceding device
56 scope, you can still rescue it by activating this option.
57
58 # Returns
59 A Keras `Model` instance which can be used just like the initial
60 `model` argument, but which distributes its workload on multiple GPUs.
61
62 # Example 1 - Training models with weights merge on CPU
63
64 ```python
65 import tensorflow as tf
66 from keras.applications import Xception
67 from keras.utils import multi_gpu_model
68 import numpy as np
69
70 num_samples = 1000
71 height = 224
72 width = 224
73 num_classes = 1000
74
75 # Instantiate the base model (or "template" model).
76 # We recommend doing this with under a CPU device scope,
77 # so that the model's weights are hosted on CPU memory.
78 # Otherwise they may end up hosted on a GPU, which would
79 # complicate weight sharing.
80 with tf.device('/cpu:0'):
81 model = Xception(weights=None,
82 input_shape=(height, width, 3),
83 classes=num_classes)
84
85 # Replicates the model on 8 GPUs.
86 # This assumes that your machine has 8 available GPUs.
87 parallel_model = multi_gpu_model(model, gpus=8)
88 parallel_model.compile(loss='categorical_crossentropy',
89 optimizer='rmsprop')
90
91 # Generate dummy data.
92 x = np.random.random((num_samples, height, width, 3))
93 y = np.random.random((num_samples, num_classes))
94
95 # This `fit` call will be distributed on 8 GPUs.
96 # Since the batch size is 256, each GPU will process 32 samples.
97 parallel_model.fit(x, y, epochs=20, batch_size=256)
98
99 # Save model via the template model (which shares the same weights):
100 model.save('my_model.h5')
101 ```
102
103 # Example 2 - Training models with weights merge on CPU using cpu_relocation
104
105 ```python
106 ..
107 # Not needed to change the device scope for model definition:
108 model = Xception(weights=None, ..)
109
110 try:
111 model = multi_gpu_model(model, cpu_relocation=True)
112 print("Training using multiple GPUs..")
113 except:
114 print("Training using single GPU or CPU..")
115
116 model.compile(..)
117 ..
118 ```
119
120 # Example 3 - Training models with weights merge on GPU (recommended for NV-link)
121
122 ```python
123 ..
124 # Not needed to change the device scope for model definition:
125 model = Xception(weights=None, ..)
126
127 try:
128 model = multi_gpu_model(model, cpu_merge=False)
129 print("Training using multiple GPUs..")
130 except:
131 print("Training using single GPU or CPU..")
132
133 model.compile(..)
134 ..
135 ```
136
137 # On model saving
138
139 To save the multi-gpu model, use `.save(fname)` or `.save_weights(fname)`
140 with the template model (the argument you passed to `multi_gpu_model`),
141 rather than the model returned by `multi_gpu_model`.
142 """
143 if K.backend() != 'tensorflow':
144 raise ValueError('`multi_gpu_model` is only available '
145 'with the TensorFlow backend.')
146
147 available_devices = _get_available_devices()
148 available_devices = [_normalize_device_name(name)
149 for name in available_devices]
150 if not gpus:
151 # Using all visible GPUs when not specifying `gpus`
152 # e.g. CUDA_VISIBLE_DEVICES=0,2 python keras_mgpu.py
153 gpus = len([x for x in available_devices if 'gpu' in x])
154
155 if isinstance(gpus, (list, tuple)):
156 if len(gpus) <= 1:
157 raise ValueError('For multi-gpu usage to be effective, '
158 'call `multi_gpu_model` with `len(gpus) >= 2`. '
159 'Received: `gpus=%s`' % gpus)
160 num_gpus = len(gpus)
161 target_gpu_ids = gpus
162 else:
163 if gpus <= 1:
164 raise ValueError('For multi-gpu usage to be effective, '
165 'call `multi_gpu_model` with `gpus >= 2`. '
166 'Received: `gpus=%d`' % gpus)
167 num_gpus = gpus
168 target_gpu_ids = range(num_gpus)
169
170 import tensorflow as tf
171
172 target_devices = ['/cpu:0'] + ['/gpu:%d' % i for i in target_gpu_ids]
173 for device in target_devices:
174 if device not in available_devices:
175 raise ValueError(
176 'To call `multi_gpu_model` with `gpus=%s`, '
177 'we expect the following devices to be available: %s. '
178 'However this machine only has: %s. '
179 'Try reducing `gpus`.' % (gpus,
180 target_devices,
181 available_devices))
182
183 def get_slice(data, i, parts):
184 shape = K.shape(data)
185 batch_size = shape[:1]
186 input_shape = shape[1:]
187 step = batch_size // parts
188 if i == parts - 1:
189 size = batch_size - step * i
190 else:
191 size = step
192 size = K.concatenate([size, input_shape], axis=0)
193 stride = K.concatenate([step, input_shape * 0], axis=0)
194 start = stride * i
195 return K.slice(data, start, size)
196
197 # Relocate the model definition under CPU device scope if needed
198 if cpu_relocation:
199 with tf.device('/cpu:0'):
200 model = clone_model(model)
201
202 all_outputs = []
203 for i in range(len(model.outputs)):
204 all_outputs.append([])
205
206 # Place a copy of the model on each GPU,
207 # each getting a slice of the inputs.
208 for i, gpu_id in enumerate(target_gpu_ids):
209 with tf.device('/gpu:%d' % gpu_id):
210 with tf.name_scope('replica_%d' % gpu_id):
211 inputs = []
212 # Retrieve a slice of the input.
213 for x in model.inputs:
214 # In-place input splitting which is not only
215 # 5% ~ 12% faster but also less GPU memory
216 # duplication.
217 with tf.device(x.device):
218 input_shape = K.int_shape(x)[1:]
219 slice_i = Lambda(get_slice,
220 output_shape=input_shape,
221 arguments={'i': i,
222 'parts': num_gpus})(x)
223 inputs.append(slice_i)
224
225 # Apply model on slice
226 # (creating a model replica on the target device).
227 outputs = model(inputs)
228 outputs = to_list(outputs)
229
230 # Save the outputs for merging back together later.
231 for o in range(len(outputs)):
232 all_outputs[o].append(outputs[o])
233
234 # Deduplicate output names to handle Siamese networks.
235 occurrences = {}
236 for n in model.output_names:
237 if n not in occurrences:
238 occurrences[n] = 1
239 else:
240 occurrences[n] += 1
241 conflict_counter = {n: 0 for n, count in occurrences.items() if count > 1}
242 output_names = []
243 for n in model.output_names:
244 if n in conflict_counter:
245 conflict_counter[n] += 1
246 n += '_%d' % conflict_counter[n]
247 output_names.append(n)
248
249 # Merge outputs under expected scope.
250 with tf.device('/cpu:0' if cpu_merge else '/gpu:%d' % target_gpu_ids[0]):
251 merged = []
252 for name, outputs in zip(output_names, all_outputs):
253 merged.append(concatenate(outputs,
254 axis=0, name=name))
255 return Model(model.inputs, merged)
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/keras/utils/multi_gpu_utils.py b/keras/utils/multi_gpu_utils.py
--- a/keras/utils/multi_gpu_utils.py
+++ b/keras/utils/multi_gpu_utils.py
@@ -59,7 +59,9 @@
A Keras `Model` instance which can be used just like the initial
`model` argument, but which distributes its workload on multiple GPUs.
- # Example 1 - Training models with weights merge on CPU
+ # Examples
+
+ Example 1 - Training models with weights merge on CPU
```python
import tensorflow as tf
@@ -100,7 +102,7 @@
model.save('my_model.h5')
```
- # Example 2 - Training models with weights merge on CPU using cpu_relocation
+ Example 2 - Training models with weights merge on CPU using cpu_relocation
```python
..
@@ -108,16 +110,16 @@
model = Xception(weights=None, ..)
try:
- model = multi_gpu_model(model, cpu_relocation=True)
+ parallel_model = multi_gpu_model(model, cpu_relocation=True)
print("Training using multiple GPUs..")
- except:
+ except ValueError:
+ parallel_model = model
print("Training using single GPU or CPU..")
-
- model.compile(..)
+ parallel_model.compile(..)
..
```
- # Example 3 - Training models with weights merge on GPU (recommended for NV-link)
+ Example 3 - Training models with weights merge on GPU (recommended for NV-link)
```python
..
@@ -125,12 +127,13 @@
model = Xception(weights=None, ..)
try:
- model = multi_gpu_model(model, cpu_merge=False)
+ parallel_model = multi_gpu_model(model, cpu_merge=False)
print("Training using multiple GPUs..")
except:
+ parallel_model = model
print("Training using single GPU or CPU..")
- model.compile(..)
+ parallel_model.compile(..)
..
```
|
{"golden_diff": "diff --git a/keras/utils/multi_gpu_utils.py b/keras/utils/multi_gpu_utils.py\n--- a/keras/utils/multi_gpu_utils.py\n+++ b/keras/utils/multi_gpu_utils.py\n@@ -59,7 +59,9 @@\n A Keras `Model` instance which can be used just like the initial\n `model` argument, but which distributes its workload on multiple GPUs.\n \n- # Example 1 - Training models with weights merge on CPU\n+ # Examples\n+\n+ Example 1 - Training models with weights merge on CPU\n \n ```python\n import tensorflow as tf\n@@ -100,7 +102,7 @@\n model.save('my_model.h5')\n ```\n \n- # Example 2 - Training models with weights merge on CPU using cpu_relocation\n+ Example 2 - Training models with weights merge on CPU using cpu_relocation\n \n ```python\n ..\n@@ -108,16 +110,16 @@\n model = Xception(weights=None, ..)\n \n try:\n- model = multi_gpu_model(model, cpu_relocation=True)\n+ parallel_model = multi_gpu_model(model, cpu_relocation=True)\n print(\"Training using multiple GPUs..\")\n- except:\n+ except ValueError:\n+ parallel_model = model\n print(\"Training using single GPU or CPU..\")\n-\n- model.compile(..)\n+ parallel_model.compile(..)\n ..\n ```\n \n- # Example 3 - Training models with weights merge on GPU (recommended for NV-link)\n+ Example 3 - Training models with weights merge on GPU (recommended for NV-link)\n \n ```python\n ..\n@@ -125,12 +127,13 @@\n model = Xception(weights=None, ..)\n \n try:\n- model = multi_gpu_model(model, cpu_merge=False)\n+ parallel_model = multi_gpu_model(model, cpu_merge=False)\n print(\"Training using multiple GPUs..\")\n except:\n+ parallel_model = model\n print(\"Training using single GPU or CPU..\")\n \n- model.compile(..)\n+ parallel_model.compile(..)\n ..\n ```\n", "issue": "Keras documentation: multi_gpu_model examples do not show properly on the home page keras.io\nSome examples of usage of `multi_gpu_model` appear on the documentation of the function in the [source code](https://github.com/keras-team/keras/blob/master/keras/utils/multi_gpu_utils.py). However they do not display correctly on the [Keras home page](https://keras.io/utils/):\r\n\r\n```Example 1 - Training models with weights merge on CPU\r\n\r\n$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$0\r\n\r\nExample 2 - Training models with weights merge on CPU using cpu_relocation\r\n\r\n$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$1\r\n\r\nExample 3 - Training models with weights merge on GPU (recommended for NV-link)\r\n\r\n$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$2```\nKeras documentation of multi_gpu_model: example 2 can be misleading \nIn the Keras documentation for `multi_gpu_model`, it is stated:\r\n\r\n> To save the multi-gpu model, use .save(fname) or .save_weights(fname) with the template model (the argument you passed to multi_gpu_model), rather than the model returned by multi_gpu_model.\r\n\r\nHowever in example 2 the template model is overwritten by the multi-gpu model:\r\n\r\n```python\r\n ..\r\n # Not needed to change the device scope for model definition:\r\n model = Xception(weights=None, ..)\r\n try:\r\n model = multi_gpu_model(model, cpu_relocation=True)\r\n print(\"Training using multiple GPUs..\")\r\n except:\r\n print(\"Training using single GPU or CPU..\")\r\n model.compile(..)\r\n ..\r\n```\r\n\r\nThis means that in this example it would not be possible to save the weights of the template model. 
I suggest rewritting to something like:\r\n\r\n\r\n```python\r\n ..\r\n # Not needed to change the device scope for model definition:\r\n model = Xception(weights=None, ..)\r\n try:\r\n parallel_model = multi_gpu_model(model, cpu_relocation=True)\r\n print(\"Training using multiple GPUs..\")\r\n except ValueError:\r\n parallel_model = model\r\n print(\"Training using single GPU or CPU..\")\r\n parallel_model.compile(..)\r\n ..\r\n```\r\n\r\n(I take this opportunity to except only a specific error)\r\n\n", "before_files": [{"content": "\"\"\"Multi-GPU training utilities.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom ..layers.merge import concatenate\nfrom .. import backend as K\nfrom ..layers.core import Lambda\nfrom ..engine.training import Model\nfrom ..models import clone_model\nfrom ..utils.generic_utils import to_list\n\n\ndef _get_available_devices():\n return [x.name for x in K.get_session().list_devices()]\n\n\ndef _normalize_device_name(name):\n name = '/' + ':'.join(name.lower().replace('/', '').split(':')[-2:])\n return name\n\n\ndef multi_gpu_model(model, gpus=None, cpu_merge=True, cpu_relocation=False):\n \"\"\"Replicates a model on different GPUs.\n\n Specifically, this function implements single-machine\n multi-GPU data parallelism. It works in the following way:\n\n - Divide the model's input(s) into multiple sub-batches.\n - Apply a model copy on each sub-batch. Every model copy\n is executed on a dedicated GPU.\n - Concatenate the results (on CPU) into one big batch.\n\n E.g. if your `batch_size` is 64 and you use `gpus=2`,\n then we will divide the input into 2 sub-batches of 32 samples,\n process each sub-batch on one GPU, then return the full\n batch of 64 processed samples.\n\n This induces quasi-linear speedup on up to 8 GPUs.\n\n This function is only available with the TensorFlow backend\n for the time being.\n\n # Arguments\n model: A Keras model instance. 
To avoid OOM errors,\n this model could have been built on CPU, for instance\n (see usage example below).\n gpus: Integer >= 2 or list of integers, number of GPUs or\n list of GPU IDs on which to create model replicas.\n cpu_merge: A boolean value to identify whether to force\n merging model weights under the scope of the CPU or not.\n cpu_relocation: A boolean value to identify whether to\n create the model's weights under the scope of the CPU.\n If the model is not defined under any preceding device\n scope, you can still rescue it by activating this option.\n\n # Returns\n A Keras `Model` instance which can be used just like the initial\n `model` argument, but which distributes its workload on multiple GPUs.\n\n # Example 1 - Training models with weights merge on CPU\n\n ```python\n import tensorflow as tf\n from keras.applications import Xception\n from keras.utils import multi_gpu_model\n import numpy as np\n\n num_samples = 1000\n height = 224\n width = 224\n num_classes = 1000\n\n # Instantiate the base model (or \"template\" model).\n # We recommend doing this with under a CPU device scope,\n # so that the model's weights are hosted on CPU memory.\n # Otherwise they may end up hosted on a GPU, which would\n # complicate weight sharing.\n with tf.device('/cpu:0'):\n model = Xception(weights=None,\n input_shape=(height, width, 3),\n classes=num_classes)\n\n # Replicates the model on 8 GPUs.\n # This assumes that your machine has 8 available GPUs.\n parallel_model = multi_gpu_model(model, gpus=8)\n parallel_model.compile(loss='categorical_crossentropy',\n optimizer='rmsprop')\n\n # Generate dummy data.\n x = np.random.random((num_samples, height, width, 3))\n y = np.random.random((num_samples, num_classes))\n\n # This `fit` call will be distributed on 8 GPUs.\n # Since the batch size is 256, each GPU will process 32 samples.\n parallel_model.fit(x, y, epochs=20, batch_size=256)\n\n # Save model via the template model (which shares the same weights):\n model.save('my_model.h5')\n ```\n\n # Example 2 - Training models with weights merge on CPU using cpu_relocation\n\n ```python\n ..\n # Not needed to change the device scope for model definition:\n model = Xception(weights=None, ..)\n\n try:\n model = multi_gpu_model(model, cpu_relocation=True)\n print(\"Training using multiple GPUs..\")\n except:\n print(\"Training using single GPU or CPU..\")\n\n model.compile(..)\n ..\n ```\n\n # Example 3 - Training models with weights merge on GPU (recommended for NV-link)\n\n ```python\n ..\n # Not needed to change the device scope for model definition:\n model = Xception(weights=None, ..)\n\n try:\n model = multi_gpu_model(model, cpu_merge=False)\n print(\"Training using multiple GPUs..\")\n except:\n print(\"Training using single GPU or CPU..\")\n\n model.compile(..)\n ..\n ```\n\n # On model saving\n\n To save the multi-gpu model, use `.save(fname)` or `.save_weights(fname)`\n with the template model (the argument you passed to `multi_gpu_model`),\n rather than the model returned by `multi_gpu_model`.\n \"\"\"\n if K.backend() != 'tensorflow':\n raise ValueError('`multi_gpu_model` is only available '\n 'with the TensorFlow backend.')\n\n available_devices = _get_available_devices()\n available_devices = [_normalize_device_name(name)\n for name in available_devices]\n if not gpus:\n # Using all visible GPUs when not specifying `gpus`\n # e.g. 
CUDA_VISIBLE_DEVICES=0,2 python keras_mgpu.py\n gpus = len([x for x in available_devices if 'gpu' in x])\n\n if isinstance(gpus, (list, tuple)):\n if len(gpus) <= 1:\n raise ValueError('For multi-gpu usage to be effective, '\n 'call `multi_gpu_model` with `len(gpus) >= 2`. '\n 'Received: `gpus=%s`' % gpus)\n num_gpus = len(gpus)\n target_gpu_ids = gpus\n else:\n if gpus <= 1:\n raise ValueError('For multi-gpu usage to be effective, '\n 'call `multi_gpu_model` with `gpus >= 2`. '\n 'Received: `gpus=%d`' % gpus)\n num_gpus = gpus\n target_gpu_ids = range(num_gpus)\n\n import tensorflow as tf\n\n target_devices = ['/cpu:0'] + ['/gpu:%d' % i for i in target_gpu_ids]\n for device in target_devices:\n if device not in available_devices:\n raise ValueError(\n 'To call `multi_gpu_model` with `gpus=%s`, '\n 'we expect the following devices to be available: %s. '\n 'However this machine only has: %s. '\n 'Try reducing `gpus`.' % (gpus,\n target_devices,\n available_devices))\n\n def get_slice(data, i, parts):\n shape = K.shape(data)\n batch_size = shape[:1]\n input_shape = shape[1:]\n step = batch_size // parts\n if i == parts - 1:\n size = batch_size - step * i\n else:\n size = step\n size = K.concatenate([size, input_shape], axis=0)\n stride = K.concatenate([step, input_shape * 0], axis=0)\n start = stride * i\n return K.slice(data, start, size)\n\n # Relocate the model definition under CPU device scope if needed\n if cpu_relocation:\n with tf.device('/cpu:0'):\n model = clone_model(model)\n\n all_outputs = []\n for i in range(len(model.outputs)):\n all_outputs.append([])\n\n # Place a copy of the model on each GPU,\n # each getting a slice of the inputs.\n for i, gpu_id in enumerate(target_gpu_ids):\n with tf.device('/gpu:%d' % gpu_id):\n with tf.name_scope('replica_%d' % gpu_id):\n inputs = []\n # Retrieve a slice of the input.\n for x in model.inputs:\n # In-place input splitting which is not only\n # 5% ~ 12% faster but also less GPU memory\n # duplication.\n with tf.device(x.device):\n input_shape = K.int_shape(x)[1:]\n slice_i = Lambda(get_slice,\n output_shape=input_shape,\n arguments={'i': i,\n 'parts': num_gpus})(x)\n inputs.append(slice_i)\n\n # Apply model on slice\n # (creating a model replica on the target device).\n outputs = model(inputs)\n outputs = to_list(outputs)\n\n # Save the outputs for merging back together later.\n for o in range(len(outputs)):\n all_outputs[o].append(outputs[o])\n\n # Deduplicate output names to handle Siamese networks.\n occurrences = {}\n for n in model.output_names:\n if n not in occurrences:\n occurrences[n] = 1\n else:\n occurrences[n] += 1\n conflict_counter = {n: 0 for n, count in occurrences.items() if count > 1}\n output_names = []\n for n in model.output_names:\n if n in conflict_counter:\n conflict_counter[n] += 1\n n += '_%d' % conflict_counter[n]\n output_names.append(n)\n\n # Merge outputs under expected scope.\n with tf.device('/cpu:0' if cpu_merge else '/gpu:%d' % target_gpu_ids[0]):\n merged = []\n for name, outputs in zip(output_names, all_outputs):\n merged.append(concatenate(outputs,\n axis=0, name=name))\n return Model(model.inputs, merged)\n", "path": "keras/utils/multi_gpu_utils.py"}], "after_files": [{"content": "\"\"\"Multi-GPU training utilities.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom ..layers.merge import concatenate\nfrom .. 
import backend as K\nfrom ..layers.core import Lambda\nfrom ..engine.training import Model\nfrom ..models import clone_model\nfrom ..utils.generic_utils import to_list\n\n\ndef _get_available_devices():\n return [x.name for x in K.get_session().list_devices()]\n\n\ndef _normalize_device_name(name):\n name = '/' + ':'.join(name.lower().replace('/', '').split(':')[-2:])\n return name\n\n\ndef multi_gpu_model(model, gpus=None, cpu_merge=True, cpu_relocation=False):\n \"\"\"Replicates a model on different GPUs.\n\n Specifically, this function implements single-machine\n multi-GPU data parallelism. It works in the following way:\n\n - Divide the model's input(s) into multiple sub-batches.\n - Apply a model copy on each sub-batch. Every model copy\n is executed on a dedicated GPU.\n - Concatenate the results (on CPU) into one big batch.\n\n E.g. if your `batch_size` is 64 and you use `gpus=2`,\n then we will divide the input into 2 sub-batches of 32 samples,\n process each sub-batch on one GPU, then return the full\n batch of 64 processed samples.\n\n This induces quasi-linear speedup on up to 8 GPUs.\n\n This function is only available with the TensorFlow backend\n for the time being.\n\n # Arguments\n model: A Keras model instance. To avoid OOM errors,\n this model could have been built on CPU, for instance\n (see usage example below).\n gpus: Integer >= 2 or list of integers, number of GPUs or\n list of GPU IDs on which to create model replicas.\n cpu_merge: A boolean value to identify whether to force\n merging model weights under the scope of the CPU or not.\n cpu_relocation: A boolean value to identify whether to\n create the model's weights under the scope of the CPU.\n If the model is not defined under any preceding device\n scope, you can still rescue it by activating this option.\n\n # Returns\n A Keras `Model` instance which can be used just like the initial\n `model` argument, but which distributes its workload on multiple GPUs.\n\n # Examples\n\n Example 1 - Training models with weights merge on CPU\n\n ```python\n import tensorflow as tf\n from keras.applications import Xception\n from keras.utils import multi_gpu_model\n import numpy as np\n\n num_samples = 1000\n height = 224\n width = 224\n num_classes = 1000\n\n # Instantiate the base model (or \"template\" model).\n # We recommend doing this with under a CPU device scope,\n # so that the model's weights are hosted on CPU memory.\n # Otherwise they may end up hosted on a GPU, which would\n # complicate weight sharing.\n with tf.device('/cpu:0'):\n model = Xception(weights=None,\n input_shape=(height, width, 3),\n classes=num_classes)\n\n # Replicates the model on 8 GPUs.\n # This assumes that your machine has 8 available GPUs.\n parallel_model = multi_gpu_model(model, gpus=8)\n parallel_model.compile(loss='categorical_crossentropy',\n optimizer='rmsprop')\n\n # Generate dummy data.\n x = np.random.random((num_samples, height, width, 3))\n y = np.random.random((num_samples, num_classes))\n\n # This `fit` call will be distributed on 8 GPUs.\n # Since the batch size is 256, each GPU will process 32 samples.\n parallel_model.fit(x, y, epochs=20, batch_size=256)\n\n # Save model via the template model (which shares the same weights):\n model.save('my_model.h5')\n ```\n\n Example 2 - Training models with weights merge on CPU using cpu_relocation\n\n ```python\n ..\n # Not needed to change the device scope for model definition:\n model = Xception(weights=None, ..)\n\n try:\n parallel_model = multi_gpu_model(model, 
cpu_relocation=True)\n print(\"Training using multiple GPUs..\")\n except ValueError:\n parallel_model = model\n print(\"Training using single GPU or CPU..\")\n parallel_model.compile(..)\n ..\n ```\n\n Example 3 - Training models with weights merge on GPU (recommended for NV-link)\n\n ```python\n ..\n # Not needed to change the device scope for model definition:\n model = Xception(weights=None, ..)\n\n try:\n parallel_model = multi_gpu_model(model, cpu_merge=False)\n print(\"Training using multiple GPUs..\")\n except:\n parallel_model = model\n print(\"Training using single GPU or CPU..\")\n\n parallel_model.compile(..)\n ..\n ```\n\n # On model saving\n\n To save the multi-gpu model, use `.save(fname)` or `.save_weights(fname)`\n with the template model (the argument you passed to `multi_gpu_model`),\n rather than the model returned by `multi_gpu_model`.\n \"\"\"\n if K.backend() != 'tensorflow':\n raise ValueError('`multi_gpu_model` is only available '\n 'with the TensorFlow backend.')\n\n available_devices = _get_available_devices()\n available_devices = [_normalize_device_name(name)\n for name in available_devices]\n if not gpus:\n # Using all visible GPUs when not specifying `gpus`\n # e.g. CUDA_VISIBLE_DEVICES=0,2 python keras_mgpu.py\n gpus = len([x for x in available_devices if 'gpu' in x])\n\n if isinstance(gpus, (list, tuple)):\n if len(gpus) <= 1:\n raise ValueError('For multi-gpu usage to be effective, '\n 'call `multi_gpu_model` with `len(gpus) >= 2`. '\n 'Received: `gpus=%s`' % gpus)\n num_gpus = len(gpus)\n target_gpu_ids = gpus\n else:\n if gpus <= 1:\n raise ValueError('For multi-gpu usage to be effective, '\n 'call `multi_gpu_model` with `gpus >= 2`. '\n 'Received: `gpus=%d`' % gpus)\n num_gpus = gpus\n target_gpu_ids = range(num_gpus)\n\n import tensorflow as tf\n\n target_devices = ['/cpu:0'] + ['/gpu:%d' % i for i in target_gpu_ids]\n for device in target_devices:\n if device not in available_devices:\n raise ValueError(\n 'To call `multi_gpu_model` with `gpus=%s`, '\n 'we expect the following devices to be available: %s. '\n 'However this machine only has: %s. '\n 'Try reducing `gpus`.' 
% (gpus,\n target_devices,\n available_devices))\n\n def get_slice(data, i, parts):\n shape = K.shape(data)\n batch_size = shape[:1]\n input_shape = shape[1:]\n step = batch_size // parts\n if i == parts - 1:\n size = batch_size - step * i\n else:\n size = step\n size = K.concatenate([size, input_shape], axis=0)\n stride = K.concatenate([step, input_shape * 0], axis=0)\n start = stride * i\n return K.slice(data, start, size)\n\n # Relocate the model definition under CPU device scope if needed\n if cpu_relocation:\n with tf.device('/cpu:0'):\n model = clone_model(model)\n\n all_outputs = []\n for i in range(len(model.outputs)):\n all_outputs.append([])\n\n # Place a copy of the model on each GPU,\n # each getting a slice of the inputs.\n for i, gpu_id in enumerate(target_gpu_ids):\n with tf.device('/gpu:%d' % gpu_id):\n with tf.name_scope('replica_%d' % gpu_id):\n inputs = []\n # Retrieve a slice of the input.\n for x in model.inputs:\n # In-place input splitting which is not only\n # 5% ~ 12% faster but also less GPU memory\n # duplication.\n with tf.device(x.device):\n input_shape = K.int_shape(x)[1:]\n slice_i = Lambda(get_slice,\n output_shape=input_shape,\n arguments={'i': i,\n 'parts': num_gpus})(x)\n inputs.append(slice_i)\n\n # Apply model on slice\n # (creating a model replica on the target device).\n outputs = model(inputs)\n outputs = to_list(outputs)\n\n # Save the outputs for merging back together later.\n for o in range(len(outputs)):\n all_outputs[o].append(outputs[o])\n\n # Deduplicate output names to handle Siamese networks.\n occurrences = {}\n for n in model.output_names:\n if n not in occurrences:\n occurrences[n] = 1\n else:\n occurrences[n] += 1\n conflict_counter = {n: 0 for n, count in occurrences.items() if count > 1}\n output_names = []\n for n in model.output_names:\n if n in conflict_counter:\n conflict_counter[n] += 1\n n += '_%d' % conflict_counter[n]\n output_names.append(n)\n\n # Merge outputs under expected scope.\n with tf.device('/cpu:0' if cpu_merge else '/gpu:%d' % target_gpu_ids[0]):\n merged = []\n for name, outputs in zip(output_names, all_outputs):\n merged.append(concatenate(outputs,\n axis=0, name=name))\n return Model(model.inputs, merged)\n", "path": "keras/utils/multi_gpu_utils.py"}]}
| 3,552 | 472 |
gh_patches_debug_33630
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-1150
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Possibly incorrect hook names?
Going through the docs, I see two rather unusual hook names: `construct_wagtail_edit_bird` and `construct_whitelister_element_rules`.
The first seems like a placeholder name that accidentally made it out of the alpha stage. Based on the docs, it seems like it should be called `construct_wagtail_userbar`.
The second seems like a straight up typo. I've never heard the word "whitelister" before. I'm pretty sure this hook should be called `construct_whitelisted_element_rules`.
Changing the names of hooks is obviously a major undertaking, since some code bases will have already implemented them. But adding the new names and deprecating the old ones for a few releases should be entirely possible. I'd be happy to do this in a pull request, since it's only a dozen or so lines of code to change, but I don't really know how wagtail handles deprecating old APIs.
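For illustration, a hypothetical sketch of how a renamed hook could keep answering to its old name for a couple of releases. This is written against a generic `get_hooks(name)` registry and an invented helper name, not Wagtail's actual deprecation machinery:
```python
import warnings


def get_hooks_with_deprecated_alias(get_hooks, new_name, old_name):
    """Collect hooks registered under new_name, plus any still using old_name."""
    hook_fns = list(get_hooks(new_name))
    legacy_fns = list(get_hooks(old_name))
    for fn in legacy_fns:
        warnings.warn(
            "The %r hook has been renamed to %r; please update %s.%s."
            % (old_name, new_name, fn.__module__, fn.__name__),
            DeprecationWarning,
        )
    return hook_fns + legacy_fns
```
A call site would then iterate over `get_hooks_with_deprecated_alias(hooks.get_hooks, 'construct_wagtail_userbar', 'construct_wagtail_edit_bird')` until the old name is finally dropped.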
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailadmin/views/userbar.py`
Content:
```
1 from django.shortcuts import render
2 from django.contrib.auth.decorators import permission_required
3
4 from wagtail.wagtailadmin.userbar import EditPageItem, AddPageItem, ApproveModerationEditPageItem, RejectModerationEditPageItem
5 from wagtail.wagtailcore import hooks
6 from wagtail.wagtailcore.models import Page, PageRevision
7
8
9 @permission_required('wagtailadmin.access_admin', raise_exception=True)
10 def for_frontend(request, page_id):
11 items = [
12 EditPageItem(Page.objects.get(id=page_id)),
13 AddPageItem(Page.objects.get(id=page_id)),
14 ]
15
16 for fn in hooks.get_hooks('construct_wagtail_edit_bird'):
17 fn(request, items)
18
19 # Render the items
20 rendered_items = [item.render(request) for item in items]
21
22 # Remove any unrendered items
23 rendered_items = [item for item in rendered_items if item]
24
25 # Render the edit bird
26 return render(request, 'wagtailadmin/userbar/base.html', {
27 'items': rendered_items,
28 })
29
30
31 @permission_required('wagtailadmin.access_admin', raise_exception=True)
32 def for_moderation(request, revision_id):
33 items = [
34 EditPageItem(PageRevision.objects.get(id=revision_id).page),
35 AddPageItem(PageRevision.objects.get(id=revision_id).page),
36 ApproveModerationEditPageItem(PageRevision.objects.get(id=revision_id)),
37 RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),
38 ]
39
40 for fn in hooks.get_hooks('construct_wagtail_edit_bird'):
41 fn(request, items)
42
43 # Render the items
44 rendered_items = [item.render(request) for item in items]
45
46 # Remove any unrendered items
47 rendered_items = [item for item in rendered_items if item]
48
49 # Render the edit bird
50 return render(request, 'wagtailadmin/userbar/base.html', {
51 'items': rendered_items,
52 })
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/wagtailadmin/views/userbar.py b/wagtail/wagtailadmin/views/userbar.py
--- a/wagtail/wagtailadmin/views/userbar.py
+++ b/wagtail/wagtailadmin/views/userbar.py
@@ -1,3 +1,5 @@
+import warnings
+
from django.shortcuts import render
from django.contrib.auth.decorators import permission_required
@@ -5,6 +7,8 @@
from wagtail.wagtailcore import hooks
from wagtail.wagtailcore.models import Page, PageRevision
+from wagtail.utils.deprecation import RemovedInWagtail11Warning
+
@permission_required('wagtailadmin.access_admin', raise_exception=True)
def for_frontend(request, page_id):
@@ -13,7 +17,10 @@
AddPageItem(Page.objects.get(id=page_id)),
]
- for fn in hooks.get_hooks('construct_wagtail_edit_bird'):
+ # TODO: Remove in 1.1 release
+ run_deprecated_edit_bird_hook(request, items)
+
+ for fn in hooks.get_hooks('construct_wagtail_userbar'):
fn(request, items)
# Render the items
@@ -37,7 +44,10 @@
RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),
]
- for fn in hooks.get_hooks('construct_wagtail_edit_bird'):
+ # TODO: Remove in 1.1 release
+ run_deprecated_edit_bird_hook(request, items)
+
+ for fn in hooks.get_hooks('construct_wagtail_userbar'):
fn(request, items)
# Render the items
@@ -50,3 +60,13 @@
return render(request, 'wagtailadmin/userbar/base.html', {
'items': rendered_items,
})
+
+
+def run_deprecated_edit_bird_hook(request, items):
+ for fn in hooks.get_hooks('construct_wagtail_edit_bird'):
+ fn(request, items)
+
+ warnings.warn(
+ "The 'construct_wagtail_edit_bird' hook has been renamed to 'construct_wagtail_userbar'."
+ "Please update function '%s' in '%s'." % (fn.__name__, fn.__module__), RemovedInWagtail11Warning
+ )
|
{"golden_diff": "diff --git a/wagtail/wagtailadmin/views/userbar.py b/wagtail/wagtailadmin/views/userbar.py\n--- a/wagtail/wagtailadmin/views/userbar.py\n+++ b/wagtail/wagtailadmin/views/userbar.py\n@@ -1,3 +1,5 @@\n+import warnings\n+\n from django.shortcuts import render\n from django.contrib.auth.decorators import permission_required\n \n@@ -5,6 +7,8 @@\n from wagtail.wagtailcore import hooks\n from wagtail.wagtailcore.models import Page, PageRevision\n \n+from wagtail.utils.deprecation import RemovedInWagtail11Warning\n+\n \n @permission_required('wagtailadmin.access_admin', raise_exception=True)\n def for_frontend(request, page_id):\n@@ -13,7 +17,10 @@\n AddPageItem(Page.objects.get(id=page_id)),\n ]\n \n- for fn in hooks.get_hooks('construct_wagtail_edit_bird'):\n+ # TODO: Remove in 1.1 release\n+ run_deprecated_edit_bird_hook(request, items)\n+\n+ for fn in hooks.get_hooks('construct_wagtail_userbar'):\n fn(request, items)\n \n # Render the items\n@@ -37,7 +44,10 @@\n RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n ]\n \n- for fn in hooks.get_hooks('construct_wagtail_edit_bird'):\n+ # TODO: Remove in 1.1 release\n+ run_deprecated_edit_bird_hook(request, items)\n+\n+ for fn in hooks.get_hooks('construct_wagtail_userbar'):\n fn(request, items)\n \n # Render the items\n@@ -50,3 +60,13 @@\n return render(request, 'wagtailadmin/userbar/base.html', {\n 'items': rendered_items,\n })\n+\n+\n+def run_deprecated_edit_bird_hook(request, items):\n+ for fn in hooks.get_hooks('construct_wagtail_edit_bird'):\n+ fn(request, items)\n+\n+ warnings.warn(\n+ \"The 'construct_wagtail_edit_bird' hook has been renamed to 'construct_wagtail_userbar'.\"\n+ \"Please update function '%s' in '%s'.\" % (fn.__name__, fn.__module__), RemovedInWagtail11Warning\n+ )\n", "issue": "Possibly incorrect hook names?\nGoing through the docs, I see two rather unusual hook names: `construct_wagtail_edit_bird` and `construct_whitelister_element_rules`. \n\nThe first seems like a placeholder name that accidentally made it out of the alpha stage. Based on the docs, it seems like it should be called `construct_wagtail_userbar`.\n\nThe second seems like a straight up typo. I've never heard the word \"whitelister\" before. I'm pretty sure this hook should be called `construct_whitelisted_element_rules`.\n\nChanging the names of hooks is obviously a major undertaking, since some code bases will have already implemented them. But adding the new names and deprecating the old ones for a few releases should be entirely possible. I'd be happy to do this in a pull request, since it's only a dozen or lines of code to change, but I don't really know how wagtail handles deprecating old APIs.\n\nPossibly incorrect hook names?\nGoing through the docs, I see two rather unusual hook names: `construct_wagtail_edit_bird` and `construct_whitelister_element_rules`. \n\nThe first seems like a placeholder name that accidentally made it out of the alpha stage. Based on the docs, it seems like it should be called `construct_wagtail_userbar`.\n\nThe second seems like a straight up typo. I've never heard the word \"whitelister\" before. I'm pretty sure this hook should be called `construct_whitelisted_element_rules`.\n\nChanging the names of hooks is obviously a major undertaking, since some code bases will have already implemented them. But adding the new names and deprecating the old ones for a few releases should be entirely possible. 
I'd be happy to do this in a pull request, since it's only a dozen or lines of code to change, but I don't really know how wagtail handles deprecating old APIs.\n\n", "before_files": [{"content": "from django.shortcuts import render\nfrom django.contrib.auth.decorators import permission_required\n\nfrom wagtail.wagtailadmin.userbar import EditPageItem, AddPageItem, ApproveModerationEditPageItem, RejectModerationEditPageItem\nfrom wagtail.wagtailcore import hooks\nfrom wagtail.wagtailcore.models import Page, PageRevision\n\n\n@permission_required('wagtailadmin.access_admin', raise_exception=True)\ndef for_frontend(request, page_id):\n items = [\n EditPageItem(Page.objects.get(id=page_id)),\n AddPageItem(Page.objects.get(id=page_id)),\n ]\n\n for fn in hooks.get_hooks('construct_wagtail_edit_bird'):\n fn(request, items)\n\n # Render the items\n rendered_items = [item.render(request) for item in items]\n\n # Remove any unrendered items\n rendered_items = [item for item in rendered_items if item]\n\n # Render the edit bird\n return render(request, 'wagtailadmin/userbar/base.html', {\n 'items': rendered_items,\n })\n\n\n@permission_required('wagtailadmin.access_admin', raise_exception=True)\ndef for_moderation(request, revision_id):\n items = [\n EditPageItem(PageRevision.objects.get(id=revision_id).page),\n AddPageItem(PageRevision.objects.get(id=revision_id).page),\n ApproveModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n ]\n\n for fn in hooks.get_hooks('construct_wagtail_edit_bird'):\n fn(request, items)\n\n # Render the items\n rendered_items = [item.render(request) for item in items]\n\n # Remove any unrendered items\n rendered_items = [item for item in rendered_items if item]\n\n # Render the edit bird\n return render(request, 'wagtailadmin/userbar/base.html', {\n 'items': rendered_items,\n })\n", "path": "wagtail/wagtailadmin/views/userbar.py"}], "after_files": [{"content": "import warnings\n\nfrom django.shortcuts import render\nfrom django.contrib.auth.decorators import permission_required\n\nfrom wagtail.wagtailadmin.userbar import EditPageItem, AddPageItem, ApproveModerationEditPageItem, RejectModerationEditPageItem\nfrom wagtail.wagtailcore import hooks\nfrom wagtail.wagtailcore.models import Page, PageRevision\n\nfrom wagtail.utils.deprecation import RemovedInWagtail11Warning\n\n\n@permission_required('wagtailadmin.access_admin', raise_exception=True)\ndef for_frontend(request, page_id):\n items = [\n EditPageItem(Page.objects.get(id=page_id)),\n AddPageItem(Page.objects.get(id=page_id)),\n ]\n\n # TODO: Remove in 1.1 release\n run_deprecated_edit_bird_hook(request, items)\n\n for fn in hooks.get_hooks('construct_wagtail_userbar'):\n fn(request, items)\n\n # Render the items\n rendered_items = [item.render(request) for item in items]\n\n # Remove any unrendered items\n rendered_items = [item for item in rendered_items if item]\n\n # Render the edit bird\n return render(request, 'wagtailadmin/userbar/base.html', {\n 'items': rendered_items,\n })\n\n\n@permission_required('wagtailadmin.access_admin', raise_exception=True)\ndef for_moderation(request, revision_id):\n items = [\n EditPageItem(PageRevision.objects.get(id=revision_id).page),\n AddPageItem(PageRevision.objects.get(id=revision_id).page),\n ApproveModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n ]\n\n # TODO: Remove in 1.1 release\n 
run_deprecated_edit_bird_hook(request, items)\n\n for fn in hooks.get_hooks('construct_wagtail_userbar'):\n fn(request, items)\n\n # Render the items\n rendered_items = [item.render(request) for item in items]\n\n # Remove any unrendered items\n rendered_items = [item for item in rendered_items if item]\n\n # Render the edit bird\n return render(request, 'wagtailadmin/userbar/base.html', {\n 'items': rendered_items,\n })\n\n\ndef run_deprecated_edit_bird_hook(request, items):\n for fn in hooks.get_hooks('construct_wagtail_edit_bird'):\n fn(request, items)\n\n warnings.warn(\n \"The 'construct_wagtail_edit_bird' hook has been renamed to 'construct_wagtail_userbar'.\"\n \"Please update function '%s' in '%s'.\" % (fn.__name__, fn.__module__), RemovedInWagtail11Warning\n )\n", "path": "wagtail/wagtailadmin/views/userbar.py"}]}
| 1,182 | 514 |
gh_patches_debug_3533
|
rasdani/github-patches
|
git_diff
|
OpenNMT__OpenNMT-tf-189
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash loading parallel inputs with --data_dir
I found the following issue when following the tutorial and trying to use:
data:
train_features_file:
- train_source_1.records
- train_source_2.txt
- train_source_3.txt
In main.py, inside the method `_prefix_paths`, the line
new_path = os.path.join(prefix, path)
will crash because `paths` is a list and `os.path.join` can't be applied to a list.
The fix should be to check the type of `paths` and iterate over it when it is a list.
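A minimal sketch of that type check, mirroring the recursive shape of the existing dict branch (offered as an illustration of the idea, reusing the `six` helper the file already imports):
```python
import os
import six


def _prefix_paths(prefix, paths):
    """Recursively prefix relative paths found in dicts, lists, or strings."""
    if isinstance(paths, dict):
        for key, path in six.iteritems(paths):
            paths[key] = _prefix_paths(prefix, path)
        return paths
    elif isinstance(paths, list):
        # New branch: prefix every entry of a parallel-inputs list in place.
        for i, path in enumerate(paths):
            paths[i] = _prefix_paths(prefix, path)
        return paths
    else:
        new_path = os.path.join(prefix, paths)
        # Keep the prefixed path only if it actually exists on disk.
        return new_path if os.path.isfile(new_path) else paths
```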
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opennmt/bin/main.py`
Content:
```
1 """Main script."""
2
3 import argparse
4 import json
5 import os
6 import six
7
8 import tensorflow as tf
9
10 from opennmt.models import catalog
11 from opennmt.runner import Runner
12 from opennmt.config import load_model, load_config
13 from opennmt.utils.misc import classes_in_module
14
15
16 def _prefix_paths(prefix, paths):
17 """Recursively prefix paths.
18
19 Args:
20 prefix: The prefix to apply.
21 data: A dict of relative paths.
22
23 Returns:
24 The updated dict.
25 """
26 if isinstance(paths, dict):
27 for key, path in six.iteritems(paths):
28 paths[key] = _prefix_paths(prefix, path)
29 return paths
30 else:
31 path = paths
32 new_path = os.path.join(prefix, path)
33 if os.path.isfile(new_path):
34 return new_path
35 else:
36 return path
37
38 def main():
39 parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
40 parser.add_argument("run",
41 choices=["train_and_eval", "train", "eval", "infer", "export", "score"],
42 help="Run type.")
43 parser.add_argument("--config", required=True, nargs="+",
44 help="List of configuration files.")
45 parser.add_argument("--model_type", default="", choices=list(classes_in_module(catalog)),
46 help="Model type from the catalog.")
47 parser.add_argument("--model", default="",
48 help="Custom model configuration file.")
49 parser.add_argument("--run_dir", default="",
50 help="If set, model_dir will be created relative to this location.")
51 parser.add_argument("--data_dir", default="",
52 help="If set, data files are expected to be relative to this location.")
53 parser.add_argument("--features_file", default=[], nargs="+",
54 help="Run inference on this file.")
55 parser.add_argument("--predictions_file", default="",
56 help=("File used to save predictions. If not set, predictions are printed "
57 "on the standard output."))
58 parser.add_argument("--log_prediction_time", default=False, action="store_true",
59 help="Logs some prediction time metrics.")
60 parser.add_argument("--checkpoint_path", default=None,
61 help=("Checkpoint or directory to use for inference or export "
62 "(when a directory is set, the latest checkpoint is used)."))
63 parser.add_argument("--num_gpus", type=int, default=1,
64 help="Number of GPUs to use for in-graph replication.")
65 parser.add_argument("--chief_host", default="",
66 help="hostname:port of the chief worker (for distributed training).")
67 parser.add_argument("--worker_hosts", default="",
68 help=("Comma-separated list of hostname:port of workers "
69 "(for distributed training)."))
70 parser.add_argument("--ps_hosts", default="",
71 help=("Comma-separated list of hostname:port of parameter servers "
72 "(for distributed training)."))
73 parser.add_argument("--task_type", default="chief",
74 choices=["chief", "worker", "ps", "evaluator"],
75 help="Type of the task to run (for distributed training).")
76 parser.add_argument("--task_index", type=int, default=0,
77 help="ID of the task (for distributed training).")
78 parser.add_argument("--log_level", default="INFO",
79 choices=["DEBUG", "ERROR", "FATAL", "INFO", "WARN"],
80 help="Logs verbosity.")
81 parser.add_argument("--seed", type=int, default=None,
82 help="Random seed.")
83 parser.add_argument("--gpu_allow_growth", default=False, action="store_true",
84 help="Allocate GPU memory dynamically.")
85 parser.add_argument("--intra_op_parallelism_threads", type=int, default=0,
86 help=("Number of intra op threads (0 means the system picks "
87 "an appropriate number)."))
88 parser.add_argument("--inter_op_parallelism_threads", type=int, default=0,
89 help=("Number of inter op threads (0 means the system picks "
90 "an appropriate number)."))
91 args = parser.parse_args()
92
93 tf.logging.set_verbosity(getattr(tf.logging, args.log_level))
94
95 # Setup cluster if defined.
96 if args.chief_host:
97 os.environ["TF_CONFIG"] = json.dumps({
98 "cluster": {
99 "chief": [args.chief_host],
100 "worker": args.worker_hosts.split(","),
101 "ps": args.ps_hosts.split(",")
102 },
103 "task": {
104 "type": args.task_type,
105 "index": args.task_index
106 }
107 })
108
109 # Load and merge run configurations.
110 config = load_config(args.config)
111 if args.run_dir:
112 config["model_dir"] = os.path.join(args.run_dir, config["model_dir"])
113 if args.data_dir:
114 config["data"] = _prefix_paths(args.data_dir, config["data"])
115
116 if not os.path.isdir(config["model_dir"]):
117 tf.logging.info("Creating model directory %s", config["model_dir"])
118 os.makedirs(config["model_dir"])
119
120 model = load_model(config["model_dir"], model_file=args.model, model_name=args.model_type)
121 session_config = tf.ConfigProto(
122 intra_op_parallelism_threads=args.intra_op_parallelism_threads,
123 inter_op_parallelism_threads=args.inter_op_parallelism_threads)
124 runner = Runner(
125 model,
126 config,
127 seed=args.seed,
128 num_devices=args.num_gpus,
129 gpu_allow_growth=args.gpu_allow_growth,
130 session_config=session_config)
131
132 if args.run == "train_and_eval":
133 runner.train_and_evaluate()
134 elif args.run == "train":
135 runner.train()
136 elif args.run == "eval":
137 runner.evaluate(checkpoint_path=args.checkpoint_path)
138 elif args.run == "infer":
139 if not args.features_file:
140 parser.error("--features_file is required for inference.")
141 elif len(args.features_file) == 1:
142 args.features_file = args.features_file[0]
143 runner.infer(
144 args.features_file,
145 predictions_file=args.predictions_file,
146 checkpoint_path=args.checkpoint_path,
147 log_time=args.log_prediction_time)
148 elif args.run == "export":
149 runner.export(checkpoint_path=args.checkpoint_path)
150 elif args.run == "score":
151 if not args.features_file:
152 parser.error("--features_file is required for scoring.")
153 if not args.predictions_file:
154 parser.error("--predictions_file is required for scoring.")
155 runner.score(
156 args.features_file,
157 args.predictions_file,
158 checkpoint_path=args.checkpoint_path)
159
160
161 if __name__ == "__main__":
162 main()
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opennmt/bin/main.py b/opennmt/bin/main.py
--- a/opennmt/bin/main.py
+++ b/opennmt/bin/main.py
@@ -27,6 +27,10 @@
for key, path in six.iteritems(paths):
paths[key] = _prefix_paths(prefix, path)
return paths
+ elif isinstance(paths, list):
+ for i, path in enumerate(paths):
+ paths[i] = _prefix_paths(prefix, path)
+ return paths
else:
path = paths
new_path = os.path.join(prefix, path)
|
{"golden_diff": "diff --git a/opennmt/bin/main.py b/opennmt/bin/main.py\n--- a/opennmt/bin/main.py\n+++ b/opennmt/bin/main.py\n@@ -27,6 +27,10 @@\n for key, path in six.iteritems(paths):\n paths[key] = _prefix_paths(prefix, path)\n return paths\n+ elif isinstance(paths, list):\n+ for i, path in enumerate(paths):\n+ paths[i] = _prefix_paths(prefix, path)\n+ return paths\n else:\n path = paths\n new_path = os.path.join(prefix, path)\n", "issue": "Crash loading parallel inputs with --data_dir\nI found the next issue if I follow the tutorial and try to do\r\n\r\ndata:\r\n train_features_file:\r\n - train_source_1.records\r\n - train_source_2.txt\r\n - train_source_3.txt\r\n\r\nin main.py at the method _prefix_paths\r\nnew_path = os.path.join(prefix, path) \r\nwill crash because paths is a list and join can't be done on a list.\r\n\r\nThe fix should be just check the instance type at paths and iterate\n", "before_files": [{"content": "\"\"\"Main script.\"\"\"\n\nimport argparse\nimport json\nimport os\nimport six\n\nimport tensorflow as tf\n\nfrom opennmt.models import catalog\nfrom opennmt.runner import Runner\nfrom opennmt.config import load_model, load_config\nfrom opennmt.utils.misc import classes_in_module\n\n\ndef _prefix_paths(prefix, paths):\n \"\"\"Recursively prefix paths.\n\n Args:\n prefix: The prefix to apply.\n data: A dict of relative paths.\n\n Returns:\n The updated dict.\n \"\"\"\n if isinstance(paths, dict):\n for key, path in six.iteritems(paths):\n paths[key] = _prefix_paths(prefix, path)\n return paths\n else:\n path = paths\n new_path = os.path.join(prefix, path)\n if os.path.isfile(new_path):\n return new_path\n else:\n return path\n\ndef main():\n parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument(\"run\",\n choices=[\"train_and_eval\", \"train\", \"eval\", \"infer\", \"export\", \"score\"],\n help=\"Run type.\")\n parser.add_argument(\"--config\", required=True, nargs=\"+\",\n help=\"List of configuration files.\")\n parser.add_argument(\"--model_type\", default=\"\", choices=list(classes_in_module(catalog)),\n help=\"Model type from the catalog.\")\n parser.add_argument(\"--model\", default=\"\",\n help=\"Custom model configuration file.\")\n parser.add_argument(\"--run_dir\", default=\"\",\n help=\"If set, model_dir will be created relative to this location.\")\n parser.add_argument(\"--data_dir\", default=\"\",\n help=\"If set, data files are expected to be relative to this location.\")\n parser.add_argument(\"--features_file\", default=[], nargs=\"+\",\n help=\"Run inference on this file.\")\n parser.add_argument(\"--predictions_file\", default=\"\",\n help=(\"File used to save predictions. 
If not set, predictions are printed \"\n \"on the standard output.\"))\n parser.add_argument(\"--log_prediction_time\", default=False, action=\"store_true\",\n help=\"Logs some prediction time metrics.\")\n parser.add_argument(\"--checkpoint_path\", default=None,\n help=(\"Checkpoint or directory to use for inference or export \"\n \"(when a directory is set, the latest checkpoint is used).\"))\n parser.add_argument(\"--num_gpus\", type=int, default=1,\n help=\"Number of GPUs to use for in-graph replication.\")\n parser.add_argument(\"--chief_host\", default=\"\",\n help=\"hostname:port of the chief worker (for distributed training).\")\n parser.add_argument(\"--worker_hosts\", default=\"\",\n help=(\"Comma-separated list of hostname:port of workers \"\n \"(for distributed training).\"))\n parser.add_argument(\"--ps_hosts\", default=\"\",\n help=(\"Comma-separated list of hostname:port of parameter servers \"\n \"(for distributed training).\"))\n parser.add_argument(\"--task_type\", default=\"chief\",\n choices=[\"chief\", \"worker\", \"ps\", \"evaluator\"],\n help=\"Type of the task to run (for distributed training).\")\n parser.add_argument(\"--task_index\", type=int, default=0,\n help=\"ID of the task (for distributed training).\")\n parser.add_argument(\"--log_level\", default=\"INFO\",\n choices=[\"DEBUG\", \"ERROR\", \"FATAL\", \"INFO\", \"WARN\"],\n help=\"Logs verbosity.\")\n parser.add_argument(\"--seed\", type=int, default=None,\n help=\"Random seed.\")\n parser.add_argument(\"--gpu_allow_growth\", default=False, action=\"store_true\",\n help=\"Allocate GPU memory dynamically.\")\n parser.add_argument(\"--intra_op_parallelism_threads\", type=int, default=0,\n help=(\"Number of intra op threads (0 means the system picks \"\n \"an appropriate number).\"))\n parser.add_argument(\"--inter_op_parallelism_threads\", type=int, default=0,\n help=(\"Number of inter op threads (0 means the system picks \"\n \"an appropriate number).\"))\n args = parser.parse_args()\n\n tf.logging.set_verbosity(getattr(tf.logging, args.log_level))\n\n # Setup cluster if defined.\n if args.chief_host:\n os.environ[\"TF_CONFIG\"] = json.dumps({\n \"cluster\": {\n \"chief\": [args.chief_host],\n \"worker\": args.worker_hosts.split(\",\"),\n \"ps\": args.ps_hosts.split(\",\")\n },\n \"task\": {\n \"type\": args.task_type,\n \"index\": args.task_index\n }\n })\n\n # Load and merge run configurations.\n config = load_config(args.config)\n if args.run_dir:\n config[\"model_dir\"] = os.path.join(args.run_dir, config[\"model_dir\"])\n if args.data_dir:\n config[\"data\"] = _prefix_paths(args.data_dir, config[\"data\"])\n\n if not os.path.isdir(config[\"model_dir\"]):\n tf.logging.info(\"Creating model directory %s\", config[\"model_dir\"])\n os.makedirs(config[\"model_dir\"])\n\n model = load_model(config[\"model_dir\"], model_file=args.model, model_name=args.model_type)\n session_config = tf.ConfigProto(\n intra_op_parallelism_threads=args.intra_op_parallelism_threads,\n inter_op_parallelism_threads=args.inter_op_parallelism_threads)\n runner = Runner(\n model,\n config,\n seed=args.seed,\n num_devices=args.num_gpus,\n gpu_allow_growth=args.gpu_allow_growth,\n session_config=session_config)\n\n if args.run == \"train_and_eval\":\n runner.train_and_evaluate()\n elif args.run == \"train\":\n runner.train()\n elif args.run == \"eval\":\n runner.evaluate(checkpoint_path=args.checkpoint_path)\n elif args.run == \"infer\":\n if not args.features_file:\n parser.error(\"--features_file is required for inference.\")\n elif 
len(args.features_file) == 1:\n args.features_file = args.features_file[0]\n runner.infer(\n args.features_file,\n predictions_file=args.predictions_file,\n checkpoint_path=args.checkpoint_path,\n log_time=args.log_prediction_time)\n elif args.run == \"export\":\n runner.export(checkpoint_path=args.checkpoint_path)\n elif args.run == \"score\":\n if not args.features_file:\n parser.error(\"--features_file is required for scoring.\")\n if not args.predictions_file:\n parser.error(\"--predictions_file is required for scoring.\")\n runner.score(\n args.features_file,\n args.predictions_file,\n checkpoint_path=args.checkpoint_path)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "opennmt/bin/main.py"}], "after_files": [{"content": "\"\"\"Main script.\"\"\"\n\nimport argparse\nimport json\nimport os\nimport six\n\nimport tensorflow as tf\n\nfrom opennmt.models import catalog\nfrom opennmt.runner import Runner\nfrom opennmt.config import load_model, load_config\nfrom opennmt.utils.misc import classes_in_module\n\n\ndef _prefix_paths(prefix, paths):\n \"\"\"Recursively prefix paths.\n\n Args:\n prefix: The prefix to apply.\n data: A dict of relative paths.\n\n Returns:\n The updated dict.\n \"\"\"\n if isinstance(paths, dict):\n for key, path in six.iteritems(paths):\n paths[key] = _prefix_paths(prefix, path)\n return paths\n elif isinstance(paths, list):\n for i, path in enumerate(paths):\n paths[i] = _prefix_paths(prefix, path)\n return paths\n else:\n path = paths\n new_path = os.path.join(prefix, path)\n if os.path.isfile(new_path):\n return new_path\n else:\n return path\n\ndef main():\n parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument(\"run\",\n choices=[\"train_and_eval\", \"train\", \"eval\", \"infer\", \"export\", \"score\"],\n help=\"Run type.\")\n parser.add_argument(\"--config\", required=True, nargs=\"+\",\n help=\"List of configuration files.\")\n parser.add_argument(\"--model_type\", default=\"\", choices=list(classes_in_module(catalog)),\n help=\"Model type from the catalog.\")\n parser.add_argument(\"--model\", default=\"\",\n help=\"Custom model configuration file.\")\n parser.add_argument(\"--run_dir\", default=\"\",\n help=\"If set, model_dir will be created relative to this location.\")\n parser.add_argument(\"--data_dir\", default=\"\",\n help=\"If set, data files are expected to be relative to this location.\")\n parser.add_argument(\"--features_file\", default=[], nargs=\"+\",\n help=\"Run inference on this file.\")\n parser.add_argument(\"--predictions_file\", default=\"\",\n help=(\"File used to save predictions. 
If not set, predictions are printed \"\n \"on the standard output.\"))\n parser.add_argument(\"--log_prediction_time\", default=False, action=\"store_true\",\n help=\"Logs some prediction time metrics.\")\n parser.add_argument(\"--checkpoint_path\", default=None,\n help=(\"Checkpoint or directory to use for inference or export \"\n \"(when a directory is set, the latest checkpoint is used).\"))\n parser.add_argument(\"--num_gpus\", type=int, default=1,\n help=\"Number of GPUs to use for in-graph replication.\")\n parser.add_argument(\"--chief_host\", default=\"\",\n help=\"hostname:port of the chief worker (for distributed training).\")\n parser.add_argument(\"--worker_hosts\", default=\"\",\n help=(\"Comma-separated list of hostname:port of workers \"\n \"(for distributed training).\"))\n parser.add_argument(\"--ps_hosts\", default=\"\",\n help=(\"Comma-separated list of hostname:port of parameter servers \"\n \"(for distributed training).\"))\n parser.add_argument(\"--task_type\", default=\"chief\",\n choices=[\"chief\", \"worker\", \"ps\", \"evaluator\"],\n help=\"Type of the task to run (for distributed training).\")\n parser.add_argument(\"--task_index\", type=int, default=0,\n help=\"ID of the task (for distributed training).\")\n parser.add_argument(\"--log_level\", default=\"INFO\",\n choices=[\"DEBUG\", \"ERROR\", \"FATAL\", \"INFO\", \"WARN\"],\n help=\"Logs verbosity.\")\n parser.add_argument(\"--seed\", type=int, default=None,\n help=\"Random seed.\")\n parser.add_argument(\"--gpu_allow_growth\", default=False, action=\"store_true\",\n help=\"Allocate GPU memory dynamically.\")\n parser.add_argument(\"--intra_op_parallelism_threads\", type=int, default=0,\n help=(\"Number of intra op threads (0 means the system picks \"\n \"an appropriate number).\"))\n parser.add_argument(\"--inter_op_parallelism_threads\", type=int, default=0,\n help=(\"Number of inter op threads (0 means the system picks \"\n \"an appropriate number).\"))\n args = parser.parse_args()\n\n tf.logging.set_verbosity(getattr(tf.logging, args.log_level))\n\n # Setup cluster if defined.\n if args.chief_host:\n os.environ[\"TF_CONFIG\"] = json.dumps({\n \"cluster\": {\n \"chief\": [args.chief_host],\n \"worker\": args.worker_hosts.split(\",\"),\n \"ps\": args.ps_hosts.split(\",\")\n },\n \"task\": {\n \"type\": args.task_type,\n \"index\": args.task_index\n }\n })\n\n # Load and merge run configurations.\n config = load_config(args.config)\n if args.run_dir:\n config[\"model_dir\"] = os.path.join(args.run_dir, config[\"model_dir\"])\n if args.data_dir:\n config[\"data\"] = _prefix_paths(args.data_dir, config[\"data\"])\n\n if not os.path.isdir(config[\"model_dir\"]):\n tf.logging.info(\"Creating model directory %s\", config[\"model_dir\"])\n os.makedirs(config[\"model_dir\"])\n\n model = load_model(config[\"model_dir\"], model_file=args.model, model_name=args.model_type)\n session_config = tf.ConfigProto(\n intra_op_parallelism_threads=args.intra_op_parallelism_threads,\n inter_op_parallelism_threads=args.inter_op_parallelism_threads)\n runner = Runner(\n model,\n config,\n seed=args.seed,\n num_devices=args.num_gpus,\n gpu_allow_growth=args.gpu_allow_growth,\n session_config=session_config)\n\n if args.run == \"train_and_eval\":\n runner.train_and_evaluate()\n elif args.run == \"train\":\n runner.train()\n elif args.run == \"eval\":\n runner.evaluate(checkpoint_path=args.checkpoint_path)\n elif args.run == \"infer\":\n if not args.features_file:\n parser.error(\"--features_file is required for inference.\")\n elif 
len(args.features_file) == 1:\n args.features_file = args.features_file[0]\n runner.infer(\n args.features_file,\n predictions_file=args.predictions_file,\n checkpoint_path=args.checkpoint_path,\n log_time=args.log_prediction_time)\n elif args.run == \"export\":\n runner.export(checkpoint_path=args.checkpoint_path)\n elif args.run == \"score\":\n if not args.features_file:\n parser.error(\"--features_file is required for scoring.\")\n if not args.predictions_file:\n parser.error(\"--predictions_file is required for scoring.\")\n runner.score(\n args.features_file,\n args.predictions_file,\n checkpoint_path=args.checkpoint_path)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "opennmt/bin/main.py"}]}
| 2,122 | 130 |
gh_patches_debug_39227
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-3930
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use astropy instead of pyfits to read FITS images
pyfits is not currently installable on Python 3.5, and it looks like AstroPy is more actively maintained.
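For illustration, a minimal, hedged sketch of reading a FITS file with astropy (the file name is just a placeholder; this is not the plugin code itself, which would swap its pyfits import for `astropy.io.fits`):

```python
# Read a FITS image with astropy's fits module instead of the standalone pyfits package.
from astropy.io import fits

with fits.open("test.fits") as hdulist:  # placeholder path
    img_array = hdulist[0].data          # data array of the primary HDU
```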
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/io/_plugins/fits_plugin.py`
Content:
```
1 __all__ = ['imread', 'imread_collection']
2
3 import skimage.io as io
4
5 try:
6 from astropy.io import fits as pyfits
7 except ImportError:
8 try:
9 import pyfits
10 except ImportError:
11 raise ImportError(
12 "PyFITS could not be found. Please refer to\n"
13 "http://www.stsci.edu/resources/software_hardware/pyfits\n"
14 "for further instructions.")
15
16
17 def imread(fname, dtype=None):
18 """Load an image from a FITS file.
19
20 Parameters
21 ----------
22 fname : string
23 Image file name, e.g. ``test.fits``.
24 dtype : dtype, optional
25 For FITS, this argument is ignored because Stefan is planning on
26 removing the dtype argument from imread anyway.
27
28 Returns
29 -------
30 img_array : ndarray
31 Unlike plugins such as PIL, where different color bands/channels are
32 stored in the third dimension, FITS images are greyscale-only and can
33 be N-dimensional, so an array of the native FITS dimensionality is
34 returned, without color channels.
35
36 Currently if no image is found in the file, None will be returned
37
38 Notes
39 -----
40
41 Currently FITS ``imread()`` always returns the first image extension when
42 given a Multi-Extension FITS file; use ``imread_collection()`` (which does
43 lazy loading) to get all the extensions at once.
44
45 """
46
47 hdulist = pyfits.open(fname)
48
49 # Iterate over FITS image extensions, ignoring any other extension types
50 # such as binary tables, and get the first image data array:
51 img_array = None
52 for hdu in hdulist:
53 if isinstance(hdu, pyfits.ImageHDU) or \
54 isinstance(hdu, pyfits.PrimaryHDU):
55 if hdu.data is not None:
56 img_array = hdu.data
57 break
58 hdulist.close()
59
60 return img_array
61
62
63 def imread_collection(load_pattern, conserve_memory=True):
64 """Load a collection of images from one or more FITS files
65
66 Parameters
67 ----------
68 load_pattern : str or list
69 List of extensions to load. Filename globbing is currently
70 unsupported.
71 converve_memory : bool
72 If True, never keep more than one in memory at a specific
73 time. Otherwise, images will be cached once they are loaded.
74
75 Returns
76 -------
77
78 ic : ImageCollection
79 Collection of images.
80
81 """
82
83 intype = type(load_pattern)
84 if intype is not list and intype is not str:
85 raise TypeError("Input must be a filename or list of filenames")
86
87 # Ensure we have a list, otherwise we'll end up iterating over the string:
88 if intype is not list:
89 load_pattern = [load_pattern]
90
91 # Generate a list of filename/extension pairs by opening the list of
92 # files and finding the image extensions in each one:
93 ext_list = []
94 for filename in load_pattern:
95 hdulist = pyfits.open(filename)
96 for n, hdu in zip(range(len(hdulist)), hdulist):
97 if isinstance(hdu, pyfits.ImageHDU) or \
98 isinstance(hdu, pyfits.PrimaryHDU):
99 # Ignore (primary) header units with no data (use '.size'
100 # rather than '.data' to avoid actually loading the image):
101 try:
102 data_size = hdu.size()
103 except TypeError: # (size changed to int in PyFITS 3.1)
104 data_size = hdu.size
105 if data_size > 0:
106 ext_list.append((filename, n))
107 hdulist.close()
108
109 return io.ImageCollection(ext_list, load_func=FITSFactory,
110 conserve_memory=conserve_memory)
111
112
113 def FITSFactory(image_ext):
114 """Load an image extension from a FITS file and return a NumPy array
115
116 Parameters
117 ----------
118
119 image_ext : tuple
120 FITS extension to load, in the format ``(filename, ext_num)``.
121 The FITS ``(extname, extver)`` format is unsupported, since this
122 function is not called directly by the user and
123 ``imread_collection()`` does the work of figuring out which
124 extensions need loading.
125
126 """
127
128 # Expect a length-2 tuple with a filename as the first element:
129 if not isinstance(image_ext, tuple):
130 raise TypeError("Expected a tuple")
131
132 if len(image_ext) != 2:
133 raise ValueError("Expected a tuple of length 2")
134
135 filename = image_ext[0]
136 extnum = image_ext[1]
137
138 if type(filename) is not str or type(extnum) is not int:
139 raise ValueError("Expected a (filename, extension) tuple")
140
141 hdulist = pyfits.open(filename)
142
143 data = hdulist[extnum].data
144
145 hdulist.close()
146
147 if data is None:
148 raise RuntimeError(
149 "Extension %d of %s has no data" % (extnum, filename))
150
151 return data
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/skimage/io/_plugins/fits_plugin.py b/skimage/io/_plugins/fits_plugin.py
--- a/skimage/io/_plugins/fits_plugin.py
+++ b/skimage/io/_plugins/fits_plugin.py
@@ -3,15 +3,12 @@
import skimage.io as io
try:
- from astropy.io import fits as pyfits
+ from astropy.io import fits
except ImportError:
- try:
- import pyfits
- except ImportError:
- raise ImportError(
- "PyFITS could not be found. Please refer to\n"
- "http://www.stsci.edu/resources/software_hardware/pyfits\n"
- "for further instructions.")
+ raise ImportError(
+ "Astropy could not be found. It is needed to read FITS files.\n"
+ "Please refer to http://www.astropy.org for installation\n"
+ "instructions.")
def imread(fname, dtype=None):
@@ -44,14 +41,14 @@
"""
- hdulist = pyfits.open(fname)
+ hdulist = fits.open(fname)
# Iterate over FITS image extensions, ignoring any other extension types
# such as binary tables, and get the first image data array:
img_array = None
for hdu in hdulist:
- if isinstance(hdu, pyfits.ImageHDU) or \
- isinstance(hdu, pyfits.PrimaryHDU):
+ if isinstance(hdu, fits.ImageHDU) or \
+ isinstance(hdu, fits.PrimaryHDU):
if hdu.data is not None:
img_array = hdu.data
break
@@ -92,16 +89,16 @@
# files and finding the image extensions in each one:
ext_list = []
for filename in load_pattern:
- hdulist = pyfits.open(filename)
+ hdulist = fits.open(filename)
for n, hdu in zip(range(len(hdulist)), hdulist):
- if isinstance(hdu, pyfits.ImageHDU) or \
- isinstance(hdu, pyfits.PrimaryHDU):
+ if isinstance(hdu, fits.ImageHDU) or \
+ isinstance(hdu, fits.PrimaryHDU):
# Ignore (primary) header units with no data (use '.size'
# rather than '.data' to avoid actually loading the image):
try:
+ data_size = hdu.size # size is int in Astropy 3.1.2
+ except TypeError:
data_size = hdu.size()
- except TypeError: # (size changed to int in PyFITS 3.1)
- data_size = hdu.size
if data_size > 0:
ext_list.append((filename, n))
hdulist.close()
@@ -138,7 +135,7 @@
if type(filename) is not str or type(extnum) is not int:
raise ValueError("Expected a (filename, extension) tuple")
- hdulist = pyfits.open(filename)
+ hdulist = fits.open(filename)
data = hdulist[extnum].data
|
{"golden_diff": "diff --git a/skimage/io/_plugins/fits_plugin.py b/skimage/io/_plugins/fits_plugin.py\n--- a/skimage/io/_plugins/fits_plugin.py\n+++ b/skimage/io/_plugins/fits_plugin.py\n@@ -3,15 +3,12 @@\n import skimage.io as io\n \n try:\n- from astropy.io import fits as pyfits\n+ from astropy.io import fits\n except ImportError:\n- try:\n- import pyfits\n- except ImportError:\n- raise ImportError(\n- \"PyFITS could not be found. Please refer to\\n\"\n- \"http://www.stsci.edu/resources/software_hardware/pyfits\\n\"\n- \"for further instructions.\")\n+ raise ImportError(\n+ \"Astropy could not be found. It is needed to read FITS files.\\n\"\n+ \"Please refer to http://www.astropy.org for installation\\n\"\n+ \"instructions.\")\n \n \n def imread(fname, dtype=None):\n@@ -44,14 +41,14 @@\n \n \"\"\"\n \n- hdulist = pyfits.open(fname)\n+ hdulist = fits.open(fname)\n \n # Iterate over FITS image extensions, ignoring any other extension types\n # such as binary tables, and get the first image data array:\n img_array = None\n for hdu in hdulist:\n- if isinstance(hdu, pyfits.ImageHDU) or \\\n- isinstance(hdu, pyfits.PrimaryHDU):\n+ if isinstance(hdu, fits.ImageHDU) or \\\n+ isinstance(hdu, fits.PrimaryHDU):\n if hdu.data is not None:\n img_array = hdu.data\n break\n@@ -92,16 +89,16 @@\n # files and finding the image extensions in each one:\n ext_list = []\n for filename in load_pattern:\n- hdulist = pyfits.open(filename)\n+ hdulist = fits.open(filename)\n for n, hdu in zip(range(len(hdulist)), hdulist):\n- if isinstance(hdu, pyfits.ImageHDU) or \\\n- isinstance(hdu, pyfits.PrimaryHDU):\n+ if isinstance(hdu, fits.ImageHDU) or \\\n+ isinstance(hdu, fits.PrimaryHDU):\n # Ignore (primary) header units with no data (use '.size'\n # rather than '.data' to avoid actually loading the image):\n try:\n+ data_size = hdu.size # size is int in Astropy 3.1.2\n+ except TypeError:\n data_size = hdu.size()\n- except TypeError: # (size changed to int in PyFITS 3.1)\n- data_size = hdu.size\n if data_size > 0:\n ext_list.append((filename, n))\n hdulist.close()\n@@ -138,7 +135,7 @@\n if type(filename) is not str or type(extnum) is not int:\n raise ValueError(\"Expected a (filename, extension) tuple\")\n \n- hdulist = pyfits.open(filename)\n+ hdulist = fits.open(filename)\n \n data = hdulist[extnum].data\n", "issue": "Use astropy instead of pyfits to read FITS images\npyfits is not currently installable on 3.5, and it looks like AstroPy is more actively maintained.\n", "before_files": [{"content": "__all__ = ['imread', 'imread_collection']\n\nimport skimage.io as io\n\ntry:\n from astropy.io import fits as pyfits\nexcept ImportError:\n try:\n import pyfits\n except ImportError:\n raise ImportError(\n \"PyFITS could not be found. Please refer to\\n\"\n \"http://www.stsci.edu/resources/software_hardware/pyfits\\n\"\n \"for further instructions.\")\n\n\ndef imread(fname, dtype=None):\n \"\"\"Load an image from a FITS file.\n\n Parameters\n ----------\n fname : string\n Image file name, e.g. 
``test.fits``.\n dtype : dtype, optional\n For FITS, this argument is ignored because Stefan is planning on\n removing the dtype argument from imread anyway.\n\n Returns\n -------\n img_array : ndarray\n Unlike plugins such as PIL, where different color bands/channels are\n stored in the third dimension, FITS images are greyscale-only and can\n be N-dimensional, so an array of the native FITS dimensionality is\n returned, without color channels.\n\n Currently if no image is found in the file, None will be returned\n\n Notes\n -----\n\n Currently FITS ``imread()`` always returns the first image extension when\n given a Multi-Extension FITS file; use ``imread_collection()`` (which does\n lazy loading) to get all the extensions at once.\n\n \"\"\"\n\n hdulist = pyfits.open(fname)\n\n # Iterate over FITS image extensions, ignoring any other extension types\n # such as binary tables, and get the first image data array:\n img_array = None\n for hdu in hdulist:\n if isinstance(hdu, pyfits.ImageHDU) or \\\n isinstance(hdu, pyfits.PrimaryHDU):\n if hdu.data is not None:\n img_array = hdu.data\n break\n hdulist.close()\n\n return img_array\n\n\ndef imread_collection(load_pattern, conserve_memory=True):\n \"\"\"Load a collection of images from one or more FITS files\n\n Parameters\n ----------\n load_pattern : str or list\n List of extensions to load. Filename globbing is currently\n unsupported.\n converve_memory : bool\n If True, never keep more than one in memory at a specific\n time. Otherwise, images will be cached once they are loaded.\n\n Returns\n -------\n\n ic : ImageCollection\n Collection of images.\n\n \"\"\"\n\n intype = type(load_pattern)\n if intype is not list and intype is not str:\n raise TypeError(\"Input must be a filename or list of filenames\")\n\n # Ensure we have a list, otherwise we'll end up iterating over the string:\n if intype is not list:\n load_pattern = [load_pattern]\n\n # Generate a list of filename/extension pairs by opening the list of\n # files and finding the image extensions in each one:\n ext_list = []\n for filename in load_pattern:\n hdulist = pyfits.open(filename)\n for n, hdu in zip(range(len(hdulist)), hdulist):\n if isinstance(hdu, pyfits.ImageHDU) or \\\n isinstance(hdu, pyfits.PrimaryHDU):\n # Ignore (primary) header units with no data (use '.size'\n # rather than '.data' to avoid actually loading the image):\n try:\n data_size = hdu.size()\n except TypeError: # (size changed to int in PyFITS 3.1)\n data_size = hdu.size\n if data_size > 0:\n ext_list.append((filename, n))\n hdulist.close()\n\n return io.ImageCollection(ext_list, load_func=FITSFactory,\n conserve_memory=conserve_memory)\n\n\ndef FITSFactory(image_ext):\n \"\"\"Load an image extension from a FITS file and return a NumPy array\n\n Parameters\n ----------\n\n image_ext : tuple\n FITS extension to load, in the format ``(filename, ext_num)``.\n The FITS ``(extname, extver)`` format is unsupported, since this\n function is not called directly by the user and\n ``imread_collection()`` does the work of figuring out which\n extensions need loading.\n\n \"\"\"\n\n # Expect a length-2 tuple with a filename as the first element:\n if not isinstance(image_ext, tuple):\n raise TypeError(\"Expected a tuple\")\n\n if len(image_ext) != 2:\n raise ValueError(\"Expected a tuple of length 2\")\n\n filename = image_ext[0]\n extnum = image_ext[1]\n\n if type(filename) is not str or type(extnum) is not int:\n raise ValueError(\"Expected a (filename, extension) tuple\")\n\n hdulist = pyfits.open(filename)\n\n 
data = hdulist[extnum].data\n\n hdulist.close()\n\n if data is None:\n raise RuntimeError(\n \"Extension %d of %s has no data\" % (extnum, filename))\n\n return data\n", "path": "skimage/io/_plugins/fits_plugin.py"}], "after_files": [{"content": "__all__ = ['imread', 'imread_collection']\n\nimport skimage.io as io\n\ntry:\n from astropy.io import fits\nexcept ImportError:\n raise ImportError(\n \"Astropy could not be found. It is needed to read FITS files.\\n\"\n \"Please refer to http://www.astropy.org for installation\\n\"\n \"instructions.\")\n\n\ndef imread(fname, dtype=None):\n \"\"\"Load an image from a FITS file.\n\n Parameters\n ----------\n fname : string\n Image file name, e.g. ``test.fits``.\n dtype : dtype, optional\n For FITS, this argument is ignored because Stefan is planning on\n removing the dtype argument from imread anyway.\n\n Returns\n -------\n img_array : ndarray\n Unlike plugins such as PIL, where different color bands/channels are\n stored in the third dimension, FITS images are greyscale-only and can\n be N-dimensional, so an array of the native FITS dimensionality is\n returned, without color channels.\n\n Currently if no image is found in the file, None will be returned\n\n Notes\n -----\n\n Currently FITS ``imread()`` always returns the first image extension when\n given a Multi-Extension FITS file; use ``imread_collection()`` (which does\n lazy loading) to get all the extensions at once.\n\n \"\"\"\n\n hdulist = fits.open(fname)\n\n # Iterate over FITS image extensions, ignoring any other extension types\n # such as binary tables, and get the first image data array:\n img_array = None\n for hdu in hdulist:\n if isinstance(hdu, fits.ImageHDU) or \\\n isinstance(hdu, fits.PrimaryHDU):\n if hdu.data is not None:\n img_array = hdu.data\n break\n hdulist.close()\n\n return img_array\n\n\ndef imread_collection(load_pattern, conserve_memory=True):\n \"\"\"Load a collection of images from one or more FITS files\n\n Parameters\n ----------\n load_pattern : str or list\n List of extensions to load. Filename globbing is currently\n unsupported.\n converve_memory : bool\n If True, never keep more than one in memory at a specific\n time. 
Otherwise, images will be cached once they are loaded.\n\n Returns\n -------\n\n ic : ImageCollection\n Collection of images.\n\n \"\"\"\n\n intype = type(load_pattern)\n if intype is not list and intype is not str:\n raise TypeError(\"Input must be a filename or list of filenames\")\n\n # Ensure we have a list, otherwise we'll end up iterating over the string:\n if intype is not list:\n load_pattern = [load_pattern]\n\n # Generate a list of filename/extension pairs by opening the list of\n # files and finding the image extensions in each one:\n ext_list = []\n for filename in load_pattern:\n hdulist = fits.open(filename)\n for n, hdu in zip(range(len(hdulist)), hdulist):\n if isinstance(hdu, fits.ImageHDU) or \\\n isinstance(hdu, fits.PrimaryHDU):\n # Ignore (primary) header units with no data (use '.size'\n # rather than '.data' to avoid actually loading the image):\n try:\n data_size = hdu.size # size is int in Astropy 3.1.2\n except TypeError:\n data_size = hdu.size()\n if data_size > 0:\n ext_list.append((filename, n))\n hdulist.close()\n\n return io.ImageCollection(ext_list, load_func=FITSFactory,\n conserve_memory=conserve_memory)\n\n\ndef FITSFactory(image_ext):\n \"\"\"Load an image extension from a FITS file and return a NumPy array\n\n Parameters\n ----------\n\n image_ext : tuple\n FITS extension to load, in the format ``(filename, ext_num)``.\n The FITS ``(extname, extver)`` format is unsupported, since this\n function is not called directly by the user and\n ``imread_collection()`` does the work of figuring out which\n extensions need loading.\n\n \"\"\"\n\n # Expect a length-2 tuple with a filename as the first element:\n if not isinstance(image_ext, tuple):\n raise TypeError(\"Expected a tuple\")\n\n if len(image_ext) != 2:\n raise ValueError(\"Expected a tuple of length 2\")\n\n filename = image_ext[0]\n extnum = image_ext[1]\n\n if type(filename) is not str or type(extnum) is not int:\n raise ValueError(\"Expected a (filename, extension) tuple\")\n\n hdulist = fits.open(filename)\n\n data = hdulist[extnum].data\n\n hdulist.close()\n\n if data is None:\n raise RuntimeError(\n \"Extension %d of %s has no data\" % (extnum, filename))\n\n return data\n", "path": "skimage/io/_plugins/fits_plugin.py"}]}
| 1,741 | 688 |
gh_patches_debug_22343
|
rasdani/github-patches
|
git_diff
|
huggingface__diffusers-7821
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Multi-controlnet formatting issue
### Describe the bug
Hi.
There is an inconsistency between `from_pretrained` and `save_pretrained` within the Multicontrolnet class.
The `from_pretrained` function expects a directory structure like this: controlnet, controlnet_1, controlnet_2,
whereas `save_pretrained` writes one like this: controlnet, controlnet_1, controlnet_1_2.
When loading a saved model that contains 3 controlnets, the last controlnet will therefore not be loaded (the same problem occurs whenever there are more than 2).
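To make the mismatch concrete, here is a minimal sketch of just the two naming schemes (plain Python, no diffusers required; it mirrors the loops in `save_pretrained` and `from_pretrained` shown further below):

```python
# Names written by the current save_pretrained loop: the suffix keeps accumulating
# because each new suffix is appended to the previously suffixed path.
saved, path = [], "controlnet"
for idx in range(3):
    saved.append(path)
    path = path + f"_{idx + 1}"

# Names probed by from_pretrained: the suffix is rebuilt from the base path each time.
loaded = ["controlnet"] + [f"controlnet_{i}" for i in range(1, 3)]

print(saved)   # ['controlnet', 'controlnet_1', 'controlnet_1_2']
print(loaded)  # ['controlnet', 'controlnet_1', 'controlnet_2']
```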
### Reproduction
I don't think there is any need to reproduce the code, as the issue is pretty clear.
For compatibility, how about changing the `save_pretrained` function in Multi-ControlNet to look like the code below?
```
def save_pretrained(
self,
save_directory: Union[str, os.PathLike],
is_main_process: bool = True,
save_function: Callable = None,
safe_serialization: bool = True,
variant: Optional[str] = None,
):
"""
Save a model and its configuration file to a directory, so that it can be re-loaded using the
`[`~pipelines.controlnet.MultiControlNetModel.from_pretrained`]` class method.
Arguments:
save_directory (`str` or `os.PathLike`):
Directory to which to save. Will be created if it doesn't exist.
is_main_process (`bool`, *optional*, defaults to `True`):
Whether the process calling this is the main process or not. Useful when in distributed training like
TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on
the main process to avoid race conditions.
save_function (`Callable`):
The function to use to save the state dictionary. Useful on distributed training like TPUs when one
need to replace `torch.save` by another method. Can be configured with the environment variable
`DIFFUSERS_SAVE_MODE`.
safe_serialization (`bool`, *optional*, defaults to `True`):
Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
variant (`str`, *optional*):
If specified, weights are saved in the format pytorch_model.<variant>.bin.
"""
model_path_to_save = save_directory
for idx, controlnet in enumerate(self.nets):
suffix = "" if idx == 0 else f"_{idx}"
controlnet.save_pretrained(
model_path_to_save + suffix,
is_main_process=is_main_process,
save_function=save_function,
safe_serialization=safe_serialization,
variant=variant,
)
```
### Logs
_No response_
### System Info
Diffusers 0.27.2
### Who can help?
@sayakpaul
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/diffusers/pipelines/controlnet/multicontrolnet.py`
Content:
```
1 import os
2 from typing import Any, Callable, Dict, List, Optional, Tuple, Union
3
4 import torch
5 from torch import nn
6
7 from ...models.controlnet import ControlNetModel, ControlNetOutput
8 from ...models.modeling_utils import ModelMixin
9 from ...utils import logging
10
11
12 logger = logging.get_logger(__name__)
13
14
15 class MultiControlNetModel(ModelMixin):
16 r"""
17 Multiple `ControlNetModel` wrapper class for Multi-ControlNet
18
19 This module is a wrapper for multiple instances of the `ControlNetModel`. The `forward()` API is designed to be
20 compatible with `ControlNetModel`.
21
22 Args:
23 controlnets (`List[ControlNetModel]`):
24 Provides additional conditioning to the unet during the denoising process. You must set multiple
25 `ControlNetModel` as a list.
26 """
27
28 def __init__(self, controlnets: Union[List[ControlNetModel], Tuple[ControlNetModel]]):
29 super().__init__()
30 self.nets = nn.ModuleList(controlnets)
31
32 def forward(
33 self,
34 sample: torch.Tensor,
35 timestep: Union[torch.Tensor, float, int],
36 encoder_hidden_states: torch.Tensor,
37 controlnet_cond: List[torch.tensor],
38 conditioning_scale: List[float],
39 class_labels: Optional[torch.Tensor] = None,
40 timestep_cond: Optional[torch.Tensor] = None,
41 attention_mask: Optional[torch.Tensor] = None,
42 added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None,
43 cross_attention_kwargs: Optional[Dict[str, Any]] = None,
44 guess_mode: bool = False,
45 return_dict: bool = True,
46 ) -> Union[ControlNetOutput, Tuple]:
47 for i, (image, scale, controlnet) in enumerate(zip(controlnet_cond, conditioning_scale, self.nets)):
48 down_samples, mid_sample = controlnet(
49 sample=sample,
50 timestep=timestep,
51 encoder_hidden_states=encoder_hidden_states,
52 controlnet_cond=image,
53 conditioning_scale=scale,
54 class_labels=class_labels,
55 timestep_cond=timestep_cond,
56 attention_mask=attention_mask,
57 added_cond_kwargs=added_cond_kwargs,
58 cross_attention_kwargs=cross_attention_kwargs,
59 guess_mode=guess_mode,
60 return_dict=return_dict,
61 )
62
63 # merge samples
64 if i == 0:
65 down_block_res_samples, mid_block_res_sample = down_samples, mid_sample
66 else:
67 down_block_res_samples = [
68 samples_prev + samples_curr
69 for samples_prev, samples_curr in zip(down_block_res_samples, down_samples)
70 ]
71 mid_block_res_sample += mid_sample
72
73 return down_block_res_samples, mid_block_res_sample
74
75 def save_pretrained(
76 self,
77 save_directory: Union[str, os.PathLike],
78 is_main_process: bool = True,
79 save_function: Callable = None,
80 safe_serialization: bool = True,
81 variant: Optional[str] = None,
82 ):
83 """
84 Save a model and its configuration file to a directory, so that it can be re-loaded using the
85 `[`~pipelines.controlnet.MultiControlNetModel.from_pretrained`]` class method.
86
87 Arguments:
88 save_directory (`str` or `os.PathLike`):
89 Directory to which to save. Will be created if it doesn't exist.
90 is_main_process (`bool`, *optional*, defaults to `True`):
91 Whether the process calling this is the main process or not. Useful when in distributed training like
92 TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on
93 the main process to avoid race conditions.
94 save_function (`Callable`):
95 The function to use to save the state dictionary. Useful on distributed training like TPUs when one
96 need to replace `torch.save` by another method. Can be configured with the environment variable
97 `DIFFUSERS_SAVE_MODE`.
98 safe_serialization (`bool`, *optional*, defaults to `True`):
99 Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
100 variant (`str`, *optional*):
101 If specified, weights are saved in the format pytorch_model.<variant>.bin.
102 """
103 idx = 0
104 model_path_to_save = save_directory
105 for controlnet in self.nets:
106 controlnet.save_pretrained(
107 model_path_to_save,
108 is_main_process=is_main_process,
109 save_function=save_function,
110 safe_serialization=safe_serialization,
111 variant=variant,
112 )
113
114 idx += 1
115 model_path_to_save = model_path_to_save + f"_{idx}"
116
117 @classmethod
118 def from_pretrained(cls, pretrained_model_path: Optional[Union[str, os.PathLike]], **kwargs):
119 r"""
120 Instantiate a pretrained MultiControlNet model from multiple pre-trained controlnet models.
121
122 The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train
123 the model, you should first set it back in training mode with `model.train()`.
124
125 The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come
126 pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning
127 task.
128
129 The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those
130 weights are discarded.
131
132 Parameters:
133 pretrained_model_path (`os.PathLike`):
134 A path to a *directory* containing model weights saved using
135 [`~diffusers.pipelines.controlnet.MultiControlNetModel.save_pretrained`], e.g.,
136 `./my_model_directory/controlnet`.
137 torch_dtype (`str` or `torch.dtype`, *optional*):
138 Override the default `torch.dtype` and load the model under this dtype. If `"auto"` is passed the dtype
139 will be automatically derived from the model's weights.
140 output_loading_info(`bool`, *optional*, defaults to `False`):
141 Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
142 device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*):
143 A map that specifies where each submodule should go. It doesn't need to be refined to each
144 parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the
145 same device.
146
147 To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For
148 more information about each option see [designing a device
149 map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
150 max_memory (`Dict`, *optional*):
151 A dictionary device identifier to maximum memory. Will default to the maximum memory available for each
152 GPU and the available CPU RAM if unset.
153 low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`):
154 Speed up model loading by not initializing the weights and only loading the pre-trained weights. This
155 also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the
156 model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch,
157 setting this argument to `True` will raise an error.
158 variant (`str`, *optional*):
159 If specified load weights from `variant` filename, *e.g.* pytorch_model.<variant>.bin. `variant` is
160 ignored when using `from_flax`.
161 use_safetensors (`bool`, *optional*, defaults to `None`):
162 If set to `None`, the `safetensors` weights will be downloaded if they're available **and** if the
163 `safetensors` library is installed. If set to `True`, the model will be forcibly loaded from
164 `safetensors` weights. If set to `False`, loading will *not* use `safetensors`.
165 """
166 idx = 0
167 controlnets = []
168
169 # load controlnet and append to list until no controlnet directory exists anymore
170 # first controlnet has to be saved under `./mydirectory/controlnet` to be compliant with `DiffusionPipeline.from_prertained`
171 # second, third, ... controlnets have to be saved under `./mydirectory/controlnet_1`, `./mydirectory/controlnet_2`, ...
172 model_path_to_load = pretrained_model_path
173 while os.path.isdir(model_path_to_load):
174 controlnet = ControlNetModel.from_pretrained(model_path_to_load, **kwargs)
175 controlnets.append(controlnet)
176
177 idx += 1
178 model_path_to_load = pretrained_model_path + f"_{idx}"
179
180 logger.info(f"{len(controlnets)} controlnets loaded from {pretrained_model_path}.")
181
182 if len(controlnets) == 0:
183 raise ValueError(
184 f"No ControlNets found under {os.path.dirname(pretrained_model_path)}. Expected at least {pretrained_model_path + '_0'}."
185 )
186
187 return cls(controlnets)
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/diffusers/pipelines/controlnet/multicontrolnet.py b/src/diffusers/pipelines/controlnet/multicontrolnet.py
--- a/src/diffusers/pipelines/controlnet/multicontrolnet.py
+++ b/src/diffusers/pipelines/controlnet/multicontrolnet.py
@@ -100,20 +100,16 @@
variant (`str`, *optional*):
If specified, weights are saved in the format pytorch_model.<variant>.bin.
"""
- idx = 0
- model_path_to_save = save_directory
- for controlnet in self.nets:
+ for idx, controlnet in enumerate(self.nets):
+ suffix = "" if idx == 0 else f"_{idx}"
controlnet.save_pretrained(
- model_path_to_save,
+ save_directory + suffix,
is_main_process=is_main_process,
save_function=save_function,
safe_serialization=safe_serialization,
variant=variant,
)
- idx += 1
- model_path_to_save = model_path_to_save + f"_{idx}"
-
@classmethod
def from_pretrained(cls, pretrained_model_path: Optional[Union[str, os.PathLike]], **kwargs):
r"""
|
{"golden_diff": "diff --git a/src/diffusers/pipelines/controlnet/multicontrolnet.py b/src/diffusers/pipelines/controlnet/multicontrolnet.py\n--- a/src/diffusers/pipelines/controlnet/multicontrolnet.py\n+++ b/src/diffusers/pipelines/controlnet/multicontrolnet.py\n@@ -100,20 +100,16 @@\n variant (`str`, *optional*):\n If specified, weights are saved in the format pytorch_model.<variant>.bin.\n \"\"\"\n- idx = 0\n- model_path_to_save = save_directory\n- for controlnet in self.nets:\n+ for idx, controlnet in enumerate(self.nets):\n+ suffix = \"\" if idx == 0 else f\"_{idx}\"\n controlnet.save_pretrained(\n- model_path_to_save,\n+ save_directory + suffix,\n is_main_process=is_main_process,\n save_function=save_function,\n safe_serialization=safe_serialization,\n variant=variant,\n )\n \n- idx += 1\n- model_path_to_save = model_path_to_save + f\"_{idx}\"\n-\n @classmethod\n def from_pretrained(cls, pretrained_model_path: Optional[Union[str, os.PathLike]], **kwargs):\n r\"\"\"\n", "issue": "Multi-controlnet formatting issue\n### Describe the bug\r\n\r\nHi.\r\nThere is an inconsistency between `from_pretrained` and `save_pretrained` within the Multicontrolnet class.\r\nThe from_pretrained function returns a directory structure like this: controlnet, controlnet_1, controlnet_2, \r\nwhereas save_pretrained is like this: controlnet, controlnet_1, controlnet_1_2.\r\nWhen loading a saved model, if there are 3 controlnets, the last controlnet will not be loaded. (more than 2 always same issue)\r\n\r\n\r\n### Reproduction\r\n\r\nI don't think there is no need to reproduce the code as it's pretty clear issue.\r\nFor compatibility, how about changing the `save_pretrained` function in Multi-ControlNet to look like the code below? \r\n```\r\ndef save_pretrained(\r\n self,\r\n save_directory: Union[str, os.PathLike],\r\n is_main_process: bool = True,\r\n save_function: Callable = None,\r\n safe_serialization: bool = True,\r\n variant: Optional[str] = None,\r\n ):\r\n \"\"\"\r\n Save a model and its configuration file to a directory, so that it can be re-loaded using the\r\n `[`~pipelines.controlnet.MultiControlNetModel.from_pretrained`]` class method.\r\n\r\n Arguments:\r\n save_directory (`str` or `os.PathLike`):\r\n Directory to which to save. Will be created if it doesn't exist.\r\n is_main_process (`bool`, *optional*, defaults to `True`):\r\n Whether the process calling this is the main process or not. Useful when in distributed training like\r\n TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on\r\n the main process to avoid race conditions.\r\n save_function (`Callable`):\r\n The function to use to save the state dictionary. Useful on distributed training like TPUs when one\r\n need to replace `torch.save` by another method. 
Can be configured with the environment variable\r\n `DIFFUSERS_SAVE_MODE`.\r\n safe_serialization (`bool`, *optional*, defaults to `True`):\r\n Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).\r\n variant (`str`, *optional*):\r\n If specified, weights are saved in the format pytorch_model.<variant>.bin.\r\n \"\"\"\r\n model_path_to_save = save_directory\r\n for idx, controlnet in enumerate(self.nets):\r\n suffix = \"\" if idx == 0 else f\"_{idx}\"\r\n controlnet.save_pretrained(\r\n model_path_to_save + suffix,\r\n is_main_process=is_main_process,\r\n save_function=save_function,\r\n safe_serialization=safe_serialization,\r\n variant=variant,\r\n )\r\n```\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nDiffusers 0.27.2\r\n\r\n### Who can help?\r\n\r\n@sayakpaul \n", "before_files": [{"content": "import os\nfrom typing import Any, Callable, Dict, List, Optional, Tuple, Union\n\nimport torch\nfrom torch import nn\n\nfrom ...models.controlnet import ControlNetModel, ControlNetOutput\nfrom ...models.modeling_utils import ModelMixin\nfrom ...utils import logging\n\n\nlogger = logging.get_logger(__name__)\n\n\nclass MultiControlNetModel(ModelMixin):\n r\"\"\"\n Multiple `ControlNetModel` wrapper class for Multi-ControlNet\n\n This module is a wrapper for multiple instances of the `ControlNetModel`. The `forward()` API is designed to be\n compatible with `ControlNetModel`.\n\n Args:\n controlnets (`List[ControlNetModel]`):\n Provides additional conditioning to the unet during the denoising process. You must set multiple\n `ControlNetModel` as a list.\n \"\"\"\n\n def __init__(self, controlnets: Union[List[ControlNetModel], Tuple[ControlNetModel]]):\n super().__init__()\n self.nets = nn.ModuleList(controlnets)\n\n def forward(\n self,\n sample: torch.Tensor,\n timestep: Union[torch.Tensor, float, int],\n encoder_hidden_states: torch.Tensor,\n controlnet_cond: List[torch.tensor],\n conditioning_scale: List[float],\n class_labels: Optional[torch.Tensor] = None,\n timestep_cond: Optional[torch.Tensor] = None,\n attention_mask: Optional[torch.Tensor] = None,\n added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None,\n cross_attention_kwargs: Optional[Dict[str, Any]] = None,\n guess_mode: bool = False,\n return_dict: bool = True,\n ) -> Union[ControlNetOutput, Tuple]:\n for i, (image, scale, controlnet) in enumerate(zip(controlnet_cond, conditioning_scale, self.nets)):\n down_samples, mid_sample = controlnet(\n sample=sample,\n timestep=timestep,\n encoder_hidden_states=encoder_hidden_states,\n controlnet_cond=image,\n conditioning_scale=scale,\n class_labels=class_labels,\n timestep_cond=timestep_cond,\n attention_mask=attention_mask,\n added_cond_kwargs=added_cond_kwargs,\n cross_attention_kwargs=cross_attention_kwargs,\n guess_mode=guess_mode,\n return_dict=return_dict,\n )\n\n # merge samples\n if i == 0:\n down_block_res_samples, mid_block_res_sample = down_samples, mid_sample\n else:\n down_block_res_samples = [\n samples_prev + samples_curr\n for samples_prev, samples_curr in zip(down_block_res_samples, down_samples)\n ]\n mid_block_res_sample += mid_sample\n\n return down_block_res_samples, mid_block_res_sample\n\n def save_pretrained(\n self,\n save_directory: Union[str, os.PathLike],\n is_main_process: bool = True,\n save_function: Callable = None,\n safe_serialization: bool = True,\n variant: Optional[str] = None,\n ):\n \"\"\"\n Save a model and its configuration file to a directory, so that it can be re-loaded using 
the\n `[`~pipelines.controlnet.MultiControlNetModel.from_pretrained`]` class method.\n\n Arguments:\n save_directory (`str` or `os.PathLike`):\n Directory to which to save. Will be created if it doesn't exist.\n is_main_process (`bool`, *optional*, defaults to `True`):\n Whether the process calling this is the main process or not. Useful when in distributed training like\n TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on\n the main process to avoid race conditions.\n save_function (`Callable`):\n The function to use to save the state dictionary. Useful on distributed training like TPUs when one\n need to replace `torch.save` by another method. Can be configured with the environment variable\n `DIFFUSERS_SAVE_MODE`.\n safe_serialization (`bool`, *optional*, defaults to `True`):\n Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).\n variant (`str`, *optional*):\n If specified, weights are saved in the format pytorch_model.<variant>.bin.\n \"\"\"\n idx = 0\n model_path_to_save = save_directory\n for controlnet in self.nets:\n controlnet.save_pretrained(\n model_path_to_save,\n is_main_process=is_main_process,\n save_function=save_function,\n safe_serialization=safe_serialization,\n variant=variant,\n )\n\n idx += 1\n model_path_to_save = model_path_to_save + f\"_{idx}\"\n\n @classmethod\n def from_pretrained(cls, pretrained_model_path: Optional[Union[str, os.PathLike]], **kwargs):\n r\"\"\"\n Instantiate a pretrained MultiControlNet model from multiple pre-trained controlnet models.\n\n The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train\n the model, you should first set it back in training mode with `model.train()`.\n\n The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come\n pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning\n task.\n\n The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those\n weights are discarded.\n\n Parameters:\n pretrained_model_path (`os.PathLike`):\n A path to a *directory* containing model weights saved using\n [`~diffusers.pipelines.controlnet.MultiControlNetModel.save_pretrained`], e.g.,\n `./my_model_directory/controlnet`.\n torch_dtype (`str` or `torch.dtype`, *optional*):\n Override the default `torch.dtype` and load the model under this dtype. If `\"auto\"` is passed the dtype\n will be automatically derived from the model's weights.\n output_loading_info(`bool`, *optional*, defaults to `False`):\n Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.\n device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*):\n A map that specifies where each submodule should go. It doesn't need to be refined to each\n parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the\n same device.\n\n To have Accelerate compute the most optimized `device_map` automatically, set `device_map=\"auto\"`. For\n more information about each option see [designing a device\n map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).\n max_memory (`Dict`, *optional*):\n A dictionary device identifier to maximum memory. 
Will default to the maximum memory available for each\n GPU and the available CPU RAM if unset.\n low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`):\n Speed up model loading by not initializing the weights and only loading the pre-trained weights. This\n also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the\n model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch,\n setting this argument to `True` will raise an error.\n variant (`str`, *optional*):\n If specified load weights from `variant` filename, *e.g.* pytorch_model.<variant>.bin. `variant` is\n ignored when using `from_flax`.\n use_safetensors (`bool`, *optional*, defaults to `None`):\n If set to `None`, the `safetensors` weights will be downloaded if they're available **and** if the\n `safetensors` library is installed. If set to `True`, the model will be forcibly loaded from\n `safetensors` weights. If set to `False`, loading will *not* use `safetensors`.\n \"\"\"\n idx = 0\n controlnets = []\n\n # load controlnet and append to list until no controlnet directory exists anymore\n # first controlnet has to be saved under `./mydirectory/controlnet` to be compliant with `DiffusionPipeline.from_prertained`\n # second, third, ... controlnets have to be saved under `./mydirectory/controlnet_1`, `./mydirectory/controlnet_2`, ...\n model_path_to_load = pretrained_model_path\n while os.path.isdir(model_path_to_load):\n controlnet = ControlNetModel.from_pretrained(model_path_to_load, **kwargs)\n controlnets.append(controlnet)\n\n idx += 1\n model_path_to_load = pretrained_model_path + f\"_{idx}\"\n\n logger.info(f\"{len(controlnets)} controlnets loaded from {pretrained_model_path}.\")\n\n if len(controlnets) == 0:\n raise ValueError(\n f\"No ControlNets found under {os.path.dirname(pretrained_model_path)}. Expected at least {pretrained_model_path + '_0'}.\"\n )\n\n return cls(controlnets)\n", "path": "src/diffusers/pipelines/controlnet/multicontrolnet.py"}], "after_files": [{"content": "import os\nfrom typing import Any, Callable, Dict, List, Optional, Tuple, Union\n\nimport torch\nfrom torch import nn\n\nfrom ...models.controlnet import ControlNetModel, ControlNetOutput\nfrom ...models.modeling_utils import ModelMixin\nfrom ...utils import logging\n\n\nlogger = logging.get_logger(__name__)\n\n\nclass MultiControlNetModel(ModelMixin):\n r\"\"\"\n Multiple `ControlNetModel` wrapper class for Multi-ControlNet\n\n This module is a wrapper for multiple instances of the `ControlNetModel`. The `forward()` API is designed to be\n compatible with `ControlNetModel`.\n\n Args:\n controlnets (`List[ControlNetModel]`):\n Provides additional conditioning to the unet during the denoising process. 
You must set multiple\n `ControlNetModel` as a list.\n \"\"\"\n\n def __init__(self, controlnets: Union[List[ControlNetModel], Tuple[ControlNetModel]]):\n super().__init__()\n self.nets = nn.ModuleList(controlnets)\n\n def forward(\n self,\n sample: torch.Tensor,\n timestep: Union[torch.Tensor, float, int],\n encoder_hidden_states: torch.Tensor,\n controlnet_cond: List[torch.tensor],\n conditioning_scale: List[float],\n class_labels: Optional[torch.Tensor] = None,\n timestep_cond: Optional[torch.Tensor] = None,\n attention_mask: Optional[torch.Tensor] = None,\n added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None,\n cross_attention_kwargs: Optional[Dict[str, Any]] = None,\n guess_mode: bool = False,\n return_dict: bool = True,\n ) -> Union[ControlNetOutput, Tuple]:\n for i, (image, scale, controlnet) in enumerate(zip(controlnet_cond, conditioning_scale, self.nets)):\n down_samples, mid_sample = controlnet(\n sample=sample,\n timestep=timestep,\n encoder_hidden_states=encoder_hidden_states,\n controlnet_cond=image,\n conditioning_scale=scale,\n class_labels=class_labels,\n timestep_cond=timestep_cond,\n attention_mask=attention_mask,\n added_cond_kwargs=added_cond_kwargs,\n cross_attention_kwargs=cross_attention_kwargs,\n guess_mode=guess_mode,\n return_dict=return_dict,\n )\n\n # merge samples\n if i == 0:\n down_block_res_samples, mid_block_res_sample = down_samples, mid_sample\n else:\n down_block_res_samples = [\n samples_prev + samples_curr\n for samples_prev, samples_curr in zip(down_block_res_samples, down_samples)\n ]\n mid_block_res_sample += mid_sample\n\n return down_block_res_samples, mid_block_res_sample\n\n def save_pretrained(\n self,\n save_directory: Union[str, os.PathLike],\n is_main_process: bool = True,\n save_function: Callable = None,\n safe_serialization: bool = True,\n variant: Optional[str] = None,\n ):\n \"\"\"\n Save a model and its configuration file to a directory, so that it can be re-loaded using the\n `[`~pipelines.controlnet.MultiControlNetModel.from_pretrained`]` class method.\n\n Arguments:\n save_directory (`str` or `os.PathLike`):\n Directory to which to save. Will be created if it doesn't exist.\n is_main_process (`bool`, *optional*, defaults to `True`):\n Whether the process calling this is the main process or not. Useful when in distributed training like\n TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on\n the main process to avoid race conditions.\n save_function (`Callable`):\n The function to use to save the state dictionary. Useful on distributed training like TPUs when one\n need to replace `torch.save` by another method. 
Can be configured with the environment variable\n `DIFFUSERS_SAVE_MODE`.\n safe_serialization (`bool`, *optional*, defaults to `True`):\n Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).\n variant (`str`, *optional*):\n If specified, weights are saved in the format pytorch_model.<variant>.bin.\n \"\"\"\n for idx, controlnet in enumerate(self.nets):\n suffix = \"\" if idx == 0 else f\"_{idx}\"\n controlnet.save_pretrained(\n save_directory + suffix,\n is_main_process=is_main_process,\n save_function=save_function,\n safe_serialization=safe_serialization,\n variant=variant,\n )\n\n @classmethod\n def from_pretrained(cls, pretrained_model_path: Optional[Union[str, os.PathLike]], **kwargs):\n r\"\"\"\n Instantiate a pretrained MultiControlNet model from multiple pre-trained controlnet models.\n\n The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train\n the model, you should first set it back in training mode with `model.train()`.\n\n The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come\n pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning\n task.\n\n The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those\n weights are discarded.\n\n Parameters:\n pretrained_model_path (`os.PathLike`):\n A path to a *directory* containing model weights saved using\n [`~diffusers.pipelines.controlnet.MultiControlNetModel.save_pretrained`], e.g.,\n `./my_model_directory/controlnet`.\n torch_dtype (`str` or `torch.dtype`, *optional*):\n Override the default `torch.dtype` and load the model under this dtype. If `\"auto\"` is passed the dtype\n will be automatically derived from the model's weights.\n output_loading_info(`bool`, *optional*, defaults to `False`):\n Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.\n device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*):\n A map that specifies where each submodule should go. It doesn't need to be refined to each\n parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the\n same device.\n\n To have Accelerate compute the most optimized `device_map` automatically, set `device_map=\"auto\"`. For\n more information about each option see [designing a device\n map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).\n max_memory (`Dict`, *optional*):\n A dictionary device identifier to maximum memory. Will default to the maximum memory available for each\n GPU and the available CPU RAM if unset.\n low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`):\n Speed up model loading by not initializing the weights and only loading the pre-trained weights. This\n also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the\n model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch,\n setting this argument to `True` will raise an error.\n variant (`str`, *optional*):\n If specified load weights from `variant` filename, *e.g.* pytorch_model.<variant>.bin. 
`variant` is\n ignored when using `from_flax`.\n use_safetensors (`bool`, *optional*, defaults to `None`):\n If set to `None`, the `safetensors` weights will be downloaded if they're available **and** if the\n `safetensors` library is installed. If set to `True`, the model will be forcibly loaded from\n `safetensors` weights. If set to `False`, loading will *not* use `safetensors`.\n \"\"\"\n idx = 0\n controlnets = []\n\n # load controlnet and append to list until no controlnet directory exists anymore\n # first controlnet has to be saved under `./mydirectory/controlnet` to be compliant with `DiffusionPipeline.from_prertained`\n # second, third, ... controlnets have to be saved under `./mydirectory/controlnet_1`, `./mydirectory/controlnet_2`, ...\n model_path_to_load = pretrained_model_path\n while os.path.isdir(model_path_to_load):\n controlnet = ControlNetModel.from_pretrained(model_path_to_load, **kwargs)\n controlnets.append(controlnet)\n\n idx += 1\n model_path_to_load = pretrained_model_path + f\"_{idx}\"\n\n logger.info(f\"{len(controlnets)} controlnets loaded from {pretrained_model_path}.\")\n\n if len(controlnets) == 0:\n raise ValueError(\n f\"No ControlNets found under {os.path.dirname(pretrained_model_path)}. Expected at least {pretrained_model_path + '_0'}.\"\n )\n\n return cls(controlnets)\n", "path": "src/diffusers/pipelines/controlnet/multicontrolnet.py"}]}
| 3,342 | 273 |
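The `MultiControlNetModel` fragments quoted above describe a directory convention: `save_pretrained` writes the first net to the given directory and the remaining nets to `<dir>_1`, `<dir>_2`, ..., while `from_pretrained` walks those suffixes back until no further directory exists. A minimal round-trip sketch of that convention follows; the two checkpoint ids are placeholder examples and are not taken from the entry itself.

```python
# Placeholder checkpoint ids; any two ControlNetModel checkpoints will do.
from diffusers import ControlNetModel
from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel

canny = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
depth = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth")
multi = MultiControlNetModel([canny, depth])

# writes ./multi/controlnet (first net) and ./multi/controlnet_1 (second net)
multi.save_pretrained("./multi/controlnet")

# walks ./multi/controlnet, ./multi/controlnet_1, ... and rebuilds the wrapper
reloaded = MultiControlNetModel.from_pretrained("./multi/controlnet")
```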
gh_patches_debug_34926 | rasdani/github-patches | git_diff | Project-MONAI__MONAI-3770 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clarify the `num_workers` in `ThreadDataLoader`
**Is your feature request related to a problem? Please describe.**
When I was introducing GPU transforms and the associated `ThreadDataLoader` to users, I repeatedly got feedback about the `num_workers` arg: users find it confusing and assume it controls the multi-threading in `ThreadDataLoader`, when it actually sets the multi-processing workers of the PyTorch DataLoader.
Would be nice to clarify this arg and the use cases.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/data/thread_buffer.py`
Content:
```
1 # Copyright (c) MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12
13 from queue import Empty, Full, Queue
14 from threading import Thread
15
16 from monai.data import DataLoader, Dataset
17
18
19 class ThreadBuffer:
20 """
21 Iterates over values from self.src in a separate thread but yielding them in the current thread. This allows values
22 to be queued up asynchronously. The internal thread will continue running so long as the source has values or until
23 the stop() method is called.
24
25 One issue raised by using a thread in this way is that during the lifetime of the thread the source object is being
26 iterated over, so if the thread hasn't finished another attempt to iterate over it will raise an exception or yield
27 unexpected results. To ensure the thread releases the iteration and proper cleanup is done the stop() method must
28 be called which will join with the thread.
29
30 Args:
31 src: Source data iterable
32 buffer_size: Number of items to buffer from the source
33 timeout: Time to wait for an item from the buffer, or to wait while the buffer is full when adding items
34 """
35
36 def __init__(self, src, buffer_size: int = 1, timeout: float = 0.01):
37 self.src = src
38 self.buffer_size = buffer_size
39 self.timeout = timeout
40 self.buffer: Queue = Queue(self.buffer_size)
41 self.gen_thread = None
42 self.is_running = False
43
44 def enqueue_values(self):
45 for src_val in self.src:
46 while self.is_running:
47 try:
48 self.buffer.put(src_val, timeout=self.timeout)
49 except Full:
50 pass # try to add the item again
51 else:
52 break # successfully added the item, quit trying
53 else: # quit the thread cleanly when requested to stop
54 break
55
56 def stop(self):
57 self.is_running = False # signal the thread to exit
58
59 if self.gen_thread is not None:
60 self.gen_thread.join()
61
62 self.gen_thread = None
63
64 def __iter__(self):
65
66 self.is_running = True
67 self.gen_thread = Thread(target=self.enqueue_values, daemon=True)
68 self.gen_thread.start()
69
70 try:
71 while self.is_running and (self.gen_thread.is_alive() or not self.buffer.empty()):
72 try:
73 yield self.buffer.get(timeout=self.timeout)
74 except Empty:
75 pass # queue was empty this time, try again
76 finally:
77 self.stop() # ensure thread completion
78
79
80 class ThreadDataLoader(DataLoader):
81 """
82 Subclass of `DataLoader` using a `ThreadBuffer` object to implement `__iter__` method asynchronously. This will
83 iterate over data from the loader as expected however the data is generated on a separate thread. Use this class
84 where a `DataLoader` instance is required and not just an iterable object.
85
86 The default behaviour with `repeats` set to 1 is to yield each batch as it is generated, however with a higher
87 value the generated batch is yielded that many times while underlying dataset asynchronously generates the next.
88 Typically not all relevant information is learned from a batch in a single iteration so training multiple times
89 on the same batch will still produce good training with minimal short-term overfitting while allowing a slow batch
90 generation process more time to produce a result.
91
92 See:
93 * Fischetti et al. "Faster SGD training by minibatch persistency." ArXiv (2018) https://arxiv.org/abs/1806.07353
94 * Dami et al., "Faster Neural Network Training with Data Echoing" ArXiv (2020) https://arxiv.org/abs/1907.05550
95 * Ramezani et al. "GCN meets GPU: Decoupling "When to Sample" from "How to Sample"." NeurIPS (2020).
96 https://proceedings.neurips.cc/paper/2020/file/d714d2c5a796d5814c565d78dd16188d-Paper.pdf
97
98 Args:
99 dataset: input dataset.
100 buffer_size: number of items to buffer from the data source.
101 buffer_timeout: time to wait for an item from the buffer, or to wait while the buffer is full when adding items.
102 num_workers: number of the multi-processing workers in PyTorch DataLoader.
103 repeats: number of times to yield the same batch
104 """
105
106 def __init__(
107 self,
108 dataset: Dataset,
109 buffer_size: int = 1,
110 buffer_timeout: float = 0.01,
111 num_workers: int = 0,
112 repeats: int = 1,
113 **kwargs,
114 ):
115 super().__init__(dataset, num_workers, **kwargs)
116 self.buffer_size = buffer_size
117 self.buffer_timeout = buffer_timeout
118 self.repeats = repeats
119
120 def __iter__(self):
121 buffer = ThreadBuffer(src=super().__iter__(), buffer_size=self.buffer_size, timeout=self.buffer_timeout)
122
123 for batch in buffer:
124 for _ in range(self.repeats):
125 yield batch
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/monai/data/thread_buffer.py b/monai/data/thread_buffer.py
--- a/monai/data/thread_buffer.py
+++ b/monai/data/thread_buffer.py
@@ -89,6 +89,12 @@
on the same batch will still produce good training with minimal short-term overfitting while allowing a slow batch
generation process more time to produce a result.
+ Another typical usage is to accelerate light-weight preprocessing (usually cached all the deterministic transforms
+ and no IO operations), because it leverages the separate thread to execute preprocessing to avoid unnecessary IPC
+ between multiple workers of DataLoader. And as CUDA may not work well with the multi-processing of DataLoader,
+ `ThreadDataLoader` can be useful for GPU transforms. For more details:
+ https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_model_training_guide.md.
+
See:
* Fischetti et al. "Faster SGD training by minibatch persistency." ArXiv (2018) https://arxiv.org/abs/1806.07353
* Dami et al., "Faster Neural Network Training with Data Echoing" ArXiv (2020) https://arxiv.org/abs/1907.05550
@@ -99,20 +105,15 @@
dataset: input dataset.
buffer_size: number of items to buffer from the data source.
buffer_timeout: time to wait for an item from the buffer, or to wait while the buffer is full when adding items.
- num_workers: number of the multi-processing workers in PyTorch DataLoader.
- repeats: number of times to yield the same batch
+ repeats: number of times to yield the same batch.
+ kwargs: other arguments for `DataLoader` except for `dataset`.
+
"""
def __init__(
- self,
- dataset: Dataset,
- buffer_size: int = 1,
- buffer_timeout: float = 0.01,
- num_workers: int = 0,
- repeats: int = 1,
- **kwargs,
+ self, dataset: Dataset, buffer_size: int = 1, buffer_timeout: float = 0.01, repeats: int = 1, **kwargs
):
- super().__init__(dataset, num_workers, **kwargs)
+ super().__init__(dataset, **kwargs)
self.buffer_size = buffer_size
self.buffer_timeout = buffer_timeout
self.repeats = repeats
|
{"golden_diff": "diff --git a/monai/data/thread_buffer.py b/monai/data/thread_buffer.py\n--- a/monai/data/thread_buffer.py\n+++ b/monai/data/thread_buffer.py\n@@ -89,6 +89,12 @@\n on the same batch will still produce good training with minimal short-term overfitting while allowing a slow batch\n generation process more time to produce a result.\n \n+ Another typical usage is to accelerate light-weight preprocessing (usually cached all the deterministic transforms\n+ and no IO operations), because it leverages the separate thread to execute preprocessing to avoid unnecessary IPC\n+ between multiple workers of DataLoader. And as CUDA may not work well with the multi-processing of DataLoader,\n+ `ThreadDataLoader` can be useful for GPU transforms. For more details:\n+ https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_model_training_guide.md.\n+\n See:\n * Fischetti et al. \"Faster SGD training by minibatch persistency.\" ArXiv (2018) https://arxiv.org/abs/1806.07353\n * Dami et al., \"Faster Neural Network Training with Data Echoing\" ArXiv (2020) https://arxiv.org/abs/1907.05550\n@@ -99,20 +105,15 @@\n dataset: input dataset.\n buffer_size: number of items to buffer from the data source.\n buffer_timeout: time to wait for an item from the buffer, or to wait while the buffer is full when adding items.\n- num_workers: number of the multi-processing workers in PyTorch DataLoader.\n- repeats: number of times to yield the same batch\n+ repeats: number of times to yield the same batch.\n+ kwargs: other arguments for `DataLoader` except for `dataset`.\n+\n \"\"\"\n \n def __init__(\n- self,\n- dataset: Dataset,\n- buffer_size: int = 1,\n- buffer_timeout: float = 0.01,\n- num_workers: int = 0,\n- repeats: int = 1,\n- **kwargs,\n+ self, dataset: Dataset, buffer_size: int = 1, buffer_timeout: float = 0.01, repeats: int = 1, **kwargs\n ):\n- super().__init__(dataset, num_workers, **kwargs)\n+ super().__init__(dataset, **kwargs)\n self.buffer_size = buffer_size\n self.buffer_timeout = buffer_timeout\n self.repeats = repeats\n", "issue": "Clarify the `num_workers` in `ThreadDataLoader`\n**Is your feature request related to a problem? Please describe.**\r\nWhen I was introducing GPU transforms and the associated `ThreadDataLoader` to users, got several times feedback about the `num_workers` arg, which is confusing that users think it means the multi-threads in `ThreadDataLoader`, but actually it's the multi-processing workers of PyTorch DataLoader.\r\nWould be nice to clarify this arg and the use cases.\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom queue import Empty, Full, Queue\nfrom threading import Thread\n\nfrom monai.data import DataLoader, Dataset\n\n\nclass ThreadBuffer:\n \"\"\"\n Iterates over values from self.src in a separate thread but yielding them in the current thread. This allows values\n to be queued up asynchronously. 
The internal thread will continue running so long as the source has values or until\n the stop() method is called.\n\n One issue raised by using a thread in this way is that during the lifetime of the thread the source object is being\n iterated over, so if the thread hasn't finished another attempt to iterate over it will raise an exception or yield\n unexpected results. To ensure the thread releases the iteration and proper cleanup is done the stop() method must\n be called which will join with the thread.\n\n Args:\n src: Source data iterable\n buffer_size: Number of items to buffer from the source\n timeout: Time to wait for an item from the buffer, or to wait while the buffer is full when adding items\n \"\"\"\n\n def __init__(self, src, buffer_size: int = 1, timeout: float = 0.01):\n self.src = src\n self.buffer_size = buffer_size\n self.timeout = timeout\n self.buffer: Queue = Queue(self.buffer_size)\n self.gen_thread = None\n self.is_running = False\n\n def enqueue_values(self):\n for src_val in self.src:\n while self.is_running:\n try:\n self.buffer.put(src_val, timeout=self.timeout)\n except Full:\n pass # try to add the item again\n else:\n break # successfully added the item, quit trying\n else: # quit the thread cleanly when requested to stop\n break\n\n def stop(self):\n self.is_running = False # signal the thread to exit\n\n if self.gen_thread is not None:\n self.gen_thread.join()\n\n self.gen_thread = None\n\n def __iter__(self):\n\n self.is_running = True\n self.gen_thread = Thread(target=self.enqueue_values, daemon=True)\n self.gen_thread.start()\n\n try:\n while self.is_running and (self.gen_thread.is_alive() or not self.buffer.empty()):\n try:\n yield self.buffer.get(timeout=self.timeout)\n except Empty:\n pass # queue was empty this time, try again\n finally:\n self.stop() # ensure thread completion\n\n\nclass ThreadDataLoader(DataLoader):\n \"\"\"\n Subclass of `DataLoader` using a `ThreadBuffer` object to implement `__iter__` method asynchronously. This will\n iterate over data from the loader as expected however the data is generated on a separate thread. Use this class\n where a `DataLoader` instance is required and not just an iterable object.\n\n The default behaviour with `repeats` set to 1 is to yield each batch as it is generated, however with a higher\n value the generated batch is yielded that many times while underlying dataset asynchronously generates the next.\n Typically not all relevant information is learned from a batch in a single iteration so training multiple times\n on the same batch will still produce good training with minimal short-term overfitting while allowing a slow batch\n generation process more time to produce a result.\n\n See:\n * Fischetti et al. \"Faster SGD training by minibatch persistency.\" ArXiv (2018) https://arxiv.org/abs/1806.07353\n * Dami et al., \"Faster Neural Network Training with Data Echoing\" ArXiv (2020) https://arxiv.org/abs/1907.05550\n * Ramezani et al. 
\"GCN meets GPU: Decoupling \"When to Sample\" from \"How to Sample\".\" NeurIPS (2020).\n https://proceedings.neurips.cc/paper/2020/file/d714d2c5a796d5814c565d78dd16188d-Paper.pdf\n\n Args:\n dataset: input dataset.\n buffer_size: number of items to buffer from the data source.\n buffer_timeout: time to wait for an item from the buffer, or to wait while the buffer is full when adding items.\n num_workers: number of the multi-processing workers in PyTorch DataLoader.\n repeats: number of times to yield the same batch\n \"\"\"\n\n def __init__(\n self,\n dataset: Dataset,\n buffer_size: int = 1,\n buffer_timeout: float = 0.01,\n num_workers: int = 0,\n repeats: int = 1,\n **kwargs,\n ):\n super().__init__(dataset, num_workers, **kwargs)\n self.buffer_size = buffer_size\n self.buffer_timeout = buffer_timeout\n self.repeats = repeats\n\n def __iter__(self):\n buffer = ThreadBuffer(src=super().__iter__(), buffer_size=self.buffer_size, timeout=self.buffer_timeout)\n\n for batch in buffer:\n for _ in range(self.repeats):\n yield batch\n", "path": "monai/data/thread_buffer.py"}], "after_files": [{"content": "# Copyright (c) MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom queue import Empty, Full, Queue\nfrom threading import Thread\n\nfrom monai.data import DataLoader, Dataset\n\n\nclass ThreadBuffer:\n \"\"\"\n Iterates over values from self.src in a separate thread but yielding them in the current thread. This allows values\n to be queued up asynchronously. The internal thread will continue running so long as the source has values or until\n the stop() method is called.\n\n One issue raised by using a thread in this way is that during the lifetime of the thread the source object is being\n iterated over, so if the thread hasn't finished another attempt to iterate over it will raise an exception or yield\n unexpected results. 
To ensure the thread releases the iteration and proper cleanup is done the stop() method must\n be called which will join with the thread.\n\n Args:\n src: Source data iterable\n buffer_size: Number of items to buffer from the source\n timeout: Time to wait for an item from the buffer, or to wait while the buffer is full when adding items\n \"\"\"\n\n def __init__(self, src, buffer_size: int = 1, timeout: float = 0.01):\n self.src = src\n self.buffer_size = buffer_size\n self.timeout = timeout\n self.buffer: Queue = Queue(self.buffer_size)\n self.gen_thread = None\n self.is_running = False\n\n def enqueue_values(self):\n for src_val in self.src:\n while self.is_running:\n try:\n self.buffer.put(src_val, timeout=self.timeout)\n except Full:\n pass # try to add the item again\n else:\n break # successfully added the item, quit trying\n else: # quit the thread cleanly when requested to stop\n break\n\n def stop(self):\n self.is_running = False # signal the thread to exit\n\n if self.gen_thread is not None:\n self.gen_thread.join()\n\n self.gen_thread = None\n\n def __iter__(self):\n\n self.is_running = True\n self.gen_thread = Thread(target=self.enqueue_values, daemon=True)\n self.gen_thread.start()\n\n try:\n while self.is_running and (self.gen_thread.is_alive() or not self.buffer.empty()):\n try:\n yield self.buffer.get(timeout=self.timeout)\n except Empty:\n pass # queue was empty this time, try again\n finally:\n self.stop() # ensure thread completion\n\n\nclass ThreadDataLoader(DataLoader):\n \"\"\"\n Subclass of `DataLoader` using a `ThreadBuffer` object to implement `__iter__` method asynchronously. This will\n iterate over data from the loader as expected however the data is generated on a separate thread. Use this class\n where a `DataLoader` instance is required and not just an iterable object.\n\n The default behaviour with `repeats` set to 1 is to yield each batch as it is generated, however with a higher\n value the generated batch is yielded that many times while underlying dataset asynchronously generates the next.\n Typically not all relevant information is learned from a batch in a single iteration so training multiple times\n on the same batch will still produce good training with minimal short-term overfitting while allowing a slow batch\n generation process more time to produce a result.\n\n Another typical usage is to accelerate light-weight preprocessing (usually cached all the deterministic transforms\n and no IO operations), because it leverages the separate thread to execute preprocessing to avoid unnecessary IPC\n between multiple workers of DataLoader. And as CUDA may not work well with the multi-processing of DataLoader,\n `ThreadDataLoader` can be useful for GPU transforms. For more details:\n https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_model_training_guide.md.\n\n See:\n * Fischetti et al. \"Faster SGD training by minibatch persistency.\" ArXiv (2018) https://arxiv.org/abs/1806.07353\n * Dami et al., \"Faster Neural Network Training with Data Echoing\" ArXiv (2020) https://arxiv.org/abs/1907.05550\n * Ramezani et al. 
\"GCN meets GPU: Decoupling \"When to Sample\" from \"How to Sample\".\" NeurIPS (2020).\n https://proceedings.neurips.cc/paper/2020/file/d714d2c5a796d5814c565d78dd16188d-Paper.pdf\n\n Args:\n dataset: input dataset.\n buffer_size: number of items to buffer from the data source.\n buffer_timeout: time to wait for an item from the buffer, or to wait while the buffer is full when adding items.\n repeats: number of times to yield the same batch.\n kwargs: other arguments for `DataLoader` except for `dataset`.\n\n \"\"\"\n\n def __init__(\n self, dataset: Dataset, buffer_size: int = 1, buffer_timeout: float = 0.01, repeats: int = 1, **kwargs\n ):\n super().__init__(dataset, **kwargs)\n self.buffer_size = buffer_size\n self.buffer_timeout = buffer_timeout\n self.repeats = repeats\n\n def __iter__(self):\n buffer = ThreadBuffer(src=super().__iter__(), buffer_size=self.buffer_size, timeout=self.buffer_timeout)\n\n for batch in buffer:\n for _ in range(self.repeats):\n yield batch\n", "path": "monai/data/thread_buffer.py"}]}
| 1,873 | 562 |
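The golden diff above removes `num_workers` from `ThreadDataLoader.__init__` so that it, like `batch_size` or `shuffle`, is simply forwarded to the underlying PyTorch `DataLoader` through `**kwargs`. A minimal usage sketch of the patched class follows, with a made-up toy dataset standing in for a real (usually cached) MONAI dataset.

```python
# Toy usage of the patched ThreadDataLoader; the list of dicts is a
# hypothetical stand-in for a real preprocessed dataset.
from monai.data import Dataset, ThreadDataLoader

data = [{"image": i} for i in range(100)]
dataset = Dataset(data=data)

loader = ThreadDataLoader(
    dataset,
    buffer_size=1,   # items buffered by the internal ThreadBuffer
    repeats=2,       # yield each generated batch twice
    batch_size=4,    # forwarded untouched to torch's DataLoader
    shuffle=True,    # forwarded as well
    num_workers=0,   # DataLoader multiprocessing workers, not extra threads
)

for batch in loader:
    pass  # a training step would consume `batch` here
```

Keeping `num_workers` out of the explicit signature removes the ambiguity raised in the issue: the only concurrency `ThreadDataLoader` itself adds is the single buffer thread.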
gh_patches_debug_16186 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-909 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make user handle search case insensitive
**Is your feature request related to a problem? Please describe.**
I'm not sure if this is intentional, but currently you can only look up users if you spell their name with the correct case. This is also the case when you try to find BW users from Mastodon. I don't think case sensitivity makes much sense here.
**Additional context**
Mastodon has case-insensitive user names as well.
One can also register `Tofuwabohu` when there is a `tofuwabohu` already. I think that should be prevented before instances get opened?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/helpers.py`
Content:
```
1 """ helper functions used in various views """
2 import re
3 from requests import HTTPError
4 from django.core.exceptions import FieldError
5 from django.db.models import Count, Max, Q
6
7 from bookwyrm import activitypub, models
8 from bookwyrm.connectors import ConnectorException, get_data
9 from bookwyrm.status import create_generated_note
10 from bookwyrm.utils import regex
11
12
13 def get_user_from_username(viewer, username):
14 """ helper function to resolve a localname or a username to a user """
15 # raises DoesNotExist if user is now found
16 try:
17 return models.User.viewer_aware_objects(viewer).get(localname=username)
18 except models.User.DoesNotExist:
19 return models.User.viewer_aware_objects(viewer).get(username=username)
20
21
22 def is_api_request(request):
23 """ check whether a request is asking for html or data """
24 return "json" in request.headers.get("Accept", "") or request.path[-5:] == ".json"
25
26
27 def is_bookwyrm_request(request):
28 """ check if the request is coming from another bookwyrm instance """
29 user_agent = request.headers.get("User-Agent")
30 if user_agent is None or re.search(regex.bookwyrm_user_agent, user_agent) is None:
31 return False
32 return True
33
34
35 def object_visible_to_user(viewer, obj):
36 """ is a user authorized to view an object? """
37 if not obj:
38 return False
39
40 # viewer can't see it if the object's owner blocked them
41 if viewer in obj.user.blocks.all():
42 return False
43
44 # you can see your own posts and any public or unlisted posts
45 if viewer == obj.user or obj.privacy in ["public", "unlisted"]:
46 return True
47
48 # you can see the followers only posts of people you follow
49 if obj.privacy == "followers" and obj.user.followers.filter(id=viewer.id).first():
50 return True
51
52 # you can see dms you are tagged in
53 if isinstance(obj, models.Status):
54 if obj.privacy == "direct" and obj.mention_users.filter(id=viewer.id).first():
55 return True
56 return False
57
58
59 def privacy_filter(viewer, queryset, privacy_levels=None, following_only=False):
60 """ filter objects that have "user" and "privacy" fields """
61 privacy_levels = privacy_levels or ["public", "unlisted", "followers", "direct"]
62 # if there'd a deleted field, exclude deleted items
63 try:
64 queryset = queryset.filter(deleted=False)
65 except FieldError:
66 pass
67
68 # exclude blocks from both directions
69 if not viewer.is_anonymous:
70 blocked = models.User.objects.filter(id__in=viewer.blocks.all()).all()
71 queryset = queryset.exclude(Q(user__in=blocked) | Q(user__blocks=viewer))
72
73 # you can't see followers only or direct messages if you're not logged in
74 if viewer.is_anonymous:
75 privacy_levels = [p for p in privacy_levels if not p in ["followers", "direct"]]
76
77 # filter to only privided privacy levels
78 queryset = queryset.filter(privacy__in=privacy_levels)
79
80 # only include statuses the user follows
81 if following_only:
82 queryset = queryset.exclude(
83 ~Q( # remove everythign except
84 Q(user__in=viewer.following.all())
85 | Q(user=viewer) # user following
86 | Q(mention_users=viewer) # is self # mentions user
87 ),
88 )
89 # exclude followers-only statuses the user doesn't follow
90 elif "followers" in privacy_levels:
91 queryset = queryset.exclude(
92 ~Q( # user isn't following and it isn't their own status
93 Q(user__in=viewer.following.all()) | Q(user=viewer)
94 ),
95 privacy="followers", # and the status is followers only
96 )
97
98 # exclude direct messages not intended for the user
99 if "direct" in privacy_levels:
100 try:
101 queryset = queryset.exclude(
102 ~Q(Q(user=viewer) | Q(mention_users=viewer)), privacy="direct"
103 )
104 except FieldError:
105 queryset = queryset.exclude(~Q(user=viewer), privacy="direct")
106
107 return queryset
108
109
110 def handle_remote_webfinger(query):
111 """ webfingerin' other servers """
112 user = None
113
114 # usernames could be @user@domain or user@domain
115 if not query:
116 return None
117
118 if query[0] == "@":
119 query = query[1:]
120
121 try:
122 domain = query.split("@")[1]
123 except IndexError:
124 return None
125
126 try:
127 user = models.User.objects.get(username=query)
128 except models.User.DoesNotExist:
129 url = "https://%s/.well-known/webfinger?resource=acct:%s" % (domain, query)
130 try:
131 data = get_data(url)
132 except (ConnectorException, HTTPError):
133 return None
134
135 for link in data.get("links"):
136 if link.get("rel") == "self":
137 try:
138 user = activitypub.resolve_remote_id(
139 link["href"], model=models.User
140 )
141 except (KeyError, activitypub.ActivitySerializerError):
142 return None
143 return user
144
145
146 def get_edition(book_id):
147 """ look up a book in the db and return an edition """
148 book = models.Book.objects.select_subclasses().get(id=book_id)
149 if isinstance(book, models.Work):
150 book = book.get_default_edition()
151 return book
152
153
154 def handle_reading_status(user, shelf, book, privacy):
155 """ post about a user reading a book """
156 # tell the world about this cool thing that happened
157 try:
158 message = {
159 "to-read": "wants to read",
160 "reading": "started reading",
161 "read": "finished reading",
162 }[shelf.identifier]
163 except KeyError:
164 # it's a non-standard shelf, don't worry about it
165 return
166
167 status = create_generated_note(user, message, mention_books=[book], privacy=privacy)
168 status.save()
169
170
171 def is_blocked(viewer, user):
172 """ is this viewer blocked by the user? """
173 if viewer.is_authenticated and viewer in user.blocks.all():
174 return True
175 return False
176
177
178 def get_discover_books():
179 """ list of books for the discover page """
180 return list(
181 set(
182 models.Edition.objects.filter(
183 review__published_date__isnull=False,
184 review__deleted=False,
185 review__user__local=True,
186 review__privacy__in=["public", "unlisted"],
187 )
188 .exclude(cover__exact="")
189 .annotate(Max("review__published_date"))
190 .order_by("-review__published_date__max")[:6]
191 )
192 )
193
194
195 def get_suggested_users(user):
196 """ bookwyrm users you don't already know """
197 return (
198 get_annotated_users(
199 user,
200 ~Q(id=user.id),
201 ~Q(followers=user),
202 ~Q(follower_requests=user),
203 bookwyrm_user=True,
204 )
205 .order_by("-mutuals", "-last_active_date")
206 .all()[:5]
207 )
208
209
210 def get_annotated_users(user, *args, **kwargs):
211 """ Users, annotated with things they have in common """
212 return (
213 models.User.objects.filter(discoverable=True, is_active=True, *args, **kwargs)
214 .exclude(Q(id__in=user.blocks.all()) | Q(blocks=user))
215 .annotate(
216 mutuals=Count(
217 "following",
218 filter=Q(
219 ~Q(id=user.id),
220 ~Q(id__in=user.following.all()),
221 following__in=user.following.all(),
222 ),
223 distinct=True,
224 ),
225 shared_books=Count(
226 "shelfbook",
227 filter=Q(
228 ~Q(id=user.id),
229 shelfbook__book__parent_work__in=[
230 s.book.parent_work for s in user.shelfbook_set.all()
231 ],
232 ),
233 distinct=True,
234 ),
235 )
236 )
237
```
Path: `bookwyrm/views/search.py`
Content:
```
1 """ search views"""
2 import re
3
4 from django.contrib.postgres.search import TrigramSimilarity
5 from django.db.models.functions import Greatest
6 from django.http import JsonResponse
7 from django.template.response import TemplateResponse
8 from django.views import View
9
10 from bookwyrm import models
11 from bookwyrm.connectors import connector_manager
12 from bookwyrm.utils import regex
13 from .helpers import is_api_request, privacy_filter
14 from .helpers import handle_remote_webfinger
15
16
17 # pylint: disable= no-self-use
18 class Search(View):
19 """ search users or books """
20
21 def get(self, request):
22 """ that search bar up top """
23 query = request.GET.get("q")
24 min_confidence = request.GET.get("min_confidence", 0.1)
25
26 if is_api_request(request):
27 # only return local book results via json so we don't cascade
28 book_results = connector_manager.local_search(
29 query, min_confidence=min_confidence
30 )
31 return JsonResponse([r.json() for r in book_results], safe=False)
32
33 # use webfinger for mastodon style [email protected] username
34 if query and re.match(regex.full_username, query):
35 handle_remote_webfinger(query)
36
37 # do a user search
38 user_results = (
39 models.User.viewer_aware_objects(request.user)
40 .annotate(
41 similarity=Greatest(
42 TrigramSimilarity("username", query),
43 TrigramSimilarity("localname", query),
44 )
45 )
46 .filter(
47 similarity__gt=0.5,
48 )
49 .order_by("-similarity")[:10]
50 )
51
52 # any relevent lists?
53 list_results = (
54 privacy_filter(
55 request.user,
56 models.List.objects,
57 privacy_levels=["public", "followers"],
58 )
59 .annotate(
60 similarity=Greatest(
61 TrigramSimilarity("name", query),
62 TrigramSimilarity("description", query),
63 )
64 )
65 .filter(
66 similarity__gt=0.1,
67 )
68 .order_by("-similarity")[:10]
69 )
70
71 book_results = connector_manager.search(query, min_confidence=min_confidence)
72 data = {
73 "book_results": book_results,
74 "user_results": user_results,
75 "list_results": list_results,
76 "query": query or "",
77 }
78 return TemplateResponse(request, "search_results.html", data)
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/views/helpers.py b/bookwyrm/views/helpers.py
--- a/bookwyrm/views/helpers.py
+++ b/bookwyrm/views/helpers.py
@@ -124,7 +124,7 @@
return None
try:
- user = models.User.objects.get(username=query)
+ user = models.User.objects.get(username__iexact=query)
except models.User.DoesNotExist:
url = "https://%s/.well-known/webfinger?resource=acct:%s" % (domain, query)
try:
diff --git a/bookwyrm/views/search.py b/bookwyrm/views/search.py
--- a/bookwyrm/views/search.py
+++ b/bookwyrm/views/search.py
@@ -34,7 +34,7 @@
if query and re.match(regex.full_username, query):
handle_remote_webfinger(query)
- # do a user search
+ # do a user search
user_results = (
models.User.viewer_aware_objects(request.user)
.annotate(
|
{"golden_diff": "diff --git a/bookwyrm/views/helpers.py b/bookwyrm/views/helpers.py\n--- a/bookwyrm/views/helpers.py\n+++ b/bookwyrm/views/helpers.py\n@@ -124,7 +124,7 @@\n return None\n \n try:\n- user = models.User.objects.get(username=query)\n+ user = models.User.objects.get(username__iexact=query)\n except models.User.DoesNotExist:\n url = \"https://%s/.well-known/webfinger?resource=acct:%s\" % (domain, query)\n try:\ndiff --git a/bookwyrm/views/search.py b/bookwyrm/views/search.py\n--- a/bookwyrm/views/search.py\n+++ b/bookwyrm/views/search.py\n@@ -34,7 +34,7 @@\n if query and re.match(regex.full_username, query):\n handle_remote_webfinger(query)\n \n- # do a user search\n+ # do a user search\n user_results = (\n models.User.viewer_aware_objects(request.user)\n .annotate(\n", "issue": "Make user handle search case insensitive\n**Is your feature request related to a problem? Please describe.**\r\nI'm not sure if this is intentional, but currently, you can only look up users if you spell their name witht he correct cases. This is also the case when you try to find BW users from Mastodon. I don't think case sensitivity makes much sense here.\r\n\r\n**Additional context**\r\nMastodon hase case insensitive user names as well.\r\n\r\nOne can also register `Tofuwabohu` when there is a `tofuwabohu` already. I think that should be prevented before instances get opened?\n", "before_files": [{"content": "\"\"\" helper functions used in various views \"\"\"\nimport re\nfrom requests import HTTPError\nfrom django.core.exceptions import FieldError\nfrom django.db.models import Count, Max, Q\n\nfrom bookwyrm import activitypub, models\nfrom bookwyrm.connectors import ConnectorException, get_data\nfrom bookwyrm.status import create_generated_note\nfrom bookwyrm.utils import regex\n\n\ndef get_user_from_username(viewer, username):\n \"\"\" helper function to resolve a localname or a username to a user \"\"\"\n # raises DoesNotExist if user is now found\n try:\n return models.User.viewer_aware_objects(viewer).get(localname=username)\n except models.User.DoesNotExist:\n return models.User.viewer_aware_objects(viewer).get(username=username)\n\n\ndef is_api_request(request):\n \"\"\" check whether a request is asking for html or data \"\"\"\n return \"json\" in request.headers.get(\"Accept\", \"\") or request.path[-5:] == \".json\"\n\n\ndef is_bookwyrm_request(request):\n \"\"\" check if the request is coming from another bookwyrm instance \"\"\"\n user_agent = request.headers.get(\"User-Agent\")\n if user_agent is None or re.search(regex.bookwyrm_user_agent, user_agent) is None:\n return False\n return True\n\n\ndef object_visible_to_user(viewer, obj):\n \"\"\" is a user authorized to view an object? 
\"\"\"\n if not obj:\n return False\n\n # viewer can't see it if the object's owner blocked them\n if viewer in obj.user.blocks.all():\n return False\n\n # you can see your own posts and any public or unlisted posts\n if viewer == obj.user or obj.privacy in [\"public\", \"unlisted\"]:\n return True\n\n # you can see the followers only posts of people you follow\n if obj.privacy == \"followers\" and obj.user.followers.filter(id=viewer.id).first():\n return True\n\n # you can see dms you are tagged in\n if isinstance(obj, models.Status):\n if obj.privacy == \"direct\" and obj.mention_users.filter(id=viewer.id).first():\n return True\n return False\n\n\ndef privacy_filter(viewer, queryset, privacy_levels=None, following_only=False):\n \"\"\" filter objects that have \"user\" and \"privacy\" fields \"\"\"\n privacy_levels = privacy_levels or [\"public\", \"unlisted\", \"followers\", \"direct\"]\n # if there'd a deleted field, exclude deleted items\n try:\n queryset = queryset.filter(deleted=False)\n except FieldError:\n pass\n\n # exclude blocks from both directions\n if not viewer.is_anonymous:\n blocked = models.User.objects.filter(id__in=viewer.blocks.all()).all()\n queryset = queryset.exclude(Q(user__in=blocked) | Q(user__blocks=viewer))\n\n # you can't see followers only or direct messages if you're not logged in\n if viewer.is_anonymous:\n privacy_levels = [p for p in privacy_levels if not p in [\"followers\", \"direct\"]]\n\n # filter to only privided privacy levels\n queryset = queryset.filter(privacy__in=privacy_levels)\n\n # only include statuses the user follows\n if following_only:\n queryset = queryset.exclude(\n ~Q( # remove everythign except\n Q(user__in=viewer.following.all())\n | Q(user=viewer) # user following\n | Q(mention_users=viewer) # is self # mentions user\n ),\n )\n # exclude followers-only statuses the user doesn't follow\n elif \"followers\" in privacy_levels:\n queryset = queryset.exclude(\n ~Q( # user isn't following and it isn't their own status\n Q(user__in=viewer.following.all()) | Q(user=viewer)\n ),\n privacy=\"followers\", # and the status is followers only\n )\n\n # exclude direct messages not intended for the user\n if \"direct\" in privacy_levels:\n try:\n queryset = queryset.exclude(\n ~Q(Q(user=viewer) | Q(mention_users=viewer)), privacy=\"direct\"\n )\n except FieldError:\n queryset = queryset.exclude(~Q(user=viewer), privacy=\"direct\")\n\n return queryset\n\n\ndef handle_remote_webfinger(query):\n \"\"\" webfingerin' other servers \"\"\"\n user = None\n\n # usernames could be @user@domain or user@domain\n if not query:\n return None\n\n if query[0] == \"@\":\n query = query[1:]\n\n try:\n domain = query.split(\"@\")[1]\n except IndexError:\n return None\n\n try:\n user = models.User.objects.get(username=query)\n except models.User.DoesNotExist:\n url = \"https://%s/.well-known/webfinger?resource=acct:%s\" % (domain, query)\n try:\n data = get_data(url)\n except (ConnectorException, HTTPError):\n return None\n\n for link in data.get(\"links\"):\n if link.get(\"rel\") == \"self\":\n try:\n user = activitypub.resolve_remote_id(\n link[\"href\"], model=models.User\n )\n except (KeyError, activitypub.ActivitySerializerError):\n return None\n return user\n\n\ndef get_edition(book_id):\n \"\"\" look up a book in the db and return an edition \"\"\"\n book = models.Book.objects.select_subclasses().get(id=book_id)\n if isinstance(book, models.Work):\n book = book.get_default_edition()\n return book\n\n\ndef handle_reading_status(user, shelf, book, privacy):\n 
\"\"\" post about a user reading a book \"\"\"\n # tell the world about this cool thing that happened\n try:\n message = {\n \"to-read\": \"wants to read\",\n \"reading\": \"started reading\",\n \"read\": \"finished reading\",\n }[shelf.identifier]\n except KeyError:\n # it's a non-standard shelf, don't worry about it\n return\n\n status = create_generated_note(user, message, mention_books=[book], privacy=privacy)\n status.save()\n\n\ndef is_blocked(viewer, user):\n \"\"\" is this viewer blocked by the user? \"\"\"\n if viewer.is_authenticated and viewer in user.blocks.all():\n return True\n return False\n\n\ndef get_discover_books():\n \"\"\" list of books for the discover page \"\"\"\n return list(\n set(\n models.Edition.objects.filter(\n review__published_date__isnull=False,\n review__deleted=False,\n review__user__local=True,\n review__privacy__in=[\"public\", \"unlisted\"],\n )\n .exclude(cover__exact=\"\")\n .annotate(Max(\"review__published_date\"))\n .order_by(\"-review__published_date__max\")[:6]\n )\n )\n\n\ndef get_suggested_users(user):\n \"\"\" bookwyrm users you don't already know \"\"\"\n return (\n get_annotated_users(\n user,\n ~Q(id=user.id),\n ~Q(followers=user),\n ~Q(follower_requests=user),\n bookwyrm_user=True,\n )\n .order_by(\"-mutuals\", \"-last_active_date\")\n .all()[:5]\n )\n\n\ndef get_annotated_users(user, *args, **kwargs):\n \"\"\" Users, annotated with things they have in common \"\"\"\n return (\n models.User.objects.filter(discoverable=True, is_active=True, *args, **kwargs)\n .exclude(Q(id__in=user.blocks.all()) | Q(blocks=user))\n .annotate(\n mutuals=Count(\n \"following\",\n filter=Q(\n ~Q(id=user.id),\n ~Q(id__in=user.following.all()),\n following__in=user.following.all(),\n ),\n distinct=True,\n ),\n shared_books=Count(\n \"shelfbook\",\n filter=Q(\n ~Q(id=user.id),\n shelfbook__book__parent_work__in=[\n s.book.parent_work for s in user.shelfbook_set.all()\n ],\n ),\n distinct=True,\n ),\n )\n )\n", "path": "bookwyrm/views/helpers.py"}, {"content": "\"\"\" search views\"\"\"\nimport re\n\nfrom django.contrib.postgres.search import TrigramSimilarity\nfrom django.db.models.functions import Greatest\nfrom django.http import JsonResponse\nfrom django.template.response import TemplateResponse\nfrom django.views import View\n\nfrom bookwyrm import models\nfrom bookwyrm.connectors import connector_manager\nfrom bookwyrm.utils import regex\nfrom .helpers import is_api_request, privacy_filter\nfrom .helpers import handle_remote_webfinger\n\n\n# pylint: disable= no-self-use\nclass Search(View):\n \"\"\" search users or books \"\"\"\n\n def get(self, request):\n \"\"\" that search bar up top \"\"\"\n query = request.GET.get(\"q\")\n min_confidence = request.GET.get(\"min_confidence\", 0.1)\n\n if is_api_request(request):\n # only return local book results via json so we don't cascade\n book_results = connector_manager.local_search(\n query, min_confidence=min_confidence\n )\n return JsonResponse([r.json() for r in book_results], safe=False)\n\n # use webfinger for mastodon style [email protected] username\n if query and re.match(regex.full_username, query):\n handle_remote_webfinger(query)\n\n # do a user search\n user_results = (\n models.User.viewer_aware_objects(request.user)\n .annotate(\n similarity=Greatest(\n TrigramSimilarity(\"username\", query),\n TrigramSimilarity(\"localname\", query),\n )\n )\n .filter(\n similarity__gt=0.5,\n )\n .order_by(\"-similarity\")[:10]\n )\n\n # any relevent lists?\n list_results = (\n privacy_filter(\n request.user,\n 
models.List.objects,\n privacy_levels=[\"public\", \"followers\"],\n )\n .annotate(\n similarity=Greatest(\n TrigramSimilarity(\"name\", query),\n TrigramSimilarity(\"description\", query),\n )\n )\n .filter(\n similarity__gt=0.1,\n )\n .order_by(\"-similarity\")[:10]\n )\n\n book_results = connector_manager.search(query, min_confidence=min_confidence)\n data = {\n \"book_results\": book_results,\n \"user_results\": user_results,\n \"list_results\": list_results,\n \"query\": query or \"\",\n }\n return TemplateResponse(request, \"search_results.html\", data)\n", "path": "bookwyrm/views/search.py"}], "after_files": [{"content": "\"\"\" helper functions used in various views \"\"\"\nimport re\nfrom requests import HTTPError\nfrom django.core.exceptions import FieldError\nfrom django.db.models import Count, Max, Q\n\nfrom bookwyrm import activitypub, models\nfrom bookwyrm.connectors import ConnectorException, get_data\nfrom bookwyrm.status import create_generated_note\nfrom bookwyrm.utils import regex\n\n\ndef get_user_from_username(viewer, username):\n \"\"\" helper function to resolve a localname or a username to a user \"\"\"\n # raises DoesNotExist if user is now found\n try:\n return models.User.viewer_aware_objects(viewer).get(localname=username)\n except models.User.DoesNotExist:\n return models.User.viewer_aware_objects(viewer).get(username=username)\n\n\ndef is_api_request(request):\n \"\"\" check whether a request is asking for html or data \"\"\"\n return \"json\" in request.headers.get(\"Accept\", \"\") or request.path[-5:] == \".json\"\n\n\ndef is_bookwyrm_request(request):\n \"\"\" check if the request is coming from another bookwyrm instance \"\"\"\n user_agent = request.headers.get(\"User-Agent\")\n if user_agent is None or re.search(regex.bookwyrm_user_agent, user_agent) is None:\n return False\n return True\n\n\ndef object_visible_to_user(viewer, obj):\n \"\"\" is a user authorized to view an object? 
\"\"\"\n if not obj:\n return False\n\n # viewer can't see it if the object's owner blocked them\n if viewer in obj.user.blocks.all():\n return False\n\n # you can see your own posts and any public or unlisted posts\n if viewer == obj.user or obj.privacy in [\"public\", \"unlisted\"]:\n return True\n\n # you can see the followers only posts of people you follow\n if obj.privacy == \"followers\" and obj.user.followers.filter(id=viewer.id).first():\n return True\n\n # you can see dms you are tagged in\n if isinstance(obj, models.Status):\n if obj.privacy == \"direct\" and obj.mention_users.filter(id=viewer.id).first():\n return True\n return False\n\n\ndef privacy_filter(viewer, queryset, privacy_levels=None, following_only=False):\n \"\"\" filter objects that have \"user\" and \"privacy\" fields \"\"\"\n privacy_levels = privacy_levels or [\"public\", \"unlisted\", \"followers\", \"direct\"]\n # if there'd a deleted field, exclude deleted items\n try:\n queryset = queryset.filter(deleted=False)\n except FieldError:\n pass\n\n # exclude blocks from both directions\n if not viewer.is_anonymous:\n blocked = models.User.objects.filter(id__in=viewer.blocks.all()).all()\n queryset = queryset.exclude(Q(user__in=blocked) | Q(user__blocks=viewer))\n\n # you can't see followers only or direct messages if you're not logged in\n if viewer.is_anonymous:\n privacy_levels = [p for p in privacy_levels if not p in [\"followers\", \"direct\"]]\n\n # filter to only privided privacy levels\n queryset = queryset.filter(privacy__in=privacy_levels)\n\n # only include statuses the user follows\n if following_only:\n queryset = queryset.exclude(\n ~Q( # remove everythign except\n Q(user__in=viewer.following.all())\n | Q(user=viewer) # user following\n | Q(mention_users=viewer) # is self # mentions user\n ),\n )\n # exclude followers-only statuses the user doesn't follow\n elif \"followers\" in privacy_levels:\n queryset = queryset.exclude(\n ~Q( # user isn't following and it isn't their own status\n Q(user__in=viewer.following.all()) | Q(user=viewer)\n ),\n privacy=\"followers\", # and the status is followers only\n )\n\n # exclude direct messages not intended for the user\n if \"direct\" in privacy_levels:\n try:\n queryset = queryset.exclude(\n ~Q(Q(user=viewer) | Q(mention_users=viewer)), privacy=\"direct\"\n )\n except FieldError:\n queryset = queryset.exclude(~Q(user=viewer), privacy=\"direct\")\n\n return queryset\n\n\ndef handle_remote_webfinger(query):\n \"\"\" webfingerin' other servers \"\"\"\n user = None\n\n # usernames could be @user@domain or user@domain\n if not query:\n return None\n\n if query[0] == \"@\":\n query = query[1:]\n\n try:\n domain = query.split(\"@\")[1]\n except IndexError:\n return None\n\n try:\n user = models.User.objects.get(username__iexact=query)\n except models.User.DoesNotExist:\n url = \"https://%s/.well-known/webfinger?resource=acct:%s\" % (domain, query)\n try:\n data = get_data(url)\n except (ConnectorException, HTTPError):\n return None\n\n for link in data.get(\"links\"):\n if link.get(\"rel\") == \"self\":\n try:\n user = activitypub.resolve_remote_id(\n link[\"href\"], model=models.User\n )\n except (KeyError, activitypub.ActivitySerializerError):\n return None\n return user\n\n\ndef get_edition(book_id):\n \"\"\" look up a book in the db and return an edition \"\"\"\n book = models.Book.objects.select_subclasses().get(id=book_id)\n if isinstance(book, models.Work):\n book = book.get_default_edition()\n return book\n\n\ndef handle_reading_status(user, shelf, book, 
privacy):\n \"\"\" post about a user reading a book \"\"\"\n # tell the world about this cool thing that happened\n try:\n message = {\n \"to-read\": \"wants to read\",\n \"reading\": \"started reading\",\n \"read\": \"finished reading\",\n }[shelf.identifier]\n except KeyError:\n # it's a non-standard shelf, don't worry about it\n return\n\n status = create_generated_note(user, message, mention_books=[book], privacy=privacy)\n status.save()\n\n\ndef is_blocked(viewer, user):\n \"\"\" is this viewer blocked by the user? \"\"\"\n if viewer.is_authenticated and viewer in user.blocks.all():\n return True\n return False\n\n\ndef get_discover_books():\n \"\"\" list of books for the discover page \"\"\"\n return list(\n set(\n models.Edition.objects.filter(\n review__published_date__isnull=False,\n review__deleted=False,\n review__user__local=True,\n review__privacy__in=[\"public\", \"unlisted\"],\n )\n .exclude(cover__exact=\"\")\n .annotate(Max(\"review__published_date\"))\n .order_by(\"-review__published_date__max\")[:6]\n )\n )\n\n\ndef get_suggested_users(user):\n \"\"\" bookwyrm users you don't already know \"\"\"\n return (\n get_annotated_users(\n user,\n ~Q(id=user.id),\n ~Q(followers=user),\n ~Q(follower_requests=user),\n bookwyrm_user=True,\n )\n .order_by(\"-mutuals\", \"-last_active_date\")\n .all()[:5]\n )\n\n\ndef get_annotated_users(user, *args, **kwargs):\n \"\"\" Users, annotated with things they have in common \"\"\"\n return (\n models.User.objects.filter(discoverable=True, is_active=True, *args, **kwargs)\n .exclude(Q(id__in=user.blocks.all()) | Q(blocks=user))\n .annotate(\n mutuals=Count(\n \"following\",\n filter=Q(\n ~Q(id=user.id),\n ~Q(id__in=user.following.all()),\n following__in=user.following.all(),\n ),\n distinct=True,\n ),\n shared_books=Count(\n \"shelfbook\",\n filter=Q(\n ~Q(id=user.id),\n shelfbook__book__parent_work__in=[\n s.book.parent_work for s in user.shelfbook_set.all()\n ],\n ),\n distinct=True,\n ),\n )\n )\n", "path": "bookwyrm/views/helpers.py"}, {"content": "\"\"\" search views\"\"\"\nimport re\n\nfrom django.contrib.postgres.search import TrigramSimilarity\nfrom django.db.models.functions import Greatest\nfrom django.http import JsonResponse\nfrom django.template.response import TemplateResponse\nfrom django.views import View\n\nfrom bookwyrm import models\nfrom bookwyrm.connectors import connector_manager\nfrom bookwyrm.utils import regex\nfrom .helpers import is_api_request, privacy_filter\nfrom .helpers import handle_remote_webfinger\n\n\n# pylint: disable= no-self-use\nclass Search(View):\n \"\"\" search users or books \"\"\"\n\n def get(self, request):\n \"\"\" that search bar up top \"\"\"\n query = request.GET.get(\"q\")\n min_confidence = request.GET.get(\"min_confidence\", 0.1)\n\n if is_api_request(request):\n # only return local book results via json so we don't cascade\n book_results = connector_manager.local_search(\n query, min_confidence=min_confidence\n )\n return JsonResponse([r.json() for r in book_results], safe=False)\n\n # use webfinger for mastodon style [email protected] username\n if query and re.match(regex.full_username, query):\n handle_remote_webfinger(query)\n\n # do a user search\n user_results = (\n models.User.viewer_aware_objects(request.user)\n .annotate(\n similarity=Greatest(\n TrigramSimilarity(\"username\", query),\n TrigramSimilarity(\"localname\", query),\n )\n )\n .filter(\n similarity__gt=0.5,\n )\n .order_by(\"-similarity\")[:10]\n )\n\n # any relevent lists?\n list_results = (\n privacy_filter(\n 
request.user,\n models.List.objects,\n privacy_levels=[\"public\", \"followers\"],\n )\n .annotate(\n similarity=Greatest(\n TrigramSimilarity(\"name\", query),\n TrigramSimilarity(\"description\", query),\n )\n )\n .filter(\n similarity__gt=0.1,\n )\n .order_by(\"-similarity\")[:10]\n )\n\n book_results = connector_manager.search(query, min_confidence=min_confidence)\n data = {\n \"book_results\": book_results,\n \"user_results\": user_results,\n \"list_results\": list_results,\n \"query\": query or \"\",\n }\n return TemplateResponse(request, \"search_results.html\", data)\n", "path": "bookwyrm/views/search.py"}]}
| 3,393 | 220 |
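The patch above only swaps the exact `username=query` lookup in `handle_remote_webfinger` for a case-insensitive `username__iexact=query`. A small sketch of the difference follows, assuming bookwyrm's Django settings are configured and using an invented handle.

```python
# Illustration only: the handle is invented and Django must be configured
# (settings module loaded, django.setup() called) before the ORM can be used.
from bookwyrm import models

query = "[email protected]"

# exact lookup (pre-patch): misses an account stored as "[email protected]"
exact = models.User.objects.filter(username=query).first()

# case-insensitive lookup (post-patch): matches regardless of case,
# mirroring how Mastodon treats handles
relaxed = models.User.objects.filter(username__iexact=query).first()
```

Note that the diff does not touch registration, so the issue's second point (preventing `Tofuwabohu` from coexisting with `tofuwabohu`) is left unaddressed.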
gh_patches_debug_6496 | rasdani/github-patches | git_diff | interlegis__sapl-3150 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Períodos de parlamentares sumiram (parliamentarians' terms disappeared)
## Expected Behavior
All parliamentarians' terms, corresponding to the legislatures, should appear.
## Current Behavior
Only terms from the 6th legislature to the current one appear.
Steps to Reproduce (for bugs)
https://sapl.agudo.rs.leg.br/parlamentar/
## Context
Searching parliamentarians by legislature.
## Screenshots

## Your Environment
Version used (Release): 3.1.160-RC11
Browser name and version: Version 80.0.3987.163 (official build)
Operating system name and version (desktop or mobile): Ubuntu 16.04 64 bits
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sapl/api/pagination.py`
Content:
```
1 from django.core.paginator import EmptyPage
2 from rest_framework import pagination
3 from rest_framework.response import Response
4
5
6 class StandardPagination(pagination.PageNumberPagination):
7 page_size = 10
8 page_size_query_param = 'page_size'
9 max_page_size = 50
10
11 def get_paginated_response(self, data):
12 try:
13 previous_page_number = self.page.previous_page_number()
14 except EmptyPage:
15 previous_page_number = None
16
17 try:
18 next_page_number = self.page.next_page_number()
19 except EmptyPage:
20 next_page_number = None
21
22 return Response({
23 'pagination': {
24 'previous_page': previous_page_number,
25 'next_page': next_page_number,
26 'start_index': self.page.start_index(),
27 'end_index': self.page.end_index(),
28 'total_entries': self.page.paginator.count,
29 'total_pages': self.page.paginator.num_pages,
30 'page': self.page.number,
31 },
32 'results': data,
33
34 })
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sapl/api/pagination.py b/sapl/api/pagination.py
--- a/sapl/api/pagination.py
+++ b/sapl/api/pagination.py
@@ -8,6 +8,11 @@
page_size_query_param = 'page_size'
max_page_size = 50
+ def paginate_queryset(self, queryset, request, view=None):
+ if request.query_params.get('get_all', False) == 'true':
+ return None
+ return super().paginate_queryset(queryset, request, view=view)
+
def get_paginated_response(self, data):
try:
previous_page_number = self.page.previous_page_number()
|
{"golden_diff": "diff --git a/sapl/api/pagination.py b/sapl/api/pagination.py\n--- a/sapl/api/pagination.py\n+++ b/sapl/api/pagination.py\n@@ -8,6 +8,11 @@\n page_size_query_param = 'page_size'\n max_page_size = 50\n \n+ def paginate_queryset(self, queryset, request, view=None):\n+ if request.query_params.get('get_all', False) == 'true':\n+ return None\n+ return super().paginate_queryset(queryset, request, view=view)\n+\n def get_paginated_response(self, data):\n try:\n previous_page_number = self.page.previous_page_number()\n", "issue": "Per\u00edodos de parlamentares sumiram\n## Comportamento Esperado\r\nDevem aparecer todos os per\u00edodos dos parlamentares, correspondentes \u00e0s legislaturas.\r\n\r\n## Comportamento Atual\r\nAparecem apenas da 6\u00aa legislatura \u00e0 atual.\r\n\r\nPassos para Reproduzir (para bugs)\r\nhttps://sapl.agudo.rs.leg.br/parlamentar/\r\n\r\n## Contexto\r\nPesquisa de parlamentares por legislatura.\r\n\r\n## Imagens do Ocorrido\r\n\r\n\r\n## Seu Ambiente\r\nVers\u00e3o usada (Release): 3.1.160-RC11\r\nNome e vers\u00e3o do navegador: Vers\u00e3o 80.0.3987.163 (Vers\u00e3o oficial)\r\nNome e vers\u00e3o do Sistema Operacional (desktop ou mobile): Ubuntu 16.04 64 bits\r\n\r\n\n", "before_files": [{"content": "from django.core.paginator import EmptyPage\nfrom rest_framework import pagination\nfrom rest_framework.response import Response\n\n\nclass StandardPagination(pagination.PageNumberPagination):\n page_size = 10\n page_size_query_param = 'page_size'\n max_page_size = 50\n\n def get_paginated_response(self, data):\n try:\n previous_page_number = self.page.previous_page_number()\n except EmptyPage:\n previous_page_number = None\n\n try:\n next_page_number = self.page.next_page_number()\n except EmptyPage:\n next_page_number = None\n\n return Response({\n 'pagination': {\n 'previous_page': previous_page_number,\n 'next_page': next_page_number,\n 'start_index': self.page.start_index(),\n 'end_index': self.page.end_index(),\n 'total_entries': self.page.paginator.count,\n 'total_pages': self.page.paginator.num_pages,\n 'page': self.page.number,\n },\n 'results': data,\n\n })\n", "path": "sapl/api/pagination.py"}], "after_files": [{"content": "from django.core.paginator import EmptyPage\nfrom rest_framework import pagination\nfrom rest_framework.response import Response\n\n\nclass StandardPagination(pagination.PageNumberPagination):\n page_size = 10\n page_size_query_param = 'page_size'\n max_page_size = 50\n\n def paginate_queryset(self, queryset, request, view=None):\n if request.query_params.get('get_all', False) == 'true':\n return None\n return super().paginate_queryset(queryset, request, view=view)\n\n def get_paginated_response(self, data):\n try:\n previous_page_number = self.page.previous_page_number()\n except EmptyPage:\n previous_page_number = None\n\n try:\n next_page_number = self.page.next_page_number()\n except EmptyPage:\n next_page_number = None\n\n return Response({\n 'pagination': {\n 'previous_page': previous_page_number,\n 'next_page': next_page_number,\n 'start_index': self.page.start_index(),\n 'end_index': self.page.end_index(),\n 'total_entries': self.page.paginator.count,\n 'total_pages': self.page.paginator.num_pages,\n 'page': self.page.number,\n },\n 'results': data,\n\n })\n", "path": "sapl/api/pagination.py"}]}
| 807 | 142 |
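A brief note on why the patch above works: in Django REST Framework, `ListModelMixin.list()` only builds a paginated response when `paginate_queryset()` returns a page; when it returns `None`, the full queryset is serialized in a single response, which is what lets every legislature period show up again. The sketch below restates the patched class in isolation — it assumes nothing about sapl's views or URLs, and the trailing comment paraphrases DRF's list flow rather than quoting it.

```python
from rest_framework import pagination


class StandardPagination(pagination.PageNumberPagination):
    page_size = 10
    page_size_query_param = "page_size"
    max_page_size = 50

    def paginate_queryset(self, queryset, request, view=None):
        # "?get_all=true" opts out of pagination entirely: returning None
        # makes DRF serialize the whole queryset in one response.
        if request.query_params.get("get_all", False) == "true":
            return None
        return super().paginate_queryset(queryset, request, view=view)


# Inside DRF's ListModelMixin.list(), roughly:
#   page = self.paginate_queryset(queryset)
#   if page is not None:          # normal, paginated path
#       return self.get_paginated_response(...)
#   return Response(self.get_serializer(queryset, many=True).data)  # get_all path
```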
gh_patches_debug_20074
|
rasdani/github-patches
|
git_diff
|
litestar-org__litestar-3430
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: regular handler under asgi mount path conflicts with routing to asgi app
### Description
If we have a regular handler mounted under an asgi-mounted path, and the path of the regular handler is a non-prefix substring of a request path, the request will not get routed to the asgi app.
I.e., if we have an asgi mounted on `"/"`, and a regular handler at `"/path"`, a request to `"/some/path"` does not get routed to the asgi app.
### URL to code causing the issue
_No response_
### MCVE
```python
from __future__ import annotations
from typing import TYPE_CHECKING
from litestar import Litestar, asgi, get
from litestar.testing import TestClient
if TYPE_CHECKING:
from litestar.types.asgi_types import Receive, Scope, Send
async def asgi_app(scope: Scope, receive: Receive, send: Send) -> None:
assert scope["type"] == "http"
await send({
"type": "http.response.start",
"status": 200,
"headers": [
(b"content-type", b"text/plain"),
(b"content-length", b"%d" % len(scope["raw_path"])),
],
})
await send({
"type": "http.response.body",
"body": scope["raw_path"],
})
asgi_handler = asgi("/", is_mount=True)(asgi_app)
@get("/path")
def get_handler() -> str:
return "Hello, world!"
def test_regular_handler_under_mounted_asgi_app() -> None:
app = Litestar(
route_handlers=[asgi("/", is_mount=True)(asgi_app), get_handler],
openapi_config=None,
debug=True,
)
with TestClient(app) as client:
resp = client.get("/some/path") # currently this is a 404
assert resp.content == b"/some/path"
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
main
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
<!-- POLAR PLEDGE BADGE START -->
---
> [!NOTE]
> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and
> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.
>
> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)
> * If you would like to see an issue prioritized, make a pledge towards it!
> * We receive the pledge once the issue is completed & verified
> * This, along with engagement in the community, helps us know which features are a priority to our users.
<a href="https://polar.sh/litestar-org/litestar/issues/3429">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/3429/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/3429/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/_asgi/routing_trie/traversal.py`
Content:
```
1 from __future__ import annotations
2
3 from functools import lru_cache
4 from typing import TYPE_CHECKING, Any, Pattern
5
6 from litestar._asgi.routing_trie.types import PathParameterSentinel
7 from litestar.exceptions import MethodNotAllowedException, NotFoundException
8 from litestar.utils import normalize_path
9
10 __all__ = ("parse_node_handlers", "parse_path_params", "parse_path_to_route", "traverse_route_map")
11
12
13 if TYPE_CHECKING:
14 from litestar._asgi.routing_trie.types import ASGIHandlerTuple, RouteTrieNode
15 from litestar.types import ASGIApp, Method, RouteHandlerType
16 from litestar.types.internal_types import PathParameterDefinition
17
18
19 def traverse_route_map(
20 root_node: RouteTrieNode,
21 path: str,
22 ) -> tuple[RouteTrieNode, list[str], str]:
23 """Traverses the application route mapping and retrieves the correct node for the request url.
24
25 Args:
26 root_node: The root trie node.
27 path: The request's path.
28
29 Raises:
30 NotFoundException: If no correlating node is found.
31
32 Returns:
33 A tuple containing the target RouteMapNode and a list containing all path parameter values.
34 """
35 current_node = root_node
36 path_params: list[str] = []
37 path_components = [p for p in path.split("/") if p]
38
39 for i, component in enumerate(path_components):
40 if component in current_node.child_keys:
41 current_node = current_node.children[component]
42 continue
43
44 if current_node.is_path_param_node:
45 current_node = current_node.children[PathParameterSentinel]
46
47 if current_node.is_path_type:
48 path_params.append(normalize_path("/".join(path_components[i:])))
49 break
50
51 path_params.append(component)
52 continue
53
54 raise NotFoundException()
55
56 if not current_node.asgi_handlers:
57 raise NotFoundException()
58
59 return current_node, path_params, path
60
61
62 def parse_node_handlers(
63 node: RouteTrieNode,
64 method: Method | None,
65 ) -> ASGIHandlerTuple:
66 """Retrieve the handler tuple from the node.
67
68 Args:
69 node: The trie node to parse.
70 method: The scope's method.
71
72 Raises:
73 KeyError: If no matching method is found.
74
75 Returns:
76 An ASGI Handler tuple.
77 """
78
79 if node.is_asgi:
80 return node.asgi_handlers["asgi"]
81 if method:
82 return node.asgi_handlers[method]
83 return node.asgi_handlers["websocket"]
84
85
86 @lru_cache(1024)
87 def parse_path_params(
88 parameter_definitions: tuple[PathParameterDefinition, ...], path_param_values: tuple[str, ...]
89 ) -> dict[str, Any]:
90 """Parse path parameters into a dictionary of values.
91
92 Args:
93 parameter_definitions: The parameter definitions tuple from the route.
94 path_param_values: The string values extracted from the url
95
96 Raises:
97 ValueError: If any of path parameters can not be parsed into a value.
98
99 Returns:
100 A dictionary of parsed path parameters.
101 """
102 return {
103 param_definition.name: param_definition.parser(value) if param_definition.parser else value
104 for param_definition, value in zip(parameter_definitions, path_param_values)
105 }
106
107
108 def parse_path_to_route(
109 method: Method | None,
110 mount_paths_regex: Pattern | None,
111 mount_routes: dict[str, RouteTrieNode],
112 path: str,
113 plain_routes: set[str],
114 root_node: RouteTrieNode,
115 ) -> tuple[ASGIApp, RouteHandlerType, str, dict[str, Any]]:
116 """Given a scope object, retrieve the asgi_handlers and is_mount boolean values from correct trie node.
117
118 Args:
119 method: The scope's method, if any.
120 root_node: The root trie node.
121 path: The path to resolve scope instance.
122 plain_routes: The set of plain routes.
123 mount_routes: Mapping of mount routes to trie nodes.
124 mount_paths_regex: A compiled regex to match the mount routes.
125
126 Raises:
127 MethodNotAllowedException: if no matching method is found.
128 NotFoundException: If no correlating node is found or if path params can not be parsed into values according to the node definition.
129
130 Returns:
131 A tuple containing the stack of middlewares and the route handler that is wrapped by it.
132 """
133
134 try:
135 if path in plain_routes:
136 asgi_app, handler = parse_node_handlers(node=root_node.children[path], method=method)
137 return asgi_app, handler, path, {}
138
139 if mount_paths_regex and (match := mount_paths_regex.search(path)):
140 mount_path = path[match.start() : match.end()]
141 mount_node = mount_routes[mount_path]
142 remaining_path = path[match.end() :]
143 # since we allow regular handlers under static paths, we must validate that the request does not match
144 # any such handler.
145 children = [sub_route for sub_route in mount_node.children or [] if sub_route != mount_path]
146 if not children or all(sub_route not in path for sub_route in children): # type: ignore[operator]
147 asgi_app, handler = parse_node_handlers(node=mount_node, method=method)
148 remaining_path = remaining_path or "/"
149 if not mount_node.is_static:
150 remaining_path = remaining_path if remaining_path.endswith("/") else f"{remaining_path}/"
151 return asgi_app, handler, remaining_path, {}
152
153 node, path_parameters, path = traverse_route_map(
154 root_node=root_node,
155 path=path,
156 )
157 asgi_app, handler = parse_node_handlers(node=node, method=method)
158 key = method or ("asgi" if node.is_asgi else "websocket")
159 parsed_path_parameters = parse_path_params(node.path_parameters[key], tuple(path_parameters))
160
161 return (
162 asgi_app,
163 handler,
164 path,
165 parsed_path_parameters,
166 )
167 except KeyError as e:
168 raise MethodNotAllowedException() from e
169 except ValueError as e:
170 raise NotFoundException() from e
171
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/litestar/_asgi/routing_trie/traversal.py b/litestar/_asgi/routing_trie/traversal.py
--- a/litestar/_asgi/routing_trie/traversal.py
+++ b/litestar/_asgi/routing_trie/traversal.py
@@ -142,8 +142,12 @@
remaining_path = path[match.end() :]
# since we allow regular handlers under static paths, we must validate that the request does not match
# any such handler.
- children = [sub_route for sub_route in mount_node.children or [] if sub_route != mount_path]
- if not children or all(sub_route not in path for sub_route in children): # type: ignore[operator]
+ children = (
+ normalize_path(sub_route)
+ for sub_route in mount_node.children or []
+ if sub_route != mount_path and isinstance(sub_route, str)
+ )
+ if not any(remaining_path.startswith(f"{sub_route}/") for sub_route in children):
asgi_app, handler = parse_node_handlers(node=mount_node, method=method)
remaining_path = remaining_path or "/"
if not mount_node.is_static:
|
{"golden_diff": "diff --git a/litestar/_asgi/routing_trie/traversal.py b/litestar/_asgi/routing_trie/traversal.py\n--- a/litestar/_asgi/routing_trie/traversal.py\n+++ b/litestar/_asgi/routing_trie/traversal.py\n@@ -142,8 +142,12 @@\n remaining_path = path[match.end() :]\n # since we allow regular handlers under static paths, we must validate that the request does not match\n # any such handler.\n- children = [sub_route for sub_route in mount_node.children or [] if sub_route != mount_path]\n- if not children or all(sub_route not in path for sub_route in children): # type: ignore[operator]\n+ children = (\n+ normalize_path(sub_route)\n+ for sub_route in mount_node.children or []\n+ if sub_route != mount_path and isinstance(sub_route, str)\n+ )\n+ if not any(remaining_path.startswith(f\"{sub_route}/\") for sub_route in children):\n asgi_app, handler = parse_node_handlers(node=mount_node, method=method)\n remaining_path = remaining_path or \"/\"\n if not mount_node.is_static:\n", "issue": "Bug: regular handler under asgi mount path conflicts with routing to asgi app\n### Description\r\n\r\nIf we have a regular handler mounted under an asgi mounted path, and the path of the regular handler is a non-prefix sub-string of the a request path the request will not get routed to the asgi app.\r\n\r\nI.e., if we have an asgi mounted on `\"/\"`, and a regular handler at `\"/path\"`, a request to `\"/some/path\"` does not get routed to the asgi app.\r\n\r\n### URL to code causing the issue\r\n\r\n_No response_\r\n\r\n### MCVE\r\n\r\n```python\r\nfrom __future__ import annotations\r\n\r\nfrom typing import TYPE_CHECKING\r\n\r\nfrom litestar import Litestar, asgi, get\r\nfrom litestar.testing import TestClient\r\n\r\nif TYPE_CHECKING:\r\n from litestar.types.asgi_types import Receive, Scope, Send\r\n\r\n\r\nasync def asgi_app(scope: Scope, receive: Receive, send: Send) -> None:\r\n assert scope[\"type\"] == \"http\"\r\n await send({\r\n \"type\": \"http.response.start\",\r\n \"status\": 200,\r\n \"headers\": [\r\n (b\"content-type\", b\"text/plain\"),\r\n (b\"content-length\", b\"%d\" % len(scope[\"raw_path\"])),\r\n ],\r\n })\r\n await send({\r\n \"type\": \"http.response.body\",\r\n \"body\": scope[\"raw_path\"],\r\n })\r\n\r\nasgi_handler = asgi(\"/\", is_mount=True)(asgi_app)\r\n\r\n\r\n@get(\"/path\")\r\ndef get_handler() -> str:\r\n return \"Hello, world!\"\r\n\r\n\r\ndef test_regular_handler_under_mounted_asgi_app() -> None:\r\n app = Litestar(\r\n route_handlers=[asgi(\"/\", is_mount=True)(asgi_app), get_handler],\r\n openapi_config=None,\r\n debug=True,\r\n )\r\n\r\n with TestClient(app) as client:\r\n resp = client.get(\"/some/path\") # currently this is a 404\r\n assert resp.content == b\"/some/path\"\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\n```bash\r\n1. Go to '...'\r\n2. Click on '....'\r\n3. Scroll down to '....'\r\n4. 
See error\r\n```\r\n\r\n\r\n### Screenshots\r\n\r\n```bash\r\n\"\"\r\n```\r\n\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### Litestar Version\r\n\r\nmain\r\n\r\n### Platform\r\n\r\n- [X] Linux\r\n- [ ] Mac\r\n- [ ] Windows\r\n- [ ] Other (Please specify in the description above)\r\n\r\n<!-- POLAR PLEDGE BADGE START -->\r\n---\r\n> [!NOTE] \r\n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \r\n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\r\n>\r\n> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)\r\n> * If you would like to see an issue prioritized, make a pledge towards it!\r\n> * We receive the pledge once the issue is completed & verified\r\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\r\n\r\n<a href=\"https://polar.sh/litestar-org/litestar/issues/3429\">\r\n<picture>\r\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/3429/pledge.svg?darkmode=1\">\r\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/3429/pledge.svg\">\r\n</picture>\r\n</a>\r\n<!-- POLAR PLEDGE BADGE END -->\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom functools import lru_cache\nfrom typing import TYPE_CHECKING, Any, Pattern\n\nfrom litestar._asgi.routing_trie.types import PathParameterSentinel\nfrom litestar.exceptions import MethodNotAllowedException, NotFoundException\nfrom litestar.utils import normalize_path\n\n__all__ = (\"parse_node_handlers\", \"parse_path_params\", \"parse_path_to_route\", \"traverse_route_map\")\n\n\nif TYPE_CHECKING:\n from litestar._asgi.routing_trie.types import ASGIHandlerTuple, RouteTrieNode\n from litestar.types import ASGIApp, Method, RouteHandlerType\n from litestar.types.internal_types import PathParameterDefinition\n\n\ndef traverse_route_map(\n root_node: RouteTrieNode,\n path: str,\n) -> tuple[RouteTrieNode, list[str], str]:\n \"\"\"Traverses the application route mapping and retrieves the correct node for the request url.\n\n Args:\n root_node: The root trie node.\n path: The request's path.\n\n Raises:\n NotFoundException: If no correlating node is found.\n\n Returns:\n A tuple containing the target RouteMapNode and a list containing all path parameter values.\n \"\"\"\n current_node = root_node\n path_params: list[str] = []\n path_components = [p for p in path.split(\"/\") if p]\n\n for i, component in enumerate(path_components):\n if component in current_node.child_keys:\n current_node = current_node.children[component]\n continue\n\n if current_node.is_path_param_node:\n current_node = current_node.children[PathParameterSentinel]\n\n if current_node.is_path_type:\n path_params.append(normalize_path(\"/\".join(path_components[i:])))\n break\n\n path_params.append(component)\n continue\n\n raise NotFoundException()\n\n if not current_node.asgi_handlers:\n raise NotFoundException()\n\n return current_node, path_params, path\n\n\ndef parse_node_handlers(\n node: RouteTrieNode,\n method: Method | None,\n) -> ASGIHandlerTuple:\n \"\"\"Retrieve the handler tuple from the node.\n\n Args:\n node: The trie node to parse.\n method: The scope's method.\n\n Raises:\n KeyError: If no matching method is found.\n\n Returns:\n An ASGI Handler tuple.\n \"\"\"\n\n if 
node.is_asgi:\n return node.asgi_handlers[\"asgi\"]\n if method:\n return node.asgi_handlers[method]\n return node.asgi_handlers[\"websocket\"]\n\n\n@lru_cache(1024)\ndef parse_path_params(\n parameter_definitions: tuple[PathParameterDefinition, ...], path_param_values: tuple[str, ...]\n) -> dict[str, Any]:\n \"\"\"Parse path parameters into a dictionary of values.\n\n Args:\n parameter_definitions: The parameter definitions tuple from the route.\n path_param_values: The string values extracted from the url\n\n Raises:\n ValueError: If any of path parameters can not be parsed into a value.\n\n Returns:\n A dictionary of parsed path parameters.\n \"\"\"\n return {\n param_definition.name: param_definition.parser(value) if param_definition.parser else value\n for param_definition, value in zip(parameter_definitions, path_param_values)\n }\n\n\ndef parse_path_to_route(\n method: Method | None,\n mount_paths_regex: Pattern | None,\n mount_routes: dict[str, RouteTrieNode],\n path: str,\n plain_routes: set[str],\n root_node: RouteTrieNode,\n) -> tuple[ASGIApp, RouteHandlerType, str, dict[str, Any]]:\n \"\"\"Given a scope object, retrieve the asgi_handlers and is_mount boolean values from correct trie node.\n\n Args:\n method: The scope's method, if any.\n root_node: The root trie node.\n path: The path to resolve scope instance.\n plain_routes: The set of plain routes.\n mount_routes: Mapping of mount routes to trie nodes.\n mount_paths_regex: A compiled regex to match the mount routes.\n\n Raises:\n MethodNotAllowedException: if no matching method is found.\n NotFoundException: If no correlating node is found or if path params can not be parsed into values according to the node definition.\n\n Returns:\n A tuple containing the stack of middlewares and the route handler that is wrapped by it.\n \"\"\"\n\n try:\n if path in plain_routes:\n asgi_app, handler = parse_node_handlers(node=root_node.children[path], method=method)\n return asgi_app, handler, path, {}\n\n if mount_paths_regex and (match := mount_paths_regex.search(path)):\n mount_path = path[match.start() : match.end()]\n mount_node = mount_routes[mount_path]\n remaining_path = path[match.end() :]\n # since we allow regular handlers under static paths, we must validate that the request does not match\n # any such handler.\n children = [sub_route for sub_route in mount_node.children or [] if sub_route != mount_path]\n if not children or all(sub_route not in path for sub_route in children): # type: ignore[operator]\n asgi_app, handler = parse_node_handlers(node=mount_node, method=method)\n remaining_path = remaining_path or \"/\"\n if not mount_node.is_static:\n remaining_path = remaining_path if remaining_path.endswith(\"/\") else f\"{remaining_path}/\"\n return asgi_app, handler, remaining_path, {}\n\n node, path_parameters, path = traverse_route_map(\n root_node=root_node,\n path=path,\n )\n asgi_app, handler = parse_node_handlers(node=node, method=method)\n key = method or (\"asgi\" if node.is_asgi else \"websocket\")\n parsed_path_parameters = parse_path_params(node.path_parameters[key], tuple(path_parameters))\n\n return (\n asgi_app,\n handler,\n path,\n parsed_path_parameters,\n )\n except KeyError as e:\n raise MethodNotAllowedException() from e\n except ValueError as e:\n raise NotFoundException() from e\n", "path": "litestar/_asgi/routing_trie/traversal.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom functools import lru_cache\nfrom typing import TYPE_CHECKING, Any, Pattern\n\nfrom 
litestar._asgi.routing_trie.types import PathParameterSentinel\nfrom litestar.exceptions import MethodNotAllowedException, NotFoundException\nfrom litestar.utils import normalize_path\n\n__all__ = (\"parse_node_handlers\", \"parse_path_params\", \"parse_path_to_route\", \"traverse_route_map\")\n\n\nif TYPE_CHECKING:\n from litestar._asgi.routing_trie.types import ASGIHandlerTuple, RouteTrieNode\n from litestar.types import ASGIApp, Method, RouteHandlerType\n from litestar.types.internal_types import PathParameterDefinition\n\n\ndef traverse_route_map(\n root_node: RouteTrieNode,\n path: str,\n) -> tuple[RouteTrieNode, list[str], str]:\n \"\"\"Traverses the application route mapping and retrieves the correct node for the request url.\n\n Args:\n root_node: The root trie node.\n path: The request's path.\n\n Raises:\n NotFoundException: If no correlating node is found.\n\n Returns:\n A tuple containing the target RouteMapNode and a list containing all path parameter values.\n \"\"\"\n current_node = root_node\n path_params: list[str] = []\n path_components = [p for p in path.split(\"/\") if p]\n\n for i, component in enumerate(path_components):\n if component in current_node.child_keys:\n current_node = current_node.children[component]\n continue\n\n if current_node.is_path_param_node:\n current_node = current_node.children[PathParameterSentinel]\n\n if current_node.is_path_type:\n path_params.append(normalize_path(\"/\".join(path_components[i:])))\n break\n\n path_params.append(component)\n continue\n\n raise NotFoundException()\n\n if not current_node.asgi_handlers:\n raise NotFoundException()\n\n return current_node, path_params, path\n\n\ndef parse_node_handlers(\n node: RouteTrieNode,\n method: Method | None,\n) -> ASGIHandlerTuple:\n \"\"\"Retrieve the handler tuple from the node.\n\n Args:\n node: The trie node to parse.\n method: The scope's method.\n\n Raises:\n KeyError: If no matching method is found.\n\n Returns:\n An ASGI Handler tuple.\n \"\"\"\n\n if node.is_asgi:\n return node.asgi_handlers[\"asgi\"]\n if method:\n return node.asgi_handlers[method]\n return node.asgi_handlers[\"websocket\"]\n\n\n@lru_cache(1024)\ndef parse_path_params(\n parameter_definitions: tuple[PathParameterDefinition, ...], path_param_values: tuple[str, ...]\n) -> dict[str, Any]:\n \"\"\"Parse path parameters into a dictionary of values.\n\n Args:\n parameter_definitions: The parameter definitions tuple from the route.\n path_param_values: The string values extracted from the url\n\n Raises:\n ValueError: If any of path parameters can not be parsed into a value.\n\n Returns:\n A dictionary of parsed path parameters.\n \"\"\"\n return {\n param_definition.name: param_definition.parser(value) if param_definition.parser else value\n for param_definition, value in zip(parameter_definitions, path_param_values)\n }\n\n\ndef parse_path_to_route(\n method: Method | None,\n mount_paths_regex: Pattern | None,\n mount_routes: dict[str, RouteTrieNode],\n path: str,\n plain_routes: set[str],\n root_node: RouteTrieNode,\n) -> tuple[ASGIApp, RouteHandlerType, str, dict[str, Any]]:\n \"\"\"Given a scope object, retrieve the asgi_handlers and is_mount boolean values from correct trie node.\n\n Args:\n method: The scope's method, if any.\n root_node: The root trie node.\n path: The path to resolve scope instance.\n plain_routes: The set of plain routes.\n mount_routes: Mapping of mount routes to trie nodes.\n mount_paths_regex: A compiled regex to match the mount routes.\n\n Raises:\n MethodNotAllowedException: if no 
matching method is found.\n NotFoundException: If no correlating node is found or if path params can not be parsed into values according to the node definition.\n\n Returns:\n A tuple containing the stack of middlewares and the route handler that is wrapped by it.\n \"\"\"\n\n try:\n if path in plain_routes:\n asgi_app, handler = parse_node_handlers(node=root_node.children[path], method=method)\n return asgi_app, handler, path, {}\n\n if mount_paths_regex and (match := mount_paths_regex.search(path)):\n mount_path = path[match.start() : match.end()]\n mount_node = mount_routes[mount_path]\n remaining_path = path[match.end() :]\n # since we allow regular handlers under static paths, we must validate that the request does not match\n # any such handler.\n children = (\n normalize_path(sub_route)\n for sub_route in mount_node.children or []\n if sub_route != mount_path and isinstance(sub_route, str)\n )\n if not any(remaining_path.startswith(f\"{sub_route}/\") for sub_route in children):\n asgi_app, handler = parse_node_handlers(node=mount_node, method=method)\n remaining_path = remaining_path or \"/\"\n if not mount_node.is_static:\n remaining_path = remaining_path if remaining_path.endswith(\"/\") else f\"{remaining_path}/\"\n return asgi_app, handler, remaining_path, {}\n\n node, path_parameters, path = traverse_route_map(\n root_node=root_node,\n path=path,\n )\n asgi_app, handler = parse_node_handlers(node=node, method=method)\n key = method or (\"asgi\" if node.is_asgi else \"websocket\")\n parsed_path_parameters = parse_path_params(node.path_parameters[key], tuple(path_parameters))\n\n return (\n asgi_app,\n handler,\n path,\n parsed_path_parameters,\n )\n except KeyError as e:\n raise MethodNotAllowedException() from e\n except ValueError as e:\n raise NotFoundException() from e\n", "path": "litestar/_asgi/routing_trie/traversal.py"}]}
| 2,771 | 262 |
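To make the routing change above concrete, the standalone comparison below contrasts the old substring guard with the new prefix guard. The literal strings are illustrative stand-ins, not litestar's actual trie keys.

```python
# A regular handler at "/path" is registered under an ASGI app mounted on "/".
children = ["/path"]            # sub-routes found under the mount node
remaining_path = "/some/path"   # request path left over after the mount match

# Old guard: plain substring test — "/path" occurs inside "/some/path",
# so the request is wrongly withheld from the mounted ASGI app (404).
old_guard = all(child not in remaining_path for child in children)

# New guard: prefix test — "/some/path" does not start with "/path/",
# so the request is routed to the mounted ASGI app as expected.
new_guard = not any(remaining_path.startswith(f"{child}/") for child in children)

print(old_guard)  # False
print(new_guard)  # True
```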
gh_patches_debug_32196
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__pytorch-lightning-543
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Summary not working for model on GPU with multiple inputs
**Describe the bug**
When you want a summary for a model that requires multiple input parameters for forward, this doesn't work. You can set self.example_input_array to be a tuple, and there is some code for passing this to the forward method. However, if the model is on cuda, it tries to move this input directly to cuda without checking whether it is a tuple or list.
The line with the error is here:
[pytorch-lightning/blob/master/pytorch_lightning/root_module/memory.py#L53](https://github.com/williamFalcon/pytorch-lightning/blob/7324dd902b8d071f4889ab1274a4d4dc09de9a78/pytorch_lightning/root_module/memory.py#L53)
example of how it should be checked:
[pytorch-lightning/blob/master/pytorch_lightning/root_module/memory.py#L61](https://github.com/williamFalcon/pytorch-lightning/blob/7324dd902b8d071f4889ab1274a4d4dc09de9a78/pytorch_lightning/root_module/memory.py#L61)
**To Reproduce**
Steps to reproduce the behavior:
1. create a model that requires multiple inputs in the forward method.
2. set self.example_input_array to be a tuple
3. run the model on GPU
**Expected behavior**
a list with all layers and the input and output shapes of these layers.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Linux Mint 19.2
- Browser chrome
- Version 8.0.3904.97 (Official Build) (64-bit)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_lightning/core/memory.py`
Content:
```
1 '''
2 Generates a summary of a model's layers and dimensionality
3 '''
4
5 import gc
6 import os
7 import subprocess
8
9 import numpy as np
10 import pandas as pd
11 import torch
12 import logging
13
14
15 class ModelSummary(object):
16
17 def __init__(self, model, mode='full'):
18 '''
19 Generates summaries of model layers and dimensions.
20 '''
21 self.model = model
22 self.mode = mode
23 self.in_sizes = []
24 self.out_sizes = []
25
26 self.summarize()
27
28 def __str__(self):
29 return self.summary.__str__()
30
31 def __repr__(self):
32 return self.summary.__str__()
33
34 def named_modules(self):
35 if self.mode == 'full':
36 mods = self.model.named_modules()
37 mods = list(mods)[1:] # do not include root module (LightningModule)
38 elif self.mode == 'top':
39 # the children are the top-level modules
40 mods = self.model.named_children()
41 else:
42 mods = []
43 return list(mods)
44
45 def get_variable_sizes(self):
46 '''Run sample input through each layer to get output sizes'''
47 mods = self.named_modules()
48 in_sizes = []
49 out_sizes = []
50 input_ = self.model.example_input_array
51
52 if self.model.on_gpu:
53 input_ = input_.cuda(0)
54
55 if self.model.trainer.use_amp:
56 input_ = input_.half()
57
58 with torch.no_grad():
59
60 for _, m in mods:
61 if type(input_) is list or type(input_) is tuple: # pragma: no cover
62 out = m(*input_)
63 else:
64 out = m(input_)
65
66 if type(input_) is tuple or type(input_) is list: # pragma: no cover
67 in_size = []
68 for x in input_:
69 if type(x) is list:
70 in_size.append(len(x))
71 else:
72 in_size.append(x.size())
73 else:
74 in_size = np.array(input_.size())
75
76 in_sizes.append(in_size)
77
78 if type(out) is tuple or type(out) is list: # pragma: no cover
79 out_size = np.asarray([x.size() for x in out])
80 else:
81 out_size = np.array(out.size())
82
83 out_sizes.append(out_size)
84 input_ = out
85
86 self.in_sizes = in_sizes
87 self.out_sizes = out_sizes
88 assert len(in_sizes) == len(out_sizes)
89 return
90
91 def get_layer_names(self):
92 '''Collect Layer Names'''
93 mods = self.named_modules()
94 names = []
95 layers = []
96 for name, m in mods:
97 names += [name]
98 layers += [str(m.__class__)]
99
100 layer_types = [x.split('.')[-1][:-2] for x in layers]
101
102 self.layer_names = names
103 self.layer_types = layer_types
104 return
105
106 def get_parameter_sizes(self):
107 '''Get sizes of all parameters in `model`'''
108 mods = self.named_modules()
109 sizes = []
110 for _, m in mods:
111 p = list(m.parameters())
112 modsz = []
113 for j in range(len(p)):
114 modsz.append(np.array(p[j].size()))
115 sizes.append(modsz)
116
117 self.param_sizes = sizes
118 return
119
120 def get_parameter_nums(self):
121 '''Get number of parameters in each layer'''
122 param_nums = []
123 for mod in self.param_sizes:
124 all_params = 0
125 for p in mod:
126 all_params += np.prod(p)
127 param_nums.append(all_params)
128 self.param_nums = param_nums
129 return
130
131 def make_summary(self):
132 '''
133 Makes a summary listing with:
134
135 Layer Name, Layer Type, Input Size, Output Size, Number of Parameters
136 '''
137
138 cols = ['Name', 'Type', 'Params']
139 if self.model.example_input_array is not None:
140 cols.extend(['In_sizes', 'Out_sizes'])
141
142 df = pd.DataFrame(np.zeros((len(self.layer_names), len(cols))))
143 df.columns = cols
144
145 df['Name'] = self.layer_names
146 df['Type'] = self.layer_types
147 df['Params'] = self.param_nums
148 df['Params'] = df['Params'].map(get_human_readable_count)
149
150 if self.model.example_input_array is not None:
151 df['In_sizes'] = self.in_sizes
152 df['Out_sizes'] = self.out_sizes
153
154 self.summary = df
155 return
156
157 def summarize(self):
158 self.get_layer_names()
159 self.get_parameter_sizes()
160 self.get_parameter_nums()
161
162 if self.model.example_input_array is not None:
163 self.get_variable_sizes()
164 self.make_summary()
165
166
167 def print_mem_stack(): # pragma: no cover
168 for obj in gc.get_objects():
169 try:
170 if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
171 logging.info(type(obj), obj.size())
172 except Exception:
173 pass
174
175
176 def count_mem_items(): # pragma: no cover
177 num_params = 0
178 num_tensors = 0
179 for obj in gc.get_objects():
180 try:
181 if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
182 obj_type = str(type(obj))
183 if 'parameter' in obj_type:
184 num_params += 1
185 else:
186 num_tensors += 1
187 except Exception:
188 pass
189
190 return num_params, num_tensors
191
192
193 def get_memory_profile(mode):
194 """
195 'all' means return memory for all gpus
196 'min_max' means return memory for max and min
197 :param mode:
198 :return:
199 """
200 memory_map = get_gpu_memory_map()
201
202 if mode == 'min_max':
203 min_index, min_memory = min(memory_map.items(), key=lambda item: item[1])
204 max_index, max_memory = max(memory_map.items(), key=lambda item: item[1])
205
206 memory_map = {min_index: min_memory, max_index: max_memory}
207
208 return memory_map
209
210
211 def get_gpu_memory_map():
212 """Get the current gpu usage.
213
214 Returns
215 -------
216 usage: dict
217 Keys are device ids as integers.
218 Values are memory usage as integers in MB.
219 """
220 result = subprocess.run(
221 [
222 'nvidia-smi',
223 '--query-gpu=memory.used',
224 '--format=csv,nounits,noheader',
225 ],
226 encoding='utf-8',
227 capture_output=True,
228 check=True)
229 # Convert lines into a dictionary
230 gpu_memory = [int(x) for x in result.stdout.strip().split(os.linesep)]
231 gpu_memory_map = {f'gpu_{index}': memory for index, memory in enumerate(gpu_memory)}
232 return gpu_memory_map
233
234
235 def get_human_readable_count(number):
236 """
237 Abbreviates an integer number with K, M, B, T for thousands, millions,
238 billions and trillions, respectively.
239 Examples:
240 123 -> 123
241 1234 -> 1 K (one thousand)
242 2e6 -> 2 M (two million)
243 3e9 -> 3 B (three billion)
244 4e12 -> 4 T (four trillion)
245 5e15 -> 5,000 T
246 :param number: a positive integer number
247 :returns a string formatted according to the pattern described above.
248 """
249 assert number >= 0
250 labels = [' ', 'K', 'M', 'B', 'T']
251 num_digits = int(np.floor(np.log10(number)) + 1 if number > 0 else 1)
252 num_groups = int(np.ceil(num_digits / 3))
253 num_groups = min(num_groups, len(labels)) # don't abbreviate beyond trillions
254 shift = -3 * (num_groups - 1)
255 number = number * (10 ** shift)
256 index = num_groups - 1
257 return f'{int(number):,d} {labels[index]}'
258
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pytorch_lightning/core/memory.py b/pytorch_lightning/core/memory.py
--- a/pytorch_lightning/core/memory.py
+++ b/pytorch_lightning/core/memory.py
@@ -50,20 +50,31 @@
input_ = self.model.example_input_array
if self.model.on_gpu:
- input_ = input_.cuda(0)
+ device = next(self.model.parameters()).get_device()
+ # test if input is a list or a tuple
+ if isinstance(input_, (list, tuple)):
+ input_ = [input_i.cuda(device) if torch.is_tensor(input_i) else input_i
+ for input_i in input_]
+ else:
+ input_ = input_.cuda(device)
if self.model.trainer.use_amp:
- input_ = input_.half()
+ # test if it is not a list or a tuple
+ if isinstance(input_, (list, tuple)):
+ input_ = [input_i.half() if torch.is_tensor(input_i) else input_i
+ for input_i in input_]
+ else:
+ input_ = input_.half()
with torch.no_grad():
for _, m in mods:
- if type(input_) is list or type(input_) is tuple: # pragma: no cover
+ if isinstance(input_, (list, tuple)): # pragma: no cover
out = m(*input_)
else:
out = m(input_)
- if type(input_) is tuple or type(input_) is list: # pragma: no cover
+ if isinstance(input_, (list, tuple)): # pragma: no cover
in_size = []
for x in input_:
if type(x) is list:
@@ -75,7 +86,7 @@
in_sizes.append(in_size)
- if type(out) is tuple or type(out) is list: # pragma: no cover
+ if isinstance(out, (list, tuple)): # pragma: no cover
out_size = np.asarray([x.size() for x in out])
else:
out_size = np.array(out.size())
|
{"golden_diff": "diff --git a/pytorch_lightning/core/memory.py b/pytorch_lightning/core/memory.py\n--- a/pytorch_lightning/core/memory.py\n+++ b/pytorch_lightning/core/memory.py\n@@ -50,20 +50,31 @@\n input_ = self.model.example_input_array\n \n if self.model.on_gpu:\n- input_ = input_.cuda(0)\n+ device = next(self.model.parameters()).get_device()\n+ # test if input is a list or a tuple\n+ if isinstance(input_, (list, tuple)):\n+ input_ = [input_i.cuda(device) if torch.is_tensor(input_i) else input_i\n+ for input_i in input_]\n+ else:\n+ input_ = input_.cuda(device)\n \n if self.model.trainer.use_amp:\n- input_ = input_.half()\n+ # test if it is not a list or a tuple\n+ if isinstance(input_, (list, tuple)):\n+ input_ = [input_i.half() if torch.is_tensor(input_i) else input_i\n+ for input_i in input_]\n+ else:\n+ input_ = input_.half()\n \n with torch.no_grad():\n \n for _, m in mods:\n- if type(input_) is list or type(input_) is tuple: # pragma: no cover\n+ if isinstance(input_, (list, tuple)): # pragma: no cover\n out = m(*input_)\n else:\n out = m(input_)\n \n- if type(input_) is tuple or type(input_) is list: # pragma: no cover\n+ if isinstance(input_, (list, tuple)): # pragma: no cover\n in_size = []\n for x in input_:\n if type(x) is list:\n@@ -75,7 +86,7 @@\n \n in_sizes.append(in_size)\n \n- if type(out) is tuple or type(out) is list: # pragma: no cover\n+ if isinstance(out, (list, tuple)): # pragma: no cover\n out_size = np.asarray([x.size() for x in out])\n else:\n out_size = np.array(out.size())\n", "issue": "Summary not working for model on GPU with multiple inputs\n**Describe the bug**\r\nWhen you want a summary for a model that requires multiple input parameters for forward, then this doesn't work. You can set self.example_input_array to be a tuple and there is some code for passing this to the forward method. However, if the model is on cuda, it tries to pass to move this input directly to cuda without a check whether it is a tuple or list.\r\n\r\nthe line with the error is here: \r\n[pytorch-lightning/blob/master/pytorch_lightning/root_module/memory.py#L53](https://github.com/williamFalcon/pytorch-lightning/blob/7324dd902b8d071f4889ab1274a4d4dc09de9a78/pytorch_lightning/root_module/memory.py#L53)\r\n\r\nexample of how it should be checked: \r\n[pytorch-lightning/blob/master/pytorch_lightning/root_module/memory.py#L61](https://github.com/williamFalcon/pytorch-lightning/blob/7324dd902b8d071f4889ab1274a4d4dc09de9a78/pytorch_lightning/root_module/memory.py#L61)\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. create a model that requires multiple inputs in the forward method. \r\n2. set self.example_input_array to be a tuple\r\n3. run the model on GPU\r\n\r\n**Expected behavior**\r\na list with all layers and the input and output shapes of these layers. 
\r\n\r\n**Screenshots**\r\n\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Linux Mint 19.2\r\n - Browser chrome\r\n - Version 8.0.3904.97 (Official Build) (64-bit)\r\n\n", "before_files": [{"content": "'''\nGenerates a summary of a model's layers and dimensionality\n'''\n\nimport gc\nimport os\nimport subprocess\n\nimport numpy as np\nimport pandas as pd\nimport torch\nimport logging\n\n\nclass ModelSummary(object):\n\n def __init__(self, model, mode='full'):\n '''\n Generates summaries of model layers and dimensions.\n '''\n self.model = model\n self.mode = mode\n self.in_sizes = []\n self.out_sizes = []\n\n self.summarize()\n\n def __str__(self):\n return self.summary.__str__()\n\n def __repr__(self):\n return self.summary.__str__()\n\n def named_modules(self):\n if self.mode == 'full':\n mods = self.model.named_modules()\n mods = list(mods)[1:] # do not include root module (LightningModule)\n elif self.mode == 'top':\n # the children are the top-level modules\n mods = self.model.named_children()\n else:\n mods = []\n return list(mods)\n\n def get_variable_sizes(self):\n '''Run sample input through each layer to get output sizes'''\n mods = self.named_modules()\n in_sizes = []\n out_sizes = []\n input_ = self.model.example_input_array\n\n if self.model.on_gpu:\n input_ = input_.cuda(0)\n\n if self.model.trainer.use_amp:\n input_ = input_.half()\n\n with torch.no_grad():\n\n for _, m in mods:\n if type(input_) is list or type(input_) is tuple: # pragma: no cover\n out = m(*input_)\n else:\n out = m(input_)\n\n if type(input_) is tuple or type(input_) is list: # pragma: no cover\n in_size = []\n for x in input_:\n if type(x) is list:\n in_size.append(len(x))\n else:\n in_size.append(x.size())\n else:\n in_size = np.array(input_.size())\n\n in_sizes.append(in_size)\n\n if type(out) is tuple or type(out) is list: # pragma: no cover\n out_size = np.asarray([x.size() for x in out])\n else:\n out_size = np.array(out.size())\n\n out_sizes.append(out_size)\n input_ = out\n\n self.in_sizes = in_sizes\n self.out_sizes = out_sizes\n assert len(in_sizes) == len(out_sizes)\n return\n\n def get_layer_names(self):\n '''Collect Layer Names'''\n mods = self.named_modules()\n names = []\n layers = []\n for name, m in mods:\n names += [name]\n layers += [str(m.__class__)]\n\n layer_types = [x.split('.')[-1][:-2] for x in layers]\n\n self.layer_names = names\n self.layer_types = layer_types\n return\n\n def get_parameter_sizes(self):\n '''Get sizes of all parameters in `model`'''\n mods = self.named_modules()\n sizes = []\n for _, m in mods:\n p = list(m.parameters())\n modsz = []\n for j in range(len(p)):\n modsz.append(np.array(p[j].size()))\n sizes.append(modsz)\n\n self.param_sizes = sizes\n return\n\n def get_parameter_nums(self):\n '''Get number of parameters in each layer'''\n param_nums = []\n for mod in self.param_sizes:\n all_params = 0\n for p in mod:\n all_params += np.prod(p)\n param_nums.append(all_params)\n self.param_nums = param_nums\n return\n\n def make_summary(self):\n '''\n Makes a summary listing with:\n\n Layer Name, Layer Type, Input Size, Output Size, Number of Parameters\n '''\n\n cols = ['Name', 'Type', 'Params']\n if self.model.example_input_array is not None:\n cols.extend(['In_sizes', 'Out_sizes'])\n\n df = pd.DataFrame(np.zeros((len(self.layer_names), len(cols))))\n df.columns = cols\n\n df['Name'] = self.layer_names\n df['Type'] = self.layer_types\n df['Params'] = self.param_nums\n df['Params'] = df['Params'].map(get_human_readable_count)\n\n if 
self.model.example_input_array is not None:\n df['In_sizes'] = self.in_sizes\n df['Out_sizes'] = self.out_sizes\n\n self.summary = df\n return\n\n def summarize(self):\n self.get_layer_names()\n self.get_parameter_sizes()\n self.get_parameter_nums()\n\n if self.model.example_input_array is not None:\n self.get_variable_sizes()\n self.make_summary()\n\n\ndef print_mem_stack(): # pragma: no cover\n for obj in gc.get_objects():\n try:\n if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):\n logging.info(type(obj), obj.size())\n except Exception:\n pass\n\n\ndef count_mem_items(): # pragma: no cover\n num_params = 0\n num_tensors = 0\n for obj in gc.get_objects():\n try:\n if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):\n obj_type = str(type(obj))\n if 'parameter' in obj_type:\n num_params += 1\n else:\n num_tensors += 1\n except Exception:\n pass\n\n return num_params, num_tensors\n\n\ndef get_memory_profile(mode):\n \"\"\"\n 'all' means return memory for all gpus\n 'min_max' means return memory for max and min\n :param mode:\n :return:\n \"\"\"\n memory_map = get_gpu_memory_map()\n\n if mode == 'min_max':\n min_index, min_memory = min(memory_map.items(), key=lambda item: item[1])\n max_index, max_memory = max(memory_map.items(), key=lambda item: item[1])\n\n memory_map = {min_index: min_memory, max_index: max_memory}\n\n return memory_map\n\n\ndef get_gpu_memory_map():\n \"\"\"Get the current gpu usage.\n\n Returns\n -------\n usage: dict\n Keys are device ids as integers.\n Values are memory usage as integers in MB.\n \"\"\"\n result = subprocess.run(\n [\n 'nvidia-smi',\n '--query-gpu=memory.used',\n '--format=csv,nounits,noheader',\n ],\n encoding='utf-8',\n capture_output=True,\n check=True)\n # Convert lines into a dictionary\n gpu_memory = [int(x) for x in result.stdout.strip().split(os.linesep)]\n gpu_memory_map = {f'gpu_{index}': memory for index, memory in enumerate(gpu_memory)}\n return gpu_memory_map\n\n\ndef get_human_readable_count(number):\n \"\"\"\n Abbreviates an integer number with K, M, B, T for thousands, millions,\n billions and trillions, respectively.\n Examples:\n 123 -> 123\n 1234 -> 1 K (one thousand)\n 2e6 -> 2 M (two million)\n 3e9 -> 3 B (three billion)\n 4e12 -> 4 T (four trillion)\n 5e15 -> 5,000 T\n :param number: a positive integer number\n :returns a string formatted according to the pattern described above.\n \"\"\"\n assert number >= 0\n labels = [' ', 'K', 'M', 'B', 'T']\n num_digits = int(np.floor(np.log10(number)) + 1 if number > 0 else 1)\n num_groups = int(np.ceil(num_digits / 3))\n num_groups = min(num_groups, len(labels)) # don't abbreviate beyond trillions\n shift = -3 * (num_groups - 1)\n number = number * (10 ** shift)\n index = num_groups - 1\n return f'{int(number):,d} {labels[index]}'\n", "path": "pytorch_lightning/core/memory.py"}], "after_files": [{"content": "'''\nGenerates a summary of a model's layers and dimensionality\n'''\n\nimport gc\nimport os\nimport subprocess\n\nimport numpy as np\nimport pandas as pd\nimport torch\nimport logging\n\n\nclass ModelSummary(object):\n\n def __init__(self, model, mode='full'):\n '''\n Generates summaries of model layers and dimensions.\n '''\n self.model = model\n self.mode = mode\n self.in_sizes = []\n self.out_sizes = []\n\n self.summarize()\n\n def __str__(self):\n return self.summary.__str__()\n\n def __repr__(self):\n return self.summary.__str__()\n\n def named_modules(self):\n if self.mode == 'full':\n mods = 
self.model.named_modules()\n mods = list(mods)[1:] # do not include root module (LightningModule)\n elif self.mode == 'top':\n # the children are the top-level modules\n mods = self.model.named_children()\n else:\n mods = []\n return list(mods)\n\n def get_variable_sizes(self):\n '''Run sample input through each layer to get output sizes'''\n mods = self.named_modules()\n in_sizes = []\n out_sizes = []\n input_ = self.model.example_input_array\n\n if self.model.on_gpu:\n device = next(self.model.parameters()).get_device()\n # test if input is a list or a tuple\n if isinstance(input_, (list, tuple)):\n input_ = [input_i.cuda(device) if torch.is_tensor(input_i) else input_i\n for input_i in input_]\n else:\n input_ = input_.cuda(device)\n\n if self.model.trainer.use_amp:\n # test if it is not a list or a tuple\n if isinstance(input_, (list, tuple)):\n input_ = [input_i.half() if torch.is_tensor(input_i) else input_i\n for input_i in input_]\n else:\n input_ = input_.half()\n\n with torch.no_grad():\n\n for _, m in mods:\n if isinstance(input_, (list, tuple)): # pragma: no cover\n out = m(*input_)\n else:\n out = m(input_)\n\n if isinstance(input_, (list, tuple)): # pragma: no cover\n in_size = []\n for x in input_:\n if type(x) is list:\n in_size.append(len(x))\n else:\n in_size.append(x.size())\n else:\n in_size = np.array(input_.size())\n\n in_sizes.append(in_size)\n\n if isinstance(out, (list, tuple)): # pragma: no cover\n out_size = np.asarray([x.size() for x in out])\n else:\n out_size = np.array(out.size())\n\n out_sizes.append(out_size)\n input_ = out\n\n self.in_sizes = in_sizes\n self.out_sizes = out_sizes\n assert len(in_sizes) == len(out_sizes)\n return\n\n def get_layer_names(self):\n '''Collect Layer Names'''\n mods = self.named_modules()\n names = []\n layers = []\n for name, m in mods:\n names += [name]\n layers += [str(m.__class__)]\n\n layer_types = [x.split('.')[-1][:-2] for x in layers]\n\n self.layer_names = names\n self.layer_types = layer_types\n return\n\n def get_parameter_sizes(self):\n '''Get sizes of all parameters in `model`'''\n mods = self.named_modules()\n sizes = []\n for _, m in mods:\n p = list(m.parameters())\n modsz = []\n for j in range(len(p)):\n modsz.append(np.array(p[j].size()))\n sizes.append(modsz)\n\n self.param_sizes = sizes\n return\n\n def get_parameter_nums(self):\n '''Get number of parameters in each layer'''\n param_nums = []\n for mod in self.param_sizes:\n all_params = 0\n for p in mod:\n all_params += np.prod(p)\n param_nums.append(all_params)\n self.param_nums = param_nums\n return\n\n def make_summary(self):\n '''\n Makes a summary listing with:\n\n Layer Name, Layer Type, Input Size, Output Size, Number of Parameters\n '''\n\n cols = ['Name', 'Type', 'Params']\n if self.model.example_input_array is not None:\n cols.extend(['In_sizes', 'Out_sizes'])\n\n df = pd.DataFrame(np.zeros((len(self.layer_names), len(cols))))\n df.columns = cols\n\n df['Name'] = self.layer_names\n df['Type'] = self.layer_types\n df['Params'] = self.param_nums\n df['Params'] = df['Params'].map(get_human_readable_count)\n\n if self.model.example_input_array is not None:\n df['In_sizes'] = self.in_sizes\n df['Out_sizes'] = self.out_sizes\n\n self.summary = df\n return\n\n def summarize(self):\n self.get_layer_names()\n self.get_parameter_sizes()\n self.get_parameter_nums()\n\n if self.model.example_input_array is not None:\n self.get_variable_sizes()\n self.make_summary()\n\n\ndef print_mem_stack(): # pragma: no cover\n for obj in gc.get_objects():\n try:\n if 
torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):\n logging.info(type(obj), obj.size())\n except Exception:\n pass\n\n\ndef count_mem_items(): # pragma: no cover\n num_params = 0\n num_tensors = 0\n for obj in gc.get_objects():\n try:\n if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):\n obj_type = str(type(obj))\n if 'parameter' in obj_type:\n num_params += 1\n else:\n num_tensors += 1\n except Exception:\n pass\n\n return num_params, num_tensors\n\n\ndef get_memory_profile(mode):\n \"\"\"\n 'all' means return memory for all gpus\n 'min_max' means return memory for max and min\n :param mode:\n :return:\n \"\"\"\n memory_map = get_gpu_memory_map()\n\n if mode == 'min_max':\n min_index, min_memory = min(memory_map.items(), key=lambda item: item[1])\n max_index, max_memory = max(memory_map.items(), key=lambda item: item[1])\n\n memory_map = {min_index: min_memory, max_index: max_memory}\n\n return memory_map\n\n\ndef get_gpu_memory_map():\n \"\"\"Get the current gpu usage.\n\n Returns\n -------\n usage: dict\n Keys are device ids as integers.\n Values are memory usage as integers in MB.\n \"\"\"\n result = subprocess.run(\n [\n 'nvidia-smi',\n '--query-gpu=memory.used',\n '--format=csv,nounits,noheader',\n ],\n encoding='utf-8',\n capture_output=True,\n check=True)\n # Convert lines into a dictionary\n gpu_memory = [int(x) for x in result.stdout.strip().split(os.linesep)]\n gpu_memory_map = {f'gpu_{index}': memory for index, memory in enumerate(gpu_memory)}\n return gpu_memory_map\n\n\ndef get_human_readable_count(number):\n \"\"\"\n Abbreviates an integer number with K, M, B, T for thousands, millions,\n billions and trillions, respectively.\n Examples:\n 123 -> 123\n 1234 -> 1 K (one thousand)\n 2e6 -> 2 M (two million)\n 3e9 -> 3 B (three billion)\n 4e12 -> 4 T (four trillion)\n 5e15 -> 5,000 T\n :param number: a positive integer number\n :returns a string formatted according to the pattern described above.\n \"\"\"\n assert number >= 0\n labels = [' ', 'K', 'M', 'B', 'T']\n num_digits = int(np.floor(np.log10(number)) + 1 if number > 0 else 1)\n num_groups = int(np.ceil(num_digits / 3))\n num_groups = min(num_groups, len(labels)) # don't abbreviate beyond trillions\n shift = -3 * (num_groups - 1)\n number = number * (10 ** shift)\n index = num_groups - 1\n return f'{int(number):,d} {labels[index]}'\n", "path": "pytorch_lightning/core/memory.py"}]}
| 3,147 | 471 |
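The device-transfer pattern introduced by the patch above can be exercised on its own. The snippet below is a sketch: the tensor shapes and the hard-coded device index are made up for illustration (the real code derives the index from `next(model.parameters()).get_device()`), and the CUDA branch is guarded so the example also runs on CPU-only machines.

```python
import torch

# A tuple-valued example_input_array, as described in the issue.
example_input = (torch.randn(4, 16), torch.randn(4, 3))

if torch.cuda.is_available():
    device = 0  # stand-in for next(model.parameters()).get_device()
    if isinstance(example_input, (list, tuple)):
        # Move each tensor element individually; leave non-tensors untouched.
        example_input = [x.cuda(device) if torch.is_tensor(x) else x
                         for x in example_input]
    else:
        example_input = example_input.cuda(device)

print([x.device for x in example_input])
```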
gh_patches_debug_37700
|
rasdani/github-patches
|
git_diff
|
optuna__optuna-587
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dashboard only supports direction="minimize"

I used `optuna create-study --direction maximize --storage sqlite:///example.db` to create my study, but the curve on the dashboard tracks only the minimum, not the maximum.
Maybe this is an error in `create-study` that doesn't take my argument into account…
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/dashboard.py`
Content:
```
1 try:
2 import bokeh.command.bootstrap
3 import bokeh.document # NOQA
4 import bokeh.layouts
5 import bokeh.models
6 import bokeh.models.widgets
7 import bokeh.plotting
8 import bokeh.themes
9 import tornado.gen
10 _available = True
11 except ImportError as e:
12 _available = False
13 _import_error = e
14
15 import collections
16 import numpy as np
17 import threading
18 import time
19
20 import optuna.logging
21 import optuna.structs
22 import optuna.study
23 from optuna import type_checking
24
25 if type_checking.TYPE_CHECKING:
26 from typing import Any # NOQA
27 from typing import Dict # NOQA
28 from typing import List # NOQA
29 from typing import Optional # NOQA
30
31 _mode = None # type: Optional[str]
32 _study = None # type: Optional[optuna.study.Study]
33
34 _HEADER_FORMAT = '''
35 <style>
36 body {{
37 margin: 20px;
38 }}
39 h1, p {{
40 margin: 10px 0px;
41 }}
42 </style>
43
44 <h1>Optuna Dashboard (Beta)</h1>
45 <p>
46 <b>Study name:</b> {study_name}<br>
47 </p>
48 '''
49
50 _DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'
51
52 if _available:
53
54 class _CompleteTrialsWidget(object):
55 def __init__(self, trials):
56 # type: (List[optuna.structs.FrozenTrial]) -> None
57
58 complete_trials = [
59 trial for trial in trials if trial.state == optuna.structs.TrialState.COMPLETE
60 ]
61 self.trial_ids = set([trial.trial_id for trial in complete_trials])
62
63 values = [trial.value for trial in complete_trials]
64 best_values = np.minimum.accumulate(values, axis=0)
65 self.cds = bokeh.models.ColumnDataSource({
66 '#': list(range(len(complete_trials))),
67 'value': values,
68 'best_value': best_values,
69 })
70
71 self.best_value = best_values[-1] if complete_trials else np.inf
72
73 def create_figure(self):
74 # type: () -> bokeh.plotting.Figure
75
76 figure = bokeh.plotting.figure(height=150)
77 figure.circle(x='#', y='value', source=self.cds, alpha=0.3, color='navy')
78 figure.line(x='#', y='best_value', source=self.cds, color='firebrick')
79 figure.xaxis[0].axis_label = 'Number of Trials'
80 figure.yaxis[0].axis_label = 'Objective Value'
81 return figure
82
83 def update(self, new_trials):
84 # type: (List[optuna.structs.FrozenTrial]) -> None
85
86 stream_dict = collections.defaultdict(list) # type: Dict[str, List[Any]]
87
88 for trial in new_trials:
89 if trial.state != optuna.structs.TrialState.COMPLETE:
90 continue
91 if trial.trial_id in self.trial_ids:
92 continue
93 stream_dict['#'].append(len(self.trial_ids))
94 stream_dict['value'].append(trial.value)
95 self.best_value = min(self.best_value, trial.value)
96 stream_dict['best_value'].append(self.best_value)
97 self.trial_ids.add(trial.trial_id)
98
99 if stream_dict:
100 self.cds.stream(stream_dict)
101
102 class _AllTrialsWidget(object):
103 def __init__(self, trials):
104 # type: (List[optuna.structs.FrozenTrial]) -> None
105
106 self.cds = bokeh.models.ColumnDataSource(self.trials_to_dict(trials))
107
108 def create_table(self):
109 # type: () -> bokeh.models.widgets.DataTable
110
111 return bokeh.models.widgets.DataTable(
112 source=self.cds,
113 columns=[
114 bokeh.models.widgets.TableColumn(field=field, title=field) for field in
115 ['number', 'state', 'value', 'params', 'datetime_start', 'datetime_complete']
116 ])
117
118 def update(
119 self,
120 old_trials, # type: List[optuna.structs.FrozenTrial]
121 new_trials, # type: List[optuna.structs.FrozenTrial]
122 ):
123 # type: (...) -> None
124
125 modified_indices = []
126 modified_trials = []
127 for i, old_trial in enumerate(old_trials):
128 new_trial = new_trials[i]
129 if old_trial != new_trial:
130 modified_indices.append(i)
131 modified_trials.append(new_trial)
132
133 patch_dict = self.trials_to_dict(modified_trials)
134 patch_dict = {k: list(zip(modified_indices, v)) for k, v in patch_dict.items()}
135 self.cds.patch(patch_dict)
136
137 self.cds.stream(self.trials_to_dict(new_trials[len(old_trials):]))
138
139 @staticmethod
140 def trials_to_dict(trials):
141 # type: (List[optuna.structs.FrozenTrial]) -> Dict[str, List[Any]]
142
143 return {
144 'number': [trial.number for trial in trials],
145 'state': [trial.state.name for trial in trials],
146 'value': [trial.value for trial in trials],
147 'params': [str(trial.params) for trial in trials],
148 'datetime_start': [
149 trial.datetime_start.strftime(_DATETIME_FORMAT)
150 if trial.datetime_start is not None else None for trial in trials
151 ],
152 'datetime_complete': [
153 trial.datetime_complete.strftime(_DATETIME_FORMAT)
154 if trial.datetime_complete is not None else None for trial in trials
155 ],
156 }
157
158 class _DashboardApp(object):
159 def __init__(self, study, launch_update_thread):
160 # type: (optuna.study.Study, bool) -> None
161
162 self.study = study
163 self.launch_update_thread = launch_update_thread
164 self.lock = threading.Lock()
165
166 def __call__(self, doc):
167 # type: (bokeh.document.Document) -> None
168
169 self.doc = doc
170 self.current_trials = \
171 self.study.trials # type: Optional[List[optuna.structs.FrozenTrial]]
172 self.new_trials = None # type: Optional[List[optuna.structs.FrozenTrial]]
173 self.complete_trials_widget = _CompleteTrialsWidget(self.current_trials)
174 self.all_trials_widget = _AllTrialsWidget(self.current_trials)
175
176 self.doc.title = 'Optuna Dashboard (Beta)'
177 header = _HEADER_FORMAT.format(study_name=self.study.study_name)
178 self.doc.add_root(
179 bokeh.layouts.layout([[bokeh.models.widgets.Div(text=header)],
180 [self.complete_trials_widget.create_figure()],
181 [self.all_trials_widget.create_table()]],
182 sizing_mode='scale_width'))
183
184 if self.launch_update_thread:
185 thread = threading.Thread(target=self.thread_loop)
186 thread.daemon = True
187 thread.start()
188
189 def thread_loop(self):
190 # type: () -> None
191
192 while True:
193 time.sleep(1)
194 new_trials = self.study.trials
195 with self.lock:
196 need_to_add_callback = (self.new_trials is None)
197 self.new_trials = new_trials
198 if need_to_add_callback:
199 self.doc.add_next_tick_callback(self.update_callback)
200
201 @tornado.gen.coroutine
202 def update_callback(self):
203 # type: () -> None
204
205 with self.lock:
206 current_trials = self.current_trials
207 new_trials = self.new_trials
208 self.current_trials = self.new_trials
209 self.new_trials = None
210
211 assert current_trials is not None
212 assert new_trials is not None
213 self.complete_trials_widget.update(new_trials)
214 self.all_trials_widget.update(current_trials, new_trials)
215
216
217 def _check_bokeh_availability():
218 # type: () -> None
219
220 if not _available:
221 raise ImportError(
222 'Bokeh is not available. Please install Bokeh to use the dashboard. '
223 'Bokeh can be installed by executing `$ pip install bokeh`. '
224 'For further information, please refer to the installation guide of Bokeh. '
225 '(The actual import error is as follows: ' + str(_import_error) + ')')
226
227
228 def _show_experimental_warning():
229 # type: () -> None
230
231 logger = optuna.logging.get_logger(__name__)
232 logger.warning('Optuna dashboard is still highly experimental. Please use with caution!')
233
234
235 def _get_this_source_path():
236 # type: () -> str
237
238 path = __file__
239
240 # Sometimes __file__ points to a *.pyc file, but Bokeh doesn't accept it.
241 if path.endswith('.pyc'):
242 path = path[:-1]
243 return path
244
245
246 def serve(study, bokeh_allow_websocket_origins=None):
247 # type: (optuna.study.Study, Optional[List[str]]) -> None
248
249 global _mode, _study
250
251 _check_bokeh_availability()
252 _show_experimental_warning()
253
254 # We want to pass the mode (launching a server? or, just writing an HTML?) and a target study
255 # to our Bokeh app. Unfortunately, as we are using `bokeh.command.bootstrap.main` to launch
256 # our Bokeh app, we cannot directly pass Python objects to it. Therefore, we have no choice but
257 # to use global variables to pass them.
258 _mode = 'serve'
259 _study = study
260
261 # TODO(akiba): Stop using Bokeh's CLI entry point, and start the HTTP server by ourselves.
262
263 # This is not a very clean way to launch Bokeh server.
264 # Another seemingly better way is to
265 # instantiate and launch `bokeh.server.server.Server` by ourselves. However, in this way,
266 # for some reason, we found that the CDS update is not reflected to browsers, at least on Bokeh
267 # version 0.12.15. In addition, we will need to do many configuration to servers, which can be
268 # done automatically with the following one line. So, for now, we decided to use this way.
269 command = ['bokeh', 'serve', '--show', _get_this_source_path()]
270 if bokeh_allow_websocket_origins is not None:
271 for bokeh_allow_websocket_origin in bokeh_allow_websocket_origins:
272 command.extend(['--allow-websocket-origin', bokeh_allow_websocket_origin])
273 bokeh.command.bootstrap.main(command)
274
275
276 def write(study, out_path):
277 # type: (optuna.study.Study, str) -> None
278
279 global _mode, _study
280
281 _check_bokeh_availability()
282 _show_experimental_warning()
283
284 _mode = 'html'
285 _study = study
286 bokeh.command.bootstrap.main(['bokeh', 'html', _get_this_source_path(), '-o', out_path])
287
288
289 def _run():
290 # type: () -> None
291
292 # Please note that `_study` and `optuna.dashboard._study` are different here. Here, this module
293 # is loaded inside Bokeh, and thus it is not `optuna.dashboard`, but `bk_script_????`.
294 study = optuna.dashboard._study
295 mode = optuna.dashboard._mode
296
297 assert study is not None
298 app = _DashboardApp(study, launch_update_thread=(mode == 'serve'))
299 doc = bokeh.plotting.curdoc()
300 app(doc)
301
302
303 if __name__.startswith('bk_script_'):
304 # Here, this module is loaded inside Bokeh. Therefore, we should launch the Bokeh app.
305 _run()
306
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/optuna/dashboard.py b/optuna/dashboard.py
--- a/optuna/dashboard.py
+++ b/optuna/dashboard.py
@@ -52,16 +52,21 @@
if _available:
class _CompleteTrialsWidget(object):
- def __init__(self, trials):
- # type: (List[optuna.structs.FrozenTrial]) -> None
+ def __init__(self, trials, direction):
+ # type: (List[optuna.structs.FrozenTrial], optuna.structs.StudyDirection) -> None
complete_trials = [
trial for trial in trials if trial.state == optuna.structs.TrialState.COMPLETE
]
self.trial_ids = set([trial.trial_id for trial in complete_trials])
+ self.direction = direction
values = [trial.value for trial in complete_trials]
- best_values = np.minimum.accumulate(values, axis=0)
+ if direction == optuna.structs.StudyDirection.MINIMIZE:
+ best_values = np.minimum.accumulate(values, axis=0)
+ else:
+ best_values = np.maximum.accumulate(values, axis=0)
+
self.cds = bokeh.models.ColumnDataSource({
'#': list(range(len(complete_trials))),
'value': values,
@@ -92,7 +97,10 @@
continue
stream_dict['#'].append(len(self.trial_ids))
stream_dict['value'].append(trial.value)
- self.best_value = min(self.best_value, trial.value)
+ if self.direction == optuna.structs.StudyDirection.MINIMIZE:
+ self.best_value = min(self.best_value, trial.value)
+ else:
+ self.best_value = max(self.best_value, trial.value)
stream_dict['best_value'].append(self.best_value)
self.trial_ids.add(trial.trial_id)
@@ -170,7 +178,8 @@
self.current_trials = \
self.study.trials # type: Optional[List[optuna.structs.FrozenTrial]]
self.new_trials = None # type: Optional[List[optuna.structs.FrozenTrial]]
- self.complete_trials_widget = _CompleteTrialsWidget(self.current_trials)
+ self.complete_trials_widget = _CompleteTrialsWidget(
+ self.current_trials, self.study.direction)
self.all_trials_widget = _AllTrialsWidget(self.current_trials)
self.doc.title = 'Optuna Dashboard (Beta)'
|
{"golden_diff": "diff --git a/optuna/dashboard.py b/optuna/dashboard.py\n--- a/optuna/dashboard.py\n+++ b/optuna/dashboard.py\n@@ -52,16 +52,21 @@\n if _available:\n \n class _CompleteTrialsWidget(object):\n- def __init__(self, trials):\n- # type: (List[optuna.structs.FrozenTrial]) -> None\n+ def __init__(self, trials, direction):\n+ # type: (List[optuna.structs.FrozenTrial], optuna.structs.StudyDirection) -> None\n \n complete_trials = [\n trial for trial in trials if trial.state == optuna.structs.TrialState.COMPLETE\n ]\n self.trial_ids = set([trial.trial_id for trial in complete_trials])\n \n+ self.direction = direction\n values = [trial.value for trial in complete_trials]\n- best_values = np.minimum.accumulate(values, axis=0)\n+ if direction == optuna.structs.StudyDirection.MINIMIZE:\n+ best_values = np.minimum.accumulate(values, axis=0)\n+ else:\n+ best_values = np.maximum.accumulate(values, axis=0)\n+\n self.cds = bokeh.models.ColumnDataSource({\n '#': list(range(len(complete_trials))),\n 'value': values,\n@@ -92,7 +97,10 @@\n continue\n stream_dict['#'].append(len(self.trial_ids))\n stream_dict['value'].append(trial.value)\n- self.best_value = min(self.best_value, trial.value)\n+ if self.direction == optuna.structs.StudyDirection.MINIMIZE:\n+ self.best_value = min(self.best_value, trial.value)\n+ else:\n+ self.best_value = max(self.best_value, trial.value)\n stream_dict['best_value'].append(self.best_value)\n self.trial_ids.add(trial.trial_id)\n \n@@ -170,7 +178,8 @@\n self.current_trials = \\\n self.study.trials # type: Optional[List[optuna.structs.FrozenTrial]]\n self.new_trials = None # type: Optional[List[optuna.structs.FrozenTrial]]\n- self.complete_trials_widget = _CompleteTrialsWidget(self.current_trials)\n+ self.complete_trials_widget = _CompleteTrialsWidget(\n+ self.current_trials, self.study.direction)\n self.all_trials_widget = _AllTrialsWidget(self.current_trials)\n \n self.doc.title = 'Optuna Dashboard (Beta)'\n", "issue": "Dashboard only supports direction=\"minimize\"\n\r\n\r\n\r\nI used `optuna create-study --direction maximize --storage sqlite:///example.db` to create my study but the curve on the dashboard tracks only the minimum, not the maximum.\r\n\r\nMaybe this is an error of `create-study` that doesn't take my argument into account\u2026\n", "before_files": [{"content": "try:\n import bokeh.command.bootstrap\n import bokeh.document # NOQA\n import bokeh.layouts\n import bokeh.models\n import bokeh.models.widgets\n import bokeh.plotting\n import bokeh.themes\n import tornado.gen\n _available = True\nexcept ImportError as e:\n _available = False\n _import_error = e\n\nimport collections\nimport numpy as np\nimport threading\nimport time\n\nimport optuna.logging\nimport optuna.structs\nimport optuna.study\nfrom optuna import type_checking\n\nif type_checking.TYPE_CHECKING:\n from typing import Any # NOQA\n from typing import Dict # NOQA\n from typing import List # NOQA\n from typing import Optional # NOQA\n\n_mode = None # type: Optional[str]\n_study = None # type: Optional[optuna.study.Study]\n\n_HEADER_FORMAT = '''\n<style>\nbody {{\n margin: 20px;\n}}\nh1, p {{\n margin: 10px 0px;\n}}\n</style>\n\n<h1>Optuna Dashboard (Beta)</h1>\n<p>\n<b>Study name:</b> {study_name}<br>\n</p>\n'''\n\n_DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'\n\nif _available:\n\n class _CompleteTrialsWidget(object):\n def __init__(self, trials):\n # type: (List[optuna.structs.FrozenTrial]) -> None\n\n complete_trials = [\n trial for trial in trials if trial.state == 
optuna.structs.TrialState.COMPLETE\n ]\n self.trial_ids = set([trial.trial_id for trial in complete_trials])\n\n values = [trial.value for trial in complete_trials]\n best_values = np.minimum.accumulate(values, axis=0)\n self.cds = bokeh.models.ColumnDataSource({\n '#': list(range(len(complete_trials))),\n 'value': values,\n 'best_value': best_values,\n })\n\n self.best_value = best_values[-1] if complete_trials else np.inf\n\n def create_figure(self):\n # type: () -> bokeh.plotting.Figure\n\n figure = bokeh.plotting.figure(height=150)\n figure.circle(x='#', y='value', source=self.cds, alpha=0.3, color='navy')\n figure.line(x='#', y='best_value', source=self.cds, color='firebrick')\n figure.xaxis[0].axis_label = 'Number of Trials'\n figure.yaxis[0].axis_label = 'Objective Value'\n return figure\n\n def update(self, new_trials):\n # type: (List[optuna.structs.FrozenTrial]) -> None\n\n stream_dict = collections.defaultdict(list) # type: Dict[str, List[Any]]\n\n for trial in new_trials:\n if trial.state != optuna.structs.TrialState.COMPLETE:\n continue\n if trial.trial_id in self.trial_ids:\n continue\n stream_dict['#'].append(len(self.trial_ids))\n stream_dict['value'].append(trial.value)\n self.best_value = min(self.best_value, trial.value)\n stream_dict['best_value'].append(self.best_value)\n self.trial_ids.add(trial.trial_id)\n\n if stream_dict:\n self.cds.stream(stream_dict)\n\n class _AllTrialsWidget(object):\n def __init__(self, trials):\n # type: (List[optuna.structs.FrozenTrial]) -> None\n\n self.cds = bokeh.models.ColumnDataSource(self.trials_to_dict(trials))\n\n def create_table(self):\n # type: () -> bokeh.models.widgets.DataTable\n\n return bokeh.models.widgets.DataTable(\n source=self.cds,\n columns=[\n bokeh.models.widgets.TableColumn(field=field, title=field) for field in\n ['number', 'state', 'value', 'params', 'datetime_start', 'datetime_complete']\n ])\n\n def update(\n self,\n old_trials, # type: List[optuna.structs.FrozenTrial]\n new_trials, # type: List[optuna.structs.FrozenTrial]\n ):\n # type: (...) 
-> None\n\n modified_indices = []\n modified_trials = []\n for i, old_trial in enumerate(old_trials):\n new_trial = new_trials[i]\n if old_trial != new_trial:\n modified_indices.append(i)\n modified_trials.append(new_trial)\n\n patch_dict = self.trials_to_dict(modified_trials)\n patch_dict = {k: list(zip(modified_indices, v)) for k, v in patch_dict.items()}\n self.cds.patch(patch_dict)\n\n self.cds.stream(self.trials_to_dict(new_trials[len(old_trials):]))\n\n @staticmethod\n def trials_to_dict(trials):\n # type: (List[optuna.structs.FrozenTrial]) -> Dict[str, List[Any]]\n\n return {\n 'number': [trial.number for trial in trials],\n 'state': [trial.state.name for trial in trials],\n 'value': [trial.value for trial in trials],\n 'params': [str(trial.params) for trial in trials],\n 'datetime_start': [\n trial.datetime_start.strftime(_DATETIME_FORMAT)\n if trial.datetime_start is not None else None for trial in trials\n ],\n 'datetime_complete': [\n trial.datetime_complete.strftime(_DATETIME_FORMAT)\n if trial.datetime_complete is not None else None for trial in trials\n ],\n }\n\n class _DashboardApp(object):\n def __init__(self, study, launch_update_thread):\n # type: (optuna.study.Study, bool) -> None\n\n self.study = study\n self.launch_update_thread = launch_update_thread\n self.lock = threading.Lock()\n\n def __call__(self, doc):\n # type: (bokeh.document.Document) -> None\n\n self.doc = doc\n self.current_trials = \\\n self.study.trials # type: Optional[List[optuna.structs.FrozenTrial]]\n self.new_trials = None # type: Optional[List[optuna.structs.FrozenTrial]]\n self.complete_trials_widget = _CompleteTrialsWidget(self.current_trials)\n self.all_trials_widget = _AllTrialsWidget(self.current_trials)\n\n self.doc.title = 'Optuna Dashboard (Beta)'\n header = _HEADER_FORMAT.format(study_name=self.study.study_name)\n self.doc.add_root(\n bokeh.layouts.layout([[bokeh.models.widgets.Div(text=header)],\n [self.complete_trials_widget.create_figure()],\n [self.all_trials_widget.create_table()]],\n sizing_mode='scale_width'))\n\n if self.launch_update_thread:\n thread = threading.Thread(target=self.thread_loop)\n thread.daemon = True\n thread.start()\n\n def thread_loop(self):\n # type: () -> None\n\n while True:\n time.sleep(1)\n new_trials = self.study.trials\n with self.lock:\n need_to_add_callback = (self.new_trials is None)\n self.new_trials = new_trials\n if need_to_add_callback:\n self.doc.add_next_tick_callback(self.update_callback)\n\n @tornado.gen.coroutine\n def update_callback(self):\n # type: () -> None\n\n with self.lock:\n current_trials = self.current_trials\n new_trials = self.new_trials\n self.current_trials = self.new_trials\n self.new_trials = None\n\n assert current_trials is not None\n assert new_trials is not None\n self.complete_trials_widget.update(new_trials)\n self.all_trials_widget.update(current_trials, new_trials)\n\n\ndef _check_bokeh_availability():\n # type: () -> None\n\n if not _available:\n raise ImportError(\n 'Bokeh is not available. Please install Bokeh to use the dashboard. '\n 'Bokeh can be installed by executing `$ pip install bokeh`. '\n 'For further information, please refer to the installation guide of Bokeh. '\n '(The actual import error is as follows: ' + str(_import_error) + ')')\n\n\ndef _show_experimental_warning():\n # type: () -> None\n\n logger = optuna.logging.get_logger(__name__)\n logger.warning('Optuna dashboard is still highly experimental. 
Please use with caution!')\n\n\ndef _get_this_source_path():\n # type: () -> str\n\n path = __file__\n\n # Sometimes __file__ points to a *.pyc file, but Bokeh doesn't accept it.\n if path.endswith('.pyc'):\n path = path[:-1]\n return path\n\n\ndef serve(study, bokeh_allow_websocket_origins=None):\n # type: (optuna.study.Study, Optional[List[str]]) -> None\n\n global _mode, _study\n\n _check_bokeh_availability()\n _show_experimental_warning()\n\n # We want to pass the mode (launching a server? or, just writing an HTML?) and a target study\n # to our Bokeh app. Unfortunately, as we are using `bokeh.command.bootstrap.main` to launch\n # our Bokeh app, we cannot directly pass Python objects to it. Therefore, we have no choice but\n # to use global variables to pass them.\n _mode = 'serve'\n _study = study\n\n # TODO(akiba): Stop using Bokeh's CLI entry point, and start the HTTP server by ourselves.\n\n # This is not a very clean way to launch Bokeh server.\n # Another seemingly better way is to\n # instantiate and launch `bokeh.server.server.Server` by ourselves. However, in this way,\n # for some reason, we found that the CDS update is not reflected to browsers, at least on Bokeh\n # version 0.12.15. In addition, we will need to do many configuration to servers, which can be\n # done automatically with the following one line. So, for now, we decided to use this way.\n command = ['bokeh', 'serve', '--show', _get_this_source_path()]\n if bokeh_allow_websocket_origins is not None:\n for bokeh_allow_websocket_origin in bokeh_allow_websocket_origins:\n command.extend(['--allow-websocket-origin', bokeh_allow_websocket_origin])\n bokeh.command.bootstrap.main(command)\n\n\ndef write(study, out_path):\n # type: (optuna.study.Study, str) -> None\n\n global _mode, _study\n\n _check_bokeh_availability()\n _show_experimental_warning()\n\n _mode = 'html'\n _study = study\n bokeh.command.bootstrap.main(['bokeh', 'html', _get_this_source_path(), '-o', out_path])\n\n\ndef _run():\n # type: () -> None\n\n # Please note that `_study` and `optuna.dashboard._study` are different here. Here, this module\n # is loaded inside Bokeh, and thus it is not `optuna.dashboard`, but `bk_script_????`.\n study = optuna.dashboard._study\n mode = optuna.dashboard._mode\n\n assert study is not None\n app = _DashboardApp(study, launch_update_thread=(mode == 'serve'))\n doc = bokeh.plotting.curdoc()\n app(doc)\n\n\nif __name__.startswith('bk_script_'):\n # Here, this module is loaded inside Bokeh. 
Therefore, we should launch the Bokeh app.\n _run()\n", "path": "optuna/dashboard.py"}], "after_files": [{"content": "try:\n import bokeh.command.bootstrap\n import bokeh.document # NOQA\n import bokeh.layouts\n import bokeh.models\n import bokeh.models.widgets\n import bokeh.plotting\n import bokeh.themes\n import tornado.gen\n _available = True\nexcept ImportError as e:\n _available = False\n _import_error = e\n\nimport collections\nimport numpy as np\nimport threading\nimport time\n\nimport optuna.logging\nimport optuna.structs\nimport optuna.study\nfrom optuna import type_checking\n\nif type_checking.TYPE_CHECKING:\n from typing import Any # NOQA\n from typing import Dict # NOQA\n from typing import List # NOQA\n from typing import Optional # NOQA\n\n_mode = None # type: Optional[str]\n_study = None # type: Optional[optuna.study.Study]\n\n_HEADER_FORMAT = '''\n<style>\nbody {{\n margin: 20px;\n}}\nh1, p {{\n margin: 10px 0px;\n}}\n</style>\n\n<h1>Optuna Dashboard (Beta)</h1>\n<p>\n<b>Study name:</b> {study_name}<br>\n</p>\n'''\n\n_DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'\n\nif _available:\n\n class _CompleteTrialsWidget(object):\n def __init__(self, trials, direction):\n # type: (List[optuna.structs.FrozenTrial], optuna.structs.StudyDirection) -> None\n\n complete_trials = [\n trial for trial in trials if trial.state == optuna.structs.TrialState.COMPLETE\n ]\n self.trial_ids = set([trial.trial_id for trial in complete_trials])\n\n self.direction = direction\n values = [trial.value for trial in complete_trials]\n if direction == optuna.structs.StudyDirection.MINIMIZE:\n best_values = np.minimum.accumulate(values, axis=0)\n else:\n best_values = np.maximum.accumulate(values, axis=0)\n\n self.cds = bokeh.models.ColumnDataSource({\n '#': list(range(len(complete_trials))),\n 'value': values,\n 'best_value': best_values,\n })\n\n self.best_value = best_values[-1] if complete_trials else np.inf\n\n def create_figure(self):\n # type: () -> bokeh.plotting.Figure\n\n figure = bokeh.plotting.figure(height=150)\n figure.circle(x='#', y='value', source=self.cds, alpha=0.3, color='navy')\n figure.line(x='#', y='best_value', source=self.cds, color='firebrick')\n figure.xaxis[0].axis_label = 'Number of Trials'\n figure.yaxis[0].axis_label = 'Objective Value'\n return figure\n\n def update(self, new_trials):\n # type: (List[optuna.structs.FrozenTrial]) -> None\n\n stream_dict = collections.defaultdict(list) # type: Dict[str, List[Any]]\n\n for trial in new_trials:\n if trial.state != optuna.structs.TrialState.COMPLETE:\n continue\n if trial.trial_id in self.trial_ids:\n continue\n stream_dict['#'].append(len(self.trial_ids))\n stream_dict['value'].append(trial.value)\n if self.direction == optuna.structs.StudyDirection.MINIMIZE:\n self.best_value = min(self.best_value, trial.value)\n else:\n self.best_value = max(self.best_value, trial.value)\n stream_dict['best_value'].append(self.best_value)\n self.trial_ids.add(trial.trial_id)\n\n if stream_dict:\n self.cds.stream(stream_dict)\n\n class _AllTrialsWidget(object):\n def __init__(self, trials):\n # type: (List[optuna.structs.FrozenTrial]) -> None\n\n self.cds = bokeh.models.ColumnDataSource(self.trials_to_dict(trials))\n\n def create_table(self):\n # type: () -> bokeh.models.widgets.DataTable\n\n return bokeh.models.widgets.DataTable(\n source=self.cds,\n columns=[\n bokeh.models.widgets.TableColumn(field=field, title=field) for field in\n ['number', 'state', 'value', 'params', 'datetime_start', 'datetime_complete']\n ])\n\n def update(\n self,\n 
old_trials, # type: List[optuna.structs.FrozenTrial]\n new_trials, # type: List[optuna.structs.FrozenTrial]\n ):\n # type: (...) -> None\n\n modified_indices = []\n modified_trials = []\n for i, old_trial in enumerate(old_trials):\n new_trial = new_trials[i]\n if old_trial != new_trial:\n modified_indices.append(i)\n modified_trials.append(new_trial)\n\n patch_dict = self.trials_to_dict(modified_trials)\n patch_dict = {k: list(zip(modified_indices, v)) for k, v in patch_dict.items()}\n self.cds.patch(patch_dict)\n\n self.cds.stream(self.trials_to_dict(new_trials[len(old_trials):]))\n\n @staticmethod\n def trials_to_dict(trials):\n # type: (List[optuna.structs.FrozenTrial]) -> Dict[str, List[Any]]\n\n return {\n 'number': [trial.number for trial in trials],\n 'state': [trial.state.name for trial in trials],\n 'value': [trial.value for trial in trials],\n 'params': [str(trial.params) for trial in trials],\n 'datetime_start': [\n trial.datetime_start.strftime(_DATETIME_FORMAT)\n if trial.datetime_start is not None else None for trial in trials\n ],\n 'datetime_complete': [\n trial.datetime_complete.strftime(_DATETIME_FORMAT)\n if trial.datetime_complete is not None else None for trial in trials\n ],\n }\n\n class _DashboardApp(object):\n def __init__(self, study, launch_update_thread):\n # type: (optuna.study.Study, bool) -> None\n\n self.study = study\n self.launch_update_thread = launch_update_thread\n self.lock = threading.Lock()\n\n def __call__(self, doc):\n # type: (bokeh.document.Document) -> None\n\n self.doc = doc\n self.current_trials = \\\n self.study.trials # type: Optional[List[optuna.structs.FrozenTrial]]\n self.new_trials = None # type: Optional[List[optuna.structs.FrozenTrial]]\n self.complete_trials_widget = _CompleteTrialsWidget(\n self.current_trials, self.study.direction)\n self.all_trials_widget = _AllTrialsWidget(self.current_trials)\n\n self.doc.title = 'Optuna Dashboard (Beta)'\n header = _HEADER_FORMAT.format(study_name=self.study.study_name)\n self.doc.add_root(\n bokeh.layouts.layout([[bokeh.models.widgets.Div(text=header)],\n [self.complete_trials_widget.create_figure()],\n [self.all_trials_widget.create_table()]],\n sizing_mode='scale_width'))\n\n if self.launch_update_thread:\n thread = threading.Thread(target=self.thread_loop)\n thread.daemon = True\n thread.start()\n\n def thread_loop(self):\n # type: () -> None\n\n while True:\n time.sleep(1)\n new_trials = self.study.trials\n with self.lock:\n need_to_add_callback = (self.new_trials is None)\n self.new_trials = new_trials\n if need_to_add_callback:\n self.doc.add_next_tick_callback(self.update_callback)\n\n @tornado.gen.coroutine\n def update_callback(self):\n # type: () -> None\n\n with self.lock:\n current_trials = self.current_trials\n new_trials = self.new_trials\n self.current_trials = self.new_trials\n self.new_trials = None\n\n assert current_trials is not None\n assert new_trials is not None\n self.complete_trials_widget.update(new_trials)\n self.all_trials_widget.update(current_trials, new_trials)\n\n\ndef _check_bokeh_availability():\n # type: () -> None\n\n if not _available:\n raise ImportError(\n 'Bokeh is not available. Please install Bokeh to use the dashboard. '\n 'Bokeh can be installed by executing `$ pip install bokeh`. '\n 'For further information, please refer to the installation guide of Bokeh. 
'\n '(The actual import error is as follows: ' + str(_import_error) + ')')\n\n\ndef _show_experimental_warning():\n # type: () -> None\n\n logger = optuna.logging.get_logger(__name__)\n logger.warning('Optuna dashboard is still highly experimental. Please use with caution!')\n\n\ndef _get_this_source_path():\n # type: () -> str\n\n path = __file__\n\n # Sometimes __file__ points to a *.pyc file, but Bokeh doesn't accept it.\n if path.endswith('.pyc'):\n path = path[:-1]\n return path\n\n\ndef serve(study, bokeh_allow_websocket_origins=None):\n # type: (optuna.study.Study, Optional[List[str]]) -> None\n\n global _mode, _study\n\n _check_bokeh_availability()\n _show_experimental_warning()\n\n # We want to pass the mode (launching a server? or, just writing an HTML?) and a target study\n # to our Bokeh app. Unfortunately, as we are using `bokeh.command.bootstrap.main` to launch\n # our Bokeh app, we cannot directly pass Python objects to it. Therefore, we have no choice but\n # to use global variables to pass them.\n _mode = 'serve'\n _study = study\n\n # TODO(akiba): Stop using Bokeh's CLI entry point, and start the HTTP server by ourselves.\n\n # This is not a very clean way to launch Bokeh server.\n # Another seemingly better way is to\n # instantiate and launch `bokeh.server.server.Server` by ourselves. However, in this way,\n # for some reason, we found that the CDS update is not reflected to browsers, at least on Bokeh\n # version 0.12.15. In addition, we will need to do many configuration to servers, which can be\n # done automatically with the following one line. So, for now, we decided to use this way.\n command = ['bokeh', 'serve', '--show', _get_this_source_path()]\n if bokeh_allow_websocket_origins is not None:\n for bokeh_allow_websocket_origin in bokeh_allow_websocket_origins:\n command.extend(['--allow-websocket-origin', bokeh_allow_websocket_origin])\n bokeh.command.bootstrap.main(command)\n\n\ndef write(study, out_path):\n # type: (optuna.study.Study, str) -> None\n\n global _mode, _study\n\n _check_bokeh_availability()\n _show_experimental_warning()\n\n _mode = 'html'\n _study = study\n bokeh.command.bootstrap.main(['bokeh', 'html', _get_this_source_path(), '-o', out_path])\n\n\ndef _run():\n # type: () -> None\n\n # Please note that `_study` and `optuna.dashboard._study` are different here. Here, this module\n # is loaded inside Bokeh, and thus it is not `optuna.dashboard`, but `bk_script_????`.\n study = optuna.dashboard._study\n mode = optuna.dashboard._mode\n\n assert study is not None\n app = _DashboardApp(study, launch_update_thread=(mode == 'serve'))\n doc = bokeh.plotting.curdoc()\n app(doc)\n\n\nif __name__.startswith('bk_script_'):\n # Here, this module is loaded inside Bokeh. Therefore, we should launch the Bokeh app.\n _run()\n", "path": "optuna/dashboard.py"}]}
| 3,681 | 532 |
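The golden diff in the optuna record above makes the dashboard's best-value curve respect the study direction instead of always taking the running minimum. Below is a minimal sketch of that core idea, independent of Bokeh; the plain `"minimize"` / `"maximize"` strings are illustrative stand-ins for `optuna.structs.StudyDirection`:

```python
import numpy as np


def best_value_curve(values, direction="minimize"):
    """Running best objective value, as plotted by the dashboard.

    For a minimizing study the curve tracks the running minimum; for a
    maximizing study it tracks the running maximum, which is the behaviour
    the patch above adds.
    """
    values = np.asarray(values, dtype=float)
    if direction == "minimize":
        return np.minimum.accumulate(values)
    return np.maximum.accumulate(values)


if __name__ == "__main__":
    trial_values = [0.8, 0.5, 0.9, 0.3, 0.7]
    print(best_value_curve(trial_values, "minimize"))  # [0.8 0.5 0.5 0.3 0.3]
    print(best_value_curve(trial_values, "maximize"))  # [0.8 0.8 0.9 0.9 0.9]
```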
gh_patches_debug_6395
|
rasdani/github-patches
|
git_diff
|
litestar-org__litestar-1883
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/contrib/sqlalchemy/dto.py`
Content:
```
1 from __future__ import annotations
2
3 from functools import singledispatchmethod
4 from typing import TYPE_CHECKING, Generic, Optional, TypeVar
5
6 from sqlalchemy import Column, inspect, orm, sql
7 from sqlalchemy.ext.associationproxy import AssociationProxy, AssociationProxyExtensionType
8 from sqlalchemy.ext.hybrid import HybridExtensionType, hybrid_property
9 from sqlalchemy.orm import (
10 ColumnProperty,
11 DeclarativeBase,
12 InspectionAttr,
13 Mapped,
14 NotExtension,
15 QueryableAttribute,
16 RelationshipDirection,
17 RelationshipProperty,
18 )
19
20 from litestar.dto.factory.abc import AbstractDTOFactory
21 from litestar.dto.factory.data_structures import FieldDefinition
22 from litestar.dto.factory.field import DTO_FIELD_META_KEY, DTOField, Mark
23 from litestar.dto.factory.utils import get_model_type_hints
24 from litestar.exceptions import ImproperlyConfiguredException
25 from litestar.types.empty import Empty
26 from litestar.typing import ParsedType
27 from litestar.utils.helpers import get_fully_qualified_class_name
28 from litestar.utils.signature import ParsedSignature
29
30 if TYPE_CHECKING:
31 from typing import Any, ClassVar, Collection, Generator
32
33 from typing_extensions import TypeAlias
34
35 __all__ = ("SQLAlchemyDTO",)
36
37 T = TypeVar("T", bound="DeclarativeBase | Collection[DeclarativeBase]")
38 ElementType: TypeAlias = "Column[Any] | RelationshipProperty[Any]"
39
40 SQLA_NS = {**vars(orm), **vars(sql)}
41
42
43 class SQLAlchemyDTO(AbstractDTOFactory[T], Generic[T]):
44 """Support for domain modelling with SQLAlchemy."""
45
46 __slots__ = ()
47
48 model_type: ClassVar[type[DeclarativeBase]]
49
50 @singledispatchmethod
51 @classmethod
52 def handle_orm_descriptor(
53 cls,
54 extension_type: NotExtension | AssociationProxyExtensionType | HybridExtensionType,
55 orm_descriptor: InspectionAttr,
56 key: str,
57 model_type_hints: dict[str, ParsedType],
58 model_name: str,
59 ) -> list[FieldDefinition]:
60 raise NotImplementedError(f"Unsupported extension type: {extension_type}")
61
62 @handle_orm_descriptor.register(NotExtension)
63 @classmethod
64 def _(
65 cls,
66 extension_type: NotExtension,
67 key: str,
68 orm_descriptor: InspectionAttr,
69 model_type_hints: dict[str, ParsedType],
70 model_name: str,
71 ) -> list[FieldDefinition]:
72 if not isinstance(orm_descriptor, QueryableAttribute):
73 raise NotImplementedError(f"Unexpected descriptor type for '{extension_type}': '{orm_descriptor}'")
74
75 elem: ElementType
76 if isinstance(orm_descriptor.property, ColumnProperty):
77 if not isinstance(orm_descriptor.property.expression, Column):
78 raise NotImplementedError(f"Expected 'Column', got: '{orm_descriptor.property.expression}'")
79 elem = orm_descriptor.property.expression
80 elif isinstance(orm_descriptor.property, RelationshipProperty):
81 elem = orm_descriptor.property
82 else:
83 raise NotImplementedError(f"Unhandled property type: '{orm_descriptor.property}'")
84
85 default, default_factory = _detect_defaults(elem)
86
87 try:
88 if (parsed_type := model_type_hints[key]).origin is Mapped:
89 (parsed_type,) = parsed_type.inner_types
90 else:
91 raise NotImplementedError(f"Expected 'Mapped' origin, got: '{parsed_type.origin}'")
92 except KeyError:
93 parsed_type = parse_type_from_element(elem)
94
95 return [
96 FieldDefinition(
97 name=key,
98 default=default,
99 parsed_type=parsed_type,
100 default_factory=default_factory,
101 dto_field=elem.info.get(DTO_FIELD_META_KEY, DTOField()),
102 unique_model_name=model_name,
103 dto_for=None,
104 )
105 ]
106
107 @handle_orm_descriptor.register(AssociationProxyExtensionType)
108 @classmethod
109 def _(
110 cls,
111 extension_type: AssociationProxyExtensionType,
112 key: str,
113 orm_descriptor: InspectionAttr,
114 model_type_hints: dict[str, ParsedType],
115 model_name: str,
116 ) -> list[FieldDefinition]:
117 if not isinstance(orm_descriptor, AssociationProxy):
118 raise NotImplementedError(f"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'")
119
120 if (parsed_type := model_type_hints[key]).origin is AssociationProxy:
121 (parsed_type,) = parsed_type.inner_types
122 else:
123 raise NotImplementedError(f"Expected 'AssociationProxy' origin, got: '{parsed_type.origin}'")
124
125 return [
126 FieldDefinition(
127 name=key,
128 default=Empty,
129 parsed_type=parsed_type,
130 default_factory=None,
131 dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),
132 unique_model_name=model_name,
133 dto_for=None,
134 )
135 ]
136
137 @handle_orm_descriptor.register(HybridExtensionType)
138 @classmethod
139 def _(
140 cls,
141 extension_type: HybridExtensionType,
142 key: str,
143 orm_descriptor: InspectionAttr,
144 model_type_hints: dict[str, ParsedType],
145 model_name: str,
146 ) -> list[FieldDefinition]:
147 if not isinstance(orm_descriptor, hybrid_property):
148 raise NotImplementedError(f"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'")
149
150 getter_sig = ParsedSignature.from_fn(orm_descriptor.fget, {})
151
152 field_defs = [
153 FieldDefinition(
154 name=orm_descriptor.__name__,
155 default=Empty,
156 parsed_type=getter_sig.return_type,
157 default_factory=None,
158 dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),
159 unique_model_name=model_name,
160 dto_for="return",
161 )
162 ]
163
164 if orm_descriptor.fset is not None:
165 setter_sig = ParsedSignature.from_fn(orm_descriptor.fset, {})
166 field_defs.append(
167 FieldDefinition(
168 name=orm_descriptor.__name__,
169 default=Empty,
170 parsed_type=next(iter(setter_sig.parameters.values())).parsed_type,
171 default_factory=None,
172 dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.WRITE_ONLY)),
173 unique_model_name=model_name,
174 dto_for="data",
175 )
176 )
177
178 return field_defs
179
180 @classmethod
181 def generate_field_definitions(cls, model_type: type[DeclarativeBase]) -> Generator[FieldDefinition, None, None]:
182 if (mapper := inspect(model_type)) is None: # pragma: no cover
183 raise RuntimeError("Unexpected `None` value for mapper.")
184
185 # includes SQLAlchemy names and other mapped class names in the forward reference resolution namespace
186 namespace = {**SQLA_NS, **{m.class_.__name__: m.class_ for m in mapper.registry.mappers if m is not mapper}}
187 model_type_hints = get_model_type_hints(model_type, namespace=namespace)
188 model_name = get_fully_qualified_class_name(model_type)
189
190 # the same hybrid property descriptor can be included in `all_orm_descriptors` multiple times, once
191 # for each method name it is bound to. We only need to see it once, so track views of it here.
192 seen_hybrid_descriptors: set[hybrid_property] = set()
193 for key, orm_descriptor in mapper.all_orm_descriptors.items():
194 if isinstance(orm_descriptor, hybrid_property):
195 if orm_descriptor in seen_hybrid_descriptors:
196 continue
197 seen_hybrid_descriptors.add(orm_descriptor)
198
199 yield from cls.handle_orm_descriptor(
200 orm_descriptor.extension_type, key, orm_descriptor, model_type_hints, model_name
201 )
202
203 @classmethod
204 def detect_nested_field(cls, parsed_type: ParsedType) -> bool:
205 return parsed_type.is_subclass_of(DeclarativeBase)
206
207
208 def _detect_defaults(elem: ElementType) -> tuple[Any, Any]:
209 default: Any = Empty
210 default_factory: Any = None # pyright:ignore
211 if sqla_default := getattr(elem, "default", None):
212 if sqla_default.is_scalar:
213 default = sqla_default.arg
214 elif sqla_default.is_callable:
215
216 def default_factory(d: Any = sqla_default) -> Any:
217 return d.arg({})
218
219 else:
220 raise ValueError("Unexpected default type")
221 else:
222 if (
223 isinstance(elem, RelationshipProperty)
224 and detect_nullable_relationship(elem)
225 or getattr(elem, "nullable", False)
226 ):
227 default = None
228
229 return default, default_factory
230
231
232 def parse_type_from_element(elem: ElementType) -> ParsedType:
233 """Parses a type from a SQLAlchemy element.
234
235 Args:
236 elem: The SQLAlchemy element to parse.
237
238 Returns:
239 ParsedType: The parsed type.
240
241 Raises:
242 ImproperlyConfiguredException: If the type cannot be parsed.
243 """
244
245 if isinstance(elem, Column):
246 if elem.nullable:
247 return ParsedType(Optional[elem.type.python_type])
248 return ParsedType(elem.type.python_type)
249
250 if isinstance(elem, RelationshipProperty):
251 if elem.direction in (RelationshipDirection.ONETOMANY, RelationshipDirection.MANYTOMANY):
252 collection_type = ParsedType(elem.collection_class or list)
253 return ParsedType(collection_type.safe_generic_origin[elem.mapper.class_])
254
255 if detect_nullable_relationship(elem):
256 return ParsedType(Optional[elem.mapper.class_])
257
258 return ParsedType(elem.mapper.class_)
259
260 raise ImproperlyConfiguredException(
261 f"Unable to parse type from element '{elem}'. Consider adding a type hint.",
262 )
263
264
265 def detect_nullable_relationship(elem: RelationshipProperty) -> bool:
266 """Detects if a relationship is nullable.
267
268 This attempts to decide if we should allow a ``None`` default value for a relationship by looking at the
269 foreign key fields. If all foreign key fields are nullable, then we allow a ``None`` default value.
270
271 Args:
272 elem: The relationship to check.
273
274 Returns:
275 bool: ``True`` if the relationship is nullable, ``False`` otherwise.
276 """
277 return elem.direction == RelationshipDirection.MANYTOONE and all(c.nullable for c in elem.local_columns)
278
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/litestar/contrib/sqlalchemy/dto.py b/litestar/contrib/sqlalchemy/dto.py
--- a/litestar/contrib/sqlalchemy/dto.py
+++ b/litestar/contrib/sqlalchemy/dto.py
@@ -216,6 +216,11 @@
def default_factory(d: Any = sqla_default) -> Any:
return d.arg({})
+ elif sqla_default.is_sequence:
+ # SQLAlchemy sequences represent server side defaults
+ # so we cannot infer a reasonable default value for
+ # them on the client side
+ pass
else:
raise ValueError("Unexpected default type")
else:
|
{"golden_diff": "diff --git a/litestar/contrib/sqlalchemy/dto.py b/litestar/contrib/sqlalchemy/dto.py\n--- a/litestar/contrib/sqlalchemy/dto.py\n+++ b/litestar/contrib/sqlalchemy/dto.py\n@@ -216,6 +216,11 @@\n def default_factory(d: Any = sqla_default) -> Any:\n return d.arg({})\n \n+ elif sqla_default.is_sequence:\n+ # SQLAlchemy sequences represent server side defaults\n+ # so we cannot infer a reasonable default value for\n+ # them on the client side\n+ pass\n else:\n raise ValueError(\"Unexpected default type\")\n else:\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom functools import singledispatchmethod\nfrom typing import TYPE_CHECKING, Generic, Optional, TypeVar\n\nfrom sqlalchemy import Column, inspect, orm, sql\nfrom sqlalchemy.ext.associationproxy import AssociationProxy, AssociationProxyExtensionType\nfrom sqlalchemy.ext.hybrid import HybridExtensionType, hybrid_property\nfrom sqlalchemy.orm import (\n ColumnProperty,\n DeclarativeBase,\n InspectionAttr,\n Mapped,\n NotExtension,\n QueryableAttribute,\n RelationshipDirection,\n RelationshipProperty,\n)\n\nfrom litestar.dto.factory.abc import AbstractDTOFactory\nfrom litestar.dto.factory.data_structures import FieldDefinition\nfrom litestar.dto.factory.field import DTO_FIELD_META_KEY, DTOField, Mark\nfrom litestar.dto.factory.utils import get_model_type_hints\nfrom litestar.exceptions import ImproperlyConfiguredException\nfrom litestar.types.empty import Empty\nfrom litestar.typing import ParsedType\nfrom litestar.utils.helpers import get_fully_qualified_class_name\nfrom litestar.utils.signature import ParsedSignature\n\nif TYPE_CHECKING:\n from typing import Any, ClassVar, Collection, Generator\n\n from typing_extensions import TypeAlias\n\n__all__ = (\"SQLAlchemyDTO\",)\n\nT = TypeVar(\"T\", bound=\"DeclarativeBase | Collection[DeclarativeBase]\")\nElementType: TypeAlias = \"Column[Any] | RelationshipProperty[Any]\"\n\nSQLA_NS = {**vars(orm), **vars(sql)}\n\n\nclass SQLAlchemyDTO(AbstractDTOFactory[T], Generic[T]):\n \"\"\"Support for domain modelling with SQLAlchemy.\"\"\"\n\n __slots__ = ()\n\n model_type: ClassVar[type[DeclarativeBase]]\n\n @singledispatchmethod\n @classmethod\n def handle_orm_descriptor(\n cls,\n extension_type: NotExtension | AssociationProxyExtensionType | HybridExtensionType,\n orm_descriptor: InspectionAttr,\n key: str,\n model_type_hints: dict[str, ParsedType],\n model_name: str,\n ) -> list[FieldDefinition]:\n raise NotImplementedError(f\"Unsupported extension type: {extension_type}\")\n\n @handle_orm_descriptor.register(NotExtension)\n @classmethod\n def _(\n cls,\n extension_type: NotExtension,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, ParsedType],\n model_name: str,\n ) -> list[FieldDefinition]:\n if not isinstance(orm_descriptor, QueryableAttribute):\n raise 
NotImplementedError(f\"Unexpected descriptor type for '{extension_type}': '{orm_descriptor}'\")\n\n elem: ElementType\n if isinstance(orm_descriptor.property, ColumnProperty):\n if not isinstance(orm_descriptor.property.expression, Column):\n raise NotImplementedError(f\"Expected 'Column', got: '{orm_descriptor.property.expression}'\")\n elem = orm_descriptor.property.expression\n elif isinstance(orm_descriptor.property, RelationshipProperty):\n elem = orm_descriptor.property\n else:\n raise NotImplementedError(f\"Unhandled property type: '{orm_descriptor.property}'\")\n\n default, default_factory = _detect_defaults(elem)\n\n try:\n if (parsed_type := model_type_hints[key]).origin is Mapped:\n (parsed_type,) = parsed_type.inner_types\n else:\n raise NotImplementedError(f\"Expected 'Mapped' origin, got: '{parsed_type.origin}'\")\n except KeyError:\n parsed_type = parse_type_from_element(elem)\n\n return [\n FieldDefinition(\n name=key,\n default=default,\n parsed_type=parsed_type,\n default_factory=default_factory,\n dto_field=elem.info.get(DTO_FIELD_META_KEY, DTOField()),\n unique_model_name=model_name,\n dto_for=None,\n )\n ]\n\n @handle_orm_descriptor.register(AssociationProxyExtensionType)\n @classmethod\n def _(\n cls,\n extension_type: AssociationProxyExtensionType,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, ParsedType],\n model_name: str,\n ) -> list[FieldDefinition]:\n if not isinstance(orm_descriptor, AssociationProxy):\n raise NotImplementedError(f\"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'\")\n\n if (parsed_type := model_type_hints[key]).origin is AssociationProxy:\n (parsed_type,) = parsed_type.inner_types\n else:\n raise NotImplementedError(f\"Expected 'AssociationProxy' origin, got: '{parsed_type.origin}'\")\n\n return [\n FieldDefinition(\n name=key,\n default=Empty,\n parsed_type=parsed_type,\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),\n unique_model_name=model_name,\n dto_for=None,\n )\n ]\n\n @handle_orm_descriptor.register(HybridExtensionType)\n @classmethod\n def _(\n cls,\n extension_type: HybridExtensionType,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, ParsedType],\n model_name: str,\n ) -> list[FieldDefinition]:\n if not isinstance(orm_descriptor, hybrid_property):\n raise NotImplementedError(f\"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'\")\n\n getter_sig = ParsedSignature.from_fn(orm_descriptor.fget, {})\n\n field_defs = [\n FieldDefinition(\n name=orm_descriptor.__name__,\n default=Empty,\n parsed_type=getter_sig.return_type,\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),\n unique_model_name=model_name,\n dto_for=\"return\",\n )\n ]\n\n if orm_descriptor.fset is not None:\n setter_sig = ParsedSignature.from_fn(orm_descriptor.fset, {})\n field_defs.append(\n FieldDefinition(\n name=orm_descriptor.__name__,\n default=Empty,\n parsed_type=next(iter(setter_sig.parameters.values())).parsed_type,\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.WRITE_ONLY)),\n unique_model_name=model_name,\n dto_for=\"data\",\n )\n )\n\n return field_defs\n\n @classmethod\n def generate_field_definitions(cls, model_type: type[DeclarativeBase]) -> Generator[FieldDefinition, None, None]:\n if (mapper := inspect(model_type)) is None: # pragma: no cover\n raise RuntimeError(\"Unexpected `None` 
value for mapper.\")\n\n # includes SQLAlchemy names and other mapped class names in the forward reference resolution namespace\n namespace = {**SQLA_NS, **{m.class_.__name__: m.class_ for m in mapper.registry.mappers if m is not mapper}}\n model_type_hints = get_model_type_hints(model_type, namespace=namespace)\n model_name = get_fully_qualified_class_name(model_type)\n\n # the same hybrid property descriptor can be included in `all_orm_descriptors` multiple times, once\n # for each method name it is bound to. We only need to see it once, so track views of it here.\n seen_hybrid_descriptors: set[hybrid_property] = set()\n for key, orm_descriptor in mapper.all_orm_descriptors.items():\n if isinstance(orm_descriptor, hybrid_property):\n if orm_descriptor in seen_hybrid_descriptors:\n continue\n seen_hybrid_descriptors.add(orm_descriptor)\n\n yield from cls.handle_orm_descriptor(\n orm_descriptor.extension_type, key, orm_descriptor, model_type_hints, model_name\n )\n\n @classmethod\n def detect_nested_field(cls, parsed_type: ParsedType) -> bool:\n return parsed_type.is_subclass_of(DeclarativeBase)\n\n\ndef _detect_defaults(elem: ElementType) -> tuple[Any, Any]:\n default: Any = Empty\n default_factory: Any = None # pyright:ignore\n if sqla_default := getattr(elem, \"default\", None):\n if sqla_default.is_scalar:\n default = sqla_default.arg\n elif sqla_default.is_callable:\n\n def default_factory(d: Any = sqla_default) -> Any:\n return d.arg({})\n\n else:\n raise ValueError(\"Unexpected default type\")\n else:\n if (\n isinstance(elem, RelationshipProperty)\n and detect_nullable_relationship(elem)\n or getattr(elem, \"nullable\", False)\n ):\n default = None\n\n return default, default_factory\n\n\ndef parse_type_from_element(elem: ElementType) -> ParsedType:\n \"\"\"Parses a type from a SQLAlchemy element.\n\n Args:\n elem: The SQLAlchemy element to parse.\n\n Returns:\n ParsedType: The parsed type.\n\n Raises:\n ImproperlyConfiguredException: If the type cannot be parsed.\n \"\"\"\n\n if isinstance(elem, Column):\n if elem.nullable:\n return ParsedType(Optional[elem.type.python_type])\n return ParsedType(elem.type.python_type)\n\n if isinstance(elem, RelationshipProperty):\n if elem.direction in (RelationshipDirection.ONETOMANY, RelationshipDirection.MANYTOMANY):\n collection_type = ParsedType(elem.collection_class or list)\n return ParsedType(collection_type.safe_generic_origin[elem.mapper.class_])\n\n if detect_nullable_relationship(elem):\n return ParsedType(Optional[elem.mapper.class_])\n\n return ParsedType(elem.mapper.class_)\n\n raise ImproperlyConfiguredException(\n f\"Unable to parse type from element '{elem}'. Consider adding a type hint.\",\n )\n\n\ndef detect_nullable_relationship(elem: RelationshipProperty) -> bool:\n \"\"\"Detects if a relationship is nullable.\n\n This attempts to decide if we should allow a ``None`` default value for a relationship by looking at the\n foreign key fields. 
If all foreign key fields are nullable, then we allow a ``None`` default value.\n\n Args:\n elem: The relationship to check.\n\n Returns:\n bool: ``True`` if the relationship is nullable, ``False`` otherwise.\n \"\"\"\n return elem.direction == RelationshipDirection.MANYTOONE and all(c.nullable for c in elem.local_columns)\n", "path": "litestar/contrib/sqlalchemy/dto.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom functools import singledispatchmethod\nfrom typing import TYPE_CHECKING, Generic, Optional, TypeVar\n\nfrom sqlalchemy import Column, inspect, orm, sql\nfrom sqlalchemy.ext.associationproxy import AssociationProxy, AssociationProxyExtensionType\nfrom sqlalchemy.ext.hybrid import HybridExtensionType, hybrid_property\nfrom sqlalchemy.orm import (\n ColumnProperty,\n DeclarativeBase,\n InspectionAttr,\n Mapped,\n NotExtension,\n QueryableAttribute,\n RelationshipDirection,\n RelationshipProperty,\n)\n\nfrom litestar.dto.factory.abc import AbstractDTOFactory\nfrom litestar.dto.factory.data_structures import FieldDefinition\nfrom litestar.dto.factory.field import DTO_FIELD_META_KEY, DTOField, Mark\nfrom litestar.dto.factory.utils import get_model_type_hints\nfrom litestar.exceptions import ImproperlyConfiguredException\nfrom litestar.types.empty import Empty\nfrom litestar.typing import ParsedType\nfrom litestar.utils.helpers import get_fully_qualified_class_name\nfrom litestar.utils.signature import ParsedSignature\n\nif TYPE_CHECKING:\n from typing import Any, ClassVar, Collection, Generator\n\n from typing_extensions import TypeAlias\n\n__all__ = (\"SQLAlchemyDTO\",)\n\nT = TypeVar(\"T\", bound=\"DeclarativeBase | Collection[DeclarativeBase]\")\nElementType: TypeAlias = \"Column[Any] | RelationshipProperty[Any]\"\n\nSQLA_NS = {**vars(orm), **vars(sql)}\n\n\nclass SQLAlchemyDTO(AbstractDTOFactory[T], Generic[T]):\n \"\"\"Support for domain modelling with SQLAlchemy.\"\"\"\n\n __slots__ = ()\n\n model_type: ClassVar[type[DeclarativeBase]]\n\n @singledispatchmethod\n @classmethod\n def handle_orm_descriptor(\n cls,\n extension_type: NotExtension | AssociationProxyExtensionType | HybridExtensionType,\n orm_descriptor: InspectionAttr,\n key: str,\n model_type_hints: dict[str, ParsedType],\n model_name: str,\n ) -> list[FieldDefinition]:\n raise NotImplementedError(f\"Unsupported extension type: {extension_type}\")\n\n @handle_orm_descriptor.register(NotExtension)\n @classmethod\n def _(\n cls,\n extension_type: NotExtension,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, ParsedType],\n model_name: str,\n ) -> list[FieldDefinition]:\n if not isinstance(orm_descriptor, QueryableAttribute):\n raise NotImplementedError(f\"Unexpected descriptor type for '{extension_type}': '{orm_descriptor}'\")\n\n elem: ElementType\n if isinstance(orm_descriptor.property, ColumnProperty):\n if not isinstance(orm_descriptor.property.expression, Column):\n raise NotImplementedError(f\"Expected 'Column', got: '{orm_descriptor.property.expression}'\")\n elem = orm_descriptor.property.expression\n elif isinstance(orm_descriptor.property, RelationshipProperty):\n elem = orm_descriptor.property\n else:\n raise NotImplementedError(f\"Unhandled property type: '{orm_descriptor.property}'\")\n\n default, default_factory = _detect_defaults(elem)\n\n try:\n if (parsed_type := model_type_hints[key]).origin is Mapped:\n (parsed_type,) = parsed_type.inner_types\n else:\n raise NotImplementedError(f\"Expected 'Mapped' origin, got: '{parsed_type.origin}'\")\n except 
KeyError:\n parsed_type = parse_type_from_element(elem)\n\n return [\n FieldDefinition(\n name=key,\n default=default,\n parsed_type=parsed_type,\n default_factory=default_factory,\n dto_field=elem.info.get(DTO_FIELD_META_KEY, DTOField()),\n unique_model_name=model_name,\n dto_for=None,\n )\n ]\n\n @handle_orm_descriptor.register(AssociationProxyExtensionType)\n @classmethod\n def _(\n cls,\n extension_type: AssociationProxyExtensionType,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, ParsedType],\n model_name: str,\n ) -> list[FieldDefinition]:\n if not isinstance(orm_descriptor, AssociationProxy):\n raise NotImplementedError(f\"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'\")\n\n if (parsed_type := model_type_hints[key]).origin is AssociationProxy:\n (parsed_type,) = parsed_type.inner_types\n else:\n raise NotImplementedError(f\"Expected 'AssociationProxy' origin, got: '{parsed_type.origin}'\")\n\n return [\n FieldDefinition(\n name=key,\n default=Empty,\n parsed_type=parsed_type,\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),\n unique_model_name=model_name,\n dto_for=None,\n )\n ]\n\n @handle_orm_descriptor.register(HybridExtensionType)\n @classmethod\n def _(\n cls,\n extension_type: HybridExtensionType,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, ParsedType],\n model_name: str,\n ) -> list[FieldDefinition]:\n if not isinstance(orm_descriptor, hybrid_property):\n raise NotImplementedError(f\"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'\")\n\n getter_sig = ParsedSignature.from_fn(orm_descriptor.fget, {})\n\n field_defs = [\n FieldDefinition(\n name=orm_descriptor.__name__,\n default=Empty,\n parsed_type=getter_sig.return_type,\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),\n unique_model_name=model_name,\n dto_for=\"return\",\n )\n ]\n\n if orm_descriptor.fset is not None:\n setter_sig = ParsedSignature.from_fn(orm_descriptor.fset, {})\n field_defs.append(\n FieldDefinition(\n name=orm_descriptor.__name__,\n default=Empty,\n parsed_type=next(iter(setter_sig.parameters.values())).parsed_type,\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.WRITE_ONLY)),\n unique_model_name=model_name,\n dto_for=\"data\",\n )\n )\n\n return field_defs\n\n @classmethod\n def generate_field_definitions(cls, model_type: type[DeclarativeBase]) -> Generator[FieldDefinition, None, None]:\n if (mapper := inspect(model_type)) is None: # pragma: no cover\n raise RuntimeError(\"Unexpected `None` value for mapper.\")\n\n # includes SQLAlchemy names and other mapped class names in the forward reference resolution namespace\n namespace = {**SQLA_NS, **{m.class_.__name__: m.class_ for m in mapper.registry.mappers if m is not mapper}}\n model_type_hints = get_model_type_hints(model_type, namespace=namespace)\n model_name = get_fully_qualified_class_name(model_type)\n\n # the same hybrid property descriptor can be included in `all_orm_descriptors` multiple times, once\n # for each method name it is bound to. 
We only need to see it once, so track views of it here.\n seen_hybrid_descriptors: set[hybrid_property] = set()\n for key, orm_descriptor in mapper.all_orm_descriptors.items():\n if isinstance(orm_descriptor, hybrid_property):\n if orm_descriptor in seen_hybrid_descriptors:\n continue\n seen_hybrid_descriptors.add(orm_descriptor)\n\n yield from cls.handle_orm_descriptor(\n orm_descriptor.extension_type, key, orm_descriptor, model_type_hints, model_name\n )\n\n @classmethod\n def detect_nested_field(cls, parsed_type: ParsedType) -> bool:\n return parsed_type.is_subclass_of(DeclarativeBase)\n\n\ndef _detect_defaults(elem: ElementType) -> tuple[Any, Any]:\n default: Any = Empty\n default_factory: Any = None # pyright:ignore\n if sqla_default := getattr(elem, \"default\", None):\n if sqla_default.is_scalar:\n default = sqla_default.arg\n elif sqla_default.is_callable:\n\n def default_factory(d: Any = sqla_default) -> Any:\n return d.arg({})\n\n elif sqla_default.is_sequence:\n # SQLAlchemy sequences represent server side defaults\n # so we cannot infer a reasonable default value for\n # them on the client side\n pass\n else:\n raise ValueError(\"Unexpected default type\")\n else:\n if (\n isinstance(elem, RelationshipProperty)\n and detect_nullable_relationship(elem)\n or getattr(elem, \"nullable\", False)\n ):\n default = None\n\n return default, default_factory\n\n\ndef parse_type_from_element(elem: ElementType) -> ParsedType:\n \"\"\"Parses a type from a SQLAlchemy element.\n\n Args:\n elem: The SQLAlchemy element to parse.\n\n Returns:\n ParsedType: The parsed type.\n\n Raises:\n ImproperlyConfiguredException: If the type cannot be parsed.\n \"\"\"\n\n if isinstance(elem, Column):\n if elem.nullable:\n return ParsedType(Optional[elem.type.python_type])\n return ParsedType(elem.type.python_type)\n\n if isinstance(elem, RelationshipProperty):\n if elem.direction in (RelationshipDirection.ONETOMANY, RelationshipDirection.MANYTOMANY):\n collection_type = ParsedType(elem.collection_class or list)\n return ParsedType(collection_type.safe_generic_origin[elem.mapper.class_])\n\n if detect_nullable_relationship(elem):\n return ParsedType(Optional[elem.mapper.class_])\n\n return ParsedType(elem.mapper.class_)\n\n raise ImproperlyConfiguredException(\n f\"Unable to parse type from element '{elem}'. Consider adding a type hint.\",\n )\n\n\ndef detect_nullable_relationship(elem: RelationshipProperty) -> bool:\n \"\"\"Detects if a relationship is nullable.\n\n This attempts to decide if we should allow a ``None`` default value for a relationship by looking at the\n foreign key fields. If all foreign key fields are nullable, then we allow a ``None`` default value.\n\n Args:\n elem: The relationship to check.\n\n Returns:\n bool: ``True`` if the relationship is nullable, ``False`` otherwise.\n \"\"\"\n return elem.direction == RelationshipDirection.MANYTOONE and all(c.nullable for c in elem.local_columns)\n", "path": "litestar/contrib/sqlalchemy/dto.py"}]}
| 3,303 | 147 |
gh_patches_debug_19793
|
rasdani/github-patches
|
git_diff
|
Flexget__Flexget-2512
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Notify (Pushbullet) Plugin Error.
Hi, since last week i have this error in my log. Up to this point everything worked without problems.
```
2019-11-13 10:30 ERROR notify_entry NexBox 'x-ratelimit-reset'
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notify.py", line 104, in send_notification
send_notification(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notification_framework.py", line 124, in send_notification
title, message, rendered_config
File "/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notifiers/pushbullet.py", line 89, in notify
self.send_push(key, title, message, config.get('url'), d, 'device_iden')
File "/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notifiers/pushbullet.py", line 130, in send_push
int(response.headers['X-Ratelimit-Reset'])
File "/usr/local/lib/python3.5/dist-packages/requests/structures.py", line 52, in __getitem__
return self._store[key.lower()][1]
KeyError: 'x-ratelimit-reset'
```
Flexget: 2.21.32
API: 1.5.0
Same with Flexget 3.0.8 and Python 3.8
Notify (Pushbullet) Plugin Error.
Hi, since last week i have this error in my log. Up to this point everything worked without problems.
```
2019-11-13 10:30 ERROR notify_entry NexBox 'x-ratelimit-reset'
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notify.py", line 104, in send_notification
send_notification(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notification_framework.py", line 124, in send_notification
title, message, rendered_config
File "/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notifiers/pushbullet.py", line 89, in notify
self.send_push(key, title, message, config.get('url'), d, 'device_iden')
File "/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notifiers/pushbullet.py", line 130, in send_push
int(response.headers['X-Ratelimit-Reset'])
File "/usr/local/lib/python3.5/dist-packages/requests/structures.py", line 52, in __getitem__
return self._store[key.lower()][1]
KeyError: 'x-ratelimit-reset'
```
Flexget: 2.21.32
API: 1.5.0
Same with Flexget 3.0.8 and Python 3.8
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/components/notify/notifiers/pushbullet.py`
Content:
```
1 import base64
2 import datetime
3 import logging
4
5 from requests.exceptions import RequestException
6
7 from flexget import plugin
8 from flexget.config_schema import one_or_more
9 from flexget.event import event
10 from flexget.plugin import PluginWarning
11 from flexget.utils.requests import Session as RequestSession
12 from flexget.utils.requests import TimedLimiter
13
14 plugin_name = 'pushbullet'
15 log = logging.getLogger(plugin_name)
16
17 PUSHBULLET_URL = 'https://api.pushbullet.com/v2/pushes'
18
19 requests = RequestSession(max_retries=3)
20 requests.add_domain_limiter(TimedLimiter('pushbullet.com', '5 seconds'))
21
22
23 class PushbulletNotifier:
24 """
25 Example::
26
27 notify:
28 entries:
29 via:
30 pushbullet:
31 apikey: <API_KEY>
32 [device: <DEVICE_IDEN> (can also be a list of device ids, or don't specify any ids to send to all devices)]
33 [email: <EMAIL_ADDRESS> (can also be a list of user email addresses)]
34 [channel: <CHANNEL_TAG> (you can only specify device / email or channel tag, cannot use both)]
35
36 Configuration parameters are also supported from entries (eg. through set).
37 """
38
39 schema = {
40 'type': 'object',
41 'properties': {
42 'api_key': one_or_more({'type': 'string'}),
43 'device': one_or_more({'type': 'string'}),
44 'email': one_or_more({'type': 'string', 'format': 'email'}),
45 'url': {'type': 'string'},
46 'channel': {'type': 'string'},
47 'file_template': {'type': 'string'},
48 },
49 'required': ['api_key'],
50 'oneOf': [
51 {'required': ['device']},
52 {'required': ['channel']},
53 {'required': ['email']},
54 {
55 'not': {
56 'anyOf': [
57 {'required': ['device']},
58 {'required': ['channel']},
59 {'required': ['email']},
60 ]
61 }
62 },
63 ],
64 'error_oneOf': 'One (and only one) of `email`, `device` or `channel` are allowed.',
65 'additionalProperties': False,
66 }
67
68 def notify(self, title, message, config):
69 """
70 Send a Pushbullet notification
71 """
72 if config.get('device') and not isinstance(config['device'], list):
73 config['device'] = [config['device']]
74
75 if config.get('email') and not isinstance(config['email'], list):
76 config['email'] = [config['email']]
77
78 if not isinstance(config['api_key'], list):
79 config['api_key'] = [config['api_key']]
80
81 for key in config['api_key']:
82 if config.get('channel'):
83 self.send_push(
84 key, title, message, config.get('url'), config.get('channel'), 'channel_tag'
85 )
86 elif config.get('device'):
87 for d in config['device']:
88 self.send_push(key, title, message, config.get('url'), d, 'device_iden')
89 elif config.get('email'):
90 for e in config['email']:
91 self.send_push(key, title, message, config.get('url'), e, 'email')
92 else:
93 self.send_push(key, title, message, config.get('url'))
94
95 def send_push(self, api_key, title, body, url=None, destination=None, destination_type=None):
96 push_type = 'link' if url else 'note'
97 notification = {'type': push_type, 'title': title, 'body': body}
98 if url:
99 notification['url'] = url
100 if destination:
101 notification[destination_type] = destination
102
103 # Make the request
104 headers = {
105 'Authorization': b'Basic ' + base64.b64encode(api_key.encode('ascii')),
106 'Content-Type': 'application/json',
107 'Accept': 'application/json',
108 'User-Agent': 'Flexget',
109 }
110 try:
111 response = requests.post(PUSHBULLET_URL, headers=headers, json=notification)
112 except RequestException as e:
113 if e.response is not None:
114 if e.response.status_code == 429:
115 reset_time = datetime.datetime.fromtimestamp(
116 int(e.response.headers['X-Ratelimit-Reset'])
117 ).strftime('%Y-%m-%d %H:%M:%S')
118 message = (
119 'Monthly Pushbullet database operations limit reached. Next reset: %s'
120 % reset_time
121 )
122 else:
123 message = e.response.json()['error']['message']
124 else:
125 message = str(e)
126 raise PluginWarning(message)
127
128 reset_time = datetime.datetime.fromtimestamp(
129 int(response.headers['X-Ratelimit-Reset'])
130 ).strftime('%Y-%m-%d %H:%M:%S')
131 remaining = response.headers['X-Ratelimit-Remaining']
132 log.debug(
133 'Pushbullet notification sent. Database operations remaining until next reset: %s. '
134 'Next reset at: %s',
135 remaining,
136 reset_time,
137 )
138
139
140 @event('plugin.register')
141 def register_plugin():
142 plugin.register(PushbulletNotifier, plugin_name, api_ver=2, interfaces=['notifiers'])
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/flexget/components/notify/notifiers/pushbullet.py b/flexget/components/notify/notifiers/pushbullet.py
--- a/flexget/components/notify/notifiers/pushbullet.py
+++ b/flexget/components/notify/notifiers/pushbullet.py
@@ -112,13 +112,12 @@
except RequestException as e:
if e.response is not None:
if e.response.status_code == 429:
- reset_time = datetime.datetime.fromtimestamp(
- int(e.response.headers['X-Ratelimit-Reset'])
- ).strftime('%Y-%m-%d %H:%M:%S')
- message = (
- 'Monthly Pushbullet database operations limit reached. Next reset: %s'
- % reset_time
- )
+ reset_time = e.response.headers.get('X-Ratelimit-Reset')
+ if reset_time:
+ reset_time = datetime.datetime.fromtimestamp(int(reset_time)).strftime(
+ '%Y-%m-%d %H:%M:%S'
+ )
+ message = f'Monthly Pushbullet database operations limit reached. Next reset: {reset_time}'
else:
message = e.response.json()['error']['message']
else:
|
{"golden_diff": "diff --git a/flexget/components/notify/notifiers/pushbullet.py b/flexget/components/notify/notifiers/pushbullet.py\n--- a/flexget/components/notify/notifiers/pushbullet.py\n+++ b/flexget/components/notify/notifiers/pushbullet.py\n@@ -112,13 +112,12 @@\n except RequestException as e:\n if e.response is not None:\n if e.response.status_code == 429:\n- reset_time = datetime.datetime.fromtimestamp(\n- int(e.response.headers['X-Ratelimit-Reset'])\n- ).strftime('%Y-%m-%d %H:%M:%S')\n- message = (\n- 'Monthly Pushbullet database operations limit reached. Next reset: %s'\n- % reset_time\n- )\n+ reset_time = e.response.headers.get('X-Ratelimit-Reset')\n+ if reset_time:\n+ reset_time = datetime.datetime.fromtimestamp(int(reset_time)).strftime(\n+ '%Y-%m-%d %H:%M:%S'\n+ )\n+ message = f'Monthly Pushbullet database operations limit reached. Next reset: {reset_time}'\n else:\n message = e.response.json()['error']['message']\n else:\n", "issue": "Notify (Pushbullet) Plugin Error.\nHi, since last week i have this error in my log. Up to this point everything worked without problems.\r\n```\r\n2019-11-13 10:30 ERROR notify_entry NexBox 'x-ratelimit-reset'\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notify.py\", line 104, in send_notification\r\n send_notification(*args, **kwargs)\r\n File \"/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notification_framework.py\", line 124, in send_notification\r\n title, message, rendered_config\r\n File \"/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notifiers/pushbullet.py\", line 89, in notify\r\n self.send_push(key, title, message, config.get('url'), d, 'device_iden')\r\n File \"/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notifiers/pushbullet.py\", line 130, in send_push\r\n int(response.headers['X-Ratelimit-Reset'])\r\n File \"/usr/local/lib/python3.5/dist-packages/requests/structures.py\", line 52, in __getitem__\r\n return self._store[key.lower()][1]\r\nKeyError: 'x-ratelimit-reset'\r\n```\r\nFlexget: 2.21.32\r\nAPI: 1.5.0\r\n\r\nSame with Flexget 3.0.8 and Python 3.8\nNotify (Pushbullet) Plugin Error.\nHi, since last week i have this error in my log. 
Up to this point everything worked without problems.\r\n```\r\n2019-11-13 10:30 ERROR notify_entry NexBox 'x-ratelimit-reset'\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notify.py\", line 104, in send_notification\r\n send_notification(*args, **kwargs)\r\n File \"/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notification_framework.py\", line 124, in send_notification\r\n title, message, rendered_config\r\n File \"/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notifiers/pushbullet.py\", line 89, in notify\r\n self.send_push(key, title, message, config.get('url'), d, 'device_iden')\r\n File \"/usr/local/lib/python3.5/dist-packages/flexget/components/notify/notifiers/pushbullet.py\", line 130, in send_push\r\n int(response.headers['X-Ratelimit-Reset'])\r\n File \"/usr/local/lib/python3.5/dist-packages/requests/structures.py\", line 52, in __getitem__\r\n return self._store[key.lower()][1]\r\nKeyError: 'x-ratelimit-reset'\r\n```\r\nFlexget: 2.21.32\r\nAPI: 1.5.0\r\n\r\nSame with Flexget 3.0.8 and Python 3.8\n", "before_files": [{"content": "import base64\nimport datetime\nimport logging\n\nfrom requests.exceptions import RequestException\n\nfrom flexget import plugin\nfrom flexget.config_schema import one_or_more\nfrom flexget.event import event\nfrom flexget.plugin import PluginWarning\nfrom flexget.utils.requests import Session as RequestSession\nfrom flexget.utils.requests import TimedLimiter\n\nplugin_name = 'pushbullet'\nlog = logging.getLogger(plugin_name)\n\nPUSHBULLET_URL = 'https://api.pushbullet.com/v2/pushes'\n\nrequests = RequestSession(max_retries=3)\nrequests.add_domain_limiter(TimedLimiter('pushbullet.com', '5 seconds'))\n\n\nclass PushbulletNotifier:\n \"\"\"\n Example::\n\n notify:\n entries:\n via:\n pushbullet:\n apikey: <API_KEY>\n [device: <DEVICE_IDEN> (can also be a list of device ids, or don't specify any ids to send to all devices)]\n [email: <EMAIL_ADDRESS> (can also be a list of user email addresses)]\n [channel: <CHANNEL_TAG> (you can only specify device / email or channel tag, cannot use both)]\n\n Configuration parameters are also supported from entries (eg. 
through set).\n \"\"\"\n\n schema = {\n 'type': 'object',\n 'properties': {\n 'api_key': one_or_more({'type': 'string'}),\n 'device': one_or_more({'type': 'string'}),\n 'email': one_or_more({'type': 'string', 'format': 'email'}),\n 'url': {'type': 'string'},\n 'channel': {'type': 'string'},\n 'file_template': {'type': 'string'},\n },\n 'required': ['api_key'],\n 'oneOf': [\n {'required': ['device']},\n {'required': ['channel']},\n {'required': ['email']},\n {\n 'not': {\n 'anyOf': [\n {'required': ['device']},\n {'required': ['channel']},\n {'required': ['email']},\n ]\n }\n },\n ],\n 'error_oneOf': 'One (and only one) of `email`, `device` or `channel` are allowed.',\n 'additionalProperties': False,\n }\n\n def notify(self, title, message, config):\n \"\"\"\n Send a Pushbullet notification\n \"\"\"\n if config.get('device') and not isinstance(config['device'], list):\n config['device'] = [config['device']]\n\n if config.get('email') and not isinstance(config['email'], list):\n config['email'] = [config['email']]\n\n if not isinstance(config['api_key'], list):\n config['api_key'] = [config['api_key']]\n\n for key in config['api_key']:\n if config.get('channel'):\n self.send_push(\n key, title, message, config.get('url'), config.get('channel'), 'channel_tag'\n )\n elif config.get('device'):\n for d in config['device']:\n self.send_push(key, title, message, config.get('url'), d, 'device_iden')\n elif config.get('email'):\n for e in config['email']:\n self.send_push(key, title, message, config.get('url'), e, 'email')\n else:\n self.send_push(key, title, message, config.get('url'))\n\n def send_push(self, api_key, title, body, url=None, destination=None, destination_type=None):\n push_type = 'link' if url else 'note'\n notification = {'type': push_type, 'title': title, 'body': body}\n if url:\n notification['url'] = url\n if destination:\n notification[destination_type] = destination\n\n # Make the request\n headers = {\n 'Authorization': b'Basic ' + base64.b64encode(api_key.encode('ascii')),\n 'Content-Type': 'application/json',\n 'Accept': 'application/json',\n 'User-Agent': 'Flexget',\n }\n try:\n response = requests.post(PUSHBULLET_URL, headers=headers, json=notification)\n except RequestException as e:\n if e.response is not None:\n if e.response.status_code == 429:\n reset_time = datetime.datetime.fromtimestamp(\n int(e.response.headers['X-Ratelimit-Reset'])\n ).strftime('%Y-%m-%d %H:%M:%S')\n message = (\n 'Monthly Pushbullet database operations limit reached. Next reset: %s'\n % reset_time\n )\n else:\n message = e.response.json()['error']['message']\n else:\n message = str(e)\n raise PluginWarning(message)\n\n reset_time = datetime.datetime.fromtimestamp(\n int(response.headers['X-Ratelimit-Reset'])\n ).strftime('%Y-%m-%d %H:%M:%S')\n remaining = response.headers['X-Ratelimit-Remaining']\n log.debug(\n 'Pushbullet notification sent. Database operations remaining until next reset: %s. 
'\n 'Next reset at: %s',\n remaining,\n reset_time,\n )\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(PushbulletNotifier, plugin_name, api_ver=2, interfaces=['notifiers'])\n", "path": "flexget/components/notify/notifiers/pushbullet.py"}], "after_files": [{"content": "import base64\nimport datetime\nimport logging\n\nfrom requests.exceptions import RequestException\n\nfrom flexget import plugin\nfrom flexget.config_schema import one_or_more\nfrom flexget.event import event\nfrom flexget.plugin import PluginWarning\nfrom flexget.utils.requests import Session as RequestSession\nfrom flexget.utils.requests import TimedLimiter\n\nplugin_name = 'pushbullet'\nlog = logging.getLogger(plugin_name)\n\nPUSHBULLET_URL = 'https://api.pushbullet.com/v2/pushes'\n\nrequests = RequestSession(max_retries=3)\nrequests.add_domain_limiter(TimedLimiter('pushbullet.com', '5 seconds'))\n\n\nclass PushbulletNotifier:\n \"\"\"\n Example::\n\n notify:\n entries:\n via:\n pushbullet:\n apikey: <API_KEY>\n [device: <DEVICE_IDEN> (can also be a list of device ids, or don't specify any ids to send to all devices)]\n [email: <EMAIL_ADDRESS> (can also be a list of user email addresses)]\n [channel: <CHANNEL_TAG> (you can only specify device / email or channel tag, cannot use both)]\n\n Configuration parameters are also supported from entries (eg. through set).\n \"\"\"\n\n schema = {\n 'type': 'object',\n 'properties': {\n 'api_key': one_or_more({'type': 'string'}),\n 'device': one_or_more({'type': 'string'}),\n 'email': one_or_more({'type': 'string', 'format': 'email'}),\n 'url': {'type': 'string'},\n 'channel': {'type': 'string'},\n 'file_template': {'type': 'string'},\n },\n 'required': ['api_key'],\n 'oneOf': [\n {'required': ['device']},\n {'required': ['channel']},\n {'required': ['email']},\n {\n 'not': {\n 'anyOf': [\n {'required': ['device']},\n {'required': ['channel']},\n {'required': ['email']},\n ]\n }\n },\n ],\n 'error_oneOf': 'One (and only one) of `email`, `device` or `channel` are allowed.',\n 'additionalProperties': False,\n }\n\n def notify(self, title, message, config):\n \"\"\"\n Send a Pushbullet notification\n \"\"\"\n if config.get('device') and not isinstance(config['device'], list):\n config['device'] = [config['device']]\n\n if config.get('email') and not isinstance(config['email'], list):\n config['email'] = [config['email']]\n\n if not isinstance(config['api_key'], list):\n config['api_key'] = [config['api_key']]\n\n for key in config['api_key']:\n if config.get('channel'):\n self.send_push(\n key, title, message, config.get('url'), config.get('channel'), 'channel_tag'\n )\n elif config.get('device'):\n for d in config['device']:\n self.send_push(key, title, message, config.get('url'), d, 'device_iden')\n elif config.get('email'):\n for e in config['email']:\n self.send_push(key, title, message, config.get('url'), e, 'email')\n else:\n self.send_push(key, title, message, config.get('url'))\n\n def send_push(self, api_key, title, body, url=None, destination=None, destination_type=None):\n push_type = 'link' if url else 'note'\n notification = {'type': push_type, 'title': title, 'body': body}\n if url:\n notification['url'] = url\n if destination:\n notification[destination_type] = destination\n\n # Make the request\n headers = {\n 'Authorization': b'Basic ' + base64.b64encode(api_key.encode('ascii')),\n 'Content-Type': 'application/json',\n 'Accept': 'application/json',\n 'User-Agent': 'Flexget',\n }\n try:\n response = requests.post(PUSHBULLET_URL, headers=headers, 
json=notification)\n except RequestException as e:\n if e.response is not None:\n if e.response.status_code == 429:\n reset_time = e.response.headers.get('X-Ratelimit-Reset')\n if reset_time:\n reset_time = datetime.datetime.fromtimestamp(int(reset_time)).strftime(\n '%Y-%m-%d %H:%M:%S'\n )\n message = f'Monthly Pushbullet database operations limit reached. Next reset: {reset_time}'\n else:\n message = e.response.json()['error']['message']\n else:\n message = str(e)\n raise PluginWarning(message)\n\n reset_time = datetime.datetime.fromtimestamp(\n int(response.headers['X-Ratelimit-Reset'])\n ).strftime('%Y-%m-%d %H:%M:%S')\n remaining = response.headers['X-Ratelimit-Remaining']\n log.debug(\n 'Pushbullet notification sent. Database operations remaining until next reset: %s. '\n 'Next reset at: %s',\n remaining,\n reset_time,\n )\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(PushbulletNotifier, plugin_name, api_ver=2, interfaces=['notifiers'])\n", "path": "flexget/components/notify/notifiers/pushbullet.py"}]}
| 2,384 | 271 |
gh_patches_debug_6194
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-1067
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Content Security Policy frame-src should not extend "none"
The default CSP `frame-src` setting [we configure is `["'none'"]`](https://github.com/cal-itp/benefits/blob/dev/benefits/settings.py#L275).
Then we `.extend()` that list with any additional allowed frame sources. But this is not correct in CSP, either `'none'` or a list of domains should be provided, not both.
We saw this with reCAPTCHA complaining in the console in `dev`:

## To Reproduce
Steps to reproduce the behavior:
1. Go to `dev`
2. Get to the Benefits selection page
3. Open the browser console
4. See error
## Expected behavior
A clear and concise description of what you expected to happen.
## Additional context
This _may_ be causing an issue where reCAPTCHA is allowing the form to submit even when required fields are not provided:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/settings.py`
Content:
```
1 """
2 Django settings for benefits project.
3 """
4 import os
5 import benefits.logging
6
7
8 def _filter_empty(ls):
9 return [s for s in ls if s]
10
11
12 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
13 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
14
15 # SECURITY WARNING: keep the secret key used in production secret!
16 SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "secret")
17
18 # SECURITY WARNING: don't run with debug turned on in production!
19 DEBUG = os.environ.get("DJANGO_DEBUG", "False").lower() == "true"
20
21 ADMIN = os.environ.get("DJANGO_ADMIN", "False").lower() == "true"
22
23 ALLOWED_HOSTS = _filter_empty(os.environ.get("DJANGO_ALLOWED_HOSTS", "localhost,127.0.0.1").split(","))
24
25 # Application definition
26
27 INSTALLED_APPS = [
28 "django.contrib.messages",
29 "django.contrib.sessions",
30 "django.contrib.staticfiles",
31 "benefits.core",
32 "benefits.enrollment",
33 "benefits.eligibility",
34 "benefits.oauth",
35 ]
36
37 if ADMIN:
38 INSTALLED_APPS.extend(
39 [
40 "django.contrib.admin",
41 "django.contrib.auth",
42 "django.contrib.contenttypes",
43 ]
44 )
45
46 MIDDLEWARE = [
47 "django.middleware.security.SecurityMiddleware",
48 "django.contrib.sessions.middleware.SessionMiddleware",
49 "django.contrib.messages.middleware.MessageMiddleware",
50 "django.middleware.locale.LocaleMiddleware",
51 "benefits.core.middleware.Healthcheck",
52 "django.middleware.common.CommonMiddleware",
53 "django.middleware.csrf.CsrfViewMiddleware",
54 "django.middleware.clickjacking.XFrameOptionsMiddleware",
55 "csp.middleware.CSPMiddleware",
56 "benefits.core.middleware.ChangedLanguageEvent",
57 ]
58
59 if ADMIN:
60 MIDDLEWARE.extend(
61 [
62 "django.contrib.auth.middleware.AuthenticationMiddleware",
63 "django.contrib.messages.middleware.MessageMiddleware",
64 ]
65 )
66
67 if DEBUG:
68 MIDDLEWARE.append("benefits.core.middleware.DebugSession")
69
70
71 # Azure Insights
72 # https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-python-request#tracking-django-applications
73
74 ENABLE_AZURE_INSIGHTS = "APPLICATIONINSIGHTS_CONNECTION_STRING" in os.environ
75 print("ENABLE_AZURE_INSIGHTS: ", ENABLE_AZURE_INSIGHTS)
76 if ENABLE_AZURE_INSIGHTS:
77 MIDDLEWARE.extend(
78 [
79 "opencensus.ext.django.middleware.OpencensusMiddleware",
80 "benefits.core.middleware.LogErrorToAzure",
81 ]
82 )
83
84 # only used if enabled above
85 OPENCENSUS = {
86 "TRACE": {
87 "SAMPLER": "opencensus.trace.samplers.ProbabilitySampler(rate=1)",
88 "EXPORTER": "opencensus.ext.azure.trace_exporter.AzureExporter()",
89 }
90 }
91
92
93 CSRF_COOKIE_AGE = None
94 CSRF_COOKIE_SAMESITE = "Strict"
95 CSRF_COOKIE_HTTPONLY = True
96 CSRF_TRUSTED_ORIGINS = _filter_empty(os.environ.get("DJANGO_TRUSTED_ORIGINS", "http://localhost,http://127.0.0.1").split(","))
97
98 # With `Strict`, the user loses their Django session between leaving our app to
99 # sign in with OAuth, and coming back into our app from the OAuth redirect.
100 # This is because `Strict` disallows our cookie being sent from an external
101 # domain and so the session cookie is lost.
102 #
103 # `Lax` allows the cookie to travel with the user and be sent back to us by the
104 # OAuth server, as long as the request is "safe" i.e. GET
105 SESSION_COOKIE_SAMESITE = "Lax"
106 SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
107 SESSION_EXPIRE_AT_BROWSER_CLOSE = True
108 SESSION_COOKIE_NAME = "_benefitssessionid"
109
110 if not DEBUG:
111 CSRF_COOKIE_SECURE = True
112 CSRF_FAILURE_VIEW = "benefits.core.views.csrf_failure"
113 SESSION_COOKIE_SECURE = True
114
115 SECURE_BROWSER_XSS_FILTER = True
116
117 # required so that cross-origin pop-ups (like the enrollment overlay) have access to parent window context
118 # https://github.com/cal-itp/benefits/pull/793
119 SECURE_CROSS_ORIGIN_OPENER_POLICY = "same-origin-allow-popups"
120
121 # the NGINX reverse proxy sits in front of the application in deployed environments
122 # SSL terminates before getting to Django, and NGINX adds this header to indicate
123 # if the original request was secure or not
124 #
125 # See https://docs.djangoproject.com/en/4.0/ref/settings/#secure-proxy-ssl-header
126 if not DEBUG:
127 SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
128
129 ROOT_URLCONF = "benefits.urls"
130
131 template_ctx_processors = [
132 "django.template.context_processors.request",
133 "django.contrib.messages.context_processors.messages",
134 "benefits.core.context_processors.analytics",
135 "benefits.core.context_processors.authentication",
136 "benefits.core.context_processors.recaptcha",
137 ]
138
139 if DEBUG:
140 template_ctx_processors.extend(
141 [
142 "django.template.context_processors.debug",
143 "benefits.core.context_processors.debug",
144 ]
145 )
146
147 if ADMIN:
148 template_ctx_processors.extend(
149 [
150 "django.contrib.auth.context_processors.auth",
151 "django.contrib.messages.context_processors.messages",
152 ]
153 )
154
155 TEMPLATES = [
156 {
157 "BACKEND": "django.template.backends.django.DjangoTemplates",
158 "DIRS": [os.path.join(BASE_DIR, "benefits", "templates")],
159 "APP_DIRS": True,
160 "OPTIONS": {
161 "context_processors": template_ctx_processors,
162 },
163 },
164 ]
165
166 WSGI_APPLICATION = "benefits.wsgi.application"
167
168 DATABASES = {
169 "default": {
170 "ENGINE": "django.db.backends.sqlite3",
171 "NAME": "django.db",
172 }
173 }
174
175 LOAD_SAMPLE_DATA = os.environ.get("DJANGO_LOAD_SAMPLE_DATA", "true").lower() != "false"
176
177 # Password validation
178
179 AUTH_PASSWORD_VALIDATORS = []
180
181 if ADMIN:
182 AUTH_PASSWORD_VALIDATORS.extend(
183 [
184 {
185 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
186 },
187 {
188 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
189 },
190 {
191 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
192 },
193 {
194 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
195 },
196 ]
197 )
198
199 # Internationalization
200
201 LANGUAGE_CODE = "en"
202
203 LANGUAGE_COOKIE_HTTPONLY = True
204 LANGUAGE_COOKIE_SAMESITE = "Strict"
205 LANGUAGE_COOKIE_SECURE = True
206
207 LANGUAGES = [("en", "English"), ("es", "Español")]
208
209 LOCALE_PATHS = [os.path.join(BASE_DIR, "benefits", "locale")]
210
211 USE_I18N = True
212 USE_L10N = True
213
214 TIME_ZONE = "UTC"
215 USE_TZ = True
216
217 # Static files (CSS, JavaScript, Images)
218
219 STATIC_URL = "/static/"
220 STATICFILES_DIRS = [os.path.join(BASE_DIR, "benefits", "static")]
221 # use Manifest Static Files Storage by default
222 STATICFILES_STORAGE = os.environ.get(
223 "DJANGO_STATICFILES_STORAGE", "django.contrib.staticfiles.storage.ManifestStaticFilesStorage"
224 )
225 STATIC_ROOT = os.path.join(BASE_DIR, "static")
226
227 # Logging configuration
228 LOG_LEVEL = os.environ.get("DJANGO_LOG_LEVEL", "DEBUG" if DEBUG else "WARNING")
229 LOGGING = benefits.logging.get_config(LOG_LEVEL, enable_azure=ENABLE_AZURE_INSIGHTS)
230
231 # Analytics configuration
232
233 ANALYTICS_KEY = os.environ.get("ANALYTICS_KEY")
234
235 # rate limit configuration
236
237 # number of requests allowed in the given period
238 RATE_LIMIT = int(os.environ.get("DJANGO_RATE_LIMIT", 5))
239
240 # HTTP request methods to rate limit
241 RATE_LIMIT_METHODS = os.environ.get("DJANGO_RATE_LIMIT_METHODS", "POST").upper().split(",")
242
243 # number of seconds before additional requests are denied
244 RATE_LIMIT_PERIOD = int(os.environ.get("DJANGO_RATE_LIMIT_PERIOD", 60))
245
246 # Rate Limit feature flag
247 RATE_LIMIT_ENABLED = all((RATE_LIMIT > 0, len(RATE_LIMIT_METHODS) > 0, RATE_LIMIT_PERIOD > 0))
248
249 # reCAPTCHA configuration
250
251 RECAPTCHA_API_URL = os.environ.get("DJANGO_RECAPTCHA_API_URL", "https://www.google.com/recaptcha/api.js")
252 RECAPTCHA_SITE_KEY = os.environ.get("DJANGO_RECAPTCHA_SITE_KEY")
253 RECAPTCHA_SECRET_KEY = os.environ.get("DJANGO_RECAPTCHA_SECRET_KEY")
254 RECAPTCHA_VERIFY_URL = os.environ.get("DJANGO_RECAPTCHA_VERIFY_URL", "https://www.google.com/recaptcha/api/siteverify")
255 RECAPTCHA_ENABLED = all((RECAPTCHA_API_URL, RECAPTCHA_SITE_KEY, RECAPTCHA_SECRET_KEY, RECAPTCHA_VERIFY_URL))
256
257 # Content Security Policy
258 # Configuration docs at https://django-csp.readthedocs.io/en/latest/configuration.html
259
260 # In particular, note that the inner single-quotes are required!
261 # https://django-csp.readthedocs.io/en/latest/configuration.html#policy-settings
262
263 CSP_DEFAULT_SRC = ["'self'"]
264
265 CSP_CONNECT_SRC = ["'self'", "https://api.amplitude.com/"]
266 env_connect_src = _filter_empty(os.environ.get("DJANGO_CSP_CONNECT_SRC", "").split(","))
267 CSP_CONNECT_SRC.extend(env_connect_src)
268
269 CSP_FONT_SRC = ["'self'", "https://california.azureedge.net/", "https://fonts.gstatic.com/"]
270 env_font_src = _filter_empty(os.environ.get("DJANGO_CSP_FONT_SRC", "").split(","))
271 CSP_FONT_SRC.extend(env_font_src)
272
273 CSP_FRAME_ANCESTORS = ["'none'"]
274
275 CSP_FRAME_SRC = ["'none'"]
276 env_frame_src = _filter_empty(os.environ.get("DJANGO_CSP_FRAME_SRC", "").split(","))
277 CSP_FRAME_SRC.extend(env_frame_src)
278 if RECAPTCHA_ENABLED:
279 CSP_FRAME_SRC.append("https://www.google.com")
280
281
282 CSP_SCRIPT_SRC = [
283 "'unsafe-inline'",
284 "https://california.azureedge.net/",
285 "https://cdn.amplitude.com/libs/",
286 "https://code.jquery.com/",
287 "*.littlepay.com",
288 ]
289 env_script_src = _filter_empty(os.environ.get("DJANGO_CSP_SCRIPT_SRC", "").split(","))
290 CSP_SCRIPT_SRC.extend(env_script_src)
291 if RECAPTCHA_ENABLED:
292 CSP_SCRIPT_SRC.extend(["https://www.google.com/recaptcha/", "https://www.gstatic.com/recaptcha/releases/"])
293
294 CSP_STYLE_SRC = [
295 "'self'",
296 "'unsafe-inline'",
297 "https://california.azureedge.net/",
298 "https://fonts.googleapis.com/css",
299 ]
300 env_style_src = _filter_empty(os.environ.get("DJANGO_CSP_STYLE_SRC", "").split(","))
301 CSP_STYLE_SRC.extend(env_style_src)
302
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/benefits/settings.py b/benefits/settings.py
--- a/benefits/settings.py
+++ b/benefits/settings.py
@@ -274,10 +274,10 @@
CSP_FRAME_SRC = ["'none'"]
env_frame_src = _filter_empty(os.environ.get("DJANGO_CSP_FRAME_SRC", "").split(","))
-CSP_FRAME_SRC.extend(env_frame_src)
if RECAPTCHA_ENABLED:
- CSP_FRAME_SRC.append("https://www.google.com")
-
+ env_frame_src.append("https://www.google.com")
+if len(env_frame_src) > 0:
+ CSP_FRAME_SRC = env_frame_src
CSP_SCRIPT_SRC = [
"'unsafe-inline'",
|
{"golden_diff": "diff --git a/benefits/settings.py b/benefits/settings.py\n--- a/benefits/settings.py\n+++ b/benefits/settings.py\n@@ -274,10 +274,10 @@\n \n CSP_FRAME_SRC = [\"'none'\"]\n env_frame_src = _filter_empty(os.environ.get(\"DJANGO_CSP_FRAME_SRC\", \"\").split(\",\"))\n-CSP_FRAME_SRC.extend(env_frame_src)\n if RECAPTCHA_ENABLED:\n- CSP_FRAME_SRC.append(\"https://www.google.com\")\n-\n+ env_frame_src.append(\"https://www.google.com\")\n+if len(env_frame_src) > 0:\n+ CSP_FRAME_SRC = env_frame_src\n \n CSP_SCRIPT_SRC = [\n \"'unsafe-inline'\",\n", "issue": "Content Security Policy frame-src should not extend \"none\"\nThe default CSP `frame-src` setting [we configure is `[\"'none'\"]`](https://github.com/cal-itp/benefits/blob/dev/benefits/settings.py#L275).\r\n\r\nThen we `.extend()` that list with any additional allowed frame sources. But this is not correct in CSP, either `'none'` or a list of domains should be provided, not both.\r\n\r\nWe saw this with reCAPTCHA complaining in the console in `dev`:\r\n\r\n\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Go to `dev`\r\n2. Get to the Benefits selection page\r\n3. Open the browser console\r\n4. See error\r\n\r\n## Expected behavior\r\n\r\nA clear and concise description of what you expected to happen.\r\n\r\n## Additional context\r\n\r\nThis _may_ be causing an issue where reCAPTCHA is allowing the form to submit even when required fields are not provided:\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nDjango settings for benefits project.\n\"\"\"\nimport os\nimport benefits.logging\n\n\ndef _filter_empty(ls):\n return [s for s in ls if s]\n\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ.get(\"DJANGO_SECRET_KEY\", \"secret\")\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = os.environ.get(\"DJANGO_DEBUG\", \"False\").lower() == \"true\"\n\nADMIN = os.environ.get(\"DJANGO_ADMIN\", \"False\").lower() == \"true\"\n\nALLOWED_HOSTS = _filter_empty(os.environ.get(\"DJANGO_ALLOWED_HOSTS\", \"localhost,127.0.0.1\").split(\",\"))\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.messages\",\n \"django.contrib.sessions\",\n \"django.contrib.staticfiles\",\n \"benefits.core\",\n \"benefits.enrollment\",\n \"benefits.eligibility\",\n \"benefits.oauth\",\n]\n\nif ADMIN:\n INSTALLED_APPS.extend(\n [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n ]\n )\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"benefits.core.middleware.Healthcheck\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"csp.middleware.CSPMiddleware\",\n \"benefits.core.middleware.ChangedLanguageEvent\",\n]\n\nif ADMIN:\n MIDDLEWARE.extend(\n [\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n ]\n )\n\nif DEBUG:\n MIDDLEWARE.append(\"benefits.core.middleware.DebugSession\")\n\n\n# Azure Insights\n# 
https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-python-request#tracking-django-applications\n\nENABLE_AZURE_INSIGHTS = \"APPLICATIONINSIGHTS_CONNECTION_STRING\" in os.environ\nprint(\"ENABLE_AZURE_INSIGHTS: \", ENABLE_AZURE_INSIGHTS)\nif ENABLE_AZURE_INSIGHTS:\n MIDDLEWARE.extend(\n [\n \"opencensus.ext.django.middleware.OpencensusMiddleware\",\n \"benefits.core.middleware.LogErrorToAzure\",\n ]\n )\n\n# only used if enabled above\nOPENCENSUS = {\n \"TRACE\": {\n \"SAMPLER\": \"opencensus.trace.samplers.ProbabilitySampler(rate=1)\",\n \"EXPORTER\": \"opencensus.ext.azure.trace_exporter.AzureExporter()\",\n }\n}\n\n\nCSRF_COOKIE_AGE = None\nCSRF_COOKIE_SAMESITE = \"Strict\"\nCSRF_COOKIE_HTTPONLY = True\nCSRF_TRUSTED_ORIGINS = _filter_empty(os.environ.get(\"DJANGO_TRUSTED_ORIGINS\", \"http://localhost,http://127.0.0.1\").split(\",\"))\n\n# With `Strict`, the user loses their Django session between leaving our app to\n# sign in with OAuth, and coming back into our app from the OAuth redirect.\n# This is because `Strict` disallows our cookie being sent from an external\n# domain and so the session cookie is lost.\n#\n# `Lax` allows the cookie to travel with the user and be sent back to us by the\n# OAuth server, as long as the request is \"safe\" i.e. GET\nSESSION_COOKIE_SAMESITE = \"Lax\"\nSESSION_ENGINE = \"django.contrib.sessions.backends.signed_cookies\"\nSESSION_EXPIRE_AT_BROWSER_CLOSE = True\nSESSION_COOKIE_NAME = \"_benefitssessionid\"\n\nif not DEBUG:\n CSRF_COOKIE_SECURE = True\n CSRF_FAILURE_VIEW = \"benefits.core.views.csrf_failure\"\n SESSION_COOKIE_SECURE = True\n\nSECURE_BROWSER_XSS_FILTER = True\n\n# required so that cross-origin pop-ups (like the enrollment overlay) have access to parent window context\n# https://github.com/cal-itp/benefits/pull/793\nSECURE_CROSS_ORIGIN_OPENER_POLICY = \"same-origin-allow-popups\"\n\n# the NGINX reverse proxy sits in front of the application in deployed environments\n# SSL terminates before getting to Django, and NGINX adds this header to indicate\n# if the original request was secure or not\n#\n# See https://docs.djangoproject.com/en/4.0/ref/settings/#secure-proxy-ssl-header\nif not DEBUG:\n SECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n\nROOT_URLCONF = \"benefits.urls\"\n\ntemplate_ctx_processors = [\n \"django.template.context_processors.request\",\n \"django.contrib.messages.context_processors.messages\",\n \"benefits.core.context_processors.analytics\",\n \"benefits.core.context_processors.authentication\",\n \"benefits.core.context_processors.recaptcha\",\n]\n\nif DEBUG:\n template_ctx_processors.extend(\n [\n \"django.template.context_processors.debug\",\n \"benefits.core.context_processors.debug\",\n ]\n )\n\nif ADMIN:\n template_ctx_processors.extend(\n [\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ]\n )\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"benefits\", \"templates\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": template_ctx_processors,\n },\n },\n]\n\nWSGI_APPLICATION = \"benefits.wsgi.application\"\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": \"django.db\",\n }\n}\n\nLOAD_SAMPLE_DATA = os.environ.get(\"DJANGO_LOAD_SAMPLE_DATA\", \"true\").lower() != \"false\"\n\n# Password validation\n\nAUTH_PASSWORD_VALIDATORS = []\n\nif ADMIN:\n AUTH_PASSWORD_VALIDATORS.extend(\n [\n 
{\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n ]\n )\n\n# Internationalization\n\nLANGUAGE_CODE = \"en\"\n\nLANGUAGE_COOKIE_HTTPONLY = True\nLANGUAGE_COOKIE_SAMESITE = \"Strict\"\nLANGUAGE_COOKIE_SECURE = True\n\nLANGUAGES = [(\"en\", \"English\"), (\"es\", \"Espa\u00f1ol\")]\n\nLOCALE_PATHS = [os.path.join(BASE_DIR, \"benefits\", \"locale\")]\n\nUSE_I18N = True\nUSE_L10N = True\n\nTIME_ZONE = \"UTC\"\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n\nSTATIC_URL = \"/static/\"\nSTATICFILES_DIRS = [os.path.join(BASE_DIR, \"benefits\", \"static\")]\n# use Manifest Static Files Storage by default\nSTATICFILES_STORAGE = os.environ.get(\n \"DJANGO_STATICFILES_STORAGE\", \"django.contrib.staticfiles.storage.ManifestStaticFilesStorage\"\n)\nSTATIC_ROOT = os.path.join(BASE_DIR, \"static\")\n\n# Logging configuration\nLOG_LEVEL = os.environ.get(\"DJANGO_LOG_LEVEL\", \"DEBUG\" if DEBUG else \"WARNING\")\nLOGGING = benefits.logging.get_config(LOG_LEVEL, enable_azure=ENABLE_AZURE_INSIGHTS)\n\n# Analytics configuration\n\nANALYTICS_KEY = os.environ.get(\"ANALYTICS_KEY\")\n\n# rate limit configuration\n\n# number of requests allowed in the given period\nRATE_LIMIT = int(os.environ.get(\"DJANGO_RATE_LIMIT\", 5))\n\n# HTTP request methods to rate limit\nRATE_LIMIT_METHODS = os.environ.get(\"DJANGO_RATE_LIMIT_METHODS\", \"POST\").upper().split(\",\")\n\n# number of seconds before additional requests are denied\nRATE_LIMIT_PERIOD = int(os.environ.get(\"DJANGO_RATE_LIMIT_PERIOD\", 60))\n\n# Rate Limit feature flag\nRATE_LIMIT_ENABLED = all((RATE_LIMIT > 0, len(RATE_LIMIT_METHODS) > 0, RATE_LIMIT_PERIOD > 0))\n\n# reCAPTCHA configuration\n\nRECAPTCHA_API_URL = os.environ.get(\"DJANGO_RECAPTCHA_API_URL\", \"https://www.google.com/recaptcha/api.js\")\nRECAPTCHA_SITE_KEY = os.environ.get(\"DJANGO_RECAPTCHA_SITE_KEY\")\nRECAPTCHA_SECRET_KEY = os.environ.get(\"DJANGO_RECAPTCHA_SECRET_KEY\")\nRECAPTCHA_VERIFY_URL = os.environ.get(\"DJANGO_RECAPTCHA_VERIFY_URL\", \"https://www.google.com/recaptcha/api/siteverify\")\nRECAPTCHA_ENABLED = all((RECAPTCHA_API_URL, RECAPTCHA_SITE_KEY, RECAPTCHA_SECRET_KEY, RECAPTCHA_VERIFY_URL))\n\n# Content Security Policy\n# Configuration docs at https://django-csp.readthedocs.io/en/latest/configuration.html\n\n# In particular, note that the inner single-quotes are required!\n# https://django-csp.readthedocs.io/en/latest/configuration.html#policy-settings\n\nCSP_DEFAULT_SRC = [\"'self'\"]\n\nCSP_CONNECT_SRC = [\"'self'\", \"https://api.amplitude.com/\"]\nenv_connect_src = _filter_empty(os.environ.get(\"DJANGO_CSP_CONNECT_SRC\", \"\").split(\",\"))\nCSP_CONNECT_SRC.extend(env_connect_src)\n\nCSP_FONT_SRC = [\"'self'\", \"https://california.azureedge.net/\", \"https://fonts.gstatic.com/\"]\nenv_font_src = _filter_empty(os.environ.get(\"DJANGO_CSP_FONT_SRC\", \"\").split(\",\"))\nCSP_FONT_SRC.extend(env_font_src)\n\nCSP_FRAME_ANCESTORS = [\"'none'\"]\n\nCSP_FRAME_SRC = [\"'none'\"]\nenv_frame_src = _filter_empty(os.environ.get(\"DJANGO_CSP_FRAME_SRC\", \"\").split(\",\"))\nCSP_FRAME_SRC.extend(env_frame_src)\nif RECAPTCHA_ENABLED:\n CSP_FRAME_SRC.append(\"https://www.google.com\")\n\n\nCSP_SCRIPT_SRC = [\n \"'unsafe-inline'\",\n 
\"https://california.azureedge.net/\",\n \"https://cdn.amplitude.com/libs/\",\n \"https://code.jquery.com/\",\n \"*.littlepay.com\",\n]\nenv_script_src = _filter_empty(os.environ.get(\"DJANGO_CSP_SCRIPT_SRC\", \"\").split(\",\"))\nCSP_SCRIPT_SRC.extend(env_script_src)\nif RECAPTCHA_ENABLED:\n CSP_SCRIPT_SRC.extend([\"https://www.google.com/recaptcha/\", \"https://www.gstatic.com/recaptcha/releases/\"])\n\nCSP_STYLE_SRC = [\n \"'self'\",\n \"'unsafe-inline'\",\n \"https://california.azureedge.net/\",\n \"https://fonts.googleapis.com/css\",\n]\nenv_style_src = _filter_empty(os.environ.get(\"DJANGO_CSP_STYLE_SRC\", \"\").split(\",\"))\nCSP_STYLE_SRC.extend(env_style_src)\n", "path": "benefits/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for benefits project.\n\"\"\"\nimport os\nimport benefits.logging\n\n\ndef _filter_empty(ls):\n return [s for s in ls if s]\n\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ.get(\"DJANGO_SECRET_KEY\", \"secret\")\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = os.environ.get(\"DJANGO_DEBUG\", \"False\").lower() == \"true\"\n\nADMIN = os.environ.get(\"DJANGO_ADMIN\", \"False\").lower() == \"true\"\n\nALLOWED_HOSTS = _filter_empty(os.environ.get(\"DJANGO_ALLOWED_HOSTS\", \"localhost,127.0.0.1\").split(\",\"))\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.messages\",\n \"django.contrib.sessions\",\n \"django.contrib.staticfiles\",\n \"benefits.core\",\n \"benefits.enrollment\",\n \"benefits.eligibility\",\n \"benefits.oauth\",\n]\n\nif ADMIN:\n INSTALLED_APPS.extend(\n [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n ]\n )\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"benefits.core.middleware.Healthcheck\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"csp.middleware.CSPMiddleware\",\n \"benefits.core.middleware.ChangedLanguageEvent\",\n]\n\nif ADMIN:\n MIDDLEWARE.extend(\n [\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n ]\n )\n\nif DEBUG:\n MIDDLEWARE.append(\"benefits.core.middleware.DebugSession\")\n\n\n# Azure Insights\n# https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-python-request#tracking-django-applications\n\nENABLE_AZURE_INSIGHTS = \"APPLICATIONINSIGHTS_CONNECTION_STRING\" in os.environ\nprint(\"ENABLE_AZURE_INSIGHTS: \", ENABLE_AZURE_INSIGHTS)\nif ENABLE_AZURE_INSIGHTS:\n MIDDLEWARE.extend(\n [\n \"opencensus.ext.django.middleware.OpencensusMiddleware\",\n \"benefits.core.middleware.LogErrorToAzure\",\n ]\n )\n\n# only used if enabled above\nOPENCENSUS = {\n \"TRACE\": {\n \"SAMPLER\": \"opencensus.trace.samplers.ProbabilitySampler(rate=1)\",\n \"EXPORTER\": \"opencensus.ext.azure.trace_exporter.AzureExporter()\",\n }\n}\n\n\nCSRF_COOKIE_AGE = None\nCSRF_COOKIE_SAMESITE = \"Strict\"\nCSRF_COOKIE_HTTPONLY = True\nCSRF_TRUSTED_ORIGINS = _filter_empty(os.environ.get(\"DJANGO_TRUSTED_ORIGINS\", 
\"http://localhost,http://127.0.0.1\").split(\",\"))\n\n# With `Strict`, the user loses their Django session between leaving our app to\n# sign in with OAuth, and coming back into our app from the OAuth redirect.\n# This is because `Strict` disallows our cookie being sent from an external\n# domain and so the session cookie is lost.\n#\n# `Lax` allows the cookie to travel with the user and be sent back to us by the\n# OAuth server, as long as the request is \"safe\" i.e. GET\nSESSION_COOKIE_SAMESITE = \"Lax\"\nSESSION_ENGINE = \"django.contrib.sessions.backends.signed_cookies\"\nSESSION_EXPIRE_AT_BROWSER_CLOSE = True\nSESSION_COOKIE_NAME = \"_benefitssessionid\"\n\nif not DEBUG:\n CSRF_COOKIE_SECURE = True\n CSRF_FAILURE_VIEW = \"benefits.core.views.csrf_failure\"\n SESSION_COOKIE_SECURE = True\n\nSECURE_BROWSER_XSS_FILTER = True\n\n# required so that cross-origin pop-ups (like the enrollment overlay) have access to parent window context\n# https://github.com/cal-itp/benefits/pull/793\nSECURE_CROSS_ORIGIN_OPENER_POLICY = \"same-origin-allow-popups\"\n\n# the NGINX reverse proxy sits in front of the application in deployed environments\n# SSL terminates before getting to Django, and NGINX adds this header to indicate\n# if the original request was secure or not\n#\n# See https://docs.djangoproject.com/en/4.0/ref/settings/#secure-proxy-ssl-header\nif not DEBUG:\n SECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n\nROOT_URLCONF = \"benefits.urls\"\n\ntemplate_ctx_processors = [\n \"django.template.context_processors.request\",\n \"django.contrib.messages.context_processors.messages\",\n \"benefits.core.context_processors.analytics\",\n \"benefits.core.context_processors.authentication\",\n \"benefits.core.context_processors.recaptcha\",\n]\n\nif DEBUG:\n template_ctx_processors.extend(\n [\n \"django.template.context_processors.debug\",\n \"benefits.core.context_processors.debug\",\n ]\n )\n\nif ADMIN:\n template_ctx_processors.extend(\n [\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ]\n )\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"benefits\", \"templates\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": template_ctx_processors,\n },\n },\n]\n\nWSGI_APPLICATION = \"benefits.wsgi.application\"\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": \"django.db\",\n }\n}\n\nLOAD_SAMPLE_DATA = os.environ.get(\"DJANGO_LOAD_SAMPLE_DATA\", \"true\").lower() != \"false\"\n\n# Password validation\n\nAUTH_PASSWORD_VALIDATORS = []\n\nif ADMIN:\n AUTH_PASSWORD_VALIDATORS.extend(\n [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n ]\n )\n\n# Internationalization\n\nLANGUAGE_CODE = \"en\"\n\nLANGUAGE_COOKIE_HTTPONLY = True\nLANGUAGE_COOKIE_SAMESITE = \"Strict\"\nLANGUAGE_COOKIE_SECURE = True\n\nLANGUAGES = [(\"en\", \"English\"), (\"es\", \"Espa\u00f1ol\")]\n\nLOCALE_PATHS = [os.path.join(BASE_DIR, \"benefits\", \"locale\")]\n\nUSE_I18N = True\nUSE_L10N = True\n\nTIME_ZONE = \"UTC\"\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n\nSTATIC_URL = 
\"/static/\"\nSTATICFILES_DIRS = [os.path.join(BASE_DIR, \"benefits\", \"static\")]\n# use Manifest Static Files Storage by default\nSTATICFILES_STORAGE = os.environ.get(\n \"DJANGO_STATICFILES_STORAGE\", \"django.contrib.staticfiles.storage.ManifestStaticFilesStorage\"\n)\nSTATIC_ROOT = os.path.join(BASE_DIR, \"static\")\n\n# Logging configuration\nLOG_LEVEL = os.environ.get(\"DJANGO_LOG_LEVEL\", \"DEBUG\" if DEBUG else \"WARNING\")\nLOGGING = benefits.logging.get_config(LOG_LEVEL, enable_azure=ENABLE_AZURE_INSIGHTS)\n\n# Analytics configuration\n\nANALYTICS_KEY = os.environ.get(\"ANALYTICS_KEY\")\n\n# rate limit configuration\n\n# number of requests allowed in the given period\nRATE_LIMIT = int(os.environ.get(\"DJANGO_RATE_LIMIT\", 5))\n\n# HTTP request methods to rate limit\nRATE_LIMIT_METHODS = os.environ.get(\"DJANGO_RATE_LIMIT_METHODS\", \"POST\").upper().split(\",\")\n\n# number of seconds before additional requests are denied\nRATE_LIMIT_PERIOD = int(os.environ.get(\"DJANGO_RATE_LIMIT_PERIOD\", 60))\n\n# Rate Limit feature flag\nRATE_LIMIT_ENABLED = all((RATE_LIMIT > 0, len(RATE_LIMIT_METHODS) > 0, RATE_LIMIT_PERIOD > 0))\n\n# reCAPTCHA configuration\n\nRECAPTCHA_API_URL = os.environ.get(\"DJANGO_RECAPTCHA_API_URL\", \"https://www.google.com/recaptcha/api.js\")\nRECAPTCHA_SITE_KEY = os.environ.get(\"DJANGO_RECAPTCHA_SITE_KEY\")\nRECAPTCHA_SECRET_KEY = os.environ.get(\"DJANGO_RECAPTCHA_SECRET_KEY\")\nRECAPTCHA_VERIFY_URL = os.environ.get(\"DJANGO_RECAPTCHA_VERIFY_URL\", \"https://www.google.com/recaptcha/api/siteverify\")\nRECAPTCHA_ENABLED = all((RECAPTCHA_API_URL, RECAPTCHA_SITE_KEY, RECAPTCHA_SECRET_KEY, RECAPTCHA_VERIFY_URL))\n\n# Content Security Policy\n# Configuration docs at https://django-csp.readthedocs.io/en/latest/configuration.html\n\n# In particular, note that the inner single-quotes are required!\n# https://django-csp.readthedocs.io/en/latest/configuration.html#policy-settings\n\nCSP_DEFAULT_SRC = [\"'self'\"]\n\nCSP_CONNECT_SRC = [\"'self'\", \"https://api.amplitude.com/\"]\nenv_connect_src = _filter_empty(os.environ.get(\"DJANGO_CSP_CONNECT_SRC\", \"\").split(\",\"))\nCSP_CONNECT_SRC.extend(env_connect_src)\n\nCSP_FONT_SRC = [\"'self'\", \"https://california.azureedge.net/\", \"https://fonts.gstatic.com/\"]\nenv_font_src = _filter_empty(os.environ.get(\"DJANGO_CSP_FONT_SRC\", \"\").split(\",\"))\nCSP_FONT_SRC.extend(env_font_src)\n\nCSP_FRAME_ANCESTORS = [\"'none'\"]\n\nCSP_FRAME_SRC = [\"'none'\"]\nenv_frame_src = _filter_empty(os.environ.get(\"DJANGO_CSP_FRAME_SRC\", \"\").split(\",\"))\nif RECAPTCHA_ENABLED:\n env_frame_src.append(\"https://www.google.com\")\nif len(env_frame_src) > 0:\n CSP_FRAME_SRC = env_frame_src\n\nCSP_SCRIPT_SRC = [\n \"'unsafe-inline'\",\n \"https://california.azureedge.net/\",\n \"https://cdn.amplitude.com/libs/\",\n \"https://code.jquery.com/\",\n \"*.littlepay.com\",\n]\nenv_script_src = _filter_empty(os.environ.get(\"DJANGO_CSP_SCRIPT_SRC\", \"\").split(\",\"))\nCSP_SCRIPT_SRC.extend(env_script_src)\nif RECAPTCHA_ENABLED:\n CSP_SCRIPT_SRC.extend([\"https://www.google.com/recaptcha/\", \"https://www.gstatic.com/recaptcha/releases/\"])\n\nCSP_STYLE_SRC = [\n \"'self'\",\n \"'unsafe-inline'\",\n \"https://california.azureedge.net/\",\n \"https://fonts.googleapis.com/css\",\n]\nenv_style_src = _filter_empty(os.environ.get(\"DJANGO_CSP_STYLE_SRC\", \"\").split(\",\"))\nCSP_STYLE_SRC.extend(env_style_src)\n", "path": "benefits/settings.py"}]}
| 3,693 | 153 |
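For reference, the CSP settings in the row above all follow one pattern: read a comma-separated environment variable, drop empty entries with `_filter_empty`, and extend a built-in default list. A standalone sketch of that pattern, assuming the same helper and environment-variable name as the snippet (the default list here is illustrative, not the project's real value):

```python
import os


def _filter_empty(ls):
    # Splitting an unset env var yields [''] — filter that out.
    return [s for s in ls if s]


# Illustrative defaults; the real settings module defines longer lists.
CSP_STYLE_SRC = ["'self'", "'unsafe-inline'"]

env_style_src = _filter_empty(os.environ.get("DJANGO_CSP_STYLE_SRC", "").split(","))
CSP_STYLE_SRC.extend(env_style_src)  # no env var set -> nothing appended
```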
gh_patches_debug_20590
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-795
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove passwords from error messages
EvaP sends error messages to the admins when a server error occurs. Post data sent to the page where the error occurred will be included in the message, but passwords must not be included in these messages.
This currently happens e.g. when a user is logging in who has two accounts like described in #791.
Remove passwords from error messages
EvaP sends error messages to the admins when a server error occurs. Post data sent to the page where the error occurred will be included in the message, but passwords must not be included in these messages.
This currently happens e.g. when a user is logging in who has two accounts like described in #791.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/evaluation/views.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.contrib import messages
5 from django.contrib.auth import login as auth_login
6 from django.contrib.auth.decorators import login_required
7 from django.core.mail import EmailMessage
8 from django.http import HttpResponse
9 from django.shortcuts import redirect, render
10 from django.utils.translation import ugettext as _
11 from django.core.urlresolvers import resolve, Resolver404
12 from django.views.decorators.http import require_POST
13
14 from evap.evaluation.forms import NewKeyForm, LoginUsernameForm
15 from evap.evaluation.models import UserProfile, FaqSection, EmailTemplate, Semester
16
17 logger = logging.getLogger(__name__)
18
19
20 def index(request):
21 """Main entry page into EvaP providing all the login options available. The username/password
22 login is thought to be used for internal users, e.g. by connecting to a LDAP directory.
23 The login key mechanism is meant to be used to include external participants, e.g. visiting
24 students or visiting contributors.
25 """
26
27 # parse the form data into the respective form
28 submit_type = request.POST.get("submit_type", "no_submit")
29 new_key_form = NewKeyForm(request.POST if submit_type == "new_key" else None)
30 login_username_form = LoginUsernameForm(request, request.POST if submit_type == "login_username" else None)
31
32 # process form data
33 if request.method == 'POST':
34 if new_key_form.is_valid():
35 # user wants a new login key
36 profile = new_key_form.get_user()
37 profile.generate_login_key()
38 profile.save()
39
40 EmailTemplate.send_login_url_to_user(new_key_form.get_user())
41
42 messages.success(request, _("We sent you an email with a one-time login URL. Please check your inbox."))
43 return redirect('evaluation:index')
44 elif login_username_form.is_valid():
45 # user would like to login with username and password and passed password test
46 auth_login(request, login_username_form.get_user())
47
48 # clean up our test cookie
49 if request.session.test_cookie_worked():
50 request.session.delete_test_cookie()
51
52 # if not logged in by now, render form
53 if not request.user.is_authenticated():
54 # set test cookie to verify whether they work in the next step
55 request.session.set_test_cookie()
56
57 template_data = dict(new_key_form=new_key_form, login_username_form=login_username_form)
58 return render(request, "index.html", template_data)
59 else:
60 user, __ = UserProfile.objects.get_or_create(username=request.user.username)
61
62 # check for redirect variable
63 redirect_to = request.GET.get("next", None)
64 if redirect_to is not None:
65 if redirect_to.startswith("/staff/"):
66 if request.user.is_staff:
67 return redirect(redirect_to)
68 elif redirect_to.startswith("/grades/"):
69 if request.user.is_grade_publisher:
70 return redirect(redirect_to)
71 elif redirect_to.startswith("/contributor/"):
72 if user.is_contributor:
73 return redirect(redirect_to)
74 elif redirect_to.startswith("/student/"):
75 if user.is_participant:
76 return redirect(redirect_to)
77 else:
78 try:
79 resolve(redirect_to)
80 except Resolver404:
81 pass
82 else:
83 return redirect(redirect_to)
84
85 # redirect user to appropriate start page
86 if request.user.is_staff:
87 return redirect('staff:index')
88 elif request.user.is_grade_publisher:
89 return redirect('grades:semester_view', Semester.active_semester().id)
90 elif user.is_student:
91 return redirect('student:index')
92 elif user.is_contributor_or_delegate:
93 return redirect('contributor:index')
94 elif user.is_participant:
95 return redirect('student:index')
96 else:
97 return redirect('results:index')
98
99
100 def faq(request):
101 return render(request, "faq.html", dict(sections=FaqSection.objects.all()))
102
103
104 def legal_notice(request):
105 return render(request, "legal_notice.html", dict())
106
107
108 @require_POST
109 @login_required
110 def feedback_send(request):
111 sender_email = request.user.email
112 message = request.POST.get("message")
113 subject = "Feedback from {}".format(sender_email)
114
115 if message:
116 mail = EmailMessage(
117 subject=subject,
118 body=message,
119 to=[settings.FEEDBACK_EMAIL])
120
121 try:
122 mail.send()
123 logger.info('Sent feedback email: \n{}\n'.format(mail.message()))
124 except Exception:
125 logger.exception('An exception occurred when sending the following feedback email:\n{}\n'.format(mail.message()))
126
127 return HttpResponse()
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/evap/evaluation/views.py b/evap/evaluation/views.py
--- a/evap/evaluation/views.py
+++ b/evap/evaluation/views.py
@@ -10,6 +10,7 @@
from django.utils.translation import ugettext as _
from django.core.urlresolvers import resolve, Resolver404
from django.views.decorators.http import require_POST
+from django.views.decorators.debug import sensitive_post_parameters
from evap.evaluation.forms import NewKeyForm, LoginUsernameForm
from evap.evaluation.models import UserProfile, FaqSection, EmailTemplate, Semester
@@ -17,6 +18,7 @@
logger = logging.getLogger(__name__)
+@sensitive_post_parameters("password")
def index(request):
"""Main entry page into EvaP providing all the login options available. The username/password
login is thought to be used for internal users, e.g. by connecting to a LDAP directory.
|
{"golden_diff": "diff --git a/evap/evaluation/views.py b/evap/evaluation/views.py\n--- a/evap/evaluation/views.py\n+++ b/evap/evaluation/views.py\n@@ -10,6 +10,7 @@\n from django.utils.translation import ugettext as _\n from django.core.urlresolvers import resolve, Resolver404\n from django.views.decorators.http import require_POST\n+from django.views.decorators.debug import sensitive_post_parameters\n \n from evap.evaluation.forms import NewKeyForm, LoginUsernameForm\n from evap.evaluation.models import UserProfile, FaqSection, EmailTemplate, Semester\n@@ -17,6 +18,7 @@\n logger = logging.getLogger(__name__)\n \n \n+@sensitive_post_parameters(\"password\")\n def index(request):\n \"\"\"Main entry page into EvaP providing all the login options available. The username/password\n login is thought to be used for internal users, e.g. by connecting to a LDAP directory.\n", "issue": "Remove passwords from error messages\nEvaP sends error messages to the admins when a server error occurs. Post data sent to the page where the error occurred will be included in the message, but passwords must not be included in these messages.\nThis currently happens e.g. when a user is logging in who has two accounts like described in #791.\n\nRemove passwords from error messages\nEvaP sends error messages to the admins when a server error occurs. Post data sent to the page where the error occurred will be included in the message, but passwords must not be included in these messages.\nThis currently happens e.g. when a user is logging in who has two accounts like described in #791.\n\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth import login as auth_login\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.mail import EmailMessage\nfrom django.http import HttpResponse\nfrom django.shortcuts import redirect, render\nfrom django.utils.translation import ugettext as _\nfrom django.core.urlresolvers import resolve, Resolver404\nfrom django.views.decorators.http import require_POST\n\nfrom evap.evaluation.forms import NewKeyForm, LoginUsernameForm\nfrom evap.evaluation.models import UserProfile, FaqSection, EmailTemplate, Semester\n\nlogger = logging.getLogger(__name__)\n\n\ndef index(request):\n \"\"\"Main entry page into EvaP providing all the login options available. The username/password\n login is thought to be used for internal users, e.g. by connecting to a LDAP directory.\n The login key mechanism is meant to be used to include external participants, e.g. visiting\n students or visiting contributors.\n \"\"\"\n\n # parse the form data into the respective form\n submit_type = request.POST.get(\"submit_type\", \"no_submit\")\n new_key_form = NewKeyForm(request.POST if submit_type == \"new_key\" else None)\n login_username_form = LoginUsernameForm(request, request.POST if submit_type == \"login_username\" else None)\n\n # process form data\n if request.method == 'POST':\n if new_key_form.is_valid():\n # user wants a new login key\n profile = new_key_form.get_user()\n profile.generate_login_key()\n profile.save()\n\n EmailTemplate.send_login_url_to_user(new_key_form.get_user())\n\n messages.success(request, _(\"We sent you an email with a one-time login URL. 
Please check your inbox.\"))\n return redirect('evaluation:index')\n elif login_username_form.is_valid():\n # user would like to login with username and password and passed password test\n auth_login(request, login_username_form.get_user())\n\n # clean up our test cookie\n if request.session.test_cookie_worked():\n request.session.delete_test_cookie()\n\n # if not logged in by now, render form\n if not request.user.is_authenticated():\n # set test cookie to verify whether they work in the next step\n request.session.set_test_cookie()\n\n template_data = dict(new_key_form=new_key_form, login_username_form=login_username_form)\n return render(request, \"index.html\", template_data)\n else:\n user, __ = UserProfile.objects.get_or_create(username=request.user.username)\n\n # check for redirect variable\n redirect_to = request.GET.get(\"next\", None)\n if redirect_to is not None:\n if redirect_to.startswith(\"/staff/\"):\n if request.user.is_staff:\n return redirect(redirect_to)\n elif redirect_to.startswith(\"/grades/\"):\n if request.user.is_grade_publisher:\n return redirect(redirect_to)\n elif redirect_to.startswith(\"/contributor/\"):\n if user.is_contributor:\n return redirect(redirect_to)\n elif redirect_to.startswith(\"/student/\"):\n if user.is_participant:\n return redirect(redirect_to)\n else:\n try:\n resolve(redirect_to)\n except Resolver404:\n pass\n else:\n return redirect(redirect_to)\n\n # redirect user to appropriate start page\n if request.user.is_staff:\n return redirect('staff:index')\n elif request.user.is_grade_publisher:\n return redirect('grades:semester_view', Semester.active_semester().id)\n elif user.is_student:\n return redirect('student:index')\n elif user.is_contributor_or_delegate:\n return redirect('contributor:index')\n elif user.is_participant:\n return redirect('student:index')\n else:\n return redirect('results:index')\n\n\ndef faq(request):\n return render(request, \"faq.html\", dict(sections=FaqSection.objects.all()))\n\n\ndef legal_notice(request):\n return render(request, \"legal_notice.html\", dict())\n\n\n@require_POST\n@login_required\ndef feedback_send(request):\n sender_email = request.user.email\n message = request.POST.get(\"message\")\n subject = \"Feedback from {}\".format(sender_email)\n\n if message:\n mail = EmailMessage(\n subject=subject,\n body=message,\n to=[settings.FEEDBACK_EMAIL])\n\n try:\n mail.send()\n logger.info('Sent feedback email: \\n{}\\n'.format(mail.message()))\n except Exception:\n logger.exception('An exception occurred when sending the following feedback email:\\n{}\\n'.format(mail.message()))\n\n return HttpResponse()\n", "path": "evap/evaluation/views.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth import login as auth_login\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.mail import EmailMessage\nfrom django.http import HttpResponse\nfrom django.shortcuts import redirect, render\nfrom django.utils.translation import ugettext as _\nfrom django.core.urlresolvers import resolve, Resolver404\nfrom django.views.decorators.http import require_POST\nfrom django.views.decorators.debug import sensitive_post_parameters\n\nfrom evap.evaluation.forms import NewKeyForm, LoginUsernameForm\nfrom evap.evaluation.models import UserProfile, FaqSection, EmailTemplate, Semester\n\nlogger = logging.getLogger(__name__)\n\n\n@sensitive_post_parameters(\"password\")\ndef index(request):\n \"\"\"Main entry page 
into EvaP providing all the login options available. The username/password\n login is thought to be used for internal users, e.g. by connecting to a LDAP directory.\n The login key mechanism is meant to be used to include external participants, e.g. visiting\n students or visiting contributors.\n \"\"\"\n\n # parse the form data into the respective form\n submit_type = request.POST.get(\"submit_type\", \"no_submit\")\n new_key_form = NewKeyForm(request.POST if submit_type == \"new_key\" else None)\n login_username_form = LoginUsernameForm(request, request.POST if submit_type == \"login_username\" else None)\n\n # process form data\n if request.method == 'POST':\n if new_key_form.is_valid():\n # user wants a new login key\n profile = new_key_form.get_user()\n profile.generate_login_key()\n profile.save()\n\n EmailTemplate.send_login_url_to_user(new_key_form.get_user())\n\n messages.success(request, _(\"We sent you an email with a one-time login URL. Please check your inbox.\"))\n return redirect('evaluation:index')\n elif login_username_form.is_valid():\n # user would like to login with username and password and passed password test\n auth_login(request, login_username_form.get_user())\n\n # clean up our test cookie\n if request.session.test_cookie_worked():\n request.session.delete_test_cookie()\n\n # if not logged in by now, render form\n if not request.user.is_authenticated():\n # set test cookie to verify whether they work in the next step\n request.session.set_test_cookie()\n\n template_data = dict(new_key_form=new_key_form, login_username_form=login_username_form)\n return render(request, \"index.html\", template_data)\n else:\n user, __ = UserProfile.objects.get_or_create(username=request.user.username)\n\n # check for redirect variable\n redirect_to = request.GET.get(\"next\", None)\n if redirect_to is not None:\n if redirect_to.startswith(\"/staff/\"):\n if request.user.is_staff:\n return redirect(redirect_to)\n elif redirect_to.startswith(\"/grades/\"):\n if request.user.is_grade_publisher:\n return redirect(redirect_to)\n elif redirect_to.startswith(\"/contributor/\"):\n if user.is_contributor:\n return redirect(redirect_to)\n elif redirect_to.startswith(\"/student/\"):\n if user.is_participant:\n return redirect(redirect_to)\n else:\n try:\n resolve(redirect_to)\n except Resolver404:\n pass\n else:\n return redirect(redirect_to)\n\n # redirect user to appropriate start page\n if request.user.is_staff:\n return redirect('staff:index')\n elif request.user.is_grade_publisher:\n return redirect('grades:semester_view', Semester.active_semester().id)\n elif user.is_student:\n return redirect('student:index')\n elif user.is_contributor_or_delegate:\n return redirect('contributor:index')\n elif user.is_participant:\n return redirect('student:index')\n else:\n return redirect('results:index')\n\n\ndef faq(request):\n return render(request, \"faq.html\", dict(sections=FaqSection.objects.all()))\n\n\ndef legal_notice(request):\n return render(request, \"legal_notice.html\", dict())\n\n\n@require_POST\n@login_required\ndef feedback_send(request):\n sender_email = request.user.email\n message = request.POST.get(\"message\")\n subject = \"Feedback from {}\".format(sender_email)\n\n if message:\n mail = EmailMessage(\n subject=subject,\n body=message,\n to=[settings.FEEDBACK_EMAIL])\n\n try:\n mail.send()\n logger.info('Sent feedback email: \\n{}\\n'.format(mail.message()))\n except Exception:\n logger.exception('An exception occurred when sending the following feedback 
email:\\n{}\\n'.format(mail.message()))\n\n return HttpResponse()\n", "path": "evap/evaluation/views.py"}]}
| 1,637 | 201 |
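The golden diff above relies on Django's `sensitive_post_parameters` decorator, which tells the error-reporting machinery to cleanse the named POST fields before they appear in the report emailed to admins. A minimal sketch of that mechanism in isolation — the view body and its failure path are hypothetical; only the decorator usage mirrors the patch:

```python
from django.http import HttpResponse
from django.views.decorators.debug import sensitive_post_parameters


@sensitive_post_parameters("password")
def login_view(request):
    # If an unhandled exception escapes this view, Django's error report
    # (including the email sent to ADMINS) shows the "password" POST value
    # as a cleansed placeholder instead of the submitted plain text.
    if request.POST.get("username") == "duplicate-account":
        raise RuntimeError("two accounts found")  # illustrative failure
    return HttpResponse("ok")
```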
gh_patches_debug_4984
|
rasdani/github-patches
|
git_diff
|
zalando__patroni-2080
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
documentation build failure with sphinx-4.2.0
**Describe the bug**
On Debian unstable, patroni fails to build with the latest sphinx:
```
Running Sphinx v4.2.0
making output directory... done
Exception occurred:
File "/<<PKGBUILDDIR>>/docs/conf.py", line 207, in setup
app.add_stylesheet("custom.css")
AttributeError: 'Sphinx' object has no attribute 'add_stylesheet'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Patroni documentation build configuration file, created by
5 # sphinx-quickstart on Mon Dec 19 16:54:09 2016.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20 import os
21
22 import sys
23 sys.path.insert(0, os.path.abspath('..'))
24
25 from patroni.version import __version__
26
27 # -- General configuration ------------------------------------------------
28
29 # If your documentation needs a minimal Sphinx version, state it here.
30 #
31 # needs_sphinx = '1.0'
32
33 # Add any Sphinx extension module names here, as strings. They can be
34 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
35 # ones.
36 extensions = ['sphinx.ext.intersphinx',
37 'sphinx.ext.todo',
38 'sphinx.ext.mathjax',
39 'sphinx.ext.ifconfig',
40 'sphinx.ext.viewcode']
41
42 # Add any paths that contain templates here, relative to this directory.
43 templates_path = ['_templates']
44
45 # The suffix(es) of source filenames.
46 # You can specify multiple suffix as a list of string:
47 #
48 # source_suffix = ['.rst', '.md']
49 source_suffix = '.rst'
50
51 # The master toctree document.
52 master_doc = 'index'
53
54 # General information about the project.
55 project = 'Patroni'
56 copyright = '2015 Compose, Zalando SE'
57 author = 'Zalando SE'
58
59 # The version info for the project you're documenting, acts as replacement for
60 # |version| and |release|, also used in various other places throughout the
61 # built documents.
62 #
63 # The short X.Y version.
64 version = __version__[:__version__.rfind('.')]
65 # The full version, including alpha/beta/rc tags.
66 release = __version__
67
68 # The language for content autogenerated by Sphinx. Refer to documentation
69 # for a list of supported languages.
70 #
71 # This is also used if you do content translation via gettext catalogs.
72 # Usually you set "language" from the command line for these cases.
73 language = None
74
75 # List of patterns, relative to source directory, that match files and
76 # directories to ignore when looking for source files.
77 # This patterns also effect to html_static_path and html_extra_path
78 exclude_patterns = []
79
80 # The name of the Pygments (syntax highlighting) style to use.
81 pygments_style = 'sphinx'
82
83 # If true, `todo` and `todoList` produce output, else they produce nothing.
84 todo_include_todos = True
85
86
87 # -- Options for HTML output ----------------------------------------------
88
89 # The theme to use for HTML and HTML Help pages. See the documentation for
90 # a list of builtin themes.
91 #
92
93 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
94 if not on_rtd: # only import and set the theme if we're building docs locally
95 import sphinx_rtd_theme
96 html_theme = 'sphinx_rtd_theme'
97 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
98
99 # Theme options are theme-specific and customize the look and feel of a theme
100 # further. For a list of options available for each theme, see the
101 # documentation.
102 #
103 # html_theme_options = {}
104
105 # Add any paths that contain custom static files (such as style sheets) here,
106 # relative to this directory. They are copied after the builtin static files,
107 # so a file named "default.css" will overwrite the builtin "default.css".
108 html_static_path = ['_static']
109
110
111 # -- Options for HTMLHelp output ------------------------------------------
112
113 # Output file base name for HTML help builder.
114 htmlhelp_basename = 'Patronidoc'
115
116
117 # -- Options for LaTeX output ---------------------------------------------
118
119 latex_elements = {
120 # The paper size ('letterpaper' or 'a4paper').
121 #
122 # 'papersize': 'letterpaper',
123
124 # The font size ('10pt', '11pt' or '12pt').
125 #
126 # 'pointsize': '10pt',
127
128 # Additional stuff for the LaTeX preamble.
129 #
130 # 'preamble': '',
131
132 # Latex figure (float) alignment
133 #
134 # 'figure_align': 'htbp',
135 }
136
137 # Grouping the document tree into LaTeX files. List of tuples
138 # (source start file, target name, title,
139 # author, documentclass [howto, manual, or own class]).
140 latex_documents = [
141 (master_doc, 'Patroni.tex', 'Patroni Documentation',
142 'Zalando SE', 'manual'),
143 ]
144
145
146 # -- Options for manual page output ---------------------------------------
147
148 # One entry per manual page. List of tuples
149 # (source start file, name, description, authors, manual section).
150 man_pages = [
151 (master_doc, 'patroni', 'Patroni Documentation',
152 [author], 1)
153 ]
154
155
156 # -- Options for Texinfo output -------------------------------------------
157
158 # Grouping the document tree into Texinfo files. List of tuples
159 # (source start file, target name, title, author,
160 # dir menu entry, description, category)
161 texinfo_documents = [
162 (master_doc, 'Patroni', 'Patroni Documentation',
163 author, 'Patroni', 'One line description of project.',
164 'Miscellaneous'),
165 ]
166
167
168
169 # -- Options for Epub output ----------------------------------------------
170
171 # Bibliographic Dublin Core info.
172 epub_title = project
173 epub_author = author
174 epub_publisher = author
175 epub_copyright = copyright
176
177 # The unique identifier of the text. This can be a ISBN number
178 # or the project homepage.
179 #
180 # epub_identifier = ''
181
182 # A unique identification for the text.
183 #
184 # epub_uid = ''
185
186 # A list of files that should not be packed into the epub file.
187 epub_exclude_files = ['search.html']
188
189
190
191 # Example configuration for intersphinx: refer to the Python standard library.
192 intersphinx_mapping = {'https://docs.python.org/': None}
193
194 # A possibility to have an own stylesheet, to add new rules or override existing ones
195 # For the latter case, the CSS specificity of the rules should be higher than the default ones
196 def setup(app):
197 app.add_stylesheet("custom.css")
198
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -194,4 +194,7 @@
# A possibility to have an own stylesheet, to add new rules or override existing ones
# For the latter case, the CSS specificity of the rules should be higher than the default ones
def setup(app):
- app.add_stylesheet("custom.css")
+ if hasattr(app, 'add_css_file'):
+ app.add_css_file('custom.css')
+ else:
+ app.add_stylesheet('custom.css')
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -194,4 +194,7 @@\n # A possibility to have an own stylesheet, to add new rules or override existing ones\n # For the latter case, the CSS specificity of the rules should be higher than the default ones\n def setup(app):\n- app.add_stylesheet(\"custom.css\")\n+ if hasattr(app, 'add_css_file'):\n+ app.add_css_file('custom.css')\n+ else:\n+ app.add_stylesheet('custom.css')\n", "issue": "documentation build failure with sphinx-4.2.0\n**Describe the bug**\r\n\r\nOn Debian unstable, patroni fails to build with the latest sphinx:\r\n\r\n```\r\nRunning Sphinx v4.2.0\r\nmaking output directory... done\r\n\r\nException occurred:\r\n File \"/<<PKGBUILDDIR>>/docs/conf.py\", line 207, in setup\r\n app.add_stylesheet(\"custom.css\")\r\nAttributeError: 'Sphinx' object has no attribute 'add_stylesheet'\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Patroni documentation build configuration file, created by\n# sphinx-quickstart on Mon Dec 19 16:54:09 2016.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\n\nimport sys\nsys.path.insert(0, os.path.abspath('..'))\n\nfrom patroni.version import __version__\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.ifconfig',\n 'sphinx.ext.viewcode']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'Patroni'\ncopyright = '2015 Compose, Zalando SE'\nauthor = 'Zalando SE'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = __version__[:__version__.rfind('.')]\n# The full version, including alpha/beta/rc tags.\nrelease = __version__\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\nif not on_rtd: # only import and set the theme if we're building docs locally\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Patronidoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'Patroni.tex', 'Patroni Documentation',\n 'Zalando SE', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'patroni', 'Patroni Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'Patroni', 'Patroni Documentation',\n author, 'Patroni', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\n\n# -- Options for Epub output ----------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# The unique identifier of the text. 
This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'https://docs.python.org/': None}\n\n# A possibility to have an own stylesheet, to add new rules or override existing ones\n# For the latter case, the CSS specificity of the rules should be higher than the default ones\ndef setup(app):\n app.add_stylesheet(\"custom.css\")\n", "path": "docs/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Patroni documentation build configuration file, created by\n# sphinx-quickstart on Mon Dec 19 16:54:09 2016.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\n\nimport sys\nsys.path.insert(0, os.path.abspath('..'))\n\nfrom patroni.version import __version__\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.ifconfig',\n 'sphinx.ext.viewcode']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'Patroni'\ncopyright = '2015 Compose, Zalando SE'\nauthor = 'Zalando SE'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = __version__[:__version__.rfind('.')]\n# The full version, including alpha/beta/rc tags.\nrelease = __version__\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\nif not on_rtd: # only import and set the theme if we're building docs locally\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Patronidoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'Patroni.tex', 'Patroni Documentation',\n 'Zalando SE', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'patroni', 'Patroni Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'Patroni', 'Patroni Documentation',\n author, 'Patroni', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\n\n# -- Options for Epub output ----------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# The unique identifier of the text. 
This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'https://docs.python.org/': None}\n\n# A possibility to have an own stylesheet, to add new rules or override existing ones\n# For the latter case, the CSS specificity of the rules should be higher than the default ones\ndef setup(app):\n if hasattr(app, 'add_css_file'):\n app.add_css_file('custom.css')\n else:\n app.add_stylesheet('custom.css')\n", "path": "docs/conf.py"}]}
| 2,288 | 123 |
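The patch above works around an API removal: `Sphinx.add_stylesheet()` was deprecated in favour of `add_css_file()` in newer Sphinx releases and is gone in Sphinx 4.x, which is what the reported `AttributeError` shows. A sketch of the resulting compatibility shim in `conf.py`, matching the golden diff:

```python
def setup(app):
    # Prefer the current API; fall back for Sphinx versions that predate it.
    if hasattr(app, "add_css_file"):
        app.add_css_file("custom.css")
    else:
        app.add_stylesheet("custom.css")
```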
gh_patches_debug_15446
|
rasdani/github-patches
|
git_diff
|
pretix__pretix-2983
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Migrate to celery 5
after #2056
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/setup.py`
Content:
```
1 #
2 # This file is part of pretix (Community Edition).
3 #
4 # Copyright (C) 2014-2020 Raphael Michel and contributors
5 # Copyright (C) 2020-2021 rami.io GmbH and contributors
6 #
7 # This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
8 # Public License as published by the Free Software Foundation in version 3 of the License.
9 #
10 # ADDITIONAL TERMS APPLY: Pursuant to Section 7 of the GNU Affero General Public License, additional terms are
11 # applicable granting you additional permissions and placing additional restrictions on your usage of this software.
12 # Please refer to the pretix LICENSE file to obtain the full terms applicable to this work. If you did not receive
13 # this file, see <https://pretix.eu/about/en/license>.
14 #
15 # This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
16 # warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
17 # details.
18 #
19 # You should have received a copy of the GNU Affero General Public License along with this program. If not, see
20 # <https://www.gnu.org/licenses/>.
21 #
22
23 # This file is based on an earlier version of pretix which was released under the Apache License 2.0. The full text of
24 # the Apache License 2.0 can be obtained at <http://www.apache.org/licenses/LICENSE-2.0>.
25 #
26 # This file may have since been changed and any changes are released under the terms of AGPLv3 as described above. A
27 # full history of changes and contributors is available at <https://github.com/pretix/pretix>.
28 #
29 # This file contains Apache-licensed contributions copyrighted by: Claudio Luck, FlaviaBastos, Katharina Bogad, Laura
30 # Klünder, Lukas Bockstaller, Matthew Emerson, Tobias Kunze, [email protected]
31 #
32 # Unless required by applicable law or agreed to in writing, software distributed under the Apache License 2.0 is
33 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
34 # License for the specific language governing permissions and limitations under the License.
35
36 import os
37 import subprocess
38 import sys
39 from codecs import open
40 from distutils.command.build import build
41 from distutils.command.build_ext import build_ext
42 from distutils.dir_util import copy_tree
43 from os import path
44
45 from setuptools import find_packages, setup
46
47 try:
48 from pretix import __version__
49 except:
50 if "PRETIX_DOCKER_BUILD" in os.environ:
51 __version__ = "0.0.0" # this is a hack to allow calling this file early in our docker build to make use of caching
52 else:
53 raise
54
55 CURRENT_PYTHON = sys.version_info[:2]
56 REQUIRED_PYTHON = (3, 6)
57 if CURRENT_PYTHON < REQUIRED_PYTHON:
58 sys.stderr.write("""
59 ==========================
60 Unsupported Python version
61 ==========================
62 This version of pretix requires Python {}.{}, but you're trying to
63 install it on Python {}.{}.
64 This may be because you are using a version of pip that doesn't
65 understand the python_requires classifier. Make sure you
66 have pip >= 9.0 and setuptools >= 24.2, then try again:
67 $ python -m pip install --upgrade pip setuptools
68 $ python -m pip install pretix
69 This will install the latest version of pretix which works on your
70 version of Python. If you can't upgrade your pip (or Python), request
71 an older version of pretix:
72 $ python -m pip install "pretix<2"
73 """.format(*(REQUIRED_PYTHON + CURRENT_PYTHON)))
74 sys.exit(1)
75
76 here = path.abspath(path.dirname(__file__))
77 npm_installed = False
78
79 # Get the long description from the relevant file
80 try:
81 with open(path.join(here, '../README.rst'), encoding='utf-8') as f:
82 long_description = f.read()
83 except:
84 long_description = ''
85
86
87 def npm_install():
88 global npm_installed
89
90 if not npm_installed:
91 # keep this in sync with Makefile!
92 node_prefix = os.path.join(here, 'pretix', 'static.dist', 'node_prefix')
93 os.makedirs(node_prefix, exist_ok=True)
94 copy_tree(os.path.join(here, 'pretix', 'static', 'npm_dir'), node_prefix)
95 subprocess.check_call('npm install', shell=True, cwd=node_prefix)
96 npm_installed = True
97
98
99 class CustomBuild(build):
100 def run(self):
101 if "PRETIX_DOCKER_BUILD" in os.environ:
102 return # this is a hack to allow calling this file early in our docker build to make use of caching
103 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "pretix.settings")
104 os.environ.setdefault("PRETIX_IGNORE_CONFLICTS", "True")
105 import django
106 django.setup()
107 from django.conf import settings
108 from django.core import management
109
110 settings.COMPRESS_ENABLED = True
111 settings.COMPRESS_OFFLINE = True
112
113 npm_install()
114 management.call_command('compilemessages', verbosity=1)
115 management.call_command('compilejsi18n', verbosity=1)
116 management.call_command('collectstatic', verbosity=1, interactive=False)
117 management.call_command('compress', verbosity=1)
118
119 build.run(self)
120
121
122 class CustomBuildExt(build_ext):
123 def run(self):
124 if "PRETIX_DOCKER_BUILD" in os.environ:
125 return # this is a hack to allow calling this file early in our docker build to make use of caching
126 npm_install()
127 build_ext.run(self)
128
129
130 cmdclass = {
131 'build': CustomBuild,
132 'build_ext': CustomBuildExt,
133 }
134
135
136 setup(
137 name='pretix',
138 version=__version__,
139 python_requires='>={}.{}'.format(*REQUIRED_PYTHON),
140 description='Reinventing presales, one ticket at a time',
141 long_description=long_description,
142 url='https://pretix.eu',
143 author='Raphael Michel',
144 author_email='[email protected]',
145 license='GNU Affero General Public License v3 with Additional Terms',
146 classifiers=[
147 'Development Status :: 5 - Production/Stable',
148 'Intended Audience :: Developers',
149 'Intended Audience :: Other Audience',
150 'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
151 'Environment :: Web Environment',
152 'License :: OSI Approved :: GNU Affero General Public License v3',
153 'Programming Language :: Python :: 3.7',
154 'Programming Language :: Python :: 3.8',
155 'Programming Language :: Python :: 3.9',
156 'Framework :: Django :: 3.2'
157 ],
158
159 keywords='tickets web shop ecommerce',
160 install_requires=[
161 'arabic-reshaper==2.1.4', # Support for Arabic in reportlab
162 'babel',
163 'BeautifulSoup4==4.11.*',
164 'bleach==5.0.*',
165 'celery==4.4.*',
166 'chardet==5.1.*',
167 'cryptography>=3.4.2',
168 'css-inline==0.8.*',
169 'defusedcsv>=1.1.0',
170 'dj-static',
171 'Django==3.2.*',
172 'django-bootstrap3==22.2.*',
173 'django-compressor==4.1.*',
174 'django-countries==7.5.*',
175 'django-filter==22.1',
176 'django-formset-js-improved==0.5.0.3',
177 'django-formtools==2.4',
178 'django-hierarkey==1.1.*',
179 'django-hijack==3.2.*',
180 'django-i18nfield==1.9.*,>=1.9.4',
181 'django-libsass==0.9',
182 'django-localflavor==3.1',
183 'django-markup',
184 'django-mysql',
185 'django-oauth-toolkit==1.2.*',
186 'django-otp==1.1.*',
187 'django-phonenumber-field==7.0.*',
188 'django-redis==5.2.*',
189 'django-scopes==1.2.*',
190 'django-statici18n==2.3.*',
191 'djangorestframework==3.14.*',
192 'dnspython<2.0', # do not upgrade, causes issues with eventlet / gunicorn 19 and we cannot upgrade gunicorn right now
193 'drf_ujson2==1.7.*',
194 'isoweek',
195 'jsonschema',
196 'kombu==4.6.*',
197 'libsass==0.22.*',
198 'lxml',
199 'markdown==3.3.4', # 3.3.5 requires importlib-metadata>=4.4, but django-bootstrap3 requires importlib-metadata<3.
200 # We can upgrade markdown again once django-bootstrap3 upgrades or once we drop Python 3.6 and 3.7
201 'mt-940==4.23.*',
202 'oauthlib==3.2.*',
203 'openpyxl==3.0.*',
204 'packaging',
205 'paypalrestsdk==1.13.*',
206 'paypal-checkout-serversdk==1.0.*',
207 'PyJWT==2.6.*',
208 'phonenumberslite==8.13.*',
209 'Pillow==9.3.*',
210 'protobuf==4.21.*',
211 'psycopg2-binary',
212 'pycountry',
213 'pycparser==2.21',
214 'pycryptodome==3.16.*',
215 'PyPDF2==2.11.*',
216 'python-bidi==0.4.*', # Support for Arabic in reportlab
217 'python-dateutil==2.8.*',
218 'python-u2flib-server==4.*',
219 'pytz',
220 'pyuca',
221 'redis==4.4.*',
222 'reportlab==3.6.*',
223 'requests==2.28.*',
224 'sentry-sdk==1.10.*', # when upgrading, check https://github.com/getsentry/sentry-python/issues/1700
225 'sepaxml==2.6.*',
226 'slimit',
227 'static3==0.7.*',
228 'stripe==5.0.*',
229 'text-unidecode==1.*',
230 'tlds>=2020041600',
231 'tqdm==4.*',
232 'vat_moss_forked==2020.3.20.0.11.0',
233 'vobject==0.9.*',
234 'webauthn==0.4.*',
235 'zeep==4.2.*'
236 ],
237 extras_require={
238 'dev': [
239 'coverage',
240 'coveralls',
241 'django-debug-toolbar==3.7.*',
242 'flake8==6.0.*',
243 'freezegun',
244 'isort==5.10.*',
245 'pep8-naming==0.12.*',
246 'potypo',
247 'pycodestyle==2.10.*',
248 'pyflakes==3.0.*',
249 'pytest-cache',
250 'pytest-cov',
251 'pytest-django==4.*',
252 'pytest-mock==3.10.*',
253 'pytest-rerunfailures==10.*',
254 'pytest-sugar',
255 'pytest-xdist==3.1.*',
256 'pytest==7.2.*',
257 'responses',
258 ],
259 'memcached': ['pylibmc'],
260 'mysql': ['mysqlclient'],
261 },
262 setup_requires=['setuptools-rust'],
263
264 packages=find_packages(exclude=['tests', 'tests.*']),
265 include_package_data=True,
266 cmdclass=cmdclass,
267 )
268
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/setup.py b/src/setup.py
--- a/src/setup.py
+++ b/src/setup.py
@@ -162,7 +162,7 @@
'babel',
'BeautifulSoup4==4.11.*',
'bleach==5.0.*',
- 'celery==4.4.*',
+ 'celery==5.2.*',
'chardet==5.1.*',
'cryptography>=3.4.2',
'css-inline==0.8.*',
@@ -193,7 +193,7 @@
'drf_ujson2==1.7.*',
'isoweek',
'jsonschema',
- 'kombu==4.6.*',
+ 'kombu==5.2.*',
'libsass==0.22.*',
'lxml',
'markdown==3.3.4', # 3.3.5 requires importlib-metadata>=4.4, but django-bootstrap3 requires importlib-metadata<3.
|
{"golden_diff": "diff --git a/src/setup.py b/src/setup.py\n--- a/src/setup.py\n+++ b/src/setup.py\n@@ -162,7 +162,7 @@\n 'babel',\n 'BeautifulSoup4==4.11.*',\n 'bleach==5.0.*',\n- 'celery==4.4.*',\n+ 'celery==5.2.*',\n 'chardet==5.1.*',\n 'cryptography>=3.4.2',\n 'css-inline==0.8.*',\n@@ -193,7 +193,7 @@\n 'drf_ujson2==1.7.*',\n 'isoweek',\n 'jsonschema',\n- 'kombu==4.6.*',\n+ 'kombu==5.2.*',\n 'libsass==0.22.*',\n 'lxml',\n 'markdown==3.3.4', # 3.3.5 requires importlib-metadata>=4.4, but django-bootstrap3 requires importlib-metadata<3.\n", "issue": "Migrate to celery 5\nafter #2056 \n", "before_files": [{"content": "#\n# This file is part of pretix (Community Edition).\n#\n# Copyright (C) 2014-2020 Raphael Michel and contributors\n# Copyright (C) 2020-2021 rami.io GmbH and contributors\n#\n# This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General\n# Public License as published by the Free Software Foundation in version 3 of the License.\n#\n# ADDITIONAL TERMS APPLY: Pursuant to Section 7 of the GNU Affero General Public License, additional terms are\n# applicable granting you additional permissions and placing additional restrictions on your usage of this software.\n# Please refer to the pretix LICENSE file to obtain the full terms applicable to this work. If you did not receive\n# this file, see <https://pretix.eu/about/en/license>.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied\n# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more\n# details.\n#\n# You should have received a copy of the GNU Affero General Public License along with this program. If not, see\n# <https://www.gnu.org/licenses/>.\n#\n\n# This file is based on an earlier version of pretix which was released under the Apache License 2.0. The full text of\n# the Apache License 2.0 can be obtained at <http://www.apache.org/licenses/LICENSE-2.0>.\n#\n# This file may have since been changed and any changes are released under the terms of AGPLv3 as described above. A\n# full history of changes and contributors is available at <https://github.com/pretix/pretix>.\n#\n# This file contains Apache-licensed contributions copyrighted by: Claudio Luck, FlaviaBastos, Katharina Bogad, Laura\n# Kl\u00fcnder, Lukas Bockstaller, Matthew Emerson, Tobias Kunze, [email protected]\n#\n# Unless required by applicable law or agreed to in writing, software distributed under the Apache License 2.0 is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\n# License for the specific language governing permissions and limitations under the License.\n\nimport os\nimport subprocess\nimport sys\nfrom codecs import open\nfrom distutils.command.build import build\nfrom distutils.command.build_ext import build_ext\nfrom distutils.dir_util import copy_tree\nfrom os import path\n\nfrom setuptools import find_packages, setup\n\ntry:\n from pretix import __version__\nexcept:\n if \"PRETIX_DOCKER_BUILD\" in os.environ:\n __version__ = \"0.0.0\" # this is a hack to allow calling this file early in our docker build to make use of caching\n else:\n raise\n\nCURRENT_PYTHON = sys.version_info[:2]\nREQUIRED_PYTHON = (3, 6)\nif CURRENT_PYTHON < REQUIRED_PYTHON:\n sys.stderr.write(\"\"\"\n==========================\nUnsupported Python version\n==========================\nThis version of pretix requires Python {}.{}, but you're trying to\ninstall it on Python {}.{}.\nThis may be because you are using a version of pip that doesn't\nunderstand the python_requires classifier. Make sure you\nhave pip >= 9.0 and setuptools >= 24.2, then try again:\n $ python -m pip install --upgrade pip setuptools\n $ python -m pip install pretix\nThis will install the latest version of pretix which works on your\nversion of Python. If you can't upgrade your pip (or Python), request\nan older version of pretix:\n $ python -m pip install \"pretix<2\"\n\"\"\".format(*(REQUIRED_PYTHON + CURRENT_PYTHON)))\n sys.exit(1)\n\nhere = path.abspath(path.dirname(__file__))\nnpm_installed = False\n\n# Get the long description from the relevant file\ntry:\n with open(path.join(here, '../README.rst'), encoding='utf-8') as f:\n long_description = f.read()\nexcept:\n long_description = ''\n\n\ndef npm_install():\n global npm_installed\n\n if not npm_installed:\n # keep this in sync with Makefile!\n node_prefix = os.path.join(here, 'pretix', 'static.dist', 'node_prefix')\n os.makedirs(node_prefix, exist_ok=True)\n copy_tree(os.path.join(here, 'pretix', 'static', 'npm_dir'), node_prefix)\n subprocess.check_call('npm install', shell=True, cwd=node_prefix)\n npm_installed = True\n\n\nclass CustomBuild(build):\n def run(self):\n if \"PRETIX_DOCKER_BUILD\" in os.environ:\n return # this is a hack to allow calling this file early in our docker build to make use of caching\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"pretix.settings\")\n os.environ.setdefault(\"PRETIX_IGNORE_CONFLICTS\", \"True\")\n import django\n django.setup()\n from django.conf import settings\n from django.core import management\n\n settings.COMPRESS_ENABLED = True\n settings.COMPRESS_OFFLINE = True\n\n npm_install()\n management.call_command('compilemessages', verbosity=1)\n management.call_command('compilejsi18n', verbosity=1)\n management.call_command('collectstatic', verbosity=1, interactive=False)\n management.call_command('compress', verbosity=1)\n\n build.run(self)\n\n\nclass CustomBuildExt(build_ext):\n def run(self):\n if \"PRETIX_DOCKER_BUILD\" in os.environ:\n return # this is a hack to allow calling this file early in our docker build to make use of caching\n npm_install()\n build_ext.run(self)\n\n\ncmdclass = {\n 'build': CustomBuild,\n 'build_ext': CustomBuildExt,\n}\n\n\nsetup(\n name='pretix',\n version=__version__,\n python_requires='>={}.{}'.format(*REQUIRED_PYTHON),\n description='Reinventing presales, one ticket at a time',\n long_description=long_description,\n url='https://pretix.eu',\n author='Raphael Michel',\n author_email='[email protected]',\n license='GNU Affero General Public License v3 
with Additional Terms',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Other Audience',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content',\n 'Environment :: Web Environment',\n 'License :: OSI Approved :: GNU Affero General Public License v3',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Framework :: Django :: 3.2'\n ],\n\n keywords='tickets web shop ecommerce',\n install_requires=[\n 'arabic-reshaper==2.1.4', # Support for Arabic in reportlab\n 'babel',\n 'BeautifulSoup4==4.11.*',\n 'bleach==5.0.*',\n 'celery==4.4.*',\n 'chardet==5.1.*',\n 'cryptography>=3.4.2',\n 'css-inline==0.8.*',\n 'defusedcsv>=1.1.0',\n 'dj-static',\n 'Django==3.2.*',\n 'django-bootstrap3==22.2.*',\n 'django-compressor==4.1.*',\n 'django-countries==7.5.*',\n 'django-filter==22.1',\n 'django-formset-js-improved==0.5.0.3',\n 'django-formtools==2.4',\n 'django-hierarkey==1.1.*',\n 'django-hijack==3.2.*',\n 'django-i18nfield==1.9.*,>=1.9.4',\n 'django-libsass==0.9',\n 'django-localflavor==3.1',\n 'django-markup',\n 'django-mysql',\n 'django-oauth-toolkit==1.2.*',\n 'django-otp==1.1.*',\n 'django-phonenumber-field==7.0.*',\n 'django-redis==5.2.*',\n 'django-scopes==1.2.*',\n 'django-statici18n==2.3.*',\n 'djangorestframework==3.14.*',\n 'dnspython<2.0', # do not upgrade, causes issues with eventlet / gunicorn 19 and we cannot upgrade gunicorn right now\n 'drf_ujson2==1.7.*',\n 'isoweek',\n 'jsonschema',\n 'kombu==4.6.*',\n 'libsass==0.22.*',\n 'lxml',\n 'markdown==3.3.4', # 3.3.5 requires importlib-metadata>=4.4, but django-bootstrap3 requires importlib-metadata<3.\n # We can upgrade markdown again once django-bootstrap3 upgrades or once we drop Python 3.6 and 3.7\n 'mt-940==4.23.*',\n 'oauthlib==3.2.*',\n 'openpyxl==3.0.*',\n 'packaging',\n 'paypalrestsdk==1.13.*',\n 'paypal-checkout-serversdk==1.0.*',\n 'PyJWT==2.6.*',\n 'phonenumberslite==8.13.*',\n 'Pillow==9.3.*',\n 'protobuf==4.21.*',\n 'psycopg2-binary',\n 'pycountry',\n 'pycparser==2.21',\n 'pycryptodome==3.16.*',\n 'PyPDF2==2.11.*',\n 'python-bidi==0.4.*', # Support for Arabic in reportlab\n 'python-dateutil==2.8.*',\n 'python-u2flib-server==4.*',\n 'pytz',\n 'pyuca',\n 'redis==4.4.*',\n 'reportlab==3.6.*',\n 'requests==2.28.*',\n 'sentry-sdk==1.10.*', # when upgrading, check https://github.com/getsentry/sentry-python/issues/1700\n 'sepaxml==2.6.*',\n 'slimit',\n 'static3==0.7.*',\n 'stripe==5.0.*',\n 'text-unidecode==1.*',\n 'tlds>=2020041600',\n 'tqdm==4.*',\n 'vat_moss_forked==2020.3.20.0.11.0',\n 'vobject==0.9.*',\n 'webauthn==0.4.*',\n 'zeep==4.2.*'\n ],\n extras_require={\n 'dev': [\n 'coverage',\n 'coveralls',\n 'django-debug-toolbar==3.7.*',\n 'flake8==6.0.*',\n 'freezegun',\n 'isort==5.10.*',\n 'pep8-naming==0.12.*',\n 'potypo',\n 'pycodestyle==2.10.*',\n 'pyflakes==3.0.*',\n 'pytest-cache',\n 'pytest-cov',\n 'pytest-django==4.*',\n 'pytest-mock==3.10.*',\n 'pytest-rerunfailures==10.*',\n 'pytest-sugar',\n 'pytest-xdist==3.1.*',\n 'pytest==7.2.*',\n 'responses',\n ],\n 'memcached': ['pylibmc'],\n 'mysql': ['mysqlclient'],\n },\n setup_requires=['setuptools-rust'],\n\n packages=find_packages(exclude=['tests', 'tests.*']),\n include_package_data=True,\n cmdclass=cmdclass,\n)\n", "path": "src/setup.py"}], "after_files": [{"content": "#\n# This file is part of pretix (Community Edition).\n#\n# Copyright (C) 2014-2020 Raphael Michel and contributors\n# Copyright (C) 2020-2021 rami.io GmbH and 
contributors\n#\n# This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General\n# Public License as published by the Free Software Foundation in version 3 of the License.\n#\n# ADDITIONAL TERMS APPLY: Pursuant to Section 7 of the GNU Affero General Public License, additional terms are\n# applicable granting you additional permissions and placing additional restrictions on your usage of this software.\n# Please refer to the pretix LICENSE file to obtain the full terms applicable to this work. If you did not receive\n# this file, see <https://pretix.eu/about/en/license>.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied\n# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more\n# details.\n#\n# You should have received a copy of the GNU Affero General Public License along with this program. If not, see\n# <https://www.gnu.org/licenses/>.\n#\n\n# This file is based on an earlier version of pretix which was released under the Apache License 2.0. The full text of\n# the Apache License 2.0 can be obtained at <http://www.apache.org/licenses/LICENSE-2.0>.\n#\n# This file may have since been changed and any changes are released under the terms of AGPLv3 as described above. A\n# full history of changes and contributors is available at <https://github.com/pretix/pretix>.\n#\n# This file contains Apache-licensed contributions copyrighted by: Claudio Luck, FlaviaBastos, Katharina Bogad, Laura\n# Kl\u00fcnder, Lukas Bockstaller, Matthew Emerson, Tobias Kunze, [email protected]\n#\n# Unless required by applicable law or agreed to in writing, software distributed under the Apache License 2.0 is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations under the License.\n\nimport os\nimport subprocess\nimport sys\nfrom codecs import open\nfrom distutils.command.build import build\nfrom distutils.command.build_ext import build_ext\nfrom distutils.dir_util import copy_tree\nfrom os import path\n\nfrom setuptools import find_packages, setup\n\ntry:\n from pretix import __version__\nexcept:\n if \"PRETIX_DOCKER_BUILD\" in os.environ:\n __version__ = \"0.0.0\" # this is a hack to allow calling this file early in our docker build to make use of caching\n else:\n raise\n\nCURRENT_PYTHON = sys.version_info[:2]\nREQUIRED_PYTHON = (3, 6)\nif CURRENT_PYTHON < REQUIRED_PYTHON:\n sys.stderr.write(\"\"\"\n==========================\nUnsupported Python version\n==========================\nThis version of pretix requires Python {}.{}, but you're trying to\ninstall it on Python {}.{}.\nThis may be because you are using a version of pip that doesn't\nunderstand the python_requires classifier. Make sure you\nhave pip >= 9.0 and setuptools >= 24.2, then try again:\n $ python -m pip install --upgrade pip setuptools\n $ python -m pip install pretix\nThis will install the latest version of pretix which works on your\nversion of Python. 
If you can't upgrade your pip (or Python), request\nan older version of pretix:\n $ python -m pip install \"pretix<2\"\n\"\"\".format(*(REQUIRED_PYTHON + CURRENT_PYTHON)))\n sys.exit(1)\n\nhere = path.abspath(path.dirname(__file__))\nnpm_installed = False\n\n# Get the long description from the relevant file\ntry:\n with open(path.join(here, '../README.rst'), encoding='utf-8') as f:\n long_description = f.read()\nexcept:\n long_description = ''\n\n\ndef npm_install():\n global npm_installed\n\n if not npm_installed:\n # keep this in sync with Makefile!\n node_prefix = os.path.join(here, 'pretix', 'static.dist', 'node_prefix')\n os.makedirs(node_prefix, exist_ok=True)\n copy_tree(os.path.join(here, 'pretix', 'static', 'npm_dir'), node_prefix)\n subprocess.check_call('npm install', shell=True, cwd=node_prefix)\n npm_installed = True\n\n\nclass CustomBuild(build):\n def run(self):\n if \"PRETIX_DOCKER_BUILD\" in os.environ:\n return # this is a hack to allow calling this file early in our docker build to make use of caching\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"pretix.settings\")\n os.environ.setdefault(\"PRETIX_IGNORE_CONFLICTS\", \"True\")\n import django\n django.setup()\n from django.conf import settings\n from django.core import management\n\n settings.COMPRESS_ENABLED = True\n settings.COMPRESS_OFFLINE = True\n\n npm_install()\n management.call_command('compilemessages', verbosity=1)\n management.call_command('compilejsi18n', verbosity=1)\n management.call_command('collectstatic', verbosity=1, interactive=False)\n management.call_command('compress', verbosity=1)\n\n build.run(self)\n\n\nclass CustomBuildExt(build_ext):\n def run(self):\n if \"PRETIX_DOCKER_BUILD\" in os.environ:\n return # this is a hack to allow calling this file early in our docker build to make use of caching\n npm_install()\n build_ext.run(self)\n\n\ncmdclass = {\n 'build': CustomBuild,\n 'build_ext': CustomBuildExt,\n}\n\n\nsetup(\n name='pretix',\n version=__version__,\n python_requires='>={}.{}'.format(*REQUIRED_PYTHON),\n description='Reinventing presales, one ticket at a time',\n long_description=long_description,\n url='https://pretix.eu',\n author='Raphael Michel',\n author_email='[email protected]',\n license='GNU Affero General Public License v3 with Additional Terms',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Other Audience',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content',\n 'Environment :: Web Environment',\n 'License :: OSI Approved :: GNU Affero General Public License v3',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Framework :: Django :: 3.2'\n ],\n\n keywords='tickets web shop ecommerce',\n install_requires=[\n 'arabic-reshaper==2.1.4', # Support for Arabic in reportlab\n 'babel',\n 'BeautifulSoup4==4.11.*',\n 'bleach==5.0.*',\n 'celery==5.2.*',\n 'chardet==5.1.*',\n 'cryptography>=3.4.2',\n 'css-inline==0.8.*',\n 'defusedcsv>=1.1.0',\n 'dj-static',\n 'Django==3.2.*',\n 'django-bootstrap3==22.2.*',\n 'django-compressor==4.1.*',\n 'django-countries==7.5.*',\n 'django-filter==22.1',\n 'django-formset-js-improved==0.5.0.3',\n 'django-formtools==2.4',\n 'django-hierarkey==1.1.*',\n 'django-hijack==3.2.*',\n 'django-i18nfield==1.9.*,>=1.9.4',\n 'django-libsass==0.9',\n 'django-localflavor==3.1',\n 'django-markup',\n 'django-mysql',\n 'django-oauth-toolkit==1.2.*',\n 'django-otp==1.1.*',\n 
'django-phonenumber-field==7.0.*',\n 'django-redis==5.2.*',\n 'django-scopes==1.2.*',\n 'django-statici18n==2.3.*',\n 'djangorestframework==3.14.*',\n 'dnspython<2.0', # do not upgrade, causes issues with eventlet / gunicorn 19 and we cannot upgrade gunicorn right now\n 'drf_ujson2==1.7.*',\n 'isoweek',\n 'jsonschema',\n 'kombu==5.2.*',\n 'libsass==0.22.*',\n 'lxml',\n 'markdown==3.3.4', # 3.3.5 requires importlib-metadata>=4.4, but django-bootstrap3 requires importlib-metadata<3.\n # We can upgrade markdown again once django-bootstrap3 upgrades or once we drop Python 3.6 and 3.7\n 'mt-940==4.23.*',\n 'oauthlib==3.2.*',\n 'openpyxl==3.0.*',\n 'packaging',\n 'paypalrestsdk==1.13.*',\n 'paypal-checkout-serversdk==1.0.*',\n 'PyJWT==2.6.*',\n 'phonenumberslite==8.13.*',\n 'Pillow==9.3.*',\n 'protobuf==4.21.*',\n 'psycopg2-binary',\n 'pycountry',\n 'pycparser==2.21',\n 'pycryptodome==3.16.*',\n 'PyPDF2==2.11.*',\n 'python-bidi==0.4.*', # Support for Arabic in reportlab\n 'python-dateutil==2.8.*',\n 'python-u2flib-server==4.*',\n 'pytz',\n 'pyuca',\n 'redis==4.4.*',\n 'reportlab==3.6.*',\n 'requests==2.28.*',\n 'sentry-sdk==1.10.*', # when upgrading, check https://github.com/getsentry/sentry-python/issues/1700\n 'sepaxml==2.6.*',\n 'slimit',\n 'static3==0.7.*',\n 'stripe==5.0.*',\n 'text-unidecode==1.*',\n 'tlds>=2020041600',\n 'tqdm==4.*',\n 'vat_moss_forked==2020.3.20.0.11.0',\n 'vobject==0.9.*',\n 'webauthn==0.4.*',\n 'zeep==4.2.*'\n ],\n extras_require={\n 'dev': [\n 'coverage',\n 'coveralls',\n 'django-debug-toolbar==3.7.*',\n 'flake8==6.0.*',\n 'freezegun',\n 'isort==5.10.*',\n 'pep8-naming==0.12.*',\n 'potypo',\n 'pycodestyle==2.10.*',\n 'pyflakes==3.0.*',\n 'pytest-cache',\n 'pytest-cov',\n 'pytest-django==4.*',\n 'pytest-mock==3.10.*',\n 'pytest-rerunfailures==10.*',\n 'pytest-sugar',\n 'pytest-xdist==3.1.*',\n 'pytest==7.2.*',\n 'responses',\n ],\n 'memcached': ['pylibmc'],\n 'mysql': ['mysqlclient'],\n },\n setup_requires=['setuptools-rust'],\n\n packages=find_packages(exclude=['tests', 'tests.*']),\n include_package_data=True,\n cmdclass=cmdclass,\n)\n", "path": "src/setup.py"}]}
| 3,588 | 234 |
gh_patches_debug_29476 | rasdani/github-patches | git_diff | pytorch__ignite-1137 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Auto dataloader - ValueError: batch_size should be a positive integer value, but got batch_size=0
## 🐛 Bug description
The following code will break if `batch_size` is smaller than the world size:
```python
import torch
import ignite.distributed as idist
def foo(i):
data = torch.arange(100).reshape(25, 4)
data_loader = idist.auto_dataloader(data, batch_size=6, num_workers=12)
if __name__ == "__main__":
idist.spawn("gloo", foo, args=(), nproc_per_node=8)
```
```
ValueError: batch_size should be a positive integer value, but got batch_size=0
```
A fix can be (see the sketch after this list):
- keep batch size as provided if smaller than world size
- same for `num_workers`
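
A minimal sketch of that guard as a standalone helper (`_scale_kwargs` is a hypothetical name for illustration; inside ignite the logic would live in `auto_dataloader`, with `world_size` and `nproc` coming from `idist`):
```python
def _scale_kwargs(kwargs, world_size, nproc):
    # Keep user values as provided when they are smaller than the divisor,
    # so integer division can never produce batch_size=0 or num_workers=0.
    if "batch_size" in kwargs and kwargs["batch_size"] >= world_size:
        kwargs["batch_size"] //= world_size
    if "num_workers" in kwargs and kwargs["num_workers"] >= nproc:
        kwargs["num_workers"] = (kwargs["num_workers"] + nproc - 1) // nproc
    return kwargs
```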
## Environment
- PyTorch Version (e.g., 1.4): 1.5.0
- Ignite Version (e.g., 0.3.0): master
- OS (e.g., Linux): linux
- How you installed Ignite (`conda`, `pip`, source):
- Python version: 3.7
- Any other relevant information:
@InCogNiTo124 would you like to fix this, as you recently played around with `auto_*`?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/distributed/auto.py`
Content:
```
1 import warnings
2
3 import torch
4 import torch.nn as nn
5 from torch.optim.optimizer import Optimizer
6 from torch.utils.data import DataLoader, Dataset
7 from torch.utils.data.distributed import DistributedSampler
8 from torch.utils.data.sampler import Sampler
9
10 from ignite.distributed import utils as idist
11 from ignite.distributed.comp_models import native as idist_native
12 from ignite.distributed.comp_models import xla as idist_xla
13 from ignite.utils import setup_logger
14
15 __all__ = ["auto_dataloader", "auto_model", "auto_optim", "DistributedProxySampler"]
16
17
18 def auto_dataloader(dataset, **kwargs):
19 """Helper method to create a dataloader adapted for non-distributed and distributed configurations (supporting
20 all available backends from :meth:`~ignite.distributed.utils.available_backends()`).
21
22 Internally, we create a dataloader with provided kwargs while applying the following updates:
23
24 - batch size is scaled by world size: ``batch_size / world_size``.
25 - number of workers is scaled by number of local processes: ``num_workers / nprocs``.
26 - if no sampler provided by user, `torch DistributedSampler` is setup.
27 - if a sampler is provided by user, it is wrapped by :class:`~ignite.distributed.auto.DistributedProxySampler`.
28 - if the default device is 'cuda', `pin_memory` is automatically set to `True`.
29
30 .. warning::
31
32 Custom batch sampler is not adapted for distributed configuration. Please, make sure that provided batch
33 sampler is compatible with distributed configuration.
34
35 Examples:
36
37 .. code-block:: python
38
39 import ignite.distribted as idist
40
41 train_loader = idist.auto_dataloader(
42 train_dataset,
43 batch_size=32,
44 num_workers=4,
45 shuffle=True,
46 pin_memory="cuda" in idist.device().type,
47 drop_last=True,
48 )
49
50 Args:
51 dataset (Dataset): input torch dataset
52 **kwargs: keyword arguments for `torch DataLoader`_.
53
54 Returns:
55 `torch DataLoader`_ or `XLA MpDeviceLoader`_ for XLA devices
56
57 .. _torch DataLoader: https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
58 .. _XLA MpDeviceLoader: https://github.com/pytorch/xla/blob/master/torch_xla/distributed/parallel_loader.py#L178
59 .. _torch DistributedSampler:
60 https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler
61 """
62 rank = idist.get_rank()
63 world_size = idist.get_world_size()
64
65 logger = setup_logger(__name__ + ".auto_dataloader")
66
67 if world_size > 1:
68 if "batch_size" in kwargs:
69 kwargs["batch_size"] //= world_size
70
71 if "num_workers" in kwargs:
72 nproc = idist.get_nproc_per_node()
73 kwargs["num_workers"] = (kwargs["num_workers"] + nproc - 1) // nproc
74
75 if "batch_sampler" not in kwargs:
76 if kwargs.get("sampler", None) is not None:
77 sampler = DistributedProxySampler(kwargs["sampler"], num_replicas=world_size, rank=rank)
78 else:
79 sampler = DistributedSampler(
80 dataset, num_replicas=world_size, rank=rank, shuffle=kwargs.get("shuffle", True)
81 )
82 # we need to remove "shuffle" from kwargs if sampler is used
83 if "shuffle" in kwargs:
84 del kwargs["shuffle"]
85
86 kwargs["sampler"] = sampler
87 else:
88 warnings.warn(
89 "Found batch_sampler in provided kwargs. Please, make sure that it is compatible "
90 "with distributed configuration"
91 )
92
93 if idist.has_xla_support and idist.backend() == idist_xla.XLA_TPU and kwargs.get("pin_memory", False):
94 # TODO: How about XLA GPU ?
95 warnings.warn(
96 "Found incompatible options: xla support and pin_memory args equal True. "
97 "Argument `pin_memory=False` will be used to construct data loader."
98 )
99 kwargs["pin_memory"] = False
100 else:
101 kwargs["pin_memory"] = kwargs.get("pin_memory", "cuda" in idist.device().type)
102
103 logger.info("Use data loader kwargs for dataset '{}': \n\t{}".format(repr(dataset)[:20].strip(), kwargs))
104 dataloader = DataLoader(dataset, **kwargs)
105
106 if idist.has_xla_support and idist.backend() == idist_xla.XLA_TPU and world_size > 1:
107
108 logger.info("DataLoader is wrapped by `MpDeviceLoader` on XLA")
109
110 mp_device_loader_cls = _MpDeviceLoader
111 try:
112 from torch_xla.distributed.parallel_loader import MpDeviceLoader
113
114 mp_device_loader_cls = MpDeviceLoader
115 except ImportError:
116 pass
117
118 sampler = dataloader.sampler
119 dataloader = mp_device_loader_cls(dataloader, idist.device())
120 dataloader.sampler = sampler
121
122 return dataloader
123
124
125 def auto_model(model: nn.Module) -> nn.Module:
126 """Helper method to adapt provided model for non-distributed and distributed configurations (supporting
127 all available backends from :meth:`~ignite.distributed.utils.available_backends()`).
128
129 Internally, we perform to following:
130
131 - send model to current :meth:`~ignite.distributed.utils.device()`.
132 - wrap the model to `torch DistributedDataParallel`_ for native torch distributed if world size is larger than 1
133 - wrap the model to `torch DataParallel`_ if no distributed context found and more than one CUDA devices available.
134
135 Examples:
136
137 .. code-block:: python
138
139 import ignite.distribted as idist
140
141 model = idist.auto_model(model)
142
143 Args:
144 model (torch.nn.Module): model to adapt.
145
146 Returns:
147 torch.nn.Module
148
149 .. _torch DistributedDataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel
150 .. _torch DataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel
151 """
152 logger = setup_logger(__name__ + ".auto_model")
153
154 model.to(idist.device())
155
156 # distributed data parallel model
157 if idist.get_world_size() > 1:
158 if idist.backend() == idist_native.NCCL:
159 lrank = idist.get_local_rank()
160 logger.info("Apply torch DistributedDataParallel on model, device id: {}".format(lrank))
161 model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[lrank,])
162 elif idist.backend() == idist_native.GLOO:
163 logger.info("Apply torch DistributedDataParallel on model")
164 model = torch.nn.parallel.DistributedDataParallel(model)
165
166 # not distributed but multiple GPUs reachable so data parallel model
167 elif torch.cuda.device_count() > 1 and "cuda" in idist.device().type:
168 logger.info("Apply torch DataParallel on model")
169 model = torch.nn.parallel.DataParallel(model)
170
171 return model
172
173
174 def auto_optim(optimizer: Optimizer) -> Optimizer:
175 """Helper method to adapt optimizer for non-distributed and distributed configurations (supporting
176 all available backends from :meth:`~ignite.distributed.utils.available_backends()`).
177
178 Internally, this method is no-op for non-distributed and torch native distributed configuration.
179 For XLA distributed configuration, we create a new class that inherits from provided optimizer.
180 The goal is to override the `step()` method with specific `xm.optimizer_step`_ implementation.
181
182 Examples:
183
184 .. code-block:: python
185
186 import ignite.distribted as idist
187
188 optimizer = idist.auto_optim(optimizer)
189
190
191 Args:
192 optimizer (Optimizer): input torch optimizer
193
194 Returns:
195 Optimizer
196
197 .. _xm.optimizer_step: http://pytorch.org/xla/release/1.5/index.html#torch_xla.core.xla_model.optimizer_step
198
199 """
200 if not (idist.has_xla_support and idist.backend() == idist_xla.XLA_TPU):
201 return optimizer
202
203 cls = type(optimizer.__class__.__name__, (optimizer.__class__,), dict(_XLADistributedOptimizer.__dict__))
204 return cls(optimizer)
205
206
207 class DistributedProxySampler(DistributedSampler):
208 """Distributed sampler proxy to adapt user's sampler for distributed data parallelism configuration.
209
210 Code is based on https://github.com/pytorch/pytorch/issues/23430#issuecomment-562350407
211
212
213 .. note::
214 Input sampler is assumed to have a constant size.
215
216 Args:
217 sampler (Sampler): Input torch data sampler.
218 num_replicas (int, optional): Number of processes participating in distributed training.
219 rank (int, optional): Rank of the current process within ``num_replicas``.
220
221 """
222
223 def __init__(self, sampler: Sampler, num_replicas=None, rank=None):
224
225 if not isinstance(sampler, Sampler):
226 raise TypeError("Argument sampler should be instance of torch Sampler, but given: {}".format(type(sampler)))
227
228 if not hasattr(sampler, "__len__"):
229 raise TypeError("Argument sampler should have length")
230
231 super(DistributedProxySampler, self).__init__(sampler, num_replicas=num_replicas, rank=rank, shuffle=False)
232 self.sampler = sampler
233
234 def __iter__(self):
235 # deterministically shuffle based on epoch
236 torch.manual_seed(self.epoch)
237
238 indices = []
239 while len(indices) < self.total_size:
240 indices += list(self.sampler)
241
242 if len(indices) != self.total_size:
243 raise RuntimeError("{} vs {}".format(len(indices), self.total_size))
244
245 # subsample
246 indices = indices[self.rank : self.total_size : self.num_replicas]
247 if len(indices) != self.num_samples:
248 raise RuntimeError("{} vs {}".format(len(indices), self.num_samples))
249
250 return iter(indices)
251
252
253 if idist.has_xla_support:
254
255 import torch_xla.core.xla_model as xm
256 from torch_xla.distributed.parallel_loader import ParallelLoader
257
258 class _MpDeviceLoader:
259 # https://github.com/pytorch/xla/pull/2117
260 # From pytorch/xla if `torch_xla.distributed.parallel_loader.MpDeviceLoader` is not available
261 def __init__(self, loader, device, **kwargs):
262 self._loader = loader
263 self._device = device
264 self._parallel_loader_kwargs = kwargs
265
266 def __iter__(self):
267 parallel_loader = ParallelLoader(self._loader, [self._device], **self._parallel_loader_kwargs)
268 return parallel_loader.per_device_loader(self._device)
269
270 def __len__(self):
271 return len(self._loader)
272
273 class _XLADistributedOptimizer(Optimizer):
274 def __init__(self, optimizer):
275 super(self.__class__, self).__init__(optimizer.param_groups)
276 self.wrapped_optimizer = optimizer
277
278 def step(self, closure=None):
279 xm.optimizer_step(self.wrapped_optimizer, barrier=True)
280
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/distributed/auto.py b/ignite/distributed/auto.py
--- a/ignite/distributed/auto.py
+++ b/ignite/distributed/auto.py
@@ -21,8 +21,8 @@
Internally, we create a dataloader with provided kwargs while applying the following updates:
- - batch size is scaled by world size: ``batch_size / world_size``.
- - number of workers is scaled by number of local processes: ``num_workers / nprocs``.
+ - batch size is scaled by world size: ``batch_size / world_size`` if larger or equal world size.
+ - number of workers is scaled by number of local processes: ``num_workers / nprocs`` if larger or equal world size.
- if no sampler provided by user, `torch DistributedSampler` is setup.
- if a sampler is provided by user, it is wrapped by :class:`~ignite.distributed.auto.DistributedProxySampler`.
- if the default device is 'cuda', `pin_memory` is automatically set to `True`.
@@ -63,13 +63,12 @@
world_size = idist.get_world_size()
logger = setup_logger(__name__ + ".auto_dataloader")
-
if world_size > 1:
- if "batch_size" in kwargs:
+ if "batch_size" in kwargs and kwargs["batch_size"] >= world_size:
kwargs["batch_size"] //= world_size
- if "num_workers" in kwargs:
- nproc = idist.get_nproc_per_node()
+ nproc = idist.get_nproc_per_node()
+ if "num_workers" in kwargs and kwargs["num_workers"] >= nproc:
kwargs["num_workers"] = (kwargs["num_workers"] + nproc - 1) // nproc
if "batch_sampler" not in kwargs:
|
{"golden_diff": "diff --git a/ignite/distributed/auto.py b/ignite/distributed/auto.py\n--- a/ignite/distributed/auto.py\n+++ b/ignite/distributed/auto.py\n@@ -21,8 +21,8 @@\n \n Internally, we create a dataloader with provided kwargs while applying the following updates:\n \n- - batch size is scaled by world size: ``batch_size / world_size``.\n- - number of workers is scaled by number of local processes: ``num_workers / nprocs``.\n+ - batch size is scaled by world size: ``batch_size / world_size`` if larger or equal world size.\n+ - number of workers is scaled by number of local processes: ``num_workers / nprocs`` if larger or equal world size.\n - if no sampler provided by user, `torch DistributedSampler` is setup.\n - if a sampler is provided by user, it is wrapped by :class:`~ignite.distributed.auto.DistributedProxySampler`.\n - if the default device is 'cuda', `pin_memory` is automatically set to `True`.\n@@ -63,13 +63,12 @@\n world_size = idist.get_world_size()\n \n logger = setup_logger(__name__ + \".auto_dataloader\")\n-\n if world_size > 1:\n- if \"batch_size\" in kwargs:\n+ if \"batch_size\" in kwargs and kwargs[\"batch_size\"] >= world_size:\n kwargs[\"batch_size\"] //= world_size\n \n- if \"num_workers\" in kwargs:\n- nproc = idist.get_nproc_per_node()\n+ nproc = idist.get_nproc_per_node()\n+ if \"num_workers\" in kwargs and kwargs[\"num_workers\"] >= nproc:\n kwargs[\"num_workers\"] = (kwargs[\"num_workers\"] + nproc - 1) // nproc\n \n if \"batch_sampler\" not in kwargs:\n", "issue": "Auto dataloader - ValueError: batch_size should be a positive integer value, but got batch_size=0\n## \ud83d\udc1b Bug description\r\n\r\nThe following code will brake if `batch_size` is smaller than world size :\r\n```python\r\nimport torch\r\nimport ignite.distributed as idist\r\n\r\ndef foo(i):\r\n data = torch.arange(100).reshape(25, 4)\r\n data_loader = idist.auto_dataloader(data, batch_size=6, num_workers=12)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n idist.spawn(\"gloo\", foo, args=(), nproc_per_node=8) \r\n```\r\n```\r\nValueError: batch_size should be a positive integer value, but got batch_size=0\r\n```\r\n\r\nA fix can be:\r\n- keep batch size as provided if smaller than world size\r\n- same for `num_workers`\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.4): 1.5.0\r\n - Ignite Version (e.g., 0.3.0): master\r\n - OS (e.g., Linux): linux\r\n - How you installed Ignite (`conda`, `pip`, source):\r\n - Python version: 3.7\r\n - Any other relevant information:\r\n\r\n\r\n@InCogNiTo124 would you like to fix this as you recently played around `auto_*` ?\n", "before_files": [{"content": "import warnings\n\nimport torch\nimport torch.nn as nn\nfrom torch.optim.optimizer import Optimizer\nfrom torch.utils.data import DataLoader, Dataset\nfrom torch.utils.data.distributed import DistributedSampler\nfrom torch.utils.data.sampler import Sampler\n\nfrom ignite.distributed import utils as idist\nfrom ignite.distributed.comp_models import native as idist_native\nfrom ignite.distributed.comp_models import xla as idist_xla\nfrom ignite.utils import setup_logger\n\n__all__ = [\"auto_dataloader\", \"auto_model\", \"auto_optim\", \"DistributedProxySampler\"]\n\n\ndef auto_dataloader(dataset, **kwargs):\n \"\"\"Helper method to create a dataloader adapted for non-distributed and distributed configurations (supporting\n all available backends from :meth:`~ignite.distributed.utils.available_backends()`).\n\n Internally, we create a dataloader with provided kwargs while applying the following 
updates:\n\n - batch size is scaled by world size: ``batch_size / world_size``.\n - number of workers is scaled by number of local processes: ``num_workers / nprocs``.\n - if no sampler provided by user, `torch DistributedSampler` is setup.\n - if a sampler is provided by user, it is wrapped by :class:`~ignite.distributed.auto.DistributedProxySampler`.\n - if the default device is 'cuda', `pin_memory` is automatically set to `True`.\n\n .. warning::\n\n Custom batch sampler is not adapted for distributed configuration. Please, make sure that provided batch\n sampler is compatible with distributed configuration.\n\n Examples:\n\n .. code-block:: python\n\n import ignite.distribted as idist\n\n train_loader = idist.auto_dataloader(\n train_dataset,\n batch_size=32,\n num_workers=4,\n shuffle=True,\n pin_memory=\"cuda\" in idist.device().type,\n drop_last=True,\n )\n\n Args:\n dataset (Dataset): input torch dataset\n **kwargs: keyword arguments for `torch DataLoader`_.\n\n Returns:\n `torch DataLoader`_ or `XLA MpDeviceLoader`_ for XLA devices\n\n .. _torch DataLoader: https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader\n .. _XLA MpDeviceLoader: https://github.com/pytorch/xla/blob/master/torch_xla/distributed/parallel_loader.py#L178\n .. _torch DistributedSampler:\n https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler\n \"\"\"\n rank = idist.get_rank()\n world_size = idist.get_world_size()\n\n logger = setup_logger(__name__ + \".auto_dataloader\")\n\n if world_size > 1:\n if \"batch_size\" in kwargs:\n kwargs[\"batch_size\"] //= world_size\n\n if \"num_workers\" in kwargs:\n nproc = idist.get_nproc_per_node()\n kwargs[\"num_workers\"] = (kwargs[\"num_workers\"] + nproc - 1) // nproc\n\n if \"batch_sampler\" not in kwargs:\n if kwargs.get(\"sampler\", None) is not None:\n sampler = DistributedProxySampler(kwargs[\"sampler\"], num_replicas=world_size, rank=rank)\n else:\n sampler = DistributedSampler(\n dataset, num_replicas=world_size, rank=rank, shuffle=kwargs.get(\"shuffle\", True)\n )\n # we need to remove \"shuffle\" from kwargs if sampler is used\n if \"shuffle\" in kwargs:\n del kwargs[\"shuffle\"]\n\n kwargs[\"sampler\"] = sampler\n else:\n warnings.warn(\n \"Found batch_sampler in provided kwargs. Please, make sure that it is compatible \"\n \"with distributed configuration\"\n )\n\n if idist.has_xla_support and idist.backend() == idist_xla.XLA_TPU and kwargs.get(\"pin_memory\", False):\n # TODO: How about XLA GPU ?\n warnings.warn(\n \"Found incompatible options: xla support and pin_memory args equal True. 
\"\n \"Argument `pin_memory=False` will be used to construct data loader.\"\n )\n kwargs[\"pin_memory\"] = False\n else:\n kwargs[\"pin_memory\"] = kwargs.get(\"pin_memory\", \"cuda\" in idist.device().type)\n\n logger.info(\"Use data loader kwargs for dataset '{}': \\n\\t{}\".format(repr(dataset)[:20].strip(), kwargs))\n dataloader = DataLoader(dataset, **kwargs)\n\n if idist.has_xla_support and idist.backend() == idist_xla.XLA_TPU and world_size > 1:\n\n logger.info(\"DataLoader is wrapped by `MpDeviceLoader` on XLA\")\n\n mp_device_loader_cls = _MpDeviceLoader\n try:\n from torch_xla.distributed.parallel_loader import MpDeviceLoader\n\n mp_device_loader_cls = MpDeviceLoader\n except ImportError:\n pass\n\n sampler = dataloader.sampler\n dataloader = mp_device_loader_cls(dataloader, idist.device())\n dataloader.sampler = sampler\n\n return dataloader\n\n\ndef auto_model(model: nn.Module) -> nn.Module:\n \"\"\"Helper method to adapt provided model for non-distributed and distributed configurations (supporting\n all available backends from :meth:`~ignite.distributed.utils.available_backends()`).\n\n Internally, we perform to following:\n\n - send model to current :meth:`~ignite.distributed.utils.device()`.\n - wrap the model to `torch DistributedDataParallel`_ for native torch distributed if world size is larger than 1\n - wrap the model to `torch DataParallel`_ if no distributed context found and more than one CUDA devices available.\n\n Examples:\n\n .. code-block:: python\n\n import ignite.distribted as idist\n\n model = idist.auto_model(model)\n\n Args:\n model (torch.nn.Module): model to adapt.\n\n Returns:\n torch.nn.Module\n\n .. _torch DistributedDataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel\n .. _torch DataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel\n \"\"\"\n logger = setup_logger(__name__ + \".auto_model\")\n\n model.to(idist.device())\n\n # distributed data parallel model\n if idist.get_world_size() > 1:\n if idist.backend() == idist_native.NCCL:\n lrank = idist.get_local_rank()\n logger.info(\"Apply torch DistributedDataParallel on model, device id: {}\".format(lrank))\n model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[lrank,])\n elif idist.backend() == idist_native.GLOO:\n logger.info(\"Apply torch DistributedDataParallel on model\")\n model = torch.nn.parallel.DistributedDataParallel(model)\n\n # not distributed but multiple GPUs reachable so data parallel model\n elif torch.cuda.device_count() > 1 and \"cuda\" in idist.device().type:\n logger.info(\"Apply torch DataParallel on model\")\n model = torch.nn.parallel.DataParallel(model)\n\n return model\n\n\ndef auto_optim(optimizer: Optimizer) -> Optimizer:\n \"\"\"Helper method to adapt optimizer for non-distributed and distributed configurations (supporting\n all available backends from :meth:`~ignite.distributed.utils.available_backends()`).\n\n Internally, this method is no-op for non-distributed and torch native distributed configuration.\n For XLA distributed configuration, we create a new class that inherits from provided optimizer.\n The goal is to override the `step()` method with specific `xm.optimizer_step`_ implementation.\n\n Examples:\n\n .. code-block:: python\n\n import ignite.distribted as idist\n\n optimizer = idist.auto_optim(optimizer)\n\n\n Args:\n optimizer (Optimizer): input torch optimizer\n\n Returns:\n Optimizer\n\n .. 
_xm.optimizer_step: http://pytorch.org/xla/release/1.5/index.html#torch_xla.core.xla_model.optimizer_step\n\n \"\"\"\n if not (idist.has_xla_support and idist.backend() == idist_xla.XLA_TPU):\n return optimizer\n\n cls = type(optimizer.__class__.__name__, (optimizer.__class__,), dict(_XLADistributedOptimizer.__dict__))\n return cls(optimizer)\n\n\nclass DistributedProxySampler(DistributedSampler):\n \"\"\"Distributed sampler proxy to adapt user's sampler for distributed data parallelism configuration.\n\n Code is based on https://github.com/pytorch/pytorch/issues/23430#issuecomment-562350407\n\n\n .. note::\n Input sampler is assumed to have a constant size.\n\n Args:\n sampler (Sampler): Input torch data sampler.\n num_replicas (int, optional): Number of processes participating in distributed training.\n rank (int, optional): Rank of the current process within ``num_replicas``.\n\n \"\"\"\n\n def __init__(self, sampler: Sampler, num_replicas=None, rank=None):\n\n if not isinstance(sampler, Sampler):\n raise TypeError(\"Argument sampler should be instance of torch Sampler, but given: {}\".format(type(sampler)))\n\n if not hasattr(sampler, \"__len__\"):\n raise TypeError(\"Argument sampler should have length\")\n\n super(DistributedProxySampler, self).__init__(sampler, num_replicas=num_replicas, rank=rank, shuffle=False)\n self.sampler = sampler\n\n def __iter__(self):\n # deterministically shuffle based on epoch\n torch.manual_seed(self.epoch)\n\n indices = []\n while len(indices) < self.total_size:\n indices += list(self.sampler)\n\n if len(indices) != self.total_size:\n raise RuntimeError(\"{} vs {}\".format(len(indices), self.total_size))\n\n # subsample\n indices = indices[self.rank : self.total_size : self.num_replicas]\n if len(indices) != self.num_samples:\n raise RuntimeError(\"{} vs {}\".format(len(indices), self.num_samples))\n\n return iter(indices)\n\n\nif idist.has_xla_support:\n\n import torch_xla.core.xla_model as xm\n from torch_xla.distributed.parallel_loader import ParallelLoader\n\n class _MpDeviceLoader:\n # https://github.com/pytorch/xla/pull/2117\n # From pytorch/xla if `torch_xla.distributed.parallel_loader.MpDeviceLoader` is not available\n def __init__(self, loader, device, **kwargs):\n self._loader = loader\n self._device = device\n self._parallel_loader_kwargs = kwargs\n\n def __iter__(self):\n parallel_loader = ParallelLoader(self._loader, [self._device], **self._parallel_loader_kwargs)\n return parallel_loader.per_device_loader(self._device)\n\n def __len__(self):\n return len(self._loader)\n\n class _XLADistributedOptimizer(Optimizer):\n def __init__(self, optimizer):\n super(self.__class__, self).__init__(optimizer.param_groups)\n self.wrapped_optimizer = optimizer\n\n def step(self, closure=None):\n xm.optimizer_step(self.wrapped_optimizer, barrier=True)\n", "path": "ignite/distributed/auto.py"}], "after_files": [{"content": "import warnings\n\nimport torch\nimport torch.nn as nn\nfrom torch.optim.optimizer import Optimizer\nfrom torch.utils.data import DataLoader, Dataset\nfrom torch.utils.data.distributed import DistributedSampler\nfrom torch.utils.data.sampler import Sampler\n\nfrom ignite.distributed import utils as idist\nfrom ignite.distributed.comp_models import native as idist_native\nfrom ignite.distributed.comp_models import xla as idist_xla\nfrom ignite.utils import setup_logger\n\n__all__ = [\"auto_dataloader\", \"auto_model\", \"auto_optim\", \"DistributedProxySampler\"]\n\n\ndef auto_dataloader(dataset, **kwargs):\n \"\"\"Helper method to 
create a dataloader adapted for non-distributed and distributed configurations (supporting\n all available backends from :meth:`~ignite.distributed.utils.available_backends()`).\n\n Internally, we create a dataloader with provided kwargs while applying the following updates:\n\n - batch size is scaled by world size: ``batch_size / world_size`` if larger or equal world size.\n - number of workers is scaled by number of local processes: ``num_workers / nprocs`` if larger or equal world size.\n - if no sampler provided by user, `torch DistributedSampler` is setup.\n - if a sampler is provided by user, it is wrapped by :class:`~ignite.distributed.auto.DistributedProxySampler`.\n - if the default device is 'cuda', `pin_memory` is automatically set to `True`.\n\n .. warning::\n\n Custom batch sampler is not adapted for distributed configuration. Please, make sure that provided batch\n sampler is compatible with distributed configuration.\n\n Examples:\n\n .. code-block:: python\n\n import ignite.distribted as idist\n\n train_loader = idist.auto_dataloader(\n train_dataset,\n batch_size=32,\n num_workers=4,\n shuffle=True,\n pin_memory=\"cuda\" in idist.device().type,\n drop_last=True,\n )\n\n Args:\n dataset (Dataset): input torch dataset\n **kwargs: keyword arguments for `torch DataLoader`_.\n\n Returns:\n `torch DataLoader`_ or `XLA MpDeviceLoader`_ for XLA devices\n\n .. _torch DataLoader: https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader\n .. _XLA MpDeviceLoader: https://github.com/pytorch/xla/blob/master/torch_xla/distributed/parallel_loader.py#L178\n .. _torch DistributedSampler:\n https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler\n \"\"\"\n rank = idist.get_rank()\n world_size = idist.get_world_size()\n\n logger = setup_logger(__name__ + \".auto_dataloader\")\n if world_size > 1:\n if \"batch_size\" in kwargs and kwargs[\"batch_size\"] >= world_size:\n kwargs[\"batch_size\"] //= world_size\n\n nproc = idist.get_nproc_per_node()\n if \"num_workers\" in kwargs and kwargs[\"num_workers\"] >= nproc:\n kwargs[\"num_workers\"] = (kwargs[\"num_workers\"] + nproc - 1) // nproc\n\n if \"batch_sampler\" not in kwargs:\n if kwargs.get(\"sampler\", None) is not None:\n sampler = DistributedProxySampler(kwargs[\"sampler\"], num_replicas=world_size, rank=rank)\n else:\n sampler = DistributedSampler(\n dataset, num_replicas=world_size, rank=rank, shuffle=kwargs.get(\"shuffle\", True)\n )\n # we need to remove \"shuffle\" from kwargs if sampler is used\n if \"shuffle\" in kwargs:\n del kwargs[\"shuffle\"]\n\n kwargs[\"sampler\"] = sampler\n else:\n warnings.warn(\n \"Found batch_sampler in provided kwargs. Please, make sure that it is compatible \"\n \"with distributed configuration\"\n )\n\n if idist.has_xla_support and idist.backend() == idist_xla.XLA_TPU and kwargs.get(\"pin_memory\", False):\n # TODO: How about XLA GPU ?\n warnings.warn(\n \"Found incompatible options: xla support and pin_memory args equal True. 
\"\n \"Argument `pin_memory=False` will be used to construct data loader.\"\n )\n kwargs[\"pin_memory\"] = False\n else:\n kwargs[\"pin_memory\"] = kwargs.get(\"pin_memory\", \"cuda\" in idist.device().type)\n\n logger.info(\"Use data loader kwargs for dataset '{}': \\n\\t{}\".format(repr(dataset)[:20].strip(), kwargs))\n dataloader = DataLoader(dataset, **kwargs)\n\n if idist.has_xla_support and idist.backend() == idist_xla.XLA_TPU and world_size > 1:\n\n logger.info(\"DataLoader is wrapped by `MpDeviceLoader` on XLA\")\n\n mp_device_loader_cls = _MpDeviceLoader\n try:\n from torch_xla.distributed.parallel_loader import MpDeviceLoader\n\n mp_device_loader_cls = MpDeviceLoader\n except ImportError:\n pass\n\n sampler = dataloader.sampler\n dataloader = mp_device_loader_cls(dataloader, idist.device())\n dataloader.sampler = sampler\n\n return dataloader\n\n\ndef auto_model(model: nn.Module) -> nn.Module:\n \"\"\"Helper method to adapt provided model for non-distributed and distributed configurations (supporting\n all available backends from :meth:`~ignite.distributed.utils.available_backends()`).\n\n Internally, we perform to following:\n\n - send model to current :meth:`~ignite.distributed.utils.device()`.\n - wrap the model to `torch DistributedDataParallel`_ for native torch distributed if world size is larger than 1\n - wrap the model to `torch DataParallel`_ if no distributed context found and more than one CUDA devices available.\n\n Examples:\n\n .. code-block:: python\n\n import ignite.distribted as idist\n\n model = idist.auto_model(model)\n\n Args:\n model (torch.nn.Module): model to adapt.\n\n Returns:\n torch.nn.Module\n\n .. _torch DistributedDataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel\n .. _torch DataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel\n \"\"\"\n logger = setup_logger(__name__ + \".auto_model\")\n\n model.to(idist.device())\n\n # distributed data parallel model\n if idist.get_world_size() > 1:\n if idist.backend() == idist_native.NCCL:\n lrank = idist.get_local_rank()\n logger.info(\"Apply torch DistributedDataParallel on model, device id: {}\".format(lrank))\n model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[lrank,])\n elif idist.backend() == idist_native.GLOO:\n logger.info(\"Apply torch DistributedDataParallel on model\")\n model = torch.nn.parallel.DistributedDataParallel(model)\n\n # not distributed but multiple GPUs reachable so data parallel model\n elif torch.cuda.device_count() > 1 and \"cuda\" in idist.device().type:\n logger.info(\"Apply torch DataParallel on model\")\n model = torch.nn.parallel.DataParallel(model)\n\n return model\n\n\ndef auto_optim(optimizer: Optimizer) -> Optimizer:\n \"\"\"Helper method to adapt optimizer for non-distributed and distributed configurations (supporting\n all available backends from :meth:`~ignite.distributed.utils.available_backends()`).\n\n Internally, this method is no-op for non-distributed and torch native distributed configuration.\n For XLA distributed configuration, we create a new class that inherits from provided optimizer.\n The goal is to override the `step()` method with specific `xm.optimizer_step`_ implementation.\n\n Examples:\n\n .. code-block:: python\n\n import ignite.distribted as idist\n\n optimizer = idist.auto_optim(optimizer)\n\n\n Args:\n optimizer (Optimizer): input torch optimizer\n\n Returns:\n Optimizer\n\n .. 
_xm.optimizer_step: http://pytorch.org/xla/release/1.5/index.html#torch_xla.core.xla_model.optimizer_step\n\n \"\"\"\n if not (idist.has_xla_support and idist.backend() == idist_xla.XLA_TPU):\n return optimizer\n\n cls = type(optimizer.__class__.__name__, (optimizer.__class__,), dict(_XLADistributedOptimizer.__dict__))\n return cls(optimizer)\n\n\nclass DistributedProxySampler(DistributedSampler):\n \"\"\"Distributed sampler proxy to adapt user's sampler for distributed data parallelism configuration.\n\n Code is based on https://github.com/pytorch/pytorch/issues/23430#issuecomment-562350407\n\n\n .. note::\n Input sampler is assumed to have a constant size.\n\n Args:\n sampler (Sampler): Input torch data sampler.\n num_replicas (int, optional): Number of processes participating in distributed training.\n rank (int, optional): Rank of the current process within ``num_replicas``.\n\n \"\"\"\n\n def __init__(self, sampler: Sampler, num_replicas=None, rank=None):\n\n if not isinstance(sampler, Sampler):\n raise TypeError(\"Argument sampler should be instance of torch Sampler, but given: {}\".format(type(sampler)))\n\n if not hasattr(sampler, \"__len__\"):\n raise TypeError(\"Argument sampler should have length\")\n\n super(DistributedProxySampler, self).__init__(sampler, num_replicas=num_replicas, rank=rank, shuffle=False)\n self.sampler = sampler\n\n def __iter__(self):\n # deterministically shuffle based on epoch\n torch.manual_seed(self.epoch)\n\n indices = []\n while len(indices) < self.total_size:\n indices += list(self.sampler)\n\n if len(indices) != self.total_size:\n raise RuntimeError(\"{} vs {}\".format(len(indices), self.total_size))\n\n # subsample\n indices = indices[self.rank : self.total_size : self.num_replicas]\n if len(indices) != self.num_samples:\n raise RuntimeError(\"{} vs {}\".format(len(indices), self.num_samples))\n\n return iter(indices)\n\n\nif idist.has_xla_support:\n\n import torch_xla.core.xla_model as xm\n from torch_xla.distributed.parallel_loader import ParallelLoader\n\n class _MpDeviceLoader:\n # https://github.com/pytorch/xla/pull/2117\n # From pytorch/xla if `torch_xla.distributed.parallel_loader.MpDeviceLoader` is not available\n def __init__(self, loader, device, **kwargs):\n self._loader = loader\n self._device = device\n self._parallel_loader_kwargs = kwargs\n\n def __iter__(self):\n parallel_loader = ParallelLoader(self._loader, [self._device], **self._parallel_loader_kwargs)\n return parallel_loader.per_device_loader(self._device)\n\n def __len__(self):\n return len(self._loader)\n\n class _XLADistributedOptimizer(Optimizer):\n def __init__(self, optimizer):\n super(self.__class__, self).__init__(optimizer.param_groups)\n self.wrapped_optimizer = optimizer\n\n def step(self, closure=None):\n xm.optimizer_step(self.wrapped_optimizer, barrier=True)\n", "path": "ignite/distributed/auto.py"}]}
| 3,694 | 403 |
gh_patches_debug_6190 | rasdani/github-patches | git_diff | microsoft__torchgeo-626 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update README
Before blog: update verbiage and add pics from blog/paper.
After blog: add link to blog.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchgeo/__init__.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 """TorchGeo: datasets, transforms, and models for geospatial data.
5
6 This library is part of the `PyTorch <http://pytorch.org/>`_ project. PyTorch is an open
7 source machine learning framework.
8
9 The :mod:`torchgeo` package consists of popular datasets, model architectures, and
10 common image transformations for geospatial data.
11 """
12
13 __author__ = "Adam J. Stewart"
14 __version__ = "0.3.0.dev0"
15
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torchgeo/__init__.py b/torchgeo/__init__.py
--- a/torchgeo/__init__.py
+++ b/torchgeo/__init__.py
@@ -1,7 +1,7 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
-"""TorchGeo: datasets, transforms, and models for geospatial data.
+"""TorchGeo: datasets, samplers, transforms, and pre-trained models for geospatial data.
This library is part of the `PyTorch <http://pytorch.org/>`_ project. PyTorch is an open
source machine learning framework.
|
{"golden_diff": "diff --git a/torchgeo/__init__.py b/torchgeo/__init__.py\n--- a/torchgeo/__init__.py\n+++ b/torchgeo/__init__.py\n@@ -1,7 +1,7 @@\n # Copyright (c) Microsoft Corporation. All rights reserved.\n # Licensed under the MIT License.\n \n-\"\"\"TorchGeo: datasets, transforms, and models for geospatial data.\n+\"\"\"TorchGeo: datasets, samplers, transforms, and pre-trained models for geospatial data.\n \n This library is part of the `PyTorch <http://pytorch.org/>`_ project. PyTorch is an open\n source machine learning framework.\n", "issue": "Update README\nBefore blog: update verbiage and add pics from blog/paper. \n\nAfter blog: add link to blog.\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"TorchGeo: datasets, transforms, and models for geospatial data.\n\nThis library is part of the `PyTorch <http://pytorch.org/>`_ project. PyTorch is an open\nsource machine learning framework.\n\nThe :mod:`torchgeo` package consists of popular datasets, model architectures, and\ncommon image transformations for geospatial data.\n\"\"\"\n\n__author__ = \"Adam J. Stewart\"\n__version__ = \"0.3.0.dev0\"\n", "path": "torchgeo/__init__.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"TorchGeo: datasets, samplers, transforms, and pre-trained models for geospatial data.\n\nThis library is part of the `PyTorch <http://pytorch.org/>`_ project. PyTorch is an open\nsource machine learning framework.\n\nThe :mod:`torchgeo` package consists of popular datasets, model architectures, and\ncommon image transformations for geospatial data.\n\"\"\"\n\n__author__ = \"Adam J. Stewart\"\n__version__ = \"0.3.0.dev0\"\n", "path": "torchgeo/__init__.py"}]}
| 432 | 145 |
gh_patches_debug_34052 | rasdani/github-patches | git_diff | ckan__ckan-5750 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Purging deleted packages does not delete associated filestore/datastore resources
**CKAN version**
2.9
**Describe the bug**
After deleting and purging dataset packages, the associated datastore/filestore resources are orphaned and not deleted.
**Steps to reproduce**
1. Delete datasets
2. Go to sysadmin/trash
3. Select "Purge All"
4. Check datastore and filestore. Associated resources are still there.
**Expected behavior**
Dataset resources in the filestore and the datastore are deleted when the associated package resources are purged.
**Additional details**
This is related to #4705 and a lot of work has been done on PRs #4867 and #4905, but neither have been merged.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext/datastore/cli.py`
Content:
```
1 # encoding: utf-8
2
3 import logging
4 import os
5
6 import click
7
8 from ckan.model import parse_db_config
9 from ckan.common import config
10
11 import ckanext.datastore as datastore_module
12 from ckanext.datastore.backend.postgres import identifier
13 from ckanext.datastore.blueprint import DUMP_FORMATS, dump_to
14
15 log = logging.getLogger(__name__)
16
17
18 @click.group()
19 def datastore():
20 u'''Perform commands to set up the datastore.
21 '''
22
23
24 @datastore.command(
25 u'set-permissions',
26 short_help=u'Generate SQL for permission configuration.'
27 )
28 def set_permissions():
29 u'''Emit an SQL script that will set the permissions for the datastore
30 users as configured in your configuration file.'''
31
32 write_url = _parse_db_config(u'ckan.datastore.write_url')
33 read_url = _parse_db_config(u'ckan.datastore.read_url')
34 db_url = _parse_db_config(u'sqlalchemy.url')
35
36 # Basic validation that read and write URLs reference the same database.
37 # This obviously doesn't check they're the same database (the hosts/ports
38 # could be different), but it's better than nothing, I guess.
39
40 if write_url[u'db_name'] != read_url[u'db_name']:
41 click.secho(
42 u'The datastore write_url and read_url must refer to the same '
43 u'database!',
44 fg=u'red',
45 bold=True
46 )
47 raise click.Abort()
48
49 sql = permissions_sql(
50 maindb=db_url[u'db_name'],
51 datastoredb=write_url[u'db_name'],
52 mainuser=db_url[u'db_user'],
53 writeuser=write_url[u'db_user'],
54 readuser=read_url[u'db_user']
55 )
56
57 click.echo(sql)
58
59
60 def permissions_sql(maindb, datastoredb, mainuser, writeuser, readuser):
61 template_filename = os.path.join(
62 os.path.dirname(datastore_module.__file__), u'set_permissions.sql'
63 )
64 with open(template_filename) as fp:
65 template = fp.read()
66 return template.format(
67 maindb=identifier(maindb),
68 datastoredb=identifier(datastoredb),
69 mainuser=identifier(mainuser),
70 writeuser=identifier(writeuser),
71 readuser=identifier(readuser)
72 )
73
74
75 @datastore.command()
76 @click.argument(u'resource-id', nargs=1)
77 @click.argument(
78 u'output-file',
79 type=click.File(u'wb'),
80 default=click.get_binary_stream(u'stdout')
81 )
82 @click.option(u'--format', default=u'csv', type=click.Choice(DUMP_FORMATS))
83 @click.option(u'--offset', type=click.IntRange(0, None), default=0)
84 @click.option(u'--limit', type=click.IntRange(0))
85 @click.option(u'--bom', is_flag=True) # FIXME: options based on format
86 @click.pass_context
87 def dump(ctx, resource_id, output_file, format, offset, limit, bom):
88 u'''Dump a datastore resource.
89 '''
90 flask_app = ctx.meta['flask_app']
91 with flask_app.test_request_context():
92 dump_to(
93 resource_id,
94 output_file,
95 fmt=format,
96 offset=offset,
97 limit=limit,
98 options={u'bom': bom},
99 sort=u'_id',
100 search_params={}
101 )
102
103
104 def _parse_db_config(config_key=u'sqlalchemy.url'):
105 db_config = parse_db_config(config_key)
106 if not db_config:
107 click.secho(
108 u'Could not extract db details from url: %r' % config[config_key],
109 fg=u'red',
110 bold=True
111 )
112 raise click.Abort()
113 return db_config
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckanext/datastore/cli.py b/ckanext/datastore/cli.py
--- a/ckanext/datastore/cli.py
+++ b/ckanext/datastore/cli.py
@@ -7,6 +7,7 @@
from ckan.model import parse_db_config
from ckan.common import config
+import ckan.logic as logic
import ckanext.datastore as datastore_module
from ckanext.datastore.backend.postgres import identifier
@@ -111,3 +112,69 @@
)
raise click.Abort()
return db_config
+
+
+@datastore.command(
+ u'purge',
+ short_help=u'purge orphaned resources from the datastore.'
+)
+def purge():
+ u'''Purge orphaned resources from the datastore using the datastore_delete
+ action, which drops tables when called without filters.'''
+
+ site_user = logic.get_action(u'get_site_user')({u'ignore_auth': True}, {})
+ context = {u'user': site_user[u'name']}
+
+ result = logic.get_action(u'datastore_search')(
+ context,
+ {u'resource_id': u'_table_metadata'}
+ )
+
+ resource_id_list = []
+ for record in result[u'records']:
+ try:
+ # ignore 'alias' records (views) as they are automatically
+ # deleted when the parent resource table is dropped
+ if record[u'alias_of']:
+ continue
+
+ # we need to do this to trigger resource_show auth function
+ site_user = logic.get_action(u'get_site_user')(
+ {u'ignore_auth': True}, {})
+ context = {u'user': site_user[u'name']}
+
+ logic.get_action(u'resource_show')(
+ context,
+ {u'id': record[u'name']}
+ )
+ except logic.NotFound:
+ resource_id_list.append(record[u'name'])
+ click.echo(u"Resource '%s' orphaned - queued for drop" %
+ record[u'name'])
+ except KeyError:
+ continue
+
+ orphaned_table_count = len(resource_id_list)
+ click.echo(u'%d orphaned tables found.' % orphaned_table_count)
+
+ if not orphaned_table_count:
+ return
+
+ click.confirm(u'Proceed with purge?', abort=True)
+
+ # Drop the orphaned datastore tables. When datastore_delete is called
+ # without filters, it does a drop table cascade
+ drop_count = 0
+ for resource_id in resource_id_list:
+ logic.get_action(u'datastore_delete')(
+ context,
+ {u'resource_id': resource_id, u'force': True}
+ )
+ click.echo(u"Table '%s' dropped)" % resource_id)
+ drop_count += 1
+
+ click.echo(u'Dropped %s tables' % drop_count)
+
+
+def get_commands():
+ return (set_permissions, dump, purge)
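
For reference, a minimal sketch of exercising the new `purge` subcommand through click's test runner. This is an illustration only: it assumes the patch above is applied and a fully configured CKAN environment (the logic actions called by `purge` need a working CKAN config and database).

```python
# Sketch: drive the patched `datastore purge` subcommand via click's CliRunner.
# Assumes a configured CKAN environment; without one, the logic actions will fail.
from click.testing import CliRunner

from ckanext.datastore.cli import datastore

runner = CliRunner()
# input="y\n" answers the "Proceed with purge?" confirmation prompt.
result = runner.invoke(datastore, ["purge"], input="y\n")
print(result.exit_code)
print(result.output)
```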
|
{"golden_diff": "diff --git a/ckanext/datastore/cli.py b/ckanext/datastore/cli.py\n--- a/ckanext/datastore/cli.py\n+++ b/ckanext/datastore/cli.py\n@@ -7,6 +7,7 @@\n \n from ckan.model import parse_db_config\n from ckan.common import config\n+import ckan.logic as logic\n \n import ckanext.datastore as datastore_module\n from ckanext.datastore.backend.postgres import identifier\n@@ -111,3 +112,69 @@\n )\n raise click.Abort()\n return db_config\n+\n+\[email protected](\n+ u'purge',\n+ short_help=u'purge orphaned resources from the datastore.'\n+)\n+def purge():\n+ u'''Purge orphaned resources from the datastore using the datastore_delete\n+ action, which drops tables when called without filters.'''\n+\n+ site_user = logic.get_action(u'get_site_user')({u'ignore_auth': True}, {})\n+ context = {u'user': site_user[u'name']}\n+\n+ result = logic.get_action(u'datastore_search')(\n+ context,\n+ {u'resource_id': u'_table_metadata'}\n+ )\n+\n+ resource_id_list = []\n+ for record in result[u'records']:\n+ try:\n+ # ignore 'alias' records (views) as they are automatically\n+ # deleted when the parent resource table is dropped\n+ if record[u'alias_of']:\n+ continue\n+\n+ # we need to do this to trigger resource_show auth function\n+ site_user = logic.get_action(u'get_site_user')(\n+ {u'ignore_auth': True}, {})\n+ context = {u'user': site_user[u'name']}\n+\n+ logic.get_action(u'resource_show')(\n+ context,\n+ {u'id': record[u'name']}\n+ )\n+ except logic.NotFound:\n+ resource_id_list.append(record[u'name'])\n+ click.echo(u\"Resource '%s' orphaned - queued for drop\" %\n+ record[u'name'])\n+ except KeyError:\n+ continue\n+\n+ orphaned_table_count = len(resource_id_list)\n+ click.echo(u'%d orphaned tables found.' % orphaned_table_count)\n+\n+ if not orphaned_table_count:\n+ return\n+\n+ click.confirm(u'Proceed with purge?', abort=True)\n+\n+ # Drop the orphaned datastore tables. When datastore_delete is called\n+ # without filters, it does a drop table cascade\n+ drop_count = 0\n+ for resource_id in resource_id_list:\n+ logic.get_action(u'datastore_delete')(\n+ context,\n+ {u'resource_id': resource_id, u'force': True}\n+ )\n+ click.echo(u\"Table '%s' dropped)\" % resource_id)\n+ drop_count += 1\n+\n+ click.echo(u'Dropped %s tables' % drop_count)\n+\n+\n+def get_commands():\n+ return (set_permissions, dump, purge)\n", "issue": "Purging deleted packages does not delete associated filestore/datastore resources\n**CKAN version**\r\n2.9\r\n\r\n**Describe the bug**\r\nAfter deleting and purging dataset packages, the associated datastore/filestore resources are orphaned and not deleted.\r\n\r\n**Steps to reproduce**\r\n\r\n1. Delete datasets\r\n2. Go to sysadmin/trash\r\n3. Select \"Purge All\"\r\n4. Check datastore and filestore. 
Associated resources are still there.\r\n\r\n**Expected behavior**\r\nDataset resources in the filestore and the datastore are deleted when the associated package resources are purged.\r\n\r\n**Additional details**\r\nThis is related to #4705 and a lot of work has been done on PRs #4867 and #4905, but neither have been merged.\r\n\n", "before_files": [{"content": "# encoding: utf-8\n\nimport logging\nimport os\n\nimport click\n\nfrom ckan.model import parse_db_config\nfrom ckan.common import config\n\nimport ckanext.datastore as datastore_module\nfrom ckanext.datastore.backend.postgres import identifier\nfrom ckanext.datastore.blueprint import DUMP_FORMATS, dump_to\n\nlog = logging.getLogger(__name__)\n\n\[email protected]()\ndef datastore():\n u'''Perform commands to set up the datastore.\n '''\n\n\[email protected](\n u'set-permissions',\n short_help=u'Generate SQL for permission configuration.'\n)\ndef set_permissions():\n u'''Emit an SQL script that will set the permissions for the datastore\n users as configured in your configuration file.'''\n\n write_url = _parse_db_config(u'ckan.datastore.write_url')\n read_url = _parse_db_config(u'ckan.datastore.read_url')\n db_url = _parse_db_config(u'sqlalchemy.url')\n\n # Basic validation that read and write URLs reference the same database.\n # This obviously doesn't check they're the same database (the hosts/ports\n # could be different), but it's better than nothing, I guess.\n\n if write_url[u'db_name'] != read_url[u'db_name']:\n click.secho(\n u'The datastore write_url and read_url must refer to the same '\n u'database!',\n fg=u'red',\n bold=True\n )\n raise click.Abort()\n\n sql = permissions_sql(\n maindb=db_url[u'db_name'],\n datastoredb=write_url[u'db_name'],\n mainuser=db_url[u'db_user'],\n writeuser=write_url[u'db_user'],\n readuser=read_url[u'db_user']\n )\n\n click.echo(sql)\n\n\ndef permissions_sql(maindb, datastoredb, mainuser, writeuser, readuser):\n template_filename = os.path.join(\n os.path.dirname(datastore_module.__file__), u'set_permissions.sql'\n )\n with open(template_filename) as fp:\n template = fp.read()\n return template.format(\n maindb=identifier(maindb),\n datastoredb=identifier(datastoredb),\n mainuser=identifier(mainuser),\n writeuser=identifier(writeuser),\n readuser=identifier(readuser)\n )\n\n\[email protected]()\[email protected](u'resource-id', nargs=1)\[email protected](\n u'output-file',\n type=click.File(u'wb'),\n default=click.get_binary_stream(u'stdout')\n)\[email protected](u'--format', default=u'csv', type=click.Choice(DUMP_FORMATS))\[email protected](u'--offset', type=click.IntRange(0, None), default=0)\[email protected](u'--limit', type=click.IntRange(0))\[email protected](u'--bom', is_flag=True) # FIXME: options based on format\[email protected]_context\ndef dump(ctx, resource_id, output_file, format, offset, limit, bom):\n u'''Dump a datastore resource.\n '''\n flask_app = ctx.meta['flask_app']\n with flask_app.test_request_context():\n dump_to(\n resource_id,\n output_file,\n fmt=format,\n offset=offset,\n limit=limit,\n options={u'bom': bom},\n sort=u'_id',\n search_params={}\n )\n\n\ndef _parse_db_config(config_key=u'sqlalchemy.url'):\n db_config = parse_db_config(config_key)\n if not db_config:\n click.secho(\n u'Could not extract db details from url: %r' % config[config_key],\n fg=u'red',\n bold=True\n )\n raise click.Abort()\n return db_config\n", "path": "ckanext/datastore/cli.py"}], "after_files": [{"content": "# encoding: utf-8\n\nimport logging\nimport os\n\nimport click\n\nfrom 
ckan.model import parse_db_config\nfrom ckan.common import config\nimport ckan.logic as logic\n\nimport ckanext.datastore as datastore_module\nfrom ckanext.datastore.backend.postgres import identifier\nfrom ckanext.datastore.blueprint import DUMP_FORMATS, dump_to\n\nlog = logging.getLogger(__name__)\n\n\[email protected]()\ndef datastore():\n u'''Perform commands to set up the datastore.\n '''\n\n\[email protected](\n u'set-permissions',\n short_help=u'Generate SQL for permission configuration.'\n)\ndef set_permissions():\n u'''Emit an SQL script that will set the permissions for the datastore\n users as configured in your configuration file.'''\n\n write_url = _parse_db_config(u'ckan.datastore.write_url')\n read_url = _parse_db_config(u'ckan.datastore.read_url')\n db_url = _parse_db_config(u'sqlalchemy.url')\n\n # Basic validation that read and write URLs reference the same database.\n # This obviously doesn't check they're the same database (the hosts/ports\n # could be different), but it's better than nothing, I guess.\n\n if write_url[u'db_name'] != read_url[u'db_name']:\n click.secho(\n u'The datastore write_url and read_url must refer to the same '\n u'database!',\n fg=u'red',\n bold=True\n )\n raise click.Abort()\n\n sql = permissions_sql(\n maindb=db_url[u'db_name'],\n datastoredb=write_url[u'db_name'],\n mainuser=db_url[u'db_user'],\n writeuser=write_url[u'db_user'],\n readuser=read_url[u'db_user']\n )\n\n click.echo(sql)\n\n\ndef permissions_sql(maindb, datastoredb, mainuser, writeuser, readuser):\n template_filename = os.path.join(\n os.path.dirname(datastore_module.__file__), u'set_permissions.sql'\n )\n with open(template_filename) as fp:\n template = fp.read()\n return template.format(\n maindb=identifier(maindb),\n datastoredb=identifier(datastoredb),\n mainuser=identifier(mainuser),\n writeuser=identifier(writeuser),\n readuser=identifier(readuser)\n )\n\n\[email protected]()\[email protected](u'resource-id', nargs=1)\[email protected](\n u'output-file',\n type=click.File(u'wb'),\n default=click.get_binary_stream(u'stdout')\n)\[email protected](u'--format', default=u'csv', type=click.Choice(DUMP_FORMATS))\[email protected](u'--offset', type=click.IntRange(0, None), default=0)\[email protected](u'--limit', type=click.IntRange(0))\[email protected](u'--bom', is_flag=True) # FIXME: options based on format\[email protected]_context\ndef dump(ctx, resource_id, output_file, format, offset, limit, bom):\n u'''Dump a datastore resource.\n '''\n flask_app = ctx.meta['flask_app']\n with flask_app.test_request_context():\n dump_to(\n resource_id,\n output_file,\n fmt=format,\n offset=offset,\n limit=limit,\n options={u'bom': bom},\n sort=u'_id',\n search_params={}\n )\n\n\ndef _parse_db_config(config_key=u'sqlalchemy.url'):\n db_config = parse_db_config(config_key)\n if not db_config:\n click.secho(\n u'Could not extract db details from url: %r' % config[config_key],\n fg=u'red',\n bold=True\n )\n raise click.Abort()\n return db_config\n\n\[email protected](\n u'purge',\n short_help=u'purge orphaned resources from the datastore.'\n)\ndef purge():\n u'''Purge orphaned resources from the datastore using the datastore_delete\n action, which drops tables when called without filters.'''\n\n site_user = logic.get_action(u'get_site_user')({u'ignore_auth': True}, {})\n context = {u'user': site_user[u'name']}\n\n result = logic.get_action(u'datastore_search')(\n context,\n {u'resource_id': u'_table_metadata'}\n )\n\n resource_id_list = []\n for record in result[u'records']:\n try:\n # ignore 
'alias' records (views) as they are automatically\n # deleted when the parent resource table is dropped\n if record[u'alias_of']:\n continue\n\n # we need to do this to trigger resource_show auth function\n site_user = logic.get_action(u'get_site_user')(\n {u'ignore_auth': True}, {})\n context = {u'user': site_user[u'name']}\n\n logic.get_action(u'resource_show')(\n context,\n {u'id': record[u'name']}\n )\n except logic.NotFound:\n resource_id_list.append(record[u'name'])\n click.echo(u\"Resource '%s' orphaned - queued for drop\" %\n record[u'name'])\n except KeyError:\n continue\n\n orphaned_table_count = len(resource_id_list)\n click.echo(u'%d orphaned tables found.' % orphaned_table_count)\n\n if not orphaned_table_count:\n return\n\n click.confirm(u'Proceed with purge?', abort=True)\n\n # Drop the orphaned datastore tables. When datastore_delete is called\n # without filters, it does a drop table cascade\n drop_count = 0\n for resource_id in resource_id_list:\n logic.get_action(u'datastore_delete')(\n context,\n {u'resource_id': resource_id, u'force': True}\n )\n click.echo(u\"Table '%s' dropped)\" % resource_id)\n drop_count += 1\n\n click.echo(u'Dropped %s tables' % drop_count)\n\n\ndef get_commands():\n return (set_permissions, dump, purge)\n", "path": "ckanext/datastore/cli.py"}]}
| 1,465 | 664 |
gh_patches_debug_19386
|
rasdani/github-patches
|
git_diff
|
OCHA-DAP__hdx-ckan-1747
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ebola Page: make millions and % smaller
About half the current height looks right.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py`
Content:
```
1 '''
2 Created on Nov 3, 2014
3
4 @author: alexandru-m-g
5 '''
6
7 import logging
8 import datetime as dt
9 import decimal
10
11 import pylons.config as config
12
13 import ckan.lib.base as base
14 import ckan.logic as logic
15 import ckan.model as model
16 import ckan.common as common
17 import ckan.lib.helpers as h
18
19 render = base.render
20 get_action = logic.get_action
21 c = common.c
22 request = common.request
23 _ = common._
24
25 Decimal = decimal.Decimal
26
27 log = logging.getLogger(__name__)
28
29
30 class CrisisController(base.BaseController):
31
32 def show(self):
33
34 context = {'model': model, 'session': model.Session,
35 'user': c.user or c.author, 'for_view': True,
36 'auth_user_obj': c.userobj}
37
38 datastore_resource_id = self._get_datastore_resource_id(
39 context, config.get('hdx.crisis.ebola_dataset', None), config.get('hdx.crisis.ebola_resource_title', None))
40 if datastore_resource_id:
41 c.top_line_items = self._get_top_line_items(
42 context, datastore_resource_id)
43
44 limit = 25
45 c.q = u'ebola'
46
47 page = int(request.params.get('page', 1))
48 data_dict = {'sort': u'metadata_modified desc',
49 'fq': '+dataset_type:dataset',
50 'rows': limit,
51 'q': c.q,
52 'start': (page - 1) * limit
53 }
54 query = get_action("package_search")(context, data_dict)
55
56 def pager_url(q=None, page=None):
57 return h.url_for('show_crisis', page=page)
58
59 c.page = h.Page(
60 collection=query['results'],
61 page=page,
62 url=pager_url,
63 item_count=query['count'],
64 items_per_page=limit
65 )
66 c.items = query['results']
67 c.item_count = query['count']
68
69 c.other_links = {}
70 c.other_links['show_more'] = h.url_for(
71 "search", **{'q': u'ebola', 'sort': u'metadata_modified desc',
72 'ext_indicator': '0'})
73
74 return render('crisis/crisis.html')
75
76 def _get_decimal_value(self, value):
77 decimal_value = Decimal(str(value)).quantize(
78 Decimal('.1'), rounding=decimal.ROUND_HALF_UP)
79 return decimal_value
80
81 def _format_results(self, result):
82 for r in result['records']:
83 d = dt.datetime.strptime(r[u'latest_date'], '%Y-%m-%dT%H:%M:%S')
84 r[u'latest_date'] = dt.datetime.strftime(d, '%b %d, %Y')
85
86 modified_value = r[u'value']
87 if r[u'units'] == 'ratio':
88 modified_value *= 100.0
89 elif r[u'units'] == 'million':
90 modified_value /= 1000000.0
91
92 int_value = int(modified_value)
93 if int_value == modified_value:
94 r[u'formatted_value'] = '{:,}'.format(int_value)
95 else:
96 if r[u'units'] == 'ratio':
97 r[u'formatted_value'] = '{:,.1f}%'.format(
98 self._get_decimal_value(modified_value))
99 elif r[u'units'] == 'million':
100 r[u'formatted_value'] = '{:,.1f} '.format(
101 self._get_decimal_value(modified_value))
102 r[u'formatted_value'] += ' ' + _('million')
103
104 def _get_top_line_items(self, context, datastore_resource_id):
105 modified_context = dict(context)
106 modified_context['ignore_auth'] = True
107 result = get_action('datastore_search')(
108 modified_context, {'resource_id': datastore_resource_id})
109 if 'records' in result:
110 self._format_results(result)
111 return result['records']
112 return []
113
114 def _get_datastore_resource_id(self, context, dataset_id, resource_name):
115 try:
116 modified_context = dict(context)
117 modified_context['ignore_auth'] = True
118 dataset = get_action('package_show')(
119 modified_context, {'id': dataset_id})
120
121 if 'resources' in dataset:
122 for r in dataset['resources']:
123 if 'datastore_active' in r and r['datastore_active'] \
124 and r['name'] == resource_name:
125 return r['id']
126 return None
127 except:
128 log.warning('No dataset with id ' + dataset_id)
129 return None
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py
--- a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py
+++ b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py
@@ -94,12 +94,12 @@
r[u'formatted_value'] = '{:,}'.format(int_value)
else:
if r[u'units'] == 'ratio':
- r[u'formatted_value'] = '{:,.1f}%'.format(
+ r[u'formatted_value'] = '{:,.1f}'.format(
self._get_decimal_value(modified_value))
elif r[u'units'] == 'million':
- r[u'formatted_value'] = '{:,.1f} '.format(
+ r[u'formatted_value'] = '{:,.1f}'.format(
self._get_decimal_value(modified_value))
- r[u'formatted_value'] += ' ' + _('million')
+ #r[u'formatted_value'] += ' ' + _('million')
def _get_top_line_items(self, context, datastore_resource_id):
modified_context = dict(context)
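
To make the effect of the change concrete, here is a small stand-alone sketch of the before/after formatting. It reuses the controller's rounding logic; the `%` and `million` labels are assumed to be rendered separately (and smaller) by the page template, which is not part of this patch.

```python
# Before/after sketch of the top-line value formatting change.
import decimal


def get_decimal_value(value):
    # mirrors CrisisController._get_decimal_value
    return decimal.Decimal(str(value)).quantize(
        decimal.Decimal('.1'), rounding=decimal.ROUND_HALF_UP)


ratio = 0.1234 * 100.0  # 'ratio' records are scaled to a percentage first

print('{:,.1f}%'.format(get_decimal_value(ratio)))  # before the patch: 12.3%
print('{:,.1f}'.format(get_decimal_value(ratio)))   # after the patch:  12.3
```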
|
{"golden_diff": "diff --git a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py\n--- a/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py\n+++ b/ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py\n@@ -94,12 +94,12 @@\n r[u'formatted_value'] = '{:,}'.format(int_value)\n else:\n if r[u'units'] == 'ratio':\n- r[u'formatted_value'] = '{:,.1f}%'.format(\n+ r[u'formatted_value'] = '{:,.1f}'.format(\n self._get_decimal_value(modified_value))\n elif r[u'units'] == 'million':\n- r[u'formatted_value'] = '{:,.1f} '.format(\n+ r[u'formatted_value'] = '{:,.1f}'.format(\n self._get_decimal_value(modified_value))\n- r[u'formatted_value'] += ' ' + _('million')\n+ #r[u'formatted_value'] += ' ' + _('million')\n \n def _get_top_line_items(self, context, datastore_resource_id):\n modified_context = dict(context)\n", "issue": "Ebola Page: make millions and % smaller\nAbout half the current height looks right.\n\n\n\n", "before_files": [{"content": "'''\nCreated on Nov 3, 2014\n\n@author: alexandru-m-g\n'''\n\nimport logging\nimport datetime as dt\nimport decimal\n\nimport pylons.config as config\n\nimport ckan.lib.base as base\nimport ckan.logic as logic\nimport ckan.model as model\nimport ckan.common as common\nimport ckan.lib.helpers as h\n\nrender = base.render\nget_action = logic.get_action\nc = common.c\nrequest = common.request\n_ = common._\n\nDecimal = decimal.Decimal\n\nlog = logging.getLogger(__name__)\n\n\nclass CrisisController(base.BaseController):\n\n def show(self):\n\n context = {'model': model, 'session': model.Session,\n 'user': c.user or c.author, 'for_view': True,\n 'auth_user_obj': c.userobj}\n\n datastore_resource_id = self._get_datastore_resource_id(\n context, config.get('hdx.crisis.ebola_dataset', None), config.get('hdx.crisis.ebola_resource_title', None))\n if datastore_resource_id:\n c.top_line_items = self._get_top_line_items(\n context, datastore_resource_id)\n\n limit = 25\n c.q = u'ebola'\n\n page = int(request.params.get('page', 1))\n data_dict = {'sort': u'metadata_modified desc',\n 'fq': '+dataset_type:dataset',\n 'rows': limit,\n 'q': c.q,\n 'start': (page - 1) * limit\n }\n query = get_action(\"package_search\")(context, data_dict)\n\n def pager_url(q=None, page=None):\n return h.url_for('show_crisis', page=page)\n\n c.page = h.Page(\n collection=query['results'],\n page=page,\n url=pager_url,\n item_count=query['count'],\n items_per_page=limit\n )\n c.items = query['results']\n c.item_count = query['count']\n\n c.other_links = {}\n c.other_links['show_more'] = h.url_for(\n \"search\", **{'q': u'ebola', 'sort': u'metadata_modified desc',\n 'ext_indicator': '0'})\n\n return render('crisis/crisis.html')\n\n def _get_decimal_value(self, value):\n decimal_value = Decimal(str(value)).quantize(\n Decimal('.1'), rounding=decimal.ROUND_HALF_UP)\n return decimal_value\n\n def _format_results(self, result):\n for r in result['records']:\n d = dt.datetime.strptime(r[u'latest_date'], '%Y-%m-%dT%H:%M:%S')\n r[u'latest_date'] = dt.datetime.strftime(d, '%b %d, %Y')\n\n modified_value = r[u'value']\n if r[u'units'] == 'ratio':\n modified_value *= 100.0\n elif r[u'units'] == 'million':\n modified_value /= 1000000.0\n\n int_value = int(modified_value)\n if int_value == modified_value:\n r[u'formatted_value'] = '{:,}'.format(int_value)\n else:\n if r[u'units'] == 'ratio':\n r[u'formatted_value'] = '{:,.1f}%'.format(\n self._get_decimal_value(modified_value))\n 
elif r[u'units'] == 'million':\n r[u'formatted_value'] = '{:,.1f} '.format(\n self._get_decimal_value(modified_value))\n r[u'formatted_value'] += ' ' + _('million')\n\n def _get_top_line_items(self, context, datastore_resource_id):\n modified_context = dict(context)\n modified_context['ignore_auth'] = True\n result = get_action('datastore_search')(\n modified_context, {'resource_id': datastore_resource_id})\n if 'records' in result:\n self._format_results(result)\n return result['records']\n return []\n\n def _get_datastore_resource_id(self, context, dataset_id, resource_name):\n try:\n modified_context = dict(context)\n modified_context['ignore_auth'] = True\n dataset = get_action('package_show')(\n modified_context, {'id': dataset_id})\n\n if 'resources' in dataset:\n for r in dataset['resources']:\n if 'datastore_active' in r and r['datastore_active'] \\\n and r['name'] == resource_name:\n return r['id']\n return None\n except:\n log.warning('No dataset with id ' + dataset_id)\n return None\n", "path": "ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py"}], "after_files": [{"content": "'''\nCreated on Nov 3, 2014\n\n@author: alexandru-m-g\n'''\n\nimport logging\nimport datetime as dt\nimport decimal\n\nimport pylons.config as config\n\nimport ckan.lib.base as base\nimport ckan.logic as logic\nimport ckan.model as model\nimport ckan.common as common\nimport ckan.lib.helpers as h\n\nrender = base.render\nget_action = logic.get_action\nc = common.c\nrequest = common.request\n_ = common._\n\nDecimal = decimal.Decimal\n\nlog = logging.getLogger(__name__)\n\n\nclass CrisisController(base.BaseController):\n\n def show(self):\n\n context = {'model': model, 'session': model.Session,\n 'user': c.user or c.author, 'for_view': True,\n 'auth_user_obj': c.userobj}\n\n datastore_resource_id = self._get_datastore_resource_id(\n context, config.get('hdx.crisis.ebola_dataset', None), config.get('hdx.crisis.ebola_resource_title', None))\n if datastore_resource_id:\n c.top_line_items = self._get_top_line_items(\n context, datastore_resource_id)\n\n limit = 25\n c.q = u'ebola'\n\n page = int(request.params.get('page', 1))\n data_dict = {'sort': u'metadata_modified desc',\n 'fq': '+dataset_type:dataset',\n 'rows': limit,\n 'q': c.q,\n 'start': (page - 1) * limit\n }\n query = get_action(\"package_search\")(context, data_dict)\n\n def pager_url(q=None, page=None):\n return h.url_for('show_crisis', page=page)\n\n c.page = h.Page(\n collection=query['results'],\n page=page,\n url=pager_url,\n item_count=query['count'],\n items_per_page=limit\n )\n c.items = query['results']\n c.item_count = query['count']\n\n c.other_links = {}\n c.other_links['show_more'] = h.url_for(\n \"search\", **{'q': u'ebola', 'sort': u'metadata_modified desc',\n 'ext_indicator': '0'})\n\n return render('crisis/crisis.html')\n\n def _get_decimal_value(self, value):\n decimal_value = Decimal(str(value)).quantize(\n Decimal('.1'), rounding=decimal.ROUND_HALF_UP)\n return decimal_value\n\n def _format_results(self, result):\n for r in result['records']:\n d = dt.datetime.strptime(r[u'latest_date'], '%Y-%m-%dT%H:%M:%S')\n r[u'latest_date'] = dt.datetime.strftime(d, '%b %d, %Y')\n\n modified_value = r[u'value']\n if r[u'units'] == 'ratio':\n modified_value *= 100.0\n elif r[u'units'] == 'million':\n modified_value /= 1000000.0\n\n int_value = int(modified_value)\n if int_value == modified_value:\n r[u'formatted_value'] = '{:,}'.format(int_value)\n else:\n if r[u'units'] == 'ratio':\n r[u'formatted_value'] = 
'{:,.1f}'.format(\n self._get_decimal_value(modified_value))\n elif r[u'units'] == 'million':\n r[u'formatted_value'] = '{:,.1f}'.format(\n self._get_decimal_value(modified_value))\n #r[u'formatted_value'] += ' ' + _('million')\n\n def _get_top_line_items(self, context, datastore_resource_id):\n modified_context = dict(context)\n modified_context['ignore_auth'] = True\n result = get_action('datastore_search')(\n modified_context, {'resource_id': datastore_resource_id})\n if 'records' in result:\n self._format_results(result)\n return result['records']\n return []\n\n def _get_datastore_resource_id(self, context, dataset_id, resource_name):\n try:\n modified_context = dict(context)\n modified_context['ignore_auth'] = True\n dataset = get_action('package_show')(\n modified_context, {'id': dataset_id})\n\n if 'resources' in dataset:\n for r in dataset['resources']:\n if 'datastore_active' in r and r['datastore_active'] \\\n and r['name'] == resource_name:\n return r['id']\n return None\n except:\n log.warning('No dataset with id ' + dataset_id)\n return None\n", "path": "ckanext-hdx_crisis/ckanext/hdx_crisis/controllers/crisis_controller.py"}]}
| 1,634 | 306 |
gh_patches_debug_30258
|
rasdani/github-patches
|
git_diff
|
StackStorm__st2-2354
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
st2 key set in the CLI doesn't allow one to set a TTL.
Reported by: psychopenguin on Slack
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `st2client/st2client/commands/keyvalue.py`
Content:
```
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import os
17 import json
18 import logging
19 from os.path import join as pjoin
20
21 from st2client.commands import resource
22 from st2client.commands.noop import NoopCommand
23 from st2client.commands.resource import add_auth_token_to_kwargs_from_cli
24 from st2client.formatters import table
25 from st2client.models.keyvalue import KeyValuePair
26 from st2client.utils.date import format_isodate
27
28 LOG = logging.getLogger(__name__)
29
30
31 class KeyValuePairBranch(resource.ResourceBranch):
32
33 def __init__(self, description, app, subparsers, parent_parser=None):
34 super(KeyValuePairBranch, self).__init__(
35 KeyValuePair, description, app, subparsers,
36 parent_parser=parent_parser,
37 commands={
38 'list': KeyValuePairListCommand,
39 'get': KeyValuePairGetCommand,
40 'delete': KeyValuePairDeleteCommand,
41 'create': NoopCommand,
42 'update': NoopCommand
43 })
44
45 # Registers extended commands
46 self.commands['set'] = KeyValuePairSetCommand(self.resource, self.app,
47 self.subparsers)
48 self.commands['load'] = KeyValuePairLoadCommand(
49 self.resource, self.app, self.subparsers)
50 self.commands['delete_by_prefix'] = KeyValuePairDeleteByPrefixCommand(
51 self.resource, self.app, self.subparsers)
52
53 # Remove unsupported commands
54 # TODO: Refactor parent class and make it nicer
55 del self.commands['create']
56 del self.commands['update']
57
58
59 class KeyValuePairListCommand(resource.ResourceListCommand):
60 display_attributes = ['name', 'value', 'expire_timestamp']
61 attribute_transform_functions = {
62 'expire_timestamp': format_isodate,
63 }
64
65 def __init__(self, *args, **kwargs):
66 super(KeyValuePairListCommand, self).__init__(*args, **kwargs)
67
68 # Filter options
69 self.parser.add_argument('--prefix', help=('Only return values which name starts with the '
70 ' provided prefix.'))
71
72 def run_and_print(self, args, **kwargs):
73 if args.prefix:
74 kwargs['prefix'] = args.prefix
75
76 instances = self.run(args, **kwargs)
77 self.print_output(reversed(instances), table.MultiColumnTable,
78 attributes=args.attr, widths=args.width,
79 json=args.json,
80 attribute_transform_functions=self.attribute_transform_functions)
81
82
83 class KeyValuePairGetCommand(resource.ResourceGetCommand):
84 pk_argument_name = 'name'
85 display_attributes = ['name', 'value']
86
87
88 class KeyValuePairSetCommand(resource.ResourceCommand):
89 display_attributes = ['name', 'value']
90
91 def __init__(self, resource, *args, **kwargs):
92 super(KeyValuePairSetCommand, self).__init__(
93 resource, 'set',
94 'Set an existing %s.' % resource.get_display_name().lower(),
95 *args, **kwargs
96 )
97
98 self.parser.add_argument('name',
99 metavar='name',
100 help='Name of the key value pair.')
101 self.parser.add_argument('value', help='Value paired with the key.')
102
103 @add_auth_token_to_kwargs_from_cli
104 def run(self, args, **kwargs):
105 instance = KeyValuePair()
106 instance.id = args.name # TODO: refactor and get rid of id
107 instance.name = args.name
108 instance.value = args.value
109 return self.manager.update(instance, **kwargs)
110
111 def run_and_print(self, args, **kwargs):
112 instance = self.run(args, **kwargs)
113 self.print_output(instance, table.PropertyValueTable,
114 attributes=self.display_attributes, json=args.json)
115
116
117 class KeyValuePairDeleteCommand(resource.ResourceDeleteCommand):
118 pk_argument_name = 'name'
119
120 @add_auth_token_to_kwargs_from_cli
121 def run(self, args, **kwargs):
122 resource_id = getattr(args, self.pk_argument_name, None)
123 instance = self.get_resource(resource_id, **kwargs)
124
125 if not instance:
126 raise resource.ResourceNotFoundError('KeyValuePair with id "%s" not found', resource_id)
127
128 instance.id = resource_id # TODO: refactor and get rid of id
129 self.manager.delete(instance, **kwargs)
130
131
132 class KeyValuePairDeleteByPrefixCommand(resource.ResourceCommand):
133 """
134 Commands which delete all the key value pairs which match the provided
135 prefix.
136 """
137 def __init__(self, resource, *args, **kwargs):
138 super(KeyValuePairDeleteByPrefixCommand, self).__init__(resource, 'delete_by_prefix',
139 'Delete KeyValue pairs which match the provided prefix', *args, **kwargs)
140
141 self.parser.add_argument('-p', '--prefix', required=True,
142 help='Name prefix (e.g. twitter.TwitterSensor:)')
143
144 @add_auth_token_to_kwargs_from_cli
145 def run(self, args, **kwargs):
146 prefix = args.prefix
147 key_pairs = self.manager.get_all(prefix=prefix)
148
149 to_delete = []
150 for key_pair in key_pairs:
151 key_pair.id = key_pair.name
152 to_delete.append(key_pair)
153
154 deleted = []
155 for key_pair in to_delete:
156 self.manager.delete(instance=key_pair, **kwargs)
157 deleted.append(key_pair)
158
159 return deleted
160
161 def run_and_print(self, args, **kwargs):
162 # TODO: Need to use args, instead of kwargs (args=) because of bad API
163 # FIX ME
164 deleted = self.run(args, **kwargs)
165 key_ids = [key_pair.id for key_pair in deleted]
166
167 print('Deleted %s keys' % (len(deleted)))
168 print('Deleted key ids: %s' % (', '.join(key_ids)))
169
170
171 class KeyValuePairLoadCommand(resource.ResourceCommand):
172 pk_argument_name = 'name'
173 display_attributes = ['name', 'value']
174
175 def __init__(self, resource, *args, **kwargs):
176 help_text = ('Load a list of %s from file.' %
177 resource.get_plural_display_name().lower())
178 super(KeyValuePairLoadCommand, self).__init__(resource, 'load',
179 help_text, *args, **kwargs)
180
181 self.parser.add_argument(
182 'file', help=('JSON file containing the %s to create.'
183 % resource.get_plural_display_name().lower()))
184
185 @add_auth_token_to_kwargs_from_cli
186 def run(self, args, **kwargs):
187 file_path = os.path.normpath(pjoin(os.getcwd(), args.file))
188
189 if not os.path.exists(args.file):
190 raise ValueError('File "%s" doesn\'t exist' % (file_path))
191
192 if not os.path.isfile(args.file):
193 raise ValueError('"%s" is not a file' % (file_path))
194
195 with open(file_path, 'r') as f:
196 kvps = json.loads(f.read())
197
198 instances = []
199 for item in kvps:
200 name = item['name']
201 value = item['value']
202
203 instance = KeyValuePair()
204 instance.id = name # TODO: refactor and get rid of id
205 instance.name = name
206 instance.value = value
207
208 self.manager.update(instance, **kwargs)
209 instances.append(instance)
210 return instances
211
212 def run_and_print(self, args, **kwargs):
213 instances = self.run(args, **kwargs)
214 self.print_output(instances, table.MultiColumnTable,
215 attributes=['id', 'name', 'value'], json=args.json)
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/st2client/st2client/commands/keyvalue.py b/st2client/st2client/commands/keyvalue.py
--- a/st2client/st2client/commands/keyvalue.py
+++ b/st2client/st2client/commands/keyvalue.py
@@ -82,11 +82,11 @@
class KeyValuePairGetCommand(resource.ResourceGetCommand):
pk_argument_name = 'name'
- display_attributes = ['name', 'value']
+ display_attributes = ['name', 'value', 'expire_timestamp']
class KeyValuePairSetCommand(resource.ResourceCommand):
- display_attributes = ['name', 'value']
+ display_attributes = ['name', 'value', 'expire_timestamp']
def __init__(self, resource, *args, **kwargs):
super(KeyValuePairSetCommand, self).__init__(
@@ -99,6 +99,8 @@
metavar='name',
help='Name of the key value pair.')
self.parser.add_argument('value', help='Value paired with the key.')
+ self.parser.add_argument('-l', '--ttl', dest='ttl', type=int, default=None,
+ help='TTL (in seconds) for this value.')
@add_auth_token_to_kwargs_from_cli
def run(self, args, **kwargs):
@@ -106,6 +108,10 @@
instance.id = args.name # TODO: refactor and get rid of id
instance.name = args.name
instance.value = args.value
+
+ if args.ttl:
+ instance.ttl = args.ttl
+
return self.manager.update(instance, **kwargs)
def run_and_print(self, args, **kwargs):
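
With the patch applied, the CLI gains a `--ttl` flag (for example `st2 key set cfg.api_token s3cret --ttl 300`). The snippet below sketches the same behaviour through the Python client; the endpoint and key name are placeholders, and the exact client configuration will differ per deployment.

```python
# Sketch: set a key with a TTL via st2client, mirroring what the patched
# `st2 key set ... --ttl` command sends to the API. Values are placeholders.
from st2client.client import Client
from st2client.models.keyvalue import KeyValuePair

client = Client(base_url='http://localhost')  # assumed local StackStorm API

instance = KeyValuePair()
instance.id = 'cfg.api_token'    # the CLI reuses the key name as the id
instance.name = 'cfg.api_token'
instance.value = 's3cret'
instance.ttl = 300               # expire after 300 seconds
client.keys.update(instance)
```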
|
{"golden_diff": "diff --git a/st2client/st2client/commands/keyvalue.py b/st2client/st2client/commands/keyvalue.py\n--- a/st2client/st2client/commands/keyvalue.py\n+++ b/st2client/st2client/commands/keyvalue.py\n@@ -82,11 +82,11 @@\n \n class KeyValuePairGetCommand(resource.ResourceGetCommand):\n pk_argument_name = 'name'\n- display_attributes = ['name', 'value']\n+ display_attributes = ['name', 'value', 'expire_timestamp']\n \n \n class KeyValuePairSetCommand(resource.ResourceCommand):\n- display_attributes = ['name', 'value']\n+ display_attributes = ['name', 'value', 'expire_timestamp']\n \n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairSetCommand, self).__init__(\n@@ -99,6 +99,8 @@\n metavar='name',\n help='Name of the key value pair.')\n self.parser.add_argument('value', help='Value paired with the key.')\n+ self.parser.add_argument('-l', '--ttl', dest='ttl', type=int, default=None,\n+ help='TTL (in seconds) for this value.')\n \n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n@@ -106,6 +108,10 @@\n instance.id = args.name # TODO: refactor and get rid of id\n instance.name = args.name\n instance.value = args.value\n+\n+ if args.ttl:\n+ instance.ttl = args.ttl\n+\n return self.manager.update(instance, **kwargs)\n \n def run_and_print(self, args, **kwargs):\n", "issue": "st2 key set in CLI doesn't allow one to set a TTL. \nReported by: psychopenguin on slack\n\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport json\nimport logging\nfrom os.path import join as pjoin\n\nfrom st2client.commands import resource\nfrom st2client.commands.noop import NoopCommand\nfrom st2client.commands.resource import add_auth_token_to_kwargs_from_cli\nfrom st2client.formatters import table\nfrom st2client.models.keyvalue import KeyValuePair\nfrom st2client.utils.date import format_isodate\n\nLOG = logging.getLogger(__name__)\n\n\nclass KeyValuePairBranch(resource.ResourceBranch):\n\n def __init__(self, description, app, subparsers, parent_parser=None):\n super(KeyValuePairBranch, self).__init__(\n KeyValuePair, description, app, subparsers,\n parent_parser=parent_parser,\n commands={\n 'list': KeyValuePairListCommand,\n 'get': KeyValuePairGetCommand,\n 'delete': KeyValuePairDeleteCommand,\n 'create': NoopCommand,\n 'update': NoopCommand\n })\n\n # Registers extended commands\n self.commands['set'] = KeyValuePairSetCommand(self.resource, self.app,\n self.subparsers)\n self.commands['load'] = KeyValuePairLoadCommand(\n self.resource, self.app, self.subparsers)\n self.commands['delete_by_prefix'] = KeyValuePairDeleteByPrefixCommand(\n self.resource, self.app, self.subparsers)\n\n # Remove unsupported commands\n # TODO: Refactor parent class and make it nicer\n del self.commands['create']\n del self.commands['update']\n\n\nclass KeyValuePairListCommand(resource.ResourceListCommand):\n display_attributes = ['name', 'value', 'expire_timestamp']\n attribute_transform_functions = {\n 'expire_timestamp': format_isodate,\n }\n\n def __init__(self, *args, **kwargs):\n super(KeyValuePairListCommand, self).__init__(*args, **kwargs)\n\n # Filter options\n self.parser.add_argument('--prefix', help=('Only return values which name starts with the '\n ' provided prefix.'))\n\n def run_and_print(self, args, **kwargs):\n if args.prefix:\n kwargs['prefix'] = args.prefix\n\n instances = self.run(args, **kwargs)\n self.print_output(reversed(instances), table.MultiColumnTable,\n attributes=args.attr, widths=args.width,\n json=args.json,\n attribute_transform_functions=self.attribute_transform_functions)\n\n\nclass KeyValuePairGetCommand(resource.ResourceGetCommand):\n pk_argument_name = 'name'\n display_attributes = ['name', 'value']\n\n\nclass KeyValuePairSetCommand(resource.ResourceCommand):\n display_attributes = ['name', 'value']\n\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairSetCommand, self).__init__(\n resource, 'set',\n 'Set an existing %s.' 
% resource.get_display_name().lower(),\n *args, **kwargs\n )\n\n self.parser.add_argument('name',\n metavar='name',\n help='Name of the key value pair.')\n self.parser.add_argument('value', help='Value paired with the key.')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n instance = KeyValuePair()\n instance.id = args.name # TODO: refactor and get rid of id\n instance.name = args.name\n instance.value = args.value\n return self.manager.update(instance, **kwargs)\n\n def run_and_print(self, args, **kwargs):\n instance = self.run(args, **kwargs)\n self.print_output(instance, table.PropertyValueTable,\n attributes=self.display_attributes, json=args.json)\n\n\nclass KeyValuePairDeleteCommand(resource.ResourceDeleteCommand):\n pk_argument_name = 'name'\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n resource_id = getattr(args, self.pk_argument_name, None)\n instance = self.get_resource(resource_id, **kwargs)\n\n if not instance:\n raise resource.ResourceNotFoundError('KeyValuePair with id \"%s\" not found', resource_id)\n\n instance.id = resource_id # TODO: refactor and get rid of id\n self.manager.delete(instance, **kwargs)\n\n\nclass KeyValuePairDeleteByPrefixCommand(resource.ResourceCommand):\n \"\"\"\n Commands which delete all the key value pairs which match the provided\n prefix.\n \"\"\"\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairDeleteByPrefixCommand, self).__init__(resource, 'delete_by_prefix',\n 'Delete KeyValue pairs which match the provided prefix', *args, **kwargs)\n\n self.parser.add_argument('-p', '--prefix', required=True,\n help='Name prefix (e.g. twitter.TwitterSensor:)')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n prefix = args.prefix\n key_pairs = self.manager.get_all(prefix=prefix)\n\n to_delete = []\n for key_pair in key_pairs:\n key_pair.id = key_pair.name\n to_delete.append(key_pair)\n\n deleted = []\n for key_pair in to_delete:\n self.manager.delete(instance=key_pair, **kwargs)\n deleted.append(key_pair)\n\n return deleted\n\n def run_and_print(self, args, **kwargs):\n # TODO: Need to use args, instead of kwargs (args=) because of bad API\n # FIX ME\n deleted = self.run(args, **kwargs)\n key_ids = [key_pair.id for key_pair in deleted]\n\n print('Deleted %s keys' % (len(deleted)))\n print('Deleted key ids: %s' % (', '.join(key_ids)))\n\n\nclass KeyValuePairLoadCommand(resource.ResourceCommand):\n pk_argument_name = 'name'\n display_attributes = ['name', 'value']\n\n def __init__(self, resource, *args, **kwargs):\n help_text = ('Load a list of %s from file.' 
%\n resource.get_plural_display_name().lower())\n super(KeyValuePairLoadCommand, self).__init__(resource, 'load',\n help_text, *args, **kwargs)\n\n self.parser.add_argument(\n 'file', help=('JSON file containing the %s to create.'\n % resource.get_plural_display_name().lower()))\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n file_path = os.path.normpath(pjoin(os.getcwd(), args.file))\n\n if not os.path.exists(args.file):\n raise ValueError('File \"%s\" doesn\\'t exist' % (file_path))\n\n if not os.path.isfile(args.file):\n raise ValueError('\"%s\" is not a file' % (file_path))\n\n with open(file_path, 'r') as f:\n kvps = json.loads(f.read())\n\n instances = []\n for item in kvps:\n name = item['name']\n value = item['value']\n\n instance = KeyValuePair()\n instance.id = name # TODO: refactor and get rid of id\n instance.name = name\n instance.value = value\n\n self.manager.update(instance, **kwargs)\n instances.append(instance)\n return instances\n\n def run_and_print(self, args, **kwargs):\n instances = self.run(args, **kwargs)\n self.print_output(instances, table.MultiColumnTable,\n attributes=['id', 'name', 'value'], json=args.json)\n", "path": "st2client/st2client/commands/keyvalue.py"}], "after_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport json\nimport logging\nfrom os.path import join as pjoin\n\nfrom st2client.commands import resource\nfrom st2client.commands.noop import NoopCommand\nfrom st2client.commands.resource import add_auth_token_to_kwargs_from_cli\nfrom st2client.formatters import table\nfrom st2client.models.keyvalue import KeyValuePair\nfrom st2client.utils.date import format_isodate\n\nLOG = logging.getLogger(__name__)\n\n\nclass KeyValuePairBranch(resource.ResourceBranch):\n\n def __init__(self, description, app, subparsers, parent_parser=None):\n super(KeyValuePairBranch, self).__init__(\n KeyValuePair, description, app, subparsers,\n parent_parser=parent_parser,\n commands={\n 'list': KeyValuePairListCommand,\n 'get': KeyValuePairGetCommand,\n 'delete': KeyValuePairDeleteCommand,\n 'create': NoopCommand,\n 'update': NoopCommand\n })\n\n # Registers extended commands\n self.commands['set'] = KeyValuePairSetCommand(self.resource, self.app,\n self.subparsers)\n self.commands['load'] = KeyValuePairLoadCommand(\n self.resource, self.app, self.subparsers)\n self.commands['delete_by_prefix'] = KeyValuePairDeleteByPrefixCommand(\n self.resource, self.app, self.subparsers)\n\n # Remove unsupported commands\n # TODO: Refactor parent class and make it nicer\n del self.commands['create']\n del self.commands['update']\n\n\nclass KeyValuePairListCommand(resource.ResourceListCommand):\n display_attributes = ['name', 'value', 'expire_timestamp']\n attribute_transform_functions = {\n 
'expire_timestamp': format_isodate,\n }\n\n def __init__(self, *args, **kwargs):\n super(KeyValuePairListCommand, self).__init__(*args, **kwargs)\n\n # Filter options\n self.parser.add_argument('--prefix', help=('Only return values which name starts with the '\n ' provided prefix.'))\n\n def run_and_print(self, args, **kwargs):\n if args.prefix:\n kwargs['prefix'] = args.prefix\n\n instances = self.run(args, **kwargs)\n self.print_output(reversed(instances), table.MultiColumnTable,\n attributes=args.attr, widths=args.width,\n json=args.json,\n attribute_transform_functions=self.attribute_transform_functions)\n\n\nclass KeyValuePairGetCommand(resource.ResourceGetCommand):\n pk_argument_name = 'name'\n display_attributes = ['name', 'value', 'expire_timestamp']\n\n\nclass KeyValuePairSetCommand(resource.ResourceCommand):\n display_attributes = ['name', 'value', 'expire_timestamp']\n\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairSetCommand, self).__init__(\n resource, 'set',\n 'Set an existing %s.' % resource.get_display_name().lower(),\n *args, **kwargs\n )\n\n self.parser.add_argument('name',\n metavar='name',\n help='Name of the key value pair.')\n self.parser.add_argument('value', help='Value paired with the key.')\n self.parser.add_argument('-l', '--ttl', dest='ttl', type=int, default=None,\n help='TTL (in seconds) for this value.')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n instance = KeyValuePair()\n instance.id = args.name # TODO: refactor and get rid of id\n instance.name = args.name\n instance.value = args.value\n\n if args.ttl:\n instance.ttl = args.ttl\n\n return self.manager.update(instance, **kwargs)\n\n def run_and_print(self, args, **kwargs):\n instance = self.run(args, **kwargs)\n self.print_output(instance, table.PropertyValueTable,\n attributes=self.display_attributes, json=args.json)\n\n\nclass KeyValuePairDeleteCommand(resource.ResourceDeleteCommand):\n pk_argument_name = 'name'\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n resource_id = getattr(args, self.pk_argument_name, None)\n instance = self.get_resource(resource_id, **kwargs)\n\n if not instance:\n raise resource.ResourceNotFoundError('KeyValuePair with id \"%s\" not found', resource_id)\n\n instance.id = resource_id # TODO: refactor and get rid of id\n self.manager.delete(instance, **kwargs)\n\n\nclass KeyValuePairDeleteByPrefixCommand(resource.ResourceCommand):\n \"\"\"\n Commands which delete all the key value pairs which match the provided\n prefix.\n \"\"\"\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairDeleteByPrefixCommand, self).__init__(resource, 'delete_by_prefix',\n 'Delete KeyValue pairs which match the provided prefix', *args, **kwargs)\n\n self.parser.add_argument('-p', '--prefix', required=True,\n help='Name prefix (e.g. 
twitter.TwitterSensor:)')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n prefix = args.prefix\n key_pairs = self.manager.get_all(prefix=prefix)\n\n to_delete = []\n for key_pair in key_pairs:\n key_pair.id = key_pair.name\n to_delete.append(key_pair)\n\n deleted = []\n for key_pair in to_delete:\n self.manager.delete(instance=key_pair, **kwargs)\n deleted.append(key_pair)\n\n return deleted\n\n def run_and_print(self, args, **kwargs):\n # TODO: Need to use args, instead of kwargs (args=) because of bad API\n # FIX ME\n deleted = self.run(args, **kwargs)\n key_ids = [key_pair.id for key_pair in deleted]\n\n print('Deleted %s keys' % (len(deleted)))\n print('Deleted key ids: %s' % (', '.join(key_ids)))\n\n\nclass KeyValuePairLoadCommand(resource.ResourceCommand):\n pk_argument_name = 'name'\n display_attributes = ['name', 'value']\n\n def __init__(self, resource, *args, **kwargs):\n help_text = ('Load a list of %s from file.' %\n resource.get_plural_display_name().lower())\n super(KeyValuePairLoadCommand, self).__init__(resource, 'load',\n help_text, *args, **kwargs)\n\n self.parser.add_argument(\n 'file', help=('JSON file containing the %s to create.'\n % resource.get_plural_display_name().lower()))\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n file_path = os.path.normpath(pjoin(os.getcwd(), args.file))\n\n if not os.path.exists(args.file):\n raise ValueError('File \"%s\" doesn\\'t exist' % (file_path))\n\n if not os.path.isfile(args.file):\n raise ValueError('\"%s\" is not a file' % (file_path))\n\n with open(file_path, 'r') as f:\n kvps = json.loads(f.read())\n\n instances = []\n for item in kvps:\n name = item['name']\n value = item['value']\n\n instance = KeyValuePair()\n instance.id = name # TODO: refactor and get rid of id\n instance.name = name\n instance.value = value\n\n self.manager.update(instance, **kwargs)\n instances.append(instance)\n return instances\n\n def run_and_print(self, args, **kwargs):\n instances = self.run(args, **kwargs)\n self.print_output(instances, table.MultiColumnTable,\n attributes=['id', 'name', 'value'], json=args.json)\n", "path": "st2client/st2client/commands/keyvalue.py"}]}
| 2,539 | 364 |
gh_patches_debug_26743
|
rasdani/github-patches
|
git_diff
|
sktime__sktime-533
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] plot_series inconsistent date/time index
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
**To Reproduce**
```python
import numpy as np

from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.utils.plotting import plot_series

y = load_airline()
y = y[:48]
fh = np.arange(1, 13)
y_train, y_test = temporal_train_test_split(y, test_size=len(fh))
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y.shape, y_train.shape[0], y_test.shape[0])
print(y.index)
```
> (48,) 36 12
> PeriodIndex(['1949-01', '1949-02', '1949-03', '1949-04', '1949-05', '1949-06',
> '1949-07', '1949-08', '1949-09', '1949-10', '1949-11', '1949-12',
> '1950-01', '1950-02', '1950-03', '1950-04', '1950-05', '1950-06',
> '1950-07', '1950-08', '1950-09', '1950-10', '1950-11', '1950-12',
> '1951-01', '1951-02', '1951-03', '1951-04', '1951-05', '1951-06',
> '1951-07', '1951-08', '1951-09', '1951-10', '1951-11', '1951-12',
> '1952-01', '1952-02', '1952-03', '1952-04', '1952-05', '1952-06',
> '1952-07', '1952-08', '1952-09', '1952-10', '1952-11', '1952-12'],
> dtype='period[M]', name='Period', freq='M')
>
**Expected behavior**
The X-axis should extend to the end of 1952. However, the X-axis on the plot does not reflect the indices correctly: it shows values only up to 1949.

**Versions**
System:
python: 3.6.12 |Anaconda, Inc.| (default, Sep 9 2020, 00:29:25) [MSC v.1916 64 bit (AMD64)]
executable: C:\Users\xxxx\AppData\Local\Continuum\anaconda3\envs\sktime\python.exe
machine: Windows-10-10.0.18362-SP0
Python dependencies:
pip: 20.3
setuptools: 49.6.0
sklearn: 0.23.2
numpy: 1.19.2
scipy: 1.5.2
Cython: 0.29.17
pandas: 1.1.3
matplotlib: 3.3.2
joblib: 0.17.0
numba: None
pmdarima: 1.7.1
tsfresh: None
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/utils/plotting/__init__.py`
Content:
```
1 #!/usr/bin/env python3 -u
2 # -*- coding: utf-8 -*-
3 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
4
5 __all__ = ["plot_series"]
6 __author__ = ["Markus Löning"]
7
8 import warnings
9
10 import numpy as np
11
12 from sktime.utils.check_imports import _check_soft_dependencies
13 from sktime.utils.validation.forecasting import check_y
14
15
16 def plot_series(*series, labels=None):
17 """Plot one or more time series
18
19 Parameters
20 ----------
21 series : pd.Series
22 One or more time series
23 labels : list, optional (default=None)
24 Names of series, will be displayed in figure legend
25
26 Returns
27 -------
28 fig : plt.Figure
29 ax : plt.Axis
30 """
31 _check_soft_dependencies("matplotlib", "seaborn")
32 import matplotlib.pyplot as plt
33 import seaborn as sns
34
35 n_series = len(series)
36 if labels is not None:
37 if n_series != len(labels):
38 raise ValueError(
39 "There must be one label for each time series, "
40 "but found inconsistent numbers of series and "
41 "labels."
42 )
43 legend = True
44 else:
45 labels = ["" for _ in range(n_series)]
46 legend = False
47
48 for y in series:
49 check_y(y)
50
51 # create combined index
52 index = series[0].index
53 for y in series[1:]:
54 # check types, note that isinstance() does not work here because index
55 # types inherit from each other, hence we check for type equality
56 if not type(index) is type(y.index): # noqa
57 raise TypeError("Found series with different index types.")
58 index = index.union(y.index)
59
60 # generate integer x-values
61 xs = [np.argwhere(index.isin(y.index)).ravel() for y in series]
62
63 # create figure
64 fig, ax = plt.subplots(1, figsize=plt.figaspect(0.25))
65 colors = sns.color_palette("colorblind", n_colors=n_series)
66
67 # plot series
68 for x, y, color, label in zip(xs, series, colors, labels):
69
70 # scatter if little data is available or index is not complete
71 if len(x) <= 3 or not np.array_equal(np.arange(x[0], x[-1] + 1), x):
72 plot_func = sns.scatterplot
73 else:
74 plot_func = sns.lineplot
75
76 plot_func(x=x, y=y, ax=ax, marker="o", label=label, color=color)
77
78 # set combined index as xticklabels, suppress matplotlib warning
79 with warnings.catch_warnings():
80 warnings.filterwarnings("ignore")
81 ax.set(xticklabels=index)
82
83 if legend:
84 ax.legend()
85
86 return fig, ax
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sktime/utils/plotting/__init__.py b/sktime/utils/plotting/__init__.py
--- a/sktime/utils/plotting/__init__.py
+++ b/sktime/utils/plotting/__init__.py
@@ -5,8 +5,6 @@
__all__ = ["plot_series"]
__author__ = ["Markus Löning"]
-import warnings
-
import numpy as np
from sktime.utils.check_imports import _check_soft_dependencies
@@ -30,6 +28,8 @@
"""
_check_soft_dependencies("matplotlib", "seaborn")
import matplotlib.pyplot as plt
+ from matplotlib.ticker import FuncFormatter, MaxNLocator
+ from matplotlib.cbook import flatten
import seaborn as sns
n_series = len(series)
@@ -75,10 +75,19 @@
plot_func(x=x, y=y, ax=ax, marker="o", label=label, color=color)
- # set combined index as xticklabels, suppress matplotlib warning
- with warnings.catch_warnings():
- warnings.filterwarnings("ignore")
- ax.set(xticklabels=index)
+ # combine data points for all series
+ xs_flat = list(flatten(xs))
+
+ # set x label of data point to the matching index
+ def format_fn(tick_val, tick_pos):
+ if int(tick_val) in xs_flat:
+ return index[int(tick_val)]
+ else:
+ return ""
+
+ # dynamically set x label ticks and spacing from index labels
+ ax.xaxis.set_major_formatter(FuncFormatter(format_fn))
+ ax.xaxis.set_major_locator(MaxNLocator(integer=True))
if legend:
ax.legend()
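
A minimal standalone sketch of the tick-handling technique the patch above relies on, assuming a monthly `PeriodIndex` similar to the airline data; the toy index, random values and figure size are illustrative and not taken from sktime, and the formatter uses a simple bounds check instead of the patch's lookup against the plotted x-positions.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter, MaxNLocator

# Toy stand-in for the combined index built from y_train and y_test
index = pd.period_range("1949-01", periods=48, freq="M")
values = np.random.RandomState(0).randn(len(index)).cumsum()

fig, ax = plt.subplots(figsize=(12, 3))
ax.plot(np.arange(len(index)), values, marker="o")

def format_fn(tick_val, tick_pos):
    # Map an integer tick position back to the matching period label
    i = int(tick_val)
    return str(index[i]) if 0 <= i < len(index) else ""

ax.xaxis.set_major_formatter(FuncFormatter(format_fn))
ax.xaxis.set_major_locator(MaxNLocator(integer=True))  # ticks only at integer positions
plt.show()
```

Driving the labels through a locator/formatter pair, rather than overwriting `xticklabels` wholesale, keeps the labels aligned with whichever tick positions matplotlib chooses, which is why the axis now spans the full 1949-1952 range.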
|
{"golden_diff": "diff --git a/sktime/utils/plotting/__init__.py b/sktime/utils/plotting/__init__.py\n--- a/sktime/utils/plotting/__init__.py\n+++ b/sktime/utils/plotting/__init__.py\n@@ -5,8 +5,6 @@\n __all__ = [\"plot_series\"]\n __author__ = [\"Markus L\u00f6ning\"]\n \n-import warnings\n-\n import numpy as np\n \n from sktime.utils.check_imports import _check_soft_dependencies\n@@ -30,6 +28,8 @@\n \"\"\"\n _check_soft_dependencies(\"matplotlib\", \"seaborn\")\n import matplotlib.pyplot as plt\n+ from matplotlib.ticker import FuncFormatter, MaxNLocator\n+ from matplotlib.cbook import flatten\n import seaborn as sns\n \n n_series = len(series)\n@@ -75,10 +75,19 @@\n \n plot_func(x=x, y=y, ax=ax, marker=\"o\", label=label, color=color)\n \n- # set combined index as xticklabels, suppress matplotlib warning\n- with warnings.catch_warnings():\n- warnings.filterwarnings(\"ignore\")\n- ax.set(xticklabels=index)\n+ # combine data points for all series\n+ xs_flat = list(flatten(xs))\n+\n+ # set x label of data point to the matching index\n+ def format_fn(tick_val, tick_pos):\n+ if int(tick_val) in xs_flat:\n+ return index[int(tick_val)]\n+ else:\n+ return \"\"\n+\n+ # dynamically set x label ticks and spacing from index labels\n+ ax.xaxis.set_major_formatter(FuncFormatter(format_fn))\n+ ax.xaxis.set_major_locator(MaxNLocator(integer=True))\n \n if legend:\n ax.legend()\n", "issue": "[BUG] plot_series inconsistent date/time index \n**Describe the bug**\r\n<!--\r\nA clear and concise description of what the bug is.\r\n-->\r\n\r\n**To Reproduce**\r\n\r\n```python\r\ny = load_airline()\r\ny = y[:48]\r\nfh=np.arange(1, 13)\r\n\r\ny_train, y_test = temporal_train_test_split(y, test_size=len(fh))\r\nplot_series(y_train, y_test, labels=[\"y_train\", \"y_test\"]);\r\nprint(y.shape, y_train.shape[0], y_test.shape[0])\r\nprint(y.index)\r\n```\r\n\r\n> (48,) 36 12\r\n> PeriodIndex(['1949-01', '1949-02', '1949-03', '1949-04', '1949-05', '1949-06',\r\n> '1949-07', '1949-08', '1949-09', '1949-10', '1949-11', '1949-12',\r\n> '1950-01', '1950-02', '1950-03', '1950-04', '1950-05', '1950-06',\r\n> '1950-07', '1950-08', '1950-09', '1950-10', '1950-11', '1950-12',\r\n> '1951-01', '1951-02', '1951-03', '1951-04', '1951-05', '1951-06',\r\n> '1951-07', '1951-08', '1951-09', '1951-10', '1951-11', '1951-12',\r\n> '1952-01', '1952-02', '1952-03', '1952-04', '1952-05', '1952-06',\r\n> '1952-07', '1952-08', '1952-09', '1952-10', '1952-11', '1952-12'],\r\n> dtype='period[M]', name='Period', freq='M')\r\n> \r\n\r\n**Expected behavior**\r\nThe X-axis should go till end of 1952. The X-axis on the plot does not reflect the indices correctly. 
It should go till 1952, but shows values only till 1949\r\n\r\n\r\n\r\n**Versions**\r\nSystem:\r\n python: 3.6.12 |Anaconda, Inc.| (default, Sep 9 2020, 00:29:25) [MSC v.1916 64 bit (AMD64)]\r\nexecutable: C:\\Users\\xxxx\\AppData\\Local\\Continuum\\anaconda3\\envs\\sktime\\python.exe\r\n machine: Windows-10-10.0.18362-SP0\r\n\r\nPython dependencies:\r\n pip: 20.3\r\n setuptools: 49.6.0\r\n sklearn: 0.23.2\r\n numpy: 1.19.2\r\n scipy: 1.5.2\r\n Cython: 0.29.17\r\n pandas: 1.1.3\r\n matplotlib: 3.3.2\r\n joblib: 0.17.0\r\n numba: None\r\n pmdarima: 1.7.1\r\n tsfresh: None\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3 -u\n# -*- coding: utf-8 -*-\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\n__all__ = [\"plot_series\"]\n__author__ = [\"Markus L\u00f6ning\"]\n\nimport warnings\n\nimport numpy as np\n\nfrom sktime.utils.check_imports import _check_soft_dependencies\nfrom sktime.utils.validation.forecasting import check_y\n\n\ndef plot_series(*series, labels=None):\n \"\"\"Plot one or more time series\n\n Parameters\n ----------\n series : pd.Series\n One or more time series\n labels : list, optional (default=None)\n Names of series, will be displayed in figure legend\n\n Returns\n -------\n fig : plt.Figure\n ax : plt.Axis\n \"\"\"\n _check_soft_dependencies(\"matplotlib\", \"seaborn\")\n import matplotlib.pyplot as plt\n import seaborn as sns\n\n n_series = len(series)\n if labels is not None:\n if n_series != len(labels):\n raise ValueError(\n \"There must be one label for each time series, \"\n \"but found inconsistent numbers of series and \"\n \"labels.\"\n )\n legend = True\n else:\n labels = [\"\" for _ in range(n_series)]\n legend = False\n\n for y in series:\n check_y(y)\n\n # create combined index\n index = series[0].index\n for y in series[1:]:\n # check types, note that isinstance() does not work here because index\n # types inherit from each other, hence we check for type equality\n if not type(index) is type(y.index): # noqa\n raise TypeError(\"Found series with different index types.\")\n index = index.union(y.index)\n\n # generate integer x-values\n xs = [np.argwhere(index.isin(y.index)).ravel() for y in series]\n\n # create figure\n fig, ax = plt.subplots(1, figsize=plt.figaspect(0.25))\n colors = sns.color_palette(\"colorblind\", n_colors=n_series)\n\n # plot series\n for x, y, color, label in zip(xs, series, colors, labels):\n\n # scatter if little data is available or index is not complete\n if len(x) <= 3 or not np.array_equal(np.arange(x[0], x[-1] + 1), x):\n plot_func = sns.scatterplot\n else:\n plot_func = sns.lineplot\n\n plot_func(x=x, y=y, ax=ax, marker=\"o\", label=label, color=color)\n\n # set combined index as xticklabels, suppress matplotlib warning\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\")\n ax.set(xticklabels=index)\n\n if legend:\n ax.legend()\n\n return fig, ax\n", "path": "sktime/utils/plotting/__init__.py"}], "after_files": [{"content": "#!/usr/bin/env python3 -u\n# -*- coding: utf-8 -*-\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\n__all__ = [\"plot_series\"]\n__author__ = [\"Markus L\u00f6ning\"]\n\nimport numpy as np\n\nfrom sktime.utils.check_imports import _check_soft_dependencies\nfrom sktime.utils.validation.forecasting import check_y\n\n\ndef plot_series(*series, labels=None):\n \"\"\"Plot one or more time series\n\n Parameters\n ----------\n series : pd.Series\n One or more time series\n labels : list, optional (default=None)\n Names of 
series, will be displayed in figure legend\n\n Returns\n -------\n fig : plt.Figure\n ax : plt.Axis\n \"\"\"\n _check_soft_dependencies(\"matplotlib\", \"seaborn\")\n import matplotlib.pyplot as plt\n from matplotlib.ticker import FuncFormatter, MaxNLocator\n from matplotlib.cbook import flatten\n import seaborn as sns\n\n n_series = len(series)\n if labels is not None:\n if n_series != len(labels):\n raise ValueError(\n \"There must be one label for each time series, \"\n \"but found inconsistent numbers of series and \"\n \"labels.\"\n )\n legend = True\n else:\n labels = [\"\" for _ in range(n_series)]\n legend = False\n\n for y in series:\n check_y(y)\n\n # create combined index\n index = series[0].index\n for y in series[1:]:\n # check types, note that isinstance() does not work here because index\n # types inherit from each other, hence we check for type equality\n if not type(index) is type(y.index): # noqa\n raise TypeError(\"Found series with different index types.\")\n index = index.union(y.index)\n\n # generate integer x-values\n xs = [np.argwhere(index.isin(y.index)).ravel() for y in series]\n\n # create figure\n fig, ax = plt.subplots(1, figsize=plt.figaspect(0.25))\n colors = sns.color_palette(\"colorblind\", n_colors=n_series)\n\n # plot series\n for x, y, color, label in zip(xs, series, colors, labels):\n\n # scatter if little data is available or index is not complete\n if len(x) <= 3 or not np.array_equal(np.arange(x[0], x[-1] + 1), x):\n plot_func = sns.scatterplot\n else:\n plot_func = sns.lineplot\n\n plot_func(x=x, y=y, ax=ax, marker=\"o\", label=label, color=color)\n\n # combine data points for all series\n xs_flat = list(flatten(xs))\n\n # set x label of data point to the matching index\n def format_fn(tick_val, tick_pos):\n if int(tick_val) in xs_flat:\n return index[int(tick_val)]\n else:\n return \"\"\n\n # dynamically set x label ticks and spacing from index labels\n ax.xaxis.set_major_formatter(FuncFormatter(format_fn))\n ax.xaxis.set_major_locator(MaxNLocator(integer=True))\n\n if legend:\n ax.legend()\n\n return fig, ax\n", "path": "sktime/utils/plotting/__init__.py"}]}
| 1,990 | 379 |
gh_patches_debug_8685
|
rasdani/github-patches
|
git_diff
|
mirumee__ariadne-1040
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect typing for values on EnumType.__init__()
Ariadne 0.18.0 has updated the `EnumType` class in a way that causes the `values` argument of `__init__()` to become typed, but the typing is incorrect:
```
def __init__(
self, name: str, values: Union[Dict[str, Any], enum.Enum, enum.IntEnum]
) -> None:
```
should be:
```
def __init__(
self, name: str, values: Union[Dict[str, Any], Type[enum.Enum], Type[enum.IntEnum]]
) -> None:
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ariadne/enums.py`
Content:
```
1 import enum
2 from typing import (
3 Any,
4 Dict,
5 Generator,
6 List,
7 Mapping,
8 Optional,
9 Tuple,
10 TypeVar,
11 Union,
12 cast,
13 )
14 from functools import reduce, singledispatch
15 import operator
16
17 from graphql.type import GraphQLEnumType, GraphQLNamedType, GraphQLSchema
18 from graphql.language.ast import (
19 EnumValueNode,
20 InputValueDefinitionNode,
21 ObjectValueNode,
22 )
23 from graphql.pyutils.undefined import Undefined
24 from graphql.type.definition import (
25 GraphQLArgument,
26 GraphQLField,
27 GraphQLInputField,
28 GraphQLInputObjectType,
29 GraphQLInputType,
30 GraphQLInterfaceType,
31 GraphQLNonNull,
32 GraphQLObjectType,
33 GraphQLScalarType,
34 )
35
36 from .types import SchemaBindable
37
38
39 T = TypeVar("T")
40 ArgumentWithKeys = Tuple[str, str, GraphQLArgument, Optional[List["str"]]]
41 InputFieldWithKeys = Tuple[str, str, GraphQLInputField, Optional[List["str"]]]
42 GraphQLNamedInputType = Union[
43 GraphQLScalarType, GraphQLEnumType, GraphQLInputObjectType
44 ]
45
46
47 class EnumType(SchemaBindable):
48 """Bindable mapping Python values to enumeration members in a GraphQL schema.
49
50 # Example
51
52 Given following GraphQL enum:
53
54 ```graphql
55 enum UserRole {
56 MEMBER
57 MODERATOR
58 ADMIN
59 }
60 ```
61
62 You can use `EnumType` to map it's members to Python `Enum`:
63
64 ```python
65 user_role_type = EnumType(
66 "UserRole",
67 {
68 "MEMBER": 0,
69 "MODERATOR": 1,
70 "ADMIN": 2,
71 }
72 )
73 ```
74
75 `EnumType` also works with dictionaries:
76
77 ```python
78 user_role_type = EnumType(
79 "UserRole",
80 {
81 "MEMBER": 0,
82 "MODERATOR": 1,
83 "ADMIN": 2,
84 }
85 )
86 ```
87 """
88
89 def __init__(
90 self, name: str, values: Union[Dict[str, Any], enum.Enum, enum.IntEnum]
91 ) -> None:
92 """Initializes the `EnumType` with `name` and `values` mapping.
93
94 # Required arguments
95
96 `name`: a `str` with the name of GraphQL enum type in GraphQL schema to
97 bind to.
98
99 `values`: a `dict` or `enums.Enum` with values to use to represent GraphQL
100 enum's in Python logic.
101 """
102 self.name = name
103 self.values = cast(Dict[str, Any], getattr(values, "__members__", values))
104
105 def bind_to_schema(self, schema: GraphQLSchema) -> None:
106 """Binds this `EnumType` instance to the instance of GraphQL schema."""
107 graphql_type = schema.type_map.get(self.name)
108 self.validate_graphql_type(graphql_type)
109 graphql_type = cast(GraphQLEnumType, graphql_type)
110
111 for key, value in self.values.items():
112 if key not in graphql_type.values:
113 raise ValueError(
114 "Value %s is not defined on enum %s" % (key, self.name)
115 )
116 graphql_type.values[key].value = value
117
118 def bind_to_default_values(self, schema: GraphQLSchema) -> None:
119 """Populates default values of input fields and args in the GraphQL schema.
120
121 This step is required because GraphQL query executor doesn't perform a
122 lookup for default values defined in schema. Instead it simply pulls the
123 value from fields and arguments `default_value` attribute, which is
124 `None` by default.
125 """
126 for _, _, arg, key_list in find_enum_values_in_schema(schema):
127 type_ = resolve_null_type(arg.type)
128 type_ = cast(GraphQLNamedInputType, type_)
129
130 if (
131 key_list is None
132 and arg.default_value in self.values
133 and type_.name == self.name
134 ):
135 type_ = resolve_null_type(arg.type)
136 arg.default_value = self.values[arg.default_value]
137
138 elif key_list is not None:
139 enum_value = get_value_from_mapping_value(arg.default_value, key_list)
140 type_ = cast(GraphQLEnumType, track_type_for_nested(arg, key_list))
141
142 if enum_value in self.values and type_.name == self.name:
143 set_leaf_value_in_mapping(
144 arg.default_value, key_list, self.values[enum_value]
145 )
146
147 def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:
148 """Validates that schema's GraphQL type associated with this `EnumType`
149 is an `enum`."""
150 if not graphql_type:
151 raise ValueError("Enum %s is not defined in the schema" % self.name)
152 if not isinstance(graphql_type, GraphQLEnumType):
153 raise ValueError(
154 "%s is defined in the schema, but it is instance of %s (expected %s)"
155 % (self.name, type(graphql_type).__name__, GraphQLEnumType.__name__)
156 )
157
158
159 def set_default_enum_values_on_schema(schema: GraphQLSchema):
160 """Sets missing Python values for GraphQL enums in schema.
161
162 Recursively scans GraphQL schema for enums and their values. If `value`
163 attribute is empty, its populated with with a string of its GraphQL name.
164
165 This string is then used to represent enum's value in Python instead of `None`.
166
167 # Requires arguments
168
169 `schema`: a GraphQL schema to set enums default values in.
170 """
171 for type_object in schema.type_map.values():
172 if isinstance(type_object, GraphQLEnumType):
173 set_default_enum_values(type_object)
174
175
176 def set_default_enum_values(graphql_type: GraphQLEnumType):
177 for key in graphql_type.values:
178 if graphql_type.values[key].value is None:
179 graphql_type.values[key].value = key
180
181
182 def validate_schema_enum_values(schema: GraphQLSchema) -> None:
183 """Raises `ValueError` if GraphQL schema has input fields or arguments with
184 default values that are undefined enum values.
185
186 # Example schema with invalid field argument
187
188 This schema fails to validate because argument `role` on field `users`
189 specifies `REVIEWER` as default value and `REVIEWER` is not a member of
190 the `UserRole` enum:
191
192 ```graphql
193 type Query {
194 users(role: UserRole = REVIEWER): [User!]!
195 }
196
197 enum UserRole {
198 MEMBER
199 MODERATOR
200 ADMIN
201 }
202
203 type User {
204 id: ID!
205 }
206 ```
207
208 # Example schema with invalid input field
209
210 This schema fails to validate because field `role` on input `UserFilters`
211 specifies `REVIEWER` as default value and `REVIEWER` is not a member of
212 the `UserRole` enum:
213
214 ```graphql
215 type Query {
216 users(filter: UserFilters): [User!]!
217 }
218
219 input UserFilters {
220 name: String
221 role: UserRole = REVIEWER
222 }
223
224 enum UserRole {
225 MEMBER
226 MODERATOR
227 ADMIN
228 }
229
230 type User {
231 id: ID!
232 }
233 ```
234 """
235
236 for type_name, field_name, arg, _ in find_enum_values_in_schema(schema):
237 if is_invalid_enum_value(arg):
238 raise ValueError(
239 f"Value for type: <{arg.type}> is invalid. "
240 f"Check InputField/Arguments for <{field_name}> in <{type_name}> "
241 "(Undefined enum value)."
242 )
243
244
245 def is_invalid_enum_value(field: Union[GraphQLInputField, GraphQLArgument]) -> bool:
246 if field.ast_node is None:
247 return False
248 return field.default_value is Undefined and field.ast_node.default_value is not None
249
250
251 def find_enum_values_in_schema(
252 schema: GraphQLSchema,
253 ) -> Generator[Union[ArgumentWithKeys, InputFieldWithKeys], None, None]:
254 for name, type_ in schema.type_map.items():
255 result = enum_values_in_types(type_, name)
256 if result is not None:
257 yield from result
258
259
260 @singledispatch
261 def enum_values_in_types(
262 type_: GraphQLNamedType, # pylint: disable=unused-argument
263 name: str, # pylint: disable=unused-argument
264 ) -> Optional[Generator[Union[ArgumentWithKeys, InputFieldWithKeys], None, None]]:
265 pass
266
267
268 @enum_values_in_types.register(GraphQLObjectType)
269 @enum_values_in_types.register(GraphQLInterfaceType)
270 def enum_values_in_object_type(
271 type_: Union[GraphQLObjectType, GraphQLInterfaceType],
272 field_name: str,
273 ) -> Generator[ArgumentWithKeys, None, None]:
274 for field in type_.fields.values():
275 yield from enum_values_in_field_args(field_name, field)
276
277
278 @enum_values_in_types.register(GraphQLInputObjectType)
279 def enum_values_in_input_type(
280 type_: GraphQLInputObjectType,
281 field_name,
282 ) -> Generator[InputFieldWithKeys, None, None]:
283 yield from _get_field_with_keys(field_name, type_.fields.items())
284
285
286 def enum_values_in_field_args(
287 field_name: str,
288 field: GraphQLField,
289 ) -> Generator[ArgumentWithKeys, None, None]:
290 args = [
291 (name, arg)
292 for name, arg in field.args.items()
293 if isinstance(
294 arg.type, (GraphQLInputObjectType, GraphQLEnumType, GraphQLNonNull)
295 )
296 ]
297
298 yield from _get_field_with_keys(field_name, args)
299
300
301 def _get_field_with_keys(field_name, fields):
302 for input_name, field in fields:
303 resolved_type = resolve_null_type(field.type)
304 if isinstance(resolved_type, GraphQLEnumType):
305 yield field_name, input_name, field, None
306
307 if isinstance(resolved_type, GraphQLInputObjectType):
308 if field.ast_node is not None and field.ast_node.default_value is not None:
309 routes = get_enum_keys_from_ast(field.ast_node)
310 for route in routes:
311 yield field_name, input_name, field, route
312
313
314 def get_enum_keys_from_ast(ast_node: InputValueDefinitionNode) -> List[List["str"]]:
315 routes = []
316 object_node = cast(ObjectValueNode, ast_node.default_value)
317 nodes = [([field.name.value], field) for field in object_node.fields]
318
319 while nodes:
320 key_list, field = nodes.pop()
321 if isinstance(field.value, EnumValueNode):
322 routes.append(key_list)
323
324 if isinstance(field.value, ObjectValueNode):
325 for new_field in field.value.fields:
326 new_route = key_list[:]
327 new_route.append(new_field.name.value)
328 nodes.append((new_route, new_field))
329
330 return routes
331
332
333 def get_value_from_mapping_value(mapping: Mapping[T, Any], key_list: List[T]) -> Any:
334 return reduce(operator.getitem, key_list, mapping)
335
336
337 def set_leaf_value_in_mapping(
338 mapping: Mapping[T, Any], key_list: List[T], value: Any
339 ) -> None:
340 get_value_from_mapping_value(mapping, key_list[:-1])[key_list[-1]] = value
341
342
343 def track_type_for_nested(
344 arg: Union[GraphQLArgument, GraphQLInputField], key_list: List[str]
345 ) -> GraphQLInputType:
346 type_ = resolve_null_type(arg.type)
347
348 for elem in key_list:
349 if isinstance(type_, GraphQLInputObjectType):
350 type_ = type_.fields[elem].type
351 return type_
352
353
354 def resolve_null_type(type_: GraphQLInputType) -> GraphQLInputType:
355 return type_.of_type if isinstance(type_, GraphQLNonNull) else type_
356
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ariadne/enums.py b/ariadne/enums.py
--- a/ariadne/enums.py
+++ b/ariadne/enums.py
@@ -7,6 +7,7 @@
Mapping,
Optional,
Tuple,
+ Type,
TypeVar,
Union,
cast,
@@ -87,7 +88,9 @@
"""
def __init__(
- self, name: str, values: Union[Dict[str, Any], enum.Enum, enum.IntEnum]
+ self,
+ name: str,
+ values: Union[Dict[str, Any], Type[enum.Enum], Type[enum.IntEnum]],
) -> None:
"""Initializes the `EnumType` with `name` and `values` mapping.
|
{"golden_diff": "diff --git a/ariadne/enums.py b/ariadne/enums.py\n--- a/ariadne/enums.py\n+++ b/ariadne/enums.py\n@@ -7,6 +7,7 @@\n Mapping,\n Optional,\n Tuple,\n+ Type,\n TypeVar,\n Union,\n cast,\n@@ -87,7 +88,9 @@\n \"\"\"\n \n def __init__(\n- self, name: str, values: Union[Dict[str, Any], enum.Enum, enum.IntEnum]\n+ self,\n+ name: str,\n+ values: Union[Dict[str, Any], Type[enum.Enum], Type[enum.IntEnum]],\n ) -> None:\n \"\"\"Initializes the `EnumType` with `name` and `values` mapping.\n", "issue": "Incorrect typing for values on EnumType.__init__()\nAriadne 0.18.0 has updated the `EnumType` class in a way that causes the `values` argument of `__init__()` to become typed, but the typing is incorrect:\r\n\r\n```\r\n def __init__(\r\n self, name: str, values: Union[Dict[str, Any], enum.Enum, enum.IntEnum]\r\n ) -> None:\r\n```\r\n\r\nshould be:\r\n\r\n```\r\n def __init__(\r\n self, name: str, values: Union[Dict[str, Any], Type[enum.Enum], Type[enum.IntEnum]]\r\n ) -> None:\r\n```\n", "before_files": [{"content": "import enum\nfrom typing import (\n Any,\n Dict,\n Generator,\n List,\n Mapping,\n Optional,\n Tuple,\n TypeVar,\n Union,\n cast,\n)\nfrom functools import reduce, singledispatch\nimport operator\n\nfrom graphql.type import GraphQLEnumType, GraphQLNamedType, GraphQLSchema\nfrom graphql.language.ast import (\n EnumValueNode,\n InputValueDefinitionNode,\n ObjectValueNode,\n)\nfrom graphql.pyutils.undefined import Undefined\nfrom graphql.type.definition import (\n GraphQLArgument,\n GraphQLField,\n GraphQLInputField,\n GraphQLInputObjectType,\n GraphQLInputType,\n GraphQLInterfaceType,\n GraphQLNonNull,\n GraphQLObjectType,\n GraphQLScalarType,\n)\n\nfrom .types import SchemaBindable\n\n\nT = TypeVar(\"T\")\nArgumentWithKeys = Tuple[str, str, GraphQLArgument, Optional[List[\"str\"]]]\nInputFieldWithKeys = Tuple[str, str, GraphQLInputField, Optional[List[\"str\"]]]\nGraphQLNamedInputType = Union[\n GraphQLScalarType, GraphQLEnumType, GraphQLInputObjectType\n]\n\n\nclass EnumType(SchemaBindable):\n \"\"\"Bindable mapping Python values to enumeration members in a GraphQL schema.\n\n # Example\n\n Given following GraphQL enum:\n\n ```graphql\n enum UserRole {\n MEMBER\n MODERATOR\n ADMIN\n }\n ```\n\n You can use `EnumType` to map it's members to Python `Enum`:\n\n ```python\n user_role_type = EnumType(\n \"UserRole\",\n {\n \"MEMBER\": 0,\n \"MODERATOR\": 1,\n \"ADMIN\": 2,\n }\n )\n ```\n\n `EnumType` also works with dictionaries:\n\n ```python\n user_role_type = EnumType(\n \"UserRole\",\n {\n \"MEMBER\": 0,\n \"MODERATOR\": 1,\n \"ADMIN\": 2,\n }\n )\n ```\n \"\"\"\n\n def __init__(\n self, name: str, values: Union[Dict[str, Any], enum.Enum, enum.IntEnum]\n ) -> None:\n \"\"\"Initializes the `EnumType` with `name` and `values` mapping.\n\n # Required arguments\n\n `name`: a `str` with the name of GraphQL enum type in GraphQL schema to\n bind to.\n\n `values`: a `dict` or `enums.Enum` with values to use to represent GraphQL\n enum's in Python logic.\n \"\"\"\n self.name = name\n self.values = cast(Dict[str, Any], getattr(values, \"__members__\", values))\n\n def bind_to_schema(self, schema: GraphQLSchema) -> None:\n \"\"\"Binds this `EnumType` instance to the instance of GraphQL schema.\"\"\"\n graphql_type = schema.type_map.get(self.name)\n self.validate_graphql_type(graphql_type)\n graphql_type = cast(GraphQLEnumType, graphql_type)\n\n for key, value in self.values.items():\n if key not in graphql_type.values:\n raise ValueError(\n \"Value %s is not defined on enum %s\" % 
(key, self.name)\n )\n graphql_type.values[key].value = value\n\n def bind_to_default_values(self, schema: GraphQLSchema) -> None:\n \"\"\"Populates default values of input fields and args in the GraphQL schema.\n\n This step is required because GraphQL query executor doesn't perform a\n lookup for default values defined in schema. Instead it simply pulls the\n value from fields and arguments `default_value` attribute, which is\n `None` by default.\n \"\"\"\n for _, _, arg, key_list in find_enum_values_in_schema(schema):\n type_ = resolve_null_type(arg.type)\n type_ = cast(GraphQLNamedInputType, type_)\n\n if (\n key_list is None\n and arg.default_value in self.values\n and type_.name == self.name\n ):\n type_ = resolve_null_type(arg.type)\n arg.default_value = self.values[arg.default_value]\n\n elif key_list is not None:\n enum_value = get_value_from_mapping_value(arg.default_value, key_list)\n type_ = cast(GraphQLEnumType, track_type_for_nested(arg, key_list))\n\n if enum_value in self.values and type_.name == self.name:\n set_leaf_value_in_mapping(\n arg.default_value, key_list, self.values[enum_value]\n )\n\n def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:\n \"\"\"Validates that schema's GraphQL type associated with this `EnumType`\n is an `enum`.\"\"\"\n if not graphql_type:\n raise ValueError(\"Enum %s is not defined in the schema\" % self.name)\n if not isinstance(graphql_type, GraphQLEnumType):\n raise ValueError(\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLEnumType.__name__)\n )\n\n\ndef set_default_enum_values_on_schema(schema: GraphQLSchema):\n \"\"\"Sets missing Python values for GraphQL enums in schema.\n\n Recursively scans GraphQL schema for enums and their values. 
If `value`\n attribute is empty, its populated with with a string of its GraphQL name.\n\n This string is then used to represent enum's value in Python instead of `None`.\n\n # Requires arguments\n\n `schema`: a GraphQL schema to set enums default values in.\n \"\"\"\n for type_object in schema.type_map.values():\n if isinstance(type_object, GraphQLEnumType):\n set_default_enum_values(type_object)\n\n\ndef set_default_enum_values(graphql_type: GraphQLEnumType):\n for key in graphql_type.values:\n if graphql_type.values[key].value is None:\n graphql_type.values[key].value = key\n\n\ndef validate_schema_enum_values(schema: GraphQLSchema) -> None:\n \"\"\"Raises `ValueError` if GraphQL schema has input fields or arguments with\n default values that are undefined enum values.\n\n # Example schema with invalid field argument\n\n This schema fails to validate because argument `role` on field `users`\n specifies `REVIEWER` as default value and `REVIEWER` is not a member of\n the `UserRole` enum:\n\n ```graphql\n type Query {\n users(role: UserRole = REVIEWER): [User!]!\n }\n\n enum UserRole {\n MEMBER\n MODERATOR\n ADMIN\n }\n\n type User {\n id: ID!\n }\n ```\n\n # Example schema with invalid input field\n\n This schema fails to validate because field `role` on input `UserFilters`\n specifies `REVIEWER` as default value and `REVIEWER` is not a member of\n the `UserRole` enum:\n\n ```graphql\n type Query {\n users(filter: UserFilters): [User!]!\n }\n\n input UserFilters {\n name: String\n role: UserRole = REVIEWER\n }\n\n enum UserRole {\n MEMBER\n MODERATOR\n ADMIN\n }\n\n type User {\n id: ID!\n }\n ```\n \"\"\"\n\n for type_name, field_name, arg, _ in find_enum_values_in_schema(schema):\n if is_invalid_enum_value(arg):\n raise ValueError(\n f\"Value for type: <{arg.type}> is invalid. 
\"\n f\"Check InputField/Arguments for <{field_name}> in <{type_name}> \"\n \"(Undefined enum value).\"\n )\n\n\ndef is_invalid_enum_value(field: Union[GraphQLInputField, GraphQLArgument]) -> bool:\n if field.ast_node is None:\n return False\n return field.default_value is Undefined and field.ast_node.default_value is not None\n\n\ndef find_enum_values_in_schema(\n schema: GraphQLSchema,\n) -> Generator[Union[ArgumentWithKeys, InputFieldWithKeys], None, None]:\n for name, type_ in schema.type_map.items():\n result = enum_values_in_types(type_, name)\n if result is not None:\n yield from result\n\n\n@singledispatch\ndef enum_values_in_types(\n type_: GraphQLNamedType, # pylint: disable=unused-argument\n name: str, # pylint: disable=unused-argument\n) -> Optional[Generator[Union[ArgumentWithKeys, InputFieldWithKeys], None, None]]:\n pass\n\n\n@enum_values_in_types.register(GraphQLObjectType)\n@enum_values_in_types.register(GraphQLInterfaceType)\ndef enum_values_in_object_type(\n type_: Union[GraphQLObjectType, GraphQLInterfaceType],\n field_name: str,\n) -> Generator[ArgumentWithKeys, None, None]:\n for field in type_.fields.values():\n yield from enum_values_in_field_args(field_name, field)\n\n\n@enum_values_in_types.register(GraphQLInputObjectType)\ndef enum_values_in_input_type(\n type_: GraphQLInputObjectType,\n field_name,\n) -> Generator[InputFieldWithKeys, None, None]:\n yield from _get_field_with_keys(field_name, type_.fields.items())\n\n\ndef enum_values_in_field_args(\n field_name: str,\n field: GraphQLField,\n) -> Generator[ArgumentWithKeys, None, None]:\n args = [\n (name, arg)\n for name, arg in field.args.items()\n if isinstance(\n arg.type, (GraphQLInputObjectType, GraphQLEnumType, GraphQLNonNull)\n )\n ]\n\n yield from _get_field_with_keys(field_name, args)\n\n\ndef _get_field_with_keys(field_name, fields):\n for input_name, field in fields:\n resolved_type = resolve_null_type(field.type)\n if isinstance(resolved_type, GraphQLEnumType):\n yield field_name, input_name, field, None\n\n if isinstance(resolved_type, GraphQLInputObjectType):\n if field.ast_node is not None and field.ast_node.default_value is not None:\n routes = get_enum_keys_from_ast(field.ast_node)\n for route in routes:\n yield field_name, input_name, field, route\n\n\ndef get_enum_keys_from_ast(ast_node: InputValueDefinitionNode) -> List[List[\"str\"]]:\n routes = []\n object_node = cast(ObjectValueNode, ast_node.default_value)\n nodes = [([field.name.value], field) for field in object_node.fields]\n\n while nodes:\n key_list, field = nodes.pop()\n if isinstance(field.value, EnumValueNode):\n routes.append(key_list)\n\n if isinstance(field.value, ObjectValueNode):\n for new_field in field.value.fields:\n new_route = key_list[:]\n new_route.append(new_field.name.value)\n nodes.append((new_route, new_field))\n\n return routes\n\n\ndef get_value_from_mapping_value(mapping: Mapping[T, Any], key_list: List[T]) -> Any:\n return reduce(operator.getitem, key_list, mapping)\n\n\ndef set_leaf_value_in_mapping(\n mapping: Mapping[T, Any], key_list: List[T], value: Any\n) -> None:\n get_value_from_mapping_value(mapping, key_list[:-1])[key_list[-1]] = value\n\n\ndef track_type_for_nested(\n arg: Union[GraphQLArgument, GraphQLInputField], key_list: List[str]\n) -> GraphQLInputType:\n type_ = resolve_null_type(arg.type)\n\n for elem in key_list:\n if isinstance(type_, GraphQLInputObjectType):\n type_ = type_.fields[elem].type\n return type_\n\n\ndef resolve_null_type(type_: GraphQLInputType) -> GraphQLInputType:\n return 
type_.of_type if isinstance(type_, GraphQLNonNull) else type_\n", "path": "ariadne/enums.py"}], "after_files": [{"content": "import enum\nfrom typing import (\n Any,\n Dict,\n Generator,\n List,\n Mapping,\n Optional,\n Tuple,\n Type,\n TypeVar,\n Union,\n cast,\n)\nfrom functools import reduce, singledispatch\nimport operator\n\nfrom graphql.type import GraphQLEnumType, GraphQLNamedType, GraphQLSchema\nfrom graphql.language.ast import (\n EnumValueNode,\n InputValueDefinitionNode,\n ObjectValueNode,\n)\nfrom graphql.pyutils.undefined import Undefined\nfrom graphql.type.definition import (\n GraphQLArgument,\n GraphQLField,\n GraphQLInputField,\n GraphQLInputObjectType,\n GraphQLInputType,\n GraphQLInterfaceType,\n GraphQLNonNull,\n GraphQLObjectType,\n GraphQLScalarType,\n)\n\nfrom .types import SchemaBindable\n\n\nT = TypeVar(\"T\")\nArgumentWithKeys = Tuple[str, str, GraphQLArgument, Optional[List[\"str\"]]]\nInputFieldWithKeys = Tuple[str, str, GraphQLInputField, Optional[List[\"str\"]]]\nGraphQLNamedInputType = Union[\n GraphQLScalarType, GraphQLEnumType, GraphQLInputObjectType\n]\n\n\nclass EnumType(SchemaBindable):\n \"\"\"Bindable mapping Python values to enumeration members in a GraphQL schema.\n\n # Example\n\n Given following GraphQL enum:\n\n ```graphql\n enum UserRole {\n MEMBER\n MODERATOR\n ADMIN\n }\n ```\n\n You can use `EnumType` to map it's members to Python `Enum`:\n\n ```python\n user_role_type = EnumType(\n \"UserRole\",\n {\n \"MEMBER\": 0,\n \"MODERATOR\": 1,\n \"ADMIN\": 2,\n }\n )\n ```\n\n `EnumType` also works with dictionaries:\n\n ```python\n user_role_type = EnumType(\n \"UserRole\",\n {\n \"MEMBER\": 0,\n \"MODERATOR\": 1,\n \"ADMIN\": 2,\n }\n )\n ```\n \"\"\"\n\n def __init__(\n self,\n name: str,\n values: Union[Dict[str, Any], Type[enum.Enum], Type[enum.IntEnum]],\n ) -> None:\n \"\"\"Initializes the `EnumType` with `name` and `values` mapping.\n\n # Required arguments\n\n `name`: a `str` with the name of GraphQL enum type in GraphQL schema to\n bind to.\n\n `values`: a `dict` or `enums.Enum` with values to use to represent GraphQL\n enum's in Python logic.\n \"\"\"\n self.name = name\n self.values = cast(Dict[str, Any], getattr(values, \"__members__\", values))\n\n def bind_to_schema(self, schema: GraphQLSchema) -> None:\n \"\"\"Binds this `EnumType` instance to the instance of GraphQL schema.\"\"\"\n graphql_type = schema.type_map.get(self.name)\n self.validate_graphql_type(graphql_type)\n graphql_type = cast(GraphQLEnumType, graphql_type)\n\n for key, value in self.values.items():\n if key not in graphql_type.values:\n raise ValueError(\n \"Value %s is not defined on enum %s\" % (key, self.name)\n )\n graphql_type.values[key].value = value\n\n def bind_to_default_values(self, schema: GraphQLSchema) -> None:\n \"\"\"Populates default values of input fields and args in the GraphQL schema.\n\n This step is required because GraphQL query executor doesn't perform a\n lookup for default values defined in schema. 
Instead it simply pulls the\n value from fields and arguments `default_value` attribute, which is\n `None` by default.\n \"\"\"\n for _, _, arg, key_list in find_enum_values_in_schema(schema):\n type_ = resolve_null_type(arg.type)\n type_ = cast(GraphQLNamedInputType, type_)\n\n if (\n key_list is None\n and arg.default_value in self.values\n and type_.name == self.name\n ):\n type_ = resolve_null_type(arg.type)\n arg.default_value = self.values[arg.default_value]\n\n elif key_list is not None:\n enum_value = get_value_from_mapping_value(arg.default_value, key_list)\n type_ = cast(GraphQLEnumType, track_type_for_nested(arg, key_list))\n\n if enum_value in self.values and type_.name == self.name:\n set_leaf_value_in_mapping(\n arg.default_value, key_list, self.values[enum_value]\n )\n\n def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:\n \"\"\"Validates that schema's GraphQL type associated with this `EnumType`\n is an `enum`.\"\"\"\n if not graphql_type:\n raise ValueError(\"Enum %s is not defined in the schema\" % self.name)\n if not isinstance(graphql_type, GraphQLEnumType):\n raise ValueError(\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLEnumType.__name__)\n )\n\n\ndef set_default_enum_values_on_schema(schema: GraphQLSchema):\n \"\"\"Sets missing Python values for GraphQL enums in schema.\n\n Recursively scans GraphQL schema for enums and their values. If `value`\n attribute is empty, its populated with with a string of its GraphQL name.\n\n This string is then used to represent enum's value in Python instead of `None`.\n\n # Requires arguments\n\n `schema`: a GraphQL schema to set enums default values in.\n \"\"\"\n for type_object in schema.type_map.values():\n if isinstance(type_object, GraphQLEnumType):\n set_default_enum_values(type_object)\n\n\ndef set_default_enum_values(graphql_type: GraphQLEnumType):\n for key in graphql_type.values:\n if graphql_type.values[key].value is None:\n graphql_type.values[key].value = key\n\n\ndef validate_schema_enum_values(schema: GraphQLSchema) -> None:\n \"\"\"Raises `ValueError` if GraphQL schema has input fields or arguments with\n default values that are undefined enum values.\n\n # Example schema with invalid field argument\n\n This schema fails to validate because argument `role` on field `users`\n specifies `REVIEWER` as default value and `REVIEWER` is not a member of\n the `UserRole` enum:\n\n ```graphql\n type Query {\n users(role: UserRole = REVIEWER): [User!]!\n }\n\n enum UserRole {\n MEMBER\n MODERATOR\n ADMIN\n }\n\n type User {\n id: ID!\n }\n ```\n\n # Example schema with invalid input field\n\n This schema fails to validate because field `role` on input `UserFilters`\n specifies `REVIEWER` as default value and `REVIEWER` is not a member of\n the `UserRole` enum:\n\n ```graphql\n type Query {\n users(filter: UserFilters): [User!]!\n }\n\n input UserFilters {\n name: String\n role: UserRole = REVIEWER\n }\n\n enum UserRole {\n MEMBER\n MODERATOR\n ADMIN\n }\n\n type User {\n id: ID!\n }\n ```\n \"\"\"\n\n for type_name, field_name, arg, _ in find_enum_values_in_schema(schema):\n if is_invalid_enum_value(arg):\n raise ValueError(\n f\"Value for type: <{arg.type}> is invalid. 
\"\n f\"Check InputField/Arguments for <{field_name}> in <{type_name}> \"\n \"(Undefined enum value).\"\n )\n\n\ndef is_invalid_enum_value(field: Union[GraphQLInputField, GraphQLArgument]) -> bool:\n if field.ast_node is None:\n return False\n return field.default_value is Undefined and field.ast_node.default_value is not None\n\n\ndef find_enum_values_in_schema(\n schema: GraphQLSchema,\n) -> Generator[Union[ArgumentWithKeys, InputFieldWithKeys], None, None]:\n for name, type_ in schema.type_map.items():\n result = enum_values_in_types(type_, name)\n if result is not None:\n yield from result\n\n\n@singledispatch\ndef enum_values_in_types(\n type_: GraphQLNamedType, # pylint: disable=unused-argument\n name: str, # pylint: disable=unused-argument\n) -> Optional[Generator[Union[ArgumentWithKeys, InputFieldWithKeys], None, None]]:\n pass\n\n\n@enum_values_in_types.register(GraphQLObjectType)\n@enum_values_in_types.register(GraphQLInterfaceType)\ndef enum_values_in_object_type(\n type_: Union[GraphQLObjectType, GraphQLInterfaceType],\n field_name: str,\n) -> Generator[ArgumentWithKeys, None, None]:\n for field in type_.fields.values():\n yield from enum_values_in_field_args(field_name, field)\n\n\n@enum_values_in_types.register(GraphQLInputObjectType)\ndef enum_values_in_input_type(\n type_: GraphQLInputObjectType,\n field_name,\n) -> Generator[InputFieldWithKeys, None, None]:\n yield from _get_field_with_keys(field_name, type_.fields.items())\n\n\ndef enum_values_in_field_args(\n field_name: str,\n field: GraphQLField,\n) -> Generator[ArgumentWithKeys, None, None]:\n args = [\n (name, arg)\n for name, arg in field.args.items()\n if isinstance(\n arg.type, (GraphQLInputObjectType, GraphQLEnumType, GraphQLNonNull)\n )\n ]\n\n yield from _get_field_with_keys(field_name, args)\n\n\ndef _get_field_with_keys(field_name, fields):\n for input_name, field in fields:\n resolved_type = resolve_null_type(field.type)\n if isinstance(resolved_type, GraphQLEnumType):\n yield field_name, input_name, field, None\n\n if isinstance(resolved_type, GraphQLInputObjectType):\n if field.ast_node is not None and field.ast_node.default_value is not None:\n routes = get_enum_keys_from_ast(field.ast_node)\n for route in routes:\n yield field_name, input_name, field, route\n\n\ndef get_enum_keys_from_ast(ast_node: InputValueDefinitionNode) -> List[List[\"str\"]]:\n routes = []\n object_node = cast(ObjectValueNode, ast_node.default_value)\n nodes = [([field.name.value], field) for field in object_node.fields]\n\n while nodes:\n key_list, field = nodes.pop()\n if isinstance(field.value, EnumValueNode):\n routes.append(key_list)\n\n if isinstance(field.value, ObjectValueNode):\n for new_field in field.value.fields:\n new_route = key_list[:]\n new_route.append(new_field.name.value)\n nodes.append((new_route, new_field))\n\n return routes\n\n\ndef get_value_from_mapping_value(mapping: Mapping[T, Any], key_list: List[T]) -> Any:\n return reduce(operator.getitem, key_list, mapping)\n\n\ndef set_leaf_value_in_mapping(\n mapping: Mapping[T, Any], key_list: List[T], value: Any\n) -> None:\n get_value_from_mapping_value(mapping, key_list[:-1])[key_list[-1]] = value\n\n\ndef track_type_for_nested(\n arg: Union[GraphQLArgument, GraphQLInputField], key_list: List[str]\n) -> GraphQLInputType:\n type_ = resolve_null_type(arg.type)\n\n for elem in key_list:\n if isinstance(type_, GraphQLInputObjectType):\n type_ = type_.fields[elem].type\n return type_\n\n\ndef resolve_null_type(type_: GraphQLInputType) -> GraphQLInputType:\n return 
type_.of_type if isinstance(type_, GraphQLNonNull) else type_\n", "path": "ariadne/enums.py"}]}
| 3,848 | 173 |
gh_patches_debug_47805
|
rasdani/github-patches
|
git_diff
|
pypa__pip-7056
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add a documentation string to pip._internal.network.auth
Follow up to #7045
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pip/_internal/network/__init__.py`
Content:
```
```
Path: `src/pip/_internal/network/auth.py`
Content:
```
1 import logging
2
3 from pip._vendor.requests.auth import AuthBase, HTTPBasicAuth
4 from pip._vendor.requests.utils import get_netrc_auth
5 from pip._vendor.six.moves.urllib import parse as urllib_parse
6
7 from pip._internal.utils.misc import (
8 ask,
9 ask_input,
10 ask_password,
11 remove_auth_from_url,
12 split_auth_netloc_from_url,
13 )
14 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
15
16 if MYPY_CHECK_RUNNING:
17 from optparse import Values
18 from typing import Dict, Optional, Tuple
19
20 from pip._internal.vcs.versioncontrol import AuthInfo
21
22 Credentials = Tuple[str, str, str]
23
24 logger = logging.getLogger(__name__)
25
26 try:
27 import keyring # noqa
28 except ImportError:
29 keyring = None
30 except Exception as exc:
31 logger.warning(
32 "Keyring is skipped due to an exception: %s", str(exc),
33 )
34 keyring = None
35
36
37 def get_keyring_auth(url, username):
38 """Return the tuple auth for a given url from keyring."""
39 if not url or not keyring:
40 return None
41
42 try:
43 try:
44 get_credential = keyring.get_credential
45 except AttributeError:
46 pass
47 else:
48 logger.debug("Getting credentials from keyring for %s", url)
49 cred = get_credential(url, username)
50 if cred is not None:
51 return cred.username, cred.password
52 return None
53
54 if username:
55 logger.debug("Getting password from keyring for %s", url)
56 password = keyring.get_password(url, username)
57 if password:
58 return username, password
59
60 except Exception as exc:
61 logger.warning(
62 "Keyring is skipped due to an exception: %s", str(exc),
63 )
64
65
66 class MultiDomainBasicAuth(AuthBase):
67
68 def __init__(self, prompting=True, index_urls=None):
69 # type: (bool, Optional[Values]) -> None
70 self.prompting = prompting
71 self.index_urls = index_urls
72 self.passwords = {} # type: Dict[str, AuthInfo]
73 # When the user is prompted to enter credentials and keyring is
74 # available, we will offer to save them. If the user accepts,
75 # this value is set to the credentials they entered. After the
76 # request authenticates, the caller should call
77 # ``save_credentials`` to save these.
78 self._credentials_to_save = None # type: Optional[Credentials]
79
80 def _get_index_url(self, url):
81 """Return the original index URL matching the requested URL.
82
83 Cached or dynamically generated credentials may work against
84 the original index URL rather than just the netloc.
85
86 The provided url should have had its username and password
87 removed already. If the original index url had credentials then
88 they will be included in the return value.
89
90 Returns None if no matching index was found, or if --no-index
91 was specified by the user.
92 """
93 if not url or not self.index_urls:
94 return None
95
96 for u in self.index_urls:
97 prefix = remove_auth_from_url(u).rstrip("/") + "/"
98 if url.startswith(prefix):
99 return u
100
101 def _get_new_credentials(self, original_url, allow_netrc=True,
102 allow_keyring=True):
103 """Find and return credentials for the specified URL."""
104 # Split the credentials and netloc from the url.
105 url, netloc, url_user_password = split_auth_netloc_from_url(
106 original_url,
107 )
108
109 # Start with the credentials embedded in the url
110 username, password = url_user_password
111 if username is not None and password is not None:
112 logger.debug("Found credentials in url for %s", netloc)
113 return url_user_password
114
115 # Find a matching index url for this request
116 index_url = self._get_index_url(url)
117 if index_url:
118 # Split the credentials from the url.
119 index_info = split_auth_netloc_from_url(index_url)
120 if index_info:
121 index_url, _, index_url_user_password = index_info
122 logger.debug("Found index url %s", index_url)
123
124 # If an index URL was found, try its embedded credentials
125 if index_url and index_url_user_password[0] is not None:
126 username, password = index_url_user_password
127 if username is not None and password is not None:
128 logger.debug("Found credentials in index url for %s", netloc)
129 return index_url_user_password
130
131 # Get creds from netrc if we still don't have them
132 if allow_netrc:
133 netrc_auth = get_netrc_auth(original_url)
134 if netrc_auth:
135 logger.debug("Found credentials in netrc for %s", netloc)
136 return netrc_auth
137
138 # If we don't have a password and keyring is available, use it.
139 if allow_keyring:
140 # The index url is more specific than the netloc, so try it first
141 kr_auth = (
142 get_keyring_auth(index_url, username) or
143 get_keyring_auth(netloc, username)
144 )
145 if kr_auth:
146 logger.debug("Found credentials in keyring for %s", netloc)
147 return kr_auth
148
149 return username, password
150
151 def _get_url_and_credentials(self, original_url):
152 """Return the credentials to use for the provided URL.
153
154 If allowed, netrc and keyring may be used to obtain the
155 correct credentials.
156
157 Returns (url_without_credentials, username, password). Note
158 that even if the original URL contains credentials, this
159 function may return a different username and password.
160 """
161 url, netloc, _ = split_auth_netloc_from_url(original_url)
162
163 # Use any stored credentials that we have for this netloc
164 username, password = self.passwords.get(netloc, (None, None))
165
166 if username is None and password is None:
167 # No stored credentials. Acquire new credentials without prompting
168 # the user. (e.g. from netrc, keyring, or the URL itself)
169 username, password = self._get_new_credentials(original_url)
170
171 if username is not None or password is not None:
172 # Convert the username and password if they're None, so that
173 # this netloc will show up as "cached" in the conditional above.
174 # Further, HTTPBasicAuth doesn't accept None, so it makes sense to
175 # cache the value that is going to be used.
176 username = username or ""
177 password = password or ""
178
179 # Store any acquired credentials.
180 self.passwords[netloc] = (username, password)
181
182 assert (
183 # Credentials were found
184 (username is not None and password is not None) or
185 # Credentials were not found
186 (username is None and password is None)
187 ), "Could not load credentials from url: {}".format(original_url)
188
189 return url, username, password
190
191 def __call__(self, req):
192 # Get credentials for this request
193 url, username, password = self._get_url_and_credentials(req.url)
194
195 # Set the url of the request to the url without any credentials
196 req.url = url
197
198 if username is not None and password is not None:
199 # Send the basic auth with this request
200 req = HTTPBasicAuth(username, password)(req)
201
202 # Attach a hook to handle 401 responses
203 req.register_hook("response", self.handle_401)
204
205 return req
206
207 # Factored out to allow for easy patching in tests
208 def _prompt_for_password(self, netloc):
209 username = ask_input("User for %s: " % netloc)
210 if not username:
211 return None, None
212 auth = get_keyring_auth(netloc, username)
213 if auth:
214 return auth[0], auth[1], False
215 password = ask_password("Password: ")
216 return username, password, True
217
218 # Factored out to allow for easy patching in tests
219 def _should_save_password_to_keyring(self):
220 if not keyring:
221 return False
222 return ask("Save credentials to keyring [y/N]: ", ["y", "n"]) == "y"
223
224 def handle_401(self, resp, **kwargs):
225 # We only care about 401 responses, anything else we want to just
226 # pass through the actual response
227 if resp.status_code != 401:
228 return resp
229
230 # We are not able to prompt the user so simply return the response
231 if not self.prompting:
232 return resp
233
234 parsed = urllib_parse.urlparse(resp.url)
235
236 # Prompt the user for a new username and password
237 username, password, save = self._prompt_for_password(parsed.netloc)
238
239 # Store the new username and password to use for future requests
240 self._credentials_to_save = None
241 if username is not None and password is not None:
242 self.passwords[parsed.netloc] = (username, password)
243
244 # Prompt to save the password to keyring
245 if save and self._should_save_password_to_keyring():
246 self._credentials_to_save = (parsed.netloc, username, password)
247
248 # Consume content and release the original connection to allow our new
249 # request to reuse the same one.
250 resp.content
251 resp.raw.release_conn()
252
253 # Add our new username and password to the request
254 req = HTTPBasicAuth(username or "", password or "")(resp.request)
255 req.register_hook("response", self.warn_on_401)
256
257 # On successful request, save the credentials that were used to
258 # keyring. (Note that if the user responded "no" above, this member
259 # is not set and nothing will be saved.)
260 if self._credentials_to_save:
261 req.register_hook("response", self.save_credentials)
262
263 # Send our new request
264 new_resp = resp.connection.send(req, **kwargs)
265 new_resp.history.append(resp)
266
267 return new_resp
268
269 def warn_on_401(self, resp, **kwargs):
270 """Response callback to warn about incorrect credentials."""
271 if resp.status_code == 401:
272 logger.warning(
273 '401 Error, Credentials not correct for %s', resp.request.url,
274 )
275
276 def save_credentials(self, resp, **kwargs):
277 """Response callback to save credentials on success."""
278 assert keyring is not None, "should never reach here without keyring"
279 if not keyring:
280 return
281
282 creds = self._credentials_to_save
283 self._credentials_to_save = None
284 if creds and resp.status_code < 400:
285 try:
286 logger.info('Saving credentials to keyring')
287 keyring.set_password(*creds)
288 except Exception:
289 logger.exception('Failed to save credentials')
290
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pip/_internal/network/__init__.py b/src/pip/_internal/network/__init__.py
--- a/src/pip/_internal/network/__init__.py
+++ b/src/pip/_internal/network/__init__.py
@@ -0,0 +1,2 @@
+"""Contains purely network-related utilities.
+"""
diff --git a/src/pip/_internal/network/auth.py b/src/pip/_internal/network/auth.py
--- a/src/pip/_internal/network/auth.py
+++ b/src/pip/_internal/network/auth.py
@@ -1,3 +1,9 @@
+"""Network Authentication Helpers
+
+Contains interface (MultiDomainBasicAuth) and associated glue code for
+providing credentials in the context of network requests.
+"""
+
import logging
from pip._vendor.requests.auth import AuthBase, HTTPBasicAuth
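
The docstring added above names `MultiDomainBasicAuth` as the module's interface for providing credentials during network requests; the sketch below shows roughly how that interface is wired into a (vendored) requests session. It is an illustration under assumptions, not code from pip itself: the index URL and project path are placeholders, and only request preparation is exercised, no network call is made.

```python
from pip._vendor import requests
from pip._internal.network.auth import MultiDomainBasicAuth

# Placeholder index URL with embedded credentials (illustrative only).
index_url = "https://user:[email protected]/simple/"

session = requests.Session()
session.auth = MultiDomainBasicAuth(prompting=False, index_urls=[index_url])

# Preparing a request invokes MultiDomainBasicAuth.__call__, which strips any
# credentials from the URL and attaches a Basic auth header when it finds some.
request = requests.Request("GET", "https://pypi.example.invalid/simple/sampleproject/")
prepared = session.prepare_request(request)
print(prepared.headers.get("Authorization"))
```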
|
{"golden_diff": "diff --git a/src/pip/_internal/network/__init__.py b/src/pip/_internal/network/__init__.py\n--- a/src/pip/_internal/network/__init__.py\n+++ b/src/pip/_internal/network/__init__.py\n@@ -0,0 +1,2 @@\n+\"\"\"Contains purely network-related utilities.\n+\"\"\"\ndiff --git a/src/pip/_internal/network/auth.py b/src/pip/_internal/network/auth.py\n--- a/src/pip/_internal/network/auth.py\n+++ b/src/pip/_internal/network/auth.py\n@@ -1,3 +1,9 @@\n+\"\"\"Network Authentication Helpers\n+\n+Contains interface (MultiDomainBasicAuth) and associated glue code for\n+providing credentials in the context of network requests.\n+\"\"\"\n+\n import logging\n \n from pip._vendor.requests.auth import AuthBase, HTTPBasicAuth\n", "issue": "Add a documentation string to pip._internal.network.auth\nFollow up to #7045\n", "before_files": [{"content": "", "path": "src/pip/_internal/network/__init__.py"}, {"content": "import logging\n\nfrom pip._vendor.requests.auth import AuthBase, HTTPBasicAuth\nfrom pip._vendor.requests.utils import get_netrc_auth\nfrom pip._vendor.six.moves.urllib import parse as urllib_parse\n\nfrom pip._internal.utils.misc import (\n ask,\n ask_input,\n ask_password,\n remove_auth_from_url,\n split_auth_netloc_from_url,\n)\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from optparse import Values\n from typing import Dict, Optional, Tuple\n\n from pip._internal.vcs.versioncontrol import AuthInfo\n\n Credentials = Tuple[str, str, str]\n\nlogger = logging.getLogger(__name__)\n\ntry:\n import keyring # noqa\nexcept ImportError:\n keyring = None\nexcept Exception as exc:\n logger.warning(\n \"Keyring is skipped due to an exception: %s\", str(exc),\n )\n keyring = None\n\n\ndef get_keyring_auth(url, username):\n \"\"\"Return the tuple auth for a given url from keyring.\"\"\"\n if not url or not keyring:\n return None\n\n try:\n try:\n get_credential = keyring.get_credential\n except AttributeError:\n pass\n else:\n logger.debug(\"Getting credentials from keyring for %s\", url)\n cred = get_credential(url, username)\n if cred is not None:\n return cred.username, cred.password\n return None\n\n if username:\n logger.debug(\"Getting password from keyring for %s\", url)\n password = keyring.get_password(url, username)\n if password:\n return username, password\n\n except Exception as exc:\n logger.warning(\n \"Keyring is skipped due to an exception: %s\", str(exc),\n )\n\n\nclass MultiDomainBasicAuth(AuthBase):\n\n def __init__(self, prompting=True, index_urls=None):\n # type: (bool, Optional[Values]) -> None\n self.prompting = prompting\n self.index_urls = index_urls\n self.passwords = {} # type: Dict[str, AuthInfo]\n # When the user is prompted to enter credentials and keyring is\n # available, we will offer to save them. If the user accepts,\n # this value is set to the credentials they entered. After the\n # request authenticates, the caller should call\n # ``save_credentials`` to save these.\n self._credentials_to_save = None # type: Optional[Credentials]\n\n def _get_index_url(self, url):\n \"\"\"Return the original index URL matching the requested URL.\n\n Cached or dynamically generated credentials may work against\n the original index URL rather than just the netloc.\n\n The provided url should have had its username and password\n removed already. 
If the original index url had credentials then\n they will be included in the return value.\n\n Returns None if no matching index was found, or if --no-index\n was specified by the user.\n \"\"\"\n if not url or not self.index_urls:\n return None\n\n for u in self.index_urls:\n prefix = remove_auth_from_url(u).rstrip(\"/\") + \"/\"\n if url.startswith(prefix):\n return u\n\n def _get_new_credentials(self, original_url, allow_netrc=True,\n allow_keyring=True):\n \"\"\"Find and return credentials for the specified URL.\"\"\"\n # Split the credentials and netloc from the url.\n url, netloc, url_user_password = split_auth_netloc_from_url(\n original_url,\n )\n\n # Start with the credentials embedded in the url\n username, password = url_user_password\n if username is not None and password is not None:\n logger.debug(\"Found credentials in url for %s\", netloc)\n return url_user_password\n\n # Find a matching index url for this request\n index_url = self._get_index_url(url)\n if index_url:\n # Split the credentials from the url.\n index_info = split_auth_netloc_from_url(index_url)\n if index_info:\n index_url, _, index_url_user_password = index_info\n logger.debug(\"Found index url %s\", index_url)\n\n # If an index URL was found, try its embedded credentials\n if index_url and index_url_user_password[0] is not None:\n username, password = index_url_user_password\n if username is not None and password is not None:\n logger.debug(\"Found credentials in index url for %s\", netloc)\n return index_url_user_password\n\n # Get creds from netrc if we still don't have them\n if allow_netrc:\n netrc_auth = get_netrc_auth(original_url)\n if netrc_auth:\n logger.debug(\"Found credentials in netrc for %s\", netloc)\n return netrc_auth\n\n # If we don't have a password and keyring is available, use it.\n if allow_keyring:\n # The index url is more specific than the netloc, so try it first\n kr_auth = (\n get_keyring_auth(index_url, username) or\n get_keyring_auth(netloc, username)\n )\n if kr_auth:\n logger.debug(\"Found credentials in keyring for %s\", netloc)\n return kr_auth\n\n return username, password\n\n def _get_url_and_credentials(self, original_url):\n \"\"\"Return the credentials to use for the provided URL.\n\n If allowed, netrc and keyring may be used to obtain the\n correct credentials.\n\n Returns (url_without_credentials, username, password). Note\n that even if the original URL contains credentials, this\n function may return a different username and password.\n \"\"\"\n url, netloc, _ = split_auth_netloc_from_url(original_url)\n\n # Use any stored credentials that we have for this netloc\n username, password = self.passwords.get(netloc, (None, None))\n\n if username is None and password is None:\n # No stored credentials. Acquire new credentials without prompting\n # the user. (e.g. 
from netrc, keyring, or the URL itself)\n username, password = self._get_new_credentials(original_url)\n\n if username is not None or password is not None:\n # Convert the username and password if they're None, so that\n # this netloc will show up as \"cached\" in the conditional above.\n # Further, HTTPBasicAuth doesn't accept None, so it makes sense to\n # cache the value that is going to be used.\n username = username or \"\"\n password = password or \"\"\n\n # Store any acquired credentials.\n self.passwords[netloc] = (username, password)\n\n assert (\n # Credentials were found\n (username is not None and password is not None) or\n # Credentials were not found\n (username is None and password is None)\n ), \"Could not load credentials from url: {}\".format(original_url)\n\n return url, username, password\n\n def __call__(self, req):\n # Get credentials for this request\n url, username, password = self._get_url_and_credentials(req.url)\n\n # Set the url of the request to the url without any credentials\n req.url = url\n\n if username is not None and password is not None:\n # Send the basic auth with this request\n req = HTTPBasicAuth(username, password)(req)\n\n # Attach a hook to handle 401 responses\n req.register_hook(\"response\", self.handle_401)\n\n return req\n\n # Factored out to allow for easy patching in tests\n def _prompt_for_password(self, netloc):\n username = ask_input(\"User for %s: \" % netloc)\n if not username:\n return None, None\n auth = get_keyring_auth(netloc, username)\n if auth:\n return auth[0], auth[1], False\n password = ask_password(\"Password: \")\n return username, password, True\n\n # Factored out to allow for easy patching in tests\n def _should_save_password_to_keyring(self):\n if not keyring:\n return False\n return ask(\"Save credentials to keyring [y/N]: \", [\"y\", \"n\"]) == \"y\"\n\n def handle_401(self, resp, **kwargs):\n # We only care about 401 responses, anything else we want to just\n # pass through the actual response\n if resp.status_code != 401:\n return resp\n\n # We are not able to prompt the user so simply return the response\n if not self.prompting:\n return resp\n\n parsed = urllib_parse.urlparse(resp.url)\n\n # Prompt the user for a new username and password\n username, password, save = self._prompt_for_password(parsed.netloc)\n\n # Store the new username and password to use for future requests\n self._credentials_to_save = None\n if username is not None and password is not None:\n self.passwords[parsed.netloc] = (username, password)\n\n # Prompt to save the password to keyring\n if save and self._should_save_password_to_keyring():\n self._credentials_to_save = (parsed.netloc, username, password)\n\n # Consume content and release the original connection to allow our new\n # request to reuse the same one.\n resp.content\n resp.raw.release_conn()\n\n # Add our new username and password to the request\n req = HTTPBasicAuth(username or \"\", password or \"\")(resp.request)\n req.register_hook(\"response\", self.warn_on_401)\n\n # On successful request, save the credentials that were used to\n # keyring. 
(Note that if the user responded \"no\" above, this member\n # is not set and nothing will be saved.)\n if self._credentials_to_save:\n req.register_hook(\"response\", self.save_credentials)\n\n # Send our new request\n new_resp = resp.connection.send(req, **kwargs)\n new_resp.history.append(resp)\n\n return new_resp\n\n def warn_on_401(self, resp, **kwargs):\n \"\"\"Response callback to warn about incorrect credentials.\"\"\"\n if resp.status_code == 401:\n logger.warning(\n '401 Error, Credentials not correct for %s', resp.request.url,\n )\n\n def save_credentials(self, resp, **kwargs):\n \"\"\"Response callback to save credentials on success.\"\"\"\n assert keyring is not None, \"should never reach here without keyring\"\n if not keyring:\n return\n\n creds = self._credentials_to_save\n self._credentials_to_save = None\n if creds and resp.status_code < 400:\n try:\n logger.info('Saving credentials to keyring')\n keyring.set_password(*creds)\n except Exception:\n logger.exception('Failed to save credentials')\n", "path": "src/pip/_internal/network/auth.py"}], "after_files": [{"content": "\"\"\"Contains purely network-related utilities.\n\"\"\"\n", "path": "src/pip/_internal/network/__init__.py"}, {"content": "\"\"\"Network Authentication Helpers\n\nContains interface (MultiDomainBasicAuth) and associated glue code for\nproviding credentials in the context of network requests.\n\"\"\"\n\nimport logging\n\nfrom pip._vendor.requests.auth import AuthBase, HTTPBasicAuth\nfrom pip._vendor.requests.utils import get_netrc_auth\nfrom pip._vendor.six.moves.urllib import parse as urllib_parse\n\nfrom pip._internal.utils.misc import (\n ask,\n ask_input,\n ask_password,\n remove_auth_from_url,\n split_auth_netloc_from_url,\n)\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from optparse import Values\n from typing import Dict, Optional, Tuple\n\n from pip._internal.vcs.versioncontrol import AuthInfo\n\n Credentials = Tuple[str, str, str]\n\nlogger = logging.getLogger(__name__)\n\ntry:\n import keyring # noqa\nexcept ImportError:\n keyring = None\nexcept Exception as exc:\n logger.warning(\n \"Keyring is skipped due to an exception: %s\", str(exc),\n )\n keyring = None\n\n\ndef get_keyring_auth(url, username):\n \"\"\"Return the tuple auth for a given url from keyring.\"\"\"\n if not url or not keyring:\n return None\n\n try:\n try:\n get_credential = keyring.get_credential\n except AttributeError:\n pass\n else:\n logger.debug(\"Getting credentials from keyring for %s\", url)\n cred = get_credential(url, username)\n if cred is not None:\n return cred.username, cred.password\n return None\n\n if username:\n logger.debug(\"Getting password from keyring for %s\", url)\n password = keyring.get_password(url, username)\n if password:\n return username, password\n\n except Exception as exc:\n logger.warning(\n \"Keyring is skipped due to an exception: %s\", str(exc),\n )\n\n\nclass MultiDomainBasicAuth(AuthBase):\n\n def __init__(self, prompting=True, index_urls=None):\n # type: (bool, Optional[Values]) -> None\n self.prompting = prompting\n self.index_urls = index_urls\n self.passwords = {} # type: Dict[str, AuthInfo]\n # When the user is prompted to enter credentials and keyring is\n # available, we will offer to save them. If the user accepts,\n # this value is set to the credentials they entered. 
After the\n # request authenticates, the caller should call\n # ``save_credentials`` to save these.\n self._credentials_to_save = None # type: Optional[Credentials]\n\n def _get_index_url(self, url):\n \"\"\"Return the original index URL matching the requested URL.\n\n Cached or dynamically generated credentials may work against\n the original index URL rather than just the netloc.\n\n The provided url should have had its username and password\n removed already. If the original index url had credentials then\n they will be included in the return value.\n\n Returns None if no matching index was found, or if --no-index\n was specified by the user.\n \"\"\"\n if not url or not self.index_urls:\n return None\n\n for u in self.index_urls:\n prefix = remove_auth_from_url(u).rstrip(\"/\") + \"/\"\n if url.startswith(prefix):\n return u\n\n def _get_new_credentials(self, original_url, allow_netrc=True,\n allow_keyring=True):\n \"\"\"Find and return credentials for the specified URL.\"\"\"\n # Split the credentials and netloc from the url.\n url, netloc, url_user_password = split_auth_netloc_from_url(\n original_url,\n )\n\n # Start with the credentials embedded in the url\n username, password = url_user_password\n if username is not None and password is not None:\n logger.debug(\"Found credentials in url for %s\", netloc)\n return url_user_password\n\n # Find a matching index url for this request\n index_url = self._get_index_url(url)\n if index_url:\n # Split the credentials from the url.\n index_info = split_auth_netloc_from_url(index_url)\n if index_info:\n index_url, _, index_url_user_password = index_info\n logger.debug(\"Found index url %s\", index_url)\n\n # If an index URL was found, try its embedded credentials\n if index_url and index_url_user_password[0] is not None:\n username, password = index_url_user_password\n if username is not None and password is not None:\n logger.debug(\"Found credentials in index url for %s\", netloc)\n return index_url_user_password\n\n # Get creds from netrc if we still don't have them\n if allow_netrc:\n netrc_auth = get_netrc_auth(original_url)\n if netrc_auth:\n logger.debug(\"Found credentials in netrc for %s\", netloc)\n return netrc_auth\n\n # If we don't have a password and keyring is available, use it.\n if allow_keyring:\n # The index url is more specific than the netloc, so try it first\n kr_auth = (\n get_keyring_auth(index_url, username) or\n get_keyring_auth(netloc, username)\n )\n if kr_auth:\n logger.debug(\"Found credentials in keyring for %s\", netloc)\n return kr_auth\n\n return username, password\n\n def _get_url_and_credentials(self, original_url):\n \"\"\"Return the credentials to use for the provided URL.\n\n If allowed, netrc and keyring may be used to obtain the\n correct credentials.\n\n Returns (url_without_credentials, username, password). Note\n that even if the original URL contains credentials, this\n function may return a different username and password.\n \"\"\"\n url, netloc, _ = split_auth_netloc_from_url(original_url)\n\n # Use any stored credentials that we have for this netloc\n username, password = self.passwords.get(netloc, (None, None))\n\n if username is None and password is None:\n # No stored credentials. Acquire new credentials without prompting\n # the user. (e.g. 
from netrc, keyring, or the URL itself)\n username, password = self._get_new_credentials(original_url)\n\n if username is not None or password is not None:\n # Convert the username and password if they're None, so that\n # this netloc will show up as \"cached\" in the conditional above.\n # Further, HTTPBasicAuth doesn't accept None, so it makes sense to\n # cache the value that is going to be used.\n username = username or \"\"\n password = password or \"\"\n\n # Store any acquired credentials.\n self.passwords[netloc] = (username, password)\n\n assert (\n # Credentials were found\n (username is not None and password is not None) or\n # Credentials were not found\n (username is None and password is None)\n ), \"Could not load credentials from url: {}\".format(original_url)\n\n return url, username, password\n\n def __call__(self, req):\n # Get credentials for this request\n url, username, password = self._get_url_and_credentials(req.url)\n\n # Set the url of the request to the url without any credentials\n req.url = url\n\n if username is not None and password is not None:\n # Send the basic auth with this request\n req = HTTPBasicAuth(username, password)(req)\n\n # Attach a hook to handle 401 responses\n req.register_hook(\"response\", self.handle_401)\n\n return req\n\n # Factored out to allow for easy patching in tests\n def _prompt_for_password(self, netloc):\n username = ask_input(\"User for %s: \" % netloc)\n if not username:\n return None, None\n auth = get_keyring_auth(netloc, username)\n if auth:\n return auth[0], auth[1], False\n password = ask_password(\"Password: \")\n return username, password, True\n\n # Factored out to allow for easy patching in tests\n def _should_save_password_to_keyring(self):\n if not keyring:\n return False\n return ask(\"Save credentials to keyring [y/N]: \", [\"y\", \"n\"]) == \"y\"\n\n def handle_401(self, resp, **kwargs):\n # We only care about 401 responses, anything else we want to just\n # pass through the actual response\n if resp.status_code != 401:\n return resp\n\n # We are not able to prompt the user so simply return the response\n if not self.prompting:\n return resp\n\n parsed = urllib_parse.urlparse(resp.url)\n\n # Prompt the user for a new username and password\n username, password, save = self._prompt_for_password(parsed.netloc)\n\n # Store the new username and password to use for future requests\n self._credentials_to_save = None\n if username is not None and password is not None:\n self.passwords[parsed.netloc] = (username, password)\n\n # Prompt to save the password to keyring\n if save and self._should_save_password_to_keyring():\n self._credentials_to_save = (parsed.netloc, username, password)\n\n # Consume content and release the original connection to allow our new\n # request to reuse the same one.\n resp.content\n resp.raw.release_conn()\n\n # Add our new username and password to the request\n req = HTTPBasicAuth(username or \"\", password or \"\")(resp.request)\n req.register_hook(\"response\", self.warn_on_401)\n\n # On successful request, save the credentials that were used to\n # keyring. 
(Note that if the user responded \"no\" above, this member\n # is not set and nothing will be saved.)\n if self._credentials_to_save:\n req.register_hook(\"response\", self.save_credentials)\n\n # Send our new request\n new_resp = resp.connection.send(req, **kwargs)\n new_resp.history.append(resp)\n\n return new_resp\n\n def warn_on_401(self, resp, **kwargs):\n \"\"\"Response callback to warn about incorrect credentials.\"\"\"\n if resp.status_code == 401:\n logger.warning(\n '401 Error, Credentials not correct for %s', resp.request.url,\n )\n\n def save_credentials(self, resp, **kwargs):\n \"\"\"Response callback to save credentials on success.\"\"\"\n assert keyring is not None, \"should never reach here without keyring\"\n if not keyring:\n return\n\n creds = self._credentials_to_save\n self._credentials_to_save = None\n if creds and resp.status_code < 400:\n try:\n logger.info('Saving credentials to keyring')\n keyring.set_password(*creds)\n except Exception:\n logger.exception('Failed to save credentials')\n", "path": "src/pip/_internal/network/auth.py"}]}
| 3,423 | 180 |
gh_patches_debug_18214
|
rasdani/github-patches
|
git_diff
|
python-poetry__poetry-1954
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
command to check lock file
<!-- Checked checkbox should look like this: [x] -->
- [X] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
## Issue
Would it be possible to add a `poetry check` command (oh, I see you've used that already, maybe the command could be extended or have a `poetry lock --check`?) that will return zero if the current poetry.lock is both correct w.r.t. the pyproject.toml (i.e. packages match) *and* matches the output of `poetry lock`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/console/commands/lock.py`
Content:
```
1 from cleo.helpers import option
2
3 from .installer_command import InstallerCommand
4
5
6 class LockCommand(InstallerCommand):
7
8 name = "lock"
9 description = "Locks the project dependencies."
10
11 options = [
12 option(
13 "no-update", None, "Do not update locked versions, only refresh lock file."
14 ),
15 ]
16
17 help = """
18 The <info>lock</info> command reads the <comment>pyproject.toml</> file from the
19 current directory, processes it, and locks the dependencies in the <comment>poetry.lock</>
20 file.
21
22 <info>poetry lock</info>
23 """
24
25 loggers = ["poetry.repositories.pypi_repository"]
26
27 def handle(self) -> int:
28 self._installer.use_executor(
29 self.poetry.config.get("experimental.new-installer", False)
30 )
31
32 self._installer.lock(update=not self.option("no-update"))
33
34 return self._installer.run()
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/poetry/console/commands/lock.py b/poetry/console/commands/lock.py
--- a/poetry/console/commands/lock.py
+++ b/poetry/console/commands/lock.py
@@ -12,6 +12,12 @@
option(
"no-update", None, "Do not update locked versions, only refresh lock file."
),
+ option(
+ "check",
+ None,
+ "Check that the <comment>poetry.lock</> file corresponds to the current version "
+ "of <comment>pyproject.toml</>.",
+ ),
]
help = """
@@ -29,6 +35,13 @@
self.poetry.config.get("experimental.new-installer", False)
)
+ if self.option("check"):
+ return (
+ 0
+ if self.poetry.locker.is_locked() and self.poetry.locker.is_fresh()
+ else 1
+ )
+
self._installer.lock(update=not self.option("no-update"))
return self._installer.run()
|
{"golden_diff": "diff --git a/poetry/console/commands/lock.py b/poetry/console/commands/lock.py\n--- a/poetry/console/commands/lock.py\n+++ b/poetry/console/commands/lock.py\n@@ -12,6 +12,12 @@\n option(\n \"no-update\", None, \"Do not update locked versions, only refresh lock file.\"\n ),\n+ option(\n+ \"check\",\n+ None,\n+ \"Check that the <comment>poetry.lock</> file corresponds to the current version \"\n+ \"of <comment>pyproject.toml</>.\",\n+ ),\n ]\n \n help = \"\"\"\n@@ -29,6 +35,13 @@\n self.poetry.config.get(\"experimental.new-installer\", False)\n )\n \n+ if self.option(\"check\"):\n+ return (\n+ 0\n+ if self.poetry.locker.is_locked() and self.poetry.locker.is_fresh()\n+ else 1\n+ )\n+\n self._installer.lock(update=not self.option(\"no-update\"))\n \n return self._installer.run()\n", "issue": "command to check lock file\n<!-- Checked checkbox should look like this: [x] -->\r\n- [X] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n\r\n## Issue\r\n\r\nWould it be possible to add a `poetry check` command (oh, I see you'e used that already, maybe the command could be extended or have a `poetry lock --check`?) that will return zero if the current poetry.lock is both correct w.r.t. the pyproject.toml (ie: packages match) *and* that it matches the output of `poetry lock`.\r\n\n", "before_files": [{"content": "from cleo.helpers import option\n\nfrom .installer_command import InstallerCommand\n\n\nclass LockCommand(InstallerCommand):\n\n name = \"lock\"\n description = \"Locks the project dependencies.\"\n\n options = [\n option(\n \"no-update\", None, \"Do not update locked versions, only refresh lock file.\"\n ),\n ]\n\n help = \"\"\"\nThe <info>lock</info> command reads the <comment>pyproject.toml</> file from the\ncurrent directory, processes it, and locks the dependencies in the <comment>poetry.lock</>\nfile.\n\n<info>poetry lock</info>\n\"\"\"\n\n loggers = [\"poetry.repositories.pypi_repository\"]\n\n def handle(self) -> int:\n self._installer.use_executor(\n self.poetry.config.get(\"experimental.new-installer\", False)\n )\n\n self._installer.lock(update=not self.option(\"no-update\"))\n\n return self._installer.run()\n", "path": "poetry/console/commands/lock.py"}], "after_files": [{"content": "from cleo.helpers import option\n\nfrom .installer_command import InstallerCommand\n\n\nclass LockCommand(InstallerCommand):\n\n name = \"lock\"\n description = \"Locks the project dependencies.\"\n\n options = [\n option(\n \"no-update\", None, \"Do not update locked versions, only refresh lock file.\"\n ),\n option(\n \"check\",\n None,\n \"Check that the <comment>poetry.lock</> file corresponds to the current version \"\n \"of <comment>pyproject.toml</>.\",\n ),\n ]\n\n help = \"\"\"\nThe <info>lock</info> command reads the <comment>pyproject.toml</> file from the\ncurrent directory, processes it, and locks the dependencies in the <comment>poetry.lock</>\nfile.\n\n<info>poetry lock</info>\n\"\"\"\n\n loggers = [\"poetry.repositories.pypi_repository\"]\n\n def handle(self) -> int:\n self._installer.use_executor(\n self.poetry.config.get(\"experimental.new-installer\", False)\n )\n\n if self.option(\"check\"):\n return (\n 0\n if self.poetry.locker.is_locked() and self.poetry.locker.is_fresh()\n else 1\n )\n\n self._installer.lock(update=not self.option(\"no-update\"))\n\n return self._installer.run()\n", "path": "poetry/console/commands/lock.py"}]}
| 666 | 243 |
gh_patches_debug_39342
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-5564
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
VS generator not exporting for <Lib> tag in vcxproj
The Conan-handled libs need to be referenced in the <Lib> tags of the .vcxproj. The Conan VS generator currently exports them only in the <Link> tag.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/generators/visualstudio.py`
Content:
```
1 import os
2 import re
3
4 from conans.model import Generator
5 from conans.paths import BUILD_INFO_VISUAL_STUDIO
6 from conans.client.tools.files import VALID_LIB_EXTENSIONS
7
8
9 class VisualStudioGenerator(Generator):
10
11 template = '''<?xml version="1.0" encoding="utf-8"?>
12 <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
13 <ImportGroup Label="PropertySheets" />
14 <PropertyGroup Label="UserMacros" />
15 <PropertyGroup Label="Conan-RootDirs">{item_properties}
16 </PropertyGroup>
17 {properties}
18 <ItemGroup />
19 </Project>'''
20
21 properties_template = ''' <PropertyGroup Label="ConanVariables"{condition}>
22 <ConanBinaryDirectories>{bin_dirs};%(ConanBinaryDirectories)</ConanBinaryDirectories>
23 <ConanResourceDirectories>{res_dirs};%(ConanResourceDirectories)$(</ConanResourceDirectories>
24 </PropertyGroup>
25 <PropertyGroup{condition}>
26 <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>
27 <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>
28 </PropertyGroup>
29 <ItemDefinitionGroup{condition}>
30 <ClCompile>
31 <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
32 <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>
33 <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>
34 </ClCompile>
35 <Link>
36 <AdditionalLibraryDirectories>{lib_dirs}%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
37 <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>
38 <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>
39 </Link>
40 <Midl>
41 <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
42 </Midl>
43 <ResourceCompile>
44 <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
45 <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>
46 <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>
47 </ResourceCompile>
48 </ItemDefinitionGroup>'''
49
50 item_template = '''
51 <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''
52
53 def _format_items(self):
54 sections = []
55 for dep_name, cpp_info in self._deps_build_info.dependencies:
56 fields = {
57 'root_dir': cpp_info.rootpath,
58 'name': dep_name.replace(".", "-")
59 }
60 section = self.item_template.format(**fields)
61 sections.append(section)
62 return "".join(sections)
63
64 @property
65 def filename(self):
66 return BUILD_INFO_VISUAL_STUDIO
67
68 def _format_properties(self, build_info, condition):
69 def has_valid_ext(lib):
70 ext = os.path.splitext(lib)[1]
71 return ext in VALID_LIB_EXTENSIONS
72
73 fields = {
74 'condition': condition,
75 'bin_dirs': "".join("%s;" % p for p in build_info.bin_paths),
76 'res_dirs': "".join("%s;" % p for p in build_info.res_paths),
77 'include_dirs': "".join("%s;" % p for p in build_info.include_paths),
78 'lib_dirs': "".join("%s;" % p for p in build_info.lib_paths),
79 'libs': "".join(['%s.lib;' % lib if not has_valid_ext(lib)
80 else '%s;' % lib for lib in build_info.libs]),
81 'definitions': "".join("%s;" % d for d in build_info.defines),
82 'compiler_flags': " ".join(build_info.cxxflags + build_info.cflags),
83 'linker_flags': " ".join(build_info.sharedlinkflags),
84 'exe_flags': " ".join(build_info.exelinkflags)
85 }
86 formatted_template = self.properties_template.format(**fields)
87 return formatted_template
88
89 @property
90 def content(self):
91 per_item_props = self._format_items()
92
93 properties = [self._format_properties(self._deps_build_info, condition='')]
94 for config, cpp_info in self._deps_build_info.configs.items():
95 condition = " Condition=\"'$(Configuration)' == '%s'\"" % config
96 properties.append(self._format_properties(cpp_info, condition=condition))
97
98 fields = {
99 'item_properties': per_item_props,
100 'properties': '\n'.join(properties)
101 }
102 formatted_template = self.template.format(**fields)
103
104 userprofile = os.getenv("USERPROFILE")
105 if userprofile:
106 userprofile = userprofile.replace("\\", "\\\\")
107 formatted_template = re.sub(userprofile, "$(USERPROFILE)", formatted_template,
108 flags=re.I)
109 return formatted_template
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conans/client/generators/visualstudio.py b/conans/client/generators/visualstudio.py
--- a/conans/client/generators/visualstudio.py
+++ b/conans/client/generators/visualstudio.py
@@ -19,8 +19,12 @@
</Project>'''
properties_template = ''' <PropertyGroup Label="ConanVariables"{condition}>
- <ConanBinaryDirectories>{bin_dirs};%(ConanBinaryDirectories)</ConanBinaryDirectories>
- <ConanResourceDirectories>{res_dirs};%(ConanResourceDirectories)$(</ConanResourceDirectories>
+ <ConanCompilerFlags>{compiler_flags}</ConanCompilerFlags>
+ <ConanPreprocessorDefinitions>{definitions}</ConanPreprocessorDefinitions>
+ <ConanIncludeDirectories>{include_dirs}</ConanIncludeDirectories>
+ <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>
+ <ConanLibraryDirectories>{lib_dirs}</ConanLibraryDirectories>
+ <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>
</PropertyGroup>
<PropertyGroup{condition}>
<LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>
@@ -28,22 +32,26 @@
</PropertyGroup>
<ItemDefinitionGroup{condition}>
<ClCompile>
- <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
- <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>
- <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>
+ <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
+ <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>
+ <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>
</ClCompile>
<Link>
- <AdditionalLibraryDirectories>{lib_dirs}%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
+ <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>
<AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>
</Link>
+ <Lib>
+ <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
+ <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>
+ </Lib>
<Midl>
- <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
+ <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
</Midl>
<ResourceCompile>
- <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
- <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>
- <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>
+ <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
+ <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>
+ <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>
</ResourceCompile>
</ItemDefinitionGroup>'''
|
{"golden_diff": "diff --git a/conans/client/generators/visualstudio.py b/conans/client/generators/visualstudio.py\n--- a/conans/client/generators/visualstudio.py\n+++ b/conans/client/generators/visualstudio.py\n@@ -19,8 +19,12 @@\n </Project>'''\n \n properties_template = ''' <PropertyGroup Label=\"ConanVariables\"{condition}>\n- <ConanBinaryDirectories>{bin_dirs};%(ConanBinaryDirectories)</ConanBinaryDirectories>\n- <ConanResourceDirectories>{res_dirs};%(ConanResourceDirectories)$(</ConanResourceDirectories>\n+ <ConanCompilerFlags>{compiler_flags}</ConanCompilerFlags>\n+ <ConanPreprocessorDefinitions>{definitions}</ConanPreprocessorDefinitions>\n+ <ConanIncludeDirectories>{include_dirs}</ConanIncludeDirectories>\n+ <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>\n+ <ConanLibraryDirectories>{lib_dirs}</ConanLibraryDirectories>\n+ <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>\n </PropertyGroup>\n <PropertyGroup{condition}>\n <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>\n@@ -28,22 +32,26 @@\n </PropertyGroup>\n <ItemDefinitionGroup{condition}>\n <ClCompile>\n- <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n- <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n- <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n+ <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n+ <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>\n+ <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>\n </ClCompile>\n <Link>\n- <AdditionalLibraryDirectories>{lib_dirs}%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n+ <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>\n </Link>\n+ <Lib>\n+ <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n+ <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n+ </Lib>\n <Midl>\n- <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n+ <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </Midl>\n <ResourceCompile>\n- <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n- <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n- <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n+ <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n+ <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>\n+ <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>\n </ResourceCompile>\n </ItemDefinitionGroup>'''\n", "issue": "VS generator not exporting for <Lib> tag in vcxproj\nNeed to add Conan-handled lib to be referenced in .vcxproj in <Lib> tags. Conan VS-generator is exporting only in <Link> tag. 
\r\n\n", "before_files": [{"content": "import os\nimport re\n\nfrom conans.model import Generator\nfrom conans.paths import BUILD_INFO_VISUAL_STUDIO\nfrom conans.client.tools.files import VALID_LIB_EXTENSIONS\n\n\nclass VisualStudioGenerator(Generator):\n\n template = '''<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n <ImportGroup Label=\"PropertySheets\" />\n <PropertyGroup Label=\"UserMacros\" />\n <PropertyGroup Label=\"Conan-RootDirs\">{item_properties}\n </PropertyGroup>\n {properties}\n <ItemGroup />\n</Project>'''\n\n properties_template = ''' <PropertyGroup Label=\"ConanVariables\"{condition}>\n <ConanBinaryDirectories>{bin_dirs};%(ConanBinaryDirectories)</ConanBinaryDirectories>\n <ConanResourceDirectories>{res_dirs};%(ConanResourceDirectories)$(</ConanResourceDirectories>\n </PropertyGroup>\n <PropertyGroup{condition}>\n <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>\n <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>\n </PropertyGroup>\n <ItemDefinitionGroup{condition}>\n <ClCompile>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n </ClCompile>\n <Link>\n <AdditionalLibraryDirectories>{lib_dirs}%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>\n </Link>\n <Midl>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </Midl>\n <ResourceCompile>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n </ResourceCompile>\n </ItemDefinitionGroup>'''\n\n item_template = '''\n <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''\n\n def _format_items(self):\n sections = []\n for dep_name, cpp_info in self._deps_build_info.dependencies:\n fields = {\n 'root_dir': cpp_info.rootpath,\n 'name': dep_name.replace(\".\", \"-\")\n }\n section = self.item_template.format(**fields)\n sections.append(section)\n return \"\".join(sections)\n\n @property\n def filename(self):\n return BUILD_INFO_VISUAL_STUDIO\n\n def _format_properties(self, build_info, condition):\n def has_valid_ext(lib):\n ext = os.path.splitext(lib)[1]\n return ext in VALID_LIB_EXTENSIONS\n\n fields = {\n 'condition': condition,\n 'bin_dirs': \"\".join(\"%s;\" % p for p in build_info.bin_paths),\n 'res_dirs': \"\".join(\"%s;\" % p for p in build_info.res_paths),\n 'include_dirs': \"\".join(\"%s;\" % p for p in build_info.include_paths),\n 'lib_dirs': \"\".join(\"%s;\" % p for p in build_info.lib_paths),\n 'libs': \"\".join(['%s.lib;' % lib if not has_valid_ext(lib)\n else '%s;' % lib for lib in build_info.libs]),\n 'definitions': \"\".join(\"%s;\" % d for d in build_info.defines),\n 'compiler_flags': \" \".join(build_info.cxxflags + build_info.cflags),\n 'linker_flags': \" \".join(build_info.sharedlinkflags),\n 'exe_flags': \" \".join(build_info.exelinkflags)\n }\n formatted_template = self.properties_template.format(**fields)\n 
return formatted_template\n\n @property\n def content(self):\n per_item_props = self._format_items()\n\n properties = [self._format_properties(self._deps_build_info, condition='')]\n for config, cpp_info in self._deps_build_info.configs.items():\n condition = \" Condition=\\\"'$(Configuration)' == '%s'\\\"\" % config\n properties.append(self._format_properties(cpp_info, condition=condition))\n\n fields = {\n 'item_properties': per_item_props,\n 'properties': '\\n'.join(properties)\n }\n formatted_template = self.template.format(**fields)\n\n userprofile = os.getenv(\"USERPROFILE\")\n if userprofile:\n userprofile = userprofile.replace(\"\\\\\", \"\\\\\\\\\")\n formatted_template = re.sub(userprofile, \"$(USERPROFILE)\", formatted_template,\n flags=re.I)\n return formatted_template\n", "path": "conans/client/generators/visualstudio.py"}], "after_files": [{"content": "import os\nimport re\n\nfrom conans.model import Generator\nfrom conans.paths import BUILD_INFO_VISUAL_STUDIO\nfrom conans.client.tools.files import VALID_LIB_EXTENSIONS\n\n\nclass VisualStudioGenerator(Generator):\n\n template = '''<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n <ImportGroup Label=\"PropertySheets\" />\n <PropertyGroup Label=\"UserMacros\" />\n <PropertyGroup Label=\"Conan-RootDirs\">{item_properties}\n </PropertyGroup>\n {properties}\n <ItemGroup />\n</Project>'''\n\n properties_template = ''' <PropertyGroup Label=\"ConanVariables\"{condition}>\n <ConanCompilerFlags>{compiler_flags}</ConanCompilerFlags>\n <ConanPreprocessorDefinitions>{definitions}</ConanPreprocessorDefinitions>\n <ConanIncludeDirectories>{include_dirs}</ConanIncludeDirectories>\n <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>\n <ConanLibraryDirectories>{lib_dirs}</ConanLibraryDirectories>\n <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>\n </PropertyGroup>\n <PropertyGroup{condition}>\n <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>\n <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>\n </PropertyGroup>\n <ItemDefinitionGroup{condition}>\n <ClCompile>\n <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>\n </ClCompile>\n <Link>\n <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>\n </Link>\n <Lib>\n <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n </Lib>\n <Midl>\n <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </Midl>\n <ResourceCompile>\n <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>\n </ResourceCompile>\n 
</ItemDefinitionGroup>'''\n\n item_template = '''\n <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''\n\n def _format_items(self):\n sections = []\n for dep_name, cpp_info in self._deps_build_info.dependencies:\n fields = {\n 'root_dir': cpp_info.rootpath,\n 'name': dep_name.replace(\".\", \"-\")\n }\n section = self.item_template.format(**fields)\n sections.append(section)\n return \"\".join(sections)\n\n @property\n def filename(self):\n return BUILD_INFO_VISUAL_STUDIO\n\n def _format_properties(self, build_info, condition):\n def has_valid_ext(lib):\n ext = os.path.splitext(lib)[1]\n return ext in VALID_LIB_EXTENSIONS\n\n fields = {\n 'condition': condition,\n 'bin_dirs': \"\".join(\"%s;\" % p for p in build_info.bin_paths),\n 'res_dirs': \"\".join(\"%s;\" % p for p in build_info.res_paths),\n 'include_dirs': \"\".join(\"%s;\" % p for p in build_info.include_paths),\n 'lib_dirs': \"\".join(\"%s;\" % p for p in build_info.lib_paths),\n 'libs': \"\".join(['%s.lib;' % lib if not has_valid_ext(lib)\n else '%s;' % lib for lib in build_info.libs]),\n 'definitions': \"\".join(\"%s;\" % d for d in build_info.defines),\n 'compiler_flags': \" \".join(build_info.cxxflags + build_info.cflags),\n 'linker_flags': \" \".join(build_info.sharedlinkflags),\n 'exe_flags': \" \".join(build_info.exelinkflags)\n }\n formatted_template = self.properties_template.format(**fields)\n return formatted_template\n\n @property\n def content(self):\n per_item_props = self._format_items()\n\n properties = [self._format_properties(self._deps_build_info, condition='')]\n for config, cpp_info in self._deps_build_info.configs.items():\n condition = \" Condition=\\\"'$(Configuration)' == '%s'\\\"\" % config\n properties.append(self._format_properties(cpp_info, condition=condition))\n\n fields = {\n 'item_properties': per_item_props,\n 'properties': '\\n'.join(properties)\n }\n formatted_template = self.template.format(**fields)\n\n userprofile = os.getenv(\"USERPROFILE\")\n if userprofile:\n userprofile = userprofile.replace(\"\\\\\", \"\\\\\\\\\")\n formatted_template = re.sub(userprofile, \"$(USERPROFILE)\", formatted_template,\n flags=re.I)\n return formatted_template\n", "path": "conans/client/generators/visualstudio.py"}]}
| 1,558 | 725 |
gh_patches_debug_9185
|
rasdani/github-patches
|
git_diff
|
fidals__shopelectro-729
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
502 error on category pages
We can reproduce it with the code below.
It seems to be connected with https://github.com/fidals/shopelectro/issues/648, but this hasn't been verified yet.
```
import requests
BASE = 'https://www.shopelectro.ru'
broken_urls = []
for c in CategoryPage.objects.all():
response = requests.get(BASE + c.url)
if response.status_code != 200:
broken_urls += (response.status_code, response.url)
print('\n'.join((str(url) for url in broken_urls)))
---
# output:
502
https://www.shopelectro.ru/catalog/categories/batarei-saliutov-moshchnye/
502
https://www.shopelectro.ru/catalog/categories/khlopushki-pnevmaticheskie/
502
https://www.shopelectro.ru/catalog/categories/izmeritelnye-pribory-211/
502
https://www.shopelectro.ru/catalog/categories/dlia-tiporazmera-aaaaa/
502
https://www.shopelectro.ru/catalog/categories/tiporazmer-aa/
502
https://www.shopelectro.ru/catalog/categories/tiporazmer-6f22/
502
https://www.shopelectro.ru/catalog/categories/invertory/
502
https://www.shopelectro.ru/catalog/categories/palchikovye-aa-r6-lr6-fr6-316/
502
https://www.shopelectro.ru/catalog/categories/mizinchikovye-aaa-r03-lr03-fr03-286/
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/celery.py`
Content:
```
1 from __future__ import absolute_import, unicode_literals
2 from datetime import timedelta
3 import os
4
5 from celery import Celery
6 from kombu import Exchange, Queue
7
8 # set the default Django settings module for the 'celery' program.
9 os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'shopelectro.settings.local')
10
11 app = Celery('shopelectro')
12
13 # Exchanges
14 default_exchange = Exchange('default', type='direct')
15 utils_exchange = Exchange('utils', type='direct')
16
17 # http://docs.celeryproject.org/en/latest/userguide/tasks.html
18 task_queues = (
19 Queue(
20 name='default',
21 exchange=default_exchange,
22 routing_key='default',
23 ),
24 Queue(
25 name='mail',
26 exchange=utils_exchange,
27 routing_key='utils.mail',
28 ),
29 Queue(
30 name='command',
31 exchange=utils_exchange,
32 routing_key='utils.command',
33 )
34 )
35
36 # http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html
37 beat_schedule = {
38 'update-catalog': {
39 'task': 'shopelectro.tasks.update_catalog',
40 'schedule': timedelta(hours=2).total_seconds(),
41 },
42 'check-purchase': {
43 'task': 'shopelectro.tasks.check_purchase',
44 'schedule': timedelta(days=1).total_seconds(),
45 },
46 }
47
48 # http://docs.celeryproject.org/en/master/userguide/routing.html
49 task_routes = {
50 'shopelectro.tasks.update_catalog': {
51 'queue': 'command',
52 'routing_key': 'utils.command',
53 'priority': 30,
54 },
55 'shopelectro.tasks.check_purchase': {
56 'queue': 'command',
57 'routing_key': 'utils.command',
58 'priority': 20,
59 },
60 'ecommerce.tasks.send_mail': {
61 'queue': 'mail',
62 'routing_key': 'utils.mail',
63 'priority': 50,
64 },
65 }
66
67 # Using a string here means the worker don't have to serialize
68 # the configuration object to child processes.
69 # - namespace='CELERY' means all celery-related configuration keys
70 # should have a `CELERY_` prefix.
71 app.config_from_object('django.conf:settings', namespace='CELERY')
72
73 # http://docs.celeryproject.org/en/latest/userguide/configuration.html
74
75 BROCKER_URL = (
76 f'amqp://{os.environ["RABBITMQ_DEFAULT_USER"]}:{os.environ["RABBITMQ_DEFAULT_PASS"]}'
77 f'@{os.environ["RABBITMQ_URL"]}:{os.environ["RABBITMQ_PORT"]}/'
78 )
79 app.conf.update(
80 broker_url=BROCKER_URL,
81 broker_heartbeat=30,
82 task_acks_late=True,
83 task_default_queue='default',
84 task_default_exchange='default',
85 task_default_routing_key='default',
86 task_ignore_result=True,
87 worker_pool_restarts=1000,
88 task_routes=task_routes,
89 task_queues=task_queues,
90 beat_schedule=beat_schedule,
91 )
92
93 # Load task modules from all registered Django app configs.
94 app.autodiscover_tasks()
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/shopelectro/celery.py b/shopelectro/celery.py
--- a/shopelectro/celery.py
+++ b/shopelectro/celery.py
@@ -83,11 +83,11 @@
task_default_queue='default',
task_default_exchange='default',
task_default_routing_key='default',
- task_ignore_result=True,
- worker_pool_restarts=1000,
+ worker_pool_restarts=True,
task_routes=task_routes,
task_queues=task_queues,
beat_schedule=beat_schedule,
+ worker_max_memory_per_child=200000, # after 250MB will restart
)
# Load task modules from all registered Django app configs.
|
{"golden_diff": "diff --git a/shopelectro/celery.py b/shopelectro/celery.py\n--- a/shopelectro/celery.py\n+++ b/shopelectro/celery.py\n@@ -83,11 +83,11 @@\n task_default_queue='default',\n task_default_exchange='default',\n task_default_routing_key='default',\n- task_ignore_result=True,\n- worker_pool_restarts=1000,\n+ worker_pool_restarts=True,\n task_routes=task_routes,\n task_queues=task_queues,\n beat_schedule=beat_schedule,\n+ worker_max_memory_per_child=200000, # after 250MB will restart\n )\n \n # Load task modules from all registered Django app configs.\n", "issue": "502 error on category pages\nWe can reproduce it with code below.\r\n\r\nSeems it's connected with https://github.com/fidals/shopelectro/issues/648, but it's not checked yet\r\n\r\n```\r\nimport requests\r\n\r\nBASE = 'https://www.shopelectro.ru'\r\n\r\nbroken_urls = []\r\nfor c in CategoryPage.objects.all():\r\n\tresponse = requests.get(BASE + c.url)\r\n\tif response.status_code != 200:\r\n\t\tbroken_urls += (response.status_code, response.url)\r\n\r\nprint('\\n'.join((str(url) for url in broken_urls)))\r\n\r\n---\r\n\r\n# output:\r\n502\r\nhttps://www.shopelectro.ru/catalog/categories/batarei-saliutov-moshchnye/\r\n502\r\nhttps://www.shopelectro.ru/catalog/categories/khlopushki-pnevmaticheskie/\r\n502\r\nhttps://www.shopelectro.ru/catalog/categories/izmeritelnye-pribory-211/\r\n502\r\nhttps://www.shopelectro.ru/catalog/categories/dlia-tiporazmera-aaaaa/\r\n502\r\nhttps://www.shopelectro.ru/catalog/categories/tiporazmer-aa/\r\n502\r\nhttps://www.shopelectro.ru/catalog/categories/tiporazmer-6f22/\r\n502\r\nhttps://www.shopelectro.ru/catalog/categories/invertory/\r\n502\r\nhttps://www.shopelectro.ru/catalog/categories/palchikovye-aa-r6-lr6-fr6-316/\r\n502\r\nhttps://www.shopelectro.ru/catalog/categories/mizinchikovye-aaa-r03-lr03-fr03-286/\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\nfrom datetime import timedelta\nimport os\n\nfrom celery import Celery\nfrom kombu import Exchange, Queue\n\n# set the default Django settings module for the 'celery' program.\nos.environ.setdefault('DJANGO_SETTINGS_MODULE', 'shopelectro.settings.local')\n\napp = Celery('shopelectro')\n\n# Exchanges\ndefault_exchange = Exchange('default', type='direct')\nutils_exchange = Exchange('utils', type='direct')\n\n# http://docs.celeryproject.org/en/latest/userguide/tasks.html\ntask_queues = (\n Queue(\n name='default',\n exchange=default_exchange,\n routing_key='default',\n ),\n Queue(\n name='mail',\n exchange=utils_exchange,\n routing_key='utils.mail',\n ),\n Queue(\n name='command',\n exchange=utils_exchange,\n routing_key='utils.command',\n )\n)\n\n# http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html\nbeat_schedule = {\n 'update-catalog': {\n 'task': 'shopelectro.tasks.update_catalog',\n 'schedule': timedelta(hours=2).total_seconds(),\n },\n 'check-purchase': {\n 'task': 'shopelectro.tasks.check_purchase',\n 'schedule': timedelta(days=1).total_seconds(),\n },\n}\n\n# http://docs.celeryproject.org/en/master/userguide/routing.html\ntask_routes = {\n 'shopelectro.tasks.update_catalog': {\n 'queue': 'command',\n 'routing_key': 'utils.command',\n 'priority': 30,\n },\n 'shopelectro.tasks.check_purchase': {\n 'queue': 'command',\n 'routing_key': 'utils.command',\n 'priority': 20,\n },\n 'ecommerce.tasks.send_mail': {\n 'queue': 'mail',\n 'routing_key': 'utils.mail',\n 'priority': 50,\n },\n}\n\n# Using a string here means the worker don't have to serialize\n# the configuration 
object to child processes.\n# - namespace='CELERY' means all celery-related configuration keys\n# should have a `CELERY_` prefix.\napp.config_from_object('django.conf:settings', namespace='CELERY')\n\n# http://docs.celeryproject.org/en/latest/userguide/configuration.html\n\nBROCKER_URL = (\n f'amqp://{os.environ[\"RABBITMQ_DEFAULT_USER\"]}:{os.environ[\"RABBITMQ_DEFAULT_PASS\"]}'\n f'@{os.environ[\"RABBITMQ_URL\"]}:{os.environ[\"RABBITMQ_PORT\"]}/'\n)\napp.conf.update(\n broker_url=BROCKER_URL,\n broker_heartbeat=30,\n task_acks_late=True,\n task_default_queue='default',\n task_default_exchange='default',\n task_default_routing_key='default',\n task_ignore_result=True,\n worker_pool_restarts=1000,\n task_routes=task_routes,\n task_queues=task_queues,\n beat_schedule=beat_schedule,\n)\n\n# Load task modules from all registered Django app configs.\napp.autodiscover_tasks()\n", "path": "shopelectro/celery.py"}], "after_files": [{"content": "from __future__ import absolute_import, unicode_literals\nfrom datetime import timedelta\nimport os\n\nfrom celery import Celery\nfrom kombu import Exchange, Queue\n\n# set the default Django settings module for the 'celery' program.\nos.environ.setdefault('DJANGO_SETTINGS_MODULE', 'shopelectro.settings.local')\n\napp = Celery('shopelectro')\n\n# Exchanges\ndefault_exchange = Exchange('default', type='direct')\nutils_exchange = Exchange('utils', type='direct')\n\n# http://docs.celeryproject.org/en/latest/userguide/tasks.html\ntask_queues = (\n Queue(\n name='default',\n exchange=default_exchange,\n routing_key='default',\n ),\n Queue(\n name='mail',\n exchange=utils_exchange,\n routing_key='utils.mail',\n ),\n Queue(\n name='command',\n exchange=utils_exchange,\n routing_key='utils.command',\n )\n)\n\n# http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html\nbeat_schedule = {\n 'update-catalog': {\n 'task': 'shopelectro.tasks.update_catalog',\n 'schedule': timedelta(hours=2).total_seconds(),\n },\n 'check-purchase': {\n 'task': 'shopelectro.tasks.check_purchase',\n 'schedule': timedelta(days=1).total_seconds(),\n },\n}\n\n# http://docs.celeryproject.org/en/master/userguide/routing.html\ntask_routes = {\n 'shopelectro.tasks.update_catalog': {\n 'queue': 'command',\n 'routing_key': 'utils.command',\n 'priority': 30,\n },\n 'shopelectro.tasks.check_purchase': {\n 'queue': 'command',\n 'routing_key': 'utils.command',\n 'priority': 20,\n },\n 'ecommerce.tasks.send_mail': {\n 'queue': 'mail',\n 'routing_key': 'utils.mail',\n 'priority': 50,\n },\n}\n\n# Using a string here means the worker don't have to serialize\n# the configuration object to child processes.\n# - namespace='CELERY' means all celery-related configuration keys\n# should have a `CELERY_` prefix.\napp.config_from_object('django.conf:settings', namespace='CELERY')\n\n# http://docs.celeryproject.org/en/latest/userguide/configuration.html\n\nBROCKER_URL = (\n f'amqp://{os.environ[\"RABBITMQ_DEFAULT_USER\"]}:{os.environ[\"RABBITMQ_DEFAULT_PASS\"]}'\n f'@{os.environ[\"RABBITMQ_URL\"]}:{os.environ[\"RABBITMQ_PORT\"]}/'\n)\napp.conf.update(\n broker_url=BROCKER_URL,\n broker_heartbeat=30,\n task_acks_late=True,\n task_default_queue='default',\n task_default_exchange='default',\n task_default_routing_key='default',\n worker_pool_restarts=True,\n task_routes=task_routes,\n task_queues=task_queues,\n beat_schedule=beat_schedule,\n worker_max_memory_per_child=200000, # after 250MB will restart\n)\n\n# Load task modules from all registered Django app configs.\napp.autodiscover_tasks()\n", 
"path": "shopelectro/celery.py"}]}
| 1,484 | 167 |
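The Celery change captured in the record above replaces a numeric `worker_pool_restarts` value with the boolean flag Celery actually expects and adds a per-child memory cap. Below is a minimal, hypothetical sketch of just those two settings; the app name and broker URL are placeholders rather than values from the project.

```python
from celery import Celery

# Hypothetical app; the broker URL is a placeholder.
app = Celery('example', broker='amqp://guest:guest@localhost:5672//')

app.conf.update(
    worker_pool_restarts=True,           # boolean: enables the pool_restart remote-control command
    worker_max_memory_per_child=200000,  # KiB: a worker process is replaced once it exceeds ~200 MB
)
```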
gh_patches_debug_18091
|
rasdani/github-patches
|
git_diff
|
nilearn__nilearn-2198
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pip installing Nilearn 0.6.0a0 in Py2 envs
Not sure why, but installing Nilearn 0.6.0a0 in a Python 2 environment does not raise an error. This is strange, since the wheels I built this time are not universal and I removed the Python 2 tags. Maybe something was overlooked? Will look at this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python
2
3 descr = """A set of python modules for neuroimaging..."""
4
5 import sys
6 import os
7
8 from setuptools import setup, find_packages
9
10
11 def load_version():
12 """Executes nilearn/version.py in a globals dictionary and return it.
13
14 Note: importing nilearn is not an option because there may be
15 dependencies like nibabel which are not installed and
16 setup.py is supposed to install them.
17 """
18 # load all vars into globals, otherwise
19 # the later function call using global vars doesn't work.
20 globals_dict = {}
21 with open(os.path.join('nilearn', 'version.py')) as fp:
22 exec(fp.read(), globals_dict)
23
24 return globals_dict
25
26
27 def is_installing():
28 # Allow command-lines such as "python setup.py build install"
29 install_commands = set(['install', 'develop'])
30 return install_commands.intersection(set(sys.argv))
31
32
33 # Make sources available using relative paths from this file's directory.
34 os.chdir(os.path.dirname(os.path.abspath(__file__)))
35
36 _VERSION_GLOBALS = load_version()
37 DISTNAME = 'nilearn'
38 DESCRIPTION = 'Statistical learning for neuroimaging in Python'
39 with open('README.rst') as fp:
40 LONG_DESCRIPTION = fp.read()
41 MAINTAINER = 'Gael Varoquaux'
42 MAINTAINER_EMAIL = '[email protected]'
43 URL = 'http://nilearn.github.io'
44 LICENSE = 'new BSD'
45 DOWNLOAD_URL = 'http://nilearn.github.io'
46 VERSION = _VERSION_GLOBALS['__version__']
47
48
49 if __name__ == "__main__":
50 if is_installing():
51 module_check_fn = _VERSION_GLOBALS['_check_module_dependencies']
52 module_check_fn(is_nilearn_installing=True)
53
54 install_requires = \
55 ['%s>=%s' % (mod, meta['min_version'])
56 for mod, meta in _VERSION_GLOBALS['REQUIRED_MODULE_METADATA']
57 if not meta['required_at_installation']]
58
59 setup(name=DISTNAME,
60 maintainer=MAINTAINER,
61 maintainer_email=MAINTAINER_EMAIL,
62 description=DESCRIPTION,
63 license=LICENSE,
64 url=URL,
65 version=VERSION,
66 download_url=DOWNLOAD_URL,
67 long_description=LONG_DESCRIPTION,
68 zip_safe=False, # the package can run out of an .egg file
69 classifiers=[
70 'Intended Audience :: Science/Research',
71 'Intended Audience :: Developers',
72 'License :: OSI Approved',
73 'Programming Language :: C',
74 'Programming Language :: Python',
75 'Topic :: Software Development',
76 'Topic :: Scientific/Engineering',
77 'Operating System :: Microsoft :: Windows',
78 'Operating System :: POSIX',
79 'Operating System :: Unix',
80 'Operating System :: MacOS',
81 'Programming Language :: Python :: 3.5',
82 'Programming Language :: Python :: 3.6',
83 'Programming Language :: Python :: 3.7',
84 ],
85 packages=find_packages(),
86 package_data={'nilearn.datasets.data': ['*.nii.gz', '*.csv', '*.txt'
87 ],
88 'nilearn.datasets.data.fsaverage5': ['*.gz'],
89 'nilearn.surface.data': ['*.csv'],
90 'nilearn.plotting.data.js': ['*.js'],
91 'nilearn.plotting.data.html': ['*.html'],
92 'nilearn.plotting.glass_brain_files': ['*.json'],
93 'nilearn.tests.data': ['*'],
94 'nilearn.image.tests.data': ['*.mgz'],
95 'nilearn.surface.tests.data': ['*.annot', '*.label'],
96 'nilearn.datasets.tests.data': ['*.*'],
97 'nilearn.datasets.description': ['*.rst'],
98 'nilearn.reporting.data.html': ['*.html']},
99 install_requires=install_requires,)
100
```
Path: `nilearn/version.py`
Content:
```
1 # *- encoding: utf-8 -*-
2 """
3 nilearn version, required package versions, and utilities for checking
4 """
5 # Author: Loic Esteve, Ben Cipollini
6 # License: simplified BSD
7
8 # PEP0440 compatible formatted version, see:
9 # https://www.python.org/dev/peps/pep-0440/
10 #
11 # Generic release markers:
12 # X.Y
13 # X.Y.Z # For bugfix releases
14 #
15 # Admissible pre-release markers:
16 # X.YaN # Alpha release
17 # X.YbN # Beta release
18 # X.YrcN # Release Candidate
19 # X.Y # Final release
20 #
21 # Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
22 # 'X.Y.dev0' is the canonical version of 'X.Y.dev'
23 #
24 __version__ = '0.6.0a'
25
26 _NILEARN_INSTALL_MSG = 'See %s for installation information.' % (
27 'http://nilearn.github.io/introduction.html#installation')
28
29 # This is a tuple to preserve order, so that dependencies are checked
30 # in some meaningful order (more => less 'core').
31 REQUIRED_MODULE_METADATA = (
32 ('numpy', {
33 'min_version': '1.11',
34 'required_at_installation': True,
35 'install_info': _NILEARN_INSTALL_MSG}),
36 ('scipy', {
37 'min_version': '0.19',
38 'required_at_installation': True,
39 'install_info': _NILEARN_INSTALL_MSG}),
40 ('sklearn', {
41 'min_version': '0.19',
42 'required_at_installation': True,
43 'install_info': _NILEARN_INSTALL_MSG}),
44 ('joblib', {
45 'min_version': '0.11',
46 'required_at_installation': True,
47 'install_info': _NILEARN_INSTALL_MSG}),
48 ('nibabel', {
49 'min_version': '2.0.2',
50 'required_at_installation': False}))
51
52 OPTIONAL_MATPLOTLIB_MIN_VERSION = '2.0'
53
54
55 def _import_module_with_version_check(
56 module_name,
57 minimum_version,
58 install_info=None):
59 """Check that module is installed with a recent enough version
60 """
61 from distutils.version import LooseVersion
62
63 try:
64 module = __import__(module_name)
65 except ImportError as exc:
66 user_friendly_info = ('Module "{0}" could not be found. {1}').format(
67 module_name,
68 install_info or 'Please install it properly to use nilearn.')
69 exc.args += (user_friendly_info,)
70 # Necessary for Python 3 because the repr/str of ImportError
71 # objects was changed in Python 3
72 if hasattr(exc, 'msg'):
73 exc.msg += '. ' + user_friendly_info
74 raise
75
76 # Avoid choking on modules with no __version__ attribute
77 module_version = getattr(module, '__version__', '0.0.0')
78
79 version_too_old = (not LooseVersion(module_version) >=
80 LooseVersion(minimum_version))
81
82 if version_too_old:
83 message = (
84 'A {module_name} version of at least {minimum_version} '
85 'is required to use nilearn. {module_version} was found. '
86 'Please upgrade {module_name}').format(
87 module_name=module_name,
88 minimum_version=minimum_version,
89 module_version=module_version)
90
91 raise ImportError(message)
92
93 return module
94
95
96 def _check_module_dependencies(is_nilearn_installing=False):
97 """Throw an exception if nilearn dependencies are not installed.
98
99 Parameters
100 ----------
101 is_nilearn_installing: boolean
102 if True, only error on missing packages that cannot be auto-installed.
103 if False, error on any missing package.
104
105 Throws
106 -------
107 ImportError
108 """
109
110 for (module_name, module_metadata) in REQUIRED_MODULE_METADATA:
111 if not (is_nilearn_installing and
112 not module_metadata['required_at_installation']):
113 # Skip check only when installing and it's a module that
114 # will be auto-installed.
115 _import_module_with_version_check(
116 module_name=module_name,
117 minimum_version=module_metadata['min_version'],
118 install_info=module_metadata.get('install_info'))
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nilearn/version.py b/nilearn/version.py
--- a/nilearn/version.py
+++ b/nilearn/version.py
@@ -21,7 +21,7 @@
# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
# 'X.Y.dev0' is the canonical version of 'X.Y.dev'
#
-__version__ = '0.6.0a'
+__version__ = '0.6.0b'
_NILEARN_INSTALL_MSG = 'See %s for installation information.' % (
'http://nilearn.github.io/introduction.html#installation')
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -96,4 +96,6 @@
'nilearn.datasets.tests.data': ['*.*'],
'nilearn.datasets.description': ['*.rst'],
'nilearn.reporting.data.html': ['*.html']},
- install_requires=install_requires,)
+ install_requires=install_requires,
+ python_requires='>=3.5',
+ )
|
{"golden_diff": "diff --git a/nilearn/version.py b/nilearn/version.py\n--- a/nilearn/version.py\n+++ b/nilearn/version.py\n@@ -21,7 +21,7 @@\n # Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.\n # 'X.Y.dev0' is the canonical version of 'X.Y.dev'\n #\n-__version__ = '0.6.0a'\n+__version__ = '0.6.0b'\n \n _NILEARN_INSTALL_MSG = 'See %s for installation information.' % (\n 'http://nilearn.github.io/introduction.html#installation')\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -96,4 +96,6 @@\n 'nilearn.datasets.tests.data': ['*.*'],\n 'nilearn.datasets.description': ['*.rst'],\n 'nilearn.reporting.data.html': ['*.html']},\n- install_requires=install_requires,)\n+ install_requires=install_requires,\n+ python_requires='>=3.5',\n+ )\n", "issue": "Pip installing Nilearn 0.6.0a0 in Py2 envs\nNot sure why, installing Nilearn 06.0a0 in a Python2environment does not raise an error. This is strange, since the wheels I build this time are not universal and I removed Python 2 tags. Maybe something was overlooked? Will look at this.\n", "before_files": [{"content": "#! /usr/bin/env python\n\ndescr = \"\"\"A set of python modules for neuroimaging...\"\"\"\n\nimport sys\nimport os\n\nfrom setuptools import setup, find_packages\n\n\ndef load_version():\n \"\"\"Executes nilearn/version.py in a globals dictionary and return it.\n\n Note: importing nilearn is not an option because there may be\n dependencies like nibabel which are not installed and\n setup.py is supposed to install them.\n \"\"\"\n # load all vars into globals, otherwise\n # the later function call using global vars doesn't work.\n globals_dict = {}\n with open(os.path.join('nilearn', 'version.py')) as fp:\n exec(fp.read(), globals_dict)\n\n return globals_dict\n\n\ndef is_installing():\n # Allow command-lines such as \"python setup.py build install\"\n install_commands = set(['install', 'develop'])\n return install_commands.intersection(set(sys.argv))\n\n\n# Make sources available using relative paths from this file's directory.\nos.chdir(os.path.dirname(os.path.abspath(__file__)))\n\n_VERSION_GLOBALS = load_version()\nDISTNAME = 'nilearn'\nDESCRIPTION = 'Statistical learning for neuroimaging in Python'\nwith open('README.rst') as fp:\n LONG_DESCRIPTION = fp.read()\nMAINTAINER = 'Gael Varoquaux'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'http://nilearn.github.io'\nLICENSE = 'new BSD'\nDOWNLOAD_URL = 'http://nilearn.github.io'\nVERSION = _VERSION_GLOBALS['__version__']\n\n\nif __name__ == \"__main__\":\n if is_installing():\n module_check_fn = _VERSION_GLOBALS['_check_module_dependencies']\n module_check_fn(is_nilearn_installing=True)\n\n install_requires = \\\n ['%s>=%s' % (mod, meta['min_version'])\n for mod, meta in _VERSION_GLOBALS['REQUIRED_MODULE_METADATA']\n if not meta['required_at_installation']]\n\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=VERSION,\n download_url=DOWNLOAD_URL,\n long_description=LONG_DESCRIPTION,\n zip_safe=False, # the package can run out of an .egg file\n classifiers=[\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: C',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n 
'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ],\n packages=find_packages(),\n package_data={'nilearn.datasets.data': ['*.nii.gz', '*.csv', '*.txt'\n ],\n 'nilearn.datasets.data.fsaverage5': ['*.gz'],\n 'nilearn.surface.data': ['*.csv'],\n 'nilearn.plotting.data.js': ['*.js'],\n 'nilearn.plotting.data.html': ['*.html'],\n 'nilearn.plotting.glass_brain_files': ['*.json'],\n 'nilearn.tests.data': ['*'],\n 'nilearn.image.tests.data': ['*.mgz'],\n 'nilearn.surface.tests.data': ['*.annot', '*.label'],\n 'nilearn.datasets.tests.data': ['*.*'],\n 'nilearn.datasets.description': ['*.rst'],\n 'nilearn.reporting.data.html': ['*.html']},\n install_requires=install_requires,)\n", "path": "setup.py"}, {"content": "# *- encoding: utf-8 -*-\n\"\"\"\nnilearn version, required package versions, and utilities for checking\n\"\"\"\n# Author: Loic Esteve, Ben Cipollini\n# License: simplified BSD\n\n# PEP0440 compatible formatted version, see:\n# https://www.python.org/dev/peps/pep-0440/\n#\n# Generic release markers:\n# X.Y\n# X.Y.Z # For bugfix releases\n#\n# Admissible pre-release markers:\n# X.YaN # Alpha release\n# X.YbN # Beta release\n# X.YrcN # Release Candidate\n# X.Y # Final release\n#\n# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.\n# 'X.Y.dev0' is the canonical version of 'X.Y.dev'\n#\n__version__ = '0.6.0a'\n\n_NILEARN_INSTALL_MSG = 'See %s for installation information.' % (\n 'http://nilearn.github.io/introduction.html#installation')\n\n# This is a tuple to preserve order, so that dependencies are checked\n# in some meaningful order (more => less 'core').\nREQUIRED_MODULE_METADATA = (\n ('numpy', {\n 'min_version': '1.11',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('scipy', {\n 'min_version': '0.19',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('sklearn', {\n 'min_version': '0.19',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('joblib', {\n 'min_version': '0.11',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('nibabel', {\n 'min_version': '2.0.2',\n 'required_at_installation': False}))\n\nOPTIONAL_MATPLOTLIB_MIN_VERSION = '2.0'\n\n\ndef _import_module_with_version_check(\n module_name,\n minimum_version,\n install_info=None):\n \"\"\"Check that module is installed with a recent enough version\n \"\"\"\n from distutils.version import LooseVersion\n\n try:\n module = __import__(module_name)\n except ImportError as exc:\n user_friendly_info = ('Module \"{0}\" could not be found. {1}').format(\n module_name,\n install_info or 'Please install it properly to use nilearn.')\n exc.args += (user_friendly_info,)\n # Necessary for Python 3 because the repr/str of ImportError\n # objects was changed in Python 3\n if hasattr(exc, 'msg'):\n exc.msg += '. ' + user_friendly_info\n raise\n\n # Avoid choking on modules with no __version__ attribute\n module_version = getattr(module, '__version__', '0.0.0')\n\n version_too_old = (not LooseVersion(module_version) >=\n LooseVersion(minimum_version))\n\n if version_too_old:\n message = (\n 'A {module_name} version of at least {minimum_version} '\n 'is required to use nilearn. {module_version} was found. 
'\n 'Please upgrade {module_name}').format(\n module_name=module_name,\n minimum_version=minimum_version,\n module_version=module_version)\n\n raise ImportError(message)\n\n return module\n\n\ndef _check_module_dependencies(is_nilearn_installing=False):\n \"\"\"Throw an exception if nilearn dependencies are not installed.\n\n Parameters\n ----------\n is_nilearn_installing: boolean\n if True, only error on missing packages that cannot be auto-installed.\n if False, error on any missing package.\n\n Throws\n -------\n ImportError\n \"\"\"\n\n for (module_name, module_metadata) in REQUIRED_MODULE_METADATA:\n if not (is_nilearn_installing and\n not module_metadata['required_at_installation']):\n # Skip check only when installing and it's a module that\n # will be auto-installed.\n _import_module_with_version_check(\n module_name=module_name,\n minimum_version=module_metadata['min_version'],\n install_info=module_metadata.get('install_info'))\n", "path": "nilearn/version.py"}], "after_files": [{"content": "#! /usr/bin/env python\n\ndescr = \"\"\"A set of python modules for neuroimaging...\"\"\"\n\nimport sys\nimport os\n\nfrom setuptools import setup, find_packages\n\n\ndef load_version():\n \"\"\"Executes nilearn/version.py in a globals dictionary and return it.\n\n Note: importing nilearn is not an option because there may be\n dependencies like nibabel which are not installed and\n setup.py is supposed to install them.\n \"\"\"\n # load all vars into globals, otherwise\n # the later function call using global vars doesn't work.\n globals_dict = {}\n with open(os.path.join('nilearn', 'version.py')) as fp:\n exec(fp.read(), globals_dict)\n\n return globals_dict\n\n\ndef is_installing():\n # Allow command-lines such as \"python setup.py build install\"\n install_commands = set(['install', 'develop'])\n return install_commands.intersection(set(sys.argv))\n\n\n# Make sources available using relative paths from this file's directory.\nos.chdir(os.path.dirname(os.path.abspath(__file__)))\n\n_VERSION_GLOBALS = load_version()\nDISTNAME = 'nilearn'\nDESCRIPTION = 'Statistical learning for neuroimaging in Python'\nwith open('README.rst') as fp:\n LONG_DESCRIPTION = fp.read()\nMAINTAINER = 'Gael Varoquaux'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'http://nilearn.github.io'\nLICENSE = 'new BSD'\nDOWNLOAD_URL = 'http://nilearn.github.io'\nVERSION = _VERSION_GLOBALS['__version__']\n\n\nif __name__ == \"__main__\":\n if is_installing():\n module_check_fn = _VERSION_GLOBALS['_check_module_dependencies']\n module_check_fn(is_nilearn_installing=True)\n\n install_requires = \\\n ['%s>=%s' % (mod, meta['min_version'])\n for mod, meta in _VERSION_GLOBALS['REQUIRED_MODULE_METADATA']\n if not meta['required_at_installation']]\n\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=VERSION,\n download_url=DOWNLOAD_URL,\n long_description=LONG_DESCRIPTION,\n zip_safe=False, # the package can run out of an .egg file\n classifiers=[\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: C',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming 
Language :: Python :: 3.7',\n ],\n packages=find_packages(),\n package_data={'nilearn.datasets.data': ['*.nii.gz', '*.csv', '*.txt'\n ],\n 'nilearn.datasets.data.fsaverage5': ['*.gz'],\n 'nilearn.surface.data': ['*.csv'],\n 'nilearn.plotting.data.js': ['*.js'],\n 'nilearn.plotting.data.html': ['*.html'],\n 'nilearn.plotting.glass_brain_files': ['*.json'],\n 'nilearn.tests.data': ['*'],\n 'nilearn.image.tests.data': ['*.mgz'],\n 'nilearn.surface.tests.data': ['*.annot', '*.label'],\n 'nilearn.datasets.tests.data': ['*.*'],\n 'nilearn.datasets.description': ['*.rst'],\n 'nilearn.reporting.data.html': ['*.html']},\n install_requires=install_requires,\n python_requires='>=3.5',\n )\n", "path": "setup.py"}, {"content": "# *- encoding: utf-8 -*-\n\"\"\"\nnilearn version, required package versions, and utilities for checking\n\"\"\"\n# Author: Loic Esteve, Ben Cipollini\n# License: simplified BSD\n\n# PEP0440 compatible formatted version, see:\n# https://www.python.org/dev/peps/pep-0440/\n#\n# Generic release markers:\n# X.Y\n# X.Y.Z # For bugfix releases\n#\n# Admissible pre-release markers:\n# X.YaN # Alpha release\n# X.YbN # Beta release\n# X.YrcN # Release Candidate\n# X.Y # Final release\n#\n# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.\n# 'X.Y.dev0' is the canonical version of 'X.Y.dev'\n#\n__version__ = '0.6.0b'\n\n_NILEARN_INSTALL_MSG = 'See %s for installation information.' % (\n 'http://nilearn.github.io/introduction.html#installation')\n\n# This is a tuple to preserve order, so that dependencies are checked\n# in some meaningful order (more => less 'core').\nREQUIRED_MODULE_METADATA = (\n ('numpy', {\n 'min_version': '1.11',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('scipy', {\n 'min_version': '0.19',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('sklearn', {\n 'min_version': '0.19',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('joblib', {\n 'min_version': '0.11',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('nibabel', {\n 'min_version': '2.0.2',\n 'required_at_installation': False}))\n\nOPTIONAL_MATPLOTLIB_MIN_VERSION = '2.0'\n\n\ndef _import_module_with_version_check(\n module_name,\n minimum_version,\n install_info=None):\n \"\"\"Check that module is installed with a recent enough version\n \"\"\"\n from distutils.version import LooseVersion\n\n try:\n module = __import__(module_name)\n except ImportError as exc:\n user_friendly_info = ('Module \"{0}\" could not be found. {1}').format(\n module_name,\n install_info or 'Please install it properly to use nilearn.')\n exc.args += (user_friendly_info,)\n # Necessary for Python 3 because the repr/str of ImportError\n # objects was changed in Python 3\n if hasattr(exc, 'msg'):\n exc.msg += '. ' + user_friendly_info\n raise\n\n # Avoid choking on modules with no __version__ attribute\n module_version = getattr(module, '__version__', '0.0.0')\n\n version_too_old = (not LooseVersion(module_version) >=\n LooseVersion(minimum_version))\n\n if version_too_old:\n message = (\n 'A {module_name} version of at least {minimum_version} '\n 'is required to use nilearn. {module_version} was found. 
'\n 'Please upgrade {module_name}').format(\n module_name=module_name,\n minimum_version=minimum_version,\n module_version=module_version)\n\n raise ImportError(message)\n\n return module\n\n\ndef _check_module_dependencies(is_nilearn_installing=False):\n \"\"\"Throw an exception if nilearn dependencies are not installed.\n\n Parameters\n ----------\n is_nilearn_installing: boolean\n if True, only error on missing packages that cannot be auto-installed.\n if False, error on any missing package.\n\n Throws\n -------\n ImportError\n \"\"\"\n\n for (module_name, module_metadata) in REQUIRED_MODULE_METADATA:\n if not (is_nilearn_installing and\n not module_metadata['required_at_installation']):\n # Skip check only when installing and it's a module that\n # will be auto-installed.\n _import_module_with_version_check(\n module_name=module_name,\n minimum_version=module_metadata['min_version'],\n install_info=module_metadata.get('install_info'))\n", "path": "nilearn/version.py"}]}
| 2,555 | 244 |
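The nilearn record above is resolved by declaring `python_requires`, which setuptools writes out as `Requires-Python` metadata and modern pip uses to refuse installation on unsupported interpreters; removing Python 2 classifiers or wheel tags alone never triggers that check. A minimal, hypothetical `setup.py` showing only the relevant argument:

```python
from setuptools import setup, find_packages

setup(
    name='example-package',   # placeholder metadata, not nilearn's
    version='0.1.0',
    packages=find_packages(),
    python_requires='>=3.5',  # pip running under Python 2 rejects this release at install time
)
```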
gh_patches_debug_5657
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-254
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`split_axis` fails when it splits the last axis of an array on py3
This code causes an error.
```
>>> x=chainer.Variable(chainer.cuda.to_gpu(numpy.array([1,2,3], numpy.float32)))
>>> chainer.functions.split_axis(x, [1], 0)
...
struct.error: required argument is not an integer
```
The return value of `numpy.prod(())` is a `float`, and pycuda cannot pass a float value as an integer argument on py3 (on py2, it works).
https://github.com/pfnet/chainer/blob/master/chainer/functions/split_axis.py#L60
@ShigekiKarita reported this bug. Thanks!!
https://gist.github.com/ShigekiKarita/f27d1d149ccc3e3f381f
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/functions/split_axis.py`
Content:
```
1 import collections
2
3 import numpy
4
5 from chainer import cuda
6 from chainer import function
7 from chainer.utils import type_check
8
9
10 _args = 'float* y, float* x, int cdimy, int cdimx, int rdim, int coffset'
11 _preamble = '''
12 #define COPY(statement) \
13 int l = i / (rdim * cdimy); \
14 int c = i / rdim % cdimy + coffset; \
15 int r = i % rdim; \
16 int idx = r + rdim * (c + cdimx * l); \
17 statement;
18 '''
19
20
21 class SplitAxis(function.Function):
22
23 """Function that splits multiple arrays towards the specified axis."""
24
25 def __init__(self, indices_or_sections, axis):
26 if not isinstance(indices_or_sections, (int, collections.Iterable)):
27 raise TypeError('indices_or_sections must be integer or 1-D array')
28 self.indices_or_sections = indices_or_sections
29 self.axis = axis
30
31 def check_type_forward(self, in_types):
32 type_check.expect(in_types.size() == 1)
33 type_check.expect(in_types[0].ndim >= self.axis)
34
35 if isinstance(self.indices_or_sections, collections.Iterable):
36 max_index = type_check.Variable(
37 self.indices_or_sections[-1], 'max_index')
38 type_check.expect(in_types[0].shape[self.axis] > max_index)
39 else:
40 sections = type_check.Variable(
41 self.indices_or_sections, 'sections')
42 type_check.expect(in_types[0].shape[self.axis] % sections == 0)
43
44 def forward_cpu(self, x):
45 if isinstance(self.indices_or_sections, collections.Iterable):
46 cdimx = x[0].shape[self.axis]
47 ind = list(self.indices_or_sections)
48 ind.append(cdimx)
49 prev_i = 0
50 for i in ind:
51 cdimy = max(0, min(i, cdimx) - prev_i)
52 if cdimy == 0:
53 raise ValueError('Not support if shape contains 0')
54 prev_i = i
55 return tuple(numpy.split(x[0], self.indices_or_sections, self.axis))
56
57 def forward_gpu(self, x):
58 xshape = x[0].shape
59 self.cdimx = xshape[self.axis]
60 self.rdim = numpy.prod(xshape[self.axis + 1:])
61
62 if isinstance(self.indices_or_sections, collections.Iterable):
63 ind = list(self.indices_or_sections)
64 ind.append(self.cdimx)
65 else:
66 sec = self.indices_or_sections
67 if self.cdimx % sec:
68 raise ValueError(
69 'array split does not result in an equal division')
70 ind = numpy.arange(1, sec + 1) * (self.cdimx // sec)
71 ys = []
72 kernel = cuda.elementwise(
73 _args, 'COPY(y[i] = x[idx])', 'split_fwd', preamble=_preamble)
74 prev_i = 0
75 for i in ind:
76 cdimy = max(0, min(i, self.cdimx) - prev_i)
77 s = list(xshape)
78 s[self.axis] = cdimy
79 y = cuda.empty(s, dtype=x[0].dtype)
80 if cdimy == 0:
81 raise ValueError('Not support if shape contains 0')
82 kernel(y, x[0], cdimy, self.cdimx, self.rdim, prev_i)
83 prev_i = i
84 ys.append(y)
85 return tuple(ys)
86
87 def backward_cpu(self, x, gys):
88 return numpy.concatenate(gys, axis=self.axis),
89
90 def backward_gpu(self, x, gys):
91 gx = cuda.empty_like(x[0])
92 coffset = 0
93 kernel = cuda.elementwise(
94 _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)
95 for gy in gys:
96 cdimy = gy.shape[self.axis]
97 if cdimy != 0:
98 kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)
99 coffset += cdimy
100 return gx,
101
102
103 def split_axis(x, indices_or_sections, axis):
104 """Splits given variables along an axis.
105
106 Args:
107 x (tuple of Variables): Variables to be split.
108 indices_or_sections (int or 1-D array): If this argument is an integer,
109 N, the array will be divided into N equal arrays along axis.
110 If it is a 1-D array of sorted integers, it
111 indicates the positions where the array is split.
112 axis (int): Axis that the input array is split along.
113
114 Returns:
115 ``tuple`` or ``Variable``: Tuple of :class:`~chainer.Variable` objects
116 if the number of outputs is more than 1 or
117 :class:`~chainer.Variable` otherwise.
118
119 .. note::
120 This function raises ``ValueError`` if at least
121 one of the outputs is splitted to zero-size
122 (i.e. `axis`-th value of its shape is zero).
123
124 """
125 return SplitAxis(indices_or_sections, axis)(x)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/chainer/functions/split_axis.py b/chainer/functions/split_axis.py
--- a/chainer/functions/split_axis.py
+++ b/chainer/functions/split_axis.py
@@ -57,7 +57,7 @@
def forward_gpu(self, x):
xshape = x[0].shape
self.cdimx = xshape[self.axis]
- self.rdim = numpy.prod(xshape[self.axis + 1:])
+ self.rdim = numpy.prod(xshape[self.axis + 1:], dtype=int)
if isinstance(self.indices_or_sections, collections.Iterable):
ind = list(self.indices_or_sections)
|
{"golden_diff": "diff --git a/chainer/functions/split_axis.py b/chainer/functions/split_axis.py\n--- a/chainer/functions/split_axis.py\n+++ b/chainer/functions/split_axis.py\n@@ -57,7 +57,7 @@\n def forward_gpu(self, x):\n xshape = x[0].shape\n self.cdimx = xshape[self.axis]\n- self.rdim = numpy.prod(xshape[self.axis + 1:])\n+ self.rdim = numpy.prod(xshape[self.axis + 1:], dtype=int)\n \n if isinstance(self.indices_or_sections, collections.Iterable):\n ind = list(self.indices_or_sections)\n", "issue": "`split_axis` fails when it splits the last axis of an array on py3\nThis code causes an error.\n\n```\n>>> x=chainer.Variable(chainer.cuda.to_gpu(numpy.array([1,2,3], numpy.float32)))\n>>> chainer.functions.split_axis(x, [1], 0)\n...\nstruct.error: required argument is not an integer\n```\n\nA return value of `numpy.prod(())` is `float`, and pycuda cannot pass a float value as an integer argument on py3 (on py2, it works).\nhttps://github.com/pfnet/chainer/blob/master/chainer/functions/split_axis.py#L60\n\n@ShigekiKarita reported this bug. Thanks!!\nhttps://gist.github.com/ShigekiKarita/f27d1d149ccc3e3f381f\n\n", "before_files": [{"content": "import collections\n\nimport numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\n_args = 'float* y, float* x, int cdimy, int cdimx, int rdim, int coffset'\n_preamble = '''\n#define COPY(statement) \\\n int l = i / (rdim * cdimy); \\\n int c = i / rdim % cdimy + coffset; \\\n int r = i % rdim; \\\n int idx = r + rdim * (c + cdimx * l); \\\n statement;\n'''\n\n\nclass SplitAxis(function.Function):\n\n \"\"\"Function that splits multiple arrays towards the specified axis.\"\"\"\n\n def __init__(self, indices_or_sections, axis):\n if not isinstance(indices_or_sections, (int, collections.Iterable)):\n raise TypeError('indices_or_sections must be integer or 1-D array')\n self.indices_or_sections = indices_or_sections\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n type_check.expect(in_types[0].ndim >= self.axis)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n max_index = type_check.Variable(\n self.indices_or_sections[-1], 'max_index')\n type_check.expect(in_types[0].shape[self.axis] > max_index)\n else:\n sections = type_check.Variable(\n self.indices_or_sections, 'sections')\n type_check.expect(in_types[0].shape[self.axis] % sections == 0)\n\n def forward_cpu(self, x):\n if isinstance(self.indices_or_sections, collections.Iterable):\n cdimx = x[0].shape[self.axis]\n ind = list(self.indices_or_sections)\n ind.append(cdimx)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, cdimx) - prev_i)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n prev_i = i\n return tuple(numpy.split(x[0], self.indices_or_sections, self.axis))\n\n def forward_gpu(self, x):\n xshape = x[0].shape\n self.cdimx = xshape[self.axis]\n self.rdim = numpy.prod(xshape[self.axis + 1:])\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n ind = list(self.indices_or_sections)\n ind.append(self.cdimx)\n else:\n sec = self.indices_or_sections\n if self.cdimx % sec:\n raise ValueError(\n 'array split does not result in an equal division')\n ind = numpy.arange(1, sec + 1) * (self.cdimx // sec)\n ys = []\n kernel = cuda.elementwise(\n _args, 'COPY(y[i] = x[idx])', 'split_fwd', preamble=_preamble)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, self.cdimx) - prev_i)\n s = list(xshape)\n s[self.axis] = cdimy\n y = 
cuda.empty(s, dtype=x[0].dtype)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n kernel(y, x[0], cdimy, self.cdimx, self.rdim, prev_i)\n prev_i = i\n ys.append(y)\n return tuple(ys)\n\n def backward_cpu(self, x, gys):\n return numpy.concatenate(gys, axis=self.axis),\n\n def backward_gpu(self, x, gys):\n gx = cuda.empty_like(x[0])\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)\n for gy in gys:\n cdimy = gy.shape[self.axis]\n if cdimy != 0:\n kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)\n coffset += cdimy\n return gx,\n\n\ndef split_axis(x, indices_or_sections, axis):\n \"\"\"Splits given variables along an axis.\n\n Args:\n x (tuple of Variables): Variables to be split.\n indices_or_sections (int or 1-D array): If this argument is an integer,\n N, the array will be divided into N equal arrays along axis.\n If it is a 1-D array of sorted integers, it\n indicates the positions where the array is split.\n axis (int): Axis that the input array is split along.\n\n Returns:\n ``tuple`` or ``Variable``: Tuple of :class:`~chainer.Variable` objects\n if the number of outputs is more than 1 or\n :class:`~chainer.Variable` otherwise.\n\n .. note::\n This function raises ``ValueError`` if at least\n one of the outputs is splitted to zero-size\n (i.e. `axis`-th value of its shape is zero).\n\n \"\"\"\n return SplitAxis(indices_or_sections, axis)(x)\n", "path": "chainer/functions/split_axis.py"}], "after_files": [{"content": "import collections\n\nimport numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\n_args = 'float* y, float* x, int cdimy, int cdimx, int rdim, int coffset'\n_preamble = '''\n#define COPY(statement) \\\n int l = i / (rdim * cdimy); \\\n int c = i / rdim % cdimy + coffset; \\\n int r = i % rdim; \\\n int idx = r + rdim * (c + cdimx * l); \\\n statement;\n'''\n\n\nclass SplitAxis(function.Function):\n\n \"\"\"Function that splits multiple arrays towards the specified axis.\"\"\"\n\n def __init__(self, indices_or_sections, axis):\n if not isinstance(indices_or_sections, (int, collections.Iterable)):\n raise TypeError('indices_or_sections must be integer or 1-D array')\n self.indices_or_sections = indices_or_sections\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n type_check.expect(in_types[0].ndim >= self.axis)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n max_index = type_check.Variable(\n self.indices_or_sections[-1], 'max_index')\n type_check.expect(in_types[0].shape[self.axis] > max_index)\n else:\n sections = type_check.Variable(\n self.indices_or_sections, 'sections')\n type_check.expect(in_types[0].shape[self.axis] % sections == 0)\n\n def forward_cpu(self, x):\n if isinstance(self.indices_or_sections, collections.Iterable):\n cdimx = x[0].shape[self.axis]\n ind = list(self.indices_or_sections)\n ind.append(cdimx)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, cdimx) - prev_i)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n prev_i = i\n return tuple(numpy.split(x[0], self.indices_or_sections, self.axis))\n\n def forward_gpu(self, x):\n xshape = x[0].shape\n self.cdimx = xshape[self.axis]\n self.rdim = numpy.prod(xshape[self.axis + 1:], dtype=int)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n ind = list(self.indices_or_sections)\n ind.append(self.cdimx)\n else:\n sec = self.indices_or_sections\n if 
self.cdimx % sec:\n raise ValueError(\n 'array split does not result in an equal division')\n ind = numpy.arange(1, sec + 1) * (self.cdimx // sec)\n ys = []\n kernel = cuda.elementwise(\n _args, 'COPY(y[i] = x[idx])', 'split_fwd', preamble=_preamble)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, self.cdimx) - prev_i)\n s = list(xshape)\n s[self.axis] = cdimy\n y = cuda.empty(s, dtype=x[0].dtype)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n kernel(y, x[0], cdimy, self.cdimx, self.rdim, prev_i)\n prev_i = i\n ys.append(y)\n return tuple(ys)\n\n def backward_cpu(self, x, gys):\n return numpy.concatenate(gys, axis=self.axis),\n\n def backward_gpu(self, x, gys):\n gx = cuda.empty_like(x[0])\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)\n for gy in gys:\n cdimy = gy.shape[self.axis]\n if cdimy != 0:\n kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)\n coffset += cdimy\n return gx,\n\n\ndef split_axis(x, indices_or_sections, axis):\n \"\"\"Splits given variables along an axis.\n\n Args:\n x (tuple of Variables): Variables to be split.\n indices_or_sections (int or 1-D array): If this argument is an integer,\n N, the array will be divided into N equal arrays along axis.\n If it is a 1-D array of sorted integers, it\n indicates the positions where the array is split.\n axis (int): Axis that the input array is split along.\n\n Returns:\n ``tuple`` or ``Variable``: Tuple of :class:`~chainer.Variable` objects\n if the number of outputs is more than 1 or\n :class:`~chainer.Variable` otherwise.\n\n .. note::\n This function raises ``ValueError`` if at least\n one of the outputs is splitted to zero-size\n (i.e. `axis`-th value of its shape is zero).\n\n \"\"\"\n return SplitAxis(indices_or_sections, axis)(x)\n", "path": "chainer/functions/split_axis.py"}]}
| 1,846 | 138 |
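The chainer record above turns on a NumPy detail that can be reproduced without a GPU: the product of an empty shape is a floating-point `1.0` unless an integer dtype is requested, which is exactly what the `dtype=int` change enforces. A small illustrative sketch using plain NumPy:

```python
import numpy

xshape = (3,)          # 1-D array split along its last (and only) axis
trailing = xshape[1:]  # () -- no axes remain after the split axis

rdim_default = numpy.prod(trailing)           # 1.0, a NumPy floating-point scalar
rdim_fixed = numpy.prod(trailing, dtype=int)  # 1, an integer scalar

print(type(rdim_default), type(rdim_fixed))
# The float result is what pycuda's argument packing rejects on Python 3 (struct.error).
```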
gh_patches_debug_45615
|
rasdani/github-patches
|
git_diff
|
Flexget__Flexget-2628
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AniList crashing with custom lists
### Expected behaviour:
Parse AniList by status, ignore custom lists since they are duplicate entries from status lists.
### Actual behaviour:
Trying to parse custom lists causes a crash
### Steps to reproduce:
- Step 1: Try to parse an AniList account with a custom list.
#### Config:
```yaml
Paste FULL config and remove any personal info if config is too long, attach the file to the ticket.
If the issue is with a single task, you can get the resulting configuration by running:
flexget execute --task <NAME> --dump-config
Make sure to redact any personal information (passwords, api keys, etc) !
```
#### Log:
<details>
<summary>(click to expand)</summary>
```
2020-03-18 10:22:25 CRITICAL task_queue BUG: Unhandled exception during task queue run loop.
2020-03-18 10:22:25 DEBUG scheduler task finished executing: fill-anime-movies-list
Traceback (most recent call last):
File "github\flexget\flexget\task_queue.py", line 46, in run
self.current_task.execute()
File "github\flexget\flexget\task.py", line 80, in wrapper
return func(self, *args, **kw)
File "github\flexget\flexget\task.py", line 697, in execute
self._execute()
File "github\flexget\flexget\task.py", line 663, in _execute
self.__run_task_phase(phase)
File "github\flexget\flexget\task.py", line 492, in __run_task_phase
for e in response:
File "github\flexget\flexget\utils\cached_input.py", line 217, in __iter__
for item in self.iterable:
File "github\flexget\flexget\plugins\input\anilist.py", line 133, in on_task_input
entry['al_list_status'] = list_status['status'].capitalize()
AttributeError: 'NoneType' object has no attribute 'capitalize'
```
</details>
### Additional information:
- FlexGet version: 3.1.34
- Python version: 3.7
- Installation method: git
- Using daemon (yes/no): yes
- OS and version: Win10
- Link to crash log: N/A
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/plugins/input/anilist.py`
Content:
```
1 from loguru import logger
2
3 from flexget import plugin
4 from flexget.config_schema import one_or_more
5 from flexget.entry import Entry
6 from flexget.event import event
7 from flexget.utils.cached_input import cached
8 from flexget.utils.requests import RequestException
9
10 logger = logger.bind(name='anilist')
11
12 LIST_STATUS = ['current', 'planning', 'completed', 'dropped', 'paused', 'repeating']
13
14 RELEASE_STATUS = ['finished', 'releasing', 'not_yet_released', 'cancelled', 'all']
15
16 ANIME_FORMAT = ['tv', 'tv_short', 'movie', 'special', 'ova', 'ona', 'all']
17
18 TRAILER_SOURCE = {
19 'youtube': 'https://www.youtube.com/embed/',
20 'dailymotion': 'https://www.dailymotion.com/embed/video/',
21 }
22
23
24 class AniList(object):
25 """" Creates entries for series and movies from your AniList list
26
27 Syntax:
28 anilist:
29 username: <value>
30 status:
31 - <current|planning|completed|dropped|paused|repeating>
32 - <current|planning|completed|dropped|paused|repeating>
33 ...
34 release_status:
35 - <all|finished|releasing|not_yet_released|cancelled>
36 - <finished|releasing|not_yet_released|cancelled>
37 ...
38 format:
39 - <all|tv|tv_short|movie|special|ova|ona>
40 - <tv|tv_short|movie|special|ova|ona>
41 ...
42 """
43
44 schema = {
45 'oneOf': [
46 {'type': 'string'},
47 {
48 'type': 'object',
49 'properties': {
50 'username': {'type': 'string'},
51 'status': one_or_more(
52 {'type': 'string', 'enum': LIST_STATUS}, unique_items=True
53 ),
54 'release_status': one_or_more(
55 {'type': 'string', 'enum': RELEASE_STATUS}, unique_items=True
56 ),
57 'format': one_or_more(
58 {'type': 'string', 'enum': ANIME_FORMAT}, unique_items=True
59 ),
60 },
61 'required': ['username'],
62 'additionalProperties': False,
63 },
64 ]
65 }
66
67 @cached('anilist', persist='2 hours')
68 def on_task_input(self, task, config):
69 if isinstance(config, str):
70 config = {'username': config}
71 selected_list_status = config['status'] if 'status' in config else ['current', 'planning']
72 selected_release_status = (
73 config['release_status'] if 'release_status' in config else ['all']
74 )
75 selected_formats = config['format'] if 'format' in config else ['all']
76
77 if not isinstance(selected_list_status, list):
78 selected_list_status = [selected_list_status]
79
80 if not isinstance(selected_release_status, list):
81 selected_release_status = [selected_release_status]
82
83 if not isinstance(selected_formats, list):
84 selected_formats = [selected_formats]
85
86 logger.debug('Selected List Status: {}', selected_list_status)
87 logger.debug('Selected Release Status: {}', selected_release_status)
88 logger.debug('Selected Formats: {}', selected_formats)
89
90 req_variables = {'user': config['username']}
91 req_chunk = 1
92 req_fields = (
93 'status, title{ romaji, english }, synonyms, siteUrl, idMal, format, episodes, '
94 'trailer{ site, id }, coverImage{ large }, bannerImage, genres, tags{ name }, '
95 'externalLinks{ site, url }'
96 )
97 while req_chunk:
98 req_query = (
99 f'query ($user: String){{ collection: MediaListCollection(userName: $user, '
100 f'type: ANIME, perChunk: 500, chunk: {req_chunk}, status_in: '
101 f'[{", ".join([s.upper() for s in selected_list_status])}]) {{ hasNextChunk, '
102 f'statuses: lists{{ status, list: entries{{ anime: media{{ {req_fields} }}}}}}}}}}'
103 )
104
105 try:
106 list_response = task.requests.post(
107 'https://graphql.anilist.co',
108 json={'query': req_query, 'variables': req_variables},
109 )
110 except RequestException as e:
111 raise plugin.PluginError('Error reading list - {url}'.format(url=e))
112
113 try:
114 list_response = list_response.json()['data']
115 logger.debug('JSON output: {}', list_response)
116 for list_status in list_response['collection']['statuses']:
117 for anime in list_status['list']:
118 anime = anime['anime']
119 has_selected_release_status = (
120 anime['status'].lower() in selected_release_status
121 or 'all' in selected_release_status
122 )
123 has_selected_type = (
124 anime['format'].lower() in selected_formats
125 or 'all' in selected_formats
126 )
127 if has_selected_type and has_selected_release_status:
128 entry = Entry()
129 entry['title'] = anime['title']['romaji']
130 entry['al_title'] = anime['title']
131 entry['al_format'] = anime['format']
132 entry['al_release_status'] = anime['status'].capitalize()
133 entry['al_list_status'] = list_status['status'].capitalize()
134 entry['alternate_name'] = anime.get('synonyms', [])
135 if (
136 anime['title'].get('english')
137 and anime['title'].get('english') != anime['title']['romaji']
138 and anime['title'].get('english') not in entry['alternate_name']
139 ):
140 entry['alternate_name'].insert(0, anime['title']['english'])
141 entry['url'] = anime['siteUrl']
142 entry['al_idMal'] = anime['idMal']
143 entry['al_episodes'] = anime['episodes']
144 entry['al_trailer'] = (
145 TRAILER_SOURCE[anime['trailer']['site']] + anime['trailer']['id']
146 if anime['trailer']
147 else ''
148 )
149 entry['al_cover'] = anime['coverImage']['large']
150 entry['al_banner'] = anime['bannerImage']
151 entry['al_genres'] = anime['genres']
152 entry['al_tags'] = [t['name'] for t in anime['tags']]
153 entry['al_links'] = anime['externalLinks']
154 if entry.isvalid():
155 yield entry
156 req_chunk = req_chunk + 1 if list_response['collection']['hasNextChunk'] else False
157
158 except ValueError:
159 raise plugin.PluginError('Invalid JSON response')
160
161
162 @event('plugin.register')
163 def register_plugin():
164 plugin.register(AniList, 'anilist', api_ver=2)
165
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/flexget/plugins/input/anilist.py b/flexget/plugins/input/anilist.py
--- a/flexget/plugins/input/anilist.py
+++ b/flexget/plugins/input/anilist.py
@@ -26,7 +26,7 @@
Syntax:
anilist:
- username: <value>
+ username: <string>
status:
- <current|planning|completed|dropped|paused|repeating>
- <current|planning|completed|dropped|paused|repeating>
@@ -39,6 +39,10 @@
- <all|tv|tv_short|movie|special|ova|ona>
- <tv|tv_short|movie|special|ova|ona>
...
+ list:
+ - <string>
+ - <string>
+ ...
"""
schema = {
@@ -57,6 +61,7 @@
'format': one_or_more(
{'type': 'string', 'enum': ANIME_FORMAT}, unique_items=True
),
+ 'list': one_or_more({'type': 'string'}),
},
'required': ['username'],
'additionalProperties': False,
@@ -68,11 +73,10 @@
def on_task_input(self, task, config):
if isinstance(config, str):
config = {'username': config}
- selected_list_status = config['status'] if 'status' in config else ['current', 'planning']
- selected_release_status = (
- config['release_status'] if 'release_status' in config else ['all']
- )
- selected_formats = config['format'] if 'format' in config else ['all']
+ selected_list_status = config.get('status', ['current', 'planning'])
+ selected_release_status = config.get('release_status', ['all'])
+ selected_formats = config.get('format', ['all'])
+ selected_list_name = config.get('list', [])
if not isinstance(selected_list_status, list):
selected_list_status = [selected_list_status]
@@ -83,6 +87,10 @@
if not isinstance(selected_formats, list):
selected_formats = [selected_formats]
+ if not isinstance(selected_list_name, list):
+ selected_list_name = [selected_list_name]
+ selected_list_name = [i.lower() for i in selected_list_name]
+
logger.debug('Selected List Status: {}', selected_list_status)
logger.debug('Selected Release Status: {}', selected_release_status)
logger.debug('Selected Formats: {}', selected_formats)
@@ -99,7 +107,8 @@
f'query ($user: String){{ collection: MediaListCollection(userName: $user, '
f'type: ANIME, perChunk: 500, chunk: {req_chunk}, status_in: '
f'[{", ".join([s.upper() for s in selected_list_status])}]) {{ hasNextChunk, '
- f'statuses: lists{{ status, list: entries{{ anime: media{{ {req_fields} }}}}}}}}}}'
+ f'statuses: lists{{ status, name, list: entries{{ anime: media{{ {req_fields}'
+ f' }}}}}}}}}}'
)
try:
@@ -114,6 +123,11 @@
list_response = list_response.json()['data']
logger.debug('JSON output: {}', list_response)
for list_status in list_response['collection']['statuses']:
+ if (
+ selected_list_name
+ and list_status['name'].lower() not in selected_list_name
+ ):
+ continue
for anime in list_status['list']:
anime = anime['anime']
has_selected_release_status = (
@@ -130,7 +144,12 @@
entry['al_title'] = anime['title']
entry['al_format'] = anime['format']
entry['al_release_status'] = anime['status'].capitalize()
- entry['al_list_status'] = list_status['status'].capitalize()
+ entry['al_list'] = list_status['name']
+ entry['al_list_status'] = (
+ list_status['status'].capitalize()
+ if list_status.get('status')
+ else ''
+ )
entry['alternate_name'] = anime.get('synonyms', [])
if (
anime['title'].get('english')
|
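The guard added in the diff above works because AniList custom lists come back with no status (`None`), so calling `.capitalize()` on it directly raises `AttributeError`. A small sketch of the same defensive pattern against a hypothetical payload:

```python
# Hypothetical API payload: a custom list reports a name but no status.
list_status = {'name': 'My Custom List', 'status': None}

al_list_status = (
    list_status['status'].capitalize() if list_status.get('status') else ''
)
print(repr(al_list_status))  # '' instead of an AttributeError on None
```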
{"golden_diff": "diff --git a/flexget/plugins/input/anilist.py b/flexget/plugins/input/anilist.py\n--- a/flexget/plugins/input/anilist.py\n+++ b/flexget/plugins/input/anilist.py\n@@ -26,7 +26,7 @@\n \n Syntax:\n anilist:\n- username: <value>\n+ username: <string>\n status:\n - <current|planning|completed|dropped|paused|repeating>\n - <current|planning|completed|dropped|paused|repeating>\n@@ -39,6 +39,10 @@\n - <all|tv|tv_short|movie|special|ova|ona>\n - <tv|tv_short|movie|special|ova|ona>\n ...\n+ list:\n+ - <string>\n+ - <string>\n+ ...\n \"\"\"\n \n schema = {\n@@ -57,6 +61,7 @@\n 'format': one_or_more(\n {'type': 'string', 'enum': ANIME_FORMAT}, unique_items=True\n ),\n+ 'list': one_or_more({'type': 'string'}),\n },\n 'required': ['username'],\n 'additionalProperties': False,\n@@ -68,11 +73,10 @@\n def on_task_input(self, task, config):\n if isinstance(config, str):\n config = {'username': config}\n- selected_list_status = config['status'] if 'status' in config else ['current', 'planning']\n- selected_release_status = (\n- config['release_status'] if 'release_status' in config else ['all']\n- )\n- selected_formats = config['format'] if 'format' in config else ['all']\n+ selected_list_status = config.get('status', ['current', 'planning'])\n+ selected_release_status = config.get('release_status', ['all'])\n+ selected_formats = config.get('format', ['all'])\n+ selected_list_name = config.get('list', [])\n \n if not isinstance(selected_list_status, list):\n selected_list_status = [selected_list_status]\n@@ -83,6 +87,10 @@\n if not isinstance(selected_formats, list):\n selected_formats = [selected_formats]\n \n+ if not isinstance(selected_list_name, list):\n+ selected_list_name = [selected_list_name]\n+ selected_list_name = [i.lower() for i in selected_list_name]\n+\n logger.debug('Selected List Status: {}', selected_list_status)\n logger.debug('Selected Release Status: {}', selected_release_status)\n logger.debug('Selected Formats: {}', selected_formats)\n@@ -99,7 +107,8 @@\n f'query ($user: String){{ collection: MediaListCollection(userName: $user, '\n f'type: ANIME, perChunk: 500, chunk: {req_chunk}, status_in: '\n f'[{\", \".join([s.upper() for s in selected_list_status])}]) {{ hasNextChunk, '\n- f'statuses: lists{{ status, list: entries{{ anime: media{{ {req_fields} }}}}}}}}}}'\n+ f'statuses: lists{{ status, name, list: entries{{ anime: media{{ {req_fields}'\n+ f' }}}}}}}}}}'\n )\n \n try:\n@@ -114,6 +123,11 @@\n list_response = list_response.json()['data']\n logger.debug('JSON output: {}', list_response)\n for list_status in list_response['collection']['statuses']:\n+ if (\n+ selected_list_name\n+ and list_status['name'].lower() not in selected_list_name\n+ ):\n+ continue\n for anime in list_status['list']:\n anime = anime['anime']\n has_selected_release_status = (\n@@ -130,7 +144,12 @@\n entry['al_title'] = anime['title']\n entry['al_format'] = anime['format']\n entry['al_release_status'] = anime['status'].capitalize()\n- entry['al_list_status'] = list_status['status'].capitalize()\n+ entry['al_list'] = list_status['name']\n+ entry['al_list_status'] = (\n+ list_status['status'].capitalize()\n+ if list_status.get('status')\n+ else ''\n+ )\n entry['alternate_name'] = anime.get('synonyms', [])\n if (\n anime['title'].get('english')\n", "issue": "AniList crashing with custom lists\n### Expected behaviour:\r\nParse AniList by status, ignore custom lists since they are duplicate entries from status lists.\r\n\r\n### Actual behaviour:\r\nTrying to parse custom lists cause a crash\r\n### 
Steps to reproduce:\r\n- Step 1: Try to parse an AniList account with a custom list.\r\n\r\n#### Config:\r\n```yaml\r\nPaste FULL config and remove any personal info if config is too long, attach the file to the ticket.\r\nIf issue is with a single task, you can get get resulting configuration by running:\r\n flexget execute --task <NAME> --dump-config\r\nMake sure to redact any personal information (passwords, api keys, etc) !\r\n```\r\n \r\n#### Log:\r\n<details>\r\n <summary>(click to expand)</summary>\r\n\r\n```\r\n2020-03-18 10:22:25 CRITICAL task_queue BUG: Unhandled exception during task queue run loop.\r\n2020-03-18 10:22:25 DEBUG scheduler task finished executing: fill-anime-movies-list\r\nTraceback (most recent call last):\r\n File \"github\\flexget\\flexget\\task_queue.py\", line 46, in run\r\n self.current_task.execute()\r\n File \"github\\flexget\\flexget\\task.py\", line 80, in wrapper\r\n return func(self, *args, **kw)\r\n File \"github\\flexget\\flexget\\task.py\", line 697, in execute\r\n self._execute()\r\n File \"github\\flexget\\flexget\\task.py\", line 663, in _execute\r\n self.__run_task_phase(phase)\r\n File \"github\\flexget\\flexget\\task.py\", line 492, in __run_task_phase\r\n for e in response:\r\n File \"github\\flexget\\flexget\\utils\\cached_input.py\", line 217, in __iter__\r\n for item in self.iterable:\r\n File \"github\\flexget\\flexget\\plugins\\input\\anilist.py\", line 133, in on_task_input\r\n entry['al_list_status'] = list_status['status'].capitalize()\r\nAttributeError: 'NoneType' object has no attribute 'capitalize'\r\n```\r\n</details>\r\n\r\n### Additional information:\r\n\r\n- FlexGet version: 3.1.34\r\n- Python version: 3.7\r\n- Installation method: git\r\n- Using daemon (yes/no): yes\r\n- OS and version: Win10\r\n- Link to crash log: N/A\r\n\n", "before_files": [{"content": "from loguru import logger\n\nfrom flexget import plugin\nfrom flexget.config_schema import one_or_more\nfrom flexget.entry import Entry\nfrom flexget.event import event\nfrom flexget.utils.cached_input import cached\nfrom flexget.utils.requests import RequestException\n\nlogger = logger.bind(name='anilist')\n\nLIST_STATUS = ['current', 'planning', 'completed', 'dropped', 'paused', 'repeating']\n\nRELEASE_STATUS = ['finished', 'releasing', 'not_yet_released', 'cancelled', 'all']\n\nANIME_FORMAT = ['tv', 'tv_short', 'movie', 'special', 'ova', 'ona', 'all']\n\nTRAILER_SOURCE = {\n 'youtube': 'https://www.youtube.com/embed/',\n 'dailymotion': 'https://www.dailymotion.com/embed/video/',\n}\n\n\nclass AniList(object):\n \"\"\"\" Creates entries for series and movies from your AniList list\n\n Syntax:\n anilist:\n username: <value>\n status:\n - <current|planning|completed|dropped|paused|repeating>\n - <current|planning|completed|dropped|paused|repeating>\n ...\n release_status:\n - <all|finished|releasing|not_yet_released|cancelled>\n - <finished|releasing|not_yet_released|cancelled>\n ...\n format:\n - <all|tv|tv_short|movie|special|ova|ona>\n - <tv|tv_short|movie|special|ova|ona>\n ...\n \"\"\"\n\n schema = {\n 'oneOf': [\n {'type': 'string'},\n {\n 'type': 'object',\n 'properties': {\n 'username': {'type': 'string'},\n 'status': one_or_more(\n {'type': 'string', 'enum': LIST_STATUS}, unique_items=True\n ),\n 'release_status': one_or_more(\n {'type': 'string', 'enum': RELEASE_STATUS}, unique_items=True\n ),\n 'format': one_or_more(\n {'type': 'string', 'enum': ANIME_FORMAT}, unique_items=True\n ),\n },\n 'required': ['username'],\n 'additionalProperties': False,\n },\n ]\n 
}\n\n @cached('anilist', persist='2 hours')\n def on_task_input(self, task, config):\n if isinstance(config, str):\n config = {'username': config}\n selected_list_status = config['status'] if 'status' in config else ['current', 'planning']\n selected_release_status = (\n config['release_status'] if 'release_status' in config else ['all']\n )\n selected_formats = config['format'] if 'format' in config else ['all']\n\n if not isinstance(selected_list_status, list):\n selected_list_status = [selected_list_status]\n\n if not isinstance(selected_release_status, list):\n selected_release_status = [selected_release_status]\n\n if not isinstance(selected_formats, list):\n selected_formats = [selected_formats]\n\n logger.debug('Selected List Status: {}', selected_list_status)\n logger.debug('Selected Release Status: {}', selected_release_status)\n logger.debug('Selected Formats: {}', selected_formats)\n\n req_variables = {'user': config['username']}\n req_chunk = 1\n req_fields = (\n 'status, title{ romaji, english }, synonyms, siteUrl, idMal, format, episodes, '\n 'trailer{ site, id }, coverImage{ large }, bannerImage, genres, tags{ name }, '\n 'externalLinks{ site, url }'\n )\n while req_chunk:\n req_query = (\n f'query ($user: String){{ collection: MediaListCollection(userName: $user, '\n f'type: ANIME, perChunk: 500, chunk: {req_chunk}, status_in: '\n f'[{\", \".join([s.upper() for s in selected_list_status])}]) {{ hasNextChunk, '\n f'statuses: lists{{ status, list: entries{{ anime: media{{ {req_fields} }}}}}}}}}}'\n )\n\n try:\n list_response = task.requests.post(\n 'https://graphql.anilist.co',\n json={'query': req_query, 'variables': req_variables},\n )\n except RequestException as e:\n raise plugin.PluginError('Error reading list - {url}'.format(url=e))\n\n try:\n list_response = list_response.json()['data']\n logger.debug('JSON output: {}', list_response)\n for list_status in list_response['collection']['statuses']:\n for anime in list_status['list']:\n anime = anime['anime']\n has_selected_release_status = (\n anime['status'].lower() in selected_release_status\n or 'all' in selected_release_status\n )\n has_selected_type = (\n anime['format'].lower() in selected_formats\n or 'all' in selected_formats\n )\n if has_selected_type and has_selected_release_status:\n entry = Entry()\n entry['title'] = anime['title']['romaji']\n entry['al_title'] = anime['title']\n entry['al_format'] = anime['format']\n entry['al_release_status'] = anime['status'].capitalize()\n entry['al_list_status'] = list_status['status'].capitalize()\n entry['alternate_name'] = anime.get('synonyms', [])\n if (\n anime['title'].get('english')\n and anime['title'].get('english') != anime['title']['romaji']\n and anime['title'].get('english') not in entry['alternate_name']\n ):\n entry['alternate_name'].insert(0, anime['title']['english'])\n entry['url'] = anime['siteUrl']\n entry['al_idMal'] = anime['idMal']\n entry['al_episodes'] = anime['episodes']\n entry['al_trailer'] = (\n TRAILER_SOURCE[anime['trailer']['site']] + anime['trailer']['id']\n if anime['trailer']\n else ''\n )\n entry['al_cover'] = anime['coverImage']['large']\n entry['al_banner'] = anime['bannerImage']\n entry['al_genres'] = anime['genres']\n entry['al_tags'] = [t['name'] for t in anime['tags']]\n entry['al_links'] = anime['externalLinks']\n if entry.isvalid():\n yield entry\n req_chunk = req_chunk + 1 if list_response['collection']['hasNextChunk'] else False\n\n except ValueError:\n raise plugin.PluginError('Invalid JSON 
response')\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(AniList, 'anilist', api_ver=2)\n", "path": "flexget/plugins/input/anilist.py"}], "after_files": [{"content": "from loguru import logger\n\nfrom flexget import plugin\nfrom flexget.config_schema import one_or_more\nfrom flexget.entry import Entry\nfrom flexget.event import event\nfrom flexget.utils.cached_input import cached\nfrom flexget.utils.requests import RequestException\n\nlogger = logger.bind(name='anilist')\n\nLIST_STATUS = ['current', 'planning', 'completed', 'dropped', 'paused', 'repeating']\n\nRELEASE_STATUS = ['finished', 'releasing', 'not_yet_released', 'cancelled', 'all']\n\nANIME_FORMAT = ['tv', 'tv_short', 'movie', 'special', 'ova', 'ona', 'all']\n\nTRAILER_SOURCE = {\n 'youtube': 'https://www.youtube.com/embed/',\n 'dailymotion': 'https://www.dailymotion.com/embed/video/',\n}\n\n\nclass AniList(object):\n \"\"\"\" Creates entries for series and movies from your AniList list\n\n Syntax:\n anilist:\n username: <string>\n status:\n - <current|planning|completed|dropped|paused|repeating>\n - <current|planning|completed|dropped|paused|repeating>\n ...\n release_status:\n - <all|finished|releasing|not_yet_released|cancelled>\n - <finished|releasing|not_yet_released|cancelled>\n ...\n format:\n - <all|tv|tv_short|movie|special|ova|ona>\n - <tv|tv_short|movie|special|ova|ona>\n ...\n list:\n - <string>\n - <string>\n ...\n \"\"\"\n\n schema = {\n 'oneOf': [\n {'type': 'string'},\n {\n 'type': 'object',\n 'properties': {\n 'username': {'type': 'string'},\n 'status': one_or_more(\n {'type': 'string', 'enum': LIST_STATUS}, unique_items=True\n ),\n 'release_status': one_or_more(\n {'type': 'string', 'enum': RELEASE_STATUS}, unique_items=True\n ),\n 'format': one_or_more(\n {'type': 'string', 'enum': ANIME_FORMAT}, unique_items=True\n ),\n 'list': one_or_more({'type': 'string'}),\n },\n 'required': ['username'],\n 'additionalProperties': False,\n },\n ]\n }\n\n @cached('anilist', persist='2 hours')\n def on_task_input(self, task, config):\n if isinstance(config, str):\n config = {'username': config}\n selected_list_status = config.get('status', ['current', 'planning'])\n selected_release_status = config.get('release_status', ['all'])\n selected_formats = config.get('format', ['all'])\n selected_list_name = config.get('list', [])\n\n if not isinstance(selected_list_status, list):\n selected_list_status = [selected_list_status]\n\n if not isinstance(selected_release_status, list):\n selected_release_status = [selected_release_status]\n\n if not isinstance(selected_formats, list):\n selected_formats = [selected_formats]\n\n if not isinstance(selected_list_name, list):\n selected_list_name = [selected_list_name]\n selected_list_name = [i.lower() for i in selected_list_name]\n\n logger.debug('Selected List Status: {}', selected_list_status)\n logger.debug('Selected Release Status: {}', selected_release_status)\n logger.debug('Selected Formats: {}', selected_formats)\n\n req_variables = {'user': config['username']}\n req_chunk = 1\n req_fields = (\n 'status, title{ romaji, english }, synonyms, siteUrl, idMal, format, episodes, '\n 'trailer{ site, id }, coverImage{ large }, bannerImage, genres, tags{ name }, '\n 'externalLinks{ site, url }'\n )\n while req_chunk:\n req_query = (\n f'query ($user: String){{ collection: MediaListCollection(userName: $user, '\n f'type: ANIME, perChunk: 500, chunk: {req_chunk}, status_in: '\n f'[{\", \".join([s.upper() for s in selected_list_status])}]) {{ hasNextChunk, '\n 
f'statuses: lists{{ status, name, list: entries{{ anime: media{{ {req_fields}'\n f' }}}}}}}}}}'\n )\n\n try:\n list_response = task.requests.post(\n 'https://graphql.anilist.co',\n json={'query': req_query, 'variables': req_variables},\n )\n except RequestException as e:\n raise plugin.PluginError('Error reading list - {url}'.format(url=e))\n\n try:\n list_response = list_response.json()['data']\n logger.debug('JSON output: {}', list_response)\n for list_status in list_response['collection']['statuses']:\n if (\n selected_list_name\n and list_status['name'].lower() not in selected_list_name\n ):\n continue\n for anime in list_status['list']:\n anime = anime['anime']\n has_selected_release_status = (\n anime['status'].lower() in selected_release_status\n or 'all' in selected_release_status\n )\n has_selected_type = (\n anime['format'].lower() in selected_formats\n or 'all' in selected_formats\n )\n if has_selected_type and has_selected_release_status:\n entry = Entry()\n entry['title'] = anime['title']['romaji']\n entry['al_title'] = anime['title']\n entry['al_format'] = anime['format']\n entry['al_release_status'] = anime['status'].capitalize()\n entry['al_list'] = list_status['name']\n entry['al_list_status'] = (\n list_status['status'].capitalize()\n if list_status.get('status')\n else ''\n )\n entry['alternate_name'] = anime.get('synonyms', [])\n if (\n anime['title'].get('english')\n and anime['title'].get('english') != anime['title']['romaji']\n and anime['title'].get('english') not in entry['alternate_name']\n ):\n entry['alternate_name'].insert(0, anime['title']['english'])\n entry['url'] = anime['siteUrl']\n entry['al_idMal'] = anime['idMal']\n entry['al_episodes'] = anime['episodes']\n entry['al_trailer'] = (\n TRAILER_SOURCE[anime['trailer']['site']] + anime['trailer']['id']\n if anime['trailer']\n else ''\n )\n entry['al_cover'] = anime['coverImage']['large']\n entry['al_banner'] = anime['bannerImage']\n entry['al_genres'] = anime['genres']\n entry['al_tags'] = [t['name'] for t in anime['tags']]\n entry['al_links'] = anime['externalLinks']\n if entry.isvalid():\n yield entry\n req_chunk = req_chunk + 1 if list_response['collection']['hasNextChunk'] else False\n\n except ValueError:\n raise plugin.PluginError('Invalid JSON response')\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(AniList, 'anilist', api_ver=2)\n", "path": "flexget/plugins/input/anilist.py"}]}
| 2,653 | 967 |
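As a quick illustration of the fix captured in the row above: custom AniList lists report `status` as `None`, which is what crashed `.capitalize()`. The sketch below is a made-up stand-in (the `config` and `list_status` dicts are hypothetical, not the plugin's real objects), but it shows the two defensive moves in the patch: `dict.get()` with defaults for optional config keys, and a guard before formatting the list status.

```python
# Hypothetical stand-ins for the plugin's parsed task config and one AniList list.
config = {"username": "someuser"}
list_status = {"name": "Rewatch pile", "status": None, "list": []}

# Missing keys fall back to defaults instead of raising KeyError.
selected_list_status = config.get("status", ["current", "planning"])

# Custom lists have no status, so guard before calling .capitalize().
al_list_status = list_status["status"].capitalize() if list_status.get("status") else ""

print(selected_list_status, repr(al_list_status))  # ['current', 'planning'] ''
```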
gh_patches_debug_13030
|
rasdani/github-patches
|
git_diff
|
microsoft__AzureTRE-395
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Application Insights configuration for processor function broken
The `createNewWorkspace` function monitoring blade indicates that Application Insights integration is not enabled.

At first glance, the config seems to be there in [processor_function/function.tf](https://github.com/microsoft/AzureTRE/blob/develop/templates/core/terraform/processor_function/function.tf#L24)
```plaintext
APP_INSIGHTS_INSTRUMENTATION_KEY = var.app_insights_instrumentation_key
```
however, the setting key name is incorrect.
Manually enabling the integration in the portal adds additional config keys:
`APPLICATIONINSIGHTS_CONNECTION_STRING`
`APPINSIGHTS_INSTRUMENTATIONKEY`
The above keys and their values need to be added to the function app config.
[Reference](https://docs.microsoft.com/en-us/azure/azure-monitor/app/resource-manager-function-app)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `processor_function/shared/logging.py`
Content:
```
1 import logging
2 import os
3
4 from opencensus.ext.azure.log_exporter import AzureLogHandler
5 from opencensus.trace import config_integration
6 from opencensus.trace.samplers import AlwaysOnSampler
7 from opencensus.trace.tracer import Tracer
8
9
10 def initialize_logging(logging_level: int, correlation_id: str) -> logging.LoggerAdapter:
11 """
12 Adds the Application Insights handler for the root logger and sets the given logging level.
13 Creates and returns a logger adapter that integrates the correlation ID, if given, to the log messages.
14
15 :param logging_level: The logging level to set e.g., logging.WARNING.
16 :param correlation_id: Optional. The correlation ID that is passed on to the operation_Id in App Insights.
17 :returns: A newly created logger adapter.
18 """
19 logger = logging.getLogger()
20 logger.addHandler(logging.StreamHandler()) # For logging into console
21 app_insights_instrumentation_key = os.getenv("APP_INSIGHTS_INSTRUMENTATION_KEY")
22
23 try:
24 logger.addHandler(AzureLogHandler(connection_string=f"InstrumentationKey={app_insights_instrumentation_key}"))
25 except ValueError as e:
26 logger.error(f"Failed to set Application Insights logger handler: {e}")
27
28 config_integration.trace_integrations(['logging'])
29 logging.basicConfig(level=logging_level, format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')
30 Tracer(sampler=AlwaysOnSampler())
31 logger.setLevel(logging_level)
32
33 extra = None
34
35 if correlation_id:
36 extra = {'traceId': correlation_id}
37
38 adapter = logging.LoggerAdapter(logger, extra)
39 adapter.debug(f"Logger adapter initialized with extra: {extra}")
40
41 return adapter
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/processor_function/shared/logging.py b/processor_function/shared/logging.py
--- a/processor_function/shared/logging.py
+++ b/processor_function/shared/logging.py
@@ -18,10 +18,10 @@
"""
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler()) # For logging into console
- app_insights_instrumentation_key = os.getenv("APP_INSIGHTS_INSTRUMENTATION_KEY")
+ app_insights_connection_string = os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING")
try:
- logger.addHandler(AzureLogHandler(connection_string=f"InstrumentationKey={app_insights_instrumentation_key}"))
+ logger.addHandler(AzureLogHandler(connection_string=app_insights_connection_string))
except ValueError as e:
logger.error(f"Failed to set Application Insights logger handler: {e}")
|
{"golden_diff": "diff --git a/processor_function/shared/logging.py b/processor_function/shared/logging.py\n--- a/processor_function/shared/logging.py\n+++ b/processor_function/shared/logging.py\n@@ -18,10 +18,10 @@\n \"\"\"\n logger = logging.getLogger()\n logger.addHandler(logging.StreamHandler()) # For logging into console\n- app_insights_instrumentation_key = os.getenv(\"APP_INSIGHTS_INSTRUMENTATION_KEY\")\n+ app_insights_connection_string = os.getenv(\"APPLICATIONINSIGHTS_CONNECTION_STRING\")\n \n try:\n- logger.addHandler(AzureLogHandler(connection_string=f\"InstrumentationKey={app_insights_instrumentation_key}\"))\n+ logger.addHandler(AzureLogHandler(connection_string=app_insights_connection_string))\n except ValueError as e:\n logger.error(f\"Failed to set Application Insights logger handler: {e}\")\n", "issue": "Application Insights configuration for processor function broken\nThe `createNewWorkspace` function monitoring blade indicates that Application Insights integration is not enabled.\r\n\r\n\r\nAt a first glance, the config seems to be there in [processor_function/function.tf](https://github.com/microsoft/AzureTRE/blob/develop/templates/core/terraform/processor_function/function.tf#L24)\r\n```plaintext\r\n APP_INSIGHTS_INSTRUMENTATION_KEY = var.app_insights_instrumentation_key\r\n```\r\nhowever, the setting key name is incorrect.\r\n\r\nManually enabling the integration in the portal adds additional config keys:\r\n\r\n`APPLICATIONINSIGHTS_CONNECTION_STRING`\r\n`APPINSIGHTS_INSTRUMENTATIONKEY`\r\n\r\nNeed to add above keys with values to function app config.\r\n\r\n[Reference](https://docs.microsoft.com/en-us/azure/azure-monitor/app/resource-manager-function-app)\n", "before_files": [{"content": "import logging\nimport os\n\nfrom opencensus.ext.azure.log_exporter import AzureLogHandler\nfrom opencensus.trace import config_integration\nfrom opencensus.trace.samplers import AlwaysOnSampler\nfrom opencensus.trace.tracer import Tracer\n\n\ndef initialize_logging(logging_level: int, correlation_id: str) -> logging.LoggerAdapter:\n \"\"\"\n Adds the Application Insights handler for the root logger and sets the given logging level.\n Creates and returns a logger adapter that integrates the correlation ID, if given, to the log messages.\n\n :param logging_level: The logging level to set e.g., logging.WARNING.\n :param correlation_id: Optional. 
The correlation ID that is passed on to the operation_Id in App Insights.\n :returns: A newly created logger adapter.\n \"\"\"\n logger = logging.getLogger()\n logger.addHandler(logging.StreamHandler()) # For logging into console\n app_insights_instrumentation_key = os.getenv(\"APP_INSIGHTS_INSTRUMENTATION_KEY\")\n\n try:\n logger.addHandler(AzureLogHandler(connection_string=f\"InstrumentationKey={app_insights_instrumentation_key}\"))\n except ValueError as e:\n logger.error(f\"Failed to set Application Insights logger handler: {e}\")\n\n config_integration.trace_integrations(['logging'])\n logging.basicConfig(level=logging_level, format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')\n Tracer(sampler=AlwaysOnSampler())\n logger.setLevel(logging_level)\n\n extra = None\n\n if correlation_id:\n extra = {'traceId': correlation_id}\n\n adapter = logging.LoggerAdapter(logger, extra)\n adapter.debug(f\"Logger adapter initialized with extra: {extra}\")\n\n return adapter\n", "path": "processor_function/shared/logging.py"}], "after_files": [{"content": "import logging\nimport os\n\nfrom opencensus.ext.azure.log_exporter import AzureLogHandler\nfrom opencensus.trace import config_integration\nfrom opencensus.trace.samplers import AlwaysOnSampler\nfrom opencensus.trace.tracer import Tracer\n\n\ndef initialize_logging(logging_level: int, correlation_id: str) -> logging.LoggerAdapter:\n \"\"\"\n Adds the Application Insights handler for the root logger and sets the given logging level.\n Creates and returns a logger adapter that integrates the correlation ID, if given, to the log messages.\n\n :param logging_level: The logging level to set e.g., logging.WARNING.\n :param correlation_id: Optional. The correlation ID that is passed on to the operation_Id in App Insights.\n :returns: A newly created logger adapter.\n \"\"\"\n logger = logging.getLogger()\n logger.addHandler(logging.StreamHandler()) # For logging into console\n app_insights_connection_string = os.getenv(\"APPLICATIONINSIGHTS_CONNECTION_STRING\")\n\n try:\n logger.addHandler(AzureLogHandler(connection_string=app_insights_connection_string))\n except ValueError as e:\n logger.error(f\"Failed to set Application Insights logger handler: {e}\")\n\n config_integration.trace_integrations(['logging'])\n logging.basicConfig(level=logging_level, format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')\n Tracer(sampler=AlwaysOnSampler())\n logger.setLevel(logging_level)\n\n extra = None\n\n if correlation_id:\n extra = {'traceId': correlation_id}\n\n adapter = logging.LoggerAdapter(logger, extra)\n adapter.debug(f\"Logger adapter initialized with extra: {extra}\")\n\n return adapter\n", "path": "processor_function/shared/logging.py"}]}
| 932 | 181 |
gh_patches_debug_15975
|
rasdani/github-patches
|
git_diff
|
pypa__pip-3031
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
--process-dependency-links not recognized in requirements.txt
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pip/req/req_file.py`
Content:
```
1 """
2 Requirements file parsing
3 """
4
5 from __future__ import absolute_import
6
7 import os
8 import re
9 import shlex
10 import optparse
11 import warnings
12
13 from pip._vendor.six.moves.urllib import parse as urllib_parse
14 from pip._vendor.six.moves import filterfalse
15
16 import pip
17 from pip.download import get_file_content
18 from pip.req.req_install import InstallRequirement
19 from pip.exceptions import (RequirementsFileParseError)
20 from pip.utils.deprecation import RemovedInPip10Warning
21 from pip import cmdoptions
22
23 __all__ = ['parse_requirements']
24
25 SCHEME_RE = re.compile(r'^(http|https|file):', re.I)
26 COMMENT_RE = re.compile(r'(^|\s)+#.*$')
27
28 SUPPORTED_OPTIONS = [
29 cmdoptions.constraints,
30 cmdoptions.editable,
31 cmdoptions.requirements,
32 cmdoptions.no_index,
33 cmdoptions.index_url,
34 cmdoptions.find_links,
35 cmdoptions.extra_index_url,
36 cmdoptions.allow_external,
37 cmdoptions.allow_all_external,
38 cmdoptions.no_allow_external,
39 cmdoptions.allow_unsafe,
40 cmdoptions.no_allow_unsafe,
41 cmdoptions.use_wheel,
42 cmdoptions.no_use_wheel,
43 cmdoptions.always_unzip,
44 cmdoptions.no_binary,
45 cmdoptions.only_binary,
46 cmdoptions.trusted_host,
47 ]
48
49 # options to be passed to requirements
50 SUPPORTED_OPTIONS_REQ = [
51 cmdoptions.install_options,
52 cmdoptions.global_options
53 ]
54
55 # the 'dest' string values
56 SUPPORTED_OPTIONS_REQ_DEST = [o().dest for o in SUPPORTED_OPTIONS_REQ]
57
58
59 def parse_requirements(filename, finder=None, comes_from=None, options=None,
60 session=None, constraint=False, wheel_cache=None):
61 """Parse a requirements file and yield InstallRequirement instances.
62
63 :param filename: Path or url of requirements file.
64 :param finder: Instance of pip.index.PackageFinder.
65 :param comes_from: Origin description of requirements.
66 :param options: Global options.
67 :param session: Instance of pip.download.PipSession.
68 :param constraint: If true, parsing a constraint file rather than
69 requirements file.
70 :param wheel_cache: Instance of pip.wheel.WheelCache
71 """
72 if session is None:
73 raise TypeError(
74 "parse_requirements() missing 1 required keyword argument: "
75 "'session'"
76 )
77
78 _, content = get_file_content(
79 filename, comes_from=comes_from, session=session
80 )
81
82 lines = content.splitlines()
83 lines = ignore_comments(lines)
84 lines = join_lines(lines)
85 lines = skip_regex(lines, options)
86
87 for line_number, line in enumerate(lines, 1):
88 req_iter = process_line(line, filename, line_number, finder,
89 comes_from, options, session, wheel_cache,
90 constraint=constraint)
91 for req in req_iter:
92 yield req
93
94
95 def process_line(line, filename, line_number, finder=None, comes_from=None,
96 options=None, session=None, wheel_cache=None,
97 constraint=False):
98 """Process a single requirements line; This can result in creating/yielding
99 requirements, or updating the finder.
100
101 For lines that contain requirements, the only options that have an effect
102 are from SUPPORTED_OPTIONS_REQ, and they are scoped to the
103 requirement. Other options from SUPPORTED_OPTIONS may be present, but are
104 ignored.
105
106 For lines that do not contain requirements, the only options that have an
107 effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may
108 be present, but are ignored. These lines may contain multiple options
109 (although our docs imply only one is supported), and all our parsed and
110 affect the finder.
111
112 :param constraint: If True, parsing a constraints file.
113 """
114 parser = build_parser()
115 defaults = parser.get_default_values()
116 defaults.index_url = None
117 if finder:
118 # `finder.format_control` will be updated during parsing
119 defaults.format_control = finder.format_control
120 args_str, options_str = break_args_options(line)
121 opts, _ = parser.parse_args(shlex.split(options_str), defaults)
122
123 # preserve for the nested code path
124 line_comes_from = '%s %s (line %s)' % (
125 '-c' if constraint else '-r', filename, line_number)
126
127 # yield a line requirement
128 if args_str:
129 isolated = options.isolated_mode if options else False
130 if options:
131 cmdoptions.check_install_build_global(options, opts)
132 # get the options that apply to requirements
133 req_options = {}
134 for dest in SUPPORTED_OPTIONS_REQ_DEST:
135 if dest in opts.__dict__ and opts.__dict__[dest]:
136 req_options[dest] = opts.__dict__[dest]
137 yield InstallRequirement.from_line(
138 args_str, line_comes_from, constraint=constraint,
139 isolated=isolated, options=req_options, wheel_cache=wheel_cache
140 )
141
142 # yield an editable requirement
143 elif opts.editables:
144 isolated = options.isolated_mode if options else False
145 default_vcs = options.default_vcs if options else None
146 yield InstallRequirement.from_editable(
147 opts.editables[0], comes_from=line_comes_from,
148 constraint=constraint, default_vcs=default_vcs, isolated=isolated,
149 wheel_cache=wheel_cache
150 )
151
152 # parse a nested requirements file
153 elif opts.requirements or opts.constraints:
154 if opts.requirements:
155 req_path = opts.requirements[0]
156 nested_constraint = False
157 else:
158 req_path = opts.constraints[0]
159 nested_constraint = True
160 # original file is over http
161 if SCHEME_RE.search(filename):
162 # do a url join so relative paths work
163 req_path = urllib_parse.urljoin(filename, req_path)
164 # original file and nested file are paths
165 elif not SCHEME_RE.search(req_path):
166 # do a join so relative paths work
167 req_dir = os.path.dirname(filename)
168 req_path = os.path.join(os.path.dirname(filename), req_path)
169 # TODO: Why not use `comes_from='-r {} (line {})'` here as well?
170 parser = parse_requirements(
171 req_path, finder, comes_from, options, session,
172 constraint=nested_constraint, wheel_cache=wheel_cache
173 )
174 for req in parser:
175 yield req
176
177 # set finder options
178 elif finder:
179 if opts.allow_external:
180 warnings.warn(
181 "--allow-external has been deprecated and will be removed in "
182 "the future. Due to changes in the repository protocol, it no "
183 "longer has any effect.",
184 RemovedInPip10Warning,
185 )
186
187 if opts.allow_all_external:
188 warnings.warn(
189 "--allow-all-external has been deprecated and will be removed "
190 "in the future. Due to changes in the repository protocol, it "
191 "no longer has any effect.",
192 RemovedInPip10Warning,
193 )
194
195 if opts.allow_unverified:
196 warnings.warn(
197 "--allow-unverified has been deprecated and will be removed "
198 "in the future. Due to changes in the repository protocol, it "
199 "no longer has any effect.",
200 RemovedInPip10Warning,
201 )
202
203 if opts.index_url:
204 finder.index_urls = [opts.index_url]
205 if opts.use_wheel is False:
206 finder.use_wheel = False
207 pip.index.fmt_ctl_no_use_wheel(finder.format_control)
208 if opts.no_index is True:
209 finder.index_urls = []
210 if opts.extra_index_urls:
211 finder.index_urls.extend(opts.extra_index_urls)
212 if opts.find_links:
213 # FIXME: it would be nice to keep track of the source
214 # of the find_links: support a find-links local path
215 # relative to a requirements file.
216 value = opts.find_links[0]
217 req_dir = os.path.dirname(os.path.abspath(filename))
218 relative_to_reqs_file = os.path.join(req_dir, value)
219 if os.path.exists(relative_to_reqs_file):
220 value = relative_to_reqs_file
221 finder.find_links.append(value)
222 if opts.trusted_hosts:
223 finder.secure_origins.extend(
224 ("*", host, "*") for host in opts.trusted_hosts)
225
226
227 def break_args_options(line):
228 """Break up the line into an args and options string. We only want to shlex
229 (and then optparse) the options, not the args. args can contain markers
230 which are corrupted by shlex.
231 """
232 tokens = line.split(' ')
233 args = []
234 options = tokens[:]
235 for token in tokens:
236 if token.startswith('-') or token.startswith('--'):
237 break
238 else:
239 args.append(token)
240 options.pop(0)
241 return ' '.join(args), ' '.join(options)
242
243
244 def build_parser():
245 """
246 Return a parser for parsing requirement lines
247 """
248 parser = optparse.OptionParser(add_help_option=False)
249
250 option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ
251 for option_factory in option_factories:
252 option = option_factory()
253 parser.add_option(option)
254
255 # By default optparse sys.exits on parsing errors. We want to wrap
256 # that in our own exception.
257 def parser_exit(self, msg):
258 raise RequirementsFileParseError(msg)
259 parser.exit = parser_exit
260
261 return parser
262
263
264 def join_lines(iterator):
265 """
266 Joins a line ending in '\' with the previous line.
267 """
268 lines = []
269 for line in iterator:
270 if not line.endswith('\\'):
271 if lines:
272 lines.append(line)
273 yield ''.join(lines)
274 lines = []
275 else:
276 yield line
277 else:
278 lines.append(line.strip('\\'))
279
280 # TODO: handle space after '\'.
281 # TODO: handle '\' on last line.
282
283
284 def ignore_comments(iterator):
285 """
286 Strips and filters empty or commented lines.
287 """
288 for line in iterator:
289 line = COMMENT_RE.sub('', line)
290 line = line.strip()
291 if line:
292 yield line
293
294
295 def skip_regex(lines, options):
296 """
297 Optionally exclude lines that match '--skip-requirements-regex'
298 """
299 skip_regex = options.skip_requirements_regex if options else None
300 if skip_regex:
301 lines = filterfalse(re.compile(skip_regex).search, lines)
302 return lines
303
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pip/req/req_file.py b/pip/req/req_file.py
--- a/pip/req/req_file.py
+++ b/pip/req/req_file.py
@@ -43,6 +43,7 @@
cmdoptions.always_unzip,
cmdoptions.no_binary,
cmdoptions.only_binary,
+ cmdoptions.process_dependency_links,
cmdoptions.trusted_host,
]
@@ -219,6 +220,8 @@
if os.path.exists(relative_to_reqs_file):
value = relative_to_reqs_file
finder.find_links.append(value)
+ if opts.process_dependency_links:
+ finder.process_dependency_links = True
if opts.trusted_hosts:
finder.secure_origins.extend(
("*", host, "*") for host in opts.trusted_hosts)
|
{"golden_diff": "diff --git a/pip/req/req_file.py b/pip/req/req_file.py\n--- a/pip/req/req_file.py\n+++ b/pip/req/req_file.py\n@@ -43,6 +43,7 @@\n cmdoptions.always_unzip,\n cmdoptions.no_binary,\n cmdoptions.only_binary,\n+ cmdoptions.process_dependency_links,\n cmdoptions.trusted_host,\n ]\n \n@@ -219,6 +220,8 @@\n if os.path.exists(relative_to_reqs_file):\n value = relative_to_reqs_file\n finder.find_links.append(value)\n+ if opts.process_dependency_links:\n+ finder.process_dependency_links = True\n if opts.trusted_hosts:\n finder.secure_origins.extend(\n (\"*\", host, \"*\") for host in opts.trusted_hosts)\n", "issue": "--process-dependency-links not recognized in requirements.txt\n\n", "before_files": [{"content": "\"\"\"\nRequirements file parsing\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport os\nimport re\nimport shlex\nimport optparse\nimport warnings\n\nfrom pip._vendor.six.moves.urllib import parse as urllib_parse\nfrom pip._vendor.six.moves import filterfalse\n\nimport pip\nfrom pip.download import get_file_content\nfrom pip.req.req_install import InstallRequirement\nfrom pip.exceptions import (RequirementsFileParseError)\nfrom pip.utils.deprecation import RemovedInPip10Warning\nfrom pip import cmdoptions\n\n__all__ = ['parse_requirements']\n\nSCHEME_RE = re.compile(r'^(http|https|file):', re.I)\nCOMMENT_RE = re.compile(r'(^|\\s)+#.*$')\n\nSUPPORTED_OPTIONS = [\n cmdoptions.constraints,\n cmdoptions.editable,\n cmdoptions.requirements,\n cmdoptions.no_index,\n cmdoptions.index_url,\n cmdoptions.find_links,\n cmdoptions.extra_index_url,\n cmdoptions.allow_external,\n cmdoptions.allow_all_external,\n cmdoptions.no_allow_external,\n cmdoptions.allow_unsafe,\n cmdoptions.no_allow_unsafe,\n cmdoptions.use_wheel,\n cmdoptions.no_use_wheel,\n cmdoptions.always_unzip,\n cmdoptions.no_binary,\n cmdoptions.only_binary,\n cmdoptions.trusted_host,\n]\n\n# options to be passed to requirements\nSUPPORTED_OPTIONS_REQ = [\n cmdoptions.install_options,\n cmdoptions.global_options\n]\n\n# the 'dest' string values\nSUPPORTED_OPTIONS_REQ_DEST = [o().dest for o in SUPPORTED_OPTIONS_REQ]\n\n\ndef parse_requirements(filename, finder=None, comes_from=None, options=None,\n session=None, constraint=False, wheel_cache=None):\n \"\"\"Parse a requirements file and yield InstallRequirement instances.\n\n :param filename: Path or url of requirements file.\n :param finder: Instance of pip.index.PackageFinder.\n :param comes_from: Origin description of requirements.\n :param options: Global options.\n :param session: Instance of pip.download.PipSession.\n :param constraint: If true, parsing a constraint file rather than\n requirements file.\n :param wheel_cache: Instance of pip.wheel.WheelCache\n \"\"\"\n if session is None:\n raise TypeError(\n \"parse_requirements() missing 1 required keyword argument: \"\n \"'session'\"\n )\n\n _, content = get_file_content(\n filename, comes_from=comes_from, session=session\n )\n\n lines = content.splitlines()\n lines = ignore_comments(lines)\n lines = join_lines(lines)\n lines = skip_regex(lines, options)\n\n for line_number, line in enumerate(lines, 1):\n req_iter = process_line(line, filename, line_number, finder,\n comes_from, options, session, wheel_cache,\n constraint=constraint)\n for req in req_iter:\n yield req\n\n\ndef process_line(line, filename, line_number, finder=None, comes_from=None,\n options=None, session=None, wheel_cache=None,\n constraint=False):\n \"\"\"Process a single requirements line; This can result in creating/yielding\n 
requirements, or updating the finder.\n\n For lines that contain requirements, the only options that have an effect\n are from SUPPORTED_OPTIONS_REQ, and they are scoped to the\n requirement. Other options from SUPPORTED_OPTIONS may be present, but are\n ignored.\n\n For lines that do not contain requirements, the only options that have an\n effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may\n be present, but are ignored. These lines may contain multiple options\n (although our docs imply only one is supported), and all our parsed and\n affect the finder.\n\n :param constraint: If True, parsing a constraints file.\n \"\"\"\n parser = build_parser()\n defaults = parser.get_default_values()\n defaults.index_url = None\n if finder:\n # `finder.format_control` will be updated during parsing\n defaults.format_control = finder.format_control\n args_str, options_str = break_args_options(line)\n opts, _ = parser.parse_args(shlex.split(options_str), defaults)\n\n # preserve for the nested code path\n line_comes_from = '%s %s (line %s)' % (\n '-c' if constraint else '-r', filename, line_number)\n\n # yield a line requirement\n if args_str:\n isolated = options.isolated_mode if options else False\n if options:\n cmdoptions.check_install_build_global(options, opts)\n # get the options that apply to requirements\n req_options = {}\n for dest in SUPPORTED_OPTIONS_REQ_DEST:\n if dest in opts.__dict__ and opts.__dict__[dest]:\n req_options[dest] = opts.__dict__[dest]\n yield InstallRequirement.from_line(\n args_str, line_comes_from, constraint=constraint,\n isolated=isolated, options=req_options, wheel_cache=wheel_cache\n )\n\n # yield an editable requirement\n elif opts.editables:\n isolated = options.isolated_mode if options else False\n default_vcs = options.default_vcs if options else None\n yield InstallRequirement.from_editable(\n opts.editables[0], comes_from=line_comes_from,\n constraint=constraint, default_vcs=default_vcs, isolated=isolated,\n wheel_cache=wheel_cache\n )\n\n # parse a nested requirements file\n elif opts.requirements or opts.constraints:\n if opts.requirements:\n req_path = opts.requirements[0]\n nested_constraint = False\n else:\n req_path = opts.constraints[0]\n nested_constraint = True\n # original file is over http\n if SCHEME_RE.search(filename):\n # do a url join so relative paths work\n req_path = urllib_parse.urljoin(filename, req_path)\n # original file and nested file are paths\n elif not SCHEME_RE.search(req_path):\n # do a join so relative paths work\n req_dir = os.path.dirname(filename)\n req_path = os.path.join(os.path.dirname(filename), req_path)\n # TODO: Why not use `comes_from='-r {} (line {})'` here as well?\n parser = parse_requirements(\n req_path, finder, comes_from, options, session,\n constraint=nested_constraint, wheel_cache=wheel_cache\n )\n for req in parser:\n yield req\n\n # set finder options\n elif finder:\n if opts.allow_external:\n warnings.warn(\n \"--allow-external has been deprecated and will be removed in \"\n \"the future. Due to changes in the repository protocol, it no \"\n \"longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if opts.allow_all_external:\n warnings.warn(\n \"--allow-all-external has been deprecated and will be removed \"\n \"in the future. Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if opts.allow_unverified:\n warnings.warn(\n \"--allow-unverified has been deprecated and will be removed \"\n \"in the future. 
Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if opts.index_url:\n finder.index_urls = [opts.index_url]\n if opts.use_wheel is False:\n finder.use_wheel = False\n pip.index.fmt_ctl_no_use_wheel(finder.format_control)\n if opts.no_index is True:\n finder.index_urls = []\n if opts.extra_index_urls:\n finder.index_urls.extend(opts.extra_index_urls)\n if opts.find_links:\n # FIXME: it would be nice to keep track of the source\n # of the find_links: support a find-links local path\n # relative to a requirements file.\n value = opts.find_links[0]\n req_dir = os.path.dirname(os.path.abspath(filename))\n relative_to_reqs_file = os.path.join(req_dir, value)\n if os.path.exists(relative_to_reqs_file):\n value = relative_to_reqs_file\n finder.find_links.append(value)\n if opts.trusted_hosts:\n finder.secure_origins.extend(\n (\"*\", host, \"*\") for host in opts.trusted_hosts)\n\n\ndef break_args_options(line):\n \"\"\"Break up the line into an args and options string. We only want to shlex\n (and then optparse) the options, not the args. args can contain markers\n which are corrupted by shlex.\n \"\"\"\n tokens = line.split(' ')\n args = []\n options = tokens[:]\n for token in tokens:\n if token.startswith('-') or token.startswith('--'):\n break\n else:\n args.append(token)\n options.pop(0)\n return ' '.join(args), ' '.join(options)\n\n\ndef build_parser():\n \"\"\"\n Return a parser for parsing requirement lines\n \"\"\"\n parser = optparse.OptionParser(add_help_option=False)\n\n option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ\n for option_factory in option_factories:\n option = option_factory()\n parser.add_option(option)\n\n # By default optparse sys.exits on parsing errors. 
We want to wrap\n # that in our own exception.\n def parser_exit(self, msg):\n raise RequirementsFileParseError(msg)\n parser.exit = parser_exit\n\n return parser\n\n\ndef join_lines(iterator):\n \"\"\"\n Joins a line ending in '\\' with the previous line.\n \"\"\"\n lines = []\n for line in iterator:\n if not line.endswith('\\\\'):\n if lines:\n lines.append(line)\n yield ''.join(lines)\n lines = []\n else:\n yield line\n else:\n lines.append(line.strip('\\\\'))\n\n # TODO: handle space after '\\'.\n # TODO: handle '\\' on last line.\n\n\ndef ignore_comments(iterator):\n \"\"\"\n Strips and filters empty or commented lines.\n \"\"\"\n for line in iterator:\n line = COMMENT_RE.sub('', line)\n line = line.strip()\n if line:\n yield line\n\n\ndef skip_regex(lines, options):\n \"\"\"\n Optionally exclude lines that match '--skip-requirements-regex'\n \"\"\"\n skip_regex = options.skip_requirements_regex if options else None\n if skip_regex:\n lines = filterfalse(re.compile(skip_regex).search, lines)\n return lines\n", "path": "pip/req/req_file.py"}], "after_files": [{"content": "\"\"\"\nRequirements file parsing\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport os\nimport re\nimport shlex\nimport optparse\nimport warnings\n\nfrom pip._vendor.six.moves.urllib import parse as urllib_parse\nfrom pip._vendor.six.moves import filterfalse\n\nimport pip\nfrom pip.download import get_file_content\nfrom pip.req.req_install import InstallRequirement\nfrom pip.exceptions import (RequirementsFileParseError)\nfrom pip.utils.deprecation import RemovedInPip10Warning\nfrom pip import cmdoptions\n\n__all__ = ['parse_requirements']\n\nSCHEME_RE = re.compile(r'^(http|https|file):', re.I)\nCOMMENT_RE = re.compile(r'(^|\\s)+#.*$')\n\nSUPPORTED_OPTIONS = [\n cmdoptions.constraints,\n cmdoptions.editable,\n cmdoptions.requirements,\n cmdoptions.no_index,\n cmdoptions.index_url,\n cmdoptions.find_links,\n cmdoptions.extra_index_url,\n cmdoptions.allow_external,\n cmdoptions.allow_all_external,\n cmdoptions.no_allow_external,\n cmdoptions.allow_unsafe,\n cmdoptions.no_allow_unsafe,\n cmdoptions.use_wheel,\n cmdoptions.no_use_wheel,\n cmdoptions.always_unzip,\n cmdoptions.no_binary,\n cmdoptions.only_binary,\n cmdoptions.process_dependency_links,\n cmdoptions.trusted_host,\n]\n\n# options to be passed to requirements\nSUPPORTED_OPTIONS_REQ = [\n cmdoptions.install_options,\n cmdoptions.global_options\n]\n\n# the 'dest' string values\nSUPPORTED_OPTIONS_REQ_DEST = [o().dest for o in SUPPORTED_OPTIONS_REQ]\n\n\ndef parse_requirements(filename, finder=None, comes_from=None, options=None,\n session=None, constraint=False, wheel_cache=None):\n \"\"\"Parse a requirements file and yield InstallRequirement instances.\n\n :param filename: Path or url of requirements file.\n :param finder: Instance of pip.index.PackageFinder.\n :param comes_from: Origin description of requirements.\n :param options: Global options.\n :param session: Instance of pip.download.PipSession.\n :param constraint: If true, parsing a constraint file rather than\n requirements file.\n :param wheel_cache: Instance of pip.wheel.WheelCache\n \"\"\"\n if session is None:\n raise TypeError(\n \"parse_requirements() missing 1 required keyword argument: \"\n \"'session'\"\n )\n\n _, content = get_file_content(\n filename, comes_from=comes_from, session=session\n )\n\n lines = content.splitlines()\n lines = ignore_comments(lines)\n lines = join_lines(lines)\n lines = skip_regex(lines, options)\n\n for line_number, line in enumerate(lines, 1):\n req_iter 
= process_line(line, filename, line_number, finder,\n comes_from, options, session, wheel_cache,\n constraint=constraint)\n for req in req_iter:\n yield req\n\n\ndef process_line(line, filename, line_number, finder=None, comes_from=None,\n options=None, session=None, wheel_cache=None,\n constraint=False):\n \"\"\"Process a single requirements line; This can result in creating/yielding\n requirements, or updating the finder.\n\n For lines that contain requirements, the only options that have an effect\n are from SUPPORTED_OPTIONS_REQ, and they are scoped to the\n requirement. Other options from SUPPORTED_OPTIONS may be present, but are\n ignored.\n\n For lines that do not contain requirements, the only options that have an\n effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may\n be present, but are ignored. These lines may contain multiple options\n (although our docs imply only one is supported), and all our parsed and\n affect the finder.\n\n :param constraint: If True, parsing a constraints file.\n \"\"\"\n parser = build_parser()\n defaults = parser.get_default_values()\n defaults.index_url = None\n if finder:\n # `finder.format_control` will be updated during parsing\n defaults.format_control = finder.format_control\n args_str, options_str = break_args_options(line)\n opts, _ = parser.parse_args(shlex.split(options_str), defaults)\n\n # preserve for the nested code path\n line_comes_from = '%s %s (line %s)' % (\n '-c' if constraint else '-r', filename, line_number)\n\n # yield a line requirement\n if args_str:\n isolated = options.isolated_mode if options else False\n if options:\n cmdoptions.check_install_build_global(options, opts)\n # get the options that apply to requirements\n req_options = {}\n for dest in SUPPORTED_OPTIONS_REQ_DEST:\n if dest in opts.__dict__ and opts.__dict__[dest]:\n req_options[dest] = opts.__dict__[dest]\n yield InstallRequirement.from_line(\n args_str, line_comes_from, constraint=constraint,\n isolated=isolated, options=req_options, wheel_cache=wheel_cache\n )\n\n # yield an editable requirement\n elif opts.editables:\n isolated = options.isolated_mode if options else False\n default_vcs = options.default_vcs if options else None\n yield InstallRequirement.from_editable(\n opts.editables[0], comes_from=line_comes_from,\n constraint=constraint, default_vcs=default_vcs, isolated=isolated,\n wheel_cache=wheel_cache\n )\n\n # parse a nested requirements file\n elif opts.requirements or opts.constraints:\n if opts.requirements:\n req_path = opts.requirements[0]\n nested_constraint = False\n else:\n req_path = opts.constraints[0]\n nested_constraint = True\n # original file is over http\n if SCHEME_RE.search(filename):\n # do a url join so relative paths work\n req_path = urllib_parse.urljoin(filename, req_path)\n # original file and nested file are paths\n elif not SCHEME_RE.search(req_path):\n # do a join so relative paths work\n req_dir = os.path.dirname(filename)\n req_path = os.path.join(os.path.dirname(filename), req_path)\n # TODO: Why not use `comes_from='-r {} (line {})'` here as well?\n parser = parse_requirements(\n req_path, finder, comes_from, options, session,\n constraint=nested_constraint, wheel_cache=wheel_cache\n )\n for req in parser:\n yield req\n\n # set finder options\n elif finder:\n if opts.allow_external:\n warnings.warn(\n \"--allow-external has been deprecated and will be removed in \"\n \"the future. 
Due to changes in the repository protocol, it no \"\n \"longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if opts.allow_all_external:\n warnings.warn(\n \"--allow-all-external has been deprecated and will be removed \"\n \"in the future. Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if opts.allow_unverified:\n warnings.warn(\n \"--allow-unverified has been deprecated and will be removed \"\n \"in the future. Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if opts.index_url:\n finder.index_urls = [opts.index_url]\n if opts.use_wheel is False:\n finder.use_wheel = False\n pip.index.fmt_ctl_no_use_wheel(finder.format_control)\n if opts.no_index is True:\n finder.index_urls = []\n if opts.extra_index_urls:\n finder.index_urls.extend(opts.extra_index_urls)\n if opts.find_links:\n # FIXME: it would be nice to keep track of the source\n # of the find_links: support a find-links local path\n # relative to a requirements file.\n value = opts.find_links[0]\n req_dir = os.path.dirname(os.path.abspath(filename))\n relative_to_reqs_file = os.path.join(req_dir, value)\n if os.path.exists(relative_to_reqs_file):\n value = relative_to_reqs_file\n finder.find_links.append(value)\n if opts.process_dependency_links:\n finder.process_dependency_links = True\n if opts.trusted_hosts:\n finder.secure_origins.extend(\n (\"*\", host, \"*\") for host in opts.trusted_hosts)\n\n\ndef break_args_options(line):\n \"\"\"Break up the line into an args and options string. We only want to shlex\n (and then optparse) the options, not the args. args can contain markers\n which are corrupted by shlex.\n \"\"\"\n tokens = line.split(' ')\n args = []\n options = tokens[:]\n for token in tokens:\n if token.startswith('-') or token.startswith('--'):\n break\n else:\n args.append(token)\n options.pop(0)\n return ' '.join(args), ' '.join(options)\n\n\ndef build_parser():\n \"\"\"\n Return a parser for parsing requirement lines\n \"\"\"\n parser = optparse.OptionParser(add_help_option=False)\n\n option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ\n for option_factory in option_factories:\n option = option_factory()\n parser.add_option(option)\n\n # By default optparse sys.exits on parsing errors. We want to wrap\n # that in our own exception.\n def parser_exit(self, msg):\n raise RequirementsFileParseError(msg)\n parser.exit = parser_exit\n\n return parser\n\n\ndef join_lines(iterator):\n \"\"\"\n Joins a line ending in '\\' with the previous line.\n \"\"\"\n lines = []\n for line in iterator:\n if not line.endswith('\\\\'):\n if lines:\n lines.append(line)\n yield ''.join(lines)\n lines = []\n else:\n yield line\n else:\n lines.append(line.strip('\\\\'))\n\n # TODO: handle space after '\\'.\n # TODO: handle '\\' on last line.\n\n\ndef ignore_comments(iterator):\n \"\"\"\n Strips and filters empty or commented lines.\n \"\"\"\n for line in iterator:\n line = COMMENT_RE.sub('', line)\n line = line.strip()\n if line:\n yield line\n\n\ndef skip_regex(lines, options):\n \"\"\"\n Optionally exclude lines that match '--skip-requirements-regex'\n \"\"\"\n skip_regex = options.skip_requirements_regex if options else None\n if skip_regex:\n lines = filterfalse(re.compile(skip_regex).search, lines)\n return lines\n", "path": "pip/req/req_file.py"}]}
| 3,288 | 181 |
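The diff above works because the requirements-file parser only recognizes options that are registered on its optparse parser (`SUPPORTED_OPTIONS` feeds `build_parser()`); an unregistered flag such as `--process-dependency-links` is rejected before it can ever reach the finder. Below is a small self-contained sketch of that mechanism, using a plain optparse option as a stand-in for pip's `cmdoptions.process_dependency_links`.

```python
import optparse
import shlex

# Simplified stand-in for build_parser(): only registered options are understood.
parser = optparse.OptionParser(add_help_option=False)
parser.add_option(
    "--process-dependency-links",
    action="store_true",
    default=False,
    dest="process_dependency_links",
)

# A requirements-file line like "--process-dependency-links" parses cleanly
# once the option is registered, and the resulting flag can update the finder.
opts, _ = parser.parse_args(shlex.split("--process-dependency-links"))
print(opts.process_dependency_links)  # True
```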
gh_patches_debug_42415
|
rasdani/github-patches
|
git_diff
|
great-expectations__great_expectations-4502
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
--- END ISSUE ---
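A short sketch of the preference stated in the issue: with the `__future__` import in effect, integer division stops truncating on Python 2, so the `1.*x/y` idiom becomes unnecessary (the numbers below are arbitrary examples).

```python
from __future__ import division

x, y = 7, 2
print(x / y)   # 3.5, true division even for ints on Python 2
print(x // y)  # 3, explicit floor division when truncation is actually wanted
```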
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `assets/scripts/build_gallery.py`
Content:
```
1 import ast
2 import importlib
3 import json
4 import logging
5 import os
6 import sys
7 import traceback
8 from io import StringIO
9 from subprocess import CalledProcessError, CompletedProcess, run
10 from typing import Dict
11
12 import pkg_resources
13
14 logger = logging.getLogger(__name__)
15 chandler = logging.StreamHandler(stream=sys.stdout)
16 chandler.setLevel(logging.DEBUG)
17 chandler.setFormatter(
18 logging.Formatter("%(asctime)s - %(levelname)s - %(message)s", "%Y-%m-%dT%H:%M:%S")
19 )
20 logger.addHandler(chandler)
21 logger.setLevel(logging.DEBUG)
22
23
24 expectation_tracebacks = StringIO()
25 expectation_checklists = StringIO()
26
27
28 def execute_shell_command(command: str) -> int:
29 """
30 Wrap subprocess command in a try/except block to provide a convenient method for pip installing dependencies.
31
32 :param command: bash command -- as if typed in a shell/Terminal window
33 :return: status code -- 0 if successful; all other values (1 is the most common) indicate an error
34 """
35 cwd: str = os.getcwd()
36
37 path_env_var: str = os.pathsep.join([os.environ.get("PATH", os.defpath), cwd])
38 env: dict = dict(os.environ, PATH=path_env_var)
39
40 status_code: int = 0
41 try:
42 res: CompletedProcess = run(
43 args=["bash", "-c", command],
44 stdin=None,
45 input=None,
46 # stdout=None, # commenting out to prevent issues with `subprocess.run` in python <3.7.4
47 # stderr=None, # commenting out to prevent issues with `subprocess.run` in python <3.7.4
48 capture_output=True,
49 shell=False,
50 cwd=cwd,
51 timeout=None,
52 check=True,
53 encoding=None,
54 errors=None,
55 text=None,
56 env=env,
57 universal_newlines=True,
58 )
59 sh_out: str = res.stdout.strip()
60 logger.info(sh_out)
61 except CalledProcessError as cpe:
62 status_code = cpe.returncode
63 sys.stderr.write(cpe.output)
64 sys.stderr.flush()
65 exception_message: str = "A Sub-Process call Exception occurred.\n"
66 exception_traceback: str = traceback.format_exc()
67 exception_message += (
68 f'{type(cpe).__name__}: "{str(cpe)}". Traceback: "{exception_traceback}".'
69 )
70 logger.error(exception_message)
71
72 return status_code
73
74
75 def get_contrib_requirements(filepath: str) -> Dict:
76 """
77 Parse the python file from filepath to identify a "library_metadata" dictionary in any defined classes, and return a requirements_info object that includes a list of pip-installable requirements for each class that defines them.
78
79 Note, currently we are handling all dependencies at the module level. To support future expandability and detail, this method also returns per-class requirements in addition to the concatenated list.
80
81 Args:
82 filepath: the path to the file to parse and analyze
83
84 Returns:
85 A dictionary:
86 {
87 "requirements": [ all_requirements_found_in_any_library_metadata_in_file ],
88 class_name: [ requirements ]
89 }
90
91 """
92 with open(filepath) as file:
93 tree = ast.parse(file.read())
94
95 requirements_info = {"requirements": []}
96 for child in ast.iter_child_nodes(tree):
97 if not isinstance(child, ast.ClassDef):
98 continue
99 current_class = child.name
100 for node in ast.walk(child):
101 if isinstance(node, ast.Assign):
102 try:
103 target_ids = [target.id for target in node.targets]
104 except (ValueError, AttributeError):
105 # some assignment types assign to non-node objects (e.g. Tuple)
106 target_ids = []
107 if "library_metadata" in target_ids:
108 library_metadata = ast.literal_eval(node.value)
109 requirements = library_metadata.get("requirements", [])
110 requirements_info[current_class] = requirements
111 requirements_info["requirements"] += requirements
112
113 return requirements_info
114
115
116 def build_gallery(include_core: bool = True, include_contrib: bool = True) -> Dict:
117 """
118 Build the gallery object by running diagnostics for each Expectation and returning the resulting reports.
119
120 Args:
121 include_core: if true, include Expectations defined in the core module
122 include_contrib_experimental: if true, include Expectations defined in contrib_experimental:
123
124 Returns:
125 None
126
127 """
128 gallery_info = dict()
129 requirements_dict = {}
130 logger.info("Loading great_expectations library.")
131 installed_packages = pkg_resources.working_set
132 installed_packages_txt = sorted(f"{i.key}=={i.version}" for i in installed_packages)
133 logger.debug(f"Found the following packages: {installed_packages_txt}")
134
135 import great_expectations
136
137 if include_core:
138 print("\n\n\n=== (Core) ===")
139 logger.info("Getting base registered expectations list")
140 core_expectations = (
141 great_expectations.expectations.registry.list_registered_expectation_implementations()
142 )
143 logger.debug(f"Found the following expectations: {sorted(core_expectations)}")
144 for expectation in core_expectations:
145 requirements_dict[expectation] = {"group": "core"}
146
147 just_installed = set()
148
149 if include_contrib:
150 print("\n\n\n=== (Contrib) ===")
151 logger.info("Finding contrib modules")
152 skip_dirs = ("cli", "tests")
153 contrib_dir = os.path.join(
154 os.path.dirname(__file__),
155 "..",
156 "..",
157 "contrib",
158 )
159
160 for root, dirs, files in os.walk(contrib_dir):
161 for dirname in skip_dirs:
162 if dirname in dirs:
163 dirs.remove(dirname)
164 if "expectations" in dirs:
165 if root.endswith("great_expectations_experimental"):
166 sys.path.append(root)
167 else:
168 # A package in contrib that may contain more Expectations
169 sys.path.append(os.path.dirname(root))
170 for filename in files:
171 if filename.endswith(".py") and filename.startswith("expect_"):
172 logger.debug(f"Getting requirements for module {filename}")
173 contrib_subdir_name = os.path.basename(os.path.dirname(root))
174 requirements_dict[filename[:-3]] = get_contrib_requirements(
175 os.path.join(root, filename)
176 )
177 requirements_dict[filename[:-3]]["group"] = contrib_subdir_name
178
179 for expectation in sorted(requirements_dict):
180 group = requirements_dict[expectation]["group"]
181 print(f"\n\n\n=== {expectation} ({group}) ===")
182 requirements = requirements_dict[expectation].get("requirements", [])
183 parsed_requirements = pkg_resources.parse_requirements(requirements)
184 for req in parsed_requirements:
185 is_satisfied = any(
186 [installed_pkg in req for installed_pkg in installed_packages]
187 )
188 if is_satisfied or req in just_installed:
189 continue
190 logger.debug(f"Executing command: 'pip install \"{req}\"'")
191 status_code = execute_shell_command(f'pip install "{req}"')
192 if status_code == 0:
193 just_installed.add(req)
194 else:
195 expectation_tracebacks.write(
196 f"\n\n----------------\n{expectation} ({group})\n"
197 )
198 expectation_tracebacks.write(f"Failed to pip install {req}\n\n")
199
200 if group != "core":
201 logger.debug(f"Importing {expectation}")
202 try:
203 if group == "great_expectations_experimental":
204 importlib.import_module(f"expectations.{expectation}", group)
205 else:
206 importlib.import_module(f"{group}.expectations")
207 except ModuleNotFoundError as e:
208 logger.error(f"Failed to load expectation: {expectation}")
209 print(traceback.format_exc())
210 expectation_tracebacks.write(
211 f"\n\n----------------\n{expectation} ({group})\n"
212 )
213 expectation_tracebacks.write(traceback.format_exc())
214 continue
215
216 logger.debug(f"Running diagnostics for expectation: {expectation}")
217 impl = great_expectations.expectations.registry.get_expectation_impl(
218 expectation
219 )
220 try:
221 diagnostics = impl().run_diagnostics(return_only_gallery_examples=True)
222 checklist_string = diagnostics.generate_checklist()
223 expectation_checklists.write(
224 f"\n\n----------------\n{expectation} ({group})\n"
225 )
226 expectation_checklists.write(f"{checklist_string}\n")
227 gallery_info[expectation] = diagnostics.to_json_dict()
228 except Exception:
229 logger.error(f"Failed to run diagnostics for: {expectation}")
230 print(traceback.format_exc())
231 expectation_tracebacks.write(
232 f"\n\n----------------\n{expectation} ({group})\n"
233 )
234 expectation_tracebacks.write(traceback.format_exc())
235
236 if just_installed:
237 print("\n\n\n=== (Uninstalling) ===")
238 logger.info(
239 f"Uninstalling packages that were installed while running this script..."
240 )
241 for req in just_installed:
242 logger.debug(f"Executing command: 'pip uninstall -y \"{req}\"'")
243 execute_shell_command(f'pip uninstall -y "{req}"')
244
245 expectation_filenames_set = set(requirements_dict.keys())
246 registered_expectations_set = set(
247 great_expectations.expectations.registry.list_registered_expectation_implementations()
248 )
249 non_matched_filenames = expectation_filenames_set - registered_expectations_set
250 if non_matched_filenames:
251 expectation_tracebacks.write(f"\n\n----------------\n(Not a traceback)\n")
252 expectation_tracebacks.write(
253 "Expectation filenames that don't match their defined Expectation name:\n"
254 )
255 for fname in sorted(non_matched_filenames):
256 expectation_tracebacks.write(f"- {fname}\n")
257
258 bad_names = sorted(
259 list(registered_expectations_set - expectation_filenames_set)
260 )
261 expectation_tracebacks.write(
262 f"\nRegistered Expectation names that don't match:\n"
263 )
264 for exp_name in bad_names:
265 expectation_tracebacks.write(f"- {exp_name}\n")
266
267 if include_core:
268 core_dir = os.path.join(
269 os.path.dirname(__file__),
270 "..",
271 "..",
272 "great_expectations",
273 "expectations",
274 "core",
275 )
276 core_expectations_filename_set = set(
277 [
278 fname.rsplit(".", 1)[0]
279 for fname in os.listdir(core_dir)
280 if fname.startswith("expect_")
281 ]
282 )
283 core_expectations_not_in_gallery = core_expectations_filename_set - set(
284 core_expectations
285 )
286 if core_expectations_not_in_gallery:
287 expectation_tracebacks.write(f"\n\n----------------\n(Not a traceback)\n")
288 expectation_tracebacks.write(
289 f"Core Expectation files not included in core_expectations:\n"
290 )
291 for exp_name in sorted(core_expectations_not_in_gallery):
292 expectation_tracebacks.write(f"- {exp_name}\n")
293
294 return gallery_info
295
296
297 if __name__ == "__main__":
298 gallery_info = build_gallery(include_core=True, include_contrib=True)
299 tracebacks = expectation_tracebacks.getvalue()
300 checklists = expectation_checklists.getvalue()
301 if tracebacks != "":
302 print("\n\n\n" + "#" * 30 + " T R A C E B A C K S " + "#" * 30 + "\n")
303 print(tracebacks)
304 print(
305 "\n\n" + "#" * 30 + " E N D T R A C E B A C K S " + "#" * 30 + "\n\n"
306 )
307 with open("./gallery-errors.txt", "w") as outfile:
308 outfile.write(tracebacks)
309 if checklists != "":
310 print(checklists)
311 with open("./checklists.txt", "w") as outfile:
312 outfile.write(checklists)
313 with open("./expectation_library_v2.json", "w") as outfile:
314 json.dump(gallery_info, outfile)
315
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/assets/scripts/build_gallery.py b/assets/scripts/build_gallery.py
--- a/assets/scripts/build_gallery.py
+++ b/assets/scripts/build_gallery.py
@@ -3,6 +3,7 @@
import json
import logging
import os
+import re
import sys
import traceback
from io import StringIO
@@ -224,6 +225,10 @@
f"\n\n----------------\n{expectation} ({group})\n"
)
expectation_checklists.write(f"{checklist_string}\n")
+ if diagnostics["description"]["docstring"]:
+ diagnostics["description"]["docstring"] = format_docstring_to_markdown(
+ diagnostics["description"]["docstring"]
+ )
gallery_info[expectation] = diagnostics.to_json_dict()
except Exception:
logger.error(f"Failed to run diagnostics for: {expectation}")
@@ -294,6 +299,77 @@
return gallery_info
+def format_docstring_to_markdown(docstr: str) -> str:
+ """
+ Add markdown formatting to a provided docstring
+
+ Args:
+ docstr: the original docstring that needs to be converted to markdown.
+
+ Returns:
+ str of Docstring formatted as markdown
+
+ """
+ r = re.compile(r"\s\s+", re.MULTILINE)
+ clean_docstr_list = []
+ prev_line = None
+ in_code_block = False
+ in_param = False
+ first_code_indentation = None
+
+ # Parse each line to determine if it needs formatting
+ for original_line in docstr.split("\n"):
+ # Remove excess spaces from lines formed by concatenated docstring lines.
+ line = r.sub(" ", original_line)
+ # In some old docstrings, this indicates the start of an example block.
+ if line.strip() == "::":
+ in_code_block = True
+ clean_docstr_list.append("```")
+
+ # All of our parameter/arg/etc lists start after a line ending in ':'.
+ elif line.strip().endswith(":"):
+ in_param = True
+ # This adds a blank line before the header if one doesn't already exist.
+ if prev_line != "":
+ clean_docstr_list.append("")
+ # Turn the line into an H4 header
+ clean_docstr_list.append(f"#### {line.strip()}")
+ elif line.strip() == "" and prev_line != "::":
+ # All of our parameter groups end with a line break, but we don't want to exit a parameter block due to a
+ # line break in a code block. However, some code blocks start with a blank first line, so we want to make
+ # sure we aren't immediately exiting the code block (hence the test for '::' on the previous line.
+ in_param = False
+ # Add the markdown indicator to close a code block, since we aren't in one now.
+ if in_code_block:
+ clean_docstr_list.append("```")
+ in_code_block = False
+ first_code_indentation = None
+ clean_docstr_list.append(line)
+ else:
+ if in_code_block:
+ # Determine the number of spaces indenting the first line of code so they can be removed from all lines
+ # in the code block without wrecking the hierarchical indentation levels of future lines.
+ if first_code_indentation == None and line.strip() != "":
+ first_code_indentation = len(
+ re.match(r"\s*", original_line, re.UNICODE).group(0)
+ )
+ if line.strip() == "" and prev_line == "::":
+ # If the first line of the code block is a blank one, just skip it.
+ pass
+ else:
+ # Append the line of code, minus the extra indentation from being written in an indented docstring.
+ clean_docstr_list.append(original_line[first_code_indentation:])
+ elif ":" in line.replace(":ref:", "") and in_param:
+ # This indicates a parameter. arg. or other definition.
+ clean_docstr_list.append(f"- {line.strip()}")
+ else:
+ # This indicates a regular line of text.
+ clean_docstr_list.append(f"{line.strip()}")
+ prev_line = line.strip()
+ clean_docstr = "\n".join(clean_docstr_list)
+ return clean_docstr
+
+
if __name__ == "__main__":
gallery_info = build_gallery(include_core=True, include_contrib=True)
tracebacks = expectation_tracebacks.getvalue()
|
{"golden_diff": "diff --git a/assets/scripts/build_gallery.py b/assets/scripts/build_gallery.py\n--- a/assets/scripts/build_gallery.py\n+++ b/assets/scripts/build_gallery.py\n@@ -3,6 +3,7 @@\n import json\n import logging\n import os\n+import re\n import sys\n import traceback\n from io import StringIO\n@@ -224,6 +225,10 @@\n f\"\\n\\n----------------\\n{expectation} ({group})\\n\"\n )\n expectation_checklists.write(f\"{checklist_string}\\n\")\n+ if diagnostics[\"description\"][\"docstring\"]:\n+ diagnostics[\"description\"][\"docstring\"] = format_docstring_to_markdown(\n+ diagnostics[\"description\"][\"docstring\"]\n+ )\n gallery_info[expectation] = diagnostics.to_json_dict()\n except Exception:\n logger.error(f\"Failed to run diagnostics for: {expectation}\")\n@@ -294,6 +299,77 @@\n return gallery_info\n \n \n+def format_docstring_to_markdown(docstr: str) -> str:\n+ \"\"\"\n+ Add markdown formatting to a provided docstring\n+\n+ Args:\n+ docstr: the original docstring that needs to be converted to markdown.\n+\n+ Returns:\n+ str of Docstring formatted as markdown\n+\n+ \"\"\"\n+ r = re.compile(r\"\\s\\s+\", re.MULTILINE)\n+ clean_docstr_list = []\n+ prev_line = None\n+ in_code_block = False\n+ in_param = False\n+ first_code_indentation = None\n+\n+ # Parse each line to determine if it needs formatting\n+ for original_line in docstr.split(\"\\n\"):\n+ # Remove excess spaces from lines formed by concatenated docstring lines.\n+ line = r.sub(\" \", original_line)\n+ # In some old docstrings, this indicates the start of an example block.\n+ if line.strip() == \"::\":\n+ in_code_block = True\n+ clean_docstr_list.append(\"```\")\n+\n+ # All of our parameter/arg/etc lists start after a line ending in ':'.\n+ elif line.strip().endswith(\":\"):\n+ in_param = True\n+ # This adds a blank line before the header if one doesn't already exist.\n+ if prev_line != \"\":\n+ clean_docstr_list.append(\"\")\n+ # Turn the line into an H4 header\n+ clean_docstr_list.append(f\"#### {line.strip()}\")\n+ elif line.strip() == \"\" and prev_line != \"::\":\n+ # All of our parameter groups end with a line break, but we don't want to exit a parameter block due to a\n+ # line break in a code block. However, some code blocks start with a blank first line, so we want to make\n+ # sure we aren't immediately exiting the code block (hence the test for '::' on the previous line.\n+ in_param = False\n+ # Add the markdown indicator to close a code block, since we aren't in one now.\n+ if in_code_block:\n+ clean_docstr_list.append(\"```\")\n+ in_code_block = False\n+ first_code_indentation = None\n+ clean_docstr_list.append(line)\n+ else:\n+ if in_code_block:\n+ # Determine the number of spaces indenting the first line of code so they can be removed from all lines\n+ # in the code block without wrecking the hierarchical indentation levels of future lines.\n+ if first_code_indentation == None and line.strip() != \"\":\n+ first_code_indentation = len(\n+ re.match(r\"\\s*\", original_line, re.UNICODE).group(0)\n+ )\n+ if line.strip() == \"\" and prev_line == \"::\":\n+ # If the first line of the code block is a blank one, just skip it.\n+ pass\n+ else:\n+ # Append the line of code, minus the extra indentation from being written in an indented docstring.\n+ clean_docstr_list.append(original_line[first_code_indentation:])\n+ elif \":\" in line.replace(\":ref:\", \"\") and in_param:\n+ # This indicates a parameter. arg. 
or other definition.\n+ clean_docstr_list.append(f\"- {line.strip()}\")\n+ else:\n+ # This indicates a regular line of text.\n+ clean_docstr_list.append(f\"{line.strip()}\")\n+ prev_line = line.strip()\n+ clean_docstr = \"\\n\".join(clean_docstr_list)\n+ return clean_docstr\n+\n+\n if __name__ == \"__main__\":\n gallery_info = build_gallery(include_core=True, include_contrib=True)\n tracebacks = expectation_tracebacks.getvalue()\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "import ast\nimport importlib\nimport json\nimport logging\nimport os\nimport sys\nimport traceback\nfrom io import StringIO\nfrom subprocess import CalledProcessError, CompletedProcess, run\nfrom typing import Dict\n\nimport pkg_resources\n\nlogger = logging.getLogger(__name__)\nchandler = logging.StreamHandler(stream=sys.stdout)\nchandler.setLevel(logging.DEBUG)\nchandler.setFormatter(\n logging.Formatter(\"%(asctime)s - %(levelname)s - %(message)s\", \"%Y-%m-%dT%H:%M:%S\")\n)\nlogger.addHandler(chandler)\nlogger.setLevel(logging.DEBUG)\n\n\nexpectation_tracebacks = StringIO()\nexpectation_checklists = StringIO()\n\n\ndef execute_shell_command(command: str) -> int:\n \"\"\"\n Wrap subprocess command in a try/except block to provide a convenient method for pip installing dependencies.\n\n :param command: bash command -- as if typed in a shell/Terminal window\n :return: status code -- 0 if successful; all other values (1 is the most common) indicate an error\n \"\"\"\n cwd: str = os.getcwd()\n\n path_env_var: str = os.pathsep.join([os.environ.get(\"PATH\", os.defpath), cwd])\n env: dict = dict(os.environ, PATH=path_env_var)\n\n status_code: int = 0\n try:\n res: CompletedProcess = run(\n args=[\"bash\", \"-c\", command],\n stdin=None,\n input=None,\n # stdout=None, # commenting out to prevent issues with `subprocess.run` in python <3.7.4\n # stderr=None, # commenting out to prevent issues with `subprocess.run` in python <3.7.4\n capture_output=True,\n shell=False,\n cwd=cwd,\n timeout=None,\n check=True,\n encoding=None,\n errors=None,\n text=None,\n env=env,\n universal_newlines=True,\n )\n sh_out: str = res.stdout.strip()\n logger.info(sh_out)\n except CalledProcessError as cpe:\n status_code = cpe.returncode\n sys.stderr.write(cpe.output)\n sys.stderr.flush()\n exception_message: str = \"A Sub-Process call Exception occurred.\\n\"\n exception_traceback: str = traceback.format_exc()\n exception_message += (\n f'{type(cpe).__name__}: \"{str(cpe)}\". Traceback: \"{exception_traceback}\".'\n )\n logger.error(exception_message)\n\n return status_code\n\n\ndef get_contrib_requirements(filepath: str) -> Dict:\n \"\"\"\n Parse the python file from filepath to identify a \"library_metadata\" dictionary in any defined classes, and return a requirements_info object that includes a list of pip-installable requirements for each class that defines them.\n\n Note, currently we are handling all dependencies at the module level. 
To support future expandability and detail, this method also returns per-class requirements in addition to the concatenated list.\n\n Args:\n filepath: the path to the file to parse and analyze\n\n Returns:\n A dictionary:\n {\n \"requirements\": [ all_requirements_found_in_any_library_metadata_in_file ],\n class_name: [ requirements ]\n }\n\n \"\"\"\n with open(filepath) as file:\n tree = ast.parse(file.read())\n\n requirements_info = {\"requirements\": []}\n for child in ast.iter_child_nodes(tree):\n if not isinstance(child, ast.ClassDef):\n continue\n current_class = child.name\n for node in ast.walk(child):\n if isinstance(node, ast.Assign):\n try:\n target_ids = [target.id for target in node.targets]\n except (ValueError, AttributeError):\n # some assignment types assign to non-node objects (e.g. Tuple)\n target_ids = []\n if \"library_metadata\" in target_ids:\n library_metadata = ast.literal_eval(node.value)\n requirements = library_metadata.get(\"requirements\", [])\n requirements_info[current_class] = requirements\n requirements_info[\"requirements\"] += requirements\n\n return requirements_info\n\n\ndef build_gallery(include_core: bool = True, include_contrib: bool = True) -> Dict:\n \"\"\"\n Build the gallery object by running diagnostics for each Expectation and returning the resulting reports.\n\n Args:\n include_core: if true, include Expectations defined in the core module\n include_contrib_experimental: if true, include Expectations defined in contrib_experimental:\n\n Returns:\n None\n\n \"\"\"\n gallery_info = dict()\n requirements_dict = {}\n logger.info(\"Loading great_expectations library.\")\n installed_packages = pkg_resources.working_set\n installed_packages_txt = sorted(f\"{i.key}=={i.version}\" for i in installed_packages)\n logger.debug(f\"Found the following packages: {installed_packages_txt}\")\n\n import great_expectations\n\n if include_core:\n print(\"\\n\\n\\n=== (Core) ===\")\n logger.info(\"Getting base registered expectations list\")\n core_expectations = (\n great_expectations.expectations.registry.list_registered_expectation_implementations()\n )\n logger.debug(f\"Found the following expectations: {sorted(core_expectations)}\")\n for expectation in core_expectations:\n requirements_dict[expectation] = {\"group\": \"core\"}\n\n just_installed = set()\n\n if include_contrib:\n print(\"\\n\\n\\n=== (Contrib) ===\")\n logger.info(\"Finding contrib modules\")\n skip_dirs = (\"cli\", \"tests\")\n contrib_dir = os.path.join(\n os.path.dirname(__file__),\n \"..\",\n \"..\",\n \"contrib\",\n )\n\n for root, dirs, files in os.walk(contrib_dir):\n for dirname in skip_dirs:\n if dirname in dirs:\n dirs.remove(dirname)\n if \"expectations\" in dirs:\n if root.endswith(\"great_expectations_experimental\"):\n sys.path.append(root)\n else:\n # A package in contrib that may contain more Expectations\n sys.path.append(os.path.dirname(root))\n for filename in files:\n if filename.endswith(\".py\") and filename.startswith(\"expect_\"):\n logger.debug(f\"Getting requirements for module {filename}\")\n contrib_subdir_name = os.path.basename(os.path.dirname(root))\n requirements_dict[filename[:-3]] = get_contrib_requirements(\n os.path.join(root, filename)\n )\n requirements_dict[filename[:-3]][\"group\"] = contrib_subdir_name\n\n for expectation in sorted(requirements_dict):\n group = requirements_dict[expectation][\"group\"]\n print(f\"\\n\\n\\n=== {expectation} ({group}) ===\")\n requirements = requirements_dict[expectation].get(\"requirements\", [])\n parsed_requirements 
= pkg_resources.parse_requirements(requirements)\n for req in parsed_requirements:\n is_satisfied = any(\n [installed_pkg in req for installed_pkg in installed_packages]\n )\n if is_satisfied or req in just_installed:\n continue\n logger.debug(f\"Executing command: 'pip install \\\"{req}\\\"'\")\n status_code = execute_shell_command(f'pip install \"{req}\"')\n if status_code == 0:\n just_installed.add(req)\n else:\n expectation_tracebacks.write(\n f\"\\n\\n----------------\\n{expectation} ({group})\\n\"\n )\n expectation_tracebacks.write(f\"Failed to pip install {req}\\n\\n\")\n\n if group != \"core\":\n logger.debug(f\"Importing {expectation}\")\n try:\n if group == \"great_expectations_experimental\":\n importlib.import_module(f\"expectations.{expectation}\", group)\n else:\n importlib.import_module(f\"{group}.expectations\")\n except ModuleNotFoundError as e:\n logger.error(f\"Failed to load expectation: {expectation}\")\n print(traceback.format_exc())\n expectation_tracebacks.write(\n f\"\\n\\n----------------\\n{expectation} ({group})\\n\"\n )\n expectation_tracebacks.write(traceback.format_exc())\n continue\n\n logger.debug(f\"Running diagnostics for expectation: {expectation}\")\n impl = great_expectations.expectations.registry.get_expectation_impl(\n expectation\n )\n try:\n diagnostics = impl().run_diagnostics(return_only_gallery_examples=True)\n checklist_string = diagnostics.generate_checklist()\n expectation_checklists.write(\n f\"\\n\\n----------------\\n{expectation} ({group})\\n\"\n )\n expectation_checklists.write(f\"{checklist_string}\\n\")\n gallery_info[expectation] = diagnostics.to_json_dict()\n except Exception:\n logger.error(f\"Failed to run diagnostics for: {expectation}\")\n print(traceback.format_exc())\n expectation_tracebacks.write(\n f\"\\n\\n----------------\\n{expectation} ({group})\\n\"\n )\n expectation_tracebacks.write(traceback.format_exc())\n\n if just_installed:\n print(\"\\n\\n\\n=== (Uninstalling) ===\")\n logger.info(\n f\"Uninstalling packages that were installed while running this script...\"\n )\n for req in just_installed:\n logger.debug(f\"Executing command: 'pip uninstall -y \\\"{req}\\\"'\")\n execute_shell_command(f'pip uninstall -y \"{req}\"')\n\n expectation_filenames_set = set(requirements_dict.keys())\n registered_expectations_set = set(\n great_expectations.expectations.registry.list_registered_expectation_implementations()\n )\n non_matched_filenames = expectation_filenames_set - registered_expectations_set\n if non_matched_filenames:\n expectation_tracebacks.write(f\"\\n\\n----------------\\n(Not a traceback)\\n\")\n expectation_tracebacks.write(\n \"Expectation filenames that don't match their defined Expectation name:\\n\"\n )\n for fname in sorted(non_matched_filenames):\n expectation_tracebacks.write(f\"- {fname}\\n\")\n\n bad_names = sorted(\n list(registered_expectations_set - expectation_filenames_set)\n )\n expectation_tracebacks.write(\n f\"\\nRegistered Expectation names that don't match:\\n\"\n )\n for exp_name in bad_names:\n expectation_tracebacks.write(f\"- {exp_name}\\n\")\n\n if include_core:\n core_dir = os.path.join(\n os.path.dirname(__file__),\n \"..\",\n \"..\",\n \"great_expectations\",\n \"expectations\",\n \"core\",\n )\n core_expectations_filename_set = set(\n [\n fname.rsplit(\".\", 1)[0]\n for fname in os.listdir(core_dir)\n if fname.startswith(\"expect_\")\n ]\n )\n core_expectations_not_in_gallery = core_expectations_filename_set - set(\n core_expectations\n )\n if core_expectations_not_in_gallery:\n 
expectation_tracebacks.write(f\"\\n\\n----------------\\n(Not a traceback)\\n\")\n expectation_tracebacks.write(\n f\"Core Expectation files not included in core_expectations:\\n\"\n )\n for exp_name in sorted(core_expectations_not_in_gallery):\n expectation_tracebacks.write(f\"- {exp_name}\\n\")\n\n return gallery_info\n\n\nif __name__ == \"__main__\":\n gallery_info = build_gallery(include_core=True, include_contrib=True)\n tracebacks = expectation_tracebacks.getvalue()\n checklists = expectation_checklists.getvalue()\n if tracebacks != \"\":\n print(\"\\n\\n\\n\" + \"#\" * 30 + \" T R A C E B A C K S \" + \"#\" * 30 + \"\\n\")\n print(tracebacks)\n print(\n \"\\n\\n\" + \"#\" * 30 + \" E N D T R A C E B A C K S \" + \"#\" * 30 + \"\\n\\n\"\n )\n with open(\"./gallery-errors.txt\", \"w\") as outfile:\n outfile.write(tracebacks)\n if checklists != \"\":\n print(checklists)\n with open(\"./checklists.txt\", \"w\") as outfile:\n outfile.write(checklists)\n with open(\"./expectation_library_v2.json\", \"w\") as outfile:\n json.dump(gallery_info, outfile)\n", "path": "assets/scripts/build_gallery.py"}], "after_files": [{"content": "import ast\nimport importlib\nimport json\nimport logging\nimport os\nimport re\nimport sys\nimport traceback\nfrom io import StringIO\nfrom subprocess import CalledProcessError, CompletedProcess, run\nfrom typing import Dict\n\nimport pkg_resources\n\nlogger = logging.getLogger(__name__)\nchandler = logging.StreamHandler(stream=sys.stdout)\nchandler.setLevel(logging.DEBUG)\nchandler.setFormatter(\n logging.Formatter(\"%(asctime)s - %(levelname)s - %(message)s\", \"%Y-%m-%dT%H:%M:%S\")\n)\nlogger.addHandler(chandler)\nlogger.setLevel(logging.DEBUG)\n\n\nexpectation_tracebacks = StringIO()\nexpectation_checklists = StringIO()\n\n\ndef execute_shell_command(command: str) -> int:\n \"\"\"\n Wrap subprocess command in a try/except block to provide a convenient method for pip installing dependencies.\n\n :param command: bash command -- as if typed in a shell/Terminal window\n :return: status code -- 0 if successful; all other values (1 is the most common) indicate an error\n \"\"\"\n cwd: str = os.getcwd()\n\n path_env_var: str = os.pathsep.join([os.environ.get(\"PATH\", os.defpath), cwd])\n env: dict = dict(os.environ, PATH=path_env_var)\n\n status_code: int = 0\n try:\n res: CompletedProcess = run(\n args=[\"bash\", \"-c\", command],\n stdin=None,\n input=None,\n # stdout=None, # commenting out to prevent issues with `subprocess.run` in python <3.7.4\n # stderr=None, # commenting out to prevent issues with `subprocess.run` in python <3.7.4\n capture_output=True,\n shell=False,\n cwd=cwd,\n timeout=None,\n check=True,\n encoding=None,\n errors=None,\n text=None,\n env=env,\n universal_newlines=True,\n )\n sh_out: str = res.stdout.strip()\n logger.info(sh_out)\n except CalledProcessError as cpe:\n status_code = cpe.returncode\n sys.stderr.write(cpe.output)\n sys.stderr.flush()\n exception_message: str = \"A Sub-Process call Exception occurred.\\n\"\n exception_traceback: str = traceback.format_exc()\n exception_message += (\n f'{type(cpe).__name__}: \"{str(cpe)}\". 
Traceback: \"{exception_traceback}\".'\n )\n logger.error(exception_message)\n\n return status_code\n\n\ndef get_contrib_requirements(filepath: str) -> Dict:\n \"\"\"\n Parse the python file from filepath to identify a \"library_metadata\" dictionary in any defined classes, and return a requirements_info object that includes a list of pip-installable requirements for each class that defines them.\n\n Note, currently we are handling all dependencies at the module level. To support future expandability and detail, this method also returns per-class requirements in addition to the concatenated list.\n\n Args:\n filepath: the path to the file to parse and analyze\n\n Returns:\n A dictionary:\n {\n \"requirements\": [ all_requirements_found_in_any_library_metadata_in_file ],\n class_name: [ requirements ]\n }\n\n \"\"\"\n with open(filepath) as file:\n tree = ast.parse(file.read())\n\n requirements_info = {\"requirements\": []}\n for child in ast.iter_child_nodes(tree):\n if not isinstance(child, ast.ClassDef):\n continue\n current_class = child.name\n for node in ast.walk(child):\n if isinstance(node, ast.Assign):\n try:\n target_ids = [target.id for target in node.targets]\n except (ValueError, AttributeError):\n # some assignment types assign to non-node objects (e.g. Tuple)\n target_ids = []\n if \"library_metadata\" in target_ids:\n library_metadata = ast.literal_eval(node.value)\n requirements = library_metadata.get(\"requirements\", [])\n requirements_info[current_class] = requirements\n requirements_info[\"requirements\"] += requirements\n\n return requirements_info\n\n\ndef build_gallery(include_core: bool = True, include_contrib: bool = True) -> Dict:\n \"\"\"\n Build the gallery object by running diagnostics for each Expectation and returning the resulting reports.\n\n Args:\n include_core: if true, include Expectations defined in the core module\n include_contrib_experimental: if true, include Expectations defined in contrib_experimental:\n\n Returns:\n None\n\n \"\"\"\n gallery_info = dict()\n requirements_dict = {}\n logger.info(\"Loading great_expectations library.\")\n installed_packages = pkg_resources.working_set\n installed_packages_txt = sorted(f\"{i.key}=={i.version}\" for i in installed_packages)\n logger.debug(f\"Found the following packages: {installed_packages_txt}\")\n\n import great_expectations\n\n if include_core:\n print(\"\\n\\n\\n=== (Core) ===\")\n logger.info(\"Getting base registered expectations list\")\n core_expectations = (\n great_expectations.expectations.registry.list_registered_expectation_implementations()\n )\n logger.debug(f\"Found the following expectations: {sorted(core_expectations)}\")\n for expectation in core_expectations:\n requirements_dict[expectation] = {\"group\": \"core\"}\n\n just_installed = set()\n\n if include_contrib:\n print(\"\\n\\n\\n=== (Contrib) ===\")\n logger.info(\"Finding contrib modules\")\n skip_dirs = (\"cli\", \"tests\")\n contrib_dir = os.path.join(\n os.path.dirname(__file__),\n \"..\",\n \"..\",\n \"contrib\",\n )\n\n for root, dirs, files in os.walk(contrib_dir):\n for dirname in skip_dirs:\n if dirname in dirs:\n dirs.remove(dirname)\n if \"expectations\" in dirs:\n if root.endswith(\"great_expectations_experimental\"):\n sys.path.append(root)\n else:\n # A package in contrib that may contain more Expectations\n sys.path.append(os.path.dirname(root))\n for filename in files:\n if filename.endswith(\".py\") and filename.startswith(\"expect_\"):\n logger.debug(f\"Getting requirements for module {filename}\")\n 
contrib_subdir_name = os.path.basename(os.path.dirname(root))\n requirements_dict[filename[:-3]] = get_contrib_requirements(\n os.path.join(root, filename)\n )\n requirements_dict[filename[:-3]][\"group\"] = contrib_subdir_name\n\n for expectation in sorted(requirements_dict):\n group = requirements_dict[expectation][\"group\"]\n print(f\"\\n\\n\\n=== {expectation} ({group}) ===\")\n requirements = requirements_dict[expectation].get(\"requirements\", [])\n parsed_requirements = pkg_resources.parse_requirements(requirements)\n for req in parsed_requirements:\n is_satisfied = any(\n [installed_pkg in req for installed_pkg in installed_packages]\n )\n if is_satisfied or req in just_installed:\n continue\n logger.debug(f\"Executing command: 'pip install \\\"{req}\\\"'\")\n status_code = execute_shell_command(f'pip install \"{req}\"')\n if status_code == 0:\n just_installed.add(req)\n else:\n expectation_tracebacks.write(\n f\"\\n\\n----------------\\n{expectation} ({group})\\n\"\n )\n expectation_tracebacks.write(f\"Failed to pip install {req}\\n\\n\")\n\n if group != \"core\":\n logger.debug(f\"Importing {expectation}\")\n try:\n if group == \"great_expectations_experimental\":\n importlib.import_module(f\"expectations.{expectation}\", group)\n else:\n importlib.import_module(f\"{group}.expectations\")\n except ModuleNotFoundError as e:\n logger.error(f\"Failed to load expectation: {expectation}\")\n print(traceback.format_exc())\n expectation_tracebacks.write(\n f\"\\n\\n----------------\\n{expectation} ({group})\\n\"\n )\n expectation_tracebacks.write(traceback.format_exc())\n continue\n\n logger.debug(f\"Running diagnostics for expectation: {expectation}\")\n impl = great_expectations.expectations.registry.get_expectation_impl(\n expectation\n )\n try:\n diagnostics = impl().run_diagnostics(return_only_gallery_examples=True)\n checklist_string = diagnostics.generate_checklist()\n expectation_checklists.write(\n f\"\\n\\n----------------\\n{expectation} ({group})\\n\"\n )\n expectation_checklists.write(f\"{checklist_string}\\n\")\n if diagnostics[\"description\"][\"docstring\"]:\n diagnostics[\"description\"][\"docstring\"] = format_docstring_to_markdown(\n diagnostics[\"description\"][\"docstring\"]\n )\n gallery_info[expectation] = diagnostics.to_json_dict()\n except Exception:\n logger.error(f\"Failed to run diagnostics for: {expectation}\")\n print(traceback.format_exc())\n expectation_tracebacks.write(\n f\"\\n\\n----------------\\n{expectation} ({group})\\n\"\n )\n expectation_tracebacks.write(traceback.format_exc())\n\n if just_installed:\n print(\"\\n\\n\\n=== (Uninstalling) ===\")\n logger.info(\n f\"Uninstalling packages that were installed while running this script...\"\n )\n for req in just_installed:\n logger.debug(f\"Executing command: 'pip uninstall -y \\\"{req}\\\"'\")\n execute_shell_command(f'pip uninstall -y \"{req}\"')\n\n expectation_filenames_set = set(requirements_dict.keys())\n registered_expectations_set = set(\n great_expectations.expectations.registry.list_registered_expectation_implementations()\n )\n non_matched_filenames = expectation_filenames_set - registered_expectations_set\n if non_matched_filenames:\n expectation_tracebacks.write(f\"\\n\\n----------------\\n(Not a traceback)\\n\")\n expectation_tracebacks.write(\n \"Expectation filenames that don't match their defined Expectation name:\\n\"\n )\n for fname in sorted(non_matched_filenames):\n expectation_tracebacks.write(f\"- {fname}\\n\")\n\n bad_names = sorted(\n list(registered_expectations_set - 
expectation_filenames_set)\n )\n expectation_tracebacks.write(\n f\"\\nRegistered Expectation names that don't match:\\n\"\n )\n for exp_name in bad_names:\n expectation_tracebacks.write(f\"- {exp_name}\\n\")\n\n if include_core:\n core_dir = os.path.join(\n os.path.dirname(__file__),\n \"..\",\n \"..\",\n \"great_expectations\",\n \"expectations\",\n \"core\",\n )\n core_expectations_filename_set = set(\n [\n fname.rsplit(\".\", 1)[0]\n for fname in os.listdir(core_dir)\n if fname.startswith(\"expect_\")\n ]\n )\n core_expectations_not_in_gallery = core_expectations_filename_set - set(\n core_expectations\n )\n if core_expectations_not_in_gallery:\n expectation_tracebacks.write(f\"\\n\\n----------------\\n(Not a traceback)\\n\")\n expectation_tracebacks.write(\n f\"Core Expectation files not included in core_expectations:\\n\"\n )\n for exp_name in sorted(core_expectations_not_in_gallery):\n expectation_tracebacks.write(f\"- {exp_name}\\n\")\n\n return gallery_info\n\n\ndef format_docstring_to_markdown(docstr: str) -> str:\n \"\"\"\n Add markdown formatting to a provided docstring\n\n Args:\n docstr: the original docstring that needs to be converted to markdown.\n\n Returns:\n str of Docstring formatted as markdown\n\n \"\"\"\n r = re.compile(r\"\\s\\s+\", re.MULTILINE)\n clean_docstr_list = []\n prev_line = None\n in_code_block = False\n in_param = False\n first_code_indentation = None\n\n # Parse each line to determine if it needs formatting\n for original_line in docstr.split(\"\\n\"):\n # Remove excess spaces from lines formed by concatenated docstring lines.\n line = r.sub(\" \", original_line)\n # In some old docstrings, this indicates the start of an example block.\n if line.strip() == \"::\":\n in_code_block = True\n clean_docstr_list.append(\"```\")\n\n # All of our parameter/arg/etc lists start after a line ending in ':'.\n elif line.strip().endswith(\":\"):\n in_param = True\n # This adds a blank line before the header if one doesn't already exist.\n if prev_line != \"\":\n clean_docstr_list.append(\"\")\n # Turn the line into an H4 header\n clean_docstr_list.append(f\"#### {line.strip()}\")\n elif line.strip() == \"\" and prev_line != \"::\":\n # All of our parameter groups end with a line break, but we don't want to exit a parameter block due to a\n # line break in a code block. However, some code blocks start with a blank first line, so we want to make\n # sure we aren't immediately exiting the code block (hence the test for '::' on the previous line.\n in_param = False\n # Add the markdown indicator to close a code block, since we aren't in one now.\n if in_code_block:\n clean_docstr_list.append(\"```\")\n in_code_block = False\n first_code_indentation = None\n clean_docstr_list.append(line)\n else:\n if in_code_block:\n # Determine the number of spaces indenting the first line of code so they can be removed from all lines\n # in the code block without wrecking the hierarchical indentation levels of future lines.\n if first_code_indentation == None and line.strip() != \"\":\n first_code_indentation = len(\n re.match(r\"\\s*\", original_line, re.UNICODE).group(0)\n )\n if line.strip() == \"\" and prev_line == \"::\":\n # If the first line of the code block is a blank one, just skip it.\n pass\n else:\n # Append the line of code, minus the extra indentation from being written in an indented docstring.\n clean_docstr_list.append(original_line[first_code_indentation:])\n elif \":\" in line.replace(\":ref:\", \"\") and in_param:\n # This indicates a parameter. arg. 
or other definition.\n clean_docstr_list.append(f\"- {line.strip()}\")\n else:\n # This indicates a regular line of text.\n clean_docstr_list.append(f\"{line.strip()}\")\n prev_line = line.strip()\n clean_docstr = \"\\n\".join(clean_docstr_list)\n return clean_docstr\n\n\nif __name__ == \"__main__\":\n gallery_info = build_gallery(include_core=True, include_contrib=True)\n tracebacks = expectation_tracebacks.getvalue()\n checklists = expectation_checklists.getvalue()\n if tracebacks != \"\":\n print(\"\\n\\n\\n\" + \"#\" * 30 + \" T R A C E B A C K S \" + \"#\" * 30 + \"\\n\")\n print(tracebacks)\n print(\n \"\\n\\n\" + \"#\" * 30 + \" E N D T R A C E B A C K S \" + \"#\" * 30 + \"\\n\\n\"\n )\n with open(\"./gallery-errors.txt\", \"w\") as outfile:\n outfile.write(tracebacks)\n if checklists != \"\":\n print(checklists)\n with open(\"./checklists.txt\", \"w\") as outfile:\n outfile.write(checklists)\n with open(\"./expectation_library_v2.json\", \"w\") as outfile:\n json.dump(gallery_info, outfile)\n", "path": "assets/scripts/build_gallery.py"}]}
| 3,635 | 1,004 |
gh_patches_debug_22772
|
rasdani/github-patches
|
git_diff
|
nilearn__nilearn-559
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Doc does not build
Am I the only one?
```
Exception occurred:
File "/home/aa013911/work/nilearn/doc/sphinxext/sphinxgallery/docs_resolv.py", line 348, in embed_code_links
gallery_dir = os.path.join(app.builder.srcdir, gallery_conf['gallery_dir'])
KeyError: 'gallery_dir'
The full traceback has been saved in /tmp/sphinx-err-sePU7r.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doc/sphinxext/sphinxgallery/__init__.py`
Content:
```
1 """Sphinx Gallery
2 """
3 import os
4 __version__ = '0.0.8'
5
6 def path_static():
7 """Returns path to packaged static files"""
8 return os.path.abspath(os.path.dirname(__file__))+'/_static'
9
```
Path: `doc/sphinxext/sphinxgallery/gen_gallery.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Author: Óscar Nájera
3 # License: 3-clause BSD
4
5 import os
6 from sphinxgallery.gen_rst import generate_dir_rst
7 from sphinxgallery.docs_resolv import embed_code_links
8
9
10 def generate_gallery_rst(app):
11 """Starts the gallery configuration and recursively scans the examples
12 directory in order to populate the examples gallery
13 """
14 try:
15 plot_gallery = eval(app.builder.config.plot_gallery)
16 except TypeError:
17 plot_gallery = bool(app.builder.config.plot_gallery)
18
19 if not plot_gallery:
20 return
21
22 gallery_conf.update(app.config.sphinxgallery_conf)
23
24 # this assures I can call the config in other places
25 app.config.sphinxgallery_conf = gallery_conf
26
27 examples_dir = os.path.join(app.builder.srcdir, gallery_conf['examples_dir'])
28 gallery_dir = os.path.join(app.builder.srcdir, gallery_conf['gallery_dir'])
29 mod_examples_dir = os.path.join(app.builder.srcdir, gallery_conf['mod_example_dir'])
30
31 for workdir in [examples_dir, gallery_dir, mod_examples_dir]:
32 if not os.path.exists(workdir):
33 os.makedirs(workdir)
34
35 # we create an index.rst with all examples
36 fhindex = open(os.path.join(gallery_dir, 'index.rst'), 'w')
37 fhindex.write("""
38
39 .. _examples-index:
40
41 Gallery of Examples
42 ===================
43
44 """)
45 # Here we don't use an os.walk, but we recurse only twice: flat is
46 # better than nested.
47 seen_backrefs = set()
48 generate_dir_rst('.', fhindex, examples_dir, gallery_dir, gallery_conf, plot_gallery, seen_backrefs)
49 for directory in sorted(os.listdir(examples_dir)):
50 if os.path.isdir(os.path.join(examples_dir, directory)):
51 generate_dir_rst(directory, fhindex, examples_dir, gallery_dir, gallery_conf, plot_gallery, seen_backrefs)
52 fhindex.flush()
53
54
55 gallery_conf = {
56 'examples_dir' : '../examples',
57 'gallery_dir' : 'auto_examples',
58 'mod_example_dir': 'modules/generated',
59 'doc_module' : (),
60 'reference_url' : {},
61 }
62
63 def setup(app):
64 app.add_config_value('plot_gallery', True, 'html')
65 app.add_config_value('sphinxgallery_conf', gallery_conf, 'html')
66 app.add_stylesheet('gallery.css')
67
68 app.connect('builder-inited', generate_gallery_rst)
69
70 app.connect('build-finished', embed_code_links)
71
72 # Sphinx hack: sphinx copies generated images to the build directory
73 # each time the docs are made. If the desired image name already
74 # exists, it appends a digit to prevent overwrites. The problem is,
75 # the directory is never cleared. This means that each time you build
76 # the docs, the number of images in the directory grows.
77 #
78 # This question has been asked on the sphinx development list, but there
79 # was no response: http://osdir.com/ml/sphinx-dev/2011-02/msg00123.html
80 #
81 # The following is a hack that prevents this behavior by clearing the
82 # image build directory each time the docs are built. If sphinx
83 # changes their layout between versions, this will not work (though
84 # it should probably not cause a crash). Tested successfully
85 # on Sphinx 1.0.7
86 build_image_dir = '_build/html/_images'
87 if os.path.exists(build_image_dir):
88 filelist = os.listdir(build_image_dir)
89 for filename in filelist:
90 if filename.endswith('png'):
91 os.remove(os.path.join(build_image_dir, filename))
92
93 def setup_module():
94 # HACK: Stop nosetests running setup() above
95 pass
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/doc/sphinxext/sphinxgallery/__init__.py b/doc/sphinxext/sphinxgallery/__init__.py
--- a/doc/sphinxext/sphinxgallery/__init__.py
+++ b/doc/sphinxext/sphinxgallery/__init__.py
@@ -1,7 +1,7 @@
"""Sphinx Gallery
"""
import os
-__version__ = '0.0.8'
+__version__ = '0.0.9-dev'
def path_static():
"""Returns path to packaged static files"""
diff --git a/doc/sphinxext/sphinxgallery/gen_gallery.py b/doc/sphinxext/sphinxgallery/gen_gallery.py
--- a/doc/sphinxext/sphinxgallery/gen_gallery.py
+++ b/doc/sphinxext/sphinxgallery/gen_gallery.py
@@ -16,14 +16,14 @@
except TypeError:
plot_gallery = bool(app.builder.config.plot_gallery)
- if not plot_gallery:
- return
-
gallery_conf.update(app.config.sphinxgallery_conf)
# this assures I can call the config in other places
app.config.sphinxgallery_conf = gallery_conf
+ if not plot_gallery:
+ return
+
examples_dir = os.path.join(app.builder.srcdir, gallery_conf['examples_dir'])
gallery_dir = os.path.join(app.builder.srcdir, gallery_conf['gallery_dir'])
mod_examples_dir = os.path.join(app.builder.srcdir, gallery_conf['mod_example_dir'])
|
{"golden_diff": "diff --git a/doc/sphinxext/sphinxgallery/__init__.py b/doc/sphinxext/sphinxgallery/__init__.py\n--- a/doc/sphinxext/sphinxgallery/__init__.py\n+++ b/doc/sphinxext/sphinxgallery/__init__.py\n@@ -1,7 +1,7 @@\n \"\"\"Sphinx Gallery\n \"\"\"\n import os\n-__version__ = '0.0.8'\n+__version__ = '0.0.9-dev'\n \n def path_static():\n \"\"\"Returns path to packaged static files\"\"\"\ndiff --git a/doc/sphinxext/sphinxgallery/gen_gallery.py b/doc/sphinxext/sphinxgallery/gen_gallery.py\n--- a/doc/sphinxext/sphinxgallery/gen_gallery.py\n+++ b/doc/sphinxext/sphinxgallery/gen_gallery.py\n@@ -16,14 +16,14 @@\n except TypeError:\n plot_gallery = bool(app.builder.config.plot_gallery)\n \n- if not plot_gallery:\n- return\n-\n gallery_conf.update(app.config.sphinxgallery_conf)\n \n # this assures I can call the config in other places\n app.config.sphinxgallery_conf = gallery_conf\n \n+ if not plot_gallery:\n+ return\n+\n examples_dir = os.path.join(app.builder.srcdir, gallery_conf['examples_dir'])\n gallery_dir = os.path.join(app.builder.srcdir, gallery_conf['gallery_dir'])\n mod_examples_dir = os.path.join(app.builder.srcdir, gallery_conf['mod_example_dir'])\n", "issue": "Doc does not build\nAm I the only one?\n\n```\nException occurred:\n File \"/home/aa013911/work/nilearn/doc/sphinxext/sphinxgallery/docs_resolv.py\", line 348, in embed_code_links\n gallery_dir = os.path.join(app.builder.srcdir, gallery_conf['gallery_dir'])\nKeyError: 'gallery_dir'\nThe full traceback has been saved in /tmp/sphinx-err-sePU7r.log, if you want to report the issue to the developers.\nPlease also report this if it was a user error, so that a better error message can be provided next time.\n```\n\n", "before_files": [{"content": "\"\"\"Sphinx Gallery\n\"\"\"\nimport os\n__version__ = '0.0.8'\n\ndef path_static():\n \"\"\"Returns path to packaged static files\"\"\"\n return os.path.abspath(os.path.dirname(__file__))+'/_static'\n", "path": "doc/sphinxext/sphinxgallery/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n# Author: \u00d3scar N\u00e1jera\n# License: 3-clause BSD\n\nimport os\nfrom sphinxgallery.gen_rst import generate_dir_rst\nfrom sphinxgallery.docs_resolv import embed_code_links\n\n\ndef generate_gallery_rst(app):\n \"\"\"Starts the gallery configuration and recursively scans the examples\n directory in order to populate the examples gallery\n \"\"\"\n try:\n plot_gallery = eval(app.builder.config.plot_gallery)\n except TypeError:\n plot_gallery = bool(app.builder.config.plot_gallery)\n\n if not plot_gallery:\n return\n\n gallery_conf.update(app.config.sphinxgallery_conf)\n\n # this assures I can call the config in other places\n app.config.sphinxgallery_conf = gallery_conf\n\n examples_dir = os.path.join(app.builder.srcdir, gallery_conf['examples_dir'])\n gallery_dir = os.path.join(app.builder.srcdir, gallery_conf['gallery_dir'])\n mod_examples_dir = os.path.join(app.builder.srcdir, gallery_conf['mod_example_dir'])\n\n for workdir in [examples_dir, gallery_dir, mod_examples_dir]:\n if not os.path.exists(workdir):\n os.makedirs(workdir)\n\n # we create an index.rst with all examples\n fhindex = open(os.path.join(gallery_dir, 'index.rst'), 'w')\n fhindex.write(\"\"\"\n\n.. 
_examples-index:\n\nGallery of Examples\n===================\n\n\"\"\")\n # Here we don't use an os.walk, but we recurse only twice: flat is\n # better than nested.\n seen_backrefs = set()\n generate_dir_rst('.', fhindex, examples_dir, gallery_dir, gallery_conf, plot_gallery, seen_backrefs)\n for directory in sorted(os.listdir(examples_dir)):\n if os.path.isdir(os.path.join(examples_dir, directory)):\n generate_dir_rst(directory, fhindex, examples_dir, gallery_dir, gallery_conf, plot_gallery, seen_backrefs)\n fhindex.flush()\n\n\ngallery_conf = {\n 'examples_dir' : '../examples',\n 'gallery_dir' : 'auto_examples',\n 'mod_example_dir': 'modules/generated',\n 'doc_module' : (),\n 'reference_url' : {},\n}\n\ndef setup(app):\n app.add_config_value('plot_gallery', True, 'html')\n app.add_config_value('sphinxgallery_conf', gallery_conf, 'html')\n app.add_stylesheet('gallery.css')\n\n app.connect('builder-inited', generate_gallery_rst)\n\n app.connect('build-finished', embed_code_links)\n\n # Sphinx hack: sphinx copies generated images to the build directory\n # each time the docs are made. If the desired image name already\n # exists, it appends a digit to prevent overwrites. The problem is,\n # the directory is never cleared. This means that each time you build\n # the docs, the number of images in the directory grows.\n #\n # This question has been asked on the sphinx development list, but there\n # was no response: http://osdir.com/ml/sphinx-dev/2011-02/msg00123.html\n #\n # The following is a hack that prevents this behavior by clearing the\n # image build directory each time the docs are built. If sphinx\n # changes their layout between versions, this will not work (though\n # it should probably not cause a crash). Tested successfully\n # on Sphinx 1.0.7\n build_image_dir = '_build/html/_images'\n if os.path.exists(build_image_dir):\n filelist = os.listdir(build_image_dir)\n for filename in filelist:\n if filename.endswith('png'):\n os.remove(os.path.join(build_image_dir, filename))\n\ndef setup_module():\n # HACK: Stop nosetests running setup() above\n pass\n", "path": "doc/sphinxext/sphinxgallery/gen_gallery.py"}], "after_files": [{"content": "\"\"\"Sphinx Gallery\n\"\"\"\nimport os\n__version__ = '0.0.9-dev'\n\ndef path_static():\n \"\"\"Returns path to packaged static files\"\"\"\n return os.path.abspath(os.path.dirname(__file__))+'/_static'\n", "path": "doc/sphinxext/sphinxgallery/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n# Author: \u00d3scar N\u00e1jera\n# License: 3-clause BSD\n\nimport os\nfrom sphinxgallery.gen_rst import generate_dir_rst\nfrom sphinxgallery.docs_resolv import embed_code_links\n\n\ndef generate_gallery_rst(app):\n \"\"\"Starts the gallery configuration and recursively scans the examples\n directory in order to populate the examples gallery\n \"\"\"\n try:\n plot_gallery = eval(app.builder.config.plot_gallery)\n except TypeError:\n plot_gallery = bool(app.builder.config.plot_gallery)\n\n gallery_conf.update(app.config.sphinxgallery_conf)\n\n # this assures I can call the config in other places\n app.config.sphinxgallery_conf = gallery_conf\n\n if not plot_gallery:\n return\n\n examples_dir = os.path.join(app.builder.srcdir, gallery_conf['examples_dir'])\n gallery_dir = os.path.join(app.builder.srcdir, gallery_conf['gallery_dir'])\n mod_examples_dir = os.path.join(app.builder.srcdir, gallery_conf['mod_example_dir'])\n\n for workdir in [examples_dir, gallery_dir, mod_examples_dir]:\n if not os.path.exists(workdir):\n os.makedirs(workdir)\n\n # we create an 
index.rst with all examples\n fhindex = open(os.path.join(gallery_dir, 'index.rst'), 'w')\n fhindex.write(\"\"\"\n\n.. _examples-index:\n\nGallery of Examples\n===================\n\n\"\"\")\n # Here we don't use an os.walk, but we recurse only twice: flat is\n # better than nested.\n seen_backrefs = set()\n generate_dir_rst('.', fhindex, examples_dir, gallery_dir, gallery_conf, plot_gallery, seen_backrefs)\n for directory in sorted(os.listdir(examples_dir)):\n if os.path.isdir(os.path.join(examples_dir, directory)):\n generate_dir_rst(directory, fhindex, examples_dir, gallery_dir, gallery_conf, plot_gallery, seen_backrefs)\n fhindex.flush()\n\n\ngallery_conf = {\n 'examples_dir' : '../examples',\n 'gallery_dir' : 'auto_examples',\n 'mod_example_dir': 'modules/generated',\n 'doc_module' : (),\n 'reference_url' : {},\n}\n\ndef setup(app):\n app.add_config_value('plot_gallery', True, 'html')\n app.add_config_value('sphinxgallery_conf', gallery_conf, 'html')\n app.add_stylesheet('gallery.css')\n\n app.connect('builder-inited', generate_gallery_rst)\n\n app.connect('build-finished', embed_code_links)\n\n # Sphinx hack: sphinx copies generated images to the build directory\n # each time the docs are made. If the desired image name already\n # exists, it appends a digit to prevent overwrites. The problem is,\n # the directory is never cleared. This means that each time you build\n # the docs, the number of images in the directory grows.\n #\n # This question has been asked on the sphinx development list, but there\n # was no response: http://osdir.com/ml/sphinx-dev/2011-02/msg00123.html\n #\n # The following is a hack that prevents this behavior by clearing the\n # image build directory each time the docs are built. If sphinx\n # changes their layout between versions, this will not work (though\n # it should probably not cause a crash). Tested successfully\n # on Sphinx 1.0.7\n build_image_dir = '_build/html/_images'\n if os.path.exists(build_image_dir):\n filelist = os.listdir(build_image_dir)\n for filename in filelist:\n if filename.endswith('png'):\n os.remove(os.path.join(build_image_dir, filename))\n\ndef setup_module():\n # HACK: Stop nosetests running setup() above\n pass\n", "path": "doc/sphinxext/sphinxgallery/gen_gallery.py"}]}
| 1,504 | 312 |
gh_patches_debug_13397
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-940
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
If nested fields are null, show null instead of their structure
Take for example Sessions API. If `track` is None (not set), it shows
``` json
"title": "string",
"track": {
"id": null,
"name": null
},
"video": "string"
```
It would be better if it shows
``` json
"title": "string",
"track": null,
"video": "string"
```
@shivamMg @mariobehling What do you think ?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `open_event/api/sessions.py`
Content:
```
1 from flask.ext.restplus import Resource, Namespace
2 from sqlalchemy.orm.collections import InstrumentedList
3
4 from open_event.models.session import Session as SessionModel
5 from open_event.models.track import Track as TrackModel
6 from open_event.models.microlocation import Microlocation as MicrolocationModel
7 from open_event.models.speaker import Speaker as SpeakerModel
8
9 from .helpers.helpers import get_paginated_list, requires_auth, \
10 save_db_model, get_object_in_event
11 from .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \
12 PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES
13 from .helpers import custom_fields as fields
14 from .helpers.special_fields import SessionLanguageField, SessionStateField
15
16 api = Namespace('sessions', description='Sessions', path='/')
17
18 # Create models
19 SESSION_TRACK = api.model('SessionTrack', {
20 'id': fields.Integer(required=True),
21 'name': fields.String(),
22 })
23
24 SESSION_SPEAKER = api.model('SessionSpeaker', {
25 'id': fields.Integer(required=True),
26 'name': fields.String(),
27 'organisation': fields.String()
28 })
29
30 SESSION_MICROLOCATION = api.model('SessionMicrolocation', {
31 'id': fields.Integer(required=True),
32 'name': fields.String(),
33 })
34
35 SESSION = api.model('Session', {
36 'id': fields.Integer(required=True),
37 'title': fields.String(required=True),
38 'subtitle': fields.String(),
39 'short_abstract': fields.String(),
40 'long_abstract': fields.String(required=True),
41 'comments': fields.String(),
42 'start_time': fields.DateTime(required=True),
43 'end_time': fields.DateTime(required=True),
44 'track': fields.Nested(SESSION_TRACK),
45 'speakers': fields.List(fields.Nested(SESSION_SPEAKER)),
46 'language': SessionLanguageField(),
47 'microlocation': fields.Nested(SESSION_MICROLOCATION),
48 'slides': fields.String(),
49 'video': fields.String(),
50 'audio': fields.String(),
51 'signup_url': fields.Uri(),
52 'state': SessionStateField()
53 })
54
55 SESSION_PAGINATED = api.clone('SessionPaginated', PAGINATED_MODEL, {
56 'results': fields.List(fields.Nested(SESSION))
57 })
58
59 SESSION_POST = api.clone('SessionPost', SESSION, {
60 'track_id': fields.Integer(),
61 'speaker_ids': fields.List(fields.Integer()),
62 'microlocation_id': fields.Integer()
63 })
64 del SESSION_POST['id']
65 del SESSION_POST['track']
66 del SESSION_POST['speakers']
67 del SESSION_POST['microlocation']
68
69
70 # Create DAO
71 class SessionDAO(ServiceDAO):
72 def _delete_fields(self, data):
73 del data['speaker_ids']
74 del data['track_id']
75 del data['microlocation_id']
76 data['start_time'] = SESSION_POST['start_time'].from_str(
77 data['start_time'])
78 data['end_time'] = SESSION_POST['end_time'].from_str(data['end_time'])
79 return data
80
81 def get_object(self, model, sid, event_id):
82 """
83 returns object (model). Checks if object is in same event
84 """
85 if sid is None:
86 return None
87 return get_object_in_event(model, sid, event_id)
88
89 def fix_payload_post(self, event_id, data):
90 """
91 Fixes payload of POST request
92 """
93 data['track'] = self.get_object(TrackModel, data['track_id'], event_id)
94 data['microlocation'] = self.get_object(MicrolocationModel, data['microlocation_id'], event_id)
95 data['event_id'] = event_id
96 data['speakers'] = InstrumentedList(
97 SpeakerModel.query.get(_) for _ in data['speaker_ids']
98 if self.get_object(SpeakerModel, _, event_id) is not None
99 )
100 data = self._delete_fields(data)
101 return data
102
103 def update(self, event_id, service_id, data):
104 data = self.validate(data)
105 data_copy = data.copy()
106 data_copy = self.fix_payload_post(event_id, data_copy)
107 data = self._delete_fields(data)
108 obj = ServiceDAO.update(self, event_id, service_id, data)
109 obj.track = data_copy['track']
110 obj.microlocation = data_copy['microlocation']
111 obj.speakers = data_copy['speakers']
112 obj = save_db_model(obj, SessionModel.__name__, event_id)
113 return obj
114
115 def create(self, event_id, data, url):
116 data = self.validate(data)
117 payload = self.fix_payload_post(event_id, data)
118 return ServiceDAO.create(self, event_id, payload, url, validate=False)
119
120
121 DAO = SessionDAO(SessionModel, SESSION_POST)
122
123
124 @api.route('/events/<int:event_id>/sessions/<int:session_id>')
125 @api.response(404, 'Session not found')
126 @api.response(400, 'Session does not belong to event')
127 class Session(Resource):
128 @api.doc('get_session')
129 @api.marshal_with(SESSION)
130 def get(self, event_id, session_id):
131 """Fetch a session given its id"""
132 return DAO.get(event_id, session_id)
133
134 @requires_auth
135 @api.doc('delete_session')
136 @api.marshal_with(SESSION)
137 def delete(self, event_id, session_id):
138 """Delete a session given its id"""
139 return DAO.delete(event_id, session_id)
140
141 @requires_auth
142 @api.doc('update_session', responses=PUT_RESPONSES)
143 @api.marshal_with(SESSION)
144 @api.expect(SESSION_POST)
145 def put(self, event_id, session_id):
146 """Update a session given its id"""
147 return DAO.update(event_id, session_id, self.api.payload)
148
149
150 @api.route('/events/<int:event_id>/sessions')
151 class SessionList(Resource):
152 @api.doc('list_sessions')
153 @api.marshal_list_with(SESSION)
154 def get(self, event_id):
155 """List all sessions"""
156 return DAO.list(event_id)
157
158 @requires_auth
159 @api.doc('create_session', responses=POST_RESPONSES)
160 @api.marshal_with(SESSION)
161 @api.expect(SESSION_POST)
162 def post(self, event_id):
163 """Create a session"""
164 return DAO.create(
165 event_id,
166 self.api.payload,
167 self.api.url_for(self, event_id=event_id)
168 )
169
170
171 @api.route('/events/<int:event_id>/sessions/page')
172 class SessionListPaginated(Resource, PaginatedResourceBase):
173 @api.doc('list_sessions_paginated', params=PAGE_PARAMS)
174 @api.marshal_with(SESSION_PAGINATED)
175 def get(self, event_id):
176 """List sessions in a paginated manner"""
177 return get_paginated_list(
178 SessionModel,
179 self.api.url_for(self, event_id=event_id),
180 args=self.parser.parse_args(),
181 event_id=event_id
182 )
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/open_event/api/sessions.py b/open_event/api/sessions.py
--- a/open_event/api/sessions.py
+++ b/open_event/api/sessions.py
@@ -41,10 +41,10 @@
'comments': fields.String(),
'start_time': fields.DateTime(required=True),
'end_time': fields.DateTime(required=True),
- 'track': fields.Nested(SESSION_TRACK),
+ 'track': fields.Nested(SESSION_TRACK, allow_null=True),
'speakers': fields.List(fields.Nested(SESSION_SPEAKER)),
'language': SessionLanguageField(),
- 'microlocation': fields.Nested(SESSION_MICROLOCATION),
+ 'microlocation': fields.Nested(SESSION_MICROLOCATION, allow_null=True),
'slides': fields.String(),
'video': fields.String(),
'audio': fields.String(),
|
{"golden_diff": "diff --git a/open_event/api/sessions.py b/open_event/api/sessions.py\n--- a/open_event/api/sessions.py\n+++ b/open_event/api/sessions.py\n@@ -41,10 +41,10 @@\n 'comments': fields.String(),\n 'start_time': fields.DateTime(required=True),\n 'end_time': fields.DateTime(required=True),\n- 'track': fields.Nested(SESSION_TRACK),\n+ 'track': fields.Nested(SESSION_TRACK, allow_null=True),\n 'speakers': fields.List(fields.Nested(SESSION_SPEAKER)),\n 'language': SessionLanguageField(),\n- 'microlocation': fields.Nested(SESSION_MICROLOCATION),\n+ 'microlocation': fields.Nested(SESSION_MICROLOCATION, allow_null=True),\n 'slides': fields.String(),\n 'video': fields.String(),\n 'audio': fields.String(),\n", "issue": "If nested fields are null, show null instead of their structure\nTake for example Sessions API. If `track` is None (not set), it shows \n\n``` json\n \"title\": \"string\",\n \"track\": {\n \"id\": null,\n \"name\": null\n },\n \"video\": \"string\"\n```\n\nIt would be better if it shows \n\n``` json\n \"title\": \"string\",\n \"track\": null,\n \"video\": \"string\"\n```\n\n@shivamMg @mariobehling What do you think ?\n\n", "before_files": [{"content": "from flask.ext.restplus import Resource, Namespace\nfrom sqlalchemy.orm.collections import InstrumentedList\n\nfrom open_event.models.session import Session as SessionModel\nfrom open_event.models.track import Track as TrackModel\nfrom open_event.models.microlocation import Microlocation as MicrolocationModel\nfrom open_event.models.speaker import Speaker as SpeakerModel\n\nfrom .helpers.helpers import get_paginated_list, requires_auth, \\\n save_db_model, get_object_in_event\nfrom .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \\\n PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES\nfrom .helpers import custom_fields as fields\nfrom .helpers.special_fields import SessionLanguageField, SessionStateField\n\napi = Namespace('sessions', description='Sessions', path='/')\n\n# Create models\nSESSION_TRACK = api.model('SessionTrack', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n})\n\nSESSION_SPEAKER = api.model('SessionSpeaker', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n 'organisation': fields.String()\n})\n\nSESSION_MICROLOCATION = api.model('SessionMicrolocation', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n})\n\nSESSION = api.model('Session', {\n 'id': fields.Integer(required=True),\n 'title': fields.String(required=True),\n 'subtitle': fields.String(),\n 'short_abstract': fields.String(),\n 'long_abstract': fields.String(required=True),\n 'comments': fields.String(),\n 'start_time': fields.DateTime(required=True),\n 'end_time': fields.DateTime(required=True),\n 'track': fields.Nested(SESSION_TRACK),\n 'speakers': fields.List(fields.Nested(SESSION_SPEAKER)),\n 'language': SessionLanguageField(),\n 'microlocation': fields.Nested(SESSION_MICROLOCATION),\n 'slides': fields.String(),\n 'video': fields.String(),\n 'audio': fields.String(),\n 'signup_url': fields.Uri(),\n 'state': SessionStateField()\n})\n\nSESSION_PAGINATED = api.clone('SessionPaginated', PAGINATED_MODEL, {\n 'results': fields.List(fields.Nested(SESSION))\n})\n\nSESSION_POST = api.clone('SessionPost', SESSION, {\n 'track_id': fields.Integer(),\n 'speaker_ids': fields.List(fields.Integer()),\n 'microlocation_id': fields.Integer()\n})\ndel SESSION_POST['id']\ndel SESSION_POST['track']\ndel SESSION_POST['speakers']\ndel SESSION_POST['microlocation']\n\n\n# Create DAO\nclass 
SessionDAO(ServiceDAO):\n def _delete_fields(self, data):\n del data['speaker_ids']\n del data['track_id']\n del data['microlocation_id']\n data['start_time'] = SESSION_POST['start_time'].from_str(\n data['start_time'])\n data['end_time'] = SESSION_POST['end_time'].from_str(data['end_time'])\n return data\n\n def get_object(self, model, sid, event_id):\n \"\"\"\n returns object (model). Checks if object is in same event\n \"\"\"\n if sid is None:\n return None\n return get_object_in_event(model, sid, event_id)\n\n def fix_payload_post(self, event_id, data):\n \"\"\"\n Fixes payload of POST request\n \"\"\"\n data['track'] = self.get_object(TrackModel, data['track_id'], event_id)\n data['microlocation'] = self.get_object(MicrolocationModel, data['microlocation_id'], event_id)\n data['event_id'] = event_id\n data['speakers'] = InstrumentedList(\n SpeakerModel.query.get(_) for _ in data['speaker_ids']\n if self.get_object(SpeakerModel, _, event_id) is not None\n )\n data = self._delete_fields(data)\n return data\n\n def update(self, event_id, service_id, data):\n data = self.validate(data)\n data_copy = data.copy()\n data_copy = self.fix_payload_post(event_id, data_copy)\n data = self._delete_fields(data)\n obj = ServiceDAO.update(self, event_id, service_id, data)\n obj.track = data_copy['track']\n obj.microlocation = data_copy['microlocation']\n obj.speakers = data_copy['speakers']\n obj = save_db_model(obj, SessionModel.__name__, event_id)\n return obj\n\n def create(self, event_id, data, url):\n data = self.validate(data)\n payload = self.fix_payload_post(event_id, data)\n return ServiceDAO.create(self, event_id, payload, url, validate=False)\n\n\nDAO = SessionDAO(SessionModel, SESSION_POST)\n\n\[email protected]('/events/<int:event_id>/sessions/<int:session_id>')\[email protected](404, 'Session not found')\[email protected](400, 'Session does not belong to event')\nclass Session(Resource):\n @api.doc('get_session')\n @api.marshal_with(SESSION)\n def get(self, event_id, session_id):\n \"\"\"Fetch a session given its id\"\"\"\n return DAO.get(event_id, session_id)\n\n @requires_auth\n @api.doc('delete_session')\n @api.marshal_with(SESSION)\n def delete(self, event_id, session_id):\n \"\"\"Delete a session given its id\"\"\"\n return DAO.delete(event_id, session_id)\n\n @requires_auth\n @api.doc('update_session', responses=PUT_RESPONSES)\n @api.marshal_with(SESSION)\n @api.expect(SESSION_POST)\n def put(self, event_id, session_id):\n \"\"\"Update a session given its id\"\"\"\n return DAO.update(event_id, session_id, self.api.payload)\n\n\[email protected]('/events/<int:event_id>/sessions')\nclass SessionList(Resource):\n @api.doc('list_sessions')\n @api.marshal_list_with(SESSION)\n def get(self, event_id):\n \"\"\"List all sessions\"\"\"\n return DAO.list(event_id)\n\n @requires_auth\n @api.doc('create_session', responses=POST_RESPONSES)\n @api.marshal_with(SESSION)\n @api.expect(SESSION_POST)\n def post(self, event_id):\n \"\"\"Create a session\"\"\"\n return DAO.create(\n event_id,\n self.api.payload,\n self.api.url_for(self, event_id=event_id)\n )\n\n\[email protected]('/events/<int:event_id>/sessions/page')\nclass SessionListPaginated(Resource, PaginatedResourceBase):\n @api.doc('list_sessions_paginated', params=PAGE_PARAMS)\n @api.marshal_with(SESSION_PAGINATED)\n def get(self, event_id):\n \"\"\"List sessions in a paginated manner\"\"\"\n return get_paginated_list(\n SessionModel,\n self.api.url_for(self, event_id=event_id),\n args=self.parser.parse_args(),\n event_id=event_id\n )\n", 
"path": "open_event/api/sessions.py"}], "after_files": [{"content": "from flask.ext.restplus import Resource, Namespace\nfrom sqlalchemy.orm.collections import InstrumentedList\n\nfrom open_event.models.session import Session as SessionModel\nfrom open_event.models.track import Track as TrackModel\nfrom open_event.models.microlocation import Microlocation as MicrolocationModel\nfrom open_event.models.speaker import Speaker as SpeakerModel\n\nfrom .helpers.helpers import get_paginated_list, requires_auth, \\\n save_db_model, get_object_in_event\nfrom .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \\\n PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES\nfrom .helpers import custom_fields as fields\nfrom .helpers.special_fields import SessionLanguageField, SessionStateField\n\napi = Namespace('sessions', description='Sessions', path='/')\n\n# Create models\nSESSION_TRACK = api.model('SessionTrack', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n})\n\nSESSION_SPEAKER = api.model('SessionSpeaker', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n 'organisation': fields.String()\n})\n\nSESSION_MICROLOCATION = api.model('SessionMicrolocation', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n})\n\nSESSION = api.model('Session', {\n 'id': fields.Integer(required=True),\n 'title': fields.String(required=True),\n 'subtitle': fields.String(),\n 'short_abstract': fields.String(),\n 'long_abstract': fields.String(required=True),\n 'comments': fields.String(),\n 'start_time': fields.DateTime(required=True),\n 'end_time': fields.DateTime(required=True),\n 'track': fields.Nested(SESSION_TRACK, allow_null=True),\n 'speakers': fields.List(fields.Nested(SESSION_SPEAKER)),\n 'language': SessionLanguageField(),\n 'microlocation': fields.Nested(SESSION_MICROLOCATION, allow_null=True),\n 'slides': fields.String(),\n 'video': fields.String(),\n 'audio': fields.String(),\n 'signup_url': fields.Uri(),\n 'state': SessionStateField()\n})\n\nSESSION_PAGINATED = api.clone('SessionPaginated', PAGINATED_MODEL, {\n 'results': fields.List(fields.Nested(SESSION))\n})\n\nSESSION_POST = api.clone('SessionPost', SESSION, {\n 'track_id': fields.Integer(),\n 'speaker_ids': fields.List(fields.Integer()),\n 'microlocation_id': fields.Integer()\n})\ndel SESSION_POST['id']\ndel SESSION_POST['track']\ndel SESSION_POST['speakers']\ndel SESSION_POST['microlocation']\n\n\n# Create DAO\nclass SessionDAO(ServiceDAO):\n def _delete_fields(self, data):\n del data['speaker_ids']\n del data['track_id']\n del data['microlocation_id']\n data['start_time'] = SESSION_POST['start_time'].from_str(\n data['start_time'])\n data['end_time'] = SESSION_POST['end_time'].from_str(data['end_time'])\n return data\n\n def get_object(self, model, sid, event_id):\n \"\"\"\n returns object (model). 
Checks if object is in same event\n \"\"\"\n if sid is None:\n return None\n return get_object_in_event(model, sid, event_id)\n\n def fix_payload_post(self, event_id, data):\n \"\"\"\n Fixes payload of POST request\n \"\"\"\n data['track'] = self.get_object(TrackModel, data['track_id'], event_id)\n data['microlocation'] = self.get_object(MicrolocationModel, data['microlocation_id'], event_id)\n data['event_id'] = event_id\n data['speakers'] = InstrumentedList(\n SpeakerModel.query.get(_) for _ in data['speaker_ids']\n if self.get_object(SpeakerModel, _, event_id) is not None\n )\n data = self._delete_fields(data)\n return data\n\n def update(self, event_id, service_id, data):\n data = self.validate(data)\n data_copy = data.copy()\n data_copy = self.fix_payload_post(event_id, data_copy)\n data = self._delete_fields(data)\n obj = ServiceDAO.update(self, event_id, service_id, data)\n obj.track = data_copy['track']\n obj.microlocation = data_copy['microlocation']\n obj.speakers = data_copy['speakers']\n obj = save_db_model(obj, SessionModel.__name__, event_id)\n return obj\n\n def create(self, event_id, data, url):\n data = self.validate(data)\n payload = self.fix_payload_post(event_id, data)\n return ServiceDAO.create(self, event_id, payload, url, validate=False)\n\n\nDAO = SessionDAO(SessionModel, SESSION_POST)\n\n\[email protected]('/events/<int:event_id>/sessions/<int:session_id>')\[email protected](404, 'Session not found')\[email protected](400, 'Session does not belong to event')\nclass Session(Resource):\n @api.doc('get_session')\n @api.marshal_with(SESSION)\n def get(self, event_id, session_id):\n \"\"\"Fetch a session given its id\"\"\"\n return DAO.get(event_id, session_id)\n\n @requires_auth\n @api.doc('delete_session')\n @api.marshal_with(SESSION)\n def delete(self, event_id, session_id):\n \"\"\"Delete a session given its id\"\"\"\n return DAO.delete(event_id, session_id)\n\n @requires_auth\n @api.doc('update_session', responses=PUT_RESPONSES)\n @api.marshal_with(SESSION)\n @api.expect(SESSION_POST)\n def put(self, event_id, session_id):\n \"\"\"Update a session given its id\"\"\"\n return DAO.update(event_id, session_id, self.api.payload)\n\n\[email protected]('/events/<int:event_id>/sessions')\nclass SessionList(Resource):\n @api.doc('list_sessions')\n @api.marshal_list_with(SESSION)\n def get(self, event_id):\n \"\"\"List all sessions\"\"\"\n return DAO.list(event_id)\n\n @requires_auth\n @api.doc('create_session', responses=POST_RESPONSES)\n @api.marshal_with(SESSION)\n @api.expect(SESSION_POST)\n def post(self, event_id):\n \"\"\"Create a session\"\"\"\n return DAO.create(\n event_id,\n self.api.payload,\n self.api.url_for(self, event_id=event_id)\n )\n\n\[email protected]('/events/<int:event_id>/sessions/page')\nclass SessionListPaginated(Resource, PaginatedResourceBase):\n @api.doc('list_sessions_paginated', params=PAGE_PARAMS)\n @api.marshal_with(SESSION_PAGINATED)\n def get(self, event_id):\n \"\"\"List sessions in a paginated manner\"\"\"\n return get_paginated_list(\n SessionModel,\n self.api.url_for(self, event_id=event_id),\n args=self.parser.parse_args(),\n event_id=event_id\n )\n", "path": "open_event/api/sessions.py"}]}
| 2,277 | 183 |
gh_patches_debug_5308 | rasdani/github-patches | git_diff | beetbox__beets-1263 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve extractart to extract cover art of all albums for a query
The current `extractart` command of the `embedart` plugin is only able to extract the cover art of a single file. If the query the user specifies results in more than one item, only the art of the first one is extracted.
I would like to improve the command such the user queries for albums (like `ls -a` command) and one cover art is extracted per album and placed inside the album folder. The command should try all files of an album until an art is found. The command could be executed like `beet extractart -a <query>`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/embedart.py`
Content:
```
1 # This file is part of beets.
2 # Copyright 2015, Adrian Sampson.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """Allows beets to embed album art into file metadata."""
16 import os.path
17 import imghdr
18 import subprocess
19 import platform
20 from tempfile import NamedTemporaryFile
21
22 from beets import logging
23 from beets.plugins import BeetsPlugin
24 from beets import mediafile
25 from beets import ui
26 from beets.ui import decargs
27 from beets.util import syspath, normpath, displayable_path
28 from beets.util.artresizer import ArtResizer
29 from beets import config
30
31
32 class EmbedCoverArtPlugin(BeetsPlugin):
33 """Allows albumart to be embedded into the actual files.
34 """
35 def __init__(self):
36 super(EmbedCoverArtPlugin, self).__init__()
37 self.config.add({
38 'maxwidth': 0,
39 'auto': True,
40 'compare_threshold': 0,
41 'ifempty': False,
42 })
43
44 if self.config['maxwidth'].get(int) and not ArtResizer.shared.local:
45 self.config['maxwidth'] = 0
46 self._log.warn(u"ImageMagick or PIL not found; "
47 u"'maxwidth' option ignored")
48 if self.config['compare_threshold'].get(int) and not \
49 ArtResizer.shared.can_compare:
50 self.config['compare_threshold'] = 0
51 self._log.warn(u"ImageMagick 6.8.7 or higher not installed; "
52 u"'compare_threshold' option ignored")
53
54 self.register_listener('album_imported', self.album_imported)
55
56 def commands(self):
57 # Embed command.
58 embed_cmd = ui.Subcommand(
59 'embedart', help='embed image files into file metadata'
60 )
61 embed_cmd.parser.add_option(
62 '-f', '--file', metavar='PATH', help='the image file to embed'
63 )
64 maxwidth = self.config['maxwidth'].get(int)
65 compare_threshold = self.config['compare_threshold'].get(int)
66 ifempty = self.config['ifempty'].get(bool)
67
68 def embed_func(lib, opts, args):
69 if opts.file:
70 imagepath = normpath(opts.file)
71 if not os.path.isfile(syspath(imagepath)):
72 raise ui.UserError(u'image file {0} not found'.format(
73 displayable_path(imagepath)
74 ))
75 for item in lib.items(decargs(args)):
76 self.embed_item(item, imagepath, maxwidth, None,
77 compare_threshold, ifempty)
78 else:
79 for album in lib.albums(decargs(args)):
80 self.embed_album(album, maxwidth)
81
82 embed_cmd.func = embed_func
83
84 # Extract command.
85 extract_cmd = ui.Subcommand('extractart',
86 help='extract an image from file metadata')
87 extract_cmd.parser.add_option('-o', dest='outpath',
88 help='image output file')
89
90 def extract_func(lib, opts, args):
91 outpath = normpath(opts.outpath or (config['art_filename'].get()
92 or 'cover'))
93 item = lib.items(decargs(args)).get()
94 self.extract(outpath, item)
95 extract_cmd.func = extract_func
96
97 # Clear command.
98 clear_cmd = ui.Subcommand('clearart',
99 help='remove images from file metadata')
100
101 def clear_func(lib, opts, args):
102 self.clear(lib, decargs(args))
103 clear_cmd.func = clear_func
104
105 return [embed_cmd, extract_cmd, clear_cmd]
106
107 def album_imported(self, lib, album):
108 """Automatically embed art into imported albums.
109 """
110 if album.artpath and self.config['auto']:
111 max_width = self.config['maxwidth'].get(int)
112 self.embed_album(album, max_width, True)
113
114 def embed_item(self, item, imagepath, maxwidth=None, itempath=None,
115 compare_threshold=0, ifempty=False, as_album=False):
116 """Embed an image into the item's media file.
117 """
118 if compare_threshold:
119 if not self.check_art_similarity(item, imagepath,
120 compare_threshold):
121 self._log.warn(u'Image not similar; skipping.')
122 return
123 if ifempty:
124 art = self.get_art(item)
125 if not art:
126 pass
127 else:
128 self._log.debug(u'media file contained art already {0}',
129 displayable_path(imagepath))
130 return
131 if maxwidth and not as_album:
132 imagepath = self.resize_image(imagepath, maxwidth)
133
134 try:
135 self._log.debug(u'embedding {0}', displayable_path(imagepath))
136 item['images'] = [self._mediafile_image(imagepath, maxwidth)]
137 except IOError as exc:
138 self._log.error(u'could not read image file: {0}', exc)
139 else:
140 # We don't want to store the image in the database.
141 item.try_write(itempath)
142 del item['images']
143
144 def embed_album(self, album, maxwidth=None, quiet=False):
145 """Embed album art into all of the album's items.
146 """
147 imagepath = album.artpath
148 if not imagepath:
149 self._log.info(u'No album art present: {0} - {1}',
150 album.albumartist, album.album)
151 return
152 if not os.path.isfile(syspath(imagepath)):
153 self._log.error(u'Album art not found at {0}',
154 displayable_path(imagepath))
155 return
156 if maxwidth:
157 imagepath = self.resize_image(imagepath, maxwidth)
158
159 self._log.log(
160 logging.DEBUG if quiet else logging.INFO,
161 u'Embedding album art into {0.albumartist} - {0.album}.', album
162 )
163
164 for item in album.items():
165 thresh = self.config['compare_threshold'].get(int)
166 ifempty = self.config['ifempty'].get(bool)
167 self.embed_item(item, imagepath, maxwidth, None,
168 thresh, ifempty, as_album=True)
169
170 def resize_image(self, imagepath, maxwidth):
171 """Returns path to an image resized to maxwidth.
172 """
173 self._log.info(u'Resizing album art to {0} pixels wide', maxwidth)
174 imagepath = ArtResizer.shared.resize(maxwidth, syspath(imagepath))
175 return imagepath
176
177 def check_art_similarity(self, item, imagepath, compare_threshold):
178 """A boolean indicating if an image is similar to embedded item art.
179 """
180 with NamedTemporaryFile(delete=True) as f:
181 art = self.extract(f.name, item)
182
183 if art:
184 is_windows = platform.system() == "Windows"
185
186 # Converting images to grayscale tends to minimize the weight
187 # of colors in the diff score.
188 convert_proc = subprocess.Popen(
189 ['convert', syspath(imagepath), syspath(art),
190 '-colorspace', 'gray', 'MIFF:-'],
191 stdout=subprocess.PIPE,
192 close_fds=not is_windows,
193 )
194 compare_proc = subprocess.Popen(
195 ['compare', '-metric', 'PHASH', '-', 'null:'],
196 stdin=convert_proc.stdout,
197 stdout=subprocess.PIPE,
198 stderr=subprocess.PIPE,
199 close_fds=not is_windows,
200 )
201 convert_proc.stdout.close()
202
203 stdout, stderr = compare_proc.communicate()
204 if compare_proc.returncode:
205 if compare_proc.returncode != 1:
206 self._log.debug(u'IM phashes compare failed for {0}, '
207 u'{1}', displayable_path(imagepath),
208 displayable_path(art))
209 return
210 out_str = stderr
211 else:
212 out_str = stdout
213
214 try:
215 phash_diff = float(out_str)
216 except ValueError:
217 self._log.debug(u'IM output is not a number: {0!r}',
218 out_str)
219 return
220
221 self._log.info(u'compare PHASH score is {0}', phash_diff)
222 if phash_diff > compare_threshold:
223 return False
224
225 return True
226
227 def _mediafile_image(self, image_path, maxwidth=None):
228 """Return a `mediafile.Image` object for the path.
229 """
230
231 with open(syspath(image_path), 'rb') as f:
232 data = f.read()
233 return mediafile.Image(data, type=mediafile.ImageType.front)
234
235 def get_art(self, item):
236 # Extract the art.
237 try:
238 mf = mediafile.MediaFile(syspath(item.path))
239 except mediafile.UnreadableFileError as exc:
240 self._log.error(u'Could not extract art from {0}: {1}',
241 displayable_path(item.path), exc)
242 return
243
244 return mf.art
245
246 # 'extractart' command.
247
248 def extract(self, outpath, item):
249 if not item:
250 self._log.error(u'No item matches query.')
251 return
252
253 art = self.get_art(item)
254
255 if not art:
256 self._log.error(u'No album art present in {0} - {1}.',
257 item.artist, item.title)
258 return
259
260 # Add an extension to the filename.
261 ext = imghdr.what(None, h=art)
262 if not ext:
263 self._log.error(u'Unknown image type.')
264 return
265 outpath += '.' + ext
266
267 self._log.info(u'Extracting album art from: {0.artist} - {0.title} '
268 u'to: {1}', item, displayable_path(outpath))
269 with open(syspath(outpath), 'wb') as f:
270 f.write(art)
271 return outpath
272
273 # 'clearart' command.
274 def clear(self, lib, query):
275 self._log.info(u'Clearing album art from items:')
276 for item in lib.items(query):
277 self._log.info(u'{0} - {1}', item.artist, item.title)
278 try:
279 mf = mediafile.MediaFile(syspath(item.path),
280 config['id3v23'].get(bool))
281 except mediafile.UnreadableFileError as exc:
282 self._log.error(u'Could not clear art from {0}: {1}',
283 displayable_path(item.path), exc)
284 continue
285 del mf.art
286 mf.save()
287
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/beetsplug/embedart.py b/beetsplug/embedart.py
--- a/beetsplug/embedart.py
+++ b/beetsplug/embedart.py
@@ -89,8 +89,9 @@
def extract_func(lib, opts, args):
outpath = normpath(opts.outpath or 'cover')
- item = lib.items(decargs(args)).get()
- self.extract(outpath, item)
+ for item in lib.items(decargs(args)):
+ if self.extract(outpath, item):
+ return
extract_cmd.func = extract_func
# Clear command.
|
{"golden_diff": "diff --git a/beetsplug/embedart.py b/beetsplug/embedart.py\n--- a/beetsplug/embedart.py\n+++ b/beetsplug/embedart.py\n@@ -89,8 +89,9 @@\n \n def extract_func(lib, opts, args):\n outpath = normpath(opts.outpath or 'cover')\n- item = lib.items(decargs(args)).get()\n- self.extract(outpath, item)\n+ for item in lib.items(decargs(args)):\n+ if self.extract(outpath, item):\n+ return\n extract_cmd.func = extract_func\n \n # Clear command.\n", "issue": "Improve extractart to extract cover art of all albums for a query\nThe current `extractart` command of the `embedart` plugin is only able to extract the cover art of a single file. If the query the user specifies results in more than one item, only the art of the first one is extracted.\n\nI would like to improve the command such the user queries for albums (like `ls -a` command) and one cover art is extracted per album and placed inside the album folder. The command should try all files of an album until an art is found. The command could be executed like `beet extractart -a <query>`.\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2015, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Allows beets to embed album art into file metadata.\"\"\"\nimport os.path\nimport imghdr\nimport subprocess\nimport platform\nfrom tempfile import NamedTemporaryFile\n\nfrom beets import logging\nfrom beets.plugins import BeetsPlugin\nfrom beets import mediafile\nfrom beets import ui\nfrom beets.ui import decargs\nfrom beets.util import syspath, normpath, displayable_path\nfrom beets.util.artresizer import ArtResizer\nfrom beets import config\n\n\nclass EmbedCoverArtPlugin(BeetsPlugin):\n \"\"\"Allows albumart to be embedded into the actual files.\n \"\"\"\n def __init__(self):\n super(EmbedCoverArtPlugin, self).__init__()\n self.config.add({\n 'maxwidth': 0,\n 'auto': True,\n 'compare_threshold': 0,\n 'ifempty': False,\n })\n\n if self.config['maxwidth'].get(int) and not ArtResizer.shared.local:\n self.config['maxwidth'] = 0\n self._log.warn(u\"ImageMagick or PIL not found; \"\n u\"'maxwidth' option ignored\")\n if self.config['compare_threshold'].get(int) and not \\\n ArtResizer.shared.can_compare:\n self.config['compare_threshold'] = 0\n self._log.warn(u\"ImageMagick 6.8.7 or higher not installed; \"\n u\"'compare_threshold' option ignored\")\n\n self.register_listener('album_imported', self.album_imported)\n\n def commands(self):\n # Embed command.\n embed_cmd = ui.Subcommand(\n 'embedart', help='embed image files into file metadata'\n )\n embed_cmd.parser.add_option(\n '-f', '--file', metavar='PATH', help='the image file to embed'\n )\n maxwidth = self.config['maxwidth'].get(int)\n compare_threshold = self.config['compare_threshold'].get(int)\n ifempty = self.config['ifempty'].get(bool)\n\n def embed_func(lib, opts, args):\n if opts.file:\n imagepath = normpath(opts.file)\n if not os.path.isfile(syspath(imagepath)):\n raise ui.UserError(u'image file {0} 
not found'.format(\n displayable_path(imagepath)\n ))\n for item in lib.items(decargs(args)):\n self.embed_item(item, imagepath, maxwidth, None,\n compare_threshold, ifempty)\n else:\n for album in lib.albums(decargs(args)):\n self.embed_album(album, maxwidth)\n\n embed_cmd.func = embed_func\n\n # Extract command.\n extract_cmd = ui.Subcommand('extractart',\n help='extract an image from file metadata')\n extract_cmd.parser.add_option('-o', dest='outpath',\n help='image output file')\n\n def extract_func(lib, opts, args):\n outpath = normpath(opts.outpath or (config['art_filename'].get()\n or 'cover'))\n item = lib.items(decargs(args)).get()\n self.extract(outpath, item)\n extract_cmd.func = extract_func\n\n # Clear command.\n clear_cmd = ui.Subcommand('clearart',\n help='remove images from file metadata')\n\n def clear_func(lib, opts, args):\n self.clear(lib, decargs(args))\n clear_cmd.func = clear_func\n\n return [embed_cmd, extract_cmd, clear_cmd]\n\n def album_imported(self, lib, album):\n \"\"\"Automatically embed art into imported albums.\n \"\"\"\n if album.artpath and self.config['auto']:\n max_width = self.config['maxwidth'].get(int)\n self.embed_album(album, max_width, True)\n\n def embed_item(self, item, imagepath, maxwidth=None, itempath=None,\n compare_threshold=0, ifempty=False, as_album=False):\n \"\"\"Embed an image into the item's media file.\n \"\"\"\n if compare_threshold:\n if not self.check_art_similarity(item, imagepath,\n compare_threshold):\n self._log.warn(u'Image not similar; skipping.')\n return\n if ifempty:\n art = self.get_art(item)\n if not art:\n pass\n else:\n self._log.debug(u'media file contained art already {0}',\n displayable_path(imagepath))\n return\n if maxwidth and not as_album:\n imagepath = self.resize_image(imagepath, maxwidth)\n\n try:\n self._log.debug(u'embedding {0}', displayable_path(imagepath))\n item['images'] = [self._mediafile_image(imagepath, maxwidth)]\n except IOError as exc:\n self._log.error(u'could not read image file: {0}', exc)\n else:\n # We don't want to store the image in the database.\n item.try_write(itempath)\n del item['images']\n\n def embed_album(self, album, maxwidth=None, quiet=False):\n \"\"\"Embed album art into all of the album's items.\n \"\"\"\n imagepath = album.artpath\n if not imagepath:\n self._log.info(u'No album art present: {0} - {1}',\n album.albumartist, album.album)\n return\n if not os.path.isfile(syspath(imagepath)):\n self._log.error(u'Album art not found at {0}',\n displayable_path(imagepath))\n return\n if maxwidth:\n imagepath = self.resize_image(imagepath, maxwidth)\n\n self._log.log(\n logging.DEBUG if quiet else logging.INFO,\n u'Embedding album art into {0.albumartist} - {0.album}.', album\n )\n\n for item in album.items():\n thresh = self.config['compare_threshold'].get(int)\n ifempty = self.config['ifempty'].get(bool)\n self.embed_item(item, imagepath, maxwidth, None,\n thresh, ifempty, as_album=True)\n\n def resize_image(self, imagepath, maxwidth):\n \"\"\"Returns path to an image resized to maxwidth.\n \"\"\"\n self._log.info(u'Resizing album art to {0} pixels wide', maxwidth)\n imagepath = ArtResizer.shared.resize(maxwidth, syspath(imagepath))\n return imagepath\n\n def check_art_similarity(self, item, imagepath, compare_threshold):\n \"\"\"A boolean indicating if an image is similar to embedded item art.\n \"\"\"\n with NamedTemporaryFile(delete=True) as f:\n art = self.extract(f.name, item)\n\n if art:\n is_windows = platform.system() == \"Windows\"\n\n # Converting images to grayscale 
tends to minimize the weight\n # of colors in the diff score.\n convert_proc = subprocess.Popen(\n ['convert', syspath(imagepath), syspath(art),\n '-colorspace', 'gray', 'MIFF:-'],\n stdout=subprocess.PIPE,\n close_fds=not is_windows,\n )\n compare_proc = subprocess.Popen(\n ['compare', '-metric', 'PHASH', '-', 'null:'],\n stdin=convert_proc.stdout,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n close_fds=not is_windows,\n )\n convert_proc.stdout.close()\n\n stdout, stderr = compare_proc.communicate()\n if compare_proc.returncode:\n if compare_proc.returncode != 1:\n self._log.debug(u'IM phashes compare failed for {0}, '\n u'{1}', displayable_path(imagepath),\n displayable_path(art))\n return\n out_str = stderr\n else:\n out_str = stdout\n\n try:\n phash_diff = float(out_str)\n except ValueError:\n self._log.debug(u'IM output is not a number: {0!r}',\n out_str)\n return\n\n self._log.info(u'compare PHASH score is {0}', phash_diff)\n if phash_diff > compare_threshold:\n return False\n\n return True\n\n def _mediafile_image(self, image_path, maxwidth=None):\n \"\"\"Return a `mediafile.Image` object for the path.\n \"\"\"\n\n with open(syspath(image_path), 'rb') as f:\n data = f.read()\n return mediafile.Image(data, type=mediafile.ImageType.front)\n\n def get_art(self, item):\n # Extract the art.\n try:\n mf = mediafile.MediaFile(syspath(item.path))\n except mediafile.UnreadableFileError as exc:\n self._log.error(u'Could not extract art from {0}: {1}',\n displayable_path(item.path), exc)\n return\n\n return mf.art\n\n # 'extractart' command.\n\n def extract(self, outpath, item):\n if not item:\n self._log.error(u'No item matches query.')\n return\n\n art = self.get_art(item)\n\n if not art:\n self._log.error(u'No album art present in {0} - {1}.',\n item.artist, item.title)\n return\n\n # Add an extension to the filename.\n ext = imghdr.what(None, h=art)\n if not ext:\n self._log.error(u'Unknown image type.')\n return\n outpath += '.' 
+ ext\n\n self._log.info(u'Extracting album art from: {0.artist} - {0.title} '\n u'to: {1}', item, displayable_path(outpath))\n with open(syspath(outpath), 'wb') as f:\n f.write(art)\n return outpath\n\n # 'clearart' command.\n def clear(self, lib, query):\n self._log.info(u'Clearing album art from items:')\n for item in lib.items(query):\n self._log.info(u'{0} - {1}', item.artist, item.title)\n try:\n mf = mediafile.MediaFile(syspath(item.path),\n config['id3v23'].get(bool))\n except mediafile.UnreadableFileError as exc:\n self._log.error(u'Could not clear art from {0}: {1}',\n displayable_path(item.path), exc)\n continue\n del mf.art\n mf.save()\n", "path": "beetsplug/embedart.py"}], "after_files": [{"content": "# This file is part of beets.\n# Copyright 2015, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Allows beets to embed album art into file metadata.\"\"\"\nimport os.path\nimport imghdr\nimport subprocess\nimport platform\nfrom tempfile import NamedTemporaryFile\n\nfrom beets import logging\nfrom beets.plugins import BeetsPlugin\nfrom beets import mediafile\nfrom beets import ui\nfrom beets.ui import decargs\nfrom beets.util import syspath, normpath, displayable_path\nfrom beets.util.artresizer import ArtResizer\nfrom beets import config\n\n\nclass EmbedCoverArtPlugin(BeetsPlugin):\n \"\"\"Allows albumart to be embedded into the actual files.\n \"\"\"\n def __init__(self):\n super(EmbedCoverArtPlugin, self).__init__()\n self.config.add({\n 'maxwidth': 0,\n 'auto': True,\n 'compare_threshold': 0,\n 'ifempty': False,\n })\n\n if self.config['maxwidth'].get(int) and not ArtResizer.shared.local:\n self.config['maxwidth'] = 0\n self._log.warn(u\"ImageMagick or PIL not found; \"\n u\"'maxwidth' option ignored\")\n if self.config['compare_threshold'].get(int) and not \\\n ArtResizer.shared.can_compare:\n self.config['compare_threshold'] = 0\n self._log.warn(u\"ImageMagick 6.8.7 or higher not installed; \"\n u\"'compare_threshold' option ignored\")\n\n self.register_listener('album_imported', self.album_imported)\n\n def commands(self):\n # Embed command.\n embed_cmd = ui.Subcommand(\n 'embedart', help='embed image files into file metadata'\n )\n embed_cmd.parser.add_option(\n '-f', '--file', metavar='PATH', help='the image file to embed'\n )\n maxwidth = self.config['maxwidth'].get(int)\n compare_threshold = self.config['compare_threshold'].get(int)\n ifempty = self.config['ifempty'].get(bool)\n\n def embed_func(lib, opts, args):\n if opts.file:\n imagepath = normpath(opts.file)\n if not os.path.isfile(syspath(imagepath)):\n raise ui.UserError(u'image file {0} not found'.format(\n displayable_path(imagepath)\n ))\n for item in lib.items(decargs(args)):\n self.embed_item(item, imagepath, maxwidth, None,\n compare_threshold, ifempty)\n else:\n for album in lib.albums(decargs(args)):\n self.embed_album(album, maxwidth)\n\n embed_cmd.func = embed_func\n\n # Extract command.\n extract_cmd = ui.Subcommand('extractart',\n 
help='extract an image from file metadata')\n extract_cmd.parser.add_option('-o', dest='outpath',\n help='image output file')\n\n def extract_func(lib, opts, args):\n outpath = normpath(opts.outpath or 'cover')\n for item in lib.items(decargs(args)):\n if self.extract(outpath, item):\n return\n extract_cmd.func = extract_func\n\n # Clear command.\n clear_cmd = ui.Subcommand('clearart',\n help='remove images from file metadata')\n\n def clear_func(lib, opts, args):\n self.clear(lib, decargs(args))\n clear_cmd.func = clear_func\n\n return [embed_cmd, extract_cmd, clear_cmd]\n\n def album_imported(self, lib, album):\n \"\"\"Automatically embed art into imported albums.\n \"\"\"\n if album.artpath and self.config['auto']:\n max_width = self.config['maxwidth'].get(int)\n self.embed_album(album, max_width, True)\n\n def embed_item(self, item, imagepath, maxwidth=None, itempath=None,\n compare_threshold=0, ifempty=False, as_album=False):\n \"\"\"Embed an image into the item's media file.\n \"\"\"\n if compare_threshold:\n if not self.check_art_similarity(item, imagepath,\n compare_threshold):\n self._log.warn(u'Image not similar; skipping.')\n return\n if ifempty:\n art = self.get_art(item)\n if not art:\n pass\n else:\n self._log.debug(u'media file contained art already {0}',\n displayable_path(imagepath))\n return\n if maxwidth and not as_album:\n imagepath = self.resize_image(imagepath, maxwidth)\n\n try:\n self._log.debug(u'embedding {0}', displayable_path(imagepath))\n item['images'] = [self._mediafile_image(imagepath, maxwidth)]\n except IOError as exc:\n self._log.error(u'could not read image file: {0}', exc)\n else:\n # We don't want to store the image in the database.\n item.try_write(itempath)\n del item['images']\n\n def embed_album(self, album, maxwidth=None, quiet=False):\n \"\"\"Embed album art into all of the album's items.\n \"\"\"\n imagepath = album.artpath\n if not imagepath:\n self._log.info(u'No album art present: {0} - {1}',\n album.albumartist, album.album)\n return\n if not os.path.isfile(syspath(imagepath)):\n self._log.error(u'Album art not found at {0}',\n displayable_path(imagepath))\n return\n if maxwidth:\n imagepath = self.resize_image(imagepath, maxwidth)\n\n self._log.log(\n logging.DEBUG if quiet else logging.INFO,\n u'Embedding album art into {0.albumartist} - {0.album}.', album\n )\n\n for item in album.items():\n thresh = self.config['compare_threshold'].get(int)\n ifempty = self.config['ifempty'].get(bool)\n self.embed_item(item, imagepath, maxwidth, None,\n thresh, ifempty, as_album=True)\n\n def resize_image(self, imagepath, maxwidth):\n \"\"\"Returns path to an image resized to maxwidth.\n \"\"\"\n self._log.info(u'Resizing album art to {0} pixels wide', maxwidth)\n imagepath = ArtResizer.shared.resize(maxwidth, syspath(imagepath))\n return imagepath\n\n def check_art_similarity(self, item, imagepath, compare_threshold):\n \"\"\"A boolean indicating if an image is similar to embedded item art.\n \"\"\"\n with NamedTemporaryFile(delete=True) as f:\n art = self.extract(f.name, item)\n\n if art:\n is_windows = platform.system() == \"Windows\"\n\n # Converting images to grayscale tends to minimize the weight\n # of colors in the diff score.\n convert_proc = subprocess.Popen(\n ['convert', syspath(imagepath), syspath(art),\n '-colorspace', 'gray', 'MIFF:-'],\n stdout=subprocess.PIPE,\n close_fds=not is_windows,\n )\n compare_proc = subprocess.Popen(\n ['compare', '-metric', 'PHASH', '-', 'null:'],\n stdin=convert_proc.stdout,\n stdout=subprocess.PIPE,\n 
stderr=subprocess.PIPE,\n close_fds=not is_windows,\n )\n convert_proc.stdout.close()\n\n stdout, stderr = compare_proc.communicate()\n if compare_proc.returncode:\n if compare_proc.returncode != 1:\n self._log.debug(u'IM phashes compare failed for {0}, '\n u'{1}', displayable_path(imagepath),\n displayable_path(art))\n return\n out_str = stderr\n else:\n out_str = stdout\n\n try:\n phash_diff = float(out_str)\n except ValueError:\n self._log.debug(u'IM output is not a number: {0!r}',\n out_str)\n return\n\n self._log.info(u'compare PHASH score is {0}', phash_diff)\n if phash_diff > compare_threshold:\n return False\n\n return True\n\n def _mediafile_image(self, image_path, maxwidth=None):\n \"\"\"Return a `mediafile.Image` object for the path.\n \"\"\"\n\n with open(syspath(image_path), 'rb') as f:\n data = f.read()\n return mediafile.Image(data, type=mediafile.ImageType.front)\n\n def get_art(self, item):\n # Extract the art.\n try:\n mf = mediafile.MediaFile(syspath(item.path))\n except mediafile.UnreadableFileError as exc:\n self._log.error(u'Could not extract art from {0}: {1}',\n displayable_path(item.path), exc)\n return\n\n return mf.art\n\n # 'extractart' command.\n\n def extract(self, outpath, item):\n if not item:\n self._log.error(u'No item matches query.')\n return\n\n art = self.get_art(item)\n\n if not art:\n self._log.error(u'No album art present in {0} - {1}.',\n item.artist, item.title)\n return\n\n # Add an extension to the filename.\n ext = imghdr.what(None, h=art)\n if not ext:\n self._log.error(u'Unknown image type.')\n return\n outpath += '.' + ext\n\n self._log.info(u'Extracting album art from: {0.artist} - {0.title} '\n u'to: {1}', item, displayable_path(outpath))\n with open(syspath(outpath), 'wb') as f:\n f.write(art)\n return outpath\n\n # 'clearart' command.\n def clear(self, lib, query):\n self._log.info(u'Clearing album art from items:')\n for item in lib.items(query):\n self._log.info(u'{0} - {1}', item.artist, item.title)\n try:\n mf = mediafile.MediaFile(syspath(item.path),\n config['id3v23'].get(bool))\n except mediafile.UnreadableFileError as exc:\n self._log.error(u'Could not clear art from {0}: {1}',\n displayable_path(item.path), exc)\n continue\n del mf.art\n mf.save()\n", "path": "beetsplug/embedart.py"}]}
| 3,458 | 131 |
gh_patches_debug_22829 | rasdani/github-patches | git_diff | pyodide__pyodide-891 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
JsProxy does not handle Error correctly
Example:
```python
from js import Error
e = Error.new("Hi")
raise e
```
Raises: `TypeError: exceptions must derive from BaseException`.
This came up in https://github.com/iodide-project/pyodide/pull/880#pullrequestreview-555341317. I will try to fix this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyodide-py/pyodide/__init__.py`
Content:
```
1 from ._base import open_url, eval_code, find_imports, as_nested_list
2 from .console import get_completions
3
4 __version__ = "0.15.0"
5
6 __all__ = ["open_url", "eval_code", "find_imports", "as_nested_list", "get_completions"]
7
```
Path: `src/pyodide-py/pyodide/_base.py`
Content:
```
1 """
2 A library of helper utilities for connecting Python to the browser environment.
3 """
4
5 import ast
6 from io import StringIO
7 from textwrap import dedent
8 from typing import Dict, List, Any
9
10
11 def open_url(url: str) -> StringIO:
12 """
13 Fetches a given URL
14
15 Parameters
16 ----------
17 url
18 URL to fetch
19
20 Returns
21 -------
22 a io.StringIO object with the contents of the URL.
23 """
24 from js import XMLHttpRequest
25
26 req = XMLHttpRequest.new()
27 req.open("GET", url, False)
28 req.send(None)
29 return StringIO(req.response)
30
31
32 def eval_code(code: str, ns: Dict[str, Any]) -> None:
33 """Runs a code string
34
35 The last part of the provided code may be an expression.
36
37 Parameters
38 ----------
39 code
40 the Python code to run.
41 ns
42 `locals()` or `globals()` context where to execute code.
43
44 Returns
45 -------
46 None
47 """
48 # handle mis-indented input from multi-line strings
49 code = dedent(code)
50
51 mod = ast.parse(code)
52 if len(mod.body) == 0:
53 return None
54
55 expr: Any
56 if isinstance(mod.body[-1], ast.Expr):
57 expr = ast.Expression(mod.body[-1].value)
58 del mod.body[-1]
59 else:
60 expr = None
61
62 if len(mod.body):
63 exec(compile(mod, "<exec>", mode="exec"), ns, ns)
64 if expr is not None:
65 return eval(compile(expr, "<eval>", mode="eval"), ns, ns)
66 else:
67 return None
68
69
70 def find_imports(code: str) -> List[str]:
71 """
72 Finds the imports in a string of code
73
74 Parameters
75 ----------
76 code
77 the Python code to run.
78
79 Returns
80 -------
81 A list of module names that are imported in the code.
82
83 Examples
84 --------
85 >>> from pyodide import find_imports
86 >>> code = "import numpy as np; import scipy.stats"
87 >>> find_imports(code)
88 ['numpy', 'scipy']
89 """
90 # handle mis-indented input from multi-line strings
91 code = dedent(code)
92
93 mod = ast.parse(code)
94 imports = set()
95 for node in ast.walk(mod):
96 if isinstance(node, ast.Import):
97 for name in node.names:
98 node_name = name.name
99 imports.add(node_name.split(".")[0])
100 elif isinstance(node, ast.ImportFrom):
101 module_name = node.module
102 if module_name is None:
103 continue
104 imports.add(module_name.split(".")[0])
105 return list(sorted(imports))
106
107
108 def as_nested_list(obj) -> List:
109 """Convert a nested JS array to nested Python list.
110
111 Assumes a Javascript object is made of (possibly nested) arrays and
112 converts them to nested Python lists.
113
114 Parameters
115 ----------
116 obj
117 a Javscript object made of nested arrays.
118
119 Returns
120 -------
121 Python list, or a nested Python list
122 """
123 try:
124 it = iter(obj)
125 return [as_nested_list(x) for x in it]
126 except TypeError:
127 return obj
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pyodide-py/pyodide/__init__.py b/src/pyodide-py/pyodide/__init__.py
--- a/src/pyodide-py/pyodide/__init__.py
+++ b/src/pyodide-py/pyodide/__init__.py
@@ -1,6 +1,13 @@
-from ._base import open_url, eval_code, find_imports, as_nested_list
+from ._base import open_url, eval_code, find_imports, as_nested_list, JsException
from .console import get_completions
__version__ = "0.15.0"
-__all__ = ["open_url", "eval_code", "find_imports", "as_nested_list", "get_completions"]
+__all__ = [
+ "open_url",
+ "eval_code",
+ "find_imports",
+ "as_nested_list",
+ "get_completions",
+ "JsException",
+]
diff --git a/src/pyodide-py/pyodide/_base.py b/src/pyodide-py/pyodide/_base.py
--- a/src/pyodide-py/pyodide/_base.py
+++ b/src/pyodide-py/pyodide/_base.py
@@ -1,6 +1,8 @@
"""
A library of helper utilities for connecting Python to the browser environment.
"""
+# Added by C:
+# JsException (from jsproxy.c)
import ast
from io import StringIO
@@ -8,6 +10,16 @@
from typing import Dict, List, Any
+class JsException(Exception):
+ """
+ A wrapper around a Javascript Error to allow the Error to be thrown in Python.
+ """
+
+ # This gets overwritten in jsproxy.c, it is just here for autodoc and humans
+ # reading this file.
+ pass
+
+
def open_url(url: str) -> StringIO:
"""
Fetches a given URL
|
{"golden_diff": "diff --git a/src/pyodide-py/pyodide/__init__.py b/src/pyodide-py/pyodide/__init__.py\n--- a/src/pyodide-py/pyodide/__init__.py\n+++ b/src/pyodide-py/pyodide/__init__.py\n@@ -1,6 +1,13 @@\n-from ._base import open_url, eval_code, find_imports, as_nested_list\n+from ._base import open_url, eval_code, find_imports, as_nested_list, JsException\n from .console import get_completions\n \n __version__ = \"0.15.0\"\n \n-__all__ = [\"open_url\", \"eval_code\", \"find_imports\", \"as_nested_list\", \"get_completions\"]\n+__all__ = [\n+ \"open_url\",\n+ \"eval_code\",\n+ \"find_imports\",\n+ \"as_nested_list\",\n+ \"get_completions\",\n+ \"JsException\",\n+]\ndiff --git a/src/pyodide-py/pyodide/_base.py b/src/pyodide-py/pyodide/_base.py\n--- a/src/pyodide-py/pyodide/_base.py\n+++ b/src/pyodide-py/pyodide/_base.py\n@@ -1,6 +1,8 @@\n \"\"\"\n A library of helper utilities for connecting Python to the browser environment.\n \"\"\"\n+# Added by C:\n+# JsException (from jsproxy.c)\n \n import ast\n from io import StringIO\n@@ -8,6 +10,16 @@\n from typing import Dict, List, Any\n \n \n+class JsException(Exception):\n+ \"\"\"\n+ A wrapper around a Javascript Error to allow the Error to be thrown in Python.\n+ \"\"\"\n+\n+ # This gets overwritten in jsproxy.c, it is just here for autodoc and humans\n+ # reading this file.\n+ pass\n+\n+\n def open_url(url: str) -> StringIO:\n \"\"\"\n Fetches a given URL\n", "issue": "JsProxy does not handle Error correctly\nExample:\r\n```python\r\nfrom js import Error\r\ne = Error.new(\"Hi\")\r\nraise e\r\n```\r\nRaises: `TypeError: exceptions must derive from BaseException`.\r\nThis came up in https://github.com/iodide-project/pyodide/pull/880#pullrequestreview-555341317. I will try to fix this.\n", "before_files": [{"content": "from ._base import open_url, eval_code, find_imports, as_nested_list\nfrom .console import get_completions\n\n__version__ = \"0.15.0\"\n\n__all__ = [\"open_url\", \"eval_code\", \"find_imports\", \"as_nested_list\", \"get_completions\"]\n", "path": "src/pyodide-py/pyodide/__init__.py"}, {"content": "\"\"\"\nA library of helper utilities for connecting Python to the browser environment.\n\"\"\"\n\nimport ast\nfrom io import StringIO\nfrom textwrap import dedent\nfrom typing import Dict, List, Any\n\n\ndef open_url(url: str) -> StringIO:\n \"\"\"\n Fetches a given URL\n\n Parameters\n ----------\n url\n URL to fetch\n\n Returns\n -------\n a io.StringIO object with the contents of the URL.\n \"\"\"\n from js import XMLHttpRequest\n\n req = XMLHttpRequest.new()\n req.open(\"GET\", url, False)\n req.send(None)\n return StringIO(req.response)\n\n\ndef eval_code(code: str, ns: Dict[str, Any]) -> None:\n \"\"\"Runs a code string\n\n The last part of the provided code may be an expression.\n\n Parameters\n ----------\n code\n the Python code to run.\n ns\n `locals()` or `globals()` context where to execute code.\n\n Returns\n -------\n None\n \"\"\"\n # handle mis-indented input from multi-line strings\n code = dedent(code)\n\n mod = ast.parse(code)\n if len(mod.body) == 0:\n return None\n\n expr: Any\n if isinstance(mod.body[-1], ast.Expr):\n expr = ast.Expression(mod.body[-1].value)\n del mod.body[-1]\n else:\n expr = None\n\n if len(mod.body):\n exec(compile(mod, \"<exec>\", mode=\"exec\"), ns, ns)\n if expr is not None:\n return eval(compile(expr, \"<eval>\", mode=\"eval\"), ns, ns)\n else:\n return None\n\n\ndef find_imports(code: str) -> List[str]:\n \"\"\"\n Finds the imports in a string of code\n\n Parameters\n ----------\n code\n 
the Python code to run.\n\n Returns\n -------\n A list of module names that are imported in the code.\n\n Examples\n --------\n >>> from pyodide import find_imports\n >>> code = \"import numpy as np; import scipy.stats\"\n >>> find_imports(code)\n ['numpy', 'scipy']\n \"\"\"\n # handle mis-indented input from multi-line strings\n code = dedent(code)\n\n mod = ast.parse(code)\n imports = set()\n for node in ast.walk(mod):\n if isinstance(node, ast.Import):\n for name in node.names:\n node_name = name.name\n imports.add(node_name.split(\".\")[0])\n elif isinstance(node, ast.ImportFrom):\n module_name = node.module\n if module_name is None:\n continue\n imports.add(module_name.split(\".\")[0])\n return list(sorted(imports))\n\n\ndef as_nested_list(obj) -> List:\n \"\"\"Convert a nested JS array to nested Python list.\n\n Assumes a Javascript object is made of (possibly nested) arrays and\n converts them to nested Python lists.\n\n Parameters\n ----------\n obj\n a Javscript object made of nested arrays.\n\n Returns\n -------\n Python list, or a nested Python list\n \"\"\"\n try:\n it = iter(obj)\n return [as_nested_list(x) for x in it]\n except TypeError:\n return obj\n", "path": "src/pyodide-py/pyodide/_base.py"}], "after_files": [{"content": "from ._base import open_url, eval_code, find_imports, as_nested_list, JsException\nfrom .console import get_completions\n\n__version__ = \"0.15.0\"\n\n__all__ = [\n \"open_url\",\n \"eval_code\",\n \"find_imports\",\n \"as_nested_list\",\n \"get_completions\",\n \"JsException\",\n]\n", "path": "src/pyodide-py/pyodide/__init__.py"}, {"content": "\"\"\"\nA library of helper utilities for connecting Python to the browser environment.\n\"\"\"\n# Added by C:\n# JsException (from jsproxy.c)\n\nimport ast\nfrom io import StringIO\nfrom textwrap import dedent\nfrom typing import Dict, List, Any\n\n\nclass JsException(Exception):\n \"\"\"\n A wrapper around a Javascript Error to allow the Error to be thrown in Python.\n \"\"\"\n\n # This gets overwritten in jsproxy.c, it is just here for autodoc and humans\n # reading this file.\n pass\n\n\ndef open_url(url: str) -> StringIO:\n \"\"\"\n Fetches a given URL\n\n Parameters\n ----------\n url\n URL to fetch\n\n Returns\n -------\n a io.StringIO object with the contents of the URL.\n \"\"\"\n from js import XMLHttpRequest\n\n req = XMLHttpRequest.new()\n req.open(\"GET\", url, False)\n req.send(None)\n return StringIO(req.response)\n\n\ndef eval_code(code: str, ns: Dict[str, Any]) -> None:\n \"\"\"Runs a code string\n\n The last part of the provided code may be an expression.\n\n Parameters\n ----------\n code\n the Python code to run.\n ns\n `locals()` or `globals()` context where to execute code.\n\n Returns\n -------\n None\n \"\"\"\n # handle mis-indented input from multi-line strings\n code = dedent(code)\n\n mod = ast.parse(code)\n if len(mod.body) == 0:\n return None\n\n expr: Any\n if isinstance(mod.body[-1], ast.Expr):\n expr = ast.Expression(mod.body[-1].value)\n del mod.body[-1]\n else:\n expr = None\n\n if len(mod.body):\n exec(compile(mod, \"<exec>\", mode=\"exec\"), ns, ns)\n if expr is not None:\n return eval(compile(expr, \"<eval>\", mode=\"eval\"), ns, ns)\n else:\n return None\n\n\ndef find_imports(code: str) -> List[str]:\n \"\"\"\n Finds the imports in a string of code\n\n Parameters\n ----------\n code\n the Python code to run.\n\n Returns\n -------\n A list of module names that are imported in the code.\n\n Examples\n --------\n >>> from pyodide import find_imports\n >>> code = \"import numpy 
as np; import scipy.stats\"\n >>> find_imports(code)\n ['numpy', 'scipy']\n \"\"\"\n # handle mis-indented input from multi-line strings\n code = dedent(code)\n\n mod = ast.parse(code)\n imports = set()\n for node in ast.walk(mod):\n if isinstance(node, ast.Import):\n for name in node.names:\n node_name = name.name\n imports.add(node_name.split(\".\")[0])\n elif isinstance(node, ast.ImportFrom):\n module_name = node.module\n if module_name is None:\n continue\n imports.add(module_name.split(\".\")[0])\n return list(sorted(imports))\n\n\ndef as_nested_list(obj) -> List:\n \"\"\"Convert a nested JS array to nested Python list.\n\n Assumes a Javascript object is made of (possibly nested) arrays and\n converts them to nested Python lists.\n\n Parameters\n ----------\n obj\n a Javscript object made of nested arrays.\n\n Returns\n -------\n Python list, or a nested Python list\n \"\"\"\n try:\n it = iter(obj)\n return [as_nested_list(x) for x in it]\n except TypeError:\n return obj\n", "path": "src/pyodide-py/pyodide/_base.py"}]}
| 1,422 | 429 |
gh_patches_debug_20992
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-4997
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Minor bug in export orders as CSV task
**Describe the bug**
Minor bug in exporting orders as CSV. Trying to decode string which throws a type error:
```python
if order.user and order.user.first_name else '',
str(order.user.last_name, 'utf-8')
```
**To Reproduce**
Steps to reproduce the behavior:
1. Start export orders as csv task
2. Make sure user is linked for that particular order
3. See the error
**Expected behavior**
It should work properly
**Stacktrace**
```sh
File "/home/mayank/OpenSource/open-event-server/app/api/helpers/tasks.py", line 238, in export_order_csv_task
content = export_orders_csv(orders)
File "/home/mayank/OpenSource/open-event-server/app/api/helpers/csv_jobs_util.py", line 13, in export_orders_csv
if order.user and order.user.first_name else '',
TypeError: decoding str is not supported
```
**Additional details (please complete the following information):**
- OS: Ubuntu 18.04
- Python Version 3.6
- `HEAD` Commit hash [e.g. `4629c62`]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/helpers/csv_jobs_util.py`
Content:
```
1 def export_orders_csv(orders):
2 headers = ['Order#', 'Order Date', 'Status', 'Payment Type', 'Total Amount', 'Quantity',
3 'Discount Code', 'First Name', 'Last Name', 'Email']
4
5 rows = [headers]
6 for order in orders:
7 if order.status != "deleted":
8 column = [str(order.get_invoice_number()), str(order.created_at) if order.created_at else '',
9 str(order.status) if order.status else '', str(order.paid_via) if order.paid_via else '',
10 str(order.amount) if order.amount else '', str(order.get_tickets_count()),
11 str(order.discount_code.code) if order.discount_code else '',
12 str(order.user.first_name, 'utf-8')
13 if order.user and order.user.first_name else '',
14 str(order.user.last_name, 'utf-8')
15 if order.user and order.user.last_name else '',
16 str(order.user._email, 'utf-8') if order.user and order.user._email else '']
17 rows.append(column)
18
19 return rows
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/api/helpers/csv_jobs_util.py b/app/api/helpers/csv_jobs_util.py
--- a/app/api/helpers/csv_jobs_util.py
+++ b/app/api/helpers/csv_jobs_util.py
@@ -9,11 +9,11 @@
str(order.status) if order.status else '', str(order.paid_via) if order.paid_via else '',
str(order.amount) if order.amount else '', str(order.get_tickets_count()),
str(order.discount_code.code) if order.discount_code else '',
- str(order.user.first_name, 'utf-8')
+ str(order.user.first_name)
if order.user and order.user.first_name else '',
- str(order.user.last_name, 'utf-8')
+ str(order.user.last_name)
if order.user and order.user.last_name else '',
- str(order.user._email, 'utf-8') if order.user and order.user._email else '']
+ str(order.user.email) if order.user and order.user.email else '']
rows.append(column)
return rows
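For reference, a minimal standalone sketch of the failure mode this patch removes (plain Python, hypothetical value): the two-argument form of `str()` only decodes bytes, so calling it on an attribute that is already a `str` raises exactly the TypeError from the stack trace, while the one-argument form used in the fix is a harmless identity conversion.

```python
first_name = "Jane"            # ORM string columns already arrive as str, not bytes

try:
    str(first_name, "utf-8")   # shape of the pre-patch call
except TypeError as exc:
    print(exc)                 # decoding str is not supported

print(str(first_name))         # shape of the patched call: returns "Jane" unchanged
print(str(b"Jane", "utf-8"))   # the two-argument form is only valid for bytes input
```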
|
{"golden_diff": "diff --git a/app/api/helpers/csv_jobs_util.py b/app/api/helpers/csv_jobs_util.py\n--- a/app/api/helpers/csv_jobs_util.py\n+++ b/app/api/helpers/csv_jobs_util.py\n@@ -9,11 +9,11 @@\n str(order.status) if order.status else '', str(order.paid_via) if order.paid_via else '',\n str(order.amount) if order.amount else '', str(order.get_tickets_count()),\n str(order.discount_code.code) if order.discount_code else '',\n- str(order.user.first_name, 'utf-8')\n+ str(order.user.first_name)\n if order.user and order.user.first_name else '',\n- str(order.user.last_name, 'utf-8')\n+ str(order.user.last_name)\n if order.user and order.user.last_name else '',\n- str(order.user._email, 'utf-8') if order.user and order.user._email else '']\n+ str(order.user.email) if order.user and order.user.email else '']\n rows.append(column)\n \n return rows\n", "issue": "Minor bug in export orders as CSV task\n**Describe the bug**\r\nMinor bug in exporting orders as CSV. Trying to decode string which throws a type error:\r\n```python\r\n if order.user and order.user.first_name else '',\r\n str(order.user.last_name, 'utf-8')\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Start export orders as csv task \r\n2. Make sure user is linked for that particular order\r\n3. See the error\r\n\r\n**Expected behavior**\r\nIt should work properly\r\n\r\n**Stacktrace**\r\n```sh\r\n File \"/home/mayank/OpenSource/open-event-server/app/api/helpers/tasks.py\", line 238, in export_order_csv_task\r\n content = export_orders_csv(orders)\r\n File \"/home/mayank/OpenSource/open-event-server/app/api/helpers/csv_jobs_util.py\", line 13, in export_orders_csv\r\n if order.user and order.user.first_name else '',\r\nTypeError: decoding str is not supported\r\n```\r\n**Additional details (please complete the following information):**\r\n - OS: Ubuntu 18.04\r\n - Python Version 3.6\r\n - `HEAD` Commit hash [e.g. 
`4629c62`]\n", "before_files": [{"content": "def export_orders_csv(orders):\n headers = ['Order#', 'Order Date', 'Status', 'Payment Type', 'Total Amount', 'Quantity',\n 'Discount Code', 'First Name', 'Last Name', 'Email']\n\n rows = [headers]\n for order in orders:\n if order.status != \"deleted\":\n column = [str(order.get_invoice_number()), str(order.created_at) if order.created_at else '',\n str(order.status) if order.status else '', str(order.paid_via) if order.paid_via else '',\n str(order.amount) if order.amount else '', str(order.get_tickets_count()),\n str(order.discount_code.code) if order.discount_code else '',\n str(order.user.first_name, 'utf-8')\n if order.user and order.user.first_name else '',\n str(order.user.last_name, 'utf-8')\n if order.user and order.user.last_name else '',\n str(order.user._email, 'utf-8') if order.user and order.user._email else '']\n rows.append(column)\n\n return rows\n", "path": "app/api/helpers/csv_jobs_util.py"}], "after_files": [{"content": "def export_orders_csv(orders):\n headers = ['Order#', 'Order Date', 'Status', 'Payment Type', 'Total Amount', 'Quantity',\n 'Discount Code', 'First Name', 'Last Name', 'Email']\n\n rows = [headers]\n for order in orders:\n if order.status != \"deleted\":\n column = [str(order.get_invoice_number()), str(order.created_at) if order.created_at else '',\n str(order.status) if order.status else '', str(order.paid_via) if order.paid_via else '',\n str(order.amount) if order.amount else '', str(order.get_tickets_count()),\n str(order.discount_code.code) if order.discount_code else '',\n str(order.user.first_name)\n if order.user and order.user.first_name else '',\n str(order.user.last_name)\n if order.user and order.user.last_name else '',\n str(order.user.email) if order.user and order.user.email else '']\n rows.append(column)\n\n return rows\n", "path": "app/api/helpers/csv_jobs_util.py"}]}
| 767 | 219 |
gh_patches_debug_15044
|
rasdani/github-patches
|
git_diff
|
vyperlang__vyper-2077
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Arrays as default arguments do not properly copy to memory
### Version Information
* vyper Version: latest master
* OS: linux
* Python Version: 3.8.2
### What's your issue about?
When an array used as a default argument is copied to a memory variable, every item in the memory variable has the value of the first item in the array. For example:
```python
@public
@constant
def foo(a: int128[2] = [1, 2]) -> int128[2]:
b: int128[2] = a
return b
```
Calls to `foo` produce the following output:
```python
>>> x.foo()
(1, 1)
>>> x.foo([4, 6])
(4, 4)
```
* The same effect is observed with longer arrays and different types.
* Returning directly from calldata gives the expected result.
* Writing to storage gives the expected result.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vyper/parser/function_definitions/parse_external_function.py`
Content:
```
1 import ast
2 from typing import Any, List, Union
3
4 from vyper.exceptions import FunctionDeclarationException
5 from vyper.parser.arg_clamps import make_arg_clamper
6 from vyper.parser.context import Context, VariableRecord
7 from vyper.parser.expr import Expr
8 from vyper.parser.function_definitions.utils import (
9 get_default_names_to_set,
10 get_nonreentrant_lock,
11 get_sig_statements,
12 )
13 from vyper.parser.global_context import GlobalContext
14 from vyper.parser.lll_node import LLLnode
15 from vyper.parser.parser_utils import getpos, make_setter
16 from vyper.parser.stmt import parse_body
17 from vyper.signatures import sig_utils
18 from vyper.signatures.function_signature import FunctionSignature
19 from vyper.types.types import ByteArrayLike, get_size_of_type
20 from vyper.utils import MemoryPositions
21
22
23 def get_external_arg_copier(
24 total_size: int, memory_dest: int, offset: Union[int, List[Any]] = 4
25 ) -> List[Any]:
26 """
27 Generate argument copier.
28
29 :param total_size: total memory size to copy
30 :param memory_dest: base memory address to start from
31 :param offset: starting offset, used for ByteArrays
32 """
33 copier = ["calldatacopy", memory_dest, offset, total_size]
34 return copier
35
36
37 def validate_external_function(
38 code: ast.FunctionDef, sig: FunctionSignature, global_ctx: GlobalContext
39 ) -> None:
40 """ Validate external function definition. """
41
42 # __init__ function may not have defaults.
43 if sig.is_initializer() and sig.total_default_args > 0:
44 raise FunctionDeclarationException(
45 "__init__ function may not have default parameters.", code
46 )
47
48 # Check for duplicate variables with globals
49 for arg in sig.args:
50 if arg.name in global_ctx._globals:
51 raise FunctionDeclarationException(
52 "Variable name duplicated between " "function arguments and globals: " + arg.name,
53 code,
54 )
55
56
57 def parse_external_function(
58 code: ast.FunctionDef, sig: FunctionSignature, context: Context
59 ) -> LLLnode:
60 """
61 Parse a external function (FuncDef), and produce full function body.
62
63 :param sig: the FuntionSignature
64 :param code: ast of function
65 :return: full sig compare & function body
66 """
67
68 validate_external_function(code, sig, context.global_ctx)
69
70 # Get nonreentrant lock
71 nonreentrant_pre, nonreentrant_post = get_nonreentrant_lock(sig, context.global_ctx)
72
73 clampers = []
74
75 # Generate copiers
76 copier: List[Any] = ["pass"]
77 if not len(sig.base_args):
78 copier = ["pass"]
79 elif sig.name == "__init__":
80 copier = ["codecopy", MemoryPositions.RESERVED_MEMORY, "~codelen", sig.base_copy_size]
81 context.memory_allocator.increase_memory(sig.max_copy_size)
82 clampers.append(copier)
83
84 # Add asserts for payable and internal
85 if sig.mutability != "payable":
86 clampers.append(["assert", ["iszero", "callvalue"]])
87
88 # Fill variable positions
89 default_args_start_pos = len(sig.base_args)
90 for i, arg in enumerate(sig.args):
91 if i < len(sig.base_args):
92 clampers.append(
93 make_arg_clamper(
94 arg.pos,
95 context.memory_allocator.get_next_memory_position(),
96 arg.typ,
97 sig.name == "__init__",
98 )
99 )
100 if isinstance(arg.typ, ByteArrayLike):
101 mem_pos, _ = context.memory_allocator.increase_memory(32 * get_size_of_type(arg.typ))
102 context.vars[arg.name] = VariableRecord(arg.name, mem_pos, arg.typ, False)
103 else:
104 if sig.name == "__init__":
105 context.vars[arg.name] = VariableRecord(
106 arg.name, MemoryPositions.RESERVED_MEMORY + arg.pos, arg.typ, False,
107 )
108 elif i >= default_args_start_pos: # default args need to be allocated in memory.
109 default_arg_pos, _ = context.memory_allocator.increase_memory(32)
110 context.vars[arg.name] = VariableRecord(
111 name=arg.name, pos=default_arg_pos, typ=arg.typ, mutable=False,
112 )
113 else:
114 context.vars[arg.name] = VariableRecord(
115 name=arg.name, pos=4 + arg.pos, typ=arg.typ, mutable=False, location="calldata"
116 )
117
118 # Create "clampers" (input well-formedness checkers)
119 # Return function body
120 if sig.name == "__init__":
121 o = LLLnode.from_list(
122 ["seq"] + clampers + [parse_body(code.body, context)], # type: ignore
123 pos=getpos(code),
124 )
125 # Is default function.
126 elif sig.is_default_func():
127 if len(sig.args) > 0:
128 raise FunctionDeclarationException(
129 "Default function may not receive any arguments.", code
130 )
131 o = LLLnode.from_list(
132 ["seq"] + clampers + [parse_body(code.body, context)], # type: ignore
133 pos=getpos(code),
134 )
135 # Is a normal function.
136 else:
137 # Function with default parameters.
138 if sig.total_default_args > 0:
139 function_routine = f"{sig.name}_{sig.method_id}"
140 default_sigs = sig_utils.generate_default_arg_sigs(
141 code, context.sigs, context.global_ctx
142 )
143 sig_chain: List[Any] = ["seq"]
144
145 for default_sig in default_sigs:
146 sig_compare, _ = get_sig_statements(default_sig, getpos(code))
147
148 # Populate unset default variables
149 set_defaults = []
150 for arg_name in get_default_names_to_set(sig, default_sig):
151 value = Expr(sig.default_values[arg_name], context).lll_node
152 var = context.vars[arg_name]
153 left = LLLnode.from_list(
154 var.pos,
155 typ=var.typ,
156 location="memory",
157 pos=getpos(code),
158 mutable=var.mutable,
159 )
160 set_defaults.append(make_setter(left, value, "memory", pos=getpos(code)))
161
162 current_sig_arg_names = {x.name for x in default_sig.args}
163 base_arg_names = {arg.name for arg in sig.base_args}
164 copier_arg_count = len(default_sig.args) - len(sig.base_args)
165 copier_arg_names = list(current_sig_arg_names - base_arg_names)
166
167 # Order copier_arg_names, this is very important.
168 copier_arg_names = [x.name for x in default_sig.args if x.name in copier_arg_names]
169
170 # Variables to be populated from calldata/stack.
171 default_copiers: List[Any] = []
172 if copier_arg_count > 0:
173 # Get map of variables in calldata, with thier offsets
174 offset = 4
175 calldata_offset_map = {}
176 for arg in default_sig.args:
177 calldata_offset_map[arg.name] = offset
178 offset += (
179 32
180 if isinstance(arg.typ, ByteArrayLike)
181 else get_size_of_type(arg.typ) * 32
182 )
183
184 # Copy default parameters from calldata.
185 for arg_name in copier_arg_names:
186 var = context.vars[arg_name]
187 calldata_offset = calldata_offset_map[arg_name]
188
189 # Add clampers.
190 default_copiers.append(
191 make_arg_clamper(calldata_offset - 4, var.pos, var.typ,)
192 )
193 # Add copying code.
194 _offset: Union[int, List[Any]] = calldata_offset
195 if isinstance(var.typ, ByteArrayLike):
196 _offset = ["add", 4, ["calldataload", calldata_offset]]
197 default_copiers.append(
198 get_external_arg_copier(
199 memory_dest=var.pos, total_size=var.size * 32, offset=_offset,
200 )
201 )
202
203 default_copiers.append(0) # for over arching seq, POP
204
205 sig_chain.append(
206 [
207 "if",
208 sig_compare,
209 [
210 "seq",
211 ["seq"] + set_defaults if set_defaults else ["pass"],
212 ["seq_unchecked"] + default_copiers if default_copiers else ["pass"],
213 ["goto", function_routine],
214 ],
215 ]
216 )
217
218 # Function with default parameters.
219 o = LLLnode.from_list(
220 [
221 "seq",
222 sig_chain,
223 [
224 "if",
225 0, # can only be jumped into
226 [
227 "seq",
228 ["label", function_routine],
229 ["seq"]
230 + nonreentrant_pre
231 + clampers
232 + [parse_body(c, context) for c in code.body]
233 + nonreentrant_post
234 + [["stop"]],
235 ],
236 ],
237 ],
238 typ=None,
239 pos=getpos(code),
240 )
241
242 else:
243 # Function without default parameters.
244 sig_compare, _ = get_sig_statements(sig, getpos(code))
245 o = LLLnode.from_list(
246 [
247 "if",
248 sig_compare,
249 ["seq"]
250 + nonreentrant_pre
251 + clampers
252 + [parse_body(c, context) for c in code.body]
253 + nonreentrant_post
254 + [["stop"]],
255 ],
256 typ=None,
257 pos=getpos(code),
258 )
259 return o
260
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vyper/parser/function_definitions/parse_external_function.py b/vyper/parser/function_definitions/parse_external_function.py
--- a/vyper/parser/function_definitions/parse_external_function.py
+++ b/vyper/parser/function_definitions/parse_external_function.py
@@ -106,7 +106,8 @@
arg.name, MemoryPositions.RESERVED_MEMORY + arg.pos, arg.typ, False,
)
elif i >= default_args_start_pos: # default args need to be allocated in memory.
- default_arg_pos, _ = context.memory_allocator.increase_memory(32)
+ type_size = get_size_of_type(arg.typ) * 32
+ default_arg_pos, _ = context.memory_allocator.increase_memory(type_size)
context.vars[arg.name] = VariableRecord(
name=arg.name, pos=default_arg_pos, typ=arg.typ, mutable=False,
)
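A rough illustration of the allocation change, written as plain Python rather than compiler code (the constant and element counts are assumptions for the example): before the patch every defaulted argument received a single 32-byte word no matter what its type was, so an `int128[2]` default had only part of its storage reserved; the patch sizes the reservation as `get_size_of_type(arg.typ) * 32`, i.e. one word per element.

```python
WORD = 32  # bytes per EVM word

def reserved_bytes(num_words: int, patched: bool) -> int:
    """Bytes set aside for a defaulted argument whose type spans `num_words`
    32-byte words (e.g. int128[2] -> 2 words)."""
    return num_words * WORD if patched else WORD

print(reserved_bytes(2, patched=False))  # 32 -> only one word reserved for the array
print(reserved_bytes(2, patched=True))   # 64 -> one word per element, as the patch does
```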
|
{"golden_diff": "diff --git a/vyper/parser/function_definitions/parse_external_function.py b/vyper/parser/function_definitions/parse_external_function.py\n--- a/vyper/parser/function_definitions/parse_external_function.py\n+++ b/vyper/parser/function_definitions/parse_external_function.py\n@@ -106,7 +106,8 @@\n arg.name, MemoryPositions.RESERVED_MEMORY + arg.pos, arg.typ, False,\n )\n elif i >= default_args_start_pos: # default args need to be allocated in memory.\n- default_arg_pos, _ = context.memory_allocator.increase_memory(32)\n+ type_size = get_size_of_type(arg.typ) * 32\n+ default_arg_pos, _ = context.memory_allocator.increase_memory(type_size)\n context.vars[arg.name] = VariableRecord(\n name=arg.name, pos=default_arg_pos, typ=arg.typ, mutable=False,\n )\n", "issue": "Arrays as default arguments do not properly copy to memory\n### Version Information\r\n\r\n* vyper Version: latest master\r\n* OS: linux\r\n* Python Version: 3.8.2\r\n\r\n### What's your issue about?\r\nWhen an array used as a default argument is copied to a memory variable, every item in the memory variable has the value of the first item in the array. For example:\r\n\r\n```python\r\n@public\r\n@constant\r\ndef foo(a: int128[2] = [1, 2]) -> int128[2]:\r\n b: int128[2] = a\r\n return b\r\n```\r\n\r\nCalls to `foo` produce the following output:\r\n\r\n```python\r\n>>> x.foo()\r\n(1, 1)\r\n>>> x.foo([4, 6])\r\n(4, 4)\r\n```\r\n\r\n* The same effect is observed with longer arrays and different types.\r\n* Returning directly from calldata gives the expected result.\r\n* Writing to storage gives the expected result.\r\n\n", "before_files": [{"content": "import ast\nfrom typing import Any, List, Union\n\nfrom vyper.exceptions import FunctionDeclarationException\nfrom vyper.parser.arg_clamps import make_arg_clamper\nfrom vyper.parser.context import Context, VariableRecord\nfrom vyper.parser.expr import Expr\nfrom vyper.parser.function_definitions.utils import (\n get_default_names_to_set,\n get_nonreentrant_lock,\n get_sig_statements,\n)\nfrom vyper.parser.global_context import GlobalContext\nfrom vyper.parser.lll_node import LLLnode\nfrom vyper.parser.parser_utils import getpos, make_setter\nfrom vyper.parser.stmt import parse_body\nfrom vyper.signatures import sig_utils\nfrom vyper.signatures.function_signature import FunctionSignature\nfrom vyper.types.types import ByteArrayLike, get_size_of_type\nfrom vyper.utils import MemoryPositions\n\n\ndef get_external_arg_copier(\n total_size: int, memory_dest: int, offset: Union[int, List[Any]] = 4\n) -> List[Any]:\n \"\"\"\n Generate argument copier.\n\n :param total_size: total memory size to copy\n :param memory_dest: base memory address to start from\n :param offset: starting offset, used for ByteArrays\n \"\"\"\n copier = [\"calldatacopy\", memory_dest, offset, total_size]\n return copier\n\n\ndef validate_external_function(\n code: ast.FunctionDef, sig: FunctionSignature, global_ctx: GlobalContext\n) -> None:\n \"\"\" Validate external function definition. 
\"\"\"\n\n # __init__ function may not have defaults.\n if sig.is_initializer() and sig.total_default_args > 0:\n raise FunctionDeclarationException(\n \"__init__ function may not have default parameters.\", code\n )\n\n # Check for duplicate variables with globals\n for arg in sig.args:\n if arg.name in global_ctx._globals:\n raise FunctionDeclarationException(\n \"Variable name duplicated between \" \"function arguments and globals: \" + arg.name,\n code,\n )\n\n\ndef parse_external_function(\n code: ast.FunctionDef, sig: FunctionSignature, context: Context\n) -> LLLnode:\n \"\"\"\n Parse a external function (FuncDef), and produce full function body.\n\n :param sig: the FuntionSignature\n :param code: ast of function\n :return: full sig compare & function body\n \"\"\"\n\n validate_external_function(code, sig, context.global_ctx)\n\n # Get nonreentrant lock\n nonreentrant_pre, nonreentrant_post = get_nonreentrant_lock(sig, context.global_ctx)\n\n clampers = []\n\n # Generate copiers\n copier: List[Any] = [\"pass\"]\n if not len(sig.base_args):\n copier = [\"pass\"]\n elif sig.name == \"__init__\":\n copier = [\"codecopy\", MemoryPositions.RESERVED_MEMORY, \"~codelen\", sig.base_copy_size]\n context.memory_allocator.increase_memory(sig.max_copy_size)\n clampers.append(copier)\n\n # Add asserts for payable and internal\n if sig.mutability != \"payable\":\n clampers.append([\"assert\", [\"iszero\", \"callvalue\"]])\n\n # Fill variable positions\n default_args_start_pos = len(sig.base_args)\n for i, arg in enumerate(sig.args):\n if i < len(sig.base_args):\n clampers.append(\n make_arg_clamper(\n arg.pos,\n context.memory_allocator.get_next_memory_position(),\n arg.typ,\n sig.name == \"__init__\",\n )\n )\n if isinstance(arg.typ, ByteArrayLike):\n mem_pos, _ = context.memory_allocator.increase_memory(32 * get_size_of_type(arg.typ))\n context.vars[arg.name] = VariableRecord(arg.name, mem_pos, arg.typ, False)\n else:\n if sig.name == \"__init__\":\n context.vars[arg.name] = VariableRecord(\n arg.name, MemoryPositions.RESERVED_MEMORY + arg.pos, arg.typ, False,\n )\n elif i >= default_args_start_pos: # default args need to be allocated in memory.\n default_arg_pos, _ = context.memory_allocator.increase_memory(32)\n context.vars[arg.name] = VariableRecord(\n name=arg.name, pos=default_arg_pos, typ=arg.typ, mutable=False,\n )\n else:\n context.vars[arg.name] = VariableRecord(\n name=arg.name, pos=4 + arg.pos, typ=arg.typ, mutable=False, location=\"calldata\"\n )\n\n # Create \"clampers\" (input well-formedness checkers)\n # Return function body\n if sig.name == \"__init__\":\n o = LLLnode.from_list(\n [\"seq\"] + clampers + [parse_body(code.body, context)], # type: ignore\n pos=getpos(code),\n )\n # Is default function.\n elif sig.is_default_func():\n if len(sig.args) > 0:\n raise FunctionDeclarationException(\n \"Default function may not receive any arguments.\", code\n )\n o = LLLnode.from_list(\n [\"seq\"] + clampers + [parse_body(code.body, context)], # type: ignore\n pos=getpos(code),\n )\n # Is a normal function.\n else:\n # Function with default parameters.\n if sig.total_default_args > 0:\n function_routine = f\"{sig.name}_{sig.method_id}\"\n default_sigs = sig_utils.generate_default_arg_sigs(\n code, context.sigs, context.global_ctx\n )\n sig_chain: List[Any] = [\"seq\"]\n\n for default_sig in default_sigs:\n sig_compare, _ = get_sig_statements(default_sig, getpos(code))\n\n # Populate unset default variables\n set_defaults = []\n for arg_name in get_default_names_to_set(sig, 
default_sig):\n value = Expr(sig.default_values[arg_name], context).lll_node\n var = context.vars[arg_name]\n left = LLLnode.from_list(\n var.pos,\n typ=var.typ,\n location=\"memory\",\n pos=getpos(code),\n mutable=var.mutable,\n )\n set_defaults.append(make_setter(left, value, \"memory\", pos=getpos(code)))\n\n current_sig_arg_names = {x.name for x in default_sig.args}\n base_arg_names = {arg.name for arg in sig.base_args}\n copier_arg_count = len(default_sig.args) - len(sig.base_args)\n copier_arg_names = list(current_sig_arg_names - base_arg_names)\n\n # Order copier_arg_names, this is very important.\n copier_arg_names = [x.name for x in default_sig.args if x.name in copier_arg_names]\n\n # Variables to be populated from calldata/stack.\n default_copiers: List[Any] = []\n if copier_arg_count > 0:\n # Get map of variables in calldata, with thier offsets\n offset = 4\n calldata_offset_map = {}\n for arg in default_sig.args:\n calldata_offset_map[arg.name] = offset\n offset += (\n 32\n if isinstance(arg.typ, ByteArrayLike)\n else get_size_of_type(arg.typ) * 32\n )\n\n # Copy default parameters from calldata.\n for arg_name in copier_arg_names:\n var = context.vars[arg_name]\n calldata_offset = calldata_offset_map[arg_name]\n\n # Add clampers.\n default_copiers.append(\n make_arg_clamper(calldata_offset - 4, var.pos, var.typ,)\n )\n # Add copying code.\n _offset: Union[int, List[Any]] = calldata_offset\n if isinstance(var.typ, ByteArrayLike):\n _offset = [\"add\", 4, [\"calldataload\", calldata_offset]]\n default_copiers.append(\n get_external_arg_copier(\n memory_dest=var.pos, total_size=var.size * 32, offset=_offset,\n )\n )\n\n default_copiers.append(0) # for over arching seq, POP\n\n sig_chain.append(\n [\n \"if\",\n sig_compare,\n [\n \"seq\",\n [\"seq\"] + set_defaults if set_defaults else [\"pass\"],\n [\"seq_unchecked\"] + default_copiers if default_copiers else [\"pass\"],\n [\"goto\", function_routine],\n ],\n ]\n )\n\n # Function with default parameters.\n o = LLLnode.from_list(\n [\n \"seq\",\n sig_chain,\n [\n \"if\",\n 0, # can only be jumped into\n [\n \"seq\",\n [\"label\", function_routine],\n [\"seq\"]\n + nonreentrant_pre\n + clampers\n + [parse_body(c, context) for c in code.body]\n + nonreentrant_post\n + [[\"stop\"]],\n ],\n ],\n ],\n typ=None,\n pos=getpos(code),\n )\n\n else:\n # Function without default parameters.\n sig_compare, _ = get_sig_statements(sig, getpos(code))\n o = LLLnode.from_list(\n [\n \"if\",\n sig_compare,\n [\"seq\"]\n + nonreentrant_pre\n + clampers\n + [parse_body(c, context) for c in code.body]\n + nonreentrant_post\n + [[\"stop\"]],\n ],\n typ=None,\n pos=getpos(code),\n )\n return o\n", "path": "vyper/parser/function_definitions/parse_external_function.py"}], "after_files": [{"content": "import ast\nfrom typing import Any, List, Union\n\nfrom vyper.exceptions import FunctionDeclarationException\nfrom vyper.parser.arg_clamps import make_arg_clamper\nfrom vyper.parser.context import Context, VariableRecord\nfrom vyper.parser.expr import Expr\nfrom vyper.parser.function_definitions.utils import (\n get_default_names_to_set,\n get_nonreentrant_lock,\n get_sig_statements,\n)\nfrom vyper.parser.global_context import GlobalContext\nfrom vyper.parser.lll_node import LLLnode\nfrom vyper.parser.parser_utils import getpos, make_setter\nfrom vyper.parser.stmt import parse_body\nfrom vyper.signatures import sig_utils\nfrom vyper.signatures.function_signature import FunctionSignature\nfrom vyper.types.types import ByteArrayLike, get_size_of_type\nfrom 
vyper.utils import MemoryPositions\n\n\ndef get_external_arg_copier(\n total_size: int, memory_dest: int, offset: Union[int, List[Any]] = 4\n) -> List[Any]:\n \"\"\"\n Generate argument copier.\n\n :param total_size: total memory size to copy\n :param memory_dest: base memory address to start from\n :param offset: starting offset, used for ByteArrays\n \"\"\"\n copier = [\"calldatacopy\", memory_dest, offset, total_size]\n return copier\n\n\ndef validate_external_function(\n code: ast.FunctionDef, sig: FunctionSignature, global_ctx: GlobalContext\n) -> None:\n \"\"\" Validate external function definition. \"\"\"\n\n # __init__ function may not have defaults.\n if sig.is_initializer() and sig.total_default_args > 0:\n raise FunctionDeclarationException(\n \"__init__ function may not have default parameters.\", code\n )\n\n # Check for duplicate variables with globals\n for arg in sig.args:\n if arg.name in global_ctx._globals:\n raise FunctionDeclarationException(\n \"Variable name duplicated between \" \"function arguments and globals: \" + arg.name,\n code,\n )\n\n\ndef parse_external_function(\n code: ast.FunctionDef, sig: FunctionSignature, context: Context\n) -> LLLnode:\n \"\"\"\n Parse a external function (FuncDef), and produce full function body.\n\n :param sig: the FuntionSignature\n :param code: ast of function\n :return: full sig compare & function body\n \"\"\"\n\n validate_external_function(code, sig, context.global_ctx)\n\n # Get nonreentrant lock\n nonreentrant_pre, nonreentrant_post = get_nonreentrant_lock(sig, context.global_ctx)\n\n clampers = []\n\n # Generate copiers\n copier: List[Any] = [\"pass\"]\n if not len(sig.base_args):\n copier = [\"pass\"]\n elif sig.name == \"__init__\":\n copier = [\"codecopy\", MemoryPositions.RESERVED_MEMORY, \"~codelen\", sig.base_copy_size]\n context.memory_allocator.increase_memory(sig.max_copy_size)\n clampers.append(copier)\n\n # Add asserts for payable and internal\n if sig.mutability != \"payable\":\n clampers.append([\"assert\", [\"iszero\", \"callvalue\"]])\n\n # Fill variable positions\n default_args_start_pos = len(sig.base_args)\n for i, arg in enumerate(sig.args):\n if i < len(sig.base_args):\n clampers.append(\n make_arg_clamper(\n arg.pos,\n context.memory_allocator.get_next_memory_position(),\n arg.typ,\n sig.name == \"__init__\",\n )\n )\n if isinstance(arg.typ, ByteArrayLike):\n mem_pos, _ = context.memory_allocator.increase_memory(32 * get_size_of_type(arg.typ))\n context.vars[arg.name] = VariableRecord(arg.name, mem_pos, arg.typ, False)\n else:\n if sig.name == \"__init__\":\n context.vars[arg.name] = VariableRecord(\n arg.name, MemoryPositions.RESERVED_MEMORY + arg.pos, arg.typ, False,\n )\n elif i >= default_args_start_pos: # default args need to be allocated in memory.\n type_size = get_size_of_type(arg.typ) * 32\n default_arg_pos, _ = context.memory_allocator.increase_memory(type_size)\n context.vars[arg.name] = VariableRecord(\n name=arg.name, pos=default_arg_pos, typ=arg.typ, mutable=False,\n )\n else:\n context.vars[arg.name] = VariableRecord(\n name=arg.name, pos=4 + arg.pos, typ=arg.typ, mutable=False, location=\"calldata\"\n )\n\n # Create \"clampers\" (input well-formedness checkers)\n # Return function body\n if sig.name == \"__init__\":\n o = LLLnode.from_list(\n [\"seq\"] + clampers + [parse_body(code.body, context)], # type: ignore\n pos=getpos(code),\n )\n # Is default function.\n elif sig.is_default_func():\n if len(sig.args) > 0:\n raise FunctionDeclarationException(\n \"Default function may not receive 
any arguments.\", code\n )\n o = LLLnode.from_list(\n [\"seq\"] + clampers + [parse_body(code.body, context)], # type: ignore\n pos=getpos(code),\n )\n # Is a normal function.\n else:\n # Function with default parameters.\n if sig.total_default_args > 0:\n function_routine = f\"{sig.name}_{sig.method_id}\"\n default_sigs = sig_utils.generate_default_arg_sigs(\n code, context.sigs, context.global_ctx\n )\n sig_chain: List[Any] = [\"seq\"]\n\n for default_sig in default_sigs:\n sig_compare, _ = get_sig_statements(default_sig, getpos(code))\n\n # Populate unset default variables\n set_defaults = []\n for arg_name in get_default_names_to_set(sig, default_sig):\n value = Expr(sig.default_values[arg_name], context).lll_node\n var = context.vars[arg_name]\n left = LLLnode.from_list(\n var.pos,\n typ=var.typ,\n location=\"memory\",\n pos=getpos(code),\n mutable=var.mutable,\n )\n set_defaults.append(make_setter(left, value, \"memory\", pos=getpos(code)))\n\n current_sig_arg_names = {x.name for x in default_sig.args}\n base_arg_names = {arg.name for arg in sig.base_args}\n copier_arg_count = len(default_sig.args) - len(sig.base_args)\n copier_arg_names = list(current_sig_arg_names - base_arg_names)\n\n # Order copier_arg_names, this is very important.\n copier_arg_names = [x.name for x in default_sig.args if x.name in copier_arg_names]\n\n # Variables to be populated from calldata/stack.\n default_copiers: List[Any] = []\n if copier_arg_count > 0:\n # Get map of variables in calldata, with thier offsets\n offset = 4\n calldata_offset_map = {}\n for arg in default_sig.args:\n calldata_offset_map[arg.name] = offset\n offset += (\n 32\n if isinstance(arg.typ, ByteArrayLike)\n else get_size_of_type(arg.typ) * 32\n )\n\n # Copy default parameters from calldata.\n for arg_name in copier_arg_names:\n var = context.vars[arg_name]\n calldata_offset = calldata_offset_map[arg_name]\n\n # Add clampers.\n default_copiers.append(\n make_arg_clamper(calldata_offset - 4, var.pos, var.typ,)\n )\n # Add copying code.\n _offset: Union[int, List[Any]] = calldata_offset\n if isinstance(var.typ, ByteArrayLike):\n _offset = [\"add\", 4, [\"calldataload\", calldata_offset]]\n default_copiers.append(\n get_external_arg_copier(\n memory_dest=var.pos, total_size=var.size * 32, offset=_offset,\n )\n )\n\n default_copiers.append(0) # for over arching seq, POP\n\n sig_chain.append(\n [\n \"if\",\n sig_compare,\n [\n \"seq\",\n [\"seq\"] + set_defaults if set_defaults else [\"pass\"],\n [\"seq_unchecked\"] + default_copiers if default_copiers else [\"pass\"],\n [\"goto\", function_routine],\n ],\n ]\n )\n\n # Function with default parameters.\n o = LLLnode.from_list(\n [\n \"seq\",\n sig_chain,\n [\n \"if\",\n 0, # can only be jumped into\n [\n \"seq\",\n [\"label\", function_routine],\n [\"seq\"]\n + nonreentrant_pre\n + clampers\n + [parse_body(c, context) for c in code.body]\n + nonreentrant_post\n + [[\"stop\"]],\n ],\n ],\n ],\n typ=None,\n pos=getpos(code),\n )\n\n else:\n # Function without default parameters.\n sig_compare, _ = get_sig_statements(sig, getpos(code))\n o = LLLnode.from_list(\n [\n \"if\",\n sig_compare,\n [\"seq\"]\n + nonreentrant_pre\n + clampers\n + [parse_body(c, context) for c in code.body]\n + nonreentrant_post\n + [[\"stop\"]],\n ],\n typ=None,\n pos=getpos(code),\n )\n return o\n", "path": "vyper/parser/function_definitions/parse_external_function.py"}]}
| 3,166 | 189 |
gh_patches_debug_39511
|
rasdani/github-patches
|
git_diff
|
pytorch__vision-3330
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Utility to draw Semantic Segmentation Masks
## 🚀 Feature
We recently added utility to draw bounding boxes, which works really well with detection models. #2785 #2556
It might be nice to draw segmentation masks which we obtain from instance segmentation models.
## Motivation
Same as bounding box utils. It is very useful to have these. It reduces the dependence of users over other plotting libraries.
## Pitch
Our API should be compatible with segmentation models, so we should probably use Tensors.
I think most params remain as same as the previous util. This keeps consistency too.
```
@torch.no_grad()
def draw_segmentation_masks(image: torch.Tensor,
masks: torch.Tensor,
labels: Optional[List[str]] = None,
colors: Optional[List[Union[str, Tuple[int, int, int]]]] = None,
width: int = 1,
font: Optional[str] = None,
font_size: int = 10)
```
We might need to see a method using which we can draw with PIL.
We used `draw.rectangle()` to draw a box in utils, maybe there is some functionality in PIL that can help us draw shapes.
## Alternatives
Let's discuss further, how this API would work with our models.
Also, it would be nice if this works directly for instance segmentation model Mask RCNN.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/utils.py`
Content:
```
1 from typing import Union, Optional, List, Tuple, Text, BinaryIO
2 import pathlib
3 import torch
4 import math
5 import warnings
6 import numpy as np
7 from PIL import Image, ImageDraw, ImageFont, ImageColor
8
9 __all__ = ["make_grid", "save_image", "draw_bounding_boxes"]
10
11
12 @torch.no_grad()
13 def make_grid(
14 tensor: Union[torch.Tensor, List[torch.Tensor]],
15 nrow: int = 8,
16 padding: int = 2,
17 normalize: bool = False,
18 value_range: Optional[Tuple[int, int]] = None,
19 scale_each: bool = False,
20 pad_value: int = 0,
21 **kwargs
22 ) -> torch.Tensor:
23 """Make a grid of images.
24
25 Args:
26 tensor (Tensor or list): 4D mini-batch Tensor of shape (B x C x H x W)
27 or a list of images all of the same size.
28 nrow (int, optional): Number of images displayed in each row of the grid.
29 The final grid size is ``(B / nrow, nrow)``. Default: ``8``.
30 padding (int, optional): amount of padding. Default: ``2``.
31 normalize (bool, optional): If True, shift the image to the range (0, 1),
32 by the min and max values specified by :attr:`range`. Default: ``False``.
33 value_range (tuple, optional): tuple (min, max) where min and max are numbers,
34 then these numbers are used to normalize the image. By default, min and max
35 are computed from the tensor.
36 scale_each (bool, optional): If ``True``, scale each image in the batch of
37 images separately rather than the (min, max) over all images. Default: ``False``.
38 pad_value (float, optional): Value for the padded pixels. Default: ``0``.
39
40 Example:
41 See this notebook `here <https://gist.github.com/anonymous/bf16430f7750c023141c562f3e9f2a91>`_
42
43 """
44 if not (torch.is_tensor(tensor) or
45 (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))):
46 raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}')
47
48 if "range" in kwargs.keys():
49 warning = "range will be deprecated, please use value_range instead."
50 warnings.warn(warning)
51 value_range = kwargs["range"]
52
53 # if list of tensors, convert to a 4D mini-batch Tensor
54 if isinstance(tensor, list):
55 tensor = torch.stack(tensor, dim=0)
56
57 if tensor.dim() == 2: # single image H x W
58 tensor = tensor.unsqueeze(0)
59 if tensor.dim() == 3: # single image
60 if tensor.size(0) == 1: # if single-channel, convert to 3-channel
61 tensor = torch.cat((tensor, tensor, tensor), 0)
62 tensor = tensor.unsqueeze(0)
63
64 if tensor.dim() == 4 and tensor.size(1) == 1: # single-channel images
65 tensor = torch.cat((tensor, tensor, tensor), 1)
66
67 if normalize is True:
68 tensor = tensor.clone() # avoid modifying tensor in-place
69 if value_range is not None:
70 assert isinstance(value_range, tuple), \
71 "value_range has to be a tuple (min, max) if specified. min and max are numbers"
72
73 def norm_ip(img, low, high):
74 img.clamp_(min=low, max=high)
75 img.sub_(low).div_(max(high - low, 1e-5))
76
77 def norm_range(t, value_range):
78 if value_range is not None:
79 norm_ip(t, value_range[0], value_range[1])
80 else:
81 norm_ip(t, float(t.min()), float(t.max()))
82
83 if scale_each is True:
84 for t in tensor: # loop over mini-batch dimension
85 norm_range(t, value_range)
86 else:
87 norm_range(tensor, value_range)
88
89 if tensor.size(0) == 1:
90 return tensor.squeeze(0)
91
92 # make the mini-batch of images into a grid
93 nmaps = tensor.size(0)
94 xmaps = min(nrow, nmaps)
95 ymaps = int(math.ceil(float(nmaps) / xmaps))
96 height, width = int(tensor.size(2) + padding), int(tensor.size(3) + padding)
97 num_channels = tensor.size(1)
98 grid = tensor.new_full((num_channels, height * ymaps + padding, width * xmaps + padding), pad_value)
99 k = 0
100 for y in range(ymaps):
101 for x in range(xmaps):
102 if k >= nmaps:
103 break
104 # Tensor.copy_() is a valid method but seems to be missing from the stubs
105 # https://pytorch.org/docs/stable/tensors.html#torch.Tensor.copy_
106 grid.narrow(1, y * height + padding, height - padding).narrow( # type: ignore[attr-defined]
107 2, x * width + padding, width - padding
108 ).copy_(tensor[k])
109 k = k + 1
110 return grid
111
112
113 @torch.no_grad()
114 def save_image(
115 tensor: Union[torch.Tensor, List[torch.Tensor]],
116 fp: Union[Text, pathlib.Path, BinaryIO],
117 format: Optional[str] = None,
118 **kwargs
119 ) -> None:
120 """Save a given Tensor into an image file.
121
122 Args:
123 tensor (Tensor or list): Image to be saved. If given a mini-batch tensor,
124 saves the tensor as a grid of images by calling ``make_grid``.
125 fp (string or file object): A filename or a file object
126 format(Optional): If omitted, the format to use is determined from the filename extension.
127 If a file object was used instead of a filename, this parameter should always be used.
128 **kwargs: Other arguments are documented in ``make_grid``.
129 """
130
131 grid = make_grid(tensor, **kwargs)
132 # Add 0.5 after unnormalizing to [0, 255] to round to nearest integer
133 ndarr = grid.mul(255).add_(0.5).clamp_(0, 255).permute(1, 2, 0).to('cpu', torch.uint8).numpy()
134 im = Image.fromarray(ndarr)
135 im.save(fp, format=format)
136
137
138 @torch.no_grad()
139 def draw_bounding_boxes(
140 image: torch.Tensor,
141 boxes: torch.Tensor,
142 labels: Optional[List[str]] = None,
143 colors: Optional[List[Union[str, Tuple[int, int, int]]]] = None,
144 fill: Optional[bool] = False,
145 width: int = 1,
146 font: Optional[str] = None,
147 font_size: int = 10
148 ) -> torch.Tensor:
149
150 """
151 Draws bounding boxes on given image.
152 The values of the input image should be uint8 between 0 and 255.
153 If filled, Resulting Tensor should be saved as PNG image.
154
155 Args:
156 image (Tensor): Tensor of shape (C x H x W)
157 boxes (Tensor): Tensor of size (N, 4) containing bounding boxes in (xmin, ymin, xmax, ymax) format. Note that
158 the boxes are absolute coordinates with respect to the image. In other words: `0 <= xmin < xmax < W` and
159 `0 <= ymin < ymax < H`.
160 labels (List[str]): List containing the labels of bounding boxes.
161 colors (List[Union[str, Tuple[int, int, int]]]): List containing the colors of bounding boxes. The colors can
162 be represented as `str` or `Tuple[int, int, int]`.
163 fill (bool): If `True` fills the bounding box with specified color.
164 width (int): Width of bounding box.
165 font (str): A filename containing a TrueType font. If the file is not found in this filename, the loader may
166 also search in other directories, such as the `fonts/` directory on Windows or `/Library/Fonts/`,
167 `/System/Library/Fonts/` and `~/Library/Fonts/` on macOS.
168 font_size (int): The requested font size in points.
169 """
170
171 if not isinstance(image, torch.Tensor):
172 raise TypeError(f"Tensor expected, got {type(image)}")
173 elif image.dtype != torch.uint8:
174 raise ValueError(f"Tensor uint8 expected, got {image.dtype}")
175 elif image.dim() != 3:
176 raise ValueError("Pass individual images, not batches")
177
178 ndarr = image.permute(1, 2, 0).numpy()
179 img_to_draw = Image.fromarray(ndarr)
180
181 img_boxes = boxes.to(torch.int64).tolist()
182
183 if fill:
184 draw = ImageDraw.Draw(img_to_draw, "RGBA")
185
186 else:
187 draw = ImageDraw.Draw(img_to_draw)
188
189 txt_font = ImageFont.load_default() if font is None else ImageFont.truetype(font=font, size=font_size)
190
191 for i, bbox in enumerate(img_boxes):
192 if colors is None:
193 color = None
194 else:
195 color = colors[i]
196
197 if fill:
198 if color is None:
199 fill_color = (255, 255, 255, 100)
200 elif isinstance(color, str):
201 # This will automatically raise Error if rgb cannot be parsed.
202 fill_color = ImageColor.getrgb(color) + (100,)
203 elif isinstance(color, tuple):
204 fill_color = color + (100,)
205 draw.rectangle(bbox, width=width, outline=color, fill=fill_color)
206 else:
207 draw.rectangle(bbox, width=width, outline=color)
208
209 if labels is not None:
210 draw.text((bbox[0], bbox[1]), labels[i], fill=color, font=txt_font)
211
212 return torch.from_numpy(np.array(img_to_draw)).permute(2, 0, 1)
213
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torchvision/utils.py b/torchvision/utils.py
--- a/torchvision/utils.py
+++ b/torchvision/utils.py
@@ -6,7 +6,7 @@
import numpy as np
from PIL import Image, ImageDraw, ImageFont, ImageColor
-__all__ = ["make_grid", "save_image", "draw_bounding_boxes"]
+__all__ = ["make_grid", "save_image", "draw_bounding_boxes", "draw_segmentation_masks"]
@torch.no_grad()
@@ -153,7 +153,7 @@
If filled, Resulting Tensor should be saved as PNG image.
Args:
- image (Tensor): Tensor of shape (C x H x W)
+ image (Tensor): Tensor of shape (C x H x W) and dtype uint8.
boxes (Tensor): Tensor of size (N, 4) containing bounding boxes in (xmin, ymin, xmax, ymax) format. Note that
the boxes are absolute coordinates with respect to the image. In other words: `0 <= xmin < xmax < W` and
`0 <= ymin < ymax < H`.
@@ -210,3 +210,61 @@
draw.text((bbox[0], bbox[1]), labels[i], fill=color, font=txt_font)
return torch.from_numpy(np.array(img_to_draw)).permute(2, 0, 1)
+
+
[email protected]_grad()
+def draw_segmentation_masks(
+ image: torch.Tensor,
+ masks: torch.Tensor,
+ alpha: float = 0.2,
+ colors: Optional[List[Union[str, Tuple[int, int, int]]]] = None,
+) -> torch.Tensor:
+
+ """
+ Draws segmentation masks on given RGB image.
+ The values of the input image should be uint8 between 0 and 255.
+
+ Args:
+ image (Tensor): Tensor of shape (3 x H x W) and dtype uint8.
+ masks (Tensor): Tensor of shape (num_masks, H, W). Each containing probability of predicted class.
+ alpha (float): Float number between 0 and 1 denoting factor of transpaerency of masks.
+ colors (List[Union[str, Tuple[int, int, int]]]): List containing the colors of masks. The colors can
+ be represented as `str` or `Tuple[int, int, int]`.
+ """
+
+ if not isinstance(image, torch.Tensor):
+ raise TypeError(f"Tensor expected, got {type(image)}")
+ elif image.dtype != torch.uint8:
+ raise ValueError(f"Tensor uint8 expected, got {image.dtype}")
+ elif image.dim() != 3:
+ raise ValueError("Pass individual images, not batches")
+ elif image.size()[0] != 3:
+ raise ValueError("Pass an RGB image. Other Image formats are not supported")
+
+ num_masks = masks.size()[0]
+ masks = masks.argmax(0)
+
+ if colors is None:
+ palette = torch.tensor([2 ** 25 - 1, 2 ** 15 - 1, 2 ** 21 - 1])
+ colors_t = torch.as_tensor([i for i in range(num_masks)])[:, None] * palette
+ color_arr = (colors_t % 255).numpy().astype("uint8")
+ else:
+ color_list = []
+ for color in colors:
+ if isinstance(color, str):
+ # This will automatically raise Error if rgb cannot be parsed.
+ fill_color = ImageColor.getrgb(color)
+ color_list.append(fill_color)
+ elif isinstance(color, tuple):
+ color_list.append(color)
+
+ color_arr = np.array(color_list).astype("uint8")
+
+ _, h, w = image.size()
+ img_to_draw = Image.fromarray(masks.byte().cpu().numpy()).resize((w, h))
+ img_to_draw.putpalette(color_arr)
+
+ img_to_draw = torch.from_numpy(np.array(img_to_draw.convert('RGB')))
+ img_to_draw = img_to_draw.permute((2, 0, 1))
+
+ return (image.float() * alpha + img_to_draw.float() * (1.0 - alpha)).to(dtype=torch.uint8)
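As a quick sanity check of the API added above, a hedged usage sketch (shapes and inputs are made up for illustration, and it assumes this patch has been applied so that the helper is importable from `torchvision.utils`):

```python
import torch
from torchvision.utils import draw_segmentation_masks  # introduced by this patch

image = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)  # dummy RGB image
masks = torch.rand(21, 224, 224)  # e.g. 21 per-class score maps; argmax taken internally

overlay = draw_segmentation_masks(image, masks, alpha=0.2)
print(overlay.shape, overlay.dtype)  # torch.Size([3, 224, 224]) torch.uint8
```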
|
{"golden_diff": "diff --git a/torchvision/utils.py b/torchvision/utils.py\n--- a/torchvision/utils.py\n+++ b/torchvision/utils.py\n@@ -6,7 +6,7 @@\n import numpy as np\n from PIL import Image, ImageDraw, ImageFont, ImageColor\n \n-__all__ = [\"make_grid\", \"save_image\", \"draw_bounding_boxes\"]\n+__all__ = [\"make_grid\", \"save_image\", \"draw_bounding_boxes\", \"draw_segmentation_masks\"]\n \n \n @torch.no_grad()\n@@ -153,7 +153,7 @@\n If filled, Resulting Tensor should be saved as PNG image.\n \n Args:\n- image (Tensor): Tensor of shape (C x H x W)\n+ image (Tensor): Tensor of shape (C x H x W) and dtype uint8.\n boxes (Tensor): Tensor of size (N, 4) containing bounding boxes in (xmin, ymin, xmax, ymax) format. Note that\n the boxes are absolute coordinates with respect to the image. In other words: `0 <= xmin < xmax < W` and\n `0 <= ymin < ymax < H`.\n@@ -210,3 +210,61 @@\n draw.text((bbox[0], bbox[1]), labels[i], fill=color, font=txt_font)\n \n return torch.from_numpy(np.array(img_to_draw)).permute(2, 0, 1)\n+\n+\[email protected]_grad()\n+def draw_segmentation_masks(\n+ image: torch.Tensor,\n+ masks: torch.Tensor,\n+ alpha: float = 0.2,\n+ colors: Optional[List[Union[str, Tuple[int, int, int]]]] = None,\n+) -> torch.Tensor:\n+\n+ \"\"\"\n+ Draws segmentation masks on given RGB image.\n+ The values of the input image should be uint8 between 0 and 255.\n+\n+ Args:\n+ image (Tensor): Tensor of shape (3 x H x W) and dtype uint8.\n+ masks (Tensor): Tensor of shape (num_masks, H, W). Each containing probability of predicted class.\n+ alpha (float): Float number between 0 and 1 denoting factor of transpaerency of masks.\n+ colors (List[Union[str, Tuple[int, int, int]]]): List containing the colors of masks. The colors can\n+ be represented as `str` or `Tuple[int, int, int]`.\n+ \"\"\"\n+\n+ if not isinstance(image, torch.Tensor):\n+ raise TypeError(f\"Tensor expected, got {type(image)}\")\n+ elif image.dtype != torch.uint8:\n+ raise ValueError(f\"Tensor uint8 expected, got {image.dtype}\")\n+ elif image.dim() != 3:\n+ raise ValueError(\"Pass individual images, not batches\")\n+ elif image.size()[0] != 3:\n+ raise ValueError(\"Pass an RGB image. Other Image formats are not supported\")\n+\n+ num_masks = masks.size()[0]\n+ masks = masks.argmax(0)\n+\n+ if colors is None:\n+ palette = torch.tensor([2 ** 25 - 1, 2 ** 15 - 1, 2 ** 21 - 1])\n+ colors_t = torch.as_tensor([i for i in range(num_masks)])[:, None] * palette\n+ color_arr = (colors_t % 255).numpy().astype(\"uint8\")\n+ else:\n+ color_list = []\n+ for color in colors:\n+ if isinstance(color, str):\n+ # This will automatically raise Error if rgb cannot be parsed.\n+ fill_color = ImageColor.getrgb(color)\n+ color_list.append(fill_color)\n+ elif isinstance(color, tuple):\n+ color_list.append(color)\n+\n+ color_arr = np.array(color_list).astype(\"uint8\")\n+\n+ _, h, w = image.size()\n+ img_to_draw = Image.fromarray(masks.byte().cpu().numpy()).resize((w, h))\n+ img_to_draw.putpalette(color_arr)\n+\n+ img_to_draw = torch.from_numpy(np.array(img_to_draw.convert('RGB')))\n+ img_to_draw = img_to_draw.permute((2, 0, 1))\n+\n+ return (image.float() * alpha + img_to_draw.float() * (1.0 - alpha)).to(dtype=torch.uint8)\n", "issue": "Utility to draw Semantic Segmentation Masks\n## \ud83d\ude80 Feature\r\n\r\nWe recently added utility to draw bounding boxes, which works really well with detection models. 
#2785 #2556 \r\nIt might be nice to draw segmentation masks which we obtain from instance segmentation models.\r\n\r\n## Motivation\r\n\r\nSame as bounding box utils. It is very useful to have these. It reduces the dependence of users over other plotting libraries.\r\n\r\n## Pitch\r\n\r\nOur API should be compatible with segmentation models, so we should probably use Tensors.\r\nI think most params remain as same as the previous util. This keeps consistency too.\r\n\r\n```\r\[email protected]_grad()\r\ndef draw_segmentation_masks(image: torch.Tensor,\r\n masks: torch.Tensor,\r\n labels: Optional[List[str]] = None,\r\n colors: Optional[List[Union[str, Tuple[int, int, int]]]] = None,\r\n width: int = 1,\r\n font: Optional[str] = None,\r\n font_size: int = 10)\r\n```\r\n\r\nWe might need to see a method using which we can draw with PIL.\r\nWe used `draw.rectangle()` to draw a box in utils, maybe there is some functionality in PIL that can help us draw shapes.\r\n\r\n## Alternatives\r\n\r\nLet's discuss further, how this API would work with our models.\r\nAlso, it would be nice if this works directly for instance segmentation model Mask RCNN.\r\n\n", "before_files": [{"content": "from typing import Union, Optional, List, Tuple, Text, BinaryIO\nimport pathlib\nimport torch\nimport math\nimport warnings\nimport numpy as np\nfrom PIL import Image, ImageDraw, ImageFont, ImageColor\n\n__all__ = [\"make_grid\", \"save_image\", \"draw_bounding_boxes\"]\n\n\[email protected]_grad()\ndef make_grid(\n tensor: Union[torch.Tensor, List[torch.Tensor]],\n nrow: int = 8,\n padding: int = 2,\n normalize: bool = False,\n value_range: Optional[Tuple[int, int]] = None,\n scale_each: bool = False,\n pad_value: int = 0,\n **kwargs\n) -> torch.Tensor:\n \"\"\"Make a grid of images.\n\n Args:\n tensor (Tensor or list): 4D mini-batch Tensor of shape (B x C x H x W)\n or a list of images all of the same size.\n nrow (int, optional): Number of images displayed in each row of the grid.\n The final grid size is ``(B / nrow, nrow)``. Default: ``8``.\n padding (int, optional): amount of padding. Default: ``2``.\n normalize (bool, optional): If True, shift the image to the range (0, 1),\n by the min and max values specified by :attr:`range`. Default: ``False``.\n value_range (tuple, optional): tuple (min, max) where min and max are numbers,\n then these numbers are used to normalize the image. By default, min and max\n are computed from the tensor.\n scale_each (bool, optional): If ``True``, scale each image in the batch of\n images separately rather than the (min, max) over all images. Default: ``False``.\n pad_value (float, optional): Value for the padded pixels. 
Default: ``0``.\n\n Example:\n See this notebook `here <https://gist.github.com/anonymous/bf16430f7750c023141c562f3e9f2a91>`_\n\n \"\"\"\n if not (torch.is_tensor(tensor) or\n (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))):\n raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}')\n\n if \"range\" in kwargs.keys():\n warning = \"range will be deprecated, please use value_range instead.\"\n warnings.warn(warning)\n value_range = kwargs[\"range\"]\n\n # if list of tensors, convert to a 4D mini-batch Tensor\n if isinstance(tensor, list):\n tensor = torch.stack(tensor, dim=0)\n\n if tensor.dim() == 2: # single image H x W\n tensor = tensor.unsqueeze(0)\n if tensor.dim() == 3: # single image\n if tensor.size(0) == 1: # if single-channel, convert to 3-channel\n tensor = torch.cat((tensor, tensor, tensor), 0)\n tensor = tensor.unsqueeze(0)\n\n if tensor.dim() == 4 and tensor.size(1) == 1: # single-channel images\n tensor = torch.cat((tensor, tensor, tensor), 1)\n\n if normalize is True:\n tensor = tensor.clone() # avoid modifying tensor in-place\n if value_range is not None:\n assert isinstance(value_range, tuple), \\\n \"value_range has to be a tuple (min, max) if specified. min and max are numbers\"\n\n def norm_ip(img, low, high):\n img.clamp_(min=low, max=high)\n img.sub_(low).div_(max(high - low, 1e-5))\n\n def norm_range(t, value_range):\n if value_range is not None:\n norm_ip(t, value_range[0], value_range[1])\n else:\n norm_ip(t, float(t.min()), float(t.max()))\n\n if scale_each is True:\n for t in tensor: # loop over mini-batch dimension\n norm_range(t, value_range)\n else:\n norm_range(tensor, value_range)\n\n if tensor.size(0) == 1:\n return tensor.squeeze(0)\n\n # make the mini-batch of images into a grid\n nmaps = tensor.size(0)\n xmaps = min(nrow, nmaps)\n ymaps = int(math.ceil(float(nmaps) / xmaps))\n height, width = int(tensor.size(2) + padding), int(tensor.size(3) + padding)\n num_channels = tensor.size(1)\n grid = tensor.new_full((num_channels, height * ymaps + padding, width * xmaps + padding), pad_value)\n k = 0\n for y in range(ymaps):\n for x in range(xmaps):\n if k >= nmaps:\n break\n # Tensor.copy_() is a valid method but seems to be missing from the stubs\n # https://pytorch.org/docs/stable/tensors.html#torch.Tensor.copy_\n grid.narrow(1, y * height + padding, height - padding).narrow( # type: ignore[attr-defined]\n 2, x * width + padding, width - padding\n ).copy_(tensor[k])\n k = k + 1\n return grid\n\n\[email protected]_grad()\ndef save_image(\n tensor: Union[torch.Tensor, List[torch.Tensor]],\n fp: Union[Text, pathlib.Path, BinaryIO],\n format: Optional[str] = None,\n **kwargs\n) -> None:\n \"\"\"Save a given Tensor into an image file.\n\n Args:\n tensor (Tensor or list): Image to be saved. 
If given a mini-batch tensor,\n saves the tensor as a grid of images by calling ``make_grid``.\n fp (string or file object): A filename or a file object\n format(Optional): If omitted, the format to use is determined from the filename extension.\n If a file object was used instead of a filename, this parameter should always be used.\n **kwargs: Other arguments are documented in ``make_grid``.\n \"\"\"\n\n grid = make_grid(tensor, **kwargs)\n # Add 0.5 after unnormalizing to [0, 255] to round to nearest integer\n ndarr = grid.mul(255).add_(0.5).clamp_(0, 255).permute(1, 2, 0).to('cpu', torch.uint8).numpy()\n im = Image.fromarray(ndarr)\n im.save(fp, format=format)\n\n\[email protected]_grad()\ndef draw_bounding_boxes(\n image: torch.Tensor,\n boxes: torch.Tensor,\n labels: Optional[List[str]] = None,\n colors: Optional[List[Union[str, Tuple[int, int, int]]]] = None,\n fill: Optional[bool] = False,\n width: int = 1,\n font: Optional[str] = None,\n font_size: int = 10\n) -> torch.Tensor:\n\n \"\"\"\n Draws bounding boxes on given image.\n The values of the input image should be uint8 between 0 and 255.\n If filled, Resulting Tensor should be saved as PNG image.\n\n Args:\n image (Tensor): Tensor of shape (C x H x W)\n boxes (Tensor): Tensor of size (N, 4) containing bounding boxes in (xmin, ymin, xmax, ymax) format. Note that\n the boxes are absolute coordinates with respect to the image. In other words: `0 <= xmin < xmax < W` and\n `0 <= ymin < ymax < H`.\n labels (List[str]): List containing the labels of bounding boxes.\n colors (List[Union[str, Tuple[int, int, int]]]): List containing the colors of bounding boxes. The colors can\n be represented as `str` or `Tuple[int, int, int]`.\n fill (bool): If `True` fills the bounding box with specified color.\n width (int): Width of bounding box.\n font (str): A filename containing a TrueType font. 
If the file is not found in this filename, the loader may\n also search in other directories, such as the `fonts/` directory on Windows or `/Library/Fonts/`,\n `/System/Library/Fonts/` and `~/Library/Fonts/` on macOS.\n font_size (int): The requested font size in points.\n \"\"\"\n\n if not isinstance(image, torch.Tensor):\n raise TypeError(f\"Tensor expected, got {type(image)}\")\n elif image.dtype != torch.uint8:\n raise ValueError(f\"Tensor uint8 expected, got {image.dtype}\")\n elif image.dim() != 3:\n raise ValueError(\"Pass individual images, not batches\")\n\n ndarr = image.permute(1, 2, 0).numpy()\n img_to_draw = Image.fromarray(ndarr)\n\n img_boxes = boxes.to(torch.int64).tolist()\n\n if fill:\n draw = ImageDraw.Draw(img_to_draw, \"RGBA\")\n\n else:\n draw = ImageDraw.Draw(img_to_draw)\n\n txt_font = ImageFont.load_default() if font is None else ImageFont.truetype(font=font, size=font_size)\n\n for i, bbox in enumerate(img_boxes):\n if colors is None:\n color = None\n else:\n color = colors[i]\n\n if fill:\n if color is None:\n fill_color = (255, 255, 255, 100)\n elif isinstance(color, str):\n # This will automatically raise Error if rgb cannot be parsed.\n fill_color = ImageColor.getrgb(color) + (100,)\n elif isinstance(color, tuple):\n fill_color = color + (100,)\n draw.rectangle(bbox, width=width, outline=color, fill=fill_color)\n else:\n draw.rectangle(bbox, width=width, outline=color)\n\n if labels is not None:\n draw.text((bbox[0], bbox[1]), labels[i], fill=color, font=txt_font)\n\n return torch.from_numpy(np.array(img_to_draw)).permute(2, 0, 1)\n", "path": "torchvision/utils.py"}], "after_files": [{"content": "from typing import Union, Optional, List, Tuple, Text, BinaryIO\nimport pathlib\nimport torch\nimport math\nimport warnings\nimport numpy as np\nfrom PIL import Image, ImageDraw, ImageFont, ImageColor\n\n__all__ = [\"make_grid\", \"save_image\", \"draw_bounding_boxes\", \"draw_segmentation_masks\"]\n\n\[email protected]_grad()\ndef make_grid(\n tensor: Union[torch.Tensor, List[torch.Tensor]],\n nrow: int = 8,\n padding: int = 2,\n normalize: bool = False,\n value_range: Optional[Tuple[int, int]] = None,\n scale_each: bool = False,\n pad_value: int = 0,\n **kwargs\n) -> torch.Tensor:\n \"\"\"Make a grid of images.\n\n Args:\n tensor (Tensor or list): 4D mini-batch Tensor of shape (B x C x H x W)\n or a list of images all of the same size.\n nrow (int, optional): Number of images displayed in each row of the grid.\n The final grid size is ``(B / nrow, nrow)``. Default: ``8``.\n padding (int, optional): amount of padding. Default: ``2``.\n normalize (bool, optional): If True, shift the image to the range (0, 1),\n by the min and max values specified by :attr:`range`. Default: ``False``.\n value_range (tuple, optional): tuple (min, max) where min and max are numbers,\n then these numbers are used to normalize the image. By default, min and max\n are computed from the tensor.\n scale_each (bool, optional): If ``True``, scale each image in the batch of\n images separately rather than the (min, max) over all images. Default: ``False``.\n pad_value (float, optional): Value for the padded pixels. 
Default: ``0``.\n\n Example:\n See this notebook `here <https://gist.github.com/anonymous/bf16430f7750c023141c562f3e9f2a91>`_\n\n \"\"\"\n if not (torch.is_tensor(tensor) or\n (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))):\n raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}')\n\n if \"range\" in kwargs.keys():\n warning = \"range will be deprecated, please use value_range instead.\"\n warnings.warn(warning)\n value_range = kwargs[\"range\"]\n\n # if list of tensors, convert to a 4D mini-batch Tensor\n if isinstance(tensor, list):\n tensor = torch.stack(tensor, dim=0)\n\n if tensor.dim() == 2: # single image H x W\n tensor = tensor.unsqueeze(0)\n if tensor.dim() == 3: # single image\n if tensor.size(0) == 1: # if single-channel, convert to 3-channel\n tensor = torch.cat((tensor, tensor, tensor), 0)\n tensor = tensor.unsqueeze(0)\n\n if tensor.dim() == 4 and tensor.size(1) == 1: # single-channel images\n tensor = torch.cat((tensor, tensor, tensor), 1)\n\n if normalize is True:\n tensor = tensor.clone() # avoid modifying tensor in-place\n if value_range is not None:\n assert isinstance(value_range, tuple), \\\n \"value_range has to be a tuple (min, max) if specified. min and max are numbers\"\n\n def norm_ip(img, low, high):\n img.clamp_(min=low, max=high)\n img.sub_(low).div_(max(high - low, 1e-5))\n\n def norm_range(t, value_range):\n if value_range is not None:\n norm_ip(t, value_range[0], value_range[1])\n else:\n norm_ip(t, float(t.min()), float(t.max()))\n\n if scale_each is True:\n for t in tensor: # loop over mini-batch dimension\n norm_range(t, value_range)\n else:\n norm_range(tensor, value_range)\n\n if tensor.size(0) == 1:\n return tensor.squeeze(0)\n\n # make the mini-batch of images into a grid\n nmaps = tensor.size(0)\n xmaps = min(nrow, nmaps)\n ymaps = int(math.ceil(float(nmaps) / xmaps))\n height, width = int(tensor.size(2) + padding), int(tensor.size(3) + padding)\n num_channels = tensor.size(1)\n grid = tensor.new_full((num_channels, height * ymaps + padding, width * xmaps + padding), pad_value)\n k = 0\n for y in range(ymaps):\n for x in range(xmaps):\n if k >= nmaps:\n break\n # Tensor.copy_() is a valid method but seems to be missing from the stubs\n # https://pytorch.org/docs/stable/tensors.html#torch.Tensor.copy_\n grid.narrow(1, y * height + padding, height - padding).narrow( # type: ignore[attr-defined]\n 2, x * width + padding, width - padding\n ).copy_(tensor[k])\n k = k + 1\n return grid\n\n\[email protected]_grad()\ndef save_image(\n tensor: Union[torch.Tensor, List[torch.Tensor]],\n fp: Union[Text, pathlib.Path, BinaryIO],\n format: Optional[str] = None,\n **kwargs\n) -> None:\n \"\"\"Save a given Tensor into an image file.\n\n Args:\n tensor (Tensor or list): Image to be saved. 
If given a mini-batch tensor,\n saves the tensor as a grid of images by calling ``make_grid``.\n fp (string or file object): A filename or a file object\n format(Optional): If omitted, the format to use is determined from the filename extension.\n If a file object was used instead of a filename, this parameter should always be used.\n **kwargs: Other arguments are documented in ``make_grid``.\n \"\"\"\n\n grid = make_grid(tensor, **kwargs)\n # Add 0.5 after unnormalizing to [0, 255] to round to nearest integer\n ndarr = grid.mul(255).add_(0.5).clamp_(0, 255).permute(1, 2, 0).to('cpu', torch.uint8).numpy()\n im = Image.fromarray(ndarr)\n im.save(fp, format=format)\n\n\[email protected]_grad()\ndef draw_bounding_boxes(\n image: torch.Tensor,\n boxes: torch.Tensor,\n labels: Optional[List[str]] = None,\n colors: Optional[List[Union[str, Tuple[int, int, int]]]] = None,\n fill: Optional[bool] = False,\n width: int = 1,\n font: Optional[str] = None,\n font_size: int = 10\n) -> torch.Tensor:\n\n \"\"\"\n Draws bounding boxes on given image.\n The values of the input image should be uint8 between 0 and 255.\n If filled, Resulting Tensor should be saved as PNG image.\n\n Args:\n image (Tensor): Tensor of shape (C x H x W) and dtype uint8.\n boxes (Tensor): Tensor of size (N, 4) containing bounding boxes in (xmin, ymin, xmax, ymax) format. Note that\n the boxes are absolute coordinates with respect to the image. In other words: `0 <= xmin < xmax < W` and\n `0 <= ymin < ymax < H`.\n labels (List[str]): List containing the labels of bounding boxes.\n colors (List[Union[str, Tuple[int, int, int]]]): List containing the colors of bounding boxes. The colors can\n be represented as `str` or `Tuple[int, int, int]`.\n fill (bool): If `True` fills the bounding box with specified color.\n width (int): Width of bounding box.\n font (str): A filename containing a TrueType font. 
If the file is not found in this filename, the loader may\n also search in other directories, such as the `fonts/` directory on Windows or `/Library/Fonts/`,\n `/System/Library/Fonts/` and `~/Library/Fonts/` on macOS.\n font_size (int): The requested font size in points.\n \"\"\"\n\n if not isinstance(image, torch.Tensor):\n raise TypeError(f\"Tensor expected, got {type(image)}\")\n elif image.dtype != torch.uint8:\n raise ValueError(f\"Tensor uint8 expected, got {image.dtype}\")\n elif image.dim() != 3:\n raise ValueError(\"Pass individual images, not batches\")\n\n ndarr = image.permute(1, 2, 0).numpy()\n img_to_draw = Image.fromarray(ndarr)\n\n img_boxes = boxes.to(torch.int64).tolist()\n\n if fill:\n draw = ImageDraw.Draw(img_to_draw, \"RGBA\")\n\n else:\n draw = ImageDraw.Draw(img_to_draw)\n\n txt_font = ImageFont.load_default() if font is None else ImageFont.truetype(font=font, size=font_size)\n\n for i, bbox in enumerate(img_boxes):\n if colors is None:\n color = None\n else:\n color = colors[i]\n\n if fill:\n if color is None:\n fill_color = (255, 255, 255, 100)\n elif isinstance(color, str):\n # This will automatically raise Error if rgb cannot be parsed.\n fill_color = ImageColor.getrgb(color) + (100,)\n elif isinstance(color, tuple):\n fill_color = color + (100,)\n draw.rectangle(bbox, width=width, outline=color, fill=fill_color)\n else:\n draw.rectangle(bbox, width=width, outline=color)\n\n if labels is not None:\n draw.text((bbox[0], bbox[1]), labels[i], fill=color, font=txt_font)\n\n return torch.from_numpy(np.array(img_to_draw)).permute(2, 0, 1)\n\n\[email protected]_grad()\ndef draw_segmentation_masks(\n image: torch.Tensor,\n masks: torch.Tensor,\n alpha: float = 0.2,\n colors: Optional[List[Union[str, Tuple[int, int, int]]]] = None,\n) -> torch.Tensor:\n\n \"\"\"\n Draws segmentation masks on given RGB image.\n The values of the input image should be uint8 between 0 and 255.\n\n Args:\n image (Tensor): Tensor of shape (3 x H x W) and dtype uint8.\n masks (Tensor): Tensor of shape (num_masks, H, W). Each containing probability of predicted class.\n alpha (float): Float number between 0 and 1 denoting factor of transpaerency of masks.\n colors (List[Union[str, Tuple[int, int, int]]]): List containing the colors of masks. The colors can\n be represented as `str` or `Tuple[int, int, int]`.\n \"\"\"\n\n if not isinstance(image, torch.Tensor):\n raise TypeError(f\"Tensor expected, got {type(image)}\")\n elif image.dtype != torch.uint8:\n raise ValueError(f\"Tensor uint8 expected, got {image.dtype}\")\n elif image.dim() != 3:\n raise ValueError(\"Pass individual images, not batches\")\n elif image.size()[0] != 3:\n raise ValueError(\"Pass an RGB image. 
Other Image formats are not supported\")\n\n num_masks = masks.size()[0]\n masks = masks.argmax(0)\n\n if colors is None:\n palette = torch.tensor([2 ** 25 - 1, 2 ** 15 - 1, 2 ** 21 - 1])\n colors_t = torch.as_tensor([i for i in range(num_masks)])[:, None] * palette\n color_arr = (colors_t % 255).numpy().astype(\"uint8\")\n else:\n color_list = []\n for color in colors:\n if isinstance(color, str):\n # This will automatically raise Error if rgb cannot be parsed.\n fill_color = ImageColor.getrgb(color)\n color_list.append(fill_color)\n elif isinstance(color, tuple):\n color_list.append(color)\n\n color_arr = np.array(color_list).astype(\"uint8\")\n\n _, h, w = image.size()\n img_to_draw = Image.fromarray(masks.byte().cpu().numpy()).resize((w, h))\n img_to_draw.putpalette(color_arr)\n\n img_to_draw = torch.from_numpy(np.array(img_to_draw.convert('RGB')))\n img_to_draw = img_to_draw.permute((2, 0, 1))\n\n return (image.float() * alpha + img_to_draw.float() * (1.0 - alpha)).to(dtype=torch.uint8)\n", "path": "torchvision/utils.py"}]}
| 3,300 | 948 |
gh_patches_debug_56926
|
rasdani/github-patches
|
git_diff
|
kivy__python-for-android-615
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OpenSSL 1.0.2e outdated (replaced by 1.0.2f)
A new version has been published today. The recipe can no longer download version 1.0.2e.
Does `disable-sover.patch` still apply?
--- END ISSUE ---
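As a quick illustrative aside (not part of the original report): the recipe shown below derives its download URL from the `version` attribute, so bumping that string is what repoints the fetch. A minimal sketch of the substitution, assuming only what the recipe itself shows:
```python
# Minimal sketch: the download URL tracks the recipe's version attribute.
version = '1.0.2f'
url = 'https://www.openssl.org/source/openssl-{version}.tar.gz'.format(version=version)
print(url)  # https://www.openssl.org/source/openssl-1.0.2f.tar.gz
```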
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/openssl/__init__.py`
Content:
```
1 from functools import partial
2
3 from pythonforandroid.toolchain import Recipe, shprint, current_directory
4 import sh
5
6
7 class OpenSSLRecipe(Recipe):
8 version = '1.0.2e'
9 url = 'https://www.openssl.org/source/openssl-{version}.tar.gz'
10
11 def should_build(self, arch):
12 return not self.has_libs(arch, 'libssl.so', 'libcrypto.so')
13
14 def check_symbol(self, env, sofile, symbol):
15 nm = env.get('NM', 'nm')
16 syms = sh.sh('-c', "{} -gp {} | cut -d' ' -f3".format(
17 nm, sofile), _env=env).splitlines()
18 if symbol in syms:
19 return True
20 print('{} missing symbol {}; rebuilding'.format(sofile, symbol))
21 return False
22
23 def get_recipe_env(self, arch=None):
24 env = super(OpenSSLRecipe, self).get_recipe_env(arch)
25 env['CFLAGS'] += ' ' + env['LDFLAGS']
26 env['CC'] += ' ' + env['LDFLAGS']
27 return env
28
29 def select_build_arch(self, arch):
30 aname = arch.arch
31 if 'arm64' in aname:
32 return 'linux-aarch64'
33 if 'v7a' in aname:
34 return 'android-armv7'
35 if 'arm' in aname:
36 return 'android'
37 return 'linux-armv4'
38
39 def build_arch(self, arch):
40 env = self.get_recipe_env(arch)
41 with current_directory(self.get_build_dir(arch.arch)):
42 # sh fails with code 255 trying to execute ./Configure
43 # so instead we manually run perl passing in Configure
44 perl = sh.Command('perl')
45 buildarch = self.select_build_arch(arch)
46 shprint(perl, 'Configure', 'shared', 'no-dso', 'no-krb5', buildarch, _env=env)
47 self.apply_patch('disable-sover.patch', arch.arch)
48
49 check_crypto = partial(self.check_symbol, env, 'libcrypto.so')
50 # check_ssl = partial(self.check_symbol, env, 'libssl.so')
51 while True:
52 shprint(sh.make, 'build_libs', _env=env)
53 if all(map(check_crypto, ('SSLeay', 'MD5_Transform', 'MD4_Init'))):
54 break
55 shprint(sh.make, 'clean', _env=env)
56
57 self.install_libs(arch, 'libssl.so', 'libcrypto.so')
58
59 recipe = OpenSSLRecipe()
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pythonforandroid/recipes/openssl/__init__.py b/pythonforandroid/recipes/openssl/__init__.py
--- a/pythonforandroid/recipes/openssl/__init__.py
+++ b/pythonforandroid/recipes/openssl/__init__.py
@@ -5,7 +5,7 @@
class OpenSSLRecipe(Recipe):
- version = '1.0.2e'
+ version = '1.0.2f'
url = 'https://www.openssl.org/source/openssl-{version}.tar.gz'
def should_build(self, arch):
|
{"golden_diff": "diff --git a/pythonforandroid/recipes/openssl/__init__.py b/pythonforandroid/recipes/openssl/__init__.py\n--- a/pythonforandroid/recipes/openssl/__init__.py\n+++ b/pythonforandroid/recipes/openssl/__init__.py\n@@ -5,7 +5,7 @@\n \n \n class OpenSSLRecipe(Recipe):\n- version = '1.0.2e'\n+ version = '1.0.2f'\n url = 'https://www.openssl.org/source/openssl-{version}.tar.gz'\n \n def should_build(self, arch):\n", "issue": "OpenSSL 1.0.2e outdated (replaced by 1.0.2f)\nA new version has been published today. The recipe no longer can download version 1.0.2e.\n\nDoes `disable-solver.patch` still apply?\n\n", "before_files": [{"content": "from functools import partial\n\nfrom pythonforandroid.toolchain import Recipe, shprint, current_directory\nimport sh\n\n\nclass OpenSSLRecipe(Recipe):\n version = '1.0.2e'\n url = 'https://www.openssl.org/source/openssl-{version}.tar.gz'\n\n def should_build(self, arch):\n return not self.has_libs(arch, 'libssl.so', 'libcrypto.so')\n\n def check_symbol(self, env, sofile, symbol):\n nm = env.get('NM', 'nm')\n syms = sh.sh('-c', \"{} -gp {} | cut -d' ' -f3\".format(\n nm, sofile), _env=env).splitlines()\n if symbol in syms:\n return True\n print('{} missing symbol {}; rebuilding'.format(sofile, symbol))\n return False\n\n def get_recipe_env(self, arch=None):\n env = super(OpenSSLRecipe, self).get_recipe_env(arch)\n env['CFLAGS'] += ' ' + env['LDFLAGS']\n env['CC'] += ' ' + env['LDFLAGS']\n return env\n\n def select_build_arch(self, arch):\n aname = arch.arch\n if 'arm64' in aname:\n return 'linux-aarch64'\n if 'v7a' in aname:\n return 'android-armv7'\n if 'arm' in aname:\n return 'android'\n return 'linux-armv4'\n\n def build_arch(self, arch):\n env = self.get_recipe_env(arch)\n with current_directory(self.get_build_dir(arch.arch)):\n # sh fails with code 255 trying to execute ./Configure\n # so instead we manually run perl passing in Configure\n perl = sh.Command('perl')\n buildarch = self.select_build_arch(arch)\n shprint(perl, 'Configure', 'shared', 'no-dso', 'no-krb5', buildarch, _env=env)\n self.apply_patch('disable-sover.patch', arch.arch)\n\n check_crypto = partial(self.check_symbol, env, 'libcrypto.so')\n # check_ssl = partial(self.check_symbol, env, 'libssl.so')\n while True:\n shprint(sh.make, 'build_libs', _env=env)\n if all(map(check_crypto, ('SSLeay', 'MD5_Transform', 'MD4_Init'))):\n break\n shprint(sh.make, 'clean', _env=env)\n\n self.install_libs(arch, 'libssl.so', 'libcrypto.so')\n\nrecipe = OpenSSLRecipe()\n", "path": "pythonforandroid/recipes/openssl/__init__.py"}], "after_files": [{"content": "from functools import partial\n\nfrom pythonforandroid.toolchain import Recipe, shprint, current_directory\nimport sh\n\n\nclass OpenSSLRecipe(Recipe):\n version = '1.0.2f'\n url = 'https://www.openssl.org/source/openssl-{version}.tar.gz'\n\n def should_build(self, arch):\n return not self.has_libs(arch, 'libssl.so', 'libcrypto.so')\n\n def check_symbol(self, env, sofile, symbol):\n nm = env.get('NM', 'nm')\n syms = sh.sh('-c', \"{} -gp {} | cut -d' ' -f3\".format(\n nm, sofile), _env=env).splitlines()\n if symbol in syms:\n return True\n print('{} missing symbol {}; rebuilding'.format(sofile, symbol))\n return False\n\n def get_recipe_env(self, arch=None):\n env = super(OpenSSLRecipe, self).get_recipe_env(arch)\n env['CFLAGS'] += ' ' + env['LDFLAGS']\n env['CC'] += ' ' + env['LDFLAGS']\n return env\n\n def select_build_arch(self, arch):\n aname = arch.arch\n if 'arm64' in aname:\n return 'linux-aarch64'\n if 'v7a' in aname:\n return 
'android-armv7'\n if 'arm' in aname:\n return 'android'\n return 'linux-armv4'\n\n def build_arch(self, arch):\n env = self.get_recipe_env(arch)\n with current_directory(self.get_build_dir(arch.arch)):\n # sh fails with code 255 trying to execute ./Configure\n # so instead we manually run perl passing in Configure\n perl = sh.Command('perl')\n buildarch = self.select_build_arch(arch)\n shprint(perl, 'Configure', 'shared', 'no-dso', 'no-krb5', buildarch, _env=env)\n self.apply_patch('disable-sover.patch', arch.arch)\n\n check_crypto = partial(self.check_symbol, env, 'libcrypto.so')\n # check_ssl = partial(self.check_symbol, env, 'libssl.so')\n while True:\n shprint(sh.make, 'build_libs', _env=env)\n if all(map(check_crypto, ('SSLeay', 'MD5_Transform', 'MD4_Init'))):\n break\n shprint(sh.make, 'clean', _env=env)\n\n self.install_libs(arch, 'libssl.so', 'libcrypto.so')\n\nrecipe = OpenSSLRecipe()\n", "path": "pythonforandroid/recipes/openssl/__init__.py"}]}
| 1,001 | 125 |
gh_patches_debug_520
|
rasdani/github-patches
|
git_diff
|
googleapis__google-api-python-client-303
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Suggest use of httplib2 v0.9.2 in readme if using python > 3
Python 3.x implementations of the client pass bytestrings to httplib2's header-encoding method, and httplib2 < 0.9.2 does not support this.
The documentation should note that Python 3 installations must explicitly install httplib2 >= 0.9.2. The requirement published on PyPI only pins httplib2 >= 0.8, which breaks for anyone unaware of this issue.
--- END ISSUE ---
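As an illustrative aside (not part of the original report), a hedged sketch of a runtime guard that fails fast when an old httplib2 is present under Python 3; the check itself is an assumption for illustration, not something the client library does:
```python
# Hedged sketch: refuse to run on Python 3 with an httplib2 older than 0.9.2,
# the first release that accepts bytestring header values.
import sys
import pkg_resources

if sys.version_info[0] >= 3:
    installed = pkg_resources.get_distribution('httplib2').version
    if pkg_resources.parse_version(installed) < pkg_resources.parse_version('0.9.2'):
        raise RuntimeError(
            'httplib2 %s is too old for Python 3; install httplib2>=0.9.2' % installed
        )
```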
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Setup script for Google API Python client.
16
17 Also installs included versions of third party libraries, if those libraries
18 are not already installed.
19 """
20 from __future__ import print_function
21
22 import sys
23
24 if sys.version_info < (2, 6):
25 print('google-api-python-client requires python version >= 2.6.',
26 file=sys.stderr)
27 sys.exit(1)
28 if (3, 1) <= sys.version_info < (3, 3):
29 print('google-api-python-client requires python3 version >= 3.3.',
30 file=sys.stderr)
31 sys.exit(1)
32
33 from setuptools import setup
34 import pkg_resources
35
36 def _DetectBadness():
37 import os
38 if 'SKIP_GOOGLEAPICLIENT_COMPAT_CHECK' in os.environ:
39 return
40 o2c_pkg = None
41 try:
42 o2c_pkg = pkg_resources.get_distribution('oauth2client')
43 except pkg_resources.DistributionNotFound:
44 pass
45 oauth2client = None
46 try:
47 import oauth2client
48 except ImportError:
49 pass
50 if o2c_pkg is None and oauth2client is not None:
51 raise RuntimeError(
52 'Previous version of google-api-python-client detected; due to a '
53 'packaging issue, we cannot perform an in-place upgrade. Please remove '
54 'the old version and re-install this package.'
55 )
56
57 _DetectBadness()
58
59 packages = [
60 'apiclient',
61 'googleapiclient',
62 'googleapiclient/discovery_cache',
63 ]
64
65 install_requires = [
66 'httplib2>=0.8,<1dev',
67 'oauth2client>=1.5.0,<5.0.0dev',
68 'six>=1.6.1,<2dev',
69 'uritemplate>=3.0.0,<4dev',
70 ]
71
72 if sys.version_info < (2, 7):
73 install_requires.append('argparse')
74
75 long_desc = """The Google API Client for Python is a client library for
76 accessing the Plus, Moderator, and many other Google APIs."""
77
78 import googleapiclient
79 version = googleapiclient.__version__
80
81 setup(
82 name="google-api-python-client",
83 version=version,
84 description="Google API Client Library for Python",
85 long_description=long_desc,
86 author="Google Inc.",
87 url="http://github.com/google/google-api-python-client/",
88 install_requires=install_requires,
89 packages=packages,
90 package_data={},
91 license="Apache 2.0",
92 keywords="google api client",
93 classifiers=[
94 'Programming Language :: Python :: 2',
95 'Programming Language :: Python :: 2.6',
96 'Programming Language :: Python :: 2.7',
97 'Programming Language :: Python :: 3',
98 'Programming Language :: Python :: 3.3',
99 'Programming Language :: Python :: 3.4',
100 'Development Status :: 5 - Production/Stable',
101 'Intended Audience :: Developers',
102 'License :: OSI Approved :: Apache Software License',
103 'Operating System :: OS Independent',
104 'Topic :: Internet :: WWW/HTTP',
105 ],
106 )
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -63,7 +63,7 @@
]
install_requires = [
- 'httplib2>=0.8,<1dev',
+ 'httplib2>=0.9.2,<1dev',
'oauth2client>=1.5.0,<5.0.0dev',
'six>=1.6.1,<2dev',
'uritemplate>=3.0.0,<4dev',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -63,7 +63,7 @@\n ]\n \n install_requires = [\n- 'httplib2>=0.8,<1dev',\n+ 'httplib2>=0.9.2,<1dev',\n 'oauth2client>=1.5.0,<5.0.0dev',\n 'six>=1.6.1,<2dev',\n 'uritemplate>=3.0.0,<4dev',\n", "issue": "Suggest use of httplib2 v0.9.2 in readme if using python > 3\nPython3.x implementations of the client, passes bytestring to httplib2 header encoding method. Httplib2 < 0.9.2 does not support this. \n\nDocumentation should make a note that implementations in python3 should explicitly install httplib2 0.9.2. PyPi index targets 0.8.0, which is no bueno for those unaware of this issue. \n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (2, 6):\n print('google-api-python-client requires python version >= 2.6.',\n file=sys.stderr)\n sys.exit(1)\nif (3, 1) <= sys.version_info < (3, 3):\n print('google-api-python-client requires python3 version >= 3.3.',\n file=sys.stderr)\n sys.exit(1)\n\nfrom setuptools import setup\nimport pkg_resources\n\ndef _DetectBadness():\n import os\n if 'SKIP_GOOGLEAPICLIENT_COMPAT_CHECK' in os.environ:\n return\n o2c_pkg = None\n try:\n o2c_pkg = pkg_resources.get_distribution('oauth2client')\n except pkg_resources.DistributionNotFound:\n pass\n oauth2client = None\n try:\n import oauth2client\n except ImportError:\n pass\n if o2c_pkg is None and oauth2client is not None:\n raise RuntimeError(\n 'Previous version of google-api-python-client detected; due to a '\n 'packaging issue, we cannot perform an in-place upgrade. 
Please remove '\n 'the old version and re-install this package.'\n )\n\n_DetectBadness()\n\npackages = [\n 'apiclient',\n 'googleapiclient',\n 'googleapiclient/discovery_cache',\n]\n\ninstall_requires = [\n 'httplib2>=0.8,<1dev',\n 'oauth2client>=1.5.0,<5.0.0dev',\n 'six>=1.6.1,<2dev',\n 'uritemplate>=3.0.0,<4dev',\n]\n\nif sys.version_info < (2, 7):\n install_requires.append('argparse')\n\nlong_desc = \"\"\"The Google API Client for Python is a client library for\naccessing the Plus, Moderator, and many other Google APIs.\"\"\"\n\nimport googleapiclient\nversion = googleapiclient.__version__\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=long_desc,\n author=\"Google Inc.\",\n url=\"http://github.com/google/google-api-python-client/\",\n install_requires=install_requires,\n packages=packages,\n package_data={},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (2, 6):\n print('google-api-python-client requires python version >= 2.6.',\n file=sys.stderr)\n sys.exit(1)\nif (3, 1) <= sys.version_info < (3, 3):\n print('google-api-python-client requires python3 version >= 3.3.',\n file=sys.stderr)\n sys.exit(1)\n\nfrom setuptools import setup\nimport pkg_resources\n\ndef _DetectBadness():\n import os\n if 'SKIP_GOOGLEAPICLIENT_COMPAT_CHECK' in os.environ:\n return\n o2c_pkg = None\n try:\n o2c_pkg = pkg_resources.get_distribution('oauth2client')\n except pkg_resources.DistributionNotFound:\n pass\n oauth2client = None\n try:\n import oauth2client\n except ImportError:\n pass\n if o2c_pkg is None and oauth2client is not None:\n raise RuntimeError(\n 'Previous version of google-api-python-client detected; due to a '\n 'packaging issue, we cannot perform an in-place upgrade. 
Please remove '\n 'the old version and re-install this package.'\n )\n\n_DetectBadness()\n\npackages = [\n 'apiclient',\n 'googleapiclient',\n 'googleapiclient/discovery_cache',\n]\n\ninstall_requires = [\n 'httplib2>=0.9.2,<1dev',\n 'oauth2client>=1.5.0,<5.0.0dev',\n 'six>=1.6.1,<2dev',\n 'uritemplate>=3.0.0,<4dev',\n]\n\nif sys.version_info < (2, 7):\n install_requires.append('argparse')\n\nlong_desc = \"\"\"The Google API Client for Python is a client library for\naccessing the Plus, Moderator, and many other Google APIs.\"\"\"\n\nimport googleapiclient\nversion = googleapiclient.__version__\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=long_desc,\n author=\"Google Inc.\",\n url=\"http://github.com/google/google-api-python-client/\",\n install_requires=install_requires,\n packages=packages,\n package_data={},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n ],\n)\n", "path": "setup.py"}]}
| 1,374 | 115 |
gh_patches_debug_504
|
rasdani/github-patches
|
git_diff
|
beetbox__beets-3702
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"the" plugin uses incorrect regex
### Problem
Running this command:
```sh
~ ❯ beet move artist:trance -p
Moving 40 items.
M:\TET - Travailleur En Trance\[2008] Cobra Coded Escalation\01-01 - Cobra Reporting In.mp3
-> M:\‒ Travailleur En Trance, TET\[2008] Cobra Coded Escalation\01-01 - Cobra Reporting In.mp3
```
Led to this problem:
"TET" is recognized by the "the" plugin as something it should move. This is because the regex used in the.py, line 26, is
```
PATTERN_THE = u'^[the]{3}\\s'
```
which matches "TET", because the character class `[the]{3}` accepts any three letters drawn from t, h and e rather than the literal word. It should probably be:
```
PATTERN_THE = u'^the\\s'
```
### Setup
* OS: Windows 10 2004
* Python version: 3.8
* beets version: latest trunk
* Turning off plugins made problem go away (yes/no): obviously, if I disable 'the' it no longer does this
--- END ISSUE ---
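As a quick illustrative aside (not part of the original report), the two patterns can be compared directly; `[the]{3}` is a character class repeated three times, so it matches any three letters drawn from t, h and e, which is why "TET " slips through:
```python
# Quick check of the old and proposed patterns (illustrative only).
import re

OLD = r'^[the]{3}\s'   # any three characters from {t, h, e}, then whitespace
NEW = r'^the\s'        # the literal word "the", then whitespace

for text in ('TET - Travailleur En Trance', 'The Beatles', 'Theatre of Tragedy'):
    old_hit = bool(re.match(OLD, text, re.IGNORECASE))
    new_hit = bool(re.match(NEW, text, re.IGNORECASE))
    print('%-30s old=%s new=%s' % (text, old_hit, new_hit))
# 'TET - ...' matches the old pattern but not the proposed one;
# 'The Beatles' matches both; 'Theatre of Tragedy' matches neither.
```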
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/the.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This file is part of beets.
3 # Copyright 2016, Blemjhoo Tezoulbr <[email protected]>.
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining
6 # a copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish,
9 # distribute, sublicense, and/or sell copies of the Software, and to
10 # permit persons to whom the Software is furnished to do so, subject to
11 # the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be
14 # included in all copies or substantial portions of the Software.
15
16 """Moves patterns in path formats (suitable for moving articles)."""
17
18 from __future__ import division, absolute_import, print_function
19
20 import re
21 from beets.plugins import BeetsPlugin
22
23 __author__ = '[email protected]'
24 __version__ = '1.1'
25
26 PATTERN_THE = u'^[the]{3}\\s'
27 PATTERN_A = u'^[a][n]?\\s'
28 FORMAT = u'{0}, {1}'
29
30
31 class ThePlugin(BeetsPlugin):
32
33 patterns = []
34
35 def __init__(self):
36 super(ThePlugin, self).__init__()
37
38 self.template_funcs['the'] = self.the_template_func
39
40 self.config.add({
41 'the': True,
42 'a': True,
43 'format': u'{0}, {1}',
44 'strip': False,
45 'patterns': [],
46 })
47
48 self.patterns = self.config['patterns'].as_str_seq()
49 for p in self.patterns:
50 if p:
51 try:
52 re.compile(p)
53 except re.error:
54 self._log.error(u'invalid pattern: {0}', p)
55 else:
56 if not (p.startswith('^') or p.endswith('$')):
57 self._log.warning(u'warning: \"{0}\" will not '
58 u'match string start/end', p)
59 if self.config['a']:
60 self.patterns = [PATTERN_A] + self.patterns
61 if self.config['the']:
62 self.patterns = [PATTERN_THE] + self.patterns
63 if not self.patterns:
64 self._log.warning(u'no patterns defined!')
65
66 def unthe(self, text, pattern):
67 """Moves pattern in the path format string or strips it
68
69 text -- text to handle
70 pattern -- regexp pattern (case ignore is already on)
71 strip -- if True, pattern will be removed
72 """
73 if text:
74 r = re.compile(pattern, flags=re.IGNORECASE)
75 try:
76 t = r.findall(text)[0]
77 except IndexError:
78 return text
79 else:
80 r = re.sub(r, '', text).strip()
81 if self.config['strip']:
82 return r
83 else:
84 fmt = self.config['format'].as_str()
85 return fmt.format(r, t.strip()).strip()
86 else:
87 return u''
88
89 def the_template_func(self, text):
90 if not self.patterns:
91 return text
92 if text:
93 for p in self.patterns:
94 r = self.unthe(text, p)
95 if r != text:
96 self._log.debug(u'\"{0}\" -> \"{1}\"', text, r)
97 break
98 return r
99 else:
100 return u''
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/beetsplug/the.py b/beetsplug/the.py
--- a/beetsplug/the.py
+++ b/beetsplug/the.py
@@ -23,7 +23,7 @@
__author__ = '[email protected]'
__version__ = '1.1'
-PATTERN_THE = u'^[the]{3}\\s'
+PATTERN_THE = u'^the\\s'
PATTERN_A = u'^[a][n]?\\s'
FORMAT = u'{0}, {1}'
|
{"golden_diff": "diff --git a/beetsplug/the.py b/beetsplug/the.py\n--- a/beetsplug/the.py\n+++ b/beetsplug/the.py\n@@ -23,7 +23,7 @@\n __author__ = '[email protected]'\n __version__ = '1.1'\n \n-PATTERN_THE = u'^[the]{3}\\\\s'\n+PATTERN_THE = u'^the\\\\s'\n PATTERN_A = u'^[a][n]?\\\\s'\n FORMAT = u'{0}, {1}'\n", "issue": "\"the\" plugin uses incorrect regex\n### Problem\r\n\r\nRunning this command:\r\n\r\n```sh\r\n~ \u276f beet move artist:trance -p\r\nMoving 40 items.\r\nM:\\TET - Travailleur En Trance\\[2008] Cobra Coded Escalation\\01-01 - Cobra Reporting In.mp3\r\n -> M:\\\u2012 Travailleur En Trance, TET\\[2008] Cobra Coded Escalation\\01-01 - Cobra Reporting In.mp3\r\n```\r\n\r\nLed to this problem:\r\n\r\n\"TET\" is recognized by the \"the\" plugin as something it should move. This is because the regex used in the.py, line 26, is\r\n```\r\nPATTERN_THE = u'^[the]{3}\\\\s'\r\n```\r\nwhich matches \"TET\". It should probably be:\r\n```\r\nPATTERN_THE = u'^the\\\\s'\r\n```\r\n\r\n### Setup\r\n\r\n* OS: Windows 10 2004\r\n* Python version: 3.8\r\n* beets version: latest trunk\r\n* Turning off plugins made problem go away (yes/no): obviously, if I disable 'the' it no longer does this\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Blemjhoo Tezoulbr <[email protected]>.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Moves patterns in path formats (suitable for moving articles).\"\"\"\n\nfrom __future__ import division, absolute_import, print_function\n\nimport re\nfrom beets.plugins import BeetsPlugin\n\n__author__ = '[email protected]'\n__version__ = '1.1'\n\nPATTERN_THE = u'^[the]{3}\\\\s'\nPATTERN_A = u'^[a][n]?\\\\s'\nFORMAT = u'{0}, {1}'\n\n\nclass ThePlugin(BeetsPlugin):\n\n patterns = []\n\n def __init__(self):\n super(ThePlugin, self).__init__()\n\n self.template_funcs['the'] = self.the_template_func\n\n self.config.add({\n 'the': True,\n 'a': True,\n 'format': u'{0}, {1}',\n 'strip': False,\n 'patterns': [],\n })\n\n self.patterns = self.config['patterns'].as_str_seq()\n for p in self.patterns:\n if p:\n try:\n re.compile(p)\n except re.error:\n self._log.error(u'invalid pattern: {0}', p)\n else:\n if not (p.startswith('^') or p.endswith('$')):\n self._log.warning(u'warning: \\\"{0}\\\" will not '\n u'match string start/end', p)\n if self.config['a']:\n self.patterns = [PATTERN_A] + self.patterns\n if self.config['the']:\n self.patterns = [PATTERN_THE] + self.patterns\n if not self.patterns:\n self._log.warning(u'no patterns defined!')\n\n def unthe(self, text, pattern):\n \"\"\"Moves pattern in the path format string or strips it\n\n text -- text to handle\n pattern -- regexp pattern (case ignore is already on)\n strip -- if True, pattern will be removed\n \"\"\"\n if text:\n r = re.compile(pattern, flags=re.IGNORECASE)\n try:\n t = r.findall(text)[0]\n except IndexError:\n return text\n else:\n r = re.sub(r, '', text).strip()\n if self.config['strip']:\n return r\n 
else:\n fmt = self.config['format'].as_str()\n return fmt.format(r, t.strip()).strip()\n else:\n return u''\n\n def the_template_func(self, text):\n if not self.patterns:\n return text\n if text:\n for p in self.patterns:\n r = self.unthe(text, p)\n if r != text:\n self._log.debug(u'\\\"{0}\\\" -> \\\"{1}\\\"', text, r)\n break\n return r\n else:\n return u''\n", "path": "beetsplug/the.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Blemjhoo Tezoulbr <[email protected]>.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Moves patterns in path formats (suitable for moving articles).\"\"\"\n\nfrom __future__ import division, absolute_import, print_function\n\nimport re\nfrom beets.plugins import BeetsPlugin\n\n__author__ = '[email protected]'\n__version__ = '1.1'\n\nPATTERN_THE = u'^the\\\\s'\nPATTERN_A = u'^[a][n]?\\\\s'\nFORMAT = u'{0}, {1}'\n\n\nclass ThePlugin(BeetsPlugin):\n\n patterns = []\n\n def __init__(self):\n super(ThePlugin, self).__init__()\n\n self.template_funcs['the'] = self.the_template_func\n\n self.config.add({\n 'the': True,\n 'a': True,\n 'format': u'{0}, {1}',\n 'strip': False,\n 'patterns': [],\n })\n\n self.patterns = self.config['patterns'].as_str_seq()\n for p in self.patterns:\n if p:\n try:\n re.compile(p)\n except re.error:\n self._log.error(u'invalid pattern: {0}', p)\n else:\n if not (p.startswith('^') or p.endswith('$')):\n self._log.warning(u'warning: \\\"{0}\\\" will not '\n u'match string start/end', p)\n if self.config['a']:\n self.patterns = [PATTERN_A] + self.patterns\n if self.config['the']:\n self.patterns = [PATTERN_THE] + self.patterns\n if not self.patterns:\n self._log.warning(u'no patterns defined!')\n\n def unthe(self, text, pattern):\n \"\"\"Moves pattern in the path format string or strips it\n\n text -- text to handle\n pattern -- regexp pattern (case ignore is already on)\n strip -- if True, pattern will be removed\n \"\"\"\n if text:\n r = re.compile(pattern, flags=re.IGNORECASE)\n try:\n t = r.findall(text)[0]\n except IndexError:\n return text\n else:\n r = re.sub(r, '', text).strip()\n if self.config['strip']:\n return r\n else:\n fmt = self.config['format'].as_str()\n return fmt.format(r, t.strip()).strip()\n else:\n return u''\n\n def the_template_func(self, text):\n if not self.patterns:\n return text\n if text:\n for p in self.patterns:\n r = self.unthe(text, p)\n if r != text:\n self._log.debug(u'\\\"{0}\\\" -> \\\"{1}\\\"', text, r)\n break\n return r\n else:\n return u''\n", "path": "beetsplug/the.py"}]}
| 1,454 | 115 |
gh_patches_debug_38490
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-modules-extras-68
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for "cpanm --mirror-only"
##### Issue Type:
Feature Idea
##### Ansible Version:
1.7.2
##### Environment:
Ubuntu, but this goes for any environment.
##### Summary:
It would be great if it were possible to specify `mirror_only=yes` in the cpanm module. This enables installs from a [minicpan](https://metacpan.org/pod/CPAN::Mini) mirror in a local environment where extra packages have been injected.
Example:
```
cpanm --mirror-only --mirror http://cpan.myinternal.net My::Custom::Module
```
Without "--mirror-only", cpanm will try to lookup My::Custom::Module in CPAN Meta DB.
##### Steps To Reproduce:
##### Expected Results:
##### Actual Results:
--- END ISSUE ---
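As an illustrative aside (not part of the original request), a hedged sketch of how a boolean `mirror_only` option could be folded into the generated cpanm command line; the helper name and defaults are assumptions for illustration, not the module's real code:
```python
# Hedged sketch: append --mirror-only when the option is set.
def build_cpanm_cmd(name, mirror=None, mirror_only=False, cpanm='cpanm'):
    cmd = '{0} {1}'.format(cpanm, name)
    if mirror is not None:
        cmd = '{0} --mirror {1}'.format(cmd, mirror)
    if mirror_only:
        # Resolve modules against the mirror's own index, skipping CPAN Meta DB.
        cmd = '{0} --mirror-only'.format(cmd)
    return cmd

print(build_cpanm_cmd('My::Custom::Module',
                      mirror='http://cpan.myinternal.net',
                      mirror_only=True))
# cpanm My::Custom::Module --mirror http://cpan.myinternal.net --mirror-only
```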
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `packaging/language/cpanm.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2012, Franck Cuny <[email protected]>
5 #
6 # This file is part of Ansible
7 #
8 # Ansible is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # Ansible is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
20 #
21
22 DOCUMENTATION = '''
23 ---
24 module: cpanm
25 short_description: Manages Perl library dependencies.
26 description:
27 - Manage Perl library dependencies.
28 version_added: "1.6"
29 options:
30 name:
31 description:
32 - The name of the Perl library to install. You may use the "full distribution path", e.g. MIYAGAWA/Plack-0.99_05.tar.gz
33 required: false
34 default: null
35 aliases: ["pkg"]
36 from_path:
37 description:
38 - The local directory from where to install
39 required: false
40 default: null
41 notest:
42 description:
43 - Do not run unit tests
44 required: false
45 default: false
46 locallib:
47 description:
48 - Specify the install base to install modules
49 required: false
50 default: false
51 mirror:
52 description:
53 - Specifies the base URL for the CPAN mirror to use
54 required: false
55 default: false
56 examples:
57 - code: "cpanm: name=Dancer"
58 description: Install I(Dancer) perl package.
59 - code: "cpanm: name=MIYAGAWA/Plack-0.99_05.tar.gz"
60 description: Install version 0.99_05 of the I(Plack) perl package.
61 - code: "cpanm: name=Dancer locallib=/srv/webapps/my_app/extlib"
62 description: "Install I(Dancer) (U(http://perldancer.org/)) into the specified I(locallib)"
63 - code: "cpanm: from_path=/srv/webapps/my_app/src/"
64 description: Install perl dependencies from local directory.
65 - code: "cpanm: name=Dancer notest=True locallib=/srv/webapps/my_app/extlib"
66 description: Install I(Dancer) perl package without running the unit tests in indicated I(locallib).
67 - code: "cpanm: name=Dancer mirror=http://cpan.cpantesters.org/"
68 description: Install I(Dancer) perl package from a specific mirror
69 notes:
70 - Please note that U(http://search.cpan.org/dist/App-cpanminus/bin/cpanm, cpanm) must be installed on the remote host.
71 author: Franck Cuny
72 '''
73
74 def _is_package_installed(module, name, locallib, cpanm):
75 cmd = ""
76 if locallib:
77 os.environ["PERL5LIB"] = "%s/lib/perl5" % locallib
78 cmd = "%s perl -M%s -e '1'" % (cmd, name)
79 res, stdout, stderr = module.run_command(cmd, check_rc=False)
80 if res == 0:
81 return True
82 else:
83 return False
84
85 def _build_cmd_line(name, from_path, notest, locallib, mirror, cpanm):
86 # this code should use "%s" like everything else and just return early but not fixing all of it now.
87 # don't copy stuff like this
88 if from_path:
89 cmd = "{cpanm} {path}".format(cpanm=cpanm, path=from_path)
90 else:
91 cmd = "{cpanm} {name}".format(cpanm=cpanm, name=name)
92
93 if notest is True:
94 cmd = "{cmd} -n".format(cmd=cmd)
95
96 if locallib is not None:
97 cmd = "{cmd} -l {locallib}".format(cmd=cmd, locallib=locallib)
98
99 if mirror is not None:
100 cmd = "{cmd} --mirror {mirror}".format(cmd=cmd, mirror=mirror)
101
102 return cmd
103
104
105 def main():
106 arg_spec = dict(
107 name=dict(default=None, required=False, aliases=['pkg']),
108 from_path=dict(default=None, required=False),
109 notest=dict(default=False, type='bool'),
110 locallib=dict(default=None, required=False),
111 mirror=dict(default=None, required=False)
112 )
113
114 module = AnsibleModule(
115 argument_spec=arg_spec,
116 required_one_of=[['name', 'from_path']],
117 )
118
119 cpanm = module.get_bin_path('cpanm', True)
120 name = module.params['name']
121 from_path = module.params['from_path']
122 notest = module.boolean(module.params.get('notest', False))
123 locallib = module.params['locallib']
124 mirror = module.params['mirror']
125
126 changed = False
127
128 installed = _is_package_installed(module, name, locallib, cpanm)
129
130 if not installed:
131 out_cpanm = err_cpanm = ''
132 cmd = _build_cmd_line(name, from_path, notest, locallib, mirror, cpanm)
133
134 rc_cpanm, out_cpanm, err_cpanm = module.run_command(cmd, check_rc=False)
135
136 if rc_cpanm != 0:
137 module.fail_json(msg=err_cpanm, cmd=cmd)
138
139 if err_cpanm and 'is up to date' not in err_cpanm:
140 changed = True
141
142 module.exit_json(changed=changed, binary=cpanm, name=name)
143
144 # import module snippets
145 from ansible.module_utils.basic import *
146
147 main()
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/packaging/language/cpanm.py b/packaging/language/cpanm.py
--- a/packaging/language/cpanm.py
+++ b/packaging/language/cpanm.py
@@ -53,6 +53,11 @@
- Specifies the base URL for the CPAN mirror to use
required: false
default: false
+ mirror_only:
+ description:
+ - Use the mirror's index file instead of the CPAN Meta DB
+ required: false
+ default: false
examples:
- code: "cpanm: name=Dancer"
description: Install I(Dancer) perl package.
@@ -82,7 +87,7 @@
else:
return False
-def _build_cmd_line(name, from_path, notest, locallib, mirror, cpanm):
+def _build_cmd_line(name, from_path, notest, locallib, mirror, mirror_only, cpanm):
# this code should use "%s" like everything else and just return early but not fixing all of it now.
# don't copy stuff like this
if from_path:
@@ -99,6 +104,9 @@
if mirror is not None:
cmd = "{cmd} --mirror {mirror}".format(cmd=cmd, mirror=mirror)
+ if mirror_only is True:
+ cmd = "{cmd} --mirror-only".format(cmd=cmd)
+
return cmd
@@ -109,6 +117,7 @@
notest=dict(default=False, type='bool'),
locallib=dict(default=None, required=False),
mirror=dict(default=None, required=False)
+ mirror_only=dict(default=False, type='bool'),
)
module = AnsibleModule(
@@ -116,12 +125,13 @@
required_one_of=[['name', 'from_path']],
)
- cpanm = module.get_bin_path('cpanm', True)
- name = module.params['name']
- from_path = module.params['from_path']
- notest = module.boolean(module.params.get('notest', False))
- locallib = module.params['locallib']
- mirror = module.params['mirror']
+ cpanm = module.get_bin_path('cpanm', True)
+ name = module.params['name']
+ from_path = module.params['from_path']
+ notest = module.boolean(module.params.get('notest', False))
+ locallib = module.params['locallib']
+ mirror = module.params['mirror']
+ mirror_only = module.params['mirror_only']
changed = False
@@ -129,7 +139,7 @@
if not installed:
out_cpanm = err_cpanm = ''
- cmd = _build_cmd_line(name, from_path, notest, locallib, mirror, cpanm)
+ cmd = _build_cmd_line(name, from_path, notest, locallib, mirror, mirror_only, cpanm)
rc_cpanm, out_cpanm, err_cpanm = module.run_command(cmd, check_rc=False)
|
{"golden_diff": "diff --git a/packaging/language/cpanm.py b/packaging/language/cpanm.py\n--- a/packaging/language/cpanm.py\n+++ b/packaging/language/cpanm.py\n@@ -53,6 +53,11 @@\n - Specifies the base URL for the CPAN mirror to use\n required: false\n default: false\n+ mirror_only:\n+ description:\n+ - Use the mirror's index file instead of the CPAN Meta DB\n+ required: false\n+ default: false\n examples:\n - code: \"cpanm: name=Dancer\"\n description: Install I(Dancer) perl package.\n@@ -82,7 +87,7 @@\n else: \n return False\n \n-def _build_cmd_line(name, from_path, notest, locallib, mirror, cpanm):\n+def _build_cmd_line(name, from_path, notest, locallib, mirror, mirror_only, cpanm):\n # this code should use \"%s\" like everything else and just return early but not fixing all of it now.\n # don't copy stuff like this\n if from_path:\n@@ -99,6 +104,9 @@\n if mirror is not None:\n cmd = \"{cmd} --mirror {mirror}\".format(cmd=cmd, mirror=mirror)\n \n+ if mirror_only is True:\n+ cmd = \"{cmd} --mirror-only\".format(cmd=cmd)\n+\n return cmd\n \n \n@@ -109,6 +117,7 @@\n notest=dict(default=False, type='bool'),\n locallib=dict(default=None, required=False),\n mirror=dict(default=None, required=False)\n+ mirror_only=dict(default=False, type='bool'),\n )\n \n module = AnsibleModule(\n@@ -116,12 +125,13 @@\n required_one_of=[['name', 'from_path']],\n )\n \n- cpanm = module.get_bin_path('cpanm', True)\n- name = module.params['name']\n- from_path = module.params['from_path']\n- notest = module.boolean(module.params.get('notest', False))\n- locallib = module.params['locallib']\n- mirror = module.params['mirror']\n+ cpanm = module.get_bin_path('cpanm', True)\n+ name = module.params['name']\n+ from_path = module.params['from_path']\n+ notest = module.boolean(module.params.get('notest', False))\n+ locallib = module.params['locallib']\n+ mirror = module.params['mirror']\n+ mirror_only = module.params['mirror_only']\n \n changed = False\n \n@@ -129,7 +139,7 @@\n \n if not installed:\n out_cpanm = err_cpanm = ''\n- cmd = _build_cmd_line(name, from_path, notest, locallib, mirror, cpanm)\n+ cmd = _build_cmd_line(name, from_path, notest, locallib, mirror, mirror_only, cpanm)\n \n rc_cpanm, out_cpanm, err_cpanm = module.run_command(cmd, check_rc=False)\n", "issue": "Add support for \"cpanm --mirror-only\"\n##### Issue Type:\n\nFeature Idea\n##### Ansible Version:\n\n1.7.2\n##### Environment:\n\nUbuntu, but this goes for any environment.\n##### Summary:\n\nIt would be great if it was possible to specify `mirror_only=yes` in the cpanm module. 
This enable installs from [minicpan](https://metacpan.org/pod/CPAN::Mini) in a local environment where extra packages has been injected.\n\nExample:\n\n```\ncpanm --mirror-only --mirror http://cpan.myinternal.net My::Custom::Module\n```\n\nWithout \"--mirror-only\", cpanm will try to lookup My::Custom::Module in CPAN Meta DB.\n##### Steps To Reproduce:\n##### Expected Results:\n##### Actual Results:\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2012, Franck Cuny <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nDOCUMENTATION = '''\n---\nmodule: cpanm\nshort_description: Manages Perl library dependencies.\ndescription:\n - Manage Perl library dependencies.\nversion_added: \"1.6\"\noptions:\n name:\n description:\n - The name of the Perl library to install. You may use the \"full distribution path\", e.g. MIYAGAWA/Plack-0.99_05.tar.gz\n required: false\n default: null\n aliases: [\"pkg\"]\n from_path:\n description:\n - The local directory from where to install\n required: false\n default: null\n notest:\n description:\n - Do not run unit tests\n required: false\n default: false\n locallib:\n description:\n - Specify the install base to install modules\n required: false\n default: false\n mirror:\n description:\n - Specifies the base URL for the CPAN mirror to use\n required: false\n default: false\nexamples:\n - code: \"cpanm: name=Dancer\"\n description: Install I(Dancer) perl package.\n - code: \"cpanm: name=MIYAGAWA/Plack-0.99_05.tar.gz\"\n description: Install version 0.99_05 of the I(Plack) perl package.\n - code: \"cpanm: name=Dancer locallib=/srv/webapps/my_app/extlib\"\n description: \"Install I(Dancer) (U(http://perldancer.org/)) into the specified I(locallib)\"\n - code: \"cpanm: from_path=/srv/webapps/my_app/src/\"\n description: Install perl dependencies from local directory.\n - code: \"cpanm: name=Dancer notest=True locallib=/srv/webapps/my_app/extlib\"\n description: Install I(Dancer) perl package without running the unit tests in indicated I(locallib).\n - code: \"cpanm: name=Dancer mirror=http://cpan.cpantesters.org/\"\n description: Install I(Dancer) perl package from a specific mirror\nnotes:\n - Please note that U(http://search.cpan.org/dist/App-cpanminus/bin/cpanm, cpanm) must be installed on the remote host.\nauthor: Franck Cuny\n'''\n\ndef _is_package_installed(module, name, locallib, cpanm):\n cmd = \"\"\n if locallib:\n os.environ[\"PERL5LIB\"] = \"%s/lib/perl5\" % locallib\n cmd = \"%s perl -M%s -e '1'\" % (cmd, name)\n res, stdout, stderr = module.run_command(cmd, check_rc=False)\n if res == 0:\n return True\n else: \n return False\n\ndef _build_cmd_line(name, from_path, notest, locallib, mirror, cpanm):\n # this code should use \"%s\" like everything else and just return early but not fixing all of it now.\n # don't copy stuff like this\n if from_path:\n cmd = \"{cpanm} 
{path}\".format(cpanm=cpanm, path=from_path)\n else:\n cmd = \"{cpanm} {name}\".format(cpanm=cpanm, name=name)\n\n if notest is True:\n cmd = \"{cmd} -n\".format(cmd=cmd)\n\n if locallib is not None:\n cmd = \"{cmd} -l {locallib}\".format(cmd=cmd, locallib=locallib)\n\n if mirror is not None:\n cmd = \"{cmd} --mirror {mirror}\".format(cmd=cmd, mirror=mirror)\n\n return cmd\n\n\ndef main():\n arg_spec = dict(\n name=dict(default=None, required=False, aliases=['pkg']),\n from_path=dict(default=None, required=False),\n notest=dict(default=False, type='bool'),\n locallib=dict(default=None, required=False),\n mirror=dict(default=None, required=False)\n )\n\n module = AnsibleModule(\n argument_spec=arg_spec,\n required_one_of=[['name', 'from_path']],\n )\n\n cpanm = module.get_bin_path('cpanm', True)\n name = module.params['name']\n from_path = module.params['from_path']\n notest = module.boolean(module.params.get('notest', False))\n locallib = module.params['locallib']\n mirror = module.params['mirror']\n\n changed = False\n\n installed = _is_package_installed(module, name, locallib, cpanm)\n\n if not installed:\n out_cpanm = err_cpanm = ''\n cmd = _build_cmd_line(name, from_path, notest, locallib, mirror, cpanm)\n\n rc_cpanm, out_cpanm, err_cpanm = module.run_command(cmd, check_rc=False)\n\n if rc_cpanm != 0:\n module.fail_json(msg=err_cpanm, cmd=cmd)\n\n if err_cpanm and 'is up to date' not in err_cpanm:\n changed = True\n\n module.exit_json(changed=changed, binary=cpanm, name=name)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\n\nmain()\n", "path": "packaging/language/cpanm.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2012, Franck Cuny <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nDOCUMENTATION = '''\n---\nmodule: cpanm\nshort_description: Manages Perl library dependencies.\ndescription:\n - Manage Perl library dependencies.\nversion_added: \"1.6\"\noptions:\n name:\n description:\n - The name of the Perl library to install. You may use the \"full distribution path\", e.g. 
MIYAGAWA/Plack-0.99_05.tar.gz\n required: false\n default: null\n aliases: [\"pkg\"]\n from_path:\n description:\n - The local directory from where to install\n required: false\n default: null\n notest:\n description:\n - Do not run unit tests\n required: false\n default: false\n locallib:\n description:\n - Specify the install base to install modules\n required: false\n default: false\n mirror:\n description:\n - Specifies the base URL for the CPAN mirror to use\n required: false\n default: false\n mirror_only:\n description:\n - Use the mirror's index file instead of the CPAN Meta DB\n required: false\n default: false\nexamples:\n - code: \"cpanm: name=Dancer\"\n description: Install I(Dancer) perl package.\n - code: \"cpanm: name=MIYAGAWA/Plack-0.99_05.tar.gz\"\n description: Install version 0.99_05 of the I(Plack) perl package.\n - code: \"cpanm: name=Dancer locallib=/srv/webapps/my_app/extlib\"\n description: \"Install I(Dancer) (U(http://perldancer.org/)) into the specified I(locallib)\"\n - code: \"cpanm: from_path=/srv/webapps/my_app/src/\"\n description: Install perl dependencies from local directory.\n - code: \"cpanm: name=Dancer notest=True locallib=/srv/webapps/my_app/extlib\"\n description: Install I(Dancer) perl package without running the unit tests in indicated I(locallib).\n - code: \"cpanm: name=Dancer mirror=http://cpan.cpantesters.org/\"\n description: Install I(Dancer) perl package from a specific mirror\nnotes:\n - Please note that U(http://search.cpan.org/dist/App-cpanminus/bin/cpanm, cpanm) must be installed on the remote host.\nauthor: Franck Cuny\n'''\n\ndef _is_package_installed(module, name, locallib, cpanm):\n cmd = \"\"\n if locallib:\n os.environ[\"PERL5LIB\"] = \"%s/lib/perl5\" % locallib\n cmd = \"%s perl -M%s -e '1'\" % (cmd, name)\n res, stdout, stderr = module.run_command(cmd, check_rc=False)\n if res == 0:\n return True\n else: \n return False\n\ndef _build_cmd_line(name, from_path, notest, locallib, mirror, mirror_only, cpanm):\n # this code should use \"%s\" like everything else and just return early but not fixing all of it now.\n # don't copy stuff like this\n if from_path:\n cmd = \"{cpanm} {path}\".format(cpanm=cpanm, path=from_path)\n else:\n cmd = \"{cpanm} {name}\".format(cpanm=cpanm, name=name)\n\n if notest is True:\n cmd = \"{cmd} -n\".format(cmd=cmd)\n\n if locallib is not None:\n cmd = \"{cmd} -l {locallib}\".format(cmd=cmd, locallib=locallib)\n\n if mirror is not None:\n cmd = \"{cmd} --mirror {mirror}\".format(cmd=cmd, mirror=mirror)\n\n if mirror_only is True:\n cmd = \"{cmd} --mirror-only\".format(cmd=cmd)\n\n return cmd\n\n\ndef main():\n arg_spec = dict(\n name=dict(default=None, required=False, aliases=['pkg']),\n from_path=dict(default=None, required=False),\n notest=dict(default=False, type='bool'),\n locallib=dict(default=None, required=False),\n mirror=dict(default=None, required=False)\n mirror_only=dict(default=False, type='bool'),\n )\n\n module = AnsibleModule(\n argument_spec=arg_spec,\n required_one_of=[['name', 'from_path']],\n )\n\n cpanm = module.get_bin_path('cpanm', True)\n name = module.params['name']\n from_path = module.params['from_path']\n notest = module.boolean(module.params.get('notest', False))\n locallib = module.params['locallib']\n mirror = module.params['mirror']\n mirror_only = module.params['mirror_only']\n\n changed = False\n\n installed = _is_package_installed(module, name, locallib, cpanm)\n\n if not installed:\n out_cpanm = err_cpanm = ''\n cmd = _build_cmd_line(name, from_path, notest, locallib, 
mirror, mirror_only, cpanm)\n\n rc_cpanm, out_cpanm, err_cpanm = module.run_command(cmd, check_rc=False)\n\n if rc_cpanm != 0:\n module.fail_json(msg=err_cpanm, cmd=cmd)\n\n if err_cpanm and 'is up to date' not in err_cpanm:\n changed = True\n\n module.exit_json(changed=changed, binary=cpanm, name=name)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\n\nmain()\n", "path": "packaging/language/cpanm.py"}]}
| 2,119 | 716 |
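One detail worth flagging in the cpanm record above: the patched `arg_spec` keeps `mirror=dict(default=None, required=False)` without a trailing comma before the newly added `mirror_only` entry, which Python rejects as a syntax error. Below is a minimal sketch only (argument names taken from the record above, not the upstream Ansible module) of how that block would need to read for the new option to parse:

```python
# Sketch only (not the upstream module): the argument spec with the comma
# restored after `mirror`, so the added `mirror_only` option parses.
arg_spec = dict(
    name=dict(default=None, required=False, aliases=['pkg']),
    from_path=dict(default=None, required=False),
    notest=dict(default=False, type='bool'),
    locallib=dict(default=None, required=False),
    mirror=dict(default=None, required=False),     # trailing comma is required here
    mirror_only=dict(default=False, type='bool'),  # new flag from the patch above
)
```

With `mirror_only=yes`, the built command line gains `--mirror-only`, matching the `cpanm --mirror-only --mirror http://cpan.myinternal.net My::Custom::Module` invocation quoted in the issue.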
gh_patches_debug_777
|
rasdani/github-patches
|
git_diff
|
getmoto__moto-1840
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cryptography Package has a Security Vulnerability
Discovered using pipenv's security check feature that there's a vulnerability in the cryptography package versions<2.3.
> Checking installed package safety...
36351: cryptography >=1.9.0,<2.3 resolved (2.2.2 installed)!
python-cryptography versions >=1.9.0 and <2.3 did not enforce a minimum tag length for finalize_with_tag API. If a user did not validate the input length prior to passing it to finalize_with_tag an attacker could craft an invalid payload with a shortened tag (e.g. 1 byte) such that they would have a 1 in 256 chance of passing the MAC check. GCM tag forgeries can cause key leakage.
More details here: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-10903
setup.py should be updated to require cryptography>=2.3.0.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 from __future__ import unicode_literals
3 import setuptools
4 from setuptools import setup, find_packages
5 import sys
6
7
8 install_requires = [
9 "Jinja2>=2.7.3",
10 "boto>=2.36.0",
11 "boto3>=1.6.16,<1.8",
12 "botocore>=1.9.16,<1.11",
13 "cryptography>=2.0.0",
14 "requests>=2.5",
15 "xmltodict",
16 "six>1.9",
17 "werkzeug",
18 "pyaml",
19 "pytz",
20 "python-dateutil<3.0.0,>=2.1",
21 "python-jose<3.0.0",
22 "mock",
23 "docker>=2.5.1",
24 "jsondiff==1.1.1",
25 "aws-xray-sdk<0.96,>=0.93",
26 "responses>=0.9.0",
27 ]
28
29 extras_require = {
30 'server': ['flask'],
31 }
32
33 # https://hynek.me/articles/conditional-python-dependencies/
34 if int(setuptools.__version__.split(".", 1)[0]) < 18:
35 if sys.version_info[0:2] < (3, 3):
36 install_requires.append("backports.tempfile")
37 else:
38 extras_require[":python_version<'3.3'"] = ["backports.tempfile"]
39
40
41 setup(
42 name='moto',
43 version='1.3.6',
44 description='A library that allows your python tests to easily'
45 ' mock out the boto library',
46 author='Steve Pulec',
47 author_email='[email protected]',
48 url='https://github.com/spulec/moto',
49 entry_points={
50 'console_scripts': [
51 'moto_server = moto.server:main',
52 ],
53 },
54 packages=find_packages(exclude=("tests", "tests.*")),
55 install_requires=install_requires,
56 extras_require=extras_require,
57 include_package_data=True,
58 license="Apache",
59 test_suite="tests",
60 classifiers=[
61 "Programming Language :: Python :: 2",
62 "Programming Language :: Python :: 2.7",
63 "Programming Language :: Python :: 3",
64 "Programming Language :: Python :: 3.3",
65 "Programming Language :: Python :: 3.4",
66 "Programming Language :: Python :: 3.5",
67 "Programming Language :: Python :: 3.6",
68 "License :: OSI Approved :: Apache Software License",
69 "Topic :: Software Development :: Testing",
70 ],
71 )
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@
"boto>=2.36.0",
"boto3>=1.6.16,<1.8",
"botocore>=1.9.16,<1.11",
- "cryptography>=2.0.0",
+ "cryptography>=2.3.0",
"requests>=2.5",
"xmltodict",
"six>1.9",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,7 +10,7 @@\n \"boto>=2.36.0\",\n \"boto3>=1.6.16,<1.8\",\n \"botocore>=1.9.16,<1.11\",\n- \"cryptography>=2.0.0\",\n+ \"cryptography>=2.3.0\",\n \"requests>=2.5\",\n \"xmltodict\",\n \"six>1.9\",\n", "issue": "Cryptography Package has a Security Vulnerability\nDiscovered using pipenv's security check feature that there's a vulnerability in the cryptography package versions<2.3.\r\n \r\n> Checking installed package safety...\r\n36351: cryptography >=1.9.0,<2.3 resolved (2.2.2 installed)!\r\npython-cryptography versions >=1.9.0 and <2.3 did not enforce a minimum tag length for finalize_with_tag API. If a user did not validate the input length prior to passing it to finalize_with_tag an attacker could craft an invalid payload with a shortened tag (e.g. 1 byte) such that they would have a 1 in 256 chance of passing the MAC check. GCM tag forgeries can cause key leakage.\r\n \r\nMore details here: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-10903\r\n\r\nsetup.py should be updated to require cryptography>=2.3.0.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import unicode_literals\nimport setuptools\nfrom setuptools import setup, find_packages\nimport sys\n\n\ninstall_requires = [\n \"Jinja2>=2.7.3\",\n \"boto>=2.36.0\",\n \"boto3>=1.6.16,<1.8\",\n \"botocore>=1.9.16,<1.11\",\n \"cryptography>=2.0.0\",\n \"requests>=2.5\",\n \"xmltodict\",\n \"six>1.9\",\n \"werkzeug\",\n \"pyaml\",\n \"pytz\",\n \"python-dateutil<3.0.0,>=2.1\",\n \"python-jose<3.0.0\",\n \"mock\",\n \"docker>=2.5.1\",\n \"jsondiff==1.1.1\",\n \"aws-xray-sdk<0.96,>=0.93\",\n \"responses>=0.9.0\",\n]\n\nextras_require = {\n 'server': ['flask'],\n}\n\n# https://hynek.me/articles/conditional-python-dependencies/\nif int(setuptools.__version__.split(\".\", 1)[0]) < 18:\n if sys.version_info[0:2] < (3, 3):\n install_requires.append(\"backports.tempfile\")\nelse:\n extras_require[\":python_version<'3.3'\"] = [\"backports.tempfile\"]\n\n\nsetup(\n name='moto',\n version='1.3.6',\n description='A library that allows your python tests to easily'\n ' mock out the boto library',\n author='Steve Pulec',\n author_email='[email protected]',\n url='https://github.com/spulec/moto',\n entry_points={\n 'console_scripts': [\n 'moto_server = moto.server:main',\n ],\n },\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n install_requires=install_requires,\n extras_require=extras_require,\n include_package_data=True,\n license=\"Apache\",\n test_suite=\"tests\",\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Topic :: Software Development :: Testing\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import unicode_literals\nimport setuptools\nfrom setuptools import setup, find_packages\nimport sys\n\n\ninstall_requires = [\n \"Jinja2>=2.7.3\",\n \"boto>=2.36.0\",\n \"boto3>=1.6.16,<1.8\",\n \"botocore>=1.9.16,<1.11\",\n \"cryptography>=2.3.0\",\n \"requests>=2.5\",\n \"xmltodict\",\n \"six>1.9\",\n \"werkzeug\",\n \"pyaml\",\n \"pytz\",\n \"python-dateutil<3.0.0,>=2.1\",\n \"python-jose<3.0.0\",\n \"mock\",\n 
\"docker>=2.5.1\",\n \"jsondiff==1.1.1\",\n \"aws-xray-sdk<0.96,>=0.93\",\n \"responses>=0.9.0\",\n]\n\nextras_require = {\n 'server': ['flask'],\n}\n\n# https://hynek.me/articles/conditional-python-dependencies/\nif int(setuptools.__version__.split(\".\", 1)[0]) < 18:\n if sys.version_info[0:2] < (3, 3):\n install_requires.append(\"backports.tempfile\")\nelse:\n extras_require[\":python_version<'3.3'\"] = [\"backports.tempfile\"]\n\n\nsetup(\n name='moto',\n version='1.3.6',\n description='A library that allows your python tests to easily'\n ' mock out the boto library',\n author='Steve Pulec',\n author_email='[email protected]',\n url='https://github.com/spulec/moto',\n entry_points={\n 'console_scripts': [\n 'moto_server = moto.server:main',\n ],\n },\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n install_requires=install_requires,\n extras_require=extras_require,\n include_package_data=True,\n license=\"Apache\",\n test_suite=\"tests\",\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Topic :: Software Development :: Testing\",\n ],\n)\n", "path": "setup.py"}]}
| 1,170 | 125 |
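The moto fix above is a one-line constraint bump: `cryptography>=2.0.0` becomes `cryptography>=2.3.0`, excluding the releases affected by CVE-2018-10903. The snippet below is an illustrative check only; it is not part of moto and assumes the third-party `packaging` library is available. It shows how such a specifier treats the vulnerable version reported by the safety check:

```python
# Illustrative only: compare version strings against the tightened requirement
# using the `packaging` library.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

requirement = SpecifierSet(">=2.3.0")  # the constraint introduced by the patch

for candidate in ("2.2.2", "2.3.0"):
    inside = Version(candidate) in requirement
    status = "satisfies the new constraint" if inside else "rejected (vulnerable range)"
    print(f"cryptography {candidate}: {status}")
```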
gh_patches_debug_31049
|
rasdani/github-patches
|
git_diff
|
freqtrade__freqtrade-1656
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Minimal ROI value is not really optional, Bot Crash if minimal_roi have no 0 value
## The problem:
### The "minimal_roi" configuration object is suposed to be optionnal as explained in the doc:
> Most of the strategy files already include the optimal minimal_roi value. This parameter is optional. If you use it in the configuration file, it will take over the minimal_roi value from the strategy file.
Example:
> minimal_roi = {
> "40": 0.0,
> "30": 0.01,
> "20": 0.02,
> "0": 0.04
> }
**However, if I remove this parameter in the strategy and config, I get this error:**
> 2019-03-06 21:56:56,768 - freqtrade - ERROR - Fatal exception!
> Traceback (most recent call last):
> File "./freqtrade/main.py", line 45, in main
> freqtrade = FreqtradeBot(config)
> File "/home/axel/dev/shatta/framework-dev/private/freqtrade/freqtrade/freqtradebot.py", line 54, in __init__
> self.strategy: IStrategy = StrategyResolver(self.config).strategy
> File "/home/axel/dev/shatta/framework-dev/private/freqtrade/freqtrade/resolvers/strategy_resolver.py", line 77, in __init__
> {int(key): value for (key, value) in self.strategy.minimal_roi.items()}.items(),
> AttributeError: 'ShattaStrategy' object has no attribute 'minimal_roi'
**Also, if I remove the 0-value key, like this:**
> minimal_roi = {
> "40": 0.0,
> "30": 0.01,
> "20": 0.02,
> }
**The bot start but crash when trying to check ROI with this error:**
> File "/freqtrade/freqtrade/freqtradebot.py", line 699, in check_sell
> should_sell = self.strategy.should_sell(trade, sell_rate, datetime.utcnow(), buy, sell)
> File "/framework-dev/private/freqtrade/freqtrade/strategy/interface.py", line 273, in should_sell
> if self.min_roi_reached(trade=trade, current_profit=current_profit, current_time=date):
> File "/framework-dev/private/freqtrade/freqtrade/strategy/**interface.py", line 349**, in min_roi_reached
> **roi_entry = max(list(filter(lambda x: trade_dur >= x, self.minimal_roi.keys())))**
### Standalone Python test to reproduce:
> minimal_roi_keys = [0, 20, 30, 40, 50] ## Remove the 0 to trigger error
> trade_dur = 0.3486263354619344
> roi_entry = max(list(filter(lambda x: trade_dur >= x, minimal_roi_keys)))
> print(str(roi_entry))
### Solutions
1. The quick, easy solution to deal with this issue temporarily is to make the minimal_roi parameter mandatory and make "min_roi_reached" work with a missing 0. Users who don't want to use ROI can simply put an unrealistically high value.
2. The other is to make this parameter truly optional.
--- END ISSUE ---
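The crash in the standalone test above comes from `max()` being applied to an empty sequence: once the `0` key is removed, no ROI threshold is less than or equal to a sub-minute trade duration, `filter()` yields nothing, and `max()` raises `ValueError`. A short sketch (illustrative only, not freqtrade code) of the failure and of a defensive lookup using `max(..., default=...)`:

```python
# Illustrative only: reproduce the reported crash and show a defensive lookup.
minimal_roi_keys = [20, 30, 40, 50]      # the "0" entry has been removed
trade_dur = 0.3486263354619344           # trade duration in minutes

try:
    max(list(filter(lambda x: trade_dur >= x, minimal_roi_keys)))
except ValueError as exc:
    print(f"crashes as reported: {exc}")  # empty sequence passed to max()

# max() accepts a default for the empty case; None can then mean
# "no ROI threshold reached yet".
roi_entry = max((x for x in minimal_roi_keys if trade_dur >= x), default=None)
print(roi_entry)  # None
```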
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/resolvers/strategy_resolver.py`
Content:
```
1 # pragma pylint: disable=attribute-defined-outside-init
2
3 """
4 This module load custom strategies
5 """
6 import logging
7 import tempfile
8 from base64 import urlsafe_b64decode
9 from collections import OrderedDict
10 from inspect import getfullargspec
11 from pathlib import Path
12 from typing import Dict, Optional
13
14 from freqtrade import constants
15 from freqtrade.resolvers import IResolver
16 from freqtrade.strategy import import_strategy
17 from freqtrade.strategy.interface import IStrategy
18
19 logger = logging.getLogger(__name__)
20
21
22 class StrategyResolver(IResolver):
23 """
24 This class contains all the logic to load custom strategy class
25 """
26
27 __slots__ = ['strategy']
28
29 def __init__(self, config: Optional[Dict] = None) -> None:
30 """
31 Load the custom class from config parameter
32 :param config: configuration dictionary or None
33 """
34 config = config or {}
35
36 # Verify the strategy is in the configuration, otherwise fallback to the default strategy
37 strategy_name = config.get('strategy') or constants.DEFAULT_STRATEGY
38 self.strategy: IStrategy = self._load_strategy(strategy_name,
39 config=config,
40 extra_dir=config.get('strategy_path'))
41
42 # make sure experimental dict is available
43 if 'experimental' not in config:
44 config['experimental'] = {}
45
46 # Set attributes
47 # Check if we need to override configuration
48 # (Attribute name, default, experimental)
49 attributes = [("minimal_roi", None, False),
50 ("ticker_interval", None, False),
51 ("stoploss", None, False),
52 ("trailing_stop", None, False),
53 ("trailing_stop_positive", None, False),
54 ("trailing_stop_positive_offset", 0.0, False),
55 ("process_only_new_candles", None, False),
56 ("order_types", None, False),
57 ("order_time_in_force", None, False),
58 ("use_sell_signal", False, True),
59 ("sell_profit_only", False, True),
60 ("ignore_roi_if_buy_signal", False, True),
61 ]
62 for attribute, default, experimental in attributes:
63 if experimental:
64 self._override_attribute_helper(config['experimental'], attribute, default)
65 else:
66 self._override_attribute_helper(config, attribute, default)
67
68 # Loop this list again to have output combined
69 for attribute, _, exp in attributes:
70 if exp and attribute in config['experimental']:
71 logger.info("Strategy using %s: %s", attribute, config['experimental'][attribute])
72 elif attribute in config:
73 logger.info("Strategy using %s: %s", attribute, config[attribute])
74
75 # Sort and apply type conversions
76 self.strategy.minimal_roi = OrderedDict(sorted(
77 {int(key): value for (key, value) in self.strategy.minimal_roi.items()}.items(),
78 key=lambda t: t[0]))
79 self.strategy.stoploss = float(self.strategy.stoploss)
80
81 self._strategy_sanity_validations()
82
83 def _override_attribute_helper(self, config, attribute: str, default):
84 """
85 Override attributes in the strategy.
86 Prevalence:
87 - Configuration
88 - Strategy
89 - default (if not None)
90 """
91 if attribute in config:
92 setattr(self.strategy, attribute, config[attribute])
93 logger.info("Override strategy '%s' with value in config file: %s.",
94 attribute, config[attribute])
95 elif hasattr(self.strategy, attribute):
96 config[attribute] = getattr(self.strategy, attribute)
97 # Explicitly check for None here as other "falsy" values are possible
98 elif default is not None:
99 setattr(self.strategy, attribute, default)
100 config[attribute] = default
101
102 def _strategy_sanity_validations(self):
103 if not all(k in self.strategy.order_types for k in constants.REQUIRED_ORDERTYPES):
104 raise ImportError(f"Impossible to load Strategy '{self.strategy.__class__.__name__}'. "
105 f"Order-types mapping is incomplete.")
106
107 if not all(k in self.strategy.order_time_in_force for k in constants.REQUIRED_ORDERTIF):
108 raise ImportError(f"Impossible to load Strategy '{self.strategy.__class__.__name__}'. "
109 f"Order-time-in-force mapping is incomplete.")
110
111 def _load_strategy(
112 self, strategy_name: str, config: dict, extra_dir: Optional[str] = None) -> IStrategy:
113 """
114 Search and loads the specified strategy.
115 :param strategy_name: name of the module to import
116 :param config: configuration for the strategy
117 :param extra_dir: additional directory to search for the given strategy
118 :return: Strategy instance or None
119 """
120 current_path = Path(__file__).parent.parent.joinpath('strategy').resolve()
121
122 abs_paths = [
123 Path.cwd().joinpath('user_data/strategies'),
124 current_path,
125 ]
126
127 if extra_dir:
128 # Add extra strategy directory on top of search paths
129 abs_paths.insert(0, Path(extra_dir).resolve())
130
131 if ":" in strategy_name:
132 logger.info("loading base64 endocded strategy")
133 strat = strategy_name.split(":")
134
135 if len(strat) == 2:
136 temp = Path(tempfile.mkdtemp("freq", "strategy"))
137 name = strat[0] + ".py"
138
139 temp.joinpath(name).write_text(urlsafe_b64decode(strat[1]).decode('utf-8'))
140 temp.joinpath("__init__.py").touch()
141
142 strategy_name = strat[0]
143
144 # register temp path with the bot
145 abs_paths.insert(0, temp.resolve())
146
147 for _path in abs_paths:
148 try:
149 strategy = self._search_object(directory=_path, object_type=IStrategy,
150 object_name=strategy_name, kwargs={'config': config})
151 if strategy:
152 logger.info("Using resolved strategy %s from '%s'", strategy_name, _path)
153 strategy._populate_fun_len = len(
154 getfullargspec(strategy.populate_indicators).args)
155 strategy._buy_fun_len = len(getfullargspec(strategy.populate_buy_trend).args)
156 strategy._sell_fun_len = len(getfullargspec(strategy.populate_sell_trend).args)
157
158 return import_strategy(strategy, config=config)
159 except FileNotFoundError:
160 logger.warning('Path "%s" does not exist', _path.relative_to(Path.cwd()))
161
162 raise ImportError(
163 "Impossible to load Strategy '{}'. This class does not exist"
164 " or contains Python code errors".format(strategy_name)
165 )
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/freqtrade/resolvers/strategy_resolver.py b/freqtrade/resolvers/strategy_resolver.py
--- a/freqtrade/resolvers/strategy_resolver.py
+++ b/freqtrade/resolvers/strategy_resolver.py
@@ -46,18 +46,18 @@
# Set attributes
# Check if we need to override configuration
# (Attribute name, default, experimental)
- attributes = [("minimal_roi", None, False),
- ("ticker_interval", None, False),
- ("stoploss", None, False),
- ("trailing_stop", None, False),
- ("trailing_stop_positive", None, False),
- ("trailing_stop_positive_offset", 0.0, False),
- ("process_only_new_candles", None, False),
- ("order_types", None, False),
- ("order_time_in_force", None, False),
- ("use_sell_signal", False, True),
- ("sell_profit_only", False, True),
- ("ignore_roi_if_buy_signal", False, True),
+ attributes = [("minimal_roi", {"0": 10.0}, False),
+ ("ticker_interval", None, False),
+ ("stoploss", None, False),
+ ("trailing_stop", None, False),
+ ("trailing_stop_positive", None, False),
+ ("trailing_stop_positive_offset", 0.0, False),
+ ("process_only_new_candles", None, False),
+ ("order_types", None, False),
+ ("order_time_in_force", None, False),
+ ("use_sell_signal", False, True),
+ ("sell_profit_only", False, True),
+ ("ignore_roi_if_buy_signal", False, True),
]
for attribute, default, experimental in attributes:
if experimental:
|
{"golden_diff": "diff --git a/freqtrade/resolvers/strategy_resolver.py b/freqtrade/resolvers/strategy_resolver.py\n--- a/freqtrade/resolvers/strategy_resolver.py\n+++ b/freqtrade/resolvers/strategy_resolver.py\n@@ -46,18 +46,18 @@\n # Set attributes\n # Check if we need to override configuration\n # (Attribute name, default, experimental)\n- attributes = [(\"minimal_roi\", None, False),\n- (\"ticker_interval\", None, False),\n- (\"stoploss\", None, False),\n- (\"trailing_stop\", None, False),\n- (\"trailing_stop_positive\", None, False),\n- (\"trailing_stop_positive_offset\", 0.0, False),\n- (\"process_only_new_candles\", None, False),\n- (\"order_types\", None, False),\n- (\"order_time_in_force\", None, False),\n- (\"use_sell_signal\", False, True),\n- (\"sell_profit_only\", False, True),\n- (\"ignore_roi_if_buy_signal\", False, True),\n+ attributes = [(\"minimal_roi\", {\"0\": 10.0}, False),\n+ (\"ticker_interval\", None, False),\n+ (\"stoploss\", None, False),\n+ (\"trailing_stop\", None, False),\n+ (\"trailing_stop_positive\", None, False),\n+ (\"trailing_stop_positive_offset\", 0.0, False),\n+ (\"process_only_new_candles\", None, False),\n+ (\"order_types\", None, False),\n+ (\"order_time_in_force\", None, False),\n+ (\"use_sell_signal\", False, True),\n+ (\"sell_profit_only\", False, True),\n+ (\"ignore_roi_if_buy_signal\", False, True),\n ]\n for attribute, default, experimental in attributes:\n if experimental:\n", "issue": "Minimal ROI value is not really optional, Bot Crash if minimal_roi have no 0 value\n## The problem:\r\n### The \"minimal_roi\" configuration object is suposed to be optionnal as explained in the doc:\r\n\r\n> Most of the strategy files already include the optimal minimal_roi value. This parameter is optional. If you use it in the configuration file, it will take over the minimal_roi value from the strategy file.\r\n\r\nExemple:\r\n\r\n> minimal_roi = {\r\n> \"40\": 0.0,\r\n> \"30\": 0.01,\r\n> \"20\": 0.02,\r\n> \"0\": 0.04\r\n> }\r\n\r\n**However, If I remove this parameter in the strategy and config I get this error:**\r\n\r\n> 2019-03-06 21:56:56,768 - freqtrade - ERROR - Fatal exception!\r\n> Traceback (most recent call last):\r\n> File \"./freqtrade/main.py\", line 45, in main\r\n> freqtrade = FreqtradeBot(config)\r\n> File \"/home/axel/dev/shatta/framework-dev/private/freqtrade/freqtrade/freqtradebot.py\", line 54, in __init__\r\n> self.strategy: IStrategy = StrategyResolver(self.config).strategy\r\n> File \"/home/axel/dev/shatta/framework-dev/private/freqtrade/freqtrade/resolvers/strategy_resolver.py\", line 77, in __init__\r\n> {int(key): value for (key, value) in self.strategy.minimal_roi.items()}.items(),\r\n> AttributeError: 'ShattaStrategy' object has no attribute 'minimal_roi'\r\n\r\n **Also, If I remove the 0 value key like that:**\r\n\r\n> minimal_roi = {\r\n> \"40\": 0.0,\r\n> \"30\": 0.01,\r\n> \"20\": 0.02,\r\n> }\r\n\r\n**The bot start but crash when trying to check ROI with this error:**\r\n> File \"/freqtrade/freqtrade/freqtradebot.py\", line 699, in check_sell\r\n> should_sell = self.strategy.should_sell(trade, sell_rate, datetime.utcnow(), buy, sell)\r\n> File \"/framework-dev/private/freqtrade/freqtrade/strategy/interface.py\", line 273, in should_sell\r\n> if self.min_roi_reached(trade=trade, current_profit=current_profit, current_time=date):\r\n> File \"/framework-dev/private/freqtrade/freqtrade/strategy/**interface.py\", line 349**, in min_roi_reached\r\n> **roi_entry = max(list(filter(lambda x: trade_dur >= x, 
self.minimal_roi.keys())))**\r\n\r\n### Standalone Python test to reproduce:\r\n\r\n> minimal_roi_keys = [0, 20, 30, 40, 50] ## Remove the 0 to trigger error\r\n> trade_dur = 0.3486263354619344\r\n> roi_entry = max(list(filter(lambda x: trade_dur >= x, minimal_roi_keys)))\r\n> print(str(roi_entry))\r\n\r\n\r\n### Solutions\r\n\r\n1. The quick easy solution to temporaly deal with that issue is to make the minimal_roi parameter mandatory and make the \"min_roi_reached\" work with missing 0. Users who don't want to use ROI can simply put an unrealistic hight value.\r\n\r\n2. The other is to make this parameter truly optionnal.\r\n\r\n\nMinimal ROI value is not really optional, Bot Crash if minimal_roi have no 0 value\n## The problem:\r\n### The \"minimal_roi\" configuration object is suposed to be optionnal as explained in the doc:\r\n\r\n> Most of the strategy files already include the optimal minimal_roi value. This parameter is optional. If you use it in the configuration file, it will take over the minimal_roi value from the strategy file.\r\n\r\nExemple:\r\n\r\n> minimal_roi = {\r\n> \"40\": 0.0,\r\n> \"30\": 0.01,\r\n> \"20\": 0.02,\r\n> \"0\": 0.04\r\n> }\r\n\r\n**However, If I remove this parameter in the strategy and config I get this error:**\r\n\r\n> 2019-03-06 21:56:56,768 - freqtrade - ERROR - Fatal exception!\r\n> Traceback (most recent call last):\r\n> File \"./freqtrade/main.py\", line 45, in main\r\n> freqtrade = FreqtradeBot(config)\r\n> File \"/home/axel/dev/shatta/framework-dev/private/freqtrade/freqtrade/freqtradebot.py\", line 54, in __init__\r\n> self.strategy: IStrategy = StrategyResolver(self.config).strategy\r\n> File \"/home/axel/dev/shatta/framework-dev/private/freqtrade/freqtrade/resolvers/strategy_resolver.py\", line 77, in __init__\r\n> {int(key): value for (key, value) in self.strategy.minimal_roi.items()}.items(),\r\n> AttributeError: 'ShattaStrategy' object has no attribute 'minimal_roi'\r\n\r\n **Also, If I remove the 0 value key like that:**\r\n\r\n> minimal_roi = {\r\n> \"40\": 0.0,\r\n> \"30\": 0.01,\r\n> \"20\": 0.02,\r\n> }\r\n\r\n**The bot start but crash when trying to check ROI with this error:**\r\n> File \"/freqtrade/freqtrade/freqtradebot.py\", line 699, in check_sell\r\n> should_sell = self.strategy.should_sell(trade, sell_rate, datetime.utcnow(), buy, sell)\r\n> File \"/framework-dev/private/freqtrade/freqtrade/strategy/interface.py\", line 273, in should_sell\r\n> if self.min_roi_reached(trade=trade, current_profit=current_profit, current_time=date):\r\n> File \"/framework-dev/private/freqtrade/freqtrade/strategy/**interface.py\", line 349**, in min_roi_reached\r\n> **roi_entry = max(list(filter(lambda x: trade_dur >= x, self.minimal_roi.keys())))**\r\n\r\n### Standalone Python test to reproduce:\r\n\r\n> minimal_roi_keys = [0, 20, 30, 40, 50] ## Remove the 0 to trigger error\r\n> trade_dur = 0.3486263354619344\r\n> roi_entry = max(list(filter(lambda x: trade_dur >= x, minimal_roi_keys)))\r\n> print(str(roi_entry))\r\n\r\n\r\n### Solutions\r\n\r\n1. The quick easy solution to temporaly deal with that issue is to make the minimal_roi parameter mandatory and make the \"min_roi_reached\" work with missing 0. Users who don't want to use ROI can simply put an unrealistic hight value.\r\n\r\n2. 
The other is to make this parameter truly optionnal.\r\n\r\n\n", "before_files": [{"content": "# pragma pylint: disable=attribute-defined-outside-init\n\n\"\"\"\nThis module load custom strategies\n\"\"\"\nimport logging\nimport tempfile\nfrom base64 import urlsafe_b64decode\nfrom collections import OrderedDict\nfrom inspect import getfullargspec\nfrom pathlib import Path\nfrom typing import Dict, Optional\n\nfrom freqtrade import constants\nfrom freqtrade.resolvers import IResolver\nfrom freqtrade.strategy import import_strategy\nfrom freqtrade.strategy.interface import IStrategy\n\nlogger = logging.getLogger(__name__)\n\n\nclass StrategyResolver(IResolver):\n \"\"\"\n This class contains all the logic to load custom strategy class\n \"\"\"\n\n __slots__ = ['strategy']\n\n def __init__(self, config: Optional[Dict] = None) -> None:\n \"\"\"\n Load the custom class from config parameter\n :param config: configuration dictionary or None\n \"\"\"\n config = config or {}\n\n # Verify the strategy is in the configuration, otherwise fallback to the default strategy\n strategy_name = config.get('strategy') or constants.DEFAULT_STRATEGY\n self.strategy: IStrategy = self._load_strategy(strategy_name,\n config=config,\n extra_dir=config.get('strategy_path'))\n\n # make sure experimental dict is available\n if 'experimental' not in config:\n config['experimental'] = {}\n\n # Set attributes\n # Check if we need to override configuration\n # (Attribute name, default, experimental)\n attributes = [(\"minimal_roi\", None, False),\n (\"ticker_interval\", None, False),\n (\"stoploss\", None, False),\n (\"trailing_stop\", None, False),\n (\"trailing_stop_positive\", None, False),\n (\"trailing_stop_positive_offset\", 0.0, False),\n (\"process_only_new_candles\", None, False),\n (\"order_types\", None, False),\n (\"order_time_in_force\", None, False),\n (\"use_sell_signal\", False, True),\n (\"sell_profit_only\", False, True),\n (\"ignore_roi_if_buy_signal\", False, True),\n ]\n for attribute, default, experimental in attributes:\n if experimental:\n self._override_attribute_helper(config['experimental'], attribute, default)\n else:\n self._override_attribute_helper(config, attribute, default)\n\n # Loop this list again to have output combined\n for attribute, _, exp in attributes:\n if exp and attribute in config['experimental']:\n logger.info(\"Strategy using %s: %s\", attribute, config['experimental'][attribute])\n elif attribute in config:\n logger.info(\"Strategy using %s: %s\", attribute, config[attribute])\n\n # Sort and apply type conversions\n self.strategy.minimal_roi = OrderedDict(sorted(\n {int(key): value for (key, value) in self.strategy.minimal_roi.items()}.items(),\n key=lambda t: t[0]))\n self.strategy.stoploss = float(self.strategy.stoploss)\n\n self._strategy_sanity_validations()\n\n def _override_attribute_helper(self, config, attribute: str, default):\n \"\"\"\n Override attributes in the strategy.\n Prevalence:\n - Configuration\n - Strategy\n - default (if not None)\n \"\"\"\n if attribute in config:\n setattr(self.strategy, attribute, config[attribute])\n logger.info(\"Override strategy '%s' with value in config file: %s.\",\n attribute, config[attribute])\n elif hasattr(self.strategy, attribute):\n config[attribute] = getattr(self.strategy, attribute)\n # Explicitly check for None here as other \"falsy\" values are possible\n elif default is not None:\n setattr(self.strategy, attribute, default)\n config[attribute] = default\n\n def _strategy_sanity_validations(self):\n if not all(k 
in self.strategy.order_types for k in constants.REQUIRED_ORDERTYPES):\n raise ImportError(f\"Impossible to load Strategy '{self.strategy.__class__.__name__}'. \"\n f\"Order-types mapping is incomplete.\")\n\n if not all(k in self.strategy.order_time_in_force for k in constants.REQUIRED_ORDERTIF):\n raise ImportError(f\"Impossible to load Strategy '{self.strategy.__class__.__name__}'. \"\n f\"Order-time-in-force mapping is incomplete.\")\n\n def _load_strategy(\n self, strategy_name: str, config: dict, extra_dir: Optional[str] = None) -> IStrategy:\n \"\"\"\n Search and loads the specified strategy.\n :param strategy_name: name of the module to import\n :param config: configuration for the strategy\n :param extra_dir: additional directory to search for the given strategy\n :return: Strategy instance or None\n \"\"\"\n current_path = Path(__file__).parent.parent.joinpath('strategy').resolve()\n\n abs_paths = [\n Path.cwd().joinpath('user_data/strategies'),\n current_path,\n ]\n\n if extra_dir:\n # Add extra strategy directory on top of search paths\n abs_paths.insert(0, Path(extra_dir).resolve())\n\n if \":\" in strategy_name:\n logger.info(\"loading base64 endocded strategy\")\n strat = strategy_name.split(\":\")\n\n if len(strat) == 2:\n temp = Path(tempfile.mkdtemp(\"freq\", \"strategy\"))\n name = strat[0] + \".py\"\n\n temp.joinpath(name).write_text(urlsafe_b64decode(strat[1]).decode('utf-8'))\n temp.joinpath(\"__init__.py\").touch()\n\n strategy_name = strat[0]\n\n # register temp path with the bot\n abs_paths.insert(0, temp.resolve())\n\n for _path in abs_paths:\n try:\n strategy = self._search_object(directory=_path, object_type=IStrategy,\n object_name=strategy_name, kwargs={'config': config})\n if strategy:\n logger.info(\"Using resolved strategy %s from '%s'\", strategy_name, _path)\n strategy._populate_fun_len = len(\n getfullargspec(strategy.populate_indicators).args)\n strategy._buy_fun_len = len(getfullargspec(strategy.populate_buy_trend).args)\n strategy._sell_fun_len = len(getfullargspec(strategy.populate_sell_trend).args)\n\n return import_strategy(strategy, config=config)\n except FileNotFoundError:\n logger.warning('Path \"%s\" does not exist', _path.relative_to(Path.cwd()))\n\n raise ImportError(\n \"Impossible to load Strategy '{}'. 
This class does not exist\"\n \" or contains Python code errors\".format(strategy_name)\n )\n", "path": "freqtrade/resolvers/strategy_resolver.py"}], "after_files": [{"content": "# pragma pylint: disable=attribute-defined-outside-init\n\n\"\"\"\nThis module load custom strategies\n\"\"\"\nimport logging\nimport tempfile\nfrom base64 import urlsafe_b64decode\nfrom collections import OrderedDict\nfrom inspect import getfullargspec\nfrom pathlib import Path\nfrom typing import Dict, Optional\n\nfrom freqtrade import constants\nfrom freqtrade.resolvers import IResolver\nfrom freqtrade.strategy import import_strategy\nfrom freqtrade.strategy.interface import IStrategy\n\nlogger = logging.getLogger(__name__)\n\n\nclass StrategyResolver(IResolver):\n \"\"\"\n This class contains all the logic to load custom strategy class\n \"\"\"\n\n __slots__ = ['strategy']\n\n def __init__(self, config: Optional[Dict] = None) -> None:\n \"\"\"\n Load the custom class from config parameter\n :param config: configuration dictionary or None\n \"\"\"\n config = config or {}\n\n # Verify the strategy is in the configuration, otherwise fallback to the default strategy\n strategy_name = config.get('strategy') or constants.DEFAULT_STRATEGY\n self.strategy: IStrategy = self._load_strategy(strategy_name,\n config=config,\n extra_dir=config.get('strategy_path'))\n\n # make sure experimental dict is available\n if 'experimental' not in config:\n config['experimental'] = {}\n\n # Set attributes\n # Check if we need to override configuration\n # (Attribute name, default, experimental)\n attributes = [(\"minimal_roi\", {\"0\": 10.0}, False),\n (\"ticker_interval\", None, False),\n (\"stoploss\", None, False),\n (\"trailing_stop\", None, False),\n (\"trailing_stop_positive\", None, False),\n (\"trailing_stop_positive_offset\", 0.0, False),\n (\"process_only_new_candles\", None, False),\n (\"order_types\", None, False),\n (\"order_time_in_force\", None, False),\n (\"use_sell_signal\", False, True),\n (\"sell_profit_only\", False, True),\n (\"ignore_roi_if_buy_signal\", False, True),\n ]\n for attribute, default, experimental in attributes:\n if experimental:\n self._override_attribute_helper(config['experimental'], attribute, default)\n else:\n self._override_attribute_helper(config, attribute, default)\n\n # Loop this list again to have output combined\n for attribute, _, exp in attributes:\n if exp and attribute in config['experimental']:\n logger.info(\"Strategy using %s: %s\", attribute, config['experimental'][attribute])\n elif attribute in config:\n logger.info(\"Strategy using %s: %s\", attribute, config[attribute])\n\n # Sort and apply type conversions\n self.strategy.minimal_roi = OrderedDict(sorted(\n {int(key): value for (key, value) in self.strategy.minimal_roi.items()}.items(),\n key=lambda t: t[0]))\n self.strategy.stoploss = float(self.strategy.stoploss)\n\n self._strategy_sanity_validations()\n\n def _override_attribute_helper(self, config, attribute: str, default):\n \"\"\"\n Override attributes in the strategy.\n Prevalence:\n - Configuration\n - Strategy\n - default (if not None)\n \"\"\"\n if attribute in config:\n setattr(self.strategy, attribute, config[attribute])\n logger.info(\"Override strategy '%s' with value in config file: %s.\",\n attribute, config[attribute])\n elif hasattr(self.strategy, attribute):\n config[attribute] = getattr(self.strategy, attribute)\n # Explicitly check for None here as other \"falsy\" values are possible\n elif default is not None:\n setattr(self.strategy, attribute, 
default)\n config[attribute] = default\n\n def _strategy_sanity_validations(self):\n if not all(k in self.strategy.order_types for k in constants.REQUIRED_ORDERTYPES):\n raise ImportError(f\"Impossible to load Strategy '{self.strategy.__class__.__name__}'. \"\n f\"Order-types mapping is incomplete.\")\n\n if not all(k in self.strategy.order_time_in_force for k in constants.REQUIRED_ORDERTIF):\n raise ImportError(f\"Impossible to load Strategy '{self.strategy.__class__.__name__}'. \"\n f\"Order-time-in-force mapping is incomplete.\")\n\n def _load_strategy(\n self, strategy_name: str, config: dict, extra_dir: Optional[str] = None) -> IStrategy:\n \"\"\"\n Search and loads the specified strategy.\n :param strategy_name: name of the module to import\n :param config: configuration for the strategy\n :param extra_dir: additional directory to search for the given strategy\n :return: Strategy instance or None\n \"\"\"\n current_path = Path(__file__).parent.parent.joinpath('strategy').resolve()\n\n abs_paths = [\n Path.cwd().joinpath('user_data/strategies'),\n current_path,\n ]\n\n if extra_dir:\n # Add extra strategy directory on top of search paths\n abs_paths.insert(0, Path(extra_dir).resolve())\n\n if \":\" in strategy_name:\n logger.info(\"loading base64 endocded strategy\")\n strat = strategy_name.split(\":\")\n\n if len(strat) == 2:\n temp = Path(tempfile.mkdtemp(\"freq\", \"strategy\"))\n name = strat[0] + \".py\"\n\n temp.joinpath(name).write_text(urlsafe_b64decode(strat[1]).decode('utf-8'))\n temp.joinpath(\"__init__.py\").touch()\n\n strategy_name = strat[0]\n\n # register temp path with the bot\n abs_paths.insert(0, temp.resolve())\n\n for _path in abs_paths:\n try:\n strategy = self._search_object(directory=_path, object_type=IStrategy,\n object_name=strategy_name, kwargs={'config': config})\n if strategy:\n logger.info(\"Using resolved strategy %s from '%s'\", strategy_name, _path)\n strategy._populate_fun_len = len(\n getfullargspec(strategy.populate_indicators).args)\n strategy._buy_fun_len = len(getfullargspec(strategy.populate_buy_trend).args)\n strategy._sell_fun_len = len(getfullargspec(strategy.populate_sell_trend).args)\n\n return import_strategy(strategy, config=config)\n except FileNotFoundError:\n logger.warning('Path \"%s\" does not exist', _path.relative_to(Path.cwd()))\n\n raise ImportError(\n \"Impossible to load Strategy '{}'. This class does not exist\"\n \" or contains Python code errors\".format(strategy_name)\n )\n", "path": "freqtrade/resolvers/strategy_resolver.py"}]}
| 3,551 | 438 |
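The accepted patch above only changes the resolver default: `minimal_roi` falls back to `{"0": 10.0}` (an ROI of 1000%, so it effectively never fires) instead of `None`. Because the default always contains the `0` key, the integer-key conversion that follows it in the resolver produces a dict with at least one entry, and the lookup from the traceback in the issue always has a candidate. A sketch of that interaction, illustrative only and mirroring the `OrderedDict` block in the file above:

```python
# Illustrative only: what the resolver's sort/convert step yields for the new default.
from collections import OrderedDict

minimal_roi = {"0": 10.0}  # default injected by the patch when nothing is configured
converted = OrderedDict(sorted(
    {int(key): value for key, value in minimal_roi.items()}.items(),
    key=lambda t: t[0]))
print(converted)  # OrderedDict([(0, 10.0)])

# The failing lookup from the issue now always has at least the 0 key available.
trade_dur = 0.3486263354619344
print(max(k for k in converted if trade_dur >= k))  # 0
```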
gh_patches_debug_49499
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-1896
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.104
On the docket:
+ [x] Pull in Pip fixes. #1805
+ [x] pex fails to overwrite zipapp pex with loose pex #1879
+ [x] Make lock update sensitive to artifacts. #1887
+ [x] PEXBuilder is not robust to ephemeral .pyc compiles. #1889
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.103"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.103"
+__version__ = "2.1.104"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.103\"\n+__version__ = \"2.1.104\"\n", "issue": "Release 2.1.104\nOn the docket:\r\n+ [x] Pull in Pip fixes. #1805\r\n+ [x] pex fails to overwrite zipapp pex with loose pex #1879\r\n+ [x] Make lock update sensitive to artifacts. #1887 \r\n+ [x] PEXBuilder is not robust to ephemeral .pyc compiles. #1889 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.103\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.104\"\n", "path": "pex/version.py"}]}
| 403 | 98 |
gh_patches_debug_15256
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-1645
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Previews not loading correctly when using local image storage
**Describe the bug**
Previews not loading correctly when using local image storage
**To Reproduce**
In status updates, previews are not shown; for example:
https://ziurkes.group.lt/user/athinkingmeat/quotation/167#anchor-167
https://ziurkes.group.lt/user/athinkingmeat/review/166#anchor-166
**Expected behavior**
they work for books though:
https://ziurkes.group.lt/book/7
**Examples**
https://twitter.com/DievaiDemonai/status/1459865136781578243
**Instance**
https://ziurkes.group.lt/
**Additional context**
Using Cloudflare and Caddy (instead of nginx) to serve files, but it seems that the variable is not generated in the template itself, rather than the file not being served.
--- END ISSUE ---
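The report above concerns link-preview (OpenGraph) images: book pages expose a preview image while status pages do not. Below is a small diagnostic sketch, illustrative only and not part of bookwyrm; the exact meta-tag markup on the instance may differ, so the regex is an assumption:

```python
# Illustrative diagnostic only: look for an og:image meta tag in a rendered page.
import re
import requests

PAGES = (
    "https://ziurkes.group.lt/user/athinkingmeat/review/166",  # reported broken
    "https://ziurkes.group.lt/book/7",                         # reported working
)

for url in PAGES:
    html = requests.get(url, timeout=10).text
    # Assumes the og:image attribute appears before content=; adjust if the markup differs.
    match = re.search(r'og:image"[^>]*content="([^"]+)"', html)
    print(url, "->", match.group(1) if match else "no og:image found")
```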
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/feed.py`
Content:
```
1 """ non-interactive pages """
2 from django.contrib.auth.decorators import login_required
3 from django.core.paginator import Paginator
4 from django.db.models import Q
5 from django.http import HttpResponseNotFound, Http404
6 from django.shortcuts import get_object_or_404
7 from django.template.response import TemplateResponse
8 from django.utils import timezone
9 from django.utils.decorators import method_decorator
10 from django.views import View
11
12 from bookwyrm import activitystreams, forms, models
13 from bookwyrm.models.user import FeedFilterChoices
14 from bookwyrm.activitypub import ActivitypubResponse
15 from bookwyrm.settings import PAGE_LENGTH, STREAMS
16 from bookwyrm.suggested_users import suggested_users
17 from .helpers import filter_stream_by_status_type, get_user_from_username
18 from .helpers import is_api_request, is_bookwyrm_request
19
20
21 # pylint: disable= no-self-use
22 @method_decorator(login_required, name="dispatch")
23 class Feed(View):
24 """activity stream"""
25
26 def post(self, request, tab):
27 """save feed settings form, with a silent validation fail"""
28 settings_saved = False
29 form = forms.FeedStatusTypesForm(request.POST, instance=request.user)
30 if form.is_valid():
31 form.save()
32 settings_saved = True
33
34 return self.get(request, tab, settings_saved)
35
36 def get(self, request, tab, settings_saved=False):
37 """user's homepage with activity feed"""
38 tab = [s for s in STREAMS if s["key"] == tab]
39 tab = tab[0] if tab else STREAMS[0]
40
41 activities = activitystreams.streams[tab["key"]].get_activity_stream(
42 request.user
43 )
44 filtered_activities = filter_stream_by_status_type(
45 activities,
46 allowed_types=request.user.feed_status_types,
47 )
48 paginated = Paginator(filtered_activities, PAGE_LENGTH)
49
50 suggestions = suggested_users.get_suggestions(request.user)
51
52 data = {
53 **feed_page_data(request.user),
54 **{
55 "user": request.user,
56 "activities": paginated.get_page(request.GET.get("page")),
57 "suggested_users": suggestions,
58 "tab": tab,
59 "streams": STREAMS,
60 "goal_form": forms.GoalForm(),
61 "feed_status_types_options": FeedFilterChoices,
62 "allowed_status_types": request.user.feed_status_types,
63 "settings_saved": settings_saved,
64 "path": f"/{tab['key']}",
65 },
66 }
67 return TemplateResponse(request, "feed/feed.html", data)
68
69
70 @method_decorator(login_required, name="dispatch")
71 class DirectMessage(View):
72 """dm view"""
73
74 def get(self, request, username=None):
75 """like a feed but for dms only"""
76 # remove fancy subclasses of status, keep just good ol' notes
77 activities = (
78 models.Status.privacy_filter(request.user, privacy_levels=["direct"])
79 .filter(
80 review__isnull=True,
81 comment__isnull=True,
82 quotation__isnull=True,
83 generatednote__isnull=True,
84 )
85 .order_by("-published_date")
86 )
87
88 user = None
89 if username:
90 try:
91 user = get_user_from_username(request.user, username)
92 except Http404:
93 pass
94 if user:
95 activities = activities.filter(Q(user=user) | Q(mention_users=user))
96
97 paginated = Paginator(activities, PAGE_LENGTH)
98 data = {
99 **feed_page_data(request.user),
100 **{
101 "user": request.user,
102 "partner": user,
103 "activities": paginated.get_page(request.GET.get("page")),
104 "path": "/direct-messages",
105 },
106 }
107 return TemplateResponse(request, "feed/direct_messages.html", data)
108
109
110 class Status(View):
111 """get posting"""
112
113 def get(self, request, username, status_id):
114 """display a particular status (and replies, etc)"""
115 user = get_user_from_username(request.user, username)
116 status = get_object_or_404(
117 models.Status.objects.select_subclasses(),
118 user=user,
119 id=status_id,
120 deleted=False,
121 )
122 # make sure the user is authorized to see the status
123 status.raise_visible_to_user(request.user)
124
125 if is_api_request(request):
126 return ActivitypubResponse(
127 status.to_activity(pure=not is_bookwyrm_request(request))
128 )
129
130 visible_thread = (
131 models.Status.privacy_filter(request.user)
132 .filter(thread_id=status.thread_id)
133 .values_list("id", flat=True)
134 )
135 visible_thread = list(visible_thread)
136
137 ancestors = models.Status.objects.select_subclasses().raw(
138 """
139 WITH RECURSIVE get_thread(depth, id, path) AS (
140
141 SELECT 1, st.id, ARRAY[st.id]
142 FROM bookwyrm_status st
143 WHERE id = '%s' AND id = ANY(%s)
144
145 UNION
146
147 SELECT (gt.depth + 1), st.reply_parent_id, path || st.id
148 FROM get_thread gt, bookwyrm_status st
149
150 WHERE st.id = gt.id AND depth < 5 AND st.id = ANY(%s)
151
152 )
153
154 SELECT * FROM get_thread ORDER BY path DESC;
155 """,
156 params=[status.reply_parent_id or 0, visible_thread, visible_thread],
157 )
158 children = models.Status.objects.select_subclasses().raw(
159 """
160 WITH RECURSIVE get_thread(depth, id, path) AS (
161
162 SELECT 1, st.id, ARRAY[st.id]
163 FROM bookwyrm_status st
164 WHERE reply_parent_id = '%s' AND id = ANY(%s)
165
166 UNION
167
168 SELECT (gt.depth + 1), st.id, path || st.id
169 FROM get_thread gt, bookwyrm_status st
170
171 WHERE st.reply_parent_id = gt.id AND depth < 5 AND st.id = ANY(%s)
172
173 )
174
175 SELECT * FROM get_thread ORDER BY path;
176 """,
177 params=[status.id, visible_thread, visible_thread],
178 )
179
180 data = {
181 **feed_page_data(request.user),
182 **{
183 "status": status,
184 "children": children,
185 "ancestors": ancestors,
186 },
187 }
188 return TemplateResponse(request, "feed/status.html", data)
189
190
191 class Replies(View):
192 """replies page (a json view of status)"""
193
194 def get(self, request, username, status_id):
195 """ordered collection of replies to a status"""
196 # the html view is the same as Status
197 if not is_api_request(request):
198 status_view = Status.as_view()
199 return status_view(request, username, status_id)
200
201 # the json view is different than Status
202 status = models.Status.objects.get(id=status_id)
203 if status.user.localname != username:
204 return HttpResponseNotFound()
205 status.raise_visible_to_user(request.user)
206
207 return ActivitypubResponse(status.to_replies(**request.GET))
208
209
210 def feed_page_data(user):
211 """info we need for every feed page"""
212 if not user.is_authenticated:
213 return {}
214
215 goal = models.AnnualGoal.objects.filter(user=user, year=timezone.now().year).first()
216 return {
217 "suggested_books": get_suggested_books(user),
218 "goal": goal,
219 "goal_form": forms.GoalForm(),
220 }
221
222
223 def get_suggested_books(user, max_books=5):
224 """helper to get a user's recent books"""
225 book_count = 0
226 preset_shelves = [("reading", max_books), ("read", 2), ("to-read", max_books)]
227 suggested_books = []
228 for (preset, shelf_max) in preset_shelves:
229 limit = (
230 shelf_max
231 if shelf_max < (max_books - book_count)
232 else max_books - book_count
233 )
234 shelf = user.shelf_set.get(identifier=preset)
235 if not shelf.books.exists():
236 continue
237
238 shelf_preview = {
239 "name": shelf.name,
240 "identifier": shelf.identifier,
241 "books": models.Edition.viewer_aware_objects(user)
242 .filter(
243 shelfbook__shelf=shelf,
244 )
245 .order_by("-shelfbook__shelved_date")
246 .prefetch_related("authors")[:limit],
247 }
248 suggested_books.append(shelf_preview)
249 book_count += len(shelf_preview["books"])
250 return suggested_books
251
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
```diff
diff --git a/bookwyrm/views/feed.py b/bookwyrm/views/feed.py
--- a/bookwyrm/views/feed.py
+++ b/bookwyrm/views/feed.py
@@ -177,12 +177,19 @@
params=[status.id, visible_thread, visible_thread],
)
+ preview = None
+ if hasattr(status, "book"):
+ preview = status.book.preview_image
+ elif status.mention_books.exists():
+ preview = status.mention_books.first().preview_image
+
data = {
**feed_page_data(request.user),
**{
"status": status,
"children": children,
"ancestors": ancestors,
+ "preview": preview,
},
}
return TemplateResponse(request, "feed/status.html", data)
```
|
{"golden_diff": "diff --git a/bookwyrm/views/feed.py b/bookwyrm/views/feed.py\n--- a/bookwyrm/views/feed.py\n+++ b/bookwyrm/views/feed.py\n@@ -177,12 +177,19 @@\n params=[status.id, visible_thread, visible_thread],\n )\n \n+ preview = None\n+ if hasattr(status, \"book\"):\n+ preview = status.book.preview_image\n+ elif status.mention_books.exists():\n+ preview = status.mention_books.first().preview_image\n+\n data = {\n **feed_page_data(request.user),\n **{\n \"status\": status,\n \"children\": children,\n \"ancestors\": ancestors,\n+ \"preview\": preview,\n },\n }\n return TemplateResponse(request, \"feed/status.html\", data)\n", "issue": "Previews not loading correctly when using local image storage \n**Describe the bug**\r\nPreviews not loading correctly when using local image storage \r\n\r\n**To Reproduce**\r\nIn status update previews are not shown, for example:\r\nhttps://ziurkes.group.lt/user/athinkingmeat/quotation/167#anchor-167\r\nhttps://ziurkes.group.lt/user/athinkingmeat/review/166#anchor-166\r\n\r\n**Expected behavior**\r\nthey work for books though:\r\nhttps://ziurkes.group.lt/book/7\r\n\r\n**Examples**\r\nhttps://twitter.com/DievaiDemonai/status/1459865136781578243\r\n\r\n**Instance**\r\nhttps://ziurkes.group.lt/\r\n\r\n**Additional context**\r\nUsing cloudflare and also caddy instead of nginx to serve files, but it seems that the variable is not generated in template itself, not that the file is not being served.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\" non-interactive pages \"\"\"\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.db.models import Q\nfrom django.http import HttpResponseNotFound, Http404\nfrom django.shortcuts import get_object_or_404\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import activitystreams, forms, models\nfrom bookwyrm.models.user import FeedFilterChoices\nfrom bookwyrm.activitypub import ActivitypubResponse\nfrom bookwyrm.settings import PAGE_LENGTH, STREAMS\nfrom bookwyrm.suggested_users import suggested_users\nfrom .helpers import filter_stream_by_status_type, get_user_from_username\nfrom .helpers import is_api_request, is_bookwyrm_request\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass Feed(View):\n \"\"\"activity stream\"\"\"\n\n def post(self, request, tab):\n \"\"\"save feed settings form, with a silent validation fail\"\"\"\n settings_saved = False\n form = forms.FeedStatusTypesForm(request.POST, instance=request.user)\n if form.is_valid():\n form.save()\n settings_saved = True\n\n return self.get(request, tab, settings_saved)\n\n def get(self, request, tab, settings_saved=False):\n \"\"\"user's homepage with activity feed\"\"\"\n tab = [s for s in STREAMS if s[\"key\"] == tab]\n tab = tab[0] if tab else STREAMS[0]\n\n activities = activitystreams.streams[tab[\"key\"]].get_activity_stream(\n request.user\n )\n filtered_activities = filter_stream_by_status_type(\n activities,\n allowed_types=request.user.feed_status_types,\n )\n paginated = Paginator(filtered_activities, PAGE_LENGTH)\n\n suggestions = suggested_users.get_suggestions(request.user)\n\n data = {\n **feed_page_data(request.user),\n **{\n \"user\": request.user,\n \"activities\": paginated.get_page(request.GET.get(\"page\")),\n \"suggested_users\": suggestions,\n \"tab\": tab,\n \"streams\": STREAMS,\n 
\"goal_form\": forms.GoalForm(),\n \"feed_status_types_options\": FeedFilterChoices,\n \"allowed_status_types\": request.user.feed_status_types,\n \"settings_saved\": settings_saved,\n \"path\": f\"/{tab['key']}\",\n },\n }\n return TemplateResponse(request, \"feed/feed.html\", data)\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DirectMessage(View):\n \"\"\"dm view\"\"\"\n\n def get(self, request, username=None):\n \"\"\"like a feed but for dms only\"\"\"\n # remove fancy subclasses of status, keep just good ol' notes\n activities = (\n models.Status.privacy_filter(request.user, privacy_levels=[\"direct\"])\n .filter(\n review__isnull=True,\n comment__isnull=True,\n quotation__isnull=True,\n generatednote__isnull=True,\n )\n .order_by(\"-published_date\")\n )\n\n user = None\n if username:\n try:\n user = get_user_from_username(request.user, username)\n except Http404:\n pass\n if user:\n activities = activities.filter(Q(user=user) | Q(mention_users=user))\n\n paginated = Paginator(activities, PAGE_LENGTH)\n data = {\n **feed_page_data(request.user),\n **{\n \"user\": request.user,\n \"partner\": user,\n \"activities\": paginated.get_page(request.GET.get(\"page\")),\n \"path\": \"/direct-messages\",\n },\n }\n return TemplateResponse(request, \"feed/direct_messages.html\", data)\n\n\nclass Status(View):\n \"\"\"get posting\"\"\"\n\n def get(self, request, username, status_id):\n \"\"\"display a particular status (and replies, etc)\"\"\"\n user = get_user_from_username(request.user, username)\n status = get_object_or_404(\n models.Status.objects.select_subclasses(),\n user=user,\n id=status_id,\n deleted=False,\n )\n # make sure the user is authorized to see the status\n status.raise_visible_to_user(request.user)\n\n if is_api_request(request):\n return ActivitypubResponse(\n status.to_activity(pure=not is_bookwyrm_request(request))\n )\n\n visible_thread = (\n models.Status.privacy_filter(request.user)\n .filter(thread_id=status.thread_id)\n .values_list(\"id\", flat=True)\n )\n visible_thread = list(visible_thread)\n\n ancestors = models.Status.objects.select_subclasses().raw(\n \"\"\"\n WITH RECURSIVE get_thread(depth, id, path) AS (\n\n SELECT 1, st.id, ARRAY[st.id]\n FROM bookwyrm_status st\n WHERE id = '%s' AND id = ANY(%s)\n\n UNION\n\n SELECT (gt.depth + 1), st.reply_parent_id, path || st.id\n FROM get_thread gt, bookwyrm_status st\n\n WHERE st.id = gt.id AND depth < 5 AND st.id = ANY(%s)\n\n )\n\n SELECT * FROM get_thread ORDER BY path DESC;\n \"\"\",\n params=[status.reply_parent_id or 0, visible_thread, visible_thread],\n )\n children = models.Status.objects.select_subclasses().raw(\n \"\"\"\n WITH RECURSIVE get_thread(depth, id, path) AS (\n\n SELECT 1, st.id, ARRAY[st.id]\n FROM bookwyrm_status st\n WHERE reply_parent_id = '%s' AND id = ANY(%s)\n\n UNION\n\n SELECT (gt.depth + 1), st.id, path || st.id\n FROM get_thread gt, bookwyrm_status st\n\n WHERE st.reply_parent_id = gt.id AND depth < 5 AND st.id = ANY(%s)\n\n )\n\n SELECT * FROM get_thread ORDER BY path;\n \"\"\",\n params=[status.id, visible_thread, visible_thread],\n )\n\n data = {\n **feed_page_data(request.user),\n **{\n \"status\": status,\n \"children\": children,\n \"ancestors\": ancestors,\n },\n }\n return TemplateResponse(request, \"feed/status.html\", data)\n\n\nclass Replies(View):\n \"\"\"replies page (a json view of status)\"\"\"\n\n def get(self, request, username, status_id):\n \"\"\"ordered collection of replies to a status\"\"\"\n # the html view is the same as Status\n if not 
is_api_request(request):\n status_view = Status.as_view()\n return status_view(request, username, status_id)\n\n # the json view is different than Status\n status = models.Status.objects.get(id=status_id)\n if status.user.localname != username:\n return HttpResponseNotFound()\n status.raise_visible_to_user(request.user)\n\n return ActivitypubResponse(status.to_replies(**request.GET))\n\n\ndef feed_page_data(user):\n \"\"\"info we need for every feed page\"\"\"\n if not user.is_authenticated:\n return {}\n\n goal = models.AnnualGoal.objects.filter(user=user, year=timezone.now().year).first()\n return {\n \"suggested_books\": get_suggested_books(user),\n \"goal\": goal,\n \"goal_form\": forms.GoalForm(),\n }\n\n\ndef get_suggested_books(user, max_books=5):\n \"\"\"helper to get a user's recent books\"\"\"\n book_count = 0\n preset_shelves = [(\"reading\", max_books), (\"read\", 2), (\"to-read\", max_books)]\n suggested_books = []\n for (preset, shelf_max) in preset_shelves:\n limit = (\n shelf_max\n if shelf_max < (max_books - book_count)\n else max_books - book_count\n )\n shelf = user.shelf_set.get(identifier=preset)\n if not shelf.books.exists():\n continue\n\n shelf_preview = {\n \"name\": shelf.name,\n \"identifier\": shelf.identifier,\n \"books\": models.Edition.viewer_aware_objects(user)\n .filter(\n shelfbook__shelf=shelf,\n )\n .order_by(\"-shelfbook__shelved_date\")\n .prefetch_related(\"authors\")[:limit],\n }\n suggested_books.append(shelf_preview)\n book_count += len(shelf_preview[\"books\"])\n return suggested_books\n", "path": "bookwyrm/views/feed.py"}], "after_files": [{"content": "\"\"\" non-interactive pages \"\"\"\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.db.models import Q\nfrom django.http import HttpResponseNotFound, Http404\nfrom django.shortcuts import get_object_or_404\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import activitystreams, forms, models\nfrom bookwyrm.models.user import FeedFilterChoices\nfrom bookwyrm.activitypub import ActivitypubResponse\nfrom bookwyrm.settings import PAGE_LENGTH, STREAMS\nfrom bookwyrm.suggested_users import suggested_users\nfrom .helpers import filter_stream_by_status_type, get_user_from_username\nfrom .helpers import is_api_request, is_bookwyrm_request\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass Feed(View):\n \"\"\"activity stream\"\"\"\n\n def post(self, request, tab):\n \"\"\"save feed settings form, with a silent validation fail\"\"\"\n settings_saved = False\n form = forms.FeedStatusTypesForm(request.POST, instance=request.user)\n if form.is_valid():\n form.save()\n settings_saved = True\n\n return self.get(request, tab, settings_saved)\n\n def get(self, request, tab, settings_saved=False):\n \"\"\"user's homepage with activity feed\"\"\"\n tab = [s for s in STREAMS if s[\"key\"] == tab]\n tab = tab[0] if tab else STREAMS[0]\n\n activities = activitystreams.streams[tab[\"key\"]].get_activity_stream(\n request.user\n )\n filtered_activities = filter_stream_by_status_type(\n activities,\n allowed_types=request.user.feed_status_types,\n )\n paginated = Paginator(filtered_activities, PAGE_LENGTH)\n\n suggestions = suggested_users.get_suggestions(request.user)\n\n data = {\n **feed_page_data(request.user),\n **{\n \"user\": request.user,\n \"activities\": 
paginated.get_page(request.GET.get(\"page\")),\n \"suggested_users\": suggestions,\n \"tab\": tab,\n \"streams\": STREAMS,\n \"goal_form\": forms.GoalForm(),\n \"feed_status_types_options\": FeedFilterChoices,\n \"allowed_status_types\": request.user.feed_status_types,\n \"settings_saved\": settings_saved,\n \"path\": f\"/{tab['key']}\",\n },\n }\n return TemplateResponse(request, \"feed/feed.html\", data)\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DirectMessage(View):\n \"\"\"dm view\"\"\"\n\n def get(self, request, username=None):\n \"\"\"like a feed but for dms only\"\"\"\n # remove fancy subclasses of status, keep just good ol' notes\n activities = (\n models.Status.privacy_filter(request.user, privacy_levels=[\"direct\"])\n .filter(\n review__isnull=True,\n comment__isnull=True,\n quotation__isnull=True,\n generatednote__isnull=True,\n )\n .order_by(\"-published_date\")\n )\n\n user = None\n if username:\n try:\n user = get_user_from_username(request.user, username)\n except Http404:\n pass\n if user:\n activities = activities.filter(Q(user=user) | Q(mention_users=user))\n\n paginated = Paginator(activities, PAGE_LENGTH)\n data = {\n **feed_page_data(request.user),\n **{\n \"user\": request.user,\n \"partner\": user,\n \"activities\": paginated.get_page(request.GET.get(\"page\")),\n \"path\": \"/direct-messages\",\n },\n }\n return TemplateResponse(request, \"feed/direct_messages.html\", data)\n\n\nclass Status(View):\n \"\"\"get posting\"\"\"\n\n def get(self, request, username, status_id):\n \"\"\"display a particular status (and replies, etc)\"\"\"\n user = get_user_from_username(request.user, username)\n status = get_object_or_404(\n models.Status.objects.select_subclasses(),\n user=user,\n id=status_id,\n deleted=False,\n )\n # make sure the user is authorized to see the status\n status.raise_visible_to_user(request.user)\n\n if is_api_request(request):\n return ActivitypubResponse(\n status.to_activity(pure=not is_bookwyrm_request(request))\n )\n\n visible_thread = (\n models.Status.privacy_filter(request.user)\n .filter(thread_id=status.thread_id)\n .values_list(\"id\", flat=True)\n )\n visible_thread = list(visible_thread)\n\n ancestors = models.Status.objects.select_subclasses().raw(\n \"\"\"\n WITH RECURSIVE get_thread(depth, id, path) AS (\n\n SELECT 1, st.id, ARRAY[st.id]\n FROM bookwyrm_status st\n WHERE id = '%s' AND id = ANY(%s)\n\n UNION\n\n SELECT (gt.depth + 1), st.reply_parent_id, path || st.id\n FROM get_thread gt, bookwyrm_status st\n\n WHERE st.id = gt.id AND depth < 5 AND st.id = ANY(%s)\n\n )\n\n SELECT * FROM get_thread ORDER BY path DESC;\n \"\"\",\n params=[status.reply_parent_id or 0, visible_thread, visible_thread],\n )\n children = models.Status.objects.select_subclasses().raw(\n \"\"\"\n WITH RECURSIVE get_thread(depth, id, path) AS (\n\n SELECT 1, st.id, ARRAY[st.id]\n FROM bookwyrm_status st\n WHERE reply_parent_id = '%s' AND id = ANY(%s)\n\n UNION\n\n SELECT (gt.depth + 1), st.id, path || st.id\n FROM get_thread gt, bookwyrm_status st\n\n WHERE st.reply_parent_id = gt.id AND depth < 5 AND st.id = ANY(%s)\n\n )\n\n SELECT * FROM get_thread ORDER BY path;\n \"\"\",\n params=[status.id, visible_thread, visible_thread],\n )\n\n preview = None\n if hasattr(status, \"book\"):\n preview = status.book.preview_image\n elif status.mention_books.exists():\n preview = status.mention_books.first().preview_image\n\n data = {\n **feed_page_data(request.user),\n **{\n \"status\": status,\n \"children\": children,\n \"ancestors\": ancestors,\n 
\"preview\": preview,\n },\n }\n return TemplateResponse(request, \"feed/status.html\", data)\n\n\nclass Replies(View):\n \"\"\"replies page (a json view of status)\"\"\"\n\n def get(self, request, username, status_id):\n \"\"\"ordered collection of replies to a status\"\"\"\n # the html view is the same as Status\n if not is_api_request(request):\n status_view = Status.as_view()\n return status_view(request, username, status_id)\n\n # the json view is different than Status\n status = models.Status.objects.get(id=status_id)\n if status.user.localname != username:\n return HttpResponseNotFound()\n status.raise_visible_to_user(request.user)\n\n return ActivitypubResponse(status.to_replies(**request.GET))\n\n\ndef feed_page_data(user):\n \"\"\"info we need for every feed page\"\"\"\n if not user.is_authenticated:\n return {}\n\n goal = models.AnnualGoal.objects.filter(user=user, year=timezone.now().year).first()\n return {\n \"suggested_books\": get_suggested_books(user),\n \"goal\": goal,\n \"goal_form\": forms.GoalForm(),\n }\n\n\ndef get_suggested_books(user, max_books=5):\n \"\"\"helper to get a user's recent books\"\"\"\n book_count = 0\n preset_shelves = [(\"reading\", max_books), (\"read\", 2), (\"to-read\", max_books)]\n suggested_books = []\n for (preset, shelf_max) in preset_shelves:\n limit = (\n shelf_max\n if shelf_max < (max_books - book_count)\n else max_books - book_count\n )\n shelf = user.shelf_set.get(identifier=preset)\n if not shelf.books.exists():\n continue\n\n shelf_preview = {\n \"name\": shelf.name,\n \"identifier\": shelf.identifier,\n \"books\": models.Edition.viewer_aware_objects(user)\n .filter(\n shelfbook__shelf=shelf,\n )\n .order_by(\"-shelfbook__shelved_date\")\n .prefetch_related(\"authors\")[:limit],\n }\n suggested_books.append(shelf_preview)\n book_count += len(shelf_preview[\"books\"])\n return suggested_books\n", "path": "bookwyrm/views/feed.py"}]}
| 2,888 | 173 |
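The block added by the patch above simply picks a preview image with a two-step fallback. A minimal standalone restatement of that logic, assuming `status` is a bookwyrm `Status`-like object with the fields used in the diff:

```python
# Sketch only: mirrors the fallback introduced by the patch above.
def select_preview_image(status):
    # Statuses tied to a single book expose a `book` attribute.
    if hasattr(status, "book"):
        return status.book.preview_image
    # Otherwise fall back to the first mentioned book, if any.
    if status.mention_books.exists():
        return status.mention_books.first().preview_image
    return None
```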
gh_patches_debug_25518 | rasdani/github-patches | git_diff | microsoft__torchgeo-1072 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
EuroSAT Dataset URL not being secure anymore
### Description
As `url = "https://madm.dfki.de/files/sentinel/EuroSATallBands.zip"` is not secure anymore, downloading the EuroSat dataset is blocked by default by certify.
We might need to either change the zip source or remove the `https`
### Steps to reproduce
Load the EuroSAT dataset
```python
from torchgeo.datasets import EuroSAT
eurosat_train = EuroSAT(download=True)
```
### Version
0.4.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchgeo/datasets/eurosat.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 """EuroSAT dataset."""
5
6 import os
7 from typing import Callable, Dict, Optional, Sequence, cast
8
9 import matplotlib.pyplot as plt
10 import numpy as np
11 import torch
12 from torch import Tensor
13
14 from .geo import NonGeoClassificationDataset
15 from .utils import check_integrity, download_url, extract_archive, rasterio_loader
16
17
18 class EuroSAT(NonGeoClassificationDataset):
19 """EuroSAT dataset.
20
21 The `EuroSAT <https://github.com/phelber/EuroSAT>`__ dataset is based on Sentinel-2
22 satellite images covering 13 spectral bands and consists of 10 target classes with
23 a total of 27,000 labeled and geo-referenced images.
24
25 Dataset format:
26
27 * rasters are 13-channel GeoTiffs
28 * labels are values in the range [0,9]
29
30 Dataset classes:
31
32 * Industrial Buildings
33 * Residential Buildings
34 * Annual Crop
35 * Permanent Crop
36 * River
37 * Sea and Lake
38 * Herbaceous Vegetation
39 * Highway
40 * Pasture
41 * Forest
42
43 This dataset uses the train/val/test splits defined in the "In-domain representation
44 learning for remote sensing" paper:
45
46 * https://arxiv.org/abs/1911.06721
47
48 If you use this dataset in your research, please cite the following papers:
49
50 * https://ieeexplore.ieee.org/document/8736785
51 * https://ieeexplore.ieee.org/document/8519248
52 """
53
54 # TODO: Change to https after https://github.com/phelber/EuroSAT/issues/10 is
55 # resolved
56 url = "http://madm.dfki.de/files/sentinel/EuroSATallBands.zip" # 2.0 GB download
57 filename = "EuroSATallBands.zip"
58 md5 = "5ac12b3b2557aa56e1826e981e8e200e"
59
60 # For some reason the class directories are actually nested in this directory
61 base_dir = os.path.join(
62 "ds", "images", "remote_sensing", "otherDatasets", "sentinel_2", "tif"
63 )
64
65 splits = ["train", "val", "test"]
66 split_urls = {
67 "train": "https://storage.googleapis.com/remote_sensing_representations/eurosat-train.txt", # noqa: E501
68 "val": "https://storage.googleapis.com/remote_sensing_representations/eurosat-val.txt", # noqa: E501
69 "test": "https://storage.googleapis.com/remote_sensing_representations/eurosat-test.txt", # noqa: E501
70 }
71 split_md5s = {
72 "train": "908f142e73d6acdf3f482c5e80d851b1",
73 "val": "95de90f2aa998f70a3b2416bfe0687b4",
74 "test": "7ae5ab94471417b6e315763121e67c5f",
75 }
76 classes = [
77 "Industrial Buildings",
78 "Residential Buildings",
79 "Annual Crop",
80 "Permanent Crop",
81 "River",
82 "Sea and Lake",
83 "Herbaceous Vegetation",
84 "Highway",
85 "Pasture",
86 "Forest",
87 ]
88
89 all_band_names = (
90 "B01",
91 "B02",
92 "B03",
93 "B04",
94 "B05",
95 "B06",
96 "B07",
97 "B08",
98 "B08A",
99 "B09",
100 "B10",
101 "B11",
102 "B12",
103 )
104
105 rgb_bands = ("B04", "B03", "B02")
106
107 BAND_SETS = {"all": all_band_names, "rgb": rgb_bands}
108
109 # TODO: reset checksum to False after https://github.com/phelber/EuroSAT/issues/10
110 # is resolved
111 def __init__(
112 self,
113 root: str = "data",
114 split: str = "train",
115 bands: Sequence[str] = BAND_SETS["all"],
116 transforms: Optional[Callable[[Dict[str, Tensor]], Dict[str, Tensor]]] = None,
117 download: bool = False,
118 checksum: bool = True,
119 ) -> None:
120 """Initialize a new EuroSAT dataset instance.
121
122 Args:
123 root: root directory where dataset can be found
124 split: one of "train", "val", or "test"
125 bands: a sequence of band names to load
126 transforms: a function/transform that takes input sample and its target as
127 entry and returns a transformed version
128 download: if True, download dataset and store it in the root directory
129 checksum: if True, check the MD5 of the downloaded files (may be slow)
130
131 Raises:
132 AssertionError: if ``split`` argument is invalid
133 RuntimeError: if ``download=False`` and data is not found, or checksums
134 don't match
135
136 .. versionadded:: 0.3
137 The *bands* parameter.
138 """
139 self.root = root
140 self.transforms = transforms
141 self.download = download
142 self.checksum = checksum
143
144 assert split in ["train", "val", "test"]
145
146 self._validate_bands(bands)
147 self.bands = bands
148 self.band_indices = Tensor(
149 [self.all_band_names.index(b) for b in bands if b in self.all_band_names]
150 ).long()
151
152 self._verify()
153
154 valid_fns = set()
155 with open(os.path.join(self.root, f"eurosat-{split}.txt")) as f:
156 for fn in f:
157 valid_fns.add(fn.strip().replace(".jpg", ".tif"))
158 is_in_split: Callable[[str], bool] = lambda x: os.path.basename(x) in valid_fns
159
160 super().__init__(
161 root=os.path.join(root, self.base_dir),
162 transforms=transforms,
163 loader=rasterio_loader,
164 is_valid_file=is_in_split,
165 )
166
167 def __getitem__(self, index: int) -> Dict[str, Tensor]:
168 """Return an index within the dataset.
169
170 Args:
171 index: index to return
172 Returns:
173 data and label at that index
174 """
175 image, label = self._load_image(index)
176
177 image = torch.index_select(image, dim=0, index=self.band_indices).float()
178 sample = {"image": image, "label": label}
179
180 if self.transforms is not None:
181 sample = self.transforms(sample)
182
183 return sample
184
185 def _check_integrity(self) -> bool:
186 """Check integrity of dataset.
187
188 Returns:
189 True if dataset files are found and/or MD5s match, else False
190 """
191 integrity: bool = check_integrity(
192 os.path.join(self.root, self.filename), self.md5 if self.checksum else None
193 )
194 return integrity
195
196 def _verify(self) -> None:
197 """Verify the integrity of the dataset.
198
199 Raises:
200 RuntimeError: if ``download=False`` but dataset is missing or checksum fails
201 """
202 # Check if the files already exist
203 filepath = os.path.join(self.root, self.base_dir)
204 if os.path.exists(filepath):
205 return
206
207 # Check if zip file already exists (if so then extract)
208 if self._check_integrity():
209 self._extract()
210 return
211
212 # Check if the user requested to download the dataset
213 if not self.download:
214 raise RuntimeError(
215 "Dataset not found in `root` directory and `download=False`, "
216 "either specify a different `root` directory or use `download=True` "
217 "to automatically download the dataset."
218 )
219
220 # Download and extract the dataset
221 self._download()
222 self._extract()
223
224 def _download(self) -> None:
225 """Download the dataset."""
226 download_url(
227 self.url,
228 self.root,
229 filename=self.filename,
230 md5=self.md5 if self.checksum else None,
231 )
232 for split in self.splits:
233 download_url(
234 self.split_urls[split],
235 self.root,
236 filename=f"eurosat-{split}.txt",
237 md5=self.split_md5s[split] if self.checksum else None,
238 )
239
240 def _extract(self) -> None:
241 """Extract the dataset."""
242 filepath = os.path.join(self.root, self.filename)
243 extract_archive(filepath)
244
245 def _validate_bands(self, bands: Sequence[str]) -> None:
246 """Validate list of bands.
247
248 Args:
249 bands: user-provided sequence of bands to load
250
251 Raises:
252 AssertionError: if ``bands`` is not a sequence
253 ValueError: if an invalid band name is provided
254
255 .. versionadded:: 0.3
256 """
257 assert isinstance(bands, Sequence), "'bands' must be a sequence"
258 for band in bands:
259 if band not in self.all_band_names:
260 raise ValueError(f"'{band}' is an invalid band name.")
261
262 def plot(
263 self,
264 sample: Dict[str, Tensor],
265 show_titles: bool = True,
266 suptitle: Optional[str] = None,
267 ) -> plt.Figure:
268 """Plot a sample from the dataset.
269
270 Args:
271 sample: a sample returned by :meth:`NonGeoClassificationDataset.__getitem__`
272 show_titles: flag indicating whether to show titles above each panel
273 suptitle: optional string to use as a suptitle
274
275 Returns:
276 a matplotlib Figure with the rendered sample
277
278 Raises:
279 ValueError: if RGB bands are not found in dataset
280
281 .. versionadded:: 0.2
282 """
283 rgb_indices = []
284 for band in self.rgb_bands:
285 if band in self.bands:
286 rgb_indices.append(self.bands.index(band))
287 else:
288 raise ValueError("Dataset doesn't contain some of the RGB bands")
289
290 image = np.take(sample["image"].numpy(), indices=rgb_indices, axis=0)
291 image = np.rollaxis(image, 0, 3)
292 image = np.clip(image / 3000, 0, 1)
293
294 label = cast(int, sample["label"].item())
295 label_class = self.classes[label]
296
297 showing_predictions = "prediction" in sample
298 if showing_predictions:
299 prediction = cast(int, sample["prediction"].item())
300 prediction_class = self.classes[prediction]
301
302 fig, ax = plt.subplots(figsize=(4, 4))
303 ax.imshow(image)
304 ax.axis("off")
305 if show_titles:
306 title = f"Label: {label_class}"
307 if showing_predictions:
308 title += f"\nPrediction: {prediction_class}"
309 ax.set_title(title)
310
311 if suptitle is not None:
312 plt.suptitle(suptitle)
313 return fig
314
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
```diff
diff --git a/torchgeo/datasets/eurosat.py b/torchgeo/datasets/eurosat.py
--- a/torchgeo/datasets/eurosat.py
+++ b/torchgeo/datasets/eurosat.py
@@ -51,9 +51,7 @@
* https://ieeexplore.ieee.org/document/8519248
"""
- # TODO: Change to https after https://github.com/phelber/EuroSAT/issues/10 is
- # resolved
- url = "http://madm.dfki.de/files/sentinel/EuroSATallBands.zip" # 2.0 GB download
+ url = "https://huggingface.co/datasets/torchgeo/eurosat/resolve/main/EuroSATallBands.zip" # noqa: E501
filename = "EuroSATallBands.zip"
md5 = "5ac12b3b2557aa56e1826e981e8e200e"
@@ -106,8 +104,6 @@
BAND_SETS = {"all": all_band_names, "rgb": rgb_bands}
- # TODO: reset checksum to False after https://github.com/phelber/EuroSAT/issues/10
- # is resolved
def __init__(
self,
root: str = "data",
@@ -115,7 +111,7 @@
bands: Sequence[str] = BAND_SETS["all"],
transforms: Optional[Callable[[Dict[str, Tensor]], Dict[str, Tensor]]] = None,
download: bool = False,
- checksum: bool = True,
+ checksum: bool = False,
) -> None:
"""Initialize a new EuroSAT dataset instance.
```
|
{"golden_diff": "diff --git a/torchgeo/datasets/eurosat.py b/torchgeo/datasets/eurosat.py\n--- a/torchgeo/datasets/eurosat.py\n+++ b/torchgeo/datasets/eurosat.py\n@@ -51,9 +51,7 @@\n * https://ieeexplore.ieee.org/document/8519248\n \"\"\"\n \n- # TODO: Change to https after https://github.com/phelber/EuroSAT/issues/10 is\n- # resolved\n- url = \"http://madm.dfki.de/files/sentinel/EuroSATallBands.zip\" # 2.0 GB download\n+ url = \"https://huggingface.co/datasets/torchgeo/eurosat/resolve/main/EuroSATallBands.zip\" # noqa: E501\n filename = \"EuroSATallBands.zip\"\n md5 = \"5ac12b3b2557aa56e1826e981e8e200e\"\n \n@@ -106,8 +104,6 @@\n \n BAND_SETS = {\"all\": all_band_names, \"rgb\": rgb_bands}\n \n- # TODO: reset checksum to False after https://github.com/phelber/EuroSAT/issues/10\n- # is resolved\n def __init__(\n self,\n root: str = \"data\",\n@@ -115,7 +111,7 @@\n bands: Sequence[str] = BAND_SETS[\"all\"],\n transforms: Optional[Callable[[Dict[str, Tensor]], Dict[str, Tensor]]] = None,\n download: bool = False,\n- checksum: bool = True,\n+ checksum: bool = False,\n ) -> None:\n \"\"\"Initialize a new EuroSAT dataset instance.\n", "issue": "EuroSAT Dataset URL not being secure anymore\n### Description\r\n\r\nAs `url = \"https://madm.dfki.de/files/sentinel/EuroSATallBands.zip\"` is not secure anymore, downloading the EuroSat dataset is blocked by default by certify.\r\n\r\nWe might need to either change the zip source or remove the `https`\r\n\r\n### Steps to reproduce\r\n\r\nLoad the EuroSAT dataset\r\n```python\r\nfrom torchgeo.datasets import EuroSAT\r\neurosat_train = EuroSAT(download=True)\r\n```\r\n\r\n### Version\r\n\r\n0.4.0\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"EuroSAT dataset.\"\"\"\n\nimport os\nfrom typing import Callable, Dict, Optional, Sequence, cast\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nfrom torch import Tensor\n\nfrom .geo import NonGeoClassificationDataset\nfrom .utils import check_integrity, download_url, extract_archive, rasterio_loader\n\n\nclass EuroSAT(NonGeoClassificationDataset):\n \"\"\"EuroSAT dataset.\n\n The `EuroSAT <https://github.com/phelber/EuroSAT>`__ dataset is based on Sentinel-2\n satellite images covering 13 spectral bands and consists of 10 target classes with\n a total of 27,000 labeled and geo-referenced images.\n\n Dataset format:\n\n * rasters are 13-channel GeoTiffs\n * labels are values in the range [0,9]\n\n Dataset classes:\n\n * Industrial Buildings\n * Residential Buildings\n * Annual Crop\n * Permanent Crop\n * River\n * Sea and Lake\n * Herbaceous Vegetation\n * Highway\n * Pasture\n * Forest\n\n This dataset uses the train/val/test splits defined in the \"In-domain representation\n learning for remote sensing\" paper:\n\n * https://arxiv.org/abs/1911.06721\n\n If you use this dataset in your research, please cite the following papers:\n\n * https://ieeexplore.ieee.org/document/8736785\n * https://ieeexplore.ieee.org/document/8519248\n \"\"\"\n\n # TODO: Change to https after https://github.com/phelber/EuroSAT/issues/10 is\n # resolved\n url = \"http://madm.dfki.de/files/sentinel/EuroSATallBands.zip\" # 2.0 GB download\n filename = \"EuroSATallBands.zip\"\n md5 = \"5ac12b3b2557aa56e1826e981e8e200e\"\n\n # For some reason the class directories are actually nested in this directory\n base_dir = os.path.join(\n \"ds\", \"images\", \"remote_sensing\", \"otherDatasets\", \"sentinel_2\", \"tif\"\n )\n\n splits = 
[\"train\", \"val\", \"test\"]\n split_urls = {\n \"train\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-train.txt\", # noqa: E501\n \"val\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-val.txt\", # noqa: E501\n \"test\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-test.txt\", # noqa: E501\n }\n split_md5s = {\n \"train\": \"908f142e73d6acdf3f482c5e80d851b1\",\n \"val\": \"95de90f2aa998f70a3b2416bfe0687b4\",\n \"test\": \"7ae5ab94471417b6e315763121e67c5f\",\n }\n classes = [\n \"Industrial Buildings\",\n \"Residential Buildings\",\n \"Annual Crop\",\n \"Permanent Crop\",\n \"River\",\n \"Sea and Lake\",\n \"Herbaceous Vegetation\",\n \"Highway\",\n \"Pasture\",\n \"Forest\",\n ]\n\n all_band_names = (\n \"B01\",\n \"B02\",\n \"B03\",\n \"B04\",\n \"B05\",\n \"B06\",\n \"B07\",\n \"B08\",\n \"B08A\",\n \"B09\",\n \"B10\",\n \"B11\",\n \"B12\",\n )\n\n rgb_bands = (\"B04\", \"B03\", \"B02\")\n\n BAND_SETS = {\"all\": all_band_names, \"rgb\": rgb_bands}\n\n # TODO: reset checksum to False after https://github.com/phelber/EuroSAT/issues/10\n # is resolved\n def __init__(\n self,\n root: str = \"data\",\n split: str = \"train\",\n bands: Sequence[str] = BAND_SETS[\"all\"],\n transforms: Optional[Callable[[Dict[str, Tensor]], Dict[str, Tensor]]] = None,\n download: bool = False,\n checksum: bool = True,\n ) -> None:\n \"\"\"Initialize a new EuroSAT dataset instance.\n\n Args:\n root: root directory where dataset can be found\n split: one of \"train\", \"val\", or \"test\"\n bands: a sequence of band names to load\n transforms: a function/transform that takes input sample and its target as\n entry and returns a transformed version\n download: if True, download dataset and store it in the root directory\n checksum: if True, check the MD5 of the downloaded files (may be slow)\n\n Raises:\n AssertionError: if ``split`` argument is invalid\n RuntimeError: if ``download=False`` and data is not found, or checksums\n don't match\n\n .. 
versionadded:: 0.3\n The *bands* parameter.\n \"\"\"\n self.root = root\n self.transforms = transforms\n self.download = download\n self.checksum = checksum\n\n assert split in [\"train\", \"val\", \"test\"]\n\n self._validate_bands(bands)\n self.bands = bands\n self.band_indices = Tensor(\n [self.all_band_names.index(b) for b in bands if b in self.all_band_names]\n ).long()\n\n self._verify()\n\n valid_fns = set()\n with open(os.path.join(self.root, f\"eurosat-{split}.txt\")) as f:\n for fn in f:\n valid_fns.add(fn.strip().replace(\".jpg\", \".tif\"))\n is_in_split: Callable[[str], bool] = lambda x: os.path.basename(x) in valid_fns\n\n super().__init__(\n root=os.path.join(root, self.base_dir),\n transforms=transforms,\n loader=rasterio_loader,\n is_valid_file=is_in_split,\n )\n\n def __getitem__(self, index: int) -> Dict[str, Tensor]:\n \"\"\"Return an index within the dataset.\n\n Args:\n index: index to return\n Returns:\n data and label at that index\n \"\"\"\n image, label = self._load_image(index)\n\n image = torch.index_select(image, dim=0, index=self.band_indices).float()\n sample = {\"image\": image, \"label\": label}\n\n if self.transforms is not None:\n sample = self.transforms(sample)\n\n return sample\n\n def _check_integrity(self) -> bool:\n \"\"\"Check integrity of dataset.\n\n Returns:\n True if dataset files are found and/or MD5s match, else False\n \"\"\"\n integrity: bool = check_integrity(\n os.path.join(self.root, self.filename), self.md5 if self.checksum else None\n )\n return integrity\n\n def _verify(self) -> None:\n \"\"\"Verify the integrity of the dataset.\n\n Raises:\n RuntimeError: if ``download=False`` but dataset is missing or checksum fails\n \"\"\"\n # Check if the files already exist\n filepath = os.path.join(self.root, self.base_dir)\n if os.path.exists(filepath):\n return\n\n # Check if zip file already exists (if so then extract)\n if self._check_integrity():\n self._extract()\n return\n\n # Check if the user requested to download the dataset\n if not self.download:\n raise RuntimeError(\n \"Dataset not found in `root` directory and `download=False`, \"\n \"either specify a different `root` directory or use `download=True` \"\n \"to automatically download the dataset.\"\n )\n\n # Download and extract the dataset\n self._download()\n self._extract()\n\n def _download(self) -> None:\n \"\"\"Download the dataset.\"\"\"\n download_url(\n self.url,\n self.root,\n filename=self.filename,\n md5=self.md5 if self.checksum else None,\n )\n for split in self.splits:\n download_url(\n self.split_urls[split],\n self.root,\n filename=f\"eurosat-{split}.txt\",\n md5=self.split_md5s[split] if self.checksum else None,\n )\n\n def _extract(self) -> None:\n \"\"\"Extract the dataset.\"\"\"\n filepath = os.path.join(self.root, self.filename)\n extract_archive(filepath)\n\n def _validate_bands(self, bands: Sequence[str]) -> None:\n \"\"\"Validate list of bands.\n\n Args:\n bands: user-provided sequence of bands to load\n\n Raises:\n AssertionError: if ``bands`` is not a sequence\n ValueError: if an invalid band name is provided\n\n .. 
versionadded:: 0.3\n \"\"\"\n assert isinstance(bands, Sequence), \"'bands' must be a sequence\"\n for band in bands:\n if band not in self.all_band_names:\n raise ValueError(f\"'{band}' is an invalid band name.\")\n\n def plot(\n self,\n sample: Dict[str, Tensor],\n show_titles: bool = True,\n suptitle: Optional[str] = None,\n ) -> plt.Figure:\n \"\"\"Plot a sample from the dataset.\n\n Args:\n sample: a sample returned by :meth:`NonGeoClassificationDataset.__getitem__`\n show_titles: flag indicating whether to show titles above each panel\n suptitle: optional string to use as a suptitle\n\n Returns:\n a matplotlib Figure with the rendered sample\n\n Raises:\n ValueError: if RGB bands are not found in dataset\n\n .. versionadded:: 0.2\n \"\"\"\n rgb_indices = []\n for band in self.rgb_bands:\n if band in self.bands:\n rgb_indices.append(self.bands.index(band))\n else:\n raise ValueError(\"Dataset doesn't contain some of the RGB bands\")\n\n image = np.take(sample[\"image\"].numpy(), indices=rgb_indices, axis=0)\n image = np.rollaxis(image, 0, 3)\n image = np.clip(image / 3000, 0, 1)\n\n label = cast(int, sample[\"label\"].item())\n label_class = self.classes[label]\n\n showing_predictions = \"prediction\" in sample\n if showing_predictions:\n prediction = cast(int, sample[\"prediction\"].item())\n prediction_class = self.classes[prediction]\n\n fig, ax = plt.subplots(figsize=(4, 4))\n ax.imshow(image)\n ax.axis(\"off\")\n if show_titles:\n title = f\"Label: {label_class}\"\n if showing_predictions:\n title += f\"\\nPrediction: {prediction_class}\"\n ax.set_title(title)\n\n if suptitle is not None:\n plt.suptitle(suptitle)\n return fig\n", "path": "torchgeo/datasets/eurosat.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"EuroSAT dataset.\"\"\"\n\nimport os\nfrom typing import Callable, Dict, Optional, Sequence, cast\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nfrom torch import Tensor\n\nfrom .geo import NonGeoClassificationDataset\nfrom .utils import check_integrity, download_url, extract_archive, rasterio_loader\n\n\nclass EuroSAT(NonGeoClassificationDataset):\n \"\"\"EuroSAT dataset.\n\n The `EuroSAT <https://github.com/phelber/EuroSAT>`__ dataset is based on Sentinel-2\n satellite images covering 13 spectral bands and consists of 10 target classes with\n a total of 27,000 labeled and geo-referenced images.\n\n Dataset format:\n\n * rasters are 13-channel GeoTiffs\n * labels are values in the range [0,9]\n\n Dataset classes:\n\n * Industrial Buildings\n * Residential Buildings\n * Annual Crop\n * Permanent Crop\n * River\n * Sea and Lake\n * Herbaceous Vegetation\n * Highway\n * Pasture\n * Forest\n\n This dataset uses the train/val/test splits defined in the \"In-domain representation\n learning for remote sensing\" paper:\n\n * https://arxiv.org/abs/1911.06721\n\n If you use this dataset in your research, please cite the following papers:\n\n * https://ieeexplore.ieee.org/document/8736785\n * https://ieeexplore.ieee.org/document/8519248\n \"\"\"\n\n url = \"https://huggingface.co/datasets/torchgeo/eurosat/resolve/main/EuroSATallBands.zip\" # noqa: E501\n filename = \"EuroSATallBands.zip\"\n md5 = \"5ac12b3b2557aa56e1826e981e8e200e\"\n\n # For some reason the class directories are actually nested in this directory\n base_dir = os.path.join(\n \"ds\", \"images\", \"remote_sensing\", \"otherDatasets\", \"sentinel_2\", \"tif\"\n )\n\n splits = [\"train\", \"val\", 
\"test\"]\n split_urls = {\n \"train\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-train.txt\", # noqa: E501\n \"val\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-val.txt\", # noqa: E501\n \"test\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-test.txt\", # noqa: E501\n }\n split_md5s = {\n \"train\": \"908f142e73d6acdf3f482c5e80d851b1\",\n \"val\": \"95de90f2aa998f70a3b2416bfe0687b4\",\n \"test\": \"7ae5ab94471417b6e315763121e67c5f\",\n }\n classes = [\n \"Industrial Buildings\",\n \"Residential Buildings\",\n \"Annual Crop\",\n \"Permanent Crop\",\n \"River\",\n \"Sea and Lake\",\n \"Herbaceous Vegetation\",\n \"Highway\",\n \"Pasture\",\n \"Forest\",\n ]\n\n all_band_names = (\n \"B01\",\n \"B02\",\n \"B03\",\n \"B04\",\n \"B05\",\n \"B06\",\n \"B07\",\n \"B08\",\n \"B08A\",\n \"B09\",\n \"B10\",\n \"B11\",\n \"B12\",\n )\n\n rgb_bands = (\"B04\", \"B03\", \"B02\")\n\n BAND_SETS = {\"all\": all_band_names, \"rgb\": rgb_bands}\n\n def __init__(\n self,\n root: str = \"data\",\n split: str = \"train\",\n bands: Sequence[str] = BAND_SETS[\"all\"],\n transforms: Optional[Callable[[Dict[str, Tensor]], Dict[str, Tensor]]] = None,\n download: bool = False,\n checksum: bool = False,\n ) -> None:\n \"\"\"Initialize a new EuroSAT dataset instance.\n\n Args:\n root: root directory where dataset can be found\n split: one of \"train\", \"val\", or \"test\"\n bands: a sequence of band names to load\n transforms: a function/transform that takes input sample and its target as\n entry and returns a transformed version\n download: if True, download dataset and store it in the root directory\n checksum: if True, check the MD5 of the downloaded files (may be slow)\n\n Raises:\n AssertionError: if ``split`` argument is invalid\n RuntimeError: if ``download=False`` and data is not found, or checksums\n don't match\n\n .. 
versionadded:: 0.3\n The *bands* parameter.\n \"\"\"\n self.root = root\n self.transforms = transforms\n self.download = download\n self.checksum = checksum\n\n assert split in [\"train\", \"val\", \"test\"]\n\n self._validate_bands(bands)\n self.bands = bands\n self.band_indices = Tensor(\n [self.all_band_names.index(b) for b in bands if b in self.all_band_names]\n ).long()\n\n self._verify()\n\n valid_fns = set()\n with open(os.path.join(self.root, f\"eurosat-{split}.txt\")) as f:\n for fn in f:\n valid_fns.add(fn.strip().replace(\".jpg\", \".tif\"))\n is_in_split: Callable[[str], bool] = lambda x: os.path.basename(x) in valid_fns\n\n super().__init__(\n root=os.path.join(root, self.base_dir),\n transforms=transforms,\n loader=rasterio_loader,\n is_valid_file=is_in_split,\n )\n\n def __getitem__(self, index: int) -> Dict[str, Tensor]:\n \"\"\"Return an index within the dataset.\n\n Args:\n index: index to return\n Returns:\n data and label at that index\n \"\"\"\n image, label = self._load_image(index)\n\n image = torch.index_select(image, dim=0, index=self.band_indices).float()\n sample = {\"image\": image, \"label\": label}\n\n if self.transforms is not None:\n sample = self.transforms(sample)\n\n return sample\n\n def _check_integrity(self) -> bool:\n \"\"\"Check integrity of dataset.\n\n Returns:\n True if dataset files are found and/or MD5s match, else False\n \"\"\"\n integrity: bool = check_integrity(\n os.path.join(self.root, self.filename), self.md5 if self.checksum else None\n )\n return integrity\n\n def _verify(self) -> None:\n \"\"\"Verify the integrity of the dataset.\n\n Raises:\n RuntimeError: if ``download=False`` but dataset is missing or checksum fails\n \"\"\"\n # Check if the files already exist\n filepath = os.path.join(self.root, self.base_dir)\n if os.path.exists(filepath):\n return\n\n # Check if zip file already exists (if so then extract)\n if self._check_integrity():\n self._extract()\n return\n\n # Check if the user requested to download the dataset\n if not self.download:\n raise RuntimeError(\n \"Dataset not found in `root` directory and `download=False`, \"\n \"either specify a different `root` directory or use `download=True` \"\n \"to automatically download the dataset.\"\n )\n\n # Download and extract the dataset\n self._download()\n self._extract()\n\n def _download(self) -> None:\n \"\"\"Download the dataset.\"\"\"\n download_url(\n self.url,\n self.root,\n filename=self.filename,\n md5=self.md5 if self.checksum else None,\n )\n for split in self.splits:\n download_url(\n self.split_urls[split],\n self.root,\n filename=f\"eurosat-{split}.txt\",\n md5=self.split_md5s[split] if self.checksum else None,\n )\n\n def _extract(self) -> None:\n \"\"\"Extract the dataset.\"\"\"\n filepath = os.path.join(self.root, self.filename)\n extract_archive(filepath)\n\n def _validate_bands(self, bands: Sequence[str]) -> None:\n \"\"\"Validate list of bands.\n\n Args:\n bands: user-provided sequence of bands to load\n\n Raises:\n AssertionError: if ``bands`` is not a sequence\n ValueError: if an invalid band name is provided\n\n .. 
versionadded:: 0.3\n \"\"\"\n assert isinstance(bands, Sequence), \"'bands' must be a sequence\"\n for band in bands:\n if band not in self.all_band_names:\n raise ValueError(f\"'{band}' is an invalid band name.\")\n\n def plot(\n self,\n sample: Dict[str, Tensor],\n show_titles: bool = True,\n suptitle: Optional[str] = None,\n ) -> plt.Figure:\n \"\"\"Plot a sample from the dataset.\n\n Args:\n sample: a sample returned by :meth:`NonGeoClassificationDataset.__getitem__`\n show_titles: flag indicating whether to show titles above each panel\n suptitle: optional string to use as a suptitle\n\n Returns:\n a matplotlib Figure with the rendered sample\n\n Raises:\n ValueError: if RGB bands are not found in dataset\n\n .. versionadded:: 0.2\n \"\"\"\n rgb_indices = []\n for band in self.rgb_bands:\n if band in self.bands:\n rgb_indices.append(self.bands.index(band))\n else:\n raise ValueError(\"Dataset doesn't contain some of the RGB bands\")\n\n image = np.take(sample[\"image\"].numpy(), indices=rgb_indices, axis=0)\n image = np.rollaxis(image, 0, 3)\n image = np.clip(image / 3000, 0, 1)\n\n label = cast(int, sample[\"label\"].item())\n label_class = self.classes[label]\n\n showing_predictions = \"prediction\" in sample\n if showing_predictions:\n prediction = cast(int, sample[\"prediction\"].item())\n prediction_class = self.classes[prediction]\n\n fig, ax = plt.subplots(figsize=(4, 4))\n ax.imshow(image)\n ax.axis(\"off\")\n if show_titles:\n title = f\"Label: {label_class}\"\n if showing_predictions:\n title += f\"\\nPrediction: {prediction_class}\"\n ax.set_title(title)\n\n if suptitle is not None:\n plt.suptitle(suptitle)\n return fig\n", "path": "torchgeo/datasets/eurosat.py"}]}
| 3,685 | 404 |
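For users pinned to a torchgeo release that still ships the old download URL, one possible workaround is to point the class at the mirror adopted by the patch above before instantiating the dataset. This is only a sketch: the mirror URL comes from the patch and the MD5 is unchanged, so the default checksum verification should still pass.

```python
# Hypothetical workaround for torchgeo 0.4.0: override the class-level URL
# with the HuggingFace mirror introduced by the patch above.
from torchgeo.datasets import EuroSAT

EuroSAT.url = (
    "https://huggingface.co/datasets/torchgeo/eurosat/resolve/main/"
    "EuroSATallBands.zip"
)
eurosat_train = EuroSAT(download=True)
```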
gh_patches_debug_2793 | rasdani/github-patches | git_diff | kornia__kornia-579 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Backslash not showing in equation for Tversky Loss
## 📚 Documentation
The backslashes in the denominator of the Tversky loss equation are not rendered correctly with MathJax. As shown in this screenshot, it only inserts a little space between P and G and thus does not correctly render the equation.

<!-- A clear and concise description of what content in https://kornia.readthedocs.io is an issue. If this has to do with the general https://kornia.org website, please file an issue at https://github.com/kornia/kornia.github.io/issues/new/choose instead. If this has to do with https://kornia.org/tutorials, please file an issue at https://github.com/kornia/tutorials/issues/new -->
--- END ISSUE ---
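For reference, the expression renders as intended once the set differences are written with `\setminus` (this matches the patch further below):

```latex
\text{S}(P, G, \alpha; \beta) =
    \frac{|PG|}{|PG| + \alpha |P \setminus G| + \beta |G \setminus P|}
```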
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/losses/tversky.py`
Content:
```
1 from typing import Optional
2
3 import torch
4 import torch.nn as nn
5 import torch.nn.functional as F
6
7 from kornia.utils import one_hot
8
9 # based on:
10 # https://github.com/kevinzakka/pytorch-goodies/blob/master/losses.py
11
12
13 def tversky_loss(input: torch.Tensor, target: torch.Tensor,
14 alpha: float, beta: float, eps: float = 1e-8) -> torch.Tensor:
15 r"""Function that computes Tversky loss.
16
17 See :class:`~kornia.losses.TverskyLoss` for details.
18 """
19 if not torch.is_tensor(input):
20 raise TypeError("Input type is not a torch.Tensor. Got {}"
21 .format(type(input)))
22
23 if not len(input.shape) == 4:
24 raise ValueError("Invalid input shape, we expect BxNxHxW. Got: {}"
25 .format(input.shape))
26
27 if not input.shape[-2:] == target.shape[-2:]:
28 raise ValueError("input and target shapes must be the same. Got: {} and {}"
29 .format(input.shape, input.shape))
30
31 if not input.device == target.device:
32 raise ValueError(
33 "input and target must be in the same device. Got: {} and {}" .format(
34 input.device, target.device))
35
36 # compute softmax over the classes axis
37 input_soft: torch.Tensor = F.softmax(input, dim=1)
38
39 # create the labels one hot tensor
40 target_one_hot: torch.Tensor = one_hot(
41 target, num_classes=input.shape[1],
42 device=input.device, dtype=input.dtype)
43
44 # compute the actual dice score
45 dims = (1, 2, 3)
46 intersection = torch.sum(input_soft * target_one_hot, dims)
47 fps = torch.sum(input_soft * (-target_one_hot + 1.), dims)
48 fns = torch.sum((-input_soft + 1.) * target_one_hot, dims)
49
50 numerator = intersection
51 denominator = intersection + alpha * fps + beta * fns
52 tversky_loss = numerator / (denominator + eps)
53 return torch.mean(-tversky_loss + 1.)
54
55
56 class TverskyLoss(nn.Module):
57 r"""Criterion that computes Tversky Coeficient loss.
58
59 According to [1], we compute the Tversky Coefficient as follows:
60
61 .. math::
62
63 \text{S}(P, G, \alpha; \beta) =
64 \frac{|PG|}{|PG| + \alpha |P \ G| + \beta |G \ P|}
65
66 where:
67 - :math:`P` and :math:`G` are the predicted and ground truth binary
68 labels.
69 - :math:`\alpha` and :math:`\beta` control the magnitude of the
70 penalties for FPs and FNs, respectively.
71
72 Notes:
73 - :math:`\alpha = \beta = 0.5` => dice coeff
74 - :math:`\alpha = \beta = 1` => tanimoto coeff
75 - :math:`\alpha + \beta = 1` => F beta coeff
76
77 Shape:
78 - Input: :math:`(N, C, H, W)` where C = number of classes.
79 - Target: :math:`(N, H, W)` where each value is
80 :math:`0 ≤ targets[i] ≤ C−1`.
81
82 Examples:
83 >>> N = 5 # num_classes
84 >>> loss = kornia.losses.TverskyLoss(alpha=0.5, beta=0.5)
85 >>> input = torch.randn(1, N, 3, 5, requires_grad=True)
86 >>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
87 >>> output = loss(input, target)
88 >>> output.backward()
89
90 References:
91 [1]: https://arxiv.org/abs/1706.05721
92 """
93
94 def __init__(self, alpha: float, beta: float, eps: float = 1e-8) -> None:
95 super(TverskyLoss, self).__init__()
96 self.alpha: float = alpha
97 self.beta: float = beta
98 self.eps: float = eps
99
100 def forward( # type: ignore
101 self,
102 input: torch.Tensor,
103 target: torch.Tensor) -> torch.Tensor:
104 return tversky_loss(input, target, self.alpha, self.beta, self.eps)
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
```diff
diff --git a/kornia/losses/tversky.py b/kornia/losses/tversky.py
--- a/kornia/losses/tversky.py
+++ b/kornia/losses/tversky.py
@@ -61,7 +61,7 @@
.. math::
\text{S}(P, G, \alpha; \beta) =
- \frac{|PG|}{|PG| + \alpha |P \ G| + \beta |G \ P|}
+ \frac{|PG|}{|PG| + \alpha |P \setminus G| + \beta |G \setminus P|}
where:
- :math:`P` and :math:`G` are the predicted and ground truth binary
```
|
{"golden_diff": "diff --git a/kornia/losses/tversky.py b/kornia/losses/tversky.py\n--- a/kornia/losses/tversky.py\n+++ b/kornia/losses/tversky.py\n@@ -61,7 +61,7 @@\n .. math::\n \n \\text{S}(P, G, \\alpha; \\beta) =\n- \\frac{|PG|}{|PG| + \\alpha |P \\ G| + \\beta |G \\ P|}\n+ \\frac{|PG|}{|PG| + \\alpha |P \\setminus G| + \\beta |G \\setminus P|}\n \n where:\n - :math:`P` and :math:`G` are the predicted and ground truth binary\n", "issue": "Backslash not showing in equation for Tversky Loss\n## \ud83d\udcda Documentation\r\n\r\nThe backslashs in the denominator of the Tversky loss equation is not rendered correctly with MathJax. As shown in this screenshot it only inserts a little space between P and G and thus does not correctly render the equation.\r\n\r\n\r\n\r\n\r\n<!-- A clear and concise description of what content in https://kornia.readthedocs.io is an issue. If this has to do with the general https://kornia.org website, please file an issue at https://github.com/kornia/kornia.github.io/issues/new/choose instead. If this has to do with https://kornia.org/tutorials, please file an issue at https://github.com/kornia/tutorials/issues/new -->\r\n\r\n\n", "before_files": [{"content": "from typing import Optional\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom kornia.utils import one_hot\n\n# based on:\n# https://github.com/kevinzakka/pytorch-goodies/blob/master/losses.py\n\n\ndef tversky_loss(input: torch.Tensor, target: torch.Tensor,\n alpha: float, beta: float, eps: float = 1e-8) -> torch.Tensor:\n r\"\"\"Function that computes Tversky loss.\n\n See :class:`~kornia.losses.TverskyLoss` for details.\n \"\"\"\n if not torch.is_tensor(input):\n raise TypeError(\"Input type is not a torch.Tensor. Got {}\"\n .format(type(input)))\n\n if not len(input.shape) == 4:\n raise ValueError(\"Invalid input shape, we expect BxNxHxW. Got: {}\"\n .format(input.shape))\n\n if not input.shape[-2:] == target.shape[-2:]:\n raise ValueError(\"input and target shapes must be the same. Got: {} and {}\"\n .format(input.shape, input.shape))\n\n if not input.device == target.device:\n raise ValueError(\n \"input and target must be in the same device. Got: {} and {}\" .format(\n input.device, target.device))\n\n # compute softmax over the classes axis\n input_soft: torch.Tensor = F.softmax(input, dim=1)\n\n # create the labels one hot tensor\n target_one_hot: torch.Tensor = one_hot(\n target, num_classes=input.shape[1],\n device=input.device, dtype=input.dtype)\n\n # compute the actual dice score\n dims = (1, 2, 3)\n intersection = torch.sum(input_soft * target_one_hot, dims)\n fps = torch.sum(input_soft * (-target_one_hot + 1.), dims)\n fns = torch.sum((-input_soft + 1.) * target_one_hot, dims)\n\n numerator = intersection\n denominator = intersection + alpha * fps + beta * fns\n tversky_loss = numerator / (denominator + eps)\n return torch.mean(-tversky_loss + 1.)\n\n\nclass TverskyLoss(nn.Module):\n r\"\"\"Criterion that computes Tversky Coeficient loss.\n\n According to [1], we compute the Tversky Coefficient as follows:\n\n .. 
math::\n\n \\text{S}(P, G, \\alpha; \\beta) =\n \\frac{|PG|}{|PG| + \\alpha |P \\ G| + \\beta |G \\ P|}\n\n where:\n - :math:`P` and :math:`G` are the predicted and ground truth binary\n labels.\n - :math:`\\alpha` and :math:`\\beta` control the magnitude of the\n penalties for FPs and FNs, respectively.\n\n Notes:\n - :math:`\\alpha = \\beta = 0.5` => dice coeff\n - :math:`\\alpha = \\beta = 1` => tanimoto coeff\n - :math:`\\alpha + \\beta = 1` => F beta coeff\n\n Shape:\n - Input: :math:`(N, C, H, W)` where C = number of classes.\n - Target: :math:`(N, H, W)` where each value is\n :math:`0 \u2264 targets[i] \u2264 C\u22121`.\n\n Examples:\n >>> N = 5 # num_classes\n >>> loss = kornia.losses.TverskyLoss(alpha=0.5, beta=0.5)\n >>> input = torch.randn(1, N, 3, 5, requires_grad=True)\n >>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)\n >>> output = loss(input, target)\n >>> output.backward()\n\n References:\n [1]: https://arxiv.org/abs/1706.05721\n \"\"\"\n\n def __init__(self, alpha: float, beta: float, eps: float = 1e-8) -> None:\n super(TverskyLoss, self).__init__()\n self.alpha: float = alpha\n self.beta: float = beta\n self.eps: float = eps\n\n def forward( # type: ignore\n self,\n input: torch.Tensor,\n target: torch.Tensor) -> torch.Tensor:\n return tversky_loss(input, target, self.alpha, self.beta, self.eps)\n", "path": "kornia/losses/tversky.py"}], "after_files": [{"content": "from typing import Optional\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom kornia.utils import one_hot\n\n# based on:\n# https://github.com/kevinzakka/pytorch-goodies/blob/master/losses.py\n\n\ndef tversky_loss(input: torch.Tensor, target: torch.Tensor,\n alpha: float, beta: float, eps: float = 1e-8) -> torch.Tensor:\n r\"\"\"Function that computes Tversky loss.\n\n See :class:`~kornia.losses.TverskyLoss` for details.\n \"\"\"\n if not torch.is_tensor(input):\n raise TypeError(\"Input type is not a torch.Tensor. Got {}\"\n .format(type(input)))\n\n if not len(input.shape) == 4:\n raise ValueError(\"Invalid input shape, we expect BxNxHxW. Got: {}\"\n .format(input.shape))\n\n if not input.shape[-2:] == target.shape[-2:]:\n raise ValueError(\"input and target shapes must be the same. Got: {} and {}\"\n .format(input.shape, input.shape))\n\n if not input.device == target.device:\n raise ValueError(\n \"input and target must be in the same device. Got: {} and {}\" .format(\n input.device, target.device))\n\n # compute softmax over the classes axis\n input_soft: torch.Tensor = F.softmax(input, dim=1)\n\n # create the labels one hot tensor\n target_one_hot: torch.Tensor = one_hot(\n target, num_classes=input.shape[1],\n device=input.device, dtype=input.dtype)\n\n # compute the actual dice score\n dims = (1, 2, 3)\n intersection = torch.sum(input_soft * target_one_hot, dims)\n fps = torch.sum(input_soft * (-target_one_hot + 1.), dims)\n fns = torch.sum((-input_soft + 1.) * target_one_hot, dims)\n\n numerator = intersection\n denominator = intersection + alpha * fps + beta * fns\n tversky_loss = numerator / (denominator + eps)\n return torch.mean(-tversky_loss + 1.)\n\n\nclass TverskyLoss(nn.Module):\n r\"\"\"Criterion that computes Tversky Coeficient loss.\n\n According to [1], we compute the Tversky Coefficient as follows:\n\n .. 
math::\n\n \\text{S}(P, G, \\alpha; \\beta) =\n \\frac{|PG|}{|PG| + \\alpha |P \\setminus G| + \\beta |G \\setminus P|}\n\n where:\n - :math:`P` and :math:`G` are the predicted and ground truth binary\n labels.\n - :math:`\\alpha` and :math:`\\beta` control the magnitude of the\n penalties for FPs and FNs, respectively.\n\n Notes:\n - :math:`\\alpha = \\beta = 0.5` => dice coeff\n - :math:`\\alpha = \\beta = 1` => tanimoto coeff\n - :math:`\\alpha + \\beta = 1` => F beta coeff\n\n Shape:\n - Input: :math:`(N, C, H, W)` where C = number of classes.\n - Target: :math:`(N, H, W)` where each value is\n :math:`0 \u2264 targets[i] \u2264 C\u22121`.\n\n Examples:\n >>> N = 5 # num_classes\n >>> loss = kornia.losses.TverskyLoss(alpha=0.5, beta=0.5)\n >>> input = torch.randn(1, N, 3, 5, requires_grad=True)\n >>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)\n >>> output = loss(input, target)\n >>> output.backward()\n\n References:\n [1]: https://arxiv.org/abs/1706.05721\n \"\"\"\n\n def __init__(self, alpha: float, beta: float, eps: float = 1e-8) -> None:\n super(TverskyLoss, self).__init__()\n self.alpha: float = alpha\n self.beta: float = beta\n self.eps: float = eps\n\n def forward( # type: ignore\n self,\n input: torch.Tensor,\n target: torch.Tensor) -> torch.Tensor:\n return tversky_loss(input, target, self.alpha, self.beta, self.eps)\n", "path": "kornia/losses/tversky.py"}]}
| 1,677 | 169 |
gh_patches_debug_18960
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-9119
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Locked pages report renders all users in "locked_by" filter
### Issue Summary
Currently, the select field for the "locked_by" filter on the "Locked pages" report renders every user from the user table, unfiltered.
This breaks the view for one of our projects, which tries to list all 500k users...
Actually, only the users who have locked a page should be listed.
I will try to make a PR for that; in the meantime, this quickfix works for us (with wagtail 4.0, django 3.2):
```python
# in one of our apps.py
from django.apps import AppConfig
class BaseConfig(AppConfig):
name = "base"
def ready(self):
patch_locked_pages_filter()
def patch_locked_pages_filter():
import django_filters
from django.contrib.auth import get_user_model
from wagtail.admin.views.reports.locked_pages import LockedPagesView, LockedPagesReportFilterSet
def get_users_for_filter():
User = get_user_model()
return User.objects.filter(locked_pages__isnull=False).order_by(User.USERNAME_FIELD)
class PatchedLockedPagesReportFilterSet(LockedPagesReportFilterSet):
locked_by = django_filters.ModelChoiceFilter(
field_name="locked_by", queryset=lambda request: get_users_for_filter()
)
LockedPagesView.filterset_class = PatchedLockedPagesReportFilterSet
```
### Steps to Reproduce
1. Start a new project with `wagtail start myproject`
2. Create many users, no matter if they have access to the CMS or not
3. Open `/admin/reports/locked/` and find all those users in the "locked_by" dropdown on the right
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.10
- Django version: 3.2
- Wagtail version: 4.0, 2.15, ... (according to the git history, it seems to have been there from the beginning)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/admin/views/reports/locked_pages.py`
Content:
```
1 import datetime
2
3 import django_filters
4 from django.conf import settings
5 from django.core.exceptions import PermissionDenied
6 from django.utils.translation import gettext_lazy as _
7
8 from wagtail.admin.filters import DateRangePickerWidget, WagtailFilterSet
9 from wagtail.models import Page, UserPagePermissionsProxy
10
11 from .base import PageReportView
12
13
14 class LockedPagesReportFilterSet(WagtailFilterSet):
15 locked_at = django_filters.DateFromToRangeFilter(widget=DateRangePickerWidget)
16
17 class Meta:
18 model = Page
19 fields = ["locked_by", "locked_at", "live"]
20
21
22 class LockedPagesView(PageReportView):
23 template_name = "wagtailadmin/reports/locked_pages.html"
24 title = _("Locked pages")
25 header_icon = "lock"
26 list_export = PageReportView.list_export + [
27 "locked_at",
28 "locked_by",
29 ]
30 filterset_class = LockedPagesReportFilterSet
31
32 def get_filename(self):
33 return "locked-pages-report-{}".format(
34 datetime.datetime.today().strftime("%Y-%m-%d")
35 )
36
37 def get_queryset(self):
38 pages = (
39 (
40 UserPagePermissionsProxy(self.request.user).editable_pages()
41 | Page.objects.filter(locked_by=self.request.user)
42 )
43 .filter(locked=True)
44 .specific(defer=True)
45 )
46
47 if getattr(settings, "WAGTAIL_I18N_ENABLED", False):
48 pages = pages.select_related("locale")
49
50 self.queryset = pages
51 return super().get_queryset()
52
53 def dispatch(self, request, *args, **kwargs):
54 if not UserPagePermissionsProxy(request.user).can_remove_locks():
55 raise PermissionDenied
56 return super().dispatch(request, *args, **kwargs)
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/admin/views/reports/locked_pages.py b/wagtail/admin/views/reports/locked_pages.py
--- a/wagtail/admin/views/reports/locked_pages.py
+++ b/wagtail/admin/views/reports/locked_pages.py
@@ -2,6 +2,7 @@
import django_filters
from django.conf import settings
+from django.contrib.auth import get_user_model
from django.core.exceptions import PermissionDenied
from django.utils.translation import gettext_lazy as _
@@ -11,8 +12,16 @@
from .base import PageReportView
+def get_users_for_filter():
+ User = get_user_model()
+ return User.objects.filter(locked_pages__isnull=False).order_by(User.USERNAME_FIELD)
+
+
class LockedPagesReportFilterSet(WagtailFilterSet):
locked_at = django_filters.DateFromToRangeFilter(widget=DateRangePickerWidget)
+ locked_by = django_filters.ModelChoiceFilter(
+ field_name="locked_by", queryset=lambda request: get_users_for_filter()
+ )
class Meta:
model = Page
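The fix relies on django-filter's support for callable querysets: when `queryset` is a callable, it is invoked with the current request, so the list of users is computed per request rather than once at import time. A simplified sketch of the behaviour being relied on (illustrative only, not the actual django-filter source):

```python
# Simplified illustration of how a ModelChoiceFilter with a callable queryset
# is resolved; the real logic lives inside django-filter itself.
def resolve_filter_queryset(queryset, request):
    # A callable receives the current request and must return a QuerySet.
    if callable(queryset):
        return queryset(request)
    return queryset
```

This is why `queryset=lambda request: get_users_for_filter()` in the patch (and in the quickfix from the issue) picks up newly locked or unlocked pages on every page load.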
|
{"golden_diff": "diff --git a/wagtail/admin/views/reports/locked_pages.py b/wagtail/admin/views/reports/locked_pages.py\n--- a/wagtail/admin/views/reports/locked_pages.py\n+++ b/wagtail/admin/views/reports/locked_pages.py\n@@ -2,6 +2,7 @@\n \n import django_filters\n from django.conf import settings\n+from django.contrib.auth import get_user_model\n from django.core.exceptions import PermissionDenied\n from django.utils.translation import gettext_lazy as _\n \n@@ -11,8 +12,16 @@\n from .base import PageReportView\n \n \n+def get_users_for_filter():\n+ User = get_user_model()\n+ return User.objects.filter(locked_pages__isnull=False).order_by(User.USERNAME_FIELD)\n+\n+\n class LockedPagesReportFilterSet(WagtailFilterSet):\n locked_at = django_filters.DateFromToRangeFilter(widget=DateRangePickerWidget)\n+ locked_by = django_filters.ModelChoiceFilter(\n+ field_name=\"locked_by\", queryset=lambda request: get_users_for_filter()\n+ )\n \n class Meta:\n model = Page\n", "issue": "Locked pages report renders all users in \"locked_by\" filter\n### Issue Summary\r\n\r\nCurrently, the select-field for the \"locked_by\" filter on the \"Locked pages\"-report renders all users unfiltered from the user table.\r\nThis breaks the view for one of our projects, which tries to list all 500k users...\r\nActually, only the users which have locked a page should be listed.\r\n\r\nI will try to make a PR for that, in the meantime this quickfix works for us (with wagtail 4.0, django 3.2):\r\n```python\r\n# in one of our apps.py\r\nfrom django.apps import AppConfig\r\n\r\nclass BaseConfig(AppConfig):\r\n name = \"base\"\r\n\r\n def ready(self):\r\n patch_locked_pages_filter()\r\n\r\ndef patch_locked_pages_filter():\r\n import django_filters\r\n from django.contrib.auth import get_user_model\r\n\r\n from wagtail.admin.views.reports.locked_pages import LockedPagesView, LockedPagesReportFilterSet\r\n\r\n def get_users_for_filter():\r\n User = get_user_model()\r\n return User.objects.filter(locked_pages__isnull=False).order_by(User.USERNAME_FIELD)\r\n\r\n class PatchedLockedPagesReportFilterSet(LockedPagesReportFilterSet):\r\n locked_by = django_filters.ModelChoiceFilter(\r\n field_name=\"locked_by\", queryset=lambda request: get_users_for_filter()\r\n )\r\n\r\n LockedPagesView.filterset_class = PatchedLockedPagesReportFilterSet\r\n\r\n```\r\n\r\n### Steps to Reproduce\r\n\r\n1. Start a new project with `wagtail start myproject`\r\n2. Create many users, no matter if they have access to the CMS or not\r\n3. Open `/admin/reports/locked/` and find all those users in the \"locked_by\" dropdown on the right\r\n\r\n- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes\r\n\r\n### Technical details\r\n\r\n- Python version: 3.10\r\n- Django version: 3.2\r\n- Wagtail version: 4.0, 2.15,... 
(according to the git history seems to be there from the beginning)\r\n\n", "before_files": [{"content": "import datetime\n\nimport django_filters\nfrom django.conf import settings\nfrom django.core.exceptions import PermissionDenied\nfrom django.utils.translation import gettext_lazy as _\n\nfrom wagtail.admin.filters import DateRangePickerWidget, WagtailFilterSet\nfrom wagtail.models import Page, UserPagePermissionsProxy\n\nfrom .base import PageReportView\n\n\nclass LockedPagesReportFilterSet(WagtailFilterSet):\n locked_at = django_filters.DateFromToRangeFilter(widget=DateRangePickerWidget)\n\n class Meta:\n model = Page\n fields = [\"locked_by\", \"locked_at\", \"live\"]\n\n\nclass LockedPagesView(PageReportView):\n template_name = \"wagtailadmin/reports/locked_pages.html\"\n title = _(\"Locked pages\")\n header_icon = \"lock\"\n list_export = PageReportView.list_export + [\n \"locked_at\",\n \"locked_by\",\n ]\n filterset_class = LockedPagesReportFilterSet\n\n def get_filename(self):\n return \"locked-pages-report-{}\".format(\n datetime.datetime.today().strftime(\"%Y-%m-%d\")\n )\n\n def get_queryset(self):\n pages = (\n (\n UserPagePermissionsProxy(self.request.user).editable_pages()\n | Page.objects.filter(locked_by=self.request.user)\n )\n .filter(locked=True)\n .specific(defer=True)\n )\n\n if getattr(settings, \"WAGTAIL_I18N_ENABLED\", False):\n pages = pages.select_related(\"locale\")\n\n self.queryset = pages\n return super().get_queryset()\n\n def dispatch(self, request, *args, **kwargs):\n if not UserPagePermissionsProxy(request.user).can_remove_locks():\n raise PermissionDenied\n return super().dispatch(request, *args, **kwargs)\n", "path": "wagtail/admin/views/reports/locked_pages.py"}], "after_files": [{"content": "import datetime\n\nimport django_filters\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.core.exceptions import PermissionDenied\nfrom django.utils.translation import gettext_lazy as _\n\nfrom wagtail.admin.filters import DateRangePickerWidget, WagtailFilterSet\nfrom wagtail.models import Page, UserPagePermissionsProxy\n\nfrom .base import PageReportView\n\n\ndef get_users_for_filter():\n User = get_user_model()\n return User.objects.filter(locked_pages__isnull=False).order_by(User.USERNAME_FIELD)\n\n\nclass LockedPagesReportFilterSet(WagtailFilterSet):\n locked_at = django_filters.DateFromToRangeFilter(widget=DateRangePickerWidget)\n locked_by = django_filters.ModelChoiceFilter(\n field_name=\"locked_by\", queryset=lambda request: get_users_for_filter()\n )\n\n class Meta:\n model = Page\n fields = [\"locked_by\", \"locked_at\", \"live\"]\n\n\nclass LockedPagesView(PageReportView):\n template_name = \"wagtailadmin/reports/locked_pages.html\"\n title = _(\"Locked pages\")\n header_icon = \"lock\"\n list_export = PageReportView.list_export + [\n \"locked_at\",\n \"locked_by\",\n ]\n filterset_class = LockedPagesReportFilterSet\n\n def get_filename(self):\n return \"locked-pages-report-{}\".format(\n datetime.datetime.today().strftime(\"%Y-%m-%d\")\n )\n\n def get_queryset(self):\n pages = (\n (\n UserPagePermissionsProxy(self.request.user).editable_pages()\n | Page.objects.filter(locked_by=self.request.user)\n )\n .filter(locked=True)\n .specific(defer=True)\n )\n\n if getattr(settings, \"WAGTAIL_I18N_ENABLED\", False):\n pages = pages.select_related(\"locale\")\n\n self.queryset = pages\n return super().get_queryset()\n\n def dispatch(self, request, *args, **kwargs):\n if not 
UserPagePermissionsProxy(request.user).can_remove_locks():\n raise PermissionDenied\n return super().dispatch(request, *args, **kwargs)\n", "path": "wagtail/admin/views/reports/locked_pages.py"}]}
| 1,186 | 236 |
gh_patches_debug_207
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-239
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add type check to Identity Function
Related to #123
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/functions/identity.py`
Content:
```
1 from chainer import function
2
3
4 class Identity(function.Function):
5
6 """Identity function."""
7
8 def forward(self, xs):
9 return xs
10
11 def backward(self, xs, gys):
12 return gys
13
14
15 def identity(*inputs):
16 """Just returns input variables."""
17 return Identity()(*inputs)
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/chainer/functions/identity.py b/chainer/functions/identity.py
--- a/chainer/functions/identity.py
+++ b/chainer/functions/identity.py
@@ -5,6 +5,9 @@
"""Identity function."""
+ def check_type_forward(self, in_types):
+ pass
+
def forward(self, xs):
return xs
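The golden fix makes `check_type_forward` an explicit no-op, which is appropriate because `Identity` accepts any number of inputs of any type. For comparison only, a hypothetical non-trivial check written with Chainer's `type_check` utilities (not part of the actual patch) might look like:

```python
from chainer.utils import type_check

from chainer.functions.identity import Identity


class CheckedIdentity(Identity):
    # Hypothetical variant, for comparison only.
    def check_type_forward(self, in_types):
        # Identity places no constraints on dtype or shape; at most one could
        # assert that at least one input was given.
        type_check.expect(in_types.size() > 0)
```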
|
{"golden_diff": "diff --git a/chainer/functions/identity.py b/chainer/functions/identity.py\n--- a/chainer/functions/identity.py\n+++ b/chainer/functions/identity.py\n@@ -5,6 +5,9 @@\n \n \"\"\"Identity function.\"\"\"\n \n+ def check_type_forward(self, in_types):\n+ pass\n+\n def forward(self, xs):\n return xs\n", "issue": "Add type check to Identity Function\nRelated to #123\n\n", "before_files": [{"content": "from chainer import function\n\n\nclass Identity(function.Function):\n\n \"\"\"Identity function.\"\"\"\n\n def forward(self, xs):\n return xs\n\n def backward(self, xs, gys):\n return gys\n\n\ndef identity(*inputs):\n \"\"\"Just returns input variables.\"\"\"\n return Identity()(*inputs)\n", "path": "chainer/functions/identity.py"}], "after_files": [{"content": "from chainer import function\n\n\nclass Identity(function.Function):\n\n \"\"\"Identity function.\"\"\"\n\n def check_type_forward(self, in_types):\n pass\n\n def forward(self, xs):\n return xs\n\n def backward(self, xs, gys):\n return gys\n\n\ndef identity(*inputs):\n \"\"\"Just returns input variables.\"\"\"\n return Identity()(*inputs)\n", "path": "chainer/functions/identity.py"}]}
| 369 | 80 |
gh_patches_debug_44307
|
rasdani/github-patches
|
git_diff
|
qutip__qutip-285
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add progressbar for propagator?
It is really great to use qutip 3.0 in an IPython notebook with the HTMLProgressBar, especially when using mesolve or essolve. Unfortunately there is no progressbar when using the propagator function.
Would it be possible to implement this for propagator (and possibly other functions), since it calls mesolve, which already has it implemented?
Thanks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutip/propagator.py`
Content:
```
1 # This file is part of QuTiP: Quantum Toolbox in Python.
2 #
3 # Copyright (c) 2011 and later, Paul D. Nation and Robert J. Johansson.
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are
8 # met:
9 #
10 # 1. Redistributions of source code must retain the above copyright notice,
11 # this list of conditions and the following disclaimer.
12 #
13 # 2. Redistributions in binary form must reproduce the above copyright
14 # notice, this list of conditions and the following disclaimer in the
15 # documentation and/or other materials provided with the distribution.
16 #
17 # 3. Neither the name of the QuTiP: Quantum Toolbox in Python nor the names
18 # of its contributors may be used to endorse or promote products derived
19 # from this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
22 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
23 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
24 # PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
25 # HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
26 # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
27 # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
28 # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
29 # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
30 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
31 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
32 ###############################################################################
33
34 __all__ = ['propagator', 'propagator_steadystate']
35
36 import types
37 import numpy as np
38 import scipy.linalg as la
39 import functools
40
41 from qutip.qobj import Qobj
42 from qutip.rhs_generate import rhs_clear
43 from qutip.superoperator import (vec2mat, mat2vec,
44 vector_to_operator, operator_to_vector)
45 from qutip.mesolve import mesolve
46 from qutip.sesolve import sesolve
47 from qutip.states import basis
48 from qutip.solver import Options
49
50
51 def propagator(H, t, c_op_list, args=None, options=None, sparse=False):
52 """
53 Calculate the propagator U(t) for the density matrix or wave function such
54 that :math:`\psi(t) = U(t)\psi(0)` or
55 :math:`\\rho_{\mathrm vec}(t) = U(t) \\rho_{\mathrm vec}(0)`
56 where :math:`\\rho_{\mathrm vec}` is the vector representation of the
57 density matrix.
58
59 Parameters
60 ----------
61 H : qobj or list
62 Hamiltonian as a Qobj instance of a nested list of Qobjs and
63 coefficients in the list-string or list-function format for
64 time-dependent Hamiltonians (see description in :func:`qutip.mesolve`).
65
66 t : float or array-like
67 Time or list of times for which to evaluate the propagator.
68
69 c_op_list : list
70 List of qobj collapse operators.
71
72 args : list/array/dictionary
73 Parameters to callback functions for time-dependent Hamiltonians and
74 collapse operators.
75
76 options : :class:`qutip.Options`
77 with options for the ODE solver.
78
79 Returns
80 -------
81 a : qobj
82 Instance representing the propagator :math:`U(t)`.
83
84 """
85
86 if options is None:
87 options = Options()
88 options.rhs_reuse = True
89 rhs_clear()
90
91 if isinstance(t, (int, float, np.integer, np.floating)):
92 tlist = [0, t]
93 else:
94 tlist = t
95
96 if isinstance(H, (types.FunctionType, types.BuiltinFunctionType,
97 functools.partial)):
98 H0 = H(0.0, args)
99 elif isinstance(H, list):
100 H0 = H[0][0] if isinstance(H[0], list) else H[0]
101 else:
102 H0 = H
103
104 if len(c_op_list) == 0 and H0.isoper:
105 # calculate propagator for the wave function
106
107 N = H0.shape[0]
108 dims = H0.dims
109 u = np.zeros([N, N, len(tlist)], dtype=complex)
110
111 for n in range(0, N):
112 psi0 = basis(N, n)
113 output = sesolve(H, psi0, tlist, [], args, options)
114 for k, t in enumerate(tlist):
115 u[:, n, k] = output.states[k].full().T
116
117 # todo: evolving a batch of wave functions:
118 # psi_0_list = [basis(N, n) for n in range(N)]
119 # psi_t_list = mesolve(H, psi_0_list, [0, t], [], [], args, options)
120 # for n in range(0, N):
121 # u[:,n] = psi_t_list[n][1].full().T
122
123 elif len(c_op_list) == 0 and H0.issuper:
124 # calculate the propagator for the vector representation of the
125 # density matrix (a superoperator propagator)
126
127 N = H0.shape[0]
128 dims = H0.dims
129
130 u = np.zeros([N, N, len(tlist)], dtype=complex)
131
132 for n in range(0, N):
133 psi0 = basis(N, n)
134 rho0 = Qobj(vec2mat(psi0.full()))
135 output = mesolve(H, rho0, tlist, [], [], args, options)
136 for k, t in enumerate(tlist):
137 u[:, n, k] = mat2vec(output.states[k].full()).T
138
139 else:
140 # calculate the propagator for the vector representation of the
141 # density matrix (a superoperator propagator)
142
143 N = H0.shape[0]
144 dims = [H0.dims, H0.dims]
145
146 u = np.zeros([N * N, N * N, len(tlist)], dtype=complex)
147
148 if sparse:
149 for n in range(N * N):
150 psi0 = basis(N * N, n)
151 psi0.dims = [dims[0], 1]
152 rho0 = vector_to_operator(psi0)
153 output = mesolve(H, rho0, tlist, c_op_list, [], args, options)
154 for k, t in enumerate(tlist):
155 u[:, n, k] = operator_to_vector(
156 output.states[k]).full(squeeze=True)
157
158 else:
159 for n in range(N * N):
160 psi0 = basis(N * N, n)
161 rho0 = Qobj(vec2mat(psi0.full()))
162 output = mesolve(H, rho0, tlist, c_op_list, [], args, options)
163 for k, t in enumerate(tlist):
164 u[:, n, k] = mat2vec(output.states[k].full()).T
165
166 if len(tlist) == 2:
167 return Qobj(u[:, :, 1], dims=dims)
168 else:
169 return [Qobj(u[:, :, k], dims=dims) for k in range(len(tlist))]
170
171
172 def _get_min_and_index(lst):
173 """
174 Private function for obtaining min and max indicies.
175 """
176 minval, minidx = lst[0], 0
177 for i, v in enumerate(lst[1:]):
178 if v < minval:
179 minval, minidx = v, i + 1
180 return minval, minidx
181
182
183 def propagator_steadystate(U):
184 """Find the steady state for successive applications of the propagator
185 :math:`U`.
186
187 Parameters
188 ----------
189 U : qobj
190 Operator representing the propagator.
191
192 Returns
193 -------
194 a : qobj
195 Instance representing the steady-state density matrix.
196
197 """
198
199 evals, evecs = la.eig(U.full())
200
201 ev_min, ev_idx = _get_min_and_index(abs(evals - 1.0))
202
203 evecs = evecs.T
204 rho = Qobj(vec2mat(evecs[ev_idx]), dims=U.dims[0])
205 rho = rho * (1.0 / rho.tr())
206 rho = 0.5 * (rho + rho.dag()) # make sure rho is herm
207 return rho
208
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qutip/propagator.py b/qutip/propagator.py
--- a/qutip/propagator.py
+++ b/qutip/propagator.py
@@ -46,9 +46,10 @@
from qutip.sesolve import sesolve
from qutip.states import basis
from qutip.solver import Options
+from qutip.ui.progressbar import BaseProgressBar, TextProgressBar
-
-def propagator(H, t, c_op_list, args=None, options=None, sparse=False):
+def propagator(H, t, c_op_list, args=None, options=None, sparse=False,
+ progress_bar=None):
"""
Calculate the propagator U(t) for the density matrix or wave function such
that :math:`\psi(t) = U(t)\psi(0)` or
@@ -76,6 +77,11 @@
options : :class:`qutip.Options`
with options for the ODE solver.
+ progress_bar: BaseProgressBar
+ Optional instance of BaseProgressBar, or a subclass thereof, for
+ showing the progress of the simulation. By default no progress bar
+ is used, and if set to True a TextProgressBar will be used.
+
Returns
-------
a : qobj
@@ -83,6 +89,11 @@
"""
+ if progress_bar is None:
+ progress_bar = BaseProgressBar()
+ elif progress_bar is True:
+ progress_bar = TextProgressBar()
+
if options is None:
options = Options()
options.rhs_reuse = True
@@ -108,11 +119,14 @@
dims = H0.dims
u = np.zeros([N, N, len(tlist)], dtype=complex)
+ progress_bar.start(N)
for n in range(0, N):
+ progress_bar.update(n)
psi0 = basis(N, n)
output = sesolve(H, psi0, tlist, [], args, options)
for k, t in enumerate(tlist):
u[:, n, k] = output.states[k].full().T
+ progress_bar.finished()
# todo: evolving a batch of wave functions:
# psi_0_list = [basis(N, n) for n in range(N)]
@@ -129,12 +143,15 @@
u = np.zeros([N, N, len(tlist)], dtype=complex)
+ progress_bar.start(N)
for n in range(0, N):
+ progress_bar.update(n)
psi0 = basis(N, n)
rho0 = Qobj(vec2mat(psi0.full()))
output = mesolve(H, rho0, tlist, [], [], args, options)
for k, t in enumerate(tlist):
u[:, n, k] = mat2vec(output.states[k].full()).T
+ progress_bar.finished()
else:
# calculate the propagator for the vector representation of the
@@ -146,7 +163,9 @@
u = np.zeros([N * N, N * N, len(tlist)], dtype=complex)
if sparse:
+ progress_bar.start(N * N)
for n in range(N * N):
+ progress_bar.update(n)
psi0 = basis(N * N, n)
psi0.dims = [dims[0], 1]
rho0 = vector_to_operator(psi0)
@@ -154,14 +173,18 @@
for k, t in enumerate(tlist):
u[:, n, k] = operator_to_vector(
output.states[k]).full(squeeze=True)
+ progress_bar.finished()
else:
+ progress_bar.start(N * N)
for n in range(N * N):
+ progress_bar.update(n)
psi0 = basis(N * N, n)
rho0 = Qobj(vec2mat(psi0.full()))
output = mesolve(H, rho0, tlist, c_op_list, [], args, options)
for k, t in enumerate(tlist):
u[:, n, k] = mat2vec(output.states[k].full()).T
+ progress_bar.finished()
if len(tlist) == 2:
return Qobj(u[:, :, 1], dims=dims)
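With the patch applied, progress reporting is opt-in through the new keyword. A minimal usage sketch (assumes a standard QuTiP 3.0-or-later install; the Hamiltonian and time grid are chosen purely for illustration):

```python
import numpy as np
from qutip import sigmax, propagator

H = sigmax()                       # single-qubit Hamiltonian, for illustration
tlist = np.linspace(0, 10, 50)

# progress_bar=True selects TextProgressBar; an instance of a BaseProgressBar
# subclass (e.g. an HTML-based bar in a notebook) can be passed instead.
U_list = propagator(H, tlist, c_op_list=[], progress_bar=True)
```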
|
{"golden_diff": "diff --git a/qutip/propagator.py b/qutip/propagator.py\n--- a/qutip/propagator.py\n+++ b/qutip/propagator.py\n@@ -46,9 +46,10 @@\n from qutip.sesolve import sesolve\n from qutip.states import basis\n from qutip.solver import Options\n+from qutip.ui.progressbar import BaseProgressBar, TextProgressBar\n \n-\n-def propagator(H, t, c_op_list, args=None, options=None, sparse=False):\n+def propagator(H, t, c_op_list, args=None, options=None, sparse=False,\n+ progress_bar=None):\n \"\"\"\n Calculate the propagator U(t) for the density matrix or wave function such\n that :math:`\\psi(t) = U(t)\\psi(0)` or\n@@ -76,6 +77,11 @@\n options : :class:`qutip.Options`\n with options for the ODE solver.\n \n+ progress_bar: BaseProgressBar\n+ Optional instance of BaseProgressBar, or a subclass thereof, for\n+ showing the progress of the simulation. By default no progress bar\n+ is used, and if set to True a TextProgressBar will be used.\n+\n Returns\n -------\n a : qobj\n@@ -83,6 +89,11 @@\n \n \"\"\"\n \n+ if progress_bar is None:\n+ progress_bar = BaseProgressBar()\n+ elif progress_bar is True:\n+ progress_bar = TextProgressBar()\n+\n if options is None:\n options = Options()\n options.rhs_reuse = True\n@@ -108,11 +119,14 @@\n dims = H0.dims\n u = np.zeros([N, N, len(tlist)], dtype=complex)\n \n+ progress_bar.start(N)\n for n in range(0, N):\n+ progress_bar.update(n)\n psi0 = basis(N, n)\n output = sesolve(H, psi0, tlist, [], args, options)\n for k, t in enumerate(tlist):\n u[:, n, k] = output.states[k].full().T\n+ progress_bar.finished()\n \n # todo: evolving a batch of wave functions:\n # psi_0_list = [basis(N, n) for n in range(N)]\n@@ -129,12 +143,15 @@\n \n u = np.zeros([N, N, len(tlist)], dtype=complex)\n \n+ progress_bar.start(N)\n for n in range(0, N):\n+ progress_bar.update(n)\n psi0 = basis(N, n)\n rho0 = Qobj(vec2mat(psi0.full()))\n output = mesolve(H, rho0, tlist, [], [], args, options)\n for k, t in enumerate(tlist):\n u[:, n, k] = mat2vec(output.states[k].full()).T\n+ progress_bar.finished()\n \n else:\n # calculate the propagator for the vector representation of the\n@@ -146,7 +163,9 @@\n u = np.zeros([N * N, N * N, len(tlist)], dtype=complex)\n \n if sparse:\n+ progress_bar.start(N * N)\n for n in range(N * N):\n+ progress_bar.update(n)\n psi0 = basis(N * N, n)\n psi0.dims = [dims[0], 1]\n rho0 = vector_to_operator(psi0)\n@@ -154,14 +173,18 @@\n for k, t in enumerate(tlist):\n u[:, n, k] = operator_to_vector(\n output.states[k]).full(squeeze=True)\n+ progress_bar.finished()\n \n else:\n+ progress_bar.start(N * N)\n for n in range(N * N):\n+ progress_bar.update(n)\n psi0 = basis(N * N, n)\n rho0 = Qobj(vec2mat(psi0.full()))\n output = mesolve(H, rho0, tlist, c_op_list, [], args, options)\n for k, t in enumerate(tlist):\n u[:, n, k] = mat2vec(output.states[k].full()).T\n+ progress_bar.finished()\n \n if len(tlist) == 2:\n return Qobj(u[:, :, 1], dims=dims)\n", "issue": "Add progressbar for propatgator?\nIt is really great to use qutip 3.0 in iPython notebook with the HTMLProgressbar, especially when using mesolve or essolve. Unfortunately there is no progressbar when using the propagator function.\n\nWould it be possible to implement this, for propagator (and/or possible other functions), since it calls mesolve, which already has it implemented?\n\nThanks\n\n", "before_files": [{"content": "# This file is part of QuTiP: Quantum Toolbox in Python.\n#\n# Copyright (c) 2011 and later, Paul D. Nation and Robert J. 
Johansson.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are\n# met:\n#\n# 1. Redistributions of source code must retain the above copyright notice,\n# this list of conditions and the following disclaimer.\n#\n# 2. Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n#\n# 3. Neither the name of the QuTiP: Quantum Toolbox in Python nor the names\n# of its contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A\n# PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n###############################################################################\n\n__all__ = ['propagator', 'propagator_steadystate']\n\nimport types\nimport numpy as np\nimport scipy.linalg as la\nimport functools\n\nfrom qutip.qobj import Qobj\nfrom qutip.rhs_generate import rhs_clear\nfrom qutip.superoperator import (vec2mat, mat2vec,\n vector_to_operator, operator_to_vector)\nfrom qutip.mesolve import mesolve\nfrom qutip.sesolve import sesolve\nfrom qutip.states import basis\nfrom qutip.solver import Options\n\n\ndef propagator(H, t, c_op_list, args=None, options=None, sparse=False):\n \"\"\"\n Calculate the propagator U(t) for the density matrix or wave function such\n that :math:`\\psi(t) = U(t)\\psi(0)` or\n :math:`\\\\rho_{\\mathrm vec}(t) = U(t) \\\\rho_{\\mathrm vec}(0)`\n where :math:`\\\\rho_{\\mathrm vec}` is the vector representation of the\n density matrix.\n\n Parameters\n ----------\n H : qobj or list\n Hamiltonian as a Qobj instance of a nested list of Qobjs and\n coefficients in the list-string or list-function format for\n time-dependent Hamiltonians (see description in :func:`qutip.mesolve`).\n\n t : float or array-like\n Time or list of times for which to evaluate the propagator.\n\n c_op_list : list\n List of qobj collapse operators.\n\n args : list/array/dictionary\n Parameters to callback functions for time-dependent Hamiltonians and\n collapse operators.\n\n options : :class:`qutip.Options`\n with options for the ODE solver.\n\n Returns\n -------\n a : qobj\n Instance representing the propagator :math:`U(t)`.\n\n \"\"\"\n\n if options is None:\n options = Options()\n options.rhs_reuse = True\n rhs_clear()\n\n if isinstance(t, (int, float, np.integer, np.floating)):\n tlist = [0, t]\n else:\n tlist = t\n\n if isinstance(H, (types.FunctionType, types.BuiltinFunctionType,\n functools.partial)):\n H0 = H(0.0, args)\n elif isinstance(H, list):\n H0 = H[0][0] if isinstance(H[0], list) else H[0]\n else:\n H0 = H\n\n if len(c_op_list) == 0 and 
H0.isoper:\n # calculate propagator for the wave function\n\n N = H0.shape[0]\n dims = H0.dims\n u = np.zeros([N, N, len(tlist)], dtype=complex)\n\n for n in range(0, N):\n psi0 = basis(N, n)\n output = sesolve(H, psi0, tlist, [], args, options)\n for k, t in enumerate(tlist):\n u[:, n, k] = output.states[k].full().T\n\n # todo: evolving a batch of wave functions:\n # psi_0_list = [basis(N, n) for n in range(N)]\n # psi_t_list = mesolve(H, psi_0_list, [0, t], [], [], args, options)\n # for n in range(0, N):\n # u[:,n] = psi_t_list[n][1].full().T\n\n elif len(c_op_list) == 0 and H0.issuper:\n # calculate the propagator for the vector representation of the\n # density matrix (a superoperator propagator)\n\n N = H0.shape[0]\n dims = H0.dims\n\n u = np.zeros([N, N, len(tlist)], dtype=complex)\n\n for n in range(0, N):\n psi0 = basis(N, n)\n rho0 = Qobj(vec2mat(psi0.full()))\n output = mesolve(H, rho0, tlist, [], [], args, options)\n for k, t in enumerate(tlist):\n u[:, n, k] = mat2vec(output.states[k].full()).T\n\n else:\n # calculate the propagator for the vector representation of the\n # density matrix (a superoperator propagator)\n\n N = H0.shape[0]\n dims = [H0.dims, H0.dims]\n\n u = np.zeros([N * N, N * N, len(tlist)], dtype=complex)\n\n if sparse:\n for n in range(N * N):\n psi0 = basis(N * N, n)\n psi0.dims = [dims[0], 1]\n rho0 = vector_to_operator(psi0)\n output = mesolve(H, rho0, tlist, c_op_list, [], args, options)\n for k, t in enumerate(tlist):\n u[:, n, k] = operator_to_vector(\n output.states[k]).full(squeeze=True)\n\n else:\n for n in range(N * N):\n psi0 = basis(N * N, n)\n rho0 = Qobj(vec2mat(psi0.full()))\n output = mesolve(H, rho0, tlist, c_op_list, [], args, options)\n for k, t in enumerate(tlist):\n u[:, n, k] = mat2vec(output.states[k].full()).T\n\n if len(tlist) == 2:\n return Qobj(u[:, :, 1], dims=dims)\n else:\n return [Qobj(u[:, :, k], dims=dims) for k in range(len(tlist))]\n\n\ndef _get_min_and_index(lst):\n \"\"\"\n Private function for obtaining min and max indicies.\n \"\"\"\n minval, minidx = lst[0], 0\n for i, v in enumerate(lst[1:]):\n if v < minval:\n minval, minidx = v, i + 1\n return minval, minidx\n\n\ndef propagator_steadystate(U):\n \"\"\"Find the steady state for successive applications of the propagator\n :math:`U`.\n\n Parameters\n ----------\n U : qobj\n Operator representing the propagator.\n\n Returns\n -------\n a : qobj\n Instance representing the steady-state density matrix.\n\n \"\"\"\n\n evals, evecs = la.eig(U.full())\n\n ev_min, ev_idx = _get_min_and_index(abs(evals - 1.0))\n\n evecs = evecs.T\n rho = Qobj(vec2mat(evecs[ev_idx]), dims=U.dims[0])\n rho = rho * (1.0 / rho.tr())\n rho = 0.5 * (rho + rho.dag()) # make sure rho is herm\n return rho\n", "path": "qutip/propagator.py"}], "after_files": [{"content": "# This file is part of QuTiP: Quantum Toolbox in Python.\n#\n# Copyright (c) 2011 and later, Paul D. Nation and Robert J. Johansson.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are\n# met:\n#\n# 1. Redistributions of source code must retain the above copyright notice,\n# this list of conditions and the following disclaimer.\n#\n# 2. Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n#\n# 3. 
Neither the name of the QuTiP: Quantum Toolbox in Python nor the names\n# of its contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A\n# PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n###############################################################################\n\n__all__ = ['propagator', 'propagator_steadystate']\n\nimport types\nimport numpy as np\nimport scipy.linalg as la\nimport functools\n\nfrom qutip.qobj import Qobj\nfrom qutip.rhs_generate import rhs_clear\nfrom qutip.superoperator import (vec2mat, mat2vec,\n vector_to_operator, operator_to_vector)\nfrom qutip.mesolve import mesolve\nfrom qutip.sesolve import sesolve\nfrom qutip.states import basis\nfrom qutip.solver import Options\nfrom qutip.ui.progressbar import BaseProgressBar, TextProgressBar\n\ndef propagator(H, t, c_op_list, args=None, options=None, sparse=False,\n progress_bar=None):\n \"\"\"\n Calculate the propagator U(t) for the density matrix or wave function such\n that :math:`\\psi(t) = U(t)\\psi(0)` or\n :math:`\\\\rho_{\\mathrm vec}(t) = U(t) \\\\rho_{\\mathrm vec}(0)`\n where :math:`\\\\rho_{\\mathrm vec}` is the vector representation of the\n density matrix.\n\n Parameters\n ----------\n H : qobj or list\n Hamiltonian as a Qobj instance of a nested list of Qobjs and\n coefficients in the list-string or list-function format for\n time-dependent Hamiltonians (see description in :func:`qutip.mesolve`).\n\n t : float or array-like\n Time or list of times for which to evaluate the propagator.\n\n c_op_list : list\n List of qobj collapse operators.\n\n args : list/array/dictionary\n Parameters to callback functions for time-dependent Hamiltonians and\n collapse operators.\n\n options : :class:`qutip.Options`\n with options for the ODE solver.\n\n progress_bar: BaseProgressBar\n Optional instance of BaseProgressBar, or a subclass thereof, for\n showing the progress of the simulation. 
By default no progress bar\n is used, and if set to True a TextProgressBar will be used.\n\n Returns\n -------\n a : qobj\n Instance representing the propagator :math:`U(t)`.\n\n \"\"\"\n\n if progress_bar is None:\n progress_bar = BaseProgressBar()\n elif progress_bar is True:\n progress_bar = TextProgressBar()\n\n if options is None:\n options = Options()\n options.rhs_reuse = True\n rhs_clear()\n\n if isinstance(t, (int, float, np.integer, np.floating)):\n tlist = [0, t]\n else:\n tlist = t\n\n if isinstance(H, (types.FunctionType, types.BuiltinFunctionType,\n functools.partial)):\n H0 = H(0.0, args)\n elif isinstance(H, list):\n H0 = H[0][0] if isinstance(H[0], list) else H[0]\n else:\n H0 = H\n\n if len(c_op_list) == 0 and H0.isoper:\n # calculate propagator for the wave function\n\n N = H0.shape[0]\n dims = H0.dims\n u = np.zeros([N, N, len(tlist)], dtype=complex)\n\n progress_bar.start(N)\n for n in range(0, N):\n progress_bar.update(n)\n psi0 = basis(N, n)\n output = sesolve(H, psi0, tlist, [], args, options)\n for k, t in enumerate(tlist):\n u[:, n, k] = output.states[k].full().T\n progress_bar.finished()\n\n # todo: evolving a batch of wave functions:\n # psi_0_list = [basis(N, n) for n in range(N)]\n # psi_t_list = mesolve(H, psi_0_list, [0, t], [], [], args, options)\n # for n in range(0, N):\n # u[:,n] = psi_t_list[n][1].full().T\n\n elif len(c_op_list) == 0 and H0.issuper:\n # calculate the propagator for the vector representation of the\n # density matrix (a superoperator propagator)\n\n N = H0.shape[0]\n dims = H0.dims\n\n u = np.zeros([N, N, len(tlist)], dtype=complex)\n\n progress_bar.start(N)\n for n in range(0, N):\n progress_bar.update(n)\n psi0 = basis(N, n)\n rho0 = Qobj(vec2mat(psi0.full()))\n output = mesolve(H, rho0, tlist, [], [], args, options)\n for k, t in enumerate(tlist):\n u[:, n, k] = mat2vec(output.states[k].full()).T\n progress_bar.finished()\n\n else:\n # calculate the propagator for the vector representation of the\n # density matrix (a superoperator propagator)\n\n N = H0.shape[0]\n dims = [H0.dims, H0.dims]\n\n u = np.zeros([N * N, N * N, len(tlist)], dtype=complex)\n\n if sparse:\n progress_bar.start(N * N)\n for n in range(N * N):\n progress_bar.update(n)\n psi0 = basis(N * N, n)\n psi0.dims = [dims[0], 1]\n rho0 = vector_to_operator(psi0)\n output = mesolve(H, rho0, tlist, c_op_list, [], args, options)\n for k, t in enumerate(tlist):\n u[:, n, k] = operator_to_vector(\n output.states[k]).full(squeeze=True)\n progress_bar.finished()\n\n else:\n progress_bar.start(N * N)\n for n in range(N * N):\n progress_bar.update(n)\n psi0 = basis(N * N, n)\n rho0 = Qobj(vec2mat(psi0.full()))\n output = mesolve(H, rho0, tlist, c_op_list, [], args, options)\n for k, t in enumerate(tlist):\n u[:, n, k] = mat2vec(output.states[k].full()).T\n progress_bar.finished()\n\n if len(tlist) == 2:\n return Qobj(u[:, :, 1], dims=dims)\n else:\n return [Qobj(u[:, :, k], dims=dims) for k in range(len(tlist))]\n\n\ndef _get_min_and_index(lst):\n \"\"\"\n Private function for obtaining min and max indicies.\n \"\"\"\n minval, minidx = lst[0], 0\n for i, v in enumerate(lst[1:]):\n if v < minval:\n minval, minidx = v, i + 1\n return minval, minidx\n\n\ndef propagator_steadystate(U):\n \"\"\"Find the steady state for successive applications of the propagator\n :math:`U`.\n\n Parameters\n ----------\n U : qobj\n Operator representing the propagator.\n\n Returns\n -------\n a : qobj\n Instance representing the steady-state density matrix.\n\n \"\"\"\n\n evals, evecs = 
la.eig(U.full())\n\n ev_min, ev_idx = _get_min_and_index(abs(evals - 1.0))\n\n evecs = evecs.T\n rho = Qobj(vec2mat(evecs[ev_idx]), dims=U.dims[0])\n rho = rho * (1.0 / rho.tr())\n rho = 0.5 * (rho + rho.dag()) # make sure rho is herm\n return rho\n", "path": "qutip/propagator.py"}]}
| 2,733 | 960 |
gh_patches_debug_1215
|
rasdani/github-patches
|
git_diff
|
TabbycatDebate__tabbycat-2348
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash when generating QF draw (WS)
**Running:** a1ca1a390866199e1884db12c215ddaa867a98dc
When generating the draw for the first elimination round in a WS tournament, I encountered this exception:
```python
[2023-07-09 12:01:47,564] ERROR django.request: Internal Server Error: /xxx-yyz/admin/draw/round/7/create/
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 56, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/generic/base.py", line 103, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/contrib/auth/mixins.py", line 135, in dispatch
return super().dispatch(request, *args, **kwargs)
File "/tcd/tabbycat/tournaments/mixins.py", line 125, in dispatch
return super().dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/generic/base.py", line 142, in dispatch
return handler(request, *args, **kwargs)
File "/tcd/tabbycat/draw/views.py", line 664, in post
manager.create()
File "/tcd/tabbycat/draw/manager.py", line 157, in create
drawer = DrawGenerator(self.teams_in_debate, generator_type, teams,
File "/tcd/tabbycat/draw/generator/__init__.py", line 93, in DrawGenerator
return klass(teams, results, rrseq, **kwargs)
File "/tcd/tabbycat/draw/generator/common.py", line 182, in __init__
super().__init__(teams, results, rrseq, **kwargs)
File "/tcd/tabbycat/draw/generator/common.py", line 73, in __init__
raise ValueError("Unrecognised options: " + ", ".join(unrecognised))
ValueError: Unrecognised options: avoid_conflicts
```
I quickly patched around it like so and we manually confirmed the draw was correct:
```diff
diff --git a/tabbycat/draw/generator/common.py b/tabbycat/draw/generator/common.py
index 2a61de6ea..3d7167aa1 100644
--- a/tabbycat/draw/generator/common.py
+++ b/tabbycat/draw/generator/common.py
@@ -68,9 +68,10 @@ class BaseDrawGenerator:
# Compute the full dictionary of default options
self.options = self.BASE_DEFAULT_OPTIONS.copy()
self.options.update(self.DEFAULT_OPTIONS)
+ print(self.__class__)
unrecognised = [key for key in kwargs if key not in self.options]
- if unrecognised:
- raise ValueError("Unrecognised options: " + ", ".join(unrecognised))
+# if unrecognised:
+# raise ValueError("Unrecognised options: " + ", ".join(unrecognised))
self.options.update(kwargs)
def generate(self):
```
Of course, this is not a fix for the problem, just avoiding the symptoms. **I intend to find the cause of this issue and fix it in the following days**, but I'm dropping an issue here so I don't forget
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tabbycat/draw/generator/common.py`
Content:
```
1 import logging
2
3 from django.utils.translation import gettext as _
4
5 logger = logging.getLogger(__name__)
6
7
8 class BaseDrawError(Exception):
9 pass
10
11
12 class DrawUserError(BaseDrawError):
13 """DrawUserError is raised by any DrawGenerator class when a problem that
14 would appear to be user error prevents a draw from being produced.
15 DrawUserErrors are caught by the view class and shown to the user as an
16 error message.
17
18 Because DrawUserErrors expected and rectifier, the strings that go into them
19 should be internationalised (marked for translation)."""
20 pass
21
22
23 class DrawFatalError(BaseDrawError):
24 """DrawAlgorithmError is raised by any DrawGenerator class when a problem
25 that is an error condition that should never (ever) happen prevents a draw
26 from being produced. DrawAlgorithmError are also caught by the view class
27 and shown to the user as an error message. However, because they should
28 never happen, their messages are not internationalised, since that just
29 creates unnecessary work for translators."""
30 pass
31
32
33 class BaseDrawGenerator:
34 """Base class for generators for all draw types, for both two-team and BP.
35 """
36
37 # Subclasses must define BASE_DEFAULT_OPTIONS
38
39 requires_even_teams = True
40 requires_prev_results = False
41 requires_rrseq = False
42
43 def __init__(self, teams, results=None, rrseq=None, **kwargs):
44 self.teams = teams
45 self.team_flags = dict()
46 self.results = results
47 self.rrseq = rrseq
48
49 if self.requires_even_teams:
50 if not len(self.teams) % self.TEAMS_PER_DEBATE == 0:
51 raise DrawUserError(_("The number of teams presented for the draw was not "
52 "a multiple of %(num)d.") % {'num': self.TEAMS_PER_DEBATE})
53 if not self.teams:
54 raise DrawUserError(_("There were no teams for the draw."))
55
56 if results is None and self.requires_prev_results:
57 raise TypeError("'results' is required for draw of type {0:s}".format(
58 self.__class__.__name__))
59
60 if results is not None and not self.requires_prev_results:
61 logger.warning("'results' not required for draw of type %s, will probably be ignored",
62 self.__class__.__name__)
63
64 if rrseq is None and self.requires_rrseq:
65 raise TypeError("'rrseq' (round robin sequence) is required for draw of type {0:s}".format(
66 self.__class__.__name__))
67
68 # Compute the full dictionary of default options
69 self.options = self.BASE_DEFAULT_OPTIONS.copy()
70 self.options.update(self.DEFAULT_OPTIONS)
71 unrecognised = [key for key in kwargs if key not in self.options]
72 if unrecognised:
73 raise ValueError("Unrecognised options: " + ", ".join(unrecognised))
74 self.options.update(kwargs)
75
76 def generate(self):
77 """Abstract method."""
78 raise NotImplementedError
79
80 def get_option_function(self, option_name, option_dict):
81 option = self.options[option_name]
82 if callable(option):
83 return option
84 try:
85 return getattr(self, option_dict[option])
86 except KeyError:
87 raise ValueError("Invalid option for {1}: {0}".format(option, option_name))
88
89 def add_team_flag(self, team, flag):
90 """Attaches a flag to a team.
91 Child classes may use this when flags should follow teams, but
92 eventually be attached to pairings."""
93 flags = self.team_flags.setdefault(team, list())
94 flags.append(flag)
95
96 def annotate_team_flags(self, pairings):
97 """Applies the team flags to the pairings given.
98 Child classes that use team flags should call this method as the last
99 thing before the draw is returned."""
100 for pairing in pairings:
101 for team in pairing.teams:
102 if team in self.team_flags:
103 pairing.add_team_flags(team, self.team_flags[team])
104
105 @classmethod
106 def available_options(cls):
107 keys = set(cls.BASE_DEFAULT_OPTIONS.keys())
108 keys |= set(cls.DEFAULT_OPTIONS.keys())
109 return sorted(list(keys))
110
111 def check_teams_for_attribute(self, name, choices=None, checkfunc=None):
112 """Checks that all teams have the specified attribute, and raises a
113 DrawFatalError if they don't. This should be called during the
114 constructor. Note: Whether to run this check will sometimes be
115 conditional on options supplied to the DrawGenerator. 'name' is the name
116 of the attribute. 'choices', if specified, is a list of allowed values
117 for the attribute.
118 """
119 has_attribute = [hasattr(x, name) for x in self.teams]
120 if not all(has_attribute):
121 offending_teams = has_attribute.count(False)
122 raise DrawFatalError("{0} out of {1} teams don't have a '{name}' attribute.".format(
123 offending_teams, len(self.teams), name=name))
124
125 if choices:
126 attribute_value_valid = [getattr(x, name) in choices for x in self.teams]
127 elif checkfunc:
128 attribute_value_valid = [checkfunc(getattr(x, name)) for x in self.teams]
129 else:
130 return
131
132 if not all(attribute_value_valid):
133 offending_teams = attribute_value_valid.count(False)
134 message = "{0} out of {1} teams have an invalid '{name}' attribute.".format(offending_teams, len(self.teams), name=name)
135 if choices:
136 message += " Valid choices: " + ", ".join(map(repr, choices))
137 raise DrawFatalError(message)
138
139
140 class BasePairDrawGenerator(BaseDrawGenerator):
141 """Base class for generators for all draw types.
142 Options:
143 "side_allocations" - Side allocation method, one of:
144 "balance" - the team that has affirmed less in prior rounds affirms,
145 or randomly if both teams have affirmed the same number of times.
146 If used, team objects must have an `side_history` attribute.
147 "preallocated" - teams were pre-allocated sides. If used, teams must
148 have an 'allocated_side' attribute.
149 "none" - leave sides as they were when the pairings were drawn.
150 (This is almost never desirable.)
151 "random" - allocate randomly.
152 "avoid_history" - if True, draw tries to avoid pairing teams that have
153 seen each other before, and tries harder if they've seen each other
154 multiple times.
155 "history_penalty" -
156 "avoid_institution" - if True, draw tries to avoid pairing teams that
157 are from the same institution.
158 "side_penalty" - A penalty to apply when optimizing with side balance
159 """
160
161 BASE_DEFAULT_OPTIONS = {
162 "side_allocations" : "balance",
163 "avoid_history" : True,
164 "avoid_institution" : True,
165 "history_penalty" : 1e3,
166 "institution_penalty" : 1,
167 "side_penalty" : 0,
168 "pullup_debates_penalty": 0,
169 "pairing_penalty" : 0,
170 }
171
172 TEAMS_PER_DEBATE = 2
173
174 requires_even_teams = True
175 requires_prev_results = False
176 requires_rrseq = False
177
178 # All subclasses must define this with any options that may exist.
179 DEFAULT_OPTIONS = {}
180
181 def __init__(self, teams, results=None, rrseq=None, **kwargs):
182 super().__init__(teams, results, rrseq, **kwargs)
183
184 # Check for required team attributes. Subclasses might do more.
185 if self.options["avoid_history"]:
186 self.check_teams_for_attribute("seen", checkfunc=callable)
187 if self.options["avoid_institution"]:
188 self.check_teams_for_attribute("institution")
189
190 def allocate_sides(self, pairings):
191 if self.options["side_allocations"] == "balance":
192 for pairing in pairings:
193 pairing.balance_sides()
194 elif self.options["side_allocations"] == "random":
195 for pairing in pairings:
196 pairing.shuffle_sides()
197 elif self.options["side_allocations"] not in ["none", "preallocated"]:
198 raise ValueError("side_allocations setting not recognized: {0!r}".format(self.options["side_allocations"]))
199
200
201 class BaseBPDrawGenerator(BaseDrawGenerator):
202 BASE_DEFAULT_OPTIONS = {}
203 TEAMS_PER_DEBATE = 4
204
205
206 class EliminationDrawMixin:
207 """Mixin for elimination draws."""
208
209 def generate(self):
210 pairings = self.make_pairings()
211 self.shuffle_sides(pairings)
212 return pairings
213
214 def shuffle_sides(self, pairings):
215 for pairing in pairings:
216 pairing.shuffle_sides()
217
218 def make_pairings(self):
219 raise NotImplementedError
220
221
222 class ManualDrawGenerator(BaseDrawGenerator):
223 """Returns an empty draw.
224 Since this doesn't really do anything, it works for both two-team and BP.
225 """
226 DEFAULT_OPTIONS = {}
227 BASE_DEFAULT_OPTIONS = {}
228 requires_even_teams = False
229 requires_prev_results = False
230
231 def generate(self):
232 return []
233
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tabbycat/draw/generator/common.py b/tabbycat/draw/generator/common.py
--- a/tabbycat/draw/generator/common.py
+++ b/tabbycat/draw/generator/common.py
@@ -167,6 +167,7 @@
"side_penalty" : 0,
"pullup_debates_penalty": 0,
"pairing_penalty" : 0,
+ "avoid_conflicts" : "off",
}
TEAMS_PER_DEBATE = 2
|
{"golden_diff": "diff --git a/tabbycat/draw/generator/common.py b/tabbycat/draw/generator/common.py\n--- a/tabbycat/draw/generator/common.py\n+++ b/tabbycat/draw/generator/common.py\n@@ -167,6 +167,7 @@\n \"side_penalty\" : 0,\n \"pullup_debates_penalty\": 0,\n \"pairing_penalty\" : 0,\n+ \"avoid_conflicts\" : \"off\",\n }\n \n TEAMS_PER_DEBATE = 2\n", "issue": "Crash when generating QF draw (WS)\n**Running:** a1ca1a390866199e1884db12c215ddaa867a98dc\r\n\r\nWhen generating the draw for the first elimination round in a WS tournament, I encountered this exception:\r\n\r\n```python\r\n[2023-07-09 12:01:47,564] ERROR django.request: Internal Server Error: /xxx-yyz/admin/draw/round/7/create/\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py\", line 56, in inner\r\n response = get_response(request)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py\", line 197, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/django/views/generic/base.py\", line 103, in view\r\n return self.dispatch(request, *args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/django/contrib/auth/mixins.py\", line 135, in dispatch\r\n return super().dispatch(request, *args, **kwargs)\r\n File \"/tcd/tabbycat/tournaments/mixins.py\", line 125, in dispatch\r\n return super().dispatch(request, *args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/django/views/generic/base.py\", line 142, in dispatch\r\n return handler(request, *args, **kwargs)\r\n File \"/tcd/tabbycat/draw/views.py\", line 664, in post\r\n manager.create()\r\n File \"/tcd/tabbycat/draw/manager.py\", line 157, in create\r\n drawer = DrawGenerator(self.teams_in_debate, generator_type, teams,\r\n File \"/tcd/tabbycat/draw/generator/__init__.py\", line 93, in DrawGenerator\r\n return klass(teams, results, rrseq, **kwargs)\r\n File \"/tcd/tabbycat/draw/generator/common.py\", line 182, in __init__\r\n super().__init__(teams, results, rrseq, **kwargs)\r\n File \"/tcd/tabbycat/draw/generator/common.py\", line 73, in __init__\r\n raise ValueError(\"Unrecognised options: \" + \", \".join(unrecognised))\r\nValueError: Unrecognised options: avoid_conflicts\r\n```\r\n\r\nI quickly patched around it like so and we manually confirmed the draw was correct:\r\n\r\n```diff\r\ndiff --git a/tabbycat/draw/generator/common.py b/tabbycat/draw/generator/common.py\r\nindex 2a61de6ea..3d7167aa1 100644\r\n--- a/tabbycat/draw/generator/common.py\r\n+++ b/tabbycat/draw/generator/common.py\r\n@@ -68,9 +68,10 @@ class BaseDrawGenerator:\r\n # Compute the full dictionary of default options\r\n self.options = self.BASE_DEFAULT_OPTIONS.copy()\r\n self.options.update(self.DEFAULT_OPTIONS)\r\n+ print(self.__class__)\r\n unrecognised = [key for key in kwargs if key not in self.options]\r\n- if unrecognised:\r\n- raise ValueError(\"Unrecognised options: \" + \", \".join(unrecognised))\r\n+# if unrecognised:\r\n+# raise ValueError(\"Unrecognised options: \" + \", \".join(unrecognised))\r\n self.options.update(kwargs)\r\n \r\n def generate(self):\r\n```\r\n\r\nOf course, this is not a fix for the problem, just avoiding the symptoms. 
**I intend to find the cause of this issue and fix it in the following days**, bu I'm dropping an issue here so I don't forget\n", "before_files": [{"content": "import logging\n\nfrom django.utils.translation import gettext as _\n\nlogger = logging.getLogger(__name__)\n\n\nclass BaseDrawError(Exception):\n pass\n\n\nclass DrawUserError(BaseDrawError):\n \"\"\"DrawUserError is raised by any DrawGenerator class when a problem that\n would appear to be user error prevents a draw from being produced.\n DrawUserErrors are caught by the view class and shown to the user as an\n error message.\n\n Because DrawUserErrors expected and rectifier, the strings that go into them\n should be internationalised (marked for translation).\"\"\"\n pass\n\n\nclass DrawFatalError(BaseDrawError):\n \"\"\"DrawAlgorithmError is raised by any DrawGenerator class when a problem\n that is an error condition that should never (ever) happen prevents a draw\n from being produced. DrawAlgorithmError are also caught by the view class\n and shown to the user as an error message. However, because they should\n never happen, their messages are not internationalised, since that just\n creates unnecessary work for translators.\"\"\"\n pass\n\n\nclass BaseDrawGenerator:\n \"\"\"Base class for generators for all draw types, for both two-team and BP.\n \"\"\"\n\n # Subclasses must define BASE_DEFAULT_OPTIONS\n\n requires_even_teams = True\n requires_prev_results = False\n requires_rrseq = False\n\n def __init__(self, teams, results=None, rrseq=None, **kwargs):\n self.teams = teams\n self.team_flags = dict()\n self.results = results\n self.rrseq = rrseq\n\n if self.requires_even_teams:\n if not len(self.teams) % self.TEAMS_PER_DEBATE == 0:\n raise DrawUserError(_(\"The number of teams presented for the draw was not \"\n \"a multiple of %(num)d.\") % {'num': self.TEAMS_PER_DEBATE})\n if not self.teams:\n raise DrawUserError(_(\"There were no teams for the draw.\"))\n\n if results is None and self.requires_prev_results:\n raise TypeError(\"'results' is required for draw of type {0:s}\".format(\n self.__class__.__name__))\n\n if results is not None and not self.requires_prev_results:\n logger.warning(\"'results' not required for draw of type %s, will probably be ignored\",\n self.__class__.__name__)\n\n if rrseq is None and self.requires_rrseq:\n raise TypeError(\"'rrseq' (round robin sequence) is required for draw of type {0:s}\".format(\n self.__class__.__name__))\n\n # Compute the full dictionary of default options\n self.options = self.BASE_DEFAULT_OPTIONS.copy()\n self.options.update(self.DEFAULT_OPTIONS)\n unrecognised = [key for key in kwargs if key not in self.options]\n if unrecognised:\n raise ValueError(\"Unrecognised options: \" + \", \".join(unrecognised))\n self.options.update(kwargs)\n\n def generate(self):\n \"\"\"Abstract method.\"\"\"\n raise NotImplementedError\n\n def get_option_function(self, option_name, option_dict):\n option = self.options[option_name]\n if callable(option):\n return option\n try:\n return getattr(self, option_dict[option])\n except KeyError:\n raise ValueError(\"Invalid option for {1}: {0}\".format(option, option_name))\n\n def add_team_flag(self, team, flag):\n \"\"\"Attaches a flag to a team.\n Child classes may use this when flags should follow teams, but\n eventually be attached to pairings.\"\"\"\n flags = self.team_flags.setdefault(team, list())\n flags.append(flag)\n\n def annotate_team_flags(self, pairings):\n \"\"\"Applies the team flags to the pairings given.\n Child classes that 
use team flags should call this method as the last\n thing before the draw is returned.\"\"\"\n for pairing in pairings:\n for team in pairing.teams:\n if team in self.team_flags:\n pairing.add_team_flags(team, self.team_flags[team])\n\n @classmethod\n def available_options(cls):\n keys = set(cls.BASE_DEFAULT_OPTIONS.keys())\n keys |= set(cls.DEFAULT_OPTIONS.keys())\n return sorted(list(keys))\n\n def check_teams_for_attribute(self, name, choices=None, checkfunc=None):\n \"\"\"Checks that all teams have the specified attribute, and raises a\n DrawFatalError if they don't. This should be called during the\n constructor. Note: Whether to run this check will sometimes be\n conditional on options supplied to the DrawGenerator. 'name' is the name\n of the attribute. 'choices', if specified, is a list of allowed values\n for the attribute.\n \"\"\"\n has_attribute = [hasattr(x, name) for x in self.teams]\n if not all(has_attribute):\n offending_teams = has_attribute.count(False)\n raise DrawFatalError(\"{0} out of {1} teams don't have a '{name}' attribute.\".format(\n offending_teams, len(self.teams), name=name))\n\n if choices:\n attribute_value_valid = [getattr(x, name) in choices for x in self.teams]\n elif checkfunc:\n attribute_value_valid = [checkfunc(getattr(x, name)) for x in self.teams]\n else:\n return\n\n if not all(attribute_value_valid):\n offending_teams = attribute_value_valid.count(False)\n message = \"{0} out of {1} teams have an invalid '{name}' attribute.\".format(offending_teams, len(self.teams), name=name)\n if choices:\n message += \" Valid choices: \" + \", \".join(map(repr, choices))\n raise DrawFatalError(message)\n\n\nclass BasePairDrawGenerator(BaseDrawGenerator):\n \"\"\"Base class for generators for all draw types.\n Options:\n \"side_allocations\" - Side allocation method, one of:\n \"balance\" - the team that has affirmed less in prior rounds affirms,\n or randomly if both teams have affirmed the same number of times.\n If used, team objects must have an `side_history` attribute.\n \"preallocated\" - teams were pre-allocated sides. If used, teams must\n have an 'allocated_side' attribute.\n \"none\" - leave sides as they were when the pairings were drawn.\n (This is almost never desirable.)\n \"random\" - allocate randomly.\n \"avoid_history\" - if True, draw tries to avoid pairing teams that have\n seen each other before, and tries harder if they've seen each other\n multiple times.\n \"history_penalty\" -\n \"avoid_institution\" - if True, draw tries to avoid pairing teams that\n are from the same institution.\n \"side_penalty\" - A penalty to apply when optimizing with side balance\n \"\"\"\n\n BASE_DEFAULT_OPTIONS = {\n \"side_allocations\" : \"balance\",\n \"avoid_history\" : True,\n \"avoid_institution\" : True,\n \"history_penalty\" : 1e3,\n \"institution_penalty\" : 1,\n \"side_penalty\" : 0,\n \"pullup_debates_penalty\": 0,\n \"pairing_penalty\" : 0,\n }\n\n TEAMS_PER_DEBATE = 2\n\n requires_even_teams = True\n requires_prev_results = False\n requires_rrseq = False\n\n # All subclasses must define this with any options that may exist.\n DEFAULT_OPTIONS = {}\n\n def __init__(self, teams, results=None, rrseq=None, **kwargs):\n super().__init__(teams, results, rrseq, **kwargs)\n\n # Check for required team attributes. 
Subclasses might do more.\n if self.options[\"avoid_history\"]:\n self.check_teams_for_attribute(\"seen\", checkfunc=callable)\n if self.options[\"avoid_institution\"]:\n self.check_teams_for_attribute(\"institution\")\n\n def allocate_sides(self, pairings):\n if self.options[\"side_allocations\"] == \"balance\":\n for pairing in pairings:\n pairing.balance_sides()\n elif self.options[\"side_allocations\"] == \"random\":\n for pairing in pairings:\n pairing.shuffle_sides()\n elif self.options[\"side_allocations\"] not in [\"none\", \"preallocated\"]:\n raise ValueError(\"side_allocations setting not recognized: {0!r}\".format(self.options[\"side_allocations\"]))\n\n\nclass BaseBPDrawGenerator(BaseDrawGenerator):\n BASE_DEFAULT_OPTIONS = {}\n TEAMS_PER_DEBATE = 4\n\n\nclass EliminationDrawMixin:\n \"\"\"Mixin for elimination draws.\"\"\"\n\n def generate(self):\n pairings = self.make_pairings()\n self.shuffle_sides(pairings)\n return pairings\n\n def shuffle_sides(self, pairings):\n for pairing in pairings:\n pairing.shuffle_sides()\n\n def make_pairings(self):\n raise NotImplementedError\n\n\nclass ManualDrawGenerator(BaseDrawGenerator):\n \"\"\"Returns an empty draw.\n Since this doesn't really do anything, it works for both two-team and BP.\n \"\"\"\n DEFAULT_OPTIONS = {}\n BASE_DEFAULT_OPTIONS = {}\n requires_even_teams = False\n requires_prev_results = False\n\n def generate(self):\n return []\n", "path": "tabbycat/draw/generator/common.py"}], "after_files": [{"content": "import logging\n\nfrom django.utils.translation import gettext as _\n\nlogger = logging.getLogger(__name__)\n\n\nclass BaseDrawError(Exception):\n pass\n\n\nclass DrawUserError(BaseDrawError):\n \"\"\"DrawUserError is raised by any DrawGenerator class when a problem that\n would appear to be user error prevents a draw from being produced.\n DrawUserErrors are caught by the view class and shown to the user as an\n error message.\n\n Because DrawUserErrors expected and rectifier, the strings that go into them\n should be internationalised (marked for translation).\"\"\"\n pass\n\n\nclass DrawFatalError(BaseDrawError):\n \"\"\"DrawAlgorithmError is raised by any DrawGenerator class when a problem\n that is an error condition that should never (ever) happen prevents a draw\n from being produced. DrawAlgorithmError are also caught by the view class\n and shown to the user as an error message. 
However, because they should\n never happen, their messages are not internationalised, since that just\n creates unnecessary work for translators.\"\"\"\n pass\n\n\nclass BaseDrawGenerator:\n \"\"\"Base class for generators for all draw types, for both two-team and BP.\n \"\"\"\n\n # Subclasses must define BASE_DEFAULT_OPTIONS\n\n requires_even_teams = True\n requires_prev_results = False\n requires_rrseq = False\n\n def __init__(self, teams, results=None, rrseq=None, **kwargs):\n self.teams = teams\n self.team_flags = dict()\n self.results = results\n self.rrseq = rrseq\n\n if self.requires_even_teams:\n if not len(self.teams) % self.TEAMS_PER_DEBATE == 0:\n raise DrawUserError(_(\"The number of teams presented for the draw was not \"\n \"a multiple of %(num)d.\") % {'num': self.TEAMS_PER_DEBATE})\n if not self.teams:\n raise DrawUserError(_(\"There were no teams for the draw.\"))\n\n if results is None and self.requires_prev_results:\n raise TypeError(\"'results' is required for draw of type {0:s}\".format(\n self.__class__.__name__))\n\n if results is not None and not self.requires_prev_results:\n logger.warning(\"'results' not required for draw of type %s, will probably be ignored\",\n self.__class__.__name__)\n\n if rrseq is None and self.requires_rrseq:\n raise TypeError(\"'rrseq' (round robin sequence) is required for draw of type {0:s}\".format(\n self.__class__.__name__))\n\n # Compute the full dictionary of default options\n self.options = self.BASE_DEFAULT_OPTIONS.copy()\n self.options.update(self.DEFAULT_OPTIONS)\n unrecognised = [key for key in kwargs if key not in self.options]\n if unrecognised:\n raise ValueError(\"Unrecognised options: \" + \", \".join(unrecognised))\n self.options.update(kwargs)\n\n def generate(self):\n \"\"\"Abstract method.\"\"\"\n raise NotImplementedError\n\n def get_option_function(self, option_name, option_dict):\n option = self.options[option_name]\n if callable(option):\n return option\n try:\n return getattr(self, option_dict[option])\n except KeyError:\n raise ValueError(\"Invalid option for {1}: {0}\".format(option, option_name))\n\n def add_team_flag(self, team, flag):\n \"\"\"Attaches a flag to a team.\n Child classes may use this when flags should follow teams, but\n eventually be attached to pairings.\"\"\"\n flags = self.team_flags.setdefault(team, list())\n flags.append(flag)\n\n def annotate_team_flags(self, pairings):\n \"\"\"Applies the team flags to the pairings given.\n Child classes that use team flags should call this method as the last\n thing before the draw is returned.\"\"\"\n for pairing in pairings:\n for team in pairing.teams:\n if team in self.team_flags:\n pairing.add_team_flags(team, self.team_flags[team])\n\n @classmethod\n def available_options(cls):\n keys = set(cls.BASE_DEFAULT_OPTIONS.keys())\n keys |= set(cls.DEFAULT_OPTIONS.keys())\n return sorted(list(keys))\n\n def check_teams_for_attribute(self, name, choices=None, checkfunc=None):\n \"\"\"Checks that all teams have the specified attribute, and raises a\n DrawFatalError if they don't. This should be called during the\n constructor. Note: Whether to run this check will sometimes be\n conditional on options supplied to the DrawGenerator. 'name' is the name\n of the attribute. 
'choices', if specified, is a list of allowed values\n for the attribute.\n \"\"\"\n has_attribute = [hasattr(x, name) for x in self.teams]\n if not all(has_attribute):\n offending_teams = has_attribute.count(False)\n raise DrawFatalError(\"{0} out of {1} teams don't have a '{name}' attribute.\".format(\n offending_teams, len(self.teams), name=name))\n\n if choices:\n attribute_value_valid = [getattr(x, name) in choices for x in self.teams]\n elif checkfunc:\n attribute_value_valid = [checkfunc(getattr(x, name)) for x in self.teams]\n else:\n return\n\n if not all(attribute_value_valid):\n offending_teams = attribute_value_valid.count(False)\n message = \"{0} out of {1} teams have an invalid '{name}' attribute.\".format(offending_teams, len(self.teams), name=name)\n if choices:\n message += \" Valid choices: \" + \", \".join(map(repr, choices))\n raise DrawFatalError(message)\n\n\nclass BasePairDrawGenerator(BaseDrawGenerator):\n \"\"\"Base class for generators for all draw types.\n Options:\n \"side_allocations\" - Side allocation method, one of:\n \"balance\" - the team that has affirmed less in prior rounds affirms,\n or randomly if both teams have affirmed the same number of times.\n If used, team objects must have an `side_history` attribute.\n \"preallocated\" - teams were pre-allocated sides. If used, teams must\n have an 'allocated_side' attribute.\n \"none\" - leave sides as they were when the pairings were drawn.\n (This is almost never desirable.)\n \"random\" - allocate randomly.\n \"avoid_history\" - if True, draw tries to avoid pairing teams that have\n seen each other before, and tries harder if they've seen each other\n multiple times.\n \"history_penalty\" -\n \"avoid_institution\" - if True, draw tries to avoid pairing teams that\n are from the same institution.\n \"side_penalty\" - A penalty to apply when optimizing with side balance\n \"\"\"\n\n BASE_DEFAULT_OPTIONS = {\n \"side_allocations\" : \"balance\",\n \"avoid_history\" : True,\n \"avoid_institution\" : True,\n \"history_penalty\" : 1e3,\n \"institution_penalty\" : 1,\n \"side_penalty\" : 0,\n \"pullup_debates_penalty\": 0,\n \"pairing_penalty\" : 0,\n \"avoid_conflicts\" : \"off\",\n }\n\n TEAMS_PER_DEBATE = 2\n\n requires_even_teams = True\n requires_prev_results = False\n requires_rrseq = False\n\n # All subclasses must define this with any options that may exist.\n DEFAULT_OPTIONS = {}\n\n def __init__(self, teams, results=None, rrseq=None, **kwargs):\n super().__init__(teams, results, rrseq, **kwargs)\n\n # Check for required team attributes. 
Subclasses might do more.\n if self.options[\"avoid_history\"]:\n self.check_teams_for_attribute(\"seen\", checkfunc=callable)\n if self.options[\"avoid_institution\"]:\n self.check_teams_for_attribute(\"institution\")\n\n def allocate_sides(self, pairings):\n if self.options[\"side_allocations\"] == \"balance\":\n for pairing in pairings:\n pairing.balance_sides()\n elif self.options[\"side_allocations\"] == \"random\":\n for pairing in pairings:\n pairing.shuffle_sides()\n elif self.options[\"side_allocations\"] not in [\"none\", \"preallocated\"]:\n raise ValueError(\"side_allocations setting not recognized: {0!r}\".format(self.options[\"side_allocations\"]))\n\n\nclass BaseBPDrawGenerator(BaseDrawGenerator):\n BASE_DEFAULT_OPTIONS = {}\n TEAMS_PER_DEBATE = 4\n\n\nclass EliminationDrawMixin:\n \"\"\"Mixin for elimination draws.\"\"\"\n\n def generate(self):\n pairings = self.make_pairings()\n self.shuffle_sides(pairings)\n return pairings\n\n def shuffle_sides(self, pairings):\n for pairing in pairings:\n pairing.shuffle_sides()\n\n def make_pairings(self):\n raise NotImplementedError\n\n\nclass ManualDrawGenerator(BaseDrawGenerator):\n \"\"\"Returns an empty draw.\n Since this doesn't really do anything, it works for both two-team and BP.\n \"\"\"\n DEFAULT_OPTIONS = {}\n BASE_DEFAULT_OPTIONS = {}\n requires_even_teams = False\n requires_prev_results = False\n\n def generate(self):\n return []\n", "path": "tabbycat/draw/generator/common.py"}]}
| 3,663 | 121 |
gh_patches_debug_28839
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-2070
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[v2] Detect unsupported Python versions and raise a helpful ImportError
We'll need to ensure our `urllib3/__init__.py` is still Python 2.x compatible for early urllib3 v2.x releases. If we detect an unsupported Python version, we raise a helpful error message recommending pinning to `urllib3<2`.
--- END ISSUE ---
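For context, the guard the issue describes could look like the minimal sketch below. It is illustrative only: the 3.6 cutoff matches the `python_requires=">=3.6, <4"` declared in `setup.py` further down, but the placement and message wording are assumptions rather than urllib3's actual code.
```python
# Illustrative sketch only; not urllib3's actual implementation.
# A guard like this must sit above any Python-3-only syntax (f-strings,
# annotations) so that a Python 2 interpreter reaches the message instead
# of failing with a SyntaxError.
import sys

if sys.version_info < (3, 6):
    raise ImportError(
        "This version of urllib3 requires Python 3.6 or newer. "
        "On older interpreters, pin the dependency with 'urllib3<2'."
    )
```
The same style of check could equally live in `setup.py`, so that `pip install` on an old interpreter fails with the pinning hint rather than a traceback.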
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `noxfile.py`
Content:
```
1 import os
2 import shutil
3 import subprocess
4
5 import nox
6
7 # Whenever type-hints are completed on a file it should be added here so that
8 # this file will continue to be checked by mypy. Errors from other files are
9 # ignored.
10 TYPED_FILES = {
11 "src/urllib3/contrib/__init__.py",
12 "src/urllib3/exceptions.py",
13 "src/urllib3/fields.py",
14 "src/urllib3/filepost.py",
15 "src/urllib3/packages/__init__.py",
16 "src/urllib3/packages/six.py",
17 "src/urllib3/packages/ssl_match_hostname/__init__.py",
18 "src/urllib3/packages/ssl_match_hostname/_implementation.py",
19 "src/urllib3/util/queue.py",
20 "src/urllib3/util/url.py",
21 }
22 SOURCE_FILES = [
23 "docs/",
24 "dummyserver/",
25 "src/",
26 "test/",
27 "noxfile.py",
28 "setup.py",
29 ]
30
31
32 def tests_impl(session, extras="socks,secure,brotli"):
33 # Install deps and the package itself.
34 session.install("-r", "dev-requirements.txt")
35 session.install(f".[{extras}]")
36
37 # Show the pip version.
38 session.run("pip", "--version")
39 # Print the Python version and bytesize.
40 session.run("python", "--version")
41 session.run("python", "-c", "import struct; print(struct.calcsize('P') * 8)")
42 # Print OpenSSL information.
43 session.run("python", "-m", "OpenSSL.debug")
44
45 # Inspired from https://github.com/pyca/cryptography
46 # We use parallel mode and then combine here so that coverage.py will take
47 # the paths like .tox/pyXY/lib/pythonX.Y/site-packages/urllib3/__init__.py
48 # and collapse them into src/urllib3/__init__.py.
49
50 session.run(
51 "coverage",
52 "run",
53 "--parallel-mode",
54 "-m",
55 "pytest",
56 "-r",
57 "a",
58 "--tb=native",
59 "--no-success-flaky-report",
60 *(session.posargs or ("test/",)),
61 env={"PYTHONWARNINGS": "always::DeprecationWarning"},
62 )
63 session.run("coverage", "combine")
64 session.run("coverage", "report", "-m")
65 session.run("coverage", "xml")
66
67
68 @nox.session(python=["3.6", "3.7", "3.8", "3.9", "3.10", "pypy"])
69 def test(session):
70 tests_impl(session)
71
72
73 @nox.session(python=["3"])
74 def google_brotli(session):
75 # https://pypi.org/project/Brotli/ is the Google version of brotli, so
76 # install it separately and don't install our brotli extra (which installs
77 # brotlipy).
78 session.install("brotli")
79 tests_impl(session, extras="socks,secure")
80
81
82 @nox.session()
83 def format(session):
84 """Run code formatters."""
85 session.install("black", "isort")
86 session.run("black", *SOURCE_FILES)
87 session.run("isort", *SOURCE_FILES)
88
89 lint(session)
90
91
92 @nox.session
93 def lint(session):
94 session.install("flake8", "flake8-2020", "black", "isort", "mypy")
95 session.run("flake8", "--version")
96 session.run("black", "--version")
97 session.run("isort", "--version")
98 session.run("mypy", "--version")
99 session.run("black", "--check", *SOURCE_FILES)
100 session.run("isort", "--check", *SOURCE_FILES)
101 session.run("flake8", *SOURCE_FILES)
102
103 session.log("mypy --strict src/urllib3")
104 all_errors, errors = [], []
105 process = subprocess.run(
106 ["mypy", "--strict", "src/urllib3"],
107 env=session.env,
108 text=True,
109 stdout=subprocess.PIPE,
110 stderr=subprocess.STDOUT,
111 )
112 # Ensure that mypy itself ran successfully
113 assert process.returncode in (0, 1)
114
115 for line in process.stdout.split("\n"):
116 all_errors.append(line)
117 filepath = line.partition(":")[0]
118 if filepath.replace(".pyi", ".py") in TYPED_FILES:
119 errors.append(line)
120 session.log(f"all errors count: {len(all_errors)}")
121 if errors:
122 session.error("\n" + "\n".join(sorted(set(errors))))
123
124
125 @nox.session
126 def docs(session):
127 session.install("-r", "docs/requirements.txt")
128 session.install(".[socks,secure,brotli]")
129
130 session.chdir("docs")
131 if os.path.exists("_build"):
132 shutil.rmtree("_build")
133 session.run("sphinx-build", "-b", "html", "-W", ".", "_build/html")
134
```
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # This file is protected via CODEOWNERS
3
4 import codecs
5 import os
6 import re
7
8 from setuptools import setup
9
10 base_path = os.path.dirname(__file__)
11
12 # Get the version (borrowed from SQLAlchemy)
13 with open(os.path.join(base_path, "src", "urllib3", "_version.py")) as fp:
14 VERSION = (
15 re.compile(r""".*__version__ = ["'](.*?)['"]""", re.S).match(fp.read()).group(1)
16 )
17
18
19 with codecs.open("README.rst", encoding="utf-8") as fp:
20 # Remove reST raw directive from README as they're not allowed on PyPI
21 # Those blocks start with a newline and continue until the next newline
22 mode = None
23 lines = []
24 for line in fp:
25 if line.startswith(".. raw::"):
26 mode = "ignore_nl"
27 elif line == "\n":
28 mode = "wait_nl" if mode == "ignore_nl" else None
29
30 if mode is None:
31 lines.append(line)
32 readme = "".join(lines)
33
34 with codecs.open("CHANGES.rst", encoding="utf-8") as fp:
35 changes = fp.read()
36
37 version = VERSION
38
39 setup(
40 name="urllib3",
41 version=version,
42 description="HTTP library with thread-safe connection pooling, file post, and more.",
43 long_description="\n\n".join([readme, changes]),
44 long_description_content_type="text/x-rst",
45 classifiers=[
46 "Environment :: Web Environment",
47 "Intended Audience :: Developers",
48 "License :: OSI Approved :: MIT License",
49 "Operating System :: OS Independent",
50 "Programming Language :: Python",
51 "Programming Language :: Python :: 3",
52 "Programming Language :: Python :: 3.6",
53 "Programming Language :: Python :: 3.7",
54 "Programming Language :: Python :: 3.8",
55 "Programming Language :: Python :: 3.9",
56 "Programming Language :: Python :: 3 :: Only",
57 "Programming Language :: Python :: Implementation :: CPython",
58 "Programming Language :: Python :: Implementation :: PyPy",
59 "Topic :: Internet :: WWW/HTTP",
60 "Topic :: Software Development :: Libraries",
61 ],
62 keywords="urllib httplib threadsafe filepost http https ssl pooling",
63 author="Andrey Petrov",
64 author_email="[email protected]",
65 url="https://urllib3.readthedocs.io/",
66 project_urls={
67 "Documentation": "https://urllib3.readthedocs.io/",
68 "Code": "https://github.com/urllib3/urllib3",
69 "Issue tracker": "https://github.com/urllib3/urllib3/issues",
70 },
71 license="MIT",
72 packages=[
73 "urllib3",
74 "urllib3.packages",
75 "urllib3.packages.ssl_match_hostname",
76 "urllib3.packages.backports",
77 "urllib3.contrib",
78 "urllib3.contrib._securetransport",
79 "urllib3.util",
80 ],
81 package_dir={"": "src"},
82 requires=[],
83 python_requires=">=3.6, <4",
84 extras_require={
85 "brotli": ["brotlipy>=0.6.0"],
86 "secure": [
87 "pyOpenSSL>=0.14",
88 "cryptography>=1.3.4",
89 "idna>=2.0.0",
90 "certifi",
91 ],
92 "socks": ["PySocks>=1.5.6,<2.0,!=1.5.7"],
93 },
94 )
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -70,6 +70,20 @@
tests_impl(session)
[email protected](python=["2.7"])
+def unsupported_python2(session):
+ # Can't check both returncode and output with session.run
+ process = subprocess.run(
+ ["python", "setup.py", "install"],
+ env={**session.env},
+ text=True,
+ capture_output=True,
+ )
+ assert process.returncode == 1
+ print(process.stderr)
+ assert "Unsupported Python version" in process.stderr
+
+
@nox.session(python=["3"])
def google_brotli(session):
# https://pypi.org/project/Brotli/ is the Google version of brotli, so
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,9 +4,38 @@
import codecs
import os
import re
+import sys
from setuptools import setup
+CURRENT_PYTHON = sys.version_info[:2]
+REQUIRED_PYTHON = (3, 6)
+
+# This check and everything above must remain compatible with Python 2.7.
+if CURRENT_PYTHON < REQUIRED_PYTHON:
+ sys.stderr.write(
+ """
+==========================
+Unsupported Python version
+==========================
+This version of urllib3 requires Python {}.{}, but you're trying to
+install it on Python {}.{}.
+This may be because you are using a version of pip that doesn't
+understand the python_requires classifier. Make sure you
+have pip >= 9.0 and setuptools >= 24.2, then try again:
+ $ python -m pip install --upgrade pip setuptools
+ $ python -m pip install urllib3
+This will install the latest version of urllib3 which works on your
+version of Python. If you can't upgrade your pip (or Python), request
+an older version of urllib3:
+ $ python -m pip install "urllib3<2"
+""".format(
+ *(REQUIRED_PYTHON + CURRENT_PYTHON)
+ )
+ )
+ sys.exit(1)
+
+
base_path = os.path.dirname(__file__)
# Get the version (borrowed from SQLAlchemy)
|
{"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -70,6 +70,20 @@\n tests_impl(session)\n \n \[email protected](python=[\"2.7\"])\n+def unsupported_python2(session):\n+ # Can't check both returncode and output with session.run\n+ process = subprocess.run(\n+ [\"python\", \"setup.py\", \"install\"],\n+ env={**session.env},\n+ text=True,\n+ capture_output=True,\n+ )\n+ assert process.returncode == 1\n+ print(process.stderr)\n+ assert \"Unsupported Python version\" in process.stderr\n+\n+\n @nox.session(python=[\"3\"])\n def google_brotli(session):\n # https://pypi.org/project/Brotli/ is the Google version of brotli, so\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -4,9 +4,38 @@\n import codecs\n import os\n import re\n+import sys\n \n from setuptools import setup\n \n+CURRENT_PYTHON = sys.version_info[:2]\n+REQUIRED_PYTHON = (3, 6)\n+\n+# This check and everything above must remain compatible with Python 2.7.\n+if CURRENT_PYTHON < REQUIRED_PYTHON:\n+ sys.stderr.write(\n+ \"\"\"\n+==========================\n+Unsupported Python version\n+==========================\n+This version of urllib3 requires Python {}.{}, but you're trying to\n+install it on Python {}.{}.\n+This may be because you are using a version of pip that doesn't\n+understand the python_requires classifier. Make sure you\n+have pip >= 9.0 and setuptools >= 24.2, then try again:\n+ $ python -m pip install --upgrade pip setuptools\n+ $ python -m pip install urllib3\n+This will install the latest version of urllib3 which works on your\n+version of Python. If you can't upgrade your pip (or Python), request\n+an older version of urllib3:\n+ $ python -m pip install \"urllib3<2\"\n+\"\"\".format(\n+ *(REQUIRED_PYTHON + CURRENT_PYTHON)\n+ )\n+ )\n+ sys.exit(1)\n+\n+\n base_path = os.path.dirname(__file__)\n \n # Get the version (borrowed from SQLAlchemy)\n", "issue": "[v2] Detect unsupported Python versions and raise a helpful ImportError\nWe'll need to ensure our `urllib3/__init__.py` is still Python 2.x compatible for early urllib3 v2.x releases. If we detect an unsupported Python version we raise a helpful error message recommending pinning to `urllib3<2`\n", "before_files": [{"content": "import os\nimport shutil\nimport subprocess\n\nimport nox\n\n# Whenever type-hints are completed on a file it should be added here so that\n# this file will continue to be checked by mypy. 
Errors from other files are\n# ignored.\nTYPED_FILES = {\n \"src/urllib3/contrib/__init__.py\",\n \"src/urllib3/exceptions.py\",\n \"src/urllib3/fields.py\",\n \"src/urllib3/filepost.py\",\n \"src/urllib3/packages/__init__.py\",\n \"src/urllib3/packages/six.py\",\n \"src/urllib3/packages/ssl_match_hostname/__init__.py\",\n \"src/urllib3/packages/ssl_match_hostname/_implementation.py\",\n \"src/urllib3/util/queue.py\",\n \"src/urllib3/util/url.py\",\n}\nSOURCE_FILES = [\n \"docs/\",\n \"dummyserver/\",\n \"src/\",\n \"test/\",\n \"noxfile.py\",\n \"setup.py\",\n]\n\n\ndef tests_impl(session, extras=\"socks,secure,brotli\"):\n # Install deps and the package itself.\n session.install(\"-r\", \"dev-requirements.txt\")\n session.install(f\".[{extras}]\")\n\n # Show the pip version.\n session.run(\"pip\", \"--version\")\n # Print the Python version and bytesize.\n session.run(\"python\", \"--version\")\n session.run(\"python\", \"-c\", \"import struct; print(struct.calcsize('P') * 8)\")\n # Print OpenSSL information.\n session.run(\"python\", \"-m\", \"OpenSSL.debug\")\n\n # Inspired from https://github.com/pyca/cryptography\n # We use parallel mode and then combine here so that coverage.py will take\n # the paths like .tox/pyXY/lib/pythonX.Y/site-packages/urllib3/__init__.py\n # and collapse them into src/urllib3/__init__.py.\n\n session.run(\n \"coverage\",\n \"run\",\n \"--parallel-mode\",\n \"-m\",\n \"pytest\",\n \"-r\",\n \"a\",\n \"--tb=native\",\n \"--no-success-flaky-report\",\n *(session.posargs or (\"test/\",)),\n env={\"PYTHONWARNINGS\": \"always::DeprecationWarning\"},\n )\n session.run(\"coverage\", \"combine\")\n session.run(\"coverage\", \"report\", \"-m\")\n session.run(\"coverage\", \"xml\")\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\", \"3.9\", \"3.10\", \"pypy\"])\ndef test(session):\n tests_impl(session)\n\n\[email protected](python=[\"3\"])\ndef google_brotli(session):\n # https://pypi.org/project/Brotli/ is the Google version of brotli, so\n # install it separately and don't install our brotli extra (which installs\n # brotlipy).\n session.install(\"brotli\")\n tests_impl(session, extras=\"socks,secure\")\n\n\[email protected]()\ndef format(session):\n \"\"\"Run code formatters.\"\"\"\n session.install(\"black\", \"isort\")\n session.run(\"black\", *SOURCE_FILES)\n session.run(\"isort\", *SOURCE_FILES)\n\n lint(session)\n\n\[email protected]\ndef lint(session):\n session.install(\"flake8\", \"flake8-2020\", \"black\", \"isort\", \"mypy\")\n session.run(\"flake8\", \"--version\")\n session.run(\"black\", \"--version\")\n session.run(\"isort\", \"--version\")\n session.run(\"mypy\", \"--version\")\n session.run(\"black\", \"--check\", *SOURCE_FILES)\n session.run(\"isort\", \"--check\", *SOURCE_FILES)\n session.run(\"flake8\", *SOURCE_FILES)\n\n session.log(\"mypy --strict src/urllib3\")\n all_errors, errors = [], []\n process = subprocess.run(\n [\"mypy\", \"--strict\", \"src/urllib3\"],\n env=session.env,\n text=True,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT,\n )\n # Ensure that mypy itself ran successfully\n assert process.returncode in (0, 1)\n\n for line in process.stdout.split(\"\\n\"):\n all_errors.append(line)\n filepath = line.partition(\":\")[0]\n if filepath.replace(\".pyi\", \".py\") in TYPED_FILES:\n errors.append(line)\n session.log(f\"all errors count: {len(all_errors)}\")\n if errors:\n session.error(\"\\n\" + \"\\n\".join(sorted(set(errors))))\n\n\[email protected]\ndef docs(session):\n session.install(\"-r\", 
\"docs/requirements.txt\")\n session.install(\".[socks,secure,brotli]\")\n\n session.chdir(\"docs\")\n if os.path.exists(\"_build\"):\n shutil.rmtree(\"_build\")\n session.run(\"sphinx-build\", \"-b\", \"html\", \"-W\", \".\", \"_build/html\")\n", "path": "noxfile.py"}, {"content": "#!/usr/bin/env python\n# This file is protected via CODEOWNERS\n\nimport codecs\nimport os\nimport re\n\nfrom setuptools import setup\n\nbase_path = os.path.dirname(__file__)\n\n# Get the version (borrowed from SQLAlchemy)\nwith open(os.path.join(base_path, \"src\", \"urllib3\", \"_version.py\")) as fp:\n VERSION = (\n re.compile(r\"\"\".*__version__ = [\"'](.*?)['\"]\"\"\", re.S).match(fp.read()).group(1)\n )\n\n\nwith codecs.open(\"README.rst\", encoding=\"utf-8\") as fp:\n # Remove reST raw directive from README as they're not allowed on PyPI\n # Those blocks start with a newline and continue until the next newline\n mode = None\n lines = []\n for line in fp:\n if line.startswith(\".. raw::\"):\n mode = \"ignore_nl\"\n elif line == \"\\n\":\n mode = \"wait_nl\" if mode == \"ignore_nl\" else None\n\n if mode is None:\n lines.append(line)\n readme = \"\".join(lines)\n\nwith codecs.open(\"CHANGES.rst\", encoding=\"utf-8\") as fp:\n changes = fp.read()\n\nversion = VERSION\n\nsetup(\n name=\"urllib3\",\n version=version,\n description=\"HTTP library with thread-safe connection pooling, file post, and more.\",\n long_description=\"\\n\\n\".join([readme, changes]),\n long_description_content_type=\"text/x-rst\",\n classifiers=[\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Software Development :: Libraries\",\n ],\n keywords=\"urllib httplib threadsafe filepost http https ssl pooling\",\n author=\"Andrey Petrov\",\n author_email=\"[email protected]\",\n url=\"https://urllib3.readthedocs.io/\",\n project_urls={\n \"Documentation\": \"https://urllib3.readthedocs.io/\",\n \"Code\": \"https://github.com/urllib3/urllib3\",\n \"Issue tracker\": \"https://github.com/urllib3/urllib3/issues\",\n },\n license=\"MIT\",\n packages=[\n \"urllib3\",\n \"urllib3.packages\",\n \"urllib3.packages.ssl_match_hostname\",\n \"urllib3.packages.backports\",\n \"urllib3.contrib\",\n \"urllib3.contrib._securetransport\",\n \"urllib3.util\",\n ],\n package_dir={\"\": \"src\"},\n requires=[],\n python_requires=\">=3.6, <4\",\n extras_require={\n \"brotli\": [\"brotlipy>=0.6.0\"],\n \"secure\": [\n \"pyOpenSSL>=0.14\",\n \"cryptography>=1.3.4\",\n \"idna>=2.0.0\",\n \"certifi\",\n ],\n \"socks\": [\"PySocks>=1.5.6,<2.0,!=1.5.7\"],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport shutil\nimport subprocess\n\nimport nox\n\n# Whenever type-hints are completed on a file it should be added here so that\n# this file will continue to be checked by mypy. 
Errors from other files are\n# ignored.\nTYPED_FILES = {\n \"src/urllib3/contrib/__init__.py\",\n \"src/urllib3/exceptions.py\",\n \"src/urllib3/fields.py\",\n \"src/urllib3/filepost.py\",\n \"src/urllib3/packages/__init__.py\",\n \"src/urllib3/packages/six.py\",\n \"src/urllib3/packages/ssl_match_hostname/__init__.py\",\n \"src/urllib3/packages/ssl_match_hostname/_implementation.py\",\n \"src/urllib3/util/queue.py\",\n \"src/urllib3/util/url.py\",\n}\nSOURCE_FILES = [\n \"docs/\",\n \"dummyserver/\",\n \"src/\",\n \"test/\",\n \"noxfile.py\",\n \"setup.py\",\n]\n\n\ndef tests_impl(session, extras=\"socks,secure,brotli\"):\n # Install deps and the package itself.\n session.install(\"-r\", \"dev-requirements.txt\")\n session.install(f\".[{extras}]\")\n\n # Show the pip version.\n session.run(\"pip\", \"--version\")\n # Print the Python version and bytesize.\n session.run(\"python\", \"--version\")\n session.run(\"python\", \"-c\", \"import struct; print(struct.calcsize('P') * 8)\")\n # Print OpenSSL information.\n session.run(\"python\", \"-m\", \"OpenSSL.debug\")\n\n # Inspired from https://github.com/pyca/cryptography\n # We use parallel mode and then combine here so that coverage.py will take\n # the paths like .tox/pyXY/lib/pythonX.Y/site-packages/urllib3/__init__.py\n # and collapse them into src/urllib3/__init__.py.\n\n session.run(\n \"coverage\",\n \"run\",\n \"--parallel-mode\",\n \"-m\",\n \"pytest\",\n \"-r\",\n \"a\",\n \"--tb=native\",\n \"--no-success-flaky-report\",\n *(session.posargs or (\"test/\",)),\n env={\"PYTHONWARNINGS\": \"always::DeprecationWarning\"},\n )\n session.run(\"coverage\", \"combine\")\n session.run(\"coverage\", \"report\", \"-m\")\n session.run(\"coverage\", \"xml\")\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\", \"3.9\", \"3.10\", \"pypy\"])\ndef test(session):\n tests_impl(session)\n\n\[email protected](python=[\"2.7\"])\ndef unsupported_python2(session):\n # Can't check both returncode and output with session.run\n process = subprocess.run(\n [\"python\", \"setup.py\", \"install\"],\n env={**session.env},\n text=True,\n capture_output=True,\n )\n assert process.returncode == 1\n print(process.stderr)\n assert \"Unsupported Python version\" in process.stderr\n\n\[email protected](python=[\"3\"])\ndef google_brotli(session):\n # https://pypi.org/project/Brotli/ is the Google version of brotli, so\n # install it separately and don't install our brotli extra (which installs\n # brotlipy).\n session.install(\"brotli\")\n tests_impl(session, extras=\"socks,secure\")\n\n\[email protected]()\ndef format(session):\n \"\"\"Run code formatters.\"\"\"\n session.install(\"black\", \"isort\")\n session.run(\"black\", *SOURCE_FILES)\n session.run(\"isort\", *SOURCE_FILES)\n\n lint(session)\n\n\[email protected]\ndef lint(session):\n session.install(\"flake8\", \"flake8-2020\", \"black\", \"isort\", \"mypy\")\n session.run(\"flake8\", \"--version\")\n session.run(\"black\", \"--version\")\n session.run(\"isort\", \"--version\")\n session.run(\"mypy\", \"--version\")\n session.run(\"black\", \"--check\", *SOURCE_FILES)\n session.run(\"isort\", \"--check\", *SOURCE_FILES)\n session.run(\"flake8\", *SOURCE_FILES)\n\n session.log(\"mypy --strict src/urllib3\")\n all_errors, errors = [], []\n process = subprocess.run(\n [\"mypy\", \"--strict\", \"src/urllib3\"],\n env=session.env,\n text=True,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT,\n )\n # Ensure that mypy itself ran successfully\n assert process.returncode in (0, 1)\n\n for line in 
process.stdout.split(\"\\n\"):\n all_errors.append(line)\n filepath = line.partition(\":\")[0]\n if filepath.replace(\".pyi\", \".py\") in TYPED_FILES:\n errors.append(line)\n session.log(f\"all errors count: {len(all_errors)}\")\n if errors:\n session.error(\"\\n\" + \"\\n\".join(sorted(set(errors))))\n\n\[email protected]\ndef docs(session):\n session.install(\"-r\", \"docs/requirements.txt\")\n session.install(\".[socks,secure,brotli]\")\n\n session.chdir(\"docs\")\n if os.path.exists(\"_build\"):\n shutil.rmtree(\"_build\")\n session.run(\"sphinx-build\", \"-b\", \"html\", \"-W\", \".\", \"_build/html\")\n", "path": "noxfile.py"}, {"content": "#!/usr/bin/env python\n# This file is protected via CODEOWNERS\n\nimport codecs\nimport os\nimport re\nimport sys\n\nfrom setuptools import setup\n\nCURRENT_PYTHON = sys.version_info[:2]\nREQUIRED_PYTHON = (3, 6)\n\n# This check and everything above must remain compatible with Python 2.7.\nif CURRENT_PYTHON < REQUIRED_PYTHON:\n sys.stderr.write(\n \"\"\"\n==========================\nUnsupported Python version\n==========================\nThis version of urllib3 requires Python {}.{}, but you're trying to\ninstall it on Python {}.{}.\nThis may be because you are using a version of pip that doesn't\nunderstand the python_requires classifier. Make sure you\nhave pip >= 9.0 and setuptools >= 24.2, then try again:\n $ python -m pip install --upgrade pip setuptools\n $ python -m pip install urllib3\nThis will install the latest version of urllib3 which works on your\nversion of Python. If you can't upgrade your pip (or Python), request\nan older version of urllib3:\n $ python -m pip install \"urllib3<2\"\n\"\"\".format(\n *(REQUIRED_PYTHON + CURRENT_PYTHON)\n )\n )\n sys.exit(1)\n\n\nbase_path = os.path.dirname(__file__)\n\n# Get the version (borrowed from SQLAlchemy)\nwith open(os.path.join(base_path, \"src\", \"urllib3\", \"_version.py\")) as fp:\n VERSION = (\n re.compile(r\"\"\".*__version__ = [\"'](.*?)['\"]\"\"\", re.S).match(fp.read()).group(1)\n )\n\n\nwith codecs.open(\"README.rst\", encoding=\"utf-8\") as fp:\n # Remove reST raw directive from README as they're not allowed on PyPI\n # Those blocks start with a newline and continue until the next newline\n mode = None\n lines = []\n for line in fp:\n if line.startswith(\".. 
raw::\"):\n mode = \"ignore_nl\"\n elif line == \"\\n\":\n mode = \"wait_nl\" if mode == \"ignore_nl\" else None\n\n if mode is None:\n lines.append(line)\n readme = \"\".join(lines)\n\nwith codecs.open(\"CHANGES.rst\", encoding=\"utf-8\") as fp:\n changes = fp.read()\n\nversion = VERSION\n\nsetup(\n name=\"urllib3\",\n version=version,\n description=\"HTTP library with thread-safe connection pooling, file post, and more.\",\n long_description=\"\\n\\n\".join([readme, changes]),\n long_description_content_type=\"text/x-rst\",\n classifiers=[\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Software Development :: Libraries\",\n ],\n keywords=\"urllib httplib threadsafe filepost http https ssl pooling\",\n author=\"Andrey Petrov\",\n author_email=\"[email protected]\",\n url=\"https://urllib3.readthedocs.io/\",\n project_urls={\n \"Documentation\": \"https://urllib3.readthedocs.io/\",\n \"Code\": \"https://github.com/urllib3/urllib3\",\n \"Issue tracker\": \"https://github.com/urllib3/urllib3/issues\",\n },\n license=\"MIT\",\n packages=[\n \"urllib3\",\n \"urllib3.packages\",\n \"urllib3.packages.ssl_match_hostname\",\n \"urllib3.packages.backports\",\n \"urllib3.contrib\",\n \"urllib3.contrib._securetransport\",\n \"urllib3.util\",\n ],\n package_dir={\"\": \"src\"},\n requires=[],\n python_requires=\">=3.6, <4\",\n extras_require={\n \"brotli\": [\"brotlipy>=0.6.0\"],\n \"secure\": [\n \"pyOpenSSL>=0.14\",\n \"cryptography>=1.3.4\",\n \"idna>=2.0.0\",\n \"certifi\",\n ],\n \"socks\": [\"PySocks>=1.5.6,<2.0,!=1.5.7\"],\n },\n)\n", "path": "setup.py"}]}
| 2,667 | 511 |
gh_patches_debug_14924
|
rasdani/github-patches
|
git_diff
|
rotki__rotki-1944
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bitmex API request failed due to: Signature not valid.
I added a Bitmex API key with no special permissions:
<img width="925" alt="Screen Shot 2020-08-07 at 12 30 54" src="https://user-images.githubusercontent.com/2758453/89637013-d900c380-d8a9-11ea-82c1-9a114a5aa6b5.png">
In rotki, I get:
```
[07/08/2020 12:25:54 CEST] rotkehlchen.exchanges.bitmex: Bitmex API request failed due to: Signature not valid.
```
Update:
```
python -m pytest rotkehlchen/tests/exchanges/test_bitmex.py::test_bitmex_api_signature
// passed
```
--- END ISSUE ---
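For reference, the rejected `api-signature` header is a hex-encoded HMAC-SHA256 over the string verb + path + expires + body, which is what `_generate_signature` in the module below computes. The standalone sketch below recomputes it and calls `GET /api/v1/user` directly, which can help rule out a bad key/secret pair. The key and secret values are placeholders, and the `www.bitmex.com` base URL follows BitMEX's public API documentation rather than this module (which uses `https://bitmex.com`).
```python
# Standalone check of the BitMEX signing scheme used by _generate_signature()
# in rotkehlchen/exchanges/bitmex.py. API_KEY / API_SECRET are placeholders.
import hashlib
import hmac
import time

import requests

API_KEY = 'your-api-key'         # placeholder
API_SECRET = b'your-api-secret'  # placeholder (bytes)

verb = 'GET'
path = '/api/v1/user'
expires = int(time.time()) + 20  # same 20-second expiry the module uses
data = ''

signature = hmac.new(
    API_SECRET,
    (verb + path + str(expires) + data).encode(),
    hashlib.sha256,
).hexdigest()

response = requests.get(
    'https://www.bitmex.com' + path,  # documented REST host; the module below uses https://bitmex.com
    headers={
        'api-key': API_KEY,
        'api-expires': str(expires),
        'api-signature': signature,
    },
)
print(response.status_code, response.text)
```
If this standalone call also returns "Signature not valid", the key/secret pair itself is suspect; if it succeeds, the problem is more likely in how the client assembles or sends the request.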
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rotkehlchen/exchanges/bitmex.py`
Content:
```
1 import hashlib
2 import hmac
3 import logging
4 import time
5 from json.decoder import JSONDecodeError
6 from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union
7 from urllib.parse import urlencode
8
9 import requests
10
11 from rotkehlchen.assets.asset import Asset
12 from rotkehlchen.constants.assets import A_BTC
13 from rotkehlchen.errors import DeserializationError, RemoteError, UnknownAsset
14 from rotkehlchen.exchanges.data_structures import AssetMovement, Location, MarginPosition
15 from rotkehlchen.exchanges.exchange import ExchangeInterface
16 from rotkehlchen.exchanges.utils import deserialize_asset_movement_address, get_key_if_has_val
17 from rotkehlchen.fval import FVal
18 from rotkehlchen.inquirer import Inquirer
19 from rotkehlchen.logging import RotkehlchenLogsAdapter
20 from rotkehlchen.serialization.deserialize import (
21 deserialize_asset_amount_force_positive,
22 deserialize_fee,
23 )
24 from rotkehlchen.typing import (
25 ApiKey,
26 ApiSecret,
27 AssetAmount,
28 AssetMovementCategory,
29 Fee,
30 Timestamp,
31 )
32 from rotkehlchen.user_messages import MessagesAggregator
33 from rotkehlchen.utils.interfaces import cache_response_timewise, protect_with_lock
34 from rotkehlchen.utils.misc import iso8601ts_to_timestamp, satoshis_to_btc
35 from rotkehlchen.utils.serialization import rlk_jsonloads
36
37 if TYPE_CHECKING:
38 from rotkehlchen.db.dbhandler import DBHandler
39
40 logger = logging.getLogger(__name__)
41 log = RotkehlchenLogsAdapter(logger)
42
43 BITMEX_PRIVATE_ENDPOINTS = (
44 'user',
45 'user/wallet',
46 'user/walletHistory',
47 )
48
49
50 def bitmex_to_world(symbol: str) -> Asset:
51 if symbol == 'XBt':
52 return A_BTC
53 return Asset(symbol)
54
55
56 def trade_from_bitmex(bitmex_trade: Dict) -> MarginPosition:
57 """Turn a bitmex trade returned from bitmex trade history to our common trade
58 history format. This only returns margin positions as bitmex only deals in
59 margin trading"""
60 close_time = iso8601ts_to_timestamp(bitmex_trade['transactTime'])
61 profit_loss = AssetAmount(satoshis_to_btc(FVal(bitmex_trade['amount'])))
62 currency = bitmex_to_world(bitmex_trade['currency'])
63 fee = deserialize_fee(bitmex_trade['fee'])
64 notes = bitmex_trade['address']
65 assert currency == A_BTC, 'Bitmex trade should only deal in BTC'
66
67 log.debug(
68 'Processing Bitmex Trade',
69 sensitive_log=True,
70 timestamp=close_time,
71 profit_loss=profit_loss,
72 currency=currency,
73 fee=fee,
74 notes=notes,
75 )
76
77 return MarginPosition(
78 location=Location.BITMEX,
79 open_time=None,
80 close_time=close_time,
81 profit_loss=profit_loss,
82 pl_currency=currency,
83 fee=fee,
84 fee_currency=A_BTC,
85 notes=notes,
86 link=str(bitmex_trade['transactID']),
87 )
88
89
90 class Bitmex(ExchangeInterface):
91 def __init__(
92 self,
93 api_key: ApiKey,
94 secret: ApiSecret,
95 database: 'DBHandler',
96 msg_aggregator: MessagesAggregator,
97 ):
98 super(Bitmex, self).__init__('bitmex', api_key, secret, database)
99 self.uri = 'https://bitmex.com'
100 self.session.headers.update({'api-key': api_key})
101 self.msg_aggregator = msg_aggregator
102
103 def first_connection(self) -> None:
104 self.first_connection_made = True
105
106 def validate_api_key(self) -> Tuple[bool, str]:
107 try:
108 self._api_query('get', 'user')
109 except RemoteError as e:
110 error = str(e)
111 if 'Invalid API Key' in error:
112 return False, 'Provided API Key is invalid'
113 if 'Signature not valid' in error:
114 return False, 'Provided API Secret is invalid'
115 # else reraise
116 raise
117 return True, ''
118
119 def _generate_signature(self, verb: str, path: str, expires: int, data: str = '') -> str:
120 signature = hmac.new(
121 self.secret,
122 (verb.upper() + path + str(expires) + data).encode(),
123 hashlib.sha256,
124 ).hexdigest()
125 self.session.headers.update({
126 'api-signature': signature,
127 })
128 return signature
129
130 def _api_query(
131 self,
132 verb: str,
133 path: str,
134 options: Optional[Dict] = None,
135 ) -> Union[List, Dict]:
136 """
137 Queries Bitmex with the given verb for the given path and options
138 """
139 assert verb in ('get', 'post', 'push'), (
140 'Given verb {} is not a valid HTTP verb'.format(verb)
141 )
142
143 # 20 seconds expiration
144 expires = int(time.time()) + 20
145
146 request_path_no_args = '/api/v1/' + path
147
148 data = ''
149 if not options:
150 request_path = request_path_no_args
151 else:
152 request_path = request_path_no_args + '?' + urlencode(options)
153
154 if path in BITMEX_PRIVATE_ENDPOINTS:
155 self._generate_signature(
156 verb=verb,
157 path=request_path,
158 expires=expires,
159 data=data,
160 )
161
162 self.session.headers.update({
163 'api-expires': str(expires),
164 })
165 if data != '':
166 self.session.headers.update({
167 'Content-Type': 'application/json',
168 'Content-Length': str(len(data)),
169 })
170
171 request_url = self.uri + request_path
172 log.debug('Bitmex API Query', verb=verb, request_url=request_url)
173 try:
174 response = getattr(self.session, verb)(request_url, data=data)
175 except requests.exceptions.RequestException as e:
176 raise RemoteError(f'Bitmex API request failed due to {str(e)}') from e
177
178 if response.status_code not in (200, 401):
179 raise RemoteError(
180 'Bitmex api request for {} failed with HTTP status code {}'.format(
181 response.url,
182 response.status_code,
183 ),
184 )
185
186 try:
187 json_ret = rlk_jsonloads(response.text)
188 except JSONDecodeError as e:
189 raise RemoteError('Bitmex returned invalid JSON response') from e
190
191 if isinstance(json_ret, dict) and 'error' in json_ret:
192 raise RemoteError(json_ret['error']['message'])
193
194 return json_ret
195
196 def _api_query_dict(
197 self,
198 verb: str,
199 path: str,
200 options: Optional[Dict] = None,
201 ) -> Dict:
202 result = self._api_query(verb, path, options)
203 assert isinstance(result, Dict)
204 return result
205
206 def _api_query_list(
207 self,
208 verb: str,
209 path: str,
210 options: Optional[Dict] = None,
211 ) -> List:
212 result = self._api_query(verb, path, options)
213 assert isinstance(result, List)
214 return result
215
216 @protect_with_lock()
217 @cache_response_timewise()
218 def query_balances(self) -> Tuple[Optional[dict], str]:
219
220 try:
221 resp = self._api_query_dict('get', 'user/wallet', {'currency': 'XBt'})
222 # Bitmex shows only BTC balance
223 returned_balances = {}
224 usd_price = Inquirer().find_usd_price(A_BTC)
225 except RemoteError as e:
226 msg = f'Bitmex API request failed due to: {str(e)}'
227 log.error(msg)
228 return None, msg
229
230 # result is in satoshis
231 amount = satoshis_to_btc(FVal(resp['amount']))
232 usd_value = amount * usd_price
233
234 returned_balances[A_BTC] = {
235 'amount': amount,
236 'usd_value': usd_value,
237 }
238 log.debug(
239 'Bitmex balance query result',
240 sensitive_log=True,
241 currency='BTC',
242 amount=amount,
243 usd_value=usd_value,
244 )
245
246 return returned_balances, ''
247
248 def query_online_margin_history(
249 self,
250 start_ts: Timestamp,
251 end_ts: Timestamp,
252 ) -> List[MarginPosition]:
253
254 # We know user/walletHistory returns a list
255 resp = self._api_query_list('get', 'user/walletHistory')
256 log.debug('Bitmex trade history query', results_num=len(resp))
257
258 margin_trades = []
259 for tx in resp:
260 if tx['timestamp'] is None:
261 timestamp = None
262 else:
263 timestamp = iso8601ts_to_timestamp(tx['timestamp'])
264 if tx['transactType'] != 'RealisedPNL':
265 continue
266 if timestamp and timestamp < start_ts:
267 continue
268 if timestamp and timestamp > end_ts:
269 continue
270 margin_trades.append(trade_from_bitmex(tx))
271
272 return margin_trades
273
274 def query_online_deposits_withdrawals(
275 self,
276 start_ts: Timestamp,
277 end_ts: Timestamp,
278 ) -> List:
279 resp = self._api_query_list('get', 'user/walletHistory')
280
281 log.debug('Bitmex deposit/withdrawals query', results_num=len(resp))
282
283 movements = []
284 for movement in resp:
285 try:
286 transaction_type = movement['transactType']
287 if transaction_type == 'Deposit':
288 transaction_type = AssetMovementCategory.DEPOSIT
289 elif transaction_type == 'Withdrawal':
290 transaction_type = AssetMovementCategory.WITHDRAWAL
291 else:
292 continue
293
294 timestamp = iso8601ts_to_timestamp(movement['timestamp'])
295 if timestamp < start_ts:
296 continue
297 if timestamp > end_ts:
298 continue
299
300 asset = bitmex_to_world(movement['currency'])
301 amount = deserialize_asset_amount_force_positive(movement['amount'])
302 fee = deserialize_fee(movement['fee'])
303
304 if asset == A_BTC:
305 # bitmex stores amounts in satoshis
306 amount = AssetAmount(satoshis_to_btc(amount))
307 fee = Fee(satoshis_to_btc(fee))
308
309 movements.append(AssetMovement(
310 location=Location.BITMEX,
311 category=transaction_type,
312 address=deserialize_asset_movement_address(movement, 'address', asset),
313 transaction_id=get_key_if_has_val(movement, 'tx'),
314 timestamp=timestamp,
315 asset=asset,
316 amount=amount,
317 fee_asset=asset,
318 fee=fee,
319 link=str(movement['transactID']),
320 ))
321 except UnknownAsset as e:
322 self.msg_aggregator.add_warning(
323 f'Found bitmex deposit/withdrawal with unknown asset '
324 f'{e.asset_name}. Ignoring it.',
325 )
326 continue
327 except (DeserializationError, KeyError) as e:
328 msg = str(e)
329 if isinstance(e, KeyError):
330 msg = f'Missing key entry for {msg}.'
331 self.msg_aggregator.add_error(
332 'Unexpected data encountered during deserialization of a bitmex '
333 'asset movement. Check logs for details and open a bug report.',
334 )
335 log.error(
336 f'Unexpected data encountered during deserialization of bitmex '
337 f'asset_movement {movement}. Error was: {msg}',
338 )
339 continue
340 return movements
341
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rotkehlchen/exchanges/bitmex.py b/rotkehlchen/exchanges/bitmex.py
--- a/rotkehlchen/exchanges/bitmex.py
+++ b/rotkehlchen/exchanges/bitmex.py
@@ -148,13 +148,15 @@
data = ''
if not options:
request_path = request_path_no_args
+ signature_path = request_path
else:
request_path = request_path_no_args + '?' + urlencode(options)
+ signature_path = request_path_no_args if path == 'user/wallet' else request_path
if path in BITMEX_PRIVATE_ENDPOINTS:
self._generate_signature(
verb=verb,
- path=request_path,
+ path=signature_path,
expires=expires,
data=data,
)
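For context on the fix: the request itself still goes out with its query string attached, but for the `user/wallet` endpoint the HMAC signature is now computed over the path without the query arguments, which is evidently what the Bitmex API validates against for this endpoint. A minimal sketch of the signing scheme used by `_generate_signature`, with made-up secret and expiry values, shows why the two variants cannot both be accepted — they yield different digests:

```python
import hashlib
import hmac


def bitmex_signature(secret: bytes, verb: str, path: str, expires: int, data: str = "") -> str:
    # Same construction as _generate_signature: HMAC-SHA256 over VERB + path + expires + body.
    message = (verb.upper() + path + str(expires) + data).encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()


secret = b"dummy-api-secret"  # illustrative value only
expires = 1_700_000_000       # illustrative expiry timestamp

signed_with_args = bitmex_signature(secret, "get", "/api/v1/user/wallet?currency=XBt", expires)
signed_without_args = bitmex_signature(secret, "get", "/api/v1/user/wallet", expires)
assert signed_with_args != signed_without_args  # only one of these can match what the server expects
```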
|
{"golden_diff": "diff --git a/rotkehlchen/exchanges/bitmex.py b/rotkehlchen/exchanges/bitmex.py\n--- a/rotkehlchen/exchanges/bitmex.py\n+++ b/rotkehlchen/exchanges/bitmex.py\n@@ -148,13 +148,15 @@\n data = ''\n if not options:\n request_path = request_path_no_args\n+ signature_path = request_path\n else:\n request_path = request_path_no_args + '?' + urlencode(options)\n+ signature_path = request_path_no_args if path == 'user/wallet' else request_path\n \n if path in BITMEX_PRIVATE_ENDPOINTS:\n self._generate_signature(\n verb=verb,\n- path=request_path,\n+ path=signature_path,\n expires=expires,\n data=data,\n )\n", "issue": "Bitmex API request failed due to: Signature not valid.\nI added a Bitmex API key with no special permissions:\r\n\r\n<img width=\"925\" alt=\"Screen Shot 2020-08-07 at 12 30 54\" src=\"https://user-images.githubusercontent.com/2758453/89637013-d900c380-d8a9-11ea-82c1-9a114a5aa6b5.png\">\r\n\r\nIn rotki, I get:\r\n\r\n```\r\n[07/08/2020 12:25:54 CEST] rotkehlchen.exchanges.bitmex: Bitmex API request failed due to: Signature not valid.\r\n```\r\n\r\nUpdate:\r\n\r\n```\r\npython -m pytest rotkehlchen/tests/exchanges/test_bitmex.py::test_bitmex_api_signature\r\n// passed\r\n```\n", "before_files": [{"content": "import hashlib\nimport hmac\nimport logging\nimport time\nfrom json.decoder import JSONDecodeError\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union\nfrom urllib.parse import urlencode\n\nimport requests\n\nfrom rotkehlchen.assets.asset import Asset\nfrom rotkehlchen.constants.assets import A_BTC\nfrom rotkehlchen.errors import DeserializationError, RemoteError, UnknownAsset\nfrom rotkehlchen.exchanges.data_structures import AssetMovement, Location, MarginPosition\nfrom rotkehlchen.exchanges.exchange import ExchangeInterface\nfrom rotkehlchen.exchanges.utils import deserialize_asset_movement_address, get_key_if_has_val\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.inquirer import Inquirer\nfrom rotkehlchen.logging import RotkehlchenLogsAdapter\nfrom rotkehlchen.serialization.deserialize import (\n deserialize_asset_amount_force_positive,\n deserialize_fee,\n)\nfrom rotkehlchen.typing import (\n ApiKey,\n ApiSecret,\n AssetAmount,\n AssetMovementCategory,\n Fee,\n Timestamp,\n)\nfrom rotkehlchen.user_messages import MessagesAggregator\nfrom rotkehlchen.utils.interfaces import cache_response_timewise, protect_with_lock\nfrom rotkehlchen.utils.misc import iso8601ts_to_timestamp, satoshis_to_btc\nfrom rotkehlchen.utils.serialization import rlk_jsonloads\n\nif TYPE_CHECKING:\n from rotkehlchen.db.dbhandler import DBHandler\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\nBITMEX_PRIVATE_ENDPOINTS = (\n 'user',\n 'user/wallet',\n 'user/walletHistory',\n)\n\n\ndef bitmex_to_world(symbol: str) -> Asset:\n if symbol == 'XBt':\n return A_BTC\n return Asset(symbol)\n\n\ndef trade_from_bitmex(bitmex_trade: Dict) -> MarginPosition:\n \"\"\"Turn a bitmex trade returned from bitmex trade history to our common trade\n history format. 
This only returns margin positions as bitmex only deals in\n margin trading\"\"\"\n close_time = iso8601ts_to_timestamp(bitmex_trade['transactTime'])\n profit_loss = AssetAmount(satoshis_to_btc(FVal(bitmex_trade['amount'])))\n currency = bitmex_to_world(bitmex_trade['currency'])\n fee = deserialize_fee(bitmex_trade['fee'])\n notes = bitmex_trade['address']\n assert currency == A_BTC, 'Bitmex trade should only deal in BTC'\n\n log.debug(\n 'Processing Bitmex Trade',\n sensitive_log=True,\n timestamp=close_time,\n profit_loss=profit_loss,\n currency=currency,\n fee=fee,\n notes=notes,\n )\n\n return MarginPosition(\n location=Location.BITMEX,\n open_time=None,\n close_time=close_time,\n profit_loss=profit_loss,\n pl_currency=currency,\n fee=fee,\n fee_currency=A_BTC,\n notes=notes,\n link=str(bitmex_trade['transactID']),\n )\n\n\nclass Bitmex(ExchangeInterface):\n def __init__(\n self,\n api_key: ApiKey,\n secret: ApiSecret,\n database: 'DBHandler',\n msg_aggregator: MessagesAggregator,\n ):\n super(Bitmex, self).__init__('bitmex', api_key, secret, database)\n self.uri = 'https://bitmex.com'\n self.session.headers.update({'api-key': api_key})\n self.msg_aggregator = msg_aggregator\n\n def first_connection(self) -> None:\n self.first_connection_made = True\n\n def validate_api_key(self) -> Tuple[bool, str]:\n try:\n self._api_query('get', 'user')\n except RemoteError as e:\n error = str(e)\n if 'Invalid API Key' in error:\n return False, 'Provided API Key is invalid'\n if 'Signature not valid' in error:\n return False, 'Provided API Secret is invalid'\n # else reraise\n raise\n return True, ''\n\n def _generate_signature(self, verb: str, path: str, expires: int, data: str = '') -> str:\n signature = hmac.new(\n self.secret,\n (verb.upper() + path + str(expires) + data).encode(),\n hashlib.sha256,\n ).hexdigest()\n self.session.headers.update({\n 'api-signature': signature,\n })\n return signature\n\n def _api_query(\n self,\n verb: str,\n path: str,\n options: Optional[Dict] = None,\n ) -> Union[List, Dict]:\n \"\"\"\n Queries Bitmex with the given verb for the given path and options\n \"\"\"\n assert verb in ('get', 'post', 'push'), (\n 'Given verb {} is not a valid HTTP verb'.format(verb)\n )\n\n # 20 seconds expiration\n expires = int(time.time()) + 20\n\n request_path_no_args = '/api/v1/' + path\n\n data = ''\n if not options:\n request_path = request_path_no_args\n else:\n request_path = request_path_no_args + '?' 
+ urlencode(options)\n\n if path in BITMEX_PRIVATE_ENDPOINTS:\n self._generate_signature(\n verb=verb,\n path=request_path,\n expires=expires,\n data=data,\n )\n\n self.session.headers.update({\n 'api-expires': str(expires),\n })\n if data != '':\n self.session.headers.update({\n 'Content-Type': 'application/json',\n 'Content-Length': str(len(data)),\n })\n\n request_url = self.uri + request_path\n log.debug('Bitmex API Query', verb=verb, request_url=request_url)\n try:\n response = getattr(self.session, verb)(request_url, data=data)\n except requests.exceptions.RequestException as e:\n raise RemoteError(f'Bitmex API request failed due to {str(e)}') from e\n\n if response.status_code not in (200, 401):\n raise RemoteError(\n 'Bitmex api request for {} failed with HTTP status code {}'.format(\n response.url,\n response.status_code,\n ),\n )\n\n try:\n json_ret = rlk_jsonloads(response.text)\n except JSONDecodeError as e:\n raise RemoteError('Bitmex returned invalid JSON response') from e\n\n if isinstance(json_ret, dict) and 'error' in json_ret:\n raise RemoteError(json_ret['error']['message'])\n\n return json_ret\n\n def _api_query_dict(\n self,\n verb: str,\n path: str,\n options: Optional[Dict] = None,\n ) -> Dict:\n result = self._api_query(verb, path, options)\n assert isinstance(result, Dict)\n return result\n\n def _api_query_list(\n self,\n verb: str,\n path: str,\n options: Optional[Dict] = None,\n ) -> List:\n result = self._api_query(verb, path, options)\n assert isinstance(result, List)\n return result\n\n @protect_with_lock()\n @cache_response_timewise()\n def query_balances(self) -> Tuple[Optional[dict], str]:\n\n try:\n resp = self._api_query_dict('get', 'user/wallet', {'currency': 'XBt'})\n # Bitmex shows only BTC balance\n returned_balances = {}\n usd_price = Inquirer().find_usd_price(A_BTC)\n except RemoteError as e:\n msg = f'Bitmex API request failed due to: {str(e)}'\n log.error(msg)\n return None, msg\n\n # result is in satoshis\n amount = satoshis_to_btc(FVal(resp['amount']))\n usd_value = amount * usd_price\n\n returned_balances[A_BTC] = {\n 'amount': amount,\n 'usd_value': usd_value,\n }\n log.debug(\n 'Bitmex balance query result',\n sensitive_log=True,\n currency='BTC',\n amount=amount,\n usd_value=usd_value,\n )\n\n return returned_balances, ''\n\n def query_online_margin_history(\n self,\n start_ts: Timestamp,\n end_ts: Timestamp,\n ) -> List[MarginPosition]:\n\n # We know user/walletHistory returns a list\n resp = self._api_query_list('get', 'user/walletHistory')\n log.debug('Bitmex trade history query', results_num=len(resp))\n\n margin_trades = []\n for tx in resp:\n if tx['timestamp'] is None:\n timestamp = None\n else:\n timestamp = iso8601ts_to_timestamp(tx['timestamp'])\n if tx['transactType'] != 'RealisedPNL':\n continue\n if timestamp and timestamp < start_ts:\n continue\n if timestamp and timestamp > end_ts:\n continue\n margin_trades.append(trade_from_bitmex(tx))\n\n return margin_trades\n\n def query_online_deposits_withdrawals(\n self,\n start_ts: Timestamp,\n end_ts: Timestamp,\n ) -> List:\n resp = self._api_query_list('get', 'user/walletHistory')\n\n log.debug('Bitmex deposit/withdrawals query', results_num=len(resp))\n\n movements = []\n for movement in resp:\n try:\n transaction_type = movement['transactType']\n if transaction_type == 'Deposit':\n transaction_type = AssetMovementCategory.DEPOSIT\n elif transaction_type == 'Withdrawal':\n transaction_type = AssetMovementCategory.WITHDRAWAL\n else:\n continue\n\n timestamp = 
iso8601ts_to_timestamp(movement['timestamp'])\n if timestamp < start_ts:\n continue\n if timestamp > end_ts:\n continue\n\n asset = bitmex_to_world(movement['currency'])\n amount = deserialize_asset_amount_force_positive(movement['amount'])\n fee = deserialize_fee(movement['fee'])\n\n if asset == A_BTC:\n # bitmex stores amounts in satoshis\n amount = AssetAmount(satoshis_to_btc(amount))\n fee = Fee(satoshis_to_btc(fee))\n\n movements.append(AssetMovement(\n location=Location.BITMEX,\n category=transaction_type,\n address=deserialize_asset_movement_address(movement, 'address', asset),\n transaction_id=get_key_if_has_val(movement, 'tx'),\n timestamp=timestamp,\n asset=asset,\n amount=amount,\n fee_asset=asset,\n fee=fee,\n link=str(movement['transactID']),\n ))\n except UnknownAsset as e:\n self.msg_aggregator.add_warning(\n f'Found bitmex deposit/withdrawal with unknown asset '\n f'{e.asset_name}. Ignoring it.',\n )\n continue\n except (DeserializationError, KeyError) as e:\n msg = str(e)\n if isinstance(e, KeyError):\n msg = f'Missing key entry for {msg}.'\n self.msg_aggregator.add_error(\n 'Unexpected data encountered during deserialization of a bitmex '\n 'asset movement. Check logs for details and open a bug report.',\n )\n log.error(\n f'Unexpected data encountered during deserialization of bitmex '\n f'asset_movement {movement}. Error was: {msg}',\n )\n continue\n return movements\n", "path": "rotkehlchen/exchanges/bitmex.py"}], "after_files": [{"content": "import hashlib\nimport hmac\nimport logging\nimport time\nfrom json.decoder import JSONDecodeError\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union\nfrom urllib.parse import urlencode\n\nimport requests\n\nfrom rotkehlchen.assets.asset import Asset\nfrom rotkehlchen.constants.assets import A_BTC\nfrom rotkehlchen.errors import DeserializationError, RemoteError, UnknownAsset\nfrom rotkehlchen.exchanges.data_structures import AssetMovement, Location, MarginPosition\nfrom rotkehlchen.exchanges.exchange import ExchangeInterface\nfrom rotkehlchen.exchanges.utils import deserialize_asset_movement_address, get_key_if_has_val\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.inquirer import Inquirer\nfrom rotkehlchen.logging import RotkehlchenLogsAdapter\nfrom rotkehlchen.serialization.deserialize import (\n deserialize_asset_amount_force_positive,\n deserialize_fee,\n)\nfrom rotkehlchen.typing import (\n ApiKey,\n ApiSecret,\n AssetAmount,\n AssetMovementCategory,\n Fee,\n Timestamp,\n)\nfrom rotkehlchen.user_messages import MessagesAggregator\nfrom rotkehlchen.utils.interfaces import cache_response_timewise, protect_with_lock\nfrom rotkehlchen.utils.misc import iso8601ts_to_timestamp, satoshis_to_btc\nfrom rotkehlchen.utils.serialization import rlk_jsonloads\n\nif TYPE_CHECKING:\n from rotkehlchen.db.dbhandler import DBHandler\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\nBITMEX_PRIVATE_ENDPOINTS = (\n 'user',\n 'user/wallet',\n 'user/walletHistory',\n)\n\n\ndef bitmex_to_world(symbol: str) -> Asset:\n if symbol == 'XBt':\n return A_BTC\n return Asset(symbol)\n\n\ndef trade_from_bitmex(bitmex_trade: Dict) -> MarginPosition:\n \"\"\"Turn a bitmex trade returned from bitmex trade history to our common trade\n history format. 
This only returns margin positions as bitmex only deals in\n margin trading\"\"\"\n close_time = iso8601ts_to_timestamp(bitmex_trade['transactTime'])\n profit_loss = AssetAmount(satoshis_to_btc(FVal(bitmex_trade['amount'])))\n currency = bitmex_to_world(bitmex_trade['currency'])\n fee = deserialize_fee(bitmex_trade['fee'])\n notes = bitmex_trade['address']\n assert currency == A_BTC, 'Bitmex trade should only deal in BTC'\n\n log.debug(\n 'Processing Bitmex Trade',\n sensitive_log=True,\n timestamp=close_time,\n profit_loss=profit_loss,\n currency=currency,\n fee=fee,\n notes=notes,\n )\n\n return MarginPosition(\n location=Location.BITMEX,\n open_time=None,\n close_time=close_time,\n profit_loss=profit_loss,\n pl_currency=currency,\n fee=fee,\n fee_currency=A_BTC,\n notes=notes,\n link=str(bitmex_trade['transactID']),\n )\n\n\nclass Bitmex(ExchangeInterface):\n def __init__(\n self,\n api_key: ApiKey,\n secret: ApiSecret,\n database: 'DBHandler',\n msg_aggregator: MessagesAggregator,\n ):\n super(Bitmex, self).__init__('bitmex', api_key, secret, database)\n self.uri = 'https://bitmex.com'\n self.session.headers.update({'api-key': api_key})\n self.msg_aggregator = msg_aggregator\n\n def first_connection(self) -> None:\n self.first_connection_made = True\n\n def validate_api_key(self) -> Tuple[bool, str]:\n try:\n self._api_query('get', 'user')\n except RemoteError as e:\n error = str(e)\n if 'Invalid API Key' in error:\n return False, 'Provided API Key is invalid'\n if 'Signature not valid' in error:\n return False, 'Provided API Secret is invalid'\n # else reraise\n raise\n return True, ''\n\n def _generate_signature(self, verb: str, path: str, expires: int, data: str = '') -> str:\n signature = hmac.new(\n self.secret,\n (verb.upper() + path + str(expires) + data).encode(),\n hashlib.sha256,\n ).hexdigest()\n self.session.headers.update({\n 'api-signature': signature,\n })\n return signature\n\n def _api_query(\n self,\n verb: str,\n path: str,\n options: Optional[Dict] = None,\n ) -> Union[List, Dict]:\n \"\"\"\n Queries Bitmex with the given verb for the given path and options\n \"\"\"\n assert verb in ('get', 'post', 'push'), (\n 'Given verb {} is not a valid HTTP verb'.format(verb)\n )\n\n # 20 seconds expiration\n expires = int(time.time()) + 20\n\n request_path_no_args = '/api/v1/' + path\n\n data = ''\n if not options:\n request_path = request_path_no_args\n signature_path = request_path\n else:\n request_path = request_path_no_args + '?' 
+ urlencode(options)\n signature_path = request_path_no_args if path == 'user/wallet' else request_path\n\n if path in BITMEX_PRIVATE_ENDPOINTS:\n self._generate_signature(\n verb=verb,\n path=signature_path,\n expires=expires,\n data=data,\n )\n\n self.session.headers.update({\n 'api-expires': str(expires),\n })\n if data != '':\n self.session.headers.update({\n 'Content-Type': 'application/json',\n 'Content-Length': str(len(data)),\n })\n\n request_url = self.uri + request_path\n log.debug('Bitmex API Query', verb=verb, request_url=request_url)\n try:\n response = getattr(self.session, verb)(request_url, data=data)\n except requests.exceptions.RequestException as e:\n raise RemoteError(f'Bitmex API request failed due to {str(e)}') from e\n\n if response.status_code not in (200, 401):\n raise RemoteError(\n 'Bitmex api request for {} failed with HTTP status code {}'.format(\n response.url,\n response.status_code,\n ),\n )\n\n try:\n json_ret = rlk_jsonloads(response.text)\n except JSONDecodeError as e:\n raise RemoteError('Bitmex returned invalid JSON response') from e\n\n if isinstance(json_ret, dict) and 'error' in json_ret:\n raise RemoteError(json_ret['error']['message'])\n\n return json_ret\n\n def _api_query_dict(\n self,\n verb: str,\n path: str,\n options: Optional[Dict] = None,\n ) -> Dict:\n result = self._api_query(verb, path, options)\n assert isinstance(result, Dict)\n return result\n\n def _api_query_list(\n self,\n verb: str,\n path: str,\n options: Optional[Dict] = None,\n ) -> List:\n result = self._api_query(verb, path, options)\n assert isinstance(result, List)\n return result\n\n @protect_with_lock()\n @cache_response_timewise()\n def query_balances(self) -> Tuple[Optional[dict], str]:\n\n try:\n resp = self._api_query_dict('get', 'user/wallet', {'currency': 'XBt'})\n # Bitmex shows only BTC balance\n returned_balances = {}\n usd_price = Inquirer().find_usd_price(A_BTC)\n except RemoteError as e:\n msg = f'Bitmex API request failed due to: {str(e)}'\n log.error(msg)\n return None, msg\n\n # result is in satoshis\n amount = satoshis_to_btc(FVal(resp['amount']))\n usd_value = amount * usd_price\n\n returned_balances[A_BTC] = {\n 'amount': amount,\n 'usd_value': usd_value,\n }\n log.debug(\n 'Bitmex balance query result',\n sensitive_log=True,\n currency='BTC',\n amount=amount,\n usd_value=usd_value,\n )\n\n return returned_balances, ''\n\n def query_online_margin_history(\n self,\n start_ts: Timestamp,\n end_ts: Timestamp,\n ) -> List[MarginPosition]:\n\n # We know user/walletHistory returns a list\n resp = self._api_query_list('get', 'user/walletHistory')\n log.debug('Bitmex trade history query', results_num=len(resp))\n\n margin_trades = []\n for tx in resp:\n if tx['timestamp'] is None:\n timestamp = None\n else:\n timestamp = iso8601ts_to_timestamp(tx['timestamp'])\n if tx['transactType'] != 'RealisedPNL':\n continue\n if timestamp and timestamp < start_ts:\n continue\n if timestamp and timestamp > end_ts:\n continue\n margin_trades.append(trade_from_bitmex(tx))\n\n return margin_trades\n\n def query_online_deposits_withdrawals(\n self,\n start_ts: Timestamp,\n end_ts: Timestamp,\n ) -> List:\n resp = self._api_query_list('get', 'user/walletHistory')\n\n log.debug('Bitmex deposit/withdrawals query', results_num=len(resp))\n\n movements = []\n for movement in resp:\n try:\n transaction_type = movement['transactType']\n if transaction_type == 'Deposit':\n transaction_type = AssetMovementCategory.DEPOSIT\n elif transaction_type == 'Withdrawal':\n transaction_type = 
AssetMovementCategory.WITHDRAWAL\n else:\n continue\n\n timestamp = iso8601ts_to_timestamp(movement['timestamp'])\n if timestamp < start_ts:\n continue\n if timestamp > end_ts:\n continue\n\n asset = bitmex_to_world(movement['currency'])\n amount = deserialize_asset_amount_force_positive(movement['amount'])\n fee = deserialize_fee(movement['fee'])\n\n if asset == A_BTC:\n # bitmex stores amounts in satoshis\n amount = AssetAmount(satoshis_to_btc(amount))\n fee = Fee(satoshis_to_btc(fee))\n\n movements.append(AssetMovement(\n location=Location.BITMEX,\n category=transaction_type,\n address=deserialize_asset_movement_address(movement, 'address', asset),\n transaction_id=get_key_if_has_val(movement, 'tx'),\n timestamp=timestamp,\n asset=asset,\n amount=amount,\n fee_asset=asset,\n fee=fee,\n link=str(movement['transactID']),\n ))\n except UnknownAsset as e:\n self.msg_aggregator.add_warning(\n f'Found bitmex deposit/withdrawal with unknown asset '\n f'{e.asset_name}. Ignoring it.',\n )\n continue\n except (DeserializationError, KeyError) as e:\n msg = str(e)\n if isinstance(e, KeyError):\n msg = f'Missing key entry for {msg}.'\n self.msg_aggregator.add_error(\n 'Unexpected data encountered during deserialization of a bitmex '\n 'asset movement. Check logs for details and open a bug report.',\n )\n log.error(\n f'Unexpected data encountered during deserialization of bitmex '\n f'asset_movement {movement}. Error was: {msg}',\n )\n continue\n return movements\n", "path": "rotkehlchen/exchanges/bitmex.py"}]}
| 3,883 | 182 |
gh_patches_debug_1010
|
rasdani/github-patches
|
git_diff
|
cornellius-gp__gpytorch-871
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] SumBatchLazyTensor size is inconsistent with indices
# 🐛 Bug
I want to apply KroneckerProductLazyTensor over a batch of `x` LazyTensors, then wrap the result in SumBatchLazyTensor, then retrieve a specific row, and finally evaluate. The code works if I first evaluate `sum_a` and then retrieve the row (which is inefficient), but it raises a `size is inconsistent with indices` error if I retrieve the row first and then evaluate.
Interestingly, if I use the same number for dimensions -1 and -2, there is no error.
## To reproduce
**Code snippet to reproduce**
```python
import torch
import gpytorch
x = 3
a = torch.rand((x, 5, 2, 3))
lazy_a = gpytorch.lazy.NonLazyTensor(a)
assert lazy_a.shape == torch.Size([3, 5, 2, 3])
prod_a = gpytorch.lazy.KroneckerProductLazyTensor(*lazy_a)
assert prod_a.shape == torch.Size([5, 8, 27])
sum_a = gpytorch.lazy.SumBatchLazyTensor(prod_a)
assert sum_a.shape == torch.Size([8, 27])
assert sum_a.evaluate()[0].shape == torch.Size([27])
assert sum_a[0].evaluate().shape == torch.Size([27]) # gives an error here
```
**Stack trace/error message**
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-251-7cce10ce99d3> in <module>()
8 assert sum_a.shape == torch.Size([8, 27])
9 assert sum_a.evaluate()[0].shape == torch.Size([27])
---> 10 assert sum_a[0].evaluate().shape == torch.Size([27])
11
9 frames
/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/lazy_tensor.py in __getitem__(self, index)
1703 # with the appropriate shape
1704 if (squeeze_row or squeeze_col or row_col_are_absorbed):
-> 1705 res = delazify(res)
1706 if squeeze_row:
1707 res = res.squeeze(-2)
/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/lazy_tensor.py in delazify(obj)
1753 return obj
1754 elif isinstance(obj, LazyTensor):
-> 1755 return obj.evaluate()
1756 else:
1757 raise TypeError("object of class {} cannot be made into a Tensor".format(obj.__class__.__name__))
/usr/local/lib/python3.6/dist-packages/gpytorch/utils/memoize.py in g(self, *args, **kwargs)
32 cache_name = name if name is not None else method
33 if not is_in_cache(self, cache_name):
---> 34 add_to_cache(self, cache_name, method(self, *args, **kwargs))
35 return get_from_cache(self, cache_name)
36
/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/lazy_tensor.py in evaluate(self)
858 eye = torch.eye(num_rows, dtype=self.dtype, device=self.device)
859 eye = eye.expand(*self.batch_shape, num_rows, num_rows)
--> 860 res = self.transpose(-1, -2).matmul(eye).transpose(-1, -2).contiguous()
861 else:
862 eye = torch.eye(num_cols, dtype=self.dtype, device=self.device)
/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/lazy_tensor.py in matmul(self, other)
1093
1094 func = Matmul()
-> 1095 return func.apply(self.representation_tree(), other, *self.representation())
1096
1097 @property
/usr/local/lib/python3.6/dist-packages/gpytorch/functions/_matmul.py in forward(ctx, representation_tree, rhs, *matrix_args)
18
19 lazy_tsr = ctx.representation_tree(*matrix_args)
---> 20 res = lazy_tsr._matmul(rhs)
21
22 to_save = [orig_rhs] + list(matrix_args)
/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/block_lazy_tensor.py in _matmul(self, rhs)
64
65 rhs = self._add_batch_dim(rhs)
---> 66 res = self.base_lazy_tensor._matmul(rhs)
67 res = self._remove_batch_dim(res)
68
/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/interpolated_lazy_tensor.py in _matmul(self, rhs)
157 def _matmul(self, rhs):
158 # Get sparse tensor representations of left/right interp matrices
--> 159 left_interp_t = self._sparse_left_interp_t(self.left_interp_indices, self.left_interp_values)
160 right_interp_t = self._sparse_right_interp_t(self.right_interp_indices, self.right_interp_values)
161
/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/interpolated_lazy_tensor.py in _sparse_left_interp_t(self, left_interp_indices_tensor, left_interp_values_tensor)
309
310 left_interp_t = sparse.make_sparse_from_indices_and_values(
--> 311 left_interp_indices_tensor, left_interp_values_tensor, self.base_lazy_tensor.size()[-1]
312 )
313 self._left_interp_indices_memo = left_interp_indices_tensor
/usr/local/lib/python3.6/dist-packages/gpytorch/utils/sparse.py in make_sparse_from_indices_and_values(interp_indices, interp_values, num_rows)
59 else:
60 cls = getattr(torch.sparse, type_name)
---> 61 res = cls(index_tensor, value_tensor, interp_size)
62
63 # Wrap things as a variable, if necessary
RuntimeError: size is inconsistent with indices: for dim 1, size is 8 but found index 26
```
## Expected Behavior
Expected to pass the tests.
## System information
**Please complete the following information:**
- GPyTorch Version 0.3.5
- PyTorch Version 1.2.0
- Ubuntu 18.04.3 LTS
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpytorch/lazy/sum_batch_lazy_tensor.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import torch
4 from .block_lazy_tensor import BlockLazyTensor
5 from ..utils.broadcasting import _pad_with_singletons
6 from ..utils.getitem import _noop_index
7
8
9 class SumBatchLazyTensor(BlockLazyTensor):
10 """
11 Represents a lazy tensor that is actually the sum of several lazy tensors blocks.
12 The :attr:`block_dim` attribute specifies which dimension of the base LazyTensor
13 specifies the blocks.
14 For example, (with `block_dim=-3` a `k x n x n` tensor represents `k` `n x n` blocks (a `n x n` matrix).
15 A `b x k x n x n` tensor represents `k` `b x n x n` blocks (a `b x n x n` batch matrix).
16
17 Args:
18 :attr:`base_lazy_tensor` (LazyTensor):
19 A `k x n x n` LazyTensor, or a `b x k x n x n` LazyTensor.
20 :attr:`block_dim` (int):
21 The dimension that specifies the blocks.
22 """
23 def _add_batch_dim(self, other):
24 shape = list(other.shape)
25 expand_shape = list(other.shape)
26 shape.insert(-2, 1)
27 expand_shape.insert(-2, self.base_lazy_tensor.size(-3))
28 other = other.reshape(*shape).expand(*expand_shape)
29 return other
30
31 def _get_indices(self, row_index, col_index, *batch_indices):
32 # Create an extra index for the summed dimension
33 sum_index = torch.arange(0, self.base_lazy_tensor.size(-3), device=self.device)
34 sum_index = _pad_with_singletons(sum_index, row_index.dim(), 0)
35 row_index = row_index.unsqueeze(-1)
36 col_index = col_index.unsqueeze(-1)
37 batch_indices = [index.unsqueeze(-1) for index in batch_indices]
38
39 res = self.base_lazy_tensor._get_indices(row_index, col_index, *batch_indices, sum_index)
40 return res.sum(-1)
41
42 def _getitem(self, row_index, col_index, *batch_indices):
43 res = self.base_lazy_tensor._getitem(row_index, col_index, *batch_indices, _noop_index)
44 return self.__class__(res, **self._kwargs)
45
46 def _remove_batch_dim(self, other):
47 return other.sum(-3)
48
49 def _size(self):
50 shape = list(self.base_lazy_tensor.shape)
51 del shape[-3]
52 return torch.Size(shape)
53
54 def diag(self):
55 diag = self.base_lazy_tensor.diag().sum(-2)
56 return diag
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gpytorch/lazy/sum_batch_lazy_tensor.py b/gpytorch/lazy/sum_batch_lazy_tensor.py
--- a/gpytorch/lazy/sum_batch_lazy_tensor.py
+++ b/gpytorch/lazy/sum_batch_lazy_tensor.py
@@ -54,3 +54,6 @@
def diag(self):
diag = self.base_lazy_tensor.diag().sum(-2)
return diag
+
+ def evaluate(self):
+ return self.base_lazy_tensor.evaluate().sum(dim=-3) # BlockLazyTensors always use dim3 for the block_dim
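The override matters because the generic `LazyTensor.evaluate()` goes through a matmul against an identity matrix, which — after `__getitem__` has sliced the tensor — lands in the interpolated-tensor code path that produced the size error in the traceback above. Summing the evaluated base tensor over the block dimension sidesteps that path. A sketch of the behaviour expected with this patch applied, reusing the reproduction from the issue (shape checks only; the exact return type of the indexed result depends on the gpytorch version):

```python
import torch
import gpytorch

a = torch.rand((3, 5, 2, 3))
lazy_a = gpytorch.lazy.NonLazyTensor(a)
prod_a = gpytorch.lazy.KroneckerProductLazyTensor(*lazy_a)  # 5 x 8 x 27
sum_a = gpytorch.lazy.SumBatchLazyTensor(prod_a)            # 8 x 27

eager_row = sum_a.evaluate()[0]  # evaluate first, then index: worked all along
lazy_row = sum_a[0]              # index first: previously raised "size is inconsistent with indices"

assert eager_row.shape == torch.Size([27])
assert lazy_row.shape == torch.Size([27])
```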
|
{"golden_diff": "diff --git a/gpytorch/lazy/sum_batch_lazy_tensor.py b/gpytorch/lazy/sum_batch_lazy_tensor.py\n--- a/gpytorch/lazy/sum_batch_lazy_tensor.py\n+++ b/gpytorch/lazy/sum_batch_lazy_tensor.py\n@@ -54,3 +54,6 @@\n def diag(self):\n diag = self.base_lazy_tensor.diag().sum(-2)\n return diag\n+\n+ def evaluate(self):\n+ return self.base_lazy_tensor.evaluate().sum(dim=-3) # BlockLazyTensors always use dim3 for the block_dim\n", "issue": "[Bug] SumBatchLazyTensor size is inconsistent with indices\n# \ud83d\udc1b Bug\r\n\r\nI want to do a KroneckerProductLazyTensor on a batch of lazyTensor `x` times, then SumBatchLazyTensor and then get a specific row and finally evaluate. The code works if I first do an evaluation on the `sum_a` then retrieve the row (which is inefficient) but gives `size is inconsistent with indices` error if I retrieve the row first and then wants to evaluate.\r\n\r\nInterestingly, If I use the same number for the dimension -1 and -2, there would be no error then.\r\n\r\n## To reproduce\r\n\r\n** Code snippet to reproduce **\r\n```python\r\nimport gpytorch\r\nx = 3\r\na = torch.rand((x, 5, 2, 3))\r\nlazy_a = gpytorch.lazy.NonLazyTensor(a)\r\nassert lazy_a.shape == torch.Size([3, 5, 2, 3])\r\nprod_a = gpytorch.lazy.KroneckerProductLazyTensor(*lazy_a)\r\nassert prod_a.shape == torch.Size([5, 8, 27])\r\nsum_a = gpytorch.lazy.SumBatchLazyTensor(prod_a)\r\nassert sum_a.shape == torch.Size([8, 27])\r\nassert sum_a.evaluate()[0].shape == torch.Size([27])\r\nassert sum_a[0].evaluate().shape == torch.Size([27]) # gives error in here\r\n```\r\n\r\n** Stack trace/error message **\r\n```\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-251-7cce10ce99d3> in <module>()\r\n 8 assert sum_a.shape == torch.Size([8, 27])\r\n 9 assert sum_a.evaluate()[0].shape == torch.Size([27])\r\n---> 10 assert sum_a[0].evaluate().shape == torch.Size([27])\r\n 11 \r\n\r\n9 frames\r\n/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/lazy_tensor.py in __getitem__(self, index)\r\n 1703 # with the appropriate shape\r\n 1704 if (squeeze_row or squeeze_col or row_col_are_absorbed):\r\n-> 1705 res = delazify(res)\r\n 1706 if squeeze_row:\r\n 1707 res = res.squeeze(-2)\r\n\r\n/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/lazy_tensor.py in delazify(obj)\r\n 1753 return obj\r\n 1754 elif isinstance(obj, LazyTensor):\r\n-> 1755 return obj.evaluate()\r\n 1756 else:\r\n 1757 raise TypeError(\"object of class {} cannot be made into a Tensor\".format(obj.__class__.__name__))\r\n\r\n/usr/local/lib/python3.6/dist-packages/gpytorch/utils/memoize.py in g(self, *args, **kwargs)\r\n 32 cache_name = name if name is not None else method\r\n 33 if not is_in_cache(self, cache_name):\r\n---> 34 add_to_cache(self, cache_name, method(self, *args, **kwargs))\r\n 35 return get_from_cache(self, cache_name)\r\n 36 \r\n\r\n/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/lazy_tensor.py in evaluate(self)\r\n 858 eye = torch.eye(num_rows, dtype=self.dtype, device=self.device)\r\n 859 eye = eye.expand(*self.batch_shape, num_rows, num_rows)\r\n--> 860 res = self.transpose(-1, -2).matmul(eye).transpose(-1, -2).contiguous()\r\n 861 else:\r\n 862 eye = torch.eye(num_cols, dtype=self.dtype, device=self.device)\r\n\r\n/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/lazy_tensor.py in matmul(self, other)\r\n 1093 \r\n 1094 func = Matmul()\r\n-> 1095 return func.apply(self.representation_tree(), other, *self.representation())\r\n 
1096 \r\n 1097 @property\r\n\r\n/usr/local/lib/python3.6/dist-packages/gpytorch/functions/_matmul.py in forward(ctx, representation_tree, rhs, *matrix_args)\r\n 18 \r\n 19 lazy_tsr = ctx.representation_tree(*matrix_args)\r\n---> 20 res = lazy_tsr._matmul(rhs)\r\n 21 \r\n 22 to_save = [orig_rhs] + list(matrix_args)\r\n\r\n/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/block_lazy_tensor.py in _matmul(self, rhs)\r\n 64 \r\n 65 rhs = self._add_batch_dim(rhs)\r\n---> 66 res = self.base_lazy_tensor._matmul(rhs)\r\n 67 res = self._remove_batch_dim(res)\r\n 68 \r\n\r\n/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/interpolated_lazy_tensor.py in _matmul(self, rhs)\r\n 157 def _matmul(self, rhs):\r\n 158 # Get sparse tensor representations of left/right interp matrices\r\n--> 159 left_interp_t = self._sparse_left_interp_t(self.left_interp_indices, self.left_interp_values)\r\n 160 right_interp_t = self._sparse_right_interp_t(self.right_interp_indices, self.right_interp_values)\r\n 161 \r\n\r\n/usr/local/lib/python3.6/dist-packages/gpytorch/lazy/interpolated_lazy_tensor.py in _sparse_left_interp_t(self, left_interp_indices_tensor, left_interp_values_tensor)\r\n 309 \r\n 310 left_interp_t = sparse.make_sparse_from_indices_and_values(\r\n--> 311 left_interp_indices_tensor, left_interp_values_tensor, self.base_lazy_tensor.size()[-1]\r\n 312 )\r\n 313 self._left_interp_indices_memo = left_interp_indices_tensor\r\n\r\n/usr/local/lib/python3.6/dist-packages/gpytorch/utils/sparse.py in make_sparse_from_indices_and_values(interp_indices, interp_values, num_rows)\r\n 59 else:\r\n 60 cls = getattr(torch.sparse, type_name)\r\n---> 61 res = cls(index_tensor, value_tensor, interp_size)\r\n 62 \r\n 63 # Wrap things as a variable, if necessary\r\n\r\nRuntimeError: size is inconsistent with indices: for dim 1, size is 8 but found index 26\r\n```\r\n\r\n## Expected Behavior\r\n\r\nExpected to pass the tests.\r\n\r\n## System information\r\n\r\n**Please complete the following information:**\r\n- GPyTorch Version 0.3.5\r\n- PyTorch Version 1.2.0\r\n- Ubuntu 18.04.3 LTS\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport torch\nfrom .block_lazy_tensor import BlockLazyTensor\nfrom ..utils.broadcasting import _pad_with_singletons\nfrom ..utils.getitem import _noop_index\n\n\nclass SumBatchLazyTensor(BlockLazyTensor):\n \"\"\"\n Represents a lazy tensor that is actually the sum of several lazy tensors blocks.\n The :attr:`block_dim` attribute specifies which dimension of the base LazyTensor\n specifies the blocks.\n For example, (with `block_dim=-3` a `k x n x n` tensor represents `k` `n x n` blocks (a `n x n` matrix).\n A `b x k x n x n` tensor represents `k` `b x n x n` blocks (a `b x n x n` batch matrix).\n\n Args:\n :attr:`base_lazy_tensor` (LazyTensor):\n A `k x n x n` LazyTensor, or a `b x k x n x n` LazyTensor.\n :attr:`block_dim` (int):\n The dimension that specifies the blocks.\n \"\"\"\n def _add_batch_dim(self, other):\n shape = list(other.shape)\n expand_shape = list(other.shape)\n shape.insert(-2, 1)\n expand_shape.insert(-2, self.base_lazy_tensor.size(-3))\n other = other.reshape(*shape).expand(*expand_shape)\n return other\n\n def _get_indices(self, row_index, col_index, *batch_indices):\n # Create an extra index for the summed dimension\n sum_index = torch.arange(0, self.base_lazy_tensor.size(-3), device=self.device)\n sum_index = _pad_with_singletons(sum_index, row_index.dim(), 0)\n row_index = row_index.unsqueeze(-1)\n col_index = col_index.unsqueeze(-1)\n batch_indices = 
[index.unsqueeze(-1) for index in batch_indices]\n\n res = self.base_lazy_tensor._get_indices(row_index, col_index, *batch_indices, sum_index)\n return res.sum(-1)\n\n def _getitem(self, row_index, col_index, *batch_indices):\n res = self.base_lazy_tensor._getitem(row_index, col_index, *batch_indices, _noop_index)\n return self.__class__(res, **self._kwargs)\n\n def _remove_batch_dim(self, other):\n return other.sum(-3)\n\n def _size(self):\n shape = list(self.base_lazy_tensor.shape)\n del shape[-3]\n return torch.Size(shape)\n\n def diag(self):\n diag = self.base_lazy_tensor.diag().sum(-2)\n return diag\n", "path": "gpytorch/lazy/sum_batch_lazy_tensor.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport torch\nfrom .block_lazy_tensor import BlockLazyTensor\nfrom ..utils.broadcasting import _pad_with_singletons\nfrom ..utils.getitem import _noop_index\n\n\nclass SumBatchLazyTensor(BlockLazyTensor):\n \"\"\"\n Represents a lazy tensor that is actually the sum of several lazy tensors blocks.\n The :attr:`block_dim` attribute specifies which dimension of the base LazyTensor\n specifies the blocks.\n For example, (with `block_dim=-3` a `k x n x n` tensor represents `k` `n x n` blocks (a `n x n` matrix).\n A `b x k x n x n` tensor represents `k` `b x n x n` blocks (a `b x n x n` batch matrix).\n\n Args:\n :attr:`base_lazy_tensor` (LazyTensor):\n A `k x n x n` LazyTensor, or a `b x k x n x n` LazyTensor.\n :attr:`block_dim` (int):\n The dimension that specifies the blocks.\n \"\"\"\n def _add_batch_dim(self, other):\n shape = list(other.shape)\n expand_shape = list(other.shape)\n shape.insert(-2, 1)\n expand_shape.insert(-2, self.base_lazy_tensor.size(-3))\n other = other.reshape(*shape).expand(*expand_shape)\n return other\n\n def _get_indices(self, row_index, col_index, *batch_indices):\n # Create an extra index for the summed dimension\n sum_index = torch.arange(0, self.base_lazy_tensor.size(-3), device=self.device)\n sum_index = _pad_with_singletons(sum_index, row_index.dim(), 0)\n row_index = row_index.unsqueeze(-1)\n col_index = col_index.unsqueeze(-1)\n batch_indices = [index.unsqueeze(-1) for index in batch_indices]\n\n res = self.base_lazy_tensor._get_indices(row_index, col_index, *batch_indices, sum_index)\n return res.sum(-1)\n\n def _getitem(self, row_index, col_index, *batch_indices):\n res = self.base_lazy_tensor._getitem(row_index, col_index, *batch_indices, _noop_index)\n return self.__class__(res, **self._kwargs)\n\n def _remove_batch_dim(self, other):\n return other.sum(-3)\n\n def _size(self):\n shape = list(self.base_lazy_tensor.shape)\n del shape[-3]\n return torch.Size(shape)\n\n def diag(self):\n diag = self.base_lazy_tensor.diag().sum(-2)\n return diag\n\n def evaluate(self):\n return self.base_lazy_tensor.evaluate().sum(dim=-3) # BlockLazyTensors always use dim3 for the block_dim\n", "path": "gpytorch/lazy/sum_batch_lazy_tensor.py"}]}
| 2,424 | 126 |
gh_patches_debug_40380
|
rasdani/github-patches
|
git_diff
|
aio-libs__aiohttp-7198
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
StreamResponse write() does not raise an exception if client disconnects
### Describe the bug
Hi!
I have an application that uses a `StreamResponse` to continuously write data to the client by calling the `write()` method. The server stops sending data when the client disconnects. In version 3.8.1 the `write()` method raises an exception when the client disconnects, but in later versions it does not. The commit that introduced the bug is 20c93656d18b50c17c85b7a1e234276fb1444f27.
I've tested on Python 3.7, 3.8 and 3.10.
### To Reproduce
Add later if really needed.
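Since a reproduction is deferred above, here is a minimal sketch of the setup being described — a handler that streams chunks until `write()` signals the disconnect; the route name and interval are arbitrary:

```python
import asyncio

from aiohttp import web


async def stream(request: web.Request) -> web.StreamResponse:
    resp = web.StreamResponse()
    await resp.prepare(request)
    try:
        while True:
            # On 3.8.1 this raises ConnectionResetError once the client is gone;
            # per this report, on 3.8.3 it keeps "succeeding" instead.
            await resp.write(b"tick\n")
            await asyncio.sleep(1)
    except ConnectionResetError:
        print("client disconnected, stopping stream")
    return resp


app = web.Application()
app.add_routes([web.get("/stream", stream)])

if __name__ == "__main__":
    web.run_app(app)
```

Connect with `curl -N http://localhost:8080/stream`, then kill curl and watch whether the loop stops.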
### Expected behavior
An exception is raised when the client disconnects.
### Logs/tracebacks
```python-traceback
-
```
### Python Version
```console
$ python --version
Python 3.10.7
```
### aiohttp Version
```console
$ python -m pip show aiohttp
Name: aiohttp
Version: 3.8.3
...
```
### multidict Version
```console
-
```
### yarl Version
```console
-
```
### OS
Linux
### Related component
Server
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aiohttp/base_protocol.py`
Content:
```
1 import asyncio
2 from typing import Optional, cast
3
4 from .tcp_helpers import tcp_nodelay
5
6
7 class BaseProtocol(asyncio.Protocol):
8 __slots__ = (
9 "_loop",
10 "_paused",
11 "_drain_waiter",
12 "_connection_lost",
13 "_reading_paused",
14 "transport",
15 )
16
17 def __init__(self, loop: asyncio.AbstractEventLoop) -> None:
18 self._loop: asyncio.AbstractEventLoop = loop
19 self._paused = False
20 self._drain_waiter: Optional[asyncio.Future[None]] = None
21 self._connection_lost = False
22 self._reading_paused = False
23
24 self.transport: Optional[asyncio.Transport] = None
25
26 def pause_writing(self) -> None:
27 assert not self._paused
28 self._paused = True
29
30 def resume_writing(self) -> None:
31 assert self._paused
32 self._paused = False
33
34 waiter = self._drain_waiter
35 if waiter is not None:
36 self._drain_waiter = None
37 if not waiter.done():
38 waiter.set_result(None)
39
40 def pause_reading(self) -> None:
41 if not self._reading_paused and self.transport is not None:
42 try:
43 self.transport.pause_reading()
44 except (AttributeError, NotImplementedError, RuntimeError):
45 pass
46 self._reading_paused = True
47
48 def resume_reading(self) -> None:
49 if self._reading_paused and self.transport is not None:
50 try:
51 self.transport.resume_reading()
52 except (AttributeError, NotImplementedError, RuntimeError):
53 pass
54 self._reading_paused = False
55
56 def connection_made(self, transport: asyncio.BaseTransport) -> None:
57 tr = cast(asyncio.Transport, transport)
58 tcp_nodelay(tr, True)
59 self.transport = tr
60
61 def connection_lost(self, exc: Optional[BaseException]) -> None:
62 self._connection_lost = True
63 # Wake up the writer if currently paused.
64 self.transport = None
65 if not self._paused:
66 return
67 waiter = self._drain_waiter
68 if waiter is None:
69 return
70 self._drain_waiter = None
71 if waiter.done():
72 return
73 if exc is None:
74 waiter.set_result(None)
75 else:
76 waiter.set_exception(exc)
77
78 async def _drain_helper(self) -> None:
79 if self._connection_lost:
80 raise ConnectionResetError("Connection lost")
81 if not self._paused:
82 return
83 waiter = self._drain_waiter
84 if waiter is None:
85 waiter = self._loop.create_future()
86 self._drain_waiter = waiter
87 await asyncio.shield(waiter)
88
```
Path: `aiohttp/http_writer.py`
Content:
```
1 """Http related parsers and protocol."""
2
3 import asyncio
4 import zlib
5 from typing import Any, Awaitable, Callable, NamedTuple, Optional, Union # noqa
6
7 from multidict import CIMultiDict
8
9 from .abc import AbstractStreamWriter
10 from .base_protocol import BaseProtocol
11 from .helpers import NO_EXTENSIONS
12
13 __all__ = ("StreamWriter", "HttpVersion", "HttpVersion10", "HttpVersion11")
14
15
16 class HttpVersion(NamedTuple):
17 major: int
18 minor: int
19
20
21 HttpVersion10 = HttpVersion(1, 0)
22 HttpVersion11 = HttpVersion(1, 1)
23
24
25 _T_OnChunkSent = Optional[Callable[[bytes], Awaitable[None]]]
26 _T_OnHeadersSent = Optional[Callable[["CIMultiDict[str]"], Awaitable[None]]]
27
28
29 class StreamWriter(AbstractStreamWriter):
30 def __init__(
31 self,
32 protocol: BaseProtocol,
33 loop: asyncio.AbstractEventLoop,
34 on_chunk_sent: _T_OnChunkSent = None,
35 on_headers_sent: _T_OnHeadersSent = None,
36 ) -> None:
37 self._protocol = protocol
38 self._transport = protocol.transport
39
40 self.loop = loop
41 self.length = None
42 self.chunked = False
43 self.buffer_size = 0
44 self.output_size = 0
45
46 self._eof = False
47 self._compress: Any = None
48 self._drain_waiter = None
49
50 self._on_chunk_sent: _T_OnChunkSent = on_chunk_sent
51 self._on_headers_sent: _T_OnHeadersSent = on_headers_sent
52
53 @property
54 def transport(self) -> Optional[asyncio.Transport]:
55 return self._transport
56
57 @property
58 def protocol(self) -> BaseProtocol:
59 return self._protocol
60
61 def enable_chunking(self) -> None:
62 self.chunked = True
63
64 def enable_compression(
65 self, encoding: str = "deflate", strategy: int = zlib.Z_DEFAULT_STRATEGY
66 ) -> None:
67 zlib_mode = 16 + zlib.MAX_WBITS if encoding == "gzip" else zlib.MAX_WBITS
68 self._compress = zlib.compressobj(wbits=zlib_mode, strategy=strategy)
69
70 def _write(self, chunk: bytes) -> None:
71 size = len(chunk)
72 self.buffer_size += size
73 self.output_size += size
74
75 if self._transport is None or self._transport.is_closing():
76 raise ConnectionResetError("Cannot write to closing transport")
77 self._transport.write(chunk)
78
79 async def write(
80 self, chunk: bytes, *, drain: bool = True, LIMIT: int = 0x10000
81 ) -> None:
82 """Writes chunk of data to a stream.
83
84 write_eof() indicates end of stream.
85 writer can't be used after write_eof() method being called.
86 write() return drain future.
87 """
88 if self._on_chunk_sent is not None:
89 await self._on_chunk_sent(chunk)
90
91 if isinstance(chunk, memoryview):
92 if chunk.nbytes != len(chunk):
93 # just reshape it
94 chunk = chunk.cast("c")
95
96 if self._compress is not None:
97 chunk = self._compress.compress(chunk)
98 if not chunk:
99 return
100
101 if self.length is not None:
102 chunk_len = len(chunk)
103 if self.length >= chunk_len:
104 self.length = self.length - chunk_len
105 else:
106 chunk = chunk[: self.length]
107 self.length = 0
108 if not chunk:
109 return
110
111 if chunk:
112 if self.chunked:
113 chunk_len_pre = ("%x\r\n" % len(chunk)).encode("ascii")
114 chunk = chunk_len_pre + chunk + b"\r\n"
115
116 self._write(chunk)
117
118 if self.buffer_size > LIMIT and drain:
119 self.buffer_size = 0
120 await self.drain()
121
122 async def write_headers(
123 self, status_line: str, headers: "CIMultiDict[str]"
124 ) -> None:
125 """Write request/response status and headers."""
126 if self._on_headers_sent is not None:
127 await self._on_headers_sent(headers)
128
129 # status + headers
130 buf = _serialize_headers(status_line, headers)
131 self._write(buf)
132
133 async def write_eof(self, chunk: bytes = b"") -> None:
134 if self._eof:
135 return
136
137 if chunk and self._on_chunk_sent is not None:
138 await self._on_chunk_sent(chunk)
139
140 if self._compress:
141 if chunk:
142 chunk = self._compress.compress(chunk)
143
144 chunk = chunk + self._compress.flush()
145 if chunk and self.chunked:
146 chunk_len = ("%x\r\n" % len(chunk)).encode("ascii")
147 chunk = chunk_len + chunk + b"\r\n0\r\n\r\n"
148 else:
149 if self.chunked:
150 if chunk:
151 chunk_len = ("%x\r\n" % len(chunk)).encode("ascii")
152 chunk = chunk_len + chunk + b"\r\n0\r\n\r\n"
153 else:
154 chunk = b"0\r\n\r\n"
155
156 if chunk:
157 self._write(chunk)
158
159 await self.drain()
160
161 self._eof = True
162 self._transport = None
163
164 async def drain(self) -> None:
165 """Flush the write buffer.
166
167 The intended use is to write
168
169 await w.write(data)
170 await w.drain()
171 """
172 if self._protocol.transport is not None:
173 await self._protocol._drain_helper()
174
175
176 def _safe_header(string: str) -> str:
177 if "\r" in string or "\n" in string:
178 raise ValueError(
179 "Newline or carriage return detected in headers. "
180 "Potential header injection attack."
181 )
182 return string
183
184
185 def _py_serialize_headers(status_line: str, headers: "CIMultiDict[str]") -> bytes:
186 headers_gen = (_safe_header(k) + ": " + _safe_header(v) for k, v in headers.items())
187 line = status_line + "\r\n" + "\r\n".join(headers_gen) + "\r\n\r\n"
188 return line.encode("utf-8")
189
190
191 _serialize_headers = _py_serialize_headers
192
193 try:
194 import aiohttp._http_writer as _http_writer # type: ignore[import]
195
196 _c_serialize_headers = _http_writer._serialize_headers
197 if not NO_EXTENSIONS:
198 _serialize_headers = _c_serialize_headers
199 except ImportError:
200 pass
201
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/aiohttp/base_protocol.py b/aiohttp/base_protocol.py
--- a/aiohttp/base_protocol.py
+++ b/aiohttp/base_protocol.py
@@ -18,11 +18,15 @@
self._loop: asyncio.AbstractEventLoop = loop
self._paused = False
self._drain_waiter: Optional[asyncio.Future[None]] = None
- self._connection_lost = False
self._reading_paused = False
self.transport: Optional[asyncio.Transport] = None
+ @property
+ def connected(self) -> bool:
+ """Return True if the connection is open."""
+ return self.transport is not None
+
def pause_writing(self) -> None:
assert not self._paused
self._paused = True
@@ -59,7 +63,6 @@
self.transport = tr
def connection_lost(self, exc: Optional[BaseException]) -> None:
- self._connection_lost = True
# Wake up the writer if currently paused.
self.transport = None
if not self._paused:
@@ -76,7 +79,7 @@
waiter.set_exception(exc)
async def _drain_helper(self) -> None:
- if self._connection_lost:
+ if not self.connected:
raise ConnectionResetError("Connection lost")
if not self._paused:
return
diff --git a/aiohttp/http_writer.py b/aiohttp/http_writer.py
--- a/aiohttp/http_writer.py
+++ b/aiohttp/http_writer.py
@@ -35,7 +35,6 @@
on_headers_sent: _T_OnHeadersSent = None,
) -> None:
self._protocol = protocol
- self._transport = protocol.transport
self.loop = loop
self.length = None
@@ -52,7 +51,7 @@
@property
def transport(self) -> Optional[asyncio.Transport]:
- return self._transport
+ return self._protocol.transport
@property
def protocol(self) -> BaseProtocol:
@@ -71,10 +70,10 @@
size = len(chunk)
self.buffer_size += size
self.output_size += size
-
- if self._transport is None or self._transport.is_closing():
+ transport = self.transport
+ if not self._protocol.connected or transport is None or transport.is_closing():
raise ConnectionResetError("Cannot write to closing transport")
- self._transport.write(chunk)
+ transport.write(chunk)
async def write(
self, chunk: bytes, *, drain: bool = True, LIMIT: int = 0x10000
@@ -159,7 +158,6 @@
await self.drain()
self._eof = True
- self._transport = None
async def drain(self) -> None:
"""Flush the write buffer.
|
{"golden_diff": "diff --git a/aiohttp/base_protocol.py b/aiohttp/base_protocol.py\n--- a/aiohttp/base_protocol.py\n+++ b/aiohttp/base_protocol.py\n@@ -18,11 +18,15 @@\n self._loop: asyncio.AbstractEventLoop = loop\n self._paused = False\n self._drain_waiter: Optional[asyncio.Future[None]] = None\n- self._connection_lost = False\n self._reading_paused = False\n \n self.transport: Optional[asyncio.Transport] = None\n \n+ @property\n+ def connected(self) -> bool:\n+ \"\"\"Return True if the connection is open.\"\"\"\n+ return self.transport is not None\n+\n def pause_writing(self) -> None:\n assert not self._paused\n self._paused = True\n@@ -59,7 +63,6 @@\n self.transport = tr\n \n def connection_lost(self, exc: Optional[BaseException]) -> None:\n- self._connection_lost = True\n # Wake up the writer if currently paused.\n self.transport = None\n if not self._paused:\n@@ -76,7 +79,7 @@\n waiter.set_exception(exc)\n \n async def _drain_helper(self) -> None:\n- if self._connection_lost:\n+ if not self.connected:\n raise ConnectionResetError(\"Connection lost\")\n if not self._paused:\n return\ndiff --git a/aiohttp/http_writer.py b/aiohttp/http_writer.py\n--- a/aiohttp/http_writer.py\n+++ b/aiohttp/http_writer.py\n@@ -35,7 +35,6 @@\n on_headers_sent: _T_OnHeadersSent = None,\n ) -> None:\n self._protocol = protocol\n- self._transport = protocol.transport\n \n self.loop = loop\n self.length = None\n@@ -52,7 +51,7 @@\n \n @property\n def transport(self) -> Optional[asyncio.Transport]:\n- return self._transport\n+ return self._protocol.transport\n \n @property\n def protocol(self) -> BaseProtocol:\n@@ -71,10 +70,10 @@\n size = len(chunk)\n self.buffer_size += size\n self.output_size += size\n-\n- if self._transport is None or self._transport.is_closing():\n+ transport = self.transport\n+ if not self._protocol.connected or transport is None or transport.is_closing():\n raise ConnectionResetError(\"Cannot write to closing transport\")\n- self._transport.write(chunk)\n+ transport.write(chunk)\n \n async def write(\n self, chunk: bytes, *, drain: bool = True, LIMIT: int = 0x10000\n@@ -159,7 +158,6 @@\n await self.drain()\n \n self._eof = True\n- self._transport = None\n \n async def drain(self) -> None:\n \"\"\"Flush the write buffer.\n", "issue": "StreamResponse write() does not raise an exception if client disconnects\n### Describe the bug\n\nHi!\r\n\r\nI have an application that uses a `StreamResponse` to continuously write data to the client by calling the `write()` method. The server stops sending data when the client disconnects. In version 3.8.1 the `write()` method raises an exception when the client disconnects, but in later version it does not. 
The commit that introduces the bug is 20c93656d18b50c17c85b7a1e234276fb1444f27.\r\n\r\nI've tested on Python 3.7, 3.8 and 3.10.\n\n### To Reproduce\n\nAdd later if really needed.\n\n### Expected behavior\n\nAn exception is raised when the client disconnects.\n\n### Logs/tracebacks\n\n```python-traceback\n-\n```\n\n\n### Python Version\n\n```console\n$ python --version\r\nPython 3.10.7\n```\n\n\n### aiohttp Version\n\n```console\n$ python -m pip show aiohttp\r\nName: aiohttp\r\nVersion: 3.8.3\r\n...\n```\n\n\n### multidict Version\n\n```console\n-\n```\n\n\n### yarl Version\n\n```console\n-\n```\n\n\n### OS\n\nLinux\n\n### Related component\n\nServer\n\n### Additional context\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow the aio-libs Code of Conduct\n", "before_files": [{"content": "import asyncio\nfrom typing import Optional, cast\n\nfrom .tcp_helpers import tcp_nodelay\n\n\nclass BaseProtocol(asyncio.Protocol):\n __slots__ = (\n \"_loop\",\n \"_paused\",\n \"_drain_waiter\",\n \"_connection_lost\",\n \"_reading_paused\",\n \"transport\",\n )\n\n def __init__(self, loop: asyncio.AbstractEventLoop) -> None:\n self._loop: asyncio.AbstractEventLoop = loop\n self._paused = False\n self._drain_waiter: Optional[asyncio.Future[None]] = None\n self._connection_lost = False\n self._reading_paused = False\n\n self.transport: Optional[asyncio.Transport] = None\n\n def pause_writing(self) -> None:\n assert not self._paused\n self._paused = True\n\n def resume_writing(self) -> None:\n assert self._paused\n self._paused = False\n\n waiter = self._drain_waiter\n if waiter is not None:\n self._drain_waiter = None\n if not waiter.done():\n waiter.set_result(None)\n\n def pause_reading(self) -> None:\n if not self._reading_paused and self.transport is not None:\n try:\n self.transport.pause_reading()\n except (AttributeError, NotImplementedError, RuntimeError):\n pass\n self._reading_paused = True\n\n def resume_reading(self) -> None:\n if self._reading_paused and self.transport is not None:\n try:\n self.transport.resume_reading()\n except (AttributeError, NotImplementedError, RuntimeError):\n pass\n self._reading_paused = False\n\n def connection_made(self, transport: asyncio.BaseTransport) -> None:\n tr = cast(asyncio.Transport, transport)\n tcp_nodelay(tr, True)\n self.transport = tr\n\n def connection_lost(self, exc: Optional[BaseException]) -> None:\n self._connection_lost = True\n # Wake up the writer if currently paused.\n self.transport = None\n if not self._paused:\n return\n waiter = self._drain_waiter\n if waiter is None:\n return\n self._drain_waiter = None\n if waiter.done():\n return\n if exc is None:\n waiter.set_result(None)\n else:\n waiter.set_exception(exc)\n\n async def _drain_helper(self) -> None:\n if self._connection_lost:\n raise ConnectionResetError(\"Connection lost\")\n if not self._paused:\n return\n waiter = self._drain_waiter\n if waiter is None:\n waiter = self._loop.create_future()\n self._drain_waiter = waiter\n await asyncio.shield(waiter)\n", "path": "aiohttp/base_protocol.py"}, {"content": "\"\"\"Http related parsers and protocol.\"\"\"\n\nimport asyncio\nimport zlib\nfrom typing import Any, Awaitable, Callable, NamedTuple, Optional, Union # noqa\n\nfrom multidict import CIMultiDict\n\nfrom .abc import AbstractStreamWriter\nfrom .base_protocol import BaseProtocol\nfrom .helpers import NO_EXTENSIONS\n\n__all__ = (\"StreamWriter\", \"HttpVersion\", \"HttpVersion10\", \"HttpVersion11\")\n\n\nclass HttpVersion(NamedTuple):\n major: int\n minor: 
int\n\n\nHttpVersion10 = HttpVersion(1, 0)\nHttpVersion11 = HttpVersion(1, 1)\n\n\n_T_OnChunkSent = Optional[Callable[[bytes], Awaitable[None]]]\n_T_OnHeadersSent = Optional[Callable[[\"CIMultiDict[str]\"], Awaitable[None]]]\n\n\nclass StreamWriter(AbstractStreamWriter):\n def __init__(\n self,\n protocol: BaseProtocol,\n loop: asyncio.AbstractEventLoop,\n on_chunk_sent: _T_OnChunkSent = None,\n on_headers_sent: _T_OnHeadersSent = None,\n ) -> None:\n self._protocol = protocol\n self._transport = protocol.transport\n\n self.loop = loop\n self.length = None\n self.chunked = False\n self.buffer_size = 0\n self.output_size = 0\n\n self._eof = False\n self._compress: Any = None\n self._drain_waiter = None\n\n self._on_chunk_sent: _T_OnChunkSent = on_chunk_sent\n self._on_headers_sent: _T_OnHeadersSent = on_headers_sent\n\n @property\n def transport(self) -> Optional[asyncio.Transport]:\n return self._transport\n\n @property\n def protocol(self) -> BaseProtocol:\n return self._protocol\n\n def enable_chunking(self) -> None:\n self.chunked = True\n\n def enable_compression(\n self, encoding: str = \"deflate\", strategy: int = zlib.Z_DEFAULT_STRATEGY\n ) -> None:\n zlib_mode = 16 + zlib.MAX_WBITS if encoding == \"gzip\" else zlib.MAX_WBITS\n self._compress = zlib.compressobj(wbits=zlib_mode, strategy=strategy)\n\n def _write(self, chunk: bytes) -> None:\n size = len(chunk)\n self.buffer_size += size\n self.output_size += size\n\n if self._transport is None or self._transport.is_closing():\n raise ConnectionResetError(\"Cannot write to closing transport\")\n self._transport.write(chunk)\n\n async def write(\n self, chunk: bytes, *, drain: bool = True, LIMIT: int = 0x10000\n ) -> None:\n \"\"\"Writes chunk of data to a stream.\n\n write_eof() indicates end of stream.\n writer can't be used after write_eof() method being called.\n write() return drain future.\n \"\"\"\n if self._on_chunk_sent is not None:\n await self._on_chunk_sent(chunk)\n\n if isinstance(chunk, memoryview):\n if chunk.nbytes != len(chunk):\n # just reshape it\n chunk = chunk.cast(\"c\")\n\n if self._compress is not None:\n chunk = self._compress.compress(chunk)\n if not chunk:\n return\n\n if self.length is not None:\n chunk_len = len(chunk)\n if self.length >= chunk_len:\n self.length = self.length - chunk_len\n else:\n chunk = chunk[: self.length]\n self.length = 0\n if not chunk:\n return\n\n if chunk:\n if self.chunked:\n chunk_len_pre = (\"%x\\r\\n\" % len(chunk)).encode(\"ascii\")\n chunk = chunk_len_pre + chunk + b\"\\r\\n\"\n\n self._write(chunk)\n\n if self.buffer_size > LIMIT and drain:\n self.buffer_size = 0\n await self.drain()\n\n async def write_headers(\n self, status_line: str, headers: \"CIMultiDict[str]\"\n ) -> None:\n \"\"\"Write request/response status and headers.\"\"\"\n if self._on_headers_sent is not None:\n await self._on_headers_sent(headers)\n\n # status + headers\n buf = _serialize_headers(status_line, headers)\n self._write(buf)\n\n async def write_eof(self, chunk: bytes = b\"\") -> None:\n if self._eof:\n return\n\n if chunk and self._on_chunk_sent is not None:\n await self._on_chunk_sent(chunk)\n\n if self._compress:\n if chunk:\n chunk = self._compress.compress(chunk)\n\n chunk = chunk + self._compress.flush()\n if chunk and self.chunked:\n chunk_len = (\"%x\\r\\n\" % len(chunk)).encode(\"ascii\")\n chunk = chunk_len + chunk + b\"\\r\\n0\\r\\n\\r\\n\"\n else:\n if self.chunked:\n if chunk:\n chunk_len = (\"%x\\r\\n\" % len(chunk)).encode(\"ascii\")\n chunk = chunk_len + chunk + 
b\"\\r\\n0\\r\\n\\r\\n\"\n else:\n chunk = b\"0\\r\\n\\r\\n\"\n\n if chunk:\n self._write(chunk)\n\n await self.drain()\n\n self._eof = True\n self._transport = None\n\n async def drain(self) -> None:\n \"\"\"Flush the write buffer.\n\n The intended use is to write\n\n await w.write(data)\n await w.drain()\n \"\"\"\n if self._protocol.transport is not None:\n await self._protocol._drain_helper()\n\n\ndef _safe_header(string: str) -> str:\n if \"\\r\" in string or \"\\n\" in string:\n raise ValueError(\n \"Newline or carriage return detected in headers. \"\n \"Potential header injection attack.\"\n )\n return string\n\n\ndef _py_serialize_headers(status_line: str, headers: \"CIMultiDict[str]\") -> bytes:\n headers_gen = (_safe_header(k) + \": \" + _safe_header(v) for k, v in headers.items())\n line = status_line + \"\\r\\n\" + \"\\r\\n\".join(headers_gen) + \"\\r\\n\\r\\n\"\n return line.encode(\"utf-8\")\n\n\n_serialize_headers = _py_serialize_headers\n\ntry:\n import aiohttp._http_writer as _http_writer # type: ignore[import]\n\n _c_serialize_headers = _http_writer._serialize_headers\n if not NO_EXTENSIONS:\n _serialize_headers = _c_serialize_headers\nexcept ImportError:\n pass\n", "path": "aiohttp/http_writer.py"}], "after_files": [{"content": "import asyncio\nfrom typing import Optional, cast\n\nfrom .tcp_helpers import tcp_nodelay\n\n\nclass BaseProtocol(asyncio.Protocol):\n __slots__ = (\n \"_loop\",\n \"_paused\",\n \"_drain_waiter\",\n \"_connection_lost\",\n \"_reading_paused\",\n \"transport\",\n )\n\n def __init__(self, loop: asyncio.AbstractEventLoop) -> None:\n self._loop: asyncio.AbstractEventLoop = loop\n self._paused = False\n self._drain_waiter: Optional[asyncio.Future[None]] = None\n self._reading_paused = False\n\n self.transport: Optional[asyncio.Transport] = None\n\n @property\n def connected(self) -> bool:\n \"\"\"Return True if the connection is open.\"\"\"\n return self.transport is not None\n\n def pause_writing(self) -> None:\n assert not self._paused\n self._paused = True\n\n def resume_writing(self) -> None:\n assert self._paused\n self._paused = False\n\n waiter = self._drain_waiter\n if waiter is not None:\n self._drain_waiter = None\n if not waiter.done():\n waiter.set_result(None)\n\n def pause_reading(self) -> None:\n if not self._reading_paused and self.transport is not None:\n try:\n self.transport.pause_reading()\n except (AttributeError, NotImplementedError, RuntimeError):\n pass\n self._reading_paused = True\n\n def resume_reading(self) -> None:\n if self._reading_paused and self.transport is not None:\n try:\n self.transport.resume_reading()\n except (AttributeError, NotImplementedError, RuntimeError):\n pass\n self._reading_paused = False\n\n def connection_made(self, transport: asyncio.BaseTransport) -> None:\n tr = cast(asyncio.Transport, transport)\n tcp_nodelay(tr, True)\n self.transport = tr\n\n def connection_lost(self, exc: Optional[BaseException]) -> None:\n # Wake up the writer if currently paused.\n self.transport = None\n if not self._paused:\n return\n waiter = self._drain_waiter\n if waiter is None:\n return\n self._drain_waiter = None\n if waiter.done():\n return\n if exc is None:\n waiter.set_result(None)\n else:\n waiter.set_exception(exc)\n\n async def _drain_helper(self) -> None:\n if not self.connected:\n raise ConnectionResetError(\"Connection lost\")\n if not self._paused:\n return\n waiter = self._drain_waiter\n if waiter is None:\n waiter = self._loop.create_future()\n self._drain_waiter = waiter\n await 
asyncio.shield(waiter)\n", "path": "aiohttp/base_protocol.py"}, {"content": "\"\"\"Http related parsers and protocol.\"\"\"\n\nimport asyncio\nimport zlib\nfrom typing import Any, Awaitable, Callable, NamedTuple, Optional, Union # noqa\n\nfrom multidict import CIMultiDict\n\nfrom .abc import AbstractStreamWriter\nfrom .base_protocol import BaseProtocol\nfrom .helpers import NO_EXTENSIONS\n\n__all__ = (\"StreamWriter\", \"HttpVersion\", \"HttpVersion10\", \"HttpVersion11\")\n\n\nclass HttpVersion(NamedTuple):\n major: int\n minor: int\n\n\nHttpVersion10 = HttpVersion(1, 0)\nHttpVersion11 = HttpVersion(1, 1)\n\n\n_T_OnChunkSent = Optional[Callable[[bytes], Awaitable[None]]]\n_T_OnHeadersSent = Optional[Callable[[\"CIMultiDict[str]\"], Awaitable[None]]]\n\n\nclass StreamWriter(AbstractStreamWriter):\n def __init__(\n self,\n protocol: BaseProtocol,\n loop: asyncio.AbstractEventLoop,\n on_chunk_sent: _T_OnChunkSent = None,\n on_headers_sent: _T_OnHeadersSent = None,\n ) -> None:\n self._protocol = protocol\n\n self.loop = loop\n self.length = None\n self.chunked = False\n self.buffer_size = 0\n self.output_size = 0\n\n self._eof = False\n self._compress: Any = None\n self._drain_waiter = None\n\n self._on_chunk_sent: _T_OnChunkSent = on_chunk_sent\n self._on_headers_sent: _T_OnHeadersSent = on_headers_sent\n\n @property\n def transport(self) -> Optional[asyncio.Transport]:\n return self._protocol.transport\n\n @property\n def protocol(self) -> BaseProtocol:\n return self._protocol\n\n def enable_chunking(self) -> None:\n self.chunked = True\n\n def enable_compression(\n self, encoding: str = \"deflate\", strategy: int = zlib.Z_DEFAULT_STRATEGY\n ) -> None:\n zlib_mode = 16 + zlib.MAX_WBITS if encoding == \"gzip\" else zlib.MAX_WBITS\n self._compress = zlib.compressobj(wbits=zlib_mode, strategy=strategy)\n\n def _write(self, chunk: bytes) -> None:\n size = len(chunk)\n self.buffer_size += size\n self.output_size += size\n transport = self.transport\n if not self._protocol.connected or transport is None or transport.is_closing():\n raise ConnectionResetError(\"Cannot write to closing transport\")\n transport.write(chunk)\n\n async def write(\n self, chunk: bytes, *, drain: bool = True, LIMIT: int = 0x10000\n ) -> None:\n \"\"\"Writes chunk of data to a stream.\n\n write_eof() indicates end of stream.\n writer can't be used after write_eof() method being called.\n write() return drain future.\n \"\"\"\n if self._on_chunk_sent is not None:\n await self._on_chunk_sent(chunk)\n\n if isinstance(chunk, memoryview):\n if chunk.nbytes != len(chunk):\n # just reshape it\n chunk = chunk.cast(\"c\")\n\n if self._compress is not None:\n chunk = self._compress.compress(chunk)\n if not chunk:\n return\n\n if self.length is not None:\n chunk_len = len(chunk)\n if self.length >= chunk_len:\n self.length = self.length - chunk_len\n else:\n chunk = chunk[: self.length]\n self.length = 0\n if not chunk:\n return\n\n if chunk:\n if self.chunked:\n chunk_len_pre = (\"%x\\r\\n\" % len(chunk)).encode(\"ascii\")\n chunk = chunk_len_pre + chunk + b\"\\r\\n\"\n\n self._write(chunk)\n\n if self.buffer_size > LIMIT and drain:\n self.buffer_size = 0\n await self.drain()\n\n async def write_headers(\n self, status_line: str, headers: \"CIMultiDict[str]\"\n ) -> None:\n \"\"\"Write request/response status and headers.\"\"\"\n if self._on_headers_sent is not None:\n await self._on_headers_sent(headers)\n\n # status + headers\n buf = _serialize_headers(status_line, headers)\n self._write(buf)\n\n async def write_eof(self, 
chunk: bytes = b\"\") -> None:\n if self._eof:\n return\n\n if chunk and self._on_chunk_sent is not None:\n await self._on_chunk_sent(chunk)\n\n if self._compress:\n if chunk:\n chunk = self._compress.compress(chunk)\n\n chunk = chunk + self._compress.flush()\n if chunk and self.chunked:\n chunk_len = (\"%x\\r\\n\" % len(chunk)).encode(\"ascii\")\n chunk = chunk_len + chunk + b\"\\r\\n0\\r\\n\\r\\n\"\n else:\n if self.chunked:\n if chunk:\n chunk_len = (\"%x\\r\\n\" % len(chunk)).encode(\"ascii\")\n chunk = chunk_len + chunk + b\"\\r\\n0\\r\\n\\r\\n\"\n else:\n chunk = b\"0\\r\\n\\r\\n\"\n\n if chunk:\n self._write(chunk)\n\n await self.drain()\n\n self._eof = True\n\n async def drain(self) -> None:\n \"\"\"Flush the write buffer.\n\n The intended use is to write\n\n await w.write(data)\n await w.drain()\n \"\"\"\n if self._protocol.transport is not None:\n await self._protocol._drain_helper()\n\n\ndef _safe_header(string: str) -> str:\n if \"\\r\" in string or \"\\n\" in string:\n raise ValueError(\n \"Newline or carriage return detected in headers. \"\n \"Potential header injection attack.\"\n )\n return string\n\n\ndef _py_serialize_headers(status_line: str, headers: \"CIMultiDict[str]\") -> bytes:\n headers_gen = (_safe_header(k) + \": \" + _safe_header(v) for k, v in headers.items())\n line = status_line + \"\\r\\n\" + \"\\r\\n\".join(headers_gen) + \"\\r\\n\\r\\n\"\n return line.encode(\"utf-8\")\n\n\n_serialize_headers = _py_serialize_headers\n\ntry:\n import aiohttp._http_writer as _http_writer # type: ignore[import]\n\n _c_serialize_headers = _http_writer._serialize_headers\n if not NO_EXTENSIONS:\n _serialize_headers = _c_serialize_headers\nexcept ImportError:\n pass\n", "path": "aiohttp/http_writer.py"}]}
| 3,269 | 647 |
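A note on the aiohttp fix shown above: the patched `StreamWriter` no longer caches `protocol.transport` at construction time; it re-reads the transport through the protocol on every write and raises `ConnectionResetError` as soon as `protocol.connected` turns false. The sketch below illustrates that pattern with simplified stand-in classes (these are not the real aiohttp classes; names and structure are illustrative only):

```python
# Simplified sketch of the fix pattern: query connection state at write time
# instead of caching the transport when the writer is created.
import asyncio
from typing import Optional


class DemoProtocol:
    """Stand-in for aiohttp's BaseProtocol after the patch."""

    def __init__(self) -> None:
        self.transport: Optional[asyncio.Transport] = None

    @property
    def connected(self) -> bool:
        # True between connection_made() and connection_lost()
        return self.transport is not None


class DemoWriter:
    """Stand-in for StreamWriter: holds only the protocol, never the transport."""

    def __init__(self, protocol: DemoProtocol) -> None:
        self._protocol = protocol

    def _write(self, chunk: bytes) -> None:
        transport = self._protocol.transport  # re-read on every write
        if not self._protocol.connected or transport is None or transport.is_closing():
            raise ConnectionResetError("Cannot write to closing transport")
        transport.write(chunk)
```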
gh_patches_debug_4580
|
rasdani/github-patches
|
git_diff
|
pypa__pip-5136
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Name normalization issue in 10.0.0b1 yields false-positive installation warnings
* Pip version: 10.0.0b1
* Python version: 2.7.14
* Operating system: Linux
### Description:
The new pip check mode used during installation wrongly reports that packages are not being installed. This only happens for packages containing a `.` (dot) in the name.
### What I've run:
```
pip install pyramid
Collecting pyramid
Using cached pyramid-1.9.1-py2.py3-none-any.whl
Collecting zope.interface>=3.8.0 (from pyramid)
Using cached zope.interface-4.4.3-cp36-cp36m-manylinux1_x86_64.whl
Collecting hupper (from pyramid)
Using cached hupper-1.1-py2.py3-none-any.whl
Collecting zope.deprecation>=3.5.0 (from pyramid)
Using cached zope.deprecation-4.3.0-py2.py3-none-any.whl
Collecting translationstring>=0.4 (from pyramid)
Using cached translationstring-1.3-py2.py3-none-any.whl
Requirement already satisfied: setuptools in ./venv-ex/lib/python3.6/site-packages (from pyramid) (39.0.1)
Collecting WebOb>=1.7.0rc2 (from pyramid)
Using cached WebOb-1.8.0rc1-py2.py3-none-any.whl
Collecting venusian>=1.0a3 (from pyramid)
Using cached venusian-1.1.0-py2.py3-none-any.whl
Collecting repoze.lru>=0.4 (from pyramid)
Using cached repoze.lru-0.7-py3-none-any.whl
Collecting plaster-pastedeploy (from pyramid)
Using cached plaster_pastedeploy-0.5-py2.py3-none-any.whl
Collecting PasteDeploy>=1.5.0 (from pyramid)
Using cached PasteDeploy-1.5.2-py2.py3-none-any.whl
Collecting plaster (from pyramid)
Using cached plaster-1.0-py2.py3-none-any.whl
pyramid 1.9.1 requires repoze.lru>=0.4, which is not installed.
pyramid 1.9.1 requires zope.deprecation>=3.5.0, which is not installed.
pyramid 1.9.1 requires zope.interface>=3.8.0, which is not installed.
Installing collected packages: zope.interface, hupper, zope.deprecation, translationstring, WebOb, venusian, repoze.lru, PasteDeploy, plaster, plaster-pastedeploy, pyramid
Successfully installed PasteDeploy-1.5.2 WebOb-1.8.0rc1 hupper-1.1 plaster-1.0 plaster-pastedeploy-0.5 pyramid-1.9.1 repoze.lru-0.7 translationstring-1.3 venusian-1.1.0 zope.deprecation-4.3.0 zope.interface-4.4.3
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pip/_internal/operations/check.py`
Content:
```
1 """Validation of dependencies of packages
2 """
3
4 from collections import namedtuple
5
6 from pip._vendor.packaging.utils import canonicalize_name
7
8 from pip._internal.operations.prepare import make_abstract_dist
9
10 from pip._internal.utils.misc import get_installed_distributions
11 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
12
13 if MYPY_CHECK_RUNNING:
14 from pip._internal.req.req_install import InstallRequirement
15 from typing import Any, Dict, Iterator, Set, Tuple, List
16
17 # Shorthands
18 PackageSet = Dict[str, 'PackageDetails']
19 Missing = Tuple[str, Any]
20 Conflicting = Tuple[str, str, Any]
21
22 MissingDict = Dict[str, List[Missing]]
23 ConflictingDict = Dict[str, List[Conflicting]]
24 CheckResult = Tuple[MissingDict, ConflictingDict]
25
26 PackageDetails = namedtuple('PackageDetails', ['version', 'requires'])
27
28
29 def create_package_set_from_installed(**kwargs):
30 # type: (**Any) -> PackageSet
31 """Converts a list of distributions into a PackageSet.
32 """
33 retval = {}
34 for dist in get_installed_distributions(**kwargs):
35 name = canonicalize_name(dist.project_name)
36 retval[name] = PackageDetails(dist.version, dist.requires())
37 return retval
38
39
40 def check_package_set(package_set):
41 # type: (PackageSet) -> CheckResult
42 """Check if a package set is consistent
43 """
44 missing = dict()
45 conflicting = dict()
46
47 for package_name in package_set:
48 # Info about dependencies of package_name
49 missing_deps = set() # type: Set[Missing]
50 conflicting_deps = set() # type: Set[Conflicting]
51
52 for req in package_set[package_name].requires:
53 name = canonicalize_name(req.project_name) # type: str
54
55 # Check if it's missing
56 if name not in package_set:
57 missed = True
58 if req.marker is not None:
59 missed = req.marker.evaluate()
60 if missed:
61 missing_deps.add((name, req))
62 continue
63
64 # Check if there's a conflict
65 version = package_set[name].version # type: str
66 if version not in req.specifier:
67 conflicting_deps.add((name, version, req))
68
69 def str_key(x):
70 return str(x)
71
72 if missing_deps:
73 missing[package_name] = sorted(missing_deps, key=str_key)
74 if conflicting_deps:
75 conflicting[package_name] = sorted(conflicting_deps, key=str_key)
76
77 return missing, conflicting
78
79
80 def check_install_conflicts(to_install):
81 # type: (List[InstallRequirement]) -> Tuple[PackageSet, CheckResult]
82 """For checking if the dependency graph would be consistent after \
83 installing given requirements
84 """
85 # Start from the current state
86 state = create_package_set_from_installed()
87 _simulate_installation_of(to_install, state)
88 return state, check_package_set(state)
89
90
91 # NOTE from @pradyunsg
92 # This required a minor update in dependency link handling logic over at
93 # operations.prepare.IsSDist.dist() to get it working
94 def _simulate_installation_of(to_install, state):
95 # type: (List[InstallRequirement], PackageSet) -> None
96 """Computes the version of packages after installing to_install.
97 """
98
99 # Modify it as installing requirement_set would (assuming no errors)
100 for inst_req in to_install:
101 dist = make_abstract_dist(inst_req).dist(finder=None)
102 state[dist.key] = PackageDetails(dist.version, dist.requires())
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pip/_internal/operations/check.py b/src/pip/_internal/operations/check.py
--- a/src/pip/_internal/operations/check.py
+++ b/src/pip/_internal/operations/check.py
@@ -99,4 +99,5 @@
# Modify it as installing requirement_set would (assuming no errors)
for inst_req in to_install:
dist = make_abstract_dist(inst_req).dist(finder=None)
- state[dist.key] = PackageDetails(dist.version, dist.requires())
+ name = canonicalize_name(dist.key)
+ state[name] = PackageDetails(dist.version, dist.requires())
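A short illustration of why the one-line change above matters: `create_package_set_from_installed()` keys the package set by `canonicalize_name(dist.project_name)`, while the pre-patch `_simulate_installation_of()` used the raw `dist.key`. For names containing a dot the two differ, so the just-installed package was never found and was reported as missing. A minimal demonstration (assumes pip's vendored `packaging` is importable; plain `packaging.utils.canonicalize_name` behaves the same):

```python
# PEP 503 normalization folds runs of ".", "_" and "-" into a single "-",
# so "zope.interface" and "zope-interface" are different dictionary keys.
from pip._vendor.packaging.utils import canonicalize_name

dist_key = "zope.interface"               # roughly what dist.key reports
canonical = canonicalize_name(dist_key)   # -> "zope-interface"

package_set = {canonical: "installed"}    # how the installed state is keyed

print(dist_key in package_set)            # False -> pre-patch false positive
print(canonical in package_set)           # True  -> patched lookup matches
```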
|
{"golden_diff": "diff --git a/src/pip/_internal/operations/check.py b/src/pip/_internal/operations/check.py\n--- a/src/pip/_internal/operations/check.py\n+++ b/src/pip/_internal/operations/check.py\n@@ -99,4 +99,5 @@\n # Modify it as installing requirement_set would (assuming no errors)\n for inst_req in to_install:\n dist = make_abstract_dist(inst_req).dist(finder=None)\n- state[dist.key] = PackageDetails(dist.version, dist.requires())\n+ name = canonicalize_name(dist.key)\n+ state[name] = PackageDetails(dist.version, dist.requires())\n", "issue": "Name normalization issue in 10.0.0b1 yields false-positive installation warnings\n* Pip version: 10.0.0b1\r\n* Python version: 2.7.14\r\n* Operating system: Linux\r\n\r\n### Description:\r\n\r\nThe new pip check mode used during installation wrongly reports that packages are not being installed. This only happens for package containing a `.` (dot) in the name.\r\n\r\n### What I've run:\r\n\r\n```\r\npip install pyramid\r\nCollecting pyramid\r\n Using cached pyramid-1.9.1-py2.py3-none-any.whl\r\nCollecting zope.interface>=3.8.0 (from pyramid)\r\n Using cached zope.interface-4.4.3-cp36-cp36m-manylinux1_x86_64.whl\r\nCollecting hupper (from pyramid)\r\n Using cached hupper-1.1-py2.py3-none-any.whl\r\nCollecting zope.deprecation>=3.5.0 (from pyramid)\r\n Using cached zope.deprecation-4.3.0-py2.py3-none-any.whl\r\nCollecting translationstring>=0.4 (from pyramid)\r\n Using cached translationstring-1.3-py2.py3-none-any.whl\r\nRequirement already satisfied: setuptools in ./venv-ex/lib/python3.6/site-packages (from pyramid) (39.0.1)\r\nCollecting WebOb>=1.7.0rc2 (from pyramid)\r\n Using cached WebOb-1.8.0rc1-py2.py3-none-any.whl\r\nCollecting venusian>=1.0a3 (from pyramid)\r\n Using cached venusian-1.1.0-py2.py3-none-any.whl\r\nCollecting repoze.lru>=0.4 (from pyramid)\r\n Using cached repoze.lru-0.7-py3-none-any.whl\r\nCollecting plaster-pastedeploy (from pyramid)\r\n Using cached plaster_pastedeploy-0.5-py2.py3-none-any.whl\r\nCollecting PasteDeploy>=1.5.0 (from pyramid)\r\n Using cached PasteDeploy-1.5.2-py2.py3-none-any.whl\r\nCollecting plaster (from pyramid)\r\n Using cached plaster-1.0-py2.py3-none-any.whl\r\npyramid 1.9.1 requires repoze.lru>=0.4, which is not installed.\r\npyramid 1.9.1 requires zope.deprecation>=3.5.0, which is not installed.\r\npyramid 1.9.1 requires zope.interface>=3.8.0, which is not installed.\r\nInstalling collected packages: zope.interface, hupper, zope.deprecation, translationstring, WebOb, venusian, repoze.lru, PasteDeploy, plaster, plaster-pastedeploy, pyramid\r\nSuccessfully installed PasteDeploy-1.5.2 WebOb-1.8.0rc1 hupper-1.1 plaster-1.0 plaster-pastedeploy-0.5 pyramid-1.9.1 repoze.lru-0.7 translationstring-1.3 venusian-1.1.0 zope.deprecation-4.3.0 zope.interface-4.4.3\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Validation of dependencies of packages\n\"\"\"\n\nfrom collections import namedtuple\n\nfrom pip._vendor.packaging.utils import canonicalize_name\n\nfrom pip._internal.operations.prepare import make_abstract_dist\n\nfrom pip._internal.utils.misc import get_installed_distributions\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from pip._internal.req.req_install import InstallRequirement\n from typing import Any, Dict, Iterator, Set, Tuple, List\n\n # Shorthands\n PackageSet = Dict[str, 'PackageDetails']\n Missing = Tuple[str, Any]\n Conflicting = Tuple[str, str, Any]\n\n MissingDict = Dict[str, List[Missing]]\n ConflictingDict = Dict[str, List[Conflicting]]\n 
CheckResult = Tuple[MissingDict, ConflictingDict]\n\nPackageDetails = namedtuple('PackageDetails', ['version', 'requires'])\n\n\ndef create_package_set_from_installed(**kwargs):\n # type: (**Any) -> PackageSet\n \"\"\"Converts a list of distributions into a PackageSet.\n \"\"\"\n retval = {}\n for dist in get_installed_distributions(**kwargs):\n name = canonicalize_name(dist.project_name)\n retval[name] = PackageDetails(dist.version, dist.requires())\n return retval\n\n\ndef check_package_set(package_set):\n # type: (PackageSet) -> CheckResult\n \"\"\"Check if a package set is consistent\n \"\"\"\n missing = dict()\n conflicting = dict()\n\n for package_name in package_set:\n # Info about dependencies of package_name\n missing_deps = set() # type: Set[Missing]\n conflicting_deps = set() # type: Set[Conflicting]\n\n for req in package_set[package_name].requires:\n name = canonicalize_name(req.project_name) # type: str\n\n # Check if it's missing\n if name not in package_set:\n missed = True\n if req.marker is not None:\n missed = req.marker.evaluate()\n if missed:\n missing_deps.add((name, req))\n continue\n\n # Check if there's a conflict\n version = package_set[name].version # type: str\n if version not in req.specifier:\n conflicting_deps.add((name, version, req))\n\n def str_key(x):\n return str(x)\n\n if missing_deps:\n missing[package_name] = sorted(missing_deps, key=str_key)\n if conflicting_deps:\n conflicting[package_name] = sorted(conflicting_deps, key=str_key)\n\n return missing, conflicting\n\n\ndef check_install_conflicts(to_install):\n # type: (List[InstallRequirement]) -> Tuple[PackageSet, CheckResult]\n \"\"\"For checking if the dependency graph would be consistent after \\\n installing given requirements\n \"\"\"\n # Start from the current state\n state = create_package_set_from_installed()\n _simulate_installation_of(to_install, state)\n return state, check_package_set(state)\n\n\n# NOTE from @pradyunsg\n# This required a minor update in dependency link handling logic over at\n# operations.prepare.IsSDist.dist() to get it working\ndef _simulate_installation_of(to_install, state):\n # type: (List[InstallRequirement], PackageSet) -> None\n \"\"\"Computes the version of packages after installing to_install.\n \"\"\"\n\n # Modify it as installing requirement_set would (assuming no errors)\n for inst_req in to_install:\n dist = make_abstract_dist(inst_req).dist(finder=None)\n state[dist.key] = PackageDetails(dist.version, dist.requires())\n", "path": "src/pip/_internal/operations/check.py"}], "after_files": [{"content": "\"\"\"Validation of dependencies of packages\n\"\"\"\n\nfrom collections import namedtuple\n\nfrom pip._vendor.packaging.utils import canonicalize_name\n\nfrom pip._internal.operations.prepare import make_abstract_dist\n\nfrom pip._internal.utils.misc import get_installed_distributions\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from pip._internal.req.req_install import InstallRequirement\n from typing import Any, Dict, Iterator, Set, Tuple, List\n\n # Shorthands\n PackageSet = Dict[str, 'PackageDetails']\n Missing = Tuple[str, Any]\n Conflicting = Tuple[str, str, Any]\n\n MissingDict = Dict[str, List[Missing]]\n ConflictingDict = Dict[str, List[Conflicting]]\n CheckResult = Tuple[MissingDict, ConflictingDict]\n\nPackageDetails = namedtuple('PackageDetails', ['version', 'requires'])\n\n\ndef create_package_set_from_installed(**kwargs):\n # type: (**Any) -> PackageSet\n \"\"\"Converts a list of distributions into a 
PackageSet.\n \"\"\"\n retval = {}\n for dist in get_installed_distributions(**kwargs):\n name = canonicalize_name(dist.project_name)\n retval[name] = PackageDetails(dist.version, dist.requires())\n return retval\n\n\ndef check_package_set(package_set):\n # type: (PackageSet) -> CheckResult\n \"\"\"Check if a package set is consistent\n \"\"\"\n missing = dict()\n conflicting = dict()\n\n for package_name in package_set:\n # Info about dependencies of package_name\n missing_deps = set() # type: Set[Missing]\n conflicting_deps = set() # type: Set[Conflicting]\n\n for req in package_set[package_name].requires:\n name = canonicalize_name(req.project_name) # type: str\n\n # Check if it's missing\n if name not in package_set:\n missed = True\n if req.marker is not None:\n missed = req.marker.evaluate()\n if missed:\n missing_deps.add((name, req))\n continue\n\n # Check if there's a conflict\n version = package_set[name].version # type: str\n if version not in req.specifier:\n conflicting_deps.add((name, version, req))\n\n def str_key(x):\n return str(x)\n\n if missing_deps:\n missing[package_name] = sorted(missing_deps, key=str_key)\n if conflicting_deps:\n conflicting[package_name] = sorted(conflicting_deps, key=str_key)\n\n return missing, conflicting\n\n\ndef check_install_conflicts(to_install):\n # type: (List[InstallRequirement]) -> Tuple[PackageSet, CheckResult]\n \"\"\"For checking if the dependency graph would be consistent after \\\n installing given requirements\n \"\"\"\n # Start from the current state\n state = create_package_set_from_installed()\n _simulate_installation_of(to_install, state)\n return state, check_package_set(state)\n\n\n# NOTE from @pradyunsg\n# This required a minor update in dependency link handling logic over at\n# operations.prepare.IsSDist.dist() to get it working\ndef _simulate_installation_of(to_install, state):\n # type: (List[InstallRequirement], PackageSet) -> None\n \"\"\"Computes the version of packages after installing to_install.\n \"\"\"\n\n # Modify it as installing requirement_set would (assuming no errors)\n for inst_req in to_install:\n dist = make_abstract_dist(inst_req).dist(finder=None)\n name = canonicalize_name(dist.key)\n state[name] = PackageDetails(dist.version, dist.requires())\n", "path": "src/pip/_internal/operations/check.py"}]}
| 1,939 | 136 |
gh_patches_debug_33145
|
rasdani/github-patches
|
git_diff
|
inventree__InvenTree-3538
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FR]Unit test for Schema allowance
**Is your feature request the result of a bug?**
#3414 #3411
**Problem**
Custom schema settings are not e2e or unit-tested in the API.
**Suggested solution**
Add tests
**Describe alternatives you've considered**
N/A
**Examples of other systems**
N/A
**Do you want to develop this?**
Yes
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `InvenTree/InvenTree/fields.py`
Content:
```
1 """Custom fields used in InvenTree."""
2
3 import sys
4 from decimal import Decimal
5
6 from django import forms
7 from django.core import validators
8 from django.db import models as models
9 from django.utils.translation import gettext_lazy as _
10
11 from djmoney.forms.fields import MoneyField
12 from djmoney.models.fields import MoneyField as ModelMoneyField
13 from djmoney.models.validators import MinMoneyValidator
14 from rest_framework.fields import URLField as RestURLField
15
16 import InvenTree.helpers
17
18 from .validators import allowable_url_schemes
19
20
21 class InvenTreeRestURLField(RestURLField):
22 """Custom field for DRF with custom scheme vaildators."""
23 def __init__(self, **kwargs):
24 """Update schemes."""
25
26 # Enforce 'max length' parameter in form validation
27 if 'max_length' not in kwargs:
28 kwargs['max_length'] = 200
29
30 super().__init__(**kwargs)
31 self.validators[-1].schemes = allowable_url_schemes()
32
33
34 class InvenTreeURLField(models.URLField):
35 """Custom URL field which has custom scheme validators."""
36
37 default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]
38
39 def __init__(self, **kwargs):
40 """Initialization method for InvenTreeURLField"""
41
42 # Max length for InvenTreeURLField defaults to 200
43 if 'max_length' not in kwargs:
44 kwargs['max_length'] = 200
45
46 super().__init__(**kwargs)
47
48
49 def money_kwargs():
50 """Returns the database settings for MoneyFields."""
51 from common.settings import currency_code_default, currency_code_mappings
52
53 kwargs = {}
54 kwargs['currency_choices'] = currency_code_mappings()
55 kwargs['default_currency'] = currency_code_default()
56 return kwargs
57
58
59 class InvenTreeModelMoneyField(ModelMoneyField):
60 """Custom MoneyField for clean migrations while using dynamic currency settings."""
61
62 def __init__(self, **kwargs):
63 """Overwrite default values and validators."""
64 # detect if creating migration
65 if 'migrate' in sys.argv or 'makemigrations' in sys.argv:
66 # remove currency information for a clean migration
67 kwargs['default_currency'] = ''
68 kwargs['currency_choices'] = []
69 else:
70 # set defaults
71 kwargs.update(money_kwargs())
72
73 # Set a minimum value validator
74 validators = kwargs.get('validators', [])
75
76 allow_negative = kwargs.pop('allow_negative', False)
77
78 # If no validators are provided, add some "standard" ones
79 if len(validators) == 0:
80
81 if not allow_negative:
82 validators.append(
83 MinMoneyValidator(0),
84 )
85
86 kwargs['validators'] = validators
87
88 super().__init__(**kwargs)
89
90 def formfield(self, **kwargs):
91 """Override form class to use own function."""
92 kwargs['form_class'] = InvenTreeMoneyField
93 return super().formfield(**kwargs)
94
95
96 class InvenTreeMoneyField(MoneyField):
97 """Custom MoneyField for clean migrations while using dynamic currency settings."""
98 def __init__(self, *args, **kwargs):
99 """Override initial values with the real info from database."""
100 kwargs.update(money_kwargs())
101 super().__init__(*args, **kwargs)
102
103
104 class DatePickerFormField(forms.DateField):
105 """Custom date-picker field."""
106
107 def __init__(self, **kwargs):
108 """Set up custom values."""
109 help_text = kwargs.get('help_text', _('Enter date'))
110 label = kwargs.get('label', None)
111 required = kwargs.get('required', False)
112 initial = kwargs.get('initial', None)
113
114 widget = forms.DateInput(
115 attrs={
116 'type': 'date',
117 }
118 )
119
120 forms.DateField.__init__(
121 self,
122 required=required,
123 initial=initial,
124 help_text=help_text,
125 widget=widget,
126 label=label
127 )
128
129
130 def round_decimal(value, places):
131 """Round value to the specified number of places."""
132 if value is not None:
133 # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options
134 return value.quantize(Decimal(10) ** -places)
135 return value
136
137
138 class RoundingDecimalFormField(forms.DecimalField):
139 """Custom FormField that automatically rounds inputs."""
140
141 def to_python(self, value):
142 """Convert value to python type."""
143 value = super().to_python(value)
144 value = round_decimal(value, self.decimal_places)
145 return value
146
147 def prepare_value(self, value):
148 """Override the 'prepare_value' method, to remove trailing zeros when displaying.
149
150 Why? It looks nice!
151 """
152 if type(value) == Decimal:
153 return InvenTree.helpers.normalize(value)
154 else:
155 return value
156
157
158 class RoundingDecimalField(models.DecimalField):
159 """Custom Field that automatically rounds inputs."""
160
161 def to_python(self, value):
162 """Convert value to python type."""
163 value = super().to_python(value)
164 return round_decimal(value, self.decimal_places)
165
166 def formfield(self, **kwargs):
167 """Return a Field instance for this field."""
168
169 kwargs['form_class'] = RoundingDecimalFormField
170
171 return super().formfield(**kwargs)
172
173
174 class InvenTreeNotesField(models.TextField):
175 """Custom implementation of a 'notes' field"""
176
177 # Maximum character limit for the various 'notes' fields
178 NOTES_MAX_LENGTH = 50000
179
180 def __init__(self, **kwargs):
181 """Configure default initial values for this field"""
182 kwargs['max_length'] = self.NOTES_MAX_LENGTH
183 kwargs['verbose_name'] = _('Notes')
184 kwargs['blank'] = True
185 kwargs['null'] = True
186
187 super().__init__(**kwargs)
188
```
Path: `InvenTree/InvenTree/validators.py`
Content:
```
1 """Custom field validators for InvenTree."""
2
3 import re
4 from decimal import Decimal, InvalidOperation
5
6 from django.conf import settings
7 from django.core.exceptions import FieldDoesNotExist, ValidationError
8 from django.utils.translation import gettext_lazy as _
9
10 from moneyed import CURRENCIES
11
12 import common.models
13
14
15 def validate_currency_code(code):
16 """Check that a given code is a valid currency code."""
17 if code not in CURRENCIES:
18 raise ValidationError(_('Not a valid currency code'))
19
20
21 def allowable_url_schemes():
22 """Return the list of allowable URL schemes.
23
24 In addition to the default schemes allowed by Django,
25 the install configuration file (config.yaml) can specify
26 extra schemas
27 """
28 # Default schemes
29 schemes = ['http', 'https', 'ftp', 'ftps']
30
31 extra = settings.EXTRA_URL_SCHEMES
32
33 for e in extra:
34 if e.lower() not in schemes:
35 schemes.append(e.lower())
36
37 return schemes
38
39
40 def validate_part_name(value):
41 """Prevent some illegal characters in part names."""
42 for c in ['|', '#', '$', '{', '}']:
43 if c in str(value):
44 raise ValidationError(
45 _('Invalid character in part name')
46 )
47
48
49 def validate_part_ipn(value):
50 """Validate the Part IPN against regex rule."""
51 pattern = common.models.InvenTreeSetting.get_setting('PART_IPN_REGEX')
52
53 if pattern:
54 match = re.search(pattern, value)
55
56 if match is None:
57 raise ValidationError(_('IPN must match regex pattern {pat}').format(pat=pattern))
58
59
60 def validate_purchase_order_reference(value):
61 """Validate the 'reference' field of a PurchaseOrder."""
62 pattern = common.models.InvenTreeSetting.get_setting('PURCHASEORDER_REFERENCE_REGEX')
63
64 if pattern:
65 match = re.search(pattern, value)
66
67 if match is None:
68 raise ValidationError(_('Reference must match pattern {pattern}').format(pattern=pattern))
69
70
71 def validate_sales_order_reference(value):
72 """Validate the 'reference' field of a SalesOrder."""
73 pattern = common.models.InvenTreeSetting.get_setting('SALESORDER_REFERENCE_REGEX')
74
75 if pattern:
76 match = re.search(pattern, value)
77
78 if match is None:
79 raise ValidationError(_('Reference must match pattern {pattern}').format(pattern=pattern))
80
81
82 def validate_tree_name(value):
83 """Placeholder for legacy function used in migrations."""
84
85
86 def validate_overage(value):
87 """Validate that a BOM overage string is properly formatted.
88
89 An overage string can look like:
90
91 - An integer number ('1' / 3 / 4)
92 - A decimal number ('0.123')
93 - A percentage ('5%' / '10 %')
94 """
95 value = str(value).lower().strip()
96
97 # First look for a simple numerical value
98 try:
99 i = Decimal(value)
100
101 if i < 0:
102 raise ValidationError(_("Overage value must not be negative"))
103
104 # Looks like a number
105 return True
106 except (ValueError, InvalidOperation):
107 pass
108
109 # Now look for a percentage value
110 if value.endswith('%'):
111 v = value[:-1].strip()
112
113 # Does it look like a number?
114 try:
115 f = float(v)
116
117 if f < 0:
118 raise ValidationError(_("Overage value must not be negative"))
119 elif f > 100:
120 raise ValidationError(_("Overage must not exceed 100%"))
121
122 return True
123 except ValueError:
124 pass
125
126 raise ValidationError(
127 _("Invalid value for overage")
128 )
129
130
131 def validate_part_name_format(self):
132 """Validate part name format.
133
134 Make sure that each template container has a field of Part Model
135 """
136 jinja_template_regex = re.compile('{{.*?}}')
137 field_name_regex = re.compile('(?<=part\\.)[A-z]+')
138 for jinja_template in jinja_template_regex.findall(str(self)):
139 # make sure at least one and only one field is present inside the parser
140 field_names = field_name_regex.findall(jinja_template)
141 if len(field_names) < 1:
142 raise ValidationError({
143 'value': 'At least one field must be present inside a jinja template container i.e {{}}'
144 })
145
146 # Make sure that the field_name exists in Part model
147 from part.models import Part
148
149 for field_name in field_names:
150 try:
151 Part._meta.get_field(field_name)
152 except FieldDoesNotExist:
153 raise ValidationError({
154 'value': f'{field_name} does not exist in Part Model'
155 })
156
157 return True
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/InvenTree/InvenTree/fields.py b/InvenTree/InvenTree/fields.py
--- a/InvenTree/InvenTree/fields.py
+++ b/InvenTree/InvenTree/fields.py
@@ -4,7 +4,6 @@
from decimal import Decimal
from django import forms
-from django.core import validators
from django.db import models as models
from django.utils.translation import gettext_lazy as _
@@ -15,7 +14,7 @@
import InvenTree.helpers
-from .validators import allowable_url_schemes
+from .validators import AllowedURLValidator, allowable_url_schemes
class InvenTreeRestURLField(RestURLField):
@@ -34,7 +33,7 @@
class InvenTreeURLField(models.URLField):
"""Custom URL field which has custom scheme validators."""
- default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]
+ default_validators = [AllowedURLValidator()]
def __init__(self, **kwargs):
"""Initialization method for InvenTreeURLField"""
diff --git a/InvenTree/InvenTree/validators.py b/InvenTree/InvenTree/validators.py
--- a/InvenTree/InvenTree/validators.py
+++ b/InvenTree/InvenTree/validators.py
@@ -4,6 +4,7 @@
from decimal import Decimal, InvalidOperation
from django.conf import settings
+from django.core import validators
from django.core.exceptions import FieldDoesNotExist, ValidationError
from django.utils.translation import gettext_lazy as _
@@ -37,6 +38,14 @@
return schemes
+class AllowedURLValidator(validators.URLValidator):
+ """Custom URL validator to allow for custom schemes."""
+ def __call__(self, value):
+ """Validate the URL."""
+ self.schemes = allowable_url_schemes()
+ super().__call__(value)
+
+
def validate_part_name(value):
"""Prevent some illegal characters in part names."""
for c in ['|', '#', '$', '{', '}']:
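Since the issue in this record asks for unit tests covering the custom scheme allowance, a hedged sketch of such a regression test is given below. The import path and test base class are assumptions based on the files shown above (InvenTree's own test layout and helpers may differ):

```python
# Hypothetical regression test: extra URL schemes from settings are honoured
# at validation time, because AllowedURLValidator re-reads the allowed schemes
# on every call instead of freezing them at import time.
from django.core.exceptions import ValidationError
from django.test import TestCase, override_settings

from InvenTree.validators import AllowedURLValidator  # assumed module path


class AllowedURLValidatorTest(TestCase):
    """Unit tests for the custom URL scheme validator."""

    def test_default_scheme_allowed(self):
        # http/https/ftp/ftps are always in the allow-list
        AllowedURLValidator()("https://example.com")

    def test_custom_scheme_rejected_by_default(self):
        with self.assertRaises(ValidationError):
            AllowedURLValidator()("ssh://example.com")

    @override_settings(EXTRA_URL_SCHEMES=["ssh"])
    def test_custom_scheme_allowed_when_configured(self):
        # The overridden setting takes effect without re-creating the field,
        # which is exactly what the patch above enables.
        AllowedURLValidator()("ssh://example.com")
```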
|
{"golden_diff": "diff --git a/InvenTree/InvenTree/fields.py b/InvenTree/InvenTree/fields.py\n--- a/InvenTree/InvenTree/fields.py\n+++ b/InvenTree/InvenTree/fields.py\n@@ -4,7 +4,6 @@\n from decimal import Decimal\n \n from django import forms\n-from django.core import validators\n from django.db import models as models\n from django.utils.translation import gettext_lazy as _\n \n@@ -15,7 +14,7 @@\n \n import InvenTree.helpers\n \n-from .validators import allowable_url_schemes\n+from .validators import AllowedURLValidator, allowable_url_schemes\n \n \n class InvenTreeRestURLField(RestURLField):\n@@ -34,7 +33,7 @@\n class InvenTreeURLField(models.URLField):\n \"\"\"Custom URL field which has custom scheme validators.\"\"\"\n \n- default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n+ default_validators = [AllowedURLValidator()]\n \n def __init__(self, **kwargs):\n \"\"\"Initialization method for InvenTreeURLField\"\"\"\ndiff --git a/InvenTree/InvenTree/validators.py b/InvenTree/InvenTree/validators.py\n--- a/InvenTree/InvenTree/validators.py\n+++ b/InvenTree/InvenTree/validators.py\n@@ -4,6 +4,7 @@\n from decimal import Decimal, InvalidOperation\n \n from django.conf import settings\n+from django.core import validators\n from django.core.exceptions import FieldDoesNotExist, ValidationError\n from django.utils.translation import gettext_lazy as _\n \n@@ -37,6 +38,14 @@\n return schemes\n \n \n+class AllowedURLValidator(validators.URLValidator):\n+ \"\"\"Custom URL validator to allow for custom schemes.\"\"\"\n+ def __call__(self, value):\n+ \"\"\"Validate the URL.\"\"\"\n+ self.schemes = allowable_url_schemes()\n+ super().__call__(value)\n+\n+\n def validate_part_name(value):\n \"\"\"Prevent some illegal characters in part names.\"\"\"\n for c in ['|', '#', '$', '{', '}']:\n", "issue": "[FR]Unit test for Schema allowance\n**Is your feature request the result of a bug?**\r\n#3414 #3411\r\n\r\n**Problem**\r\nCustom schema settings are not e2e or unit-tested in the API.\r\n\r\n**Suggested solution**\r\nAdd tests\r\n\r\n**Describe alternatives you've considered**\r\nN/A\r\n\r\n**Examples of other systems**\r\nN/A\r\n\r\n**Do you want to develop this?**\r\nYes\r\n\n", "before_files": [{"content": "\"\"\"Custom fields used in InvenTree.\"\"\"\n\nimport sys\nfrom decimal import Decimal\n\nfrom django import forms\nfrom django.core import validators\nfrom django.db import models as models\nfrom django.utils.translation import gettext_lazy as _\n\nfrom djmoney.forms.fields import MoneyField\nfrom djmoney.models.fields import MoneyField as ModelMoneyField\nfrom djmoney.models.validators import MinMoneyValidator\nfrom rest_framework.fields import URLField as RestURLField\n\nimport InvenTree.helpers\n\nfrom .validators import allowable_url_schemes\n\n\nclass InvenTreeRestURLField(RestURLField):\n \"\"\"Custom field for DRF with custom scheme vaildators.\"\"\"\n def __init__(self, **kwargs):\n \"\"\"Update schemes.\"\"\"\n\n # Enforce 'max length' parameter in form validation\n if 'max_length' not in kwargs:\n kwargs['max_length'] = 200\n\n super().__init__(**kwargs)\n self.validators[-1].schemes = allowable_url_schemes()\n\n\nclass InvenTreeURLField(models.URLField):\n \"\"\"Custom URL field which has custom scheme validators.\"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n def __init__(self, **kwargs):\n \"\"\"Initialization method for InvenTreeURLField\"\"\"\n\n # Max length for InvenTreeURLField defaults to 200\n if 'max_length' 
not in kwargs:\n kwargs['max_length'] = 200\n\n super().__init__(**kwargs)\n\n\ndef money_kwargs():\n \"\"\"Returns the database settings for MoneyFields.\"\"\"\n from common.settings import currency_code_default, currency_code_mappings\n\n kwargs = {}\n kwargs['currency_choices'] = currency_code_mappings()\n kwargs['default_currency'] = currency_code_default()\n return kwargs\n\n\nclass InvenTreeModelMoneyField(ModelMoneyField):\n \"\"\"Custom MoneyField for clean migrations while using dynamic currency settings.\"\"\"\n\n def __init__(self, **kwargs):\n \"\"\"Overwrite default values and validators.\"\"\"\n # detect if creating migration\n if 'migrate' in sys.argv or 'makemigrations' in sys.argv:\n # remove currency information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n else:\n # set defaults\n kwargs.update(money_kwargs())\n\n # Set a minimum value validator\n validators = kwargs.get('validators', [])\n\n allow_negative = kwargs.pop('allow_negative', False)\n\n # If no validators are provided, add some \"standard\" ones\n if len(validators) == 0:\n\n if not allow_negative:\n validators.append(\n MinMoneyValidator(0),\n )\n\n kwargs['validators'] = validators\n\n super().__init__(**kwargs)\n\n def formfield(self, **kwargs):\n \"\"\"Override form class to use own function.\"\"\"\n kwargs['form_class'] = InvenTreeMoneyField\n return super().formfield(**kwargs)\n\n\nclass InvenTreeMoneyField(MoneyField):\n \"\"\"Custom MoneyField for clean migrations while using dynamic currency settings.\"\"\"\n def __init__(self, *args, **kwargs):\n \"\"\"Override initial values with the real info from database.\"\"\"\n kwargs.update(money_kwargs())\n super().__init__(*args, **kwargs)\n\n\nclass DatePickerFormField(forms.DateField):\n \"\"\"Custom date-picker field.\"\"\"\n\n def __init__(self, **kwargs):\n \"\"\"Set up custom values.\"\"\"\n help_text = kwargs.get('help_text', _('Enter date'))\n label = kwargs.get('label', None)\n required = kwargs.get('required', False)\n initial = kwargs.get('initial', None)\n\n widget = forms.DateInput(\n attrs={\n 'type': 'date',\n }\n )\n\n forms.DateField.__init__(\n self,\n required=required,\n initial=initial,\n help_text=help_text,\n widget=widget,\n label=label\n )\n\n\ndef round_decimal(value, places):\n \"\"\"Round value to the specified number of places.\"\"\"\n if value is not None:\n # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options\n return value.quantize(Decimal(10) ** -places)\n return value\n\n\nclass RoundingDecimalFormField(forms.DecimalField):\n \"\"\"Custom FormField that automatically rounds inputs.\"\"\"\n\n def to_python(self, value):\n \"\"\"Convert value to python type.\"\"\"\n value = super().to_python(value)\n value = round_decimal(value, self.decimal_places)\n return value\n\n def prepare_value(self, value):\n \"\"\"Override the 'prepare_value' method, to remove trailing zeros when displaying.\n\n Why? 
It looks nice!\n \"\"\"\n if type(value) == Decimal:\n return InvenTree.helpers.normalize(value)\n else:\n return value\n\n\nclass RoundingDecimalField(models.DecimalField):\n \"\"\"Custom Field that automatically rounds inputs.\"\"\"\n\n def to_python(self, value):\n \"\"\"Convert value to python type.\"\"\"\n value = super().to_python(value)\n return round_decimal(value, self.decimal_places)\n\n def formfield(self, **kwargs):\n \"\"\"Return a Field instance for this field.\"\"\"\n\n kwargs['form_class'] = RoundingDecimalFormField\n\n return super().formfield(**kwargs)\n\n\nclass InvenTreeNotesField(models.TextField):\n \"\"\"Custom implementation of a 'notes' field\"\"\"\n\n # Maximum character limit for the various 'notes' fields\n NOTES_MAX_LENGTH = 50000\n\n def __init__(self, **kwargs):\n \"\"\"Configure default initial values for this field\"\"\"\n kwargs['max_length'] = self.NOTES_MAX_LENGTH\n kwargs['verbose_name'] = _('Notes')\n kwargs['blank'] = True\n kwargs['null'] = True\n\n super().__init__(**kwargs)\n", "path": "InvenTree/InvenTree/fields.py"}, {"content": "\"\"\"Custom field validators for InvenTree.\"\"\"\n\nimport re\nfrom decimal import Decimal, InvalidOperation\n\nfrom django.conf import settings\nfrom django.core.exceptions import FieldDoesNotExist, ValidationError\nfrom django.utils.translation import gettext_lazy as _\n\nfrom moneyed import CURRENCIES\n\nimport common.models\n\n\ndef validate_currency_code(code):\n \"\"\"Check that a given code is a valid currency code.\"\"\"\n if code not in CURRENCIES:\n raise ValidationError(_('Not a valid currency code'))\n\n\ndef allowable_url_schemes():\n \"\"\"Return the list of allowable URL schemes.\n\n In addition to the default schemes allowed by Django,\n the install configuration file (config.yaml) can specify\n extra schemas\n \"\"\"\n # Default schemes\n schemes = ['http', 'https', 'ftp', 'ftps']\n\n extra = settings.EXTRA_URL_SCHEMES\n\n for e in extra:\n if e.lower() not in schemes:\n schemes.append(e.lower())\n\n return schemes\n\n\ndef validate_part_name(value):\n \"\"\"Prevent some illegal characters in part names.\"\"\"\n for c in ['|', '#', '$', '{', '}']:\n if c in str(value):\n raise ValidationError(\n _('Invalid character in part name')\n )\n\n\ndef validate_part_ipn(value):\n \"\"\"Validate the Part IPN against regex rule.\"\"\"\n pattern = common.models.InvenTreeSetting.get_setting('PART_IPN_REGEX')\n\n if pattern:\n match = re.search(pattern, value)\n\n if match is None:\n raise ValidationError(_('IPN must match regex pattern {pat}').format(pat=pattern))\n\n\ndef validate_purchase_order_reference(value):\n \"\"\"Validate the 'reference' field of a PurchaseOrder.\"\"\"\n pattern = common.models.InvenTreeSetting.get_setting('PURCHASEORDER_REFERENCE_REGEX')\n\n if pattern:\n match = re.search(pattern, value)\n\n if match is None:\n raise ValidationError(_('Reference must match pattern {pattern}').format(pattern=pattern))\n\n\ndef validate_sales_order_reference(value):\n \"\"\"Validate the 'reference' field of a SalesOrder.\"\"\"\n pattern = common.models.InvenTreeSetting.get_setting('SALESORDER_REFERENCE_REGEX')\n\n if pattern:\n match = re.search(pattern, value)\n\n if match is None:\n raise ValidationError(_('Reference must match pattern {pattern}').format(pattern=pattern))\n\n\ndef validate_tree_name(value):\n \"\"\"Placeholder for legacy function used in migrations.\"\"\"\n\n\ndef validate_overage(value):\n \"\"\"Validate that a BOM overage string is properly formatted.\n\n An overage string can look 
like:\n\n - An integer number ('1' / 3 / 4)\n - A decimal number ('0.123')\n - A percentage ('5%' / '10 %')\n \"\"\"\n value = str(value).lower().strip()\n\n # First look for a simple numerical value\n try:\n i = Decimal(value)\n\n if i < 0:\n raise ValidationError(_(\"Overage value must not be negative\"))\n\n # Looks like a number\n return True\n except (ValueError, InvalidOperation):\n pass\n\n # Now look for a percentage value\n if value.endswith('%'):\n v = value[:-1].strip()\n\n # Does it look like a number?\n try:\n f = float(v)\n\n if f < 0:\n raise ValidationError(_(\"Overage value must not be negative\"))\n elif f > 100:\n raise ValidationError(_(\"Overage must not exceed 100%\"))\n\n return True\n except ValueError:\n pass\n\n raise ValidationError(\n _(\"Invalid value for overage\")\n )\n\n\ndef validate_part_name_format(self):\n \"\"\"Validate part name format.\n\n Make sure that each template container has a field of Part Model\n \"\"\"\n jinja_template_regex = re.compile('{{.*?}}')\n field_name_regex = re.compile('(?<=part\\\\.)[A-z]+')\n for jinja_template in jinja_template_regex.findall(str(self)):\n # make sure at least one and only one field is present inside the parser\n field_names = field_name_regex.findall(jinja_template)\n if len(field_names) < 1:\n raise ValidationError({\n 'value': 'At least one field must be present inside a jinja template container i.e {{}}'\n })\n\n # Make sure that the field_name exists in Part model\n from part.models import Part\n\n for field_name in field_names:\n try:\n Part._meta.get_field(field_name)\n except FieldDoesNotExist:\n raise ValidationError({\n 'value': f'{field_name} does not exist in Part Model'\n })\n\n return True\n", "path": "InvenTree/InvenTree/validators.py"}], "after_files": [{"content": "\"\"\"Custom fields used in InvenTree.\"\"\"\n\nimport sys\nfrom decimal import Decimal\n\nfrom django import forms\nfrom django.db import models as models\nfrom django.utils.translation import gettext_lazy as _\n\nfrom djmoney.forms.fields import MoneyField\nfrom djmoney.models.fields import MoneyField as ModelMoneyField\nfrom djmoney.models.validators import MinMoneyValidator\nfrom rest_framework.fields import URLField as RestURLField\n\nimport InvenTree.helpers\n\nfrom .validators import AllowedURLValidator, allowable_url_schemes\n\n\nclass InvenTreeRestURLField(RestURLField):\n \"\"\"Custom field for DRF with custom scheme vaildators.\"\"\"\n def __init__(self, **kwargs):\n \"\"\"Update schemes.\"\"\"\n\n # Enforce 'max length' parameter in form validation\n if 'max_length' not in kwargs:\n kwargs['max_length'] = 200\n\n super().__init__(**kwargs)\n self.validators[-1].schemes = allowable_url_schemes()\n\n\nclass InvenTreeURLField(models.URLField):\n \"\"\"Custom URL field which has custom scheme validators.\"\"\"\n\n default_validators = [AllowedURLValidator()]\n\n def __init__(self, **kwargs):\n \"\"\"Initialization method for InvenTreeURLField\"\"\"\n\n # Max length for InvenTreeURLField defaults to 200\n if 'max_length' not in kwargs:\n kwargs['max_length'] = 200\n\n super().__init__(**kwargs)\n\n\ndef money_kwargs():\n \"\"\"Returns the database settings for MoneyFields.\"\"\"\n from common.settings import currency_code_default, currency_code_mappings\n\n kwargs = {}\n kwargs['currency_choices'] = currency_code_mappings()\n kwargs['default_currency'] = currency_code_default()\n return kwargs\n\n\nclass InvenTreeModelMoneyField(ModelMoneyField):\n \"\"\"Custom MoneyField for clean migrations while using dynamic currency 
settings.\"\"\"\n\n def __init__(self, **kwargs):\n \"\"\"Overwrite default values and validators.\"\"\"\n # detect if creating migration\n if 'migrate' in sys.argv or 'makemigrations' in sys.argv:\n # remove currency information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n else:\n # set defaults\n kwargs.update(money_kwargs())\n\n # Set a minimum value validator\n validators = kwargs.get('validators', [])\n\n allow_negative = kwargs.pop('allow_negative', False)\n\n # If no validators are provided, add some \"standard\" ones\n if len(validators) == 0:\n\n if not allow_negative:\n validators.append(\n MinMoneyValidator(0),\n )\n\n kwargs['validators'] = validators\n\n super().__init__(**kwargs)\n\n def formfield(self, **kwargs):\n \"\"\"Override form class to use own function.\"\"\"\n kwargs['form_class'] = InvenTreeMoneyField\n return super().formfield(**kwargs)\n\n\nclass InvenTreeMoneyField(MoneyField):\n \"\"\"Custom MoneyField for clean migrations while using dynamic currency settings.\"\"\"\n def __init__(self, *args, **kwargs):\n \"\"\"Override initial values with the real info from database.\"\"\"\n kwargs.update(money_kwargs())\n super().__init__(*args, **kwargs)\n\n\nclass DatePickerFormField(forms.DateField):\n \"\"\"Custom date-picker field.\"\"\"\n\n def __init__(self, **kwargs):\n \"\"\"Set up custom values.\"\"\"\n help_text = kwargs.get('help_text', _('Enter date'))\n label = kwargs.get('label', None)\n required = kwargs.get('required', False)\n initial = kwargs.get('initial', None)\n\n widget = forms.DateInput(\n attrs={\n 'type': 'date',\n }\n )\n\n forms.DateField.__init__(\n self,\n required=required,\n initial=initial,\n help_text=help_text,\n widget=widget,\n label=label\n )\n\n\ndef round_decimal(value, places):\n \"\"\"Round value to the specified number of places.\"\"\"\n if value is not None:\n # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options\n return value.quantize(Decimal(10) ** -places)\n return value\n\n\nclass RoundingDecimalFormField(forms.DecimalField):\n \"\"\"Custom FormField that automatically rounds inputs.\"\"\"\n\n def to_python(self, value):\n \"\"\"Convert value to python type.\"\"\"\n value = super().to_python(value)\n value = round_decimal(value, self.decimal_places)\n return value\n\n def prepare_value(self, value):\n \"\"\"Override the 'prepare_value' method, to remove trailing zeros when displaying.\n\n Why? 
It looks nice!\n \"\"\"\n if type(value) == Decimal:\n return InvenTree.helpers.normalize(value)\n else:\n return value\n\n\nclass RoundingDecimalField(models.DecimalField):\n \"\"\"Custom Field that automatically rounds inputs.\"\"\"\n\n def to_python(self, value):\n \"\"\"Convert value to python type.\"\"\"\n value = super().to_python(value)\n return round_decimal(value, self.decimal_places)\n\n def formfield(self, **kwargs):\n \"\"\"Return a Field instance for this field.\"\"\"\n\n kwargs['form_class'] = RoundingDecimalFormField\n\n return super().formfield(**kwargs)\n\n\nclass InvenTreeNotesField(models.TextField):\n \"\"\"Custom implementation of a 'notes' field\"\"\"\n\n # Maximum character limit for the various 'notes' fields\n NOTES_MAX_LENGTH = 50000\n\n def __init__(self, **kwargs):\n \"\"\"Configure default initial values for this field\"\"\"\n kwargs['max_length'] = self.NOTES_MAX_LENGTH\n kwargs['verbose_name'] = _('Notes')\n kwargs['blank'] = True\n kwargs['null'] = True\n\n super().__init__(**kwargs)\n", "path": "InvenTree/InvenTree/fields.py"}, {"content": "\"\"\"Custom field validators for InvenTree.\"\"\"\n\nimport re\nfrom decimal import Decimal, InvalidOperation\n\nfrom django.conf import settings\nfrom django.core import validators\nfrom django.core.exceptions import FieldDoesNotExist, ValidationError\nfrom django.utils.translation import gettext_lazy as _\n\nfrom moneyed import CURRENCIES\n\nimport common.models\n\n\ndef validate_currency_code(code):\n \"\"\"Check that a given code is a valid currency code.\"\"\"\n if code not in CURRENCIES:\n raise ValidationError(_('Not a valid currency code'))\n\n\ndef allowable_url_schemes():\n \"\"\"Return the list of allowable URL schemes.\n\n In addition to the default schemes allowed by Django,\n the install configuration file (config.yaml) can specify\n extra schemas\n \"\"\"\n # Default schemes\n schemes = ['http', 'https', 'ftp', 'ftps']\n\n extra = settings.EXTRA_URL_SCHEMES\n\n for e in extra:\n if e.lower() not in schemes:\n schemes.append(e.lower())\n\n return schemes\n\n\nclass AllowedURLValidator(validators.URLValidator):\n \"\"\"Custom URL validator to allow for custom schemes.\"\"\"\n def __call__(self, value):\n \"\"\"Validate the URL.\"\"\"\n self.schemes = allowable_url_schemes()\n super().__call__(value)\n\n\ndef validate_part_name(value):\n \"\"\"Prevent some illegal characters in part names.\"\"\"\n for c in ['|', '#', '$', '{', '}']:\n if c in str(value):\n raise ValidationError(\n _('Invalid character in part name')\n )\n\n\ndef validate_part_ipn(value):\n \"\"\"Validate the Part IPN against regex rule.\"\"\"\n pattern = common.models.InvenTreeSetting.get_setting('PART_IPN_REGEX')\n\n if pattern:\n match = re.search(pattern, value)\n\n if match is None:\n raise ValidationError(_('IPN must match regex pattern {pat}').format(pat=pattern))\n\n\ndef validate_purchase_order_reference(value):\n \"\"\"Validate the 'reference' field of a PurchaseOrder.\"\"\"\n pattern = common.models.InvenTreeSetting.get_setting('PURCHASEORDER_REFERENCE_REGEX')\n\n if pattern:\n match = re.search(pattern, value)\n\n if match is None:\n raise ValidationError(_('Reference must match pattern {pattern}').format(pattern=pattern))\n\n\ndef validate_sales_order_reference(value):\n \"\"\"Validate the 'reference' field of a SalesOrder.\"\"\"\n pattern = common.models.InvenTreeSetting.get_setting('SALESORDER_REFERENCE_REGEX')\n\n if pattern:\n match = re.search(pattern, value)\n\n if match is None:\n raise ValidationError(_('Reference must 
match pattern {pattern}').format(pattern=pattern))\n\n\ndef validate_tree_name(value):\n \"\"\"Placeholder for legacy function used in migrations.\"\"\"\n\n\ndef validate_overage(value):\n \"\"\"Validate that a BOM overage string is properly formatted.\n\n An overage string can look like:\n\n - An integer number ('1' / 3 / 4)\n - A decimal number ('0.123')\n - A percentage ('5%' / '10 %')\n \"\"\"\n value = str(value).lower().strip()\n\n # First look for a simple numerical value\n try:\n i = Decimal(value)\n\n if i < 0:\n raise ValidationError(_(\"Overage value must not be negative\"))\n\n # Looks like a number\n return True\n except (ValueError, InvalidOperation):\n pass\n\n # Now look for a percentage value\n if value.endswith('%'):\n v = value[:-1].strip()\n\n # Does it look like a number?\n try:\n f = float(v)\n\n if f < 0:\n raise ValidationError(_(\"Overage value must not be negative\"))\n elif f > 100:\n raise ValidationError(_(\"Overage must not exceed 100%\"))\n\n return True\n except ValueError:\n pass\n\n raise ValidationError(\n _(\"Invalid value for overage\")\n )\n\n\ndef validate_part_name_format(self):\n \"\"\"Validate part name format.\n\n Make sure that each template container has a field of Part Model\n \"\"\"\n jinja_template_regex = re.compile('{{.*?}}')\n field_name_regex = re.compile('(?<=part\\\\.)[A-z]+')\n for jinja_template in jinja_template_regex.findall(str(self)):\n # make sure at least one and only one field is present inside the parser\n field_names = field_name_regex.findall(jinja_template)\n if len(field_names) < 1:\n raise ValidationError({\n 'value': 'At least one field must be present inside a jinja template container i.e {{}}'\n })\n\n # Make sure that the field_name exists in Part model\n from part.models import Part\n\n for field_name in field_names:\n try:\n Part._meta.get_field(field_name)\n except FieldDoesNotExist:\n raise ValidationError({\n 'value': f'{field_name} does not exist in Part Model'\n })\n\n return True\n", "path": "InvenTree/InvenTree/validators.py"}]}
| 3,446 | 460 |
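The `after_files` payload in the row above introduces an `AllowedURLValidator` that re-resolves its allowed schemes each time it runs. A minimal sketch of that subclassing pattern follows; the hard-coded scheme list stands in for InvenTree's `allowable_url_schemes()` lookup and is only there to keep the example self-contained.

```python
from django.core import validators


class AllowedURLValidator(validators.URLValidator):
    """URL validator that refreshes its allowed schemes on every call."""

    def __call__(self, value):
        # InvenTree reads this list from allowable_url_schemes() at call time;
        # a fixed list is used here purely for illustration.
        self.schemes = ['http', 'https', 'ftp', 'ftps']
        super().__call__(value)


# Usage: raises django.core.exceptions.ValidationError for a disallowed scheme.
AllowedURLValidator()('https://example.com')
```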
gh_patches_debug_4138 | rasdani/github-patches | git_diff | google__turbinia-793 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
path_spec.parent is None exception
```
FsstatTask Task failed with exception: ['NoneType' object has no attribute 'type_indicator']
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/turbinia-20210208.4.dev19+g7c02e5e-py3.6.egg/turbinia/workers/__init__.py", line 881, in run_wrapper
self.evidence_setup(evidence)
File "/usr/local/lib/python3.6/dist-packages/turbinia-20210208.4.dev19+g7c02e5e-py3.6.egg/turbinia/workers/__init__.py", line 499, in evidence_setup
evidence.preprocess(self.tmp_dir, required_states=self.REQUIRED_STATES)
File "/usr/local/lib/python3.6/dist-packages/turbinia-20210208.4.dev19+g7c02e5e-py3.6.egg/turbinia/evidence.py", line 341, in preprocess
self._preprocess(tmp_dir, required_states)
File "/usr/local/lib/python3.6/dist-packages/turbinia-20210208.4.dev19+g7c02e5e-py3.6.egg/turbinia/evidence.py", line 575, in _preprocess
encryption_type = partitions.GetPartitionEncryptionType(path_spec)
File "/usr/local/lib/python3.6/dist-packages/turbinia-20210208.4.dev19+g7c02e5e-py3.6.egg/turbinia/processors/partitions.py", line 66, in GetPartitionEncryptionType
if path_spec.parent.type_indicator == dfvfs_definitions.TYPE_INDICATOR_BDE:
AttributeError: 'NoneType' object has no attribute 'type_indicator'
No state_manager initialized, not updating Task info
Trying last ditch attempt to close result
Task Result was auto-closed from task executor on turbinia-worker-6eaf93e7aee7c9cf-0 likely due to previous failures. Previous status: [FsstatTask Task failed with exception: ['NoneType' object has no attribute 'type_indicator']]
Task Result was auto-closed from task executor on turbinia-worker-6eaf93e7aee7c9cf-0 likely due to previous failures. Previous status: [FsstatTask Task failed with exception: ['NoneType' object has no attribute 'type_indicator']]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `turbinia/processors/partitions.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2021 Google LLC
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # https://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Evidence processor to enumerate partitions."""
16
17 import logging
18
19 from dfvfs.helpers import volume_scanner
20 from dfvfs.lib import definitions as dfvfs_definitions
21 from dfvfs.lib import errors as dfvfs_errors
22
23 from turbinia.lib.dfvfs_classes import UnattendedVolumeScannerMediator
24 from turbinia import TurbiniaException
25
26 log = logging.getLogger('turbinia')
27
28
29 def Enumerate(evidence):
30 """Uses dfVFS to enumerate partitions in a disk / image.
31
32 Args:
33 evidence: Evidence object to be scanned.
34
35 Raises:
36 TurbiniaException if source evidence can't be scanned.
37
38 Returns:
39 list[dfVFS.path_spec]: path specs for identified partitions
40 """
41 dfvfs_definitions.PREFERRED_GPT_BACK_END = (
42 dfvfs_definitions.TYPE_INDICATOR_GPT)
43 mediator = UnattendedVolumeScannerMediator()
44 mediator.credentials = evidence.credentials
45 path_specs = []
46 try:
47 scanner = volume_scanner.VolumeScanner(mediator=mediator)
48 path_specs = scanner.GetBasePathSpecs(evidence.local_path)
49 except dfvfs_errors.ScannerError as e:
50 raise TurbiniaException(
51 'Could not enumerate partitions [{0!s}]: {1!s}'.format(
52 evidence.local_path, e))
53
54 return path_specs
55
56
57 def GetPartitionEncryptionType(path_spec):
58 """Checks a partition for encryption.
59
60 Args:
61 path_spec (dfVFS.path_spec): Partition path_spec.
62
63 Returns:
64 String representing the type of encryption, or None.
65 """
66 encryption_type = None
67 if path_spec.parent.type_indicator == dfvfs_definitions.TYPE_INDICATOR_BDE:
68 encryption_type = 'BDE'
69 return encryption_type
70
71
72 def GetPathSpecByLocation(path_specs, location):
73 """Finds a path_spec from a list of path_specs for a given location.
74
75 Args:
76 path_specs (list[dfVFS.path_spec]): List of path_specs from volume scanner.
77 location (str): dfVFS location to search for.
78
79 Returns:
80 dfVFS.path_spec for the given location or None if not found.
81 """
82 for path_spec in path_specs:
83 child_path_spec = path_spec
84 fs_location = getattr(path_spec, 'location', None)
85 while path_spec.HasParent():
86 type_indicator = path_spec.type_indicator
87 if type_indicator in (dfvfs_definitions.TYPE_INDICATOR_TSK_PARTITION,
88 dfvfs_definitions.TYPE_INDICATOR_GPT):
89 if fs_location in ('\\', '/'):
90 fs_location = getattr(path_spec, 'location', None)
91 break
92 path_spec = path_spec.parent
93 if fs_location == location:
94 return child_path_spec
95 return None
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/turbinia/processors/partitions.py b/turbinia/processors/partitions.py
--- a/turbinia/processors/partitions.py
+++ b/turbinia/processors/partitions.py
@@ -64,6 +64,10 @@
String representing the type of encryption, or None.
"""
encryption_type = None
+
+ if not path_spec.HasParent():
+ return None
+
if path_spec.parent.type_indicator == dfvfs_definitions.TYPE_INDICATOR_BDE:
encryption_type = 'BDE'
return encryption_type
|
{"golden_diff": "diff --git a/turbinia/processors/partitions.py b/turbinia/processors/partitions.py\n--- a/turbinia/processors/partitions.py\n+++ b/turbinia/processors/partitions.py\n@@ -64,6 +64,10 @@\n String representing the type of encryption, or None.\n \"\"\"\n encryption_type = None\n+\n+ if not path_spec.HasParent():\n+ return None\n+\n if path_spec.parent.type_indicator == dfvfs_definitions.TYPE_INDICATOR_BDE:\n encryption_type = 'BDE'\n return encryption_type\n", "issue": "path_spec.parent is None exception\n```\r\nFsstatTask Task failed with exception: ['NoneType' object has no attribute 'type_indicator']\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210208.4.dev19+g7c02e5e-py3.6.egg/turbinia/workers/__init__.py\", line 881, in run_wrapper\r\n self.evidence_setup(evidence)\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210208.4.dev19+g7c02e5e-py3.6.egg/turbinia/workers/__init__.py\", line 499, in evidence_setup\r\n evidence.preprocess(self.tmp_dir, required_states=self.REQUIRED_STATES)\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210208.4.dev19+g7c02e5e-py3.6.egg/turbinia/evidence.py\", line 341, in preprocess\r\n self._preprocess(tmp_dir, required_states)\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210208.4.dev19+g7c02e5e-py3.6.egg/turbinia/evidence.py\", line 575, in _preprocess\r\n encryption_type = partitions.GetPartitionEncryptionType(path_spec)\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210208.4.dev19+g7c02e5e-py3.6.egg/turbinia/processors/partitions.py\", line 66, in GetPartitionEncryptionType\r\n if path_spec.parent.type_indicator == dfvfs_definitions.TYPE_INDICATOR_BDE:\r\nAttributeError: 'NoneType' object has no attribute 'type_indicator'\r\n\r\nNo state_manager initialized, not updating Task info\r\nTrying last ditch attempt to close result\r\nTask Result was auto-closed from task executor on turbinia-worker-6eaf93e7aee7c9cf-0 likely due to previous failures. Previous status: [FsstatTask Task failed with exception: ['NoneType' object has no attribute 'type_indicator']]\r\nTask Result was auto-closed from task executor on turbinia-worker-6eaf93e7aee7c9cf-0 likely due to previous failures. 
Previous status: [FsstatTask Task failed with exception: ['NoneType' object has no attribute 'type_indicator']]\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Evidence processor to enumerate partitions.\"\"\"\n\nimport logging\n\nfrom dfvfs.helpers import volume_scanner\nfrom dfvfs.lib import definitions as dfvfs_definitions\nfrom dfvfs.lib import errors as dfvfs_errors\n\nfrom turbinia.lib.dfvfs_classes import UnattendedVolumeScannerMediator\nfrom turbinia import TurbiniaException\n\nlog = logging.getLogger('turbinia')\n\n\ndef Enumerate(evidence):\n \"\"\"Uses dfVFS to enumerate partitions in a disk / image.\n\n Args:\n evidence: Evidence object to be scanned.\n\n Raises:\n TurbiniaException if source evidence can't be scanned.\n\n Returns:\n list[dfVFS.path_spec]: path specs for identified partitions\n \"\"\"\n dfvfs_definitions.PREFERRED_GPT_BACK_END = (\n dfvfs_definitions.TYPE_INDICATOR_GPT)\n mediator = UnattendedVolumeScannerMediator()\n mediator.credentials = evidence.credentials\n path_specs = []\n try:\n scanner = volume_scanner.VolumeScanner(mediator=mediator)\n path_specs = scanner.GetBasePathSpecs(evidence.local_path)\n except dfvfs_errors.ScannerError as e:\n raise TurbiniaException(\n 'Could not enumerate partitions [{0!s}]: {1!s}'.format(\n evidence.local_path, e))\n\n return path_specs\n\n\ndef GetPartitionEncryptionType(path_spec):\n \"\"\"Checks a partition for encryption.\n\n Args:\n path_spec (dfVFS.path_spec): Partition path_spec.\n\n Returns:\n String representing the type of encryption, or None.\n \"\"\"\n encryption_type = None\n if path_spec.parent.type_indicator == dfvfs_definitions.TYPE_INDICATOR_BDE:\n encryption_type = 'BDE'\n return encryption_type\n\n\ndef GetPathSpecByLocation(path_specs, location):\n \"\"\"Finds a path_spec from a list of path_specs for a given location.\n\n Args:\n path_specs (list[dfVFS.path_spec]): List of path_specs from volume scanner.\n location (str): dfVFS location to search for.\n\n Returns:\n dfVFS.path_spec for the given location or None if not found.\n \"\"\"\n for path_spec in path_specs:\n child_path_spec = path_spec\n fs_location = getattr(path_spec, 'location', None)\n while path_spec.HasParent():\n type_indicator = path_spec.type_indicator\n if type_indicator in (dfvfs_definitions.TYPE_INDICATOR_TSK_PARTITION,\n dfvfs_definitions.TYPE_INDICATOR_GPT):\n if fs_location in ('\\\\', '/'):\n fs_location = getattr(path_spec, 'location', None)\n break\n path_spec = path_spec.parent\n if fs_location == location:\n return child_path_spec\n return None\n", "path": "turbinia/processors/partitions.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless 
required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Evidence processor to enumerate partitions.\"\"\"\n\nimport logging\n\nfrom dfvfs.helpers import volume_scanner\nfrom dfvfs.lib import definitions as dfvfs_definitions\nfrom dfvfs.lib import errors as dfvfs_errors\n\nfrom turbinia.lib.dfvfs_classes import UnattendedVolumeScannerMediator\nfrom turbinia import TurbiniaException\n\nlog = logging.getLogger('turbinia')\n\n\ndef Enumerate(evidence):\n \"\"\"Uses dfVFS to enumerate partitions in a disk / image.\n\n Args:\n evidence: Evidence object to be scanned.\n\n Raises:\n TurbiniaException if source evidence can't be scanned.\n\n Returns:\n list[dfVFS.path_spec]: path specs for identified partitions\n \"\"\"\n dfvfs_definitions.PREFERRED_GPT_BACK_END = (\n dfvfs_definitions.TYPE_INDICATOR_GPT)\n mediator = UnattendedVolumeScannerMediator()\n mediator.credentials = evidence.credentials\n path_specs = []\n try:\n scanner = volume_scanner.VolumeScanner(mediator=mediator)\n path_specs = scanner.GetBasePathSpecs(evidence.local_path)\n except dfvfs_errors.ScannerError as e:\n raise TurbiniaException(\n 'Could not enumerate partitions [{0!s}]: {1!s}'.format(\n evidence.local_path, e))\n\n return path_specs\n\n\ndef GetPartitionEncryptionType(path_spec):\n \"\"\"Checks a partition for encryption.\n\n Args:\n path_spec (dfVFS.path_spec): Partition path_spec.\n\n Returns:\n String representing the type of encryption, or None.\n \"\"\"\n encryption_type = None\n\n if not path_spec.HasParent():\n return None\n\n if path_spec.parent.type_indicator == dfvfs_definitions.TYPE_INDICATOR_BDE:\n encryption_type = 'BDE'\n return encryption_type\n\n\ndef GetPathSpecByLocation(path_specs, location):\n \"\"\"Finds a path_spec from a list of path_specs for a given location.\n\n Args:\n path_specs (list[dfVFS.path_spec]): List of path_specs from volume scanner.\n location (str): dfVFS location to search for.\n\n Returns:\n dfVFS.path_spec for the given location or None if not found.\n \"\"\"\n for path_spec in path_specs:\n child_path_spec = path_spec\n fs_location = getattr(path_spec, 'location', None)\n while path_spec.HasParent():\n type_indicator = path_spec.type_indicator\n if type_indicator in (dfvfs_definitions.TYPE_INDICATOR_TSK_PARTITION,\n dfvfs_definitions.TYPE_INDICATOR_GPT):\n if fs_location in ('\\\\', '/'):\n fs_location = getattr(path_spec, 'location', None)\n break\n path_spec = path_spec.parent\n if fs_location == location:\n return child_path_spec\n return None\n", "path": "turbinia/processors/partitions.py"}]}
| 1,753 | 127 |
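The golden diff above fixes the `AttributeError` by guarding the `path_spec.parent` access with `HasParent()`. A standalone sketch of that guard pattern is shown below; `FakePathSpec` is a hypothetical stand-in for a dfVFS path spec and is not part of the Turbinia codebase.

```python
# Illustration of the early-return guard added in the diff above.
TYPE_INDICATOR_BDE = 'BDE'


class FakePathSpec:
    """Hypothetical stand-in for a dfVFS path_spec object."""

    def __init__(self, parent=None, type_indicator=None):
        self.parent = parent
        self.type_indicator = type_indicator

    def HasParent(self):
        return self.parent is not None


def get_partition_encryption_type(path_spec):
    """Return 'BDE' for a BitLocker parent, otherwise None."""
    if not path_spec.HasParent():
        # Without this guard, path_spec.parent is None and the attribute
        # access below raises AttributeError, as in the reported traceback.
        return None
    if path_spec.parent.type_indicator == TYPE_INDICATOR_BDE:
        return 'BDE'
    return None


assert get_partition_encryption_type(FakePathSpec()) is None
assert get_partition_encryption_type(
    FakePathSpec(parent=FakePathSpec(type_indicator='BDE'))) == 'BDE'
```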
gh_patches_debug_40735 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2087 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Parsing error scanning terraform plan output
**Describe the bug**
Receiving a parsing error when scanning terraform plan output. parsing error is in the cloudformation scan results. the file being scanned contains no cloudformation code. the terraform_plan scan results are reporting well
**To Reproduce**
Steps to reproduce the behavior:
1. Go to a folder which contains the output of a terraform plan piped into a tf.json file
2. Run cli command checkov -f .\tf.json
3. See error in attached scanlog.txt
**Expected behavior**
Expect to be able to scan terraform plan output without encountering a parsing error
**Screenshots**
scanlog.txt
[scanlog.txt](https://github.com/bridgecrewio/checkov/files/7464351/scanlog.txt)
ts.json - terraform plan output being scanned
[tf.json.txt](https://github.com/bridgecrewio/checkov/files/7464354/tf.json.txt)
**Desktop (please complete the following information):**
- OS: Windows 10
- Checkov Version 2.0.536
**Additional context**
Add any other context about the problem here (e.g. code snippets).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/cloudformation/parser/cfn_yaml.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import logging
6 from yaml import MappingNode
7 from yaml import ScalarNode
8 from yaml import SequenceNode
9 from yaml.composer import Composer
10 from yaml.constructor import ConstructorError
11 from yaml.constructor import SafeConstructor
12 from yaml.reader import Reader
13 from yaml.resolver import Resolver
14 from yaml.scanner import Scanner
15
16 from checkov.common.parsers.node import StrNode, DictNode, ListNode
17
18 try:
19 from yaml.cyaml import CParser as Parser # pylint: disable=ungrouped-imports
20
21 cyaml = True
22 except ImportError:
23 from yaml.parser import Parser # pylint: disable=ungrouped-imports
24
25 cyaml = False
26
27 UNCONVERTED_SUFFIXES = ['Ref', 'Condition']
28 FN_PREFIX = 'Fn::'
29
30 LOGGER = logging.getLogger(__name__)
31
32
33 class CfnParseError(ConstructorError):
34 """
35 Error thrown when the template contains Cfn Error
36 """
37
38 def __init__(self, filename, message, line_number, column_number, key=' '):
39 # Call the base class constructor with the parameters it needs
40 super(CfnParseError, self).__init__(message)
41
42 # Now for your custom code...
43 self.filename = filename
44 self.line_number = line_number
45 self.column_number = column_number
46 self.message = message
47
48
49 class NodeConstructor(SafeConstructor):
50 """
51 Node Constructors for loading different types in Yaml
52 """
53
54 def __init__(self, filename):
55 # Call the base class constructor
56 super(NodeConstructor, self).__init__()
57
58 self.filename = filename
59
60 # To support lazy loading, the original constructors first yield
61 # an empty object, then fill them in when iterated. Due to
62 # laziness we omit this behaviour (and will only do "deep
63 # construction") by first exhausting iterators, then yielding
64 # copies.
65 def construct_yaml_map(self, node):
66
67 # Check for duplicate keys on the current level, this is not desirable
68 # because a dict does not support this. It overwrites it with the last
69 # occurance, which can give unexpected results
70 mapping = {}
71 self.flatten_mapping(node)
72 for key_node, value_node in node.value:
73 key = self.construct_object(key_node, False)
74 value = self.construct_object(value_node, False)
75
76 if key in mapping:
77 raise CfnParseError(
78 self.filename,
79 'Duplicate resource found "{}" (line {})'.format(
80 key, key_node.start_mark.line + 1),
81 key_node.start_mark.line, key_node.start_mark.column, key)
82 mapping[key] = value
83
84 obj, = SafeConstructor.construct_yaml_map(self, node)
85 return DictNode(obj, node.start_mark, node.end_mark)
86
87 def construct_yaml_str(self, node):
88 obj = SafeConstructor.construct_yaml_str(self, node)
89 assert isinstance(obj, str) # nosec
90 return StrNode(obj, node.start_mark, node.end_mark)
91
92 def construct_yaml_seq(self, node):
93 obj, = SafeConstructor.construct_yaml_seq(self, node)
94 assert isinstance(obj, list) # nosec
95 return ListNode(obj, node.start_mark, node.end_mark) # nosec
96
97 def construct_yaml_null_error(self, node):
98 """Throw a null error"""
99 raise CfnParseError(
100 self.filename,
101 'Null value at line {0} column {1}'.format(
102 node.start_mark.line + 1, node.start_mark.column + 1),
103 node.start_mark.line, node.start_mark.column, ' ')
104
105
106 NodeConstructor.add_constructor(
107 u'tag:yaml.org,2002:map',
108 NodeConstructor.construct_yaml_map)
109
110 NodeConstructor.add_constructor(
111 u'tag:yaml.org,2002:str',
112 NodeConstructor.construct_yaml_str)
113
114 NodeConstructor.add_constructor(
115 u'tag:yaml.org,2002:seq',
116 NodeConstructor.construct_yaml_seq)
117
118 NodeConstructor.add_constructor(
119 u'tag:yaml.org,2002:null',
120 NodeConstructor.construct_yaml_null_error)
121
122
123 class MarkedLoader(Reader, Scanner, Parser, Composer, NodeConstructor, Resolver):
124 """
125 Class for marked loading YAML
126 """
127
128 # pylint: disable=non-parent-init-called,super-init-not-called
129
130 def __init__(self, stream, filename):
131 Reader.__init__(self, stream)
132 Scanner.__init__(self)
133 if cyaml:
134 Parser.__init__(self, stream)
135 else:
136 Parser.__init__(self)
137 Composer.__init__(self)
138 SafeConstructor.__init__(self)
139 Resolver.__init__(self)
140 NodeConstructor.__init__(self, filename)
141
142 def construct_mapping(self, node, deep=False):
143 mapping = super(MarkedLoader, self).construct_mapping(node, deep=deep)
144 # Add 1 so line numbering starts at 1
145 # mapping['__line__'] = node.start_mark.line + 1
146 mapping['__startline__'] = node.start_mark.line + 1
147 mapping['__endline__'] = node.end_mark.line + 1
148 return mapping
149
150
151 def multi_constructor(loader, tag_suffix, node):
152 """
153 Deal with !Ref style function format
154 """
155
156 if tag_suffix not in UNCONVERTED_SUFFIXES:
157 tag_suffix = '{}{}'.format(FN_PREFIX, tag_suffix)
158
159 constructor = None
160 if tag_suffix == 'Fn::GetAtt':
161 constructor = construct_getatt
162 elif isinstance(node, ScalarNode):
163 constructor = loader.construct_scalar
164 elif isinstance(node, SequenceNode):
165 constructor = loader.construct_sequence
166 elif isinstance(node, MappingNode):
167 constructor = loader.construct_mapping
168 else:
169 raise 'Bad tag: !{}'.format(tag_suffix)
170
171 return DictNode({tag_suffix: constructor(node)}, node.start_mark, node.end_mark)
172
173
174 def construct_getatt(node):
175 """
176 Reconstruct !GetAtt into a list
177 """
178
179 if isinstance(node.value, str):
180 return ListNode(node.value.split('.'), node.start_mark, node.end_mark)
181 if isinstance(node.value, list):
182 return ListNode([s.value for s in node.value], node.start_mark, node.end_mark)
183
184 raise ValueError('Unexpected node type: {}'.format(type(node.value)))
185
186
187 def loads(yaml_string, fname=None):
188 """
189 Load the given YAML string
190 """
191 loader = MarkedLoader(yaml_string, fname)
192 loader.add_multi_constructor('!', multi_constructor)
193
194 template = loader.get_single_data()
195 # Convert an empty file to an empty dict
196 if template is None:
197 template = {}
198
199 return template
200
201
202 def load(filename):
203 """
204 Load the given YAML file
205 """
206
207 content = ''
208
209 with open(filename) as fp:
210 content = fp.read()
211 fp.seek(0)
212 file_lines = [(ind + 1, line) for (ind, line) in
213 list(enumerate(fp.readlines()))]
214
215 return (loads(content, filename), file_lines)
216
```
Path: `checkov/cloudformation/parser/__init__.py`
Content:
```
1 import logging
2 from typing import Tuple, List, Union, Dict
3
4 from checkov.cloudformation.parser import cfn_yaml
5 from checkov.common.parsers.json import parse as json_parse
6 from checkov.common.parsers.node import DictNode
7 from checkov.cloudformation.parser.cfn_keywords import TemplateSections
8 from yaml.parser import ScannerError
9 from yaml import YAMLError
10
11 LOGGER = logging.getLogger(__name__)
12
13
14 def parse(filename: str, out_parsing_errors: Dict[str, str] = {}) -> Union[Tuple[DictNode, List[Tuple[int, str]]], Tuple[None, None]]:
15 """
16 Decode filename into an object
17 """
18 template = None
19 template_lines = None
20 error = None
21 try:
22 (template, template_lines) = cfn_yaml.load(filename)
23 except IOError as err:
24 if err.errno == 2:
25 error = f"Template file not found: {filename} - {err}"
26 LOGGER.error(error)
27 elif err.errno == 21:
28 error = f"Template references a directory, not a file: {filename} - {err}"
29 LOGGER.error(error)
30 elif err.errno == 13:
31 error = f"Permission denied when accessing template file: {filename} - {err}"
32 LOGGER.error(error)
33 except UnicodeDecodeError as err:
34 error = f"Cannot read file contents: {filename} - {err}"
35 LOGGER.error(error)
36 except cfn_yaml.CfnParseError as err:
37 error = f"Parsing error in file: {filename} - {err}"
38 LOGGER.info(error)
39 except ValueError as err:
40 error = f"Parsing error in file: {filename} - {err}"
41 LOGGER.info(error)
42 except ScannerError as err:
43 if err.problem in ["found character '\\t' that cannot start any token", "found unknown escape character"]:
44 try:
45 (template, template_lines) = json_parse(filename, allow_nulls=False)
46 except Exception as json_err: # pylint: disable=W0703
47 error = f"Template {filename} is malformed: {err.problem}. Tried to parse {filename} as JSON but got error: {json_err}"
48 LOGGER.info(error)
49 except YAMLError as err:
50 error = f"Parsing error in file: {filename} - {err}"
51 LOGGER.info(error)
52
53 if error:
54 out_parsing_errors[filename] = error
55
56 if isinstance(template, dict):
57 resources = template.get(TemplateSections.RESOURCES.value, None)
58 if resources:
59 if '__startline__' in resources:
60 del resources['__startline__']
61 if '__endline__' in resources:
62 del resources['__endline__']
63 return template, template_lines
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/cloudformation/parser/__init__.py b/checkov/cloudformation/parser/__init__.py
--- a/checkov/cloudformation/parser/__init__.py
+++ b/checkov/cloudformation/parser/__init__.py
@@ -1,5 +1,5 @@
import logging
-from typing import Tuple, List, Union, Dict
+from typing import Tuple, List, Union, Dict, Optional
from checkov.cloudformation.parser import cfn_yaml
from checkov.common.parsers.json import parse as json_parse
@@ -11,13 +11,19 @@
LOGGER = logging.getLogger(__name__)
-def parse(filename: str, out_parsing_errors: Dict[str, str] = {}) -> Union[Tuple[DictNode, List[Tuple[int, str]]], Tuple[None, None]]:
+def parse(
+ filename: str, out_parsing_errors: Optional[Dict[str, str]] = None
+) -> Union[Tuple[DictNode, List[Tuple[int, str]]], Tuple[None, None]]:
"""
- Decode filename into an object
+ Decode filename into an object
"""
template = None
template_lines = None
error = None
+
+ if out_parsing_errors is None:
+ out_parsing_errors = {}
+
try:
(template, template_lines) = cfn_yaml.load(filename)
except IOError as err:
@@ -34,6 +40,10 @@
error = f"Cannot read file contents: {filename} - {err}"
LOGGER.error(error)
except cfn_yaml.CfnParseError as err:
+ if "Null value at" in err.message:
+ LOGGER.info(f"Null values do not exist in CFN templates: {filename} - {err}")
+ return None, None
+
error = f"Parsing error in file: {filename} - {err}"
LOGGER.info(error)
except ValueError as err:
@@ -56,8 +66,8 @@
if isinstance(template, dict):
resources = template.get(TemplateSections.RESOURCES.value, None)
if resources:
- if '__startline__' in resources:
- del resources['__startline__']
- if '__endline__' in resources:
- del resources['__endline__']
+ if "__startline__" in resources:
+ del resources["__startline__"]
+ if "__endline__" in resources:
+ del resources["__endline__"]
return template, template_lines
diff --git a/checkov/cloudformation/parser/cfn_yaml.py b/checkov/cloudformation/parser/cfn_yaml.py
--- a/checkov/cloudformation/parser/cfn_yaml.py
+++ b/checkov/cloudformation/parser/cfn_yaml.py
@@ -3,6 +3,9 @@
SPDX-License-Identifier: MIT-0
"""
import logging
+from pathlib import Path
+from typing import List, Tuple
+
from yaml import MappingNode
from yaml import ScalarNode
from yaml import SequenceNode
@@ -199,17 +202,13 @@
return template
-def load(filename):
+def load(filename: str) -> Tuple[DictNode, List[Tuple[int, str]]]:
"""
Load the given YAML file
"""
- content = ''
-
- with open(filename) as fp:
- content = fp.read()
- fp.seek(0)
- file_lines = [(ind + 1, line) for (ind, line) in
- list(enumerate(fp.readlines()))]
+ file_path = Path(filename)
+ content = file_path.read_text()
+ file_lines = [(idx + 1, line) for idx, line in enumerate(content.splitlines(keepends=True))]
return (loads(content, filename), file_lines)
|
{"golden_diff": "diff --git a/checkov/cloudformation/parser/__init__.py b/checkov/cloudformation/parser/__init__.py\n--- a/checkov/cloudformation/parser/__init__.py\n+++ b/checkov/cloudformation/parser/__init__.py\n@@ -1,5 +1,5 @@\n import logging\n-from typing import Tuple, List, Union, Dict\n+from typing import Tuple, List, Union, Dict, Optional\n \n from checkov.cloudformation.parser import cfn_yaml\n from checkov.common.parsers.json import parse as json_parse\n@@ -11,13 +11,19 @@\n LOGGER = logging.getLogger(__name__)\n \n \n-def parse(filename: str, out_parsing_errors: Dict[str, str] = {}) -> Union[Tuple[DictNode, List[Tuple[int, str]]], Tuple[None, None]]:\n+def parse(\n+ filename: str, out_parsing_errors: Optional[Dict[str, str]] = None\n+) -> Union[Tuple[DictNode, List[Tuple[int, str]]], Tuple[None, None]]:\n \"\"\"\n- Decode filename into an object\n+ Decode filename into an object\n \"\"\"\n template = None\n template_lines = None\n error = None\n+\n+ if out_parsing_errors is None:\n+ out_parsing_errors = {}\n+\n try:\n (template, template_lines) = cfn_yaml.load(filename)\n except IOError as err:\n@@ -34,6 +40,10 @@\n error = f\"Cannot read file contents: {filename} - {err}\"\n LOGGER.error(error)\n except cfn_yaml.CfnParseError as err:\n+ if \"Null value at\" in err.message:\n+ LOGGER.info(f\"Null values do not exist in CFN templates: {filename} - {err}\")\n+ return None, None\n+\n error = f\"Parsing error in file: {filename} - {err}\"\n LOGGER.info(error)\n except ValueError as err:\n@@ -56,8 +66,8 @@\n if isinstance(template, dict):\n resources = template.get(TemplateSections.RESOURCES.value, None)\n if resources:\n- if '__startline__' in resources:\n- del resources['__startline__']\n- if '__endline__' in resources:\n- del resources['__endline__']\n+ if \"__startline__\" in resources:\n+ del resources[\"__startline__\"]\n+ if \"__endline__\" in resources:\n+ del resources[\"__endline__\"]\n return template, template_lines\ndiff --git a/checkov/cloudformation/parser/cfn_yaml.py b/checkov/cloudformation/parser/cfn_yaml.py\n--- a/checkov/cloudformation/parser/cfn_yaml.py\n+++ b/checkov/cloudformation/parser/cfn_yaml.py\n@@ -3,6 +3,9 @@\n SPDX-License-Identifier: MIT-0\n \"\"\"\n import logging\n+from pathlib import Path\n+from typing import List, Tuple\n+\n from yaml import MappingNode\n from yaml import ScalarNode\n from yaml import SequenceNode\n@@ -199,17 +202,13 @@\n return template\n \n \n-def load(filename):\n+def load(filename: str) -> Tuple[DictNode, List[Tuple[int, str]]]:\n \"\"\"\n Load the given YAML file\n \"\"\"\n \n- content = ''\n-\n- with open(filename) as fp:\n- content = fp.read()\n- fp.seek(0)\n- file_lines = [(ind + 1, line) for (ind, line) in\n- list(enumerate(fp.readlines()))]\n+ file_path = Path(filename)\n+ content = file_path.read_text()\n+ file_lines = [(idx + 1, line) for idx, line in enumerate(content.splitlines(keepends=True))]\n \n return (loads(content, filename), file_lines)\n", "issue": "Parsing error scanning terraform plan output\n**Describe the bug**\r\nReceiving a parsing error when scanning terraform plan output. parsing error is in the cloudformation scan results. the file being scanned contains no cloudformation code. the terraform_plan scan results are reporting well\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to a folder which contains the output of a terraform plan piped into a tf.json file\r\n2. Run cli command checkov -f .\\tf.json\r\n3. 
See error in attached scanlog.txt\r\n\r\n**Expected behavior**\r\nExpect to be able to scan terraform plan output without encountering a parsing error\r\n\r\n**Screenshots**\r\nscanlog.txt\r\n[scanlog.txt](https://github.com/bridgecrewio/checkov/files/7464351/scanlog.txt)\r\n\r\nts.json - terraform plan output being scanned\r\n[tf.json.txt](https://github.com/bridgecrewio/checkov/files/7464354/tf.json.txt)\r\n\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Windows 10\r\n - Checkov Version 2.0.536\r\n\r\n**Additional context**\r\nAdd any other context about the problem here (e.g. code snippets).\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport logging\nfrom yaml import MappingNode\nfrom yaml import ScalarNode\nfrom yaml import SequenceNode\nfrom yaml.composer import Composer\nfrom yaml.constructor import ConstructorError\nfrom yaml.constructor import SafeConstructor\nfrom yaml.reader import Reader\nfrom yaml.resolver import Resolver\nfrom yaml.scanner import Scanner\n\nfrom checkov.common.parsers.node import StrNode, DictNode, ListNode\n\ntry:\n from yaml.cyaml import CParser as Parser # pylint: disable=ungrouped-imports\n\n cyaml = True\nexcept ImportError:\n from yaml.parser import Parser # pylint: disable=ungrouped-imports\n\n cyaml = False\n\nUNCONVERTED_SUFFIXES = ['Ref', 'Condition']\nFN_PREFIX = 'Fn::'\n\nLOGGER = logging.getLogger(__name__)\n\n\nclass CfnParseError(ConstructorError):\n \"\"\"\n Error thrown when the template contains Cfn Error\n \"\"\"\n\n def __init__(self, filename, message, line_number, column_number, key=' '):\n # Call the base class constructor with the parameters it needs\n super(CfnParseError, self).__init__(message)\n\n # Now for your custom code...\n self.filename = filename\n self.line_number = line_number\n self.column_number = column_number\n self.message = message\n\n\nclass NodeConstructor(SafeConstructor):\n \"\"\"\n Node Constructors for loading different types in Yaml\n \"\"\"\n\n def __init__(self, filename):\n # Call the base class constructor\n super(NodeConstructor, self).__init__()\n\n self.filename = filename\n\n # To support lazy loading, the original constructors first yield\n # an empty object, then fill them in when iterated. Due to\n # laziness we omit this behaviour (and will only do \"deep\n # construction\") by first exhausting iterators, then yielding\n # copies.\n def construct_yaml_map(self, node):\n\n # Check for duplicate keys on the current level, this is not desirable\n # because a dict does not support this. 
It overwrites it with the last\n # occurance, which can give unexpected results\n mapping = {}\n self.flatten_mapping(node)\n for key_node, value_node in node.value:\n key = self.construct_object(key_node, False)\n value = self.construct_object(value_node, False)\n\n if key in mapping:\n raise CfnParseError(\n self.filename,\n 'Duplicate resource found \"{}\" (line {})'.format(\n key, key_node.start_mark.line + 1),\n key_node.start_mark.line, key_node.start_mark.column, key)\n mapping[key] = value\n\n obj, = SafeConstructor.construct_yaml_map(self, node)\n return DictNode(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_str(self, node):\n obj = SafeConstructor.construct_yaml_str(self, node)\n assert isinstance(obj, str) # nosec\n return StrNode(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_seq(self, node):\n obj, = SafeConstructor.construct_yaml_seq(self, node)\n assert isinstance(obj, list) # nosec\n return ListNode(obj, node.start_mark, node.end_mark) # nosec\n\n def construct_yaml_null_error(self, node):\n \"\"\"Throw a null error\"\"\"\n raise CfnParseError(\n self.filename,\n 'Null value at line {0} column {1}'.format(\n node.start_mark.line + 1, node.start_mark.column + 1),\n node.start_mark.line, node.start_mark.column, ' ')\n\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:map',\n NodeConstructor.construct_yaml_map)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:str',\n NodeConstructor.construct_yaml_str)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:seq',\n NodeConstructor.construct_yaml_seq)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:null',\n NodeConstructor.construct_yaml_null_error)\n\n\nclass MarkedLoader(Reader, Scanner, Parser, Composer, NodeConstructor, Resolver):\n \"\"\"\n Class for marked loading YAML\n \"\"\"\n\n # pylint: disable=non-parent-init-called,super-init-not-called\n\n def __init__(self, stream, filename):\n Reader.__init__(self, stream)\n Scanner.__init__(self)\n if cyaml:\n Parser.__init__(self, stream)\n else:\n Parser.__init__(self)\n Composer.__init__(self)\n SafeConstructor.__init__(self)\n Resolver.__init__(self)\n NodeConstructor.__init__(self, filename)\n\n def construct_mapping(self, node, deep=False):\n mapping = super(MarkedLoader, self).construct_mapping(node, deep=deep)\n # Add 1 so line numbering starts at 1\n # mapping['__line__'] = node.start_mark.line + 1\n mapping['__startline__'] = node.start_mark.line + 1\n mapping['__endline__'] = node.end_mark.line + 1\n return mapping\n\n\ndef multi_constructor(loader, tag_suffix, node):\n \"\"\"\n Deal with !Ref style function format\n \"\"\"\n\n if tag_suffix not in UNCONVERTED_SUFFIXES:\n tag_suffix = '{}{}'.format(FN_PREFIX, tag_suffix)\n\n constructor = None\n if tag_suffix == 'Fn::GetAtt':\n constructor = construct_getatt\n elif isinstance(node, ScalarNode):\n constructor = loader.construct_scalar\n elif isinstance(node, SequenceNode):\n constructor = loader.construct_sequence\n elif isinstance(node, MappingNode):\n constructor = loader.construct_mapping\n else:\n raise 'Bad tag: !{}'.format(tag_suffix)\n\n return DictNode({tag_suffix: constructor(node)}, node.start_mark, node.end_mark)\n\n\ndef construct_getatt(node):\n \"\"\"\n Reconstruct !GetAtt into a list\n \"\"\"\n\n if isinstance(node.value, str):\n return ListNode(node.value.split('.'), node.start_mark, node.end_mark)\n if isinstance(node.value, list):\n return ListNode([s.value for s in node.value], node.start_mark, node.end_mark)\n\n raise ValueError('Unexpected 
node type: {}'.format(type(node.value)))\n\n\ndef loads(yaml_string, fname=None):\n \"\"\"\n Load the given YAML string\n \"\"\"\n loader = MarkedLoader(yaml_string, fname)\n loader.add_multi_constructor('!', multi_constructor)\n\n template = loader.get_single_data()\n # Convert an empty file to an empty dict\n if template is None:\n template = {}\n\n return template\n\n\ndef load(filename):\n \"\"\"\n Load the given YAML file\n \"\"\"\n\n content = ''\n\n with open(filename) as fp:\n content = fp.read()\n fp.seek(0)\n file_lines = [(ind + 1, line) for (ind, line) in\n list(enumerate(fp.readlines()))]\n\n return (loads(content, filename), file_lines)\n", "path": "checkov/cloudformation/parser/cfn_yaml.py"}, {"content": "import logging\nfrom typing import Tuple, List, Union, Dict\n\nfrom checkov.cloudformation.parser import cfn_yaml\nfrom checkov.common.parsers.json import parse as json_parse\nfrom checkov.common.parsers.node import DictNode\nfrom checkov.cloudformation.parser.cfn_keywords import TemplateSections\nfrom yaml.parser import ScannerError\nfrom yaml import YAMLError\n\nLOGGER = logging.getLogger(__name__)\n\n\ndef parse(filename: str, out_parsing_errors: Dict[str, str] = {}) -> Union[Tuple[DictNode, List[Tuple[int, str]]], Tuple[None, None]]:\n \"\"\"\n Decode filename into an object\n \"\"\"\n template = None\n template_lines = None\n error = None\n try:\n (template, template_lines) = cfn_yaml.load(filename)\n except IOError as err:\n if err.errno == 2:\n error = f\"Template file not found: {filename} - {err}\"\n LOGGER.error(error)\n elif err.errno == 21:\n error = f\"Template references a directory, not a file: {filename} - {err}\"\n LOGGER.error(error)\n elif err.errno == 13:\n error = f\"Permission denied when accessing template file: {filename} - {err}\"\n LOGGER.error(error)\n except UnicodeDecodeError as err:\n error = f\"Cannot read file contents: {filename} - {err}\"\n LOGGER.error(error)\n except cfn_yaml.CfnParseError as err:\n error = f\"Parsing error in file: {filename} - {err}\"\n LOGGER.info(error)\n except ValueError as err:\n error = f\"Parsing error in file: {filename} - {err}\"\n LOGGER.info(error)\n except ScannerError as err:\n if err.problem in [\"found character '\\\\t' that cannot start any token\", \"found unknown escape character\"]:\n try:\n (template, template_lines) = json_parse(filename, allow_nulls=False)\n except Exception as json_err: # pylint: disable=W0703\n error = f\"Template {filename} is malformed: {err.problem}. Tried to parse {filename} as JSON but got error: {json_err}\"\n LOGGER.info(error)\n except YAMLError as err:\n error = f\"Parsing error in file: {filename} - {err}\"\n LOGGER.info(error)\n\n if error:\n out_parsing_errors[filename] = error\n\n if isinstance(template, dict):\n resources = template.get(TemplateSections.RESOURCES.value, None)\n if resources:\n if '__startline__' in resources:\n del resources['__startline__']\n if '__endline__' in resources:\n del resources['__endline__']\n return template, template_lines\n", "path": "checkov/cloudformation/parser/__init__.py"}], "after_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport logging\nfrom pathlib import Path\nfrom typing import List, Tuple\n\nfrom yaml import MappingNode\nfrom yaml import ScalarNode\nfrom yaml import SequenceNode\nfrom yaml.composer import Composer\nfrom yaml.constructor import ConstructorError\nfrom yaml.constructor import SafeConstructor\nfrom yaml.reader import Reader\nfrom yaml.resolver import Resolver\nfrom yaml.scanner import Scanner\n\nfrom checkov.common.parsers.node import StrNode, DictNode, ListNode\n\ntry:\n from yaml.cyaml import CParser as Parser # pylint: disable=ungrouped-imports\n\n cyaml = True\nexcept ImportError:\n from yaml.parser import Parser # pylint: disable=ungrouped-imports\n\n cyaml = False\n\nUNCONVERTED_SUFFIXES = ['Ref', 'Condition']\nFN_PREFIX = 'Fn::'\n\nLOGGER = logging.getLogger(__name__)\n\n\nclass CfnParseError(ConstructorError):\n \"\"\"\n Error thrown when the template contains Cfn Error\n \"\"\"\n\n def __init__(self, filename, message, line_number, column_number, key=' '):\n # Call the base class constructor with the parameters it needs\n super(CfnParseError, self).__init__(message)\n\n # Now for your custom code...\n self.filename = filename\n self.line_number = line_number\n self.column_number = column_number\n self.message = message\n\n\nclass NodeConstructor(SafeConstructor):\n \"\"\"\n Node Constructors for loading different types in Yaml\n \"\"\"\n\n def __init__(self, filename):\n # Call the base class constructor\n super(NodeConstructor, self).__init__()\n\n self.filename = filename\n\n # To support lazy loading, the original constructors first yield\n # an empty object, then fill them in when iterated. Due to\n # laziness we omit this behaviour (and will only do \"deep\n # construction\") by first exhausting iterators, then yielding\n # copies.\n def construct_yaml_map(self, node):\n\n # Check for duplicate keys on the current level, this is not desirable\n # because a dict does not support this. 
It overwrites it with the last\n # occurance, which can give unexpected results\n mapping = {}\n self.flatten_mapping(node)\n for key_node, value_node in node.value:\n key = self.construct_object(key_node, False)\n value = self.construct_object(value_node, False)\n\n if key in mapping:\n raise CfnParseError(\n self.filename,\n 'Duplicate resource found \"{}\" (line {})'.format(\n key, key_node.start_mark.line + 1),\n key_node.start_mark.line, key_node.start_mark.column, key)\n mapping[key] = value\n\n obj, = SafeConstructor.construct_yaml_map(self, node)\n return DictNode(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_str(self, node):\n obj = SafeConstructor.construct_yaml_str(self, node)\n assert isinstance(obj, str) # nosec\n return StrNode(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_seq(self, node):\n obj, = SafeConstructor.construct_yaml_seq(self, node)\n assert isinstance(obj, list) # nosec\n return ListNode(obj, node.start_mark, node.end_mark) # nosec\n\n def construct_yaml_null_error(self, node):\n \"\"\"Throw a null error\"\"\"\n raise CfnParseError(\n self.filename,\n 'Null value at line {0} column {1}'.format(\n node.start_mark.line + 1, node.start_mark.column + 1),\n node.start_mark.line, node.start_mark.column, ' ')\n\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:map',\n NodeConstructor.construct_yaml_map)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:str',\n NodeConstructor.construct_yaml_str)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:seq',\n NodeConstructor.construct_yaml_seq)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:null',\n NodeConstructor.construct_yaml_null_error)\n\n\nclass MarkedLoader(Reader, Scanner, Parser, Composer, NodeConstructor, Resolver):\n \"\"\"\n Class for marked loading YAML\n \"\"\"\n\n # pylint: disable=non-parent-init-called,super-init-not-called\n\n def __init__(self, stream, filename):\n Reader.__init__(self, stream)\n Scanner.__init__(self)\n if cyaml:\n Parser.__init__(self, stream)\n else:\n Parser.__init__(self)\n Composer.__init__(self)\n SafeConstructor.__init__(self)\n Resolver.__init__(self)\n NodeConstructor.__init__(self, filename)\n\n def construct_mapping(self, node, deep=False):\n mapping = super(MarkedLoader, self).construct_mapping(node, deep=deep)\n # Add 1 so line numbering starts at 1\n # mapping['__line__'] = node.start_mark.line + 1\n mapping['__startline__'] = node.start_mark.line + 1\n mapping['__endline__'] = node.end_mark.line + 1\n return mapping\n\n\ndef multi_constructor(loader, tag_suffix, node):\n \"\"\"\n Deal with !Ref style function format\n \"\"\"\n\n if tag_suffix not in UNCONVERTED_SUFFIXES:\n tag_suffix = '{}{}'.format(FN_PREFIX, tag_suffix)\n\n constructor = None\n if tag_suffix == 'Fn::GetAtt':\n constructor = construct_getatt\n elif isinstance(node, ScalarNode):\n constructor = loader.construct_scalar\n elif isinstance(node, SequenceNode):\n constructor = loader.construct_sequence\n elif isinstance(node, MappingNode):\n constructor = loader.construct_mapping\n else:\n raise 'Bad tag: !{}'.format(tag_suffix)\n\n return DictNode({tag_suffix: constructor(node)}, node.start_mark, node.end_mark)\n\n\ndef construct_getatt(node):\n \"\"\"\n Reconstruct !GetAtt into a list\n \"\"\"\n\n if isinstance(node.value, str):\n return ListNode(node.value.split('.'), node.start_mark, node.end_mark)\n if isinstance(node.value, list):\n return ListNode([s.value for s in node.value], node.start_mark, node.end_mark)\n\n raise ValueError('Unexpected 
node type: {}'.format(type(node.value)))\n\n\ndef loads(yaml_string, fname=None):\n \"\"\"\n Load the given YAML string\n \"\"\"\n loader = MarkedLoader(yaml_string, fname)\n loader.add_multi_constructor('!', multi_constructor)\n\n template = loader.get_single_data()\n # Convert an empty file to an empty dict\n if template is None:\n template = {}\n\n return template\n\n\ndef load(filename: str) -> Tuple[DictNode, List[Tuple[int, str]]]:\n \"\"\"\n Load the given YAML file\n \"\"\"\n\n file_path = Path(filename)\n content = file_path.read_text()\n file_lines = [(idx + 1, line) for idx, line in enumerate(content.splitlines(keepends=True))]\n\n return (loads(content, filename), file_lines)\n", "path": "checkov/cloudformation/parser/cfn_yaml.py"}, {"content": "import logging\nfrom typing import Tuple, List, Union, Dict, Optional\n\nfrom checkov.cloudformation.parser import cfn_yaml\nfrom checkov.common.parsers.json import parse as json_parse\nfrom checkov.common.parsers.node import DictNode\nfrom checkov.cloudformation.parser.cfn_keywords import TemplateSections\nfrom yaml.parser import ScannerError\nfrom yaml import YAMLError\n\nLOGGER = logging.getLogger(__name__)\n\n\ndef parse(\n filename: str, out_parsing_errors: Optional[Dict[str, str]] = None\n) -> Union[Tuple[DictNode, List[Tuple[int, str]]], Tuple[None, None]]:\n \"\"\"\n Decode filename into an object\n \"\"\"\n template = None\n template_lines = None\n error = None\n\n if out_parsing_errors is None:\n out_parsing_errors = {}\n\n try:\n (template, template_lines) = cfn_yaml.load(filename)\n except IOError as err:\n if err.errno == 2:\n error = f\"Template file not found: {filename} - {err}\"\n LOGGER.error(error)\n elif err.errno == 21:\n error = f\"Template references a directory, not a file: {filename} - {err}\"\n LOGGER.error(error)\n elif err.errno == 13:\n error = f\"Permission denied when accessing template file: {filename} - {err}\"\n LOGGER.error(error)\n except UnicodeDecodeError as err:\n error = f\"Cannot read file contents: {filename} - {err}\"\n LOGGER.error(error)\n except cfn_yaml.CfnParseError as err:\n if \"Null value at\" in err.message:\n LOGGER.info(f\"Null values do not exist in CFN templates: {filename} - {err}\")\n return None, None\n\n error = f\"Parsing error in file: {filename} - {err}\"\n LOGGER.info(error)\n except ValueError as err:\n error = f\"Parsing error in file: {filename} - {err}\"\n LOGGER.info(error)\n except ScannerError as err:\n if err.problem in [\"found character '\\\\t' that cannot start any token\", \"found unknown escape character\"]:\n try:\n (template, template_lines) = json_parse(filename, allow_nulls=False)\n except Exception as json_err: # pylint: disable=W0703\n error = f\"Template {filename} is malformed: {err.problem}. Tried to parse {filename} as JSON but got error: {json_err}\"\n LOGGER.info(error)\n except YAMLError as err:\n error = f\"Parsing error in file: {filename} - {err}\"\n LOGGER.info(error)\n\n if error:\n out_parsing_errors[filename] = error\n\n if isinstance(template, dict):\n resources = template.get(TemplateSections.RESOURCES.value, None)\n if resources:\n if \"__startline__\" in resources:\n del resources[\"__startline__\"]\n if \"__endline__\" in resources:\n del resources[\"__endline__\"]\n return template, template_lines\n", "path": "checkov/cloudformation/parser/__init__.py"}]}
| 3,331 | 818 |
gh_patches_debug_10545
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-4889
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/nn/optimizer/cpu_adam.py`
Content:
```
1 import math
2 from typing import Optional
3
4 import torch
5
6 from colossalai.kernel.op_builder import CPUAdamBuilder
7
8 from .nvme_optimizer import NVMeOptimizer
9
10
11 class CPUAdam(NVMeOptimizer):
12 """Implements Adam algorithm.
13
14 Supports parameters updating on both GPU and CPU, depending on the device of parameters.
15     But the parameters and gradients should be on the same device:
16 * Parameters on CPU and gradients on CPU is allowed.
17 * Parameters on GPU and gradients on GPU is allowed.
18 * Parameters on GPU and gradients on CPU is **not** allowed.
19
20 `CPUAdam` requires CUDA extensions which can be built during installation or runtime.
21
22 This version of CPU Adam accelerates parameters updating on CPU with SIMD.
23 Support of AVX2 or AVX512 is required.
24
25     The GPU part is implemented in a naive way.
26
27 CPU Adam also supports the hybrid precision calculation, eg. fp32 parameters and fp16 gradients.
28
29 :class:`colossalai.nn.optimizer.CPUAdam` may be used as a drop-in replacement for ``torch.optim.AdamW``,
30 or ``torch.optim.Adam`` with ``adamw_mode=False``
31
32     Adam was proposed in `Adam: A Method for Stochastic Optimization`_.
33
34 Arguments:
35 model_params (iterable): iterable of parameters of dicts defining
36 parameter groups.
37 lr (float, optional): learning rate. (default: 1e-3)
38 betas (Tuple[float, float], optional): coefficients used for computing
39 running averages of gradient and its square. (default: (0.9, 0.999))
40 eps (float, optional): term added to the denominator to improve
41 numerical stability. (default: 1e-8)
42 weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
43 amsgrad (boolean, optional): whether to use the AMSGrad variant of this
44 algorithm from the paper `On the Convergence of Adam and Beyond`_
45 (default: False) NOT SUPPORTED yet in CPUAdam!
46 adamw_mode (boolean, optional): Apply L2 regularization or weight decay
47 True for decoupled weight decay(also known as AdamW) (default: True)
48 simd_log (boolean, optional): whether to show if you are using SIMD to
49 accelerate. (default: False)
50 nvme_offload_fraction (float, optional): Fraction of optimizer states to be offloaded to NVMe. Defaults to 0.0.
51 nvme_offload_dir (Optional[str], optional): Directory to save NVMe offload files.
52 If it's ``None``, a random temporary directory will be used. Defaults to None.
53
54 .. _Adam\: A Method for Stochastic Optimization:
55 https://arxiv.org/abs/1412.6980
56 .. _On the Convergence of Adam and Beyond:
57 https://openreview.net/forum?id=ryQu7f-RZ
58 """
59
60 # Number of fp32 shards for per parameter
61 # Param weight, grad, momentum and variance
62 num_fp32_shards_per_param = 4
63
64 def __init__(
65 self,
66 model_params,
67 lr=1e-3,
68 bias_correction=True,
69 betas=(0.9, 0.999),
70 eps=1e-8,
71 weight_decay=0,
72 adamw_mode=True,
73 nvme_offload_fraction: float = 0.0,
74 nvme_offload_dir: Optional[str] = None,
75 ):
76 default_args = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, bias_correction=bias_correction)
77 super(CPUAdam, self).__init__(model_params, default_args, nvme_offload_fraction, nvme_offload_dir)
78 self.adamw_mode = adamw_mode
79 cpu_adam = CPUAdamBuilder().load()
80 self.cpu_adam_op = cpu_adam.CPUAdamOptimizer(lr, betas[0], betas[1], eps, weight_decay, adamw_mode)
81
82 def torch_adam_update(
83 self,
84 data,
85 grad,
86 exp_avg,
87 exp_avg_sq,
88 lr,
89 beta1,
90 beta2,
91 eps,
92 weight_decay,
93 bias_correction1,
94 bias_correction2,
95 use_adamw=False,
96 ):
97 grad = grad.to(data.dtype)
98
99 if weight_decay != 0:
100 if use_adamw:
101 data.mul_(1 - lr * weight_decay)
102 else:
103 grad = grad.add(data, alpha=weight_decay)
104
105 # Decay the first and second moment running average coefficient
106 exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
107 exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
108
109         # TODO(jiaruifang) does not support amsgrad
110 denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps)
111
112 step_size = lr / bias_correction1
113
114 data.addcdiv_(exp_avg, denom, value=-step_size)
115
116 @torch.no_grad()
117 def step(self, closure=None, div_scale: float = -1):
118 loss = None
119 if closure is not None:
120 with torch.enable_grad():
121 loss = closure()
122
123 self._pre_step("exp_avg", "exp_avg_sq")
124 for _, group in enumerate(self.param_groups):
125 for _, p in enumerate(group["params"]):
126 if p.grad is None:
127 continue
128
129 state = self.state[p]
130
131 target_device = p.device
132 if len(state) == 0:
133 state["step"] = 0
134
135 # FIXME(ver217): CPU adam kernel only supports fp32 states now
136 assert p.dtype is torch.float, "CPUAdam only support fp32 parameters"
137 # gradient momentums
138 state["exp_avg"] = torch.zeros_like(p, device=target_device)
139 # gradient variances
140 state["exp_avg_sq"] = torch.zeros_like(p, device=target_device)
141 self._post_state_init(p)
142
143 state["step"] += 1
144 beta1, beta2 = group["betas"]
145
146 if target_device.type == "cpu":
147 assert p.data.numel() == p.grad.data.numel(), "parameter and gradient should have the same size"
148 assert state["exp_avg"].device.type == "cpu", "exp_avg should stay on cpu"
149 assert state["exp_avg_sq"].device.type == "cpu", "exp_avg should stay on cpu"
150 self._pre_update(p, "exp_avg", "exp_avg_sq")
151 if p.grad.dtype is torch.bfloat16:
152 # cpu adam kernel does not support bf16 now
153 bias_correction1 = 1 - beta1 ** state["step"]
154 bias_correction2 = 1 - beta2 ** state["step"]
155 self.torch_adam_update(
156 p.data,
157 p.grad.data,
158 state["exp_avg"],
159 state["exp_avg_sq"],
160 group["lr"],
161 beta1,
162 beta2,
163 group["eps"],
164 group["weight_decay"],
165 bias_correction1,
166 bias_correction2,
167 self.adamw_mode,
168 )
169 else:
170 self.cpu_adam_op.step(
171 state["step"],
172 group["lr"],
173 beta1,
174 beta2,
175 group["eps"],
176 group["weight_decay"],
177 group["bias_correction"],
178 p.data,
179 p.grad.data,
180 state["exp_avg"],
181 state["exp_avg_sq"],
182 div_scale,
183 )
184 self._post_update(p, "exp_avg", "exp_avg_sq")
185 elif target_device.type == "cuda":
186 assert div_scale == -1, "div_scale should remain default"
187 assert state["exp_avg"].device.type == "cuda", "exp_avg should stay on cuda"
188 assert state["exp_avg_sq"].device.type == "cuda", "exp_avg should stay on cuda"
189
190 bias_correction1 = 1 - beta1 ** state["step"]
191 bias_correction2 = 1 - beta2 ** state["step"]
192
193 # adam on cuda
194 self.torch_adam_update(
195 p.data,
196 p.grad.data,
197 state["exp_avg"],
198 state["exp_avg_sq"],
199 group["lr"],
200 beta1,
201 beta2,
202 group["eps"],
203 group["weight_decay"],
204 bias_correction1,
205 bias_correction2,
206 self.adamw_mode,
207 )
208 else:
209 raise RuntimeError
210 self._post_step()
211 return loss
212
```
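For reference, the pure-PyTorch fallback `torch_adam_update` in the file above performs the standard bias-corrected Adam step, with decoupled weight decay when `use_adamw=True` and plain L2 regularization otherwise. A compact summary of that update (derived from the code, not part of the original file) is:

```latex
% Bias-corrected Adam step as implemented by torch_adam_update (summary)
m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t , \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^{2}
\theta_t = \theta_{t-1} \;-\; \frac{\eta}{1-\beta_1^{t}}\;
           \frac{m_t}{\sqrt{v_t}/\sqrt{1-\beta_2^{t}} + \epsilon}
% AdamW mode first scales \theta_{t-1} \leftarrow (1-\eta\lambda)\,\theta_{t-1};
% plain Adam mode instead adds L2: g_t \leftarrow g_t + \lambda\,\theta_{t-1}.
```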
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/colossalai/nn/optimizer/cpu_adam.py b/colossalai/nn/optimizer/cpu_adam.py
--- a/colossalai/nn/optimizer/cpu_adam.py
+++ b/colossalai/nn/optimizer/cpu_adam.py
@@ -77,6 +77,7 @@
super(CPUAdam, self).__init__(model_params, default_args, nvme_offload_fraction, nvme_offload_dir)
self.adamw_mode = adamw_mode
cpu_adam = CPUAdamBuilder().load()
+ # if you find yourself stuck here, make sure that you install colossalai with CUDA_EXT=1 specification
self.cpu_adam_op = cpu_adam.CPUAdamOptimizer(lr, betas[0], betas[1], eps, weight_decay, adamw_mode)
def torch_adam_update(
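
The accepted patch is comment-only: it points anyone whose process stalls at `CPUAdamBuilder().load()` (typically a long or failing extension build) to reinstalling ColossalAI with `CUDA_EXT=1`. A sketch of surfacing the same hint at runtime instead — illustrative only, not part of the merged change:

```python
# Illustrative sketch only; the merged fix just adds the comment shown above.
from colossalai.kernel.op_builder import CPUAdamBuilder

def load_cpu_adam_kernel():
    try:
        return CPUAdamBuilder().load()
    except Exception as err:  # extension build/load failure
        raise RuntimeError(
            "Could not load the CPU Adam kernel; reinstall ColossalAI with "
            "CUDA_EXT=1 so the extension is built."
        ) from err
```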
|
{"golden_diff": "diff --git a/colossalai/nn/optimizer/cpu_adam.py b/colossalai/nn/optimizer/cpu_adam.py\n--- a/colossalai/nn/optimizer/cpu_adam.py\n+++ b/colossalai/nn/optimizer/cpu_adam.py\n@@ -77,6 +77,7 @@\n super(CPUAdam, self).__init__(model_params, default_args, nvme_offload_fraction, nvme_offload_dir)\n self.adamw_mode = adamw_mode\n cpu_adam = CPUAdamBuilder().load()\n+ # if you find yourself stuck here, make sure that you install colossalai with CUDA_EXT=1 specification\n self.cpu_adam_op = cpu_adam.CPUAdamOptimizer(lr, betas[0], betas[1], eps, weight_decay, adamw_mode)\n \n def torch_adam_update(\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import math\nfrom typing import Optional\n\nimport torch\n\nfrom colossalai.kernel.op_builder import CPUAdamBuilder\n\nfrom .nvme_optimizer import NVMeOptimizer\n\n\nclass CPUAdam(NVMeOptimizer):\n \"\"\"Implements Adam algorithm.\n\n Supports parameters updating on both GPU and CPU, depending on the device of parameters.\n But the parameters and gradients should on the same device:\n * Parameters on CPU and gradients on CPU is allowed.\n * Parameters on GPU and gradients on GPU is allowed.\n * Parameters on GPU and gradients on CPU is **not** allowed.\n\n `CPUAdam` requires CUDA extensions which can be built during installation or runtime.\n\n This version of CPU Adam accelerates parameters updating on CPU with SIMD.\n Support of AVX2 or AVX512 is required.\n\n The GPU part is implemented in an naive way.\n\n CPU Adam also supports the hybrid precision calculation, eg. fp32 parameters and fp16 gradients.\n\n :class:`colossalai.nn.optimizer.CPUAdam` may be used as a drop-in replacement for ``torch.optim.AdamW``,\n or ``torch.optim.Adam`` with ``adamw_mode=False``\n\n Adam was been proposed in `Adam: A Method for Stochastic Optimization`_.\n\n Arguments:\n model_params (iterable): iterable of parameters of dicts defining\n parameter groups.\n lr (float, optional): learning rate. (default: 1e-3)\n betas (Tuple[float, float], optional): coefficients used for computing\n running averages of gradient and its square. (default: (0.9, 0.999))\n eps (float, optional): term added to the denominator to improve\n numerical stability. (default: 1e-8)\n weight_decay (float, optional): weight decay (L2 penalty) (default: 0)\n amsgrad (boolean, optional): whether to use the AMSGrad variant of this\n algorithm from the paper `On the Convergence of Adam and Beyond`_\n (default: False) NOT SUPPORTED yet in CPUAdam!\n adamw_mode (boolean, optional): Apply L2 regularization or weight decay\n True for decoupled weight decay(also known as AdamW) (default: True)\n simd_log (boolean, optional): whether to show if you are using SIMD to\n accelerate. (default: False)\n nvme_offload_fraction (float, optional): Fraction of optimizer states to be offloaded to NVMe. Defaults to 0.0.\n nvme_offload_dir (Optional[str], optional): Directory to save NVMe offload files.\n If it's ``None``, a random temporary directory will be used. Defaults to None.\n\n .. _Adam\\: A Method for Stochastic Optimization:\n https://arxiv.org/abs/1412.6980\n .. 
_On the Convergence of Adam and Beyond:\n https://openreview.net/forum?id=ryQu7f-RZ\n \"\"\"\n\n # Number of fp32 shards for per parameter\n # Param weight, grad, momentum and variance\n num_fp32_shards_per_param = 4\n\n def __init__(\n self,\n model_params,\n lr=1e-3,\n bias_correction=True,\n betas=(0.9, 0.999),\n eps=1e-8,\n weight_decay=0,\n adamw_mode=True,\n nvme_offload_fraction: float = 0.0,\n nvme_offload_dir: Optional[str] = None,\n ):\n default_args = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, bias_correction=bias_correction)\n super(CPUAdam, self).__init__(model_params, default_args, nvme_offload_fraction, nvme_offload_dir)\n self.adamw_mode = adamw_mode\n cpu_adam = CPUAdamBuilder().load()\n self.cpu_adam_op = cpu_adam.CPUAdamOptimizer(lr, betas[0], betas[1], eps, weight_decay, adamw_mode)\n\n def torch_adam_update(\n self,\n data,\n grad,\n exp_avg,\n exp_avg_sq,\n lr,\n beta1,\n beta2,\n eps,\n weight_decay,\n bias_correction1,\n bias_correction2,\n use_adamw=False,\n ):\n grad = grad.to(data.dtype)\n\n if weight_decay != 0:\n if use_adamw:\n data.mul_(1 - lr * weight_decay)\n else:\n grad = grad.add(data, alpha=weight_decay)\n\n # Decay the first and second moment running average coefficient\n exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)\n exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)\n\n # TODO(jiaruifang) dose not support amsgrad\n denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps)\n\n step_size = lr / bias_correction1\n\n data.addcdiv_(exp_avg, denom, value=-step_size)\n\n @torch.no_grad()\n def step(self, closure=None, div_scale: float = -1):\n loss = None\n if closure is not None:\n with torch.enable_grad():\n loss = closure()\n\n self._pre_step(\"exp_avg\", \"exp_avg_sq\")\n for _, group in enumerate(self.param_groups):\n for _, p in enumerate(group[\"params\"]):\n if p.grad is None:\n continue\n\n state = self.state[p]\n\n target_device = p.device\n if len(state) == 0:\n state[\"step\"] = 0\n\n # FIXME(ver217): CPU adam kernel only supports fp32 states now\n assert p.dtype is torch.float, \"CPUAdam only support fp32 parameters\"\n # gradient momentums\n state[\"exp_avg\"] = torch.zeros_like(p, device=target_device)\n # gradient variances\n state[\"exp_avg_sq\"] = torch.zeros_like(p, device=target_device)\n self._post_state_init(p)\n\n state[\"step\"] += 1\n beta1, beta2 = group[\"betas\"]\n\n if target_device.type == \"cpu\":\n assert p.data.numel() == p.grad.data.numel(), \"parameter and gradient should have the same size\"\n assert state[\"exp_avg\"].device.type == \"cpu\", \"exp_avg should stay on cpu\"\n assert state[\"exp_avg_sq\"].device.type == \"cpu\", \"exp_avg should stay on cpu\"\n self._pre_update(p, \"exp_avg\", \"exp_avg_sq\")\n if p.grad.dtype is torch.bfloat16:\n # cpu adam kernel does not support bf16 now\n bias_correction1 = 1 - beta1 ** state[\"step\"]\n bias_correction2 = 1 - beta2 ** state[\"step\"]\n self.torch_adam_update(\n p.data,\n p.grad.data,\n state[\"exp_avg\"],\n state[\"exp_avg_sq\"],\n group[\"lr\"],\n beta1,\n beta2,\n group[\"eps\"],\n group[\"weight_decay\"],\n bias_correction1,\n bias_correction2,\n self.adamw_mode,\n )\n else:\n self.cpu_adam_op.step(\n state[\"step\"],\n group[\"lr\"],\n beta1,\n beta2,\n group[\"eps\"],\n group[\"weight_decay\"],\n group[\"bias_correction\"],\n p.data,\n p.grad.data,\n state[\"exp_avg\"],\n state[\"exp_avg_sq\"],\n div_scale,\n )\n self._post_update(p, \"exp_avg\", \"exp_avg_sq\")\n elif target_device.type == \"cuda\":\n assert div_scale == 
-1, \"div_scale should remain default\"\n assert state[\"exp_avg\"].device.type == \"cuda\", \"exp_avg should stay on cuda\"\n assert state[\"exp_avg_sq\"].device.type == \"cuda\", \"exp_avg should stay on cuda\"\n\n bias_correction1 = 1 - beta1 ** state[\"step\"]\n bias_correction2 = 1 - beta2 ** state[\"step\"]\n\n # adam on cuda\n self.torch_adam_update(\n p.data,\n p.grad.data,\n state[\"exp_avg\"],\n state[\"exp_avg_sq\"],\n group[\"lr\"],\n beta1,\n beta2,\n group[\"eps\"],\n group[\"weight_decay\"],\n bias_correction1,\n bias_correction2,\n self.adamw_mode,\n )\n else:\n raise RuntimeError\n self._post_step()\n return loss\n", "path": "colossalai/nn/optimizer/cpu_adam.py"}], "after_files": [{"content": "import math\nfrom typing import Optional\n\nimport torch\n\nfrom colossalai.kernel.op_builder import CPUAdamBuilder\n\nfrom .nvme_optimizer import NVMeOptimizer\n\n\nclass CPUAdam(NVMeOptimizer):\n \"\"\"Implements Adam algorithm.\n\n Supports parameters updating on both GPU and CPU, depending on the device of parameters.\n But the parameters and gradients should on the same device:\n * Parameters on CPU and gradients on CPU is allowed.\n * Parameters on GPU and gradients on GPU is allowed.\n * Parameters on GPU and gradients on CPU is **not** allowed.\n\n `CPUAdam` requires CUDA extensions which can be built during installation or runtime.\n\n This version of CPU Adam accelerates parameters updating on CPU with SIMD.\n Support of AVX2 or AVX512 is required.\n\n The GPU part is implemented in an naive way.\n\n CPU Adam also supports the hybrid precision calculation, eg. fp32 parameters and fp16 gradients.\n\n :class:`colossalai.nn.optimizer.CPUAdam` may be used as a drop-in replacement for ``torch.optim.AdamW``,\n or ``torch.optim.Adam`` with ``adamw_mode=False``\n\n Adam was been proposed in `Adam: A Method for Stochastic Optimization`_.\n\n Arguments:\n model_params (iterable): iterable of parameters of dicts defining\n parameter groups.\n lr (float, optional): learning rate. (default: 1e-3)\n betas (Tuple[float, float], optional): coefficients used for computing\n running averages of gradient and its square. (default: (0.9, 0.999))\n eps (float, optional): term added to the denominator to improve\n numerical stability. (default: 1e-8)\n weight_decay (float, optional): weight decay (L2 penalty) (default: 0)\n amsgrad (boolean, optional): whether to use the AMSGrad variant of this\n algorithm from the paper `On the Convergence of Adam and Beyond`_\n (default: False) NOT SUPPORTED yet in CPUAdam!\n adamw_mode (boolean, optional): Apply L2 regularization or weight decay\n True for decoupled weight decay(also known as AdamW) (default: True)\n simd_log (boolean, optional): whether to show if you are using SIMD to\n accelerate. (default: False)\n nvme_offload_fraction (float, optional): Fraction of optimizer states to be offloaded to NVMe. Defaults to 0.0.\n nvme_offload_dir (Optional[str], optional): Directory to save NVMe offload files.\n If it's ``None``, a random temporary directory will be used. Defaults to None.\n\n .. _Adam\\: A Method for Stochastic Optimization:\n https://arxiv.org/abs/1412.6980\n .. 
_On the Convergence of Adam and Beyond:\n https://openreview.net/forum?id=ryQu7f-RZ\n \"\"\"\n\n # Number of fp32 shards for per parameter\n # Param weight, grad, momentum and variance\n num_fp32_shards_per_param = 4\n\n def __init__(\n self,\n model_params,\n lr=1e-3,\n bias_correction=True,\n betas=(0.9, 0.999),\n eps=1e-8,\n weight_decay=0,\n adamw_mode=True,\n nvme_offload_fraction: float = 0.0,\n nvme_offload_dir: Optional[str] = None,\n ):\n default_args = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, bias_correction=bias_correction)\n super(CPUAdam, self).__init__(model_params, default_args, nvme_offload_fraction, nvme_offload_dir)\n self.adamw_mode = adamw_mode\n cpu_adam = CPUAdamBuilder().load()\n # if you find yourself stuck here, make sure that you install colossalai with CUDA_EXT=1 specification\n self.cpu_adam_op = cpu_adam.CPUAdamOptimizer(lr, betas[0], betas[1], eps, weight_decay, adamw_mode)\n\n def torch_adam_update(\n self,\n data,\n grad,\n exp_avg,\n exp_avg_sq,\n lr,\n beta1,\n beta2,\n eps,\n weight_decay,\n bias_correction1,\n bias_correction2,\n use_adamw=False,\n ):\n grad = grad.to(data.dtype)\n\n if weight_decay != 0:\n if use_adamw:\n data.mul_(1 - lr * weight_decay)\n else:\n grad = grad.add(data, alpha=weight_decay)\n\n # Decay the first and second moment running average coefficient\n exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)\n exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)\n\n # TODO(jiaruifang) dose not support amsgrad\n denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps)\n\n step_size = lr / bias_correction1\n\n data.addcdiv_(exp_avg, denom, value=-step_size)\n\n @torch.no_grad()\n def step(self, closure=None, div_scale: float = -1):\n loss = None\n if closure is not None:\n with torch.enable_grad():\n loss = closure()\n\n self._pre_step(\"exp_avg\", \"exp_avg_sq\")\n for _, group in enumerate(self.param_groups):\n for _, p in enumerate(group[\"params\"]):\n if p.grad is None:\n continue\n\n state = self.state[p]\n\n target_device = p.device\n if len(state) == 0:\n state[\"step\"] = 0\n\n # FIXME(ver217): CPU adam kernel only supports fp32 states now\n assert p.dtype is torch.float, \"CPUAdam only support fp32 parameters\"\n # gradient momentums\n state[\"exp_avg\"] = torch.zeros_like(p, device=target_device)\n # gradient variances\n state[\"exp_avg_sq\"] = torch.zeros_like(p, device=target_device)\n self._post_state_init(p)\n\n state[\"step\"] += 1\n beta1, beta2 = group[\"betas\"]\n\n if target_device.type == \"cpu\":\n assert p.data.numel() == p.grad.data.numel(), \"parameter and gradient should have the same size\"\n assert state[\"exp_avg\"].device.type == \"cpu\", \"exp_avg should stay on cpu\"\n assert state[\"exp_avg_sq\"].device.type == \"cpu\", \"exp_avg should stay on cpu\"\n self._pre_update(p, \"exp_avg\", \"exp_avg_sq\")\n if p.grad.dtype is torch.bfloat16:\n # cpu adam kernel does not support bf16 now\n bias_correction1 = 1 - beta1 ** state[\"step\"]\n bias_correction2 = 1 - beta2 ** state[\"step\"]\n self.torch_adam_update(\n p.data,\n p.grad.data,\n state[\"exp_avg\"],\n state[\"exp_avg_sq\"],\n group[\"lr\"],\n beta1,\n beta2,\n group[\"eps\"],\n group[\"weight_decay\"],\n bias_correction1,\n bias_correction2,\n self.adamw_mode,\n )\n else:\n self.cpu_adam_op.step(\n state[\"step\"],\n group[\"lr\"],\n beta1,\n beta2,\n group[\"eps\"],\n group[\"weight_decay\"],\n group[\"bias_correction\"],\n p.data,\n p.grad.data,\n state[\"exp_avg\"],\n state[\"exp_avg_sq\"],\n div_scale,\n )\n 
self._post_update(p, \"exp_avg\", \"exp_avg_sq\")\n elif target_device.type == \"cuda\":\n assert div_scale == -1, \"div_scale should remain default\"\n assert state[\"exp_avg\"].device.type == \"cuda\", \"exp_avg should stay on cuda\"\n assert state[\"exp_avg_sq\"].device.type == \"cuda\", \"exp_avg should stay on cuda\"\n\n bias_correction1 = 1 - beta1 ** state[\"step\"]\n bias_correction2 = 1 - beta2 ** state[\"step\"]\n\n # adam on cuda\n self.torch_adam_update(\n p.data,\n p.grad.data,\n state[\"exp_avg\"],\n state[\"exp_avg_sq\"],\n group[\"lr\"],\n beta1,\n beta2,\n group[\"eps\"],\n group[\"weight_decay\"],\n bias_correction1,\n bias_correction2,\n self.adamw_mode,\n )\n else:\n raise RuntimeError\n self._post_step()\n return loss\n", "path": "colossalai/nn/optimizer/cpu_adam.py"}]}
| 2,681 | 189 |
gh_patches_debug_49337
|
rasdani/github-patches
|
git_diff
|
ckan__ckan-4249
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'_Globals' has no attribute 'user' : exception when using an IAuthenticator on CKAN 2.8.0
I'm putting together a new deployment based on the new CKAN v2.8.0 release. I'm using ckanext-ldap as an authenticator, though it looks like this bug would apply to any authenticator plugin.
This exact setup worked fine on CKAN v2.7.3.
### CKAN Version if known (or site URL)
CKAN v 2.8.0
ckanext-ldap @ `ckan-upgrade-2.8.0a`
### Please describe the expected behaviour
If the IAuthenticator plugin cannot authenticate the user, it does not set `g.user`, and CKAN should run the default authenticator.
### Please describe the actual behaviour
If the IAuthenticator plugin cannot authenticate the user, it does not set `g.user`, and CKAN tries to lookup `g.user` and crashes with traceback:
```
Traceback (most recent call last):
File "/usr/lib/ckan/venv/local/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/usr/lib/ckan/venv/local/lib/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/lib/ckan/venv/local/lib/python2.7/site-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/lib/ckan/venv/local/lib/python2.7/site-packages/flask/app.py", line 1610, in full_dispatch_request
rv = self.preprocess_request()
File "/usr/lib/ckan/venv/local/lib/python2.7/site-packages/flask/app.py", line 1831, in preprocess_request
rv = func()
File "/usr/lib/ckan/venv/src/ckan/ckan/config/middleware/flask_app.py", line 281, in ckan_before_request
identify_user()
File "/usr/lib/ckan/venv/src/ckan/ckan/views/__init__.py", line 101, in identify_user
if g.user:
File "/usr/lib/ckan/venv/local/lib/python2.7/site-packages/werkzeug/local.py", line 347, in __getattr__
return getattr(self._get_current_object(), name)
File "/usr/lib/ckan/venv/local/lib/python2.7/site-packages/werkzeug/local.py", line 347, in __getattr__
return getattr(self._get_current_object(), name)
File "/usr/lib/ckan/venv/src/ckan/ckan/config/middleware/flask_app.py", line 334, in __getattr__
return getattr(app_globals.app_globals, name)
AttributeError: '_Globals' object has no attribute 'user'
```
### What steps can be taken to reproduce the issue?
* Install CKAN v2.8.0 as per documented instructions
* Install a plugin that implements IAuthenticator (In this case I am using the ckanext-ldap plugin in the 2.8.0 branch), that may not be able to authenticate the user, so may not set `g.user`.
* Run CKAN normally
* Attempt to load any page.
What is odd is that this section of code at `identify_user` in `ckan/views/__init__.py` has not changed between v2.7.3 and v2.8.0. And the way the authenticator plugin handles/sets `g.user` has not changed either. I'm guessing this is caused by a change in the way the _Globals object behaves when it cannot find an attribute.
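For illustration, a guard along these lines (a hypothetical helper, not code from CKAN or from this report) avoids the AttributeError when no plugin has set `g.user`:

```python
# Hypothetical guard (not from the CKAN codebase): treat a missing g.user as
# "not identified" instead of letting _Globals.__getattr__ raise.
from ckan.common import g

def user_is_identified():
    # getattr with a default works as well: getattr(g, u'user', None)
    try:
        return bool(g.user)
    except AttributeError:  # no IAuthenticator plugin set g.user
        return False
```

Either the try/except or the getattr form lets the default identification methods run instead of aborting the request.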
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckan/views/__init__.py`
Content:
```
1 # encoding: utf-8
2
3 from paste.deploy.converters import asbool
4 from six import text_type
5
6 import ckan.model as model
7 from ckan.common import g, request, config, session
8 from ckan.lib.helpers import redirect_to as redirect
9 import ckan.plugins as p
10
11 import logging
12 log = logging.getLogger(__name__)
13
14 APIKEY_HEADER_NAME_KEY = u'apikey_header_name'
15 APIKEY_HEADER_NAME_DEFAULT = u'X-CKAN-API-Key'
16
17
18 def check_session_cookie(response):
19 u'''
20 The cookies for auth (auth_tkt) and session (ckan) are separate. This
21 checks whether a user is logged in, and determines the validity of the
22 session cookie, removing it if necessary.
23 '''
24 for cookie in request.cookies:
25 # Remove the ckan session cookie if logged out.
26 if cookie == u'ckan' and not getattr(g, u'user', None):
27 # Check session for valid data (including flash messages)
28 is_valid_cookie_data = False
29 for key, value in session.items():
30 if not key.startswith(u'_') and value:
31 is_valid_cookie_data = True
32 break
33 if not is_valid_cookie_data:
34 if session.id:
35 log.debug(u'No valid session data - deleting session')
36 log.debug(u'Session: %r', session.items())
37 session.delete()
38 else:
39 log.debug(u'No session id - deleting session cookie')
40 response.delete_cookie(cookie)
41 # Remove auth_tkt repoze.who cookie if user not logged in.
42 elif cookie == u'auth_tkt' and not session.id:
43 response.delete_cookie(cookie)
44
45 return response
46
47
48 def set_cors_headers_for_response(response):
49 u'''
50 Set up Access Control Allow headers if either origin_allow_all is True, or
51 the request Origin is in the origin_whitelist.
52 '''
53 if config.get(u'ckan.cors.origin_allow_all') \
54 and request.headers.get(u'Origin'):
55
56 cors_origin_allowed = None
57 if asbool(config.get(u'ckan.cors.origin_allow_all')):
58 cors_origin_allowed = b'*'
59 elif config.get(u'ckan.cors.origin_whitelist') and \
60 request.headers.get(u'Origin') \
61 in config[u'ckan.cors.origin_whitelist'].split(u' '):
62 # set var to the origin to allow it.
63 cors_origin_allowed = request.headers.get(u'Origin')
64
65 if cors_origin_allowed is not None:
66 response.headers[b'Access-Control-Allow-Origin'] = \
67 cors_origin_allowed
68 response.headers[b'Access-Control-Allow-Methods'] = \
69 b'POST, PUT, GET, DELETE, OPTIONS'
70 response.headers[b'Access-Control-Allow-Headers'] = \
71 b'X-CKAN-API-KEY, Authorization, Content-Type'
72
73 return response
74
75
76 def identify_user():
77 u'''Try to identify the user
78 If the user is identified then:
79 g.user = user name (unicode)
80 g.userobj = user object
81 g.author = user name
82 otherwise:
83 g.user = None
84 g.userobj = None
85 g.author = user's IP address (unicode)
86
87 Note: Remember, when running under Pylons, `g` is the Pylons `c` object
88 '''
89 # see if it was proxied first
90 g.remote_addr = request.environ.get(u'HTTP_X_FORWARDED_FOR', u'')
91 if not g.remote_addr:
92 g.remote_addr = request.environ.get(u'REMOTE_ADDR',
93 u'Unknown IP Address')
94
95 # Authentication plugins get a chance to run here break as soon as a user
96 # is identified.
97 authenticators = p.PluginImplementations(p.IAuthenticator)
98 if authenticators:
99 for item in authenticators:
100 item.identify()
101 if g.user:
102 break
103
104 # We haven't identified the user so try the default methods
105 if not getattr(g, u'user', None):
106 _identify_user_default()
107
108 # If we have a user but not the userobj let's get the userobj. This means
109 # that IAuthenticator extensions do not need to access the user model
110 # directly.
111 if g.user and not getattr(g, u'userobj', None):
112 g.userobj = model.User.by_name(g.user)
113
114 # general settings
115 if g.user:
116 g.author = g.user
117 else:
118 g.author = g.remote_addr
119 g.author = text_type(g.author)
120
121
122 def _identify_user_default():
123 u'''
124 Identifies the user using two methods:
125 a) If they logged into the web interface then repoze.who will
126 set REMOTE_USER.
127 b) For API calls they may set a header with an API key.
128 '''
129
130 # environ['REMOTE_USER'] is set by repoze.who if it authenticates a
131 # user's cookie. But repoze.who doesn't check the user (still) exists
132 # in our database - we need to do that here. (Another way would be
133 # with an userid_checker, but that would mean another db access.
134 # See: http://docs.repoze.org/who/1.0/narr.html#module-repoze.who\
135 # .plugins.sql )
136 g.user = request.environ.get(u'REMOTE_USER', u'')
137 if g.user:
138 g.user = g.user.decode(u'utf8')
139 g.userobj = model.User.by_name(g.user)
140
141 if g.userobj is None or not g.userobj.is_active():
142
143 # This occurs when a user that was still logged in is deleted, or
144 # when you are logged in, clean db and then restart (or when you
145 # change your username). There is no user object, so even though
146 # repoze thinks you are logged in and your cookie has
147 # ckan_display_name, we need to force user to logout and login
148 # again to get the User object.
149
150 ev = request.environ
151 if u'repoze.who.plugins' in ev:
152 pth = getattr(ev[u'repoze.who.plugins'][u'friendlyform'],
153 u'logout_handler_path')
154 redirect(pth)
155 else:
156 g.userobj = _get_user_for_apikey()
157 if g.userobj is not None:
158 g.user = g.userobj.name
159
160
161 def _get_user_for_apikey():
162 apikey_header_name = config.get(APIKEY_HEADER_NAME_KEY,
163 APIKEY_HEADER_NAME_DEFAULT)
164 apikey = request.headers.get(apikey_header_name, u'')
165 if not apikey:
166 apikey = request.environ.get(apikey_header_name, u'')
167 if not apikey:
168 # For misunderstanding old documentation (now fixed).
169 apikey = request.environ.get(u'HTTP_AUTHORIZATION', u'')
170 if not apikey:
171 apikey = request.environ.get(u'Authorization', u'')
172 # Forget HTTP Auth credentials (they have spaces).
173 if u' ' in apikey:
174 apikey = u''
175 if not apikey:
176 return None
177 apikey = apikey.decode(u'utf8', u'ignore')
178 log.debug(u'Received API Key: %s' % apikey)
179 query = model.Session.query(model.User)
180 user = query.filter_by(apikey=apikey).first()
181 return user
182
```
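The failing frame in the traceback (`if g.user`, reported as line 101 of the installed module) is the check inside the authenticator loop above (listing lines 99–102), while listing line 105 already protects the same lookup with `getattr`. A sketch of reusing that guard in the loop — an assumption for illustration, not the shipped fix:

```python
# Sketch (assumption, not the shipped fix): guard the plugin loop inside
# identify_user() the same way listing line 105 already does.
# `p` and `g` are the module-level imports shown at the top of the file.
for item in p.PluginImplementations(p.IAuthenticator):
    item.identify()
    if getattr(g, u'user', None):  # instead of the bare `if g.user:`
        break
```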
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckan/views/__init__.py b/ckan/views/__init__.py
--- a/ckan/views/__init__.py
+++ b/ckan/views/__init__.py
@@ -98,8 +98,11 @@
if authenticators:
for item in authenticators:
item.identify()
- if g.user:
- break
+ try:
+ if g.user:
+ break
+ except AttributeError:
+ continue
# We haven't identified the user so try the default methods
if not getattr(g, u'user', None):
|
{"golden_diff": "diff --git a/ckan/views/__init__.py b/ckan/views/__init__.py\n--- a/ckan/views/__init__.py\n+++ b/ckan/views/__init__.py\n@@ -98,8 +98,11 @@\n if authenticators:\n for item in authenticators:\n item.identify()\n- if g.user:\n- break\n+ try:\n+ if g.user:\n+ break\n+ except AttributeError:\n+ continue\n \n # We haven't identified the user so try the default methods\n if not getattr(g, u'user', None):\n", "issue": "'_Globals' has no attribute 'user' : exception when using an IAuthenticator on CKAN 2.8.0\nI'm putting together a new deployment based on the new CKAN v2.8.0 release. I'm using ckanext-ldap as an authenticator, though it looks like this bug would apply to any authenticator plugin.\r\nThis exact setup worked fine on CKAN v2.7.3.\r\n\r\n### CKAN Version if known (or site URL)\r\nCKAN v 2.8.0\r\nckanext-ldap @ `ckan-upgrade-2.8.0a`\r\n\r\n### Please describe the expected behaviour\r\nIf the IAuthenticator plugin cannot authenticate the user, it does not set `g.user`, and CKAN should run the default authenticator.\r\n\r\n### Please describe the actual behaviour\r\nIf the IAuthenticator plugin cannot authenticate the user, it does not set `g.user`, and CKAN tries to lookup `g.user` and crashes with traceback:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/ckan/venv/local/lib/python2.7/site-packages/flask/app.py\", line 1982, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/usr/lib/ckan/venv/local/lib/python2.7/site-packages/flask/app.py\", line 1614, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/usr/lib/ckan/venv/local/lib/python2.7/site-packages/flask/app.py\", line 1517, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/usr/lib/ckan/venv/local/lib/python2.7/site-packages/flask/app.py\", line 1610, in full_dispatch_request\r\n rv = self.preprocess_request()\r\n File \"/usr/lib/ckan/venv/local/lib/python2.7/site-packages/flask/app.py\", line 1831, in preprocess_request\r\n rv = func()\r\n File \"/usr/lib/ckan/venv/src/ckan/ckan/config/middleware/flask_app.py\", line 281, in ckan_before_request\r\n identify_user()\r\n File \"/usr/lib/ckan/venv/src/ckan/ckan/views/__init__.py\", line 101, in identify_user\r\n if g.user:\r\n File \"/usr/lib/ckan/venv/local/lib/python2.7/site-packages/werkzeug/local.py\", line 347, in __getattr__\r\n return getattr(self._get_current_object(), name)\r\n File \"/usr/lib/ckan/venv/local/lib/python2.7/site-packages/werkzeug/local.py\", line 347, in __getattr__\r\n return getattr(self._get_current_object(), name)\r\n File \"/usr/lib/ckan/venv/src/ckan/ckan/config/middleware/flask_app.py\", line 334, in __getattr__\r\n return getattr(app_globals.app_globals, name)\r\nAttributeError: '_Globals' object has no attribute 'user'\r\n```\r\n### What steps can be taken to reproduce the issue? \r\n* Install CKAN v2.8.0 as per documented instructions\r\n* Install a plugin that implements IAuthenticator (In this case I am using the ckanext-ldap plugin in the 2.8.0 branch), that may not be able to authenticate the user, so may not set `g.user`.\r\n* Run CKAN normally\r\n* Attempt to load any page. \r\n\r\nWhat is odd is that this section of code at `identify_user` in `ckan/views/__init__.py` has not changed between v2.7.3 and v2.8.0. And the way the authenticator plugin handles/sets `g.user` has not changed either. 
I'm guessing this is caused by a change in the way the _Globals object behaves when it cannot find an attribute.\n", "before_files": [{"content": "# encoding: utf-8\n\nfrom paste.deploy.converters import asbool\nfrom six import text_type\n\nimport ckan.model as model\nfrom ckan.common import g, request, config, session\nfrom ckan.lib.helpers import redirect_to as redirect\nimport ckan.plugins as p\n\nimport logging\nlog = logging.getLogger(__name__)\n\nAPIKEY_HEADER_NAME_KEY = u'apikey_header_name'\nAPIKEY_HEADER_NAME_DEFAULT = u'X-CKAN-API-Key'\n\n\ndef check_session_cookie(response):\n u'''\n The cookies for auth (auth_tkt) and session (ckan) are separate. This\n checks whether a user is logged in, and determines the validity of the\n session cookie, removing it if necessary.\n '''\n for cookie in request.cookies:\n # Remove the ckan session cookie if logged out.\n if cookie == u'ckan' and not getattr(g, u'user', None):\n # Check session for valid data (including flash messages)\n is_valid_cookie_data = False\n for key, value in session.items():\n if not key.startswith(u'_') and value:\n is_valid_cookie_data = True\n break\n if not is_valid_cookie_data:\n if session.id:\n log.debug(u'No valid session data - deleting session')\n log.debug(u'Session: %r', session.items())\n session.delete()\n else:\n log.debug(u'No session id - deleting session cookie')\n response.delete_cookie(cookie)\n # Remove auth_tkt repoze.who cookie if user not logged in.\n elif cookie == u'auth_tkt' and not session.id:\n response.delete_cookie(cookie)\n\n return response\n\n\ndef set_cors_headers_for_response(response):\n u'''\n Set up Access Control Allow headers if either origin_allow_all is True, or\n the request Origin is in the origin_whitelist.\n '''\n if config.get(u'ckan.cors.origin_allow_all') \\\n and request.headers.get(u'Origin'):\n\n cors_origin_allowed = None\n if asbool(config.get(u'ckan.cors.origin_allow_all')):\n cors_origin_allowed = b'*'\n elif config.get(u'ckan.cors.origin_whitelist') and \\\n request.headers.get(u'Origin') \\\n in config[u'ckan.cors.origin_whitelist'].split(u' '):\n # set var to the origin to allow it.\n cors_origin_allowed = request.headers.get(u'Origin')\n\n if cors_origin_allowed is not None:\n response.headers[b'Access-Control-Allow-Origin'] = \\\n cors_origin_allowed\n response.headers[b'Access-Control-Allow-Methods'] = \\\n b'POST, PUT, GET, DELETE, OPTIONS'\n response.headers[b'Access-Control-Allow-Headers'] = \\\n b'X-CKAN-API-KEY, Authorization, Content-Type'\n\n return response\n\n\ndef identify_user():\n u'''Try to identify the user\n If the user is identified then:\n g.user = user name (unicode)\n g.userobj = user object\n g.author = user name\n otherwise:\n g.user = None\n g.userobj = None\n g.author = user's IP address (unicode)\n\n Note: Remember, when running under Pylons, `g` is the Pylons `c` object\n '''\n # see if it was proxied first\n g.remote_addr = request.environ.get(u'HTTP_X_FORWARDED_FOR', u'')\n if not g.remote_addr:\n g.remote_addr = request.environ.get(u'REMOTE_ADDR',\n u'Unknown IP Address')\n\n # Authentication plugins get a chance to run here break as soon as a user\n # is identified.\n authenticators = p.PluginImplementations(p.IAuthenticator)\n if authenticators:\n for item in authenticators:\n item.identify()\n if g.user:\n break\n\n # We haven't identified the user so try the default methods\n if not getattr(g, u'user', None):\n _identify_user_default()\n\n # If we have a user but not the userobj let's get the userobj. 
This means\n # that IAuthenticator extensions do not need to access the user model\n # directly.\n if g.user and not getattr(g, u'userobj', None):\n g.userobj = model.User.by_name(g.user)\n\n # general settings\n if g.user:\n g.author = g.user\n else:\n g.author = g.remote_addr\n g.author = text_type(g.author)\n\n\ndef _identify_user_default():\n u'''\n Identifies the user using two methods:\n a) If they logged into the web interface then repoze.who will\n set REMOTE_USER.\n b) For API calls they may set a header with an API key.\n '''\n\n # environ['REMOTE_USER'] is set by repoze.who if it authenticates a\n # user's cookie. But repoze.who doesn't check the user (still) exists\n # in our database - we need to do that here. (Another way would be\n # with an userid_checker, but that would mean another db access.\n # See: http://docs.repoze.org/who/1.0/narr.html#module-repoze.who\\\n # .plugins.sql )\n g.user = request.environ.get(u'REMOTE_USER', u'')\n if g.user:\n g.user = g.user.decode(u'utf8')\n g.userobj = model.User.by_name(g.user)\n\n if g.userobj is None or not g.userobj.is_active():\n\n # This occurs when a user that was still logged in is deleted, or\n # when you are logged in, clean db and then restart (or when you\n # change your username). There is no user object, so even though\n # repoze thinks you are logged in and your cookie has\n # ckan_display_name, we need to force user to logout and login\n # again to get the User object.\n\n ev = request.environ\n if u'repoze.who.plugins' in ev:\n pth = getattr(ev[u'repoze.who.plugins'][u'friendlyform'],\n u'logout_handler_path')\n redirect(pth)\n else:\n g.userobj = _get_user_for_apikey()\n if g.userobj is not None:\n g.user = g.userobj.name\n\n\ndef _get_user_for_apikey():\n apikey_header_name = config.get(APIKEY_HEADER_NAME_KEY,\n APIKEY_HEADER_NAME_DEFAULT)\n apikey = request.headers.get(apikey_header_name, u'')\n if not apikey:\n apikey = request.environ.get(apikey_header_name, u'')\n if not apikey:\n # For misunderstanding old documentation (now fixed).\n apikey = request.environ.get(u'HTTP_AUTHORIZATION', u'')\n if not apikey:\n apikey = request.environ.get(u'Authorization', u'')\n # Forget HTTP Auth credentials (they have spaces).\n if u' ' in apikey:\n apikey = u''\n if not apikey:\n return None\n apikey = apikey.decode(u'utf8', u'ignore')\n log.debug(u'Received API Key: %s' % apikey)\n query = model.Session.query(model.User)\n user = query.filter_by(apikey=apikey).first()\n return user\n", "path": "ckan/views/__init__.py"}], "after_files": [{"content": "# encoding: utf-8\n\nfrom paste.deploy.converters import asbool\nfrom six import text_type\n\nimport ckan.model as model\nfrom ckan.common import g, request, config, session\nfrom ckan.lib.helpers import redirect_to as redirect\nimport ckan.plugins as p\n\nimport logging\nlog = logging.getLogger(__name__)\n\nAPIKEY_HEADER_NAME_KEY = u'apikey_header_name'\nAPIKEY_HEADER_NAME_DEFAULT = u'X-CKAN-API-Key'\n\n\ndef check_session_cookie(response):\n u'''\n The cookies for auth (auth_tkt) and session (ckan) are separate. 
This\n checks whether a user is logged in, and determines the validity of the\n session cookie, removing it if necessary.\n '''\n for cookie in request.cookies:\n # Remove the ckan session cookie if logged out.\n if cookie == u'ckan' and not getattr(g, u'user', None):\n # Check session for valid data (including flash messages)\n is_valid_cookie_data = False\n for key, value in session.items():\n if not key.startswith(u'_') and value:\n is_valid_cookie_data = True\n break\n if not is_valid_cookie_data:\n if session.id:\n log.debug(u'No valid session data - deleting session')\n log.debug(u'Session: %r', session.items())\n session.delete()\n else:\n log.debug(u'No session id - deleting session cookie')\n response.delete_cookie(cookie)\n # Remove auth_tkt repoze.who cookie if user not logged in.\n elif cookie == u'auth_tkt' and not session.id:\n response.delete_cookie(cookie)\n\n return response\n\n\ndef set_cors_headers_for_response(response):\n u'''\n Set up Access Control Allow headers if either origin_allow_all is True, or\n the request Origin is in the origin_whitelist.\n '''\n if config.get(u'ckan.cors.origin_allow_all') \\\n and request.headers.get(u'Origin'):\n\n cors_origin_allowed = None\n if asbool(config.get(u'ckan.cors.origin_allow_all')):\n cors_origin_allowed = b'*'\n elif config.get(u'ckan.cors.origin_whitelist') and \\\n request.headers.get(u'Origin') \\\n in config[u'ckan.cors.origin_whitelist'].split(u' '):\n # set var to the origin to allow it.\n cors_origin_allowed = request.headers.get(u'Origin')\n\n if cors_origin_allowed is not None:\n response.headers[b'Access-Control-Allow-Origin'] = \\\n cors_origin_allowed\n response.headers[b'Access-Control-Allow-Methods'] = \\\n b'POST, PUT, GET, DELETE, OPTIONS'\n response.headers[b'Access-Control-Allow-Headers'] = \\\n b'X-CKAN-API-KEY, Authorization, Content-Type'\n\n return response\n\n\ndef identify_user():\n u'''Try to identify the user\n If the user is identified then:\n g.user = user name (unicode)\n g.userobj = user object\n g.author = user name\n otherwise:\n g.user = None\n g.userobj = None\n g.author = user's IP address (unicode)\n\n Note: Remember, when running under Pylons, `g` is the Pylons `c` object\n '''\n # see if it was proxied first\n g.remote_addr = request.environ.get(u'HTTP_X_FORWARDED_FOR', u'')\n if not g.remote_addr:\n g.remote_addr = request.environ.get(u'REMOTE_ADDR',\n u'Unknown IP Address')\n\n # Authentication plugins get a chance to run here break as soon as a user\n # is identified.\n authenticators = p.PluginImplementations(p.IAuthenticator)\n if authenticators:\n for item in authenticators:\n item.identify()\n try:\n if g.user:\n break\n except AttributeError:\n continue\n\n # We haven't identified the user so try the default methods\n if not getattr(g, u'user', None):\n _identify_user_default()\n\n # If we have a user but not the userobj let's get the userobj. This means\n # that IAuthenticator extensions do not need to access the user model\n # directly.\n if g.user and not getattr(g, u'userobj', None):\n g.userobj = model.User.by_name(g.user)\n\n # general settings\n if g.user:\n g.author = g.user\n else:\n g.author = g.remote_addr\n g.author = text_type(g.author)\n\n\ndef _identify_user_default():\n u'''\n Identifies the user using two methods:\n a) If they logged into the web interface then repoze.who will\n set REMOTE_USER.\n b) For API calls they may set a header with an API key.\n '''\n\n # environ['REMOTE_USER'] is set by repoze.who if it authenticates a\n # user's cookie. 
But repoze.who doesn't check the user (still) exists\n # in our database - we need to do that here. (Another way would be\n # with an userid_checker, but that would mean another db access.\n # See: http://docs.repoze.org/who/1.0/narr.html#module-repoze.who\\\n # .plugins.sql )\n g.user = request.environ.get(u'REMOTE_USER', u'')\n if g.user:\n g.user = g.user.decode(u'utf8')\n g.userobj = model.User.by_name(g.user)\n\n if g.userobj is None or not g.userobj.is_active():\n\n # This occurs when a user that was still logged in is deleted, or\n # when you are logged in, clean db and then restart (or when you\n # change your username). There is no user object, so even though\n # repoze thinks you are logged in and your cookie has\n # ckan_display_name, we need to force user to logout and login\n # again to get the User object.\n\n ev = request.environ\n if u'repoze.who.plugins' in ev:\n pth = getattr(ev[u'repoze.who.plugins'][u'friendlyform'],\n u'logout_handler_path')\n redirect(pth)\n else:\n g.userobj = _get_user_for_apikey()\n if g.userobj is not None:\n g.user = g.userobj.name\n\n\ndef _get_user_for_apikey():\n apikey_header_name = config.get(APIKEY_HEADER_NAME_KEY,\n APIKEY_HEADER_NAME_DEFAULT)\n apikey = request.headers.get(apikey_header_name, u'')\n if not apikey:\n apikey = request.environ.get(apikey_header_name, u'')\n if not apikey:\n # For misunderstanding old documentation (now fixed).\n apikey = request.environ.get(u'HTTP_AUTHORIZATION', u'')\n if not apikey:\n apikey = request.environ.get(u'Authorization', u'')\n # Forget HTTP Auth credentials (they have spaces).\n if u' ' in apikey:\n apikey = u''\n if not apikey:\n return None\n apikey = apikey.decode(u'utf8', u'ignore')\n log.debug(u'Received API Key: %s' % apikey)\n query = model.Session.query(model.User)\n user = query.filter_by(apikey=apikey).first()\n return user\n", "path": "ckan/views/__init__.py"}]}
| 3,136 | 132 |
gh_patches_debug_6803
|
rasdani/github-patches
|
git_diff
|
xorbitsai__inference-510
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FEAT: support Mistral-7B-Instruct
### Is your feature request related to a problem? Please describe
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xinference/model/llm/ggml/ctransformers.py`
Content:
```
1 # Copyright 2022-2023 XProbe Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import logging
16 import os
17 from typing import TYPE_CHECKING, Iterator, Optional, Sequence, TypedDict, Union
18
19 if TYPE_CHECKING:
20 from ctransformers import AutoConfig
21
22 from ....types import Completion, CompletionChunk
23 from ..core import LLM
24 from ..llm_family import LLMFamilyV1, LLMSpecV1
25 from .ctransformers_util import generate_stream
26
27 logger = logging.getLogger(__name__)
28
29 # all supported models for Ctransformers with their model type.
30 # Please Strictly follows this name format when inputting new model to model_family.
31 MODEL_TYPE_FOR_CTRANSFORMERS = {
32 "gpt-2": "gpt2",
33 "gpt-j": "gptj",
34 "gpt4all-j": "gptj",
35 "gpt-neox": "gpt_neox",
36 "stablelm": "gpt_neox",
37 "llama": "llama",
38 "llama-2": "llama",
39 "mpt": "mpt",
40 "dolly-v2": "dolly-v2",
41 "replit": "replit",
42 "starcoder": "starcoder",
43 "starchat": "starcoder",
44 "falcon": "falcon",
45 }
46
47 # these two constants are subject to change for future development and ctransformers updates.
48 CTRANSFORMERS_SUPPORTED_MODEL = ["starcoder", "gpt-2"]
49
50 CTRANSFORMERS_GPU_SUPPORT = ["llama", "llama-2", "mpt", "falcon"]
51
52 SIZE_TO_GPU_LAYERS = {
53 3: 26,
54 7: 32,
55 13: 40,
56 30: 60,
57 65: 80,
58 }
59
60
61 class CtransformersModelConfig(TypedDict, total=False):
62 n_ctx: int
63 n_gpu_layers: int
64
65
66 class CtransformersGenerateConfig(TypedDict, total=False):
67 max_tokens: Optional[int]
68 top_k: Optional[int]
69 top_p: Optional[float]
70 temperature: Optional[float]
71 repetition_penalty: Optional[float]
72 last_n_tokens: Optional[int]
73 seed: Optional[int]
74 batch_size: Optional[int]
75 threads: Optional[int]
76 stop: Optional[Sequence[str]]
77 stream: Optional[bool]
78 reset: Optional[bool]
79
80
81 class CtransformersModel(LLM):
82 def __init__(
83 self,
84 model_uid: str,
85 model_family: "LLMFamilyV1",
86 model_spec: "LLMSpecV1",
87 quantization: str,
88 model_path: str,
89 ctransformers_model_config: Optional[CtransformersModelConfig],
90 ):
91 super().__init__(model_uid, model_family, model_spec, quantization, model_path)
92
93 self._model_type = None
94 closest_size = min(
95 SIZE_TO_GPU_LAYERS.keys(),
96 key=lambda x: abs(x - model_spec.model_size_in_billions),
97 )
98
99 self._model_family = model_family
100 self._model_uid = model_uid
101 self._llm = None
102
103 self._gpu_layers = SIZE_TO_GPU_LAYERS[closest_size]
104 self._ctransformer_model_config = self._sanitize_model_config(
105 model_path, ctransformers_model_config
106 )
107
108 def _sanitize_model_config(
109 self, model_path, ctransformers_model_config: Optional[CtransformersModelConfig]
110 ) -> "AutoConfig":
111 try:
112 from ctransformers import AutoConfig, Config
113 except ImportError:
114 error_message = (
115 "Failed to import module 'ctransformers - AutoConfig and Config'"
116 )
117
118 installation_guide = [
119 f"Please make sure 'ctransformers' is installed.",
120 f"You can install it by checking out the repository for command:"
121 f"https://github.com/marella/ctransformers",
122 ]
123
124 raise ImportError(f"{error_message}\n\n{''.join(installation_guide)}")
125
126 # if the model have customized config, we update it.
127 model_config_ret = Config()
128 potential_gpu_layers = None
129 if ctransformers_model_config:
130 potential_context_length = ctransformers_model_config.pop("n_ctx", None)
131 potential_gpu_layers = ctransformers_model_config.pop("n_gpu_layers", None)
132
133 model_config_ret.context_length = potential_context_length
134 model_config_ret.gpu_layers = potential_gpu_layers
135
136 # if user does not define gpu layers, we have to set it with our system if applicable.
137 if potential_gpu_layers is None:
138 if self._model_family.model_name not in CTRANSFORMERS_GPU_SUPPORT:
139 model_config_ret.gpu_layers = -1
140 elif self._is_darwin_and_apple_silicon():
141 model_config_ret.gpu_layers = 1
142 elif self._has_cuda_device():
143 model_config_ret.gpu_layers = self._gpu_layers
144
145 return AutoConfig(model_config_ret)
146
147 def _sanitize_generate_config(
148 self,
149 ctransformers_generate_config: Optional[CtransformersGenerateConfig],
150 ) -> CtransformersGenerateConfig:
151 # if the input config is not None, we try to copy the selected attributes to the ctransformersGenerateConfig.
152 if ctransformers_generate_config is None:
153 ctransformers_generate_config = CtransformersGenerateConfig()
154
155 # for our system, the threads will have to be set to 4
156 # all other parameters, if not specified, will be set to default when generate.
157 ctransformers_generate_config.setdefault("threads", 4)
158
159 return ctransformers_generate_config
160
161 def load(self):
162 try:
163 from ctransformers import AutoModelForCausalLM
164 except ImportError:
165 error_message = "Failed to import module 'ctransformers'"
166
167 installation_guide = [
168 f"Please make sure 'ctransformers' is installed.",
169 f"You can install it by checking out the repository for command."
170 f"https://github.com/marella/ctransformers",
171 ]
172
173 raise ImportError(f"{error_message}\n\n{''.join(installation_guide)}")
174
175 model_path = os.path.join(
176 self.model_path,
177 self.model_spec.model_file_name_template.format(
178 quantization=self.quantization
179 ),
180 )
181
182 self._model_type = self._determine_model_type()
183 self._llm = AutoModelForCausalLM.from_pretrained(
184 model_path_or_repo_id=model_path,
185 model_type=self._model_type,
186 config=self._ctransformer_model_config,
187 )
188
189 @classmethod
190 def match(
191 cls, llm_family: LLMFamilyV1, llm_spec: LLMSpecV1, quantization: str
192 ) -> bool:
193 if llm_spec.model_format != "ggmlv3":
194 return False
195 if llm_family.model_name not in CTRANSFORMERS_SUPPORTED_MODEL:
196 return False
197 if "generate" not in llm_family.model_ability:
198 return False
199 return True
200
201 def _determine_model_type(self):
202 if self._model_family.model_name not in MODEL_TYPE_FOR_CTRANSFORMERS:
203 raise ValueError(
204 f"The current model {self._model_family.model_name} is not supported, check your model name. "
205 )
206 return MODEL_TYPE_FOR_CTRANSFORMERS[self._model_family.model_name]
207
208 def generate(
209 self, prompt: str, generate_config_raw: CtransformersGenerateConfig
210 ) -> Union[Completion, Iterator[CompletionChunk]]:
211 def generator_wrapper(
212 _prompt: str,
213 _max_new_tokens: Union[int, None],
214 _generate_config: CtransformersGenerateConfig,
215 ) -> Iterator[CompletionChunk]:
216 assert self._model_uid is not None
217 for _completion_chunk, _ in generate_stream(
218 model=self._model_uid,
219 model_ref=self._llm,
220 prompt=_prompt,
221 max_new_tokens=_max_new_tokens,
222 **_generate_config,
223 ):
224 yield _completion_chunk
225
226 generate_config = self._sanitize_generate_config(generate_config_raw)
227
228 logger.debug(
229 "Enter generate, prompt: %s, generate config: %s", prompt, generate_config
230 )
231
232 max_new_tokens = generate_config.pop("max_tokens", None)
233
234 stream_or_not = generate_config.get("stream", False)
235 if stream_or_not:
236 return generator_wrapper(
237 _prompt=prompt,
238 _max_new_tokens=max_new_tokens,
239 _generate_config=generate_config,
240 )
241 else:
242 assert self.model_uid is not None
243 completion_chunk = None
244 completion_usage = None
245 for completion_chunk, completion_usage in generate_stream(
246 model=self.model_uid,
247 model_ref=self._llm,
248 prompt=prompt,
249 max_new_tokens=max_new_tokens,
250 **generate_config,
251 ):
252 pass
253
254 assert completion_chunk is not None
255 assert completion_usage is not None
256
257 completion = Completion(
258 id=completion_chunk["id"],
259 object=completion_chunk["object"],
260 created=completion_chunk["created"],
261 model=completion_chunk["model"],
262 choices=completion_chunk["choices"],
263 usage=completion_usage,
264 )
265
266 logger.debug(
267 "Generated, completion: %s, generate config: %s",
268 completion,
269 generate_config,
270 )
271
272 return completion
273
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/xinference/model/llm/ggml/ctransformers.py b/xinference/model/llm/ggml/ctransformers.py
--- a/xinference/model/llm/ggml/ctransformers.py
+++ b/xinference/model/llm/ggml/ctransformers.py
@@ -190,7 +190,7 @@
def match(
cls, llm_family: LLMFamilyV1, llm_spec: LLMSpecV1, quantization: str
) -> bool:
- if llm_spec.model_format != "ggmlv3":
+ if llm_spec.model_format != "ggmlv3" and llm_spec.model_format != "ggufv2":
return False
if llm_family.model_name not in CTRANSFORMERS_SUPPORTED_MODEL:
return False
|
{"golden_diff": "diff --git a/xinference/model/llm/ggml/ctransformers.py b/xinference/model/llm/ggml/ctransformers.py\n--- a/xinference/model/llm/ggml/ctransformers.py\n+++ b/xinference/model/llm/ggml/ctransformers.py\n@@ -190,7 +190,7 @@\n def match(\n cls, llm_family: LLMFamilyV1, llm_spec: LLMSpecV1, quantization: str\n ) -> bool:\n- if llm_spec.model_format != \"ggmlv3\":\n+ if llm_spec.model_format != \"ggmlv3\" and llm_spec.model_format != \"ggufv2\":\n return False\n if llm_family.model_name not in CTRANSFORMERS_SUPPORTED_MODEL:\n return False\n", "issue": "FEAT: support Mistral-7B-Instruct\n### Is your feature request related to a problem? Please describe\r\nhttps://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1\r\n\r\n\n", "before_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nimport os\nfrom typing import TYPE_CHECKING, Iterator, Optional, Sequence, TypedDict, Union\n\nif TYPE_CHECKING:\n from ctransformers import AutoConfig\n\nfrom ....types import Completion, CompletionChunk\nfrom ..core import LLM\nfrom ..llm_family import LLMFamilyV1, LLMSpecV1\nfrom .ctransformers_util import generate_stream\n\nlogger = logging.getLogger(__name__)\n\n# all supported models for Ctransformers with their model type.\n# Please Strictly follows this name format when inputting new model to model_family.\nMODEL_TYPE_FOR_CTRANSFORMERS = {\n \"gpt-2\": \"gpt2\",\n \"gpt-j\": \"gptj\",\n \"gpt4all-j\": \"gptj\",\n \"gpt-neox\": \"gpt_neox\",\n \"stablelm\": \"gpt_neox\",\n \"llama\": \"llama\",\n \"llama-2\": \"llama\",\n \"mpt\": \"mpt\",\n \"dolly-v2\": \"dolly-v2\",\n \"replit\": \"replit\",\n \"starcoder\": \"starcoder\",\n \"starchat\": \"starcoder\",\n \"falcon\": \"falcon\",\n}\n\n# these two constants subjects to change for future development and ctransformers updates.\nCTRANSFORMERS_SUPPORTED_MODEL = [\"starcoder\", \"gpt-2\"]\n\nCTRANSFORMERS_GPU_SUPPORT = [\"llama\", \"llama-2\", \"mpt\", \"falcon\"]\n\nSIZE_TO_GPU_LAYERS = {\n 3: 26,\n 7: 32,\n 13: 40,\n 30: 60,\n 65: 80,\n}\n\n\nclass CtransformersModelConfig(TypedDict, total=False):\n n_ctx: int\n n_gpu_layers: int\n\n\nclass CtransformersGenerateConfig(TypedDict, total=False):\n max_tokens: Optional[int]\n top_k: Optional[int]\n top_p: Optional[float]\n temperature: Optional[float]\n repetition_penalty: Optional[float]\n last_n_tokens: Optional[int]\n seed: Optional[int]\n batch_size: Optional[int]\n threads: Optional[int]\n stop: Optional[Sequence[str]]\n stream: Optional[bool]\n reset: Optional[bool]\n\n\nclass CtransformersModel(LLM):\n def __init__(\n self,\n model_uid: str,\n model_family: \"LLMFamilyV1\",\n model_spec: \"LLMSpecV1\",\n quantization: str,\n model_path: str,\n ctransformers_model_config: Optional[CtransformersModelConfig],\n ):\n super().__init__(model_uid, model_family, model_spec, quantization, model_path)\n\n self._model_type = None\n closest_size = min(\n SIZE_TO_GPU_LAYERS.keys(),\n key=lambda x: abs(x - 
model_spec.model_size_in_billions),\n )\n\n self._model_family = model_family\n self._model_uid = model_uid\n self._llm = None\n\n self._gpu_layers = SIZE_TO_GPU_LAYERS[closest_size]\n self._ctransformer_model_config = self._sanitize_model_config(\n model_path, ctransformers_model_config\n )\n\n def _sanitize_model_config(\n self, model_path, ctransformers_model_config: Optional[CtransformersModelConfig]\n ) -> \"AutoConfig\":\n try:\n from ctransformers import AutoConfig, Config\n except ImportError:\n error_message = (\n \"Failed to import module 'ctransformers - AutoConfig and Config'\"\n )\n\n installation_guide = [\n f\"Please make sure 'ctransformers' is installed.\",\n f\"You can install it by checking out the repository for command:\"\n f\"https://github.com/marella/ctransformers\",\n ]\n\n raise ImportError(f\"{error_message}\\n\\n{''.join(installation_guide)}\")\n\n # if the model have customized config, we update it.\n model_config_ret = Config()\n potential_gpu_layers = None\n if ctransformers_model_config:\n potential_context_length = ctransformers_model_config.pop(\"n_ctx\", None)\n potential_gpu_layers = ctransformers_model_config.pop(\"n_gpu_layers\", None)\n\n model_config_ret.context_length = potential_context_length\n model_config_ret.gpu_layers = potential_gpu_layers\n\n # if user does not define gpu layers, we have to set it with our system if applicable.\n if potential_gpu_layers is None:\n if self._model_family.model_name not in CTRANSFORMERS_GPU_SUPPORT:\n model_config_ret.gpu_layers = -1\n elif self._is_darwin_and_apple_silicon():\n model_config_ret.gpu_layers = 1\n elif self._has_cuda_device():\n model_config_ret.gpu_layers = self._gpu_layers\n\n return AutoConfig(model_config_ret)\n\n def _sanitize_generate_config(\n self,\n ctransformers_generate_config: Optional[CtransformersGenerateConfig],\n ) -> CtransformersGenerateConfig:\n # if the input config is not None, we try to copy the selected attributes to the ctransformersGenerateConfig.\n if ctransformers_generate_config is None:\n ctransformers_generate_config = CtransformersGenerateConfig()\n\n # for our system, the threads will have to be set to 4\n # all other parameters, if not specified, will be set to default when generate.\n ctransformers_generate_config.setdefault(\"threads\", 4)\n\n return ctransformers_generate_config\n\n def load(self):\n try:\n from ctransformers import AutoModelForCausalLM\n except ImportError:\n error_message = \"Failed to import module 'ctransformers'\"\n\n installation_guide = [\n f\"Please make sure 'ctransformers' is installed.\",\n f\"You can install it by checking out the repository for command.\"\n f\"https://github.com/marella/ctransformers\",\n ]\n\n raise ImportError(f\"{error_message}\\n\\n{''.join(installation_guide)}\")\n\n model_path = os.path.join(\n self.model_path,\n self.model_spec.model_file_name_template.format(\n quantization=self.quantization\n ),\n )\n\n self._model_type = self._determine_model_type()\n self._llm = AutoModelForCausalLM.from_pretrained(\n model_path_or_repo_id=model_path,\n model_type=self._model_type,\n config=self._ctransformer_model_config,\n )\n\n @classmethod\n def match(\n cls, llm_family: LLMFamilyV1, llm_spec: LLMSpecV1, quantization: str\n ) -> bool:\n if llm_spec.model_format != \"ggmlv3\":\n return False\n if llm_family.model_name not in CTRANSFORMERS_SUPPORTED_MODEL:\n return False\n if \"generate\" not in llm_family.model_ability:\n return False\n return True\n\n def _determine_model_type(self):\n if 
self._model_family.model_name not in MODEL_TYPE_FOR_CTRANSFORMERS:\n raise ValueError(\n f\"The current model {self._model_family.model_name} is not supported, check your model name. \"\n )\n return MODEL_TYPE_FOR_CTRANSFORMERS[self._model_family.model_name]\n\n def generate(\n self, prompt: str, generate_config_raw: CtransformersGenerateConfig\n ) -> Union[Completion, Iterator[CompletionChunk]]:\n def generator_wrapper(\n _prompt: str,\n _max_new_tokens: Union[int, None],\n _generate_config: CtransformersGenerateConfig,\n ) -> Iterator[CompletionChunk]:\n assert self._model_uid is not None\n for _completion_chunk, _ in generate_stream(\n model=self._model_uid,\n model_ref=self._llm,\n prompt=_prompt,\n max_new_tokens=_max_new_tokens,\n **_generate_config,\n ):\n yield _completion_chunk\n\n generate_config = self._sanitize_generate_config(generate_config_raw)\n\n logger.debug(\n \"Enter generate, prompt: %s, generate config: %s\", prompt, generate_config\n )\n\n max_new_tokens = generate_config.pop(\"max_tokens\", None)\n\n stream_or_not = generate_config.get(\"stream\", False)\n if stream_or_not:\n return generator_wrapper(\n _prompt=prompt,\n _max_new_tokens=max_new_tokens,\n _generate_config=generate_config,\n )\n else:\n assert self.model_uid is not None\n completion_chunk = None\n completion_usage = None\n for completion_chunk, completion_usage in generate_stream(\n model=self.model_uid,\n model_ref=self._llm,\n prompt=prompt,\n max_new_tokens=max_new_tokens,\n **generate_config,\n ):\n pass\n\n assert completion_chunk is not None\n assert completion_usage is not None\n\n completion = Completion(\n id=completion_chunk[\"id\"],\n object=completion_chunk[\"object\"],\n created=completion_chunk[\"created\"],\n model=completion_chunk[\"model\"],\n choices=completion_chunk[\"choices\"],\n usage=completion_usage,\n )\n\n logger.debug(\n \"Generated, completion: %s, generate config: %s\",\n completion,\n generate_config,\n )\n\n return completion\n", "path": "xinference/model/llm/ggml/ctransformers.py"}], "after_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nimport os\nfrom typing import TYPE_CHECKING, Iterator, Optional, Sequence, TypedDict, Union\n\nif TYPE_CHECKING:\n from ctransformers import AutoConfig\n\nfrom ....types import Completion, CompletionChunk\nfrom ..core import LLM\nfrom ..llm_family import LLMFamilyV1, LLMSpecV1\nfrom .ctransformers_util import generate_stream\n\nlogger = logging.getLogger(__name__)\n\n# all supported models for Ctransformers with their model type.\n# Please Strictly follows this name format when inputting new model to model_family.\nMODEL_TYPE_FOR_CTRANSFORMERS = {\n \"gpt-2\": \"gpt2\",\n \"gpt-j\": \"gptj\",\n \"gpt4all-j\": \"gptj\",\n \"gpt-neox\": \"gpt_neox\",\n \"stablelm\": \"gpt_neox\",\n \"llama\": \"llama\",\n \"llama-2\": \"llama\",\n \"mpt\": \"mpt\",\n \"dolly-v2\": \"dolly-v2\",\n \"replit\": \"replit\",\n \"starcoder\": \"starcoder\",\n \"starchat\": 
\"starcoder\",\n \"falcon\": \"falcon\",\n}\n\n# these two constants subjects to change for future development and ctransformers updates.\nCTRANSFORMERS_SUPPORTED_MODEL = [\"starcoder\", \"gpt-2\"]\n\nCTRANSFORMERS_GPU_SUPPORT = [\"llama\", \"llama-2\", \"mpt\", \"falcon\"]\n\nSIZE_TO_GPU_LAYERS = {\n 3: 26,\n 7: 32,\n 13: 40,\n 30: 60,\n 65: 80,\n}\n\n\nclass CtransformersModelConfig(TypedDict, total=False):\n n_ctx: int\n n_gpu_layers: int\n\n\nclass CtransformersGenerateConfig(TypedDict, total=False):\n max_tokens: Optional[int]\n top_k: Optional[int]\n top_p: Optional[float]\n temperature: Optional[float]\n repetition_penalty: Optional[float]\n last_n_tokens: Optional[int]\n seed: Optional[int]\n batch_size: Optional[int]\n threads: Optional[int]\n stop: Optional[Sequence[str]]\n stream: Optional[bool]\n reset: Optional[bool]\n\n\nclass CtransformersModel(LLM):\n def __init__(\n self,\n model_uid: str,\n model_family: \"LLMFamilyV1\",\n model_spec: \"LLMSpecV1\",\n quantization: str,\n model_path: str,\n ctransformers_model_config: Optional[CtransformersModelConfig],\n ):\n super().__init__(model_uid, model_family, model_spec, quantization, model_path)\n\n self._model_type = None\n closest_size = min(\n SIZE_TO_GPU_LAYERS.keys(),\n key=lambda x: abs(x - model_spec.model_size_in_billions),\n )\n\n self._model_family = model_family\n self._model_uid = model_uid\n self._llm = None\n\n self._gpu_layers = SIZE_TO_GPU_LAYERS[closest_size]\n self._ctransformer_model_config = self._sanitize_model_config(\n model_path, ctransformers_model_config\n )\n\n def _sanitize_model_config(\n self, model_path, ctransformers_model_config: Optional[CtransformersModelConfig]\n ) -> \"AutoConfig\":\n try:\n from ctransformers import AutoConfig, Config\n except ImportError:\n error_message = (\n \"Failed to import module 'ctransformers - AutoConfig and Config'\"\n )\n\n installation_guide = [\n f\"Please make sure 'ctransformers' is installed.\",\n f\"You can install it by checking out the repository for command:\"\n f\"https://github.com/marella/ctransformers\",\n ]\n\n raise ImportError(f\"{error_message}\\n\\n{''.join(installation_guide)}\")\n\n # if the model have customized config, we update it.\n model_config_ret = Config()\n potential_gpu_layers = None\n if ctransformers_model_config:\n potential_context_length = ctransformers_model_config.pop(\"n_ctx\", None)\n potential_gpu_layers = ctransformers_model_config.pop(\"n_gpu_layers\", None)\n\n model_config_ret.context_length = potential_context_length\n model_config_ret.gpu_layers = potential_gpu_layers\n\n # if user does not define gpu layers, we have to set it with our system if applicable.\n if potential_gpu_layers is None:\n if self._model_family.model_name not in CTRANSFORMERS_GPU_SUPPORT:\n model_config_ret.gpu_layers = -1\n elif self._is_darwin_and_apple_silicon():\n model_config_ret.gpu_layers = 1\n elif self._has_cuda_device():\n model_config_ret.gpu_layers = self._gpu_layers\n\n return AutoConfig(model_config_ret)\n\n def _sanitize_generate_config(\n self,\n ctransformers_generate_config: Optional[CtransformersGenerateConfig],\n ) -> CtransformersGenerateConfig:\n # if the input config is not None, we try to copy the selected attributes to the ctransformersGenerateConfig.\n if ctransformers_generate_config is None:\n ctransformers_generate_config = CtransformersGenerateConfig()\n\n # for our system, the threads will have to be set to 4\n # all other parameters, if not specified, will be set to default when generate.\n 
ctransformers_generate_config.setdefault(\"threads\", 4)\n\n return ctransformers_generate_config\n\n def load(self):\n try:\n from ctransformers import AutoModelForCausalLM\n except ImportError:\n error_message = \"Failed to import module 'ctransformers'\"\n\n installation_guide = [\n f\"Please make sure 'ctransformers' is installed.\",\n f\"You can install it by checking out the repository for command.\"\n f\"https://github.com/marella/ctransformers\",\n ]\n\n raise ImportError(f\"{error_message}\\n\\n{''.join(installation_guide)}\")\n\n model_path = os.path.join(\n self.model_path,\n self.model_spec.model_file_name_template.format(\n quantization=self.quantization\n ),\n )\n\n self._model_type = self._determine_model_type()\n self._llm = AutoModelForCausalLM.from_pretrained(\n model_path_or_repo_id=model_path,\n model_type=self._model_type,\n config=self._ctransformer_model_config,\n )\n\n @classmethod\n def match(\n cls, llm_family: LLMFamilyV1, llm_spec: LLMSpecV1, quantization: str\n ) -> bool:\n if llm_spec.model_format != \"ggmlv3\" and llm_spec.model_format != \"ggufv2\":\n return False\n if llm_family.model_name not in CTRANSFORMERS_SUPPORTED_MODEL:\n return False\n if \"generate\" not in llm_family.model_ability:\n return False\n return True\n\n def _determine_model_type(self):\n if self._model_family.model_name not in MODEL_TYPE_FOR_CTRANSFORMERS:\n raise ValueError(\n f\"The current model {self._model_family.model_name} is not supported, check your model name. \"\n )\n return MODEL_TYPE_FOR_CTRANSFORMERS[self._model_family.model_name]\n\n def generate(\n self, prompt: str, generate_config_raw: CtransformersGenerateConfig\n ) -> Union[Completion, Iterator[CompletionChunk]]:\n def generator_wrapper(\n _prompt: str,\n _max_new_tokens: Union[int, None],\n _generate_config: CtransformersGenerateConfig,\n ) -> Iterator[CompletionChunk]:\n assert self._model_uid is not None\n for _completion_chunk, _ in generate_stream(\n model=self._model_uid,\n model_ref=self._llm,\n prompt=_prompt,\n max_new_tokens=_max_new_tokens,\n **_generate_config,\n ):\n yield _completion_chunk\n\n generate_config = self._sanitize_generate_config(generate_config_raw)\n\n logger.debug(\n \"Enter generate, prompt: %s, generate config: %s\", prompt, generate_config\n )\n\n max_new_tokens = generate_config.pop(\"max_tokens\", None)\n\n stream_or_not = generate_config.get(\"stream\", False)\n if stream_or_not:\n return generator_wrapper(\n _prompt=prompt,\n _max_new_tokens=max_new_tokens,\n _generate_config=generate_config,\n )\n else:\n assert self.model_uid is not None\n completion_chunk = None\n completion_usage = None\n for completion_chunk, completion_usage in generate_stream(\n model=self.model_uid,\n model_ref=self._llm,\n prompt=prompt,\n max_new_tokens=max_new_tokens,\n **generate_config,\n ):\n pass\n\n assert completion_chunk is not None\n assert completion_usage is not None\n\n completion = Completion(\n id=completion_chunk[\"id\"],\n object=completion_chunk[\"object\"],\n created=completion_chunk[\"created\"],\n model=completion_chunk[\"model\"],\n choices=completion_chunk[\"choices\"],\n usage=completion_usage,\n )\n\n logger.debug(\n \"Generated, completion: %s, generate config: %s\",\n completion,\n generate_config,\n )\n\n return completion\n", "path": "xinference/model/llm/ggml/ctransformers.py"}]}
| 3,186 | 194 |
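
A minimal, hypothetical sketch of the behaviour change captured in the golden diff for the entry above: the `match` check now accepts the `ggufv2` model format alongside `ggmlv3`. The helper below is a simplified stand-in that uses plain strings instead of the project's `LLMFamilyV1`/`LLMSpecV1` objects, so the names `SUPPORTED_FORMATS` and `matches` are illustrative only and not part of the xinference API.

```python
# Illustrative stand-ins; the real check lives in CtransformersModel.match().
SUPPORTED_MODEL_NAMES = {"starcoder", "gpt-2"}   # mirrors CTRANSFORMERS_SUPPORTED_MODEL
SUPPORTED_FORMATS = {"ggmlv3", "ggufv2"}         # "ggufv2" is what the patch adds

def matches(model_format: str, model_name: str, abilities: list) -> bool:
    """Return True if the triple would pass the patched format/name/ability check."""
    if model_format not in SUPPORTED_FORMATS:
        return False
    if model_name not in SUPPORTED_MODEL_NAMES:
        return False
    return "generate" in abilities

print(matches("ggufv2", "starcoder", ["generate"]))  # True only after the patch
print(matches("ggmlv3", "gpt-2", ["generate"]))      # True before and after
```
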
gh_patches_debug_27171
|
rasdani/github-patches
|
git_diff
|
projectmesa__mesa-339
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Advanced tutorial chart
I have been unable to get the chart showing the gini coefficient to open up in the advanced tutorial to open up. Everything else is working, the chart just does not appear. Please advise,
Thank you.
Tom
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesa/visualization/ModularVisualization.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 ModularServer
4 =============
5
6 A visualization server which renders a model via one or more elements.
7
8 The concept for the modular visualization server as follows:
9 A visualization is composed of VisualizationElements, each of which defines how
10 to generate some visualization from a model instance and render it on the
11 client. VisualizationElements may be anything from a simple text display to
12 a multilayered HTML5 canvas.
13
14 The actual server is launched with one or more VisualizationElements;
15 it runs the model object through each of them, generating data to be sent to
16 the client. The client page is also generated based on the JavaScript code
17 provided by each element.
18
19 This file consists of the following classes:
20
21 VisualizationElement: Parent class for all other visualization elements, with
22 the minimal necessary options.
23 PageHandler: The handler for the visualization page, generated from a template
24 and built from the various visualization elements.
25 SocketHandler: Handles the websocket connection between the client page and
26 the server.
27 ModularServer: The overall visualization application class which stores and
28 controls the model and visualization instance.
29
30
31 ModularServer should *not* need to be subclassed on a model-by-model basis; it
32 should be primarily a pass-through for VisualizationElement subclasses, which
33 define the actual visualization specifics.
34
35 For example, suppose we have created two visualization elements for our model,
36 called canvasvis and graphvis; we would launch a server with:
37
38 server = ModularServer(MyModel, [canvasvis, graphvis], name="My Model")
39 server.launch()
40
41 The client keeps track of what step it is showing. Clicking the Step button in
42 the browser sends a message requesting the viz_state corresponding to the next
43 step position, which is then sent back to the client via the websocket.
44
45 The websocket protocol is as follows:
46 Each message is a JSON object, with a "type" property which defines the rest of
47 the structure.
48
49 Server -> Client:
50 Send over the model state to visualize.
51 Model state is a list, with each element corresponding to a div; each div
52 is expected to have a render function associated with it, which knows how
53 to render that particular data. The example below includes two elements:
54 the first is data for a CanvasGrid, the second for a raw text display.
55
56 {
57 "type": "viz_state",
58 "data": [{0:[ {"Shape": "circle", "x": 0, "y": 0, "r": 0.5,
59 "Color": "#AAAAAA", "Filled": "true", "Layer": 0,
60 "text": 'A', "text_color": "white" }]},
61 "Shape Count: 1"]
62 }
63
64 Informs the client that the model is over.
65 {"type": "end"}
66
67 Client -> Server:
68 Reset the model.
69 TODO: Allow this to come with parameters
70 {
71 "type": "reset"
72 }
73
74 Get a given state.
75 {
76 "type": "get_step",
77 "step:" index of the step to get.
78 }
79
80 """
81 import os
82
83 import tornado.ioloop
84 import tornado.web
85 import tornado.websocket
86 import tornado.escape
87 import tornado.gen
88
89 import webbrowser
90
91 # Suppress several pylint warnings for this file.
92 # Attributes being defined outside of init is a Tornado feature.
93 # pylint: disable=attribute-defined-outside-init
94
95
96 class VisualizationElement:
97 """
98 Defines an element of the visualization.
99
100 Attributes:
101 package_includes: A list of external JavaScript files to include that
102 are part of the Mesa packages.
103 local_includes: A list of JavaScript files that are local to the
104 directory that the server is being run in.
105 js_code: A JavaScript code string to instantiate the element.
106
107 Methods:
108 render: Takes a model object, and produces JSON data which can be sent
109 to the client.
110
111 """
112
113 package_includes = []
114 local_includes = []
115 js_code = ''
116 render_args = {}
117
118 def __init__(self):
119 pass
120
121 def render(self, model):
122 """ Build visualization data from a model object.
123
124 Args:
125 model: A model object
126
127 Returns:
128 A JSON-ready object.
129
130 """
131 return "<b>VisualizationElement goes here</b>."
132
133 # =============================================================================
134 # Actual Tornado code starts here:
135
136
137 class PageHandler(tornado.web.RequestHandler):
138 """ Handler for the HTML template which holds the visualization. """
139
140 def get(self):
141 elements = self.application.visualization_elements
142 for i, element in enumerate(elements):
143 element.index = i
144 self.render("modular_template.html", port=self.application.port,
145 model_name=self.application.model_name,
146 package_includes=self.application.package_includes,
147 local_includes=self.application.local_includes,
148 scripts=self.application.js_code)
149
150
151 class SocketHandler(tornado.websocket.WebSocketHandler):
152 """ Handler for websocket. """
153 def open(self):
154 if self.application.verbose:
155 print("Socket opened!")
156
157 def check_origin(self, origin):
158 return True
159
160 def on_message(self, message):
161 """ Receiving a message from the websocket, parse, and act accordingly.
162
163 """
164 if self.application.verbose:
165 print(message)
166 msg = tornado.escape.json_decode(message)
167
168 if msg["type"] == "get_step":
169 self.application.model.step()
170 self.write_message({"type": "viz_state",
171 "data": self.application.render_model()})
172
173 elif msg["type"] == "reset":
174 self.application.reset_model()
175 self.write_message({"type": "viz_state",
176 "data": self.application.render_model()})
177
178 else:
179 if self.application.verbose:
180 print("Unexpected message!")
181
182
183 class ModularServer(tornado.web.Application):
184 """ Main visualization application. """
185 verbose = True
186
187 model_name = "Mesa Model"
188 model_cls = None # A model class
189 portrayal_method = None
190 port = 8521 # Default port to listen on
191 canvas_width = 500
192 canvas_height = 500
193 grid_height = 0
194 grid_width = 0
195
196 max_steps = 100000
197
198 model_args = ()
199 model_kwargs = {}
200
201 # Handlers and other globals:
202 page_handler = (r'/', PageHandler)
203 socket_handler = (r'/ws', SocketHandler)
204 static_handler = (r'/static/(.*)', tornado.web.StaticFileHandler,
205 {"path": os.path.dirname(__file__) + "/templates"})
206 local_handler = (r'/local/(.*)', tornado.web.StaticFileHandler,
207 {"path": ''})
208
209 handlers = [page_handler, socket_handler, static_handler, local_handler]
210
211 settings = {"debug": True,
212 "template_path": os.path.dirname(__file__) + "/templates"}
213
214 def __init__(self, model_cls, visualization_elements, name="Mesa Model",
215 *args, **kwargs):
216 """ Create a new visualization server with the given elements. """
217 # Prep visualization elements:
218 self.visualization_elements = visualization_elements
219 self.package_includes = set()
220 self.local_includes = set()
221 self.js_code = []
222 for element in self.visualization_elements:
223 for include_file in element.package_includes:
224 self.package_includes.add(include_file)
225 for include_file in element.local_includes:
226 self.local_includes.add(include_file)
227 self.js_code.append(element.js_code)
228
229 # Initializing the model
230 self.model_name = name
231 self.model_cls = model_cls
232
233 self.model_args = args
234 self.model_kwargs = kwargs
235 self.reset_model()
236
237 # Initializing the application itself:
238 super().__init__(self.handlers, **self.settings)
239
240 def reset_model(self):
241 """ Reinstantiate the model object, using the current parameters. """
242 self.model = self.model_cls(*self.model_args, **self.model_kwargs)
243
244 def render_model(self):
245 """ Turn the current state of the model into a dictionary of
246 visualizations
247
248 """
249 visualization_state = []
250 for element in self.visualization_elements:
251 element_state = element.render(self.model)
252 visualization_state.append(element_state)
253 return visualization_state
254
255 def launch(self, port=None):
256 """ Run the app. """
257 if port is not None:
258 self.port = port
259 url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)
260 print('Interface starting at {url}'.format(url=url))
261 self.listen(self.port)
262 webbrowser.open(url)
263 tornado.ioloop.IOLoop.instance().start()
264
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mesa/visualization/ModularVisualization.py b/mesa/visualization/ModularVisualization.py
--- a/mesa/visualization/ModularVisualization.py
+++ b/mesa/visualization/ModularVisualization.py
@@ -80,6 +80,7 @@
"""
import os
+import tornado.autoreload
import tornado.ioloop
import tornado.web
import tornado.websocket
@@ -209,6 +210,7 @@
handlers = [page_handler, socket_handler, static_handler, local_handler]
settings = {"debug": True,
+ "autoreload": False,
"template_path": os.path.dirname(__file__) + "/templates"}
def __init__(self, model_cls, visualization_elements, name="Mesa Model",
@@ -254,10 +256,13 @@
def launch(self, port=None):
""" Run the app. """
+ startLoop = not tornado.ioloop.IOLoop.initialized()
if port is not None:
self.port = port
url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)
print('Interface starting at {url}'.format(url=url))
self.listen(self.port)
webbrowser.open(url)
- tornado.ioloop.IOLoop.instance().start()
+ tornado.autoreload.start()
+ if startLoop:
+ tornado.ioloop.IOLoop.instance().start()
|
{"golden_diff": "diff --git a/mesa/visualization/ModularVisualization.py b/mesa/visualization/ModularVisualization.py\n--- a/mesa/visualization/ModularVisualization.py\n+++ b/mesa/visualization/ModularVisualization.py\n@@ -80,6 +80,7 @@\n \"\"\"\n import os\n \n+import tornado.autoreload\n import tornado.ioloop\n import tornado.web\n import tornado.websocket\n@@ -209,6 +210,7 @@\n handlers = [page_handler, socket_handler, static_handler, local_handler]\n \n settings = {\"debug\": True,\n+ \"autoreload\": False,\n \"template_path\": os.path.dirname(__file__) + \"/templates\"}\n \n def __init__(self, model_cls, visualization_elements, name=\"Mesa Model\",\n@@ -254,10 +256,13 @@\n \n def launch(self, port=None):\n \"\"\" Run the app. \"\"\"\n+ startLoop = not tornado.ioloop.IOLoop.initialized()\n if port is not None:\n self.port = port\n url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)\n print('Interface starting at {url}'.format(url=url))\n self.listen(self.port)\n webbrowser.open(url)\n- tornado.ioloop.IOLoop.instance().start()\n+ tornado.autoreload.start()\n+ if startLoop:\n+ tornado.ioloop.IOLoop.instance().start()\n", "issue": "Advanced tutorial chart\nI have been unable to get the chart showing the gini coefficient to open up in the advanced tutorial to open up. Everything else is working, the chart just does not appear. Please advise, \r\n\r\nThank you.\r\n\r\nTom \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nModularServer\n=============\n\nA visualization server which renders a model via one or more elements.\n\nThe concept for the modular visualization server as follows:\nA visualization is composed of VisualizationElements, each of which defines how\nto generate some visualization from a model instance and render it on the\nclient. VisualizationElements may be anything from a simple text display to\na multilayered HTML5 canvas.\n\nThe actual server is launched with one or more VisualizationElements;\nit runs the model object through each of them, generating data to be sent to\nthe client. The client page is also generated based on the JavaScript code\nprovided by each element.\n\nThis file consists of the following classes:\n\nVisualizationElement: Parent class for all other visualization elements, with\n the minimal necessary options.\nPageHandler: The handler for the visualization page, generated from a template\n and built from the various visualization elements.\nSocketHandler: Handles the websocket connection between the client page and\n the server.\nModularServer: The overall visualization application class which stores and\n controls the model and visualization instance.\n\n\nModularServer should *not* need to be subclassed on a model-by-model basis; it\nshould be primarily a pass-through for VisualizationElement subclasses, which\ndefine the actual visualization specifics.\n\nFor example, suppose we have created two visualization elements for our model,\ncalled canvasvis and graphvis; we would launch a server with:\n\n server = ModularServer(MyModel, [canvasvis, graphvis], name=\"My Model\")\n server.launch()\n\nThe client keeps track of what step it is showing. 
Clicking the Step button in\nthe browser sends a message requesting the viz_state corresponding to the next\nstep position, which is then sent back to the client via the websocket.\n\nThe websocket protocol is as follows:\nEach message is a JSON object, with a \"type\" property which defines the rest of\nthe structure.\n\nServer -> Client:\n Send over the model state to visualize.\n Model state is a list, with each element corresponding to a div; each div\n is expected to have a render function associated with it, which knows how\n to render that particular data. The example below includes two elements:\n the first is data for a CanvasGrid, the second for a raw text display.\n\n {\n \"type\": \"viz_state\",\n \"data\": [{0:[ {\"Shape\": \"circle\", \"x\": 0, \"y\": 0, \"r\": 0.5,\n \"Color\": \"#AAAAAA\", \"Filled\": \"true\", \"Layer\": 0,\n \"text\": 'A', \"text_color\": \"white\" }]},\n \"Shape Count: 1\"]\n }\n\n Informs the client that the model is over.\n {\"type\": \"end\"}\n\nClient -> Server:\n Reset the model.\n TODO: Allow this to come with parameters\n {\n \"type\": \"reset\"\n }\n\n Get a given state.\n {\n \"type\": \"get_step\",\n \"step:\" index of the step to get.\n }\n\n\"\"\"\nimport os\n\nimport tornado.ioloop\nimport tornado.web\nimport tornado.websocket\nimport tornado.escape\nimport tornado.gen\n\nimport webbrowser\n\n# Suppress several pylint warnings for this file.\n# Attributes being defined outside of init is a Tornado feature.\n# pylint: disable=attribute-defined-outside-init\n\n\nclass VisualizationElement:\n \"\"\"\n Defines an element of the visualization.\n\n Attributes:\n package_includes: A list of external JavaScript files to include that\n are part of the Mesa packages.\n local_includes: A list of JavaScript files that are local to the\n directory that the server is being run in.\n js_code: A JavaScript code string to instantiate the element.\n\n Methods:\n render: Takes a model object, and produces JSON data which can be sent\n to the client.\n\n \"\"\"\n\n package_includes = []\n local_includes = []\n js_code = ''\n render_args = {}\n\n def __init__(self):\n pass\n\n def render(self, model):\n \"\"\" Build visualization data from a model object.\n\n Args:\n model: A model object\n\n Returns:\n A JSON-ready object.\n\n \"\"\"\n return \"<b>VisualizationElement goes here</b>.\"\n\n# =============================================================================\n# Actual Tornado code starts here:\n\n\nclass PageHandler(tornado.web.RequestHandler):\n \"\"\" Handler for the HTML template which holds the visualization. \"\"\"\n\n def get(self):\n elements = self.application.visualization_elements\n for i, element in enumerate(elements):\n element.index = i\n self.render(\"modular_template.html\", port=self.application.port,\n model_name=self.application.model_name,\n package_includes=self.application.package_includes,\n local_includes=self.application.local_includes,\n scripts=self.application.js_code)\n\n\nclass SocketHandler(tornado.websocket.WebSocketHandler):\n \"\"\" Handler for websocket. 
\"\"\"\n def open(self):\n if self.application.verbose:\n print(\"Socket opened!\")\n\n def check_origin(self, origin):\n return True\n\n def on_message(self, message):\n \"\"\" Receiving a message from the websocket, parse, and act accordingly.\n\n \"\"\"\n if self.application.verbose:\n print(message)\n msg = tornado.escape.json_decode(message)\n\n if msg[\"type\"] == \"get_step\":\n self.application.model.step()\n self.write_message({\"type\": \"viz_state\",\n \"data\": self.application.render_model()})\n\n elif msg[\"type\"] == \"reset\":\n self.application.reset_model()\n self.write_message({\"type\": \"viz_state\",\n \"data\": self.application.render_model()})\n\n else:\n if self.application.verbose:\n print(\"Unexpected message!\")\n\n\nclass ModularServer(tornado.web.Application):\n \"\"\" Main visualization application. \"\"\"\n verbose = True\n\n model_name = \"Mesa Model\"\n model_cls = None # A model class\n portrayal_method = None\n port = 8521 # Default port to listen on\n canvas_width = 500\n canvas_height = 500\n grid_height = 0\n grid_width = 0\n\n max_steps = 100000\n\n model_args = ()\n model_kwargs = {}\n\n # Handlers and other globals:\n page_handler = (r'/', PageHandler)\n socket_handler = (r'/ws', SocketHandler)\n static_handler = (r'/static/(.*)', tornado.web.StaticFileHandler,\n {\"path\": os.path.dirname(__file__) + \"/templates\"})\n local_handler = (r'/local/(.*)', tornado.web.StaticFileHandler,\n {\"path\": ''})\n\n handlers = [page_handler, socket_handler, static_handler, local_handler]\n\n settings = {\"debug\": True,\n \"template_path\": os.path.dirname(__file__) + \"/templates\"}\n\n def __init__(self, model_cls, visualization_elements, name=\"Mesa Model\",\n *args, **kwargs):\n \"\"\" Create a new visualization server with the given elements. \"\"\"\n # Prep visualization elements:\n self.visualization_elements = visualization_elements\n self.package_includes = set()\n self.local_includes = set()\n self.js_code = []\n for element in self.visualization_elements:\n for include_file in element.package_includes:\n self.package_includes.add(include_file)\n for include_file in element.local_includes:\n self.local_includes.add(include_file)\n self.js_code.append(element.js_code)\n\n # Initializing the model\n self.model_name = name\n self.model_cls = model_cls\n\n self.model_args = args\n self.model_kwargs = kwargs\n self.reset_model()\n\n # Initializing the application itself:\n super().__init__(self.handlers, **self.settings)\n\n def reset_model(self):\n \"\"\" Reinstantiate the model object, using the current parameters. \"\"\"\n self.model = self.model_cls(*self.model_args, **self.model_kwargs)\n\n def render_model(self):\n \"\"\" Turn the current state of the model into a dictionary of\n visualizations\n\n \"\"\"\n visualization_state = []\n for element in self.visualization_elements:\n element_state = element.render(self.model)\n visualization_state.append(element_state)\n return visualization_state\n\n def launch(self, port=None):\n \"\"\" Run the app. 
\"\"\"\n if port is not None:\n self.port = port\n url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)\n print('Interface starting at {url}'.format(url=url))\n self.listen(self.port)\n webbrowser.open(url)\n tornado.ioloop.IOLoop.instance().start()\n", "path": "mesa/visualization/ModularVisualization.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nModularServer\n=============\n\nA visualization server which renders a model via one or more elements.\n\nThe concept for the modular visualization server as follows:\nA visualization is composed of VisualizationElements, each of which defines how\nto generate some visualization from a model instance and render it on the\nclient. VisualizationElements may be anything from a simple text display to\na multilayered HTML5 canvas.\n\nThe actual server is launched with one or more VisualizationElements;\nit runs the model object through each of them, generating data to be sent to\nthe client. The client page is also generated based on the JavaScript code\nprovided by each element.\n\nThis file consists of the following classes:\n\nVisualizationElement: Parent class for all other visualization elements, with\n the minimal necessary options.\nPageHandler: The handler for the visualization page, generated from a template\n and built from the various visualization elements.\nSocketHandler: Handles the websocket connection between the client page and\n the server.\nModularServer: The overall visualization application class which stores and\n controls the model and visualization instance.\n\n\nModularServer should *not* need to be subclassed on a model-by-model basis; it\nshould be primarily a pass-through for VisualizationElement subclasses, which\ndefine the actual visualization specifics.\n\nFor example, suppose we have created two visualization elements for our model,\ncalled canvasvis and graphvis; we would launch a server with:\n\n server = ModularServer(MyModel, [canvasvis, graphvis], name=\"My Model\")\n server.launch()\n\nThe client keeps track of what step it is showing. Clicking the Step button in\nthe browser sends a message requesting the viz_state corresponding to the next\nstep position, which is then sent back to the client via the websocket.\n\nThe websocket protocol is as follows:\nEach message is a JSON object, with a \"type\" property which defines the rest of\nthe structure.\n\nServer -> Client:\n Send over the model state to visualize.\n Model state is a list, with each element corresponding to a div; each div\n is expected to have a render function associated with it, which knows how\n to render that particular data. 
The example below includes two elements:\n the first is data for a CanvasGrid, the second for a raw text display.\n\n {\n \"type\": \"viz_state\",\n \"data\": [{0:[ {\"Shape\": \"circle\", \"x\": 0, \"y\": 0, \"r\": 0.5,\n \"Color\": \"#AAAAAA\", \"Filled\": \"true\", \"Layer\": 0,\n \"text\": 'A', \"text_color\": \"white\" }]},\n \"Shape Count: 1\"]\n }\n\n Informs the client that the model is over.\n {\"type\": \"end\"}\n\nClient -> Server:\n Reset the model.\n TODO: Allow this to come with parameters\n {\n \"type\": \"reset\"\n }\n\n Get a given state.\n {\n \"type\": \"get_step\",\n \"step:\" index of the step to get.\n }\n\n\"\"\"\nimport os\n\nimport tornado.autoreload\nimport tornado.ioloop\nimport tornado.web\nimport tornado.websocket\nimport tornado.escape\nimport tornado.gen\n\nimport webbrowser\n\n# Suppress several pylint warnings for this file.\n# Attributes being defined outside of init is a Tornado feature.\n# pylint: disable=attribute-defined-outside-init\n\n\nclass VisualizationElement:\n \"\"\"\n Defines an element of the visualization.\n\n Attributes:\n package_includes: A list of external JavaScript files to include that\n are part of the Mesa packages.\n local_includes: A list of JavaScript files that are local to the\n directory that the server is being run in.\n js_code: A JavaScript code string to instantiate the element.\n\n Methods:\n render: Takes a model object, and produces JSON data which can be sent\n to the client.\n\n \"\"\"\n\n package_includes = []\n local_includes = []\n js_code = ''\n render_args = {}\n\n def __init__(self):\n pass\n\n def render(self, model):\n \"\"\" Build visualization data from a model object.\n\n Args:\n model: A model object\n\n Returns:\n A JSON-ready object.\n\n \"\"\"\n return \"<b>VisualizationElement goes here</b>.\"\n\n# =============================================================================\n# Actual Tornado code starts here:\n\n\nclass PageHandler(tornado.web.RequestHandler):\n \"\"\" Handler for the HTML template which holds the visualization. \"\"\"\n\n def get(self):\n elements = self.application.visualization_elements\n for i, element in enumerate(elements):\n element.index = i\n self.render(\"modular_template.html\", port=self.application.port,\n model_name=self.application.model_name,\n package_includes=self.application.package_includes,\n local_includes=self.application.local_includes,\n scripts=self.application.js_code)\n\n\nclass SocketHandler(tornado.websocket.WebSocketHandler):\n \"\"\" Handler for websocket. \"\"\"\n def open(self):\n if self.application.verbose:\n print(\"Socket opened!\")\n\n def check_origin(self, origin):\n return True\n\n def on_message(self, message):\n \"\"\" Receiving a message from the websocket, parse, and act accordingly.\n\n \"\"\"\n if self.application.verbose:\n print(message)\n msg = tornado.escape.json_decode(message)\n\n if msg[\"type\"] == \"get_step\":\n self.application.model.step()\n self.write_message({\"type\": \"viz_state\",\n \"data\": self.application.render_model()})\n\n elif msg[\"type\"] == \"reset\":\n self.application.reset_model()\n self.write_message({\"type\": \"viz_state\",\n \"data\": self.application.render_model()})\n\n else:\n if self.application.verbose:\n print(\"Unexpected message!\")\n\n\nclass ModularServer(tornado.web.Application):\n \"\"\" Main visualization application. 
\"\"\"\n verbose = True\n\n model_name = \"Mesa Model\"\n model_cls = None # A model class\n portrayal_method = None\n port = 8521 # Default port to listen on\n canvas_width = 500\n canvas_height = 500\n grid_height = 0\n grid_width = 0\n\n max_steps = 100000\n\n model_args = ()\n model_kwargs = {}\n\n # Handlers and other globals:\n page_handler = (r'/', PageHandler)\n socket_handler = (r'/ws', SocketHandler)\n static_handler = (r'/static/(.*)', tornado.web.StaticFileHandler,\n {\"path\": os.path.dirname(__file__) + \"/templates\"})\n local_handler = (r'/local/(.*)', tornado.web.StaticFileHandler,\n {\"path\": ''})\n\n handlers = [page_handler, socket_handler, static_handler, local_handler]\n\n settings = {\"debug\": True,\n \"autoreload\": False,\n \"template_path\": os.path.dirname(__file__) + \"/templates\"}\n\n def __init__(self, model_cls, visualization_elements, name=\"Mesa Model\",\n *args, **kwargs):\n \"\"\" Create a new visualization server with the given elements. \"\"\"\n # Prep visualization elements:\n self.visualization_elements = visualization_elements\n self.package_includes = set()\n self.local_includes = set()\n self.js_code = []\n for element in self.visualization_elements:\n for include_file in element.package_includes:\n self.package_includes.add(include_file)\n for include_file in element.local_includes:\n self.local_includes.add(include_file)\n self.js_code.append(element.js_code)\n\n # Initializing the model\n self.model_name = name\n self.model_cls = model_cls\n\n self.model_args = args\n self.model_kwargs = kwargs\n self.reset_model()\n\n # Initializing the application itself:\n super().__init__(self.handlers, **self.settings)\n\n def reset_model(self):\n \"\"\" Reinstantiate the model object, using the current parameters. \"\"\"\n self.model = self.model_cls(*self.model_args, **self.model_kwargs)\n\n def render_model(self):\n \"\"\" Turn the current state of the model into a dictionary of\n visualizations\n\n \"\"\"\n visualization_state = []\n for element in self.visualization_elements:\n element_state = element.render(self.model)\n visualization_state.append(element_state)\n return visualization_state\n\n def launch(self, port=None):\n \"\"\" Run the app. \"\"\"\n startLoop = not tornado.ioloop.IOLoop.initialized()\n if port is not None:\n self.port = port\n url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)\n print('Interface starting at {url}'.format(url=url))\n self.listen(self.port)\n webbrowser.open(url)\n tornado.autoreload.start()\n if startLoop:\n tornado.ioloop.IOLoop.instance().start()\n", "path": "mesa/visualization/ModularVisualization.py"}]}
| 2,846 | 317 |
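
A short, hedged sketch of the launch-time guard introduced by the golden diff for the entry above: the server only starts a new Tornado IOLoop when none has been created yet (for example when the code runs in an environment that already drives an IOLoop), and it replaces Tornado's implicit debug autoreload with an explicit `tornado.autoreload.start()` call. The function below is a simplified stand-alone illustration, not Mesa's actual `ModularServer.launch`; note that `IOLoop.initialized()` is a Tornado 4.x API (the line the patch targets) and was removed in Tornado 5.

```python
import tornado.autoreload
import tornado.ioloop

def launch(listen_callback):
    # Only start the loop ourselves if no IOLoop has been created yet
    # (IOLoop.initialized() exists in Tornado 4.x, per the patched code).
    start_loop = not tornado.ioloop.IOLoop.initialized()
    listen_callback()              # corresponds to self.listen(self.port) in the server
    tornado.autoreload.start()     # explicit file watching instead of debug autoreload
    if start_loop:
        tornado.ioloop.IOLoop.instance().start()
```
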
gh_patches_debug_4544
|
rasdani/github-patches
|
git_diff
|
uccser__cs-unplugged-455
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update links in lessons to use generate resources
Items in the "CS Unplugged Provided Resources" still have links pointing at google drive where there are now generated resources available, these links should be updated.
Spotted in Lesson 1 (8-10) of Binary Numbers.
Update binary numbers 8 - 10 lessons
This includes adding learning outcomes and updating text.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/topics/views.py`
Content:
```
1 """Views for the topics application."""
2
3 from django.shortcuts import get_object_or_404
4 from django.views import generic
5 from django.http import JsonResponse, Http404
6 from general.templatetags.render_html_field import render_html_with_static
7 from utils.group_lessons_by_age import group_lessons_by_age
8 from .models import (
9 Topic,
10 CurriculumIntegration,
11 UnitPlan,
12 Lesson,
13 ProgrammingExercise,
14 ProgrammingExerciseLanguageImplementation,
15 ConnectedGeneratedResource,
16 GlossaryTerm,
17 )
18
19
20 class IndexView(generic.ListView):
21 """View for the topics application homepage."""
22
23 template_name = "topics/index.html"
24 context_object_name = "all_topics"
25
26 def get_queryset(self):
27 """Get queryset of all topics.
28
29 Returns:
30 Queryset of Topic objects ordered by name.
31 """
32 return Topic.objects.order_by("name")
33
34
35 class TopicView(generic.DetailView):
36 """View for a specific topic."""
37
38 model = Topic
39 template_name = "topics/topic.html"
40 slug_url_kwarg = "topic_slug"
41
42 def get_context_data(self, **kwargs):
43 """Provide the context data for the topic view.
44
45 Returns:
46 Dictionary of context data.
47 """
48 # Call the base implementation first to get a context
49 context = super(TopicView, self).get_context_data(**kwargs)
50 # Add in a QuerySet of all the connected unit plans
51 unit_plans = UnitPlan.objects.filter(topic=self.object).order_by("name").select_related()
52 for unit_plan in unit_plans:
53 unit_plan.grouped_lessons = group_lessons_by_age(unit_plan.unit_plan_lessons)
54 context["unit_plans"] = unit_plans
55 # Add in a QuerySet of all the connected curriculum integrations
56 context["curriculum_integrations"] = CurriculumIntegration.objects.filter(topic=self.object).order_by("number")
57 context["programming_exercises"] = ProgrammingExercise.objects.filter(topic=self.object).order_by(
58 "exercise_set_number",
59 "exercise_number"
60 )
61 return context
62
63
64 class UnitPlanView(generic.DetailView):
65 """View for a specific unit plan."""
66
67 model = UnitPlan
68 template_name = "topics/unit-plan.html"
69 context_object_name = "unit_plan"
70
71 def get_object(self, **kwargs):
72 """Retrieve object for the unit plan view.
73
74 Returns:
75 UnitPlan object, or raises 404 error if not found.
76 """
77 return get_object_or_404(
78 self.model.objects.select_related(),
79 topic__slug=self.kwargs.get("topic_slug", None),
80 slug=self.kwargs.get("unit_plan_slug", None)
81 )
82
83 def get_context_data(self, **kwargs):
84 """Provide the context data for the unit plan view.
85
86 Returns:
87 Dictionary of context data.
88 """
89 # Call the base implementation first to get a context
90 context = super(UnitPlanView, self).get_context_data(**kwargs)
91 # Loading object under consistent context names for breadcrumbs
92 context["topic"] = self.object.topic
93 # Add all the connected lessons
94 context["grouped_lessons"] = group_lessons_by_age(self.object.unit_plan_lessons)
95 return context
96
97
98 class LessonView(generic.DetailView):
99 """View for a specific lesson."""
100
101 model = Lesson
102 template_name = "topics/lesson.html"
103 context_object_name = "lesson"
104
105 def get_object(self, **kwargs):
106 """Retrieve object for the lesson view.
107
108 Returns:
109 Lesson object, or raises 404 error if not found.
110 """
111 return get_object_or_404(
112 self.model.objects.select_related(),
113 topic__slug=self.kwargs.get("topic_slug", None),
114 unit_plan__slug=self.kwargs.get("unit_plan_slug", None),
115 slug=self.kwargs.get("lesson_slug", None),
116 )
117
118 def get_context_data(self, **kwargs):
119 """Provide the context data for the lesson view.
120
121 Returns:
122 Dictionary of context data.
123 """
124 # Call the base implementation first to get a context
125 context = super(LessonView, self).get_context_data(**kwargs)
126 # Loading objects under consistent context names for breadcrumbs
127 context["topic"] = self.object.topic
128 context["unit_plan"] = self.object.unit_plan
129 # Add all the connected programming exercises
130 context["programming_exercises"] = self.object.programming_exercises.all()
131 # Add all the connected learning outcomes
132 context["lesson_learning_outcomes"] = self.object.learning_outcomes.all().select_related()
133 # Add all the connected generated resources
134 related_resources = self.object.generated_resources.all()
135 generated_resources = []
136 for related_resource in related_resources:
137 generated_resource = dict()
138 generated_resource["slug"] = related_resource.slug
139 generated_resource["name"] = related_resource.name
140 generated_resource["thumbnail"] = related_resource.thumbnail_static_path
141 relationship = ConnectedGeneratedResource.objects.get(resource=related_resource, lesson=self.object)
142 generated_resource["description"] = relationship.description
143 generated_resources.append(generated_resource)
144 context["lesson_generated_resources"] = generated_resources
145
146 return context
147
148
149 class ProgrammingExerciseList(generic.ListView):
150 """View for listing all programming exercises for a lesson."""
151
152 model = ProgrammingExercise
153 template_name = "topics/programming-exercise-lesson-list.html"
154 context_object_name = "programming_exercises"
155
156 def get_queryset(self, **kwargs):
157 """Retrieve all programming exercises for a topic.
158
159 Returns:
160 Queryset of ProgrammingExercise objects.
161 """
162 lesson_slug = self.kwargs.get("lesson_slug", None)
163 exercises = ProgrammingExercise.objects.filter(lessons__slug=lesson_slug)
164 return exercises.order_by("exercise_set_number", "exercise_number")
165
166 def get_context_data(self, **kwargs):
167 """Provide the context data for the programming exercise list view.
168
169 Returns:
170 Dictionary of context data.
171 """
172 context = super(ProgrammingExerciseList, self).get_context_data(**kwargs)
173 lesson = get_object_or_404(
174 Lesson.objects.select_related(),
175 topic__slug=self.kwargs.get("topic_slug", None),
176 unit_plan__slug=self.kwargs.get("unit_plan_slug", None),
177 slug=self.kwargs.get("lesson_slug", None),
178 )
179 context["lesson"] = lesson
180 context["unit_plan"] = lesson.unit_plan
181 context["topic"] = lesson.topic
182 return context
183
184
185 class ProgrammingExerciseView(generic.DetailView):
186 """View for a specific programming exercise."""
187
188 model = ProgrammingExercise
189 template_name = "topics/programming-exercise.html"
190 context_object_name = "programming_exercise"
191
192 def get_object(self, **kwargs):
193 """Retrieve object for the programming exercise view.
194
195 Returns:
196 ProgrammingExercise object, or raises 404 error if not found.
197 """
198 return get_object_or_404(
199 self.model.objects.select_related(),
200 topic__slug=self.kwargs.get("topic_slug", None),
201 slug=self.kwargs.get("programming_exercise_slug", None)
202 )
203
204 def get_context_data(self, **kwargs):
205 """Provide the context data for the programming exercise view.
206
207 Returns:
208 Dictionary of context data.
209 """
210 # Call the base implementation first to get a context
211 context = super(ProgrammingExerciseView, self).get_context_data(**kwargs)
212 context["lessons"] = self.object.lessons.order_by("number")
213 context["topic"] = self.object.topic
214 # Add all the connected learning outcomes
215 context["programming_exercise_learning_outcomes"] = self.object.learning_outcomes.all()
216 context["implementations"] = self.object.ordered_implementations()
217 return context
218
219
220 class ProgrammingExerciseLanguageSolutionView(generic.DetailView):
221 """View for a language implementation for a programming exercise."""
222
223 model = ProgrammingExerciseLanguageImplementation
224 template_name = "topics/programming-exercise-language-solution.html"
225 context_object_name = "implementation"
226
227 def get_object(self, **kwargs):
228 """Retrieve object for the language implementation view.
229
230 Returns:
231 ProgrammingExerciseLanguageImplementation object, or raises 404
232 error if not found.
233 """
234 return get_object_or_404(
235 self.model.objects.select_related(),
236 topic__slug=self.kwargs.get("topic_slug", None),
237 exercise__slug=self.kwargs.get("programming_exercise_slug", None),
238 language__slug=self.kwargs.get("programming_language_slug", None)
239 )
240
241 def get_context_data(self, **kwargs):
242 """Provide the context data for the language implementation view.
243
244 Returns:
245 Dictionary of context data.
246 """
247 # Call the base implementation first to get a context
248 context = super(ProgrammingExerciseLanguageSolutionView, self).get_context_data(**kwargs)
249 # Loading object under consistent context names for breadcrumbs
250 context["topic"] = self.object.topic
251 context["programming_exercise"] = self.object.exercise
252 return context
253
254
255 class AllCurriculumIntegrationList(generic.ListView):
256 """View for listing all curriculum integrations."""
257
258 model = CurriculumIntegration
259 template_name = "topics/all-curriculum-integration-list.html"
260 context_object_name = "curriculum_integrations"
261
262 def get_queryset(self, **kwargs):
263 """Retrieve all curriculum integrations.
264
265 Returns:
266 Queryset of CurriculumIntegration objects.
267 """
268 return CurriculumIntegration.objects.select_related().order_by("topic__name", "number")
269
270
271 class CurriculumIntegrationView(generic.DetailView):
272 """View for a specific curriculum integration."""
273
274 model = CurriculumIntegration
275 queryset = CurriculumIntegration.objects.all()
276 template_name = "topics/curriculum-integration.html"
277 context_object_name = "integration"
278
279 def get_object(self, **kwargs):
280 """Retrieve object for the curriculum integration view.
281
282 Returns:
283 CurriculumIntegration object, or raises 404 error if not found.
284 """
285 return get_object_or_404(
286 self.model.objects.select_related(),
287 topic__slug=self.kwargs.get("topic_slug", None),
288 slug=self.kwargs.get("integration_slug", None)
289 )
290
291 def get_context_data(self, **kwargs):
292 """Provide the context data for the curriculum integration view.
293
294 Returns:
295 Dictionary of context data.
296 """
297 # Call the base implementation first to get a context
298 context = super(CurriculumIntegrationView, self).get_context_data(**kwargs)
299 # Loading objects under consistent context names for breadcrumbs
300 context["topic"] = self.object.topic
301 # Add in a QuerySet of all the connected curriculum areas
302 context["integration_curriculum_areas"] = self.object.curriculum_areas.order_by("name")
303 # Add in a QuerySet of all the prerequisite lessons
304 context["prerequisite_lessons"] = self.object.prerequisite_lessons.select_related().order_by(
305 "unit_plan__name",
306 "age_range",
307 "number"
308 )
309 return context
310
311
312 class OtherResourcesView(generic.DetailView):
313 """View for detailing other resources for a specific topic."""
314
315 model = Topic
316 template_name = "topics/topic-other-resources.html"
317 slug_url_kwarg = "topic_slug"
318
319
320 class GlossaryList(generic.ListView):
321 """Provide glossary view of all terms."""
322
323 template_name = "topics/glossary.html"
324 context_object_name = "glossary_terms"
325
326 def get_queryset(self):
327 """Get queryset of all glossary terms.
328
329 Returns:
330 Queryset of GlossaryTerm objects ordered by term.
331 """
332 return GlossaryTerm.objects.order_by("term")
333
334
335 def glossary_json(request, **kwargs):
336 """Provide JSON data for glossary term.
337
338 Args:
339 request: The HTTP request.
340
341 Returns:
342 JSON response is sent containing data for the requested term.
343 """
344 # If term parameter, then return JSON
345 if "term" in request.GET:
346 glossary_slug = request.GET.get("term")
347 glossary_item = get_object_or_404(
348 GlossaryTerm,
349 slug=glossary_slug
350 )
351 data = {
352 "slug": glossary_slug,
353 "term": glossary_item.term,
354 "definition": render_html_with_static(glossary_item.definition)
355 }
356 return JsonResponse(data)
357 else:
358 raise Http404("Term parameter not specified.")
359
```
--- END FILES ---
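
An editorial aside on the `glossary_json` view at the end of the listing above — a usage sketch with Django's test client. This is a hypothetical example: the URL path and the `"binary"` slug are assumptions, not routes or data taken from the repository.

```python
# Hypothetical usage of glossary_json via Django's test client (requires a
# configured Django project with this view routed; path and slug are assumed).
from django.test import Client

client = Client()
response = client.get("/topics/glossary/json/", {"term": "binary"})
print(response.status_code)  # 200 if a GlossaryTerm with that slug exists, otherwise 404
print(response.json())       # {"slug": ..., "term": ..., "definition": ...}
```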
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/csunplugged/topics/views.py b/csunplugged/topics/views.py
--- a/csunplugged/topics/views.py
+++ b/csunplugged/topics/views.py
@@ -58,6 +58,13 @@
"exercise_set_number",
"exercise_number"
)
+ lessons = self.object.topic_lessons.all()
+ resources = set()
+ for lesson in lessons:
+ lesson_resources = lesson.generated_resources.all()
+ for lesson_resource in lesson_resources:
+ resources.add(lesson_resource)
+ context["resources"] = resources
return context
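
An editorial note on the patch above: the nested loop issues one `generated_resources` query per lesson (an N+1 pattern) and relies on a Python set for de-duplication. A possible variation — a sketch only, not part of the actual fix — keeps the same behaviour while letting Django batch the related lookups with `prefetch_related`; it would sit in the same `get_context_data` body:

```python
# Sketch of an alternative to the loop in the patch (same names as the diff,
# but this exact form is an assumption, not code from the repository).
lessons = self.object.topic_lessons.all().prefetch_related("generated_resources")
resources = set()
for lesson in lessons:
    # set.update() ignores duplicates, so resources shared by several
    # lessons are still collected only once.
    resources.update(lesson.generated_resources.all())
context["resources"] = resources
```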
|
{"golden_diff": "diff --git a/csunplugged/topics/views.py b/csunplugged/topics/views.py\n--- a/csunplugged/topics/views.py\n+++ b/csunplugged/topics/views.py\n@@ -58,6 +58,13 @@\n \"exercise_set_number\",\n \"exercise_number\"\n )\n+ lessons = self.object.topic_lessons.all()\n+ resources = set()\n+ for lesson in lessons:\n+ lesson_resources = lesson.generated_resources.all()\n+ for lesson_resource in lesson_resources:\n+ resources.add(lesson_resource)\n+ context[\"resources\"] = resources\n return context\n", "issue": "Update links in lessons to use generate resources\nItems in the \"CS Unplugged Provided Resources\" still have links pointing at google drive where there are now generated resources available, these links should be updated.\r\nSpotted in Lesson 1 (8-10) of Binary Numbers.\nUpdate binary numbers 8 - 10 lessons\nThis includes adding learning outcomes and updating text.\n", "before_files": [{"content": "\"\"\"Views for the topics application.\"\"\"\n\nfrom django.shortcuts import get_object_or_404\nfrom django.views import generic\nfrom django.http import JsonResponse, Http404\nfrom general.templatetags.render_html_field import render_html_with_static\nfrom utils.group_lessons_by_age import group_lessons_by_age\nfrom .models import (\n Topic,\n CurriculumIntegration,\n UnitPlan,\n Lesson,\n ProgrammingExercise,\n ProgrammingExerciseLanguageImplementation,\n ConnectedGeneratedResource,\n GlossaryTerm,\n)\n\n\nclass IndexView(generic.ListView):\n \"\"\"View for the topics application homepage.\"\"\"\n\n template_name = \"topics/index.html\"\n context_object_name = \"all_topics\"\n\n def get_queryset(self):\n \"\"\"Get queryset of all topics.\n\n Returns:\n Queryset of Topic objects ordered by name.\n \"\"\"\n return Topic.objects.order_by(\"name\")\n\n\nclass TopicView(generic.DetailView):\n \"\"\"View for a specific topic.\"\"\"\n\n model = Topic\n template_name = \"topics/topic.html\"\n slug_url_kwarg = \"topic_slug\"\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the topic view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(TopicView, self).get_context_data(**kwargs)\n # Add in a QuerySet of all the connected unit plans\n unit_plans = UnitPlan.objects.filter(topic=self.object).order_by(\"name\").select_related()\n for unit_plan in unit_plans:\n unit_plan.grouped_lessons = group_lessons_by_age(unit_plan.unit_plan_lessons)\n context[\"unit_plans\"] = unit_plans\n # Add in a QuerySet of all the connected curriculum integrations\n context[\"curriculum_integrations\"] = CurriculumIntegration.objects.filter(topic=self.object).order_by(\"number\")\n context[\"programming_exercises\"] = ProgrammingExercise.objects.filter(topic=self.object).order_by(\n \"exercise_set_number\",\n \"exercise_number\"\n )\n return context\n\n\nclass UnitPlanView(generic.DetailView):\n \"\"\"View for a specific unit plan.\"\"\"\n\n model = UnitPlan\n template_name = \"topics/unit-plan.html\"\n context_object_name = \"unit_plan\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the unit plan view.\n\n Returns:\n UnitPlan object, or raises 404 error if not found.\n \"\"\"\n return get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n slug=self.kwargs.get(\"unit_plan_slug\", None)\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the unit plan view.\n\n Returns:\n Dictionary of context data.\n 
\"\"\"\n # Call the base implementation first to get a context\n context = super(UnitPlanView, self).get_context_data(**kwargs)\n # Loading object under consistent context names for breadcrumbs\n context[\"topic\"] = self.object.topic\n # Add all the connected lessons\n context[\"grouped_lessons\"] = group_lessons_by_age(self.object.unit_plan_lessons)\n return context\n\n\nclass LessonView(generic.DetailView):\n \"\"\"View for a specific lesson.\"\"\"\n\n model = Lesson\n template_name = \"topics/lesson.html\"\n context_object_name = \"lesson\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the lesson view.\n\n Returns:\n Lesson object, or raises 404 error if not found.\n \"\"\"\n return get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n unit_plan__slug=self.kwargs.get(\"unit_plan_slug\", None),\n slug=self.kwargs.get(\"lesson_slug\", None),\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the lesson view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(LessonView, self).get_context_data(**kwargs)\n # Loading objects under consistent context names for breadcrumbs\n context[\"topic\"] = self.object.topic\n context[\"unit_plan\"] = self.object.unit_plan\n # Add all the connected programming exercises\n context[\"programming_exercises\"] = self.object.programming_exercises.all()\n # Add all the connected learning outcomes\n context[\"lesson_learning_outcomes\"] = self.object.learning_outcomes.all().select_related()\n # Add all the connected generated resources\n related_resources = self.object.generated_resources.all()\n generated_resources = []\n for related_resource in related_resources:\n generated_resource = dict()\n generated_resource[\"slug\"] = related_resource.slug\n generated_resource[\"name\"] = related_resource.name\n generated_resource[\"thumbnail\"] = related_resource.thumbnail_static_path\n relationship = ConnectedGeneratedResource.objects.get(resource=related_resource, lesson=self.object)\n generated_resource[\"description\"] = relationship.description\n generated_resources.append(generated_resource)\n context[\"lesson_generated_resources\"] = generated_resources\n\n return context\n\n\nclass ProgrammingExerciseList(generic.ListView):\n \"\"\"View for listing all programming exercises for a lesson.\"\"\"\n\n model = ProgrammingExercise\n template_name = \"topics/programming-exercise-lesson-list.html\"\n context_object_name = \"programming_exercises\"\n\n def get_queryset(self, **kwargs):\n \"\"\"Retrieve all programming exercises for a topic.\n\n Returns:\n Queryset of ProgrammingExercise objects.\n \"\"\"\n lesson_slug = self.kwargs.get(\"lesson_slug\", None)\n exercises = ProgrammingExercise.objects.filter(lessons__slug=lesson_slug)\n return exercises.order_by(\"exercise_set_number\", \"exercise_number\")\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the programming exercise list view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n context = super(ProgrammingExerciseList, self).get_context_data(**kwargs)\n lesson = get_object_or_404(\n Lesson.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n unit_plan__slug=self.kwargs.get(\"unit_plan_slug\", None),\n slug=self.kwargs.get(\"lesson_slug\", None),\n )\n context[\"lesson\"] = lesson\n context[\"unit_plan\"] = lesson.unit_plan\n context[\"topic\"] = lesson.topic\n return 
context\n\n\nclass ProgrammingExerciseView(generic.DetailView):\n \"\"\"View for a specific programming exercise.\"\"\"\n\n model = ProgrammingExercise\n template_name = \"topics/programming-exercise.html\"\n context_object_name = \"programming_exercise\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the programming exercise view.\n\n Returns:\n ProgrammingExercise object, or raises 404 error if not found.\n \"\"\"\n return get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n slug=self.kwargs.get(\"programming_exercise_slug\", None)\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the programming exercise view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(ProgrammingExerciseView, self).get_context_data(**kwargs)\n context[\"lessons\"] = self.object.lessons.order_by(\"number\")\n context[\"topic\"] = self.object.topic\n # Add all the connected learning outcomes\n context[\"programming_exercise_learning_outcomes\"] = self.object.learning_outcomes.all()\n context[\"implementations\"] = self.object.ordered_implementations()\n return context\n\n\nclass ProgrammingExerciseLanguageSolutionView(generic.DetailView):\n \"\"\"View for a language implementation for a programming exercise.\"\"\"\n\n model = ProgrammingExerciseLanguageImplementation\n template_name = \"topics/programming-exercise-language-solution.html\"\n context_object_name = \"implementation\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the language implementation view.\n\n Returns:\n ProgrammingExerciseLanguageImplementation object, or raises 404\n error if not found.\n \"\"\"\n return get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n exercise__slug=self.kwargs.get(\"programming_exercise_slug\", None),\n language__slug=self.kwargs.get(\"programming_language_slug\", None)\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the language implementation view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(ProgrammingExerciseLanguageSolutionView, self).get_context_data(**kwargs)\n # Loading object under consistent context names for breadcrumbs\n context[\"topic\"] = self.object.topic\n context[\"programming_exercise\"] = self.object.exercise\n return context\n\n\nclass AllCurriculumIntegrationList(generic.ListView):\n \"\"\"View for listing all curriculum integrations.\"\"\"\n\n model = CurriculumIntegration\n template_name = \"topics/all-curriculum-integration-list.html\"\n context_object_name = \"curriculum_integrations\"\n\n def get_queryset(self, **kwargs):\n \"\"\"Retrieve all curriculum integrations.\n\n Returns:\n Queryset of CurriculumIntegration objects.\n \"\"\"\n return CurriculumIntegration.objects.select_related().order_by(\"topic__name\", \"number\")\n\n\nclass CurriculumIntegrationView(generic.DetailView):\n \"\"\"View for a specific curriculum integration.\"\"\"\n\n model = CurriculumIntegration\n queryset = CurriculumIntegration.objects.all()\n template_name = \"topics/curriculum-integration.html\"\n context_object_name = \"integration\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the curriculum integration view.\n\n Returns:\n CurriculumIntegration object, or raises 404 error if not found.\n \"\"\"\n return 
get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n slug=self.kwargs.get(\"integration_slug\", None)\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the curriculum integration view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(CurriculumIntegrationView, self).get_context_data(**kwargs)\n # Loading objects under consistent context names for breadcrumbs\n context[\"topic\"] = self.object.topic\n # Add in a QuerySet of all the connected curriculum areas\n context[\"integration_curriculum_areas\"] = self.object.curriculum_areas.order_by(\"name\")\n # Add in a QuerySet of all the prerequisite lessons\n context[\"prerequisite_lessons\"] = self.object.prerequisite_lessons.select_related().order_by(\n \"unit_plan__name\",\n \"age_range\",\n \"number\"\n )\n return context\n\n\nclass OtherResourcesView(generic.DetailView):\n \"\"\"View for detailing other resources for a specific topic.\"\"\"\n\n model = Topic\n template_name = \"topics/topic-other-resources.html\"\n slug_url_kwarg = \"topic_slug\"\n\n\nclass GlossaryList(generic.ListView):\n \"\"\"Provide glossary view of all terms.\"\"\"\n\n template_name = \"topics/glossary.html\"\n context_object_name = \"glossary_terms\"\n\n def get_queryset(self):\n \"\"\"Get queryset of all glossary terms.\n\n Returns:\n Queryset of GlossaryTerm objects ordered by term.\n \"\"\"\n return GlossaryTerm.objects.order_by(\"term\")\n\n\ndef glossary_json(request, **kwargs):\n \"\"\"Provide JSON data for glossary term.\n\n Args:\n request: The HTTP request.\n\n Returns:\n JSON response is sent containing data for the requested term.\n \"\"\"\n # If term parameter, then return JSON\n if \"term\" in request.GET:\n glossary_slug = request.GET.get(\"term\")\n glossary_item = get_object_or_404(\n GlossaryTerm,\n slug=glossary_slug\n )\n data = {\n \"slug\": glossary_slug,\n \"term\": glossary_item.term,\n \"definition\": render_html_with_static(glossary_item.definition)\n }\n return JsonResponse(data)\n else:\n raise Http404(\"Term parameter not specified.\")\n", "path": "csunplugged/topics/views.py"}], "after_files": [{"content": "\"\"\"Views for the topics application.\"\"\"\n\nfrom django.shortcuts import get_object_or_404\nfrom django.views import generic\nfrom django.http import JsonResponse, Http404\nfrom general.templatetags.render_html_field import render_html_with_static\nfrom utils.group_lessons_by_age import group_lessons_by_age\nfrom .models import (\n Topic,\n CurriculumIntegration,\n UnitPlan,\n Lesson,\n ProgrammingExercise,\n ProgrammingExerciseLanguageImplementation,\n ConnectedGeneratedResource,\n GlossaryTerm,\n)\n\n\nclass IndexView(generic.ListView):\n \"\"\"View for the topics application homepage.\"\"\"\n\n template_name = \"topics/index.html\"\n context_object_name = \"all_topics\"\n\n def get_queryset(self):\n \"\"\"Get queryset of all topics.\n\n Returns:\n Queryset of Topic objects ordered by name.\n \"\"\"\n return Topic.objects.order_by(\"name\")\n\n\nclass TopicView(generic.DetailView):\n \"\"\"View for a specific topic.\"\"\"\n\n model = Topic\n template_name = \"topics/topic.html\"\n slug_url_kwarg = \"topic_slug\"\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the topic view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(TopicView, 
self).get_context_data(**kwargs)\n # Add in a QuerySet of all the connected unit plans\n unit_plans = UnitPlan.objects.filter(topic=self.object).order_by(\"name\").select_related()\n for unit_plan in unit_plans:\n unit_plan.grouped_lessons = group_lessons_by_age(unit_plan.unit_plan_lessons)\n context[\"unit_plans\"] = unit_plans\n # Add in a QuerySet of all the connected curriculum integrations\n context[\"curriculum_integrations\"] = CurriculumIntegration.objects.filter(topic=self.object).order_by(\"number\")\n context[\"programming_exercises\"] = ProgrammingExercise.objects.filter(topic=self.object).order_by(\n \"exercise_set_number\",\n \"exercise_number\"\n )\n lessons = self.object.topic_lessons.all()\n resources = set()\n for lesson in lessons:\n lesson_resources = lesson.generated_resources.all()\n for lesson_resource in lesson_resources:\n resources.add(lesson_resource)\n context[\"resources\"] = resources\n return context\n\n\nclass UnitPlanView(generic.DetailView):\n \"\"\"View for a specific unit plan.\"\"\"\n\n model = UnitPlan\n template_name = \"topics/unit-plan.html\"\n context_object_name = \"unit_plan\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the unit plan view.\n\n Returns:\n UnitPlan object, or raises 404 error if not found.\n \"\"\"\n return get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n slug=self.kwargs.get(\"unit_plan_slug\", None)\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the unit plan view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(UnitPlanView, self).get_context_data(**kwargs)\n # Loading object under consistent context names for breadcrumbs\n context[\"topic\"] = self.object.topic\n # Add all the connected lessons\n context[\"grouped_lessons\"] = group_lessons_by_age(self.object.unit_plan_lessons)\n return context\n\n\nclass LessonView(generic.DetailView):\n \"\"\"View for a specific lesson.\"\"\"\n\n model = Lesson\n template_name = \"topics/lesson.html\"\n context_object_name = \"lesson\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the lesson view.\n\n Returns:\n Lesson object, or raises 404 error if not found.\n \"\"\"\n return get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n unit_plan__slug=self.kwargs.get(\"unit_plan_slug\", None),\n slug=self.kwargs.get(\"lesson_slug\", None),\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the lesson view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(LessonView, self).get_context_data(**kwargs)\n # Loading objects under consistent context names for breadcrumbs\n context[\"topic\"] = self.object.topic\n context[\"unit_plan\"] = self.object.unit_plan\n # Add all the connected programming exercises\n context[\"programming_exercises\"] = self.object.programming_exercises.all()\n # Add all the connected learning outcomes\n context[\"lesson_learning_outcomes\"] = self.object.learning_outcomes.all().select_related()\n # Add all the connected generated resources\n related_resources = self.object.generated_resources.all()\n generated_resources = []\n for related_resource in related_resources:\n generated_resource = dict()\n generated_resource[\"slug\"] = related_resource.slug\n generated_resource[\"name\"] = 
related_resource.name\n generated_resource[\"thumbnail\"] = related_resource.thumbnail_static_path\n relationship = ConnectedGeneratedResource.objects.get(resource=related_resource, lesson=self.object)\n generated_resource[\"description\"] = relationship.description\n generated_resources.append(generated_resource)\n context[\"lesson_generated_resources\"] = generated_resources\n\n return context\n\n\nclass ProgrammingExerciseList(generic.ListView):\n \"\"\"View for listing all programming exercises for a lesson.\"\"\"\n\n model = ProgrammingExercise\n template_name = \"topics/programming-exercise-lesson-list.html\"\n context_object_name = \"programming_exercises\"\n\n def get_queryset(self, **kwargs):\n \"\"\"Retrieve all programming exercises for a topic.\n\n Returns:\n Queryset of ProgrammingExercise objects.\n \"\"\"\n lesson_slug = self.kwargs.get(\"lesson_slug\", None)\n exercises = ProgrammingExercise.objects.filter(lessons__slug=lesson_slug)\n return exercises.order_by(\"exercise_set_number\", \"exercise_number\")\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the programming exercise list view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n context = super(ProgrammingExerciseList, self).get_context_data(**kwargs)\n lesson = get_object_or_404(\n Lesson.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n unit_plan__slug=self.kwargs.get(\"unit_plan_slug\", None),\n slug=self.kwargs.get(\"lesson_slug\", None),\n )\n context[\"lesson\"] = lesson\n context[\"unit_plan\"] = lesson.unit_plan\n context[\"topic\"] = lesson.topic\n return context\n\n\nclass ProgrammingExerciseView(generic.DetailView):\n \"\"\"View for a specific programming exercise.\"\"\"\n\n model = ProgrammingExercise\n template_name = \"topics/programming-exercise.html\"\n context_object_name = \"programming_exercise\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the programming exercise view.\n\n Returns:\n ProgrammingExercise object, or raises 404 error if not found.\n \"\"\"\n return get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n slug=self.kwargs.get(\"programming_exercise_slug\", None)\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the programming exercise view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(ProgrammingExerciseView, self).get_context_data(**kwargs)\n context[\"lessons\"] = self.object.lessons.order_by(\"number\")\n context[\"topic\"] = self.object.topic\n # Add all the connected learning outcomes\n context[\"programming_exercise_learning_outcomes\"] = self.object.learning_outcomes.all()\n context[\"implementations\"] = self.object.ordered_implementations()\n return context\n\n\nclass ProgrammingExerciseLanguageSolutionView(generic.DetailView):\n \"\"\"View for a language implementation for a programming exercise.\"\"\"\n\n model = ProgrammingExerciseLanguageImplementation\n template_name = \"topics/programming-exercise-language-solution.html\"\n context_object_name = \"implementation\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the language implementation view.\n\n Returns:\n ProgrammingExerciseLanguageImplementation object, or raises 404\n error if not found.\n \"\"\"\n return get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n 
exercise__slug=self.kwargs.get(\"programming_exercise_slug\", None),\n language__slug=self.kwargs.get(\"programming_language_slug\", None)\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the language implementation view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(ProgrammingExerciseLanguageSolutionView, self).get_context_data(**kwargs)\n # Loading object under consistent context names for breadcrumbs\n context[\"topic\"] = self.object.topic\n context[\"programming_exercise\"] = self.object.exercise\n return context\n\n\nclass AllCurriculumIntegrationList(generic.ListView):\n \"\"\"View for listing all curriculum integrations.\"\"\"\n\n model = CurriculumIntegration\n template_name = \"topics/all-curriculum-integration-list.html\"\n context_object_name = \"curriculum_integrations\"\n\n def get_queryset(self, **kwargs):\n \"\"\"Retrieve all curriculum integrations.\n\n Returns:\n Queryset of CurriculumIntegration objects.\n \"\"\"\n return CurriculumIntegration.objects.select_related().order_by(\"topic__name\", \"number\")\n\n\nclass CurriculumIntegrationView(generic.DetailView):\n \"\"\"View for a specific curriculum integration.\"\"\"\n\n model = CurriculumIntegration\n queryset = CurriculumIntegration.objects.all()\n template_name = \"topics/curriculum-integration.html\"\n context_object_name = \"integration\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the curriculum integration view.\n\n Returns:\n CurriculumIntegration object, or raises 404 error if not found.\n \"\"\"\n return get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n slug=self.kwargs.get(\"integration_slug\", None)\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the curriculum integration view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(CurriculumIntegrationView, self).get_context_data(**kwargs)\n # Loading objects under consistent context names for breadcrumbs\n context[\"topic\"] = self.object.topic\n # Add in a QuerySet of all the connected curriculum areas\n context[\"integration_curriculum_areas\"] = self.object.curriculum_areas.order_by(\"name\")\n # Add in a QuerySet of all the prerequisite lessons\n context[\"prerequisite_lessons\"] = self.object.prerequisite_lessons.select_related().order_by(\n \"unit_plan__name\",\n \"age_range\",\n \"number\"\n )\n return context\n\n\nclass OtherResourcesView(generic.DetailView):\n \"\"\"View for detailing other resources for a specific topic.\"\"\"\n\n model = Topic\n template_name = \"topics/topic-other-resources.html\"\n slug_url_kwarg = \"topic_slug\"\n\n\nclass GlossaryList(generic.ListView):\n \"\"\"Provide glossary view of all terms.\"\"\"\n\n template_name = \"topics/glossary.html\"\n context_object_name = \"glossary_terms\"\n\n def get_queryset(self):\n \"\"\"Get queryset of all glossary terms.\n\n Returns:\n Queryset of GlossaryTerm objects ordered by term.\n \"\"\"\n return GlossaryTerm.objects.order_by(\"term\")\n\n\ndef glossary_json(request, **kwargs):\n \"\"\"Provide JSON data for glossary term.\n\n Args:\n request: The HTTP request.\n\n Returns:\n JSON response is sent containing data for the requested term.\n \"\"\"\n # If term parameter, then return JSON\n if \"term\" in request.GET:\n glossary_slug = request.GET.get(\"term\")\n glossary_item = 
get_object_or_404(\n GlossaryTerm,\n slug=glossary_slug\n )\n data = {\n \"slug\": glossary_slug,\n \"term\": glossary_item.term,\n \"definition\": render_html_with_static(glossary_item.definition)\n }\n return JsonResponse(data)\n else:\n raise Http404(\"Term parameter not specified.\")\n", "path": "csunplugged/topics/views.py"}]}
| 3,943 | 130 |
gh_patches_debug_23548
|
rasdani/github-patches
|
git_diff
|
facebookresearch__ParlAI-1595
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Empty training examples throwing runtime error
I'm getting an error with the OpenSubtitles (2009 and 2018) and the Cornell Movie datasets: length-0 examples are being loaded. My guess is that there are empty messages in the datasets, which could easily be removed with some preprocessing or a flag to skip such examples. However, I'm having trouble finding such a flag and would appreciate any help you can provide. I have not run into this issue on the other datasets I have explored so far.
I'm a bit puzzled as to why others don't seem to have run into this problem, and I was hoping you could point me in the right direction, especially if I'm missing an easy fix. Thanks.
**Error:**
```
Traceback (most recent call last):
File "examples/train_model.py", line 16, in <module>
TrainLoop(opt).train()
File "/data/odemasi/packages/ParlAI/parlai/scripts/train_model.py", line 523, in train
world.parley()
File "/data/odemasi/packages/ParlAI/parlai/core/worlds.py", line 654, in parley
batch_act = self.batch_act(agent_idx, batch_observations[agent_idx])
File "/data/odemasi/packages/ParlAI/parlai/core/worlds.py", line 627, in batch_act
batch_actions = a.batch_act(batch_observation)
File "/data/odemasi/packages/ParlAI/parlai/core/torch_agent.py", line 1206, in batch_act
output = self.train_step(batch)
File "/data/odemasi/packages/ParlAI/parlai/core/torch_generator_agent.py", line 478, in train_step
raise e
File "/data/odemasi/packages/ParlAI/parlai/core/torch_generator_agent.py", line 454, in train_step
scores, preds, _ = self.model(batch.text_vec, batch.label_vec)
File "/data/odemasi/anaconda3/envs/parlai_venv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/data/odemasi/packages/ParlAI/parlai/core/torch_generator_agent.py", line 214, in forward
encoder_states = prev_enc if prev_enc is not None else self.encoder(xs)
File "/data/odemasi/anaconda3/envs/parlai_venv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/data/odemasi/packages/ParlAI/parlai/agents/seq2seq/modules.py", line 226, in forward
xes = pack_padded_sequence(xes, x_lens, batch_first=True)
File "/data/odemasi/anaconda3/envs/parlai_venv/lib/python3.7/site-packages/torch/nn/utils/rnn.py", line 148, in pack_padded_sequence
return PackedSequence(torch._C._VariableFunctions._pack_padded_sequence(input, lengths, batch_first))
RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0
```
**Resulted from:**
python examples/train_model.py \
-t cornell_movie \
-bs 32 \
--hiddensize 1024 \
--dict-minfreq 2 \
--dict-lower True \
--dict-file cornell_dict \
-m seq2seq \
-mf cornell_seq2seq
--- END ISSUE ---
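
An editorial note, not part of the issue above: the traceback bottoms out in PyTorch's `pack_padded_sequence`, which rejects any sequence length of zero, so a single empty example in a batch is enough to trigger the error. A minimal sketch reproducing that behaviour with PyTorch alone (assuming a reasonably recent PyTorch; the exact error wording may vary by version):

```python
# Minimal reproduction of the failure mode: one zero-length sequence in the
# batch makes pack_padded_sequence raise, independent of ParlAI.
import torch
from torch.nn.utils.rnn import pack_padded_sequence

xes = torch.zeros(2, 3, 8)   # batch of 2, max length 3, feature size 8
x_lens = [3, 0]              # the second "example" is empty

try:
    pack_padded_sequence(xes, x_lens, batch_first=True)
except RuntimeError as err:
    print(err)  # "Length of all samples has to be greater than 0, ..."
```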
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parlai/agents/seq2seq/seq2seq.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 from parlai.core.torch_generator_agent import TorchGeneratorAgent
8 from .modules import Seq2seq, opt_to_kwargs
9
10 import torch
11 import torch.nn as nn
12
13 import json
14
15
16 class Seq2seqAgent(TorchGeneratorAgent):
17 """Agent which takes an input sequence and produces an output sequence.
18
19 This model supports encoding the input and decoding the output via one of
20 several flavors of RNN. It then uses a linear layer (whose weights can
21 be shared with the embedding layer) to convert RNN output states into
22 output tokens. This model supports greedy decoding, selecting the
23 highest probability token at each time step, as well as beam
24 search.
25
26 For more information, see the following papers:
27 - Neural Machine Translation by Jointly Learning to Align and Translate
28 `(Bahdanau et al. 2014) <arxiv.org/abs/1409.0473>`_
29 - Sequence to Sequence Learning with Neural Networks
30 `(Sutskever et al. 2014) <arxiv.org/abs/1409.3215>`_
31 - Effective Approaches to Attention-based Neural Machine Translation
32 `(Luong et al. 2015) <arxiv.org/abs/1508.04025>`_
33 """
34
35 @classmethod
36 def add_cmdline_args(cls, argparser):
37 """Add command-line arguments specifically for this agent."""
38 agent = argparser.add_argument_group('Seq2Seq Arguments')
39 agent.add_argument('-hs', '--hiddensize', type=int, default=128,
40 help='size of the hidden layers')
41 agent.add_argument('-esz', '--embeddingsize', type=int, default=128,
42 help='size of the token embeddings')
43 agent.add_argument('-nl', '--numlayers', type=int, default=2,
44 help='number of hidden layers')
45 agent.add_argument('-dr', '--dropout', type=float, default=0.1,
46 help='dropout rate')
47 agent.add_argument('-bi', '--bidirectional', type='bool',
48 default=False,
49 help='whether to encode the context with a '
50 'bidirectional rnn')
51 agent.add_argument('-att', '--attention', default='none',
52 choices=['none', 'concat', 'general', 'dot',
53 'local'],
54 help='Choices: none, concat, general, local. '
55 'If set local, also set attention-length. '
56 '(see arxiv.org/abs/1508.04025)')
57 agent.add_argument('-attl', '--attention-length', default=48, type=int,
58 help='Length of local attention.')
59 agent.add_argument('--attention-time', default='post',
60 choices=['pre', 'post'],
61 help='Whether to apply attention before or after '
62 'decoding.')
63 agent.add_argument('-rnn', '--rnn-class', default='lstm',
64 choices=Seq2seq.RNN_OPTS.keys(),
65 help='Choose between different types of RNNs.')
66 agent.add_argument('-dec', '--decoder', default='same',
67 choices=['same', 'shared'],
68 help='Choose between different decoder modules. '
69 'Default "same" uses same class as encoder, '
70 'while "shared" also uses the same weights. '
71 'Note that shared disabled some encoder '
72 'options--in particular, bidirectionality.')
73 agent.add_argument('-lt', '--lookuptable', default='unique',
74 choices=['unique', 'enc_dec', 'dec_out', 'all'],
75 help='The encoder, decoder, and output modules can '
76 'share weights, or not. '
77 'Unique has independent embeddings for each. '
78 'Enc_dec shares the embedding for the encoder '
79 'and decoder. '
80 'Dec_out shares decoder embedding and output '
81 'weights. '
82 'All shares all three weights.')
83 agent.add_argument('-soft', '--numsoftmax', default=1, type=int,
84 help='default 1, if greater then uses mixture of '
85 'softmax (see arxiv.org/abs/1711.03953).')
86 agent.add_argument('-idr', '--input-dropout', type=float, default=0.0,
87 help='Probability of replacing tokens with UNK in training.')
88
89 super(Seq2seqAgent, cls).add_cmdline_args(argparser)
90 return agent
91
92 @staticmethod
93 def model_version():
94 """Return current version of this model, counting up from 0.
95
96 Models may not be backwards-compatible with older versions.
97 Version 1 split from version 0 on Aug 29, 2018.
98 Version 2 split from version 1 on Nov 13, 2018
99 To use version 0, use --model legacy:seq2seq:0
100 To use version 1, use --model legacy:seq2seq:1
101 (legacy agent code is located in parlai/agents/legacy_agents).
102 """
103 return 2
104
105 def __init__(self, opt, shared=None):
106 """Set up model."""
107 super().__init__(opt, shared)
108 self.id = 'Seq2Seq'
109
110 def build_model(self, states=None):
111 """Initialize model, override to change model setup."""
112 opt = self.opt
113 if not states:
114 states = {}
115
116 kwargs = opt_to_kwargs(opt)
117 self.model = Seq2seq(
118 len(self.dict), opt['embeddingsize'], opt['hiddensize'],
119 padding_idx=self.NULL_IDX, start_idx=self.START_IDX,
120 end_idx=self.END_IDX, unknown_idx=self.dict[self.dict.unk_token],
121 longest_label=states.get('longest_label', 1),
122 **kwargs)
123
124 if (opt.get('dict_tokenizer') == 'bpe' and
125 opt['embedding_type'] != 'random'):
126 print('skipping preinitialization of embeddings for bpe')
127 elif not states and opt['embedding_type'] != 'random':
128 # `not states`: only set up embeddings if not loading model
129 self._copy_embeddings(self.model.decoder.lt.weight,
130 opt['embedding_type'])
131 if opt['lookuptable'] in ['unique', 'dec_out']:
132 # also set encoder lt, since it's not shared
133 self._copy_embeddings(self.model.encoder.lt.weight,
134 opt['embedding_type'], log=False)
135
136 if states:
137 # set loaded states if applicable
138 self.model.load_state_dict(states['model'])
139
140 if self.use_cuda:
141 self.model.cuda()
142
143 if opt['embedding_type'].endswith('fixed'):
144 print('Seq2seq: fixing embedding weights.')
145 self.model.decoder.lt.weight.requires_grad = False
146 self.model.encoder.lt.weight.requires_grad = False
147 if opt['lookuptable'] in ['dec_out', 'all']:
148 self.model.decoder.e2s.weight.requires_grad = False
149
150 if self.use_cuda:
151 self.model.cuda()
152
153 return self.model
154
155 def build_criterion(self):
156 # set up criteria
157 if self.opt.get('numsoftmax', 1) > 1:
158 self.criterion = nn.NLLLoss(
159 ignore_index=self.NULL_IDX, size_average=False)
160 else:
161 self.criterion = nn.CrossEntropyLoss(
162 ignore_index=self.NULL_IDX, size_average=False)
163
164 if self.use_cuda:
165 self.criterion.cuda()
166
167 def batchify(self, *args, **kwargs):
168 """Override batchify options for seq2seq."""
169 kwargs['sort'] = True # need sorted for pack_padded
170 return super().batchify(*args, **kwargs)
171
172 def save(self, path=None):
173 """Save model parameters if model_file is set."""
174 path = self.opt.get('model_file', None) if path is None else path
175
176 if path and hasattr(self, 'model'):
177 model = {}
178 if hasattr(self.model, 'module'):
179 model['model'] = self.model.module.state_dict()
180 model['longest_label'] = self.model.module.longest_label
181 else:
182 model['model'] = self.model.state_dict()
183 model['longest_label'] = self.model.longest_label
184 model['optimizer'] = self.optimizer.state_dict()
185 model['optimizer_type'] = self.opt['optimizer']
186
187 with open(path, 'wb') as write:
188 torch.save(model, write)
189
190 # save opt file
191 with open(path + '.opt', 'w') as handle:
192 # save version string
193 self.opt['model_version'] = self.model_version()
194 json.dump(self.opt, handle)
195
196 def load(self, path):
197 """Return opt and model states."""
198 states = torch.load(path, map_location=lambda cpu, _: cpu)
199 # set loaded states if applicable
200 self.model.load_state_dict(states['model'])
201 if 'longest_label' in states:
202 self.model.longest_label = states['longest_label']
203 return states
204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parlai/agents/seq2seq/seq2seq.py b/parlai/agents/seq2seq/seq2seq.py
--- a/parlai/agents/seq2seq/seq2seq.py
+++ b/parlai/agents/seq2seq/seq2seq.py
@@ -5,6 +5,7 @@
# LICENSE file in the root directory of this source tree.
from parlai.core.torch_generator_agent import TorchGeneratorAgent
+from parlai.core.utils import warn_once
from .modules import Seq2seq, opt_to_kwargs
import torch
@@ -201,3 +202,22 @@
if 'longest_label' in states:
self.model.longest_label = states['longest_label']
return states
+
+ def is_valid(self, obs):
+ normally_valid = super().is_valid(obs)
+ if not normally_valid:
+ # shortcut boolean evaluation
+ return normally_valid
+ contains_empties = obs['text_vec'].shape[0] == 0
+ if self.is_training and contains_empties:
+ warn_once(
+ 'seq2seq got an empty input sequence (text_vec) during training. '
+ 'Skipping this example, but you should check your dataset and '
+ 'preprocessing.'
+ )
+ elif not self.is_training and contains_empties:
+ warn_once(
+ 'seq2seq got an empty input sequence (text_vec) in an '
+ 'evaluation example! This may affect your metrics!'
+ )
+ return not contains_empties
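
A short editorial note on the patch above: it works because the base agent consults `is_valid` while assembling a batch, so observations the override rejects are simply skipped (this is implied by the `super().is_valid(obs)` call rather than shown here, so treat it as an assumption about the base class). A simplified, assumed sketch of that filtering step:

```python
# Simplified sketch (not ParlAI code): drop observations the agent rejects
# before their text_vec tensors are padded and packed, so no zero-length
# sequence ever reaches the encoder.
def filter_valid_observations(agent, obs_batch):
    """Keep only the observations the agent considers valid for batching."""
    return [obs for obs in obs_batch if agent.is_valid(obs)]
```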
|
{"golden_diff": "diff --git a/parlai/agents/seq2seq/seq2seq.py b/parlai/agents/seq2seq/seq2seq.py\n--- a/parlai/agents/seq2seq/seq2seq.py\n+++ b/parlai/agents/seq2seq/seq2seq.py\n@@ -5,6 +5,7 @@\n # LICENSE file in the root directory of this source tree.\n \n from parlai.core.torch_generator_agent import TorchGeneratorAgent\n+from parlai.core.utils import warn_once\n from .modules import Seq2seq, opt_to_kwargs\n \n import torch\n@@ -201,3 +202,22 @@\n if 'longest_label' in states:\n self.model.longest_label = states['longest_label']\n return states\n+\n+ def is_valid(self, obs):\n+ normally_valid = super().is_valid(obs)\n+ if not normally_valid:\n+ # shortcut boolean evaluation\n+ return normally_valid\n+ contains_empties = obs['text_vec'].shape[0] == 0\n+ if self.is_training and contains_empties:\n+ warn_once(\n+ 'seq2seq got an empty input sequence (text_vec) during training. '\n+ 'Skipping this example, but you should check your dataset and '\n+ 'preprocessing.'\n+ )\n+ elif not self.is_training and contains_empties:\n+ warn_once(\n+ 'seq2seq got an empty input sequence (text_vec) in an '\n+ 'evaluation example! This may affect your metrics!'\n+ )\n+ return not contains_empties\n", "issue": "Empty training examples throwing runtime error\nI'm getting an error with the OpenSubtitles (2009 and 2018) and the Cornell Movie datasets with length 0 examples being loaded. My guess is that there are empty messages in the datasets, which could easily be removed with some preprocessing or a flag to avoid such examples. However, I'm having trouble finding such a flag and would appreciate any help you can provide. I do not have this issue arise on other datasets that I have explored so far. \r\n\r\nI'm a bit puzzled as to why others haven't seemed to run into this problem and was hoping you could help point me in the right direction, especially if I'm missing an easy fix. 
Thanks.\r\n\r\n**Error:**\r\n```\r\nTraceback (most recent call last):\r\n File \"examples/train_model.py\", line 16, in <module>\r\n TrainLoop(opt).train()\r\n File \"/data/odemasi/packages/ParlAI/parlai/scripts/train_model.py\", line 523, in train\r\n world.parley()\r\n File \"/data/odemasi/packages/ParlAI/parlai/core/worlds.py\", line 654, in parley\r\n batch_act = self.batch_act(agent_idx, batch_observations[agent_idx])\r\n File \"/data/odemasi/packages/ParlAI/parlai/core/worlds.py\", line 627, in batch_act\r\n batch_actions = a.batch_act(batch_observation)\r\n File \"/data/odemasi/packages/ParlAI/parlai/core/torch_agent.py\", line 1206, in batch_act\r\n output = self.train_step(batch)\r\n File \"/data/odemasi/packages/ParlAI/parlai/core/torch_generator_agent.py\", line 478, in train_step\r\n raise e\r\n File \"/data/odemasi/packages/ParlAI/parlai/core/torch_generator_agent.py\", line 454, in train_step\r\n scores, preds, _ = self.model(batch.text_vec, batch.label_vec)\r\n File \"/data/odemasi/anaconda3/envs/parlai_venv/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 489, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/odemasi/packages/ParlAI/parlai/core/torch_generator_agent.py\", line 214, in forward\r\n encoder_states = prev_enc if prev_enc is not None else self.encoder(xs)\r\n File \"/data/odemasi/anaconda3/envs/parlai_venv/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 489, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/odemasi/packages/ParlAI/parlai/agents/seq2seq/modules.py\", line 226, in forward\r\n xes = pack_padded_sequence(xes, x_lens, batch_first=True)\r\n File \"/data/odemasi/anaconda3/envs/parlai_venv/lib/python3.7/site-packages/torch/nn/utils/rnn.py\", line 148, in pack_padded_sequence\r\n return PackedSequence(torch._C._VariableFunctions._pack_padded_sequence(input, lengths, batch_first))\r\nRuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0\r\n```\r\n\r\n\r\n**Resulted from:**\r\npython examples/train_model.py \\\r\n-t cornell_movie \\\r\n-bs 32 \\\r\n--hiddensize 1024 \\\r\n--dict-minfreq 2 \\\r\n--dict-lower True \\\r\n--dict-file cornell_dict \\\r\n-m seq2seq \\\r\n-mf cornell_seq2seq \n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom parlai.core.torch_generator_agent import TorchGeneratorAgent\nfrom .modules import Seq2seq, opt_to_kwargs\n\nimport torch\nimport torch.nn as nn\n\nimport json\n\n\nclass Seq2seqAgent(TorchGeneratorAgent):\n \"\"\"Agent which takes an input sequence and produces an output sequence.\n\n This model supports encoding the input and decoding the output via one of\n several flavors of RNN. It then uses a linear layer (whose weights can\n be shared with the embedding layer) to convert RNN output states into\n output tokens. This model supports greedy decoding, selecting the\n highest probability token at each time step, as well as beam\n search.\n\n For more information, see the following papers:\n - Neural Machine Translation by Jointly Learning to Align and Translate\n `(Bahdanau et al. 2014) <arxiv.org/abs/1409.0473>`_\n - Sequence to Sequence Learning with Neural Networks\n `(Sutskever et al. 
2014) <arxiv.org/abs/1409.3215>`_\n - Effective Approaches to Attention-based Neural Machine Translation\n `(Luong et al. 2015) <arxiv.org/abs/1508.04025>`_\n \"\"\"\n\n @classmethod\n def add_cmdline_args(cls, argparser):\n \"\"\"Add command-line arguments specifically for this agent.\"\"\"\n agent = argparser.add_argument_group('Seq2Seq Arguments')\n agent.add_argument('-hs', '--hiddensize', type=int, default=128,\n help='size of the hidden layers')\n agent.add_argument('-esz', '--embeddingsize', type=int, default=128,\n help='size of the token embeddings')\n agent.add_argument('-nl', '--numlayers', type=int, default=2,\n help='number of hidden layers')\n agent.add_argument('-dr', '--dropout', type=float, default=0.1,\n help='dropout rate')\n agent.add_argument('-bi', '--bidirectional', type='bool',\n default=False,\n help='whether to encode the context with a '\n 'bidirectional rnn')\n agent.add_argument('-att', '--attention', default='none',\n choices=['none', 'concat', 'general', 'dot',\n 'local'],\n help='Choices: none, concat, general, local. '\n 'If set local, also set attention-length. '\n '(see arxiv.org/abs/1508.04025)')\n agent.add_argument('-attl', '--attention-length', default=48, type=int,\n help='Length of local attention.')\n agent.add_argument('--attention-time', default='post',\n choices=['pre', 'post'],\n help='Whether to apply attention before or after '\n 'decoding.')\n agent.add_argument('-rnn', '--rnn-class', default='lstm',\n choices=Seq2seq.RNN_OPTS.keys(),\n help='Choose between different types of RNNs.')\n agent.add_argument('-dec', '--decoder', default='same',\n choices=['same', 'shared'],\n help='Choose between different decoder modules. '\n 'Default \"same\" uses same class as encoder, '\n 'while \"shared\" also uses the same weights. '\n 'Note that shared disabled some encoder '\n 'options--in particular, bidirectionality.')\n agent.add_argument('-lt', '--lookuptable', default='unique',\n choices=['unique', 'enc_dec', 'dec_out', 'all'],\n help='The encoder, decoder, and output modules can '\n 'share weights, or not. '\n 'Unique has independent embeddings for each. '\n 'Enc_dec shares the embedding for the encoder '\n 'and decoder. '\n 'Dec_out shares decoder embedding and output '\n 'weights. 
'\n 'All shares all three weights.')\n agent.add_argument('-soft', '--numsoftmax', default=1, type=int,\n help='default 1, if greater then uses mixture of '\n 'softmax (see arxiv.org/abs/1711.03953).')\n agent.add_argument('-idr', '--input-dropout', type=float, default=0.0,\n help='Probability of replacing tokens with UNK in training.')\n\n super(Seq2seqAgent, cls).add_cmdline_args(argparser)\n return agent\n\n @staticmethod\n def model_version():\n \"\"\"Return current version of this model, counting up from 0.\n\n Models may not be backwards-compatible with older versions.\n Version 1 split from version 0 on Aug 29, 2018.\n Version 2 split from version 1 on Nov 13, 2018\n To use version 0, use --model legacy:seq2seq:0\n To use version 1, use --model legacy:seq2seq:1\n (legacy agent code is located in parlai/agents/legacy_agents).\n \"\"\"\n return 2\n\n def __init__(self, opt, shared=None):\n \"\"\"Set up model.\"\"\"\n super().__init__(opt, shared)\n self.id = 'Seq2Seq'\n\n def build_model(self, states=None):\n \"\"\"Initialize model, override to change model setup.\"\"\"\n opt = self.opt\n if not states:\n states = {}\n\n kwargs = opt_to_kwargs(opt)\n self.model = Seq2seq(\n len(self.dict), opt['embeddingsize'], opt['hiddensize'],\n padding_idx=self.NULL_IDX, start_idx=self.START_IDX,\n end_idx=self.END_IDX, unknown_idx=self.dict[self.dict.unk_token],\n longest_label=states.get('longest_label', 1),\n **kwargs)\n\n if (opt.get('dict_tokenizer') == 'bpe' and\n opt['embedding_type'] != 'random'):\n print('skipping preinitialization of embeddings for bpe')\n elif not states and opt['embedding_type'] != 'random':\n # `not states`: only set up embeddings if not loading model\n self._copy_embeddings(self.model.decoder.lt.weight,\n opt['embedding_type'])\n if opt['lookuptable'] in ['unique', 'dec_out']:\n # also set encoder lt, since it's not shared\n self._copy_embeddings(self.model.encoder.lt.weight,\n opt['embedding_type'], log=False)\n\n if states:\n # set loaded states if applicable\n self.model.load_state_dict(states['model'])\n\n if self.use_cuda:\n self.model.cuda()\n\n if opt['embedding_type'].endswith('fixed'):\n print('Seq2seq: fixing embedding weights.')\n self.model.decoder.lt.weight.requires_grad = False\n self.model.encoder.lt.weight.requires_grad = False\n if opt['lookuptable'] in ['dec_out', 'all']:\n self.model.decoder.e2s.weight.requires_grad = False\n\n if self.use_cuda:\n self.model.cuda()\n\n return self.model\n\n def build_criterion(self):\n # set up criteria\n if self.opt.get('numsoftmax', 1) > 1:\n self.criterion = nn.NLLLoss(\n ignore_index=self.NULL_IDX, size_average=False)\n else:\n self.criterion = nn.CrossEntropyLoss(\n ignore_index=self.NULL_IDX, size_average=False)\n\n if self.use_cuda:\n self.criterion.cuda()\n\n def batchify(self, *args, **kwargs):\n \"\"\"Override batchify options for seq2seq.\"\"\"\n kwargs['sort'] = True # need sorted for pack_padded\n return super().batchify(*args, **kwargs)\n\n def save(self, path=None):\n \"\"\"Save model parameters if model_file is set.\"\"\"\n path = self.opt.get('model_file', None) if path is None else path\n\n if path and hasattr(self, 'model'):\n model = {}\n if hasattr(self.model, 'module'):\n model['model'] = self.model.module.state_dict()\n model['longest_label'] = self.model.module.longest_label\n else:\n model['model'] = self.model.state_dict()\n model['longest_label'] = self.model.longest_label\n model['optimizer'] = self.optimizer.state_dict()\n model['optimizer_type'] = self.opt['optimizer']\n\n with 
open(path, 'wb') as write:\n torch.save(model, write)\n\n # save opt file\n with open(path + '.opt', 'w') as handle:\n # save version string\n self.opt['model_version'] = self.model_version()\n json.dump(self.opt, handle)\n\n def load(self, path):\n \"\"\"Return opt and model states.\"\"\"\n states = torch.load(path, map_location=lambda cpu, _: cpu)\n # set loaded states if applicable\n self.model.load_state_dict(states['model'])\n if 'longest_label' in states:\n self.model.longest_label = states['longest_label']\n return states\n", "path": "parlai/agents/seq2seq/seq2seq.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom parlai.core.torch_generator_agent import TorchGeneratorAgent\nfrom parlai.core.utils import warn_once\nfrom .modules import Seq2seq, opt_to_kwargs\n\nimport torch\nimport torch.nn as nn\n\nimport json\n\n\nclass Seq2seqAgent(TorchGeneratorAgent):\n \"\"\"Agent which takes an input sequence and produces an output sequence.\n\n This model supports encoding the input and decoding the output via one of\n several flavors of RNN. It then uses a linear layer (whose weights can\n be shared with the embedding layer) to convert RNN output states into\n output tokens. This model supports greedy decoding, selecting the\n highest probability token at each time step, as well as beam\n search.\n\n For more information, see the following papers:\n - Neural Machine Translation by Jointly Learning to Align and Translate\n `(Bahdanau et al. 2014) <arxiv.org/abs/1409.0473>`_\n - Sequence to Sequence Learning with Neural Networks\n `(Sutskever et al. 2014) <arxiv.org/abs/1409.3215>`_\n - Effective Approaches to Attention-based Neural Machine Translation\n `(Luong et al. 2015) <arxiv.org/abs/1508.04025>`_\n \"\"\"\n\n @classmethod\n def add_cmdline_args(cls, argparser):\n \"\"\"Add command-line arguments specifically for this agent.\"\"\"\n agent = argparser.add_argument_group('Seq2Seq Arguments')\n agent.add_argument('-hs', '--hiddensize', type=int, default=128,\n help='size of the hidden layers')\n agent.add_argument('-esz', '--embeddingsize', type=int, default=128,\n help='size of the token embeddings')\n agent.add_argument('-nl', '--numlayers', type=int, default=2,\n help='number of hidden layers')\n agent.add_argument('-dr', '--dropout', type=float, default=0.1,\n help='dropout rate')\n agent.add_argument('-bi', '--bidirectional', type='bool',\n default=False,\n help='whether to encode the context with a '\n 'bidirectional rnn')\n agent.add_argument('-att', '--attention', default='none',\n choices=['none', 'concat', 'general', 'dot',\n 'local'],\n help='Choices: none, concat, general, local. '\n 'If set local, also set attention-length. '\n '(see arxiv.org/abs/1508.04025)')\n agent.add_argument('-attl', '--attention-length', default=48, type=int,\n help='Length of local attention.')\n agent.add_argument('--attention-time', default='post',\n choices=['pre', 'post'],\n help='Whether to apply attention before or after '\n 'decoding.')\n agent.add_argument('-rnn', '--rnn-class', default='lstm',\n choices=Seq2seq.RNN_OPTS.keys(),\n help='Choose between different types of RNNs.')\n agent.add_argument('-dec', '--decoder', default='same',\n choices=['same', 'shared'],\n help='Choose between different decoder modules. 
'\n 'Default \"same\" uses same class as encoder, '\n 'while \"shared\" also uses the same weights. '\n 'Note that shared disabled some encoder '\n 'options--in particular, bidirectionality.')\n agent.add_argument('-lt', '--lookuptable', default='unique',\n choices=['unique', 'enc_dec', 'dec_out', 'all'],\n help='The encoder, decoder, and output modules can '\n 'share weights, or not. '\n 'Unique has independent embeddings for each. '\n 'Enc_dec shares the embedding for the encoder '\n 'and decoder. '\n 'Dec_out shares decoder embedding and output '\n 'weights. '\n 'All shares all three weights.')\n agent.add_argument('-soft', '--numsoftmax', default=1, type=int,\n help='default 1, if greater then uses mixture of '\n 'softmax (see arxiv.org/abs/1711.03953).')\n agent.add_argument('-idr', '--input-dropout', type=float, default=0.0,\n help='Probability of replacing tokens with UNK in training.')\n\n super(Seq2seqAgent, cls).add_cmdline_args(argparser)\n return agent\n\n @staticmethod\n def model_version():\n \"\"\"Return current version of this model, counting up from 0.\n\n Models may not be backwards-compatible with older versions.\n Version 1 split from version 0 on Aug 29, 2018.\n Version 2 split from version 1 on Nov 13, 2018\n To use version 0, use --model legacy:seq2seq:0\n To use version 1, use --model legacy:seq2seq:1\n (legacy agent code is located in parlai/agents/legacy_agents).\n \"\"\"\n return 2\n\n def __init__(self, opt, shared=None):\n \"\"\"Set up model.\"\"\"\n super().__init__(opt, shared)\n self.id = 'Seq2Seq'\n\n def build_model(self, states=None):\n \"\"\"Initialize model, override to change model setup.\"\"\"\n opt = self.opt\n if not states:\n states = {}\n\n kwargs = opt_to_kwargs(opt)\n self.model = Seq2seq(\n len(self.dict), opt['embeddingsize'], opt['hiddensize'],\n padding_idx=self.NULL_IDX, start_idx=self.START_IDX,\n end_idx=self.END_IDX, unknown_idx=self.dict[self.dict.unk_token],\n longest_label=states.get('longest_label', 1),\n **kwargs)\n\n if (opt.get('dict_tokenizer') == 'bpe' and\n opt['embedding_type'] != 'random'):\n print('skipping preinitialization of embeddings for bpe')\n elif not states and opt['embedding_type'] != 'random':\n # `not states`: only set up embeddings if not loading model\n self._copy_embeddings(self.model.decoder.lt.weight,\n opt['embedding_type'])\n if opt['lookuptable'] in ['unique', 'dec_out']:\n # also set encoder lt, since it's not shared\n self._copy_embeddings(self.model.encoder.lt.weight,\n opt['embedding_type'], log=False)\n\n if states:\n # set loaded states if applicable\n self.model.load_state_dict(states['model'])\n\n if self.use_cuda:\n self.model.cuda()\n\n if opt['embedding_type'].endswith('fixed'):\n print('Seq2seq: fixing embedding weights.')\n self.model.decoder.lt.weight.requires_grad = False\n self.model.encoder.lt.weight.requires_grad = False\n if opt['lookuptable'] in ['dec_out', 'all']:\n self.model.decoder.e2s.weight.requires_grad = False\n\n if self.use_cuda:\n self.model.cuda()\n\n return self.model\n\n def build_criterion(self):\n # set up criteria\n if self.opt.get('numsoftmax', 1) > 1:\n self.criterion = nn.NLLLoss(\n ignore_index=self.NULL_IDX, size_average=False)\n else:\n self.criterion = nn.CrossEntropyLoss(\n ignore_index=self.NULL_IDX, size_average=False)\n\n if self.use_cuda:\n self.criterion.cuda()\n\n def batchify(self, *args, **kwargs):\n \"\"\"Override batchify options for seq2seq.\"\"\"\n kwargs['sort'] = True # need sorted for pack_padded\n return super().batchify(*args, **kwargs)\n\n def 
save(self, path=None):\n \"\"\"Save model parameters if model_file is set.\"\"\"\n path = self.opt.get('model_file', None) if path is None else path\n\n if path and hasattr(self, 'model'):\n model = {}\n if hasattr(self.model, 'module'):\n model['model'] = self.model.module.state_dict()\n model['longest_label'] = self.model.module.longest_label\n else:\n model['model'] = self.model.state_dict()\n model['longest_label'] = self.model.longest_label\n model['optimizer'] = self.optimizer.state_dict()\n model['optimizer_type'] = self.opt['optimizer']\n\n with open(path, 'wb') as write:\n torch.save(model, write)\n\n # save opt file\n with open(path + '.opt', 'w') as handle:\n # save version string\n self.opt['model_version'] = self.model_version()\n json.dump(self.opt, handle)\n\n def load(self, path):\n \"\"\"Return opt and model states.\"\"\"\n states = torch.load(path, map_location=lambda cpu, _: cpu)\n # set loaded states if applicable\n self.model.load_state_dict(states['model'])\n if 'longest_label' in states:\n self.model.longest_label = states['longest_label']\n return states\n\n def is_valid(self, obs):\n normally_valid = super().is_valid(obs)\n if not normally_valid:\n # shortcut boolean evaluation\n return normally_valid\n contains_empties = obs['text_vec'].shape[0] == 0\n if self.is_training and contains_empties:\n warn_once(\n 'seq2seq got an empty input sequence (text_vec) during training. '\n 'Skipping this example, but you should check your dataset and '\n 'preprocessing.'\n )\n elif not self.is_training and contains_empties:\n warn_once(\n 'seq2seq got an empty input sequence (text_vec) in an '\n 'evaluation example! This may affect your metrics!'\n )\n return not contains_empties\n", "path": "parlai/agents/seq2seq/seq2seq.py"}]}
| 3,567 | 356 |
gh_patches_debug_24707
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-3714
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Adding a new dimension name to an indicator should propagate to its children
Adding dimension names to indicators after an import should update the child indicators and add the corresponding indicator dimension names to the children too.
--- END ISSUE ---
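As background for the fix, a change to a many-to-many field such as `dimension_names` can be observed through Django's `m2m_changed` signal, which is the natural hook for this kind of propagation. The sketch below is a minimal, generic illustration of that signal; the model names are simplified stand-ins, not akvo-rsr's actual models.
```
# Minimal sketch of Django's m2m_changed signal (simplified stand-in models,
# not the real akvo-rsr classes).
from django.db import models
from django.db.models.signals import m2m_changed
from django.dispatch import receiver


class DimensionName(models.Model):
    name = models.CharField(max_length=100)


class Indicator(models.Model):
    parent = models.ForeignKey('self', null=True, blank=True,
                               related_name='children',
                               on_delete=models.CASCADE)
    dimension_names = models.ManyToManyField(DimensionName,
                                             related_name='indicators')


@receiver(m2m_changed, sender=Indicator.dimension_names.through)
def propagate_dimension_names(sender, instance, action, model, pk_set, **kwargs):
    # action cycles through pre_add/post_add, pre_remove/post_remove and
    # pre_clear/post_clear; pk_set holds the ids of the affected DimensionName rows.
    if action not in ('post_add', 'post_remove'):
        return
    changed = list(model.objects.filter(pk__in=pk_set))
    for child in instance.children.all():
        if action == 'post_add':
            child.dimension_names.add(*changed)
        else:
            child.dimension_names.remove(*changed)
```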
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rsr/models/result/indicator.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from akvo.codelists.models import IndicatorMeasure
8 from akvo.codelists.store.default_codelists import INDICATOR_MEASURE as IM
9 from akvo.rsr.fields import ValidXMLCharField
10 from akvo.utils import codelist_choices, codelist_value
11
12 from django.core.exceptions import ValidationError
13 from django.db import models
14 from django.utils.translation import ugettext_lazy as _
15 from django.db.models.signals import post_save
16 from django.dispatch import receiver
17
18 from .indicator_period import IndicatorPeriod
19 from .indicator_label import IndicatorLabel
20 from .result import Result
21 from .utils import PERCENTAGE_MEASURE, QUALITATIVE, QUANTITATIVE
22
23 # Currently we support only Unit, Percentage measures. Qualitative is
24 # implemented as a different Indicator type, and hence we drop that from the
25 # measure list. We also drop nominal and ordinal since we don't support those.
26 INDICATOR_MEASURE = IM[:3]
27
28
29 class Indicator(models.Model):
30 project_relation = 'results__indicators__in'
31
32 INDICATOR_TYPES = (
33 (QUANTITATIVE, _('Quantitative')),
34 (QUALITATIVE, _('Qualitative')),
35 )
36
37 result = models.ForeignKey('Result', verbose_name=_(u'result'), related_name='indicators')
38 parent_indicator = models.ForeignKey(
39 'self', blank=True, null=True, default=None,
40 verbose_name=_(u'parent indicator'), related_name='child_indicators'
41 )
42 title = ValidXMLCharField(
43 _(u'indicator title'), blank=True, max_length=500,
44 help_text=_(u'Within each result indicators can be defined. Indicators should be items '
45 u'that can be counted and evaluated as the project continues and is completed.')
46 )
47     # NOTE: type and measure should probably only be one field measure, with the values Unit,
48 # Percentage and Qualitative. However since the project editor design splits the choice we use
49 # two fields, type and measure to simplify the interaction between front and back end.
50 type = models.PositiveSmallIntegerField(
51 _('indicator type'), choices=INDICATOR_TYPES, default=QUANTITATIVE
52 )
53 measure = ValidXMLCharField(
54 _(u'indicator measure'), blank=True, max_length=1,
55 choices=codelist_choices(INDICATOR_MEASURE),
56 help_text=_(u'Choose how the indicator will be measured (in percentage or units).')
57 )
58 ascending = models.NullBooleanField(
59 _(u'ascending'), blank=True,
60 help_text=_(u'Choose ascending if the target value of the indicator is higher than the '
61 u'baseline value (eg. people with access to sanitation). Choose descending if '
62 u'the target value of the indicator is lower than the baseline value '
63 u'(eg. people with diarrhea).'))
64 description = ValidXMLCharField(
65 _(u'indicator description'), blank=True, max_length=2000,
66 help_text=_(u'You can provide further information of the indicator here.')
67 )
68 baseline_year = models.PositiveIntegerField(
69 _(u'baseline year'), blank=True, null=True,
70 help_text=_(u'The year the baseline value was taken.')
71 )
72 baseline_value = ValidXMLCharField(
73 _(u'baseline value'), blank=True, max_length=200,
74 help_text=_(u'The value of the baseline at the start of the project.')
75 )
76 baseline_comment = ValidXMLCharField(
77 _(u'baseline comment'), blank=True, max_length=2000,
78 help_text=_(u'Here you can provide extra information on the baseline value, if needed.')
79 )
80 order = models.PositiveSmallIntegerField(_(u'indicator order'), null=True, blank=True)
81 default_periods = models.NullBooleanField(
82 _(u'default indicator periods'), default=False, blank=True,
83 help_text=_(u'Determines whether periods of indicator are used by default.')
84 )
85 export_to_iati = models.BooleanField(
86 _(u'Include indicator in IATI exports'), default=True,
87 help_text=_(u'Choose whether this indicator will be included in IATI exports. '
88 u'If you are not exporting to IATI, you may ignore this option.')
89 )
90 dimension_names = models.ManyToManyField('IndicatorDimensionName', related_name=u'indicators')
91
92 def __unicode__(self):
93 indicator_unicode = self.title if self.title else u'%s' % _(u'No indicator title')
94
95 if self.periods.all():
96 indicator_unicode += u' - %s %s' % (unicode(self.periods.count()),
97 _(u'period(s)'))
98
99 indicator_unicode += u' - %s' % dict(self.INDICATOR_TYPES)[self.type]
100
101 return indicator_unicode
102
103 def save(self, *args, **kwargs):
104 """Update the values of child indicators, if a parent indicator is updated."""
105
106 new_indicator = not self.pk
107
108 if new_indicator and Indicator.objects.filter(result_id=self.result.id).exists():
109 prev_indicator = Indicator.objects.filter(result_id=self.result.id).reverse()[0]
110 if prev_indicator.order:
111 self.order = prev_indicator.order + 1
112
113 # HACK: Delete IndicatorLabels on non-qualitative indicators
114 if new_indicator and self.type != QUALITATIVE:
115 IndicatorLabel.objects.filter(indicator=self).delete()
116
117 super(Indicator, self).save(*args, **kwargs)
118
119 for child_result in self.result.child_results.all():
120 if new_indicator:
121 child_result.project.copy_indicator(child_result, self, set_parent=True)
122 else:
123 child_result.project.update_indicator(child_result, self)
124
125 def clean(self):
126 validation_errors = {}
127
128 if self.pk and self.is_child_indicator():
129 orig_indicator = Indicator.objects.get(pk=self.pk)
130
131 # Don't allow some values to be changed when it is a child indicator
132 if self.result != orig_indicator.result:
133 validation_errors['result'] = u'%s' % \
134 _(u'It is not possible to update the result of this indicator, '
135 u'because it is linked to a parent result.')
136 if self.title != orig_indicator.title:
137 validation_errors['title'] = u'%s' % \
138 _(u'It is not possible to update the title of this indicator, '
139 u'because it is linked to a parent result.')
140 if self.measure != orig_indicator.measure:
141 validation_errors['measure'] = u'%s' % \
142 _(u'It is not possible to update the measure of this indicator, '
143 u'because it is linked to a parent result.')
144 if self.ascending != orig_indicator.ascending:
145 validation_errors['ascending'] = u'%s' % \
146 _(u'It is not possible to update the ascending value of this indicator, '
147 u'because it is linked to a parent result.')
148
149 if validation_errors:
150 raise ValidationError(validation_errors)
151
152 def delete(self, *args, **kwargs):
153 """
154 Check if indicator is ordered manually, and cascade following indicators if needed
155 """
156 if self.order:
157 sibling_indicators = Indicator.objects.filter(result_id=self.result.id)
158
159 if not self == sibling_indicators.reverse()[0]:
160 for ind in range(self.order + 1, len(sibling_indicators)):
161 sibling_indicators[ind].order -= 1
162 sibling_indicators[ind].save()
163
164 super(Indicator, self).delete(*args, **kwargs)
165
166 def iati_measure(self):
167 return codelist_value(IndicatorMeasure, self, 'measure')
168
169 def iati_measure_unicode(self):
170 return str(self.iati_measure())
171
172 def is_calculated(self):
173 return self.result.project.is_impact_project
174
175 def is_child_indicator(self):
176 """
177 Indicates whether this indicator is linked to a parent indicator.
178 """
179 return bool(self.parent_indicator)
180
181 def is_parent_indicator(self):
182 """
183 Indicates whether this indicator has children.
184 """
185 return self.child_indicators.count() > 0
186
187 @property
188 def children_aggregate_percentage(self):
189 """
190 Returns True if this indicator has percentage as a measure and has children that aggregate
191 to this indicator.
192 """
193 if self.measure == PERCENTAGE_MEASURE and self.is_parent_indicator() and \
194 self.result.project.aggregate_children and \
195 any(self.child_indicators.values_list('result__project__aggregate_to_parent', flat=True)):
196 return True
197 return False
198
199 class Meta:
200 app_label = 'rsr'
201 ordering = ['order', 'id']
202 verbose_name = _(u'indicator')
203 verbose_name_plural = _(u'indicators')
204 unique_together = ('result', 'parent_indicator')
205
206
207 # Add default indicator periods if necessary
208 @receiver(post_save, sender=Indicator, dispatch_uid='add_default_periods')
209 def add_default_periods(sender, instance, created, **kwargs):
210 if created:
211 project = instance.result.project
212 results = Result.objects.filter(project_id=project)
213 default_indicator = Indicator.objects.filter(result_id__in=results,
214 default_periods=True).first()
215
216 if default_indicator:
217 default_periods = IndicatorPeriod.objects.filter(indicator_id=default_indicator)
218
219 for period in default_periods:
220 period.pk = None
221
222 # Blank all values except id and locked status
223 period.target_value = ''
224 period.target_comment = ''
225 period.actual_value = ''
226 period.actual_comment = ''
227
228 period.indicator_id = instance.id
229 period.save()
230
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/akvo/rsr/models/result/indicator.py b/akvo/rsr/models/result/indicator.py
--- a/akvo/rsr/models/result/indicator.py
+++ b/akvo/rsr/models/result/indicator.py
@@ -12,7 +12,7 @@
from django.core.exceptions import ValidationError
from django.db import models
from django.utils.translation import ugettext_lazy as _
-from django.db.models.signals import post_save
+from django.db.models.signals import post_save, m2m_changed
from django.dispatch import receiver
from .indicator_period import IndicatorPeriod
@@ -227,3 +227,25 @@
period.indicator_id = instance.id
period.save()
+
+
+@receiver(m2m_changed, sender=Indicator.dimension_names.through)
+def add_dimension_names_to_children(sender, instance, action, **kwargs):
+ if not action.startswith('post_'):
+ return
+
+ if not instance.child_indicators.exists():
+ return
+
+ dimension_name = kwargs['model'].objects.filter(id__in=kwargs['pk_set']).first()
+ for indicator in instance.child_indicators.all():
+ child_dimension_name, _ = dimension_name.child_dimension_names.get_or_create(
+ name=dimension_name.name,
+ parent_dimension_name=dimension_name,
+ project=indicator.result.project)
+
+ if action == 'post_add':
+ indicator.dimension_names.add(child_dimension_name)
+
+ elif action == 'post_remove':
+ indicator.dimension_names.remove(child_dimension_name)
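For readers unfamiliar with the signal used in the patch, here is a rough annotated view of what Django hands the receiver, plus how the propagated state can be checked afterwards. The argument names follow Django's documented `m2m_changed` contract; `parent_indicator` and `dimension_name` are illustrative variables, not fixtures from the repository.
```
# Illustrative only: what the receiver sees for
# parent_indicator.dimension_names.add(dimension_name)
#
#   sender   -> Indicator.dimension_names.through (the auto-created join model)
#   instance -> parent_indicator
#   action   -> 'pre_add', then 'post_add' ('pre_remove'/'post_remove' for .remove())
#   reverse  -> False (the change was made from the Indicator side)
#   model    -> IndicatorDimensionName
#   pk_set   -> {dimension_name.pk}
#
# The patch reacts only to the 'post_*' actions, resolves the dimension name via
# kwargs['model'] / kwargs['pk_set'], creates (or reuses) a child dimension name
# per child project, and mirrors the add/remove onto every child indicator.
parent_indicator.dimension_names.add(dimension_name)   # propagates to children
child = parent_indicator.child_indicators.first()
assert child.dimension_names.filter(parent_dimension_name=dimension_name).exists()
```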
|
{"golden_diff": "diff --git a/akvo/rsr/models/result/indicator.py b/akvo/rsr/models/result/indicator.py\n--- a/akvo/rsr/models/result/indicator.py\n+++ b/akvo/rsr/models/result/indicator.py\n@@ -12,7 +12,7 @@\n from django.core.exceptions import ValidationError\n from django.db import models\n from django.utils.translation import ugettext_lazy as _\n-from django.db.models.signals import post_save\n+from django.db.models.signals import post_save, m2m_changed\n from django.dispatch import receiver\n \n from .indicator_period import IndicatorPeriod\n@@ -227,3 +227,25 @@\n \n period.indicator_id = instance.id\n period.save()\n+\n+\n+@receiver(m2m_changed, sender=Indicator.dimension_names.through)\n+def add_dimension_names_to_children(sender, instance, action, **kwargs):\n+ if not action.startswith('post_'):\n+ return\n+\n+ if not instance.child_indicators.exists():\n+ return\n+\n+ dimension_name = kwargs['model'].objects.filter(id__in=kwargs['pk_set']).first()\n+ for indicator in instance.child_indicators.all():\n+ child_dimension_name, _ = dimension_name.child_dimension_names.get_or_create(\n+ name=dimension_name.name,\n+ parent_dimension_name=dimension_name,\n+ project=indicator.result.project)\n+\n+ if action == 'post_add':\n+ indicator.dimension_names.add(child_dimension_name)\n+\n+ elif action == 'post_remove':\n+ indicator.dimension_names.remove(child_dimension_name)\n", "issue": "Adding new dimension name to indicator should propagate to children\nAdding dimension names to indicators after an import should update the child indicators and add the corresponding indicator dimension names to the children too. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom akvo.codelists.models import IndicatorMeasure\nfrom akvo.codelists.store.default_codelists import INDICATOR_MEASURE as IM\nfrom akvo.rsr.fields import ValidXMLCharField\nfrom akvo.utils import codelist_choices, codelist_value\n\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.db.models.signals import post_save\nfrom django.dispatch import receiver\n\nfrom .indicator_period import IndicatorPeriod\nfrom .indicator_label import IndicatorLabel\nfrom .result import Result\nfrom .utils import PERCENTAGE_MEASURE, QUALITATIVE, QUANTITATIVE\n\n# Currently we support only Unit, Percentage measures. Qualitative is\n# implemented as a different Indicator type, and hence we drop that from the\n# measure list. We also drop nominal and ordinal since we don't support those.\nINDICATOR_MEASURE = IM[:3]\n\n\nclass Indicator(models.Model):\n project_relation = 'results__indicators__in'\n\n INDICATOR_TYPES = (\n (QUANTITATIVE, _('Quantitative')),\n (QUALITATIVE, _('Qualitative')),\n )\n\n result = models.ForeignKey('Result', verbose_name=_(u'result'), related_name='indicators')\n parent_indicator = models.ForeignKey(\n 'self', blank=True, null=True, default=None,\n verbose_name=_(u'parent indicator'), related_name='child_indicators'\n )\n title = ValidXMLCharField(\n _(u'indicator title'), blank=True, max_length=500,\n help_text=_(u'Within each result indicators can be defined. 
Indicators should be items '\n u'that can be counted and evaluated as the project continues and is completed.')\n )\n # NOTE: type and measure should probably only be one field measure, wit the values Unit,\n # Percentage and Qualitative. However since the project editor design splits the choice we use\n # two fields, type and measure to simplify the interaction between front and back end.\n type = models.PositiveSmallIntegerField(\n _('indicator type'), choices=INDICATOR_TYPES, default=QUANTITATIVE\n )\n measure = ValidXMLCharField(\n _(u'indicator measure'), blank=True, max_length=1,\n choices=codelist_choices(INDICATOR_MEASURE),\n help_text=_(u'Choose how the indicator will be measured (in percentage or units).')\n )\n ascending = models.NullBooleanField(\n _(u'ascending'), blank=True,\n help_text=_(u'Choose ascending if the target value of the indicator is higher than the '\n u'baseline value (eg. people with access to sanitation). Choose descending if '\n u'the target value of the indicator is lower than the baseline value '\n u'(eg. people with diarrhea).'))\n description = ValidXMLCharField(\n _(u'indicator description'), blank=True, max_length=2000,\n help_text=_(u'You can provide further information of the indicator here.')\n )\n baseline_year = models.PositiveIntegerField(\n _(u'baseline year'), blank=True, null=True,\n help_text=_(u'The year the baseline value was taken.')\n )\n baseline_value = ValidXMLCharField(\n _(u'baseline value'), blank=True, max_length=200,\n help_text=_(u'The value of the baseline at the start of the project.')\n )\n baseline_comment = ValidXMLCharField(\n _(u'baseline comment'), blank=True, max_length=2000,\n help_text=_(u'Here you can provide extra information on the baseline value, if needed.')\n )\n order = models.PositiveSmallIntegerField(_(u'indicator order'), null=True, blank=True)\n default_periods = models.NullBooleanField(\n _(u'default indicator periods'), default=False, blank=True,\n help_text=_(u'Determines whether periods of indicator are used by default.')\n )\n export_to_iati = models.BooleanField(\n _(u'Include indicator in IATI exports'), default=True,\n help_text=_(u'Choose whether this indicator will be included in IATI exports. 
'\n u'If you are not exporting to IATI, you may ignore this option.')\n )\n dimension_names = models.ManyToManyField('IndicatorDimensionName', related_name=u'indicators')\n\n def __unicode__(self):\n indicator_unicode = self.title if self.title else u'%s' % _(u'No indicator title')\n\n if self.periods.all():\n indicator_unicode += u' - %s %s' % (unicode(self.periods.count()),\n _(u'period(s)'))\n\n indicator_unicode += u' - %s' % dict(self.INDICATOR_TYPES)[self.type]\n\n return indicator_unicode\n\n def save(self, *args, **kwargs):\n \"\"\"Update the values of child indicators, if a parent indicator is updated.\"\"\"\n\n new_indicator = not self.pk\n\n if new_indicator and Indicator.objects.filter(result_id=self.result.id).exists():\n prev_indicator = Indicator.objects.filter(result_id=self.result.id).reverse()[0]\n if prev_indicator.order:\n self.order = prev_indicator.order + 1\n\n # HACK: Delete IndicatorLabels on non-qualitative indicators\n if new_indicator and self.type != QUALITATIVE:\n IndicatorLabel.objects.filter(indicator=self).delete()\n\n super(Indicator, self).save(*args, **kwargs)\n\n for child_result in self.result.child_results.all():\n if new_indicator:\n child_result.project.copy_indicator(child_result, self, set_parent=True)\n else:\n child_result.project.update_indicator(child_result, self)\n\n def clean(self):\n validation_errors = {}\n\n if self.pk and self.is_child_indicator():\n orig_indicator = Indicator.objects.get(pk=self.pk)\n\n # Don't allow some values to be changed when it is a child indicator\n if self.result != orig_indicator.result:\n validation_errors['result'] = u'%s' % \\\n _(u'It is not possible to update the result of this indicator, '\n u'because it is linked to a parent result.')\n if self.title != orig_indicator.title:\n validation_errors['title'] = u'%s' % \\\n _(u'It is not possible to update the title of this indicator, '\n u'because it is linked to a parent result.')\n if self.measure != orig_indicator.measure:\n validation_errors['measure'] = u'%s' % \\\n _(u'It is not possible to update the measure of this indicator, '\n u'because it is linked to a parent result.')\n if self.ascending != orig_indicator.ascending:\n validation_errors['ascending'] = u'%s' % \\\n _(u'It is not possible to update the ascending value of this indicator, '\n u'because it is linked to a parent result.')\n\n if validation_errors:\n raise ValidationError(validation_errors)\n\n def delete(self, *args, **kwargs):\n \"\"\"\n Check if indicator is ordered manually, and cascade following indicators if needed\n \"\"\"\n if self.order:\n sibling_indicators = Indicator.objects.filter(result_id=self.result.id)\n\n if not self == sibling_indicators.reverse()[0]:\n for ind in range(self.order + 1, len(sibling_indicators)):\n sibling_indicators[ind].order -= 1\n sibling_indicators[ind].save()\n\n super(Indicator, self).delete(*args, **kwargs)\n\n def iati_measure(self):\n return codelist_value(IndicatorMeasure, self, 'measure')\n\n def iati_measure_unicode(self):\n return str(self.iati_measure())\n\n def is_calculated(self):\n return self.result.project.is_impact_project\n\n def is_child_indicator(self):\n \"\"\"\n Indicates whether this indicator is linked to a parent indicator.\n \"\"\"\n return bool(self.parent_indicator)\n\n def is_parent_indicator(self):\n \"\"\"\n Indicates whether this indicator has children.\n \"\"\"\n return self.child_indicators.count() > 0\n\n @property\n def children_aggregate_percentage(self):\n \"\"\"\n Returns True if this indicator has percentage 
as a measure and has children that aggregate\n to this indicator.\n \"\"\"\n if self.measure == PERCENTAGE_MEASURE and self.is_parent_indicator() and \\\n self.result.project.aggregate_children and \\\n any(self.child_indicators.values_list('result__project__aggregate_to_parent', flat=True)):\n return True\n return False\n\n class Meta:\n app_label = 'rsr'\n ordering = ['order', 'id']\n verbose_name = _(u'indicator')\n verbose_name_plural = _(u'indicators')\n unique_together = ('result', 'parent_indicator')\n\n\n# Add default indicator periods if necessary\n@receiver(post_save, sender=Indicator, dispatch_uid='add_default_periods')\ndef add_default_periods(sender, instance, created, **kwargs):\n if created:\n project = instance.result.project\n results = Result.objects.filter(project_id=project)\n default_indicator = Indicator.objects.filter(result_id__in=results,\n default_periods=True).first()\n\n if default_indicator:\n default_periods = IndicatorPeriod.objects.filter(indicator_id=default_indicator)\n\n for period in default_periods:\n period.pk = None\n\n # Blank all values except id and locked status\n period.target_value = ''\n period.target_comment = ''\n period.actual_value = ''\n period.actual_comment = ''\n\n period.indicator_id = instance.id\n period.save()\n", "path": "akvo/rsr/models/result/indicator.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom akvo.codelists.models import IndicatorMeasure\nfrom akvo.codelists.store.default_codelists import INDICATOR_MEASURE as IM\nfrom akvo.rsr.fields import ValidXMLCharField\nfrom akvo.utils import codelist_choices, codelist_value\n\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.db.models.signals import post_save, m2m_changed\nfrom django.dispatch import receiver\n\nfrom .indicator_period import IndicatorPeriod\nfrom .indicator_label import IndicatorLabel\nfrom .result import Result\nfrom .utils import PERCENTAGE_MEASURE, QUALITATIVE, QUANTITATIVE\n\n# Currently we support only Unit, Percentage measures. Qualitative is\n# implemented as a different Indicator type, and hence we drop that from the\n# measure list. We also drop nominal and ordinal since we don't support those.\nINDICATOR_MEASURE = IM[:3]\n\n\nclass Indicator(models.Model):\n project_relation = 'results__indicators__in'\n\n INDICATOR_TYPES = (\n (QUANTITATIVE, _('Quantitative')),\n (QUALITATIVE, _('Qualitative')),\n )\n\n result = models.ForeignKey('Result', verbose_name=_(u'result'), related_name='indicators')\n parent_indicator = models.ForeignKey(\n 'self', blank=True, null=True, default=None,\n verbose_name=_(u'parent indicator'), related_name='child_indicators'\n )\n title = ValidXMLCharField(\n _(u'indicator title'), blank=True, max_length=500,\n help_text=_(u'Within each result indicators can be defined. Indicators should be items '\n u'that can be counted and evaluated as the project continues and is completed.')\n )\n # NOTE: type and measure should probably only be one field measure, wit the values Unit,\n # Percentage and Qualitative. 
However since the project editor design splits the choice we use\n # two fields, type and measure to simplify the interaction between front and back end.\n type = models.PositiveSmallIntegerField(\n _('indicator type'), choices=INDICATOR_TYPES, default=QUANTITATIVE\n )\n measure = ValidXMLCharField(\n _(u'indicator measure'), blank=True, max_length=1,\n choices=codelist_choices(INDICATOR_MEASURE),\n help_text=_(u'Choose how the indicator will be measured (in percentage or units).')\n )\n ascending = models.NullBooleanField(\n _(u'ascending'), blank=True,\n help_text=_(u'Choose ascending if the target value of the indicator is higher than the '\n u'baseline value (eg. people with access to sanitation). Choose descending if '\n u'the target value of the indicator is lower than the baseline value '\n u'(eg. people with diarrhea).'))\n description = ValidXMLCharField(\n _(u'indicator description'), blank=True, max_length=2000,\n help_text=_(u'You can provide further information of the indicator here.')\n )\n baseline_year = models.PositiveIntegerField(\n _(u'baseline year'), blank=True, null=True,\n help_text=_(u'The year the baseline value was taken.')\n )\n baseline_value = ValidXMLCharField(\n _(u'baseline value'), blank=True, max_length=200,\n help_text=_(u'The value of the baseline at the start of the project.')\n )\n baseline_comment = ValidXMLCharField(\n _(u'baseline comment'), blank=True, max_length=2000,\n help_text=_(u'Here you can provide extra information on the baseline value, if needed.')\n )\n order = models.PositiveSmallIntegerField(_(u'indicator order'), null=True, blank=True)\n default_periods = models.NullBooleanField(\n _(u'default indicator periods'), default=False, blank=True,\n help_text=_(u'Determines whether periods of indicator are used by default.')\n )\n export_to_iati = models.BooleanField(\n _(u'Include indicator in IATI exports'), default=True,\n help_text=_(u'Choose whether this indicator will be included in IATI exports. 
'\n u'If you are not exporting to IATI, you may ignore this option.')\n )\n dimension_names = models.ManyToManyField('IndicatorDimensionName', related_name=u'indicators')\n\n def __unicode__(self):\n indicator_unicode = self.title if self.title else u'%s' % _(u'No indicator title')\n\n if self.periods.all():\n indicator_unicode += u' - %s %s' % (unicode(self.periods.count()),\n _(u'period(s)'))\n\n indicator_unicode += u' - %s' % dict(self.INDICATOR_TYPES)[self.type]\n\n return indicator_unicode\n\n def save(self, *args, **kwargs):\n \"\"\"Update the values of child indicators, if a parent indicator is updated.\"\"\"\n\n new_indicator = not self.pk\n\n if new_indicator and Indicator.objects.filter(result_id=self.result.id).exists():\n prev_indicator = Indicator.objects.filter(result_id=self.result.id).reverse()[0]\n if prev_indicator.order:\n self.order = prev_indicator.order + 1\n\n # HACK: Delete IndicatorLabels on non-qualitative indicators\n if new_indicator and self.type != QUALITATIVE:\n IndicatorLabel.objects.filter(indicator=self).delete()\n\n super(Indicator, self).save(*args, **kwargs)\n\n for child_result in self.result.child_results.all():\n if new_indicator:\n child_result.project.copy_indicator(child_result, self, set_parent=True)\n else:\n child_result.project.update_indicator(child_result, self)\n\n def clean(self):\n validation_errors = {}\n\n if self.pk and self.is_child_indicator():\n orig_indicator = Indicator.objects.get(pk=self.pk)\n\n # Don't allow some values to be changed when it is a child indicator\n if self.result != orig_indicator.result:\n validation_errors['result'] = u'%s' % \\\n _(u'It is not possible to update the result of this indicator, '\n u'because it is linked to a parent result.')\n if self.title != orig_indicator.title:\n validation_errors['title'] = u'%s' % \\\n _(u'It is not possible to update the title of this indicator, '\n u'because it is linked to a parent result.')\n if self.measure != orig_indicator.measure:\n validation_errors['measure'] = u'%s' % \\\n _(u'It is not possible to update the measure of this indicator, '\n u'because it is linked to a parent result.')\n if self.ascending != orig_indicator.ascending:\n validation_errors['ascending'] = u'%s' % \\\n _(u'It is not possible to update the ascending value of this indicator, '\n u'because it is linked to a parent result.')\n\n if validation_errors:\n raise ValidationError(validation_errors)\n\n def delete(self, *args, **kwargs):\n \"\"\"\n Check if indicator is ordered manually, and cascade following indicators if needed\n \"\"\"\n if self.order:\n sibling_indicators = Indicator.objects.filter(result_id=self.result.id)\n\n if not self == sibling_indicators.reverse()[0]:\n for ind in range(self.order + 1, len(sibling_indicators)):\n sibling_indicators[ind].order -= 1\n sibling_indicators[ind].save()\n\n super(Indicator, self).delete(*args, **kwargs)\n\n def iati_measure(self):\n return codelist_value(IndicatorMeasure, self, 'measure')\n\n def iati_measure_unicode(self):\n return str(self.iati_measure())\n\n def is_calculated(self):\n return self.result.project.is_impact_project\n\n def is_child_indicator(self):\n \"\"\"\n Indicates whether this indicator is linked to a parent indicator.\n \"\"\"\n return bool(self.parent_indicator)\n\n def is_parent_indicator(self):\n \"\"\"\n Indicates whether this indicator has children.\n \"\"\"\n return self.child_indicators.count() > 0\n\n @property\n def children_aggregate_percentage(self):\n \"\"\"\n Returns True if this indicator has percentage 
as a measure and has children that aggregate\n to this indicator.\n \"\"\"\n if self.measure == PERCENTAGE_MEASURE and self.is_parent_indicator() and \\\n self.result.project.aggregate_children and \\\n any(self.child_indicators.values_list('result__project__aggregate_to_parent', flat=True)):\n return True\n return False\n\n class Meta:\n app_label = 'rsr'\n ordering = ['order', 'id']\n verbose_name = _(u'indicator')\n verbose_name_plural = _(u'indicators')\n unique_together = ('result', 'parent_indicator')\n\n\n# Add default indicator periods if necessary\n@receiver(post_save, sender=Indicator, dispatch_uid='add_default_periods')\ndef add_default_periods(sender, instance, created, **kwargs):\n if created:\n project = instance.result.project\n results = Result.objects.filter(project_id=project)\n default_indicator = Indicator.objects.filter(result_id__in=results,\n default_periods=True).first()\n\n if default_indicator:\n default_periods = IndicatorPeriod.objects.filter(indicator_id=default_indicator)\n\n for period in default_periods:\n period.pk = None\n\n # Blank all values except id and locked status\n period.target_value = ''\n period.target_comment = ''\n period.actual_value = ''\n period.actual_comment = ''\n\n period.indicator_id = instance.id\n period.save()\n\n\n@receiver(m2m_changed, sender=Indicator.dimension_names.through)\ndef add_dimension_names_to_children(sender, instance, action, **kwargs):\n if not action.startswith('post_'):\n return\n\n if not instance.child_indicators.exists():\n return\n\n dimension_name = kwargs['model'].objects.filter(id__in=kwargs['pk_set']).first()\n for indicator in instance.child_indicators.all():\n child_dimension_name, _ = dimension_name.child_dimension_names.get_or_create(\n name=dimension_name.name,\n parent_dimension_name=dimension_name,\n project=indicator.result.project)\n\n if action == 'post_add':\n indicator.dimension_names.add(child_dimension_name)\n\n elif action == 'post_remove':\n indicator.dimension_names.remove(child_dimension_name)\n", "path": "akvo/rsr/models/result/indicator.py"}]}
| 2,972 | 333 |
gh_patches_debug_10351
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-3993
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
In Bplans, the required fields (start date & end date) for publication are not marked as such
**URL:** https://meinberlin-dev.liqd.net/dashboard/projects/sabinas-bebauungsplan/bplan/
**user:** A Bplan initiator
**expected behaviour:** As a Bplan initiator, I want the form to tell me which fields are required so that I can publish
**behaviour:** Because the start and end dates are not marked as required for publication, it is unclear that leaving them empty is what prevents publication
**important screensize:** any
**device & browser:** ubuntu chrome
**Comment/Question:**
These are just missing the red exclamation point icons. It's a simple fix in forms.py
Screenshot?

--- END ISSUE ---
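For context, the fix itself only touches the `required_for_project_publish` list in `Meta` (see the diff further down); that list is what the dashboard reads to decide which fields block publication and therefore get the red marker. As background only, a simplified sketch of how such a Meta list can be surfaced on a form is shown here; it is an illustration, not the actual adhocracy4/meinberlin dashboard code.
```
# Simplified illustration (not the real dashboard implementation) of turning
# Meta.required_for_project_publish into per-field markers that a template can
# render as the red exclamation-point icon.
class RequiredForPublishMixin:
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        required = getattr(self.Meta, 'required_for_project_publish', [])
        for name in required:
            if name in self.fields:
                self.fields[name].widget.attrs['data-required-for-publish'] = 'true'
```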
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/bplan/forms.py`
Content:
```
1 from django import forms
2
3 from meinberlin.apps.extprojects.forms import ExternalProjectCreateForm
4 from meinberlin.apps.extprojects.forms import ExternalProjectForm
5
6 from . import models
7
8
9 class StatementForm(forms.ModelForm):
10 class Meta:
11 model = models.Statement
12 fields = ['name', 'email', 'statement',
13 'street_number', 'postal_code_city']
14
15
16 class BplanProjectCreateForm(ExternalProjectCreateForm):
17
18 class Meta:
19 model = models.Bplan
20 fields = ['name', 'description', 'tile_image', 'tile_image_copyright']
21
22
23 class BplanProjectForm(ExternalProjectForm):
24
25 class Meta:
26 model = models.Bplan
27 fields = ['name', 'identifier', 'url', 'description', 'tile_image',
28 'tile_image_copyright', 'is_archived', 'office_worker_email',
29 'start_date', 'end_date']
30 required_for_project_publish = ['name', 'url', 'description',
31 'office_worker_email']
32
33 def __init__(self, *args, **kwargs):
34 super().__init__(*args, **kwargs)
35 self.fields['name'].widget.attrs.update({
36 'autocomplete': 'off', 'autofill': 'off'
37 })
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/meinberlin/apps/bplan/forms.py b/meinberlin/apps/bplan/forms.py
--- a/meinberlin/apps/bplan/forms.py
+++ b/meinberlin/apps/bplan/forms.py
@@ -28,7 +28,8 @@
'tile_image_copyright', 'is_archived', 'office_worker_email',
'start_date', 'end_date']
required_for_project_publish = ['name', 'url', 'description',
- 'office_worker_email']
+ 'office_worker_email',
+ 'start_date', 'end_date']
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
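A small regression check along these lines could pin the new requirement down; this is an illustrative test, not part of the patch.
```
# Hypothetical regression test: the date fields must stay listed in the
# publication requirements of BplanProjectForm.
from meinberlin.apps.bplan.forms import BplanProjectForm


def test_dates_are_required_for_publication():
    required = BplanProjectForm.Meta.required_for_project_publish
    assert 'start_date' in required
    assert 'end_date' in required
```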
|
{"golden_diff": "diff --git a/meinberlin/apps/bplan/forms.py b/meinberlin/apps/bplan/forms.py\n--- a/meinberlin/apps/bplan/forms.py\n+++ b/meinberlin/apps/bplan/forms.py\n@@ -28,7 +28,8 @@\n 'tile_image_copyright', 'is_archived', 'office_worker_email',\n 'start_date', 'end_date']\n required_for_project_publish = ['name', 'url', 'description',\n- 'office_worker_email']\n+ 'office_worker_email',\n+ 'start_date', 'end_date']\n \n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n", "issue": "In Bplans required field (start-date & end-date) for publication is not marked as such\n**URL:** https://meinberlin-dev.liqd.net/dashboard/projects/sabinas-bebauungsplan/bplan/\r\n**user:** A Bplan initiator\r\n**expected behaviour:** As a Bplan initiator, I want the form to tell me which fields are required so that I can publish\r\n**behaviour:** Because the start- and enddate are not marked as required for publication, it's unclear that not filling these in is hindering publication\r\n**important screensize:** any\r\n**device & browser:** ubuntu chrome\r\n**Comment/Question:**\r\nThese are just missing red the exclamation point icons. It's a simple fix in forms.py\r\n\r\nScreenshot?\r\n\r\n\r\n\n", "before_files": [{"content": "from django import forms\n\nfrom meinberlin.apps.extprojects.forms import ExternalProjectCreateForm\nfrom meinberlin.apps.extprojects.forms import ExternalProjectForm\n\nfrom . import models\n\n\nclass StatementForm(forms.ModelForm):\n class Meta:\n model = models.Statement\n fields = ['name', 'email', 'statement',\n 'street_number', 'postal_code_city']\n\n\nclass BplanProjectCreateForm(ExternalProjectCreateForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'description', 'tile_image', 'tile_image_copyright']\n\n\nclass BplanProjectForm(ExternalProjectForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'identifier', 'url', 'description', 'tile_image',\n 'tile_image_copyright', 'is_archived', 'office_worker_email',\n 'start_date', 'end_date']\n required_for_project_publish = ['name', 'url', 'description',\n 'office_worker_email']\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields['name'].widget.attrs.update({\n 'autocomplete': 'off', 'autofill': 'off'\n })\n", "path": "meinberlin/apps/bplan/forms.py"}], "after_files": [{"content": "from django import forms\n\nfrom meinberlin.apps.extprojects.forms import ExternalProjectCreateForm\nfrom meinberlin.apps.extprojects.forms import ExternalProjectForm\n\nfrom . import models\n\n\nclass StatementForm(forms.ModelForm):\n class Meta:\n model = models.Statement\n fields = ['name', 'email', 'statement',\n 'street_number', 'postal_code_city']\n\n\nclass BplanProjectCreateForm(ExternalProjectCreateForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'description', 'tile_image', 'tile_image_copyright']\n\n\nclass BplanProjectForm(ExternalProjectForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'identifier', 'url', 'description', 'tile_image',\n 'tile_image_copyright', 'is_archived', 'office_worker_email',\n 'start_date', 'end_date']\n required_for_project_publish = ['name', 'url', 'description',\n 'office_worker_email',\n 'start_date', 'end_date']\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields['name'].widget.attrs.update({\n 'autocomplete': 'off', 'autofill': 'off'\n })\n", "path": "meinberlin/apps/bplan/forms.py"}]}
| 814 | 149 |
gh_patches_debug_26987
|
rasdani/github-patches
|
git_diff
|
nvaccess__nvda-8138
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Windows 10: NVDA announces "unknown" when opening and closing quick link menu (Windows+X)
Hi,
Reported by several people: in Windows 10, when one opens the quick link menu (Windows+X), NVDA announces "unknown".
STR:
1. Press Windows+X to open the quick link menu.
2. Select an item.
Expected: no "unknown".
Actual: "unknown" is announced.
Cause: an app launcher or an unknown window gains focus.
Related to: #5116, #6671.
Thanks.
--- END ISSUE ---
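The "unknown" announcement comes from a transient shell window briefly taking the foreground. NVDA's Explorer app module (listed below) handles such windows by checking the window class name in `event_gainFocus` and swallowing the event; the trimmed-down sketch here shows only that pattern, with an illustrative class-name tuple, and is not the full module.
```
# Trimmed-down sketch of the filtering pattern used in appModules/explorer.py:
# drop focus events fired by transient shell windows so NVDA stays silent.
import appModuleHandler

TRANSIENT_SHELL_WINDOWS = (
	"ForegroundStaging",  # Task View staging window (#5116)
	"LauncherTipWnd",  # quick link menu (Windows+X) launcher window (#8137)
)

class AppModule(appModuleHandler.AppModule):

	def event_gainFocus(self, obj, nextHandler):
		if obj.windowClassName in TRANSIENT_SHELL_WINDOWS:
			# Swallow the event instead of passing it along.
			return
		nextHandler()
```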
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source/appModules/explorer.py`
Content:
```
1 #appModules/explorer.py
2 #A part of NonVisual Desktop Access (NVDA)
3 #Copyright (C) 2006-2017 NV Access Limited, Joseph Lee
4 #This file is covered by the GNU General Public License.
5 #See the file COPYING for more details.
6
7 """App module for Windows Explorer (aka Windows shell).
8 Provides workarounds for controls such as identifying Start button, notification area and others.
9 """
10
11 from comtypes import COMError
12 import time
13 import appModuleHandler
14 import controlTypes
15 import winUser
16 import api
17 import speech
18 import eventHandler
19 import mouseHandler
20 from NVDAObjects.window import Window
21 from NVDAObjects.IAccessible import sysListView32, IAccessible, List
22 from NVDAObjects.UIA import UIA
23
24 # Suppress incorrect Win 10 Task switching window focus
25 class MultitaskingViewFrameWindow(UIA):
26 shouldAllowUIAFocusEvent=False
27
28
29 # Suppress focus ancestry for task switching list items if alt is held down (alt+tab)
30 class MultitaskingViewFrameListItem(UIA):
31
32 def _get_container(self):
33 if winUser.getAsyncKeyState(winUser.VK_MENU)&32768:
34 return api.getDesktopObject()
35 else:
36 return super(MultitaskingViewFrameListItem,self).container
37
38
39 # Support for Win8 start screen search suggestions.
40 class SuggestionListItem(UIA):
41
42 def event_UIA_elementSelected(self):
43 speech.cancelSpeech()
44 api.setNavigatorObject(self, isFocus=True)
45 self.reportFocus()
46 super(SuggestionListItem,self).event_UIA_elementSelected()
47
48
49 # Windows 8 hack: Class to disable incorrect focus on windows 8 search box (containing the already correctly focused edit field)
50 class SearchBoxClient(IAccessible):
51 shouldAllowIAccessibleFocusEvent=False
52
53
54 # Class for menu items for Windows Places and Frequently used Programs (in start menu)
55 class SysListView32MenuItem(sysListView32.ListItemWithoutColumnSupport):
56
57 # #474: When focus moves to these items, an extra focus is fired on the parent
58 # However NVDA redirects it to the real focus.
59 # But this means double focus events on the item, so filter the second one out
60 def _get_shouldAllowIAccessibleFocusEvent(self):
61 res=super(SysListView32MenuItem,self).shouldAllowIAccessibleFocusEvent
62 if not res:
63 return False
64 focus=eventHandler.lastQueuedFocusObject
65 if type(focus)!=type(self) or (self.event_windowHandle,self.event_objectID,self.event_childID)!=(focus.event_windowHandle,focus.event_objectID,focus.event_childID):
66 return True
67 return False
68
69
70 class ClassicStartMenu(Window):
71 # Override the name, as Windows names this the "Application" menu contrary to all documentation.
72 # Translators: The title of Start menu/screen in your language (only the word start).
73 name = _("Start")
74
75 def event_gainFocus(self):
76 # In Windows XP, the Start button will get focus first, so silence this.
77 speech.cancelSpeech()
78 super(ClassicStartMenu, self).event_gainFocus()
79
80
81 class NotificationArea(IAccessible):
82 """The Windows notification area, a.k.a. system tray.
83 """
84
85 def event_gainFocus(self):
86 if mouseHandler.lastMouseEventTime < time.time() - 0.2:
87 # This focus change was not caused by a mouse event.
88 # If the mouse is on another toolbar control, the notification area toolbar will rudely
89 # bounce the focus back to the object under the mouse after a brief pause.
90 # Moving the mouse to the focus object isn't a good solution because
91 # sometimes, the focus can't be moved away from the object under the mouse.
92 # Therefore, move the mouse out of the way.
93 winUser.setCursorPos(0, 0)
94
95 if self.role == controlTypes.ROLE_TOOLBAR:
96 # Sometimes, the toolbar itself receives the focus instead of the focused child.
97 # However, the focused child still has the focused state.
98 for child in self.children:
99 if child.hasFocus:
100 # Redirect the focus to the focused child.
101 eventHandler.executeEvent("gainFocus", child)
102 return
103 # We've really landed on the toolbar itself.
104 # This was probably caused by moving the mouse out of the way in a previous focus event.
105 # This previous focus event is no longer useful, so cancel speech.
106 speech.cancelSpeech()
107
108 if eventHandler.isPendingEvents("gainFocus"):
109 return
110 super(NotificationArea, self).event_gainFocus()
111
112
113 class GridTileElement(UIA):
114
115 role=controlTypes.ROLE_TABLECELL
116
117 def _get_description(self):
118 name=self.name
119 descriptionStrings=[]
120 for child in self.children:
121 description=child.basicText
122 if not description or description==name: continue
123 descriptionStrings.append(description)
124 return " ".join(descriptionStrings)
125 return description
126
127
128 class GridListTileElement(UIA):
129 role=controlTypes.ROLE_TABLECELL
130 description=None
131
132
133 class GridGroup(UIA):
134 """A group in the Windows 8 Start Menu.
135 """
136 presentationType=UIA.presType_content
137
138 # Normally the name is the first tile which is rather redundant
139 # However some groups have custom header text which should be read instead
140 def _get_name(self):
141 child=self.firstChild
142 if isinstance(child,UIA):
143 try:
144 automationID=child.UIAElement.currentAutomationID
145 except COMError:
146 automationID=None
147 if automationID=="GridListGroupHeader":
148 return child.name
149
150
151 class ImmersiveLauncher(UIA):
152 # When the Windows 8 start screen opens, focus correctly goes to the first tile, but then incorrectly back to the root of the window.
153 # Ignore focus events on this object.
154 shouldAllowUIAFocusEvent=False
155
156
157 class StartButton(IAccessible):
158 """For Windows 8.1 and 10 Start buttons to be recognized as proper buttons and to suppress selection announcement."""
159
160 role = controlTypes.ROLE_BUTTON
161
162 def _get_states(self):
163 # #5178: Selection announcement should be suppressed.
164 # Borrowed from Mozilla objects in NVDAObjects/IAccessible/Mozilla.py.
165 states = super(StartButton, self).states
166 states.discard(controlTypes.STATE_SELECTED)
167 return states
168
169
170 class AppModule(appModuleHandler.AppModule):
171
172 def chooseNVDAObjectOverlayClasses(self, obj, clsList):
173 windowClass = obj.windowClassName
174 role = obj.role
175
176 if windowClass in ("Search Box","UniversalSearchBand") and role==controlTypes.ROLE_PANE and isinstance(obj,IAccessible):
177 clsList.insert(0,SearchBoxClient)
178 return
179
180 if windowClass == "ToolbarWindow32" and role == controlTypes.ROLE_POPUPMENU:
181 parent = obj.parent
182 if parent and parent.windowClassName == "SysPager" and obj.windowStyle & 0x80:
183 clsList.insert(0, ClassicStartMenu)
184 return
185
186 if windowClass == "SysListView32" and role == controlTypes.ROLE_MENUITEM:
187 clsList.insert(0, SysListView32MenuItem)
188 return
189
190 if windowClass == "ToolbarWindow32":
191 # Check whether this is the notification area, a.k.a. system tray.
192 if isinstance(obj.parent, ClassicStartMenu):
193 return # This can't be a notification area
194 try:
195 # The toolbar's immediate parent is its window object, so we need to go one further.
196 toolbarParent = obj.parent.parent
197 if role != controlTypes.ROLE_TOOLBAR:
198 # Toolbar item.
199 toolbarParent = toolbarParent.parent
200 except AttributeError:
201 toolbarParent = None
202 if toolbarParent and toolbarParent.windowClassName == "SysPager":
203 clsList.insert(0, NotificationArea)
204 return
205
206 # #5178: Start button in Windows 8.1 and 10 should not have been a list in the first place.
207 if windowClass == "Start" and role in (controlTypes.ROLE_LIST, controlTypes.ROLE_BUTTON):
208 if role == controlTypes.ROLE_LIST:
209 clsList.remove(List)
210 clsList.insert(0, StartButton)
211
212 if isinstance(obj, UIA):
213 uiaClassName = obj.UIAElement.cachedClassName
214 if uiaClassName == "GridTileElement":
215 clsList.insert(0, GridTileElement)
216 elif uiaClassName == "GridListTileElement":
217 clsList.insert(0, GridListTileElement)
218 elif uiaClassName == "GridGroup":
219 clsList.insert(0, GridGroup)
220 elif uiaClassName == "ImmersiveLauncher" and role == controlTypes.ROLE_PANE:
221 clsList.insert(0, ImmersiveLauncher)
222 elif uiaClassName=="ListViewItem" and obj.UIAElement.cachedAutomationId.startswith('Suggestion_'):
223 clsList.insert(0,SuggestionListItem)
224 elif uiaClassName=="MultitaskingViewFrame" and role==controlTypes.ROLE_WINDOW:
225 clsList.insert(0,MultitaskingViewFrameWindow)
226 elif obj.windowClassName=="MultitaskingViewFrame" and role==controlTypes.ROLE_LISTITEM:
227 clsList.insert(0,MultitaskingViewFrameListItem)
228
229 def event_NVDAObject_init(self, obj):
230 windowClass = obj.windowClassName
231 role = obj.role
232
233 if windowClass == "ToolbarWindow32" and role == controlTypes.ROLE_POPUPMENU:
234 parent = obj.parent
235 if parent and parent.windowClassName == "SysPager" and not (obj.windowStyle & 0x80):
236 # This is the menu for a group of icons on the task bar, which Windows stupidly names "Application".
237 obj.name = None
238 return
239
240 if windowClass == "#32768":
241 # Standard menu.
242 parent = obj.parent
243 if parent and not parent.parent:
244 # Context menu.
245 # We don't trust the names that Explorer gives to context menus, so better to have no name at all.
246 obj.name = None
247 return
248
249 if windowClass == "DV2ControlHost" and role == controlTypes.ROLE_PANE:
250 # Windows Vista/7 start menu.
251 obj.presentationType=obj.presType_content
252 obj.isPresentableFocusAncestor = True
253 # In Windows 7, the description of this pane is extremely verbose help text, so nuke it.
254 obj.description = None
255 return
256
257 # The Address bar is embedded inside a progressbar, how strange.
258 # Lets hide that
259 if windowClass=="msctls_progress32" and winUser.getClassName(winUser.getAncestor(obj.windowHandle,winUser.GA_PARENT))=="Address Band Root":
260 obj.presentationType=obj.presType_layout
261
262 def event_gainFocus(self, obj, nextHandler):
263 wClass = obj.windowClassName
264 if wClass == "ToolbarWindow32" and obj.role == controlTypes.ROLE_MENUITEM and obj.parent.role == controlTypes.ROLE_MENUBAR and eventHandler.isPendingEvents("gainFocus"):
265 # When exiting a menu, Explorer fires focus on the top level menu item before it returns to the previous focus.
266 # Unfortunately, this focus event always occurs in a subsequent cycle, so the event limiter doesn't eliminate it.
267 # Therefore, if there is a pending focus event, don't bother handling this event.
268 return
269
270 if wClass == "ForegroundStaging":
271 # #5116: The Windows 10 Task View fires foreground/focus on this weird invisible window before and after it appears.
272 # This causes NVDA to report "unknown", so ignore it.
273 # We can't do this using shouldAllowIAccessibleFocusEvent because this isn't checked for foreground.
274 return
275
276 if wClass == "WorkerW" and obj.role == controlTypes.ROLE_PANE and obj.name is None:
277 			# #6671: Never allow WorkerW thread to send gain focus event, as it causes 'pane' to be announced when minimizing windows or moving to desktop.
278 return
279
280 nextHandler()
281
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/source/appModules/explorer.py b/source/appModules/explorer.py
--- a/source/appModules/explorer.py
+++ b/source/appModules/explorer.py
@@ -1,6 +1,6 @@
#appModules/explorer.py
#A part of NonVisual Desktop Access (NVDA)
-#Copyright (C) 2006-2017 NV Access Limited, Joseph Lee
+#Copyright (C) 2006-2018 NV Access Limited, Joseph Lee
#This file is covered by the GNU General Public License.
#See the file COPYING for more details.
@@ -267,10 +267,11 @@
# Therefore, if there is a pending focus event, don't bother handling this event.
return
- if wClass == "ForegroundStaging":
- # #5116: The Windows 10 Task View fires foreground/focus on this weird invisible window before and after it appears.
+ if wClass in ("ForegroundStaging", "LauncherTipWnd", "ApplicationManager_DesktopShellWindow"):
+ # #5116: The Windows 10 Task View fires foreground/focus on this weird invisible window and foreground staging screen before and after it appears.
# This causes NVDA to report "unknown", so ignore it.
# We can't do this using shouldAllowIAccessibleFocusEvent because this isn't checked for foreground.
+ # #8137: also seen when opening quick link menu (Windows+X) on Windows 8 and later.
return
if wClass == "WorkerW" and obj.role == controlTypes.ROLE_PANE and obj.name is None:
|
{"golden_diff": "diff --git a/source/appModules/explorer.py b/source/appModules/explorer.py\n--- a/source/appModules/explorer.py\n+++ b/source/appModules/explorer.py\n@@ -1,6 +1,6 @@\n #appModules/explorer.py\r\n #A part of NonVisual Desktop Access (NVDA)\r\n-#Copyright (C) 2006-2017 NV Access Limited, Joseph Lee\r\n+#Copyright (C) 2006-2018 NV Access Limited, Joseph Lee\r\n #This file is covered by the GNU General Public License.\r\n #See the file COPYING for more details.\r\n \r\n@@ -267,10 +267,11 @@\n \t\t\t# Therefore, if there is a pending focus event, don't bother handling this event.\r\n \t\t\treturn\r\n \r\n-\t\tif wClass == \"ForegroundStaging\":\r\n-\t\t\t# #5116: The Windows 10 Task View fires foreground/focus on this weird invisible window before and after it appears.\r\n+\t\tif wClass in (\"ForegroundStaging\", \"LauncherTipWnd\", \"ApplicationManager_DesktopShellWindow\"):\r\n+\t\t\t# #5116: The Windows 10 Task View fires foreground/focus on this weird invisible window and foreground staging screen before and after it appears.\r\n \t\t\t# This causes NVDA to report \"unknown\", so ignore it.\r\n \t\t\t# We can't do this using shouldAllowIAccessibleFocusEvent because this isn't checked for foreground.\r\n+\t\t\t# #8137: also seen when opening quick link menu (Windows+X) on Windows 8 and later.\r\n \t\t\treturn\r\n \r\n \t\tif wClass == \"WorkerW\" and obj.role == controlTypes.ROLE_PANE and obj.name is None:\n", "issue": "Windows 10: NVDA announces \"unknown\" when opening and closing quick link menu (Windows+X)\nHi,\r\n\r\nReported by several people: in Windows 10, when one opens quick link menu (Windows+X), NVDA announces \"unknown\".\r\n\r\nSTR:\r\n\r\n1. Press Windows+X to open quick link menu.\r\n2. Select an item.\r\n\r\nExpected: no \"unknown\".\r\nActual: \"unknown\" is announced.\r\nCause: an app launcher or an unknown window gains focus.\r\n\r\nRelated to: #5116, #6671.\r\n\r\nThanks.\n", "before_files": [{"content": "#appModules/explorer.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#Copyright (C) 2006-2017 NV Access Limited, Joseph Lee\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n\r\n\"\"\"App module for Windows Explorer (aka Windows shell).\r\nProvides workarounds for controls such as identifying Start button, notification area and others.\r\n\"\"\"\r\n\r\nfrom comtypes import COMError\r\nimport time\r\nimport appModuleHandler\r\nimport controlTypes\r\nimport winUser\r\nimport api\r\nimport speech\r\nimport eventHandler\r\nimport mouseHandler\r\nfrom NVDAObjects.window import Window\r\nfrom NVDAObjects.IAccessible import sysListView32, IAccessible, List\r\nfrom NVDAObjects.UIA import UIA\r\n\r\n# Suppress incorrect Win 10 Task switching window focus\r\nclass MultitaskingViewFrameWindow(UIA):\r\n\tshouldAllowUIAFocusEvent=False\r\n\r\n\r\n# Suppress focus ancestry for task switching list items if alt is held down (alt+tab)\r\nclass MultitaskingViewFrameListItem(UIA):\r\n\r\n\tdef _get_container(self):\r\n\t\tif winUser.getAsyncKeyState(winUser.VK_MENU)&32768:\r\n\t\t\treturn api.getDesktopObject()\r\n\t\telse:\r\n\t\t\treturn super(MultitaskingViewFrameListItem,self).container\r\n\r\n\r\n# Support for Win8 start screen search suggestions.\r\nclass SuggestionListItem(UIA):\r\n\r\n\tdef event_UIA_elementSelected(self):\r\n\t\tspeech.cancelSpeech()\r\n\t\tapi.setNavigatorObject(self, isFocus=True)\r\n\t\tself.reportFocus()\r\n\t\tsuper(SuggestionListItem,self).event_UIA_elementSelected()\r\n\r\n\r\n# 
Windows 8 hack: Class to disable incorrect focus on windows 8 search box (containing the already correctly focused edit field)\r\nclass SearchBoxClient(IAccessible):\r\n\tshouldAllowIAccessibleFocusEvent=False\r\n\r\n\r\n# Class for menu items for Windows Places and Frequently used Programs (in start menu)\r\nclass SysListView32MenuItem(sysListView32.ListItemWithoutColumnSupport):\r\n\r\n\t# #474: When focus moves to these items, an extra focus is fired on the parent\r\n\t# However NVDA redirects it to the real focus.\r\n\t# But this means double focus events on the item, so filter the second one out\r\n\tdef _get_shouldAllowIAccessibleFocusEvent(self):\r\n\t\tres=super(SysListView32MenuItem,self).shouldAllowIAccessibleFocusEvent\r\n\t\tif not res:\r\n\t\t\treturn False\r\n\t\tfocus=eventHandler.lastQueuedFocusObject\r\n\t\tif type(focus)!=type(self) or (self.event_windowHandle,self.event_objectID,self.event_childID)!=(focus.event_windowHandle,focus.event_objectID,focus.event_childID):\r\n\t\t\treturn True\r\n\t\treturn False\r\n\r\n\r\nclass ClassicStartMenu(Window):\r\n\t# Override the name, as Windows names this the \"Application\" menu contrary to all documentation.\r\n\t# Translators: The title of Start menu/screen in your language (only the word start).\r\n\tname = _(\"Start\")\r\n\r\n\tdef event_gainFocus(self):\r\n\t\t# In Windows XP, the Start button will get focus first, so silence this.\r\n\t\tspeech.cancelSpeech()\r\n\t\tsuper(ClassicStartMenu, self).event_gainFocus()\r\n\r\n\r\nclass NotificationArea(IAccessible):\r\n\t\"\"\"The Windows notification area, a.k.a. system tray.\r\n\t\"\"\"\r\n\r\n\tdef event_gainFocus(self):\r\n\t\tif mouseHandler.lastMouseEventTime < time.time() - 0.2:\r\n\t\t\t# This focus change was not caused by a mouse event.\r\n\t\t\t# If the mouse is on another toolbar control, the notification area toolbar will rudely\r\n\t\t\t# bounce the focus back to the object under the mouse after a brief pause.\r\n\t\t\t# Moving the mouse to the focus object isn't a good solution because\r\n\t\t\t# sometimes, the focus can't be moved away from the object under the mouse.\r\n\t\t\t# Therefore, move the mouse out of the way.\r\n\t\t\twinUser.setCursorPos(0, 0)\r\n\r\n\t\tif self.role == controlTypes.ROLE_TOOLBAR:\r\n\t\t\t# Sometimes, the toolbar itself receives the focus instead of the focused child.\r\n\t\t\t# However, the focused child still has the focused state.\r\n\t\t\tfor child in self.children:\r\n\t\t\t\tif child.hasFocus:\r\n\t\t\t\t\t# Redirect the focus to the focused child.\r\n\t\t\t\t\teventHandler.executeEvent(\"gainFocus\", child)\r\n\t\t\t\t\treturn\r\n\t\t\t# We've really landed on the toolbar itself.\r\n\t\t\t# This was probably caused by moving the mouse out of the way in a previous focus event.\r\n\t\t\t# This previous focus event is no longer useful, so cancel speech.\r\n\t\t\tspeech.cancelSpeech()\r\n\r\n\t\tif eventHandler.isPendingEvents(\"gainFocus\"):\r\n\t\t\treturn\r\n\t\tsuper(NotificationArea, self).event_gainFocus()\r\n\r\n\r\nclass GridTileElement(UIA):\r\n\r\n\trole=controlTypes.ROLE_TABLECELL\r\n\r\n\tdef _get_description(self):\r\n\t\tname=self.name\r\n\t\tdescriptionStrings=[]\r\n\t\tfor child in self.children:\r\n\t\t\tdescription=child.basicText\r\n\t\t\tif not description or description==name: continue\r\n\t\t\tdescriptionStrings.append(description)\r\n\t\treturn \" \".join(descriptionStrings)\r\n\t\treturn description\r\n\r\n\r\nclass 
GridListTileElement(UIA):\r\n\trole=controlTypes.ROLE_TABLECELL\r\n\tdescription=None\r\n\r\n\r\nclass GridGroup(UIA):\r\n\t\"\"\"A group in the Windows 8 Start Menu.\r\n\t\"\"\"\r\n\tpresentationType=UIA.presType_content\r\n\r\n\t# Normally the name is the first tile which is rather redundant\r\n\t# However some groups have custom header text which should be read instead\r\n\tdef _get_name(self):\r\n\t\tchild=self.firstChild\r\n\t\tif isinstance(child,UIA):\r\n\t\t\ttry:\r\n\t\t\t\tautomationID=child.UIAElement.currentAutomationID\r\n\t\t\texcept COMError:\r\n\t\t\t\tautomationID=None\r\n\t\t\tif automationID==\"GridListGroupHeader\":\r\n\t\t\t\treturn child.name\r\n\r\n\r\nclass ImmersiveLauncher(UIA):\r\n\t# When the Windows 8 start screen opens, focus correctly goes to the first tile, but then incorrectly back to the root of the window.\r\n\t# Ignore focus events on this object.\r\n\tshouldAllowUIAFocusEvent=False\r\n\r\n\r\nclass StartButton(IAccessible):\r\n\t\"\"\"For Windows 8.1 and 10 Start buttons to be recognized as proper buttons and to suppress selection announcement.\"\"\"\r\n\r\n\trole = controlTypes.ROLE_BUTTON\r\n\r\n\tdef _get_states(self):\r\n\t\t# #5178: Selection announcement should be suppressed.\r\n\t\t# Borrowed from Mozilla objects in NVDAObjects/IAccessible/Mozilla.py.\r\n\t\tstates = super(StartButton, self).states\r\n\t\tstates.discard(controlTypes.STATE_SELECTED)\r\n\t\treturn states\r\n\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\tdef chooseNVDAObjectOverlayClasses(self, obj, clsList):\r\n\t\twindowClass = obj.windowClassName\r\n\t\trole = obj.role\r\n\r\n\t\tif windowClass in (\"Search Box\",\"UniversalSearchBand\") and role==controlTypes.ROLE_PANE and isinstance(obj,IAccessible):\r\n\t\t\tclsList.insert(0,SearchBoxClient)\r\n\t\t\treturn\r\n\r\n\t\tif windowClass == \"ToolbarWindow32\" and role == controlTypes.ROLE_POPUPMENU:\r\n\t\t\tparent = obj.parent\r\n\t\t\tif parent and parent.windowClassName == \"SysPager\" and obj.windowStyle & 0x80:\r\n\t\t\t\tclsList.insert(0, ClassicStartMenu)\r\n\t\t\treturn\r\n\r\n\t\tif windowClass == \"SysListView32\" and role == controlTypes.ROLE_MENUITEM:\r\n\t\t\tclsList.insert(0, SysListView32MenuItem)\r\n\t\t\treturn\r\n\r\n\t\tif windowClass == \"ToolbarWindow32\":\r\n\t\t\t# Check whether this is the notification area, a.k.a. 
system tray.\r\n\t\t\tif isinstance(obj.parent, ClassicStartMenu):\r\n\t\t\t\treturn # This can't be a notification area\r\n\t\t\ttry:\r\n\t\t\t\t# The toolbar's immediate parent is its window object, so we need to go one further.\r\n\t\t\t\ttoolbarParent = obj.parent.parent\r\n\t\t\t\tif role != controlTypes.ROLE_TOOLBAR:\r\n\t\t\t\t\t# Toolbar item.\r\n\t\t\t\t\ttoolbarParent = toolbarParent.parent\r\n\t\t\texcept AttributeError:\r\n\t\t\t\ttoolbarParent = None\r\n\t\t\tif toolbarParent and toolbarParent.windowClassName == \"SysPager\":\r\n\t\t\t\tclsList.insert(0, NotificationArea)\r\n\t\t\t\treturn\r\n\r\n\t\t# #5178: Start button in Windows 8.1 and 10 should not have been a list in the first place.\r\n\t\tif windowClass == \"Start\" and role in (controlTypes.ROLE_LIST, controlTypes.ROLE_BUTTON):\r\n\t\t\tif role == controlTypes.ROLE_LIST:\r\n\t\t\t\tclsList.remove(List)\r\n\t\t\tclsList.insert(0, StartButton)\r\n\r\n\t\tif isinstance(obj, UIA):\r\n\t\t\tuiaClassName = obj.UIAElement.cachedClassName\r\n\t\t\tif uiaClassName == \"GridTileElement\":\r\n\t\t\t\tclsList.insert(0, GridTileElement)\r\n\t\t\telif uiaClassName == \"GridListTileElement\":\r\n\t\t\t\tclsList.insert(0, GridListTileElement)\r\n\t\t\telif uiaClassName == \"GridGroup\":\r\n\t\t\t\tclsList.insert(0, GridGroup)\r\n\t\t\telif uiaClassName == \"ImmersiveLauncher\" and role == controlTypes.ROLE_PANE:\r\n\t\t\t\tclsList.insert(0, ImmersiveLauncher)\r\n\t\t\telif uiaClassName==\"ListViewItem\" and obj.UIAElement.cachedAutomationId.startswith('Suggestion_'):\r\n\t\t\t\tclsList.insert(0,SuggestionListItem)\r\n\t\t\telif uiaClassName==\"MultitaskingViewFrame\" and role==controlTypes.ROLE_WINDOW:\r\n\t\t\t\tclsList.insert(0,MultitaskingViewFrameWindow)\r\n\t\t\telif obj.windowClassName==\"MultitaskingViewFrame\" and role==controlTypes.ROLE_LISTITEM:\r\n\t\t\t\tclsList.insert(0,MultitaskingViewFrameListItem)\r\n\r\n\tdef event_NVDAObject_init(self, obj):\r\n\t\twindowClass = obj.windowClassName\r\n\t\trole = obj.role\r\n\r\n\t\tif windowClass == \"ToolbarWindow32\" and role == controlTypes.ROLE_POPUPMENU:\r\n\t\t\tparent = obj.parent\r\n\t\t\tif parent and parent.windowClassName == \"SysPager\" and not (obj.windowStyle & 0x80):\r\n\t\t\t\t# This is the menu for a group of icons on the task bar, which Windows stupidly names \"Application\".\r\n\t\t\t\tobj.name = None\r\n\t\t\treturn\r\n\r\n\t\tif windowClass == \"#32768\":\r\n\t\t\t# Standard menu.\r\n\t\t\tparent = obj.parent\r\n\t\t\tif parent and not parent.parent:\r\n\t\t\t\t# Context menu.\r\n\t\t\t\t# We don't trust the names that Explorer gives to context menus, so better to have no name at all.\r\n\t\t\t\tobj.name = None\r\n\t\t\treturn\r\n\r\n\t\tif windowClass == \"DV2ControlHost\" and role == controlTypes.ROLE_PANE:\r\n\t\t\t# Windows Vista/7 start menu.\r\n\t\t\tobj.presentationType=obj.presType_content\r\n\t\t\tobj.isPresentableFocusAncestor = True\r\n\t\t\t# In Windows 7, the description of this pane is extremely verbose help text, so nuke it.\r\n\t\t\tobj.description = None\r\n\t\t\treturn\r\n\r\n\t\t# The Address bar is embedded inside a progressbar, how strange.\r\n\t\t# Lets hide that\r\n\t\tif windowClass==\"msctls_progress32\" and winUser.getClassName(winUser.getAncestor(obj.windowHandle,winUser.GA_PARENT))==\"Address Band Root\":\r\n\t\t\tobj.presentationType=obj.presType_layout\r\n\r\n\tdef event_gainFocus(self, obj, nextHandler):\r\n\t\twClass = obj.windowClassName\r\n\t\tif wClass == \"ToolbarWindow32\" and obj.role == controlTypes.ROLE_MENUITEM and 
obj.parent.role == controlTypes.ROLE_MENUBAR and eventHandler.isPendingEvents(\"gainFocus\"):\r\n\t\t\t# When exiting a menu, Explorer fires focus on the top level menu item before it returns to the previous focus.\r\n\t\t\t# Unfortunately, this focus event always occurs in a subsequent cycle, so the event limiter doesn't eliminate it.\r\n\t\t\t# Therefore, if there is a pending focus event, don't bother handling this event.\r\n\t\t\treturn\r\n\r\n\t\tif wClass == \"ForegroundStaging\":\r\n\t\t\t# #5116: The Windows 10 Task View fires foreground/focus on this weird invisible window before and after it appears.\r\n\t\t\t# This causes NVDA to report \"unknown\", so ignore it.\r\n\t\t\t# We can't do this using shouldAllowIAccessibleFocusEvent because this isn't checked for foreground.\r\n\t\t\treturn\r\n\r\n\t\tif wClass == \"WorkerW\" and obj.role == controlTypes.ROLE_PANE and obj.name is None:\r\n\t\t\t# #6671: Never allow WorkerW thread to send gain focus event, as it causes 'pane\" to be announced when minimizing windows or moving to desktop.\r\n\t\t\treturn\r\n\r\n\t\tnextHandler()\r\n", "path": "source/appModules/explorer.py"}], "after_files": [{"content": "#appModules/explorer.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#Copyright (C) 2006-2018 NV Access Limited, Joseph Lee\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n\r\n\"\"\"App module for Windows Explorer (aka Windows shell).\r\nProvides workarounds for controls such as identifying Start button, notification area and others.\r\n\"\"\"\r\n\r\nfrom comtypes import COMError\r\nimport time\r\nimport appModuleHandler\r\nimport controlTypes\r\nimport winUser\r\nimport api\r\nimport speech\r\nimport eventHandler\r\nimport mouseHandler\r\nfrom NVDAObjects.window import Window\r\nfrom NVDAObjects.IAccessible import sysListView32, IAccessible, List\r\nfrom NVDAObjects.UIA import UIA\r\n\r\n# Suppress incorrect Win 10 Task switching window focus\r\nclass MultitaskingViewFrameWindow(UIA):\r\n\tshouldAllowUIAFocusEvent=False\r\n\r\n\r\n# Suppress focus ancestry for task switching list items if alt is held down (alt+tab)\r\nclass MultitaskingViewFrameListItem(UIA):\r\n\r\n\tdef _get_container(self):\r\n\t\tif winUser.getAsyncKeyState(winUser.VK_MENU)&32768:\r\n\t\t\treturn api.getDesktopObject()\r\n\t\telse:\r\n\t\t\treturn super(MultitaskingViewFrameListItem,self).container\r\n\r\n\r\n# Support for Win8 start screen search suggestions.\r\nclass SuggestionListItem(UIA):\r\n\r\n\tdef event_UIA_elementSelected(self):\r\n\t\tspeech.cancelSpeech()\r\n\t\tapi.setNavigatorObject(self, isFocus=True)\r\n\t\tself.reportFocus()\r\n\t\tsuper(SuggestionListItem,self).event_UIA_elementSelected()\r\n\r\n\r\n# Windows 8 hack: Class to disable incorrect focus on windows 8 search box (containing the already correctly focused edit field)\r\nclass SearchBoxClient(IAccessible):\r\n\tshouldAllowIAccessibleFocusEvent=False\r\n\r\n\r\n# Class for menu items for Windows Places and Frequently used Programs (in start menu)\r\nclass SysListView32MenuItem(sysListView32.ListItemWithoutColumnSupport):\r\n\r\n\t# #474: When focus moves to these items, an extra focus is fired on the parent\r\n\t# However NVDA redirects it to the real focus.\r\n\t# But this means double focus events on the item, so filter the second one out\r\n\tdef _get_shouldAllowIAccessibleFocusEvent(self):\r\n\t\tres=super(SysListView32MenuItem,self).shouldAllowIAccessibleFocusEvent\r\n\t\tif not res:\r\n\t\t\treturn 
False\r\n\t\tfocus=eventHandler.lastQueuedFocusObject\r\n\t\tif type(focus)!=type(self) or (self.event_windowHandle,self.event_objectID,self.event_childID)!=(focus.event_windowHandle,focus.event_objectID,focus.event_childID):\r\n\t\t\treturn True\r\n\t\treturn False\r\n\r\n\r\nclass ClassicStartMenu(Window):\r\n\t# Override the name, as Windows names this the \"Application\" menu contrary to all documentation.\r\n\t# Translators: The title of Start menu/screen in your language (only the word start).\r\n\tname = _(\"Start\")\r\n\r\n\tdef event_gainFocus(self):\r\n\t\t# In Windows XP, the Start button will get focus first, so silence this.\r\n\t\tspeech.cancelSpeech()\r\n\t\tsuper(ClassicStartMenu, self).event_gainFocus()\r\n\r\n\r\nclass NotificationArea(IAccessible):\r\n\t\"\"\"The Windows notification area, a.k.a. system tray.\r\n\t\"\"\"\r\n\r\n\tdef event_gainFocus(self):\r\n\t\tif mouseHandler.lastMouseEventTime < time.time() - 0.2:\r\n\t\t\t# This focus change was not caused by a mouse event.\r\n\t\t\t# If the mouse is on another toolbar control, the notification area toolbar will rudely\r\n\t\t\t# bounce the focus back to the object under the mouse after a brief pause.\r\n\t\t\t# Moving the mouse to the focus object isn't a good solution because\r\n\t\t\t# sometimes, the focus can't be moved away from the object under the mouse.\r\n\t\t\t# Therefore, move the mouse out of the way.\r\n\t\t\twinUser.setCursorPos(0, 0)\r\n\r\n\t\tif self.role == controlTypes.ROLE_TOOLBAR:\r\n\t\t\t# Sometimes, the toolbar itself receives the focus instead of the focused child.\r\n\t\t\t# However, the focused child still has the focused state.\r\n\t\t\tfor child in self.children:\r\n\t\t\t\tif child.hasFocus:\r\n\t\t\t\t\t# Redirect the focus to the focused child.\r\n\t\t\t\t\teventHandler.executeEvent(\"gainFocus\", child)\r\n\t\t\t\t\treturn\r\n\t\t\t# We've really landed on the toolbar itself.\r\n\t\t\t# This was probably caused by moving the mouse out of the way in a previous focus event.\r\n\t\t\t# This previous focus event is no longer useful, so cancel speech.\r\n\t\t\tspeech.cancelSpeech()\r\n\r\n\t\tif eventHandler.isPendingEvents(\"gainFocus\"):\r\n\t\t\treturn\r\n\t\tsuper(NotificationArea, self).event_gainFocus()\r\n\r\n\r\nclass GridTileElement(UIA):\r\n\r\n\trole=controlTypes.ROLE_TABLECELL\r\n\r\n\tdef _get_description(self):\r\n\t\tname=self.name\r\n\t\tdescriptionStrings=[]\r\n\t\tfor child in self.children:\r\n\t\t\tdescription=child.basicText\r\n\t\t\tif not description or description==name: continue\r\n\t\t\tdescriptionStrings.append(description)\r\n\t\treturn \" \".join(descriptionStrings)\r\n\t\treturn description\r\n\r\n\r\nclass GridListTileElement(UIA):\r\n\trole=controlTypes.ROLE_TABLECELL\r\n\tdescription=None\r\n\r\n\r\nclass GridGroup(UIA):\r\n\t\"\"\"A group in the Windows 8 Start Menu.\r\n\t\"\"\"\r\n\tpresentationType=UIA.presType_content\r\n\r\n\t# Normally the name is the first tile which is rather redundant\r\n\t# However some groups have custom header text which should be read instead\r\n\tdef _get_name(self):\r\n\t\tchild=self.firstChild\r\n\t\tif isinstance(child,UIA):\r\n\t\t\ttry:\r\n\t\t\t\tautomationID=child.UIAElement.currentAutomationID\r\n\t\t\texcept COMError:\r\n\t\t\t\tautomationID=None\r\n\t\t\tif automationID==\"GridListGroupHeader\":\r\n\t\t\t\treturn child.name\r\n\r\n\r\nclass ImmersiveLauncher(UIA):\r\n\t# When the Windows 8 start screen opens, focus correctly goes to the first tile, but then incorrectly back to the root of the window.\r\n\t# Ignore 
focus events on this object.\r\n\tshouldAllowUIAFocusEvent=False\r\n\r\n\r\nclass StartButton(IAccessible):\r\n\t\"\"\"For Windows 8.1 and 10 Start buttons to be recognized as proper buttons and to suppress selection announcement.\"\"\"\r\n\r\n\trole = controlTypes.ROLE_BUTTON\r\n\r\n\tdef _get_states(self):\r\n\t\t# #5178: Selection announcement should be suppressed.\r\n\t\t# Borrowed from Mozilla objects in NVDAObjects/IAccessible/Mozilla.py.\r\n\t\tstates = super(StartButton, self).states\r\n\t\tstates.discard(controlTypes.STATE_SELECTED)\r\n\t\treturn states\r\n\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\tdef chooseNVDAObjectOverlayClasses(self, obj, clsList):\r\n\t\twindowClass = obj.windowClassName\r\n\t\trole = obj.role\r\n\r\n\t\tif windowClass in (\"Search Box\",\"UniversalSearchBand\") and role==controlTypes.ROLE_PANE and isinstance(obj,IAccessible):\r\n\t\t\tclsList.insert(0,SearchBoxClient)\r\n\t\t\treturn\r\n\r\n\t\tif windowClass == \"ToolbarWindow32\" and role == controlTypes.ROLE_POPUPMENU:\r\n\t\t\tparent = obj.parent\r\n\t\t\tif parent and parent.windowClassName == \"SysPager\" and obj.windowStyle & 0x80:\r\n\t\t\t\tclsList.insert(0, ClassicStartMenu)\r\n\t\t\treturn\r\n\r\n\t\tif windowClass == \"SysListView32\" and role == controlTypes.ROLE_MENUITEM:\r\n\t\t\tclsList.insert(0, SysListView32MenuItem)\r\n\t\t\treturn\r\n\r\n\t\tif windowClass == \"ToolbarWindow32\":\r\n\t\t\t# Check whether this is the notification area, a.k.a. system tray.\r\n\t\t\tif isinstance(obj.parent, ClassicStartMenu):\r\n\t\t\t\treturn # This can't be a notification area\r\n\t\t\ttry:\r\n\t\t\t\t# The toolbar's immediate parent is its window object, so we need to go one further.\r\n\t\t\t\ttoolbarParent = obj.parent.parent\r\n\t\t\t\tif role != controlTypes.ROLE_TOOLBAR:\r\n\t\t\t\t\t# Toolbar item.\r\n\t\t\t\t\ttoolbarParent = toolbarParent.parent\r\n\t\t\texcept AttributeError:\r\n\t\t\t\ttoolbarParent = None\r\n\t\t\tif toolbarParent and toolbarParent.windowClassName == \"SysPager\":\r\n\t\t\t\tclsList.insert(0, NotificationArea)\r\n\t\t\t\treturn\r\n\r\n\t\t# #5178: Start button in Windows 8.1 and 10 should not have been a list in the first place.\r\n\t\tif windowClass == \"Start\" and role in (controlTypes.ROLE_LIST, controlTypes.ROLE_BUTTON):\r\n\t\t\tif role == controlTypes.ROLE_LIST:\r\n\t\t\t\tclsList.remove(List)\r\n\t\t\tclsList.insert(0, StartButton)\r\n\r\n\t\tif isinstance(obj, UIA):\r\n\t\t\tuiaClassName = obj.UIAElement.cachedClassName\r\n\t\t\tif uiaClassName == \"GridTileElement\":\r\n\t\t\t\tclsList.insert(0, GridTileElement)\r\n\t\t\telif uiaClassName == \"GridListTileElement\":\r\n\t\t\t\tclsList.insert(0, GridListTileElement)\r\n\t\t\telif uiaClassName == \"GridGroup\":\r\n\t\t\t\tclsList.insert(0, GridGroup)\r\n\t\t\telif uiaClassName == \"ImmersiveLauncher\" and role == controlTypes.ROLE_PANE:\r\n\t\t\t\tclsList.insert(0, ImmersiveLauncher)\r\n\t\t\telif uiaClassName==\"ListViewItem\" and obj.UIAElement.cachedAutomationId.startswith('Suggestion_'):\r\n\t\t\t\tclsList.insert(0,SuggestionListItem)\r\n\t\t\telif uiaClassName==\"MultitaskingViewFrame\" and role==controlTypes.ROLE_WINDOW:\r\n\t\t\t\tclsList.insert(0,MultitaskingViewFrameWindow)\r\n\t\t\telif obj.windowClassName==\"MultitaskingViewFrame\" and role==controlTypes.ROLE_LISTITEM:\r\n\t\t\t\tclsList.insert(0,MultitaskingViewFrameListItem)\r\n\r\n\tdef event_NVDAObject_init(self, obj):\r\n\t\twindowClass = obj.windowClassName\r\n\t\trole = obj.role\r\n\r\n\t\tif windowClass == \"ToolbarWindow32\" and 
role == controlTypes.ROLE_POPUPMENU:\r\n\t\t\tparent = obj.parent\r\n\t\t\tif parent and parent.windowClassName == \"SysPager\" and not (obj.windowStyle & 0x80):\r\n\t\t\t\t# This is the menu for a group of icons on the task bar, which Windows stupidly names \"Application\".\r\n\t\t\t\tobj.name = None\r\n\t\t\treturn\r\n\r\n\t\tif windowClass == \"#32768\":\r\n\t\t\t# Standard menu.\r\n\t\t\tparent = obj.parent\r\n\t\t\tif parent and not parent.parent:\r\n\t\t\t\t# Context menu.\r\n\t\t\t\t# We don't trust the names that Explorer gives to context menus, so better to have no name at all.\r\n\t\t\t\tobj.name = None\r\n\t\t\treturn\r\n\r\n\t\tif windowClass == \"DV2ControlHost\" and role == controlTypes.ROLE_PANE:\r\n\t\t\t# Windows Vista/7 start menu.\r\n\t\t\tobj.presentationType=obj.presType_content\r\n\t\t\tobj.isPresentableFocusAncestor = True\r\n\t\t\t# In Windows 7, the description of this pane is extremely verbose help text, so nuke it.\r\n\t\t\tobj.description = None\r\n\t\t\treturn\r\n\r\n\t\t# The Address bar is embedded inside a progressbar, how strange.\r\n\t\t# Lets hide that\r\n\t\tif windowClass==\"msctls_progress32\" and winUser.getClassName(winUser.getAncestor(obj.windowHandle,winUser.GA_PARENT))==\"Address Band Root\":\r\n\t\t\tobj.presentationType=obj.presType_layout\r\n\r\n\tdef event_gainFocus(self, obj, nextHandler):\r\n\t\twClass = obj.windowClassName\r\n\t\tif wClass == \"ToolbarWindow32\" and obj.role == controlTypes.ROLE_MENUITEM and obj.parent.role == controlTypes.ROLE_MENUBAR and eventHandler.isPendingEvents(\"gainFocus\"):\r\n\t\t\t# When exiting a menu, Explorer fires focus on the top level menu item before it returns to the previous focus.\r\n\t\t\t# Unfortunately, this focus event always occurs in a subsequent cycle, so the event limiter doesn't eliminate it.\r\n\t\t\t# Therefore, if there is a pending focus event, don't bother handling this event.\r\n\t\t\treturn\r\n\r\n\t\tif wClass in (\"ForegroundStaging\", \"LauncherTipWnd\", \"ApplicationManager_DesktopShellWindow\"):\r\n\t\t\t# #5116: The Windows 10 Task View fires foreground/focus on this weird invisible window and foreground staging screen before and after it appears.\r\n\t\t\t# This causes NVDA to report \"unknown\", so ignore it.\r\n\t\t\t# We can't do this using shouldAllowIAccessibleFocusEvent because this isn't checked for foreground.\r\n\t\t\t# #8137: also seen when opening quick link menu (Windows+X) on Windows 8 and later.\r\n\t\t\treturn\r\n\r\n\t\tif wClass == \"WorkerW\" and obj.role == controlTypes.ROLE_PANE and obj.name is None:\r\n\t\t\t# #6671: Never allow WorkerW thread to send gain focus event, as it causes 'pane\" to be announced when minimizing windows or moving to desktop.\r\n\t\t\treturn\r\n\r\n\t\tnextHandler()\r\n", "path": "source/appModules/explorer.py"}]}
num_tokens: 3,842 | num_tokens_diff: 373
problem_id: gh_patches_debug_12785 | source: rasdani/github-patches | task_type: git_diff | in_source_id: learningequality__kolibri-8076
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Catch ValueError during network discovery for when JSON parsing fails
<!--
Instructions:
* Fill out the sections below, replace …'s with information about your issue
* Use the 'preview' function above this text box to verify formatting before submitting
-->
### Observed behavior
<!--
Description of the behavior that was observed, including screenshots or other references when applicable
-->
When trying to import content channels (from the local network or the internet), no error message is produced even though the connection isn't established; the "trying to connect" message stays on forever. This happens when there is a typo in the address (I used kolibri-demo.learningequality.omg). When there's no internet, an error message DOES get produced. 
### Expected behavior
<!--
Description of what behavior was expected but did not occur
-->
An error message should've been produced.
### User-facing consequences
<!--
Implications and real-world consequences for learners, coaches, admins, and other users of the application
-->
The user will not know how long they have to wait to see if the connection is being established.
### Errors and logs
<!--
Relevant logs from:
* the command line
* ~/.kolibri/logs/kolibri.txt
* the browser console
Please wrap errors in triple backticks for clean formatting like this:
```
01:10 info: something happened
01:12 error: something bad happened
```
-->
…
### Steps to reproduce
<!--
Precise steps that someone else can follow in order to see this behavior
-->
Import content channels from the local network or internet > input "kolibri-demo.learningequality.omg" or something else with a typo in it.
### Context
<!--
Tell us about your environment, including:
* Kolibri version
* Operating system
* Browser
-->
Kolibri version: 0.14.3
Operating system: Android 5.1
Browser: Android App
Device: BLU Advance 5.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/core/discovery/utils/network/client.py`
Content:
```
1 import logging
2
3 import requests
4 from six.moves.urllib.parse import urljoin
5
6 from . import errors
7 from .urls import get_normalized_url_variations
8
9 logger = logging.getLogger(__name__)
10
11
12 class NetworkClient(object):
13 DEFAULT_TIMEOUT_IN_SECS = 5
14
15 def __init__(self, base_url=None, address=None, timeout=None, **kwargs):
16 """If an explicit base_url is already known, provide that. If a vague address is provided, we can try to infer the base_url"""
17 if not base_url and not address:
18 raise Exception(
19 "You must provide either a `base_url` or `address` argument"
20 )
21 self.timeout = timeout or self.DEFAULT_TIMEOUT_IN_SECS
22 self.session = requests.Session(**kwargs)
23 if base_url:
24 self.base_url = self._attempt_connections([base_url])
25 else:
26 # normalize the URL and try a number of variations until we find one that's able to connect
27 logger.info(
28 "Attempting connections to variations of the URL: {}".format(address)
29 )
30 self.base_url = self._attempt_connections(
31 get_normalized_url_variations(address)
32 )
33
34 def _attempt_connections(self, urls):
35 # try each of the URLs in turn, returning the first one that succeeds
36 for url in urls:
37 try:
38 logger.info("Attempting connection to: {}".format(url))
39 response = self.get(
40 "/api/public/info/",
41 base_url=url,
42 timeout=self.timeout,
43 allow_redirects=True,
44 )
45 # check that we successfully connected, and if we were redirected that it's still the right endpoint
46 if response.status_code == 200 and response.url.rstrip("/").endswith(
47 "/api/public/info"
48 ):
49 self.info = response.json()
50 if self.info["application"] not in ["studio", "kolibri"]:
51 raise requests.RequestException(
52 "Server is not running Kolibri or Studio"
53 )
54 logger.info("Success! We connected to: {}".format(response.url))
55 return response.url.rstrip("/").replace("api/public/info", "")
56 except (requests.RequestException) as e:
57 logger.info("Unable to connect: {}".format(e))
58
59 # we weren't able to connect to any of the URL variations, so all we can do is throw
60 raise errors.NetworkLocationNotFound()
61
62 def get(self, path, **kwargs):
63 return self.request("get", path, **kwargs)
64
65 def head(self, path, **kwargs):
66 return self.request("head", path, **kwargs)
67
68 def request(self, method, path, base_url=None, **kwargs):
69 base_url = base_url or self.base_url
70 url = urljoin(base_url, path)
71 response = getattr(self.session, method)(url, **kwargs)
72 response.raise_for_status()
73 return response
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/kolibri/core/discovery/utils/network/client.py b/kolibri/core/discovery/utils/network/client.py
--- a/kolibri/core/discovery/utils/network/client.py
+++ b/kolibri/core/discovery/utils/network/client.py
@@ -55,6 +55,10 @@
return response.url.rstrip("/").replace("api/public/info", "")
except (requests.RequestException) as e:
logger.info("Unable to connect: {}".format(e))
+ except ValueError:
+ logger.info(
+ "Invalid JSON returned when attempting to connect to a remote server"
+ )
# we weren't able to connect to any of the URL variations, so all we can do is throw
raise errors.NetworkLocationNotFound()
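The added `except ValueError` works because `requests`' `Response.json()` raises a `ValueError` (or a subclass of it) when the body cannot be parsed as JSON, for example when a mistyped host returns an HTML error page. A minimal, hypothetical sketch of that failure mode, not code from the Kolibri repository:

```python
# Hypothetical sketch: a non-JSON response makes response.json() raise
# ValueError, which the patched _attempt_connections() now logs and skips.
import requests

def try_info_endpoint(url, timeout=5):
    response = requests.get(url, timeout=timeout, allow_redirects=True)
    try:
        return response.json()
    except ValueError:  # body was not valid JSON (e.g. an HTML page)
        return None
```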
verification_info:
{"golden_diff": "diff --git a/kolibri/core/discovery/utils/network/client.py b/kolibri/core/discovery/utils/network/client.py\n--- a/kolibri/core/discovery/utils/network/client.py\n+++ b/kolibri/core/discovery/utils/network/client.py\n@@ -55,6 +55,10 @@\n return response.url.rstrip(\"/\").replace(\"api/public/info\", \"\")\n except (requests.RequestException) as e:\n logger.info(\"Unable to connect: {}\".format(e))\n+ except ValueError:\n+ logger.info(\n+ \"Invalid JSON returned when attempting to connect to a remote server\"\n+ )\n \n # we weren't able to connect to any of the URL variations, so all we can do is throw\n raise errors.NetworkLocationNotFound()\n", "issue": "Catch ValueError during network discovery for when JSON parsing fails\n<!--\r\nInstructions:\r\n * Fill out the sections below, replace \u2026's with information about your issue\r\n * Use the 'preview' function above this text box to verify formatting before submitting\r\n-->\r\n\r\n### Observed behavior\r\n<!--\r\nDescription of the behavior that was observed, including screenshots or other references when applicable\r\n-->\r\n\r\nWhen trying to import content channels (from local network or internet), no error message is produced even though the connection isn't established. (The \"trying to connect\" message stays on forever) This happens when there is a typo in the address. (I used kolibri-demo.learningequality.omg) When there's no internet, an error message DOES get produced. \r\n\r\n### Expected behavior\r\n<!--\r\nDescription of what behavior was expected but did not occur\r\n-->\r\n\r\nAn error message should've been produced.\r\n\r\n### User-facing consequences\r\n<!--\r\nImplications and real-world consequences for learners, coaches, admins, and other users of the application\r\n-->\r\n\r\nThe user will not know how long they have to wait to see if the connection is being established.\r\n\r\n### Errors and logs\r\n<!--\r\nRelevant logs from:\r\n * the command line\r\n * ~/.kolibri/logs/kolibri.txt\r\n * the browser console\r\n\r\nPlease wrap errors in triple backticks for clean formatting like this:\r\n```\r\n01:10 info: something happened\r\n01:12 error: something bad happened\r\n```\r\n-->\r\n\r\n\u2026\r\n\r\n### Steps to reproduce\r\n<!--\r\nPrecise steps that someone else can follow in order to see this behavior\r\n-->\r\n\r\nImport content channels from local network or internet >input \"kolibri-demo.learningequality.omg\" or something else with a typo in it.\r\n\r\n### Context\r\n<!--\r\nTell us about your environment, including:\r\n * Kolibri version\r\n * Operating system\r\n * Browser\r\n-->\r\n\r\nKolibri version: 0.14.3\r\nOperating system: Android 5.1\r\nBrowser: Android App\r\nDevice: BLU Advance 5.0\r\n\n", "before_files": [{"content": "import logging\n\nimport requests\nfrom six.moves.urllib.parse import urljoin\n\nfrom . import errors\nfrom .urls import get_normalized_url_variations\n\nlogger = logging.getLogger(__name__)\n\n\nclass NetworkClient(object):\n DEFAULT_TIMEOUT_IN_SECS = 5\n\n def __init__(self, base_url=None, address=None, timeout=None, **kwargs):\n \"\"\"If an explicit base_url is already known, provide that. 
If a vague address is provided, we can try to infer the base_url\"\"\"\n if not base_url and not address:\n raise Exception(\n \"You must provide either a `base_url` or `address` argument\"\n )\n self.timeout = timeout or self.DEFAULT_TIMEOUT_IN_SECS\n self.session = requests.Session(**kwargs)\n if base_url:\n self.base_url = self._attempt_connections([base_url])\n else:\n # normalize the URL and try a number of variations until we find one that's able to connect\n logger.info(\n \"Attempting connections to variations of the URL: {}\".format(address)\n )\n self.base_url = self._attempt_connections(\n get_normalized_url_variations(address)\n )\n\n def _attempt_connections(self, urls):\n # try each of the URLs in turn, returning the first one that succeeds\n for url in urls:\n try:\n logger.info(\"Attempting connection to: {}\".format(url))\n response = self.get(\n \"/api/public/info/\",\n base_url=url,\n timeout=self.timeout,\n allow_redirects=True,\n )\n # check that we successfully connected, and if we were redirected that it's still the right endpoint\n if response.status_code == 200 and response.url.rstrip(\"/\").endswith(\n \"/api/public/info\"\n ):\n self.info = response.json()\n if self.info[\"application\"] not in [\"studio\", \"kolibri\"]:\n raise requests.RequestException(\n \"Server is not running Kolibri or Studio\"\n )\n logger.info(\"Success! We connected to: {}\".format(response.url))\n return response.url.rstrip(\"/\").replace(\"api/public/info\", \"\")\n except (requests.RequestException) as e:\n logger.info(\"Unable to connect: {}\".format(e))\n\n # we weren't able to connect to any of the URL variations, so all we can do is throw\n raise errors.NetworkLocationNotFound()\n\n def get(self, path, **kwargs):\n return self.request(\"get\", path, **kwargs)\n\n def head(self, path, **kwargs):\n return self.request(\"head\", path, **kwargs)\n\n def request(self, method, path, base_url=None, **kwargs):\n base_url = base_url or self.base_url\n url = urljoin(base_url, path)\n response = getattr(self.session, method)(url, **kwargs)\n response.raise_for_status()\n return response\n", "path": "kolibri/core/discovery/utils/network/client.py"}], "after_files": [{"content": "import logging\n\nimport requests\nfrom six.moves.urllib.parse import urljoin\n\nfrom . import errors\nfrom .urls import get_normalized_url_variations\n\nlogger = logging.getLogger(__name__)\n\n\nclass NetworkClient(object):\n DEFAULT_TIMEOUT_IN_SECS = 5\n\n def __init__(self, base_url=None, address=None, timeout=None, **kwargs):\n \"\"\"If an explicit base_url is already known, provide that. 
If a vague address is provided, we can try to infer the base_url\"\"\"\n if not base_url and not address:\n raise Exception(\n \"You must provide either a `base_url` or `address` argument\"\n )\n self.timeout = timeout or self.DEFAULT_TIMEOUT_IN_SECS\n self.session = requests.Session(**kwargs)\n if base_url:\n self.base_url = self._attempt_connections([base_url])\n else:\n # normalize the URL and try a number of variations until we find one that's able to connect\n logger.info(\n \"Attempting connections to variations of the URL: {}\".format(address)\n )\n self.base_url = self._attempt_connections(\n get_normalized_url_variations(address)\n )\n\n def _attempt_connections(self, urls):\n # try each of the URLs in turn, returning the first one that succeeds\n for url in urls:\n try:\n logger.info(\"Attempting connection to: {}\".format(url))\n response = self.get(\n \"/api/public/info/\",\n base_url=url,\n timeout=self.timeout,\n allow_redirects=True,\n )\n # check that we successfully connected, and if we were redirected that it's still the right endpoint\n if response.status_code == 200 and response.url.rstrip(\"/\").endswith(\n \"/api/public/info\"\n ):\n self.info = response.json()\n if self.info[\"application\"] not in [\"studio\", \"kolibri\"]:\n raise requests.RequestException(\n \"Server is not running Kolibri or Studio\"\n )\n logger.info(\"Success! We connected to: {}\".format(response.url))\n return response.url.rstrip(\"/\").replace(\"api/public/info\", \"\")\n except (requests.RequestException) as e:\n logger.info(\"Unable to connect: {}\".format(e))\n except ValueError:\n logger.info(\n \"Invalid JSON returned when attempting to connect to a remote server\"\n )\n\n # we weren't able to connect to any of the URL variations, so all we can do is throw\n raise errors.NetworkLocationNotFound()\n\n def get(self, path, **kwargs):\n return self.request(\"get\", path, **kwargs)\n\n def head(self, path, **kwargs):\n return self.request(\"head\", path, **kwargs)\n\n def request(self, method, path, base_url=None, **kwargs):\n base_url = base_url or self.base_url\n url = urljoin(base_url, path)\n response = getattr(self.session, method)(url, **kwargs)\n response.raise_for_status()\n return response\n", "path": "kolibri/core/discovery/utils/network/client.py"}]}
num_tokens: 1,421 | num_tokens_diff: 160
problem_id: gh_patches_debug_12113 | source: rasdani/github-patches | task_type: git_diff | in_source_id: DataDog__dd-trace-py-1465
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
gRPC server interceptor raises AttributeError
When a gRPC service is not found, the Datadog interceptor raises an AttributeError:
```
ERROR grpc._server:_server.py:732 Exception servicing handler: 'NoneType' object has no attribute '_pin'
...
File "/src/ddtrace/contrib/grpc/server_interceptor.py", line 153, in intercept_service
return self._fn(continuation, handler_call_details)
File "/src/ddtrace/contrib/grpc/server_interceptor.py", line 22, in interceptor_function
return _TracedRpcMethodHandler(pin, handler_call_details, rpc_method_handler)
File "/src/ddtrace/contrib/grpc/server_interceptor.py", line 58, in __init__
self._pin = pin
AttributeError: 'NoneType' object has no attribute '_pin'
```
The gRPC documentation [points](https://grpc.github.io/grpc/python/grpc.html#grpc.ServerInterceptor.intercept_service) out that `continuation()` in a server-side interceptor can return a `None`, which is not handled by the Datadog interceptor.
This applies to `ddtrace>=0.28.0` when the server interceptor was added.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/grpc/server_interceptor.py`
Content:
```
1 import grpc
2 from ddtrace.vendor import wrapt
3
4 from ddtrace import config
5 from ddtrace.ext import errors
6 from ddtrace.compat import to_unicode
7
8 from ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY
9 from ...ext import SpanTypes
10 from ...propagation.http import HTTPPropagator
11 from . import constants
12 from .utils import parse_method_path
13
14
15 def create_server_interceptor(pin):
16 def interceptor_function(continuation, handler_call_details):
17 if not pin.enabled:
18 return continuation(handler_call_details)
19
20 rpc_method_handler = continuation(handler_call_details)
21 return _TracedRpcMethodHandler(pin, handler_call_details, rpc_method_handler)
22
23 return _ServerInterceptor(interceptor_function)
24
25
26 def _handle_server_exception(server_context, span):
27 if server_context is not None and \
28 hasattr(server_context, '_state') and \
29 server_context._state is not None:
30 code = to_unicode(server_context._state.code)
31 details = to_unicode(server_context._state.details)
32 span.error = 1
33 span.set_tag(errors.ERROR_MSG, details)
34 span.set_tag(errors.ERROR_TYPE, code)
35
36
37 def _wrap_response_iterator(response_iterator, server_context, span):
38 try:
39 for response in response_iterator:
40 yield response
41 except Exception:
42 span.set_traceback()
43 _handle_server_exception(server_context, span)
44 raise
45 finally:
46 span.finish()
47
48
49 class _TracedRpcMethodHandler(wrapt.ObjectProxy):
50 def __init__(self, pin, handler_call_details, wrapped):
51 super(_TracedRpcMethodHandler, self).__init__(wrapped)
52 self._pin = pin
53 self._handler_call_details = handler_call_details
54
55 def _fn(self, method_kind, behavior, args, kwargs):
56 if config.grpc_server.distributed_tracing_enabled:
57 headers = dict(self._handler_call_details.invocation_metadata)
58 propagator = HTTPPropagator()
59 context = propagator.extract(headers)
60
61 if context.trace_id:
62 self._pin.tracer.context_provider.activate(context)
63
64 tracer = self._pin.tracer
65
66 span = tracer.trace(
67 'grpc',
68 span_type=SpanTypes.GRPC,
69 service=self._pin.service,
70 resource=self._handler_call_details.method,
71 )
72 span.set_tag(SPAN_MEASURED_KEY)
73
74 method_path = self._handler_call_details.method
75 method_package, method_service, method_name = parse_method_path(method_path)
76 span.set_tag(constants.GRPC_METHOD_PATH_KEY, method_path)
77 span.set_tag(constants.GRPC_METHOD_PACKAGE_KEY, method_package)
78 span.set_tag(constants.GRPC_METHOD_SERVICE_KEY, method_service)
79 span.set_tag(constants.GRPC_METHOD_NAME_KEY, method_name)
80 span.set_tag(constants.GRPC_METHOD_KIND_KEY, method_kind)
81 span.set_tag(constants.GRPC_SPAN_KIND_KEY, constants.GRPC_SPAN_KIND_VALUE_SERVER)
82
83 sample_rate = config.grpc_server.get_analytics_sample_rate()
84 if sample_rate is not None:
85 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)
86
87 # access server context by taking second argument as server context
88 # if not found, skip using context to tag span with server state information
89 server_context = args[1] if isinstance(args[1], grpc.ServicerContext) else None
90
91 if self._pin.tags:
92 span.set_tags(self._pin.tags)
93
94 try:
95 response_or_iterator = behavior(*args, **kwargs)
96
97 if self.__wrapped__.response_streaming:
98 response_or_iterator = _wrap_response_iterator(response_or_iterator, server_context, span)
99 except Exception:
100 span.set_traceback()
101 _handle_server_exception(server_context, span)
102 raise
103 finally:
104 if not self.__wrapped__.response_streaming:
105 span.finish()
106
107 return response_or_iterator
108
109 def unary_unary(self, *args, **kwargs):
110 return self._fn(
111 constants.GRPC_METHOD_KIND_UNARY,
112 self.__wrapped__.unary_unary,
113 args,
114 kwargs
115 )
116
117 def unary_stream(self, *args, **kwargs):
118 return self._fn(
119 constants.GRPC_METHOD_KIND_SERVER_STREAMING,
120 self.__wrapped__.unary_stream,
121 args,
122 kwargs
123 )
124
125 def stream_unary(self, *args, **kwargs):
126 return self._fn(
127 constants.GRPC_METHOD_KIND_CLIENT_STREAMING,
128 self.__wrapped__.stream_unary,
129 args,
130 kwargs
131 )
132
133 def stream_stream(self, *args, **kwargs):
134 return self._fn(
135 constants.GRPC_METHOD_KIND_BIDI_STREAMING,
136 self.__wrapped__.stream_stream,
137 args,
138 kwargs
139 )
140
141
142 class _ServerInterceptor(grpc.ServerInterceptor):
143 def __init__(self, interceptor_function):
144 self._fn = interceptor_function
145
146 def intercept_service(self, continuation, handler_call_details):
147 return self._fn(continuation, handler_call_details)
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/ddtrace/contrib/grpc/server_interceptor.py b/ddtrace/contrib/grpc/server_interceptor.py
--- a/ddtrace/contrib/grpc/server_interceptor.py
+++ b/ddtrace/contrib/grpc/server_interceptor.py
@@ -18,7 +18,15 @@
return continuation(handler_call_details)
rpc_method_handler = continuation(handler_call_details)
- return _TracedRpcMethodHandler(pin, handler_call_details, rpc_method_handler)
+
+ # continuation returns an RpcMethodHandler instance if the RPC is
+ # considered serviced, or None otherwise
+ # https://grpc.github.io/grpc/python/grpc.html#grpc.ServerInterceptor.intercept_service
+
+ if rpc_method_handler:
+ return _TracedRpcMethodHandler(pin, handler_call_details, rpc_method_handler)
+
+ return rpc_method_handler
return _ServerInterceptor(interceptor_function)
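As the issue notes, `grpc.ServerInterceptor.intercept_service()` documents that `continuation()` may return `None` when the RPC is not serviced (for example, an unknown method), so an interceptor that wraps the returned handler must pass `None` through untouched, which is what the added guard does. A minimal, hypothetical interceptor showing the same contract (not Datadog code):

```python
# Hypothetical illustration of the None-handling contract for server
# interceptors; gRPC itself answers UNIMPLEMENTED when no handler is found.
import grpc

class PassthroughInterceptor(grpc.ServerInterceptor):
    def intercept_service(self, continuation, handler_call_details):
        handler = continuation(handler_call_details)
        if handler is None:
            # Unserviced RPC: do not wrap, just propagate None.
            return None
        return handler  # a real interceptor would wrap this handler
```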
verification_info:
{"golden_diff": "diff --git a/ddtrace/contrib/grpc/server_interceptor.py b/ddtrace/contrib/grpc/server_interceptor.py\n--- a/ddtrace/contrib/grpc/server_interceptor.py\n+++ b/ddtrace/contrib/grpc/server_interceptor.py\n@@ -18,7 +18,15 @@\n return continuation(handler_call_details)\n \n rpc_method_handler = continuation(handler_call_details)\n- return _TracedRpcMethodHandler(pin, handler_call_details, rpc_method_handler)\n+\n+ # continuation returns an RpcMethodHandler instance if the RPC is\n+ # considered serviced, or None otherwise\n+ # https://grpc.github.io/grpc/python/grpc.html#grpc.ServerInterceptor.intercept_service\n+\n+ if rpc_method_handler:\n+ return _TracedRpcMethodHandler(pin, handler_call_details, rpc_method_handler)\n+\n+ return rpc_method_handler\n \n return _ServerInterceptor(interceptor_function)\n", "issue": "gRPC server interceptor raises AttributeError\nWhen a gRPC service is not found, the Datadog interceptor raises an AttributeError:\r\n\r\n```\r\nERROR grpc._server:_server.py:732 Exception servicing handler: 'NoneType' object has no attribute '_pin'\r\n...\r\n File \"/src/ddtrace/contrib/grpc/server_interceptor.py\", line 153, in intercept_service\r\n return self._fn(continuation, handler_call_details)\r\n File \"/src/ddtrace/contrib/grpc/server_interceptor.py\", line 22, in interceptor_function\r\n return _TracedRpcMethodHandler(pin, handler_call_details, rpc_method_handler)\r\n File \"/src/ddtrace/contrib/grpc/server_interceptor.py\", line 58, in __init__\r\n self._pin = pin\r\nAttributeError: 'NoneType' object has no attribute '_pin'\r\n```\r\n\r\nThe gRPC documentation [points](https://grpc.github.io/grpc/python/grpc.html#grpc.ServerInterceptor.intercept_service) out that `continuation()` in a server-side interceptor can return a `None`, which is not handled by the Datadog interceptor.\r\n\r\nThis applies to `ddtrace>=0.28.0` when the server interceptor was added.\n", "before_files": [{"content": "import grpc\nfrom ddtrace.vendor import wrapt\n\nfrom ddtrace import config\nfrom ddtrace.ext import errors\nfrom ddtrace.compat import to_unicode\n\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY\nfrom ...ext import SpanTypes\nfrom ...propagation.http import HTTPPropagator\nfrom . 
import constants\nfrom .utils import parse_method_path\n\n\ndef create_server_interceptor(pin):\n def interceptor_function(continuation, handler_call_details):\n if not pin.enabled:\n return continuation(handler_call_details)\n\n rpc_method_handler = continuation(handler_call_details)\n return _TracedRpcMethodHandler(pin, handler_call_details, rpc_method_handler)\n\n return _ServerInterceptor(interceptor_function)\n\n\ndef _handle_server_exception(server_context, span):\n if server_context is not None and \\\n hasattr(server_context, '_state') and \\\n server_context._state is not None:\n code = to_unicode(server_context._state.code)\n details = to_unicode(server_context._state.details)\n span.error = 1\n span.set_tag(errors.ERROR_MSG, details)\n span.set_tag(errors.ERROR_TYPE, code)\n\n\ndef _wrap_response_iterator(response_iterator, server_context, span):\n try:\n for response in response_iterator:\n yield response\n except Exception:\n span.set_traceback()\n _handle_server_exception(server_context, span)\n raise\n finally:\n span.finish()\n\n\nclass _TracedRpcMethodHandler(wrapt.ObjectProxy):\n def __init__(self, pin, handler_call_details, wrapped):\n super(_TracedRpcMethodHandler, self).__init__(wrapped)\n self._pin = pin\n self._handler_call_details = handler_call_details\n\n def _fn(self, method_kind, behavior, args, kwargs):\n if config.grpc_server.distributed_tracing_enabled:\n headers = dict(self._handler_call_details.invocation_metadata)\n propagator = HTTPPropagator()\n context = propagator.extract(headers)\n\n if context.trace_id:\n self._pin.tracer.context_provider.activate(context)\n\n tracer = self._pin.tracer\n\n span = tracer.trace(\n 'grpc',\n span_type=SpanTypes.GRPC,\n service=self._pin.service,\n resource=self._handler_call_details.method,\n )\n span.set_tag(SPAN_MEASURED_KEY)\n\n method_path = self._handler_call_details.method\n method_package, method_service, method_name = parse_method_path(method_path)\n span.set_tag(constants.GRPC_METHOD_PATH_KEY, method_path)\n span.set_tag(constants.GRPC_METHOD_PACKAGE_KEY, method_package)\n span.set_tag(constants.GRPC_METHOD_SERVICE_KEY, method_service)\n span.set_tag(constants.GRPC_METHOD_NAME_KEY, method_name)\n span.set_tag(constants.GRPC_METHOD_KIND_KEY, method_kind)\n span.set_tag(constants.GRPC_SPAN_KIND_KEY, constants.GRPC_SPAN_KIND_VALUE_SERVER)\n\n sample_rate = config.grpc_server.get_analytics_sample_rate()\n if sample_rate is not None:\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)\n\n # access server context by taking second argument as server context\n # if not found, skip using context to tag span with server state information\n server_context = args[1] if isinstance(args[1], grpc.ServicerContext) else None\n\n if self._pin.tags:\n span.set_tags(self._pin.tags)\n\n try:\n response_or_iterator = behavior(*args, **kwargs)\n\n if self.__wrapped__.response_streaming:\n response_or_iterator = _wrap_response_iterator(response_or_iterator, server_context, span)\n except Exception:\n span.set_traceback()\n _handle_server_exception(server_context, span)\n raise\n finally:\n if not self.__wrapped__.response_streaming:\n span.finish()\n\n return response_or_iterator\n\n def unary_unary(self, *args, **kwargs):\n return self._fn(\n constants.GRPC_METHOD_KIND_UNARY,\n self.__wrapped__.unary_unary,\n args,\n kwargs\n )\n\n def unary_stream(self, *args, **kwargs):\n return self._fn(\n constants.GRPC_METHOD_KIND_SERVER_STREAMING,\n self.__wrapped__.unary_stream,\n args,\n kwargs\n )\n\n def stream_unary(self, *args, 
**kwargs):\n return self._fn(\n constants.GRPC_METHOD_KIND_CLIENT_STREAMING,\n self.__wrapped__.stream_unary,\n args,\n kwargs\n )\n\n def stream_stream(self, *args, **kwargs):\n return self._fn(\n constants.GRPC_METHOD_KIND_BIDI_STREAMING,\n self.__wrapped__.stream_stream,\n args,\n kwargs\n )\n\n\nclass _ServerInterceptor(grpc.ServerInterceptor):\n def __init__(self, interceptor_function):\n self._fn = interceptor_function\n\n def intercept_service(self, continuation, handler_call_details):\n return self._fn(continuation, handler_call_details)\n", "path": "ddtrace/contrib/grpc/server_interceptor.py"}], "after_files": [{"content": "import grpc\nfrom ddtrace.vendor import wrapt\n\nfrom ddtrace import config\nfrom ddtrace.ext import errors\nfrom ddtrace.compat import to_unicode\n\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY\nfrom ...ext import SpanTypes\nfrom ...propagation.http import HTTPPropagator\nfrom . import constants\nfrom .utils import parse_method_path\n\n\ndef create_server_interceptor(pin):\n def interceptor_function(continuation, handler_call_details):\n if not pin.enabled:\n return continuation(handler_call_details)\n\n rpc_method_handler = continuation(handler_call_details)\n\n # continuation returns an RpcMethodHandler instance if the RPC is\n # considered serviced, or None otherwise\n # https://grpc.github.io/grpc/python/grpc.html#grpc.ServerInterceptor.intercept_service\n\n if rpc_method_handler:\n return _TracedRpcMethodHandler(pin, handler_call_details, rpc_method_handler)\n\n return rpc_method_handler\n\n return _ServerInterceptor(interceptor_function)\n\n\ndef _handle_server_exception(server_context, span):\n if server_context is not None and \\\n hasattr(server_context, '_state') and \\\n server_context._state is not None:\n code = to_unicode(server_context._state.code)\n details = to_unicode(server_context._state.details)\n span.error = 1\n span.set_tag(errors.ERROR_MSG, details)\n span.set_tag(errors.ERROR_TYPE, code)\n\n\ndef _wrap_response_iterator(response_iterator, server_context, span):\n try:\n for response in response_iterator:\n yield response\n except Exception:\n span.set_traceback()\n _handle_server_exception(server_context, span)\n raise\n finally:\n span.finish()\n\n\nclass _TracedRpcMethodHandler(wrapt.ObjectProxy):\n def __init__(self, pin, handler_call_details, wrapped):\n super(_TracedRpcMethodHandler, self).__init__(wrapped)\n self._pin = pin\n self._handler_call_details = handler_call_details\n\n def _fn(self, method_kind, behavior, args, kwargs):\n if config.grpc_server.distributed_tracing_enabled:\n headers = dict(self._handler_call_details.invocation_metadata)\n propagator = HTTPPropagator()\n context = propagator.extract(headers)\n\n if context.trace_id:\n self._pin.tracer.context_provider.activate(context)\n\n tracer = self._pin.tracer\n\n span = tracer.trace(\n 'grpc',\n span_type=SpanTypes.GRPC,\n service=self._pin.service,\n resource=self._handler_call_details.method,\n )\n span.set_tag(SPAN_MEASURED_KEY)\n\n method_path = self._handler_call_details.method\n method_package, method_service, method_name = parse_method_path(method_path)\n span.set_tag(constants.GRPC_METHOD_PATH_KEY, method_path)\n span.set_tag(constants.GRPC_METHOD_PACKAGE_KEY, method_package)\n span.set_tag(constants.GRPC_METHOD_SERVICE_KEY, method_service)\n span.set_tag(constants.GRPC_METHOD_NAME_KEY, method_name)\n span.set_tag(constants.GRPC_METHOD_KIND_KEY, method_kind)\n span.set_tag(constants.GRPC_SPAN_KIND_KEY, 
constants.GRPC_SPAN_KIND_VALUE_SERVER)\n\n sample_rate = config.grpc_server.get_analytics_sample_rate()\n if sample_rate is not None:\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)\n\n # access server context by taking second argument as server context\n # if not found, skip using context to tag span with server state information\n server_context = args[1] if isinstance(args[1], grpc.ServicerContext) else None\n\n if self._pin.tags:\n span.set_tags(self._pin.tags)\n\n try:\n response_or_iterator = behavior(*args, **kwargs)\n\n if self.__wrapped__.response_streaming:\n response_or_iterator = _wrap_response_iterator(response_or_iterator, server_context, span)\n except Exception:\n span.set_traceback()\n _handle_server_exception(server_context, span)\n raise\n finally:\n if not self.__wrapped__.response_streaming:\n span.finish()\n\n return response_or_iterator\n\n def unary_unary(self, *args, **kwargs):\n return self._fn(\n constants.GRPC_METHOD_KIND_UNARY,\n self.__wrapped__.unary_unary,\n args,\n kwargs\n )\n\n def unary_stream(self, *args, **kwargs):\n return self._fn(\n constants.GRPC_METHOD_KIND_SERVER_STREAMING,\n self.__wrapped__.unary_stream,\n args,\n kwargs\n )\n\n def stream_unary(self, *args, **kwargs):\n return self._fn(\n constants.GRPC_METHOD_KIND_CLIENT_STREAMING,\n self.__wrapped__.stream_unary,\n args,\n kwargs\n )\n\n def stream_stream(self, *args, **kwargs):\n return self._fn(\n constants.GRPC_METHOD_KIND_BIDI_STREAMING,\n self.__wrapped__.stream_stream,\n args,\n kwargs\n )\n\n\nclass _ServerInterceptor(grpc.ServerInterceptor):\n def __init__(self, interceptor_function):\n self._fn = interceptor_function\n\n def intercept_service(self, continuation, handler_call_details):\n return self._fn(continuation, handler_call_details)\n", "path": "ddtrace/contrib/grpc/server_interceptor.py"}]}
| 1,914 | 188 |
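The patch recorded in the row above adds a None guard around the wrapped handler: as the comment in its after_files notes, `continuation()` returns an `RpcMethodHandler` only when the RPC is considered serviced and `None` otherwise, so only a real handler should be wrapped. A minimal, dependency-free sketch of that guard; the lambdas stand in for gRPC's continuation and the tracing wrapper and are purely illustrative, not the ddtrace or grpc API:

```python
# Sketch of the guard: wrap only when continuation() actually returns a handler.
def intercept(continuation, handler_call_details, wrap):
    rpc_method_handler = continuation(handler_call_details)
    if rpc_method_handler:          # None means the RPC was not serviced
        return wrap(rpc_method_handler)
    return rpc_method_handler

print(intercept(lambda details: None, "details", lambda h: ("traced", h)))       # None
print(intercept(lambda details: "handler", "details", lambda h: ("traced", h)))  # ('traced', 'handler')
```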
gh_patches_debug_18834
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-3308
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider kopps is broken
During the global build at 2021-06-16-14-42-20, spider **kopps** failed with **0 features** and **1 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/kopps.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/kopps.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/kopps.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/kopps.py`
Content:
```
1 import scrapy
2 import re
3 from locations.items import GeojsonPointItem
4
5 DAY_MAPPING = {
6 "Mon": "Mo",
7 "Tues": "Tu",
8 "Wed": "We",
9 "Thur": "Th",
10 "Fri": "Fr",
11 "Sat": "Sa",
12 "Sun": "Su"
13 }
14 class KoppsSpider(scrapy.Spider):
15 name = "kopps"
16 item_attributes = { 'brand': "Kopps" }
17 allowed_domains = ["www.kopps.com"]
18 download_delay = 1.5
19 start_urls = (
20 'https://www.kopps.com/',
21 )
22
23 def parse_day(self, day):
24 if re.search('-', day):
25 days = day.split('-')
26 osm_days = []
27 if len(days) == 2:
28 for day in days:
29 osm_day = DAY_MAPPING[day.strip()]
30 osm_days.append(osm_day)
31 return "-".join(osm_days)
32
33 def parse_times(self, times):
34 if times.strip() == 'Open 24 hours':
35 return '24/7'
36 hours_to = [x.strip() for x in times.split('-')]
37 cleaned_times = []
38
39 for hour in hours_to:
40 if re.search('pm$', hour):
41 hour = re.sub('pm', '', hour).strip()
42 hour_min = hour.split(":")
43 if int(hour_min[0]) < 12:
44 hour_min[0] = str(12 + int(hour_min[0]))
45 cleaned_times.append(":".join(hour_min))
46
47 if re.search('am$', hour):
48 hour = re.sub('am', '', hour).strip()
49 hour_min = hour.split(":")
50 if len(hour_min[0]) <2:
51 hour_min[0] = hour_min[0].zfill(2)
52 else:
53 hour_min[0] = str( int(hour_min[0]))
54
55 cleaned_times.append(":".join(hour_min))
56 return "-".join(cleaned_times)
57
58 def parse_hours(self, lis):
59 hours = []
60 for li in lis:
61 day_times = li.xpath('normalize-space(./text())').extract_first()
62 day = re.findall(r"^[a-zA-Z-]+" , day_times)
63 if(len(day)>0):
64 day = day[0]
65 else:
66 day = 'Mon-Sun'
67 times = re.findall(r"[0-9]{2}:[0-9]{2}[a|p]m - [0-9]{2}:[0-9]{2}[a|p]m" ,day_times)
68 times = times[0]
69 if times and day:
70 parsed_time = self.parse_times(times)
71 parsed_day = self.parse_day(day)
72 hours.append(parsed_day + ' ' + parsed_time)
73
74 return "; ".join(hours)
75
76
77 def parse(self, response):
78 locations = response.xpath('//div[@id="locations"]/div/div')
79
80 for location in locations:
81 properties = {
82 'addr_full': location.xpath('normalize-space(./div/address/a/text())').extract_first(),
83 'phone': location.xpath('normalize-space(./div/ul/li/span/a/text())').extract_first(),
84 'city': location.xpath('./div/address/a/text()').extract()[1].replace(' ' ,'').split(',')[0].replace('\r\n' ,''),
85 'state': location.xpath('./div/address/a/text()').extract()[1].lstrip().split(',')[1].split(' ')[1],
86 'postcode': location.xpath('./div/address/a/text()').extract()[1].lstrip().split(',')[1].split(' ')[2].replace('\r\n' ,''),
87 'ref': location.xpath('normalize-space(./div/address/a/@href)').extract_first(),
88 'website': response.url,
89 'lat':re.findall(r"\/[0-9]{2}[^(\/)]+z",location.xpath('normalize-space(./div/address/a/@href)').extract_first())[0][1:].split(',')[0],
90 'lon': re.findall(r"\/[0-9]{2}[^(\/)]+z",location.xpath('normalize-space(./div/address/a/@href)').extract_first())[0][1:].split(',')[1],
91 }
92
93 hours = self.parse_hours(location.xpath('./div/ul/li[3]/span'))
94 if hours:
95 properties['opening_hours'] = hours
96
97 yield GeojsonPointItem(**properties)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/kopps.py b/locations/spiders/kopps.py
--- a/locations/spiders/kopps.py
+++ b/locations/spiders/kopps.py
@@ -26,8 +26,9 @@
osm_days = []
if len(days) == 2:
for day in days:
- osm_day = DAY_MAPPING[day.strip()]
- osm_days.append(osm_day)
+ if day.strip() in DAY_MAPPING:
+ osm_day = DAY_MAPPING[day.strip()]
+ osm_days.append(osm_day)
return "-".join(osm_days)
def parse_times(self, times):
@@ -69,7 +70,8 @@
if times and day:
parsed_time = self.parse_times(times)
parsed_day = self.parse_day(day)
- hours.append(parsed_day + ' ' + parsed_time)
+ if parsed_day and parsed_time:
+ hours.append(parsed_day + ' ' + parsed_time)
return "; ".join(hours)
|
{"golden_diff": "diff --git a/locations/spiders/kopps.py b/locations/spiders/kopps.py\n--- a/locations/spiders/kopps.py\n+++ b/locations/spiders/kopps.py\n@@ -26,8 +26,9 @@\n osm_days = []\n if len(days) == 2:\n for day in days:\n- osm_day = DAY_MAPPING[day.strip()]\n- osm_days.append(osm_day)\n+ if day.strip() in DAY_MAPPING:\n+ osm_day = DAY_MAPPING[day.strip()]\n+ osm_days.append(osm_day)\n return \"-\".join(osm_days)\n \n def parse_times(self, times):\n@@ -69,7 +70,8 @@\n if times and day:\n parsed_time = self.parse_times(times)\n parsed_day = self.parse_day(day)\n- hours.append(parsed_day + ' ' + parsed_time)\n+ if parsed_day and parsed_time:\n+ hours.append(parsed_day + ' ' + parsed_time)\n \n return \"; \".join(hours)\n", "issue": "Spider kopps is broken\nDuring the global build at 2021-06-16-14-42-20, spider **kopps** failed with **0 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/kopps.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/kopps.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/kopps.geojson))\n", "before_files": [{"content": "import scrapy\nimport re\nfrom locations.items import GeojsonPointItem\n\nDAY_MAPPING = {\n \"Mon\": \"Mo\",\n \"Tues\": \"Tu\",\n \"Wed\": \"We\",\n \"Thur\": \"Th\",\n \"Fri\": \"Fr\",\n \"Sat\": \"Sa\",\n \"Sun\": \"Su\"\n}\nclass KoppsSpider(scrapy.Spider):\n name = \"kopps\"\n item_attributes = { 'brand': \"Kopps\" }\n allowed_domains = [\"www.kopps.com\"]\n download_delay = 1.5\n start_urls = (\n 'https://www.kopps.com/',\n )\n\n def parse_day(self, day):\n if re.search('-', day):\n days = day.split('-')\n osm_days = []\n if len(days) == 2:\n for day in days:\n osm_day = DAY_MAPPING[day.strip()]\n osm_days.append(osm_day)\n return \"-\".join(osm_days)\n\n def parse_times(self, times):\n if times.strip() == 'Open 24 hours':\n return '24/7'\n hours_to = [x.strip() for x in times.split('-')]\n cleaned_times = []\n\n for hour in hours_to:\n if re.search('pm$', hour):\n hour = re.sub('pm', '', hour).strip()\n hour_min = hour.split(\":\")\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n cleaned_times.append(\":\".join(hour_min))\n\n if re.search('am$', hour):\n hour = re.sub('am', '', hour).strip()\n hour_min = hour.split(\":\")\n if len(hour_min[0]) <2:\n hour_min[0] = hour_min[0].zfill(2)\n else:\n hour_min[0] = str( int(hour_min[0]))\n\n cleaned_times.append(\":\".join(hour_min))\n return \"-\".join(cleaned_times)\n\n def parse_hours(self, lis):\n hours = []\n for li in lis:\n day_times = li.xpath('normalize-space(./text())').extract_first()\n day = re.findall(r\"^[a-zA-Z-]+\" , day_times)\n if(len(day)>0):\n day = day[0]\n else:\n day = 'Mon-Sun'\n times = re.findall(r\"[0-9]{2}:[0-9]{2}[a|p]m - [0-9]{2}:[0-9]{2}[a|p]m\" ,day_times)\n times = times[0]\n if times and day:\n parsed_time = self.parse_times(times)\n parsed_day = self.parse_day(day)\n hours.append(parsed_day + ' ' + parsed_time)\n\n return \"; \".join(hours)\n\n\n def parse(self, response):\n locations = response.xpath('//div[@id=\"locations\"]/div/div')\n\n for location in locations:\n properties = {\n 'addr_full': location.xpath('normalize-space(./div/address/a/text())').extract_first(),\n 'phone': location.xpath('normalize-space(./div/ul/li/span/a/text())').extract_first(),\n 'city': location.xpath('./div/address/a/text()').extract()[1].replace(' ' ,'').split(',')[0].replace('\\r\\n' 
,''),\n 'state': location.xpath('./div/address/a/text()').extract()[1].lstrip().split(',')[1].split(' ')[1],\n 'postcode': location.xpath('./div/address/a/text()').extract()[1].lstrip().split(',')[1].split(' ')[2].replace('\\r\\n' ,''),\n 'ref': location.xpath('normalize-space(./div/address/a/@href)').extract_first(),\n 'website': response.url,\n 'lat':re.findall(r\"\\/[0-9]{2}[^(\\/)]+z\",location.xpath('normalize-space(./div/address/a/@href)').extract_first())[0][1:].split(',')[0],\n 'lon': re.findall(r\"\\/[0-9]{2}[^(\\/)]+z\",location.xpath('normalize-space(./div/address/a/@href)').extract_first())[0][1:].split(',')[1],\n }\n\n hours = self.parse_hours(location.xpath('./div/ul/li[3]/span'))\n if hours:\n properties['opening_hours'] = hours\n\n yield GeojsonPointItem(**properties)", "path": "locations/spiders/kopps.py"}], "after_files": [{"content": "import scrapy\nimport re\nfrom locations.items import GeojsonPointItem\n\nDAY_MAPPING = {\n \"Mon\": \"Mo\",\n \"Tues\": \"Tu\",\n \"Wed\": \"We\",\n \"Thur\": \"Th\",\n \"Fri\": \"Fr\",\n \"Sat\": \"Sa\",\n \"Sun\": \"Su\"\n}\nclass KoppsSpider(scrapy.Spider):\n name = \"kopps\"\n item_attributes = { 'brand': \"Kopps\" }\n allowed_domains = [\"www.kopps.com\"]\n download_delay = 1.5\n start_urls = (\n 'https://www.kopps.com/',\n )\n\n def parse_day(self, day):\n if re.search('-', day):\n days = day.split('-')\n osm_days = []\n if len(days) == 2:\n for day in days:\n if day.strip() in DAY_MAPPING:\n osm_day = DAY_MAPPING[day.strip()]\n osm_days.append(osm_day)\n return \"-\".join(osm_days)\n\n def parse_times(self, times):\n if times.strip() == 'Open 24 hours':\n return '24/7'\n hours_to = [x.strip() for x in times.split('-')]\n cleaned_times = []\n\n for hour in hours_to:\n if re.search('pm$', hour):\n hour = re.sub('pm', '', hour).strip()\n hour_min = hour.split(\":\")\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n cleaned_times.append(\":\".join(hour_min))\n\n if re.search('am$', hour):\n hour = re.sub('am', '', hour).strip()\n hour_min = hour.split(\":\")\n if len(hour_min[0]) <2:\n hour_min[0] = hour_min[0].zfill(2)\n else:\n hour_min[0] = str( int(hour_min[0]))\n\n cleaned_times.append(\":\".join(hour_min))\n return \"-\".join(cleaned_times)\n\n def parse_hours(self, lis):\n hours = []\n for li in lis:\n day_times = li.xpath('normalize-space(./text())').extract_first()\n day = re.findall(r\"^[a-zA-Z-]+\" , day_times)\n if(len(day)>0):\n day = day[0]\n else:\n day = 'Mon-Sun'\n times = re.findall(r\"[0-9]{2}:[0-9]{2}[a|p]m - [0-9]{2}:[0-9]{2}[a|p]m\" ,day_times)\n times = times[0]\n if times and day:\n parsed_time = self.parse_times(times)\n parsed_day = self.parse_day(day)\n if parsed_day and parsed_time:\n hours.append(parsed_day + ' ' + parsed_time)\n\n return \"; \".join(hours)\n\n\n def parse(self, response):\n locations = response.xpath('//div[@id=\"locations\"]/div/div')\n\n for location in locations:\n properties = {\n 'addr_full': location.xpath('normalize-space(./div/address/a/text())').extract_first(),\n 'phone': location.xpath('normalize-space(./div/ul/li/span/a/text())').extract_first(),\n 'city': location.xpath('./div/address/a/text()').extract()[1].replace(' ' ,'').split(',')[0].replace('\\r\\n' ,''),\n 'state': location.xpath('./div/address/a/text()').extract()[1].lstrip().split(',')[1].split(' ')[1],\n 'postcode': location.xpath('./div/address/a/text()').extract()[1].lstrip().split(',')[1].split(' ')[2].replace('\\r\\n' ,''),\n 'ref': 
location.xpath('normalize-space(./div/address/a/@href)').extract_first(),\n 'website': response.url,\n 'lat':re.findall(r\"\\/[0-9]{2}[^(\\/)]+z\",location.xpath('normalize-space(./div/address/a/@href)').extract_first())[0][1:].split(',')[0],\n 'lon': re.findall(r\"\\/[0-9]{2}[^(\\/)]+z\",location.xpath('normalize-space(./div/address/a/@href)').extract_first())[0][1:].split(',')[1],\n }\n\n hours = self.parse_hours(location.xpath('./div/ul/li[3]/span'))\n if hours:\n properties['opening_hours'] = hours\n\n yield GeojsonPointItem(**properties)", "path": "locations/spiders/kopps.py"}]}
| 1,588 | 228 |
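The kopps patch above guards both the `DAY_MAPPING` lookup and the final append, so a day token missing from the map no longer raises `KeyError`, and a `parse_day` result of `None` is no longer concatenated. A self-contained sketch of the guarded day parsing, with hypothetical day strings chosen for illustration rather than taken from the site:

```python
DAY_MAPPING = {"Mon": "Mo", "Tues": "Tu", "Wed": "We", "Thur": "Th",
               "Fri": "Fr", "Sat": "Sa", "Sun": "Su"}

def parse_day(day):
    if "-" in day:
        osm_days = []
        days = day.split("-")
        if len(days) == 2:
            for part in days:
                if part.strip() in DAY_MAPPING:   # guard added by the patch
                    osm_days.append(DAY_MAPPING[part.strip()])
        return "-".join(osm_days)

print(parse_day("Mon-Sun"))      # Mo-Su
print(parse_day("Mon-Holiday"))  # Mo    (previously raised KeyError)
print(parse_day("Daily"))        # None  (callers must still check, as the patched parse_hours does)
```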
gh_patches_debug_1434
|
rasdani/github-patches
|
git_diff
|
Textualize__rich-2642
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] export_html leaks style into the page
**Describe the bug**
Following discussion https://github.com/Textualize/rich/discussions/2640
Injecting HTML generated with Rich's `Console.export_html()` method seems to leak some style into the page.

The page is built with MkDocs + Material for MkDocs. The dark theme is activated.
The light theme is less impacted; I just see extra "bars" (like an empty code line) in between block elements:

Code used to generate the HTML:
```python
import os
from rich.console import Console
report = """$ griffe check griffe -ssrc -b0.24.0 -a0.23.0
[bold]src/griffe/loader.py[/]:156: GriffeLoader.resolve_aliases([blue]only_exported[/]): [yellow]Parameter kind was changed[/]: positional or keyword -> keyword-only
etc.
"""
with open(os.devnull, "w") as devnull:
console = Console(record=True, width=150, file=devnull)
console.print(report, markup=True, highlight=False)
print(console.export_html(inline_styles=True))
```
Both `inline_styles=True` and `inline_styles=False` give the same result.
**Platform**
<details>
<summary>Click to expand</summary>
Linux
```console
$ python -m rich.diagnose
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=239 ColorSystem.TRUECOLOR> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'truecolor' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 58 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=239, height=58), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=239, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=58, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=239, height=58) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 239 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': 'xterm-256color', │
│ 'COLORTERM': 'truecolor', │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Linux"
```
```console
% pdm list --freeze | grep rich
rich==12.6.0
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rich/_export_format.py`
Content:
```
1 CONSOLE_HTML_FORMAT = """\
2 <!DOCTYPE html>
3 <head>
4 <meta charset="UTF-8">
5 <style>
6 {stylesheet}
7 body {{
8 color: {foreground};
9 background-color: {background};
10 }}
11 </style>
12 </head>
13 <html>
14 <body>
15 <code>
16 <pre style="font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">{code}</pre>
17 </code>
18 </body>
19 </html>
20 """
21
22 CONSOLE_SVG_FORMAT = """\
23 <svg class="rich-terminal" viewBox="0 0 {width} {height}" xmlns="http://www.w3.org/2000/svg">
24 <!-- Generated with Rich https://www.textualize.io -->
25 <style>
26
27 @font-face {{
28 font-family: "Fira Code";
29 src: local("FiraCode-Regular"),
30 url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Regular.woff2") format("woff2"),
31 url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Regular.woff") format("woff");
32 font-style: normal;
33 font-weight: 400;
34 }}
35 @font-face {{
36 font-family: "Fira Code";
37 src: local("FiraCode-Bold"),
38 url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Bold.woff2") format("woff2"),
39 url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Bold.woff") format("woff");
40 font-style: bold;
41 font-weight: 700;
42 }}
43
44 .{unique_id}-matrix {{
45 font-family: Fira Code, monospace;
46 font-size: {char_height}px;
47 line-height: {line_height}px;
48 font-variant-east-asian: full-width;
49 }}
50
51 .{unique_id}-title {{
52 font-size: 18px;
53 font-weight: bold;
54 font-family: arial;
55 }}
56
57 {styles}
58 </style>
59
60 <defs>
61 <clipPath id="{unique_id}-clip-terminal">
62 <rect x="0" y="0" width="{terminal_width}" height="{terminal_height}" />
63 </clipPath>
64 {lines}
65 </defs>
66
67 {chrome}
68 <g transform="translate({terminal_x}, {terminal_y})" clip-path="url(#{unique_id}-clip-terminal)">
69 {backgrounds}
70 <g class="{unique_id}-matrix">
71 {matrix}
72 </g>
73 </g>
74 </svg>
75 """
76
77 _SVG_FONT_FAMILY = "Rich Fira Code"
78 _SVG_CLASSES_PREFIX = "rich-svg"
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rich/_export_format.py b/rich/_export_format.py
--- a/rich/_export_format.py
+++ b/rich/_export_format.py
@@ -12,9 +12,9 @@
</head>
<html>
<body>
- <code>
- <pre style="font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">{code}</pre>
- </code>
+ <pre style="font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">
+ <code>{code}</code>
+ </pre>
</body>
</html>
"""
|
{"golden_diff": "diff --git a/rich/_export_format.py b/rich/_export_format.py\n--- a/rich/_export_format.py\n+++ b/rich/_export_format.py\n@@ -12,9 +12,9 @@\n </head>\n <html>\n <body>\n- <code>\n- <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">{code}</pre>\n- </code>\n+ <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n+ <code>{code}</code>\n+ </pre>\n </body>\n </html>\n \"\"\"\n", "issue": "[BUG] export_html leaks style into the page\n**Describe the bug**\r\n\r\nFollowing discussion https://github.com/Textualize/rich/discussions/2640\r\nInjecting HTML generated with Rich's `Console.export_html()` method seems to leak some style into the page.\r\n\r\n\r\n\r\nThe page is built with MkDocs + Material for MkDocs. The dark theme is activated.\r\nThe light theme is less impacted, I just see extra \"bars\" (like an empty code line) in-between block elements:\r\n\r\n\r\n\r\nCode used to generate the HTML:\r\n\r\n```python\r\nimport os\r\n\r\nfrom rich.console import Console\r\n\r\nreport = \"\"\"$ griffe check griffe -ssrc -b0.24.0 -a0.23.0\r\n[bold]src/griffe/loader.py[/]:156: GriffeLoader.resolve_aliases([blue]only_exported[/]): [yellow]Parameter kind was changed[/]: positional or keyword -> keyword-only\r\netc.\r\n\"\"\"\r\n\r\nwith open(os.devnull, \"w\") as devnull:\r\n console = Console(record=True, width=150, file=devnull)\r\n console.print(report, markup=True, highlight=False)\r\nprint(console.export_html(inline_styles=True))\r\n```\r\n\r\nBoth `inline_styles=True` and `inline_styles=False` give the same result.\r\n\r\n**Platform**\r\n<details>\r\n<summary>Click to expand</summary>\r\n\r\nLinux\r\n\r\n```console\r\n$ python -m rich.diagnose\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 A high level console interface. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 <console width=239 ColorSystem.TRUECOLOR> \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = 'truecolor' \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502\r\n\u2502 height = 58 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = True \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = True \u2502\r\n\u2502 legacy_windows = False \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions(width=239, height=58), \u2502\r\n\u2502 legacy_windows=False, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=239, \u2502\r\n\u2502 is_terminal=True, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=58, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=239, height=58) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 239 \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u256e\r\n\u2502 Windows features available. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 { \u2502\r\n\u2502 'TERM': 'xterm-256color', \u2502\r\n\u2502 'COLORTERM': 'truecolor', \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \u2502\r\n\u2502 'TERM_PROGRAM': None, \u2502\r\n\u2502 'COLUMNS': None, \u2502\r\n\u2502 'LINES': None, \u2502\r\n\u2502 'JUPYTER_COLUMNS': None, \u2502\r\n\u2502 'JUPYTER_LINES': None, \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nplatform=\"Linux\"\r\n```\r\n\r\n```console\r\n% pdm list --freeze | grep rich\r\nrich==12.6.0\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "CONSOLE_HTML_FORMAT = \"\"\"\\\n<!DOCTYPE html>\n<head>\n<meta charset=\"UTF-8\">\n<style>\n{stylesheet}\nbody {{\n color: {foreground};\n background-color: {background};\n}}\n</style>\n</head>\n<html>\n<body>\n <code>\n <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">{code}</pre>\n </code>\n</body>\n</html>\n\"\"\"\n\nCONSOLE_SVG_FORMAT = \"\"\"\\\n<svg class=\"rich-terminal\" viewBox=\"0 0 {width} {height}\" xmlns=\"http://www.w3.org/2000/svg\">\n <!-- Generated with Rich https://www.textualize.io -->\n <style>\n\n @font-face {{\n font-family: \"Fira Code\";\n src: local(\"FiraCode-Regular\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Regular.woff2\") format(\"woff2\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Regular.woff\") format(\"woff\");\n font-style: normal;\n font-weight: 400;\n }}\n @font-face {{\n font-family: \"Fira Code\";\n src: local(\"FiraCode-Bold\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Bold.woff2\") format(\"woff2\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Bold.woff\") format(\"woff\");\n font-style: bold;\n font-weight: 700;\n }}\n\n .{unique_id}-matrix {{\n font-family: Fira Code, monospace;\n font-size: {char_height}px;\n line-height: 
{line_height}px;\n font-variant-east-asian: full-width;\n }}\n\n .{unique_id}-title {{\n font-size: 18px;\n font-weight: bold;\n font-family: arial;\n }}\n\n {styles}\n </style>\n\n <defs>\n <clipPath id=\"{unique_id}-clip-terminal\">\n <rect x=\"0\" y=\"0\" width=\"{terminal_width}\" height=\"{terminal_height}\" />\n </clipPath>\n {lines}\n </defs>\n\n {chrome}\n <g transform=\"translate({terminal_x}, {terminal_y})\" clip-path=\"url(#{unique_id}-clip-terminal)\">\n {backgrounds}\n <g class=\"{unique_id}-matrix\">\n {matrix}\n </g>\n </g>\n</svg>\n\"\"\"\n\n_SVG_FONT_FAMILY = \"Rich Fira Code\"\n_SVG_CLASSES_PREFIX = \"rich-svg\"\n", "path": "rich/_export_format.py"}], "after_files": [{"content": "CONSOLE_HTML_FORMAT = \"\"\"\\\n<!DOCTYPE html>\n<head>\n<meta charset=\"UTF-8\">\n<style>\n{stylesheet}\nbody {{\n color: {foreground};\n background-color: {background};\n}}\n</style>\n</head>\n<html>\n<body>\n <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n <code>{code}</code>\n </pre>\n</body>\n</html>\n\"\"\"\n\nCONSOLE_SVG_FORMAT = \"\"\"\\\n<svg class=\"rich-terminal\" viewBox=\"0 0 {width} {height}\" xmlns=\"http://www.w3.org/2000/svg\">\n <!-- Generated with Rich https://www.textualize.io -->\n <style>\n\n @font-face {{\n font-family: \"Fira Code\";\n src: local(\"FiraCode-Regular\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Regular.woff2\") format(\"woff2\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Regular.woff\") format(\"woff\");\n font-style: normal;\n font-weight: 400;\n }}\n @font-face {{\n font-family: \"Fira Code\";\n src: local(\"FiraCode-Bold\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Bold.woff2\") format(\"woff2\"),\n url(\"https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Bold.woff\") format(\"woff\");\n font-style: bold;\n font-weight: 700;\n }}\n\n .{unique_id}-matrix {{\n font-family: Fira Code, monospace;\n font-size: {char_height}px;\n line-height: {line_height}px;\n font-variant-east-asian: full-width;\n }}\n\n .{unique_id}-title {{\n font-size: 18px;\n font-weight: bold;\n font-family: arial;\n }}\n\n {styles}\n </style>\n\n <defs>\n <clipPath id=\"{unique_id}-clip-terminal\">\n <rect x=\"0\" y=\"0\" width=\"{terminal_width}\" height=\"{terminal_height}\" />\n </clipPath>\n {lines}\n </defs>\n\n {chrome}\n <g transform=\"translate({terminal_x}, {terminal_y})\" clip-path=\"url(#{unique_id}-clip-terminal)\">\n {backgrounds}\n <g class=\"{unique_id}-matrix\">\n {matrix}\n </g>\n </g>\n</svg>\n\"\"\"\n\n_SVG_FONT_FAMILY = \"Rich Fira Code\"\n_SVG_CLASSES_PREFIX = \"rich-svg\"\n", "path": "rich/_export_format.py"}]}
| 2,222 | 146 |
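The fix above swaps the nesting in `CONSOLE_HTML_FORMAT` so the block-level `<pre>` wraps the inline `<code>` rather than the reverse; a `<pre>` inside `<code>` is not valid HTML content nesting, which appears to be what produced the stray bars and leaked styling described in the issue. A small sketch of the corrected fragment, rendered with plain `str.format` instead of `Console.export_html` itself, using one report line quoted from the issue:

```python
# Corrected nesting from the patch: block-level <pre> outside, inline <code> inside.
HTML_FRAGMENT = (
    "<pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,"
    "'Courier New',monospace\">\n<code>{code}</code>\n</pre>"
)

print(HTML_FRAGMENT.format(code="$ griffe check griffe -ssrc -b0.24.0 -a0.23.0"))
```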
gh_patches_debug_12705
|
rasdani/github-patches
|
git_diff
|
Lightning-Universe__lightning-bolts-315
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
add caching for used dataset
## 🚀 Feature
At the moment the datasets are downloaded for each CI run (it seems so, as I observed some download failures)
### Motivation
Speeds up runs and avoids random connection failures
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pl_bolts/models/mnist_module.py`
Content:
```
1 import os
2 from argparse import ArgumentParser
3 from warnings import warn
4
5 import torch
6 from pytorch_lightning import LightningModule, Trainer
7 from torch.nn import functional as F
8 from torch.utils.data import DataLoader, random_split
9
10 try:
11 from torchvision import transforms
12 from torchvision.datasets import MNIST
13 except ModuleNotFoundError:
14 warn('You want to use `torchvision` which is not installed yet,' # pragma: no-cover
15 ' install it with `pip install torchvision`.')
16
17
18 class LitMNIST(LightningModule):
19 def __init__(self, hidden_dim=128, learning_rate=1e-3, batch_size=32, num_workers=4, data_dir='', **kwargs):
20 super().__init__()
21 self.save_hyperparameters()
22
23 self.l1 = torch.nn.Linear(28 * 28, self.hparams.hidden_dim)
24 self.l2 = torch.nn.Linear(self.hparams.hidden_dim, 10)
25
26 self.mnist_train = None
27 self.mnist_val = None
28
29 def forward(self, x):
30 x = x.view(x.size(0), -1)
31 x = torch.relu(self.l1(x))
32 x = torch.relu(self.l2(x))
33 return x
34
35 def training_step(self, batch, batch_idx):
36 x, y = batch
37 y_hat = self(x)
38 loss = F.cross_entropy(y_hat, y)
39 self.log('train_loss', loss)
40 return loss
41
42 def validation_step(self, batch, batch_idx):
43 x, y = batch
44 y_hat = self(x)
45 loss = F.cross_entropy(y_hat, y)
46 self.log('val_loss', loss)
47
48 def test_step(self, batch, batch_idx):
49 x, y = batch
50 y_hat = self(x)
51 loss = F.cross_entropy(y_hat, y)
52 self.log('test_loss', loss)
53
54 def configure_optimizers(self):
55 return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
56
57 def prepare_data(self):
58 MNIST(self.hparams.data_dir, train=True, download=True, transform=transforms.ToTensor())
59
60 def train_dataloader(self):
61 dataset = MNIST(self.hparams.data_dir, train=True, download=False, transform=transforms.ToTensor())
62 mnist_train, _ = random_split(dataset, [55000, 5000])
63 loader = DataLoader(mnist_train, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)
64 return loader
65
66 def val_dataloader(self):
67 dataset = MNIST(self.hparams.data_dir, train=True, download=False, transform=transforms.ToTensor())
68 _, mnist_val = random_split(dataset, [55000, 5000])
69 loader = DataLoader(mnist_val, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)
70 return loader
71
72 def test_dataloader(self):
73 test_dataset = MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())
74 loader = DataLoader(test_dataset, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)
75 return loader
76
77 @staticmethod
78 def add_model_specific_args(parent_parser):
79 parser = ArgumentParser(parents=[parent_parser], add_help=False)
80 parser.add_argument('--batch_size', type=int, default=32)
81 parser.add_argument('--num_workers', type=int, default=4)
82 parser.add_argument('--hidden_dim', type=int, default=128)
83 parser.add_argument('--data_dir', type=str, default='')
84 parser.add_argument('--learning_rate', type=float, default=0.0001)
85 return parser
86
87
88 def cli_main():
89 # args
90 parser = ArgumentParser()
91 parser = Trainer.add_argparse_args(parser)
92 parser = LitMNIST.add_model_specific_args(parser)
93 args = parser.parse_args()
94
95 # model
96 model = LitMNIST(**vars(args))
97
98 # training
99 trainer = Trainer.from_argparse_args(args)
100 trainer.fit(model)
101
102
103 if __name__ == '__main__': # pragma: no cover
104 cli_main()
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pl_bolts/models/mnist_module.py b/pl_bolts/models/mnist_module.py
--- a/pl_bolts/models/mnist_module.py
+++ b/pl_bolts/models/mnist_module.py
@@ -1,4 +1,3 @@
-import os
from argparse import ArgumentParser
from warnings import warn
@@ -70,7 +69,7 @@
return loader
def test_dataloader(self):
- test_dataset = MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())
+ test_dataset = MNIST(self.hparams.data_dir, train=False, download=True, transform=transforms.ToTensor())
loader = DataLoader(test_dataset, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)
return loader
|
{"golden_diff": "diff --git a/pl_bolts/models/mnist_module.py b/pl_bolts/models/mnist_module.py\n--- a/pl_bolts/models/mnist_module.py\n+++ b/pl_bolts/models/mnist_module.py\n@@ -1,4 +1,3 @@\n-import os\n from argparse import ArgumentParser\n from warnings import warn\n \n@@ -70,7 +69,7 @@\n return loader\n \n def test_dataloader(self):\n- test_dataset = MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())\n+ test_dataset = MNIST(self.hparams.data_dir, train=False, download=True, transform=transforms.ToTensor())\n loader = DataLoader(test_dataset, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)\n return loader\n", "issue": "add caching for used dataset\n## \ud83d\ude80 Feature\r\n\r\natm for each CI run the datasets are downloaded (seems so as I observed som downloading failers)\r\n\r\n### Motivation\r\n\r\nSpeed-up runs and avoids random connection failers\r\n\n", "before_files": [{"content": "import os\nfrom argparse import ArgumentParser\nfrom warnings import warn\n\nimport torch\nfrom pytorch_lightning import LightningModule, Trainer\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader, random_split\n\ntry:\n from torchvision import transforms\n from torchvision.datasets import MNIST\nexcept ModuleNotFoundError:\n warn('You want to use `torchvision` which is not installed yet,' # pragma: no-cover\n ' install it with `pip install torchvision`.')\n\n\nclass LitMNIST(LightningModule):\n def __init__(self, hidden_dim=128, learning_rate=1e-3, batch_size=32, num_workers=4, data_dir='', **kwargs):\n super().__init__()\n self.save_hyperparameters()\n\n self.l1 = torch.nn.Linear(28 * 28, self.hparams.hidden_dim)\n self.l2 = torch.nn.Linear(self.hparams.hidden_dim, 10)\n\n self.mnist_train = None\n self.mnist_val = None\n\n def forward(self, x):\n x = x.view(x.size(0), -1)\n x = torch.relu(self.l1(x))\n x = torch.relu(self.l2(x))\n return x\n\n def training_step(self, batch, batch_idx):\n x, y = batch\n y_hat = self(x)\n loss = F.cross_entropy(y_hat, y)\n self.log('train_loss', loss)\n return loss\n\n def validation_step(self, batch, batch_idx):\n x, y = batch\n y_hat = self(x)\n loss = F.cross_entropy(y_hat, y)\n self.log('val_loss', loss)\n\n def test_step(self, batch, batch_idx):\n x, y = batch\n y_hat = self(x)\n loss = F.cross_entropy(y_hat, y)\n self.log('test_loss', loss)\n\n def configure_optimizers(self):\n return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)\n\n def prepare_data(self):\n MNIST(self.hparams.data_dir, train=True, download=True, transform=transforms.ToTensor())\n\n def train_dataloader(self):\n dataset = MNIST(self.hparams.data_dir, train=True, download=False, transform=transforms.ToTensor())\n mnist_train, _ = random_split(dataset, [55000, 5000])\n loader = DataLoader(mnist_train, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)\n return loader\n\n def val_dataloader(self):\n dataset = MNIST(self.hparams.data_dir, train=True, download=False, transform=transforms.ToTensor())\n _, mnist_val = random_split(dataset, [55000, 5000])\n loader = DataLoader(mnist_val, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)\n return loader\n\n def test_dataloader(self):\n test_dataset = MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())\n loader = DataLoader(test_dataset, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)\n return loader\n\n @staticmethod\n def 
add_model_specific_args(parent_parser):\n parser = ArgumentParser(parents=[parent_parser], add_help=False)\n parser.add_argument('--batch_size', type=int, default=32)\n parser.add_argument('--num_workers', type=int, default=4)\n parser.add_argument('--hidden_dim', type=int, default=128)\n parser.add_argument('--data_dir', type=str, default='')\n parser.add_argument('--learning_rate', type=float, default=0.0001)\n return parser\n\n\ndef cli_main():\n # args\n parser = ArgumentParser()\n parser = Trainer.add_argparse_args(parser)\n parser = LitMNIST.add_model_specific_args(parser)\n args = parser.parse_args()\n\n # model\n model = LitMNIST(**vars(args))\n\n # training\n trainer = Trainer.from_argparse_args(args)\n trainer.fit(model)\n\n\nif __name__ == '__main__': # pragma: no cover\n cli_main()\n", "path": "pl_bolts/models/mnist_module.py"}], "after_files": [{"content": "from argparse import ArgumentParser\nfrom warnings import warn\n\nimport torch\nfrom pytorch_lightning import LightningModule, Trainer\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader, random_split\n\ntry:\n from torchvision import transforms\n from torchvision.datasets import MNIST\nexcept ModuleNotFoundError:\n warn('You want to use `torchvision` which is not installed yet,' # pragma: no-cover\n ' install it with `pip install torchvision`.')\n\n\nclass LitMNIST(LightningModule):\n def __init__(self, hidden_dim=128, learning_rate=1e-3, batch_size=32, num_workers=4, data_dir='', **kwargs):\n super().__init__()\n self.save_hyperparameters()\n\n self.l1 = torch.nn.Linear(28 * 28, self.hparams.hidden_dim)\n self.l2 = torch.nn.Linear(self.hparams.hidden_dim, 10)\n\n self.mnist_train = None\n self.mnist_val = None\n\n def forward(self, x):\n x = x.view(x.size(0), -1)\n x = torch.relu(self.l1(x))\n x = torch.relu(self.l2(x))\n return x\n\n def training_step(self, batch, batch_idx):\n x, y = batch\n y_hat = self(x)\n loss = F.cross_entropy(y_hat, y)\n self.log('train_loss', loss)\n return loss\n\n def validation_step(self, batch, batch_idx):\n x, y = batch\n y_hat = self(x)\n loss = F.cross_entropy(y_hat, y)\n self.log('val_loss', loss)\n\n def test_step(self, batch, batch_idx):\n x, y = batch\n y_hat = self(x)\n loss = F.cross_entropy(y_hat, y)\n self.log('test_loss', loss)\n\n def configure_optimizers(self):\n return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)\n\n def prepare_data(self):\n MNIST(self.hparams.data_dir, train=True, download=True, transform=transforms.ToTensor())\n\n def train_dataloader(self):\n dataset = MNIST(self.hparams.data_dir, train=True, download=False, transform=transforms.ToTensor())\n mnist_train, _ = random_split(dataset, [55000, 5000])\n loader = DataLoader(mnist_train, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)\n return loader\n\n def val_dataloader(self):\n dataset = MNIST(self.hparams.data_dir, train=True, download=False, transform=transforms.ToTensor())\n _, mnist_val = random_split(dataset, [55000, 5000])\n loader = DataLoader(mnist_val, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)\n return loader\n\n def test_dataloader(self):\n test_dataset = MNIST(self.hparams.data_dir, train=False, download=True, transform=transforms.ToTensor())\n loader = DataLoader(test_dataset, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)\n return loader\n\n @staticmethod\n def add_model_specific_args(parent_parser):\n parser = ArgumentParser(parents=[parent_parser], 
add_help=False)\n parser.add_argument('--batch_size', type=int, default=32)\n parser.add_argument('--num_workers', type=int, default=4)\n parser.add_argument('--hidden_dim', type=int, default=128)\n parser.add_argument('--data_dir', type=str, default='')\n parser.add_argument('--learning_rate', type=float, default=0.0001)\n return parser\n\n\ndef cli_main():\n # args\n parser = ArgumentParser()\n parser = Trainer.add_argparse_args(parser)\n parser = LitMNIST.add_model_specific_args(parser)\n args = parser.parse_args()\n\n # model\n model = LitMNIST(**vars(args))\n\n # training\n trainer = Trainer.from_argparse_args(args)\n trainer.fit(model)\n\n\nif __name__ == '__main__': # pragma: no cover\n cli_main()\n", "path": "pl_bolts/models/mnist_module.py"}]}
| 1,415 | 169 |
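The patch above points `test_dataloader` at `self.hparams.data_dir` instead of `os.getcwd()`, so all three splits resolve against the same directory and a CI cache restored at `data_dir` is hit for the test set as well. A tiny sketch of the path mismatch the cache would otherwise miss; the cache directory is a hypothetical name, not anything defined by Bolts or the CI config:

```python
import os

data_dir = "/ci-cache/datasets"   # hypothetical directory restored by the CI cache
before_patch = os.getcwd()        # where the test split used to be downloaded
after_patch = data_dir            # where every split now lands

print(before_patch, "->", after_patch)
print("test split hits the cache:", after_patch == data_dir)  # True only after the patch
```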
gh_patches_debug_5031
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-2190
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Visual quirk in the login failure message
### Describe the bug
When you fail to log in on desktop (or probably any wider screen), the message saying so shows up on the left and the login fields are offset to the right. I would expect the message to appear on top, with the login fields staying in the same horizontal position.
### How to reproduce
Steps to reproduce the behaviour:
1. Go to the login page
2. Enter invalid credentials
### Expected behaviour
The login fields do not move to the right. The error message is either shown above the login fields or next to them.
### Screenshots

### Additional context
Not sure in what other situations these message boxes are used, but it may occur in other places too. Haven't checked.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/thaliawebsite/templatetags/alert.py`
Content:
```
1 from django import template
2
3 register = template.Library()
4
5
6 @register.inclusion_tag("includes/alert.html")
7 def alert(alert_type="info", message=None, dismissible=False, extra_classes=""):
8 if dismissible:
9 extra_classes += " alert-dimissable"
10 return {
11 "type": alert_type,
12 "message": message,
13 "dismissible": dismissible,
14 "extra_classes": extra_classes,
15 }
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/thaliawebsite/templatetags/alert.py b/website/thaliawebsite/templatetags/alert.py
--- a/website/thaliawebsite/templatetags/alert.py
+++ b/website/thaliawebsite/templatetags/alert.py
@@ -6,7 +6,7 @@
@register.inclusion_tag("includes/alert.html")
def alert(alert_type="info", message=None, dismissible=False, extra_classes=""):
if dismissible:
- extra_classes += " alert-dimissable"
+ extra_classes += " alert-dismissible"
return {
"type": alert_type,
"message": message,
|
{"golden_diff": "diff --git a/website/thaliawebsite/templatetags/alert.py b/website/thaliawebsite/templatetags/alert.py\n--- a/website/thaliawebsite/templatetags/alert.py\n+++ b/website/thaliawebsite/templatetags/alert.py\n@@ -6,7 +6,7 @@\n @register.inclusion_tag(\"includes/alert.html\")\n def alert(alert_type=\"info\", message=None, dismissible=False, extra_classes=\"\"):\n if dismissible:\n- extra_classes += \" alert-dimissable\"\n+ extra_classes += \" alert-dismissible\"\n return {\n \"type\": alert_type,\n \"message\": message,\n", "issue": "Visual quirk in the login failure message\n### Describe the bug\r\nWhen you fail to log in on desktop (or probably any wider screen) the message saying this shows up on the left and the login fields are offset to the right. I would expect the message to appear on top, while the login fields were to stay in the same horizontal position.\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to the login page\r\n2. Enter invalid credentials\r\n\r\n### Expected behaviour\r\nThe login fields do not move to the right. The error message is either shown above the login fields or next to them.\r\n\r\n### Screenshots\r\n\r\n\r\n\r\n### Additional context\r\nNot sure in what other situations these message boxes are used, but it may occur in other places too. Haven't checked.\r\n\n", "before_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag(\"includes/alert.html\")\ndef alert(alert_type=\"info\", message=None, dismissible=False, extra_classes=\"\"):\n if dismissible:\n extra_classes += \" alert-dimissable\"\n return {\n \"type\": alert_type,\n \"message\": message,\n \"dismissible\": dismissible,\n \"extra_classes\": extra_classes,\n }\n", "path": "website/thaliawebsite/templatetags/alert.py"}], "after_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag(\"includes/alert.html\")\ndef alert(alert_type=\"info\", message=None, dismissible=False, extra_classes=\"\"):\n if dismissible:\n extra_classes += \" alert-dismissible\"\n return {\n \"type\": alert_type,\n \"message\": message,\n \"dismissible\": dismissible,\n \"extra_classes\": extra_classes,\n }\n", "path": "website/thaliawebsite/templatetags/alert.py"}]}
| 621 | 144 |
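The one-word fix above replaces the misspelled `alert-dimissable` with Bootstrap's actual `alert-dismissible` class; with the typo, the dismissible alert styling never applied, which plausibly produced the misrendered error box described in the issue. A sketch of the corrected tag logic, runnable without Django's template machinery, just to show the class string it now emits:

```python
def alert(alert_type="info", message=None, dismissible=False, extra_classes=""):
    if dismissible:
        extra_classes += " alert-dismissible"   # was the misspelled "alert-dimissable"
    return {"type": alert_type, "message": message,
            "dismissible": dismissible, "extra_classes": extra_classes}

print(alert("danger", "Wrong username or password", dismissible=True)["extra_classes"])
```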
gh_patches_debug_25902
|
rasdani/github-patches
|
git_diff
|
psychopy__psychopy-4533
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
uninstall: cannot change path
Hello,
the current uninst.exe is hard-coded to the path C:\Program Files\PsychoPy3,
which makes it impossible to uninstall an old version in another folder or on another disk.
Best
PJ
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `psychopy/__init__.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 # Part of the PsychoPy library
5 # Copyright (C) 2002-2018 Jonathan Peirce (C) 2019-2021 Open Science Tools Ltd.
6 # Distributed under the terms of the GNU General Public License (GPL).
7
8 # --------------------------------------------------------------------------
9 # This file is automatically generated during build (do not edit directly).
10 # --------------------------------------------------------------------------
11
12 import os
13 import sys
14
15 __version__ = '2022.1.0rc5'
16 __license__ = 'GNU GPLv3 (or more recent equivalent)'
17 __author__ = 'Jonathan Peirce'
18 __author_email__ = '[email protected]'
19 __maintainer_email__ = '[email protected]'
20 __url__ = 'https://www.psychopy.org/'
21 __download_url__ = 'https://github.com/psychopy/psychopy/releases/'
22 __git_sha__ = 'n/a'
23 __build_platform__ = 'n/a'
24
25 __all__ = ["gui", "misc", "visual", "core",
26 "event", "data", "sound", "microphone"]
27
28 # for developers the following allows access to the current git sha from
29 # their repository
30 if __git_sha__ == 'n/a':
31 from subprocess import check_output, PIPE
32 # see if we're in a git repo and fetch from there
33 try:
34 thisFileLoc = os.path.split(__file__)[0]
35 output = check_output(['git', 'rev-parse', '--short', 'HEAD'],
36 cwd=thisFileLoc, stderr=PIPE)
37 except Exception:
38 output = False
39 if output:
40 __git_sha__ = output.strip() # remove final linefeed
41
42 # update preferences and the user paths
43 if 'installing' not in locals():
44 from psychopy.preferences import prefs
45 for pathName in prefs.general['paths']:
46 sys.path.append(pathName)
47
48 from psychopy.tools.versionchooser import useVersion, ensureMinimal
49
50 if sys.version_info.major < 3:
51 raise ImportError("psychopy does not support Python2 installations. "
52 "The last version to support Python2.7 was PsychoPy "
53 "2021.2.x")
54
55 # import readline here to get around an issue with sounddevice
56 # issues GH-2230 GH-2344 GH-2662
57 try:
58 import readline
59 except ImportError:
60 pass # all that will happen is the stderr/stdout might get redirected
61
62
```
Path: `setupApp.py`
Content:
```
1 #!/usr/bin/env python
2 ################
3 # see notes at bottom for requirements
4
5 import glob
6 import os
7 import sys
8 from sys import platform
9 from distutils.core import setup
10 from pkg_resources import parse_version
11 # import versioneer
12 import psychopy
13 version = psychopy.__version__
14
15 # regenerate __init__.py only if we're in the source repos (not in a zip file)
16 try:
17 from building import createInitFile # won't exist in a sdist.zip
18 writeNewInit=True
19 except:
20 writeNewInit=False
21 if writeNewInit:
22 vStr = createInitFile.createInitFile(dist='bdist')
23
24 #define the extensions to compile if necess
25 packageData = []
26 requires = []
27
28 if platform != 'darwin':
29 raise RuntimeError("setupApp.py is only for building Mac Standalone bundle")
30
31 import bdist_mpkg
32 import py2app
33 resources = glob.glob('psychopy/app/Resources/*')
34 frameworks = [ # these installed using homebrew
35 "/usr/local/opt/libevent/lib/libevent.dylib",
36 "/usr/local/opt/lame/lib/libmp3lame.0.dylib",
37 "/usr/local/opt/libffi/lib/libffi.dylib",
38 "/usr/local/opt/libglfw/lib/libglfw.3.2.dylib",
39 ]
40 opencvLibs = glob.glob(os.path.join(sys.exec_prefix, 'lib', 'libopencv*.2.4.dylib'))
41 frameworks.extend(opencvLibs)
42
43 import macholib
44 #print("~"*60 + "macholib version: "+macholib.__version__)
45
46 if parse_version(macholib.__version__) <= parse_version('1.7'):
47 print("Applying macholib patch...")
48 import macholib.dyld
49 import macholib.MachOGraph
50 dyld_find_1_7 = macholib.dyld.dyld_find
51 def dyld_find(name, loader=None, **kwargs):
52 #print("~"*60 + "calling alternate dyld_find")
53 if loader is not None:
54 kwargs['loader_path'] = loader
55 return dyld_find_1_7(name, **kwargs)
56 macholib.MachOGraph.dyld_find = dyld_find
57
58 includes = ['Tkinter', 'tkFileDialog',
59 'imp', 'subprocess', 'shlex',
60 'shelve', # for scipy.io
61 '_elementtree', 'pyexpat', # for openpyxl
62 'hid',
63 'pyo', 'greenlet', 'zmq', 'tornado',
64 'psutil', # for iohub
65 'tobii_research', # need tobii_research file and tobiiresearch pkg
66 'pysoundcard', 'soundfile', 'sounddevice', 'readline',
67 'hid',
68 'xlwt', # writes excel files for pandas
69 'vlc', # install with pip install python-vlc
70 'msgpack_numpy',
71 'configparser',
72 ]
73 packages = ['wx', 'psychopy',
74 'pyglet', 'pytz', 'OpenGL', 'glfw',
75 'scipy', 'matplotlib', 'openpyxl',
76 'xml', 'xmlschema', 'elementpath',
77 'moviepy', 'imageio', 'imageio_ffmpeg',
78 '_sounddevice_data', '_soundfile_data',
79 'cffi', 'pycparser',
80 'PIL', # 'Image',
81 'objc', 'Quartz', 'AppKit', 'QTKit', 'Cocoa',
82 'Foundation', 'CoreFoundation',
83 'pkg_resources', # needed for objc
84 'pyo',
85 'requests', 'certifi', 'cryptography',
86 # for unit testing
87 'coverage',
88 # handy external science libs
89 'serial',
90 'egi', 'pylink', 'tobiiresearch',
91 'pyxid2', 'ftd2xx', # ftd2xx is used by cedrus
92 'pandas', 'tables', # 'cython',
93 'msgpack', 'yaml', 'gevent', # for ioHub
94 # these aren't needed, but liked
95 'bidi', 'arabic_reshaper', # for right-left language conversions
96 # for Py3 compatibility
97 'ujson', # faster than built-in json
98 'json_tricks', # allows saving arrays/dates in json
99 'git', 'gitlab',
100 'astunparse', 'esprima', # for translating/adapting py/JS
101 'pylsl', 'pygaze',
102 'smite', # https://github.com/marcus-nystrom/SMITE (not pypi!)
103 'cv2',
104 'badapted', 'darc_toolbox', # adaptive methods from Ben Vincent
105 'questplus',
106 'metapensiero.pj', 'dukpy', 'macropy',
107 'jedi', 'parso',
108 'psychtoolbox',
109 'freetype', 'h5py',
110 'markdown_it',
111 'speech_recognition', 'googleapiclient', 'pocketsphinx',
112 'six', # needed by configobj
113 'PyQt5',
114 ]
115
116 setup(
117 app=['psychopy/app/psychopyApp.py'],
118 options=dict(py2app=dict(
119 includes=includes,
120 packages=packages,
121 excludes=['bsddb', 'jinja2', 'IPython','ipython_genutils','nbconvert',
122 'libsz.2.dylib', 'pygame',
123 # 'stringprep',
124 'functools32',
125 ], # anything we need to forcibly exclude?
126 resources=resources,
127 argv_emulation=False, # must be False or app bundle pauses (py2app 0.21 and 0.24 tested)
128 site_packages=True,
129 frameworks=frameworks,
130 iconfile='psychopy/app/Resources/psychopy.icns',
131 plist=dict(
132 CFBundleIconFile='psychopy.icns',
133 CFBundleName = "PsychoPy",
134 CFBundleShortVersionString = version, # must be in X.X.X format
135 CFBundleVersion = version,
136 CFBundleExecutable = "PsychoPy",
137 CFBundleIdentifier = "org.opensciencetools.psychopy",
138 CFBundleLicense = "GNU GPLv3+",
139 NSHumanReadableCopyright = "Open Science Tools Limited",
140 CFBundleDocumentTypes=[dict(CFBundleTypeExtensions=['*'],
141 CFBundleTypeRole='Editor')],
142 LSEnvironment=dict(PATH="/usr/local/git/bin:/usr/local/bin:"
143 "/usr/local:/usr/bin:/usr/sbin"),
144 ),
145 )) # end of the options dict
146 )
147
148
149 # ugly hack for opencv2:
150 # As of opencv 2.4.5 the cv2.so binary used rpath to a fixed
151 # location to find libs and even more annoyingly it then appended
152 # 'lib' to the rpath as well. These were fine for the packaged
153 # framework python but the libs in an app bundle are different.
154 # So, create symlinks so they appear in the same place as in framework python
155 rpath = "dist/PsychoPy.app/Contents/Resources/"
156 for libPath in opencvLibs:
157 libname = os.path.split(libPath)[-1]
158 realPath = "../../Frameworks/"+libname # relative path (w.r.t. the fake)
159 fakePath = os.path.join(rpath, "lib", libname)
160 os.symlink(realPath, fakePath)
161 # they even did this for Python lib itself, which is in diff location
162 realPath = "../Frameworks/Python.framework/Python" # relative to the fake path
163 fakePath = os.path.join(rpath, "Python")
164 os.symlink(realPath, fakePath)
165
166 if writeNewInit:
167 # remove unwanted info about this system post-build
168 createInitFile.createInitFile(dist=None)
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/psychopy/__init__.py b/psychopy/__init__.py
--- a/psychopy/__init__.py
+++ b/psychopy/__init__.py
@@ -12,7 +12,7 @@
import os
import sys
-__version__ = '2022.1.0rc5'
+__version__ = '2022.1.0rc7'
__license__ = 'GNU GPLv3 (or more recent equivalent)'
__author__ = 'Jonathan Peirce'
__author_email__ = '[email protected]'
diff --git a/setupApp.py b/setupApp.py
--- a/setupApp.py
+++ b/setupApp.py
@@ -6,9 +6,12 @@
import os
import sys
from sys import platform
+import setuptools # noqa: setuptools complains if it isn't implicitly imported before distutils
from distutils.core import setup
from pkg_resources import parse_version
-# import versioneer
+import bdist_mpkg # noqa: needed to build bdist, even though not explicitly used here
+import py2app # noqa: needed to build app bundle, even though not explicitly used here
+
import psychopy
version = psychopy.__version__
@@ -28,8 +31,6 @@
if platform != 'darwin':
raise RuntimeError("setupApp.py is only for building Mac Standalone bundle")
-import bdist_mpkg
-import py2app
resources = glob.glob('psychopy/app/Resources/*')
frameworks = [ # these installed using homebrew
"/usr/local/opt/libevent/lib/libevent.dylib",
|
{"golden_diff": "diff --git a/psychopy/__init__.py b/psychopy/__init__.py\n--- a/psychopy/__init__.py\n+++ b/psychopy/__init__.py\n@@ -12,7 +12,7 @@\n import os\n import sys\n \n-__version__ = '2022.1.0rc5'\n+__version__ = '2022.1.0rc7'\n __license__ = 'GNU GPLv3 (or more recent equivalent)'\n __author__ = 'Jonathan Peirce'\n __author_email__ = '[email protected]'\ndiff --git a/setupApp.py b/setupApp.py\n--- a/setupApp.py\n+++ b/setupApp.py\n@@ -6,9 +6,12 @@\n import os\n import sys\n from sys import platform\n+import setuptools # noqa: setuptools complains if it isn't implicitly imported before distutils\n from distutils.core import setup\n from pkg_resources import parse_version\n-# import versioneer\n+import bdist_mpkg # noqa: needed to build bdist, even though not explicitly used here\n+import py2app # noqa: needed to build app bundle, even though not explicitly used here\n+\n import psychopy\n version = psychopy.__version__\n \n@@ -28,8 +31,6 @@\n if platform != 'darwin':\n raise RuntimeError(\"setupApp.py is only for building Mac Standalone bundle\")\n \n-import bdist_mpkg\n-import py2app\n resources = glob.glob('psychopy/app/Resources/*')\n frameworks = [ # these installed using homebrew\n \"/usr/local/opt/libevent/lib/libevent.dylib\",\n", "issue": "uninstall: cannot change path\nHello,\r\nthe current uninst.exe is fixed to the path C:\\Program Files\\PsychoPy3,\r\nwhich makes it impossible to uninstall an old version in another fold or disk.\r\nBest\r\nPJ\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# Part of the PsychoPy library\n# Copyright (C) 2002-2018 Jonathan Peirce (C) 2019-2021 Open Science Tools Ltd.\n# Distributed under the terms of the GNU General Public License (GPL).\n\n# --------------------------------------------------------------------------\n# This file is automatically generated during build (do not edit directly).\n# --------------------------------------------------------------------------\n\nimport os\nimport sys\n\n__version__ = '2022.1.0rc5'\n__license__ = 'GNU GPLv3 (or more recent equivalent)'\n__author__ = 'Jonathan Peirce'\n__author_email__ = '[email protected]'\n__maintainer_email__ = '[email protected]'\n__url__ = 'https://www.psychopy.org/'\n__download_url__ = 'https://github.com/psychopy/psychopy/releases/'\n__git_sha__ = 'n/a'\n__build_platform__ = 'n/a'\n\n__all__ = [\"gui\", \"misc\", \"visual\", \"core\",\n \"event\", \"data\", \"sound\", \"microphone\"]\n\n# for developers the following allows access to the current git sha from\n# their repository\nif __git_sha__ == 'n/a':\n from subprocess import check_output, PIPE\n # see if we're in a git repo and fetch from there\n try:\n thisFileLoc = os.path.split(__file__)[0]\n output = check_output(['git', 'rev-parse', '--short', 'HEAD'],\n cwd=thisFileLoc, stderr=PIPE)\n except Exception:\n output = False\n if output:\n __git_sha__ = output.strip() # remove final linefeed\n\n# update preferences and the user paths\nif 'installing' not in locals():\n from psychopy.preferences import prefs\n for pathName in prefs.general['paths']:\n sys.path.append(pathName)\n\n from psychopy.tools.versionchooser import useVersion, ensureMinimal\n\nif sys.version_info.major < 3:\n raise ImportError(\"psychopy does not support Python2 installations. 
\"\n \"The last version to support Python2.7 was PsychoPy \"\n \"2021.2.x\")\n\n# import readline here to get around an issue with sounddevice\n# issues GH-2230 GH-2344 GH-2662\ntry:\n import readline\nexcept ImportError:\n pass # all that will happen is the stderr/stdout might get redirected\n\n", "path": "psychopy/__init__.py"}, {"content": "#!/usr/bin/env python\n################\n# see notes at bottom for requirements\n\nimport glob\nimport os\nimport sys\nfrom sys import platform\nfrom distutils.core import setup\nfrom pkg_resources import parse_version\n# import versioneer\nimport psychopy\nversion = psychopy.__version__\n\n# regenerate __init__.py only if we're in the source repos (not in a zip file)\ntry:\n from building import createInitFile # won't exist in a sdist.zip\n writeNewInit=True\nexcept:\n writeNewInit=False\nif writeNewInit:\n vStr = createInitFile.createInitFile(dist='bdist')\n\n#define the extensions to compile if necess\npackageData = []\nrequires = []\n\nif platform != 'darwin':\n raise RuntimeError(\"setupApp.py is only for building Mac Standalone bundle\")\n\nimport bdist_mpkg\nimport py2app\nresources = glob.glob('psychopy/app/Resources/*')\nframeworks = [ # these installed using homebrew\n \"/usr/local/opt/libevent/lib/libevent.dylib\", \n \"/usr/local/opt/lame/lib/libmp3lame.0.dylib\",\n \"/usr/local/opt/libffi/lib/libffi.dylib\",\n \"/usr/local/opt/libglfw/lib/libglfw.3.2.dylib\",\n ]\nopencvLibs = glob.glob(os.path.join(sys.exec_prefix, 'lib', 'libopencv*.2.4.dylib'))\nframeworks.extend(opencvLibs)\n\nimport macholib\n#print(\"~\"*60 + \"macholib version: \"+macholib.__version__)\n\nif parse_version(macholib.__version__) <= parse_version('1.7'):\n print(\"Applying macholib patch...\")\n import macholib.dyld\n import macholib.MachOGraph\n dyld_find_1_7 = macholib.dyld.dyld_find\n def dyld_find(name, loader=None, **kwargs):\n #print(\"~\"*60 + \"calling alternate dyld_find\")\n if loader is not None:\n kwargs['loader_path'] = loader\n return dyld_find_1_7(name, **kwargs)\n macholib.MachOGraph.dyld_find = dyld_find\n\nincludes = ['Tkinter', 'tkFileDialog',\n 'imp', 'subprocess', 'shlex',\n 'shelve', # for scipy.io\n '_elementtree', 'pyexpat', # for openpyxl\n 'hid',\n 'pyo', 'greenlet', 'zmq', 'tornado',\n 'psutil', # for iohub\n 'tobii_research', # need tobii_research file and tobiiresearch pkg\n 'pysoundcard', 'soundfile', 'sounddevice', 'readline',\n 'hid',\n 'xlwt', # writes excel files for pandas\n 'vlc', # install with pip install python-vlc\n 'msgpack_numpy',\n 'configparser',\n ]\npackages = ['wx', 'psychopy',\n 'pyglet', 'pytz', 'OpenGL', 'glfw',\n 'scipy', 'matplotlib', 'openpyxl',\n 'xml', 'xmlschema', 'elementpath',\n 'moviepy', 'imageio', 'imageio_ffmpeg',\n '_sounddevice_data', '_soundfile_data',\n 'cffi', 'pycparser',\n 'PIL', # 'Image',\n 'objc', 'Quartz', 'AppKit', 'QTKit', 'Cocoa',\n 'Foundation', 'CoreFoundation',\n 'pkg_resources', # needed for objc\n 'pyo',\n 'requests', 'certifi', 'cryptography',\n # for unit testing\n 'coverage',\n # handy external science libs\n 'serial',\n 'egi', 'pylink', 'tobiiresearch',\n 'pyxid2', 'ftd2xx', # ftd2xx is used by cedrus\n 'pandas', 'tables', # 'cython',\n 'msgpack', 'yaml', 'gevent', # for ioHub\n # these aren't needed, but liked\n 'bidi', 'arabic_reshaper', # for right-left language conversions\n # for Py3 compatibility\n 'ujson', # faster than built-in json\n 'json_tricks', # allows saving arrays/dates in json\n 'git', 'gitlab',\n 'astunparse', 'esprima', # for translating/adapting py/JS\n 'pylsl', 
'pygaze',\n 'smite', # https://github.com/marcus-nystrom/SMITE (not pypi!)\n 'cv2',\n 'badapted', 'darc_toolbox', # adaptive methods from Ben Vincent\n 'questplus',\n 'metapensiero.pj', 'dukpy', 'macropy',\n 'jedi', 'parso',\n 'psychtoolbox',\n 'freetype', 'h5py',\n 'markdown_it',\n 'speech_recognition', 'googleapiclient', 'pocketsphinx',\n 'six', # needed by configobj\n 'PyQt5',\n ]\n\nsetup(\n app=['psychopy/app/psychopyApp.py'],\n options=dict(py2app=dict(\n includes=includes,\n packages=packages,\n excludes=['bsddb', 'jinja2', 'IPython','ipython_genutils','nbconvert',\n 'libsz.2.dylib', 'pygame',\n # 'stringprep',\n 'functools32',\n ], # anything we need to forcibly exclude?\n resources=resources,\n argv_emulation=False, # must be False or app bundle pauses (py2app 0.21 and 0.24 tested)\n site_packages=True,\n frameworks=frameworks,\n iconfile='psychopy/app/Resources/psychopy.icns',\n plist=dict(\n CFBundleIconFile='psychopy.icns',\n CFBundleName = \"PsychoPy\",\n CFBundleShortVersionString = version, # must be in X.X.X format\n CFBundleVersion = version,\n CFBundleExecutable = \"PsychoPy\",\n CFBundleIdentifier = \"org.opensciencetools.psychopy\",\n CFBundleLicense = \"GNU GPLv3+\",\n NSHumanReadableCopyright = \"Open Science Tools Limited\",\n CFBundleDocumentTypes=[dict(CFBundleTypeExtensions=['*'],\n CFBundleTypeRole='Editor')],\n LSEnvironment=dict(PATH=\"/usr/local/git/bin:/usr/local/bin:\"\n \"/usr/local:/usr/bin:/usr/sbin\"),\n ),\n )) # end of the options dict\n)\n\n\n# ugly hack for opencv2:\n# As of opencv 2.4.5 the cv2.so binary used rpath to a fixed\n# location to find libs and even more annoyingly it then appended\n# 'lib' to the rpath as well. These were fine for the packaged\n# framework python but the libs in an app bundle are different.\n# So, create symlinks so they appear in the same place as in framework python\nrpath = \"dist/PsychoPy.app/Contents/Resources/\"\nfor libPath in opencvLibs:\n libname = os.path.split(libPath)[-1]\n realPath = \"../../Frameworks/\"+libname # relative path (w.r.t. 
the fake)\n fakePath = os.path.join(rpath, \"lib\", libname)\n os.symlink(realPath, fakePath)\n# they even did this for Python lib itself, which is in diff location\nrealPath = \"../Frameworks/Python.framework/Python\" # relative to the fake path\nfakePath = os.path.join(rpath, \"Python\")\nos.symlink(realPath, fakePath)\n\nif writeNewInit:\n # remove unwanted info about this system post-build\n createInitFile.createInitFile(dist=None)\n", "path": "setupApp.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# Part of the PsychoPy library\n# Copyright (C) 2002-2018 Jonathan Peirce (C) 2019-2021 Open Science Tools Ltd.\n# Distributed under the terms of the GNU General Public License (GPL).\n\n# --------------------------------------------------------------------------\n# This file is automatically generated during build (do not edit directly).\n# --------------------------------------------------------------------------\n\nimport os\nimport sys\n\n__version__ = '2022.1.0rc7'\n__license__ = 'GNU GPLv3 (or more recent equivalent)'\n__author__ = 'Jonathan Peirce'\n__author_email__ = '[email protected]'\n__maintainer_email__ = '[email protected]'\n__url__ = 'https://www.psychopy.org/'\n__download_url__ = 'https://github.com/psychopy/psychopy/releases/'\n__git_sha__ = 'n/a'\n__build_platform__ = 'n/a'\n\n__all__ = [\"gui\", \"misc\", \"visual\", \"core\",\n \"event\", \"data\", \"sound\", \"microphone\"]\n\n# for developers the following allows access to the current git sha from\n# their repository\nif __git_sha__ == 'n/a':\n from subprocess import check_output, PIPE\n # see if we're in a git repo and fetch from there\n try:\n thisFileLoc = os.path.split(__file__)[0]\n output = check_output(['git', 'rev-parse', '--short', 'HEAD'],\n cwd=thisFileLoc, stderr=PIPE)\n except Exception:\n output = False\n if output:\n __git_sha__ = output.strip() # remove final linefeed\n\n# update preferences and the user paths\nif 'installing' not in locals():\n from psychopy.preferences import prefs\n for pathName in prefs.general['paths']:\n sys.path.append(pathName)\n\n from psychopy.tools.versionchooser import useVersion, ensureMinimal\n\nif sys.version_info.major < 3:\n raise ImportError(\"psychopy does not support Python2 installations. 
\"\n \"The last version to support Python2.7 was PsychoPy \"\n \"2021.2.x\")\n\n# import readline here to get around an issue with sounddevice\n# issues GH-2230 GH-2344 GH-2662\ntry:\n import readline\nexcept ImportError:\n pass # all that will happen is the stderr/stdout might get redirected\n\n", "path": "psychopy/__init__.py"}, {"content": "#!/usr/bin/env python\n################\n# see notes at bottom for requirements\n\nimport glob\nimport os\nimport sys\nfrom sys import platform\nimport setuptools # noqa: setuptools complains if it isn't implicitly imported before distutils\nfrom distutils.core import setup\nfrom pkg_resources import parse_version\nimport bdist_mpkg # noqa: needed to build bdist, even though not explicitly used here\nimport py2app # noqa: needed to build app bundle, even though not explicitly used here\n\nimport psychopy\nversion = psychopy.__version__\n\n# regenerate __init__.py only if we're in the source repos (not in a zip file)\ntry:\n from building import createInitFile # won't exist in a sdist.zip\n writeNewInit=True\nexcept:\n writeNewInit=False\nif writeNewInit:\n vStr = createInitFile.createInitFile(dist='bdist')\n\n#define the extensions to compile if necess\npackageData = []\nrequires = []\n\nif platform != 'darwin':\n raise RuntimeError(\"setupApp.py is only for building Mac Standalone bundle\")\n\nresources = glob.glob('psychopy/app/Resources/*')\nframeworks = [ # these installed using homebrew\n \"/usr/local/opt/libevent/lib/libevent.dylib\", \n \"/usr/local/opt/lame/lib/libmp3lame.0.dylib\",\n \"/usr/local/opt/libffi/lib/libffi.dylib\",\n \"/usr/local/opt/libglfw/lib/libglfw.3.2.dylib\",\n ]\nopencvLibs = glob.glob(os.path.join(sys.exec_prefix, 'lib', 'libopencv*.2.4.dylib'))\nframeworks.extend(opencvLibs)\n\nimport macholib\n#print(\"~\"*60 + \"macholib version: \"+macholib.__version__)\n\nif parse_version(macholib.__version__) <= parse_version('1.7'):\n print(\"Applying macholib patch...\")\n import macholib.dyld\n import macholib.MachOGraph\n dyld_find_1_7 = macholib.dyld.dyld_find\n def dyld_find(name, loader=None, **kwargs):\n #print(\"~\"*60 + \"calling alternate dyld_find\")\n if loader is not None:\n kwargs['loader_path'] = loader\n return dyld_find_1_7(name, **kwargs)\n macholib.MachOGraph.dyld_find = dyld_find\n\nincludes = ['Tkinter', 'tkFileDialog',\n 'imp', 'subprocess', 'shlex',\n 'shelve', # for scipy.io\n '_elementtree', 'pyexpat', # for openpyxl\n 'hid',\n 'pyo', 'greenlet', 'zmq', 'tornado',\n 'psutil', # for iohub\n 'tobii_research', # need tobii_research file and tobiiresearch pkg\n 'pysoundcard', 'soundfile', 'sounddevice', 'readline',\n 'hid',\n 'xlwt', # writes excel files for pandas\n 'vlc', # install with pip install python-vlc\n 'msgpack_numpy',\n 'configparser',\n ]\npackages = ['wx', 'psychopy',\n 'pyglet', 'pytz', 'OpenGL', 'glfw',\n 'scipy', 'matplotlib', 'openpyxl',\n 'xml', 'xmlschema', 'elementpath',\n 'moviepy', 'imageio', 'imageio_ffmpeg',\n '_sounddevice_data', '_soundfile_data',\n 'cffi', 'pycparser',\n 'PIL', # 'Image',\n 'objc', 'Quartz', 'AppKit', 'QTKit', 'Cocoa',\n 'Foundation', 'CoreFoundation',\n 'pkg_resources', # needed for objc\n 'pyo',\n 'requests', 'certifi', 'cryptography',\n # for unit testing\n 'coverage',\n # handy external science libs\n 'serial',\n 'egi', 'pylink', 'tobiiresearch',\n 'pyxid2', 'ftd2xx', # ftd2xx is used by cedrus\n 'pandas', 'tables', # 'cython',\n 'msgpack', 'yaml', 'gevent', # for ioHub\n # these aren't needed, but liked\n 'bidi', 'arabic_reshaper', # for right-left language 
conversions\n # for Py3 compatibility\n 'ujson', # faster than built-in json\n 'json_tricks', # allows saving arrays/dates in json\n 'git', 'gitlab',\n 'astunparse', 'esprima', # for translating/adapting py/JS\n 'pylsl', 'pygaze',\n 'smite', # https://github.com/marcus-nystrom/SMITE (not pypi!)\n 'cv2',\n 'badapted', 'darc_toolbox', # adaptive methods from Ben Vincent\n 'questplus',\n 'metapensiero.pj', 'dukpy', 'macropy',\n 'jedi', 'parso',\n 'psychtoolbox',\n 'freetype', 'h5py',\n 'markdown_it',\n 'speech_recognition', 'googleapiclient', 'pocketsphinx',\n 'six', # needed by configobj\n 'PyQt5',\n ]\n\nsetup(\n app=['psychopy/app/psychopyApp.py'],\n options=dict(py2app=dict(\n includes=includes,\n packages=packages,\n excludes=['bsddb', 'jinja2', 'IPython','ipython_genutils','nbconvert',\n 'libsz.2.dylib', 'pygame',\n # 'stringprep',\n 'functools32',\n ], # anything we need to forcibly exclude?\n resources=resources,\n argv_emulation=False, # must be False or app bundle pauses (py2app 0.21 and 0.24 tested)\n site_packages=True,\n frameworks=frameworks,\n iconfile='psychopy/app/Resources/psychopy.icns',\n plist=dict(\n CFBundleIconFile='psychopy.icns',\n CFBundleName = \"PsychoPy\",\n CFBundleShortVersionString = version, # must be in X.X.X format\n CFBundleVersion = version,\n CFBundleExecutable = \"PsychoPy\",\n CFBundleIdentifier = \"org.opensciencetools.psychopy\",\n CFBundleLicense = \"GNU GPLv3+\",\n NSHumanReadableCopyright = \"Open Science Tools Limited\",\n CFBundleDocumentTypes=[dict(CFBundleTypeExtensions=['*'],\n CFBundleTypeRole='Editor')],\n LSEnvironment=dict(PATH=\"/usr/local/git/bin:/usr/local/bin:\"\n \"/usr/local:/usr/bin:/usr/sbin\"),\n ),\n )) # end of the options dict\n)\n\n\n# ugly hack for opencv2:\n# As of opencv 2.4.5 the cv2.so binary used rpath to a fixed\n# location to find libs and even more annoyingly it then appended\n# 'lib' to the rpath as well. These were fine for the packaged\n# framework python but the libs in an app bundle are different.\n# So, create symlinks so they appear in the same place as in framework python\nrpath = \"dist/PsychoPy.app/Contents/Resources/\"\nfor libPath in opencvLibs:\n libname = os.path.split(libPath)[-1]\n realPath = \"../../Frameworks/\"+libname # relative path (w.r.t. the fake)\n fakePath = os.path.join(rpath, \"lib\", libname)\n os.symlink(realPath, fakePath)\n# they even did this for Python lib itself, which is in diff location\nrealPath = \"../Frameworks/Python.framework/Python\" # relative to the fake path\nfakePath = os.path.join(rpath, \"Python\")\nos.symlink(realPath, fakePath)\n\nif writeNewInit:\n # remove unwanted info about this system post-build\n createInitFile.createInitFile(dist=None)\n", "path": "setupApp.py"}]}
| 3,087 | 351 |
gh_patches_debug_31883
|
rasdani/github-patches
|
git_diff
|
sql-machine-learning__elasticdl-1385
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PS-1] RPC service `push_model` implementation
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/ps/parameters.py`
Content:
```
1 import tensorflow as tf
2
3 from elasticdl.python.common.ndarray import tensor_to_ndarray
4 from elasticdl.python.ps.embedding_table import create_embedding_table
5
6
7 class Parameters(object):
8 """
9 There are two kinds of parameters:
10
11 1. non-embedding parameters, or dense tensor parameters. We save it
12 in a hashmap `non-embedding_params`, the key is the parameter name,
13 the value is a tf.Variable` object.
14 2. embedding parameters, or row-sparse parameters. We save it in a
15 hashmap `embedding_params`, the key is the embedding layer name,
16 the value is an `EmbeddingTable` object.
17
18 """
19
20 def __init__(self):
21 self.init_status = False
22 self.non_embedding_params = {}
23 self.embedding_params = {}
24
25 def get_embedding_param(self, name, indices):
26 if name not in self.embedding_params:
27 raise ValueError(
28 "Please initialize embedding param %s first!", name
29 )
30 return self.embedding_params[name].get(indices)
31
32 def set_embedding_param(self, name, indices, values):
33 if name not in self.embedding_params:
34 raise ValueError(
35 "Please initialize embedding param %s first!", name
36 )
37 self.embedding_params[name].set(indices, values)
38
39 def init_from_model_pb(self, model_pb):
40 if not self.init_status:
41 tensors_pb = model_pb.param
42 embeddings_pb = model_pb.embedding_table_info
43 self._init_non_embedding_params(tensors_pb)
44 self._init_embedding_params(embeddings_pb)
45 self.init_status = True
46
47 def _init_non_embedding_params(self, tensors_pb):
48 for pb in tensors_pb:
49 name = pb.name
50 arr = tensor_to_ndarray(pb)
51 var = tf.Variable(name=name, initial_value=arr, trainable=True)
52 self.non_embedding_params[name] = var
53
54 def _init_embedding_params(self, embeddings_pb):
55 for pb in embeddings_pb:
56 self.embedding_params[pb.name] = create_embedding_table(pb)
57
```
Path: `elasticdl/python/ps/servicer.py`
Content:
```
1 from google.protobuf import empty_pb2
2
3 from elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc
4
5
6 class PserverServicer(elasticdl_pb2_grpc.PserverServicer):
7 """PS service implementation"""
8
9 def __init__(
10 self,
11 parameters,
12 grads_to_wait,
13 optimizer,
14 lr_staleness_modulation=False,
15 use_async=False,
16 ):
17 self._parameters = parameters
18 self._grads_to_wait = grads_to_wait
19 self._optimizer = optimizer
20 self._lr_staleness_modulation = lr_staleness_modulation
21 self._use_async = use_async
22 self._version = 0
23
24 def pull_variable(self, request, _):
25 # TODO: implement this RPC service
26 return elasticdl_pb2.PullVariableResponse()
27
28 def pull_embedding_vector(self, request, _):
29 # TODO: implement this RPC service
30 return elasticdl_pb2.Tensor()
31
32 def push_model(self, request, _):
33 # TODO: implement this RPC service
34 return empty_pb2.Empty()
35
36 def push_gradient(self, request, _):
37 # TODO: implement this RPC service
38 return elasticdl_pb2.PushGradientResponse()
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/elasticdl/python/ps/parameters.py b/elasticdl/python/ps/parameters.py
--- a/elasticdl/python/ps/parameters.py
+++ b/elasticdl/python/ps/parameters.py
@@ -18,6 +18,7 @@
"""
def __init__(self):
+ self.version = 0
self.init_status = False
self.non_embedding_params = {}
self.embedding_params = {}
@@ -42,6 +43,7 @@
embeddings_pb = model_pb.embedding_table_info
self._init_non_embedding_params(tensors_pb)
self._init_embedding_params(embeddings_pb)
+ self.version = model_pb.version
self.init_status = True
def _init_non_embedding_params(self, tensors_pb):
diff --git a/elasticdl/python/ps/servicer.py b/elasticdl/python/ps/servicer.py
--- a/elasticdl/python/ps/servicer.py
+++ b/elasticdl/python/ps/servicer.py
@@ -1,3 +1,5 @@
+import threading
+
from google.protobuf import empty_pb2
from elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc
@@ -20,6 +22,7 @@
self._lr_staleness_modulation = lr_staleness_modulation
self._use_async = use_async
self._version = 0
+ self._lock = threading.Lock()
def pull_variable(self, request, _):
# TODO: implement this RPC service
@@ -30,7 +33,8 @@
return elasticdl_pb2.Tensor()
def push_model(self, request, _):
- # TODO: implement this RPC service
+ with self._lock:
+ self._parameters.init_from_model_pb(request)
return empty_pb2.Empty()
def push_gradient(self, request, _):
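
The lock added above guards against several workers calling `push_model` concurrently while the shared `Parameters` object is still uninitialized. The toy sketch below mirrors that pattern outside of gRPC and protobuf — `ToyParameters`, `ToyServicer`, and the dict-based `request` are stand-ins invented for illustration, not elasticdl APIs:

```python
import threading

class ToyParameters:
    """Stand-in for ps.Parameters: initialization must happen exactly once."""
    def __init__(self):
        self.version = 0
        self.init_status = False
        self.non_embedding_params = {}

    def init_from_model_pb(self, model_pb):
        if not self.init_status:
            self.non_embedding_params.update(model_pb["param"])
            self.version = model_pb["version"]
            self.init_status = True

class ToyServicer:
    """Stand-in for PserverServicer: push_model serializes access with a lock."""
    def __init__(self, parameters):
        self._parameters = parameters
        self._lock = threading.Lock()

    def push_model(self, request):
        with self._lock:  # concurrent pushes are applied one at a time
            self._parameters.init_from_model_pb(request)

params = ToyParameters()
servicer = ToyServicer(params)
request = {"param": {"w": [1.0, 2.0]}, "version": 3}  # fake "model pb" for the sketch

threads = [threading.Thread(target=servicer.push_model, args=(request,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert params.init_status and params.version == 3
```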
|
{"golden_diff": "diff --git a/elasticdl/python/ps/parameters.py b/elasticdl/python/ps/parameters.py\n--- a/elasticdl/python/ps/parameters.py\n+++ b/elasticdl/python/ps/parameters.py\n@@ -18,6 +18,7 @@\n \"\"\"\n \n def __init__(self):\n+ self.version = 0\n self.init_status = False\n self.non_embedding_params = {}\n self.embedding_params = {}\n@@ -42,6 +43,7 @@\n embeddings_pb = model_pb.embedding_table_info\n self._init_non_embedding_params(tensors_pb)\n self._init_embedding_params(embeddings_pb)\n+ self.version = model_pb.version\n self.init_status = True\n \n def _init_non_embedding_params(self, tensors_pb):\ndiff --git a/elasticdl/python/ps/servicer.py b/elasticdl/python/ps/servicer.py\n--- a/elasticdl/python/ps/servicer.py\n+++ b/elasticdl/python/ps/servicer.py\n@@ -1,3 +1,5 @@\n+import threading\n+\n from google.protobuf import empty_pb2\n \n from elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc\n@@ -20,6 +22,7 @@\n self._lr_staleness_modulation = lr_staleness_modulation\n self._use_async = use_async\n self._version = 0\n+ self._lock = threading.Lock()\n \n def pull_variable(self, request, _):\n # TODO: implement this RPC service\n@@ -30,7 +33,8 @@\n return elasticdl_pb2.Tensor()\n \n def push_model(self, request, _):\n- # TODO: implement this RPC service\n+ with self._lock:\n+ self._parameters.init_from_model_pb(request)\n return empty_pb2.Empty()\n \n def push_gradient(self, request, _):\n", "issue": "[PS-1] RPC service `push_model` implementation\n\n", "before_files": [{"content": "import tensorflow as tf\n\nfrom elasticdl.python.common.ndarray import tensor_to_ndarray\nfrom elasticdl.python.ps.embedding_table import create_embedding_table\n\n\nclass Parameters(object):\n \"\"\"\n There are two kinds of parameters:\n\n 1. non-embedding parameters, or dense tensor parameters. We save it\n in a hashmap `non-embedding_params`, the key is the parameter name,\n the value is a tf.Variable` object.\n 2. embedding parameters, or row-sparse parameters. 
We save it in a\n hashmap `embedding_params`, the key is the embedding layer name,\n the value is an `EmbeddingTable` object.\n\n \"\"\"\n\n def __init__(self):\n self.init_status = False\n self.non_embedding_params = {}\n self.embedding_params = {}\n\n def get_embedding_param(self, name, indices):\n if name not in self.embedding_params:\n raise ValueError(\n \"Please initialize embedding param %s first!\", name\n )\n return self.embedding_params[name].get(indices)\n\n def set_embedding_param(self, name, indices, values):\n if name not in self.embedding_params:\n raise ValueError(\n \"Please initialize embedding param %s first!\", name\n )\n self.embedding_params[name].set(indices, values)\n\n def init_from_model_pb(self, model_pb):\n if not self.init_status:\n tensors_pb = model_pb.param\n embeddings_pb = model_pb.embedding_table_info\n self._init_non_embedding_params(tensors_pb)\n self._init_embedding_params(embeddings_pb)\n self.init_status = True\n\n def _init_non_embedding_params(self, tensors_pb):\n for pb in tensors_pb:\n name = pb.name\n arr = tensor_to_ndarray(pb)\n var = tf.Variable(name=name, initial_value=arr, trainable=True)\n self.non_embedding_params[name] = var\n\n def _init_embedding_params(self, embeddings_pb):\n for pb in embeddings_pb:\n self.embedding_params[pb.name] = create_embedding_table(pb)\n", "path": "elasticdl/python/ps/parameters.py"}, {"content": "from google.protobuf import empty_pb2\n\nfrom elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc\n\n\nclass PserverServicer(elasticdl_pb2_grpc.PserverServicer):\n \"\"\"PS service implementation\"\"\"\n\n def __init__(\n self,\n parameters,\n grads_to_wait,\n optimizer,\n lr_staleness_modulation=False,\n use_async=False,\n ):\n self._parameters = parameters\n self._grads_to_wait = grads_to_wait\n self._optimizer = optimizer\n self._lr_staleness_modulation = lr_staleness_modulation\n self._use_async = use_async\n self._version = 0\n\n def pull_variable(self, request, _):\n # TODO: implement this RPC service\n return elasticdl_pb2.PullVariableResponse()\n\n def pull_embedding_vector(self, request, _):\n # TODO: implement this RPC service\n return elasticdl_pb2.Tensor()\n\n def push_model(self, request, _):\n # TODO: implement this RPC service\n return empty_pb2.Empty()\n\n def push_gradient(self, request, _):\n # TODO: implement this RPC service\n return elasticdl_pb2.PushGradientResponse()\n", "path": "elasticdl/python/ps/servicer.py"}], "after_files": [{"content": "import tensorflow as tf\n\nfrom elasticdl.python.common.ndarray import tensor_to_ndarray\nfrom elasticdl.python.ps.embedding_table import create_embedding_table\n\n\nclass Parameters(object):\n \"\"\"\n There are two kinds of parameters:\n\n 1. non-embedding parameters, or dense tensor parameters. We save it\n in a hashmap `non-embedding_params`, the key is the parameter name,\n the value is a tf.Variable` object.\n 2. embedding parameters, or row-sparse parameters. 
We save it in a\n hashmap `embedding_params`, the key is the embedding layer name,\n the value is an `EmbeddingTable` object.\n\n \"\"\"\n\n def __init__(self):\n self.version = 0\n self.init_status = False\n self.non_embedding_params = {}\n self.embedding_params = {}\n\n def get_embedding_param(self, name, indices):\n if name not in self.embedding_params:\n raise ValueError(\n \"Please initialize embedding param %s first!\", name\n )\n return self.embedding_params[name].get(indices)\n\n def set_embedding_param(self, name, indices, values):\n if name not in self.embedding_params:\n raise ValueError(\n \"Please initialize embedding param %s first!\", name\n )\n self.embedding_params[name].set(indices, values)\n\n def init_from_model_pb(self, model_pb):\n if not self.init_status:\n tensors_pb = model_pb.param\n embeddings_pb = model_pb.embedding_table_info\n self._init_non_embedding_params(tensors_pb)\n self._init_embedding_params(embeddings_pb)\n self.version = model_pb.version\n self.init_status = True\n\n def _init_non_embedding_params(self, tensors_pb):\n for pb in tensors_pb:\n name = pb.name\n arr = tensor_to_ndarray(pb)\n var = tf.Variable(name=name, initial_value=arr, trainable=True)\n self.non_embedding_params[name] = var\n\n def _init_embedding_params(self, embeddings_pb):\n for pb in embeddings_pb:\n self.embedding_params[pb.name] = create_embedding_table(pb)\n", "path": "elasticdl/python/ps/parameters.py"}, {"content": "import threading\n\nfrom google.protobuf import empty_pb2\n\nfrom elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc\n\n\nclass PserverServicer(elasticdl_pb2_grpc.PserverServicer):\n \"\"\"PS service implementation\"\"\"\n\n def __init__(\n self,\n parameters,\n grads_to_wait,\n optimizer,\n lr_staleness_modulation=False,\n use_async=False,\n ):\n self._parameters = parameters\n self._grads_to_wait = grads_to_wait\n self._optimizer = optimizer\n self._lr_staleness_modulation = lr_staleness_modulation\n self._use_async = use_async\n self._version = 0\n self._lock = threading.Lock()\n\n def pull_variable(self, request, _):\n # TODO: implement this RPC service\n return elasticdl_pb2.PullVariableResponse()\n\n def pull_embedding_vector(self, request, _):\n # TODO: implement this RPC service\n return elasticdl_pb2.Tensor()\n\n def push_model(self, request, _):\n with self._lock:\n self._parameters.init_from_model_pb(request)\n return empty_pb2.Empty()\n\n def push_gradient(self, request, _):\n # TODO: implement this RPC service\n return elasticdl_pb2.PushGradientResponse()\n", "path": "elasticdl/python/ps/servicer.py"}]}
| 1,160 | 413 |
gh_patches_debug_37394
|
rasdani/github-patches
|
git_diff
|
great-expectations__great_expectations-7139
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SQLAlchemy warning when not using SQL
**Describe the bug**
I'm using Azure Blob Storage as a backend and getting the following warning in the new version of ge 0.15.4. I don't think this warning showed up in versions prior to 0.15.
`WARNING:great_expectations.data_context.util:Something went wrong when trying to use SQLAlchemy to obfuscate URL: Could not parse rfc1738 URL from string 'BlobEndpoint=xxxxxx'`
**Expected behavior**
No warning of this type should show when the TupleAzureBlobStoreBackend backend is used.
**Environment (please complete the following information):**
- System: Azure
- Great Expectations Version: 0.15.4
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `great_expectations/data_context/util.py`
Content:
```
1 import copy
2 import inspect
3 import logging
4 import pathlib
5 import warnings
6 from typing import Any, Optional
7 from urllib.parse import urlparse
8
9 import pyparsing as pp
10
11 from great_expectations.alias_types import PathStr # noqa: TCH001
12 from great_expectations.types import safe_deep_copy
13 from great_expectations.util import load_class, verify_dynamic_loading_support
14
15 try:
16 import sqlalchemy as sa
17 except ImportError:
18 sa = None
19
20 logger = logging.getLogger(__name__)
21
22
23 # TODO: Rename config to constructor_kwargs and config_defaults -> constructor_kwarg_default
24 # TODO: Improve error messages in this method. Since so much of our workflow is config-driven, this will be a *super* important part of DX.
25 def instantiate_class_from_config(config, runtime_environment, config_defaults=None):
26 """Build a GX class from configuration dictionaries."""
27
28 if config_defaults is None:
29 config_defaults = {}
30
31 config = copy.deepcopy(config)
32
33 module_name = config.pop("module_name", None)
34 if module_name is None:
35 try:
36 module_name = config_defaults.pop("module_name")
37 except KeyError:
38 raise KeyError(
39 "Neither config : {} nor config_defaults : {} contains a module_name key.".format(
40 config,
41 config_defaults,
42 )
43 )
44 else:
45 # Pop the value without using it, to avoid sending an unwanted value to the config_class
46 config_defaults.pop("module_name", None)
47
48 logger.debug(f"(instantiate_class_from_config) module_name -> {module_name}")
49 verify_dynamic_loading_support(module_name=module_name)
50
51 class_name = config.pop("class_name", None)
52 if class_name is None:
53 logger.warning(
54 "Instantiating class from config without an explicit class_name is dangerous. Consider adding "
55 f"an explicit class_name for {config.get('name')}"
56 )
57 try:
58 class_name = config_defaults.pop("class_name")
59 except KeyError:
60 raise KeyError(
61 "Neither config : {} nor config_defaults : {} contains a class_name key.".format(
62 config,
63 config_defaults,
64 )
65 )
66 else:
67 # Pop the value without using it, to avoid sending an unwanted value to the config_class
68 config_defaults.pop("class_name", None)
69
70 class_ = load_class(class_name=class_name, module_name=module_name)
71
72 config_with_defaults = copy.deepcopy(config_defaults)
73 config_with_defaults.update(config)
74 if runtime_environment is not None:
75 # If there are additional kwargs available in the runtime_environment requested by a
76 # class to be instantiated, provide them
77 argspec = inspect.getfullargspec(class_.__init__)[0][1:]
78
79 missing_args = set(argspec) - set(config_with_defaults.keys())
80 config_with_defaults.update(
81 {
82 missing_arg: runtime_environment[missing_arg]
83 for missing_arg in missing_args
84 if missing_arg in runtime_environment
85 }
86 )
87 # Add the entire runtime_environment as well if it's requested
88 if "runtime_environment" in missing_args:
89 config_with_defaults.update({"runtime_environment": runtime_environment})
90
91 try:
92 class_instance = class_(**config_with_defaults)
93 except TypeError as e:
94 raise TypeError(
95 "Couldn't instantiate class: {} with config: \n\t{}\n \n".format(
96 class_name, format_dict_for_error_message(config_with_defaults)
97 )
98 + str(e)
99 )
100
101 return class_instance
102
103
104 def format_dict_for_error_message(dict_):
105 # TODO : Tidy this up a bit. Indentation isn't fully consistent.
106
107 return "\n\t".join("\t\t".join((str(key), str(dict_[key]))) for key in dict_)
108
109
110 def file_relative_path(
111 source_path: PathStr,
112 relative_path: PathStr,
113 strict: bool = True,
114 ) -> str:
115 """
116 This function is useful when one needs to load a file that is
117 relative to the position of the current file. (Such as when
118 you encode a configuration file path in source file and want
119 in runnable in any current working directory)
120
121 It is meant to be used like the following:
122 file_relative_path(__file__, 'path/relative/to/file')
123
124 This has been modified from Dagster's utils:
125 H/T https://github.com/dagster-io/dagster/blob/8a250e9619a49e8bff8e9aa7435df89c2d2ea039/python_modules/dagster/dagster/utils/__init__.py#L34
126 """
127 dir_path = pathlib.Path(source_path).parent
128 abs_path = dir_path.joinpath(relative_path).resolve(strict=strict)
129 return str(abs_path)
130
131
132 def parse_substitution_variable(substitution_variable: str) -> Optional[str]:
133 """
134 Parse and check whether the string contains a substitution variable of the case insensitive form ${SOME_VAR} or $SOME_VAR
135 Args:
136 substitution_variable: string to be parsed
137
138 Returns:
139 string of variable name e.g. SOME_VAR or None if not parsable. If there are multiple substitution variables this currently returns the first e.g. $SOME_$TRING -> $SOME_
140 """
141 substitution_variable_name = pp.Word(pp.alphanums + "_").setResultsName(
142 "substitution_variable_name"
143 )
144 curly_brace_parser = "${" + substitution_variable_name + "}"
145 non_curly_brace_parser = "$" + substitution_variable_name
146 both_parser = curly_brace_parser | non_curly_brace_parser
147 try:
148 parsed_substitution_variable = both_parser.parseString(substitution_variable)
149 return parsed_substitution_variable.substitution_variable_name
150 except pp.ParseException:
151 return None
152
153
154 class PasswordMasker:
155 """
156 Used to mask passwords in Datasources. Does not mask sqlite urls.
157
158 Example usage
159 masked_db_url = PasswordMasker.mask_db_url(url)
160 where url = "postgresql+psycopg2://username:password@host:65432/database"
161 and masked_url = "postgresql+psycopg2://username:***@host:65432/database"
162
163 """
164
165 MASKED_PASSWORD_STRING = "***"
166
167 # values with the following keys will be processed with cls.mask_db_url:
168 URL_KEYS = {"connection_string", "url"}
169
170 # values with these keys will be directly replaced with cls.MASKED_PASSWORD_STRING:
171 PASSWORD_KEYS = {"access_token", "password"}
172
173 @classmethod
174 def mask_db_url(cls, url: str, use_urlparse: bool = False, **kwargs) -> str:
175 """
176 Mask password in database url.
177 Uses sqlalchemy engine parsing if sqlalchemy is installed, otherwise defaults to using urlparse from the stdlib which does not handle kwargs.
178 Args:
179 url: Database url e.g. "postgresql+psycopg2://username:password@host:65432/database"
180 use_urlparse: Skip trying to parse url with sqlalchemy and use urlparse
181 **kwargs: passed to create_engine()
182
183 Returns:
184 url with password masked e.g. "postgresql+psycopg2://username:***@host:65432/database"
185 """
186 if sa is not None and use_urlparse is False:
187 try:
188 engine = sa.create_engine(url, **kwargs)
189 return engine.url.__repr__()
190 # Account for the edge case where we have SQLAlchemy in our env but haven't installed the appropriate dialect to match the input URL
191 except Exception as e:
192 logger.warning(
193 f"Something went wrong when trying to use SQLAlchemy to obfuscate URL: {e}"
194 )
195 else:
196 warnings.warn(
197 "SQLAlchemy is not installed, using urlparse to mask database url password which ignores **kwargs."
198 )
199 return cls._mask_db_url_no_sa(url=url)
200
201 @classmethod
202 def _mask_db_url_no_sa(cls, url: str) -> str:
203 # oracle+cx_oracle does not parse well using urlparse, parse as oracle then swap back
204 replace_prefix = None
205 if url.startswith("oracle+cx_oracle"):
206 replace_prefix = {"original": "oracle+cx_oracle", "temporary": "oracle"}
207 url = url.replace(replace_prefix["original"], replace_prefix["temporary"])
208
209 parsed_url = urlparse(url)
210
211 # Do not parse sqlite
212 if parsed_url.scheme == "sqlite":
213 return url
214
215 colon = ":" if parsed_url.port is not None else ""
216 masked_url = (
217 f"{parsed_url.scheme}://{parsed_url.username}:{cls.MASKED_PASSWORD_STRING}"
218 f"@{parsed_url.hostname}{colon}{parsed_url.port or ''}{parsed_url.path or ''}"
219 )
220
221 if replace_prefix is not None:
222 masked_url = masked_url.replace(
223 replace_prefix["temporary"], replace_prefix["original"]
224 )
225
226 return masked_url
227
228 @classmethod
229 def sanitize_config(cls, config: dict) -> dict:
230 """
231 Mask sensitive fields in a Dict.
232 """
233
234 # be defensive, since it would be logical to expect this method works with DataContextConfig
235 if not isinstance(config, dict):
236 raise TypeError(
237 "PasswordMasker.sanitize_config expects param `config` "
238 + f"to be of type Dict, not of type {type(config)}"
239 )
240
241 config_copy = safe_deep_copy(config) # be immutable
242
243 def recursive_cleaner_method(config: Any) -> None:
244 if isinstance(config, dict):
245 for key, val in config.items():
246 if not isinstance(val, str):
247 recursive_cleaner_method(val)
248 elif key in cls.URL_KEYS:
249 config[key] = cls.mask_db_url(val)
250 elif key in cls.PASSWORD_KEYS:
251 config[key] = cls.MASKED_PASSWORD_STRING
252 else:
253 pass # this string is not sensitive
254 elif isinstance(config, list):
255 for val in config:
256 recursive_cleaner_method(val)
257
258 recursive_cleaner_method(config_copy) # Perform anonymization in place
259
260 return config_copy
261
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/great_expectations/data_context/util.py b/great_expectations/data_context/util.py
--- a/great_expectations/data_context/util.py
+++ b/great_expectations/data_context/util.py
@@ -1,7 +1,10 @@
+from __future__ import annotations
+
import copy
import inspect
import logging
import pathlib
+import re
import warnings
from typing import Any, Optional
from urllib.parse import urlparse
@@ -9,6 +12,7 @@
import pyparsing as pp
from great_expectations.alias_types import PathStr # noqa: TCH001
+from great_expectations.exceptions import StoreConfigurationError
from great_expectations.types import safe_deep_copy
from great_expectations.util import load_class, verify_dynamic_loading_support
@@ -183,7 +187,9 @@
Returns:
url with password masked e.g. "postgresql+psycopg2://username:***@host:65432/database"
"""
- if sa is not None and use_urlparse is False:
+ if url.startswith("DefaultEndpointsProtocol"):
+ return cls._obfuscate_azure_blobstore_connection_string(url)
+ elif sa is not None and use_urlparse is False:
try:
engine = sa.create_engine(url, **kwargs)
return engine.url.__repr__()
@@ -198,6 +204,25 @@
)
return cls._mask_db_url_no_sa(url=url)
+ @classmethod
+ def _obfuscate_azure_blobstore_connection_string(cls, url: str) -> str:
+ # Parse Azure Connection Strings
+ azure_conn_str_re = re.compile(
+ "(DefaultEndpointsProtocol=(http|https));(AccountName=([a-zA-Z0-9]+));(AccountKey=)(.+);(EndpointSuffix=([a-zA-Z\\.]+))"
+ )
+ try:
+ matched: re.Match[str] | None = azure_conn_str_re.match(url)
+ if not matched:
+ raise StoreConfigurationError(
+ f"The URL for the Azure connection-string, was not configured properly. Please check and try again: {url} "
+ )
+ res = f"DefaultEndpointsProtocol={matched.group(2)};AccountName={matched.group(4)};AccountKey=***;EndpointSuffix={matched.group(8)}"
+ return res
+ except Exception as e:
+ raise StoreConfigurationError(
+ f"Something went wrong when trying to obfuscate URL for Azure connection-string. Please check your configuration: {e}"
+ )
+
@classmethod
def _mask_db_url_no_sa(cls, url: str) -> str:
# oracle+cx_oracle does not parse well using urlparse, parse as oracle then swap back
|
{"golden_diff": "diff --git a/great_expectations/data_context/util.py b/great_expectations/data_context/util.py\n--- a/great_expectations/data_context/util.py\n+++ b/great_expectations/data_context/util.py\n@@ -1,7 +1,10 @@\n+from __future__ import annotations\n+\n import copy\n import inspect\n import logging\n import pathlib\n+import re\n import warnings\n from typing import Any, Optional\n from urllib.parse import urlparse\n@@ -9,6 +12,7 @@\n import pyparsing as pp\n \n from great_expectations.alias_types import PathStr # noqa: TCH001\n+from great_expectations.exceptions import StoreConfigurationError\n from great_expectations.types import safe_deep_copy\n from great_expectations.util import load_class, verify_dynamic_loading_support\n \n@@ -183,7 +187,9 @@\n Returns:\n url with password masked e.g. \"postgresql+psycopg2://username:***@host:65432/database\"\n \"\"\"\n- if sa is not None and use_urlparse is False:\n+ if url.startswith(\"DefaultEndpointsProtocol\"):\n+ return cls._obfuscate_azure_blobstore_connection_string(url)\n+ elif sa is not None and use_urlparse is False:\n try:\n engine = sa.create_engine(url, **kwargs)\n return engine.url.__repr__()\n@@ -198,6 +204,25 @@\n )\n return cls._mask_db_url_no_sa(url=url)\n \n+ @classmethod\n+ def _obfuscate_azure_blobstore_connection_string(cls, url: str) -> str:\n+ # Parse Azure Connection Strings\n+ azure_conn_str_re = re.compile(\n+ \"(DefaultEndpointsProtocol=(http|https));(AccountName=([a-zA-Z0-9]+));(AccountKey=)(.+);(EndpointSuffix=([a-zA-Z\\\\.]+))\"\n+ )\n+ try:\n+ matched: re.Match[str] | None = azure_conn_str_re.match(url)\n+ if not matched:\n+ raise StoreConfigurationError(\n+ f\"The URL for the Azure connection-string, was not configured properly. Please check and try again: {url} \"\n+ )\n+ res = f\"DefaultEndpointsProtocol={matched.group(2)};AccountName={matched.group(4)};AccountKey=***;EndpointSuffix={matched.group(8)}\"\n+ return res\n+ except Exception as e:\n+ raise StoreConfigurationError(\n+ f\"Something went wrong when trying to obfuscate URL for Azure connection-string. Please check your configuration: {e}\"\n+ )\n+\n @classmethod\n def _mask_db_url_no_sa(cls, url: str) -> str:\n # oracle+cx_oracle does not parse well using urlparse, parse as oracle then swap back\n", "issue": "SQLAlchemy warning when not using SQL\n**Describe the bug**\r\nI'm using Azure Blob Storage as a backend and getting the following warning in the new version of ge 0.15.4. 
Don't think this warning showed up in versions prior to 0.15.\r\n\r\n`WARNING:great_expectations.data_context.util:Something went wrong when trying to use SQLAlchemy to obfuscate URL: Could not parse rfc1738 URL from string 'BlobEndpoint=xxxxxx'`\r\n\r\n**Expected behavior**\r\nNo warning of this is type should show when TupleAzureBlobStoreBackend backend is used.\r\n\r\n**Environment (please complete the following information):**\r\n - System: Azure\r\n - Great Expectations Version: 0.15.4\r\n\r\n\n", "before_files": [{"content": "import copy\nimport inspect\nimport logging\nimport pathlib\nimport warnings\nfrom typing import Any, Optional\nfrom urllib.parse import urlparse\n\nimport pyparsing as pp\n\nfrom great_expectations.alias_types import PathStr # noqa: TCH001\nfrom great_expectations.types import safe_deep_copy\nfrom great_expectations.util import load_class, verify_dynamic_loading_support\n\ntry:\n import sqlalchemy as sa\nexcept ImportError:\n sa = None\n\nlogger = logging.getLogger(__name__)\n\n\n# TODO: Rename config to constructor_kwargs and config_defaults -> constructor_kwarg_default\n# TODO: Improve error messages in this method. Since so much of our workflow is config-driven, this will be a *super* important part of DX.\ndef instantiate_class_from_config(config, runtime_environment, config_defaults=None):\n \"\"\"Build a GX class from configuration dictionaries.\"\"\"\n\n if config_defaults is None:\n config_defaults = {}\n\n config = copy.deepcopy(config)\n\n module_name = config.pop(\"module_name\", None)\n if module_name is None:\n try:\n module_name = config_defaults.pop(\"module_name\")\n except KeyError:\n raise KeyError(\n \"Neither config : {} nor config_defaults : {} contains a module_name key.\".format(\n config,\n config_defaults,\n )\n )\n else:\n # Pop the value without using it, to avoid sending an unwanted value to the config_class\n config_defaults.pop(\"module_name\", None)\n\n logger.debug(f\"(instantiate_class_from_config) module_name -> {module_name}\")\n verify_dynamic_loading_support(module_name=module_name)\n\n class_name = config.pop(\"class_name\", None)\n if class_name is None:\n logger.warning(\n \"Instantiating class from config without an explicit class_name is dangerous. 
Consider adding \"\n f\"an explicit class_name for {config.get('name')}\"\n )\n try:\n class_name = config_defaults.pop(\"class_name\")\n except KeyError:\n raise KeyError(\n \"Neither config : {} nor config_defaults : {} contains a class_name key.\".format(\n config,\n config_defaults,\n )\n )\n else:\n # Pop the value without using it, to avoid sending an unwanted value to the config_class\n config_defaults.pop(\"class_name\", None)\n\n class_ = load_class(class_name=class_name, module_name=module_name)\n\n config_with_defaults = copy.deepcopy(config_defaults)\n config_with_defaults.update(config)\n if runtime_environment is not None:\n # If there are additional kwargs available in the runtime_environment requested by a\n # class to be instantiated, provide them\n argspec = inspect.getfullargspec(class_.__init__)[0][1:]\n\n missing_args = set(argspec) - set(config_with_defaults.keys())\n config_with_defaults.update(\n {\n missing_arg: runtime_environment[missing_arg]\n for missing_arg in missing_args\n if missing_arg in runtime_environment\n }\n )\n # Add the entire runtime_environment as well if it's requested\n if \"runtime_environment\" in missing_args:\n config_with_defaults.update({\"runtime_environment\": runtime_environment})\n\n try:\n class_instance = class_(**config_with_defaults)\n except TypeError as e:\n raise TypeError(\n \"Couldn't instantiate class: {} with config: \\n\\t{}\\n \\n\".format(\n class_name, format_dict_for_error_message(config_with_defaults)\n )\n + str(e)\n )\n\n return class_instance\n\n\ndef format_dict_for_error_message(dict_):\n # TODO : Tidy this up a bit. Indentation isn't fully consistent.\n\n return \"\\n\\t\".join(\"\\t\\t\".join((str(key), str(dict_[key]))) for key in dict_)\n\n\ndef file_relative_path(\n source_path: PathStr,\n relative_path: PathStr,\n strict: bool = True,\n) -> str:\n \"\"\"\n This function is useful when one needs to load a file that is\n relative to the position of the current file. (Such as when\n you encode a configuration file path in source file and want\n in runnable in any current working directory)\n\n It is meant to be used like the following:\n file_relative_path(__file__, 'path/relative/to/file')\n\n This has been modified from Dagster's utils:\n H/T https://github.com/dagster-io/dagster/blob/8a250e9619a49e8bff8e9aa7435df89c2d2ea039/python_modules/dagster/dagster/utils/__init__.py#L34\n \"\"\"\n dir_path = pathlib.Path(source_path).parent\n abs_path = dir_path.joinpath(relative_path).resolve(strict=strict)\n return str(abs_path)\n\n\ndef parse_substitution_variable(substitution_variable: str) -> Optional[str]:\n \"\"\"\n Parse and check whether the string contains a substitution variable of the case insensitive form ${SOME_VAR} or $SOME_VAR\n Args:\n substitution_variable: string to be parsed\n\n Returns:\n string of variable name e.g. SOME_VAR or None if not parsable. If there are multiple substitution variables this currently returns the first e.g. 
$SOME_$TRING -> $SOME_\n \"\"\"\n substitution_variable_name = pp.Word(pp.alphanums + \"_\").setResultsName(\n \"substitution_variable_name\"\n )\n curly_brace_parser = \"${\" + substitution_variable_name + \"}\"\n non_curly_brace_parser = \"$\" + substitution_variable_name\n both_parser = curly_brace_parser | non_curly_brace_parser\n try:\n parsed_substitution_variable = both_parser.parseString(substitution_variable)\n return parsed_substitution_variable.substitution_variable_name\n except pp.ParseException:\n return None\n\n\nclass PasswordMasker:\n \"\"\"\n Used to mask passwords in Datasources. Does not mask sqlite urls.\n\n Example usage\n masked_db_url = PasswordMasker.mask_db_url(url)\n where url = \"postgresql+psycopg2://username:password@host:65432/database\"\n and masked_url = \"postgresql+psycopg2://username:***@host:65432/database\"\n\n \"\"\"\n\n MASKED_PASSWORD_STRING = \"***\"\n\n # values with the following keys will be processed with cls.mask_db_url:\n URL_KEYS = {\"connection_string\", \"url\"}\n\n # values with these keys will be directly replaced with cls.MASKED_PASSWORD_STRING:\n PASSWORD_KEYS = {\"access_token\", \"password\"}\n\n @classmethod\n def mask_db_url(cls, url: str, use_urlparse: bool = False, **kwargs) -> str:\n \"\"\"\n Mask password in database url.\n Uses sqlalchemy engine parsing if sqlalchemy is installed, otherwise defaults to using urlparse from the stdlib which does not handle kwargs.\n Args:\n url: Database url e.g. \"postgresql+psycopg2://username:password@host:65432/database\"\n use_urlparse: Skip trying to parse url with sqlalchemy and use urlparse\n **kwargs: passed to create_engine()\n\n Returns:\n url with password masked e.g. \"postgresql+psycopg2://username:***@host:65432/database\"\n \"\"\"\n if sa is not None and use_urlparse is False:\n try:\n engine = sa.create_engine(url, **kwargs)\n return engine.url.__repr__()\n # Account for the edge case where we have SQLAlchemy in our env but haven't installed the appropriate dialect to match the input URL\n except Exception as e:\n logger.warning(\n f\"Something went wrong when trying to use SQLAlchemy to obfuscate URL: {e}\"\n )\n else:\n warnings.warn(\n \"SQLAlchemy is not installed, using urlparse to mask database url password which ignores **kwargs.\"\n )\n return cls._mask_db_url_no_sa(url=url)\n\n @classmethod\n def _mask_db_url_no_sa(cls, url: str) -> str:\n # oracle+cx_oracle does not parse well using urlparse, parse as oracle then swap back\n replace_prefix = None\n if url.startswith(\"oracle+cx_oracle\"):\n replace_prefix = {\"original\": \"oracle+cx_oracle\", \"temporary\": \"oracle\"}\n url = url.replace(replace_prefix[\"original\"], replace_prefix[\"temporary\"])\n\n parsed_url = urlparse(url)\n\n # Do not parse sqlite\n if parsed_url.scheme == \"sqlite\":\n return url\n\n colon = \":\" if parsed_url.port is not None else \"\"\n masked_url = (\n f\"{parsed_url.scheme}://{parsed_url.username}:{cls.MASKED_PASSWORD_STRING}\"\n f\"@{parsed_url.hostname}{colon}{parsed_url.port or ''}{parsed_url.path or ''}\"\n )\n\n if replace_prefix is not None:\n masked_url = masked_url.replace(\n replace_prefix[\"temporary\"], replace_prefix[\"original\"]\n )\n\n return masked_url\n\n @classmethod\n def sanitize_config(cls, config: dict) -> dict:\n \"\"\"\n Mask sensitive fields in a Dict.\n \"\"\"\n\n # be defensive, since it would be logical to expect this method works with DataContextConfig\n if not isinstance(config, dict):\n raise TypeError(\n \"PasswordMasker.sanitize_config expects param 
`config` \"\n + f\"to be of type Dict, not of type {type(config)}\"\n )\n\n config_copy = safe_deep_copy(config) # be immutable\n\n def recursive_cleaner_method(config: Any) -> None:\n if isinstance(config, dict):\n for key, val in config.items():\n if not isinstance(val, str):\n recursive_cleaner_method(val)\n elif key in cls.URL_KEYS:\n config[key] = cls.mask_db_url(val)\n elif key in cls.PASSWORD_KEYS:\n config[key] = cls.MASKED_PASSWORD_STRING\n else:\n pass # this string is not sensitive\n elif isinstance(config, list):\n for val in config:\n recursive_cleaner_method(val)\n\n recursive_cleaner_method(config_copy) # Perform anonymization in place\n\n return config_copy\n", "path": "great_expectations/data_context/util.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport copy\nimport inspect\nimport logging\nimport pathlib\nimport re\nimport warnings\nfrom typing import Any, Optional\nfrom urllib.parse import urlparse\n\nimport pyparsing as pp\n\nfrom great_expectations.alias_types import PathStr # noqa: TCH001\nfrom great_expectations.exceptions import StoreConfigurationError\nfrom great_expectations.types import safe_deep_copy\nfrom great_expectations.util import load_class, verify_dynamic_loading_support\n\ntry:\n import sqlalchemy as sa\nexcept ImportError:\n sa = None\n\nlogger = logging.getLogger(__name__)\n\n\n# TODO: Rename config to constructor_kwargs and config_defaults -> constructor_kwarg_default\n# TODO: Improve error messages in this method. Since so much of our workflow is config-driven, this will be a *super* important part of DX.\ndef instantiate_class_from_config(config, runtime_environment, config_defaults=None):\n \"\"\"Build a GX class from configuration dictionaries.\"\"\"\n\n if config_defaults is None:\n config_defaults = {}\n\n config = copy.deepcopy(config)\n\n module_name = config.pop(\"module_name\", None)\n if module_name is None:\n try:\n module_name = config_defaults.pop(\"module_name\")\n except KeyError:\n raise KeyError(\n \"Neither config : {} nor config_defaults : {} contains a module_name key.\".format(\n config,\n config_defaults,\n )\n )\n else:\n # Pop the value without using it, to avoid sending an unwanted value to the config_class\n config_defaults.pop(\"module_name\", None)\n\n logger.debug(f\"(instantiate_class_from_config) module_name -> {module_name}\")\n verify_dynamic_loading_support(module_name=module_name)\n\n class_name = config.pop(\"class_name\", None)\n if class_name is None:\n logger.warning(\n \"Instantiating class from config without an explicit class_name is dangerous. 
Consider adding \"\n f\"an explicit class_name for {config.get('name')}\"\n )\n try:\n class_name = config_defaults.pop(\"class_name\")\n except KeyError:\n raise KeyError(\n \"Neither config : {} nor config_defaults : {} contains a class_name key.\".format(\n config,\n config_defaults,\n )\n )\n else:\n # Pop the value without using it, to avoid sending an unwanted value to the config_class\n config_defaults.pop(\"class_name\", None)\n\n class_ = load_class(class_name=class_name, module_name=module_name)\n\n config_with_defaults = copy.deepcopy(config_defaults)\n config_with_defaults.update(config)\n if runtime_environment is not None:\n # If there are additional kwargs available in the runtime_environment requested by a\n # class to be instantiated, provide them\n argspec = inspect.getfullargspec(class_.__init__)[0][1:]\n\n missing_args = set(argspec) - set(config_with_defaults.keys())\n config_with_defaults.update(\n {\n missing_arg: runtime_environment[missing_arg]\n for missing_arg in missing_args\n if missing_arg in runtime_environment\n }\n )\n # Add the entire runtime_environment as well if it's requested\n if \"runtime_environment\" in missing_args:\n config_with_defaults.update({\"runtime_environment\": runtime_environment})\n\n try:\n class_instance = class_(**config_with_defaults)\n except TypeError as e:\n raise TypeError(\n \"Couldn't instantiate class: {} with config: \\n\\t{}\\n \\n\".format(\n class_name, format_dict_for_error_message(config_with_defaults)\n )\n + str(e)\n )\n\n return class_instance\n\n\ndef format_dict_for_error_message(dict_):\n # TODO : Tidy this up a bit. Indentation isn't fully consistent.\n\n return \"\\n\\t\".join(\"\\t\\t\".join((str(key), str(dict_[key]))) for key in dict_)\n\n\ndef file_relative_path(\n source_path: PathStr,\n relative_path: PathStr,\n strict: bool = True,\n) -> str:\n \"\"\"\n This function is useful when one needs to load a file that is\n relative to the position of the current file. (Such as when\n you encode a configuration file path in source file and want\n in runnable in any current working directory)\n\n It is meant to be used like the following:\n file_relative_path(__file__, 'path/relative/to/file')\n\n This has been modified from Dagster's utils:\n H/T https://github.com/dagster-io/dagster/blob/8a250e9619a49e8bff8e9aa7435df89c2d2ea039/python_modules/dagster/dagster/utils/__init__.py#L34\n \"\"\"\n dir_path = pathlib.Path(source_path).parent\n abs_path = dir_path.joinpath(relative_path).resolve(strict=strict)\n return str(abs_path)\n\n\ndef parse_substitution_variable(substitution_variable: str) -> Optional[str]:\n \"\"\"\n Parse and check whether the string contains a substitution variable of the case insensitive form ${SOME_VAR} or $SOME_VAR\n Args:\n substitution_variable: string to be parsed\n\n Returns:\n string of variable name e.g. SOME_VAR or None if not parsable. If there are multiple substitution variables this currently returns the first e.g. 
$SOME_$TRING -> $SOME_\n \"\"\"\n substitution_variable_name = pp.Word(pp.alphanums + \"_\").setResultsName(\n \"substitution_variable_name\"\n )\n curly_brace_parser = \"${\" + substitution_variable_name + \"}\"\n non_curly_brace_parser = \"$\" + substitution_variable_name\n both_parser = curly_brace_parser | non_curly_brace_parser\n try:\n parsed_substitution_variable = both_parser.parseString(substitution_variable)\n return parsed_substitution_variable.substitution_variable_name\n except pp.ParseException:\n return None\n\n\nclass PasswordMasker:\n \"\"\"\n Used to mask passwords in Datasources. Does not mask sqlite urls.\n\n Example usage\n masked_db_url = PasswordMasker.mask_db_url(url)\n where url = \"postgresql+psycopg2://username:password@host:65432/database\"\n and masked_url = \"postgresql+psycopg2://username:***@host:65432/database\"\n\n \"\"\"\n\n MASKED_PASSWORD_STRING = \"***\"\n\n # values with the following keys will be processed with cls.mask_db_url:\n URL_KEYS = {\"connection_string\", \"url\"}\n\n # values with these keys will be directly replaced with cls.MASKED_PASSWORD_STRING:\n PASSWORD_KEYS = {\"access_token\", \"password\"}\n\n @classmethod\n def mask_db_url(cls, url: str, use_urlparse: bool = False, **kwargs) -> str:\n \"\"\"\n Mask password in database url.\n Uses sqlalchemy engine parsing if sqlalchemy is installed, otherwise defaults to using urlparse from the stdlib which does not handle kwargs.\n Args:\n url: Database url e.g. \"postgresql+psycopg2://username:password@host:65432/database\"\n use_urlparse: Skip trying to parse url with sqlalchemy and use urlparse\n **kwargs: passed to create_engine()\n\n Returns:\n url with password masked e.g. \"postgresql+psycopg2://username:***@host:65432/database\"\n \"\"\"\n if url.startswith(\"DefaultEndpointsProtocol\"):\n return cls._obfuscate_azure_blobstore_connection_string(url)\n elif sa is not None and use_urlparse is False:\n try:\n engine = sa.create_engine(url, **kwargs)\n return engine.url.__repr__()\n # Account for the edge case where we have SQLAlchemy in our env but haven't installed the appropriate dialect to match the input URL\n except Exception as e:\n logger.warning(\n f\"Something went wrong when trying to use SQLAlchemy to obfuscate URL: {e}\"\n )\n else:\n warnings.warn(\n \"SQLAlchemy is not installed, using urlparse to mask database url password which ignores **kwargs.\"\n )\n return cls._mask_db_url_no_sa(url=url)\n\n @classmethod\n def _obfuscate_azure_blobstore_connection_string(cls, url: str) -> str:\n # Parse Azure Connection Strings\n azure_conn_str_re = re.compile(\n \"(DefaultEndpointsProtocol=(http|https));(AccountName=([a-zA-Z0-9]+));(AccountKey=)(.+);(EndpointSuffix=([a-zA-Z\\\\.]+))\"\n )\n try:\n matched: re.Match[str] | None = azure_conn_str_re.match(url)\n if not matched:\n raise StoreConfigurationError(\n f\"The URL for the Azure connection-string, was not configured properly. Please check and try again: {url} \"\n )\n res = f\"DefaultEndpointsProtocol={matched.group(2)};AccountName={matched.group(4)};AccountKey=***;EndpointSuffix={matched.group(8)}\"\n return res\n except Exception as e:\n raise StoreConfigurationError(\n f\"Something went wrong when trying to obfuscate URL for Azure connection-string. 
Please check your configuration: {e}\"\n )\n\n @classmethod\n def _mask_db_url_no_sa(cls, url: str) -> str:\n # oracle+cx_oracle does not parse well using urlparse, parse as oracle then swap back\n replace_prefix = None\n if url.startswith(\"oracle+cx_oracle\"):\n replace_prefix = {\"original\": \"oracle+cx_oracle\", \"temporary\": \"oracle\"}\n url = url.replace(replace_prefix[\"original\"], replace_prefix[\"temporary\"])\n\n parsed_url = urlparse(url)\n\n # Do not parse sqlite\n if parsed_url.scheme == \"sqlite\":\n return url\n\n colon = \":\" if parsed_url.port is not None else \"\"\n masked_url = (\n f\"{parsed_url.scheme}://{parsed_url.username}:{cls.MASKED_PASSWORD_STRING}\"\n f\"@{parsed_url.hostname}{colon}{parsed_url.port or ''}{parsed_url.path or ''}\"\n )\n\n if replace_prefix is not None:\n masked_url = masked_url.replace(\n replace_prefix[\"temporary\"], replace_prefix[\"original\"]\n )\n\n return masked_url\n\n @classmethod\n def sanitize_config(cls, config: dict) -> dict:\n \"\"\"\n Mask sensitive fields in a Dict.\n \"\"\"\n\n # be defensive, since it would be logical to expect this method works with DataContextConfig\n if not isinstance(config, dict):\n raise TypeError(\n \"PasswordMasker.sanitize_config expects param `config` \"\n + f\"to be of type Dict, not of type {type(config)}\"\n )\n\n config_copy = safe_deep_copy(config) # be immutable\n\n def recursive_cleaner_method(config: Any) -> None:\n if isinstance(config, dict):\n for key, val in config.items():\n if not isinstance(val, str):\n recursive_cleaner_method(val)\n elif key in cls.URL_KEYS:\n config[key] = cls.mask_db_url(val)\n elif key in cls.PASSWORD_KEYS:\n config[key] = cls.MASKED_PASSWORD_STRING\n else:\n pass # this string is not sensitive\n elif isinstance(config, list):\n for val in config:\n recursive_cleaner_method(val)\n\n recursive_cleaner_method(config_copy) # Perform anonymization in place\n\n return config_copy\n", "path": "great_expectations/data_context/util.py"}]}
| 3,251 | 604 |
gh_patches_debug_6
|
rasdani/github-patches
|
git_diff
|
cookiecutter__cookiecutter-1273
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PEP257 docstrings for file "./docs/__init__.py"
Cover `./docs/__init__.py` file with docstrings and follow [PEP257](https://www.python.org/dev/peps/pep-0257/). We use [pydocstyle](https://pypi.org/project/pydocstyle/) for validation.
Current validation log:
```
./docs/__init__.py:1 at module level:
D104: Missing docstring in public package
```
Subtask for #742
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/__init__.py`
Content:
```
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/__init__.py b/docs/__init__.py
--- a/docs/__init__.py
+++ b/docs/__init__.py
@@ -0,0 +1,3 @@
+# -*- coding: utf-8 -*-
+
+"""Main package for docs."""
|
{"golden_diff": "diff --git a/docs/__init__.py b/docs/__init__.py\n--- a/docs/__init__.py\n+++ b/docs/__init__.py\n@@ -0,0 +1,3 @@\n+# -*- coding: utf-8 -*-\n+\n+\"\"\"Main package for docs.\"\"\"\n", "issue": "PEP257 docstrings for file \"./docs/__init__.py\"\nCover `./docs/__init__.py` file with docstrings and follow [PEP257](https://www.python.org/dev/peps/pep-0257/). We use [pydocstyle](https://pypi.org/project/pydocstyle/) for validation.\r\n\r\nCurrent validation log:\r\n\r\n```\r\n./docs/__init__.py:1 at module level:\r\n D104: Missing docstring in public package\r\n```\r\n\r\nSubtask for #742 \n", "before_files": [{"content": "", "path": "docs/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Main package for docs.\"\"\"\n", "path": "docs/__init__.py"}]}
| 376 | 58 |